Surprises Moving from C++ to C#

I am a C++ programmer moving into C#. I have been working with the language for a month now and understand many concepts.
What are some surprises I may get while moving from C++ to C#? I was warned about destructors not being executed as I intended. Recently I tried to do something with generics that would use T as the base class. That didn't work. I also had another problem, but I'll chalk that up to inexperience in C#. I was also surprised that my app was eating RAM; then I figured out I needed to call .Dispose() in one function. (I thought it would clean up like a smart pointer.)
What else may surprise me?
Please, no language bashing. I doubt anyone will, but just in case...

Fortunately, Microsoft has some of that info here: C# for C++ Developers.
The struct vs. class difference is another biggie for those coming from C++.
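A minimal sketch of that difference (the type names are mine): a struct is a value type and is copied on assignment, while a class is a reference type and assignment only copies the reference.
struct PointStruct { public int X; }
class PointClass { public int X; }

PointStruct s1 = new PointStruct(); s1.X = 1;
PointStruct s2 = s1;    // full copy of the value
s2.X = 2;               // s1.X is still 1

PointClass c1 = new PointClass(); c1.X = 1;
PointClass c2 = c1;     // both variables refer to the same object
c2.X = 2;               // c1.X is now 2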

I think you've covered the main one. You should read up on garbage collection, understand why there are no destructors as such, and figure out the IDisposable pattern (which kind of replaces destructors). I'd say that's the big one.
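For illustration, a minimal sketch of the IDisposable pattern with a "using" block, assuming a type that wraps a file handle (the type name is invented):
class LogFile : IDisposable
{
    private readonly System.IO.StreamWriter writer;

    public LogFile(string path) { writer = new System.IO.StreamWriter(path); }
    public void Write(string line) { writer.WriteLine(line); }

    // Deterministic cleanup, unlike a C++ destructor or a CLR finalizer.
    public void Dispose() { writer.Dispose(); }
}

using (LogFile log = new LogFile("app.log"))
{
    log.Write("hello");
} // Dispose is called here, even if an exception was thrown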
The only other thing I would say is to warn you that C# and the .NET Base Class Library are pretty big; to get the most out of them there is a lot to learn... Once you have covered the basics of garbage collection and the type system, you'll want to look at LINQ, and you should take the time to explore the relevant libraries / frameworks for your area (e.g. WPF, WCF, ASP.NET etc.). But it's all good. I moved from C++ to C# and would never go back; I find it way more productive (I'm not bashing C++, I do still dabble :-) )

Well, the languages are completely different, as I'm sure you've realized if you've worked with C# for any time. You don't have powerful macros or templates in C# as you do in C++ (I realize there are generics in C#). As far as memory goes, remember you aren't in a tightly controlled environment anymore. Expect to see a lot of memory usage in Task Manager and similar tools; this is normal. There are better, more fine-grained performance counters to see true memory usage. Also, you probably don't need to call Dispose as much as you might think (by the way, check out "using" blocks if you haven't already).
Another clear one is default construction: unlike in C++, this declaration does not create a new Foo object in C#:
Foo myFoo; // just a reference variable; no Foo is constructed, and myFoo is unassigned
You can't have anything like a "void pointer", unless you just think of that as being like having a reference of type object. Also, you need to think of properties as syntactic sugar for methods, not as the public members they resemble in C++ syntax.
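For example (an invented type), what looks like field access actually compiles to hidden get_/set_ method calls, which is why a setter can validate its input:
class Temperature
{
    private double celsius;

    public double Celsius
    {
        get { return celsius; }   // compiles to a get_Celsius() method
        set                       // compiles to a set_Celsius(value) method
        {
            if (value < -273.15)
                throw new ArgumentOutOfRangeException("value");
            celsius = value;
        }
    }
}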
Make sure you understand "out" and "ref" parameters.
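A quick sketch of the distinction (the method names are mine): a "ref" argument must be initialized before the call, while an "out" argument must be assigned by the method before it returns.
static void Increment(ref int x) { x = x + 1; }

static bool TryHalve(int x, out int half)
{
    half = x / 2;        // every out parameter must be assigned
    return x % 2 == 0;
}

int n = 5;
Increment(ref n);        // n is now 6
int h;
if (TryHalve(n, out h))  // h is 3; returns true because 6 is even
    Console.WriteLine(h);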
Obviously this is not a large list, just a few "pointers" (no pun intended).

This is a rather big topic. A few thoughts:
C# is garbage collected. That doesn't mean you can stop paying attention to resource allocation, but in general you don't have to worry nearly as much about the most common resource: memory.
In C#, everything is an object. There are no "primitive" datatypes; even an int is an object.
C# has generics, not templates. Templates are far richer and more complex than C#'s similarly syntaxed generics, but generics still provide nearly all of the practical utility of templates, without many of the headaches.
C# has interfaces and single inheritance. Where you might look to multiple inheritance in C++, instead look to using interfaces or a different design pattern (e.g. strategy).
C# has delegates instead of function pointers. A delegate is basically just a typed function pointer. The use of delegates and delegate-relatives (lambda expressions, events, predicates, etc.) is very powerful and worth putting significant effort into studying.
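A small sketch of the idea: a delegate type declares a signature, and any matching method, anonymous method, or lambda can be assigned to it and invoked through it.
delegate int BinaryOp(int a, int b);

BinaryOp add = (a, b) => a + b;                           // lambda expression
BinaryOp mul = delegate(int a, int b) { return a * b; };  // anonymous method
Console.WriteLine(add(2, 3)); // 5
Console.WriteLine(mul(2, 3)); // 6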
C# supports yield return. This is very fundamental to the C# way of doing things. The most common form of iterating over some set is to use foreach. It's worth understanding how IEnumerable and iterators work.
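A minimal iterator sketch: the compiler turns this method into a state machine that produces values lazily, one per iteration of the caller's foreach.
using System.Collections.Generic;

static IEnumerable<int> Evens(int max)
{
    for (int i = 0; i <= max; i += 2)
        yield return i; // execution pauses here until the next value is requested
}

foreach (int n in Evens(10))
    Console.WriteLine(n); // 0, 2, 4, 6, 8, 10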

I made pretty much the same change a few months ago (before that I made a change to Java, but I didn't really spend much time programming Java).
Here are some of the biggest traps I've come across:
Attribute vs. Variable vs. Setter
One of the biggest traps I kept stepping into was knowing whether you have to change an attribute, set a variable, or use a setter to change some aspect of a class.
IList vs. List vs. other collections
Know the difference between IList, List and all the other collections (IMO you can't really do much with an IList).
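For instance (a tiny illustrative snippet), several convenience members live on List<T> itself rather than on the IList<T> interface:
IList<int> ilist = new List<int>();
ilist.Add(1);                        // fine: Add is part of the interface
// ilist.AddRange(new[] { 2, 3 });   // compile error: AddRange is List<T>-only
// ilist.Sort();                     // likewise Sort, Find, ForEach...

List<int> list = new List<int>();
list.AddRange(new[] { 2, 3 });       // fine on the concrete type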
Generics do have their own pitfalls
And if you plan to use a lot of generics, maybe reading this will help you avoid some of my errors:
Check if a class is derived from a generic class
But in general I'd say that the change went pretty painlessly.

Differences in the object model. For example, value and reference types are separate by definition, not by how they are instantiated. This has some surprises, e.g.
myWinForm.Size.Width = 100;
will not change the width; you need to create a new Size instance and assign it.
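The working pattern looks like this (myWinForm standing in for any Form; Size is the System.Drawing.Size struct):
// myWinForm.Size returns a copy of the struct, so assign a whole new value instead:
myWinForm.Size = new Size(100, myWinForm.Size.Height);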

Some things that I have not seen mentioned, which are not available in C++ and may be a bit surprising, are attributes and reflection.
Attributes as such do not give you full-fledged AOP. However, they do allow you to solve a bunch of problems in a way that is very different from how you would solve them in C++.
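A minimal sketch of the mechanism (the attribute and class here are invented for illustration): metadata is attached declaratively and read back at runtime via reflection.
[AttributeUsage(AttributeTargets.Class)]
class AuthorAttribute : Attribute
{
    public string Name;
    public AuthorAttribute(string name) { Name = name; }
}

[Author("Jane")]
class Widget { }

object[] attrs = typeof(Widget).GetCustomAttributes(typeof(AuthorAttribute), false);
Console.WriteLine(((AuthorAttribute)attrs[0]).Name); // "Jane"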


ExpandoObject (dynamics) my greatest friend or my new greatest foe?

Yes, I know that it shouldn't be abused and that C# is primarily used as a static language. But seriously, folks, if you could just dirty up some code, Python-style, or create some dynamic doohickey, would you?
My mind is working overtime on this, having spent a while loving the dynamics of Python. Is C# going over to the dark side through the back door?
Is the argument for static typing a dead one with this obvious addition?
Is the argument for less Unit testing a bit silly when we are all grown ups?
Or has the addition of dynamics ruined a strongly static typed and well designed language?
I lost the desire to use dynamic types when I started using type inference.
C# has expanded to including some aspects of dynamic typing, yes, but that doesn't mean that static typing is dead. It simply means that C# has added some tools that allow developers of all persuasions to solve all kinds of problems in many different ways.
I have a problem with the concept of one type system being "better" than another. That is like saying a hammer is better than a screwdriver. Without knowing the context of the task at hand, it is impossible to make that determination! Dynamic typing is better than static typing for certain problems and situations, and vice-versa. The superiority of the approach is entirely conditional on the problem at hand.
So to stick with my tool analogy, it is best to have a toolbox that contains hammers and screwdrivers and know how to use each efficiently. This will make you a better developer as you will be best equipped to solve any problem you face. C#'s new dynamic typing additions are simply an effort to help you by providing these tools in a single, convenient package.
Is the argument for static typing a dead one with this obvious addition? Is the argument for less Unit testing a bit silly when we are all grown ups? Or has the addition of dynamics ruined a strongly static typed and well designed language?
For a while, languages have been moving more and more into the domain of "statically typed when possible, dynamically typed when necessary". And with structural typing (statically checked duck typing) starting to work its way into mainstream languages, we might see languages evolve to the point where they're basically statically checked Python.
For what it's worth, dynamically typed code is just as mindful of types as statically typed code. Idiomatic C# is still statically typed, and will remain that way for a long time to come.
As I understand it, the dynamic keyword was introduced more to facilitate interop and method invocation on unknown types at runtime rather than the kind of dynamic typing you find in languages like python.
Essentially, where you would previously have to call InvokeMember to call a method on an unknown type, you would instead create a dynamic object and just call the method, which would be resolved at runtime. The code becomes a great deal easier to read. Why would you want to call a method (or access a property) on an unknown type? Well, WPF does it all the time when you use databinding.
You also use it when you want to use an interop DLL with weak binding, for example if you wanted to write code that used Office interop but needed to support more than one version of Office. I've had to do this before, and the code for it is horrendous. The dynamic keyword makes such code far easier to read and understand.
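A sketch of what that looks like (the ProgID is illustrative and assumes Office is installed); every member access on the dynamic variable is resolved at runtime, replacing explicit InvokeMember calls:
Type excelType = Type.GetTypeFromProgID("Excel.Application");
dynamic excel = Activator.CreateInstance(excelType);
excel.Visible = true;    // late-bound: no interop assembly, no casts
excel.Workbooks.Add();   // resolved against whichever Office version is installed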
See this article for more info:
http://www.hanselman.com/blog/C4AndTheDynamicKeywordWhirlwindTourAroundNET4AndVisualStudio2010Beta1.aspx
As far as I remember, type errors are about 5-10% of all errors found, so with static typing we get fewer errors for free. Unit and regression test suites are also somewhat smaller with static typing.
Dynamic typing is nice for OO languages. In the case of an FP language (especially one with an HM type system), dynamic vs. static typing doesn't much impact your program-design decisions at large.
But there is a moment when you want good code performance, and that moment will show you the dark side of dynamic types.
Yes.
No.
No.
Yes.
No.
Strong typing is still the best way to go for large projects. Not only does it make code completion (IntelliSense) much better, but it can tell you obvious problems at compile time. For example, say socket.Write takes a string. In C# you won't be able to run your program if you try to pass it a number, while in Python you would only find out about your bug when your program crashes.
On the other hand, it's easy to imagine how useful it would be to have a JSON parser that acts like an expando object, automatically growing the properties specified in the JSON.
To elaborate my point a bit, I think C# will mostly stay safe from the evils of dynamic typing, while still reaping its benefits. This is because the system still encourages types on everything, as opposed to other dynamic languages where types are entirely optional (or even just advisory). In C# you will be able to just "git 'er done" with duck typing, expando properties, and other dynamic typing goodness, but it will be well-marked with the dynamic keyword which helps you to keep it self-contained.
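For reference, a tiny ExpandoObject sketch: members spring into existence on first assignment, Python-style, and the dynamic keyword marks the call sites.
using System.Dynamic;

dynamic person = new ExpandoObject();
person.Name = "Ada";   // property created on the fly
person.Age = 36;
Console.WriteLine(person.Name + " is " + person.Age);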

Why does Java not use the out parameter in its language syntax while C# does?

While I am not a huge fan of using the out parameter in C#, I would like to know why Java chose not to include it in its language syntax. Is there any special reason, or is it maybe because a person can simply pass an object as a parameter type?
Probably because the designers didn't feel the need to allow for multiple ways of returning objects.
The same question can be asked about delegates, generics, etc.
But the fact of the matter is that the C# designers learned from Java's mistakes/inconveniences, which is why many I've spoken to feel C# is the nicer language to work with.
Java was designed to be a very simple language, with a very simple syntax - a kind of "Spartan OO programming language" in contrast to C++ with its abundance of features that hardly anyone knows and understands completely (including compiler implementors).
Basically, features were omitted unless they were perceived to be absolutely necessary.
On one hand, the goal was achieved - there are very few areas in which Java's behavior is hard to understand or predict.
On the other hand, the missing features have to be worked around, which leads to Java's oft-maligned verbosity. The designers of C# tried to build on this experience.
However, I personally wouldn't count out parameters as a great loss.
The number 1 use case for out parameters is smooth interop with native code.
As Java does not prioritise smooth interop with native code, it doesn't especially need this feature.
There's no logical need for output or ref parameters. As they are available, they have become part of a common idiom in the CLR languages, the "TryXXX" idiom for a non-throwing version of a method. But that could have been done by returning a compound value type (a struct in C#) instead:
struct TryResult<T>
{
    public T Result;
    public bool Succeeded;
}

TryResult<int> Parse(string intString)
{
    // Sketch of one way the elided body could look:
    try { return new TryResult<int> { Result = int.Parse(intString), Succeeded = true }; }
    catch (FormatException) { return new TryResult<int>(); } // Succeeded defaults to false
}
Having an out parameter basically allows for two return values from a method.
Java left this feature out to simplify its syntax and remove the potential for misuse and programming errors.
In order to return more than one value you can have your method return an object or a collection of objects. So you don't necessarily require out parameters to return more than one item from the method.
You make it sound as though Java and C# were designed around the same time. Remember, Java is nearly 15 years old. The lack of out parameters (if you can call it a lack) is the least of its omissions.
C# has a much lower barrier to entry for random features. "Out" parameters are not that useful (and get abused), so they are not supported in Java. There were lots of features submitted under Project Coin for JDK7, but IIRC out parameters were not among them.
I feel like I must be missing something here, so I'm fully prepared to be downvoted into oblivion...
Doesn't Java support "out" parameters equally as much as C++? My C#-fu is not very strong, but I thought the out keyword was just a way of telling the compiler to allow objects to be changed inside the method. If I'm right, then both Java and C++ support this too, just without the explicit (and, IMHO, nice) distinction of using a keyword. In C++, you can pass a pointer or a reference to the function and the function can modify the object in whatever way it wishes. Perhaps it would be more like C#'s implementation to pass in a pointer to a pointer and have the function allocate an object there, but whatever. In Java, you can achieve the former exactly the same as using references in C++, since every object is a reference in Java.
So, where's the error of my ways? There must be a reason no one has mentioned this yet. I look forward to learning what it is.

How does Objective-C compare to C#? [closed]

I've recently purchased a Mac and use it primarily for C# development under VMWare Fusion. With all the nice Mac applications around I've started thinking about Xcode lurking just an install click away, and learning Objective-C.
The syntax between the two languages looks very different, presumably because Objective-C has its origins in C and C# has its origins in Java/C++. But different syntaxes can be learnt so that should be OK.
My main concern is working with the language and if it will help to produce well-structured, readable and elegant code. I really enjoy features such as LINQ and var in C# and wonder if there are equivalents or better/different features in Objective-C.
What language features will I miss developing with Objective-C? What features will I gain?
Edit: The framework comparisons are useful and interesting, but a language comparison is what this question is really asking for (partly my fault for originally tagging with .net). Presumably both Cocoa and .NET are very rich frameworks in their own right and both have their purpose, one targeting Mac OS X and the other Windows.
Thank you for the well thought out and reasonably balanced viewpoints so far!
No language is perfect for all tasks, and Objective-C is no exception, but there are some very specific niceties. Like using LINQ and var (for which I'm not aware of a direct replacement), some of these are strictly language-related, and others are framework-related.
(NOTE: Just as C# is tightly coupled with .NET, Objective-C is tightly coupled with Cocoa. Hence, some of my points may seem unrelated to Objective-C, but Objective-C without Cocoa is akin to C# without .NET / WPF / LINQ, running under Mono, etc. It's just not the way things are usually done.)
I won't pretend to fully elaborate the differences, pros, and cons, but here are some that jump to mind.
One of the best parts of Objective-C is the dynamic nature — rather than calling methods, you send messages, which the runtime routes dynamically. Combined (judiciously) with dynamic typing, this can make a lot of powerful patterns simpler or even trivial to implement.
As a strict superset of C, Objective-C trusts that you know what you're doing. Unlike the managed and/or typesafe approach of languages like C# and Java, Objective-C lets you do what you want and experience the consequences. Obviously this can be dangerous at times, but the fact that the language doesn't actively prevent you from doing most things is quite powerful. (EDIT: I should clarify that C# also has "unsafe" features and functionality, but the default behavior is managed code, which you have to explicitly opt out of. By comparison, Java only allows for typesafe code, and never exposes raw pointers in the way that C and others do.)
Categories (adding/modifying methods on a class without subclassing or having access to source) is an awesome double-edged sword. It can vastly simplify inheritance hierarchies and eliminate code, but if you do something strange, the results can sometimes be baffling.
Cocoa makes creating GUI apps much simpler in many ways, but you do have to wrap your head around the paradigm. MVC design is pervasive in Cocoa, and patterns such as delegates, notifications, and multi-threaded GUI apps are well-suited to Objective-C.
Cocoa bindings and key-value observing can eliminate tons of glue code, and the Cocoa frameworks leverage this extensively. Objective-C's dynamic dispatch works hand-in-hand with this, so the type of the object doesn't matter as long as it's key-value compliant.
You will likely miss generics and namespaces, and they have their benefits, but in the Objective-C mindset and paradigm, they would be niceties rather than necessities. (Generics are all about type safety and avoiding casting, but dynamic typing in Objective-C makes this essentially a non-issue. Namespaces would be nice if done well, but it's simple enough to avoid conflicts that the cost arguably outweighs the benefits, especially for legacy code.)
For concurrency, Blocks (a new language feature in Snow Leopard, and implemented in scores of Cocoa APIs) are extremely useful. A few lines (frequently coupled with Grand Central Dispatch, which is part of libsystem on 10.6) can eliminate significant boilerplate of callback functions, context, etc. (Blocks can also be used in C and C++, and could certainly be added to C#, which would be awesome.) NSOperationQueue is also a very convenient way to add concurrency to your own code, by dispatching either custom NSOperation subclasses or anonymous blocks which GCD automatically executes on one or more different threads for you.
I've been programming in C, C++ and C# for over 20 years now, having first started in 1990. I have just decided to have a look at iPhone development, Xcode and Objective-C. Oh my goodness... all the complaints about Microsoft I take back; I realise now how bad things could have been. Objective-C is overly complex compared to what C# does. I have been spoilt with C#, and now I appreciate all the hard work Microsoft has put in. Just reading Objective-C, with its method invokes, is difficult. C# is elegant in this. That is just my opinion; I hoped that the Apple development language was as good as the Apple products, but dear me, they have a lot to learn from Microsoft. There is no question that with C#/.NET I can get an application up and running many times faster than with Xcode and Objective-C. Apple should certainly take a leaf out of Microsoft's book here, and then we'd have the perfect environment. :-)
No technical review here, but I just find Objective-C much less readable.
Given the example Cinder6 gave you:
C#
List<string> strings = new List<string>();
strings.Add("xyzzy"); // takes only strings
strings.Add(15); // compiler error
string x = strings[0]; // guaranteed to be a string
strings.RemoveAt(0); // or non-existent (yielding an exception)
Objective-C
NSMutableArray *strings = [NSMutableArray array];
[strings addObject:@"xyzzy"];         // no compile-time type checking
[strings addObject:@15];              // also compiles - it's an NSNumber, not a string
NSString *x = strings[0];             // no guarantee that x is really a string
[strings removeObjectAtIndex:0];
It looks awful. I even tried reading 2 books on it; they lost me early on, and normally I don't get that with programming books / languages. I'm glad we have Mono for Mac OS, because if I'd had to rely on Apple to give me a good development environment...
Manual memory management is something beginners to Objective-C seem to have the most problems with, mostly because they think it is more complex than it is.
Objective-C, and Cocoa by extension, relies on conventions over enforcement; know and follow a very small set of rules and you get a lot for free from the dynamic runtime in return.
The rule, not 100% true but good enough for everyday use, is:
Every call to alloc should be matched with a release at the end of the current scope.
If the return value for your method has been obtained by alloc then it should be returned by return [value autorelease]; instead of being matched by a release.
Use properties, and there is no rule three.
The longer explanation follows.
Memory management is based on ownership; only the owner of an object instance should ever release the object, everybody else should always do nothing. This means that in 95% of all code you treat Objective-C as if it were garbage collected.
So what about the other 5%? You have three methods to look out for; any object instance received from these methods is owned by the current method scope:
alloc
Any method beginning with the word new, such as new or newService.
Any method containing the word copy, such as copy and mutableCopy.
The method has three possible options for what to do with its owned object instances before it exits:
Release it using release if it is no longer needed.
Give ownership to a field (instance variable) or a global variable by simply assigning it.
Relinquish ownership but give someone else a chance to take ownership before the instance goes away by calling autorelease.
So when should you pro-actively take ownership by calling retain? Two cases:
When assigning fields in your initializers.
When manually implementing a setter method.
Sure, if everything you have seen in your life is Objective-C, then its syntax looks like the only one possible. We could call you a "programming virgin".
But since lots of code is written in C, C++, Java, JavaScript, Pascal and other languages, you'll see that Objective-C is different from all of them, but not in a good way. Did they have a reason for this? Let's see other popular languages:
C++ added a lot of extras to C, but it changed the original syntax only as much as needed.
C# added a lot extras compared to C++ but it changed only things that were ugly in C++ (like removing the "::" from the interface).
Java changed a lot of things, but it kept the familiar syntax except in parts where the change was needed.
JavaScript is a completely dynamic language that can do many things ObjectiveC can't. Still, its creators didn't invent a new way of calling methods and passing parameters just to be different from the rest of the world.
Visual Basic can pass parameters out of order, just like Objective-C. You can name the parameters, but you can also pass them the regular way. Whatever you use, it's the normal comma-delimited way that everyone understands. The comma is the usual delimiter, not just in programming languages, but in books, newspapers, and written language in general.
Object Pascal has a different syntax than C, but its syntax is actually EASIER to read for the programmer (maybe not to the computer, but who cares what computer thinks). So maybe they digressed, but at least their result is better.
Python has a different syntax, which is even easier to read (for humans) than Pascal. So when they changed it, making it different, at least they made it better for us programmers.
And then we have Objective-C. It adds some improvements to C, but invents its own interface syntax, method calling, parameter passing and what not. I wonder why they didn't swap + and - so that plus subtracts two numbers. That would have been even cooler.
Steve Jobs screwed up by supporting Objective-C. Of course he can't support C#, which is better but belongs to his worst competitor. So this is a political decision, not a practical one. Technology always suffers when tech decisions are made for political reasons. He should lead the company, which he does well, and leave programming matters to real experts.
I'm sure there would be even more apps for the iPhone if he had decided to write iOS and its support libraries in any language other than Objective-C. To everyone except die-hard fans, virgin programmers and Steve Jobs, Objective-C looks ridiculous, ugly and repulsive.
One thing I love about Objective-C is that the object system is based on messages; it lets you do really nice things you couldn't do in C# (at least not until they support the dynamic keyword!).
Another great thing about writing cocoa apps is Interface Builder, it's a lot nicer than the forms designer in Visual Studio.
The things about obj-c that annoy me (as a C# developer) are the fact that you have to manage your own memory (there's garbage collection, but that doesn't work on the iPhone) and that it can be very verbose because of the selector syntax and all the [ ].
As a programmer just getting started with Objective-C for iPhone, coming from C# 4.0, I'm missing lambda expressions, and in particular, Linq-to-XML. The lambda expressions are C#-specific, while the Linq-to-XML is really more of a .NET vs. Cocoa contrast. In a sample app I was writing, I had some XML in a string. I wanted to parse the elements of that XML into a collection of objects.
To accomplish this in Objective-C/Cocoa, I had to use the NSXMLParser class. This class relies on another object which implements the NSXMLParserDelegate protocol, with methods that are called (read: messages sent) when an element open tag is read, when some data is read (usually inside the element), and when an element end tag is read. You have to keep track of the parsing status and state. And I honestly have no idea what happens if the XML is invalid. It's great for getting down to the details and optimizing performance, but oh man, that's a whole lot of code.
By contrast, here's the code in C#:
using System.Xml.Linq;
XDocument doc = XDocument.Parse(xmlString);
IEnumerable<MyCustomObject> objects = doc.Descendants().Select(
    d => new MyCustomObject { Name = d.Value });
And that's it, you've got a collection of custom objects drawn from XML. If you wanted to filter those elements by value, or only to those that contain a specific attribute, or if you just wanted the first 5, or to skip the first 1 and get the next 3, or just find out if any elements were returned... BAM, all right there in the same line of code.
There are many open-source classes that make this processing a lot easier in Objective-C and do much of the heavy lifting. It's just not built in like this.
*NOTE: I didn't actually compile the code above, it's just meant as an example to illustrate the relative lack of verbosity required by C#.
Probably the most important difference is memory management. With C# you get garbage collection, by virtue of it being a CLR-based language. With Objective-C you need to manage memory yourself.
If you're coming from a C# background (or any modern language for that matter), moving to a language without automatic memory management will be really painful, as you will spend a lot of your coding time on properly managing memory (and debugging as well).
Here's a pretty good article comparing the two languages:
http://www.coderetard.com/2008/03/16/c-vs-objective-c/
Other than the paradigm difference between the two languages, there's not a lot of difference. As much as I hate to say it, you can do the same kinds of things (probably not as easily) with .NET and C# as you can with Objective-C and Cocoa. As of Leopard, Objective-C 2.0 has garbage collection, so you don't have to manage memory yourself unless you want to (code compatibility with older Macs and iPhone apps are two reasons you might want to).
As far as structured, readable code is concerned, much of the burden there lies with the programmer, as with any other language. However, I find that the message passing paradigm lends itself well to readable code provided you name your functions/methods appropriately (again, just like any other language).
I'll be the first to admit that I'm not very familiar with C# or .NET. But the reasons Quinn listed above are quite a few of the reasons that I don't care to become so.
The method calls used in Objective-C make for easily read code; in my opinion it is much more elegant than C#. Also, Objective-C is built on top of C, so all C code should work fine in Objective-C. The big seller for me, though, is that Objective-C is an open standard, so you can find compilers for any system.

Is there any reason C# does not support manual inline methods? And what about optional parameters?

Is there any design reason for that (like the reason they gave up multiple inheritance)?
Or was it just not important enough?
And the same question applies to optional parameters in methods... these were already in the first version of VB.NET, so it surely wasn't laziness that caused Microsoft not to allow optional parameters; it was probably an architectural decision. And it seems they have had a change of heart about that, because C# 4 is going to include them...
What was the decision and why did they give it up?
Edit:
Maybe readers didn't fully understand me. I'm lately working on a calculation program (supporting numbers of any size, to the last digit), in which some methods are used millions of times per second.
Say I have a method called Add(int num), and this method is used quite a lot with 1 as the parameter (Add(1);). I've found out it is faster to implement a special method especially for one. And I don't mean overloading: writing a new method called AddOne and literally copying the Add method into it, except that instead of using num I'm writing 1. This might seem horribly weird to you, but it's actually faster.
(as ugly as it is)
That made me wonder why C# doesn't support manual inlining, which could be amazingly helpful here.
Edit 2:
I asked myself whether or not to add this. I'm well aware of the weirdness (and disadvantages) of choosing a platform such as .NET for such a project, but I think .NET optimizations are more important than you think... especially features such as Any CPU etc.
To answer part of your question, see Eric Gunnerson's blog post: Why doesn't C# have an 'inline' keyword?
A quote from his post:
For C#, inlining happens at the JIT level, and the JIT generally makes a decent decision.
EDIT: I'm not sure of the reason for the delayed optional parameters support; however, saying they "gave up" on it sounds as though they were expected to implement it based on our expectations of what other languages offered. I imagine it wasn't high on their priority list and they had deadlines to get certain features out the door for each version. It probably didn't rise in importance till now, especially since method overloading was an available alternative. Meanwhile we got generics (2.0), and the features that make LINQ possible etc. (3.0). I'm happy with the progression of the language; the aforementioned features are more important to me than getting support for optional parameters early on.
Manual inlining would be almost useless. The JIT compiler inlines methods during native code compilation where appropriate, and I think in almost all cases the JIT compiler is better at guessing when it is appropriate than the programmer.
As for optional parameters, I don't know why they weren't there in previous versions. That said, I don't like them being there in C# 4, because I consider them somewhat harmful: the default values get baked into the consuming assembly, and you have to recompile it if you change the defaults in a DLL and want the consuming assembly to use the new ones.
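To make that concrete, a sketch of the pitfall (the library and method are hypothetical):
// In Library.dll:
public static void Log(string message, int level = 1) { /* ... */ }

// In App.exe, compiled against Library.dll:
Log("hello");   // the compiler emits the equivalent of Log("hello", 1);
                // if Library.dll later changes the default to 2, App.exe
                // keeps passing 1 until it is recompiled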
EDIT:
Some additional information about inlining. Although you cannot force the JIT compiler to inline a method call, you can force it to NOT inline a method call. For this, you use the System.Runtime.CompilerServices.MethodImplAttribute, like so:
internal static class MyClass
{
    [System.Runtime.CompilerServices.MethodImpl(
        System.Runtime.CompilerServices.MethodImplOptions.NoInlining)]
    private static void MyMethod()
    {
        // Powerful, magical code
    }

    // Other code
}
My educated guess: the reason earlier versions of C# didn't have optional parameters is because of bad experiences with them in C++. On the surface, they look straight-forward enough, but there are a few bothersome corner cases. I think one of Herb Sutter's books describes this in more detail; in general, it has to do with overriding virtual methods. Maximilian has mentioned one of the .NET corner cases in his answer.
You can also pretty much get by without them by manually writing multiple overloads; that may not be very nice for the author of the class, but clients will hardly notice the difference between overloads and optional parameters.
So after all these years w/o them, why did C# 4.0 add them? 1) improved parity with VB.NET, and 2) easier interop with COM.
I'm working lately on a calculation program (support numbers of any size, to the last digit), in which some methods are used literally millions of times per second.
Then you chose the wrong language. I assume you actually profiled your code (right?) and know that there is nothing apart from micro-optimisations that can help you. Also, you're using a high-performance native bigint library and not writing your own, right?
If that's true, don't use .NET. If you think you can gain speed on partial specialisation, go to Haskell, C, Fortran or any other language that either does it automatically, or can expose inlining to you to do it by hand.
If Add(1) really matters to you, heap allocations will matter too.
However, you should really look at what the profiler can tell you...
C# has added them in 4.0: http://msdn.microsoft.com/en-us/library/dd264739(VS.100).aspx
As to why they weren't done from the beginning, it's most likely because they felt method overloads gave more flexibility. With overloading you can specify multiple 'defaults' based on the other parameters that you're taking. It's also not that much more syntax.
Even in languages like C++, inlining something doesn't guarantee that it'll happen; it's a hint to the compiler. The compiler can either take the hint, or do its own thing.
C# is another step removed from the generated assembly code (via IL + the JIT), so it becomes even harder to guarantee that something will inline. Furthermore, you have issues like the x86 + x64 implementations of the JIT differing in behaviour.
Java doesn't include an inline keyword either. The better Java JITs can inline even virtual methods, and the use of keywords like private or final makes no difference (it used to, but that is now ancient history).

How costly is .NET reflection?

I constantly hear how bad reflection is to use. While I generally avoid reflection and rarely find situations where it is impossible to solve my problem without it, I was wondering...
For those who have used reflection in applications, have you measured performance hits and, is it really so bad?
In his talk The Performance of Everyday Things, Jeff Richter shows that calling a method by reflection is about 1000 times slower than calling it normally.
Jeff's tip: if you need to call the method multiple times, use reflection once to find it, then assign it to a delegate, and then call the delegate.
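A sketch of that tip, using string.Trim as a stand-in target method:
using System.Reflection;

MethodInfo mi = typeof(string).GetMethod("Trim", Type.EmptyTypes);
// An open instance delegate: the string instance becomes the first argument.
Func<string, string> trim =
    (Func<string, string>)Delegate.CreateDelegate(typeof(Func<string, string>), mi);
Console.WriteLine(trim("  hello  ")); // reflection cost paid once; calls are near normal speed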
It is. But that depends on what you're trying to do.
I use reflection to dynamically load assemblies (plugins) and its performance "penalty" is not a problem, since the operation is something I do during startup of the application.
However, if you're reflecting inside a series of nested loops with reflection calls on each, I'd say you should revisit your code :)
For "a couple of time" operations, reflection is perfectly acceptable and you won't notice any delay or problem with it. It's a very powerful mechanism and it is even used by .NET, so I don't see why you shouldn't give it a try.
Reflection performance will depend on the implementation (repetitive calls should be cached, e.g. entity.GetType().GetProperty("PropName")). Since most of the reflection I see on a day-to-day basis is used to populate entities from data readers or other repository-type structures, I decided to benchmark performance specifically on reflection when it is used to get or set an object's properties.
I devised a test which I think is fair, since it caches all the repeating calls and only times the actual SetValue or GetValue call. All the source code for the performance test is on Bitbucket at: https://bitbucket.org/grenade/accessortest. Scrutiny is welcome and encouraged.
The conclusion I have come to is that it isn't practical, and doesn't provide noticeable performance improvements, to remove reflection from a data access layer that is returning fewer than 100,000 rows at a time, when the reflection implementation is done well.
The resulting graph shows that mechanisms that outperform reflection only do so noticeably after the 100,000-cycle mark. Most DALs only return several hundred or perhaps thousands of rows at a time, and at these levels reflection performs just fine.
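The caching pattern the benchmark assumes looks roughly like this (Person is an invented entity type):
using System.Reflection;

class Person { public string Name { get; set; } }

PropertyInfo prop = typeof(Person).GetProperty("Name");  // reflected once, then cached
Person p = new Person();
prop.SetValue(p, "Ada", null);                           // only this per-row call repeats
Console.WriteLine(prop.GetValue(p, null));               // "Ada"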
If you're not in a loop, don't worry about it.
Not massively. I've never had an issue with it in desktop development unless, as Martin states, you're using it in a silly location. I've heard a lot of people have utterly irrational fears about its performance in desktop development.
In the Compact Framework (which I'm usually in) though, it's pretty much anathema and should be avoided like the plague in most cases. I can still get away with using it infrequently, but I have to be really careful with its application which is way less fun. :(
My most pertinent experience was writing code to compare any two data entities of the same type in a large object model property-wise. Got it working, tried it, ran like a dog, obviously.
I was despondent, then overnight realised that without changing the logic, I could use the same algorithm to auto-generate methods for doing the comparison but statically accessing the properties. It took no time at all to adapt the code for this purpose, and I had the ability to do deep property-wise comparison of entities with static code that could be updated at the click of a button whenever the object model changed.
My point being: in conversations with colleagues since, I have several times pointed out that their use of reflection could be to auto-generate code to compile rather than to perform runtime operations, and this is often worth considering.
It's bad enough that you have to be worried even about reflection done internally by the .NET libraries for performance-critical code.
The following example is obsolete - true at the time (2008), but long ago fixed in more recent CLR versions. Reflection in general is still a somewhat costly thing, though!
Case in point: You should never use a member declared as "Object" in a lock (C#) / SyncLock (VB.NET) statement in high-performance code. Why? Because the CLR can't lock on a value type, which means that it has to do a run-time reflection type check to see whether or not your Object is actually a value type instead of a reference type.
As with all things in programming, you have to balance performance cost with any benefit gained. Reflection is an invaluable tool when used with care. I created an O/R mapping library in C# which used reflection to do the bindings. This worked fantastically well. Most of the reflection code was only executed once, so any performance hit was quite small, but the benefits were great. If I were writing a new fandangled sorting algorithm, I would probably not use reflection, since it would probably scale poorly.
I appreciate that I haven't exactly answered your question here. My point is that it doesn't really matter. Use reflection where appropriate. It's just another language feature that you need to learn how and when to use.
Reflection can have a noticeable impact on performance if you use it for frequent object creation. I've developed an application based on the Composite UI Application Block, which relies heavily on reflection. There was noticeable performance degradation related to object creation via reflection.
However, in most cases there are no problems with reflection usage. If your only need is to inspect some assembly, I would recommend Mono.Cecil, which is very lightweight and fast.
Reflection is costly because of the many checks the runtime must make whenever you make a request for a method that matches a list of parameters. Somewhere deep inside, code exists that loops over all methods for a type, verifies its visibility, checks the return type and also checks the type of each and every parameter. All of this stuff costs time.
When you execute that method, internally there's some code that does things like checking that you passed a compatible list of parameters before executing the actual target method.
If possible, it is always recommended that one caches the method handle if one is going to continually reuse it in the future. Like all good programming tips, it often makes sense to avoid repeating oneself. In this case it would be wasteful to continually look up the method with certain parameters and then execute it each and every time.
Poke around the source and take a look at what's being done.
As with everything, it's all about assessing the situation. In DotNetNuke there's a fairly core component called FillObject that uses reflection to populate objects from datarows.
This is a fairly common scenario and there's an article on MSDN, Using Reflection to Bind Business Objects to ASP.NET Form Controls that covers the performance issues.
Performance aside, one thing I don't like about using reflection in that particular scenario is that it tends to reduce the ability to understand the code at a quick glance, which for me doesn't seem worth the effort when you consider that you also lose compile-time safety, as opposed to strongly typed datasets or something like LINQ to SQL.
Reflection does not drastically slow the performance of your app. You may be able to do certain things quicker by not using reflection, but if Reflection is the easiest way to achieve some functionality, then use it. You can always refactor your code away from Reflection if it becomes a performance problem.
I think you will find that the answer is, it depends. It's not a big deal if you want to put it in your task-list application. It is a big deal if you want to put it in Facebook's persistence library.
