I thought I read somewhere that ordering your using statements and getting rid of unused ones had some kind of performance benefit...but I can't seem to find any evidence or resources to support that...is there any truth to this?
No, a using directive (which imports a namespace) has no runtime performance cost. The IL produced is identical regardless of the order of the directives; the only advantage is clarity and readability. Removing unused ones may also speed up compilation slightly, but run-time performance won't differ.
From MSDN:
"using-namespace-directives in the same compilation unit or namespace body do not affect each other and can be written in any order."
An additional issue not mentioned by other commenters: your using directives affect what appears in IntelliSense (particularly for extension methods). Removing unused ones keeps IntelliSense performing better and only shows relevant extension methods.
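For example (a minimal sketch): the Where extension method below only compiles, and only shows up in IntelliSense, because System.Linq is imported.

using System;
using System.Linq;   // remove this and numbers.Where(...) neither compiles
                     // nor appears in IntelliSense

class Demo
{
    static void Main()
    {
        int[] numbers = { 1, 2, 3, 4 };
        foreach (var n in numbers.Where(x => x % 2 == 0))
            Console.WriteLine(n);
    }
}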
There would not be. At least certainly not at runtime. It's possible that at build time it takes some extra time to process them, but it wouldn't be noticeable. The using directives aren't carried over into the IL code; everything has its full name there instead, so it's purely a build-time thing.
Now, personally, I hot-key "sort and remove usings" and hit it constantly, just because it keeps the code cleaner. But it is just an obsessive-compulsive thing :)
I've always run Remove and Sort Usings as a matter of course, because it seems the right thing to do.
But just now I got to wondering: Why do we do this?
Certainly, there's always a benefit to clean & compact code.
And there must be some benefit if MS took the time to have it as a menu item in VS.
Can anyone answer: why do this?
What are the compile-time or run-time (or other) benefits from removing and/or sorting usings?
As @craig-w mentions, there's a very small compile-time performance improvement.
The way the compiler works is that when it encounters an unqualified type, it looks in the current namespace and then searches each namespace that has a using directive until it finds the type it's looking for.
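For example (a small sketch): the unqualified name below triggers that search, while the fully qualified name doesn't.

using System.Collections.Generic;

class LookupDemo
{
    // Unqualified: the compiler resolves List<int> by searching the
    // namespaces imported with using directives.
    List<int> viaDirective = new List<int>();

    // Fully qualified: no using-directive search is needed.
    System.Collections.Generic.List<int> fullyQualified =
        new System.Collections.Generic.List<int>();
}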
There's an excellent write-up on this in the book CLR via C# by Jeffrey Richter (http://www.amazon.com/CLR-via-4th-Developer-Reference/dp/0735667454/ref=sr_1_1?ie=UTF8&qid=1417806042&sr=8-1&keywords=clr+via+c%23)
As to why MS provided the menu option, I would imagine that enough internal developers were asking for it for the same reasons that you mention: cleaner, more concise code.
There's probably a teeny-tiny (i.e. minuscule/virtually unmeasurable) performance improvement during compilation because it doesn't have to search through namespaces that you aren't actually using for unqualified types. I do it because it's just neater and easier to read in the end. Also, I use the Productivity Power Tools and have them set to do the Remove and Sort when I save the file.
Currently we have quite a chunky auditing system for objects within our application. The flow goes like this:
- The class implements an interface
- The interface forces the class to override some methods that add the properties needing auditing to a list of KeyValuePairs
- The class then also needs to recreate the object's state from a list of key-value pairs
Now the developer needs to add all of this to their class; also, our objects change quite often, so we didn't just serialise the class.
What I would like to do is use attributes to mark the properties as auditable, and then do everything automatically so the developer doesn't really need to do anything.
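Something along these lines (a rough sketch of the idea; the attribute and helper names are just made up):

using System;
using System.Collections.Generic;

[AttributeUsage(AttributeTargets.Property)]
public class AuditableAttribute : Attribute { }

public static class Auditor
{
    // Walks the public properties and records the ones marked [Auditable].
    public static List<KeyValuePair<string, object>> CaptureState(object entity)
    {
        var state = new List<KeyValuePair<string, object>>();
        foreach (var prop in entity.GetType().GetProperties())
        {
            if (prop.IsDefined(typeof(AuditableAttribute), true))
                state.Add(new KeyValuePair<string, object>(prop.Name, prop.GetValue(entity, null)));
        }
        return state;
    }
}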
My main question is: I know people always say reflection is slow, but how slow are we talking? What performance hit am I going to take from looking through the class, checking the attributes against each property, and then doing any required logic?
Thanks for any help.
Ste,
It's hard to give a specific answer because it depends on what adequate performance is for your application.
Reflection is slower than normal compiled code, but when worrying about performance it's always better to have something that works first, and then use profiling to find the real bottleneck and optimize.
Premature optimization can lead to code that's much harder to maintain, making your developers less productive.
I would start with using reflection and write a good set of unit tests so you know your code is working. If performance turns out to be a problem you can use the Visual Studio profiler to profile your unit tests and discover the bottlenecks.
There are some libraries that can speed up reflection, or you could use expression trees to replace your reflection code if it's too slow.
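For example, a compiled property getter built from an expression tree (a minimal sketch; the class name is made up) pays the reflection cost once and is then close to direct-call speed:

using System;
using System.Linq.Expressions;
using System.Reflection;

static class PropertyAccessor
{
    // Reflect once to build the getter, then reuse the compiled delegate;
    // after the one-time compile cost, calls are close to direct-call speed.
    public static Func<object, object> BuildGetter(PropertyInfo prop)
    {
        ParameterExpression instance = Expression.Parameter(typeof(object), "instance");
        Expression body = Expression.Convert(
            Expression.Property(Expression.Convert(instance, prop.DeclaringType), prop),
            typeof(object));
        return Expression.Lambda<Func<object, object>>(body, instance).Compile();
    }
}

You'd build the getter once per property, cache the delegate, and call it in the hot path.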
Whether the performance is acceptable depends on your application's context, so it's difficult to say whether it will be slow or fast for you; you should try it yourself.
Most probably, IMO, it would give pretty acceptable performance, but again I have no idea where you're going to use it.
Other solutions that come to mind:
- SQLite, for saving the key/value data
- Aspect-oriented programming (e.g. PostSharp) to generate the data at compile time
But the first thing I would try is reflection, just as you're thinking.
Reading this response from Marc, I would suggest that reflection should be fine for most application needs.
Before making any fundamental changes, I would suggest running a profiler to find the bottlenecks in your code. If you identify the reflection/auditing process as the major pain point, try IL emit and measure again.
Reflection is the way to go here. If it's too slow (measure!) you can throw in a bit of caching, or in the worst case generate an Expression<T> and compile it.
There are two phases in your problem:
1. Figure out which properties you want and return a list of their PropertyInfos. You only need to do this once per type, after which you can cache it, so the performance of this step doesn't matter.
2. Get the value of each property with PropertyInfo.GetValue.
If step 2 is too slow, you need to generate an Expression in step 1 instead, and the overhead over manually written code drops to a single delegate invocation.
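A minimal sketch of the per-type caching in step 1 (the class name is made up, and a real version would filter on your audit attribute):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

static class PropertyCache
{
    // One reflection scan per type; every later lookup is a dictionary hit.
    // Not thread-safe; use a ConcurrentDictionary in real code.
    private static readonly Dictionary<Type, PropertyInfo[]> _cache =
        new Dictionary<Type, PropertyInfo[]>();

    public static PropertyInfo[] GetProperties(Type type)
    {
        PropertyInfo[] props;
        if (!_cache.TryGetValue(type, out props))
        {
            props = type.GetProperties().Where(p => p.CanRead).ToArray();
            _cache[type] = props;
        }
        return props;
    }
}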
Is there any design reason for that (like the reason they gave up multiple inheritance)?
Or was it just not important enough?
The same question applies to optional parameters in methods... these were already in the first version of VB.NET, so it surely wasn't laziness that caused MS not to allow optional parameters; it was probably an architectural decision. And it seems they've had a change of heart about it, because C# 4 is going to include them...
What was the decision and why did they give it up?
Edit:
Maybe readers didn't fully understand me. I've been working lately on a calculation program (supporting numbers of any size, to the last digit), in which some methods are used millions of times per second.
Say I have a method called Add(int num), and this method is used quite a lot with 1 as the parameter (Add(1)). I've found it's faster to implement a special method just for one. And I don't mean overloading: writing a new method called AddOne and literally copying the Add method into it, except that instead of using num I write 1. This might seem horribly weird to you, but it's actually faster.
(As ugly as it is.)
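A rough sketch of what I mean (a simplified little-endian base-10 digit array, ignoring growth and overflow):

class BigNumber
{
    private int[] digits = new int[1000]; // least significant digit first

    public void Add(int num)
    {
        int carry = num;
        for (int i = 0; i < digits.Length && carry != 0; i++)
        {
            int sum = digits[i] + carry;
            digits[i] = sum % 10;
            carry = sum / 10;
        }
    }

    // Literal copy of Add with the constant 1 substituted for num,
    // so the JIT can fold the arithmetic on the constant.
    public void AddOne()
    {
        int carry = 1;
        for (int i = 0; i < digits.Length && carry != 0; i++)
        {
            int sum = digits[i] + carry;
            digits[i] = sum % 10;
            carry = sum / 10;
        }
    }
}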
That made me wonder why C# doesn't support manual inlining, which could be amazingly helpful here.
Edit 2:
I asked myself whether or not to add this. I'm very familiar with the weirdness (and disadvantages) of choosing a platform such as .NET for a project like this, but I think .NET's optimizations are more important than you'd think... especially features such as Any CPU etc.
To answer part of your question, see Eric Gunnerson's blog post: Why doesn't C# have an 'inline' keyword?
A quote from his post:
For C#, inlining happens at the JIT level, and the JIT generally makes a decent decision.
EDIT: I'm not sure of the reason for delayed optional parameters support, however saying they "gave up" on it sounds as though they were expected to implement it based on our expectations of what other languages offered. I imagine it wasn't high on their priority list and they had deadlines to get certain features out the door for each version. It probably didn't rise in importance till now, especially since method overloading was an available alternative. Meanwhile we got generics (2.0), and the features that make LINQ possible etc. (3.0). I'm happy with the progression of the language; the aforementioned features are more important to me than getting support for optional parameters early on.
Manual inlining would be almost useless. The JIT compiler inlines methods during native code compilation where appropriate, and I think in almost all cases the JIT compiler is better at guessing when it is appropriate than the programmer.
As for optional parameters, I don't know why they weren't there in previous versions. That said, I don't like having them in C# 4, because I consider them somewhat harmful: the default values get baked into the consuming assembly, so you have to recompile it if you change the defaults in a DLL and want the consuming assembly to pick up the new ones.
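A minimal sketch of that pitfall (hypothetical library and consumer shown in one listing for brevity):

using System;

// Imagine this class lives in a separately shipped DLL.
public static class Config
{
    // The default 30 is copied into every call site that omits the
    // argument, at the *caller's* compile time.
    public static int GetTimeout(int seconds = 30) { return seconds; }
}

class Consumer
{
    static void Main()
    {
        // Compiled as Config.GetTimeout(30). If the library ships a new
        // DLL with a default of 60, this call still passes 30 until the
        // consumer itself is recompiled.
        Console.WriteLine(Config.GetTimeout());
    }
}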
EDIT:
Some additional information about inlining. Although you cannot force the JIT compiler to inline a method call, you can force it to NOT inline a method call. For this, you use the System.Runtime.CompilerServices.MethodImplAttribute, like so:
using System.Runtime.CompilerServices;

internal static class MyClass
{
    [MethodImpl(MethodImplOptions.NoInlining)]
    private static void MyMethod()
    {
        // Powerful, magical code
    }

    // Other code
}
My educated guess: the reason earlier versions of C# didn't have optional parameters is because of bad experiences with them in C++. On the surface they look straightforward enough, but there are a few bothersome corner cases. I think one of Herb Sutter's books describes this in more detail; in general, it has to do with overriding virtual methods. Maximilian has mentioned one of the .NET corner cases in his answer.
You can also pretty much get by without them by manually writing multiple overloads; that may not be very nice for the author of the class, but clients will hardly notice the difference between overloads and optional parameters.
So after all these years without them, why did C# 4.0 add them? 1) improved parity with VB.NET, and 2) easier interop with COM.
I'm working lately on a calculation program (support numbers of any size, to the last digit), in which some methods are used literally millions of times per second.
Then you chose a wrong language. I assume you actually profiled your code (right?) and know that there is nothing apart from micro-optimisations that can help you. Also, you're using a high-performance native bigint library and not writing your own, right?
If that's true, don't use .NET. If you think you can gain speed on partial specialisation, go to Haskell, C, Fortran or any other language that either does it automatically, or can expose inlining to you to do it by hand.
If Add(1) really matters to you, heap allocations will matter too.
However, you should really look at what the profiler can tell you...
C# has added them in 4.0: http://msdn.microsoft.com/en-us/library/dd264739(VS.100).aspx
As to why they weren't done from the beginning, it's most likely because they felt method overloads gave more flexibility. With overloading you can specify multiple 'defaults' based on the other parameters you're taking. It's also not that much more syntax.
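For illustration (a sketch with made-up Logger/LogLevel names): the same default expressed as a forwarding overload versus a C# 4 optional parameter.

using System;

enum LogLevel { Info, Warning, Error }

class Logger
{
    // Pre-C# 4 style: the "default" lives in a forwarding overload.
    public void Log(string message) { Log(message, LogLevel.Info); }

    public void Log(string message, LogLevel level)
    {
        Console.WriteLine("[{0}] {1}", level, message);
    }

    // C# 4 style: an optional parameter expresses the same default
    // (but bakes it into call sites, as noted in an earlier answer).
    public void LogOpt(string message, LogLevel level = LogLevel.Info)
    {
        Console.WriteLine("[{0}] {1}", level, message);
    }
}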
Even in languages like C++, inlining something doesn't guarantee that it'll happen; it's a hint to the compiler. The compiler can either take the hint, or do its own thing.
C# is another step removed from the generated assembly code (via IL + the JIT), so it becomes even harder to guarantee that something will inline. Furthermore, you have issues like the x86 + x64 implementations of the JIT differing in behaviour.
Java doesn't include an inline keyword either. The better Java JITs can inline even virtual methods, and the use of keywords like private or final makes no difference (it used to, but that is now ancient history).
I constantly hear how bad reflection is to use. While I generally avoid reflection and rarely find situations where it is impossible to solve my problem without it, I was wondering...
For those who have used reflection in applications, have you measured the performance hit, and is it really so bad?
In his talk The Performance of Everyday Things, Jeff Richter shows that calling a method by reflection is about 1000 times slower than calling it normally.
Jeff's tip: if you need to call the method multiple times, use reflection once to find it, then assign it to a delegate, and then call the delegate.
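A minimal sketch of that tip (the names are illustrative):

using System;
using System.Reflection;

class Demo
{
    public string Shout(string s) { return s.ToUpper(); }

    static void Main()
    {
        var demo = new Demo();
        MethodInfo method = typeof(Demo).GetMethod("Shout");

        // Slow path: a full reflective invocation on every call.
        object slow = method.Invoke(demo, new object[] { "hi" });

        // Fast path: reflect once, bind to a delegate, call it directly.
        var shout = (Func<string, string>)Delegate.CreateDelegate(
            typeof(Func<string, string>), demo, method);
        string fast = shout("hi");

        Console.WriteLine("{0} / {1}", slow, fast);
    }
}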
It is. But that depends on what you're trying to do.
I use reflection to dynamically load assemblies (plugins) and its performance "penalty" is not a problem, since the operation is something I do during startup of the application.
However, if you're reflecting inside a series of nested loops with reflection calls on each, I'd say you should revisit your code :)
For "a couple of time" operations, reflection is perfectly acceptable and you won't notice any delay or problem with it. It's a very powerful mechanism and it is even used by .NET, so I don't see why you shouldn't give it a try.
Reflection performance will depend on the implementation (repetitive calls should be cached, e.g. entity.GetType().GetProperty("PropName")). Since most of the reflection I see on a day-to-day basis is used to populate entities from data readers or other repository-type structures, I decided to benchmark performance specifically on reflection when it's used to get or set an object's properties.
I devised a test which I think is fair since it caches all the repeating calls and only times the actual SetValue or GetValue call. All the source code for the performance test is in bitbucket at: https://bitbucket.org/grenade/accessortest. Scrutiny is welcome and encouraged.
The conclusion I have come to is that removing reflection from a data access layer isn't practical and doesn't provide noticeable performance improvements when it returns fewer than 100,000 rows at a time and the reflection implementation is done well.
The graph of my benchmark output shows that mechanisms which outperform reflection only do so noticeably after the 100,000-cycle mark. Most DALs only return several hundred or perhaps thousands of rows at a time, and at those levels reflection performs just fine.
If you're not in a loop, don't worry about it.
Not massively. I've never had an issue with it in desktop development unless, as Martin states, you're using it in a silly location. I've heard a lot of people have utterly irrational fears about its performance in desktop development.
In the Compact Framework (which I'm usually in) though, it's pretty much anathema and should be avoided like the plague in most cases. I can still get away with using it infrequently, but I have to be really careful with its application which is way less fun. :(
My most pertinent experience was writing code to compare any two data entities of the same type in a large object model property-wise. Got it working, tried it, ran like a dog, obviously.
I was despondent, but overnight I realised that without changing the logic, I could use the same algorithm to auto-generate methods for doing the comparison, statically accessing the properties instead. It took no time at all to adapt the code for this purpose, and I gained the ability to do deep property-wise comparison of entities with static code that could be regenerated at the click of a button whenever the object model changed.
My point being: in conversations with colleagues since, I have several times pointed out that they could use reflection to auto-generate code to compile rather than to perform runtime operations, and this is often worth considering.
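A rough sketch of the idea (everything here is illustrative): reflect once, emit static comparison source, and compile that into the project.

using System;
using System.Reflection;
using System.Text;

static class ComparerCodeGen
{
    // One-time reflection pass that writes out a static, property-wise
    // comparison method; the generated source is compiled into the project.
    public static string Generate(Type type)
    {
        var sb = new StringBuilder();
        sb.AppendFormat("public static bool AreEqual({0} a, {0} b)\n{{\n", type.Name);
        foreach (PropertyInfo p in type.GetProperties())
            sb.AppendFormat("    if (!object.Equals(a.{0}, b.{0})) return false;\n", p.Name);
        sb.Append("    return true;\n}");
        return sb.ToString();
    }
}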
It's bad enough that you have to be worried even about reflection done internally by the .NET libraries for performance-critical code.
The following example is obsolete - true at the time (2008), but long ago fixed in more recent CLR versions. Reflection in general is still a somewhat costly thing, though!
Case in point: You should never use a member declared as "Object" in a lock (C#) / SyncLock (VB.NET) statement in high-performance code. Why? Because the CLR can't lock on a value type, which means that it has to do a run-time reflection type check to see whether or not your Object is actually a value type instead of a reference type.
As with all things in programming, you have to balance performance cost against any benefit gained. Reflection is an invaluable tool when used with care. I created an O/R mapping library in C# which used reflection to do the bindings, and this worked fantastically well. Most of the reflection code was only executed once, so any performance hit was quite small, but the benefits were great. If I were writing a newfangled sorting algorithm, I would probably not use reflection, since it would probably scale poorly.
I appreciate that I haven't exactly answered your question here. My point is that it doesn't really matter. Use reflection where appropriate. It's just another language feature that you need to learn how and when to use.
Reflection can have a noticeable impact on performance if you use it for frequent object creation. I developed an application based on the Composite UI Application Block, which relies heavily on reflection, and there was noticeable performance degradation related to object creation via reflection.
However, in most cases there are no problems with reflection usage. If your only need is to inspect some assembly, I would recommend Mono.Cecil, which is very lightweight and fast.
Reflection is costly because of the many checks the runtime must make whenever you request a method that matches a list of parameters. Somewhere deep inside, code exists that loops over all methods of a type, verifies their visibility, checks the return type, and also checks the type of each and every parameter. All of this costs time.
When you execute that method, there's also internal code that does things like checking you passed a compatible list of parameters before executing the actual target method.
If possible, it's always recommended to cache the method handle if you're going to reuse it in the future. Like all good programming tips, it often makes sense to avoid repeating yourself: it would be wasteful to continually look up the method with certain parameters and then execute it each and every time.
Poke around the source and take a look at what's being done.
As with everything, it's all about assessing the situation. In DotNetNuke there's a fairly core component called FillObject that uses reflection to populate objects from data rows.
This is a fairly common scenario, and there's an article on MSDN, Using Reflection to Bind Business Objects to ASP.NET Form Controls, that covers the performance issues.
Performance aside, one thing I don't like about using reflection in that particular scenario is that it tends to reduce the ability to understand the code at a quick glance, which for me doesn't seem worth the effort when you consider that you also lose compile-time safety, as opposed to strongly typed datasets or something like LINQ to SQL.
Reflection does not drastically slow the performance of your app. You may be able to do certain things more quickly by not using reflection, but if reflection is the easiest way to achieve some functionality, then use it. You can always refactor your code away from reflection if it becomes a performance problem.
I think you will find that the answer is, it depends. It's not a big deal if you want to put it in your task-list application. It is a big deal if you want to put it in Facebook's persistence library.