Does the C# compiler (in VS2008 or VS2010) remove unused methods while compiling?
I assume it may have trouble deciding whether public methods will ever be used, so I guess it compiles all public methods.
But what about private methods that are never used inside a class?
EDIT:
Is there a documented set of rules about the compiler's optimizations anywhere?
Just checked in Reflector with a release build: the compiler doesn't remove unused private methods.
There are ways to use a method without the compiler's knowledge, such as via reflection, so the compiler doesn't try to guess. It just leaves the methods there.
The only private methods the compiler removes are partial methods without implementation.
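To illustrate (a minimal sketch; the type and method names are made up): a partial method that is declared but never given a body is stripped entirely, along with its call sites.
public partial class Widget
{
    // Declaration only - no implementing declaration exists anywhere
    // in the project.
    partial void OnCreated();

    public Widget()
    {
        // With no implementation, the compiler removes both the method
        // and this call from the emitted IL.
        OnCreated();
    }
}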
For the C# compiler optimizations, look here (archive.org).
The compiler doesn't strip any method from the assembly, public or private. Doing so could in fact cause weird issues with reflection and prevent runtime calls to such methods.
There are a lot of frameworks (like the XAML parser) which let you call private methods without static binding (think of OnClick="myFunction" in a XAML file). This markup will call the potentially private myFunction when the OnClick event is raised, but the compiler has no information about such behavior at compile time.
Dynamic code suffers from the same issue, and so does IL generation. And you can access private methods from any object when executing under full trust.
This optimization is effectively implemented at the JIT level, which is good because it then works for public, private, and every other kind of method. If a method is never called (ignoring NGen, etc.), it never gets JITted. Now you might say this is still a waste of space for metadata and so on, but as others have pointed out, private isn't so private.
No, they won't be removed. The compiler may give you a warning about them, but it won't remove them itself.
Related
Would C# compiler optimize empty void methods away?
Something like
private void DoNothing()
{
}
Since essentially no code runs aside from pushing DoNothing onto the call stack and popping it again, wouldn't it be better to optimize this call away?
Would C# compiler optimize empty void methods away?
No. They could still be accessed via reflection, so it's important that the method itself stays.
Any call sites are likely to include the call as well - but the JIT may optimize them away. It's in a much better position to do so. It's basically a special case of inlining, where the inlined code is empty.
Note that if you call it on another object:
foo.DoNothing();
that's not a no-op, because it will check that foo is non-null.
If you want, you could hook the post-build event for every project and run an IL-inspecting tool: reflect over your generated DLL, inspect every MethodInfo in your types, request its IL, look for empty IL patterns (for example, nothing but nop instructions followed by ret), and remove the unwanted methods.
For example:
var ilBytes = SomeMethodInfo.GetMethodBody().GetILAsByteArray();
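A fuller sketch of such an inspection pass might look like the following (the assembly path and the definition of "empty" are assumptions, and it only reports candidates rather than rewriting the assembly):
using System;
using System.Linq;
using System.Reflection;

class EmptyMethodFinder
{
    static void Main(string[] args)
    {
        // Hypothetical usage: pass the path of the post-build DLL as args[0].
        Assembly assembly = Assembly.LoadFrom(args[0]);

        BindingFlags flags = BindingFlags.Instance | BindingFlags.Static |
                             BindingFlags.Public | BindingFlags.NonPublic |
                             BindingFlags.DeclaredOnly;

        foreach (Type type in assembly.GetTypes())
        {
            foreach (MethodInfo method in type.GetMethods(flags))
            {
                MethodBody body = method.GetMethodBody();
                if (body == null)
                    continue; // abstract, extern, etc.

                byte[] il = body.GetILAsByteArray();

                // Treat a body of only nop opcodes (0x00) ending in ret (0x2A)
                // as "empty" for the purposes of this sketch.
                bool isEmpty = il.Length > 0 &&
                               il[il.Length - 1] == 0x2A &&
                               il.Take(il.Length - 1).All(b => b == 0x00);

                if (isEmpty)
                    Console.WriteLine(type.FullName + "." + method.Name + " looks empty");
            }
        }
    }
}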
A good obfuscation tool will "prune" methods in this way: preemptive.com/products/dotfuscator/features#pruning – weston
You could also run such a tool outside Visual Studio to find empty methods and remove them from the files where they are defined or used.
Never. The compiler doesn't care whether a method is empty or not; what you write is what you get in your MSIL. You can check it yourself with ILDASM.
I have a C# program where some parts of the code are generated using D-style mixins (i.e., the body of the method is compiled, executed, and its results are inserted into a class). The method is marked with [MixinAttribute] and, naturally, I don't want it to be compiled into the program. Is there some cheap way of preventing a method decorated with this attribute from being included in a build?
The only way is with compiler conditionals:
#if DEBUG
[MixinAttribute]
// method you don't want included
#endif
The problem with this approach is that you create a member which will be unavailable in builds where DEBUG is not defined. You then have to wrap every usage in the same conditional, and I don't think this is what you want. It's not quite clear, but I think what you are really asking is how to create dynamic call sites at build time (the ConditionalAttribute, by comparison, makes the compiler drop call sites at compile time). If that is the case, you can't really do this easily in C# without using some kind of dynamic dispatch overriding (via a proxying library) or a post-processing tool like PostSharp to manipulate the compiler output.
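For example (a sketch; MixinAttribute is the attribute from the question, the other names are made up), every usage ends up needing the same guard, or release builds won't compile:
class CodeEmitter
{
#if DEBUG
    [MixinAttribute]
    private void GenerateCode()
    {
        // mixin body that should never ship
    }
#endif

    public void Build()
    {
#if DEBUG
        // The call site must carry the same guard, because the member
        // simply doesn't exist when DEBUG is not defined.
        GenerateCode();
#endif
    }
}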
I have a class library and am using only part of it. Is there a need to delete what isn't being used in order to shrink the size of the created code (in release configuration)?
As far as I've seen, the compiler takes care of that: removing the code doesn't change the EXE file size. Will this always be true? Removing all the unneeded code would take a very long time, so I want to know whether there's any need for it.
More information: there are methods and classes in the class library that aren't called from the executing code, but are referenced by other parts of code in the class library (which themselves are never called).
No, the compiler includes the "dead" code as well. A simple reason for this is that it's not always possible to know exactly what code will and won't be executed. For example, even a private method that is never referenced could be called via reflection, and public methods could be referenced by external assemblies.
You can use a tool to help you find and remove unused methods (including ones only called by other unused methods). Try What tools and techniques do you use to find dead code? and Find unused code to get you started.
It all gets compiled, regardless of whether it is called or not. The code may be called by an external library.
The only way to make the compiler ignore code is by using Compiler Preprocessor Directives. More about those here.
I doubt the compiler will remove anything. The fact is, the compiler can't tell what is used and what is not, as types can be instantiated and methods called by name, thanks to reflection.
Let's suppose there is a class library called Utility. You created a new project and added this class library to that project. Even if your EXE calls only 1-2 methods from the class library, it's never a good idea to delete the unreferenced code.
It would go against the principle of reusability. Even though some classes in the library would be unreferenced from the EXE, they would not have any negative impact on the performance or size of the program.
Determining all and only the dead code is, in most languages, recursively undecidable (if one makes the idealization that one is working with a "math-world"-like language). A few rare languages, like Blaise, are decidable.
As to the question of whether there is a "need to delete what isn't being used in order to shrink the size of the created code": I think this would only be useful for saving network bandwidth. Removing unused code is crucial in web applications to improve loading speeds, etc.
If your code is an EXE or a library, the only reason I see to remove dead code is to improve code quality, so that someone looking at your code two years down the line won't scratch their head wondering what it does.
I have a pretty common scenario, namely a self-implemented ILogger interface. It contains several methods like _logger.Debug("Some stuff") and so on. The implementation is provided by a LoggingService and used in classes the normal way.
Now I have a question regarding performance. I am writing for Windows Phone 7, and because of the limited power of these devices, little things may matter.
I do not want to:
Include a preprocessor directive on each line, like #if DEBUG
Use a check like log4net's, e.g. _logger.DebugEnabled
The way I see it, in the release version I just return NullLoggers, which contain an empty implementation of the interface and do nothing.
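Roughly, the idea is something like this minimal sketch (the interface shape and names here are just illustrative):
public interface ILogger
{
    void Debug(string message);
}

// Used in release builds: every call is a no-op.
public class NullLogger : ILogger
{
    public void Debug(string message)
    {
        // intentionally does nothing
    }
}

// Used in debug builds: actually writes the message somewhere.
public class LoggingService : ILogger
{
    public void Debug(string message)
    {
        System.Diagnostics.Debug.WriteLine(message);
    }
}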
The question is: does the compiler recognize such things (probably hard, since it can't know at compile time which logger I assign)? Is there any way to give .NET a hint for that?
The reason for my question: I know entering an empty function will not cause a big delay, no problem there. But there are a lot of strings in the source code of my application, and if they are never used, they do not really need to be part of my application...
Or am I overthinking a tiny problem (perhaps the string-to-code ratio just looks awful in my code editor and it's no big deal anyway)?
Thanks for any tips,
Chris
Use the Conditional attribute:
[Conditional("DEBUG")]
public void Debug(string message) { /* ... */ }
The compiler will remove all calls to this method in any build configuration where the symbol named in the Conditional attribute (here, DEBUG) is not defined. Note that the attribute is applied to the method, not the call site. Also note that it is the call-site instruction that is removed, not the method itself.
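So a call like the one below (purely illustrative names) disappears from release IL entirely; even the argument expression, string concatenation and all, is never evaluated:
// With the attribute above and DEBUG undefined, this whole statement,
// including building the string, is omitted from the compiled output.
logger.Debug("Result was: " + result);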
It is probably a very small concern to have logging code in your application that does not "run". The overhead of the "null" logger or conditionals is likely to be very small in the scheme of things. The strings will incur memory overhead which could be worrying for a constrained device, but as it is WP7 the minimum specs are not that constrained in reality.
I understand that logging code looks fugly though. :)
If you really want to strip that logging code out...
In .NET you can use the ConditionalAttribute to mark methods for conditional compilation. You could leverage this feature to ensure that all logging calls are removed from compilation for specified build configurations. As long as the methods you have decorated with the attribute follow a few rules, the compiler will literally strip the call chain out.
However, if you wanted to use this approach then you would have to forgo your interface design, as the Conditional attribute cannot be applied to interface members, and you cannot implement interfaces with conditional members.
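One common workaround (just a sketch; the wrapper name is made up, and ILogger is the interface from the question) is to route calls through a static class whose methods carry the attribute, so the compiler can strip the call sites, log strings and all:
using System.Diagnostics;

public static class Log
{
    // Assumed to be assigned once at startup; null checks omitted for brevity.
    public static ILogger Logger { get; set; }

    [Conditional("DEBUG")]
    public static void Debug(string message)
    {
        Logger.Debug(message);
    }
}

// Calls like Log.Debug("Some stuff") are removed by the compiler
// in builds where DEBUG is not defined.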
Does C# inline access to properties? I'm aware of the 32 byte (instruction?) limit on the JIT for inlining, but will it inline properties or just pure method calls?
It's up to the JIT (the C# compiler doesn't do any inlining as far as I'm aware), but I believe the JIT will inline trivial properties in most cases.
Note that it won't inline members of types deriving from MarshalByRefObject which includes System.Windows.Forms.Control (via System.ComponentModel.Component).
I've also seen double fields end up being less efficient when accessed via properties - it could be that there are some subtleties around that (due to register use etc).
Also note that the 64-bit and 32-bit JITs are different, including their treatment of what gets inlined.
EDIT: I've just found a 2004 blog entry by David Notario with some more information. However, that was before 2.0 shipped - I wouldn't be surprised to see that at least some of it had changed now. Might be of interest anyway.
EDIT: Another question referred to a 2008 Vance Morrison blog entry which gives more information. Interesting stuff.
A property access is just a pure method call. There is no difference in the IL the compiler emits for a property access and for a method call with a similar signature, which sort of answers your question.
It took me a while to figure out that in Visual Studio you can view the disassembly of managed code, after the JIT compiles it.
So why not create a class with a very simple accessor property, run it in release mode, set a breakpoint, and see what the disassembly says?
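For instance, a throwaway test like this (names are arbitrary) is enough to compare the trivial property against direct field access in the Disassembly window:
using System;

class InlineTest
{
    private int _value = 42;

    public int Value
    {
        get { return _value; }   // trivial getter - a likely inlining candidate
    }

    static void Main()
    {
        InlineTest test = new InlineTest();
        int sum = 0;

        for (int i = 0; i < 1000000; i++)
        {
            sum += test.Value;   // break here in a release build and open Disassembly
        }

        Console.WriteLine(sum);
    }
}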
I posted a similar question recently:
Why are public fields faster than properties?
The issue with mine was that a public field was faster than a property because I'm running 64-bit Vista, the JIT compiled my code to 64-bit as well, and my properties were not inlined. Forcing the project to compile for x86 did inline the property, and there was no speed difference between the property and the public field.
So the 32-bit JIT does inline properties, while the 64-bit JIT (at least at the time) did not inline them, nor any other non-static methods.