Is it possible to write an Attribute that can track methods to detect if those methods are never called?
[Track]
void MyMethod(){
}
output:
warning: method "MyMethod" in "MyClass" has no references in code.
It is not strictly necessary to have it run at compile time, but it should work when the application is initialized (better at compile time anyway).
This attribute will be put on methods in our audio library. Audio code is refactored very frequently, and we usually search for audio methods with 0 references in code; marking these methods would let us quickly detect them and remove unused audio assets.
Basically, each time we add a new sound effect, we may later stop triggering it (calling its method), and the audio file/playback code can then remain in the application for a long time.
Maybe this is the answer you're looking for?
Finding all references to a method with Roslyn
You can use the code there to automate something of your own, I'd say.
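If it is, here is a rough sketch of what such a Roslyn-based check could look like. The `TrackAttribute` name is an assumption, and `MSBuildWorkspace` comes from the Microsoft.CodeAnalysis.Workspaces.MSBuild NuGet package:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.FindSymbols;
using Microsoft.CodeAnalysis.MSBuild; // NuGet: Microsoft.CodeAnalysis.Workspaces.MSBuild

public static class UnreferencedMethodFinder
{
    // Warns about [Track]-annotated methods with no call sites in the solution.
    // Note: newer Roslyn versions may require MSBuildLocator registration first.
    public static async Task ReportAsync(string solutionPath)
    {
        using (var workspace = MSBuildWorkspace.Create())
        {
            Solution solution = await workspace.OpenSolutionAsync(solutionPath);
            foreach (Project project in solution.Projects)
            {
                Compilation compilation = await project.GetCompilationAsync();
                var tracked = compilation.GetSymbolsWithName(_ => true, SymbolFilter.Member)
                    .OfType<IMethodSymbol>()
                    .Where(m => m.GetAttributes()
                                 .Any(a => a.AttributeClass?.Name == "TrackAttribute"));

                foreach (IMethodSymbol method in tracked)
                {
                    var refs = await SymbolFinder.FindReferencesAsync(method, solution);
                    if (!refs.SelectMany(r => r.Locations).Any())
                        Console.WriteLine(
                            $"warning: method \"{method.Name}\" in \"{method.ContainingType.Name}\" has no references in code.");
                }
            }
        }
    }
}
```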
A partial answer is found here:
C# reflection and finding all references
I can use that info to get references to methods marked with a particular attribute; however, that is a run-time script (but better than nothing).
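At run time, reflection can at least enumerate every method carrying the marker attribute, which is half the job; finding the actual call sites still needs Roslyn or an IL reader. A minimal sketch, assuming a hypothetical TrackAttribute:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical marker attribute for audio methods.
[AttributeUsage(AttributeTargets.Method)]
public sealed class TrackAttribute : Attribute { }

public static class TrackScanner
{
    // Prints every [Track]-annotated method in the current assembly at startup.
    public static void ReportTrackedMethods()
    {
        var tracked = Assembly.GetExecutingAssembly().GetTypes()
            .SelectMany(t => t.GetMethods(BindingFlags.Public | BindingFlags.NonPublic |
                                          BindingFlags.Instance | BindingFlags.Static))
            .Where(m => m.GetCustomAttribute<TrackAttribute>() != null);

        foreach (MethodInfo method in tracked)
            Console.WriteLine($"tracked: {method.DeclaringType.Name}.{method.Name}");
    }
}
```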
A little late, but better than never: I use WinGrep (GNU grep for Windows) to search all folders and files for the name of the method:
`C:\>grep -irw "method_name" * --include=*.cs --include=*.sql --include=*.txt`
You can include as many, or as few, file name extensions as makes sense for you. In the example above, I show the top directory as C:, but you can start the search at any directory that makes sense.
The huge advantage of using grep over IDE based searches is that it will search across multiple projects and solutions.
I am not sure of the best way to explain this, so please leave comments if you do not understand.
Basically, I have a few libraries for various tasks to work with different programs - notification is just one example.
Now, I am building a new program, and I want it to be as lightweight as possible. Whilst I would like to include my notification engine, I do not think many people would actually use its functionality, so, I would rather not include it by default - just as an optional download.
How would I program this?
With unmanaged DLLs and P/Invoke, I can basically wrap the whole lot in a try/catch block, but I am not sure about the managed version.
So far, the best way I can think of is to check whether the DLL file exists on startup and set a bool field or similar, and every time I would like a notification to be fired, check the bool and fire...
I have seen from the debug window that DLL files are only loaded as they are needed. The program would obviously compile, as all components are visible to the project, but would it run on the end user's machine without the DLL?
More importantly, is there a better way of doing this?
I would ideally like to have nothing about notifications in my application and somehow have it so that if the DLL file is downloaded, it adds this functionality externally. It really is not the end of the world to have a few extra bytes calling notification("blabla"); (or similar), but I am thinking a lot further down the line when I have much bigger intentions and just want to know best practices for this sort of thing.
I do not think many people would actually use its functionality, so, I would rather not include it by default - just as an optional download.
Such things are typically described as plugins (or add-ons, or extensions).
Since .NET 4, the standard way to do that is with the Managed Extensibility Framework (MEF). It is included in the framework as the System.ComponentModel.Composition assembly and namespace. To get started, it is best to read the MSDN article and the MEF programming guide.
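A minimal sketch of the MEF pattern, assuming a hypothetical INotifier contract that the optional notification DLL would export:

```csharp
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// Hypothetical contract shared between the host and the optional plugin DLL.
public interface INotifier
{
    void Notify(string message);
}

public class Host
{
    [ImportMany] // stays empty if no plugin DLL is present
    public INotifier[] Notifiers { get; set; }

    public void Compose(string pluginDirectory)
    {
        var catalog = new DirectoryCatalog(pluginDirectory);
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this); // fills Notifiers from whatever exported parts exist
    }
}

// In the optional plugin assembly:
// [Export(typeof(INotifier))]
// public class PopupNotifier : INotifier
// {
//     public void Notify(string message) { /* show popup */ }
// }
```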
You can use System.Reflection.Assembly and its LoadFile method to dynamically load a DLL. You can then use the methods on Assembly to get the classes, types, etc. embedded in the DLL and call them.
If you just check whether the .dll exists, or load every .dll in a plugin directory, you can get what you want.
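For example, a rough sketch of the check-then-load approach (the file name and type name below are placeholders):

```csharp
using System;
using System.IO;
using System.Reflection;

public static class PluginLoader
{
    // Returns a notifier instance, or null if the optional DLL isn't installed.
    public static object TryCreateNotifier()
    {
        string path = Path.GetFullPath("NotificationEngine.dll"); // hypothetical file name
        if (!File.Exists(path))
            return null; // run without notification support

        Assembly plugin = Assembly.LoadFile(path);
        Type type = plugin.GetType("NotificationEngine.Notifier"); // hypothetical type name
        return type == null ? null : Activator.CreateInstance(type);
    }
}
```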
To your question of whether the program will run on the user's machine without the DLLs already being present: yes, the program would run. As long as you don't do anything that needs the runtime to load the classes defined in the DLL, it does not matter if the DLL is missing from the machine. As for loading the DLL on demand, I think you are well off using some sort of configuration plus Reflection (either directly or through some IoC strategy).
Try to load the plugin at startup.
Instead of checking a boolean all over the place, you can create a delegate field for the notification and initialize it to a no-op function. If loading the plugin succeeds, assign the delegate to the plugin implementation. Then everywhere the event occurs can just call the delegate, without worrying about the fact that the plugin might or might not be available.
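A small sketch of that idea (the Notifications class and delegate name here are made up):

```csharp
using System;

public static class Notifications
{
    // Defaults to a no-op, so call sites never need a null check or bool flag.
    public static Action<string> Notify = message => { };
}

// At startup, if the plugin loaded successfully, swap in the real implementation:
// Notifications.Notify = message => loadedPlugin.Show(message);

// Anywhere in the application, safe whether or not the plugin is present:
// Notifications.Notify("blabla");
```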
I want to "hot" load some pre-packaged assembli(es) into a separate AppDomain, the thing however is I do not know the name of the entry point class or even the assembly file. I need to find this entry point so I can run some initialization routine.
So what I intend to do is run ReflectionOnlyLoad on all the files and find the one that follows a certain convention, e.g. is annotated with, or implements, a certain interface.
Question is, will I start leaking memory if I were to run ReflectionOnlyLoad from the main AppDomain over and over? If this can't be run from the main app domain, what are my options, because again I do not know where the entry point is.
Also any additional information about the subtleties in using ReflectionOnlyLoad is appreciated.
I recommend Mono.Cecil. It's a simple assembly you can use on .net (it doesn't require the Mono runtime). It offers an API to load assemblies as data, and works pretty well. I found the API easy to work with, and it suffered from none of the problems I experienced when using reflection-only-load.
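A minimal sketch of scanning with Cecil, assuming the convention is a marker attribute (the attribute name is hypothetical):

```csharp
using Mono.Cecil; // NuGet: Mono.Cecil

public static class CecilScanner
{
    // Reads the assembly as data; nothing gets loaded into the AppDomain.
    public static string FindEntryPointTypeName(string assemblyPath)
    {
        AssemblyDefinition asm = AssemblyDefinition.ReadAssembly(assemblyPath);
        foreach (TypeDefinition type in asm.MainModule.Types)
        {
            foreach (CustomAttribute attr in type.CustomAttributes)
            {
                // Hypothetical convention: the entry point carries this attribute.
                if (attr.AttributeType.FullName == "MyLib.PluginEntryPointAttribute")
                    return type.FullName;
            }
        }
        return null;
    }
}
```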
You can also use CCI, which is an open source project by MS that offers an assembly reader.
See also: CCI vs. Mono.Cecil -- advantages and disadvantages
ReflectionOnlyLoad won't solve your problem; see the docs.
Why don't you execute the code for finding the entry point etc. in the new AppDomain?
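For instance, a sketch of doing the scan inside a scratch AppDomain (the convention interface is an assumption):

```csharp
using System;
using System.Reflection;

// Hypothetical convention: the entry point implements this interface.
public interface IPluginEntryPoint
{
    void Initialize();
}

// Runs inside the scratch AppDomain, so scanned assemblies stay out of the main one.
public class EntryPointScanner : MarshalByRefObject
{
    public string FindEntryPoint(string assemblyPath)
    {
        Assembly asm = Assembly.LoadFrom(assemblyPath);
        foreach (Type type in asm.GetTypes())
        {
            if (typeof(IPluginEntryPoint).IsAssignableFrom(type) && !type.IsAbstract)
                return type.FullName;
        }
        return null;
    }
}

// In the main domain:
// AppDomain scratch = AppDomain.CreateDomain("scanner");
// var scanner = (EntryPointScanner)scratch.CreateInstanceAndUnwrap(
//     typeof(EntryPointScanner).Assembly.FullName,
//     typeof(EntryPointScanner).FullName);
// string entryType = scanner.FindEntryPoint("plugin.dll");
// AppDomain.Unload(scratch); // releases everything loaded during the scan
```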
I cannot reflect through the DLLs: even with reflection-only load, the type sticks to the main AppDomain.
Two solutions:
1. Put the entry point in an XML file somewhere and parse that.
2. Use a two-stage AppDomain: one for the reflector, and another for the actual object.
I picked (1) since it's the most sensible. With (2), I would have to pass through two separate proxies to issue commands to the actual remote object, or else couple the interfaces much more closely than I'd like. Not to mention it being a pain to code.
I have a class library and am using only part of it. Is there a need to delete what isn't being used in order to shrink the size of the created code (in release configuration)?
As far as I've seen, the compiler takes care of that, and removing the code doesn't change the EXE file size. Will this always be true? Removing all unneeded code would take very long, so I want to know if there's need for that.
More information: there are methods and classes in the class library that aren't called from the executing code, but are referenced by other parts of code in the class library (which themselves are never called).
No, the compiler includes the "dead" code as well. A simple reason for this is that it's not always possible to know exactly what code will and won't be executed. For example, even a private method that is never referenced could be called via reflection, and public methods could be referenced by external assemblies.
You can use a tool to help you find and remove unused methods (including ones only called by other unused methods). Try What tools and techniques do you use to find dead code? and Find unused code to get you started.
It all gets compiled. Regardless of whether it is called or not. The code may be called by an external library.
The only way to make the compiler ignore code is by using Compiler Preprocessor Directives. More about those here.
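For illustration, a tiny example of conditional compilation (the symbol name is made up):

```csharp
public static class AudioEffects
{
#if INCLUDE_LEGACY_AUDIO
    // Compiled only when INCLUDE_LEGACY_AUDIO is defined, e.g. via a #define
    // at the top of the file or /define:INCLUDE_LEGACY_AUDIO on the compiler command line.
    public static void PlayLegacySound() { }
#endif
}
```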
I doubt the compiler will remove anything. The fact is, the compiler can't tell what is used and what is not, as types can be instantiated and methods called by name, thanks to reflection.
Let's suppose there is a class library called Utility. You created a new project and added this class library to that project. Even if your EXE calls only 1-2 methods from the class library, it's never a good idea to delete the unreferenced code.
It would go against the principle of reusability. Despite the fact that there would be some classes in the library unreferenced from the EXE, it would not have any bad impact on the performance or size of the program.
Determining all and only the dead code is recursively undecidable in most languages (under the idealization of a "math world"-like language). A few rare ones, like the Blaise language, are decidable.
To the question of whether there is a "need to delete what isn't being used in order to shrink the size of the created code": I think this would only be useful to save network bandwidth. Removing unused code is crucial in web applications to improve loading speeds, etc.
If your code is an EXE or a library, the only reason I see to remove dead code is to improve your code quality, so that someone looking at your code two years down the line won't scratch their head wondering what it does.
My C# .NET solution files are a mess and I am trying to find a way of getting things in order.
I have tried to group closely related files together in folders created for that purpose. For example, I put interfaces, abstract classes, and all their derived classes in the same folder. By the way - when I do that, I need to write a "using" statement pointing to that folder's namespace so I can use those classes in other files (also a mess, I guess).
Is there an elegant way of keeping things clean, rather than a long list of files that I find very confusing?
Is it a good idea to (let's say) open an abstract class file and add nested classes for all the classes derived from it?
Is there a way of telling the solution to automatically add the folder's "using" statements above every class I create?
It works best when your solution's file structure reflects your program's architecture, not just its code layout.
For example: if you define an abstract class and then have entities that implement it, put them into the same "basket" (solution folder) if they form part of the same architectural unit.
That way, anyone looking at your solution tree can see, more or less, what your architecture is about from a top-level view.
There are different ways to reinforce that architectural vision and feel in the code's file system. For example, if you use well-known frameworks like NHibernate or ASP.NET MVC, call things by the names the technology uses; that way, anyone familiar with the technology can easily find their way around your architecture.
WPF, for example, forces you to define things in code a certain way, but it also has you define Model, ViewModel, and View, which you will intuitively put in separate files. The technology nudges your file system into the layout it was designed for.
By the way, the topic you're asking about is a widely known and unresolved dilemma, because in the end code is just a sequence of characters.
Good luck.
It sounds like you're hitting the point where you actually need to break things up a bit, but you're resisting this because more files seems like more complexity. That's true to a point. But there's also a point where files just become big and unmanageable, which is where you might end up if you try to do nested classes.
Keeping code in different namespaces is actually a good thing--that's the "issue" you're running into with the folders and having to add using statements at the top of your files. Namespacing allows you to logically divide your code, and even occasionally reuse a class name, without stepping on other parts of your code base.
What version of Visual Studio are you using? One little known feature of Visual Studio is that it can automatically create the using directive when you type a class name. That would eliminate one pain point.
If I was in your shoes, I'd start looking for logical places to segment my code into different projects. You can definitely go overboard here as well, but it's pretty common to have:
A "core" project that contains your business logic and business objects.
UI projects for the different user interfaces you build, such as a website or Windows Forms app.
A datalayer project that handles all interactions with the database. Your business logic talks to the datalayer instead of directly to the database, which makes it easier to make changes to your database setup down the road.
As your code base grows, a tool like ReSharper starts to become really important. I work on a code base that has ~1 million lines and 10 or so projects in the solution, and I couldn't live without ReSharper's go-to-file navigation feature. It lets you hit a keyboard shortcut and start typing a file name and just jump to it when it finds a match. It's sort of like using Google to find information instead of trying to bookmark every interesting link you come across. Once I made this mental shift, navigating through the code base became so much easier.
Try using multiple projects in the same solution to bring order. Separate projects for web, entity, data access, setup, testing, etc.
If the files are in the same namespace, you won't need a using statement. If you break your code into multiple projects, you'll need to reference the other projects and import their namespaces with using statements.
Its up to you. Break things apart logically. Use subfolders where you deem necessary.
Not sure.
Yes, but you'll need to create a template. Search for tutorials on that.
1) Your solution folders should match your namespace structure. Visual Studio is set up to work this way and will automatically create a matching namespace. Yes, this requires a using for stuff in the folders but that's what it's for.
So yes, group common stuff together under an appropriate namespace.
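For instance, a file under an Audio/Effects folder would typically look like this (the names are illustrative):

```csharp
// File: Audio/Effects/Reverb.cs -- the namespace mirrors the folder path.
namespace MyApp.Audio.Effects
{
    public class Reverb
    {
    }
}

// Elsewhere in the solution, one using directive brings the folder's types into scope:
// using MyApp.Audio.Effects;
```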
2) Yes, subclasses should probably live in the same namespace/folder as their abstract base, or a sub folder of it. I'm not sure if you mean all in the same file? If so I would say generally not unless they're very very simple. Different files, same folder.
Not that I'm aware of. If you right-click the class name when you use it, you can get Visual Studio to automatically resolve it and add a using (Ctrl+. also does this).
I'm wondering if there are any reasons (apart from tidying up source code) why developers use the "Remove Unused Usings" feature in Visual Studio 2008?
There are a few reasons you'd want to take them out.
It's pointless. They add no value.
It's confusing. What is being used from that namespace?
If you don't, then you'll gradually accumulate pointless using statements as your code changes over time.
Static analysis is slower.
Code compilation is slower.
On the other hand, there aren't many reasons to leave them in. I suppose you save yourself the effort of having to delete them. But if you're that lazy, you've got bigger problems!
I would say quite the contrary - it's extremely helpful to remove unneeded, unnecessary using statements.
Imagine you have to go back to your code in 3, 6, 9 months - or someone else has to take over your code and maintain it.
If you have a huge laundry list of using statements that aren't really needed, looking at the code could be quite confusing. Why is that using in there, if nothing is used from that namespace?
I guess in terms of long-term maintainability in a professional environment, I'd strongly suggest to keep your code as clean as possible - and that includes dumping unnecessary stuff from it. Less clutter equals less confusion and thus higher maintainability.
Marc
In addition to the reasons already given, it prevents unnecessary naming conflicts. Consider this file:
using System.IO;
using System.Windows.Shapes;
namespace LicenseTester
{
public static class Example
{
private static string temporaryPath = Path.GetTempFileName();
}
}
This code doesn't compile because the namespaces System.IO and System.Windows.Shapes both contain a class called Path. We could fix it by using the fully qualified class name,
private static string temporaryPath = System.IO.Path.GetTempFileName();
or we could simply remove the line using System.Windows.Shapes;.
This seems to me to be a very sensible question, which is being treated in quite a flippant way by the people responding.
I'd say that any change to source code needs to be justified. These changes can have hidden costs, and the person posing the question wanted to be made aware of this. They didn't ask to be called "lazy", as one person intimated.
I have just started using ReSharper, and it is starting to give warnings and style hints on the project I am responsible for. Amongst them is the removal of redundant using directives, but also redundant qualifiers, capitalisation, and many more. My gut instinct is to tidy the code and resolve all hints, but my business head warns me against unjustified changes.
We use an automated build process, and therefore any change to our SVN repository would generate changes that we couldn't link to projects/bugs/issues, and would trigger automated builds and releases which delivered no functional change to previous versions.
If we look at the removal of redundant qualifiers, this could possibly cause confusion to developers as classes in our Domain and Data layers are only differentiated by the qualifiers.
If I look at the proper capitalisation of acronyms (e.g. ABCD -> Abcd), then I have to take into account that ReSharper doesn't refactor any of the XML files we use that reference class names.
So, following these hints is not as straightforward as it appears, and they should be treated with respect.
Fewer options in the IntelliSense popup (particularly if the namespaces contain lots of extension methods).
Theoretically IntelliSense should be faster too.
Remove them. Less code to look at and wonder about saves time and confusion. I wish more people would KEEP THINGS SIMPLE, NEAT and TIDY.
It's like having dirty shirts and pants in your room. It's ugly and you have to wonder why it's there.
Code compiles quicker.
Recently I got another reason why deleting unused imports is quite helpful and important.
Imagine you have two assemblies, where one references the other (let's call the first one A and the referenced one B). Now, when you have code in A that depends on B, everything is fine. However, at some stage in your development process you notice that you don't actually need that code any more, but you leave the using statement where it was. Now you not only have a meaningless using directive but also an assembly reference to B that is not used anywhere except in the obsolete directive. This firstly increases the time needed to compile A, as B has to be loaded as well.
So this is not only an issue of cleaner and easier-to-read code, but also of maintaining assembly references in production code, where not all of those referenced assemblies even exist.
Finally, in our example we had to ship B together with A, although B is not used anywhere in A except in the using section. This can significantly affect A's runtime performance when loading the assembly.
It also helps prevent false circular dependencies, assuming you are also able to remove some dll/project references from your project after removing the unused usings.
At least in theory, if you were given a C# .cs file (or any single source code file), you should be able to look at the code and create an environment that simulates everything it needs. With some compiling/parsing techniques, you could even create a tool to do it automatically. If you do this, at least in your head, you can be sure you understand everything that code file says.
Now consider being given a .cs file with 1000 using directives, of which only 10 are actually used. Whenever you look at a newly introduced symbol that references the outside world, you will have to go through those 1000 lines to figure out what it is. This obviously slows down the above procedure. So if you can reduce them to 10, it will help!
In my opinion, the C# using directive is very, very weak, since you cannot import a single generic symbol without losing its genericity, and you cannot use a using alias directive to bring in extension methods. This is not the case in other languages like Java, Python, and Haskell; in those languages you can specify (almost) exactly what you want from the outside world. Even then, I would suggest using a using alias whenever possible.
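For what it's worth, here is what a using alias buys you in the Path example above (the closed-generic alias is just an illustration):

```csharp
using IOPath = System.IO.Path;                        // pins down exactly which Path is meant
using IntList = System.Collections.Generic.List<int>; // aliases work for closed generics only

public static class AliasExample
{
    public static string TempFile()
    {
        return IOPath.GetTempFileName();
    }
}
```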