Do you use NDepend? - c#

I've been trying out NDepend, have read a few blog posts about it, and have even heard a podcast. I think that NDepend might be a really useful tool, but I still don't see where I would use it.
How do you use it? Do you use it, why? Why not?
I would like to hear about some down-to-earth real world examples.

I've used NDepend extensively over the past few years. Basically it is a dependency analysis tool, so it can help you with lots of dependency-related issues.
One of the main things I use it for is to examine the dependencies between my assemblies, types and methods. This helps me keep an eye on whether coupling between types is getting out of hand, and also helps me spot refactoring opportunities.
When embarking on a massive refactor, e.g. extracting/moving types to other assemblies, this lets you see what depends on what, so you don't have to do the old "move my types to another assembly, then try to compile and see what breaks".
NDepend also has a great visual matrix for viewing this sort of information.
Additionally, it has a fantastic query language, CQL, which lets you write custom queries. These range from simple things such as "show me all methods that call this method" to queries that highlight dead code or measure cyclomatic complexity, coupling, and much more.
In turn, it can be integrated into a build process, so you can have build warnings/failures based on CQL queries, such as "fail the build if a method has more than 100 lines of code but no comments" (this is an example - I'm not suggesting this particular metric is a good thing).
It can also import code coverage data and give you a visual representation of areas with little code coverage, as well as allowing you to run CQL queries against code coverage information (e.g. show me methods with less than 70% code coverage).
You can also load your current build of your project, and a previous build, and run queries between them such as "show me all new types that have <70% code coverage" - this helps you introduce tighter rules on existing codebases.
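For illustration, here is roughly what a couple of such rules look like in CQL. I'm writing these from memory of NDepend's older CQL syntax, so treat the exact metric names and syntax as approximate and check them against your NDepend version:

    WARN IF Count > 0 IN SELECT METHODS WHERE NbLinesOfCode > 100 AND PercentageComment < 5

    SELECT METHODS WHERE PercentageCoverage < 70

The first is a constraint suitable for warning or failing a build on long, barely commented methods; the second simply lists methods that are poorly covered by tests.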
This is a fantastic tool, and it isn't too difficult to learn. It can be intimidating at first, just because of the sheer volume of information it gives you, but it is highly recommended.

I also find it invaluable for understanding the structure of complicated method calls. I can call up all methods that transitively use a particular method or field, for example, and can see if there are possible problems with circular calls, unwanted dependencies, or paths that are more convoluted than necessary.
The dependency graph is also now interactive, so I can remove methods which I am not currently interested in, and move others around to give a good visualization of what is going on.

I've found it useful for visualizing changes between versions of assemblies, even just as a snapshot of the changes in a given release.
I think it shines in a Continuous Integration environment where you can set up CQL queries to measure code metrics you're interested in (Cyclomatic Complexity, Long Methods, etc.), and then you can measure your improvement in those areas over time.

This tool is also helpful if you have, for example, an interface that is used by another part of the application developed by a different person or vendor. Every time you want to change the interface, you have to find out who is using it so that you don't break their code (their assembly won't build otherwise).
This is mostly applicable to bigger projects.

This tool is helpful when your application has a huge number of assemblies.
It helps me find code dependencies as well as changes between releases.

I'm also using NDepend to compare two versions of an assembly. NDepend has an excellent feature for this. It gives me a view of the changes and work progress in the assembly: methods that have been added, methods that have been removed, and much more.

Related

Is there a tool for .NET/C# to capture *run-time* dependencies between classes?

What tool can I use for a .NET/C# project to capture run-time dependencies between classes? I found this question to be very useful, but the suggested tools capture a static dependency graph. I simply want to see a graph of class instantiations.
I'm using VS 2008 (but I can install another version if needed).
UPD: My goal is this: I have a huge old codebase. It has (for example) 500 classes, but because the DB-driven workflow has changed over the years, only (for example) 100 classes are used now. That's why static dependency analysis would be too overwhelming to digest.
CLRProfiler can capture and display a call graph: http://www.scribd.com/doc/3376247/CLRProfiler.
The file format is documented, so you can add your own analysis.
ANTS lets you visualize the call graph. It's not precisely what you're looking for, but it might help. They have a 14-day trial, and if you decide to buy it, it's well worth the money for profiling your .NET apps.
The .NET Memory Profiler lets you view instance graphs as well. A bit less spendy than ANTS and might do what you need to do.
It's not a call graph but a call list, but IntelliTrace in VS can show you the call history. If you filter for calls to constructors, you can probably get most of the way there.
Doxygen is a free tool that can also generate call graphs along with some basic documentation.

Ultra-Simple LightWeight Source Control for Visual Studio Projects?

I am using TortoiseSVN with AnkhSVN. I have spent too much time tweaking and cleaning up messes, and I have lost hope in educating each and every developer on how to use things properly. I am sorry, but I am fed up and tired of restoring the repository, reverting, and fixing merges manually, sometimes even having to write some code again.
So here's my question: is there a chimpanzee-friendly solution for source control that privileges simplicity over flexibility? Projects and teams are small, and I figured out that we just need VERY simple and basic checkout/checkin mechanisms, with no flourishes and limited functionality and features. That would help me stop being paranoid about project integrity.
I know that there is no easy way to do this and that a minimum of technical skill and discipline is required, but I ended up wondering whether we really need all of that in our case, as in the long run it causes more trouble than it helps.
Your problem sounds like it has more to do with process and branching strategies than anything else.
If your developers know to always get the latest code before checking in, resolve conflicts locally, run all tests, etc., you will already have a leg up.
Educate your developers instead of trying to use a dumbed down SCM (that in the future will probably not be adequate to your needs).
As for branching strategy - I have found that branch-per-feature is the most natural way to work, and it mostly avoids merge conflicts.
Changing SCMs will not help with your issues if you don't tackle process and branching.
First, I would suggest that you force developers to clean up their own messes, rather than doing it for them. By doing it for them, you are only encouraging them to stay ignorant. By all means, be a resource and provide help, but make them do it themselves. They will quickly learn what they have to.
Second, there are only a few options that have the kind of integration with VS that most developers would like. SVN is one of them. Team System is another (but a much more expensive and complicated solution). Visual SourceSafe is also an option, but it's really an old, out-of-date system that hasn't been updated since 2005 (and even that was largely a patch job on a system that hadn't been updated in the seven years before).
If you want free, there is nothing worth using that is simpler than Subversion. Everything else will be ancient technology (like CVS) that will give you even more problems. There are several free SCMs that are more powerful, like Git and Mercurial, but you would have even more problems with those. If you're willing to pay, many third-party tools have better merge and visualization features. One I like is AccuRev.
There are also some better commercial SVN plug-ins for Visual Studio that may help as well. I've not used any of them, but they may improve the developers' use of SVN.
Try the combination of Mercurial and TortoiseHg as a GUI.
You can also use it from Visual Studio with VisualHG.
Every developer is free to clone and manage her own repository.
Once you reach an agreement you can push up to a colleague's repository or a central location.
To aid with adoption, you might convince others to watch the DVCS video on the FogCreek Kiln page.
See what-makes-merging-in-dvcs-easy and similar SO discussions regarding the relative ease of merging.
I would say that every developer who works in a team should have a strong understanding of source control principles. Maybe you should get better developers! :-)
To answer your question: I have always found Team System wonderful and very flexible. With such good IDE integration, it can be configured to ensure best practice in source control. However, it is quite a big source control system, so it may be over the top for your purposes.
I believe the issue is more one of process than of product.
Strict written documentation and process might work.
Keep it as simple as possible.
You might make adherence to the process a contractual obligation.
That said, I have had very good luck with VisualSVN for Visual Studio.
It is easy to use and integrates well.
If that is too hard, you might revert to TortoiseSVN, which is pretty idiot-proof.
As for an alternative super-simple product, I don't know of one, but if you really need something lightweight, then date-stamped and named zip files are the poor man's form of source control. Merging and restoring is a bitch, though.

The best way to explore/investigate/understand the class hierarchy and workings of a new project

Imagine this situation: you get some legacy code or some new framework, and you need to investigate and understand how to work with this code as soon as possible. There is no chance to ask the previous developer for help. What are the best practices/methods/steps/tools (preferably from the .NET Framework tool stack) to use to investigate a codebase that is new to you with maximum efficiency?
If it is a framework and there is not much documentation or many unit tests, what tools do you usually use to explore the class hierarchy, methods, and events? Is it the default Object Browser, Visual Studio's Architecture Explorer, or some other tool like ReSharper's hierarchy/file views?
There really isn't a best way to do this as there are so many variables and every project is different from the next.
To be absolutely honest the best way to get your head around it is to create a sandpit/test environment and, for want of a better description, play with it. Then play with it some more.
As an example of 'playing with it', using the debugger and stepping through the code will tell you a lot about the flow and structure of the code. It is also worth mentioning that you should never trust comments; verify functionality yourself, since code may have changed since a comment was written.
For diving into a new application with a large code base, the best solution I've found is to get the big picture through the reverse-engineering facilities of applications like Enterprise Architect.
If that's not available to you, try the class diagrams provided by VS.
That gives you the static structure of the program; to understand the flow of execution, follow the execution paths of the main scenarios using the facilities you can find in ReSharper, VS2008 (generate sequence diagrams, ...) and VS2010 (View Call Hierarchy, ...).
As said in previous answers, debugging and profiling the application is also very helpful: set breakpoints, look at the call stack, watch the objects, and so on.
I find that usually the best way to start with a completely unknown code base is just trying to get it to run.
After that, if there are bugs that need to be addressed, try to fix some of those.
That will give you insight into how difficult it is to update/maintain the system. You should also start to see code patterns, or lack thereof, emerge.
I often find that unit tests are a good place to start, provided there are some! At least through unit tests you get short examples of how it works, and where it should fail. Hopefully there is some documentation lying about too...
In VS2010, there is a tool under Architecture which will help you analyze your code base and generate a dependency diagram for you.
Check the project dependencies within the solution.
This will give you an idea of how the projects flow within the solution.
Check the external DLLs used in the references.
This will tell you more about how the system is used.
Now you can make assumptions about the flow of the architecture.
You can then run the application and check the logs, which will give you an idea of the class and function flow.
You can then start debugging the code/module that is assigned to you.
This will put you in a better position to make changes.

How to mark and remove unwanted code (methods, properties) from a .NET (C#) project

I have received a legacy .NET C# solution with many class library projects to review, refactor, and reuse. This solution is not used anywhere and is lying in the code junkyard. The solution compiles properly, though.
There are 4 primary methods that are needed from the main class library. I just want to retain all the classes, methods, and properties used by these 4 methods across all the other projects in the solution, and strip out all the other code, which is junk to me. Currently, I am manually tracing from these 4 main methods using Visual Studio 2010's "Call Hierarchy" feature.
Is there an automated process to quickly identify the code related to my main methods and extract it into a shiny new solution (which hopefully builds successfully), so that I only have to see the relevant code, without all the code that is not needed by my four main methods?
Thanks.
Static tools are useful -- try NDepend -- but dynamic calls mean that any part of your codebase might be accessed by these methods. Try running a code coverage tool such as NCover with an extensive suite of unit tests, and perhaps manual tests as well, and then analyze the tool's output.
I find ReSharper pretty useful for detecting unreachable or unused code, so I'd certainly give that a go. You can download a demo version from http://www.jetbrains.com/resharper/.
It may not do everything for you, and finding unused code across multiple classes (in other words public methods that are not called) isn't so easy, but it's a good start.
Related to this, I would also advise you get a good set of unit tests in place before you undertake any refactoring, so that you can easily spot if/when you break functionality.
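As a minimal sketch of what that safety net could look like, here is an NUnit characterization test that pins the current behaviour of one entry point before anything is stripped. The class, method, and expected value are hypothetical stand-ins for one of your four real methods:

    using NUnit.Framework;

    [TestFixture]
    public class MainLibraryCharacterizationTests
    {
        [Test]
        public void CalculatePrice_ReturnsTheValueTheLegacyCodeReturnsToday()
        {
            // Hypothetical entry point: record whatever the current build returns
            // and assert on it, so any later stripping or refactoring that changes
            // behaviour fails this test.
            var library = new MainLibrary();
            decimal result = library.CalculatePrice(42);

            Assert.AreEqual(105.00m, result);
        }
    }

Exercise the four entry points with representative inputs, capture their outputs as expected values, and only then start cutting code away.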

Should you obfuscate a commercial .Net application?

I was thinking about obfuscating a commercial .Net application. But is it really worth the effort to select, buy and use such a tool? Are the obfuscated binaries really safe from reverse engineering?
You may not have to buy a tool - Visual Studio.NET comes with a community version of Dotfuscator. Other free obfuscation tools are listed here, and they may meet your needs.
It's possible that the obfuscated binaries aren't safe from reverse engineering, just like it's possible that your bike lock might be breakable/pickable. However, it's often the case that a small inconvenience is enough to deter would be code/bicycle thieves.
Also, if ever it comes time to assert your rights to a piece of code in court, having been seen to make an effort to protect it (by obfuscating it) may give you extra points. :-)
You do have to consider the downsides, though - it can be more difficult to use reflection with obfuscated code, and if you're using something like log4net to generate parts of log lines based on the name of the class involved, these messages can become much more difficult to interpret.
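For example, a logger named after its declaring type, as is common with log4net, will carry the obfuscated name once the type has been renamed, so a log line that used to identify the class clearly ends up showing something like "a.b" instead. The class here is a hypothetical illustration:

    using log4net;

    public class InvoiceService   // hypothetical class
    {
        // The logger takes its name from the runtime type, so after an obfuscator
        // renames InvoiceService, the log output shows the meaningless new name.
        private static readonly ILog Log = LogManager.GetLogger(typeof(InvoiceService));

        public void Process()
        {
            Log.Info("Processing invoice...");
        }
    }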
Remember that obfuscation is only a barrier to the casual examiner of your code. If someone is serious about figuring out what you wrote, you will have a very hard time stopping them.
If you have secrets in your code (like passwords), you're doing it wrong.
If you're worried someone might produce their own software from your ideas, you'll have more luck in the marketplace by providing new versions that your customers want, with technical support, and by being a partner to them. Good business wins.
At our company we evaluated several different obfuscation technologies, but they all had problems. The biggest problem was that we rely a lot on reflection, e.g. to dynamically create grids based upon property names.
All of the obfuscators rename things; you can disable that, of course, but then you lose a lot of the benefit of obfuscation.
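To illustrate the kind of reflection that breaks (the type and property names here are hypothetical): code that looks a property up by its string name gets null back once the obfuscator has renamed the property, so the grid column silently comes up empty.

    using System.Reflection;

    public class Customer                 // hypothetical type bound to a grid
    {
        public string Name { get; set; }
    }

    public static class GridBuilder
    {
        public static object GetColumnValue(object row, string propertyName)
        {
            // After renaming obfuscation, "Name" no longer exists on the type,
            // so GetProperty returns null and the column value is lost.
            PropertyInfo prop = row.GetType().GetProperty(propertyName);
            return prop == null ? null : prop.GetValue(row, null);
        }
    }

Many .NET obfuscators also recognize System.Reflection.ObfuscationAttribute, which lets you exempt specific types or members from renaming, though how it is honoured varies from product to product.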
Also, in our code we have a lot of NUnit tests which rely on many more of the methods and properties being public, and this prevented some of the obfuscators from being able to obfuscate those classes.
In the end we settled on a product called .NET Reactor.
It works very well, and we don't have any of the problems associated with the other products.
"In contrast to obfuscators .NET Reactor completely stops any decompiling by mixing any pure .NET assembly (written in C#, VB.NET, Delphi.NET, J#, MSIL...) with native machine code. In detail, .NET Reactor builds a native wall between potential hackers and your .NET code. The result is a standard Windows based, not MSIL compatible, file. The original .NET code remains intact, well protected by native code and invisible for prying eyes. The original .NET code is not copied on harddisk at any time. There is no tool which is able to decompile .NET Reactor protected assemblies."
The fact that you actually can reverse engineer it does not make obfuscation useless. It does raise the bar significantly.
An unobfuscated .NET assembly will show you all the source, highlighted and all, just by opening it in .NET Reflector. Add obfuscation to that and you'll very significantly reduce the number of people who'll be able to modify the code.
It depends on who you are protecting yourself from. If you'll ship it unobfuscated, you might as well open-source the application and benefit from the marketing. Shipping it obfuscated will only allow people to relatively easily generate modified binaries through patches, instead of being able to steal your code and create a direct competitor. Getting the actual source from obfuscated code is very hard, depending on the obfuscator, of course.
I think that it depends on the type of your product. If it is intended to be used by developers, obfuscation will hurt your customers. We've been using the ArcGIS products at work, and all the DLLs are obfuscated. It makes our job a lot harder, since we can't use Reflector to decipher weird behaviors. And we're paying customers who spent thousands of dollars on the product.
So please, don't obfuscate unless you really have to.
Things you should take into account:
Obfuscation does not protect your code or logic. It just makes it harder to read and understand.
Obfuscation does not stop anyone from reverse engineering. It just slows the process down.
Your intellectual property is protected by law in most countries. So if a competitor uses your code or specific implementation, you can sue them.
The one and only problem obfuscation can address is someone creating a 1:1 (or close to 1:1) copy of your specific implementation.
Also, in an ideal world, reverse engineering an obfuscated application is economically unattractive.
But back to reality:
There exists no tool on this planet that stops someone from copying the user interfaces, behaviors, or results that any application provides or produces. Obfuscation is 100% useless in these situations.
The best obfuscator on the market cannot stop someone from using some kind of disassembler or hex editor, and for some geeks that is good enough to look into the heart of an application. It's just harder than with unobfuscated code.
So the reality is that you can make it harder and more time-consuming to look into your application, but you won't really get any reliable protection, regardless of whether you use a free or a commercial product.
Advanced technologies like control-flow obfuscation or code virtualization may sometimes make understanding the logic really hard, but they can also cause a lot of odd and hard-to-debug problems. So they are sometimes more of an additional problem than a solution.
From my point of view, obfuscation is not worth the money some companies charge for their products. If you just want to deter casual developers, open-source obfuscators are good enough. If you want to make it as hard as possible to look into the heart of your applications, you need to use cryptographic containers with virtual execution environments and virtual filesystems, but these also provide attack vectors and may themselves be the source of a bag full of problems.
Your intellectual property and your products are protected by law in most countries. So if a competitor analyzes and copies your code, you can sue them. If a bad guy, hacker, or cracker takes your application, you are out of luck either way - an obfuscator does not make a difference.
So you should first think about your targets, your market, and what you want to achieve with an obfuscator. As you can read here (and at other places), obfuscation does not really solve the problem of reverse engineering. It only makes it harder and more time-consuming. But if that is what you want, you may have a look at open-source obfuscators such as SharpObfuscator or Obfuscar, which may be good enough to deter casual coders (a list can be found here: List of .NET Obfuscators on Wikipedia).
If it is possible in your scenario, you might also be interested in SaaS concepts. This means that you provide access to your software but not the software itself, so the customer normally has no access to your assemblies. But depending on the service level, security, and user base, it can be expensive, complex, and difficult to build a reliable, trustworthy, and performant SaaS service.
No; it has been shown that obfuscation does not prevent someone from deciphering the compiled code. It makes it more difficult to do so, but not impossible.
I am very comfortable reading x86 assembly code; what about people who have been working with assembly for more than 20 years?
You will always find someone who only needs a minute to see what your C# or C code is doing...
Just a note to anyone else reading this years later - I just skimmed through the Dotfuscator Community Edition (that comes with VS2008) license a few hours ago, and I believe that you cannot use this version to distribute a commercial product, or to obfuscate code from a project that involves any developers other than yourself. So for commercial app developers, it's really just a trial version.
"...snip... these messages can become much more difficult to interpret"
Yes, but the free Community Edition that comes with Visual Studio has map functionality.
With that you can back-track the obfuscated method names to the original names.
I've had success putting the output from one free obfuscator into a different obfuscator. In Dotfuscator CE, only some of the obfuscation tricks are included, so using a second obfuscator that has different tricks makes it more obfuscated.
It's quite simple to reverse engineer a .NET app using .NET Reflector, since it will generate VB, VC, and C# code straight from the MSIL, and it's possible to pull out all kinds of useful gems.
Code obfuscators hide code quite well from most reverse engineering hacks, and would be a good idea to use on proprietary and competitive code that adds value to your app.
There's a pretty good article on obfuscation and its workings here.
This post and the surrounding question have some discussion which might be of value. It isn't a yes-or-no issue.
Yes, you definitely should. Not to protect it from a determined person, but to make some profit and keep customers. By the way, if you reach the point where someone tries to crack your software, that means you sell popular software.
The problem is what tool to choose for the job. Check out my experience with commercial obfuscators: https://stackoverflow.com/questions/337134/what-is-the-best-net-obfuscator-on-the-market/2356575#2356575
Yes, we do. We use BitHelmet obfuscator. It's new, but it works really well.
But is it really worth the effort to select, buy and use such a tool?
I found Eazfuscator cheap (free), and easy to use: took about a day.
I already had extensive automated tests (good coverage), so I reckon I could find any bugs that are/were introduced by obfuscation.
