We are trying to get ReSharper introduced to our company but it would have to be for all developers. Management want us to justify the cost with a business case.
I am unsure how to go about getting proof that ReSharper will benefit the business. What kind of statistics can you get from it?
If they asked for a business case, they're not asking for proof, just some kind of fact-based estimate of the likely return on their investment.
So, for example:
A license costs (say) $250 per developer, a developer costs (say) $50,000 per year.
A developer with ReSharper costs 0.5% more than a developer without ReSharper.
That gives you a basic financial model - if you get more than a 0.5% productivity gain, then it's worth it; if you get less, it isn't. Some corporates apply a minimum return on investment (ROI) factor - so if the factor is 1.2, then you would have to show a 0.6% benefit to get approval. The factor is very unlikely to be more than 3.
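If it helps to make that concrete, here is a minimal sketch of the same break-even arithmetic in code (all figures are the illustrative assumptions above, not real data):

```csharp
// Hedged sketch of the break-even model above; all figures are illustrative.
using System;

class LicenseRoi
{
    static void Main()
    {
        double licenseCost = 250;      // per developer, assumed
        double developerCost = 50000;  // per developer per year, assumed
        double roiFactor = 1.2;        // assumed corporate minimum-ROI factor

        double costIncrease = licenseCost / developerCost;  // 0.5%
        double requiredGain = costIncrease * roiFactor;     // 0.6%

        Console.WriteLine($"Cost increase: {costIncrease:P2}");
        Console.WriteLine($"Required productivity gain: {requiredGain:P2}");
    }
}
```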
You could tweak that model - depreciate the license over 3 years, include the procurement costs, changing cost of capital, etc., but a simple, conservative model is likely to have the broadest appeal.
Then all you need is some evidence that you get more than a 0.5% productivity improvement. You could run a benchmark, or a pilot with a small number of developers. Pick some typical tasks and time them with and without ReSharper. There is a 30-day trial version available, so you could run a pilot before you have to purchase.
The PDF on the ReSharper home page claims a 35% productivity increase - you can take that with a pinch of salt, but unless that's exaggerated by a factor of 70, it's still a worthwhile investment. The number of recommendations on the web, and developers claiming to buy it with their own money, suggest that it isn't a wild exaggeration.
When you present the business case, you might like to illustrate that percentage as a dollar value too.
Developers only spend part of their day in their IDE, so you should probably adjust the expected returns downwards because of that. The real number is probably between 20% and 80%, but the lower end of the range might not be a politically acceptable number to present. You're interested in what proportion of the output is affected by the investment.
I don't have any connection with JetBrains - and I'm answering a question about how to make a business case, not selling licenses! The anecdotal evidence from where I work is that the developers who have used ReSharper have only good things to say about it. In some very specific cases it has saved weeks or months by automating mechanical tasks that have to be applied over a lot of files. The rest of the time it's hard to measure, but since the developers use it all of the time, they must be getting some real value out of it.
There's a quality argument too - you could measure this as a productivity increase, or a cost saving, or just an additional argument - depending on how quality issues are perceived at management level in your company.
ReSharper does not track itself in any way that would provide useful statistics. Also, I'm pretty sure no college/company/consultants have any sort of meaningful hard data. This is just too complex. I suppose you could measure the time savings from (a) code insertion, (b) refactoring quickly, and (c) getting it right the first time because ReSharper didn't make the mistakes a human would. Just these savings pay for ReSharper soon enough. For a $300 license, all ReSharper has to do is save you 5-6 hours per developer. That's hours.
But the real benefits of ReSharper are impossible to measure:
Since good structure is now as easy to make as bad structure, you do it right!
Your designs are better, because you spend your time thinking about design rather than coding cruft.
Whatever your level, you learn from ReSharper. The refactorings available are those demanded by top-level developers. By using them, you learn these good practices.
Mistakes are more forgiving. If you structure your code poorly, it's easy to fix. I find myself more daring and willing to try new things, because there is less risk. This has resulted in some great code.
I'm afraid your powers-that-be will need to trust you, or trust the testimonials on the web-site, or trust a consultant, or experience ReSharper for themselves. If your managers are not themselves quality developers, you're going to have an uphill battle. I wish you luck.
I bought ReSharper with my own money a few months ago, because I knew the best developers used it (or CodeRush). And best means they create more maintainable solutions in less time and for less money. It has surpassed my expectations. Getting code out there quicker and being able to refactor quicker is what I expected. All well and good. What I did not expect was how much more time it would give me to make the right development decisions and do the right things at the most efficient time. Before, there was just not enough time to do things right; now there is.
So it's impossible to tell management whether ReSharper pays for itself 20 times over, or 100, or 500, but I think 20 should be enough.
I know business managers love them some numbers, but the best business case is anecdotal:
It makes developers happy.
True, it does increase productivity, but that's hard to prove. Making developers happy should be enough, since happy developers are more productive. You might want to point out that static code analysis is built into it, nudging developers toward writing better code and gently training them to code cleanly.
No statistics, but here's a very good blog article arguing the case for ReSharper. Some coworkers and I used some of these justifications to get it bought for us.
Basically, it's a tool to reduce development time:
Visualize more problems immediately
Improve coding speed by showing warnings (from info up to error) and allowing developers to fix them with a simple Ctrl + Space
Enforce naming conventions (customizable)
Way better refactoring: not only does it lead to fewer bugs, it also allows more operations; refactoring sustains velocity (no refactoring leads to slower and slower development)
Way faster code navigation (that is, jumping straight to the desired file location):
CamelCase matching when finding a file/class/symbol, via Ctrl+[Shift]+T
Find where a symbol is used across all the source code
Developers can learn something: the auto-correction suggestions usually take into account refactoring tricks and the latest .NET features. It's not just an MS Word-style spell checker; it even tells you how you could say the same thing better.
Note: Technically, it can be installed on a single machine. If installed on the machine of the lead dev or project manager, (s)he can review code much faster. Refactoring and integration are some important tasks of a lead dev.
On the downside, I don't believe the advertised gain; it assumes a bad development process as the baseline and an idealistic improvement. What I can tell you is that it has made my life better as a developer.
The best business case for ReSharper has more to do with its ability, when coupled with the (free) StyleCop add-in, to allow a small team of developers to quickly create consistent, coherent, standards-based, maintainable code. Until it was introduced in our organization we had nothing but numerous stylistic approaches, not to mention the defects, bugs and other problems ReSharper helped us identify and correct. It is quite simply the best VS add-in I've ever encountered.
As an aside, you should pick up GhostDoc (a free VS add-in) as well. It makes documenting your code much easier. These two tools together are invaluable.
An issue you may run into is that Management may not so much be looking for a justification for ReSharper, but justification for those things that ReSharper does: refactoring, code cleanup, increased ability to navigate code, unit test support.
If you've got Management that needs to justify something like ReSharper, then they may not yet have "justified" modern software development practices, either.
While I cannot, even from my own organization, provide direct metrics, the tool provides a wealth of assistance and hints for developers.
It will also, when properly used, help an organization produce more consistent code that follows the organization's coding standards.
It will also highlight new features in newer .NET Framework versions and gently show developers how they can be applied to their code.
The tool is fantastic in getting rid of some code smells.
Aside from that aspect, once developers become more proficient in its use, it has a great number of navigational features that allow them to quickly zip through code.
See the ReSharper Benefits For You and Your Business document for a small ROI analysis. Unfortunately it is not backed by any hard data and boils down to the assumption that developer productivity increases by 35 percent when using ReSharper, but it sums up all the arguments for using a productivity solution like ReSharper.
If management just want a set of numbers put in front of them, I have knocked up a basic app that should give an indication of the potential ROI that can be had from purchasing a tool like ReSharper. Even if you don't follow the 35% claim of productivity improvement, a 1% improvement still brings an ROI.
I'm working with a product development firm that has multiple simultaneous releases of the same product.
We have around 4 environments, each with its own copy of the SQL database and its own TFS branches.
Now the problem is that we spend a lot of time on merging code, resolving conflicts, and merging between the various branches to make sure we do not mess up deployment.
We are using a Redgate tool (new for us) for SQL database management, but we still feel we are not in good shape.
Can you please suggest the best architecture/solution or set of tools we could implement?
If you are concerned about the number of merging-related activities going on, then you need to reduce the number of merging activities. This is not going to be an easy thing to change, as the culture and expectations within your organisation are currently tuned to produce this result.
You need to move towards a single line, or single branching model. If you are using Git then you can still use many short lived activity branches for Hotfixes or Releases as laid down in GitFlow, but your source line where you add all new code (DEV, MASTER, TRUNK, Main, whatever) should be a single line. As soon as you have feature or version branches you are in the world of merging.
There are a number of engineering activities and practices that you can use to support much of what you are physically doing now in the new model:
Feature toggles - This is your primary engineering solution for merging (a minimal sketch follows this list). If you are working on a single code line and always have coders check in working code, then feature toggles allow you to ship features that are half done and that you don't want folks to see: you hide them. Now the first thing that you are going to throw out is "but we do database work and you can't do that there", and you would be wrong. Many organisations practice feature toggles and include database work. You need a solid and consistent practice of 'additive only' so that you don't break existing work, and you actually do work to make sure that both a new feature and an old one can coexist. There is work in that, but not as much as merging (in my experience), and it is not as error- or bug-prone. One key thing to remember is to think of them as feature toggles, not code toggles. If you add a new feature, hide it till it's ready. If you are incrementally improving an existing feature, then just ship the new functionality. Achieving this WILL be hard and will require the courage to implement major cultural changes at your organisation, from coders and testers all the way up to sales and management.
Definition of Done - Which leads to the question of how we maintain quality in this new world of feature toggles. Think about this: if you have 3 feature teams all working on different functionality and one team decides to reduce their quality, so that what they have is buggy but good enough, what would be the impact? You are protected from this in a branching model until the end, when you make all sorts of compromises to get everyone's mediocre (or just plain crap) code to work together. Now we have to have this on every check-in and every release. So what do you need? You need a shared and agreed Definition of Done that represents the quality bar that must be met to ship. Without it you will have chaos. The cultural issue here is that you need everyone, every coder and every tester, on board with the sacrosanct nature of the DoD. No, you can't compromise just this once, as it will have a knock-on effect.
Reduce cycle time - Which leads to our ship cycle. You need to 'ship' more regularly. Or, more specifically, you need to create potentially shippable increments of working software on a regular basis. This supports the above in a number of ways, but first and foremost it reduces the amount of work that is under way. This will help reduce the complexity and help teams focus. With what are in effect shorter batch sizes, we can get much more regular adherence to the Definition of Done and have those touch points of "working software with no further work required to ship it". The side advantage here is that you increase your business's ability to change, as they can change course at the end of each cycle, secure in the knowledge that unfinished features are not going to introduce complexity. You also gain the ability to inspect and adapt more frequently. Most companies, on gathering the evidence, find that more than 60% of their software is rarely if ever used. Let's use the reduced cycle time to get users in front of the software and only focus on building the 40% that they care about. (Whoa! Did we just get a 60% efficiency gain there?)
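As promised above, here is a minimal feature-toggle sketch. The toggle names and the in-memory configuration are hypothetical; a real implementation would typically read from app config or a toggle service:

```csharp
// Minimal feature-toggle sketch; toggle names and config source are hypothetical.
using System.Collections.Generic;

static class FeatureToggles
{
    // In practice this would be loaded from app config or a toggle service.
    static readonly Dictionary<string, bool> Toggles = new Dictionary<string, bool>
    {
        { "NewCheckoutFlow", false },  // half-done feature, hidden in production
        { "ImprovedSearch", true },    // finished feature, switched on
    };

    public static bool IsEnabled(string feature) =>
        Toggles.TryGetValue(feature, out var enabled) && enabled;
}

class Checkout
{
    public void Run()
    {
        if (FeatureToggles.IsEnabled("NewCheckoutFlow"))
            RunNewCheckout();  // shipped but hidden until ready
        else
            RunOldCheckout();  // existing behaviour stays intact ('additive only')
    }

    void RunNewCheckout() { /* new code path */ }
    void RunOldCheckout() { /* existing code path */ }
}
```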
There are a number of other supporting practices that it would make a lot of sense for you to adopt to get there and I would probably recommend that you read the Scrum Guide (http://www.scrumguides.org/) and think about how you might start moving towards the goals above.
I develop and maintain a large (500k+ LOC) WinForms app written in C# 2.0. It's multi-user and is currently deployed on about 15 machines. The development of the system is ongoing (can be thought of as a perpetual beta), and there's very little done to shield users from potential new bugs that might be introduced in a weekly build.
For this reason, among others, I've found myself becoming very reliant on edit-and-continue in the debugger. It helps not only with bug-hunting and bug-fixing, but in some cases with ongoing development as well. I find it extremely valuable to be able to execute newly-written code from within the context of a running application - there's no need to recompile and add a specific entry point to the new code (having to add dummy menu options, buttons, etc. to the app and remembering to remove them before the next production build) - everything can be tried and tested in real-time without stopping the process.
I hold edit-and-continue in such high regard that I actively write code to be fully-compatible with it. For example, I avoid:
Anonymous methods and inline delegates (unless completely impossible to rewrite)
Generic methods (except in stable, unchanging utility code)
Targeting projects at 'Any CPU' (i.e. never executing in 64-bit)
Initializing fields at the point of declaration (initialisation is moved to the constructor)
Writing enumerator blocks that use yield (except in utility code)
Now, I'm fully aware that the new language features in C# 3 and 4 are largely incompatible with edit-and-continue (lambda expressions, LINQ, etc). This is one of the reasons why I've resisted moving the project up to a newer version of the Framework.
My question is whether it is good practice to avoid using these more advanced constructs in favor of code that is very, very easy to debug? Is there legitimacy in this sort of development, or is it wasteful? Also, importantly, do any of these constructs (lambda expressions, anonymous methods, etc) incur performance/memory overheads that well-written, edit-and-continue-compatible code could avoid? ...or do the inner workings of the C# compiler make such advanced constructs run faster than manually-written, 'expanded' code?
Without wanting to sound trite - it is good practice to write unit/integration tests rather than rely on Edit-Continue.
That way, you expend the effort once, and every other time is 'free'...
Now I'm not suggesting you retrospectively write unit tests for all your code; rather, each time you have to fix a bug, start by writing a test (or more commonly multiple tests) that proves the fix.
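For illustration, a bug-fix-first test in that style might look something like this (NUnit syntax; the InvoiceCalculator class and the bug are hypothetical):

```csharp
// Hypothetical regression test written before fixing a bug (NUnit).
using NUnit.Framework;

[TestFixture]
public class InvoiceCalculatorTests
{
    // Reproduces the reported bug first: it fails until the fix is in,
    // and afterwards guards against the bug ever coming back.
    [Test]
    public void Total_AppliesDiscount_ForOrdersOverThreshold()
    {
        var calculator = new InvoiceCalculator(discountThreshold: 100m, discountRate: 0.1m);

        decimal total = calculator.Total(subtotal: 150m);

        Assert.AreEqual(135m, total); // 10% discount expected; the bug returned 150m
    }
}
```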
As @Dave Swersky mentions in the comments, Michael Feathers' book, Working Effectively with Legacy Code, is a good resource. (It's legacy 5 minutes after you wrote it, right?)
So yes, I think it's a mistake to avoid new C# constructs in favor of allowing for edit-and-continue; BUT I also think it's a mistake to embrace new constructs just for the sake of it, especially if they lead to harder-to-understand code.
I love 'Edit and Continue'. I find it is a huge enabler for interactive development/debugging and I too find it quite annoying when it doesn't work.
If 'Edit and Continue' aids your development methodology then by all means make choices to facilitate it, keeping in mind the value of what you are giving up.
One of my pet peeves is that editing anything in a function with lambda expressions breaks 'Edit and Continue'. If I trip over it enough I may write out the lambda expression. I'm on the fence with lambda expressions. I can do some things quicker with them but they don't save me time if I end up writing them out later.
In my case, I avoid using lambda expressions when I don't really need to. If they get in the way I may wrap them in a function so that I can 'Edit and Continue' the code that uses them. If they are gratuitous I may write them out.
Your approach doesn't need to be black and white.
Wanted to clarify these things a bit
it is good practice to avoid using these more advanced constructs in favor of code that is very, very easy to debug?
Edit and Continue is not really debugging; it is developing. I make this distinction because the new C# features are very debuggable. Each version of the language adds debugging support for new language features to make them as easy as possible to debug.
everything can be tried and tested in real-time without stopping the process.
This statement is misleading. It's possible with Edit and Continue to verify that a change fixes a very specific issue. It's much harder to verify that the change is correct and doesn't break a host of other things, mainly because Edit and Continue doesn't modify the binaries on disk and hence doesn't allow for things such as unit testing.
Overall, though, yes, I think it's a mistake to avoid new C# constructs in favor of allowing for Edit and Continue. Edit and Continue is a great feature (I really loved it when I first encountered it in my C++ days). But its value as a production server helper doesn't make up for the productivity gains from the new C# features, IMHO.
My question is whether it is good practice to avoid using these more advanced constructs in favor of code that is very, very easy to debug
I would argue that any time you are forcing yourself to write code that is:
Less expressive
Longer
Repeated (from avoiding generic methods)
Non-portable (never debugging and testing 64-bit??!?!?)
You are adding far more to your overall maintenance cost than you save through the "Edit and Continue" functionality in the debugger.
I would write the best code possible, not code that makes a feature of your IDE work.
While there is nothing inherently wrong with your approach, it does limit you to the amount of expressiveness understood by the IDE. Your code becomes a reflection of its capabilities, not the language's, and thus your overall value in the development world decreases because you are holding yourself back from learning other productivity-enhancing techniques. Avoiding LINQ in favor of Edit-and-Continue feels like an enormous opportunity cost to me personally, but the paradox is that you have to gain some experience with it before you can feel that way.
Also, as has been mentioned in other answers, unit-testing your code removes the need to run the entire application all the time, and thus solves your dilemma in a different way. If you can't right-click in your IDE and test just the 3 lines of code you care about, you're doing too much work during development already.
You should really introduce continuous integration, which can help you find and eliminate bugs before deploying the software. Especially big projects (I consider 500k LOC quite big) need some sort of validation.
http://www.codinghorror.com/blog/2006/02/revisiting-edit-and-continue.html
Regarding the specific question: don't avoid these constructs and don't rely on your mad debugging skills - try to avoid bugs altogether (in deployed software). Write unit tests instead.
I've also worked on very large permanent-beta projects.
I've used anonymous methods and inline delegates to keep some relatively simple bits of use-once logic close to their sole place of use.
I've used generic methods and classes for reuse and reliability.
I've initialised classes in constructors to as full an extent as possible, to maintain class invariants and eliminate the possibility of bugs caused by objects in invalid states.
I've used enumerator blocks to reduce the amount of code needed to create an enumerator class to a few lines.
All of these are useful in maintaining a large rapidly changing project in a reliable state.
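As a small illustration of the constructor-initialization point, a sketch along these lines (the class and its invariant are hypothetical):

```csharp
// Sketch: all initialization happens in the constructor, so an instance
// can never exist in an invalid state. (Class and invariant are hypothetical.)
using System;

public class Account
{
    private readonly string _owner;
    private decimal _balance;

    public Account(string owner, decimal openingBalance)
    {
        if (string.IsNullOrEmpty(owner))
            throw new ArgumentException("Owner is required.", nameof(owner));
        if (openingBalance < 0)
            throw new ArgumentOutOfRangeException(nameof(openingBalance));

        _owner = owner;
        _balance = openingBalance; // invariant: balance is never negative from day one
    }
}
```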
If I can't edit-and-continue, I edit and start again. This costs me a few seconds most of the time, a couple of minutes in nasty cases. It's worth it for the hours that a greater ability to reason about code and greater reliability through reuse save me.
There's a lot I'll do to make it easier to find bugs, but not if it'll make it easier to have bugs too.
You could try Test Driven Development. I found it very useful to avoid using the debugger at all. You start from a new test (e.g. unit test), and then you only run this unit test to check your development - you don't need the whole application running all the time. And this means you don't need edit-and-continue!
I know that TDD is the current buzz-word, but it really works for me. If I need to use the debugger I take it as a personal failure :)
Relying on Edit and Continue sounds as if very little time is spent on designing new features, let alone unit tests. This I find bad, because you probably end up doing a lot of debugging and bug fixing, and sometimes your bug fixes cause more bugs, right?
However, it's very hard to judge whether you should or should not use language features, because this also depends on many, many other factors: project requirements, release deadlines, team skills, and the cost of code manageability after refactoring, to name a few.
Hope this helps!
The issue you seem to be having is:
It takes too long to rebuild your app, start it up again, and get to the bit of UI you are working on.
As everyone has said, unit tests will help reduce the number of times you have to run your app to find/fix bugs in non-UI code; however, they don't help with issues like the layout of the UI.
In the past I have written a test app that will quickly load the UI I am working on and fill it with dummy data, so as to reduce the cycle time.
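Such a harness can be tiny. A sketch of the idea (the form, the Customer type, and the dummy data are all hypothetical):

```csharp
// Minimal WinForms harness sketch: loads just the form under work
// with dummy data, avoiding a full application start-up.
// (CustomerListForm, Customer, and the dummy data are hypothetical.)
using System;
using System.Windows.Forms;

static class UiHarness
{
    [STAThread]
    static void Main()
    {
        Application.EnableVisualStyles();

        var form = new CustomerListForm();
        form.LoadCustomers(new[]
        {
            new Customer { Name = "Alice Example", Balance = 42m },
            new Customer { Name = "Bob Example",   Balance = 0m  },
        });

        Application.Run(form); // straight to the UI being worked on
    }
}
```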
Separating out non-UI code into other classes that can be tested with unit tests will allow you to use all C# constructs in those classes. Then you can just limit the constructs in use in the UI code itself.
When I started writing lots of unit tests, my usage of "edit-and-continue" went down; I now hardly use it apart from UI code.
So we have varying opinions on how to deal with our source in regard to parallel Scrum teams' sprints. The background info is that we have two 7-man teams working on the same baseline of code in 2-week iterations towards the same product release.
Team A: In one camp the preferred approach is: each team works in its own branch, and at the end of each sprint, if the changes are acceptable, they are merged to the trunk. The reason is that team A doesn't want to risk having bad code introduced by team B.
Team B: In the other camp the preferred approach is: both teams work in the trunk, check in small working changesets early and often, and rely on the continuous integration build to prevent and identify broken builds early. Team B does not want to perform large merges every few weeks.
Thoughts? Is either approach more valid than the other? Is there a 3rd approach neither team has recommended?
Edit: Just to clarify, both teams want CI, but Team A would have multiple nightly builds, one for each branch. Team B is in favor of a single build on the single branch.
Thanks~
In any normal case with people involved, the merging will soon be postponed "until we've stabilized this one thing". This one thing rarely gets stabilized very soon, so the branches start diverging. It's not bad in the beginning, but soon you'll notice that you have merging and stabilizing sprints going on. This is what has happened in every project I've seen that used branches like this.
Of the two options above, I would suggest biting the bullet right away, and using option B. It will generate some overhead, but at least people end up merging their changes immediately when they still remember what and why they are doing things. In a couple of weeks, it'll be much harder.
Solution C could be to extract a third project with the common code for both teams, leaving the actively developed parts for each team to modify at will. The shared code base should hopefully be more stable. Don't know if such a solution would work in your case.
Team B. You're introducing overhead -- the large merges -- for the purpose of avoiding a hypothetical situation ("bad code") that won't occur often, and if it does, will cause less work to correct than all of the merges will accumulate. Also, these merges themselves may introduce errors.
In short, you're adding more work to deal with an unproven problem which in the end might cause yet more work.
On the other hand, if team B (or A for that matter) is checking in bad code, the place to catch it is in the CI, and if they are introducing bad code, that's a management problem, not a procedural problem.
Why not let Team B work on the trunk and Team A on a branch? Everyone gets what they want! (just kidding: this is really no different from approach A)
Seriously, though, you are trying to choose between "work in silos" and "have only one huge team" (or more diplomatically: "stable workspaces" and "proactive collaboration"). I would suggest a hybrid approach. But I also agree with Kai Inkinen's idea to refactor the codebase into shared vs team-specific code. Your teams are large enough that the code they generate will be significant: more components makes sense and might avoid this issue altogether. Assuming you don't have that option:
Some disadvantages of A:
(think: what's the trunk going to look like at the end of week 2? How confident will you feel about that code?)
encourages "big crunch" merges near the end of a sprint (in fact you are planning on it!). Expect conflicts, duplicated effort, and sub-optimal code structure after the merge.
prevents a team from taking advantage of good changes made by the other team
at the end of the sprint: who gets to merge to trunk first? That team might as well have been on the trunk all along!
Some disadvantages of B:
(think: Team B wants to share changes among themselves that aren't good enough for the trunk yet. You know this will happen, or else why are there two teams at all?)
discourages frequent checkins
encourages "working copy transfers"
whenever someone from Team B "breaks" the codebase, Team A will say "I told you so"
My suggested approach:
have both teams use separate branches, to encourage frequent checkins for each team. Checkins are good. (If a team (A) wants to keep their code protected, they will find ways to achieve the same ends even if you don't give them a branch, i.e., working copy transfers: "hey Kim, I know it's not perfect, and you haven't checked it in yet, but can you send me your fix for that file?")
have both teams "deliver" to trunk often: not just at the end of a sprint. Deliver each "story" as it is completed.
have both teams "sync" from the trunk often (ideally after every "delivery" from the other team). This, in conjunction with the action above, keeps merges as small as possible and should make the end of the sprint completely painless. Team A may resist this; they should be encouraged to "respond to change" in the code -- they'll have to merge eventually.
http://svnbook.red-bean.com/nightly/en/svn.branchmerge.basicmerging.html#svn.branchemerge.basicmerging.stayinsync
With this approach, teams will be rewarded (with comparatively easy merges) for checking in often and for delivering changes to the trunk as soon as possible. Both of these are good things.
I should also point out that with something like git/Mercurial, you have a bit more flexibility.
I'd go for continuous integration. There are very few reasons to do otherwise. Check this link by Martin Fowler for an excellent explanation of its advantages.
If it is possible to auto-format code before and after a source control commit, checkout, diff, etc. does a company really need a standard code style?
It feels like the standard coding-style debates that have been raging since programming began, like "put the bracket on the following line" or "properly indent your (", are no longer essential.
I realize that in languages where whitespace matters the diff will have to consider it, but for languages where style is a personal preference, is there really a need to worry about it anymore?
Auto-format can really only address whitespace.
It won't address developers giving variables bizarre, nonsensical names.
It won't address some developers having functions return null on an error vs throwing an exception.
I'm sure others can think of more examples.
This is what we do at my work:
We all use Eclipse. We don't have a policy requiring Eclipse, but somehow none of us is an IDEA/IntelliJ guy. We also think our code should be written with legacy in mind. This means our code has to be readable in a certain way even years later (#1), no matter who wrote it and whether that person is even in the company anymore.
Eclipse has a couple of handy features: automatic format on save and a specific Formatter tool. As you can see from the linked screenshot, it can be configured with XML. Thus there's a bunch of premade XML files available to every worker in our company, so that when a new guy comes in, we walk him through the whole process and configure their Eclipse for them (yes, it's a slightly evil thing to do) so that it actually uses the formatting XML files we have provided. We do not enforce automatic format on save; we don't want to be completely intrusive, we just want to push all our developers in the right direction. For even greater compatibility, we mostly use rules defined in JCC.
Next comes the important part, the actual builds. We embrace automatic builds, and for that we use the Hudson Continuous Integration Server. There are two important parts to our configuration beyond this:
We use CVS loginfo to trigger builds whenever something is committed.
We utilize several plugins available for Hudson, including Continuous Integration Game in conjunction with the most important one, Checkstyle.
The Checkstyle plugin is the magician in our code-style enforcement guideline:
After committing code to CVS, a Hudson build is triggered
After the build has completed successfully (all unit tests pass, etc.), Checkstyle inspects the actual source files
Checkstyle ranks the code based on rules we have defined for it
Continuous Integration Game sees the result of Checkstyle and awards/takes away points for the person who has ownership of the relevant part of the code
A leaderboard shows total points for every committer in the system
Basically this means that when anyone commits ugly code into our CVS, our build server automatically reduces that person's points.
This means that eventually any one of us can be ranked on the leaderboard based on general code quality, in both looks and OO principles such as the Law of Demeter, cyclomatic complexity, etc. Naturally this isn't a completely serious statistic, but it's a good indication that you're doing something wrong when a build you trigger in our CI reduces your points - most of our commits are worth between 1 and 5 points.
And is it working? Sort of. I don't think any one of us at my work writes ugly or unmaintainable code, and personally I love hunting all kinds of scores, so it definitely motivates me to write code that looks nice and follows all the OO paradigms I know of.
And do we as a company really need it? I think we do; as you should see from reading this entire answer, it can be considered a good practice for the advancements it brings.
#1: on a related note, I refactored legacy code from 2002 today which used those standards; it didn't look "bad" at all even in its original form, and certainly not worse in its new form.
No, not really.
If you can actually get it to work consistently and not have it flag code as changed merely due to a different style of laying out the code.
However, this is just a small part of coding standards. It won't cover multiple return statements, the use or not of ternary operators, etc.
It is always nice if the coding style that the shop uses is the same one that is also followed by the development tools.
Otherwise, if there is a large body of code that already follows a shop standard which is NOT the same as that of the tools you have two choices:
Modify all of the code to follow the tool standard, or
Maintain the existing shop standard.
Many shops do the latter. Regardless, there does need to be some kind of standard, and it does need to be followed.
Some development tools allow you to tweak their standard. In some cases you may be able to bring the tools in alignment with the shop standard.
It probably doesn't matter that much anymore if you can ensure that everybody in the team sees the source code "correctly" formatted, whatever they think it is. However I've not seen a system that can do that - you can do parts of it (say, reformat before and after checkin/checkout) but these days you also have to consider web interfaces into the version control, external code review systems that interact directly with the version control system etc.
The main purpose of a standard code style is (IMHO) to ensure that you can read other team members' code easily, without having to start reverse engineering it, because all the code is written using the same sort of guiding principles. Indenting and parenthesis placement seem to be a major hang-up here, but they are only a very small and, in my opinion, somewhat overblown part of the need to make code consistent.
Unfortunately I'm not aware of any tools that can automatically apply consistent coding principles to source code...
Yes, coding styles are needed if there is a desire to have a homogeneous code base. Such a code base can be useful in preventing individual ownership of parts of the code base, which can cause problems when people leave the team. If you can't imagine having wildly different styles and problems understanding all of it, just look at all the different ways English text can be organized in various communications, all written but quite different such as tweets, e-mail, text messages, IM, message board posts, etc. and changes in fonts, capitalization, decorations, etc.
Since yesterday, I have been analyzing one of our projects with NDepend (free for most of its features), and the more I use it, the more I doubt the real value of this type of software (code-analysis software).
Let me explain. The system builds a report about the health of the system and ranks classes by every metric. I thought it would be a good starting point for modifications, but most of the top results are there just because the classes have over 100 lines (we have big headers and we use VS comment styles), so it's not a big deal... Then the Afferent Coupling (CA) level is always too high, and this is almost always true for interfaces, which we use a lot... so at the moment I don't see anything wrong, but NDepend seems not to like it (if you have suggestions to improve that, tell me, because I don't see the need). It's the same thing for the metric called NOC (Number of Children): most of my interfaces rank too high...
For the moment, the only really useful metric is cyclomatic complexity...
My question is: do you find it worth it to analyze code with an automatic code analyzer like NDepend? If yes, how do you filter out all the information I have mentioned that doesn't really show the real health of the system?
Actually, metrics are just one feature of NDepend. Did you try VisualNDepend, which lets you analyze your project much more in depth than the report? Reading your comment, I am almost sure you didn't play with the NDepend UI (standalone or integrated in Visual Studio), which is the best way to filter data about your code base.
I am one of the developers of NDepend and we use it a lot to analyze our own code. Basically we write our own quality rules with Code Rules over LINQ Queries (CQLinq). These rules automatically make sure that we don't have regression on our design. Here you'll find the list of around 200 default code rules.
Here are some unique features of NDepend and not related to code metrics:
Write CQLinq rules to make sure we don't have architectural flaws, such as dependency cycles between our components, the UI using the DB directly, or the DB entangled with the business objects.
Make sure we don't have problem with code coverage by tests (like we make sure with a CQLinq rule that if a class is supposed to be 100% covered, it will remain 100% covered in the future)
Enforce side-effects-free code (immutable class/pure methods)
Use the ability to compare two analyses to review code changes since the last release, before doing a new release. More specifically, I enjoy using NDepend to know which methods have been added or refactored since the last release and are not 100% covered by tests.
Achieve optimal encapsulation for all our members and types (like knowing which internal methods can be declared private). This is also related to dead-code detection, which NDepend also supports.
For a complete list of features of NDepend, see here.
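To give a flavour of the CQLinq rules mentioned above, a rule can be as short as this sketch (the complexity threshold is an arbitrary assumption):

```
// CQLinq sketch: flag overly complex methods (threshold of 20 is arbitrary).
warnif count > 0
from m in JustMyCode.Methods
where m.CyclomaticComplexity > 20
orderby m.CyclomaticComplexity descending
select new { m, m.CyclomaticComplexity }
```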
I don't necessarily see NDepend results as "good" or "bad" in software engineering; there's always a reason why an application is designed the way it is. I see it as a report that can help me point out issues with my design, but I have the final word when it comes to deciding whether a method needs to be refactored or whether it's good the way I designed it. In general, don't get too caught up in trying to answer whether it's worth it or not. It definitely is; instead, I would suggest you carefully review the results. This will help you view your design from another perspective, and there may be occasions where you decide the way you designed it is the best way to achieve your application's goals.