I'm working with a product development firm that has multiple simultaneous releases of the same product.
We have around 4 environments, each with its own copy of the SQL database and its own TFS branch.
The problem is that we spend a lot of time merging code, resolving conflicts, and merging between the various branches to make sure we do not mess up deployment.
We are using a Redgate tool (new to us) for managing the SQL database side, but we still feel we are not in good shape.
Can you please suggest the best architecture/solution or set of tools that we could implement?
If you are concerned about the number of merging-related activities going on, then you need to reduce the number of merging activities. This is not going to be an easy thing to change, as the culture and expectations within your organisation are currently tuned to produce this result.
You need to move towards a single-line, or single-branching, model. If you are using Git then you can still use many short-lived activity branches for hotfixes or releases, as laid down in GitFlow, but the source line where you add all new code (DEV, MASTER, TRUNK, Main, whatever) should be a single line. As soon as you have feature or version branches you are in the world of merging.
There are a number of engineering activities and practices that you can use to support much of what you are physically doing now in the new model:
Feature toggles - This is your primary engineering solution for merging. If you are working on a single code line, and coders always check in working code, then feature toggles allow you to ship features that are half done without letting folks see them: you hide them. Now the first thing you are going to throw out is "but we do database work and you can't do that there", and you would be wrong. Many organisations practice feature toggles and include database work. You need a solid and consistent practice of 'additive only' changes so that you don't break existing work, and you do have to put in effort to make sure that a new feature and an old one can coexist. There is work in that, but not as much as merging (in my experience), and it is not as error- or bug-prone. One key is to think of them as feature toggles, not code toggles: if you add a new feature, hide it till it's ready; if you are incrementally improving an existing feature, just ship the new functionality. Achieving this WILL be hard and will require the courage to implement major cultural changes at your organisation, from coders and testers all the way up to sales and management. A minimal sketch of the mechanics follows.
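A minimal sketch of the idea in C# (the configuration key, class and method names are illustrative assumptions, not a prescribed implementation):

    using System;
    using System.Configuration;

    // Feature-toggle sketch: half-finished features ship dark and are enabled
    // per environment via configuration, instead of living on a version branch.
    public static class Features
    {
        public static bool IsEnabled(string feature)
        {
            // Assumed storage: an appSettings entry such as
            // <add key="Feature.NewCheckout" value="true" />
            string value = ConfigurationManager.AppSettings["Feature." + feature];
            return string.Equals(value, "true", StringComparison.OrdinalIgnoreCase);
        }
    }

    public class CheckoutController
    {
        public void Checkout()
        {
            if (Features.IsEnabled("NewCheckout"))
                NewCheckoutFlow();    // in-progress feature, hidden until it's done
            else
                LegacyCheckoutFlow(); // existing behaviour keeps working unchanged
        }

        private void NewCheckoutFlow() { /* ... */ }
        private void LegacyCheckoutFlow() { /* ... */ }
    }

The same 'additive only' discipline applies to the database: the new feature's columns or tables are added alongside the old ones, so both code paths can run against the same schema.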
Definition of Done - Which leads to the question: how do we maintain quality in this new world of feature toggles? Think about it: if you have 3 feature teams all working on different functionality and one team decides to reduce their quality, so that what they have is buggy but "good enough", what would be the impact? In a branching model you are protected from this until the end, when you make all sorts of compromises to get everyone's mediocre (or just plain crap) code working together. Now we need that protection on every check-in and every release. So what do you need? You need a shared and agreed Definition of Done that represents the quality bar that must be met to ship. Without it you will have chaos. The cultural issue here is that you need everyone, every coder and every tester, on board with the sacrosanct nature of the DoD. No, you can't compromise "just this once", as it will have a knock-on effect.
Reduce cycle time - Which leads to our ship cycle. You need to 'ship' more regularly, or more specifically, you need to create potentially shippable increments of working software on a regular basis. This supports the above in a number of ways, but first and foremost it reduces the amount of work that is under way, which reduces complexity and helps teams focus. With what are in effect shorter batch sizes, we get much more regular adherence to the Definition of Done, and regular touch points of "working software with no further work required to ship it". The side advantages are that you increase the business's ability to change, as they can change direction at the end of each cycle secure in the knowledge that unfinished features are not going to introduce complexity, and you gain the ability to inspect and adapt more frequently. Most companies, on gathering the evidence, find that more than 60% of their software is used little if ever. Let's use the reduced cycle time to get users in front of the software and only focus on building the 40% that they care about. (Whoa! Did we just get a 60% efficiency gain there?)
There are a number of other supporting practices that it would make a lot of sense for you to adopt to get there and I would probably recommend that you read the Scrum Guide (http://www.scrumguides.org/) and think about how you might start moving towards the goals above.
It looks like we eventually moved our project to a bad branching strategy - "Cascading" (in the question "TFS -- Sustainability of Cascading Branches" it was actually recognised as viable). Now, after reading a few articles and seeing that it leads to an infinite branching tree, I realised that it's bad and want to fix it. But it's not as easy as I expected: TFS allows merges only with parent branches (sibling merges aren't possible).
Here is the diagram of our current branch hierarchy:
It looks weird, but it all started from small changes in the latest release branch while the trunk was suspended in the middle of 3 months of work. Then we made a few intermediate releases, each based on the previous one (up to 1.1.4). But when we started release 1.2.0, the story with the trunk repeated and we had to suspend 1.2.0 and implement the 1.1.5 hotfix.
Now I face the need to merge the 1.1.5 changes into the 1.2.0 branch, which isn't possible directly. I can only merge 1.1.5 into 1.1.4, which I wanted to avoid (who knows, maybe we'll need to implement a 1.1.6 based on 1.1.4 tomorrow). But it seems I have no other way out. Still, it will remain possible to create a 1.1.6 branch from a non-latest revision of 1.1.4.
Is this what I must do, or is there a better way out?
And now comes the big trouble and the main question. Eventually I'll need to merge all changes into the trunk, and the 1.1.5-to-1.2.0 problem is just a miniature copy of that. That's why I'm asking for the optimal and least risky way to perform this merge.
My current plan is the following:
Create branch "1.1.4 final" with the latest stable released version.
Optional: Create similar final branches for all previous numbered releases. Should I do it? Most likely I won't need those versions in the future anymore.
Merge 1.1.5 into 1.1.4
Merge 1.1.4 into 1.2.0. There weren't many changes in 1.1.5, so I shouldn't run into problems here.
Merge 1.1.4 into 1.1.3, 1.1.2, and down to the release branch. Should I expect conflicts or problems here? I expect and hope not.
Merge release into trunk. The scariest part =)
Stabilize code in trunk.
Now it is time to create a better branching strategy... I'm very unsure about this part at the moment.
Create new "stable" (Main) branch from trunk
Reparent trunk to become child of stable. This solution is suggested in related question "How to fix TFS incorrect branching"
Should I remove current "release" and "R***" branches, or leave them as garbage?
For the next upcoming releases, do not create separate branches - instead, label the final revisions in the stable branch. The "stable" branch will then consist only of final release check-ins.
I'm NOT going to create an "integration" branch for QA of stable features - currently we all live in a single active branch without problems.
Create "alternate" branch based on stable for paralel development in case we'll need to once more suspend current work to make some urgent fix. Should I keep single alternate branch forever, or delete it once merged and create a new one (like R125) when again needed?
The idea of this change is to have a fixed, limited number of branches (ideally 2, at most 3-4).
Please share your thoughts and concerns on my strategy, or propose a better one. I'm asking because I'm not sure it will all work as I expect, I don't know if it's the easiest way, and the cost of a mistake is huge.
Is this what I must do, or is there a better way out?
I'd carefully perform a baseless merge of all the changes from the branches under release into trunk. I'd do this one branch at a time, merging "All changes up to a specific version" and selecting "Latest Version". That will give you a trunk that contains all of the changes from your releases.
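For reference, the command-line equivalent is roughly this (server paths are illustrative; the Source Control Explorer merge wizard does the same thing):

    rem Baseless merge of all changes up to the latest version, one branch at a time.
    tf merge /baseless /recursive /version:T $/Project/release/R120 $/Project/trunk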
Should I expect conflicts or problems here?
You may get conflicts, but with a bit of care and some forensic investigation of the history, you can get the changes into your trunk.
The normal process when working on release branches (even those not directly related to trunk) is to check into the release branch, then to RI (reverse integration) merge the change back to trunk. Some people prefer to check into trunk first and then merge into the release branch to avoid the situation where trunk may get forgotten about. It's six of one, half a dozen of the other IMO.
Should I remove current "release" and "R***" branches, or leave them as garbage?
I don't think it matters; you could move them into a folder called "obsolete releases" if you want to hide them, or just delete them - TFS deletes are soft, so they are recoverable.
Please share your thoughts and concerns on my strategy, or propose a better one
I wouldn't create a stable branch. Once I had everything in trunk I would be happy; the purpose of a trunk/Main branch is to be the stable, releasable version of the code. If the developers cannot keep it that way (I'm not blaming them, BTW), then working in feature branches and regularly FI (forward integration) merging into the feature branch is the best way.
Where you go next really depends on the process your company has for releases.
One option is to "label" trunk when you have a release you would like.
Start -----L:R1-----C->
If you then need to put in a bug fix before release, you can branch from the label:
Start -----L:R1-----C->
              |         /
      B:R1    |---C----/
Check the change into the R1 branch (B:R1) and merge it back to trunk.
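In tf.exe terms, the label/branch/merge round trip might look something like this (paths and the label name are illustrative):

    rem Label the trunk at the release point.
    tf label R1 $/Project/trunk /recursive

    rem Later, branch from the label to take a bug fix.
    tf branch $/Project/trunk $/Project/R1 /version:LR1

    rem Check the fix into the R1 branch, then merge it back to trunk.
    tf merge /recursive $/Project/R1 $/Project/trunk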
This gives you a branch for releases when needed, but no very deep structure. You may end up with many branches, but you can use folders to keep them organised.
I hope this helps. In closing, make sure you read the ALM Rangers branching guide - it covers the main TFVC branching strategies you are likely to need and when you should use them.
And finally, my question to anyone who wants to make a branch is "How will you release this code?". This helps me make branches to solve problems, instead of creating problems. If I don't know how the code will be released, I don't know the best way to structure the branches - I was once involved in a project with a hierarchy of 23 branches off Main that all ended up coming together before being tested or released - we could have used one :).
Last thing, if you have a VSOnline account or another Team Project Collection, you could try re-creating a simpler version of your problem and experimenting with solutions.
Your merge strategy looks OK to me, but I will try to round it off with a three-branch diagram.
We are using three branches: Dev, Test, Release.
Most of the builds are from Dev and are labeled (the same as the trunk in your diagram).
Then we move it to QA and continue development.
If there is an issue/bug and the Dev branch is in future development, we set the Test branch to the label, fix the issue on the Test branch, merge it to Dev, and take a label again.
If we have an issue with production, we use the label on the Release branch, fix the issue, label it, and of course merge it to Dev.
This is how it is all done using three branches.
Of course you can always use feature branches for long and difficult features.
Finally, I performed that merge! Thanks @DaveShaw for the recommendations; I mostly used them. But I want to share my own trick, which I believe saved significant time.
Instead of performing a baseless merge from R120 directly into the trunk, I created an intermediate branch, dev, from the common root revision of R120 and trunk, and performed a baseless merge from R120 into dev. That generated more than 600 files with conflicts! But it was easy to resolve them: take everything from R120.
Then, since the branches dev and trunk have a common root, I could merge them with a regular (not baseless) merge. That performed much, much better than the baseless one and generated only 11 files with conflicts, which I could resolve in only 1 day - all of them were real conflicts needing manual resolution and code editing. So the trick saved me the time of distinguishing the 11 really conflicting files from the 600 that were not real conflicts and could be resolved automatically.
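Expressed as tf.exe commands, the sequence was roughly the following (server paths and the changeset number are made up for illustration):

    rem 1. Create dev from trunk at the changeset where R120 originally split off.
    tf branch $/Project/trunk $/Project/dev /version:C1234

    rem 2. Baseless merge R120 into dev; resolve the ~600 conflicts by taking the R120 version.
    tf merge /baseless /recursive $/Project/R120 $/Project/dev

    rem 3. dev and trunk are parent and child, so a regular merge handles the rest.
    tf merge /recursive $/Project/trunk $/Project/dev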
Next, I will stabilize both branches and switch their roles (it turned out that dev currently plays the role of main (stable) and the trunk is broken). So I decided to use the "Development isolation" branching strategy, which will soon evolve into "Development, Feature and Release isolation".
For most of my remaining questions, @DaveShaw provided good explanatory answers that I have nothing to add to.
I develop and maintain a large (500k+ LOC) WinForms app written in C# 2.0. It's multi-user and is currently deployed on about 15 machines. The development of the system is ongoing (can be thought of as a perpetual beta), and there's very little done to shield users from potential new bugs that might be introduced in a weekly build.
For this reason, among others, I've found myself becoming very reliant on edit-and-continue in the debugger. It helps not only with bug-hunting and bug-fixing, but in some cases with ongoing development as well. I find it extremely valuable to be able to execute newly written code from within the context of a running application - there's no need to recompile and add a specific entry point to the new code (having to add dummy menu options, buttons, etc. to the app and remembering to remove them before the next production build) - everything can be tried and tested in real time without stopping the process.
I hold edit-and-continue in such high regard that I actively write code to be fully compatible with it (a before/after sketch follows the list). For example, I avoid:
Anonymous methods and inline delegates (unless completely impossible to rewrite)
Generic methods (except in stable, unchanging utility code)
Targeting projects at 'Any CPU' (i.e. never executing in 64-bit)
Initializing fields at the point of declaration (initialisation is moved to the constructor)
Writing enumerator blocks that use yield (except in utility code)
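To illustrate the trade-off, here is the kind of rewrite the list above implies - a hypothetical C# 2.0 example with the anonymous-method version (which the Edit and Continue of that era refuses to edit) next to the named-method rewrite:

    using System.Collections.Generic;

    public class Orders
    {
        private readonly List<int> amounts = new List<int>();

        // The style the question avoids: in the VS2005-era debugger, code
        // containing an anonymous method cannot be modified during a debug
        // session, so Edit and Continue is blocked here.
        public List<int> LargeAmountsInline()
        {
            return amounts.FindAll(delegate(int a) { return a > 100; });
        }

        // The edit-and-continue-friendly rewrite: a named method instead of
        // an anonymous delegate, at the cost of a little more ceremony.
        public List<int> LargeAmountsNamed()
        {
            return amounts.FindAll(IsLarge);
        }

        private static bool IsLarge(int amount)
        {
            return amount > 100;
        }
    }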
Now, I'm fully aware that the new language features in C# 3 and 4 are largely incompatible with edit-and-continue (lambda expressions, LINQ, etc.). This is one of the reasons why I've resisted moving the project up to a newer version of the Framework.
My question is whether it is good practice to avoid using these more advanced constructs in favor of code that is very, very easy to debug? Is there legitimacy in this sort of development, or is it wasteful? Also, importantly, do any of these constructs (lambda expressions, anonymous methods, etc) incur performance/memory overheads that well-written, edit-and-continue-compatible code could avoid? ...or do the inner workings of the C# compiler make such advanced constructs run faster than manually-written, 'expanded' code?
Without wanting to sound trite - it is good practice to write unit/integration tests rather than rely on Edit-Continue.
That way, you expend the effort once, and every other time is 'free'...
Now, I'm not suggesting you retrospectively write unit tests for all your code; rather, each time you have to fix a bug, start by writing a test (or more commonly, multiple tests) that proves the fix.
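As a sketch of what that looks like (NUnit assumed; the class and the bug are invented purely for illustration):

    using NUnit.Framework;

    // Invented example class standing in for real production code.
    public class InvoiceCalculator
    {
        public decimal Total(int quantity, decimal unitPrice)
        {
            decimal total = quantity * unitPrice;
            if (quantity > 50)
                total *= 0.9m;   // the bulk discount the (hypothetical) bug dropped
            return total;
        }
    }

    [TestFixture]
    public class InvoiceCalculatorTests
    {
        // Regression test written before the fix: it fails on the broken build,
        // passes once the fix is in, and stops the bug quietly returning in a
        // later weekly build.
        [Test]
        public void Total_IncludesDiscount_ForBulkOrders()
        {
            InvoiceCalculator calc = new InvoiceCalculator();
            Assert.AreEqual(180.00m, calc.Total(100, 2.00m));
        }
    }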
As @Dave Swersky mentions in the comments, Michael Feathers' book Working Effectively with Legacy Code is a good resource (it's legacy 5 minutes after you wrote it, right?).
So yes, I think it's a mistake to avoid new C# constructs in favour of allowing for Edit and Continue; BUT I also think it's a mistake to embrace new constructs just for the sake of it, especially if they lead to harder-to-understand code.
I love 'Edit and Continue'. I find it is a huge enabler for interactive development/debugging and I too find it quite annoying when it doesn't work.
If 'Edit and Continue' aids your development methodology then by all means make choices to facilitate it, keeping in mind the value of what you are giving up.
One of my pet peeves is that editing anything in a function with lambda expressions breaks 'Edit and Continue'. If I trip over it enough I may write out the lambda expression. I'm on the fence with lambda expressions. I can do some things quicker with them but they don't save me time if I end up writing them out later.
In my case, I avoid using lambda expressions when I don't really need to. If they get in the way I may wrap them in a function so that I can 'Edit and Continue' the code that uses them. If they are gratuitous I may write them out.
Your approach doesn't need to be black and white.
Wanted to clarify these things a bit
it is good practice to avoid using these more advanced constructs in favor of code that is very, very easy to debug?
Edit and Continue is not really debugging; it is developing. I make this distinction because the new C# features are very debuggable. Each version of the language adds debugging support for new language features to make them as easy as possible to debug.
everything can be tried and tested in real time without stopping the process.
This statement is misleading. It's possible with Edit and Continue to verify that a change fixes a very specific issue. It's much harder to verify that the change is correct and doesn't cause a host of other issues, mainly because Edit and Continue doesn't modify the binaries on disk and hence doesn't allow for items such as unit testing.
Overall, though, yes, I think it's a mistake to avoid new C# constructs in favour of allowing for Edit and Continue. Edit and Continue is a great feature (I really loved it when I first encountered it in my C++ days), but its value as a production-server helper doesn't make up for the productivity gains from the new C# features, IMHO.
My question is whether it is good practice to avoid using these more advanced constructs in favor of code that is very, very easy to debug
I would argue that any time you are forcing yourself to write code that is:
Less expressive
Longer
Repeated (from avoiding generic methods)
Non-portable (never debug and test 64bit??!?!?)
You are adding to your overall maintenance cost far more than the loss of the "Edit and Continue" functionality in the debugger.
I would write the best code possible, not code that makes a feature of your IDE work.
While there is nothing inherently wrong with your approach, it does limit you to the amount of expressiveness understood by the IDE. Your code becomes a reflection of its capabilities, not the language's, and thus your overall value in the development world decreases because you are holding yourself back from learning other productivity-enhancing techniques. Avoiding LINQ in favor of Edit-and-Continue feels like an enormous opportunity cost to me personally, but the paradox is that you have to gain some experience with it before you can feel that way.
Also, as has been mentioned in other answers, unit-testing your code removes the need to run the entire application all the time, and thus solves your dilemma in a different way. If you can't right-click in your IDE and test just the 3 lines of code you care about, you're doing too much work during development already.
You should really introduce continuous integration, which can help you find and eliminate bugs before deploying software. Especially big projects (I consider 500k LOC quite big) need some sort of validation.
http://www.codinghorror.com/blog/2006/02/revisiting-edit-and-continue.html
Regarding the specific question: don't avoid these constructs, and don't rely on your mad debugging skills - try to avoid bugs in the first place (in deployed software). Write unit tests instead.
I've also worked on very large permanent-beta projects.
I've used anonymous methods and inline delegates to keep some relatively simple bits of use-once logic close to their sole place of use.
I've used generic methods and classes for reuse and reliability.
I've initialised classes in constructors to as full an extent as possible, to maintain class invariants and eliminate the possibility of bugs caused by objects in invalid states.
I've used enumerator blocks to reduce the amount of code needed to create an enumerator class to a few lines.
All of these are useful in maintaining a large rapidly changing project in a reliable state.
If I can't edit-and-continue, I edit and start again. This costs me a few seconds most of the time, a couple of minutes in nasty cases. It's worth it for the hours that a greater ability to reason about code and greater reliability through reuse save me.
There's a lot I'll do to make it easier to find bugs, but not if it'll make it easier to have bugs too.
You could try Test-Driven Development. I found it very useful for avoiding use of the debugger at all. You start from a new test (e.g. a unit test), and then you only run this unit test to check your development - you don't need the whole application running all the time. And this means you don't need edit-and-continue!
I know that TDD is the current buzz-word, but it really works for me. If I need to use the debugger I take it as a personal failure :)
Relying on Edit and Continue sounds as if very little time is spent on designing new features, let alone unit tests. I find this bad because you probably end up doing a lot of debugging and bug fixing, and sometimes your bug fixes cause more bugs, right?
However, it's very hard to judge whether you should or should not use language features, because this also depends on many, many other factors: project requirements, release deadlines, team skills, cost of code manageability after refactoring, to name a few.
Hope this helps!
The issue you seem to be having is:
It takes too long to rebuild your app, start it up again, and get to the bit of UI you are working on.
As everyone has said, unit tests will help reduce the number of times you have to run your app to find/fix bugs in non-UI code; however, they don't help with issues like the layout of the UI.
In the past I have written a test app that quickly loads the UI I am working on and fills it with dummy data, so as to reduce the cycle time.
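A minimal sketch of such a harness (all names and data here are illustrative, not from the original app) - a standalone entry point that shows just a grid pre-filled with dummy rows, in place of the full multi-user start-up:

    using System;
    using System.Data;
    using System.Windows.Forms;

    static class UiHarness
    {
        [STAThread]
        static void Main()
        {
            Application.EnableVisualStyles();

            // Dummy data stands in for the real database.
            DataTable dummy = new DataTable();
            dummy.Columns.Add("Name", typeof(string));
            dummy.Columns.Add("Orders", typeof(int));
            dummy.Rows.Add("Alice", 3);
            dummy.Rows.Add("Bob", 7);

            // In a real harness this would be the form under development.
            Form form = new Form();
            form.Text = "Layout harness";
            DataGridView grid = new DataGridView();
            grid.Dock = DockStyle.Fill;
            grid.DataSource = dummy;
            form.Controls.Add(grid);

            Application.Run(form);
        }
    }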
Separating non-UI code out into other classes that can be covered by unit tests will allow you to use all C# constructs in those classes. Then you can just limit the constructs in use in the UI code itself.
When I started writing lots of unit tests, my usage of edit-and-continue went down; I now hardly use it apart from on UI code.
So we have varying opinions on how to deal with our source with regard to parallel Scrum team sprints. The background info is that we have two 7-man teams working on the same baseline of code in 2-week iterations towards the same product release.
Team A: In one camp, the preferred approach is: each team works in its own branch and, at the end of each sprint, if the changes are acceptable, merges to the trunk. The reasoning is that team A doesn't want to risk having bad code introduced by team B.
Team B: In the other camp, the preferred approach is: both teams work in the trunk, check in small working changesets early and often, and rely on the continuous integration build to prevent and identify broken builds early. Team B does not want to perform large merges every few weeks.
Thoughts? Is either approach more valid than the other? Is there a 3rd approach neither team has recommended?
Edit: Just to clarify, both teams want CI, but Team A would have multiple nightly builds, one for each branch. Team B is in favour of a single build on the single branch.
Thanks~
In any normal case with people involved, the merging will soon be postponed "until we've stabilized this one thing". That one thing rarely gets stabilized very soon, so the branches start diverging. It's not bad in the beginning, but soon you'll notice that you have merging and stabilizing sprints going on. This is what has happened in every project I've seen use branches for something like this.
Of the two options above, I would suggest biting the bullet right away, and using option B. It will generate some overhead, but at least people end up merging their changes immediately when they still remember what and why they are doing things. In a couple of weeks, it'll be much harder.
Solution C could be to extract a third project containing the code common to both teams, leaving the actively developed parts for each team to modify at will. The shared code base should hopefully be more stable. I don't know if such a solution would work in your case.
Team B. You're introducing overhead - the large merges - to avoid a hypothetical situation ("bad code") that won't occur often, and that, if it does occur, will take less work to correct than all of the merges accumulate. Also, these merges themselves may introduce errors.
In short, you're adding more work to deal with an unproven problem which in the end might cause yet more work.
On the other hand, if team B (or A for that matter) is checking in bad code, the place to catch it is in the CI, and if they are introducing bad code, that's a management problem, not a procedural problem.
Why not let Team B work on the trunk and Team A on a branch? Everyone gets what they want! (just kidding: this is really no different from approach A)
Seriously, though, you are trying to choose between "work in silos" and "have only one huge team" (or more diplomatically: "stable workspaces" and "proactive collaboration"). I would suggest a hybrid approach. But I also agree with Kai Inkinen's idea to refactor the codebase into shared vs team-specific code. Your teams are large enough that the code they generate will be significant: more components makes sense and might avoid this issue altogether. Assuming you don't have that option:
Some disadvantages of A:
(think: what's the trunk going to look like at the end of week 2? How confident will you feel about that code?)
encourages "big crunch" merges near the end of a sprint (in fact you are planning on it!). Expect conflicts, duplicated effort, and sub-optimal code structure after the merge.
prevents a team from taking advantage of good changes made by the other team
at the end of the sprint: who gets to merge to trunk first? That team might as well have been on the trunk all along!
Some disadvantages of B:
(think: Team B wants to share changes among themselves that aren't good enough for the trunk yet. You know this will happen, or else why are there two teams at all?)
discourages frequent checkins
encourages "working copy transfers"
whenever someone from Team B "breaks" the codebase, Team A will say "I told you so"
My suggested approach:
have both teams use separate branches, to encourage frequent checkins for each team. Checkins are good. (If a team (A) wants to keep their code protected, they will find ways to achieve the same ends even if you don't give them a branch, i.e., working copy transfers: "hey Kim, I know it's not perfect, and you haven't checked it in yet, but can you send me your fix for that file?")
have both teams "deliver" to trunk often: not just at the end of a sprint. Deliver each "story" as it is completed.
have both teams "sync" from the trunk often (ideally after every "delivery" from the other team). This, in conjunction with the action above, keeps merges as small as possible and should make the end of the sprint completely painless. Team A may resist this; they should be encouraged to "respond to change" in the code -- they'll have to merge eventually.
http://svnbook.red-bean.com/nightly/en/svn.branchmerge.basicmerging.html#svn.branchemerge.basicmerging.stayinsync
With this approach, teams will be rewarded (with comparatively easy merges) for checking in often and for delivering changes to the trunk as soon as possible. Both of these are good things.
I should also point out that with something like git/Mercurial, you have a bit more flexibility.
I'd go for continuous integration. There are very few reasons to do otherwise. See Martin Fowler's article on Continuous Integration for an excellent explanation of its advantages.
We are trying to get ReSharper introduced to our company but it would have to be for all developers. Management want us to justify the cost with a business case.
I am unsure how to go about getting proof that ReSharper will benefit the business. What kind of statistics can you get from it?
I am unsure how to go about getting proof that Resharper will benefit the business.
If they asked for a business case, they're not asking for proof, just some kind of fact-based estimate of the likely return on their investment.
So, for example:
A license costs (say) $250 per developer, a developer costs (say) $50,000 per year.
A developer with Resharper costs 0.5% more than a developer without Resharper.
That gives you a basic financial model: if you get more than a 0.5% productivity gain, then it's worth it; if you get less, it isn't. Some corporates apply a minimum return on investment (ROI) factor - so if the factor is 1.2, you would have to show a 0.6% benefit to get approval. The factor is very unlikely to be more than 3.
You could tweak that model - depreciate the license over 3 years, include the procurement costs, changing cost of capital, etc., but a simple, conservative model is likely to have the broadest appeal.
Then all you need is some evidence that you get more than a 0.5% productivity improvement. You could run a benchmark, or a pilot with a small number of developers: pick some typical tasks and time them with and without Resharper. There is a 30-day trial version available, so you could run a pilot before you have to purchase.
The PDF on the Resharper home page claims a 35% productivity increase. You can take that with a pinch of salt, but unless it's exaggerated by a factor of 70, it's still a worthwhile investment. The number of recommendations on the web, and of developers claiming to buy it with their own money, suggests that it isn't a wild exaggeration.
When you present the business case, you might like to illustrate that percentage as a dollar value too.
Developers only spend part of their day in their IDE, so you should probably adjust the expected returns downwards to account for that. The real proportion is probably between 20% and 80%, but the lower end of the range might not be a politically acceptable number to present. You're interested in what proportion of the output is affected by the investment.
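To make that adjustment concrete, using the figures above and an assumed 50% of time spent in the IDE (purely for illustration):

    license / salary       = $250 / $50,000           = 0.5% of a developer-year
    required overall gain  = 0.5% x 1.2 (ROI factor)  = 0.6%
    required in-IDE gain   = 0.6% / 50% IDE time      = 1.2% while coding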
I don't have any connection with Jetbrains - and I'm answering a question about how to make a business case, not selling licenses! The anecdotal evidence from where I work is that the developers who have used Resharper have only good things to say about it. In some very specific cases it has saved weeks or months by automating mechanical tasks that have to be applied over a lot of files. The rest of the time it's hard to measure, but since the developers use it all of the time, they must be getting some real value out of it.
There's a quality argument too - you could measure this as a productivity increase, or a cost saving, or just an additional argument - depending on how quality issues are perceived at management level in your company.
ReSharper does not track itself in any way that would provide useful statistics. Also, I'm pretty sure no college/company/consultants have any sort of meaningful hard data. This is just too complex. I suppose you could measure the time savings from (a) code insertion, (b) refactoring quickly, and (c) getting it right the first time because ReSharper didn't make the mistakes a human would. Just these savings pay for ReSharper soon enough. For a $300 license, all ReSharper has to do is save 5-6 hours per developer. That's hours.
But the real benefits of ReSharper are impossible to measure:
Since good structure is now as easy to make as bad structure, you do it right!
Your designs are better, because you spend your time thinking about design rather than coding cruft.
Whatever your level, you learn from ReSharper. The refactorings available are those demanded by top-level developers. By using them you learn these good practices.
Mistakes are more forgivable: if you structure your code poorly, it's easy to fix. I find myself more daring and willing to try new things, because there is less risk. This has resulted in some great code.
I'm afraid your powers-that-be will need to trust you, or trust the testimonials on the web-site, or trust a consultant, or experience ReSharper for themselves. If your managers are not themselves quality developers, you're going to have an uphill battle. I wish you luck.
I bought ReSharper with my own money a few months ago, because I knew the best developers used it (or coderush). And best means they create more maintainable solutions for less time/money. It has surpassed my expectations. Getting code out there quicker and being able to refactor quicker is what I expected. All well and good. What I did not expect was how this would increase the time I had to make the right development decisions and do the right things at the most efficient time. Before there was just not enough time to do things right; now there is.
So it's impossible to tell management whether ReSharper pays for itself 20 times over, or 100, or 500, but I think 20 should be enough.
I know business managers loves them some numbers, but the best business case is anecdotal:
It makes developers happy.
True, it does increase productivity, but that's hard to prove. Making developers happy should be enough, since happy developers are more productive. You might want to point out that static code analysis is built into it, therefore nudging developers toward writing better code, gently training them to code cleanly.
No statistics, but here's a very good blog article arguing the case for Resharper. Some coworkers and I used some of these justifications to get it bought for us.
EDIT
Changed the link to point to the internet archive version
Basically, it's a tool to reduce development time:
Visualize more problems immediately
Improved coding speed by showing warnings (from info up to error) and allowing developers to fix them with a simple Ctrl+Space
Enforce naming conventions (customizable)
Way better refactoring: not only does it lead to fewer bugs, it also allows more operations; refactoring improves velocity (no refactoring leads to slower and slower development)
Way faster code navigation (meaning opening the desired file location):
camelCase find file/class/symbol, by Ctrl+[Shift]+T
Find where a piece of code is used anywhere in the source
Developers can learn something: the auto-correction suggestions usually take into account refactoring tricks and the latest .NET features. It's not just an MS Word-style spell corrector; it will even tell you how you could say the same thing better.
Note: Technically, it can be installed on a single machine. If installed on the machine of the lead dev or project manager, (s)he can review code much faster. Refactoring and integration are some important tasks of a lead dev.
On the downside, I don't believe the advertised gain; that figure assumes a bad development process and an idealistic improvement. What I can tell you is that it has made my life as a developer better.
The best business case for ReSharper has to do more with the ability for it, when coupled with StyleCop Add-In (free), to allow a small team of developers to quickly create consistent, coherent, standards-based, maintainable code. Until it was introduced in our organization we had nothing but numerous stylistic approaches, not to mention the defects, bugs and other problems ReSharper helped us identify and correct. It is quite simply the best VS Add-In I've ever encountered.
As an aside, you should pick up GhostDoc (free VS Add-In) as well. It makes documenting your code much easier as well. These two tools together are invaluable.
An issue you may run into is that Management may not so much be looking for a justification for ReSharper, but justification for those things that ReSharper does: refactoring, code cleanup, increased ability to navigate code, unit test support.
If you've got Management that needs to justify something like ReSharper, then they may not yet have "justified" modern software development practices, either.
While I cannot, even from my own organization, provide direct metrics, the tool provides a wealth of assistance and hints for developers.
It will also, when properly used, help an organization have more consistent code that follows the organization's coding standards.
It will also highlight new features in newer .NET Framework versions and gently show developers how they can be applied to their code.
The tool is fantastic in getting rid of some code smells.
Aside from that aspect, once developers become more proficient in its use, it has a great number of navigational features that allow them to quickly zip through code.
See the ReSharper Benefits For You and Your Business document for a small ROI analysis. Unfortunately it is not backed by any hard data and boils down to the assumption that developer productivity increases by 35 percent when using ReSharper, but it sums up all the arguments for using a productivity solution like ReSharper.
If management just wants a set of numbers put in front of them: I have knocked up a basic app that should give an indication of the potential ROI from purchasing a tool like ReSharper. Even if you don't accept the 35% claimed productivity improvement, a 1% improvement still brings an ROI.