So we have varying opinions on how to deal with our source in regard to parallel Scrum team sprints. The background: we have two 7-person teams working on the same baseline of code in 2-week iterations towards the same product release.
Team A: In one camp we have the preferred approach being: each team works in its own branch, and at the end of each sprint, if the changes are acceptable, they are merged to the trunk. The reason is that Team A doesn't want to risk having bad code introduced by Team B.
Team B: In the other camp we have the preferred approach being: Both teams work in the trunk, check in small working changesets early and often and rely on the continuous integration build to prevent and identify broken builds early. Team B does not want to perform large merges every few weeks.
Thoughts? Is either approach more valid than the other? Is there a 3rd approach neither team has recommended?
Edit: Just to clarify, both teams want CI, but Team A would have multiple nightly builds, one for each branch. Team B is in favor of a single build on the single branch.
Thanks~
In any normal case with people involved, the merging will soon be postponed "until we've stabilized this one thing". This one thing rarely gets stabilized very soon, so the branches start diverging. It's not bad in the beginning, but soon you'll notice that you have merging and stabilizing sprints going on. This is what has happened in every project where I've seen branches used like this.
Of the two options above, I would suggest biting the bullet right away and using option B. It will generate some overhead, but at least people merge their changes immediately, while they still remember what they are doing and why. In a couple of weeks, it'll be much harder.
Solution C could be to extract a third project containing the code common to both teams, leaving the actively developed parts for each team to modify at will. The shared code base should hopefully be more stable. I don't know if such a solution would work in your case.
Team B. You're introducing overhead -- the large merges -- to avoid a hypothetical situation ("bad code") that won't occur often and, if it does, will take less work to correct than all of those merges add up to. Also, the merges themselves may introduce errors.
In short, you're adding more work to deal with an unproven problem which in the end might cause yet more work.
On the other hand, if team B (or A for that matter) is checking in bad code, the place to catch it is in the CI, and if they are introducing bad code, that's a management problem, not a procedural problem.
Why not let Team B work on the trunk and Team A on a branch? Everyone gets what they want! (just kidding: this is really no different from approach A)
Seriously, though, you are trying to choose between "work in silos" and "have only one huge team" (or more diplomatically: "stable workspaces" and "proactive collaboration"). I would suggest a hybrid approach. But I also agree with Kai Inkinen's idea to refactor the codebase into shared vs team-specific code. Your teams are large enough that the code they generate will be significant: more components makes sense and might avoid this issue altogether. Assuming you don't have that option:
Some disadvantages of A:
(think: what's the trunk going to look like at the end of week 2? How confident will you feel about that code?)
encourages "big crunch" merges near the end of a sprint (in fact you are planning on it!). Expect conflicts, duplicated effort, and sub-optimal code structure after the merge.
prevents a team from taking advantage of good changes made by the other team
at the end of the sprint: who gets to merge to trunk first? That team might as well have been on the trunk all along!
Some disadvantages of B:
(think: Team B wants to share changes among themselves that aren't good enough for the trunk yet. You know this will happen, or else why are there two teams at all?)
discourages frequent checkins
encourages "working copy transfers"
whenever someone from Team B "breaks" the codebase, Team A will say "I told you so"
My suggested approach:
have both teams use separate branches, to encourage frequent checkins for each team. Checkins are good. (If a team (A) wants to keep their code protected, they will find ways to achieve the same ends even if you don't give them a branch, i.e., working copy transfers: "hey Kim, I know it's not perfect, and you haven't checked it in yet, but can you send me your fix for that file?")
have both teams "deliver" to trunk often: not just at the end of a sprint. Deliver each "story" as it is completed.
have both teams "sync" from the trunk often (ideally after every "delivery" from the other team). This, in conjunction with the action above, keeps merges as small as possible and should make the end of the sprint completely painless. Team A may resist this; they should be encouraged to "respond to change" in the code -- they'll have to merge eventually.
http://svnbook.red-bean.com/nightly/en/svn.branchmerge.basicmerging.html#svn.branchemerge.basicmerging.stayinsync
With this approach, teams will be rewarded (with comparatively easy merges) for checking in often and for delivering changes to the trunk as soon as possible. Both of these are good things.
I should also point out that with something like git/Mercurial, you have a bit more flexibility.
I'd go for Continuous Integration. There are very few reasons to do otherwise. See Martin Fowler's article on Continuous Integration for an excellent explanation of its advantages.
I'm working with a product development firm that has multiple simultaneous releases of the same product.
We have around 4 environments, each with its own copy of the SQL database and its own TFS branch.
The problem is that we spend a lot of time merging code, resolving conflicts, and merging between the various branches to make sure we do not mess up deployment.
We are using a Redgate tool (new to us) for the SQL database side, but still feel like we are not in good shape.
Can you please suggest the best architecture/solution or set of tools we could implement?
If you are concerned about the number of merging-related activities going on, then you need to reduce the number of merges. This is not going to be an easy thing to change, as the culture and expectations within your organisation are currently tuned to produce this result.
You need to move towards a single-line, or single-branch, model. If you are using Git you can still use short-lived activity branches for hotfixes or releases, as laid down in GitFlow, but the source line where you add all new code (DEV, MASTER, TRUNK, Main, whatever) should be a single line. As soon as you have feature or version branches, you are in the world of merging.
There are a number of engineering activities and practices that you can use to support much of what you are physically doing now in the new model:
Feature toggles - This is your primary engineering solution for merging. If you are working on a single code line, and coders always check in working code, then feature toggles allow you to ship features that are half done without anyone seeing them: you hide them (see the sketch after these three practices). Now the first thing you are going to throw back is "but we do database work and you can't do that there", and you would be wrong. Many organisations practice feature toggles and include database work. You need a solid and consistent practice of 'additive only' changes so that you don't break existing work, and you do have to put in effort to make sure that a new feature and an old one can coexist. There is work in that, but not as much as merging (in my experience), and it is not as error- or bug-prone. One key thing to remember is to think of them as feature toggles, not code toggles: if you add a new feature, hide it until it's ready; if you are incrementally improving an existing feature, just ship the new functionality. Achieving this WILL be hard and will require the courage to implement major cultural changes at your organisation, from coders and testers all the way up to sales and management.
Definition of Done - Which leads to the question of how we maintain quality in this new world of feature toggles. Think about this: if you have 3 feature teams all working on different functionality, and one team decides to lower its quality so that what it has is buggy but "good enough", what would the impact be? In a branching model you are protected from this until the end, when you make all sorts of compromises to get everyone's mediocre (or just plain crap) code working together. Now that quality has to be there on every check-in and every release. So what do you need? You need a shared and agreed Definition of Done that represents the quality bar that must be met to ship. Without it you will have chaos. The cultural issue here is that you need everyone, every coder and every tester, on board with the sacrosanct nature of the DoD. No, you can't compromise "just this once", as it will have a knock-on effect.
Reduce cycle time - Which leads to our ship cycle. You need to 'ship' more regularly, or more specifically, you need to create potentially shippable increments of working software on a regular basis. This supports the above in a number of ways, but first and foremost it reduces the amount of work that is under way, which reduces complexity and helps teams focus. With what are in effect shorter batch sizes, we get much more regular adherence to the Definition of Done and have those touch points of "working software with no further work required to ship it". The side advantage is that you increase the business's ability to change, as they can change direction at the end of each cycle, secure in the knowledge that unfinished features are not going to introduce complexity. You also gain the ability to inspect and adapt more frequently. Most companies, on gathering the evidence, find that more than 60% of their software is used little, if ever. Let's use the reduced cycle time to get users in front of the software and only focus on building the 40% that they care about. (Whoa! Did we just get a 60% efficiency gain there?)
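Coming back to the feature toggles above, here is a minimal C# sketch of the idea. The class and feature names are invented purely for illustration; in a real system the enabled set would usually come from configuration, a database, or a dedicated toggle service rather than a hard-coded collection:

```csharp
using System.Collections.Generic;

// Minimal feature-toggle sketch (illustrative names only).
public static class FeatureToggles
{
    private static readonly HashSet<string> Enabled = new HashSet<string>
    {
        // "NewInvoiceLayout"   // add the name (or load it from config) to turn the feature on
    };

    public static bool IsEnabled(string feature) => Enabled.Contains(feature);
}

public class InvoiceService
{
    public string RenderInvoice()
    {
        if (FeatureToggles.IsEnabled("NewInvoiceLayout"))
        {
            return RenderNewLayout();    // half-finished work ships dark behind the toggle
        }
        return RenderLegacyLayout();     // existing behaviour stays the default
    }

    private string RenderNewLayout() => "new invoice layout";
    private string RenderLegacyLayout() => "legacy invoice layout";
}
```

The point is that the new code path can be checked into the single line and shipped dark; turning it on is a configuration change, not a merge.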
There are a number of other supporting practices that it would make a lot of sense for you to adopt to get there, and I would recommend that you read the Scrum Guide (http://www.scrumguides.org/) and think about how you might start moving towards the goals above.
I am faced with the challenge of needing to maintain two versions of the same product for an unknown period of time. I think I have come up with a workable solution, but I wanted to bounce it off the community for a "sanity check". Currently, when a new feature is requested, a branch is cut from trunk, the development is completed in that branch, and that branch is merged back to the trunk. This has been the typical strategy at most places I have worked.
Recently a new branch was cut for a major feature overhaul which will result in changes not compatible with the trunk. All of our users will eventually be forced onto this new version, but for a period of time both versions will exist in the wild. Before I get too in-depth, here is a diagram of my current plan for branching:
Version:Current is what is currently released from. As Version:New is developed, bug fixes will continue to be shipped for Version:Current that will (in all likelihood) need to be merged into Version:New. Obviously, any new feature or bug fix for Version:New will only need to be merged into Version:New. Once Version:Current is no longer in the field, Version:New will become the singular trunk. While I think this is workable, I think it could quickly balloon into a management nightmare. My question is: is there a "better" branching strategy that could be followed in my case, or is this pretty much it?
You should choose a branching strategy that best fits your team and how they work, but what you described is a pretty common way of doing things. I would add that you might want to merge bug fixes from the current version more often than just once the current version is no longer being used; if that period is long, the branches could drift far enough apart to cause headaches.
Here's a nice post about common branching patterns
I've got an ASP.NET web application that is essentially our intranet site. I've made a lot of progress on the administration office's employee management pages. It ties into a SQL Server database, and I'm using a three-layered design (Objects, Logic, DataAccess). It was all reviewed and all of it was accepted, except(!) for the part that manages vacations and vacation histories.
My question, before I go into details is, how does one efficiently "untangle" code that is no longer necessary?
For example: previously I was treating each VacationDay as its own entity with its own history, such that I could track the history of an individual day. To help in tracking, I have an enum called VacationDayAction, which includes options such as .Submitted, .RequestDenied, .CancellationRequested, and so on. This was an attempt to provide meticulous detail for each day. It was then determined that we no longer need that. We do, however, still need VacationDays and all the basic functions around them (saving days, getting days, etc.), but we no longer need any of the "history"-related classes.
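To give a simplified picture of what I mean (the members shown here are only illustrative, not the real code):

```csharp
using System;
using System.Collections.Generic;

// Simplified illustration only - the real classes have more to them.
public enum VacationDayAction
{
    Submitted,
    RequestDenied,
    CancellationRequested
    // ...and other per-day actions we no longer need
}

public class VacationDay
{
    public DateTime Date { get; set; }

    // This per-day "history" is the part that now has to be untangled and removed.
    public List<VacationDayAction> History { get; } = new List<VacationDayAction>();
}
```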
My problem is, when I right-click a class that I no longer need in VS and go to "Show All References," I get a ton of results scattered across several pages. I need to get rid of all of them without breaking the rest of the application. Is there not some kind of "smart" technique or method for easily untangling parts that are no longer necessary? This is particularly difficult because 90% of what I did was just fine and needs to stay as it is, yet scattered in that 90% is 10% of stuff that is no longer needed. I can't just go storming through with the delete key either, because with the removal of each reference I need to be sure that any dependencies on that reference are also fixed so that they don't call stuff that isn't there anymore. And I still need the application in a compilable state, so that I can test along the way that the rest of the application didn't fall apart as a result of some deletion.
To give you an idea of my low level of experience, I started two years ago with having never used C#, ASP.Net, or Visual Studio. It blew my mind when, way after starting and as I was learning, someone taught me that I could use breakpoints. And then it really really blew my mind when I learned about multi-layered design. I'm wondering if there is not some technique or trick or feature that can help in scenarios like this, where you have to "untangle" and throw away unnecessary stuff.
This is not a simple question. In fact, I would say this is one of the major challenges for any systems developer: how to handle and get rid of old code which is no longer in use. There is lots of literature on this, and few really excellent answers. A good book may be "Working Effectively with Legacy Code" by Michael Feathers, which deals with many related problems. It is no light read though, and will probably take some time to get through, but it will likely help you become a better coder, and better at these kinds of tasks in particular.
Maybe you can have a look at the ReSharper tool? ( http://www.jetbrains.com/resharper/ ) It is a productivity tool which, among other things, shows "dead" (unused) code in grey and lets you remove it. It will also help you remove unused references from each class (again, they will be greyed out, and it will let you remove them automatically).
Drawing diagrams where each major piece of code/component is a box, with a line linking it to any related component, might help you get a better overview; try to draw a hierarchy showing how different parts of the code are related and dependent.
The bottom line, as far as I know, is that you just have to muddle through it, commenting out code a little at a time, then recompiling and testing. If it still works, fine - now you can remove the commented-out code completely. This would be easier if you had unit tests covering your code, but I take it as a given that you don't, as is unfortunately often the case.
We are trying to get ReSharper introduced at our company, but it would have to be for all developers. Management want us to justify the cost with a business case.
I am unsure how to go about getting proof that ReSharper will benefit the business. What kind of statistics can you get from it?
I am unsure how to go about getting proof that Resharper will benefit the business.
If they asked for a business case, they're not asking for proof, just some kind of fact-based estimate of the likely return on their investment.
So, for example:
A license costs (say) $250 per developer, a developer costs (say) $50,000 per year.
A developer with Resharper costs 0.5% more than a developer without Resharper.
That gives you a basic financial model - if you get more than a 0.5% productivity gain, it's worth it; if you get less, it isn't. Some corporates apply a minimum return on investment (ROI) factor - so if the factor is 1.2, you would have to show a 0.6% benefit to get approval. The factor is very unlikely to be more than 3.
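If it helps to see the arithmetic in one place, here it is as a throwaway C# sketch using the assumed figures above (substitute your own numbers):

```csharp
using System;

// Back-of-the-envelope model using the assumed figures above - swap in your own numbers.
double licenseCost   = 250;      // per developer, one-off
double developerCost = 50_000;   // per developer, per year
double roiFactor     = 1.2;      // minimum-return factor your company may apply

double breakEvenGain = licenseCost / developerCost;   // 0.005 -> a 0.5% productivity gain
double requiredGain  = breakEvenGain * roiFactor;     // 0.006 -> 0.6% once the ROI factor is applied
double claimedSaving = 0.35 * developerCost;          // what the vendor's 35% claim would be worth per year

Console.WriteLine($"Break even at {breakEvenGain:P1}; need {requiredGain:P1} after the ROI factor; " +
                  $"a 35% gain would be worth {claimedSaving:C0} per developer per year.");
```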
You could tweak that model - depreciate the license over 3 years, include the procurement costs, changing cost of capital, etc., but a simple, conservative model is likely to have the broadest appeal.
Then all you need is some evidence that you get more than a 0.5% productivity improvement. You could run a benchmark, or a pilot with a small number of developers for this. Pick some typical tasks and time them with and without Resharper. There is a 30 day trial version available so you could run a pilot before you have to purchase.
The PDF on the Resharper home page claims a 35% productivity increase - you can take that with a pinch of salt, but unless that's exaggerated by a factor of 70, it's still a worthwhile investment. The number of recommendations on the web, and developers claiming to buy it with their own money suggest that it isn't a wild exaggeration.
When you present the business case, you might like to illustrate that percentage as a dollar value too.
Developers only spend part of their day in their IDE, so you should probably adjust the expected returns downwards because of that. The real number is probably between 20% and 80%, but the lower end of the range might not be a politically acceptable number to present. You're interested in what proportion of the output is affected by the investment.
I don't have any connection with Jetbrains - and I'm answering a question about how to make a business case, not selling licenses! The anecdotal evidence from where I work is that the developers who have used Resharper have only good things to say about it. In some very specific cases it has saved weeks or months by automating mechanical tasks that have to be applied over a lot of files. The rest of the time it's hard to measure, but since the developers use it all of the time, they must be getting some real value out of it.
There's a quality argument too - you could measure this as a productivity increase, or a cost saving, or just an additional argument - depending on how quality issues are perceived at management level in your company.
ReSharper does not track itself in any way that would provide useful statistics. Also, I'm pretty sure no college/company/consultants have any sort of meaningful hard data. This is just too complex. I suppose you could measure the time savings from (A) code insertion, (B) refactoring quickly, and (C) getting it right the first time because ReSharper didn't make the mistakes a human would. Just these savings pay for ReSharper soon enough. For a $300 license, all ReSharper has to do is save you 5-6 hours per developer. That's hours.
But the real benefits of ReSharper are impossible to measure:
Since good structure is now as easy to make as bad structure, you do it right!
Your designs are better, because you spend your time thinking about design rather than coding cruft.
Whatever your level, you learn from ReSharper. The refactorings available are those demanded by top-level developers. By using them you learn these good practices.
Mistakes are more forgiving. If you structure your code poorly, it's easy to fix. I find myself more daring and willing to try new things, because there is less risk. This has resulted in some great code.
I'm afraid your powers-that-be will need to trust you, or trust the testimonials on the web-site, or trust a consultant, or experience ReSharper for themselves. If your managers are not themselves quality developers, you're going to have an uphill battle. I wish you luck.
I bought ReSharper with my own money a few months ago, because I knew the best developers used it (or coderush). And best means they create more maintainable solutions for less time/money. It has surpassed my expectations. Getting code out there quicker and being able to refactor quicker is what I expected. All well and good. What I did not expect was how this would increase the time I had to make the right development decisions and do the right things at the most efficient time. Before there was just not enough time to do things right; now there is.
So it's impossible to tell management whether ReSharper pays for itself 20 times over, or 100, or 500, but I think 20 should be enough.
I know business managers love them some numbers, but the best business case is anecdotal:
It makes developers happy.
True, it does increase productivity, but that's hard to prove. Making developers happy should be enough, since happy developers are more productive. You might want to point out that static code analysis is built into it, nudging developers toward writing better code and gently training them to code cleanly.
No statistics, but here's a very good blog article arguing the case for Resharper. Some coworkers and I used some of these justifications to get it bought for us.
EDIT: Changed the link to point to the Internet Archive version.
Basically, it's a tool to reduce development time:
Visualize more problems immediately
Improved coding speed by showing warnings (from info up to error) and allowing developers to fix them by a simple Ctrl + Space
Enforce naming conventions (customizable)
Way better refactoring: Not only leading to fewer bugs, but also allowing more operations; refactoring improves the velocity (no refactoring leads to slower and slower development speed)
Way faster code navigation (meaning opening the desired file location):
camelCase find file/class/symbol, by Ctrl+[Shift]+T
Find where a piece is used in all the source code
Developers can learn something: the auto-correction suggestions usually take into account refactoring tricks and the latest .NET features. It's not just an MS Word-style spell corrector; it will even tell you how you could say the same thing better.
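For example, this is the kind of rewrite it typically offers as a quick-fix (Alt+Enter in the default keymap); the code here is made up purely for illustration:

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative only: a typical ReSharper suggestion - replacing a hand-rolled loop
// with the equivalent LINQ query.
public static class Example
{
    // Before: explicit loop
    public static List<string> ActiveNamesLoop(IEnumerable<(string Name, bool Active)> users)
    {
        var result = new List<string>();
        foreach (var user in users)
        {
            if (user.Active)
            {
                result.Add(user.Name);
            }
        }
        return result;
    }

    // After: the form ReSharper would offer via its quick-fix
    public static List<string> ActiveNamesLinq(IEnumerable<(string Name, bool Active)> users) =>
        users.Where(u => u.Active).Select(u => u.Name).ToList();
}
```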
Note: Technically, it can be installed on a single machine. If installed on the machine of the lead dev or project manager, (s)he can review code much faster. Refactoring and integration are some important tasks of a lead dev.
On the downside, I don't believe the advertised gain: it assumes a bad development process and an idealistic improvement. What I can tell you is that it has made my life better as a developer.
The best business case for ReSharper has more to do with its ability, when coupled with the StyleCop add-in (free), to allow a small team of developers to quickly create consistent, coherent, standards-based, maintainable code. Until it was introduced in our organization we had nothing but numerous stylistic approaches, not to mention the defects, bugs and other problems ReSharper helped us identify and correct. It is quite simply the best VS add-in I've ever encountered.
As an aside, you should pick up GhostDoc (a free VS add-in) as well. It makes documenting your code much easier. These two tools together are invaluable.
An issue you may run into is that Management may not so much be looking for a justification for ReSharper, but justification for those things that ReSharper does: refactoring, code cleanup, increased ability to navigate code, unit test support.
If you've got Management that needs to justify something like ReSharper, then they may not yet have "justified" modern software development practices, either.
While I cannot, even from my own organization, provide direct metrics, the tool provides a wealth of assistance and hints for developers.
It will also, when properly used, help an organization have more consistent code that follows the organization's code standards.
It will also highlight new features in newer .NET frameworks and gently show developers how they can be applied to their code.
The tool is fantastic in getting rid of some code smells.
Aside from that aspect, once developers become more proficient in its use, it has a great number of navigational features that allow them to quickly zip through code.
See the ReSharper Benefits For You and Your Business document for a small ROI analysis. Unfortunately it is not backed by any hard data and boils down to the assumption that developer productivity increases by 35 percent when using ReSharper, but it sums up all the arguments for using a productivity solution like ReSharper.
If management just want a set of numbers put in front of them, I have knocked up a basic app that should give an indication of the potential ROI from purchasing a tool like ReSharper. Even if you don't accept the 35% productivity-improvement claim, a 1% improvement still brings an ROI.
If it is possible to auto-format code before and after a source control commit, checkout, diff, etc., does a company really need a standard code style?
It feels like the standard coding-style debates that have been raging since programming began - like "put the bracket on the following line" or "properly indent your (" - are no longer essential.
I realize that in languages where whitespace matters the diff will have to consider it, but for languages where the style is a personal preference, is there really a need to worry about it anymore?
Auto-format can really only address whitespace.
It won't address developers giving variables bizarre, nonsensical names.
It won't address some developers having functions return null on an error vs. throwing an exception (see the sketch below).
I'm sure others can think of more examples.
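To illustrate the second point, here is a small hypothetical C# example; both methods are perfectly formatted, and no auto-format pass will reconcile the semantic difference between them:

```csharp
using System.Collections.Generic;

// Hypothetical example: two error-handling styles that no auto-formatter will
// reconcile, because the difference is semantic, not stylistic.
public class CustomerRepository
{
    private readonly Dictionary<int, string> customers = new Dictionary<int, string>();

    // Style 1: return null on failure; every caller has to remember to null-check.
    public string FindCustomer(int id) =>
        customers.TryGetValue(id, out var name) ? name : null;

    // Style 2: throw on failure; callers can rely on getting a value back.
    public string GetCustomer(int id) =>
        customers.TryGetValue(id, out var name)
            ? name
            : throw new KeyNotFoundException($"No customer with id {id}");
}
```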
This is what we do at my work:
We all use Eclipse. We don't have a policy requiring Eclipse, but somehow none of us is an IDEA/IntelliJ guy. We also think our code should be written with legacy in mind. This means our code has to be readable in a certain way even years later (#1), no matter who wrote it and whether that person is even still in the company.
Eclipse has a couple of handy features: automatic format on save, and a specific Formatter tool. As you can see from the linked screenshot, it can be configured with XML. Thus there's a bunch of premade XML files available for everyone in our company, so that when a new guy comes in, we walk him through the whole process and configure his Eclipse for him (yes, it's a slightly evil thing to do) so that it actually uses the formatting XML files we have provided. We do not enforce automatic format on save - we don't want to be completely intrusive, we just want to push all our developers in the right direction. For even better compatibility, we mostly use the rules defined in JCC.
Next comes the important part: the actual builds. We embrace automatic builds, and for that we use the Hudson Continuous Integration server. There are two important parts to our configuration beyond this:
We use CVS loginfo to trigger builds whenever something is committed.
We utilize several plugins available for Hudson, including the Continuous Integration Game in conjunction with the most important one, Checkstyle.
The Checkstyle plugin is the magician in our code style enforcement guideline:
After committing code to CVS, a Hudson build is triggered
After the build has completed successfully (all unit tests pass, etc.), Checkstyle inspects the actual source files
Checkstyle ranks the code based on the rules we have defined for it
The Continuous Integration Game sees the result from Checkstyle and awards/takes away points from the person who owns the relevant part of the code
The leaderboard shows total points for every committer in the system
Basically this means that when anyone commits ugly code into our CVS, our build server automatically reduces that person's points.
This means that eventually any one of us can be ranked on the leaderboard based on general code quality, both in appearance and in OO principles and metrics such as the Law of Demeter, cyclomatic complexity, etc. Naturally this isn't a completely serious statistic, but it's a good indication that you're doing something wrong when a commit that triggers a build in our CI doesn't earn you points - most of our commits are worth between 1 and 5 points.
And is it working? Sort of. I don't think any of us at my work writes ugly or unmaintainable code, and personally I love chasing all kinds of scores, so it's definitely motivating me to write code that looks nice and follows all the OO paradigms I know of.
And do we as a company really need it? I think we do; as you should see from reading this entire answer, it can be considered a good practice for the improvements it brings.
#1: on a related note, I refactored legacy code from 2002 today which used those standards; it didn't look "bad" at all even in its original form, and certainly not worse in its new form.
No, not really.
That is, if you can actually get it to work consistently and not have it flag code as changed just because of a different style of laying the code out.
However, this is just a small part of coding standards. It won't cover multiple return statements, the use or not of ternary operators, etc.
It is always nice if the coding style that the shop uses is the same one that is also followed by the development tools.
Otherwise, if there is a large body of code that already follows a shop standard which is NOT the same as that of the tools, you have two choices:
Modify all of the code to follow the tool standard, or
Maintain the existing shop standard.
Many shops do the latter. Regardless, there does need to be some kind of standard, and it does need to be followed.
Some development tools allow you to tweak their standard. In some cases you may be able to bring the tools in alignment with the shop standard.
It probably doesn't matter that much anymore if you can ensure that everybody in the team sees the source code "correctly" formatted, whatever they think that is. However, I've not seen a system that can do that - you can do parts of it (say, reformat before and after checkin/checkout), but these days you also have to consider web interfaces into the version control, external code review systems that interact directly with the version control system, etc.
The main purpose of a standard code style is (IMHO) to ensure that you can read other team members' code easily, without having to start reverse-engineering it, because all the code is written using the same sort of guiding principles. Indenting and parenthesis placement seem to be a major hang-up here, but they are only a very small - and, in my opinion, somewhat overblown and not very important - part of the need to make code consistent.
Unfortunately I'm not aware of any tools that can automatically apply consistent coding principles to source code...
Yes, coding styles are needed if there is a desire to have a homogeneous code base. Such a code base can be useful in preventing individual ownership of parts of the code, which can cause problems when people leave the team. If you can't imagine wildly different styles causing problems in understanding all of it, just look at all the different ways English text can be organized in various forms of communication - all written, but quite different: tweets, e-mail, text messages, IM, message board posts, etc. - and the changes in fonts, capitalization, decoration, and so on.