I started at a new company which manages multiple projects (around 30). However, all their projects are in one git repository. I now wanted to split all our projects into one git repository per project. To achieve that, I went ahead and extracted every folder into a new folder containing its own git repository.
However, some references were broken. While investigating, I found that project referencing was done in multiple ways, depending on the project (options 2 and 3 are sketched in the .csproj fragments after this list):
Including the entire solution/project in the current solution.
Referencing the .csproj-file of another solution.
Referencing the built .dll (bin/debug).
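To make the difference concrete, options 2 and 3 typically look roughly like the following inside a .csproj file (the project names and relative paths here are just placeholders):

<!-- Option 2: a project reference to the .csproj of a project in another solution.
     The relative path breaks as soon as that project moves into its own repository. -->
<ItemGroup>
  <ProjectReference Include="..\..\OtherSolution\SharedLib\SharedLib.csproj" />
</ItemGroup>

<!-- Option 3: a file reference to the DLL that the other solution has already built. -->
<ItemGroup>
  <Reference Include="SharedLib">
    <HintPath>..\..\OtherSolution\SharedLib\bin\Debug\SharedLib.dll</HintPath>
  </Reference>
</ItemGroup>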
In my opinion, the first way should not be the way to go, right? It seems like far too much overhead. So I'm torn between 2 and 3, and I would like to hear how you are doing it.
Looking forward to your input!
It's normal to have code you want to share between multiple solutions.
For this, we use projects like 'Infrastructure' or 'Logging' with their own CI builds. When they are done, we create a release build which uploads the DLLs to a private NuGet server.
These projects are then included as DLLs in the other projects through NuGet and updated when needed. You also don't break other solutions when you change something in your logging code; the other solutions have to update to the new logging version first.
What I do is host a NuGet server in the company; you can also use Azure DevOps for that: https://learn.microsoft.com/en-us/azure/devops/artifacts/get-started-nuget?view=azure-devops.
After you set up the NuGet server you can import and update the packages in each project. So, when you change the code of any project, push a new package to the NuGet server and then update all the other projects.
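As a rough sketch of the consuming side (the feed name, URL, package name and version below are placeholders, and older non-SDK projects would use packages.config instead of PackageReference), you register the internal feed in a nuget.config and reference the package from each consuming project:

<!-- nuget.config: add the company feed as a package source -->
<configuration>
  <packageSources>
    <add key="CompanyFeed" value="https://pkgs.example.com/nuget/v3/index.json" />
  </packageSources>
</configuration>

<!-- In a consuming .csproj: pull in the shared library by version -->
<ItemGroup>
  <PackageReference Include="MyCompany.Logging" Version="1.2.0" />
</ItemGroup>

Bumping the Version attribute is then the explicit "update all other projects" step.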
Your question sounds like "How to split a solution into smaller solutions".
30 projects is not that much; around 30 C# projects is just the point where splitting your solution starts to make sense. The solution is also the natural base for a repository.
If you analyze the dependencies of your C# projects, you can certainly form clusters: there are basic projects, referenced by everything, and front-end parts, referenced by nothing.
Basic projects (depending on nothing, but referenced by many) tend to be more stable and change less frequently; they are also more dangerous to change, with a higher risk of breaking changes. It's good to make access to them a little more complicated (i.e. put them in a different solution), so that you don't change them frivolously just because the source code is right there to edit.
The code and architecture become cleaner, since programmers tend to use a wrapper, derived classes or interfaces to do what they want to do inside their active solution.
They no longer change the underlying solutions as quickly and casually, so those stay more stable.
You can consider a solution as a product of its own, either a library or a final product.
So split projects according to what is potentially going to be used in upcoming projects over the next few years, and what is used as a product for one client only.
Suppose you start a new product next week: which projects would you most likely include there? Those belong in a library.
It also simplifies life for new programmers if you group your C# projects into such clusters: you can tell them "just use it, you don't need to dig into the source code", or "get familiar with this solution only", and they are not so overwhelmed by the quantity.
Branching is also done per solution: you create a branch of one solution for a client request, and a branch of another solution to stay up to date with technology. This is much easier to handle with smaller bundles of projects.
A NuGet server, as proposed by others, is a good way to manage updates. Without a server you link to DLLs directly. If you do not have many updates, you either invest time in setting up the server or in copying a few DLLs around twice a year; one is not more complicated or time-consuming than the other. Manual jobs done by different people carry the risk of human error, but the task "copy all DLLs from one directory to another directory" might still work. Do not reference the output directory of one solution directly from the other solution. Put the "production" DLLs in a separate directory and make the update a deliberate step: "yes, I want to update - use this version now". An "automatic update" that happens just because someone decides to build the other solution can cause trouble.
Right now I am saving all my Visual Studio projects on my C drive.
Now I want to keep a copy of all those projects on some other drive, so that if my C drive crashes I can still access all my projects.
What is the best way to do that?
If I just make a zip of the current projects from the C drive and paste it onto another drive,
and extract it when needed, will it work, or will I get errors?
Thanks in advance for your help.
If I just make a zip of the current projects from the C drive and paste it onto another drive.
Well, you could do that, but it's rather tedious, error-prone and brute-force, and it's difficult to maintain history that way.
A better choice is to use some form of source control (SC) / software configuration management (SCM). SC is a tool for maintaining a code repository. It works by keeping metadata about every source file and every change you make.
e.g.
Git
Subversion
Microsoft TFS
Perforce
IBM/Rational ClearCase
Microsoft SourceSafe (ewww, retired thankfully)
Source control not only keeps a copy somewhere else (ideally a different computer) but it also allows you to
keep track of what changed
roll back a change
share with your friends or colleagues
integrate nicely with your IDE of choice (VS) or the command line
But in this day and age there are plenty of free cloud-based solutions that offer you more than just a code repository, such as stats, wikis, bug tracking, and spiffy charts. Check out:
Microsoft Visual Studio Team Services (VSTS)
Github
Atlassian Bitbucket
Summary
Whether you perform manual folder copies or use source control, both will leave you with a copy of your code. However, only the latter introduces workflows and due diligence (via SCM), so that as you code you are unlikely to lose information, thanks to the procedures and safeguards in place.
A word on file backup
If for some reason you decide not to proceed with SC but rather stick with plain old file backup, then at least follow the fine wisdom of Scott Hanselman (MSFT), who talks about file backup best practices:
I've got a number of backups because I practice the Backup Rule of Three.
3 copies of anything you care about - Two isn't enough if it's important.
2 different formats - Example: Dropbox+DVDs or Hard Drive+Memory Stick or CD+Crash Plan, or more
1 off-site backup - If the house burns down, how will you get your memories back?
...using apps like CrashPlan.
Scott will most likely agree that his plan wasn't intended for source code but at least you have 3 backups of files as he recommends.
See Also
Hanselman, S, "Is your stuff backed up? Recovering from a hardware failure"
Have a look at Visual Studio Team Services. You can add code to source control (I would use Git if I were you) and manage your projects there for free.
Having your code in a source control system has many benefits, like having history of each commit.
Besides that, VSTS has lots of options like Continuous Integration / Continuous Deployment, testing, and project management support, such as running your project as an Agile project.
What will you do if the hard drive crashes?
Code management is a practice, and there are many tools to help you manage your code. Try GitHub or Bitbucket.
You can also zip the code and save it on external disks, but consider how much risk that leaves for your code.
You could use a .zip archive to back up your work, but this is slightly laborious and will likely include a number of files you do not need to get up and running again (for example the build output, NuGet packages folders, etc.), which will bloat your archives.
A better option would be to use a Version Control System of some kind, which will allow you to back up those parts of the project/solution that actually need to be backed up while ignoring the parts that can be rebuilt from the code. A good walk through of what and why can be found in Version Control By Example, which also includes comparisons about different types of VCS as well as how to perform many of the usual tasks.
There are various free options out there, based on a number of different providers. As some examples, I've used the following services, and all of them will give you a free account, and some will also give you private repositories (so that random members of the public can't see your work if that's what you want):
GitHub - unlimited public repositories, uses Git.
BitBucket - unlimited private repositories, uses Git or Mercurial
Visual Studio Team Services - unlimited private repositories, uses Git or TFS
Using an online provider will give you the added benefit of the backups being on a third party - so if your disk fails you'll still have a backup, as well as the other benefits a VCS will provide (the ability to rollback to a specific point in time, annotations about changes, etc.).
I'm a hobbyist programmer and I've created an application for my office. Every so often I need to improve the code, add features or fix issues that come up under certain circumstances - I've found bugs or ineffective coding even after 3-4 months of heavy usage of the application. The thing is that whenever I modify the code, Visual Studio saves the changes. This means that if I want to use the program I'll have to be really fast in coding and debugging, or it won't build - and I won't be able to use it...
Is there any way to keep the old version of the program without having to save the complete project folder elsewhere? Like creating a new version but keeping the option to go back to the old - working - one...
What you are looking for is called source control.
There are many systems out there; two popular ones are Subversion and Git.
Used properly, you will have a full history of each file you have in your project.
There are two other answers here regarding source control at the time I write this, but there is another angle on this as well.
You're executing your production copy from the development directory. Don't do this.
When you have developed the program to a stable version, make a copy of it somewhere else and use that copy. In this way you're free to keep developing on the software without destroying your ability to keep using the existing stable version.
As for source control, you should definitely use that as well if you're not already doing it. It would, among other things, allow you to go back and hotfix the stable version with minor bugfixes while still doing major rewrites of the software, as well as the features others here have mentioned, full history of your project, "unlimited" undo, etc.
I'm not sure what you mean that Visual Studio saves the code when you modify it. It does by default save when you build, but I don't think it saves while you're typing.
Anyway, what you're looking for is called a source control system.
You can try Team Foundation Service from Microsoft.
It works fine and you can share your project with colleagues.
http://tfs.visualstudio.com/
EDIT:
This is a free-of-charge option you can use until you want to share your project with more than 4 people!! Then you have to pay for TFS.
You need source control.
If your project is open source you can use CodePlex, an open-source website where engineers and computer scientists share projects and ideas. Its features include wiki pages, source control based on Mercurial, Team Foundation Server or Subversion (also powered by TFS), Git, discussion forums, issue tracking, project tagging, RSS support, statistics, and releases.
If you don't want to share your code you can use Team Foundation Server
We have a solution with around 100+ projects, most of them C#. Naturally, it takes a long time to both open and build, so I am looking for best practices for such beasts. Among the questions I am hoping to get answers to are:
How do you best handle references between projects?
Should "Copy Local" be on or off?
Should every project build to its own folder, or should they all build to the same output folder (they are all part of the same application)?
Are solution folders a good way of organizing stuff?
I know that splitting the solution up into multiple smaller solutions is an option, but that comes with its own set of refactoring and building headaches, so perhaps we can save that for a separate thread :-)
You might be interested in these two MSBuild articles that I have written.
MSBuild: Best Practices For Creating Reliable Builds, Part 1
MSBuild: Best Practices For Creating Reliable Builds, Part 2
Specifically, in Part 2 there is a section, Building large source trees, that you might want to take a look at.
To briefly answer your questions here though:
CopyLocal? For sure turn this off
Build to one or many output folders? Build to one output folder
Solution folders? This is a matter of taste.
Sayed Ibrahim Hashimi
My Book: Inside the Microsoft Build Engine : Using MSBuild and Team Foundation Build
+1 for sparing use of solution folders to help organise stuff.
+1 for project building to its own folder. We initially tried a common output folder and this can lead to subtle and painful to find out-of-date references.
FWIW, we use project references for solutions, and although nuget is probably a better choice these days, have found svn:externals to work well for both 3rd party and (framework type) in-house assemblies. Just get into the habit of using a specific revision number instead of HEAD when referencing svn:externals (guilty as charged:)
Unload projects you don't use often, and buy an SSD. An SSD doesn't improve compile time, but Visual Studio becomes twice as fast to open, close, and build.
We have a similar problem as we have 109 separate projects to deal with. To answer the original questions based on our experiences:
1. How do you best handle references between projects
We use the 'add reference' context menu option. If 'project' is selected, then the dependency is added to our single, global solution file by default.
2. Should "copy local" be on or off?
Off in our experience. The extra copying just adds to the build times.
3. Should every project build to its own folder, or should they all build to the same output folder(they are all part of the same application)
All of our output is put in a single folder called 'bin'. The idea is that this folder is the same as when the software is deployed. This helps prevent issues that occur when the developer setup is different from the deployment setup.
4. Are solutions folders a good way of organizing stuff?
No in our experience. One person's folder structure is another's nightmare. Deeply nested folders just increase the time it takes to find anything. We have a completely flat structure but name our project files, assemblies and namespaces the same.
Our way of structuring projects relies on a single solution file. Building this takes a long time, even if the projects themselves have not changed. To help with this, we usually create another 'current working set' solution file. Any projects that we are working on get added in to this. Build times are vastly improved, although one problem we have seen is that Intellisense fails for types defined in projects that are not in the current set.
A partial example of our solution layout:
\bin
OurStuff.SLN
OurStuff.App.Administrator
OurStuff.App.Common
OurStuff.App.Installer.Database
OurStuff.App.MediaPlayer
OurStuff.App.Operator
OurStuff.App.Service.Gateway
OurStuff.App.Service.CollectionStation
OurStuff.App.ServiceLocalLauncher
OurStuff.App.StackTester
OurStuff.Auditing
OurStuff.Data
OurStuff.Database
OurStuff.Database.Constants
OurStuff.Database.ObjectModel
OurStuff.Device
OurStuff.Device.Messaging
OurStuff.Diagnostics
...
[etc]
We work on a similar large project here. Solution folders have proved to be a good way of organising things, and we tend to just leave Copy Local set to true. Each project builds to its own folder, and then we know that for each deployable project we have the correct subset of the binaries in place.
As for the time opening and time building, that's going to be hard to fix without breaking into smaller solutions. You could investigate parallelising the build (google "Parallel MS Build" for a way of doing this and integrating into the UI) to improve speed here. Also, look at the design and see if refactoring some of your projects to result in fewer overall might help.
In terms of easing the building pain, you can use the "Configuration Manager..." option for builds to enable or disable building of specific projects. You can have a "Project [n] Build" that could exclude certain projects and use that when you're targeting specific projects.
As far as the 100+ projects go, I know you don't want to get hammered in this question about the benefits of cutting down your solution size, but I think you have no other option when it comes to speeding up load time (and memory usage) of devenv.
What I typically do with this depends a bit on how the "debug" process actually happens. Usually though, I do NOT set Copy Local to true. I set up the build directory for each project to output everything to the desired end point.
Therefore after each build I have a populated folder with all the DLLs and any Windows/web application, and all items are in the proper location. Copy Local wasn't needed since the DLLs end up in the right place in the end.
Note
The above works for my solutions, which typically are web applications, and I have not experienced any issues with references, but issues might be possible!
We have a similar issue. We solve it using smaller solutions. We have a master solution that opens everything, but performance on that is bad. So we segment smaller solutions by developer type: DB developers have a solution that loads the projects they care about, and service developers and UI developers the same thing. It's rare that somebody has to open up the whole solution to get what they need done on a day-to-day basis. It's not a panacea -- it has its upsides and downsides. See "multi-solution model" in this article (ignore the part about using VSS :)
I think with solutions this large the best practice should be to break them up. You can think of the "solution" as a place to bring together the necessary projects, and perhaps other pieces, to work on a solution to a problem. By breaking the 100+ projects into multiple solutions, each specialized in solving only a part of the overall problem, you can deal with less at a given time, thereby speeding up your interactions with the required projects and simplifying the problem domain.
Each solution would produce the output for which it is responsible. This output should have version information, which can be set in an automated process. When the output is stable, you can update the references in dependent projects and solutions to the latest internal distribution. If you still want to step into the code and access the source, you can do this with the Microsoft symbol server, which Visual Studio can use to let you step into referenced assemblies and even fetch the source code.
Simultaneous development can be done by specifying interfaces upfront and mocking out the assemblies under development while you are waiting for dependencies that are not complete but you wish to develop against.
I find this to be a best practice because there is no limit to how complex the overall effort can get when you break it down physically in this manner. Putting all the projects into a single solution will eventually hit an upper limit.
Hope this information helps.
We have about 60+ projects and we don't use solution files. We have a mix of C# and VB.NET projects. Performance was always an issue. We don't work on all the projects at the same time, so each developer creates their own solution files based on the projects they're working on. The solution files don't get checked into our source control.
All class library projects build to a CommonBin folder at the root of the source directory. Executable / web projects build to their individual folders.
We don't use project references; instead we use file-based references from the CommonBin folder. I wrote a custom MSBuild task that inspects the projects and determines the build order.
We have been using this for few years now and have no complaints.
It all has to do with your definition and view of what a solution and a project are. In my mind a solution is just that: a logical grouping of projects that solve a very specific requirement. We develop a large intranet application. Each application within that intranet has its own solution, which may also contain projects for EXEs or Windows services. And then we have a centralized framework with things like base classes, helpers and HttpHandlers/HttpModules. The base framework is fairly large and is used by all applications. By splitting up the many solutions in this way you reduce the number of projects required by a solution, since most of them have nothing to do with one another.
Having that many projects in a solution is just bad design. There should be no reason to have that many projects under a solution. The other problem I see is with project references; they can really screw you up eventually, especially if you ever want to split up your solution into smaller ones.
My advice is to do this and develop a centralized framework (your own implementation of Enterprise Library if you will). You can either GAC it to share or you can directly reference the file location so that you have a central store. You could use the same tactic for centralized business objects as well.
If you want to directly reference the DLL, you will want to reference it in your project with Copy Local set to false (somewhere like c:\mycompany\bin\mycompany.dll). At runtime you will need to add some settings to your app.config or web.config to make it reference a file not in the GAC or the runtime bin. In actuality it doesn't matter whether it's copied locally or not, or whether the DLL ends up in the bin or is even in the GAC, because the config will override both of those. I think it is bad practice to copy local and have a messy system. You will most likely have to copy locally temporarily if you need to debug into one of those assemblies, though.
You can read my article on how to use a DLL globally without the GAC. I really dislike the GAC mostly because it prevents xcopy deployment and does not trigger an autorestart on applications.
http://nbaked.wordpress.com/2010/03/28/gac-alternative/
Setting CopyLocal=false will reduce build time, but can cause various issues at deployment time.
There are many scenarios where you need to leave Copy Local set to True, e.g.:
Top-level projects,
Second-level dependencies,
DLLs called by reflection.
My experience with setting CopyLocal=false wasn't successful. See a summary of the pros and cons in my blog post "Do NOT Change "Copy Local" project references to false, unless understand subsequences."
We are getting very slow compile times, which can take upwards of 20+ minutes on dual-core 2 GHz, 2 GB RAM machines.
A lot of this is due to the size of our solution, which has grown to 70+ projects, as well as VSS, which is a bottleneck in itself when you have a lot of files. (Swapping out VSS is not an option unfortunately, so I don't want this to descend into a VSS bash.)
We are looking at merging projects. We are also looking at having multiple solutions to achieve greater separation of concerns and quicker compile times for each element of the application. This, I can see, will become a DLL hell as we try to keep things in sync.
I am interested to know how other teams have dealt with this scaling issue. What do you do when your code base reaches a critical mass where you are wasting half the day watching the status bar deliver compile messages?
UPDATE
I neglected to mention this is a C# solution. Thanks for all the C++ suggestions, but it's been a few years since I've had to worry about headers.
EDIT:
Nice suggestions that have helped so far (not saying there aren't other nice suggestions below, just what has helped)
New 3GHz laptop - the power of lost utilization works wonders when whinging to management
Disable Anti Virus during compile
'Disconnecting' from VSS (actually the network) during compile - I may get us to remove VS-VSS integration altogether and stick to using the VSS UI
Still not rip-snorting through a compile, but every bit helps.
Orion did mention in a comment that generics may play a part also. From my tests there does appear to be a minimal performance hit, but not high enough to be sure - compile times can be inconsistent due to disk activity. Due to time limitations, my tests didn't include as many generics, or as much code, as would appear in a live system, so that may accumulate. I wouldn't avoid using generics where they are supposed to be used just for compile-time performance.
WORKAROUND
We are testing the practice of building new areas of the application in new solutions, importing the latest DLLs as required, then integrating them into the larger solution when we are happy with them.
We may also do the same with existing code, by creating temporary solutions that just encapsulate the areas we need to work on and throwing them away after reintegrating the code. We need to weigh the time it will take to reintegrate this code against the time we gain by not having Rip Van Winkle-like experiences with rapid recompiling during development.
The Chromium.org team listed several options for accelerating the build (at this point about half-way down the page):
In decreasing order of speedup:
Install Microsoft hotfix 935225.
Install Microsoft hotfix 947315.
Use a true multicore processor (i.e. an Intel Core Duo 2, not a Pentium 4 HT).
Use 3 parallel builds. In Visual Studio 2005, you will find the option in Tools > Options... > Projects and Solutions > Build and Run > maximum number of parallel project builds.
Disable your anti-virus software for .ilk, .pdb, .cc, .h files and only check for viruses on modify. Disable scanning the directory where your sources reside. Don't do anything stupid.
Store and build the Chromium code on a second hard drive. It won't really speed up the build but at least your computer will stay responsive when you do gclient sync or a build.
Defragment your hard drive regularly.
Disable virtual memory.
We have nearly 100 projects in one solution and a dev build time of only seconds :)
For local development builds we created a Visual Studio Addin that changes Project references to DLL references and unloads the unwanted projects (and an option to switch them back of course).
Build our entire solution once.
Unload the projects we are not currently working on and change all project references to DLL references.
Before check-in, change all references back from DLL references to project references.
Our builds now only take seconds when we are working on only a few projects at a time. We can also still debug the additional projects as it links to the debug DLLs. The tool typically takes 10-30 seconds to make a large number of changes, but you don't have to do it that often.
Update May 2015
The deal I made (in comments below), was that I would release the plugin to Open Source if it gets enough interest. 4 years later it has only 44 votes (and Visual Studio now has two subsequent versions), so it is currently a low-priority project.
I had a similar issue on a solution with 21 projects and half a million LOC. The biggest difference was getting faster hard drives. From the performance monitor, the 'Avg. Disk Queue' would jump up significantly on the laptop, indicating the hard drive was the bottleneck.
Here's some data for total rebuild times...
1) Laptop, Core 2 Duo 2GHz, 5400 RPM Drive (not sure of cache. Was standard Dell inspiron).
Rebuild Time = 112 seconds.
2) Desktop (standard issue), Core 2 Duo 2.3Ghz, single 7200RPM Drive 8MB Cache.
Rebuild Time = 72 seconds.
3) Desktop Core 2 Duo 3Ghz, single 10000 RPM WD Raptor
Rebuild Time = 39 seconds.
The effect of the 10,000 RPM drive cannot be overstated. Builds were significantly quicker, and everything else, like displaying documentation or using the file explorer, was noticeably quicker too. It was a big productivity boost from speeding up the code-build-run cycle.
Given what companies spend on developer salaries, it is insane how much they can waste by equipping them with the same PCs the receptionist uses.
For C# .NET builds, you can use .NET Demon. It's a product that takes over the Visual Studio build process to make it faster.
It does this by analyzing the changes you made, and builds only the project you actually changed, as well as other projects that actually relied on the changes you made. That means if you only change internal code, only one project needs to build.
Turn off your antivirus. It adds ages to the compile time.
Use distributed compilation. Xoreax IncrediBuild can cut compilation time down to few minutes.
I've used it on a huge C/C++ solution which usually takes 5-6 hours to compile. IncrediBuild helped reduce this time to 15 minutes.
Instructions for reducing your Visual Studio compile time to a few seconds
Visual Studio is unfortunately not smart enough to distinguish an assembly's interface changes from inconsequential code-body changes. This fact, when combined with a large, intertwined solution, can sometimes create a perfect storm of unwanted 'full builds' nearly every time you change a single line of code.
A strategy to overcome this is to disable the automatic reference-tree builds. To do this, use the 'Configuration Manager' (Build / Configuration Manager...then in the Active solution configuration dropdown, choose 'New') to create a new build configuration called 'ManualCompile' that copies from the Debug configuration, but do not check the 'Create new project configurations' checkbox. In this new build configuration, uncheck every project so that none of them will build automatically. Save this configuration by hitting 'Close'. This new build configuration is added to your solution file.
You can switch from one build configuration to another via the build configuration dropdown at the top of your IDE screen (the one that usually shows either 'Debug' or 'Release'). Effectively this new ManualCompile build configuration will render useless the Build menu options for: 'Build Solution' or 'Rebuild Solution'. Thus, when you are in the ManualCompile mode, you must manually build each project that you are modifying, which can be done by right-clicking on each affected project in the Solution Explorer, and then selecting 'Build' or 'Rebuild'. You should see that your overall compile times will now be mere seconds.
For this strategy to work, it is necessary for the VersionNumber found in the AssemblyInfo and GlobalAssemblyInfo files to remain static on the developer's machine (not during release builds of course), and that you don't sign your DLLs.
A potential risk of using this ManualCompile strategy is that the developer might forget to compile required projects, and when they start the debugger, they get unexpected results (unable to attach debugger, files not found, etc.). To avoid this, it is probably best to use the 'Debug' build configuration to compile a larger coding effort, and only use the ManualCompile build configuration during unit testing or for making quick changes that are of limited scope.
If this is C or C++, and you're not using precompiled headers, you should be.
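For MSBuild-based C++ projects (.vcxproj, VS2010 and later) the relevant settings look roughly like the sketch below; in older versions the same options live under C/C++ -> Precompiled Headers in the project properties. The header name is just the conventional default:

<!-- .vcxproj: use a precompiled header for every translation unit in the project -->
<ItemDefinitionGroup>
  <ClCompile>
    <PrecompiledHeader>Use</PrecompiledHeader>
    <PrecompiledHeaderFile>stdafx.h</PrecompiledHeaderFile>
  </ClCompile>
</ItemDefinitionGroup>
<!-- The one file that creates the PCH (conventionally stdafx.cpp) is set to Create instead of Use. -->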
We had 80+ projects in our main solution, which took around 4 to 6 minutes to build depending on what kind of machine a developer was working on. We considered that to be way too long: for every single test it really eats away at your FTEs.
So how do you get faster build times? As you seem to already know, it is the number of projects that really hurts the build time. Of course we did not want to get rid of all our projects and simply throw all source files into one. But we had some projects that we could combine nevertheless: as every "repository project" in the solution had its own unit-test project, we simply combined all the unit-test projects into one global unit-test project. That cut the number of projects by about 12 and somehow saved 40% of the time to build the entire solution.
We are thinking about another solution though.
Have you also tried setting up a new (second) solution with a new project? This second solution should simply incorporate all files using solution folders. You might be surprised to see the build time of that new solution-with-just-one-project.
However, working with two different solutions will take some careful consideration. Developers might be inclined to actually -work- in the second solution and completely neglect the first. As the first solution with the 70+ projects will be the solution that takes care of your object hierarchy, this should be the solution where your build server runs all your unit tests. So the server for Continuous Integration must use the first project/solution. You have to maintain your object hierarchy, right?
The second solution with just one project (which will build much faster) will then be the project where testing and debugging are done by all developers. You have to keep an eye on the build server though! If anything breaks, it MUST be fixed.
Make sure your references are Project references, and not directly to the DLLs in the library output directories.
Also, have these set to not copy locally except where absolutely necessary (the master EXE project).
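As a minimal sketch of the second point (the referenced project name and path are illustrative), the Private metadata on a reference is what the IDE displays as Copy Local:

<ItemGroup>
  <!-- Project reference with Copy Local off: the built DLL is not copied into this project's output folder -->
  <ProjectReference Include="..\Common\Common.csproj">
    <Private>False</Private>
  </ProjectReference>
</ItemGroup>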
I posted this response originally here:
https://stackoverflow.com/questions/8440/visual-studio-optimizations#8473
You can find many other helpful hints on that page.
If you are using Visual Studio 2008, you can compile using the /MP flag to build a single project in parallel. I have read that this is also an undocumented feature in Visual Studio 2005, but I have never tried it myself.
You can build multiple projects in parallel by using the /M flag, but this is usually already set to the number of available cores on the machine, though this only applies to VC++ I believe.
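For later, MSBuild-based C++ projects (.vcxproj, VS2010 onward), a sketch of passing /MP through the project file looks like this; in VS2005/2008 the flag goes into the C/C++ 'Additional Options' box of the project properties instead:

<!-- .vcxproj: hand /MP to cl.exe so the source files within one project compile in parallel -->
<ItemDefinitionGroup>
  <ClCompile>
    <AdditionalOptions>/MP %(AdditionalOptions)</AdditionalOptions>
  </ClCompile>
</ItemDefinitionGroup>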
I notice this question is ages old, but the topic is still of interest today. The same problem bit me lately, and the two things that improved build performance the most were (1) using a dedicated (and fast) disk for compiling and (2) using the same output folder for all projects and setting CopyLocal to False on project references.
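A sketch of point (2), assuming a shared folder at the solution root (the path and configuration condition are illustrative): each project overrides its output path so every assembly lands in the same place, and nothing needs to be copied locally on top of that:

<!-- In each .csproj, per configuration: build into the shared folder instead of the project's own bin\Debug -->
<PropertyGroup Condition=" '$(Configuration)' == 'Debug' ">
  <OutputPath>..\bin\Debug\</OutputPath>
</PropertyGroup>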
Some additional resources:
https://stackoverflow.com/questions/8440/visual-studio-optimizations
http://weblogs.asp.net/scottgu/archive/2007/11/01/tip-trick-hard-drive-speed-and-visual-studio-performance.aspx
http://arnosoftwaredev.blogspot.com/2010/05/how-to-improve-visual-studio-compile.html
http://blog.brianhartsock.com/2009/12/22/analyzing-visual-studio-build-performance/
Some analysis tools:
Tools -> Options -> VC++ Project Settings -> Build Timing = Yes will tell you the build time for every vcproj.
Add the /Bt switch to the compiler command line to see how long every CPP file took.
Use /showIncludes to catch nested includes (header files that include other header files), and see what files could save a lot of I/O by using forward declarations.
This will help you optimize compiler performance by eliminating dependencies and performance hogs.
Before spending money to invest in faster hard drives, try building your project entirely on a RAM disk (assuming you have the RAM to spare). You can find various free RAM disk drivers on the net. You won't find any physical drive, including SSDs, that is faster than a RAM disk.
In my case, a project that took 5 minutes to build on a 6-core i7 on a 7200 RPM SATA drive with Incredibuild was reduced by only about 15 seconds by using a RAM disk. Considering the need to recopy to permanent storage and the potential for lost work, 15 seconds is not enough incentive to use a RAM disk and probably not much incentive to spend several hundreds of dollars on a high-RPM or SSD drive.
The small gain may indicate that the build was CPU bound or that Windows file caching was rather effective, but since both tests were done from a state where the files weren't cached, I lean heavily towards CPU-bound compiles.
Depending on the actual code you're compiling your mileage may vary -- so don't hesitate to test.
How big is your build directory after doing a complete build? If you stick with the default setup, then every assembly that you build will copy all of the DLLs of its dependencies, and its dependencies' dependencies, etc., to its bin directory. In my previous job, when working with a solution of ~40 projects, my colleagues discovered that by far the most expensive part of the build process was copying these assemblies over and over, and that one build could generate gigabytes of copies of the same DLLs.
Here's some useful advice from Patrick Smacchia, author of NDepend, about what he believes should and shouldn't be separate assemblies:
http://codebetter.com/patricksmacchia/2008/12/08/advices-on-partitioning-code-through-net-assemblies/
There are basically two ways you can work around this, and both have drawbacks. One is to reduce the number of assemblies, which is obviously a lot of work. Another is to restructure your build directories so that all your bin folders are consolidated and projects do not copy their dependencies' DLLs - they don't need to because they are all in the same directory already. This dramatically reduces the number of files created and copied during a build, but it can be difficult to set up and can leave you with some difficulty pulling out only the DLLs required by a specific executable for packaging.
Perhaps take some common functions and make some libraries; that way the same sources are not being compiled over and over again for multiple projects.
If you are worried about different versions of DLLs getting mixed up, use static libraries.
Turn off VSS integration. You may not have a choice in using it, but DLLs get "accidentally" renamed all the time...
And definitely check your pre-compiled header settings. Bruce Dawson's guide is a bit old, but still very good - check it out: http://www.cygnus-software.com/papers/precompiledheaders.html
I have a project which has 120 or more exes, libs and dlls and takes a considerable time to build. I use a tree of batch files that call make files from one master batch file. I have had problems with odd things from incremental (or was it temperamental) headers in the past so I avoid them now. I do a full build infrequently, and usually leave it to the end of the day while I go for a walk for an hour (so I can only guess it takes about half an hour). So I understand why that is unworkable for working and testing.
For working and testing I have another set of batch files for each app (or module or library) which also have all the debugging settings in place -- but these still call the same make files. I may switch DEBUG on or off from time to time, and also decide on builds or makes, or whether I want to also build libs that the module may depend on, and so on.
The batch file also copies the completed result into the (or several) test folders. Depending on the settings, this completes in several seconds to a minute (as opposed to, say, half an hour).
I used a different IDE (Zeus) as I like to have control over things like .rc files, and I actually prefer to compile from the command line, even though I am using MS compilers.
Happy to post an example of this batch file if anyone is interested.
Disable file system indexing on your source directories (specifically the obj directories if you want your source searchable)
If this is a web app, setting batch build to true can help depending on the scenario.
<compilation defaultLanguage="c#" debug="true" batch="true" />
You can find an overview here: http://weblogs.asp.net/bradleyb/archive/2005/12/06/432441.aspx
You also may want to check for circular project references. It was an issue for me once.
That is:
Project A references Project B
Project B references Project C
Project C references Project A
One cheaper alternative to Xoreax IB is the use of what I call uber-file builds. It's basically a .cpp file that has
#include "file1.cpp"
#include "file2.cpp"
....
#include "fileN.cpp"
Then you compile the uber units instead of the individual modules. We've seen compile times go from 10-15 minutes down to 1-2 minutes. You might have to experiment with how many #includes per uber file make sense; it depends on the project. Maybe you include 10 files, maybe 20.
You pay a cost so beware:
You can't right-click a file and say "compile...", as you have to exclude the individual .cpp files from the build and include only the uber .cpp files.
You have to be careful of static global variable conflicts.
When you add new modules, you have to keep the uber files up to date
It's kind of a pain, but for a project that is largely static in terms of new modules, the initial pain might be worth it. I've seen this method beat IB in some cases.
If it's a C++ project, then you should be using precompiled headers. This makes a massive difference in compile times. I'm not sure what cl.exe is really doing without precompiled headers; it seems to be looking for lots of STL headers in all the wrong places before finally going to the correct location. This adds whole seconds to every single .cpp file being compiled. I'm not sure if this is a cl.exe bug or some sort of STL problem in VS2008.
Looking at the machine that you're building on, is it optimally configured?
We just got our build time for our largest C++ enterprise-scale product down from 19 hours to 16 minutes by ensuring the right SATA filter driver was installed.
Subtle.
There's an undocumented /MP switch in Visual Studio 2005 (see http://lahsiv.net/blog/?p=40) which enables parallel compilation on a per-file basis rather than a per-project basis. This may speed up compiling of the last project, or a build of a single project.
When choosing a CPU: L1 cache size seems to have a huge impact on compilation time. Also, it is usually better to have 2 fast cores than 4 slow ones. Visual Studio doesn't use the extra cores very effectively. (I base this on my experience with the C++ compiler, but it is probably also true for the C# one.)
I'm also now convinced there is a problem with VS2008. I'm running it on a dual-core Intel laptop with 3 GB RAM, with anti-virus switched off. Compiling the solution is often quite slick, but if I have been debugging, a subsequent recompile will often slow down to a crawl. It is clear from the continuously lit main disk light that there is a disk I/O bottleneck (you can hear it, too). If I cancel the build and shut down VS, the disk activity stops. Restart VS, reload the solution and then rebuild, and it is much faster. Until the next time.
My thoughts are that this is a memory paging issue - VS just runs out of memory and the O/S starts page swapping to try to make space but VS is demanding more than page swapping can deliver, so it slows down to a crawl. I can't think of any other explanation.
VS definitely is not a RAD tool, is it?
Does your company happen to use Entrust for their PKI/Encryption solution by any chance? It turns out, we were having abysmal build performance for a fairly large website built in C#, taking 7+ minutes on a Rebuild-All.
My machine is an i7-3770 with 16 GB RAM and a 512 GB SSD, so performance should not have been that bad. I noticed my build times were insanely faster on an older secondary machine building the same codebase. So I fired up ProcMon on both machines, profiled the builds, and compared the results.
Lo and behold, the slow-performing machine had one difference -- a reference to Entrust.dll in the stack trace. Using this newly acquired info, I continued to search Stack Overflow and found this: MSBUILD (VS2010) very slow on some machines. According to the accepted answer, the problem lies in the fact that the Entrust handler was processing the .NET certificate checks instead of the native Microsoft handler. It is also suggested that Entrust v10 solves this issue, which is prevalent in Entrust 9.
I currently have it uninstalled and my build times plummeted to 24 seconds. YMMV depending on the number of projects you are currently building, and this may not directly address the scaling issue you were asking about. I will post an edit to this response if I can provide a fix without resorting to uninstalling the software.
There is definitely a problem with VS2008, because the only thing I've done is install VS2008 to upgrade my project, which was created with VS2005.
I've only got 2 projects in my solution. It isn't big.
Compilation with VS2005: 30 seconds
Compilation with VS2008: 5 minutes