Building takes a long time. How do you fight that? - c#

Speaking of compiled languages (C# in my case), I think this problem will always remain, no matter how performant your development machine is. Build time may be longer or shorter depending on the concrete environment, but it's often enough to make your attention want to move from your task to something else like Stack Overflow, YouTube, Twitter, etc., and it's just very annoying.
I'm happy for Java developers because of Java's dynamic class loading, but what can .NET (and other) developers do to make the build process less painful and obtrusive?

We use multiple build configurations to trade-off between speed and a comprehensive build.
A full build does time-consuming things like FX cop analysis, ASP.NET compilation, all unit test projects, Entity Framework view pre-generation, etc.
A "fast build" typically takes just a few seconds and those the bare minimum needed to get the project running.
Developers switch between the full build and the fast build throughout their workflow, as needed.
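As a sketch of how that split can be wired up (the target below and the FxCop/NUnit invocations are illustrative assumptions, not the actual setup described above), the slow steps can be gated behind a dedicated configuration so the everyday Debug build never pays for them:

<!-- Illustrative .csproj/MSBuild snippet: run the expensive steps only when a
     dedicated "FullBuild" configuration is selected; normal Debug builds skip them. -->
<Target Name="AfterBuild" Condition=" '$(Configuration)' == 'FullBuild' ">
  <!-- Static analysis (the FxCopCmd.exe path and project file are assumptions) -->
  <Exec Command="FxCopCmd.exe /project:MyApp.FxCop /out:$(OutDir)fxcop-results.xml" />
  <!-- Unit tests (nunit-console location and arguments are assumptions) -->
  <Exec Command="nunit-console.exe $(OutDir)MyApp.Tests.dll" />
</Target>

Developers stay on Debug for the tight edit-compile-debug loop and only switch to the full configuration (or let the CI server use it) when they want the comprehensive checks.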

Don't the class files have to be built as well? Wouldn't that just shift the workload from compile time to runtime? That's not really a difference, is it? The bigger software grows, naturally, the longer it takes to build, and that depends on the machine rather than on the language or framework - this is the tradeoff for things like strong typing and interpreted bytecode (or binary code, depending on the language/compiler) instead of interpreting source code on each run (as you have with PHP, Python, etc.). I don't think Java improves things much; there will still be a timeframe in which you have to build your application.
I think that, in comparison to C and C++, both C# and Java have improved immensely on the compile-time front.
Just use the time for slacking off.

Some things to try:
Defragment the drive containing your source code
Exclude your source code folders from the virus scanner
Exclude your source code folders from the Windows Search indexer
Disable any Visual Studio extensions that you are not using

The remark in your question about one's attention wandering off-task reminded me of this Joel on Software post.
So investing in solid-state disks (since I'm assuming you're talking about the build process on a dev box while you're developing and debugging) could help.
Besides, making your computer faster in general can't hurt, right? :)

In addition to many of the other suggestions to get a faster machine, remove unnecessary projects from your solution, etc, consider Visual Studio 2010 + a multicore machine. VS2010 can take advantage of all of your cores when doing a build. Check out this thread for more on how to set that up.

Are you doing a rebuild-all every time, or do you have everything in the same assembly? I'm working with quite large projects and my build time isn't that high. I have several assemblies, and I only modify a few each time I make changes to the project.
If you find yourself modifying assemblies all over the place, you might try to refactor your code structure. Or maybe you haven't taken the time to write unit tests? They not only help you with testing, but also push you toward better code structure (it's hard to test apps with a lousy design).
Another alternative is to use tools that speed up builds, for instance: http://www.xoreax.com/

I've worked on some very large C# projects and have rarely seen Debug build times exceed 2 minutes.
What generally sucks time are things like static analysis (e.g. fxcop), unit tests, code signing (if using a code sign service), etc. The easiest way to keep these under control is to either limit them to Release builds or to have a separate build definition for 'Full Build' and exclude these steps from your Debug and Release builds.
If these aren't your problems, look to your computer performance as others have said. Fragmentation, slow build disks, anti-virus, etc.

Related

Continuous Integration for stack with Visual C++ and C#

Please recommend a good continuous integration server that would build and integrate with the .NET stack and Visual C++ as well.
Some recommendations I have got are
Jenkins
CruiseControl
TeamCity
Because of the polyglot nature of the project, which continuous integration solution would you recommend?
I have used all three over several years. Some of the answers below state that most of the work will be producing your own build scripts. This has been true in my experience as well. We use a combination of MSBuild and Powershell scripts for our build process, which can be run under just about any CI tool, so picking one comes down to what you're looking for in terms of customization, integration with other systems, performance, and ease of use.
Short answer:
I recommend Jenkins. So far it seems to be the best combination of the above qualities. It has a ton of plugins, some localization and is actively developed by the OSS community.
Long answer:
I started with CruiseControl.NET. It was easily configurable with a text file and I found it highly reliable. However, we moved away from it because ThoughtWorks was moving toward a paid product (Cruise, now Go) and future development was in question. A new team has since forked the project, but there has been little word about future development.
We moved to TeamCity, which is free and has a great Ajax-y UI. It is easy to set up and get going and has a lot of features for distributed builds. We quit using TeamCity for several reasons. The server does a ton of stuff and was a bit of overkill for our basic needs. Even so, it was not very customizable (see time zones and notification contents) and we often found the administration UI confusing. That was all still okay, but we also had steadily worsening performance problems. We started with the standard out-of-the-box HSQLDB, moved our installation to SQL Server when we started experiencing degraded performance, and then had to quit using the server altogether as performance continued to degrade over time. I'm not sure what the culprit was, but I couldn't find any cleanup that would explain the constantly worsening performance as the Tomcat web server fought with SQL Server for resources, even when there were no active builds running. I am sure it's my fault and I was missing some crucial setting or needed to feed the server more memory, but this is a shared utility box, we did not have these issues with CC.NET, and most of all, I am not a Java/Tomcat guy and don't have a lot of extra time to keep fighting these issues.
We've moved to Jenkins now. It seems to be working fine so far, but we've only been with it a short while. It was easy to set up, does not seem to be taking nearly as many resources as TeamCity, and has a ridiculous number of plugins. The only downside so far is that, like many OSS products, it does not seem to have the best documentation, and it does so much that I may be tweaking knobs for a while to get it set up the way we want.
Between CruiseControl and TeamCity, TeamCity is faster and easier to set up, but you may need to check on licensing for it. I can't speak to Jenkins, never having used it.
Jenkins has the big advantage of being very extensible (currently over 400 plugins), which allows you to combine it with a huge number of other tools. So it gives you complete freedom in your other tool choices. I recently read that this is one problem of TeamCity, that you get locked in using the whole stack of tools (e.g. using SVN or Git as version control system will not be possible).
I am using Jenkins myself for our projects which has both Java and C++ code, and I am very happy with the tool. We had CruiseControl before, and have not once regretted the switch.
I have tried both Cruise Control and Jenkins, and Jenkins impressed me with very fast and user-friendly set up.
The three you list are all sensible choices, and the main problem will be producing the build script(s) needed to produce the build artifact(s). If you manage to make them do everything needed, changing CI systems shouldn't be a big issue.
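To illustrate that point, a tool-agnostic build script keeps the CI server to a thin wrapper. The sketch below (the solution name, test runner and target names are assumptions) is the kind of MSBuild file that Jenkins, CruiseControl.NET or TeamCity can all invoke with plain "msbuild build.proj":

<!-- build.proj (sketch): everything the CI server needs, in one entry point. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="CI">
  <Target Name="Clean">
    <MSBuild Projects="MySolution.sln" Targets="Clean" />
  </Target>
  <Target Name="Compile" DependsOnTargets="Clean">
    <MSBuild Projects="MySolution.sln" Targets="Build" Properties="Configuration=Release" />
  </Target>
  <Target Name="Test" DependsOnTargets="Compile">
    <!-- Swap in whatever test runner you use; nunit-console is just an example. -->
    <Exec Command="nunit-console.exe Tests\bin\Release\MyTests.dll" />
  </Target>
  <Target Name="CI" DependsOnTargets="Test" />
</Project>

Because the logic lives in the script rather than in the CI tool's UI, switching from one CI system to another later is mostly a matter of pointing the new tool at the same file.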
After implementing all three in different shops, I'd choose all of the above. Pick one.

Selecting a Build Server

I'm planning to set up my own build server. I'm primarily building C#, C/C++ and Java projects. I would also like my build server to run some external programs/scripts, such as my unit tests, static code analysis and Doxygen.
Suggestions?
Use Hudson Continuous-Integration software.
We're using JetBrains TeamCity. It's easy to configure, user friendly, has convenient plug-ins for notifications on build events, you can install multiple build workers, define any build engine (.net, java...), it can output artifacts, it can trigger automatically on check-in, it can execute any custom build script etc, etc... and most of all - it's free (for up to 20 configurations).
We've looked far and wide, and we found this to be the best...
Hardware: disks. Several of them, or a decent SSD. A lot of what you do will be disk-bound on the compiler side. I'm not (only) talking about getting the latest version; for example, a C++ compiler generates QUITE a number of intermediate files during the build. A decent, fast disk subsystem can make a noticeable difference - especially if the server is not just for you but for colleagues as well, so it may run a lot of builds concurrently.
Well, enough RAM and a modern multi-core CPU go without saying.
I've used Trac and Bitten, which worked quite well. I used it for C# and Python projects.
I have it building, creating docs and running unit tests. Currently I'm investigating running dotCover for test coverage, which shouldn't be too difficult, because bitten basically allows you to call any shell command you need.
Actually, I usually run the build system on an old (not-so-fast) system - it doesn't need to be very quick for me. I like to have developers behind the fast machines ;-)

Will more CPUs/cores help with VS.NET build times?

I was wondering if anyone knew whether Visual Studio .NET has a parallel build process or not? I have a solution with lots of projects; every project has lots of markup/code, lots of types, etc. Just sitting there with IntelliSense on runs it up to about 700 MB. But the build times are really slow and only seem to max out one of my two CPU cores.
Does this mean the build process is single threaded? My solution's build dependency chain isn't linear, so I don't see why it couldn't be building some of the projects in parallel. I remember Joel Spolsky blogging about his new SSD, and how it didn't help with compile times, but he didn't mention which compiler he was using. We're using VS 2005. Anyone know how it's compilation works? And is it any different/better in 2008/2010?
EDIT: Lots of good responses, here, but I'm interested specifically in C# and ASP.NET. No love for us web folks?
MSBuild (which VS uses to do builds, from 2005/.NET2) supports parallel builds. By default VS will set the maximum degree of parallelism to your number of processors. Use Tools | Options | Projects and Solutions | Build and Run to override this default.
Of course any one build might have more limited (or no) capacity to allow parallel builds. E.g. only one assembly in a solution provides no scope to build in parallel. Equally a large number of assemblies with lots of dependencies might block parallelism (A depends on B, C depends on A&B, D depends on C has no scope for parallel builds).
(N.B. for C++: VS 2005 & 2008 use their own build system; in 2010, C++ will also be built with MSBuild.)
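If you drive builds from a script instead of the IDE, the same parallelism can be requested explicitly. This is only a sketch - the project names are made up, and both BuildInParallel and the /maxcpucount (/m) switch require MSBuild 3.5 or later, i.e. VS2008-era tooling or newer:

<!-- build.proj (sketch): invoke with "msbuild build.proj /maxcpucount"
     so independent projects compile on separate cores.
     Project names/paths below are examples only. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Build">
  <ItemGroup>
    <Projects Include="Core\Core.csproj" />
    <Projects Include="Services\Services.csproj" />
    <Projects Include="Web\Web.csproj" />
  </ItemGroup>
  <Target Name="Build">
    <MSBuild Projects="@(Projects)" Targets="Build" BuildInParallel="true" />
  </Target>
</Project>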
Scott Hanselman has a blog post from a couple years ago that details getting Faster Builds with MSBuild using Parallel Builds and Multicore CPUs that should be of interest. He also has a follow up post Hack: Parallel MSBuilds from within the Visual Studio IDE.
I would suggest that an SSD would provide the biggest benefit, specifically for builds.
A SuperUser question also refutes Joel's article.
There are also SO questions asking for the best laptop, etc., with discussions on SSDs, cores, how VS works, and so on.
Qualifier: I bought an Intel SSD for home use a month or 3 ago. Lordy it's fast and arguably the best piece of kit I've ever bought except for my Voodoo 2...
With VS2k5 it depends on which language you're trying to use. C/C++ has 'experimental' support for multi-threaded building, but this feature isn't officially supported until VS2k8, using the /m switch.
I have used parallel builds in Visual Studio 2008. It does speed things up, but has many annoying side effects.
I often get failed builds, not because some compilation failed but because VS was unable to write into a locked symbol database. I have also gotten really messed-up IntelliSense results. Sometimes I have to repeat the build two or three times to get a final success.

Should I use Mono on a real project?

Has anyone used Mono, the open source .NET implementation, on a large or medium sized project? I'm wondering if it's ready for real-world, production environments. Is it stable, fast, compatible, ... enough to use? Does it take a lot of effort to port projects to the Mono runtime, or is it really compatible enough to just take code already written for Microsoft's runtime and run it?
I've used it for a number of internal and commercial projects with great success. My warnings:
Write lots of unit tests and make sure they ALL pass under Mono -- this will save you a lot of trouble.
Unless you absolutely have to, do NOT use their embedding API. It's damn easy to use, but it's ungodly easy to garbage collect valid memory or leak all of your memory.
Don't ever, ever, ever even come close to SVN and unless there's no choice, do not compile your own. Things change so often in SVN that it's highly likely you'll end up implementing something that doesn't work on a release version if your project is significantly large.
Don't try and figure out problems on your own for long, use the IRC channel. The people there are helpful and you'll save yourself days upon days -- don't make the same mistake I did.
Good luck!
Edit: The reason I say not to compile your own from source (release or SVN) is that it's easy to configure it differently than release binaries and hide bugs, for instance in the garbage collection.
Edit 2: Forgot to answer the second part to your question. In my case, I had no issues with porting code, but I wasn't using any MS-specific libraries (WinForms, ASP.NET, etc). If you're only using System.* stuff, you'll be fine; beyond that, you may run into issues. Mono 2.0 is quite solid, though.
I find Mono to be mostly binary compatible with MS. Hence I simply compile with MS, and run anywhere, like Java is meant to be!
The performance of Mono on Linux is getting very close to MS, as little as 2 times slower in some cases, vs 5-10 times slower when running Mono on Windows (but you should really stick to MS then).
I had some experience with Mono.
Pure .NET stuff (like business logic, controllers or algorithms) can be ported without any problems. Yet, weird things start showing up in the components that interact with operating system, UI, services or persistence. So be prepared for some debugging and hacking.
Things that might help:
Component-driven development - so that code is reused by Windows .NET and Mono, while the differences are isolated and tested
Continuous Integration running and checking everything against Mono and MS.NET, so that possible issues could be discovered as fast as possible (automated deployment and sanity checks are also recommended)
There are not a lot of UI component suites for shell development in Mono.
When a component vendor says that his code is "compatible with Mono", it is not same as "runs on Mono and is supported".
Although in the present, there are some companies going into production with Mono, I'd still wait before rushing in there due to:
Lack of decent and commercially supported UI component suites
Issues with efficient garbage collection
Not the best debugging experience (compare with the historical debugger in VS 2010)
PS: if there is a company offering fully managed cloud computing solution (not just a VM, but more like Hadoop equivalent for .NET), then I'll be forced to jump in despite these issues.
If you are doing ASP.NET 2.0 work, it works very well. Winforms may work, but it can cause display issues. If you want compatibility in a forms app, I would suggest GTK#, as it is crossplatform.
As suggested, as long as you thoroughly test, I would agree with using it commercially if that is a viable option for you - unless it is WinForms you need; in my opinion, I would stay away from that for now. And forget WPF, as there is no support at this time and there may never be (although they are working on Moonlight, a.k.a. Silverlight for Linux).
I haven't used Mono myself but you may be interested to know that FogBugz uses Mono to provide Lucene.NET on Linux platforms. (I only know this because Joel mentioned it in passing in Stack Overflow Podcast #24.)
I've got a bunch of shell apps in production.
I agree with Cody Brocious: write a lot of unit tests. I found in the past that regular expressions didn't work exactly the same way as on the Windows CLR.
It's actually simpler than you think to get into: just compile and run. If you use NAnt on your projects, it's even easier to transition.
I typically install mono from the source releases and I haven't had any problems.
I've used it for encryption/decryption tools and it worked fine.
In the future, I would consider using Mono/C#, but I would not expect it to be 100% exactly like .Net on Windows.
Of course you can, especially now that Mono 2.0 has been released.
Mono 2.0 is ready for real projects.
You can check this

Very slow compile times on Visual Studio 2005

We are getting very slow compile times, which can take upwards of 20+ minutes on dual core 2GHz, 2G Ram machines.
A lot of this is due to the size of our solution, which has grown to 70+ projects, as well as VSS, which is a bottleneck in itself when you have a lot of files. (Swapping out VSS is not an option, unfortunately, so I don't want this to descend into VSS bashing.)
We are looking at merging projects. We are also looking at having multiple solutions to achieve greater separation of concerns and quicker compile times for each element of the application. This, I can see, will become DLL hell as we try to keep things in sync.
I am interested to know how other teams have dealt with this scaling issue. What do you do when your code base reaches a critical mass where you waste half the day watching the status bar deliver compile messages?
UPDATE
I neglected to mention this is a C# solution. Thanks for all the C++ suggestions, but it's been a few years since I've had to worry about headers.
EDIT:
Nice suggestions that have helped so far (not saying there aren't other nice suggestions below, just what has helped)
New 3GHz laptop - the power of lost utilization works wonders when whinging to management
Disable Anti Virus during compile
'Disconnecting' from VSS (actually the network) during compile - I may get us to remove VS-VSS integration altogether and stick to using the VSS UI
Still not rip-snorting through a compile, but every bit helps.
Orion did mention in a comment that generics may have a part to play as well. From my tests there does appear to be a minimal performance hit, but not high enough to be sure - compile times can be inconsistent due to disk activity. Due to time limitations, my tests didn't include as many generics, or as much code, as would appear in the live system, so that may accumulate. I wouldn't avoid using generics where they are supposed to be used just for compile-time performance.
WORKAROUND
We are testing the practice of building new areas of the application in new solutions, importing the latest DLLs as required, then integrating them into the larger solution when we are happy with them.
We may also do the same to existing code by creating temporary solutions that just encapsulate the areas we need to work on, and throwing them away after reintegrating the code. We need to weigh the time it will take to reintegrate this code against the time we gain from rapid recompiling during development instead of Rip Van Winkle-like waits.
The Chromium.org team listed several options for accelerating the build (at this point about half-way down the page):
In decreasing order of speedup:
Install Microsoft hotfix 935225.
Install Microsoft hotfix 947315.
Use a true multicore processor (i.e. an Intel Core 2 Duo, not a Pentium 4 HT).
Use 3 parallel builds. In Visual Studio 2005, you will find the option in Tools > Options... > Projects and Solutions > Build and Run > maximum number of parallel project builds.
Disable your anti-virus software for .ilk, .pdb, .cc, .h files and only check for viruses on modify. Disable scanning the directory where your sources reside. Don't do anything stupid.
Store and build the Chromium code on a second hard drive. It won't really speed up the build but at least your computer will stay responsive when you do gclient sync or a build.
Defragment your hard drive regularly.
Disable virtual memory.
We have nearly 100 projects in one solution and a dev build time of only seconds :)
For local development builds we created a Visual Studio Addin that changes Project references to DLL references and unloads the unwanted projects (and an option to switch them back of course).
Build our entire solution once
Unload the projects we are not currently working on and change all project references to DLL references.
Before check-in change all references back from DLL to project references.
Our builds now only take seconds when we are working on only a few projects at a time. We can also still debug the additional projects as it links to the debug DLLs. The tool typically takes 10-30 seconds to make a large number of changes, but you don't have to do it that often.
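For reference, the swap such an add-in automates is roughly the difference between these two reference styles in an old-style .csproj (the project name and paths below are illustrative only):

<!-- Project reference: MSBuild walks and rebuilds the referenced project. -->
<ProjectReference Include="..\Core\Core.csproj">
  <Name>Core</Name>
</ProjectReference>

<!-- DLL reference: MSBuild just links against the already-built binary. -->
<Reference Include="Core">
  <HintPath>..\Core\bin\Debug\Core.dll</HintPath>
</Reference>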
Update May 2015
The deal I made (in comments below), was that I would release the plugin to Open Source if it gets enough interest. 4 years later it has only 44 votes (and Visual Studio now has two subsequent versions), so it is currently a low-priority project.
I had a similar issue on a solution with 21 projects and 1/2 million LOC. The biggest difference was getting faster hard drives. From the performance monitor the 'Avg. Disk Queue' would jump up significantly on the laptop indicating the hard drive was the bottle neck.
Here's some data for total rebuild times...
1) Laptop, Core 2 Duo 2 GHz, 5400 RPM drive (not sure of the cache; it was a standard Dell Inspiron).
Rebuild Time = 112 seconds.
2) Desktop (standard issue), Core 2 Duo 2.3Ghz, single 7200RPM Drive 8MB Cache.
Rebuild Time = 72 seconds.
3) Desktop Core 2 Duo 3Ghz, single 10000 RPM WD Raptor
Rebuild Time = 39 seconds.
The impact of the 10,000 RPM drive cannot be overstated. Builds were significantly quicker, and everything else, like displaying documentation or using the file explorer, was noticeably quicker too. It was a big productivity boost from speeding up the code-build-run cycle.
Given what companies spend on developer salaries, it is insane how much they can waste by equipping developers with the same PCs the receptionist uses.
For C# .NET builds, you can use .NET Demon. It's a product that takes over the Visual Studio build process to make it faster.
It does this by analyzing the changes you made, and builds only the project you actually changed, as well as other projects that actually relied on the changes you made. That means if you only change internal code, only one project needs to build.
Turn off your antivirus. It adds ages to the compile time.
Use distributed compilation. Xoreax IncrediBuild can cut compilation time down to a few minutes.
I've used it on a huge C/C++ solution which usually takes 5-6 hours to compile. IncrediBuild helped reduce this time to 15 minutes.
Instructions for reducing your Visual Studio compile time to a few seconds
Visual Studio is unfortunately not smart enough to distinguish an assembly's interface changes from inconsequential code-body changes. This fact, when combined with a large, intertwined solution, can sometimes create a perfect storm of unwanted 'full builds' nearly every time you change a single line of code.
A strategy to overcome this is to disable the automatic reference-tree builds. To do this, use the 'Configuration Manager' (Build / Configuration Manager...then in the Active solution configuration dropdown, choose 'New') to create a new build configuration called 'ManualCompile' that copies from the Debug configuration, but do not check the 'Create new project configurations' checkbox. In this new build configuration, uncheck every project so that none of them will build automatically. Save this configuration by hitting 'Close'. This new build configuration is added to your solution file.
You can switch from one build configuration to another via the build configuration dropdown at the top of your IDE screen (the one that usually shows either 'Debug' or 'Release'). Effectively this new ManualCompile build configuration will render useless the Build menu options for: 'Build Solution' or 'Rebuild Solution'. Thus, when you are in the ManualCompile mode, you must manually build each project that you are modifying, which can be done by right-clicking on each affected project in the Solution Explorer, and then selecting 'Build' or 'Rebuild'. You should see that your overall compile times will now be mere seconds.
For this strategy to work, it is necessary for the VersionNumber found in the AssemblyInfo and GlobalAssemblyInfo files to remain static on the developer's machine (not during release builds of course), and that you don't sign your DLLs.
A potential risk of using this ManualCompile strategy is that the developer might forget to compile required projects, and when they start the debugger, they get unexpected results (unable to attach debugger, files not found, etc.). To avoid this, it is probably best to use the 'Debug' build configuration to compile a larger coding effort, and only use the ManualCompile build configuration during unit testing or for making quick changes that are of limited scope.
If this is C or C++, and you're not using precompiled headers, you should be.
We had 80+ projects in our main solution, which took around 4 to 6 minutes to build depending on what kind of machine a developer was working on. We considered that way too long: for every single test it really eats away at your FTEs.
So how do you get faster build times? As you seem to already know, it is the number of projects that really hurts the build time. Of course we did not want to get rid of all our projects and simply throw all source files into one. But we had some projects that we could combine nevertheless: as every "repository project" in the solution had its own unit-test project, we simply combined all the unit-test projects into one global unit-test project. That cut the number of projects down by about 12 and somehow saved 40% of the time to build the entire solution.
We are thinking about another solution though.
Have you also tried to set up a new (second) solution with a new project? This second solution simply incorporates all files using solution folders. You might be surprised to see the build time of that new solution-with-just-one-project.
However, working with two different solutions will take some careful consideration. Developers might be inclined to actually -work- in the second solution and completely neglect the first. As the first solution with the 70+ projects will be the solution that takes care of your object hierarchy, it should be the solution where your build server runs all your unit tests. So the server for continuous integration must use the first project/solution. You have to maintain your object hierarchy, right?
The second solution with just one project (which will build mucho faster) will then be the solution where testing and debugging is done by all developers. You do have to make sure they keep an eye on the build server, though! If anything breaks, it MUST be fixed.
Make sure your references are Project references, and not directly to the DLLs in the library output directories.
Also, have these set to not copy locally except where absolutely necessary (The master EXE project).
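In an old-style .csproj, "Copy Local" maps to the <Private> element; a minimal sketch (the library name and path are illustrative) of a project reference that does not copy its output locally looks like this:

<!-- Reference the project for build ordering, but don't copy its DLL
     into this project's bin folder on every build (library name is an example). -->
<ProjectReference Include="..\MyLibrary\MyLibrary.csproj">
  <Name>MyLibrary</Name>
  <Private>False</Private>
</ProjectReference>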
I posted this response originally here:
https://stackoverflow.com/questions/8440/visual-studio-optimizations#8473
You can find many other helpful hints on that page.
If you are using Visual Studio 2008, you can compile using the /MP flag to build a single project in parallel. I have read that this is also an undocumented feature in Visual Studio 2005, but have never tried myself.
You can build multiple projects in parallel by using the /M flag, but this is usually already set to the number of available cores on the machine, though this only applies to VC++ I believe.
I notice this question is ages old, but the topic is still of interest today. The same problem bit me lately, and the two things that improved build performance the most were (1) use a dedicated (and fast) disk for compiling and (2) use the same outputfolder for all projects, and set CopyLocal to False on project references.
Some additional resources:
https://stackoverflow.com/questions/8440/visual-studio-optimizations
http://weblogs.asp.net/scottgu/archive/2007/11/01/tip-trick-hard-drive-speed-and-visual-studio-performance.aspx
http://arnosoftwaredev.blogspot.com/2010/05/how-to-improve-visual-studio-compile.html
http://blog.brianhartsock.com/2009/12/22/analyzing-visual-studio-build-performance/
Some analysis tools:
tools->options->VC++ project settings -> Build Timing = Yes
will tell you build time for every vcproj.
Add the /Bt switch to the compiler command line to see how long every CPP file took to compile.
Use /showIncludes to catch nested includes (header files that include other header files), and see what files could save a lot of IO by using forward declarations.
This will help you optimize compiler performance by eliminating dependencies and performance hogs.
Before spending money to invest in faster hard drives, try building your project entirely on a RAM disk (assuming you have the RAM to spare). You can find various free RAM disk drivers on the net. You won't find any physical drive, including SSDs, that are faster than a RAM disk.
In my case, a project that took 5 minutes to build on a 6-core i7 on a 7200 RPM SATA drive with Incredibuild was reduced by only about 15 seconds by using a RAM disk. Considering the need to recopy to permanent storage and the potential for lost work, 15 seconds is not enough incentive to use a RAM disk and probably not much incentive to spend several hundreds of dollars on a high-RPM or SSD drive.
The small gain may indicate that the build was CPU bound or that Windows file caching was rather effective, but since both tests were done from a state where the files weren't cached, I lean heavily towards CPU-bound compiles.
Depending on the actual code you're compiling your mileage may vary -- so don't hesitate to test.
How big is your build directory after doing a complete build? If you stick with the default setup then every assembly that you build will copy all of the DLLs of its dependencies and its dependencies' dependencies etc. to its bin directory. In my previous job when working with a solution of ~40 projects my colleagues discovered that by far the most expensive part of the build process was copying these assemblies over and over, and that one build could generate gigabytes of copies of the same DLLs over and over again.
Here's some useful advice from Patrick Smacchia, author of NDepend, about what he believes should and shouldn't be separate assemblies:
http://codebetter.com/patricksmacchia/2008/12/08/advices-on-partitioning-code-through-net-assemblies/
There are basically two ways you can work around this, and both have drawbacks. One is to reduce the number of assemblies, which is obviously a lot of work. Another is to restructure your build directories so that all your bin folders are consolidated and projects do not copy their dependencies' DLLs - they don't need to because they are all in the same directory already. This dramatically reduces the number of files created and copied during a build, but it can be difficult to set up and can leave you with some difficulty pulling out only the DLLs required by a specific executable for packaging.
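If you go the consolidated-bin-folder route, the key change is pointing every project's OutputPath at the same directory (the folder layout below is just one possible convention), combined with turning off Copy Local on references via <Private>False</Private> as shown earlier:

<!-- In each .csproj: send build output to one shared folder per configuration.
     The ..\..\build\ location is an assumption; adjust to your repository layout. -->
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
  <OutputPath>..\..\build\Debug\</OutputPath>
</PropertyGroup>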
Perhaps take some common functions and make some libraries; that way the same sources are not compiled over and over again for multiple projects.
If you are worried about different versions of DLLs getting mixed up, use static libraries.
Turn off VSS integration. You may not have a choice in using it, but DLLs get "accidentally" renamed all the time...
And definitely check your pre-compiled header settings. Bruce Dawson's guide is a bit old, but still very good - check it out: http://www.cygnus-software.com/papers/precompiledheaders.html
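As a reference point, in the MSBuild-based .vcxproj format (VS2010 and later; earlier versions expose the same settings through the C/C++ -> Precompiled Headers property pages) the precompiled-header settings look roughly like this, with the conventional stdafx.h/stdafx.cpp names assumed:

<!-- Use the precompiled header for every .cpp in the project...
     (stdafx.h/stdafx.cpp are the conventional defaults, not required names) -->
<ItemDefinitionGroup>
  <ClCompile>
    <PrecompiledHeader>Use</PrecompiledHeader>
    <PrecompiledHeaderFile>stdafx.h</PrecompiledHeaderFile>
  </ClCompile>
</ItemDefinitionGroup>
<!-- ...except the one file that creates it. -->
<ItemGroup>
  <ClCompile Include="stdafx.cpp">
    <PrecompiledHeader>Create</PrecompiledHeader>
  </ClCompile>
</ItemGroup>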
I have a project which has 120 or more exes, libs and dlls and takes a considerable time to build. I use a tree of batch files that call make files from one master batch file. I have had problems with odd things from incremental (or was it temperamental) headers in the past so I avoid them now. I do a full build infrequently, and usually leave it to the end of the day while I go for a walk for an hour (so I can only guess it takes about half an hour). So I understand why that is unworkable for working and testing.
For working and testing I have another set of batch files for each app (or module or library) which also have all the debugging settings in place -- but these still call the same make files. I may switch DEBUG on or off from time to time, and also decide on builds or makes, or whether I also want to build the libs that the module may depend on, and so on.
The batch file also copies the completed result into the test folder (or several test folders). Depending on the settings, this completes in several seconds to a minute (as opposed to, say, half an hour).
I use a different IDE (Zeus), as I like to have control over things like .rc files, and I actually prefer to compile from the command line, even though I am using MS compilers.
Happy to post an example of this batch file if anyone is interested.
Disable file system indexing on your source directories (specifically the obj directories if you want your source searchable)
If this is a web app, setting batch build to true can help depending on the scenario.
<compilation defaultLanguage="c#" debug="true" batch="true" >
You can find an overview here: http://weblogs.asp.net/bradleyb/archive/2005/12/06/432441.aspx
You also may want to check for circular project references. It was an issue for me once.
That is:
Project A references Project B
Project B references Project C
Project C references Project A
One cheaper alternative to Xoreax IB is the use of what I call uber-file builds. It's basically a .cpp file that has
#include "file1.cpp"
#include "file2.cpp"
....
#include "fileN.cpp"
Then you compile the uber units instead of the individual modules. We've seen compile times go from 10-15 minutes down to 1-2 minutes. You might have to experiment with how many #includes per uber file make sense; it depends on the project. Maybe you include 10 files, maybe 20.
You pay a cost so beware:
You can't right-click a file and say "compile...", as you have to exclude the individual .cpp files from the build and include only the uber .cpp files.
You have to be careful of static global variable conflicts.
When you add new modules, you have to keep the uber files up to date
It's kind of a pain, but for a project that is largely static in terms of new modules, the initial pain might be worth it. I've seen this method beat IB in some cases.
If it's a C++ project, then you should be using precompiled headers. This makes a massive difference in compile times. I'm not sure what cl.exe is really doing without precompiled headers; it seems to be looking for lots of STL headers in all the wrong places before finally going to the correct location. This adds entire seconds to every single .cpp file being compiled. I'm not sure if this is a cl.exe bug or some sort of STL problem in VS2008.
Looking at the machine that you're building on, is it optimally configured?
We just got our build time for our largest C++ enterprise-scale product down from 19 hours to 16 minutes by ensuring the right SATA filter driver was installed.
Subtle.
There's an undocumented /MP switch in Visual Studio 2005 (see http://lahsiv.net/blog/?p=40) which enables parallel compilation on a per-file basis rather than per-project. This may speed up compiling the last project, or help if you only compile one project.
When choosing a CPU: L1 cache size seems to have a huge impact on compilation time. Also, it is usually better to have 2 fast cores than 4 slow ones. Visual Studio doesn't use the extra cores very effectively. (I base this on my experience with the C++ compiler, but it is probably also true for the C# one.)
I'm also now convinced there is a problem with VS2008. I'm running it on a dual-core Intel laptop with 3 GB of RAM, with anti-virus switched off. Compiling the solution is often quite slick, but if I have been debugging, a subsequent recompile will often slow down to a crawl. It is clear from the continuous main disk light that there is a disk I/O bottleneck (you can hear it, too). If I cancel the build and shut down VS, the disk activity stops. Restart VS, reload the solution and then rebuild, and it is much faster. Until the next time.
My thoughts are that this is a memory paging issue - VS just runs out of memory and the O/S starts page swapping to try to make space but VS is demanding more than page swapping can deliver, so it slows down to a crawl. I can't think of any other explanation.
VS definitely is not a RAD tool, is it?
Does your company happen to use Entrust for their PKI/Encryption solution by any chance? It turns out, we were having abysmal build performance for a fairly large website built in C#, taking 7+ minutes on a Rebuild-All.
My machine is an i7-3770 with 16gb ram and a 512GB SSD, so performance should not have been that bad. I noticed my build times were insanely faster on an older secondary machine building the same codebase. So I fired up ProcMon on both machines, profiled the builds, and compared the results.
Lo and behold, the slow-performing machine had one difference -- a reference to Entrust.dll in the stack trace. Using this newly acquired info, I continued to search Stack Overflow and found this: MSBUILD (VS2010) very slow on some machines. According to the accepted answer, the problem lies in the fact that the Entrust handler was processing the .NET certificate checks instead of the native Microsoft handler. It is also suggested that Entrust v10 solves this issue, which is prevalent in Entrust 9.
I currently have it uninstalled and my build times plummeted to 24 seconds. YMMV depending on the number of projects you are currently building, and this may not directly address the scaling issue you were inquiring about. I will post an edit to this response if I can provide a fix without resorting to uninstalling the software.
I'm sure there's a problem with VS2008, because the only thing I did was install VS2008 to upgrade my project, which had been created with VS2005.
I've only got 2 projects in my solution. It isn't big.
Compilation with VS2005: 30 seconds
Compilation with VS2008: 5 minutes
