UPDATE: Focus your answers on hardware solutions please.
What hardware/tools/add-ins are you using to improve ASP.NET compilation and first-execution speed? We are looking at solid state drives to speed things up, but the prices are really high right now.
I have two 7200 RPM hard drives in RAID 0 right now and I'm not satisfied with the performance anymore.
So my main question is: what is the most cost-effective way right now to improve ASP.NET compilation speed and overall development performance when you do a lot of debugging?
Scott Gu has a pretty good blog post about this; does anyone have anything else to suggest?
http://weblogs.asp.net/scottgu/archive/2007/11/01/tip-trick-hard-drive-speed-and-visual-studio-performance.aspx
One important thing is to keep the projects for assemblies that rarely change unloaded. When a change is needed, load the project, compile, and unload it again. This makes a huge difference in large solutions.
First, make sure that you are using Web Application Projects (WAP). In our experience, WAP compiles roughly 10x faster than Website Projects.
Then, consider migrating all the logic (including complex UI components) into separate class library projects. The C# compiler is way faster than the ASP.NET compiler (at least for VS 2005).
If you have lots of 3rd-party "Referenced Assemblies", setting CopyLocal=False on all projects except the web application project makes quite a big difference.
I blogged about the perf increases I managed to get by making this simple change:
Speeding Up your Desktop Build - Part 1
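For reference, here is a minimal sketch of what that change looks like if you edit the .csproj by hand rather than through the Properties window (the assembly name and path are placeholders); the CopyLocal setting in the IDE corresponds to the Private metadata on the reference:
<Reference Include="ThirdParty.SomeLibrary">
  <HintPath>..\lib\ThirdParty.SomeLibrary.dll</HintPath>
  <Private>False</Private>
</Reference>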
You can precompile the site, which will improve the first-run experience:
http://msdn.microsoft.com/en-us/library/ms227972.aspx
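As a rough sketch, precompilation can be run from the command line with the aspnet_compiler tool that ships with the .NET Framework (the paths below are placeholders):
aspnet_compiler -v / -p C:\MyWebSite C:\MyWebSitePrecompiled
Here -p points to the physical source directory, -v is the virtual path of the application, and the last argument is the output directory.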
I would recommend adding as much memory to your PC as you can. If your solutions are very large, you may want to explore 64-bit so that your machine can address more than 3 GB of RAM.
Also, Visual Studio 2008 SP1 seems to be markedly faster than previous versions; make certain you are running the latest release with the service packs.
Good Luck!
If you are looking purely at hardware, you will need to search for benchmarks around the web. There are a few articles written specifically on how hardware affects Visual Studio compilation performance; I am just too lazy to find and link them.
Hardware solutions can be endless, because you can get some really high-end equipment if you have the money. Otherwise it's the usual: more memory, a faster processor, and a faster hard drive. Moving to a 64-bit OS also helps.
If you want to be more specific, just off the top of my head: 8 GB or more of memory, and if you can't afford solid state drives I'd go for 10K to 15K RPM hard drives. There is debate over whether quad-core makes any difference, but look up the benchmarks and see what works for you.
If you are looking at just hardware: multi-core processors, a large amount of RAM, and ideally 10K or 15K RPM hard drives.
I personally have noticed a huge improvement in performance with the change to 10K RPM drives.
The best way to improve ASP.NET compile time is to throw more hardware at it. An OCZ Vertex Turbo SSD and an Intel i7 960 gave me a huge boost. You can see my results here.
I switched from websites to web applications. Compile time went down by a factor of ten at least.
Also, I try not to use the debugger if possible ("Run without debugging"). This cuts down the time it takes to start the web application.
Related
Is there a way for me to determine the amount of memory and processor power needed by my application? I recently had a very unpleasant experience when one of my applications kept freezing the computers it was running on. This is obviously related to a lack of hardware power, because it works perfectly on the stronger computers that I used for testing purposes. So my question is: is there a way to calculate the amount of hardware power needed to run the application smoothly?
Almost all of my applications are written in C#, so I would need a method that works with that kind of application.
Thanks
This is obviously related to the lack of hardware power
This entirely depends on what your application is doing. If you are solving problems in a "not so time efficient way", then you can optimize the code.
I would suggest that you analyze your code with a profiler.
This will tell you:
What parts of your code are taking up most RAM/CPU
How much RAM in total your application needed when it peaked
Information about CPU consumption
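If you only need a rough number rather than a full profile, you can also have the application log its own peak memory use via System.Diagnostics; a minimal sketch (the workload comment is a placeholder for whatever you want to measure):
using System;
using System.Diagnostics;

class MemoryReport
{
    static void Main()
    {
        // ... run a representative workload here ...
        Process current = Process.GetCurrentProcess();
        Console.WriteLine("Peak working set: {0} MB", current.PeakWorkingSet64 / (1024 * 1024));
        Console.WriteLine("Peak paged memory: {0} MB", current.PeakPagedMemorySize64 / (1024 * 1024));
    }
}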
This is obviously related to the lack of hardware power, because it works perfectly on the stronger computers that I used for testing purposes,
Whoever set up testing should be fired.
You have to have a set of test computers that are similar to the ones the application will actually run on. That was accepted practice 20 years ago; it seems modern times do not care about that.
Seriously, you NEED to have a test set that is representative of your lowest accepted hardware level.
Otherwise - no, sorry, there is no magic button. Profilers do not necessarily help either (running under a debugger or profiler may itself use more memory). Try a profiler, optimize the code, but in the end you need a decent testbed.
I'd argue that this should be checked during software installation. Later, if the user was prompted to upgrade his/her hardware and dismissed the warning, you shouldn't care about that.
If you're using Windows Installer (MSI), you can play with a custom action and use System.Management classes to detect whatever you want.
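As a sketch of what such a detection step could look like (this assumes a reference to System.Management.dll; the class name and the 2 GB threshold are made up for illustration):
using System;
using System.Management;

class HardwareCheck
{
    static void Main()
    {
        // Query basic machine specs through WMI.
        ManagementObjectSearcher searcher = new ManagementObjectSearcher(
            "SELECT TotalPhysicalMemory, NumberOfLogicalProcessors FROM Win32_ComputerSystem");
        foreach (ManagementObject mo in searcher.Get())
        {
            ulong totalBytes = (ulong)mo["TotalPhysicalMemory"];
            Console.WriteLine("RAM: {0} MB, logical processors: {1}",
                totalBytes / (1024 * 1024), mo["NumberOfLogicalProcessors"]);
            if (totalBytes < 2UL * 1024 * 1024 * 1024)
                Console.WriteLine("Warning: below the recommended hardware level.");
        }
    }
}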
I have seen some really slow build times in a big legacy codebase without proper assembly decomposition, running on a machine with 2 GB of RAM. So, if I wanted to speed it up without a code overhaul, would a machine with 16 GB of RAM (or some other such huge number) be radically faster, if the fairy IT department were to provide one? In other words, is RAM the major bottleneck for sufficiently large .NET projects, or are there other dominant issues?
Any input about similar situation for building Java is also appreciated, just out of pure curiosity.
Performance does not improve with additional RAM once you have more RAM than the application uses. You are not likely to see any further improvement by going to 128 GB of RAM.
We cannot guess the amount needed. Measure it by looking at Task Manager.
It certainly won't do you any harm...
2 GB is pretty small for a dev machine; I use 16 GB as a matter of course.
However, build times are going to be gated by file access sooner or later, so whilst you might get a little improvement, I suspect you won't be blown away by it. ([EDIT] as a commenter says, compilation is likely to be CPU bound too.)
Have you looked into parallel builds (e.g. see this SO question: Visual Studio 2010, how to build projects in parallel on multicore).
Or, can you restructure your code base and maybe move some less frequently updated assemblies into a separate sln, and then reference them as DLLs? (This isn't a great idea in all cases, but sometimes it can be expedient.) From your description of the problem I'm guessing this is easier said than done, but this is how we've achieved good results in our code base.
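On the parallel build point above, the simplest thing to try (assuming you build from the command line; the solution name is a placeholder) is MSBuild's /m switch, which builds independent projects on multiple cores:
msbuild MySolution.sln /m
Inside Visual Studio there is an equivalent setting under Tools > Options > Projects and Solutions > Build and Run ("maximum number of parallel project builds").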
The whole RAM issue is actually one of ROI (return on investment). The more RAM you add to a system, the less likely the application is to have to search for a memory location large enough to store an object of a particular size, and the faster it will go; however, past a certain point it is already so unlikely that the system will pick a location that is too small for the object that adding more is pointless. (Note that the read/write speed of the RAM plays a role in this as well.)
In summary: at 2 GB of RAM you should definitely upgrade to something more like 8 GB or the suggested 16 GB; going much beyond that would be almost pointless, because the bottleneck will then be the processor.
ALSO, it's probably a good idea to note the speed of the RAM, because RAM can itself become a bottleneck if it can only handle a limited clock speed. Generally, though, 1600 MHz is fine.
I'm looking for a way to find bottleneck methods in a solution (lots of projects).
Let's say I have a HUGE program (thousands of methods) and I want to improve performance by finding methods that are called a lot (actually used at runtime) and optimizing them.
I need this for a complex program that's written in C++, C#, and C++/CLI. (I can compile it all in debug mode and have the .pdb files.)
So, I'm looking for some kind of analyzer that will tell me how much CPU time each method is using.
What tool/add-in/feature can I use in Visual Studio to get that information?
I want to be able to run the program for a few minutes and then analyze each method's CPU usage. Or even better: the amount of CPU divided by the number of calls.
It would be even better if I could sort by namespace or by DLL/package/project.
The more expensive Visual Studio editions include a built-in profiler: see this thread.
However, there are more ways to profile; this topic has been covered many times on Stack Overflow, for example here.
Following one of Christian Goltz's links, I've found a program that might do what I want; it profiles both managed and unmanaged code:
AQTime Pro
I've had some good experiences with the dotTrace product by JetBrains. I'm not sure if it has the IDE integration or all the features that you're looking for, but it definitely gets the job done.
This method is low-tech, but works perfectly well.
I also work in a huge application, and when we have performance problems it finds them quickly.
All I know about performance testing is what its name suggests!
But I have some problems, especially with database querying techniques and how they will affect my application's performance under normal conditions and under stress.
So can performance tests calculate a certain page's performance for me?
Can I do that on the development machine (my own PC/localhost)?
Or do I have to test it on the hosting server? Do I have to own a server, or is shared hosting okay?
What books/articles are available, and what good free tools can I use?
I know I asked a lot of questions, but they all add up to help anyone who has the same things spinning in their head when trying to decide which technique to use and can't get a definite opinion from the experienced ones!
Thanks in advance for your time and effort =)
First, if you know you have problems with your DB architecture, then it sounds like you don't really need to do load testing at this time; you'd be better served figuring out what your DB issues are.
As for the overall "how can I load test, and what are some good directions to go?": it depends on a couple of things. First, you could test in your dev environment, though unless it has the same setup as the production environment (server setup/CPU/memory/etc.), it is only going to be an estimate. In general I prefer to use a staging/test environment that mimics the production environment as closely as possible.
If you think you're going to have an application with high usage, you'll want to know what your performance is, period, whether on dedicated or shared hosting. I will say, however, that if you are expecting a high-traffic site/application, you'll probably have a number of reasons to use a dedicated hosting environment (or a cloud-based solution).
There are some decent free tools available; specifically there is http://jmeter.apache.org/, which can plug into a bunch of things. The catch is that, while the GUI is better than it was years ago, it's not as good as some of the commercial options.
You'll ultimately run into the issue that you can only generate so much load from a single client computer, even with one of these packages, and you'll need to start distributing that load. That is where the commercial packages start to provide some real benefits.
For C# specifically, and .NET projects in general, Visual Studio (depending on your version) should have something like Test Projects, which you can read more about here: http://msdn.microsoft.com/en-us/library/ms182605(v=vs.80).aspx That may be closer to what you were asking in the first place.
The most basic approach, without access to the server, is:
Console.WriteLine("Starting at " + DateTime.Now);
// the code you want to measure
Console.WriteLine("Ending at " + DateTime.Now);
Then you can measure which query takes more time.
But you need to test more scenarios; one approach can be better than another in certain cases, and vice versa in others.
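A slightly more precise variation on the same idea, using Stopwatch instead of DateTime.Now (RunQuery is a placeholder for whatever you are measuring):
Stopwatch sw = Stopwatch.StartNew();   // requires using System.Diagnostics;
RunQuery();                            // the code under test
sw.Stop();
Console.WriteLine("Elapsed: " + sw.ElapsedMilliseconds + " ms");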
It's a tricky subject, and you will need more than just Stack Overflow to work through this - though I'm not aware of any books or web sites. This is just my experience talking...
In general, you want to know 2 things:
how many visitors can my site handle?
what do I need to do to increase that number?
You usually need to manage these concurrently.
My approach is to include performance testing into the development lifecycle, by creating a test environment (a dev machine is usually okay) on which I can control all the variables.
I use JMeter to run performance tests mimicking the common user journeys, and establish the number of users where the system starts to exceed maximum allowed response times (I typically use 1 second as the limit). Once I know where that point is, I will use analysis tools to understand what is causing the system to exceed its response time - is it the database? Should I introduce caching? Tools like PAL make this easy; at a more detailed level, you should use profilers (Redgate do a great one).
I run this process for an afternoon, once every two weeks, so there's no nasty surprise at the end of the project. By doing this, I have a high degree of confidence in my application's performance, and I know what to expect on "production" hardware.
On production, it's much harder to get access to the data that allows you to analyze a bottleneck - and once the site is live, it's usually harder to get permission to run performance tests which could bring the site down. On anything other than a start-up site, the infrastructure requirements mean it's usually too expensive to have a test environment that reflects live.
Therefore, I usually don't run a performance test on production which drives the app to the breaking point - but I do run "smoke tests", and collect log files which allow the PAL reports to be generated. The smoke test pushes the environment to a level which I expect to be around 50% of the breaking point - so if I think we've got a capacity of 100 concurrent users, the smoke test will go to 50 concurrent users.
I've not been coding long, so I'm not familiar with which technique is quickest, and I was wondering if there is a way to do this in VS or with a 3rd-party tool?
Thanks
Profilers are great for measuring.
But your question was "How can I determine where the slow parts of my code are?".
That is a different problem. It is diagnosis, not measurement.
I know this is not a popular view, but it's true.
It is like a business that is trying to cut costs.
One approach (top down) is to measure the overall finances, then break it down by categories and departments, and try to guess what could be eliminated. That is measurement.
Another approach (bottom up) is to walk at random into an office, pick someone at random, and ask them what they are doing at that moment and (importantly) why, in detail.
Do this more than once.
That is what Harry Truman did at the outbreak of WW2, in the US defense industry, and immediately uncovered massive fraud and waste, by visiting several sites. That is diagnosis.
In code you can do this in a very simple way: "Pause" it and ask it why it is spending that particular cycle. Usually the call stack tells you why, in detail.
Do this more than once.
This is sampling. Some profilers sample the call stack. But then for some reason they insist on summarizing time spent in each function, inclusive and exclusive. That is like summarizing by department in business, inclusive and exclusive.
It loses the information you need, which is the fine-grained detail that tells you whether the cycles are necessary.
To answer your question:
Just pause your program several times, and capture the call stack each time. If your code is very slow, the wasteful function calls will be on nearly every stack. They will point with precision to the "slow parts of your code".
ADDED: RedGate ANTS is getting there. It can give you cost-by-line, and it is quite spiffy. So if you're in .NET, and can spare 3 figures, and don't mind waiting around to install & learn it, it can tell you much of what your Pause key can tell you, and be much more pretty about it.
Profiling.
RedGate has a product.
JetBrains has a product.
I've used ANTS Profiler and I can join the others with recommendation.
The price is NEGLIGIBLE when you compare it with the amount of dev hours it will save you.
If you're a developer for a living and your company won't buy it for you, either change companies or buy it for yourself.
For profiling large, complex UI applications you often need a set of tools and approaches. I'll outline the approach and tools I used recently on a project to improve the performance of a .NET 2.0 UI application.
First of all, I interviewed users and worked through the use cases myself to come up with a list of target use cases that highlighted the system's worst performing areas. I.e. I didn't want to spend n man-days optimising a feature that was hardly ever used but very slow; I would want to spend time, however, optimising a feature that was a little bit sluggish but invoked 1000 times a day, etc.
Once the candidate use cases were identified, I instrumented my code with my own lightweight logging class (I used some high-performance timers and a custom logging solution because I needed sub-millisecond accuracy). You might, however, be able to get away with log4net and timestamps. The reason I instrumented the code is that it is sometimes easier to read your own logs than the profiler's output. I needed both for a variety of reasons (e.g. measuring .NET user control layouts is not always straightforward using the profiler).
I then ran my instrumented code with the ANTS profiler and profiled the use case. By combining the ANTS profile and my own log files I was very quickly able to discover problems with our application.
We also profiled the server as well as the UI and were able to work out breakdowns for time spent in the UI, time spent on the wire, time spent on the server etc.
Also worth noting is that one run isn't enough, and the first run is usually worth throwing away. Let me explain: PC load, network traffic, JIT compilation status, etc. can all affect the time a particular operation takes. A simple strategy is to measure an operation n times (say 5), throw away the slowest and fastest runs, then analyse the remaining profiles.
EQATEC Profiler is a nice small profiler that is free and easy to use. It probably won't come anywhere near the "wow" factor of ANTS Profiler in terms of features, but it is still very cool IMO and worth a look.
Use a profiler. ANTS costs money but is very nice.
I just set breakpoints; Visual Studio will tell you how many ms have passed between breakpoints, so you can find it manually.
ANTS Profiler is very good.
If you don't want to pay, the newer VS versions come with a profiler, but to be honest it doesn't seem very good. ATI/AMD make a free profiler... but it's not very user friendly (at least to me; I couldn't get any useful info out of it).
The advice I would give is to time the function calls yourself in code. If the calls are fast and you do not have a high-precision timer, or the calls vary in speed for a number of reasons (e.g. every x calls building some kind of cache), try running each one 10,000 times or so, then dividing the result accordingly. This may not be perfect for some sections of code, but if you are unable to find a good, free, 3rd-party solution, it's pretty much all that's left unless you want to pay.
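A rough sketch of that technique (MethodUnderTest and the iteration count are placeholders):
Stopwatch sw = Stopwatch.StartNew();   // requires using System.Diagnostics;
for (int i = 0; i < 10000; i++)
{
    MethodUnderTest();                 // the call you suspect is slow
}
sw.Stop();
Console.WriteLine("Average: " + (sw.Elapsed.TotalMilliseconds / 10000) + " ms per call");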
Yet another option is Intel's VTune.