I need an accurate timer, and DateTime.Now seems not accurate enough. From the descriptions I read, System.Diagnostics.Stopwatch seems to be exactly what I want.
But I have a phobia. I'm nervous about using anything from System.Diagnostics in actual production code. (I use it extensively for debugging with Asserts and PrintLns etc, but never yet for production stuff.) I'm not merely trying to use a timer to benchmark my functions - my app needs an actual timer. I've read on another forum that System.Diagnostics.Stopwatch is only for benchmarking and shouldn't be used in retail code, though no reason was given. Is this correct, or am I (and whoever posted that advice) being too closed-minded about System.Diagnostics? I.e., is it OK to use System.Diagnostics.Stopwatch in production code?
Thanks
Adrian
Under the hood, pretty much all Stopwatch does is wrap QueryPerformanceCounter. As I understand it, Stopwatch is there to provide access to the high-resolution timer - if you need this resolution in production code I don't see anything wrong with using it.
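For example, here is a minimal sketch of using it as a plain production timer (nothing in it is debug-only; the class name is just illustrative):

using System;
using System.Diagnostics;

class TimerDemo
{
    static void Main()
    {
        // Stopwatch picks the high-resolution counter when the hardware has one.
        Console.WriteLine("High resolution: {0}, frequency: {1} ticks/sec",
            Stopwatch.IsHighResolution, Stopwatch.Frequency);

        Stopwatch sw = Stopwatch.StartNew();
        System.Threading.Thread.Sleep(100);   // the work being timed
        sw.Stop();

        Console.WriteLine("Elapsed: {0:F3} ms", sw.Elapsed.TotalMilliseconds);
    }
}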
Yes, System.Diagnostics does sound like it is for debugging only, but don't let the name deceive you. The namespace may seem a bit scary-sounding for use in production code at first (it did to me), but there are plenty of useful things in it.
Some things, such as the Process class, are useful for interacting with the system. With Process.Start you can start other applications, launch a website for the user, open a file or folder, etc.
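For instance, a small sketch (assuming .NET Framework, where Process.Start uses the shell by default; on .NET Core/5+ you would set UseShellExecute = true for the URL case; paths and names here are illustrative):

using System.Diagnostics;

class LauncherDemo
{
    static void Main()
    {
        Process.Start("https://example.com");      // open a site in the default browser
        Process.Start("explorer.exe", @"C:\Temp"); // open a folder
        Process.Start("notepad.exe");              // start another application
    }
}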
Other things, such as the Trace class, can help you track down bugs in production code. Granted, you will not always use them in production code, but they are very useful for logging and tracking down that elusive bug on a remote machine.
Don't worry about the name.
You say you've read on another forum not to use classes from System.Diagnostics in production. But the only source you should worry about is Microsoft, who created the code. Their documentation says that the Stopwatch class:
Provides a set of methods and properties that you can use to accurately measure elapsed time.
They don't say, "except in production".
AFAIK Stopwatch is a shell over QueryPerformanceCounter functionality. This function is the basis of a lot of performance-counter-related measurements. QueryPerformanceCounter is very fast to call and perfectly safe. If you feel paranoid about the Diagnostics namespace, P/Invoke it directly.
Stopwatch is basically a neat wrapper around the native QueryPerformanceCounter and QueryPerformanceFrequency functions. If you don't feel comfortable using the System.Diagnostics namespace, you can access these directly.
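If you do go that route, here is a sketch of the raw P/Invoke (these are the actual kernel32 signatures; the helper around them is just an illustration):

using System;
using System.Runtime.InteropServices;

static class HighResTimer
{
    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceCounter(out long lpPerformanceCount);

    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceFrequency(out long lpFrequency);

    public static double MeasureMilliseconds(Action action)
    {
        long frequency, start, end;
        QueryPerformanceFrequency(out frequency); // ticks per second
        QueryPerformanceCounter(out start);
        action();
        QueryPerformanceCounter(out end);
        return (end - start) * 1000.0 / frequency;
    }
}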
Using the performance counter is very common; there is nothing wrong with that. AFAIK, there is no higher timer precision available. Note that QueryPerformanceCounter might lead to problems on multiprocessor machines, but the MSDN article linked before gives some additional information on that. It is advisable to make sure System.Diagnostics.Stopwatch handles that in the background, or to set the thread affinity manually - otherwise your timer might jump back in time!
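If you do want to pin the timing thread to one core yourself, the Win32 call involved is SetThreadAffinityMask; a rough sketch follows (whether you actually need this depends on your hardware and OS):

using System;
using System.Runtime.InteropServices;

static class TimerAffinity
{
    [DllImport("kernel32.dll")]
    static extern IntPtr GetCurrentThread();

    [DllImport("kernel32.dll")]
    static extern UIntPtr SetThreadAffinityMask(IntPtr hThread, UIntPtr dwThreadAffinityMask);

    // Pin the calling thread to CPU 0 so successive counter readings
    // come from the same core; returns the previous affinity mask.
    public static UIntPtr PinToFirstCpu()
    {
        return SetThreadAffinityMask(GetCurrentThread(), (UIntPtr)1);
    }
}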
Note that for very high precision measurements, there are some subtleties that need to be taken into account. If you need this much precision, these might be of some concern.
There are several different timer classes in the .NET base class library - which one is best suited to your needs can only be determined by you.
Here is a good article from MSDN magazine on the subject (Comparing the Timer Classes in the .NET Framework Class Library).
Depending on what you're using the timer for, you may have other issues to consider. Windows does not provide guarantees on timing of execution, so you shouldn't rely on it for any real-time processing (there are real-time extensions you can get for Windows that provide hard real-time scheduling). I also suspect you could lose precision as a result of context switching after you capture the time interval and before you do something with it that depends on its precision. In principle, this could be an arbitrarily long period of time; in practice it should be on the order of milliseconds. It really depends on how mission-critical this timing is.
Related
I just read http://aspalliance.com/2062_The_Darkness_Behind_DateTimeNow and started to wonder if this is really something to worry about. The graph in the article clearly shows that using DateTime.Now is 'much' slower than using DateTime.UtcNow.
Is this graph meaningful for any application you have written? Is this something you have noticed yourself? Should I change my code not to use DateTime.Now anymore? Basically, have you ever noticed a slowdown caused by using DateTime.Now?
Can I go sleep without worrying about my miseducation of using DateTime.Now?
As with any preconceived performance issue, the answer is: test it. Does your app call DateTime.Now many, many times in some performance-critical section of code? If not, then I severely doubt it will cause you any appreciable problems; and even if it does, you should still test to see how much slowdown the call causes relative to the entire operation.
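A trivial micro-benchmark sketch along those lines (the iteration count is arbitrary; measure your own workload):

using System;
using System.Diagnostics;

class NowBenchmark
{
    static void Main()
    {
        const int iterations = 10000000;
        long sink = 0;

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) sink += DateTime.Now.Ticks;
        sw.Stop();
        Console.WriteLine("DateTime.Now:    {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) sink += DateTime.UtcNow.Ticks;
        sw.Stop();
        Console.WriteLine("DateTime.UtcNow: {0} ms", sw.ElapsedMilliseconds);

        Console.WriteLine("(ignore: {0})", sink); // keeps the loops from being optimised away
    }
}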
No, hasn't impacted any app I've written. That said, UtcNow is useful beyond perf stuff if you're designing server-based software, since your users may span timezones.
I've been badly let down and received an application that in certain situations is at least 100 times too slow, and which I have to release to our customers very soon (a matter of weeks).
Through some very simple profiling I have discovered that the bottleneck is its use of .NET Remoting to transfer data between a Windows service and the graphical front-end - both running on the same machine.
Microsoft guidelines say "Minimize round trips and avoid chatty interfaces": write
MyComponent.SaveCustomer("bob", "smith");
rather than
MyComponent.Firstname = "bob";
MyComponent.LastName = "smith";
MyComponent.SaveCustomer();
I think this is the root of the problem in our application. Unfortunately calls to MyComponent.* (the profiler shows that 99.999% of the time is spent in such statements) are scattered liberally throughout the source code and I don't see any hope of redesigning the interface in accordance with the guidelines above.
Edit: In fact, most of the time the front-end reads properties from MyComponent rather than writes to it. But I suspect that MyComponent can change at any time in the back-end.
I looked to see if I can read all properties from MyComponent in one go and then cache them locally (ignoring the change-at-any-time issue above), but that would involve altering hundreds of lines of code.
My question is: are there any 'quick-win' things I can try to improve performance?
I need at least a 100-times speed-up. I am a C/C++/Delphi programmer and am pretty much unfamiliar with C#/.NET/Remoting beyond what I have read up on in the last couple of days. I'm looking for things that can be completed in a few days - a major restructuring of the code is not an option.
Just for starters, I have already confirmed that it is using BinaryFormatter.
(Sorry, this is probably a terrible question along the lines of 'How can I feasibly fix X if I rule out all of the feasible options'… but I'm desperate!)
Edit 2
In response to Richard's comment below: I think my question boils down to:
Is there any setting I can change to reduce the cost of a .NET Remoting round-trip when both ends of the connection are on the same machine?
Is there any setting I can change to reduce the number of round-trips - so that each invocation of a remote object property doesn't result in a separate round-trip? And might this break anything?
Under .NET Remoting you have three ways of communicating: HTTP, TCP and IPC. If the communication is on the same PC, I suggest using IPC channels; it will speed up your calls.
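A minimal sketch of registering an IPC channel on both ends (the port name and URI are placeholders, and MyComponent is assumed to be the MarshalByRefObject from the question):

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

// Server side (in the Windows service):
class RemotingServer
{
    static void Main()
    {
        IpcChannel channel = new IpcChannel("myAppPort");
        ChannelServices.RegisterChannel(channel, false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(MyComponent), "MyComponent.rem", WellKnownObjectMode.Singleton);
        Console.ReadLine(); // keep the host alive
    }
}

// Client side (in the GUI), instead of a TCP/HTTP URL:
// MyComponent proxy = (MyComponent)Activator.GetObject(
//     typeof(MyComponent), "ipc://myAppPort/MyComponent.rem");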
In short, no, there are no quick wins here. Personally I would not make MyComponent (as a DTO) a MarshalByRefObject (which is presumably the problem), as those round trips are going to cripple you. I would keep it as a regular class and just move a few key methods to pump them around (i.e. have a MarshalByRef manager/repository/etc. class).
That should reduce round-trips; if you still have problems then it will probably be bandwidth related; this is easier to fix; for example by changing the serializer. protobuf-net allows you to do this easily by simply implementing ISerializable and forwarding the two methods (one from the interface, plus the ctor) to ProtoBuf.Serializer - it then does all the work for you, and works with remoting. I can provide examples of this if you like.
Actually, protobuf-net may help with CPU usage too, as it is a much more CPU-efficient serializer.
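Here is roughly what the ISerializable forwarding looks like with protobuf-net (the class and its members are invented for illustration):

using System;
using System.Runtime.Serialization;
using ProtoBuf;

[Serializable, ProtoContract]
public class Customer : ISerializable
{
    [ProtoMember(1)] public string FirstName { get; set; }
    [ProtoMember(2)] public string LastName { get; set; }

    public Customer() { }

    // Deserialization ctor: let protobuf-net rebuild the instance.
    protected Customer(SerializationInfo info, StreamingContext context)
    {
        Serializer.Merge(info, this);
    }

    // Forward serialization to protobuf-net instead of BinaryFormatter's default.
    void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
    {
        Serializer.Serialize(info, this);
    }
}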
Could you make MyComponent a class that will cache the values and only submit them when SaveCustomer() is called?
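Something like this sketch, reusing the hypothetical interface from the question:

// Local facade that buffers property writes and sends them
// to the remote component in a single round-trip.
public class CustomerBuffer
{
    private readonly MyComponent remote; // the remoting proxy

    public string FirstName { get; set; }
    public string LastName { get; set; }

    public CustomerBuffer(MyComponent remote) { this.remote = remote; }

    public void Save()
    {
        // One remote call instead of one per property.
        remote.SaveCustomer(FirstName, LastName);
    }
}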
You can try compressing the traffic. Even if it doesn't give a 100-times increase, you'll still gain some performance benefit.
If you need the latest data (always see the real value), and the cost of getting the data each time dominates the runtime then you need to be radical.
How about changing from polling to push? Rather than calling the remote side each time you need a value, have the remote side push all changes and cache the latest values locally.
Local lookups (after the initial get) are always up to date with all remoting overhead being done in the background (on another thread). Just be careful about thread safety for non-atomic types.
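A rough sketch of the local cache side, assuming the server exposes some kind of change notification (the shapes here are invented):

using System.Collections.Generic;

// Holds the latest values pushed from the server; reads are local and cheap.
public class ValueCache
{
    private readonly object sync = new object();
    private readonly Dictionary<string, object> values = new Dictionary<string, object>();

    // Called from the background thread that receives server pushes.
    public void OnServerPush(string key, object value)
    {
        lock (sync) { values[key] = value; }
    }

    public object Get(string key)
    {
        lock (sync)
        {
            object value;
            return values.TryGetValue(key, out value) ? value : null;
        }
    }
}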
I am trying to get some detailed performance information from an application my company is developing. Examples of information I am trying to get would be how long a network transaction takes, how much CPU/memory the application is using, how long it takes for a given method to complete, etc.
I have had some failed attempts at this in the past (like trying to measure small time periods by using DateTime.Now). On top of that I don't know much of anything about getting CPU and memory statistics. Are there any good .Net classes, libraries, or frameworks out there that would help me collect this sort of information and/or log it to a text file?
What you are looking for is performance counters. For .NET you need the PerformanceCounter class.
Performance counters are one way to go, and the System.Diagnostics.Stopwatch class is another good foundational place to look for doing this.
With performance counters (beyond those provided) you will need to manage both the infrastructure of tracking the events, as well as reporting the data. The performance counter base classes supply the connection details for hooking up to the event log, but you will need to provide other reporting infrastructure if you need to report the data in another way (such as to a log file, or database).
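A minimal sketch of reading two stock counters (the category and counter names are the standard Windows ones, but they can vary by OS version and locale):

using System;
using System.Diagnostics;
using System.Threading;

class CounterDemo
{
    static void Main()
    {
        PerformanceCounter cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        PerformanceCounter mem = new PerformanceCounter("Memory", "Available MBytes");

        cpu.NextValue();     // the first CPU reading is always 0; prime it
        Thread.Sleep(1000);

        Console.WriteLine("CPU: {0:F1}%", cpu.NextValue());
        Console.WriteLine("Available memory: {0} MB", mem.NextValue());
    }
}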
The Stopwatch class is a wrapper around the high-performance timer, giving you microsecond or nanosecond resolution depending on the processor and platform. If you do not need resolution that high, you can use System.DateTime.Now.Ticks to get the current tick count of the system clock and do differential math with that, giving you millisecond or better precision for most operations.
When tracking CPU statistics be aware that multiple processors and multiple cores will complicate any accurate statistics in some cases.
One last caution with performance counters, be aware that not all performance counters are on all machines. For instance ASP.NET counters are not present on a machine which does not have IIS installed, etc.
For a modern open-source library to do performance metrics and monitoring, consider using App Metrics.
GitHub: https://github.com/AppMetrics/AppMetrics
Website: https://www.app-metrics.io
A platform-independent open-source performance recorder built on top of Aspect Injector can be found here: https://gitlab.com/hectorjsmith/csharp-performance-recorder. It can be added to any C# project. Instructions on how to use it can be found in the README on GitLab.
Benefits
It's aspect oriented, meaning you don't have to litter your code with DateTime.Now - you just annotate the appropriate methods
It takes care of pretty-printing the results - you can do with them what you like, e.g. printing them to a file
Notes
This performance recorder is focused on the timing of methods only. It doesn't cover CPU or memory usage.
For CPU/memory, use performance counters. For more specific information about particular methods (or lines of code) and particular objects, use a profiler. Red Gate makes a great profiler.
http://www.red-gate.com/products/ants_performance_profiler
I'm writing a plug-in for another program in C#.NET, and am having performance issues where commands take a lot longer than I would like. The plug-in reacts to events in the host program, and also depends on utility methods of the host program's SDK. My plug-in has a lot of recursive functions because I'm doing a lot of reading and writing to a tree structure. Plus I have a lot of event subscriptions between my plug-in and the host application, as well as event subscriptions between classes in my plug-in.
How can I figure out what is taking so long for a task to complete? I can't use regular breakpoint-style debugging - it's not that it doesn't work, it's just too slow. I have set up a static "LogWriter" class that I can reference from all my classes and that lets me write timestamped lines to a log file. Is there another way? Does Visual Studio keep some kind of timestamped log that I could use instead? Is there some way to view the call stack after the application has closed?
You need to use a profiler. Here's a link to a good one: ANTS Performance Profiler.
Update: You can also write messages at control points using Debug.Write. Then load the DebugView application, which displays all your debug strings with precise time stamps. It is freeware and very good for quick debugging and profiling.
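For example (DebugView timestamps each line itself, but adding your own costs nothing; note that Debug.WriteLine is compiled out of release builds):

using System;
using System.Diagnostics;

static class Checkpoints
{
    // DebugView captures OutputDebugString, which Debug.WriteLine ends up calling.
    public static void Log(string message)
    {
        Debug.WriteLine(string.Format("{0:HH:mm:ss.fff} {1}", DateTime.Now, message));
    }
}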
My Profiler List includes ANTS, dotTrace, and AQtime.
However, looking more closely at your question, it seems to me that you should do some unit testing at the same time you're doing profiling. Maybe start by doing a quick overall performance scan, just to see which areas need most attention. Then start writing some unit tests for those areas. You can then run the profiler while running those unit tests, so that you'll get consistent results.
In my experience, the best method is also the simplest. Get it running, and while it is being slow, hit the "pause" button in the IDE. Then make a record of the call stack. Repeat this several times. (Here's a more detailed example and explanation.)
What you are looking for is any statement that appears on more than one stack sample that isn't strictly necessary. The more samples it appears on, the more time it takes. The way to tell if the statement is necessary is to look up the stack, because that tells you why it is being done.
Anything that causes a significant amount of time to be consumed will be revealed by this method, and recursion does not bother it.
People seem to tackle problems like this in one of two ways:
Try to get good measurements before doing anything.
Just find something big that you can get rid of, rip it out, and repeat.
I prefer the latter, because it's fast, and because you don't have to know precisely how big a tumor is to know it's big enough to remove. What you do need to know is exactly where it is, and that's what this method tells you.
Sounds like you want a code 'profiler'. http://en.wikipedia.org/wiki/Code_profiler#Use_of_profilers
I'm unfamiliar with which profilers are the best for C#, but I came across this link after a quick google which has a list of free open-source offerings. I'm sure someone else will know which ones are worth considering :)
http://csharp-source.net/open-source/profilers
Despite the title of this topic I must argue that the "best" way is subjective, we can only suggest possible solutions.
I have had experience using Redgate ANTS Performance Profiler which will show you where the bottlenecks are in your application. It's definitely worth checking out.
Visual Studio Team System has a profiler baked in. It's far from perfect, but for simple applications you can kind of get it to work.
Recently I have had the most success with EQATEC's free profiler, or rolling my own tiny profiling class where needed.
Also, there have been quite a few questions about profilers in that past see: http://www.google.com.au/search?hl=en&q=site:stackoverflow.com+.net+profiler&btnG=Google+Search&meta=&aq=f&oq=
Don't ever forget Rico Mariani's advice on how to carry out a good perf investigation.
You can also use performance counters for ASP.NET applications.
I've not been coding long, so I'm not familiar with which technique is quickest. I was wondering if there is a way to do this in VS or with a 3rd-party tool?
Thanks
Profilers are great for measuring.
But your question was "How can I determine where the slow parts of my code are?".
That is a different problem. It is diagnosis, not measurement.
I know this is not a popular view, but it's true.
It is like a business that is trying to cut costs.
One approach (top down) is to measure the overall finances, then break it down by categories and departments, and try to guess what could be eliminated. That is measurement.
Another approach (bottom up) is to walk in at random into an office, pick someone at random, and ask them what they are doing at that moment and (importantly) why, in detail.
Do this more than once.
That is what Harry Truman did at the outbreak of WW2, in the US defense industry, and immediately uncovered massive fraud and waste, by visiting several sites. That is diagnosis.
In code you can do this in a very simple way: "Pause" it and ask it why it is spending that particular cycle. Usually the call stack tells you why, in detail.
Do this more than once.
This is sampling. Some profilers sample the call stack. But then for some reason they insist on summarizing time spent in each function, inclusive and exclusive. That is like summarizing by department in business, inclusive and exclusive.
It loses the information you need, which is the fine-grain detail that tells if the cycles are necessary.
To answer your question:
Just pause your program several times, and capture the call stack each time. If your code is very slow, the wasteful function calls will be on nearly every stack. They will point with precision to the "slow parts of your code".
ADDED: RedGate ANTS is getting there. It can give you cost-by-line, and it is quite spiffy. So if you're in .NET, and can spare 3 figures, and don't mind waiting around to install & learn it, it can tell you much of what your Pause key can tell you, and be much more pretty about it.
Profiling.
RedGate has a product.
JetBrains has a product.
I've used ANTS Profiler and I can join the others with recommendation.
The price is NEGLIGIBLE when you compare it with the amount of dev hours it will save you.
If you're a developer for a living, and your company won't buy it for you, either change company or buy it for yourself.
For profiling large complex UI applications then you often need a set of tools and approaches. I'll outline the approach and tools I used recently on a project to improve the performance of a .Net 2.0 UI application.
First of all I interviewed users and worked through the use cases myself to come up with a list of target use cases that highlighted the system's worst-performing areas. That is, I didn't want to spend n man-days optimising a feature that was hardly ever used but very slow. I would want to spend time, however, optimising a feature that was a little bit sluggish but invoked 1000 times a day, etc.
Once the candidate use cases were identified, I instrumented my code with my own lightweight logging class (I used some high-performance timers and a custom logging solution because I needed sub-millisecond accuracy). You might, however, be able to get away with log4net and time stamps. The reason I instrumented the code is that it is sometimes easier to read your own logs than the profiler's output. I needed both for a variety of reasons (e.g. measuring .NET user control layouts is not always straightforward using the profiler).
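A sketch of such a lightweight logger (not the actual class I used, just the idea) might look like this:

using System;
using System.Diagnostics;
using System.IO;

// Minimal scope timer with sub-millisecond resolution, appending to a log file.
public class PerfLog : IDisposable
{
    private static readonly Stopwatch clock = Stopwatch.StartNew();
    private static readonly StreamWriter writer = new StreamWriter("perf.log") { AutoFlush = true };

    private readonly string label;
    private readonly double startMs;

    public PerfLog(string label)
    {
        this.label = label;
        this.startMs = clock.Elapsed.TotalMilliseconds;
    }

    public void Dispose()
    {
        writer.WriteLine("{0}: {1:F3} ms", label, clock.Elapsed.TotalMilliseconds - startMs);
    }
}

// Usage: using (new PerfLog("LayoutMainForm")) { /* code under test */ }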
I then ran my instrumented code with the ANTS profiler and profiled the use case. By combining the ANTS profile and my own log files I was very quickly able to discover problems with our application.
We also profiled the server as well as the UI and were able to work out breakdowns for time spent in the UI, time spent on the wire, time spent on the server etc.
Also worth noting is that one run isn't enough, and the first run is usually worth throwing away. Let me explain: PC load, network traffic, JIT compilation status etc. can all affect the time a particular operation takes. A simple strategy is to measure an operation n times (say 5), throw away the slowest and fastest runs, then analyse the remaining profiles.
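That strategy is mechanical enough to wrap up; a sketch (assuming runs is at least 3):

using System;
using System.Diagnostics;
using System.Linq;

static class Measure
{
    // Run the operation n times, discard the fastest and slowest runs,
    // and average the rest (the first, JIT-affected run is the usual outlier).
    public static double TrimmedAverageMs(Action operation, int runs)
    {
        double[] times = new double[runs];
        for (int i = 0; i < runs; i++)
        {
            Stopwatch sw = Stopwatch.StartNew();
            operation();
            sw.Stop();
            times[i] = sw.Elapsed.TotalMilliseconds;
        }
        return times.OrderBy(t => t).Skip(1).Take(runs - 2).Average();
    }
}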
Eqatec profiler is a cute small profiler that is free and easy to use. It probably won't come anywhere near the "wow" factor of Ants profiler in terms of features but it still is very cool IMO and worth a look.
Use a profiler. ANTS costs money but is very nice.
I just set breakpoints; Visual Studio will tell you how many ms have passed between breakpoints, so you can find it manually.
ANTS Profiler is very good.
If you don't want to pay, the newer VS versions come with a profiler, but to be honest it doesn't seem very good. ATI/AMD make a free profiler... but it's not very user-friendly (to me; I couldn't get any useful info out of it).
The advice I would give is to time the function calls yourself with code. If they are fast and you do not have a high-precision timer, or the calls vary in slowness for a number of reasons (e.g. every x calls building some kind of cache), try running each one 10,000 times or so, then dividing the result accordingly. This may not be perfect for some sections of code, but if you are unable to find a good, free, 3rd-party solution, it's pretty much what's left unless you want to pay.
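A sketch of that repeat-and-divide technique:

using System;
using System.Diagnostics;

static class MicroTimer
{
    // Time a fast call by repeating it many times and dividing,
    // so the clock's resolution stops mattering.
    public static double AverageMicroseconds(Action call, int iterations)
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) call();
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds * 1000.0 / iterations;
    }
}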
Yet another option is Intel's VTune.