Is using DateTime.Now really something to worry about? - c#

I just read http://aspalliance.com/2062_The_Darkness_Behind_DateTimeNow and started to wonder if this is really something to worry about. The graph in the article clearly shows that using DateTime.Now is 'much' slower than using DateTime.UtcNow.
Is this graph meaningful for any application you have written? Is this something you noticed yourself? Should I be changing my code not to use DateTime.Now anymore? Basically, have you ever noticed a slowdown yourself caused by using DateTime.Now?
Can I go to sleep without worrying about my miseducation of using DateTime.Now?

As is the answer to any preconceived performance issue: test it. Does your app call DateTime.Now many, many times in some performance-critical section of code? If not, then I severely doubt it will cause you any appreciable problems; and even if it does, you should still test it to see how much slowdown the call causes relative to the entire operation.
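If you do want to test it on your own machine, a minimal micro-benchmark sketch might look like this (the iteration count and the tick-accumulating sink variable are my own choices, the latter there to defeat dead-code elimination):

using System;
using System.Diagnostics;

class NowVsUtcNow
{
    static void Main()
    {
        const int iterations = 10000000; // arbitrary; large enough to smooth out noise
        long sink = 0;                   // accumulate ticks so the JIT can't discard the calls

        // Warm both properties up so JIT cost isn't measured.
        sink += DateTime.Now.Ticks + DateTime.UtcNow.Ticks;

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            sink += DateTime.Now.Ticks;
        sw.Stop();
        Console.WriteLine("DateTime.Now:    {0} ms", sw.ElapsedMilliseconds);

        sw.Reset();
        sw.Start();
        for (int i = 0; i < iterations; i++)
            sink += DateTime.UtcNow.Ticks;
        sw.Stop();
        Console.WriteLine("DateTime.UtcNow: {0} ms", sw.ElapsedMilliseconds);

        Console.WriteLine(sink); // keep the sink alive
    }
}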

No, hasn't impacted any app I've written. That said, UtcNow is useful beyond perf stuff if you're designing server-based software, since your users may span timezones.

Related

What are the dangers of Thread.Sleep in multi-threaded code in a production environment?

I am dealing with multi-threaded code written by my predecessor in a C# WinForms application that handles large volumes of data concurrently in a production environment. I have identified that at some points within the code Thread.Sleep(20) is used. I am not an expert in multithreading and have only basic knowledge of threading and synchronisation primitives. I need to know whether there are any dangers associated with Thread.Sleep.
It's not going to be explicitly or directly dangerous. It's almost certainly wasting effort, as explicitly forcing your program to not do work when it has work to do is almost never sensible.
It's also a pretty significant red flag that there's a race condition in the code and, rather than actually figure out what it is or how to fix it, the programmer simply added in Sleep calls until he stopped seeing it. If true, it would mean that the program is still unstable and could potentially break at any time if enough other variables change, and that the issue should be actually fixed using proper synchronization techniques.
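If the Sleep(20) calls are indeed masking a producer/consumer race, one proper fix is the classic Monitor.Wait/Pulse pattern. This is only a sketch under that assumption - the SharedQueue name and string items are hypothetical, since the original code isn't shown:

using System.Collections.Generic;
using System.Threading;

class SharedQueue
{
    private readonly Queue<string> _items = new Queue<string>();
    private readonly object _gate = new object();

    public void Add(string item)
    {
        lock (_gate)
        {
            _items.Enqueue(item);
            Monitor.Pulse(_gate); // wake one waiting consumer
        }
    }

    public string Take()
    {
        lock (_gate)
        {
            while (_items.Count == 0)
                Monitor.Wait(_gate); // releases the lock and blocks; no Sleep needed
            return _items.Dequeue();
        }
    }
}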

CPU is at 99% when writing a threading application

I'm creating a C# Windows application, something like a download manager. When I run this application the CPU goes to 99%. It is written as a threaded application. How can I start solving this problem?
Thank you
Look for any tight loops you've got in your code - it's almost certainly due to one of those. Something like this:
while (!finished)
{
    progressBar.Value = DownloadProgress;
}
Without seeing your code though, it's hard to guess any more accurately than that.
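As a hedged alternative for the progress-bar case above, you can let a WinForms Timer poll a few times a second instead of spinning. All the names here (DownloaderForm, DownloadProgress) are placeholders, since the asker's code isn't shown:

using System.Windows.Forms;

public partial class DownloaderForm : Form
{
    private readonly Timer _uiTimer = new Timer();
    private readonly ProgressBar progressBar = new ProgressBar(); // placeholder control
    private volatile int DownloadProgress; // 0-100, updated by the download threads (placeholder)

    public DownloaderForm()
    {
        Controls.Add(progressBar);
        _uiTimer.Interval = 250; // ms; four UI updates per second is plenty
        _uiTimer.Tick += delegate { progressBar.Value = DownloadProgress; };
        _uiTimer.Start();
    }
}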
I suggest you start by profiling your application to identify hot spots and then rework the code to eliminate them. Profile your application: the number of active threads and the CPU consumed by each thread; and profile individual functions to catch any CPU-heavy ones.
Yes, profiling is a good idea. Use e.g. Red Gate ANTS Performance Profiler, which is free to try for 14 days.
This is my favorite place for threading; I don't think you can find a simpler article on threading strategies.
http://www.albahari.com/threading/
I would try with Jon's answer and then implement Muek's answer.
You have a tight loop somewhere in your code. Even at low priority (if you set one), a loop that doesn't release control back to the kernel thread scheduler will consume all of the CPU until control is TAKEN from it. For example, you can have:
while (!_shouldStop)
{
    DoProcessing();
}
This really IS bad, because it will certainly use all of your CPU.
To solve this, the easiest way is to use either Sleep(100) or Sleep(0) inside your loop, something like:
while (!_shouldStop)
{
    DoProcessing();
    Thread.Sleep(0);
}
There are also better (and somewhat more complicated) ways to do this - events, for example - but to start with, your app will behave much better with that in place.
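For instance, here's a minimal sketch of the event-based approach, reusing the _shouldStop flag from the snippet above (the Worker class name, SignalWork, and the DoProcessing body are placeholders, not from the original question):

using System.Threading;

class Worker
{
    private readonly AutoResetEvent _workAvailable = new AutoResetEvent(false);
    private volatile bool _shouldStop;

    // Called by producers whenever there is something to process.
    public void SignalWork() { _workAvailable.Set(); }

    public void Stop()
    {
        _shouldStop = true;
        _workAvailable.Set(); // wake the loop so it can observe the flag
    }

    public void Run()
    {
        while (!_shouldStop)
        {
            _workAvailable.WaitOne(); // blocks at 0% CPU until signaled
            if (!_shouldStop)
                DoProcessing();
        }
    }

    private void DoProcessing() { /* actual work here */ }
}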

Can Stopwatch be used in production code?

I need an accurate timer, and DateTime.Now seems not accurate enough. From the descriptions I read, System.Diagnostics.Stopwatch seems to be exactly what I want.
But I have a phobia. I'm nervous about using anything from System.Diagnostics in actual production code. (I use it extensively for debugging with Asserts and PrintLns etc., but never yet for production stuff.) I'm not merely trying to use a timer to benchmark my functions - my app needs an actual timer. I've read on another forum that System.Diagnostics.Stopwatch is only for benchmarking, and shouldn't be used in retail code, though there was no reason given. Is this correct, or am I (and whoever posted that advice) being too closed-minded about System.Diagnostics? I.e., is it OK to use System.Diagnostics.Stopwatch in production code?
Thanks
Adrian
Under the hood, pretty much all Stopwatch does is wrap QueryPerformanceCounter. As I understand it, Stopwatch is there to provide access to the high-resolution timer - if you need this resolution in production code I don't see anything wrong with using it.
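For reference, typical production use is only a few lines; a minimal sketch:

using System;
using System.Diagnostics;

class StopwatchDemo
{
    static void Main()
    {
        // Confirm whether the high-resolution counter backs Stopwatch on this machine.
        Console.WriteLine("High resolution: {0}", Stopwatch.IsHighResolution);
        Console.WriteLine("Ticks per second: {0}", Stopwatch.Frequency);

        Stopwatch sw = Stopwatch.StartNew();
        // ... the operation you actually want to time ...
        sw.Stop();

        Console.WriteLine("Elapsed: {0:F3} ms", sw.Elapsed.TotalMilliseconds);
    }
}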
Yes, System.Diagnostics does sound like it is for debugging only, but don't let the name deceive you. The System.Diagnostics namespace may seem a bit scary sounding for use in production code at first (it did for me), but there are plenty of useful things in that namespace.
Some things, such as the Process class, are useful for interacting with the system. With Process.Start you can start other applications, launch a website for the user, open a file or folder, etc.
Others things, such as the Trace class, can help you track down bugs in production code. Granted, you will not always use them in production code, but they are very useful for logging and tracking down that elusive bug on a remote machine.
Don't worry about the name.
You say you've read on another forum not to use classes from System.Diagnostics in production. But the only source you should worry about is Microsoft, who created the code. They say that the Stopwatch class:
Provides a set of methods and properties that you can use to accurately measure elapsed time.
They don't say, "except in production".
AFAIK, Stopwatch is a shell over QueryPerformanceCounter functionality. That function is the basis of a lot of performance-counter-related measurements. QPC is very fast to call and perfectly safe. If you feel paranoid about the Diagnostics namespace, P/Invoke QPC directly.
The Stopwatch is basically a neat wrapper around the native QueryPerformanceCounter and QueryPerformanceFrequency methods. If you don't feel comfortable using the System.Diagnostics namespace, you can access these directly.
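If you'd rather bypass System.Diagnostics entirely, a minimal P/Invoke sketch might look like this (kernel32 declarations only; error handling omitted, and the HighResTimer wrapper name is mine):

using System;
using System.Runtime.InteropServices;

static class HighResTimer
{
    [DllImport("kernel32.dll")]
    private static extern bool QueryPerformanceCounter(out long count);

    [DllImport("kernel32.dll")]
    private static extern bool QueryPerformanceFrequency(out long frequency);

    public static long Now()
    {
        long count;
        QueryPerformanceCounter(out count);
        return count;
    }

    public static double ElapsedSeconds(long start, long end)
    {
        long frequency;
        QueryPerformanceFrequency(out frequency);
        return (end - start) / (double)frequency;
    }
}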
Using the performance counter is very common; there is nothing wrong with that. AFAIK, there is no higher timer precision available. Note that QPC might lead to problems on multi-processor machines, but the MSDN article linked before gives some additional information on that. It is advisable to make sure System.Diagnostics.Stopwatch handles that in the background, or to set the thread affinity manually - otherwise your timer might jump back in time!
Note that for very high precision measurements, there are some subtleties that need to be taken into account. If you need this much precision, these might be of some concern.
There are several different timer classes in the .NET base class library - which one is best suited to your needs can only be determined by you.
Here is a good article from MSDN magazine on the subject (Comparing the Timer Classes in the .NET Framework Class Library).
Depending on what you're using the timer for, you may have other issues to consider. Windows does not provide guarantees on timing of execution, so you shouldn't rely on it for any real-time processing (there are real-time extensions you can get for Windows that provide hard real-time scheduling). I also suspect you could lose precision as a result of context switching after you capture the time interval and before you do something with it that depends on its precision. In principle, this could be an arbitrarily long period of time; in practice it should be on the order of milliseconds. It really depends on how mission-critical this timing is.

What is the best way to debug performance problems?

I'm writing a plug-in for another program in C#.NET, and am having performance issues where commands take a lot longer than I would expect. The plug-in reacts to events in the host program, and also depends on utility methods of the host program's SDK. My plug-in has a lot of recursive functions because I'm doing a lot of reading and writing to a tree structure. Plus I have a lot of event subscriptions between my plug-in and the host application, as well as event subscriptions between classes in my plug-in.
How can I figure out what is taking so long for a task to complete? I can't use regular breakpoint-style debugging, because it's not that the code doesn't work - it's just too slow. I have set up a static "LogWriter" class that I can reference from all my classes, which lets me write timestamped lines to a log file from my code. Is there another way? Does Visual Studio keep some kind of timestamped log that I could use instead? Is there some way to view the call stack after the application has closed?
You need to use a profiler. Here's a link to a good one: ANTS Performance Profiler.
Update: You can also write messages at control points using Debug.Write. Then you can load the DebugView application, which displays all your debug strings with precise timestamps. It is freeware and very good for quick debugging and profiling.
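A minimal sketch of that Debug.Write approach (the method name and messages are placeholders; DebugView stamps each line with a precise time). Note that Debug.* calls compile away in release builds, so use Trace.WriteLine if you need output from a release build:

using System.Diagnostics;

class TracePoints
{
    void SuspectOperation()
    {
        Debug.WriteLine("SuspectOperation: start");
        // ... the work you suspect is slow ...
        Debug.WriteLine("SuspectOperation: end");
    }
}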
My Profiler List includes ANTS, dotTrace, and AQtime.
However, looking more closely at your question, it seems to me that you should do some unit testing at the same time you're doing profiling. Maybe start by doing a quick overall performance scan, just to see which areas need most attention. Then start writing some unit tests for those areas. You can then run the profiler while running those unit tests, so that you'll get consistent results.
In my experience, the best method is also the simplest. Get it running, and while it is being slow, hit the "pause" button in the IDE. Then make a record of the call stack. Repeat this several times. (Here's a more detailed example and explanation.)
What you are looking for is any statement that appears on more than one stack sample that isn't strictly necessary. The more samples it appears on, the more time it takes. The way to tell if the statement is necessary is to look up the stack, because that tells you why it is being done.
Anything that causes a significant amount of time to be consumed will be revealed by this method, and recursion does not bother it.
People seem to tackle problems like this in one of two ways:
Try to get good measurements before doing anything.
Just find something big that you can get rid of, rip it out, and repeat.
I prefer the latter, because it's fast, and because you don't have to know precisely how big a tumor is to know it's big enough to remove. What you do need to know is exactly where it is, and that's what this method tells you.
Sounds like you want a code 'profiler'. http://en.wikipedia.org/wiki/Code_profiler#Use_of_profilers
I'm unfamiliar with which profilers are best for C#, but I came across this link after a quick Google search; it has a list of free open-source offerings. I'm sure someone else will know which ones are worth considering :)
http://csharp-source.net/open-source/profilers
Despite the title of this topic I must argue that the "best" way is subjective; we can only suggest possible solutions.
I have had experience using Redgate ANTS Performance Profiler which will show you where the bottlenecks are in your application. It's definitely worth checking out.
Visual Studio Team System has a profiler baked in; it's far from perfect, but for simple applications you can kind of get it to work.
Recently I have had the most success with EQATEC's free profiler, or rolling my own tiny profiling class where needed.
Also, there have been quite a few questions about profilers in the past; see: http://www.google.com.au/search?hl=en&q=site:stackoverflow.com+.net+profiler&btnG=Google+Search&meta=&aq=f&oq=
Don't ever forget Rico Mariani's advice on how to carry out a good perf investigation.
You can also use performance counters for ASP.NET applications.

C# How can I determine where the slow parts of my code are?

I've not been coding long, so I'm not familiar with which technique is quickest. I was wondering if there was a way to do this in VS or with a 3rd-party tool?
Thanks
Profilers are great for measuring.
But your question was "How can I determine where the slow parts of my code are?".
That is a different problem. It is diagnosis, not measurement.
I know this is not a popular view, but it's true.
It is like a business that is trying to cut costs.
One approach (top down) is to measure the overall finances, then break it down by categories and departments, and try to guess what could be eliminated. That is measurement.
Another approach (bottom up) is to walk in at random into an office, pick someone at random, and ask them what they are doing at that moment and (importantly) why, in detail.
Do this more than once.
That is what Harry Truman did at the outbreak of WW2, in the US defense industry, and immediately uncovered massive fraud and waste, by visiting several sites. That is diagnosis.
In code you can do this in a very simple way: "Pause" it and ask it why it is spending that particular cycle. Usually the call stack tells you why, in detail.
Do this more than once.
This is sampling. Some profilers sample the call stack. But then for some reason they insist on summarizing time spent in each function, inclusive and exclusive. That is like summarizing by department in business, inclusive and exclusive.
It loses the information you need, which is the fine-grain detail that tells if the cycles are necessary.
To answer your question:
Just pause your program several times, and capture the call stack each time. If your code is very slow, the wasteful function calls will be on nearly every stack. They will point with precision to the "slow parts of your code".
ADDED: RedGate ANTS is getting there. It can give you cost-by-line, and it is quite spiffy. So if you're in .NET, and can spare three figures, and don't mind waiting around to install and learn it, it can tell you much of what your Pause key can tell you, and be much prettier about it.
Profiling.
RedGate has a product.
JetBrains has a product.
I've used ANTS Profiler and I can join the others in recommending it.
The price is NEGLIGIBLE when you compare it with the amount of dev hours it will save you.
If you're a developer for a living and your company won't buy it for you, either change companies or buy it for yourself.
For profiling large complex UI applications you often need a set of tools and approaches. I'll outline the approach and tools I used recently on a project to improve the performance of a .NET 2.0 UI application.
First of all I interviewed users and worked through the use cases myself to come up with a list of target use cases that highlighted the system's worst-performing areas. I.e. I didn't want to spend n man-days optimising a feature that was hardly ever used but very slow. I would want to spend time, however, optimising a feature that was a little bit sluggish but invoked 1000 times a day, etc.
Once the candidate use cases were identified, I instrumented my code with my own lightweight logging class (I used some high-performance timers and a custom logging solution because I needed sub-millisecond accuracy). You might, however, be able to get away with log4net and timestamps. The reason I instrumented the code is that it is sometimes easier to read your own logs than the profiler's output. I needed both for a variety of reasons (e.g. measuring .NET user control layouts is not always straightforward using the profiler).
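The answer's actual logging class isn't shown; as a hedged sketch, a lightweight logger with sub-millisecond timestamps might look something like this (the PerfLog name and output path are hypothetical):

using System;
using System.Diagnostics;
using System.IO;

static class PerfLog
{
    private static readonly Stopwatch Clock = Stopwatch.StartNew();
    private static readonly object Gate = new object();
    private const string LogPath = "perf.log"; // hypothetical output location

    public static void Write(string message)
    {
        // Stopwatch gives sub-millisecond resolution; DateTime.Now typically doesn't.
        double ms = Clock.Elapsed.TotalMilliseconds;
        lock (Gate) // serialise writers; the file I/O here is simple, not fast
        {
            File.AppendAllText(LogPath,
                string.Format("{0,12:F3} ms  {1}{2}", ms, message, Environment.NewLine));
        }
    }
}

// Usage: PerfLog.Write("Layout pass started");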
I then ran my instrumented code with the ANTS profiler and profiled the use case. By combining the ANTS profile and my own log files I was very quickly able to discover problems with our application.
We profiled the server as well as the UI and were able to work out breakdowns for time spent in the UI, time spent on the wire, time spent on the server, etc.
Also worth noting is that one run isn't enough, and the first run is usually worth throwing away. Let me explain: PC load, network traffic, JIT compilation status etc. can all affect the time a particular operation takes. A simple strategy is to measure an operation n times (say 5), throw away the slowest and fastest runs, then analyse the remaining profiles.
Eqatec profiler is a cute small profiler that is free and easy to use. It probably won't come anywhere near the "wow" factor of ANTS Profiler in terms of features, but it still is very cool IMO and worth a look.
Use a profiler. ANTS costs money but is very nice.
I just set breakpoints; Visual Studio will tell you how many milliseconds have passed between breakpoints, so you can find it manually.
ANTS Profiler is very good.
If you don't want to pay, the newer VS versions come with a profiler, but to be honest it doesn't seem very good. ATI/AMD make a free profiler... but it's not very user-friendly (to me; I couldn't get any useful info out of it).
The advice I would give is to time function calls yourself with code. If they are fast and you do not have a high-precision timer, or the calls vary in slowness for a number of reasons (e.g. every x calls building some kind of cache), try running each one 10,000 times or so, then dividing the result accordingly. This may not be perfect for some sections of code, but if you are unable to find a good, free, 3rd-party solution, it's pretty much what's left unless you want to pay.
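A sketch of that manual approach (the run count and SuspectMethod are placeholders):

using System;
using System.Diagnostics;

class ManualTiming
{
    static void Main()
    {
        const int runs = 10000; // arbitrary; pick enough runs to swamp timer noise

        SuspectMethod(); // warm-up call so JIT compilation isn't counted

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < runs; i++)
            SuspectMethod();
        sw.Stop();

        Console.WriteLine("Average: {0:F4} ms per call",
            sw.Elapsed.TotalMilliseconds / runs);
    }

    static void SuspectMethod() { /* hypothetical method under test */ }
}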
Yet another option is Intel's VTune.
