I'm writing a plug-in for another program in C#.NET, and am having performance issues where commands take a lot longer than I would expect. The plug-in reacts to events in the host program, and also depends on utility methods of the host program's SDK. My plug-in has a lot of recursive functions because I'm doing a lot of reading and writing to a tree structure. Plus I have a lot of event subscriptions between my plug-in and the host application, as well as event subscriptions between classes in my plug-in.
How can I figure out what is taking so long for a task to complete? I can't use regular breakpoint-style debugging, because it's not that the code doesn't work, it's just that it's too slow. I have set up a static "LogWriter" class that I can reference from all my classes, which lets me write timestamped lines to a log file from my code. Is there another way? Does Visual Studio keep some kind of timestamped log that I could use instead? Is there some way to view the call stack after the application has closed?
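For reference, here is a minimal sketch of the kind of LogWriter class I mean (the path and format are just illustrative, not my exact code):

using System;
using System.Diagnostics;
using System.IO;

public static class LogWriter
{
    private const string LogPath = @"C:\temp\plugin-log.txt"; // illustrative path
    private static readonly object Sync = new object();
    private static readonly Stopwatch Clock = Stopwatch.StartNew();

    public static void Log(string message)
    {
        // Stopwatch gives sub-millisecond resolution; DateTime alone is coarser.
        string line = string.Format("{0:O} (+{1} ms) {2}",
            DateTime.Now, Clock.ElapsedMilliseconds, message);
        lock (Sync)
        {
            File.AppendAllText(LogPath, line + Environment.NewLine);
        }
    }
}

// Usage from any class: LogWriter.Log("Entering RebuildTree");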
You need to use a profiler. Here is a link to a good one: ANTS Performance Profiler.
Update: You can also write messages at control points using Debug.Write. Then load the DebugView application, which displays all your debug strings with precise timestamps. It is freeware and very good for quick debugging and profiling.
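For example, a minimal sketch of such checkpoints (the slow work here is just a stand-in):

using System.Diagnostics;
using System.Threading;

class Checkpoints
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        Debug.WriteLine("Checkpoint A: start of suspect operation");
        Thread.Sleep(500); // stand-in for the slow work being investigated
        Debug.WriteLine("Checkpoint B: done after " + sw.ElapsedMilliseconds + " ms");
    }
}

Run a debug build and DebugView (or the Visual Studio Output window) will show each line with a timestamp.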
My Profiler List includes ANTS, dotTrace, and AQtime.
However, looking more closely at your question, it seems to me that you should do some unit testing at the same time you're doing profiling. Maybe start by doing a quick overall performance scan, just to see which areas need most attention. Then start writing some unit tests for those areas. You can then run the profiler while running those unit tests, so that you'll get consistent results.
In my experience, the best method is also the simplest. Get it running, and while it is being slow, hit the "pause" button in the IDE. Then make a record of the call stack. Repeat this several times. (Here's a more detailed example and explanation.)
What you are looking for is any statement that appears on more than one stack sample that isn't strictly necessary. The more samples it appears on, the more time it takes. The way to tell if the statement is necessary is to look up the stack, because that tells you why it is being done.
Anything that causes a significant amount of time to be consumed will be revealed by this method, and recursion does not bother it.
People seem to tackle problems like this in one of two ways:
Try to get good measurements before doing anything.
Just find something big that you can get rid of, rip it out, and repeat.
I prefer the latter, because it's fast, and because you don't have to know precisely how big a tumor is to know it's big enough to remove. What you do need to know is exactly where it is, and that's what this method tells you.
Sounds like you want a code 'profiler'. http://en.wikipedia.org/wiki/Code_profiler#Use_of_profilers
I'm unfamiliar with which profilers are best for C#, but I came across this link after a quick Google search; it has a list of free open-source offerings. I'm sure someone else will know which ones are worth considering :)
http://csharp-source.net/open-source/profilers
Despite the title of this topic, I must argue that the "best" way is subjective; we can only suggest possible solutions.
I have had experience using Redgate ANTS Performance Profiler which will show you where the bottlenecks are in your application. It's definitely worth checking out.
Visual Studio Team System has a profiler baked in; it's far from perfect, but for simple applications you can kind of get it to work.
Recently I have had the most success with EQATEC's free profiler, or rolling my own tiny profiling class where needed.
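For illustration, a minimal version of such a hand-rolled profiling class (the names here are mine, not from any library):

using System;
using System.Diagnostics;

// Dispose-based scope timer: wrap a suspect block in a using statement.
public sealed class ScopeTimer : IDisposable
{
    private readonly string _name;
    private readonly Stopwatch _watch = Stopwatch.StartNew();

    public ScopeTimer(string name) { _name = name; }

    public void Dispose()
    {
        _watch.Stop();
        Debug.WriteLine(_name + ": " + _watch.Elapsed.TotalMilliseconds + " ms");
    }
}

// Usage:
// using (new ScopeTimer("LoadCustomers"))
// {
//     LoadCustomers(); // whatever block you suspect is slow
// }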
Also, there have been quite a few questions about profilers in the past; see: http://www.google.com.au/search?hl=en&q=site:stackoverflow.com+.net+profiler&btnG=Google+Search&meta=&aq=f&oq=
Don't ever forget Rico Mariani's advice on how to carry out a good perf investigation.
You can also use performance counters for ASP.NET applications.
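For example, a small sketch using System.Diagnostics.PerformanceCounter; the counter shown is the generic machine-wide CPU counter - substitute the ASP.NET category and counter names appropriate to your setup:

using System;
using System.Diagnostics;
using System.Threading;

class CounterDemo
{
    static void Main()
    {
        // Machine-wide CPU usage; ASP.NET exposes its own counter categories as well.
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        cpu.NextValue();        // the first sample of a rate counter is always 0
        Thread.Sleep(1000);     // wait one sampling interval
        Console.WriteLine("CPU: {0:F1} %", cpu.NextValue());
    }
}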
I am developing a Windows Store application. Currently, I am getting intermittent hangs as described in this blog post. The issue appears to be that not enough space is given to remainder-defined column widths and TextBlocks attempting to format themselves (possibly due to the ellipsis processing). My app tends to hang indefinitely when this happens.
The question I have is less related to how to solve the issue (as that seems to be described fairly well in the blog post) and more about how to find such issues. I have one fairly regularly (approximately one in five or ten start-ups) on a Hub Page, so I've been looking through there (as it's the most notable instance of the issue), but it's a true Heisenbug in that it never seems to happen when debugging (or when you look for it).
So, how do I find the offending code? Is there just a pattern I need to look for (ColumnWidth="*"?). Is there a simpler way to solve this, such as changing the base style to remove one of the possibly offending properties listed in the blog post?
It seems possible that this is being caused by another issue, but this seems to be the most likely/plausible as of right now (as with the hubs I have a similar situation to what is being described there).
Also, is there a way to track when this happens in the wild? MSFT provides crash dumps on hangs, but they seem to give little to no information in them at all (and on top of that they only appear 5 days after they happen, which is less than ideal).
Thanks!
This is a complicated question to answer.
First, I think you have identified a real problem with WinRT. You theorize that the layout subsystem seems busy calculating your layout, and based on some condition that occurs around 20% of the time it does not finish in any reasonable time. Reasonable guess.
The problem, then, is when such an event does not occur during debugging. In my personal development experience, errors that do not occur in debug are 99.99% timing related. Something is not finishing before a second process begins. Debugging lets those first, long processes finish.
This is a real computer science question, and not so much a WinRT or Windows 8 question. To that end, the best answer I can give you without any code samples (why no code samples?) is the typical approach I employ when I reach the same dilemma. I hope it helps, at least a little.
Start with your brain.
I have always joked with developers about just how much debugging can be done outside the debugger - in your mind. Mentally walk the pipeline of your app and look for race-condition dependencies that might cause deadlocks. Believe it or not, this solves a lot of problems a debugger could never catch - because debuggers unwind timing dependencies.
Next is simplicity.
The more complex the problem the less likely you will find the culprit. In the case of a XAML application, I tend to remove or disable value converters first. Then, I look to remove data templates. If you have element bindings, those go next. If simplifying the XAML does help - that's just the beginning to figuring it out. If it doesn't, things just got easier.
Your code behind can be disabled with just a few keystrokes and found guilty or innocent. It's the most likely place for your problem, I find, and the reason we work so hard to keep it simple, clean, and minimal. After that, there's the view model. Though it's not impossible for your view model to be the one, and indeed you still have to check, it's probably not the root of your evil.
Lastly, there's the app pipeline that loads your page, loads your data, or does anything else. Step by step, your only real option is to slowly remove things from your app until you don't see the problem. Removing the problem, though, is not solving it. That's a case-by-case thing based on your app and the logic in it. In reality, you might see the problem leave when removing XAML, while the real problem is in the view model or elsewhere.
What am I really saying? The silver bullet you are asking for really isn't there. There are several Microsoft tools and even more third party tools to look for bottlenecks, latency problems, slow code, and stuff - but in all reality, the scenario you describe is plain ole programming. I am not saying you aren't the victim of a bug. I'm saying, with the information we have, this is all I can do for you.
You'll get it.
A third thing to do is to add logging and instrumentation to your app.
Best of luck.
Given that Jerry has answered this at a higher level, I figured I would add the lower-level answers that, from the way your question is phrased, I think you are interested in. First I would like to address the last item, the dump files. Microsoft does provide a mechanism for getting dump files of a process 'in the wild', namely Windows Error Reporting. If you want to collect dump files from failed client processes, you can sign up for Windows Error Reporting (I must admit I have never actually done it, but I did look into it and tried to get my current employer to allow me to do this, though it didn't end successfully). To sign up, go to the Establish a Hardware/Desktop Account page.
As far as what to do with dump files once you get them, you will want to download the Debugging Tools for Windows (part of the Windows SDK download) and/or the Debug Diag tool (I must confess I am more of a Debugging Tools for Windows user than a Debug Diag user). These will give you the tools to look into what is going on at a lower level. Obviously you can only go so far, since you won't have access to private Microsoft symbols, but you do have access to public symbols, and usually those are enough to give you a pretty good idea of the problem area.
Your primary tools will depend on how reproducible the issue is. If it is only reproducible on some client machines, then you will have to rely on looking at a single dump file that you probably got hold of from Windows Error Reporting. In this case what I would do is open it in the appropriate version of WinDbg (either x86 or x64) and look at what was going on at the time the dump was taken. How far you can go depends on how savvy you are. A simple starting point would be to run:
.symfix
.reload
.loadby sos clr
!EEStack
This will load Microsoft's public symbols, load the SOS extension DLL for inspecting managed code, and then dump the contents of the stack for each thread in the process. From the names of the methods that appear on the call stacks you might be able to get a pretty good idea of at least the area of the code where the lock is occurring.
You can go much farther than this, as WinDbg lets you go pretty deep into deadlock analysis. For instance, there is an extension for WinDbg called sosex that provides a command, !dlk, which can sometimes automate the detection of a deadlock from a single dump file (to load an extension DLL into WinDbg you just download it and then call .load fullpathtodll). If the problem is reproducible locally you might be even more successful with WPA/WPR, or, if you are really fortunate, a simple procmon trace. These tools do have a pretty steep barrier to entry because they take some time to learn, but if you are really interested in the topic your best resources are the Defrag Tools series on Channel 9 and anything by Mario Hewardt (especially his book "Advanced .NET Debugging"). Again, getting familiar with these tools can take a lot of time, but at the very least, if you just know how to dump the contents of the stacks from a dump file, you can sometimes get what you need from that alone, so a basic understanding of these tools can be beneficial as well.
I'm looking for a way to find bottleneck methods in a solution (lots of projects).
Let's say I have a HUGE program (1000s of methods) and I want to improve performance by finding methods that are called a lot (actually used at runtime) and optimizing them.
I need this for a complex product that's written in C++, C#, and C++/CLI. (I can compile it all in debug and have the .pdb files.)
So, I'm looking for some kind of analyzer that will tell me how much CPU time each method uses.
What tool/addon/feature can I use in Visual Studio to get that information ?
I want to be able to run the program for a few minutes, and then analyze each method's CPU usage. Or even better - CPU time divided by number of calls.
Would be even better if I could sort by namespace or dll/package/project.
The more expensive Visual Studio versions have a profiler built in: see this thread.
However, there are more ways to profile; this topic has been covered many times on Stack Overflow - here, for example.
Following one of Christian Goltz's links, I've found a program that might do what I want; it profiles both managed and unmanaged code:
AQTime Pro
I've had some good experiences with the dotTrace product by JetBrains. Not sure if it has the IDE integration or all the features that you're looking for, but it definitely gets the job done.
This method is low-tech, but works perfectly well.
I also work in a huge application, and when we have performance problems it finds them quickly.
I have written a WinForms application in C#. How can I check the performance of my code? By that I mean, how can I check which form references are active at a given time or event, so that I can remove them if they are not required (making them available for garbage collection)? Is there a way to do this using VS 2005 or any free tool? Any tutorials or guides would be useful.
[Edit] Sorry if my question is confusing. I am not looking for a professional tool, but ways to know/understand the working of my code better and code more efficiently.
Thanks
Making code efficient is always a secondary step for me. First I write the code so that it works. Next, I profile it if I am unhappy with the performance. The truth is most applications run fast enough after the first time writing them. Sometimes, though, better performance is needed. Performance can be gained in many different ways; it all depends on your application. I write LOB apps mainly, so I deal with a lot of IO to databases, services and storage. These calls are all very expensive and need to be limited, so they are my first area to optimize. I optimize by lazy-loading, eager-loading, batching calls, making less frequent calls and so on. I recently had a WinForms app that created hundreds of controls dynamically, and it took a long time. That's another kind of bottleneck I have to address. I use a profiler to measure the performance of the applications.
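As a small illustration of the lazy-loading idea (the class and the slow call below are made up):

public class ReportScreen
{
    private string[] _reportLines; // loaded on demand, then cached

    public string[] ReportLines
    {
        get
        {
            // Lazy-load: do the expensive work only on first access.
            if (_reportLines == null)
                _reportLines = LoadReportFromDatabase();
            return _reportLines;
        }
    }

    private string[] LoadReportFromDatabase()
    {
        // Stand-in for a slow database or service call.
        return new string[] { "row 1", "row 2" };
    }
}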
Use the free EQATEC profiler. It will show you how long calls take and how many times a call is made. The profiler gives a nice report and a visual display that can drill down into the call stacks.
Red Gate Performance Profiler
...it's been said here a million times before. If you suspect performance issues, profile your application. It will tell you how long calls are taking and point out the bottlenecks in your code.
Kobra,
What you're looking for is called a memory profiler. There happens to be one (paid) product for .NET aptly named ".NET Memory Profiler"; I've not used it extensively, but it should answer the questions you're asking. There are a few other ones that do basically the same thing, like giving you instance counts of loaded types and helping you identify when instances are not being garbage collected for one reason or another (i.e. event handler references, static properties, etc.).
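As a quick illustration of the event handler case: a form that subscribes to a long-lived static event stays reachable, and therefore is never collected, until it unsubscribes. The names below are illustrative only:

using System;
using System.Windows.Forms;

public static class SettingsService
{
    // A long-lived static event keeps every subscriber alive.
    public static event EventHandler SettingsChanged;

    public static void RaiseChanged()
    {
        var handler = SettingsChanged;
        if (handler != null) handler(null, EventArgs.Empty);
    }
}

public class MyForm : Form
{
    public MyForm()
    {
        SettingsService.SettingsChanged += OnSettingsChanged;
    }

    private void OnSettingsChanged(object sender, EventArgs e)
    {
        // refresh the UI
    }

    protected override void OnFormClosed(FormClosedEventArgs e)
    {
        // Without this, the closed form is still referenced by the static event
        // and never becomes eligible for garbage collection.
        SettingsService.SettingsChanged -= OnSettingsChanged;
        base.OnFormClosed(e);
    }
}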
Hope this helps,
Dylan
I need to know how I can get every instruction's duration, so I can tune the code to increase the performance of my program.
Use a profiler. If you have Visual Studio Team System, there's one included. Otherwise take a look at ANTS or dotTrace.
What you're looking for is a profiler, I believe :)
see: Profiler and List of Performance Analysis Tools
You want an application profiler for that, it shows exactly what code takes how long.
You need to use a profiler to accomplish this. Several profilers exist; some are free.
My preference goes to Red Gate Ants.
I don't think you should go down to the level of individual instructions to measure performance bottlenecks. Micro-optimization can be harmful. You should stick to function-level profiling. If you are using VS 2005 or 2008 you can use
Performance wizard
CLR profiler
to profile your functions.
Alternately, I personally recommend using ANTS Profiler.
Since ANTS and dotTrace are very good but commercial tools (I wouldn't call them expensive - they're worth the money), I recently heard about the EQATEC Profiler, which is free of charge. I have not tried it myself for lack of time, but maybe you want to give it a try.
(no, I am not affiliated with them)
If you have an application running and you want to improve its performance, using a profiler (.NET or database) is a must. dotTrace and ANTS are famous ones for good reasons.
If you are using SQL Server, SQL Server Profiler is a great tool to trace and watch what's going on, on the server side of your application.
If you want to decide which approach is better to use, you can use ILDASM to disassemble your code to IL and see what's going on under the hood. It's not a simple task, but I think it's worth it.
You might want to take a look at FxCop as well; it might give you some more vague hints as to what could be improved. (Oh, and it's free!)
I'm surprised no one has mentioned this yet, but if you want to know the cost of individual instructions, look them up here or here.
The cost of individual instructions varies between CPUs, but both AMD and Intel (and any other CPU maker) document this.
The problem is that determining the cost of instructions is not straightforward. You have a lot of metrics to consider: There's the latency, whether it is pipelined (fully or partially), how big the instruction is (affects instruction cache) and so on. So this information is only really helpful if you're writing a single really performance-sensitive function where you're either writing assembly yourself, or closely reading the compiler-generated ASM to find and eliminate inefficiencies. And if you know a fair bit about how the CPU works.
But before you get to this point, you should use a profiler as everyone else has suggested. That helps you narrow down where the time is being spent, and what needs optimizing.
I've not been coding long, so I'm not familiar with which technique is quickest; I was wondering if there is a way to do this in VS or with a 3rd-party tool?
Thanks
Profilers are great for measuring.
But your question was "How can I determine where the slow parts of my code are?".
That is a different problem. It is diagnosis, not measurement.
I know this is not a popular view, but it's true.
It is like a business that is trying to cut costs.
One approach (top down) is to measure the overall finances, then break it down by categories and departments, and try to guess what could be eliminated. That is measurement.
Another approach (bottom up) is to walk in at random into an office, pick someone at random, and ask them what they are doing at that moment and (importantly) why, in detail.
Do this more than once.
That is what Harry Truman did at the outbreak of WW2, in the US defense industry, and immediately uncovered massive fraud and waste, by visiting several sites. That is diagnosis.
In code you can do this in a very simple way: "Pause" it and ask it why it is spending that particular cycle. Usually the call stack tells you why, in detail.
Do this more than once.
This is sampling. Some profilers sample the call stack. But then for some reason they insist on summarizing time spent in each function, inclusive and exclusive. That is like summarizing by department in business, inclusive and exclusive.
It loses the information you need, which is the fine-grain detail that tells if the cycles are necessary.
To answer your question:
Just pause your program several times, and capture the call stack each time. If your code is very slow, the wasteful function calls will be on nearly every stack. They will point with precision to the "slow parts of your code".
ADDED: RedGate ANTS is getting there. It can give you cost-by-line, and it is quite spiffy. So if you're in .NET, and can spare 3 figures, and don't mind waiting around to install & learn it, it can tell you much of what your Pause key can tell you, and be much more pretty about it.
Profiling.
RedGate has a product.
JetBrains has a product.
I've used ANTS Profiler and I can join the others in recommending it.
The price is NEGLIGIBLE when you compare it with the amount of dev hours it will save you.
If you're a developer for a living and your company won't buy it for you, either change companies or buy it for yourself.
For profiling large complex UI applications then you often need a set of tools and approaches. I'll outline the approach and tools I used recently on a project to improve the performance of a .Net 2.0 UI application.
First of all, I interviewed users and worked through the use cases myself to come up with a list of target use cases that highlighted the system's worst-performing areas. I.e. I didn't want to spend n man-days optimising a feature that was hardly ever used but very slow. I would want to spend time, however, optimising a feature that was a little bit sluggish but invoked 1000 times a day, etc.
Once the candidate use cases were identified, I instrumented my code with my own lightweight logging class (I used some high-performance timers and a custom logging solution because I needed sub-millisecond accuracy). You might, however, be able to get away with log4net and timestamps. The reason I instrumented the code is that it is sometimes easier to read your own logs than the profiler's output. I needed both for a variety of reasons (e.g. measuring .NET user control layouts is not always straightforward using the profiler).
I then ran my instrumented code with the ANTS profiler and profiled the use case. By combining the ANTS profile and my own log files I was very quickly able to discover problems with our application.
We also profiled the server as well as the UI and were able to work out breakdowns for time spent in the UI, time spent on the wire, time spent on the server etc.
Also worth noting is that one run isn't enough, and the first run is usually worth throwing away. Let me explain: PC load, network traffic, JIT compilation status etc. can all affect the time a particular operation takes. A simple strategy is to measure an operation n times (say 5), throw away the slowest and fastest runs, then analyse the remaining profiles.
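A rough sketch of that measurement strategy (the method name is mine; it assumes at least 3 runs):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

static class MicroBenchmark
{
    // Time an operation n times, drop the fastest and slowest runs, average the rest.
    public static double MeasureMilliseconds(Action operation, int runs)
    {
        var samples = new List<double>();
        for (int i = 0; i < runs; i++)
        {
            var sw = Stopwatch.StartNew();
            operation();
            sw.Stop();
            samples.Add(sw.Elapsed.TotalMilliseconds);
        }
        samples.Sort();
        // Discard the outliers at both ends (JIT warm-up, transient machine load).
        return samples.Skip(1).Take(samples.Count - 2).Average();
    }
}

// Usage (RunUseCase is a placeholder for the operation under test):
// double ms = MicroBenchmark.MeasureMilliseconds(() => RunUseCase(), 5);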
The EQATEC profiler is a cute, small profiler that is free and easy to use. It probably won't come anywhere near the "wow" factor of ANTS Profiler in terms of features, but it is still very cool IMO and worth a look.
Use a profiler. ANTS costs money but is very nice.
I just set breakpoints; Visual Studio will tell you how many ms have passed between breakpoints, so you can find it manually.
ANTS Profiler is very good.
If you don't want to pay, newer VS versions come with a profiler, but to be honest it doesn't seem very good. ATI/AMD make a free profiler... but it's not very user-friendly (to me at least - I couldn't get any useful info out of it).
The advice I would give is to time function calls yourself with code. If they are fast and you do not have a high-precision timer, or the calls vary in slowness for a number of reasons (e.g. every x calls building some kind of cache), try running each one 10,000 times or so, then divide the result accordingly. This may not be perfect for some sections of code, but if you are unable to find a good, free, 3rd-party solution, it's pretty much what's left unless you want to pay.
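For example, a minimal sketch of that repeat-and-divide approach (int.Parse here is just a stand-in for the call you actually want to measure):

using System;
using System.Diagnostics;

class TimingDemo
{
    static void Main()
    {
        const int iterations = 10000;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            int.Parse("12345"); // stand-in for the fast call under test
        }
        sw.Stop();
        Console.WriteLine("Average: {0:F5} ms per call",
            sw.Elapsed.TotalMilliseconds / iterations);
    }
}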
Yet another option is Intel's VTune.