A coworker who no longer works here disabled code optimization for one of our projects some time ago. There is no way to contact him, and looking at the changes in his commit I cannot see any reason why he did it.
Our program has some serious bottlenecks, and even though I am not sure whether re-enabling this option would bring any performance boost, I would say it's the better way to go.
The main problem is that we would have to check every bit of this badly written software for side effects, which could take literally months.
My question is: are there any known reasons why someone would deactivate this option?
When debugging with that option turned on, some variables can't be inspected because their lifetimes have been minimized. It is very likely he deactivated it to be able to debug more comfortably. Apart from the code running slightly slower with optimization off, there should be no effect at runtime.
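As a side note, and purely as a sketch (nothing in the commit suggests this was considered): if the motivation really was debugging comfort, a middle ground is to keep optimization on project-wide and opt individual methods out with MethodImplAttribute. The class and method here are hypothetical:

    using System.Runtime.CompilerServices;

    public class OrderProcessor   // hypothetical class, for illustration only
    {
        // Ask the JIT not to inline or optimize just this method, so its
        // locals remain inspectable in the debugger while the rest of the
        // assembly keeps /optimize+.
        [MethodImpl(MethodImplOptions.NoInlining | MethodImplOptions.NoOptimization)]
        public int ComputeTotal(int[] values)
        {
            int sum = 0;
            foreach (int v in values)
                sum += v;          // 'sum' stays visible at a breakpoint here
            return sum;
        }
    }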
You can always check in a non-production environment whether reactivating it helps with the bottlenecks you mention. That being said, do not expect any major speed-up.
On a virtual machine, I was able to unintentionally reproduce an issue that only shows up once or twice a year. The software is in a state where, as long as the application is alive and running, I can reproduce the issue. The only problem is that everything was built in release. So when I debug with Visual Studio and try to look at some values, I get the following message:
Cannot evaluate expression because the code of the current method is optimized.
To my knowledge, the only way to work around this is to build in debug. Unfortunately, this isn't possible because as soon as I close the application and restart it in debug instead of release, I may never get a chance to reproduce this issue again.
Is there any tool or anything I can do to keep the software in its current state, yet be able to retrieve some of the values I'm interested in? Again, this is a release build, so I realize a lot of necessary information to debug is missing. I do have the release pdbs / source code for the assemblies I'm interested in. Unlikely that it matters, but I'm trying to look at a Window object's IsLoaded property among potentially a few other properties.
Maybe you can try Project Properties - Build - Advanced - Debug info = Full?
According to this answer, I think this will allow you to attach a debugger. Other answers in the thread disagree on the effects of this option, but it may be worth a try.
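As a quick sanity check, you can also see what a given build will let the JIT do by reflecting over its DebuggableAttribute. A small sketch (the assembly path is a placeholder):

    using System;
    using System.Diagnostics;
    using System.Reflection;

    class DebugInfoCheck
    {
        static void Main()
        {
            // Placeholder path: point this at the assembly in question.
            var asm = Assembly.LoadFrom(@"C:\path\to\YourApp.exe");
            var attr = (DebuggableAttribute)Attribute.GetCustomAttribute(
                asm, typeof(DebuggableAttribute));

            if (attr == null)
                Console.WriteLine("No DebuggableAttribute: fully optimized build.");
            else
                Console.WriteLine("JIT optimizer disabled: {0}; JIT tracking: {1}",
                    attr.IsJITOptimizerDisabled, attr.IsJITTrackingEnabled);
        }
    }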
I was finally able to solve this by using Snoop, a WPF spy utility. It gave me the value of the IsLoaded property I was interested in. Some peers also mentioned using ANTS Memory Profiler, but it is pretty expensive and isn't as trivial to use.
I am developing a Windows Store application. Currently, I am getting intermittent hangs as described in this blog post. The issue appears to be that not enough space is given to remainder-defined (star-sized) column widths while TextBlocks attempt to format themselves (possibly due to the ellipsis processing). My app tends to hang indefinitely when this happens.
My question is less about how to solve the issue (that seems to be described fairly well in the blog post) and more about how to find it. I hit it fairly regularly (approximately one in five or ten start-ups) on a Hub Page, so I've been looking there (as it's the most notable instance of the issue), but it's a true Heisenbug in that it never seems to happen when debugging (or when you look for it).
So, how do I find the offending code? Is there just a pattern I need to look for (ColumnWidth="*"?). Is there a simpler way to solve this, such as changing the base style to remove one of the possibly offending properties listed in the blog post?
It seems possible that this is being caused by another issue, but this seems to be the most likely/plausible explanation as of right now (as with the hubs I have a situation similar to what is described there).
Also, is there a way to track when this happens in the wild? MSFT provides crash dumps for hangs, but they seem to contain little to no information at all (and on top of that they only appear 5 days after they happen, which is less than ideal).
Thanks!
This is a complicated question to answer.
First, I think you have identified a real problem with WinRT. You theorize that the layout subsystem is busy calculating your layout and, based on some condition that occurs around 20% of the time, does not finish in any reasonable time. A reasonable guess.
The problem, then, is when such an event does not occur during debug. In my personal development experience, errors that do not occur in debug are 99.99% timing related. Something is not finishing before a second process begins. Debugging lets those first, long processes finish.
This is a real computer science question, and not so much a WinRT or Windows 8 question. To that end, the best answer I can give you without any code samples (why no code samples?) is the typical approach I employ when I reach the same dilemma. I hope it helps, at least a little.
Start with your brain.
I have always joked with developers about just how much debugging can be done outside the debugger, in your mind: mentally walk the pipeline of your app, looking for race-condition dependencies that might cause deadlocks. Believe it or not, this solves a lot of problems a debugger could never catch, because debuggers unwind timing dependencies.
Next is simplicity.
The more complex the problem, the less likely you are to find the culprit. In the case of a XAML application, I tend to remove or disable value converters first. Then, I look to remove data templates. If you have element bindings, those go next. If simplifying the XAML does help, that's just the beginning of figuring it out. If it doesn't, things just got easier.
Your code behind can be disabled with just a few keystrokes and found guilty or innocent. It's the most likely place for your problem, I find, and the reason we work so hard to keep it simple, clean, and minimal. After that, there's the view model. Though it's not impossible for your view model to be the one, and indeed you still have to check, it's probably not the root of your evil.
Lastly, there's the app pipeline that loads your page, loads your data, or does anything else. Step by step, your only real option is to slowly remove things from your app until you don't see the problem. Removing the problem, though, is not solving it. That's a case-by-case thing based on your app and the logic in it. In reality, you might see the problem leave when removing XAML, while the real problem is in the view model or elsewhere.
What am I really saying? The silver bullet you are asking for really isn't there. There are several Microsoft tools and even more third party tools to look for bottlenecks, latency problems, slow code, and stuff - but in all reality, the scenario you describe is plain ole programming. I am not saying you aren't the victim of a bug. I'm saying, with the information we have, this is all I can do for you.
You'll get it.
A third thing to do is to add logging and instrumentation to your app.
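For instance, a minimal sketch of what I mean by instrumentation. Note that a Windows Store app would have to use the Windows.Storage APIs instead of System.IO, so treat this desktop-flavored version as an illustration only; the log path is a placeholder:

    using System;
    using System.IO;

    static class Instrumentation
    {
        static readonly object Gate = new object();
        const string LogPath = "app-trace.log";   // placeholder path

        public static void Log(string message)
        {
            // Timestamped and written out on every call: if the app hangs,
            // the last line in the file is the last checkpoint it reached.
            lock (Gate)
            {
                File.AppendAllText(LogPath, string.Format("{0:HH:mm:ss.fff} {1}{2}",
                    DateTime.Now, message, Environment.NewLine));
            }
        }
    }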
Best of luck.
Given that Jerry has answered this at a higher level, I figured I would add the lower-level details that, from the way your question is phrased, I think you are interested in. First I would like to address the last item, the dump files. Microsoft provides a mechanism for getting dump files of a process 'in the wild' through Windows Error Reporting. If you want to collect dump files from failed client processes, you can sign up for Windows Error Reporting (I must admit I have never actually done it; I did look into it and tried to get my current employer to allow me to, but it didn't end successfully). To sign up, go to the Establish a Hardware/Desktop Account page.
As far as what to do with dump files once you get them, you would want to download the Debugging Tools for Windows (part of the Windows SDK download) and/or the Debug Diag tool (I must confess I am more of a Debugging Tools for Windows user than a Debug Diag user). These will let you look into what is going on at a lower level. Obviously you can only go so far, as you won't have access to private Microsoft symbols, but you do have access to public symbols, and usually those are enough to give you a pretty good idea of the problem area.
Your primary tools will depend on how reproducible the issue is. If it is only reproducible on some client machines, then you will have to rely on looking at a single dump file that you probably got hold of through Windows Error Reporting. In that case, I would open it with the appropriate version of Windbg (either x86 or x64) and look at what was going on at the time the dump was taken. How far you can go depends on how savvy you are. A simple starting point would be to run
.symfix
.reload
.loadby sos clr
!EEStack
This will load Microsoft's public symbols, load the SOS extension DLL for inspecting managed code, and then dump the contents of the stack for each thread in the process. From the names of the methods that appear on the call stacks, you might be able to get a pretty good idea of at least the area of the code where the lock is occurring.
You can go much farther than this, as Windbg provides the ability to go pretty deep into deadlock analysis. For instance, there is an extension for Windbg called sosex that provides a command, !dlk, which can sometimes automate the detection of a deadlock from a single dump file. (To load an extension DLL into Windbg, you just download it and then call .load fullpathtodll.) If the problem is reproducible locally, you might have even more success with WPA/WPR or, if you are really fortunate, a simple procmon trace.
These tools do have a pretty decent barrier to entry, as they take some time to learn. If you are really interested in the topic, your best resources are the Defrag Tools series on Channel9 and anything by Mario Hewardt (especially his book "Advanced .NET Debugging"). Getting familiar with these tools takes a while, but at the very least, knowing how to dump the contents of the stacks from a dump file can sometimes get you what you need, so even a basic understanding of them is beneficial.
Is there a way for me to determine the amount of memory and processor power my application needs? I recently had a very unpleasant experience when one of my applications kept freezing the computers it ran on. This is obviously related to the lack of hardware power, because it works perfectly on the stronger computers that I used for testing purposes. So my question is: is there a way to calculate the amount of hardware power needed to run the application smoothly?
Almost all of my applications are done in C#, so I would need a method that can work with that type of application.
Thanks
This is obviously related to the lack of hardware power
This entirely depends on what your application is doing. If you are solving problems in a "not so time efficient way", then you can optimize the code.
I would suggest that you analyze your code with a profiler.
This will tell you:
Which parts of your code are using the most RAM/CPU
How much RAM in total your application needed at its peak
Information about CPU consumption
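If you just want rough numbers without a full profiler, the process can also report on itself. A sketch using the standard Process class:

    using System;
    using System.Diagnostics;

    class ResourceReport
    {
        static void Main()
        {
            // ... run the workload you care about here ...

            var p = Process.GetCurrentProcess();
            p.Refresh();   // re-read the counters after the work is done
            Console.WriteLine("Peak working set: {0} MB", p.PeakWorkingSet64 / (1024 * 1024));
            Console.WriteLine("Total CPU time:   {0}", p.TotalProcessorTime);
        }
    }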
This is obviously related to the lack of hardware power, because it works perfectly on the stronger computers that I used for testing purposes
Whoever set up testing should be fired.
For testing, you have to have a set of computers similar to the ones the application will run on. That was accepted practice 20 years ago; it seems modern times do not care about that.
Seriously, you NEED to have a test set that is representative of your lowest accepted hardware level.
Otherwise: no, sorry, no magic button. Profilers do NOT necessarily help (under a debugger or profiler, the application may use more memory). Try a profiler. Optimize the code. But in the end, you need a decent testbed.
I'd argue that this should be checked during software installation. Later, if the user was prompted to upgrade his/her hardware and dismissed the warning, you shouldn't worry about it.
If you're using Windows Installer (MSI), you can play with a custom action and use System.Management classes to detect whatever you want.
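For illustration, the detection logic itself might look like this sketch (the WMI class and property names are standard; the MSI custom-action wiring is omitted):

    using System;
    using System.Management;   // add a reference to System.Management.dll

    class HardwareCheck
    {
        static void Main()
        {
            // Total physical memory, from the standard Win32_ComputerSystem WMI class.
            var searcher = new ManagementObjectSearcher(
                "SELECT TotalPhysicalMemory FROM Win32_ComputerSystem");
            foreach (ManagementObject mo in searcher.Get())
            {
                ulong bytes = (ulong)mo["TotalPhysicalMemory"];
                Console.WriteLine("Installed RAM: {0} MB", bytes / (1024 * 1024));
            }
        }
    }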
I'm writing a plug-in for another program in C#.NET, and am having performance issues where commands take a lot longer than I would like. The plug-in reacts to events in the host program, and also depends on utility methods of the host program's SDK. My plug-in has a lot of recursive functions because I'm doing a lot of reading and writing to a tree structure. Plus, I have a lot of event subscriptions between my plug-in and the host application, as well as between classes in my plug-in.
How can I figure out what is taking so long for a task to complete? I can't use regular breakpoint-style debugging; it's not that something doesn't work, it's just that it's too slow. I have set up a static "LogWriter" class that I can reference from all my classes, which lets me write timestamped lines to a log file from my code. Is there another way? Does Visual Studio keep some kind of timestamped log that I could use instead? Is there some way to view the call stack after the application has closed?
You need to use a profiler. Here is a link to a good one: ANTS Performance Profiler.
Update: You can also write messages at control points using Debug.Write. Then run the DebugView application, which displays all your debug strings with precise timestamps. It is freeware and very good for quick debugging and profiling.
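A control point can be as small as this sketch (the method name is hypothetical; note that Debug.* calls are compiled out of builds without the DEBUG symbol, so use Trace.WriteLine if you need the messages in a release build):

    using System.Diagnostics;

    class ControlPoints
    {
        static void RebuildTree()   // hypothetical suspect operation
        {
            Trace.WriteLine("RebuildTree: start");   // shows up in DebugView
            // ... recursive tree work ...
            Trace.WriteLine("RebuildTree: done");
        }

        static void Main() { RebuildTree(); }
    }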
My Profiler List includes ANTS, dotTrace, and AQtime.
However, looking more closely at your question, it seems to me that you should do some unit testing at the same time you're doing profiling. Maybe start by doing a quick overall performance scan, just to see which areas need most attention. Then start writing some unit tests for those areas. You can then run the profiler while running those unit tests, so that you'll get consistent results.
In my experience, the best method is also the simplest. Get it running, and while it is being slow, hit the "pause" button in the IDE. Then make a record of the call stack. Repeat this several times. (Here's a more detailed example and explanation.)
What you are looking for is any statement that appears on more than one stack sample that isn't strictly necessary. The more samples it appears on, the more time it takes. The way to tell if the statement is necessary is to look up the stack, because that tells you why it is being done.
Anything that causes a significant amount of time to be consumed will be revealed by this method, and recursion does not bother it.
People seem to tackle problems like this in one of two ways:
Try to get good measurements before doing anything.
Just find something big that you can get rid of, rip it out, and repeat.
I prefer the latter, because it's fast, and because you don't have to know precisely how big a tumor is to know it's big enough to remove. What you do need to know is exactly where it is, and that's what this method tells you.
Sounds like you want a code 'profiler'. http://en.wikipedia.org/wiki/Code_profiler#Use_of_profilers
I'm unfamiliar with which profilers are best for C#, but I came across this link after a quick Google search; it has a list of free, open-source offerings. I'm sure someone else will know which ones are worth considering :)
http://csharp-source.net/open-source/profilers
Despite the title of this topic, I must argue that the "best" way is subjective; we can only suggest possible solutions.
I have had experience using Redgate ANTS Performance Profiler which will show you where the bottlenecks are in your application. It's definitely worth checking out.
Visual Studio Team System has a profiler baked in; it's far from perfect, but for simple applications you can kind of get it to work.
Recently I have had the most success with EQATEC's free profiler, or rolling my own tiny profiling class where needed.
Also, there have been quite a few questions about profilers in the past; see: http://www.google.com.au/search?hl=en&q=site:stackoverflow.com+.net+profiler&btnG=Google+Search&meta=&aq=f&oq=
Don't ever forget Rico Mariani's advice on how to carry out a good perf investigation.
You can also use performance counters, for example in ASP.NET applications.
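A minimal sketch of reading one (the category and counter names here are the stock Windows processor counters, not ASP.NET-specific ones):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CounterSample
    {
        static void Main()
        {
            // "% Processor Time" across all cores; the first NextValue()
            // always returns 0, so sample, wait, then read again.
            var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
            cpu.NextValue();
            Thread.Sleep(1000);
            Console.WriteLine("CPU: {0:0.0} %", cpu.NextValue());
        }
    }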
I've not been coding long, so I'm not familiar with which technique is quickest. I was wondering whether there is a way to do this in VS or with a 3rd-party tool?
Thanks
Profilers are great for measuring.
But your question was "How can I determine where the slow parts of my code are?".
That is a different problem. It is diagnosis, not measurement.
I know this is not a popular view, but it's true.
It is like a business that is trying to cut costs.
One approach (top down) is to measure the overall finances, then break it down by categories and departments, and try to guess what could be eliminated. That is measurement.
Another approach (bottom up) is to walk in at random into an office, pick someone at random, and ask them what they are doing at that moment and (importantly) why, in detail.
Do this more than once.
That is what Harry Truman did in the US defense industry at the outbreak of WW2; by visiting several sites, he immediately uncovered massive fraud and waste. That is diagnosis.
In code you can do this in a very simple way: "Pause" it and ask it why it is spending that particular cycle. Usually the call stack tells you why, in detail.
Do this more than once.
This is sampling. Some profilers sample the call stack. But then for some reason they insist on summarizing time spent in each function, inclusive and exclusive. That is like summarizing by department in business, inclusive and exclusive.
It loses the information you need, which is the fine-grain detail that tells if the cycles are necessary.
To answer your question:
Just pause your program several times, and capture the call stack each time. If your code is very slow, the wasteful function calls will be on nearly every stack. They will point with precision to the "slow parts of your code".
ADDED: RedGate ANTS is getting there. It can give you cost-by-line, and it is quite spiffy. So if you're in .NET, and can spare 3 figures, and don't mind waiting around to install & learn it, it can tell you much of what your Pause key can tell you, and be much prettier about it.
Profiling.
RedGate has a product.
JetBrains has a product.
I've used ANTS Profiler and I can join the others in recommending it.
The price is NEGLIGIBLE compared with the number of dev hours it will save you.
If you develop for a living and your company won't buy it for you, either change companies or buy it yourself.
For profiling large, complex UI applications, you often need a set of tools and approaches. I'll outline the approach and tools I used recently on a project to improve the performance of a .NET 2.0 UI application.
First of all, I interviewed users and worked through the use cases myself to come up with a list of target use cases that highlighted the system's worst-performing areas. I.e., I didn't want to spend n man-days optimising a feature that was hardly ever used but very slow; I would, however, want to spend time optimising a feature that was a little bit sluggish but invoked 1000 times a day, etc.
Once the candidate use cases were identified, I instrumented my code with my own lightweight logging class (I used high-performance timers and a custom logging solution because I needed sub-millisecond accuracy). You might, however, be able to get away with log4net and timestamps. The reason I instrumented the code is that it is sometimes easier to read your own logs than the profiler's output. I needed both for a variety of reasons (e.g. measuring .NET user control layouts is not always straightforward using the profiler).
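Something like this sketch, for example, with Stopwatch standing in for the high-performance timers (the log sink is Console here for brevity):

    using System;
    using System.Diagnostics;

    sealed class ScopedTimer : IDisposable
    {
        readonly string _name;
        readonly Stopwatch _sw = Stopwatch.StartNew();

        public ScopedTimer(string name) { _name = name; }

        public void Dispose()
        {
            _sw.Stop();
            // Ticks divided by Stopwatch.Frequency gives sub-millisecond resolution.
            Console.WriteLine("{0}: {1:0.000} ms",
                _name, _sw.ElapsedTicks * 1000.0 / Stopwatch.Frequency);
        }
    }

    // Usage around a suspect operation ("LoadCustomerGrid" is hypothetical):
    //   using (new ScopedTimer("LoadCustomerGrid"))
    //   {
    //       LoadCustomerGrid();
    //   }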
I then ran my instrumented code with the ANTS profiler and profiled the use case. By combining the ANTS profile and my own log files I was very quickly able to discover problems with our application.
We also profiled the server as well as the UI, and were able to work out breakdowns for time spent in the UI, time spent on the wire, time spent on the server, etc.
Also worth noting is that one run isn't enough, and the first run is usually worth throwing away. Let me explain: PC load, network traffic, JIT compilation status, etc. can all affect the time a particular operation takes. A simple strategy is to measure an operation n times (say 5), throw away the slowest and fastest runs, then analyse the remaining profiles.
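That strategy in code, as a sketch (the operation is a placeholder delegate):

    using System;
    using System.Diagnostics;
    using System.Linq;

    class RepeatedMeasure
    {
        // Time the operation 'runs' times, discard the fastest and slowest
        // (the first, JIT-cold run is usually the slowest), average the rest.
        static double MeasureMs(Action operation, int runs = 5)
        {
            var times = new double[runs];
            for (int i = 0; i < runs; i++)
            {
                var sw = Stopwatch.StartNew();
                operation();
                sw.Stop();
                times[i] = sw.Elapsed.TotalMilliseconds;
            }
            return times.OrderBy(t => t).Skip(1).Take(runs - 2).Average();
        }
    }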
EQATEC's profiler is a cute, small profiler that is free and easy to use. It probably won't come anywhere near the "wow" factor of ANTS Profiler in terms of features, but it is still very cool IMO and worth a look.
Use a profiler. ANTS costs money but is very nice.
I just set breakpoints; Visual Studio will tell you how many milliseconds have passed between breakpoints, so you can find it manually.
ANTS Profiler is very good.
If you don't want to pay, newer VS versions come with a profiler, but to be honest it doesn't seem very good. ATI/AMD make a free profiler... but it's not very user-friendly (to me, at least; I couldn't get any useful info out of it).
The advice I would give is to time function calls yourself with code. If they are fast and you do not have a high-precision timer, or the calls vary in slowness for a number of reasons (e.g. every x calls building some kind of cache), try running each one x10000 times or so, then dividing the result accordingly. This may not be perfect for some sections of code, but if you are unable to find a good, free, 3rd-party solution, it's pretty much what's left unless you want to pay.
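A sketch of that batching approach (the call is a placeholder delegate, and 10000 is the arbitrary repeat count from above):

    using System;
    using System.Diagnostics;

    class MicroTiming
    {
        // For calls too fast to time individually: time a whole batch,
        // then divide by the iteration count.
        static double AverageMicroseconds(Action call, int iterations = 10000)
        {
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
                call();
            sw.Stop();
            return sw.Elapsed.TotalMilliseconds * 1000.0 / iterations;
        }
    }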
Yet another option is Intel's VTune.