EDIT
Hey guys, thanks for your fast answers. I've gotten a lot closer to the problem; it seems to be a command that, when present, makes the devices answer faster (since I wait for every device's answer, this would slow the whole thing down massively).
I'll keep on trying, or kill the device :P
Thanks
-
Sorry for the caption... I simply couldn't think of a better one. I'm starting to think I'm going mad or something like that... :)
I'll first try to describe the problem in words... if nobody has an idea, I'll try to extract the important parts...
Imagine the following:
My app sends data via RS232 ports, so via a communications-monitor program I can watch exactly what data is sent and how much time passes between transmissions.
Now I have two functions. To isolate the problem, they currently contain exactly the same while loops with exactly the same code (I literally copied it!). The only difference might be the code before the while loops are reached, but even that isn't anything big. Once the while loops are entered, they run infinitely (just at the moment, of course).
As I said they contain the exact same sending routine.
But one is half as fast as the other... (WTF??) :)
Now my first attempt was to place breakpoints one line above the "while" line
and one line beneath it - the program never left the while loops...
So... can you guys think of any other reasons this might happen? Or ways to find out what it is? I'm kind of out of ideas after the breakpoint experiment showed that nothing but the same few lines are executed...
Oh, btw, no threads or anything like that are used here... so that can't be the reason either.
I'm looking forward to your ideas...thanks
Use a profiler to find where the extra cycles are going.
Given the exact same statement(s) run multiple times, there are absolutely no guarantees that they will have the same execution time. A lot of external factors can influence this, like CPU clock speed, for instance.
You said that you don't have threads, but your program is not the only one running on the OS, and you can't know exactly what the other processes do or control the time slices allocated to them.
You could set the priority of your process higher, but that still won't provide any guarantees, and it isn't recommended.
Related
During my current project, I'm calling a solution (which I'll refer to as solution2) every time the user presses a button (as long as the variables are correct). I'm torn between calling a method inside solution2 on each correct user input, or writing everything in the Start method and simply "activating" solution2 for each correct user input. I'm not too bothered about which one is easier (except if one of them would cause major difficulty); I'm only looking for the most optimised way to do it. Thank you for your help. -TAG
If you're torn between two different means to solve a problem, and optimization is your main concern, measure it! This is a good use of the Stopwatch class, but even just recording the time before the call and subtracting it from the time after completion to get a diff will help you out. Make a (Release!) build for each solution, and run them each a large number of times to establish which one is faster on average.
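For illustration, here's a minimal sketch of that measurement loop; the two Solution2Via... methods are hypothetical stand-ins for the candidate implementations, and the iteration count is arbitrary:

using System;
using System.Diagnostics;

class PerfComparison
{
    static void Main()
    {
        const int iterations = 100000;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            Solution2ViaMethodCall();      // candidate A
        sw.Stop();
        Console.WriteLine($"Method call:  {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < iterations; i++)
            Solution2ViaStartMethod();     // candidate B
        sw.Stop();
        Console.WriteLine($"Start method: {sw.ElapsedMilliseconds} ms");
    }

    // Placeholders for the two approaches being compared.
    static void Solution2ViaMethodCall() { }
    static void Solution2ViaStartMethod() { }
}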
Once you've determined the most performant solution, keep that one, and consider leaving the performance tracking in so that you can identify bottlenecks in your code. This will allow you to isolate and correct performance problems with confidence. Ideally you can separate your implementation details into their own class so you can refactor and optimize freely without needing to change the rest of your code.
I want to compute the shortest path by computing the distance between school and student locations. Here is my code. It works only for the first row in the database, but I don't know why! There is no error while running, but it takes a lot of time! It seems like there is an infinite loop, but I don't know where the error is!
The value of counter is the number of students.
Oh man, beginners ;) Funny.
but it takes a lot of time! It seems like there is an infinite loop
You do realize that common sense says an infinite loop does not take a LOT of time, but INFINITE time? So if that thing comes back after 15 minutes, it is likely a totally crappy algorithm, but it is NOT an infinite loop. Infinite is not slow; it is STOP HERE, NEVER EXIT.
Given that you run 2 loops on data we do not know, and that you run statements that make zero sense, I would suggest you get a tutorial on how to use the debugger and start - debugging. That is developer baseline knowledge, and if you do not learn it now, you may as well give up programming - there is no chance of writing more complex code without regularly using a debugger.
For pure performance there are profilers. Again, unless you hate programming and never want to make anything more than small samples, you WILL need to use a profiler more or less regularly. So you can start doing so now.
I think you will find that you do a LOT of SQL work. That is a select/update every time - and as such it is very inefficient (unless it doesn't happen often, but hey, without the data, how should I know?). Unless you have an enormous number of buses (like more than a couple of million), it may make more sense to pull the data into memory once and not hit the database again until your loop is done. Any select takes some time, and if you do that a lot - welcome to performance hell.
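To illustrate that last point, here's a rough sketch of pulling the rows into memory once; the Students table, its columns, and the open SqlConnection named connection are all invented for the example:

using System.Collections.Generic;
using System.Data.SqlClient;

// One SELECT up front instead of one per loop iteration.
var students = new List<(int Id, double Lat, double Lon)>();
using (var cmd = new SqlCommand("SELECT Id, Lat, Lon FROM Students", connection))
using (var reader = cmd.ExecuteReader())
{
    while (reader.Read())
        students.Add((reader.GetInt32(0), reader.GetDouble(1), reader.GetDouble(2)));
}

// The inner loop now works against memory - no database round-trips.
foreach (var s in students)
{
    // ... compute the distance from s to each school here ...
}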
I am developing a Windows Store application. Currently, I am getting intermittent hangs as described in this blog post. The issue appears to be that not enough space is given to remainder-defined column widths and TextBlocks attempting to format themselves (possibly due to the ellipsis processing). My app tends to hang indefinitely when this happens.
The question I have is less related to how to solve the issue (that seems to be described fairly well in the blog post) and more about how to find such issues. I hit one fairly regularly (approximately one in five or ten start-ups) on a Hub Page, so I've been looking through there (as it's the most notable instance of the issue), but it's a true Heisenbug in that it never seems to happen when debugging (or when you look for it).
So, how do I find the offending code? Is there just a pattern I need to look for (ColumnWidth="*"?). Is there a simpler way to solve this, such as changing the base style to remove one of the possibly offending properties listed in the blog post?
It seems possible that this is being caused by another issue, but this seems to be the most likely/plausible as of right now (as with the hubs I have a similar situation to what is being described there).
Also, is there a way to track when this happens in the wild? MSFT provides crash dumps on hangs, but they seem to give little to no information in them at all (and on top of that they only appear 5 days after they happen, which is less than ideal).
Thanks!
This is a complicated question to answer.
First, I think you have identified a real problem with WinRT. You theorize that the layout subsystem seems busy calculating your layout, and based on some condition that occurs around 20% of the time it does not finish in any reasonable time. Reasonable guess.
The problem, then, is when such an event does not occur during debug. In my personal development experience, errors that do not occur in debug are 99.99% timing related. Something is not finishing before a second process begins. Debugging lets that first, long process finish.
This is a real computer science question, and not so much a WinRT or Windows 8 question. To that end, the best answer I can give you without any code samples (why no code samples?) is the typical approach I employ when I reach the same dilemma. I hope it helps, at least a little.
Start with your brain.
I have always joked with developers about just how much debugging can be done outside the debugger - in your mind. Mentally walk the pipeline of your app and look for race-condition dependencies that might cause deadlocks. Believe it or not, this solves a lot of problems a debugger could never catch - because debuggers unwind timing dependencies.
Next is simplicity.
The more complex the problem, the less likely you are to find the culprit. In the case of a XAML application, I tend to remove or disable value converters first. Then, I look to remove data templates. If you have element bindings, those go next. If simplifying the XAML does help - that's just the beginning of figuring it out. If it doesn't, things just got easier.
Your code behind can be disabled with just a few keystrokes and found guilty or innocent. It's the most likely place for your problem, I find, and the reason we work so hard to keep it simple, clean, and minimal. After that, there's the view model. Though it's not impossible for your view model to be the one, and indeed you still have to check, it's probably not the root of your evil.
Lastly, there's the app pipeline that loads your page, loads your data, or does anything else. Step by step, your only real option is to slowly remove things from your app until you don't see the problem. Removing the problem, though, is not solving it. That's a case-by-case thing based on your app and the logic in it. In reality, you might see the problem leave when removing XAML while the real problem is in the view model or elsewhere.
What am I really saying? The silver bullet you are asking for really isn't there. There are several Microsoft tools and even more third party tools to look for bottlenecks, latency problems, slow code, and stuff - but in all reality, the scenario you describe is plain ole programming. I am not saying you aren't the victim of a bug. I'm saying, with the information we have, this is all I can do for you.
You'll get it.
The third thing to do is to add logging and instrumentation to your app.
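As a sketch of what such instrumentation might look like (a minimal helper, not a full logging framework - names invented here):

using System;
using System.IO;

static class Instrumentation
{
    private static readonly object Gate = new object();

    // Timestamped entries make it easy to see which gap between two
    // log lines is the long one.
    public static void Log(string message)
    {
        lock (Gate)
        {
            File.AppendAllText("trace.log",
                $"{DateTime.Now:HH:mm:ss.fff} {message}{Environment.NewLine}");
        }
    }
}

// Usage: bracket the suspect stages.
// Instrumentation.Log("Hub page: before data load");
// Instrumentation.Log("Hub page: after data load");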
Best of luck.
Given that Jerry has answered this at a higher level, I figured I would add the lower-level answers that, from the way your question is phrased, I think you are interested in. First I would like to address the last item, the dump files. There is a mechanism Microsoft provides for getting dump files of a process 'in the wild': Windows Error Reporting. If you want to collect dump files from failed client processes, you can sign up for Windows Error Reporting (I must admit I have never actually done it; I did look into it and tried to get my current employer to allow me to do this, but it didn't end successfully). To sign up, go to the Establish a Hardware/Desktop Account page.
As far as what to do with the dump files once you get them, you will want to download the Debugging Tools for Windows (part of the Windows SDK download) and/or the Debug Diag Tool (I must confess I am more of a Debugging Tools for Windows user than a Debug Diag user). These will give you the tools to look into what is going on at a lower level. Obviously you can only go so far, as you won't have access to private Microsoft symbols, but you do have access to public symbols, and usually those are enough to give you a pretty good idea of the problem area.
Your primary tools will depend on how reproducible the issue is. If it is only reproducible on some client machines, then you will have to rely on a single dump file that you probably got hold of through Windows Error Reporting. In that case, open it with the appropriate version of Windbg (x86 or x64) and look at what was going on at the time the dump was taken. How far you can go depends on how savvy you are. A simple starting point would be to run:
.symfix
.reload
.loadby sos clr
!EEStack
This will load Microsoft public symbols and the SOS extension dll for inspecting managed code, and then dump the contents of the stack for each thread in the process. From the names of the methods that appear on the call stacks, you should be able to get a pretty good idea of at least the area of the code where the lock is occurring.
You can go much further than this, as Windbg lets you go pretty deep into deadlock analysis. For instance, there is an extension for Windbg called sosex that provides a command, !dlk, which can sometimes automate the detection of a deadlock from a single dump file. (To load an extension dll into Windbg, you just download it and then call .load fullpathtodll.) If the problem is reproducible locally, you might have even more success with WPA/WPR or, if you are really fortunate, a simple procmon trace. These tools have a pretty steep barrier to entry, as they take some time to learn. If you are really interested in the topic, your best resources are the Defrag Tools series on Channel 9 and anything by Mario Hewardt (especially his book "Advanced .NET Debugging"). Again, getting familiar with these tools can take a lot of time, but at the very least, if you just know how to dump the contents of the stacks from a dump file, you can sometimes get what you need from that alone, so a basic understanding of these tools is beneficial as well.
I am interning for a company this summer, and I got passed down this program, which is a total piece. It does very computationally intensive operations throughout most of its duration. It takes about 5 minutes to complete a run on a small job, and the guy I work with said that the larger jobs have taken up to 4 days to run. My job is to find a way to make it go faster. My idea was that I could split the input in half and pass the halves to two new threads or processes. I was wondering if I could get some feedback on how effective that might be, and whether threads or processes are the way to go.
Any input would be welcome.
Hunter
I'd take a strong look at the TPL that was introduced in .NET 4 :) PLINQ might be especially useful for easy speedups.
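For instance, a minimal PLINQ sketch - inputs and ProcessJob are invented stand-ins for the job data and the existing heavy routine, and the items are assumed independent:

using System.Linq;

// AsParallel spreads independent, CPU-bound work across all cores.
var results = inputs
    .AsParallel()
    .Select(job => ProcessJob(job))
    .ToList();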
Generally speaking, splitting into different processes (exe files) is inadvisable for performance, since starting processes is expensive. It does have other merits, such as isolation (if part of a program crashes), but I don't think they apply to your problem.
If the jobs are splittable, then going multithreaded/multiprocess will bring better speed. That is assuming, of course, that the computer they run on actually has multiple cores/CPUs.
Threads or processes don't really matter for speed (if the threads don't share data). The only reason I know of to use processes is when a job is likely to crash the entire process, which is not likely in .NET.
Use threads if there's lots of memory sharing in your code, but if you think you'd like to scale the program to run across multiple computers (when required cores > 16), then develop it using processes with a client/server model.
The best way when optimising code, always, is to profile it to find out where the logjams are, IMO.
Sometimes you can find non-obvious, huge speed increases with little effort.
EQATEC and SlimTune are two free C# profilers which may be worth trying out.
(Of course the other comments about which parallelization architecture to use are spot on - it's just that I prefer analysis first.)
Have a look at the Task Parallel Library -- this sounds like a prime candidate problem for using it.
As for the threads vs processes dilemma: threads are fine unless there is a specific reason to use processes (e.g. if you were using buggy code that you couldn't fix, and you did not want a bad crash in that code to bring down your whole process).
Well, if the problem has a parallel solution, then this is the right way to (ideally) significantly (but not always) increase performance.
However, you don't control spawning additional processes, except by running an app that launches multiple mini apps... which is not going to help you with this problem.
You are going to need to use multiple threads. There is a pretty cool library for parallel programming added to .NET that you should take a look at. I believe its namespace is System.Threading.Tasks, or System.Threading with the Parallel class.
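A quick sketch of what that might look like with the Parallel class (workItems and ProcessItem are hypothetical names for the split-up input and the existing computation):

using System.Threading.Tasks;

// Parallel.ForEach handles partitioning the input across cores;
// each item must be processable independently of the others.
Parallel.ForEach(workItems, item =>
{
    ProcessItem(item);
});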
Edit: I would definitely suggest, though, that you think about whether or not a sequential solution may fit better. Sometimes parallel solutions take even longer. It all depends on the problem in question.
If you need to communicate/pass data, go with threads (and if you can target .NET 4, use the Task Parallel Library as others have suggested). If you don't need to pass much information, I suggest processes (they scale a bit better across multiple cores, and you get the ability to use multiple computers in a client/server setup [the server passes info to clients and gets a response, but other than that not much info passing], etc.).
Personally, I would invest my effort into profiling the application first. You can gain a much better awareness of where the problem spots are before attempting a fix. You can parallelize this problem all day long, but it will only give you a linear improvement in speed (assuming that it can be parallelized at all). But, if you can figure out how to transform the solution into something that only takes O(n) operations instead of O(n^2), for example, then you have hit the jackpot. I guess what I am saying is that you should not necessarily focus on parallelization.
You might find spots that are looping through collections to find specific items. Instead you can transform these loops into hash table lookups. You might find spots that do frequent sorting. Instead you could convert those frequent sorting operations into a single binary search tree (SortedDictionary) which maintains a sorted collection efficiently through the many add/remove operations. And maybe you will find spots that repeatedly make the same calculations. You can cache the results of already made calculations and look them up later if necessary.
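To make the first and last suggestions concrete, a sketch (collection, key, and method names invented for the example):

using System.Collections.Generic;
using System.Linq;

// Before: an O(n) scan on every lookup.
// var match = items.FirstOrDefault(i => i.Key == wanted);

// After: build the index once, then each lookup is O(1) on average.
var byKey = items.ToDictionary(i => i.Key);
Item match;
byKey.TryGetValue(wanted, out match);

// Caching repeated calculations (memoization):
var cache = new Dictionary<int, double>();
double Calc(int n)
{
    double value;
    if (cache.TryGetValue(n, out value)) return value;
    value = ExpensiveCalculation(n);  // the computation that kept repeating
    cache[n] = value;
    return value;
}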
I'm writing a plug-in for another program in C#.NET, and am having performance issues where commands take a lot longer than I would like. The plug-in reacts to events in the host program, and also depends on utility methods of the host program's SDK. My plug-in has a lot of recursive functions because I'm doing a lot of reading and writing to a tree structure. Plus I have a lot of event subscriptions between my plug-in and the host application, as well as event subscriptions between classes in my plug-in.
How can I figure out what is taking so long for a task to complete? I can't use regular breakpoint-style debugging, because the problem isn't that the code doesn't work - it's just too slow. I have set up a static "LogWriter" class that I can reference from all my classes and that lets me write timestamped lines to a log file from my code. Is there another way? Does Visual Studio keep some kind of timestamped log that I could use instead? Is there some way to view the call stack after the application has closed?
You need to use a profiler. Here's a link to a good one: ANTS Performance Profiler.
Update: You can also write messages at control points using Debug.Write. Then load the DebugView application, which displays all your debug strings with precise timestamps. It is freeware and very good for quick debugging and profiling.
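For example - TraverseTree here is an invented name for the suspect operation; Debug.WriteLine and DebugView's timestamping are real:

using System.Diagnostics;

// These messages show up in DebugView (and the VS Output window);
// DebugView stamps each one, so the gap between the two lines is
// the cost of the bracketed operation.
Debug.WriteLine("Before tree traversal");
TraverseTree(root);
Debug.WriteLine("After tree traversal");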
My Profiler List includes ANTS, dotTrace, and AQtime.
However, looking more closely at your question, it seems to me that you should do some unit testing at the same time you're doing profiling. Maybe start by doing a quick overall performance scan, just to see which areas need most attention. Then start writing some unit tests for those areas. You can then run the profiler while running those unit tests, so that you'll get consistent results.
In my experience, the best method is also the simplest. Get it running, and while it is being slow, hit the "pause" button in the IDE. Then make a record of the call stack. Repeat this several times. (Here's a more detailed example and explanation.)
What you are looking for is any statement that appears on more than one stack sample that isn't strictly necessary. The more samples it appears on, the more time it takes. The way to tell if the statement is necessary is to look up the stack, because that tells you why it is being done.
Anything that causes a significant amount of time to be consumed will be revealed by this method, and recursion does not bother it.
People seem to tackle problems like this in one of two ways:
1. Try to get good measurements before doing anything.
2. Just find something big that you can get rid of, rip it out, and repeat.
I prefer the latter, because it's fast, and because you don't have to know precisely how big a tumor is to know it's big enough to remove. What you do need to know is exactly where it is, and that's what this method tells you.
Sounds like you want a code 'profiler'. http://en.wikipedia.org/wiki/Code_profiler#Use_of_profilers
I'm unfamiliar with which profilers are the best for C#, but I came across this link after a quick Google search; it has a list of free open-source offerings. I'm sure someone else will know which ones are worth considering :)
http://csharp-source.net/open-source/profilers
Despite the title of this topic, I must argue that the "best" way is subjective; we can only suggest possible solutions.
I have had experience using Redgate ANTS Performance Profiler which will show you where the bottlenecks are in your application. It's definitely worth checking out.
Visual Studio Team System has a profiler baked in; it's far from perfect, but for simple applications you can kind of get it to work.
Recently I have had the most success with EQATEC's free profiler, or rolling my own tiny profiling class where needed.
Also, there have been quite a few questions about profilers in the past; see: http://www.google.com.au/search?hl=en&q=site:stackoverflow.com+.net+profiler&btnG=Google+Search&meta=&aq=f&oq=
Don't ever forget Rico Mariani's advice on how to carry out a good perf investigation.
You can also use performance counters for ASP.NET applications.