Estimate execution time before running a function - C#

I have a function that runs for about 0.7 seconds on my not-so-new development machine. (It runs for about 3 seconds on another machine I tested.)
I want to show the user some pre-message about half a second before the above function is done.
I don't want to show the message too long before the function is done as it will be annoying to just look at it and wait. On the other hand, I would rather not wait until the function is done because the whole thing starts from a user action and I don't want to waste time - it's better if I can show that message while the other function is doing its job.
I've already added a loop with a short Thread.Sleep() to let the pre-message hang around if the function was "too fast", but I'm afraid that usually won't be the case. And so, I want to see if I could roughly estimate the execution time based on the current machine's specifications, and even the current CPU usage, before running the function. Also, since we are talking about seconds and milliseconds, if getting this information takes more than a few milliseconds then it's not worth it; in that case, I might calculate it only once when the application loads.
Does anybody have an idea how to do that?

Estimating the time is pointless, and most likely impossible to do accurately. (Though there might be some scenarios where it could be done.)
Look at Windows file copying in the past: "10 seconds left... 2 minutes left... 5 seconds left". It kept changing its estimate based on whatever metrics it used. Better to just show a spinning image, or a message, to let the user know something is going on.
If you are processing a list of items, then it will be much easier for you, as the message could be:
Processing item 4 of 100.
At least then the user can know, roughly, what the code is doing. If you have nothing like this to inform the user, then I would cut your losses and show a simple "Processing...." message, or some icon, whatever takes your fancy for your solution.
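If the work can report per-item progress at all, a small amount of async plumbing is usually enough to keep that message honest. Below is a minimal, hypothetical WinForms-style sketch (runButton, statusLabel and DoWork are placeholder names, not anything from the question): the long-running work runs on a background task while a Progress<int> callback updates the label on the UI thread.

private async void runButton_Click(object sender, EventArgs e)
{
    statusLabel.Text = "Processing...";
    var progress = new Progress<int>(i => statusLabel.Text = $"Processing item {i} of 100.");

    await Task.Run(() => DoWork(progress));   // the long-running work stays off the UI thread

    statusLabel.Text = "Done.";
}

private void DoWork(IProgress<int> progress)
{
    for (int i = 1; i <= 100; i++)
    {
        // ... actual per-item work here ...
        progress.Report(i);
    }
}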

Related

Avoid Timeout on MySQL Query

I'm accessing a MySQL database using the standard MySql.Data package from Oracle. Every few releases of the application, we need to tweak the database schema (e.g. client wanted DECIMAL(10,2) changed to DECIMAL(10,3)) which the application handles by sending the necessary SQL statement. This works except that on a large database, the schema update can be a rather lengthy operation and times out.
The obvious solution is to crank up the timeout, but that results in a relatively poor user experience - I can put up a dialog that says "updating, please wait" and then just sit there with no kind of progress indicator.
Is there a way to get some kind of feedback from the MySQL server that it's 10% complete, 20% complete, etc., that I could pass on to the user?
There are two ways to approach this problem.
The first is the easiest, as you've suggested: just use a progress bar that bounces back and forth. It's not great and it's not the best user experience, but it's better than locking up the application, and at least it gives feedback. I also assume this is not something that occurs regularly, just a one-off annoyance every now and again, so it's not something I'd really worry about.
However, if you really are worried about the user experience and want to give better feedback, then you're going to need some metrics. Taking your DECIMAL example, time the change on different row counts: 100,000 rows, a million rows, and so on. That gives you a napkin-guess of how long it might take. Note that with different hardware and other things running on the machine you're never going to get it exact, but you'll have an estimate.
Once you have an estimate and you know the row count, you can drive a real progress bar from those estimates. And if it gets to 100% and the real operation hasn't completed, or it finishes before you get to 100% (and you can insta-jump the bar!), it's... something.
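As a rough illustration of that estimate-driven bar (all numbers and helper names here are assumptions: msPerRow would come from your own calibration runs, and GetRowCount / RunAlterTable stand in for the real MySql.Data calls against your connection):

double msPerRow = 0.05;                                    // your own napkin calibration
long rowCount = GetRowCount(connection, "orders");         // placeholder helper
double estimatedMs = rowCount * msPerRow;

var sw = Stopwatch.StartNew();
var alterTask = Task.Run(() => RunAlterTable(connection)); // placeholder for the ALTER TABLE

var uiTimer = new System.Windows.Forms.Timer { Interval = 200 };
uiTimer.Tick += (s, e) =>
{
    if (alterTask.IsCompleted)
    {
        progressBar.Value = 100;                           // insta-jump if the estimate was high
        uiTimer.Stop();
        return;
    }
    // Hold at 99% if the estimate turns out to be too low.
    progressBar.Value = (int)Math.Min(99, sw.ElapsedMilliseconds * 100 / estimatedMs);
};
uiTimer.Start();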
Personally I'd go with option one, and perhaps add the kind of helpful message Windows commonly shows: "This may take a few minutes". Maybe add "Now's a great time for coffee!". And a nice little animated gif :)

StopWatch vs Timer - When to Use

Forgive me for this question, but I can't seem to find a good source on when to use which. I'd be happy if you could explain it in simple terms.
Furthermore, I am facing this dilemma:
See, I am coding a simple application. I want it to show the elapsed time (in hh:mm:ss format or something), but also to be able to "speed up" or "slow down" its time intervals (i.e. speed it up so that a minute in real time equals an hour in the app).
For example, in Youtube videos (let's not consider the fact that we can jump to specific parts of the video), we see the actual time spent watching that video in the bottom left corner of the screen, but through the options menu we are able to speed the video up or slow it down.
And we can actually see that the time gets updated in a manner that agrees with the speed factor (like, if you choose twice the speed, the timer below gets updated twice faster than normal), and you can change this speed rate whenever you want.
This is what I'm kinda after. Something like how Youtube videos measure the time elapsed and the fact that they can change the time intervals. So, which of the two do you think I should choose? Timer or StopWatch?
I'm just coding a Windows Form Application, by the way. I'm simulating something and I want the user to be able to speed up whenever he or she wishes to. Simple as this may be, I wish to implement a proper approach.
As far as I know the main differences are:
Timer
Timer is just a simple scheduler that runs some operation/method once in a while
It executes the method on a separate thread, so it doesn't block the main thread
Timer is good when we need to execute some task at a regular interval without blocking anything.
Stopwatch
Stopwatch by default runs on the same thread
It counts time and exposes it as a TimeSpan struct, which is useful when we need some additional information
Stopwatch is good when we need to measure time and get additional details, such as how many processor ticks a method takes, etc. A short sketch of both is shown below.
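To make the difference concrete, here is a small console sketch of both (using System.Threading.Timer; the durations are arbitrary):

using System;
using System.Diagnostics;
using System.Threading;

class TimerVsStopwatch
{
    static void Main()
    {
        // Stopwatch: measures how long something took.
        var sw = Stopwatch.StartNew();
        Thread.Sleep(500);                                // stand-in for real work
        sw.Stop();
        Console.WriteLine($"Elapsed: {sw.Elapsed}");      // a TimeSpan

        // Timer: runs a callback on a schedule (here once a second, on a pool thread).
        using (var timer = new Timer(_ => Console.WriteLine(DateTime.Now.ToLongTimeString()),
                                     null, dueTime: 0, period: 1000))
        {
            Thread.Sleep(3500);                           // let it fire a few times
        }
    }
}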
This has already been covered in a number of other questions, including
here. Basically, you can either use a Stopwatch together with a speed factor, in which case the scaled result is your "elapsed time", or take the more complicated approach of implementing a Timer and changing its Interval property.
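For instance, a rough sketch of the Stopwatch-plus-speed-factor idea (all names here are assumptions: clockLabel and displayTimer would be your own WinForms controls, with the Timer's Interval set to something like 100 ms). Accumulating scaled deltas, rather than scaling the total, lets speedFactor change at any time without the displayed clock jumping:

private readonly Stopwatch sw = Stopwatch.StartNew();
private TimeSpan appElapsed = TimeSpan.Zero;   // "application" time, scaled
private TimeSpan lastReal = TimeSpan.Zero;     // real time at the previous tick
private double speedFactor = 1.0;              // 2.0 => app time runs twice as fast

private void displayTimer_Tick(object sender, EventArgs e)
{
    TimeSpan nowReal = sw.Elapsed;
    appElapsed += TimeSpan.FromTicks((long)((nowReal - lastReal).Ticks * speedFactor));
    lastReal = nowReal;
    clockLabel.Text = appElapsed.ToString(@"hh\:mm\:ss");
}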

In C#, how to independently guarantee that two machines will not generate the same random number?

Suppose two machines are running the same code, but you want to offset the timing so that there's no possibility of them running simultaneously, and by simultaneously I mean running within 5 seconds of each other.
One could generate a random number of seconds to wait before starting the code, but both machines might generate the same number.
Is there an algorithm to independently guarantee different random numbers?
In order to guarantee that the apps don't run at the same time, you need some sort of communication between the two. This could be as simple as someone setting a configuration value to run at a specific time (or delay by a set amount of seconds if you can guarantee they will start at the same time). Or it might require calling into a database (or similar) to determine when it is going to start.
It sounds like you're looking for a scheduler. You'd want a third service (the scheduler) which maintains when applications are supposed to/allowed to start. I would avoid having the applications talk directly to each other, as this will become a nightmare as your requirements become more complex (a third computer gets added, another program has to follow similar scheduling rules, etc.).
Have the programs send something unique (the MAC address of the machine, a GUID that only gets generated once and stored in a config file, etc.) to the scheduling service, and have it respond with how many seconds (if any) that program has to wait to begin its main execution loop. Or better yet, give the scheduler permissions on both machines to run the program at specified times.
You can't do this in pure isolation though - let's say that you have one program uniquely decide to wait 5 seconds, and the other wait 7 seconds - but what happens when the counter for program 2 is started 2 seconds before program 1? A scheduler can take care of that for you.
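A client-side sketch of that idea might look like the following. To be clear, the scheduler endpoint, its URL, and its "reply with a number of seconds" contract are all hypothetical; the point is only that each program identifies itself and lets a central service decide the delay:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class StartupGate
{
    static async Task Main()
    {
        string machineId = Environment.MachineName;       // or a GUID stored in a config file
        using var http = new HttpClient();

        // Hypothetical scheduler: replies with the number of seconds this machine must wait.
        string reply = await http.GetStringAsync($"http://scheduler.local/delay?machine={machineId}");
        int delaySeconds = int.Parse(reply);

        await Task.Delay(TimeSpan.FromSeconds(delaySeconds));
        // ... start the main execution loop here ...
    }
}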
As pointed out in the comments and the other answer, true randomness can't provide any guarantee of the values falling into particular ranges when the two machines generate them independently.
Assuming your goal is to avoid running multiple processes at the same time, you can force each machine to pick a different time slot in which to run the process.
If you can get consensus between the machines on the current time and on each machine's "index", then you can run your program in pre-selected slots, with an optional random offset within the slot.
I.e. use a time service to synchronize the clocks (the default behavior of most operating systems for machines connected to pretty much any network) and pre-assign sequential IDs to the machines (along with the total count). Then let the machine with a given ID run in a time slot like the following (assuming count < 60; otherwise adjust the start time based on the count, and leave enough margin to avoid overlaps when a small clock drift occurs between synchronizations):
(start of an hour + (ID*minutes) + random_offset (0,30 seconds))
This way no communication between the machines is needed.
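A sketch of that slot calculation in C# follows the formula above; machineId is assumed to be the pre-assigned sequential ID read from configuration:

using System;
using System.Threading;

class SlotRunner
{
    static void Main()
    {
        int machineId = 3;                                 // pre-assigned, 0..count-1 (assumed from config)
        var rng = new Random();

        DateTime now = DateTime.Now;
        DateTime hourStart = new DateTime(now.Year, now.Month, now.Day, now.Hour, 0, 0);
        DateTime myStart = hourStart
            .AddMinutes(machineId)                         // ID * one-minute slots (assumes count < 60)
            .AddSeconds(rng.Next(0, 31));                  // random offset of 0-30 seconds within the slot

        TimeSpan wait = myStart - now;
        if (wait < TimeSpan.Zero)
            wait += TimeSpan.FromHours(1);                 // this hour's slot already passed: use the next one

        Thread.Sleep(wait);
        // ... run the process ...
    }
}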
Have both apps read a local config file, wait the number of seconds specified, and then start running.
Put 0 in one and 6+ in the other. They won't start within 5 seconds of each other. (Adjust the 6+ as necessary to cater for variations in machine load, speed, etc.)
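A minimal sketch of that, assuming a classic App.config appSettings entry; the key name StartDelaySeconds is just an example:

using System;
using System.Configuration;   // reference System.Configuration.dll
using System.Threading;

class Program
{
    static void Main()
    {
        // App.config on one machine: <add key="StartDelaySeconds" value="0"/>
        // App.config on the other:   <add key="StartDelaySeconds" value="6"/>
        int delay = int.Parse(ConfigurationManager.AppSettings["StartDelaySeconds"] ?? "0");
        Thread.Sleep(TimeSpan.FromSeconds(delay));
        // ... start the real work here ...
    }
}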
Not really an algorithm, but you could create two arrays of numbers that are completely different and have each app grab a number (randomly) from its own array before it starts.
What is the penalty for them running at the same time?
The reason I ask is that even if you offset the starting times, one could still start before the other has finished. If the data they process grows, this becomes more likely as time goes on, and the 5-second rule becomes obsolete.
If they use the same resources, then it would be best to use those resources to coordinate, e.g. set a flag in the database, or check whether there is enough memory available to run.

PerfView: Analyzing Performance of App including Database Calls

I'm currently getting into PerfView for performance analysis for my (C#) apps.
But typically those apps use a lot of database calls.
So I asked myself questions like:
- How much time is spent in Repositories?
- (How much time is spent waiting for SQL Queries to return?) -> I don't know if this is even possible to discover with PerfView
But my traces barely give me any useful results. In the "Any Stacks" view it tells me (when I use grouping on my Repository) that 1.5 seconds are spent in my Repository (the whole call takes about 45 seconds). And I know that can't be right, because the repository calls the database A LOT.
Is it just that the CPU metric isn't captured while waiting for SQL queries to complete, because the CPU has nothing to do during that period, and therefore my times only include data transformation etc. inside the repository?
Thanks for any help!
EDIT:
What I missed was turning on the "Thread Time" option to capture the time spent in blocked code (which I suppose is what's happening during database calls). I get all the stacks now and just have to filter out the uninteresting things, but I don't seem to be getting anywhere.
What's especially interesting for me when using "Thread Time" is BLOCKED_TIME, but with it the numbers seem off. Looking at the screenshot, it tells me that CPU_TIME is 28,384, which is milliseconds (afaik), but BLOCKED_TIME is 2,314,732, which can't be milliseconds. So the percentage for CPU_TIME is very low at 1.2%, yet 28 out of 70 seconds is still a lot. The inclusive percentage is comparing apples and oranges here. Can somebody explain?
So, I succeeded.
What I missed (and Vance Morrison actually explains it in his video tutorial) is this: when doing a wall-clock time analysis with PerfView, you get the accumulated time of all the threads that have been "waiting around" reported as BLOCKED_TIME. That means that over a 70-second trace the finalizer thread alone adds 70 seconds to BLOCKED_TIME, because it was sitting there doing nothing (or almost nothing, in my case).
So when doing a wall-clock time analysis it is important to filter down to what you're interested in. For example, search for the thread that consumed the most CPU time, include only that one in your analysis, and go further down the stack to find the pieces of your code that are expensive (and that may lead to DB or service calls). As soon as you do the analysis from the point of view of a single method, you really get the time spent in that method, and the accumulated BLOCKED_TIME is out of the picture.
What I found most useful was searching for methods in my own code that "seemed time-consuming" and switching to the Callers view for that method, which sheds light on where it is called from, while the Callees view shows what is responsible for the time further down the stack (a DB call in a repository, or service calls fetching some data).
It's a little hard to explain, but as soon as I really understood the basics of wall-clock time analysis, it all made sense.
I recommend this video tutorial: http://channel9.msdn.com/Series/PerfView-Tutorial
Again, great and very powerful tool!

How to optimize

I have a method in my C# code-behind that ends up executing 10,000+ lines of code, spread across assemblies as well as child methods. My question is: how do I optimize it? It is taking more than 40 seconds to load 500 rows into my page's GridView, which I designed myself.
Profile your code. That will help you identify where it's slow. From reading your post, optimizing might take you a long time, since you have a lot of code and data.
Virtualize as much as you can. Instead of loading 500 rows, can you try loading 50 rows first, show your UI, and then load the remaining 450 rows asynchronously? This doesn't actually speed up your application, but at least it appears to respond much more quickly than making the user wait 40 seconds.
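As a rough illustration of that staggered load, here is a WinForms-flavoured sketch; LoadRows, RowItem and myGridView are placeholders, and an ASP.NET WebForms GridView would typically achieve the same effect with paging or an AJAX update instead:

// Bind the first 50 rows so the UI appears quickly, then append the rest in the background.
private async void Form_Load(object sender, EventArgs e)
{
    List<RowItem> firstPage = await Task.Run(() => LoadRows(skip: 0, take: 50));
    var data = new BindingList<RowItem>(firstPage);
    myGridView.DataSource = data;                 // usable after ~50 rows

    List<RowItem> rest = await Task.Run(() => LoadRows(skip: 50, take: 450));
    foreach (var row in rest)
        data.Add(row);                            // appended without blocking the UI
}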
This method is very simple, but it can pinpoint the activities that would profit the most by optimizing.
If there is a way to speed up your program, it is taking some fraction of time, like 60%.
If you randomly interrupt it, under a debugger, you have a 60% chance of catching it in the act.
If you examine the stack, and maybe some of the state variables, it will tell you with great precision just what the problem is.
If you do it 10 times, you will see the problem on about 6 samples.
