[ EDIT 2x: I think I worded my original question poorly, so I have scooted it down below and rewritten exactly what I am trying to get at, for future readers. ]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[ New, Shiny, Clear Question with Better Wording ]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I have a loop that runs a simulation/gaming framework. This loop has several places in it where it needs to ascertain how much time, in reality, has passed, so that the logic within those special places, specifically Rendering and Updating, can work correctly. It has the option of running as a fixed time step (unfixedupdate/unfixedrender are false) or not.
The problem arises when any breakpoint-based debugging is done anywhere in the application, because the loop uses Stopwatches to figure out how much real time has passed (so that physics and animation move at a realistic speed, rather than at whatever rate the computer can churn out frames).
It looks (roughly) like this, using a separate Stopwatch for each 'part' of the application loop that needs to know how much time has passed since that 'part' last occurred:
while ( RunningTheSimulation ) {

    /* ... Events and other fun awesome stuff */

    TimeSpan updatedifference = new TimeSpan( updatestopwatch.ElapsedTicks );

    if ( unfixedupdate || updatedifference > updateinterval ) {
        Time = new GameTime( updatedifference,
            new TimeSpan( gamestopwatch.ElapsedTicks ) );
        Update( Time );
        ++updatecount;
        updatestopwatch.Reset( );
        updatestopwatch.Start( );
    }

    TimeSpan renderdifference = new TimeSpan( renderstopwatch.ElapsedTicks );

    if ( unfixedrender || renderdifference > renderinterval ) {
        Time = new GameTime( renderdifference,
            new TimeSpan( gamestopwatch.ElapsedTicks ) );
        Render( Time );
        ++rendercount;
        renderstopwatch.Reset( );
        renderstopwatch.Start( );
    }
}
Some info about the variables:
updatestopwatch is a Stopwatch for the time spent outside of the Update() function,
renderstopwatch is a Stopwatch for the time spent outside the Render() function, and
gamestopwatch is a Stopwatch for the total elapsed time of the simulation/game itself.
The problem arises when I debug anywhere in the application. Because the Stopwatches measure real time, any kind of breakpoint-based debugging completely throws the simulation off: the Stopwatches keep counting time whether or not the application is sitting at a breakpoint. I am not using the Stopwatches to measure performance; I am using them to keep track of the time between re-occurrences of Update, Render, and other events like the ones illustrated above. This gets extremely frustrating when I breakpoint, analyze, and fix an error in Update(), but then the Render() time is so completely off that any display of the results of the simulation is vengefully kicked in the face.
That said, when I stop debugging entirely it's obviously not a problem, but I have a lot of development work to do and I'm going to be debugging for a long time, so just pretending that this isn't inconvenient won't work, unfortunately. =[
I looked at Performance Counters, but I can't seem to wrap my head around how to get them to work in the context of what I'm trying to do: Render and Update can contain any amount of arbitrary code (they're specific to whatever simulation is running on top of this while loop), which means I can't steadily do a PerformanceCounter.Increment() for the individual components of the loop.
I'll keep poking around System.Diagnostics and other .NET namespaces, but so far I've turned up blanks on how to just "ignore" the time spent in the attached Debugger...
Anyone have any ideas or insight?
[ EDITS 5x: Corrected misspellings and made sure the formatting on everything was correct. Sorry about that. ]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[ Original, less-clear question ]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I have a constant loop running in a C# application, which I debug all the time. I am currently using a Stopwatch, but I could use any other mechanism to track the passing time. My problem begins when I do things like use breakpoints somewhere during this loop (enclosed in a typical while (true) { ... }):
if ( unfixedupdate || updatedifference > updateinterval )
{
    Time = new GameTime( updatedifference,
        new TimeSpan( gamestopwatch.ElapsedTicks ) );
    Update( Time );
    ++updatecount;
    updatestopwatch.Reset( );
    updatestopwatch.Start( );
}
The time measures itself fine, but it measures actual real time - including any time I spent debugging. Which means if I'm poking around for 17 seconds after updatestopwatch.Reset(), this gets compounded onto my loop and - for at least 1 rerun of that loop - I have to deal with the extra time I spent in real time factoring into all of my calculations.
Is there any way I can hook into the debugger to know when it's freezing the application, so I can counter-measure the time and subtract it accordingly? As tagged, I'm using .NET and C# for this, but anything related to Visual Studio might also help get me going in the right direction.
[ EDIT ]
To provide more information: I am using several stopwatches (for update, rendering, and a few other events, all in a different message queue). If I set a breakpoint inside Update(), or in any other part of the application, the stopwatches still accurately measure the real time spent between these, and that includes the time I spend debugging various completely unrelated components of my application that are called downstream of Update() or Render() or Input() etc. Obviously the simulation's timing (controlled by the GameTime parameter passed into the top-level Update, Render, etc. functions) won't work properly if, even though the CPU only took 13 ms to finish the Update function, I spend 13 extra seconds debugging (looking at variables and then continuing the simulation); the problem is that the other stopwatches suddenly account for 13 extra seconds of time. If it still doesn't make sense, I'll chime in again.
Use performance counters instead. The process CPU time should give a good indicator (though not as accurate as a real-time stopwatch) and should not interfere with debugging.
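For what it's worth, a rough sketch of that idea, sampling the process CPU time between loop iterations (Process and TotalProcessorTime come from System.Diagnostics; the variable names are illustrative):

using System;
using System.Diagnostics;

Process proc = Process.GetCurrentProcess();
TimeSpan lastCpu = proc.TotalProcessorTime;

// ... inside the simulation loop:
proc.Refresh();                                         // refresh the cached process information
TimeSpan cpuDelta = proc.TotalProcessorTime - lastCpu;  // CPU time consumed since the last sample
lastCpu = proc.TotalProcessorTime;
// cpuDelta barely grows while every thread is stopped at a breakpoint,
// which is why it is not skewed by debugging the way wall-clock time is.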
A possible solution to this would be to write your locals (or whatever data you are trying to debug) to the output using Debug.WriteLine().
You failed to explain what you are actually trying to debug and why animation timers doing exactly what they are expected to do with elapsed time is a problem. Know that when you break, time continues on. What is the actual problem?
Also, keep in mind that timings while debugging are not going to be anywhere near the measurement when running in release with optimizations turned on. If you'd like to measure time frame to frame, use a commercial profiling tool. Finding how long a method or function took is exactly what they were made for.
If you'd like to debug whether or not your animation works correctly, create a deterministic test where you supply the time rather than depending on the wall clock, using dependency injection and a time provider interface.
This article has a great example:
https://www.toptal.com/qa/how-to-write-testable-code-and-why-it-matters
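A minimal sketch of that "inject the clock" idea, assuming a hand-rolled interface (IGameClock, StopwatchClock and ManualClock are illustrative names, not an existing API):

using System;
using System.Diagnostics;

// The simulation asks this interface for time instead of owning Stopwatches directly.
public interface IGameClock
{
    TimeSpan Elapsed { get; }
}

// Production clock: real elapsed time from a Stopwatch.
public sealed class StopwatchClock : IGameClock
{
    private readonly Stopwatch sw = Stopwatch.StartNew();
    public TimeSpan Elapsed { get { return sw.Elapsed; } }
}

// Test/debug clock: time advances only when told to, so a breakpoint costs nothing.
public sealed class ManualClock : IGameClock
{
    private TimeSpan elapsed;
    public TimeSpan Elapsed { get { return elapsed; } }
    public void Advance(TimeSpan delta) { elapsed += delta; }
}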
Related
For the challenge and educational gain, I am currently trying to make a simple game in the console window. I use a very primitive "locked" framerate system like this:
using System.Threading;

// ...

static private void Main(string[] args)
{
    AutoResetEvent autoEvent = new AutoResetEvent(false);
    Timer timer = new Timer(Update);   // Update matches the TimerCallback signature: void Update(object state)
    timer.Change(0, GameSpeed);        // first tick immediately, then every GameSpeed milliseconds
    autoEvent.WaitOne();               // block Main so the timer keeps firing
}
So, a timer ticks every GameSpeed milliseconds and calls the method Update().
The way I have understood input in the console window so far is as follows:
The console application has a "queue" where it stores any keyboard input as metadata plus an instance of the ConsoleKey enum. The user can add to this queue at any time. If the user holds down, say, A, it will add A every computer frame, that is, at the fastest rate the machine can manage, not at the locked frame rate I am working with.
Calling Console.ReadKey() removes and returns the first element of this queue. Console.KeyAvailable returns a bool indicating whether a key press is waiting in the queue (i.e., whether it is non-empty).
If GameSpeed is set to anything higher than 400, everything consistently works fine. The image below displays the results of some Console.WriteLine() debug messages that give the number of keyboard inputs detected in each locked/custom frame, using the following code:
int counter = 0;
while (Console.KeyAvailable) { counter++; Console.ReadKey(true); }
Console.WriteLine(counter);
[Results screenshot: inputs counted per frame with GameSpeed = 1000]
I use only the A key. I hold it for some time, then release it again. GameSpeed is set to 1000. As expected, the first frames give low numbers, since I might start pressing halfway into a frame, and so do the last frames, since I might release the A key early.
Now, the exact same experiment, but with a GameSpeed of only 200:
As you can see, I've marked in yellow the places where I began pressing. The first frame is always captured perfectly, but then there are one, two, or three frames where it acts as if it received no input at all; after those frames it is fine again and registers around 7 inputs per frame.
I recognize that you are not supposed to make games in the console window; it is not made for scenarios like this. That does not, however, eliminate the possibility that there is some specific, logical reason this happens that I might be able to fix. So, concretely, the question is: can anyone provide some knowledge or ideas about why this happens?
If computer specs are needed, just say so in the comments and I'll add them.
Edit:
I think I have found the cause of this error, and it is the Windows keyboard repeat delay. While you can change this in the Control Panel, I have searched the web and found no examples of how you would change it from a C# application. The question then boils down to: how do you change the Windows keyboard repeat delay?
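If changing the repeat delay really is the route you want, one possible approach is the Win32 SystemParametersInfo call via P/Invoke; the wrapper class below is only a sketch:

using System;
using System.Runtime.InteropServices;

static class KeyboardRepeat
{
    // SPI constants from WinUser.h.
    const uint SPI_SETKEYBOARDDELAY = 0x0017;  // repeat delay: 0 (shortest) .. 3 (longest)
    const uint SPI_SETKEYBOARDSPEED = 0x000B;  // repeat rate: 0 (slowest) .. 31 (fastest)

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool SystemParametersInfo(uint uiAction, uint uiParam, IntPtr pvParam, uint fWinIni);

    public static void Apply()
    {
        // fWinIni = 0: do not persist the change to the user's profile.
        SystemParametersInfo(SPI_SETKEYBOARDDELAY, 0, IntPtr.Zero, 0);
        SystemParametersInfo(SPI_SETKEYBOARDSPEED, 31, IntPtr.Zero, 0);
    }
}

Note that this changes a global user setting for the session; many console games instead poll the key state directly (e.g. with GetAsyncKeyState) rather than relying on typematic repeat at all.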
I have an application which monitors a particular event and then starts to calculate things once it happens. Events are irregular and can come in any pattern, from several in a second to none for a long time.
I want to measure the percentage of time the application is busy (similar to CPU % usage).
I want to use Timer100Ns counter
Two questions:
Do I increment it by hardware ticks or by DateTime ticks (e.g. if I use Stopwatch - do I use sw.ElapsedTicks or sw.Elapsed.Ticks) ?
Do I need a base counter for it?
So I am about to write something like this:
Stopwatch sw = new Stopwatch();
sw.Start();
// Do some operation which is irregular by nature
sw.Stop();
// Measure utilization of the application
myCounterOfTypeTimer100Ns.IncrementBy(sw.Elapsed.Ticks);
Will it do?
EDIT: I experimented with it a bit and now it's even more confusing: it actually shows the values I increment it by, not a percentage.
The mystery unravelled. It now appears that I wasn't using it the way it was supposed to be used (or rather, I didn't read TFM properly). If the sampling interval is 1 s (as in the perfmon live window) and your intervals are longer than 1 s, it shows you a nonsense number... To get smooth readings, the activity you are trying to measure really has to take fractions of a second; otherwise this counter is not a good idea.
The answer for this kind of problem (although it's not obvious, and it is still disturbing that nobody suggested it in a week) is actually SampleCounter.
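If perfmon visibility isn't a hard requirement, a manual ratio of two Stopwatches gives the same busy-percentage figure without worrying about counter types at all; BusyMeter below is a hypothetical helper, not part of System.Diagnostics:

using System;
using System.Diagnostics;

class BusyMeter
{
    private readonly Stopwatch total = Stopwatch.StartNew();   // wall-clock time since the meter started
    private readonly Stopwatch busy  = new Stopwatch();        // accumulated time spent inside work sections

    public void BeginWork() { busy.Start(); }
    public void EndWork()   { busy.Stop();  }

    // Fraction of elapsed wall-clock time spent between BeginWork and EndWork (0.0 .. 1.0).
    public double BusyFraction
    {
        get
        {
            return total.ElapsedTicks == 0
                ? 0.0
                : (double)busy.ElapsedTicks / total.ElapsedTicks;
        }
    }
}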
Hi guys, I just made a small algorithm to display the FPS on my screen.
frames_temp++;
frames_Time += (int)gameTime.ElapsedGameTime.TotalMilliseconds;

if (frames_Time >= 1000)
{
    frames = frames_temp;
    frames_temp = 0;
    frames_Time = 0;
}
This code snippet is located in the Update method. The frames variable stores the actual value drawn to the screen (just posting that code to make sure there is no fault, even though I checked it already).
Now the problem is that I can't turn off IsFixedTimeStep. I set it to false in the constructor, in Initialize, and even in the Update method, but the program still limits my FPS to ~60. I even put a for() loop into my Update method running many millions of iterations, without frame drops, to make sure it's not my CPU being too slow. Another thing I already tried is using my own TimeSpan and the system time to get the elapsed time between calls of Update(), which gives me roughly the same output. So it is 99% certain that Update only runs 60 times a second.
So why can't I call the Update method as often as possible, as should be the case when IsFixedTimeStep is false?
Thanks for any replies.
I believe that your problem is vertical syncing: this feature of the graphics device locks the frame rate to your monitor's refresh rate. To solve this you need to turn off VSync (SynchronizeWithVerticalRetrace) on the GraphicsDeviceManager:
graphicsDeviceManager.SynchronizeWithVerticalRetrace = false;
graphicsDeviceManager.ApplyChanges();
graphicsDeviceManager is your game's GraphicsDeviceManager
If I recall correctly, fixed timestep = false results in Update being called right before the Draw method, not in it being called as often as possible. Therefore you should check whether VSync or something else is limiting your application to 60 fps.
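For reference, a rough sketch of how the two settings are usually combined in an XNA/MonoGame Game constructor (graphics being the usual GraphicsDeviceManager field; Game1 is just the template class name):

public Game1()
{
    graphics = new GraphicsDeviceManager(this);

    IsFixedTimeStep = false;                          // let Update/Draw run as fast as possible
    graphics.SynchronizeWithVerticalRetrace = false;  // stop VSync capping Draw at the monitor refresh rate
}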
ETC = "Estimated Time of Completion"
I'm counting the time it takes to run through a loop and showing the user some numbers that tell him/her approximately how much time the full process will take. I feel like this is a common thing that everyone does on occasion, and I would like to know whether you have any guidelines that you follow.
Here's an example I'm using at the moment:
int itemsLeft;        // The number of items left to run through.
double timeLeft;
TimeSpan TsTimeLeft;
List<double> average;
double milliseconds;  // The time each loop takes to complete; reset every loop.

// The background worker calls this event once for each item. The total number
// of items is in the hundreds for this particular application and every loop takes
// roughly one second.
private void backgroundWorker1_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    // An item has been completed!
    itemsLeft--;
    average.Add(milliseconds);

    // Get an average time per item and multiply it with the items left.
    timeLeft = average.Sum() / average.Count * itemsLeft;
    TsTimeLeft = TimeSpan.FromSeconds(timeLeft);

    this.Text = String.Format("ETC: {0}:{1:D2}:{2:D2} ({3:N2}s/file)",
        TsTimeLeft.Hours,
        TsTimeLeft.Minutes,
        TsTimeLeft.Seconds,
        average.Sum() / average.Count);

    // Only use the last 20-30 samples in the calculation to prevent an unnecessarily long List<>.
    if (average.Count > 30)
        average.RemoveRange(0, 10);

    milliseconds = 0;
}
//this.profiler.Interval = 10;
private void profiler_Tick(object sender, EventArgs e)
{
    milliseconds += 0.01;
}
As I am a programmer at the very start of my career, I'm curious to see what you would do in this situation. My main concern is the fact that I calculate and update the UI on every loop; is this bad practice?
Are there any dos and don'ts when it comes to estimations like this? Are there any preferred ways of doing it, e.g. update every second, update every ten logs, calculate and update the UI separately? Also, when is an ETA/ETC a good or a bad idea?
The real problem with estimating the time taken by a process is quantifying the workload. Once you can quantify that, you can make a better estimate.
Examples of good estimates
File system I/O or network transfer. Whether or not the file system performs badly, you can quantify in advance the total number of bytes to be processed and you can measure the speed. Once you have these, and once you can monitor how many bytes you have transferred, you get a good estimate. Random factors may affect the estimate (i.e. another application starting in the meantime), but you still get a significant value.
Encryption on large streams, for the reasons above. Even if you are computing an MD5 hash, you always know how many blocks have been processed, how many are left to process, and the total.
Item synchronization. This is a little trickier. If you can assume that the per-unit workload is constant, or if you can make a good estimate of the time required to process an item when the variance is low or insignificant, then you can make another good estimate of the process. Take email synchronization: if you don't know the byte size of the messages (otherwise you fall into case 1), but common practice says that the majority of emails have roughly the same size, then you can use the mean time taken to download/upload the already-processed emails to estimate the time taken to process a single email. This won't work in 100% of cases and is subject to error, but you still see the progress bar progressing on a large account.
In general, the rule is that you can make a good estimate of ETC/ETA (ETA is actually the date and time the operation is expected to complete) if you have a homogeneous process about which you know the numbers. Homogeneity guarantees that the time to process one work item is comparable to the others, i.e. the time taken to process a previous item can be used to estimate future ones; the numbers are what make the calculation possible.
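As an illustration of that rule, here is a minimal rolling-window estimator; EtaEstimator is a hypothetical helper, not an existing class:

using System;
using System.Collections.Generic;
using System.Linq;

class EtaEstimator
{
    private readonly Queue<double> recentSeconds = new Queue<double>();
    private readonly int windowSize;

    public EtaEstimator(int windowSize) { this.windowSize = windowSize; }

    // Call once per completed item with the time it took, in seconds.
    public void ReportItem(double seconds)
    {
        recentSeconds.Enqueue(seconds);
        if (recentSeconds.Count > windowSize)
            recentSeconds.Dequeue();
    }

    // Mean of the recent samples, projected over the remaining items.
    public TimeSpan Estimate(int itemsLeft)
    {
        if (recentSeconds.Count == 0)
            return TimeSpan.Zero;
        return TimeSpan.FromSeconds(recentSeconds.Average() * itemsLeft);
    }
}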
Examples of bad estimates
Operations on a number of files of unknown size. This time you know only how many files you want to process (e.g. to download), but you don't know their sizes in advance. If the file sizes have high variance, you are in trouble: having downloaded half of the files, when those were the smallest and add up to only 10% of the total bytes, can you say you are halfway? No! You just see the progress bar grow quickly to 50% and then crawl.
Heterogeneous processes, e.g. Windows installations. As pointed out by @HansPassant, Windows installations provide a worse-than-bad estimate. Installing Windows software involves several processes, including file copy (this can be estimated), registry modifications (usually never estimated), and the execution of transactional code. The real problem is the last one; transactional processes involving the execution of custom installer code are discussed below.
Execution of generic code. This can never be estimated. A code fragment involves conditional statements, and executing them means taking different paths depending on conditions external to the code. This means, for example, that a program behaves differently depending on whether you have a printer installed, whether you have a local or a domain account, and so on.
Conclusions
Estimating the duration of a software process is neither an impossible task nor an exact/deterministic one.
It's not impossible because, even in the case of code fragments, you can either find a model for your code (take an LU factorization as an example; this may be estimated), or you might redesign your code, splitting it into an estimation phase, where you first determine the branch conditions, and an execution phase, where all the pre-determined branches are taken. I said might because this task is in practice impossible: most code determines branches as effects of previous conditions, meaning that estimating a branch actually involves running the code. A chicken-and-egg circle.
It's not a deterministic process. Computer systems, especially multitasking ones, are affected by a number of random factors that may impact your estimated process. You will never get a correct estimate before running the process; at most, you can detect external factors and re-estimate as you go. The gap between your estimate and the real duration converges to zero as you approach the end of the process (lim [x -> N] |est(x) - real(x)| = 0, where N is the process duration).
If your user interface is so obscure that you have to explain that ETC doesn't mean Etcetera then you are doing it wrong. Every user understands what a progress bar does, don't help.
Nothing is quite as annoying as an inaccurate progress bar. Particularly ones that promise a quick finish but then don't deliver. I'd give the progress bar displayed by any installer on Windows as a good example of one that is fundamentally broken. Just not a shining example of an implementation that you should pursue.
Such a progress bar is broken because it is utterly impossible to guess up front how long it is going to take to install a program. File systems have very unpredictable perf. This is a very common problem with estimating execution time. Better UI models are the spinning dots you'd see in a video player and many programs in Windows 8. Or the marquee style supported by the common ProgressBar control. Just feedback that says "I'm not dead, working on it". Even the hour-glass cursor is better than a bad estimate. If you have something to report beyond a technicality that no user is really interested in then don't hesitate to display that. Like the number of files you've processed or the number of kilobytes you've downloaded. The actual value of the number isn't that useful, seeing the rate at which it increases is the interesting tidbit.
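As a small sketch of that "I'm not dead, working on it" feedback in WinForms (progressBar1, statusLabel and filesProcessed are assumed to already exist on the form):

// Marquee style: continuous animation, no percentage promise to break.
progressBar1.Style = ProgressBarStyle.Marquee;
progressBar1.MarqueeAnimationSpeed = 30;   // milliseconds per marquee step

// Report raw numbers instead of an estimate; the rate of change is the useful part.
statusLabel.Text = String.Format("{0} files processed", filesProcessed);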
A colleague and I are going back and forth on this issue and I'm hoping to get some outside opinions as to whether or not my proposed solution is a good idea.
First, a disclaimer: I realize that the notion of "optimizing DateTime.Now" sounds crazy to some of you. I have a couple of pre-emptive defenses:
I sometimes suspect that those people who always say, "Computers are fast; readability always comes before optimization" are often speaking from experience developing applications where performance, though it may be important, is not critical. I'm talking about needing things to happen as close to instantaneously as possible -- like, within nanoseconds (in certain industries, this does matter -- for instance, real-time high-frequency trading).
Even with that in mind, the alternative approach I describe below is, in fact, quite readable. It is not a bizarre hack, just a simple method that works reliably and fast.
We have run tests. DateTime.Now is slow (relatively speaking). The method below is faster.
Now, onto the question itself.
Basically, from tests, we've found that DateTime.Now takes roughly 25 ticks (around 2.5 microseconds) to run. This is averaged out over thousands to millions of calls, of course. It appears that the first call actually takes a significant amount of time and subsequent calls are much faster. But still, 25 ticks is the average.
However, my colleague and I noticed that DateTime.UtcNow takes substantially less time to run -- on average, a mere 0.03 microseconds.
Given that our application will never be running while there is a change in Daylight Savings Time, my suggestion was to create the following class:
public static class FastDateTime {
    public static TimeSpan LocalUtcOffset { get; private set; }

    public static DateTime Now {
        get { return DateTime.UtcNow + LocalUtcOffset; }
    }

    static FastDateTime() {
        LocalUtcOffset = TimeZone.CurrentTimeZone.GetUtcOffset(DateTime.Now);
    }
}
In other words, determine the UTC offset for the local timezone once -- at startup -- and from that point onward leverage the speed of DateTime.UtcNow to get the current time a lot faster via FastDateTime.Now.
I could see this being a problem if the UTC offset changed during the time the application was running (if, for example, the application was running overnight); but as I stated already, in our case, that will not happen.
My colleague has a different idea about how to do it, which is a bit too involved for me to explain here. Ultimately, as far as I can tell, both of our approaches return an accurate result, mine being slightly faster (~0.07 microseconds vs. ~0.21 microseconds).
What I want to know is:
Am I missing something here? Given the abovementioned fact that the application will only run within the time frame of a single date, is FastDateTime.Now safe?
Can anyone else perhaps think of an even faster way of getting the current time?
Could you just use DateTime.UtcNow, and only convert to local time when the data is presented? You've already determined that DateTime.UtcNow is much faster and it will remove any ambiguity around DST.
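A small sketch of that split, computing in UTC on the hot path and converting only when formatting for display:

// Hot path: cheap, unambiguous, DST-proof.
DateTime timestampUtc = DateTime.UtcNow;

// Presentation layer: convert once, only when the value is shown to the user.
string display = timestampUtc.ToLocalTime().ToString("yyyy-MM-dd HH:mm:ss");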
One difference between the result of
DateTime.Now
and
DateTime.UtcNow + LocalUtcOffset
is the value of the Kind property - Local vs Utc respectively. If the resultant DateTime is being passed to a third party library consider returning
DateTime.SpecifyKind(DateTime.UtcNow + LocalUtcOffset, DateTimeKind.Local)
I like your solution.
I made some tests to see how much faster it is compared to regular DateTime.Now
DateTime.UtcNow is 117 times faster than DateTime.Now
Using DateTime.UtcNow is good enough if we are only interested in a duration and not the time itself. If all we need is to calculate the duration of a specific code section (doing duration = End_time - Start_time), then the time zone is not important and DateTime.UtcNow is sufficient.
But if we need the time itself then we need to do
DateTime.UtcNow + LocalUtcOffset
Just adding the time span slows things down a little: according to my tests, we are now only 49 times faster than the regular DateTime.Now.
If we put this calculation in a separate function/class as suggested, then calling the method slows us down even more, and we are only 34 times faster.
But even 34 times faster is a lot!
In short:
Using DateTime.UtcNow is much faster than DateTime.Now.
The only way I found to improve on the suggested class is to use the inline expression DateTime.UtcNow + LocalUtcOffset instead of calling the class method.
BTW, trying to force the compiler to inline the method by using [MethodImpl(MethodImplOptions.AggressiveInlining)] didn't seem to speed things up.
To answer in reverse order:
2) I cannot think of a faster way.
1) It would be worth checking whether there are any framework improvements in the pipeline, like the ones just announced for System.IO.
It's hard to be sure about safety, but it's something that is crying out for a lot of unit tests; daylight saving time comes to mind. The System implementation is obviously very battle-hardened while yours is not.
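As a rough sketch of the kind of sanity check meant here (the 50 ms tolerance is an arbitrary illustrative choice):

// FastDateTime.Now should track DateTime.Now closely as long as the cached offset stays valid.
TimeSpan drift = (DateTime.Now - FastDateTime.Now).Duration();
if (drift > TimeSpan.FromMilliseconds(50))
    throw new Exception("FastDateTime drifted from DateTime.Now by " + drift);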