Optimizing alternatives to DateTime.Now - c#

A colleague and I are going back and forth on this issue and I'm hoping to get some outside opinions as to whether or not my proposed solution is a good idea.
First, a disclaimer: I realize that the notion of "optimizing DateTime.Now" sounds crazy to some of you. I have a couple of pre-emptive defenses:
I sometimes suspect that those people who always say, "Computers are fast; readability always comes before optimization" are often speaking from experience developing applications where performance, though it may be important, is not critical. I'm talking about needing things to happen as close to instantaneously as possible -- like, within nanoseconds (in certain industries, this does matter -- for instance, real-time high-frequency trading).
Even with that in mind, the alternative approach I describe below is, in fact, quite readable. It is not a bizarre hack, just a simple method that works reliably and fast.
We have run tests. DateTime.Now is slow (relatively speaking). The method below is faster.
Now, onto the question itself.
Basically, from tests, we've found that DateTime.Now takes roughly 25 ticks (around 2.5 microseconds) to run. This is averaged out over thousands to millions of calls, of course. It appears that the first call actually takes a significant amount of time and subsequent calls are much faster. But still, 25 ticks is the average.
However, my colleague and I noticed that DateTime.UtcNow takes substantially less time to run -- on average, a mere 0.03 microseconds.
Given that our application will never be running while there is a change in Daylight Savings Time, my suggestion was to create the following class:
public static class FastDateTime {
    public static TimeSpan LocalUtcOffset { get; private set; }

    public static DateTime Now {
        get { return DateTime.UtcNow + LocalUtcOffset; }
    }

    static FastDateTime() {
        LocalUtcOffset = TimeZone.CurrentTimeZone.GetUtcOffset(DateTime.Now);
    }
}
In other words, determine the UTC offset for the local timezone once -- at startup -- and from that point onward leverage the speed of DateTime.UtcNow to get the current time a lot faster via FastDateTime.Now.
I could see this being a problem if the UTC offset changed during the time the application was running (if, for example, the application was running overnight); but as I stated already, in our case, that will not happen.
My colleague has a different idea about how to do it, which is a bit too involved for me to explain here. Ultimately, as far as I can tell, both of our approaches return an accurate result, mine being slightly faster (~0.07 microseconds vs. ~0.21 microseconds).
What I want to know is:
Am I missing something here? Given the abovementioned fact that the application will only run within the time frame of a single date, is FastDateTime.Now safe?
Can anyone else perhaps think of an even faster way of getting the current time?

Could you just use DateTime.UtcNow, and only convert to local time when the data is presented? You've already determined that DateTime.UtcNow is much faster and it will remove any ambiguity around DST.
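For illustration, a minimal sketch of that approach (the variable names here are made up):
// Internally: capture and store timestamps in UTC only.
DateTime capturedUtc = DateTime.UtcNow;

// Only at the presentation boundary convert for display.
DateTime displayed = capturedUtc.ToLocalTime();
Console.WriteLine(displayed.ToString("o"));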

One difference between the result of
DateTime.Now
and
DateTime.UtcNow + LocalUtcOffset
is the value of the Kind property - Local vs Utc respectively. If the resultant DateTime is being passed to a third party library consider returning
DateTime.SpecifyKind(DateTime.UtcNow + LocalUtcOffset, DateTimeKind.Local)
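Applied to the class from the question, the getter could look like this sketch (it reuses the LocalUtcOffset property defined there):
public static DateTime Now {
    get {
        // Stamp the Kind as Local so downstream consumers interpret the value correctly.
        return DateTime.SpecifyKind(DateTime.UtcNow + LocalUtcOffset, DateTimeKind.Local);
    }
}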

I like your solution.
I ran some tests to see how much faster it is compared to the regular DateTime.Now:
DateTime.UtcNow is 117 times faster than DateTime.Now.
Using DateTime.UtcNow is good enough if we are only interested in a duration and not the time itself. If all we need is to calculate the duration of a specific code section (doing duration = End_time - Start_time), then the time zone is not important and DateTime.UtcNow is sufficient.
But if we need the time itself then we need to do
DateTime.UtcNow + LocalUtcOffset
Just adding the time span slows things down a little bit, and now, according to my tests, we are only 49 times faster than the regular DateTime.Now.
If we put this calculation in a separate function/class as suggested then calling the method slows us down even more
and we are only 34 times faster.
But even 34 times faster is a lot!
In short:
Using DateTime.UtcNow is much faster than DateTime.Now.
The only way I found to improve the suggested class is to use
inline code: DateTime.UtcNow + LocalUtcOffset
instead of calling the class method
BTW, trying to force inlining with [MethodImpl(MethodImplOptions.AggressiveInlining)]
didn't seem to speed things up.
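For reference, a minimal sketch of the kind of micro-benchmark behind these numbers (the iteration count and warm-up are arbitrary choices):
using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        const int iterations = 10000000;
        DateTime sink = default(DateTime);

        sink = DateTime.Now;     // warm up: the first call is the slow one
        sink = DateTime.UtcNow;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) sink = DateTime.Now;
        Console.WriteLine("DateTime.Now:    " + sw.Elapsed);

        sw.Restart();
        for (int i = 0; i < iterations; i++) sink = DateTime.UtcNow;
        Console.WriteLine("DateTime.UtcNow: " + sw.Elapsed);

        Console.WriteLine(sink); // keep the assignments from being optimized away
    }
}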

To answer in reverse order:
2) I cannot think of a faster way.
1) It would be worth checking if there are any framework improvements in the pipeline like they have just announced for System.IO
It's hard to be sure about safety, but it's something that is crying out for a lot of unit tests; daylight saving time comes to mind. The System implementation is obviously very battle-hardened, while yours is not.
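As one example of such a test, a minimal sanity check (the tolerance is arbitrary, and FastDateTime is the class from the question):
// Should hold as long as the UTC offset has not changed since FastDateTime's static constructor ran.
TimeSpan drift = FastDateTime.Now - DateTime.Now;
if (Math.Abs(drift.TotalMilliseconds) > 50)
    throw new Exception("FastDateTime drifted by " + drift.TotalMilliseconds + " ms");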

Get milliseconds passed

I just need a stable count of the current program's progression in milliseconds in C#. I don't care what timestamp it's based on, whether it's when the program starts, midnight, or the epoch; I just need a single function that returns a stable millisecond value that does nothing abnormal besides increasing by 1 each millisecond. You'd be surprised how few comprehensive and simple answers I could find by searching.
When your program starts, create a Stopwatch (in System.Diagnostics) and Start() it.
private Stopwatch sw = new Stopwatch();

public void StartMethod()
{
    sw.Start();
}
At any point you can query the Stopwatch:
public void SomeMethod()
{
    var a = sw.ElapsedMilliseconds;
}
If you want something accurate/precise then you need to use a Stopwatch, and please read Eric Lippert's blog post Precision and accuracy of DateTime (he was formerly a principal developer on the C# compiler team).
Excerpt:
Now, the question “how much time has elapsed from start to finish?” is a completely different question than “what time is it right now?” If the question you want to ask is about how long some operation took, and you want a high-precision, high-accuracy answer, then use the StopWatch class. It really does have nanosecond precision and accuracy that is close to its precision.
If you don't need an accurate time, and you don't care about precision and the possibility of edge-cases that cause your milliseconds to actually be negative then use DateTime.
Do you mean DateTime.Now? It holds absolute time, and subtracting two DateTime instances gives you a TimeSpan object which has a TotalMilliseconds property.
You could store the current time in milliseconds when the program starts, then in your function get the current time again and subtract.
edit:
If what you're going for is a stable count of process cycles, I would use processor clocks instead of time.
As per your comment, you can use DateTime.Ticks; one tick is 1/10,000 of a millisecond (100 ns).
Also, if you want to do the time thing, you can store a DateTime.Now when your program starts and take another DateTime.Now whenever you want the time; subtracting the two gives a TimeSpan with a TotalMilliseconds property.
Either way, DateTime is what you're looking for.
It sounds like you are just trying to get the current date and time, in milliseconds. If you are just trying to get the current time, in milliseconds, try this:
long milliseconds = DateTime.Now.Ticks / TimeSpan.TicksPerMillisecond;

Putting a thread to sleep for decimal value

This question is about System.Threading.Thread.Sleep(int). I know there is no method for a decimal value, but I really need to work with decimals.
I have a device which takes 20.37 milliseconds to turn by 1 degree. So, I need to put the code to sleep for an appropriate multiplication of 20.37 (2 degrees = 20.37*2 etc). Since the thread class got no decimal sleep method, how can I do this?
That does not work that way. Sleep guarantees that the thread sits idle for at least x time, but not that it won't stay idle for longer. The end of the sleep period means that the thread is available for the scheduler to run it, but the scheduler may choose to run other threads/processes at that moment.
Get the initial instant, find the final instant, and calculate the current turn from the time that has passed. Also, do not forget to check how precise the time functions are.
Real-time programming has enough particularities of its own that you should seek out more info on the topic before trying to get something to work. It can be pretty extensive (multiprocessing OS vs. monoprocessing, priorities, etc.).
Right, as pointed out in the comments, Thread.Sleep isn't 100% accurate. However, you can get it to (in theory) wait for 20.37 milliseconds by converting the milliseconds to ticks, then constructing a TimeSpan and calling the method with it, as follows:
Thread.Sleep(new TimeSpan(203700))
//203700 is 20.37 * TimeSpan.TicksPerMillisecond (which is 10,000)
Again, this is probably not going to be 100% accurate (as Thread.Sleep only guarantees for AT LEAST that amount of time). But if that's accurate enough, it'll be fine.
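As a usage sketch for the device in the question (20.37 ms per degree), the multiplication can be done in ticks before constructing the TimeSpan (SleepForDegrees is a made-up helper):
static void SleepForDegrees(int degrees)
{
    const double msPerDegree = 20.37;
    long ticks = (long)(degrees * msPerDegree * TimeSpan.TicksPerMillisecond);
    System.Threading.Thread.Sleep(TimeSpan.FromTicks(ticks));
}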
You can simply divide the integer - I just figured that out.
I needed less than a millisecond of sleep time, so I just divided that time by an integer; you can either define a constant or just type in:
System.Threading.Thread.Sleep(time / 100);
Or whatever number you want.
Alternatively, as mentioned, you can do it like:
int thisIsTheNumberYouDivideTheTimeBy = 100;
Thread.Sleep(time / thisIsTheNumberYouDivideTheTimeBy);
It's actually quite simple. Hope that helped.
By the way, instead of
System.Threading.Thread.Sleep(x);
you can just type
Thread.Sleep(x);
as long as you have written
using System.Threading;
in the beginning.
I had the same problem. As a workaround, I substitute the float value but convert it to an int when passing it in. The code rounds it off for me and the thread sleeps for that long. As I said, it's a workaround and I'm just saying, not that it's accurate.
You can use a little bit of math as a workaround.
Let's assume that you don't want to be extremely precise,
but still need float-precise sleep on average.
Thread.Sleep(new Random().Next(20, 22)); // Next's upper bound is exclusive, so this yields 20 or 21
This should give you ~20.5 ms sleep timing on average. Use your imagination now.
TotalSleeps / tries should approach the wanted value, but for a single sleep interval this will not be true.
Don't use new Random() every time; create one instance beforehand.
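A minimal sketch of that dithering idea with a single shared Random: sleep either the floor or the ceiling of the target, choosing the ceiling with a probability equal to the fractional part, so the long-run average approaches the fractional value (20.37 ms in the question):
using System;
using System.Threading;

static class DitheredSleep
{
    private static readonly Random Rng = new Random(); // one instance, as noted above

    public static void SleepAbout(double milliseconds)
    {
        int floor = (int)Math.Floor(milliseconds);
        double fraction = milliseconds - floor;
        // e.g. for 20.37: sleep 21 ms with probability 0.37, otherwise 20 ms
        Thread.Sleep(Rng.NextDouble() < fraction ? floor + 1 : floor);
    }
}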

What would be the best way to synchronize my application's time with outside server's time?

I was thinking of changing the system's local time to the server's time and then using it, but I bet there are other ways to do this. I've been trying to find something like a clock in C#, but couldn't find anything. I'm receiving the server's time in a DateTime format.
edit:
I need my application, while it is working, to use the same time the server does. I just want to get the server's time once and, after that, make my application work in a while loop using the time I've obtained from the server. There might be a difference between my system's time and the server's time (even 5 seconds) and that's why I want to do this.
It's not entirely clear what you mean, but you could certainly create your own IClock interface which you'd use everywhere in code, and then write an implementation of that which is regularly synchronized with your server (or with NTP).
My Noda Time project already uses the idea of an injectable clock - not for synchronization purposes, but for testability. (A time service is basically a dependency.) Basically the idea is workable :) You may well not find anything which already does this, but it shouldn't be too hard to write. You'll want to think about how to adjust time though - for example, if the server time gets ahead of your "last server time + local time measurements" you may want to slew it gradually rather than having a discrete jump.
This is always assuming you do want it to be local to your application, of course. Another alternative (which may well not be appropriate, depending on your context) is to require that the host runs a time synchronization client (I believe Windows does by default these days) and simply start failing if the difference between your server and the client gets too large. (It's never going to be exactly in sync anyway, or at least not for long - you'll need to allow for some leeway.)
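A minimal sketch of that injectable-clock idea (IClock and ServerSyncedClock are made-up names, not Noda Time types):
using System;

public interface IClock
{
    DateTime UtcNow { get; }
}

// Keeps a fixed offset between the local UTC clock and the server's,
// re-measured whenever Synchronize is called with a fresh server timestamp.
public sealed class ServerSyncedClock : IClock
{
    private TimeSpan _offset = TimeSpan.Zero;

    public void Synchronize(DateTime serverUtcTime)
    {
        _offset = serverUtcTime - DateTime.UtcNow;
    }

    public DateTime UtcNow
    {
        get { return DateTime.UtcNow + _offset; }
    }
}
Note this sketch jumps discretely on every Synchronize call; slewing gradually, as suggested above, needs a bit more bookkeeping.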
The answer @JonSkeet provided to sync the times looks good; I just wanted to point out some things.
As @Alexei already said, users require admin privileges to be able to change their local time (in Windows at least), but there may also be other issues that can cause the time to be out of sync (bad internet connection, hacks, etc.). This means there is no guarantee that the client time is indeed the same as the server time, so you will at least need to check the time the request was received server-side anyway. Plus there might also be a usability issue at hand here: would I want an application to be able to change the time of my own local machine? Hell no.
To sum things up:
Check the time of the request serverside at least
Don't change the time of the client machine but show some kind of indicator in your application
How to handle the indicator in your application can be done in various ways.
Show a clock in your application (your initial idea) that is periodically synched with the server
Show some kind of countdown ("you can submit after x seconds.."), push a resetCountdown request to the clients when a request is received.
Enable a 'send button' or what ever you have, this would work kind of similar to the countdown.
Just remember, it's nearly impossible to validate a request such as this client-side, so you have to build in some checks server-side!
I actually wanted to write a comment but it got kind of long.. :)
Okay, a bit of necromancy as this is 6 years old, but I had to deal with a similar problem for a network game.
I employed a technique I refer to as "marco-polo", for reasons that will be obvious soon. It requires the two clocks to be able to exchange messages, and its accuracy is dependent on how fast they can do that.
Disclaimer: I am fairly certain I am not the first to do this, and that this is the most rudimentary way to synchronize two clocks. Still I didn't find a documented way of doing so.
At Clock B (The clock we're trying to synchronize) we do the following ::
// Log the timestamp
localTime_Marco_Send = DateTime.UtcNow;
// Send that to clock A
SendSyncRequest();
// Wait for an answer
Sleep(..);
At Clock A (the reference clock) we have the following handler ::
// This is triggered by SendSyncRequest
OnReceiveSyncRequest()
{
    // We received "Marco" - Send "Polo"
    SendSyncReply(DateTime.UtcNow);
}
And back at Clock B ::
// This is triggered by SendSyncReply
OnReceiveSyncReply(DateTime remoteHalfTime)
{
    // Log the time we received it
    DateTime localTime_Polo_Receive = DateTime.UtcNow;

    // The remote time is somewhere between the two local times
    // On average, it will be in the middle of the two
    // (halving via Ticks so this also compiles on frameworks without TimeSpan division)
    DateTime localHalfTime = localTime_Marco_Send +
        TimeSpan.FromTicks((localTime_Polo_Receive - localTime_Marco_Send).Ticks / 2);

    // As a result, the estimated dT from A to B is
    TimeSpan estimatedDT_A_B = localHalfTime - remoteHalfTime;
}
As a result we now have access to a nifty TimeSpan we can subtract from our current local time to estimate the remote time
DateTime estimatedRemoteTime = DateTime.UtcNow - estimatedDT_A_B;
The accuracy of this estimate is subject to the round-trip time of send-receive-send-receive, and you should also account for clock drift (so you should be doing this more than once):
Round-trip time. If the exchange were instant, you'd have the exact dT. If it takes 1 second to go and return, you don't know whether the delay was on the sending or the receiving leg. As a result, your error is 0 < e < RTT, and on average it will be RTT/2. If you know that the send (or receive) leg takes longer than the other, use that to your advantage: the time you received is not the half-time but is shifted relative to how long each leg takes.
Clock drift. CPU clocks drift, maybe 1 s per day, so poll again periodically, before drift starts to play an important role (a minimal repeat-and-average sketch follows).
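A minimal repeat-and-average sketch, assuming a made-up helper EstimateOffsetOnce() that performs one marco-polo round trip as above and returns estimatedDT_A_B:
TimeSpan EstimateOffset(int samples)
{
    long totalTicks = 0;
    for (int i = 0; i < samples; i++)
    {
        totalTicks += EstimateOffsetOnce().Ticks; // one full marco-polo exchange
    }
    // Averaging smooths out round-trip-time jitter; re-run the whole
    // procedure periodically to compensate for clock drift.
    return TimeSpan.FromTicks(totalTicks / samples);
}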
Your server should always save the time in UTC mode.
You save time in UTC like this in the server:
DateTime utcTime = new DateTime(0, DateTimeKind.Utc);
or:
DateTime utcTimeNow = DateTime.UtcNow;
In the client, when you get the time, which is stored in UTC, you can convert it to local time like this:
public DateTime ToLocalTime(DateTime utcTime)
{
    // Assumes that even if utcTime.Kind is not properly defined, it is indeed UTC time
    DateTime serverTime = new DateTime(utcTime.Ticks, DateTimeKind.Utc);
    return TimeZoneInfo.ConvertTimeFromUtc(serverTime, m_localTimeZone);
}
If you want to change your local time zone, here is a code example of how to read the time zone to use from config:
string localTimeZoneId = sysParamsHelper.ReadString(LOCAL_TIME_ZONE_ID_KEY, LOCAL_TIME_ZONE_DEFAULT_ID);
ReadOnlyCollection<TimeZoneInfo> timeZones = TimeZoneInfo.GetSystemTimeZones();
foreach (TimeZoneInfo timeZoneInfo in timeZones)
{
    if (timeZoneInfo.Id.Equals(localTimeZoneId))
    {
        m_localTimeZone = timeZoneInfo;
        break;
    }
}

if (m_localTimeZone == null)
{
    m_logger.Error(LogTopicEnum.AMR, "Could not find time zone with id: " + localTimeZoneId + " . will use default time zone (UTC).");
    m_localTimeZone = TimeZoneInfo.Utc;
}

Why is the data type of System.Timers.Timer.Interval a double?

This is a bit of an academic question as I'm struggling with the thinking behind Microsoft using double as the data type for the Interval property!
Firstly, from MSDN: Interval is the time, in milliseconds, between Elapsed events. I would interpret that to be a discrete number, so why the use of a double? Surely int or long makes greater sense!?
Can Interval support values like 5.768585 (5.768585 ms)? Especially when one considers System.Timers.Timer to have nowhere near sub millisecond accuracy... Most accurate timer in .NET?
Seems a bit daft to me.. Maybe I'm missing something!
Disassembling shows that the interval is consumed via a call to (int)Math.Ceiling(this.interval) so even if you were to specify a real number, it would be turned into an int before use. This happens in a method called UpdateTimer.
Why? No idea, perhaps the spec said that double was required at one point and that changed? The end result is that double is not strictly required, because it is eventually converted to an int and cannot be larger than Int32.MaxValue according to the docs anyway.
Yes, the timer can "support" real numbers, it just doesn't tell you that it silently changed them. You can initialise and run the timer with 100.5d, it turns it into 101.
And yes, it is all a bit daft: 4 wasted bytes, potential implicit casting, conversion calls, explicit casting, all needless if they'd just used int.
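A small sketch of the behavior described above (the rounding itself happens inside the non-public UpdateTimer method, so the last line only mirrors what the disassembly shows):
var timer = new System.Timers.Timer();
timer.Interval = 100.5;                               // the property stores the double as-is
Console.WriteLine(timer.Interval);                    // 100.5
Console.WriteLine((int)Math.Ceiling(timer.Interval)); // 101 - the effective period in ms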
The reason to use a double here is the attempt to provide enough accuracy.
In detail: the system's interrupt time slices are given by ActualResolution, which is returned by NtQueryTimerResolution(). NtQueryTimerResolution is exported by the native Windows NT library NTDLL.DLL. The system time increments are given by TimeIncrement, which is returned by GetSystemTimeAdjustment().
These two values determine the behavior of the system timers. They are integer values and they express 100 ns units. However, this is already insufficient for certain hardware today. On some systems ActualResolution is returned as 9766, which would correspond to 0.9766 ms. But in fact these systems are operating at 1024 interrupts per second (tuned by proper setting of the multimedia interface). 1024 interrupts a second give an interrupt period of 0.9765625 ms. This is too fine-grained; it reaches into the 100 ps regime and therefore cannot be held in the standard ActualResolution format.
Therefore it was decided to put such time parameters into a double. But: this does not mean that all of the possible values are supported/used. The granularity given by TimeIncrement will persist, no matter what.
When dealing with timers it is always advisable to look at the granularity of the parameters involved.
So back to your question: Can Interval support values like 5.768585 (ms) ?
No, the system I've taken as an example above cannot.
But it can support 5.859375 (ms)!
Other systems with different hardware may support other numbers.
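As a minimal illustration of the arithmetic on that example system (1024 interrupts per second):
double increment = 1000.0 / 1024.0;        // 0.9765625 ms per interrupt
Console.WriteLine(increment * 6);          // 5.859375  -> lands exactly on the grid
Console.WriteLine(5.768585 / increment);   // ~5.907    -> not an integer multiple, so not representable here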
So the idea of introducing a double here is not such a stupid idea and actually makes sense. Spending another 4 bytes to get things finally right is a good investment.
I've summarized some more details about Windows time matters here.

Tracking Time Spent in Debugger

[ [ EDIT 2x ] I think I have worded my original question wrong, so I have scooted it down below and rewrote exactly what I am trying to get at, for future readers. ]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[ New, Shiny, Clear Question with Better Wording ]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I have a loop that is running for a simulation / gaming framework. This loop has several places in it where it needs to ascertain how much time - in reality - has passed, so that the logic within these special places - specifically, Rendering and Updating - can work correctly. It has the option of being a Fixed Time Step (unfixed[update/render] is false) or not.
The problem arises when Breakpoint-based Debugging is done in any point in the application, since it uses a stopwatch to figure out how much realtime has passed (for the purpose of physics and animation moving at a realistic speed, and not based on how many frames the computer can churn out).
It looks (roughly) like this, using multiple stopwatches for each 'part' of the application loop that needs to know how much time has passed since that 'part' last occurred:
while ( RunningTheSimulation ) {

    /* ... Events and other fun awesome stuff */

    TimeSpan updatedifference = new TimeSpan( updatestopwatch.ElapsedTicks );

    if ( unfixedupdate || updatedifference > updateinterval ) {
        Time = new GameTime( updatedifference,
            new TimeSpan( gamestopwatch.ElapsedTicks ) );
        Update( Time );
        ++updatecount;
        updatestopwatch.Reset( );
        updatestopwatch.Start( );
    }

    TimeSpan renderdifference = new TimeSpan( renderstopwatch.ElapsedTicks );

    if ( unfixedrender || renderdifference > renderinterval ) {
        Time = new GameTime( renderdifference,
            new TimeSpan( gamestopwatch.ElapsedTicks ) );
        Render( Time );
        ++rendercount;
        renderstopwatch.Reset( );
        renderstopwatch.Start( );
    }
}
Some info about the variables:
updatestopwatch is a Stopwatch for the time spent outside of the Update() function,
renderstopwatch is a Stopwatch for the time spent outside the Render() function, and
gamestopwatch is a Stopwatch for the total elapsed time of the simulation/game itself.
The problem arises when I debug anywhere in the application. Because the stopwatches are measuring realtime, the Simulation will be completely thrown off by any kind of Breakpoint-based debugging because the Stopwatches will keep counting time, whether or not I'm debugging the application. I am not using the Stopwatches to measure performance: I am using them to keep track of time between re-occurrences of Update, Render, and other events like the ones illustrated above. This gets extremely frustrating when I breakpoint and analyze and fix an error in Update(), but then the Render() time is so completely off that any display of the results of the simulation is vengefully kicked in the face.
That said, when I stop debugging entirely it's obviously not a problem, but I have a lot of development work to do and I'm going to be debugging for a long time, so just pretending that this isn't inconvenient won't work, unfortunately. =[
I looked at Performance Counters, but I can't seem to wrap my head around how to get them to work in the context of what I'm trying to do: Render and Update can contain any amount of arbitrary code (they're specific to whatever simulation is running on top of this while loop), which means I can't steadily do a PerformanceCounter.Increment() for the individual components of the loop.
I'll keep poking around System.Diagnostics and other .NET namespaces, but so far I've turned up blanks on how to just "ignore" the time spent in the attached Debugger...
Anyone have any ideas or insight?
[ [ EDITS 5x ] Corrected misspellings and made sure the formatting on everything was correct. Sorry about that. ]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[ Original, less-clear question ]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I have a constant loop running in a C# application, which I debug all the time. I am currently using a Stopwatch, but I could use any other mechanism to track the passing time. My problem begins when I do things like use Breakpoints somewhere during this loop (enclosed in a typical while (true) { ... }:
if ( unfixedupdate || updatedifference > updateinterval )
{
    Time = new GameTime( updatedifference,
        new TimeSpan( gamestopwatch.ElapsedTicks ) );
    Update( Time );
    ++updatecount;
    updatestopwatch.Reset( );
    updatestopwatch.Start( );
}
The time measures itself fine, but it measures actual real time - including any time I spent debugging. Which means if I'm poking around for 17 seconds after updatestopwatch.Reset(), this gets compounded onto my loop and - for at least 1 rerun of that loop - I have to deal with the extra time I spent in real time factoring into all of my calculations.
Is there any way I can hook into the debugger to know when its freezing the application, so I can counter-measure the time and subtract it accordingly? As tagged, I'm using .NET and C# for this, but anything related to Visual Studio as well might help get me going in the right direction.
[ EDIT ]
To provide more information, I am using several stopwatches (for update, rendering, and a few other events all in a different message queue). If I set a breakpoint inside Update(), or in any other part of the application, the stopwatches will accurately measure the Real Time spent between these. This includes time I spend debugging various completely unrelated components of my application which are called downstream of Update() or Render() or Input() etc. Obviously the simulation's Timing (controlled by the GameTime parameter passed into the toplevel Update, Render, etc. functions) won't work properly if, even if the CPU only took 13 ms to finish the update function, I spend 13 extra seconds debugging (looking at variables and then Continue with the simulation); the problem being that I will see the other stopwatches suddenly accounting for 13 extra seconds of time. If it still doesn't make sense, I'll chime in again.
Use performance counters instead. The process CPU time should give a good indicator (but not as accurate as a realtime stopwatch) and should not interfere with debugging.
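One way to read that suggestion, as a rough sketch: use the process's accumulated CPU time, which does not advance while the debugger has all threads stopped at a breakpoint (the caveat being that it also excludes time spent blocked or sleeping). The class name here is made up:
using System;
using System.Diagnostics;

class CpuTimeClock
{
    private readonly Process _process = Process.GetCurrentProcess();
    private TimeSpan _last;

    public CpuTimeClock()
    {
        _last = _process.TotalProcessorTime;
    }

    // CPU time consumed by this process since the previous call.
    public TimeSpan ElapsedSinceLastCall()
    {
        _process.Refresh();                       // re-read the process counters
        TimeSpan now = _process.TotalProcessorTime;
        TimeSpan delta = now - _last;
        _last = now;
        return delta;
    }
}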
A possible solution to this would be to write your locals (or whatever data you are trying to debug) to the output using Debug.WriteLine().
You failed to explain what you are actually trying to debug and why animation timers doing exactly what they are expected to do with elapsed time is a problem. Know that when you break, time continues on. What is the actual problem?
Also, keep in mind that timings while debugging are not going to be anywhere near the measurement when running in release with optimizations turned on. If you'd like to measure time frame to frame, use a commercial profiling tool. Finding how long a method or function took is exactly what they were made for.
If you'd like to debug whether or not your animation works correctly, create a deterministic test where you supply the time rather than depending on the wall clock, using dependency injection and a time provider interface.
This article has a great example:
https://www.toptal.com/qa/how-to-write-testable-code-and-why-it-matters
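A minimal sketch of that injected-time-provider idea (ITimeProvider and both implementations are made-up names, not from the linked article):
using System;
using System.Diagnostics;

public interface ITimeProvider
{
    TimeSpan Elapsed { get; }
}

// Production: real wall-clock progression.
public sealed class StopwatchTimeProvider : ITimeProvider
{
    private readonly Stopwatch _sw = Stopwatch.StartNew();
    public TimeSpan Elapsed { get { return _sw.Elapsed; } }
}

// Tests: the test decides exactly how much time "passes" between frames,
// so breakpoints and debugging sessions have no effect on the simulation.
public sealed class ManualTimeProvider : ITimeProvider
{
    private TimeSpan _elapsed = TimeSpan.Zero;
    public TimeSpan Elapsed { get { return _elapsed; } }
    public void Advance(TimeSpan delta) { _elapsed += delta; }
}
The game loop would then read time from the injected ITimeProvider instead of its Stopwatch fields, and a test (or a debugging session) can swap in ManualTimeProvider.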
