According to MSDN, DateTime precision is 10 ms, so the precision of t2 - t1 in the example below is also 10 ms. However, the returned value is a double, which is confusing.
DateTime t1 = DateTime.Now; // precision is 10 ms
....
DateTime t2 = DateTime.Now; // precision is 10 ms
... (t2-t1).TotalMilliseconds; // double (so precision is less than 1 ms???)
I would expect an int value, because a double doesn't make sense when the precision is 10 ms. I need to use the resulting value in Thread.Sleep(). Should I just cast it to int?
The precision of DateTime itself is down to the tick.
The granularity of DateTime.Now is typically 10 or 15ms - it's the granularity of the system clock. (That doesn't mean the clock is accurate to the nearest 10 or 15ms, mind you.) The subtraction operator on DateTime shouldn't know or care about that though - the result is just a TimeSpan which again has a precision to the tick level.
Just casting to int should be absolutely fine.
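For example, a minimal sketch (Thread.Sleep lives in System.Threading; the work between the two readings is whatever you're timing):
DateTime t1 = DateTime.Now;
// ... do the work being timed ...
DateTime t2 = DateTime.Now;
// The subtraction yields a TimeSpan; truncating TotalMilliseconds to int
// is more than enough precision for Thread.Sleep.
Thread.Sleep((int)(t2 - t1).TotalMilliseconds);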
(You might want to read Eric Lippert's blog post on this, by the way.)
Related
I have a small problem where, if I save a DateTime field as an SQL command parameter, it loses precision, often by less than a millisecond.
e.g. The parameter's Value is:
TimeOfDay {16:59:35.4002017}
But its SqlValue is:
TimeOfDay {16:59:35.4000000}
And that's the time that's saved in the database.
Now, I'm not particularly bothered about a couple of microseconds, but it causes problems later on when I'm comparing values: they show up as not equal.
(Also, in some comparisons the type of the field is not known until run-time, so I'm not even sure at dev-time whether I'll even need special DateTime "rounding" logic.)
Is there any easy fix for this when adding the parameter?
You're using DateTime, which is documented with:
Accuracy: Rounded to increments of .000, .003, or .007 seconds
It sounds like you want DateTime2:
Precision, scale: 0 to 7 digits, with an accuracy of 100ns. The default precision is 7 digits.
That 100ns precision is the same as .NET's DateTime (1 tick = 100ns).
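For example, a hedged sketch of switching the parameter to datetime2 (assuming a SqlCommand named cmd; the parameter name and value variable are illustrative):
// Declaring the parameter as SqlDbType.DateTime2 keeps the full tick precision
// instead of rounding to datetime's .000/.003/.007-second increments.
var p = cmd.Parameters.Add("@TimeOfDay", SqlDbType.DateTime2);
p.Value = timeOfDay; // the DateTime value being saved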
Or just live with the difference and write methods to round DateTime before comparing - that may end up being simpler.
Try using datetime2; it has better precision.
For a row key on Azure Table Storage entities, the following prefix is used:
DateTime.MaxValue.Subtract(DateTime.UtcNow).TotalMilliseconds
As far as I know, this timestamp should act as a kind of "sorter" so that newer entities are at the top of a list. So, as I understand it, this line of code produces the number of milliseconds from the current date/time until DateTime.MaxValue.
Is there a simple and safe way to convert this number of milliseconds "back" to the date/time when the timestamp was created? I'm not so familiar with date/time conversions...
The DateTime.MaxValue is:
equivalent to 23:59:59.9999999 UTC, December 31, 9999 in the
Gregorian calendar, exactly one 100-nanosecond tick before 00:00:00
UTC, January 1, 10000.
Thus, considering roughly 10,000 years, you have:
10,000 x 365 x 24 x 60 x 60 x 1000 = 315,360,000,000,000 // note: 15 digits
And a double holds at least 15 significant digits. In other words, as long as you use the first 15 digits of your TotalMilliseconds as the timestamp, it should be fine.
I recommend casting it to long, whose range is:
-9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 // note: more than 15 digits
And then use ToString("D15") as a unique timestamp:
long val = (long)DateTime.MaxValue.Subtract(DateTime.UtcNow).TotalMilliseconds;
string timestamp = val.ToString("D15");
And to convert back, you can cast it back to double and use AddMilliseconds with a negative value on DateTime.MaxValue:
double db = Convert.ToDouble(timestamp);
// AddMilliseconds returns a new DateTime (DateTime is immutable), so assign the result
DateTime dt = DateTime.MaxValue.AddMilliseconds(-db); // gives you the date/time back with millisecond precision
Then you will get the original date/time back, with millisecond precision.
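Putting the two halves together, a minimal round-trip sketch (variable names are just illustrative):
long val = (long)DateTime.MaxValue.Subtract(DateTime.UtcNow).TotalMilliseconds;
string rowKey = val.ToString("D15");
// ... later, to recover roughly when the row key was created ...
DateTime created = DateTime.MaxValue.AddMilliseconds(-Convert.ToDouble(rowKey));
// 'created' matches the original DateTime.UtcNow to within a millisecond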
I wrote a small WinForms program for transferring data in Visual Studio, and I used a method to measure the duration of the transfer. After the transfer is done, the program shows a dialog window with the elapsed time.
But I don't know what the precision or the resolution of the timer is. How can it have such precision, even microseconds?
var startTime = DateTime.Now;
this.transferdata();
var endTime = DateTime.Now;
var timeElapsed = endTime.Subtract(startTime);
When I looked at the definition of the DateTime class, there is only millisecond precision. Can anybody tell me why there is such a high-resolution timer in Visual Studio 2012? Or is it related to the operating system?
The precision of the clock depends on the operating system. The system clock ticks a certain number of times per second, and you can only measure whole ticks.
You can test the resolution for a specific computer using code like this:
DateTime t1 = DateTime.Now;
DateTime t2;
while ((t2 = DateTime.Now) == t1) ;
Console.WriteLine(t2 - t1);
On my computer the result is 00:00:00.0156253, which means that the system clock ticks 64 times per second.
(Note: The DateTime type also has ticks, but that is not the same as the system clock ticks. A DateTime tick is 1/10000000 second.)
To measure time more precisely, you should use the Stopwatch class. Its resolution also depends on the system, but is much higher than the system clock. You can get the resolution from the Stopwatch.Frequency property, and on my computer it returns 2143566 which is a tad more than 64...
Start a stopwatch before the work and stop it after, then get the elapsed time:
Stopwatch time = Stopwatch.StartNew();
this.transferdata();
time.Stop();
TimeSpan timeElapsed = time.Elapsed;
That will return the time at the resolution that the TimeSpan type can handle, i.e. 1/10000000 of a second. You can also calculate the time from the number of ticks:
double secondsElapsed = (double)time.ElapsedTicks / (double)Stopwatch.Frequency;
You are confusing several things. Precision, Accuracy, Frequency, and Resolution.
You might have a variable that is precise to a billion decimal places. But if you can't actually measure a number that small, that's the difference between precision and resolution. Frequency is the number of times per second a measurement is taken, which relates to resolution. Accuracy is how close a given sample is to the real measurement.
So, given that DateTime has a precision much higher than the system clock, simply using DateTime.Now will not necessarily give you an exact timestamp. There are, however, higher-resolution timers in Windows, and the Stopwatch class uses them to measure elapsed time, so if you use this class you get much better accuracy.
DateTime has no "default precision". It has only one precision, and that's bounded by the minimum and maximum values it can store. DateTime internally stores its values as a single value, and this value is formatted to whatever unit you want to display (seconds, minutes, days, ticks, whatever...).
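As a quick illustration (a minimal sketch; Stopwatch is in System.Diagnostics and the printed values vary per machine):
// IsHighResolution tells you whether the high-resolution performance counter
// is being used; Frequency is its tick rate in ticks per second.
Console.WriteLine(Stopwatch.IsHighResolution);
Console.WriteLine(Stopwatch.Frequency);

Stopwatch sw = Stopwatch.StartNew();
// ... work to measure ...
sw.Stop();
Console.WriteLine(sw.Elapsed.TotalMilliseconds);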
I am working with .NET, but I need to communicate with a Unix-based logging service that expects seconds and microseconds since the Unix epoch. The seconds are easily retrievable by doing something like:
DateTime UnixEpoch = new DateTime(1970, 1, 1);
TimeSpan time = DateTime.UtcNow - UnixEpoch;
int seconds = (int)time.TotalSeconds;
However, I am unsure how to calculate the microseconds. I could use the TotalMilliseconds property and convert it to microseconds, but I believe that defeats the purpose of using microseconds as a precise measurement. I have looked into using the Stopwatch class, but it doesn't seem like I can seed it with a time (the Unix epoch, for example).
Thanks.
Use the Ticks property to get the most fine-grained level of detail. A tick is 100ns, so divide by 10 to get to microseconds.
However, that talks about the representation precision - it doesn't talk about the accuracy at all. Given the coarse granularity of DateTime.UtcNow I wouldn't expect it to be particularly useful. See Eric Lippert's blog post about the difference between precision and accuracy for more information.
You may want to start a stopwatch at a known time, and basically add its time to the "start point". Note that "ticks" from Stopwatch doesn't mean the same as TimeSpan.Ticks.
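For example, a minimal sketch of the Ticks approach (names are illustrative; the accuracy caveat above still applies):
DateTime unixEpoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
TimeSpan sinceEpoch = DateTime.UtcNow - unixEpoch;
long seconds = (long)sinceEpoch.TotalSeconds;
// A tick is 100 ns, so dividing the leftover ticks by 10 gives microseconds.
long microseconds = (sinceEpoch.Ticks % TimeSpan.TicksPerSecond) / 10;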
From Epoch Converter:
long epoch = (DateTime.Now.ToUniversalTime().Ticks - 621355968000000000) / 10000000;
This will truncate it because of integer division, so make it 10000000.0 to maintain the sub-second portion, or just don't do the division.
It's important to note that the .Ticks property you get is relative to Jan 1, 0001 and not Jan 1, 1970 like in UNIX which is why you need to subtract that offset above.
Edit: just for clarity, that nasty constant is just the number of ticks between Jan 1, 0001 and Jan 1, 1970 in UTC. If you take the seconds portion of it (62135596800) and divide by (365 * 24 * 60 * 60), you get a number close to 1970, which is of course not exactly 1970 due to leap adjustments.
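You can verify the constant directly:
// Ticks from Jan 1, 0001 to the Unix epoch; prints 621355968000000000
Console.WriteLine(new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc).Ticks);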
DateTime.Ticks represent units of 100 nanoseconds since Jan. 1, 0001.
On modern CPUs with dynamically varying clock rate, it's very difficult (or expensive) to get accurate sub-millisecond timing.
I'd suggest that you just multiply the Milliseconds property by 1000.
Use Ticks like Jon suggests, but I think your accuracy is still going to be in the neighborhood of 1ms.
What is a good data type for storing hours in .NET?
Is it better to use the decimal type, or is the double data type more appropriate? By hours I mean values such as:
2 for two hours
1.5 for 90 minutes
8.25 for 8 hours and 15 minutes.
A good way to represent a number of hours is to use a TimeSpan:
TimeSpan hours = TimeSpan.FromHours(2);
Given the choice between decimal or double I'd probably go for double as there is typically no expectation that the amount of time is represented exactly. If you need an exact decimal representation of your fractional number of hours (which seems unlikely) then use decimal.
You could also consider storing it as an integer in, for example, seconds, milliseconds, or ticks.
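For example, a small sketch of the tick-based approach (where you store the long is up to you):
TimeSpan hours = TimeSpan.FromHours(8.25);        // 8 hours 15 minutes
long storedTicks = hours.Ticks;                   // store as a plain 64-bit integer
TimeSpan restored = TimeSpan.FromTicks(storedTicks);
double forDisplay = restored.TotalHours;          // 8.25 again when displaying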
The best datatype to store hours is the one designed for it - TimeSpan.
It has methods that allow you to add/subtract/convert it.
As for storage in a database, it really depends on what you are using this for and what kind of resolution is required.
I would use the time datatype - as it will hold the range:
00:00:00.0000000 through 23:59:59.9999999
However, if you need to hold more than 24 hours in this field, you may want to consider a tinyint or int holding the number of minutes (assuming that is the maximum time resolution you require).
In SQL Server use INT or DECIMAL. TIME isn't really ideal for storing a duration because TIME defines a point in time within the 24 hour clock whereas duration is simply an integer or decimal value. You cannot do addition or subtraction with TIME values and there is no obvious way to use TIME to store durations greater than 24hrs.
Why not use TIME?
You can use DATEADD with TIME to manipulate it more easily:
SELECT DATEADD(minute, 30, CAST('2:00:00' AS TIME))
becomes 02:30:00.0000000. And so on..