I am working with .NET, but I need to communicate with a Unix-based logging service that expects seconds and microseconds since the Unix epoch. The seconds are easy to retrieve with something like:
DateTime UnixEpoch = new DateTime(1970, 1, 1);
TimeSpan time = DateTime.UtcNow - UnixEpoch;
int seconds = (int) time.TotalSeconds;
However, I am unsure how to calculate the microseconds. I could use the TotalMilliseconds property and convert it to microseconds, but I believe that defeats the purpose of using microseconds as a precise measurement. I have looked into using the Stopwatch class, but it doesn't seem like I can seed it with a time (the Unix epoch, for example).
Thanks.
Use the Ticks property to get the most fine-grained level of detail. A tick is 100ns, so divide by 10 to get to microseconds.
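For example, a minimal sketch of that approach (the variable names are mine):
DateTime unixEpoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
long ticksSinceEpoch = DateTime.UtcNow.Ticks - unixEpoch.Ticks;
// 1 tick = 100 ns, so 10 ticks = 1 microsecond
long seconds = ticksSinceEpoch / TimeSpan.TicksPerSecond;
long microseconds = (ticksSinceEpoch % TimeSpan.TicksPerSecond) / 10;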
However, that talks about the representation precision - it doesn't talk about the accuracy at all. Given the coarse granularity of DateTime.UtcNow I wouldn't expect it to be particularly useful. See Eric Lippert's blog post about the difference between precision and accuracy for more information.
You may want to start a stopwatch at a known time, and basically add its time to the "start point". Note that "ticks" from Stopwatch doesn't mean the same as TimeSpan.Ticks.
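A rough sketch of that idea (Stopwatch lives in System.Diagnostics; the names are illustrative):
// Capture a UTC anchor once, then let Stopwatch supply the fine-grained delta
DateTime anchor = DateTime.UtcNow;
Stopwatch stopwatch = Stopwatch.StartNew();
// ... later ...
// Elapsed already converts Stopwatch ticks into a TimeSpan
DateTime now = anchor + stopwatch.Elapsed;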
From Epoch Converter:
epoch = (DateTime.Now.ToUniversalTime().Ticks - 621355968000000000) / 10000000;
This will truncate it because of integer division, so make it 10000000.0 to maintain the sub-second portion, or just don't do the division.
It's important to note that the .Ticks property you get is relative to Jan 1, 0001 and not Jan 1, 1970 as in UNIX, which is why you need to subtract that offset above.
edit: just for clarity, that nasty constant is just the number of ticks between Jan 1, 0001 and Jan 1, 1970 in UTC. If you take the seconds portion of it (62135596800) and divide by (365 * 24 * 60 * 60) you see you get a number close to 1970, which is of course not exactly 1970 due to leap adjustments.
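If you'd rather not hard-code that constant, you can compute it yourself; this is just a sketch of the same calculation:
// 621355968000000000 is simply the tick count of the Unix epoch
long epochTicks = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc).Ticks;
double epochSeconds = (DateTime.UtcNow.Ticks - epochTicks) / 10000000.0; // keeps the sub-second part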
DateTime.Ticks represent units of 100 nanoseconds since Jan. 1, 0001.
On modern CPUs with dynamically varying clock rate, it's very difficult (or expensive) to get accurate sub-millisecond timing.
I'd suggest that you just multiply the Milliseconds property by 1000.
Use Ticks like Jon suggests, but I think your accuracy is still going to be in the neighborhood of 1ms.
For a RowKey on Azure Table Storage entities, the following prefix is used:
DateTime.MaxValue.Subtract(DateTime.UtcNow).TotalMilliseconds
As far as I know, this timestamp should act as a kind of "sorter" so that newer entities appear at the top of a list. So, as I understand it, this line of code produces the number of milliseconds from the current date/time until DateTime.MaxValue.
Is there a simple and safe way to convert this number of milliseconds "back" to the date/time when the timestamp was created? I'm not so familiar with date/time conversions...
The DateTime.MaxValue is:
equivalent to 23:59:59.9999999 UTC, December 31, 9999 in the
Gregorian calendar, exactly one 100-nanosecond tick before 00:00:00
UTC, January 1, 10000.
Thus, considering roughly 10,000 years, you have:
10,000 x 365 x 24 x 60 x 60 x 1000 = 315,360,000,000,000 //Note 15-digit
And a double has at least 15 significant decimal digits of precision. In other words, as long as you use the first 15 digits of your TotalMilliseconds as the timestamp, it should be fine.
I recommend casting it to long, whose range is:
–9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 //note, more than 15-digit
And then use ToString("D15") as unique timestamp:
long val = (long)DateTime.MaxValue.Subtract(DateTime.UtcNow).TotalMilliseconds;
string timestamp = val.ToString("D15");
And to convert back, you could cast it back to double and use AddMilliseconds with a negative sign from the max value. Note that DateTime is immutable, so AddMilliseconds returns a new DateTime and you must assign the result:
double db = Convert.ToDouble(timestamp);
DateTime dt = DateTime.MaxValue.AddMilliseconds(-db); // this gives you the date/time back with millisecond precision
Then you will get precision down to the millisecond.
I have a very simple issue with datetime and am seeking some help.
I have a log that I would like to extract all the data from. There are three datetime columns (two are UNIX timestamps, while the other isn't).
The one with the different format has values like 22194885, and I can't tell what kind of datetime value that is.
Looks like minutes since January 1, 1970. This is Python code, but it works the same as C's localtime():
>>> import time
>>> time.localtime(22194885*60)
time.struct_time(tm_year=2012, tm_mon=3, tm_mday=13, tm_hour=19, tm_min=45, tm_sec=0, tm_wday=1, tm_yday=73, tm_isdst=1)
Works out to 3/13/2012 7:45pm.
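In C#, the equivalent would be something like this (assuming the value really is minutes since the Unix epoch):
DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
DateTime utc = epoch.AddMinutes(22194885);
// utc is 2012-03-14 02:45:00 UTC, i.e. 7:45pm on 2012-03-13 in a UTC-7 zone,
// matching the Python result above
DateTime local = utc.ToLocalTime();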
Looks like it could be minutes since the Epoch, rather than milliseconds
22194885 minutes / 60 = 369914.75 hours
369914.75 hours / 24 = 15413.1 days
15413.1 days / 365 = 42.2 years
1970 + 42.2 = about today
For help converting Epoch time to .Net time, see
How to convert a Unix timestamp to DateTime and vice versa?
Remember that question deals with milliseconds, so you'll have to adjust the answer slightly.
Based on the calculations in Eric J's answer (which has been deleted), this could well be the number of minutes since the epoch. Bah! He slipped in a ninja edit.
Julian seconds (since the start of the year) is also a strong possibility.
According to MSDN, DateTime precision is 10 ms. So the precision of t2 - t1 in the example below is also 10 ms. However, the returned value is a double, which is confusing.
DateTime t1 = DateTime.Now; // precision is 10 ms
....
DateTime t2 = DateTime.Now; // precision is 10 ms
... (t2-t1).TotalMilliseconds; // double (so precision is less than 1 ms???)
I expect int value because double value doesn't make sense when precision is 10 ms. I need to use resulted value in Thread.Sleep(). Should I just cast to int?
The precision of DateTime itself is down to the tick.
The granularity of DateTime.Now is typically 10 or 15ms - it's the granularity of the system clock. (That doesn't mean the clock is accurate to the nearest 10 or 15ms, mind you.) The subtraction operator on DateTime shouldn't know or care about that though - the result is just a TimeSpan which again has a precision to the tick level.
Just casting to int should be absolutely fine.
(You might want to read Eric Lippert's blog post on this, by the way.)
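A tiny sketch of that cast (Thread lives in System.Threading; the names are illustrative):
TimeSpan elapsed = t2 - t1;                    // precise to the tick
int sleepMs = (int) elapsed.TotalMilliseconds; // truncates the fractional part
Thread.Sleep(sleepMs);
Thread.Sleep also has an overload that accepts a TimeSpan, so you could pass t2 - t1 directly.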
What is a good data type for saving hours in .NET?
Is it better to use the decimal type, or is the double data type more appropriate? By hours I mean values such as:
2 for two hours
1.5 for 90 minutes
8.25 for 8 hours and 15 minutes.
A good way to represent a number of hours is to use a TimeSpan:
TimeSpan hours = TimeSpan.FromHours(2);
Given the choice between decimal or double I'd probably go for double as there is typically no expectation that the amount of time is represented exactly. If you need an exact decimal representation of your fractional number of hours (which seems unlikely) then use decimal.
You could also consider storing it as an integer in for example seconds, milliseconds or ticks.
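A quick sketch of those conversions (purely illustrative):
TimeSpan hours = TimeSpan.FromHours(8.25);   // 8 hours and 15 minutes
double asFractionalHours = hours.TotalHours; // 8.25, if you need the numeric form back
long asTicks = hours.Ticks;                  // exact integer representation for storage
TimeSpan restored = TimeSpan.FromTicks(asTicks);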
The best datatype to store hours is the one designed for it - TimeSpan.
It has methods that allow you to add/subtract/convert it.
As for storage in a database, it really depends on what you are using this for and what kind of resolution is required.
I would use the time datatype - as it will hold the range:
00:00:00.0000000 through 23:59:59.9999999
However, if you need to hold more than 24 hours in this field, you may want to consider a tinyint or int holding the number of minutes (assuming that is the maximum time resolution you require).
In SQL Server use INT or DECIMAL. TIME isn't really ideal for storing a duration because TIME defines a point in time within the 24 hour clock whereas duration is simply an integer or decimal value. You cannot do addition or subtraction with TIME values and there is no obvious way to use TIME to store durations greater than 24hrs.
Why not use TIME?
You can use DATEADD with TIME to manipulate it more easily:
SELECT DATEADD(minute, 30, CAST('2:00:00' AS TIME))
becomes 02:30:00.0000000. And so on..
I have a legacy database with a field containing an integer representing a datetime in UTC.
From the documentation:
"Timestamps within a CDR appear in Universal Coordinated Time (UTC). This value remains
independent of daylight saving time changes"
An example of a value is 1236772829.
My question is what is the best way to convert it to a .NET DateTime (in CLR code, not in the DB), both as the UTC value and as a local time value.
I have tried to Google it, but without any luck.
You'll need to know what the integer really means. This will typically consist of:
An epoch/offset (i.e. what 0 means) - for example "midnight Jan 1st 1970"
A scale, e.g. seconds, milliseconds, ticks.
If you can get those two values and what they mean in terms of UTC, the rest should be easy. The simplest way would probably be to have a DateTime (or DateTimeOffset) as the epoch, then construct a TimeSpan from the integer, e.g. with TimeSpan.FromMilliseconds etc. Add the two together and you're done. EDIT: Using AddSeconds or AddMilliseconds as per aakashm's answer is a simpler way of doing this bit :)
Alternatively, do the arithmetic yourself and call the DateTime constructor which takes a number of ticks. (Arguably the version taking a DateTimeKind as well would be better, so you can explicitly state that it's UTC.)
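A sketch of that last option, assuming the integer (here the example value from the question) is seconds since the Unix epoch:
int cdrTimestamp = 1236772829;                          // example value from the question
DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
long ticks = epoch.Ticks + cdrTimestamp * TimeSpan.TicksPerSecond;
DateTime utc = new DateTime(ticks, DateTimeKind.Utc);   // the ticks + DateTimeKind constructor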
Googling that exact phrase gives me this Cisco page, which goes on to say "The field specifies a time_t value that is obtained from the operating system."
time_t is a C library concept which strictly speaking doesn't have to be any particular implementation, but typically for UNIX-y systems will be the number of seconds since the start of the Unix epoch, 1970 January 1 00:00.
Assuming this to be right, this code will give you a DateTime from an int:
DateTime epochStart = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc); // specify UTC so the kind is unambiguous
int cdrTimestamp = 1236772829;
DateTime result = epochStart.AddSeconds(cdrTimestamp);
// Now result is 2009 March 11 12:00:29
You should sanity check the results you get to confirm that this is the correct interpretation of these time_ts.
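And since the question also asks for the local value: once the DateTime is marked as UTC, converting is just:
DateTime local = result.ToLocalTime(); // converts using the machine's current time zone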