Does anyone know how to calculate the number of .NET time ticks from outside of the .NET Framework? My situation involves a Python script on Linux (my side) needing to send a timestamp as a number of ticks in a message destined for a C# process (the other side, which can't change its code). I can't find any libraries that do this... Any thoughts? Thanks!
You can easily determine what the Unix/Linux epoch (1970-01-01 00:00:00) is in DateTime ticks:
DateTime netTime = new DateTime(1970, 1, 1, 0, 0, 0);
Console.WriteLine(netTime.Ticks);
// Prints 621355968000000000
Since the Unix/Linux timestamp is the number of seconds since 1970-01-01 00:00:00 UTC, you can convert it to 100-nanosecond ticks by multiplying it by 10000000, and then convert that to DateTime ticks by adding 621355968000000000:
long netTicks = 621355968000000000L + 10000000L * linuxTimestamp;
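As a quick sanity check of that constant (a sketch on the C# side; the Python side is just the same arithmetic applied to int(time.time())):
// Sanity check: a Unix timestamp of 0 should round-trip to the epoch itself.
long linuxTimestamp = 0;
long netTicks = 621355968000000000L + 10000000L * linuxTimestamp;
Console.WriteLine(new DateTime(netTicks, DateTimeKind.Utc)); // e.g. 1/1/1970 12:00:00 AM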
I have a date-time in a string as "20160127003500".
What I need to do is convert this to a Unix timestamp while adding an hours offset to it.
I want to add an hours offset such as "1", "2", or "24".
Can anyone please guide me in the right direction?
Regards
First, parse the entire string (including the offset you mentioned in the question comments) to a DateTimeOffset:
using System.Globalization;
string s = "20160129205500 +0100";
string format = "yyyyMMddHHmmss zzz";
DateTimeOffset dto = DateTimeOffset.ParseExact(s, format, CultureInfo.InvariantCulture);
Then, there are a few different ways to get a Unix timestamp. Note that by the pure definition of a "Unix timestamp", the result would be in terms of seconds, though many languages these days use a higher precision (such as milliseconds used in JavaScript).
If you are targeting .NET 4.6 or higher, simply use the built-in methods:
// pick one for the desired precision:
long timestamp = dto.ToUnixTimeMilliseconds();
long timestamp = dto.ToUnixTimeSeconds();
If you are targeting an earlier version of .NET, then compute it yourself:
DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
// pick one for the desired precision:
long timestamp = (long) dto.UtcDateTime.Subtract(epoch).TotalMilliseconds;
long timestamp = (long) dto.UtcDateTime.Subtract(epoch).TotalSeconds;
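If the "hours offset" in the question is meant as an amount to add on top of the parsed time (my assumption), a minimal sketch building on the dto above:
// Illustrative only: shift by a whole number of hours before converting.
// (Uses the .NET 4.6+ method; substitute the epoch subtraction above otherwise.)
int hoursToAdd = 24; // the "1", "2", or "24" from the question
long shifted = dto.AddHours(hoursToAdd).ToUnixTimeSeconds();
// For reference: 2016-01-29 20:55:00 +01:00 is 1454097300 seconds after the epoch,
// so adding 24 hours should print 1454183700.
Console.WriteLine(shifted);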
I'm converting this code from C# to Ruby:
C# Code
DateTime dtEpoch = new DateTime(1970, 01, 01, 0, 0, 0, 0, DateTimeKind.Utc);
string strTimeStamp = Convert.ToUInt64((DateTime.UtcNow - dtEpoch).TotalSeconds).ToString();
Ruby Code
now = Time.now.utc
epoch = Time.utc(1970,01,01, 0,0,0)
time_diff = ((now - epoch).to_s).unpack('Q').first.to_s
I need to convert the integer into an unsigned 64-bit integer. Is unpack really the way to go?
I'm not really sure what your code returns, but it sure isn't the seconds since the epoch.
Ruby stores dates and times internally as seconds since the epoch. Time.now.to_i will return exactly what you're looking for.
require 'date'
# Seconds from epoch to this very second
puts Time.now.to_i
# Seconds from epoch until today, 00:00
puts Date.today.to_time.to_i
In short, Time.now.to_i is enough.
Ruby internally stores Times in seconds since 1970-01-01T00:00:00.000+0000:
Time.now.to_f #=> 1439806700.8638804
Time.now.to_i #=> 1439806700
And you don't have to convert the value to something like ulong in C#, because Ruby automatically coerces the integer type so that it doesn't fight against your common sense.
A slightly more verbose explanation: Ruby stores an integer as an instance of Fixnum if the number fits in 63 bits (not 64, weird huh?). If it exceeds that size, Ruby automatically converts it to a Bignum, which has arbitrary size.
I am using a UTC seconds timestamp to sync with the server. When the device timestamp is greater, it pushes data to the server; when the server timestamp is greater, it pulls from the server.
Every time data is changed, the timestamp on the phone is updated to the latest time. I use the following function to convert a date to seconds:
long seconds = FromDateToSeconds(DateTime.UtcNow);
public long FromDateToSeconds(DateTime date)
{
var epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
return Convert.ToInt64((date - epoch).TotalSeconds);
}
When the data is synced, the server returns an updated timestamp, which is stored on the device. Whenever I change the data just after it has synced, the FromDateToSeconds function returns a timestamp that is less than the last server sync timestamp. I see a difference of 1-15 seconds.
I don't understand how this is possible. Does UtcNow return the correct time, or is it off by 10-20 seconds?
Some help would be appreciated.
The clock in the phone is not synced against anything except for when the mobile operator enables synchronization (and I do not think any operator has).
This means that the clock in your phone can be off by a few seconds, if not minutes! The only thing you can do is to calculate the time including the offset.
To do so, pull the current time from a time server and then calculate the difference. Add this difference to all times.
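A rough sketch of that idea, reusing the FromDateToSeconds function from the question (GetServerUtcTime() is a placeholder for however you obtain the server's clock, e.g. a value returned in the sync response; it is not a real API):
// Measure the offset once (or periodically) against a trusted clock.
TimeSpan clockOffset = GetServerUtcTime() - DateTime.UtcNow;
// From then on, correct the local clock before generating timestamps.
long seconds = FromDateToSeconds(DateTime.UtcNow + clockOffset);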
I am working with .NET, but I need to communicate with a Unix-based logging service that expects seconds and microseconds since the Unix epoch. The seconds are easily retrievable by doing something like:
DateTime UnixEpoch = new DateTime(1970, 1, 1);
TimeSpan time = DateTime.UtcNow - UnixEpoch;
int seconds = (int) time.TotalSeconds;
However, I am unsure how to calculate the microseconds. I could use the TotalMilliseconds property and convert it to microseconds, but I believe that defeats the purpose of using microseconds as a precise measurement. I have looked into using the Stopwatch class, but it doesn't seem like I can seed it with a time (the Unix epoch, for example).
Thanks.
Use the Ticks property to get the most fine-grained level of detail. A tick is 100ns, so divide by 10 to get to microseconds.
However, that talks about the representation precision - it doesn't talk about the accuracy at all. Given the coarse granularity of DateTime.UtcNow I wouldn't expect it to be particularly useful. See Eric Lippert's blog post about the difference between precision and accuracy for more information.
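Ignoring the accuracy caveat for a moment, the raw arithmetic might look like this (a sketch):
DateTime unixEpoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
long totalMicroseconds = (DateTime.UtcNow - unixEpoch).Ticks / 10; // 1 tick = 100 ns
long seconds = totalMicroseconds / 1000000;
long microseconds = totalMicroseconds % 1000000;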
You may want to start a stopwatch at a known time, and basically add its time to the "start point". Note that "ticks" from Stopwatch doesn't mean the same as TimeSpan.Ticks.
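A sketch of that approach (the MicroTimestamp class and its names are mine, purely illustrative): capture a wall-clock baseline once, then let Stopwatch supply the fine-grained elapsed portion.
using System;
using System.Diagnostics;

static class MicroTimestamp
{
    private static readonly DateTime UnixEpoch =
        new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
    private static readonly DateTime BaselineUtc = DateTime.UtcNow;
    private static readonly Stopwatch Clock = Stopwatch.StartNew();

    // Seconds and microseconds since the Unix epoch.
    public static void Now(out long seconds, out long microseconds)
    {
        // Stopwatch.Elapsed is already a TimeSpan, so no Stopwatch-tick conversion is needed.
        DateTime nowUtc = BaselineUtc + Clock.Elapsed;
        long totalMicroseconds = (nowUtc - UnixEpoch).Ticks / 10; // 1 tick = 100 ns
        seconds = totalMicroseconds / 1000000;
        microseconds = totalMicroseconds % 1000000;
    }
}
The baseline still inherits whatever accuracy DateTime.UtcNow had at startup, but subsequent readings gain Stopwatch's resolution.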
From Epoch Converter:
epoch = (DateTime.Now.ToUniversalTime().Ticks - 621355968000000000) / 10000000;
This will truncate it because of integer division, so make it 10000000.0 to maintain the sub-second portion, or just don't do the division.
It's important to note that the .Ticks property you get is relative to Jan 1, 0001 and not Jan 1, 1970 as in UNIX, which is why you need to subtract that offset above.
Edit: just for clarity, that nasty constant is just the number of ticks between Jan 1, 0001 and Jan 1, 1970 in UTC. If you take its seconds portion (62135596800) and divide by (365 * 24 * 60 * 60), you get a number close to 1970, which of course isn't exactly 1970 because of leap adjustments.
DateTime.Ticks represents the time in 100-nanosecond units since Jan. 1, 0001.
On modern CPUs with dynamically varying clock rate, it's very difficult (or expensive) to get accurate sub-millisecond timing.
I'd suggest that you just multiply the Milliseconds property by 1000.
Use Ticks like Jon suggests, but I think your accuracy is still going to be in the neighborhood of 1ms.
I have a legacy C++-based application that timestamps incoming network traffic using the CRT _ftime() function. The _ftime() function returns a _timeb structure, which has a 32-bit and a 64-bit implementation. We are using the 32-bit implementation, which looks like this:
struct _timeb {
long time; // 4 bytes
unsigned short millitm; // 2 bytes
short timezone; // 2 bytes
short dstflag; // 2 bytes
};
From the MSDN documentation, here is how each field is interpreted:
dstflag - nonzero if daylight savings time is currently in effect for the local time zone (see _tzset for an explanation of how daylight savings time is determined.)
millitm - fraction of a second in milliseconds
time - time in seconds since midnight (00:00:00), January 1, 1970, coordinated universal time (UTC).
timezone - difference in minutes, moving westward, between UTC and local time. The value of timezone is set from the value of the global variable _timezone (see _tzset).
I am re-working the portion of the code that does the timestamping to use C# in .NET 3.5. Timestamps are now generated using the System.DateTime structure, but I still need to convert them back to the _timeb structure so the legacy C++ code can operate on them. Here is how I am doing that in my managed C++ bridge library:
DateTime dateTime = DateTime::UtcNow;
DateTime baseTime(1970, 1, 1, 0, 0, 0, DateTimeKind::Utc);
TimeSpan delta = dateTime - baseTime;
_timeb timestamp;
timestamp.time = delta.TotalSeconds;
timestamp.millitm = dateTime.Millisecond;
timestamp.dstflag = TimeZoneInfo::Local->IsDaylightSavingTime(dateTime) ? 1 : 0;
timestamp.timezone = TimeZoneInfo::Local->BaseUtcOffset.TotalMinutes * -1;
From what I can tell, this appears to reconstruct the _timeb structure as if I had called _ftime() directly, and that's good. The thing is, timestamps are a critical piece of our application, so this has to be right.
My question is two-fold.
Is my algorithm flawed somehow? Does anyone see anything obvious that I've missed? Are there boundary conditions where this won't work right?
Is there a better way to do the conversion? Does .NET have a way to do this in a more straightforward manner?
You're aware of the Y2K38 problem? I assume you checked the sign of .timezone. Avoid the cleverness of using dateTime.Millisecond; that just confuses the next guy. Looks good otherwise.
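If you want to act on that last point, one way (sketched in C# for brevity; the same DateTime/TimeSpan members are available from C++/CLI) is to derive both fields from the same TimeSpan:
DateTime baseTime = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
TimeSpan delta = DateTime.UtcNow - baseTime;

long wholeSeconds = delta.Ticks / TimeSpan.TicksPerSecond; // -> timestamp.time
int milliseconds = delta.Milliseconds;                     // -> timestamp.millitm (0-999)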