I have a legacy C++ application that timestamps incoming network traffic using the CRT _ftime() function. _ftime() fills in a _timeb structure, which has a 32-bit and a 64-bit implementation. We are using the 32-bit implementation, which looks like this:
struct _timeb {
long time; // 4 bytes
unsigned short millitm; // 2 bytes
short timezone; // 2 bytes
short dstflag; // 2 bytes
};
From the MSDN documentation, here is how each field is interpreted:
dstflag - nonzero if daylight savings time is currently in effect for the local time zone (see _tzset for an explanation of how daylight savings time is determined.)
millitm - fraction of a second in milliseconds
time - time in seconds since midnight (00:00:00), January 1, 1970, coordinated universal time (UTC).
timezone - difference in minutes, moving westward, between UTC and local time. The value of timezone is set from the value of the global variable _timezone (see _tzset).
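For reference, a rough C# mapping of the 32-bit layout would look something like this (a sketch only; Timeb32 is just an illustrative name, and my bridge below works with the native struct directly):
using System.Runtime.InteropServices;
[StructLayout(LayoutKind.Sequential)]
struct Timeb32 {
    public int Time;       // seconds since 1970-01-01 00:00:00 UTC
    public ushort Millitm; // fraction of a second, in milliseconds (0-999)
    public short Timezone; // minutes west of UTC, ignoring DST
    public short Dstflag;  // nonzero if DST is in effect for the local zone
}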
I am re-working the portion of the code that does the timestamping to use C# in .NET 3.5. Timestamps are now generated using the System.DateTime structure, but I still need to convert them back to the _timeb structure so the legacy C++ code can operate on them. Here is how I am doing that in my managed C++ bridge library:
DateTime dateTime = DateTime::UtcNow;
DateTime baseTime(1970, 1, 1, 0, 0, 0, DateTimeKind::Utc);
TimeSpan delta = dateTime - baseTime;
_timeb timestamp;
timestamp.time = (long)delta.TotalSeconds; // TotalSeconds is a double; truncate explicitly
timestamp.millitm = dateTime.Millisecond;
timestamp.dstflag = TimeZoneInfo::Local->IsDaylightSavingTime(dateTime) ? 1 : 0;
timestamp.timezone = (short)(TimeZoneInfo::Local->BaseUtcOffset.TotalMinutes * -1);
From what I can tell, this appears to reconstruct the _timeb structure as if I had called _ftime() directly, and that's good. The thing is, timestamps are a critical piece of our application, so this has to be right.
My question is two-fold.
Is my algorithm flawed somehow? Does anyone see anything obvious that I've missed? Are there boundary conditions where this won't work right?
Is there a better way to do the conversion? Does .NET have a way to do this in a more straightforward manner?
You're aware of the Y2K38 problem? I assume you checked the sign of .timezone. Avoid the cleverness of using dateTime.Millisecond; that just confuses the next guy. Looks good otherwise.
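One way to drop that cleverness is to derive both the seconds and the milliseconds from the same delta. A minimal C# sketch (the same members are available from your C++/CLI side):
DateTime utcNow = DateTime.UtcNow;
DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
TimeSpan delta = utcNow - epoch;
long seconds = (long)delta.TotalSeconds; // whole seconds since the epoch
int millis = delta.Milliseconds;         // 0-999, taken from the same delta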
I'm trying to add a wrapper around DateTime to include the time zone information. Here's what I have so far:
public struct DateTimeWithZone {
private readonly DateTime _utcDateTime;
private readonly TimeZoneInfo _timeZone;
public DateTimeWithZone(DateTime dateTime, TimeZoneInfo timeZone) {
_utcDateTime = TimeZoneInfo.ConvertTimeToUtc(DateTime.SpecifyKind(dateTime, DateTimeKind.Unspecified), timeZone);
_timeZone = timeZone;
}
public DateTime UniversalTime { get { return _utcDateTime; } }
public TimeZoneInfo TimeZone { get { return _timeZone; } }
public DateTime LocalTime { get { return TimeZoneInfo.ConvertTimeFromUtc(_utcDateTime, _timeZone); } }
public DateTimeWithZone AddDays(int numDays) {
return new DateTimeWithZone(TimeZoneInfo.ConvertTimeFromUtc(UniversalTime.AddDays(numDays), _timeZone), _timeZone);
}
public DateTimeWithZone AddDaysToLocal(int numDays) {
return new DateTimeWithZone(LocalTime.AddDays(numDays), _timeZone);
}
}
This has been adapted from an answer @Jon Skeet provided to an earlier question.
I am struggling with adding/subtracting time due to problems with daylight saving time. According to the following, it is best practice to add/subtract using the universal time:
https://msdn.microsoft.com/en-us/library/ms973825.aspx#datetime_topic3b
The problem I have is that if I say:
var timeZone = TimeZoneInfo.FindSystemTimeZoneById("Romance Standard Time");
var date = new DateTimeWithZone(new DateTime(2003, 10, 26, 00, 00, 00), timeZone);
date.AddDays(1).LocalTime.ToString();
This will return 26/10/2003 23:00:00. As you can see, the local time has lost an hour (due to daylight saving time ending), so if I were to display this, it would show the same day as the one I just added a day to. However, if I were to say:
date.AddDaysToLocal(1).LocalTime.ToString();
I would get back 27/10/2003 00:00:00, and the time is preserved. This looks correct to me, but it goes against the best practice of adding to the universal time.
I'd appreciate it if someone could help clarify the correct way to do this. Please note that I have looked at Noda Time, and it's currently going to take too much work to convert to it; also, I'd like a better understanding of the problem.
Both ways are correct (or incorrect) depending upon what you need to do.
I like to think of these as different types of computations:
Chronological computation.
Calendrical computation.
A chronological computation involves time arithmetic in units that are regular with respect to physical time. For example the addition of seconds, nanoseconds, hours or days.
A calendrical computation involves time arithmetic in units that humans find convenient, but which don't always have the same length of physical time. For example the addition of months or years (each of which have a varying number of days).
A calendrical computation is convenient when you want to add a coarse unit that does not necessarily have a fixed number of seconds in it, and yet you still want to preserve the finer field units in the date, such as days, hours, minutes and seconds.
In your local time computation, you add a day, and presuming a calendrical computation is what you intended, you preserve the local time of day, despite the fact that 1 day is not always 24 hours in the local calendar. Be aware that arithmetic in local time has the potential to result in a local time that has two mappings to UTC, or even zero mappings to UTC. So your code should be constructed such that you know this can never happen, or be able to detect when it does and react in whatever way is correct for your application (e.g. disambiguate an ambiguous mapping).
In your UTC time computation (a chronological computation), you always add 86400 seconds, and the local calendar can react however it may due to UTC offset changes (daylight saving related or otherwise). UTC offset changes can be as large as 24h, and so adding a chronological day may not even bump the local calendar day of the month by one. Chronological computations always have a result which has a unique UTC <-> local mapping (assuming the input has a unique mapping).
Both computations are useful. Both are commonly needed. Know which you need, and know how to use the API to compute whichever you need.
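For instance, here is a minimal C# sketch, using the zone and date from your question, showing where the two kinds of computation diverge:
var zone = TimeZoneInfo.FindSystemTimeZoneById("Romance Standard Time");
var localStart = new DateTime(2003, 10, 26, 0, 0, 0); // DST ends this night
var utcStart = TimeZoneInfo.ConvertTimeToUtc(localStart, zone);
// Chronological: add exactly 86,400 seconds in UTC, then view locally.
var chrono = TimeZoneInfo.ConvertTimeFromUtc(utcStart.AddDays(1), zone);
// chrono == 26/10/2003 23:00:00 (that local day was 25 hours long)
// Calendrical: add one calendar day in local time, preserving time of day.
var calendrical = localStart.AddDays(1);
// calendrical == 27/10/2003 00:00:00 (check for gaps/ambiguity before
// converting this back to UTC)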
Just to add to Howard's great answer, understand that the "best practice" you refer to is about incrementing by an elapsed time. Indeed, if you wanted to add 24 hours, you'd do that in UTC and you'd find you'd end up on 23:00 due to there being an extra hour in that day.
I typically consider adding a day to be a calendrical computation (using Howard's terminology), and thus it doesn't matter how many hours there are in that day - you increment the day in local time.
You do then have to verify that the result is a valid time on that day, as it very well may have landed you on an invalid value, in the "gap" of a forward transition. You'll have to decide how to adjust. Likewise, when you convert to UTC, you should test for ambiguous time and adjust accordingly.
Understand that by not doing any adjusting on your own, you're relying on the default behavior of the TimeZoneInfo methods, which adjust backward during an ambiguous time (even though the usually desired behavior is to adjust forward), and that ConvertTimeFromUtc will throw an exception during an invalid time.
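A sketch of what that checking might look like with plain TimeZoneInfo (the adjustment policies in the comments are placeholders; pick whatever is correct for your application):
var zone = TimeZoneInfo.FindSystemTimeZoneById("Romance Standard Time");
var local = new DateTime(2003, 3, 30, 2, 30, 0); // falls in the spring-forward gap
if (zone.IsInvalidTime(local)) {
    // Skipped by the forward transition; one policy is to push past the gap.
    local = local.AddHours(1);
} else if (zone.IsAmbiguousTime(local)) {
    // Occurs twice at fall-back; choose an offset explicitly rather than
    // relying on ConvertTimeToUtc's default of standard time.
    TimeSpan[] offsets = zone.GetAmbiguousTimeOffsets(local);
    // ... pick offsets[0] or offsets[1] according to your application's rule.
}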
This is the reason why ZonedDateTime in Noda Time has the concept of "resolvers" to allow you to control this behavior more specifically. Your code is missing any similar concept.
I'll also add that while you say you've looked at Noda Time and it's too much work to convert to it - I'd encourage you to look again. You don't necessarily need to retrofit your entire application to use it. You can, but you can also just introduce it where it's needed. For example, you might want to use it internally in this DateTimeWithZone class, in order to force you down the right path.
One more thing - When you use SpecifyKind in your input, you're basically saying to ignore whatever the input kind is. Since you're designing general purpose code for reuse, you're inviting the potential for bugs. For example, I might pass in DateTime.UtcNow, and you're going to assume it's the timezone-based time. Noda Time avoids this problem by having separate types instead of a "kind". If you're going to continue to use DateTime, then you should evaluate the kind to apply an appropriate action. Just ignoring it is going to get you into trouble for sure.
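For example, evaluating the kind in your constructor might look something like this (a sketch of one reasonable policy, not the only one):
public DateTimeWithZone(DateTime dateTime, TimeZoneInfo timeZone) {
    switch (dateTime.Kind) {
        case DateTimeKind.Utc:
            _utcDateTime = dateTime; // already UTC; trust the kind
            break;
        case DateTimeKind.Local:
            _utcDateTime = dateTime.ToUniversalTime(); // machine-local input
            break;
        default: // Unspecified: treat as a time in the supplied zone
            _utcDateTime = TimeZoneInfo.ConvertTimeToUtc(dateTime, timeZone);
            break;
    }
    _timeZone = timeZone;
}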
I'm converting this code from C# to Ruby:
C# Code
DateTime dtEpoch = new DateTime(1970, 01, 01, 0, 0, 0, 0, DateTimeKind.Utc);
string strTimeStamp = Convert.ToUInt64((DateTime.UtcNow - dtEpoch).TotalSeconds).ToString();
Ruby Code
now = Time.now.utc
epoch = Time.utc(1970,01,01, 0,0,0)
time_diff = ((now - epoch).to_s).unpack('Q').first.to_s
I need to convert the integer into an unsigned 64-bit integer. Is unpack really the way to go?
I'm not really sure what the returned value of your code is, but it sure ain't seconds since the epoch.
Ruby stores dates and times internally as seconds since the epoch. Time.now.to_i will return exactly what you're looking for.
require 'date'
# Seconds from epoch to this very second
puts Time.now.to_i
# Seconds from epoch until today, 00:00
puts Date.today.to_time.to_i
In short, Time.now.to_i is enough.
Ruby internally stores Times in seconds since 1970-01-01T00:00:00.000+0000:
Time.now.to_f #=> 1439806700.8638804
Time.now.to_i #=> 1439806700
And you don't have to convert the value to something like ulong in C#, because Ruby automatically coerces the integer type so that it doesn't fight against your common sense.
A slightly more verbose explanation: Ruby stores integers as instances of Fixnum if the number fits in 63 bits (not 64, because one bit is reserved as a type tag). If the number exceeds that size, Ruby automatically converts it to a Bignum, which has arbitrary size.
I'm in the process of porting a C++ program to C#. The program needs to be able to read a file's "modified" timestamp and store it in a List. This is what I have so far:
C# code:
ret = new List<Byte>(); //list to store the timestamp
var file = new FileInfo(filename);
//Get revision date/time
DateTime revTime_lastWriteTime_LT = file.LastWriteTime;
//Copy date/time into the List (binary buffer)
ret.Add(Convert.ToByte(revTime_lastWriteTime_LT.Month));
ret.Add(Convert.ToByte(revTime_lastWriteTime_LT.Day));
ret.Add(Convert.ToByte(revTime_lastWriteTime_LT.Year % 100)); // 2-digit year
ret.Add(Convert.ToByte(revTime_lastWriteTime_LT.Hour));
ret.Add(Convert.ToByte(revTime_lastWriteTime_LT.Minute));
ret.Add(Convert.ToByte(revTime_lastWriteTime_LT.Second));
The problem occurs when I read in the Hours value. If the file was modified during daylight saving time (like a summer month), the hour value in the C++ program gets subtracted by one. I can't seem to replicate this in my program. In the MSDN article for DateTime, under "DateTime values", it says: "local time is optionally affected by daylight saving time, which adds or subtracts an hour from the length of a day". In my C# code I made sure to change my DateTime object to local time using ToLocalTime(), but apparently I haven't instituted the option the article is talking about. How do I make sure that my DateTime object in local time subtracts an hour when reading a file that was modified during daylight saving time?
C++ code just in case:
static WIN32_FILE_ATTRIBUTE_DATA get_file_data(const std::string & filename)
{
WIN32_FILE_ATTRIBUTE_DATA ret;
if (!GetFileAttributesEx(filename.c_str(), GetFileExInfoStandard, &ret))
RaiseLastWin32Error();
return ret;
}
//---------------------------------------------------------------------------
static SYSTEMTIME get_file_time(const std::string & filename)
{
const WIN32_FILE_ATTRIBUTE_DATA data(get_file_data(filename));
FILETIME local;
if (!FileTimeToLocalFileTime(&data.ftLastWriteTime, &local))
RaiseLastWin32Error();
SYSTEMTIME ret;
if (!FileTimeToSystemTime(&local, &ret))
RaiseLastWin32Error();
return ret;
}
void parse_asm()
{
// Get revision date/time and size
const SYSTEMTIME rev = get_file_time(filename);
// Copy date/time into the binary buffer
ret.push_back(rev.wMonth);
ret.push_back(rev.wDay);
ret.push_back(rev.wYear % 100); // 2-digit year
ret.push_back(rev.wHour);
ret.push_back(rev.wMinute);
ret.push_back(rev.wSecond);
}
Update for clarity:
In Windows time settings I am in (UTC-05:00) Eastern Time (US & Canada). The file was last modified on Tues Sept 03, 2013 at 12:13:52 PM. The C++ app shows the hour value as 11 and the C# app shows the hour value as 12 using the code above. I need the C# app to show the same hour value as the C++ app.
The bug is actually not with .NET, but with your C++ code. You're using FileTimeToLocalFileTime, which has a well-known bug, as described in KB932955:
Note: The FileTimeToLocalFileTime() function and the LocalFileTimeToFileTime() function perform the conversion between UTC time and local time by using only the current time zone information and the DST information. This conversion occurs regardless of the timestamp that is being converted.
So in the example you gave, Sept 03, 2013 at 12:13:52 PM in the US Eastern Time zone should indeed be in daylight saving time. But because it is not daylight saving time right now (February 2015), you currently get 11 for the hour in your C++ program. If you run the exact same C++ code after next month's transition (March 8th, 2015), you will then get 12 for the hour.
The fix for the C++ code is described in the remarks section of the MSDN entry for the FileTimeToLocalFileTime function:
To account for daylight saving time when converting a file time to a local time, use the following sequence of functions in place of using FileTimeToLocalFileTime:
FileTimeToSystemTime
SystemTimeToTzSpecificLocalTime
SystemTimeToFileTime
Now that you understand the bug - if you actually wanted to keep that behavior in C# (which I do not recommend), then you would do the following:
TimeSpan currentOffset = TimeZoneInfo.Local.GetUtcOffset(DateTime.UtcNow);
DateTime revTime_LastWriteTime_Lt = file.LastWriteTimeUtc.Add(currentOffset);
The better thing to do would be just to leave your current code as is (using file.LastWriteTime), and call the bug fixed.
Sorry, you need to use Add, not AddHours; Add accepts a TimeSpan. So you're looking for:
file.LastWriteTimeUtc.Add(TimeZoneInfo.Local.BaseUtcOffset);
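For completeness, here is a short sketch of the DST-aware conversion done explicitly in C#; it is the managed counterpart of the FileTimeToSystemTime / SystemTimeToTzSpecificLocalTime sequence above and should agree with file.LastWriteTime:
DateTime utc = file.LastWriteTimeUtc;
DateTime local = TimeZoneInfo.ConvertTimeFromUtc(utc, TimeZoneInfo.Local);
// The offset in effect at the timestamp itself is applied, so a file
// modified in September still shows the daylight-time hour in February.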
How can I get the timezone offset of the physical server running my code? Not some date object or other object in memory.
For example, the following code will output -4:00:00:
<%= TimeZone.CurrentTimeZone.GetUtcOffset(new DateTime()) %>
When it should be -03:00:00 because of daylight saving time.
new DateTime() will give you January 1st 0001, rather than the current date/time. I suspect you want the current UTC offset... and that's why you're not seeing the daylight saving offset in your current code.
I'd use TimeZoneInfo.Local instead of TimeZone.CurrentTimeZone - it may not affect things, but it would definitely be a better approach. TimeZoneInfo should pretty much replace TimeZone in all code. Then you can use GetUtcOffset:
var offset = TimeZoneInfo.Local.GetUtcOffset(DateTime.UtcNow);
(Using DateTime.Now should work as well, but it involves some magic behind the scenes when there are daylight saving transitions around now. DateTime actually has four kinds rather than the advertised three, but it's simpler just to avoid the issue entirely by using UtcNow.)
Or of course you could use my Noda Time library instead of all this BCL rubbish ;) (If you're doing a lot of date/time work I'd thoroughly recommend that - obviously - but if you're only doing this one bit, it would probably be overkill.)
Since .NET 3.5, you can use the following from DateTimeOffset to get the current offset.
var offset = DateTimeOffset.Now.Offset;
MSDN documentation
There seems to be some difference between how GetUtcOffset works with new DateTime() and DateTime.Now. When I run it in the Central Time Zone, I get:
TimeZone.CurrentTimeZone.GetUtcOffset(new DateTime()) // -06:00:00
TimeZone.CurrentTimeZone.GetUtcOffset(DateTime.Now) // -05:00:00
It's a bit of a kludge, but I suppose you could also do this:
DateTime.Now - DateTime.UtcNow // -05:00:00
When you construct a new DateTime object, it gets the value DateTime.MinValue.
When you then call TimeZone.CurrentTimeZone.GetUtcOffset, you get the offset for that date.
If you use DateTime.Now, you get the current date and time, and therefore the current offset.
I have a legacy database with a field containing an integer representing a datetime in UTC.
From the documentation:
"Timestamps within a CDR appear in Universal Coordinated Time (UTC). This value remains
independent of daylight saving time changes"
An example of a value is 1236772829.
My question is what is the best way to convert it to a .NET DateTime (in CLR code, not in the DB), both as the UTC value and as a local time value.
I have tried to Google it, but without any luck.
You'll need to know what the integer really means. This will typically consist of:
An epoch/offset (i.e. what 0 means) - for example "midnight Jan 1st 1970"
A scale, e.g. seconds, milliseconds, ticks.
If you can get two values and what they mean in terms of UTC, the rest should be easy. The simplest way would probably be to have a DateTime (or DateTimeOffset) as the epoch, then construct a TimeSpan from the integer, e.g. TimeSpan.FromMilliseconds etc. Add the two together and you're done. EDIT: Using AddSeconds or AddMilliseconds as per aakashm's answer is a simpler way of doing this bit :)
Alternatively, do the arithmetic yourself and call the DateTime constructor which takes a number of ticks. (Arguably the version taking a DateTimeKind as well would be better, so you can explicitly state that it's UTC.)
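A minimal sketch of that alternative, assuming the value really is seconds since the Unix epoch:
int cdrTimestamp = 1236772829; // example value from the question
long epochTicks = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc).Ticks;
// Tick arithmetic, stating explicitly that the result is UTC:
DateTime utc = new DateTime(epochTicks + cdrTimestamp * TimeSpan.TicksPerSecond, DateTimeKind.Utc);
DateTime local = utc.ToLocalTime(); // the same instant on the local clock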
Googling that exact phrase gives me this Cisco page, which goes on to say "The field specifies a time_t value that is obtained from the operating system."
time_t is a C library concept which strictly speaking doesn't have to be any particular implementation, but typically for UNIX-y systems will be the number of seconds since the start of the Unix epoch, 1970 January 1 00:00.
Assuming this to be right, this code will give you a DateTime from an int:
DateTime epochStart = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
int cdrTimestamp = 1236772829;
DateTime result = epochStart.AddSeconds(cdrTimestamp);
// Now result is 2009 March 11 12:00:29 (UTC); use result.ToLocalTime() for the local view
You should sanity check the results you get to confirm that this is the correct interpretation of these time_t values.