I am trying to determine whether a given date is at least six months old. I am doing this:
if (DateTime.Now.AddMonths(-6) > date)
{
    // Do something
}
Is this correct?
Some people say this approach is wrong and will not give accurate results. Is that true?
"6 months" is not a precise amount of time. It depends on the length of the months. In particular, you may well get different results from your calculation compared with date.AddMonths(6) < DateTime.Now. (Consider adding 6 months to August 30th vs taking away 6 months from February 28th... You may be okay, but you need to think about this carefully.)
You need to consider a few things carefully:
You're currently using DateTime.Now instead of DateTime.Today; how do you want the current time of day to affect things?
Is the "kind" of date UTC, unspecified or local? It makes a difference - DateTime is confusing, unfortunately.
How do you want to handle situations like the ones in the first paragraph?
Ultimately, if people are telling you it will not give accurate results, you should ask them for specific examples - you need to get a wealth of input data and desired results, add automated tests for them, and get them to pass. Then if anyone claims your code isn't working correctly, you should be able to challenge them to create another test case which fails, and justify their decision.
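For instance, here is the kind of concrete case worth turning into a test - one where the two checks from the first paragraph disagree, because AddMonths clamps the day to the last day of the target month (the dates are purely illustrative):

// Illustrative only: the two comparisons disagree around month-end clamping.
var date = new DateTime(2015, 2, 28);    // the date being tested
var now  = new DateTime(2015, 8, 30);    // pretend this is "today"

bool check1 = now.AddMonths(-6) > date;  // Aug 30 - 6 months = Feb 28; Feb 28 > Feb 28 -> false
bool check2 = date.AddMonths(6) < now;   // Feb 28 + 6 months = Aug 28; Aug 28 < Aug 30 -> true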
If you are only concerned about the date and not the time, use DateTime.Now.Date instead. Apart from that, I do not see any problems with the code you already have.
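For instance, a minimal sketch of that suggestion (assuming date itself may also carry a time component, hence the .Date on both sides):

// Compare calendar dates only, ignoring the time of day
if (DateTime.Now.Date.AddMonths(-6) > date.Date)
{
    // the date is more than six months old
}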
Related
I'm trying to add a wrapper around DateTime to include the time zone information. Here's what I have so far:
public struct DateTimeWithZone {
    private readonly DateTime _utcDateTime;
    private readonly TimeZoneInfo _timeZone;

    public DateTimeWithZone(DateTime dateTime, TimeZoneInfo timeZone) {
        _utcDateTime = TimeZoneInfo.ConvertTimeToUtc(DateTime.SpecifyKind(dateTime, DateTimeKind.Unspecified), timeZone);
        _timeZone = timeZone;
    }

    public DateTime UniversalTime { get { return _utcDateTime; } }
    public TimeZoneInfo TimeZone { get { return _timeZone; } }
    public DateTime LocalTime { get { return TimeZoneInfo.ConvertTimeFromUtc(_utcDateTime, _timeZone); } }

    public DateTimeWithZone AddDays(int numDays) {
        return new DateTimeWithZone(TimeZoneInfo.ConvertTimeFromUtc(UniversalTime.AddDays(numDays), _timeZone), _timeZone);
    }

    public DateTimeWithZone AddDaysToLocal(int numDays) {
        return new DateTimeWithZone(LocalTime.AddDays(numDays), _timeZone);
    }
}
This has been adapted from an answer @Jon Skeet provided in an earlier question.
I am struggling with adding/subtracting time due to problems with daylight saving time. According to the following article, it is best practice to add/subtract in universal time:
https://msdn.microsoft.com/en-us/library/ms973825.aspx#datetime_topic3b
The problem I have is that if I say:
var timeZone = TimeZoneInfo.FindSystemTimeZoneById("Romance Standard Time");
var date = new DateTimeWithZone(new DateTime(2003, 10, 26, 00, 00, 00), timeZone);
date.AddDays(1).LocalTime.ToString();
This will return 26/10/2003 23:00:00. As you can see, the local time has lost an hour (because daylight saving time has ended), so if I were to display this it would show the same calendar day as the day I just added a day to. However, if I say:
date.AddDaysToLocal(1).LocalTime.ToString();
I would get back 27/10/2003 00:00:00 and the time of day is preserved. This looks correct to me, but it goes against the best practice of adding to the universal time.
I'd appreciate it if someone could help clarify what's the correct way to do this. Please note that I have looked at Noda Time and it's currently going to take too much work to convert to it, also I'd like a better understanding of the problem.
Both ways are correct (or incorrect) depending upon what you need to do.
I like to think of these as different types of computations:
Chronological computation.
Calendrical computation.
A chronological computation involves time arithmetic in units that are regular with respect to physical time. For example the addition of seconds, nanoseconds, hours or days.
A calendrical computation involves time arithmetic in units that humans find convenient, but which don't always have the same length of physical time. For example the addition of months or years (each of which have a varying number of days).
A calendrical computation is convenient when you want to add a coarse unit that does not necessarily have a fixed number of seconds in it, and yet you still want to preserve the finer field units in the date, such as days, hours, minutes and seconds.
In your local time computation, you add a day, and presuming a calendrical computation is what you intended, you preserve the local time of day, despite the fact that 1 day is not always 24 hours in the local calendar. Be aware that arithmetic in local time has the potential to result in a local time that has two mappings to UTC, or even zero mappings to UTC. So your code should be constructed such that you know this can never happen, or be able to detect when it does and react in whatever way is correct for your application (e.g. disambiguate an ambiguous mapping).
In your UTC time computation (a chronological computation), you always add 86400 seconds, and the local calendar can react however it may due to UTC offset changes (daylight saving related or otherwise). UTC offset changes can be as large as 24h, and so adding a chronological day may not even bump the local calendar day of the month by one. Chronological computations always have a result which has a unique UTC <-> local mapping (assuming the input has a unique mapping).
Both computations are useful. Both are commonly needed. Know which you need, and know how to use the API to compute whichever you need.
Just to add to Howard's great answer, understand that the "best practice" you refer to is about incrementing by an elapsed time. Indeed, if you wanted to add 24 hours, you'd do that in UTC and you'd find you'd end up on 23:00 due to there being an extra hour in that day.
I typically consider adding a day to be a calendrical computation (using Howard's terminology), and thus it doesn't matter how many hours there are in that day - you increment the day in local time.
You do then have to verify that the result is a valid time on that day, as it very well may have landed you on an invalid value, in the "gap" of a forward transition. You'll have to decide how to adjust. Likewise, when you convert to UTC, you should test for ambiguous time and adjust accordingly.
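For example, a rough sketch of those two tests, using the time zone from the question (the exact dates are just illustrative):

// Sketch: detecting ambiguous / non-existent local times after local-time arithmetic.
var zone = TimeZoneInfo.FindSystemTimeZoneById("Romance Standard Time");

var ambiguous = new DateTime(2003, 10, 26, 2, 30, 0);  // occurs twice (clocks fall back)
var invalid   = new DateTime(2003, 3, 30, 2, 30, 0);   // never occurs (clocks spring forward)

Console.WriteLine(zone.IsAmbiguousTime(ambiguous));    // True  -> two possible UTC mappings
Console.WriteLine(zone.IsInvalidTime(invalid));        // True  -> falls in the "gap", no UTC mapping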
Understand that by not doing any adjusting on your own, you're relying on the default behavior of the TimeZoneInfo methods, which adjust backward during an ambiguous time (even though the usually desired behavior is to adjust forward), and that ConvertTimeToUtc will throw an exception during an invalid time.
This is the reason why ZonedDateTime in Noda Time has the concept of "resolvers" to allow you to control this behavior more specifically. Your code is missing any similar concept.
I'll also add that while you say you've looked at Noda Time and it's too much work to convert to it - I'd encourage you to look again. One doesn't necessarily need to retrofit their entire application to use it. You can, but you can also just introduce it where it's needed. For example, you might want to use it internally in this DateTimeWithZone class, in order to force you down the right path.
One more thing - When you use SpecifyKind in your input, you're basically saying to ignore whatever the input kind is. Since you're designing general purpose code for reuse, you're inviting the potential for bugs. For example, I might pass in DateTime.UtcNow, and you're going to assume it's the timezone-based time. Noda Time avoids this problem by having separate types instead of a "kind". If you're going to continue to use DateTime, then you should evaluate the kind to apply an appropriate action. Just ignoring it is going to get you into trouble for sure.
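As a rough sketch, evaluating the kind in your constructor might look something like this (the policy chosen here - convert rather than throw - is just one option):

// Sketch: respect the incoming DateTimeKind instead of discarding it with SpecifyKind.
public DateTimeWithZone(DateTime dateTime, TimeZoneInfo timeZone) {
    _timeZone = timeZone;
    switch (dateTime.Kind) {
        case DateTimeKind.Utc:
            _utcDateTime = dateTime;                    // already UTC
            break;
        case DateTimeKind.Local:
            _utcDateTime = dateTime.ToUniversalTime();  // machine-local time -> UTC
            break;
        default:                                        // Unspecified: treat as a time in the supplied zone
            _utcDateTime = TimeZoneInfo.ConvertTimeToUtc(dateTime, timeZone);
            break;
    }
}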
Is there a standard or accepted best practice for how times should be displayed as HH:MM when the source time has HH:MM:SS precision? Should seconds be truncated or rounded to the nearest minute?
Socially, if I looked at a digital clock and saw that it was 4:00:45, I would never tell someone it was 4:01. But I didn't know if this convention is universal or if it applies in computing too.
Also, rounding to the nearest minute might produce unexpected behavior, e.g. if the rounding causes the hour or date to change. This doesn't necessarily apply to the particular use case we're dealing with today, but I can easily imagine another use case where a list of "Sales in January" includes a sale at 31-Jan 23:59:59 that would be displayed as 1-Feb 00:00.
If context is relevant to the answer, this use-case is a SQL Server app that converts a datetime to a smalldatetime, which SQL Server will round to the nearest minute. The result will be displayed as HH:MM in a C# web application. The conversion happens in legacy code that we can't change right now, but we can force truncation of seconds instead of rounding.
But I'm not sure if we should do this.
In general with dates and times it's best to round down.
To me "1 week ago" means "at least 7 days, but less than 14 days". A similar idea applies to times.
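For what it's worth, a small C# sketch of the round-down approach: the standard "HH:mm" format simply drops the seconds, which is effectively truncation.

var t = new DateTime(2014, 1, 31, 23, 59, 59);   // illustrative value
Console.WriteLine(t.ToString("HH:mm"));          // "23:59", not "00:00"

// If you need the truncated value itself rather than just the display string:
var truncated = new DateTime(t.Year, t.Month, t.Day, t.Hour, t.Minute, 0, t.Kind);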
Unless it's critical for ordering of the parent object, I really wouldn't bother. If you don't need the seconds, they can be considered low-priority. I'd personally round it because I like the precision, even if fractional.
If you're aware that it can cause unexpected behaviour, whoever interprets the data should account for it - but you can make it easier on them by adding a note in your code that it can cause this, so that they are aware.
We have an issue on a customer site, whereby we're importing a CSV file including two date fields (a start and finish date/time, with accuracy to seconds). The import code calculates the difference between the two dates as a TimeSpan, then we save the TotalSeconds to the database (in a real field).
It works perfectly in our development environment, but for some reason, on the customer site, the calculation picks up some fractional error, such that a time difference of 123 seconds frequently shows up in the DB as 123.0001 seconds or 122.9999 seconds. We cannot reproduce the problem here.
I recall many years ago there was some issue with Pentium processors that they were making weird floating point calculation errors (such that they were nicknamed 5.0001-ium processors), but I don't recall the details. Is it possible that there might be a similar issue on the customer site, whereby date/time calculations are being messed up by a particular kind of processor? Can you think of any other possible reasons for this odd behavior?
The code is pretty simple. I've edited out some extraneous stuff, but it goes like this:
DateTime startDate, endDate;
// set startdate and enddate by parsing from CSV file
var timeDiff = endDate.Subtract(startDate);
// and we save to the database using timeDiff.TotalSeconds
Round the number to a whole number before you put it in the database.
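For example, something along these lines before the save (a sketch; the variable names follow the snippet above):

var timeDiff = endDate.Subtract(startDate);
double wholeSeconds = Math.Round(timeDiff.TotalSeconds);  // round to the nearest whole second
// ...save wholeSeconds (or cast it to an integer type) instead of the raw TotalSeconds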
Working on an application where we would like the user to be able to enter incomplete dates.
In some cases there will only be a year - say 1854, or there might be a year and a month, for example March 1983, or there may be a complete date - 11 June 2001.
We'd like a single 'date' attribute/column - and to be able to sort on date.
Any suggestions?
Store the date as an integer -- yyyymmdd.
You can then zero out any month or day component that has not been entered:
Year only: 1954 => 19540000
Year & month: April 2004 => 20040400
Full date: January 1st, 2011 => 20110101
Of course I am assuming that you do not need to store any time of day information.
You could then create a struct to encapsulate this logic, with useful properties indicating which level of granularity has been set, the relevant System.DateTime, etc.
Edit: sorting should then work nicely as well
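A rough sketch of such a struct (the name and the granularity handling here are just one way to do it):

// Sketch of a partial-date wrapper around the yyyymmdd integer described above.
public struct PartialDate : IComparable<PartialDate> {
    private readonly int _yyyymmdd;

    public PartialDate(int year, int month = 0, int day = 0) {
        _yyyymmdd = year * 10000 + month * 100 + day;
    }

    public int Year  { get { return _yyyymmdd / 10000; } }
    public int Month { get { return (_yyyymmdd / 100) % 100; } }
    public int Day   { get { return _yyyymmdd % 100; } }

    public bool HasMonth { get { return Month != 0; } }
    public bool HasDay   { get { return Day != 0; } }

    // Closest full DateTime, defaulting any missing parts to 1.
    public DateTime ToDateTime() {
        return new DateTime(Year, HasMonth ? Month : 1, HasDay ? Day : 1);
    }

    public int ToInt32() { return _yyyymmdd; }   // the value to store in the database

    public int CompareTo(PartialDate other) {    // integer order matches chronological order
        return _yyyymmdd.CompareTo(other._yyyymmdd);
    }
}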
I can't think of a good way of using a single date field.
A problem you would get if you used January as the default month and 1 as the default day, as others have suggested, is: what happens when the user actually picks January? How would you tell a selected January from a defaulted January?
I think you're going to have to store a mask along with the date.
You would only need one bit per part of the date, which is just 6 bits of data.
              M | D | Y | H | Min | S
Month Only    1 | 0 | 0 | 0 | 0   | 0   = 32
Year Only     0 | 0 | 1 | 0 | 0   | 0   = 8
Month+Year    1 | 0 | 1 | 0 | 0   | 0   = 40
AllButMinSec  1 | 1 | 1 | 1 | 0   | 0   = 60
You could put this into a Flag Enum to make it easier to use in code.
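A sketch of what that flags enum could look like (the values match the bit layout above):

[Flags]
public enum DatePartsSpecified {
    None   = 0,
    Second = 1,
    Minute = 2,
    Hour   = 4,
    Year   = 8,
    Day    = 16,
    Month  = 32,

    MonthAndYear = Month | Year,               // 40
    AllButMinSec = Month | Day | Year | Hour   // 60
}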
Well, you could do it via a single date column plus an extra field that says 'IsDateComplete'.
If you only have the date field, then you'll need to encode the "incompleteness" in the date format itself, such that if the date is, say, < 1900, it's considered "Incomplete".
Personally, I'd go with a field on the side that marks it as such. Easier to follow, easier to make decisions on, and it allows for any dates.
It goes without saying, perhaps, that you can just create a date from DateTime.MinValue and then set what you "know".
Of course, my approach doesn't allow you to "know" what you don't know. (That is, you don't know that they've set the month). You could perhaps use a date-format specifier to mask that, and store it alongside as well, but it's potentially getting cumbersome.
Anyway, some thoughts for you.
One option is to use January as the default month, 1 as the default day, and 1900 or something like that as the default year. Incomplete dates would get padded out with those defaults, and incomplete dates would sort before complete ones in the same year.
Another, slightly more complex option is to use -1 for default day and year, and -1, 'NoMonth', or some such as the default month. Pad incomplete dates as above. This may make sorting a little hard depending on how you do it, but it gives you a way of telling which parts of the date are valid.
I know you'd rather have one column but, instead of a single column, you can always have separate columns for day, month and year. It's not very difficult to query against, and it allows any of the components to be null.
Somehow encoding these states in the datetime itself will be harder to query.
What I did when I last solved this problem was to create a custom date type that kept track of which date parts were actually set and provided conversions to and from a DateTime. For storage in the database I used one date field and then boolean/bit flags to keep track of which date components were actually set by the user.
I'm trying to do some DateTime math for various time zones and I want to take daylight saving time into account. Let's say I have a TimeZoneInfo and I've determined the appropriate AdjustmentRule for a given DateTime. Let's also say the particular TimeZoneInfo I'm dealing with has rule.DaylightTransitionStart.IsFixedDateRule == false, so I need to figure out if the given DateTime falls within the start/end TransitionTime.Week values.
This is where I'm getting confused: what does .NET consider a "week"? My first thought was that it probably used something like
DayOfWeek thisMarksWeekBoundaries = Thread.CurrentThread.CurrentUICulture.DateTimeFormat.FirstDayOfWeek;
and went through the calendar assigning days to weeks, incrementing the week every time it crossed a boundary. But if I do this for May 2010 there are 6 week buckets, and the maximum valid value for TransitionTime.Week is 5, so this can't be right.
What's the right way to slice up May 2010?
This article, http://msdn.microsoft.com/en-us/library/system.timezoneinfo.transitiontime.isfixeddaterule.aspx, shows how to handle rules where IsFixedDateRule == false; see the DisplayTransitionInfo example.
I finally realized what's going on; I think the property name "Week" is what threw me off. There may be 6 weeks in May (depending on how you count them), but any particular DayOfWeek shows up at most 5 times. The Week property doesn't really refer to which week the DayOfWeek falls in; it's the nth DayOfWeek of that month, with the magic value 5 meaning "the last one", so the maximum n for a given month is either 4 or 5.
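In code, that interpretation looks roughly like this (an informal sketch of the documented "nth DayOfWeek, 5 means last" rule, not the framework's actual implementation):

// Sketch: resolve a floating-date TransitionTime (IsFixedDateRule == false) for a given year.
static DateTime GetTransitionDate(TimeZoneInfo.TransitionTime transition, int year)
{
    // Find the first occurrence of the target day-of-week in the month.
    var date = new DateTime(year, transition.Month, 1);
    int offset = ((int)transition.DayOfWeek - (int)date.DayOfWeek + 7) % 7;
    date = date.AddDays(offset);

    // Move to the nth occurrence; Week is 1..5, where 5 means "the last one".
    date = date.AddDays(7 * (transition.Week - 1));
    if (date.Month != transition.Month)   // there were only 4 occurrences this month
        date = date.AddDays(-7);

    return date.Add(transition.TimeOfDay.TimeOfDay);   // TransitionTime.TimeOfDay holds just the time part
}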