The following code compiles fine with the comparison operator:
if (dateTimeVariable > SqlDateTime.MinValue) // compiles OK; dateTimeVariable is of type DateTime
{
}
However, the following code fails to compile.
DateTime dateTimeVariable = SqlDateTime.MinValue;
// Compile error: cannot convert source type 'SqlDateTime' to 'DateTime'. Which is obvious.
My question is: why is comparison allowed between SqlDateTime and DateTime types, but not assignment? (Unless the comparison operators are doing some implicit conversion.)
I'm guessing I must be missing something really basic.
There's an implicit conversion in SqlDateTime that takes care of converting a DateTime to an SqlDateTime without any additional work:
public static implicit operator SqlDateTime(DateTime value)
{
return new SqlDateTime(value);
}
// e.g. SqlDateTime mySqlDate = DateTime.Now; (uses the implicit conversion)
What must be happening is that dateTimeVariable is being implicitly converted from a DateTime to an SqlDateTime for the comparison:
if (dateTimeVariable > SqlDateTime.MinValue)
{
// if dateTimeVariable, after conversion to an SqlDateTime, is greater than the
// SqlDateTime.MinValue, this code executes
}
But in the case of the following code, there's nothing that allows you to simply stuff an SqlDateTime into a DateTime variable, so it doesn't allow it.
DateTime dateTimeVariable = SqlDateTime.MinValue; // fails
Cast your initial value and it will compile okay, but there's a chance you're going to lose some valuable information that is part of an SqlDateTime but not a DateTime.
DateTime dateTimeVariable = (DateTime)SqlDateTime.MinValue;
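To see both directions side by side, here's a small self-contained sketch (the class and variable names are just for the demo):

using System;
using System.Data.SqlTypes;

class ConversionDemo
{
    static void Main()
    {
        DateTime now = DateTime.Now;

        // Implicit: SqlDateTime defines an implicit operator from DateTime.
        SqlDateTime sqlNow = now;

        // Explicit: going back requires a cast, because the conversion
        // can overflow or change the value (SqlDateTime has coarser precision).
        DateTime roundTripped = (DateTime)sqlNow;

        Console.WriteLine(sqlNow.Value);
        Console.WriteLine(roundTripped);
    }
}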
This is a question of potential loss of precision. Usually this occurs in the context of "narrowing" versus "widening".
Integers are a subset of numbers. All integers are numbers, some numbers are not integers. Thus, the type "number" is wider than the type "integer".
You can always assign a type to a wider type without losing information.
Narrowing is another matter. To assign 1.3 to an integer you must lose information. This is possible but the compiler won't perform a narrowing conversion unless you explicitly state that this is what you want.
As a result, assignments that require a widening conversion are automatically and implicitly converted, but narrowing assignments require explicit casting or conversion (not all conversions are simple casting).
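For example, the same rule with int and double (a quick sketch):

int i = 42;
double d = i;      // widening: implicit, no information lost

double x = 1.3;
// int j = x;      // narrowing: this line would not compile
int j = (int)x;    // explicit cast required; j == 1, the .3 is gone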
Although SqlDateTime is arguably narrower than DateTime, differences in representation mean that conversions in both directions are potentially lossy. As a result, assigning an SqlDateTime to a DateTime requires an explicit conversion. Strictly speaking, converting a DateTime to an SqlDateTime ought to require an explicit conversion too, but the implicit conversion implemented in the SqlDateTime type (see Grant's answer) makes SqlDateTime behave as though it were wider. I made the mistake of assuming SqlDateTime was wider because that's how it behaves in this case; many kudos to the commenters for picking out this important subtlety.
This implicit conversion business is actually a bit of an issue with VARCHAR columns and ADO.NET's implicitly typed parameters. C# strings are Unicode, so they become NVARCHAR parameters, and comparing one of those to an indexed column of type VARCHAR causes a widening conversion to NVARCHAR (implicit widening conversions happen in T-SQL too). That can prevent the use of the index; the query will still return correct results, but performance will be crippled.
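If you hit this, one workaround is to type the parameter explicitly so ADO.NET sends it as VARCHAR. A rough sketch (the Customers table, LastName column, and the size 50 are made up for the example):

using System.Data;
using System.Data.SqlClient;

static SqlDataReader FindByLastName(SqlConnection connection, string lastName)
{
    var cmd = new SqlCommand(
        "SELECT * FROM Customers WHERE LastName = @lastName", connection);

    // An implicitly typed string parameter would go over as NVARCHAR and
    // force the indexed VARCHAR column to be widened, defeating the index.
    cmd.Parameters.Add("@lastName", SqlDbType.VarChar, 50).Value = lastName;

    return cmd.ExecuteReader();
}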
From MSDN
SqlDateTime Structure
Represents the date and time data ranging in value from January 1, 1753 to December 31, 9999 to an accuracy of 3.33 milliseconds to be stored in or retrieved from a database. The SqlDateTime structure has a different underlying data structure from its corresponding .NET Framework type, DateTime, which can represent any time between 12:00:00 AM 1/1/0001 and 11:59:59 PM 12/31/9999, to the accuracy of 100 nanoseconds. SqlDateTime actually stores the relative difference to 00:00:00 AM 1/1/1900. Therefore, a conversion from "00:00:00 AM 1/1/1900" to an integer will return 0.
Related
I have uncovered an unfortunate side effect: storing and rehydrating a C# DateTime from a SQL Server datetime column alters it slightly, so that the equality operator is no longer valid.
For example, using EF:
FileInfo fi = new FileInfo("file.txt");
Entity.FileTimeUtc = fi.LastWriteTimeUtc;
we find that
(Entity.FileTimeUtc == fi.LastWriteTimeUtc) // true
but if we save that entity and then reload it from SQL Server, we find that
(Entity.FileTimeUtc == fi.LastWriteTimeUtc) // false
I understand that a process of rounding has happened here (if only by a few milliseconds) due to differing internal storage formats between the .NET DateTime and the SQL datetime.
What I am looking for is a process that will reliably emulate this conversion, and align native DateTime values to those which have been stored and rehydrated from a SQL datetime field, to make the equality test valid again.
That is because SQL Server's datetime type counts time in 3- and 4-millisecond "ticks", with some very odd "rounding" rules.
See my answer to the question, "How does SqlDateTime do its precision reduction?" for details on exactly what those rounding rules are.
That will allow you to do the exact same conversion in .Net.
Also, I believe that converting your C#/.NET DateTime values to a System.Data.SqlTypes.SqlDateTime will perform the same conversion (but I can't swear to that; it's been a while since I had to wrangle with it).
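If that does turn out to hold, the round trip through SqlDateTime is a one-liner; a sketch, under that assumption:

using System;
using System.Data.SqlTypes;

static DateTime RoundToSqlDateTime(DateTime value)
{
    // Applies SqlDateTime's 1/300-second rounding, which should match
    // what comes back from a SQL Server datetime column.
    return new SqlDateTime(value).Value;
}

// usage: compare rounded values instead of raw ones, e.g.
// RoundToSqlDateTime(Entity.FileTimeUtc) == RoundToSqlDateTime(fi.LastWriteTimeUtc)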
When I compare a DateTime variable with SqlDateTime.MinValue:
if (StartDate > SqlDateTime.MinValue)
{
// some code
}
I get the following runtime exception if StartDate is earlier than SqlDateTime.MinValue:
SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and
12/31/9999 11:59:59 PM.
This can be easily solved with a small change:
if (StartDate > SqlDateTime.MinValue.Value)
{
// some code
}
I understand that in the first code snippet I'm comparing apples to oranges. What I don't understand is the exception message. It seems like I'm assigning a DateTime value to a SqlDateTime variable.
What am I missing?
The .NET native DateTime type (to be specific, it's a structure) holds a broader range of possible values than the SqlDateTime data type can support. More specifically, a DateTime value can range from 01/01/0001 to 12/31/9999.
When the compiler coerces the types for comparison, it implicitly converts StartDate to an SqlDateTime; if StartDate's value lies outside (below, or 'before' in this context) the range supported by SqlDateTime, that conversion overflows.
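Side by side, the two comparisons resolve differently (a sketch; StartDate is assumed to be a DateTime, as in the question):

// Resolves to SqlDateTime's > operator: StartDate is implicitly converted
// to SqlDateTime first, which throws if StartDate is before 1/1/1753.
if (StartDate > SqlDateTime.MinValue) { }

// Resolves to DateTime's > operator: MinValue.Value is already a DateTime,
// so StartDate is never converted and no overflow can occur.
if (StartDate > SqlDateTime.MinValue.Value) { }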
From SqlDateTime Structure on MSDN:
SqlDateTime structure
Represents the date and time data ranging in value from January 1, 1753 to December 31, 9999 to an accuracy of 3.33 milliseconds to be stored in or retrieved from a database. The SqlDateTime structure has a different underlying data structure from its corresponding .NET Framework type, DateTime, which can represent any time between 12:00:00 AM 1/1/0001 and 11:59:59 PM 12/31/9999, to the accuracy of 100 nanoseconds. SqlDateTime actually stores the relative difference to 00:00:00 AM 1/1/1900. Therefore, a conversion from "00:00:00 AM 1/1/1900" to an integer will return 0.
I have a small problem: when I save a DateTime field as a SQL command parameter it loses precision, often by less than a millisecond.
e.g. The parameter's Value is:
TimeOfDay {16:59:35.4002017}
But its SqlValue is:
TimeOfDay {16:59:35.4000000}
And that's the time that's saved in the database.
Now, I'm not particularly bothered about a couple of microseconds, but it causes problems later on when I'm comparing values: they show up as not equal.
(Also, in some comparisons the type of the field is not known until run time, so I'm not even sure at dev time whether I'll need special DateTime "rounding" logic.)
Is there any easy fix for this when adding the parameter?
You're using SQL Server's datetime type, which is documented with:
Accuracy: Rounded to increments of .000, .003, or .007 seconds
It sounds like you want datetime2:
Precision, scale: 0 to 7 digits, with an accuracy of 100ns. The default precision is 7 digits.
That 100ns precision is the same as .NET's DateTime (1 tick = 100ns).
Or just live with the difference and write methods to round DateTime before comparing - that may end up being simpler.
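If you go the datetime2 route, explicitly typing the parameter is enough. A sketch, assuming the target column is datetime2 and cmd is an existing SqlCommand:

using System;
using System.Data;
using System.Data.SqlClient;

// Explicitly typed as datetime2, the parameter keeps the full
// 100ns tick precision of the .NET DateTime value.
var p = new SqlParameter("@modified", SqlDbType.DateTime2)
{
    Value = DateTime.UtcNow
};
cmd.Parameters.Add(p);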
Try using datetime2; it has better precision.
I have a time object which, if null, is for some reason interpreted as 00:00:00. So I need to test whether it is null, but I can't do that directly, so I need the equivalent of:
if (TimeObject == 00:00:00) {...
What would the format for this if statement be?
First, C# does not support time literals, so 00:00:00 won't make sense to a standard C# compiler.
Second, in order to handle time, you will need to use DateTime or TimeSpan structures.
Third, because these are structures, they can never be null; they have a default value, but DateTime dt = null; will not compile. If you want a nullable struct, use nullable types (thus DateTime? and TimeSpan?); see the sketch below.
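A short sketch with TimeSpan? (the variable name is just illustrative):

using System;

TimeSpan? timeOfDay = null;   // nullable: "no value" is distinct from midnight

if (timeOfDay == null)
{
    // no time was supplied
}
else if (timeOfDay == TimeSpan.Zero)
{
    // an explicit 00:00:00 was supplied
}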
int, float, double, decimal, DateTime, etc. are value types. And I know:
int: Represents a 32-bit signed integer.
float: Represents a single-precision (32-bit) floating-point number.
double: Represents a double-precision (64-bit) floating-point number.
...
But how many bits does DateTime use? And why are all value types in .NET structs?
Based on the documentation, DateTime occupies 64 bits in C#:
Prior to the .NET Framework version 2.0, the DateTime structure
contains a 64-bit field composed of an unused 2-bit field concatenated
with a private Ticks field, which is a 62-bit unsigned field that
contains the number of ticks that represent the date and time. The
value of the Ticks field can be obtained with the Ticks property.
Starting with the .NET Framework 2.0, the DateTime structure contains
a 64-bit field composed of a private Kind field concatenated with the
Ticks field. The Kind field is a 2-bit field that indicates whether
the DateTime structure represents a local time, a Coordinated
Universal Time (UTC), or the time in an unspecified time zone. The
Kind field is used when performing time conversions between time
zones, but not for time comparisons or arithmetic. The value of the
Kind field can be obtained with the Kind property.
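You can poke at both fields from code; a small sketch:

using System;

// DateTime packs into 64 bits: a 62-bit tick count plus a 2-bit Kind.
// MaxValue's tick count needs exactly 62 bits, leaving room for Kind.
Console.WriteLine(Convert.ToString(DateTime.MaxValue.Ticks, 2).Length); // 62

DateTime now = DateTime.UtcNow;
Console.WriteLine(now.Ticks); // 100ns intervals since 1/1/0001
Console.WriteLine(now.Kind);  // Utc, Local, or Unspecified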