More on Implied Types - C#

As a follow-up to the question "what is the purpose of double implying?"...
Also, I read in an article a while back (no, I don't remember the link) that it is inappropriate to write decimal dollars = 0.00M;. It stated that the appropriate way was decimal dollars = 0;, and that this was appropriate for all numeric types. Is this incorrect, and why? If it is correct, what is special about 0?

Well, for any integer it's okay to use an implicit conversion from int to decimal, so that's why it compiles. (Zero has some other funny properties... you can implicitly convert from any constant zero expression to any enum type. It shouldn't work with constant expressions of type double, float and decimal, but it happens to do so in the MS compiler.)
However, you may want to use 0.00m if that's the precision you want to specify. Unlike double and float, decimal isn't normalized - so 0m, 0.0m and 0.00m have different representations internally - you can see that when you call ToString on them. (The integer 0 will be implicitly converted to the same representation as 0m.)
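For example, a minimal sketch of those differing representations (the comments show the expected output):
Console.WriteLine(0m);    // "0"
Console.WriteLine(0.0m);  // "0.0"
Console.WriteLine(0.00m); // "0.00"
decimal fromInt = 0;
Console.WriteLine(fromInt); // "0" - same representation as 0m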
So the real question is, what do you want to represent in your particular case?

How do I convert Vector3 into float? [duplicate]

Example:
float timeRemaining = 0.58f;
Why is the f required at the end of this number?
Your declaration of a float contains two parts:
It declares that the variable timeRemaining is of type float.
It assigns the value 0.58 to this variable.
The problem occurs in part 2.
The right-hand side is evaluated on its own. According to the C# specification, a number containing a decimal point that doesn't have a suffix is interpreted as a double.
So we now have a double value that we want to assign to a variable of type float. In order to do this, there must be an implicit conversion from double to float. There is no such conversion, because you may (and in this case do) lose information in the conversion.
The reason is that the value used by the compiler isn't really 0.58, but the floating-point value closest to 0.58, which is 0.57999999999999996003196800532... for double and exactly 0.579999983310699462890625 for float.
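You can see both values with round-trip format strings (a quick sketch; "G17" and "G9" force enough digits to expose what is actually stored):
Console.WriteLine(0.58.ToString("G17")); // 0.57999999999999996
Console.WriteLine(0.58f.ToString("G9")); // 0.579999983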
Strictly speaking, the f is not required. You can avoid having to use the f suffix by casting the value to a float:
float timeRemaining = (float)0.58;
The f is required because there are several numeric types that the compiler could use to represent the value 0.58: float, double and decimal. Unless you are OK with the compiler picking one for you, you have to disambiguate.
The documentation for double states that if you do not specify the type yourself the compiler always picks double as the type of any real numeric literal:
By default, a real numeric literal on the right side of the assignment
operator is treated as double. However, if you want an integer number
to be treated as double, use the suffix d or D.
Appending the suffix f creates a float; the suffix d creates a double; the suffix m creates a decimal. All of these also work in uppercase.
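For instance, with var the suffix alone determines the compile-time type (a quick sketch):
var f = 0.58F; // float
var d = 0.58D; // double (same as plain 0.58)
var m = 0.58M; // decimal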
However, this is still not enough to explain why this does not compile:
float timeRemaining = 0.58;
The missing half of the answer is that the conversion from the double 0.58 to the float timeRemaining potentially loses information, so the compiler refuses to apply it implicitly. If you add an explicit cast the conversion is performed; if you add the f suffix then no conversion will be needed. In both cases the code would then compile.
The problem is that .NET, in order to allow some types of implicit operations involving float and double, needed either to explicitly specify what should happen in all scenarios involving mixed operands, or else to allow implicit conversions between the types in one direction only. Microsoft chose to follow the lead of Java in allowing the direction which occasionally favors precision, but frequently sacrifices correctness and generally creates hassle.
In almost all cases, taking the double value which is closest to a particular numeric quantity and assigning it to a float will yield the float value which is closest to that same quantity. There are a few corner cases, such as the value 9,007,199,791,611,905; the best float representation would be 9,007,200,328,482,816 (which is off by 536,870,911), but casting the best double representation (i.e. 9,007,199,791,611,904) to float yields 9,007,199,254,740,992 (which is off by 536,870,913). In general, though, converting the best double representation of some quantity to float will either yield the best possible float representation, or one of two representations that are essentially equally good.
Note that this desirable behavior applies even at the extremes; for example, the best float representation for the quantity 10^308 matches the float representation achieved by converting the best double representation of that quantity. Likewise, the best float representation of 10^309 matches the float representation achieved by converting the best double representation of that quantity.
Unfortunately, conversions in the direction that doesn't require an explicit cast are seldom anywhere near as accurate. Converting the best float representation of a value to double will seldom yield anything particularly close to the best double representation of that value, and in some cases the result may be off by hundreds of orders of magnitude (e.g. converting the best float representation of 10^40 to double will yield a value that compares greater than the best double representation of 10^300).
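A short illustration of how lossy that implicit direction is in practice (a minimal sketch; the printed digits are from a recent .NET runtime and may vary slightly on older ones):
float f = 0.1f;                         // the best float for 0.1
double d = f;                           // implicit float-to-double conversion
Console.WriteLine(d.ToString("G17"));   // 0.10000000149011612 - far from the best double
Console.WriteLine(0.1.ToString("G17")); // 0.10000000000000001 - the best double for 0.1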
Alas, the conversion rules are what they are, so one has to live with using silly typecasts and suffixes when converting values in the "safe" direction, and be careful of implicit typecasts in the dangerous direction which will frequently yield bogus results.

Does (int)myDouble ever differ from (int)Math.Truncate(myDouble)?

Is there any reason I should prefer one over the other?
Math.Truncate is intended for when you need to keep your result as a double with no fractional part. If you want to convert it to an int, just use the cast directly.
Edit: For reference, here is the relevant documentation from the “Explicit Numeric Conversions Table”:
When you convert from a double or float value to an integral type, the value is truncated.
As pointed out by Ignacio Vazquez-Abrams, (int)myDouble will fail in the same way as (int)Math.Truncate(myDouble) when myDouble is too large.
So there is no difference in output, but (int)myDouble is faster, since it avoids the extra method call.
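For example (a small sketch; both round toward zero, including for negative values):
double myDouble = -3.9;
int a = (int)myDouble;                // -3: the cast truncates toward zero
int b = (int)Math.Truncate(myDouble); // -3: Truncate does the same, via an extra method call
Console.WriteLine(a == b);            // True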

what is the purpose of double implying?

for example:
const decimal dollars = 25.50M;
why do we have to add that M?
why not just do:
const decimal dollars = 25.50;
since it already says decimal, doesn't it imply that 25.50 is a decimal?
No.
25.50 is a standalone expression of type double, not decimal.
The compiler will not see that you're trying to assign it to a decimal variable and interpret it as a decimal.
Except for lambda expressions, anonymous methods, and the conditional operator, all C# expressions have a fixed type that does not depend at all on context.
Imagine what would happen if the compiler did what you want it to, and you called Math.Max(1, 2).
Math.Max has overloads that take int, double, and decimal. Which one would it call?
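To see the fixed-type rule in action (a minimal sketch; the commented-out line shows the compiler error you would get):
decimal ok = 25.50M;    // decimal literal: compiles
// decimal bad = 25.50; // error CS0664: use an 'M' suffix to create a literal of this type
double d = 25.50;       // the unsuffixed literal is a double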
There are two important concepts to understand in this situation.
Literal Values
Implicit Conversion
Essentially what you are asking is whether a literal value can be implicitly converted between 2 types. The compiler will actually do this for you in some cases when there would be no loss in precision. Take this for example:
long n = 1000; // Assign an Int32 literal to an Int64.
This is possible because a long (Int64) contains a larger range of values compared to an int (Int32). For your specific example it is possible to lose precision. Here are the drastically different ranges for decimal and double.
Decimal: ±1.0 × 10^−28 to ±7.9 × 10^28
Double: ±5.0 × 10^−324 to ±1.7 × 10^308
With this knowledge it becomes clear why such an implicit conversion would be a bad idea. Here is a list of the implicit conversions that the C# compiler currently supports. I highly recommend you do a bit of light reading on the subject.
Implicit Numeric Conversions Table
Note also that due to the inner details of how doubles and decimals are defined, slight rounding errors can appear in your assignments or calculations. You need to know about how floats, doubles, and decimals work at the bit level to always make the best choices.
For example, a double cannot precisely store the value 25.10, but a decimal can.
A double can, however, precisely store the value 25.50, for fun binary-encoding reasons.
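You can see the difference directly (a small sketch; "G17" forces enough digits to expose the stored double):
Console.WriteLine(25.10.ToString("G17")); // 25.100000000000001 - not exact in binary
Console.WriteLine(25.50.ToString("G17")); // 25.5 - exact, since 25.5 is 11001.1 in binary
Console.WriteLine(25.10m);                // 25.10 - decimal stores it exactly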
Decimal structure

Splitting a double in two, C#

I'm attempting to use a double to represent a bit of a dual-value type in a database that must sometimes accept two values, and sometimes accept only one (int).
So the field is a float in the database, and in my C# code it is a double (since mapping it via EF makes it a double for some reason...).
So basically what I want to do... let's say 2.5 is the value. I want to separate that out into 2 and 5. Is there any implicit way to go about this?
Like this:
int intPart = (int)value;
double fractionalPart = value - intPart;
If you want fractionalPart to be an int, you can multiply it by 10n, where n is the number of digits you want, and cast to int.
However, beware of precision loss.
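For the asker's one-digit case, a sketch of that approach (Math.Round guards against the fractional part coming out as 0.4999... due to binary representation):
double value = 2.5;
int intPart = (int)value;                                      // 2
int fractionalDigit = (int)Math.Round((value - intPart) * 10); // 5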
However, this is extremely poor design; you should probably make two fields.
SQL Server's float type is an 8-byte floating-point value, equivalent to C#'s double.
You should be able to explicitly cast the double to an int to get the first part, subtract that from the number, and multiply the remainder by ten to get the second part.

What is the best practice to make division return double in C#

In C#, when you want to divide the result of a method such as the one below, what is the best way to force it to return a double value rather than the default integer?
(int)Math.Ceiling((double)(System.DateTime.DaysInMonth(2009, 1) / 7));
As you can see I need the division to return a double so I can use the ceiling function.
A division of two int numbers returns an int, truncating the fractional part. This is generally true for other data types as well: arithmetic operations don't change the type of their operands.
To enforce a certain return type, you must therefore convert the operands appropriately. In your particular case, it's actually sufficient to convert one of the operands to double: that way, C# will perform the conversion for the other operand automatically.
You've got a choice: you can explicitly convert either operand. However, since the second operand is a literal, it's better just to make that literal the correct type directly.
This can be done either with a type suffix (d in the case of double) or by writing a decimal point into the literal. The latter is generally preferred. In your case:
(int)Math.Ceiling(System.DateTime.DaysInMonth(2009, 1) / 7.0);
Notice that this decimal point notation always yields a double. To make a float, you need to use its type suffix: 7f.
This behaviour of the fundamental operators is the same for nearly all languages out there, by the way. One notable exception: VB, where the division operator generally yields a Double. There's a special integer division operator (\) if that conversion is not desired. Another exception concerns C++ in a weird way: the difference between two pointers of the same type is a ptrdiff_t. This makes sense, but it breaks the schema that an operator always yields the same type as its operands. In particular, subtracting two T* values does not yield a T*.
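Putting it together for this question (a quick sketch; January 2009 has 31 days):
Console.WriteLine(31 / 7);                      // 4 - int division truncates
Console.WriteLine(31 / 7.0);                    // about 4.4286 - double division
Console.WriteLine((int)Math.Ceiling(31 / 7.0)); // 5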
Change the 7 to a double:
(int) Math.Ceiling(System.DateTime.DaysInMonth(2009, 1) / 7.0);
just divide with a literal double:
(int)Math.Ceiling((System.DateTime.DaysInMonth(2009, 1) / 7.0))
As far as I know, you can't force a function to return a different type, so casting the result is your best bet. Casting the result of the function to a double and then dividing should do the trick.
To expand upon Konrad's answer...
Changing 7 to 7.0, 7D, or 7M will all get you the answer you want as well.
For a completely different approach that avoids casting and floating-point math altogether...
int result = val1 / val2;
if (val1 % val2 != 0) result++;
So, in your case...
int numDaysInMonth = System.DateTime.DaysInMonth(2009, 1);
int numWeeksInMonth = numDaysInMonth / 7;
if (numDaysInMonth % 7 != 0) numWeeksInMonth++;
This approach is quite verbose, but there might be some special cases where it is preferable. Technically, there should be a slight performance advantage to the modulus approach, but you'll be hard-pressed to measure it.
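As a hedged aside (not from the original answers): if the operands are known to be positive, the same ceiling can be computed without a branch by adding divisor-minus-one before dividing:
int numWeeks = (numDaysInMonth + 6) / 7; // ceiling of numDaysInMonth / 7 for positive values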
In your case, I'd stick with the accepted answer :-)
