Does (int)myDouble ever differ from (int)Math.Truncate(myDouble)?
Is there any reason I should prefer one over the other?
Math.Truncate is intended for when you need to keep your result as a double with no fractional part. If you want to convert it to an int, just use the cast directly.
Edit: For reference, here is the relevant documentation from the “Explicit Numeric Conversions Table”:
When you convert from a double or float value to an integral type, the value is truncated.
As pointed out by Ignacio Vazquez-Abrams, (int)myDouble will fail in the same way as (int)Math.Truncate(myDouble) when myDouble is too large.
So there is no difference in output, but (int)myDouble works faster.
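For illustration, here is a small sketch (a plain console snippet, not part of the answers above) showing that both forms truncate toward zero and agree on the result:
double myDouble = 3.9;
Console.WriteLine((int)myDouble);                 // 3 (the cast truncates toward zero)
Console.WriteLine((int)Math.Truncate(myDouble));  // 3 (same value)
double negative = -3.9;
Console.WriteLine((int)negative);                 // -3 (truncation, not flooring)
Console.WriteLine((int)Math.Truncate(negative));  // -3
Console.WriteLine(Math.Truncate(myDouble));       // 3, but still a double with no fractional part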
Related
I have a NumericUpDown widget on my winform declared as numEnemyDefence. I want to use this to perform basic math on a variable:
damagePerHit -= Double.Parse(numEnemyDefence.Value);
Where damagePerHit is a double.
However, I am returned with the error of:
Cannot convert decimal to string.
Where is the string coming from? And why is the parse not working?
Double.Parse expects its argument to be a string. NumericUpDown.Value is a decimal. The C# compiler rejects your code because it doesn't make automatic conversions for you from the decimal type to the string type. And this is a good thing because it prevents a lot of subtle errors.
You can simply cast the decimal value to a double:
damagePerHit -= (double)numEnemyDefence.Value;
I also recommend changing (if possible) your variable damagePerHit to a plain decimal if you don't need the range of a double value.
By the way, these kinds of operations are also a source of other head scratching when you hit the floating-point inaccuracy problem.
Use the Convert class and the Convert.ToDouble method, which has an overload that accepts a decimal. For example:
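(A minimal sketch reusing the names from the question above; numEnemyDefence and damagePerHit are assumed to be the control and variable already declared there.)
damagePerHit -= Convert.ToDouble(numEnemyDefence.Value);  // Convert.ToDouble(decimal) converts without an explicit cast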
const decimal dollars = 25.50M;
why do we have to add that M?
why not just do:
const decimal dollars = 25.50;
since it already says decimal, doesn't it imply that 25.50 is a decimal?
No.
25.50 is a standalone expression of type double, not decimal.
The compiler will not see that you're trying to assign it to a decimal variable and interpret it as a decimal.
Except for lambda expressions, anonymous methods, and the conditional operator, all C# expressions have a fixed type that does not depend at all on context.
Imagine what would happen if the compiler did what you want it to, and you called Math.Max(1, 2).
Math.Max has overloads that take int, double, and decimal. Which one would it call?
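To make that concrete, here is a small sketch (not part of the original answer) showing that each literal carries its own type, and that the suffix is what selects the decimal overload:
const decimal dollars = 25.50M;        // OK: the M suffix makes the literal a decimal
// const decimal broken = 25.50;       // compile error: 25.50 is a double literal and there is no implicit double-to-decimal conversion
Console.WriteLine(Math.Max(1, 2));     // int overload: both literals are ints
Console.WriteLine(Math.Max(1.0, 2));   // double overload: 1.0 is a double, the int 2 converts implicitly
Console.WriteLine(Math.Max(1m, 2));    // decimal overload: 1m forces decimal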
There are two important concepts to understand in this situation.
Literal Values
Implicit Conversion
Essentially what you are asking is whether a literal value can be implicitly converted between two types. The compiler will actually do this for you in some cases where there would be no loss of precision. Take this for example:
long n = 1000; // Assign an Int32 literal to an Int64.
This is possible because a long (Int64) contains a larger range of values compared to an int (Int32). For your specific example it is possible to lose precision. Here are the drastically different ranges for decimal and double.
Decimal: ±1.0 × 10^−28 to ±7.9 × 10^28
Double: ±5.0 × 10^−324 to ±1.7 × 10^308
With this knowledge it becomes clear why an implicit conversion would be a bad idea here. Here is a list of the implicit conversions that the C# compiler currently supports; I highly recommend you do a bit of light reading on the subject.
Implicit Numeric Conversions Table
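A few of those cases, as a rough sketch:
int i = 1000;
long l = i;               // implicit int -> long: never loses information, so the compiler allows it
double d = i;             // implicit int -> double
decimal m = i;            // implicit int -> decimal
// decimal bad = d;       // compile error: double -> decimal must be an explicit cast
decimal ok = (decimal)d;  // explicit conversion; can throw OverflowException for very large doubles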
Note also that due to the inner details of how doubles and decimals are defined, slight rounding errors can appear in your assignments or calculations. You need to know about how floats, doubles, and decimals work at the bit level to always make the best choices.
For example, a double cannot precisely store the value 25.10, but a decimal can.
A double can, however, precisely store the value 25.50, for fun binary-encoding reasons: 0.50 is a power of two, so it has an exact, finite binary representation.
Decimal structure
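A quick way to see the 25.10 versus 25.50 point above (a rough sketch; the exact digits printed can vary slightly by runtime):
double d = 25.10;
Console.WriteLine(d.ToString("G17"));      // something like 25.100000000000001, not exactly 25.10
Console.WriteLine(25.10m);                 // 25.10, stored exactly as a decimal
Console.WriteLine(25.50.ToString("G17"));  // 25.5, exact because 0.5 is a power of two
Console.WriteLine(0.1 + 0.2 == 0.3);       // False: the classic double rounding surprise
Console.WriteLine(0.1m + 0.2m == 0.3m);    // True for decimal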
In the following snippet:
long frameRate = (long)(_frameCounter / this._stopwatch.Elapsed.TotalSeconds);
Why is there an additional (long)(...) to the right of the assignment operator?
The division creates a double-precision floating point value (since TimeSpan.TotalSeconds is a double), so the cast truncates the resulting value to be integral instead of floating point. You end up with an approximate but whole number of frames-per-second instead of an exact answer with fractional frames-per-second.
If frameRate is used for display or logging, the cast might just be to make the output look nicer.
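Here is a sketch with made-up numbers to show the truncation (the names mirror the snippet in the question, but the values are only illustrative):
long frameCounter = 605;                       // stands in for _frameCounter
double elapsedSeconds = 10.0;                  // stands in for this._stopwatch.Elapsed.TotalSeconds
double exact = frameCounter / elapsedSeconds;  // 60.5, because long / double is a double division
long frameRate = (long)exact;                  // 60, the explicit cast discards the fractional part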
It's an explicit conversion (cast) that converts the result of the division operation to a long.
See: Casting and Type Conversions
The type of the result of a calculation depends on the types of the variables being used in it. If the compiler determines that the result type is not a long because of the types being operated on, then you need to cast your result.
Note that casting your result may incur a loss of accuracy or of value. The bracketed cast (long) is an explicit cast and will not generate any errors if, say, you try to fit 1.234 into a long, which can only store 1.
In my opinion there could be a few reasons:
1. At least one of the types in the expression is not an integral type (I don't think so).
2. The developer wanted to highlight that the result is of type long (this makes the result type clear for the reader -- a good reason).
3. The developer was not sure what the result type of the expression would be and wanted to make sure it ends up as a long (better to make sure than to hope it will work).
I believe it was 3 :).
As a follow-up to the question above about why the literal is treated as a double...
Also, I read in an article a while back (no I don't remember the link) that it is inappropriate to do decimal dollars = 0.00M;. It stated that the appropriate way was decimal dollars = 0;. It also stated that this was appropriate for all numeric types. Is this incorrect, and why? If so, what is special about 0?
Well, for any integer it's okay to use an implicit conversion from int to decimal, so that's why it compiles. (Zero has some other funny properties... you can implicitly convert from any constant zero expression to any enum type. It shouldn't work with constant expressions of type double, float and decimal, but it happens to do so in the MS compiler.)
However, you may want to use 0.00m if that's the precision you want to specify. Unlike double and float, decimal isn't normalized - so 0m, 0.0m and 0.00m have different representations internally - you can see that when you call ToString on them. (The integer 0 will be implicitly converted to the same representation as 0m.)
So the real question is, what do you want to represent in your particular case?
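You can see the scale difference directly (a quick console sketch):
decimal a = 0m;
decimal b = 0.0m;
decimal c = 0.00m;
Console.WriteLine(a);                 // 0
Console.WriteLine(b);                 // 0.0
Console.WriteLine(c);                 // 0.00
Console.WriteLine(a == b && b == c);  // True: equal in value, different only in scale
decimal fromInt = 0;                  // implicit int-to-decimal conversion
Console.WriteLine(fromInt);           // 0, the same representation as 0m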
I am using Convert.ChangeType() to convert from Object (which I get from DataBase) to a generic type T. The code looks like this:
T element = (T)Convert.ChangeType(obj, typeof(T));
return element;
and this works great most of the time. However, I have discovered that if I try to convert something as simple as the return value of the following SQL query
select 3.2
the above code (T being double) won't return 3.2, but 3.2000000000000002. I can't figure out why this is happening, or how to fix it. Please help!
What you're seeing is an artifact of the way floating-point numbers are represented in memory. There's quite a bit of information available on exactly why this is, but this paper is a good one. This phenomenon is why you can end up with seemingly anomalous behavior. A double or single should never be displayed to the user unformatted, and you should avoid equality comparisons like the plague.
If you need numbers that stay accurate to a greater level of precision (e.g., when representing currency values), then use decimal.
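Here is a minimal reproduction sketch without the database (the decimal literal 3.2m stands in for the value coming back from the query; the exact digits printed may vary by runtime):
object fromDb = 3.2m;                                             // assumed stand-in for the value read from SQL
double d = (double)Convert.ChangeType(fromDb, typeof(double));
Console.WriteLine(d.ToString("G17"));                             // likely 3.2000000000000002, the nearest representable double
Console.WriteLine(d.ToString("F2"));                              // 3.20, formatted for display as expected
decimal m = (decimal)Convert.ChangeType(fromDb, typeof(decimal));
Console.WriteLine(m);                                             // 3.2, decimal keeps the exact value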
This is probably because of floating-point arithmetic. You should probably use decimal instead of double.
It is not a problem with Convert. Internally, the double type represents real numbers as binary fractions, and many decimal values (such as 3.2) have no exact binary representation; that is why you get such a result. Depending on your purpose, use:
Either decimal instead of double,
Or precise formatting, e.g. {0:F2},
Or Math.Floor/Math.Ceiling.