Example:
float timeRemaining = 0.58f;
Why is the f required at the end of this number?
Your declaration of a float contains two parts:
It declares that the variable timeRemaining is of type float.
It assigns the value 0.58 to this variable.
The problem occurs in part 2.
The right-hand side is evaluated on its own. According to the C# specification, a number containing a decimal point that doesn't have a suffix is interpreted as a double.
So we now have a double value that we want to assign to a variable of type float. In order to do this, there must be an implicit conversion from double to float. There is no such conversion, because you may (and in this case do) lose information in the conversion.
The reason is that the value used by the compiler isn't really 0.58, but the floating-point value closest to 0.58, which is 0.57999999999999996003... for double and exactly 0.579999983310699462890625 for float.
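You can see the stored values for yourself. A minimal sketch (the digit strings in the comments are what round-trip formatting should print, give or take runtime differences):
double d = 0.58;
float f = 0.58f;
Console.WriteLine(d.ToString("G17")); // 0.57999999999999996
Console.WriteLine(f.ToString("G9"));  // 0.579999983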
Strictly speaking, the f is not required. You can avoid having to use the f suffix by casting the value to a float:
float timeRemaining = (float)0.58;
Because there are several numeric types that the compiler can use to represent the value 0.58: float, double and decimal. Unless you are OK with the compiler picking one for you, you have to disambiguate.
The documentation for double states that if you do not specify the type yourself the compiler always picks double as the type of any real numeric literal:
By default, a real numeric literal on the right side of the assignment
operator is treated as double. However, if you want an integer number
to be treated as double, use the suffix d or D.
Appending the suffix f creates a float; the suffix d creates a double; the suffix m creates a decimal. All of these also work in uppercase.
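For example, the same digits produce three different types depending on the suffix:
float f = 0.58f;   // float
double d = 0.58d;  // double (the suffix is optional here, since double is the default)
decimal m = 0.58m; // decimal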
However, this is still not enough to explain why this does not compile:
float timeRemaining = 0.58;
The missing half of the answer is that the conversion from the double 0.58 to the float timeRemaining potentially loses information, so the compiler refuses to apply it implicitly. If you add an explicit cast the conversion is performed; if you add the f suffix then no conversion will be needed. In both cases the code would then compile.
The problem is that .NET, in order to allow some implicit operations involving float and double, needed either to specify explicitly what should happen in every scenario involving mixed operands, or to allow implicit conversions between the types in one direction only. Microsoft chose to follow the lead of Java and allow the direction that occasionally favors precision, but frequently sacrifices correctness and generally creates hassle.
In almost all cases, taking the double value which is closest to a particular numeric quantity and assigning it to a float will yield the float value which is closest to that same quantity. There are a few corner cases, such as the value 9,007,199,791,611,905; the best float representation would be 9,007,200,328,482,816 (which is off by 536,870,911), but casting the best double representation (i.e. 9,007,199,791,611,904) to float yields 9,007,199,254,740,992 (which is off by 536,870,913). In general, though, converting the best double representation of some quantity to float will either yield the best possible float representation, or one of two representations that are essentially equally good.
Note that this desirable behavior applies even at the extremes; for example, the best float representation for the quantity 10^308 (infinity, since the quantity overflows float's range) matches the float representation achieved by converting the best double representation of that quantity. Likewise, the best float representation of 10^309 (a quantity so large that even its best double representation is infinity) matches the float representation achieved by converting the best double representation of that quantity.
Unfortunately, conversions in the direction that doesn't require an explicit cast are seldom anywhere near as accurate. Converting the best float representation of a value to double will seldom yield anything particularly close to the best double representation of that value, and in some cases the result may be off by hundreds of orders of magnitude (e.g. converting the best float representation of 10^40 to double will yield a value that compares greater than the best double representation of 10^300).
Alas, the conversion rules are what they are, so one has to live with using silly typecasts and suffixes when converting values in the "safe" direction, and be careful of implicit typecasts in the dangerous direction which will frequently yield bogus results.
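A small sketch of how the "dangerous" direction bites (the printed digits are what I'd expect for the float nearest 0.1; the essential point is the failed comparison):
float f = 0.1f;  // the float closest to 0.1
double d = f;    // implicit float-to-double conversion, no cast required
Console.WriteLine(d == 0.1);          // False - d is the double closest to the float, not to 0.1
Console.WriteLine(d.ToString("G17")); // 0.10000000149011612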
So I don't really get the conversion from a double to a float, e.g.:
float wow = 1.123562641346;
float wow = 1.123562641346f;
The first one is a double being saved in a variable that has memory reserved for a float, and it can't implicitly convert to a float, so it gives an error.
The second one has a float on the right-hand side being saved inside a float variable. What I don't get is that the second one gives exactly the same result as this:
float wow = (float) 1.123562641346;
I mean, is the second one exactly the same as the (float) cast? Does the "f" just stand for "explicitly convert to a float"?
If it doesn't mean "explicitly convert to float", then I don't know why it doesn't give an error, since there isn't an implicit conversion for it.
I really can't find any resources that explain this. The only answers I can find say that the "f" just means the value is a float data type, but that still doesn't explain why I can give it a number with 13 digits and it gets converted to the roughly 7 significant digits a float can hold, while the first version doesn't work and isn't converted automatically.
Implicitly converting a double to a float is potentially a data-losing operation, so the compiler makes it an error.
Explicitly converting it means that the programmer has taken control, so it does not provoke an error.
Note that in your example, the double value does indeed lose data when it's converted to float as the following demonstrates:
double d = 1.123562641346;
Console.WriteLine(d.ToString("f16")); // 1.1235626413460000
float f1 = (float)1.123562641346;
Console.WriteLine(f1.ToString("f16")); // 1.1235626935958862
float f2 = 1.123562641346f;
Console.WriteLine(f2.ToString("f16")); // 1.1235626935958862
The compiler is trying to prevent the programmer from writing code that causes accidental data loss.
Note that it does NOT warn that float f2 = 1.123562641346f; is trying to initialise f2 with a value that it cannot actually represent. The same thing can happen with double initialisations - the compiler won't warn about assigning a double that can't actually be represented exactly.
The numeric value on the right of the "=" when initialising a floating point number is known as a "real-literal".
The C# Standard says this about converting the value of a real-literal to a floating point value:
The value of a real literal of type float or double is determined by
using the IEEE “round to nearest” mode.
This rounding is performed without provoking a compile error.
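For example, both of these compile silently even though neither value is exactly representable in binary (a small sketch; the comments show the expected nearest values):
double d = 0.1;  // 0.1 has no exact binary representation
Console.WriteLine(d.ToString("G17")); // 0.10000000000000001
float f = 0.1f;  // the same quantity, rounded to the nearest float instead
Console.WriteLine(f.ToString("G9"));  // 0.100000001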
Your understanding is correct: the f at the end of the number indicates that it's a float, so it will be treated as a float, and when you assign this float value to a float variable, you will not get conversion errors.
If there is no f at the end of the number having decimals, then, by default, the value is handled as a double and once you assign this double value to a float variable, you get an error because of potential data loss.
Read more here: https://answers.unity.com/questions/423675/why-is-there-sometimes-an-f-behinf-a-number.html
The value 0.105700679f should be convertible precisely to decimal. decimal clearly is able to hold this value precisely:
decimal d = 0.105700679m;
Console.WriteLine(d); //0.105700679
float also is able to hold the value precisely:
float f = 0.105700679f;
Console.WriteLine(f == 0.105700679f); //True
Console.WriteLine(f == 0.1057007f); //False
Console.WriteLine(f.ToString("R")); //Round-trip representation, 0.105700679
(Note that float.ToString() seems to drop precision; I just asked about that as well.)
The IEEE 754 converter at https://www.h-schmidt.net/FloatConverter/IEEE754.html shows the same stored value. It seems the value really is stored like that: I am seeing it right now in the debugger, and I received it over the network as an IEEE float. This value exists!
But when I convert from float to decimal, precision is dropped:
float f = 0.105700679f;
decimal d = (decimal)f;
Console.WriteLine(d); //0.1057007
Console.WriteLine(d.ToString("F15")); //0.105700700000000
Console.WriteLine(((double)d).ToString("R")); //0.1057007
I do understand that floating point numbers are imprecise. But here I see no reason for a loss of information. This is on .NET 4.7.1. How can I convert from float to decimal and preserve precision in all cases where doing so is possible?
This is important to me because I am processing financial market data and joining data sources based on price. Data is given to me as a float over a 3rd party network protocol. I need to import that float to a decimal representation.
Try converting f to double and then converting that to decimal.
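In code, that suggestion would look something like this (a sketch; the extra digits survive because the documented double-to-decimal conversion keeps up to 15 significant digits, whereas the direct cast from float keeps far fewer):
float f = 0.105700679f;
decimal direct = (decimal)f;            // direct float-to-decimal cast
decimal viaDouble = (decimal)(double)f; // widen to double first, then convert
Console.WriteLine(direct);    // 0.1057007
Console.WriteLine(viaDouble); // 0.105700679123402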
I suspect you are seeing shortcomings in .NET.
Let’s look at some of the code in your question line by line. In float f = 0.105700679f;, the decimal numeral “0.105700679” is converted to 32-bit binary floating-point. The result of this is the number 0.105700679123401641845703125.
Next is Console.WriteLine(f == 0.105700679f);. This compares f to the value represented by 0.105700679f. Since the f suffix denotes a float type, 0.105700679f represents the decimal numeral “0.105700679” converted to 32-bit binary floating-point. So of course it has the same value as it did before, and the test for equality returns true. You have not tested whether f is equal to 0.105700679; you have tested whether it is equal to 0.105700679f, and it is.
Then we have decimal d = (decimal)f;. Based on the results you are seeing, it appears to me this conversion produces a number with only seven decimal digits, .1057007. I presume Microsoft has decided that, because a float is only “capable” of holding seven decimal digits, that only seven should be produced when converting to decimal. (This is both a false understanding of what the value of a binary floating-point number represents and an incorrect number. A conversion from decimal to float and back is only guaranteed to preserve six decimal digits, and a conversion from float to decimal and back requires nine decimal digits to preserve the float value. So seven is just wrong.)
If there is a solution to your problem, it is to convert f to decimal by some means other than the cast (decimal) f. I do not know C#, so I cannot say what the solution should be. I suggest trying to convert to double first and then decimal. Quite likely C# will convert float to double without changing the value, and then the conversion to decimal will produce more decimal digits. Another possibility could be converting f to a string with the number of decimal digits you desire and then converting the string to a decimal number.
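The string route might look like this (a sketch; "R" asks for a round-trippable representation of the float):
using System.Globalization;

float f = 0.105700679f;
decimal viaString = decimal.Parse(f.ToString("R"), CultureInfo.InvariantCulture);
Console.WriteLine(viaString); // 0.105700679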
Also, you say the data is coming via a third-party network protocol. It appears the protocol is incapable of representing the actual values it is supposed to be communicating. That is a serious defect that you should complain to the third party about. I know that may seem futile, but it should be done. Also, it casts doubt on your need to convert the float value to 0.105700679. Most eight-digit decimal numbers in the origin of this data could not survive the conversion to float and back to decimal without change. So, even if you are able to convert float to eight decimal digits, most of the results will differ from the original pre-transport values. E.g., if the original number were 0.105700680, it is changed to 0.105700679123401641845703125 when converted to a float to be sent over the network. So the receiver receives 0.105700679123401641845703125, and it is impossible for them to know the original number was 0.105700680 rather than 0.105700679.
The C# float data type has a precision of only about seven decimal digits.
float f = 0.105700679f;
decimal d = (decimal)f;
Console.WriteLine(d); // 0.1057007, so that result is expected
Read more: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/float
For example:
const decimal dollars = 25.50M;
Why do we have to add that M?
Why not just do:
const decimal dollars = 25.50;
Since it already says decimal, doesn't it imply that 25.50 is a decimal?
No.
25.50 is a standalone expression of type double, not decimal.
The compiler will not see that you're trying to assign it to a decimal variable and interpret it as a decimal.
Except for lambda expressions, anonymous methods, and the conditional operator, all C# expressions have a fixed type that does not depend at all on context.
Imagine what would happen if the compiler did what you want it to, and you called Math.Max(1, 2).
Math.Max has overloads that take int, double, and decimal. Which one would it call?
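For example, all three of these overloads really exist on System.Math, and each call below binds to a different one purely because the literals have fixed types:
Console.WriteLine(Math.Max(1, 2));     // int overload
Console.WriteLine(Math.Max(1.0, 2.0)); // double overload
Console.WriteLine(Math.Max(1m, 2m));   // decimal overload
// If 25.50 had no fixed type, Math.Max(25.50, 26.00) would be ambiguous.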
There are two important concepts to understand in this situation.
Literal Values
Implicit Conversion
Essentially what you are asking is whether a literal value can be implicitly converted between 2 types. The compiler will actually do this for you in some cases when there would be no loss in precision. Take this for example:
long n = 1000; // Assign an Int32 literal to an Int64.
This is possible because a long (Int64) contains a larger range of values compared to an int (Int32). For your specific example it is possible to lose precision. Here are the drastically different ranges for decimal and double.
Decimal: ±1.0 × 10^−28 to ±7.9 × 10^28
Double: ±5.0 × 10^−324 to ±1.7 × 10^308
With this knowledge, it becomes clear why an implicit conversion is a bad idea. Here is a list of the implicit conversions that the C# compiler currently supports. I highly recommend you do a bit of light reading on the subject.
Implicit Numeric Conversions Table
Note also that due to the inner details of how doubles and decimals are defined, slight rounding errors can appear in your assignments or calculations. You need to know about how floats, doubles, and decimals work at the bit level to always make the best choices.
For example, a double cannot precisely store the value 25.10, but a decimal can.
A double can precisely store the value 25.50 however, for fun binary-encoding reasons.
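You can check both claims by printing the stored values (a small sketch; "G17" shows enough digits to expose any rounding):
Console.WriteLine(25.50.ToString("G17")); // 25.5 - exactly representable (51/2)
Console.WriteLine(25.10.ToString("G17")); // 25.100000000000001 - not exact
Console.WriteLine(25.10m);                // 25.10 - decimal stores it exactly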
Decimal structure
As a follow up to the question what is the purpose of double implying...
Also, I read in an article a while back (no I don't remember the link) that it is inappropriate to do decimal dollars = 0.00M;. It stated that the appropriate way was decimal dollars = 0;. It also stated that this was appropriate for all numeric types. Is this incorrect, and why? If so, what is special about 0?
Well, for any integer it's okay to use an implicit conversion from int to decimal, so that's why it compiles. (Zero has some other funny properties... you can implicitly convert from any constant zero expression to any enum type. It shouldn't work with constant expressions of type double, float and decimal, but it happens to do so in the MS compiler.)
However, you may want to use 0.00m if that's the precision you want to specify. Unlike double and float, decimal isn't normalized - so 0m, 0.0m and 0.00m have different representations internally - you can see that when you call ToString on them. (The integer 0 will be implicitly converted to the same representation as 0m.)
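For example (the scale you write is the scale decimal keeps):
Console.WriteLine(0m);    // 0
Console.WriteLine(0.0m);  // 0.0
Console.WriteLine(0.00m); // 0.00
decimal fromInt = 0;      // implicit int-to-decimal conversion
Console.WriteLine(fromInt); // 0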
So the real question is, what do you want to represent in your particular case?
I'm attempting to use a double to represent a bit of a dual-value type in a database that must sometimes accept two values, and sometimes accept only one (int).
So the field is a float in the database, and in my C# code it is a double (since mapping it via EF makes it a double for some reason...).
So basically, what I want to do... let's say 2.5 is the value. I want to separate that out into 2 and 5. Is there any implicit way to go about this?
Like this:
int intPart = (int)value;
double fractionalPart = value - intPart;
If you want fractionalPart to be an int, you can multiply it by 10^n, where n is the number of digits you want, and cast to int.
However, beware of precision loss.
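Putting that together (a sketch; Math.Round guards against the fraction coming out as 4.999... for values that aren't exactly representable):
double value = 2.5;
int intPart = (int)value;                                // 2
int fracDigit = (int)Math.Round((value - intPart) * 10); // 5
Console.WriteLine($"{intPart}, {fracDigit}");            // 2, 5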
That said, this is extremely poor design; you should probably make two fields.
SQL Server's float type is an 8-byte floating-point value, equivalent to C#'s double.
You should be able to explicitly cast the double to an int to get the first part, subtract that from the number, and multiply the result by ten to get the second part.