When I convert a string to a float I get my answer plus a bunch of junk decimals. How do I fix this? Is this floating-point error I'm seeing?
string myFloat = "1.94";
float f = float.Parse(myFloat);
It needs to be a float for database reasons ...
By junk I mean 1.94 turns into: 1.94000005722046
The problem is that you can't use float / double if you want a precise representation of your parsed number. If you need that, you must use decimal. For money amounts it's almost always required to use decimal, so keep that in mind.
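Here is a minimal sketch of the difference (the printed digits assume the usual IEEE 754 behaviour and may vary slightly by runtime and culture settings):

string myFloat = "1.94";

float f = float.Parse(myFloat);
// Cast to double and use the round-trip format to expose the stored binary value.
Console.WriteLine(((double)f).ToString("R")); // 1.940000057220459

decimal d = decimal.Parse(myFloat);
Console.WriteLine(d); // 1.94 -- the decimal digits are stored exactly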
Please read about how floating point numbers are represented internally:
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
How 1.94 is represented internally by a float can be tested in this calculator:
http://pages.cs.wisc.edu/~rkennedy/exact-float?number=1.94
As you can see, it's 1.940000057220458984375.
Databases support the decimal datatype:
Oracle offers DECIMAL: http://docs.oracle.com/javadb/10.6.2.1/ref/rrefsqlj15260.html
SQL Server offers DECIMAL: http://msdn.microsoft.com/de-de/library/ms187746.aspx
MySQL offers DECIMAL: https://dev.mysql.com/doc/refman/5.1/en/fixed-point-types.html
You can change it to a double to get a more accurate representation, but if you need the value exactly, use decimal instead.
You cannot fix this; it is simply how floats are represented in computers. If you use the decimal data type, you will get an exact representation.
string myFloat = "1.94";
decimal f = decimal.Parse(myFloat);
Then do the conversion to double when you store the value to the database. You could still get some noise there; the only way to be sure you are rid of it is to use a numeric (decimal) column in the database.
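As a sketch (the database call itself is omitted):

string myFloat = "1.94";
decimal d = decimal.Parse(myFloat);
// Convert to double only at the last moment, because the column type demands it;
// this conversion may reintroduce binary noise.
double forDatabase = (double)d;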
The value 0.105700679f should be convertible precisely to decimal. decimal clearly is able to hold this value precisely:
decimal d = 0.105700679m;
Console.WriteLine(d); //0.105700679
float also is able to hold the value precisely:
float f = 0.105700679f;
Console.WriteLine(f == 0.105700679f); //True
Console.WriteLine(f == 0.1057007f); //False
Console.WriteLine(f.ToString("R")); //Round-trip representation, 0.105700679
(Note that float.ToString() seems to drop precision. I just asked about that as well.)
The converter at https://www.h-schmidt.net/FloatConverter/IEEE754.html confirms it: the value really is stored like that. I am seeing this value right now in the debugger. I received it over the network as an IEEE float. This value exists!
But when I convert from float to decimal precision is dropped:
float f = 0.105700679f;
decimal d = (decimal)f;
Console.WriteLine(d); //0.1057007
Console.WriteLine(d.ToString("F15")); //0.105700700000000
Console.WriteLine(((double)d).ToString("R")); //0.1057007
I do understand that floating point numbers are imprecise. But here I see no reason for a loss of information. This is on .NET 4.7.1. How can I convert from float to decimal and preserve precision in all cases where doing so is possible?
This is important to me because I am processing financial market data and joining data sources based on price. Data is given to me as a float over a 3rd party network protocol. I need to import that float to a decimal representation.
Try converting f to double and then converting that to decimal.
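For example (a sketch; the comments assume the usual conversion behaviour, where double-to-decimal keeps more significant digits than float-to-decimal):

float f = 0.105700679f;

// Widening float to double is exact, and the double-to-decimal conversion
// keeps more significant digits than the float-to-decimal conversion does.
decimal viaDouble = (decimal)(double)f;
Console.WriteLine(viaDouble); // 0.105700679123402 (assuming 15 significant digits are kept)

// Alternatively, round-trip through the "R" string format:
decimal viaString = decimal.Parse(f.ToString("R"));
Console.WriteLine(viaString); // 0.105700679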
I suspect you are seeing shortcomings in .NET.
Let’s look at some of the code in your question line by line. In float f = 0.105700679f;, the decimal numeral “0.105700679” is converted to 32-bit binary floating-point. The result of this is the number 0.105700679123401641845703125.
Next consider Console.WriteLine(f == 0.105700679f);. This compares f to the value represented by 0.105700679f. Since the f suffix denotes a float type, 0.105700679f represents the decimal numeral “0.105700679” converted to 32-bit binary floating-point. So of course it has the same value as it did before, and the test for equality returns true. You have not tested whether f is equal to 0.105700679; you have tested whether it is equal to 0.105700679f, and it is.
Then we have decimal d = (decimal)f;. Based on the results you are seeing, it appears to me this conversion produces a number with only seven decimal digits, .1057007. I presume Microsoft has decided that, because a float is only “capable” of holding seven decimal digits, only seven should be produced when converting to decimal. (This is both a false understanding of what the value of a binary floating-point number represents and an incorrect number. A conversion from decimal to float and back is only guaranteed to preserve six decimal digits, and a conversion from float to decimal and back requires nine decimal digits to preserve the float value. So seven is just wrong.)
If there is a solution to your problem, it is to convert f to decimal by some means other than the cast (decimal) f. I do not know C#, so I cannot say what the solution should be. I suggest trying to convert to double first and then decimal. Quite likely C# will convert float to double without changing the value, and then the conversion to decimal will produce more decimal digits. Another possibility could be converting f to a string with the number of decimal digits you desire and then converting the string to a decimal number.
Also, you say the data is coming via a third-party network protocol. It appears the protocol is incapable of representing the actual values it is supposed to be communicating. That is a serious defect that you should complain to the third party about. I know that may seem futile, but it should be done. Also, it casts doubt on your need to convert the float value to 0.105700679. Most eight-digit decimal numbers in the origin of this data could not survive the conversion to float and back to decimal without change. So, even if you are able to convert float to eight decimal digits, most of the results will differ from the original pre-transport values. E.g., if the original number were 0.105700680, it is changed to 0.105700679123401641845703125 when converted to a float to be sent over the network. So the receiver receives 0.105700679123401641845703125, and it is impossible for them to know the original number was 0.105700680 rather than 0.105700679.
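You can demonstrate that ambiguity directly; this sketch assumes both literals round to the same 32-bit value, as described above:

float a = 0.105700679f;
float b = 0.105700680f;
// Both decimal originals round to the same float, 0.105700679123401641845703125,
// so the receiver cannot tell which one was sent.
Console.WriteLine(a == b); // True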
The C# float data type has a precision of about 7 digits.

float f = 0.105700679f;
decimal d = (decimal)f;
Console.WriteLine(d); // 0.1057007 -- so that result is expected

Read more: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/float
Using C# MVC4 and MSSQL, I have an object with a float field. When I look at it in the database the float value is 2.014112E+17, but when I get the object in my code it becomes 2.01411186E+17. Why is it different between the object on the server and the object in the database? There is no conversion happening in between by me; I'm just reading an object from the database. Thank you.
Edit: I'm using this floating-point value as a timestamp to sync some of my data with another database, and this issue is causing me some problems. Is there a way to get an accurate value, or is storing it as a float a wrong idea in the first place?
Floats are only accurate to a certain degree due to their implementation. For accuracy, use Decimal.
Difference between decimal, float and double in .NET?
float and double are floating binary point types. In other words, they represent a number like this:
10001.10010110011
The binary number and the location of the binary point are both encoded within the value.
decimal is a floating decimal point type. In other words, it represents a number like this:
12345.65789
Edit: You can also try saving the timestamp as a Unix timestamp, which is just the number of seconds since 1970-01-01. It might be better suited for your needs.
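A sketch of that idea (note the assumption that DateTimeOffset's Unix-time helpers are available, i.e. .NET Framework 4.6 or later; on older frameworks you can subtract the epoch manually):

// Store whole seconds since 1970-01-01 (UTC) instead of a float.
long unixSeconds = ((DateTimeOffset)DateTime.UtcNow).ToUnixTimeSeconds();

// Reading it back is lossless:
DateTime restored = DateTimeOffset.FromUnixTimeSeconds(unixSeconds).UtcDateTime;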
If you have literally used a SQL float with a C# float, these are not comparable. You should be using a SQL real to store your C# float.
See here for full chart: C# Equivalent of SQL Server DataTypes
As an aside, you will always have the potential for these issues when working with floating point numbers. Where possible, use a decimal instead.
Further reference for SQL float != C# float: Why is a SQL float different from a C# float
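For the read side, a minimal sketch (assuming System.Data.SqlClient, an existing connectionString, and made-up table/column names):

// Requires: using System.Data.SqlClient;
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT Timestamp FROM SyncTable", conn))
{
    conn.Open();
    // SQL float (8 bytes) maps to C# double; C# float corresponds to SQL real (4 bytes).
    double timestamp = (double)cmd.ExecuteScalar();
    Console.WriteLine(timestamp.ToString("R"));
}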
I searched for this question first before posting, but all I got was based on C++.
Here is my question:
Is a double with an f suffix normal in C#? If yes, why and how is this possible?
Have a look at this code:
double d1 = 1.2f;
double d2 = 2.0f;
Console.WriteLine("{0}", d2 - d1);
decimal dm1 = 1.2m;
decimal dm2 = 2.0m;
Console.WriteLine("{0}", dm2 - dm1);
The answer for the first calculation is 0.799999952316284 with the f suffix instead of 0.8. Also, when I change the f to a d, which I think should be the normal way, it gives the correct answer of 0.8.
The right-hand expression is evaluated as a float and then "deposited" in a double variable. Nothing wrong or weird here; the difference in result comes from the precision of the two data types.
Regarding your assessment of the "correct answer": the fact that 0.8 came out "correct" is not because you changed from a float literal to a double literal. That's just a better approximation of the result. The truly "correct" result is the one coming from the second expression, the one using decimal types.
The f suffix stands for float, not double. So 1.2f is a single-precision floating-point number, which is stored into the double immediately after it is created, via an implicit cast to double.
The imprecision you are getting happens there, not at the calculation, since the code works as expected with 1.2d.
Such behaviour is normal when using floating-point values. Use decimal if you do not want it, as you already did in your examples yourself.
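A sketch that makes this visible (the printed digits assume the usual round-trip formatting):

// The rounding happens at the literal: 1.2f is rounded to float precision
// before the implicit widening to double ever takes place.
double fromFloat = 1.2f;
double fromDouble = 1.2d;
Console.WriteLine(fromFloat.ToString("R"));  // 1.2000000476837158
Console.WriteLine(fromDouble.ToString("R")); // 1.2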
Double and float are both binary types.
The problem is not their precision but the kind of numbers they can store exactly, which must be representable in binary, too. Change 1.2f to 0.5f or 0.25f or 0.125f and so on, and you will see 'correct' results. But any number whose fraction has a prime factor other than 2 in its denominator must be stored as an approximation: there is a 5 hidden in 1.2 (= 6/5), and you can't store that in a float or double. If you try, only an approximation will be stored.
Decimals actually store decimal digits, and you won't see any approximations there as long as you don't leave the decimal realm. If you try to store, say, 1/3 in a decimal, it will have to approximate as well, as the sketch below shows.
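For example:

// decimal cannot store 1/3 exactly either; it rounds after 28 significant digits.
decimal third = 1m / 3m;
Console.WriteLine(third); // 0.3333333333333333333333333333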
Let's say we have the following simple code
string number = "93389.429999999993";
double numberAsDouble = Convert.ToDouble(number);
Console.WriteLine(numberAsDouble);
after that conversion the numberAsDouble variable has the value 93389.43. What can I do to make this variable keep the full number as is, without rounding it? I have found that Convert.ToDecimal does not behave the same way, but I need to have the value as a double.
Small update: putting a breakpoint on line 2 of the above code shows that the numberAsDouble variable already has the rounded value 93389.43 before it is displayed in the console.
93389.429999999993 cannot be represented exactly as a 64-bit floating point number. A double can only hold 15 or 16 digits, while you have 17 digits. If you need that level of precision use a decimal instead.
(I know you say you need it as a double, but if you could explain why, there may be alternate solutions)
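For instance, a sketch using decimal (with the invariant culture so the '.' is always accepted):

string number = "93389.429999999993";
// Requires: using System.Globalization;
decimal numberAsDecimal = decimal.Parse(number, CultureInfo.InvariantCulture);
Console.WriteLine(numberAsDecimal); // 93389.429999999993 -- all 17 digits preserved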
This is expected behavior.
A double can't represent every number exactly. This has nothing to do with the string conversion.
You can check it yourself:
Console.WriteLine(93389.429999999993);
This will print 93389.43.
The following also shows this:
Console.WriteLine(93389.429999999993 == 93389.43);
This prints True.
Keep in mind that there are two conversions going on here. First you're converting the string to a double, and then you're converting that double back into a string to display it.
You also need to consider that a double doesn't have infinite precision; depending on the string, some data may be lost due to the fact that a double doesn't have the capacity to store it.
When converting to a double it's not going to "round" any more than it has to. It will create the double that is closest to the number provided, given the capabilities of a double. When converting that double to a string it's much more likely that some information isn't kept.
See the following (in particular the first part of Michael Borgwardt's answer):
decimal vs double! - Which one should I use and when?
A double will not always keep the precision, depending on the number you are trying to convert. If you need to be precise, you will need to use decimal.
This is a limit on the precision that a double can store. You can see this yourself by trying to convert 3389.429999999993 instead.
The double type has finite precision (64 bits, of which 53 are significand bits), so a rounding error occurs when the real number is stored in the numberAsDouble variable.
A solution that would work for your example is to use the decimal type instead, which is 128 bits wide. However, the same problem arises for numbers with even more digits.
For arbitrarily large integers, the System.Numerics.BigInteger type from .NET Framework 4.0 supports arbitrary precision. However, you will need a third-party library to work with arbitrarily large real numbers.
You could truncate the decimal places to the number of digits you need, not exceeding double precision.
For instance, this will truncate to 5 decimal places, giving 93389.42999. Just replace 100000 with the needed value:
string number = "93389.429999999993";
decimal numberAsDecimal = Convert.ToDecimal(number);
// Scale up, truncate toward zero via the long cast, then scale back down as a double.
var numberAsDouble = ((double)((long)(numberAsDecimal * 100000.0m))) / 100000.0;
I’m getting numbers from a database. Some are stored as double in the database and some are stored as string.
What I want to do is count the number of decimal digits, so 345.34938 would give me a result of 5.
As I said, some of my numbers come from the database as double and some as string. I'm wondering if there could be any kind of problem when casting the string numbers to double, hence giving me a wrong result when trying to count the decimals.
I think I should be OK, but I'm afraid that in some situations I'll end up receiving wrong double numbers when they're cast from the string (thinking about getting 1.9999999 instead of 2.0, things like that)...
Is there any kind of risk that converting my number from string to double would give me a strange result compared to the numbers stored as double in my application? Am I being too paranoid about this?
Consider converting the string representations to System.Decimal with the decimal.Parse method, because for a decimal there's a much better correspondence between the value of the number and its string representation. It can also handle more digits.
A System.Decimal will preserve trailing zeros present in the string (like "2.7500"), which a System.Double will not.
But if your strings never have more than 15 digits in total (including digits before the decimal point), your approach with double will probably work. Keep in mind, though, that the exact number represented almost always differs from "what you see" with a double, so the number of decimal figures is to be understood as what double.ToString() shows.
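A small sketch of both effects (assuming the invariant culture):

// Requires: using System.Globalization;
decimal d = decimal.Parse("2.7500", CultureInfo.InvariantCulture);
Console.WriteLine(d); // 2.7500 -- trailing zeros preserved

double x = double.Parse("2.7500", CultureInfo.InvariantCulture);
Console.WriteLine(x); // 2.75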
Maybe it's easier to just use the string directly, for example:

int pointIndex = myString.IndexOf('.');
// Guard against strings without a decimal point (IndexOf returns -1).
int numberOfDecimals = pointIndex < 0 ? 0 : myString.Length - (pointIndex + 1);