Conversion from string to float fails for some numbers but not others - C#

I have the following numbers as strings: 22570438, 22570481, 22570480.
var listOfStrings = new List<string> { "22570438", "22570481", "22570480" };
foreach (var val in listOfStrings)
{
    float numTest = 0;
    numTest = Convert.ToInt64(float.Parse(val));   // "22570481" comes back as 22570480
    numTest = long.Parse(val);                     // the long parses correctly, but storing it in the float drops the last digit
    numTest = float.Parse(val.Trim(), CultureInfo.InvariantCulture.NumberFormat); // 22570480 again
}
For 22570438 all three conversions return 22570438, and the same holds for 22570480.
But for 22570481 all three return 22570480. The code above is just a sample of how I'm testing, not an issue with this particular code; I have tried it in other projects and still get the same result.
Has anyone experienced this issue? Is it a compiler issue when converting 22570481 to a float?
I tried to find similar questions; if anyone knows a post that could help, please reply with a link.

float has limited precision; it can't accurately store arbitrary integers beyond a certain size, and it doesn't have the precision to retain what you want here.
Consider using int, decimal or double instead.
It is not a compiler bug or a runtime bug; it is a fundamental property of floating-point arithmetic (in this case, 32-bit IEEE 754 floats).
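A minimal sketch of what is going on (float, i.e. IEEE 754 single precision, has a 24-bit significand, so above 2^24 = 16,777,216 not every integer is representable):
float f = float.Parse("22570481");
Console.WriteLine((long)f);                  // 22570480 - the nearest representable float
Console.WriteLine(long.Parse("22570481"));   // 22570481 - long keeps the value exactly
Console.WriteLine(double.Parse("22570481")); // 22570481 - double's 53-bit significand is more than enough here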

Related

Example of seemingly equal float variables not equal [duplicate]

Can anyone show me an example of two C# variables containing float values that "seem" to be equal but in fact are not? When I say "seem equal", I mean that they intuitively seem to be equal.
The reason I'm looking for such an example is that I have code that compares two float variables for equality, and Visual Studio warns me that "Comparison of floating point numbers can be unequal due to the differing precision of the two values." I understand that float variables are not precise (here's a StackOverflow question where this is discussed and explained very clearly), but I am failing to find an actual example where two values that seem to be equal are actually considered different by C#.
For instance, the first answer to the SO question I referenced earlier mentions that 9.2 and 92/10 are internally represented differently, so I wrote the following code to verify whether C# would treat them as equal or not, and the result is that they are considered equal.
var f1 = 92f / 10f;
var f2 = 9.2f;
if (f1 == f2)
{
Console.Write("Equal, as expected");
}
else
{
Console.Write("Surprisingly not equal");
}
So, I'm looking for an example of f1 and f2 that "seem" to be equal but would cause C# to treat them as different.
If you don't insist on the float (Single) type but will accept an example with any floating-point type (Single, Double), you can try:
if (Math.Sqrt(2.0) * Math.Sqrt(2.0) == 2.0)
Console.Write("Equal, as expected");
else
Console.Write("Surprisingly not equal");
Try the code below. value1 and value2 both represent toSum * 10, but they are not equal, at least on my machine. The float type has especially low absolute precision for large values.
const float toSum = 1000000000.1f;
const int count = 10;
float value1 = 0;
for (int i = 0; i < count; i++)
{
value1 += toSum;
}
float value2 = toSum * count;
var equal = value1 == value2;
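A similar float-only illustration with a small value (a sketch, not from the original answers): 0.1 has no exact binary representation, so repeatedly adding 0.1f drifts away from the exact sum.
float sum = 0f;
for (int i = 0; i < 10; i++)
{
    sum += 0.1f;              // each addition is rounded to the nearest float
}
Console.WriteLine(sum == 1f); // False: sum ends up as 1.0000001f, not 1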

parse float from string

I have a problem parsing a float value from a string.
The problem is with the decimal part of the value; here is an example:
var tmp = "263148,21";
var ftmp = float.Parse(tmp); //263148.219
I tried some other values, but I haven't figured out why some decimal values come out incorrect.
This doesn't have anything to do with the comma in the OP's code - instead, this question is about a float value not accurately representing a real number.
Floating point numbers are limited in their precision. For really precise numbers you should use double instead.
See also this answer: why-is-floating-point-arithmetic-in-c-sharp-imprecise? and have a look at this for more information: What Every Computer Scientist Should Know About Floating-Point Arithmetic
var tmp = "263148,21";
var culture = (CultureInfo)CultureInfo.CurrentCulture.Clone();
culture.NumberFormat.NumberDecimalSeparator = ",";
var ftmp = double.Parse(tmp, culture);
You have to use double instead of float
As stated in other answers and comments, this is related to floating-point precision.
Consider using decimal if you need the decimal digits preserved exactly or want to avoid binary rounding errors. But please note that this type is much more heavyweight (128 bits) and might not be suited to your case.
var tmp = "263148,21";
var dtmp = decimal.Parse(tmp); // 263148.21 (assumes the current culture uses ',' as the decimal separator)
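For comparison, a small sketch using an invariant-culture string (so the separator itself is not an issue) shows how each type stores the value; the float comment refers to the stored value, since the printed form can vary by runtime:
using System.Globalization;

var s = "263148.21";
Console.WriteLine(float.Parse(s, CultureInfo.InvariantCulture));   // nearest float is 263148.21875
Console.WriteLine(double.Parse(s, CultureInfo.InvariantCulture));  // 263148.21
Console.WriteLine(decimal.Parse(s, CultureInfo.InvariantCulture)); // 263148.21 exactly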

How to use Newton-Raphson method to find the square root of a BigInteger in C#

So I'm attempting to use the Newton-Raphson method to find the square root of a BigInteger.
Here is my code:
private void sqrRt(BigInteger candidate)
{
    BigInteger epsilon = new BigInteger(0.0001); // note: BigInteger truncates this to 0
    BigInteger guess = candidate / 2;
    while (BigInteger.Abs(guess * guess - candidate) >= epsilon)
    {
        // guess = guess - (((guess**2) - y) / (2 * guess))
        guess = BigInteger.Subtract(guess, BigInteger.Divide(BigInteger.Subtract(BigInteger.Multiply(guess, guess), candidate), BigInteger.Multiply(2, guess)));
        MessageBox.Show(Convert.ToString(guess));
    }
}
The problem seems to be that BigInteger is not precise enough to fall within the epsilon in the while loop - i.e. it needs decimal places. My question is what/how/where I should convert to a double to make the while-loop condition eventually become false.
You are using the wrong data type. In order to have decimal points, you would need to use double, float, decimal, or Complex.
Check the documentation for each of these types to see its digits of precision.
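If the goal is the integer square root, a hedged alternative (a sketch, not part of the answer above) is to keep the Newton iteration entirely in BigInteger and stop when the guess stops decreasing, which removes the need for an epsilon:
using System.Numerics;

static BigInteger IntegerSqrt(BigInteger n)
{
    if (n < 0) throw new ArgumentException("Negative input", nameof(n));
    if (n == 0) return 0;
    BigInteger x = n / 2 + 1;           // initial guess, always >= sqrt(n)
    BigInteger next = (x + n / x) / 2;  // Newton step with integer division
    while (next < x)
    {
        x = next;
        next = (x + n / x) / 2;
    }
    return x;                           // floor(sqrt(n))
}
// e.g. IntegerSqrt(BigInteger.Pow(10, 40)) == BigInteger.Pow(10, 20)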

Properly round financial data

I decided to re-create my question:
decimal dTotal = 0m;
foreach (DictionaryEntry item in _totals)
{
if (!string.IsNullOrEmpty(item.Value.ToString()))
{
dTotal += Convert.ToDecimal(item.Value);
}
}
Console.WriteLine(dTotal / 3600m);
Console.WriteLine(decimal.Round(dTotal / 3600m, 2));
Console.WriteLine(decimal.Divide(dTotal, 3600m));
The above code returns:
579.99722222222222222222222222
580.00
579.99722222222222222222222222
So that is where my issue comes from. I really need it to display 579.99, but any rounding, whether decimal.Round or Math.Round, still returns 580; even the string format {0:F} returns 580.00.
How can I properly do this?
New answer (to new question)
Okay, so you've got a value of 579.99722222222222222222222222 - and you're asking that to be rounded to two decimal places. Isn't 580.00 the natural answer? It's closer to the original value than 579.99 is. It sounds like you essentially want flooring behaviour, but with a given number of digits. For that, you can use:
var floored = Math.Floor(original * 100) / 100;
In this case, you can do both in one step:
var hours = Math.Floor(dTotal / 36) / 100;
... which is equivalent to
var hours = Math.Floor((dTotal / 3600) * 100) / 100;
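Applied to the figures above (a sketch; dTotal = 2087990m is inferred from the printed quotient 579.99722...):
decimal dTotal = 2087990m;                           // 2087990 / 3600 = 579.99722...
Console.WriteLine(Math.Floor(dTotal / 36) / 100);    // 579.99 - floored to two decimal places
Console.WriteLine(decimal.Round(dTotal / 3600m, 2)); // 580.00 - rounding instead of flooring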
Original answer (to original question)
Sounds like you've probably got payTotal in an inappropriate form to start with:
using System;
class Test
{
static void Main()
{
decimal pay = 2087975.7m;
decimal time = pay / 3600;
Console.WriteLine(time); // Prints 579.99325
}
}
This is the problem:
var payTotal = 2087975.7;
That's assigning payTotal to a double variable. The value you've actually got is 2087975.69999999995343387126922607421875, which isn't what you wanted. Any time you find yourself casting from double to decimal or vice versa, you should be worried: chances are you've used the wrong type somewhere. Currency values should absolutely be stored in decimal rather than double (and there are various other Stack Overflow questions talking about when to use which).
See my two articles on floating point for more info:
Binary floating point in .NET
Decimal floating point in .NET
(Once you've got correct results, formatting them is a different matter of course, but that shouldn't be too bad...)

Why does this floating-point calculation give different results on different machines?

I have a simple routine which calculates the aspect ratio from a floating point value. So for the value 1.77777779, the routine returns the string "16:9". I have tested this on my machine and it works fine.
The routine is given as:
public string AspectRatioAsString(float f)
{
bool carryon = true;
int index = 0;
double roundedUpValue = 0;
while (carryon)
{
index++;
float upper = index * f;
roundedUpValue = Math.Ceiling(upper);
if (roundedUpValue - upper <= (double)0.1 || index > 20)
{
carryon = false;
}
}
return roundedUpValue + ":" + index;
}
Now on another machine, I get completely different results. So on my machine, 1.77777779 gives "16:9" but on another machine I get "38:21".
Here's an interesting bit of the C# specification, from section 4.1.6:
Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an “extended” or “long double” floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating-point operations with less precision, and rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating-point operations. Other than delivering more precise results, this rarely has any measurable effects.
It is possible that this is one of the "measurable effects" thanks to that call to Ceiling. Taking the ceiling of a floating point number, as others have noted, magnifies a difference of 0.000000002 by nine orders of magnitude because it turns 15.99999999 into 16 and 16.00000001 into 17. Two numbers that differ slightly before the operation differ massively afterwards; the tiny difference might be accounted for by the fact that different machines can have more or less "extra precision" in their floating point operations.
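A tiny illustration of that magnification (a sketch; the inputs are just representative values either side of 16):
Console.WriteLine(Math.Ceiling(15.99999999)); // 16
Console.WriteLine(Math.Ceiling(16.00000001)); // 17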
Some related issues:
C# XNA Visual Studio: Difference between "release" and "debug" modes?
CLR JIT optimizations violates causality?
To address your specific problem of how to compute an aspect ratio from a float: I'd possibly solve this a completely different way. I'd make a table like this:
struct Ratio
{
    public int X { get; private set; }
    public int Y { get; private set; }
    public Ratio (int x, int y) : this()
    {
        this.X = x;
        this.Y = y;
    }
    public double AsDouble() { return (double)X / (double)Y; }
}

Ratio[] commonRatios = {
    new Ratio(16, 9),
    new Ratio(4, 3),
    // ... and so on, maybe the few hundred most common ratios here.
    // since you are pinning results to be less than 20, there cannot possibly
    // be more than a few hundred.
};
and now your implementation is
public string AspectRatioAsString(double ratio)
{
    var results = from commonRatio in commonRatios
                  select new {
                      Ratio = commonRatio,
                      Diff = Math.Abs(ratio - commonRatio.AsDouble())
                  };
    // pick the common ratio with the smallest difference from the input
    var smallestResult = results.OrderBy(x => x.Diff).First();
    return String.Format("{0}:{1}", smallestResult.Ratio.X, smallestResult.Ratio.Y);
}
Notice how the code now reads very much like the operation you are trying to perform: from this list of common ratios, choose the one where the difference between the given ratio and the common ratio is minimized.
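A quick usage sketch (assuming the commonRatios table above includes at least the 16:9 and 4:3 entries shown):
Console.WriteLine(AspectRatioAsString(1.77777779)); // "16:9" - the closest entry in the table
Console.WriteLine(AspectRatioAsString(4.0 / 3.0));  // "4:3"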
I wouldn't use floating point numbers unless I really had to. They're too prone to this sort of thing due to rounding errors.
Can you change the code to work in double precision? (decimal would be overkill). If you do this, does it give more consistent results?
As to why it's different on different machines, what are the differences between the two machines?
32 bit vs 64 bit?
Windows 7 vs Vista vs XP?
Intel vs AMD processor? (thanks Oded)
Something like this might be the cause.
Try Math.Round instead of Math.Ceiling. If you end up with 16.0000001 and round up you'll incorrectly discard that answer.
Miscellaneous other suggestions:
Doubles are better than floats.
(double) 0.1 cast is unnecessary.
Might want to throw an exception if you can't figure out what the aspect ratio is.
If you return immediately upon finding the answer you can ditch the carryon variable.
A perhaps more accurate check would be to calculate the aspect ratio for each guess and compare it to the input.
Revised (untested):
public string AspectRatioAsString(double ratio)
{
for (int height = 1; height <= 20; ++height)
{
int width = (int) Math.Round(height * ratio);
double guess = (double) width / height;
if (Math.Abs(guess - ratio) <= 0.01)
{
return width + ":" + height;
}
}
throw new ArgumentException("Invalid aspect ratio", "ratio");
}
When index is 9, you would expect to get something like upper = 16.0000001 or upper = 15.9999999. Which one you get will depend on rounding error, which may differ on different machines. When it's 15.999999, roundedUpValue - upper <= 0.1 is true, and the loop ends. When it's 16.0000001, roundedUpValue - upper <= 0.1 is false and the loop keeps going until you get to index > 20.
Instead maybe you should try rounding upper to the nearest integer and checking if the absolute value of its difference from that integer is small. In other words, use something like if (Math.Abs(Math.Round(upper) - upper) <= (double)0.0001 || index > 20)
We had printf() statements with floating-point values that gave different roundings on computer 1 versus computer 2, even though both computers had the same Visual Studio 2019 version and build.
The difference turned out to be a slightly older Windows 10 SDK on one machine versus the newest version on the other. Strange as it may seem, after aligning the SDK versions the differences were gone.
