Avoiding rounding in the Convert.ToSingle method - C#

Consider:
object a = 1.123456;
float f = Convert.ToSingle(a);
But when I print the value of f, I get 1.123455; it is getting rounded off. Also, I can't change the float data type in the code. Please help.

This happens because of the way the floating-point type works.
If you want better precision (at the cost of some performance), use the double or decimal type instead.
For more information about why floating-point loses precision, read:
http://msdn.microsoft.com/en-us/library/c151dt3s%28VS.80%29.aspx
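For example, here is a minimal sketch (variable names are illustrative) of how the same value fares in each type:
object a = 1.123456;
double d = Convert.ToDouble(a);   // double keeps ~15-17 significant digits
decimal m = Convert.ToDecimal(a); // decimal stores the base-10 digits exactly
float f = Convert.ToSingle(a);    // float keeps only ~6-9 significant digits
Console.WriteLine(d);             // 1.123456
Console.WriteLine(m);             // 1.123456
Console.WriteLine(f);             // may print a slightly different value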

Related

C# explanation of the "f" keyword for a float vs implicit and explicit conversions

So I don't really get the conversion from a double to a float, e.g.:
float wow = 1.123562641346;
float wow = 1.123562641346f;
The first one is a double literal being saved in a variable that has memory reserved for a float; it can't be implicitly converted to a float, so it gives an error.
The second one has a float on the right side being saved inside a float variable. What I don't get is that the second one gives exactly the same result as this:
float wow = (float) 1.123562641346;
I mean, is the second one exactly the same as the (float) cast? Does the "f" just stand for "explicitly convert to a float"?
If it doesn't mean "explicitly convert to float", then I don't know why it doesn't give an error, since there isn't an implicit conversion for it.
I really can't find any resources that explain this. The only answers I can find say that the "f" just means the literal is of the float data type, but that still doesn't explain why I can give it something with 13 decimals and it converts it to the 7 decimals expected, while the first version doesn't compile and isn't converted automatically.
Implicitly converting a double to a float is potentially a data-losing operation, so the compiler makes it an error.
Explicitly converting it means that the programmer has taken control, so it does not provoke an error.
Note that in your example, the double value does indeed lose data when it's converted to float as the following demonstrates:
double d = 1.123562641346;
Console.WriteLine(d.ToString("f16")); // 1.1235626413460000
float f1 = (float)1.123562641346;
Console.WriteLine(f1.ToString("f16")); // 1.1235626935958862
float f2 = 1.123562641346f;
Console.WriteLine(f2.ToString("f16")); // 1.1235626935958862
The compiler is trying to prevent the programmer from writing code that causes accidental data loss.
Note that it does NOT warn that float f2 = 1.123562641346f; is trying to initialise f2 with a value that it cannot actually represent exactly. The same thing can happen with double initialisations - the compiler won't warn about assigning a double value that can't actually be represented exactly.
The numeric value on the right of the "=" when initialising a floating-point number is known as a "real-literal".
The C# standard says this about converting the value of a real-literal to a floating-point value:
The value of a real literal of type float or double is determined by using the IEEE “round to nearest” mode.
This rounding is performed without provoking a compile error.
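A small illustration of that silent rounding (a sketch; the outputs assume the usual .NET round-trip formats):
double d = 0.1;                       // nearest double is slightly above 0.1
Console.WriteLine(d.ToString("G17")); // 0.10000000000000001
float f = 0.1f;                       // nearest float is also inexact
Console.WriteLine(f.ToString("G9"));  // 0.100000001
Both lines compile without any error or warning; the literals are simply rounded to the nearest representable value.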
Your understanding is correct: the f at the end of the number indicates that it's a float, so it will be treated as a float, and when you assign this float value to a float variable you get no conversion errors.
If there is no f at the end of a number with decimals, then by default the value is handled as a double, and once you assign this double value to a float variable you get an error because of potential data loss.
Read more here: https://answers.unity.com/questions/423675/why-is-there-sometimes-an-f-behinf-a-number.html
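To summarise with a sketch (the commented-out line is the one the compiler rejects):
float ok = 1.5f;            // float literal: no conversion involved
// float bad = 1.5;         // error CS0664: a double literal cannot be implicitly converted to float
float viaCast = (float)1.5; // an explicit cast compiles, just like the f suffix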

Why isn't conversion of Decimal.MaxValue between Single and Decimal transitive?

If I try and convert Decimal.MaxValue from Decimal to Single and back again, the conversion fails with an OverflowException:
Convert.ToDecimal(Convert.ToSingle(Decimal.MaxValue))
// '...' threw an exception of type 'System.OverflowException'
// base: {"Value was either too large or too small for a Decimal."}
What gives? Surely the value of Decimal.MaxValue as a Single should be a valid Decimal value?
I understand the differences between Single and Decimal and expect a loss of precision when converting from Decimal to Single, but since Single.MaxValue is greater than Decimal.MaxValue it doesn't make sense that the "Value was either too large or too small for a Decimal". If you think that does make sense, please explain why in an answer.
Additionally,
Convert.ToSingle(Decimal.MaxValue)
// 7.92281625E+28
so there is no problem converting this number to a Single.
You're making an incorrect assumption:
Surely the value of Decimal.MaxValue as a Single should be a valid Decimal value?
The value of Decimal.MaxValue is 79,228,162,514,264,337,593,543,950,335. A float can't represent anywhere near that level of precision, so when you convert the decimal value to a float you end up with an approximation.
In this case the float is represented internally as 2^96. This turns out to be a pretty good approximation -- 79,228,162,514,264,337,593,543,950,336 -- but take note of that last digit.
The value of the float (2^96) is one greater than Decimal.MaxValue (2^96 - 1), which is why your attempt to convert it back into a decimal fails.
(The .NET Framework doesn't offer much help when trying to diagnose these kinds of problems. It'll always display a pre-rounded value for the float -- 7.92281625E+28 or similar -- and never the full approximation that it's using internally.)
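The failure can be reproduced in a few lines (a sketch; since Decimal.MaxValue is 2^96 - 1, the float rounds up to 2^96, one more than any decimal can hold):
float f = Convert.ToSingle(decimal.MaxValue); // rounds up to exactly 2^96
try
{
    decimal d = Convert.ToDecimal(f);         // 2^96 > decimal.MaxValue (2^96 - 1)
}
catch (OverflowException ex)
{
    Console.WriteLine(ex.Message);            // "Value was either too large or too small for a Decimal."
}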

Number type suffixes in C#

I searched for this question first before posting, but all I got was based on C++.
Here is my question:
Is a double with an f suffix normal in C#? If yes, why and how is this possible?
Have a look at this code:
double d1 = 1.2f;
double d2 = 2.0f;
Console.WriteLine("{0}", d2 - d1);
decimal dm1 = 1.2m;
decimal dm2 = 2.0m;
Console.WriteLine("{0}", dm2 - dm1);
The answer for the first calculation is 0.799999952316284 with the f suffix, instead of 0.8. Also, when I change the f to a d, which I think should be the normal way, it gives the correct answer of 0.8.
The right-hand expression is evaluated as a float and then "deposited" in a double variable. Nothing wrong or weird here; the difference in the result comes down to the precision of the two data types.
Regarding your notion of the "correct answer": the fact that 0.8 came out "correct" is not because you changed from a float literal to a double literal; that's just a better approximation of the result. The truly "correct" result comes from the second calculation, the one using decimal types.
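A quick sketch of what the f suffix does to the first calculation (hypothetical snippet):
double d1 = 1.2f;             // 1.2f is the float nearest 1.2, then widened to double
Console.WriteLine(d1 == 1.2); // False: the widening preserves the float's error
Console.WriteLine(2.0 - d1);  // 0.799999952316284 (exact formatting may vary by runtime)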
The f suffix stands for float, not double. So 1.2f is a single-precision floating-point number that is saved to a double directly after it is created, via an implicit cast to double.
The imprecision you are seeing happens there, not in the calculation, since it seems to work with 1.2d.
Such behaviour is normal when using floating-point values. Use decimal if you do not want such behaviour, as you already did in your examples yourself.
Double and float are both binary numbers.
The problem is not their precision but the kind of numbers they can store exactly, which must be binary, too. Change 1.2f to 0.5f, 0.25f or 0.125f and so on, and you will see 'correct' results. But a number whose fractional part is not a binary fraction must be stored as an approximation: just as 1/3 has no finite decimal expansion, the fractional part of 1.2 (one fifth) has no finite binary expansion, so a float or double can only hold an approximation of it.
Decimals actually store decimal digits, and you won't see any approximations there as long as you don't leave the decimal realm. If you try to store, say, 1/3 in a decimal, it has to approximate as well.
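A short sketch of the difference (outputs assume round-trip formatting):
Console.WriteLine((0.5f).ToString("G9")); // 0.5: a binary fraction, stored exactly
Console.WriteLine((1.2f).ToString("G9")); // 1.20000005: only an approximation fits
Console.WriteLine(1.2m);                  // 1.2: decimal stores the base-10 digits exactly
Console.WriteLine(1m / 3m);               // 0.3333333333333333333333333333: decimal must approximate 1/3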

C# Maths gives wrong results!

I understand the principle behind this problem, but it gives me a headache to think that this is going on throughout my application, and I need to find a solution.
double Value = 141.1;
double Discount = 25.0;
double disc = Value * Discount / 100; // disc = 35.275
Value -= disc; // Value = 105.824999999999999
Value = Functions.Round(Value, 2); // Value = 105.82
I'm using doubles to represent quite small numbers. Somehow, in the calculation 141.1 - 35.275, the binary representation of the result gives a number that is just 0.0000000000001 out. Unfortunately, since I then round this number, it gives the wrong answer.
I've read about using Decimals instead of Doubles but I can't replace every instance of a Double with a Decimal. Is there some easier way to get around this?
If you're looking for exact representations of values which are naturally decimal, you will need to replace double with decimal everywhere. You're simply using the wrong datatype. If you'd been using short everywhere for integers and then found out that you needed to cope with larger values than that supports, what would you do? It's the same deal.
However, you should really try to understand what's going on to start with... why Value doesn't equal exactly 141.1, for example.
I have two articles on this:
Binary floating point in .NET
Decimal floating point in .NET
You should use decimal – that's what it's for.
The behaviour of floating point arithmetic? That's just what it does. It has limited, finite precision. Not all numbers are exactly representable: there are infinitely many real numbers, but only finitely many of them can be represented. The key to decimal, for this application, is that it uses a base-10 representation – double uses base 2.
Instead of using Round to round the number, you could use some function you write yourself which uses a small epsilon when rounding to allow for the error. That's the answer you want.
The answer you don't want, but I'm going to give anyway, is that if you want precision, and since you're dealing with money judging by your example you probably do, you should not be using binary floating point maths. Binary floating point is inherently inaccurate and some numbers just can't be represented correctly. Using Decimal, which does base-10 floating point, would be a much better approach everywhere and will avoid you making costly mistakes with your doubles.
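For instance, the original calculation done entirely in decimal gives exact intermediate values and the expected rounding (a sketch using the question's numbers):
decimal value = 141.1m;
decimal disc = value * 25.0m / 100; // 35.275, exactly
value -= disc;                      // 105.825, exactly
value = Math.Round(value, 2, MidpointRounding.AwayFromZero); // 105.83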
After spending most of the morning trying to replace every instance of double with decimal and realising I was fighting a losing battle, I had another look at my Round function. This may be useful to those who can't implement the proper solution:
public static double Round(double dbl, int decimals) {
    return (double)Math.Round((decimal)dbl, decimals, MidpointRounding.AwayFromZero);
}
By first casting the value to a decimal and then calling Math.Round, this returns the 'correct' value.
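Applied to the numbers above (a sketch; this relies on the documented behaviour that the double-to-decimal conversion rounds to 15 significant digits):
double value = 141.1;
value -= value * 25.0 / 100;        // about 105.824999999999999 as a double
Console.WriteLine(Round(value, 2)); // 105.83: (decimal)value is 105.825, which then rounds away from zero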

C#: Number Conversion Problem

Today I faced a strange problem in C#. I have an ASP.NET page where the user can enter a price, quantity, etc. I get the price value, convert it to double, multiply it by 100 and then cast it to an integer. When the price is "33.30", after converting it to double it remains 33.3 (obviously), but after multiplying it by 100 it becomes 3329.9999999999995, and when I cast it to an integer with the simple cast operator "(int)(price * 100)", it becomes 3329.
Right now I have no idea why this is happening, so I thought maybe you guys can help :).
This happens because of the way doubles are stored. You should use decimal when working with money to avoid rounding errors.
Don't cast it; round it using Math.Round. And it's better to use the decimal type for currency.
This is happening due to floating-point rounding errors. Most decimal fractions cannot be represented exactly in binary, so rounding errors such as the one you are experiencing occur. See this Wikipedia article for more detail.
To overcome this, you should round to the closest integer - this is best achieved by using Math.Round.
When dealing with currencies, however, best practice is to use the decimal type instead of double.
If you want to round to the closest integer, there is the Math.Round method for this.
What you are doing with the plain cast is truncation toward zero, which is exactly what you observe (and is consistent with C).
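For example, using the question's numbers (a sketch):
double price = 33.30;
double cents = price * 100;                // 3329.9999999999995
Console.WriteLine((int)cents);             // 3329: the cast truncates toward zero
Console.WriteLine((int)Math.Round(cents)); // 3330
decimal priceDec = 33.30m;
Console.WriteLine((int)(priceDec * 100));  // 3330: 33.30 is exact as a decimal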
The error is because doubles are stored in binary form. While every binary fraction has an exact decimal expansion, most decimals don't have an exact binary expansion. The decimal 33.3 has an inexact binary expansion. This approximation is then multiplied by 100, and converted to its exact decimal expansion, which is 3329.9999999999995. (Actually, this may not be the exact expansion, due to display truncation, but the gist of it is the same.)
Floating-point arithmetic in computing is almost always an approximation of the "real" value.
