Sum on floats in C# [duplicate]

This question already has answers here:
Why is the "f" required when declaring floats?
(3 answers)
Closed last month.
I have a simple question: how is float sum = 1.1f + 1.2f; different from float sum = (float)(1.1 + 1.2);?
It is C# code, btw.
Thanks in advance

The difference between the two expressions is that in the first one, the f suffix makes each literal a float, so the addition is performed in single precision. In the second expression, the literals are of type double (the default type for real literals in C#), the addition is performed in double precision, and the result is then explicitly converted to float using the (float) cast operator.
The two expressions are therefore not guaranteed to produce the same result: the first rounds each operand to float before adding, while the second rounds only once, after adding at double precision. The first form is also generally considered more readable, as it explicitly indicates that the operands are float values.
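A minimal sketch of that difference (the "f16" formatting mirrors the example further down; output as produced on modern .NET):
float sumFloat = 1.1f + 1.2f;         // each literal rounded to float, then added in single precision
float sumDouble = (float)(1.1 + 1.2); // added in double precision, rounded to float once at the end
Console.WriteLine(sumFloat.ToString("f16"));  // 2.3000001907348633
Console.WriteLine(sumDouble.ToString("f16")); // 2.2999999523162842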
What is the problem you're trying to solve?

Related

C# explanation of the "f" keyword for a float vs implicit and explicit conversions

So I don't really get the conversion from a double to a float, e.g.:
float wow = 1.123562641346;
float wow = 1.123562641346f;
The first one is a double being saved in a variable that has memory reserved for a float; it can't be implicitly converted to a float, so it gives an error.
The second one has a float on the right side being saved into a float variable. What I don't get is that the second one gives exactly the same result as this:
float wow = (float) 1.123562641346;
I mean, is the second one just exactly the same as the (float) cast? Does the "f" just stand for "explicitly convert to a float"?
If it doesn't mean "explicitly convert to float", then I don't know why it doesn't give an error, since there isn't an implicit conversion for it.
I really can't find any resources that seem to explain this. The only answer I can find is that the "f" just means the value is of the float data type, but that still doesn't explain why I can give it something with 13 decimals and it converts it to the 7 decimals expected, while the first version doesn't work and isn't converted automatically.
Implicitly converting a double to a float is potentially a data-losing operation, so the compiler makes it an error.
Explicitly converting it means that the programmer has taken control, so it does not provoke an error.
Note that in your example, the double value does indeed lose data when it's converted to float as the following demonstrates:
double d = 1.123562641346;
Console.WriteLine(d.ToString("f16")); // 1.1235626413460000
float f1 = (float)1.123562641346;
Console.WriteLine(f1.ToString("f16")); // 1.1235626935958862
float f2 = 1.123562641346f;
Console.WriteLine(f2.ToString("f16")); // 1.1235626935958862
The compiler is trying to prevent the programmer from writing code that causes accidental data loss.
Note that it does NOT warn that float f2 = 1.123562641346f; is trying to initialise f2 with a value that it cannot actually represent. The same thing can happen with double initialisations - the compiler won't warn about assigning a double that can't actually be represented exactly.
The numeric value on the right of the "=" when initialising a floating point number is known as a "real-literal".
The C# Standard says this about converting the value of a real-literal to a floating point value:
The value of a real literal of type float or double is determined by
using the IEEE “round to nearest” mode.
This rounding is performed without provoking a compile error.
Your understanding is correct: the f at the end of the number indicates that it's a float, so it will be treated as a float, and when you assign this float value to a float variable you will not get conversion errors.
If there is no f at the end of a number with decimals, then by default the value is handled as a double, and once you assign this double value to a float variable you get an error because of potential data loss.
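A small sketch of the three cases; the commented-out line is the one the compiler rejects (it reports error CS0664):
double d = 1.123562641346;              // fine: a real literal is a double by default
// float bad = 1.123562641346;          // CS0664: double literal cannot be implicitly converted to float
float viaSuffix = 1.123562641346f;      // fine: the f suffix makes the literal a float
float viaCast = (float)1.123562641346;  // fine: explicit conversion, stores the same rounded value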
Read more here: https://answers.unity.com/questions/423675/why-is-there-sometimes-an-f-behinf-a-number.html

Why do decimals give a compile time error on division by zero? [duplicate]

This question already has answers here:
Why does this method return double.PositiveInfinity not DivideByZeroException?
(4 answers)
Closed 6 years ago.
As a follow-up to my previous question, I noticed odd behaviour (in the console) when dividing by zero. I found that the following two statements compile fine:
Console.WriteLine(1d / 0d);
Console.WriteLine(1f / 0f);
Whereas these two give a compile-time error
Console.WriteLine(1 / 0);
Console.WriteLine(1m / 0m);
of:
Division by constant zero
Why is there this difference in behaviour?
Division by 0 is allowed for the float and double types: it returns infinity (or NaN for 0 / 0).
For Int32 and Decimal it is not allowed; at runtime it throws a DivideByZeroException.
The compiler doesn't allow division by 0 in constant expressions because there is no value it could evaluate them to.
public class MyConsts
{
    public const int i = 1 / 0; // constant, compile-time evaluation
}
...
Console.WriteLine(MyConsts.i); // What would you expect?
The compiler is unable to compute a proper value for your constant expression. Keep in mind that the value is computed at compile time, not evaluated at runtime, so it's not possible to raise an exception.
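A short sketch of the runtime behaviour, assuming the divisor is a variable so the compile-time check cannot fire:
Console.WriteLine(1d / 0d);  // positive infinity
Console.WriteLine(1f / 0f);  // positive infinity
Console.WriteLine(0d / 0d);  // NaN

int zero = 0;                // a variable, so 1 / zero is not a constant expression
try
{
    Console.WriteLine(1 / zero);   // compiles fine, but throws at runtime
}
catch (DivideByZeroException ex)
{
    Console.WriteLine(ex.Message); // "Attempted to divide by zero."
}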

Why is (1/90) = 0? [duplicate]

This question already has answers here:
C# is rounding down divisions by itself
(10 answers)
Closed 8 years ago.
I am working in Unity3D with C# and I get a weird result. Can anyone tell me why my code equals 0?
float A = 1 / 90;
The literals 1 and 90 are interpreted as ints, so integer division is used. After that the result is converted to a float.
In general, C# reads any sequence of digits without a decimal dot as an int. An int will be converted to a float when necessary, but before the assignment that's not necessary, so all calculations in between are done on ints.
In other words, what you've written is:
float A = (float)(((int) 1) / ((int) 90));
(made explicit here; this is more or less what the compiler reads).
Now a division of two ints keeps only the integral part of the quotient. The integral part of 0.0111… is 0, thus zero.
If, however, you make one or both of the literals floating point (1f, 1.0f, 90f, ...), this will work. Thus use one of these:
float A = 1/90.0f;
float A = 1.0f/90;
float A = 1.0f/90.0f;
In that case, floating-point division will be performed, which takes the fractional part into account as well.
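A minimal snippet to verify the difference (printed output assumes modern .NET's default formatting):
float intDiv = 1 / 90;    // integer division yields 0, which is then converted to 0f
float floatDiv = 1f / 90; // the 1f makes this a float division
Console.WriteLine(intDiv);   // 0
Console.WriteLine(floatDiv); // 0.011111111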

float doesn't register [duplicate]

This question already has answers here:
Division returns zero
(8 answers)
Closed 8 years ago.
Example:
float bVarianza = (499 / 500);
Output when debugging: bVarianza = 0.0
What am I missing?
You are dividing integers, not floats.
Use float bVarianza = (499f / 500f); instead.
Your expression is evaluated as
float x = (float)(int / int).
After your integers have been divided (which results in 0 because integers don't have a fractional part), the result is stored in your variable of type float, which adds the .0 fractional part.
You divide an int by an int, so the answer is truncated to an int. That is, the expression 499/500 is evaluated to 0. Then you store 0 in a float, so it becomes 0.0.
If instead you say 499F / 500, then the expression itself will be a float, and you'll get a fractional result.
According to MSDN:
The division rounds the result towards zero, and the absolute value of
the result is the largest possible integer that is less than the
absolute value of the quotient of the two operands. The result is zero
or positive when the two operands have the same sign and zero or
negative when the two operands have opposite signs.
So if you want to get a float as the result, you should cast the types, like
float bVarianza = (float)499 / (float)500;
In the case when one of the operands is already a real number, you only need to cast the result:
float bVarianza = (float)(499 / 500.0);
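For comparison, a sketch of the variants side by side (the second line shows why casting after the division is too late):
Console.WriteLine(499 / 500);          // 0: integer division truncates
Console.WriteLine((float)(499 / 500)); // 0: the cast happens after the truncation
Console.WriteLine(499f / 500);         // 0.998: one float operand promotes the division
Console.WriteLine((float)499 / 500);   // 0.998: same effect via an explicit cast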

What is the best practice to make division return double in C#

In C#, when you want to divide the result of a method call such as the one below, what is the best way to force it to return a double value rather than the default integer?
(int)Math.Ceiling((double)(System.DateTime.DaysInMonth(2009, 1) / 7));
As you can see I need the division to return a double so I can use the ceiling function.
A division of two int values returns an int, truncating any fractional part. This is generally true for other data types as well: arithmetic operations don't change the type of their operands.
To enforce a certain return type, you must therefore convert the operands appropriately. In your particular case, it's actually sufficient to convert one of the operands to double: that way, C# will perform the conversion for the other operand automatically.
You've got a choice: you can explicitly convert either operand. However, since the second operand is a literal, it's better just to make that literal the correct type directly.
This can be done either by using a type suffix (d in the case of double) or by writing a decimal point after it. The latter way is generally preferred. In your case:
(int)Math.Ceiling(System.DateTime.DaysInMonth(2009, 1) / 7.0);
Notice that this decimal point notation always yields a double. To make a float, you need to use its type suffix: 7f.
This behaviour of the fundamental operators is the same for nearly all languages out there, by the way. One notable exception: VB, where the division operator generally yields a Double; there's a special integer division operator (\) if that conversion is not desired. Another exception concerns C++ in a weird way: the difference between two pointers of the same type is a ptrdiff_t. This makes sense, but it breaks the schema that an operator always yields the same type as its operands; in particular, subtracting two unsigned ints does not yield a signed int.
Change the 7 to a double:
(int) Math.Ceiling(System.DateTime.DaysInMonth(2009, 1) / 7.0);
just divide with a literal double:
(int)Math.Ceiling((System.DateTime.DaysInMonth(2009, 1) / 7.0))
As far as I know, you can't force a function to return a different type, so casting the result is your best bet. Casting the result of the function to a double and then dividing should do the trick.
To expand upon Konrad's answer...
Changing 7 to 7.0, 7D, or 7M will all get you the answer you want as well.
For a completely different approach that avoids casting and floating-point math altogether...
int result = val1 / val2;
if (val1 % val2 != 0) result++;
So, in your case...
int numDaysInMonth = System.DateTime.DaysInMonth(2009, 1);
int numWeeksInMonth = numDaysInMonth / 7;
if (numDaysInMonth % 7 != 0) numWeeksInMonth++;
This approach is quite verbose, but there might be some special cases where it is preferable. Technically, there should be a slight performance advantage to the modulus approach, but you'll be hard-pressed to measure it.
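A quick check that both approaches agree for the example month (variable names here are just illustrative):
int days = System.DateTime.DaysInMonth(2009, 1);     // 31
int viaCeiling = (int)Math.Ceiling(days / 7.0);      // ceiling of 4.43 is 5
int viaModulus = days / 7 + (days % 7 != 0 ? 1 : 0); // 4 + 1 = 5
Console.WriteLine($"{viaCeiling} {viaModulus}");     // 5 5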
In your case, I'd stick with the accepted answer :-)
