How to check if a Double can be converted into an Int32? - C#

I have a double value that I'd like to convert to an Int32. How can I check, before converting, whether the conversion will succeed?
Sometimes the value is undefined, and converting it to Int32 throws an OverflowException.
I already tried to test it this way:
double value = getSomeValue();
if (value == Double.NaN) {
    value = 0;
}
int v = Convert.ToInt32(value);
But this does not cover all cases.

Maybe this?
Update: I believe the code below addresses the edge cases. I've tested it against every case I could think of, verifying the output against a method that attempts Convert.ToInt32 directly and catches the exception.
static bool TryConvertToInt32(double value, out int result)
{
    const double Min = int.MinValue - 0.5;
    const double Max = int.MaxValue + 0.5;
    // Notes:
    // 1. double.IsNaN is needed for the exclusion because NaN compares
    //    false for <, >=, etc. against every value (including itself).
    // 2. value < Min is correct because -2147483648.5 rounds to int.MinValue.
    // 3. value >= Max is correct because 2147483647.5 rounds to int.MaxValue + 1.
    if (double.IsNaN(value) || value < Min || value >= Max)
    {
        result = 0;
        return false;
    }
    result = Convert.ToInt32(value);
    return true;
}

Check Double.IsNaN, and make sure the value is between int.MinValue and int.MaxValue.
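A minimal sketch of that check (getSomeValue is the hypothetical source from the question; note that values just above int.MaxValue or just below int.MinValue that would still round into range are rejected, as discussed in a later answer):
double value = getSomeValue();
int result = 0;
// NaN must be excluded explicitly: it compares false against everything, including itself
if (!double.IsNaN(value) && value >= int.MinValue && value <= int.MaxValue)
{
    result = Convert.ToInt32(value);
}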

You could compare against the range of an Int32.
if (value <= (double)Int32.MaxValue && value >= (double)Int32.MinValue)
    return (Int32)value;
return 0;
Of course, if you want to return the Max/Min value when the double is too large or too small, you could do this:
if (value <= (double)Int32.MaxValue && value >= (double)Int32.MinValue)
    return (Int32)value;
if (value > (double)Int32.MaxValue)
    return Int32.MaxValue;
if (value < (double)Int32.MinValue)
    return Int32.MinValue;
return 0;

Try something like this:
double d = Double.NaN;
int i;
if (Int32.TryParse(d.ToString(), out i))
{
    Console.WriteLine("Success");
    Console.WriteLine(i);
}
else
{
    Console.WriteLine("Fail");
}

Unless you absolutely need the performance, what about using exception handling?
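For example, a minimal sketch of that approach (the method name here is mine):
static bool TryConvertToInt32Catching(double value, out int result)
{
    try
    {
        // Convert.ToInt32 throws OverflowException for NaN, the infinities,
        // and values that round outside the Int32 range.
        result = Convert.ToInt32(value);
        return true;
    }
    catch (OverflowException)
    {
        result = 0;
        return false;
    }
}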

You could try something like this:
(value >= Int32.MinValue) && (value <= Int32.MaxValue)
This will probably falsely reject values which are outside the value range of int but get rounded into it, so you might want to extend the interval a bit. For example, Int32.MaxValue + 0.1 gets rejected.
How do you want to treat non-integral doubles? This check accepts them, and the subsequent conversion silently discards the fractional part (a cast truncates, Convert.ToInt32 rounds). The suggestions based on int.TryParse(value.ToString(), ...) will consider such doubles invalid.
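A quick illustration of that difference:
double d = 2.5;
Console.WriteLine(d >= Int32.MinValue && d <= Int32.MaxValue); // True  (the range check accepts it)
Console.WriteLine(Int32.TryParse("2.5", out int parsed));      // False ("2.5" is not an integer)
Console.WriteLine(Convert.ToInt32(d));                         // 2     (rounds, midpoint to even)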

Related

How do I prevent an int from becoming negative after subtracting

I am trying to subtract a value from an int which has a set value. Once it reaches 0 and I subtract more from it, the value becomes negative.
I thought an if-else statement checking whether the value goes below 0 would prevent it from becoming negative, but the value still goes negative. How do I prevent the value from going into the negative range?
{
    DateTime start = dateTimePicker2.Value.Date;
    DateTime end = dateTimePicker1.Value.Date;
    TimeSpan difference = end - start;
    int days = difference.Days;
    int min = 0;
    int max = 21;
    int rdays = Convert.ToInt32(Holidays_Number_lbl.Text);
    Holidays_Number_lbl.Text = (rdays - days).ToString();
    int Holidays_Number = int.Parse(Holidays_Number_lbl.Text);
    if ((Holidays_Number > min) && (Holidays_Number < max))
    {
        MessageBox.Show("Holidays have been counted");
    }
    else
    {
        MessageBox.Show(" You have exceeded your 21 holidays "); // value goes into the minus?
    }
}
Expected result: a MessageBox appears saying you have exceeded your days, and the value doesn't go negative.
Actual result: the value goes negative.
You can specify zero as the lowest possible value with Math.Max() on the line where you do the arithmetic:
Holidays_Number_lbl.Text = (Math.Max(rdays - days, 0)).ToString();
However, you're converting to a string and then back to a number. Something like this would eliminate the need for int.Parse:
...
int Holidays_Number = Math.Max(rdays - days, 0);
Holidays_Number_lbl.Text = Holidays_Number.ToString();
if ((Holidays_Number > min) && (Holidays_Number < max))
{
...
The line int Holidays_Number = int.Parse(Holidays_Number_lbl.Text); sets the value of Holidays_Number, and the next line only checks it with an if statement. The if statement does not change the value, it just tests it, so if it is below 0 it will remain below 0.
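Putting the two points together, the handler might look something like this (a sketch that reuses the control names from the question; the clamping happens before anything is displayed):
DateTime start = dateTimePicker2.Value.Date;
DateTime end = dateTimePicker1.Value.Date;
int days = (end - start).Days;
int min = 0;
int max = 21;
int rdays = Convert.ToInt32(Holidays_Number_lbl.Text);
// clamp before displaying, so the label never shows a negative number
int Holidays_Number = Math.Max(rdays - days, 0);
Holidays_Number_lbl.Text = Holidays_Number.ToString();
if ((Holidays_Number > min) && (Holidays_Number < max))
{
    MessageBox.Show("Holidays have been counted");
}
else
{
    MessageBox.Show("You have exceeded your 21 holidays");
}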

Simplify if-clauses when checking for range and setting default value

I have a function that converts a double value into an sbyte and returns its hex representation:
string convertToSByte(double num, double factor)
{
    double _Value = num * factor;
    if (_Value > 127)
    {
        _Value = 127;
    }
    else if (_Value < -127)
    {
        _Value = -127;
    }
    return Convert.ToSByte(_Value).ToString("X2");
}
The calculated _Value is supposed to be within the range [-127; 127]; if it is not, it has to be clamped to those default limits.
Question: How can these two if conditions and the setting of the default values be simplified?
EDIT:
I tried using the conditional operator ?:, but it is actually not much simpler (even a little harder to read) and not really less code.
P.S. This question serves more of an educational purpose: to find a different way to check the range of a variable.
You could use Min/Max:
string convertToSByte(double num, double factor)
{
    var value = Math.Min(127, Math.Max(-127.0, num * factor));
    return Convert.ToSByte(value).ToString("X2");
}
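If you're on a runtime that has it (.NET Core 2.0 or later / .NET Standard 2.1), Math.Clamp expresses the same idea in one call; a sketch using the ±127 bounds from the question:
string convertToSByte(double num, double factor)
{
    // clamp to [-127, 127] first, then round/convert and format as two hex digits
    double value = Math.Clamp(num * factor, -127.0, 127.0);
    return Convert.ToSByte(value).ToString("X2");
}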

Why Convert.ToInt32(1.0/0.00004) != (Int32)(1.0/0.00004)

Why does this code (http://ideone.com/YRcICG)
void Main()
{
    double a = 0.00004;
    Int32 castToInt = (Int32)(1.0/a);
    Int32 convertToInt = Convert.ToInt32(1.0/a);
    Console.WriteLine("{0} {1:F9} {2:F9}", castToInt == convertToInt, castToInt, convertToInt);
    Console.WriteLine((((int)(1.0/(1.0/25000))) == 24999));
}
result in
False 24999,000000000 25000,000000000
True
in the context of the CLR/C# implementation?
The trick lies in the way the double is represented: (1.0/a) is actually
(1.0/a) = 24999.99999999999636202119290828704833984375
When you cast, you get only the integral part of this number, while Convert.ToInt32 works differently. Here is the code for the Convert method:
public static int ToInt32(double value)
{
    if (value >= 0.0)
    {
        if (value < 2147483647.5)
        {
            int num = (int)value;
            double num2 = value - (double)num;
            if (num2 > 0.5 || (num2 == 0.5 && (num & 1) != 0))
            {
                num++;
            }
            return num;
        }
    }
    else
    {
        if (value >= -2147483648.5)
        {
            int num3 = (int)value;
            double num4 = value - (double)num3;
            if (num4 < -0.5 || (num4 == -0.5 && (num3 & 1) != 0))
            {
                num3--;
            }
            return num3;
        }
    }
    throw new OverflowException(Environment.GetResourceString("Overflow_Int32"));
}
As you can see, there is an if statement that checks the difference between the cast value and the original double. In your example,
int num = (int)value;
double num2 = value - (double)num;
gives 24999.99999999999636202119290828704833984375 - 24999 > 0.5, hence you get the increment.
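That (num & 1) check is what gives Convert.ToInt32 its round-half-to-even ("banker's rounding") behaviour, which is easy to see on exact midpoints:
Console.WriteLine(Convert.ToInt32(2.5)); // 2  (midpoint goes to the even neighbour)
Console.WriteLine(Convert.ToInt32(3.5)); // 4
Console.WriteLine((int)2.5);             // 2  (a cast always truncates toward zero)
Console.WriteLine((int)3.5);             // 3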
In your calculation, the result of 1.0/0.00004 comes out as a value very slightly smaller than 25000, because floating-point numbers can't precisely represent all possible values. Given that,
Why do the two integers have different values?
Casting a double to an int truncates the value, so everything after the decimal point is discarded.
Convert.ToInt32 on a double rounds to the nearest integer.
Why are floating point numbers imprecise?
See the excellent article linked in another answer: What Every Computer Scientist Should Know About Floating-Point Arithmetic
How can I represent values precisely?
You could use a decimal type rather than double. There are pros and cons to doing this, see decimal vs double! - Which one should I use and when?
A cast truncates the floating-point number:
(Int32)41.548 == 41
Convert rounds the number (by design):
Convert.ToInt32(41.548) == 42
Floating point math is not exact. Simple values like 0.2 cannot be precisely represented using binary floating point numbers, and the limited precision of floating point numbers means that slight changes in the order of operations can change the result. A must read:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
The IEEE standard divides exceptions into 5 classes: overflow, underflow, division by zero, invalid operation and inexact. There is a separate status flag for each class of exception. The meaning of the first three exceptions is self-evident. Invalid operation covers the situations listed in TABLE D-3, and any comparison that involves a NaN.
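A two-line illustration of that inexactness:
Console.WriteLine(0.1 + 0.2 == 0.3);          // False
Console.WriteLine((0.1 + 0.2).ToString("R")); // 0.30000000000000004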

Counting Precision Digits

How do I count the precision digits of a C# decimal type?
e.g. 12.001 = 3 precision digits.
I would like to throw an error if a precision greater than x is present.
Thanks.
public int CountDecPoint(decimal d)
{
    string[] s = d.ToString().Split('.');
    return s.Length == 1 ? 0 : s[1].Length;
}
Normally the decimal separator is ., but to deal with different cultures this code is better:
public int CountDecPoint(decimal d)
{
    string[] s = d.ToString().Split(Application.CurrentCulture.NumberFormat.NumberDecimalSeparator[0]);
    return s.Length == 1 ? 0 : s[1].Length;
}
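For example, with the value from the question:
Console.WriteLine(CountDecPoint(12.001m)); // 3
Console.WriteLine(CountDecPoint(42m));     // 0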
You can get the "scale" of a decimal like this:
static byte GetScale(decimal d)
{
    return BitConverter.GetBytes(decimal.GetBits(d)[3])[2];
}
Explanation: decimal.GetBits returns an array of four int values of which we take only the last one. As described on the linked page, we need only the second to last byte from the four bytes that make up this int, and we do that with BitConverter.GetBytes.
Examples: The scale of the number 3.14m is 2. The scale of 3.14000m is 5. The scale of 123456m is 0. The scale of 123456.0m is 1.
If the code may run on a big-endian system, you likely have to change this to BitConverter.GetBytes(decimal.GetBits(d)[3])[BitConverter.IsLittleEndian ? 2 : 1] or something similar. I have not tested that. See the comments by relatively_random below.
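The examples above as a quick check:
Console.WriteLine(GetScale(3.14m));     // 2
Console.WriteLine(GetScale(3.14000m));  // 5
Console.WriteLine(GetScale(123456m));   // 0
Console.WriteLine(GetScale(123456.0m)); // 1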
I know I'm resurrecting an ancient question, but here's a version that doesn't rely on string representations and actually ignores trailing zeros. If that's even desired, of course.
public static int GetMinPrecision(this decimal input)
{
    if (input < 0)
        input = -input;
    int count = 0;
    input -= decimal.Truncate(input);
    while (input != 0)
    {
        ++count;
        input *= 10;
        input -= decimal.Truncate(input);
    }
    return count;
}
I would like to throw an error if a precision greater than x is present
This looks like the simplest way:
void AssertPrecision(decimal number, int decimals)
{
    if (number != decimal.Round(number, decimals, MidpointRounding.AwayFromZero))
        throw new Exception();
}
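For example, with the 12.001 value from the question:
AssertPrecision(12.001m, 3); // fine: rounding to 3 decimals changes nothing
AssertPrecision(12.001m, 2); // throws: 12.001m != 12.00m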

Custom method for returning decimal places shows odd behavior

I am writing a simple method that will calculate the number of decimal places in a decimal value. The method looks like this:
public int GetDecimalPlaces(decimal decimalNumber) {
    try {
        int decimalPlaces = 1;
        double powers = 10.0;
        if (decimalNumber > 0.0m) {
            while (((double)decimalNumber * powers) % 1 != 0.0) {
                powers *= 10.0;
                ++decimalPlaces;
            }
        }
        return decimalPlaces;
I have run it against some test values to make sure that everything is working fine but am getting some really weird behavior back on the last one:
int test = GetDecimalPlaces(0.1m);
int test2 = GetDecimalPlaces(0.01m);
int test3 = GetDecimalPlaces(0.001m);
int test4 = GetDecimalPlaces(0.0000000001m);
int test5 = GetDecimalPlaces(0.00000000010000000001m);
int test6 = GetDecimalPlaces(0.0000000001000000000100000000010000000001000000000100000000010000000001000000000100000000010000000001m);
Tests 1-5 work fine, but test6 returns 23. I know that the value being passed in exceeds the maximum decimal precision, but why 23? The other odd thing: when I put a breakpoint inside GetDecimalPlaces after the call from test6, the value of decimalNumber inside the method comes through as the same value test5 would have passed (20 decimal places); yet even though the value passed in has 20 decimal places, 23 is returned.
Maybe it's just because I'm passing in a number that has way too many decimal places and things go wonky, but I want to make sure I'm not missing something fundamentally wrong here that might throw off calculations for other values later down the road.
The number you're actually testing is this:
0.0000000001000000000100000000
That's the closest exact decimal value to 0.0000000001000000000100000000010000000001000000000100000000010000000001000000000100000000010000000001.
So the correct answer is actually 20. However, your code is giving you 23 because you're using binary floating point arithmetic, for no obvious reason. That's going to be introducing errors into your calculations, completely unnecessarily. If you change to use decimal consistently, it's fine:
public static int GetDecimalPlaces(decimal decimalNumber) {
    int decimalPlaces = 1;
    decimal powers = 10.0m;
    if (decimalNumber > 0.0m) {
        while ((decimalNumber * powers) % 1 != 0.0m) {
            powers *= 10.0m;
            ++decimalPlaces;
        }
    }
    return decimalPlaces;
}
(Suggestion) You could calculate that this way:
public static int GetDecimalPlaces(decimal decimalNumber)
{
    var s = decimalNumber.ToString();
    return s.Substring(s.IndexOf(CultureInfo.CurrentCulture.NumberFormat.NumberDecimalSeparator) + 1).Length;
}
There is another way to do this, and it probably works faster because it uses the remainder operation only if the decimal number has a "trailing zeros" problem.
The basic idea:
In .NET, any decimal is stored in memory in the form
m * 10^(-p)
where m is the mantissa (96 bits) and p is the scale (a value from 0 to 28).
The decimal.GetBits method retrieves this representation from the decimal struct and returns it as an array of int (of length 4).
Using this data we can construct another decimal. If we use only the mantissa, without the 10^(-p) part, the result will be an integral decimal. And if this integral decimal is divisible by 10, then our source number has one or more trailing zeros.
So here is my code:
static int GetDecimalPlaces(decimal value)
{
    // getting the raw decimal structure
    var raw = decimal.GetBits(value);
    // getting the current decimal point position (the scale)
    int decimalPoint = (raw[3] >> 16) & 0xFF;
    // using the raw data to create an integral decimal with the same mantissa
    // (note: it is always an absolute value, because the sign information
    // of the source number is not used)
    decimal integral = new decimal(raw[0], raw[1], raw[2], false, 0);
    // stripping trailing zeros
    while (integral > 0 && integral % 10 == 0)
    {
        decimalPoint--;
        integral /= 10;
    }
    // returning the answer
    return decimalPoint;
}
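A couple of quick checks showing that trailing zeros are ignored:
Console.WriteLine(GetDecimalPlaces(1.2300m));                 // 2 (the two trailing zeros are stripped)
Console.WriteLine(GetDecimalPlaces(0.00000000010000000001m)); // 20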
