Denormalized numbers in C#

I recently came across the definition of denormalized numbers, and I understand that there are some numbers that cannot be represented in normalized form because they are too small to fit into their corresponding type, according to IEEE 754.
So what I was trying to do is catch when a denormalized number is being passed as a parameter, to avoid calculations with these numbers. If I understand correctly, I just need to look for numbers within the denormalized range:
private bool IsDenormalizedNumber(float number)
{
    return Math.Pow(2, -149) <= number && number <= ((2 - Math.Pow(2, -23)) * Math.Pow(2, -127)) ||
           Math.Pow(-2, -149) <= number && number <= -((2 - Math.Pow(2, -23)) * Math.Pow(2, -127));
}
Is my interpretation correct?

I think a better approach would be to inspect the bits. Normalized or denormalized is a characteristic of the binary representation, not of the value itself. Therefore, you will be able to detect it more reliably this way, and you can do so without any potentially dangerous floating point comparisons.
I put together some runnable code for you, so that you can see it work. I adapted this code from a similar question regarding doubles. Detecting the denormal is much simpler than fully extracting the exponent and significand, so I was able to simplify the code greatly.
As for why it works: the exponent is stored in offset (biased) notation. The 8 bits of the exponent can take the values 1 to 254 (0 and 255 are reserved for special cases); they are then offset-adjusted by -127, yielding the normalized range of -126 (1 - 127) to 127 (254 - 127). The stored exponent field is 0 in the denormal case. I think this is only required because .NET does not store the leading bit of the significand. According to IEEE 754, it can be stored either way. It appears that C# has opted for dropping it, though I don't have any concrete details to back that observation.
In any case, the actual code is quite simple. All that is required is to extract the 8 bits storing the exponent and test for 0. There is a special case around 0, which is handled below.
NOTE: Per the comment discussion, this code relies on platform-specific implementation details (x86_64 in this test case). As @ChiuneSugihara pointed out, the CLI does not ensure this behavior and it may differ on other platforms, such as ARM.
using System;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("-120, denormal? " + IsDenormal((float)Math.Pow(2, -120)));
            Console.WriteLine("-126, denormal? " + IsDenormal((float)Math.Pow(2, -126)));
            Console.WriteLine("-127, denormal? " + IsDenormal((float)Math.Pow(2, -127)));
            Console.WriteLine("-149, denormal? " + IsDenormal((float)Math.Pow(2, -149)));
            Console.ReadKey();
        }

        public static bool IsDenormal(float f)
        {
            // when 0, the exponent will also be 0 and will break
            // the rest of this algorithm, so we should check for
            // this first
            if (f == 0f)
            {
                return false;
            }

            // Get the bits
            byte[] buffer = BitConverter.GetBytes(f);
            int bits = BitConverter.ToInt32(buffer, 0);

            // extract the exponent, 8 bits in the upper registers,
            // above the 23 bit significand
            int exponent = (bits >> 23) & 0xff;

            // check and see if anything is there!
            return exponent == 0;
        }
    }
}
The output is:
-120, denormal? False
-126, denormal? False
-127, denormal? True
-149, denormal? True
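As an aside, on newer runtimes you can skip the bit twiddling entirely; the sketch below is my own and assumes .NET Core 3.0 or later, where the framework exposes float.IsSubnormal:

// Minimal sketch; assumes a runtime that provides float.IsSubnormal (.NET Core 3.0+).
using System;
class SubnormalCheck
{
    static void Main()
    {
        Console.WriteLine(float.IsSubnormal((float)Math.Pow(2, -126))); // False (smallest normalized float)
        Console.WriteLine(float.IsSubnormal((float)Math.Pow(2, -127))); // True
        Console.WriteLine(float.IsSubnormal(0f));                       // False
    }
}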
Sources:
extracting mantissa and exponent from double in c#
https://en.wikipedia.org/wiki/IEEE_floating_point
https://en.wikipedia.org/wiki/Denormal_number
http://csharpindepth.com/Articles/General/FloatingPoint.aspx
Code adapted from:
extracting mantissa and exponent from double in c#

From my understanding, denormalized numbers are there to help with underflow in some cases (see the answer to Denormalized Numbers - IEEE 754 Floating Point).
So to get a denormalized number you would need to explicitly create one or else cause an underflow. In the first case it seems unlikely that a literal denormalized number would be specified in code, and even if someone tried it I am not sure that .NET would allow it. In the second case, as long as you are in a checked context you should get an OverflowException thrown for any overflow or underflow in an arithmetic computation, so that would guard against the possibility of getting a denormalized number. In an unchecked context I am not sure if an underflow will take you to a denormalized number, but you can try it and see if you are wanting to run calculations in unchecked.
Long story short: you need not worry about it if you are running in a checked context, and you can try an underflow and see what happens if you want to run in an unchecked context.
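If you want to try that experiment, here is a minimal sketch of my own (not from the original question) showing that a float underflow never throws; it just slides into the denormalized range and, eventually, to zero:

using System;
class UnderflowDemo
{
    static void Main()
    {
        float smallestNormal = (float)Math.Pow(2, -126); // ~1.175e-38, smallest normalized float
        float denormal = smallestNormal / 2f;            // quietly underflows into the denormalized range
        float gone = denormal * 1e-8f;                   // underflows all the way to zero
        Console.WriteLine(denormal);                     // ~5.88e-39, no exception is thrown
        Console.WriteLine(gone);                         // 0
    }
}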
EDIT
I wanted to update my answer since a comment didn't feel substantial enough. First off, I have struck out the comment I made about the checked context, since checked only applies to integral calculations (like int) and not to float or double. That was my mistake on that one.
The issue with denormalized numbers is that they are not handled consistently in the CLI. Notice how I am using "CLI" and not "C#", because we need to go a level lower than C# to understand the issue. From the Common Language Infrastructure Annotated Standard, Partition I, Section 12.1.3, the second note (page 125 of the book) states:
This standard does not specify the behavior of arithmetic operations on denormalized floating point numbers, nor does it specify when or whether such representations should be created. This is in keeping with IEC 60559:1989. In addition, this standard does not specify how to access the exact bit pattern of NaNs that are created, nor the behavior when converting a NaN between 32-bit and 64-bit representation. All of this behavior is deliberately left implementation specific.
So at the CLI level the handling of denormalized numbers is deliberately left implementation specific. Furthermore, if you look at the documentation for float.Epsilon (found here), which is the smallest positive number representable by a float, you will see that on most machines it is a denormalized number matching what is listed in the documentation (approximately 1.4e-45). This is most likely what @Kevin Burdett was seeing in his answer. That being said, if you scroll down farther on the page you will see the following quote under "Platform Notes":
On ARM systems, the value of the Epsilon constant is too small to be detected, so it equates to zero. You can define an alternative epsilon value that equals 1.175494351E-38 instead.
So there are portability issues that can come into play when you are manually handling denormalized numbers, even just for the .NET CLR (which is an implementation of the CLI). In fact, this ARM-specific value is kind of interesting since it appears to be a normalized number (I used the function from @Kevin Burdett with IsDenormal(1.175494351E-38f) and it returned false). In the CLI proper the concerns are more severe, since there is no standardization of their handling, by design, according to the annotation in the CLI standard. So this leaves questions about what would happen with the same code on Mono or Xamarin, for instance, which is a different implementation of the CLI than the .NET CLR.
In the end I am right back to my previous advice. Just don't worry about denormalized numbers; they are there to silently help you, and it is hard to imagine why you would need to specifically single them out. Also, as @HansPassant mentioned, you most likely won't even encounter one anyway. It is just hard to imagine how you would go under the smallest positive normalized number in double, which is absurdly small.

Related

Why .Net supports 'infinity' value in floats? [duplicate]

I'm just curious: why in IEEE-754 does any non-zero float number divided by zero result in an infinite value? It's a nonsense from the mathematical perspective. So I think that the correct result for this operation is NaN.
The function f(x) = 1/x is not defined when x = 0, if x is a real number. For example, the function sqrt is not defined for any negative number, and sqrt(-1.0f) in IEEE-754 produces a NaN value. But 1.0f/0 is Inf.
But for some reason this is not the case in IEEE-754. There must be a reason for this, maybe some optimization or compatibility reasons.
So what's the point?
It's a nonsense from the mathematical perspective.
Yes. No. Sort of.
The thing is: Floating-point numbers are approximations. You want to use a wide range of exponents and a limited number of digits and get results which are not completely wrong. :)
The idea behind IEEE-754 is that every operation could trigger "traps" which indicate possible problems. They are
Illegal (senseless operation like sqrt of negative number)
Overflow (too big)
Underflow (too small)
Division by zero (The thing you do not like)
Inexact (This operation may give you wrong results because you are losing precision)
Now many people, like scientists and engineers, do not want to be bothered with writing trap routines. So Kahan, the primary architect of IEEE-754, decided that every operation should also return a sensible default value if no trap routines exist.
They are
NaN for illegal values
signed infinities for Overflow
signed zeroes for Underflow
NaN for indeterminate results (0/0) and infinities for division by zero (x/0 with x != 0)
normal operation result for Inexact
The thing is that in 99% of all cases zeroes are caused by underflow, and therefore in 99% of all cases Infinity is "correct" even if it is wrong from a mathematical perspective.
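To see those defaults from C#, here is a small illustration of my own (the exact text printed for the infinities can vary between runtimes):

using System;
class Ieee754Defaults
{
    static void Main()
    {
        Console.WriteLine(Math.Sqrt(-1.0)); // NaN        (illegal operation)
        Console.WriteLine(1e308 * 10.0);    // Infinity   (overflow)
        Console.WriteLine(1e-320 / 1e10);   // 0          (underflow)
        Console.WriteLine(1.0 / 0.0);       // Infinity   (division by zero)
        Console.WriteLine(0.0 / 0.0);       // NaN        (indeterminate)
        Console.WriteLine(1.0 / 3.0);       // 0.333...   (inexact, but an ordinary result)
    }
}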
I'm not sure why you would believe this to be nonsense.
The simplistic definition of a / b, at least for non-zero b, is the unique number of bs that has to be subtracted from a before you get to zero.
Expanding that to the case where b can be zero, the number that has to be subtracted from any non-zero number to get to zero is indeed infinite, because you'll never get to zero.
Another way to look at it is to talk in terms of limits. As a positive number n approaches zero, the expression 1 / n approaches "infinity". You'll notice I've quoted that word because I'm a firm believer in not propagating the delusion that infinity is actually a concrete number :-)
NaN is reserved for situations where the number cannot be represented (even approximately) by any other value (including the infinities); it is considered distinct from all those other values.
For example, 0 / 0 (using our simplistic definition above) can have any number of bs subtracted from a to reach 0. Hence the result is indeterminate - it could be 1, 7, 42, 3.14159 or any other value.
Similarly things like the square root of a negative number, which has no value in the real plane used by IEEE754 (you have to go to the complex plane for that), cannot be represented.
In mathematics, division by zero is undefined because zero has no sign, therefore two results are equally possible, and exclusive: negative infinity or positive infinity (but not both).
In (most) computing, 0.0 has a sign. Therefore we know what direction we are approaching from, and what sign the infinity should have. This is especially true when 0.0 represents a non-zero value too small to be expressed by the system, as is frequently the case.
The only time NaN would be appropriate is if the system knows with certainty that the denominator is truly, exactly zero. And it can't unless there is a special way to designate that, which would add overhead.
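A tiny C# sketch of that signed-zero behavior (my own illustration; negative zero is built at run time to sidestep any constant-folding surprises):

using System;
class SignedZero
{
    static void Main()
    {
        double zero = 0.0;
        double negZero = -zero;              // negative zero, produced at run time
        Console.WriteLine(1.0 / zero);       //  Infinity
        Console.WriteLine(1.0 / negZero);    // -Infinity
        Console.WriteLine(zero == negZero);  //  True: they compare equal, but they
                                             //  steer the sign of the infinity
    }
}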
NOTE: I re-wrote this following a valuable comment from @Cubic.
I think the correct answer to this has to come from calculus and the notion of limits. Consider the limit of f(x)/g(x) as x->0 under the assumption that g(0) == 0. There are two broad cases that are interesting here:
If f(0) != 0, then the limit as x->0 is either plus or minus infinity, or it's undefined. If g(x) takes both signs in the neighborhood of x==0, then the limit is undefined (left and right limits don't agree). If g(x) has only one sign near 0, however, the limit will be defined and be either positive or negative infinity. More on this later.
If f(0) == 0 as well, then the limit can be anything, including positive infinity, negative infinity, a finite number, or undefined.
In the second case, generally speaking, you cannot say anything at all. Arguably, in the second case NaN is the only viable answer.
Now in the first case, why choose one particular sign when either is possible or it might be undefined? As a practical matter, it gives you more flexibility in cases where you do know something about the sign of the denominator, at relatively little cost in the cases where you don't. You may have a formula, for example, where you know analytically that g(x) >= 0 for all x - say g(x) = x*x. In that case the limit is defined and it's infinity with sign equal to the sign of f(0). You might want to take advantage of that as a convenience in your code. In other cases, where you don't know anything about the sign of g, you cannot generally take advantage of it, but the cost here is just that you need to trap for a few extra cases - positive and negative infinity - in addition to NaN if you want to fully error check your code. There is some price there, but it's not large compared to the flexibility gained in other cases.
Why worry about general functions when the question was about "simple division"? One common reason is that if you're computing your numerator and denominator through other arithmetic operations, you accumulate round-off errors. The presence of those errors can be abstracted into the general formula format shown above. For example f(x) = x + e, where x is the analytically correct, exact answer, e represents the error from round-off, and f(x) is the floating point number that you actually have on the machine at execution.
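As a quick numerical check of the g(x) = x*x case (my own sketch, not part of the original answer): once x is small enough that x*x underflows to +0, the quotient becomes +Infinity regardless of the sign of x, which agrees with the analytic limit.

using System;
class LimitSketch
{
    static void Main()
    {
        double f0 = 2.0; // f(0) != 0
        foreach (double x in new[] { 1e-3, -1e-3, 1e-200, -1e-200 })
        {
            // for the tiny values, x * x underflows to +0 and the quotient is +Infinity
            Console.WriteLine(f0 / (x * x));
        }
    }
}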

Math "pow" in Java and C# return slightly different results?

I am porting a program from C# to Java. I've found that
Java
Math.pow(0.392156862745098, 1./3.) = 0.7319587495200227
C#
Math.Pow(0.392156862745098, 1.0 / 3.0) = 0.73195874952002271
this last digit leads to sufficient differences in further calculations. Is there any way to emulate c#'s pow?
Thanx
Just to confirm what Chris Shain wrote, I get the same binary values:
// Java
public class Test
{
    public static void main(String[] args)
    {
        double input = 0.392156862745098;
        double pow = Math.pow(input, 1.0/3.0);
        System.out.println(Double.doubleToLongBits(pow));
    }
}

// C#
using System;

public class Test
{
    static void Main()
    {
        double input = 0.392156862745098;
        double pow = Math.Pow(input, 1.0/3.0);
        Console.WriteLine(BitConverter.DoubleToInt64Bits(pow));
    }
}
Output of both: 4604768117848454313
In other words, the double values are exactly the same bit pattern, and any differences you're seeing (assuming you'd get the same results) are due to formatting rather than a difference in value. By the way, the exact value of that double is
0.73195874952002271118800535987247712910175323486328125
Now it's worth noting that distinctly weird things can happen in floating point arithmetic, particularly when optimizations allow 80-bit arithmetic in some situations but not others, etc.
As Henk says, if a difference in the last bit or two causes you problems, then your design is broken.
If your calculations are sensitive to this kind of difference then you will need other measures (a redesign).
this last digit leads to sufficient differences in further calculations
That's impossible, because they're the same number. A double doesn't have enough precision to distinguish between 0.7319587495200227 and 0.73195874952002271; they're both represented as
0.73195874952002271118800535987247712910175323486328125.
The difference is the rounding: Java is using 16 significant digits and C# is using 17. But that's just a display issue.
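If you want to convince yourself that it is only formatting, ask for a round-trip representation. This sketch is mine; it assumes the classic behavior where the default ToString shows fewer digits than the "R" (round-trip) or "G17" formats (on recent .NET versions the default output is already round-trippable):

using System;
class Formatting
{
    static void Main()
    {
        double pow = Math.Pow(0.392156862745098, 1.0 / 3.0);
        Console.WriteLine(pow);                 // default formatting
        Console.WriteLine(pow.ToString("R"));   // round-trip format
        Console.WriteLine(pow.ToString("G17")); // up to 17 significant digits
        Console.WriteLine(BitConverter.DoubleToInt64Bits(pow)); // 4604768117848454313
    }
}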
Both Java and C# return an IEEE floating-point number (specifically, a double) from Math.Pow. The difference that you are seeing is almost certainly due to formatting when you display the number as a decimal. The underlying (binary) value is probably the same, and your math troubles lie elsewhere.
Floating-point arithmetic is inherently imprecise. You are claiming that the C# answer is "better", but neither of them is that accurate. For example, Wolfram Alpha (which is much more accurate) gives these values:
http://www.wolframalpha.com/input/?i=Pow%280.392156862745098%2C+1.0+%2F+3.0%29
If a unit's difference in the 17th digit is causing later computations to go awry, then I think there's a problem with your math, not with Java's implementation of pow. You need to think about how to restructure your computations so that they don't rely on such minor differences.
Seventeen significant digits is the best any IEEE double-precision number can do, regardless of language:
http://en.wikipedia.org/wiki/Double-precision_floating-point_format

Why is the division result between two integers truncated?

All experienced programmers in C# (I think this comes from C) are used to casting one of the integers in a division to get the decimal / double / float result instead of the int (the real result, truncated).
I'd like to know why is this implemented like this? Is there ANY good reason to truncate the result if both numbers are integer?
C# traces its heritage to C, so the answer to "why is it like this in C#?" is a combination of "why is it like this in C?" and "was there no good reason to change?"
The approach of C is to have a fairly close correspondence between the high-level language and low-level operations. Processors generally implement integer division as returning a quotient and a remainder, both of which are of the same type as the operands.
(So my question would be, "why doesn't integer division in C-like languages return two integers", not "why doesn't it return a floating point value?")
The solution was to provide separate operations for division and remainder, each of which returns an integer. In the context of C, it's not surprising that the result of each of these operations is an integer. This is frequently more accurate than floating-point arithmetic. Consider the example from your comment of 7 / 3. This value cannot be represented by a finite binary number nor by a finite decimal number. In other words, on today's computers, we cannot accurately represent 7 / 3 unless we use integers! The most accurate representation of this fraction is "quotient 2, remainder 1".
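For instance, a trivial illustration of that quotient/remainder pair in C#:

int quotient  = 7 / 3;   // 2
int remainder = 7 % 3;   // 1
// together they reconstruct the dividend exactly: 3 * quotient + remainder == 7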
So, was there no good reason to change? I can't think of any, and I can think of a few good reasons not to change. None of the other answers has mentioned Visual Basic which (at least through version 6) has two operators for dividing integers: / converts the integers to double, and returns a double, while \ performs normal integer arithmetic.
I learned about the \ operator after struggling to implement a binary search algorithm using floating-point division. It was really painful, and integer division came in like a breath of fresh air. Without it, there was lots of special handling to cover edge cases and off-by-one errors in the first draft of the procedure.
From that experience, I draw the conclusion that having different operators for dividing integers is confusing.
Another alternative would be to have only one division operation, which always returns a double, and require programmers to truncate it. This means you have to perform two int->double conversions, a truncation, and a double->int conversion every time you want integer division. And how many programmers would mistakenly round or floor the result instead of truncating it? It's a more complicated system, at least as prone to programmer error, and slower.
Finally, in addition to binary search, there are many standard algorithms that employ integer arithmetic. One example is dividing collections of objects into sub-collections of similar size. Another is converting between indices in a 1-d array and coordinates in a 2-d matrix.
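As a sketch of that last example (the width and index values here are hypothetical, purely for illustration):

int width = 640;                   // columns in the 2-d matrix
int index = 100000;                // position in the flat 1-d array
int row = index / width;           // 156
int col = index % width;           // 160
int backAgain = row * width + col; // 100000 again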
As far as I can see, no alternative to "int / int yields int" survives a cost-benefit analysis in terms of language usability, so there's no reason to change the behavior inherited from C.
In conclusion:
Integer division is frequently useful in many standard algorithms.
When the floating-point division of integers is needed, it may be invoked explicitly with a simple, short, and clear cast: (double)a / b rather than a / b
Other alternatives introduce more complication for the programmer and more clock cycles for the processor.
Is there ANY good reason to truncate the result if both numbers are integer?
Of course; I can think of a dozen such scenarios easily. For example: you have a large image, and a thumbnail version of the image which is 10 times smaller in both dimensions. When the user clicks on a point in the large image, you wish to identify the corresponding pixel in the scaled-down image. Clearly to do so, you divide both the x and y coordinates by 10. Why would you want to get a result in decimal? The corresponding coordinates are going to be integer coordinates in the thumbnail bitmap.
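A sketch of that thumbnail case (variable names are mine):

int clickX = 1237, clickY = 842; // pixel the user clicked in the large image
int thumbX = clickX / 10;        // 123 - truncation is exactly what we want here
int thumbY = clickY / 10;        // 84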
Doubles are great for physics calculations and decimals are great for financial calculations, but almost all the work I do with computers that does any math at all does it entirely in integers. I don't want to be constantly having to convert doubles or decimals back to integers just because I did some division. If you are solving physics or financial problems then why are you using integers in the first place? Use nothing but doubles or decimals. Use integers to solve finite mathematics problems.
Calculating on integers is faster (usually) than on floating point values. Besides, all other integer/integer operations (+, -, *) return an integer.
EDIT:
As per the request of the OP, here's some addition:
The OP's problem is that they think of / as division in the mathematical sense, while the / operator in the language performs some other operation (which is not the mathematical division). By this logic they should question the validity of all the other operations (+, -, *) as well, since those have special overflow rules, which are not what would be expected from their mathematical counterparts. If this is bothersome for someone, they should find another language where the operations perform as they expect.
As for the claim on performance difference in favor of integer values: when I wrote the answer I only had "folk" knowledge and "intuition" to back up the claim (hence my "usually" disclaimer). Indeed, as Gabe pointed out, there are platforms where this does not hold. On the other hand I found this link (point 12) that shows mixed performance on an Intel platform (the language used is Java, though).
The takeaway should be that with performance many claims and intuition are unsubstantiated until measured and found true.
Yes, if the end result needs to be a whole number. It would depend on the requirements.
If these are indeed your requirements, then you would not want to store a decimal and then truncate it. You would be wasting memory and processing time to accomplish something that is already built-in functionality.
The operator is designed to return the same type as its inputs.
Edit (comment response):
Why? I don't design languages, but I would assume that most of the time you will be sticking with the data types you started with, and in the remaining instances, what criteria would you use to automatically decide which type the user wants? Would you automatically expect a string when you need it? (sincerity intended)
If you add an int to an int, you expect to get an int. If you subtract an int from an int, you expect to get an int. If you multiply an int by an int, you expect to get an int. So why would you not expect an int result if you divide an int by an int? And if you expect an int, then you will have to truncate.
If you don't want that, then you need to cast your ints to something else first.
Edit: I'd also note that if you really want to understand why this is, then you should start looking into how binary math works and how it is implemented in an electronic circuit. It's certainly not necessary to understand it in detail, but having a quick overview of it would really help you understand how the low-level details of the hardware filter through to the details of high-level languages.

Fourier transform rounding error

I'm messing around with Fourier transformations. Now I've created a class that does an implementation of the DFT (not doing anything like FFT atm). This is the implementation I've used:
public static Complex[] Dft(double[] data)
{
    int length = data.Length;
    Complex[] result = new Complex[length];
    for (int k = 1; k <= length; k++)
    {
        Complex c = Complex.Zero;
        for (int n = 1; n <= length; n++)
        {
            c += Complex.FromPolarCoordinates(data[n - 1], (-2 * Math.PI * n * k) / length);
        }
        result[k - 1] = 1 / Math.Sqrt(length) * c;
    }
    return result;
}
And these are the results I get from Dft({2,3,4})
Well it seems pretty okay, since those are the values I expect. There is only one thing I find confusing. And it all has to do with the rounding of doubles.
First of all, why are the first two numbers not exactly the same: (0,8660..4438) vs (0,8660..443)? And why can't it calculate a zero where you'd expect one? I know 2.8E-15 is pretty close to zero, but it's not zero.
Does anyone know how these marginal errors occur, and whether I can (and should want to) do something about them?
It might seem that there's not a real problem, because they're just small errors. However, how do you deal with these rounding errors if you're, for example, comparing two values?
5,2 + 0i != 5,1961524 + i2.828107*10^-15
Cheers
I think you've already explained it to yourself - limited precision means limited precision. End of story.
If you want to clean up the results, you can do some rounding of your own to a more reasonable number of significant digits - then your zeros will show up where you want them.
To answer the question raised by your comment, don't try to compare floating point numbers directly - use a range:
if (Math.Abs(float1 - float2) < 0.001) {
    // they're the same!
}
The comp.lang.c FAQ has a lot of questions & answers about floating point, which you might be interested in reading.
From http://support.microsoft.com/kb/125056
Emphasis mine.
There are many situations in which precision, rounding, and accuracy in floating-point calculations can work to generate results that are surprising to the programmer. There are four general rules that should be followed:
In a calculation involving both single and double precision, the result will not usually be any more accurate than single precision. If double precision is required, be certain all terms in the calculation, including constants, are specified in double precision.
Never assume that a simple numeric value is accurately represented in the computer. Most floating-point values can't be precisely represented as a finite binary value. For example .1 is .0001100110011... in binary (it repeats forever), so it can't be represented with complete accuracy on a computer using binary arithmetic, which includes all PCs.
Never assume that the result is accurate to the last decimal place. There are always small differences between the "true" answer and what can be calculated with the finite precision of any floating point processing unit.
Never compare two floating-point values to see if they are equal or not equal. This is a corollary to rule 3. There are almost always going to be small differences between numbers that "should" be equal. Instead, always check to see if the numbers are nearly equal. In other words, check to see if the difference between them is very small or insignificant.
Note that although I referenced a microsoft document, this is not a windows problem. It's a problem with using binary and is in the CPU itself.
And, as a second side note, I tend to use the Decimal datatype instead of double: See this related SO question: decimal vs double! - Which one should I use and when?
In C# you'll want to use the 'decimal' type, not double for accuracy with decimal points.
As to the 'why'... representing fractions in different base systems gives different answers. For example, 1/3 in a base 10 system is 0.33333 recurring, but in a base 3 system is 0.1.
The double is a binary value, at base 2. When converting to base 10 decimal you can expect to have these rounding errors.
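A quick way to convince yourself of this is the classic 0.1 + 0.2 example in C# (my own illustration):

using System;
class BinaryFractions
{
    static void Main()
    {
        Console.WriteLine(0.1 + 0.2 == 0.3);    // False
        Console.WriteLine((0.1 + 0.2) - 0.3);   // ~5.55E-17, the base-2 rounding error
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True - decimal really does work in base 10
    }
}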

long/large numbers and modulus in .NET

I'm currently writing a quick custom encoding method where I stamp a key with a number to verify that it is a valid key.
Basically I was taking whatever number comes out of the encoding and multiplying it by a key.
I would then deploy those numbers to the user/customer who purchases the key. I wanted to simply use (Code % Key == 0) to verify that the key is valid, but for large values the mod operation does not seem to behave as expected.
Number = 468721387;
Key = 12345678;
Code = Number * Key;
Using the numbers above:
Code % Key == 11418772
And for smaller numbers it would correctly return 0. Is there a reliable way to check divisibility for a long in .NET?
Thanks!
EDIT:
Ok, tell me if I'm special and missing something...
long a = DateTime.Now.Ticks;
long b = 12345;
long c = a * b;
long d = c % b;
d == 10001 (Bad)
and
long a = DateTime.Now.Ticks;
long b = 12;
long c = a * b;
long d = c % b;
d == 0 (Good)
What am I doing wrong?
As others have said, your problem is integer overflow. You can make this more obvious by checking "Check for arithmetic overflow/underflow" in the "Advanced Build Settings" dialog. When you do so, you'll get an OverflowException when you perform DateTime.Now.Ticks * 12345.
One simple solution is just to change "long" to "decimal" (or "double") in your code.
In .NET 4.0, there is a new BigInteger class.
Finally, you say you're "... writing a quick custom encoding method ...", so a simple homebrew solution may be satisfactory for your needs. However, if this is production code, you might consider more robust solutions involving cryptography or something from a third-party who specializes in software licensing.
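If you do go the BigInteger route, the divisibility check from the question becomes straightforward; this sketch of mine assumes .NET 4.0 or later with a reference to System.Numerics:

using System;
using System.Numerics;
class BigIntegerCheck
{
    static void Main()
    {
        BigInteger number = 468721387;
        BigInteger key = 12345678;
        BigInteger code = number * key;     // arbitrary precision, no overflow
        Console.WriteLine(code % key == 0); // True
    }
}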
The answers that say that integer overflow is the likely culprit are almost certainly correct; you can verify that by putting a "checked" block around the multiplication and seeing if it throws an exception.
But there is a much larger problem here that everyone seems to be ignoring.
The best thing to do is to take a large step back and reconsider the wisdom of this entire scheme. It appears that you are attempting to design a crypto-based security system but you are clearly not an expert on cryptographic arithmetic. That is a huge red warning flag. If you need a crypto-based security system DO NOT ATTEMPT TO ROLL YOUR OWN. There are plenty of off-the-shelf crypto systems that are built by experts, heavily tested, and readily available. Use one of them.
If you are in fact hell-bent on rolling your own crypto, getting the math right in 64 bits is the least of your worries. 64 bit integers are way too small for this crypto application. You need to be using a much larger integer size; otherwise, finding a key that matches the code is trivial.
Again, I cannot emphasize strongly enough how difficult it is to construct correct crypto-based security code that actually protects real users from real threats.
Integer Overflow...see my comment.
The value of the multiplication you're doing overflows the int data type and causes it to wrap (int values fall between -2,147,483,648 and 2,147,483,647).
Pick a more appropriate data type to hold a value as large as 5786683315615386 (the result of your multiplication).
UPDATE
Your new example changes things a little.
You're using long, but now you're using System.DateTime.Ticks which on Mono (not sure about the MS platform) is returning 633909674610619350.
When you multiply that by a large number, you are now overflowing a long just like you were overflowing an int previously. At that point, you'll probably need to use a double to work with the values you want (decimal may work as well, depending on how large your multiplier gets).
Apparently, your Code fails to fit in the int data type. Try using long instead:
long code = (long)number * key;
The (long) cast is necessary. Without the cast, the multiplication will be done in 32-bit integer arithmetic (assuming the number and key variables are typed int) and the result will be cast to long, which is not what you want. By casting one of the operands to long, you tell the compiler to perform the multiplication on two long numbers.
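A minimal demonstration using the numbers from the question:

int number = 468721387;
int key = 12345678;
long wrong = number * key;        // 32-bit multiply wraps first, then widens to long
long right = (long)number * key;  // 64-bit multiply: 5786683315615386
Console.WriteLine(wrong % key);   // 11418772, the surprising value from the question
Console.WriteLine(right % key);   // 0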
