I'm trying to compute the cosine of 4203708359 radians in C#:
var x = (double)4203708359;
var c = Math.Cos(x);
(4203708359 can be exactly represented in double precision.)
I'm getting
c = -0.57977754519440394
Windows' calculator gives
c = -0.579777545198813380788467070278
PHP's cos(double) function (which internally just uses cos(double) from the C standard library) on Linux gives:
c = -0.57977754519881
C's cos(double) function in a simple C program compiled with Visual Studio 2017 gives
c = -0.57977754519881342
Here is the definition of Math.cos() in C#: https://github.com/dotnet/coreclr/blob/master/src/mscorlib/src/System/Math.cs#L57-L58
It appears to be a built-in function. I haven't yet dug into the C# compiler to check what this effectively compiles to, but that is probably the next step.
In the meantime:
Why is the precision so poor in my C# example, and what can I do about it?
Is it simply that the cosine implementation in the C# compiler deals poorly with large integer inputs?
Edit 1: Wolfram Mathematica 11.0:
In[1] := N[Cos[4203708359], 50]
Out[1] := -0.57977754519881338078846707027800171954257546099993
Edit 2: I do need that level of precision, and I'm ready to go pretty far in order to obtain it. I'd be happy to use an arbitrary-precision library if there exists a good one that supports cosine (my efforts haven't led to one so far).
Edit 3: I posted the question on coreclr's issue tracker: https://github.com/dotnet/coreclr/issues/12737
I think I might know the answer. I'm pretty sure the sin/cos libraries don't take arbitrarily large numbers and compute the sin/cos of them directly - they instead reduce the argument down to a small range (between 0 and 2*pi?) and compute it there. After all, cos(x) = cos(x + 2*pi) = cos(x + 4*pi) = ...
Problem is, how is the program supposed to reduce your 10-digit number down? Realistically, it should figure out how many multiples of 2*pi fit just below your number, then subtract them out. In your case, that's about 670 million.
So it's multiplying 2*pi by a 9-digit value - which means it's effectively losing about 9 digits' worth of significance from the math library's version of pi.
I ended up writing a little function to test what was going on:
private double reduceDown(double start)
{
    // Use decimal's 28-29 significant digits to reduce the argument
    // modulo 2*pi without losing precision in pi itself.
    decimal startDec = (decimal)start;
    decimal pi = decimal.Parse("3.1415926535897932384626433832795",
        System.Globalization.CultureInfo.InvariantCulture); // invariant culture avoids locale issues
    decimal tau = pi * 2;
    int num = (int)(startDec / tau);
    decimal x = startDec - (num * tau);
    return (double)x;
    //return start - (num * tau);
}
All this is doing is using the decimal data type as a way of reducing the value without losing digits of precision from pi - it still returns a double. When I call it with a modification of your code:
var x = (double)4203708359;
var c = Math.Cos(x);
double y = reduceDown(x);
double c2 = Math.Cos(y);
MessageBox.Show(c.ToString() + Environment.NewLine + c2);
return;
... sure enough, the second one is accurate.
So my advice is - if you really need radians that high, and you really need the accuracy? Do something like that function above, and reduce the number down on your end in a way that you don't lose digits of precision.
Presumably, the salts are stored along with each password. You could use the PHP code to calculate that cosine, and store that also with the password. I would then also add a password version number and default all those older passwords to be version 1. Then, in your C# code, for any new passwords, you implement a new hashing algorithm, and store those password hashes as passwords version 2. For any version 1 passwords, to authenticate, you do not have to calculate the cosine, you simply use the one stored along with the password hash and the salt.
The programmer of that PHP code was probably wanting to do a clever version of pepper. By storing that cosine, or pepper along with the salt and the password hashes, you basically change that pepper into a salt2. So, another versionless way of doing this would be to use two salts in your C# hashing code. For new passwords you could leave the second salt blank or assign it some other way. For old passwords, it would be that cosine, but it is already calculated.
Regarding this part of my question: "Why is the precision so poor in my C# example", coreclr developers answered here: https://github.com/dotnet/coreclr/issues/12737
In a nutshell, .NET Framework 4.6.2 (x86 and x64) and .NET Core (x86) appear to use Intel's x87 FP unit (i.e. fcos or fsincos) that gives inaccurate results while .NET Core on x64 (and PHP, Visual Studio 2017 and gcc) use more accurate, presumably SSE2-based implementations that give correctly rounded results.
Related
I am encrypting the user's input to generate a string for a password. But a line of code gives different results in different versions of the framework. Partial code with the value of the key pressed by the user:
Key pressed: 1. Variable ascii is 49. Value of 'e' and 'n' after some calculation:
e = 103,
n = 143,
Math.Pow(ascii, e) % n
Result of above code:
In .NET 3.5 (C#)
Math.Pow(ascii, e) % n
gives 9.0.
In .NET 4 (C#)
Math.Pow(ascii, e) % n
gives 77.0.
Math.Pow() gives the correct (same) result in both versions.
What is the cause, and is there a solution?
Math.Pow works on double-precision floating-point numbers; thus, you shouldn't expect more than the first 15–17 digits of the result to be accurate:
All floating-point numbers also have a limited number of significant digits, which also determines how accurately a floating-point value approximates a real number. A Double value has up to 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
However, modulo arithmetic requires all digits to be accurate. In your case, you are computing 49^103, whose result consists of 175 digits, making the modulo operation meaningless in both your answers.
To work out the correct value, you should use arbitrary-precision arithmetic, as provided by the BigInteger class (introduced in .NET 4.0).
int val = (int)(BigInteger.Pow(49, 103) % 143); // gives 114
Edit: As pointed out by Mark Peters in the comments below, you should use the BigInteger.ModPow method, which is intended specifically for this kind of operation:
int val = (int)BigInteger.ModPow(49, 103, 143); // gives 114
Apart from the fact that your hashing function is not a very good one *, the biggest problem with your code is not that it returns a different number depending on the version of .NET, but that in both cases it returns an entirely meaningless number: the correct answer to the problem is
49^103 mod 143 = 114. (link to Wolfram Alpha)
You can use this code to compute this answer:
private static int PowMod(int a, int b, int mod) {
    // Square-and-multiply. Note: intermediate products can overflow int
    // for large a or mod; that's fine here for 49^103 mod 143.
    if (b == 0) {
        return 1;
    }
    var tmp = PowMod(a, b / 2, mod);
    tmp *= tmp;
    if (b % 2 != 0) {
        tmp *= a;
    }
    return tmp % mod;
}
The reason why your computation produces a different result is that in order to produce an answer, you use an intermediate value that drops most of the significant digits of 49^103: only the first 16 of its 175 digits are correct!
1230824813134842807283798520430636310264067713738977819859474030746648511411697029659004340261471771152928833391663821316264359104254030819694748088798262075483562075061997649
The remaining 159 digits are all wrong. The mod operation, however, seeks a result that requires every single digit to be correct, including the very last ones. Therefore, even the tiniest improvement to the precision of Math.Pow that may have been implemented in .NET 4 would result in a drastic difference in your calculation, which essentially produces an arbitrary result.
* Since this question talks about raising integers to high powers in the context of password hashing, it may be a very good idea to read this answer before deciding whether your current approach should be changed for a potentially better one.
What you see is rounding error in double. Math.Pow works with double and the difference is as below:
.NET 2.0 and 3.5 => var powerResult = Math.Pow(ascii, e); returns:
1.2308248131348429E+174
.NET 4.0 and 4.5 => var powerResult = Math.Pow(ascii, e); returns:
1.2308248131348427E+174
Notice the last digit before the E; that is what causes the difference in the result. It's not the modulus operator (%).
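To see that 16-digit limit concretely, one can compare the double result against the exact value. A small sketch using System.Numerics (available since .NET 4.0):

```csharp
using System;
using System.Numerics;

class PrecisionDemo
{
    static void Main()
    {
        // double keeps only ~16 of the 175 digits of 49^103...
        double approx = Math.Pow(49, 103);
        // ...while BigInteger keeps all of them.
        BigInteger exact = BigInteger.Pow(49, 103);

        Console.WriteLine(approx);                  // 1.23082481313484...E+174
        Console.WriteLine(exact.ToString().Length); // 175
        Console.WriteLine(exact % 143);             // 114, the correct answer
    }
}
```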
Floating-point precision can vary from machine to machine, and even on the same machine.
However, .NET provides a virtual machine for your apps... but there are changes from version to version.
Therefore you shouldn't rely on it to produce consistent results. For encryption, use the classes that the Framework provides rather than rolling your own.
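For instance, here is a hedged sketch of hashing the input with the framework's SHA-256 class instead of a homegrown Math.Pow scheme (the input string "1" is just the key from the question; in real code you would also salt and use a password-hashing function):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class HashDemo
{
    static void Main()
    {
        // Hash the pressed key with a well-tested framework primitive
        // instead of Math.Pow-based arithmetic.
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes("1"));
            Console.WriteLine(BitConverter.ToString(hash));
        }
    }
}
```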
There are a lot of answers about the way the code is bad. However, as to why the result is different…
Intel's FPUs use the 80-bit format internally to get more precision for intermediate results. So if a value is in the processor register it gets 80 bits, but when it is written to the stack it gets stored at 64 bits.
I expect that the newer version of .NET has a better optimizer in its Just in Time (JIT) compilation, so it is keeping a value in a register rather than writing it to the stack and then reading it back from the stack.
It may be that the JIT can now return a value in a register rather than on the stack. Or pass the value to the MOD function in a register.
See also Stack Overflow question What are the applications/benefits of an 80-bit extended precision data type?
Other processors, e.g. ARM, will give different results for this code.
Maybe it's best to calculate it yourself using only integer arithmetic. Something like:
int n = 143;
int e = 103;
int result = 1;
int ascii = 49; // the ASCII code of the pressed key '1' from the question
for (int i = 0; i < e; ++i)
    result = result * ascii % n;
You can compare the performance with the performance of the BigInteger solution posted in the other answers.
How to generate random numbers with a stable distribution in C#?
The Random class has a uniform distribution. Much other code on the
internet shows a normal distribution. But we need a stable distribution,
meaning infinite variance, a.k.a. a fat-tailed distribution.
The reason is for generating realistic stock prices. In the real
world, huge variations in prices are far more likely than in
normal distributions.
Does someone know the C# code to convert Random class output
into stable distribution?
Edit: Hmmm. The exact distribution is less critical than ensuring it will randomly generate huge deviations, like at least 20 sigma. We want to test a trading strategy for resilience in a true fat-tailed distribution, which is exactly how stock market prices behave.
I just read about Zipfian and Cauchy due to comments. Since I must pick, let's go with the Cauchy distribution, but I will also try Zipfian to compare.
In general, the method is:
Choose a stable, fat-tailed distribution. Say, the Cauchy distribution.
Look up the quantile function of the chosen distribution.
For the Cauchy distribution, that would be p --> peak + scale * tan( pi * (p - 0.5) ).
And now you have a method of transforming uniformly-distributed random numbers into Cauchy-distributed random numbers.
Make sense? See
http://en.wikipedia.org/wiki/Inverse_transform_sampling
for details.
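The steps above can be sketched in C# like this (a minimal illustration; the names NextCauchy, peak, and scale are my own):

```csharp
using System;

class CauchySampler
{
    // Inverse transform sampling: push a uniform p in (0,1) through the
    // Cauchy quantile function peak + scale * tan(pi * (p - 0.5)).
    static double NextCauchy(Random rand, double peak, double scale)
    {
        double p = rand.NextDouble();
        while (p == 0.0) p = rand.NextDouble(); // avoid tan at exactly -pi/2
        return peak + scale * Math.Tan(Math.PI * (p - 0.5));
    }

    static void Main()
    {
        var rand = new Random();
        for (int i = 0; i < 5; i++)
            Console.WriteLine(NextCauchy(rand, 0.0, 1.0));
    }
}
```

The fat tails come for free: tan blows up near the endpoints of the interval, so a uniform draw near 0 or 1 produces an enormous deviate.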
Caveat: It has been a long, long time since I took statistics.
UPDATE:
I liked this question so much I just blogged it: see
http://ericlippert.com/2012/02/21/generating-random-non-uniform-data/
My article exploring a few interesting examples of Zipfian distributions is here:
http://blogs.msdn.com/b/ericlippert/archive/2010/12/07/10100227.aspx
If you're interested in using the Zipfian distribution (which is often used when modeling processes from the sciences or social domains), you would do something along the lines of:
Select your k (skew) for the distribution
Precompute the domain of the cumulative distribution (this is just an optimization)
Generate random values for the distribution by finding the nearest value from the domain
Sample Code:
List<int> domain = Enumerable.Range(0, 1000).ToList(); // generate your domain
double skew = 0.37; // select a skew appropriate to your domain
double sigma = domain.Aggregate(0.0d, (z, x) => z + 1.0 / Math.Pow(x + 1, skew));
List<double> cummDist = domain.Select(
    x => domain.Take(x + 1).Aggregate(0.0d, (z, y) => z + 1.0 / Math.Pow(y + 1, skew) / sigma)).ToList();
Now you can generate random values by selecting the closest value from within the domain:
Random rand = new Random();
double seek = rand.NextDouble();
int searchIndex = cummDist.BinarySearch(seek);
// if not found, BinarySearch returns the bitwise complement of the index
// of the next larger element, which is exactly the value we want
return searchIndex < 0 ? ~searchIndex : searchIndex;
You can, of course, generalize this entire process by factoring out the logic that materializes the domain of the distribution from the process that maps and returns a value from that domain.
I have before me James Gentle's Springer volume on this topic, Random Number Generation and Monte Carlo Methods, courtesy of my statistician wife. It discusses the stable family on page 105:
The stable family of distributions is a flexible family of generally heavy-tailed distributions. This family includes the normal distribution at one extreme value of one of the parameters and the Cauchy at the other extreme value. Chambers, Mallows, and Stuck (1976) give a method for generating deviates from stable distributions. (Watch for some errors in the constants in the auxiliary function D2, for evaluating (e^x - 1)/x.) Their method is used in the IMSL libraries. For a symmetric stable distribution, Devroye (1986) points out that a faster method can be developed by exploiting the relationship of the symmetric stable to the Fejér-de la Vallée Poussin distribution. Buckle (1995) shows how to simulate the parameters of a stable distribution, conditional on the data.
Generating deviates from the generic stable distribution is hard. If you need to do this then I would recommend a library such as IMSL. I do not advise you attempt this yourself.
However, if you are looking for a specific distribution in the stable family, e.g. Cauchy, then you can use the method described by Eric, known as the probability integral transform. So long as you can write down the inverse of the distribution function in closed form then you can use this approach.
The following C# code generates a random number following a stable distribution given the shape parameters alpha and beta. I release it to the public domain under Creative Commons Zero.
public static double StableDist(Random rand, double alpha, double beta) {
    if (alpha <= 0 || alpha > 2 || beta < -1 || beta > 1)
        throw new ArgumentException();
    var halfpi = Math.PI * 0.5;
    var unif = NextDouble(rand);
    while (unif == 0.0) unif = NextDouble(rand);
    unif = (unif - 0.5) * Math.PI;
    // Cauchy special case
    if (alpha == 1 && beta == 0)
        return Math.Tan(unif);
    var expo = -Math.Log(1.0 - NextDouble(rand));
    var c = Math.Cos(unif);
    if (alpha == 1) {
        var s = Math.Sin(unif);
        return 2.0 * ((unif * beta + halfpi) * s / c -
            beta * Math.Log(halfpi * expo * c / (
                unif * beta + halfpi))) / Math.PI;
    }
    var z = -Math.Tan(halfpi * alpha) * beta;
    var ug = unif + Math.Atan(-z) / alpha;
    var cpow = Math.Pow(c, -1.0 / alpha);
    return Math.Pow(1.0 + z * z, 1.0 / (2 * alpha)) *
        (Math.Sin(alpha * ug) * cpow) *
        Math.Pow(Math.Cos(unif - alpha * ug) / expo, (1.0 - alpha) / alpha);
}
private static double NextDouble(Random rand) {
    // The default NextDouble implementation in .NET (see
    // https://github.com/dotnet/corert/blob/master/src/System.Private.CoreLib/shared/System/Random.cs)
    // is very problematic:
    // - It generates a random number 0 or greater and less than 2^31-1 in a
    //   way that very slightly biases 2^31-2.
    // - Then it divides that number by 2^31-1.
    // - The result is a number that uses roughly only 32 bits of pseudorandomness,
    //   even though `double` has 53 bits in its significand.
    // To alleviate some of these problems, this method generates a random 53-bit
    // random number and divides that by 2^53. Although this doesn't fix the bias
    // mentioned above (for the default System.Random), this bias may be of
    // negligible importance for most purposes not involving security.
    long x = rand.Next(0, 1 << 30);
    x <<= 23;
    x += rand.Next(0, 1 << 23);
    return (double)x / (double)(1L << 53);
}
In addition, I set forth pseudocode for the stable distribution in a separate article.
I'm currently writing a quick custom encoding method where I stamp a key with a number to verify that it is a valid key.
Basically I was taking whatever number comes out of the encoding and multiplying it by a key.
I would then deploy those numbers to the user/customer who purchases the key. I wanted to simply use (Code % Key == 0) to verify that the key is valid, but for large values the mod operation does not seem to behave as expected.
Number = 468721387;
Key = 12345678;
Code = Number * Key;
Using the numbers above:
Code % Key == 11418772
And for smaller numbers it would correctly return 0. Is there a reliable way to check divisibility for a long in .NET?
Thanks!
EDIT:
Ok, tell me if I'm special and missing something...
long a = DateTime.Now.Ticks;
long b = 12345;
long c = a * b;
long d = c % b;
d == 10001 (Bad)
and
long a = DateTime.Now.Ticks;
long b = 12;
long c = a * b;
long d = c % b;
d == 0 (Good)
What am I doing wrong?
As others have said, your problem is integer overflow. You can make this more obvious by checking "Check for arithmetic overflow/underflow" in the "Advanced Build Settings" dialog. When you do so, you'll get an OverflowException when you perform DateTime.Now.Ticks * 12345.
One simple solution is just to change "long" to "decimal" (or "double") in your code.
In .NET 4.0, there is a new BigInteger class.
Finally, you say you're "... writing a quick custom encoding method ...", so a simple homebrew solution may be satisfactory for your needs. However, if this is production code, you might consider more robust solutions involving cryptography or something from a third-party who specializes in software licensing.
The answers that say that integer overflow is the likely culprit are almost certainly correct; you can verify that by putting a "checked" block around the multiplication and seeing if it throws an exception.
But there is a much larger problem here that everyone seems to be ignoring.
The best thing to do is to take a large step back and reconsider the wisdom of this entire scheme. It appears that you are attempting to design a crypto-based security system but you are clearly not an expert on cryptographic arithmetic. That is a huge red warning flag. If you need a crypto-based security system DO NOT ATTEMPT TO ROLL YOUR OWN. There are plenty of off-the-shelf crypto systems that are built by experts, heavily tested, and readily available. Use one of them.
If you are in fact hell-bent on rolling your own crypto, getting the math right in 64 bits is the least of your worries. 64 bit integers are way too small for this crypto application. You need to be using a much larger integer size; otherwise, finding a key that matches the code is trivial.
Again, I cannot emphasize strongly enough how difficult it is to construct correct crypto-based security code that actually protects real users from real threats.
Integer Overflow...see my comment.
The value of the multiplication you're doing overflows the int data type and causes it to wrap (int values range from -2,147,483,648 to 2,147,483,647).
Pick a more appropriate data type to hold a value as large as 5786683315615386 (the result of your multiplication).
UPDATE
Your new example changes things a little.
You're using long, but now you're using System.DateTime.Ticks which on Mono (not sure about the MS platform) is returning 633909674610619350.
When you multiply that by a large number, you are now overflowing a long just like you were overflowing an int previously. At that point, you'll probably need to use a double to work with the values you want (decimal may work as well, depending on how large your multiplier gets).
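If the goal is just an exact product-and-remainder check in this range, decimal is worth considering: its 28-29 significant digits hold this particular product exactly, whereas double would round it. A small sketch (the Ticks value below is just the sample from this answer):

```csharp
using System;

class DecimalCheck
{
    static void Main()
    {
        decimal a = 633909674610619350m; // a sample DateTime.Now.Ticks value
        decimal b = 12345m;
        decimal c = a * b;              // ~7.8e21: 22 digits, exact in decimal
        Console.WriteLine(c % b);       // 0
    }
}
```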
Apparently, your Code fails to fit in the int data type. Try using long instead:
long code = (long)number * key;
The (long) cast is necessary. Without the cast, the multiplication will be done in 32-bit integer form (assuming number and key variables are typed int) and the result will be casted to long which is not what you want. By casting one of the operands to long, you tell the compiler to perform the multiplication on two long numbers.
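A small sketch contrasting the two multiplications (values taken from the question):

```csharp
using System;

class CastDemo
{
    static void Main()
    {
        int number = 468721387;
        int key = 12345678;

        long wrong = number * key;       // 32-bit multiply wraps, then widens to long
        long right = (long)number * key; // 64-bit multiply, no overflow

        Console.WriteLine(right);        // 5786683315615386
        Console.WriteLine(right % key);  // 0
        Console.WriteLine(wrong % key);  // 11418772, the garbage value from the question
    }
}
```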
I'm trying to compute 100! and there doesn't seem to be a built-in factorial function. So, I've written:
Protected Sub ComputeFactorial(ByVal n As ULong)
    Dim factorial As ULong = 1
    Dim i As Integer
    For i = 1 To n
        factorial = factorial * i
    Next
    lblAnswer.Text = factorial
End Sub
Unfortunately, running this with the value of 100 for n results in
Value was either too large or too
small for a UInt64.
So, is there a larger data type for holding numbers? Am I mistaken in my methods? Am I helpless?
Sounds like Project Euler.
.NET 4.0 has System.Numerics.BigInteger, or you can pick up a pretty sweet implementation here:
C# BigInteger Class
Edit: treed :(
I'll add - the version at CodeProject has additional features like integer square root, a primality test, Lucas sequence generation. Also, you don't have direct access to the buffer in the .NET implementation which was annoying for a couple things I was trying.
Until you can use System.Numerics.BigInteger you are going to be stuck using a non-Microsoft implementation like BigInteger on Code Project.
Hint: use an array to store the digits of the number. You can tell by inspection that the result will not have more than 200 digits.
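To make the hint concrete, here's a minimal sketch of 100! with a plain digit array and no BigInteger, storing decimal digits least-significant first (note this is C#, though the question used VB.NET):

```csharp
using System;

class FactorialDigits
{
    static void Main()
    {
        int[] digits = new int[200]; // 100! has fewer than 200 digits
        digits[0] = 1;
        int len = 1;
        for (int n = 2; n <= 100; n++)
        {
            int carry = 0;
            for (int i = 0; i < len; i++)
            {
                int prod = digits[i] * n + carry;
                digits[i] = prod % 10;
                carry = prod / 10;
            }
            while (carry > 0)
            {
                digits[len++] = carry % 10;
                carry /= 10;
            }
        }
        // Print most-significant digit first.
        var chars = new char[len];
        for (int i = 0; i < len; i++)
            chars[i] = (char)('0' + digits[len - 1 - i]);
        Console.WriteLine(new string(chars)); // 158-digit result, starting 93326215...
    }
}
```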
You need an implementation of "BigNums". These are integers that dynamically allocate memory so that they can hold their value.
A version was actually cut from the BCL.
The J# library has an implementation of java.math.BigInteger that you can use from any language.
Alternatively, if precision/accuracy are not a concern (you only care about order of magnitude), you can just use 64-bit floats.
decimal will handle 0 through +/-79,228,162,514,264,337,593,543,950,335 with no decimal point (scale of zero)
I have a very large number I need to calculate, and none of the inbuilt datatypes in C# can handle such a large number.
Basically, I want to solve this:
Project Euler 16:
2^15 = 32768 and the sum of its digits
is 3 + 2 + 7 + 6 + 8 = 26.
What is the sum of the digits of the
number 2^1000?
I have already written the code, but, as said before, the number is too large for C#'s built-in datatypes. The code has been tested and verified with small numbers (such as 2^15) and it works perfectly.
using System;

namespace _16_2E1000
{
    class Program
    {
        static void Main(string[] args)
        {
            ulong sum = 0;
            ulong i = 1 << 1000;
            string s = i.ToString();
            foreach (char c in s)
            {
                sum += (ulong)Convert.ToInt64(c.ToString());
            }
            Console.WriteLine(sum);
            Console.ReadLine();
        }
    }
}
You can use BigInteger from the J# classes. The first question in this article tells you how. It's a bit of a pain, though, because then you have to ship the J# redistributable when you roll out.
First, to answer your exact question, look for a BigInt or BigNum type.
Second, from what I know of Project Euler, there will be a cool, tricky way to do it that is much easier.
As a first guess I'd compute the answer for 2^1 -> 2^n (for whatever n you can get to work) and look for patterns. Also look for patterns in the sequences
V(0) = 2^p
V(n) = floor(V(n - 1) / 10)
D(n) = V(n) % 10
I hope this is not a homework problem, but to get to the answer of 2^1000, you'll have to divide it into smaller chunks.
Try something like:
2^1000 = 2 * 2^999 = 2^999 + 2^999 = 2^998 + 2^998 + 2^998 + 2^998
breaking it into smaller bits till you get to a solvable problem.
Complete solutions to Project Euler are at the following links:
http://blog.functionalfun.net/2008/07/project-euler-problem-16-calculating.html
http://code.msdn.microsoft.com/projecteuler
It is not necessary to have Big Integer capabilities in order to solve this problem.
One could just use the property that:
2^n = 2^(n-1) + 2^(n-1)
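That property can be used directly: keep the digits of 2^n in an array and double it 1000 times, then sum the digits. A minimal sketch, no Big Integer required:

```csharp
using System;

class PowerDigitSum
{
    static void Main()
    {
        int[] d = new int[400]; // 2^1000 has 302 digits, so 400 is plenty
        d[0] = 1;               // start from 2^0 = 1, least-significant digit first
        int len = 1;
        for (int step = 0; step < 1000; step++)
        {
            int carry = 0;
            for (int i = 0; i < len; i++)
            {
                int v = d[i] * 2 + carry;
                d[i] = v % 10;
                carry = v / 10;
            }
            if (carry > 0)      // doubling produces at most one carry digit
                d[len++] = carry;
        }
        int sum = 0;
        for (int i = 0; i < len; i++)
            sum += d[i];
        Console.WriteLine(sum); // 1366
    }
}
```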
If Big Integer is really necessary for other tasks, I have been using the BigInt class from F# in my C# programs and am happy with it.
The necessary steps:
Install the F# CTP
In your C# (or other .NET language) application add a reference to the FSharp.Core dll.
Add: using Microsoft.FSharp.Math;
In the "Class View" window familiarize yourself with the members of the two classes: BigInt and BigNum
After executing these steps one is basically ready to use the BigInt class.
One last hint:
To avoid declaring variables with improper names to hold constants, which makes the code unreadable, I use a name that starts with _ (underscore), followed by the integer constant. In this way one will have expressions like:
N = _2 * N;
clearly much more readable than:
N = Two * N;
Here's a BigInteger (source code is available) that you can use; though, as already mentioned, there are more efficient ways to do this than brute force.
BigInteger on codeplex
Actually, while a biginteger utility might be of interest here, you don't need it, even for this. Yes, it looks like it does, but you don't. In fact, use of a biginteger form may even slow things down.
Since I don't want to solve the problem for you, I'll just suggest you think about this in a modular way.