I am trying to understand this code and I am not sure what language it is. It looks like Java, but I am not sure. I apologize if I am posting this incorrectly. I am volunteering, helping with a calendar, and trying to find a random number generator to use with BASIC. For now I am merely trying to understand what this code is doing.
private static uint GetUint()
{
    m_z = 36969 * (m_z & 65535) + (m_z >> 16);
    m_w = 18000 * (m_w & 65535) + (m_w >> 16);
    return (m_z << 16) + m_w;
}
public static double GetUniform()
{
    // 0 <= u < 2^32
    uint u = GetUint();
    // The magic number below is 1/(2^32 + 2).
    // The result is strictly between 0 and 1.
    return (u + 1.0) * 2.328306435454494e-10;
}
It is C#, and the code is from here http://www.codeproject.com/KB/recipes/SimpleRNG.aspx?display=Print
It is used to generate random numbers. There's quite a bit more info on it at the link above. To find it, I Googled the 2.328... number, because it looked familiar.
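To spell out the magic constant: 1/(2^32 + 2) is approximately 2.328306435454494e-10, so GetUniform returns (u + 1)/(2^32 + 2). With u between 0 and 2^32 - 1, that ranges from 1/(2^32 + 2) up to 2^32/(2^32 + 2), both strictly inside (0, 1), which is why the comment says the result is strictly between 0 and 1.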
It should be C#.
In C++, public and private must be followed by a colon (:).
Java doesn't have uint.
The naming convention (CamelCase) looks like a .NET language, and the syntax is C-like.
This seems to be a double LCG implemented in C# (I say C# instead of Java because IIRC Java doesn't have uint). You can find more about LCGs on Wikipedia.
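For reference, an LCG steps its state with x(n+1) = (a * x(n) + c) mod m. Strictly speaking, the snippet above is Marsaglia's multiply-with-carry construction (two 16-bit generators whose outputs are concatenated), a close cousin of the LCG; the linked CodeProject article describes it in detail.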
Still, most dialects of BASIC have some random number generator built in, typically using the statement RANDOMIZE to initialize it and RND (or RANDOM) to get a random number.
Because of the naming conventions (methods starting in uppercase), the data types (uint, double), the keywords (private, public, static), the programming conventions (braces on a separate line), and the operators (>>, +, *, &), I'm pretty sure the programming language used in the above snippet is C#.
I'm trying to compute the cosine of 4203708359 radians in C#:
var x = (double)4203708359;
var c = Math.Cos(x);
(4203708359 can be exactly represented in double precision.)
I'm getting
c = -0.57977754519440394
Windows' calculator gives
c = -0.579777545198813380788467070278
PHP's cos(double) function (which internally just uses cos(double) from the C standard library) on Linux gives:
c = -0.57977754519881
C's cos(double) function in a simple C program compiled with Visual Studio 2017 gives
c = -0.57977754519881342
Here is the definition of Math.Cos() in C#: https://github.com/dotnet/coreclr/blob/master/src/mscorlib/src/System/Math.cs#L57-L58
It appears to be a built-in function. I haven't yet dug into the C# compiler to check what this effectively compiles to, but that is probably the next step.
In the meantime:
Why is the precision so poor in my C# example, and what can I do about it?
Is it simply that the cosine implementation in the C# compiler deals poorly with large integer inputs?
Edit 1: Wolfram Mathematica 11.0:
In[1] := N[Cos[4203708359], 50]
Out[1] := -0.57977754519881338078846707027800171954257546099993
Edit 2: I do need that level of precision, and I'm ready to go pretty far in order to obtain it. I'd be happy to use an arbitrary-precision library if there is a good one that supports cosine (my efforts haven't turned one up so far).
Edit 3: I posted the question on coreclr's issue tracker: https://github.com/dotnet/coreclr/issues/12737
I think I might know the answer. I'm pretty sure the sin/cos libraries don't take arbitrarily large numbers and calculate the sin/cos of them directly; instead they reduce the argument down to a small range (between 0 and 2π?) and calculate it there. After all, cos(x) = cos(x + 2π) = cos(x + 4π) = ...
The problem is: how is the program supposed to reduce your 10-digit number down? Realistically, it should figure out how many times it needs to multiply 2π to get a value just below your number, then subtract that out. In your case, that's about 670 million.
So it's multiplying 2π by this 9-digit value, which means it effectively loses about 9 digits' worth of significance from the math library's version of π.
I ended up writing a little function to test what was going on:
private double reduceDown(double start)
{
    // Do the range reduction in decimal (28-29 significant digits)
    // so we don't lose precision from the value of pi.
    decimal startDec = (decimal)start;
    decimal pi = decimal.Parse("3.1415926535897932384626433832795",
        System.Globalization.CultureInfo.InvariantCulture);
    decimal tau = pi * 2;
    long num = (long)(startDec / tau);   // whole multiples of 2*pi to remove
    decimal x = startDec - (num * tau);  // remainder, now in [0, 2*pi)
    return (double)x;                    // small enough to convert safely
}
All this does is use the decimal data type to reduce the value without losing digits of precision from pi; it still returns a double. When I call it with a modification of your code:
var x = (double)4203708359;
var c = Math.Cos(x);
double y = reduceDown(x);
double c2 = Math.Cos(y);
MessageBox.Show(c.ToString() + Environment.NewLine + c2);
return;
... sure enough, the second one is accurate.
So my advice is: if you really need radians that high, and you really need the accuracy, do something like the function above and reduce the number down on your end, in a way that doesn't lose digits of precision.
Presumably, the salts are stored along with each password. You could use the PHP code to calculate that cosine, and store that also with the password. I would then also add a password version number and default all those older passwords to be version 1. Then, in your C# code, for any new passwords, you implement a new hashing algorithm, and store those password hashes as passwords version 2. For any version 1 passwords, to authenticate, you do not have to calculate the cosine, you simply use the one stored along with the password hash and the salt.
The programmer of that PHP code probably wanted a clever version of a pepper. By storing that cosine, or pepper, along with the salt and the password hashes, you basically turn that pepper into a second salt. So another, versionless way of doing this would be to use two salts in your C# hashing code. For new passwords you could leave the second salt blank or assign it some other way. For old passwords, it would be that cosine, which is already calculated.
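A minimal sketch of that versioning idea in C# (the class and method names here are my own illustration, not from the question):

class StoredCredential
{
    public int Version;    // 1 = legacy scheme (cosine pepper), 2 = new scheme
    public string Salt;
    public string Pepper;  // version 1 only: the cosine, computed once and stored
    public string Hash;
}

static bool Verify(StoredCredential cred, string candidate)
{
    // Version 1 reuses the stored cosine, so C# never has to
    // reproduce PHP's cos() at authentication time.
    if (cred.Version == 1)
        return LegacyHash(candidate, cred.Salt, cred.Pepper) == cred.Hash;
    return NewHash(candidate, cred.Salt) == cred.Hash;
}

// Stand-ins for the real hashing routines.
static string LegacyHash(string pw, string salt, string pepper) { throw new System.NotImplementedException(); }
static string NewHash(string pw, string salt) { throw new System.NotImplementedException(); }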
Regarding this part of my question: "Why is the precision so poor in my C# example", coreclr developers answered here: https://github.com/dotnet/coreclr/issues/12737
In a nutshell, .NET Framework 4.6.2 (x86 and x64) and .NET Core (x86) appear to use Intel's x87 FP unit (i.e. fcos or fsincos), which gives inaccurate results, while .NET Core on x64 (and PHP, Visual Studio 2017, and gcc) uses more accurate, presumably SSE2-based implementations that give correctly rounded results.
I saw the following code in my company's codebase and thought to myself, "Damn, that's a fine line of LINQ. I'd like to translate that to Haskell to see what it's like in an actual functional language."
static Random random = new Random();

static string RandomString(int length)
{
    const string chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    return new string(Enumerable.Repeat(chars, length)
        .Select(s => s[random.Next(s.Length)])
        .ToArray());
}
However, I'm having a bit of trouble getting a concise and direct translation to Haskell, because of how awkward it is to generate random numbers in this language.
I've considered a couple of approaches. The most direct translation of the C# code only generates a single random index and then uses that in place of the random.Next(s.Length). But I need to generate multiple indexes, not a single one.
Then I considered doing a list of IO Int random number actions, but I can't work out how to go through and convert the list of IO actions into actual random numbers.
So the Haskell code I end up writing looks quite convoluted compared to the C# (which I wasn't expecting in this case), and I haven't even got it to work anyway.
My question: what would be a natural translation of the C# to Haskell? Or, more generally, how would you go about generating a random String of a specified length in Haskell (because this C# approach doesn't seem to translate well)?
Note: I'm mainly interested in what the algorithm to generate a random string looks like in Haskell. I'm not really interested in any standard libraries which do the job for me
The natural translation to Haskell involves having some sort of IO (as you need randomness). Since you are essentially trying to perform the action of choosing a character n times, you want to use replicateM. Then, for getting a random number in a range, you can use randomRIO.
import Control.Monad (replicateM)
import System.Random (randomRIO)
randomString :: Int -> IO String
randomString n = replicateM n (do r <- randomRIO (0, m - 1); pure (chars !! r))
  where
    chars = ['A'..'Z'] ++ ['0'..'9']
    m = length chars
This is somewhat complicated by the fact that you want a string of only characters in a certain range. Otherwise, you'd have a one-liner: randomString n = replicateM n randomIO.
That said, the more faithful translation would use conduit. I'm not going to include imports and language pragmas (because they are a bit painful). This looks a lot more like what you would write in C#:
randomString' :: Int -> IO String
randomString' n = runConduit $ replicate n chars
    .| mapM (\cs -> do r <- randomRIO (0, m - 1); pure (cs !! r))
    .| sinkList
  where
    chars = ['A'..'Z'] ++ ['0'..'9']
    m = length chars
I have a little problem.
I have this C++ calculation:
#include <iostream>
#include <cmath>
#include <cstdio>
#include <cstring>
#include <cstdlib>
using namespace std;

int main(){
    unsigned long pass;
    char name[50];
    cout << "Enter username please: " << endl << endl;
    gets_s(name);
    cout << "\n";
    pass = strlen(name);
    pass = pass * 3;
    pass = pass << 2;
    pass = pow(pass, 3.0);
    pass = pass + 23;
    pass = pass + (pass * 708224);
    cout << "Your generated serial: " << +pass << endl << endl;
    system("pause");
}
This gives me the working code for a 3-character username.
This is my C# calculation.
private void btn_generate_Click(object sender, EventArgs e)
{
    pass = txt_user.TextLength;
    pass = pass * 3;
    pass = pass << 2;
    pass = pass * pass * pass;
    pass = pass + 23;
    pass = pass + (pass * 708224);
    txt_serial.Text = pass.ToString();
}
This gives me the wrong code for the exact same username.
What is strange is that the calculation on both gives me the same result until this line:
pass = pass + (pass * 708224);
After this calculation, C# gives me the wrong result.
C++ result: 2994463703 (correct)
C# result: 33059234775 (wrong)
I hope someone can explain this.
So, there are three (at least) underlying issues here.
This algorithm's output grows rapidly (roughly with the cube of the username length) and offers no protection against anything. It is easily reversible and does not attempt to secure the input.
The pow(pass, 3.0) method is going to return a double.
The long data type (in C++) is not always 64 bits. It can be 32 bits.
If we ignore the first point, and skip to two and three, there are two potential issues:
When the pow(pass, 3.0) line gets hit, it may not always return the same value, due to floating-point error. (Now, I don't suspect this is a major issue in your code, but you fail to take it into account.)
When the pass + (pass * 708224) line (which can be rewritten as pass * 708225, FYI) gets hit, on a 32-bit C++ environment it will silently overflow to the value 2,994,463,703, which just so happens to be your C++ result.
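Worked out: for a 3-character name the full value is 46,679 * 708,225 = 33,059,234,775, and 33,059,234,775 mod 2^32 = 33,059,234,775 - 7 * 4,294,967,296 = 2,994,463,703, which is exactly the C++ output above.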
So, how do you fix this?
Fix that algorithm. As it stands, you can easily build a lookup table of potential values.
Input (pass)    Output
1               1,240,101,975
2               9,806,791,575
3               33,059,234,775
4               78,340,308,375
5               152,992,889,175
etc.            etc.
Now, the issue here is not that these numbers are always going to be the same; that's expected. The issue is that the only value which actually fits within an Int32 is the first one. As soon as the second character is calculated, the result is outside the potential range. And if you are going to be doing this in C++, you should really avoid the long data type; it is not guaranteed to be 64 bits.
If you need a serial or hash (as we call them in the real world), I recommend you look at MD5, SHA-1/SHA-2, or any other hash algorithm. (All of those are built into .NET, and it should be easy enough to get them for C++.)
How can I tell if my C++ environment supports 64-bit unsigned long variables?
Easy: set an unsigned long variable to the maximum value of an unsigned int (4,294,967,295), add one to it, and check whether it wrapped around to zero.
unsigned long test = 4294967295;
test = test + 1;
bool longIs64Bits = test > 0;
If longIs64Bits is true, then you have a 64-bit unsigned long type. If false, then you don't.
What if I really need 64-bit numbers?
Fortunately, C++ also provides a long long variable type (as well as unsigned long long). Note: these data-type sizes can vary, but they will be no less than 64 bits.
unsigned long long test = 4294967295;
test = test + 1;
bool longLongIs64Bits = test > 0;
In the preceding snippet, longLongIs64Bits should always be true.
Lastly, there is also uint64_t, defined in <stdint.h> (<cstdint> in C++). This is guaranteed to be exactly 64 bits. It's part of the C99 and C++11 specs, though I cannot vouch for every compiler's support of it.
EBrown is dead right that the C++ code needs serious rethinking, but if the C++ code is legacy and you can't change it, the best you can do is duplicate the bugs in the C# version.
Use uint instead of long in the C# version to trigger the same unsigned 32-bit overflow.
You can't reliably duplicate the double rounding error. Use Math.Pow and pray it never comes up.
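For illustration, a minimal sketch of the first point (the method and variable names are my own):

static uint GenerateSerial(string username)
{
    // uint arithmetic in C# is unchecked by default, so every step
    // wraps modulo 2^32, just like a 32-bit unsigned long in C++.
    uint pass = (uint)username.Length;
    pass = pass * 3;
    pass = pass << 2;
    pass = pass * pass * pass;      // stands in for pow(pass, 3.0); see the caveat above
    pass = pass + 23;
    pass = pass + (pass * 708224);  // wraps around, matching the 32-bit C++ build
    return pass;
}

For a 3-character username this returns 2994463703, the same value the C++ program prints.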
Edit:
Addendum: Friends don't let friends roll their own crypto.
I'm porting several thousand lines of cryptographic C# functions to a Java project. The C# code extensively uses unsigned values and bitwise operations.
I am aware of the necessary Java work-arounds to support unsigned values. However, it would be much more convenient if there were implementations of unsigned 32bit and 64bit Integers that I could drop into my code. Please link to such a library.
Quick Google queries reveal several that are part of commercial applications:
http://www.teamdev.com/downloads/jniwrapper/javadoc/com/jniwrapper/UInt64.html
http://publib.boulder.ibm.com/infocenter/rfthelp/v7r0m0/index.jsp?topic=/com.rational.test.ft.api.help/ApiReference/com/rational/test/value/UInt64.html
Operations with signed and unsigned integers are mostly identical, when using two's complement notation, which is what Java does. What this means is that if you have two 32-bit words a and b and want to compute their sum a+b, the same internal operation will produce the right answer regardless of whether you consider the words as being signed or unsigned. This will work properly for additions, subtractions, and multiplications.
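A quick sanity check of that claim: take the 32-bit word 0xFFFFFFFE and add 3. Read as unsigned, 4,294,967,294 + 3 = 4,294,967,297, which wraps to 1 modulo 2^32; read as signed two's complement, -2 + 3 = 1. Same bit pattern either way.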
The operations which must be sign-aware include:
Right shifts: a signed right shift duplicates the sign bit, while an unsigned right shift always inserts zeros. Java provides the ">>>" operator for unsigned right-shifting.
Divisions: an unsigned division is distinct from a signed division. When using 32-bit integers, you can convert the values to the 64-bit long type ("x & 0xFFFFFFFFL" does the "unsigned conversion" trick).
Comparisons: if you want to compare a with b as two 32-bit unsigned words, then you have two standard idioms:
if ((a + Integer.MIN_VALUE) < (b + Integer.MIN_VALUE)) { ... }
if ((a & 0xFFFFFFFFL) < (b & 0xFFFFFFFFL)) { ... }
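For example, with a = -1 (bit pattern 0xFFFFFFFF, i.e. 4,294,967,295 unsigned) and b = 1, a plain signed comparison claims a < b, but both idioms above correctly report a > b when the words are interpreted as unsigned.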
Knowing that, the signed Java types are not a big hassle for cryptographic code. I have implemented many cryptographic primitives in Java, and the signed types are not an issue provided that you understand what you are writing. For instance, have a look at sphlib: this is an opensource library which implements many cryptographic hash functions, both in C and in Java. The Java code uses Java's signed types (int, long...) quite seamlessly, and it simply works.
Java does not have operator overloading, so Java-only "solutions" to get unsigned types will involve custom classes (such as the UInt64 class you link to), which will imply a massive performance penalty. You really do not want to do that.
Theoretically, one could define a Java-like language with unsigned types and implement a compiler which produces bytecode for the JVM (internally using the tricks I detail above for shifts, divisions and comparisons). I am not aware of any available tool which does that; and, as I said above, Java's signed types are just fine for cryptographic code (in other words, if you have trouble with such signed types, then I daresay that you do not know enough to implement cryptographic code securely, and you should refrain from doing so; instead, use existing opensource libraries).
This is a language feature, not a library feature, so there is no way to extend Java to support this functionality unless you change the language itself, in which case you'd need to make your own compiler.
However, if you need unsigned right-shifts, Java supports the >>> operator which works like the >> operator for unsigned types.
You can, however, make your own methods to perform arithmetic with signed types as though they were unsigned; this should work, for example:
static int multiplyUnsigned(int a, int b)
{
    final boolean highBitA = a < 0, highBitB = b < 0;
    final long a2 = a & ~(1 << 31), b2 = b & ~(1 << 31);
    final long result = (highBitA ? a2 | (1L << 31) : a2)
                      * (highBitB ? b2 | (1L << 31) : b2);
    return (int)result;
}
Edit:
Thanks to @Ben's comment, we can simplify this:
static int multiplyUnsigned(int a, int b)
{
    final long mask = (1L << 32) - 1;
    return (int)((a & mask) * (b & mask));
}
Neither of these tricks extends to the long type, though, since there is no wider primitive type to widen into. (Note that, as explained in the other answer, a plain long multiplication already produces the correct low 64 bits whether you treat the operands as signed or unsigned; it is unsigned division and comparison of longs that need special handling.)
I have a very large number I need to calculate, and none of the built-in data types in C# can handle such a large number.
Basically, I want to solve this:
Project Euler 16:
2^15 = 32768 and the sum of its digits is 3 + 2 + 7 + 6 + 8 = 26.
What is the sum of the digits of the number 2^1000?
I have already written the code but, as said before, the number is too large for C# data types. The code has been tested and verified with small numbers (such as 2^15) and it works perfectly.
using System;

namespace _16_2E1000
{
    class Program
    {
        static void Main(string[] args)
        {
            ulong sum = 0;
            ulong i = 1 << 1000;
            string s = i.ToString();
            foreach (char c in s)
            {
                sum += (ulong)Convert.ToInt64(c.ToString());
            }
            Console.WriteLine(sum);
            Console.ReadLine();
        }
    }
}
You can use BigInteger from the J# classes. The first question in this article tells you how. It's a bit of a pain, though, because you then have to ship the J# redistributable when you roll out.
First, to answer your exact question, look for a BigInt or BigNum type.
Second, from what I know of Project Euler, there will be a cool, tricky way to do it that is much easier.
As a first guess I'd compute the answer for 2^1 -> 2^n (for whatever n you can get to work) and look for patterns. Also look for patterns in the sequences:
V(0) = 2^p
V(n) = floor(V(n - 1) / 10)
D(n) = V(n) % 10
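For example, with p = 4 (so V(0) = 16): D(0) = 6, V(1) = 1, D(1) = 1, V(2) = 0, and the digit sum is 6 + 1 = 7, which is indeed the digit sum of 2^4.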
I hope this is not a homework problem, but to get to the answer for 2^1000, you'll have to divide it into smaller chunks.
Try something like:
2^1000 = 2 * 2^999 = 2^999 + 2^999 = 2^998 + 2^998 + 2^998 + 2^998
Keep breaking it into smaller bits until you get to a problem you can solve.
Complete solutions to Project Euler are at the following links:
http://blog.functionalfun.net/2008/07/project-euler-problem-16-calculating.html
http://code.msdn.microsoft.com/projecteuler
It is not necessary to have Big Integer capabilities in order to solve this problem.
One could just use the property that:
2^n = 2^(n-1) + 2^(n-1)
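A minimal sketch of that idea in C# (my own illustration of the property, not code from any of the linked solutions): keep the decimal digits in a list, least significant first, and double the number n times.

static int DigitSumOfPowerOfTwo(int exponent)
{
    // Digits of the current power of two, least significant digit first.
    var digits = new System.Collections.Generic.List<int> { 1 };
    for (int i = 0; i < exponent; i++)
    {
        // Doubling uses exactly the property above: 2^k + 2^k = 2^(k+1).
        int carry = 0;
        for (int j = 0; j < digits.Count; j++)
        {
            int d = digits[j] * 2 + carry;
            digits[j] = d % 10;
            carry = d / 10;
        }
        if (carry > 0)
            digits.Add(carry);
    }
    int sum = 0;
    foreach (int d in digits)
        sum += d;
    return sum;
}

DigitSumOfPowerOfTwo(15) returns 26, matching the example in the problem statement.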
If Big Integer is really necessary for other tasks, I have been using the BigInt class from F# in my C# programs and am happy with it.
The necessary steps:
Install the F# CTP
In your C# (or other .NET language) application add a reference to the FSharp.Core dll.
Add: using Microsoft.FSharp.Math;
In the "Class View" window familiarize yourself with the members of the two classes: BigInt and BigNum
After executing these steps one is basically ready to use the BigInt class.
One last hint:
To avoid declaring variables with awkward names just to hold constants, which makes the code unreadable, I use a name that starts with _ (underscore) followed by the integer constant. That way you get expressions like:
N = _2 * N;
clearly much more readable than:
N = Two * N;
Here's a BigInteger (source code is available) that you can use; though, as already mentioned, there are more efficient ways to do this than brute force.
BigInteger on codeplex
Actually, while a big-integer utility might be of interest here, you don't need it, even for this. Yes, it looks like you do, but you don't. In fact, using a big-integer form may even slow things down.
Since I don't want to solve the problem for you, I'll just suggest you think about this in a modular way.