Is it possible, from .NET, to mimic the exact randomization that Java uses? I have a seed, and I would like to be able to receive the same results in both C# and Java when creating a random number.
You don't need to read the source code. The formula is a one-liner and is given in the documentation for java.util.Random.
Here's a partial translation:
[Serializable]
public class Random
{
    private UInt64 seed;

    public Random(UInt64 seed)
    {
        // Same scrambling of the initial seed as java.util.Random.
        this.seed = (seed ^ 0x5DEECE66DUL) & ((1UL << 48) - 1);
    }

    public int NextInt(int n)
    {
        if (n <= 0) throw new ArgumentException("n must be positive");

        if ((n & -n) == n) // i.e., n is a power of 2
            return (int)((n * (long)Next(31)) >> 31);

        long bits, val;
        do
        {
            bits = Next(31);
            val = bits % n;
        }
        // Cast to int so the overflow test wraps exactly like Java's 32-bit arithmetic.
        while ((int)(bits - val + (n - 1)) < 0);
        return (int)val;
    }

    protected UInt32 Next(int bits)
    {
        // The linear congruential step given in the java.util.Random docs.
        seed = (seed * 0x5DEECE66DUL + 0xBUL) & ((1UL << 48) - 1);
        return (UInt32)(seed >> (48 - bits));
    }
}
Example:
Random rnd = new Random(42);
Console.WriteLine(rnd.NextInt(10));
Console.WriteLine(rnd.NextInt(20));
Console.WriteLine(rnd.NextInt(30));
Console.WriteLine(rnd.NextInt(40));
Console.WriteLine(rnd.NextInt(50));
Output on both platforms is 0, 3, 18, 4, 20.
If you have the source code of the java.util.Random class for your Java implementation, you can easily port it to .NET.
If you require both applications (Java and .NET) to use the same random number generator, you'd better implement one on both platforms and use it instead, as a system-provided version might change its behavior as a result of an update. (That said, the Java specification does precisely describe the behavior of its PRNG.)
If you don't need a cryptographically secure pseudorandom number generator then I would go for the Mersenne twister. You can find source code for C# here and Java here.
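For illustration, here is a compact C# sketch of MT19937 using the standard reference constants (an independent sketch, not either of the linked implementations); port the same few lines to Java and identical seeds give identical streams:

public sealed class MersenneTwister
{
    // Standard MT19937 constants from the reference implementation.
    private const int N = 624, M = 397;
    private const uint MatrixA = 0x9908B0DFu, UpperMask = 0x80000000u, LowerMask = 0x7FFFFFFFu;
    private readonly uint[] mt = new uint[N];
    private int mti;

    public MersenneTwister(uint seed)
    {
        mt[0] = seed;
        for (mti = 1; mti < N; mti++)
            mt[mti] = 1812433253u * (mt[mti - 1] ^ (mt[mti - 1] >> 30)) + (uint)mti;
    }

    public uint NextUInt32()
    {
        if (mti >= N)
        {
            // Regenerate the whole state block.
            for (int k = 0; k < N; k++)
            {
                uint v = (mt[k] & UpperMask) | (mt[(k + 1) % N] & LowerMask);
                mt[k] = mt[(k + M) % N] ^ (v >> 1) ^ ((v & 1) != 0 ? MatrixA : 0u);
            }
            mti = 0;
        }
        uint y = mt[mti++];
        // Tempering.
        y ^= y >> 11;
        y ^= (y << 7) & 0x9D2C5680u;
        y ^= (y << 15) & 0xEFC60000u;
        y ^= y >> 18;
        return y;
    }
}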
Well, you can look in the source code for Random.java and copy the algorithm, constants, etc., but the no-argument Random constructor uses System.nanoTime, so you won't get the same results with it.
From java.util.Random:

public Random() {
    this(++seedUniquifier + System.nanoTime());
}
I wouldn't be at all surprised if the C# source showed you something similar.
Edit: Disregard, as has been pointed out, the constructor that takes an input seed never accesses time.
Maybe it would make sense to implement your own simple pseudo-random number generator? That way you have complete control and can guarantee that the same seed gives the same results in both environments. It is probably a bit more work than porting one to the other, though.
Another option might be to write your random numbers out to a file once from one platform and then load them on both platforms from that file, or you could fetch them from a service such as random.org.
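A minimal sketch of the file approach (numbers.txt and the count are just examples; assumes System.IO and System.Linq are imported):

var rnd = new Random();
// Generate once, on either platform...
File.WriteAllLines("numbers.txt",
    Enumerable.Range(0, 1000).Select(i => rnd.Next(100).ToString()));
// ...then both applications load the same shared sequence.
int[] numbers = File.ReadAllLines("numbers.txt").Select(int.Parse).ToArray();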
Related
I would like to generate an OTP 6-digit pin in my C# .NET application. However, for security reasons, I heard that using the Random class for this might not be the most appropriate. Are there any other methods available?
You definitely want to use something in the System.Security.Cryptography namespace if you want something more secure than System.Random.
Here's a handy implementation written by Eric Lippert in his fabulous Fixing Random series.
using CRNG = System.Security.Cryptography.RandomNumberGenerator;

public static class BetterRandom
{
    private static readonly ThreadLocal<CRNG> crng = new ThreadLocal<CRNG>(CRNG.Create);
    private static readonly ThreadLocal<byte[]> bytes = new ThreadLocal<byte[]>(() => new byte[sizeof(int)]);

    public static int NextInt()
    {
        crng.Value.GetBytes(bytes.Value);
        // Clear the sign bit to get a non-negative int.
        return BitConverter.ToInt32(bytes.Value, 0) & int.MaxValue;
    }

    public static double NextDouble()
    {
        while (true)
        {
            // Assemble a random 52-bit mantissa from two draws (21 + 31 bits).
            long x = NextInt() & 0x001FFFFF;
            x <<= 31;
            x |= (long)NextInt();
            double n = x;
            const double d = 1L << 52;
            double q = n / d;
            if (q != 1.0)
                return q;
        }
    }
}
Now you can easily create an OTP string:
string otp = (BetterRandom.NextInt() % 1000000).ToString("000000");
One way to generate a 6-digit OTP is to use a cryptographically secure pseudorandom number generator (CSPRNG). A CSPRNG is a random number generator that is designed to be resistant to attackers trying to predict the numbers that will be generated.
One popular CSPRNG is exposed through the Microsoft Cryptographic Service Provider (CSP). To use it, call the System.Security.Cryptography.RNGCryptoServiceProvider.GetBytes method, passing in a byte array of the size you want filled. You can then turn those random bytes into a 6-digit string, for example by interpreting them as an integer and reducing it modulo 1,000,000 (note that converting the bytes to hexadecimal would give hex characters, not the decimal digits an OTP normally requires).
Another option is to use the System.Random class, but you should only do this if you are sure that the seed value you are using is truly random and cannot be predicted by an attacker.
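As a concrete sketch, assuming .NET Core 3.0 or later (where System.Security.Cryptography.RandomNumberGenerator.GetInt32 is available), a bias-free 6-digit OTP is one line; the ranged overload performs the rejection sampling internally:

// Uniform over 000000-999999; no modulo bias.
string otp = RandomNumberGenerator.GetInt32(0, 1_000_000).ToString("D6");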
I'm trying to calculate a large number, which requires BigInteger.Pow(), but I need the exponent to also be a BigInteger and not int.
i.e.
BigInteger.Pow(BigInteger)
How can I achieve this?
EDIT: I came up with an answer. User dog helped me to achieve this.
public BigInteger Pow(BigInteger value, BigInteger exponent)
{
    // Note: assumes exponent >= 1; an exponent of 0 should return 1.
    BigInteger originalValue = value;
    while (exponent-- > 1)
        value = BigInteger.Multiply(value, originalValue);
    return value;
}
Just from the aspect of general maths, this doesn't make sense. That's why it's not implemented.
Think of this example: your BigInteger number is 2 and you raise it to the power of 1024. The result (2^1024) is already a 1025-bit number, roughly 128 bytes. Now take int.MaxValue as the exponent: the result would occupy about 256 MB of memory. Using a BigInteger as an exponent would yield a number far beyond memory capacity!
If your application requires numbers at this scale, where the number itself is too large for your memory, you probably want a solution that stores the number and the exponent separately, but that's something I can only speculate about, since it's not part of your question.
If your issue is simply that your exponent variable is a BigInteger, you can just cast it to int:
BigInteger.Pow(bigInteger, (int)exponent); // exponent is BigInteger
Pow(2, Int64.MaxValue) requires 1,152,921 terabytes just to hold the number, for a sense of scale. But here's the function anyway, in case you have a really nice computer.
static BigInteger Pow(BigInteger a, BigInteger b)
{
    BigInteger total = 1;
    while (b > int.MaxValue)
    {
        b -= int.MaxValue;
        total = total * BigInteger.Pow(a, int.MaxValue);
    }
    total = total * BigInteger.Pow(a, (int)b);
    return total;
}
As others have pointed out, raising something to a power higher than the capacity of int is bad news. However, assuming you're aware of this and are just being given your exponent in the form of a BigInteger, you can just cast to an int and proceed on your merry way:
BigInteger.Pow(myBigInt, (int)myExponent);
or, even better,
try
{
    BigInteger.Pow(myBigInt, (int)myExponent);
}
catch (OverflowException)
{
    // Do error handling and stuff.
}
For me the solution was to use BigInteger.ModPow(BigInteger value, BigInteger exponent, BigInteger modulus), because I needed to take a modulus afterwards anyway.
The function raises a given BigInteger to the power of another BigInteger and reduces the result modulo a third BigInteger. Although it still takes a good amount of CPU power, it can be evaluated, because the function knows about the modulus up front and can therefore save a ton of memory.
Hope this helps someone with the same question.
Edit: BigInteger.ModPow is available since .NET Framework 4.0 and is in .NET Standard 1.1 and upwards.
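A small illustrative sketch (the operands are arbitrary):

// 3^(2^20) mod (2^127 - 1), computed without ever materializing 3^(2^20).
BigInteger value = 3;
BigInteger exponent = BigInteger.Pow(2, 20);
BigInteger modulus = (BigInteger.One << 127) - 1;
BigInteger result = BigInteger.ModPow(value, exponent, modulus);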
I've been working on optimizing the Lucas-Lehmer primality test using C# code (yes, I'm doing something with Mersenne primes to calculate perfect numbers). I was wondering whether it is possible to make further speed improvements to the current code. I use the System.Numerics.BigInteger class to hold the numbers; perhaps it is not the wisest choice, but we'll see.
This code is actually based on the information found at http://en.wikipedia.org/wiki/Lucas%E2%80%93Lehmer_primality_test.
In the relevant section of that page (at the time of writing), a proof is given that the division in the modulo step can be optimized away.
The code for the Lucas-Lehmer test is:
public bool LucasLehmerTest(int num)
{
    if (num % 2 == 0)
        return num == 2;
    else
    {
        BigInteger ss = new BigInteger(4);
        for (int i = 3; i <= num; i++)
        {
            ss = KaratsubaSquare(ss) - 2;
            ss = LucasLehmerMod(ss, num);
        }
        return ss == BigInteger.Zero;
    }
}
Edit: this turns out to be faster than using ModPow from the BigInteger class, as suggested by Mare Infinitus below. That implementation is:
public bool LucasLehmerTest(int num)
{
    if (num % 2 == 0)
        return num == 2;
    else
    {
        BigInteger m = (BigInteger.One << num) - 1;
        BigInteger ss = new BigInteger(4);
        for (int i = 3; i <= num; i++)
            ss = (BigInteger.ModPow(ss, 2, m) - 2) % m;
        return ss == BigInteger.Zero;
    }
}
The LucasLehmerMod method is implemented as follows:
public BigInteger LucasLehmerMod(BigInteger dividend, int divisor)
{
    // Computes dividend mod (2^divisor - 1) using only shifts and adds:
    // x mod (2^p - 1) == (x >> p) + (x & ((1 << p) - 1)), applied repeatedly.
    BigInteger mask = (BigInteger.One << divisor) - 1;
    BigInteger remainder = BigInteger.Zero;
    BigInteger temporaryResult = dividend;
    do
    {
        remainder = temporaryResult & mask;
        temporaryResult >>= divisor;
        temporaryResult += remainder;
    } while ((temporaryResult >> divisor) != 0);

    return (temporaryResult == mask ? BigInteger.Zero : temporaryResult);
}
What I am afraid of is that when using the BigInteger class from the .NET Framework, I am bound to its implementation of the arithmetic. Would that mean I have to create my own BigInteger class to improve it? Or can I get by with a KaratsubaSquare (derived from the Karatsuba algorithm) like this, which I found on Optimizing Karatsuba Implementation:
public BigInteger KaratsubaSquare(BigInteger x)
{
    int bits = BitLength(x);
    if (bits <= LOW_DIGITS) return BigInteger.Pow(x, 2); // standard square below the cutoff

    int n = bits / 2;                    // split point: half the bit length,
                                         // otherwise the recursion never terminates
    BigInteger b = x >> n;               // higher half
    BigInteger a = x - (b << n);         // lower half
    BigInteger ac = KaratsubaSquare(a);  // lower half * lower half
    BigInteger bd = KaratsubaSquare(b);  // higher half * higher half
    BigInteger c = Karatsuba(a, b);      // lower half * higher half
    return ac + (c << (n + 1)) + (bd << (2 * n)); // a^2 + 2ab*2^n + b^2*2^2n
}
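Here BitLength returns the position of the highest set bit of a non-negative BigInteger, and Karatsuba is the corresponding multiply from the same source; a simple sketch of BitLength (not the optimized version from the linked post) is:

private static int BitLength(BigInteger x)
{
    // Simple (not fastest) bit count of a non-negative BigInteger.
    int bits = 0;
    while (x > 0)
    {
        bits++;
        x >>= 1;
    }
    return bits;
}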
So basically, I want to see whether it is possible to improve the Lucas-Lehmer test method by optimizing the for loop. However, I am a bit stuck there... Is it even possible?
Any thoughts are welcome, of course.
Some extra thoughts:
I could use several threads to speed up the search for perfect numbers. However, I have no experience (yet) with good partitioning.
I'll try to explain my thoughts (no code yet):
First I'll generate a prime table using the sieve of Eratosthenes. It takes about 25 ms to find the primes in the range 2 - 1 million, single-threaded.
What C# offers is quite astonishing. Using PLINQ and the Parallel.For method, I can run several calculations almost simultaneously; however, it chunks the primeTable array into parts that take no account of the cost of each entry.
I have already figured out that the automatic load balancing of the threads is not sufficient for this task. Hence I need to try a different approach, dividing the load according to the Mersenne numbers each thread will test while calculating a perfect number (see the sketch below). Has anyone some experience with this? This page seems to be a bit helpful: http://www.drdobbs.com/windows/custom-parallel-partitioning-with-net-4/224600406
I'll be looking into it further.
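A minimal sketch of that route (assuming the candidate exponent list and the LucasLehmerTest method from above, plus System.Collections.Concurrent and System.Threading.Tasks; Partitioner.Create with loadBalance: true hands elements to threads on demand instead of in fixed chunks):

int[] candidateExponents = { 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127 };
var mersenneExponents = new ConcurrentBag<int>();
Parallel.ForEach(Partitioner.Create(candidateExponents, loadBalance: true), p =>
{
    if (LucasLehmerTest(p))
        mersenneExponents.Add(p); // 2^p - 1 is prime, so 2^(p-1) * (2^p - 1) is perfect
});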
As of now, my results are as follows.
My current algorithm (using the standard BigInteger class from C#) can find the first 17 perfect numbers (see http://en.wikipedia.org/wiki/List_of_perfect_numbers) within 5 seconds on my laptop (an Intel i5 with 4 cores and 8 GB of RAM). However, it then gets stuck and finds nothing within 10 minutes.
This is something I cannot match yet... My gut feeling (and common sense) tells me that I should look into the Lucas-Lehmer test, since a for loop calculating the 18th perfect number (using Mersenne prime exponent 3217) runs 3215 times. There is room for improvement, I guess...
What Dinony posted below is a suggestion to rewrite it completely in C. I agree that it would boost my performance, but I chose C# to find out its limitations and benefits. Since it's widely used, and given its capacity for rapid application development, it seemed to me worth trying.
Could unsafe code provide benefits here as well?
One possible optimization is to use BigInteger.ModPow.
It really increases performance significantly.
Just a note for info...
In Python, this
ss = KaratsubaSquare(ss) - 2
has worse performance than this:
ss = ss*ss - 2
What about adapting the code to C? I have no idea about the algorithm, but it is not that much code... so the biggest run-time improvement could come from adapting it to C.
I'm porting this line from C++ to C#, and I'm not an experienced C++ programmer:
unsigned int nSize = BN_num_bytes(this);
In .NET I'm using System.Numerics.BigInteger
BigInteger num = originalBigNumber;
byte[] numAsBytes = num.ToByteArray();
uint compactBitsRepresentation = 0;
uint size2 = (uint)numAsBytes.Length;
I think there is a fundamental difference in how they operate internally, since the sources' unit-test results don't match if the BigInt equals:
0
Any negative number
0x00123456
I know literally nothing about BN_num_bytes (edit: the comments just told me that it's a macro for BN_num_bits).
Question
Would you verify these guesses about the code:
I need to port BN_num_bytes, which is a macro for ((BN_num_bits(bn)+7)/8) (thank you @WhozCraig)
I need to port BN_num_bits which is floor(log2(w))+1
Then, if the possibility exists that leading and trailing bytes aren't counted, what happens on big- and little-endian machines? Does it matter?
Based on these answers on Security.StackExchange, and given that my application isn't performance-critical, I may use the default implementation in .NET and not use an alternate library that may already implement a comparable workaround.
Edit: so far my implementation looks something like this, but I'm not sure what the "LookupTable" mentioned in the comments is.
private static int BN_num_bytes(byte[] numAsBytes)
{
    int bits = BN_num_bits(numAsBytes);
    return (bits + 7) / 8;
}

private static int BN_num_bits(byte[] numAsBytes)
{
    // Note: this takes the log of the array length, not of the number itself,
    // which is why the results don't match (see the answers below).
    var log2 = Math.Log(numAsBytes.Length, 2);
    var floor = Math.Floor(log2);
    return (int)floor + 1;
}
Edit 2:
After some more searching, I found that:
BN_num_bits does not return the number of significant bits of a given bignum, but rather the position of the most significant 1 bit, which is not necessarily the same thing
Though I still don't know what the source of it looks like...
The man page (OpenSSL project) of BN_num_bits says that "Basically, except for a zero, it returns floor(log2(w))+1.".
So these are correct implementations of the BN_num_bytes and BN_num_bits functions for .NET's BigInteger.
public static int BN_num_bytes(BigInteger number)
{
    if (number == 0)
        return 0;
    return 1 + (int)Math.Floor(BigInteger.Log(BigInteger.Abs(number), 2)) / 8;
}

public static int BN_num_bits(BigInteger number)
{
    if (number == 0)
        return 0;
    return 1 + (int)Math.Floor(BigInteger.Log(BigInteger.Abs(number), 2));
}
You should probably change these into extension methods for convenience.
You should understand that these functions measure the minimum number of bits/bytes that are needed to express a given integer number. Variables declared as int (System.Int32) take 4 bytes of memory, but you only need 1 byte (or 3 bits) to express the integer number 7. This is what BN_num_bytes and BN_num_bits calculate - the minimum required storage size for a concrete number.
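A quick sanity check of those definitions:

Console.WriteLine(BN_num_bits(7));    // 3  (7 = 111 in binary)
Console.WriteLine(BN_num_bytes(7));   // 1
Console.WriteLine(BN_num_bits(256));  // 9  (256 = 1 0000 0000 in binary)
Console.WriteLine(BN_num_bytes(256)); // 2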
You can find the source code of the original implementations of the functions in the official OpenSSL repository.
Combine what WhozCraig said in the comments with this link explaining BN_num_bits:
http://www.openssl.org/docs/crypto/BN_num_bytes.html
And you end up with something like this, which should tell you the significant number of bytes:
public static int NumberOfBytes(BigInteger bigInt)
{
    if (bigInt == 0)
    {
        // You need to check what BN_num_bits actually does here, as it's not
        // clear from the docs; it probably returns 0.
        return 0;
    }
    return (int)Math.Ceiling(BigInteger.Log(bigInt + 1, 2) + 7) / 8;
}
How do you generate random numbers effectively?
Every time a random number program boots up, it starts spitting out the same numbers as before. (I guess because of the pseudo-random nature of the generation.)
Is there a way to make the generation non-deterministic? A sort of entropy addition, so that the sequence generated after a boot differs from the one before (truly random rather than quasi-random)?
Also, say the range of such generation is (m,n) with n-m = x. Is there a chance that a number 'p' appears again only after x-1 other numbers have been generated, while the next batch of x numbers is not the same sequence as the last one? Example:
range: 1,5. Generation: 2,4,5,1,3 (1st), 4,2,3,1,5 (2nd)... same numbers, different order.
Out of a nonplussed state of mind, I wrote this:
// Note: (int)DateTime.Now.Ticks can be negative after the cast, and in C#
// the % of a negative value is negative, hence the out-of-range and zero results.
int num1 = (rand.Next(1, 440) * 31 * (int)DateTime.Now.Ticks * 59 * (DateTime.Now.Second * 100) % 439) + 1;
int num2 = (rand.Next(1, 440) * 31 * (int)DateTime.Now.Ticks * 59 * (DateTime.Now.Second * 100) % 439) + 1;

Here the range was (1,440), but it still generates numbers out of bounds, and zero, and the frequency isn't great either. It is C#.NET code. Why so?
Your answers can be language-agnostic / algorithmic / analytical. Thanks in advance.
Very few "random" number generators are actually random. Almost all are pseudorandom, following a predictable sequence when started with the same seed value. Many pseudorandom number generators (PRNGs) get their seed from the date and time of their initial invocation. Others get their seed from a source of random data supplied by the operating system, which often is generated from outside sources (e.g., mouse motion, keyboard activity).
The right way to seed a good random number generator is to not seed it. Every good generator has a default mechanism to supply the seed, and it is usually much better than any you can come up with. The only real reason to seed the generator is if you actually want the same sequence of random numbers (e.g., when you're trying to repeat a process that requires randomness).
See http://msdn.microsoft.com/en-us/library/system.random.aspx for the details of the C# Random class, but basically, it uses a very well known and respected algorithm and seeds it with the date and time.
To answer your key question: just use rand.Next(min, max + 1) and you'll get a random sequence of numbers between min and max inclusive. The sequence will be the same every time you use the same seed, but rand = new Random() seeds from the current time, so as long as your program is invoked with some separation in time, the sequences will differ.
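As a minimal sketch for the (1,440) range from the question:

// One default-seeded generator, created once and reused; constructing a new
// Random on every call can repeat seeds and therefore repeat sequences.
static readonly Random rng = new Random();

int num1 = rng.Next(1, 441); // uniform over 1..440 inclusive
int num2 = rng.Next(1, 441);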
"Seed" the random number generator by getting the number of seconds since midnight and then passing it in:
int secs = (int)DateTime.Now.TimeOfDay.TotalSeconds; // seconds since midnight
Random rand = new Random(secs);

This still does not generate perfectly random numbers, but should serve your purpose.
Producing the same sequence over and over is often a feature, not a bug, as long as you control it; a repeatable sequence makes debugging easier. If you are serious about wanting a non-reproducible random sequence, look for a secure random number generator that has this as a design aim. You've tagged your question C#, but since Java has http://docs.oracle.com/javase/1.4.2/docs/api/java/security/SecureRandom.html and the Windows API has http://en.wikipedia.org/wiki/CryptGenRandom, you may be able to find an equivalent in C#.
I am not so conversant in C#, but I don't think this problem would occur in Java, because the default constructor of the Random class uses a seed based on the current time plus a unique counter. Below is the code from the java.util.Random class.
private static volatile long seedUniquifier = 8682522807148012L;

public Random() {
    this(++seedUniquifier + System.nanoTime());
}
If C# doesn't support this out of the box, you could use the above approach to create a unique seed each time.
P.S.: Note that since access to seedUniquifier is not synchronized, even though it's volatile, there is a small possibility that the same seed is used for multiple Random objects. From the javadoc of the Random class:
"This constructor sets the seed of the random number generator to a
value very likely to be distinct from any other invocation of this constructor."
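A C# analogue of that approach might look like this (a sketch; using Interlocked.Increment also closes the synchronization gap mentioned above, and System.Diagnostics and System.Threading are assumed):

private static long seedUniquifier = 8682522807148012L;

public static Random CreateRandom()
{
    // Combine a ticking counter with a high-resolution timestamp, as Java does.
    long seed = Interlocked.Increment(ref seedUniquifier) + Stopwatch.GetTimestamp();
    return new Random((int)seed);
}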
You can use a chaotic map to generate random numbers. The C++ code below (GenRandRea) returns a vector of random numbers using the so-called tent map (https://www.wikiwand.com/en/Tent_map). The seed is an integer used to generate x (a number between 0 and 1) as input to the iterated map. Different seeds generate different sequences.
vector<double> GenRandRea(unsigned seed, int VecDim) {
    double x, y, f;
    vector<double> retval;
    x = 0.5 * (abs(sin((double)seed)) + abs(cos((double)seed)));
    for (int i = 0; i < (tentmap_delay + VecDim); i++) {
        if ((x >= 0.) && (x <= 0.5)) {
            f = 2 * tentmap_r * x;
        }
        else {
            f = 2 * tentmap_r * (1. - x);
        }
        if (i >= tentmap_delay) {
            y = (x * tentmap_const) - (int)(x * tentmap_const);
            retval.push_back(y);
        }
        x = f;
    }
    return retval;
}
with

const double tentmap_r = 0.75;     // parameter for the tent map
const int tentmap_delay = 50;      // number of warm-up iterations before output starts
const double tentmap_const = 1.e6; // constant for the tent map
VecDim is the output vector dimension. The idea is to iterate (tentmap_delay + VecDim) times and write the last VecDim results into retval (a vector of doubles).
To use this code:
vector<double> val;
val = GenRandRea(2, 10);
for (int kk = 0; kk < 10; kk++) {
    cout << setprecision(9) << val[kk] << endl;
}
which will for example produce:
0.767902586
0.848146121
0.727780818
0.408328773
0.88750684
0.83126026
0.253109609
0.620335586
0.569496621
0.145755069
Regards!