There is Best way to generate a random float in C# for float, and Generating a Random Decimal in C# for decimal; however, there is no question yet for double that covers "the full range of double".
How to properly generate a random double with a uniform distribution over all valid (i.e., non-NaN/infinite/...) values?
Your scaling wouldn't work, because the scaling factor (the full width of the double range) would actually be bigger than double.MaxValue itself and overflow.
var number = r.NextDouble() * 2d - 1d; // number in [-1, 1)
return number * double.MaxValue;
This code snippet first generates a uniformly distributed random number in the range [-1, 1).
Then it transforms it into your desired range.
This only works when both ranges are symmetric around zero, which is true for your use case (double.MaxValue == Math.Abs(double.MinValue)).
Note: according to the documentation (https://learn.microsoft.com/en-us/dotnet/api/system.random.nextdouble?view=netframework-4.7), the actual upper bound of Random.NextDouble() is 0.99999999999999978.
Regarding the follow-up question of how to get a range where 1 is included: you could implement the Random.NextDouble() method on your own.
The reference implementation (http://referencesource.microsoft.com/#mscorlib/system/random.cs,bb77e610694e64ca) uses the Random.Next() method and maps it to a double, where the Random.Next() method has a range [0, Int32.MaxValue - 1]. So just use r.Next() * (1.0 / (Int32.MaxValue - 1)) which would return double values in the range [0, 1].
By the way, I don't think the docs are correct, since the maximum value Random.NextDouble() can generate is 0.999999999534339, which is (Int32.MaxValue - 1) * (1.0 / Int32.MaxValue), and not 0.99999999999999978.
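Putting the pieces together, a minimal sketch of the approach described above (the helper name NextFullRangeDouble is made up for illustration, and the result is uniform over the value range, not over the representable doubles):
static double NextFullRangeDouble(Random r)
{
    double unit = r.Next() * (1.0 / (int.MaxValue - 1)); // [0, 1], with 1 included as described above
    return (unit * 2d - 1d) * double.MaxValue;           // scaled to [-double.MaxValue, double.MaxValue]
}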
The documentation of Random.NextDouble():
Returns a random floating-point number that is greater than or equal to 0.0, and less than 1.0.
So, it can be exactly 0. But what are the chances for that?
var random = new Random();
var x = random.NextDouble();
if (x == 0)
{
    // probability for this?
}
It would be easy to calculate the probability for Random.Next() being 0, but I have no idea how to do it in this case...
As mentioned in comments, it depends on internal implementation of NextDouble. In "old" .NET Framework, and in modern .NET up to version 5, it looks like this:
protected virtual double Sample() {
return (InternalSample()*(1.0/MBIG));
}
InternalSample returns an integer in the 0 to Int32.MaxValue range, 0 included, Int32.MaxValue excluded. We can assume that the distribution of InternalSample is uniform (the docs for the Next method, which just calls InternalSample, give clues that it is, and there seems to be no reason to use a non-uniform distribution in a general-purpose RNG for integers). That means every number is equally likely. We then have 2,147,483,647 numbers in the distribution, and the probability of drawing 0 is 1 / 2,147,483,647.
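As a small sketch of that arithmetic (assuming the uniform distribution of InternalSample):
// Int32.MaxValue = 2,147,483,647 equally likely values (0 .. Int32.MaxValue - 1), exactly one of which is 0
double probability = 1.0 / int.MaxValue;
Console.WriteLine(probability); // ~4.66E-10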
In modern .NET 6+ there are two implementations. The first is used when you provide an explicit seed value to the Random constructor. This implementation is the same as above and is kept for compatibility reasons - so that old code relying on the seed value to produce deterministic results will not break when moving to the new .NET version.
The second implementation is new and is used when you do NOT pass a seed into the Random constructor. Source code:
public override double NextDouble() =>
// As described in http://prng.di.unimi.it/:
// "A standard double (64-bit) floating-point number in IEEE floating point format has 52 bits of significand,
// plus an implicit bit at the left of the significand. Thus, the representation can actually store numbers with
// 53 significant binary digits. Because of this fact, in C99 a 64-bit unsigned integer x should be converted to
// a 64-bit double using the expression
// (x >> 11) * 0x1.0p-53"
(NextUInt64() >> 11) * (1.0 / (1ul << 53));
We first obtain a random 64-bit unsigned integer. Now, we could multiply it by 1 / 2^64 to obtain a double in the 0..1 range, but that would make the resulting distribution biased. A double is represented by a 53-bit mantissa (52 bits are explicit and one is implicit), an exponent and a sign, so only 53 bits are available to hold the significant digits of an integer. But we have a 64-bit integer here. This means integer values less than 2^53 can be represented exactly by a double, but bigger integers cannot. For example:
ulong l1 = 1ul << 53;
ulong l2 = l1 + 1;
double d1 = l1;
double d2 = l2;
Console.WriteLine(d1 == d2);
Prints "true", so two different integers map to the same double value. That means if we just multiply our 64-bit integer by 1 / 2^64 - we'll get a biased non-uniform distribution, because many integers bigger than 2^53-1 will map to the same values.
So instead, we throw away 11 bits and multiply the result by 1 / 2^53 to get a uniform distribution in the 0..1 range. The probability of getting 0 is then 1 / 2^53 (1 / 9,007,199,254,740,992). This implementation is better than the old one, because it provides many more distinct doubles in the 0..1 range (2^53 compared to roughly 2^31 in the old one).
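NextUInt64 is internal to Random, so here is a hedged sketch of the same 53-bit mapping using a public API (cryptographically random bytes stand in for the xoshiro generator; this reproduces the mapping, not the exact generator):
using System;
using System.Security.Cryptography;

static double NextUnitDouble()
{
    Span<byte> buffer = stackalloc byte[8];
    RandomNumberGenerator.Fill(buffer);        // 64 random bits
    ulong bits = BitConverter.ToUInt64(buffer);
    return (bits >> 11) * (1.0 / (1ul << 53)); // keep the top 53 bits, map uniformly into [0, 1)
}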
You also asked in comments:
If one knows how many numbers there are between 0 inclusive and 1
exclusive (according to IEEE 754), it would be possible to answer the
'probability' question, because 0 is one of all of them
That's not so. There are actually many more than 2^53 representable numbers between 0 and 1 in IEEE 754. We have 52 bits of mantissa, and then 11 bits of exponent, roughly half of which are used for negative exponents. Almost every negative exponent, combined with any mantissa, gives a distinct value in the 0..1 range.
Why can't we use the full set of doubles that IEEE 754 can represent in 0..1 when generating a random number? Because that set is not uniformly spaced (just as the full double range itself is not uniform). For example, there are more representable numbers in the 0..0.5 range than in the 0.5..1 range.
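A hedged way to see this from code: for positive doubles, the IEEE 754 bit patterns are consecutive integers, so subtracting them counts the representable values in a half-open range.
static long CountDoubles(double lo, double hi) =>
    BitConverter.DoubleToInt64Bits(hi) - BitConverter.DoubleToInt64Bits(lo);

Console.WriteLine(CountDoubles(0.5, 1.0)); // 2^52 = 4,503,599,627,370,496
Console.WriteLine(CountDoubles(0.0, 0.5)); // far more, since every smaller binade adds another 2^52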
This is from a strictly academic perspective.
From Double Struct:
All floating-point numbers also have a limited number of significant
digits, which also determines how accurately a floating-point value
approximates a real number. A Double value has up to 15 decimal digits
of precision, although a maximum of 17 digits is maintained
internally. This means that some floating-point operations may lack
the precision to change a floating point value.
If only 15 decimal digits are significant, then your possible return values are:
0.000000000000000
To:
0.999999999999999
Said differently, you have 10^15 possible (comparably different, "distinct") values (see Permutations in the first answer):
10^15 = 1,000,000,000,000,000
Zero is just ONE of those possibilities:
1 / 1,000,000,000,000,000 = 0.000000000000001
Stated as a percentage:
0.0000000000001% chance of zero being randomly selected?
I think this is the closest "correct" answer you're going to get...
...whether it performs this way in practice is possibly a different story.
Just create a simple program and let it run until you are satisfied with the number of tries done. (See: https://onlinegdb.com/ij1M50gRQ)
Random r = new Random();
double d;
int attempts = 0;
int attempts0 = 0;
while (true)
{
    d = Math.Round(r.NextDouble(), 3);
    if (d == 0) attempts0++;
    attempts++;
    if (attempts % 1000000 == 0)
        Console.WriteLine($"Attempts: {attempts}, with {attempts0} times a 0 value, this is {Math.Round(100.0 * attempts0 / attempts, 3)} %");
}
example output:
...
Attempts: 208000000, with 103831 times a 0 value, this is 0.05 %
Attempts: 209000000, with 104315 times a 0 value, this is 0.05 %
Attempts: 210000000, with 104787 times a 0 value, this is 0.05 %
Attempts: 211000000, with 105305 times a 0 value, this is 0.05 %
Attempts: 212000000, with 105853 times a 0 value, this is 0.05 %
Attempts: 213000000, with 106349 times a 0 value, this is 0.05 %
Attempts: 214000000, with 106839 times a 0 value, this is 0.05 %
...
Rounding to 3 decimals maps every value below 0.0005 to 0, so the expected rate is 0.0005 = 0.05 %, which matches the output; changing the rounding of d to 2 decimals instead returns roughly 0.5 %.
Currently, I'm developing some fuzzy logic stuff in C# and want to achieve this in a generic way. For simplicity, I can use float, double and decimal to process an interval [0, 1], but for performance, it would be better to use integers. Some thoughts about symmetry also led to the decision to omit the highest value in unsigned and the lowest value in signed integers. The lowest, non-omitted value maps to 0 and the highest, non-omitted value maps to 1. The omitted value is normalized to the next non-omitted value.
Now, I want to implement some compound calculations in the form of:
byte f(byte p1, byte p2, byte p3, byte p4)
{
return (p1 * p2) / (p3 * p4);
}
where the byte values are interpreted as the [0, 1] interval mentioned above. This means p1 * p2 < p1 and p1 * p2 < p2 as opposed to numbers greater than 1, where this is not valid, e. g. 2 * 3 = 6, but 0.1 * 0.2 = 0.02.
Additionally, a problem is: p1 * p2 and p3 * p4 may exceed the range of the type byte. The result of the whole formula may not exceed this range, but the overflow would still occur in one or both parts. Of course, I can just cast to ushort and in the end back to byte, but for a ulong I wouldn't have this possibility without further effort, and I don't want to stick to 32 bits. On the other hand, if I return (p1 / p3) * (p2 / p4), I decrease the type escalation, but might run into a result of 0, where the actual result is non-zero.
So I thought of somehow simultaneously "shrinking" both products step by step until I have the result in the [0, 1] interpretation. I don't need an exact value, a heuristic with an error less than 3 integer values off the correct value would be sufficient, and for an ulong an even higher error would certainly be OK.
So far, I have tried to convert the input to a decimal/float/double in the interval [0, 1] and calculated with that. But this is completely counterproductive regarding performance. I read stuff about division algorithms, but I couldn't find the one I saw once in class. It was about calculating quotient and remainder simultaneously, with an accumulator. I tried to reconstruct and extend it for factorized parts of the division with corrections, but this breaks where indivisibility occurs and I get too big an error. I also made some notes and calculated some integer examples manually, trying to factor out, cancel out, split sums and such fancy derivation stuff, but nothing led to a satisfying result or steps for an algorithm.
Is there a performant way to multiply/divide signed (and unsigned) integers as above, interpreted as the interval [0, 1], without type promotion?
To answer your question as summarised: No.
You need to state (and rank) your overall goals explicitly (e.g., is symmetry more or less important than performance?). Your chances of getting a helpful answer improve with succinctly stating them in the question.
While I think Phil1970's "you can ignore scaling for … division" is overly optimistic, multiplication is enough of a problem: if you don't generate partial results bigger (twice as big) than your "base type", you are stuck with multiplying parts of your operands and piecing the result together.
For ideas about piecing together "larger" results: AVR's Fractional Multiply.
Regarding …in signed integers. The lowest, non-omitted value maps to 0…, I expect that you will find, e.g., excess -32767/32768-coded fractions even harder to handle than two's complement ones.
If you are not careful, you will lose more time doing conversions than it would have taken with regular operations.
That being said, an alternative that might make some sense would be to map values between 0 and 128 inclusive (or 0 and 32768 if you want more precision), so that all values are essentially stored multiplied by 128.
So if you have (0.5 * 0.75) / (0.125 * 0.25), the stored values for each of those numbers would be 64, 96, 16 and 32 respectively. If you do those computations using ushort you would have (64 * 96) / (16 * 32) = 6144 / 512 = 12. This would give a result of 12 / 128 = 0.09375.
By the way, you can ignore scaling for addition, subtraction and division. For multiplication, you would do the multiplication as usual and then divide by 128. So for 0.5 * 0.75 you would have 64 * 96 / 128 = 48, which corresponds to 48 / 128 = 0.375 as expected.
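A minimal sketch of that multiplication rule, with a made-up helper name MulScaled and a scale of 128 (note that C# promotes byte operands to int for the intermediate product anyway):
const int Scale = 128; // values in [0, 1] are stored as x * 128

static byte MulScaled(byte a, byte b)
{
    // byte operands are promoted to int for the product, then rescaled back down
    return (byte)((a * b) / Scale); // equivalently (byte)((a * b) >> 7), since 128 is a power of two
}

// 0.5 * 0.75: stored as 64 and 96 -> MulScaled returns 48, i.e. 48 / 128 = 0.375
Console.WriteLine(MulScaled(64, 96));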
The code can be optimized for the platform, particularly if the platform is more efficient with narrow numbers. And if necessary, rounding could be added to the operations.
By the way, since the scaling is a power of 2, you can use bit shifting for it. You might prefer to use 256 instead of 128, particularly if you don't have one-cycle bit shifting, but then you need a larger width to handle some operations.
But you might be able to do some optimization if the most significant bit is not set, for example, so that you would only use a larger width when necessary.
I'm programming a perceptron and really need to get the range from the normal NextDouble (0, 1) to (-0.5, 0.5). Problem is, I'm using an array and I'm not sure whether it is possible. Hopefully that's enough information.
Random rdm = new Random();
double[] weights = {rdm.NextDouble(), rdm.NextDouble(), rdm.NextDouble()};
Simply subtract 0.5 from your random number:
double[] weights = {
rdm.NextDouble() - 0.5,
rdm.NextDouble() - 0.5,
rdm.NextDouble() - 0.5
};
If you need only one decimal place (my wild guess from what I have seen in Wikipedia) and want to include both limits, I wouldn't use a double but just a decimal value and then do the math:
(rdm.Next(11) - 5) / 10M;
That will return any of the 11 different possible values from -0.5 to 0.5.
Or you could go the double way but with a rounding, so you can actually hit the upper limit (0.5):
Math.Round(rdm.NextDouble() - 0.5, 1);
This way is probably a tiny bit slower than my first suggestion.
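For completeness, a small sketch of how the rounded variant would slot into the weights array from the question (variable names taken from the original snippet):
var rdm = new Random();
double[] weights =
{
    Math.Round(rdm.NextDouble() - 0.5, 1), // one decimal place, can hit both -0.5 and 0.5
    Math.Round(rdm.NextDouble() - 0.5, 1),
    Math.Round(rdm.NextDouble() - 0.5, 1)
};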
I need to convert any range to a -1 to 1 scale. I have multiple ranges that I am using, and therefore need this equation to be dynamic. Below is my current equation. It works great for ranges where 0 is the centerpoint, e.g. 200 to -200. However, I have another range that isn't converting nicely: 6000 to 4000. I've also tested 0 to 360 and it works.
var offset = -1 + ((2 / yMax) * (point.Y));
One of the major issues that I may have is that sometimes I'll get a value that is outside the range, and as such, the converted value needs to also be outside the -1 to 1 range.
What this is for is to take a value that is a real world value, and I need to be able to plot it into an OpenGL point. I'm using .NET 4.0 and the Tao Framework.
rescaled = -1 + 2 * (point.Y - yMin) / (yMax - yMin);
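A quick hedged check of that formula with the 4000..6000 range from the question (y stands in for point.Y):
double yMin = 4000, yMax = 6000;
double y = 5000; // midpoint of the range
double rescaled = -1 + 2 * (y - yMin) / (yMax - yMin);
Console.WriteLine(rescaled); // 0
// Values outside yMin..yMax simply map outside -1..1, which the question also needs.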
However, in OpenGL you can do this with a projection matrix (or matrix multiply inside a shader). Read about glTranslatef and glScalef for how to use them or how to duplicate them with matrix multiply.
If tan(x) = y and atan(y) = x why Math.Atan(Math.Tan(x)) != x?
I'm trying to calculate x in something like:
tan(2/x +3) = 5
so
atan(tan(2/x + 3)) = atan(5)
and so on... but I've tried this:
double d = Math.Atan(Math.Tan(10));
and d != 10. Why?
The tangent function is periodic with period pi, and is invertible only if you restrict it to a subset of its domain over which it is injective. Usually the choice of such a set is the open interval ]-pi/2, pi/2[, hence the arctan function will always return a point in that interval. In your case, 10 = 3*pi + 0.57522..., thus the arctan of the tangent of 10 will return 0.57522...
Note that the arctan function, defined as above, is injective and defined over all the real numbers; hence the converse of your problem, math.tan(math.atan(x)) == x, indeed holds for each x (except for numerical errors).
In order to deal with numerical errors, you should never perform comparisons between the results of floating point computations using == or !=. Use
abs(number1 - number2) < epsilon   // ==
abs(number1 - number2) >= epsilon  // !=
instead, where epsilon is a small positive constant.
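For example, a hedged C# illustration of both points, using the value from the question (epsilon here is an arbitrary small constant, not Double.Epsilon):
double x = 10;
double reduced = x - 3 * Math.PI;          // 0.57522..., has the same tangent as 10
double roundTrip = Math.Atan(Math.Tan(x)); // also ~0.57522..., not 10
const double epsilon = 1e-12;
Console.WriteLine(Math.Abs(roundTrip - reduced) < epsilon); // True, up to rounding error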
A graph might help explain why you are not getting the result you expected.
http://mathworld.wolfram.com/Tangent.html
That shows the graph of Tan, but if you imagine reading off a value of x for a given y (e.g. y = 0), then depending on which "strand" of Tan you read, you will get a different answer (-pi, 0, pi...). That's the point about Arctan(x) having more than one solution.
If arctan were restricted to only one of those strands, e.g. -pi/2 < x < pi/2, then Arctan(tan(x)) would return x, provided you have accounted for floating point errors.
EDIT: However, according to http://msdn.microsoft.com/en-us/library/system.math.atan.aspx, the atan method already returns -pi/2 < x < pi/2, or NaN if your input is undefined. So the problem must be down solely to floating point rounding.
I don't know any C#, but maths says that tan is not invertible, only on a small interval.
E.g. tan(pi) = 0 and tan(0) = 0. When asking for atan(0) it could be 0 or pi (or any multiple of pi), so the result is restricted to the range -pi/2 .. pi/2.
Even if you start with an x in the invertible range, it doesn't have to work exactly, because of rounding errors with floating point (it doesn't have unlimited precision).
tan⁻¹(tan(x)) == x for all x in (-PI/2, PI/2).
Because the tangent function is periodic, we need to normalize the input angle. Math.Atan returns an angle, θ, measured in radians, such that -π/2 ≤ θ ≤ π/2, so it makes sense to normalize to that range (angles already within it are left unchanged anyway):
double normalizedAngle = (angle + Math.PI / 2) % Math.PI - Math.PI / 2;
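A usage sketch of that normalization for the value from the question (note that C#'s % can return a negative remainder, so this assumes angle >= -Math.PI / 2):
double angle = 10;
double normalizedAngle = (angle + Math.PI / 2) % Math.PI - Math.PI / 2;
Console.WriteLine(normalizedAngle);            // ~0.5752
Console.WriteLine(Math.Atan(Math.Tan(angle))); // ~0.5752 as well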
Doubles should be compared with some error margin, but in this case Double.Epsilon is far too small a margin. As the docs put it: "If you create a custom algorithm that determines whether two floating-point numbers can be considered equal, you must use a value that is greater than the Epsilon constant to establish the acceptable absolute margin of difference for the two values to be considered equal. (Typically, that margin of difference is many times greater than Epsilon.)" For instance, Math.Atan(Math.Tan(-0.49999632679501449)) + 0.49999632679501449 is greater than Double.Epsilon by a factor of about 1.1235582092889474E+307.
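A hedged illustration of that last point (the exact factor depends on rounding, but per the measurement above the difference dwarfs Double.Epsilon):
double x = -0.49999632679501449;
double diff = Math.Abs(Math.Atan(Math.Tan(x)) - x);
Console.WriteLine(diff > double.Epsilon); // True
Console.WriteLine(diff / double.Epsilon); // an enormous factor, on the order of 1E+307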
It might be helpful if you posted what you are trying to accomplish. I have recollections of discovering trig functions that handled the issue of what quadrant the inputs were in when I tried playing with angles, for example.
In general, when you are dealing with floating point numbers, you are dealing with approximations. There are numbers that cannot be represented exactly, and the tan and arctan operations are themselves only approximate.
If you want to compare floating point numbers, you need to ask if they are nearly equal, or equivalently, if the difference is less than some small value, and think carefully what you are doing.
Here are some FAQs (for C++, but the idea is the same) that talk a bit about some of the oddities of floating point numbers:
FAQ 29.16
FAQ 29.17
FAQ 29.18
Edit: Looking at the other answers, I realise that the main problem is probably that tan isn't invertible, but the approximation issue is worth considering too, whenever you test floating point numbers for equality.
Looking at the .NET documentation for Math.Atan, atan produces a value between -π/2 and π/2, which doesn't include 10. That, I think, is the usual range for arctan.
double d = Math.Atan(1) * (180 / Math.PI);
so d will be 45 in degrees