Is a GUID unique 100% of the time?
Will it stay unique over multiple threads?
While each generated GUID is not guaranteed to be unique, the total number of unique keys (2^128 or 3.4×10^38) is so large that the probability of the same number being generated twice is very small. For example, consider the observable universe, which contains about 5×10^22 stars; every star could then have 6.8×10^15 universally unique GUIDs.
From Wikipedia.
These are some good articles on how a GUID is made (for .NET) and how you could end up with the same GUID in the right situation.
https://ericlippert.com/2012/04/24/guid-guide-part-one/
https://ericlippert.com/2012/04/30/guid-guide-part-two/
https://ericlippert.com/2012/05/07/guid-guide-part-three/
If you are scared of getting the same GUID values, then put two of them next to each other:
Guid.NewGuid().ToString() + Guid.NewGuid().ToString();
If you are too paranoid then put three.
The simple answer is yes.
Raymond Chen wrote a great article on GUIDs and why substrings of GUIDs are not guaranteed to be unique. The article goes into some depth on the way GUIDs are generated and the data they use to ensure uniqueness, which should go some way toward explaining why they are :-)
As a side note, I was playing around with Volume GUIDs in Windows XP. This is a very obscure partition layout with three disks and fourteen volumes.
\\?\Volume{23005604-eb1b-11de-85ba-806d6172696f}\ (F:)
\\?\Volume{23005605-eb1b-11de-85ba-806d6172696f}\ (G:)
\\?\Volume{23005606-eb1b-11de-85ba-806d6172696f}\ (H:)
\\?\Volume{23005607-eb1b-11de-85ba-806d6172696f}\ (J:)
\\?\Volume{23005608-eb1b-11de-85ba-806d6172696f}\ (D:)
\\?\Volume{23005609-eb1b-11de-85ba-806d6172696f}\ (P:)
\\?\Volume{2300560b-eb1b-11de-85ba-806d6172696f}\ (K:)
\\?\Volume{2300560c-eb1b-11de-85ba-806d6172696f}\ (L:)
\\?\Volume{2300560d-eb1b-11de-85ba-806d6172696f}\ (M:)
\\?\Volume{2300560e-eb1b-11de-85ba-806d6172696f}\ (N:)
\\?\Volume{2300560f-eb1b-11de-85ba-806d6172696f}\ (O:)
\\?\Volume{23005610-eb1b-11de-85ba-806d6172696f}\ (E:)
\\?\Volume{23005611-eb1b-11de-85ba-806d6172696f}\ (R:)
                                     | | | | |
                                     | | | | +-- 6f = o
                                     | | | +---- 69 = i
                                     | | +------ 72 = r
                                     | +-------- 61 = a
                                     +---------- 6d = m
It's not just that the GUIDs are very similar; every one of them contains the string "mario". Is that a coincidence, or is there an explanation behind this?
Now, when googling for the fourth part of the GUID, I found approximately 125,000 hits on volume GUIDs.
Conclusion: When it comes to Volume GUIDs they aren't as unique as other GUIDs.
It should not happen. However, when .NET is under heavy load, it is possible to get duplicate GUIDs. I have two different web servers using two different SQL Servers. I went to merge the data and found I had 15 million GUIDs and 7 duplicates.
Yes, a GUID should always be unique. It is based on both hardware and time, plus a few extra bits to make sure it's unique. I'm sure it's theoretically possible to end up with two identical ones, but extremely unlikely in a real-world scenario.
Here's a great article by Raymond Chen on Guids:
https://blogs.msdn.com/oldnewthing/archive/2008/06/27/8659071.aspx
Guids are statistically unique. The odds of two different clients generating the same Guid are infinitesimally small (assuming no bugs in the Guid generating code). You may as well worry about your processor glitching due to a cosmic ray and deciding that 2+2=5 today.
Multiple threads allocating new GUIDs will get unique values, but you should check that the function you are calling is thread safe. Which environment is this in?
Eric Lippert has written a very interesting series of articles about GUIDs.
There are on the order of 2^30 personal computers in the world (and of course lots of hand-held devices or non-PC computing devices that have more or less the same levels of computing power, but let's ignore those). Let's assume that we put all those PCs in the world to the task of generating GUIDs; if each one can generate, say, 2^20 GUIDs per second then after only about 2^72 seconds -- one hundred and fifty trillion years -- you'll have a very high chance of generating a collision with your specific GUID. And the odds of collision get pretty good after only thirty trillion years.
GUID Guide, part one
GUID Guide, part two
GUID Guide, part three
Theoretically, no, they are not unique. It's possible to generate an identical guid over and over. However, the chances of it happening are so low that you can assume they are unique.
I've read before that the chances are so low that you really should stress about something else--like your server spontaneously combusting or other bugs in your code. That is, assume it's unique and don't build in any code to "catch" duplicates--spend your time on something more likely to happen (i.e. anything else).
I made an attempt to describe the usefulness of GUIDs to my blog audience (non-technical family members). From there (via Wikipedia), the odds of generating a duplicate GUID:
1 in 2^128
1 in 340 undecillion (don’t worry, undecillion is not on the
quiz)
1 in 3.4 × 10^38
1 in 340,000,000,000,000,000,000,000,000,000,000,000,000
No one seems to mention the actual math behind the probability of it occurring.
First, let's assume we can use the entire 128 bit space (Guid v4 only uses 122 bits).
We know that the general probability of NOT getting a duplicate in n picks is:
(1 - 1/2^128)(1 - 2/2^128)...(1 - (n-1)/2^128)
Because 2^128 is much, much larger than n, we can approximate this to:
(1 - 1/2^128)^(n(n-1)/2)
And because we can assume n is much, much larger than 0, we can approximate that to:
(1 - 1/2^128)^(n^2/2)
Now we can set this probability of no duplicate equal to an "acceptable" threshold, let's say 1% (that is, a 99% chance of at least one collision):
(1 - 1/2^128)^(n^2/2) = 0.01
Which we solve for n and get:
n = sqrt(2 * log 0.01 / log(1 - 1/2^128))
Which Wolfram Alpha evaluates to 5.598318 × 10^19.
To put that number into perspective, let's take 10,000 machines, each having a 4-core CPU running at 4 GHz and spending 10,000 cycles to generate a GUID, doing nothing else. It would then take ~111 years before they generate a duplicate.
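Under the same approximation you can sanity-check that figure in a few lines of C#. This is just a sketch, not part of the original answer; ln(1 - 1/2^128) is replaced by -1/2^128 because the exact expression underflows a double:

using System;

class SolveForN
{
    static void Main()
    {
        double keySpace = Math.Pow(2, 128);
        // Target probability of *no* duplicate, as in the derivation above.
        double p = 0.01;
        // From (1 - 1/2^128)^(n^2/2) = p and ln(1 - x) ≈ -x for tiny x:
        //   n ≈ sqrt(-2 * 2^128 * ln p)
        double n = Math.Sqrt(-2.0 * keySpace * Math.Log(p));
        Console.WriteLine(n);   // ≈ 5.6e19, matching the Wolfram Alpha figure
    }
}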
From http://www.guidgenerator.com/online-guid-generator.aspx
What is a GUID?
GUID (or UUID) is an acronym for 'Globally Unique Identifier' (or 'Universally Unique Identifier'). It is a 128-bit integer number used to identify resources. The term GUID is generally used by developers working with Microsoft technologies, while UUID is used everywhere else.
How unique is a GUID?
128-bits is big enough and the generation algorithm is unique enough that if 1,000,000,000 GUIDs per second were generated for 1 year the probability of a duplicate would be only 50%. Or if every human on Earth generated 600,000,000 GUIDs there would only be a 50% probability of a duplicate.
Is a GUID unique 100% of the time?
Not guaranteed, since there are several ways of generating one. However, you can try to calculate the chance of creating two GUIDs that are identical and you get the idea: a GUID has 128 bits, hence there are 2^128 distinct GUIDs – much more than there are stars in the known universe. Read the Wikipedia article for more details.
MSDN:
There is a very low probability that the value of the new Guid is all zeroes or equal to any other Guid.
If your system clock is set properly and hasn't wrapped around, and if your NIC has its own MAC (i.e. you haven't set a custom MAC) and your NIC vendor has not been recycling MACs (which they are not supposed to do but which has been known to occur), and if your system's GUID generation function is properly implemented, then your system will never generate duplicate GUIDs.
If everyone on earth who is generating GUIDs follows those rules then your GUIDs will be globally unique.
In practice, the number of people who break the rules is low, and their GUIDs are unlikely to "escape". Conflicts are statistically improbable.
I experienced a duplicate GUID.
I use the Neat Receipts desktop scanner and it comes with proprietary database software. The software has a sync to cloud feature, and I kept getting an error upon syncing. A gander at the logs revealed the awesome line:
"errors":[{"code":1,"message":"creator_guid: is already
taken","guid":"C83E5734-D77A-4B09-B8C1-9623CAC7B167"}]}
I was a bit in disbelief, but sure enough, when I found a way into my local NeatWorks database and deleted the record containing that GUID, the error stopped occurring.
So to answer your question with anecdotal evidence, no. A duplicate is possible. But it is likely that the reason it happened wasn't due to chance, but due to standard practice not being adhered to in some way. (I am just not that lucky) However, I cannot say for sure. It isn't my software.
Their customer support was EXTREMELY courteous and helpful, but they must have never encountered this issue before because after 3+ hours on the phone with them, they didn't find the solution. (FWIW, I am very impressed by Neat, and this glitch, however frustrating, didn't change my opinion of their product.)
For a better result, the best way is to append a timestamp to the GUID (just to make sure that it stays unique):
Guid.NewGuid().ToString() + DateTime.Now.ToString();
GUID algorithms are usually implemented according to the v4 GUID specification, which is essentially a pseudo-random string. Sadly, these fall into the category of "likely non-unique", from Wikipedia (I don't know why so many people ignore this bit): "... other GUID versions have different uniqueness properties and probabilities, ranging from guaranteed uniqueness to likely non-uniqueness."
The pseudo-random properties of V8's JavaScript Math.random() are TERRIBLE at uniqueness, with collisions often coming after only a few thousand iterations, but V8 isn't the only culprit. I've seen real-world GUID collisions using both PHP and Ruby implementations of v4 GUIDs.
Because it's becoming more and more common to scale ID generation across multiple clients, and clusters of servers, entropy takes a big hit -- the chances of the same random seed being used to generate an ID escalate (time is often used as a random seed in pseudo-random generators), and GUID collisions escalate from "likely non-unique" to "very likely to cause lots of trouble".
To solve this problem, I set out to create an ID algorithm that could scale safely, and make better guarantees against collision. It does so by using the timestamp, an in-memory client counter, client fingerprint, and random characters. The combination of factors creates an additive complexity that is particularly resistant to collision, even if you scale it across a number of hosts:
http://usecuid.org/
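For illustration only, here is a rough C# sketch of that combination of factors (timestamp, in-memory counter, host fingerprint, random tail). It is not the cuid algorithm from the link, just the general shape, and CompositeId and its output format are made up here:

using System;
using System.Threading;

static class CompositeId
{
    private static int _counter;
    private static readonly Random Rng = new Random();
    // A crude host fingerprint; a real implementation would use something stronger.
    private static readonly string Fingerprint =
        (Environment.MachineName.GetHashCode() & 0xFFFF).ToString("x4");

    public static string Next()
    {
        long timestamp = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
        // The in-memory counter breaks ties between IDs created in the same millisecond.
        int count = Interlocked.Increment(ref _counter) & 0xFFFF;
        int random;
        lock (Rng) { random = Rng.Next(); }
        return $"{timestamp:x}-{Fingerprint}-{count:x4}-{random:x8}";
    }
}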
I have experienced GUIDs not being unique during multi-threaded/multi-process unit testing too. I guess that has to do with, all other things being equal, the identical seeding (or lack of seeding) of pseudo-random generators. I was using them for generating unique file names. I found the OS is much better at doing that :)
Trolling alert
You ask if GUIDs are 100% unique. That depends on the number of GUIDs it must be unique among. As the number of GUIDs approach infinity, the probability for duplicate GUIDs approach 100%.
In a more general sense, this is known as the "birthday problem" or "birthday paradox". Wikipedia has a pretty good overview at:
Wikipedia - Birthday Problem
In very rough terms, the square root of the size of the pool is a rough approximation of when you can expect a 50% chance of a duplicate. The article includes a probability table of pool size and various probabilities, including a row for 2^128. So for a 1% probability of collision you would expect to randomly pick 2.6*10^18 128-bit numbers. A 50% chance requires 2.2*10^19 picks, while SQRT(2^128) is 1.8*10^19.
Of course, that is just the ideal case of a truly random process. As others mentioned, a lot is riding on the that random aspect - just how good is the generator and seed? It would be nice if there was some hardware support to assist with this process which would be more bullet-proof except that anything can be spoofed or virtualized. I suspect that might be the reason why MAC addresses/time-stamps are no longer incorporated.
The answer to "Is a GUID 100% unique?" is simply "No".
If you want 100% uniqueness of a GUID, then do the following:
1. Generate a GUID.
2. Check whether that GUID already exists in the table column where you need uniqueness.
3. If it exists, go to step 1; otherwise go to step 4.
4. Use this GUID as unique.
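A minimal sketch of that loop, assuming SQL Server and placeholder table/column names (MyTable, MyGuidColumn):

using System;
using System.Data.SqlClient;

static class VerifiedGuid
{
    public static Guid Next(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            while (true)
            {
                Guid candidate = Guid.NewGuid();                       // step 1
                using (var cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM MyTable WHERE MyGuidColumn = @g", connection))
                {
                    cmd.Parameters.AddWithValue("@g", candidate);      // step 2: look for an existing row
                    if ((int)cmd.ExecuteScalar() == 0)                 // steps 3-4: not taken, so use it
                        return candidate;
                }
            }
        }
    }
}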
The hardest part is not generating a duplicate GUID.
The hardest part is designing a database to store all of the generated ones in order to check whether a new one is actually a duplicate.
From Wikipedia:
For example, the number of random version 4 UUIDs which need to be generated in order to have a 50% probability of at least one collision is 2.71 quintillion, computed as follows:
[image: the birthday-problem calculation from the Wikipedia UUID article]
This number is equivalent to generating 1 billion UUIDs per second for about 85 years, and a file containing this many UUIDs, at 16 bytes per UUID, would be about 45 exabytes, many times larger than the largest databases currently in existence, which are on the order of hundreds of petabytes
GUID stands for Globally Unique Identifier
In Brief:
(the clue is in the name)
In Detail:
GUIDs are designed to be unique; they are calculated using a random method based on the computer's clock and the computer itself. If you are creating many GUIDs at the same millisecond on the same machine, it is possible they may match, but for almost all normal operations they should be considered unique.
I think that when people bury their thoughts and fears in statistics, they tend to forget the obvious. If a system is truly random, then the result you are least likely to expect (all ones, say) is equally as likely as any other unexpected value (all zeros, say). Neither fact prevents these occurring in succession, nor within the first pair of samples (even though that would be statistically "truly shocking"). And that's the problem with measuring chance: it ignores criticality (and rotten luck) entirely.
IF it ever happened, what's the outcome? Does your software stop working? Does someone get injured? Does someone die? Does the world explode?
The more extreme the criticality, the worse the word "probability" sits in the mouth. In the end, chaining GUIDs (or XORing them, or whatever) is what you do when you regard (subjectively) your particular criticality (and your feeling of "luckiness") to be unacceptable. And if it could end the world, then please on behalf of all of us not involved in nuclear experiments in the Large Hadron Collider, don't use GUIDs or anything else indeterministic!
Enough GUIDs to assign one to each and every hypothetical grain of sand on every hypothetical planet around each and every star in the visible universe.
Enough so that if every computer in the world generates 1000 GUIDs a second for 200 years, there might (MIGHT) be a collision.
Given the number of current local uses for GUIDs (one sequence per table per database for instance) it is extraordinarily unlikely to ever be a problem for us limited creatures (and machines with lifetimes that are usually less than a decade if not a year or two for mobile phones).
... Can we close this thread now?
Related
I'm looking to generate unique ids for identifying some data in my system. I'm using an elaborate system which concatenates some (non unique, relevant) meta-data with System.Guid.NewGuid()s. Are there any drawbacks to this approach, or am I in the clear?
I'm looking to generate unique ids for identifying some data in my system.
I'd recommend a GUID then, since they are by definition globally unique identifiers.
I'm using an elaborate system which concatenates some (non unique, relevant) meta-data with System.Guid.NewGuid(). Are there any drawbacks to this approach, or am I in the clear?
Well, since we do not know what you would consider a drawback, it is hard to say. A number of possible drawbacks come to mind:
GUIDs are big: 128 bits is a lot of bits.
GUIDs are not guaranteed to have any particular distribution; it is perfectly legal for GUIDs to be generated sequentially, and it is perfectly legal for them to be distributed uniformly over their 124-bit space (128 bits minus the four bits that are the version number, of course). This can have serious impacts on database performance if the GUID is being used as a primary key on a database that is indexed into sorted order by the GUID; insertions are much more efficient if the new row always goes at the end. A uniformly distributed GUID will almost never be at the end.
Version 4 GUIDs are not necessarily cryptographically random; if GUIDs are generated by a non-crypto-random generator, an attacker could in theory predict what your GUIDs are when given a representative sample of them. An attacker could in theory determine the probability that two GUIDs were generated in the same session. Version one GUIDs are of course barely random at all, and can tell the sophisticated reader when and where they were generated.
And so on.
I am planning a series of articles about these and other characteristics of GUIDs in the next couple of weeks; watch my blog for details.
UPDATE: https://ericlippert.com/2012/04/24/guid-guide-part-one/
When you use System.Guid.NewGuid(), you may still want to check that the guid doesn't already exist in your system.
While a guid is so complex as to be virtually unique, there is nothing to guarantee that it doesn't already exist except probability. It's just incredibly statistically unlikely, to the point that in almost any case it's the same as being unique.
Generating two identical GUIDs is like winning the lottery twice - there's nothing to actually prevent it, it's just so unlikely it might as well be impossible.
Most of the time you could probably get away with not checking for existing matches, but in a very extreme case with lots of generation going on, or where the system absolutely must not fail, it could be worth checking.
EDIT
Let me clarify a little more. It is highly, highly unlikely that you would ever see a duplicate GUID. That's the point. It's "globally unique", meaning there's such an infinitesimally small chance of a duplicate that you can assume it will be unique. However, if we are talking about code that keeps an aircraft in the sky, monitors a nuclear reactor, or handles life support on the International Space Station, I, personally, would still check for a duplicate, just because it would really be terrible to hit that edge case. If you're just writing a blog engine, on the other hand, go ahead, use it without checking.
Feel free to use NewGuid(). There is no problem with its uniqueness.
The probability that it will generate the same GUID twice is far too low to worry about; a nice example can be found here: Simple proof that GUID is not unique
var bigHeapOGuids = new Dictionary<Guid, Guid>();
try
{
    do
    {
        Guid guid = Guid.NewGuid();
        bigHeapOGuids.Add(guid, guid);
    } while (true);
}
catch (OutOfMemoryException)
{
}
At some point it just crashed on OutOfMemory and not on duplicated key conflict.
My problem is not usual. Let's imagine a few billion strings. Strings are usually less than 15 characters. In this list I need to find the number of unique elements.
First of all, what object should I use? You shouldn't forget that if I add a new element, I have to check whether it already exists in the list. It is not a problem in the beginning, but after a few million words it can really slow down the process.
That's why I thought that a hashtable would be ideal for this task, because checking the list is ideally only O(1). Unfortunately a single object in .NET can only be 2 GB.
The next step would be to implement a custom hashtable which contains a list of 2 GB hashtables.
I am wondering maybe some of you know a better solution.
(Computer has extremely high specification.)
I would skip the data structures exercise and just use an SQL database. Why write another custom data structure that you have to analyze and debug, just use a database. They are really good at answering queries like this.
I'd consider a Trie or a Directed acyclic word graph which should be more space-efficient than a hash table. Testing for membership of a string would be O(len) where len is the length of the input string, which is probably the same as a string hashing function.
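For illustration, here is a minimal C# trie sketch along those lines (a Dictionary-per-node layout chosen for brevity rather than for the compactness argued above); Add returns true only for strings not seen before, so counting uniques is just counting the true results:

using System.Collections.Generic;

class Trie
{
    private sealed class Node
    {
        public readonly Dictionary<char, Node> Children = new Dictionary<char, Node>();
        public bool IsTerminal;
    }

    private readonly Node _root = new Node();

    // O(len) per string, as described above.
    public bool Add(string s)
    {
        Node current = _root;
        foreach (char c in s)
        {
            if (!current.Children.TryGetValue(c, out Node next))
            {
                next = new Node();
                current.Children.Add(c, next);
            }
            current = next;
        }
        if (current.IsTerminal) return false;   // already seen
        current.IsTerminal = true;
        return true;
    }
}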
This can be solved in worst-case O(n) time using radix sort with counting sort as a stable sort for each character position. This is theoretically better than using a hash table (O(n) expected but not guaranteed) or mergesort (O(n log n)). Using a trie would also result in a worst-case O(n)-time solution (constant-time lookup over n keys, since all strings have a bounded length that's a small constant), so this is comparable. I'm not sure how they compare in practice. Radix sort is also fairly easy to implement and there are plenty of existing implementations.
If all strings are d characters or shorter, and the number of distinct characters is k, then radix sort takes O(d (n + k)) time to sort n keys. After sorting, you can traverse the sorted list in O(n) time and increment a counter every time you get to a new string. This would be the number of distinct strings. Since d is ~15 and k is relatively small compared to n (a billion), the running time is not too bad.
This uses O(dn) space though (to hold each string), so it's less space-efficient than tries.
If the items are strings, which are comparable... then I would suggest abandoning the idea of a Hashtable and going with something more like a Binary Search Tree. There are several implementations out there in C# (none that come built into the Framework). Be sure to get one that is balanced, like a Red Black Tree or an AVL Tree.
The advantage is that each object in the tree is relatively small (it only contains its object and links to its parent and two children), so you can have a whole slew of them.
Also, because it's sorted, the retrieval and insertion time are both O(log n).
Since you specify that a single object cannot contain all of the strings, I would presume that you have the strings on disk or some other external memory. In that case I would probably go with sorting. From a sorted list it is simple to extract the unique elements. Merge sorting is popular for external sorts, and needs only an amount of extra space equal to what you have. Start by dividing the input into pieces that fit into memory, sort those and then start merging.
With a few billion strings, if even a few percent are unique, the chances of a hash collision are pretty high (.NET hash codes are 32-bit int, yielding roughly 4 billion unique hash values. If you have as few as 100 million unique strings, the risk of hash collision may be unacceptably high). Statistics isn't my strongest point, but some google research turns up that the probability of a collision for a perfectly distributed 32-bit hash is (N - 1) / 2^32, where N is the number of unique things that are hashed.
You run a MUCH lower probability of a hash collision using an algorithm that uses significantly more bits, such as SHA-1.
Assuming an adequate hash algorithm, one simple approach close to what you have already tried would be to create an array of hash tables. Divide possible hash values into enough numeric ranges so that any given block will not exceed the 2GB limit per object. Select the correct hash table based on the value of the hash, then search in that hash table. For example, you might create 256 hash tables and use (HashValue)%256 to get a hash table number from 0..255. Use that same algorithm when assigning a string to a bucket, and when checking/retrieving it.
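A minimal sketch of that scheme, using HashSet<string> as the per-range container and 256 buckets (both choices are arbitrary here):

using System.Collections.Generic;

class PartitionedStringSet
{
    private readonly HashSet<string>[] _buckets = new HashSet<string>[256];

    public PartitionedStringSet()
    {
        for (int i = 0; i < _buckets.Length; i++)
            _buckets[i] = new HashSet<string>();
    }

    // Returns true if the string was not seen before.
    // The same routing rule is used for every insert and lookup.
    public bool Add(string s)
    {
        int bucket = (s.GetHashCode() & 0x7FFFFFFF) % 256;
        return _buckets[bucket].Add(s);
    }

    public long UniqueCount
    {
        get
        {
            long total = 0;
            foreach (var set in _buckets) total += set.Count;
            return total;
        }
    }
}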
divide and conquer - partition data by first 2 letters (say)
dictionary of xx=>dictionary of string=> count
I would use a database, any database would do.
Probably the fastest because modern databases are optimized for speed and memory usage.
You need only one column with index, and then you can count the number of records.
+1 for the SQL/Db solutions, keeps things simple --will allow you to focus on the real task at hand.
But just for academic purposes, I will like to add my 2 cents.
-1 for hashtables. (I cannot vote down yet). Because they are implemented using buckets, the storage cost can be huge in many practical implementation. Plus I agree with Eric J, the chances of collisions will undermine the time efficiency advantages.
Lee, the construction of a trie or DAWG will take up space as well as some extra time (initialization latency). If that is not an issue (that will be the case when you may need to perform search like operations on the set of strings in the future as well and you have ample memory available), tries can be a good choice.
Space will be the problem with Radix sort or similar implementations (as mentioned by KirarinSnow) because the dataset is huge.
Below is my solution for one-time duplicate counting, with limits on how much space can be used.
If we have the storage available for holding 1 billion elements in memory, we can sort them in place with heap-sort in Θ(n log n) time and then simply traverse the collection once in O(n) time, doing this:
if (a[i] == a[i+1])
    dupCount++;
If we do not have that much memory available, we can divide the input file on disk into smaller files (till the size becomes small enough to hold the collection in memory); then sort each such small file by using the above technique; then merge them together. This requires many passes on the main input file.
I would like to keep away from quick-sort because the dataset is huge. If I could squeeze in some memory for the second case, I would rather use it to reduce the number of passes than waste it on merge-sort/quick-sort (actually, it depends heavily on the type of input we have at hand).
Edit: SQL/DB solutions are good only when you need to store this data for a long duration.
Have you tried a Hash-map (Dictionary in .Net)?
Dictionary<String, byte> would only take up 5 bytes per entry on x86 (4 for the pointer to the string pool, 1 for the byte), which is about 400M elements. If there are many duplicates, they should be able to fit. Implementation-wise, it might be verrryy slow (or not work), since you also need to store all those strings in memory.
If the strings are very similar, you could also write your own Trie implementation.
Otherwise, your best bets would be to sort the data in place on disk (after which counting unique elements is trivial), or to use a lower-level, more memory-tight language like C++.
A Dictionary<> is internally organized as a list of lists. You won't get close to the (2GB/8)^2 limit on a 64-bit machine.
I agree with the other posters regarding a database solution, but further to that, a reasonably-intelligent use of triggers, and a potentially-cute indexing scheme (i.e. a numerical representation of the strings) would be the fastest approach, IMHO.
If what you need is a close approximation of the unique count, then look at the HyperLogLog algorithm. It is used to get a close estimate of the cardinality of large datasets like the one you are referring to. Google BigQuery and Reddit use it for similar purposes. Many modern databases have implemented it. It is pretty fast and can work with minimal memory.
Why would anybody use the "standard" random number generator from System.Random at all instead of always using the cryptographically secure random number generator from System.Security.Cryptography.RandomNumberGenerator (or its subclasses because RandomNumberGenerator is abstract)?
Nate Lawson tells us in his Google Tech Talk presentation "Crypto Strikes Back" at minute 13:11 not to use the "standard" random number generators from Python, Java and C# and to instead use the cryptographically secure version.
I know the difference between the two versions of random number generators (see question 101337).
But what rationale is there to not always use the secure random number generator? Why use System.Random at all? Performance perhaps?
Speed and intent. If you're generating a random number and have no need for security, why use a slow crypto function? You don't need security, so why make someone else think that the number may be used for something secure when it won't be?
Apart from the speed and the more useful interface (NextDouble() etc.), it is also possible to make a repeatable random sequence by using a fixed seed value. That is quite useful, among other things, during testing.
Random gen1 = new Random(); // auto seeded by the clock
Random gen2 = new Random(0); // Next(10) always yields 7,8,7,5,2,....
First of all, the presentation you linked only talks about random numbers for security purposes, so it doesn't claim Random is bad for non-security purposes.
But I do claim it is. The .net 4 implementation of Random is flawed in several ways. I recommend only using it if you don't care about the quality of your random numbers. I recommend using better third party implementations.
Flaw 1: The seeding
The default constructor seeds with the current time. Thus all instances of Random created with the default constructor within a short time-frame (ca. 10ms) return the same sequence. This is documented and "by-design". This is particularly annoying if you want to multi-thread your code, since you can't simply create an instance of Random at the beginning of each thread's execution.
The workaround is to be extra careful when using the default constructor and manually seed when necessary.
Another problem here is that the seed space is rather small (31 bits). So if you generate 50k instances of Random with perfectly random seeds you will probably get one sequence of random numbers twice (due to the birthday paradox). So manual seeding isn't easy to get right either.
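One possible shape of that workaround (a sketch; ThreadLocalRandom is a made-up helper, not a framework type): give each thread its own Random instance, seeded from a crypto RNG instead of the clock, so that threads started within the same ~10 ms window do not share a sequence:

using System;
using System.Security.Cryptography;
using System.Threading;

static class ThreadLocalRandom
{
    private static readonly RNGCryptoServiceProvider Seeder = new RNGCryptoServiceProvider();

    private static readonly ThreadLocal<Random> Instance = new ThreadLocal<Random>(() =>
    {
        // Draw a 32-bit seed per thread; seed collisions are still possible
        // (the effective seed space is small) but the clock problem is gone.
        var seedBytes = new byte[4];
        Seeder.GetBytes(seedBytes);
        return new Random(BitConverter.ToInt32(seedBytes, 0));
    });

    public static Random Current => Instance.Value;
}

// Usage: int roll = ThreadLocalRandom.Current.Next(1, 7);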
Flaw 2: The distribution of random numbers returned by Next(int maxValue) is biased
There are parameters for which Next(int maxValue) is clearly not uniform. For example if you calculate r.Next(1431655765) % 2 you will get 0 in about 2/3 of the samples. (Sample code at the end of the answer.)
Flaw 3: The NextBytes() method is inefficient.
The per byte cost of NextBytes() is about as big as the cost to generate a full integer sample with Next(). From this I suspect that they indeed create one sample per byte.
A better implementation using 3 bytes out of each sample would speed NextBytes() up by almost a factor 3.
Thanks to this flaw Random.NextBytes() is only about 25% faster than System.Security.Cryptography.RNGCryptoServiceProvider.GetBytes on my machine (Win7, Core i3 2600MHz).
I'm sure if somebody inspected the source/decompiled byte code they'd find even more flaws than I found with my black box analysis.
Code samples
r.Next(0x55555555) % 2 is strongly biased:
Random r = new Random();
const int mod = 2;
int[] hist = new int[mod];
for (int i = 0; i < 10000000; i++)
{
    int num = r.Next(0x55555555);
    int num2 = num % 2;
    hist[num2]++;
}
for (int i = 0; i < mod; i++)
    Console.WriteLine(hist[i]);
Performance:
byte[] bytes = new byte[8 * 1024];
var cr = new System.Security.Cryptography.RNGCryptoServiceProvider();
Random r = new Random();

// Random.NextBytes
for (int i = 0; i < 100000; i++)
{
    r.NextBytes(bytes);
}

// One sample per byte
for (int i = 0; i < 100000; i++)
{
    for (int j = 0; j < bytes.Length; j++)
        bytes[j] = (byte)r.Next();
}

// One sample per 3 bytes
for (int i = 0; i < 100000; i++)
{
    for (int j = 0; j + 2 < bytes.Length; j += 3)
    {
        int num = r.Next();
        bytes[j + 2] = (byte)(num >> 16);
        bytes[j + 1] = (byte)(num >> 8);
        bytes[j] = (byte)num;
    }
    // Yes I know I'm not handling the last few bytes, but that won't have a noticeable impact on performance
}

// Crypto
for (int i = 0; i < 100000; i++)
{
    cr.GetBytes(bytes);
}
System.Random is much more performant since it does not generate cryptographically secure random numbers.
A simple test on my machine filling a buffer of 4 bytes with random data 1,000,000 times takes 49 ms for Random, but 2845 ms for RNGCryptoServiceProvider. Note that if you increase the size of the buffer you are filling, the difference narrows as the overhead for RNGCryptoServiceProvider is less relevant.
The most obvious reasons have already been mentioned, so here's a more obscure one: cryptographic PRNGs typically need to be continually reseeded with "real" entropy. Thus, if you use a CPRNG too often, you could deplete the system's entropy pool, which (depending on the implementation of the CPRNG) will either weaken it (thus allowing an attacker to predict it) or it will block while trying to fill up its entropy pool (thus becoming an attack vector for a DoS attack).
Either way, your application has now become an attack vector for other, totally unrelated applications which – unlike yours – actually vitally depend on the cryptographic properties of the CPRNG.
This is an actual real world problem, BTW, that has been observed on headless servers (which naturally have rather small entropy pools because they lack entropy sources such as mouse and keyboard input) running Linux, where applications incorrectly use the /dev/random kernel CPRNG for all sorts of random numbers, whereas the correct behavior would be to read a small seed value from /dev/urandom and use that to seed their own PRNG.
If you're programming an online card game or lottery, then you would want to make sure the sequence is next to impossible to guess. However, if you are showing users, say, a quote of the day, performance is more important than security.
This has been discussed at some length, but ultimately, the issue of performance is a secondary consideration when selecting an RNG. There is a vast array of RNGs out there, and the canned Lehmer LCG that most system RNGs consist of is not the best nor even necessarily the fastest. On old, slow systems it was an excellent compromise. That compromise is seldom ever really relevant these days. The thing persists into present day systems primarily because A) the thing is already built, and there is no real reason to 'reinvent the wheel' in this case, and B) for what the vast bulk of people will be using it for, it's 'good enough'.
Ultimately, the selection of an RNG comes down to Risk/Reward ratio. In some applications, for example a video game, there is no risk whatsoever. A Lehmer RNG is more than adequate, and is small, concise, fast, well-understood, and 'in the box'.
If the application is, for example, an on-line poker game or lottery where there are actual prizes involved and real money comes into play at some point in the equation, the 'in the box' Lehmer is no longer adequate. In a 32-bit version, it only has 2^32 possible valid states before it begins to cycle at best. These days, that's an open door to a brute force attack. In a case like this, the developer will want to go to something like a Very Long Period RNG of some species, and probably seed it from a cryptographically strong provider. This gives a good compromise between speed and security. In such a case, the person will be out looking for something like the Mersenne Twister, or a Multiple Recursive Generator of some kind.
If the application is something like communicating large quantities of financial information over a network, now there is a huge risk, and it heavily outweighs any possible reward. There are still armored cars because sometimes heavily armed men are the only security that's adequate, and trust me, if a brigade of special ops people with tanks, fighters, and helicopters was financially feasible, it would be the method of choice. In a case like this, using a cryptographically strong RNG makes sense, because whatever level of security you can get, it's not as much as you want. So you'll take as much as you can find, and the cost is a very, very remote second-place issue, either in time or money. And if that means that every random sequence takes 3 seconds to generate on a very powerful computer, you're going to wait the 3 seconds, because in the scheme of things, that is a trivial cost.
Note that the System.Random class in C# is coded incorrectly, so should be avoided.
https://connect.microsoft.com/VisualStudio/feedback/details/634761/system-random-serious-bug#tabs
Not everyone needs cryptographically secure random numbers, and they might benefit more from a speedier plain PRNG. Perhaps more importantly, you can control the sequence of System.Random numbers.
In a simulation utilizing random numbers that you might want to recreate, you can rerun the simulation with the same seed. It can also be handy for tracking bugs when you want to regenerate a given faulty scenario - running your program with the exact same sequence of random numbers that crashed the program.
Random is not a random number generator, it is a deterministic pseudo-random sequence generator, which takes its name for historical reasons.
The reason to use System.Random is if you want these properties, namely a deterministic sequence, which is guaranteed to produce the same sequence of results when initialized with the same seed.
If you want to improve the "randomness" without sacrificing the interface, you can inherit from System.Random overriding several methods.
https://msdn.microsoft.com/en-us/library/system.random.sample(v=vs.110).aspx
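A rough sketch of such a subclass, backed here by RNGCryptoServiceProvider (CryptoBackedRandom is a made-up name, not the MSDN sample). Note that on .NET Framework the base class does not route Next() and NextBytes() through Sample(), so those overloads are overridden as well:

using System;
using System.Security.Cryptography;

public class CryptoBackedRandom : Random
{
    private readonly RNGCryptoServiceProvider _rng = new RNGCryptoServiceProvider();

    // Random double in [0.0, 1.0), built from 53 crypto-random bits.
    protected override double Sample()
    {
        var bytes = new byte[8];
        _rng.GetBytes(bytes);
        ulong value = BitConverter.ToUInt64(bytes, 0) >> 11;
        return value / (double)(1UL << 53);
    }

    public override int Next() => (int)(Sample() * int.MaxValue);

    public override int Next(int maxValue) => Next(0, maxValue);

    public override int Next(int minValue, int maxValue)
    {
        if (minValue > maxValue) throw new ArgumentOutOfRangeException(nameof(minValue));
        return minValue + (int)(Sample() * ((long)maxValue - minValue));
    }

    public override void NextBytes(byte[] buffer) => _rng.GetBytes(buffer);
}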
Why would you want a deterministic sequence
One reason to have a deterministic sequence rather than true randomness is because it is repeatable.
For example, if you are running a numerical simulation, you can initialize the sequence with a (true) random number, and record what number was used.
Then, if you wish to repeat the exact same simulation, e.g. for debugging purposes, you can do so by instead initializing the sequence with the recorded value.
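For example (a sketch of the record-and-replay idea, not taken from any particular library):

using System;
using System.Security.Cryptography;

class ReproducibleRun
{
    static void Main()
    {
        // Pick a truly random seed once and log it.
        var seedBytes = new byte[4];
        using (var rng = new RNGCryptoServiceProvider())
            rng.GetBytes(seedBytes);
        int seed = BitConverter.ToInt32(seedBytes, 0);
        Console.WriteLine("Simulation seed: " + seed);   // record this value

        // Later, new Random(recordedSeed) reproduces the exact same run.
        var random = new Random(seed);
        for (int i = 0; i < 5; i++)
            Console.WriteLine(random.Next(100));
    }
}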
Why would you want this particular, not very good, sequence
The only reason I can think of would be for backwards compatibility with existing code which uses this class.
In short, if you want to improve the sequence without changing the rest of your code, go ahead.
If I don't need the security, i.e., I just want a relatively indeterminate value, not one that's cryptographically strong, Random has a much easier interface to use.
Different needs call for different RNGs. For crypto, you want your random numbers to be as random as possible. For Monte Carlo simulations, you want them to fill the space evenly and to be able to start the RNG from a known state.
I wrote a game (Crystal Sliders on the iPhone: Here) that would put up a "random" series of gems (images) on the map and you would rotate the map how you wanted it and select them and they went away. - Similar to Bejeweled. I was using Random(), and it was seeded with the number of 100ns ticks since the phone booted, a pretty random seed.
I found it amazing that it would generate games that were nearly identical to each other -of the 90 or so gems, of 2 colors, I would get two EXACTLY the same except for 1 to 3 gems! If you flip 90 coins and get the same pattern except for 1-3 flips, that is VERY unlikely! I have several screen shots that show them the same. I was shocked at how bad System.Random() was! I assumed, that I MUST have written something horribly wrong in my code and was using it wrong. I was wrong though, it was the generator.
As an experiment - and a final solution, I went back to the random number generator that I've been using since 1985 or so - which is VASTLY better. It is faster, has a period of 1.3 * 10^154 (2^521) before it repeats. The original algorithm was seeded with a 16 bit number, but I changed that to a 32 bit number, and improved the initial seeding.
The original one is here:
ftp://ftp.grnet.gr/pub/lang/algorithms/c/jpl-c/random.c
Over the years, I've thrown every random number test I could think of at this, and it passed all of them. I don't expect that it is of any value as a cryptographic one, but it returns a number as fast as "return *p++;" until it runs out of the 521 bits, and then it runs a quick process over the bits to create new random ones.
I created a C# wrapper - called it JPLRandom() implemented the same interface as Random() and changed all the places where I called it in the code.
The difference was VASTLY better - OMG I was amazed - there should be no way I could tell from just looking at the screens of 90 or so gems in a pattern, but I did an emergency release of my game following this.
And I would never use System.Random() for anything ever again. I'm SHOCKED that their version is blown away by something that is now 30 years old!
-Traderhut Games
I'd like to prove that a GUID is not unique in a simple test program.
I expected the following code to run for hours, but it's not working. How can I make it work?
BigInteger begin = new BigInteger((long)0);
BigInteger end = new BigInteger("340282366920938463463374607431768211456",10); //2^128
for(begin; begin<end; begin++)
Console.WriteLine(System.Guid.NewGuid().ToString());
I'm using C#.
Kai, I have provided a program that will do what you want using threads. It is licensed under the following terms: you must pay me $0.0001 per hour per CPU core you run it on. Fees are payable at the end of each calendar month. Please contact me for my paypal account details at your earliest convenience.
using System;
using System.Collections.Generic;
using System.Linq;
namespace GuidCollisionDetector
{
    class Program
    {
        static void Main(string[] args)
        {
            //var reserveSomeRam = new byte[1024 * 1024 * 100]; // This indeed has no effect.

            Console.WriteLine("{0:u} - Building a bigHeapOGuids.", DateTime.Now);

            // Fill up memory with guids.
            var bigHeapOGuids = new HashSet<Guid>();
            try
            {
                do
                {
                    bigHeapOGuids.Add(Guid.NewGuid());
                } while (true);
            }
            catch (OutOfMemoryException)
            {
                // Release the ram we allocated up front.
                // Actually, these are pointless too.
                //GC.KeepAlive(reserveSomeRam);
                //GC.Collect();
            }

            Console.WriteLine("{0:u} - Built bigHeapOGuids, contains {1} of them.", DateTime.Now, bigHeapOGuids.LongCount());

            // Spool up some threads to keep checking if there's a match.
            // Keep running until the heat death of the universe.
            for (long k = 0; k < Int64.MaxValue; k++)
            {
                for (long j = 0; j < Int64.MaxValue; j++)
                {
                    Console.WriteLine("{0:u} - Looking for collisions with {1} thread(s)....", DateTime.Now, Environment.ProcessorCount);

                    System.Threading.Tasks.Parallel.For(0, Int32.MaxValue, (i) =>
                    {
                        if (bigHeapOGuids.Contains(Guid.NewGuid()))
                            throw new ApplicationException("Guids collided! Oh my gosh!");
                    }
                    );

                    Console.WriteLine("{0:u} - That was another {1} attempts without a collision.", DateTime.Now, ((long)Int32.MaxValue) * Environment.ProcessorCount);
                }
            }

            Console.WriteLine("Umm... why hasn't the universe ended yet?");
        }
    }
}
PS: I wanted to try out the Parallel extensions library. That was easy.
And using OutOfMemoryException as control flow just feels wrong.
EDIT
Well, it seems this still attracts votes. So I've fixed the GC.KeepAlive() issue. And changed it to run with C# 4.
And to clarify my support terms: support is only available on the 28/Feb/2010. Please use a time machine to make support requests on that day only.
EDIT 2
As always, the GC does a better job than I do at managing memory; any previous attempts at doing it myself were doomed to failure.
This will run for a lot more than hours. Assuming it loops at 1 GHz (which it won't - it will be a lot slower than that), it will run for 10790283070806014188970 years. Which is about 83 billion times longer than the age of the universe.
Assuming Moore's law holds, it would be a lot quicker to not run this program, wait several hundred years and run it on a computer that is billions of times faster. In fact, any program that takes longer to run than it takes CPU speeds to double (about 18 months) will complete sooner if you wait until the CPU speeds have increased and buy a new CPU before running it (unless you write it so that it can be suspended and resumed on new hardware).
A GUID is theoretically non-unique. Here's your proof:
GUID is a 128 bit number
You cannot generate 2^128 + 1 or more GUIDs without re-using old GUIDs
However, if the entire power output of the sun was directed at performing this task, it would go cold long before it finished.
GUIDs can be generated using a number of different tactics, some of which take special measures to guarantee that a given machine will not generate the same GUID twice. Finding collisions in a particular algorithm would show that your particular method for generating GUIDs is bad, but would not prove anything about GUIDs in general.
Of course GUIDs can collide. Since GUIDs are 128-bits, just generate 2^128 + 1 of them and by the pigeonhole principle there must be a collision.
But when we say that a GUID is a unique, what we really mean is that the key space is so large that it is practically impossible to accidentally generate the same GUID twice (assuming that we are generating GUIDs randomly).
If you generate a sequence of n GUIDs randomly, then the probability of at least one collision is approximately p(n) = 1 - exp(-n^2 / (2 * 2^128)) (this is the birthday problem with the number of possible birthdays being 2^128).
n       p(n)
2^30    1.69e-21
2^40    1.77e-15
2^50    1.86e-09
2^60    1.95e-03
To make these numbers concrete, 2^60 = 1.15e+18. So, if you generate one billion GUIDs per second, it will take you 36 years to generate 2^60 random GUIDs and even then the probability that you have a collision is still 1.95e-03. You're more likely to be murdered at some point in your life (4.76e-03) than you are to find a collision over the next 36 years. Good luck.
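A quick numerical check of that approximation (just a sketch; the small-x branch avoids 1 - exp(-x) underflowing to zero in double precision):

using System;

class GuidCollisionOdds
{
    static double P(double n)
    {
        double x = (n * n) / (2.0 * Math.Pow(2, 128));
        // For tiny x, 1 - exp(-x) ≈ x.
        return x < 1e-8 ? x : 1.0 - Math.Exp(-x);
    }

    static void Main()
    {
        foreach (int k in new[] { 30, 40, 50, 60 })
            Console.WriteLine("2^{0}: {1:e2}", k, P(Math.Pow(2, k)));
        // Prints approximately 1.69e-021, 1.78e-015, 1.86e-009 and 1.95e-003,
        // matching the table above to rounding.
    }
}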
If you're worried about uniqueness you can always purchase new GUIDs so you can throw away your old ones. I'll put some up on eBay if you'd like.
Personally, I think the "Big Bang" was caused when two GUIDs collided.
You can show that in O(1) time with a variant of the quantum bogosort algorithm.
Guid g1 = Guid.NewGuid();
Guid g2 = Guid.NewGuid();
if(g1 != g2) Universe.Current.Destroy();
Any two GUIDs are very likely unique (not equal).
See this SO entry, and from Wikipedia
While each generated GUID is not
guaranteed to be unique, the total
number of unique keys (2^128 or
3.4×10^38) is so large that the probability of the same number being
generated twice is very small. For
example, consider the observable
universe, which contains about 5×10^22
stars; every star could then have
6.8×10^15 universally unique GUIDs.
So you will probably have to wait for many more billions of years, and hope that you hit one before the universe as we know it comes to an end.
[Update:] As the comments below point out, newer MS GUIDs are V4 and do not use the MAC address as part of the GUID generation (I haven't seen any indication of a V5 implementation from MS though, so if anyone has a link confirming that, let me know). With V4, time is still a factor, though, and the odds against duplication of GUIDs remain so small as to be irrelevant for any practical usage. You certainly would not be likely to ever generate a duplicate GUID from just a single system test such as the OP was trying to do.
Most of these answers are missing one vital point about Microsoft's GUID implementation. The first part of the GUID is based on a timestamp and another part is based on the MAC address of the network card (or a random number if no NIC is installed).
If I understand this correctly, it means that the only reliable way to duplicate a GUID would be to run simultaneous GUID generations on multiple machines where the MAC addresses were the same AND where the clocks on both systems were at the same exact time when the generation occurred (the timestamp is based on milliseconds if I understand it correctly)... even then there are a lot of other bits in the number that are random, so the odds are still vanishingly small.
For all practical purposes the GUIDs are universally unique.
There is a pretty good description of the MS GUID over at "The Old New Thing" blog
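If you want to see which scheme your own runtime uses, the version digit is visible in the canonical text form (a quick sketch; current .NET's Guid.NewGuid() reports version 4, the random scheme, rather than version 1, the timestamp + MAC scheme discussed above):

using System;

class GuidVersion
{
    static void Main()
    {
        // Format "D" is xxxxxxxx-xxxx-Vxxx-xxxx-xxxxxxxxxxxx; index 14 is the version digit.
        string s = Guid.NewGuid().ToString("D");
        Console.WriteLine(s + " -> version " + s[14]);
    }
}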
Here's a nifty little extension method that you can use if you want to check guid uniqueness in many places in your code.
internal static class GuidExt
{
    public static bool IsUnique(this Guid guid)
    {
        while (guid != Guid.NewGuid())
        { }
        return false;
    }
}
To call it, simply call IsUnique() whenever you generate a new GUID...
Guid g = Guid.NewGuid();
if (!g.IsUnique())
{
    throw new GuidIsNotUniqueException();
}
...heck, I'd even recommend calling it twice to make sure it got it right in the first round.
Counting to 2^128 - ambitious.
Let's imagine that we can count 2^32 IDs per second per machine - not that ambitious, since it's not even 4.3 billion per second. Let's dedicate 2^32 machines to that task. Furthermore, let's get 2^32 civilisations to each dedicate the same resources to the task.
So far, we can count 2^96 IDs per second, meaning we will be counting for 2^32 seconds (a little over 136 years).
Now, all we need is to get 4,294,967,296 civilisations to each dedicate 4,294,967,296 machines, each machine capable of counting 4,294,967,296 IDs per second, purely to this task for the next 136 years or so - I suggest we get started on this essential task right now ;-)
Well, if the running time of 83 billion years does not scare you, consider that you will also need to store the generated GUIDs somewhere to check whether you have a duplicate; storing 2^128 16-byte numbers would require you to allocate 4951760157141521099596496896 terabytes of RAM up front. Imagining you had a computer that could fit all that, and that you somehow found a place to buy terabyte DIMMs at 10 grams each, combined they would weigh more than 8 Earth masses and could seriously shift the planet off its current orbit before you even press "Run". Think twice!
for(begin; begin<end; begin)
Console.WriteLine(System.Guid.NewGuid().ToString());
You aren't incrementing begin so the condition begin < end is always true.
If GUID collisions are a concern, I would recommend using the ScottGuID instead.
Presumably you have reason to believe that the algorithm for producing GUIDs is not producing truly random numbers, but is in fact cycling with a period << 2^128.
For example, the RFC 4122 method used to derive GUIDs fixes the values of some bits.
Proof of cycling is going to depend upon the possible size of the period.
For small periods, a hash table of hash(GUID) -> GUID, with replacement on collision if the GUIDs do not match (terminate if they do match), might be an approach. Consider also only doing the replacement a random fraction of the time.
Ultimately if the maximum period between collisions is large enough (and isn't known in advance) any method is only going to yield a probability that the collision would be found if it existed.
Note that if the method of generating Guids is clock based (see the RFC), then it may not be possible to determine if collisions exist because either (a) you won't be able to wait long enough for the clock to wrap round, or (b) you can't request enough Guids within a clock tick to force a collision.
Alternatively you might be able to show a statistical relationship between the bits in the Guid, or a correlation of bits between Guids. Such a relationship might make it highly probable that the algorithm is flawed without necessarily being able to find an actual collision.
Of course, if you just want to prove that Guids can collide, then a mathematical proof, not a program, is the answer.
I don't understand why no one has mentioned upgrading your graphics card... Surely if you got a high-end NVIDIA Quadro FX 4800 or something (192 CUDA cores) this would go faster...
Of course if you could afford a few NVIDIA Quadro Plex 2200 S4s (at 960 CUDA cores each), this calculation would really scream. Perhaps NVIDIA would be willing to loan you a few for a "Technology Demonstration" as a PR stunt?
Surely they'd want to be part of this historic calculation...
But do you have to be sure you have a duplicate, or do you only care if there can be a duplicate. To be sure that you have two people with the same birthday, you need 366 people (not counting leap year). For there to be a greater than 50% chance of having two people with the same birthday you only need 23 people. That's the birthday problem.
If you have 32 bits, you only need 77,163 values to have a greater than 50% chance of a duplicate. Try it out:
Random baseRandom = new Random(0);

int DuplicateIntegerTest(int iterations)
{
    Random r = new Random(baseRandom.Next());
    int[] ints = new int[iterations];
    for (int i = 0; i < ints.Length; i++)
    {
        ints[i] = r.Next();
    }
    Array.Sort(ints);
    for (int i = 1; i < ints.Length; i++)
    {
        if (ints[i] == ints[i - 1])
            return 1;
    }
    return 0;
}

void DoTest()
{
    baseRandom = new Random(0);
    int count = 0;
    int duplicates = 0;
    for (int i = 0; i < 1000; i++)
    {
        count++;
        duplicates += DuplicateIntegerTest(77163);
    }
    Console.WriteLine("{0} iterations had {1} with duplicates", count, duplicates);
}
1000 iterations had 737 with duplicates
Now 128 bits is a lot, so you are still talking a large number of items still giving you a low chance of collision. You would need the following number of records for the given odds using an approximation:
0.8 billion billion for a 1/1000 chance of a collision occurring
21.7 billion billion for 50% chance of a collision occurring
39.6 billion billion for 90% chance of a collision occurring
There are about 1E14 emails sent per year so it would be about 400,000 years at this level before you would have a 90% chance of having two with the same GUID, but that is a lot different than saying you need to run a computer 83 billion times the age of the universe or that the sun would go cold before finding a duplicate.
Aren't you all missing a major point?
I thought GUIDs were generated using two things which make the chances of them being globally unique quite high. One is that they are seeded with the MAC address of the machine that you are on, and two, they use the time that they were generated plus a random number.
So unless you run it on the actual machine and run all your guesses within the smallest amount of time that the machine uses to represent a time in the GUID, you will never generate the same number no matter how many guesses you take using the system call.
I guess if you know the actual way a GUID is made, you could shorten the time to guess quite substantially.
Tony
You could hash the GUIDs. That way, you should get a result much faster.
Oh, of course, running multiple threads at the same time is also a good idea, that way you'll increase the chance of a race condition generating the same GUID twice on different threads.
GUIDs are 124 bits because 4 bits hold the version number.
Go to the cryogenics lab in the New York City.
Freeze yourself for (roughly) 1990 years.
Get a job at Planet Express.
Buy a brand-new CPU. Build a computer, run the program, and place it in a safe place with a pseudo-perpetual motion machine like the doomsday machine.
Wait until the time machine is invented.
Jump to the future using the time machine. If you bought 1YHz 128bit CPU, go to 3,938,453,320 days 20 hours 15 minutes 38 seconds 463 ms 463 μs 374 ns 607 ps after when you started to run the program.
...?
PROFIT!!!
... It takes at least 10,783,127 years even if you had 1YHz CPU which is 1,000,000,000,000,000 (or 1,125,899,906,842,624 if you prefer to use binary prefix) times faster than 1GHz CPU.
So rather than waiting for the compute finished, it would be better to feed pigeons which lost their home because other n pigeons took their home. :(
Or, you can wait until 128-bit quantum computer is invented. Then you may prove that GUID is not unique, by using your program in reasonable time(maybe).
Have you tried begin = begin + new BigInteger((long)1) in place of begin++?
If the number of UUID being generated follows Moore's law, the impression of never running out of GUID in the foreseeable future is false.
With 2 ^ 128 UUIDs, it will only take 18 months * Log2(2^128) ~= 192 years, before we run out of all UUIDs.
And I believe (with no statistical proof what-so-ever) in the past few years since mass adoption of UUID, the speed we are generating UUID is increasing way faster than Moore's law dictates. In other words, we probably have less than 192 years until we have to deal with UUID crisis, that's a lot sooner than end of universe.
But since we definitely won't be running them out by the end of 2012, we'll leave it to other species to worry about the problem.
The odds of a bug in the GUID generating code are much higher than the odds of the algorithm generating a collision. The odds of a bug in your code to test the GUIDs is even greater. Give up.
The program, despite its errors, shows proof that a GUID is not unique. Those who try to prove the contrary are missing the point. This statement just proves the weak implementation of some of the GUID variations.
A GUID is not necessarily unique by definition, it is highly unique by definition. You just refined the meaning of highly. Depending on the version, the implementer (MS or others), the use of VMs, etc., your definition of highly changes. (See the link in an earlier post.)
You can shorten your 128 bit table to prove your point. The best solution is to use a hash formula to shorten your table with duplicates, and then use the full value once the hash collides and based on that re-generate a GUID. If running from different locations, you would be storing your hash/full key pairs in a central location.
Ps: If the goal is just to generate x number of different values, create a hash table of this width and just check on the hash value.
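A minimal sketch of that hash-then-verify idea (the 32-bit hash and in-memory dictionary are just illustrative; a real attempt would exhaust memory long before finding a duplicate):

using System;
using System.Collections.Generic;

class GuidDuplicateHunt
{
    static void Main()
    {
        // Bucket GUIDs by a short 32-bit hash; only compare full 128-bit values
        // when two GUIDs land in the same bucket.
        var buckets = new Dictionary<int, List<Guid>>();

        while (true)
        {
            Guid g = Guid.NewGuid();
            int h = g.GetHashCode();

            if (buckets.TryGetValue(h, out List<Guid> list))
            {
                if (list.Contains(g)) // full comparison only on hash collision
                {
                    Console.WriteLine($"Duplicate GUID found: {g}");
                    return;
                }
                list.Add(g);
            }
            else
            {
                buckets[h] = new List<Guid> { g };
            }
        }
    }
}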
Not to p**s on the bonfire here, but it does actually happen, and yes, I understand the joking this guy has been getting. The GUID is unique only in principle. I bumped into this thread because there is a bug in the WP7 emulator: every time it boots, it gives out the SAME GUID the first time it is called! So, while in theory you cannot have a conflict, if there is a problem generating said GUID, then you can get duplicates.
http://forums.create.msdn.com/forums/p/92086/597310.aspx#597310
Since part of Guid generation is based on the current machine's time, my theory to get a duplicate Guid is:
1. Perform a clean installation of Windows.
2. Create a startup script that resets the time to 2010-01-01 12:00:00 just as Windows boots up.
3. Just after the startup script runs, have it trigger your application to generate a Guid.
4. Clone this Windows installation, so that you rule out any subtle differences that may occur in subsequent boot-ups.
5. Re-image the hard drive with this image and boot up the machine a few times.
For me, the time it takes for a single core to generate a UUIDv1 guarantees it will be unique. Even in a multi-core situation, if the UUID generator only allows one UUID to be generated at a time for your specific resource (keeping in mind that multiple resources can happily use the same UUIDs, however unlikely, since the resource is inherently part of the address), then you will have more than enough UUIDs to last you until the timestamp burns out. At which point I really doubt you would care.
Here's a solution, too:
#include <QUuid>
#include <iostream>

int main()
{
    QUuid uuid;
    // Keep drawing pairs of fresh UUIDs until two consecutive ones happen to match.
    while ( (uuid = QUuid::createUuid()) != QUuid::createUuid() ) { }
    std::cout << "Aha! I've found one! " << qPrintable( uuid.toString() ) << std::endl;
    return 0;
}
Note: requires Qt, but I guarantee that if you let it run long enough, it might find one.
(Note note: actually, now that I'm looking at it, there may be something about the generation algorithm that prevents two subsequently generated UUIDs from colliding, but I kinda doubt it.)
The only solution to prove GUIDs are not unique would be by having a World GUID Pool. Each time a GUID is generated somewhere, it should be registered to the organization. Or heck, we might include a standardization that all GUID generators needs to register it automatically and for that it needs an active internet connection!
What are the pros and cons of using System.Security.Cryptography.RNGCryptoServiceProvider vs System.Random? I know that RNGCryptoServiceProvider is 'more random', i.e. less predictable for hackers. Any other pros or cons?
UPDATE:
According to the responses, here are the pros and cons of using RNGCryptoServiceProvider so far:
Pros
RNGCryptoServiceProvider produces cryptographically stronger random numbers, which makes it better suited for generating encryption keys and the like.
Cons
Random is faster because the calculation is much simpler; it should be used in simulations or long calculations where cryptographic randomness isn't important. Note: see Kevin's answer for details about simulations - Random is not necessarily random enough, and you may want to use a different non-cryptographic PRNG.
A cryptographically strong RNG will be slower (it takes more computation) and will be spectrally white, but it is not as well suited to simulations or Monte Carlo methods, both because it does take more time and because it may not be repeatable, and repeatability is nice for testing.
In general, you want to use a cryptographic PRNG when you want a unique number like a UUID, or as a key for encryption, and a deterministic PRNG for speed and in simulation.
System.Random is not thread safe.
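A common workaround for that last point, as a minimal sketch (the seeding strategy here is just one option), is to give each thread its own instance via ThreadLocal<Random>:

using System;
using System.Threading;

class PerThreadRandom
{
    // One Random per thread; seeding from a GUID hash keeps the instances from
    // sharing a seed when threads start within the same clock tick.
    private static readonly ThreadLocal<Random> Rng =
        new ThreadLocal<Random>(() => new Random(Guid.NewGuid().GetHashCode()));

    static void Main()
    {
        for (int i = 0; i < 4; i++)
        {
            // Each thread only ever touches its own Random instance.
            new Thread(() => Console.WriteLine(Rng.Value.Next(100))).Start();
        }
    }
}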
Yes, there is only one more. As Charlie Martin wrote, System.Random is faster.
I would like to add the following info:
The RNGCryptoServiceProvider is the default implementation of a security standards compliant random number generator. If you need a random variable for security purposes, you must use this class, or an equivalent, but don't use System.Random because it is highly predictable.
For all other uses, the higher performance of System.Random, and equivalent classes, is welcome.
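To make that split concrete, here is a minimal sketch (the key size and the die roll are just illustrative):

using System;
using System.Security.Cryptography;

class RandomUsage
{
    static void Main()
    {
        // Fast, predictable PRNG: fine for simulations, shuffling test data, etc.
        var fast = new Random();
        int roll = fast.Next(1, 7); // a die roll, 1..6
        Console.WriteLine($"die roll: {roll}");

        // Cryptographically strong RNG: use it for keys, tokens, salts.
        using (var rng = new RNGCryptoServiceProvider())
        {
            byte[] key = new byte[32]; // e.g. material for a 256-bit key
            rng.GetBytes(key);         // fills the buffer with unpredictable bytes
            Console.WriteLine(Convert.ToBase64String(key));
        }
    }
}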
In addition to the prior answers:
System.Random should NEVER be used in simulations or numerical solvers for science and engineering, where there are material negative consequences of inaccurate simulation results or convergence failure. This is because Microsoftʼs implementation is deeply flawed in several respects, and they cannot (or will not) easily fix it due to compatibility issues. See this post.
So:
1. If there is an adversary who shouldn't know the generated sequence, then use RNGCryptoServiceProvider or another carefully designed, implemented and validated cryptographically strong RNG, and ideally use hardware randomness where possible. Otherwise;
2. If it is an application such as a simulation that requires good statistical properties, then use a carefully designed and implemented non-crypto PRNG such as the Mersenne Twister (see the sketch after this list). (A crypto RNG would also be correct in these cases, but often too slow and unwieldy.) Otherwise;
3. ONLY if the use of the numbers is completely trivial, such as deciding which picture to show next in a randomized slideshow, then use System.Random.
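As a sketch of option 2, and assuming the Math.NET Numerics NuGet package (its MathNet.Numerics.Random.MersenneTwister derives from System.Random, so it is a drop-in replacement; treat the exact API here as my recollection rather than gospel):

using System;
using MathNet.Numerics.Random;

class SimulationRng
{
    static void Main()
    {
        // A seeded Mersenne Twister: good statistical properties and repeatable
        // runs, which is exactly what a simulation wants.
        var mt = new MersenneTwister(42);

        double sum = 0;
        for (int i = 0; i < 1000000; i++)
        {
            sum += mt.NextDouble(); // uniform in [0, 1)
        }
        Console.WriteLine(sum / 1000000); // should be close to 0.5
    }
}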
I recently encountered this issue very tangibly when working on a Monte Carlo simulation intended to test the effects of different usage patterns for medical devices. The simulation produced results that went mildly in the opposite direction of what would have been reasonably expected.
Sometimes when you canʼt explain something, thereʼs a reason behind it, and that reason could be very onerous!
Hereʼs a plot of the p‑values that were obtained over an increasing number of simulation batches:
The red and magenta plots show the statistical significance of differences between the two usage models in two output metrics under study.
The cyan plot is a particularly shocking result, because it represents p‑values for a characteristic of the random input to the simulation. (This was being plotted just to confirm the input wasnʼt faulty.) The input was, of course, by design the same between the two usage models under study, so there should not have been any statistically significant difference between the input to the two models. Yet here I was seeing better than 99.97% confidence that there was such a difference!!
Initially I thought there was something wrong in my code, but everything checked out. (In particular I confirmed that threads were not sharing System.Random instances.) When repeated testing found this unexpected result to be highly consistent, I started suspecting System.Random.
I replaced System.Random with a Mersenne Twister implementation (no other changes) and immediately the output became drastically different, as shown here:
This chart reflects there being no statistically significant difference between the two usage models for the parameters being used in this particular test set. This was an expected result.
Note that in the first chart, the vertical log scale (on the p‑value) covers seven decades, whereas there is only a single decade in the second — demonstrating just how pronounced the statistical significance of the spurious discrepancies was! (The vertical scale indicates the probability the discrepancies could have arisen by chance.)
I suspect what was happening was that System.Random has some correlations over some fairly short generator cycle, and different patterns of internal randomness sampling between the two models under test (which had substantially different numbers of calls to Random.Next) caused these to affect the two models in distinct ways.
It so happens that the simulation input draws from the same RNG streams as the models use for internal decisions, and this apparently caused these sampling discrepancies to impact the input. (This was actually a lucky thing, because otherwise I may not have realized that the unexpected result was a software fault and not some real property of the devices being simulated!)