I have read an article about various shuffle algorithms over at Coding Horror. I have seen that in a few places people do this to shuffle a list:
var r = new Random();
var shuffled = ordered.OrderBy(x => r.Next());
Is this a good shuffle algorithm? How does it work exactly? Is it an acceptable way of doing this?
It's not a way of shuffling that I like, mostly on the grounds that it's O(n log n) for no good reason when it's easy to implement an O(n) shuffle. The code in the question "works" by basically giving a random (hopefully unique!) number to each element, then ordering the elements according to that number.
I prefer Durstenfeld's variant of the Fisher-Yates shuffle which swaps elements.
Implementing a simple Shuffle extension method would basically consist of calling ToList or ToArray on the input then using an existing implementation of Fisher-Yates. (Pass in the Random as a parameter to make life generally nicer.) There are plenty of implementations around... I've probably got one in an answer somewhere.
The nice thing about such an extension method is that it would then be very clear to the reader what you're actually trying to do.
EDIT: Here's a simple implementation (no error checking!):
public static IEnumerable<T> Shuffle<T>(this IEnumerable<T> source, Random rng)
{
    T[] elements = source.ToArray();
    // Note i > 0 to avoid final pointless iteration
    for (int i = elements.Length - 1; i > 0; i--)
    {
        // Swap element "i" with a random earlier element (or itself)
        int swapIndex = rng.Next(i + 1);
        T tmp = elements[i];
        elements[i] = elements[swapIndex];
        elements[swapIndex] = tmp;
    }
    // Lazily yield (avoiding aliasing issues etc.)
    foreach (T element in elements)
    {
        yield return element;
    }
}
EDIT: Comments on performance below reminded me that we can actually return the elements as we shuffle them:
public static IEnumerable<T> Shuffle<T>(this IEnumerable<T> source, Random rng)
{
    T[] elements = source.ToArray();
    for (int i = elements.Length - 1; i >= 0; i--)
    {
        // Swap element "i" with a random earlier element (or itself)
        // ... except we don't really need to swap it fully, as we can
        // return it immediately, and afterwards it's irrelevant.
        int swapIndex = rng.Next(i + 1);
        yield return elements[swapIndex];
        elements[swapIndex] = elements[i];
    }
}
This will now only do as much work as it needs to.
Note that in both cases, you need to be careful about the instance of Random you use, as:
Creating two instances of Random at roughly the same time will yield the same sequence of random numbers (when used in the same way)
Random isn't thread-safe.
I have an article on Random which goes into more detail on these issues and provides solutions.
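For illustration, here is a sketch of one common pattern (my own illustration, not necessarily the article's exact solution; GlobalRandom and ThreadRandom are names I made up, and it requires System.Threading): give each thread its own Random, seeded from a single lock-protected instance so that threads neither share state nor end up with identical seeds.

private static readonly Random GlobalRandom = new Random();
private static readonly ThreadLocal<Random> ThreadRandom =
    new ThreadLocal<Random>(() =>
    {
        // Random isn't thread-safe, so lock while taking a seed
        // from the shared instance.
        lock (GlobalRandom)
        {
            return new Random(GlobalRandom.Next());
        }
    });

// Usage: var shuffled = items.Shuffle(ThreadRandom.Value);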
This is based on Jon Skeet's answer.
In that answer, the array is shuffled, then returned using yield. The net result is that the array is kept in memory for the duration of the foreach, as well as the objects necessary for iteration, and yet the cost is all paid at the beginning - the yield is basically an empty loop.
This algorithm is used a lot in games, where the first three items are picked and the others will only be needed later, if at all. My suggestion is to yield the numbers as soon as they are swapped. This reduces the start-up cost while keeping the iteration cost at O(1) (basically 5 operations per iteration). The total cost remains the same, but the shuffling itself is spread out. In cases where this is called as collection.Shuffle().ToArray() it will theoretically make no difference, but in the aforementioned use cases it will speed up start-up. It also makes the algorithm useful for cases where you only need a few unique items. For example, if you need to pull out three cards from a deck of 52, you can call deck.Shuffle().Take(3) and only three swaps will take place (although the entire array has to be copied first).
public static IEnumerable<T> Shuffle<T>(this IEnumerable<T> source, Random rng)
{
    T[] elements = source.ToArray();
    // Note i > 0 to avoid final pointless iteration
    for (int i = elements.Length - 1; i > 0; i--)
    {
        // Swap element "i" with a random earlier element (or itself)
        int swapIndex = rng.Next(i + 1);
        yield return elements[swapIndex];
        elements[swapIndex] = elements[i];
        // we don't actually perform the swap; we can forget about the
        // swapped element because we already returned it.
    }
    // there is one item remaining that was not returned - we return it now
    yield return elements[0];
}
Starting from this quote of Skeet:
It's not a way of shuffling that I like, mostly on the grounds that it's O(n log n) for no good reason when it's easy to implement an O(n) shuffle. The code in the question "works" by basically giving a random (hopefully unique!) number to each element, then ordering the elements according to that number.
I'll expand a little on the reason for the hopefully unique!
Now, from the documentation for Enumerable.OrderBy:
This method performs a stable sort; that is, if the keys of two elements are equal, the order of the elements is preserved
This is very important! What happens if two elements "receive" the same random number? They remain in the same relative order they had in the array. Now, what is the probability of this happening? It is difficult to calculate exactly, but it is exactly the Birthday Problem.
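A standard approximation for the Birthday Problem: for n keys drawn uniformly from d possible values, the probability of at least one collision is about 1 - e^(-n(n-1)/(2d)). For example, 3 keys drawn from 128 values collide with probability about 2.3%, while 4096 keys drawn from 2^31 values collide with probability about 0.4%.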
Now, is it real? Is it true?
As always, when in doubt, write a small program: http://pastebin.com/5CDnUxPG
This little block of code shuffles an array of 3 elements a certain number of times using: the Fisher-Yates algorithm done backward; the Fisher-Yates algorithm done forward (the wiki page gives two pseudo-code variants that produce equivalent results, one running from the first to the last element and the other from the last to the first); the naive, wrong algorithm from http://blog.codinghorror.com/the-danger-of-naivete/; .OrderBy(x => r.Next()); and .OrderBy(x => r.Next(someValue)).
Now, Random.Next returns
A 32-bit signed integer that is greater than or equal to 0 and less than MaxValue.
so it's equivalent to
OrderBy(x => r.Next(int.MaxValue))
To test if this problem exists, we could enlarge the array (something very slow) or simply reduce the maximum value of the random number generator (int.MaxValue isn't a "special" number... it is simply a very big number). In the end, if the algorithm isn't biased by the stability of OrderBy, then any range of values should give the same result.
The program then tests some values in the range 1...4096. Looking at the result, it's quite clear that for low values (< 128) the algorithm is very biased (4-8%). With 3 values you need at least r.Next(1024). If you make the array bigger (4 or 5 elements), then even r.Next(1024) isn't enough. I'm not an expert in shuffling and in math, but I think that for each extra bit of length of the array, you need 2 extra bits of maximum value (because the birthday paradox is connected to sqrt(numvalues)), so that if the maximum value is 2^31, I'll say that you should be able to shuffle arrays of up to 2^12/2^13 elements (4096-8192).
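A minimal sketch of this kind of test (not the pastebin program; the run count and maxValue here are arbitrary choices of mine):

var r = new Random();
const int maxValue = 16; // try 16 versus int.MaxValue to see the bias appear
var counts = new Dictionary<string, int>();
for (int run = 0; run < 600000; run++)
{
    // Shuffle {0,1,2} via OrderBy and record which permutation came out.
    var perm = string.Join("", new[] { 0, 1, 2 }.OrderBy(x => r.Next(maxValue)));
    counts[perm] = counts.TryGetValue(perm, out var c) ? c + 1 : 1;
}
// A fair shuffle gives each of the 6 permutations ~100,000 hits;
// with a small maxValue, some permutations show up noticeably more often.
foreach (var kv in counts.OrderBy(kv => kv.Key))
    Console.WriteLine("{0}: {1}", kv.Key, kv.Value);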
It's probably OK for most purposes, and almost always generates a truly random distribution (except when Random.Next() produces two identical random integers).
It works by assigning each element of the series a random integer, then ordering the sequence by these integers.
It's totally acceptable for 99.9% of applications (unless you absolutely need to handle the edge case above). Also, Skeet's objection to its runtime is valid, so if you're shuffling a long list you might not want to use it.
This has come up many times before. Search for Fisher-Yates on StackOverflow.
Here is a C# code sample I wrote for this algorithm. You can parameterize it on some other type, if you prefer.
static public class FisherYates
{
    // Based on Java code from Wikipedia:
    // http://en.wikipedia.org/wiki/Fisher-Yates_shuffle
    static public void Shuffle(int[] deck)
    {
        Random r = new Random();
        for (int n = deck.Length - 1; n > 0; --n)
        {
            int k = r.Next(n + 1);
            int temp = deck[n];
            deck[n] = deck[k];
            deck[k] = temp;
        }
    }
}
Seems like a good shuffling algorithm, if you're not too worried about performance. The only problem I'd point out is that its behavior is not controllable, so you may have a hard time testing it.
One possible option is having a seed to be passed as a parameter to the random number generator (or the random generator as a parameter), so you can have more control and test it more easily.
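For instance (a sketch; items is any sequence, and Shuffle is assumed to be an extension method taking a Random, like the one shown earlier):

// A fixed seed makes the "random" sequence reproducible across runs,
// so a unit test can assert on the exact shuffled output.
var rng = new Random(12345);
var shuffled = items.Shuffle(rng).ToList();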
I found Jon Skeet's answer to be entirely satisfactory, but my client's robo-scanner will report any instance of Random as a security flaw. So I swapped it out for System.Security.Cryptography.RNGCryptoServiceProvider. As a bonus, it fixes that thread-safety issue that was mentioned. On the other hand, RNGCryptoServiceProvider has been measured as 300x slower than using Random.
Usage:
using (var rng = new RNGCryptoServiceProvider())
{
    var data = new byte[4];
    // Materialize here: Shuffle is lazy, and rng is disposed when this block exits.
    yourCollection = yourCollection.Shuffle(rng, data).ToList();
}
Method:
/// <summary>
/// Shuffles the elements of a sequence randomly.
/// </summary>
/// <param name="source">A sequence of values to shuffle.</param>
/// <param name="rng">An instance of a random number generator.</param>
/// <param name="data">A placeholder to generate random bytes into.</param>
/// <returns>A sequence whose elements are shuffled randomly.</returns>
public static IEnumerable<T> Shuffle<T>(this IEnumerable<T> source, RNGCryptoServiceProvider rng, byte[] data)
{
    var elements = source.ToArray();
    for (int i = elements.Length - 1; i >= 0; i--)
    {
        // Fill the buffer with random bytes and reduce to the range [0, i].
        // (The modulo introduces a slight bias unless i + 1 evenly divides 2^32.)
        rng.GetBytes(data);
        var swapIndex = BitConverter.ToUInt32(data, 0) % (i + 1);
        yield return elements[swapIndex];
        elements[swapIndex] = elements[i];
    }
}
Looking for an algorithm? You can use my ShuffleList class:
class ShuffleList<T> : List<T>
{
    public void Shuffle()
    {
        Random random = new Random();
        for (int count = Count; count > 0; count--)
        {
            int i = random.Next(count);
            Add(this[i]);
            RemoveAt(i);
        }
    }
}
Then, use it like this:
ShuffleList<int> list = new ShuffleList<int>();
// Add elements to your list.
list.Shuffle();
How does it work?
Let's take an initial sorted list of the 5 first integers: { 0, 1, 2, 3, 4 }.
The method starts by counting the number of elements and calls it count. Then, with count decreasing on each step, it takes a random index between 0 and count - 1 and moves the corresponding item to the end of the list.
In the following step-by-step example, each line shows the list before a move; the first count items are the ones that could be moved, and the selected item is shown in [brackets]. The last line is the final shuffled list:

0 1 2 [3] 4
[0] 1 2 4 3
1 2 [4] 3 0
[1] 2 3 0 4
[2] 3 0 4 1
3 0 4 1 2
This algorithm shuffles by generating a new random value for each value in a list, then ordering the list by those random values. Think of it as adding a new column to an in-memory table, then filling it with GUIDs, then sorting by that column. Looks like an efficient way to me (especially with the lambda sugar!)
Slightly unrelated, but here is an interesting method (one that, even though it is really excessive, has REALLY been implemented) for truly random generation of dice rolls!
Dice-O-Matic
The reason I'm posting this here is that he makes some interesting points about how his users reacted to the idea of using algorithms to shuffle instead of actual dice. Of course, in the real world, such a solution is only for the really extreme ends of the spectrum, where randomness has such a big impact and perhaps the impact affects money ;).
I would say that many answers here like "This algorithm shuffles by generating a new random value for each value in a list, then ordering the list by those random values" might be very wrong!
I'd think that this DOES NOT assign a random value to each element of the source collection. Instead there might be a sort algorithm running, like Quicksort, which would call a compare function approximately n log n times. Some sort algorithms really expect this compare function to be stable and always return the same result!
Couldn't it be that the IEnumerableSorter calls a compare function for each algorithm step of e.g. quicksort, and each time calls the function x => r.Next() for both parameters without caching these?
In that case you might really mess up the sort algorithm and make it much worse than the expectations the algorithm is built upon. Of course, it will eventually terminate and return something.
I might check it later by putting debugging output inside a new "Next" function to see what happens.
In Reflector I could not immediately find out how it works.
It is worth noting that, due to the deferred execution of LINQ, using a random number generator instance with OrderBy() can result in possibly unexpected behavior: the sorting does not happen until the collection is read. This means that each time you read or enumerate the collection, the order changes. One would reasonably expect the elements to be shuffled once and then to retain the same order each time the collection is accessed thereafter.
Random random = new();
var shuffled = ordered.OrderBy(x => random.Next());
The code above passes a lambda function x => random.Next() as a parameter to OrderBy(). This will capture the instance referred to by random and save it with the lambda, so that it can call Next() on this instance to perform the ordering later, which happens right before the collection is enumerated (when the first element is requested from it).
The problem here is that, since this execution is saved for later, the ordering happens each time just before the collection is enumerated, using new numbers obtained by calling Next() on the same random instance.
Example
To demonstrate this behavior, I have used Visual Studio's C# Interactive Shell:
> List<int> list = new() { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
> Random random = new();
> var shuffled = list.OrderBy(element => random.Next());
> shuffled.ToList()
List<int>(10) { 5, 9, 10, 4, 6, 2, 8, 3, 1, 7 }
> shuffled.ToList()
List<int>(10) { 8, 2, 9, 1, 3, 6, 5, 10, 4, 7 } // Different order
> shuffled.ElementAt(0)
9 // First element is 9
> shuffled.ElementAt(0)
7 // First element is now 7
>
This behavior can even be seen in action by placing a breakpoint just after where the IOrderedEnumerable is created when using Visual Studio's debugger: each time you hover on the variable, the elements show up in a different order.
This, of course, does not apply if you immediately enumerate the elements by calling ToList() or an equivalent. However, this behavior can lead to bugs in many cases, one of them being when the shuffled collection is expected to contain a unique element at each index.
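When a one-time shuffle is intended, materializing the result immediately avoids this pitfall:

// Enumerate once and store the result; the order is now fixed
// no matter how many times 'shuffled' is read afterwards.
var shuffled = list.OrderBy(x => random.Next()).ToList();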
Startup time, measured with all threads and caches cleared before every new test.
First, the unsuccessful code. It runs in LINQPad, if you want to follow along and test this code.
Stopwatch st = new Stopwatch();
st.Start();
var r = new Random();
List<string[]> list = new List<string[]>();
list.Add(new String[] {"1","X"});
list.Add(new String[] {"2","A"});
list.Add(new String[] {"3","B"});
list.Add(new String[] {"4","C"});
list.Add(new String[] {"5","D"});
list.Add(new String[] {"6","E"});
//list.OrderBy(l => r.Next()).Dump();
list.OrderBy(l => Guid.NewGuid()).Dump();
st.Stop();
Console.WriteLine(st.Elapsed.TotalMilliseconds);
list.OrderBy(x => r.Next()) takes 38.6528 ms
list.OrderBy(x => Guid.NewGuid()) takes 36.7634 ms (using Guid.NewGuid() is the approach recommended on MSDN.)
From the second run onward, both of them take about the same time.
EDIT:
TEST CODE on an Intel Core i7 (4 cores @ 2.1 GHz), 8 GB DDR3 RAM @ 1600, SATA HDD 5200 rpm, with [Data: www.dropbox.com/s/pbtmh5s9lw285kp/data]
using System;
using System.Runtime;
using System.Diagnostics;
using System.IO;
using System.Collections.Generic;
using System.Collections;
using System.Linq;
using System.Threading;

namespace Algorithm
{
    class Program
    {
        public static void Main(string[] args)
        {
            try {
                int i = 0;
                int limit = 10;
                var result = GetTestRandomSort(limit);
                foreach (var element in result) {
                    Console.WriteLine();
                    Console.WriteLine("time {0}: {1} ms", ++i, element);
                }
            } catch (Exception e) {
                Console.WriteLine(e.Message);
            } finally {
                Console.Write("Press any key to continue . . . ");
                Console.ReadKey(true);
            }
        }

        public static IEnumerable<double> GetTestRandomSort(int limit)
        {
            for (int i = 0; i < 5; i++) {
                string path = null, temp = null;
                Stopwatch st = null;
                StreamReader sr = null;
                int? count = null;
                List<string> list = null;
                Random r = null;

                GC.Collect();
                GC.WaitForPendingFinalizers();
                Thread.Sleep(5000);

                st = Stopwatch.StartNew();
                #region Import Input Data
                path = Environment.CurrentDirectory + "\\data";
                list = new List<string>();
                sr = new StreamReader(path);
                count = 0;
                while (count < limit && (temp = sr.ReadLine()) != null) {
                    // Console.WriteLine(temp);
                    list.Add(temp);
                    count++;
                }
                sr.Close();
                #endregion

                // Console.WriteLine("--------------Random--------------");

                // #region Sort by Random with OrderBy(random.Next())
                // r = new Random();
                // list = list.OrderBy(l => r.Next()).ToList();
                // #endregion

                // #region Sort by Random with OrderBy(Guid)
                // list = list.OrderBy(l => Guid.NewGuid()).ToList();
                // #endregion

                // #region Sort by Random with Parallel and OrderBy(random.Next())
                // r = new Random();
                // list = list.AsParallel().OrderBy(l => r.Next()).ToList();
                // #endregion

                // #region Sort by Random with Parallel OrderBy(Guid)
                // list = list.AsParallel().OrderBy(l => Guid.NewGuid()).ToList();
                // #endregion

                // #region Sort by Random with User-Defined Shuffle Method
                // r = new Random();
                // list = list.Shuffle(r).ToList();
                // #endregion

                // #region Sort by Random with Parallel User-Defined Shuffle Method
                // r = new Random();
                // list = list.AsParallel().Shuffle(r).ToList();
                // #endregion

                // Result
                st.Stop();
                yield return st.Elapsed.TotalMilliseconds;
                foreach (var element in list) {
                    Console.WriteLine(element);
                }
            }
        }
    }
}
Result Description: https://www.dropbox.com/s/9dw9wl259dfs04g/ResultDescription.PNG
Result Stat: https://www.dropbox.com/s/ewq5ybtsvesme4d/ResultStat.PNG
Conclusion:
Hypothesis: LINQ OrderBy(r.Next()) and OrderBy(Guid.NewGuid()) are no worse than the user-defined shuffle method from the first solution.
Answer: the results contradict that hypothesis.
Related
I have two lists (lista and listb), each containing an unknown number of points (two ints in a struct).
I want to create a new list containing unique random pairings from lista and listb. So an example entry might be [12,14] where 12 is an index for lista and 14 is an index for listb.
I also want to set a maximum number of pairings when calling this function. So instead of pairing every element in lista with every element in listb, I could limit it to 200 random pairings as an example.
My first attempt at this was to simply generate every possible pairing, shuffle that list, and knock off any elements past my max. This solution isn't nearly efficient enough.
My next attempt was to make an array per original list of every possible index, shuffle those separately, and then just iterate over them both until I had the max number of pairings (or all of them). This has several problems I'm not certain how to solve. For one, lista could have 10 million elements for all I know. Creating a new array of 10 million indices and shuffling it when my max pairs might only be 200 seems silly.
I've considered just choosing random elements from both lista/listb and seeing if I've already paired them before adding them to the new list. This is also quite a silly option as a lot of time can be spent picking duplicate pairings over and over.
So, what's a good option here or is there one? I don't want to iterate over every possible combination, pairings need to be unique, removing options from a list is quite slow due to the array re-sizing when they are quite large, distribution needs to be pretty uniform in the selection process for each list, etc.
Thanks for any and all help.
Edit - I meant the unique aspect regarding the pairs themselves. So element 10 in lista could be used over and over as long as the element in listb is different each time. The only catch is that I don't want to restrict lista and listb right off, as I need a fairly even distribution across both lists for every pairing.
To avoid duplicates completely, you could try doing a sparse Fisher-Yates shuffle.
Create a Dictionary<int, int> dict that will map "indices in the Fisher-Yates array that do not hold their own index" to "the value at that index".
For the nth item, pick a random number x from n (inclusive) to "size of ListA * size of ListB" (exclusive)
dict[x] ?? x is your selected item.
Store dict[n] ?? n in dict[x].
Map the selected item back to a pair of indices (divide by size of ListA for the ListB index, modulus by the size of ListA for the ListA index).
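A minimal sketch of those steps (my own illustration; the names are made up, and it assumes sizeA * sizeB fits in an int):

// Sparse Fisher-Yates: dict only stores entries for positions whose value
// no longer equals their own index, so memory is O(count), not O(sizeA * sizeB).
static List<(int aIndex, int bIndex)> RandomPairs(int sizeA, int sizeB, int count, Random rng)
{
    var dict = new Dictionary<int, int>();
    var result = new List<(int aIndex, int bIndex)>(count);
    int total = sizeA * sizeB;
    for (int n = 0; n < count; n++)
    {
        int x = rng.Next(n, total);                              // random position in [n, total)
        int selected = dict.TryGetValue(x, out int vx) ? vx : x; // dict[x] ?? x
        dict[x] = dict.TryGetValue(n, out int vn) ? vn : n;      // store dict[n] ?? n in dict[x]
        result.Add((selected % sizeA, selected / sizeA));        // map back to (lista, listb) indices
    }
    return result;
}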
A math or statistics buff might give you a formula for evaluating this, but I just wrote some test code.
The code simply picks random pairs, and every time it sees a duplicate it tries again. For each such "pick a random pair until unique" cycle it counts how many retries were needed and tracks this. Finally, the counts are summed into a global array to track the relative frequency of each retry count.
Here's the results after about 1 minute of execution:
84382319 81 0 0 0 0 0 0 0 0
The numbers mean this:
Out of 421,912 cycles [(84,382,319 + 81) / 200]:
84,382,319 picks found a unique pair on the first try, without duplicates
81 picks hit a duplicate once, but retrying did not hit another duplicate (the 3rd number and up is 0)
So, obviously, this will start to rise if you increase the number of pairs you want generated or shrink the ranges the pairs are chosen from, but I'm not sure this will pose a problem in practice.
Here's the LINQPad program I used:
static Random R = new Random();

void Main()
{
    var a = 10000;
    var b = 10000;
    var n = 200;
    int[] counts = new int[10];
    var dc = new DumpContainer().Dump();
    while (true)
    {
        var once = Test(a, b, n);
        for (int i = 0; i < once.Length; i++)
            counts[i] += once[i];
        dc.Content = Util.HorizontalRun(true, counts);
    }
}
public static int[] Test(int a, int b, int n)
{
    var seen = new HashSet<Tuple<int, int>>();
    var result = new int[10];
    for (int index = 0; index < n; index++)
    {
        int tries = 0;
        while (true)
        {
            var av = R.Next(a);
            var bv = R.Next(b);
            var t = Tuple.Create(av, bv);
            if (seen.Contains(t))
                tries++;
            else
            {
                seen.Add(t);
                break;
            }
        }
        result[tries]++;
    }
    return result;
}
I have an array of boolean values and need to randomly select a specific quantity of indices for values which are true.
What is the most efficient way to generate the array of indices?
For instance,
BitArray mask = GenerateSomeMask(length: 100000);
int[] randomIndices = RandomIndicesForTrue(mask, quantity: 10);
In this case the length of randomIndices would be 10.
There's a faster way to do this that requires only a single scan of the list.
Consider picking a line at random from a text file when you don't know how many lines are in the file, and the file is too large to fit in memory. The obvious solution is to read the file once to count the lines, pick a random number in the range of 0 to Count-1, and then read the file again up to the chosen line number. That works, but requires you to read the file twice.
A faster solution is to read the first line and save it as the selected line. You replace the selected line with the next line with probability 1/2. When you read the third line, you replace with probability 1/3, etc. When you've read the entire file, you have selected a line at random, and every line had equal probability of being selected. The code looks something like this:
string selectedLine = null;
int numLines = 0;
Random rnd = new Random();
foreach (var line in File.ReadLines(filename))
{
    ++numLines;
    // Replace the selection with probability 1/numLines.
    double prob = 1.0 / numLines;
    if (rnd.NextDouble() < prob)
        selectedLine = line;
}
Now, what if you want to select 2 lines? You select the first two. Then, as each line is read the probability that it will replace one of the two lines is 2/n, where n is the number of lines already read. If you determine that you need to replace a line, you randomly select the line to be replaced. You can follow that same basic idea to select any number of lines at random. For example:
string[] selectedLines = new string[M]; // M = number of lines to select
int numLines = 0;
Random rnd = new Random();
foreach (var line in File.ReadLines(filename))
{
    ++numLines;
    if (numLines <= M)
    {
        selectedLines[numLines - 1] = line;
    }
    else
    {
        // Replace one of the selected lines with probability M/numLines.
        double prob = (double)M / numLines;
        if (rnd.NextDouble() < prob)
        {
            int ix = rnd.Next(M);
            selectedLines[ix] = line;
        }
    }
}
You can apply that to your BitArray quite easily:
int[] selected = new int[quantity];
int num = 0; // number of True items seen
Random rnd = new Random();
for (int i = 0; i < items.Length; ++i)
{
    if (items[i])
    {
        ++num;
        if (num <= quantity)
        {
            selected[num - 1] = i;
        }
        else
        {
            // Replace one of the selected indices with probability quantity/num.
            double prob = (double)quantity / num;
            if (rnd.NextDouble() < prob)
            {
                int ix = rnd.Next(quantity);
                selected[ix] = i;
            }
        }
    }
}
You'll need some special case code at the end to handle the case where there aren't quantity set bits in the array, but you'll need that with any solution.
This makes a single pass over the BitArray, and the only extra memory it uses is for the list of selected indexes. I'd be surprised if it wasn't significantly faster than the LINQ version.
Note that I used the probability calculation to illustrate the math. You can change the inner loop code in the first example to:
if (rnd.Next(numLines + 1) == numLines)
{
    selectedLine = line;
}
++numLines;
You can make a similar change to the other examples. That does the same thing as the probability calculation, and should execute a little faster because it eliminates a floating point divide for each item.
There are two families of approaches you can use: deterministic and non-deterministic. The first one involves finding all the eligible elements in the collection and then picking N at random; the second involves randomly reaching into the collection until you have found N eligible items.
Since the size of your collection is not negligible at 100K and you only want to pick a few out of those, at first sight non-deterministic sounds like it should be considered because it can give very good results in practice. However, since there is no guarantee that N true values even exist in the collection, going non-deterministic could put your program into an infinite loop (less catastrophically, it could just take a very long time to produce results).
Therefore I am going to suggest going for a deterministic approach, even though you are going to pay through the nose for the guarantees you need in terms of resource usage. In particular, the operation will involve in-place sorting of an auxiliary collection; this will practically undo the nice space savings you got by using BitArray.
Theory aside, let's get to work. The standard way to handle this is:
Filter all eligible indices into an auxiliary collection.
Randomly shuffle the collection with Fisher-Yates (there's a convenient implementation on StackOverflow).
Pick the N first items of the shuffled collection. If there are fewer than N, then your input cannot satisfy your requirements.
Translated into LINQ:
var results = mask.Cast<bool>()               // BitArray only implements IEnumerable, so cast first
    .Select((f, i) => Tuple.Create(i, f))     // project into index/bool pairs
    .Where(t => t.Item2)                      // keep only those where bool == true
    .Select(t => t.Item1)                     // extract indices
    .ToList()                                 // prerequisite for next step
    .Shuffle()                                // Fisher-Yates (implementation from the linked answer)
    .Take(quantity)                           // pick N
    .ToArray();                               // into an int[]
if (results.Length < quantity)
{
// not enough true values in input
}
If you have 10 indices to choose from, you could generate a random number from 0 to 2^10 - 1 and use that as your mask.
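A sketch of that idea (note: each bit is chosen independently, so this does not guarantee an exact quantity of selected indices):

var rng = new Random();
int mask = rng.Next(1 << 10); // 10 random bits, one per index
var chosen = Enumerable.Range(0, 10)
                       .Where(i => (mask & (1 << i)) != 0); // indices whose bit is set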
var list = new List<int>() { 1, 17, 18, 21, 30 };
Random rnd = new Random(DateTime.Now.Second);
int r;
do
{
    r = rnd.Next(1, 30);
}
while (list.Contains(r));
But I think that's a stupid solution; can anyone give me a more optimized approach?
Even better if there is a way to prevent the Random instance from returning a number that it has already returned.
In case anyone wonders why I need this: it's the first step in shuffling 3 byte arrays and combining them into one byte array, producing 3 byte arrays that hold the indices' original order as it was in the original arrays.
Yes, one thing that makes it much more efficient is to use a HashSet<int> instead of a List<int>: lookups in a HashSet are MUCH faster than in a List (however, the cost of the constructor will be slightly higher for a HashSet).
Also, if the input list is always the same numbers, move it out of the function to reduce the overhead of generating the HashSet the first time.
In my personal experience (please test and profile for your own situation), once the list grows past about 14 items it is faster to convert it to a HashSet and do the lookup there than to do the lookup in the list itself.
var list = new List<int>() { 1, 17, 18, 21, 30 };
Random rnd = new Random(DateTime.Now.Second);
int r;
// In this example, with 5 items in the list, the HashSet will be slower due to the cost
// of creating it. But if we knew that the 5 items were fixed, I would build it
// outside of the function so I would only have to pay the cost once per program
// start-up, and it would be considered faster again due to amortized start-up cost.
var checkHashSet = new HashSet<int>(list);
do
{
    r = rnd.Next(1, 30);
}
while (checkHashSet.Contains(r));
You're right that looping isn't particularly efficient. You can use some handy extensions to select a number if you consider the constraint of the list of valid numbers, as opposed to the list of invalid ones.
So you have your list of invalid numbers:
var list = new List<int>(){1,17,18,21,30};
Which means that your list of valid numbers is the range from 1-30 except for these. Something like:
var validList = Enumerable.Range(1, 30).Except(list);
So we can use these extensions from the linked answer:
public static T RandomElement<T>(this IEnumerable<T> enumerable)
{
    return enumerable.RandomElementUsing(new Random());
}

public static T RandomElementUsing<T>(this IEnumerable<T> enumerable, Random rand)
{
    int index = rand.Next(0, enumerable.Count());
    return enumerable.ElementAt(index);
}
And select a random element from the list of known valid numbers:
var kindOfRandomNumber = Enumerable.Range(1, 30).Except(list).RandomElement();
I'm trying to implement a paging algorithm for a dataset sortable via many criteria. Unfortunately, while some of those criteria can be implemented at the database level, some must be done at the app level (we have to integrate with another data source). We have a paging (actually infinite scroll) requirement and are looking for a way to minimize the pain of sorting the entire dataset at the app level with every paging call.
What is the best way to do a partial sort, only sorting the part of the list that absolutely needs to be sorted? Is there an equivalent to C++'s std::partial_sort function available in the .NET libraries? How should I go about solving this problem?
EDIT: Here's an example of what I'm going for:
Let's say I need to get elements 21-40 of a 1000 element set, according to some sorting criteria. In order to speed up the sort, and since I have to go through the whole dataset every time anyway (this is a web service over HTTP, which is stateless), I don't need the whole dataset ordered. I only need elements 21-40 to be correctly ordered. It is sufficient to create 3 partitions: Elements 1-20, unsorted (but all less than element 21); elements 21-40, sorted; and elements 41-1000, unsorted (but all greater than element 40).
OK. Here's what I would try, based on what you said in reply to my comment.
I want to be able to say "4th through 6th" and get something like: 3, 2, 1 (unsorted, but all less than the proper 4th element); 4, 5, 6 (sorted and in the same place they would be for a sorted list); 8, 7, 9 (unsorted, but all greater than the proper 6th element).
Let's add 10 to our list to make it easier: 10, 9, 8, 7, 6, 5, 4, 3, 2, 1.
So, what you could do is use the quickselect algorithm to find the ith and kth elements. In your case above, i is 4 and k is 6. That will of course return the values 4 and 6. That's going to take two passes through your list, so the runtime so far is O(2n) = O(n). The next part is easy, of course: we have lower and upper bounds on the data we care about. All we need to do is make another pass through our list, looking for any element that is between those bounds. If we find such an element, we throw it into a new List. Finally, we sort that new List, which contains only the ith through kth elements that we care about.
So, I believe the total runtime ends up being O(N) + O((k-i)lg(k-i)).
static void Main(string[] args) {
    // create an array of 10 million items that are randomly ordered
    var list = Enumerable.Range(1, 10000000).OrderBy(x => Guid.NewGuid()).ToList();

    var sw = Stopwatch.StartNew();
    var slowOrder = list.OrderBy(x => x).Skip(10).Take(10).ToList();
    sw.Stop();
    Console.WriteLine(sw.ElapsedMilliseconds);
    // Took ~8 seconds on my machine

    sw.Restart();
    var smallVal = Quickselect(list, 11);
    var largeVal = Quickselect(list, 20);
    var elements = list.Where(el => el >= smallVal && el <= largeVal).OrderBy(el => el);
    Console.WriteLine(sw.ElapsedMilliseconds);
    // Took ~1 second on my machine
}
public static T Quickselect<T>(IList<T> list, int k) where T : IComparable {
    Random rand = new Random();
    int r = rand.Next(0, list.Count);
    T pivot = list[r];
    List<T> smaller = new List<T>();
    List<T> larger = new List<T>();
    foreach (T element in list) {
        // CompareTo only guarantees a negative/zero/positive result,
        // not exactly -1/0/1, so compare against zero.
        var comparison = element.CompareTo(pivot);
        if (comparison < 0) {
            smaller.Add(element);
        }
        else if (comparison > 0) {
            larger.Add(element);
        }
    }

    if (k <= smaller.Count) {
        return Quickselect(smaller, k);
    }
    else if (k > list.Count - larger.Count) {
        return Quickselect(larger, k - (list.Count - larger.Count));
    }
    else {
        return pivot;
    }
}
You can use List<T>.Sort(int, int, IComparer<T>):
inputList.Sort(startIndex, count, Comparer<T>.Default);
Array.Sort() has an overload that accepts index and length arguments that lets you sort a subset of an array. The same exists for List.
You cannot sort an IEnumerable directly, of course.
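For example (a quick sketch of the Array.Sort overload):

int[] a = { 5, 3, 1, 4, 2 };
Array.Sort(a, 1, 3); // sorts the 3 elements starting at index 1 -> { 5, 1, 3, 4, 2 }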
I want to put a limitation on the Random class in C#: generate random values from two ranges without repeats.
For example:
Xpoints[i] = random.Next(0, 25);
Ypoints[i] = random.Next(0, 12);
where 25 and 12 are the image dimensions, so I need to cover all pixels in this image, but in random order. Any suggestions? If I use this code, some pixels never come up and some pixels are repeated.
Update: Simplified by not requiring any specific hashing [1]
Update: Generalized into a generic SimpleShuffle extension method
public static IEnumerable<T> SimpleShuffle<T>(this IEnumerable<T> sequence)
{
    var rand = new Random();
    return sequence.Select(i => new { i, k = rand.Next() })
                   .OrderBy(p => p.k)
                   .Select(p => p.i);
}
In addition to downvoting Anx's answer (shouting? sorry :)), I thought it'd be nicer to also show what my code would look like:
using System;
using System.Linq;
using System.Collections.Generic;

namespace NS
{
    static class Program
    {
        public static IEnumerable<T> SimpleShuffle<T>(this IEnumerable<T> sequence)
        {
            var rand = new Random();
            return sequence.Select(i => new { i, k = rand.Next() }).OrderBy(p => p.k).Select(p => p.i);
        }

        public static void Main(string[] args)
        {
            // Range's second argument is a count, so 25 and 12 cover 0..24 and 0..11.
            var pts = from x in Enumerable.Range(0, 25)
                      from y in Enumerable.Range(0, 12)
                      select new { x, y };

            foreach (var pt in pts.SimpleShuffle())
                Console.WriteLine("{0},{1}", pt.x, pt.y);
        }
    }
}
I totally fixed my earlier problem of how to generate a good hash by realizing that we don't need a hash unless:
a. the source contains (logical) duplicates
b. and we need those to have equivalent sort order
c. and we want to have the same 'random' sort order (deterministic hashing) each time round
a. and b. are false in this case, and c. was even going to be a problem (depending on what the OP required). So now, without any strings attached, there are no more worries about performance (even the irrational worries).
Good luck!
[1] Incidentally, this makes the whole thing more flexible, because I no longer require the coords to be expressed as a byte[]; you can now shuffle any structure you want.
Have a look at the Fisher-Yates Algorithm:
http://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle
It's easy to implement, and works really well.
It shuffles an array of values, after which you can access them sequentially if you like, to ensure no repeats.
You might want to use a shuffle algorithm on a list of the indexes (e.g. 25 elements with the values 0..24 for the X axis) instead of Random.
By design, Random doesn't guarantee that no value is repeated; repetitions are very likely.
See also: Optimal LINQ query to get a random sub collection - Shuffle (look for the Fisher-Yates-Durstenfeld solution)
I also believe Random should not be predictable, and we shouldn't even expect its values not to repeat.
But I think sometimes it could be required to randomly get non-repeating ints; for that we need to maintain state: for a particular instance of the Random class, which values have already been returned.
Here is a small quick-and-dirty implementation of an algorithm I thought of just now; I am not sure if it is the same as the Fisher-Yates solution. I just wrote this class so that you can use it instead of the System.Random class.
So it may help you with your requirement; use the NonRepeatingRandom class below as per your need...
class NonRepeatingRandom : Random
{
    private readonly HashSet<int> _usedValues = new HashSet<int>();

    public NonRepeatingRandom()
    {
    }

    public NonRepeatingRandom(int seed) : base(seed)
    {
    }

    public override int Next(int minValue, int maxValue)
    {
        int rndVal = base.Next(minValue, maxValue);
        if (_usedValues.Contains(rndVal))
        {
            int oldRndVal = rndVal;
            // Scan upward for an unused value (maxValue is exclusive).
            do
            {
                rndVal++;
            } while (rndVal < maxValue && _usedValues.Contains(rndVal));
            if (rndVal == maxValue)
            {
                // Nothing free above; scan downward instead.
                rndVal = oldRndVal;
                do
                {
                    rndVal--;
                } while (rndVal >= minValue && _usedValues.Contains(rndVal));
                if (rndVal < minValue)
                {
                    throw new ApplicationException("Cannot get non repeating random for provided range.");
                }
            }
        }
        _usedValues.Add(rndVal);
        return rndVal;
    }
}
Please note that only the "Next" method is overridden, not the others; if you want, you can override other methods of the "Random" class too.
P.S. Just before clicking "Post Your Answer" I saw sehe's answer. I liked his overall idea, but to hash 2 bytes, creating a 16-byte hash? Or am I missing something? In my code I am using a HashSet, which uses int's implementation of the GetHashCode method, which is nothing but the value of the int itself, so there is no overhead of hashing. But I could be missing some point, as it is 3:59 AM here in India :)
Hope it helps, salamonti...
The whole point of random numbers is that you do get repeats.
However, if you want to make sure you don't, then remove the last chosen value from your list before picking the next number. So if you have a list of numbers:
index = random.Next(0, originalList.Count);
randomisedList.Add(originalList[index]);
originalList.RemoveAt(index);
The list will be randomised yet contain no repeats.
Instead of creating the image through two one-dimensional arrays, you should create the image through one two-dimensional matrix. Each time you get a new random coordinate, check whether that pixel is already set. If it is, repeat the procedure for that pixel.
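A minimal sketch of that check-and-retry idea (the 25x12 dimensions are taken from the question):

var rng = new Random();
var taken = new bool[25, 12]; // one "already set" flag per pixel
int x, y;
do
{
    x = rng.Next(25);
    y = rng.Next(12);
} while (taken[x, y]); // pixel already used? pick again
taken[x, y] = true;    // mark the pixel as used

Note that the retries get slower as the image fills up, which is why the shuffle-all-coordinates approach in the other answers scales better when you need every pixel.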