I'm trying to rearrange the elements of an array randomly by shuffling the indexes. That part works. The problem is that since the shuffle is random each time, I can get two results that are the same.
for example:
Monday:
song 1
song 2
song 3
Tuesday:
song 2
song 1
song 3
Wednesday:
song 1
song 2
song 3
And so on...
And in this case the lists for Monday and Wednesday are the same. I need to prevent that, but as you can see in the code, once I get the list for one day I just print it. I thought about storing each list in an array or tuple and checking whether that tuple already exists, but that seems too complicated. I also thought about writing my own random function, but I'm not sure about that solution either. Any ideas on how I can solve this? Thanks!
Here is the code I have so far:
static string[] songs = new string[] { "song1", "song2", "song3" };
static string[] days = new string[] { "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday" };
private static Random random = new Random();
/* Random number between lower and higher, inclusive */
public static int rand(int lower, int higher)
{
return random.Next(lower, higher + 1); // Next's upper bound is exclusive, so add 1
}
/* Pick M elements from the original array. Clone the original array so that
 * we don't destroy the input. */
public static string[] pickMRandomly()
{
string[] subset = new string[songs.Length];
string[] array = (string[])songs.Clone();
for (int j = 0; j < songs.Length; j++)
{
int index = rand(j, array.Length - 1);
subset[j] = array[index];
array[index] = array[j]; // array[j] is now “dead”
}
return subset;
}
public static void playListCreation()
{
for (int j = 0; j < days.Length; j++)
{
var result = pickMRandomly();
System.Console.WriteLine(days[j]);
foreach (var i in result)
{
System.Console.WriteLine(i + " ");
}
System.Console.WriteLine("\n");
}
}
}
If I understand you correctly, you don't just want a random arrangement of songs for each day, you want a unique (and random) arrangement of songs each day.
The only way that I can think of to guarantee this is to work out all of the possible combinations of songs and to randomly sort them - then to pick out a different combination from the list for each day.
using System;
using System.Collections.Generic;
using System.Linq;
namespace StackOverflowAnswer
{
class Program
{
static string[] songs = new string[] { "song1", "song2", "song3" };
static string[] days = new string[] { "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday" };
static void Main(string[] args)
{
var rnd = new Random();
var allCombinationsInRandomOrder = GetCombinations(songs, songs.Length)
.Select(combination => new { Combination = combination, Order = rnd.Next() })
.OrderBy(entry => entry.Order)
.Select(entry => entry.Combination);
var dayIndex = 0;
foreach (var combination in allCombinationsInRandomOrder)
{
var day = days[dayIndex];
Console.WriteLine(day);
Console.WriteLine(string.Join(", ", combination));
dayIndex++;
if (dayIndex >= days.Length)
break;
}
Console.ReadLine();
}
private static IEnumerable<IEnumerable<string>> GetCombinations(IEnumerable<string> songs, int numberOfSongsInGeneratedLists)
{
if (songs == null)
throw new ArgumentNullException(nameof(songs));
if (numberOfSongsInGeneratedLists <= 0)
throw new ArgumentOutOfRangeException(nameof(numberOfSongsInGeneratedLists));
if (numberOfSongsInGeneratedLists > songs.Count())
throw new ArgumentOutOfRangeException(nameof(numberOfSongsInGeneratedLists), "can't ask for more songs in the returned combinations than are provided");
if (numberOfSongsInGeneratedLists == 1)
{
foreach (var song in songs)
yield return new[] { song };
yield break;
}
foreach (var combinationWithOneSongTooFew in GetCombinations(songs, numberOfSongsInGeneratedLists - 1))
{
foreach (var song in songs.Where(song => !combinationWithOneSongTooFew.Contains(song)))
yield return combinationWithOneSongTooFew.Concat(new[] { song });
}
}
}
}
From what I understand, you want to create a random playlist and if this playlist has been created before, you want to generate another (until it's unique). One way you could do this is to add a hash of some sort to a HashSet and see if it's previously been generated. For example,
private HashSet<int> playlistHashes = new HashSet<int>();
private bool CheckIfUnique(string[] playlist)
{
//HashSet returns false if the hash already exists
//(i.e. playlist already likely to have been created)
return playlistHashes.Add(string.Join("",playlist).GetHashCode());
}
Then once you've generated your playlist, you can call that method and see if it returns false. If it returns false, that playlist order has been created before and so you can generate again. Using the technique above means that song1, song2, song3 is different from song3, song2, song1 so the order is important.
As mentioned in my comment on the question, if you're testing with 3 songs, there are only 6 different permutations and 7 days of the week so you're going to get a duplicate.
Side note: GetHashCode can produce 'false positives' (collisions), but it's up to you to determine how likely that is and whether the impact matters, since a new playlist is generated anyway. There are numerous hashing techniques with lower collision chances if GetHashCode does not suffice here.
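Tying this to the question's code, a minimal sketch of the generate-and-retry loop (assuming pickMRandomly from the question and the CheckIfUnique method above are reachable from the same class) might be:
string[] playlist;
do
{
    playlist = pickMRandomly();          // candidate order from the question's method
} while (!CheckIfUnique(playlist));      // retry until this order hasn't been seen before
Console.WriteLine(string.Join(", ", playlist));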
Consider that you have 3 songs in hand and want to assign a unique combination to each day of a week (7 days). That is not possible, since you can make only six unique combinations with those three, so at least one sequence must repeat. You will get 24 unique song sequences if you add another song (let it be "song4") to the collection. I have included a snippet that helps you get these unique sequences of songs.
string[] songs = new string[] { "song1", "song2", "song3", "song4" };
int numberOfSongs = songs.Count();
var collection = songs.Select(x => x.ToString());
for (int i = 1; i < numberOfSongs; i++)
{
collection = collection.SelectMany(x => songs, (x, y) => x + "," + y);
}
List<string> SongCollections = new List<string>();
SongCollections.AddRange(collection.Where(x => x.Split(',')
.Distinct()
.Count() == numberOfSongs)
.ToList());
Now SongCollections will contain 24 unique sequences of 4 songs (if you choose 3 songs you will get 6 unique sequences). You can then randomly select sequences from this collection and assign them to days as you wish.
Now let me use a Dictionary<string, int> dayCollectionMap to map a collection to a day (note: here I use a collection of 4 songs since 3 is not enough for 7 days). Consider the snippet below:
Dictionary<string, int> dayCollectionMap = new Dictionary<string, int>();
string[] days = new string[] { "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday" };
Random randomCollection = new Random();
foreach (string day in days)
{
int currentRandom = randomCollection.Next(0, SongCollections.Count());
if (!dayCollectionMap.Any(x => x.Value == currentRandom))
{
dayCollectionMap.Add(day, currentRandom);
}
else
{
// The collection is already taken/ Add another random sequence
while (true)
{
currentRandom = randomCollection.Next(0, SongCollections.Count());
if (!dayCollectionMap.Any(x => x.Value == currentRandom))
{
dayCollectionMap.Add(day, currentRandom);
break;
}
}
}
}
Then you can select the song collection for Wednesday with the code
var songCollectionForWed = SongCollections[dayCollectionMap["Wednesday"]];
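For completeness, a small sketch that prints the whole week from the map built above (reusing days, dayCollectionMap and SongCollections from the snippets in this answer):
foreach (string day in days)
{
    Console.WriteLine("{0}: {1}", day, SongCollections[dayCollectionMap[day]]);
}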
Related
I was working on a HackerRank practice problem and ran into an interesting error
when I finished. This code works on every case except the ones causing it to fail
(and they are all timeout exceptions).
Practice Problem
The short version of the problem: you are given a leaderboard (int[]) and Alice's scores (int[]), and you have to find what place she got on the leaderboard for each of her scores. View the link above for the whole problem.
My Solution
public static int[] climbingLeaderboard(int[] scores, int[] alice)
{
int[] results = new int[alice.Length]; //The array that stores alice's placements for each score
Dictionary<int, List<int>> scoresDict = new Dictionary<int, List<int>>(); //Key = x place (1st, 2nd, etc), the list is all the numbers that are at x place
for(int i = 0; i < alice.Length; i++)
{
List<int> alicePlace = scores.ToList<int>();
//Add the score to the array (converted to list for .Add)
alicePlace.Add(alice[i]);
//Sorts in reverse order to get the new score in the correct place
alicePlace = RecalculateScores(alicePlace);
//Breaks down the scores into the dictionary above
scoresDict = SeperateScores(alicePlace);
//Adds the place to the array
results[i] = GetPlace(scoresDict, alice[i]);
}
return results;
}
//Returns scores[] in reverse SORTED order
public static List<int> RecalculateScores(List<int> scores)
{
List<int> scoresRet = scores;
scoresRet.Sort();
scoresRet.Reverse();
return scoresRet;
}
//Gets the place (key) for where score is in the dict's value list
public static int GetPlace(Dictionary<int, List<int>> dict, int score)
{
foreach (int i in dict.Keys)
{
foreach (int ii in dict[i])
{
if (ii == score)
{
return i;
}
}
}
return -1;
}
//Separates the array into a dictionary by score placement
public static Dictionary<int, List<int>> SeperateScores(List<int> scores)
{
int placeholder = scores[0];
int currentPlace = 1;
Dictionary<int, List<int>> scoresByPlace = new Dictionary<int, List<int>>();
for (int i = 0; i < scores.Count(); i++)
{
if (scores[i] == placeholder)
{
if (!scoresByPlace.Keys.Contains(currentPlace) || scoresByPlace[currentPlace] == null)
{
scoresByPlace[currentPlace] = new List<int>();
}
scoresByPlace[currentPlace].Add(scores[i]);
placeholder = scores[i];
}
else
{
currentPlace++;
if (!scoresByPlace.Keys.Contains(currentPlace) || scoresByPlace[currentPlace] == null)
{
scoresByPlace[currentPlace] = new List<int>();
}
scoresByPlace[currentPlace].Add(scores[i]);
placeholder = scores[i];
}
}
return scoresByPlace;
}
Error
Whenever it gets tested with a large array (2 million elements, for example) it returns a timeout exception (probably generated by HackerRank to make it harder).
Attempted solution
Believe it or not, I have already changed a lot of things in the above code. For one, the results array in the first function used to be a list, but I changed it to an array to make it faster. I feel the dictionary/list is slowing everything down, but I need them for the solution (especially the dictionary). Any help would be appreciated.
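One common way to get under the time limit (sketched below, not part of the original post; assumes using System.Linq) is to rank against the distinct leaderboard scores once and binary-search each of Alice's scores, instead of rebuilding a dictionary for every score:
public static int[] ClimbingLeaderboardFast(int[] scores, int[] alice)
{
    // Distinct scores, sorted descending: index i corresponds to place i + 1.
    int[] distinct = scores.Distinct().OrderByDescending(s => s).ToArray();
    int[] results = new int[alice.Length];
    for (int i = 0; i < alice.Length; i++)
    {
        // Binary search for the first distinct score that is <= alice[i].
        int lo = 0, hi = distinct.Length;
        while (lo < hi)
        {
            int mid = (lo + hi) / 2;
            if (distinct[mid] > alice[i]) lo = mid + 1;
            else hi = mid;
        }
        results[i] = lo + 1; // ties share a place (dense ranking)
    }
    return results;
}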
I'm trying to use random numbers to pull 30 strings out of an array of 58 strings, and am using a bool array to check and make sure the same number is not called twice. The method and the program always crash with an index out of range error. Here is the method.
static string[] newlist(string[] s)
{
string[] newlist = {};
bool[] issearched = new bool[s.Length];
Random callorder = new Random();
for (int i = 0; i < 31; i++)
{
int number = callorder.Next(0, s.Length);
if (issearched[number] == false)
{
newlist[number] = s[number];
issearched[number] = true;//this is where it always crashes even though the IDE says issearched has 58 elements and the random number is always smaller than that.
}
else
i--;
}
return newlist;
}
I'm sure it's simple, but I can't figure out why an index of 8 is outside the range of an array of 58.
Your array newlist (what a confusing name) has no space to store anything.
This line
string[] newlist = {};
declares the array without allocating space for any elements, so when you try to use the indexer on it you get the exception.
I suggest using a different approach to pick 30 strings from the passed array:
use a List<string> and keep adding to it until it has 30 elements.
static string[] newlist(string[] s)
{
List<string> selectedElements = new List<string>();
bool[] issearched = new bool[s.Length];
Random callorder = new Random();
while (selectedElements.Count < 30)
{
int number = callorder.Next(0, s.Length);
if (!issearched[number])
{
selectedElements.Add(s[number]);
issearched[number] = true;
}
}
return selectedElements.ToArray();
}
If you prefer to use arrays as in your method, then a couple of fixes are required to your code:
static string[] newlist(string[] s)
{
string[] newlist = new string[30];
bool[] issearched = new bool[s.Length];
Random callorder = new Random();
for (int i = 0; i < 30; i++)
{
int number = callorder.Next(0, s.Length);
if (issearched[number] == false)
{
newlist[i] = s[number];
issearched[number] = true;
}
else
i--;
}
return newlist;
}
The newlist array is declared with space to store 30 elements.
The for loop runs 30 times (not 31 as in your current code).
newlist should use the variable i as its indexer.
I believe you're actually crashing here:
newlist[number] = s[number];
Replace
string[] newlist = {};
With
string[] newlist = new string[s.Length];
Your newlist size is 0 elements, nowhere are you allocating enough space for it.
Also your program will go into an infinite loop if the input size is less than 31 elements.
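As a side note, a partial Fisher–Yates shuffle avoids the retry loop (and the infinite-loop risk for short inputs) altogether. A sketch with illustrative names, using only System:
static string[] PickRandom(string[] s, int count)
{
    var rnd = new Random();
    var pool = (string[])s.Clone();
    int take = Math.Min(count, pool.Length);   // guard against short inputs
    for (int i = 0; i < take; i++)
    {
        int j = rnd.Next(i, pool.Length);      // choose from the not-yet-picked tail
        string tmp = pool[i];
        pool[i] = pool[j];
        pool[j] = tmp;
    }
    Array.Resize(ref pool, take);              // keep only the first 'take' picks
    return pool;
}
Usage would then be something like: string[] thirty = PickRandom(s, 30);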
I basically have two lists, list1 and list2, each of which contains bitmap images as elements.
My question is how I can randomly select bitmap elements from both lists, mix them together, and store them in another list, "list3".
List<Bitmap> list1 = new List<Bitmap>();
List<Bitmap> list2 = new List<Bitmap>();
(list3 contains a random mix of the elements of list1 and list2; the sizes of the two lists vary depending on the number of produced images.)
This answer focuses on Randomly Interleaving two lists with minimal bias.
It is important to use an appropriate algorithm: there is a significant bias when using rnd.Next(2) == 0 to interleave two unequal-length lists. The bias is very obvious when interleaving a list of 2 elements and a list of 20 elements - the elements of the shorter list will be clustered near the front of the result. While such a bias is not always readily seen, it exists between any two unequal-length lists unless the "weights" of the lists are taken into account.
Thus, instead of using rnd.Next(2) == 0 to pick the source list, an unbiased implementation should pick fairly between all the remaining elements.
if (randInt(remaining(l1) + remaining(l2)) < remaining(l1)) {
// take from list 1
// (also implies list 1 has elements; as rand < 0 never matches)
} else {
// take from list 2
// (also implies list 2 has elements)
}
An implementation might look like this:
IEnumerable<T> RandomInterleave<T>(IEnumerable<T> a, IEnumerable<T> b) {
var rnd = new Random();
int aRem = a.Count();
int bRem = b.Count();
while (aRem > 0 || bRem > 0) {
var i = rnd.Next(aRem + bRem);
if (i < aRem) {
yield return a.First();
a = a.Skip(1);
aRem--;
} else {
yield return b.First();
b = b.Skip(1);
bRem--;
}
}
}
var list3 = RandomInterleave(list1, list2).ToList();
And again, an example for you.
List<int> list1 = new List<int>();
List<int> list2 = new List<int>();
List<int> list3 = new List<int>();
// putting some values into the lists to mix them
for (int i=0; i<50; i++) list1.Add(1);
for (int i=0; i<40; i++) list2.Add(2);
var rnd = new Random();
int listIndex1 = 0;
int listIndex2 = 0;
while (listIndex1 < list1.Count || listIndex2 < list2.Count)
{
if (rnd.Next(2) == 0 || listIndex2 >= list2.Count)
{
list3.Add(list1[listIndex1++]);
}
else
{
list3.Add(list2[listIndex2++]);
}
}
foreach (var mixed in list3) { Console.WriteLine(mixed); }
Output:
1
1
2
2
2
2
1
2
2
1
1
2
2
2
1
...
I have been stumped on this one for a while. I want to take a List and order the list such that the Products with the largest Price end up in the middle of the list. And I also want to do the opposite, i.e. make sure that the items with the largest price end up on the outer boundaries of the list.
Imagine a data structure like this.. 1,2,3,4,5,6,7,8,9,10
In the first scenario I need to get back 1,3,5,7,9,10,8,6,4,2
In the second scenario I need to get back 10,8,6,4,2,1,3,5,7,9
The list may have upwards of 250 items, the numbers will not be evenly distributed, and they will not be sequential, and I wanted to minimize copying. The numbers will be contained in Product objects, and not simple primitive integers.
Is there a simple solution that I am not seeing?
Any thoughts.
So for those of you wondering what I am up to, I am ordering items based on calculated font size. Here is the code that I went with...
The Implementation...
private void Reorder()
{
var tempList = new LinkedList<DisplayTag>();
bool even = true;
foreach (var tag in this) {
if (even)
tempList.AddLast(tag);
else
tempList.AddFirst(tag);
even = !even;
}
this.Clear();
this.AddRange(tempList);
}
The Test...
[TestCase(DisplayTagOrder.SmallestToLargest, Result=new[]{10,14,18,22,26,30})]
[TestCase(DisplayTagOrder.LargestToSmallest, Result=new[]{30,26,22,18,14,10})]
[TestCase(DisplayTagOrder.LargestInTheMiddle, Result = new[] { 10, 18, 26, 30, 22, 14 })]
[TestCase(DisplayTagOrder.LargestOnTheEnds, Result = new[] { 30, 22, 14, 10, 18, 26 })]
public int[] CalculateFontSize_Orders_Tags_Appropriately(DisplayTagOrder sortOrder)
{
list.CloudOrder = sortOrder;
list.CalculateFontSize();
var result = (from displayTag in list select displayTag.FontSize).ToArray();
return result;
}
The Usage...
public void CalculateFontSize()
{
GetMaximumRange();
GetMinimunRange();
CalculateDelta();
this.ForEach((displayTag) => CalculateFontSize(displayTag));
OrderByFontSize();
}
private void OrderByFontSize()
{
switch (CloudOrder) {
case DisplayTagOrder.SmallestToLargest:
this.Sort((arg1, arg2) => arg1.FontSize.CompareTo(arg2.FontSize));
break;
case DisplayTagOrder.LargestToSmallest:
this.Sort(new LargestFirstComparer());
break;
case DisplayTagOrder.LargestInTheMiddle:
this.Sort(new LargestFirstComparer());
Reorder();
break;
case DisplayTagOrder.LargestOnTheEnds:
this.Sort();
Reorder();
break;
}
}
The appropriate data structure is a LinkedList because it allows you to efficiently add to either end:
LinkedList<int> result = new LinkedList<int>();
int[] array = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
Array.Sort(array);
bool odd = true;
foreach (var x in array)
{
if (odd)
result.AddLast(x);
else
result.AddFirst(x);
odd = !odd;
}
foreach (int item in result)
Console.Write("{0} ", item);
No extra copying steps, no reversing steps, ... just a small overhead per node for storage.
C# Iterator version
(Very simple code to satisfy all conditions.)
One function to rule them all! It doesn't use an intermediate storage collection (see the yield keyword) and orders the large numbers either to the middle or to the sides depending on the argument. It's implemented as a C# iterator.
// Pass forward sorted array for large middle numbers,
// or reverse sorted array for large side numbers.
//
public static IEnumerable<long> CurveOrder(long[] nums) {
if (nums == null || nums.Length == 0)
yield break; // Nothing to do.
// Move forward every two.
for (int i = 0; i < nums.Length; i+=2)
yield return nums[i];
// Move backward every other two. Note: Length%2 makes sure we're on the correct offset.
for (int i = nums.Length-1 - nums.Length%2; i >= 0; i-=2)
yield return nums[i];
}
Example Usage
For example with array long[] nums = { 1,2,3,4,5,6,7,8,9,10,11 };
Start with forward sort order, to bump high numbers into the middle.
Array.Sort(nums); //forward sort
// Array argument will be: { 1,2,3,4,5,6,7,8,9,10,11 };
long[] arrLargeMiddle = CurveOrder(nums).ToArray();
Produces: 1 3 5 7 9 11 10 8 6 4 2
Or, Start with reverse sort order, to push high numbers to sides.
Array.Reverse(nums); //reverse sort
// Array argument will be: { 11,10,9,8,7,6,5,4,3,2,1 };
long[] arrLargeSides = CurveOrder(nums).ToArray();
Produces: 11 9 7 5 3 1 2 4 6 8 10
Significant namespaces are:
using System;
using System.Collections.Generic;
using System.Linq;
Note: The iterator leaves the decision up to the caller about whether or not to use intermediate storage. The caller might simply be issuing a foreach loop over the results instead.
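For example, the caller can simply stream the result:
foreach (long n in CurveOrder(nums))
    Console.Write("{0} ", n);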
Extension Method Option
Optionally change the static method header to use the this modifier public static IEnumerable<long> CurveOrder(this long[] nums) { and put it inside a static class in your namespace;
Then call the order method directly on any long[ ] array instance like so:
Array.Reverse(nums); //reverse sort
// Array argument will be: { 11,10,9,8,7,6,5,4,3,2,1 };
long[] arrLargeSides = nums.CurveOrder().ToArray();
Just some (unneeded) syntactic sugar to mix things up a bit for fun. This can be applied to any answers to your question that take an array argument.
I might go for something like this
static T[] SortFromMiddleOut<T, U>(IList<T> list, Func<T, U> orderSelector, bool largestInside) where U : IComparable<U>
{
T[] sortedArray = new T[list.Count];
bool add = false;
int index = (list.Count / 2);
int iterations = 0;
IOrderedEnumerable<T> orderedList;
if (largestInside)
orderedList = list.OrderByDescending(orderSelector);
else
orderedList = list.OrderBy(orderSelector);
foreach (T item in orderedList)
{
sortedArray[index] = item;
if (add)
index += ++iterations;
else
index -= ++iterations;
add = !add;
}
return sortedArray;
}
Sample invocations:
int[] array = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
int[] sortedArray = SortFromMiddleOut(array, i => i, false);
foreach (int item in sortedArray)
Console.Write("{0} ", item);
Console.Write("\n");
sortedArray = SortFromMiddleOut(array, i => i, true);
foreach (int item in sortedArray)
Console.Write("{0} ", item);
With it being generic, it could be a list of Foo and the order selector could be f => f.Name or whatever you want to throw at it.
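For example, with the question's Product objects (assuming a products list and a comparable Price property, which are not shown above):
Product[] byPriceMiddle = SortFromMiddleOut(products, p => p.Price, true); // largest prices in the middle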
The fastest (but not the clearest) solution is probably to simply calculate the new index for each element:
Array.Sort(array);
int length = array.Length;
int middle = length / 2;
int[] result2 = new int[length];
for (int i = 0; i < array.Length; i++)
{
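// Even i land at or to the right of the middle, odd i to the left; each step moves one slot further out.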
result2[middle + (1 - 2 * (i % 2)) * ((i + 1) / 2)] = array[i];
}
Something like this?
public IEnumerable<int> SortToMiddle(IEnumerable<int> input)
{
var sorted = new List<int>(input);
sorted.Sort();
var firstHalf = new List<int>();
var secondHalf = new List<int>();
var sendToFirst = true;
foreach (var current in sorted)
{
if (sendToFirst)
{
firstHalf.Add(current);
}
else
{
secondHalf.Add(current);
}
sendToFirst = !sendToFirst;
}
//to get the highest values on the outside just reverse
//the first list instead of the second
secondHalf.Reverse();
return firstHalf.Concat(secondHalf);
}
For your specific (general) case (assuming unique keys):
public static IEnumerable<T> SortToMiddle<T, TU>(IEnumerable<T> input, Func<T, TU> getSortKey)
{
var sorted = new List<TU>(input.Select(getSortKey));
sorted.Sort();
var firstHalf = new List<TU>();
var secondHalf = new List<TU>();
var sendToFirst = true;
foreach (var current in sorted)
{
if (sendToFirst)
{
firstHalf.Add(current);
}
else
{
secondHalf.Add(current);
}
sendToFirst = !sendToFirst;
}
//to get the highest values on the outside just reverse
//the first list instead of the second
secondHalf.Reverse();
sorted = new List<TU>(firstHalf.Concat(secondHalf));
//This assumes the sort keys are unique - if not, the implementation
//needs to use a SortedList<TU, T>
return sorted.Select(s => input.First(t => s.Equals(getSortKey(t))));
}
And assuming non-unique keys:
public static IEnumerable<T> SortToMiddle<T, TU>(IEnumerable<T> input, Func<T, TU> getSortKey)
{
var sendToFirst = true;
var sorted = new SortedList<TU, T>(input.ToDictionary(getSortKey, t => t));
var firstHalf = new SortedList<TU, T>();
var secondHalf = new SortedList<TU, T>();
foreach (var current in sorted)
{
if (sendToFirst)
{
firstHalf.Add(current.Key, current.Value);
}
else
{
secondHalf.Add(current.Key, current.Value);
}
sendToFirst = !sendToFirst;
}
//to get the highest values on the outside just reverse
//the first list instead of the second
// SortedList has no in-place Reverse, so reverse the enumeration of the second half here
return firstHalf.Concat(secondHalf.Reverse()).Select(kvp => kvp.Value);
}
Simplest solution: order the list descending, create two new lists, put every odd-indexed item into the first and every even-indexed item into the other, reverse the first list, then append the second to the first. A sketch of that follows below.
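A sketch of that description, using ints for brevity (0-based indexes: 1, 3, 5, ... count as the "odd-indexed" items; values is an assumed source list; needs System.Linq):
List<int> desc = values.OrderByDescending(v => v).ToList();
var first = new List<int>();    // odd-indexed items of the descending list
var second = new List<int>();   // even-indexed items
for (int i = 0; i < desc.Count; i++)
    (i % 2 == 1 ? first : second).Add(desc[i]);
first.Reverse();
List<int> largestInMiddle = first.Concat(second).ToList();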
Okay, I'm not going to question your sanity here since I'm sure you wouldn't be asking the question if there weren't a good reason :-)
Here's how I'd approach it. Create a sorted list, then simply create another list by processing the keys in order, alternately inserting before and appending, something like:
sortedlist = list.sort (descending)
biginmiddle = new list()
state = append
foreach item in sortedlist:
if state == append:
biginmiddle.append (item)
state = prepend
else:
biginmiddle.insert (0, item)
state = append
This will give you a list where the big items are in the middle. Other items will fan out from the middle (in alternating directions) as needed:
1, 3, 5, 7, 9, 10, 8, 6, 4, 2
To get a list where the larger elements are at the ends, just replace the initial sort with an ascending one.
The sorted and final lists can just be pointers to the actual items (since you state they're not simple integers) - this will minimise both extra storage requirements and copying.
Maybe it's not the best solution, but here's a nifty way...
Let Product[] parr be your array.
Disclaimer: it's Java, my C# is rusty.
Untested code, but you get the idea.
int plen = parr.length
int [] indices = new int[plen];
for(int i = 0; i < (plen/2); i ++)
indices[i] = 2*i + 1; // Line1
for(int i = (plen/2); i < plen; i++)
indices[i] = 2*(plen-i); // Line2
for(int i = 0; i < plen; i++)
{
if(i != indices[i])
swap(parr[i], parr[indices[i]]);
}
For the second case, something like this?
int plen = parr.length
int [] indices = new int[plen];
for(int i = 0; i <= (plen/2); i ++)
indices[i] = (plen^1) - 2*i;
for(int i = 0; i < (plen/2); i++)
indices[i+(plen/2)+1] = 2*i + 1;
for(int i = 0; i < plen; i++)
{
if(i != indices[i])
swap(parr[i], parr[indices[i]]);
}
Anyone have a quick method for de-duplicating a generic List in C#?
If you're using .NET 3.5+, you can use LINQ.
List<T> withDupes = LoadSomeData();
List<T> noDupes = withDupes.Distinct().ToList();
Perhaps you should consider using a HashSet.
From the MSDN link:
using System;
using System.Collections.Generic;
class Program
{
static void Main()
{
HashSet<int> evenNumbers = new HashSet<int>();
HashSet<int> oddNumbers = new HashSet<int>();
for (int i = 0; i < 5; i++)
{
// Populate numbers with just even numbers.
evenNumbers.Add(i * 2);
// Populate oddNumbers with just odd numbers.
oddNumbers.Add((i * 2) + 1);
}
Console.Write("evenNumbers contains {0} elements: ", evenNumbers.Count);
DisplaySet(evenNumbers);
Console.Write("oddNumbers contains {0} elements: ", oddNumbers.Count);
DisplaySet(oddNumbers);
// Create a new HashSet populated with even numbers.
HashSet<int> numbers = new HashSet<int>(evenNumbers);
Console.WriteLine("numbers UnionWith oddNumbers...");
numbers.UnionWith(oddNumbers);
Console.Write("numbers contains {0} elements: ", numbers.Count);
DisplaySet(numbers);
}
private static void DisplaySet(HashSet<int> set)
{
Console.Write("{");
foreach (int i in set)
{
Console.Write(" {0}", i);
}
Console.WriteLine(" }");
}
}
/* This example produces output similar to the following:
* evenNumbers contains 5 elements: { 0 2 4 6 8 }
* oddNumbers contains 5 elements: { 1 3 5 7 9 }
* numbers UnionWith oddNumbers...
* numbers contains 10 elements: { 0 2 4 6 8 1 3 5 7 9 }
*/
How about:
var noDupes = list.Distinct().ToList();
In .net 3.5?
Simply initialize a HashSet with a List of the same type:
var noDupes = new HashSet<T>(withDupes);
Or, if you want a List returned:
var noDupsList = new HashSet<T>(withDupes).ToList();
Sort it, then check each pair of neighbouring elements, as the duplicates will clump together.
Something like this:
list.Sort();
Int32 index = list.Count - 1;
while (index > 0)
{
if (list[index] == list[index - 1])
{
if (index < list.Count - 1)
(list[index], list[list.Count - 1]) = (list[list.Count - 1], list[index]);
list.RemoveAt(list.Count - 1);
index--;
}
else
index--;
}
Notes:
Comparison is done from back to front, to avoid having to resort list after each removal
This example now uses C# Value Tuples to do the swapping, substitute with appropriate code if you can't use that
The end-result is no longer sorted
I like to use this command:
List<Store> myStoreList = Service.GetStoreListbyProvince(provinceId)
.GroupBy(s => s.City)
.Select(grp => grp.FirstOrDefault())
.OrderBy(s => s.City)
.ToList();
I have these fields in my list: Id, StoreName, City, PostalCode
I wanted to show a list of cities in a dropdown, and the list had duplicate values.
Solution: group by city, then pick the first one for the list.
It worked for me. Simply use
List<Type> distinctIDs = liIDs.Distinct().ToList();
Replace "Type" with your desired type, e.g. int.
As kronoz said, in .NET 3.5 you can use Distinct().
In .Net 2 you could mimic it:
public IEnumerable<T> DedupCollection<T>(IEnumerable<T> input)
{
    // HashSet<T> only arrived in .NET 3.5, so use a Dictionary as a poor man's set here
    Dictionary<T, bool> passedValues = new Dictionary<T, bool>();
    // Relatively simple dupe check alg used as example
    foreach (T item in input)
    {
        if (!passedValues.ContainsKey(item)) // True if item is new
        {
            passedValues.Add(item, true);
            yield return item;
        }
    }
}
This could be used to dedupe any collection and will return the values in the original order.
It's normally much quicker to filter a collection (as both Distinct() and this sample do) than it would be to remove items from it.
An extension method might be a decent way to go... something like this:
public static List<T> Deduplicate<T>(this List<T> listToDeduplicate)
{
return listToDeduplicate.Distinct().ToList();
}
And then call like this, for example:
List<int> myFilteredList = unfilteredList.Deduplicate();
In Java (I assume C# is more or less identical):
list = new ArrayList<T>(new HashSet<T>(list))
If you really wanted to mutate the original list:
List<T> noDupes = new ArrayList<T>(new HashSet<T>(list));
list.clear();
list.addAll(noDupes);
To preserve order, simply replace HashSet with LinkedHashSet.
This takes the distinct elements (removing the duplicates) and converts the result into a list again:
List<type> myNoneDuplicateValue = listValueWithDuplicate.Distinct().ToList();
Use Linq's Union method.
Note: This solution requires no knowledge of Linq, aside from that it exists.
Code
Begin by adding the following to the top of your class file:
using System.Linq;
Now, you can use the following to remove duplicates from an object called, obj1:
obj1 = obj1.Union(obj1).ToList();
Note: Rename obj1 to the name of your object.
How it works
The Union command lists one of each entry of two source objects. Since obj1 is both source objects, this reduces obj1 to one of each entry.
ToList() returns a new List. This is necessary because LINQ commands like Union return the result as an IEnumerable instead of modifying the original List or returning a new List.
As a helper method (without Linq):
public static List<T> Distinct<T>(this List<T> list)
{
return new List<T>(new HashSet<T>(list)); // List constructor over a HashSet, no LINQ needed
}
Here's an extension method for removing adjacent duplicates in-situ. Call Sort() first and pass in the same IComparer. This should be more efficient than Lasse V. Karlsen's version which calls RemoveAt repeatedly (resulting in multiple block memory moves).
public static void RemoveAdjacentDuplicates<T>(this List<T> List, IComparer<T> Comparer)
{
int NumUnique = 0;
for (int i = 0; i < List.Count; i++)
if ((i == 0) || (Comparer.Compare(List[NumUnique - 1], List[i]) != 0))
List[NumUnique++] = List[i];
List.RemoveRange(NumUnique, List.Count - NumUnique);
}
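For example, assuming the extension method above is placed in a static class and list is a List<int>:
var comparer = Comparer<int>.Default;
list.Sort(comparer);
list.RemoveAdjacentDuplicates(comparer); // must be the same comparer used for the sort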
By installing the MoreLINQ package via NuGet, you can easily get the distinct objects in a list by a property:
IEnumerable<Catalogue> distinctCatalogues = catalogues.DistinctBy(c => c.CatalogueCode);
If you have two classes, Product and Customer, and want to remove duplicate items from their lists:
public class Product
{
public int Id { get; set; }
public string ProductName { get; set; }
}
public class Customer
{
public int Id { get; set; }
public string CustomerName { get; set; }
}
You must define a generic class in the form below
public class ItemEqualityComparer<T> : IEqualityComparer<T> where T : class
{
private readonly PropertyInfo _propertyInfo;
public ItemEqualityComparer(string keyItem)
{
_propertyInfo = typeof(T).GetProperty(keyItem, BindingFlags.GetProperty | BindingFlags.Instance | BindingFlags.Public);
}
public bool Equals(T x, T y)
{
var xValue = _propertyInfo?.GetValue(x, null);
var yValue = _propertyInfo?.GetValue(y, null);
return xValue != null && yValue != null && xValue.Equals(yValue);
}
public int GetHashCode(T obj)
{
var propertyValue = _propertyInfo.GetValue(obj, null);
return propertyValue == null ? 0 : propertyValue.GetHashCode();
}
}
Then you can remove duplicate items from your lists.
var products = new List<Product>
{
new Product{ProductName = "product 1" ,Id = 1,},
new Product{ProductName = "product 2" ,Id = 2,},
new Product{ProductName = "product 2" ,Id = 4,},
new Product{ProductName = "product 2" ,Id = 4,},
};
var productList = products.Distinct(new ItemEqualityComparer<Product>(nameof(Product.Id))).ToList();
var customers = new List<Customer>
{
new Customer{CustomerName = "Customer 1" ,Id = 5,},
new Customer{CustomerName = "Customer 2" ,Id = 5,},
new Customer{CustomerName = "Customer 2" ,Id = 5,},
new Customer{CustomerName = "Customer 2" ,Id = 5,},
};
var customerList = customers.Distinct(new ItemEqualityComparer<Customer>(nameof(Customer.Id))).ToList();
This code removes duplicate items by Id. If you want to remove duplicates by another property, change nameof(YourClass.DuplicateProperty) accordingly, e.g. nameof(Customer.CustomerName), to remove duplicates by the CustomerName property.
If you don't care about the order you can just shove the items into a HashSet, if you do want to maintain the order you can do something like this:
var unique = new List<T>();
var hs = new HashSet<T>();
foreach (T t in list)
if (hs.Add(t))
unique.Add(t);
Or the Linq way:
var hs = new HashSet<T>();
list.All( x => hs.Add(x) );
Edit: The HashSet method is O(N) time and O(N) space, while sorting and then making unique (as suggested by @lassevk and others) is O(N log N) time and O(1) space, so it's not so clear to me (as it was at first glance) that the sorting way is inferior.
Might be easier to simply make sure that duplicates are not added to the list.
if (items.IndexOf(new_item) < 0)
    items.Add(new_item);
You can use Union
obj2 = obj1.Union(obj1).ToList();
Another way in .Net 2.0
static void Main(string[] args)
{
List<string> alpha = new List<string>();
for(char a = 'a'; a <= 'd'; a++)
{
alpha.Add(a.ToString());
alpha.Add(a.ToString());
}
Console.WriteLine("Data :");
alpha.ForEach(delegate(string t) { Console.WriteLine(t); });
alpha.ForEach(delegate (string v)
{
if (alpha.FindAll(delegate(string t) { return t == v; }).Count > 1)
alpha.Remove(v);
});
Console.WriteLine("Unique Result :");
alpha.ForEach(delegate(string t) { Console.WriteLine(t);});
Console.ReadKey();
}
There are many ways to solve the duplicates issue in a List; below is one of them:
List<Container> containerList = LoadContainer();//Assume it has duplicates
List<Container> filteredList = new List<Container>();
foreach (var container in containerList)
{
Container duplicateContainer = containerList.Find(delegate(Container checkContainer)
{ return (checkContainer.UniqueId == container.UniqueId); });
//Assume 'UniqueId' is the property of the Container class on which you are searching
if (!filteredList.Contains(duplicateContainer)) //Add the object when it is not already in the filtered list
{
filteredList.Add(container);
}
}
Cheers
Ravi Ganesan
Here's a simple solution that doesn't require any hard-to-read LINQ or any prior sorting of the list.
private static void CheckForDuplicateItems(List<string> items)
{
if (items == null ||
items.Count == 0)
return;
for (int outerIndex = 0; outerIndex < items.Count; outerIndex++)
{
for (int innerIndex = 0; innerIndex < items.Count; innerIndex++)
{
if (innerIndex == outerIndex) continue;
if (items[outerIndex].Equals(items[innerIndex]))
{
// Duplicate Found
}
}
}
}
David J.'s answer is a good method, no need for extra objects, sorting, etc. It can be improved on however:
for (int innerIndex = items.Count - 1; innerIndex > outerIndex ; innerIndex--)
So the outer loop goes top to bottom over the entire list, but the inner loop goes from the bottom up "until the outer loop position is reached".
The outer loop makes sure the entire list is processed, the inner loop finds the actual duplicates, those can only happen in the part that the outer loop hasn't processed yet.
Or if you don't want to do bottom up for the inner loop you could have the inner loop start at outerIndex + 1.
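Put together, the improved scan might look like this (a sketch combining David J.'s method with the tighter inner loop described above):
private static void CheckForDuplicateItems(List<string> items)
{
    if (items == null || items.Count == 0)
        return;
    for (int outerIndex = 0; outerIndex < items.Count; outerIndex++)
    {
        // Only scan the part the outer loop hasn't processed yet.
        for (int innerIndex = items.Count - 1; innerIndex > outerIndex; innerIndex--)
        {
            if (items[outerIndex].Equals(items[innerIndex]))
            {
                // Duplicate found
            }
        }
    }
}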
A simple intuitive implementation:
public static List<PointF> RemoveDuplicates(List<PointF> listPoints)
{
List<PointF> result = new List<PointF>();
for (int i = 0; i < listPoints.Count; i++)
{
if (!result.Contains(listPoints[i]))
result.Add(listPoints[i]);
}
return result;
}
All answers copy lists, or create a new list, or use slow functions, or are just painfully slow.
To my understanding, this is the fastest and cheapest method I know (also, backed by a very experienced programmer specialized in real-time physics optimization).
// Duplicates will be noticed after a sort O(nLogn)
list.Sort();
// Store the current and last items. Current item declaration is not really needed, and probably optimized by the compiler, but in case it's not...
int lastItem = -1;
int currItem = -1;
int size = list.Count;
// Store the index pointing to the last item we want to keep in the list
int last = size - 1;
// Travel the items from last to first O(n)
for (int i = last; i >= 0; --i)
{
currItem = list[i];
// If this item was the same as the previous one, we don't want it
if (currItem == lastItem)
{
// Overwrite last in current place. It is a swap but we don't need the last
list[i] = list[last];
// Reduce the last index, we don't want that one anymore
last--;
}
// A new item, we store it and continue
else
lastItem = currItem;
}
// We now have an unsorted list with the duplicates at the end.
// Remove the last items just once
list.RemoveRange(last + 1, size - last - 1);
// Sort again O(n logn)
list.Sort();
Final cost is:
n log n + n + n log n = n + 2n log n = O(n log n), which is pretty nice.
Note about RemoveRange:
Since we cannot set the count of the list directly to avoid using the Remove functions, I don't know exactly how fast this operation is, but I guess it is the fastest way.
Using HashSet this can be done easily.
List<int> listWithDuplicates = new List<int> { 1, 2, 1, 2, 3, 4, 5 };
HashSet<int> hashWithoutDuplicates = new HashSet<int> ( listWithDuplicates );
List<int> listWithoutDuplicates = hashWithoutDuplicates.ToList();
Using HashSet:
list = new HashSet<T>(list).ToList();
public static void RemoveDuplicates<T>(IList<T> list )
{
if (list == null)
{
return;
}
int i = 1;
while(i<list.Count)
{
int j = 0;
bool remove = false;
while (j < i && !remove)
{
if (list[i].Equals(list[j]))
{
remove = true;
}
j++;
}
if (remove)
{
list.RemoveAt(i);
}
else
{
i++;
}
}
}
If you need to compare complex objects, you will need to pass a Comparer object inside the Distinct() method.
private List<MyListItem> GetDistinctItemList(List<MyListItem> _listWithDuplicates)
{
//It might be a good idea to create MyListItemComparer
//elsewhere and cache it for performance.
List<MyListItem> _listWithoutDuplicates = _listWithDuplicates.Distinct(new MyListItemComparer()).ToList();
//Choose the line below instead, if you have a situation where there is a chance to change the list while Distinct() is running.
//ToArray() is used to solve "Collection was modified; enumeration operation may not execute" error.
//List<MyListItem> _listWithoutDuplicates = _listWithDuplicates.ToArray().Distinct(new MyListItemComparer()).ToList();
return _listWithoutDuplicates;
}
Assuming you have 2 other classes like:
public class MyListItemComparer : IEqualityComparer<MyListItem>
{
public bool Equals(MyListItem x, MyListItem y)
{
return x != null
&& y != null
&& x.A == y.A
&& x.B.Equals(y.B)
&& x.C.ToString().Equals(y.C.ToString());
}
public int GetHashCode(MyListItem codeh)
{
// Hash must be consistent with Equals: combine the same members Equals compares.
return codeh.A.GetHashCode() ^ (codeh.B?.GetHashCode() ?? 0) ^ codeh.C.ToString().GetHashCode();
}
}
And:
public class MyListItem
{
public int A { get; }
public string B { get; }
public MyEnum C { get; }
public MyListItem(int a, string b, MyEnum c)
{
A = a;
B = b;
C = c;
}
}
I think the simplest way is:
Create a new list and add only the unique items.
Example:
class MyList
{
    public int id;
    public string date;
    public string email;
}
List<MyList> ml = new List<MyList>();
ml.Add(new MyList()
{
    id = 1,
    date = "2020/09/06",
    email = "zarezadeh#gmailcom"
});
ml.Add(new MyList()
{
    id = 2,
    date = "2020/09/01",
    email = "zarezadeh#gmailcom"
});
List<MyList> New_ml = new List<MyList>();
foreach (var item in ml)
{
if (New_ml.Where(w => w.email == item.email).SingleOrDefault() == null)
{
New_ml.Add(new MyList()
{
id = item.id,
date = item.date,
email = item.email
});
}
}