Ways to improve generic Dictionary performance - C#

I have a Dictionary<int, int> populated with ~5 million records.
While the performance is reasonably good considering the volume of data, I'm looking to improve it. I don't care about population time; my main concern is data retrieval.
The first thing I did was change the value type from decimal to int, which roughly doubled performance.
Then I tried trading 'genericness' for speed by passing a custom IntegerComparer into the Dictionary's constructor, as follows:
public class IntegerComparer : IEqualityComparer<int>
{
    public bool Equals(int x, int y)
    {
        return x == y;
    }

    public int GetHashCode(int obj)
    {
        return obj;
    }
}
but to no avail; performance degraded by 20%. (A plausible reason: the JIT can special-case and inline the default EqualityComparer<int>.Default, while a custom IEqualityComparer<int> is always called through the interface.) SortedDictionary slowed things down by a factor of 10, though I didn't have much hope for it anyway; it is a tree with O(log n) lookups rather than a hash table. I wonder what, if anything, can be done to improve the performance?
Here's a synthetic test just for measuring performance:
var d = new Dictionary<int, int>();
for (var i = 0; i < 5000000; i++)
{
    d.Add(i, i + 5);
}

var r = new Random();
var s = new Stopwatch();
s.Start();
for (var i = 0; i < 100000; i++)
{
    var r0 = Enumerable.Range(1, 255).Select(t => r.Next(5000000));
    var values = r0.Select(t => d[t]).ToList();
}
s.Stop();
MessageBox.Show(s.ElapsedMilliseconds.ToString());

As the comments point out, your test is seriously flawed: most of the time in the loop goes to LINQ overhead (delegate invocations, iterator allocations, and building a List per iteration), not to dictionary lookups.
If the highest key you will ever see is 5,000,000, then an array will be the most performant option, since indexing is a single bounds-checked memory access with no hashing at all. I've tried to quickly rewrite your test to eliminate some of the error. There will probably be mistakes; writing accurate benchmarks is hard.
static void Main(string[] args)
{
    var loopLength = 100000000;

    var d = new Dictionary<int, int>();
    for (var i = 0; i < 5000000; i++)
    {
        d.Add(i, i + 5);
    }
    var ignore = d[7];

    var a = new int[5000000];
    for (var i = 0; i < 5000000; i++)
    {
        a[i] = i + 5;
    }
    ignore = a[7];

    var s = new Stopwatch();
    var x = 1;
    s.Start();
    for (var i = 0; i < loopLength; i++)
    {
        x = (x * 1664525 + 1013904223) & 4194303;
        var y = d[x];
    }
    s.Stop();
    Console.WriteLine(s.ElapsedMilliseconds);

    s.Reset();
    x = 1;
    s.Start();
    for (var i = 0; i < loopLength; i++)
    {
        x = (x * 1664525 + 1013904223) & 4194303;
        var y = a[x];
    }
    s.Stop();
    Console.WriteLine(s.ElapsedMilliseconds);

    Console.ReadKey(true);
}
The x coefficients are borrowed from Wikipedia's linear congruential generator article; anding with 4194303 (which is 2^22 - 1) keeps every generated index within the bounds of both the dictionary and the array.
My results:
24390
2076
That makes the array roughly 12x faster.
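For a more rigorous comparison, a harness such as BenchmarkDotNet (https://benchmarkdotnet.org) takes care of warm-up, JIT effects, and statistical analysis automatically. A minimal sketch of the same two lookups, assuming the BenchmarkDotNet NuGet package is installed:

using System.Collections.Generic;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class LookupBenchmarks
{
    private Dictionary<int, int> d;
    private int[] a;
    private int x = 1;

    [GlobalSetup]
    public void Setup()
    {
        d = new Dictionary<int, int>();
        a = new int[5000000];
        for (var i = 0; i < 5000000; i++)
        {
            d.Add(i, i + 5);
            a[i] = i + 5;
        }
    }

    [Benchmark]
    public int DictionaryLookup() { x = (x * 1664525 + 1013904223) & 4194303; return d[x]; }

    [Benchmark]
    public int ArrayLookup() { x = (x * 1664525 + 1013904223) & 4194303; return a[x]; }
}

// class Program { static void Main() => BenchmarkRunner.Run<LookupBenchmarks>(); }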

In C# is it faster to create a Hash Set for searching through a list, rather than searching the list itself? [duplicate]

This question was closed as a duplicate of: HashSet vs. List performance.
I have two lists of strings, and I need to check whether there are any matches. I have to do this at a minimum of sixty times a second, but it can scale up to thousands of times a second.
Right now both lists are small; one has three elements and the other might have a few dozen at most, but the currently small list will probably grow.
Would it be faster to do this:
for (int i = 0; i < listA.Count; i++)
{
    for (int j = 0; j < listB.Count; j++)
    {
        if (listA[i] == listB[j])
        {
            // do stuff
        }
    }
}
Or to do this:
var hashSetB = new HashSet<string>(listB.Count);
for (int i = 0; i < listB.Count; i++)
{
    hashSetB.Add(listB[i]);
}
for (int i = 0; i < listA.Count; i++)
{
    if (hashSetB.Contains(listA[i]))
    {
        // do stuff
    }
}
ListA and ListB will always be lists when they come to me; I have no control over that.
I think the core of my question is that I don't know how long var hashSetB = new HashSet<string>(listB.Count); takes, so I'm not sure whether the change would be good or bad for smaller lists.
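For what it's worth, I know HashSet<T> also has a constructor that takes an IEnumerable<T>, which would replace the manual Add loop, but I don't know its cost either:

var hashSetB = new HashSet<string>(listB); // builds the set in a single pass over listB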
I was curious, so here's some code I wrote to test it. From what I got back, the HashSet was near-instantaneous whereas the nested loops were slow. That makes sense: you've taken something that needed lengthA * lengthB operations and reduced it to roughly lengthA + lengthB operations.
const int size = 20000;

var listA = new List<int>();
for (int i = 0; i < size; i++)
{
    listA.Add(i);
}

var listB = new List<int>();
for (int i = size - 5; i < 2 * size; i++)
{
    listB.Add(i);
}

var sw = new Stopwatch();
sw.Start();
for (int i = 0; i < listA.Count; i++)
{
    for (int j = 0; j < listB.Count; j++)
    {
        if (listA[i] == listB[j])
        {
            Console.WriteLine("Nested loop match");
        }
    }
}
long timeTaken1 = sw.ElapsedMilliseconds;

sw.Restart();
var hashSetB = new HashSet<int>(listB.Count);
for (int i = 0; i < listB.Count; i++)
{
    hashSetB.Add(listB[i]);
}
for (int i = 0; i < listA.Count; i++)
{
    if (hashSetB.Contains(listA[i]))
    {
        Console.WriteLine("HashSet match");
    }
}
long timeTaken2 = sw.ElapsedMilliseconds;

Console.WriteLine("Time Taken Nested Loop: " + timeTaken1);
Console.WriteLine("Time Taken HashSet: " + timeTaken2);
Console.ReadLine();
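For completeness (an aside, not from the original answer): LINQ's Intersect builds a set from one of its inputs internally, so a one-liner gets similar set-based behaviour, yielding each distinct match once:

foreach (var match in listA.Intersect(listB))
{
    Console.WriteLine("Intersect match: " + match);
}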

Cannot understand the behaviour of threads in C#

static void Main(string[] args)
{
    var sw = new Stopwatch();
    sw.Start();
    int noOfThreads = Environment.ProcessorCount;
    //int minVal = 1;
    int maxVal = 10000000;
    int blockSize = maxVal / noOfThreads;

    List<Thread> threads = new List<Thread>();
    List<List<int>> results = new List<List<int>>();
    object thisLock = new object();

    for (int i = 0; i < noOfThreads; ++i)
    {
        lock (thisLock)
        {
            Thread th = new Thread(() =>
            {
                results.Add(GetPrimeNumbers(i * blockSize, i * blockSize + blockSize));
            });
            th.Start();
            threads.Add(th);
        }
    }

    foreach (var elem in threads)
        elem.Join();
}
private static List<int> GetPrimeNumbers(int low, int high)
{
    List<int> result = new List<int>();
    //Debug.WriteLine("Low: {0}. High: {1}", low, high);
    for (int i = low; i <= high; ++i)
    {
        if (IsPrime(i))
            result.Add(i);
    }
    return result;
}

// Note: this check misclassifies 1 (reported prime) and 2 (reported non-prime),
// but that is incidental to the threading question.
static bool IsPrime(int number)
{
    if (number % 2 == 0)
        return false;
    else
    {
        var topLimit = (int)Math.Sqrt(number);
        for (int i = 3; i <= topLimit; i += 2)
            if (number % i == 0)
                return false;
        return true;
    }
}
With the above code, I was expecting that a breakpoint in GetPrimeNumbers(int low, int high) would show me ranges of values for low and high, e.g. (0, 1250000), (1250000, 2500000), ..., (8750000, 10000000). But what I observe is that certain blocks get passed multiple times, e.g. (2500000, 3750000), while others, e.g. (0, 1250000), are never passed at all, and this behaviour matches the results I am getting.
I am curious why I am seeing this behaviour. Is there a way to prevent it?
I am aware that I could use Parallel.For(), and with it I do see the expected values at a breakpoint in GetPrimeNumbers(int low, int high). But, as mentioned, I am curious why the former behaviour occurs.
Thanks in advance!
The problem is that a for loop reuses the same i variable across iterations, and your thread delegate is closing over that variable.
There are various ways to fix this. A simple one is to use a new variable declared within your loop:
for (int i = 0; i < noOfThreads; ++i)
{
    int j = i; // capture the value
    lock (thisLock)
    {
        Thread th = new Thread(() =>
        {
            results.Add(GetPrimeNumbers(j * blockSize, j * blockSize + blockSize));
        });
        th.Start();
        threads.Add(th);
    }
}
This still has other issues, though. I'd recommend something more like this:

var allPrimeNumbers = Enumerable.Range(0, noOfThreads)
    .AsParallel()
    .SelectMany(i => GetPrimeNumbers(i * blockSize, i * blockSize + blockSize))
    .ToList();
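To see the capture problem in isolation, here is a minimal sketch (not from the original post):

var actions = new List<Action>();
for (int i = 0; i < 3; i++)
{
    actions.Add(() => Console.Write(i)); // all three lambdas close over the same i
}
foreach (var action in actions)
{
    action(); // prints "333", because i is already 3 when the lambdas run
}

Note that since C# 5, foreach declares a fresh iteration variable on each pass, but for still reuses a single variable for the whole loop, which is why the capture-a-copy fix is needed here.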
Further Reading
Is there a reason for C#'s reuse of the variable in a foreach?
StriplingWarrior had it close but, as mentioned in the comments, you still have a threading bug: you need to move the lock inside the thread's action. Also, to get the best performance, hold the lock for the shortest time possible, which means only while modifying the shared results variable. To do that, I separated the GetPrimeNumbers call from the results.Add call.
for (int i = 0; i < noOfThreads; ++i)
{
    int j = i; // capture the value
    Thread th = new Thread(() =>
    {
        var result = GetPrimeNumbers(j * blockSize, j * blockSize + blockSize);
        lock (thisLock)
        {
            results.Add(result);
        }
    });
    th.Start();
    threads.Add(th);
}
Also, unless you really need to manage your own threads, I would recommend using Tasks (TPL) instead. Here is a modification using Tasks:
var tasks = new List<Task<List<int>>>();
for (int i = 0; i < noOfThreads; ++i)
{
    int j = i; // capture the value
    tasks.Add(Task.Run(() => GetPrimeNumbers(j * blockSize, j * blockSize + blockSize)));
}
Task.WaitAll(tasks.ToArray());
results = tasks.Select(t => t.Result).ToList();
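As an aside (not in the original answer): in an async method, the same shape works without blocking a thread by awaiting Task.WhenAll:

var allResults = (await Task.WhenAll(tasks)).ToList(); // List<List<int>>, in task order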

Generate random 16-digit string

What's a better way to create a random 16-digit string? I've used this code; can you suggest a more efficient or elegant way to do it?
static string Random16DigitString() {
    var rand = new Random();
    return $"{rand.Next(100000000).ToString().PadLeft(8, '0')}{rand.Next(100000000).ToString().PadLeft(8, '0')}";
}
PS: My reason for making this is to create a string of the form 0.0000000000000000, so I would use it in the following way:
var myString = "0." + Random16DigitString();
Your solution depends on string manipulation, which will slow it down.
Try:

private static Random r = new Random();

static string Random16DigitString() {
    var v = new char[16];
    for (var j = 0; j < 16; j++)
        v[j] = (char)(r.NextDouble() * 10 + 48); // 48 is the character code of '0'
    return new string(v);
}
This will be faster since it doesn't depend on string operations like concatenation or interpolation; it just pokes random characters into a char array and then converts that array to a string. Executing your solution 100 million times takes about 47 seconds on my machine; my code takes about 27 seconds to produce the same results.
r.Next(10) + 48 would work in the above code, but it's actually a little slower. r.Next(48, 58) (the upper bound is exclusive, so this also yields '0' through '9') is slower still.
Your code could also be simpler: $"{rand.Next(100000000):D8}{rand.Next(100000000):D8}" would do the same thing, in about the same time.
Here's the code I ended up using:
static readonly Random rnd = new Random();

static string Q() {
    // https://stackoverflow.com/questions/767999/random-number-generator-only-generating-one-random-number/768001#768001
    // It was decided to use a lock instead of [ThreadStatic] because this api object
    // is designed to be used by many threads simultaneously.
    lock (rnd) {
        // Get a string representing a positive number greater than 0 and less than 1
        // with exactly 16 decimal places.

        // Original implementation
        //return $"0.{rnd.Next(100000000).ToString().PadLeft(8, '0')}{rnd.Next(100000000).ToString().PadLeft(8, '0')}";

        // This works but is slow
        //return rnd.NextDouble().ToString("F16");

        // Found a better, faster way: https://stackoverflow.com/questions/48455624/generate-random-16-digit-string/48457354#48457354
        var chars = new char[18];
        chars[0] = '0';
        chars[1] = '.';
        for (var i = 2; i < 18; i++)
            chars[i] = (char)(rnd.NextDouble() * 10 + 48);
        return new string(chars);
    }
}
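As a further aside: on .NET 6 or later, Random.Shared is a thread-safe shared instance, so the lock could be dropped entirely (a hedged variant, not part of the original code):

static string Q() {
    var chars = new char[18];
    chars[0] = '0';
    chars[1] = '.';
    for (var i = 2; i < 18; i++)
        chars[i] = (char)('0' + Random.Shared.Next(10)); // Random.Shared is safe for concurrent use
    return new string(chars);
}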
Here are the tests I used (with thanks to Jim Berg for his answer):
using System;
using System.Diagnostics;
using System.Text;

namespace NetCoreApp1 {
    class Program {
        static void Main(string[] args) {
            var sync = new object();
            var rnd = new Random();

            Time("method1", () => {
                var value = $"{rnd.Next(100000000).ToString().PadLeft(8, '0')}{rnd.Next(100000000).ToString().PadLeft(8, '0')}";
            });

            Time("method2", () => {
                var value = $"{rnd.Next(100000000):D8}{rnd.Next(100000000):D8}";
            });

            Time("next double", () => {
                var value = rnd.NextDouble().ToString("F16"); // turns out surprisingly slow, even slower than the first two
            });

            Time("method3", () => {
                var v = new char[16];
                for (var j = 0; j < 16; j++)
                    v[j] = (char)(rnd.NextDouble() * 10 + 48); // fastest
                var value = new string(v);
            });

            Time("method3 with lock", () => {
                lock (sync) {
                    var v = new char[16];
                    for (var j = 0; j < 16; j++)
                        v[j] = (char)(rnd.NextDouble() * 10 + 48); // a tiny bit slower with the lock
                    var value = new string(v);
                }
            });

            Time("method4", () => {
                var sb = new StringBuilder(16);
                for (var j = 0; j < 16; j++)
                    sb.Append((char)(rnd.NextDouble() * 10 + 48)); // slower than method 3
                var value = sb.ToString();
            });

            Console.WriteLine("Press Enter to exit.");
            Console.ReadLine();
        }

        static void Time(string testName, Action action) {
            var sw = Stopwatch.StartNew();
            for (var i = 0; i < 10000000; i++)
                action();
            sw.Stop();
            Console.WriteLine($"{testName}: {sw.ElapsedMilliseconds}ms");
        }
    }
}

How can I maximize the performance of an element-wise operation on a big array in C#

The operation is to multiply the i-th element of an array (call it A) by the i-th element of a matrix of the same size (B), and update the i-th element of A with the result.
As an arithmetic formula:
A'[i] = A[i] * B[i], for 0 <= i < n(A)
What's the best way to optimize this operation in a multi-core environment?
Here's my current code:

var learningRate = 0.001f;
var m = 20000;
var n = 40000;
var W = new float[m * n];
var C = new float[m * n];

// my current code ... [1]
Parallel.ForEach(Enumerable.Range(0, m), i =>
{
    for (int j = 0; j <= n - 1; j++)
    {
        W[i * n + j] *= C[i * n + j];
    }
});

// This is somehow far slower than [1], but I don't know why ... [2]
Parallel.ForEach(Enumerable.Range(0, n * m), i =>
{
    W[i] *= C[i];
});

// This is faster than [2], but not as fast as [1] ... [3]
for (int i = 0; i < m * n; i++)
{
    W[i] *= C[i];
}
I also tested the range-partitioning method from the article below, but the performance didn't get better at all:
http://msdn.microsoft.com/en-us/library/dd560853.aspx
public static void Test1()
{
    Random rnd = new Random(1);
    var sw1 = new Stopwatch();
    var sw2 = new Stopwatch();
    sw1.Reset();
    sw2.Reset();

    int m = 10000;
    int n = 20000;
    int loops = 20;

    var W = DummyDataUtils.CreateRandomMat1D(m, n);
    var C = DummyDataUtils.CreateRandomMat1D(m, n);

    for (int l = 0; l < loops; l++)
    {
        var v = DummyDataUtils.CreateRandomVector(n);
        var b = DummyDataUtils.CreateRandomVector(m);

        sw1.Start();
        Parallel.ForEach(Enumerable.Range(0, m), i =>
        {
            for (int j = 0; j <= n - 1; j++)
            {
                W[i * n + j] *= C[i * n + j];
            }
        });
        sw1.Stop();

        sw2.Start();
        // Partition the entire source array.
        var rangePartitioner = Partitioner.Create(0, n * m);

        // Loop over the partitions in parallel.
        Parallel.ForEach(rangePartitioner, (range, loopState) =>
        {
            // Loop over each range element without a delegate invocation.
            for (int i = range.Item1; i < range.Item2; i++)
            {
                W[i] *= C[i];
            }
        });
        sw2.Stop();

        Console.Write("o");
    }

    var t1 = (double)sw1.ElapsedMilliseconds / loops;
    var t2 = (double)sw2.ElapsedMilliseconds / loops;
    Console.WriteLine("t1: " + t1);
    Console.WriteLine("t2: " + t2);
}
Result:
t1: 119
t2: 120.4
The problem is that while invoking a delegate is relatively fast, the cost adds up when you invoke it many times and the code inside the delegate is very simple; that is exactly what happens in [2], which pays one delegate invocation per element.
What you could try instead is to use a Partitioner to specify the range you want to iterate over, which lets you process many items per delegate invocation (similar to what you're doing in [1]):
Parallel.ForEach(Partitioner.Create(0, n * m), partition =>
{
    for (int i = partition.Item1; i < partition.Item2; i++)
    {
        W[i] *= C[i];
    }
});
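Beyond parallelism, this loop is also a natural candidate for SIMD. Here is a hedged sketch using Vector<float> from System.Numerics (not part of the original answer; older frameworks need the System.Numerics.Vectors package), which the partitioned loop above could apply per range:

using System.Numerics;

static void MultiplyInPlace(float[] w, float[] c)
{
    int width = Vector<float>.Count; // e.g. 8 floats per vector on AVX hardware
    int i = 0;
    for (; i <= w.Length - width; i += width)
    {
        // Multiply `width` elements at a time and write the product back into w.
        (new Vector<float>(w, i) * new Vector<float>(c, i)).CopyTo(w, i);
    }
    for (; i < w.Length; i++) // scalar tail for the leftover elements
        w[i] *= c[i];
}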

Sorting many TValue arrays with one TKey array

I have an outer array of N inner arrays of size M. I want to sort each inner array according to another key array K, in exactly the same way the built-in Array.Sort<TKey, TValue>(TKey[], TValue[], IComparer<TKey>) .NET method does.
That method modifies the key array after sorting, so I can use it to sort only a single inner array. To sort many arrays, I copy the key array into another KeyBuffer array for each inner array, reusing the buffer on each sorting step and avoiding allocation and GC. Is that the most efficient approach if the typical N is 10K-100K and M < 1000? Given the small size of M, the copying and sorting should happen inside the CPU cache, which is the fastest I can get, no?
My concern is that by doing so, I am sorting the buffer and discarding the results (N-1) times, which is wasteful. I am also doing the actual sorting N times, even though after the first sort I already know the mapping from old indexes to new indexes and could somehow reuse it for the other (N-1) steps.
How would you avoid the unnecessary sorting and apply the known mapping from the first step to the other steps?
Here is how I do it now; the question is whether it can be done more efficiently.
using System;
using System.Collections.Generic;

namespace MultiSorting {
    class Program {
        static void Main(string[] args) {
            var N = 10;
            var M = 5;
            var outer = new List<string[]>(N);
            for (var i = 0; i < N; i++) {
                string[] inner = { "a" + i, "d" + i, "c" + i, "b" + i, "e" + i };
                outer.Add(inner);
            }

            int[] keys = { 1, 4, 3, 2, 5 };
            var keysBuffer = new int[M];

            for (int i = 0; i < N; i++) {
                Array.Copy(keys, keysBuffer, M);
                // doing sort N times, but we know the map
                // old_index -> new_index from the first sorting
                // plus we sort keysBuffer N times but use the result only one time
                Array.Sort(keysBuffer, outer[i]);
            }
            keys = keysBuffer;

            foreach (var key in keys) {
                Console.Write(key + " "); // 1, 2, 3, 4, 5
            }
            Console.WriteLine("");

            for (var i = 0; i < N; i++) {
                foreach (var item in outer[i]) {
                    Console.Write(item + " "); // a{i}, b{i}, c{i}, d{i}, e{i}
                }
                Console.WriteLine("");
            }
            Console.ReadLine();
        }
    }
}
I just played with this and implemented the mapping reuse directly in a for loop. I didn't expect that a simple loop could beat the native built-in methods, probably because I had underestimated the algorithmic cost of sorting relative to the cost of looping over an array, and because I used to relax when a profiler said the job was mostly being done inside .NET methods...
Naive is the code from the question, ReuseMap is what is described in words in the question, Linq is from the answer by @L.B. The ...InPlace variants modify their input; the ...Copy variants don't.
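In brief, the ReuseMap idea is to sort an index map once and then permute every inner array through that map; a condensed sketch (the full version appears below as MultiSortReuseMapInPlace):

var map = Enumerable.Range(0, keys.Length).ToArray();
var keysCopy = (int[])keys.Clone();
Array.Sort(keysCopy, map); // map[m] = old index of the m-th smallest key

var buffer = new string[map.Length];
foreach (var inner in outer) {
    for (int m = 0; m < map.Length; m++) {
        buffer[m] = inner[map[m]];
    }
    Array.Copy(buffer, inner, inner.Length);
}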
Results with N = 2000, M = 500, 10 runs, in milliseconds:
NaiveInPlace: 1005
ReuseMapInPlace: 129 (Log2(500) = 9.0, speed-up = 7.8x)
NaiveCopy: 1181
ReuseMapCopy: 304
LinqCopy: 3284
The entire test is below:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

namespace MultiSorting {
    class Program {
        static void Main() {
            const int n = 2;
            const int m = 10;

            var keys = GenerateKeys(m);
            foreach (var key in keys) {
                Console.Write(key + " ");
            }
            Console.WriteLine("");

            var keysBuffer = new int[keys.Length];
            Array.Copy(keys, keysBuffer, keys.Length);
            Array.Sort(keysBuffer);
            foreach (var key in keysBuffer) {
                Console.Write(key + " ");
            }
            Console.WriteLine("");

            // warm up, check that output is the same
            List<string[]> outer = MultiSortNaiveInPlace(keys, GenerateOuter(n, m));
            PrintResults(outer);

            outer = MultiSortNaiveCopy(keys, GenerateOuter(n, m));
            PrintResults(outer);

            outer = MultiSortReuseMapInPlace(keys, GenerateOuter(n, m));
            PrintResults(outer);

            outer = MultiSortReuseMapCopy(keys, GenerateOuter(n, m));
            PrintResults(outer);

            outer = MultiSortLinqCopy(keys, GenerateOuter(n, m));
            PrintResults(outer);

            // tests
            keys = GenerateKeys(500);
            NaiveInPlace(2000, 500, keys);
            ReuseMapInPlace(2000, 500, keys);
            NaiveCopy(2000, 500, keys);
            ReuseMapCopy(2000, 500, keys);
            LinqCopy(2000, 500, keys);

            Console.ReadLine();
        }

        private static void NaiveInPlace(int n, int m, int[] keys) {
            const int rounds = 10;
            var source = new List<List<string[]>>(rounds);
            for (int i = 0; i < rounds; i++) {
                source.Add(GenerateOuter(n, m));
            }
            GC.Collect();
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < rounds; i++) {
                source[i] = MultiSortNaiveInPlace(keys, source[i]);
            }
            sw.Stop();
            Console.WriteLine("NaiveInPlace: " + sw.ElapsedMilliseconds);
        }

        private static void ReuseMapInPlace(int n, int m, int[] keys) {
            const int rounds = 10;
            var source = new List<List<string[]>>(rounds);
            for (int i = 0; i < rounds; i++) {
                source.Add(GenerateOuter(n, m));
            }
            GC.Collect();
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < rounds; i++) {
                source[i] = MultiSortReuseMapInPlace(keys, source[i]);
            }
            sw.Stop();
            Console.WriteLine("ReuseMapInPlace: " + sw.ElapsedMilliseconds);
        }

        private static void NaiveCopy(int n, int m, int[] keys) {
            const int rounds = 10;
            var source = new List<List<string[]>>(rounds);
            for (int i = 0; i < rounds; i++) {
                source.Add(GenerateOuter(n, m));
            }
            GC.Collect();
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < rounds; i++) {
                source[i] = MultiSortNaiveCopy(keys, source[i]);
            }
            sw.Stop();
            Console.WriteLine("NaiveCopy: " + sw.ElapsedMilliseconds);
        }

        private static void ReuseMapCopy(int n, int m, int[] keys) {
            const int rounds = 10;
            var source = new List<List<string[]>>(rounds);
            for (int i = 0; i < rounds; i++) {
                source.Add(GenerateOuter(n, m));
            }
            GC.Collect();
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < rounds; i++) {
                source[i] = MultiSortReuseMapCopy(keys, source[i]);
            }
            sw.Stop();
            Console.WriteLine("ReuseMapCopy: " + sw.ElapsedMilliseconds);
        }

        private static void LinqCopy(int n, int m, int[] keys) {
            const int rounds = 10;
            var source = new List<List<string[]>>(rounds);
            for (int i = 0; i < rounds; i++) {
                source.Add(GenerateOuter(n, m));
            }
            GC.Collect();
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < rounds; i++) {
                source[i] = MultiSortLinqCopy(keys, source[i]);
            }
            sw.Stop();
            Console.WriteLine("LinqCopy: " + sw.ElapsedMilliseconds);
        }

        private static void PrintResults(List<string[]> outer) {
            for (var i = 0; i < outer.Count; i++) {
                foreach (var item in outer[i]) {
                    Console.Write(item + " "); // a{i}, b{i}, c{i}, d{i}, e{i}
                }
                Console.WriteLine("");
            }
        }

        private static int[] GenerateKeys(int m) {
            var keys = new int[m];
            for (int i = 0; i < m; i++) { keys[i] = i; }
            var rnd = new Random();
            keys = keys.OrderBy(x => rnd.Next()).ToArray();
            return keys;
        }

        private static List<string[]> GenerateOuter(int n, int m) {
            var outer = new List<string[]>(n);
            for (var o = 0; o < n; o++) {
                var inner = new string[m];
                for (int i = 0; i < m; i++) { inner[i] = "R" + o + "C" + i; }
                outer.Add(inner);
            }
            return outer;
        }

        private static List<string[]> MultiSortNaiveInPlace(int[] keys, List<string[]> outer) {
            var keysBuffer = new int[keys.Length];
            foreach (var inner in outer) {
                Array.Copy(keys, keysBuffer, keys.Length);
                // doing sort N times, but we know the map
                // old_index -> new_index from the first sorting
                // plus we sort keysBuffer N times but use the result only one time
                Array.Sort(keysBuffer, inner);
            }
            return outer;
        }

        private static List<string[]> MultiSortNaiveCopy(int[] keys, List<string[]> outer) {
            var result = new List<string[]>(outer.Count);
            var keysBuffer = new int[keys.Length];
            for (var n = 0; n < outer.Count(); n++) {
                var inner = outer[n];
                var newInner = new string[keys.Length];
                Array.Copy(keys, keysBuffer, keys.Length);
                Array.Copy(inner, newInner, keys.Length);
                // doing sort N times, but we know the map
                // old_index -> new_index from the first sorting
                // plus we sort keysBuffer N times but use the result only one time
                Array.Sort(keysBuffer, newInner);
                result.Add(newInner);
            }
            return result;
        }

        private static List<string[]> MultiSortReuseMapInPlace(int[] keys, List<string[]> outer) {
            var itemsBuffer = new string[keys.Length];
            var keysBuffer = new int[keys.Length];
            Array.Copy(keys, keysBuffer, keysBuffer.Length);
            var map = new int[keysBuffer.Length];
            for (int m = 0; m < keysBuffer.Length; m++) {
                map[m] = m;
            }
            Array.Sort(keysBuffer, map);
            for (var n = 0; n < outer.Count(); n++) {
                var inner = outer[n];
                for (int m = 0; m < map.Length; m++) {
                    itemsBuffer[m] = inner[map[m]];
                }
                Array.Copy(itemsBuffer, outer[n], inner.Length);
            }
            return outer;
        }

        private static List<string[]> MultiSortReuseMapCopy(int[] keys, List<string[]> outer) {
            var keysBuffer = new int[keys.Length];
            Array.Copy(keys, keysBuffer, keysBuffer.Length);
            var map = new int[keysBuffer.Length];
            for (int m = 0; m < keysBuffer.Length; m++) {
                map[m] = m;
            }
            Array.Sort(keysBuffer, map);
            var result = new List<string[]>(outer.Count);
            for (var n = 0; n < outer.Count(); n++) {
                var inner = outer[n];
                var newInner = new string[keys.Length];
                for (int m = 0; m < map.Length; m++) {
                    newInner[m] = inner[map[m]];
                }
                result.Add(newInner);
            }
            return result;
        }

        private static List<string[]> MultiSortLinqCopy(int[] keys, List<string[]> outer) {
            var result = outer.Select(arr => arr.Select((item, inx) => new { item, key = keys[inx] })
                                                .OrderBy(x => x.key)
                                                .Select(x => x.item)
                                                .ToArray()) // allocating
                              .ToList(); // allocating
            return result;
        }
    }
}
