LinkedList faster iteration than List? - C#

For some reason it seems like my LinkedList is outperforming my List. I have a LinkedList because there is a part of the code where I have to rearrange the children a lot. Everything after that bit of rearranging is simply looping over the data and performing calculations. I was previously just iterating over the LinkedList node by node, but then I thought I should simultaneously store a List of the same elements so I can iterate over them faster. Somehow, adding the List and iterating over it was significantly slower for the exact same work. I'm not sure how this could be, so any info is helpful.
Here is roughly the code that I would EXPECT to be faster:
class Program {
    public static void Main(string[] args) {
        var originalList = new List<Thing>();
        for (var i = 0; i < 1000; i++) {
            var t = new Thing();
            t.x = 0d;
            t.y = 0d;
            originalList.Add(t);
        }
        var something = new Something(originalList);
        for (var i = 0; i < 1000; i++) {
            var start = DateTime.Now.Ticks / TimeSpan.TicksPerMillisecond;
            something.iterate();
            time += (DateTime.Now.Ticks / TimeSpan.TicksPerMillisecond) - start;
            Console.Out.WriteLine(time / (i + 1));
        }
    }
    class Thing {
        public double x {get; set;}
        public double y {get; set;}
    }
    class Something {
        private List<Thing> things;
        private LinkedList<Thing> things1 = new LinkedList<Thing>();
        private List<Thing> things2 = new List<Thing>();
        public Class(List<Thing> things) {
            this.things = things;
            for (var i = 0; i < things.Count; i++) {
                things1.AddLast(things[i]);
                things2.Add(things[i]);
            }
        }
        public void iterate() {
            // loops like this happen a few times, but the list is never changed,
            // only the objects' properties in the list
            for (var i = 0; i < things2.Count; i++) {
                var thing = things2[i];
                thing.x += someDouble;
                thing.y += someOtherDouble;
            }
        }
    }
}
This was what I was doing first, which I think SHOULD be SLOWER:
class Program {
    public static void Main(string[] args) {
        var originalList = new List<Thing>();
        for (var i = 0; i < 1000; i++) {
            var t = new Thing();
            t.x = 0d;
            t.y = 0d;
            originalList.Add(t);
        }
        var something = new Something(originalList);
        for (var i = 0; i < 1000; i++) {
            var start = DateTime.Now.Ticks / TimeSpan.TicksPerMillisecond;
            something.iterate();
            time += (DateTime.Now.Ticks / TimeSpan.TicksPerMillisecond) - start;
            Console.Out.WriteLine(time / (i + 1));
        }
    }
    class Thing {
        public double x {get; set;}
        public double y {get; set;}
    }
    class Something {
        private List<Thing> things;
        private LinkedList<Thing> things1 = new LinkedList<Thing>();
        public Class(List<Thing> things) {
            this.things = things;
            for (var i = 0; i < things.Count; i++) {
                things1.AddLast(things[i]);
            }
        }
        public void iterate() {
            // loops like this happen a few times, but the list is never changed,
            // only the objects' properties in the list
            var iterator = things1.First;
            while (iterator != null) {
                var value = iterator.Value;
                value.x += someDouble;
                value.y += someOtherDouble;
                iterator = iterator.Next;
            }
        }
    }
}

It's difficult to verify anything since the code doesn't compile, so I can't run it on my machine, but there are still some big problems:
You should not be using DateTime.Now to measure performance. Instead use Stopwatch, which is designed for high-fidelity time measurement, e.g.:
var stopwatch = Stopwatch.StartNew();
//do stuff here
stopwatch.Stop();
double timeInSeconds = stopwatch.Elapsed.TotalSeconds;
Your arithmetic is fundamentally flawed in the following line:
DateTime.Now.Ticks / TimeSpan.TicksPerMillisecond
Ticks are represented as integral numbers (long here), and integer division truncates rather than yielding a real number, e.g. 55 / 7 = 7. It is therefore definitely not a stable way of benchmarking.
Additionally, run the benchmark with more elements, and make sure you run it in Release mode.
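Putting those points together, a minimal sketch of how the timing loop might look with those fixes applied (reusing the something variable from the question; an illustration, not a full benchmark harness):
var stopwatch = new Stopwatch();
for (var i = 0; i < 1000; i++) {
    stopwatch.Start();
    something.iterate();
    stopwatch.Stop();
}
// Elapsed accumulates across Start/Stop pairs, so this is the average per call
Console.WriteLine(stopwatch.Elapsed.TotalMilliseconds / 1000);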

Based on your code, the cause of the slow performance is not the iteration; it is the act of adding elements to the List<T>.
This post does a very good job of comparing List and LinkedList. List<T> is backed by an array allocated in one contiguous block, so as you add more elements the array has to be resized (reallocated and copied), which is what causes the slow performance in your code.
If you come from the C++ world, List<T> in C# is equivalent to std::vector<T>, while LinkedList<T> is equivalent to std::list<T>.
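If the construction cost is what hurts, one common mitigation is to pre-size the list, so the backing array is allocated once and never resized:
// capacity is known up front, so Add never triggers a reallocation
var things2 = new List<Thing>(things.Count);
for (var i = 0; i < things.Count; i++) {
    things2.Add(things[i]);
}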

Related

Search complexity of Dictionary<TKey,TValue> vs List<T>

I've been doing some reading on the generic Dictionary class, and the general advice is to use Dictionary if you need really fast access to an item matching a specific key. This is because a dictionary uses a type-safe hash table under the hood. When accessing items by key, the search complexity is O(1) in a dictionary, whereas in a List we would need to loop through every single item until we find a match, making the complexity O(n).
I wrote a little console app to see just how significant the difference between the two would be. The app stores 10 million items in each collection and attempts to access the second last item. The time difference between the List and Dictionary<TKey,TValue> is only one second, making the dictionary a winner but only just.
Question - can you provide an example (verbal is fine) where using a Dictionary vs a List would yield significant performance improvements?
class Program
{
    static void Main(string[] args)
    {
        var iterations = 10000000; // 10 million
        var sw = new Stopwatch();
        sw.Start();
        var value1 = GetSecondLastFromDictionary(iterations);
        sw.Stop();
        var t1 = sw.Elapsed.ToString();
        sw.Restart();
        var value2 = GetSecondLastFromList(iterations);
        sw.Stop();
        var t2 = sw.Elapsed.ToString();
        Console.WriteLine($"Dictionary - {t1}\nList - {t2}");
        Console.ReadKey();
    }
    private static string GetSecondLastFromList(int iterations)
    {
        var collection = new List<Test>();
        for (var i = 0; i < iterations; i++)
            collection.Add(new Test { Key = i, Value = $"#{i}" });
        return collection.Where(e => e.Key == iterations - 1).First().Value;
    }
    private static string GetSecondLastFromDictionary(int iterations)
    {
        var collection = new Dictionary<int, string>();
        for (var i = 0; i < iterations; i++)
            collection.Add(i, $"#{i}");
        return collection[iterations - 1];
    }
}
class Test
{
    public int Key { get; set; }
    public string Value { get; set; }
}
Your own example is fine for showing where a Dictionary yields significant performance improvements. The problem is that you're not measuring the right thing: your code spends most of its time creating the dictionary or list and then performs just one access. You need to separate the collection creation from the access, and time multiple accesses of the item.
The code below does this. On my machine, the repeated dictionary accesses take 0.001 s in total, whereas the same number of list accesses takes 2 minutes 32 seconds. Assuming I've done that right, I think it shows dictionaries are faster for access.
static void Main(string[] args)
{
    var iterations = 100000;
    var sw = new Stopwatch();
    var dict = CreateDict(iterations);
    var list = CreateList(iterations);
    sw.Start();
    GetSecondLastFromDictionary(iterations, dict);
    sw.Stop();
    var t1 = sw.Elapsed.ToString();
    sw.Restart();
    GetSecondLastFromList(iterations, list);
    sw.Stop();
    var t2 = sw.Elapsed.ToString();
    Console.WriteLine($"Dictionary - {t1}\nList - {t2}");
    Console.ReadKey();
}
private static Dictionary<int, string> CreateDict(int iterations)
{
    var collection = new Dictionary<int, string>();
    for (var i = 0; i < iterations; i++)
        collection.Add(i, $"#{i}");
    return collection;
}
private static List<Test> CreateList(int iterations)
{
    var collection = new List<Test>();
    for (var i = 0; i < iterations; i++)
        collection.Add(new Test { Key = i, Value = $"#{i}" });
    return collection;
}
private static void GetSecondLastFromList(int iterations, List<Test> collection)
{
    string test;
    for (var i = 0; i < iterations; i++)
        test = collection.Where(e => e.Key == iterations - 1).First().Value;
}
private static void GetSecondLastFromDictionary(int iterations, Dictionary<int, string> collection)
{
    string test;
    for (var i = 0; i < iterations; i++)
        test = collection[iterations - 1];
}
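A nuance worth keeping in mind when reading these numbers: positional access into a List<T> is O(1) just like an array; it is the search by key that costs O(n). A short sketch using the list and dict built above:
var byIndex = list[iterations - 1];                        // O(1): direct index into the backing array
var byKeyScan = list.First(e => e.Key == iterations - 1);  // O(n): linear scan until the key matches
var byKeyHash = dict[iterations - 1];                      // O(1): hash lookup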

Task process ending before finishing all the work

I've been having trouble running multiple tasks with heavy operations.
It seems as if the task process is killed before all the operations are complete.
The code here is example code I used to replicate the issue. If I add something like Debug.Write(), the added wait for the write fixes the issue. The issue is also gone if I test on a smaller sample size. The reason there is a class in the example below is to create complexity for the test.
The real case where I encountered the issue first is too complicated to explain for a post here.
public static class StaticRandom
{
    static int seed = Environment.TickCount;
    static readonly ThreadLocal<Random> random =
        new ThreadLocal<Random>(() => new Random(Interlocked.Increment(ref seed)));
    public static int Next()
    {
        return random.Value.Next();
    }
    public static int Next(int maxValue)
    {
        return random.Value.Next(maxValue);
    }
    public static double NextDouble()
    {
        return random.Value.NextDouble();
    }
}
// this is the test function I run to recreate the problem:
static void tasktest()
{
    var testlist = new List<ExampleClass>();
    for (var index = 0; index < 10000; ++index)
    {
        var newClass = new ExampleClass();
        newClass.Populate(Enumerable.Range(0, 1000).ToList());
        testlist.Add(newClass);
    }
    var anotherClassList = new List<ExampleClass>();
    var threadNumber = 5;
    if (threadNumber > testlist.Count)
    {
        threadNumber = testlist.Count;
    }
    var taskList = new List<Task>();
    var tokenSource = new CancellationTokenSource();
    CancellationToken cancellationToken = tokenSource.Token;
    int stuffPerThread = testlist.Count / threadNumber;
    var stuffCounter = 0;
    for (var count = 1; count <= threadNumber; ++count)
    {
        var toSkip = stuffCounter;
        var threadWorkLoad = stuffPerThread;
        var currentIndex = count;
        // these ifs make sure all the indexes are covered
        if (stuffCounter + threadWorkLoad > testlist.Count)
        {
            threadWorkLoad = testlist.Count - stuffCounter;
        }
        else if (count == threadNumber && stuffCounter + threadWorkLoad < testlist.Count)
        {
            threadWorkLoad = testlist.Count - stuffCounter;
        }
        taskList.Add(Task.Factory.StartNew(() => taskfunc(testlist, anotherClassList, toSkip, threadWorkLoad),
            cancellationToken, TaskCreationOptions.None, TaskScheduler.Default));
        stuffCounter += stuffPerThread;
    }
    Task.WaitAll(taskList.ToArray());
}
public class ExampleClass
{
    public ExampleClassInner[] Inners { get; set; }
    public ExampleClass()
    {
        Inners = new ExampleClassInner[5];
        for (var index = 0; index < Inners.Length; ++index)
        {
            Inners[index] = new ExampleClassInner();
        }
    }
    public void Populate(List<int> intlist) { /* adds random ints to the inner class */ }
    public ExampleClass(ExampleClass copyFrom)
    {
        Inners = new ExampleClassInner[5];
        for (var index = 0; index < Inners.Length; ++index)
        {
            Inners[index] = new ExampleClassInner(copyFrom.Inners[index]);
        }
    }
    public class ExampleClassInner
    {
        public bool SomeBool { get; set; } = false;
        public int SomeInt { get; set; } = -1;
        public ExampleClassInner()
        {
        }
        public ExampleClassInner(ExampleClassInner copyFrom)
        {
            SomeBool = copyFrom.SomeBool;
            SomeInt = copyFrom.SomeInt;
        }
    }
}
static int expensivefunc(int theint)
{
    /* a lot of pointless arithmetic and loops done only on primitives and with primitives,
       just to increase the complexity */
    theint *= theint + 1;
    var anotherlist = Enumerable.Range(0, 10000).ToList();
    for (var index = 0; index < anotherlist.Count; ++index)
    {
        theint += index;
        if (theint % 5 == 0)
        {
            theint *= index / 2;
        }
    }
    var yetanotherlist = Enumerable.Range(0, 50000).ToList();
    for (var index = 0; index < yetanotherlist.Count; ++index)
    {
        theint += index;
        if (theint % 7 == 0)
        {
            theint -= index / 3;
        }
    }
    while (theint > 8)
    {
        theint /= 2;
    }
    return theint;
}
// this function is intentionally creating a lot of objects, to simulate complexity
static void taskfunc(List<ExampleClass> intlist, List<ExampleClass> anotherClassList, int skip, int take)
{
    if (take == 0)
    {
        take = intlist.Count;
    }
    var partial = intlist.Skip(skip).Take(take).ToList();
    for (var index = 0; index < partial.Count; ++index)
    {
        var testint = expensivefunc(index);
        var newClass = new ExampleClass(partial[index]);
        newClass.Inners[StaticRandom.Next(5)].SomeInt = testint;
        anotherClassList.Add(new ExampleClass(newClass));
    }
}
The expected result is that anotherClassList will end up the same size as testlist, and that is what happens when the lists are smaller or the task operations are less complex. However, when I increase the volume of operations, anotherClassList has a few indexes missing, and sometimes some of the entries in the list are null.
Why does this happen? I have Task.WaitAll.
Your problem is that it's just not thread-safe; you can't add to a List<T> from multiple threads and expect it to play nice.
One way is to use lock or a thread-safe collection, but I feel this all should be refactored (my OCD is going off all over the place).
private static object _sync = new object();
...
private static void TaskFunc(List<ExampleClass> intlist, List<ExampleClass> anotherClassList, int skip, int take)
{
    ...
    var partial = intlist.Skip(skip).Take(take).ToList();
    ...
    // note that locking here will likely drastically decrease any performance threading gain
    lock (_sync)
    {
        for (var index = 0; index < partial.Count; ++index)
        {
            // this is your problem, you are adding to a list from multiple threads
            anotherClassList.Add(...);
        }
    }
}
In short, I think you need to think more carefully about the threading logic of your method: identify what you are trying to achieve and how to make it conceptually thread-safe (while keeping your performance gains).
After TheGeneral enlightened me that Lists are not thread-safe, I changed the List I was adding to from multiple threads into an array (each task writes to its own range of indexes, so the writes don't conflict), and this fixed my issue.
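For completeness, here is a sketch of the thread-safe-collection route (an alternative to the array fix, not what the poster ended up using): replace the shared List<ExampleClass> with a ConcurrentBag<ExampleClass>, whose Add is safe to call from multiple threads. Note that a bag is unordered, so order-sensitive code would still need a different structure.
using System.Collections.Concurrent;

// shared across all tasks; Add is thread-safe
var results = new ConcurrentBag<ExampleClass>();

// inside taskfunc, instead of anotherClassList.Add(...):
results.Add(new ExampleClass(newClass));

// after Task.WaitAll, materialize a list if one is needed:
var anotherClassList = results.ToList();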

Processing records speed

I realise this is a non-specific code question, but I suspect the people with answers are on this forum.
I am receiving a large number of records of < 100 bytes via TCP at a rate of 10 per millisecond.
I have to parse and process the data, and that takes me 100 microseconds per record - at 10 records per millisecond that is exactly my time budget, so I am pretty maxed out.
Does 100 microseconds seem large?
Here is an example of the kind of processing I do with LINQ. It is really convenient - but is it inherently slow?
public void Process()
{
    try
    {
        int ptr = PayloadOffset + 1;
        var cPair = MessageData.GetString(ref ptr, 7);
        var orderID = MessageData.GetString(ref ptr, 15);
        if (Book.CPairs.ContainsKey(cPair))
        {
            var cPairGroup = Book.CPairs[cPair];
            if (cPairGroup.BPrices != null)
            {
                cPairGroup.BPrices.ForEach(x => { x.BOrders.RemoveAll(y => y.OrderID.Equals(orderID)); });
                cPairGroup.BPrices.RemoveAll(x => x.BOrders.Count == 0);
            }
        }
    }
}
public class BOrderGroup
{
    public double Amount;
    public string OrderID;
}
public class BPriceGroup
{
    public double BPrice;
    public List<BOrderGroup> BOrders;
}
public class CPairGroup
{
    public List<BPriceGroup> BPrices;
}
public static Dictionary<string, CPairGroup> CPairs;
As others have mentioned, LINQ is not inherently slow. But it can be slower than equivalent non-LINQ code (this is why the Roslyn team has an "Avoid LINQ" guideline in its coding conventions).
If this is your hot path and you need every microsecond, then you should probably implement the logic like this:
public void Process()
{
    try
    {
        int ptr = PayloadOffset + 1;
        var cPair = MessageData.GetString(ref ptr, 7);
        var orderID = MessageData.GetString(ref ptr, 15);
        if (Book.CPairs.TryGetValue(cPair, out CPairGroup cPairGroup) && cPairGroup != null)
        {
            for (int i = cPairGroup.BPrices.Count - 1; i >= 0; i--)
            {
                var x = cPairGroup.BPrices[i];
                for (int j = x.BOrders.Count - 1; j >= 0; j--)
                {
                    var y = x.BOrders[j];
                    if (y.OrderID.Equals(orderID))
                    {
                        x.BOrders.RemoveAt(j);
                    }
                }
                if (x.BOrders.Count == 0)
                {
                    cPairGroup.BPrices.RemoveAt(i);
                }
            }
        }
    }
}
Main points:
Avoid double dictionary lookup by using TryGetValue
Single iteration over cPairGroup.BPrices
In-place modification of the lists by iterating backwards
This code should not contain any additional heap allocations

Thread safety Parallel.For c#

I'm French, so first of all sorry for my English.
I get an error in Visual Studio (index out of range), and I have this problem only with Parallel.For, not with a classic for loop.
I think one thread wants to access my array[i] while another thread is accessing it too.
The code computes k-means clustering to build links between documents (using cosine similarity).
More information:
The IndexOutOfRange is thrown on similarityMeasure[i] = ...
I have a computer with 2 processors (12 logical cores).
With a classic for, CPU usage is 9-14% and one iteration takes ~9 min.
With Parallel.For, CPU usage is 70-90% =p and one iteration takes ~1 min 30 s.
Sometimes it runs longer before generating the error.
My function is:
private static int FindClosestClusterCenter(List<Centroid> clustercenter, DocumentVector obj)
{
    float[] similarityMeasure = new float[clustercenter.Count()];
    float[] copy = similarityMeasure;
    object sync = new Object();
    Parallel.For(0, clustercenter.Count(), (i) => // previously: for (int i = 0; i < clustercenter.Count(); i++)
    {
        similarityMeasure[i] = SimilarityMatrics.FindCosineSimilarity(clustercenter[i].GroupedDocument[0].VectorSpace, obj.VectorSpace);
    });
    int index = 0;
    float maxValue = similarityMeasure[0];
    for (int i = 0; i < similarityMeasure.Count(); i++)
    {
        if (similarityMeasure[i] > maxValue)
        {
            maxValue = similarityMeasure[i];
            index = i;
        }
    }
    return index;
}
My function is called here:
do
{
    prevClusterCenter = centroidCollection;
    DateTime starttime = DateTime.Now;
    foreach (DocumentVector obj in documentCollection) // previously: Parallel.ForEach(documentCollection, parallelOptions, obj => ...)
    {
        int ind = FindClosestClusterCenter(centroidCollection, obj);
        resultSet[ind].GroupedDocument.Add(obj);
    }
    TimeSpan tempsecoule = DateTime.Now.Subtract(starttime);
    Console.WriteLine(tempsecoule);
    //Console.ReadKey();
    InitializeClusterCentroid(out centroidCollection, centroidCollection.Count());
    centroidCollection = CalculMeanPoints(resultSet);
    stoppingCriteria = CheckStoppingCriteria(prevClusterCenter, centroidCollection);
    if (!stoppingCriteria)
    {
        // initialize the result set for the next iteration
        InitializeClusterCentroid(out resultSet, centroidCollection.Count);
    }
} while (stoppingCriteria == false);
_counter = counter;
return resultSet;
FindCosineSimilarity:
public static float FindCosineSimilarity(float[] vecA, float[] vecB)
{
    var dotProduct = DotProduct(vecA, vecB);
    var magnitudeOfA = Magnitude(vecA);
    var magnitudeOfB = Magnitude(vecB);
    float result = dotProduct / (float)Math.Pow((magnitudeOfA * magnitudeOfB), 2);
    // when 0 is divided by 0 the result is NaN, so return 0 in that case
    if (float.IsNaN(result))
        return 0;
    else
        return (float)result;
}
CalculMeanPoints:
private static List<Centroid> CalculMeanPoints(List<Centroid> _clust)
{
    for (int i = 0; i < _clust.Count(); i++)
    {
        if (_clust[i].GroupedDocument.Count() > 0)
        {
            for (int j = 0; j < _clust[i].GroupedDocument[0].VectorSpace.Count(); j++)
            {
                float total = 0;
                foreach (DocumentVector vspace in _clust[i].GroupedDocument)
                {
                    total += vspace.VectorSpace[j];
                }
                _clust[i].GroupedDocument[0].VectorSpace[j] = total / _clust[i].GroupedDocument.Count();
            }
        }
    }
    return _clust;
}
You may have some side effects in FindCosineSimilarity; make sure it does not modify any field or input parameter. Example: resultSet[ind].GroupedDocument.Add(obj). If resultSet is not a reference to a locally instantiated array, then that is a side effect.
That may fix it. But FYI, you could use AsParallel for this rather than Parallel.For; AsOrdered preserves the source order, so element i of the result still corresponds to clustercenter[i]:
similarityMeasure = clustercenter
    .AsParallel().AsOrdered()
    .Select(c => SimilarityMatrics.FindCosineSimilarity(c.GroupedDocument[0].VectorSpace, obj.VectorSpace))
    .ToArray();
You realize that if you synchronize the whole body of the Parallel.For, it's just the same as having a normal synchronous for loop, right? Meaning the code as it stands doesn't do anything in parallel, so I don't think you'll have any problems with concurrency. My guess from what I can tell is that clustercenter[i].GroupedDocument is probably an empty array.

Multithreading in C# queries

I am new to multithreading in C#. I have a 3D array of size (x)(y)(z), and say I want to calculate the average of all the z samples for every (x, y) value. I wish to do that using multithreading (say 2 threads), where I will send one half of the array, of size (x/2)*y*z, to thread1 for processing and the other half to thread2.
How do I do it? How do I pass arguments to and retrieve results from the individual threads? A code example would be helpful.
Regards
I would recommend using PLINQ for this instead of threading it yourself.
It will let you write your query using LINQ syntax, but parallelize it (across all of your cores) automatically.
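For instance, a sketch of what that could look like here (it assumes the samples live in a double[x][y][z] jagged array named data, which the question does not specify):
// average the z samples for every (x, y) cell, parallelized over the x dimension
double[][] averages = data
    .AsParallel().AsOrdered() // keep results aligned with the input order
    .Select(plane => plane.Select(zSamples => zSamples.Average()).ToArray())
    .ToArray();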
There are many reasons why it makes sense to use something like PLINQ (as mentioned by Reed) or Parallel.For, since implementing a low-overhead scheduler that distributes jobs over several CPUs is a bit challenging.
So if I understood you correctly, maybe this could get you started (on my 4-core machine the parallel version is 3 times faster than the single-core version):
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

class Program
{
    static void AverageOfZ(
        double[] input,
        double[] result,
        int x,
        int y,
        int z
        )
    {
        Debug.Assert(input.Length == x*y*z);
        Debug.Assert(result.Length == x*y);
        // Replace Parallel with Sequential to compare with non-parallel loop
        // Sequential.For(
        Parallel.For(
            0,
            x*y,
            i =>
            {
                var begin = i*z;
                var end = begin + z;
                var sum = 0.0;
                for (var iter = begin; iter < end; ++iter)
                {
                    sum += input[iter];
                }
                result[i] = sum/z;
            });
    }
    static void Main(string[] args)
    {
        const int X = 64;
        const int Y = 64;
        const int Z = 64;
        const int Repetitions = 40000;
        var random = new Random(19740531);
        var samples = Enumerable.Range(0, X*Y*Z).Select(x => random.NextDouble()).ToArray();
        var result = new double[X*Y];
        var then = DateTime.Now;
        for (var iter = 0; iter < Repetitions; ++iter)
        {
            AverageOfZ(samples, result, X, Y, Z);
        }
        var diff = DateTime.Now - then;
        Console.WriteLine(
            "{0} samples processed {1} times in {2} seconds",
            samples.Length,
            Repetitions,
            diff.TotalSeconds
            );
    }
}
static class Sequential
{
    public static void For(int from, int to, Action<int> action)
    {
        for (var iter = from; iter < to; ++iter)
        {
            action(iter);
        }
    }
}
PS. When going for concurrent performance it's important to consider how the different cores access memory (false sharing of cache lines, for example, can eat the gains), as it's very easy to get disappointing performance otherwise.
Dot Net 3.5 and onward introduced many shortcut keywords that abstract away the complexity of things like Parallel for multithreading or async for asynchronous IO. Unfortunately this also provides no opportunity to understand what's involved in these tasks. For example, a colleague of mine was recently trying to use async for a login method which returned an authentication token.
Here is the full-blown multithreaded sample code for your scenario. To make it more real, the sample code pretends that:
X is Longitude
Y is Lattitude
and Z is Rainfall Samples at the coordinates
The sample code also follows the Unit of Work design pattern, where the rainfall samples at each coordinate become a work item. It also creates discrete foreground threads instead of using a background thread pool.
Due to the simplicity of the work item and the short compute time involved, I've split the thread synchronization into two locks: one for the work queue and one for the output data.
Note: I have not used any Dot Net shortcuts such as LINQ, so this code should run on Dot Net 2.0 as well.
In real-world app development, something like what's below would only be needed in complex scenarios such as stream processing of a continuous stream of work items, in which case you would also need to implement output data buffers that are cleared regularly, as the threads would effectively run forever.
public static class MultiThreadSumRainFall
{
    const int LongitudeSize = 64;
    const int LattitudeSize = 64;
    const int RainFallSamplesSize = 64;
    const int SampleMinValue = 0;
    const int SampleMaxValue = 1000;
    const int ThreadCount = 4;

    public static void SumRainfallAndOutputValues()
    {
        int[][][] SampleData;
        SampleData = GenerateSampleRainfallData();
        // fill the work queue with one work item per (longitude, lattitude) coordinate
        for (int Longitude = 0; Longitude < LongitudeSize; Longitude++)
        {
            for (int Lattitude = 0; Lattitude < LattitudeSize; Lattitude++)
            {
                QueueWork(new WorkItem(Longitude, Lattitude, SampleData[Longitude][Lattitude]));
            }
        }
        System.Threading.ThreadStart WorkThreadStart;
        System.Threading.Thread WorkThread;
        List<System.Threading.Thread> RunningThreads;
        WorkThreadStart = new System.Threading.ThreadStart(ParallelSum);
        // clamp the thread count to the range [1, ProcessorCount + 1]
        int NumThreads;
        NumThreads = ThreadCount;
        if (ThreadCount < 1)
        {
            NumThreads = 1;
        }
        else if (NumThreads > (Environment.ProcessorCount + 1))
        {
            NumThreads = Environment.ProcessorCount + 1;
        }
        OutputData = new int[LongitudeSize, LattitudeSize];
        RunningThreads = new List<System.Threading.Thread>();
        for (int I = 0; I < NumThreads; I++)
        {
            WorkThread = new System.Threading.Thread(WorkThreadStart);
            WorkThread.Start();
            RunningThreads.Add(WorkThread);
        }
        // poll every 100 ms until every worker thread has finished
        bool AllThreadsComplete;
        AllThreadsComplete = false;
        while (!AllThreadsComplete)
        {
            System.Threading.Thread.Sleep(100);
            AllThreadsComplete = true;
            foreach (System.Threading.Thread WorkerThread in RunningThreads)
            {
                if (WorkerThread.IsAlive)
                {
                    AllThreadsComplete = false;
                }
            }
        }
        for (int Longitude = 0; Longitude < LongitudeSize; Longitude++)
        {
            for (int Lattitude = 0; Lattitude < LattitudeSize; Lattitude++)
            {
                Console.Write(string.Concat(OutputData[Longitude, Lattitude], " "));
            }
            Console.WriteLine();
        }
    }

    private class WorkItem
    {
        public WorkItem(int _Longitude, int _Lattitude, int[] _RainFallSamples)
        {
            Longitude = _Longitude;
            Lattitude = _Lattitude;
            RainFallSamples = _RainFallSamples;
        }
        public int Longitude { get; set; }
        public int Lattitude { get; set; }
        public int[] RainFallSamples { get; set; }
    }

    public static int[][][] GenerateSampleRainfallData()
    {
        int[][][] Result;
        Random Rnd;
        Rnd = new Random();
        Result = new int[LongitudeSize][][];
        for (int Longitude = 0; Longitude < LongitudeSize; Longitude++)
        {
            Result[Longitude] = new int[LattitudeSize][];
            for (int Lattidude = 0; Lattidude < LattitudeSize; Lattidude++)
            {
                Result[Longitude][Lattidude] = new int[RainFallSamplesSize];
                for (int Sample = 0; Sample < RainFallSamplesSize; Sample++)
                {
                    Result[Longitude][Lattidude][Sample] = Rnd.Next(SampleMinValue, SampleMaxValue);
                }
            }
        }
        return Result;
    }

    private static object SyncRootWorkQueue = new object();
    private static Queue<WorkItem> WorkQueue = new Queue<WorkItem>();

    private static void QueueWork(WorkItem SamplesWorkItem)
    {
        lock (SyncRootWorkQueue)
        {
            WorkQueue.Enqueue(SamplesWorkItem);
        }
    }

    private static WorkItem DeQueueWork()
    {
        WorkItem Samples;
        Samples = null;
        lock (SyncRootWorkQueue)
        {
            if (WorkQueue.Count > 0)
            {
                Samples = WorkQueue.Dequeue();
            }
        }
        return Samples;
    }

    private static int QueueSize()
    {
        lock (SyncRootWorkQueue)
        {
            return WorkQueue.Count;
        }
    }

    private static object SyncRootOutputData = new object();
    private static int[,] OutputData;

    private static void SetOutputData(int Longitude, int Lattitude, int SumSamples)
    {
        lock (SyncRootOutputData)
        {
            OutputData[Longitude, Lattitude] = SumSamples;
        }
    }

    // each worker thread pulls work items until the queue is empty
    private static void ParallelSum()
    {
        WorkItem SamplesWorkItem;
        int SummedResult;
        SamplesWorkItem = DeQueueWork();
        while (SamplesWorkItem != null)
        {
            SummedResult = 0;
            foreach (int SampleValue in SamplesWorkItem.RainFallSamples)
            {
                SummedResult += SampleValue;
            }
            SetOutputData(SamplesWorkItem.Longitude, SamplesWorkItem.Lattitude, SummedResult);
            SamplesWorkItem = DeQueueWork();
        }
    }
}
