Threading.Timer stops in Console application - c#

I'm working with a dictionary containing an ID as the key and a queue as the value. I have one thread writing to the queues and another thread reading from them, so I need to use the concurrent collections that were introduced in .NET 4.0. As part of this I tried to write a test application just to fill the queues, but I came across an issue with the timer stopping after around 10 seconds. I really don't understand why, as there is nothing to catch, no error message or anything to give me a hint about what might be wrong.
So can someone please explain to me why the timer stops after around 10 seconds? I've tried this on two different computers (both using Visual Studio 2012, but with .NET Framework 4.0).
class Program {
private readonly ConcurrentDictionary<int, ConcurrentQueue<TestObject>> _pipes =
new ConcurrentDictionary<int, ConcurrentQueue<TestObject>>();
static void Main() {
Program program = new Program();
program.Run();
Console.Read();
}
private void Run() {
_pipes[100] = new ConcurrentQueue<TestObject>();
_pipes[200] = new ConcurrentQueue<TestObject>();
_pipes[300] = new ConcurrentQueue<TestObject>();
Timer timer = new Timer(WriteStuff, null, 0, 100);
}
private void WriteStuff(object sender) {
for (int i = 0; i < 5; i++) {
foreach (KeyValuePair<int, ConcurrentQueue<TestObject>> pipe in _pipes) {
pipe.Value.Enqueue(
new TestObject { Name = DateTime.Now.ToString("o") + "-" + i });
}
i++;
}
Console.WriteLine(DateTime.Now + "added stuff");
}
}
internal class TestObject {
public string Name { get; set; }
public bool Sent { get; set; }
}

Most likely, the timer is going out of scope and being garbage collected, because nothing references it once Run returns. Declare the timer at outer scope (as a field) so it stays alive. That is:
private Timer timer;
private void Run()
{
...
timer = new Timer(WriteStuff, null, 0, 100);
}
Also, I think you'll find that BlockingCollection is easier to work with than ConcurrentQueue. BlockingCollection wraps a very nice API around concurrent collections, making it easier to do non-busy waits on the queue when removing things. In its default configuration, it uses a ConcurrentQueue as the backing store. All you have to do to use it is replace ConcurrentQueue in your code with BlockingCollection, and change from calling Enqueue to calling Add. As in:
for (int i = 0; i < 5; i++)
{
foreach (var pipe in _pipes)
{
pipe.Value.Add(
new TestObject { Name = DateTime.Now.ToString("o") + "-" + i });
}
}
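For completeness, here is a minimal consumer-side sketch (assuming the pipes now hold BlockingCollection&lt;TestObject&gt;; ReadStuff is a hypothetical method name, not from the original code). GetConsumingEnumerable blocks without spinning until items arrive and finishes once CompleteAdding is called:
// Hypothetical consumer for one pipe, run on its own thread.
private void ReadStuff(BlockingCollection<TestObject> pipe)
{
    // Blocks when the pipe is empty; completes after CompleteAdding() is called on it.
    foreach (TestObject item in pipe.GetConsumingEnumerable())
    {
        Console.WriteLine("dequeued " + item.Name);
    }
}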

Related

Unable to implement data parsing in a multi-threaded context using lock

I've built a program that
takes in a list of record data from a file
parses and cleans up each record in a parsing object
outputs it to an output file
So far this has worked on a single thread, but considering that records can exceed 1 million in some cases, we want to implement this in a multithreaded context. Multithreading is new to me in .NET, and I've given it a shot but it's not working. Below I will provide more details and code:
Main Class (simplified):
public class MainClass
{
ParseObject[] parseObjects;
Thread[] threads;
int threadCount;
List<InputLineItem> inputList = new List<InputLineItem>();
FileUtils fileUtils = new FileUtils();
public MainClass(int threadCount)
{
this.threadCount = threadCount;
Init();
}
public void Init()
{
inputList = fileUtils.GetInputList();
parseObjects = new ParseObject[threadCount - 1];
threads = new Thread[threadCount - 1];
InitParseObjects();
Parse();
}
private void InitParseObjects()
{
//using a ref of fileUtils to use as my lock expression
parseObjects[0] = new ParseObject(ref fileUtils);
parseObjects[0].InitValues();
for (int i = 1; i < threadCount - 1; i++)
{
parseObjects[i] = new ParseObject(ref fileUtils);
parseObjects[i].InitValues();
}
}
private void InitThreads()
{
for (int i = 0; i < threadCount - 1; i++)
{
Thread t = new Thread(new ThreadStart(parseObjects[0].CleanupAndParseInput));
threads[i] = t;
}
}
public void Parse()
{
try
{
InitThreads();
int objectIndex = 0;
foreach (InputLineItem inputLineItem in inputList)
{
parseObjects[0].inputLineItem = inputLineItem;
threads[objectIndex].Start();
objectIndex++;
if (objectIndex == threadCount)
{
objectIndex = 0;
InitThreads(); //do i need to re-init the threads after I've already used them all once?
}
}
}
catch (Exception e)
{
Console.WriteLine("(286) The following error occured: " + e);
}
}
}
}
And my Parse object class (also simplified):
public class ParseObject
{
public ParserLibrary parser { get; set; }
public FileUtils fileUtils { get; set; }
public InputLineItem inputLineItem { get; set; }
public ParseObject( ref FileUtils fileUtils)
{
this.fileUtils = fileUtils;
}
public void InitValues()
{
//relevant config of parser library object occurs here
}
public void CleanupFields()
{
parser.Clean(inputLineItem.nameValue);
inputLineItem.nameValue = GetCleanupFieldValue();
}
private string GetCleanupFieldValue()
{
//code to extract cleanup up value from parses
}
public void CleanupAndParseInput()
{
CleanupFields();
ParseInput();
}
public void ParseInput()
{
try
{
parser.Parse(inputLineItem.nameValue);
}
catch (Exception e)
{
}
try
{
lock (fileUtils)
{
WriteOutputToFile(inputLineItem);
}
}
catch (Exception e)
{
Console.WriteLine("(414) Failed to write to output: " + e);
}
}
public void WriteOutputToFile(InputLineItem inputLineItem)
{
//writes updated value to output file
}
}
The error I get is when trying to run the Parse function, I get this message:
An unhandled exception of type 'System.AccessViolationException' occurred in GenParse.NET.dll
Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
That being said, I feel like there's a whole lot more that I'm doing wrong here aside from what is causing that error.
I also have further questions:
Do I create multiple parse objects and iteratively feed them to each thread as I'm attempting to do, or should I use one Parse object that gets shared or cloned across each thread?
If, outside the thread, I change a value in the object that I'm passing to the thread, will that change reflect in the object passed to the thread? i.e, is the object passed by value or reference?
Is there a more efficient way for each record to be assigned to a thread and its parse object than I am currently doing with the objectIndex iterator?
THANKS!
Do I create multiple parse objects and iteratively feed them to each thread as I'm attempting to do, or should I use one Parse object that gets shared or cloned across each thread?
You initialize each thread with new ThreadStart(parseObjects[0].CleanupAndParseInput), so all threads will share the same parse object. It is a fairly safe bet that the parse objects are not thread-safe, so each thread should have a separate object. Note that this might not be sufficient: if the parse library uses any global state, it might be non-thread-safe even when using separate objects.
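A minimal sketch of that change, keeping the manual threads from the question (parseObjects, threads and threadCount are the names from the post; treat this as illustrative rather than a complete fix):
private void InitThreads()
{
    for (int i = 0; i < threadCount - 1; i++)
    {
        int index = i; // copy for the closure so each thread keeps its own index
        // Pair thread i with its own parse object instead of parseObjects[0]
        threads[i] = new Thread(() => parseObjects[index].CleanupAndParseInput());
    }
}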
If, outside the thread, I change a value in the object that I'm passing to the thread, will that change reflect in the object passed to the thread? i.e, is the object passed by value or reference?
Objects (i.e. class instances) are passed by reference. But changes made to an object on one thread are not guaranteed to be visible to other threads unless a memory barrier is issued. Most synchronization constructs (like lock) issue memory barriers. Keep in mind that any non-atomic operation is unsafe if a field is written and read concurrently.
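As a small illustration of the visibility point (all names here are invented for the example), publishing and reading a field under the same lock guarantees the reader sees the writer's update:
class SharedResult
{
    private readonly object gate = new object();
    private string latest;

    // Called from the writer thread
    public void Publish(string value) { lock (gate) { latest = value; } }

    // Called from reader threads; the lock acts as a memory barrier,
    // so the latest published value is visible here
    public string Read() { lock (gate) { return latest; } }
}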
Is there a more efficient way for each record to be assigned to a thread and its parse object than I am currently doing with the objectIndex iterator?
Using manual threads in this way is very old-school. The modern, easier, and probably faster way is to use a parallel-for loop. This will try to be smart about how many threads it will use and try to adapt chunk sizes to keep the synchronization overhead low.
var items = new List<int>();
ParseObject LocalInit()
{
// Do initialization. This is run once for each thread used
return new ParseObject();
}
ParseObject ThreadMain(int value, ParallelLoopState state, ParseObject threadLocalObject)
{
// Do whatever you need to do
// This is run on multiple threads
return threadLocalObject;
}
void LocalFinally(ParseObject obj)
{
// Do Cleanup for each thread
}
Parallel.ForEach(items, LocalInit, ThreadMain, LocalFinally);
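Applied to the pipeline from the question, a hedged sketch might look like the following (inputList, fileUtils, ParseObject and its members are the names from the post; the wiring is an assumption, not tested code):
Parallel.ForEach(
    inputList,
    // localInit: create one ParseObject per worker thread
    () =>
    {
        var p = new ParseObject(ref fileUtils);
        p.InitValues();
        return p;
    },
    // body: each invocation reuses the thread-local ParseObject
    (inputLineItem, loopState, parseObject) =>
    {
        parseObject.inputLineItem = inputLineItem;
        parseObject.CleanupAndParseInput(); // file writes stay serialized via lock(fileUtils)
        return parseObject;
    },
    // localFinally: per-thread cleanup (nothing needed here)
    parseObject => { });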
As a final note, I would advise against using multithreading unless you are familiar with the potential dangers and pitfalls it involves, at least for any project where the result is important. There are many ways to screw up and make a program that will work 99.9% of the time and silently corrupt data the remaining 0.1% of the time.

Using ClearScript inside Threads

I have a C# application where I spawn multiple threads. I'm on .NET Framework 4.7.1. Inside these threads, work is performed and this work may execute user-defined scripted functions. I am using ClearScript as the scripting engine, and for the purposes of this question I am using the VBScriptEngine. Here is a sample application demonstrating my problem:
static VBScriptEngine vbsengine = new VBScriptEngine();
static void Main(string[] args)
{
for (int i=0;i<4000;i++)
{
Thread t = new Thread(Program.ThreadedFunc);
t.Start(i);
}
Console.ReadKey();
}
static void ThreadedFunc(object i)
{
Console.WriteLine(i + ": " + vbsengine.Evaluate("1+1"));
}
On the Evaluate() function I get the following error:
System.InvalidOperationException: 'The calling thread cannot access this object because a different thread owns it.'
I understand ClearScript has implemented thread affinity and a spawned thread cannot access the globally defined engine. So what is my alternative? Create a new instance of ClearScript for each new thread? This seems incredibly wasteful and would create a lot of overhead - my application will need to process thousands of threads. I went ahead and tried this approach anyway - and while it does work (for a while) - I end up getting an error. Here's a revised version of my sample app:
static void Main(string[] args)
{
for (int i=0;i<4000;i++)
{
Thread t = new Thread(Program.ThreadedFunc);
t.Start(i);
}
Console.ReadKey();
}
static void ThreadedFunc(object i)
{
using (VBScriptEngine vbsengine = new VBScriptEngine())
{
Console.WriteLine(i + ": " + vbsengine.Evaluate("1+1"));
}
}
On the new VBScriptEngine() call, I now start getting: System.ComponentModel.Win32Exception: 'Not enough storage is available to process this command'.
I'm not really sure what's causing that message as the application isn't taking up a lot of RAM.
I realize this sample application is starting all the threads at once but my full application ensures only 4 threads are running and I still end up getting this message after a while. I don't know why, but I can't get rid of this message either - even after restarting the app and Visual Studio. A little clarity on what's causing this message would be useful.
So my question is - if I only need, say 4 threads, running at once - is there a way I can just create 4 instances of the VBScriptEngine and reuse it for each new thread call? Or even just 1 instance of the VBScriptEngine on the main thread and each new thread just shares it?
With some help from the ClearScript team, I was able to get my sample app to work using only 1 dedicated engine instance per thread. The trick was creating all the necessary engines first, each on its own thread, then within my loop using Dispatcher.Invoke() to call my threaded function. Here is an updated sample app using this approach:
static void Main(string[] args)
{
var vbengines = new VBScriptEngine[Environment.ProcessorCount];
var checkPoint = new ManualResetEventSlim();
for (var index = 0; index < vbengines.Length; ++index)
{
var thread = new Thread(indexArg =>
{
using (var engine = new VBScriptEngine())
{
vbengines[(int)indexArg] = engine;
checkPoint.Set();
Dispatcher.Run();
}
});
thread.Start(index);
checkPoint.Wait();
checkPoint.Reset();
}
Parallel.ForEach(Enumerable.Range(0, 4000), item => {
var engine = vbengines[item % vbengines.Length];
engine.Dispatcher.Invoke(() => {
ThreadedFunc(new myobj() { vbengine = engine, index = item });
});
});
Array.ForEach(vbengines, engine => engine.Dispatcher.InvokeShutdown());
Console.ReadKey();
}
static void ThreadedFunc(object obj)
{
Console.WriteLine(((myobj)obj).index.ToString() + ": " + ((myobj)obj).vbengine.Evaluate("1+1").ToString());
}
class myobj
{
public VBScriptEngine vbengine { get; set; }
public int index { get; set; }
}

static method vs instance method, multi threading, performance

Can you help explain how multiple threads access static methods? Are multiple threads able to access the static method concurrently?
To me it would seem logical that if a method is static that would make it a single resource that is shared by all the threads. Therefore only one thread would be able to use it at a time. I have created a console app to test this. But from the results of my test it would appear that my assumption is incorrect.
In my test a number of Worker objects are constructed. Each Worker has a number of passwords and keys. Each Worker has an instance method that hashes its passwords with its keys. There is also a static method which has exactly the same implementation, the only difference being that it is static. After all the Worker objects have been created, the start time is written to the console. Then a DoInstanceWork event is raised and all of the Worker objects queue their useInstanceMethod to the thread pool. When all the method calls for all the Worker objects have completed, the time they took is calculated from the start time and written to the console. Then the start time is set to the current time and the DoStaticWork event is raised. This time all the Worker objects queue their useStaticMethod to the thread pool, and when all these method calls have completed the elapsed time is again calculated and written to the console.
I was expecting the time taken when the objects use their instance method to be 1/8 of the time taken when they use the static method. 1/8 because my machine has 4 cores and 8 virtual threads. But it wasn't. In fact the static method was actually fractionally faster.
How is this so? What is happening under the hood? Does each thread get its own copy of the static method?
Here is the Console app-
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Threading;
namespace bottleneckTest
{
public delegate void workDelegate();
class Program
{
static int num = 1024;
public static DateTime start;
static int complete = 0;
public static event workDelegate DoInstanceWork;
public static event workDelegate DoStaticWork;
static bool flag = false;
static void Main(string[] args)
{
List<Worker> workers = new List<Worker>();
for( int i = 0; i < num; i++){
workers.Add(new Worker(i, num));
}
start = DateTime.UtcNow;
Console.WriteLine(start.ToString());
DoInstanceWork();
Console.ReadLine();
}
public static void Timer()
{
complete++;
if (complete == num)
{
TimeSpan duration = DateTime.UtcNow - Program.start;
Console.WriteLine("Duration: {0}", duration.ToString());
complete = 0;
if (!flag)
{
flag = true;
Program.start = DateTime.UtcNow;
DoStaticWork();
}
}
}
}
public class Worker
{
int _id;
int _num;
KeyedHashAlgorithm hashAlgorithm;
int keyLength;
Random random;
List<byte[]> _passwords;
List<byte[]> _keys;
List<byte[]> hashes;
public Worker(int id, int num)
{
this._id = id;
this._num = num;
hashAlgorithm = KeyedHashAlgorithm.Create("HMACSHA256");
keyLength = hashAlgorithm.Key.Length;
random = new Random();
_passwords = new List<byte[]>();
_keys = new List<byte[]>();
hashes = new List<byte[]>();
for (int i = 0; i < num; i++)
{
byte[] key = new byte[keyLength];
new RNGCryptoServiceProvider().GetBytes(key);
_keys.Add(key);
int passwordLength = random.Next(8, 20);
byte[] password = new byte[passwordLength * 2];
random.NextBytes(password);
_passwords.Add(password);
}
Program.DoInstanceWork += new workDelegate(doInstanceWork);
Program.DoStaticWork += new workDelegate(doStaticWork);
}
public void doInstanceWork()
{
ThreadPool.QueueUserWorkItem(useInstanceMethod, new WorkerArgs() { num = _num, keys = _keys, passwords = _passwords });
}
public void doStaticWork()
{
ThreadPool.QueueUserWorkItem(useStaticMethod, new WorkerArgs() { num = _num, keys = _keys, passwords = _passwords });
}
public void useInstanceMethod(object args)
{
WorkerArgs workerArgs = (WorkerArgs)args;
for (int i = 0; i < workerArgs.num; i++)
{
KeyedHashAlgorithm hashAlgorithm = KeyedHashAlgorithm.Create("HMACSHA256");
hashAlgorithm.Key = workerArgs.keys[i];
byte[] hash = hashAlgorithm.ComputeHash(workerArgs.passwords[i]);
}
Program.Timer();
}
public static void useStaticMethod(object args)
{
WorkerArgs workerArgs = (WorkerArgs)args;
for (int i = 0; i < workerArgs.num; i++)
{
KeyedHashAlgorithm hashAlgorithm = KeyedHashAlgorithm.Create("HMACSHA256");
hashAlgorithm.Key = workerArgs.keys[i];
byte[] hash = hashAlgorithm.ComputeHash(workerArgs.passwords[i]);
}
Program.Timer();
}
public class WorkerArgs
{
public int num;
public List<byte[]> passwords;
public List<byte[]> keys;
}
}
}
Methods are code - there's no problem with threads accessing that code concurrently, since the code isn't modified by running it; it's a read-only resource (jitter aside). What needs to be handled carefully in multi-threaded situations is concurrent access to data (and more specifically, when modifying that data is a possibility). Whether a method is static or an instance method has nothing to do with whether or not it needs to be serialized in some way to make it thread-safe.
In all cases, whether static or instance, any thread can access any method at any time unless you do explicit work to prevent it.
For example, you can create a lock to ensure only a single thread can access a given method, but C# will not do that for you.
Think of it like watching TV. A TV does nothing to prevent multiple people from watching it at the same time, and as long as everybody watching it wants to see the same show, there's no problem. You certainly wouldn't want a TV to only allow one person to watch it at once just because multiple people might want to watch different shows, right? So if people want to watch different shows, they need some sort of mechanism external to the TV itself (perhaps having a single remote control that the current viewer holds onto for the duration of his show) to make sure that one guy doesn't change the channel to his show while another guy is watching.
C# methods are "reentrant" (as in most languages; the last time I heard of genuinely non-reentrant code was DOS routines). Each thread has its own call stack, and when a method is called, the call stack of that thread is updated to have space for the return address, calling parameters, return value, local values, etc.
Suppose Thread1 and Thread2 call the method M concurrently, and M has a local int variable n. The call stack of Thread1 is separate from the call stack of Thread2, so n will have two different instantiations in two different stacks. Concurrency would be a problem only if n were stored not on a stack but, say, in the same register (i.e. in a shared resource); the CLR (or is it Windows?) is careful not to let that cause a problem and saves and restores the registers when switching threads. (What do you do in the presence of multiple CPUs, how do you allocate registers, how do you implement locking? These are indeed difficult problems that make one respect compiler and OS writers when one comes to think of it.)
Being reentrant does not guarantee that nothing bad happens when two threads call the same method at the same time: it only guarantees that nothing bad happens if the method does not access and update other shared resources.
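To make both points concrete, here is a tiny sketch (names invented for the example): the local n lives on each calling thread's own stack, while the static field total is shared and needs explicit locking:
static class CounterDemo
{
    static int total;                          // shared across all threads
    static readonly object gate = new object();

    public static void Work()
    {
        int n = 0;                             // local: every thread gets its own copy on its own call stack
        for (int i = 0; i < 1000; i++)
            n++;

        lock (gate)                            // explicit synchronization for the shared field
        {
            total += n;
        }
    }
}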
When you access an instance method, you are accessing it through an object reference.
When you access a static method, you are accessing it directly.
So static methods are a tiny bit faster.
When you instantiate a class you don't create a copy of the code. You have a reference to the definition of the class, and the code is accessed through it. So instance methods are accessed the same way as static methods.

Serially process ConcurrentQueue and limit to one message processor. Correct pattern?

I'm building a multithreaded app in .net.
I have a thread that listens to a connection (abstract, serial, tcp...).
When it receives a new message, it adds it to the queue via AddMessage, which then calls startSpool. startSpool checks whether the spool is already running; if it is, it returns, otherwise it starts it in a new thread. The reason for this is that the messages HAVE to be processed serially, FIFO.
So, my questions are...
Am I going about this the right way?
Are there better, faster, cheaper patterns out there?
My apologies if there is a typo in my code, I was having problems copying and pasting.
ConcurrentQueue<IMyMessage > messages = new ConcurrentQueue<IMyMessage>();
const int maxSpoolInstances = 1;
object lcurrentSpoolInstances;
int currentSpoolInstances = 0;
Thread spoolThread;
public void AddMessage(IMyMessage message)
{
this.messages.Add(message);
this.startSpool();
}
private void startSpool()
{
bool run = false;
lock (lcurrentSpoolInstances)
{
if (currentSpoolInstances <= maxSpoolInstances)
{
this.currentSpoolInstances++;
run = true;
}
else
{
return;
}
}
if (run)
{
this.spoolThread = new Thread(new ThreadStart(spool));
this.spoolThread.Start();
}
}
private void spool()
{
Message.ITimingMessage message;
while (this.messages.Count > 0)
{
// TODO: Is this below line necessary or does the TryDequeue cover this?
message = null;
this.messages.TryDequeue(out message);
if (message != null)
{
// My long running thing that does something with this message.
}
}
lock (lcurrentSpoolInstances)
{
this.currentSpoolInstances--;
}
}
This would be easier using BlockingCollection<T> instead of ConcurrentQueue<T>.
Something like this should work:
class MessageProcessor : IDisposable
{
BlockingCollection<IMyMessage> messages = new BlockingCollection<IMyMessage>();
public MessageProcessor()
{
// Start the consumer in the constructor to prevent the race condition in the existing code (which could start multiple spool threads)
Task.Factory.StartNew(this.Spool, TaskCreationOptions.LongRunning);
}
public void AddMessage(IMyMessage message)
{
this.messages.Add(message);
}
private void Spool()
{
foreach(IMyMessage message in this.messages.GetConsumingEnumerable())
{
// long running thing that does something with this message.
}
}
public void FinishProcessing()
{
// This will tell the spooling you're done adding, so it shuts down
this.messages.CompleteAdding();
}
void IDisposable.Dispose()
{
this.FinishProcessing();
}
}
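A rough usage sketch from the listener thread (the surrounding connection code is assumed, not shown in the question):
var processor = new MessageProcessor();

// On the connection/listener thread, for each message received:
processor.AddMessage(message);               // thread-safe; messages are consumed FIFO

// At shutdown:
((IDisposable)processor).Dispose();          // calls CompleteAdding, letting the spool loop finish and exit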
Edit: If you wanted to support multiple consumers, you could handle that via a separate constructor. I'd refactor this to:
public MessageProcessor(int numberOfConsumers = 1)
{
for (int i=0;i<numberOfConsumers;++i)
StartConsumer();
}
private void StartConsumer()
{
// Each consumer is a long-running task reading from the shared BlockingCollection
Task.Factory.StartNew(this.Spool, TaskCreationOptions.LongRunning);
}
This would allow you to start any number of consumers. Note that this breaks the rule of having it be strictly FIFO - with this change, up to numberOfConsumers elements can be processed concurrently, so strict ordering is no longer guaranteed.
Multiple producers are already supported. The above is thread safe, so any number of threads can call Add(message) in parallel, with no changes.
I think that Reed's answer is the best way to go, but for the sake of academics, here is an example using the concurrent queue -- you had some races in the code that you posted (depending upon how you handle incrementing currentSpoolInstances)
The changes I made (below) were:
Switched to a Task instead of a Thread (uses thread pool instead of incurring the cost of creating a new thread)
added the code to increment/decrement your spool instance count
changed the "if currentSpoolInstances <= max ... to just < to avoid having one too many workers (probably just a typo)
changed the way that empty queues were handled to avoid a race: I think you had a race where your while loop could have tested false (your thread begins to exit), but at that moment a new item is added (so your spool thread is exiting, but your spool count is > 0, so your queue stalls).
private ConcurrentQueue<IMyMessage> messages = new ConcurrentQueue<IMyMessage>();
const int maxSpoolInstances = 1;
object lcurrentSpoolInstances = new object();
int currentSpoolInstances = 0;
public void AddMessage(IMyMessage message)
{
this.messages.Enqueue(message);
this.startSpool();
}
private void startSpool()
{
lock (lcurrentSpoolInstances)
{
if (currentSpoolInstances < maxSpoolInstances)
{
this.currentSpoolInstances++;
Task.Factory.StartNew(spool, TaskCreationOptions.LongRunning);
}
}
}
private void spool()
{
IMyMessage message;
while (true)
{
// you do not need to null message because it is an "out" parameter, had it been a "ref" parameter, you would want to null it.
if(this.messages.TryDequeue(out message))
{
// My long running thing that does something with this message.
}
else
{
lock (lcurrentSpoolInstances)
{
if (this.messages.IsEmpty)
{
this.currentSpoolInstances--;
return;
}
}
}
}
}
Check 'Pipelines pattern': http://msdn.microsoft.com/en-us/library/ff963548.aspx
Use BlockingCollection for the 'buffers'.
Each Processor (e.g. ReadStrings, CorrectCase, ..), should run in a Task.
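As a rough sketch of that pattern (the stage names ReadStrings and CorrectCase come from the linked article; everything else here is illustrative, not tested):
var rawLines = new BlockingCollection<string>(100);    // bounded buffers keep the stages in step
var fixedLines = new BlockingCollection<string>(100);

var readStrings = Task.Factory.StartNew(() =>
{
    foreach (var line in File.ReadLines("input.txt"))  // hypothetical input source
        rawLines.Add(line);
    rawLines.CompleteAdding();                         // signal the downstream stage that we're done
}, TaskCreationOptions.LongRunning);

var correctCase = Task.Factory.StartNew(() =>
{
    foreach (var line in rawLines.GetConsumingEnumerable())
        fixedLines.Add(line.ToUpperInvariant());
    fixedLines.CompleteAdding();
}, TaskCreationOptions.LongRunning);

Task.WaitAll(readStrings, correctCase);
// fixedLines now holds the processed output; a further stage could consume it the same way.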
HTH..

What is an efficent method for in-order processing of events using CCR?

I was experimenting with CCR iterators as a solution to a task that requires parallel processing of tons of data feeds, where the data from each feed needs to be processed in order. None of the feeds are dependent on each other, so the in-order processing can be parallelized per feed.
Below is a quick and dirty mockup with one integer feed, which simply shoves integers into a Port at a rate of about 1.5K/second, and then pulls them out using a CCR iterator to keep the in-order processing guarantee.
class Program
{
static Dispatcher dispatcher = new Dispatcher();
static DispatcherQueue dispatcherQueue =
new DispatcherQueue("DefaultDispatcherQueue", dispatcher);
static Port<int> intPort = new Port<int>();
static void Main(string[] args)
{
Arbiter.Activate(
dispatcherQueue,
Arbiter.FromIteratorHandler(new IteratorHandler(ProcessInts)));
int counter = 0;
Timer t = new Timer( (x) =>
{ for(int i = 0; i < 1500; ++i) intPort.Post(counter++);}
, null, 0, 1000);
Console.ReadKey();
}
public static IEnumerator<ITask> ProcessInts()
{
while (true)
{
yield return intPort.Receive();
int currentValue;
if( (currentValue = intPort) % 1000 == 0)
{
Console.WriteLine("{0}, Current Items In Queue:{1}",
currentValue, intPort.ItemCount);
}
}
}
}
What greatly surprised me was that CCR could not keep up on a Core i7 box, with the queue size growing without bounds. In another test to measure the latency from the Post() to the Receive() under a load of ~100 Posts/sec., the latency between the first Post() and Receive() in each batch was around 1ms.
Is there something wrong with my mockup? If so, what is a better way of doing this using CCR?
Yes, I agree, this does indeed seem weird. Your code seems initially to perform smoothly, but after a few thousand items, processor usage rises to the point where performance is really lacklustre. This disturbs me and suggests a problem in the framework. After a play with your code, I can't really identify why this is the case. I'd suggest taking this problem to the Microsoft Robotics Forums and seeing if you can get George Chrysanthakopoulos (or one of the other CCR brains) to tell you what the problem is. I can however surmise that your code as it stands is terribly inefficient.
The way that you are dealing with "popping" items from the Port is very inefficient. Essentially the iterator is woken each time there is a message in the Port and it deals with only one message (despite the fact that there might be several hundred more in the Port), then hangs on the yield while control is passed back to the framework. At the point that the yielded receiver causes another "awakening" of the iterator, many many messages have filled the Port. Pulling a thread from the Dispatcher to deal with only a single item (when many have piled up in the meantime) is almost certainly not the best way to get good throughput.
I've modded your code such that after the yield, we check the Port to see if there are any further messages queued and deal with them too, thereby completely emptying the Port before we yield back to the framework. I've also refactored your code somewhat to use CcrServiceBase which simplifies the syntax of some of the tasks you are doing:
internal class Test:CcrServiceBase
{
private readonly Port<int> intPort = new Port<int>();
private Timer timer;
public Test() : base(new DispatcherQueue("DefaultDispatcherQueue",
new Dispatcher(0,
"dispatcher")))
{
}
public void StartTest() {
SpawnIterator(ProcessInts);
var counter = 0;
timer = new Timer(x =>
{
for (var i = 0; i < 1500; ++i)
intPort.Post(counter++);
}
,
null,
0,
1000);
}
public IEnumerator<ITask> ProcessInts()
{
while (true)
{
yield return intPort.Receive();
int currentValue = intPort;
ReportCurrent(currentValue);
while(intPort.Test(out currentValue))
{
ReportCurrent(currentValue);
}
}
}
private void ReportCurrent(int currentValue)
{
if (currentValue % 1000 == 0)
{
Console.WriteLine("{0}, Current Items In Queue:{1}",
currentValue,
intPort.ItemCount);
}
}
}
Alternatively, you could do away with the iterator completely, as it's not really well used in your example (although I'm not entirely sure what effect this has on the order of processing):
internal class Test : CcrServiceBase
{
private readonly Port<int> intPort = new Port<int>();
private Timer timer;
public Test() : base(new DispatcherQueue("DefaultDispatcherQueue",
new Dispatcher(0,
"dispatcher")))
{
}
public void StartTest()
{
Activate(
Arbiter.Receive(true,
intPort,
i =>
{
ReportCurrent(i);
int currentValue;
while (intPort.Test(out currentValue))
{
ReportCurrent(currentValue);
}
}));
var counter = 0;
timer = new Timer(x =>
{
for (var i = 0; i < 500000; ++i)
{
intPort.Post(counter++);
}
}
,
null,
0,
1000);
}
private void ReportCurrent(int currentValue)
{
if (currentValue % 1000000 == 0)
{
Console.WriteLine("{0}, Current Items In Queue:{1}",
currentValue,
intPort.ItemCount);
}
}
}
Both of these examples increase throughput by orders of magnitude. Hope this helps.
