Are the c# ManualResetEvent and AutoResetEvent classes expensive to create or to maintain?
Do they consume some kind of limited Windows kernel resources, and if so, how limited is it?
E.g. if I have code that can create a new AutoResetEvent every 100ms (to be disposed shortly afterwards), should I worry about putting old AutoResetEvents in a pool and reusing them, or is that not a significant concern?
Since they are IDisposables, I presume they consume some sort of limited resource. Just how much do they consume, and at which point should I start worrying about using too many of them?
The fact that there is a ManualResetEventSlim, but no AutoResetEventSlim also worries me a bit.
ManualResetEvent uses a kernel wait handle, whereas ManualResetEventSlim first busy-spins and only falls back to a wait handle if the event isn't signaled quickly.
The best performance, in order, is: 1) standard locking (Monitor) 2) "slim" event classes 3) standard event classes
Given your use case, I would recommend using the "slim" classes, since you will only be waiting for a short amount of time. Also, if a "slim" class waits for too long, it behaves as a "non-slim" class anyway.
Note you cannot use the "slim" classes across processes.
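To make the "slim" recommendation concrete, here is a minimal sketch of signaling a short wait with ManualResetEventSlim (the names and the produced value are illustrative):

```csharp
using System;
using System.Threading;

class SlimEventDemo
{
    static readonly ManualResetEventSlim dataReady = new ManualResetEventSlim(false);
    static int result;

    static void Main()
    {
        var worker = new Thread(() =>
        {
            result = 42;        // produce some data
            dataReady.Set();    // signal the waiting thread
        });
        worker.Start();

        // Spins briefly, then falls back to a kernel wait if needed.
        dataReady.Wait();
        Console.WriteLine(result);  // safe to read after the signal
        worker.Join();
        dataReady.Dispose();
    }
}
```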
EDIT: This is why AutoResetEvent does not have a "slim" version - basically, the wait times for AutoResetEvent are typically longer than for ManualResetEvent, so busy spinning isn't appropriate.
EDIT: A wait handle inherits from MarshalByRefObject. Ultimately, the .NET runtime sets up a proxy (the TransparentProxy class) for remote access to your wait handle.
See here and here for more information.
Related
I'm working on a p2p sharing project that will open a lot of sockets on the same ports. Right now I'm using a single global UdpClient that calls Receive and SendAsync on different threads with different endpoints. There is no mutex at the moment, which is why I'm asking whether collisions are possible with this object, given that I'm not changing the information inside it.
So far I've only tried one example and it doesn't seem to collide, although I don't trust a single example enough for a full answer.
As far as I can see, UdpClient is not thread safe. Thread safe objects should specifically mention that in the documentation, and UdpClient does not seem to do that.
So without any type of synchronization your code is most likely not safe. Testing is not sufficient, since multithreading bugs are notoriously difficult to reproduce. When you write multithreaded code you need to ensure any shared data is synchronized appropriately.
Using it within a lock is probably safe. But that is not a guarantee; UI objects, for example, are only safe to use from the thread that created them, and unfortunately that is not always well documented. A problem with locks is that they block the thread, so locks are best used for very short and fast sections of code, not for long-running operations like IO. And the compiler won't even let you await while holding a lock.
Another pattern is to use one or more concurrent queues, i.e. threads put messages on the queue, and another thread reads from the queue and sends the messages. There are many possible designs, and the best design will really depend on the specific application. However, designing concurrent systems is difficult, and I would recommend trying to create modules that are fairly independent, so you can understand and test a single module, without having to understand the entire program.
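The queue-based pattern described above might look like the following sketch, using BlockingCollection<T> as the thread-safe queue so that only one task ever touches the underlying client (the message type and the stand-in send call are illustrative):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class SendQueueDemo
{
    static void Main()
    {
        var outgoing = new BlockingCollection<string>();

        // Single consumer: only this task would touch the UdpClient.
        var sender = Task.Run(() =>
        {
            foreach (var message in outgoing.GetConsumingEnumerable())
                Console.WriteLine($"sending: {message}");  // stand-in for udpClient.Send(...)
        });

        // Any number of producer threads can add concurrently.
        Parallel.For(0, 5, i => outgoing.Add($"packet {i}"));

        outgoing.CompleteAdding();  // no more messages; lets the consumer loop finish
        sender.Wait();
    }
}
```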
Memory is safe to read concurrently, but the same does not extend to objects, since many objects may mutate internal state even during reads. Some types, like List<T>, specifically mention that concurrent reads are safe. So make sure you check the documentation before using any object concurrently.
I am trying to leverage .NET 4.5 new threading capabilities to engineer a system to update a list of objects in memory.
I worked with multithreading years ago in Java and I fear that my skills have become stagnant, especially in the .NET region.
Basically, I have written a Feed abstract class and I inherit from that for each of my threads. The thread classes themselves are simple and run fine.
Each of these classes runs endlessly: it blocks until an event occurs, then updates the List.
So the first question is: how might I keep the parent thread alive while these threads run? Currently, in a dev console app, I've kept the process alive with a Console.Read().
Second, I would like to set up a repository of List objects that I can access from the parent thread. How would I update those Lists from the child thread and expose them to another system? Trying to avoid SQL. Shared memory?
I hope I've made this clear. I was hoping for something like this: Writing multithreaded methods using async/await in .Net 4.5
except, we need the adaptability of external classes and of course, we need to somehow expose those Lists.
You can run the "parent" thread in a while loop with a flag to stop it:
while (flag)
{
    // run the thread
}
You can expose a public List as a property of some class to hold your data. Remember to lock access in multithreaded code.
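A minimal sketch of such a repository class, guarding all access to the list with a lock (the class and member names are illustrative):

```csharp
using System.Collections.Generic;

// Illustrative repository: child (feed) threads write, the parent thread reads.
class FeedRepository
{
    private readonly object gate = new object();
    private readonly List<string> items = new List<string>();

    // Called from the child threads.
    public void Add(string item)
    {
        lock (gate) { items.Add(item); }
    }

    // Called from the parent thread; returns a snapshot so callers
    // never enumerate the list while another thread mutates it.
    public List<string> Snapshot()
    {
        lock (gate) { return new List<string>(items); }
    }
}
```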
If the 'parent' thread is supposed to wait during the processing it could simply await the call(s) to the async method(s).
If it has to wait for specific events you could use a signaling object such as a Barrier.
If the thread has to 'do' things while waiting you could check the availability of the result or the progress: How to do progress reporting using Async/Await
If you're using tasks, you can use Task.WaitAll to wait for the tasks to complete. By default, Tasks and async/await use your system's ThreadPool, so I'd avoid placing anything but relatively short-running tasks there.
If you're using System.Threading.Thread (I prefer using these for long running threads), check out the accepted answer here: C# Waiting for multiple threads to finish
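With dedicated threads, waiting for them all is just a series of Join calls; a minimal sketch:

```csharp
using System;
using System.Threading;

class JoinDemo
{
    static void Main()
    {
        var threads = new Thread[3];
        for (int i = 0; i < threads.Length; i++)
        {
            int id = i;  // capture a copy for the lambda
            threads[i] = new Thread(() => Console.WriteLine($"worker {id} done"))
            {
                IsBackground = true,
                Name = $"Worker-{id}"
            };
            threads[i].Start();
        }

        foreach (var t in threads)
            t.Join();  // blocks until each worker finishes
    }
}
```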
If you can fetch batches of data, you can expose services allowing access to the shared objects using self hosted Web API or something like NancyFX. WCF and remoting are also options if you prefer binary communication.
Shared memory, keep-alive TCP connections or UDP are options if you have many small transactions. Perhaps you could use ZeroMQ (it's not a traditional queue) with the C# binding they provide?
For concurrent access to the lists take a look at the classes in System.Collections.Concurrent before implementing your own locking.
I have a situation where I have a polling thread for a TCPClient (is that the best plan for a discrete TCP device?) which aggregates messages and occasionally responds to those messages by firing off events. The event producer really doesn't care much if the thread is blocked for a long time, but the consumer's design is such that I'd prefer to have it invoke the handlers on a single worker thread that I've got for handling a state machine.
The question then is this. How should I best manage the creation, configuration (thread name, is background, etc.) lifetime, and marshaling of calls for these threads using the Task library? I'm somewhat familiar with doing this explicitly using the Thread type, but when at all possible my company prefers to do what we can just through the use of Task.
Edit: I believe what I need here will be based around a SynchronizationContext on the consumer's type that ensures that tasks are scheduled on a single thread tied to that context.
The question then is this. How should I best manage the creation, configuration (thread name, is background, etc.) lifetime, and marshaling of calls for these threads using the Task library?
This sounds like a perfect use case for BlockingCollection<T>. This class is designed specifically for producer/consumer scenarios, and allows you to have any threads add to the collection (which acts like a thread safe queue), and one (or more) thread or task call blockingCollection.GetConsumingEnumerable() to "consume" the items.
You could consider using TPL DataFlow, where you set up an ActionBlock<T> that you push messages into from your TCP thread, and TPL DataFlow will take care of the rest by scaling out the processing of the actions as much as your hardware can handle. You can also control exactly how much processing of the actions happens by configuring the ActionBlock<T> with a MaxDegreeOfParallelism.
Since processing sometimes can't keep up with the flow of incoming data, you might want to consider "linking" a BufferBlock<T> in front of the ActionBlock<T> to ensure that the TCP processing thread doesn't get too far ahead of what you can actually process. This would have the same effect as using BlockingCollection<T> with a bounded capacity.
Finally, note that I'm linking to .NET 4.5 documentation because it's easiest, but TPL DataFlow is available for .NET 4.0 via a separate download (it has since shipped as the System.Threading.Tasks.Dataflow NuGet package).
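A sketch of the ActionBlock<T> approach (this assumes the System.Threading.Tasks.Dataflow package is referenced; the message type and handler are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class DataflowDemo
{
    static async Task Main()
    {
        // MaxDegreeOfParallelism = 1 keeps the state machine single-threaded;
        // BoundedCapacity provides back-pressure on the TCP thread.
        var processor = new ActionBlock<string>(
            msg => Console.WriteLine($"handling {msg}"),
            new ExecutionDataflowBlockOptions
            {
                MaxDegreeOfParallelism = 1,
                BoundedCapacity = 100
            });

        // The TCP polling thread would call SendAsync as messages arrive;
        // it awaits (rather than drops) when the block is full.
        for (int i = 0; i < 5; i++)
            await processor.SendAsync($"message {i}");

        processor.Complete();
        await processor.Completion;  // wait for all queued messages to drain
    }
}
```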
I have a program that involves receiving a packet from a network on one thread, then notifying other threads that the packet was received. My current approach uses Thread.Interrupt, which seems to be a bit slow when transferring huge amounts of data. Would it be faster to use "lock" to avoid using too many interrupts, or is a lock really just calling Interrupt() in its implementation?
I don't understand why you would use Thread.Interrupt rather than some more traditional signalling method to notify waiting threads that data is received. Thread.Interrupt requires the target thread to be in a wait state anyway, so why not just add an object that you can signal to the target thread's wait logic, and use that to kick it for new data?
lock is used to protect critical code or data from execution by other threads and is ill-suited as a mechanism for inter-thread active signalling.
Use WaitOne or WaitAll on suitable object(s) instead of either. System.Collections.Concurrent in .NET 4 also provides excellent means for queueing new data to a pool of target threads, among other possible approaches to your problem.
Both Thread.Interrupt and lock are not well suited for signaling other threads.
Thread.Interrupt is used to poke or unstick one of the blocking calls in the BCL.
lock is used to prevent simultaneous access to a resource or a block of code.
Signaling other threads is better accomplished with one of the following mechanisms:
ManualResetEvent (or ManualResetEventSlim)
AutoResetEvent
EventWaitHandle
Barrier
CountdownEvent
WaitHandle.WaitAny or WaitHandle.WaitAll
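For example, signaling a waiting thread with an AutoResetEvent instead of calling Thread.Interrupt (a minimal sketch; the thread names and work are illustrative):

```csharp
using System;
using System.Threading;

class SignalDemo
{
    static readonly AutoResetEvent packetArrived = new AutoResetEvent(false);

    static void Main()
    {
        var consumer = new Thread(() =>
        {
            packetArrived.WaitOne();  // blocks until signaled
            Console.WriteLine("packet processed");
        });
        consumer.Start();

        // The receiving thread signals instead of interrupting.
        packetArrived.Set();  // releases exactly one waiter, then auto-resets
        consumer.Join();
    }
}
```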
I usually use a standard queue and the lock keyword when reading or writing it. Alternatively, the Synchronized method on the queue removes the need to use lock. A System.Threading.Semaphore is the best tool for notifying worker threads when there is a new job to process.
Example of how to add to the queue:
lock (myQueue) { myQueue.Enqueue(workItem); }
mySemaphore.Release();
Example of how to process a work item:
mySemaphore.WaitOne();
object workItem;
lock (myQueue) { workItem = myQueue.Dequeue(); }
// process work item
Semaphore setup:
mySemaphore = new Semaphore(0, int.MaxValue);
If this is too slow and synchronization overhead still dominates your application, you might want to look at dispatching more than one work item at a time.
Depending on what you're doing, the new parallelization features in .NET 4.0 may also be very helpful for your application (if that's an option).
I'm writing an app that will need to make use of Timers, but potentially very many of them. How scalable is the System.Threading.Timer class? The documentation merely says it's "lightweight", but doesn't explain further. Do these timers get sucked into a single thread (or very small threadpool) that processes all the callbacks on behalf of a Timer, or does each Timer have its own thread?
I guess another way to rephrase the question is: How is System.Threading.Timer implemented?
I say this in response to a lot of questions: Don't forget that the (managed) source code to the framework is available. You can use this tool to get it all: http://www.codeplex.com/NetMassDownloader
Unfortunately, in this specific case, a lot of the implementation is in native code, so you don't get to look at it...
They definitely use pool threads rather than a thread-per-timer, though.
The standard way to implement a big collection of timers (which is how the kernel does it internally, and I would suspect is indirectly how your big collection of Timers ends up) is to maintain the list sorted by time-until-expiry - so the system only ever has to worry about checking the next timer which is going to expire, not the whole list.
Roughly, this gives O(log n) for starting a timer and O(1) for processing running timers.
Edit: Just been looking in Jeff Richter's book. He says (of Threading.Timer) that it uses a single thread for all Timer objects; this thread knows when the next timer (i.e. as above) is due and calls ThreadPool.QueueUserWorkItem for the callbacks as appropriate. This has the effect that if you don't finish servicing one callback on a timer before the next is due, your callback will reenter on another pool thread. So in summary I doubt you'll see a big problem with having lots of timers, but you might suffer thread pool exhaustion if large numbers of them are firing at the same time and/or their callbacks are slow-running.
I think you might want to rethink your design (that is, if you have control over the design yourself). If you're using so many timers that this is actually a concern for you, there's clearly some potential for consolidation there.
Here's a good article from MSDN Magazine from a few years ago that compares the three available timer classes, and gives some insight into their implementations:
http://msdn.microsoft.com/en-us/magazine/cc164015.aspx
Consolidate them. Create a timer service and ask that for the timers. It will only need to keep 1 active timer (for the next due call)...
For this to be an improvement over just creating lots of Threading.Timer objects, you have to assume that it isn't exactly what Threading.Timer is already doing internally. I'd be interested to know how you came to that conclusion (I haven't disassembled the native bits of the framework, so you could well be right).
^^ As DannySmurf says: consolidate them. Create a timer service and ask it for the timers. It will only need to keep one active timer (for the next due call) and a history of all the timer requests, and recalculate this on AddTimer() / RemoveTimer().
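A minimal sketch of such a timer service, multiplexing many logical timers onto one System.Threading.Timer that always tracks the earliest due request (the design and names here are illustrative, not how the framework implements it internally):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

// Illustrative timer service: one System.Threading.Timer fires for the
// earliest due request; requests are kept sorted by due time.
class TimerService : IDisposable
{
    private readonly object gate = new object();
    private readonly SortedDictionary<(DateTime due, long id), Action> requests =
        new SortedDictionary<(DateTime due, long id), Action>();
    private readonly Timer timer;
    private long nextId;

    public TimerService()
    {
        timer = new Timer(_ => Fire(), null, Timeout.Infinite, Timeout.Infinite);
    }

    public void AddTimer(TimeSpan delay, Action callback)
    {
        lock (gate)
        {
            requests.Add((DateTime.UtcNow + delay, nextId++), callback);
            Reschedule();
        }
    }

    private void Fire()
    {
        var due = new List<Action>();
        lock (gate)
        {
            while (requests.Count > 0)
            {
                var first = requests.First();  // earliest due request
                if (first.Key.due > DateTime.UtcNow) break;
                due.Add(first.Value);
                requests.Remove(first.Key);
            }
            Reschedule();
        }
        foreach (var cb in due) cb();  // run callbacks outside the lock
    }

    private void Reschedule()  // caller must hold the lock
    {
        if (requests.Count == 0)
        {
            timer.Change(Timeout.Infinite, Timeout.Infinite);
            return;
        }
        var wait = requests.First().Key.due - DateTime.UtcNow;
        timer.Change(wait < TimeSpan.Zero ? TimeSpan.Zero : wait,
                     Timeout.InfiniteTimeSpan);
    }

    public void Dispose() => timer.Dispose();
}
```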