locking resources or not on multithreaded server [closed] - c#

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions concerning problems with code you've written must describe the specific problem — and include valid code to reproduce it — in the question itself. See SSCCE.org for guidance.
Closed 9 years ago.
I'm creating a client-server structure where the server has a thread for every client.
That specific thread only sends and receives data. In the main thread of the server I want to read the input that the client thread received. But it's possible that that input is being modified by the client thread at the same time the main thread is reading. How would I prevent this? I have read about locks but have no idea how to implement them for this.
A second part of my question is: the client thread runs a loop that constantly reads from a NetworkStream, and thus blocks until it can read something. But can I call a function from my main thread (that function would send something through that NetworkStream) that the existing client thread (that is looping) must execute?
Sorry I can't give any code right now, but I think it's clear enough?

It sounds like a producer-consumer design might be a good fit for your problem. In general terms, the client threads will put any received data into a (thread safe) queue and not modify it after that - any new data that arrives will go to a new slot in the queue. The main thread can then wait for new data in any of the queues and process it once it arrives. The main thread could either check on all the queues periodically, or (better) receive some sort of notification when data is placed in a queue, so that it can sleep while nothing is happening and won't eat CPU time.
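For illustration, here is a minimal sketch of that design using a BlockingCollection<byte[]> (available from .NET 4); the simulated client thread and the byte[] payload are placeholders for your real network code:
using System;
using System.Collections.Concurrent;
using System.Threading;

class Program
{
    // Thread-safe queue shared between the client threads and the main thread.
    static BlockingCollection<byte[]> incoming = new BlockingCollection<byte[]>();

    static void Main()
    {
        // Simulated client thread: in your server, this is where the
        // NetworkStream reads would happen.
        new Thread(() =>
        {
            for (int i = 0; i < 3; i++)
                incoming.Add(new byte[] { (byte)i }); // publish received data
            incoming.CompleteAdding();                // client disconnected
        }).Start();

        // The main thread sleeps inside GetConsumingEnumerable while nothing
        // is happening, so it eats no CPU time between messages.
        foreach (byte[] data in incoming.GetConsumingEnumerable())
            Console.WriteLine("Received {0} byte(s)", data.Length);
    }
}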
Since you ask about locks: here is a basic lock-based implementation as an alternative to queues; perhaps it will help you understand the principle.
class IncomingClientData
{
    private List<byte> m_incomingData = new List<byte>();
    private readonly object m_lock = new object();

    public void Append(IEnumerable<byte> data)
    {
        lock (m_lock)
        {
            m_incomingData.AddRange(data);
        }
    }

    public List<byte> ReadAndClear()
    {
        lock (m_lock)
        {
            List<byte> result = m_incomingData;
            m_incomingData = new List<byte>();
            return result;
        }
    }
}
In this example, your client threads would call Append with the data that they have received, and the main thread could collect all the received data that arrived since the last check by calling ReadAndClear.
This is made thread-safe by locking all the code in both functions on m_lock, which is just a regular plain object - you can lock on any object in C#, but I believe this can be confusing and actually lead to subtle bugs if used carelessly, so I almost always use a dedicated object to lock on. Only one thread at a time can hold the lock on an object, so the code of those functions will only run in one thread at a time. For example, if your main thread calls ReadAndClear while the client thread is still busy appending data to the list, the main thread will wait until the client thread leaves the Append function.
It's not required to make a new class for this, but it can prevent accidents, because we can carefully control how the shared state is being accessed. For example, we know that it is safe to return the internal list in ReadAndClear() because there can be no other reference to the same list at that time.
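To illustrate the usage (hypothetical calling code, with placeholder data standing in for real network reads):
IncomingClientData incoming = new IncomingClientData();

// On the client thread, whenever bytes arrive from the NetworkStream:
byte[] receivedBytes = { 1, 2, 3 }; // placeholder for real received data
incoming.Append(receivedBytes);

// On the main thread, whenever it wants to process what has arrived so far:
List<byte> data = incoming.ReadAndClear();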
Now for your second question: Just plain calling a method won't ever cause the method to run on a different thread, no matter which class the method is in. Invoke is a special feature of the WinForms UI thread, you'd have to implement that functionality yourself if you want to Invoke something in your worker threads. Internally, Invoke works by placing the code you want to run into a queue of all things that are supposed to run on the UI thread, including e.g. UI events. The UI thread itself is basically a loop that always pulls the next piece of work from that queue, and then performs that work, then takes the next item from the queue and so on. That is also why you shouldn't do long work in an event handler - as long as the UI thread is busy running your code, it won't be able to process the next items in its queue, so you'll hold up all the other work items / events that occur.
If you want your client threads to run a certain function, you have to actually provide the code for that - e.g. have the client threads check some queue for commands from the main thread.
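As a rough sketch of that idea (the ClientWorker name, the Post method and the timeout-based read are my own inventions, not code from the question):
using System;
using System.Collections.Concurrent;

class ClientWorker
{
    // Commands that the main thread wants this client thread to execute.
    private readonly ConcurrentQueue<Action> m_commands = new ConcurrentQueue<Action>();
    private volatile bool m_running = true;

    // Called from the main thread; the delegate runs later, on the client thread.
    public void Post(Action command)
    {
        m_commands.Enqueue(command);
    }

    // Body of the client thread's loop.
    public void Run()
    {
        while (m_running)
        {
            // ... read from the NetworkStream using a ReadTimeout instead of
            // blocking forever, so that this loop comes around regularly ...

            Action command;
            while (m_commands.TryDequeue(out command))
                command(); // e.g. a send that the main thread requested
        }
    }
}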

Related

Can Task.Delay cause thread switching? [duplicate]

This question already has answers here:
I thought await continued on the same thread as the caller, but it seems not to
(3 answers)
Closed 2 years ago.
I have a long-running process that sends data to another machine. But this data is received in chunks (like a set of 100 packets, then a delay of at least 10 seconds).
I started the sending function on a separate thread using
Task.Run(() => { SendPackets(); });
The packets to be sent are queued in a Queue<Packet> object by some other function.
In SendPackets() I am using a while loop to retrieve and send (asynchronously) all items available in the queue.
void SendPackets()
{
    while (isRunning)
    {
        while (thePacketQueue.Count > 0)
        {
            Packet pkt = thePacketQueue.Dequeue();
            BeginSend(pkt, callback); // Actual code to send data over asynchronously
        }

        Task.Delay(1000); // <---- My question lies here
    }
}
All the locks are in place!
My question is, when I use Task.Delay, is it possible then the next cycle may be executed by a thread different from the current one?
Is there any other approach where, instead of specifying a delay of 1 second, I can use something like a ManualResetEvent, and what would the respective code be? (I have no idea how to use ManualResetEvent etc.)
Also, I am new to async/await and TPL, so kindly bear with my ignorance.
TIA.
My question is, when I use Task.Delay, is it possible then the next cycle may be executed by a thread different from the current one?
Not with the code you've got, because that code is buggy. It won't actually delay between cycles at all. It creates a task that will complete in a second, but then ignores that task. Your code should be:
await Task.Delay(1000);
or potentially:
await Task.Delay(1000).ConfigureAwait(false);
With the second piece of code, that can absolutely run each cycle on a different thread. With the first piece of code, it will depend on the synchronization context. If you were running in a synchronization context with thread affinity (e.g. you're calling this from the UI thread of a WPF or WinForms app) then the async method will continue on the same thread after the delay completes. If you're running without a synchronization context, or in a synchronization context that doesn't just use one thread, then again it could run each cycle in a different thread.
As you're starting this code with Task.Run, you won't have a synchronization context - but it's worth being aware that the same piece of code could behave differently when run in a different way.
As a side note, it's not clear what's adding items to thePacketQueue, but unless that's a concurrent collection (e.g. ConcurrentQueue), you may well have a problem there too.
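Putting both fixes together, a sketch of how the loop could look (Packet and BeginSend are stand-ins for the types and methods in the question):
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Packet { } // stand-in for the question's Packet type

class Sender
{
    readonly ConcurrentQueue<Packet> thePacketQueue = new ConcurrentQueue<Packet>();
    volatile bool isRunning = true;

    void BeginSend(Packet pkt) { /* stand-in for the real asynchronous send */ }

    public async Task SendPacketsAsync()
    {
        while (isRunning)
        {
            Packet pkt;
            while (thePacketQueue.TryDequeue(out pkt)) // no manual lock needed
                BeginSend(pkt);

            // Awaited, so the loop genuinely pauses; ConfigureAwait(false)
            // means each cycle may continue on any thread pool thread.
            await Task.Delay(1000).ConfigureAwait(false);
        }
    }
}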

How can two threads access a common array of buffers with minimal blocking? (C#)

I'm working on an image processing application where I have two threads on top of my main thread:
1 - CameraThread that captures images from the webcam and writes them into a buffer
2 - ImageProcessingThread that takes the latest image from that buffer for filtering.
The reason this is multithreaded is that speed is critical: I need CameraThread to keep grabbing pictures and making the latest capture ready for ImageProcessingThread to pick up while it's still processing the previous image.
My problem is about finding a fast and thread-safe way to access that common buffer and I've figured that, ideally, it should be a triple buffer (image[3]) so that if ImageProcessingThread is slow, then CameraThread can keep on writing on the two other images and vice versa.
What sort of locking mechanism would be the most appropriate for this to be thread-safe ?
I looked at the lock statement but it seems like it would make a thread block-waiting for another one to be finished and that would be against the point of triple buffering.
Thanks in advance for any idea or advice.
J.
This could be a textbook example of the Producer-Consumer Pattern.
If you're going to be working in .NET 4, you can use the IProducerConsumerCollection<T> and associated concrete classes to provide your functionality.
If not, have a read of this article for more information on the pattern, and this question for guidance in writing your own thread-safe implementation of a blocking First-In First-Out structure.
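For instance, a minimal sketch along those lines, where BlockingCollection<T> (which wraps a ConcurrentQueue<T> by default) plays the role of the buffer and Frame stands in for your image type:
using System;
using System.Collections.Concurrent;
using System.Threading;

class Frame { } // stand-in for an image

class Program
{
    // At most 3 frames buffered, matching the triple-buffer idea.
    static BlockingCollection<Frame> buffer = new BlockingCollection<Frame>(3);

    static void Main()
    {
        new Thread(() => // CameraThread
        {
            for (int i = 0; i < 10; i++)
                buffer.Add(new Frame()); // blocks only if all 3 slots are full
            buffer.CompleteAdding();
        }).Start();

        // ImageProcessingThread (here simply the main thread)
        foreach (Frame frame in buffer.GetConsumingEnumerable())
        {
            // filter the frame; the camera keeps capturing meanwhile
        }
    }
}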
Personally I think you might want to look at a different approach for this. Rather than writing to a centralized "buffer" whose access you have to manage, could you switch to an approach that uses events? Once the camera thread has "received" an image, it could raise an event that passes the image data off to the code that actually handles the image processing.
An alternative would be to use a Queue, which is a FIFO (First In, First Out) data structure. It is not thread-safe, so you would have to lock it, but the time spent holding the lock just to put an item in the queue would be minimal. There are also thread-safe queue classes out there that you could use.
Using your approach, there are a number of issues you would have to contend with: blocking while accessing the array, deciding what happens once you run out of available array slots, and so on.
Given the amount of processing needed for a picture, I don't think that a simple locking scheme would be your bottleneck. Measure before you start wasting time on the wrong problem.
Be very careful with 'lock-free' solutions, they are always more complicated than they look.
And you need a Queue, not an array.
If you can use .NET 4, I would use the ConcurrentQueue<T>.
You will have to run some performance metrics, but take a look at lock free queues.
See this question and its associated answers, for example.
In your particular application, though, your processor is only really interested in the most recent image. In effect this means you only really want to maintain a queue of two items (the new item and the previous item) so that there is no contention between reading and writing. You could, for example, have your producer remove old entries from the queue once a new one is written.
Edit: having said all this, I think there is a lot of merit in what is said in Mitchel Sellers's answer.
I would look at using a ReaderWriterLockSlim, which allows fast reads and upgradeable locks for writes.
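As a sketch of what that could look like for a "latest image" slot (an assumed shape, not code from the question):
using System.Threading;

class LatestImage
{
    private readonly ReaderWriterLockSlim m_lock = new ReaderWriterLockSlim();
    private byte[] m_latest;

    // CameraThread publishes each new capture here.
    public void Write(byte[] image)
    {
        m_lock.EnterWriteLock();
        try { m_latest = image; }
        finally { m_lock.ExitWriteLock(); }
    }

    // ImageProcessingThread grabs whatever is newest.
    public byte[] Read()
    {
        m_lock.EnterReadLock();
        try { return m_latest; }
        finally { m_lock.ExitReadLock(); }
    }
}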
This isn't a direct answer to your question, but it may be better to rethink your concurrency model. Locks are a terrible way to synchronize anything: too low-level, error-prone, etc. Try to rethink your problem in terms of message-passing concurrency:
The idea here is that each thread is its own tightly contained message loop, and each thread has a "mailbox" for sending and receiving messages -- we're going to use the term MailboxThread to distinguish these types of objects from plain jane threads.
So instead of having two threads accessing the same buffer, you instead have two MailboxThreads sending and receiving messages between one another (pseudocode):
let filter =
    while true
        let image = getNextMsg() // blocks until the next message is received
        process image

let camera(filterMailbox) =
    while true
        let image = takePicture()
        filterMailbox.SendMsg(image) // sends a message asynchronously

let filterMailbox = Mailbox.Start(filter)
let cameraMailbox = Mailbox.Start(camera(filterMailbox))
Now your processing threads don't know or care about any buffers at all. They just wait for messages and process them whenever they're available. If you send too many messages for the filterMailbox to handle, those messages get enqueued to be processed later.
The hard part here is actually implementing your MailboxThread object. Although it requires some creativity to get right, it's wholly possible to implement these types of objects so that they only hold a thread open while processing a message, and release the executing thread back to the thread pool when there are no messages left to handle (this implementation allows you to terminate your application without dangling threads).
The advantage here is how threads send and receive messages without worrying about locking or synchronization. Behind the scenes, you need to lock your message queue when enqueuing or dequeuing a message, but that implementation detail is completely transparent to your client-side code.
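In C#, a bare-bones version of such a mailbox might look like this (a sketch only; a production version would add shutdown and error handling):
using System;
using System.Collections.Concurrent;
using System.Threading;

// A dedicated thread draining a blocking message queue.
class Mailbox<T>
{
    private readonly BlockingCollection<T> m_queue = new BlockingCollection<T>();

    public Mailbox(Action<T> handler)
    {
        Thread worker = new Thread(() =>
        {
            // Blocks while the queue is empty; wakes when a message arrives.
            foreach (T message in m_queue.GetConsumingEnumerable())
                handler(message); // one message at a time, on this thread
        });
        worker.IsBackground = true;
        worker.Start();
    }

    // Asynchronous from the sender's point of view: enqueue and return.
    public void Send(T message)
    {
        m_queue.Add(message);
    }
}
The camera side then becomes filterMailbox.Send(image), and the filter is just the handler passed to the constructor.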
Just an Idea.
Since we're talking about only two threads, we can make some assumptions.
Let's use your triple-buffer idea. Assuming there is only one writer and one reader thread, we can toss a "flag" back and forth in the form of an integer. Both threads will continuously spin while updating their buffers.
WARNING: This will only work for 1 reader thread
Pseudo Code
Shared Variables:
    int Status = 0; // 0 = ready to write; 1 = ready to read
    Buffer1 = new bytes[]
    Buffer2 = new bytes[]
    Buffer3 = new bytes[]
    BufferTmp = null

thread1
{
    while (true)
    {
        WriteData(Buffer1);
        if (Status == 0)
        {
            // publish the freshly written buffer into the hand-off slot
            BufferTmp = Buffer1;
            Buffer1 = Buffer2;
            Buffer2 = BufferTmp;
            Status = 1;
        }
    }
}

thread2
{
    while (true)
    {
        ReadData(Buffer3);
        if (Status == 1)
        {
            // take the hand-off slot; the writer's Buffer1 is never touched here
            BufferTmp = Buffer2;
            Buffer2 = Buffer3;
            Buffer3 = BufferTmp;
            Status = 0;
        }
    }
}
Just remember, your WriteData method shouldn't create new byte objects but update the current one; creating new objects is expensive.
Also, you may want a Thread.Sleep(1) in an ELSE branch to accompany the IF statements; otherwise, on a single-core CPU, a spinning thread will increase the latency before the other thread gets scheduled. E.g. the write thread may spin 2-3 times before the read thread gets scheduled, because the scheduler sees the write thread doing "work".

Is it possible to overload a thread using ISynchronizeInvoke.BeginInvoke()?

My problem is this:
I have two threads: my UI thread, and a worker thread. My worker thread is running in a separate class that gets instantiated by the form, which passes itself as an ISynchronizeInvoke to the worker class, which then uses Invoke on that interface to raise its events, which provide status updates to the UI for display. This works wonderfully.
I noticed that my background thread seemed to be running slowly though, so I changed the call to Invoke to BeginInvoke, thinking that "I'm just providing progress updates, it doesn't need to be exactly synchronous, no harm done" except that now I'm getting oddities with the progress update. My progress bar updates, but the label's text doesn't, and if I change to another window and try to change back, it acts like the UI thread is locked up, so I'm wondering if perhaps my progress calls (which happen very often) are overloading the UI thread so much that it never processes messages. Is this possible at all, or is there something else at work here?
You're definitely overloading the UI thread.
In your first sample, you were (behind the scenes) sending a message to the UI thread, waiting for it to be processed (that's the purpose of Invoke, which ultimately relies on SendMessage), and then sending another one. In the meantime, other messages were probably enqueued (WM_PAINT messages, for example) and processed.
In your second sample, by using BeginInvoke (which ultimately relies on PostMessage), you massively enqueued a lot of messages in the message queue, which the message pump must handle sequentially. And of course, while it's handling those thousands of messages, it cannot handle the OS messages (WM_PAINT, etc.), which makes your UI look "frozen".
You're probably providing too many status updates; try to lower the feedback frequency.
If you want to understand better how messages work in windows, this is the place to start.
A few thoughts:
try batching your updates; for example, there is no point updating for every iteration in a loop; depending on the speed, perhaps every 50 / 500. In the case of lists, you would buffer in a local list variable, take the list over via Invoke / BeginInvoke, and process the buffer on the UI thread
variable capture; if you are using BeginInvoke and anonymous methods, you could have problems... I'll add an example below
making the UI update efficient - especially if you are processing a list; some controls (especially list-based controls) have a pair of methods like BeginEdit / EndEdit, that stop the UI redrawing when you are making lots of updates; instead, it waits until the End* is called
capture problem... imagine (worker):
List<string> stuff = new List<string>();
for (int i = 0; i < 50000; i++)
{
    stuff.Add(i.ToString());
    if ((i % 100) == 0)
    {
        // update UI
        BeginInvoke((MethodInvoker)delegate
        {
            foreach (string s in stuff)
            {
                listBox.Items.Add(s);
            }
        });
    }
}
Did you notice that at some point both threads are talking to stuff? The UI thread can be iterating it while the worker thread (which has kept running past BeginInvoke) keeps adding. This can cause issues. Not usually performance issues (unless you are catching the exceptions and taking a long time to log them), but definitely issues. Options here would include:
using Invoke to run the update synchronously
create a new buffer per update, so that the two threads never share the same list instance (you'd need to look very carefully at the variable scoping to make sure, though)
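A sketch of that second option, handing each batch off in its own list so the two threads never touch the same instance (same hypothetical worker as above):
List<string> buffer = new List<string>();
for (int i = 0; i < 50000; i++)
{
    buffer.Add(i.ToString());
    if ((i % 100) == 0)
    {
        List<string> batch = buffer; // this instance now belongs to the UI thread
        buffer = new List<string>(); // the worker carries on with a fresh list
        BeginInvoke((MethodInvoker)delegate
        {
            foreach (string s in batch)
            {
                listBox.Items.Add(s);
            }
        });
    }
}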

How can I check if a function is being called on a particular Thread?

If I have Thread A, which is the main application thread, and a secondary Thread B, how can I check whether a function is being called within Thread B?
Basically I am trying to implement the following code snippet:
public void ensureRunningOnCorrectThread()
{
    if ( /* function is being called within ThreadB */ )
    {
        performIO();
    }
    else
    {
        // call performIO so that it is called (invoked?) on ThreadB
    }
}
Is there a way to perform this functionality within C# or is there a better way of looking at the problem?
EDIT 1
I have noticed the following within the MSDN documentation, although I'm a bit dubious as to whether or not it's a good thing to be doing!:
// if function is being called within ThreadB
if (System.Threading.Thread.CurrentThread.Equals(ThreadB))
{
}
EDIT 2
I realise that I'm looking at this problem in the wrong way (thanks to the answers below, which helped me see this): all I care about is that the IO does not happen on ThreadA. This means that it could happen on ThreadB or indeed any other thread, e.g. a BackgroundWorker. I have decided that creating a new BackgroundWorker within the else portion of the above if statement ensures that the IO is performed in a non-blocking fashion. I'm not entirely sure that this is the best solution to my problem, but it appears to work!
Here's one way to do it:
if (System.Threading.Thread.CurrentThread.ManagedThreadId == ThreadB.ManagedThreadId)
...
I don't know enough about .NET's Thread class implementation to know if the comparison above is equivalent to Equals() or not, but in absence of this knowledge, comparing the IDs is a safe bet.
There may be a better (where better = easier, faster, etc.) way to accomplish what you're trying to do, depending on a few things like:
what kind of app (ASP.NET, WinForms, console, etc.) are you building?
why do you want to enforce I/O on only one thread?
what kind of I/O is this? (e.g. writes to one file? network I/O constrained to one socket? etc.)
what are your performance constraints relative to cost of locking, number of concurrent worker threads, etc?
whether the "else" clause in your code needs to be blocking, fire-and-forget, or something more sophisticated
how you want to deal with timeouts, deadlocks, etc.
Adding this info to your question would be helpful, although if yours is a WinForms app and you're talking about user-facing GUI I/O, you can skip the other questions since the scenario is obvious.
Keep in mind that // call performIO so that it is called (invoked?) on ThreadB implementation will vary depending on whether this is WinForms, ASP.NET, console, etc.
If WinForms, check out this CodeProject post for a cool way to handle it. Also see MSDN for how this is usually handled using InvokeRequired.
If Console or generalized server app (no GUI), you'll need to figure out how to let the main thread know that it has work waiting-- and you may want to consider an alternate implementation which has a I/O worker thread or thread pool which just sits around executing queued I/O requests that you queue to it. Or you might want to consider synchronizing your I/O requests (easier) instead of marshalling calls over to one thread (harder).
If ASP.NET, you're probably implementing this in the wrong way. It's usually more effective to use ASP.NET async pages and/or (per above) to synchronize access to your I/O using lock{} or another synchronization method.
What you are trying to do is the opposite of what the InvokeRequired property of a windows form control does, so if it's a window form application, you could just use the property of your main form:
if (InvokeRequired)
{
    // running in a separate thread
}
else
{
    // running in the main thread, so needs to send the task to the worker thread
}
The else part of your snippet, invoking performIO on ThreadB, is only going to work when ThreadB is the main thread running a message loop.
So maybe you should rethink what you are doing here, it is not a normal construction.
Does your secondary thread do anything else besides the performIO() function? If not, then an easy way to do this is to use a System.Threading.ManualResetEvent. Have the secondary thread sit in a while loop waiting for the event to be set. When the event is signaled, the secondary thread can perform the I/O processing. To signal the event, have the main thread call the Set() method of the event object.
using System.Threading;

static void Main(string[] args)
{
    ManualResetEvent processEvent = new ManualResetEvent(false);

    Thread thread = new Thread(delegate()
    {
        while (processEvent.WaitOne())
        {
            performIO();
            processEvent.Reset(); // reset for next pass...
        }
    });
    thread.Name = "I/O Processing Thread"; // name the thread
    thread.Start();

    // Do GUI stuff...

    // When it's time to perform the IO processing, signal the event.
    processEvent.Set();
}
Also, as an aside, get into the habit of naming any System.Threading.Thread objects as they are created. When you create the secondary thread, set the thread name via the Name property. This will help you when looking at the Threads window in Debug sessions, and it also allows you to print the thread name to the console or the Output window if the thread identity is ever in doubt.

C# thread pool limiting threads

Alright...I've given the site a fair search and have read over many posts about this topic. I found this question: Code for a simple thread pool in C# especially helpful.
However, as it always seems, what I need varies slightly.
I have looked over the MSDN example and adapted it to my needs somewhat. The example I refer to is here: http://msdn.microsoft.com/en-us/library/3dasc8as(VS.80,printer).aspx
My issue is this. I have a fairly simple set of code that loads a web page via the HttpWebRequest and WebResponse classes and reads the results via a Stream. I fire off this method in a thread as it will need to be executed many times. The method itself is pretty short, but the number of times it needs to be fired (with varied data for each time) varies. It can be anywhere from 1 to 200.
Everything I've read seems to indicate the ThreadPool class being the prime candidate. Here is where things get tricky. I might need to fire off this thing say 100 times, but I can only have 3 threads at most running (for this particular task).
I've tried setting the MaxThreads on the ThreadPool via:
ThreadPool.SetMaxThreads(3, 3);
I'm not entirely convinced this approach is working. Furthermore, I don't want to clobber other web sites or programs running on the system this will be running on. So, by limiting the # of threads on the ThreadPool, can I be certain that this pertains to my code and my threads only?
The MSDN example uses the event-driven approach and calls WaitHandle.WaitAll(doneEvents); which is how I'm doing this.
So the heart of my question is, how does one ensure or specify a maximum number of threads that can be run for their code, but have the code keep running more threads as the previous ones finish up until some arbitrary point? Am I tackling this the right way?
Sincerely,
Jason
Okay, I've added a semaphore approach and completely removed the ThreadPool code. It seems simple enough. I got my info from: http://www.albahari.com/threading/part2.aspx
It's this example that showed me how:
[text below here is a copy/paste from the site]
A Semaphore with a capacity of one is similar to a Mutex or lock, except that the Semaphore has no "owner" – it's thread-agnostic. Any thread can call Release on a Semaphore, while with Mutex and lock, only the thread that obtained the resource can release it.
In this following example, ten threads execute a loop with a Sleep statement in the middle. A Semaphore ensures that not more than three threads can execute that Sleep statement at once:
class SemaphoreTest
{
    static Semaphore s = new Semaphore(3, 3); // Available=3; Capacity=3

    static void Main()
    {
        for (int i = 0; i < 10; i++)
            new Thread(Go).Start();
    }

    static void Go()
    {
        while (true)
        {
            s.WaitOne();
            Thread.Sleep(100); // Only 3 threads can get here at once
            s.Release();
        }
    }
}
Note: if you are limiting this to "3" just so you don't overwhelm the machine running your app, I'd make sure this is a problem first. The threadpool is supposed to manage this for you. On the other hand, if you don't want to overwhelm some other resource, then read on!
You can't manage the size of the threadpool (or really much of anything about it).
In this case, I'd use a semaphore to manage access to your resource. In your case, your resource is running the web scrape, or calculating some report, etc.
To do this, in your static class, create a semaphore object:
System.Threading.Semaphore S = new System.Threading.Semaphore(3, 3);
Then, in each thread, you do this:
try
{
    // wait your turn (decrement)
    S.WaitOne();

    // do your thing
}
finally
{
    // release so others can go (increment)
    S.Release();
}
Note that every thread must use the single shared S created above; creating a new Semaphore inside each thread would give every thread its own counter and defeat the limit.
Each thread will block on the S.WaitOne() until it is given the signal to proceed. Once S has been decremented 3 times, all threads will block until one of them increments the counter.
This solution isn't perfect.
If you want something a little cleaner, and more efficient, I'd recommend going with a BlockingQueue approach wherein you enqueue the work you want performed into a global Blocking Queue object.
Meanwhile, you have three threads (which you created yourself, not from the threadpool) popping work out of the queue to perform. This isn't that tricky to set up and is very fast and simple.
Examples:
Best threading queue example / best practice
Best method to get objects from a BlockingQueue in a concurrent program?
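To make that concrete, here is a minimal sketch using BlockingCollection<T> (from .NET 4; on earlier frameworks one of the hand-rolled queues from the links above would take its place, and Fetch stands in for the HttpWebRequest code):
using System;
using System.Collections.Concurrent;
using System.Threading;

class Program
{
    static BlockingCollection<string> work = new BlockingCollection<string>();

    static void Main()
    {
        // Exactly three workers, created once; the cap is structural
        // and has no effect on the process-wide ThreadPool.
        for (int i = 0; i < 3; i++)
        {
            Thread t = new Thread(() =>
            {
                foreach (string url in work.GetConsumingEnumerable())
                    Fetch(url); // at most 3 requests in flight at a time
            });
            t.Name = "Fetcher " + i;
            t.Start();
        }

        for (int i = 0; i < 100; i++)
            work.Add("http://example.com/page/" + i); // hypothetical URLs

        work.CompleteAdding(); // workers exit once the queue drains
    }

    static void Fetch(string url)
    {
        // stand-in for the HttpWebRequest/WebResponse code in the question
    }
}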
It's a static class like any other, which means that anything you do with it affects every other thread in the current process. It doesn't affect other processes.
I consider this one of the larger design flaws in .NET, however. Who came up with the brilliant idea of making the thread pool static? As your example shows, we often want a thread pool dedicated to our task, without having it interfere with unrelated tasks elsewhere in the system.
