I have a serial connection to a device that I want to periodically monitor at a frequency of 1 to 1/10 Hz (not decided yet).
I would like the communication to be non-blocking, and thus I have decided to put the communication in a thread of some kind and let the main application receive data via events fired from the thread.
I was thinking about making a thread at application start that sends and receives data continuously. The thread empties a queue of commands to send to the device, then listens for the response, firing an event when a response is complete.
The thread is put to sleep when no commands are in the queue, and woken when commands enter the queue.
Is this a good way of doing it?
I was thinking about maybe using some of the established features of the framework, like BackgroundWorker or Task, since these might have advantages over what I'm doing.
Maybe there are other and better ways of accomplishing this?
That isn't frequent enough to justify burning up an expensive resource like a Thread. Use a System.Timers.Timer or System.Threading.Timer (better) instead. Write the device query command in the callback. Use the SerialPort.DataReceived event to receive the response and fire the event. Now everything runs on cheap threadpool threads.
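A minimal sketch of that arrangement might look like the following (the class name, the 9600 baud setting, the "READ?" query string, and the newline-terminated response are all assumptions about the device, not part of any real protocol):

using System;
using System.IO.Ports;
using System.Threading;

class DevicePoller : IDisposable
{
    private readonly SerialPort _port;
    private readonly Timer _timer;   // System.Threading.Timer, callbacks run on the threadpool

    public event Action<string> ResponseReceived;

    public DevicePoller(string portName)
    {
        _port = new SerialPort(portName, 9600);   // settings depend on your device
        _port.DataReceived += OnDataReceived;     // raised on a threadpool thread
        _port.Open();

        // Query every 5 seconds; adjust for your 1 Hz .. 1/10 Hz rate.
        _timer = new Timer(_ => _port.WriteLine("READ?"), null,
                           TimeSpan.Zero, TimeSpan.FromSeconds(5));
    }

    private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        // Assumes the device terminates each response with SerialPort.NewLine.
        string response = _port.ReadLine();
        ResponseReceived?.Invoke(response);       // still on a threadpool thread
    }

    public void Dispose()
    {
        _timer.Dispose();
        _port.Close();
    }
}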
I found http://social.msdn.microsoft.com/forums/en-US/netfxbcl/thread/e36193cd-a708-42b3-86b7-adff82b19e5e/ which explains a bit more of the architecture of SerialPort.
Related
My application is a communications server that receives TCP messages from a web server and then re-broadcasts the message to a number of iPads. It's a Windows Forms application in C#. Program.cs creates an instance of the primary form and then that form creates four threads that do the communications work. There is a thread that listens for messages from the web server, a thread that processes the incoming messages into the data that needs to be transmitted, and a thread that handles sending the outbound messages. The fourth thread does database cleanup and spends 99% of its time sleeping.
The problem I'm seeing is that the GUI locks up when a load is placed on the system. One incoming message may represent 50 or 100 outgoing messages. While testing, I'm restricting the system to only send 5 messages out at a time, so it requires a longer transmission time. The sending process is using async callbacks, but even if it wasn't, I can't understand why load on that thread could be stalling the GUI thread.
I've removed much of the cross-thread communication from the threads to the GUI for status updates. The pattern for communication to the GUI is:
public void StatusOutput(string myString)
{
    if (this.lbStatus.InvokeRequired)
    {
        // Not on the GUI thread yet: marshal the call over.
        this.lbStatus.BeginInvoke(new DebugOutputInvoker(StatusOutput), myString);
    }
    else
    {
        lbStatus.Items.Add(myString);

        // Keep the list at no more than 500 entries.
        while (lbStatus.Items.Count >= 501)
        {
            lbStatus.Items.RemoveAt(0);
        }
        lbStatus.SelectedIndex = lbStatus.Items.Count - 1;
    }
} // StatusOutput() ...
Can anyone give me any advice on how to pursue this? I thought threads were completely isolated from the GUI and couldn't load it down.
Thanks!
As an update, I removed all thread-to-GUI communication and the GUI stopped locking up, so this proved it wasn't a GUI thread issue. I worked through the threads until I found that it's the TCPSender thread, which has a lot of async callback functions, potentially hundreds of them. Whenever this thread gets busy with its async calls, the GUI locks up. I suspect it has to do with callbacks happening while the thread has one of the GUI's methods in process.
I've solved the issue by making a new thread that just collects data from the operational threads and then updates the user interface. General status messages that were transferred to the status display on the GUI thread are now queued and pushed to the GUI through this new thread.
The GUI remains responsive and actually seems to be able to display more data now.
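For anyone hitting the same wall, a rough sketch of that arrangement, assuming a ListBox named lbStatus as in the code above (the class and method names here are illustrative, and lbStatus would normally come from the designer):

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Windows.Forms;

class StatusForm : Form
{
    // Worker threads call statusQueue.Add(...) instead of touching the GUI directly.
    private readonly BlockingCollection<string> statusQueue = new BlockingCollection<string>();

    private ListBox lbStatus;   // created in the designer in the real form

    private void StartStatusPump()
    {
        var pump = new Thread(() =>
        {
            foreach (string message in statusQueue.GetConsumingEnumerable())
            {
                // One marshalled call per message, all issued from this single thread.
                BeginInvoke(new Action<string>(AppendStatus), message);
            }
        });
        pump.IsBackground = true;
        pump.Start();
    }

    private void AppendStatus(string message)
    {
        lbStatus.Items.Add(message);
        while (lbStatus.Items.Count >= 501)
            lbStatus.Items.RemoveAt(0);
        lbStatus.SelectedIndex = lbStatus.Items.Count - 1;
    }
}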
I need some guidance on a project we are developing. When triggered, my program needs to contact 1,000 devices by TCP and exchange about 200 bytes of information. All the clients are wireless on a private network. The majority of the time the program will be sitting idle, but then needs to send these messages as quickly as possible. I have come up with two possible methods:
Method 1
Use thread pooling to establish a number of worker threads and have these threads process their way through the 1,000 conversations. One thread handles one conversation until completion. The number of threads in the thread pool would then be tuned for best use of resources.
Method 2
A number of threads would be used to handle multiple conversations per thread. For example, a thread would open 10 socket connections, start the conversations, and then use asynchronous methods to wait for responses. As a conversation completes, a new device would be contacted.
Method 2 looks like it would be more effective in that operations wouldn't have to wait while the server device responded. It would also save the overhead of starting and stopping all those threads.
Am I headed in the right direction here? What am I missing or not considering?
There is a well-established way to deal with this problem. Simply use async IO. There is no need to maintain any threads at all. Async IO uses no threads while the IO is in progress.
Thanks to await, doing this is quite easy.
The select/poll model is obsolete in .NET.
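As a rough sketch of what that can look like with await (the class name, the SemaphoreSlim limit of 50, and the 256-byte read are placeholder choices, not part of any particular API):

using System.Collections.Generic;
using System.Linq;
using System.Net.Sockets;
using System.Threading;
using System.Threading.Tasks;

static class DeviceExchange
{
    // Caps how many conversations run at once; tune for the wireless network.
    private static readonly SemaphoreSlim throttle = new SemaphoreSlim(50);

    public static Task ContactAllAsync(IEnumerable<string> hosts, int port, byte[] request)
    {
        return Task.WhenAll(hosts.Select(h => ContactOneAsync(h, port, request)));
    }

    private static async Task ContactOneAsync(string host, int port, byte[] request)
    {
        await throttle.WaitAsync();
        try
        {
            using (var client = new TcpClient())
            {
                await client.ConnectAsync(host, port);
                NetworkStream stream = client.GetStream();
                await stream.WriteAsync(request, 0, request.Length);

                var response = new byte[256];   // the question mentions ~200 bytes per exchange
                int read = await stream.ReadAsync(response, 0, response.Length);
                // ... handle the 'read' response bytes here ...
            }
        }
        finally
        {
            throttle.Release();
        }
    }
}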
I have read dozens of articles about threading in C# and Application.DoEvents(), but I still can't use them properly to get my task done:
I have a controller connected to my COM port. This controller works on command (I send a command and need to wait a few ms to get a response from it). Assume the response is data that I want to plot at a regular interval using a loop:
start my loop.
send a command to the controller via serialPort.
wait for the response (say 20 ms).
obtain the data.
repeat this loop every 100 ms or so.
This simply doesn't want to work! I tried to communicate with the controller on another thread, but it seems that it can't access the serialPort, which belongs to the main thread (roughly speaking).
Any help is appreciated.
Application.DoEvents is, for all it does, nothing more than a nested call to the Windows (low-level) message loop on the same thread, which can easily cause recursion if you call it in an event handler. You might consider creating your serial port object on the worker thread and communicating through threading classes (i.e. the WaitHandles and similar), or calling back to your UI thread using BeginInvoke and EndInvoke on the UI object.
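A rough sketch of that second suggestion, assumed to live inside the form with System.IO.Ports and System.Threading imported (the port name, the "READ?" command, and the PlotData method are placeholders):

// Worker thread owns the SerialPort; only the result crosses back to the UI thread.
private volatile bool keepRunning = true;

private void StartPolling()
{
    var worker = new Thread(PollLoop) { IsBackground = true };
    worker.Start();
}

private void PollLoop()
{
    using (var port = new SerialPort("COM1", 9600))   // port name and settings are placeholders
    {
        port.Open();
        while (keepRunning)
        {
            port.WriteLine("READ?");                  // hypothetical query command
            Thread.Sleep(20);                         // give the controller ~20 ms to answer
            string data = port.ReadExisting();

            // Marshal the plot update back to the UI thread.
            BeginInvoke(new Action<string>(PlotData), data);

            Thread.Sleep(100);                        // ~100 ms between samples
        }
    }
}

private void PlotData(string data) { /* update the chart here */ }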
If you catch the SerialPort.DataReceived event and then use either SerialPort.ReadLine or SerialPort.Read(byte[], int, int), those methods will be executed on a different thread. I prefer to use a mutex to control access to the buffer of bytes as a shared resource. Also, have you ever communicated with your device successfully? If not, in addition to the port settings, check the SerialPort.NewLine property and the SerialPort.Handshake property. These settings vary depending on the device you are trying to communicate with.
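A small sketch of that pattern (the class and field names are illustrative; a plain lock is used here in place of a Mutex since the buffer never leaves the process):

using System.Collections.Generic;
using System.IO.Ports;

class SerialReader
{
    private readonly SerialPort _port;
    private readonly object _bufferLock = new object();   // guards _buffer across threads
    private readonly List<byte> _buffer = new List<byte>();

    public SerialReader(string portName)
    {
        _port = new SerialPort(portName, 9600);            // match your device's settings
        _port.DataReceived += OnDataReceived;
        _port.Open();
    }

    // Runs on a threadpool thread, not the UI thread.
    private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        var chunk = new byte[_port.BytesToRead];
        int read = _port.Read(chunk, 0, chunk.Length);

        lock (_bufferLock)
        {
            for (int i = 0; i < read; i++)
                _buffer.Add(chunk[i]);
            // ... scan _buffer here for a complete message and hand it off ...
        }
    }
}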
Why do you use it to begin with?
Have a look at these pages; they might give you a direction:
My favorite: Is DoEvents Evil?
From msdn blog Keeping your UI Responsive and the Dangers of Application.DoEvents
From msdn forums Application does not return from call to DoEvents
Without code, it'll be hard to help. Even with code, it might be hard to help :)
I'm agreeing with gunr2171 on this :)
I've run into a problem while writing an async multi-server network app in c#. I have many jobs being taken care of by the thread pool and these include the writes to the network sockets. This ended up allowing for the case where more than one thread could write to the socket at the same time and discombobulate my outgoing messages. My idea for getting around this was to implement a queue system where whenever data got added to the queue, the socket would write it.
My problem is, I can't quite wrap my head around the architecture of something of this nature. I imagine having a queue object that fires an event whenever data gets added to the queue. The event then writes the data being held in the queue, but that won't work because if two threads come by and add to the queue simultaneously, even if the queue is made to be thread-safe, events will still be fired for both and I'll run into the same problem. So then maybe I need some way to hold off an event if another is in progress, but then how do I continue that event once the first finishes without simply blocking the thread on some mutex or something? This wouldn't be so hard if I wasn't trying to stay strict with my "block nothing" architecture, but this particular application requires that I allow the threadpool threads to keep doing their thing.
Any ideas?
While similar to Porges' answer, it differs a bit in implementation.
First, I usually don't queue the bytes to send, but objects, and serialize them in the sending thread; but I guess that's a matter of taste.
But the bigger difference is in the use of ConcurrentQueues (in addition to the BlockingCollection).
So I'd end up with code similar to
BlockingCollection<Packet> sendQueue = new BlockingCollection<Packet>(new ConcurrentQueue<Packet>());

while (true)
{
    var packet = sendQueue.Take(); // this blocks if there are no items in the queue
    SendPacket(packet);            // send your packet here
}
The key take-away here is that you have one thread which loops over this code, and all other threads can add to the queue in a thread-safe way (both BlockingCollection and ConcurrentQueue are thread-safe).
Have a look at Processing a queue of items asynchronously in C#, where I answered a similar question.
Sounds like you need one thread writing to the socket synchronously and a bunch of threads writing to a queue for that thread to process.
You can use a blocking collection (BlockingCollection<T>) to do the hard work:
// somewhere there is a queue:
BlockingCollection<byte[]> queue = new BlockingCollection<byte[]>();

// in the socket-writing thread, read from the queue and send the messages:
foreach (byte[] message in queue.GetConsumingEnumerable())
{
    // just an example... obviously you'd need error handling and stuff here
    socket.Send(message);
}

// in the other threads, just enqueue messages to be sent:
queue.Add(someMessage);
The BlockingCollection will handle all synchronization. You can also enforce a maximum queue length and other fun things.
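For example, the bounded constructor is enough to add back-pressure; Add will then block producers once the queue is full (the capacity of 1000 below is just an illustration):

// Producers block in Add() once 1000 messages are waiting to be sent.
BlockingCollection<byte[]> queue = new BlockingCollection<byte[]>(boundedCapacity: 1000);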
I don't know C#, but what I would do is have the event trigger the socket manager to start pulling from the queue and writing things out one at a time. If it is already running, the trigger won't do anything, and once there is nothing in the queue, it stops.
This solves the problem of two threads writing to the queue simultaneously because the second event would be a no-op.
You could have a thread-safe queue that all your worker threads write their results to. Then have another thread that polls the queue and sends results when it sees them waiting.
My C# class must be able to process a high volume of event messages received via a TCP stream-style socket connection. The volume of event messages received from the TCP server by the class's socket is completely variable. For instance, sometimes it will only receive one event message in a period of ten seconds, and at other times it will receive sixty event messages within a second.
I am using Socket.ReceiveAsync to receive messages. ReceiveAsync returns true if the receive operation is pending, or false if there is already data on the wire and the receive operation completed synchronously. If the operation is pending, the Socket will call my callback on an IO completion thread; otherwise I call my own callback on the current (IOC) thread. Furthermore, mixed in with the event messages I also receive responses to commands that were sent to this TCP server. Response messages are processed right away, individually, by firing off a threadpool worker.
However, I would like to queue event messages until I have "enough" (N) of them OR until there are no more on the wire...and then fire off a threadpool worker to process a batch of event messages. Also, I want all events to be processed sequentially so I only want one threadpool worker to be working on this at a time.
The processor of event messages needs only to copy the message buffer into an object, raise an event, and then release the message buffer back into the ring-buffer pool. So my question is: what do you think is the best strategy to accomplish this?
Do you need more info? Let me know. Thanks!!
I would not call 60 events per second high volume. At that low level of activity, any socket processing method at all will be fine. I've handled 5,000 events per second on a single thread using hardware that's much less capable than current machines, just using select.
I will say that if you are looking to scale, handing off messages individually between threads is going to be a disaster. You need to batch or your context switches will kill performance.
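One way to get that batching, sketched with a BlockingCollection (the class name and the batch size of 64 are arbitrary choices): the single long-running consumer blocks in GetConsumingEnumerable until at least one message arrives, then TryTake drains whatever else is already queued, which gives you both the "N or no more on the wire" trigger and strictly sequential processing.

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

class EventBatcher
{
    private readonly BlockingCollection<byte[]> _events = new BlockingCollection<byte[]>();
    private const int MaxBatch = 64;   // "enough" (N) -- tune as needed

    // Receive callbacks just hand the copied message off here.
    public void Enqueue(byte[] message) => _events.Add(message);

    // One long-running consumer keeps the processing strictly sequential.
    public void Start() => Task.Factory.StartNew(DrainLoop, TaskCreationOptions.LongRunning);

    private void DrainLoop()
    {
        var batch = new List<byte[]>(MaxBatch);
        foreach (byte[] first in _events.GetConsumingEnumerable())
        {
            batch.Add(first);

            // Keep pulling until the queue is empty or the batch is full.
            byte[] next;
            while (batch.Count < MaxBatch && _events.TryTake(out next))
                batch.Add(next);

            ProcessBatch(batch);   // raise events, return buffers to the ring-buffer pool, etc.
            batch.Clear();
        }
    }

    private void ProcessBatch(List<byte[]> batch) { /* ... */ }
}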