I'm currently working on a program's subsystem that requires writing data to disk. I've implemented this as a multithreaded Producer-Consumer model that generates packets of data in one thread, puts them in a queue and writes them to disk in another thread.
The program has to use minimal CPU resources, so to avoid the write thread busy-waiting for a packet of data to arrive, I extended the ConcurrentQueue class to raise an event when a packet is added to the queue, so that the write function only runs when there is data available. Here's the generate function:
while (writeData)
{
for (int i = 0; i < packetLength; i++)
{
packet.data[i] = genData();
}
packet.num++;
// send packet to queue to be written
// this automatically triggers an Event
queue.Enqueue(packet);
}
My problem is that I haven't been able to figure out how to run the event handler (i.e. the write operation) on a separate thread. As far as I'm aware, event handlers in C# run on the same thread that raised the event.
I've thought about using the ThreadPool, but I'm unsure whether the packets in the queue would be written sequentially or in parallel. Writing packets in parallel would not work in this context, as they are all written to the same file and must be written in the order they arrive. This is what I have so far:
private void writeEventCatch(object sender, EventArgs e)
{
// catch event and queue a write operation
ThreadPool.QueueUserWorkItem(writeToDisk);
}
private void writeToDisk(Object stateInfo)
{
// this is a struct representing the packet
nonCompBinData_P packet;
while (queue.TryDequeue(out packet))
{
// write packet to stream
serialiser.Serialize(fileStream, packet);
packetsWritten++;
}
}
while (queue.TryDequeue(out packet)) will exit as soon as there are no packets left to dequeue. What you need to do is start a single, dedicated thread for the write operation that dequeues work items and writes the data to disk. Items will then be dequeued one at a time, in the order they arrived.
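A minimal sketch of that idea, assuming the packets are raw byte arrays; the class, field, and method names here are placeholders for whatever your real code uses, and the serialisation call is left as a comment:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class PacketWriter
{
    private readonly ConcurrentQueue<byte[]> queue = new ConcurrentQueue<byte[]>();
    private readonly AutoResetEvent dataAvailable = new AutoResetEvent(false);
    private readonly Thread writerThread;
    private volatile bool running = true;

    public PacketWriter()
    {
        writerThread = new Thread(WriteLoop) { IsBackground = true };
        writerThread.Start();
    }

    // Called by the producer thread; wakes the writer if it is sleeping.
    public void Enqueue(byte[] packet)
    {
        queue.Enqueue(packet);
        dataAvailable.Set();
    }

    private void WriteLoop()
    {
        while (running)
        {
            dataAvailable.WaitOne();   // uses no CPU while the queue is empty
            byte[] packet;
            while (queue.TryDequeue(out packet))
            {
                // serialiser.Serialize(fileStream, packet);  // placeholder: write in arrival order
            }
        }
    }

    public void Stop()
    {
        running = false;
        dataAvailable.Set();           // release the writer so it can observe 'running'
    }
}
```

Because there is exactly one consumer thread, packets come out strictly in arrival order. On .NET 4+, a BlockingCollection<T> consumed with GetConsumingEnumerable gives the same behaviour with less code.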
Related
I'm new to multi-threading and I'm trying to understand how AutoResetEvent works.
I'm implementing an optimization process where I send data between two different programs, using two threads: the main thread, where I modify and send information, and a receiving thread, always running in the background, waiting to catch the message with the results for the sent info. After a message is sent, I want the main thread to wait until the receiver thread receives the result back and signals the event that lets the main thread continue where it left off.
Here is a simplified version of my code:
// Thread 1 - MAIN
static AutoResetEvent waitHandle = new AutoResetEvent(false);
void foo()
{
for (int i = 0; i < 5; i++)
{
// ... Modify sendInfo data
// Send data to other software
SendData(sendInfo);
// Wait for other software to process data and send back the result
waitHandle.WaitOne();
// Print Result
print(receivedData);
// Reset AutoResetEvent
waitHandle.Reset();
}
}
/////////////////////////////
// Thread 2 - Receiver thread (running in the background)
private event EventHandler MessageReceived;
// ... Code for triggering MessageReceived event each time a message is received
private static void OnMessageReceived(object sender, EventArgs e)
{
waitHandle.Set();
}
My question is:
Can you repeatedly use an AutoResetEvent in a loop like this? Am I using it correctly?
I'm pretty sure my send/receive loop works properly: the MessageReceived event is successfully triggered shortly after each message is sent. But while my code works fine for a single iteration, it gets stuck over multiple iterations and I'm not sure why. Any suggestions?
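For comparison, here is a self-contained sketch of the same handshake, with the receiver thread standing in for the external program. One detail worth knowing: AutoResetEvent resets itself automatically when WaitOne returns, so an explicit Reset() in the loop is unnecessary, and if a Set() lands between WaitOne() and Reset(), the Reset() swallows it and the next WaitOne() blocks forever.

```csharp
using System;
using System.Threading;

class Handshake
{
    static readonly AutoResetEvent waitHandle = new AutoResetEvent(false);
    static int receivedData;

    static void Main()
    {
        // Stand-in for the receiver thread; in the real code this is
        // whatever raises MessageReceived.
        var receiver = new Thread(() =>
        {
            for (int i = 0; i < 5; i++)
            {
                Thread.Sleep(50);    // simulate the external program working
                receivedData = i * i;
                waitHandle.Set();    // the equivalent of OnMessageReceived
            }
        });
        receiver.Start();

        for (int i = 0; i < 5; i++)
        {
            // SendData(sendInfo) would go here.
            waitHandle.WaitOne();    // auto-resets on return; no Reset() call needed
            Console.WriteLine(receivedData);
        }
        receiver.Join();
    }
}
```

So yes, an AutoResetEvent can be reused across loop iterations like this; the repeated-iteration hangs are exactly what the stray Reset() produces.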
From what I've read, BeginReceive is considered superior to Receive in almost all cases (or all?). This is because BeginReceive is asynchronous and waits for data to arrive on a separate thread, allowing the program to complete other tasks on the main thread while it waits.
But with beginReceive, the callback function still runs on the main thread. And so there is overhead with switching back and forth between the worker thread and the main thread each time data is received. I know the overhead is small, but why not avoid it by simply using a separate thread to host a continuous loop of receive calls?
Can someone explain to me what is inferior about programming with the following style:
static void Main()
{
    double current_temperature = 0; // variable to hold data received from server
    Thread t = new Thread(UpdateData);
    t.Start();
    // other code down here that will occasionally do something with current_temperature
    // e.g. send to a GUI when button pressed
    // ... more code here
}
static void UpdateData()
{
    Socket my_socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    my_socket.Connect(server_endpoint);
    byte[] buffer = new byte[1024];
    while (true)
    {
        my_socket.Receive(buffer); // will receive data 50-100 times per second
        // code here to take the data in buffer
        // and use it to update current_temperature
        // ... more code here
    }
}
To begin with, I'm using Unity, which leaves me stuck with .NET 3.5. I'm currently working on a server program which uses the Socket object's asynchronous methods (e.g. BeginReceive, BeginAccept, BeginReceiveFrom, etc.). When the server receives a packet from a client, the packet arrives on a worker thread. Now I'm left with some data on a worker thread, and I want the main thread to process this data using a function that I specify. I implemented that:
using System;
using System.Threading;
using System.Collections;
using System.Collections.Generic;
public class MyDispatcherClass
{
public delegate void MyDel();
private readonly Queue<MyDel> commands = new Queue<MyDel>();
Object lockObj = new object ();
public void Add(MyDel dc)
{
lock (lockObj)
{
commands.Enqueue (dc);
}
}
public void Invoke()
{
lock (lockObj)
{
while (commands.Count > 0)
{
commands.Dequeue().Invoke();
}
}
}
}
Then I would use it this way:
// As a global variable:
MyDispatcherClass SomeDispatcher = new MyDispatcherClass ();
//The function that I want to call:
public void MyFunction (byte[] data)
{
// Do some stuff on the main thread
}
//When I receive a message on a worker thread I do that:
SomeDispatcher.Add(() => MyFunction(data)); // assume "data" is the message I received from a client
//Each frame on the main thread I call:
SomeDispatcher.Invoke ();
After some research, I found that the lock statement does not guarantee 100% FIFO ordering. That is not what I want; in some cases it could cause a total server breakdown! I want to achieve the same result with a 100% guarantee that data will be processed in the same order it was received from a client. How can I accomplish that?
Threads run in whatever order they want, so you can't force the order in which they enter the queue. But you can put more data into the queue than just what you will eventually be processing.
If you add a DateTime (or even just an int with a specified order) to the data being sent, you can sort your queue on that when you pull data from it (and possibly not pull any data less than 0.5 seconds old, to give other threads time to write theirs).
Normally when dealing with client-server relationships, each thread represents one client, so you don't have to worry about this: commands are FIFO within a thread (although they might not be when two different clients are sending messages).
Do you close and re-open the socket on the same client? That could make it use different threads. If you need a specific order and are sending things fairly close together, it might be better to leave the socket open.
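If a hard ordering guarantee is needed, one option is to take a sequence number atomically at the moment a message arrives on the worker thread, and have the main-thread pump refuse to run item n+1 before item n. This is an illustrative sketch, not an existing API:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

public class OrderedDispatcher
{
    private readonly object lockObj = new object();
    private readonly Dictionary<long, Action> pending = new Dictionary<long, Action>();
    private long nextTicket = -1;  // last ticket handed out
    private long nextToRun = 0;    // next ticket Invoke() will execute

    // Must be called synchronously in the receive callback, before any
    // further thread hop, or the ordering guarantee is lost.
    public long TakeTicket()
    {
        return Interlocked.Increment(ref nextTicket);
    }

    public void Add(long ticket, Action action)
    {
        lock (lockObj) { pending[ticket] = action; }
    }

    // Called each frame on the main thread; runs actions strictly in ticket order.
    public void Invoke()
    {
        while (true)
        {
            Action action;
            lock (lockObj)
            {
                if (!pending.TryGetValue(nextToRun, out action))
                    return;        // next item hasn't arrived yet; try again next frame
                pending.Remove(nextToRun);
                nextToRun++;
            }
            action();              // run outside the lock
        }
    }
}
```

On the worker thread this would be used as: long t = dispatcher.TakeTicket(); dispatcher.Add(t, () => MyFunction(data)); while the main thread keeps calling dispatcher.Invoke() each frame as before.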
I am reading data through the serial port, which works correctly. Here is my code in short:
public Form1()
{
InitializeComponent();
//Background worker
m_oWorker = new BackgroundWorker();
m_oWorker.DoWork += new DoWorkEventHandler(m_oWorker_DoWork);
m_oWorker.ProgressChanged += new ProgressChangedEventHandler(m_oWorker_ProgressChanged);
m_oWorker.RunWorkerCompleted += new RunWorkerCompletedEventHandler(m_oWorker_RunWorkerCompleted);
m_oWorker.WorkerReportsProgress = true;
m_oWorker.WorkerSupportsCancellation = true;
connectComPort.DataReceived += new SerialDataReceivedEventHandler(receiveData);
enableDisable();
}
void m_oWorker_DoWork(object sender, DoWorkEventArgs e)
{
backProcess();
m_oWorker.ReportProgress(100);
}
private void backProcess()
{
//do some work
Thread.Sleep(10000);
if (/* check whether 2000 bytes have been received */)
{
//do some work
}
}
backProcess() runs on the background worker. I have a global queue into which I insert the bytes received from the serial port, and I check this queue in the if statement.
My problem is that when 2000 bytes are sent from the other end to the PC, I have received fewer than 1000 bytes by the time the Thread.Sleep statement finishes, but if I set a breakpoint at the Thread.Sleep line, I receive the complete 2000 bytes. So does Thread.Sleep (on the background thread) also block the data-received event handler? How can I avoid this problem?
Some things aren't entirely clear from your question, but I think your design is flawed.
You don't need a background worker at all and you don't need to sleep some thread.
The best way to handle serial input is to use the already asynchronous DataReceived event of the SerialPort class, which is called whenever there's data to be read (you're already doing this, as far as I can tell from your latest edit).
You can then read the existing data, append it to a StringBuilder (or fill a list of up to 2000 bytes) and launch whatever you want to do from there.
Pseudo-Code example would be:
DataReceived event
1. Read data (using `SerialPort.ReadExisting`)
2. Append Data to buffer, increase total number of bytes read
3. If number of bytes >= 2000: Spawn new thread to handle the data
BackgroundWorker is NOT the right tool for this, by the way! If handling the 2000 bytes is fast enough, you don't even need to spawn a new thread at all.
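A sketch of that pseudo-code follows; the port name, baud rate, and block handling are placeholders, and it assumes DataReceived invocations don't overlap (the usual behaviour), so the buffer is only touched from the handler:

```csharp
using System;
using System.Collections.Generic;
using System.IO.Ports;
using System.Threading;

class SerialAccumulator
{
    private const int BlockSize = 2000;
    private readonly SerialPort port;
    private readonly List<byte> buffer = new List<byte>();

    public SerialAccumulator(string portName)
    {
        port = new SerialPort(portName, 9600);
        port.DataReceived += OnDataReceived;
        port.Open();
    }

    private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        // 1. Read whatever has arrived so far.
        int available = port.BytesToRead;
        byte[] chunk = new byte[available];
        int read = port.Read(chunk, 0, available);

        // 2. Append to the buffer and track the total.
        for (int i = 0; i < read; i++)
            buffer.Add(chunk[i]);

        // 3. Hand off each complete 2000-byte block so the handler returns quickly.
        while (buffer.Count >= BlockSize)
        {
            byte[] block = buffer.GetRange(0, BlockSize).ToArray();
            buffer.RemoveRange(0, BlockSize);
            ThreadPool.QueueUserWorkItem(_ => HandleBlock(block));
        }
    }

    private void HandleBlock(byte[] block)
    {
        // Process the 2000-byte block here; no Thread.Sleep needed anywhere.
    }
}
```

Note there is no polling and no sleeping: the event drives everything, so no bytes are missed while a thread is blocked.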
I am porting an existing app from Borland C++ to .NET. The application handles 4 COM ports simultaneously, and I need to synchronize them so that while one port is receiving data, the other three block until that one has read all the data in its receive buffer.
The requirement is that the new version works exactly the same way as the previous one, so I need to find a way to synchronize those 4 ports.
P.S.
I have got 4 instances of SerialPort class.
Below is a handler for receiving data over the COM port.
private void SerialPort_DataReceived( object sender, SerialDataReceivedEventArgs e )
{
SerialPort rThis = (SerialPort)sender;
string existingData = rThis.ReadExisting();
int NumReceived = existingData.Length;
if (NumReceived > 0)
{
char[] ReceivedByte = existingData.ToCharArray();
// if RX bytes cannot be processed
if (!rThis.ProcessReceivedBytes(ReceivedByte, NumReceived))
{
rThis.ReportThreadError(ThreadId.TI_READ, 0x07FFFFF);
}
}
}
The best approach is to have only one thread interacting with the ports, because then nothing can touch the other ports while that thread is busy with one of them. This is exactly what you want, so forget about multi-threading the port access itself.
Then you should separate that low-level I/O thread from the GUI thread, so you end up with two threads that communicate with one another over a well-defined API.
The low-level I/O thread requires a way of polling the serial ports without blocking, something like this:
while(polling) // GUI thread may interrupt polling on user request
{
foreach(SerialPort port in serialports)
{
if(port.HasDataToRead) // this is the polling you really need
{
// read data from port and handle it accordingly
}
}
// ... suspend thread now and then to prevent loop from consuming CPU time
}
The HasDataToRead flag should be driven from the event handler, meaning:
1. Catch the data-available event in the handler and signal it by setting HasDataToRead inside the SerialPort class;
2. Don't read the actual data in the event handler; event handlers often run on the GUI thread, and you don't want to lock up the GUI;
3. At the end of the read method, clear the HasDataToRead flag.
The cycle above really is a dispatcher, while the events are only used to orchestrate the flags inside the SerialPort instances.
Pay attention to the HasDataToRead flag: you'll have to lock around it to avoid race conditions. Note that C#'s lock statement requires a reference type, so you can't lock on the bool itself; use a dedicated lock object instead:
lock (flagLock)
{
    // access HasDataToRead
}
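Putting the pieces together, here is a sketch of such a SerialPort wrapper; the names are illustrative. Since C#'s lock statement only accepts a reference type, the flag is guarded by a dedicated lock object rather than being locked directly:

```csharp
using System;
using System.IO.Ports;

class PolledPort
{
    private readonly SerialPort port;
    private readonly object flagLock = new object();
    private bool hasDataToRead;

    public PolledPort(SerialPort port)
    {
        this.port = port;
        // The handler only raises the flag; the polling thread does the reading.
        port.DataReceived += (s, e) =>
        {
            lock (flagLock) { hasDataToRead = true; }
        };
    }

    // Polled by the single I/O thread's dispatcher loop.
    public bool HasDataToRead
    {
        get { lock (flagLock) { return hasDataToRead; } }
    }

    // Called from the I/O thread when HasDataToRead is true.
    public string ReadAll()
    {
        string data = port.ReadExisting();
        lock (flagLock) { hasDataToRead = false; }  // clear after reading
        return data;
    }
}
```

The I/O thread then loops over four PolledPort instances, reading whichever one has its flag raised, which gives the one-port-at-a-time behaviour the original app had.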