I have been trying to read data from the Twitter stream API using C#. Since the API sometimes returns no data, and I am looking for a near-realtime response, I have been hesitant to use a buffer length of more than 1 byte on the reader, in case the stream doesn't return any more data for the next day or two.
I have been using the following line:
input.BeginRead(buffer, 0, buffer.Length, InputReadComplete, null);
//buffer = new byte[1]
Now that I plan to scale the application up, I think a buffer size of 1 will result in a lot of CPU usage, and I want to increase that number, but I still don't want the stream to just block. Is it possible to get the stream to return if no more bytes are read in the next 5 seconds or so?
Async Option
You can use a timer in the async callback method to complete the operation if no bytes are received for, say, 5 seconds. Reset the timer every time bytes are received, and start it before calling BeginRead.
Sync Option
Alternatively, you can use the ReceiveTimeout property of the underlying socket to establish a maximum time to wait before completing the read. You can use a larger buffer and set the timeout to e.g. 5 seconds.
According to the MSDN documentation, that property only applies to synchronous reads, so you could perform the synchronous read on a separate thread.
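For illustration, here's a minimal sketch of that approach; the socket, buffer size, and HandleBytes handler are placeholders of mine, not part of the original question:

// using System.Net.Sockets;
void ReadLoop(Socket socket)
{
    socket.ReceiveTimeout = 5000; // only applies to the synchronous Receive
    byte[] buffer = new byte[8192];
    while (true)
    {
        try
        {
            int read = socket.Receive(buffer, 0, buffer.Length, SocketFlags.None);
            if (read == 0) break;      // remote side closed the connection
            HandleBytes(buffer, read); // hypothetical handler for the data
        }
        catch (SocketException ex)
        {
            if (ex.SocketErrorCode != SocketError.TimedOut) throw;
            // nothing arrived for 5 seconds; decide whether to keep waiting or give up
        }
    }
}

You would start this on its own thread, e.g. new Thread(() => ReadLoop(socket)).Start();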
UPDATE
Here's rough, untested code pieced together from a similar problem. It will probably not run (or be bug-free) as-is, but should give you the idea:
private EventWaitHandle asyncWait = new ManualResetEvent(false);
private Timer abortTimer = null;
private bool success = false;

public void ReadFromTwitter()
{
    // Fire AbortTwitter if nothing is received within 5 seconds.
    abortTimer = new Timer(AbortTwitter, null, 5000, System.Threading.Timeout.Infinite);
    asyncWait.Reset();
    input.BeginRead(buffer, 0, buffer.Length, InputReadComplete, null);
    asyncWait.WaitOne();
}

void AbortTwitter(object state)
{
    success = false; // Redundant but explicit for clarity
    asyncWait.Set();
}

void InputReadComplete(IAsyncResult ar)
{
    int bytesRead = input.EndRead(ar);
    // Disable the timer:
    abortTimer.Change(System.Threading.Timeout.Infinite, System.Threading.Timeout.Infinite);
    success = true;
    asyncWait.Set();
}
Related
I have an application that receives data from a wireless radio using RS-232. These radios use an API for communicating with multiple clients. To use the radios I created a library for communicating with them that other software can use with minimal changes from a normal SerialPort connection. The library reads from a SerialPort object and inserts incoming data into different buffers depending on the radio it receives from. Each packet that is received contains a header indicating its length, source, etc.
I start by reading the header, which is fixed-length, from the port and parsing it. In the header, the length of the data is defined before the data payload itself, so once I know the length of the data, I then wait for that much data to be available, then read in that many bytes.
Example (the other elements from the header are omitted):
// Read header
byte[] header = new byte[RCV_HEADER_LENGTH];
this.Port.Read(header, 0, RCV_HEADER_LENGTH);
// Get length of data in packet
short dataLength = header[1];
byte[] payload = new byte[dataLength];
// Make sure all the payload of this packet is ready to read
while (this.Port.BytesToRead < dataLength) { }
this.Port.Read(payload, 0, dataLength);
Obviously the empty while loop is bad. If for some reason the data never arrives, the thread will lock up. I haven't encountered this problem yet, but I'm looking for an elegant way to avoid it. My first thought is to add a short timer that starts just before the while loop and sets an abortRead flag when it elapses, breaking the loop, like this:
// Make sure all the payload of this packet is ready to read
abortRead = false;
readTimer.Start();
while (this.Port.BytesToRead < dataLength && !abortRead) {}
This code needs to handle a constant stream of incoming data as quickly as it can, so keeping overhead to a minimum is a concern, and I am wondering if I am going about this properly.
You don't have to run this while loop: Read will return whatever bytes are available (at least one) or throw a TimeoutException if nothing arrives within the SerialPort.ReadTimeout period (which you can adjust to your needs). Since it can return fewer bytes than requested, loop on the read itself until you have the full payload, as in the sketch below.
But a general remark: your while loop causes intensive CPU work for nothing. In the few milliseconds it takes the data to arrive you would burn through thousands of loop iterations; you should at least add a Thread.Sleep inside.
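As a hedged sketch of what that looks like applied to the code above (the 5-second timeout is an arbitrary choice of mine):

this.Port.ReadTimeout = 5000; // throws TimeoutException if no byte arrives in time
byte[] payload = new byte[dataLength];
int offset = 0;
// Read returns as soon as *some* bytes are available, so loop until the
// payload is complete rather than busy-waiting on BytesToRead.
while (offset < dataLength)
{
    offset += this.Port.Read(payload, offset, dataLength - offset);
}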
If you want to truly address this problem, you need to run the code in the background. There are different options for that: you can start a thread, start a Task, or use async/await.
To fully cover all options, the answer would be endless. If you use threads or tasks with the default scheduler and your wait time is expected to be rather short, you can use SpinWait.SpinUntil instead of your while loop. This will perform better than busy-waiting:
SpinWait.SpinUntil(() => this.Port.BytesToRead >= dataLength);
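Note that SpinUntil also has an overload taking a timeout, which returns false if the condition was never met; that way a dead connection can't spin forever:

// Gives up after 5 seconds instead of spinning indefinitely.
bool ready = SpinWait.SpinUntil(() => this.Port.BytesToRead >= dataLength, 5000);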
If you are free to use async await, I would recommend this solution, since you need only a few changes to your code. You can use Task.Delay and in the best case you pass a CancellationToken to be able to cancel your operation:
try
{
    while (this.Port.BytesToRead < dataLength)
    {
        await Task.Delay(100, cancellationToken);
    }
}
catch (OperationCanceledException)
{
    // Cancellation logic
}
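Note that await requires the enclosing method to be declared async (for example, a hypothetical async Task ReadPacketAsync(CancellationToken cancellationToken)), so this change can ripple up through your call chain.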
I think I would do this asynchronously with the SerialPort DataReceived event.
// Class fields
private const int RCV_HEADER_LENGTH = 8;
private const int MAX_DATA_LENGTH = 255;
private SerialPort Port;
private byte[] PacketBuffer = new byte[RCV_HEADER_LENGTH + MAX_DATA_LENGTH];
private int Readi = 0;
private int DataLength = 0;
// In your constructor
this.Port.DataReceived += new SerialDataReceivedEventHandler(DataReceivedHandler);
private void DataReceivedHandler(object sender, SerialDataReceivedEventArgs e)
{
    if (e.EventType != SerialData.Chars)
    {
        return;
    }

    // Read all available bytes.
    int len = Port.BytesToRead;
    byte[] data = new byte[len];
    Port.Read(data, 0, len);

    // Go through each byte.
    for (int i = 0; i < len; i++)
    {
        // Add the next byte to the packet buffer.
        PacketBuffer[Readi++] = data[i];

        // Check if we've received the complete header.
        if (Readi == RCV_HEADER_LENGTH)
        {
            DataLength = PacketBuffer[1];
        }

        // Check if we've received the complete data.
        if (Readi == RCV_HEADER_LENGTH + DataLength)
        {
            // The packet is complete; add it to the appropriate buffer.
            Readi = 0;
        }
    }
}
I'm using the asynchronous method BeginSend and I need some sort of timeout mechanism. What I've implemented works fine for connect and receive timeouts, but I have a problem with the BeginSend callback. Even a timeout of 25 seconds is often not enough and gets exceeded, which seems very strange to me and points towards a different cause.
public void Send(String data)
{
    if (client.Connected)
    {
        // Convert the string data to byte data using ASCII encoding.
        byte[] byteData = Encoding.ASCII.GetBytes(data);
        client.NoDelay = true;
        // Begin sending the data to the remote device.
        IAsyncResult res = client.BeginSend(byteData, 0, byteData.Length, 0,
            new AsyncCallback(SendCallback), client);
        if (!res.IsCompleted)
        {
            sendTimer = new System.Threading.Timer(SendTimeoutCallback, null, 10000, Timeout.Infinite);
        }
    }
    else MessageBox.Show("No connection to target! Send");
}
private void SendCallback(IAsyncResult ar)
{
    if (Interlocked.CompareExchange(ref sendTimeoutflag, 1, 0) != 0)
    {
        // the flag was set elsewhere, so return immediately.
        return;
    }
    sendTimeoutflag = 0; //needs to be reset back to 0 for next reception
    // we set the flag to 1, indicating it was completed.
    if (sendTimer != null)
    {
        // stop the timer from firing.
        sendTimer.Dispose();
    }
    try
    {
        // Retrieve the socket from the state object.
        Socket client = (Socket)ar.AsyncState;
        // Complete sending the data to the remote device.
        int bytesSent = client.EndSend(ar);
        ef.updateUI("Sent " + bytesSent.ToString() + " bytes to server." + "\n");
    }
    catch (Exception e)
    {
        MessageBox.Show(e.ToString());
    }
}
private void SendTimeoutCallback(object obj)
{
    if (Interlocked.CompareExchange(ref sendTimeoutflag, 2, 0) != 0)
    {
        // the flag was set elsewhere, so return immediately.
        return;
    }
    // we set the flag to 2, indicating a timeout was hit.
    sendTimer.Dispose();
    client.Close(); // closing the Socket cancels the async operation.
    MessageBox.Show("Connection to the target has been lost! SendTimeoutCallback");
}
I've tested timeout values up to 30 seconds; 30 seconds has proved to be the only value that never times out. But that just seems like overkill, and I believe there's a different underlying cause. Any ideas as to why this could be happening?
Unfortunately, there's not enough code to completely diagnose this. You don't even show the declaration of sendTimeoutflag. The example isn't self-contained, so there's no way to test it. And you're not clear about exactly what happens (e.g. do you just get the timeout, do you complete a send and still get a timeout, does something else happen?).
That said, I see at least one serious bug in the code: your use of the sendTimeoutflag. The SendCallback() method sets this flag to 1, but it immediately sets it back to 0 again (this time without the protection of Interlocked.CompareExchange()). Only after it has set the value back to 0 does it dispose the timer.
This means that even when you successfully complete the callback, the timeout timer is nearly guaranteed to have no idea and to close the client object anyway.
You can fix this specific issue by moving the assignment sendTimeoutflag = 0; to a point after you've actually completed the send operation, e.g. at the end of the callback method, and even then only if you take steps to ensure that the timer callback cannot execute past that point (e.g. wait for the timer's dispose to complete).
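As a rough, untested sketch of that ordering, based only on the code shown in the question:

private void SendCallback(IAsyncResult ar)
{
    // Claim the operation; if the timeout callback won the race, bail out.
    if (Interlocked.CompareExchange(ref sendTimeoutflag, 1, 0) != 0)
        return;

    if (sendTimer != null)
        sendTimer.Dispose(); // stop the timer from firing

    try
    {
        Socket client = (Socket)ar.AsyncState;
        int bytesSent = client.EndSend(ar);
        ef.updateUI("Sent " + bytesSent.ToString() + " bytes to server." + "\n");
    }
    catch (Exception e)
    {
        MessageBox.Show(e.ToString());
    }

    // Only now, after the send has fully completed (and assuming you've ensured
    // the timer callback can no longer run), re-arm the flag for the next send.
    Interlocked.Exchange(ref sendTimeoutflag, 0);
}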
Note that even having fixed that specific issue, you may still have other bugs. Frankly, it's not clear why you want a timeout in the first place. Nor is it clear why you want to use lock-free code to implement your timeout logic. More conventional locking (i.e. Monitor-based with the lock statement) would be easier to implement correctly and would likely not impose a noticeable performance penalty.
And I agree with the suggestion that you would be better served by using the async/await pattern instead of explicitly dealing with callback methods (but of course that would mean using a higher-level I/O object, since Socket doesn't directly support async/await).
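For illustration only, a timed send with async/await might look roughly like this, wrapping the existing socket in a NetworkStream; the names here are mine, not from the question:

// using System.Net.Sockets; using System.Text;
// using System.Threading; using System.Threading.Tasks;
private async Task SendAsync(Socket socket, string data)
{
    byte[] bytes = Encoding.ASCII.GetBytes(data);
    using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10)))
    using (var stream = new NetworkStream(socket, ownsSocket: false))
    {
        try
        {
            await stream.WriteAsync(bytes, 0, bytes.Length, cts.Token);
        }
        catch (OperationCanceledException)
        {
            // the 10-second timeout elapsed before the send completed
        }
    }
}

Be aware that on some framework versions the token is only observed before the write starts, so treat this as a sketch rather than a guarantee of mid-flight cancellation.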
Okay so I'm working on my file transfer service, and I can transfer the files fine with WCF streaming. I get good speeds, and I'll eventually be able to have good resume support because I chunk my files into small bits before streaming.
However, I'm running into issues with both the server side transfer and the client side receiving when it comes to measuring a detailed transfer speed as the messages are streamed and written.
Here's the code where the file is chunked, which is called by the service every time it needs to send another chunk to the client.
public byte[] NextChunk()
{
    if (MoreChunks) // If there are more chunks, proceed with the next chunking operation, otherwise throw an exception.
    {
        byte[] buffer;
        using (BinaryReader reader = new BinaryReader(File.OpenRead(FilePath)))
        {
            reader.BaseStream.Position = currentPosition;
            buffer = reader.ReadBytes((int)MaximumChunkSize);
        }
        currentPosition += buffer.LongLength; // Sets the stream position to be used for the next call.
        return buffer;
    }
    else
        throw new InvalidOperationException("The last chunk of the file has already been returned.");
}
In the above, I fill the buffer based on the chunk size I am using (in this case 2 MB, which I found gives the best transfer speeds compared to larger or smaller chunk sizes). I then do a little work to remember where I left off, and return the buffer.
The following code is the server side work.
public FileMessage ReceiveFile()
{
    if (!transferSpeedTimer.Enabled)
        transferSpeedTimer.Start();

    byte[] buffer = chunkedFile.NextChunk();
    FileMessage message = new FileMessage();
    message.FileMetaData = new FileMetaData(chunkedFile.MoreChunks, buffer.LongLength);
    message.ChunkData = new MemoryStream(buffer);

    if (!chunkedFile.MoreChunks)
    {
        OnTransferComplete(this, EventArgs.Empty);
        Timer timer = new Timer(20000f);
        timer.Elapsed += (sender, e) =>
        {
            StopSession();
            timer.Stop();
        };
        timer.Start();
    }

    // This needs to be more granular. This method is called too infrequently for a
    // fast and accurate enough progress of the file transfer to be determined.
    TotalBytesTransferred += buffer.LongLength;
    return message;
}
In this method, which is called by the client through WCF, I get the next chunk, create my message, do a little work with timers to stop my session once the transfer is complete, and update the transfer speed. Shortly before I return the message I increment TotalBytesTransferred by the length of the buffer, which is used to help me calculate transfer speed.
The problem is that it takes a while to stream the file to the client, so the speeds I'm getting are false. What I'm aiming for is a more granular update of the TotalBytesTransferred variable, so I have a better representation of how much data is being sent to the client at any given time.
Now, for the client side code, which uses an entirely different way of calculating transfer speed.
if (Role == FileTransferItem.FileTransferRole.Receiver)
{
    hostChannel = channelFactory.CreateChannel();
    ((IContextChannel)hostChannel).OperationTimeout = new TimeSpan(3, 0, 0);
    bool moreChunks = true;
    long bytesPreviousPosition = 0;
    using (BinaryWriter writer = new BinaryWriter(File.OpenWrite(fileWritePath)))
    {
        writer.BaseStream.SetLength(0);
        transferSpeedTimer.Elapsed += ((sender, e) =>
        {
            transferSpeed = writer.BaseStream.Position - bytesPreviousPosition;
            bytesPreviousPosition = writer.BaseStream.Position;
        });
        transferSpeedTimer.Start();
        while (moreChunks)
        {
            FileMessage message = hostChannel.ReceiveFile();
            moreChunks = message.FileMetaData.MoreChunks;
            writer.BaseStream.Position = filePosition;
            // This is golden, but I need to extrapolate it out and do the stream
            // copy myself so I can do calculations on a per byte basis.
            message.ChunkData.CopyTo(writer.BaseStream);
            filePosition += message.FileMetaData.ChunkLength;
            // TODO This needs to be more granular
            TotalBytesTransferred += message.FileMetaData.ChunkLength;
        }
        OnTransferComplete(this, EventArgs.Empty);
    }
}
else
{
    transferSpeedTimer.Elapsed += ((sender, e) =>
    {
        totalElapsedSeconds += (int)transferSpeedTimer.Interval;
        transferSpeed = TotalBytesTransferred / totalElapsedSeconds;
    });
    transferSpeedTimer.Start();
    host.Open();
}
Here, my TotalBytesTransferred is also based on the length of the chunk coming in. I know I can get a more granular calculation if I do the stream writing myself instead of using CopyTo, but I'm not exactly sure how best to go about this.
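(For reference, a hedged sketch of what that manual copy might look like, replacing the CopyTo call above; the 64 KB buffer size is an arbitrary choice of mine:)

// Copy the chunk ourselves so TotalBytesTransferred can be updated per read
// rather than once per 2 MB chunk.
byte[] copyBuffer = new byte[64 * 1024];
int read;
while ((read = message.ChunkData.Read(copyBuffer, 0, copyBuffer.Length)) > 0)
{
    writer.BaseStream.Write(copyBuffer, 0, read);
    TotalBytesTransferred += read;
}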
Can anybody help me out here? Outside of this class I have another class polling the TransferSpeed property as it's updated internally.
I apologize if I posted too much code, but I wasn't sure what to post and what not.
EDIT: I realize that, at least with the server-side implementation, I can get a more granular reading of how many bytes have been transferred by reading the position of the returned message's stream. However, I don't know how to do this while ensuring the integrity of my count. I thought about using a timer and polling the position as the stream was being transferred, but then the next call might be made and I would quickly fall out of sync.
How can I poll data from the returning stream and know immediately when the stream finishes so I can quickly add up the remainder of what was left of the stream into my byte count?
Okay I have found what seems to be ideal for me. I don't know if it's perfect, but it's pretty darn good for my needs.
On the Server side, we have this code that does the work of transferring the file. The chunkedFile class obviously does the chunking, but this is the code that sends the information to the Client.
public FileMessage ReceiveFile()
{
    byte[] buffer = chunkedFile.NextChunk();
    FileMessage message = new FileMessage();
    message.FileMetaData = new FileMetaData(chunkedFile.MoreChunks, buffer.LongLength, chunkedFile.CurrentPosition);
    message.ChunkData = new MemoryStream(buffer);

    TotalBytesTransferred = chunkedFile.CurrentPosition;
    UpdateTotalBytesTransferred(message);

    if (!chunkedFile.MoreChunks)
    {
        OnTransferComplete(this, EventArgs.Empty);
        Timer timer = new Timer(20000f);
        timer.Elapsed += (sender, e) =>
        {
            StopSession();
            timer.Stop();
        };
        timer.Start();
    }

    return message;
}
The client basically calls this code, and the server gets a new chunk, puts it in a stream, and updates TotalBytesTransferred based on the position of the chunkedFile (which keeps track of the underlying file the data is drawn from). I'll show the UpdateTotalBytesTransferred(message) method in a moment, as that is where both the server and the client achieve the more granular polling of TotalBytesTransferred.
Next up is the client side work.
hostChannel = channelFactory.CreateChannel();
((IContextChannel)hostChannel).OperationTimeout = new TimeSpan(3, 0, 0);
bool moreChunks = true;
using (BinaryWriter writer = new BinaryWriter(File.OpenWrite(fileWritePath)))
{
    writer.BaseStream.SetLength(0);
    while (moreChunks)
    {
        FileMessage message = hostChannel.ReceiveFile();
        moreChunks = message.FileMetaData.MoreChunks;
        UpdateTotalBytesTransferred(message);
        writer.BaseStream.Position = filePosition;
        message.ChunkData.CopyTo(writer.BaseStream);
        TotalBytesTransferred = message.FileMetaData.FilePosition;
        filePosition += message.FileMetaData.ChunkLength;
    }
    OnTransferComplete(this, EventArgs.Empty);
}
This code is very simple. It calls the host to get the file stream and also uses the UpdateTotalBytesTransferred(message) method. It does a little work to remember the position of the underlying file being written, copies the stream to that file, and updates TotalBytesTransferred after finishing.
The way I achieved the granularity I was looking for was with the UpdateTotalBytesTransferred method as follows. It works exactly the same for both the Server and the Client.
private void UpdateTotalBytesTransferred(FileMessage message)
{
    long previousStreamPosition = 0;
    long totalBytesTransferredShouldBe = TotalBytesTransferred + message.FileMetaData.ChunkLength;
    Timer timer = new Timer(500f);
    timer.Elapsed += (sender, e) =>
    {
        if (TotalBytesTransferred + (message.ChunkData.Position - previousStreamPosition) < totalBytesTransferredShouldBe)
        {
            TotalBytesTransferred += message.ChunkData.Position - previousStreamPosition;
            previousStreamPosition = message.ChunkData.Position;
        }
        else
        {
            timer.Stop();
            timer.Dispose();
        }
    };
    timer.Start();
}
What this does is take in the FileMessage, which is basically just a stream and some information about the file itself. It keeps a variable previousStreamPosition to remember the last position it saw when polling the underlying stream. It also does a simple calculation, totalBytesTransferredShouldBe, based on how many bytes have already been transferred plus the total length of the stream.
Finally, a timer is created and started, which on every tick checks whether it needs to increment TotalBytesTransferred. Once it's not supposed to update anymore (it has basically reached the end of the stream), it stops and disposes the timer.
This all allows me to take very small reads of how many bytes have been transferred, which lets me calculate the total progress more fluidly and measure the achieved file transfer speeds more accurately.
I have a function (please see code below) which reads some data from the web. The problem with this function is that sometimes it returns fast, but other times it waits indefinitely. I heard that threads can help me wait for a definite period of time and then return.
Can you please tell me how to make a thread wait for 'x' seconds and return if there is no activity recorded? My function also returns a string as a result; is it possible to capture that value while using a thread?
private string ReadMessage(SslStream sslStream)
{
    // Read the message sent by the server.
    // The end of the message is signaled using the "<EOF>" marker.
    byte[] buffer = new byte[2048];
    StringBuilder messageData = new StringBuilder();
    int bytes = -1;
    try
    {
        bytes = sslStream.Read(buffer, 0, buffer.Length);
        // Use the Decoder class to convert from bytes to ASCII
        // in case a character spans two buffers.
        Decoder decoder = Encoding.ASCII.GetDecoder();
        char[] chars = new char[decoder.GetCharCount(buffer, 0, bytes)];
        decoder.GetChars(buffer, 0, bytes, chars, 0);
        messageData.Append(chars);
        // Check for EOF.
    }
    catch (Exception ex)
    {
        throw;
    }
    return messageData.ToString();
}
For Andre Calil's comment:
My need is to read/write some value to an SSL server. For every write operation the server sends some response, and ReadMessage is responsible for reading the incoming message. I've found situations when ReadMessage (sslStream.Read(buffer, 0, buffer.Length);) waits forever. To combat this problem, I considered threads, which can wait for 'x' seconds and return after that. The following code demonstrates how ReadMessage is used:
byte[] message = Encoding.UTF8.GetBytes(inputmsg);
// Send hello message to the server.
sslStream.Write(message);
sslStream.Flush();
// Read message from the server.
outputmsg = ReadMessage(sslStream);
// Console.WriteLine("Server says: {0}", serverMessage);
// Close the client connection.
client.Close();
You can't (sanely) make a second thread interrupt the one you're executing this code from. Use a read timeout instead:
private string ReadMessage(SslStream sslStream)
{
    // set a timeout here or when creating the stream
    sslStream.ReadTimeout = 20 * 1000;
    // …
    try
    {
        bytes = sslStream.Read(…);
    }
    catch (IOException)
    {
        // a timeout occurred, handle it
    }
}
As an aside, the following construct is pointless:
try
{
    // some code
}
catch (Exception ex)
{
    throw;
}
If all you're doing is rethrowing, you don't need the try..catch block at all.
You can set the ReadTimeout on the SslStream so that the call to Read will timeout after a specified amount of time.
If you don't want to block the main thread, use an asynchronous pattern.
Without knowing exactly what you are trying to achieve, it sounds like you want to read data from an SSL stream that may take quite a while to respond, without blocking your UI/main thread.
You can consider doing your read asynchronously instead, using BeginRead.
Using that approach, you define a callback method that is invoked when Read has read data and placed it into the specified buffer.
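A minimal hedged sketch of that pattern (the buffer size and the hand-off are placeholders of mine):

byte[] buffer = new byte[2048];
sslStream.BeginRead(buffer, 0, buffer.Length, ar =>
{
    int bytes = sslStream.EndRead(ar); // completes without blocking the caller
    if (bytes > 0)
    {
        string text = Encoding.ASCII.GetString(buffer, 0, bytes);
        // hand `text` off (e.g. raise an event), then issue the next BeginRead
    }
}, null);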
Just sleeping (whether using Thread.Sleep or by setting ReadTimeout on the SslStream) will block the thread this code is running on.
Design it to be asynchronous by putting ReadMessage in its own thread waiting for the answer. Once the answer arrives, raise an event back to the main code to handle the output.
I have an ugly piece of Serial Port code which is very unstable.
void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    Thread.Sleep(100);
    while (port.BytesToRead > 0)
    {
        var count = port.BytesToRead;
        byte[] buffer = new byte[count];
        var read = port.Read(buffer, 0, count);
        if (DataEncapsulator != null)
            buffer = DataEncapsulator.UnWrap(buffer);
        var response = dataCollector.Collect(buffer);
        if (response != null)
        {
            this.OnDataReceived(response);
        }
        Thread.Sleep(100);
    }
}
If I remove either Thread.Sleep(100) call, the code stops working. Of course this really slows things down, and if lots of data streams in it stops working as well, unless I make the sleep even bigger. (Stops working as in pure deadlock.)
Please note the DataEncapsulator and DataCollector are components
provided by MEF, but their performance is quite good.
The class has a Listen() method which starts a background worker to
receive data.
public void Listen(IDataCollector dataCollector)
{
    this.dataCollector = dataCollector;
    BackgroundWorker worker = new BackgroundWorker();
    worker.DoWork += new DoWorkEventHandler(worker_DoWork);
    worker.RunWorkerAsync();
}

void worker_DoWork(object sender, DoWorkEventArgs e)
{
    port = new SerialPort();
    //Event handlers
    port.ReceivedBytesThreshold = 15;
    port.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived);
    ..... remainder of code ...
Suggestions are welcome!
Update:
Just a quick note about what the IDataCollector classes do: there is no way to know whether all bytes of the data that has been sent are read in a single read operation. So every time data is read, it is passed to the DataCollector, which returns true when a complete and valid protocol message has been received. In this case it just checks for a sync byte, length, CRC and tail byte. The real work is done later by other classes.
Update 2:
I have now replaced the code as suggested, but there is still something wrong:
void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    var count = port.BytesToRead;
    byte[] buffer = new byte[count];
    var read = port.Read(buffer, 0, count);
    if (DataEncapsulator != null)
        buffer = DataEncapsulator.UnWrap(buffer);
    var response = dataCollector.Collect(buffer);
    if (response != null)
    {
        this.OnDataReceived(response);
    }
}
You see, this works fine with a fast and stable connection. But the DataReceived event is NOT raised every time data arrives (see the MSDN docs for more), so if the data gets fragmented and you only read once within the event, data gets lost. And now I remember why I had the loop in the first place: it actually does have to read multiple times if the connection is slow or unstable. Obviously I can't go back to the while-loop solution, so what can I do?
My first concern with the original while-based code fragment is the constant allocation of memory for the byte buffer. The "new" statement here goes to the .NET memory manager to allocate a fresh buffer on every iteration, while the buffer allocated in the previous iteration is sent back to the pool for eventual garbage collection. That seems like an awful lot of work to do in a relatively tight loop.
I am curious what performance improvement you would gain by creating this buffer once, up front, with a reasonable size, say 8K, so you avoid all of this memory allocation, deallocation and fragmentation. Would that help?
private byte[] buffer = new byte[8192];

void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    Thread.Sleep(100);
    while (port.BytesToRead > 0)
    {
        var count = port.BytesToRead;
        var read = port.Read(buffer, 0, count);
        // ... more code
    }
}
My other concern with re-allocating this buffer on every iteration of the loop is that the reallocation may be unnecessary if the buffer is already large enough. Consider the following:
Loop Iteration 1: 100 bytes received; allocate buffer of 100 bytes
Loop Iteration 2: 75 bytes received; allocate buffer of 75 bytes
In this scenario, you don't really need to re-allocate the buffer, because the buffer of 100 bytes allocated in Loop Iteration 1 is more than enough to handle the 75 bytes received in Loop Iteration 2. There is no need to destroy the 100 byte buffer and create a 75 byte buffer. (This is moot, of course, if you just statically create the buffer and move it out of the loop altogether.)
On another tangent, I might suggest that the DataReceived loop concern itself only with the reception of the data. I am not sure what those MEF components are doing, but I question whether their work has to be done in the data reception loop. Could the received data be put on some sort of queue, with the MEF components picking it up there? That would keep the DataReceived loop as speedy as possible: it puts the received data on a queue and goes right back to receiving more data, while another thread watches the queue and lets the MEF components do their work from there. That may be more coding, but it helps the data reception loop stay as responsive as possible; a sketch follows.
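To sketch that idea concretely (hedged; BlockingCollection is one option among several, and the names mirror the question's code):

// using System.Collections.Concurrent;
private readonly BlockingCollection<byte[]> rxQueue = new BlockingCollection<byte[]>();

void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    int count = port.BytesToRead;
    byte[] chunk = new byte[count];
    port.Read(chunk, 0, count);
    rxQueue.Add(chunk); // return to receiving as quickly as possible
}

// Runs on its own thread: the MEF components do their work here,
// off the data reception path.
void ConsumeLoop()
{
    foreach (byte[] chunk in rxQueue.GetConsumingEnumerable())
    {
        var data = DataEncapsulator != null ? DataEncapsulator.UnWrap(chunk) : chunk;
        var response = dataCollector.Collect(data);
        if (response != null)
            this.OnDataReceived(response);
    }
}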
And it can be so simple...
Either you use the DataReceived handler, but without a loop and certainly without Sleep(): read what data is ready and push it somewhere (a Queue or MemoryStream),
or
start a Thread (BgWorker) and do a (blocking) serialPort1.Read(...), and again push or assemble the data you get.
Edit:
From what you posted I would say: drop the event handler and just Read the bytes inside DoWork(). That has the benefit that you can specify how much data you want, as long as it is (a lot) smaller than the ReadBufferSize.
Edit2, regarding Update2:
You will still be much better off with a while loop inside a BgWorker, not using the event at all. The simple way:
byte[] buffer = new byte[128]; // 128 = (average) size of a record
while (port.IsOpen && !worker.CancellationPending)
{
    int count = port.Read(buffer, 0, 128);
    // process count bytes
}
Now maybe your records are variable-sized and you don't want to wait for the next 126 bytes to come in to complete one. You can tune this by reducing the buffer size or setting a ReadTimeout. To get very fine-grained you could use port.ReadByte(); since that reads from the ReadBuffer, it's not really any slower.
If you want to write the data to a file and the serial port stops every so often, this is a simple way to do it. If possible, make your buffer large enough to hold all the bytes that you plan to put in a single file. Then write the code in your DataReceived event handler as shown below, and when you get an opportunity, write the whole buffer to a file as shown below that. If you must read FROM your buffer while the serial port is reading TO your buffer, try using a buffered stream object to avoid deadlocks and race conditions (or synchronize access directly; see the sketch after the code).
private byte[] buffer = new byte[8192];
private int index = 0;

void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    index += port.Read(buffer, index, port.BytesToRead);
}

void WriteDataToFile()
{
    binaryWriter.Write(buffer, 0, index);
    index = 0;
}