Okay, so I am making voice chat software.
I am using NAudio for it, an excellent library.
But I have a problem: the buffer can grow when something happens, for example when the OS loads something and the voice chat application is put on hold for a second. During that time, incoming data keeps being added to the buffer, so the current data gets delayed.
And since the receiver always plays at the same pace, the audio stays delayed from then on.
Now I have a "solution" for this, which is to clear the buffer when it reaches a certain length. This is not ideal at all though, and is more of a trick than a solution.
Now to the code.
First I initialize the things I use:
private NAudio.Wave.WaveInEvent SendStream = new WaveInEvent();
private NAudio.Wave.AsioOut Aut;
private NAudio.Wave.WaveFormat waveformat = new WaveFormat(48000, 16, 2);
private WasapiLoopbackCapture Waloop = new WasapiLoopbackCapture();
private NAudio.Wave.BufferedWaveProvider waveProvider;
waveProvider = new NAudio.Wave.BufferedWaveProvider(waveformat);
waveProvider.DiscardOnBufferOverflow = true;
SendStream.WaveFormat = waveformat;
waveformat is there just so I don't have to rewrite the format all the time.
DiscardOnBufferOverflow is used so that if I set a certain length on the buffer, for example 20 ms, anything above it gets discarded; otherwise an exception is thrown. I think it doesn't do anything if I don't set a length, though; the default is probably unbounded.
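For reference, a minimal sketch of capping the buffer (BufferDuration and DiscardOnBufferOverflow are real BufferedWaveProvider properties; the 100 ms value is just an illustration):
waveProvider = new NAudio.Wave.BufferedWaveProvider(waveformat);
waveProvider.BufferDuration = TimeSpan.FromMilliseconds(100); // cap the buffered audio at 100 ms
waveProvider.DiscardOnBufferOverflow = true; // silently drop anything beyond the cap instead of throwing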
Not much else to say: SendStream is a WaveInEvent, meaning it runs on a background thread when I use DataAvailable. Waloop is pretty much the same, except it's a loopback capture.
waveProvider is used in the receiving part to play back the audio.
waveformat is, well, the wave format; it's important to set it explicitly and use the same one everywhere, at least in my application.
Here is the receiving part. As you can see, it puts the data in a byte array and then plays it. Nothing weird.
byte[] byteData = udpClient.Receive(ref remoteEP);
waveProvider.AddSamples(byteData, 0, byteData.Length);
Here is the sending/recording part.
private void Sendv2()
{
    try
    {
        if (connect == true)
        {
            if (AudioDevice == "Wasapi Loopback")
            {
                SendStream.StopRecording();
                Waloop.StartRecording();
            }
            else
            {
                Waloop.StopRecording();
                SendStream.StartRecording();
            }
        }
    }
    catch (Exception e)
    {
        MessageBox.Show(e.Message);
    }
}
void Sending(object sender, NAudio.Wave.WaveInEventArgs e)
{
    if (connect == true && MuteMic.Checked == false)
    {
        udpClient.Send(e.Buffer, e.BytesRecorded, otherPartyIP.Address.ToString(), 1500);
    }
}
void SendWaloop(object sender, NAudio.Wave.WaveInEventArgs e)
{
    // WASAPI loopback captures 32-bit IEEE float samples; convert each one to 16-bit PCM.
    byte[] newArray16Bit = new byte[e.BytesRecorded / 2];
    short two;
    float value;
    for (int i = 0, j = 0; i < e.BytesRecorded; i += 4, j += 2)
    {
        value = BitConverter.ToSingle(e.Buffer, i);
        two = (short)(value * short.MaxValue);
        newArray16Bit[j] = (byte)(two & 0xFF);
        newArray16Bit[j + 1] = (byte)((two >> 8) & 0xFF);
    }
    if (connect == true && MuteMic.Checked == false)
    {
        udpClient.Send(newArray16Bit, newArray16Bit.Length, otherPartyIP.Address.ToString(), 1500);
    }
}
Waloop is a loopback, so it goes through another "channel", but that's not really important here.
Very simple: when data is available (while recording), and if connect is true etc., it just sends the buffer.
So it's pretty much the receiver part, just the other way around.
This is how I currently work around the problem:
if (waveProvider.BufferedDuration.TotalMilliseconds > 40) // TotalMilliseconds, not Milliseconds: the latter is only the component of the TimeSpan and wraps at one second
{
    waveProvider.ClearBuffer();
    TimesBufferClear++;
}
So I clear the buffer if it holds more than 40 ms (this runs in a Timer at a 600 ms interval).
(TimesBufferClear++ is just so I can keep track of how many times it has been cleared.)
Sadly, I have no idea how to prevent the buffer from growing in the first place. Forcing it to a fixed length (20 ms etc.) just makes playback worse and worse the higher the buffer goes, because the incoming data doesn't really stop; I think the part above the limit simply gets ignored.
Here is the creation of the input devices. My implementation differs a bit between ASIO and WASAPI, but they work pretty much the same; the only real difference is that I tell the UI whether ASIO is on or off, as you can see in the code. At the end I add the DataAvailable events to both SendStream (any input, microphone etc.) and Waloop (loopback of the sound being played).
private void CheckAsio()
{
    if (NAudio.Wave.AsioOut.isSupported())
    {
        Aut = new NAudio.Wave.AsioOut();
        ASIO.Text += "\nSupported: " + Aut.DriverName;
        ASIO.ForeColor = System.Drawing.Color.Green;
        Aut.Init(waveProvider);
        Aut.Play();
        SendStream.NumberOfBuffers = 2;
        SendStream.BufferMilliseconds = 10;
    }
    else
    {
        AsioSettings.Enabled = false;
        ASIO.Text += "\n Not Supported: Wasapi used";
        ASIO.ForeColor = System.Drawing.Color.DarkGray;
        Wasout = new WasapiOut(AudioClientShareMode.Shared, 0);
        Wasout.Init(waveProvider);
        Wasout.Play();
        SendStream.NumberOfBuffers = 2;
        SendStream.BufferMilliseconds = 9;
    }
    SendStream.DataAvailable += Sending;
    Waloop.DataAvailable += SendWaloop;
}
I am not sure this can even be solved, but since I don't see other voice chat programs suffer from it, I am guessing something can be done.
The way this appears to be handled in most applications is to send blocks of data at a defined rate (in samples/sec) and drop blocks that exceed that rate. If the sender is resource-limited and cannot maintain the rate, the stream will have audio gaps. This used to happen in audio calls over dial-up when the transmission rate was locked higher than the network connection could handle, or when the codec code was taking up too much time.
But from the sound of things, the buffering and skips are symptoms, not causes. The root of the problem is that your process is getting shelved in favor of other operations. You can address this by running at a higher process and/or thread priority. The higher your priority, the fewer interruptions you'll have, which reduces the likelihood of data queuing up to be processed.
In .NET you can raise your process and/or thread priority fairly simply. For process priority:
using System.Diagnostics;
...
Process.GetCurrentProcess().PriorityClass = PriorityClass.Highest;
Or for a thread:
using System.Threading;
...
Thread.CurrentThread.Priority = ThreadPriority.Highest;
This is not a complete solution, since the OS will still steal time slices from your application under various circumstances, but on a multi-CPU/core system with plenty of memory you have a fairly good shot at a stable recording environment.
Of course there are no fool-proof methods, and there's always that one slow computer that will mess you up, so you should allow the system to drop excess samples when necessary. Keep track of how much data you're sending out, and when it starts to back up, drop anything over your maximum samples/sec. That way your server (or client) won't buffer increasing amounts of data and lag further and further behind real time.
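A rough sketch of that cap on the sender side, reusing the Sending handler from the question (MaxBytesPerSecond matches the 48 kHz, 16-bit, stereo format above; the one-second window fields are illustrative names of my own):
// 48000 samples/s * 2 channels * 2 bytes/sample = 192000 bytes/s for the format above
private const int MaxBytesPerSecond = 192000;
private long bytesSentThisSecond;
private DateTime windowStart = DateTime.UtcNow;

void Sending(object sender, NAudio.Wave.WaveInEventArgs e)
{
    if (DateTime.UtcNow - windowStart >= TimeSpan.FromSeconds(1))
    {
        windowStart = DateTime.UtcNow; // start a new one-second accounting window
        bytesSentThisSecond = 0;
    }
    if (bytesSentThisSecond + e.BytesRecorded > MaxBytesPerSecond)
        return; // over budget for this window: drop the block rather than let it queue up
    bytesSentThisSecond += e.BytesRecorded;
    udpClient.Send(e.Buffer, e.BytesRecorded, otherPartyIP.Address.ToString(), 1500);
}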
One option there is to time-stamp each packet you send so that the client can choose when to start dropping data to catch up. Better to lose a few milliseconds of output here and there than to drift further and further out of sync.
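And a minimal sketch of the time-stamping idea, applied to the Sending handler and the receive loop from the question (ToUnixTimeMilliseconds needs .NET 4.6+, the 100 ms staleness cutoff is illustrative, and this assumes the two machines' clocks are roughly in sync):
// Sender: prepend an 8-byte millisecond timestamp to each recorded block.
byte[] stamped = new byte[8 + e.BytesRecorded];
BitConverter.GetBytes(DateTimeOffset.UtcNow.ToUnixTimeMilliseconds()).CopyTo(stamped, 0);
Array.Copy(e.Buffer, 0, stamped, 8, e.BytesRecorded);
udpClient.Send(stamped, stamped.Length, otherPartyIP.Address.ToString(), 1500);

// Receiver: play the block only if it is fresh enough; otherwise drop it.
byte[] byteData = udpClient.Receive(ref remoteEP);
long sentMs = BitConverter.ToInt64(byteData, 0);
if (DateTimeOffset.UtcNow.ToUnixTimeMilliseconds() - sentMs < 100)
    waveProvider.AddSamples(byteData, 8, byteData.Length - 8);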
Related
Okay so I'm working on my file transfer service, and I can transfer the files fine with WCF streaming. I get good speeds, and I'll eventually be able to have good resume support because I chunk my files into small bits before streaming.
However, I'm running into issues with both the server side transfer and the client side receiving when it comes to measuring a detailed transfer speed as the messages are streamed and written.
Here's the code where the file is chunked, which is called by the service every time it needs to send another chunk to the client.
public byte[] NextChunk()
{
    if (MoreChunks) // If there are more chunks, proceed with the next chunking operation, otherwise throw an exception.
    {
        byte[] buffer;
        using (BinaryReader reader = new BinaryReader(File.OpenRead(FilePath)))
        {
            reader.BaseStream.Position = currentPosition;
            buffer = reader.ReadBytes((int)MaximumChunkSize);
        }
        currentPosition += buffer.LongLength; // Sets the stream position to be used for the next call.
        return buffer;
    }
    else
        throw new InvalidOperationException("The last chunk of the file has already been returned.");
}
In the above, I basically fill the buffer based on the chunk size I am using (in this case 2 MB, which I found gives the best transfer speeds compared to larger or smaller chunk sizes). I then do a little work to remember where I left off, and return the buffer.
The following code is the server side work.
public FileMessage ReceiveFile()
{
    if (!transferSpeedTimer.Enabled)
        transferSpeedTimer.Start();

    byte[] buffer = chunkedFile.NextChunk();
    FileMessage message = new FileMessage();
    message.FileMetaData = new FileMetaData(chunkedFile.MoreChunks, buffer.LongLength);
    message.ChunkData = new MemoryStream(buffer);

    if (!chunkedFile.MoreChunks)
    {
        OnTransferComplete(this, EventArgs.Empty);
        Timer timer = new Timer(20000f);
        timer.Elapsed += (sender, e) =>
        {
            StopSession();
            timer.Stop();
        };
        timer.Start();
    }

    // This needs to be more granular. This method is called too infrequently for a fast and accurate enough progress of the file transfer to be determined.
    TotalBytesTransferred += buffer.LongLength;
    return message;
}
In this method, which the client invokes through a WCF call, I get the next chunk, create my message, do a little work with timers to stop the session once the transfer is complete, and update the transfer speed. Shortly before I return the message, I increment TotalBytesTransferred by the length of the buffer, which is used to calculate the transfer speed.
The problem is that it takes a while to stream the file to the client, so the speeds I'm getting are misleading. What I'm aiming for is a more granular update of the TotalBytesTransferred variable, so I have a better representation of how much data is being sent to the client at any given time.
Now, for the client side code, which uses an entirely different way of calculating transfer speed.
if (Role == FileTransferItem.FileTransferRole.Receiver)
{
    hostChannel = channelFactory.CreateChannel();
    ((IContextChannel)hostChannel).OperationTimeout = new TimeSpan(3, 0, 0);
    bool moreChunks = true;
    long bytesPreviousPosition = 0;
    using (BinaryWriter writer = new BinaryWriter(File.OpenWrite(fileWritePath)))
    {
        writer.BaseStream.SetLength(0);
        transferSpeedTimer.Elapsed += ((sender, e) =>
        {
            transferSpeed = writer.BaseStream.Position - bytesPreviousPosition;
            bytesPreviousPosition = writer.BaseStream.Position;
        });
        transferSpeedTimer.Start();
        while (moreChunks)
        {
            FileMessage message = hostChannel.ReceiveFile();
            moreChunks = message.FileMetaData.MoreChunks;
            writer.BaseStream.Position = filePosition;
            // This is golden, but I need to extrapolate it out and do the stream copy myself so I can do calculations on a per byte basis.
            message.ChunkData.CopyTo(writer.BaseStream);
            filePosition += message.FileMetaData.ChunkLength;
            // TODO This needs to be more granular
            TotalBytesTransferred += message.FileMetaData.ChunkLength;
        }
        OnTransferComplete(this, EventArgs.Empty);
    }
}
else
{
    transferSpeedTimer.Elapsed += ((sender, e) =>
    {
        totalElapsedSeconds += (int)transferSpeedTimer.Interval;
        transferSpeed = TotalBytesTransferred / totalElapsedSeconds;
    });
    transferSpeedTimer.Start();
    host.Open();
}
Here, my TotalBytesTransferred is also based on the length of the chunk coming in. I know I could get a more granular calculation by doing the stream writing myself instead of using CopyTo on the stream, but I'm not exactly sure how best to go about it.
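For what it's worth, a minimal sketch of that manual copy loop (replacing the CopyTo call above; 81920 bytes matches Stream.CopyTo's default block size):
byte[] copyBuffer = new byte[81920];
int bytesRead;
while ((bytesRead = message.ChunkData.Read(copyBuffer, 0, copyBuffer.Length)) > 0)
{
    writer.BaseStream.Write(copyBuffer, 0, bytesRead);
    TotalBytesTransferred += bytesRead; // update per block instead of per chunk
}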
Can anybody help me out here? Outside of this class I have another class polling the property of TransferSpeed as it's updated internally.
I apologize if I posted too much code, but I wasn't sure what to include and what to leave out.
EDIT: I realize, at least with the server side implementation, that the way I can get a more granular reading of how many bytes have been transferred is by reading the position of the stream in the return message. However, I don't know a way to do this that ensures the integrity of my count. I thought about using a timer and polling the position while the stream was being transferred, but then the next call might be made and I would quickly get out of sync.
How can I poll data from the returning stream and know immediately when the stream finishes, so I can quickly add the remainder of the stream to my byte count?
Okay I have found what seems to be ideal for me. I don't know if it's perfect, but it's pretty darn good for my needs.
On the Server side, we have this code that does the work of transferring the file. The chunkedFile class obviously does the chunking, but this is the code that sends the information to the Client.
public FileMessage ReceiveFile()
{
    byte[] buffer = chunkedFile.NextChunk();
    FileMessage message = new FileMessage();
    message.FileMetaData = new FileMetaData(chunkedFile.MoreChunks, buffer.LongLength, chunkedFile.CurrentPosition);
    message.ChunkData = new MemoryStream(buffer);

    TotalBytesTransferred = chunkedFile.CurrentPosition;
    UpdateTotalBytesTransferred(message);

    if (!chunkedFile.MoreChunks)
    {
        OnTransferComplete(this, EventArgs.Empty);
        Timer timer = new Timer(20000f);
        timer.Elapsed += (sender, e) =>
        {
            StopSession();
            timer.Stop();
        };
        timer.Start();
    }

    return message;
}
The client basically calls this code, and the server gets a new chunk, puts it in a stream, and updates TotalBytesTransferred based on the position of the chunkedFile (which tracks the underlying file the data is drawn from). I'll show the UpdateTotalBytesTransferred(message) method in a moment, as that is where both the server and the client achieve the more granular polling of TotalBytesTransferred.
Next up is the client side work.
hostChannel = channelFactory.CreateChannel();
((IContextChannel)hostChannel).OperationTimeout = new TimeSpan(3, 0, 0);
bool moreChunks = true;
using (BinaryWriter writer = new BinaryWriter(File.OpenWrite(fileWritePath)))
{
    writer.BaseStream.SetLength(0);
    while (moreChunks)
    {
        FileMessage message = hostChannel.ReceiveFile();
        moreChunks = message.FileMetaData.MoreChunks;
        UpdateTotalBytesTransferred(message);
        writer.BaseStream.Position = filePosition;
        message.ChunkData.CopyTo(writer.BaseStream);
        TotalBytesTransferred = message.FileMetaData.FilePosition;
        filePosition += message.FileMetaData.ChunkLength;
    }
    OnTransferComplete(this, EventArgs.Empty);
}
This code is very simple. It calls the host to get the file stream and also uses the UpdateTotalBytesTransferred(message) method. It does a little work to remember the position of the underlying file being written, copies the stream to that file, and updates TotalBytesTransferred after finishing.
The way I achieved the granularity I was looking for was with the UpdateTotalBytesTransferred method as follows. It works exactly the same for both the Server and the Client.
private void UpdateTotalBytesTransferred(FileMessage message)
{
    long previousStreamPosition = 0;
    long totalBytesTransferredShouldBe = TotalBytesTransferred + message.FileMetaData.ChunkLength;
    Timer timer = new Timer(500f);
    timer.Elapsed += (sender, e) =>
    {
        if (TotalBytesTransferred + (message.ChunkData.Position - previousStreamPosition) < totalBytesTransferredShouldBe)
        {
            TotalBytesTransferred += message.ChunkData.Position - previousStreamPosition;
            previousStreamPosition = message.ChunkData.Position;
        }
        else
        {
            timer.Stop();
            timer.Dispose();
        }
    };
    timer.Start();
}
What this does is take in the FileMessage, which is basically just a stream plus some information about the file itself. The variable previousStreamPosition remembers where the polling left off in the underlying stream the last time around. It also does a simple calculation for totalBytesTransferredShouldBe, based on how many bytes have already been transferred plus the total length of the stream.
Finally, a timer is created and started; on every tick it checks whether it still needs to increment TotalBytesTransferred. Once there is nothing left to add (basically, the end of the stream has been reached), it stops and disposes of the timer.
This gives me very fine-grained readings of how many bytes have been transferred, which lets me calculate the total progress more smoothly, as well as measure the achieved file transfer speeds more accurately.
I'm periodically receiving some data via serial port, in order to plot it and do some more stuff. To achieve this, I send the data from my microcontroller to my computer with a header that specifies the length of each packet.
I have the program running and working perfectly, except for one last detail. When the header specifies a length, my program will not stop reading until it reaches that number of bytes. So if, for some reason, some data from one packet is missed, the program keeps waiting and consumes the beginning of the next packet... and then the real problems start: from that moment on, everything fails.
I thought about raising a Timer every 0.9 seconds (the packets come every second) that would command a return to the waiting state and reset the variables. But I don't know how to do it; I tried, but I get errors at runtime, since IndCom (see the code below) gets reset in the middle of some function, and errors such as "Index out of bounds" arise.
I attach my code (without the timer):
private void routineRx(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
{
    try
    {
        int BytesWaiting;
        do
        {
            BytesWaiting = this.serialPort.BytesToRead;
            // Copy it to the BuffCom
            while (BytesWaiting > 0)
            {
                BuffCom[IndCom] = (byte)this.serialPort.ReadByte();
                IndCom = IndCom + 1;
                BytesWaiting = BytesWaiting - 1;
            }
        } while (IndCom < HeaderLength);
        // I have to read until I get the whole header, which gives the info about the current packet
        PacketLength = getIntInfo(BuffCom, 4);
        while (IndCom < PacketLength)
        {
            BytesWaiting = this.serialPort.BytesToRead;
            // Copy it to the BuffCom
            while (BytesWaiting > 0)
            {
                BuffCom[IndCom] = (byte)this.serialPort.ReadByte();
                IndCom = IndCom + 1;
                BytesWaiting = BytesWaiting - 1;
            }
        }
        // If we have a packet --> check whether it is valid and, if so, what kind of packet it is
        this.Invoke(new EventHandler(checkPacket));
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
I'm new to object-oriented programming and C#, so please be gentle! Thank you very much.
What you might do is use a Stopwatch.
const long COM_TIMEOUT = 500;

Stopwatch spw = new Stopwatch();
spw.Restart();
while (IndCom < PacketLength)
{
    // read byte, do stuff
    if (spw.ElapsedMilliseconds > COM_TIMEOUT) break; // etc.
}
Restart the stopwatch at the beginning and check the time in each while loop, then break out (and clean up) if the timeout hits. 900 ms is probably too much, even, if you're only expecting a few bytes. COM traffic is quite fast; if you don't get the whole thing immediately, it's probably not coming.
I like to use termination characters in communication protocols (like [CR], etc.). This allows you to read until you find the termination character, then stop, which prevents reading into the next command. Even if you don't want to use termination characters, changing your code to something like this:
while (IndCom < PacketLength)
{
    if (serialPort.BytesToRead > 0)
    {
        BuffCom[IndCom] = (byte)this.serialPort.ReadByte();
        IndCom++;
    }
}
allows you to stop when you reach your packet size, leaving any remaining characters in the buffer for the next time around (i.e., the next command). You can add the stopwatch timeout to the above as well.
The other nice thing about termination characters is that you don't have to know in advance how long the packet should be - you just read until you reach the termination character and then process/parse the whole thing once you've got it. It makes your two-step port read into a one-step port read.
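As a rough sketch of that one-step read, assuming '\r' as the terminator (purely illustrative; pick a byte value that can never occur inside a packet):
IndCom = 0; // reset the write index for the new packet
while (true)
{
    int b = serialPort.ReadByte(); // blocks until a byte arrives (or ReadTimeout expires)
    if (b == '\r') break;          // terminator reached: the packet is complete
    BuffCom[IndCom++] = (byte)b;
}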
I'm working on a client/server relationship that is meant to push data back and forth for an indeterminate amount of time.
The problem I'm attempting to overcome is on the client side: I cannot manage to find a way to detect a disconnect.
I've taken a couple of passes at other people's solutions, ranging from just catching IOExceptions to polling the socket on all three SelectModes. I've also tried a combination of a poll and a check on the socket's Available property.
// Something like this
Boolean IsConnected()
{
    try
    {
        bool part1 = this.Connection.Client.Poll(1000, SelectMode.SelectRead);
        bool part2 = (this.Connection.Client.Available == 0);
        if (part1 & part2)
        {
            // Never occurs
            // connection is closed
            return false;
        }
        return true;
    }
    catch (IOException e)
    {
        // Never occurs either
        return false;
    }
}
On the server side, an attempt to write an 'empty' character (\0) to the client forces an IOException, and the server can detect that the client has disconnected (pretty easy gig).
On the client side, the same operation yields no exception.
// Something like this
Boolean IsConnected()
{
    try
    {
        this.WriteHandle.WriteLine("\0");
        this.WriteHandle.Flush();
        return true;
    }
    catch (IOException e)
    {
        // Never occurs
        this.OnClosed("Yo socket sux");
        return false;
    }
}
A problem I believe I'm having with detecting a disconnect via a poll is that I can fairly easily get a false result on SelectRead if my server hasn't written anything back to the client since the last check. I'm not sure what to do here; I've chased down every option for making this detection that I can find, and nothing has been 100% reliable for me. Ultimately my goal is to detect a server (or connection) failure, inform the client, wait to reconnect, etc., so I'm sure you can imagine that this is an integral piece.
Appreciate anyone's suggestions.
Thanks ahead of time.
EDIT: Anyone viewing this question should note the answer below, and my FINAL Comments on it. I've elaborated on how I overcame this problem, but have yet to make a 'Q&A' style post.
One option is to use TCP keep-alive packets. You turn them on with a call to Socket.IOControl(). The only annoying bit is that it takes a byte array as input, so you have to convert your settings into an array of bytes to pass in. Here's an example using a 10000 ms keep-alive with a 1000 ms retry:
Socket socket; //Make a good socket before calling the rest of the code.
int size = sizeof(UInt32);
UInt32 on = 1;
UInt32 keepAliveInterval = 10000; //Send a packet once every 10 seconds.
UInt32 retryInterval = 1000; //If no response, resend every second.
byte[] inArray = new byte[size * 3];
Array.Copy(BitConverter.GetBytes(on), 0, inArray, 0, size);
Array.Copy(BitConverter.GetBytes(keepAliveInterval), 0, inArray, size, size);
Array.Copy(BitConverter.GetBytes(retryInterval), 0, inArray, size * 2, size);
socket.IOControl(IOControlCode.KeepAliveValues, inArray, null);
Keep alive packets are sent only when you aren't sending other data, so every time you send data, the 10000ms timer is reset.
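With keep-alives enabled, a dead connection then shows up on the next blocking call; a rough sketch of detecting it (the buffer size and handling here are illustrative):
try
{
    byte[] recvBuffer = new byte[1024];
    int received = socket.Receive(recvBuffer);
    if (received == 0)
    {
        // The remote side shut down gracefully.
    }
}
catch (SocketException)
{
    // Connection reset, or the keep-alive probes went unanswered: treat as disconnected.
}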
I am new to Visual C#. I have to receive a packet of 468 bytes every second from an embedded device over a serial line. The header of the packet is 0xbf, 0x13, 0x97, 0x74. After validating the packet header I save the packet, process it, and display it graphically.
The problem is that I start losing packets after a few hours. (Other software logged the same data for a whole week and worked fine.)
The code is here...
private void DataRec(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
{
    rtTotBytes = comport.BytesToRead;
    rtTotBytesRead = comport.Read(rtSerBuff, 0, rtTotBytes);
    this.Invoke(new ComportDelegate(ComportDlgtCallback), rtSerBuff, rtTotBytesRead);
}
//Delegate
delegate void ComportDelegate(byte[] sBuff, int sByte);
//Callback Function to Delegate
private void ComportDlgtCallback(byte[] SerBuff, int TotBytes)
{
    for (int k = 0; k < TotBytes; k++)
    {
        switch (rtState)
        {
            case 0:
                if (SerBuff[k] == 0xbf) { rtState = 1; TempBuff[0] = 0xbf; }
                else rtState = 0;
                break;
            case 1:
                if (SerBuff[k] == 0x13) { rtState = 2; TempBuff[1] = 0x13; }
                else rtState = 0;
                break;
            case 2:
                if (SerBuff[k] == 0x97) { rtState = 3; TempBuff[2] = 0x97; }
                else rtState = 0;
                break;
            case 3:
                if (SerBuff[k] == 0x74) { rtState = 4; TempBuff[3] = 0x74; rtCnt = 4; }
                else rtState = 0;
                break;
            case 4:
                if (rtCnt == 467)
                {
                    TempBuff[rtCnt] = SerBuff[k];
                    TempBuff.CopyTo(PlotBuff, 0);
                    ProcessPacket(PlotBuff);
                    rtState = 0; rtCnt = 0;
                }
                else
                    TempBuff[rtCnt++] = SerBuff[k];
                break;
        }
    }
}
Another question: can BytesToRead be zero when a DataReceived event has occurred? Do you have to check (BytesToRead > 0) in the DataReceived handler?
Serial port input must be treated as a stream, not as a series of packets. For example, when the device sends the 0xbf, 0x13, 0x97, 0x74 packet, the DataRec function may be called once with the whole packet, twice with 0xbf, 0x13 and 0x97, 0x74, four times with one byte each, and so on. The program must be flexible enough to handle the input stream with some kind of parser. Your current program doesn't do this: it can miss logical packets that arrive across several calls. The opposite can also happen, where several packets arrive in one DataRec call, and your program isn't prepared for that either.
Edit.
A typical serial port input handling algorithm looks like this:
The DataRec function adds received data to an input queue and calls the parser.
The input queue is a byte array containing the data already received but not yet parsed. New data is appended to the end, and parsed packets are removed from the beginning of the queue.
The parser reads the input queue, handles every recognized packet and removes it from the queue, leaving any unrecognized data for the next call.
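As a rough sketch of that structure, using the 4-byte header and 468-byte packet length from the question (the queue and helper names are illustrative, and cross-thread synchronization is omitted for brevity):
using System.Collections.Generic;

private readonly List<byte> rxQueue = new List<byte>();
private static readonly byte[] Header = { 0xbf, 0x13, 0x97, 0x74 };
private const int PacketLength = 468;

private void DataRec(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
{
    byte[] chunk = new byte[comport.BytesToRead];
    int n = comport.Read(chunk, 0, chunk.Length);
    for (int i = 0; i < n; i++) rxQueue.Add(chunk[i]); // append however much arrived
    Parse();
}

private void Parse()
{
    while (true)
    {
        int start = IndexOfHeader();
        if (start < 0) return;                        // no header yet: wait for more data
        if (start > 0) rxQueue.RemoveRange(0, start); // discard garbage before the header
        if (rxQueue.Count < PacketLength) return;     // packet incomplete: wait for more data
        byte[] packet = rxQueue.GetRange(0, PacketLength).ToArray();
        rxQueue.RemoveRange(0, PacketLength);
        ProcessPacket(packet);                        // hand one complete packet to the existing code
    }
}

private int IndexOfHeader()
{
    for (int i = 0; i + Header.Length <= rxQueue.Count; i++)
    {
        bool match = true;
        for (int j = 0; j < Header.Length; j++)
            if (rxQueue[i + j] != Header[j]) { match = false; break; }
        if (match) return i;
    }
    return -1;
}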
I think one problem could be that you can't be sure you receive a full packet within one DataReceived event. It is possible that you get the first half of the packet, and half a second later the second half.
So you should implement another layer where you put the data into a buffer; how you proceed from there depends on the data format.
If you additionally receive information such as an end mark or the length of the data, you can check whether the buffer already contains a complete packet. If so, forward the full packet to your routine.
If you don't have that information, you have to wait until you receive the next header and forward the data in your buffer up to that new header.
Have you checked the memory usage of the program?
Maybe you have a small interop class, or some memory, that is not properly freed; it adds up after a few hours and makes the program sluggish, causing it to lose data.
I'd use Process Explorer to check how memory and CPU usage change after a few hours. Maybe check for HDD activity, too.
If this doesn't lead to results, use a full-blown profiler like ANTS and run the program under the profiler to check for problems.
As Alex Farber points out, there's no guarantee that when your DataReceived handler is invoked, all the bytes are there.
If your packets are always a fixed size, and arrive at a low rate, you can use the Read function directly rather than relying on the DataReceived event. Conceptually:
packetSize = 468;
// ...initialization...
comport.ReadTimeout = 2000; // packets expected every 1000 milliseconds, so give it some slack
while (captureFlag)
{
    comport.Read(rtSerBuff, 0, packetSize);
    // ...do stuff...
}
This can be put into its own worker thread if you want.
Another approach would be to use the ReadLine method. You mention that the packets have a known starting signature. Do they also have a known ending signature that is guaranteed not to repeat within the packet? If so, you can set the NewLine property to this ending signature and use ReadLine. Again, you can put this in a worker thread.
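A minimal sketch of that approach, assuming a one-character ending signature of 0x04 (purely illustrative; note that ReadLine returns a string, so this suits text-like protocols best):
comport.NewLine = "\u0004";         // the ending signature; must never occur inside a packet
comport.ReadTimeout = 2000;         // don't block forever if a packet goes missing
string packet = comport.ReadLine(); // blocks until the ending signature (or timeout)
// packet now holds everything up to, but not including, the ending signature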
I have an ugly piece of Serial Port code which is very unstable.
void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    Thread.Sleep(100);
    while (port.BytesToRead > 0)
    {
        var count = port.BytesToRead;
        byte[] buffer = new byte[count];
        var read = port.Read(buffer, 0, count);
        if (DataEncapsulator != null)
            buffer = DataEncapsulator.UnWrap(buffer);
        var response = dataCollector.Collect(buffer);
        if (response != null)
        {
            this.OnDataReceived(response);
        }
        Thread.Sleep(100);
    }
}
If I remove either of the Thread.Sleep(100) calls, the code stops working. Of course the sleeps really slow things down, and if lots of data streams in, it stops working as well unless I make the sleep even bigger. (Stops working as in pure deadlock.)
Please note that the DataEncapsulator and DataCollector are components provided by MEF, but their performance is quite good.
The class has a Listen() method which starts a background worker to
receive data.
public void Listen(IDataCollector dataCollector)
{
    this.dataCollector = dataCollector;
    BackgroundWorker worker = new BackgroundWorker();
    worker.DoWork += new DoWorkEventHandler(worker_DoWork);
    worker.RunWorkerAsync();
}

void worker_DoWork(object sender, DoWorkEventArgs e)
{
    port = new SerialPort();
    // Event handlers
    port.ReceivedBytesThreshold = 15;
    port.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived);
    ..... remainder of code ...
Suggestions are welcome!
Update:
Just a quick note about what the IDataCollector classes do: there is no way to know whether all bytes of the data that was sent are read in a single read operation. So every time data is read, it is passed to the DataCollector, which returns a result once a complete and valid protocol message has been received. In this case it just checks for a sync byte, length, CRC, and tail byte. The real work is done later by other classes.
Update 2:
I have now replaced the code as suggested, but something is still wrong:
void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    var count = port.BytesToRead;
    byte[] buffer = new byte[count];
    var read = port.Read(buffer, 0, count);
    if (DataEncapsulator != null)
        buffer = DataEncapsulator.UnWrap(buffer);
    var response = dataCollector.Collect(buffer);
    if (response != null)
    {
        this.OnDataReceived(response);
    }
}
You see, this works fine with a fast and stable connection. But the DataReceived event is not necessarily raised for every byte that arrives (see the MSDN docs for more), so if the data gets fragmented and you only read once within the event, data gets lost.
And now I remember why I had the loop in the first place: it actually does have to read multiple times if the connection is slow or unstable. Obviously I can't go back to the while loop solution, so what can I do?
My first concern with the original while-based code fragment is the constant allocation of memory for the byte buffer. A "new" statement there goes to the .NET memory manager to allocate a fresh buffer, while the memory allocated in the previous iteration goes back to the unused pool for eventual garbage collection. That seems like an awful lot of work to do in a relatively tight loop.
I am curious about the performance improvement you would gain by creating this buffer once at design time with a reasonable size, say 8K, so you avoid all of this memory allocation, deallocation, and fragmentation. Would that help?
private byte[] buffer = new byte[8192];

void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    Thread.Sleep(100);
    while (port.BytesToRead > 0)
    {
        var count = port.BytesToRead;
        var read = port.Read(buffer, 0, count);
        // ... more code
    }
}
My other concern with re-allocating this buffer on every iteration of the loop is that the reallocation may be unnecessary if the buffer is already large enough. Consider the following:
Loop iteration 1: 100 bytes received; allocate a buffer of 100 bytes
Loop iteration 2: 75 bytes received; allocate a buffer of 75 bytes
In this scenario you don't really need to re-allocate the buffer, because the 100-byte buffer from iteration 1 is more than enough to hold the 75 bytes received in iteration 2. There is no need to destroy the 100-byte buffer and create a 75-byte one. (This is moot, of course, if you just create the buffer statically and move it out of the loop altogether.)
On another tangent, I might suggest that the DataReceived handler concern itself only with receiving data. I am not sure what those MEF components are doing, but I question whether their work has to be done in the data reception loop. Could the received data be put on some sort of queue, with the MEF components picking it up there? I am interested in keeping the DataReceived loop as speedy as possible; if the received data is queued, the handler can go right back to receiving more data. You can then set up another thread to watch for data arriving on the queue and have the MEF components do their work from there. That may be more code, but it should keep the data reception loop as responsive as possible; see the sketch below.
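A sketch of that hand-off, assuming a ConcurrentQueue and a dedicated consumer thread (the running flag and the ProcessLoop name are illustrative):
using System.Collections.Concurrent;

private readonly ConcurrentQueue<byte[]> rxQueue = new ConcurrentQueue<byte[]>();
private volatile bool running = true;

void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    int count = port.BytesToRead;
    byte[] chunk = new byte[count];
    int read = port.Read(chunk, 0, count);
    if (read < count)
        Array.Resize(ref chunk, read); // keep only what was actually read
    rxQueue.Enqueue(chunk); // hand off immediately; no parsing on this thread
}

void ProcessLoop() // runs on its own thread
{
    byte[] chunk;
    while (running)
    {
        if (rxQueue.TryDequeue(out chunk))
        {
            // Do the DataEncapsulator / dataCollector work here instead,
            // off the serial port's event thread.
        }
        else
        {
            Thread.Sleep(1); // nothing pending; yield briefly
        }
    }
}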
And it can be so simple...
Either use the DataReceived handler, but without a loop and certainly without Sleep(): read whatever data is ready and push it somewhere (a Queue or MemoryStream),
or
start a thread (BackgroundWorker) and do a blocking serialPort1.Read(...), and again push or assemble the data you get.
Edit:
From what you posted I would say: drop the event handler and just Read the bytes inside DoWork(). That has the benefit that you can specify how much data you want, as long as it is (a lot) smaller than ReadBufferSize.
Edit2, regarding Update2:
You will still be much better off with a while loop inside a BackgroundWorker, not using the event at all. The simple way:
byte[] buffer = new byte[128]; // 128 = (average) size of a record
while (port.IsOpen && !worker.CancellationPending)
{
    int count = port.Read(buffer, 0, 128);
    // process count bytes
}
Now maybe your records are variable-sized and you don't want to wait for the next 126 bytes to come in to complete one. You can tune this by reducing the buffer size or setting a ReadTimeout. To get very fine-grained you could use port.ReadByte(); since that reads from the ReadBuffer, it's not really any slower.
If you want to write the data to a file and the serial port stops every so often, this is a simple way to do it. If possible, make your buffer large enough to hold all the bytes you plan to put in a single file. Then write the code in your DataReceived event handler as shown below, and when you get an opportunity, write the whole buffer to a file as shown below that. If you must read FROM your buffer while the serial port is reading INTO your buffer, try using a buffered stream object to avoid deadlocks and race conditions.
private byte[] buffer = new byte[8192];
private int index = 0; // a field needs an explicit type; the original 'var' would not compile here

void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    index += port.Read(buffer, index, port.BytesToRead);
}

void WriteDataToFile()
{
    binaryWriter.Write(buffer, 0, index);
    index = 0;
}