I use this code for receiving scanlines:
StateObject stateobj = (StateObject)ar.AsyncState;
stateobj.workSocket.BeginReceive(new System.AsyncCallback(VideoReceive), stateobj);
UdpClient client = stateobj.workSocket;
IPEndPoint ipendp = new IPEndPoint(IPAddress.Any, 0);
byte[] data = client.EndReceive(ar, ref ipendp);
BinaryReader inputStream = new BinaryReader(new MemoryStream(data));
inputStream.BaseStream.Position = 0;
int currentPart = inputStream.ReadInt32();
if (currentPart == part)
{
int a = 0;
int colum = inputStream.ReadInt32();
for (; a < packets.GetLength(1); a++)
{
packets[colum, a, 2] = inputStream.ReadByte();
packets[colum, a, 1] = inputStream.ReadByte();
packets[colum, a, 0] = inputStream.ReadByte();
}
receiverCheck++;
}
else if (currentPart != part)
{
part++;
mask2.Data = packets;
pictureBox1.BeginInvoke(new MethodInvoker(() => { pictureBox1.Image = mask2.ToBitmap(); }));
int colum = inputStream.ReadInt32();
for (int a = 0; a < packets.GetLength(1); a++)
{
packets[colum, a, 2] = inputStream.ReadByte();
packets[colum, a, 1] = inputStream.ReadByte();
packets[colum, a, 0] = inputStream.ReadByte();
}
}
After all scanlines have been received, the image is displayed in the pictureBox.
This should work, but a lot of packets are lost even on localhost (roughly 95 of 480), so I get a striped image. I found a similar problem here.
Answer:
private void OnReceive(object sender, SocketAsyncEventArgs e)
{
TOP:
if (e != null)
{
int length = e.BytesTransferred;
if (length > 0)
{
FireBytesReceivedFrom(Datagram, length, (IPEndPoint)e.RemoteEndPoint);
}
e.Dispose(); // could possibly reuse the args?
}
Socket s = Socket;
if (s != null && RemoteEndPoint != null)
{
e = new SocketAsyncEventArgs();
try
{
e.RemoteEndPoint = RemoteEndPoint;
e.SetBuffer(Datagram, 0, Datagram.Length); // don't allocate a new buffer every time
e.Completed += OnReceive;
// this uses the fast IO completion port stuff made available in .NET 3.5; it's supposedly better than the socket selector or the old Begin/End methods
if (!s.ReceiveFromAsync(e)) // returns synchronously if data is already there
goto TOP; // using GOTO to avoid overflowing the stack
}
catch (ObjectDisposedException)
{
// this is expected after a disconnect
e.Dispose();
Logger.Info("UDP Client Receive was disconnected.");
}
catch (Exception ex)
{
Logger.Error("Unexpected UDP Client Receive disconnect.", ex);
}
}
}
The answer has a method FireBytesReceivedFrom(), but I can't find it. How can I use this code? And does this code help?
UDP doesn't guarantee that all packets will be received, or that they will arrive in any particular order. So even if you get this "working", be aware that it could (and probably will) fail at some point.
When you call BeginReceive, you are starting an async read. When data arrives, your event handler will be called, and it is then that you need to call EndReceive. Currently you are calling EndReceive immediately, which is probably why things are going wrong.
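For illustration, a minimal sketch of that pattern with UdpClient (field and method names here are my own, not the poster's):
private UdpClient udpClient;

private void StartListening()
{
    udpClient = new UdpClient(11000);            // example port
    udpClient.BeginReceive(OnUdpReceive, null);  // start the first async read and return
}

private void OnUdpReceive(IAsyncResult ar)
{
    IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
    byte[] datagram = udpClient.EndReceive(ar, ref remote); // complete the read that just finished

    // Queue the next read immediately so the socket keeps receiving...
    udpClient.BeginReceive(OnUdpReceive, null);

    // ...then handle the datagram (copy it into your buffer, raise an event, etc.).
}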
Some other notes:
I'd suggest that you don't try to be clever and re-use the buffer, as that could result in you losing data by overwriting data while you are trying to read it. Start off simple and add optimizations like this after you have it working well.
Also, the goto could be causing havoc. You seem to be trying to use it to retry, but this code is running IN the data received event handler. Event handlers should handle the event in the most lightweight way possible and then return, not start looping... especially as the loop here could cause a re-entrant call to the same event handler.
With async comms, you should start a read and exit. When you eventually receive the data (your event handler is called), grab it and start a new async read. Anything more complex than that is likely to cause problems.
The Fire... method you are missing probably just raises (fires) an event to tell clients that the data has arrived. This is the place where you should be grabbing the received data and doing something with it.
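For example, it might be nothing more than this (a guess at what the missing method looks like, not the original code):
public event Action<byte[], int, IPEndPoint> BytesReceivedFrom;

private void FireBytesReceivedFrom(byte[] buffer, int length, IPEndPoint from)
{
    // Raise the event so whoever subscribed can copy/process the received bytes.
    var handler = BytesReceivedFrom;
    if (handler != null)
        handler(buffer, length, from);
}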
If you are using an example to build this code then I suggest you look for a better example. (in any case I would always recommend trying to find 3 examples so you can compare the implementations, as you will usually learn a lot more about something this way)
Related
I have server software that has a single listening socket which then spawns off multiple sockets (10-30) that I stream data to.
When I start my application it uses about 2-3% CPU on my 8 vCPU VM. After some time, generally 1-2 weeks, the application suddenly starts using 60-70% CPU; the thread count seems to stay stable, it does not increase.
I have run my C# profiler on my code and it comes down to the following line of code: System.Net.Sockets.Socket.BeginReceive().
I am using .NET async sockets. Below is my receive callback. My suspicion is that I am not handling the case when bytesRead is NOT > 0. How should I modify my function below to handle that case correctly?
public static void ReadCallback(IAsyncResult ar)
{
SocketState tmpRef = null;
try
{
if (ar != null)
{
tmpRef = (SocketState)ar.AsyncState;
if (tmpRef != null)
{
// Read data from the client socket.
int bytesRead = tmpRef.WorkSocket.Client.EndReceive(ar);
//Start Reading Again
tmpRef.BeginReading(tmpRef._receievCallbackAction);
if (bytesRead > 0)
{
// Check if we have a complete message yet
for (var i = 0; i < bytesRead; i++)
{
if (tmpRef._receiveBuffer[i] == 160)
{
var tmpBuffer = new byte[i];
Array.Copy(tmpRef._receiveBuffer, tmpBuffer, i);
//Execute callback
tmpRef._receievCallbackAction(tmpBuffer);
break;
}
}
}
}
}
}
catch (Exception e)
{
if (tmpRef != null)
{
//Call callback with null value to indicate a failure
tmpRef._receievCallbackAction(null);
}
}
}
Full code (sorry, I don't want to clutter the post):
https://www.dropbox.com/s/yqjtz0r3ppgd11f/SocketState.cs?dl=0
The problem is that if you do not yet have enough bytes, your code spins forever waiting for the next byte to show up.
What you need to do is make a messageBuffer that survives between calls and write to that until you have all the data you need. Also, from the way your code looks, you have the opportunity to overwrite tmpRef._receiveBuffer before you have copied all the data out; your BeginReading needs to start after the copy if you are sharing a buffer.
public class SocketState
{
private readonly List<byte> _messageBuffer = new List<byte>(BufferSize);
//...
/// <summary>
/// Async Receive Callback
/// </summary>
/// <param name="ar"></param>
public static void ReadCallback(IAsyncResult ar)
{
SocketState tmpRef = null;
try
{
if (ar != null)
{
tmpRef = (SocketState)ar.AsyncState;
if (tmpRef != null)
{
// Read data from the client socket.
int bytesRead = tmpRef.WorkSocket.Client.EndReceive(ar);
if (bytesRead > 0)
{
//Loop over the bytes we received this read
for (var i = 0; i < bytesRead; i++)
{
//Copy the bytes from the receive buffer to the message buffer.
tmpRef._messageBuffer.Add(tmpRef._receiveBuffer[i]);
// Check if we have a complete message yet
if (tmpRef._receiveBuffer[i] == 160)
{
//Copy the bytes to a tmpBuffer to be passed on to the callback.
var tmpBuffer = tmpRef._messageBuffer.ToArray();
//Execute callback
tmpRef._receievCallbackAction(tmpBuffer);
//reset the message buffer and keep reading the current bytes read
tmpRef._messageBuffer.Clear();
}
}
//Start Reading Again
tmpRef.BeginReading(tmpRef._receievCallbackAction);
}
}
}
}
catch (Exception e)
{
if (tmpRef != null)
{
//Call callback with null value to indicate a failure
tmpRef._receievCallbackAction(null);
}
}
}
//...
You explain that the problem occurs after 1-2 weeks, which makes it quite rare.
I would suggest orienting your research toward improving the exception handling in your ReadCallback.
Within this exception handling you are invoking the callbackAction with null.
Maybe you should consider answering the following questions:
How does the callbackAction behave when invoked with null: tmpRef._receievCallbackAction(null)?
What kind of exception is caught? If it is a SocketException, look at the ErrorCode, which might give you an indication.
Would it be possible to dump the stack trace to know exactly where it fails?
Another weak point: BeginReceive uses this as the state object.
WorkSocket.Client.BeginReceive(_receiveBuffer, 0, BufferSize, 0, ReadCallback, this);
This means that the thread safety of the ReadCallback is not entirely guaranteed, because the call to BeginReading will occur before you have processed the _receiveBuffer.
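As a sketch of what that means in practice (reusing names from the question's code, but this is not the poster's code): copy the bytes out of the shared buffer before restarting the read.
public static void ReadCallback(IAsyncResult ar)
{
    var state = (SocketState)ar.AsyncState;
    int bytesRead = state.WorkSocket.Client.EndReceive(ar);

    // Take a private copy of the received bytes first...
    var copy = new byte[bytesRead];
    Array.Copy(state._receiveBuffer, copy, bytesRead);

    // ...and only then restart the read into the shared _receiveBuffer.
    state.BeginReading(state._receievCallbackAction);

    // Process 'copy' here without racing the next receive.
}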
I have an application that receives data from a wireless radio using RS-232. These radios use an API for communicating with multiple clients. To use the radios I created a library for communicating with them that other software can utilize with minimal changes compared to a normal SerialPort connection. The library reads from a SerialPort object and inserts incoming data into different buffers depending on which radio it receives from. Each packet that is received contains a header indicating its length, source, etc.
I start by reading the header, which is fixed-length, from the port and parsing it. In the header, the length of the data is defined before the data payload itself, so once I know the length of the data, I then wait for that much data to be available, then read in that many bytes.
Example (the other elements from the header are omitted):
// Read header
byte[] header = new byte[RCV_HEADER_LENGTH];
this.Port.Read(header, 0, RCV_HEADER_LENGTH);
// Get length of data in packet
short dataLength = header[1];
byte[] payload = new byte[dataLength];
// Make sure all the payload of this packet is ready to read
while (this.Port.BytesToRead < dataLength) { }
this.Port.Read(payload, 0, dataLength);
Obviously the empty while loop is bad. If for some reason the data never arrives, the thread will lock up. I haven't encountered this problem yet, but I'm looking for an elegant way to handle it. My first thought is to add a short timer that starts just before the while loop and sets an abortRead flag when it elapses, which would break the loop, like this:
// Make sure all the payload of this packet is ready to read
abortRead = false;
readTimer.Start();
while (this.Port.BytesToRead < dataLength && !abortRead) {}
This code needs to handle a constant stream of incoming data as quickly as it can, so keeping overhead to a minimum is a concern, and I am wondering if I am doing this properly.
You don't have to run this while loop; the Read method will either fill the buffer for you or throw a TimeoutException if the buffer wasn't filled within the SerialPort.ReadTimeout (which you can adjust to your needs).
One general remark: your while loop causes intensive CPU work for nothing. In the few milliseconds it takes the data to arrive, you would go through thousands of iterations of this loop; you should at least add a Thread.Sleep inside.
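A rough sketch of the ReadTimeout approach (assuming this.Port is a System.IO.Ports.SerialPort; the timeout value is just an example):
this.Port.ReadTimeout = 2000; // ms; Read() now throws instead of blocking forever

byte[] payload = new byte[dataLength];
int offset = 0;
try
{
    // SerialPort.Read can return fewer bytes than requested,
    // so keep reading until the whole payload has arrived.
    while (offset < dataLength)
    {
        offset += this.Port.Read(payload, offset, dataLength - offset);
    }
}
catch (TimeoutException)
{
    // Nothing (or not enough) arrived within ReadTimeout; abort or retry here.
}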
If you want to truly address this problem, you need to run the code in the background. There are different options for that: you can start a thread, you can start a Task, or you can use async/await.
To fully cover all options, the answer would be endless. If you use threads or tasks with the default scheduler and your wait time is expected to be rather short, you can use SpinWait.SpinUntil instead of your while loop. This will perform better than your solution:
SpinWait.SpinUntil(() => this.Port.BytesToRead >= dataLength);
If you are free to use async await, I would recommend this solution, since you need only a few changes to your code. You can use Task.Delay and in the best case you pass a CancellationToken to be able to cancel your operation:
try {
while (this.Port.BytesToRead < dataLength) {
await Task.Delay(100, cancellationToken);
}
}
catch (OperationCanceledException) {
//Cancellation logic
}
I think I would do this asynchronously with the SerialPort DataReceived event.
// Class fields
private const int RCV_HEADER_LENGTH = 8;
private const int MAX_DATA_LENGTH = 255;
private SerialPort Port;
private byte[] PacketBuffer = new byte[RCV_HEADER_LENGTH + MAX_DATA_LENGTH];
private int Readi = 0;
private int DataLength = 0;
// In your constructor
this.Port.DataReceived += new SerialDataReceivedEventHandler(DataReceivedHandler);
private void DataReceivedHandler(object sender, SerialDataReceivedEventArgs e)
{
if (e.EventType != SerialData.Chars)
{
return;
}
// Read all available bytes.
int len = Port.BytesToRead;
byte[] data = new byte[len];
Port.Read(data, 0, len);
// Go through each byte.
for (int i = 0; i < len; i++)
{
// Add the next byte to the packet buffer.
PacketBuffer[Readi++] = data[i];
// Check if we've received the complete header.
if (Readi == RCV_HEADER_LENGTH)
{
DataLength = PacketBuffer[1];
}
// Check if we've received the complete data.
if (Readi == RCV_HEADER_LENGTH + DataLength)
{
// The packet is complete add it to the appropriate buffer.
Readi = 0;
}
}
}
I have a small game server I'm making that will have dozens of connections sending player data constantly. While I've finally accomplished some basics and now have data sending/receiving, I now face the problem of flooding the server and the client with too much data. I've tried to throttle it back, but even then I am hitting 90-100% CPU simply from receiving and processing the incoming data.
The method below is a bare version of receiving data from the server. The server sends a List of data to be received by the player, which is then iterated over. I've thought about using a dictionary keyed by type instead of for-looping, but I don't think that will significantly improve things; the problem is that data is processed non-stop because player positions are constantly being updated, sent to the server, and then sent to other players.
The code below shows the receive for the client; the server receive looks very similar. How might I begin to overcome this issue? Please be nice, I am still new to network programming.
private void Receive(System.Object client)
{
MemoryStream memStream = null;
TcpClient thisClient = (TcpClient)client;
List<System.Object> objects = new List<System.Object>();
while (thisClient.Connected && playerConnected == true)
{
try
{
do
{
//when receiving data, first comes length then comes the data
byte[] buffer = GetStreamByteBuffer(netStream, 4); //blocks while waiting for data
int msgLenth = BitConverter.ToInt32(buffer, 0);
if (msgLenth <= 0)
{
playerConnected = false;
thisClient.Close();
break;
}
if (msgLenth > 0)
{
buffer = GetStreamByteBuffer(netStream, msgLenth);
memStream = new MemoryStream(buffer);
}
} while (netStream.DataAvailable);
if (memStream != null)
{
BinaryFormatter formatter = new BinaryFormatter();
memStream.Position = 0;
objects = new List<System.Object>((List<System.Object>)formatter.Deserialize(memStream));
}
}
catch (Exception ex)
{
Console.WriteLine("Exception: " + ex.ToString());
if (thisClient.Connected == false)
{
playerConnected = false;
netStream.Close();
thisClient.Close();
break;
}
}
try
{
if (objects != null)
{
for (int i = 0; i < objects.Count; i++)
{
if(objects[i] != null)
{
if (objects[i].GetType() == typeof(GameObject))
{
GameObject p = (GameObject)objects[i];
GameObject item;
if (mapGameObjects.TryGetValue(p.objectID, out item))
{
mapGameObjects[p.objectID] = p;
}
else
{
mapGameObjects.Add(p.objectID, p);
}
}
}
}
}
}
catch (Exception ex)
{
Console.WriteLine("Exception " + ex.ToString());
if (thisClient.Connected == false)
{
playerConnected = false;
netStream.Close();
break;
}
}
}
Console.WriteLine("Receive thread closed for client.");
}
public static byte[] GetStreamByteBuffer(NetworkStream stream, int n)
{
byte[] buffer = new byte[n];
int bytesRead = 0;
int chunk = 0;
while (bytesRead < n)
{
chunk = stream.Read(buffer, (int)bytesRead, buffer.Length - (int)bytesRead);
if (chunk == 0)
{
break;
}
bytesRead += chunk;
}
return buffer;
}
Based on the code shown, I can't say why the CPU utilization is high. The loop will wait for data, and the wait should not consume CPU. That said, it still polls the connection by checking the DataAvailable property, which is inefficient and can cause you to ignore received data (in the implementation shown... that's not an inherent problem with DataAvailable).
I'll go one further than the other answer and state that you should simply rewrite the code. Polling the socket is just no way to handle network I/O. This would be true in any scenario, but it is especially problematic if you are trying to write a game server, because you're going to use up a lot of your CPU bandwidth needlessly, taking it away from game logic.
The two biggest changes you should make here are:
Don't use the DataAvailable property. Ever. Instead, use one of the asynchronous APIs for dealing with network I/O. My favorite approach with the latest .NET is to wrap the Socket in a NetworkStream (or get the NetworkStream from a TcpClient as you do in your code) and then use the Stream.ReadAsync() along with async and await. But the older asynchronous APIs for Sockets work well also.
Separate your network I/O code from the game logic code. The Receive() method you show here has both the I/O and the actual processing of the data relative to the game state in the same method. These two pieces of functionality really belong in two separate classes. Keep both classes, and especially the interface between them, very simple and the code will be a lot easier to write and to maintain.
If you decide to ignore all of the above, you should at least be aware that your GetStreamByteBuffer() method has a bug in it: if you reach the end of the stream before reading as many bytes as were requested, you still return a buffer as large as was requested, with no way for the caller to know the buffer is incomplete.
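One possible fix, as a sketch (not the poster's code), is to fail loudly when the stream ends early:
public static byte[] GetStreamByteBuffer(NetworkStream stream, int n)
{
    byte[] buffer = new byte[n];
    int bytesRead = 0;
    while (bytesRead < n)
    {
        int chunk = stream.Read(buffer, bytesRead, n - bytesRead);
        if (chunk == 0)
        {
            // The connection was closed before 'n' bytes arrived.
            throw new EndOfStreamException(
                string.Format("Expected {0} bytes but the stream ended after {1}.", n, bytesRead));
        }
        bytesRead += chunk;
    }
    return buffer;
}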
And finally, IMHO you should be more careful about how you shut down and close the connection. Read about "graceful closure" for the TCP protocol. It's important that each end signal that it is done sending, and that each end receive the other end's signal, before either end actually closes the connection. This will allow the underlying networking protocol to release resources as efficiently and as quickly as possible. Note that TcpClient exposes the socket as the Client property, which you can use to call Shutdown().
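A minimal sketch of such a graceful closure (assuming thisClient and netStream from the question; not the poster's exact code):
// Tell the peer we have finished sending.
thisClient.Client.Shutdown(SocketShutdown.Send);

// Drain anything the peer still sends until it closes its side (Read returns 0).
var drain = new byte[1024];
while (netStream.Read(drain, 0, drain.Length) > 0)
{
    // discard (or process) the remaining data
}

netStream.Close();
thisClient.Close();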
Polling is rarely a good approach to communication, unless you're programming 16-bit microcontrollers (and even then, probably not the best solution).
What you need to do is to switch to a producer-consumer pattern, where your input port (a serial port, an input file, or a TCP socket) will act as a producer filling a FIFO buffer (a queue of bytes), and some other part of your program will be able to asynchronously consume the enqueued data.
In C#, there are several ways to do it: you can simply write a couple of methods using a ConcurrentQueue<byte> or a BlockingCollection, or you can try a library like the TPL Dataflow Library, which IMO doesn't add too much value over existing structures in .NET 4. Prior to .NET 4, you would simply use a Queue<byte>, a lock, and an AutoResetEvent to do the same job.
So the general idea is (see the sketch after this list):
When your input port fires a "data received" event, enqueue all received data into the FIFO buffer and set a synchronization event to notify the consumer,
In your consumer thread, wait for the synchronization event. When the signal is received, check if there is enough data in the queue. If yes, process it; if not, continue waiting for the next signal.
For robustness, use an additional watchdog timer (or simply "time since last received data") to be able to fail on timeout.
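A minimal sketch of that producer/consumer idea using BlockingCollection (class and member names are made up for illustration):
using System.Collections.Concurrent;

public class ByteFifo
{
    private readonly BlockingCollection<byte> _queue = new BlockingCollection<byte>();

    // Producer side: call this from the DataReceived (or socket receive) handler.
    public void Enqueue(byte[] data, int count)
    {
        for (int i = 0; i < count; i++)
            _queue.Add(data[i]);
    }

    // Consumer side: runs on its own thread and blocks until enough bytes are available.
    public byte[] TakeMessage(int messageLength)
    {
        var message = new byte[messageLength];
        for (int i = 0; i < messageLength; i++)
            message[i] = _queue.Take(); // blocks while the queue is empty
        return message;
    }
}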
You want to use the Task-based Asynchronous Pattern. Probably making liberal use of the async function modifier and the await keyword.
You'd be best replacing GetStreamByteBuffer with a direct call to ReadAsync.
For instance you could asynchronously read from a stream like this.
private static async Task<T> ReadAsync<T>(
Stream source,
CancellationToken token)
{
int requestLength;
{
var initialBuffer = new byte[sizeof(int)];
var readCount = await source.ReadAsync(
initialBuffer,
0,
sizeof(int),
token);
if (readCount != sizeof(int))
{
throw new InvalidOperationException(
"Not enough bytes in stream to read request length.");
}
requestLength = BitConverter.ToInt32(initialBuffer, 0);
}
var requestBuffer = new byte[requestLength];
var bytesRead = await source.ReadAsync(
requestBuffer,
0,
requestLength,
token);
if (bytesRead != requestLength)
{
throw new InvalidDataException(
string.Format(
"Not enough bytes in stream to match request length." +
" Expected:{0}, Actual:{1}",
requestLength,
bytesRead));
}
var serializer = new BinaryFormatter();
using (var requestData = new MemoryStream(requestBuffer))
{
return (T)serializer.Deserialize(requestData);
}
}
Like your code this reads an int from the stream to get the length, then reads that number of bytes and uses the BinaryFormatter to deserialize the data to the specified generic type.
Using this generic function you can simplify your logic,
private async Task Receive(
TcpClient thisClient,
CancellationToken token)
{
IList<object> objects = null;
while (thisClient.Connected && playerConnected == true)
{
try
{
objects = await ReadAsync<List<object>>(netStream, token);
}
catch (Exception ex)
{
Console.WriteLine("Exception: " + ex.ToString());
if (thisClient.Connected == false)
{
playerConnected = false;
netStream.Close();
thisClient.Close();
break;
}
}
try
{
foreach (var p in objects.OfType<GameObject>())
{
if (p != null)
{
mapGameObjects[p.objectID] = p;
}
}
}
catch (Exception ex)
{
Console.WriteLine("Exception " + ex.ToString());
if (thisClient.Connected == false)
{
playerConnected = false;
netStream.Close();
break;
}
}
}
Console.WriteLine("Receive thread closed for client.");
}
You need to put a Thread.Sleep(10) in your while loop. This is also a very fragile way to receive TCP data, because it assumes the other side has sent all the data before you call this receive. If the other side has only sent half of the data, this method fails. This can be countered by either sending fixed-size packets or sending the length of a packet first.
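For example, length-prefixed sending on the other side might look roughly like this (a sketch; SerializeObjects is a made-up helper, and netStream is assumed to be the NetworkStream):
byte[] payload = SerializeObjects(objectsToSend);            // hypothetical helper: serialize the List<object> to a byte[]
byte[] lengthPrefix = BitConverter.GetBytes(payload.Length); // 4-byte length header

netStream.Write(lengthPrefix, 0, lengthPrefix.Length);       // send the length first...
netStream.Write(payload, 0, payload.Length);                 // ...then the data itself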
Your player position update is similar to the framebuffer update in the VNC protocol, where the client requests a screen frame and the server responds with the updated screen data. But there is one exception: the VNC server doesn't blindly send the whole screen, it only sends the changes if there are any. So you need to change the logic from sending the whole requested list of objects to sending only the objects that changed since the last send. In addition, you should send the entire object only once; after that, send only the changed properties. This will greatly reduce the amount of data sent and processed, both at the clients and at the server.
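As a very rough sketch of that "send only what changed" idea (everything here, including the Position property, is assumed for illustration; it is not the poster's code):
private Dictionary<int, GameObject> lastSent = new Dictionary<int, GameObject>();

private List<GameObject> GetChangedObjects(Dictionary<int, GameObject> current)
{
    var changed = new List<GameObject>();
    foreach (var kvp in current)
    {
        GameObject previous;
        // Send the object if it has never been sent, or if its (assumed) Position changed since the last send.
        if (!lastSent.TryGetValue(kvp.Key, out previous) || !previous.Position.Equals(kvp.Value.Position))
        {
            changed.Add(kvp.Value);
            lastSent[kvp.Key] = kvp.Value;
        }
    }
    return changed;
}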
In my WPF 4.0 application, I have a UDP listener implemented as shown below. On my Windows 7 PC, I'm running both server and client on localhost.
Each received datagram is a scanline of a larger bitmap, so after all scanlines have been received the bitmap is shown on the UI thread. This seems to work. However, occasionally some 1-50% of the scanlines are missing. I would expect this on a weak network connection, but not when running locally.
What may cause UDP package loss with the following piece of code?
IPEndPoint endPoint = new IPEndPoint(IPAddress.Any, PORT);
udpClient = new UdpClient(endPoint);
udpClient.Client.ReceiveBufferSize = 65535; // I've tried many different sizes...
var status = new UdpStatus()
{
u = udpClient,
e = endPoint
};
udpClient.BeginReceive(new AsyncCallback(UdpCallback), status);
private void UdpCallback(IAsyncResult ar)
{
IPEndPoint endPoint = ((UdpStatus)(ar.AsyncState)).e;
UdpClient client = ((UdpStatus)(ar.AsyncState)).u;
byte[] datagram = client.EndReceive(ar, ref endPoint);
// Immediately begin listening for next packet so as to not miss any.
client.BeginReceive(new AsyncCallback(UdpCallback), ar.AsyncState);
lock (bufferLock)
{
// Fast processing of datagram.
// This merely involves copying the datagram (scanline) into a larger buffer.
//
// WHEN READY:
// Here I can see that scanlines are missing in my larger buffer.
}
}
If I put a System.Diagnostics.Debug.WriteLine in my callback, the packet loss increases dramatically. It seems that even a small millisecond delay inside this callback causes problems. Still, the same problem is seen in my release build.
UPDATE
The error becomes more frequent when I stress the UI a bit. Is the UdpClient instance executed on the main thread?
To avoid the thread block issue, try this approach that uses the newer IO Completion port receive method:
private void OnReceive(object sender, SocketAsyncEventArgs e)
{
TOP:
if (e != null)
{
int length = e.BytesTransferred;
if (length > 0)
{
FireBytesReceivedFrom(Datagram, length, (IPEndPoint)e.RemoteEndPoint);
}
e.Dispose(); // could possibly reuse the args?
}
Socket s = Socket;
if (s != null && RemoteEndPoint != null)
{
e = new SocketAsyncEventArgs();
try
{
e.RemoteEndPoint = RemoteEndPoint;
e.SetBuffer(Datagram, 0, Datagram.Length); // don't allocate a new buffer every time
e.Completed += OnReceive;
// this uses the fast IO completion port stuff made available in .NET 3.5; it's supposedly better than the socket selector or the old Begin/End methods
if (!s.ReceiveFromAsync(e)) // returns synchronously if data is already there
goto TOP; // using GOTO to avoid overflowing the stack
}
catch (ObjectDisposedException)
{
// this is expected after a disconnect
e.Dispose();
Logger.Info("UDP Client Receive was disconnected.");
}
catch (Exception ex)
{
Logger.Error("Unexpected UDP Client Receive disconnect.", ex);
}
}
}
I am trying to read from several serial ports connected to sensors through microcontrollers. Each serial port will receive more than 2000 measurements (each measurement is 7 bytes, all in hex), and they all fire at the same time. Right now I am polling 4 serial ports. I also translate each measurement into a string and append it to a StringBuilder; when I finish receiving data, it is output to a file. The problem is that CPU consumption is very high, ranging from 80% to 100%.
I went through some articles and put a Thread.Sleep(100) at the end. It reduces CPU time when there is no data coming. I also put a Thread.Sleep at the end of each polling pass when BytesToRead is smaller than 100. It only helps to a certain extent.
Can someone suggest a solution for polling the serial ports and handling the data I get? Maybe appending every time I get something causes the problem?
//I use separate threads for all sensors
private void SensorThread(SerialPort mySerialPort, int bytesPerMeasurement, TextBox textBox, StringBuilder data)
{
textBox.BeginInvoke(new MethodInvoker(delegate() { textBox.Text = ""; }));
int bytesRead;
int t;
Byte[] dataIn;
while (mySerialPort.IsOpen)
{
try
{
if (mySerialPort.BytesToRead != 0)
{
//trying to read a fix number of bytes
bytesRead = 0;
t = 0;
dataIn = new Byte[bytesPerMeasurement];
t = mySerialPort.Read(dataIn, 0, bytesPerMeasurement);
bytesRead += t;
while (bytesRead != bytesPerMeasurement)
{
t = mySerialPort.Read(dataIn, bytesRead, bytesPerMeasurement - bytesRead);
bytesRead += t;
}
//convert them into hex string
StringBuilder s = new StringBuilder();
foreach (Byte b in dataIn) { s.Append(b.ToString("X") + ","); }
var line = s.ToString();
var lineString = string.Format("{0} ---- {1}",
line,
mySerialPort.BytesToRead);
data.Append(lineString + "\r\n");//append a measurement to a huge Stringbuilder...Need a solution for this.
////use delegate to change UI thread...
textBox.BeginInvoke(new MethodInvoker(delegate() { textBox.Text = line; }));
if (mySerialPort.BytesToRead <= 100) { Thread.Sleep(100); }
}
else{Thread.Sleep(100);}
}
catch (Exception ex)
{
//MessageBox.Show(ex.ToString());
}
}
}
This is not a good way to do it; it is far better to work with the DataReceived event.
Basically, with serial ports there's a 3-stage process that works well:
Receiving the Data from the serial port
Waiting till you have a relevant chunk of data
Interpreting the data
So, something like:
class DataCollector
{
private readonly Action<List<byte>> _processMeasurement;
private readonly string _port;
private SerialPort _serialPort;
private const int SizeOfMeasurement = 4;
List<byte> Data = new List<byte>();
public DataCollector(string port, Action<List<byte>> processMeasurement)
{
_processMeasurement = processMeasurement;
_serialPort = new SerialPort(port);
_serialPort.DataReceived +=SerialPortDataReceived;
}
private void SerialPortDataReceived(object sender, SerialDataReceivedEventArgs e)
{
while(_serialPort.BytesToRead > 0)
{
var count = _serialPort.BytesToRead;
var bytes = new byte[count];
_serialPort.Read(bytes, 0, count);
AddBytes(bytes);
}
}
private void AddBytes(byte[] bytes)
{
Data.AddRange(bytes);
while(Data.Count >= SizeOfMeasurement)
{
var measurementData = Data.GetRange(0, SizeOfMeasurement);
Data.RemoveRange(0, SizeOfMeasurement);
if (_processMeasurement != null) _processMeasurement(measurementData);
}
}
}
Note: AddBytes keeps collecting data till you have enough to count as a measurement, or, if you get a burst of data, splits it up into separate measurements... so you can get 1 byte one time, 2 the next, and 1 more the next, and it will then take that and turn it into a measurement. Most of the time, if your micro sends it in a burst, it will come in as one, but sometimes it will get split into 2.
Then somewhere you can do:
var collector = new DataCollector("COM1", ProcessMeasurement);
and
private void ProcessMeasurement(List<byte> bytes)
{
// this will get called for every measurement, so then
// put stuff into a text box.... or do whatever
}
First of all, consider reading Using Stopwatches and Timers in .NET. You can break down any performance issue with this and tell exactly which part of your code is causing the problem.
Use the SerialPort.DataReceived event to trigger the data-receiving process.
Separate the receiving process from the data-manipulation process: store your data first, then process it.
Do not update the UI from the reading loop.
I guess what you should be doing is adding an event handler to process incoming data:
mySerialPort.DataReceived += new SerialDataReceivedEventHandler(mySerialPort_DataReceived);
This eliminates the need to run a separate thread for each serial port you listen to. Also, each DataReceived handler will be called precisely when there is data available and will consume only as much CPU time as is necessary to process the data, then yield to the application/OS.
If that doesn't solve the CPU usage problem, it means you're doing too much processing. But unless you've got some very fast serial ports I can't imagine the code you've got there will pose a problem.