How to send all data in one buffer through serial port? - c#

My existing code:
private void ConvertAndSend_Click(object sender, EventArgs e)
{
    if (serialPort.IsOpen)
    {
        byte[] TxBuffer = new byte[240];
        string[] coords = textBox1.Text.Split('\n');
        for (int i = 0; i < coords.Length; i++)
        {
            if (coords[i].Length > 0)
            {
                Data = GetValue(coords[i]);
            }
        }
        TxBuffer[0] = 0x5A;
        TxBuffer[1] = Instruction;
        TxBuffer[2] = (byte)Data.Length;
        Data.CopyTo(TxBuffer, 3);
        TxBuffer[Data.Length + 3] = 0x2C;
        serialPort.Write(TxBuffer, 0, 4 + Data.Length);
    }
}
At the moment I send every "Data" in a separate "TxBuffer": if I have more than one "Data", I send more than one "TxBuffer". How can I combine all of the "Data" into one "TxBuffer" and send it in one go?

It isn't exactly "wrong", although a magic number like 240 doesn't win any prizes. You could also use a BinaryWriter and pass SerialPort.BaseStream to its constructor.
Keep in mind that serial ports implement streams, not packets: just a raw train of bytes with no distinctive beginning and end, just like TCP. There is no framing protocol unless you create your own, which you did. It is up to the receiver to turn the stream of bytes back into a frame; that same requirement doesn't exist when you transmit.
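Along those lines, a minimal sketch (hypothetical handler name; GetValue and Instruction as in the question, and assuming the receiver can parse back-to-back frames): build every frame into one MemoryStream with a BinaryWriter, then push the whole thing to the port in a single Write. Wrapping serialPort.BaseStream in the BinaryWriter instead would skip the intermediate buffer.
private void ConvertAndSendAll_Click(object sender, EventArgs e)
{
    if (!serialPort.IsOpen) return;

    using (var ms = new MemoryStream())
    using (var writer = new BinaryWriter(ms))
    {
        foreach (string line in textBox1.Text.Split('\n'))
        {
            if (line.Length == 0) continue;
            byte[] data = GetValue(line);
            writer.Write((byte)0x5A);        // start marker
            writer.Write(Instruction);       // instruction byte
            writer.Write((byte)data.Length); // payload length
            writer.Write(data);              // payload
            writer.Write((byte)0x2C);        // end marker
        }
        byte[] txBuffer = ms.ToArray();
        serialPort.Write(txBuffer, 0, txBuffer.Length); // one call for all frames
    }
}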

Related

tcp server failing after first loop

As above really: I'm trying to create a TCP Linux server in C to accept data, perform some processing, and then send it back.
The code I'm using on the client side to send the data and then read it back:
TcpClient tcpclnt = new TcpClient();
tcpclnt.Connect("192.168.0.14", 1235);
NetworkStream stm = tcpclnt.GetStream();
_signal.WaitOne();
Image<Bgr, Byte> frame = null;
while (_queue.TryDequeue(out frame))
{
    if (frame != null)
    {
        resizedBMPFrame = frame.Resize(0.5, Emgu.CV.CvEnum.INTER.CV_INTER_LINEAR).ToBitmap();
        using (MemoryStream ms = new MemoryStream())
        {
            resizedBMPFrame.Save(ms, ImageFormat.Bmp);
            byte[] byteFrame = ms.ToArray();
            l = byteFrame.Length;
            byte[] buf = Encoding.UTF8.GetBytes(l.ToString());
            stm.Write(buf, 0, buf.Length);
            stm.Write(byteFrame, 0, byteFrame.Length);
        }
    }
    else
    {
        Reading = false;
    }
    int i;
    Bitmap receivedBMPFrame;
    byte[] receivedFramesize = new byte[4];
    int j = stm.Read(receivedFramesize, 0, receivedFramesize.Length);
    int receivedFramesizeint = BitConverter.ToInt32(receivedFramesize, 0);
    byte[] receivedFrame = new byte[receivedFramesizeint];
    j = stm.Read(receivedFrame, 0, receivedFrame.Length);
    using (MemoryStream ms = new MemoryStream(receivedFrame))
    {
        receivedBMPFrame = new Bitmap(ms);
        if (receivedBMPFrame != null)
        {
            outputVideoPlayer.Image = receivedBMPFrame;
        }
        else
        {
            Reading = false;
        }
    }
}
stm.Close();
tcpclnt.Close();
So the idea is that it waits for the display thread to send the current frame it's displaying via a ConcurrentQueue, takes it, shrinks it to a quarter of the size, converts it to a byte array, and then sends its length followed by the frame itself over the TCP socket.
In theory the server receives it, performs some processing, and sends it back, so the client then reads the length of the returned frame followed by the new frame itself.
The server code is below:
while (1)
{
    int incomingframesize;
    int n;
    n = read(conn_desc, framesizebuff, 6);
    if (n > 0)
    {
        printf("Length of incoming frame is %s\n", framesizebuff);
    }
    else
    {
        printf("Failed receiving length\n");
        return -1;
    }
    char framebuff[atoi(framesizebuff)];
    n = read(conn_desc, framebuff, atoi(framesizebuff));
    if (n > 0)
    {
        printf("Received frame\n");
    }
    else
    {
        printf("Failed receiving frame\n");
        return -1;
    }
    printf("Ready to write\n");
    int k = sizeof(framebuff);
    n = write(conn_desc, &k, sizeof(int));
    if (n < 0)
    {
        printf("ERROR writing to socket\n");
    }
    else
    {
        printf("Return frame size is %d\n", k);
    }
    n = write(conn_desc, &framebuff, sizeof(char) * k);
    if (n < 0)
    {
        printf("ERROR writing to socket\n");
    }
    frameno++;
    printf("Frames sent: %d\n", frameno);
}
So it reads the length, then the actual frame, which seems to work, and for the moment it just sends the frame straight back without doing any processing. However, it only seems to work for one loop. If I step through the client code line by line, the server code runs through once, but on the client's second read (receiving the frame back from the server), the server immediately runs the two reads at the top of its loop without waiting for another write, and fails on the second of them, having seemingly read nothing. It outputs:
Length of incoming frame is
Failed receiving frame
With no number, which makes sense to me, as I haven't sent another write with the length of the next frame. I'm just wondering what I'm missing and why it's acting like this, since on the first loop it waits for the write commands from the client. Is there leftover data in the stream, so that when the server goes back to the top of the loop it immediately reads it again? Although it then doesn't print any number, which to me implies there's nothing there...
Any help would be greatly appreciated.
EDIT/UPDATE:
I changed the read/write sections on the server to handle a single byte at a time, like this (excerpts):
while (ntotal != incomingframesize)
{
    n = read(conn_desc, &framebuff[ntotal], sizeof(char));
    ntotal = ntotal + n;
}

while (i < k)
{
    m = write(conn_desc, &framebuff[i], sizeof(char));
    ...
}
Which seems to have solved the problems I was having, and now the correct data is being transferred :)
When the client writes the frame size it uses the length of some object, but when the server reads it, it always tries to read 6 characters. You need to use a fixed length for the frame size!
When reading, you cannot assume that you get as many bytes as you asked for. The return value, if > 0, is the number of bytes actually read; if you get less than you asked for, you need to keep reading until you have received the number of bytes you expect.
- First read until you've got the frame size (6 bytes).
- Next read until you've got the number of bytes indicated by the frame size.
- Make sure you use the same number of bytes for the frame size in all places.
Edit:
I also noted a bug in the call to write in the server:
n = write(conn_desc, &framebuff, sizeof(char)*k);
framebuff is a pointer to the data, so you probably mean:
n = write(conn_desc, &framebuff[0], k);
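For the C# client side, a hedged sketch of the framing this answer describes: a fixed 4-byte binary length prefix (instead of the ASCII string the question sends), plus a read loop that keeps going until the expected byte count has arrived. Names are illustrative; requires System, System.IO, and System.Net.Sockets.
static void WriteFrame(NetworkStream stm, byte[] payload)
{
    byte[] len = BitConverter.GetBytes(payload.Length); // always exactly 4 bytes
    stm.Write(len, 0, 4);
    stm.Write(payload, 0, payload.Length);
}

static byte[] ReadFrame(NetworkStream stm)
{
    byte[] len = ReadExactly(stm, 4);
    return ReadExactly(stm, BitConverter.ToInt32(len, 0));
}

static byte[] ReadExactly(NetworkStream stm, int count)
{
    byte[] buf = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int n = stm.Read(buf, offset, count - offset); // may return fewer bytes than requested
        if (n == 0) throw new EndOfStreamException("connection closed mid-frame");
        offset += n;
    }
    return buf;
}
The C server would need the matching change: read exactly 4 bytes for the length (minding byte order) before reading the frame.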

Parsing Serial communication in C#

I have an application that requires me to send a string of 4-10 ASCII characters to an RS422 UART serial receiver. The problem is that the UART buffer can only receive 2 bytes max every 10 ms or so. How do I parse out the data and send it in chunks without timing out on the other side?
The normal SerialPort.Write() method overflows the buffer, and I get an error response from the device every time I send anything. The device specifies a baud rate of 19200 but also says the data I write needs to be spaced out two bytes at a time. There is no parity, handshake, or flow-control support for the device.
Essentially I want to do something like this:
private void sendData(string text)
{
    string textnew = text + (char)13;  // append carriage return
    byte[] r_bytes = Encoding.ASCII.GetBytes(textnew);
    if (SerialComms.IsOpen)
    {
        for (int i = 0; i < r_bytes.Length; i += 2)
        {
            int count = Math.Min(2, r_bytes.Length - i); // last chunk may be a single byte
            SerialComms.Write(r_bytes, i, count);
            System.Threading.Thread.Sleep(10);           // give the device its ~10 ms gap
        }
    }
}
Is this possible, and is there an easier way to do it?
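One possible shape for this, as a sketch only (assuming the 2-bytes-every-10-ms constraint above, the trailing carriage return, and an async-capable caller so the UI thread isn't blocked by Thread.Sleep):
private async Task SendDataAsync(string text)
{
    byte[] bytes = Encoding.ASCII.GetBytes(text + '\r'); // CR terminator, as above
    if (!SerialComms.IsOpen) return;

    for (int i = 0; i < bytes.Length; i += 2)
    {
        int count = Math.Min(2, bytes.Length - i); // the final chunk may be a single byte
        SerialComms.Write(bytes, i, count);
        await Task.Delay(10);                      // pace the writes for the device
    }
}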

Network Streams - Amount to read per Read

I'm currently a bit stuck with my C# project.
I have two applications, and they both share a common class definition I call a NetMessage.
A NetMessage contains a MessageType string property, as well as two List collections.
The idea is that I can pack this class with classes and data to send across the network as a byte[].
Because network streams do not advertise the amount of data they are delivering, I modified my Send method to send the size of the NetMessage byte[] ahead of the actual byte[].
private static byte[] ReceivedBytes(NetworkStream MainStream)
{
    try
    {
        //byte[] myReadBuffer = new byte[1024];
        int receivedDataLength = 0;
        byte[] data = { };
        long len = 0;
        int i = 0;
        MainStream.ReadTimeout = 60000;
        //MainStream.CanTimeout = false;
        if (MainStream.CanRead)
        {
            //Read the length of the incoming message
            byte[] byteLen = new byte[8];
            MainStream.Read(byteLen, 0, 8);
            len = BitConverter.ToInt64(byteLen, 0);
            data = new byte[len];
            //data is now set to the appropriate size for the expected message.
            //While we have not got the full message,
            //read each individual byte and append it to data.
            //This method seems to work, but is ridiculously slow:
            while (receivedDataLength < data.Length)
            {
                receivedDataLength += MainStream.Read(data, receivedDataLength, 1);
            }
            //receivedDataLength += MainStream.Read(data, receivedDataLength, data.Length);
            return data;
        }
    }
    catch (Exception E)
    {
        //System.Windows.Forms.MessageBox.Show("Exception:" + E.ToString());
    }
    return null;
}
I have tried to change the size argument below to something like 1024, or to data.Length, but I get funky results:
receivedDataLength += MainStream.Read(data, receivedDataLength, 1);
Setting it to data.Length seems to cause problems when the class being sent is a few MB in size.
Setting the buffer size to 1024, like I have seen in other examples, causes failures when the incoming message is small (like 843 bytes): it errors out saying that I tried to read out of bounds.
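A hedged sketch of the usual middle ground, as a drop-in replacement for the byte-at-a-time loop above: request a larger chunk, but clamp the request to the bytes still missing so Read can never run past the end of data[], and honor the return value, since Read may deliver fewer bytes than asked for.
while (receivedDataLength < data.Length)
{
    // never ask for more than the remainder, so small messages are safe too
    int toRead = Math.Min(4096, data.Length - receivedDataLength);
    int n = MainStream.Read(data, receivedDataLength, toRead);
    if (n == 0)
        throw new EndOfStreamException("connection closed before the full message arrived");
    receivedDataLength += n;
}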
Below is the type of method being used to send the data in the first place.
public static void SendBytesToStream(NetworkStream TheStream, byte[] TheMessage)
{
    //IAsyncResult r = TheStream.BeginWrite(TheMessage, 0, TheMessage.Length, null, null);
    //r.AsyncWaitHandle.WaitOne(10000);
    //TheStream.EndWrite(r);
    try
    {
        long len = TheMessage.Length;
        byte[] Bytelen = BitConverter.GetBytes(len);
        TheStream.Write(Bytelen, 0, Bytelen.Length);
        TheStream.Flush();
        // <-- I've tried putting thread sleeps in this spot to see if it helps.
        //I've also tried writing each byte of the message individually:
        //takes longer, but seems more accurate as far as network transmission goes?
        TheStream.Write(TheMessage, 0, TheMessage.Length);
        TheStream.Flush();
    }
    catch (Exception e)
    {
        //System.Windows.Forms.MessageBox.Show(e.ToString());
    }
}
I'd like to get these two methods to the point where they reliably send and receive data.
The application I am using this for monitors a screenshots folder in a game directory;
when it detects a screenshot in TGA format, it converts it to PNG, takes its byte[], and sends it to the receiver.
The receiver then posts it to Facebook (I don't want my FB tokens distributed in my client application), hence the server/proxy idea.
It's strange, but when I step through the code, the transfer is invariably successful.
If I run it at full speed, with no breakpoint, it typically tells me that the connection was closed by the remote host.
The client typically finishes sending the data almost instantly, even though it's a 4 MB file.
The receiver spends about two minutes reading from the network stream, which doesn't make sense to me: if the client has finished sending the data, is the data just floating in cyberspace, waiting to be pulled down?
Surely it should be synchronous?
I suspect I know where my code was going wrong.
It turns out that the TcpClient doing the sending was declared and instantiated inside a method.
Once the method returned, the garbage collector could dispose of it, even though the receiving server had not finished downloading the stream.
I managed to resolve it by adding a method on the sending side that detects when the client has disconnected on the server end, and keeps looping/waiting until it actually has.
This way, we wait until the server lets go of us.

c# SslStream.Read Loop problem

I've been learning C# by creating an app, and I've hit a snag I'm really struggling with.
Basically, I have the code below, which is what I'm using to read from a network stream I have set up. It works, but it only processes one packet each time SslStream.Read() unblocks, which causes a big backlog of messages.
What I'm trying to do is: if the chunk of the stream just read contains multiple packets, read them all.
I've tried multiple times to work it out, but I just ended up with a big mess of code.
If anyone could help out I'd appreciate it!
(The first 4 bytes of each packet are the size of the packet; packets range between 8 bytes and 28,000 bytes.)
SslStream _sslStream = (SslStream)_sslconnection;
int bytes = -1;
int nextread = 0;
int byteslefttoread = -1;
byte[] tmpMessage;
byte[] buffer = new byte[3000000];
do
{
    bytes = _sslStream.Read(buffer, nextread, 8192);
    int packetSize = BitConverter.ToInt32(buffer, 0);
    nextread += bytes;
    byteslefttoread = packetSize - nextread;
    if (byteslefttoread <= 0)
    {
        int leftover = Math.Abs(byteslefttoread);
        do
        {
            tmpMessage = new byte[packetSize];
            Buffer.BlockCopy(buffer, 0, tmpMessage, 0, packetSize);
            PacketHolder tmpPacketHolder = new PacketHolder(tmpMessage, "in");
            lock (StaticMessageBuffers.MsglockerIn)
            {
                //puts message into the message queue.. not very OOP... :S
                MessageInQueue.Enqueue(tmpPacketHolder);
            }
        }
        while (leftover > 0);
        Buffer.BlockCopy(buffer, packetSize, buffer, 0, leftover);
        byteslefttoread = 0;
        nextread = leftover;
    }
} while (bytes != 0);
If you are using .NET 3.5 or later, I would highly suggest you look into Windows Communication Foundation (WCF). It will simplify anything you are trying to do over a network.
On the other hand, if you are doing this purely for educational purposes:
Take a look at this link. Your best bet is to read from the stream in somewhat smaller increments and feed that data into another stream. Once you can identify the length of data you need for a message, you can cut the second stream off into a message. You can set up an outer loop that checks the available bytes and waits until the value is > 0 before starting the next message. I should also note that any network code should run on its own thread, so as to not block the UI thread.
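A rough sketch of that feed-into-another-stream idea (not the poster's code; it assumes, per the question, a 4-byte little-endian prefix that counts the whole packet including itself): append every read to a MemoryStream, then peel off each complete packet before reading again.
using System;
using System.IO;

class PacketAccumulator
{
    private readonly MemoryStream _pending = new MemoryStream();

    // Call once per SslStream.Read; onPacket fires once per complete packet.
    public void Accumulate(byte[] buffer, int count, Action<byte[]> onPacket)
    {
        _pending.Write(buffer, 0, count);

        while (true)
        {
            byte[] all = _pending.ToArray();       // fine for a sketch; avoid the copy in production
            if (all.Length < 4) break;             // length prefix not complete yet
            int packetSize = BitConverter.ToInt32(all, 0);
            if (all.Length < packetSize) break;    // packet still incomplete

            byte[] packet = new byte[packetSize];
            Buffer.BlockCopy(all, 0, packet, 0, packetSize);
            onPacket(packet);

            _pending.SetLength(0);                 // keep only the leftover bytes
            _pending.Write(all, packetSize, all.Length - packetSize);
        }
    }
}
The read loop then shrinks to: while ((bytes = _sslStream.Read(buffer, 0, buffer.Length)) > 0) accumulator.Accumulate(buffer, bytes, p => MessageInQueue.Enqueue(new PacketHolder(p, "in")));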

Data loss TCP IP C# [duplicate]

This question already has answers here:
Receiving data in TCP
(10 answers)
Closed 2 years ago.
Here's my code:
private void OnReceive(IAsyncResult result)
{
    NetStateObject state = (NetStateObject)result.AsyncState;
    Socket client = state.Socket;
    int size = client.EndReceive(result);
    byte[] buffer = state.Buffer;
    object data = null;
    using (MemoryStream stream = new MemoryStream(buffer))
    {
        BinaryFormatter formatter = new BinaryFormatter();
        data = formatter.Deserialize(stream);
    }
    //todo: something with data
    client.BeginReceive(
        state.Buffer,
        0,
        NetStateObject.BUFFER_SIZE,
        SocketFlags.None,
        OnReceive,
        state
    );
}
state.Buffer has a maximum size of NetStateObject.BUFFER_SIZE (1024). Firstly, is this too big or too small? Second, if I send something larger than that, my deserialization breaks, because the object it is trying to deserialize doesn't have all its information (not all the data was received). How do I make sure that all my data has been received before I try to construct it and do something with it?
Completed Working Code
private void OnReceive(IAsyncResult result)
{
    NetStateObject state = (NetStateObject)result.AsyncState;
    Socket client = state.Socket;
    try
    {
        //get the read data and see how many bytes we received
        int bytesRead = client.EndReceive(result);
        //store the data from the buffer
        byte[] dataReceived = state.Buffer;
        //this will hold the byte data for the number of bytes being received
        byte[] totalBytesData = new byte[4];
        //load the number byte data from the data received
        for (int i = 0; i < 4; i++)
        {
            totalBytesData[i] = dataReceived[i];
        }
        //convert the number byte data to a human-readable integer
        int totalBytes = BitConverter.ToInt32(totalBytesData, 0);
        //create a new array with the length of the total bytes being received
        byte[] data = new byte[totalBytes];
        //load what is in the buffer into the data[]
        for (int i = 0; i < bytesRead - 4; i++)
        {
            data[i] = state.Buffer[i + 4];
        }
        //receive packets from the connection until the number of bytes read is no longer less than we need
        while (bytesRead < totalBytes + 4)
        {
            bytesRead += state.Socket.Receive(data, bytesRead - 4, totalBytes + 4 - bytesRead, SocketFlags.None);
        }
        CommandData commandData;
        using (MemoryStream stream = new MemoryStream(data))
        {
            BinaryFormatter formatter = new BinaryFormatter();
            commandData = (CommandData)formatter.Deserialize(stream);
        }
        ReceivedCommands.Enqueue(commandData);
        client.BeginReceive(
            state.Buffer,
            0,
            NetStateObject.BUFFER_SIZE,
            SocketFlags.None,
            OnReceive,
            state
        );
        dataReceived = null;
        totalBytesData = null;
        data = null;
    }
    catch (Exception e)
    {
        Console.WriteLine("***********************");
        Console.WriteLine(e.Source);
        Console.WriteLine("***********************");
        Console.WriteLine(e.Message);
        Console.WriteLine("***********************");
        Console.WriteLine(e.InnerException);
        Console.WriteLine("***********************");
        Console.WriteLine(e.StackTrace);
    }
}
TCP is a stream protocol. It has no concept of packets: a single write call can be split across multiple packets, and multiple write calls can be coalesced into the same packet. So you need to implement your own packetizing logic on top of TCP.
There are two common ways to packetize:
- Delimiter characters, usually used in text protocols, with the newline being a common choice
- A length prefix on each packet, usually a good choice with binary protocols
With the second approach, you store the size of a logical packet at the beginning of that packet, then read until you have received enough bytes to fill the packet before you start deserializing.
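For this question's BinaryFormatter case, a minimal sketch of the length-prefix route (blocking calls for clarity; names are illustrative, and the prefix here counts only the payload):
using System;
using System.IO;
using System.Net.Sockets;
using System.Runtime.Serialization.Formatters.Binary;

// Receive one length-prefixed object from a socket.
static object ReceiveObject(Socket client)
{
    byte[] lenBuf = ReceiveExactly(client, 4);
    int length = BitConverter.ToInt32(lenBuf, 0);
    byte[] payload = ReceiveExactly(client, length);

    using (MemoryStream stream = new MemoryStream(payload))
    {
        return new BinaryFormatter().Deserialize(stream);
    }
}

static byte[] ReceiveExactly(Socket client, int count)
{
    byte[] buf = new byte[count];
    int got = 0;
    while (got < count)
    {
        int n = client.Receive(buf, got, count - got, SocketFlags.None);
        if (n == 0) throw new EndOfStreamException("socket closed mid-message");
        got += n;
    }
    return buf;
}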
How do I make sure that all my data has been received before I try to construct it and do something with it?
You have to implement some protocol so you know.
While TCP is reliable, it does not guarantee that the data from a single write at one end of the socket will appear as a single read at the other end: retries, packet fragmentation, and MTU can all lead to the data being received in different-sized units by the receiver. You will, however, get the data in the right order.
So you need to include some information when sending that allows the receiver to know when it has the complete message. I would also recommend including what kind of message and what version of the data (this will form the basis of being able to support different client and server versions together).
So the sender sends:
- Message type
- Message version
- Message size (in bytes)
And the receiver will loop, performing a read into a buffer and appending it to a master buffer (a MemoryStream is good for this). Once the complete header has been received, the receiver knows exactly how many bytes are still to come, and therefore when the complete message has arrived.
(Another route is to include some pattern as an "end of message" marker, but then you need to handle the same sequence of bytes occurring in the content—hard to do if the data is binary rather than text.)
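A hedged sketch of that header scheme (the field layout is illustrative, not a standard): a type byte, a version byte, then a 4-byte payload size, followed by the payload. The receiver reads the fixed-size header first, then loops until the full payload is in.
using System;
using System.IO;

static void WriteMessage(Stream s, byte type, byte version, byte[] payload)
{
    s.WriteByte(type);
    s.WriteByte(version);
    s.Write(BitConverter.GetBytes(payload.Length), 0, 4); // payload size in bytes
    s.Write(payload, 0, payload.Length);
}

static byte[] ReadMessage(Stream s, out byte type, out byte version)
{
    byte[] header = ReadAll(s, 6);   // fixed 1 + 1 + 4 byte header
    type = header[0];
    version = header[1];
    return ReadAll(s, BitConverter.ToInt32(header, 2));
}

static byte[] ReadAll(Stream s, int count)
{
    byte[] buf = new byte[count];
    for (int read = 0; read < count; )
    {
        int n = s.Read(buf, read, count - read); // a short read is normal
        if (n == 0) throw new EndOfStreamException();
        read += n;
    }
    return buf;
}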
