I have an application that requires me to send a string of 4-10 ASCII characters to an RS422 UART serial receiver. The problem is that the UART buffer can only receive 2 bytes max every 10 ms or so. How do I parse out the data and send it in chunks without timing out on the other side?
The normal serial Write() call overflows the buffer and I get an error response from the device every time I send anything. The device specifies a baud rate of 19200, but also says the data I write needs to be spaced out two bytes at a time. There is no parity, handshake, or flow-control support on the device.
Essentially I want to do something like this:
private void sendData(string text)
{
    // Append a carriage-return terminator and convert to ASCII bytes.
    string textnew = text + (char)13;
    byte[] r_bytes = Encoding.ASCII.GetBytes(textnew);
    if (SerialComms.IsOpen)
    {
        // Send two bytes at a time, pausing so the device's buffer can drain.
        for (int i = 0; i < r_bytes.Length; i += 2)
        {
            int count = Math.Min(2, r_bytes.Length - i); // the last chunk may be a single byte
            SerialComms.Write(r_bytes, i, count);
            System.Threading.Thread.Sleep(10);
        }
    }
}
Is this possible, and is there an easier way to do this?
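For what it's worth, a minimal sketch of the same chunking idea, with the loop condition fixed and the 10 ms pacing done with await Task.Delay instead of Thread.Sleep; it assumes a System.IO.Ports.SerialPort field named SerialComms, as in the question:
// Paced writes: at most two bytes per Write, with a 10 ms gap so the device's
// 2-byte receive buffer can drain between chunks.
private async Task SendDataAsync(string text)
{
    byte[] bytes = Encoding.ASCII.GetBytes(text + (char)13);
    if (!SerialComms.IsOpen)
        return;

    for (int i = 0; i < bytes.Length; i += 2)
    {
        int count = Math.Min(2, bytes.Length - i); // the last chunk may be a single byte
        SerialComms.Write(bytes, i, count);
        await Task.Delay(10);                      // pacing required by the device
    }
}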
I'm making a chatroom application, and I send TCP packets to communicate between the server and the client.
I have the following code:
string returnMessage = "[EVT]USERSUCCESS";
bytes = Encoding.ASCII.GetBytes(returnMessage);
info.WriteToStream(bytes);
foreach (ConnectionInfo con in connections)
{
    info.WriteToStream(bytes);
    bytes = Encoding.ASCII.GetBytes("[EVT]USERJOIN;" + username);
    con.WriteToStream(bytes);
}
However, when the client reads this, the response is:
[EVT]USERSUCCESS[EVT]USERJOIN;Name
Which seems as though it is receiving both packets at once..?
This is the code I have for receiving:
static void ServerListener()
{
    while (true)
    {
        byte[] bytes = new byte[1024];
        int numBytes = stream.Read(bytes, 0, bytes.Length);
        string message = Encoding.ASCII.GetString(bytes, 0, numBytes);
        if (HandleResponse(message) && !WindowHasFocus())
        {
            player.Play();
        }
    }
}
Which is run as a separate thread. HandleResponse() is fully working.
Thanks in advance!
TCP is a stream protocol, not a packet protocol.
You could get the bytes in multiple reads or in a single read, as you are seeing.
What you need to do is add a byte that signifies the end of a message (such as a null byte).
Pseudocode (a C# sketch follows below):
Have a buffer that you add every byte received to.
If there is a null byte in the buffer, split the buffer at the null byte.
Handle the left side as a complete message; keep the right side in the buffer until you receive another null byte.
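A minimal sketch of that buffering logic, assuming ASCII text messages and a null byte as the terminator (any delimiter the payload can never contain works the same way):
// Accumulates incoming bytes and yields every complete message terminated by a null byte.
private readonly List<byte> _buffer = new List<byte>();

private IEnumerable<string> ExtractMessages(byte[] received, int count)
{
    for (int i = 0; i < count; i++)
    {
        if (received[i] == 0) // end-of-message marker
        {
            string message = Encoding.ASCII.GetString(_buffer.ToArray());
            _buffer.Clear();
            yield return message;
        }
        else
        {
            _buffer.Add(received[i]);
        }
    }
}
The receive loop would call ExtractMessages(bytes, numBytes) after each Read and handle whatever complete messages come back.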
Advanced TCP:
You may want to add your own error checking in your "packet"
I have a TCP listener set up where I have an unknown message size coming in. I am not sure what the best practice is when it comes to handling unknown message sizes. Here is the code:
TcpListener _server = new TcpListener(_localAddr, _port);
_server.Start();
while (true)
{
    if (_server.Pending())
    {
        Byte[] bytes = new Byte[256]; // Works fine if message under this size
        string data = string.Empty;
        _client = _server.AcceptTcpClient();
        NetworkStream stream = _client.GetStream();
        int i;
        while ((i = stream.Read(bytes, 0, bytes.Length)) != 0)
        {
            data = System.Text.Encoding.ASCII.GetString(bytes, 0, i);
            data = data.ToUpper();
            // Do stuff with data
        }
    }
}
So you will notice I can change the size of the bytes array from 256 to something bigger, but I don't know if there is a better way than just setting the size to something I know is big enough. If the message is smaller than 256 bytes, it works fine.
Thanks for any help in advance
If the message length is not fixed, the service that sends you the messages should prepend a few bytes to every message (usually 4, because an Int32 is 32 bits, i.e. 4 bytes) that indicate the message length.
When you receive a message, you should first read these 4 bytes; once you parse them you know the message length and can create a byte array of the appropriate size.
Now you can read the message itself into this array.
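A minimal sketch of that read sequence, assuming a connected NetworkStream named stream and a 4-byte little-endian Int32 length prefix:
// Reads exactly 'count' bytes, looping because Read may return fewer bytes than requested.
static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new IOException("Connection closed before the full message arrived.");
        offset += read;
    }
    return buffer;
}

// Usage: read the 4-byte length first, then a body of exactly that size.
byte[] lengthBytes = ReadExactly(stream, 4);
int messageLength = BitConverter.ToInt32(lengthBytes, 0);
byte[] message = ReadExactly(stream, messageLength);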
I am relatively new to C#. In my TCP client I have the following function, which sends data to the server and returns the response:
private static TcpClient tcpint = new TcpClient(); //Already initiated and set up
private static NetworkStream stm; //Already initiated and set up
private static String send(String data)
{
    // Send data to the server
    ASCIIEncoding asen = new ASCIIEncoding();
    byte[] ba = asen.GetBytes(data);
    stm.Write(ba, 0, ba.Length);

    // Read data from the server
    byte[] bb = new byte[100];
    int k = stm.Read(bb, 0, 100);

    // Construct the response from the byte array into a string
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < k; i++)
    {
        sb.Append(bb[i].ToString());
    }

    // Return the server response
    return sb.ToString();
}
As you can see here, when I am reading the response from the server, I am reading it into a fix byte[] array of length 100 bytes.
byte[] bb = new byte[100];
int k = stm.Read(bb, 0, 100);
What do I do if the response from the server is more than 100 bytes? How can I read the data without knowing what the maximum length of data from the server will be?
Typically, where there is not some specific intrinsic size for a message, TCP protocols explicitly send the length of the objects they are sending. One possible method, for illustration:
size_t data_len = strlen(some_data_blob);
char lenstr[32];
sprintf(lenstr, "%zd\n", data_len);
send(socket, lenstr, strlen(lenstr), 0);
send(socket, some_data_blob, data_len, 0);
Then, when the receiver reads the length string, it knows exactly how much data should follow. (Good programming practice is to trust but verify, though: if more or less data is really sent, say by an 'evil actor', you need to be prepared to handle that.)
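A rough C# equivalent of the sending side, assuming a connected NetworkStream named stream and the same newline-terminated ASCII length convention as the C sketch above:
// Sends an ASCII length line followed by the payload itself.
static void SendWithLengthLine(NetworkStream stream, byte[] payload)
{
    byte[] lengthLine = Encoding.ASCII.GetBytes(payload.Length + "\n");
    stream.Write(lengthLine, 0, lengthLine.Length);
    stream.Write(payload, 0, payload.Length);
}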
Not specific to C#, but a general answer on writing TCP applications:
TCP is a stream-based protocol. It does not maintain message boundaries, so applications using TCP should take care to choose the right method of data exchange between server and client. This becomes even more important when multiple messages are sent and received on one connection.
One widely used method is to prefix each data message with its length.
Ex:
[2-byte length field][Actual Data]
The receiver of such data (be it server or client) needs to decode the length field, wait until that many bytes have been received, or raise an alarm on timeout and give up.
Another approach is to have the applications themselves mark the message boundaries.
Ex:
[START-of-MSG][Actual Data][END-of-MSG]
The receiver has to parse the data for the start byte and end byte (predefined by the application protocol) and treat anything in between as the data of interest.
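A rough sketch of that marker-based parsing, assuming single STX (0x02) and ETX (0x03) bytes as the application-defined start and end markers:
// Scans a receive buffer for one complete [STX]payload[ETX] frame.
// Returns the payload, or null if no complete frame has arrived yet.
static byte[] TryExtractFrame(List<byte> buffer)
{
    const byte STX = 0x02, ETX = 0x03;
    int start = buffer.IndexOf(STX);
    if (start < 0) return null;
    int end = buffer.IndexOf(ETX, start + 1);
    if (end < 0) return null;

    byte[] payload = buffer.GetRange(start + 1, end - start - 1).ToArray();
    buffer.RemoveRange(0, end + 1); // drop the consumed frame (and any junk before it)
    return payload;
}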
Hello, I solved it with a list. I don't know the size of the complete package, but I can read it in parts:
List<byte> bigbuffer = new List<byte>();
byte[] tempbuffer = new byte[254];
// The temp buffer can be another size, like 1024, depending on the data the client sends;
// a small size is recommended so the package is read correctly.
NetworkStream stream = client.GetStream();
int bytesRead;
while ((bytesRead = stream.Read(tempbuffer, 0, tempbuffer.Length)) > 0)
{
    // Only append the bytes that were actually read, not the whole temp buffer.
    bigbuffer.AddRange(tempbuffer.Take(bytesRead)); // Take() requires System.Linq
}

// Now you can convert to a plain byte array
byte[] completedbuffer = new byte[bigbuffer.Count];
bigbuffer.CopyTo(completedbuffer);

// Do something with the data
string decodedmsg = Encoding.ASCII.GetString(completedbuffer);
I do this with images and it works well. I think this approach fits when you don't know the size of the data and the purpose is to read a complete source of unknown size.
I was looking around for an answer to this, and noticed the Available property was added to TcpClient. It returns the amount of bytes available to read.
I'm assuming it was added after most of the replies, so I wanted to share it for others that may stumble onto this question.
https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.tcpclient.available?view=netframework-4.8
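One caveat: Available only reports the bytes that have already arrived in the local receive buffer, not the total size of the message the peer intends to send. A small sketch of how it might be used, assuming a connected TcpClient named client:
// Sketch: read whatever has already arrived, sizing the buffer with Available.
if (client.Available > 0)
{
    byte[] chunk = new byte[client.Available];
    int read = client.GetStream().Read(chunk, 0, chunk.Length);
    string text = Encoding.ASCII.GetString(chunk, 0, read);
}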
I'm currently a bit stuck with my C# project.
I have 2 applications; they both share a common class definition I call a NetMessage.
A NetMessage contains a MessageType string property, as well as 2 List collections.
The idea is that I can pack this class with classes and data to send across the network as a byte[].
Because Network Streams do not advertise the amount of data they are receiving, I modified my Send method to send the size of the NetMessage byte[] ahead of the actual byte[].
private static byte[] ReceivedBytes(NetworkStream MainStream)
{
    try
    {
        //byte[] myReadBuffer = new byte[1024];
        int receivedDataLength = 0;
        byte[] data = { };
        long len = 0;
        int i = 0;
        MainStream.ReadTimeout = 60000;
        //MainStream.CanTimeout = false;
        if (MainStream.CanRead)
        {
            // Read the length of the incoming message
            byte[] byteLen = new byte[8];
            MainStream.Read(byteLen, 0, 8);
            len = BitConverter.ToInt64(byteLen, 0);
            data = new byte[len];
            // data is now set to the appropriate size for the expected message

            // While we have not got the full message,
            // read each individual byte and append it to data.
            // This method seems to work, but is ridiculously slow.
            while (receivedDataLength < data.Length)
            {
                receivedDataLength += MainStream.Read(data, receivedDataLength, 1);
            }
            //receivedDataLength += MainStream.Read(data, receivedDataLength, data.Length);
            return data;
        }
    }
    catch (Exception E)
    {
        //System.Windows.Forms.MessageBox.Show("Exception:" + E.ToString());
    }
    return null;
}
I have tried to change the size argument below to something like 1024 or to be the data.Length, but I get funky results.
receivedDataLength += MainStream.Read(data, receivedDataLength, 1);
Setting it to data.Length seems to cause problems when the class being sent is a few MB in size.
Setting the read size to 1024, like I have seen in other examples, causes failures when the incoming message is small (like 843 bytes); it errors out saying that I tried to read out of bounds or something.
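One way to keep larger reads from running past the message boundary is to cap each read by the number of bytes still expected. A sketch using the same variable names as the method above:
// Reads in chunks of up to 1024 bytes, but never past the expected length,
// which avoids both the byte-at-a-time slowness and the out-of-range error.
while (receivedDataLength < data.Length)
{
    int toRead = Math.Min(1024, data.Length - receivedDataLength);
    int read = MainStream.Read(data, receivedDataLength, toRead);
    if (read == 0)
        break; // the remote side closed the connection
    receivedDataLength += read;
}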
Below is the type of method being used to send the data in the first place.
public static void SendBytesToStream(NetworkStream TheStream, byte[] TheMessage)
{
    //IAsyncResult r = TheStream.BeginWrite(TheMessage, 0, TheMessage.Length, null, null);
    //r.AsyncWaitHandle.WaitOne(10000);
    //TheStream.EndWrite(r);
    try
    {
        long len = TheMessage.Length;
        byte[] Bytelen = BitConverter.GetBytes(len);
        TheStream.Write(Bytelen, 0, Bytelen.Length);
        TheStream.Flush();
        // <-- I've tried putting thread sleeps in this spot to see if it helps.
        // I've also tried writing each byte of the message individually;
        // it takes longer, but seems more accurate as far as network transmission goes?
        TheStream.Write(TheMessage, 0, TheMessage.Length);
        TheStream.Flush();
    }
    catch (Exception e)
    {
        //System.Windows.Forms.MessageBox.Show(e.ToString());
    }
}
I'd like to get these two methods setup to the point where they are reliably sending and receiving data.
The application I am using this for monitors a screenshots folder in a game directory;
when it detects a screenshot in TGA format, it converts it to PNG, then takes its byte[] and sends it to the receiver.
The receiver then posts it to Facebook (I don't want my FB tokens distributed in my client application), hence the server / proxy idea.
It's strange, but when I step through the code, the transfer is invariably successful.
But if I run it at full speed, with no breakpoints, it typically tells me that the connection was closed by the remote host, etc.
The client typically finishes sending the data almost instantly, even though it's a 4 MB file.
The receiver spends about 2 minutes reading from the NetworkStream, which doesn't make sense. If the client finished sending the data, does that mean the data is just floating in cyberspace, waiting to be pulled down?
Surely it should be synchronous?
I suspect I know where my code was going wrong.
It turns out that the TcpClient doing the sending was declared and instantiated within a method, so it went out of scope as soon as that method returned.
This being the case, the garbage collector was disposing of it, even though the receiving server had not finished downloading the stream.
I managed to resolve it by creating a method that can detect when the client has disconnected on the server end; until it has actually disconnected, it keeps looping/waiting.
This way, we are waiting until the server lets go of us.
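The exact check isn't shown above, but a common way to detect that kind of disconnect is a Poll/Available combination on the underlying socket (reachable from a TcpClient via its Client property). This is a sketch of that idea, not necessarily the method used here:
// Heuristic: if the socket reports "readable" but there are zero bytes to read,
// the remote side has almost certainly closed the connection.
static bool IsDisconnected(Socket socket)
{
    return socket.Poll(1000 /* microseconds */, SelectMode.SelectRead)
           && socket.Available == 0;
}
The waiting side can then loop on this check (with a short sleep) until it returns true.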
I am writing a C# client application which will connect to a server written in Python. My question is about receiving data in a loop. The application structure is: client asks server -> server responds to client. Everything works fine when the message is smaller than the actual buffer size (set on the server). For example: server-side buffer: 1024, client buffer size: 256, data length < 1 KB. I run my application with the following code:
int datacounter = 0;
byte[] recived = new byte[256];
StringBuilder stb = new StringBuilder();
serverStream.ReadTimeout = 1500;
try
{
    while ((datacounter = serverStream.Read(recived, 0, 256)) > 0)
    {
        Console.WriteLine("RECIVED: " + datacounter.ToString());
        stb.Append(System.Text.Encoding.UTF8.GetString(recived, 0, datacounter));
    }
}
catch { Console.WriteLine("Timeout!"); }
Then the application receives the data in 4 reads:
RECIVED: 256
RECIVED: 256
RECIVED: 256
RECIVED: 96
And then the timeout ticks, which ends the transmission and passes the complete data on for later analysis (from the stb object). I don't think using a timeout is proper, but I don't know any other way to do this.
However, this way it works. Here is an example that does not:
Server-side buffer: 1024, client-side buffer: 256, data length ~8 KB (the Python side sends the data in a loop).
RECIVED: 256
RECIVED: 256
RECIVED: 256
RECIVED: 256
Then the timeout ticks (and obviously the data is incomplete: I got 1 KB of 8 KB). Sometimes the loop even ends after 1 run with 28 received bytes, and that's all before the timeout. Python says that the data has been sent properly. Here's the way I create the socket and serverStream object:
TcpClient clientSocket = new TcpClient();
clientSocket.Connect("x.y.z.x", 1234);
NetworkStream serverStream = clientSocket.GetStream();
It's not the TcpClient's fault. I tried the same with plain sockets, created like:
new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)
However, that behaves similarly. Is there a way to make my loop receive all the data without a timeout? I would like to keep the socket synchronous if possible.
I don't think there's anything wrong with your receive code functionally. I put together a test, and the receiver gets as much as you can send it (e.g. 8 MB), as long as you keep sending without a 1.5-second pause before it times out.
So it looks like your server is simply not sending "fast" enough.
To answer your question, timing is not the typical way of knowing when you have received a full message. One common, simple way of determining when a full message is received is to prefix the length of the full message on the sending side (eg. 4-byte int). Then on the receive side, first read 4 bytes, decode to the length, and then read that many more bytes.
You could also consider appending a message termination string, such as Environment.NewLine, to the end of your message. This has the advantage that you could call StreamReader.ReadLine(), which will block until the full message is received. This only works if the termination can NOT be included in the message itself.
If you can't alter the server protocol, is there any other way of knowing you have received a full message? (eg. checking for a NewLine at the end of the message, an XML end tag, or some other pattern.) If not, perhaps you could wait for the server to disconnect, otherwise it looks like you would be forced to find the right timing balance.
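If you can terminate each message with a newline on the Python side, the receive loop reduces to something like this sketch; it blocks until a full line arrives, with no ReadTimeout needed:
// Assumes the server appends "\n" to every message.
using (var reader = new StreamReader(serverStream, Encoding.UTF8))
{
    string message;
    while ((message = reader.ReadLine()) != null)
    {
        Console.WriteLine("RECIVED: " + message.Length);
        // handle the complete message here
    }
}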
I am including the test code below in case you want to play around with it.
Server/Sending Side:
IPAddress localAddr = IPAddress.Parse("127.0.0.1");
TcpListener server = new TcpListener(localAddr, 13579);
server.Start();
TcpClient clientSocket = server.AcceptTcpClient();
NetworkStream stream = clientSocket.GetStream();
int bytesSent = 0;
int bytesToSend = 1 << 25;
int bufferSize = 1024;
string testMessage = new string('X', bufferSize);
byte[] buffer = UTF8Encoding.UTF8.GetBytes(testMessage);
while (bytesSent < bytesToSend)
{
    int byteToSendThisRound = Math.Min(bufferSize, bytesToSend - bytesSent);
    stream.Write(buffer, 0, byteToSendThisRound);
    bytesSent += byteToSendThisRound;
}
Client/Receiving Side:
TcpClient client = new TcpClient("127.0.0.1", 13579);
NetworkStream serverStream = client.GetStream();
int totalBytesReceived = 0;
int datacounter = 0;
byte[] recived = new byte[256];
StringBuilder stb = new StringBuilder();
serverStream.ReadTimeout = 1500;
try
{
    while ((datacounter = serverStream.Read(recived, 0, 256)) > 0)
    {
        totalBytesReceived += datacounter;
        Console.WriteLine("RECIVED: {0}, {1}", datacounter, totalBytesReceived);
        stb.Append(System.Text.Encoding.UTF8.GetString(recived, 0, datacounter));
    }
}
catch { Console.WriteLine("Timeout!"); }
Why don't you dump the exception that makes your code go into the catch branch and find out? :)
catch (Exception ex) { Console.WriteLine("Timeout because of... " + ex.Message); }
--EDIT
Sorry, I didn't see the timeout. The question you're asking is whether there's a way to do it without a timeout. Yes: don't set any timeout, and check whether the number of bytes received is smaller than the buffer size.
That is:
while ((datacounter = serverStream.Read(recived, 0, 256)) > 0)
{
    Console.WriteLine("RECIVED: " + datacounter.ToString());
    stb.Append(System.Text.Encoding.UTF8.GetString(recived, 0, datacounter));
    if (datacounter < 256) // you're good to go
        break;
}
For anyone else who needs help with this:
Just to add to Chi_Town_Don's answer: make sure you call stb.ToString() outside of the loop. I've also found that nothing will print out unless the loop breaks out. For that, if (!serverStream.DataAvailable) { break; } works wonders (DataAvailable is a property, not a method). That way you don't need to pass in the packet size or some other convoluted condition.
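Putting those suggestions together, the loop from the question might look roughly like this (with the caveat that DataAvailable only says whether bytes are already buffered locally, so a slow sender can still end the loop early):
while ((datacounter = serverStream.Read(recived, 0, 256)) > 0)
{
    stb.Append(Encoding.UTF8.GetString(recived, 0, datacounter));
    if (!serverStream.DataAvailable)
        break; // nothing buffered right now; see the caveat above
}
Console.WriteLine(stb.ToString()); // build the string once, outside the loop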