What's the best way to format a message to a server? At the moment I'm serialising an object using the BinaryFormatter and then sending it to the server.
At the server end it's listening in an async fashion, and when the buffer received is not 100% full it assumes that the transfer is complete.
This is working at the moment, and I can deserialise the object at the other end. I'm just concerned that if I start sending asynchronously this method will fail, as messages could be blurred together.
I know that I need to mark the message somehow, to say "that's the end of message one, this other bit belongs to message two", but I'm unsure of the correct way to do this.
Could anyone point me in the right direction and maybe give me some examples?
Thanks
You could always serialize it to a memory stream, see how big it is, send the length as a 4-byte binary number then send the stream's contents.
On the other side you can just sit and wait for 4 bytes, combine them into an integer then sit and wait for that number of bytes.
When your read functions return (make them blocking reads), you know you have the entire object in a buffer, so you just deserialize it and cast it to your common interface type.
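A minimal sketch of that approach, assuming a connected TcpClient on each side and BinaryFormatter as in the question (the method names are illustrative):

using System;
using System.IO;
using System.Net.Sockets;
using System.Runtime.Serialization.Formatters.Binary;

static void SendObject(NetworkStream stream, object obj)
{
    var formatter = new BinaryFormatter();
    using (var ms = new MemoryStream())
    {
        formatter.Serialize(ms, obj);                        // serialise into memory first
        byte[] body = ms.ToArray();
        byte[] header = BitConverter.GetBytes(body.Length);  // 4-byte length prefix
        stream.Write(header, 0, header.Length);
        stream.Write(body, 0, body.Length);
    }
}

static object ReceiveObject(NetworkStream stream)
{
    byte[] header = ReadExactly(stream, 4);
    int length = BitConverter.ToInt32(header, 0);
    byte[] body = ReadExactly(stream, length);
    return new BinaryFormatter().Deserialize(new MemoryStream(body));
}

// Blocks until exactly count bytes have arrived.
static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0) throw new EndOfStreamException("Connection closed mid-message.");
        offset += read;
    }
    return buffer;
}

Note that BitConverter.GetBytes produces the prefix in the machine's byte order (little-endian on typical .NET platforms), so both ends need to agree on it.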
edit: this is the answer to your specific question. That being said, you would be better off using a higher-level library than raw TCP.
If your object has a fixed length, you can receive that exact number of bytes on the other end and then create your object.
Otherwise you can send delimiters (a symbol or sequence of symbols that you are not using in your object) between your objects and keep reading the received data byte by byte until you see the delimiter.
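A rough sketch of the delimiter approach, assuming a single '*' byte that never occurs inside a message (purely illustrative):

using System.IO;
using System.Net.Sockets;

// Reads one delimiter-terminated message, byte by byte.
static byte[] ReadUntilDelimiter(NetworkStream stream, byte delimiter)
{
    using (var message = new MemoryStream())
    {
        int next;
        while ((next = stream.ReadByte()) != -1)
        {
            if (next == delimiter)
                return message.ToArray();      // the delimiter itself is not part of the message
            message.WriteByte((byte)next);
        }
        throw new EndOfStreamException("Connection closed before a delimiter was seen.");
    }
}

Usage would be something like: byte[] msg = ReadUntilDelimiter(stream, (byte)'*');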
You might want to take a look at Protocol Buffers (http://code.google.com/p/protobuf), an implementation-language-independent data interchange format/framework. There are at least two .NET implementations for it (see http://code.google.com/p/protobuf/wiki/ThirdPartyAddOns).
I want to send JSON strings back and forth over a socket connection in C# (Xamarin).
I want to know how the receiver can tell how many bytes to read from the socket in order to receive the complete JSON string, because the strings vary in size.
Do I have to send a length first in binary (maybe one or two bytes), then the JSON string? What is the standard way to do it so that the receiver knows how many bytes to read from the socket each time it gets a complete JSON string?
It has to know how many bytes per string because each string is a separate packet, and if many packets are sent back to back and the length of each string is not known exactly, the receiver will either read past the end of one string into the beginning of another, or not read the whole string; either way it will crash while decoding the malformed string.
Another problem: if I send the length first in binary and the receiver ever gets out of sync with the sender, it won't know which byte is the length any more, because it can't tell where the strings start or which incoming data represents the length; it will just receive a bunch of bytes and won't know where one message starts and the next ends.
Does anybody know the proper way to do it without writing a megabyte of code?
Thanks
If it's a string-based message (as you mentioned JSON), you can use a StringBuilder to concatenate the packets you receive, and check at every receive step for an end-of-message tag (defined by yourself, e.g. <EOF>).
Here is an example on MSDN:
Client and server implementations: the client sends messages ending with the <EOF> tag and the server checks for it to make sure each message is complete.
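A sketch of that accumulation loop, assuming UTF-8 JSON and a literal "<EOF>" marker that never appears inside the payload (the class and names are illustrative):

using System;
using System.Net.Sockets;
using System.Text;

class EofFramedReceiver
{
    const string Eof = "<EOF>";
    readonly StringBuilder pending = new StringBuilder();
    readonly byte[] buffer = new byte[1024];

    // Blocks until one complete <EOF>-terminated string has been received.
    public string ReceiveJson(Socket socket)
    {
        while (true)
        {
            string text = pending.ToString();
            int end = text.IndexOf(Eof, StringComparison.Ordinal);
            if (end >= 0)
            {
                pending.Remove(0, end + Eof.Length);   // keep any bytes belonging to the next message
                return text.Substring(0, end);
            }

            int n = socket.Receive(buffer);            // append whatever arrived and check again
            if (n == 0) throw new SocketException();   // remote side closed the connection
            pending.Append(Encoding.UTF8.GetString(buffer, 0, n));
        }
    }
}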
I am using a NetworkStream to pass short strings around the network.
Now, on the receiving side I have encountered an issue:
Normally I would do the reading like this
see if data is available at all
get count of data available
read that many bytes into a buffer
convert buffer content to string.
In code that assumes all offered methods work as probably intended, that would look something like this:
NetworkStream stream = someTcpClient.GetStream();
while(!stream.DataAvailable)
;
byte[] bufferByte;
stream.Read(bufferByte, 0, stream.Length);
ASCIIEncoding enc = new ASCIIEncoding();
string result = enc.GetString(bufferByte);
However, MSDN says that NetworkStream.Length is not really implemented and will always throw an Exception when called.
Since the incoming data are of varying length I cannot hard-code the count of bytes to expect (which would also be a case of the magic-number antipattern).
Question:
If I cannot get an accurate count of the number of bytes available for reading, then how can I read from the stream properly, without risking all sorts of exceptions within NetworkStream.Read?
EDIT:
Although the provided answer leads to a better overall code I still want to share another option that I came across:
TcpClient.Available gives the number of bytes available to read. I knew there had to be a way to count the bytes in one's own inbox.
There's no guarantee that calls to Read on one side of the connection will match up 1-1 with calls to Write from the other side. If you're dealing with variable length messages, it's up to you to provide the receiving side with this information.
One common way to do this is to first work out the length of the message you're going to send and then send that length information first. On the receiving side, you then obtain the length first and then you know how big a buffer to allocate. You then call Read in a loop until you've read the correct number of bytes. Note that, in your original code, you're currently ignoring the return value from Read, which tells you how many bytes were actually read. In a single call and return, this could be as low as 1, even if you're asking for more than 1 byte.
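A sketch of that receive side for the string case above, assuming a 4-byte length prefix written by the sender, and paying attention to the return value of Read (all names are illustrative):

using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

static string ReceiveString(NetworkStream stream)
{
    byte[] header = ReadFully(stream, 4);
    int length = BitConverter.ToInt32(header, 0);     // the prefix the sender wrote first
    byte[] body = ReadFully(stream, length);
    return Encoding.ASCII.GetString(body);
}

static byte[] ReadFully(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        // Read may return fewer bytes than requested, possibly as few as 1 per call.
        int n = stream.Read(buffer, offset, count - offset);
        if (n == 0) throw new EndOfStreamException("Connection closed before the full message arrived.");
        offset += n;
    }
    return buffer;
}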
Another common way is to decide on message "formats" - where e.g. message number 1 is always 32 bytes in length and has X structure, and message number 2 is 51 bytes in length and has Y structure. With this approach, rather than you sending the message length before sending the message, you send the format information instead - first you send "here comes a message of type 1" and then you send the message.
A further common way, if applicable, is to use some form of sentinels - if your messages will never contain, say, a byte with value 0xff then you scan the received bytes until you've received an 0xff byte, and then everything before that byte was the message you wanted to receive.
But, whatever you want to do, whether it's one of the above approaches or something else, it's up to you to have your sending and receiving sides work together to allow the receiver to discover each message.
A further way to change everything around: if you want to exchange messages and don't want to do any of the above fiddling around, switch to something that works at a higher level, e.g. WCF or HTTP or something else, where those systems already take care of message framing and you can then just concentrate on what to do with your messages.
You could use a StreamReader to read the stream to the end:
var streamReader = new StreamReader(someTcpClient.GetStream(), Encoding.ASCII);
string result = streamReader.ReadToEnd();
Let's say I want to do non-blocking reads from a network socket.
I can async await for the socket to read x bytes and all is fine.
But how do I combine this with deserialization via protobuf?
Must reading objects from a stream be blocking? That is, if the stream contains too little data for the parser, then there has to be some blocking going on behind the scenes so that the reader can fetch all the bytes it needs.
I guess I can use length prefixes, read the first bytes, and then figure out how many bytes I have to fetch at minimum before I parse. Is this the right way to go about it?
e.g. if my buffer is 500 bytes, then await those 500 bytes, parse the length prefix, and if the length is more than 500 then wait again, until all of it has been read.
What is the idiomatic way to combine non-blocking IO and protobuf parsing?
(I'm using Jon Skeet's implementation right now http://code.google.com/p/protobuf-csharp-port/)
As a general rule, serializers don't often include a DeserializeAsync method, because that is really, really hard to do (at least, efficiently). If the data is of moderate size, then I would advise buffering the required amount of data using asynchronous code, and then deserializing once all of the required data is available. If the data is very large and you don't want to buffer everything in memory, then consider running a regular synchronous deserialize on a worker thread.
(Note that none of this is implementation-specific; but if the serializer you are using does support an async deserialize, then sure, use that.)
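A sketch of the "buffer asynchronously, then parse" approach, assuming a 4-byte length prefix and a generated protobuf-csharp-port message type here called MyMessage (the type name and framing are assumptions):

using System;
using System.IO;
using System.Net.Sockets;
using System.Threading.Tasks;

static async Task<MyMessage> ReadMessageAsync(NetworkStream stream)
{
    byte[] header = await ReadExactlyAsync(stream, 4);
    int length = BitConverter.ToInt32(header, 0);

    byte[] body = await ReadExactlyAsync(stream, length);
    return MyMessage.ParseFrom(body);          // parsing is synchronous once the bytes are buffered
}

static async Task<byte[]> ReadExactlyAsync(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int n = await stream.ReadAsync(buffer, offset, count - offset);
        if (n == 0) throw new EndOfStreamException("Connection closed mid-message.");
        offset += n;
    }
    return buffer;
}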
Use the BeginReceive/EndReceive() methods to receive your data into a byte buffer (typically 1024 or 2048 bytes). In the AsyncCallback, after ensuring that you didn't read -1/0 bytes (end of stream/disconnect/IO error), attempt to deserialize the packet with protobuf.
Your receive callback will be asynchronous, and it makes sense to parse the packet in the same thread as the reading, IMHO. It's the handling that will likely cause the biggest performance hit.
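A rough sketch of that callback loop, assuming a connected Socket; HandlePacket stands in for whatever framing and protobuf parsing you do:

using System;
using System.Net.Sockets;

class Receiver
{
    readonly Socket socket;
    readonly byte[] buffer = new byte[2048];

    public Receiver(Socket socket)
    {
        this.socket = socket;
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);
    }

    void OnReceive(IAsyncResult ar)
    {
        int read = socket.EndReceive(ar);
        if (read <= 0) return;                         // disconnect / end of stream

        HandlePacket(buffer, read);                    // framing + deserialization happen here

        // Queue up the next read on the same buffer.
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);
    }

    void HandlePacket(byte[] data, int count) { /* illustrative placeholder */ }
}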
I want to use the Stream class to read/write data to/from a serial port. I use BaseStream to get the stream (link below), but the Length property doesn't work. Does anyone know how I can read the full buffer without knowing how many bytes there are?
http://msdn.microsoft.com/en-us/library/system.io.ports.serialport.basestream.aspx
You can't. That is, you can't guarantee that you've received everything if all you have is the BaseStream.
There are two ways you can know if you've received everything:
Send a length word as the first 2 or 4 bytes of the packet. That says how many bytes will follow. Your reader then reads that length word, reads that many bytes, and knows it's done.
Agree on a record separator. That works great for text. For example you might decide that a null byte or an end-of-line character signals the end of the data. This is somewhat more difficult to do with arbitrary binary data, but possible. See comment.
Or, depending on your application, you can do some kind of timing. That is, if you haven't received anything new for X number of seconds (or milliseconds?), you assume that you've received everything. That has the obvious drawback of not working well if the sender is especially slow.
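A sketch of the first option (a leading length word) over SerialPort.BaseStream; the 2-byte little-endian prefix is an assumption about your protocol:

using System.IO;
using System.IO.Ports;

static byte[] ReadPacket(SerialPort port)
{
    Stream stream = port.BaseStream;
    byte[] header = ReadExactly(stream, 2);
    int length = header[0] | (header[1] << 8);    // 2-byte length word, low byte first
    return ReadExactly(stream, length);
}

static byte[] ReadExactly(Stream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int n = stream.Read(buffer, offset, count - offset);
        if (n == 0) throw new EndOfStreamException("Port closed before the full packet arrived.");
        offset += n;
    }
    return buffer;
}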
Maybe you can try SerialPort.BytesToRead property.
I am new to serial communication. I have read a fair few tutorials, and most of what I am trying to do is working; however, I have a question regarding serial communication with C#. I have a microcontroller that is constantly sending data through a serial line. The data is in this format:
bxxxxixx.xx,xx.xx*
where the x's represent different numbers, + or - signs.
At certain times I want to read this information from my C# program on my PC. The problem that I am having is that my messages seem to be split at random positions even though I am using
ReadTo("*");
I assumed this would read everything up to the * character.
How can I make sure that the message I receive is complete?
Thank you for your help.
public string receiveCommandHC()
{
    string messageHC = "";
    if (serialHC.IsOpen)
    {
        serialHC.DiscardInBuffer();
        messageHC = serialHC.ReadTo("*");
    }
    return messageHC;
}
I'm not sure why you're doing it, but you're discarding everything in the serial port's in-buffer just before reading, so if the computer has already received "bxxx" at that point, you throw it away and you'll only be reading "xixx.xx,xx.xx".
You'll nearly always find in serial comms that data messages (unless very small) are split. This is mostly down to the speed of communication and the point at which you retrieve data from the port.
Usually you'd set your code to run in a separate thread (to help prevent impacting the performance of the rest of your code) which raises an event when a complete message is received and also takes full messages in for transmission. Read and write functionality is dealt with by worker threads (serial comms traffic is slow).
You'll need a read and a write buffer. These should be suitably large to hold data for several cycles.
Append data read from the input to the end of your read buffer, and have the read buffer scanned cyclically for complete messages, starting from the front of the buffer.
Depending on the protocol used there is usually a data start indicator, maybe a data end indicator, and somewhere a message size (this may be fixed, again depending on your protocol). I gather from your protocol that the message start character is 'b' and the message end character is '*'. Discard all data preceding your message start character ('b'), as it belongs to an incomplete message.
When a complete message is found, strip it from the front of the buffer and raise an event to indicate its arrival.
A similar process is run for sending data, except that you may need to split the message, hence data to be sent is appended to the end of the buffer and data being sent is read from the start.
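A sketch of the receive side of that buffering, assuming 'b'…'*' framing as in your protocol; the class and event are illustrative:

using System;
using System.Text;

class MessageAssembler
{
    readonly StringBuilder readBuffer = new StringBuilder();

    public event Action<string> MessageReceived;

    // Call this with every chunk read from the serial port.
    public void Append(string chunk)
    {
        readBuffer.Append(chunk);
        ExtractMessages();
    }

    void ExtractMessages()
    {
        while (true)
        {
            string data = readBuffer.ToString();
            int start = data.IndexOf('b');
            if (start < 0) { readBuffer.Clear(); return; }    // nothing but leading junk so far

            int end = data.IndexOf('*', start + 1);
            if (end < 0)
            {
                // Incomplete message: drop anything before the start marker and wait for more data.
                readBuffer.Remove(0, start);
                return;
            }

            MessageReceived?.Invoke(data.Substring(start, end - start + 1));
            readBuffer.Remove(0, end + 1);                    // strip the completed message from the buffer
        }
    }
}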
I hope that this helps you in understanding how to cope with serial comms.
As pointed out by Marc you're currently clearing your buffer in a way that will cause problems.
edit
As I said in my comment I don't recognise serialHC, but if dealing with raw data then look at using the SerialPort class. More information on how to use it and an example (which roughly uses the process that I described above) can be found here.
I'm going to take a guess that you're actually getting the ends of commands, i.e. instead of getting b01234.56.78.9 (omitting the final *), you're getting (say) .56.78.9. That is because you discarded the input buffer. Don't do that. You can't know the state at that point (just before a read), so discarding the buffer is simply wrong. Remove the serialHC.DiscardInBuffer(); line.