I have a chat application with multiple clients and one server.
I want to receive separate, complete messages, where the data can be text (a string) or a file (bytes).
For example:
client1 sends data1 (text) and data2 (an image) at the same time.
The server receives the two messages as a single message (data1data2).
How can I know where data1 ends?
The solution I have tried, which is not working, is:
appending an "END_OF_PACKET" string (converted to byte[]) when the client sends data.
Sometimes it works when I treat the data as byte[], but it fails on strings.
What methods can I use to make this work?
Sometimes it works when I treat the data as byte[], but it fails on strings.
That's because a byte[] is the raw, encoded form of your string: if you encode (string -> byte[]) your message with a certain charset (say, UTF-8), you have to decode (byte[] -> string) with the same charset (UTF-8) to recover the original message.
How can I know where data1 ends?
Since you also want to send binary data, you shouldn't use the "END_OF_PACKET" approach.
Why? Because your binary data could contain one (or more) occurrences of that marker; treat binary data as unpredictable, potentially using every possible byte value.
So here is another solution:
Start your message with a number representing the message length, followed by a separator (anything other than a digit) so you know where the number ends, then append your data (binary or not).
Syntax: <length><separator><data>
With separator=";" : 5;Hello19;Look at this image!20;jœÿJ¯ÿTÀÿRŒ»ÿP‰»ÿ=kF
There are 3 messages in the previous example:
1. 5;Hello
2. 19;Look at this image!
3. 20;jœÿJ¯ÿTÀÿRŒ»ÿP‰»ÿ=kF
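The receiving side of this &lt;length&gt;&lt;separator&gt;&lt;data&gt; scheme can be sketched as follows (a minimal Python sketch for brevity, since the framing logic is language-agnostic; `parse_frames` and the sample buffer are illustrative names, not part of any real API). Note the parser never scans the payload itself, so payload bytes that happen to contain digits or the separator cause no harm:

```python
def parse_frames(buf: bytes):
    """Split a buffer of <length>;<data> frames into individual payloads."""
    frames = []
    i = 0
    while i < len(buf):
        # Read the digits up to the ';' separator to get the payload length.
        sep = buf.index(b";", i)
        length = int(buf[i:sep])
        start = sep + 1
        # Slice out exactly `length` bytes of payload, binary-safe.
        frames.append(buf[start:start + length])
        i = start + length
    return frames

data = b"5;Hello19;Look at this image!3;\xff\x00\xff"
print(parse_frames(data))  # [b'Hello', b'Look at this image!', b'\xff\x00\xff']
```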
Related
I want to send JSON strings back and forth over a socket connection in C# (Xamarin).
I want to know how the receiver can tell how many bytes to read from the socket in order to receive the complete JSON string, because the string will vary in size.
Do I have to send the length first in binary (maybe one or two bytes), then the JSON string? What is the standard way to do it so that the receiver knows how many bytes to read from the socket each time it gets a complete JSON string?
It has to know how many bytes per string because each string is a separate packet, and if many packets are sent back to back and the length of each string is not known exactly, it will read past the end of one string into the beginning of another, or not read the whole string; either way it will crash while decoding the malformed string.
Another problem: if I send the length first in binary and anything happens that puts the receiver out of sync with the sender, it won't know which byte is the length any more, because it can't tell where the strings start or which incoming bytes represent the length; it will just receive a bunch of bytes and won't be able to tell where one message ends and the next begins.
Does anybody know the proper way to do it without writing a megabyte of code?
Thanks
If it's a string-based message (as you mentioned JSON), you can use a StringBuilder to concatenate the packets you receive, and check after every receive step for an end-of-message tag (which you define yourself, e.g. <EOF>).
Here is an example on MSDN:
Client and server implementations: the client sends messages ending with an <EOF> tag and the server checks for it to make sure each message is complete.
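The accumulate-and-scan loop described above can be sketched like this (Python for brevity; `split_messages` and the chunk list are illustrative, and the `<EOF>` tag is the one from the MSDN sample). Keep in mind this only works if the tag can never occur inside a message, which is why other answers here prefer a length prefix:

```python
def split_messages(chunks, delimiter=b"<EOF>"):
    """Concatenate received chunks and yield every complete message."""
    buffer = b""
    for chunk in chunks:
        buffer += chunk
        # A single received chunk may complete zero, one, or several messages.
        while delimiter in buffer:
            message, buffer = buffer.split(delimiter, 1)
            yield message.decode("utf-8")

# Messages arrive split across arbitrary chunk boundaries:
chunks = [b'{"a": 1}<EO', b'F>{"b"', b': 2}<EOF>']
print(list(split_messages(chunks)))  # ['{"a": 1}', '{"b": 2}']
```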
What I'm currently doing with my server and client is sending commands between them as simple strings converted to bytes. What it comes down to is basically this (sending a message to the server, as an example):
byte[] outStream = System.Text.Encoding.UTF8.GetBytes("$msg:Test Message");
serverStream.Write(outStream, 0, outStream.Length);
And the receiving end decodes the bytes back to a string. It recognizes the command by doing this:
recievedstring.Split(':')[0]
and assuming that recievedstring.Split(':')[1] is the argument. If a user entered a colon in their message, it would get cut off there. I feel like this is a hacky way to send data between the two endpoints. Is there a more standard way to do this? Sorry if I didn't provide enough information; I'm new to this!
You don't necessarily need to treat all communicated data as strings; you can send the data as bytes and later convert the bytes back to whatever type the sender used.
A better way is to define a protocol (i.e. a packet format) between server and client for each message. For example, you can define a packet whose first 4 bytes contain the length of the message, followed by the message of that length. Your packet format will be [length:data]
On the sending side you first write the length of the message to the stream and then the actual data, whereas on the receiving side you first read an int (the length of the data) and then receive that many bytes.
Furthermore, instead of just $msg (as in your case), if there can be multiple types of packets communicated between the endpoints, e.g. $command, $notification etc., you can also add a message-type field to your packet. Your packet format then becomes [length:type:data]
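The [length:type:data] format could be packed and unpacked like this (a Python sketch; the 4-byte big-endian length, the 1-byte type field, and the type codes are illustrative choices, not fixed by the answer above):

```python
import struct

MSG, COMMAND, NOTIFICATION = 1, 2, 3  # example message-type codes

def pack(msg_type: int, payload: bytes) -> bytes:
    # 4-byte big-endian payload length, 1-byte type, then the payload.
    return struct.pack(">IB", len(payload), msg_type) + payload

def unpack(packet: bytes):
    length, msg_type = struct.unpack(">IB", packet[:5])
    payload = packet[5:5 + length]
    return msg_type, payload

packet = pack(MSG, "Test Message".encode("utf-8"))
msg_type, payload = unpack(packet)
print(msg_type, payload.decode("utf-8"))  # 1 Test Message
```

Because the length is read first, the receiver always knows exactly how many bytes belong to the current packet, so a colon (or any other byte) inside the payload can never be mistaken for a field separator.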
I am using a NetworkStream to pass short strings around the network.
Now, on the receiving side I have encountered an issue:
Normally I would do the reading like this
see if data is available at all
get count of data available
read that many bytes into a buffer
convert buffer content to string.
In code that assumes all the offered methods work as probably intended, that would look something like this:
NetworkStream stream = someTcpClient.GetStream();
while (!stream.DataAvailable)
    ;
byte[] bufferByte = new byte[stream.Length];
stream.Read(bufferByte, 0, (int)stream.Length);
ASCIIEncoding enc = new ASCIIEncoding();
string result = enc.GetString(bufferByte);
However, MSDN says that NetworkStream.Length is not really implemented and will always throw an Exception when called.
Since the incoming data are of varying length I cannot hard-code the count of bytes to expect (which would also be a case of the magic-number antipattern).
Question:
If I cannot get an accurate count of the number of bytes available for reading, then how can I read from the stream properly, without risking all sorts of exceptions within NetworkStream.Read?
EDIT:
Although the provided answer leads to a better overall code I still want to share another option that I came across:
TcpClient.Available gives the number of bytes available to read. I knew there had to be a way to count the bytes in one's own inbox.
There's no guarantee that calls to Read on one side of the connection will match up 1-1 with calls to Write from the other side. If you're dealing with variable length messages, it's up to you to provide the receiving side with this information.
One common way to do this is to first work out the length of the message you're going to send and then send that length information first. On the receiving side, you then obtain the length first and then you know how big a buffer to allocate. You then call Read in a loop until you've read the correct number of bytes. Note that, in your original code, you're currently ignoring the return value from Read, which tells you how many bytes were actually read. In a single call and return, this could be as low as 1, even if you're asking for more than 1 byte.
Another common way is to decide on message "formats" - where e.g. message number 1 is always 32 bytes in length and has X structure, and message number 2 is 51 bytes in length and has Y structure. With this approach, rather than you sending the message length before sending the message, you send the format information instead - first you send "here comes a message of type 1" and then you send the message.
A further common way, if applicable, is to use some form of sentinels - if your messages will never contain, say, a byte with value 0xff then you scan the received bytes until you've received an 0xff byte, and then everything before that byte was the message you wanted to receive.
But, whatever you want to do, whether its one of the above approaches, or something else, it's up to you to have your sending and receiving sides work together to allow the receiver to discover each message.
I forgot to say, but a further way to change everything around is this: if you want to exchange messages and don't want to do any of the above fiddling, switch to something that works at a higher level, e.g. WCF, or HTTP, or something else. Those systems already take care of message framing, and you can then concentrate on what to do with your messages.
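The first approach (a length prefix plus a read loop) looks roughly like this end to end (a Python sketch over a local socket pair; all names are illustrative). The key detail is that the receiver loops until it has the exact byte count, since a single read may return fewer bytes than requested:

```python
import socket
import struct

def send_message(sock, payload: bytes):
    # Length prefix first (4 bytes, big-endian), then the payload.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock, count: int) -> bytes:
    """Loop until exactly `count` bytes have been read."""
    data = b""
    while len(data) < count:
        chunk = sock.recv(count - len(data))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        data += chunk
    return data

def recv_message(sock) -> bytes:
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# Demo with a connected local pair; two back-to-back sends stay separate.
a, b = socket.socketpair()
send_message(a, b"hello")
send_message(a, b"world, again")
print(recv_message(b))  # b'hello'
print(recv_message(b))  # b'world, again'
```

This is the same idea as checking the return value of NetworkStream.Read in a loop: the framing comes from the protocol, never from hoping one read matches one write.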
You could use a StreamReader to read the stream to the end:
var streamReader = new StreamReader(someTcpClient.GetStream(), Encoding.ASCII);
string result = streamReader.ReadToEnd();
I'm encountering a pretty difficult problem in an asynchronous RSS feed aggregator application I am creating.
The system is built from a communicating client and server. The server collects RSS feed items from different sources and then distributes them to clients based on their subscriptions.
The issue that I am having is - My specification states that I must implement a pre-defined byte based protocol to communicate between the client and server. Part of this protocol is that when the server sends a payload to the client, that payload must have a checksum created and sent with it, then when it is received by the client the checksum is equated based on the received payload. We then compare the two checksums and if they do not match then we have an issue.
The thing that is baffling me is that sometimes the checksums match perfectly and other times they do not, using the same algorithm for all sending and receiving operations.
Here is the method that checks incoming ->
private bool CheckChecksum(uint receivedChecksum, IEnumerable<byte> bytes) {
    uint ourChecksum = 0;
    foreach (var b in bytes)
        ourChecksum += b;
    if (ourChecksum != receivedChecksum) {
        Debug.WriteLine("received {0}, calculated {1}", receivedChecksum, ourChecksum);
        _writeOutToFile = true;
    }
    return receivedChecksum == ourChecksum;
}
and when calculating it before sending ->
uint checksum = payloadBytes.Aggregate<byte, uint>(0,
(currentChecksum, currentByte) => currentChecksum + currentByte);
Since the behaviour seems to occur on updates that have very large checksums (generally 2 million+), the only thing I can think of that would cause it is these large byte[] sizes.
As you can see above, I write the contents of the payload out to a file when the checksums do not match. What I found was that the byte[] just ends early (despite the fact that the reading/writing lengths match on both the client and server); the end of the byte[] is just filled with empty space.
I am using NetworkStream.ReadAsync and WriteAsync to do all of my I/O operations. I thought that it may be a fault with the way I am using or understanding these.
I realise that this is a very difficult and vague problem (because I'm not sure what is going wrong myself) but if you need any further information I will provide it.
Here is some extra information:
The checksums are of type uint, and their endianness is encoded correctly at both ends of the system.
Payloads are strings which are encoded into ASCII bytes and decoded on the client-side.
All messages are sent with a checksum and a payload length. The payload length says how many bytes to read off the stream. Since I read exactly payload-length bytes from the NetworkStream, when the checksums do not match the payload turns into white space after a varying point (but is correct up to that point).
Sometimes (rarely) the checksums will match even when they are large.
I am running both the client and server locally on one machine.
I have tried having a Thread.Sleep(5000) before the read and this makes no difference.
Here is a sample of sent and received data that fails.
Sent from server - http://pastebin.com/jvbCbQmJ
Received by client - http://pastebin.com/eNkWymwi
Your receiving code tries to read before all the bytes have arrived.
When you send a huge chunk of data in a single write, ReadAsync will see that there is something to read, try to read the full length, get only what has arrived so far, and leave the not-yet-received tail of your buffer as 0s.
You can either divide the message on the server and read it in parts on the client, or keep reading whatever is available until you have received everything (or some timeout has passed).
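The failure mode can be reproduced without a network: when only part of the payload has arrived, the rest of the pre-allocated buffer stays zeroed, and an additive checksum (like the C# Aggregate call above) comes up short because the missing bytes are never counted (a Python sketch with illustrative values):

```python
def checksum(data: bytes) -> int:
    # Same additive sum-of-bytes checksum as the C# Aggregate call.
    return sum(data)

payload = b"some payload bytes from the RSS feed"
expected = checksum(payload)

# Simulate a partial read into a zero-initialized buffer of the full length:
buffer = bytearray(len(payload))
received = 20                      # only part of the payload has arrived
buffer[:received] = payload[:received]

print(checksum(payload) == expected)        # True
print(checksum(bytes(buffer)) == expected)  # False: the missing tail is zeros
```

The fix is the one described above: loop on ReadAsync, accumulating its return values, until the full payload-length count has actually been read before computing the checksum.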
I'm writing some code using PacketDotNet and SharpPCap to parse H.225 packets for a VOIP phone system. I've been using Wireshark to look at the structure, but I'm stuck. I've been using This as a reference.
Most of the H.225 packets I see are user information type with an empty message body and the actual information apparently shows up as a list of NonStandardControls in Wireshark. I thought I'd just extract out these controls and parse them later, but I don't really know where they start.
In almost all cases, the items start at the 10th byte of the H.225 data. Each item appears to begin with the length which is recorded as 2 bytes. However, I am getting a packet that has items starting at the 11th byte.
The only difference I see in this packet is something in the message body supposedly called open type length which has a value of 1, whereas the rest all appear to be 0. Would the items start at 10 + open type length? Is there some document that explains what this open type length is for?
Thanks.
H.225 doesn't use a fixed-length encoding; it uses ASN.1 PER encoding (not BER).
You probably won't find a C# library. OPAL is adding a C API if you are able to use that.