XNA TCP Socket multiple sending packet loss - c#

I've developed a free game for Windows Phone 7 called Domination, which, despite its early release, is quite a success!
Now I'm developing an online multiplayer version with some interesting features, and now that I'm almost finished, I'm running into a BIG problem.
WEIRD packet loss, or something that looks like it.
I have a sample that reproduces the problem:
a server
a WinForms client
an XNA client
Steps to reproduce the problem:
1) Start the server, the WinForms client, and the game (you need the emulator and the WP7 SDK).
2) Press the GO button: the form opens its TCP channel to the server.
3) Tap the screen in the emulator: the game opens its TCP channel to the server.
4) From then on, each time you tap the emulator screen or press the GO button on the form, the server sends 50 messages back to the corresponding client.
Well, the problem is:
1) The WinForms client usually receives all 50 messages; only RARELY does it lose about 10 packets in a single exchange.
2) The emulator ALWAYS loses 30, 40, even 45 messages!
I've tried other approaches, but nothing changed.
One hint: if I put a Thread.Sleep(10) (10 being milliseconds) before each server Send, it works perfectly!
Can anyone help me, please? I just don't know where to turn!
samples can be found here:
http://uploading.com/files/d7e7939c/Projects.zip/

No messages are being lost. They are all being received. Your code is just not correctly interpreting the received data. If you look at the number of bytes received, it will be correct. If you look at the data in the bytes received, it will be correct. The rest is up to your code.
TCP provides a byte stream service. That means you get out the same bytes you send. If you need to "glue" those bytes together into messages, then you must write code to do that. If you send 30 bytes, you might receive 30 bytes, or 10 bytes and then 20 bytes, or 1 byte 30 times, or anything in-between. If you send 5 bytes and then 3 bytes, you might receive 5 bytes and then 3 bytes. Or you might receive 8 bytes. Or you might receive 1 byte 8 times.
You must define a message format. You must code the sender to send messages in that format. You must code the receiver to identify when it has received a complete message.
What's happening is that you are sending "FOO" and then "BAR" and receiving "FOOBAR". Nothing has been lost in this process except the boundary between "FOO" and "BAR". However, TCP has no concept of a message boundary. If you need them, you must code them. For example, you could send "3FOO3BAR". Now, no matter how that gets received, the receiver knows that "FOO" is one message and "BAR" is another. Another choice is "foo\nbar\n" (foo newline bar newline). The receiver knows that it has a complete message when it receives a newline character.
But however you define an application-level message, you must actually write the code to do it. It won't happen by itself.
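For illustration, here is a minimal C# sketch of the newline-delimiter approach, assuming a connected blocking Socket and single-byte (ASCII) text; the helper name ReadMessages is made up for this example.

using System;
using System.Collections.Generic;
using System.Net.Sockets;
using System.Text;

class NewlineFraming
{
    // Keeps reading from the socket and yields one string per newline-terminated
    // message, no matter how the bytes were chunked on the wire.
    public static IEnumerable<string> ReadMessages(Socket socket)
    {
        var buffer = new byte[1024];
        var pending = new StringBuilder();

        while (true)
        {
            int bytesRead = socket.Receive(buffer);   // may return any number of bytes
            if (bytesRead == 0)
                yield break;                          // remote side closed the connection

            // Assumes ASCII; a real implementation would use a Decoder so that
            // multi-byte characters split across reads are handled correctly.
            pending.Append(Encoding.ASCII.GetString(buffer, 0, bytesRead));

            int newlineIndex;
            while ((newlineIndex = pending.ToString().IndexOf('\n')) >= 0)
            {
                yield return pending.ToString(0, newlineIndex);   // one complete message
                pending.Remove(0, newlineIndex + 1);              // drop it plus the newline
            }
        }
    }
}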

Related

.Net Sockets - Socket.Receive reports bytes read, but the bytes do not appear in the buffer

I am calling (System.Net.Sockets) Socket.Receive. It returns that it read 13 bytes, which is the expected data length. However, the receive buffer I pass into the function does not become populated with the expected bytes. All of the bytes in the buffer are zero.
I simply don't understand how this could be happening - I thought Receive is supposed to block until data becomes available, and that it should put the data it read into the buffer.
Simple as it is, here is my code:
bytesRead = Socket.Receive(RecvBuffer.Buffer, offset, RecvBuffer.Buffer.Length - offset, SocketFlags.None, out SocketError error);
bytesRead = 13, and RecvBuffer.Buffer is all zeros.
On the sending side, it's not writing all zeros, as far as I can tell. I haven't found any tool to allow me to check what's on the wire.
I'm at a loss. Any help is appreciated.
You have to understand all the possible scenarios. We will not consider a chat application here, where both sides can send asynchronously at the same time. A good way to understand this kind of communication is an application that reads weights from a scale. There are two operating modes. One is continuous: the client sends a message to start weighing and the server constantly sends weights. That is probably not what you are doing. The second mode is master/slave: the client requests one weight and the server sends it. The weight may not arrive in one chunk, so the client has to loop until the entire weight is received, the end of which is usually marked by an end character or by a byte count at the beginning of the message.
There is no blocking in the sense you expect. Windows uses timers and periodically hands whatever data has arrived in the UART to the application (via Socket.Receive). The number of bytes is effectively random and less than or equal to the total data being sent. You should disable hardware and software handshaking so you do not get null data. You also need to send a message in order to receive a message, and not send again until you have received a full block of data, which may take multiple Socket.Receive() calls until you reach the end of the data (a terminating character or the expected byte count).
The problem here was the sending side. I got fooled by other messages working correctly, but in this particular case, the send buffer was never populated with the data. So none was received :).
Thanks for the comments and suggestions.

confused when receiving data from serial port

I've run into a problem that has taken me too much time without resolving it, so I really hope you can help me.
I have an application built with C# WPF that communicates with ovens via a serial port.
The frame I need to send has the following form: [EOT] (GID) (UID) (Temp) [ENQ]
GID, UID: group identifier and unit identifier (the address of the machine).
EOT, ENQ: frame the message.
Temp means: give me the temperature value.
Only the machine with the matching address may answer (master/slave architecture).
The response message has the form: [STX] (Temp) <DATA> [ETX]
The <DATA> field contains only the temperature value.
STX: start of text. ETX: end of text.
I have no problem sending and receiving data, and I can display the temperature value when a single machine is connected.
But when I connect more machines, I do not know which machine answered the frame I sent, because the response frame does not contain any address that would let me determine which oven responded.
So, in brief:
- I send data to the ovens.
- I receive data.
- I cannot determine which oven answered.
Does anyone have an idea?
PS: I am working with Eurotherm's EI-Bisynch protocol.
If needed: EI-Bisynch ASCII Sequence Diagrams
In these conditions, the typical solution is:
Send the request to the current device
Wait for an answer for a defined timeout
If we receive an answer within the timeout, the device responded.
If we do not receive an answer, the device is offline; mark it as such.
Switch to the next device, goto 1
Basically, you should be able to wrap the code described here in a loop:
Providing Asynchronous Serial Port Communication
That sample works with an AutoResetEvent, one of the .NET multithreading primitives for synchronizing threads (here, the thread that sends the requests in the loop and the thread that receives the responses).
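As a rough illustration (not the linked article's code), here is a minimal sketch of that polling loop, assuming a SerialPort on COM1, an invented request frame, a 500 ms timeout, and a hypothetical device list:

using System;
using System.IO.Ports;
using System.Threading;

class ScalePoller
{
    static readonly AutoResetEvent ReplyReceived = new AutoResetEvent(false);
    static string _lastReply;

    static void Main()
    {
        var port = new SerialPort("COM1", 9600);
        port.DataReceived += (sender, args) =>
        {
            // A real reader would accumulate bytes until a terminating character;
            // ReadExisting is enough to illustrate the hand-off to the polling loop.
            _lastReply = port.ReadExisting();
            ReplyReceived.Set();
        };
        port.Open();

        string[] deviceAddresses = { "01", "02", "03" };    // hypothetical device list
        foreach (var address in deviceAddresses)
        {
            port.Write("REQ " + address + "\r\n");          // hypothetical request frame
            bool answered = ReplyReceived.WaitOne(500);     // wait up to 500 ms for a reply
            Console.WriteLine(answered
                ? "Device " + address + " replied: " + _lastReply
                : "Device " + address + " is offline");
        }
    }
}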
In these situations, the machine that you addressed is the one responding (or at least it's assumed to be): single master, multiple slaves. Meaning:
Master -> Hey #1 tell me your temp -> #1 SIR! YES SIR! 23 degrees!
Master -> Hey #2...
The idea is that no other slave will respond, by convention of the protocol.
It's pretty hard to do anything but this kind of system on serial.
In terms of design, if you create something like a command queue, each command knows which device it wants to talk to and what question it wants to ask. You process each command: send the serial message, get the response, and give it back to the command. Now you have a command which knows which device it talked to, and what the response of that device was.
As long as you only have one "in-flight" command that you are waiting on for a response, and you know which device you sent the command to, you can assume that the device that responds next is the device you asked to respond. Note this won't necessarily always be true if it is possible for your devices to send unprompted responses.
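A rough sketch of that command-queue idea, with invented names (OvenCommand, and SendFrameAndReadReply standing in for the actual serial I/O):

using System.Collections.Generic;

class OvenCommand
{
    public string DeviceAddress;   // GID/UID of the oven being asked
    public string Request;         // e.g. the "give me the temperature" frame
    public string Response;        // filled in once the reply arrives
}

class CommandProcessor
{
    private readonly Queue<OvenCommand> _pending = new Queue<OvenCommand>();

    public void Enqueue(OvenCommand command) { _pending.Enqueue(command); }

    public void ProcessNext()
    {
        if (_pending.Count == 0) return;

        OvenCommand command = _pending.Dequeue();
        // Only one command is "in flight" at a time, so the next reply on the
        // serial line must belong to this command's device.
        command.Response = SendFrameAndReadReply(command.DeviceAddress, command.Request);
    }

    private string SendFrameAndReadReply(string deviceAddress, string request)
    {
        // Placeholder: write the request frame to the port and read until [ETX].
        return null;
    }
}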

Using TCP Sockets (C#), How do I detect if a message got interrupted in transit, on the receiver's end?

I'm writing a server application for my iPhone app. The section of the server I'm working on is the relay server. This essentially relays messages between iPhones, through a server, using TCP sockets. The server reads the length of the header from the stream, then reads that number of bytes from the stream. It deserializes the header and checks whether the message is to be relayed on to another iPhone (rather than being processed on the server).
If it has to be relayed, it begins reading bytes from the sender's socket, 1024 bytes at a time. After each 1024 bytes are received, it adds those bytes (as a "packet" of bytes) to the outgoing message queue, which is processed in order.
This all works fine. However, what happens if the sender gets interrupted, so it hasn't sent all its bytes (say, out of the 3,000 bytes it had to send, the sending iPhone goes into a tunnel after 2,500 bytes)?
This means that all the other devices are waiting on the remaining 500 bytes, which don't get relayed to them. Then if the sender (or anyone else, for that matter) sends data to these sockets, they think the start of the new message is the end of the last one, corrupting the data.
Obviously, from the description above, I'm using message framing, but I think I'm missing something. From what I can see, message framing only seems to let the receiver know the exact number of bytes to read from the socket before assembling them into an object. Won't things start to get hairy once a byte or two goes astray at some point, throwing everything out of sync? Is there a standard way of getting back in sync again?
Won't things start to get hairy once a byte or two goes astray at some point, throwing everything out of sync? Is there a standard way of getting back in sync again?
TCP/IP itself ensures that no bytes go "missing" over a single socket connection.
Things are a bit more complex in your situation, where (if I understand correctly) you're using a server as a sort of multiplexer.
In this case, here's some options off the top of my head:
Have the server buffer the entire message from point A before sending it to point B.
Close the B-side sockets if an abnormal close is detected from the A side.
Change the receiving side of the protocol so that a B-side client can detect and recover from a partial A-stream without killing and re-establishing the socket. e.g., if the server gave a unique id to each incoming A-stream, then the B client would be able to detect if a different stream starts. Or have an additional length prefix, so the B client knows both the entire length to expect and the length for that individual message.
Which option you choose depends on what kind of data you're transferring and how easy the different parts are to change.
Regardless of the solution, be sure to include detection of half-open connections.
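As one concrete (hedged) example of that last point: TCP keep-alive can be enabled on a .NET Socket so the OS probes idle connections and eventually notices a peer that vanished without closing cleanly. Many applications also layer their own heartbeat messages on top, since the default OS probe interval is typically measured in hours.

using System.Net.Sockets;

class KeepAliveHelper
{
    // Turns on OS-level TCP keep-alive probes for the given socket.
    public static void EnableKeepAlive(Socket socket)
    {
        socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
    }
}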

c# socket multiple packets stack

I'm having an issue using sockets in C#. Here's an example: say I send the number 1, then immediately send the number 2. The problem I sometimes run into is that the client that is supposed to receive them gets a single packet containing '12'. I was wondering if there is a built-in way to distinguish packets without using characters or something similar to separate the data.
To sum it up again: I sent two packets, one with the number '1', one with the number '2'.
Server receives 1 packet with the data '12'.
I don't want to separate the packets with characters, like this ':1::2:' or anything like that, as I don't always have control over the format of the incoming data.
Any ideas?
For example, if I do this:
client.Send(new byte[] { (byte)'1' }, 1, SocketFlags.None);
client.Send(new byte[] { (byte)'2' }, 1, SocketFlags.None);
then on the server end:
byte[] data = new byte[1024];
client.Receive(data);
data sometimes comes back with "12" even though I do two separate sends.
TCP/IP works on a stream abstraction, not a packet abstraction.
When you call Send, you are not sending a packet. You are only appending bytes to a stream.
When you call Receive, you are not receiving a packet. You are only receiving bytes from a stream.
So, you must define where a "message" begins and ends. There are only three possible solutions:
Every message has a known size (e.g., all messages are 42 bytes). In this case, you just keep calling Receive until you have that many bytes.
Use delimiters, which you're already aware of.
Use length prefixing (e.g., prefix each message with a 4-byte length-of-message). In this case, you keep calling Receive until the length prefix has arrived, decode it to get the actual length of the message, and then keep calling Receive until the message itself has arrived.
You must do the message framing yourself. TCP/IP cannot do it for you.
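For illustration, a minimal blocking-socket sketch of the length-prefixing option; the helper names are made up, and a 4-byte little-endian length prefix is assumed.

using System;
using System.Net.Sockets;

class LengthPrefixFraming
{
    // Sends one message as [4-byte length][payload].
    public static void SendMessage(Socket socket, byte[] payload)
    {
        socket.Send(BitConverter.GetBytes(payload.Length));   // length prefix
        socket.Send(payload);                                  // message body
    }

    // Receives exactly one message: first the 4-byte prefix, then the body.
    public static byte[] ReceiveMessage(Socket socket)
    {
        int length = BitConverter.ToInt32(ReceiveExactly(socket, 4), 0);
        return ReceiveExactly(socket, length);
    }

    // Loops on Receive until the requested number of bytes has arrived.
    private static byte[] ReceiveExactly(Socket socket, int count)
    {
        var buffer = new byte[count];
        int received = 0;
        while (received < count)
        {
            int n = socket.Receive(buffer, received, count - received, SocketFlags.None);
            if (n == 0)
                throw new SocketException((int)SocketError.ConnectionReset); // peer closed mid-message
            received += n;
        }
        return buffer;
    }
}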
TCP is a streaming protocol, so you will always receive whatever data has arrived since the last read, up to the streaming window size on the recipient's end. This buffer can be filled with data received from multiple packets of any given size sent by the sender.
TCP Receive Window Size and Scaling # MSDN
Although you observed one unified blob of data containing 2 bytes on the receiving end in your example, it is also possible to receive the bytes one at a time (as your sender sent them) depending on network conditions, as well as many possible combinations of 0-, 1-, and 2-byte reads if you are doing non-blocking reads. When debugging on a typical uncongested LAN or a loopback setup, you will almost never see this if there is no delay on the sending side. There are ways, at lower levels of the network stack, to detect per-packet transmission, but they are not used in typical TCP programming and are out of application scope.
If you switch to UDP, then every packet will be received as it was sent, which would match your expectations. This may suit you better, but keep in mind that UDP has no delivery promises and network routing may cause packets to be delivered out of order.
You should look into delimiting your data or finding some other method to detect when you have reached the end of a unit of data as defined by your application, and stick with TCP.
It's hard to answer your question without knowing the context. By default, TCP/IP handles packet management for you automatically (although you receive the data in a stream-based fashion). However, with a very specific (bad?) implementation you can send multiple streams over one socket at the same time, making it impossible for the lower-level TCP/IP stack to tell them apart, and thereby making it very hard for you to identify the different streams on the client. The only workaround for that would be to send two totally distinct streams (e.g., stream 1 only sends bytes lower than 127 and stream 2 only sends bytes greater than or equal to 127). Then again, that is terrible behavior.
You must put delimiters into your TCP messages, or use byte counts, to tell where one message starts and another begins.
Your code has another serious error. TCP sockets may not give you all the data in a single call to Receive. You must loop on receiving data until your application-specific framing in the data stream indicates the entire message has been received. Your call to client.Receive(data) returns the count of bytes received; you should capture that number.
Likewise, when you send data, all of your data might not be sent in a single call. You must loop on sending data until the count of bytes sent equals what you intended to send. The call to client.Send returns the actual count of bytes sent, which may not be all you tried to send!
The most common error I see people make with sockets is that they don't loop on the Send and Receive. If you understand why you need to do the looping, then you know why you need to have a delimiter or a byte count.
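A hedged sketch of what that looping looks like with blocking sockets (the helper names are illustrative):

using System.Net.Sockets;

class SocketLoops
{
    // Keeps calling Send until every byte of the buffer has been sent.
    public static void SendAll(Socket socket, byte[] data)
    {
        int sent = 0;
        while (sent < data.Length)
            sent += socket.Send(data, sent, data.Length - sent, SocketFlags.None);
    }

    // Keeps calling Receive until 'count' bytes have arrived or the peer closes.
    // The caller must compare the return value against 'count'.
    public static int ReceiveAll(Socket socket, byte[] buffer, int count)
    {
        int received = 0;
        while (received < count)
        {
            int n = socket.Receive(buffer, received, count - received, SocketFlags.None);
            if (n == 0)
                break;   // connection closed by the peer
            received += n;
        }
        return received;
    }
}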

How to safely stream data through a server socket to another socket?

I'm writing a server application for an iPhone application I'm designing. The iPhone app is written in C# (MonoTouch) and the server is written in C# too (.NET 4.0).
I'm using asynchronous sockets for the network layer. The server allows two or more iPhones ("devices") to connect to each other and be able to send data bi-directionally.
Depending on the incoming message, the server either processes the message itself, or relays the data through to the other device(s) in the same group as the sending device. It can make this decision by decoding the header of the packet first, and deciding what type of packet it is.
This is done by framing the stream in a way that the first 8 bytes are two integers, the length of the header and the length of the payload (which can be much larger than the header).
The server reads (asynchronously) from the socket the first 8 bytes so it has the lengths of the two sections. It then reads again, up to the total length of the header section.
It then deserializes the header, and based on the information within, can see if the remaining data (payload) should be forwarded onto another device, or is something that the server itself needs to work with.
If it needs to be forwarded onto another device, then the next step is to read data coming into the socket in chunks of say, 1024 bytes, and write these directly using an async send via another socket that is connected to the recipient device.
This reduces the memory requirements of the server, as I'm not loading the entire packet into a buffer and then re-sending it down the wire to the recipient.
However, because of the nature of async sockets, I am not guaranteed to receive the entire payload in one read, so I have to keep reading until I receive all the bytes. In the case of relaying onto its final destination, this means that I'm calling BeginSend() for each chunk of bytes I receive from the sender and forwarding that chunk onto the recipient, one chunk at a time.
The issue with this is that because I am using async sockets, this leaves the possibility of another thread doing a similar operation with the same recipient (and therefore same final destination socket), and so it is likely that the chunks coming from both threads will get mixed up and corrupt all the data going to that recipient.
For example: If the first thread sends a chunk, and is waiting for the next chunk from the sender (so it can relay it onwards), the second thread could send one of its chunks of data, and corrupt the first thread's (and the second thread's for that matter) data.
As I write this, I'm wondering: is it as simple as just locking the socket object? Would that be the correct option, or could it cause other issues (e.g., problems receiving data through the locked socket that's being sent BACK from the remote device)?
Thanks in advance!
I was facing a similar scenario a while back. I don't have the complete solution anymore, but here's pretty much what I did:
I didn't use sync sockets; I decided to explore async sockets in C# - fun ride
I don't allow multiple threads to share a single resource unless I really have to
My "packets" contained the size, index, and total packet count for a message
My packet's 1st byte was unique, to signify the start of a message; I used 0xAA
My packet's last 2 bytes were a CRC-CCITT checksum (ushort)
The objects that did the receiving contained a buffer with all received bytes. From that buffer I extracted "complete" messages once the size was OK and the checksum matched (a rough sketch of that extraction follows below)
The only "locking" I needed to do was on the temp buffer, so I could safely analyze its contents between write/read operations
Hope that helps a bit
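For what it's worth, here is a rough sketch of that kind of extraction, assuming an invented layout of [0xAA][1-byte payload length][payload][2-byte CRC-CCITT over the length byte and payload]; the index/total-count fields described above are omitted for brevity.

using System.Collections.Generic;

class FrameExtractor
{
    // Scans the receive buffer, returns all complete checksum-valid payloads,
    // and removes the consumed bytes (plus any leading junk) from the buffer.
    public static List<byte[]> ExtractMessages(List<byte> buffer)
    {
        var messages = new List<byte[]>();
        while (true)
        {
            int start = buffer.IndexOf(0xAA);
            if (start < 0 || buffer.Count - start < 4)
                break;                                         // no start byte or header yet

            int length = buffer[start + 1];
            int frameEnd = start + 2 + length + 2;             // start + length byte + payload + CRC
            if (buffer.Count < frameEnd)
                break;                                         // frame not fully received yet

            byte[] frame = buffer.GetRange(start, frameEnd - start).ToArray();
            ushort expected = (ushort)((frame[frame.Length - 2] << 8) | frame[frame.Length - 1]);
            if (CrcCcitt(frame, 1, length + 1) == expected)
                messages.Add(buffer.GetRange(start + 2, length).ToArray());

            buffer.RemoveRange(0, frameEnd);                   // drop everything up to the frame end
        }
        return messages;
    }

    // Plain bit-by-bit CRC-CCITT (0x1021 polynomial, 0xFFFF initial value).
    private static ushort CrcCcitt(byte[] data, int offset, int count)
    {
        ushort crc = 0xFFFF;
        for (int i = offset; i < offset + count; i++)
        {
            crc ^= (ushort)(data[i] << 8);
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 0x8000) != 0 ? (ushort)((crc << 1) ^ 0x1021) : (ushort)(crc << 1);
        }
        return crc;
    }
}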
Not sure where the problem is. Since you mentioned servers, I assume TCP, yes?
A phone needs to communicate some of your PDU to another phone. It connects as a client to the server on the other phone. A socket pair is established. It sends the data off to the server socket. The socket pair is unique - no other streams that might be happening between the two phones should interrupt this (they will slow it down, of course).
I don't see how async vs. sync sockets, assuming they are implemented correctly, should affect this; either should work OK.
Is there something I cannot see here?
BTW, Maciek's plan to bolster up the protocol by adding an 'AA' start byte is an excellent idea - protocols that depend on sending just a length as the first element always seem to screw up eventually and result in a node trying to dequeue more bytes than there are atoms in the universe.
Rgds,
Martin
OK, now I understand the problem (I completely misunderstood the topology of the OP's network - I thought each phone was running a TCP server as well as client(s), but there is just one server on a PC/whatever, a la chat rooms). I don't see why you could not lock the socket class with a mutex, thereby serializing the messages. You could queue the messages to the socket, but that has the memory implications that you are trying to avoid.
You could dedicate a connection to supplying only instructions to the phone, eg 'open another socket connection to me and return this GUID - a message will then be streamed on the socket'. This uses up a socket-pair just for control and halves the capacity of your server :(
Are you stuck with the protocol you have described, or can you break your messages up into chunks with some ID in each chunk? You could then multiplex the messages onto one socket pair.
Another alternative, which again would require chunking the messages, is to introduce a 'control message' (maybe a chunk starting with 55 instead of AA) that contains a message ID (a GUID?). The phone uses the ID to establish a second socket connection to the server, passes up the ID, and is then sent the second message on the new socket connection.
Another, (getting bored yet?), way of persuading the phone to recognise that a new message might be waiting would be to close the server socket that the phone is receiving a message over. The phone could then connect up again, tell the server that it only got xxxx bytes of message ID yyyy. The server could then reply with an instruction to open another socket for new message zzzz and then resume sending message yyyy. This might require some buffering on the server to ensure no data gets lost during the 'break'. You might want to implement this kind of 'restart streaming after break' functionality anyway since phones tend to go under bridges/tunnels just as the last KB of a 360MB video file is being streamed :( I know that TCP should take care of dropped packets, but if the phone wireless layer decides to close the socket for whatever reason...
None of these solutions is particularly satisfying. Interested to see what other ideas crop up.
Rgds,
Martin
Thanks for the help, everyone. I've realised the simplest approach is to use synchronous send commands on the client, or at least a send command that must complete before the next item is sent. I'm handling this with my own send queue on the client, rather than various parts of the app just calling send() whenever they need to send something.
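A minimal sketch of that kind of client-side send queue, assuming a blocking Socket and a dedicated worker thread; all names are illustrative.

using System.Collections.Generic;
using System.Net.Sockets;
using System.Threading;

class SendQueue
{
    private readonly Socket _socket;
    private readonly Queue<byte[]> _pending = new Queue<byte[]>();
    private readonly AutoResetEvent _workAvailable = new AutoResetEvent(false);

    public SendQueue(Socket socket)
    {
        _socket = socket;
        new Thread(SendLoop) { IsBackground = true }.Start();
    }

    // Any part of the app can enqueue a complete, already-framed message.
    public void Enqueue(byte[] message)
    {
        lock (_pending) _pending.Enqueue(message);
        _workAvailable.Set();
    }

    // Single worker: one message fully sent before the next one starts.
    private void SendLoop()
    {
        while (true)
        {
            _workAvailable.WaitOne();
            while (true)
            {
                byte[] message;
                lock (_pending)
                {
                    if (_pending.Count == 0) break;
                    message = _pending.Dequeue();
                }
                int sent = 0;
                while (sent < message.Length)   // loop until the whole message is out
                    sent += _socket.Send(message, sent, message.Length - sent, SocketFlags.None);
            }
        }
    }
}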
