C# client socket for synchronous communication

I have a 3rd-party server where I send data and then wait for a response.
It will first respond with an ACK and then with the full response.
I am doing something like this. Is there a better way?
// Connect
eftSocket.Connect(remoteEp);
// Encode the data string into a byte array.
byte[] msg = Encoding.ASCII.GetBytes(strRequest);
// Send the data through the socket.
int bytesSent = eftSocket.Send(msg);
// Receive buffer
byte[] bytes = new byte[1024];
// Receive first time
int bytesRec = eftSocket.Receive(bytes);
// Did we get an ACK (0x06)?
if (bytes[0] != 6)
{
    throw new Exception("Could not connect to EFT terminal");
}
// Receive 2nd time
bytesRec = eftSocket.Receive(bytes);

That code looks broken. First you have to accept that TCP is a stream and that there are no packets. Even if the remote sends something of a given length, you can never expect to receive it in one call. That means if you call eftSocket.Receive(bytes) you might receive any number of the bytes the remote sent, not necessarily the ACK byte in the first call and the remaining data in a second call. The only thing you know is that you won't receive more than bytes.Length in one call. If you pass a sufficiently large buffer you might receive everything (ACK + remaining data) in one go. Therefore you should always check the return value first, and if you expect more bytes, repeat the Receive calls with the required offsets.
You should also check whether the receive succeeded at all. Receive might return 0, which means the remote closed the socket. If that's the case, bytes[0] will contain whatever was stored in it before the Receive started.
Finally, you should properly close your socket. It's not obvious from the code example whether that happens, but if you leave the code by throwing an exception the close might be skipped.
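For what it's worth, here is a minimal sketch of a safer synchronous exchange; the buffer size, the assumed ACK value (0x06), and the protocol-defined response length are illustrative assumptions, not part of the original code:
// Sketch: check every Receive return value and keep reading until the full
// response has arrived (here the response length is assumed to be known up front).
byte[] buffer = new byte[4096];
int total = eftSocket.Receive(buffer);
if (total == 0)
    throw new Exception("Remote closed the connection");
if (buffer[0] != 0x06)   // assumed ACK byte
    throw new Exception("Could not connect to EFT terminal");

// Keep reading into the same buffer until the full response has arrived.
int expectedLength = 128;   // assumption: protocol-defined response length (after the ACK)
while (total < 1 + expectedLength)
{
    int n = eftSocket.Receive(buffer, total, buffer.Length - total, SocketFlags.None);
    if (n == 0)
        throw new Exception("Remote closed the connection before the full response arrived");
    total += n;
}
string response = Encoding.ASCII.GetString(buffer, 1, expectedLength);
How to decide that the response is "complete" depends on the terminal's protocol; a fixed length is only one possibility.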

Related

UDP SocketException on EndReceiveFrom (C#)

I'm currently writing some async UDP network code in C#. I'm sending small packets (less than 50 bytes of data in each so far) back and forth, and my first thought was to split each one into two logical parts: still send it as one packet, but receive it as two. A header, or extra information block, is always added to the start of the real packet; it contains an ID and the data length.
So I thought I could split it on the receiving end (async receive) and first receive the header and then the actual information. This is so that I don't have to worry about the order between packets and "packet headers".
So I wrote code that basically worked like this:
Client sends 30 bytes of data to the server, where the first 3 bytes are the packet header.
The server would have called (PACKET_HEADER_SIZE = 3):
socket.BeginReceiveFrom(state.Buffer, 0, PACKET_HEADER_SIZE, SocketFlags.None, ref endPoint, ReceivePacketInfo, state);
Then receives the data:
private void ReceivePacketInfo(IAsyncResult ar)
{
    StateObj state = (StateObj) ar.AsyncState;
    int bytesRead = socket.EndReceiveFrom(ar, ref endPoint);
    state.BytesReceived += bytesRead;
    if (state.BytesReceived < state.Buffer.Length)
    {
        socket.BeginReceiveFrom(state.Buffer, state.BytesReceived, state.Buffer.Length - state.BytesReceived,
            SocketFlags.None, ref endPoint, ReceivePacketInfo, state);
    }
    else
    {
        // my thought was to receive the rest of the packet here
    }
}
but when calling socket.EndReceiveFrom(ar) I get a SocketException:
"A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram into was smaller than the datagram itself"
So now I have a couple of questions.
Do I have to make sure I receive the whole packet (in this case both the header and the packet) before I call EndReceiveFrom?
Can I assume that I will either get the whole packet in one go or get nothing, so that my if-statement in ReceivePacketInfo would be redundant (as long as its size is less than the maximum packet size, of course)?
If I cannot, is there a good way of solving my problem? I could tag all my packet headers and all my packets to be able to map them together I suppose. I could also try to have a standardized "packet ending" so that I just read until I hit the end of the packet.
Thanks in advance for any help!
Can I assume that I will either get the whole packet in one go or get nothing
That's almost the only thing that UDP can guarantee: the content of a packet. If a packet is received, it is guaranteed to have the same size and the same content as when it was sent. So you have to make sure that your buffer is large enough for a whole packet.
Neither the order of packets nor delivery itself is guaranteed. It is up to you and your application to handle dropped packets and out-of-order packets.
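Given that, a minimal sketch of the receiving side would hand BeginReceiveFrom a buffer that is at least as large as the biggest datagram you ever send, and parse the 3-byte header out of the completed receive. The buffer size, the _socket field, and the header layout below are assumptions for illustration:
private const int MaxDatagramSize = 1024;   // assumed to be larger than any packet we send
private byte[] _recvBuffer = new byte[MaxDatagramSize];
private EndPoint _remote = new IPEndPoint(IPAddress.Any, 0);

private void StartReceive()
{
    _socket.BeginReceiveFrom(_recvBuffer, 0, _recvBuffer.Length,
        SocketFlags.None, ref _remote, OnDatagram, null);
}

private void OnDatagram(IAsyncResult ar)
{
    // A UDP receive always delivers one whole datagram (or nothing).
    int length = _socket.EndReceiveFrom(ar, ref _remote);
    if (length >= 3)
    {
        // Assumed header: 1-byte ID followed by a 2-byte payload length.
        byte id = _recvBuffer[0];
        int dataLength = (_recvBuffer[1] << 8) | _recvBuffer[2];
        // ... handle the payload in _recvBuffer[3 .. 3 + dataLength) here ...
    }
    StartReceive();   // queue the next receive
}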

Measuring the TCP Bytes waiting to be read by the Socket

I have a socket connection that receives data, and reads it for processing.
When data is not processed/pulled fast enough from the socket, there is a bottleneck at the TCP level, and the data received is delayed (I can tell by the timestamps after parsing).
How can I see how many TCP bytes are waiting to be read by the socket? (via some external tool like Wireshark or something else)
private void InitiateRecv(IoContext rxContext)
{
    rxContext._ipcSocket.BeginReceive(rxContext._ipcBuffer.Buffer, rxContext._ipcBuffer.WrIndex,
        rxContext._ipcBuffer.Remaining(), 0, CompleteRecv, rxContext);
}

private void CompleteRecv(IAsyncResult ar)
{
    IoContext rxContext = ar.AsyncState as IoContext;
    if (rxContext != null)
    {
        int rxBytes = rxContext._ipcSocket.EndReceive(ar);
        if (rxBytes > 0)
        {
            EventHandler<VfxIpcEventArgs> dispatch = EventDispatch;
            dispatch(this, new VfxIpcEventArgs(rxContext._ipcBuffer));
            InitiateRecv(rxContext);
        }
    }
}
The fact is, I guess the "dispatch" is somehow blocking the reception until it is done, which ends up as latency (i.e., data that is processed by the dispatch is delayed), hence my (false?) conclusion that there was data accumulating at the socket level or before it.
How can I see how much TCP bytes are awaiting to be read by the socket
By specifying a protocol that indicates how many bytes it's about to send. Using sockets you operate a few layers above the byte level, and you can't see how many send() calls end up as receive() calls on your end because of buffering and delays.
If you specify the number of bytes on beforehand, and send a string like "13|Hello, World!", then there's no problem when the message arrives in two parts, say "13|Hello" and ", World!", because you know you'll have to read 13 bytes.
You'll have to keep some sort of state and a buffer in between different receive() calls.
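A minimal sketch of that buffering, using the "13|Hello, World!" style framing from the example above (the HandleMessage call and the ASCII text framing are assumptions for illustration):
// Sketch: accumulate received text and extract "length|payload" messages,
// which may arrive split across any number of receive() calls.
private readonly StringBuilder _pending = new StringBuilder();

private void OnBytesReceived(byte[] buffer, int count)
{
    _pending.Append(Encoding.ASCII.GetString(buffer, 0, count));
    while (true)
    {
        string s = _pending.ToString();
        int sep = s.IndexOf('|');
        if (sep < 0) return;                           // length prefix not complete yet
        int length = int.Parse(s.Substring(0, sep));   // announced payload length
        if (s.Length - sep - 1 < length) return;       // payload not complete yet
        string message = s.Substring(sep + 1, length);
        _pending.Remove(0, sep + 1 + length);
        HandleMessage(message);                        // assumed handler for one message
    }
}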
When it comes to external tools like Wireshark, they cannot know how many bytes are left in the socket. They only know which packets have passed by the network interface.
The only way to check it with Wireshark is to actually know the last bytes you read from the socket, locate them in Wireshark, and count from there.
However, the best way to get this information is to check the Available property on the socket object in your .NET application.
You can use socket.Available if you are using the normal Socket class. Otherwise you have to define a header that gives the number of bytes to be sent from the other end.
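For what it's worth, checking that property on the question's socket might look like this (the logging line is just illustrative):
// Socket.Available reports how many received bytes the OS has buffered,
// i.e. what a Receive call could read right now without blocking.
int pendingBytes = rxContext._ipcSocket.Available;
Console.WriteLine("Bytes waiting to be read: " + pendingBytes);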

How do I know when an asynchronous socket read ends?

I have an asynchronous read method...
private void read(IAsyncResult ar) {
    // Get the server state object
    ServerState state = (ServerState)ar.AsyncState;
    // Read from the socket
    int readCount = state.socket.EndReceive(ar);
    // Check if reading is done; move on if so, trigger another read if not
    if (readCount > 0) {
        // Purge the buffer and start another read
        state.purgeBuffer();
        state.socket.BeginReceive(state.buffer, 0, ServerState.bufferSize, 0, new AsyncCallback(read), state);
    }
    else {
        // All bytes have been read, dispatch the message
        dispatch(state);
    }
}
The problem I am having is that the read count is only 0 when the connection is closed. How do I say "this is the end of that message" and pass the data on to the dispatcher, while leaving the socket open to accept new messages?
Thank you!
You should not rely on what is in the TCP buffer. You must process the incoming bytes as a stream somewhere. At this level you can't really know whether a message is complete; only the layer above can know when the message has completed.
Example:
If you read HTTP responses, the HTTP header will contain the byte count of the HTTP body, so you know how much to read.
You only know how much to read if the data follows a certain protocol and you interpret it. Imagine you receive a file over the socket. The first thing you would receive is the file size. Without that you would never know how much to read.
You should make your messages fit a particular format so that you can distinguish when they start and when they end. Even if it is a stream of data, it should be sent in framed packets.
One option is to send the length of the message first; then you know how much data to expect. The problem with that alone is that if you lose sync you can never recover, and you will never know what is the message length and what is its content. It is good to also use some special marker sequence to know when a message begins. It is not 100% error-proof (the sequence might appear in the data), but it certainly helps and allows you to recover from a loss of sync. This is particularly important when reading from a binary stream like a socket.
Even the ancient RS-232 serial protocol had its framing and stop bit so you knew when you had received all the data.
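As an illustration of the marker idea, here is a minimal sketch that frames messages between assumed STX (0x02) and ETX (0x03) bytes and dispatches only complete frames; the buffer field and the dispatch call are assumptions, not part of the code above:
// Sketch: collect bytes across reads and dispatch each complete STX..ETX frame.
private readonly List<byte> _rxBytes = new List<byte>();

private void OnDataRead(byte[] buffer, int readCount)
{
    // Append whatever this read returned to the running buffer.
    for (int i = 0; i < readCount; i++)
        _rxBytes.Add(buffer[i]);

    // Extract every complete frame delimited by the assumed STX/ETX markers.
    int start = _rxBytes.IndexOf(0x02);
    int end = _rxBytes.IndexOf(0x03);
    while (start >= 0 && end > start)
    {
        byte[] message = _rxBytes.GetRange(start + 1, end - start - 1).ToArray();
        dispatch(message);                   // assumed dispatcher for a complete message
        _rxBytes.RemoveRange(0, end + 1);    // drop the consumed frame
        start = _rxBytes.IndexOf(0x02);
        end = _rxBytes.IndexOf(0x03);
    }
}
The socket stays open; each completed frame is handed on as soon as its end marker arrives, regardless of how the bytes were split across reads.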

Loop until TcpClient response fully read [duplicate]

This question already has answers here: Receiving data in TCP (10 answers). Closed 2 years ago.
I have written a simple TCP client and server. The problem lies with the client.
I'm having some trouble reading the entire response from the server. I must let the thread sleep to allow all the data to be sent.
I've tried a few times to convert this code into a loop that runs until the server is finished sending data.
// Init & connect to client
TcpClient client = new TcpClient();
Console.WriteLine("Connecting.....");
client.Connect("192.168.1.160", 9988);
// Stream string to server
input += "\n";
Stream stm = client.GetStream();
ASCIIEncoding asen = new ASCIIEncoding();
byte[] ba = asen.GetBytes(input);
stm.Write(ba, 0, ba.Length);
// Read response from server.
byte[] buffer = new byte[1024];
System.Threading.Thread.Sleep(1000); // Huh, why do I need to wait?
int bytesRead = stm.Read(buffer, 0, buffer.Length);
response = Encoding.ASCII.GetString(buffer, 0, bytesRead);
Console.WriteLine("Response String: "+response);
client.Close();
The nature of streams that are built on top of sockets is that you have an open pipeline that transmits and receives data until the socket is closed.
However, because of the nature of client/server interactions, this pipeline isn't always guaranteed to have content on it to be read. The client and server have to agree to send content over the pipeline.
When you take the Stream abstraction in .NET and overlay it on the concept of sockets, the requirement for an agreement between the client and server still applies; you can call Stream.Read all you want, but if the socket that your Stream is connected to on the other side isn't sending content, the call will just wait until there is content.
This is why protocols exist. At their most basic level, they help define what a complete message that is sent between two parties is. Usually, the mechanism is something along the lines of:
A length-prefixed message where the number of bytes to be read is sent before the message
A pattern of characters used to mark the end of a message (this is less common depending on the content that is being sent, the more arbitrary any part of the message can be, the less likely this will be used)
That said, you aren't adhering to the above; your call to Stream.Read is just saying "read up to 1024 bytes". It will block until at least some data is available and then return whatever has arrived so far, which may well be less than the full response.
The reason the call to Thread.Sleep probably works is that by the time a second goes by, the entire response has arrived on the Stream, so a single Read picks it all up.
Additionally, if you truly want to read 1024 bytes, you can't assume that the call to Stream.Read will populate 1024 bytes of data. The return value for the Stream.Read method tells you how many bytes were actually read. If you need more for your message, then you need to make additional calls to Stream.Read.
Jon Skeet wrote up the exact way to do this if you want a sample.
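If it helps, here is a minimal sketch of such a loop that reads an exact number of bytes from a Stream, assuming you already know how many bytes to expect (for example from a length prefix); the ReadExact name is just illustrative:
// Sketch: keep calling Read until exactly 'count' bytes have been collected,
// because a single Read may return fewer bytes than requested.
static byte[] ReadExact(Stream stream, int count)
{
    byte[] data = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int n = stream.Read(data, offset, count - offset);
        if (n == 0)
            throw new IOException("Stream closed before the expected data arrived");
        offset += n;
    }
    return data;
}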
Try to repeat the
int bytesRead = stm.Read(buffer, 0, buffer.Length);
call while bytesRead > 0. It is a common pattern for that, as I remember.
Of course, don't forget to pass appropriate parameters for the buffer.
You don't know the size of the data you will be reading, so you have to set up a mechanism to decide when you are done. One is a timeout and another is using delimiters.
In your example you read whatever data arrives in just one iteration (one Read) because you don't set a receive timeout and the default value is 0 milliseconds. That is why you have to sleep for 1000 ms; you would get the same effect by setting the receive timeout to 1000 ms.
I don't think using the length of the data as a prefix is the real solution, because when the socket is closed by both sides the TIME_WAIT situation cannot be handled properly: the same data can be sent to the server again and cause the server to throw an exception. We used a prefix/ending character sequence instead. After every read we check the data for the start and end character sequences; if we can't find the end characters, we issue another read. But of course this works only if you have control of both the server-side and client-side code.
In the TCP client/server I just wrote, I generate the packet I want to send into a MemoryStream, then take the length of that stream and use it as a prefix when sending the data. That way the client knows how many bytes it's going to need to read for a full packet.
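A rough sketch of that sending side, assuming a 4-byte length prefix produced with BitConverter (the prefix format and the SendPacket name are illustrative assumptions):
// Sketch: build the packet in a MemoryStream, then send its length first.
static void SendPacket(NetworkStream stream, byte[] payload)
{
    using (var ms = new MemoryStream())
    {
        ms.Write(payload, 0, payload.Length);      // "generate the packet" into the stream
        byte[] body = ms.ToArray();
        byte[] prefix = BitConverter.GetBytes(body.Length);   // assumed 4-byte length prefix
        stream.Write(prefix, 0, prefix.Length);
        stream.Write(body, 0, body.Length);
    }
}
The receiver then reads the 4 prefix bytes first and loops on Read until it has collected that many payload bytes, much like the read-exactly sketch shown earlier.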

TCP listener cuts message at 1024 bytes

The problem just started on the client side. Here is my code where I receive a TCP/IP message. On my local PC this listener receives many KB with no problem. I tried to increase the buffer size, but on the client site they still report issues related to it: it still gets only the first 1K (1024 bytes).
public void Start()
{
    // Define the TCP listener
    tcpListener = new TcpListener(IPAddress.Any, IDLocal.LocalSession.PortNumber);
    try
    {
        // Start the TCP listener
        tcpListener.Start();
        while (true)
        {
            var clientSocket = tcpListener.AcceptSocket();
            if (clientSocket.Connected)
            {
                var netStream = new NetworkStream(clientSocket);
                // Check to see if this NetworkStream is readable.
                if (netStream.CanRead)
                {
                    var myReadBuffer = new byte[1024];
                    var myCompleteMessage = new StringBuilder();
                    // Incoming message may be larger than the buffer size.
                    do
                    {
                        var numberOfBytesRead = netStream.Read(myReadBuffer, 0, myReadBuffer.Length);
                        myCompleteMessage.AppendFormat("{0}", Encoding.ASCII.GetString(myReadBuffer, 0, numberOfBytesRead));
                    } while (netStream.DataAvailable);
                    // All we do is respond with an "OK" message
                    var sendBytes = Encoding.ASCII.GetBytes("OK");
                    netStream.Write(sendBytes, 0, sendBytes.Length);
                    clientSocket.Close();
                    netStream.Dispose();
                    // Raise an event with the message we received
                    DataReceived(myCompleteMessage.ToString());
                }
            }
        }
    }
    catch (Exception e)
    {
        // If we catch a network-related exception, send the event up
        IDListenerException(e.Message);
    }
}
I don't see any problem with the code you posted for extracting the message into a string, so I'm guessing that something else is afoot.
TCP isn't required to send all the data you queue to it in one go. This means it can send as few bytes as it wants to at a time, and it can choose to split your data into pieces at will. In particular, it is guaranteed to split your data if it doesn't fit into one packet. Typically, the maximum packet size (the MTU) on Ethernet is around 1500 bytes.
Therefore there's a real possibility that the data is sent, but as more than one packet. The delay between reception of the first and second packet could mean that when the first one arrives your code happily reads everything it contains and then stops (no more data) before the second packet has had time to arrive.
You can test this hypothesis by either observing network traffic, or allowing your app to pull more messages from the wire and see if it finally does get all the data you sent (albeit in pieces).
Ultimately the underlying issue is TCP's fundamental stream-based (and not message-based) nature; even if you get this code to work correctly, there is no guarantee that it will continue working in the future because it makes assumptions about stuff that TCP does not guarantee.
To be safe, you will need to incorporate a message-based structure (e.g. prepending each piece of data with exactly 4 bytes that hold its length; then you can just keep reading until you have received that many bytes).
