Socket is not receiving all the bytes C# .NET [duplicate] - c#

This question already has an answer here:
How can we find that all bytes receive from sockets?
(1 answer)
Closed 7 years ago.
I have a problem at work with sockets. I have a client that should send a screenshot to the server. The problem is that the server does not receive all the bytes of the array sent by the client; it is consistently 255 bytes short (I checked several times). As a result, on the server side I cannot convert the byte array back into an image.
Client sends data to the server:
byte[] bytesforSend = ConvertBitmapToByteArray(GetScreenImage());
client.Send(bytesforSend, bytesforSend.Length, 0);
Server receives data from the client:
int lenght = cl.socket.Receive(bytes);
Perhaps this is all very easy to solve, but I'm working with sockets for the first time and I don't understand why this happens.

Let me paste here the code that you have written in a comment:
List<byte[]> recievingBytes = new List<byte[]>();
List<int> lenghts = new List<int>();
int lenght;
do
{
    lenght = cl.socket.Receive(bytes);
    recievingBytes.Add(bytes);
    lenghts.Add(lenght);
} while (lenght != 0);
The most likely problem here (I saw a similar issue in a commercial library that communicated with a camera) is that you are assuming that all the data will reach its destination at the same time, but this may not be true depending on the network conditions or on how the client actually sends the data.
Assume, for example, that the client sends a 2048-byte block of data in four 512-byte TCP segments. The first three arrive immediately, but due to some networking issue the last packet is lost and needs to be retransmitted. In the meantime you have already executed the while (lenght != 0) check and ended the loop. After that the last 512-byte piece arrives, but you have missed it.
What you need to do is replace the while (lenght != 0) with something like while (IDontHaveAllTheDataThatIExpect && !timeout). This assumes, of course, that you either know beforehand how much data you will receive or that you can somehow detect the end of the data.
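For example, if the sender first transmits the total size (say as a 4-byte length prefix, which is an assumption here, since the posted client code does not do that), the receive loop could look roughly like this minimal sketch:

// Sketch: keep calling Receive until the expected number of bytes has arrived.
// Assumes the client sends a 4-byte length prefix first (not in the original code).
byte[] lengthPrefix = new byte[4];
int got = 0;
while (got < 4)
{
    int n = cl.socket.Receive(lengthPrefix, got, 4 - got, SocketFlags.None);
    if (n == 0) throw new Exception("Connection closed before the length arrived");
    got += n;
}
int expected = BitConverter.ToInt32(lengthPrefix, 0);

byte[] image = new byte[expected];
int received = 0;
while (received < expected)
{
    // Receive may return fewer bytes than requested; accumulate until done.
    int n = cl.socket.Receive(image, received, expected - received, SocketFlags.None);
    if (n == 0) throw new Exception("Connection closed before the full image arrived");
    received += n;
}

A timeout can be layered on top by setting socket.ReceiveTimeout and catching the resulting SocketException.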

Related

UDP packets always arrive at the transport layer (detected by Wireshark) but are not always read by the application

Context
I have been building a system where a Raspberry Pi sends images to a remote client in real time.
The Raspberry Pi captures the images using a Raspberry Pi camera. A captured image is available as a 3-dimensional array of all the pixels (rows, columns and RGB). By sending and displaying the images really fast, they appear as video to the user.
My goal is to send these images in real time with the image resolution as high as possible. An acceptable frame rate is around 30 fps. I selected UDP rather than TCP, because data can be transferred much faster over UDP due to less overhead. Retransmission of individual packets is not necessary, because losing some pixels is acceptable in my case. The Raspberry Pi and the client are on the same network, so not many packets will be dropped anyway.
Taking into account that the maximum transmission unit (MTU) on the Ethernet layer is 1500 bytes, and that the UDP packets should not be fragmented or dropped, I selected a maximum payload length of 1450 bytes, of which 1447 bytes are data and 3 bytes are application-layer overhead. The remaining 50 bytes are reserved for the headers that are automatically added by the lower network and transport layers.
I mentioned that captured images are available as an array. Assuming the size of this array is, for example, 1,036,800 bytes (width=720 * height=480 * numberOfColors=3), then 717 (1,036,800 / 1447, rounded up) UDP packets are needed to send the entire array. The C++ application on the Raspberry Pi does this by fragmenting the array into fragments of 1447 bytes and adding a fragment index number, which is between 1 and 717, as overhead to the packet. We also add an image number, to distinguish it from a previously sent image/array. The packet looks like this:
(image: UDP packet layout)
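For illustration, here is a C# sketch of building packets in the format described above (the real sender is the C++ application on the Raspberry Pi; this is just the same layout expressed in C#, with a big-endian 2-byte fragment index to match the receiver code shown later):

// Sketch: split an image array into UDP payloads of at most 1450 bytes:
// [ imageNr (1 byte) | fragmentIndex high, low (2 bytes) | up to 1447 data bytes ]
static IEnumerable<byte[]> Fragment(byte[] image, byte imageNr)
{
    const int DataPerPacket = 1447;
    int fragmentIndex = 1;                          // 1-based, as described above
    for (int offset = 0; offset < image.Length; offset += DataPerPacket)
    {
        int count = Math.Min(DataPerPacket, image.Length - offset);
        byte[] packet = new byte[3 + count];
        packet[0] = imageNr;
        packet[1] = (byte)(fragmentIndex >> 8);     // high byte of the index
        packet[2] = (byte)(fragmentIndex & 0xFF);   // low byte of the index
        Array.Copy(image, offset, packet, 3, count);
        fragmentIndex++;
        yield return packet;
    }
}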
Problem
On the client side, I developed a C# application that receives all the packets and reassembles the array using the included index numbers. Using the EmguCV library, the received array is converted to an image and drawn in a GUI. However, some of the received images are drawn with black lines/chunks. When debugging, I discovered that this problem is not caused by drawing the image: the black chunks are actually missing array fragments that never arrived. Because the byte values in an array are initialized to 0 by default, the missing fragments show up as black chunks.
Debugging
Using Wireshark on the client's side, I searched for the index of such a missing fragment, and was surprised to find it, intact. This means that the data is received correctly at the transport layer (and observed by Wireshark), but never read at the application layer.
This image shows that a chunk of a received array is missing, at index 174,000. Because there are 1447 data bytes in a packet, this missing data corresponds to the UDP packet with fragment index 121 (174,000 / 1447, rounded up). The hexadecimal equivalent of 121 is 0x79. The following image shows the corresponding UDP packet in Wireshark, proving that the data was still intact at the transport layer.
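For reference, the mapping from a missing byte offset in the reassembled array to the 1-based fragment index in the packet header is a quick calculation (using the numbers from above):

// Map a byte offset in the reassembled array to the 1-based fragment index.
int byteOffset = 174000;              // first missing byte observed in the array
int dataBytesPerPacket = 1447;        // data bytes per UDP packet
int fragmentIndex = byteOffset / dataBytesPerPacket + 1;              // 120 + 1 = 121
Console.WriteLine($"fragment {fragmentIndex} (0x{fragmentIndex:X})"); // fragment 121 (0x79)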
What have I tried so far
When I lower the frame rate, there are fewer black chunks, and they are often smaller. With a frame rate of 3 FPS there is no black at all. However, this frame rate is not acceptable. That is a data rate of only around 3 * 1,036,800 = 3,110,400 bytes per second (roughly 3 MB/s); a normal computer should be able to read far more than that. And as I explained, the packets DID arrive in Wireshark; they are just never read at the application layer.
I have also tried changing the UDP payload length from 1447 to 500 bytes. This only makes it worse; see the image.
I implemented multi threading so that data is read and processed in different threads.
I tried a TCP implementation. The images were received intact, but it was not fast enough to transfer the images in real-time.
It is notable that a 'black chunk' does not represent a single missing fragment of 1447 bytes, but many consecutive fragments. So at some point while reading data, a whole run of packets is not read. Also, not every image has this problem; some arrive intact.
I am wondering what is wrong with my implementation that results in this unwanted effect. So I will be posting some of my code below.
Please note that the 'SocketException' is never actually thrown, and the Console.WriteLine for 'invalid overhead' is never printed either. The _client.Receive call always returns 1450 bytes, except for the last fragment of an array, which is smaller.
Also
Besides solving this bug, if anyone has alternative suggestions for transmitting these arrays in a more efficient way (requiring less bandwidth but without quality loss), I would gladly hear it. As long as the solution has the array as input/output on both endpoints.
Most importantly: NOTE that the missing packets were never returned by the UdpClient.Receive() method.
I did not post the code of the C++ application running on the Raspberry Pi, because the data did arrive (in Wireshark), as shown above. So the transmission is working fine, but receiving is not.
private const int ClientPort = 50000;
private UdpClient _client;
private Thread _receiveThread;
private Thread _processThread;
private volatile bool _started;
private ConcurrentQueue<byte[]> _receivedPackets = new ConcurrentQueue<byte[]>();
private IPEndPoint _remoteEP = new IPEndPoint(IPAddress.Parse("192.168.4.1"), 2371);

public void Start()
{
    if (_started)
    {
        throw new InvalidOperationException("Already started");
    }
    _started = true;
    _client = new UdpClient(ClientPort);
    _receiveThread = new Thread(new ThreadStart(ReceiveThread));
    _processThread = new Thread(new ThreadStart(ProcessThread));
    _receiveThread.Start();
    _processThread.Start();
}

public void Stop()
{
    if (!_started)
    {
        return;
    }
    _started = false;
    _receiveThread.Join();
    _receiveThread = null;
    _processThread.Join();
    _processThread = null;
    _client.Close();
}

public void ReceiveThread()
{
    _client.Client.ReceiveTimeout = 100;
    while (_started)
    {
        try
        {
            byte[] data = _client.Receive(ref _remoteEP);
            _receivedPackets.Enqueue(data);
        }
        catch (SocketException ex)
        {
            Console.WriteLine(ex.Message);
            continue;
        }
    }
}

private void ProcessThread()
{
    while (_started)
    {
        byte[] data;
        bool dequeued = _receivedPackets.TryDequeue(out data);
        if (!dequeued)
        {
            continue;
        }
        int imgNr = data[0];
        int fragmentIndex = (data[1] << 8) | data[2];
        if (imgNr <= 0 || imgNr > 255 || fragmentIndex <= 0)
        {
            Console.WriteLine("Received data with invalid overhead");
            return;
        }
        // I omitted the code for this method because it does not touch the
        // socket and is therefore not really relevant to the issue I described.
        ProccessReceivedData(imgNr, fragmentIndex, data);
    }
}

C# IInputStream missing segmented TCP package

I am writing a small HTTP server, and sometimes I encounter a problem with missing POST data.
Using Wireshark I discovered that the header is split into two segments.
I only get the first segment (636 bytes); the second one (the POST data in this case) gets totally lost.
Here is the relevant C# code:
string requestHeaderString = "";
StreamSocket socketStream = args.Socket;
IInputStream inputStream = socketStream.InputStream;
byte[] data = new byte[BufferSize];
IBuffer buffer = data.AsBuffer();
try
{
    await inputStream.ReadAsync(buffer, BufferSize, InputStreamOptions.Partial);
    // This is where things go missing: buffer.ToArray() should be 678 bytes long,
    // so Segment 1 (636 bytes) and Segment 2 (42 bytes) combined.
    // But it is only 636 bytes long, so just the first segment?!
    requestHeaderString += Encoding.UTF8.GetString(buffer.ToArray());
}
catch (Exception e)
{
    Debug.WriteLine("inputStream is not readable" + e.StackTrace);
    return;
}
This code is part of the StreamSocketListener's ConnectionReceived event handler.
Do I have to reassemble the TCP segments manually? Isn't this what the system's TCP stack should do?
Thanks,
David
The problem is that the system's TCP stack treats the TCP stream just like any other stream. You don't get "messages" with streams, you just get a stream of bytes.
The receiving side has no way to tell where one "message" ends and the next begins without you telling it somehow. You must implement message framing on top of TCP; on the receiving side you then repeatedly call Receive until you have received enough bytes to form a full message (this involves using the int returned from the Receive call to see how many bytes were actually read).
Important note: if you don't know how many bytes you are expecting in total, for example because you frame messages with a '\0' separator, you may get the end of one message and the start of the next in a single Receive call. You will need to handle that situation.
EDIT: Sorry, I skipped over the fact that you were reading HTTP. You must follow the HTTP protocol: read data until you see the pattern \r\n\r\n; once you have that, parse the header, decode how much data is in the content portion of the HTTP message, and then repeatedly call Read until you have read that number of bytes.
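A rough sketch along those lines, reusing the ReadAsync call from the question (the header parsing is deliberately simplified; a real server would also have to handle chunked encoding, a missing Content-Length, non-ASCII header bytes, and so on):

// Sketch: read until the header terminator is seen, then read the declared body length.
var received = new List<byte>();
const string headerEnd = "\r\n\r\n";
string text = "";

// 1) Keep reading until the full header has arrived.
while (!text.Contains(headerEnd))
{
    IBuffer result = await inputStream.ReadAsync(buffer, BufferSize, InputStreamOptions.Partial);
    if (result.Length == 0) break;                  // connection closed by the client
    received.AddRange(result.ToArray());
    text = Encoding.UTF8.GetString(received.ToArray());
}

// 2) Parse Content-Length (simplified) and read until the body is complete.
// The header is ASCII, so the character count equals the byte count here.
int headerLength = text.IndexOf(headerEnd) + headerEnd.Length;
Match m = Regex.Match(text, @"Content-Length:\s*(\d+)", RegexOptions.IgnoreCase);
int contentLength = m.Success ? int.Parse(m.Groups[1].Value) : 0;
while (received.Count < headerLength + contentLength)
{
    IBuffer result = await inputStream.ReadAsync(buffer, BufferSize, InputStreamOptions.Partial);
    if (result.Length == 0) break;
    received.AddRange(result.ToArray());
}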

UDP sending data, something I don't quite understand

Okay, from my knowledge UDP works like this:
You have data you want to send, and you say to the UDP client: hey, send this data.
The UDP client then says: sure, why not, and sends the data to the selected IP and port.
Whether it gets through, or arrives in the right order, is another story; it has sent the data, and you didn't ask for anything else.
Now from this perspective, it's pretty much impossible to send data and reassemble it.
For example, I have a 1 MB image, and I send it.
So I divide it into 60 KB chunks (or something that fits the packets), and send them one by one from first to last.
So in theory, if everything gets put back together, the image should be exactly the same.
But that theory breaks down, as there is no law that says one packet can't arrive faster or slower than another, so it may only work if you add some kind of wait timer and hope for the best that they arrive in the order they were sent.
Anyway, what I want to understand is: why does this work?
void Sending(object sender, NAudio.Wave.WaveInEventArgs e)
{
    if (connect == true && MuteMic.Checked == false)
    {
        udpSend.Send(e.Buffer, e.BytesRecorded, otherPartyIP.Address.ToString(), 1500);
    }
}
Receiving:
while (connect == true)
{
    byte[] byteData = udpReceive.Receive(ref remoteEP);
    waveProvider.AddSamples(byteData, 0, byteData.Length);
}
So basically, this sends the audio buffer over UDP.
The receiving part just adds the received UDP data to a buffer and plays it.
Now, this works.
And I wonder... why?
How can this work? How come the data is sent in the right order and reassembled so that it appears as a continuous audio stream?
Because if I were to do this with an image, I would probably get all the data.
But it would probably arrive in a random order, and I can only solve that by marking packets and things like that. And then there is simply no reason for it, and TCP takes over.
So if someone can please explain this: I just don't get it.
Here is a code example for sending an image, and, well, it works. But it seems to work better when the entire byte array isn't sent, meaning some part of the image is corrupted (not sure why, probably something to do with the size of the byte array).
Send:
using (var udpcap = new UdpClient(0))
{
    udpcap.Client.SendBufferSize *= 16;
    bsize = ms.Length;
    var buff = new byte[7000];
    int c = 0;
    int size = 7000;
    for (int i = 0; i < ms.Length; i += size)
    {
        c = Math.Min(size, (int)ms.Length - i);
        Array.Copy(ms.GetBuffer(), i, buff, 0, c);
        udpcap.Send(buff, c, adress.Address.ToString(), 1700);
    }
}
Receive:
using (var udpcap = new UdpClient(1700))
{
    udpcap.Client.SendBufferSize *= 16;
    var databyte = new byte[1619200];
    int i = 0;
    for (int q = 0; q < 11; ++q)
    {
        byte[] data = udpcap.Receive(ref adress);
        Array.Copy(data, 0, databyte, i, data.Length);
        i += data.Length;
    }
    var newImage = Image.FromStream(new MemoryStream(databyte));
    gmp.DrawImage(newImage, 0, 0);
}
You should be using TCP. You write: "it's pretty much impossible to send data and assemble it. For example, I have a 1 MB image, and I send it. So I divide it into 60 KB chunks (or something that fits the packets), and send them one by one from first to last. ... But that theory breaks down, as there is no law that says one packet can't arrive faster or slower than another, so it may only work if you add some kind of wait timer and hope for the best that they arrive in the order they were sent." That's exactly what TCP does: it ensures that all the pieces of a stream of data are received in the order they were sent, with no omissions, duplications, or modifications. If you really want to re-implement that yourself, you should read RFC 793 - it talks at length about how to build a reliable data stream atop an unreliable packet service.
But really, just use TCP.
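For what it's worth, a minimal sketch of what that could look like for the image case above (one TCP connection per image, reading until the sender closes the stream; the address and port are placeholders, and ms/gmp stand in for the variables from the question's code):

// Sender: write the whole image; TCP handles ordering, loss and reassembly.
using (var client = new TcpClient("192.168.0.10", 1700))   // placeholder address/port
using (NetworkStream stream = client.GetStream())
{
    byte[] imageBytes = ms.ToArray();
    stream.Write(imageBytes, 0, imageBytes.Length);
}   // closing the connection tells the receiver the image is complete

// Receiver: accept one connection per image and read until the stream ends.
var listener = new TcpListener(IPAddress.Any, 1700);
listener.Start();
using (TcpClient incoming = listener.AcceptTcpClient())
using (NetworkStream stream = incoming.GetStream())
using (var received = new MemoryStream())
{
    stream.CopyTo(received);                                // reads until the sender closes
    received.Position = 0;
    var newImage = Image.FromStream(received);
    gmp.DrawImage(newImage, 0, 0);
}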
You're missing a lot of helpful details from your question, but based on the level of understanding presented I'll attempt to answer at a similar level:
You're absolutely right: in general the UDP protocol doesn't guarantee order of delivery, or even delivery at all. Your local host is going to send the packets (i.e. the parts of your message) in the order it receives them from the sending application, and from there it's up to the network components to decide how your data gets delivered. In local networks, however (within a handful of hops of the original sender), there aren't really a lot of directions for the packets to go, so they will likely just flow in line and never see a hiccup.
On the greater internet, however, there is likely a wide variety of routing choices available to each router between your sending host and the destination. Every router along the way can make a choice about which direction parts of your message take. Assuming all paths are equal (they aren't) and every network segment between the two hosts is guaranteed to be reliable, you would likely see results similar to those within your own network (with added latency). Unfortunately, neither of those conditions holds: different paths on the internet perform differently depending on client and server, and no single path on the internet should ever be considered reliable (that's why it's a "net").
These are of course based on general observations from my own experience in network support and admin roles. Members of other StackExchange sites may be able to provide better feedback.

Loop until TcpClient response fully read [duplicate]

This question already has answers here:
Receiving data in TCP
(10 answers)
Closed 2 years ago.
I have written a simple TCP client and server. The problem lies with the client.
I'm having some trouble reading the entire response from the server: I must let the thread sleep to allow all the data to be sent.
I've tried a few times to convert this code into a loop that runs until the server is finished sending data.
// Init & connect to client
TcpClient client = new TcpClient();
Console.WriteLine("Connecting.....");
client.Connect("192.168.1.160", 9988);
// Stream string to server
input += "\n";
Stream stm = client.GetStream();
ASCIIEncoding asen = new ASCIIEncoding();
byte[] ba = asen.GetBytes(input);
stm.Write(ba, 0, ba.Length);
// Read response from server.
byte[] buffer = new byte[1024];
System.Threading.Thread.Sleep(1000); // Huh, why do I need to wait?
int bytesRead = stm.Read(buffer, 0, buffer.Length);
response = Encoding.ASCII.GetString(buffer, 0, bytesRead);
Console.WriteLine("Response String: "+response);
client.Close();
The nature of streams that are built on top of sockets is that you have an open pipeline that transmits and receives data until the socket is closed.
However, because of the nature of client/server interactions, this pipeline isn't always guaranteed to have content on it to be read. The client and server have to agree to send content over the pipeline.
When you take the Stream abstraction in .NET and overlay it on the concept of sockets, the requirement for an agreement between the client and server still applies; you can call Stream.Read all you want, but if the socket that your Stream is connected to on the other side isn't sending content, the call will just wait until there is content.
This is why protocols exist. At their most basic level, they help define what a complete message that is sent between two parties is. Usually, the mechanism is something along the lines of:
A length-prefixed message where the number of bytes to be read is sent before the message
A pattern of characters used to mark the end of a message (this is less common depending on the content that is being sent, the more arbitrary any part of the message can be, the less likely this will be used)
That said, you aren't adhering to the above; your call to Stream.Read just says "read up to 1024 bytes", when in reality there might not be 1024 bytes to read yet. If no data has arrived at all, the call to Stream.Read will block until there is something to read.
The reason the call to Thread.Sleep probably works is that, by the time a second goes by, the Stream has the entire response buffered, so a single Read call happens to return all of it.
Additionally, if you truly want to read 1024 bytes, you can't assume that the call to Stream.Read will populate 1024 bytes of data. The return value for the Stream.Read method tells you how many bytes were actually read. If you need more for your message, then you need to make additional calls to Stream.Read.
Jon Skeet wrote up the exact way to do this if you want a sample.
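As a concrete illustration of that last point, the usual pattern is a helper that keeps calling Stream.Read until the requested number of bytes has actually arrived (a sketch, assuming you already know the message length):

// Read exactly 'messageLength' bytes, however many Read calls it takes.
static byte[] ReadExactly(Stream stm, int messageLength)
{
    byte[] buffer = new byte[messageLength];
    int totalRead = 0;
    while (totalRead < messageLength)
    {
        int bytesRead = stm.Read(buffer, totalRead, messageLength - totalRead);
        if (bytesRead == 0)
        {
            throw new EndOfStreamException("Connection closed before the full message arrived");
        }
        totalRead += bytesRead;   // Read may return fewer bytes than requested
    }
    return buffer;
}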
Try repeating the
int bytesRead = stm.Read(buffer, 0, buffer.Length);
call while bytesRead > 0; it is a common pattern for this, as I remember.
Of course, don't forget to pass appropriate parameters for the buffer.
You don't know the size of the data you will be reading, so you have to set up a mechanism to decide when you are done. One option is a timeout and another is using delimiters.
In your example you only read whatever data is available in a single Read call, because you don't set a read timeout and the default value is 0 milliseconds. That is why you have to sleep for 1000 ms first; you would get the same effect by setting the receive timeout to 1000 ms.
I think using the length of the data as a prefix is not the real solution, because when the socket is closed by both sides, the TIME-WAIT situation cannot be handled properly; the same data can be sent to the server again and cause the server to throw an exception. We used a prefix/ending character sequence: after every read we check the data for the start and end character sequences, and if we can't find the end characters we call another read. But of course this only works if you control both the server-side and client-side code.
In the TCP client/server I just wrote, I write the packet I want to send into a memory stream, then take the length of that stream and use it as a prefix when sending the data. That way the client knows how many bytes it is going to need to read for a full packet.
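A sketch of that length-prefix idea (a 4-byte length followed by the payload; packetStream and stm are illustrative names, and the ReadExactly helper is the same read-until-complete loop sketched in the answer above):

// Sender: prefix the data with its length so the receiver knows how much to expect.
byte[] payload = packetStream.ToArray();                 // packetStream: the MemoryStream holding the packet
byte[] lengthPrefix = BitConverter.GetBytes(payload.Length);
stm.Write(lengthPrefix, 0, lengthPrefix.Length);
stm.Write(payload, 0, payload.Length);

// Receiver: read the 4-byte length first, then exactly that many payload bytes.
int length = BitConverter.ToInt32(ReadExactly(stm, 4), 0);
byte[] message = ReadExactly(stm, length);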

Handling dropped TCP packets in C#

I'm sending a large amount of data in one go between a client and server written in C#. It works fine when I run the client and server on my local machine, but when I put the server on a remote computer on the internet it seems to drop data.
I send 20000 strings using the socket.Send() method and receive them using a loop which calls socket.Receive(). Each string is delimited by unique characters which I use to count the number received (this is the protocol, if you like). The protocol is proven, in that even with fragmented messages each string is correctly counted. On my local machine I get all 20000; over the internet I get anything between 17000 and 20000. It seems to be worse the slower the remote computer's connection is. To add to the confusion, turning on Wireshark seems to reduce the number of dropped messages.
First of all, what is causing this? Is it a TCP/IP issue or something wrong with my code?
Secondly, how can I get round this? Receiving all of the 20000 strings is vital.
Socket receiving code:
private static readonly Encoding encoding = new ASCIIEncoding();
///...
while (socket.Connected)
{
    byte[] recvBuffer = new byte[1024];
    int bytesRead = 0;
    try
    {
        bytesRead = socket.Receive(recvBuffer);
    }
    catch (SocketException e)
    {
        if (!socket.Connected)
        {
            return;
        }
    }
    string input = encoding.GetString(recvBuffer, 0, bytesRead);
    CountStringsIn(input);
}
Socket sending code:
private static readonly Encoding encoding = new ASCIIEncoding();
//...
socket.Send(encoding.GetBytes(string));
If you're dropping packets, you'll see a delay in transmission, since the dropped packets have to be re-transmitted. This can be significant, although there is a TCP option called selective acknowledgement which, if supported by both sides, triggers a resend of only the packets that were dropped rather than every packet since the dropped one. There's no way to control that in your code. By default, you can always assume that every packet is delivered in order for TCP, and if for some reason it can't deliver every packet in order, the connection will drop, either by a timeout or by one end of the connection sending a RST packet.
What you're seeing is most likely the result of Nagle's algorithm. What it does is instead of sending each bit of data as you post it, it sends one byte and then waits for an ack from the other side. While it's waiting, it aggregates all the other data that you want to send and combines it into one big packet and then sends it. Since the max size for TCP is 65k, it can combine quite a bit of data into one packet, although it's extremely unlikely that this will occur, particularly since winsock's default buffer size is about 10k or so (I forget the exact amount). Additionally, if the max window size of the receiver is less than 65k, it will only send as much as the last advertised window size of the receiver. The window size also affects Nagle's algorithm as well in terms of how much data it can aggregate prior to sending because it can't send more than the window size.
The reason you see this is that on the internet, unlike on your local network, that first ack takes more time to return, so Nagle's algorithm aggregates more of your data into a single packet. Locally, the return is effectively instantaneous, so it's able to send your data as quickly as you can post it to the socket. You can disable Nagle's algorithm on the client side by using setsockopt (winsock) or Socket.SetSocketOption (.NET), but I highly recommend that you DO NOT disable Nagling on the socket unless you are 100% sure you know what you're doing. It's there for a very good reason.
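For reference, disabling it in .NET looks like this; heed the warning above, leaving Nagle's algorithm enabled is almost always the right choice:

// Disable Nagle's algorithm on a connected TCP socket (equivalent to TCP_NODELAY).
socket.NoDelay = true;
// or, via the option API:
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);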
Well there's one thing wrong with your code to start with, if you're counting the number of calls to Receive which complete: you appear to be assuming that you'll see as many Receive calls finish as you made Send calls.
TCP is a stream-based protocol - you shouldn't be worrying about individual packets or reads; you should be concerned with reading the data, expecting that sometimes you won't get a whole message in one packet and sometimes you may get more than one message in a single read. (One read may not correspond to one packet, too.)
You should either prefix each message with its length before sending, or have a delimiter between messages.
It's definitely not TCP's fault. TCP guarantees in-order, exactly-once delivery.
Which strings are "missing"? I'd wager it's the last ones; try flushing from the sending end.
Moreover, your "protocol" here (I'm talking about the application-layer protocol you're inventing) is lacking: you should consider sending the number of objects and/or their lengths so the receiver knows when it's actually done receiving them.
How long are each of the strings? If they aren't exactly 1024 bytes, they'll be merged by the remote TCP/IP stack into one big stream, which you read big blocks of in your Receive call.
For example, using three Send calls to send "A", "B", and "C" will most likely come to your remote client as "ABC" (as either the remote stack or your own stack will buffer the bytes until they are read). If you need each string to come without it being merged with other strings, look into adding in a "protocol" with an identifier to show the start and end of each string, or alternatively configure the socket to avoid buffering and combining packets.
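A sketch of the delimiter approach applied to the receiving loop from the question (it assumes each string ends with a single known delimiter character; the delimiter and the CountString call are placeholders for whatever the real protocol and counting logic are):

// Buffer whatever Receive returns and only count complete, delimiter-terminated
// strings; keep any trailing partial string for the next read.
const char Delimiter = '\n';            // placeholder for the protocol's unique character
StringBuilder pending = new StringBuilder();

void OnChunkReceived(string input)      // 'input' is the decoded chunk from one Receive call
{
    pending.Append(input);
    string all = pending.ToString();
    int lastDelimiter = all.LastIndexOf(Delimiter);
    if (lastDelimiter < 0)
    {
        return;                         // no complete string yet; wait for more data
    }
    foreach (string s in all.Substring(0, lastDelimiter).Split(Delimiter))
    {
        CountString(s);                 // placeholder for the existing counting logic
    }
    pending.Clear();
    pending.Append(all.Substring(lastDelimiter + 1));   // carry the partial tail over
}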
