C# & Java sockets and byte data read overheads - c#

I'm receiving byte data from a VNC server. Using Real VNC, this works in my method/function:
byte[] readBytes = new byte[count];
sock.Receive(readBytes);
Using TigerVNC it doesn't work, but this does:
byte[] readBytes = new byte[count];
byte[] aByte = new byte[1];
for (int i = 0; i < count; i++)
{
sock.Receive(aByte);
readBytes[i] = aByte[0];
}
I stumbled upon this pretty quickly: when I stepped through with breakpoints, the original code received everything fine from Tight VNC, but without breaking only the first two bytes are received. My socket is blocking and has a receive buffer size of 1024. I am, however, running the server and client locally, as I have no other way of testing.
The question is:
Other than using an extra byte of memory with "aByte" and iterating through X bytes, is there much of a processing difference compared to receiving it directly with Socket.Receive? Bear in mind I'll possibly be getting MBs of data at some point.
Furthermore, this is going to be implemented on Blackberry as a Java app; would the same kind of method have processing implications in Java Mobile?
cheers, craig

Neither of these methods is particularly good. What you want to do is keep calling Receive until it either reads all the bytes you want or returns an error. Each time, pass it an offset just past the bytes you've received so far and the number of bytes remaining.
Take a look at the Socket.Receive documentation. The crux is this (simplified):
public static void Receive(Socket socket, byte[] buffer, int size)
{
    int received = 0;
    do
    {
        int read = socket.Receive(buffer, received, size - received, SocketFlags.None);
        if (read == 0)
        {
            // the remote side closed the connection before all 'size' bytes arrived
            throw new SocketException((int)SocketError.ConnectionReset);
        }
        received += read;
    } while (received < size);
}
This way you don't return too soon, but you don't read the data byte by byte either.
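Hooked up to the variables from your question, a minimal usage sketch would look something like this (sock and count are your names; the helper above is assumed to be in scope):
byte[] readBytes = new byte[count];
// keeps calling Receive until all 'count' bytes have arrived, instead of returning after the first segment
Receive(sock, readBytes, count);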

Related

Receiving a complete network stream using int NetworkStream.Read(Span<Bytes>)

As the title says, I am trying to use the new (C# 8.0) type Span<T> for my networking project. In my previous implementation I learned that it was mandatory to make sure that a NetworkStream has received a complete buffer before trying to use its content; otherwise, depending on the connection, the data received on the other end may not be whole.
while (true)
{
while (!stream.DataAvailable)
Thread.Sleep(10);
int received = 0;
byte[] response = new byte[consumerBufferSize];
//Loop that forces the stream to read all incoming data before using it
while (received < consumerBufferSize)
received += stream.Read(response, received, consumerBufferSize - received);
string[] message = ObjectWrapper.ConvertByteArrayToObject<string>(response);
consumerAction(this, message);
}
However, a different approach for reading network stream data was introduced: Read(Span<byte>). Assuming that stackalloc will help with performance, I am attempting to migrate my old implementation to accommodate this method. Here is what it looks like now:
while (true)
{
while (!stream.DataAvailable)
Thread.Sleep(10);
Span<byte> response = stackalloc byte[consumerBufferSize];
stream.Read(response);
string[] message = ObjectWrapper.ConvertByteArrayToObject<string>(response).Split('|');
consumerAction(this, message);
}
But now how can I be sure that the buffer was completely read, since it does not provide methods like the one I was using?
Edit:
//Former method
int Read (byte[] buffer, int offset, int size);
//The one I am looking for
int Read (Span<byte> buffer, int offset, int size);
I'm not sure I understand what you're asking. All the same features you relied on in the first code example still exist when using Span<byte>.
The Read(Span<byte>) overload still returns the count of bytes read. And since the Span<byte> is not the buffer itself, but rather just a window into the buffer, you can update the Span<byte> value to indicate the new starting point to read additional data. Having the count of bytes read and being able to specify the offset for the next read are all you need to duplicate the functionality in your old example. Of course, you don't currently have any code that saves the original buffer reference; you'll need to add that too.
I would expect something like this to work fine:
while (true)
{
while (!stream.DataAvailable)
Thread.Sleep(10);
byte* response = stackalloc byte[consumerBufferSize];
int received = 0;
while (received < consumerBufferSize)
{
    // point the span at the not-yet-filled tail of the stackalloc'd buffer
    Span<byte> span = new Span<byte>(response + received, consumerBufferSize - received);
    received += stream.Read(span);
}
// process response here...
}
Note that this requires unsafe code because of the way stackalloc works. You can only avoid that by using Span<T> and allocating new blocks each time. Of course, that will eventually eat up all your stack.
Since in your implementation you apparently are dedicating a thread to this infinite loop, I don't see how stackalloc is helpful. You might as well just allocate a long-lived buffer array in the heap and use that.
In other words, I don't really see how this is better than just using the original Read(byte[], int, int) overload with a regular managed array. But the above is how you'd get the code to work.
Aside: you should learn how the async APIs work. Since you're already using NetworkStream, the async/await patterns are a natural fit. And regardless of what API you use, a loop checking DataAvailable is just plain crap. Don't do that. The Read() method is already a blocking method; you don't need to wait for data to show up in a separate loop, since the Read() method won't return until there is some.
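Purely as an illustrative sketch of that aside (consumerBufferSize, ObjectWrapper and consumerAction come from the question; FillBufferAsync is a made-up helper name), the async version needs no DataAvailable polling at all:
static async Task FillBufferAsync(NetworkStream stream, byte[] buffer)
{
    int received = 0;
    while (received < buffer.Length)
    {
        // ReadAsync waits (asynchronously) for data, so no DataAvailable loop is needed
        int n = await stream.ReadAsync(buffer, received, buffer.Length - received);
        if (n == 0)
            throw new EndOfStreamException("Connection closed before the buffer was filled.");
        received += n;
    }
}
// usage inside the receive loop:
// byte[] response = new byte[consumerBufferSize];
// await FillBufferAsync(stream, response);
// string[] message = ObjectWrapper.ConvertByteArrayToObject<string>(response).Split('|');
// consumerAction(this, message);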
I am just adding a little bit of additional information.
The function that you are talking about has the following description
public override int Read (Span<byte> buffer);
(source : https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.networkstream.read?view=net-5.0 )
Where the int returned is the number of bytes read from the NetworkStream. Now, if we look at the Span functions, we find Slice, with the following description:
public Span<T> Slice (int start);
(source : https://learn.microsoft.com/en-us/dotnet/api/system.span-1.slice?view=net-5.0#system-span-1-slice(system-int32) )
This returns a portion of our Span, which you can use to pass a certain portion of your stackalloc buffer to your NetworkStream without using unsafe code.
Reusing your code, you could use something like this:
while (true)
{
while (!stream.DataAvailable)
Thread.Sleep(10);
int received = 0;
Span<byte> response = stackalloc byte[consumerBufferSize];
//Loop that forces the stream to read all incoming data before using it
while (received < consumerBufferSize)
received += stream.Read(response.Slice(received));
string[] message = ObjectWrapper.ConvertByteArrayToObject<string>(response).Split('|');
consumerAction(this, message);
}
In simple words, we "create" a new Span that is a portion of the initial Span pointing to our stackalloc, using Slice; the "start" parameter lets us choose where this portion begins. The portion is then passed to Read, which will start writing into our buffer wherever we "started" the slice.

C# Socket.Receive() malfunctions when TCP packet is larger than MTU

I'm using the C# Socket class to send/receive TCP messages to/from our server over short-lived connections. The pseudo-code (excluding some fault-tolerance logic for clarity) is as follows.
class MyTCPClient {
private Socket mSocket;
MyTCPClient() {
mSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
}
void Connect(ipEndpoint, sendTimeOut, receiveTimeOut) {
mSocket.Connect(ipEndpoint);
mSocket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.SendTimeout, sendTimeOut);
mSocket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, receiveTimeOut);
}
void Send(msg) {
mSocket.Send(msg);
}
void Receive(data) {
ByteStreamWriter bsw = new ByteStreamWriter();
int totalLen = 0;
while (true) {
byte[] temp = new byte[1024];
int len = mSocket.Receive(temp, SocketFlags.None);
System.Threading.Thread.Sleep(50); // This is the weirdest part
if (len <= 0) break;
totalLen += len;
bsw.WriteBytes(temp);
}
byte[] buff = bsw.GetBuffer();
data = new byte[totalLen];
Array.Copy(buff, 0, data, 0, totalLen);
}
~MyTCPClient() {
mSocket.Close();
}
}
I was using this class to request the same message from our server several times over short-lived connections, and the following things occurred (only when the message size was larger than one MTU -- 1500 bytes).
If "Sleep(50)" was commented out, most of the time (90%) I received wrong "data"; specifically, "totalLen" was right, but "data" was wrong.
If I replaced "Sleep(50)" with "Sleep(10)", about half the time I received wrong "data", and "totalLen" was still always right.
If I used "Sleep(50)", I occasionally received wrong "data".
I can guarantee that, every time, our server sent the right data, and the client received the right data at the TCP layer too (I used WireShark to monitor all messages through the port I used). Can anyone help explain why my C# code cannot get the right data?
Also, if I use mSocket.Available instead of the return value of Socket.Receive() to control the while loop, I always get the right data and data length...
You don't truncate temp to len bytes before writing it. All the bytes after len in temp have not been filled with current data and thus contain nonsensical, stale data.
When copying to another stream you could use stream.Write(temp, 0, len)
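For illustration, here is a hedged sketch of the whole receive loop with that fix applied, using a MemoryStream in place of your ByteStreamWriter (mSocket is the field from your class; the Sleep(50) is dropped because it only masked the symptom):
byte[] Receive()
{
    using (MemoryStream ms = new MemoryStream())
    {
        byte[] temp = new byte[1024];
        while (true)
        {
            int len = mSocket.Receive(temp, SocketFlags.None);
            if (len <= 0) break;      // the server closed the connection; all data received
            ms.Write(temp, 0, len);   // copy only the bytes actually received in this call
        }
        return ms.ToArray();
    }
}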

Do TCP sockets automatically close after 64kB send?

I'm trying to send multiple files over TCP using the C# TcpClient; for files below 64 kB it works great, but for anything larger it throws an exception saying the host computer disconnected.
Here is my code:
Server side (sending).
1) SocketServer class
public abstract class SocketServer
{
private Socket serverSocket;
public SocketServer(IPEndPoint localEndPoint)
{
serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
serverSocket.Bind(localEndPoint);
serverSocket.Listen(0);
serverSocket.BeginAccept(BeginAcceptCallback, null);
}
private void BeginAcceptCallback(IAsyncResult ar)
{
Socket clientSocket = serverSocket.EndAccept(ar);
Console.WriteLine("Client connected");
ClientConnection clientConnection = new ClientConnection(this, clientSocket);
Thread clientThread = new Thread(new ThreadStart(clientConnection.Process));
clientThread.Start();
serverSocket.BeginAccept(BeginAcceptCallback, null);
}
internal abstract void OnReceiveMessage(ClientConnection client, byte header, byte[] data);
}
Here goes ClientConnection:
public class ClientConnection
{
private SocketServer server;
private Socket clientSocket;
private byte[] buffer;
private readonly int BUFFER_SIZE = 8192;
public ClientConnection(SocketServer server, Socket clientSocket)
{
this.server = server;
this.clientSocket = clientSocket;
buffer = new byte[BUFFER_SIZE];
}
public void Process()
{
clientSocket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.Peek, BeginReceiveCallback, null);
}
private void BeginReceiveCallback(IAsyncResult ar)
{
int bytesReceived = clientSocket.EndReceive(ar);
if (bytesReceived >= 4)
{
clientSocket.Receive(buffer, 0, 4, SocketFlags.None);
// message size
int size = BitConverter.ToInt32(buffer, 0);
// read message
int read = clientSocket.Receive(buffer, 0, size, SocketFlags.None);
// if data still fragmented, wait for it
while (read < size)
{
read += clientSocket.Receive(buffer, read, size - read, SocketFlags.None);
}
ProcessReceivedData(size);
}
clientSocket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.Peek, BeginReceiveCallback, null);
}
private void ProcessReceivedData(int size)
{
using (PacketReader pr = new PacketReader(buffer))
{
// message header = 1 byte
byte header = pr.ReadByte();
// next message data
byte[] data = pr.ReadBytes(size - 1);
server.OnReceiveMessage(this, header, data);
}
}
public void Send(byte[] data)
{
// first of all, send message length
clientSocket.Send(BitConverter.GetBytes(data.Length), 0, 4, SocketFlags.None);
// and then message
clientSocket.Send(data, 0, data.Length, SocketFlags.None);
}
}
Implementation of FileServer class:
public class FileServer : SocketServer
{
string BinaryPath;
public FileServer(IPEndPoint localEndPoint) : base(localEndPoint)
{
BinaryPath = Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location);
}
internal override void OnReceiveMessage(ClientConnection client, byte hdr, byte[] data)
{
// Convert header byte to ENUM
Headers header = (Headers)hdr;
switch(header)
{
case Headers.Queue:
Queue(client, data);
break;
default:
Console.WriteLine("Wrong header received {0}", header);
break;
}
}
private void Queue(ClientConnection client, byte[] data)
{
// this message contains fileName
string fileName = Encoding.ASCII.GetString(data, 1, data.Length - 1);
// combine path with assembly location
fileName = Path.Combine(BinaryPath, fileName);
if (File.Exists(fileName))
{
FileInfo fileInfo = new FileInfo(fileName);
long fileLength = fileInfo.Length;
// pass the message that is now start a file transfer, contains:
// 1 byte = header
// 16 bytes = file length
using (PacketWriter pw = new PacketWriter())
{
pw.Write((byte)Headers.Start);
pw.Write(fileLength);
client.Send(pw.GetBytes());
}
//
using (FileStream fs = new FileStream(fileName, FileMode.Open, FileAccess.Read))
{
int read = 0, offset = 0;
byte[] fileChunk = new byte[8191];
while (offset < fileLength)
{
// pass message with file chunks, conatins
// 1 byte = header
// 8195 bytes = file chunk
using (PacketWriter pw = new PacketWriter())
{
fs.Position = offset;
read = fs.Read(fileChunk, 0, fileChunk.Length);
pw.Write((byte)Headers.Chunk);
pw.Write(fileChunk, 0, read);
client.Send(pw.GetBytes());
}
offset += read;
}
}
}
}
}
And helper classes:
public class PacketWriter : BinaryWriter
{
private MemoryStream memoryStream;
public PacketWriter() : base()
{
memoryStream = new MemoryStream();
OutStream = memoryStream;
}
public byte[] GetBytes()
{
byte[] data = memoryStream.ToArray();
return data;
}
}
public class PacketReader : BinaryReader
{
public PacketReader(byte[] data) : base(new MemoryStream(data))
{
binaryFormatter = new BinaryFormatter();
}
}
The client has almost the same code, except for receiving:
internal override void OnReceiveMessage(ServerConnection client, byte hdr, byte[] data)
{
Headers header = (Headers)hdr;
switch(header)
{
case Headers.Start:
Start(data); // save length of file and prepare for receiving it
break;
case Headers.Chunk:
Chunk(data);
break;
default:
Console.WriteLine("Wrong header received {0}", header);
break;
}
}
private void Chunk(byte[] data)
{
// Process reveived data, write it into a file
}
Do TCP sockets automatically close after 64kB send?
No, TCP sockets do not automatically close after 64 KB of sent data, nor after any amount of sent data. They remain open until either end closes the connection or a network error of some sort happens.
Unfortunately, your question is extremely vague, providing practically no context. But I will point out one potentially serious bug in your client code: you are not restricting the data read by the client to that which you expect for the current file. If the server is sending more than one file, it is entirely possible and likely that on reaching the end of the data for the current file, a read containing that final sequence of bytes will also contain the initial sequence of bytes for the next file.
Thus, the expression totalBytesRead == fileLenght will fail to ever be true and your code will attempt to read all of the data the server sends as if it's all part of the first file.
One easy way to check for this scenario is to look at the size of the file that was written by the client. It will be much larger than expected, if the above is the issue.
You can fix the code by only ever receiving at most the number of bytes you actually expect to be remaining:
while ((bytesRead = binaryReader.Read(
buffer, 0, Math.Min(fileLenght - totalBytesRead, buffer.Length))) > 0)
{
fs.Write(buffer, 0, bytesRead);
totalBytesRead += bytesRead;
if (totalBytesRead == fileLenght)
{
break;
}
}
While you're at it, you might fix the spelling of the variable named fileLenght. I know Intellisense makes it easy to type the variable name whether it's spelled right or not, but if you one day have to grep through the source code looking for code that involves itself with "length" values, you'll miss that variable.
EDIT:
Having pointed out the problem in the code you posted with your question originally, I now find myself looking at completely different code, after your recent edit to the question. The example is still not a good, minimal, complete code example and so advice will still have to be of a limited nature. But I can see at least two major problems in your receive handling:
You are mixing Receive() and BeginReceive(). This is probably not too terrible in general; what makes it really bad in your case is that in the completion handler for the initial asynchronous receive, you are blocking the thread while you use the synchronous Receive() to (attempt to) receive the rest of the data, and then, to make matters worse, in the FileServer case you continue to block that thread while you transmit the file data to the client. I/O completion handlers must not block the thread for any significant amount of time; they must retrieve the data for the completed operation and return as quickly as is feasible. There's a lot of wiggle room, but not enough to justify tying up the IOCP thread for the entire file transfer.
While the first issue is bad, it's probably not bad enough to actually cause the connection to reset. In a low-volume scenario, and especially if you're just dealing with one client at a time at the moment, I doubt blocking the IOCP thread that's being used to call your I/O completion callback will cause any easily observable problems. What's much worse is that you allocate the buffer array once, with a length of 8192, and then attempt to stuff the entire counted-byte stream of bytes being sent into it. On the assumption that the client will be sending short messages, this is probably fine on the server side. But on the client side, the value of size can easily be greater than the 8192 bytes allocated in the buffer array. I find it very likely that your BeginReceiveCallback() method is throwing an ArgumentOutOfRangeException after receiving the first 8192 bytes, because you are passing the Receive() method a size value that exceeds the length of the buffer array (see the sketch below). I can't prove that, because the code example is incomplete. You'll have to follow up on that theory yourself.
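For illustration of that second point only, here is a hedged sketch of how the callback's receive path could size the payload buffer from the length prefix instead of reusing the fixed 8192-byte field (clientSocket and ProcessReceivedData are the question's names; lengthBytes and payload are made up for this sketch, and the blocking-in-the-callback problem from the first point is deliberately ignored here):
// read the 4-byte length prefix, looping because TCP may deliver it in pieces
byte[] lengthBytes = new byte[4];
int got = 0;
while (got < 4)
{
    int n = clientSocket.Receive(lengthBytes, got, 4 - got, SocketFlags.None);
    if (n == 0) throw new SocketException((int)SocketError.ConnectionReset); // connection closed
    got += n;
}
int size = BitConverter.ToInt32(lengthBytes, 0);

// allocate the payload buffer from the prefix so a large message cannot overflow a fixed buffer
byte[] payload = new byte[size];
int read = 0;
while (read < size)
{
    int n = clientSocket.Receive(payload, read, size - read, SocketFlags.None);
    if (n == 0) throw new SocketException((int)SocketError.ConnectionReset); // closed mid-message
    read += n;
}
// ProcessReceivedData would then need to take 'payload' instead of the shared buffer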
As long as I'm reviewing the code you posted, some other irregularities stood out as I scanned through it:
1. It's a bit odd to be using a buffer length of 8191 when transmitting the file.
2. Your BeginReceiveCallback() method discards whatever bytes were sent initially. You call EndReceive(), which returns the count of bytes received, but the code that follows assumes that the size value in the stream is still waiting to be read, when it very easily could have been sent in the first block of bytes. Indeed, your comparison bytesReceived >= 4 seems to be intended to refrain from processing received data until you have that length value. Which is wrong for another reason:
3. If your header is 1 byte followed by a 4-byte count-of-bytes value, then you really need bytesReceived > 4. But worse, TCP is allowed to drip out a single byte at a time if it really wants to (it won't, but if you write your code correctly, it won't matter). Since you only process received data if you get at least 4 bytes, and since if you don't get 4 bytes, you just ignore whatever was sent so far, you could theoretically wind up ignoring all of the data that was sent.
That you have the risk that any data could be ignored at all is terrible. But #2 above guarantees it, and #3 just adds insult to injury.
When receiving the size value, you are reading a 32-bit integer, but when sending the size value (i.e. for the file itself), you are writing a 64-bit integer. You'll get the right number on the receiving end, because BinaryWriter uses little-endian and that means that the first four bytes of any 64-bit integer less than 2^31 are identical to the four bytes of the 32-bit integer representation of the same number. But you'll still wind up with four extra zero bytes after that; if treated as file data as they likely would, that would corrupt your file.
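For what it's worth, and purely as a hedged sketch (the client's Start() handler isn't shown in the question, so exactly where the 32-bit read happens is an assumption), the cure is simply to make both ends agree on the width of the length field:
// server side: write a 32-bit length so a 32-bit read on the client matches...
pw.Write((byte)Headers.Start);
pw.Write((int)fileLength);

// ...or client side, inside Start(data): read the 64-bit value the server actually wrote
long fileLength = BitConverter.ToInt64(data, 0);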

random byte loss when streaming data over network

My issue is that when I'm streaming a continuous stream of data over a LOCAL LAN network, sometimes random bytes get lost in the process.
As it is right now, the code is set up to stream about 1027 bytes or so ~40 times a second over the LAN, and sometimes (very rarely) one or more of the bytes are lost.
The thing that baffles me is that the actual byte isn't "lost"; it is just set to 0 regardless of the original data. (I'm using TCP, by the way.)
Here's the sending code:
public void Send(byte[] data)
{
if (!server)
{
if (CheckConnection(serv))
{
serv.Send(BitConverter.GetBytes(data.Length));
serv.Receive(new byte[1]);
serv.Send(data);
serv.Receive(new byte[1]);
}
}
}
and the receiving code:
public byte[] Receive()
{
if (!server)
{
if (CheckConnection(serv))
{
byte[] TMP = new byte[4];
serv.Receive(TMP);
TMP = new byte[BitConverter.ToInt32(TMP, 0)];
serv.Send(new byte[1]);
serv.Receive(TMP);
serv.Send(new byte[1]);
return TMP;
}
else return null;
}
else return null;
}
The sending and receiving of the empty bytes are just to keep the system in sync, sort of.
Personally I think that the problem lies on the receiving side of the system; I haven't been able to prove that yet, though.
Just because you give Receive(TMP) a 4-byte array does not mean it is going to fill that array with 4 bytes. The Receive call is allowed to put anywhere between 1 and TMP.Length bytes into the array. You must check the returned int to see how many bytes of the array were filled.
Network connections are stream-based, not message-based. Any bytes you put onto the wire just get concatenated into a big queue and are read on the other side as they become available. So if you sent the two arrays 1,1,1,1 and 2,2,2,2 it is entirely possible that on the receiving side you call Receive three times with a 4-byte array and get
1,1,0,0 (Receive returned 2)
1,1,2,2 (Receive returned 4)
2,2,0,0 (Receive returned 2)
So what you need to do is look at the values you got back from Receive and keep looping till your byte array is full.
byte[] TMP = new byte[4];
//loop till all 4 bytes are read
int offset = 0;
while(offset < TMP.Length)
{
offset += serv.Receive(TMP, offset, TMP.Length - offset, SocketFlags.None);
}
TMP = new byte[BitConverter.ToInt32(TMP, 0)];
//I don't understand why you are doing this, it is not necessary.
serv.Send(new byte[1]);
//Reset the offset then loop till TMP.Length bytes are read.
offset = 0;
while(offset < TMP.Length)
{
offset += serv.Receive(TMP, offset, TMP.Length - offset, SocketFlags.None);
}
//I don't understand why you are doing this, it is not necessary.
serv.Send(new byte[1]);
return TMP;
Lastly, you said "the network stream confuses you"; I am willing to bet the above issue is one of the things that confused you, and going to a lower level will not remove those complexities. If you want these complex parts gone so you don't have to handle them, you will need to use a third-party library that handles them for you internally.

TCP server is not reading fixed size messages correctly

I'm sending sensor data from an android device to a TCP server in C#. The android client sends the data in fixed size chunks of 32 bytes.
In the server I read the data expecting it to come in full packets, but since TCP is a stream protocol some messages arrive at the server split into two parts. I know that because I can watch it with a tool called SocketSniff.
The problem is that I don't know how to handle it on my server.
All the examples I found use NetworkStream.Read(); to this method I have to pass an array of bytes to store the data read, an offset, and the number of bytes to read. This array of bytes must have a known size, in my case 32.
I don't know the real size of the message that arrived on my server, but it could be one of the following situations.
If the received data size is 32 bytes, it's all OK.
If the received data size is greater than 32 bytes, I think I'm losing data.
If the received data size is less than 32 bytes, let's say 20 bytes, these bytes are stored in my array and the last 12 bytes of the array remain zero. Since I may really be receiving some zeros, there's no way to know the size I actually received, so I can't merge it with the remaining data which should come in the next read.
My code which handles the receiving is the following:
int buffer = 32;
...
private void HandleClientComm(object client)
{
TcpClient tcpClient = (TcpClient)client;
NetworkStream clientStream = tcpClient.GetStream();
byte[] message = new byte[buffer];
int bytesRead;
while (true)
{
bytesRead = 0;
try
{
bytesRead = clientStream.Read(message, 0, message.Length);
}
catch
{
break;
}
if (bytesRead == 0)
{
// Connection closed
break;
}
SensorData sensorData = ProcessTcpPacket(message);
}
tcpClient.Close();
}
Is there any way to know the size of the data I'm receiving in the socket?
Well, yes, you have the bytesRead variable - it holds the number of bytes read from the stream. You will read at most message.Length bytes, but you may read less.
Note that if there are more bytes available, you will not lose them by reading just message.Length bytes. Rather, they will be available for you next time you read from the stream.
What you need to do is add another while loop like this:
int messageRead = 0;
while(messageRead < message.Length)
{
int bytesRead = clientStream.Read(message, messageRead, message.Length - messageRead);
messageRead += bytesRead;
if(bytesRead==0)
return; // The socket was closed
}
// Here you have a full message
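If you need this in more than one place, it can be wrapped in a small helper; a sketch under that assumption (ReadFully is just an illustrative name; clientStream, buffer and ProcessTcpPacket are the names from your code):
// Reads exactly buffer.Length bytes from the stream; returns false if the connection closes first.
static bool ReadFully(NetworkStream stream, byte[] buffer)
{
    int total = 0;
    while (total < buffer.Length)
    {
        int n = stream.Read(buffer, total, buffer.Length - total);
        if (n == 0)
            return false;   // connection closed mid-message
        total += n;
    }
    return true;
}
// usage inside HandleClientComm:
// byte[] message = new byte[buffer];
// if (!ReadFully(clientStream, message)) break;
// SensorData sensorData = ProcessTcpPacket(message);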
