My issue is that when I'm streaming a continuous stream of data over a local LAN, random bytes sometimes get lost in the process.
As it stands, the code streams about 1027 bytes roughly 40 times a second over the LAN, and sometimes (very rarely) one or more of the bytes are lost.
The thing that baffles me is that the byte isn't actually "lost": it is just set to 0, regardless of the original data. (I'm using TCP, by the way.)
Here's the sending code:
public void Send(byte[] data)
{
if (!server)
{
if (CheckConnection(serv))
{
serv.Send(BitConverter.GetBytes(data.Length));
serv.Receive(new byte[1]);
serv.Send(data);
serv.Receive(new byte[1]);
}
}
}
and the receiving code:
public byte[] Receive()
{
if (!server)
{
if (CheckConnection(serv))
{
byte[] TMP = new byte[4];
serv.Receive(TMP);
TMP = new byte[BitConverter.ToInt32(TMP, 0)];
serv.Send(new byte[1]);
serv.Receive(TMP);
serv.Send(new byte[1]);
return TMP;
}
else return null;
}
else return null;
}
The sending and receiving of the single empty bytes is just to keep the two sides in sync, sort of.
Personally I think the problem lies on the receiving side of the system; I haven't been able to prove that yet, though.
Just because you give Receive(TMP) a 4-byte array does not mean it is going to fill that array with 4 bytes. The Receive call is allowed to put anywhere between 1 and TMP.Length bytes into the array. You must check the returned int to see how many bytes of the array were filled.
Network connections are stream based, not message based. Any bytes you put onto the wire just get concatenated into a big queue and are read on the other side as they become available. So if you send the two arrays 1,1,1,1 and 2,2,2,2, it is entirely possible that on the receiving side you call Receive three times with a 4-byte array and get
1,1,0,0 (Receive returned 2)
1,1,2,2 (Receive returned 4)
2,2,0,0 (Receive returned 2)
So what you need to do is look at the value you get back from each Receive call and keep looping until your byte array is full.
byte[] TMP = new byte[4];
//loop till all 4 bytes are read
int offset = 0;
while(offset < TMP.Length)
{
offset += serv.Receive(TMP, offset, TMP.Length - offset, SocketFlags.None);
}
TMP = new byte[BitConverter.ToInt32(TMP, 0)];
//I don't understand why you are doing this, it is not necessary.
serv.Send(new byte[1]);
//Reset the offset then loop till TMP.Length bytes are read.
offset = 0;
while(offset < TMP.Length)
{
offset += serv.Receive(TMP, offset, TMP.Length - offset, SocketFlags.None);
}
//I don't understand why you are doing this, it is not necessary.
serv.Send(new byte[1]);
return TMP;
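If you find yourself repeating this pattern, it is worth wrapping in a helper. Here is a minimal sketch (the ReceiveExact name and the zero-return check are my additions, not part of the original code); Socket.Receive returning 0 means the remote side closed the connection, so the helper throws instead of spinning forever:
static void ReceiveExact(Socket socket, byte[] buffer)
{
    int offset = 0;
    while (offset < buffer.Length)
    {
        int read = socket.Receive(buffer, offset, buffer.Length - offset, SocketFlags.None);
        if (read == 0)
            throw new SocketException((int)SocketError.ConnectionReset); // remote side closed early
        offset += read;
    }
}
With that, the receive side becomes: read exactly 4 bytes for the length, then read exactly that many bytes for the payload.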
Lastly, you said "the network stream confuses you". I am willing to bet the above issue is one of the things that confused you; going to a lower level will not remove those complexities. If you want these complex parts handled for you, you will need to use a 3rd-party library that takes care of them inside the library.
As the title says, I am trying to use the new Span<T> type (introduced with C# 7.2) for my networking project. In my previous implementation I learned that it was mandatory to make sure a NetworkStream has received a complete buffer before trying to use its content; otherwise, depending on the connection, the data received on the other end may not be whole.
while (true)
{
while (!stream.DataAvailable)
Thread.Sleep(10);
int received = 0;
byte[] response = new byte[consumerBufferSize];
//Loop that forces the stream to read all incoming data before using it
while (received < consumerBufferSize)
received += stream.Read(response, received, consumerBufferSize - received);
string[] message = ObjectWrapper.ConvertByteArrayToObject<string>(response);
consumerAction(this, message);
}
However, a different approach for reading network stream data was introduced (Read(Span<byte>)). Assuming that stackalloc will help with performance, I am attempting to migrate my old implementation to accommodate this method. Here is what it looks like now:
while (true)
{
while (!stream.DataAvailable)
Thread.Sleep(10);
Span<byte> response = stackalloc byte[consumerBufferSize];
stream.Read(response);
string[] message = ObjectWrapper.ConvertByteArrayToObject<string>(response).Split('|');
consumerAction(this, message);
}
But now, how can I be sure that the buffer was completely read, since this overload does not provide parameters like the ones I was using?
Edit:
//Former method
int Read (byte[] buffer, int offset, int size);
//The one I am looking for
int Read (Span<byte> buffer, int offset, int size);
I'm not sure I understand what you're asking. All the same features you relied on in the first code example still exist when using Span<byte>.
The Read(Span<byte>) overload still returns the count of bytes read. And since the Span<byte> is not the buffer itself, but rather just a window into the buffer, you can update the Span<byte> value to indicate the new starting point to read additional data. Having the count of bytes read and being able to specify the offset for the next read are all you need to duplicate the functionality in your old example. Of course, you don't currently have any code that saves the original buffer reference; you'll need to add that too.
I would expect something like this to work fine:
while (true)
{
while (!stream.DataAvailable)
Thread.Sleep(10);
int received = 0;
byte* response = stackalloc byte[consumerBufferSize];
while (received < consumerBufferSize)
{
// point the span at the unfilled remainder of the stack buffer
Span<byte> span = new Span<byte>(response + received, consumerBufferSize - received);
received += stream.Read(span);
}
// process response here...
}
Note that this requires unsafe code because of the way stackalloc works. You can only avoid that by using Span<T> and allocating new blocks each time. Of course, that will eventually eat up all your stack.
Since in your implementation you apparently are dedicating a thread to this infinite loop, I don't see how stackalloc is helpful. You might as well just allocate a long-lived buffer array in the heap and use that.
In other words, I don't really see how this is better than just using the original Read(byte[], int, int) overload with a regular managed array. But the above is how you'd get the code to work.
Aside: you should learn how the async APIs work. Since you're already using NetworkStream, the async/await patterns are a natural fit. And regardless of what API you use, a loop checking DataAvailable is just plain crap. Don't do that. The Read() method is already a blocking method; you don't need to wait for data to show up in a separate loop, since the Read() method won't return until there is some.
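As a hedged sketch of what the async version might look like (the ReadExactAsync name is mine, not from the post above), using the Memory<byte> overload of NetworkStream.ReadAsync on a long-lived heap buffer:
// requires System.IO, System.Net.Sockets, System.Threading.Tasks
static async Task ReadExactAsync(NetworkStream stream, Memory<byte> buffer)
{
    while (!buffer.IsEmpty)
    {
        int read = await stream.ReadAsync(buffer);
        if (read == 0)
            throw new EndOfStreamException("Connection closed mid-message.");
        buffer = buffer.Slice(read); // shrink the window to the unread remainder
    }
}
Because ReadAsync already waits (asynchronously) until data arrives, the DataAvailable polling loop disappears entirely.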
I am just adding a bit of additional information.
The method you are talking about has the following signature:
public override int Read (Span<byte> buffer);
(source : https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.networkstream.read?view=net-5.0 )
where the int returned is the number of bytes read from the NetworkStream. Now, if we look at the Span<T> methods, we find Slice, with the following signature:
public Span<T> Slice (int start);
(source : https://learn.microsoft.com/en-us/dotnet/api/system.span-1.slice?view=net-5.0#system-span-1-slice(system-int32) )
which returns a portion of our Span. This lets you hand a specific portion of your stackalloc'd buffer to NetworkStream.Read without using unsafe code.
Reusing your code, you could do something like this:
while (true)
{
while (!stream.DataAvailable)
Thread.Sleep(10);
int received = 0;
Span<byte> response = stackalloc byte[consumerBufferSize];
//Loop that forces the stream to read all incoming data before using it
while (received < consumerBufferSize)
received += stream.Read(response.Slice(received));
string[] message = ObjectWrapper.ConvertByteArrayToObject<string>(response).Split('|');
consumerAction(this, message);
}
In simple words, we "create" a new Span that is a portion of the initial Span pointing to our stackalloc'd buffer. The start parameter of Slice lets us choose where this portion begins. The portion is then passed to Read, which will start writing into our buffer wherever we "started" the slice.
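To make the windowing behaviour concrete, here is a tiny standalone illustration (my example, not from the answer above): writing through a slice writes into the underlying buffer, offset by the slice's start.
Span<byte> buffer = stackalloc byte[4];  // all zeros
Span<byte> tail = buffer.Slice(2);       // a window over the last two bytes
tail[0] = 0xAA;                          // actually writes buffer[2]
tail[1] = 0xBB;                          // actually writes buffer[3]
// buffer now contains 00 00 AA BB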
I have a TcpClient connected to a server that sends a message back to the client.
When reading this data using the NetworkStream.Read method I can specify the number of bytes I want to read using the count parameter, which will decrease TcpClient.Available by count after the read is finished. From the docs:
count Int32
The maximum number of bytes to be read from the current stream.
For example:
public static void ReadResponse()
{
if (client.Available > 0) // Assume client.Available is 500 here
{
byte[] buffer = new byte[12]; // I only want to read the first 12 bytes, this could be a header or something
var read = 0;
NetworkStream stream = client.GetStream();
while (read < buffer.Length)
{
read = stream.Read(buffer, 0, buffer.Length);
}
// breakpoint
}
}
This reads the first 12 of the 500 bytes available on the TcpClient into buffer, and inspecting client.Available at the breakpoint yields the expected result of 488 (500 - 12).
Now when I try to do the exact same thing, but using an SslStream this time, the results are rather unexpected to me.
public static void ReadResponse()
{
if (client.Available > 0) // Assume client.Available is 500 here
{
byte[] buffer = new byte[12]; // I only want to read the first 12 bytes, this could be a header or something
var read = 0;
SslStream stream = new SslStream(client.GetStream(), false, new RemoteCertificateValidationCallback(ValidateServerCertificate), null);
while (read < buffer.Length)
{
read = stream.Read(buffer, 0, buffer.Length);
}
// breakpoint
}
}
This code will read the first 12 bytes into buffer, as expected. However, inspecting client.Available at the breakpoint now yields a result of 0.
Like the normal NetworkStream.Read, the documentation for SslStream.Read states that count indicates the maximum number of bytes to read:
count Int32
A Int32 that contains the maximum number of bytes to read from this stream.
While it does read only those 12 bytes and nothing more, I am wondering where the remaining 488 bytes went.
In the docs for either SslStream or TcpClient I couldn't find anything indicating that SslStream.Read flushes the stream or otherwise empties client.Available. What is the reason for this (and where is it documented)?
There is this question that asks for an equivalent of TcpClient.Available, which is not what I'm asking for. I want to know why this happens, which isn't covered there.
Remember that the SslStream might be reading large chunks from the underlying NetworkStream at once and buffering them internally, for efficiency reasons, or because the decryption process doesn't work byte-by-byte and needs a block of data to be available. So the fact that your TcpClient reports 0 available bytes means nothing, because those bytes are probably sitting in a buffer inside the SslStream.
In addition, your code to read 12 bytes is incorrect, which might be affecting what you're seeing.
Remember that Stream.Read can return fewer bytes than you were expecting. Each call to Stream.Read returns the number of bytes read during that call, not the overall total.
So you need something like this:
int read = 0;
while (read < buffer.Length)
{
int readThisTime = stream.Read(buffer, read, buffer.Length - read);
if (readThisTime == 0)
{
// The end of the stream has been reached: throw an error?
}
read += readThisTime;
}
When you're reading from a TLS stream, it over-reads, maintaining an internal buffer of data that is yet to be decrypted - or which has been decrypted but not yet consumed. This is a common approach used in streams especially when they mutate the content (compression, encryption, etc), because there is not necessarily a 1:1 correlation between input and output payload sizes, and it may be necessary to read entire frames from the source - i.e. you can't just read 3 bytes - the API needs to read the entire frame (say, 512 bytes), decrypt the frame, give you the 3 you wanted, and hold onto the remaining 509 to give you next time(s) you ask. This means that it often needs to consume more from the source (the socket in this case) than it gives you.
Many streaming APIs also do the same for performance reasons, for example StreamReader over-reads from the underlying Stream and maintains internally both a byteBuffer of bytes not yet decoded, and a charBuffer of decoded characters available for consuming. Your question would then be comparable to:
When using StreamReader, I've only read 3 characters, but my Stream has advanced 512 bytes; why?
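If all you need is "read exactly N bytes from a stream, whatever sits underneath", the same loop shown above works over NetworkStream and SslStream alike; on .NET 7 and later the built-in Stream.ReadExactly does this for you. A sketch of the manual version (mine, not from the answer):
// requires System.IO
static void ReadExact(Stream stream, byte[] buffer, int count)
{
    int read = 0;
    while (read < count)
    {
        int n = stream.Read(buffer, read, count - read);
        if (n == 0)
            throw new EndOfStreamException("Stream closed before " + count + " bytes arrived.");
        read += n;
    }
}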
I'm trying to send a very large piece of data to the server (about 11,000 bytes) and am having a problem: the data does not arrive complete.
Here's the code:
On my server, there is a loop.
do
{
Tick = Environment.TickCount;
Listen.AcceptClient();
Listen.Update();
}
Listen.Update
public static void UpdateClient(UserConnection client)
{
string data = null;
Decoder utf8Decoder = Encoding.UTF8.GetDecoder();
// byte[] buffer = new byte[client.TCPClient.Available];
//try
//{
//client.TCPClient.GetStream().
// client.TCPClient.GetStream().Read(buffer, 0, buffer.Length);
//}
//catch
//{
// int code = System.Runtime.InteropServices.Marshal.GetExceptionCode();
// Console.WriteLine("Erro Num: " + code);
//}
//data = Encoding.UTF8.GetString(buffer);
//Console.WriteLine("Byte is: " + ReadFully(client.TCPClient.GetStream(), 0));
Console.WriteLine("Iniciando"); // "Iniciando" is Portuguese for "Starting"
byte[] buffer = ReadFully(client.TCPClient.GetStream(), 0);
int charCount = utf8Decoder.GetCharCount(buffer, 0, buffer.Length);
Char[] chars = new Char[charCount];
int charsDecodedCount = utf8Decoder.GetChars(buffer, 0, buffer.Length, chars, 0);
foreach (Char c in chars)
{
data = data + String.Format("{0}", c);
}
int buffersize = buffer.Length;
Console.WriteLine("Byte is: " + buffer.Length);
Console.WriteLine("Data is: " + data);
Console.WriteLine("Size is: " + data.Length);
Server.Network.ReceiveData.SelectPacket(client.Index, data);
}
/// <summary>
/// Reads data from a stream until the end is reached. The
/// data is returned as a byte array. An IOException is
/// thrown if any of the underlying IO calls fail.
/// </summary>
/// <param name="stream">The stream to read data from</param>
/// <param name="initialLength">The initial buffer length</param>
public static byte[] ReadFully(Stream stream, int initialLength)
{
// If we've been passed an unhelpful initial length, just
// use 32K.
if (initialLength < 1)
{
initialLength = 32768;
}
byte[] buffer = new byte[initialLength];
int read = 0;
int chunk;
chunk = stream.Read(buffer, read, buffer.Length - read);
checkreach:
read += chunk;
// If we've reached the end of our buffer, check to see if there's
// any more information
if (read == buffer.Length)
{
int nextByte = stream.ReadByte();
// End of stream? If so, we're done
if (nextByte == -1)
{
return buffer;
}
// Nope. Resize the buffer, put in the byte we've just
// read, and continue
byte[] newBuffer = new byte[buffer.Length * 2];
Array.Copy(buffer, newBuffer, buffer.Length);
newBuffer[read] = (byte)nextByte;
buffer = newBuffer;
read++;
goto checkreach;
}
// Buffer is now too big. Shrink it.
byte[] ret = new byte[read];
Array.Copy(buffer, ret, read);
return ret;
}
Listen.AcceptClient
// Is anyone trying to join the party? ;D
if (listener.Pending())
{
// Add the new client to the list
Clients.Add(new UserConnection(listener.AcceptTcpClient(), Clients.Count()));
And this is my Winsock server.
Does anyone have tips or a solution?
Start here: Winsock FAQ. It will explain a number of things you need to know, including that you are unlikely to get, in a single call to Read(), all of the data that was sent. Every TCP program needs logic that receives data in some kind of loop, and in most cases also logic that interprets the received data to identify the boundaries between its individual elements (e.g. logical messages). The only exception is when the application protocol dictates that the whole transmission, from connection to closure, represents a single "unit", in which case the only boundary that matters is the end of the stream.
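For example, message boundaries are commonly recovered with a length prefix. A hedged sketch (my code, not the poster's; it assumes the sender writes a 4-byte little-endian length before each message):
static byte[] ReadMessage(NetworkStream stream)
{
    byte[] header = ReadExact(stream, 4);         // 4-byte little-endian length prefix
    int length = BitConverter.ToInt32(header, 0);
    return ReadExact(stream, length);             // then exactly that many payload bytes
}
static byte[] ReadExact(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int read = 0;
    while (read < count)
    {
        int n = stream.Read(buffer, read, count - read);
        if (n == 0) throw new EndOfStreamException(); // connection closed mid-message
        read += n;
    }
    return buffer;
}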
In addition (to address just some of the many things wrong in the little bit of code you included here):
Don't use TcpClient.Available; it's not required in correct code.
Don't use Marshal.GetExceptionCode() to retrieve exception information for managed exceptions.
Don't use Convert.ToInt32() when your value already is an instance of System.Int32. And more generally, don't use Convert at all in scenarios where a simple cast would accomplish the same thing (even a cast isn't needed here, but I can tell from the code here what your general habit is…you should break that habit).
Don't just ignore exceptions. Either do something to actually handle them, or let them propagate up the call stack. There's no way the rest of the code in your UpdateClient() method could work if an exception was thrown by the Read() method, but you go ahead and execute it all anyway.
Don't use the Flush() method on a NetworkStream object. It does nothing (it's there only because the Stream class requires it).
Do use Stream.ReadAsync() instead of dedicating a thread to each connection.
Do catch exceptions by including the exception type and a variable to accept the exception object reference.
Do use a persistent Decoder object to decode UTF8-encoded text (or any other variable-byte-length text encoding), so that if a character's encoded representation spans multiple received buffers, the text is still decoded properly (see the sketch after this list).
And finally:
Do post a good, minimal, complete code example. It is simply not possible to answer a question with any precision if it doesn't include a proper, complete code example.
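To illustrate the Decoder point with a self-contained sketch (mine, not the original answer's): a single Decoder instance buffers undecoded trailing bytes between calls, so a multi-byte UTF-8 sequence split across two receives still decodes correctly.
// requires System.Text
Decoder decoder = Encoding.UTF8.GetDecoder();
// "é" is encoded as the two bytes 0xC3 0xA9; simulate it arriving split across two reads
byte[] firstChunk = { (byte)'a', 0xC3 };
byte[] secondChunk = { 0xA9, (byte)'b' };
char[] chars = new char[8];
int n1 = decoder.GetChars(firstChunk, 0, firstChunk.Length, chars, 0);    // decodes "a", holds on to 0xC3
int n2 = decoder.GetChars(secondChunk, 0, secondChunk.Length, chars, n1); // decodes "éb"
Console.WriteLine(new string(chars, 0, n1 + n2)); // prints "aéb"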
Addendum:
Don't use the goto statement. Use a proper loop (e.g. while). Had you used a proper loop, you probably would have avoided the bug in your code where you fail to branch back to the actual Read() call.
Don't expect the Read() method to fill the buffer you passed it. Not only (as I already mentioned above) is there no guarantee that all of the data sent will be returned in a single call to Read(), there is no guarantee that the entire buffer you pass to Read() will be filled before Read() returns.
Don't read one byte at a time. That's one of the surest ways to kill performance and/or to introduce bugs. In your own example, I don't see anything obviously wrong – you only (intend to) read the single byte when looking for more data and then (intend to) go back to reading into a larger buffer – but it's not required (just try to read more data normally…that gets you the same information without special cases in the code that can lead to bugs and in any case make the code harder to understand).
Do look at other examples and tutorials of networking code. Reinventing the wheel may well eventually lead to a good solution, but odds are low of that and it is a lot more time-consuming and error-prone than following someone else's good example.
I will reiterate: please read the Winsock FAQ. It has a lot of valuable information that everyone who wants to write networking code needs to know.
I will also reiterate: you cannot get a precise answer without a COMPLETE code example.
I'm sending sensor data from an Android device to a TCP server in C#. The Android client sends the data in fixed-size chunks of 32 bytes.
On the server I read the data expecting it to come in full packets, but since TCP is a stream protocol, some messages arrive at the server split in two parts. I know that because I can watch it with a tool called SocketSniff.
The problem is that I don't know how to handle it on my server.
All examples I found use NetworkStream.Read(); with this method I have to pass an array of bytes to store the data read, an offset, and the number of bytes to read. This array of bytes must have a known size, in my case 32.
I don't know the real size of the message that arrived at my server; it could be one of the following situations.
If the received data size is 32 bytes, it's all OK.
If the received data size is greater than 32 bytes, I think I'm losing data.
If the received data size is less than 32 bytes, let's say 20 bytes, those bytes are stored in my array and the last 12 bytes of the array keep the value zero. Since I may really be receiving some zeros, there's no way to know the size I actually received, so I can't merge it with the remaining data, which should come in the next read.
My code which handles the receiving is the following:
int buffer = 32;
...
private void HandleClientComm(object client)
{
TcpClient tcpClient = (TcpClient)client;
NetworkStream clientStream = tcpClient.GetStream();
byte[] message = new byte[buffer];
int bytesRead;
while (true)
{
bytesRead = 0;
try
{
bytesRead = clientStream.Read(message, 0, message.Length);
}
catch
{
break;
}
if (bytesRead == 0)
{
// Connection closed
break;
}
SensorData sensorData = ProcessTcpPacket(message);
}
tcpClient.Close();
}
Is there any way to know the size of the data I'm receiving in the socket?
Well, yes, you have the bytesRead variable - it holds the number of bytes read from the stream. You will read at most message.Length bytes, but you may read fewer.
Note that if there are more bytes available, you will not lose them by reading just message.Length bytes. Rather, they will be available for you next time you read from the stream.
What you need to do is add another while loop, like this:
int messageRead = 0;
while(messageRead < message.Length)
{
int bytesRead = clientStream.Read(message, messageRead, message.Length - messageRead);
messageRead += bytesRead;
if(bytesRead==0)
return; // The socket was closed
}
// Here you have a full message
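Folded into your HandleClientComm loop, that might look like the following sketch (it assumes, as in your code, that every message is exactly 32 bytes; ProcessTcpPacket and the surrounding variables are yours):
while (true)
{
    int messageRead = 0;
    while (messageRead < message.Length)
    {
        int bytesRead = clientStream.Read(message, messageRead, message.Length - messageRead);
        if (bytesRead == 0)
        {
            tcpClient.Close(); // connection closed by the client
            return;
        }
        messageRead += bytesRead;
    }
    // message now holds exactly one complete 32-byte packet
    SensorData sensorData = ProcessTcpPacket(message);
}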
I'm receiving byte data from a VNC server. Using RealVNC, this works in my method/function:
byte[] readBytes = new byte[count];
sock.Receive(readBytes);
Using TigerVNC it doesn't work, but this does:
byte[] readBytes = new byte[count];
byte[] aByte = new byte[1];
for (int i = 0; i < count; i++)
{
sock.Receive(aByte);
readBytes[i] = aByte[0];
}
I stumbled upon this pretty quickly: when I stepped through with breakpoints, the original code came in alright from Tight VNC, but without breaking, only the first two bytes are received. My socket is blocking and has a receive size of 1024. I am, however, running the server and client locally, as I have no other way of testing.
The Question is:
Other than using an extra byte of memory with aByte and iterating through X bytes, is there much of a processing difference compared to receiving directly with Socket.Receive? Bear in mind I'll possibly be getting MBs of data at some point.
Furthermore, this is going to be implemented on BlackBerry as a Java app; would the same kind of method have processing implications in Java Mobile?
cheers, craig
Neither of these methods is particularly good. What you want to do is keep calling Receive until it has either read all the bytes you want or returned an error. Each time, pass an offset just past the bytes you've received so far and the number of bytes remaining.
Take a look at the Socket.Receive method on this page. The crux is this (simplified):
public static void Receive(Socket socket, byte[] buffer, int size)
{
    int received = 0;
    do
    {
        int read = socket.Receive(buffer, received, size - received, SocketFlags.None);
        if (read == 0)
            throw new SocketException((int)SocketError.ConnectionReset); // remote side closed before 'size' bytes arrived
        received += read;
    } while (received < size);
}
This way you don't return too soon, but you don't read the data byte by byte either.
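Usage is then straightforward; for example, to read a fixed-size 12-byte header (sock being your connected socket):
byte[] header = new byte[12];
Receive(sock, header, header.Length); // blocks until all 12 bytes have arrived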