I'm trying to send a very large piece of data to the server (size 11000) and am having a problem: the data does not arrive complete.
Here is the code.
On my server, there is a loop:
do
{
    Tick = Environment.TickCount;
    Listen.AcceptClient();
    Listen.Update();
}
Listen.update
public static void UpdateClient(UserConnection client)
{
    string data = null;
    Decoder utf8Decoder = Encoding.UTF8.GetDecoder();
    // byte[] buffer = new byte[client.TCPClient.Available];
    //try
    //{
    //    client.TCPClient.GetStream().
    //    client.TCPClient.GetStream().Read(buffer, 0, buffer.Length);
    //}
    //catch
    //{
    //    int code = System.Runtime.InteropServices.Marshal.GetExceptionCode();
    //    Console.WriteLine("Error no.: " + code);
    //}
    //data = Encoding.UTF8.GetString(buffer);
    //Console.WriteLine("Byte is: " + ReadFully(client.TCPClient.GetStream(), 0));
    Console.WriteLine("Starting");
    byte[] buffer = ReadFully(client.TCPClient.GetStream(), 0);
    int charCount = utf8Decoder.GetCharCount(buffer, 0, buffer.Length);
    Char[] chars = new Char[charCount];
    int charsDecodedCount = utf8Decoder.GetChars(buffer, 0, buffer.Length, chars, 0);
    foreach (Char c in chars)
    {
        data = data + String.Format("{0}", c);
    }
    int buffersize = buffer.Length;
    Console.WriteLine("Byte is: " + buffer.Length);
    Console.WriteLine("Data is: " + data);
    Console.WriteLine("Size is: " + data.Length);
    Server.Network.ReceiveData.SelectPacket(client.Index, data);
}
/// <summary>
/// Reads data from a stream until the end is reached. The
/// data is returned as a byte array. An IOException is
/// thrown if any of the underlying IO calls fail.
/// </summary>
/// <param name="stream">The stream to read data from</param>
/// <param name="initialLength">The initial buffer length</param>
public static byte[] ReadFully(Stream stream, int initialLength)
{
    // If we've been passed an unhelpful initial length, just
    // use 32K.
    if (initialLength < 1)
    {
        initialLength = 32768;
    }
    byte[] buffer = new byte[initialLength];
    int read = 0;
    int chunk;
    chunk = stream.Read(buffer, read, buffer.Length - read);
checkreach:
    read += chunk;
    // If we've reached the end of our buffer, check to see if there's
    // any more information
    if (read == buffer.Length)
    {
        int nextByte = stream.ReadByte();
        // End of stream? If so, we're done
        if (nextByte == -1)
        {
            return buffer;
        }
        // Nope. Resize the buffer, put in the byte we've just
        // read, and continue
        byte[] newBuffer = new byte[buffer.Length * 2];
        Array.Copy(buffer, newBuffer, buffer.Length);
        newBuffer[read] = (byte)nextByte;
        buffer = newBuffer;
        read++;
        goto checkreach;
    }
    // Buffer is now too big. Shrink it.
    byte[] ret = new byte[read];
    Array.Copy(buffer, ret, read);
    return ret;
}
Listen.AcceptClient
// Is anyone wanting to join the party? ;D
if (listener.Pending())
{
    // Add them to the list
    Clients.Add(new UserConnection(listener.AcceptTcpClient(), Clients.Count()));
And that is my Winsock server.
Does anyone have tips or a solution?
Start here: Winsock FAQ. It will explain a number of things you need to know, including that you are unlikely to read all of the data that was sent in a single call to Read(). Every single TCP program needs to include, somewhere, logic that will receive data via some type of looping, and in most cases also logic to interpret the data being received to identify boundaries between individual elements of the received data (e.g. logical messages, etc. … the only exception is when the application protocol dictates that the whole transmission from connection to closure represents a single "unit", in which case the only boundary that matters is the end of the stream).
In addition (to address just some of the many things wrong in the little bit of code you included here):
Don't use TcpClient.Available; it's not required in correct code.
Don't use Marshal.GetExceptionCode() to retrieve exception information for managed exceptions.
Don't use Convert.ToInt32() when your value already is an instance of System.Int32. And more generally, don't use Convert at all in scenarios where a simple cast would accomplish the same thing (even a cast isn't needed here, but I can tell from the code here what your general habit is…you should break that habit).
Don't just ignore exceptions. Either do something to actually handle them, or let them propagate up the call stack. There's no way the rest of the code in your UpdateClient() method could work if an exception was thrown by the Read() method, but you go ahead and execute it all anyway.
Don't use the Flush() method on a NetworkStream object. It does nothing (it's there only because the Stream class requires it).
Do use Stream.ReadAsync() instead of dedicating a thread to each connection.
Do catch exceptions by including the exception type and a variable to accept the exception object reference.
Do use a persistent Decoder object to decode UTF8-encoded text (or any other variable-byte-length text encoding), so that if a character's encoded representation spans multiple received buffers, the text is still decoded properly (see the sketch after the addendum below).
And finally:
Do post a good, minimal, complete code example. It is simply not possible to answer a question with any sort of precision if it doesn't include a proper, complete code example.
Addendum:
Don't use the goto statement. Use a proper loop (e.g. while). Had you used a proper loop, you probably would have avoided the bug in your code where you fail to branch back to the actual Read() call.
Don't expect the Read() method to fill the buffer you passed it. Not only (as I already mentioned above) is there no guarantee that all of the data sent will be returned in a single call to Read(), there is no guarantee that the entire buffer you pass to Read() will be filled before Read() returns.
Don't read one byte at a time. That's one of the surest ways to kill performance and/or to introduce bugs. In your own example, I don't see anything obviously wrong – you only (intend to) read the single byte when looking for more data and then (intend to) go back to reading into a larger buffer – but it's not required (just try to read more data normally…that gets you the same information without special cases in the code that can lead to bugs and in any case make the code harder to understand).
Do look at other examples and tutorials of networking code. Reinventing the wheel may well eventually lead to a good solution, but odds are low of that and it is a lot more time-consuming and error-prone than following someone else's good example.
I will reiterate: please read the Winsock FAQ. It has a lot of valuable information that everyone who wants to write networking code needs to know.
I will also reiterate: you cannot get a precise answer without a COMPLETE code example.
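Putting several of these points together, here is a minimal sketch of what a correct receive loop could look like. It is only an illustration under assumptions (the 4 KB buffer, the ReceiveLoopAsync name, and the framing comment are mine, not your project's API); the usual System.Net.Sockets, System.Text and System.Threading.Tasks usings are assumed.
private static async Task ReceiveLoopAsync(TcpClient client)
{
    NetworkStream stream = client.GetStream();
    Decoder decoder = Encoding.UTF8.GetDecoder(); // one persistent Decoder per connection
    byte[] buffer = new byte[4096];
    StringBuilder pending = new StringBuilder();
    while (true)
    {
        int read = await stream.ReadAsync(buffer, 0, buffer.Length);
        if (read == 0)
            break; // the remote side closed the connection
        // Decode only the bytes actually received; the Decoder keeps any partial
        // UTF-8 sequence and completes it with the bytes from the next read.
        char[] chars = new char[decoder.GetCharCount(buffer, 0, read)];
        decoder.GetChars(buffer, 0, read, chars, 0);
        pending.Append(chars);
        // Here you would scan 'pending' for complete messages according to your
        // own framing rules and hand each one to your packet handler.
    }
}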
As the title says, I am trying to use the new (C# 8.0) Span type for my networking project. In my previous implementation I learned that it was mandatory to make sure a NetworkStream had received a complete buffer before trying to use its content; otherwise, depending on the connection, the data received on the other end might not be whole.
while (true)
{
    while (!stream.DataAvailable)
        Thread.Sleep(10);
    int received = 0;
    byte[] response = new byte[consumerBufferSize];
    // Loop that forces the stream to read all incoming data before using it
    while (received < consumerBufferSize)
        received += stream.Read(response, received, consumerBufferSize - received);
    string[] message = ObjectWrapper.ConvertByteArrayToObject<string>(response);
    consumerAction(this, message);
}
However, a different approach for reading network stream data was introduced (Read(Span<byte>)). Assuming that stackalloc will help with performance, I am attempting to migrate my old implementation to accommodate this method. Here is what it looks like now:
while (true)
{
    while (!stream.DataAvailable)
        Thread.Sleep(10);
    Span<byte> response = stackalloc byte[consumerBufferSize];
    stream.Read(response);
    string[] message = ObjectWrapper.ConvertByteArrayToObject<string>(response).Split('|');
    consumerAction(this, message);
}
But now how can I be sure that the buffer was completely read, since this overload does not provide the parameters the old one did?
Edit:
// Former method
int Read (byte[] buffer, int offset, int size);
// The one I am looking for
int Read (Span<byte> buffer, int offset, int size);
I'm not sure I understand what you're asking. All the same features you relied on in the first code example still exist when using Span<byte>.
The Read(Span<byte>) overload still returns the count of bytes read. And since the Span<byte> is not the buffer itself, but rather just a window into the buffer, you can update the Span<byte> value to indicate the new starting point to read additional data. Having the count of bytes read and being able to specify the offset for the next read are all you need to duplicate the functionality in your old example. Of course, you don't currently have any code that saves the original buffer reference; you'll need to add that too.
I would expect something like this to work fine:
while (true)
{
    while (!stream.DataAvailable)
        Thread.Sleep(10);
    int received = 0;
    byte* response = stackalloc byte[consumerBufferSize];
    while (received < consumerBufferSize)
    {
        // Wrap the unread remainder of the stack buffer in a Span<byte> for each read
        Span<byte> span = new Span<byte>(response + received, consumerBufferSize - received);
        received += stream.Read(span);
    }
    // process response here...
}
Note that this requires unsafe code because of the way stackalloc works. You can only avoid that by using Span<T> and allocating new blocks each time. Of course, that will eventually eat up all your stack.
Since in your implementation you apparently are dedicating a thread to this infinite loop, I don't see how stackalloc is helpful. You might as well just allocate a long-lived buffer array in the heap and use that.
In other words, I don't really see how this is better than just using the original Read(byte[], int, int) overload with a regular managed array. But the above is how you'd get the code to work.
Aside: you should learn how the async APIs work. Since you're already using NetworkStream, the async/await patterns are a natural fit. And regardless of what API you use, a loop checking DataAvailable is just plain crap. Don't do that. The Read() method is already a blocking method; you don't need to wait for data to show up in a separate loop, since the Read() method won't return until there is some.
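For illustration only, here is the same loop as an async method body, with the DataAvailable polling removed (ProcessMessage is a placeholder, not part of your code; stream and consumerBufferSize are the variables from your example):
byte[] buffer = new byte[consumerBufferSize];
while (true)
{
    int received = 0;
    while (received < consumerBufferSize)
    {
        int read = await stream.ReadAsync(buffer, received, consumerBufferSize - received);
        if (read == 0)
            return; // connection closed
        received += read;
    }
    ProcessMessage(buffer); // placeholder for your ObjectWrapper/consumerAction handling
}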
I am just adding a little bit of additional information.
The method that you are talking about has the following signature:
public override int Read (Span<byte> buffer);
(source : https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.networkstream.read?view=net-5.0 )
Where the int returned is the number of bytes read from the NetworkStream. Now, looking at the Span<T> methods, we find Slice, with the following signature:
public Span<T> Slice (int start);
(source : https://learn.microsoft.com/en-us/dotnet/api/system.span-1.slice?view=net-5.0#system-span-1-slice(system-int32) )
This returns a portion of our Span, which you can use to pass a certain portion of your stackalloc buffer to NetworkStream.Read without using unsafe code.
Reusing your code, you could write something like this:
while (true)
{
    while (!stream.DataAvailable)
        Thread.Sleep(10);
    int received = 0;
    Span<byte> response = stackalloc byte[consumerBufferSize];
    // Loop that forces the stream to read all incoming data before using it
    while (received < consumerBufferSize)
        received += stream.Read(response.Slice(received));
    string[] message = ObjectWrapper.ConvertByteArrayToObject<string>(response).Split('|');
    consumerAction(this, message);
}
In simple words, with Slice we "create" a new Span that is a portion of the initial Span pointing to our stackalloc buffer; the start parameter lets us choose where that portion begins. The portion is then passed to Read, which starts writing into our buffer wherever we "started" our Slice.
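A tiny standalone illustration of that offsetting behaviour (the values are arbitrary):
Span<byte> whole = stackalloc byte[8];
Span<byte> tail = whole.Slice(5); // a 3-byte window starting at index 5
tail[0] = 0xFF;                   // writes into whole[5]
Console.WriteLine(whole[5]);      // prints 255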
I have code like this:
public byte[] Read()
{
    try
    {
        if (ClientSocket.Available != 0)
        {
            var InBuffer = new byte[ClientSocket.Available];
            ClientSocket.Receive(InBuffer);
            return InBuffer;
        }
        else
        {
            return null;
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.ToString());
        return null; // nothing read if an error occurred
    }
}
I'd like to make an async equivalent without totally changing the flow of the code (warts and all), but I am running into issues. I also want to switch to NetworkStream, as it has built-in async methods.
I'd like the signature to be Task<byte[]> Read(), but:
NetworkStream.ReadAsync expects to be passed a byte[] but doesn't return it, so I can't simply return stream.Read(...)
NetworkStream doesn't appear to tell you how many bytes are available to be read.
If there is no data available, I don't want to call stream.Read; I just want to pass back null.
So regardless of issues in the above method - I know it is not optimal - how might I do this?
The aim being that I can do this, or equivalent:
byte [] bytes = await x.Read();
NetworkStream.ReadAsync is like most of the "give me a byte buffer" methods I've come across; you give it a byte array and ask it to read X number of bytes from the network and place it into the buffer. It may read less than you requested, but it won't read more. It returns the number of bytes read. At all times your code keeps ahold of the byte array buffer so you would end up doing something like this:
byte[] buf = new byte[4096];
int bytesRead = await networkStream.ReadAsync(buf, 0, buf.Length, someCancelationToken);
byte[] rtn = new byte[bytesRead];
Array.Copy(buf, 0, rtn, 0, rtn.Length);
return rtn;
That is to say, you read as an async op, then return an array sized to exactly the number of bytes reported as read, which comes from the buffer. A method employing this code would return a Task<byte[]>.
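Wrapped up as a method with the signature you asked for, it might look roughly like this (the 4096 buffer size, the networkStream field, and returning null when the remote side closes are my assumptions, mirroring your original method):
public async Task<byte[]> Read()
{
    byte[] buf = new byte[4096];
    int bytesRead = await networkStream.ReadAsync(buf, 0, buf.Length);
    if (bytesRead == 0)
        return null; // remote side closed the connection
    byte[] rtn = new byte[bytesRead];
    Array.Copy(buf, 0, rtn, 0, rtn.Length);
    return rtn;
}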
NetworkStream also has a DataAvailable property that would appear to address your "if there is nothing to read" requirement
There is also an overload of ReadAsync that accepts a Memory<byte> as the buffer. You'd use this in a similar way, except you would call the Slice method on it when you knew how many bytes had been read, to return you a Memory<byte> looking at just that section of the buffer. If you then called ToArray on the result of the Slice call, you'd get an array sized to your liking (bytes read). There's likely little difference between the two in this context, though using Memory<byte> (and its related type Span<byte>) can reduce the number of memory allocations for some operations
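A sketch of that Memory<byte> variant, under the same assumptions:
public async Task<byte[]> Read(CancellationToken token = default)
{
    Memory<byte> buf = new byte[4096];
    int bytesRead = await networkStream.ReadAsync(buf, token);
    return buf.Slice(0, bytesRead).ToArray(); // an array sized to exactly what was read
}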
I have a TcpClient client connected to a server that sends a message back to the client.
When reading this data using the NetworkStream.Read method I can specify the number of bytes I want to read using the count parameter, which will decrease TcpClient.Available by count after the read is finished. From the docs:
count Int32
The maximum number of bytes to be read from the current stream.
In example:
public static void ReadResponse()
{
    if (client.Available > 0) // Assume client.Available is 500 here
    {
        byte[] buffer = new byte[12]; // I only want to read the first 12 bytes, this could be a header or something
        var read = 0;
        NetworkStream stream = client.GetStream();
        while (read < buffer.Length)
        {
            read = stream.Read(buffer, 0, buffer.Length);
        }
        // breakpoint
    }
}
This reads the first 12 bytes of the 500 available on the TcpClient into buffer, and inspecting client.Available at the breakpoint will yield (the expected) result of 488 (500 - 12).
Now when I try to do the exact same thing, but using an SslStream this time, the results are rather unexpected to me.
public static void ReadResponse()
{
    if (client.Available > 0) // Assume client.Available is 500 here
    {
        byte[] buffer = new byte[12]; // I only want to read the first 12 bytes, this could be a header or something
        var read = 0;
        SslStream stream = new SslStream(client.GetStream(), false, new RemoteCertificateValidationCallback(ValidateServerCertificate), null);
        while (read < buffer.Length)
        {
            read = stream.Read(buffer, 0, buffer.Length);
        }
        // breakpoint
    }
}
This code will read the first 12 bytes into buffer, as expected. However when inspecting the client.Available at the breakpoint now will yield a result of 0.
Like that of the normal NetworkStream.Read, the documentation for SslStream.Read states that count indicates the maximum number of bytes to read.
count Int32
A Int32 that contains the maximum number of bytes to read from this stream.
While it does read only those 12 bytes and nothing more, I am wondering where the remaining 488 bytes went.
In the docs for either SslStream or TcpClient I couldn't find anything indicating that using SslStream.Read flushes the stream or otherwise empties the client.Available. What is the reason for doing this (and where is this documented)?
There is this question that asks for an equivalent of TcpClient.Available, which is not what I'm asking for. I want to know why this happens, which isn't covered there.
Remember that the SslStream might be reading large chunks from the underlying TcpStream at once and buffering them internally, for efficiency reasons, or because the decryption process doesn't work byte-by-byte and needs a block of data to be available. So the fact that your TcpClient contains 0 available bytes means nothing, because those bytes are probably sitting in a buffer inside the SslStream.
In addition, your code to read 12 bytes is incorrect, which might be affecting what you're seeing.
Remember that Stream.Read can return fewer bytes than you were expecting. Subsequent calls to Stream.Read will return the number of bytes read during that call, and not overall.
So you need something like this:
int read = 0;
while (read < buffer.Length)
{
    int readThisTime = stream.Read(buffer, read, buffer.Length - read);
    if (readThisTime == 0)
    {
        // The end of the stream has been reached: throw an error?
    }
    read += readThisTime;
}
When you're reading from a TLS stream, it over-reads, maintaining an internal buffer of data that is yet to be decrypted - or which has been decrypted but not yet consumed. This is a common approach used in streams especially when they mutate the content (compression, encryption, etc), because there is not necessarily a 1:1 correlation between input and output payload sizes, and it may be necessary to read entire frames from the source - i.e. you can't just read 3 bytes - the API needs to read the entire frame (say, 512 bytes), decrypt the frame, give you the 3 you wanted, and hold onto the remaining 509 to give you next time(s) you ask. This means that it often needs to consume more from the source (the socket in this case) than it gives you.
Many streaming APIs also do the same for performance reasons, for example StreamReader over-reads from the underlying Stream and maintains internally both a byteBuffer of bytes not yet decoded, and a charBuffer of decoded characters available for consuming. Your question would then be comparable to:
When using StreamReader, I've only read 3 characters, but my Stream has advanced 512 bytes; why?
In my app I have to receive and process some data from a device connected through a COM port. I can do this partially. With this particular device, the first two bytes are the length of the packet (minus 2, since it doesn't count these very two bytes; so it is the length of the rest of the packet). Then, since I know the device tends to send its data slowly, I read the rest of the packet in a loop until all the data has been read. But right here I encountered a strange problem. Let's assume the entire packet (including the first two bytes with the length) looks like this: ['a', 'b', 'c', 'd', 'e']. When I read the first two bytes ('a' and 'b'), I'd expect the rest of the packet to look like this: ['c', 'd', 'e']. But instead, it looks like this: ['b', 'c', 'd', 'e']. How come the second byte of the response is still in the read buffer? And why just the second one, without the first?
The code below shows how I handle the communication process:
//The data array is some array with output data
//The size array is two-byte array to store frame-length bytes
//The results array is for device's response
//The part array is for part of the response that's currently in read buffer
port.Write(data, 0, data.Length);
// Receiving device's response (if there's any)
try
{
    port.Read(size, 0, 2); // Read first two bytes (packet's length) of the response
    // We'll store entire response in results array. We get its size from first two bytes of response
    // (+2 for these very bytes since they're not counted in the device's data frame)
    results = new byte[(size[0] | ((int)size[1] << 8)) + 2];
    results[0] = size[0]; results[1] = size[1]; // We'll need packet size for checksum count
    // Time to read rest of the response
    for (offset = 2; offset < results.Length && port.BytesToRead > 0; offset += part.Length)
    {
        System.Threading.Thread.Sleep(5); // Device's quite slow, isn't it
        try
        {
            part = new byte[port.BytesToRead];
            port.Read(part, 0, part.Length); // Here's where old data is being read
        }
        catch (System.TimeoutException)
        {
            // Handle it somehow
        }
        Buffer.BlockCopy(part, 0, results, offset, part.Length);
    }
    if (offset < results.Length) // Something went wrong during receiving response
        throw new Exception();
}
catch (Exception)
{
    // Handle it somehow
}
You are making a traditional mistake: you cannot ignore the return value of Read(). It tells you how many bytes were actually received. It will be at least 1 and no more than count. How many bytes are present in the receive buffer is what BytesToRead tells you. Simply keep calling Read() until you're happy:
int cnt = 0;
while (cnt < 2) cnt += port.Read(size, cnt, 2 - cnt);
Just use the same code in the 2nd part of your code, so that you don't need the Sleep() call to avoid burning 100% of a core. Do keep in mind that TimeoutException is just as likely when you read the size; more likely, actually. It is a fatal exception if it is thrown when cnt > 0; you can't resynchronize anymore.
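Applied to the second part of your code, the same pattern reads the payload directly into results and honours the return value (sketch only; it relies on the port's ReadTimeout to throw TimeoutException if the device stalls):
int offset = 2;
while (offset < results.Length)
{
    offset += port.Read(results, offset, results.Length - offset);
}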
Well, strangely enough, when I read the first two bytes separately:
port.Read(size, 0, 1); //Read first two bytes (packet's length) of the response
port.Read(size, 1, 1); //Second time, lol
everything works just fine, no matter what kind of data packet I receive from the device.
The documentation for SerialPort contains the following text:
Because the SerialPort class buffers data, and the stream contained in the BaseStream property does not, the two might conflict about how many bytes are available to read. The BytesToRead property can indicate that there are bytes to read, but these bytes might not be accessible to the stream contained in the BaseStream property because they have been buffered to the SerialPort class.
Could this explain why BytesToRead is giving you confusing values?
Personally, I always use the DataReceived event and in my event handler, I use ReadExisting() to read all immediately available data and add it to my own buffer. I don't attempt to impose any meaning on the data stream at that level, I just buffer it; instead I'll typically write a little state machine that takes characters out of the buffer one at a time and parses the data into whatever format is required.
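As a rough sketch of that approach (rxBuffer, ProcessBuffer and the port field are my names, not a prescribed API):
readonly StringBuilder rxBuffer = new StringBuilder();
void PortDataReceived(object sender, SerialDataReceivedEventArgs e)
{
    // Just append whatever has arrived; don't try to interpret it here.
    rxBuffer.Append(port.ReadExisting());
    // Placeholder: a small state machine that pulls complete frames out of rxBuffer.
    ProcessBuffer();
}
// wired up once, e.g. in the constructor:
// port.DataReceived += PortDataReceived;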
Alternatively, you could use the ReactiveExtensions to produce an observable sequence of received characters and then layer observers on top of that. I do it with a couple of extension methods like this:
public static class SerialObservableExtensions
{
    static readonly Logger log = LogManager.GetCurrentClassLogger();

    /// <summary>
    /// Captures the <see cref="System.IO.Ports.SerialPort.DataReceived" /> event of a serial port and returns an
    /// observable sequence of the events.
    /// </summary>
    /// <param name="port">The serial port that will act as the event source.</param>
    /// <returns><see cref="IObservable{Char}" /> - an observable sequence of events.</returns>
    public static IObservable<EventPattern<SerialDataReceivedEventArgs>> ObservableDataReceivedEvents(
        this ISerialPort port)
    {
        var portEvents = Observable.FromEventPattern<SerialDataReceivedEventHandler, SerialDataReceivedEventArgs>(
            handler =>
            {
                log.Debug("Event: SerialDataReceived");
                return handler.Invoke;
            },
            handler =>
            {
                // We must discard stale data when subscribing or it will pollute the first element of the sequence.
                port.DiscardInBuffer();
                port.DataReceived += handler;
                log.Debug("Listening to DataReceived event");
            },
            handler =>
            {
                port.DataReceived -= handler;
                log.Debug("Stopped listening to DataReceived event");
            });
        return portEvents;
    }

    /// <summary>
    /// Gets an observable sequence of all the characters received by a serial port.
    /// </summary>
    /// <param name="port">The port that is to be the data source.</param>
    /// <returns><see cref="IObservable{char}" /> - an observable sequence of characters.</returns>
    public static IObservable<char> ReceivedCharacters(this ISerialPort port)
    {
        var observableEvents = port.ObservableDataReceivedEvents();
        var observableCharacterSequence = from args in observableEvents
                                          where args.EventArgs.EventType == SerialData.Chars
                                          from character in port.ReadExisting()
                                          select character;
        return observableCharacterSequence;
    }
}
The ISerialPort interface is just a header interface that I extracted from the SerialPort class, which makes it easier for me to mock it when I'm unit testing.
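Usage then looks something like this (the lambda body is just an example of feeding characters to a parser):
// Subscribe to the character sequence; dispose the subscription to stop listening.
IDisposable subscription = port.ReceivedCharacters()
                               .Subscribe(c => Console.Write(c));
// later, when shutting down:
subscription.Dispose();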
NetworkStream stream = socket.GetStream();
if (stream.CanRead)
{
    while (true)
    {
        int i = stream.Read(buf, 0, 1024);
        result += Encoding.ASCII.GetString(buf, 0, i);
    }
}
The code above was designed to retrieve a message from a TcpClient while running on a separate thread. The Read method works fine until it is supposed to return -1 to indicate there is nothing left to read; instead, it just terminates the thread it is running on without any apparent reason - tracing each step in the debugger shows that it simply stops running right after that line.
I also tried wrapping it in a try ... catch, without much success.
What could be causing this?
EDIT: I tried
NetworkStream stream = socket.GetStream();
if (stream.CanRead)
{
    while (true)
    {
        int i = stream.Read(buf, 0, 1024);
        if (i == 0)
        {
            break;
        }
        result += Encoding.ASCII.GetString(buf, 0, i);
    }
}
thanks to @JonSkeet, but the problem is still there. The thread terminates at that Read line.
EDIT2: I fixed the code like this and it worked.
while (stream.DataAvailable)
{
    int i = stream.Read(buf, 0, 1024);
    result += Encoding.ASCII.GetString(buf, 0, i);
}
I think the problem was simple, I just didn't think thoroughly enough. Thanks everyone for taking a look at this!
No, Stream.Read returns 0 when there's nothing to read, not -1:
Return value
The total number of bytes read into the buffer. This can be less than the number of bytes requested if that many bytes are not currently available, or zero (0) if the end of the stream has been reached.
My guess is that actually, no exception is being thrown and the thread isn't being aborted - but it's just looping forever. You should be able to see this if you step through in the debugger. Whatever's happening, your "happy" termination condition will never be hit...
Since you're trying to read ASCII characters, from a stream, take a look at the following as a potentially simpler way to do it:
public IEnumerable<string> ReadLines(Stream stream)
{
    using (StreamReader reader = new StreamReader(stream, Encoding.ASCII))
    {
        while (!reader.EndOfStream)
            yield return reader.ReadLine();
    }
}
While this may not be exactly what you want, the salient points are:
Use a StreamReader to do all the hard work for you
Use a while loop with !reader.EndOfStream to loop through the stream
You can still use reader.Read(buffer, 0, 1024) if you'd prefer to read chunks into a buffer, and append to result. Just note that these will be char[] chunks not byte[] chunks, which is likely what you want.
It looks to me like it is simply blocking - i.e. waiting on the end of the stream. For it to return a non-positive number, it is necessary that the stream be closed, i.e. the caller has not only sent data, but has closed their outbound socket. Otherwise, the system cannot distinguish between "waiting for a packet to arrive" and "the end of the stream".
If the caller is sending one message only, they should close their outbound socket after sending (they can keep their inbound socket open for a reply).
If the caller is sending multiple messages, then you must use a framing approach to read individual sub-messages. In the case of a text-based protocol this usually means "hunt the newline".
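For the newline-delimited case, a minimal framing sketch might look like the following (the 1 KB buffer and HandleMessage are placeholders; stream is the NetworkStream from the question):
var pending = new MemoryStream();
byte[] buf = new byte[1024];
int read;
while ((read = stream.Read(buf, 0, buf.Length)) > 0)
{
    for (int i = 0; i < read; i++)
    {
        if (buf[i] == (byte)'\n')
        {
            string message = Encoding.ASCII.GetString(pending.ToArray());
            HandleMessage(message); // placeholder for whatever processing you need
            pending.SetLength(0);   // reset the buffer for the next message
        }
        else
        {
            pending.WriteByte(buf[i]);
        }
    }
}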