I tried to compare the performance between synchronous and asynchronous methods when reading small amounts of data that had already been received.
So I send the data and make sure it is waiting on the receiving socket (same program).
source.Send(sendBuffer);
while (target.Available != sendBuffer.Length)
    Thread.Sleep(50);

SocketAsyncEventArgs arg = new SocketAsyncEventArgs();
arg.Completed += AsyncCallback;
arg.SetBuffer(receiveBuffer, 0, sendBuffer.Length);
if (target.ReceiveAsync(arg))
{
    // Pending request
    // 999 out of 1000 times we end up here
}
To my surprise, the ReceiveAsync call almost never finished synchronously.
I would assume that ReceiveAsync would return synchronously and perform as fast as Receive, but since that almost never happens I can't tell.
Were my expectations of ReceiveAsync wrong here?
Would it be better to always check target.Available first and do a Receive if there are enough bytes available?
This kind of breaks some of the benefits of using ReceiveAsync rather than BeginReceive, which is why I'm asking this question.
ReceiveAsync just calls into the underlying Winsock subsystem and uses what is called overlapped I/O (asynchronous I/O). For a socket created to use overlapped I/O: "Regardless of whether or not the incoming data fills all the buffers, the completion indication occurs for overlapped sockets." Here, the "completion indication" is the callback that will be invoked asynchronously, which means ReceiveAsync will complete asynchronously essentially all the time (unless there's an error).
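For completeness, the way to detect the occasional synchronous completion is the bool returned by ReceiveAsync: when it returns false, the operation already finished inline and the Completed event will not be raised, so the result has to be processed directly. A minimal sketch, reusing the question's target socket and receiveBuffer (the ProcessReceive helper is illustrative, not from the original code):

SocketAsyncEventArgs args = new SocketAsyncEventArgs();
args.SetBuffer(receiveBuffer, 0, receiveBuffer.Length);
args.Completed += (sender, e) => ProcessReceive(e); // raised only for pending operations

if (!target.ReceiveAsync(args))
{
    // Completed synchronously: the Completed event will NOT fire,
    // so handle the result right here.
    ProcessReceive(args);
}

// Illustrative helper, not part of the original code.
void ProcessReceive(SocketAsyncEventArgs e)
{
    if (e.SocketError == SocketError.Success && e.BytesTransferred > 0)
    {
        // consume e.Buffer from e.Offset for e.BytesTransferred bytes
    }
}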
Sorry for the late reply :)
I fixed my problem using the Socket.Available property.
Do
    If MySocket.Available > 0 Then
        bytes = MySocket.Receive(bytesReceived, bytesReceived.Length, 0)
        textFrom &= Encoding.ASCII.GetString(bytesReceived, 0, bytes)
    Else
        Exit Do
    End If
Loop While bytes = bytesReceived.Length
I am using NetworkStream with TcpClient.
First I setup my tcp client:
tcp = new TcpClient(AddressFamily.InterNetwork)
{ NoDelay = true, ReceiveTimeout = 5000 };
My main data-receiving loop:
while (true)
{
    // read available data from the device
    int numBytesRead = await ReadAsync();
    Console.WriteLine($"{numBytesRead} bytes read"); // BP2
}
And the actual TCP data reading:
public Task<int> ReadAsync()
{
    var stream = tcp.GetStream();
    return stream.ReadAsync(InBuffer, 0, InBuffer.Length); // BP1
}
I have this connected to a testbed which lets me send manual packets. Through setting breakpoints and debugging I have checked that stream.ReadTimeout takes the value 5000 from tcp.
If I send data frequently it all works as expected. But if I don't send any data, nothing appears to happen after 5 seconds: no timeout. I see breakpoint BP1 being hit in the debugger, but until I send data from my testbed, BP2 is not hit. I can leave it a minute or more and it just seems to sit waiting, yet it receives data sent after a minute, which appears to be incorrect behavior. After 5 seconds something should happen, surely (an exception, as I understand it)?
It's late, so I'm expecting something really basic, but can anyone see what my mistake is and suggest a resolution?
Addendum
OK, so when I RTFM for the actual .NET version I'm using (how many times have I been caught out by MS defaulting to .NET Core 3? I did say it was late), I see in the remarks section for ReadTimeout:
This property affects only synchronous reads performed by calling the Read method. This property does not affect asynchronous reads performed by calling the BeginRead method.
I'm unclear now whether I can use modern awaitable calls at all to read socket data safely and with a timeout. It's working except for the timeout, but I'm not sure how to get one, given that NetworkStream offers no ReadAsync override that takes a timeout. Must I do some ugly hack, or is there a simple solution?
In my case, 5000 ms is the longest I can expect to go without receiving data before concluding there is a problem; the protocol has no ping mechanism, so if nothing appears I assume the connection is dead. Hence the thought that an async read with a 5000 ms timeout would be nice and neat.
Timeout values for network objects apply only to synchronous operations. For example, from the documentation:
This option applies to synchronous Receive calls only.
For Socket.ReceiveTimeout, TcpClient.ReceiveTimeout, and NetworkStream.ReadTimeout, the implementations all ultimately result in a call to SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, ...) which in turn is effectively calling the native setsockopt() function. From that documentation:
SO_RCVTIMEO (DWORD): Sets the timeout, in milliseconds, for blocking receive calls.
(emphasis mine)
It's this limitation in the underlying native API that is the reason for the same limitation in the managed API. Timeout values will not apply to asynchronous I/O on the network objects.
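To make the distinction concrete, here is a small sketch (an already-connected TcpClient named client is assumed; using directives for System.Net.Sockets and System.Threading.Tasks are omitted):

static async Task<int> DemonstrateTimeouts(TcpClient client)
{
    NetworkStream stream = client.GetStream();
    stream.ReadTimeout = 5000; // milliseconds
    byte[] buffer = new byte[1024];

    // Synchronous read: throws an IOException after roughly 5 seconds
    // if no data arrives.
    // int n = stream.Read(buffer, 0, buffer.Length);

    // Asynchronous read: ReadTimeout is ignored; this completes only when
    // data arrives or the connection is closed/reset.
    return await stream.ReadAsync(buffer, 0, buffer.Length);
}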
You will need to implement the timeout yourself, by closing the socket if and when the timeout should occur. For example:
async Task<int> ReadAsync(TcpClient client, byte[] buffer, int index, int length, TimeSpan timeout)
{
    Task<int> result = client.GetStream().ReadAsync(buffer, index, length);

    // Wait for either the read or the timeout, whichever completes first.
    await Task.WhenAny(result, Task.Delay(timeout));

    if (!result.IsCompleted)
    {
        // Timed out: closing the socket is the only reliable way to abort the read.
        // The pending read then faults, and the await below rethrows that fault.
        client.Close();
    }

    return await result;
}
Other variations on this theme can be found in other related questions:
NetworkStream.ReadAsync with a cancellation token never cancels
Cancel C# 4.5 TcpClient ReadAsync by timeout
Closing the socket is really all that you can do. Even for synchronous operations, if a timeout occurs the socket would no longer be usable. There is no reliable way to interrupt a read operation and expect the socket to remain consistent.
Of course, you do have the option of prompting the user before closing the socket. However, if you were to do that, you would implement the timeout at a higher level in your application architecture, such that the I/O operations themselves have no awareness of timeouts at all.
Is this bad programming?
DateTime dtExpire = DateTime.Now.AddSeconds(90);
while (client.Connected && DateTime.Now < dtExpire)
{
    if (client.Available == 0) continue;
    // or: if (!networkStream.DataAvailable) continue;

    dtExpire = DateTime.Now.AddSeconds(30);
    // now do stuff with client via stream
}
The goal is to ensure that the client does not take more time than the server is willing to wait to process incoming messages. Of course, this code is inside a Try/Catch block, as well as a Using block for the stream, so the server would gracefully handle dropped connections or any other socket exceptions.
Basically, I just want to know if there's a better way to handle this. Thanks.
Use the ReceiveTimeout property to specify how long to wait for an incoming message. When you use the Receive method (or its family of methods) and a timeout occurs, a SocketException will be thrown. Note that the property is expressed in milliseconds:
client.ReceiveTimeout = 90000; // 90 seconds
Your code will be more complex if you have to accomplish this asynchronously, but it doesn't look like you need to. Receive by itself should do the job, as it blocks the current thread.
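A rough sketch of that blocking pattern, assuming client is a connected Socket (if it is a TcpClient, the same ReceiveTimeout property exists and the read would go through GetStream().Read instead); on timeout the SocketException's SocketErrorCode is SocketError.TimedOut:

client.ReceiveTimeout = 90000; // 90 seconds, in milliseconds
byte[] buffer = new byte[4096];
try
{
    int received = client.Receive(buffer, buffer.Length, SocketFlags.None);
    if (received == 0)
    {
        // The remote side closed the connection.
    }
    // ...process buffer[0..received)
}
catch (SocketException ex) when (ex.SocketErrorCode == SocketError.TimedOut)
{
    // Nothing arrived within 90 seconds; treat the peer as gone.
}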
This is called busy waiting.
You are essentially clogging the CPU even when there is no "real" work to be done (i.e. when you are just waiting for client.Available to become non-zero). Fortunately, your busy waiting has a timeout, so at least it won't clog the CPU forever.
Whether you can do it more efficiently really only depends on what the client is and whether it implements a more efficient waiting strategy.
If it doesn't then you'll be stuck with some form of busy waiting, but not all is lost - if you can tolerate a slight delay in detecting the change in client.Available, then doing...
if (client.Available == 0)
{
    Thread.Sleep(max_delay_you_can_tolerate);
    continue;
}
...would go a long way toward taking the pressure off the CPU.
--- EDIT ---
If client is in fact a Socket, take a look at Blocking and ReceiveTimeout properties.
I have a network project; there is no timer in it, just a TcpClient that connects to a server and listens for any data from the network.
TcpClient _TcpClient = new TcpClient(_IpAddress, _Port);
_ConnectThread = new Thread(new ThreadStart(ConnectToServer));
_ConnectThread.IsBackground = true;
_ConnectThread.Start();
private void ConnectToServer()
{
    try
    {
        NetworkStream _NetworkStream = _TcpClient.GetStream();
        byte[] _RecievedPack = new byte[1024 * 1000];
        string _Message = string.Empty;
        int _BytesRead;
        int _Length;
        while (_Flage)
        {
            _BytesRead = _NetworkStream.Read(_RecievedPack, 0, _RecievedPack.Length);
            _Length = BitConverter.ToInt32(_RecievedPack, 0);
            _Message = UTF8Encoding.UTF8.GetString(_RecievedPack, 4, _Length);
            if (_BytesRead != 0)
            {
                // call a function to manage the data
                _NetworkStream.Flush();
            }
        }
    }
    catch (Exception exp)
    {
        // call a function to alarm that connection is false
    }
}
But after a while the CPU usage of my application goes up (90%, 85%, ...), even if no data is received.
Could anybody give me some tips about CPU usage? I'm totally blank; I don't know which part of the project I should check!
Could anybody give me some tips about CPU usage
You should consider checking the loops in the application, like while loops: if you spend time spinning while waiting for some condition to become true, it will consume a lot of CPU time. For instance:
while (true)
{}
or
while (_Flag)
{
//do something
}
If the code executed inside the while loop is synchronous, the thread will end up eating a lot of CPU cycles. To solve this you could execute the work on a different thread, so it runs asynchronously, and then use a ManualResetEvent or AutoResetEvent to report back when the operation has completed (a rough sketch of that idea follows the snippet below). Another thing to consider is the System.Threading.Thread.Sleep method, which tells the thread to sleep and gives the CPU time to execute other threads, for example:
while (_Flag)
{
    // do something
    Thread.Sleep(100); // blocks the current thread for 100 milliseconds
}
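And a rough sketch of the event-based idea mentioned above (System.Threading and System.Collections.Concurrent assumed; the names are illustrative, not from the original code):

private readonly AutoResetEvent _dataReady = new AutoResetEvent(false);
private readonly ConcurrentQueue<byte[]> _queue = new ConcurrentQueue<byte[]>();

// Producer side (e.g. the network read callback):
private void OnDataReceived(byte[] packet)
{
    _queue.Enqueue(packet);
    _dataReady.Set(); // wake the consumer
}

// Consumer loop: uses no CPU while it waits for the signal.
private void ConsumeLoop()
{
    while (_Flag)
    {
        _dataReady.WaitOne(); // blocks until Set() is called
        byte[] packet;
        while (_queue.TryDequeue(out packet))
        {
            // ...process packet
        }
    }
}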
There are several issues with your code... the most important ones are IMHO:
Use async methods (BeginRead etc.), not blocking methods, and don't create your own thread. Threads are "expensive" resources, and using blocking calls on dedicated threads is therefore a waste of resources. Using async calls lets the operating system call you back when an event (data received, for instance) occurs, so no separate thread is needed (the callback runs on a pooled thread).
Be aware that Read may return just a few bytes; it doesn't have to fill the _RecievedPack buffer. Theoretically it may deliver just one or two bytes, not even enough for your call to ToInt32! (See the sketch below for one way to handle this.)
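For illustration, one common way to cope with partial reads for a length-prefixed message is to keep reading until the expected number of bytes has arrived. This is only a sketch; the helper names and the 4-byte length framing are assumed from the question's use of ToInt32, not taken from the original code (using directives for System, System.IO and System.Net.Sockets omitted):

static byte[] ReadMessage(NetworkStream stream)
{
    byte[] header = ReadExactly(stream, 4);  // 4-byte length prefix
    int length = BitConverter.ToInt32(header, 0);
    return ReadExactly(stream, length);      // the payload
}

// Keeps calling Read until 'count' bytes have arrived (or the peer closes).
static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new IOException("Connection closed before the full message arrived.");
        offset += read;
    }
    return buffer;
}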
The CPU usage spikes because you have a while loop which does nothing if it does not receive anything from the network. Add Thread.Sleep() at the end of it when no data was received, and your CPU usage will return to normal.
And take the advice that Lucero gave you.
I suspect that the other end of the connection is closed while the while loop is still running, in which case you'll repeatedly read zero bytes from the network stream (which marks the connection as closed; see NetworkStream.Read on MSDN).
Since NetworkStream.Read will then return immediately (as per MSDN), you'll be stuck in a tight while loop that consumes a lot of processor time. Try adding a Thread.Sleep() or detecting a "zero read" within the loop. Ideally you should handle a read of zero bytes by terminating your end of the connection, too.
while (_Flage)
{
    _BytesRead = _NetworkStream.Read(_RecievedPack, 0, _RecievedPack.Length);
    _Length = BitConverter.ToInt32(_RecievedPack, 0);
    _Message = UTF8Encoding.UTF8.GetString(_RecievedPack, 4, _Length);
    if (_BytesRead != 0)
    {
        // call a function to manage the data
        _NetworkStream.Flush();
    }
}
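A minimal corrected version of that loop might look like the following; the key change is checking the return value of Read before using the buffer and leaving the loop on a zero-byte read. (Partial messages are a separate issue, covered in the other answer, and are not handled here.)

while (_Flage)
{
    _BytesRead = _NetworkStream.Read(_RecievedPack, 0, _RecievedPack.Length);
    if (_BytesRead == 0)
    {
        // The remote end closed the connection; stop reading.
        break;
    }
    if (_BytesRead >= 4)
    {
        _Length = BitConverter.ToInt32(_RecievedPack, 0);
        _Message = UTF8Encoding.UTF8.GetString(_RecievedPack, 4, _Length);
        // ...call a function to manage the data
    }
}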
Have you attached a debugger and stepped through the code to see whether it's behaving the way you expect?
Alternatively, if you have a profiling tool available (such as ANTS), it will help you see where time is being spent in your application.
I need to read from NetworkStream which would send data randomly and the size of data packets also keep varying. I am implementing a multi-threaded application where each thread would have its own stream to read from. If there is no data on the stream, the application should keep waiting for the data to arrive. However, if the server is done sending data and has terminated the session, then it should exit out.
Initially I had utilised the Read method to obtain the data from the stream, but it used to block the thread and kept waiting until data appeared on the stream.
The documentation on MSDN suggests,
If no data is available for reading, the Read method returns 0. If the remote host shuts down the connection, and all available data has been received, the Read method completes immediately and returns zero bytes.
But in my case, I have never got the Read method to return 0 and exit gracefully. It just waits indefinitely.
In my further investigation, I came across BeginRead, which watches the stream and invokes a callback method asynchronously as soon as it receives data. I have tried to look at various implementations using this approach as well; however, I was unable to identify when using BeginRead would be beneficial as opposed to Read.
As I look at it, BeginRead just has the advantage of the call being asynchronous, so it does not block the current thread. But in my application, I already have a separate thread to read and process the data from the stream, so that wouldn't make much difference for me.
Can anyone please help me understand the wait-and-exit mechanism for BeginRead and how it differs from Read?
What would be the best way to implement the desired functionality?
I use BeginRead, but continue blocking the thread using a WaitHandle:
byte[] readBuffer = new byte[32];
var asyncReader = stream.BeginRead(readBuffer, 0, readBuffer.Length, null, null);
WaitHandle handle = asyncReader.AsyncWaitHandle;

// Give the reader 2 seconds to respond with a value
bool completed = handle.WaitOne(2000, false);
if (completed)
{
    int bytesRead = stream.EndRead(asyncReader);
    StringBuilder message = new StringBuilder();
    message.Append(Encoding.ASCII.GetString(readBuffer, 0, bytesRead));
}
Basically it allows the async read to time out using the WaitHandle, and gives you a boolean value (completed) indicating whether the read finished within the set time (2000 ms in this case).
Here's my full stream reading code copied and pasted from one of my Windows Mobile projects:
private static bool GetResponse(NetworkStream stream, out string response)
{
    byte[] readBuffer = new byte[32];
    var asyncReader = stream.BeginRead(readBuffer, 0, readBuffer.Length, null, null);
    WaitHandle handle = asyncReader.AsyncWaitHandle;

    // Give the reader 2 seconds to respond with a value
    bool completed = handle.WaitOne(2000, false);
    if (completed)
    {
        int bytesRead = stream.EndRead(asyncReader);
        StringBuilder message = new StringBuilder();
        message.Append(Encoding.ASCII.GetString(readBuffer, 0, bytesRead));
        if (bytesRead == readBuffer.Length)
        {
            // There's possibly more than 32 bytes to read, so get the next
            // section of the response
            string continuedResponse;
            if (GetResponse(stream, out continuedResponse))
            {
                message.Append(continuedResponse);
            }
        }
        response = message.ToString();
        return true;
    }
    else
    {
        int bytesRead = stream.EndRead(asyncReader);
        if (bytesRead == 0)
        {
            // 0 bytes were returned, so the read has finished
            response = string.Empty;
            return true;
        }
        else
        {
            throw new TimeoutException(
                "The device failed to read in an appropriate amount of time.");
        }
    }
}
Async I/O can be used to achieve the same amount of I/O in fewer threads.
As you note, right now your app has one thread per Stream. This is OK with small numbers of connections, but what if you need to support 10000 at once? With async I/O, this is no longer necessary because the read completion callback allows context to be passed identifying the relevant stream. Your reads no longer block, so you don't need one thread per Stream.
Whether you use sync or async I/O, there is a way to detect and handle stream closedown on the relevant API return codes. BeginRead should fail with IOException if the socket has already been closed. A closedown while your async read is pending will trigger a callback, and EndRead will then tell you the state of play.
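A rough sketch of that callback pattern, with an assumed per-connection state class carrying the stream and buffer (none of these names come from the original post):

// Assumed per-connection state; not from the original post.
class ConnectionState
{
    public NetworkStream Stream;
    public byte[] Buffer = new byte[4096];
}

void StartRead(ConnectionState state)
{
    // No thread blocks here; the callback runs on a pool thread when data arrives.
    state.Stream.BeginRead(state.Buffer, 0, state.Buffer.Length, OnRead, state);
}

void OnRead(IAsyncResult ar)
{
    var state = (ConnectionState)ar.AsyncState;
    int bytesRead = state.Stream.EndRead(ar);
    if (bytesRead == 0)
    {
        // The remote side shut down the connection; clean up this stream.
        state.Stream.Close();
        return;
    }
    // ...process state.Buffer[0..bytesRead), then queue the next read.
    StartRead(state);
}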
When your application calls BeginRead, the system will wait until data is received or an error occurs, and then the system will use a separate thread to execute the specified callback method, and blocks on EndRead until the provided NetworkStream reads data or throws an exception.
Did you try server.ReceiveTimeout? You can set the time the Read() function will wait for incoming data before timing out (on timeout it throws an exception rather than returning zero). In your case, this property is probably left at its default, which means an infinite wait.
BeginRead is an asynchronous operation, which means your main thread starts the read while the work continues on another thread. So now we have two things running in parallel. If you want to get the result, you have to call EndRead, which returns the result.
Some pseudocode:
BeginRead()
// ...do something on the main thread while the result is being fetched on another thread
var result = EndRead();
But if your main thread doesn't have anything else to do and you simply need the result, you should just call Read.
While attempting to send messages from a queue, the BeginSend call seems to behave as a blocking call.
Specifically, I have:
public void Send(MyMessage message)
{
    lock (SEND_LOCK)
    {
        var state = ...
        try
        {
            log.Info("Begin Sending...");
            socket.BeginSend(message.AsBytes(), 0, message.ByteLength, SocketFlags.None,
                (r) => EndSend(r), state);
            log.Info("Begin Send Complete.");
        }
        catch (SocketException e)
        {
            ...
        }
    }
}
The callback would be something like this:
private void EndSend(IAsyncResult result)
{
    log.Info("EndSend: Ending send.");
    var state = (MySendState) result.AsyncState;
    ...
    state.Socket.EndSend(result, out code);
    log.Info("EndSend: Send ended.");
    WaitUntilNewMessageInQueue();
    SendNextMessage();
}
Most of the time this works fine, but sometimes it hangs. Logging indicates this happens when BeginSend and EndSend are executed on the same thread. WaitUntilNewMessageInQueue blocks until there is a new message in the queue, so when there is no new message it can wait quite a while.
As far as I can tell this should not really be a problem, but in some cases BeginSend blocks, causing a deadlock where EndSend is blocking on WaitUntilNewMessageInQueue (expected), while Send is blocking on BeginSend in return, as it seems to be waiting for the EndSend callback to return (not expected).
This behaviour was not what I was expecting. Why does BeginSend sometimes block if the callback does not return in a timely fashion?
First of all, why are you locking in your Send method? The lock will be released before the send is complete since you are using BeginSend. The result is that multiple sends can be executing at the same time.
Secondly, do not write (r) => EndSend(r), just write EndSend (without any parameters).
Third: you do not need to include the socket in your state. Your EndSend method works like any other instance method, so you can access the socket field directly.
As for your deadlocks, it's hard to tell. Your delegate may have something to do with it (optimizations by the compiler/runtime), but I have no knowledge in that area.
Need more help? Post more code. But I suggest that you fix the issues above first and try again.
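For what it's worth, one way to address both points (not holding a lock across BeginSend and never blocking inside the completion callback) is to keep an explicit outgoing queue and only start the next BeginSend when the previous one has completed. The following is only a sketch of that idea, not the original poster's code, and it assumes each BeginSend transmits the whole buffer (a robust version would check the count returned by EndSend):

// Sketch: a simple serialized sender. Send() never blocks waiting for the
// queue, and the completion callback never waits for new messages.
class QueuedSender
{
    private readonly Socket socket;
    private readonly Queue<byte[]> pending = new Queue<byte[]>();
    private bool sending;

    public QueuedSender(Socket socket) { this.socket = socket; }

    public void Send(byte[] data)
    {
        lock (pending)
        {
            pending.Enqueue(data);
            if (sending) return; // a send is already in flight; it will pick this up
            sending = true;
        }
        SendNext();
    }

    private void SendNext()
    {
        byte[] data;
        lock (pending)
        {
            if (pending.Count == 0) { sending = false; return; }
            data = pending.Dequeue();
        }
        socket.BeginSend(data, 0, data.Length, SocketFlags.None, OnSent, null);
    }

    private void OnSent(IAsyncResult ar)
    {
        socket.EndSend(ar);
        SendNext(); // chain to the next queued message, if any
    }
}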
Which operating system are you running on?
Are you sure you're seeing what you think you're seeing?
The notes on the MSDN page say that Send() CAN block if there's no OS buffer space to initiate your async send, unless you have put the socket in non-blocking mode. Could that be the case? Are you potentially sending data very quickly and filling the TCP window to the peer? If you break into the debugger, what does the call stack show?
The rest is speculation based on my understanding of the underlying native technologies involved...
The notes for Send() are likely wrong about I/O being cancelled if the thread exits; this almost certainly depends on the underlying OS, as it's a low-level I/O completion port/overlapped I/O issue that changed with Windows Vista (see here: http://www.lenholgate.com/blog/2008/02/major-vista-overlapped-io-change.html). Given that they're wrong about that, they could also be wrong about how the completions (calls to EndSend()) are dispatched on later operating systems. From Vista onwards it's possible that the completions could be dispatched on the issuing thread if the .Net sockets wrapper enables the correct options on the socket (see here where I talk about FILE_SKIP_COMPLETION_PORT_ON_SUCCESS)... However, if that were the case, you'd likely see this behaviour a lot, as initially most sends are likely to complete 'in line', so you'd see most completions happening on the same thread. I'm pretty sure that this is NOT the case and that .Net does NOT enable this option without asking...
This is how you check if it completed synchronously so you avoid the callback on another thread.
For a single send:
var result = socket.BeginSend(...);
if (result.CompletedSynchronously)
{
    socket.EndSend(result);
}
For a queue of multiple sends, you can just loop and finalize all synchronous sends:
while (true)
{
    var result = socket.BeginSend(...);
    if (!result.CompletedSynchronously)
    {
        break;
    }
    socket.EndSend(result);
}