Wait for server before receiving stream - C#

I'm trying to implement a client-server socket system based on this MSDN article, and I have it working. The following works fine when the server returns a string immediately:
client.send();
client.receive();
The problem is that if my send method requests something that takes the server a few minutes to process, such as creating a PDF version of a file, the receive call executes straight away and receives nothing (the server hasn't sent anything yet because it's still processing the PDF).
How can I make the client wait for a certain period of time before executing the receive method so that it's called once the server has finished processing and has sent the file?

This seems to be the difference between a blocking and a non-blocking receive call. A blocking receive call waits until it actually has something to receive, or until it times out. A non-blocking receive call returns right away whether data is present or not. I don't know which kind your call is, but I know C# has both.
The link you gave was to an asynchronous socket example, which is generally different from what you are trying to do. What you are trying to do is closer to a synchronous style.
Asynchronous in terms of sockets usually means you would register a function to be called when data was received. Synchronous means to poll (explicitly ask for data) in either a blocking or non-blocking manner.
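As a minimal sketch of the blocking, synchronous style (the server, port, and delay here are stand-ins running on loopback, not from the MSDN article): `Socket.Receive` simply blocks until the server finally sends something, and `ReceiveTimeout` guards against waiting forever.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

public class BlockingReceiveDemo
{
    public static void Main()
    {
        // Toy server on loopback: waits a second (simulating slow PDF
        // generation), then replies.
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;
        Task.Run(() =>
        {
            using var server = listener.AcceptTcpClient();
            Thread.Sleep(1000); // simulated processing time
            var bytes = Encoding.ASCII.GetBytes("done");
            server.GetStream().Write(bytes, 0, bytes.Length);
        });

        using var client = new TcpClient();
        client.Connect(IPAddress.Loopback, port);
        // A blocking Receive waits until the server sends something;
        // ReceiveTimeout makes it throw a SocketException instead of
        // waiting forever.
        client.Client.ReceiveTimeout = 10000; // milliseconds
        var buffer = new byte[16];
        int n = client.Client.Receive(buffer); // blocks ~1 s, then returns
        Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, n)); // prints "done"
    }
}
```

The key point is that no polling loop is needed: the call itself does the waiting.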
EDIT:
You would send your data and set a class variable saying you have sent something and are expecting to receive something. Then wait for that variable to be cleared saying you've received something.
sent = 1;
client.send();
while (sent) ;  // busy-wait until the receive callback clears the flag
Then in your receive callback when you actually get something you would set that variable.
/* receive data and process */
sent = 0;
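The same send-then-wait pattern can be done without burning a CPU core in a busy loop by blocking on an event instead of a flag. A sketch, with the server's reply simulated by a delayed task (all names here are illustrative, not from the MSDN sample):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class SendAndWaitDemo
{
    // Signaled by the receive callback when the reply arrives.
    static readonly ManualResetEventSlim replyReceived = new ManualResetEventSlim(false);

    public static void Main()
    {
        // Stand-in for client.send(): the "server" replies after 500 ms.
        Task.Run(() =>
        {
            Thread.Sleep(500);   // simulated processing time
            /* receive data and process */
            replyReceived.Set(); // wake the waiting sender
        });

        // Instead of `while (sent);`, block on the event with a timeout.
        bool gotReply = replyReceived.Wait(TimeSpan.FromSeconds(10));
        Console.WriteLine(gotReply ? "reply received" : "timed out");
    }
}
```

The event does the same job as the `sent` flag, but the waiting thread sleeps until it is signaled rather than spinning.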

Use async and await. The code after the await will resume once the awaited call completes.
http://msdn.microsoft.com/en-us/library/vstudio/hh156513.aspx
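A sketch of what that looks like with sockets (the loopback server and its delay are stand-ins I added for illustration): the `await` on `ReadAsync` suspends the method without blocking a thread, and execution resumes only when the server's reply actually arrives.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

public class AwaitReceiveDemo
{
    public static async Task Main()
    {
        // Toy loopback server that replies after a short delay.
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;
        var serverTask = Task.Run(async () =>
        {
            using var conn = await listener.AcceptTcpClientAsync();
            await Task.Delay(500); // simulated processing
            var bytes = Encoding.ASCII.GetBytes("pdf-ready");
            await conn.GetStream().WriteAsync(bytes, 0, bytes.Length);
        });

        using var client = new TcpClient();
        await client.ConnectAsync(IPAddress.Loopback, port);
        var buffer = new byte[32];
        // Suspends here without blocking the thread; resumes when data arrives.
        int n = await client.GetStream().ReadAsync(buffer, 0, buffer.Length);
        Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, n)); // prints "pdf-ready"
        await serverTask;
    }
}
```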

Related

NetworkStream.Write asynchronous issue

I have a problem with writing to a NetworkStream in C#. From MSDN I read:
The Write method blocks until the requested number of bytes is sent or
a SocketException is thrown
Well - in my case, it behaves like an asynchronous method. Thread is not being blocked.
Here is a code sample, to enlighten situation a bit:
TcpClient tcpcl = new TcpClient("192.168.1.128", 1337);
NetworkStream netst = tcpcl.GetStream();
byte[] will_send = File.ReadAllBytes(@"large_file_120_MB.mp4");
Console.WriteLine("Starting transmission...");
netst.Write(will_send, 0, will_send.Length);
Console.WriteLine("File has been sent !");
(... later instructions ...)
Result from console after 1 second of execution:
Starting transmission...
File has been sent !
Second message shows immediately. Later instructions are being executed.
Meanwhile the server still receives the file, and on its side everything works well. It gets better: if I kill the sending program during transmission, receiving won't stop. The debugger clearly shows that the app has ended entirely, yet a few more megabytes will still be transmitted until receiving stops completely.
So my question: is there a way to block the main thread until the Write method has actually finished?
The MSDN description should perhaps be better read as
The Write method blocks until the requested number of bytes is
written to the local network buffer or a SocketException is thrown
i.e. the write returns before the entire file has been successfully received at the other end.
This also means that when you close your application, anything currently in the network buffer may continue to be sent.
The only way to block the main thread until the entire file has been successfully received is to use asynchronous sockets and, once the send is complete, wait until some sort of confirmation is sent by the receiving end, which you would have to implement yourself.
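A sketch of such an application-level confirmation, with both ends on loopback so it is self-contained (the 1 MB payload and the one-byte ACK are my invention, not part of any protocol): the sender blocks on `ReadByte` until the receiver has actually read everything.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

public class AckDemo
{
    public static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        // Receiver: drain the whole payload, then send a 1-byte ACK.
        var receiver = Task.Run(() =>
        {
            using var conn = listener.AcceptTcpClient();
            var stream = conn.GetStream();
            var buf = new byte[8192];
            int total = 0;
            while (total < 1_000_000)
                total += stream.Read(buf, 0, buf.Length);
            stream.WriteByte(1); // application-level "got it all"
        });

        using var client = new TcpClient();
        client.Connect(IPAddress.Loopback, port);
        var net = client.GetStream();
        net.Write(new byte[1_000_000], 0, 1_000_000); // returns once buffered locally
        // Block here until the receiver confirms it has read everything.
        int ack = net.ReadByte();
        Console.WriteLine(ack == 1 ? "delivered" : "no ack");
        receiver.Wait();
    }
}
```

Without the ACK round-trip, `Write` returning tells you only that the bytes reached the local network buffer.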

Creating an async/await API wrapping a request/response protocol

I have a request/response protocol that runs over TCP that I'd like to provide an async/await API for. The protocol is STOMP, which is a fairly simple text-based protocol that runs over TCP or SSL. In STOMP, the client sends one of six or so command frames and specifies a receipt ID in the header of the command. The server will respond with either a RECEIPT or ERROR frame, with a receipt-id field, so the client can match the response with the original request. The server can also send a MESSAGE frame at any time (STOMP is fundamentally a messaging protocol) which will not contain a receipt-id.
To allow multiple outstanding requests and handle any MESSAGE frames, the plan is to always have a Socket.BeginReceive() outstanding. So what I was thinking is that the easiest implementation would be to create a waitable event (like a mutex), store that event in a table, send the command request with the receipt set to the index into the table, and block on the event. When socket.BeginReceive() fires the function can get the receipt-id from the message, look up the event in the table, and signal it (and store some state, like success or error). This will wake up the calling function, which can look at the result and return success or failure to the calling application.
Does this sound fundamentally correct? I've used async/await APIs before but have never written my own. If the approach is OK, what kind of waitable event should I use? A simple Monitor.Wait() will block, but not in the way I want, correct? If I wrap the whole thing in Task.Run(), will that behave properly with Monitor.Wait()? Or is there a newer synchronization construct I should be using instead? I'm basically implementing HttpClient.GetAsync(); does anyone know how that works under the covers?
HttpClient is much simpler, because HTTP only has one response for each request. There's no such thing as an unsolicited server message in HTTP.
To properly set up a "stream" of events like this, it's best to use TPL Dataflow or Rx. Otherwise, you'd have to create an unbounded receive buffer and have repeated async ReceiveMessage calls.
So I'd recommend using a TPL Dataflow pipeline to create a source block of "messages", and then matching some up with requests (using TaskCompletionSource to notify the sender it's complete) and exposing the rest (MESSAGE frames) as a source block.
Internally, your processing pipeline would look like this:
Repeated BeginReceive ->
TransformBlock for message framing ->
ActionBlock to match response messages to requests.
BufferBlock for MESSAGE frames.
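The request/response matching step can be sketched with a table of TaskCompletionSource entries keyed by receipt-id; the sender awaits the task, and the receive loop completes it when the matching frame arrives. All the names here (ReceiptTable, Register, Complete) are hypothetical, not from any STOMP library:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class ReceiptTable
{
    // Outstanding requests, keyed by receipt-id.
    readonly ConcurrentDictionary<string, TaskCompletionSource<string>> pending
        = new ConcurrentDictionary<string, TaskCompletionSource<string>>();
    int nextId;

    // Called by the sender: registers a receipt and returns a task that
    // completes when the matching RECEIPT/ERROR frame arrives.
    public (string ReceiptId, Task<string> Reply) Register()
    {
        string id = Interlocked.Increment(ref nextId).ToString();
        var tcs = new TaskCompletionSource<string>(
            TaskCreationOptions.RunContinuationsAsynchronously);
        pending[id] = tcs;
        return (id, tcs.Task);
    }

    // Called by the receive loop when a frame carrying a receipt-id arrives.
    // Frames without one (MESSAGE) would flow to the source block instead.
    public void Complete(string receiptId, string frame)
    {
        if (pending.TryRemove(receiptId, out var tcs))
            tcs.TrySetResult(frame);
    }
}

public class Demo
{
    public static async Task Main()
    {
        var table = new ReceiptTable();
        var (id, reply) = table.Register();
        // Simulated receive loop delivering the server's response:
        table.Complete(id, "RECEIPT");
        Console.WriteLine(await reply); // prints "RECEIPT"
    }
}
```

Awaiting the TaskCompletionSource gives the blocking-style call the question asks for without tying up a thread, which is why it is usually preferred over Monitor.Wait() here.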

Does it make sense to queue send operations when using Socket.SendAsync?

I am using .NET async send method (SendAsync) from Socket class. Do I need to queue send operations in order to send the payload over the wire one by one after the previous transmission finishes?
I've noticed that SendAsync will happily accept any bytes I throw at it, regardless of whether the previous send has finished or not. The protocol I am using deals with out-of-order messages.
Does the Windows socket stack already do queuing internally?
The Socket class should do this internally. If you check the documented return value of SendAsync:
Returns true if the I/O operation is pending.
Returns false if the I/O operation completed synchronously.
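A sketch of handling both outcomes of that return value, using a loopback socket pair so it is self-contained (the buffer size and endpoints are arbitrary): when SendAsync returns false the Completed event is not raised, so the synchronous path must be handled inline.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

public class SendAsyncDemo
{
    public static void Main()
    {
        // Loopback socket pair so the example runs on its own.
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        var client = new Socket(AddressFamily.InterNetwork,
                                SocketType.Stream, ProtocolType.Tcp);
        client.Connect((IPEndPoint)listener.LocalEndpoint);
        using var server = listener.AcceptTcpClient();

        var done = new ManualResetEventSlim(false);
        var args = new SocketAsyncEventArgs();
        args.SetBuffer(new byte[1024], 0, 1024);
        // Raised only when the operation completes asynchronously.
        args.Completed += (s, e) => done.Set();

        if (client.SendAsync(args))
            done.Wait();  // true: pending; wait for the Completed callback
        // false: completed synchronously; Completed will NOT be raised

        Console.WriteLine("send complete, error = " + args.SocketError);
        client.Close();
    }
}
```

Note that either way the data has only been handed to the local stack; as discussed above, delivery confirmation is a separate, application-level concern.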

ObjectDisposedException when using Multiple Asynchronous Clients to Multiple Servers

I've been looking into the Asynchronous Client and Asynchronous Server Socket examples on MSDN and have happily punched up the example, which works flawlessly when one client connects to one server. My problem is that I need to synchronise a chunk of work across a number of machines so they execute at about the same time (within a millisecond or so of each other). The action is reasonably simple: talk to the child servers (all running on the same machine but on different ports for initial testing), simulate its processing, and send a 'Ready' signal back to the caller. Once all the servers have returned this flag (or a time-out occurs), a second message is passed from the client to the acknowledged servers telling them to execute.
My approach so far has been to create two client instances, stored within a list, and start the routine by looping through the list. This works well but not particularly fast, as each client's routine runs synchronously. To speed up the process, I created a new thread for each client and executed the routine on that. This does work, allowing two or more servers to return and synchronise appropriately. Unfortunately, it is very error-prone, and the code fails with an 'ObjectDisposedException' on the following line of the 'ReceiveCallback' method...
// Read data from the remote device.
int bytesRead = client.EndReceive(ar);
With some investigation and debugging I tracked the sockets being passed to the routine (using their handles) and found that it is always the second socket to return that fails (it is no longer connected), never the first, which successfully reads its response. In addition, these socket instances (judging by their handle values) appear to be separate instances, yet the second (and subsequent) responses continue to error out on this line.
What is causing these sockets to be disposed of before being legitimately processed? As they are running in separate threads and there are no shared routines, is the first socket being inappropriately used by the other instances? Tbh, I feel a bit lost at sea, and while I could band-aid over these errors, unreliable code that potentially loses returning acknowledgements is not a favourable goal. Any pointers?
Kind regards
It turns out the shared/static ManualResetEvent was being set across the different instances, so thread 1 would set the ManualResetEvent and dispose the socket on the second thread. By ensuring that no methods or properties were shared/static, each thread and socket executes within its own scope.
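A sketch of the fix (the class and timings are mine, standing in for the MSDN client): the event is an instance field, so one client's Set() can never wake, or tear down, another client's wait.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class AsyncClient
{
    // Per-instance event: one client's Set() cannot affect another client.
    // (The bug was a *static* ManualResetEvent shared by every instance.)
    readonly ManualResetEvent receiveDone = new ManualResetEvent(false);
    public string Result;

    public void BeginWork(string reply, int delayMs)
    {
        Task.Run(() =>
        {
            Thread.Sleep(delayMs); // simulated server round-trip
            Result = reply;
            receiveDone.Set();     // signals only this instance
        });
    }

    public void WaitDone() => receiveDone.WaitOne();
}

public class Demo
{
    public static void Main()
    {
        var a = new AsyncClient();
        var b = new AsyncClient();
        a.BeginWork("ready-A", 100);
        b.BeginWork("ready-B", 300);
        a.WaitDone();
        b.WaitDone();
        Console.WriteLine(a.Result + " " + b.Result); // prints "ready-A ready-B"
    }
}
```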

Callbacks using asynchronous sockets

My question is quite simple and concerns asynchronous sockets using the TCP protocol.
When I send some data with the "BeginSend" method, when will the callback be called?
Will it be called as soon as the data is sent out to the network, or only when we are assured the data has reached its destination (as the TCP specification would suggest)?
Thanks for your answers.
KiTe.
ps : I'm sorry if my english is a bit bad ^^.
From MSDN:
"When your application calls BeginSend, the system will use a separate thread to execute the specified callback method, and will block on EndSend until the Socket sends the number of bytes requested or throws an exception."
"The successful completion of a send does not indicate that the data was successfully delivered. If no buffer space is available within the transport system to hold the data to be transmitted, send will block unless the socket has been placed in nonblocking mode."
http://msdn.microsoft.com/en-us/library/38dxf7kt.aspx
When the callback is called you can be sure that the data has been cleared from the output buffer (the asynchronous operation uses a separate thread to ensure that your calling thread is not blocked in case there is no room in the transmit buffer and it has to wait to send the data) and that it will reach its destination, but not that it has reached it yet.
Because of the TCP protocol's nature however, you can be sure (well, I guess almost sure) that it will get to the destination, eventually.
However, for timing purposes you should not consider the time of the callback as being the same as the time the data reaches the other party.
