Does it make sense to queue send operations when using Socket.SendAsync? - C#

I am using the .NET async send method (SendAsync) from the Socket class. Do I need to queue send operations so that each payload is sent over the wire only after the previous transmission has finished?
I've noticed that SendAsync will happily accept any bytes I throw at it, without complaining about whether or not the previous send has finished. The protocol I am using can deal with out-of-order messages.
Does the Windows socket stack already do queuing internally?

The Socket class should do this internally. If you check the return value of SendAsync, the documentation says:
Returns true if the I/O operation is pending.
Returns false if the I/O operation completed synchronously.
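For illustration, a minimal sketch of checking that return value with the SocketAsyncEventArgs-based SendAsync; the socket, payload, and method name here are assumptions for the example, not from the question:

using System;
using System.Net.Sockets;

// Sketch only: assumes an already connected Socket and a payload buffer.
static void SendWithCheck(Socket socket, byte[] payload)
{
    var args = new SocketAsyncEventArgs();
    args.SetBuffer(payload, 0, payload.Length);
    args.Completed += (sender, e) =>
    {
        // Raised only when SendAsync returned true (the operation was pending).
        Console.WriteLine("Completed asynchronously: {0} bytes", e.BytesTransferred);
    };

    if (!socket.SendAsync(args))
    {
        // false: the operation completed synchronously and Completed will NOT fire,
        // so handle the result right here.
        Console.WriteLine("Completed synchronously: {0} bytes", args.BytesTransferred);
    }
}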

Wait for server before receiving stream

I'm trying to implement a client-server socket system based on this MSDN article, and I have it working. If I do the following, it works fine when the server returns a string immediately:
client.send();
client.receive();
The problem is if my send method requests something that takes the server a few minutes to process, such as creating a PDF version of a file, the receive call executes straight after and receives nothing (because the server hasn't sent anything as it's still processing the PDF).
How can I make the client wait for a certain period of time before executing the receive method so that it's called once the server has finished processing and has sent the file?
This seems to be the difference between a blocking and non-blocking receive call. A blocking receive call would wait until it actually had something to receive or it would timeout. A non-blocking receive call would return right away whether data is present or not. I don't know what call this is but I know C# has both types of calls.
The link you gave was to an asynchronous socket example, which is generally different from what you are trying to do. What you are trying to do is more similar to a synchronous style.
Asynchronous in terms of sockets usually means you would register a function to be called when data was received. Synchronous means to poll (explicitly ask for data) in either a blocking or non-blocking manner.
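As a rough illustration, a blocking receive with a timeout might look like this, assuming the connected Socket from the MSDN example is called client; the 5-minute figure is arbitrary:

// Sketch only: a synchronous (blocking) receive with a timeout.
client.ReceiveTimeout = (int)TimeSpan.FromMinutes(5).TotalMilliseconds;
byte[] buffer = new byte[8192];
int received = client.Receive(buffer); // blocks until data arrives, or throws a SocketException on timeout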
EDIT:
You would send your data and set a class variable saying you have sent something and are expecting to receive something. Then wait for that variable to be cleared, which means you've received something.
sent = 1
client.send()
while(sent);
Then in your receive callback when you actually get something you would set that variable.
/* receive data and process */
sent = 0;
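A minimal sketch of the same idea in C#, using a ManualResetEventSlim instead of busy-waiting on the flag; the class and method names are made up for illustration:

using System;
using System.Threading;

// Sketch only: the sender calls BeginRequest, sends, then WaitForResponse;
// the receive callback calls CompleteRequest once the response is processed.
class RequestGate
{
    private readonly ManualResetEventSlim _responseReceived = new ManualResetEventSlim(false);

    // Call just before sending the request.
    public void BeginRequest()
    {
        _responseReceived.Reset();
    }

    // Call from the receive callback once the response has been processed.
    public void CompleteRequest()
    {
        _responseReceived.Set();
    }

    // Blocks the caller until the response arrives or the timeout expires.
    public bool WaitForResponse(TimeSpan timeout)
    {
        return _responseReceived.Wait(timeout);
    }
}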
Use async and await. The code after the await will run once the asynchronous call completes.
http://msdn.microsoft.com/en-us/library/vstudio/hh156513.aspx

Creating an async/await API wrapping a request/response protocol

I have a request/response protocol that runs over TCP that I'd like to provide an async/await API for. The protocol is STOMP, which is a fairly simple text-based protocol that runs over TCP or SSL. In STOMP, the client sends one of six or so command frames and specifies a receipt ID in the header of the command. The server will respond with either a RECEIPT or ERROR frame, with a receipt-id field, so the client can match the response with the original request. The server can also send a MESSAGE frame at any time (STOMP is fundamentally a messaging protocol) which will not contain a receipt-id.
To allow multiple outstanding requests and handle any MESSAGE frames, the plan is to always have a Socket.BeginReceive() outstanding. So what I was thinking is that the easiest implementation would be to create a waitable event (like a mutex), store that event in a table, send the command request with the receipt set to the index into the table, and block on the event. When socket.BeginReceive() fires the function can get the receipt-id from the message, look up the event in the table, and signal it (and store some state, like success or error). This will wake up the calling function, which can look at the result and return success or failure to the calling application.
Does this sound fundamentally correct? I've used async/await APIs before but have never written my own. If it's OK what kind of waitable event should I use? A simple Monitor.Wait() will block but not in the way I want, correct? If I wrap the whole thing in Task.Run() will that behave properly with Monitor.Wait()? Or is there a new synchronization construct that I should be using instead? I'm basically implementing HttpClient.GetAsync(), does anyone know how that works under the covers?
HttpClient is much simpler, because HTTP only has one response for each request. There's no such thing as an unsolicited server message in HTTP.
To properly set up a "stream" of events like this, it's best to use TPL Dataflow or Rx. Otherwise, you'd have to create an unbounded receive buffer and have repeated async ReceiveMessage calls.
So I'd recommend using a TPL Dataflow pipeline to create a source block of "messages", and then matching some up with requests (using TaskCompletionSource to notify the sender it's complete) and exposing the rest (MESSAGE frames) as a source block.
Internally, your processing pipeline would look like this:
Repeated BeginReceive ->
TransformBlock for message framing ->
ActionBlock to match response messages to requests.
BufferBlock for MESSAGE frames.
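For illustration, the receipt-matching piece (what the ActionBlock would call) might be sketched like this, assuming a Frame type produced by the framing block; every name here is illustrative rather than an existing API:

using System.Collections.Concurrent;
using System.Threading.Tasks;

// Placeholder for a parsed STOMP frame produced by the framing TransformBlock.
class Frame
{
    public string Command { get; set; }
    public string ReceiptId { get; set; }
    public string Body { get; set; }
}

// Sketch only: matches RECEIPT/ERROR frames to outstanding requests by receipt id.
class ReceiptTracker
{
    private readonly ConcurrentDictionary<string, TaskCompletionSource<Frame>> _pending =
        new ConcurrentDictionary<string, TaskCompletionSource<Frame>>();

    // Sender side: register the receipt id before sending, then await the returned task.
    public Task<Frame> RegisterAsync(string receiptId)
    {
        var tcs = new TaskCompletionSource<Frame>();
        _pending[receiptId] = tcs;
        return tcs.Task;
    }

    // Receive pipeline side: complete the matching task when a RECEIPT or ERROR frame arrives.
    public void Complete(string receiptId, Frame response)
    {
        TaskCompletionSource<Frame> tcs;
        if (_pending.TryRemove(receiptId, out tcs))
        {
            tcs.TrySetResult(response);
        }
    }
}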

SendEmailAsync benefits doubtful

I read this on the MSDN documentation, which seems to imply that I will still need to wait after calling the SendAsync method in my code, which is pasted below. Is this right? If it is, then I might as well just use the synchronous Send method rather than SendAsync. My goal was to move on to the next email message in my loop and send it without waiting for the previous one to finish, which would let me get through the emailMessages collection more quickly than with the Send method. But that doesn't seem to be the case.
After calling SendAsync, you must wait for the e-mail transmission to complete before attempting to send another e-mail message using Send or SendAsync.
I am using C# and .NET Framework 4.5. In my code, I am trying to send multiple emails from within a loop using the SendAsync method, as in the code below.
List<EmailMessage> emailMessages = DAL.GetEmailsToBeSent();
SmtpClient client = new SmtpClient();
foreach (EmailMessage emailMessage in emailMessages)
{
    // create a MailMessage from the emailMessage object and then send it asynchronously
    client.SendAsync(message, null); // SendAsync takes a userToken as its second argument
    //client.Send(message);
}
The advantage of the async method over the non-async alternative is that you don't need to block the current thread. This is particularly helpful in UI environments where you don't want to be blocking the UI thread, and also prevents the need for blocking a thread pool thread.
If you're just going to do a blocking wait on the results, it has no advantage over the non-async alternative.
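For comparison, a minimal sketch using the Task-based SendMailAsync added in .NET 4.5 (the method and parameter names here are illustrative): awaiting each send keeps the messages sequential, because SmtpClient allows only one outstanding send at a time, but it does not block the calling thread while each message is in transit.

using System.Collections.Generic;
using System.Net.Mail;
using System.Threading.Tasks;

// Sketch only: sends the messages one after another without blocking the caller.
static async Task SendAllAsync(IEnumerable<MailMessage> messages)
{
    using (var client = new SmtpClient())
    {
        foreach (var message in messages)
        {
            await client.SendMailAsync(message);
        }
    }
}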

Sequential access to asynchronous sockets

I have a server that has several clients C1...Cn to each of which there is a TCP connection established. There are less than 10,000 clients.
The message protocol is request/response based, where the server sends a request to a client and then the client sends a response.
The server has several threads, T1...Tm, and each of these may send requests to any of the clients. I want to make sure that only one of these threads can send a request to a specific client at any one time, while the other threads wanting to send a request to the same client will have to wait.
I do not want to block threads from sending requests to different clients at the same time.
E.g. If T1 is sending a request to C3, another thread T2 should not be able to send anything to C3 until T1 has received its response.
I was thinking of using a simple lock statement on the socket:
lock (c3Socket)
{
    // Send request to C3
    // Get response from C3
}
I am using asynchronous sockets, so I may have to use Monitor instead:
Monitor.Enter(c3Socket); // Before calling .BeginReceive()
And
Monitor.Exit(c3Socket); // In .EndReceive
I am worried about stuff going wrong and not letting go of the monitor and therefore blocking all access to a client. I'm thinking that my heartbeat thread could use Monitor.TryEnter() with a timeout and throw out sockets that it cannot get the monitor for.
Would it make sense for me to make the Begin and End calls synchronous in order to be able to use the lock() statement? I know that I would be sacrificing concurrency for simplicity in this case, but it may be worth it.
Am I overlooking anything here? Any input appreciated.
My answer here would be a state machine per socket. The states would be free and busy:
If socket is free, the sender thread would mark it busy and start sending to client and waiting for response.
You might want to setup a timeout on that wait just in case a client gets stuck somehow.
If the state is busy - the thread sleeps, waiting for signal.
When that client-related timeout expires - close the socket, the client is dead.
When a response is successfully received/parsed, mark the socket free again and signal/wakeup the waiting threads.
Only lock around socket state inquiry and manipulation, not the actual network IO. That means a lock per socket, plus some sort of wait primitive such as a condition variable (in .NET, Monitor.Wait/Pulse can serve that role).
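A minimal sketch of that per-socket free/busy gate, using Monitor.Wait/Pulse as the wait primitive; the class and member names are illustrative:

using System;
using System.Threading;

// Sketch only: one gate object per client socket. Only the state is touched
// under the lock; the actual send/receive happens outside it.
class ClientGate
{
    private readonly object _sync = new object();
    private bool _busy;

    // Returns true once the caller owns the request/response cycle for this client,
    // or false if the timeout expired while waiting.
    public bool TryAcquire(TimeSpan timeout)
    {
        lock (_sync)
        {
            DateTime deadline = DateTime.UtcNow + timeout;
            while (_busy)
            {
                TimeSpan remaining = deadline - DateTime.UtcNow;
                if (remaining <= TimeSpan.Zero || !Monitor.Wait(_sync, remaining))
                {
                    return false; // timed out waiting for the socket to become free
                }
            }
            _busy = true; // mark busy
            return true;
        }
    }

    // Call after the response has been received (or the request has failed).
    public void Release()
    {
        lock (_sync)
        {
            _busy = false;        // mark free again
            Monitor.Pulse(_sync); // wake one waiting sender thread
        }
    }
}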
Hope this helps.
You certainly can't use the locking approach that you've described. Since your system is primarily asynchronous, you can't know what thread operations will be running on. This means that you may call Exit on the wrong thread (and have a SynchronizationLockException thrown), or some other thread may call Enter and succeed even though that client is "in use", just because it happened to get the same thread that Enter was originally called on.
I'd agree with Nikolai that you need to hold some additional state alongside each socket to determine whether it is currently in use or not. You would of course need locking to update this shared state.

Callbacks using asynchronous sockets

My question is quite simple and is about asynchronous sockets working with the TCP protocol.
When I send some data with the "BeginSend" method, when will the callback be called?
Will it be called when the data has just been sent out to the network, or only once we are sure that the data has reached its destination (as the TCP specification would suggest)?
Thanks for your answers.
KiTe.
ps : I'm sorry if my english is a bit bad ^^.
From MSDN:
"When your application calls BeginSend, the system will use a separate thread to execute the specified callback method, and will block on EndSend until the Socket sends the number of bytes requested or throws an exception."
"The successful completion of a send does not indicate that the data was successfully delivered. If no buffer space is available within the transport system to hold the data to be transmitted, send will block unless the socket has been placed in nonblocking mode."
http://msdn.microsoft.com/en-us/library/38dxf7kt.aspx
When the callback is called you can be sure that the data has been cleared from the output buffer (the asynchronous operation uses a separate thread to ensure that your calling thread is not blocked in case there is no room in the transmit buffer and it has to wait to send the data) and that it will reach its destination - but not that it has reached it yet.
Because of the TCP protocol's nature however, you can be sure (well, I guess almost sure) that it will get to the destination, eventually.
However, for timing purposes you should not consider the time of the callback as being the same as the time the data reaches the other party.
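As a small illustration (assuming an already connected Socket), the callback passed to BeginSend fires when EndSend can report how many bytes were handed to the transport, not when the peer has acknowledged them:

using System;
using System.Net.Sockets;
using System.Text;

// Sketch only: the method name and the "hello" payload are just for the example.
static void SendHello(Socket socket)
{
    byte[] data = Encoding.UTF8.GetBytes("hello");
    socket.BeginSend(data, 0, data.Length, SocketFlags.None, ar =>
    {
        int bytesQueued = socket.EndSend(ar);
        // The data has left this application's buffer; TCP will keep retransmitting
        // until the peer acknowledges it, but this callback doesn't tell us when that happens.
    }, null);
}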
