Socket ReceiveAsync reuse SocketAsyncEventArgs - c#

I have a piece of network equipment to which I connect once using sockets, and the connection stays open the whole time until the application closes.
I have a class in C# that encapsulates the communication. There is a SendMessage method for the equipment, and I need to use Socket.ReceiveAsync to get the response.
Let's say there are three methods, GetEqValA(), GetEqValB() and GetEqValC(), each of which calls SendMessage with a specific message for the equipment.
I have created only one instance of SocketAsyncEventArgs, like this:
_completeArgs = new SocketAsyncEventArgs();
_completeArgs.SetBuffer(buffer, 0, buffer.Length);
_completeArgs.UserToken = _mySocket;
_completeArgs.RemoteEndPoint = _mySocket.RemoteEndPoint;
_completeArgs.Completed += new EventHandler<SocketAsyncEventArgs>(DataAvailable);
_mySocket.ReceiveAsync(_completeArgs);
Now, the DataAvailable method has something similar to the code below:
for (int i = 0; i < e.BytesTransferred; i++)
{
_tcpData.Add(e.Buffer[i]);
}
if (_tcpData.Count == _expectedTcpDataCount)
{
_expectedTcpDataCount = -1;
ProcessData();
// I don't want to put here, because it will wait for data until
// someone sends a message and the equipment responds with data
//_mySocket.ReceiveAsync(e);
}
else
{
_mySocket.ReceiveAsync(e);
}
Now, the three methods above can be called by anyone, even from different threads. I do have a locking mechanism for that.
My problem is that if I reuse _completeArgs in SendMessage for the next message to send, I get an exception saying that this SocketAsyncEventArgs object is already in use by an asynchronous operation, whereas if I do the same thing in DataAvailable (not directly, but by reusing the SocketAsyncEventArgs e parameter), no problem occurs.
_mySocket.ReceiveAsync(_completeArgs);
_mySocket.Send(pMessage);
The idea is that I don't want a ReceiveAsync pending all the time, when I know that nothing will arrive; instead I want to call ReceiveAsync right before sending a message to the device, because then I know a response will come.
The exception appears in GetEqValC() if I call the methods one after another in the sequence A, B, C.
What am I not understanding? Can you help me? Can I do what I want to do?
I use .NET 3.5.
P.S. Summary: I need to keep the connection alive, but read from it only when I know for sure there must be something there. There will be only one call at a time: one send, followed by one receive!
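For illustration, here is a minimal sketch of that send-then-receive pattern, assuming .NET 3.5, the single _completeArgs from above, and a hypothetical _receivePending flag and _sendLock object (neither exists in the original code). The flag is just one way to avoid posting a second ReceiveAsync while one is still outstanding, which is what triggers the "already in use" exception:
private int _receivePending;                        // hypothetical: 0 = no receive posted, 1 = receive outstanding
private readonly object _sendLock = new object();   // hypothetical: one send/receive pair at a time

public void SendMessage(byte[] pMessage, int expectedCount)
{
    lock (_sendLock)
    {
        _expectedTcpDataCount = expectedCount;

        // Post the receive only if the previous one has already completed;
        // reusing _completeArgs while a receive is still outstanding throws
        // the "already in use by an asynchronous operation" exception.
        if (Interlocked.CompareExchange(ref _receivePending, 1, 0) == 0)
        {
            if (!_mySocket.ReceiveAsync(_completeArgs))
                DataAvailable(_mySocket, _completeArgs); // completed synchronously, no Completed event
        }

        _mySocket.Send(pMessage);
    }
}

// In DataAvailable, in the branch where the full response has been processed
// (i.e. where ReceiveAsync is NOT re-posted), clear the flag so the next
// SendMessage is allowed to post a new receive:
//     Interlocked.Exchange(ref _receivePending, 0);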

Related

ZeroMq recv not blocking

I'm attempting to learn ZeroMq for a project at work, although my background is in C#. In the simplest of tests I have an issue where the socket.recv(...) call blocks for the first received message, but after that it throws an exception because the amount of data received is -1.
Currently my 'server' is:
zmq::context_t context(1);
zmq::socket_t socket(context, ZMQ_REP);
socket.bind("tcp://127.0.0.1:5555");
while (true)
{
zmq::message_t message;
if (socket.recv(&message))
{
auto str = std::string(static_cast<char*>(message.data()), message.size());
printf("Receieved: %s\n", str.c_str());
}
}
This is basically from following the first example server within the ZeroMq documentation.
I'm pushing one piece of data from a C# 'client' using this code:
using (var context = new ZContext())
using (var requester = new ZSocket(context, ZSocketType.REQ))
{
requester.Connect(@"tcp://127.0.0.1:5555");
requester.Send(new ZFrame(@"hello"));
requester.Disconnect(@"tcp://127.0.0.1:5555");
}
Now I start the server, then start the client. I correctly receive the first message and I am correctly able to print this.
But now when I hit socket.recv(&message) again the code won't block but will instead throw an exception because the underlying zmq_msg_recv(...) returns a value of -1.
I'm unsure why this is occurring, I cannot see why it is expecting another message as I know that there is nothing else on this port. The only thing I came across is calling zmq_msg_close(...) but this should be called as part of the message_t destructor, which I have confirmed.
Is there anything I'm doing wrong in terms of the socket setup, or in how I'm using it, that would cause the recv(...) call to stop blocking?
Your problem is that you cannot receive 2 requests in a row with the REQ-REP pattern.
In the Request-Reply Pattern each request demands a reply. Your client needs to block until it receives a reply to its first request. Also, your server needs to reply to the requests before it services a new request.
Here is a quote referring to your exact issue from the guide.
The REQ-REP socket pair is in lockstep. The client issues zmq_send()
and then zmq_recv(), in a loop (or once if that's all it needs). Doing
any other sequence (e.g., sending two messages in a row) will result
in a return code of -1 from the send or recv call. Similarly, the
service issues zmq_recv() and then zmq_send() in that order, as often
as it needs to.
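As an illustration only, here is what that lockstep looks like on the client side in C#, using the same ZContext/ZSocket/ZFrame binding as the question; ReceiveFrame and ReadString are assumed to be that binding's blocking receive and string helpers:
// Sketch: one request, then block for its reply before doing anything else with the socket.
using (var context = new ZContext())
using (var requester = new ZSocket(context, ZSocketType.REQ))
{
    requester.Connect("tcp://127.0.0.1:5555");

    requester.Send(new ZFrame("hello"));            // zmq_send
    using (ZFrame reply = requester.ReceiveFrame()) // zmq_recv: must come before the next Send
    {
        Console.WriteLine(reply.ReadString());
    }
}
The server side mirrors this: after socket.recv(&message) succeeds, it must send a reply on the same socket before calling recv again.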

Sending data in order with SocketAsyncEventArgs

I originally had a race condition when sending data. The issue was that I was allowing multiple SocketAsyncEventArgs to be used to send data, but the first packet didn't finish sending before the second one did. This is because, if the data doesn't fit in the buffer, I loop until all of it is sent, and the first packet was much larger than the second, tiny one, so the second packet was sent and reached the client before the first packet.
I have solved this by assigning one SocketAsyncEventArgs to an open connection to be used for sending data, using a Semaphore to limit access to it, and having the SocketAsyncEventArgs call back once it has completed.
Now this works fine, because all the data is sent and it calls back when complete, ready for the next send. The issue is that it causes blocking when I want to send data to the open connection at arbitrary times, and when there is a lot of data being sent it is going to block my threads.
I am looking for a way around this. I thought of having a Queue: when data is requested to be sent, the packet is simply added to the Queue, and then one SocketAsyncEventArgs loops through sending that data.
But how can I do this efficiently while still being scalable? I want to avoid blocking as much as I can while still sending my packets in the order they were requested.
Appreciate any help!
If the data needs to be kept in order, and you don't want to block, then you need to add a queue. The way I do this is by tracking, on my state object, whether we already have an active send async-loop in progress for that connection. After enqueueing (which obviously must be synchronized), just check what is in progress:
public void PromptToSend(NetContext context)
{
if(Interlocked.CompareExchange(ref writerCount, 1, 0) == 0)
{ // then **we** are the writer
context.Handler.StartSending(this);
}
}
Here writerCount is the count of write-loops (which should be exactly 1 or 0) on the connection; if there aren't any, we start one.
My StartSending tries to read from that connection's queue; if it can do so, it does the usual SendAsync etc:
if (!connection.Socket.SendAsync(args)) SendCompleted(args);
(note that SendCompleted here is for the "sync" case; it would have got to SendCompleted via the event-model for the "async" case). SendCompleted repeats this "dequeue, try send async" step, obviously.
The only thing left is to make sure that when we try to dequeue, we note the lack of action if we find nothing more to do:
if (bufferedLength == 0)
{ // nothing to do; report this worker as inactive
Interlocked.Exchange(ref writerCount, 0);
return 0;
}
Make sense?
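To make the shape of this concrete, here is a rough, self-contained sketch of how those fragments could fit together; the queue, the _socket and _args fields, and the wiring of _args.Completed to SendCompleted are illustrative assumptions, not the code the fragments above were taken from:
private readonly Queue<byte[]> _sendQueue = new Queue<byte[]>();
private int writerCount;                 // 0 = no send loop active, 1 = a loop is running
private Socket _socket;                  // assumed: the connection's socket
private SocketAsyncEventArgs _args;      // assumed: one SAEA per connection, Completed += SendCompleted

public void EnqueueSend(byte[] packet)
{
    lock (_sendQueue) _sendQueue.Enqueue(packet);          // enqueue must be synchronized
    if (Interlocked.CompareExchange(ref writerCount, 1, 0) == 0)
        StartSending();                                    // then *we* are the writer
}

private void StartSending()
{
    byte[] next;
    lock (_sendQueue)
    {
        if (_sendQueue.Count == 0)
        {
            Interlocked.Exchange(ref writerCount, 0);      // nothing to do; worker goes inactive
            return;
        }
        next = _sendQueue.Dequeue();
    }
    _args.SetBuffer(next, 0, next.Length);
    if (!_socket.SendAsync(_args)) SendCompleted(_socket, _args); // synchronous completion
}

private void SendCompleted(object sender, SocketAsyncEventArgs e)
{
    StartSending();                                        // repeat the dequeue / try-send step
}
Because writerCount is reset inside the same lock that guards the queue, a packet enqueued at the moment the loop goes idle will always either be seen by the running loop or start a new one.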

C#: Asynchronous NamedPipeServerStream understanding

I was trying to find a good, clear example of an asynchronous NamedPipeServerStream and couldn't find any that suited me.
I want to have a named pipe server which asynchronously accepts messages from clients. The client is simple and works fine for me, but I can't find examples of the server, or I can't understand how it works.
As I understand it, I need to create a NamedPipeServerStream object. Let's do this:
namedPipeServerStream = new NamedPipeServerStream(PIPENAME, PipeDirection.In, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous, BUFFERSIZE, BUFFERSIZE);
Seems to work. But I don't know: do I have to use PipeSecurity or PipeAccessRule at all? My server will run as a Windows service under the local system account.
What next? I'm thinking I need to use BeginWaitForConnection for async connections. Let's see:
namedPipeServerStream.BeginWaitForConnection(WaitForConnectionAsyncCallback, <some strange thing>);
Question 1: What is this "some strange thing"? How do I use it?
Question 2: Should I do
while(true)
{
namedPipeServerStream.BeginWaitForConnection(WaitForConnectionAsyncCallback, <some strange thing>);
}
to make my server always wait for connections? Or do I need to do it some other way?
And then... Let's take a look into WaitForConnectionAsyncCallback function:
private void WaitForConnectionAsyncCallback(IAsyncResult result)
{
Console.WriteLine("Client connected.");
byte[] buff = new byte[BUFFERSIZE];
namedPipeServerStream.Read(buff, 0, namedPipeServerStream.InBufferSize);
string recStr = General.Iso88591Encoding.GetString(buff, 0, namedPipeServerStream.InBufferSize);
Console.WriteLine(" " + recStr);
namedPipeServerStream.EndWaitForConnection(result);
}
...This doesn't work, of course, because I don't know exactly how to receive a string from the stream. How do I? Right now it raises an InvalidOperationException:
Pipe hasn't been connected yet.
So how to organize asynchronous work with NamedPipeServerStream?
You tinker with the PipeSecurity to restrict access to the pipe, allowing only blessed programs to connect to your service. Put that on the back-burner until you've got this working and have performed a security analysis that shows that this kind of restriction is warranted.
The "some strange thing" is simply an arbitrary object that you can pass to the callback method. You don't often need it but it can be helpful if you write your callback so it serves multiple connections. In which case you need to know more about the specific pipe that got connected, the state argument allows you to pass that info. In your callback method, the result.AsyncState property gives you the reference back to that object. Only worry about that later, you'll find a use for it when you need it. Just pass null until then.
That's a bug. You must call EndWaitForConnection() first, before doing anything else with the pipe. Simply move it to the top of the method. You typically want to write it inside a try/catch so you can catch ObjectDisposedException, the exception that will be raised when you close the pipe before exiting your program.
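A rough sketch of the callback with that fix applied; the read logic stays close to the question's code, and the Disconnect/BeginWaitForConnection at the end is just one illustrative way to keep serving clients with a single pipe instance rather than the while(true) loop:
private void WaitForConnectionAsyncCallback(IAsyncResult result)
{
    try
    {
        // Finish the connect *first*, before touching the pipe in any other way.
        namedPipeServerStream.EndWaitForConnection(result);
    }
    catch (ObjectDisposedException)
    {
        return; // the pipe was closed while we were waiting; we're shutting down
    }

    Console.WriteLine("Client connected.");

    byte[] buff = new byte[BUFFERSIZE];
    int bytesRead = namedPipeServerStream.Read(buff, 0, buff.Length);
    string recStr = General.Iso88591Encoding.GetString(buff, 0, bytesRead);
    Console.WriteLine(" " + recStr);

    // Drop this client and go back to waiting for the next one, passing null as the state.
    namedPipeServerStream.Disconnect();
    namedPipeServerStream.BeginWaitForConnection(WaitForConnectionAsyncCallback, null);
}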

SocketAsyncEventArgs buffer is full of zeroes

I'm writing a message layer for my distributed system. I'm using IOCP, i.e. the Socket.XxxAsync methods.
Here's something pretty close to what I'm doing (in fact, my receive function is based on it):
http://vadmyst.blogspot.com/2008/05/sample-code-for-tcp-server-using.html
What I've found is that at the start of the program (two test servers talking to each other) I each time get a number of SAEA objects whose .Buffer is entirely filled with zeroes, yet whose .BytesTransferred equals the size of the buffer (1024 in my case).
What does this mean? Is there a special condition I need to check for? My system interprets this as an incomplete message and moves on, but I'm wondering if I'm actually missing some data. I was under the impression that if nothing was being received, you'd not get a callback. In any case, I can see in WireShark that there aren't any zero-length packets coming in.
I've found the following when I Googled it, but I'm not sure my problem is the same:
http://social.msdn.microsoft.com/Forums/en-US/ncl/thread/40fe397c-b1da-428e-a355-ee5a6b0b4d2c
http://go4answers.webhost4life.com/Example/socketasynceventargs-buffer-not-ready-121918.aspx
I am not sure what is going on in the linked example. It appears to be using asynchronous sockets in a synchronous way; I cannot see any callbacks or similar in the code. You may need to rethink whether you need synchronous or asynchronous sockets :).
The problem at hand stems from the possibility that your functions are trying to read/write the buffer before the network transmit/receive has completed. Try using the callback functionality included in the async Socket, e.g.:
// This goes into your accept function, to begin receiving the data
socketName.BeginReceive(yourbuffer, 0, yourbuffer.Length,
SocketFlags.None, new AsyncCallback(OnReceiveData), socketName);
// In your callback function you know that the socket has finished receiving data
// This callback will fire when the receive is complete.
private void OnReceiveData(IAsyncResult input) {
Socket inSocket = (Socket)input.AsyncState; // This is just a typecast
inSocket.EndReceive(input);
// Pull the data out of the socket as you already have before.
// state.Data.Write ......
}
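If you stay with SocketAsyncEventArgs instead of switching to BeginReceive, the equivalent check belongs in the Completed handler. A minimal sketch, with hypothetical ProcessBytes/CloseConnection helpers; the key point is that only the first BytesTransferred bytes of the buffer are meaningful for a given completion:
private void ReceiveCompleted(object sender, SocketAsyncEventArgs e)
{
    // A failed receive or a graceful close shows up here, not as a zero-filled buffer.
    if (e.SocketError != SocketError.Success || e.BytesTransferred == 0)
    {
        CloseConnection(e);                                   // hypothetical cleanup
        return;
    }

    // Only bytes [e.Offset, e.Offset + e.BytesTransferred) are valid for this receive;
    // the rest of e.Buffer still holds whatever it contained before.
    ProcessBytes(e.Buffer, e.Offset, e.BytesTransferred);     // hypothetical

    Socket socket = (Socket)e.UserToken;                      // assumes the socket was stored in UserToken
    if (!socket.ReceiveAsync(e)) ReceiveCompleted(socket, e); // re-post; handle synchronous completion
}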

Socket.SendAsync is not sending in-order on Mono/Linux

There is a single-threaded server using .NET Socket with the TCP protocol, and Socket.Poll(), Socket.Select(), Socket.Receive().
To send, I used:
public void SendPacket(int clientid, byte[] packet)
{
clients[clientid].socket.Send(packet);
}
But it was very slow when sending a lot of data to one client (halting the whole main thread), so I replaced it with this:
public void SendPacket(int clientid, byte[] packet)
{
using (SocketAsyncEventArgs e = new SocketAsyncEventArgs())
{
e.SetBuffer(packet, 0, packet.Length);
clients[clientid].socket.SendAsync(e);
}
}
It works fine on Windows with .NET (I don't know if it's perfect), but on Linux with Mono, packets are either dropped or reordered (I'm not sure which). Reverting to the slow version with Socket.Send() works on Linux. Source for the whole server.
How to write non-blocking SendPacket() function that works on Linux?
I'm going to take a guess that it has to do with your using statement and your SendAsync call. Perhaps e falls out of scope and is being disposed while SendAsync is still processing the buffer. But then this might throw an exception. I am really just taking a guess. Try removing the using statement and see what happens.
I would say: by not abusing the async method. You will find no documentation stating that it is actually guaranteed to maintain order. It queues items for a scheduler which distributes them to threads, and by ignoring the fact that order is not guaranteed by the documentation, you open yourself up to implementation details.
The best approach is to:
Have a queue per socket.
When you write data into this queue and there is no worker thread, start a work item (ThreadPool) to process the queue.
This way you have separate, distinct queues that maintain order. Only one thread will ever process one queue / socket.
I got the same problem; Linux and Windows do not react the same way with SendAsync. Sometimes Linux truncates the data, but there is a workaround. First of all, you need to use a queue. Each time you use SendAsync you have to check the callback.
If e.Offset + e.BytesTransferred < e.Buffer.Length, you just have to call e.SetBuffer(e.Offset + e.BytesTransferred, e.Buffer.Length - e.BytesTransferred - e.Offset); and call SendAsync again.
I don't know why Mono on Linux believes the send is complete before all the data has gone out; it's strange, but I'm sure it does.
Just like @mathieu, 10 years later, I can confirm that on Unity's Mono on Linux the completion callback is sometimes called without all bytes having been sent. For me it happened only with large packets.
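A sketch of that workaround inside the SendAsync completion callback; the names are illustrative, and it assumes the socket was stored in e.UserToken and that e.Completed is wired to this handler:
private void SendCompleted(object sender, SocketAsyncEventArgs e)
{
    Socket socket = (Socket)e.UserToken;            // assumed: socket stashed in UserToken

    // On Mono/Linux the completion can fire before the whole buffer has gone out:
    // if anything is left, move the window forward and send the remainder.
    int sent = e.Offset + e.BytesTransferred;
    if (e.SocketError == SocketError.Success && sent < e.Buffer.Length)
    {
        e.SetBuffer(sent, e.Buffer.Length - sent);
        if (!socket.SendAsync(e)) SendCompleted(socket, e);  // handle synchronous completion
        return;
    }

    // The whole packet is out; per the queue suggestion above, this is where the
    // next queued packet for this socket would be dequeued and sent.
}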
