Unlike the synchronous Accept, BeginAccept doesn't directly hand you a socket for the newly created connection. EndAccept does, but each BeginAccept call only accepts a single connection; so I concocted the following code to allow multiple 'clients' to connect to my server:
serverSocket.BeginAccept(AcceptCallback, serverSocket);
AcceptCallback code:
void AcceptCallback(IAsyncResult result)
{
    Socket server = (Socket)result.AsyncState;
    Socket client = server.EndAccept(result);
    // client socket logic...
    server.BeginAccept(AcceptCallback, server); // <- continue accepting connections
}
Is there a better way to do this? It seems to be a bit 'hacky', as it essentially loops the async calls recursively.
Perhaps there is an overhead to having multiple calls to async methods, such as multiple threads being created?
The way you are doing this is correct for asynchronous sockets. Personally, I would move your BeginAccept to right after you get the socket from the AsyncState. This allows you to accept additional connections right away; as it stands, the handling code runs before you are ready to accept another connection.
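A minimal sketch of that reordering (only the question's own code, rearranged): the next accept is posted right after EndAccept, before any client-handling work.
void AcceptCallback(IAsyncResult result)
{
    Socket server = (Socket)result.AsyncState;
    Socket client = server.EndAccept(result);

    server.BeginAccept(AcceptCallback, server); // post the next accept before doing any work

    // client socket logic...
}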
As Usr mentioned, I believe you could re-write the code to use await with tasks.
This is normal when you deal with callback-based async IO. And it is what makes it so awful to use!
Can you use C# await? That would simplify this to a simple while (true) { await accept(); } loop.
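For illustration, a hedged sketch of such a loop, assuming .NET 4.5+ and wrapping the APM pair with Task.Factory.FromAsync (HandleClientAsync is a hypothetical per-client handler, and the loop itself has to live inside an async method):
while (true)
{
    Socket client = await Task.Factory.FromAsync<Socket>(
        serverSocket.BeginAccept, serverSocket.EndAccept, null);

    var ignored = HandleClientAsync(client); // fire-and-forget, so the next accept isn't delayed
}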
Windows Store applications are frustrating, to say the least; just close enough to regular .NET to get you into trouble.
My issue is with Tasks, await, and Socket.ConnectAsync.
I've got the following code:
public async Task<string> Connect(string hostName, int portNumber)
{
    string result = string.Empty;

    // Create DnsEndPoint. The hostName and port are passed in to this method.
    DnsEndPoint hostEntry = new DnsEndPoint(hostName, portNumber);

    // Create a stream-based, TCP socket using the InterNetwork Address Family.
    _socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

    // Create a SocketAsyncEventArgs object to be used in the connection request
    SocketAsyncEventArgs socketEventArg = new SocketAsyncEventArgs();
    socketEventArg.RemoteEndPoint = hostEntry;

    // Inline event handler for the Completed event.
    // Note: This event handler was implemented inline in order to make this method self-contained.
    socketEventArg.Completed += new EventHandler<SocketAsyncEventArgs>(delegate (object s, SocketAsyncEventArgs e)
    {
        // Retrieve the result of this request
        result = e.SocketError.ToString();

        // Signal that the request is complete, unblocking the UI thread
        _clientDone.Set();
    });

    // Sets the state of the event to nonsignaled, causing threads to block
    _clientDone.Reset();

    // Make an asynchronous Connect request over the socket
    await _socket.ConnectAsync(socketEventArg);

    // Block the UI thread for a maximum of TIMEOUT_MILLISECONDS milliseconds.
    // If no response comes back within this time then proceed
    _clientDone.WaitOne(TIMEOUT_MILLISECONDS);

    return result;
}
I started adding async/await to the app to prevent UI issues. But when I went into this function and added the await to
await _socket.ConnectAsync(socketEventArg);
I get the error:
Error CS1929 'bool' does not contain a definition for 'GetAwaiter' and the best extension method overload 'WindowsRuntimeSystemExtensions.GetAwaiter(IAsyncAction)' requires a receiver of type 'IAsyncAction'
In looking at the docs for ConnectAsync it looks like ConnectAsync is supposed to support await...
Does it not support await?
No, ConnectAsync is not a TAP method, and thus cannot be used with await.
My #1 recommendation for anyone using raw sockets is "don't". If you can, use a REST API (with HttpClient) or a SignalR API. Raw sockets have tons of pitfalls.
If you must use raw sockets (i.e., the other side is using a custom TCP/IP protocol and you don't have the power to fix the situation), then the first thing to note is that the Socket class has three complete APIs all in one class.
The first is the deceptively simple-looking synchronous API (Connect), which I do not recommend for any production code. The second is the standard APM pattern (BeginConnect/EndConnect). The third is a specialized asynchronous pattern that is specific to the Socket class (ConnectAsync); this specialized API is much more complex to use than the standard asynchronous API, and is only necessary when you have chatty socket communication in a constrained environment, and need to reduce the object churn through the garbage collector.
Note that there is no await-compatible API. I haven't spoken to anyone at Microsoft about this, but my strong suspicion is that they simply thought the Socket class had too many members already (3 complete APIs; adding an await-compatible one would add a fourth complete API), and that's why it was skipped over when they added TAP-pattern (await-compatible) members to other types in the BCL.
The correct API to use - easily 99.999% of the time - is the APM one. You can create your own TAP wrappers (which work with await) by using TaskFactory.FromAsync. I like to do this with extension methods, like this:
// Extension methods have to live in a (non-generic) static class.
public static class SocketExtensions
{
    public static Task ConnectTaskAsync(this Socket socket, EndPoint remoteEP)
    {
        return Task.Factory.FromAsync(socket.BeginConnect, socket.EndConnect, remoteEP, null);
    }
}
You can then invoke it anywhere on a Socket, as such:
await _socket.ConnectTaskAsync(hostEntry);
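The same FromAsync technique extends to other Begin/End pairs. As a hedged sketch (not part of the original answer), a receive wrapper in the same static class could use the FromAsync overload that takes an already-started IAsyncResult, since BeginReceive has more parameters than the simpler overloads accept:
public static Task<int> ReceiveTaskAsync(this Socket socket, byte[] buffer, int offset, int count)
{
    return Task.Factory.FromAsync<int>(
        socket.BeginReceive(buffer, offset, count, SocketFlags.None, null, null),
        socket.EndReceive);
}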
I have a program that begins itself by listening for connections. I wanted to implement a pattern in which the server would accept a connection, pass that individual connection to a user class for processing: future packet reception, and handling of the data.
I ran into trouble with the synchronous pattern before I found out that asynchronous use of the Socket class isn't scary. But then I ran into more trouble. It seemed that, in a while (true) loop, since BeginAccept() is asynchronous, the program would constantly move through this loop and eventually run into an OutOfMemoryException. I needed something to listen for a connection, and immediately hand off responsibility of that connection to some other class.
So I read Microsoft's example and found out about ManualResetEvent. I could actually specify when I was ready for the loop to begin listening again! But after reading some questions here on Stack Overflow, I have become confused.
My worry is that even though I have asynchronously accepted a connection, the entire program will block while it's trying to listen for a new connection upon re-entering the loop. This isn't ideal if I'm handling multiple users.
I'm very new to the world of asynchronous I/O, so I would appreciate even the angriest of comments about my vocabulary or a misuse of a phrase.
Code:
static void Main(string[] args)
{
    MainSocket = new Socket(SocketType.Stream, ProtocolType.Tcp);
    MainSocket.Bind(new IPEndPoint(IPAddress.Parse("192.168.1.74"), 1626));
    MainSocket.Listen(10);

    while (true)
    {
        Ready.Reset();
        AcceptCallback = new AsyncCallback(ConnectionAccepted);
        MainSocket.BeginAccept(AcceptCallback, MainSocket);
        Ready.WaitOne();
    }
}

static void ConnectionAccepted(IAsyncResult IAr)
{
    Ready.Set();
    Connection UserConnection = new Connection(MainSocket.EndAccept(IAr));
}
The Microsoft example, in which they use the old-style WaitHandle based events, will work but frankly it is a very odd and awkward way to implement asynchronous code. I get the feeling that the events are there in the example mainly as a way of artificially synchronizing the main thread so it has something to do. But it's not really the right approach.
One option is to just not even accept sockets asynchronously. Instead, use the asynchronous I/O for when the socket is connected and use a synchronous loop in the main thread to accept sockets. This winds up being pretty much exactly what the Microsoft sample does anyway, but keeps all of the accept logic in the main thread instead of switching back and forth between the main thread (which starts the accept operation) and some IOCP thread that handles the completion.
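As a minimal sketch of that first option, reusing the question's MainSocket and Connection class and assuming Connection starts its own asynchronous receives:
while (true)
{
    // The main thread blocks here, but only on the accept itself;
    // each accepted socket is then serviced with asynchronous I/O.
    Connection userConnection = new Connection(MainSocket.Accept());
}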
Another option is to just give the main thread something else to do. For a simple example, this could be simply waiting for some user input to signal that the program should shut down. Of course, in a real program the main thread could be something useful (e.g. handling the message loop in a GUI program).
If the main thread is given something else to do, then you can use the asynchronous BeginAccept() in the way it was intended: you call the method to start the accept operation, and then don't call it again until that operation completes. The initial call happens when you initialize your server, but all subsequent calls happen in the completion callback.
In that case, your completion callback method looks more like this:
static void ConnectionAccepted(IAsyncResult IAr)
{
    Connection UserConnection = new Connection(MainSocket.EndAccept(IAr));
    MainSocket.BeginAccept(ConnectionAccepted, MainSocket);
}
That is, you simply call the BeginAccept() method in the completion callback itself. (Note that there's no need to create the AsyncCallback object explicitly; the compiler will implicitly convert the method name to the correct delegate type instance on your behalf).
Ok, I think I have understood the whole async/await thing. Whenever you await something, the function you're running returns, allowing the current thread to do something else while the async function completes. The advantage is that you don't start a new thread.
This is not that hard to understand, as it's somewhat how Node.js works, except Node uses a lot of callbacks to make this happen. This is where I fail to understand the advantage, however.
The Socket class doesn't currently have any async methods (that work with async/await). I can of course pass a socket to the stream class and use the async methods there; however, this leaves a problem with accepting new sockets.
There are two ways of doing this, as far as I know. In both cases I accept new sockets in an infinite loop on the main thread. In the first case I can start a new task for every socket that I accept, and run the stream.ReceiveAsync within that task. However, won't an await actually block that task, since the task will have nothing else to do? That would again result in more threads spawned on the thread pool, which is no better than using synchronous methods inside a task.
My second option is to put all accepted sockets in one of several lists (one list per thread), and inside those threads run a loop, running await stream.ReceiveAsync for every socket. This way, whenever I hit an await on stream.ReceiveAsync for one socket, the loop can move on and start receiving from all the other sockets.
I guess my real question is if this is in any way more effective than a threadpool, and in the first case, if it really will be worse than just using the APM methods.
I also know you can wrap APM methods into functions using await/async, but the way I see it, you still get the "disadvantage" of APM methods, with the extra overhead of state machines in async/await.
The async socket API is not based around Task[<T>], so it isn't directly usable from async/await - but you can bridge that fairly easily - for example (completely untested):
public class AsyncSocketWrapper : IDisposable
{
    public void Dispose()
    {
        var tmp = socket;
        socket = null;
        if (tmp != null) tmp.Dispose();
    }

    public AsyncSocketWrapper(Socket socket)
    {
        this.socket = socket;
        args = new SocketAsyncEventArgs();
        args.Completed += args_Completed;
    }

    void args_Completed(object sender, SocketAsyncEventArgs e)
    {
        // might want to switch on e.LastOperation
        var source = (TaskCompletionSource<int>)e.UserToken;
        if (ShouldSetResult(source, args)) source.TrySetResult(args.BytesTransferred);
    }

    private Socket socket;
    private readonly SocketAsyncEventArgs args;

    public Task<int> ReceiveAsync(byte[] buffer, int offset, int count)
    {
        TaskCompletionSource<int> source = new TaskCompletionSource<int>();
        try
        {
            args.SetBuffer(buffer, offset, count);
            args.UserToken = source;
            if (!socket.ReceiveAsync(args))
            {
                if (ShouldSetResult(source, args))
                {
                    return Task.FromResult(args.BytesTransferred);
                }
            }
        }
        catch (Exception ex)
        {
            source.TrySetException(ex);
        }
        return source.Task;
    }

    static bool ShouldSetResult<T>(TaskCompletionSource<T> source, SocketAsyncEventArgs args)
    {
        if (args.SocketError == SocketError.Success) return true;
        var ex = new InvalidOperationException(args.SocketError.ToString());
        source.TrySetException(ex);
        return false;
    }
}
Note: you should probably avoid running the receives in a loop - I would advise making each socket responsible for pumping itself as it receives data. The only thing you need a loop for is to periodically sweep for zombies, since not all socket deaths are detectable.
Note also that the raw async socket API is perfectly usable without Task[<T>] - I use that extensively. While await may have uses here, it is not essential.
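As an untested usage sketch of the wrapper above, in the spirit of each socket pumping itself (processData is a hypothetical callback for whatever the application does with the bytes):
static async Task PumpAsync(AsyncSocketWrapper wrapper, Action<byte[], int> processData)
{
    var buffer = new byte[4096];
    while (true)
    {
        // Only one receive is pending at a time, so reusing the wrapper's single
        // SocketAsyncEventArgs instance is safe here.
        int bytes = await wrapper.ReceiveAsync(buffer, 0, buffer.Length);
        if (bytes == 0) break; // the remote side closed the connection
        processData(buffer, bytes);
    }
}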
This is not that hard to understand, as it's somewhat how Node.js works, except Node uses a lot of callbacks to make this happen. This is where I fail to understand the advantage, however.
Node.js does use callbacks, but it has one other significant facet that really simplifies those callbacks: they are all serialized to the same thread. So when you're looking at asynchronous callbacks in .NET, you're usually dealing with multithreading as well as asynchronous programming (except for EAP-style callbacks).
Asynchronous programming using callbacks is called "continuation-passing style" (CPS). It's the only real option for Node.js but is one of many options on .NET. In particular, CPS code can get extremely complex and difficult to maintain, so the async/await compiler transform was introduced so you could write "normal-looking" code and the compiler would translate it to CPS for you.
In both cases I accept new sockets in an infinite loop on the main thread.
If you're writing a server, then yes, somewhere you will be repeatedly accepting new client connections. Also, you should be continuously reading from each connected socket, so each socket also has a loop.
In the first case I can start a new task for every socket that I accept, and run the stream.ReceiveAsync within that task.
You wouldn't need a new task. That's the whole point of asynchronous programming.
My second option is to put all accepted sockets in one of several lists (one list per thread), and inside those threads run a loop, running await stream.ReceiveAsync for every socket.
I'm not sure why you'd need multiple threads, or any dedicated threads at all.
You seem a bit confused on how async and await work. I recommend reading my own introduction, the MSDN overview, the Task-Based Asynchronous Pattern guidance, and the async FAQ, in that order.
I also know you can wrap APM methods into functions using await/async, but the way I see it, you still get the "disadvantage" of APM methods, with the extra overhead of state machines in async/await.
I'm not sure what disadvantage you're referring to. The overhead of state machines, while non-zero, is negligible in the face of socket I/O.
If you're looking to do socket I/O, you have several options. For reads, you can either do them in an "infinite" loop using APM or Task wrappers around the APM or Async methods. Alternatively, you could convert them into a stream-like abstraction using Rx or TPL Dataflow.
Another option is a library I wrote a few years ago called Nito.Async. It provides EAP-style (event-based) sockets that handle all the thread marshaling for you, so you end up with something simpler like Node.js. Of course, like Node.js, this simplicity means it won't scale as well as a more complex solution.
I am a little bit confused about what the async approach achieves. I encountered it when looking up how to make a server accept multiple connections. What confuses me when looking up what async does in C# exactly is that, from what I can tell, it is not its own thread. However, it also allows you to avoid locking and stalling. For instance, if I have the following:
ConnectionManager()
{
    listener = new TcpListener(port);
    listener.BeginAcceptSocket(new AsyncCallback(acceptConnection), listener);
}

public void acceptConnection(IAsyncResult ar)
{
    //Do stuff
}
Does this mean that as soon as it finds a connection, it executes the "acceptConnection" function but then continues to execute through the caller function (in this case going out of scope)? How does this allow me to create a server application that will be able to take multiple clients? I am fairly new to this concept, even though I have worked with threads before to manage server/client interaction. If I am being a little vague, please let me know. I have looked up multiple examples on MSDN and am still a little confused. Thank you ahead of time!
as soon as it finds a connection, it executes the "acceptConnection" function
Yes
then continues to execute through the caller function?
No.
what does the Async approach achieve
When done right, it allows processing much higher number of requests/second using fewer resources.
Imagine you're creating a server that should accept connections on 10 TCP ports.
With a blocking API, you'd have to create 10 threads just for accepting sockets. Threads are an expensive system resource; for example, every thread has its own stack, and switching between threads takes considerable time. If a client connects to some socket, the OS has to wake up the corresponding thread.
With the async API, you post 10 asynchronous requests. When a client connects, your acceptConnection method will be called by a thread from the CLR thread pool.
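For illustration, a rough sketch of posting those requests, assuming a hypothetical list of ten already-started TcpListener instances and the question's acceptConnection callback; no thread sits blocked on any of them:
foreach (TcpListener listener in listeners)
{
    // Returns immediately; the callback runs on a thread-pool thread
    // when a client connects on this listener's port.
    listener.BeginAcceptSocket(new AsyncCallback(acceptConnection), listener);
}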
And one more thing.
If you want to continue executing the caller function after waiting for an asynchronous I/O operation to complete, you should consider C#'s new async/await syntax, which lets you do just that. The feature is available as a stand-alone library ("Async CTP") for Visual Studio 2010, and is included in Visual Studio 2012.
I don't profess to be a C# or sockets guru, but from what I understand the code you've got above will accept the first connection and then no more. You would need to establish another BeginAccept.
Something like:
TcpListener listener = null;

ConnectionManager()
{
    listener = new TcpListener(port);
    listener.BeginAcceptSocket(new AsyncCallback(acceptConnection), listener);
}

public void acceptConnection(IAsyncResult ar)
{
    // Create async receive data code..

    // Get ready for a new connection
    listener.BeginAcceptSocket(new AsyncCallback(acceptConnection), listener);
}
So by using async receives in addition to the async accept, the acceptConnection callback finishes quickly and sets up listening for a new connection. I guess you could re-order this too.
For a straight Socket connection (not TcpListener), this is what I used:
(ConnectedClient is my own class which handles the receive and transmit functions and holds other info about the connection.)
int port = 7777; // or whatever port you want to listen on
IPEndPoint ipLocal = new IPEndPoint(IPAddress.Any, port);

listenSocket = new Socket(AddressFamily.InterNetwork,
    SocketType.Stream, ProtocolType.Tcp);
listenSocket.Bind(ipLocal);
listenSocket.Listen(10); // put the socket into listening mode (required before BeginAccept)

// create the call back for any client connections...
listenSocket.BeginAccept(new AsyncCallback(OnClientConnection), null);

private void OnClientConnection(IAsyncResult asyn)
{
    if (socketClosed)
    {
        return;
    }

    try
    {
        Socket clientSocket = listenSocket.EndAccept(asyn);

        ConnectedClient connectedClient = new ConnectedClient(clientSocket, this);
        connectedClient.MessageReceived += OnMessageReceived;
        connectedClient.Disconnected += OnDisconnection;
        connectedClient.MessageSent += OnMessageSent;
        connectedClient.StartListening();

        // create the call back for any client connections...
        listenSocket.BeginAccept(new AsyncCallback(OnClientConnection), null);
    }
    catch (ObjectDisposedException excpt)
    {
        // Deal with this, your code goes here
    }
    catch (Exception excpt)
    {
        // Deal with this, your code goes here
    }
}
I hope this has answered what you're looking for.
What is the correct way to accept sockets in a multi-connection environment in .NET?
Will the following be enough even if the load is high?
while (true)
{
    // block until a socket is accepted
    var socket = tcpListener.AcceptSocket();
    DoStuff(socket); // e.g. spawn a thread and read data
}
That is, can I accept sockets in a single thread and then handle the sockets in a thread / dataflow / whatever?
So the question is just about the accept part.
You'll probably want the BeginAccept async operation instead of the synchronous Accept.
And if you want to handle high load, you definitely don't want a thread per connection; again, use the async methods.
Take a look at either the Reactor or Proactor pattern, depending on whether you want to block or not. I'd recommend the Patterns for Concurrent and Networked Objects book.
This should be fine but if the load gets even higher you might consider using the asynchronous versions of this method: BeginAcceptSocket/EndAcceptSocket.
The BeginAcceptSocket is a better choice if you want the most performant server.
More importantly, these async operations use the thread pool under the hood, whilst in your current implementation you are creating and destroying lots of threads, which is really expensive.
I think the best approach is to call BeginAccept(), and within OnAccept call BeginAccept again right away. This should give you the best concurrency.
The OnAccept should be something like this:
private void OnAccept(IAsyncResult ar)
{
    bool beginAcceptCalled = false;
    try
    {
        //start the listener again
        _listener.BeginAcceptSocket(OnAccept, null);
        beginAcceptCalled = true;

        Socket socket = _listener.EndAcceptSocket(ar);
        //do something with the socket..
    }
    catch (Exception ex)
    {
        if (!beginAcceptCalled)
        {
            //try listening to connections again
            _listener.BeginAcceptSocket(OnAccept, null);
        }
    }
}
It doesn't really matter performance-wise. What matters is how you communicate with each client. That handling will consume a lot more CPU than accepting sockets.
I would use BeginAccept/EndAccept for the listener socket AND BeginReceive/EndReceive for the client sockets.
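As a hedged illustration of that second half (not from the original answer), a per-client receive could be posted and re-posted like this:
private void StartReceive(Socket clientSocket, byte[] buffer)
{
    clientSocket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive,
        Tuple.Create(clientSocket, buffer));
}

private void OnReceive(IAsyncResult ar)
{
    var state = (Tuple<Socket, byte[]>)ar.AsyncState;
    int bytes = state.Item1.EndReceive(ar);
    if (bytes == 0) return; // the remote side closed the connection

    // process state.Item2[0..bytes) here...

    StartReceive(state.Item1, state.Item2); // post the next receive
}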
Since I'm using Async CTP and DataFlow, the current code looks like this:
private async void WaitForSockets()
{
    var socket = await tcpListener.AcceptSocketAsync();
    WaitForSockets();
    incomingSockets.Post(socket);
}
Note that what looks like a recursive call will not cause stack overflow or block.
It will simply start a new awaiter for a new socket and exit.