I'm struggling a bit with socket programming (something I'm not at all familiar with) and I can't find anything helpful on Google or MSDN (awful). Apologies for the length of this.
Basically I have an existing service which receives and responds to requests over UDP. I can't change this at all.
I also have a client within my web app which dispatches requests to that service and listens for responses. The existing client I've been given is a singleton which creates a socket and an array of response slots, then spins up a background thread with an infinitely looping method that calls sock.Receive() and pushes the received data into the slot array. All kinds of things about this seem wrong to me, and the infinite thread breaks my unit testing, so I'm trying to replace this service with one which does its send/receives asynchronously instead.
Point 1: Is this the right approach? I want a non-blocking, scalable, thread-safe service.
My first attempt is roughly like this, which sort of worked, but the data I got back was always shorter than expected (i.e. the buffer did not contain the number of bytes requested) and seemed to throw exceptions when processed.
private Socket MyPreConfiguredSocket;
public object Query()
{
//build a request
this.MyPreConfiguredSocket.SendTo(MYREQUEST, MYREQUEST.Length, SocketFlags.Multicast, this._target);
IAsyncResult h = this.MyPreConfiguredSocket.BeginReceiveFrom(response, 0, BUFFER_SIZE, SocketFlags.None, ref this._target, new AsyncCallback(ARecieve), this.MyPreConfiguredSocket);
if (!h.AsyncWaitHandle.WaitOne(TIMEOUT)) { throw new Exception("Timed out"); }
//process response data (always shortened)
}
private void ARecieve (IAsyncResult result)
{
int bytesreceived = (result.AsyncState as Socket).EndReceiveFrom(result, ref this._target);
}
My second attempt was based on more Google trawling and this recursive pattern I frequently saw, but this version always times out! It never even gets to ARecieve.
public object Query()
{
//build a request
this.MyPreConfiguredSocket.SendTo(MYREQUEST, MYREQUEST.Length, SocketFlags.Multicast, this._target);
State s = new State(this.MyPreConfiguredSocket);
this.MyPreConfiguredSocket.BeginReceiveFrom(s.Buffer, 0, BUFFER_SIZE, SocketFlags.None, ref this._target, new AsyncCallback(ARecieve), s);
if (!s.Flag.WaitOne(10000)) { throw new Exception("Timed out"); } //always thrown
//process response data
}
private void ARecieve (IAsyncResult result)
{
//never gets here!
State s = (result.AsyncState as State);
int bytesreceived = s.Sock.EndReceiveFrom(result, ref this._target);
if (bytesreceived > 0)
{
s.Received += bytesreceived;
s.Sock.BeginReceiveFrom(s.Buffer, s.Received, BUFFER_SIZE, SocketFlags.None, ref this._target, new AsyncCallback(ARecieve), s);
}
else
{
s.Flag.Set();
}
}
private class State
{
public State(Socket sock)
{
this.Sock = sock;
this.Buffer = new byte[BUFFER_SIZE];
}
public Socket Sock;
public byte[] Buffer;
public ManualResetEvent Flag = new ManualResetEvent(false);
public int Received = 0;
}
Point 2: So clearly I'm getting something quite wrong.
Point 3: I'm not sure if I'm going about this right. How does the data coming from the remote service even get to the right listening thread? Do I need to create a socket per request?
Out of my comfort zone here. Need help.
Not the solution for you, just a suggestion - come up with the simplest code that works, peeling off all the threading/events/etc. From there, start adding needed, and only needed, complexity. My experience has always been that in the process I'd find what I was doing wrong.
So is your program's pseudocode outline as follows?
Socket MySocket;
Socket ResponseSocket;
byte[] Request;
byte[] Response;
public byte[] GetUDPResponse()
{
this.MySocket.Send(Request).To(ResponseSocket);
this.MySocket.Receive(Response).From(ResponseSocket);
return Response;
}
I'll try to help!
The second code post is the one we can work with and the way forward.
But you are right: the documentation is not the best.
Do you know for sure that you get a response to the message you send? Remove the asynchronous behavior from the socket and just try to send and receive synchronously (even though this may block your thread for now). Once you know this behavior is working, edit your question and post that code, and I'll help you with the threading model. Once the networking portion, i.e. the send/receive, is working, the threading model is pretty straightforward.
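For example, a minimal synchronous sanity check might look like this sketch (MYREQUEST, BUFFER_SIZE, and _target are placeholders following your naming; the point is just to prove the request/response round trip works):
// One datagram out, one datagram back, no threads involved.
Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
sock.ReceiveTimeout = 10000; // ReceiveFrom throws a SocketException on timeout
sock.SendTo(MYREQUEST, this._target);
byte[] buffer = new byte[BUFFER_SIZE];
EndPoint remote = new IPEndPoint(IPAddress.Any, 0);
int received = sock.ReceiveFrom(buffer, ref remote); // blocks until a datagram arrives
// 'received' is the length of the whole datagram; process buffer[0..received)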
One POSSIBLE issue is if your send operation goes to the server and it responds before Windows sets up the asynchronous listener. If you aren't listening, the data won't be accepted on your side (unlike TCP).
Try calling BeginReceiveFrom before the send operation.
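Roughly, reusing the names from your second attempt (just a sketch of the reordering, not a drop-in fix):
// Post the receive first, so the OS has a buffer pending before the request goes out.
State s = new State(this.MyPreConfiguredSocket);
this.MyPreConfiguredSocket.BeginReceiveFrom(s.Buffer, 0, BUFFER_SIZE, SocketFlags.None,
    ref this._target, new AsyncCallback(ARecieve), s);
// Only now send the request; a fast reply lands in the already-pending receive.
this.MyPreConfiguredSocket.SendTo(MYREQUEST, MYREQUEST.Length, SocketFlags.None, this._target);
if (!s.Flag.WaitOne(TIMEOUT)) { throw new Exception("Timed out"); }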
I have built an application that connects, with the help of TCP sockets, to 4 devices.
For that I created a TCP class with asynchronous methods to send and receive data.
public delegate void dataRec(string recStr);
public event dataRec dataReceiveEvent;
public Socket socket;
public void Connect(string ipAddress, int portNum)
{
socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
IPEndPoint epServer = new IPEndPoint(IPAddress.Parse(ipAddress), portNum);
socket.Blocking = false;
AsyncCallback onconnect = new AsyncCallback(OnConnect);
socket.BeginConnect(epServer, onconnect, socket);
}
public void SetupRecieveCallback(Socket sock)
{
try
{
AsyncCallback recieveData = new AsyncCallback(OnRecievedData);
sock.BeginReceive(m_byBuff, 0, m_byBuff.Length, SocketFlags.None, recieveData, sock);
}
catch (Exception ex)
{
//nevermind
}
}
public void OnRecievedData(IAsyncResult ar)
{
// Socket was the passed in object
Socket sock = (Socket)ar.AsyncState;
try
{
int nBytesRec = sock.EndReceive(ar);
if (nBytesRec > 0)
{
string sRecieved = Encoding.ASCII.GetString(m_byBuff, 0, nBytesRec);
OnAddMessage(sRecieved);
SetupRecieveCallback(sock);
}
else
{
sock.Shutdown(SocketShutdown.Both);
sock.Close();
}
}
catch (Exception ex)
{
//nevermind
}
}
public void OnAddMessage(string sMessage)
{
if (mainProgram.InvokeRequired)
{
scanEventCallback d = new scanEventCallback(OnAddMessage);
mainProgram.BeginInvoke(d, sMessage);
}
else
{
dataReceiveEvent(sMessage);
}
}
I have 4 devices with 4 different IPs and ports that I send data to, and from which I receive data.
So I created 4 different instances of the class mentioned.
When I receive data, I call callback functions to do their job with the data I received (the OnAddMessage event).
The connection with the devices is really good; latency is around 1-2 ms (it's an internal network).
The functions I call via callbacks are pretty fast, each taking no more than 100 ms.
The problem is that it is working really slowly, and it's not caused by the callback functions.
For each piece of data I send to a device, I receive one message back from it.
When I start sending and then stop after about 1 minute, the program keeps receiving data for 4-5 seconds, even after I turn the devices off. It's like some kind of lag, where I receive data that should have been delivered much earlier.
It looks like something is working really slowly.
I'm getting about 1 message per second from each device, so it shouldn't be a big deal.
Any ideas what else I should do or set, or what could actually be slowing me down?
You haven't posted all the relevant code, but here are some things to pay attention to:
With a network sniffer, like Wireshark or tcpdump, you can see what is actually going on.
Latency is not the only relevant factor for "connection speed". Look also at throughput, packet loss, re-transmissions, etc.
Try to send and receive in large chunks. Sending and receiving only single bytes is slow because each one carries a lot of overhead.
The receiver should read data faster than the sender can send it, or else internal buffers (OS, network) will fill up.
Try to avoid a "chatty" protocol, basically synchronous request/reply, if possible.
If you have a chatty protocol, you can get better performance by disabling the Nagle algorithm (see the sketch after this list). The option to disable this algorithm is often called "TCP no delay" or similar.
Don't close/reopen the connection for each message. TCP connection setup and teardown has quite some overhead.
If you have long standing open TCP connections, close the connection when the connection is idle for some time, for example several minutes.
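For the Nagle point, disabling it is a one-liner on a connected socket (a sketch; socket stands in for your own Socket instance):
// Disable the Nagle algorithm so small writes go out immediately.
socket.NoDelay = true;
// Equivalent, via the option API:
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);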
I've got a listener socket that accepts, receives and sends as a TCP server typically does. I've given my accept and receive code below; it's not that different from the example in Microsoft's documentation. The main difference is that my server doesn't kill a connection after it stops receiving data (I don't know if this is a bad design or not?).
private void on_accept(IAsyncResult xResult)
{
Socket listener = null;
Socket handler = null;
TStateObject state = null;
Task<int> consumer = null;
try
{
mxResetEvent.Set();
listener = (Socket)xResult.AsyncState;
handler = listener.EndAccept(xResult);
state = new TStateObject()
{
Socket = handler
};
consumer = async_input_consumer(state);
OnConnect?.Invoke(this, handler);
handler.BeginReceive(state.Buffer, 0, TStateObject.BufferSize, 0, new AsyncCallback(on_receive), state);
}
catch (SocketException se)
{
if (se.ErrorCode == 10054)
{
on_disconnect(state);
}
}
catch (ObjectDisposedException)
{
return;
}
catch (Exception ex)
{
System.Console.WriteLine("Exception in TCPServer::AcceptCallback, exception: " + ex.Message);
}
}
private void on_receive(IAsyncResult xResult)
{
Socket handler = null;
TStateObject state = null;
try
{
state = xResult.AsyncState as TStateObject;
handler = state.Socket;
int bytesRead = handler.EndReceive(xResult);
UInt16 id = TClientRegistry.GetIdBySocket(handler);
TContext context = TClientRegistry.GetContext(id);
if (bytesRead > 0)
{
var buffer_data = new byte[bytesRead];
Array.Copy(state.Buffer, buffer_data, bytesRead);
state.BufferBlock.Post(buffer_data);
}
Array.Clear(state.Buffer, 0, state.Buffer.Length);
handler.BeginReceive(state.Buffer, 0, TStateObject.BufferSize, 0, new AsyncCallback(on_receive), state);
}
catch (SocketException se)
{
if(se.ErrorCode == 10054)
{
on_disconnect(state);
}
}
catch (ObjectDisposedException)
{
return;
}
catch (Exception ex)
{
System.Console.WriteLine("Exception in TCPServer::ReadCallback, exception: " + ex.Message);
}
}
This code is used to connect to an embedded device and works (mostly) fine. I was investigating a memory leak and trying to speed up the process a bit by replicating exactly what the device does (our connection speeds to the device are in the realm of about 70 kbps, and it took an entire weekend of stress testing to get the memory leak to double the memory footprint of the server).
So I wrote a C# program to replicate the data transactions, but I've run into an issue: when I disconnect the test program, the server gets caught in a loop where its on_receive callback is endlessly invoked. I was under the impression that BeginReceive wouldn't be triggered until something was received. The callback ends the receive like an async callback should, processes the data, and then, since I want the connection to await more data, calls BeginReceive again.
The part of my test program where the issue occurs is in here:
private static void read_write_test()
{
mxConnection = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
mxConnection.Connect("12.12.12.18", 10);
if (mxConnection.Connected)
{
byte[] data = Encoding.ASCII.GetBytes("HANDSHAKESTRING"); //Connect string
int len = data.Length;
mxConnection.Send(data);
data = new byte[4];
len = mxConnection.Receive(data);
if (len == 0 || data[0] != '1')
{
mxConnection.Disconnect(false);
return;
}
}
//Meat of the test goes here but isn't relevant
mxConnection.Shutdown(SocketShutdown.Both);
mxConnection.Close();
}
Up until the Shutdown(SocketShutdown.Both) call, everything works as expected. When I make that call, however, it seems like the server never gets notification that the client has closed the socket, and it gets stuck in a loop of endlessly trying to receive. I've done my homework and I think I am closing my connection properly as per this discussion. I've messed around with the disconnect section to just do mxConnection.Disconnect(false) as well, but the same thing occurs.
When the device disconnects from the server, my server catches a SocketException with error code 10054, which documentation says:
Connection reset by peer. An existing connection was forcibly closed by the remote host. This normally results if the peer application on the remote host is suddenly stopped, the host is rebooted, the host or remote network interface is disabled, or the remote host uses a hard close (see setsockopt for more information on the SO_LINGER option on the remote socket). This error may also result if a connection was broken due to keep-alive activity detecting a failure while one or more operations are in progress. Operations that were in progress fail with WSAENETRESET. Subsequent operations fail with WSAECONNRESET.
I've used this to handle the socket being closed, and it has worked well for the most part. However, with my C# test program, it doesn't seem to work the same way.
Am I missing something here? I'd appreciate any input. Thanks.
The main difference is that my server doesn't kill a connection after it stops receiving data (I don't know if this is a bad design or not?).
Of course it is.
it seems like the server never gets notification that the client has closed the socket and gets stuck in a loop of endlessly trying to receive
The server does get notification. It's just that you ignore it. The notification is that your receive operation returns 0. When that happens, you just call BeginReceive() again. Which starts a new read operation. Which…returns 0! You just keep doing that over and over again.
When a receive operation returns 0, you're supposed to complete the graceful closure (with a call to Shutdown() and Close()) that the remote endpoint started. Do not try to receive again. You'll just keep getting the same result.
I strongly recommend you do more homework. A good place to start would be the Winsock Programmer's FAQ. It is a fairly old resource and doesn't address .NET at all. But for the most part, the things that novice network programmers are getting wrong in .NET are the same things that novice Winsock programmers were getting wrong twenty years ago. The document is still just as relevant today as it was then.
By the way, your client-side code has some issues as well. First, when the Connect() method returns successfully, the socket is connected. You don't have to check the Connected property (and in fact, should never have to check that property). Second, the Disconnect() method doesn't do anything useful. It's used when you want to re-use the underlying socket handle, but you should be disposing the Socket object here. Just use Shutdown() and Close(), per the usual socket API idioms. Third, any code that receives from a TCP socket must do that in a loop, and make use of the received byte-count value to determine what data has been read and whether enough has been read to do anything useful. TCP can return any positive number of bytes on a successful read, and it's your program's job to identify the start and end of any particular blocks of data that were sent.
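For that last point, a receive loop typically looks something like the sketch below, where expected is a placeholder for however many bytes your protocol says a complete block occupies:
// Keep reading until the full block has arrived; TCP has no message boundaries.
int total = 0;
byte[] buffer = new byte[expected];
while (total < expected)
{
    int n = socket.Receive(buffer, total, expected - total, SocketFlags.None);
    if (n == 0)
        throw new EndOfStreamException("Peer closed the connection mid-block.");
    total += n;
}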
You missed this in the documentation for EndReceive() and Receive():
If the remote host shuts down the Socket connection with the Shutdown method, and all available data has been received, the Receive method will complete immediately and return zero bytes.
When you read zero bytes, you still start another BeginReceive(), instead of shutting down:
if (bytesRead > 0)
{
var buffer_data = new byte[bytesRead];
Array.Copy(state.Buffer, buffer_data, bytesRead);
state.BufferBlock.Post(buffer_data);
}
Array.Clear(state.Buffer, 0, state.Buffer.Length);
handler.BeginReceive(state.Buffer, 0, TStateObject.BufferSize, 0, new AsyncCallback(on_receive), state);
Since you keep calling BeginReceive on a socket that's 'shutdown', you're going to keep getting callbacks to receive zero bytes.
Compare with the example from Microsoft in the documentation for EndReceive():
public static void Read_Callback(IAsyncResult ar){
StateObject so = (StateObject) ar.AsyncState;
Socket s = so.workSocket;
int read = s.EndReceive(ar);
if (read > 0) {
so.sb.Append(Encoding.ASCII.GetString(so.buffer, 0, read));
s.BeginReceive(so.buffer, 0, StateObject.BUFFER_SIZE, 0,
new AsyncCallback(Async_Send_Receive.Read_Callback), so);
}
else{
if (so.sb.Length > 1) {
//All of the data has been read, so displays it to the console
string strContent;
strContent = so.sb.ToString();
Console.WriteLine(String.Format("Read {0} byte from socket" +
"data = {1} ", strContent.Length, strContent));
}
s.Close();
}
}
I'm using the asynchronous method BeginSend and I need some sort of a timeout mechanism. What I've implemented works fine for connect and receive timeouts, but I have a problem with the BeginSend callback. Even a timeout of 25 seconds is often not enough and gets exceeded. This seems very strange to me and points towards a different cause.
public void Send(String data)
{
if (client.Connected)
{
// Convert the string data to byte data using ASCII encoding.
byte[] byteData = Encoding.ASCII.GetBytes(data);
client.NoDelay = true;
// Begin sending the data to the remote device.
IAsyncResult res = client.BeginSend(byteData, 0, byteData.Length, 0,
new AsyncCallback(SendCallback), client);
if (!res.IsCompleted)
{
sendTimer = new System.Threading.Timer(SendTimeoutCallback, null, 10000, Timeout.Infinite);
}
}
else MessageBox.Show("No connection to target! Send");
}
private void SendCallback(IAsyncResult ar)
{
if (Interlocked.CompareExchange(ref sendTimeoutflag, 1, 0) != 0)
{
// the flag was set elsewhere, so return immediately.
return;
}
sendTimeoutflag = 0; //needs to be reset back to 0 for next reception
// we set the flag to 1, indicating it was completed.
if (sendTimer != null)
{
// stop the timer from firing.
sendTimer.Dispose();
}
try
{
// Retrieve the socket from the state object.
Socket client = (Socket)ar.AsyncState;
// Complete sending the data to the remote device.
int bytesSent = client.EndSend(ar);
ef.updateUI("Sent " + bytesSent.ToString() + " bytes to server." + "\n");
}
catch (Exception e)
{
MessageBox.Show(e.ToString());
}
}
private void SendTimeoutCallback(object obj)
{
if (Interlocked.CompareExchange(ref sendTimeoutflag, 2, 0) != 0)
{
// the flag was set elsewhere, so return immediately.
return;
}
// we set the flag to 2, indicating a timeout was hit.
sendTimer.Dispose();
client.Close(); // closing the Socket cancels the async operation.
MessageBox.Show("Connection to the target has been lost! SendTimeoutCallback");
}
I've tested timeout values up to 30 seconds. The value of 30 seconds has proved to be the only one never to time out. But that just seems like overkill, and I believe there's a different underlying cause. Any ideas as to why this could be happening?
Unfortunately, there's not enough code to completely diagnose this. You don't even show the declaration of sendTimeoutflag. The example isn't self-contained, so there's no way to test it. And you're not clear about exactly what happens (e.g. do you just get the timeout, do you complete a send and still get a timeout, does something else happen?).
That said, I see at least one serious bug in the code, which is your use of the sendTimeoutflag. The SendCallback() method sets this flag to 1, but it immediately sets it back to 0 again (this time without the protection of Interlocked.CompareExchange()). Only after it's set the value to 0 does it dispose the timer.
This means that even when you successfully complete the callback, the timeout timer is nearly guaranteed to have no idea and to close the client object anyway.
You can fix this specific issue by moving the assignment sendTimeoutflag = 0; to a point after you've actually completed the send operation, e.g. at the end of the callback method. And even then only if you take steps to ensure that the timer callback cannot execute past that point (e.g. wait for the timer's dispose to complete).
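A sketch of that reordering, keeping your names (the Dispose(WaitHandle) overload is used so the timer callback provably cannot run past the reset):
private void SendCallback(IAsyncResult ar)
{
    if (Interlocked.CompareExchange(ref sendTimeoutflag, 1, 0) != 0)
    {
        // The timeout callback won the race; it will close the socket.
        return;
    }
    // Stop the timer and wait until any in-flight timer callback has finished.
    using (ManualResetEvent timerDisposed = new ManualResetEvent(false))
    {
        sendTimer.Dispose(timerDisposed);
        timerDisposed.WaitOne();
    }
    Socket client = (Socket)ar.AsyncState;
    int bytesSent = client.EndSend(ar); // complete the send.
    // Only now, with the timer gone and the send complete, rearm for the next send.
    Interlocked.Exchange(ref sendTimeoutflag, 0);
}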
Note that even having fixed that specific issue, you may still have other bugs. Frankly, it's not clear why you want a timeout in the first place. Nor is it clear why you want to use lock-free code to implement your timeout logic. More conventional locking (i.e. Monitor-based with the lock statement) would be easier to implement correctly and would likely not impose a noticeable performance penalty.
And I agree with the suggestion that you would be better served by using the async/await pattern instead of explicitly dealing with callback methods (but of course that would mean using a higher-level I/O object, since Socket doesn't support async/await).
I'm trying to code a basic game where, when one player makes a move, the other receives the packet, and vice versa. The problem is that only every second packet is being received. I know it's not the connection, because it's consistently the same pattern of missed packets.
My code is as follows:
Socket serverSocket;
private void Window_Loaded(object sender, RoutedEventArgs e)
{
//Take socket from another window that created it and start receiving
serverSocket = Welcome.MainSocket;
serverSocket.BeginReceive(Buffer, 0, Buffer.Length, SocketFlags.None, new AsyncCallback(OnReceive), null);
//The initial definition of the socket was this (in the Welcome window)
//MainSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
//MainSocket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
//MainSocket.Bind(new IPEndPoint(OwnLocal, OwnPort));
//MainSocket.Connect(Address, OppositePort);
}
private void OnReceive(IAsyncResult ar)
{
OnlineData Data = OnlineData.FromByte(Buffer);
//Do stuff with data on UI thread
if (Data is MoveData)
{
App.Current.Dispatcher.Invoke((Action)delegate
{
((MoveData)Data).Sync(Game, Me == Game.PlayerOne ? 1 : 2);
});
}
//End receive and start receiving again
serverSocket.EndReceive(ar);
serverSocket.BeginReceive(Buffer, 0, Buffer.Length, SocketFlags.None, new AsyncCallback(OnReceive), null);
}
//Called each time the player makes a move
void SocketSend(OnlineData Data)
{
serverSocket.Send(Data.ToByte());
}
Any ideas why this would ever happen in my case or in any other situation? Thanks!
The immediate thing I can see is that you are processing the buffer before calling EndReceive, which means you aren't processing the critical return value from that method: the number of bytes received. I expect your data is being combined and multiple messages are being received in a single "receive" call (TCP is stream based, not message based).
This also means you aren't seeing any exceptions that you should know about.
Additionally, not calling End* methods can cause leaks - you are very much meant to pair every Begin* call with its End* method. Or switch to the newer async IO API.
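A sketch of what the callback could look like with EndReceive completed first and its byte count respected (names follow your code; proper message framing across reads is still your job, since one read may carry part of a message or several):
private void OnReceive(IAsyncResult ar)
{
    int bytesRead = serverSocket.EndReceive(ar); // complete the read before touching Buffer
    if (bytesRead == 0) { serverSocket.Close(); return; } // peer closed the connection
    // Only the first bytesRead bytes of Buffer are valid.
    OnlineData Data = OnlineData.FromByte(Buffer);
    if (Data is MoveData)
    {
        App.Current.Dispatcher.Invoke((Action)delegate
        {
            ((MoveData)Data).Sync(Game, Me == Game.PlayerOne ? 1 : 2);
        });
    }
    serverSocket.BeginReceive(Buffer, 0, Buffer.Length, SocketFlags.None, OnReceive, null);
}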
What was happening was that I had two different AsyncCallbacks for BeginReceive: one from the original window ("Welcome") in which the socket was created, and one for this window. Each time the other computer sent a message, the AsyncCallback that received the data alternated between the two.
Moral of this story: only have one AsyncCallback pending on a socket, unless you want to receive only every second packet.
I have a client/server infrastructure. At present they use a TcpClient and TcpListener to send and receive data between all the clients and the server.
What I currently do is: when data is received (on its own thread), it is put in a queue for another thread to process, in order to free the socket so it is ready and open to receive new data.
// Enter the listening loop.
while (true)
{
Debug.WriteLine("Waiting for a connection... ");
// Perform a blocking call to accept requests.
using (client = server.AcceptTcpClient())
{
data = new List<byte>();
// Get a stream object for reading and writing
using (NetworkStream stream = client.GetStream())
{
// Loop to receive all the data sent by the client.
int length;
while ((length = stream.Read(bytes, 0, bytes.Length)) != 0)
{
var copy = new byte[length];
Array.Copy(bytes, 0, copy, 0, length);
data.AddRange(copy);
}
}
}
receivedQueue.Add(data);
}
However, I wanted to find out if there is a better way to do this. For example, if there are 10 clients and they all want to send data to the server at the same time, one will get through while all the others will fail. Or if one client has a slow connection and hogs the socket, all other communication will halt.
Is there not some way to be able to receive data from all clients at the same time and add the received data in the queue for processing when it has finished downloading?
So here is an answer that will get you started - it's more beginner-level than my blog post.
.Net has an async pattern that revolves around a Begin* and End* call. For instance - BeginReceive and EndReceive. They nearly always have their non-async counterpart (in this case Receive), and achieve the exact same goal.
The most important thing to remember is that the socket ones do more than just make the call async - they expose something called IOCP (IO Completion Ports; Linux/Mono has these too but I forget the name), which is extremely important to use on a server; the crux of what IOCP does is that your application doesn't consume a thread while it waits for data.
How to Use The Begin/End Pattern
Every Begin* method will have exactly 2 more arguments in comparison to its non-async counterpart. The first is an AsyncCallback, the second is an object. What these two mean is, "here is a method to call when you are done" and "here is some data I need inside that method." The method that gets called always has the same signature; inside this method you call the End* counterpart to get what would have been the result if you had done it synchronously. So for example:
private void BeginReceiveBuffer()
{
_socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, EndReceiveBuffer, buffer);
}
private void EndReceiveBuffer(IAsyncResult state)
{
var buffer = (byte[])state.AsyncState; // This is the last parameter.
var length = _socket.EndReceive(state); // This is the return value of the method call.
DataReceived(buffer, 0, length); // Do something with the data.
}
What happens here is .Net starts waiting for data from the socket; as soon as it gets data it calls EndReceiveBuffer and passes the 'custom data' (in this case buffer) through to it via state.AsyncState. When you call EndReceive it will give you back the length of the data that was received (or throw an exception if something failed).
Better Pattern for Sockets
This form will give you central error handling - it can be used anywhere where the async pattern wraps a stream-like 'thing' (e.g. TCP arrives in the order it was sent, so it could be seen as a Stream object).
private Socket _socket;
private ArraySegment<byte> _buffer;
public void StartReceive()
{
ReceiveAsyncLoop(null);
}
// Note that this method is not guaranteed (in fact
// unlikely) to remain on a single thread across
// async invocations.
private void ReceiveAsyncLoop(IAsyncResult result)
{
try
{
// This only gets called once - via StartReceive()
if (result != null)
{
int numberOfBytesRead = _socket.EndReceive(result);
if(numberOfBytesRead == 0)
{
OnDisconnected(null); // 'null' being the exception. The client disconnected normally in this case.
return;
}
var newSegment = new ArraySegment<byte>(_buffer.Array, _buffer.Offset, numberOfBytesRead);
// This method needs its own error handling. Don't let it throw exceptions unless you
// want to disconnect the client.
OnDataReceived(newSegment);
}
// Because of this method call, it's as though we are creating a 'while' loop.
// However this is called an async loop, but you can see it the same way.
_socket.BeginReceive(_buffer.Array, _buffer.Offset, _buffer.Count, SocketFlags.None, ReceiveAsyncLoop, null);
}
catch (Exception ex)
{
// Socket error handling here.
}
}
Accepting Multiple Connections
What you generally do is write a class that contains your socket etc. (as well as your async loop) and create one for each client. So for instance:
public class InboundConnection
{
private Socket _socket;
private ArraySegment<byte> _buffer;
public InboundConnection(Socket clientSocket)
{
_socket = clientSocket;
_buffer = new ArraySegment<byte>(new byte[4096], 0, 4096);
StartReceive(); // Start the read async loop.
}
private void StartReceive() ...
private void ReceiveAsyncLoop() ...
private void OnDataReceived() ...
}
Each client connection should be tracked by your server class (so that you can disconnect them cleanly when the server shuts down, as well as search/look them up).
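A rough sketch of that tracking, under the same assumed names (_listener being the listening socket; kick the loop off once with AcceptAsyncLoop(null)):
private readonly List<InboundConnection> _clients = new List<InboundConnection>();
private Socket _listener;

// Async accept loop: each accepted socket becomes a tracked InboundConnection.
private void AcceptAsyncLoop(IAsyncResult result)
{
    if (result != null)
    {
        Socket clientSocket = _listener.EndAccept(result);
        lock (_clients)
            _clients.Add(new InboundConnection(clientSocket));
    }
    _listener.BeginAccept(AcceptAsyncLoop, null);
}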
You should use asynchronous socket programming to achieve this. Take a look at the example provided by MSDN.
You should use an asynchronous method of reading the data; an example is:
// Enter the listening loop.
while (true)
{
Debug.WriteLine("Waiting for a connection... ");
client = server.AcceptTcpClient();
ThreadPool.QueueUserWorkItem(new WaitCallback(HandleTcp), client);
}
private void HandleTcp(object tcpClientObject)
{
TcpClient client = (TcpClient)tcpClientObject;
// Perform a blocking call to accept requests.
data = new List<byte>();
// Get a stream object for reading and writing
using (NetworkStream stream = client.GetStream())
{
// Loop to receive all the data sent by the client.
int length;
while ((length = stream.Read(bytes, 0, bytes.Length)) != 0)
{
var copy = new byte[length];
Array.Copy(bytes, 0, copy, 0, length);
data.AddRange(copy);
}
}
receivedQueue.Add(data);
}
Also, you should consider using an AutoResetEvent or ManualResetEvent to be notified when new data is added to the collection, so the thread that handles the data will know when data has been received; and if you are using .NET 4.0, you'd be better off using a BlockingCollection instead of a Queue.
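For instance, a minimal sketch of that BlockingCollection handoff (Process is a placeholder for whatever your handling code does):
// Producer/consumer handoff without manual reset events.
BlockingCollection<List<byte>> receivedQueue = new BlockingCollection<List<byte>>();

// Producer (inside HandleTcp, replacing the plain queue):
// receivedQueue.Add(data);

// Consumer thread: blocks until items arrive; no polling or events needed.
foreach (List<byte> message in receivedQueue.GetConsumingEnumerable())
{
    Process(message);
}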
What I usually do is use a thread pool with several threads.
Upon each new connection, I run the connection handling (in your case, everything you do in the using clause) in one of the threads from the pool.
That way you achieve good performance, since you allow several simultaneously accepted connections, and you also limit the number of resources (threads, etc.) you allocate for handling incoming connections.
You have a nice example here
Good Luck