I'm teaching myself some simple networking using Unity and Sockets and I'm running into problems synchronizing data between a client and server. I'm aware that there are other options using Unity Networking but, before I move on, I want to understand better how to improve my code using the System libraries.
In this example I'm simply trying to stream my mouse position over a multicast UDP socket. I'm encoding a string into a byte array and sending that array once per frame. I'm aware that sending these values as a string is suboptimal but, unless that is likely the bottleneck, I'm assuming it's OK to come back and optimize that later.
In my setup the server is sending values at 60 fps and the client is reading at the same rate. The problem I'm having is that when the client receives values, it typically receives many at once. If I log the values I received with a ------ between each frame, I typically get output like this:
------
------
------
------
------
------
------
119,396
91,396
45,391
18,379
-8,362
-35,342
-59,314
------
------
------
------
------
------
------
I would expect unsynchronized update cycles to lead to receiving two values per frame, but I'm not sure what might be accounting for the larger discrepancy.
Here's the Server code:
using UnityEngine;
using System.Net;
using System.Net.Sockets;
using System.Text;

public class Server : MonoBehaviour
{
    Socket _socket;

    void OnEnable ()
    {
        var ip = IPAddress.Parse ("224.5.6.7");
        var ipEndPoint = new IPEndPoint(ip, 4567);
        _socket = new Socket (AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        _socket.SetSocketOption (SocketOptionLevel.IP, SocketOptionName.AddMembership, new MulticastOption (ip));
        _socket.SetSocketOption (SocketOptionLevel.IP, SocketOptionName.MulticastTimeToLive, 2);
        _socket.Connect(ipEndPoint);
    }

    void OnDisable ()
    {
        if (_socket != null)
        {
            _socket.Close();
            _socket = null;
        }
    }

    public void Send (string message)
    {
        var byteArray = Encoding.ASCII.GetBytes (message);
        _socket.Send (byteArray);
    }
}
And the client:
using UnityEngine;
using System.Net;
using System.Net.Sockets;
using System.Text;

public class Client : MonoBehaviour
{
    Socket _socket;
    byte[] _byteBuffer = new byte[16];

    public delegate void MessageRecievedEvent (string message);
    public MessageRecievedEvent messageWasRecieved = delegate {};

    void OnEnable ()
    {
        var ipEndPoint = new IPEndPoint(IPAddress.Any, 4567);
        var ip = IPAddress.Parse("224.5.6.7");
        _socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        _socket.Bind (ipEndPoint);
        _socket.SetSocketOption (SocketOptionLevel.IP, SocketOptionName.AddMembership, new MulticastOption(ip,IPAddress.Any));
    }

    void Update ()
    {
        while (_socket.Available > 0)
        {
            for (int i = 0; i < _byteBuffer.Length; i++) _byteBuffer[i] = 0;
            _socket.Receive (_byteBuffer);
            messageWasRecieved (Encoding.ASCII.GetString (_byteBuffer));
        }
    }
}
If anybody could shed light on what I can do to improve synchronization that would be a great help.
Network I/O is subject to a large number of external influences, and TCP/IP as a protocol has few requirements. Certainly none that would provide a guarantee of the behavior you seem to want.
Unfortunately, without a good Minimal, Complete, and Verifiable code example, it's not possible to verify that your server is in fact sending data at the interval you claim. It's entirely possible you have a bug that's causing this behavior.
But if we assume that the code itself is perfect, there are still no guarantees when using UDP that datagrams won't be batched up at some point along the way, such that a large number appear in the network buffer all at once. I would expect this to happen with higher frequency when the datagrams are sent through multiple network nodes (e.g. a switch and especially over the Internet), but it could just as easily happen when the server and client are both on the same computer.
Ironically, one option that might force the datagrams to be spread out more is to pad each datagram with extra bytes. The exact number of bytes required would depend on the exact network route; to do this "perfectly" might require writing some calibration logic that tries different padding amounts until the code sees datagrams arriving at the intervals it expects.
But that would significantly increase the complexity of your network I/O code, and yet still would not guarantee the behavior you'd like. And it has some obvious negative side-effects, including the extra overhead on the network (something people using metered network connections certainly won't appreciate), as well as increasing the likelihood of a UDP datagram being dropped altogether.
It's not clear from your question whether your project actually requires multicast UDP, or if that's just something in your code because that's what some tutorial or other example you're following was using. If multicast is not actually a requirement, another thing you definitely should try is to use direct UDP without multicasting.
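If you do try plain (unicast) UDP, the change to your server is small: drop the multicast options and point the socket straight at the receiver. A minimal sketch of what the OnEnable in your Server class above might look like, assuming a placeholder receiver address of 192.168.1.50 (substitute whatever machine actually runs your client):

// Sketch only: a unicast variant of the OnEnable in your Server class above.
// 192.168.1.50 is a placeholder; substitute the address of the machine running the client.
void OnEnable ()
{
    var ipEndPoint = new IPEndPoint (IPAddress.Parse ("192.168.1.50"), 4567);
    _socket = new Socket (AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    // No AddMembership or MulticastTimeToLive options are needed for plain UDP.
    _socket.Connect (ipEndPoint);
}

On the client side you would keep the Bind on the port and simply drop the AddMembership call.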
FWIW: I would not implement this the way you have. Instead, I would use asynchronous receive operations, so that my client receives datagrams the instant they are available, rather than only checking periodically each frame of rendering. I would also include a sequence number in the datagram, and discard (ignore) any datagrams that arrive out of sequence (i.e. where the sequence number isn't strictly greater than the most recent sequence number already received). This approach will improve (perhaps only slightly) responsiveness, but also will handle situations where the datagrams arrive out of order, or are duplicated (two of the three main delivery issues one will experience with UDP…the third being, of course, failure of delivery).
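To make that concrete, here is a rough sketch of an asynchronous client with a sequence-number filter. The "&lt;seq&gt;:" prefix framing, the buffer size, and the queue-to-main-thread note are my own assumptions for illustration, not part of your original code:

// Sketch only, not drop-in code: asynchronous receives plus a sequence-number filter.
// Assumes the sender prefixes each datagram with "<seq>:" (e.g. "42:119,396") --
// that framing, the buffer size, and the port are assumptions made for this example.
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using UnityEngine;

public class AsyncClient : MonoBehaviour
{
    Socket _socket;
    readonly byte[] _buffer = new byte[64];
    int _lastSequence = -1;

    void OnEnable ()
    {
        _socket = new Socket (AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        _socket.Bind (new IPEndPoint (IPAddress.Any, 4567));
        _socket.SetSocketOption (SocketOptionLevel.IP, SocketOptionName.AddMembership,
            new MulticastOption (IPAddress.Parse ("224.5.6.7"), IPAddress.Any));
        BeginReceive ();
    }

    void OnDisable ()
    {
        if (_socket != null) { _socket.Close (); _socket = null; }
    }

    void BeginReceive ()
    {
        // The callback fires on a thread-pool thread as soon as a datagram arrives,
        // so anything that touches Unity objects must be handed back to the main thread.
        _socket.BeginReceive (_buffer, 0, _buffer.Length, SocketFlags.None, OnReceive, null);
    }

    void OnReceive (IAsyncResult result)
    {
        if (_socket == null) return;
        int received;
        try { received = _socket.EndReceive (result); }
        catch (ObjectDisposedException) { return; } // socket was closed in OnDisable

        var text = Encoding.ASCII.GetString (_buffer, 0, received);
        var parts = text.Split (':');
        int seq;
        if (parts.Length == 2 && int.TryParse (parts[0], out seq) && seq > _lastSequence)
        {
            _lastSequence = seq; // drops out-of-order and duplicate datagrams
            // queue parts[1] here for the main thread to consume in Update()
        }

        BeginReceive (); // wait for the next datagram
    }
}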
I'm trying to write a high-performance TCP server (an LDAP server) using this tutorial by David Fowler as the base of my MyServerListener.cs to handle incoming connections.
This is a simple .NET 7 console app (with small changes) that I borrowed from David; it just accepts incoming clients, processes the requests and writes "hello" to the response:
// usings needed on top of the implicit .NET 7 set
using System.Buffers;
using System.IO.Pipelines;
using System.Net;
using System.Net.Sockets;
using System.Text;

internal class Program
{
    const int PORT = 389; // injecting from config
    const int BACKLOG_LENGTH = 200; // max backlog size in windows server

    static async Task Main(string[] args)
    {
        var listenSocket = new Socket(SocketType.Stream, ProtocolType.Tcp);
        listenSocket.Bind(new IPEndPoint(IPAddress.Any, PORT));
        Console.WriteLine("Listening on port " + PORT);
        listenSocket.Listen(BACKLOG_LENGTH);
        while (true)
        {
            var socket = await listenSocket.AcceptAsync();
            _ = ProcessLinesAsync(socket);
        }
    }

    private static async Task ProcessLinesAsync(Socket socket)
    {
#if DEBUG
        Console.WriteLine($"[{socket.RemoteEndPoint}]: connected");
#endif
        // Create a PipeReader over the network stream
        var stream = new NetworkStream(socket);
        var reader = PipeReader.Create(stream);
        var writer = PipeWriter.Create(stream);
        while (true)
        {
            ReadResult result = await reader.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;
            while (TryReadLine(ref buffer, out ReadOnlySequence<byte> line))
            {
                // Process the line.
                ProcessLine(line);
                try
                {
                    // writing a sample message to the response
                    var helloBytes = Encoding.ASCII.GetBytes("hello\n");
                    await writer.WriteAsync(helloBytes);
                }
                catch (Exception)
                {
                    throw;
                }
            }
            // Tell the PipeReader how much of the buffer has been consumed.
            reader.AdvanceTo(buffer.Start, buffer.End);
            // Stop reading if there's no more data coming.
            if (result.IsCompleted)
            {
                break;
            }
        }
        // Mark the PipeReader as complete.
        await reader.CompleteAsync();
#if DEBUG
        Console.WriteLine($"[{socket.RemoteEndPoint}]: disconnected");
#endif
    }

    private static bool TryReadLine(ref ReadOnlySequence<byte> buffer, out ReadOnlySequence<byte> line)
    {
        // Look for an EOL in the buffer.
        SequencePosition? position = buffer.PositionOf((byte)'\n');
        if (position == null)
        {
            line = default;
            return false;
        }
        // Skip the line + the \n.
        line = buffer.Slice(0, position.Value);
        buffer = buffer.Slice(buffer.GetPosition(1, position.Value));
        return true;
    }

    private static void ProcessLine(in ReadOnlySequence<byte> buffer)
    {
        foreach (var segment in buffer)
        {
            // Doing some tasks
#if DEBUG
            Console.Write(Encoding.UTF8.GetString(segment.Span));
            Console.WriteLine();
#endif
        }
    }
}
This server listens on a port (389), processes the incoming request, does some work, and then writes a message to the response using PipeReader and PipeWriter.
I'm trying my best to keep memory/heap allocations low (using Span&lt;&gt;, Memory&lt;&gt;, ...) so the codebase stays fast and optimized. But for now, I'm trying to test the production environment with the above code to examine the throughput, meaning: the server resources, my TCP server application itself, the clients and the network.
I'm using Apache JMeter to test (load/stress test).
In some scenarios (sending more than 5000 requests/sec) I get Connection refused error messages in the JMeter logs, but there is no real pressure on the server's or the clients' (JMeter) resources (CPU/memory).
I tried to optimize the server's configuration and changed some TCP-related parameters (found by googling), like MaxUserPort: 65534, TcpTimedWaitDelay: 30, and different backlog sizes, but saw no improvement.
So I'm almost sure there is something network-related going on (packet dropping/rejecting or something like that).
I also turned off the firewall on the testing clients and the server, but I don't have any access to the network configuration (and I don't know what it involves), such as firewalls, ISA, TMG, etc.
_____________
Update 1:
I already increased our clients' ephemeral port range to the maximum using this command:
netsh int ipv4 set dynamic tcp start=5000 num=65535
and now we have this:
netsh int ipv4 show dynamicport tcp
Start Port : 1024
Number of Ports : 64511
We also checked the JMeter logs for any error indicating this situation (ephemeral port exhaustion); at first we saw this message:
Non HTTP response code: java.net.BindException,Non HTTP response
message: Address already in use
But now it's gone, and we don't have a large number of TIME_WAIT ports to worry about.
We are also testing our scenario with SO_LINGER:0 and monitoring TIME_WAIT ports in real time (using some tools), and we are sure this isn't our concern right now.
_____________
So my question is: how can I find out why I can't send more traffic (threads/requests per second from the JMeter clients) to the server to test my TCP server application's performance? For now, the server CPU doesn't go above ~10%.
At this point, is this a network-related problem? How can I be sure about that? For example, can I use a network analyzer (e.g. PRTG Network Monitor) to find any dropped TCP packets? Any other tips are welcome.
Most probably TCP ports are not being recycled fast enough; there is a network parameter which controls how long a connection can stay in the TIME_WAIT state, so you might also want to reduce TcpTimedWaitDelay.
It might also be a good idea to increase the maximum number of TCP connections via the TcpNumConnections parameter.
And last but not least, it may be that JMeter is not capable of sending requests fast enough, so you might need to apply the same tricks on the load generator side. In addition, make sure to follow JMeter Best Practices and monitor CPU/RAM/network/disk/swap usage on the JMeter side, as you may need to switch to Distributed Testing if one machine cannot generate more than 5k requests per second.
I am new to C#, this is my first program :D I was coding in Java in the past and now I'm having a lot of trouble with sending over sockets.
When I try to send over a socket, the data is only sent at the very end of the program:
for example
using System;
using System.Net;
using System.Net.Sockets;
using System.Runtime.InteropServices;
using System.IO;
using System.Text;

namespace HelloWorld
{
    class Program
    {
        public static TcpClient client;
        public static NetworkStream networkStream;

        public static void Main(string[] args)
        {
            client = new TcpClient("62.210.130.212", 35025);
            networkStream = client.GetStream();
            byte[] usernameBytes = Encoding.ASCII.GetBytes("This is just a test bro!");
            networkStream.Write(usernameBytes, 0, usernameBytes.Length);
            networkStream.Flush();
            while (true)
            {
                // Infinite loop
            }
            // THE SOCKET WILL ONLY BE SENT HERE, AT THE END OF THE CODE, WHY ?!
        }
    }
}
Here the data will only be sent when the program shuts down (because of the while loop),
and here
using System;
using System.Net;
using System.Net.Sockets;
using System.Runtime.InteropServices;
using System.IO;
using System.Text;

namespace HelloWorld
{
    class Program
    {
        public static TcpClient client;
        public static NetworkStream networkStream;

        public static void Main(string[] args)
        {
            client = new TcpClient("62.210.130.212", 35025);
            networkStream = client.GetStream();
            byte[] usernameBytes = Encoding.ASCII.GetBytes("This is just a test bro!");
            networkStream.Write(usernameBytes, 0, usernameBytes.Length);
            networkStream.Flush();
        }
    }
}
And in this case it will be sent immediately, because it is the end of the program...
Does anyone know why this happens and how to fix it? I want my data to be sent at any point while the program is running, not just at the end :D
Thank you very much! And thank you for reading this question all the way through.
Julien.
Does anyone know why this happens and how to fix it? I want my data to be sent at any point while the program is running, not just at the end.
Why is this happening?
Due to Nagle's algorithm, the transmission will be delayed briefly (on the order of milliseconds).
That means your code does not leave enough time for the send to complete before it enters the loop.
networkStream.Write(usernameBytes, 0, usernameBytes.Length);
networkStream.Flush();
// the time between here is not enough for nagle's algorithm to complete
while (true) { } // blocks the thread from doing any more work
// so, it will end up sending here instead.
How to fix it?
A Quick Solution - Disable Nagle's Algorithm
As documented in this answer you could set TcpClient.NoDelay to true to disable Nagle's algorithm.
You can accomplish that easily by adding it to your code:
client = new TcpClient("62.210.130.212", 35025);
client.NoDelay = true; // add this line of code to disable Nagle's algorithm
Note: This solution will force an immediate send. However, as your question wasn't clear on the exact requirement, please note that this is probably not the "best" solution.
A Better Solution - Force execution by disposing your resource(s)
If you dispose/close your TcpClient it will accomplish the same thing. In addition, it has the added benefit (and is much better practice) of disposing the resource as soon as you're done using it.
public static void Main(string[] args)
{
    using (TcpClient client = new TcpClient("62.210.130.212", 35025))
    using (NetworkStream networkStream = client.GetStream())
    {
        byte[] usernameBytes = Encoding.ASCII.GetBytes("This is just a test bro!");
        networkStream.Write(usernameBytes, 0, usernameBytes.Length);
    }
    while (true) { } // loop infinitely
}
Note: This doesn't actually send the packet immediately like the solution above; instead it forces the packet to be sent because we are closing the connection. In your case the result is the same, and this is a much better approach that you should get into the habit of using.
What is Nagle's Algorithm?
The Nagle algorithm is designed to reduce network traffic by causing the socket to buffer small packets and then combine and send them in one packet under certain circumstances.
A TCP packet consists of 40 bytes of header plus the data being sent. When small packets of data are sent with TCP, the overhead resulting from the TCP header can become a significant part of the network traffic. On heavily loaded networks, the congestion resulting from this overhead can result in lost datagrams and retransmissions, as well as excessive propagation time caused by congestion. The Nagle algorithm inhibits the sending of new TCP segments when new outgoing data arrives from the user if any previously transmitted data on the connection remains unacknowledged.
The majority of network applications should use the Nagle algorithm.
Setting this property on a User Datagram Protocol (UDP) socket will have no effect.
First, I don't know if Stack Overflow is the best site to post this kind of message, but I don't know of any other sites like it.
In order to properly understand TCP programming in C#, I decided to work through all the possible approaches from scratch. Here is what I want to cover (not in any particular order):
- Simple One Thread Socket Server (this article)
- Simple Multiple Threads Socket Server (I don't know how yet, because threads are complicated)
- Simple Thread Socket Server (put the client management in another thread)
- Multiple Threads Socket Server
- Using tcpListener
- Using async / Await
- Using tasks
The ultimate objective is to learn how to build the best possible TCP server, not by just copy/pasting pieces of code, but by properly understanding everything.
So, this is my first part: a single-threaded TCP server.
Here is my code, but I don't expect anybody will need to correct much, because it's pretty much a copy from MSDN: http://msdn.microsoft.com/en-us/library/6y0e13d3(v=vs.110).aspx
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Sockets;
using System.Text;

namespace SimpleOneThreadSocket
{
    public class ServerSocket
    {
        private int _iPport = -1;
        private static int BUFFER_SIZE = 1024;
        private Socket _listener = null;

        public ServerSocket(int iPort)
        {
            // Create a TCP/IP socket.
            this._listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            // Save the port
            this._iPport = iPort;
        }

        public void Start()
        {
            byte[] buffer = null;
            String sDatasReceived = null;

            // Bind the socket to loopback address
            try
            {
                this._listener.Bind(new System.Net.IPEndPoint(System.Net.IPAddress.Loopback, _iPport));
                this._listener.Listen(2);
            }
            catch (Exception e)
            {
                System.Console.WriteLine(e.ToString());
            }

            // Listening
            try
            {
                Console.WriteLine("Server listening on 127.0.0.1:" + _iPport);
                while (true)
                {
                    Socket client = this._listener.Accept();
                    Console.WriteLine("Incoming connection from : " + IPAddress.Parse(((IPEndPoint)client.RemoteEndPoint).Address.ToString()) + ":" + ((IPEndPoint)client.RemoteEndPoint).Port.ToString());

                    // An incoming connection needs to be processed.
                    while (true)
                    {
                        buffer = new byte[BUFFER_SIZE];
                        int bytesRec = client.Receive(buffer);
                        sDatasReceived += Encoding.ASCII.GetString(buffer, 0, bytesRec);

                        if (sDatasReceived.IndexOf("<EOF>") > -1)
                        {
                            // Show the data on the console.
                            Console.WriteLine("Text received : {0}", sDatasReceived);

                            // Echo the data back to the client.
                            byte[] msg = Encoding.ASCII.GetBytes(sDatasReceived);
                            client.Send(msg);
                            sDatasReceived = "";
                            buffer = null;
                        }
                        else if (sDatasReceived.IndexOf("exit") > -1)
                        {
                            client.Shutdown(SocketShutdown.Both);
                            client.Close();
                            sDatasReceived = "";
                            buffer = null;
                            break;
                        }
                    }
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e.ToString());
            }
        }
    }
}
But I have some questions about it:
The Listen method on Socket has a parameter: backlog. According to MSDN, backlog is the number of available connections. I don't know why, when I pass 0, I can still connect to my server with multiple Telnet sessions. EDIT: 0 and 1 both allow 2 connections (1 current, 1 pending), 2 allows 3 connections (1 current, 2 pending), etc... So I didn't properly understand what MSDN means.
Can you confirm that the Accept method takes each connection one after another, and that's why I see text from different Telnet sessions in my server?
Can you confirm that (my server being a C# library) I can't stop my server (with this kind of code) without killing the process? It could be possible with threads, but that will come later.
If something is wrong in my code, please help me :)
I will come back soon with a simple multi-threaded socket server, but I don't know how yet (I think there is one more step before using threads or async/await).
First off, do your best not to even learn this. If you can possibly use a SignalR server, then do so. There is no such thing as a "simple" socket server at the TCP/IP level.
If you insist on the painful route (i.e., learning proper TCP/IP server design), then there's a lot to learn. First, the MSDN examples are notoriously bad starting points; they barely work and tend to not handle any kind of error conditions, which is absolutely necessary in the real world when working at the TCP/IP level. Think of them as examples of how to call the methods, not examples of socket clients or servers.
I have a TCP/IP FAQ that may help you, including a description of the backlog parameter. This is how many connections the OS will accept on your behalf before your code gets around to accepting them, and it's only a hint anyway.
To answer your other questions: A single call to Accept will accept a single new socket connection. The code as-written has an infinite loop, so it will work like any other infinite loop; it will continue executing until it encounters an exception or its thread is aborted (which happens on process shutdown).
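For reference, the usual next step once you move past one connection at a time is to keep Accept in a loop and hand each accepted socket to its own task. This is only a bare sketch under my own assumptions (Task-based socket extensions, no framing, no real error handling), not a finished server:

// A bare sketch, not a finished server: one accept loop, one task per client,
// no message framing, no timeouts, minimal error handling.
// Uses the Task-based socket extensions (AcceptAsync/ReceiveAsync/SendAsync).
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class AcceptLoopSketch
{
    public static async Task RunAsync(int port)
    {
        var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, port));
        listener.Listen(int.MaxValue); // let the OS manage the backlog

        while (true)
        {
            Socket client = await listener.AcceptAsync();  // one connection per call
            _ = Task.Run(() => HandleClientAsync(client)); // hand it off, keep accepting
        }
    }

    static async Task HandleClientAsync(Socket client)
    {
        using (client)
        {
            var buffer = new byte[8192];
            int read;
            while ((read = await client.ReceiveAsync(new ArraySegment<byte>(buffer), SocketFlags.None)) > 0)
            {
                // Echo back whatever arrived; a real server would frame messages first.
                await client.SendAsync(new ArraySegment<byte>(buffer, 0, read), SocketFlags.None);
            }
        }
    }
}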
If something is wrong in my code, please help me
Oh, yes. There are lots of things wrong with this code. It's an MSDN socket example, after all. :) Off the top of my head:
The buffer size is an arbitrary value, rather low. I would start at 8K myself, so it's possible to get a full Ethernet packet in a single read.
The Bind explicitly uses the loopback address. OK for playing around, I guess, but remember to set this to IPAddress.Any in the real world.
backlog parameter is OK for testing, but should be int.MaxValue on a true server to enable the dynamic backlog in modern server OSes.
Code will fall through the first catch and attempt to Accept after a Bind/Listen failed.
If any exception occurs (e.g., from Listen or Receive), then the entire server shuts down. Note that a client socket being terminated will result in an exception that should be logged/ignored, but it would stop this server.
The read buffer is re-allocated on each time through the loop, even though the old buffer is never used again.
ASCII is a lossy encoding.
If a client cleanly shuts down without sending <EOF>, then the server enters an infinite busy loop.
Received data is not properly separated into messages; it is possible that the echoed message contains all of one message and part of another. In this particular example it doesn't matter (since it's just an echo server and it's using ASCII instead of a real encoding), but this example hides the fact that you need to handle message framing properly in any real-world application (see the framing sketch after this list).
The decoding should be done after the message framing. This isn't necessary for ASCII (a lossy encoding), but it's required for any real encodings like UTF8.
Since the server is only either receiving or sending at any time (and never both), it cannot detect or recover from a half-open socket situation. A half-open socket will cause this server to hang.
The server is only capable of a single connection at a time.
That was just after a brief readthrough. There could easily be more.
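To make the message-framing point above concrete, here is one common scheme, length-prefixing, as a minimal sketch. It is not the only option, and a real server would also validate the length before allocating:

// Illustration of the framing point above (length-prefixing): every message is sent
// as a 4-byte little-endian length followed by the UTF-8 payload. A sketch only; a
// real server would sanity-check the length before allocating.
using System;
using System.IO;
using System.Text;

static class FramingSketch
{
    public static void WriteMessage(Stream stream, string message)
    {
        byte[] payload = Encoding.UTF8.GetBytes(message);
        byte[] header = BitConverter.GetBytes(payload.Length); // 4-byte length prefix
        stream.Write(header, 0, header.Length);
        stream.Write(payload, 0, payload.Length);
    }

    public static string ReadMessage(Stream stream)
    {
        byte[] header = ReadExactly(stream, 4);
        int length = BitConverter.ToInt32(header, 0);
        byte[] payload = ReadExactly(stream, length);
        return Encoding.UTF8.GetString(payload); // decode only after framing
    }

    // A single Read may return fewer bytes than requested, so loop until the
    // requested count has arrived (or the stream ends).
    static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }
}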
Using C#, does broadcasting over UDP repeatedly send its packet, or just once?
I've never used this technology before and I want to temporarily broadcast a small bit of info (a small one-line string) over the LAN. If the receiving end isn't ready, will the broadcast repeat itself, or was it a one-time thing? The code I'm using is from here. What I want is to start the broadcaster on one machine and, a few minutes later, start the receiver and retrieve what the broadcaster sent.
Here is the code
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class Broadcst
{
    public static void Main()
    {
        Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        IPEndPoint iep1 = new IPEndPoint(IPAddress.Broadcast, 9050);
        IPEndPoint iep2 = new IPEndPoint(IPAddress.Parse("192.168.1.255"), 9050);
        string hostname = Dns.GetHostName();
        byte[] data = Encoding.ASCII.GetBytes(hostname);
        sock.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.Broadcast, 1);
        sock.SendTo(data, iep1);
        sock.SendTo(data, iep2);
        sock.Close();
    }
}
UDP by design only sends a packet once. It has no concept of handshakes (unlike TCP), error correction, or transmission guarantees. You can't even be sure that your data gets to where you send it unless you manually request checksums or something like that.
Wikipedia has a nice section on this: Reliability and congestion control solutions in UDP.
So, yes, you will need to implement transmission guarantee code if you want reliability. But what if the message from the recipient saying that the data was received is delayed? Well, then you need to implement some kind of timeout. What if the message gets lost? You need to resend the data to the recipient. How do you know if the recipient gets it this time? Etc...
If you don't want to do this, then I'd suggest looking into TCP which automatically manages this for you.
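If you do roll your own, a bare-bones sketch of the send/ack/retry idea looks roughly like this. The timeout, retry count, and the assumption that the receiver replies with the literal string "ACK" are all placeholders of mine, not part of the linked code:

// Bare-bones sketch of "send, wait for an ack, retry on timeout" over UDP.
// The timeout, retry count, and the expectation that the receiver replies with
// the literal string "ACK" are all assumptions made for this example.
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

static class ReliableSendSketch
{
    public static bool SendWithAck(IPEndPoint target, string message, int retries = 3, int timeoutMs = 500)
    {
        using (var udp = new UdpClient())
        {
            udp.Client.ReceiveTimeout = timeoutMs;
            byte[] payload = Encoding.ASCII.GetBytes(message);

            for (int attempt = 0; attempt < retries; attempt++)
            {
                udp.Send(payload, payload.Length, target);
                try
                {
                    var remote = new IPEndPoint(IPAddress.Any, 0);
                    byte[] reply = udp.Receive(ref remote);
                    if (Encoding.ASCII.GetString(reply) == "ACK")
                        return true; // receiver confirmed delivery
                }
                catch (SocketException)
                {
                    // Timed out waiting for the ack; fall through and resend.
                }
            }
            return false; // gave up after all retries
        }
    }
}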
I am doing some basic Socket messaging. I have a routine that works well but there is a problem under load.
I'm using UDP to do a connectionless SendTo to basically perform a ping-like operation and see if any of my listeners are out there on the LAN. Ideally I would just use the broadcast address, but wireless routers don't seem to relay my broadcast. My workaround is to iterate through all IPs on the subnet and send my datagram to each IP. The other PCs are listening, and if they get the message they will reply; that is how I get peers to find each other. Here is the code inside the loop which sends the datagram to each IP in the subnet.
string msgStr = "some message here...";
byte[] sendbuf = Encoding.ASCII.GetBytes(msgStr);
Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
socket.Blocking = true;
socket.SendTo(sendbuf, remoteEndPt);
//socket.Close();
This works, but when the subnet range is large, say 255.255.0.0 (meaning ~60,000 IPs to iterate through), I will eventually get a SocketException with error code 10022, meaning "Invalid Argument". This tends to happen after ~10,000 or so successful sends, and then I start to see the error. Also, the router I use at work handles it (it is presumably a high-powered router), but the cheap one in my lab is the one that produces the error.
If I put in a wait time after catching the SocketException and before resuming the loop it will typically recover but eventually I'll get the error again.
I think what is happening is that the buffer on the router gets full and I cannot send any more data. The higher-quality one at work can handle it but the cheap one gets bogged down. Does that sound plausible?
A couple of questions:
1) When using SendTo in a connectionless manner, do I need to call Close() on my Socket?
I haven't seen any benefit in calling Close(), but when I do call Close() it severely slows down my iteration (I have it commented out above because it slows things down a lot). Does this make sense?
2) Is there a way for me to tell that I should wait before trying to send more data? It doesn't seem right to just catch the exception when I still don't know what's causing it.
Thanks, J.
I am not sure it is only the router; I suspect you are also running into some limit in the OS...
Any reason you are creating the Socket every time you send?
Just reuse it...
Anyway, according to http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.aspx it is a good idea to call Shutdown() and then Close() on the Socket... perhaps not with every send, but every 255 IPs or so...
Check out UdpClient - that could make the implementation easier / more robust.
EDIT - as per comment:
If you want a Socket reuse "cache"... this, for example, would make sure that a specific Socket is only reused after 256 checks...
// Needs: using System.Collections.Generic; using System.Net.Sockets;
// build/fill your Socket queue, for example in the constructor
class SocketExample
{
    Queue<Socket> a = new Queue<Socket>();

    SocketExample ()
    {
        int ii = 0, C = 256;
        for (ii = 0; ii < C; ii++) // note: increment ii, not C
        {
            a.Enqueue (new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp));
        }
    }

    // in your function you just dequeue a Socket and use it,
    // after you are finished you enqueue it again
    void CheckNetIP (/* some parameters... */)
    {
        Socket S = a.Dequeue();
        // do whatever you want to do...
        // IF there is no exception
        a.Enqueue(S);
    }
}