How can I detect that a client has disconnected from my server?
I have the following code in my AcceptCallBack method
static Socket handler = null;

public static void AcceptCallback(IAsyncResult ar)
{
    // Accept the incoming connection
    Socket listener = (Socket)ar.AsyncState;
    handler = listener.EndAccept(ar);
}
I need to find a way to discover as soon as possible that the client has disconnected from the handler Socket.
I've tried:
handler.Available;
handler.Send(new byte[1], 0, SocketFlags.None);
handler.Receive(new byte[1], 0, SocketFlags.None);
The above approaches work when you are connecting to a server and want to detect when the server disconnects but they do not work when you are the server and want to detect client disconnection.
Any help will be appreciated.
Since there are no events available to signal when the socket is disconnected, you will have to poll it at a frequency that is acceptable to you.
Using this extension method, you can have a reliable method to detect if a socket is disconnected.
static class SocketExtensions
{
    public static bool IsConnected(this Socket socket)
    {
        try
        {
            return !(socket.Poll(1, SelectMode.SelectRead) && socket.Available == 0);
        }
        catch (SocketException) { return false; }
    }
}
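For example, a minimal polling sketch using the extension above; the one-second interval and the background thread are my own assumptions, not part of the original answer:
// Requires System.Threading. Polls the accepted 'handler' socket from the question.
var pollThread = new Thread(() =>
{
    while (handler != null && handler.IsConnected())
    {
        Thread.Sleep(1000); // check once per second, or whatever frequency is acceptable to you
    }
    Console.WriteLine("Client disconnected.");
});
pollThread.IsBackground = true;
pollThread.Start();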
Someone mentioned the keepAlive capability of TCP sockets.
Here it is nicely described:
http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
I'm using it this way: after the socket is connected, I'm calling this function, which sets keepAlive on. The keepAliveTime parameter specifies the timeout, in milliseconds, with no activity until the first keep-alive packet is sent. The keepAliveInterval parameter specifies the interval, in milliseconds, between when successive keep-alive packets are sent if no acknowledgement is received.
// 'socket' is the connected Socket, stored as a field of the enclosing class.
// The byte array is the tcp_keepalive structure expected by SIO_KEEPALIVE_VALS:
// three consecutive uints: onoff, keepalivetime, keepaliveinterval.
void SetKeepAlive(bool on, uint keepAliveTime, uint keepAliveInterval)
{
    int size = Marshal.SizeOf(new uint());
    var inOptionValues = new byte[size * 3];
    BitConverter.GetBytes((uint)(on ? 1 : 0)).CopyTo(inOptionValues, 0);
    BitConverter.GetBytes(keepAliveTime).CopyTo(inOptionValues, size);
    BitConverter.GetBytes(keepAliveInterval).CopyTo(inOptionValues, size * 2);
    socket.IOControl(IOControlCode.KeepAliveValues, inOptionValues, null);
}
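For what it's worth, a usage sketch with example values (these particular numbers are just an illustration, not from the original answer):
// Once connected: send the first keep-alive probe after 10 s of inactivity,
// then re-probe every 1 s until an ACK arrives or the connection is declared dead.
SetKeepAlive(true, 10000, 1000);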
I'm also using asynchronous reading:
socket.BeginReceive(packet.dataBuffer, 0, 128,
SocketFlags.None, new AsyncCallback(OnDataReceived), packet);
And in the callback I catch the timeout SocketException, which is raised when the socket doesn't get an ACK after a keep-alive packet.
public void OnDataReceived(IAsyncResult asyn)
{
    try
    {
        SocketPacket theSockId = (SocketPacket)asyn.AsyncState;
        int iRx = socket.EndReceive(asyn);
    }
    catch (SocketException ex)
    {
        SocketExceptionCaught(ex);
    }
}
This way, I'm able to safely detect disconnection between TCP client and server.
This is simply not possible. There is no physical connection between you and the server (except in the extremely rare case where you are connecting between two computers with a loopback cable).
When the connection is closed gracefully, the other side is notified. But if the connection is dropped some other way (say the user's connection goes down), then the server won't know until it times out (or tries to write to the connection and the ACK times out). That's just the way TCP works and you have to live with it.
Therefore, "instantly" is unrealistic. The best you can do is within the timeout period, which depends on the platform the code is running on.
EDIT:
If you are only looking for graceful connections, then why not just send a "DISCONNECT" command to the server from your client?
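For illustration only, a rough sketch of such a graceful-disconnect message; the command text, the framing, and the names clientSocket/buffer/bytesRead are placeholders, not something from the original answers:
// Client side: announce the disconnect, then shut the socket down gracefully.
byte[] bye = Encoding.ASCII.GetBytes("DISCONNECT\n");
clientSocket.Send(bye);
clientSocket.Shutdown(SocketShutdown.Both);
clientSocket.Close();

// Server side, inside the receive callback: treat the command as an explicit goodbye.
string text = Encoding.ASCII.GetString(buffer, 0, bytesRead);
if (text.Trim() == "DISCONNECT")
{
    handler.Close(); // 'handler' is the accepted socket from the question
}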
"That's just the way TCP works and you have to live with it."
Yup, you're right. It's a fact of life I've come to realize. You will see the same behavior exhibited even in professional applications utilizing this protocol (and even others). I've even seen it occur in online games; your buddy says "goodbye", and he appears to be online for another 1-2 minutes until the server "cleans house".
You can use the methods suggested here, or implement a "heartbeat", as also suggested. I chose the former. But if I did choose the latter, I'd simply have the server "ping" each client every so often with a single byte and see whether we get a timeout or no response. You could even use a background thread to achieve this with precise timing. Maybe even a combination could be implemented in some sort of options list (enum flags or something) if you're really worried about it. But it's not so big a deal to have a little delay in updating the server, as long as you DO update. It's the internet, and no one expects it to be magic! :)
Implementing a heartbeat in your system might be a solution. This is only possible if both client and server are under your control. You can keep a DateTime object tracking the time when the last bytes were received from the socket, and assume that a socket that has not responded within a certain interval is lost. This will only work if you have a heartbeat/custom keep-alive implemented.
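A rough sketch of that idea, under the same assumptions (both ends are under your control and every received chunk, including heartbeats, updates the timestamp); the 30-second threshold is just an example:
// Track the last time any bytes arrived from the peer.
private DateTime lastReceived = DateTime.UtcNow;
private readonly TimeSpan heartbeatTimeout = TimeSpan.FromSeconds(30);

// Call this from the receive callback whenever data (or a heartbeat) arrives.
private void MarkAlive()
{
    lastReceived = DateTime.UtcNow;
}

// Poll this periodically (e.g. from a timer) to decide whether the peer is gone.
private bool IsPeerAlive()
{
    return DateTime.UtcNow - lastReceived < heartbeatTimeout;
}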
I've found another workaround for this that is quite useful!
If you use the asynchronous methods for reading data from the network socket (I mean the BeginReceive/EndReceive methods), then whenever a connection is terminated one of these situations appears: either a message is received with no data (you can see this via Socket.Available; even though BeginReceive is triggered, its value will be zero), or the Socket.Connected value becomes false in this call (don't try to use EndReceive then).
I'm posting the function I used; I think you can see what I meant better from it:
private void OnRecieve(IAsyncResult parameter)
{
    Socket sock = (Socket)parameter.AsyncState;
    if (!sock.Connected || sock.Available == 0)
    {
        // Connection is terminated, either by force or willingly
        return;
    }

    sock.EndReceive(parameter);
    sock.BeginReceive(..., ..., ..., ..., new AsyncCallback(OnRecieve), sock);
    // To handle further commands sent by client.
    // "..." zones might change in your code.
}
This worked for me. The key is that you need a separate thread to analyze the socket state by polling; doing it in the same thread as the socket fails to detect the disconnection.
//open or receive a server socket - TODO your code here
socket = new Socket(....);
//enable the keep alive so we can detect closure
socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
//create a thread that checks every 5 seconds if the socket is still connected. TODO add your thread starting code
void MonitorSocketsForClosureWorker() {
    DateTime nextCheckTime = DateTime.Now.AddSeconds(5);
    while (!exitSystem) {
        if (nextCheckTime < DateTime.Now) {
            try {
                if (socket != null) {
                    if (socket.Poll(5000, SelectMode.SelectRead) && socket.Available == 0) {
                        //socket not connected, close it if it's still running
                        socket.Close();
                        socket = null;
                    } else {
                        //socket still connected
                    }
                }
            } catch {
                socket.Close();
            } finally {
                nextCheckTime = DateTime.Now.AddSeconds(5);
            }
        }
        Thread.Sleep(1000);
    }
}
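Filling in the "TODO" above, the worker could be started with something like this (making it a background thread so it won't keep the process alive is my own choice here):
var monitorThread = new Thread(MonitorSocketsForClosureWorker) { IsBackground = true };
monitorThread.Start();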
The example code here
http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.connected.aspx
shows how to determine whether the Socket is still connected without sending any data.
If you called Socket.BeginReceive() on the server program and then the client closed the connection "gracefully", your receive callback will be called and EndReceive() will return 0 bytes. These 0 bytes mean that the client "may" have disconnected. You can then use the technique shown in the MSDN example code to determine for sure whether the connection was closed.
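As a sketch of what that looks like on the server side (the callback name and the buffer handling here are placeholders, not the MSDN sample itself):
private static void ReceiveCallback(IAsyncResult ar)
{
    Socket handler = (Socket)ar.AsyncState;
    int bytesRead = handler.EndReceive(ar);
    if (bytesRead == 0)
    {
        // The client may have closed the connection gracefully; confirm with the
        // zero-byte non-blocking Send technique from the MSDN sample if needed.
        handler.Close();
        return;
    }
    // ...process 'bytesRead' bytes, then post the next BeginReceive...
}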
Expanding on comments by mbargiel and mycelo on the accepted answer, the following can be used with a non-blocking socket on the server end to inform whether the client has shut down.
This approach does not suffer the race condition that affects the Poll method in the accepted answer.
// Determines whether the remote end has called Shutdown
public bool HasRemoteEndShutDown
{
    get
    {
        try
        {
            int bytesRead = socket.Receive(new byte[1], SocketFlags.Peek);
            if (bytesRead == 0)
                return true;
        }
        catch
        {
            // For a non-blocking socket, a SocketException with
            // code 10035 (WSAEWOULDBLOCK) indicates no data available.
        }
        return false;
    }
}
The approach is based on the fact that the Socket.Receive method returns zero immediately after the remote end shuts down its socket and we've read all of the data from it. From Socket.Receive documentation:
If the remote host shuts down the Socket connection with the Shutdown method, and all available data has been received, the Receive method will complete immediately and return zero bytes.
If you are in non-blocking mode, and there is no data available in the protocol stack buffer, the Receive method will complete immediately and throw a SocketException.
The second point explains the need for the try-catch.
Use of the SocketFlags.Peek flag leaves any received data untouched for a separate receive mechanism to read.
The above will work with a blocking socket as well, but be aware that the code will block on the Receive call (until data is received or the receive timeout elapses, again resulting in a SocketException).
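A minimal usage sketch, assuming 'socket' is the accepted server-side Socket held by the same class as the property above:
socket.Blocking = false; // non-blocking, so the Peek above never stalls
if (HasRemoteEndShutDown)
{
    // the remote end called Shutdown (or Close); release our side too
    socket.Close();
}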
The above answers can be summarized as follows:
The Socket.Connected property determines the socket state based on the last I/O operation, so it can't detect the current disconnection state until you manually close the connection or the remote end gracefully closes (shuts down) the socket.
So we can use the function below to check the connection state:
bool IsConnected(Socket socket)
{
    try
    {
        if (socket == null) return false;
        return !((socket.Poll(5000, SelectMode.SelectRead) && socket.Available == 0) || !socket.Connected);
    }
    catch (SocketException)
    {
        return false;
    }

    // The above is shorthand for:
    /*
    try
    {
        bool state1 = socket.Poll(5000, SelectMode.SelectRead);
        bool state2 = (socket.Available == 0);
        if ((state1 && state2) || !socket.Connected)
            return false;
        else
            return true;
    }
    catch (SocketException)
    {
        return false;
    }
    */
}
The above check also needs to account for the poll response time (the blocking time).
Also, as the Microsoft documentation notes, this Poll method cannot detect certain kinds of connection problems, such as a broken network cable or the remote host being shut down ungracefully.
And, as said above, there is a race condition between Socket.Poll and Socket.Available which may report a false disconnect.
The best way, according to the Microsoft documentation, is to attempt to send or receive data to detect these kinds of errors.
The code below is from the Microsoft documentation:
// This is how you can determine whether a socket is still connected.
bool IsConnected(Socket client)
{
    bool blockingState = client.Blocking; // save the socket's blocking state
    bool isConnected = true;
    try
    {
        byte[] tmp = new byte[1];
        client.Blocking = false;
        client.Send(tmp, 0, 0); // make a nonblocking, zero-byte Send call (dummy)
        //Console.WriteLine("Connected!");
    }
    catch (SocketException e)
    {
        // 10035 == WSAEWOULDBLOCK
        if (e.NativeErrorCode.Equals(10035))
        {
            //Console.WriteLine("Still Connected, but the Send would block");
        }
        else
        {
            //Console.WriteLine("Disconnected: error code {0}!", e.NativeErrorCode);
            isConnected = false;
        }
    }
    finally
    {
        client.Blocking = blockingState;
    }
    //Console.WriteLine("Connected: {0}", client.Connected);
    return isConnected;
}
Here are the relevant notes from the Microsoft docs:
The socket.Connected property gets the connection state of the Socket as of the last I/O operation. When it returns false, the Socket was either never connected, or is no longer connected.
Connected is not thread-safe; it may return true after an operation is aborted when the Socket is disconnected from another thread.
The value of the Connected property reflects the state of the connection as of the most recent operation.
If you need to determine the current state of the connection, make a nonblocking, zero-byte Send call. If the call returns successfully or throws a WSAEWOULDBLOCK error code (10035), then the socket is still connected; otherwise, the socket is no longer connected.
Can't you just use Select?
Use select on a connected socket. If the select returns with your socket marked as ready but the subsequent Receive returns 0 bytes, that means the client has closed the connection. AFAIK, that is the fastest way to determine whether the client disconnected.
I do not know C# so just ignore if my solution does not fit in C# (C# does provide select though) or if I had misunderstood the context.
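Since the question is about C#, here is a rough translation of that suggestion using Socket.Select; the one-second timeout and the buffer size are arbitrary choices for the sketch:
// Requires System.Collections.Generic and System.Net.Sockets.
// 'handler' is the accepted socket; Select takes its timeout in microseconds.
var checkRead = new List<Socket> { handler };
Socket.Select(checkRead, null, null, 1000000); // wait up to 1 second

if (checkRead.Contains(handler))
{
    byte[] buffer = new byte[1024];
    int read = handler.Receive(buffer);
    if (read == 0)
    {
        // readable but zero bytes: the client has disconnected
        handler.Close();
    }
}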
Using the SetSocketOption method, you can enable KeepAlive, which will let you know whenever a Socket gets disconnected:
Socket _connectedSocket = this._sSocketEscucha.EndAccept(asyn);
_connectedSocket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, 1);
http://msdn.microsoft.com/en-us/library/1011kecd(v=VS.90).aspx
Hope it helps!
Ramiro Rinaldi
I had the same problem; try this:
void client_handler(Socket client) // set 'KeepAlive' to true on the socket first
{
    while (true)
    {
        try
        {
            if (client.Connected)
            {
                // still connected as of the last I/O operation
            }
            else
            {
                // client disconnected
                break;
            }
        }
        catch (Exception)
        {
            client.Poll(4000, SelectMode.SelectRead); // try to get the current state
        }
        Thread.Sleep(1000); // avoid spinning the CPU while polling
    }
}
This is in VB, but it seems to work well for me. It looks for a 0 byte return like the previous post.
' ByteData and ClientSocket are assumed to be class-level fields, as in the original.
Private Sub RecData(ByVal AR As IAsyncResult)
    Dim Socket As Socket = AR.AsyncState

    If Socket.Connected = False And Socket.Available = 0 Then
        Debug.Print("Detected Disconnected Socket - " + Socket.RemoteEndPoint.ToString)
        Exit Sub
    End If

    Dim BytesRead As Int32 = Socket.EndReceive(AR)
    If BytesRead = 0 Then
        Debug.Print("Detected Disconnected Socket - Bytes Read = 0 - " + Socket.RemoteEndPoint.ToString)
        UpdateText("Client " + Socket.RemoteEndPoint.ToString + " has disconnected from Server.")
        Socket.Close()
        Exit Sub
    End If

    Dim msg As String = System.Text.ASCIIEncoding.ASCII.GetString(ByteData)
    Erase ByteData
    ReDim ByteData(1024)
    ClientSocket.BeginReceive(ByteData, 0, ByteData.Length, SocketFlags.None, New AsyncCallback(AddressOf RecData), ClientSocket)
    UpdateText(msg)
End Sub
You can also check the socket's Connected property if you were to poll.
I'm using the asynchronous method BeginSend and I need some sort of a timeout mechanism. What I've implemented works fine for connect and receive timeouts, but I have a problem with the BeginSend callback. Even a timeout of 25 seconds is often not enough and gets exceeded. This seems very strange to me and points towards a different cause.
public void Send(String data)
{
if (client.Connected)
{
// Convert the string data to byte data using ASCII encoding.
byte[] byteData = Encoding.ASCII.GetBytes(data);
client.NoDelay = true;
// Begin sending the data to the remote device.
IAsyncResult res = client.BeginSend(byteData, 0, byteData.Length, 0,
new AsyncCallback(SendCallback), client);
if (!res.IsCompleted)
{
sendTimer = new System.Threading.Timer(SendTimeoutCallback, null, 10000, Timeout.Infinite);
}
}
else MessageBox.Show("No connection to target! Send");
}
private void SendCallback(IAsyncResult ar)
{
if (Interlocked.CompareExchange(ref sendTimeoutflag, 1, 0) != 0)
{
// the flag was set elsewhere, so return immediately.
return;
}
sendTimeoutflag = 0; //needs to be reset back to 0 for next reception
// we set the flag to 1, indicating it was completed.
if (sendTimer != null)
{
// stop the timer from firing.
sendTimer.Dispose();
}
try
{
// Retrieve the socket from the state object.
Socket client = (Socket)ar.AsyncState;
// Complete sending the data to the remote device.
int bytesSent = client.EndSend(ar);
ef.updateUI("Sent " + bytesSent.ToString() + " bytes to server." + "\n");
}
catch (Exception e)
{
MessageBox.Show(e.ToString());
}
}
private void SendTimeoutCallback(object obj)
{
if (Interlocked.CompareExchange(ref sendTimeoutflag, 2, 0) != 0)
{
// the flag was set elsewhere, so return immediately.
return;
}
// we set the flag to 2, indicating a timeout was hit.
sendTimer.Dispose();
client.Close(); // closing the Socket cancels the async operation.
MessageBox.Show("Connection to the target has been lost! SendTimeoutCallback");
}
I've tested timeout values up to 30 seconds. The value of 30 seconds has proved to be the only one that never times out, but that just seems like overkill, and I believe there's a different underlying cause. Any ideas as to why this could be happening?
Unfortunately, there's not enough code to completely diagnose this. You don't even show the declaration of sendTimeoutflag. The example isn't self-contained, so there's no way to test it. And you're not clear about exactly what happens (e.g. do you just get the timeout, do you complete a send and still get a timeout, does something else happen?).
That said, I see at least one serious bug in the code, which is your use of the sendTimeoutflag. The SendCallback() method sets this flag to 1, but it immediately sets it back to 0 again (this time without the protection of Interlocked.CompareExchange()). Only after it's set the value to 0 does it dispose the timer.
This means that even when you successfully complete the callback, the timeout timer is nearly guaranteed to have no idea and to close the client object anyway.
You can fix this specific issue by moving the assignment sendTimeoutflag = 0; to a point after you've actually completed the send operation, e.g. at the end of the callback method. And even then only if you take steps to ensure that the timer callback cannot execute past that point (e.g. wait for the timer's dispose to complete).
Note that even having fixed that specific issue, you may still have other bugs. Frankly, it's not clear why you want a timeout in the first place. Nor is it clear why you want to use lock-free code to implement your timeout logic. More conventional locking (i.e. Monitor-based with the lock statement) would be easier to implement correctly and would likely not impose a noticeable performance penalty.
And I agree with the suggestion that you would be better served by using the async/await pattern instead of explicitly dealing with callback methods (but of course that would mean using a higher-level I/O object, since Socket itself doesn't support async/await).
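To illustrate that suggestion, here is a hedged sketch of a send with a timeout built on NetworkStream and async/await; the method name, the TimeoutException, and the Task.WhenAny approach are my own choices, not code from the question:
// Requires System, System.Net.Sockets and System.Threading.Tasks.
private static async Task SendWithTimeoutAsync(Socket client, byte[] data, int timeoutMs)
{
    using (var stream = new NetworkStream(client, ownsSocket: false))
    {
        Task sendTask = stream.WriteAsync(data, 0, data.Length);
        Task finished = await Task.WhenAny(sendTask, Task.Delay(timeoutMs));
        if (finished != sendTask)
        {
            client.Close(); // closing the Socket faults the pending send, as in the timer approach
            throw new TimeoutException("Send timed out");
        }
        await sendTask; // propagate any exception from the completed send
    }
}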
I'm writing an application for Windows Phone and I need to communicate with a server and transmit data. The SERVER is written in C++ and I cannot modify it. The CLIENT is what I have to write. The server is designed so that the client connects to it and transmits data; the connection remains open for the whole transmission. Writing my code in C#, I am able to receive data from the server, but after the first receive, the data I read in the buffer is always the same. So I need a way to flush the input buffer so I can receive the new data (data is sent continuously). I'm using the class defined here:
http://msdn.microsoft.com/en-us/library/windowsphone/develop/hh202858%28v=vs.105%29.aspx
thanks a lot !!
I used this code for Receiving in the SocketClient.cs :
public string Receive()
{
string response = "Operation Timeout";
// We are receiving over an established socket connection
if (_socket != null)
{
// Create SocketAsyncEventArgs context object
SocketAsyncEventArgs socketEventArg = new SocketAsyncEventArgs();
socketEventArg.RemoteEndPoint = _socket.RemoteEndPoint;
// Setup the buffer to receive the data
socketEventArg.SetBuffer(new Byte[MAX_BUFFER_SIZE], 0, MAX_BUFFER_SIZE);
// Inline event handler for the Completed event.
// Note: This event handler was implemented inline in order to make
// this method self-contained.
socketEventArg.Completed += new EventHandler<SocketAsyncEventArgs>(delegate(object s, SocketAsyncEventArgs e)
{
if (e.SocketError == SocketError.Success)
{
// *********************************************
// THIS part of the code was added to receive
// a vector of 3 double
Double[] OdomD = new Double[3];
for (int i = 0; i < 3; i++)
{
OdomD[i] = BitConverter.ToDouble(e.Buffer, 8 * i);
}
// *********************************************
}
else
{
response = e.SocketError.ToString();
}
_clientDone.Set();
});
// Sets the state of the event to nonsignaled, causing threads to block
_clientDone.Reset();
// Make an asynchronous Receive request over the socket
_socket.ReceiveAsync(socketEventArg);
// Block the UI thread for a maximum of TIMEOUT_MILLISECONDS milliseconds.
// If no response comes back within this time then proceed
_clientDone.WaitOne(TIMEOUT_MILLISECONDS);
}
else
{
response = "Socket is not initialized";
}
return response;
}
The Connect() method is exactly the same reported in the link above. So when the application start, the Connect() method is called as follow:
SocketClient client = new SocketClient();
// Attempt to connect to server for receiving data
Log(String.Format("Connecting to server '{0}' over port {1} (data) ...", txtRemoteHost.Text, 4444), true);
result = client.Connect(txtRemoteHost.Text, 4444);
Log(result, false);
That is done just once at the beginning, then I need receive this array of 3 double that is updated every second. So I use:
Log("Requesting Receive ...", true);
result = client.Receive();
Log(result, false);
The problem is that even if I debug the code and stop the execution inside Receive(), I always read the same value, which is the first value sent by the server. What I'm expecting is that every time I call client.Receive(), I get the new value, but this is not happening.
I had a similar problem when writing the same client in a Matlab environment. I solved it by calling flushinput(t) before reading the input buffer; that way I was always able to read the last data sent by the server. I'm looking for a function similar to that one.
The size of the input buffer is fixed, equal to the data I'm expecting to receive, which in this case is 24 bytes (3 * sizeof(double)).
Thanks a lot for you time !!
oleksii is right, you should call client.Receive() in a loop. You can choose to start a thread that covers the receive section of your code. Also note that client.Receive() will keep trying to receive from the buffer, and it will get stuck if there is no data available.
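For example, a rough sketch of that loop, reusing the question's SocketClient/Log names and assuming a simple flag to stop it:
// Run the blocking Receive() calls on a background thread, one per update.
var receiveThread = new Thread(() =>
{
    while (keepReceiving) // 'keepReceiving' would be a volatile bool you control
    {
        string result = client.Receive(); // blocks up to TIMEOUT_MILLISECONDS
        Log(result, false);
    }
});
receiveThread.IsBackground = true;
receiveThread.Start();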
The main question was how to clear the input buffer, or am I wrong?
Nevertheless, since (as seen from your posted code) you don't have a dedicated buffer but receive through the SocketAsyncEventArgs, you could clear it with:
Array.Clear(e.Buffer, 0, e.Buffer.Length);
I have a weird problem occurring at times.
I have a synchronous socket that I connect to and send data to fine.
do
{
    try
    {
        // blocking
        int bytesRead = sender.Receive(bytesReceived);
        // process the received bytes
    }
    catch (SocketException soex)
    {
        throw new Exception(String.Format("Socket exception..." + Environment.NewLine + "[Code: {0}] {1}", soex.ErrorCode, soex.Message));
    }
}
while (sender.Poll(1500, SelectMode.SelectRead) && sender.Available > 0);
I don't want to stay in the Receive forever since it is blocking, which is why I put a Poll with a wait time, so the packets have time to arrive on the wire; together with Available, I can break out when there's really no more data.
The problem I am getting is that the while condition (in the do-while) returns false because sender.Available > 0 evaluates to FALSE, even though the socket does have bytes.
I put a Debug.WriteLine after the loop to print out how many bytes are available, and I get a nonzero number.
Any idea? I'm puzzled.
Maybe sender.Poll returns false even though there is still data to be read. Maybe you could just change your while condition to sender.Available > 0 and call sender.Poll inside the loop.
EDIT:
Maybe this is enough (pseudo-code):
int length;
do {
length = socket.Read(buffer);
// process "buffer"
} while (length > 0);
You may need to set a socket timeout to prevent the Read call from blocking too long.
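For instance, a short sketch of that timeout on the question's blocking socket; the 1.5-second value just mirrors the Poll interval used above:
sender.ReceiveTimeout = 1500; // milliseconds; 0 means block indefinitely
try
{
    int bytesRead = sender.Receive(bytesReceived);
    // process the received bytes
}
catch (SocketException ex) when (ex.SocketErrorCode == SocketError.TimedOut)
{
    // no data arrived within the timeout; decide whether to retry or give up
}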
I'm struggling a bit with socket programming (something I'm not at all familiar with) and I can't find anything which helps from google or MSDN (awful). Apologies for the length of this.
Basically I have an existing service which receives and responds to requests over UDP. I can't change this at all.
I also have a client within my webapp which dispatches requests to that service and listens for the responses. The existing client I've been given is a singleton which creates a socket and an array of response slots, and then creates a background thread with an infinitely looping method that makes "sock.Receive()" calls and pushes the data received into the slot array. All kinds of things about this seem wrong to me, and the infinite thread breaks my unit testing, so I'm trying to replace this service with one which makes its sends/receives asynchronously instead.
Point 1: Is this the right approach? I want a non-blocking, scalable, thread-safe service.
My first attempt is roughly like this, which sort of worked but the data I got back was always shorter than expected (i.e. the buffer did not have the number of bytes requested) and seemed to throw exceptions when processed.
private Socket MyPreConfiguredSocket;
public object Query()
{
//build a request
this.MyPreConfiguredSocket.SendTo(MYREQUEST, packet.Length, SocketFlags.Multicast, this._target);
IAsyncResult h = this._sock.BeginReceiveFrom(response, 0, BUFFER_SIZE, SocketFlags.None, ref this._target, new AsyncCallback(ARecieve), this._sock);
if (!h.AsyncWaitHandle.WaitOne(TIMEOUT)) { throw new Exception("Timed out"); }
//process response data (always shortened)
}
private void ARecieve (IAsyncResult result)
{
int bytesreceived = (result as Socket).EndReceiveFrom(result, ref this._target);
}
My second attempt was based on more google trawling and this recursive pattern I frequently saw, but this version always times out! It never gets to ARecieve.
public object Query()
{
//build a request
this.MyPreConfiguredSocket.SendTo(MYREQUEST, packet.Length, SocketFlags.Multicast, this._target);
State s = new State(this.MyPreConfiguredSocket);
this.MyPreConfiguredSocket.BeginReceiveFrom(s.Buffer, 0, BUFFER_SIZE, SocketFlags.None, ref this._target, new AsyncCallback(ARecieve), s);
if (!s.Flag.WaitOne(10000)) { throw new Exception("Timed out"); } //always thrown
//process response data
}
private void ARecieve (IAsyncResult result)
{
//never gets here!
State s = (result as State);
int bytesreceived = s.Sock.EndReceiveFrom(result, ref this._target);
if (bytesreceived > 0)
{
s.Received += bytesreceived;
this._sock.BeginReceiveFrom(s.Buffer, s.Received, BUFFER_SIZE, SocketFlags.None, ref this._target, new AsyncCallback(ARecieve), s);
}
else
{
s.Flag.Set();
}
}
private class State
{
public State(Socket sock)
{
this._sock = sock;
this._buffer = new byte[BUFFER_SIZE];
this._buffer.Initialize();
}
public Socket Sock;
public byte[] Buffer;
public ManualResetEvent Flag = new ManualResetEvent(false);
public int Received = 0;
}
Point 2: So clearly I'm getting something quite wrong.
Point 3: I'm not sure if I'm going about this right. How does the data coming from the remote service even get to the right listening thread? Do I need to create a socket per request?
Out of my comfort zone here. Need help.
Not the solution for you, just a suggestion: come up with the simplest code that works, peeling off all the threading/events/etc. From there, start adding needed, and only needed, complexity. My experience has always been that in the process I'd find what I was doing wrong.
So is your program's pseudo-outline as follows?
Socket MySocket;
Socket ResponceSocket;
byte[] Request;
byte[] Responce;
public byte[] GetUDPResponce()
{
this.MySocket.Send(Request).To(ResponceSocket);
this.MySocket.Receive(Responce).From(ResponceSocket);
return Responce;
}
I'll try to help!
The second code post is the one we can work with, and the way forward.
But you are right, the documentation is not the best.
Do you know for sure that you get a response to the message you send? Remove the asynchronous behavior from the socket and just try to send and receive synchronously (even though this may block your thread for now). Once you know this behavior is working, edit your question and post that code, and I'll help you with the threading model. Once the networking portion, i.e. the send/receive, is working, the threading model is pretty straightforward.
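As a starting point, a minimal synchronous sketch along those lines; the host, port, and request bytes are placeholders, and UdpClient is used here just to keep the test short:
// Requires System.Net and System.Net.Sockets.
private static byte[] QueryOnce(string host, int port, byte[] request)
{
    var target = new IPEndPoint(IPAddress.Parse(host), port);
    using (var udp = new UdpClient())
    {
        udp.Client.ReceiveTimeout = 5000; // fail fast instead of blocking forever
        udp.Connect(target);
        udp.Send(request, request.Length);
        IPEndPoint remote = null;
        return udp.Receive(ref remote); // blocks until a datagram arrives or the timeout throws
    }
}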
One POSSIBLE issue is that your send operation goes to the server, and the server responds before Windows has set up the asynchronous listener. If you aren't listening, the data won't be accepted on your side (unlike TCP).
Try calling BeginReceiveFrom before the send operation.