I am wondering whether I can set a timeout value for the UdpClient Receive method.
I want to use blocking mode, but because UDP sometimes loses packets, my program's udpClient.Receive can hang there forever.
Any good ideas how I can manage that?
There are SendTimeout and ReceiveTimeout properties that you can use on the underlying Socket of the UdpClient.
Here is an example of a 5 second timeout:
var udpClient = new UdpClient();
udpClient.Client.SendTimeout = 5000;
udpClient.Client.ReceiveTimeout = 5000;
...
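For completeness, here is a minimal sketch of how the timeout shows up in practice; when no packet arrives in time, Receive throws a SocketException rather than returning, so you typically wrap the call in a try/catch. The local port below is just a placeholder, and the usual System.Net / System.Net.Sockets usings are assumed.
var udpClient = new UdpClient(11000); // hypothetical local port
udpClient.Client.ReceiveTimeout = 5000;

IPEndPoint remoteEP = new IPEndPoint(IPAddress.Any, 0);
try
{
    byte[] data = udpClient.Receive(ref remoteEP);
    // process data from remoteEP
}
catch (SocketException ex)
{
    if (ex.SocketErrorCode == SocketError.TimedOut)
    {
        // no packet arrived within 5 seconds
    }
}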
What Filip is referring to is nested within the socket that UdpClient contains (UdpClient.Client.ReceiveTimeout).
You can also use the async methods to do this, but manually block execution:
var timeToWait = TimeSpan.FromSeconds(10);
var udpClient = new UdpClient(portNumber);
var asyncResult = udpClient.BeginReceive(null, null);
asyncResult.AsyncWaitHandle.WaitOne(timeToWait);
if (asyncResult.IsCompleted)
{
    try
    {
        IPEndPoint remoteEP = null;
        byte[] receivedData = udpClient.EndReceive(asyncResult, ref remoteEP);
        // EndReceive worked and we have received data and remote endpoint
    }
    catch (Exception ex)
    {
        // EndReceive failed and we ended up here
    }
}
else
{
    // The operation wasn't completed before the timeout and we're off the hook
}
There is a ReceiveTimeout property you can use.
Actually, it appears that UdpClient is broken when it comes to timeouts. I tried to write a server with a thread containing only a Receive which got the data and added it to a queue. I've done this sort of thing for years with TCP. The expectation is that the loop blocks at the receive until a message comes in from a requester. However, despite setting the timeout to infinity:
_server.Client.ReceiveTimeout = 0; //block waiting for connections
_server.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, 0);
the socket times out after about 3 minutes.
The only workaround I found was to catch the timeout exception and continue the loop. This hides the Microsoft bug but fails to answer the fundamental question of why this is happening.
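For reference, a minimal sketch of that workaround, assuming _server is the UdpClient from above and _queue / _running are hypothetical fields standing in for the real queue and shutdown flag:
while (_running)
{
    try
    {
        IPEndPoint remoteEP = null;
        byte[] data = _server.Receive(ref remoteEP);
        _queue.Enqueue(data); // e.g. a ConcurrentQueue<byte[]>
    }
    catch (SocketException ex)
    {
        if (ex.SocketErrorCode == SocketError.TimedOut)
            continue; // spurious timeout, keep waiting
        throw;
    }
}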
You can do it like this:
udpClient.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, 5000);
I am using a UDP socket to send then receive a message.
So when I receive, I set the receive timeout to 4 seconds...
sending_socket.ReceiveTimeout = 4000;
sending_socket.ReceiveFrom(ByteFromListener, ref receiving_end_point);
Now I get this exception (which I am expecting): An unhandled exception of type 'System.Net.Sockets.SocketException' occurred in System.dll
Additional information: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
I wanted to know: how can I ignore this exception?
Basically, I want the UDP socket to listen for 4 seconds and, if there is no answer, try to send the message again. My code is the following (part of it):
IPEndPoint sending_end_point = new IPEndPoint(sendto, sendPort);
EndPoint receiving_end_point = new IPEndPoint(IPAddress.Any, 0);
Socket sending_socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
text_to_send = ("hello");
byte[] send_buffer = Encoding.ASCII.GetBytes(text_to_send);
sending_socket.SendTo(send_buffer, sending_end_point);
Byte[] ByteFromListener = new byte[8000];
sending_socket.ReceiveTimeout = 4000;
sending_socket.ReceiveFrom(ByteFromListener, ref receiving_end_point);
string datafromreceiver;
datafromreceiver = Encoding.ASCII.GetString(ByteFromListener).TrimEnd('\0');
datafromreceiver = (datafromreceiver.ToString());
try
{
    sending_socket.ReceiveFrom(ByteFromListener, ref receiving_end_point);
}
catch (SocketException ex) { }
Instead of checking for the exception, I suggest that you use the sending_socket.Available property (read about it on MSDN).
You can add logic that checks the time elapsed since you sent the data, and if nothing is Available yet, sends again. Something like below:
bool data_received = false;
do
{
    sending_socket.SendTo(send_buffer, sending_end_point);
    DateTime dtSent = DateTime.Now;
    while (DateTime.Now - dtSent < TimeSpan.FromSeconds(4))
    {
        while (sending_socket.Available > 0)
        {
            int bytes_available = sending_socket.Available;
            // you can use bytes_available to create a buffer of the required size only
            // read the data here and concatenate it with previously received data, if required
            data_received = true;
        }
        Thread.Sleep(100); // sleep for some time to let the data arrive
    }
} while (!data_received);
The above is only simple sample logic; modify it as per your requirements.
I strongly suggest that you do not depend on exceptions to handle cases which you already know may happen. Exceptions are meant for cases which cannot be known in advance and for which there is no other way to check.
Also, a SocketException can be raised for other reasons, for example the endpoint was not available or the connection was lost. The exception should still be handled so that your code can deal with those scenarios properly.
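If you do stick with the ReceiveTimeout approach, a minimal sketch of telling the timeout apart from real socket errors could look like this; the retry limit is an assumption, and the variables are the ones from the question above:
int retries = 3; // hypothetical retry limit
while (retries-- > 0)
{
    sending_socket.SendTo(send_buffer, sending_end_point);
    try
    {
        int received = sending_socket.ReceiveFrom(ByteFromListener, ref receiving_end_point);
        string reply = Encoding.ASCII.GetString(ByteFromListener, 0, received);
        break; // got an answer, stop retrying
    }
    catch (SocketException ex)
    {
        if (ex.SocketErrorCode != SocketError.TimedOut)
            throw; // a real error, not just a timeout
        // timed out: fall through and send again
    }
}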
I am trying to implement a simple TCP server and I basically copied the example on MSDN sans a couple of lines and tried to make it work. I have an external client trying to connect already.
This is my code:
IPHostEntry ipHostInfo = Dns.Resolve(Dns.GetHostName());
IPEndPoint localEP = new IPEndPoint(ipHostInfo.AddressList[0], 4001);
Socket listener = new Socket(localEP.Address.AddressFamily,
SocketType.Stream, ProtocolType.Tcp);
try
{
    listener.Bind(localEP);
    listener.Listen(1000);
    while (true)
    {
        listener.BeginAccept(new AsyncCallback(AcceptCnxCallback), listener);
    }
}
catch (Exception e)
{
    //Log here
}
This is my callback:
private void AcceptCnxCallback(IAsyncResult iar)
{
    MensajeRecibido msj = new MensajeRecibido();
    Socket server = (Socket)iar.AsyncState;
    msj.workSocket = server.EndAccept(iar);
}
And this is the information of one of the incoming packages:
TCP:[SynReTransmit #1727889]Flags=......S., SrcPort=57411, DstPort=4001, PayloadLen=0, Seq=673438964, Ack=0, Win=5840 ( Negotiating scale factor 0x4 ) = 5840
Source: 10.0.19.65 Destination: 10.0.19.59
I basically have two issues:
If I use the while loop I get an OutOfMemoryException
I never do manage to connect to the client
Any tips on either of the two problems? Thank you in advance!
Your problem is that you use asynchronous calls all the time. There is no wait mechanism or similar, so you are just creating new asynchronous callbacks in an infinite loop.
For a basic TCP server I would recommend the simple approach and use the synchronous methods.
Accept() is blocking, so the program flow will stop until there is an incoming connection.
while (true)
{
    Socket s = listener.Accept();
    buffer = new byte[BUFFER_SIZE];
    s.Receive(buffer);
    //Do something
    s.Send(...);
}
Note that this is just a basic example. If you want to keep the connection open you might consider a new Thread for each accepted Socket that continues receiving and sending data, as sketched below.
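A rough sketch of that thread-per-connection idea; HandleClient and BUFFER_SIZE are assumptions here, not names from the original code:
while (true)
{
    Socket s = listener.Accept();
    new Thread(() => HandleClient(s)).Start(); // one thread per accepted connection
}

void HandleClient(Socket s)
{
    byte[] buffer = new byte[BUFFER_SIZE];
    int read;
    // Receive returns 0 when the remote side closes the connection
    while ((read = s.Receive(buffer)) > 0)
    {
        // process the 'read' bytes in buffer and send a reply if needed
    }
    s.Close();
}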
First problem
You are using an infinite loop to call an async method.
Try it like this:
listener.BeginAccept(new AsyncCallback(AcceptCnxCallback), listener);
//add your code here (this part will be executed while the listener is waiting for a connection)
while (true)
{
    Thread.Sleep(100);
}
and change the callback method to:
private void AcceptCnxCallback(IAsyncResult iar)
{
    MensajeRecibido msj = new MensajeRecibido();
    Socket server = (Socket)iar.AsyncState;
    msj.workSocket = server.EndAccept(iar);
    //call BeginAccept again after you have accepted a connection
    listener.BeginAccept(new AsyncCallback(AcceptCnxCallback), listener);
}
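One small note on the sketch above: listener has to be reachable inside the callback (for example as a class field). Since the accepting socket is already handed to the callback via iar.AsyncState, calling server.BeginAccept(new AsyncCallback(AcceptCnxCallback), server); there would work just as well.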
I have looked just about all over Google, and even here on Stack Overflow, but I cannot seem to find the solution I'm looking for. I am testing my programming skills by remaking Pong using MonoGame for C#, and I'm trying to make it multiplayer with both UDP clients and a UDP server. I'm going with "the perfect client-server model" idea where the server handles all the calculations, while the game client just receives data from the server and displays it on the screen. Unfortunately I have had issues in the past programming UDP servers.
I have a loop in which I receive a datagram, then begin listening for another. I use asynchronous calls because in my mind that is what would work best for client and server. The main code looks something like this (I'm going to cut out the bits that won't affect CPU and show only the networking):
static void Main()
{
    //Initialize
    Console.Title = "Pong Server";
    Console.WriteLine("Pong Server");
    serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    //Bind socket
    EndPoint localEndPoint = new IPEndPoint(IPAddress.Any, 25565);
    Console.WriteLine("Binding to port 25565");
    serverSocket.Bind(localEndPoint);
    //Listen
    Console.WriteLine("Listening on {0}.", localEndPoint);
    //Prepare EndPoints
    EndPoint clientEndPoint = new IPEndPoint(IPAddress.Any, 0);
    //Receive data
    rcvPacket = new byte[Data.Size];
    serverSocket.BeginReceiveFrom(rcvPacket, 0, Data.Size, SocketFlags.None, ref clientEndPoint, new AsyncCallback(Receive), null);
}
And then in the Receive(IAsyncResult ar) method:
static void Receive(IAsyncResult ar)
{
    //Prepare EndPoints
    EndPoint clientEndPoint = new IPEndPoint(IPAddress.Any, 0);
    //End the pending receive
    int PacketSize = serverSocket.EndReceiveFrom(ar, ref clientEndPoint);
    //<Handle Packet Code Here>
    //Receive loop: start listening for the next packet
    rcvPacket = new byte[Data.Size];
    serverSocket.BeginReceiveFrom(rcvPacket, 0, Data.Size, SocketFlags.None, ref clientEndPoint, new AsyncCallback(Receive), null);
}
The way I read this code, it should wait until a packet is received, then listen for another one, with this "listener" thread sitting idle in the meantime since the calls are asynchronous. Anyway, thanks for reading my question, and hopefully I'll get to the bottom of this.
Oh by the way, here is an image of one of my 4 cores getting maxed out. (Yes I'm currently debugging it.)
Adding a Thread.Sleep(1) call to the update loop fixed not only the CPU usage in the original problem, it also made the code executed in the update loop behave better. Inside the while (alive) { } loop, where I keep track of the elapsed time and total time, I inserted the sleep call.
The value seems to need to be at least 1.
static void Main(string[] args)
{
    //Initialize
    Console.Title = "Pong Server";
    Console.WriteLine("Pong Server");
    serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    //Bind socket
    EndPoint localEndPoint = new IPEndPoint(IPAddress.Any, 25565);
    Console.WriteLine("Binding to port 25565");
    serverSocket.Bind(localEndPoint);
    //Listen
    Console.WriteLine("Listening on {0}.", localEndPoint);
    //Prepare EndPoints
    EndPoint clientEndPoint = new IPEndPoint(IPAddress.Any, 0);
    //Receive data
    serverSocket.BeginReceiveFrom(rcvPacket, 0, Data.Size, SocketFlags.None, ref clientEndPoint, new AsyncCallback(ReceiveFrom), null);
    //Initialize TimeSpans
    DateTime Now = DateTime.Now;
    DateTime Start = Now;
    DateTime LastUpdate = Now;
    TimeSpan EllapsedTime;
    TimeSpan TotalTime;
    //Create Physics Objects
    Paddle1 = new Physics2D();
    Paddle2 = new Physics2D();
    Ball = new Physics2D();
    //Loop
    while (alive)
    {
        Now = DateTime.Now;
        TotalTime = Now - Start;
        EllapsedTime = Now - LastUpdate;
        LastUpdate = Now;
        Update(EllapsedTime, TotalTime);
        //Add Sleep to reduce CPU usage
        Thread.Sleep(1);
    }
    //Press Any Key
    Console.WriteLine("Press any key to exit.");
    Console.ReadKey();
}
From MSDN documentation:
Your callback method should invoke the EndReceiveFrom method. When your application calls BeginReceiveFrom, the system will use a separate thread to execute the specified callback method, and it will block on EndReceiveFrom until the Socket reads data or throws an exception. If you want the original thread to block after you call the BeginReceiveFrom method, use WaitHandle.WaitOne. Call the Set method on a System.Threading.ManualResetEvent in the callback method when you want the original thread to continue executing. For additional information on writing callback methods, see Callback Sample.
From my understanding, the callback thread is created as soon as you call BeginReceiveFrom.
Because you do not use EndReceiveFrom to wait, your callback thread is executed, creating another thread, again and again...
Use EndReceiveFrom to wait for the whole packet to be read; then your callback will repeat and start reading again. Watch out for thread safety; I have no idea how you manage your data here.
Looking at this post could also help:
Asynchronous TCP/UDP Server
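Based on the MSDN note quoted above, a minimal sketch of blocking the main thread with a ManualResetEvent instead of a busy loop might look like this; allDone is a hypothetical field, and the other names come from the question:
static ManualResetEvent allDone = new ManualResetEvent(false);

// in Main, instead of spinning in a tight loop:
allDone.Reset();
serverSocket.BeginReceiveFrom(rcvPacket, 0, Data.Size, SocketFlags.None,
    ref clientEndPoint, new AsyncCallback(Receive), null);
allDone.WaitOne(); // the main thread sleeps here until the callback signals

// at the end of the Receive callback:
allDone.Set();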
I have a .NET Socket that listens to all TCP requests on the computer, and aggregates them into HTTP requests (where possible).
I have the following problem:
When I access a site (for example stackoverflow.com) I see in Wireshark that there are X (let's say 12) TCP packets received from the site's host.
But in my code the Socket just stops receiving the messages before the end (after 10 messages).
I have no idea how to fix this; I hope it's something that is limiting the socket in its configuration.
Here is my code:
public void StartCapturing()
{
    try
    {
        _chosenOutgoingAddress = UserChoosesIpCtrl();
        _socket = new Socket(AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.IP);
        _socket.Bind(new IPEndPoint(_chosenOutgoingAddress, 0));
        _socket.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.HeaderIncluded, true);
        _socket.IOControl(IOControlCode.ReceiveAll, _bIn, _bOut);
        thrStartCapturing = new Thread(StartReceiving);
        thrStartCapturing.Name = "Capture Thread";
        thrStartCapturing.Start();
    }
    catch (Exception ex)
    {
        //TODO: general exception handler
        throw ex;
    }
}
The StartCapturing method will initialize the Socket and start the receiving thread with the StartReceiving method (as below).
private void StartReceiving()
{
    while (!_stopCapturing)
    {
        int size = _socket.ReceiveBufferSize;
        int bytesReceived = _socket.Receive(_bBuffer, 0, _bBuffer.Length, SocketFlags.None);
        if (bytesReceived > 0)
        {
            _decPackagesReceived++;
            ConvertReceivedData(_bBuffer, bytesReceived);
        }
        Array.Clear(_bBuffer, 0, _bBuffer.Length);
    }
}
What am I doing wrong?
OK, I figured it out, so I'm posting here for anyone else who might need it in the future.
The .NET Socket class has a ReceiveBufferSize property which determines the size of the receive buffer the Socket will use.
My problem was that my code wasn't async, or fast enough, to drain this buffer, so the last TCP packets had no buffer space left and were dropped.
Increasing the ReceiveBufferSize or making my code async (probably better :-)) will fix this.
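For reference, a minimal sketch of the first option; the size is an arbitrary example value, not a recommendation:
// enlarge the OS receive buffer so bursts of packets are less likely to be dropped
// before the receiving loop catches up (1 MB here is just an illustrative value)
_socket.ReceiveBufferSize = 1024 * 1024;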
I have specified the ReceiveTimeout as 40 ms, but it takes more than 500 ms for the receive to time out. I am using a Stopwatch to measure the time taken.
The code is shown below.
Socket TCPSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
TCPSocket.ReceiveTimeout = 40;
try
{
    TCPSocket.Receive(Buffer);
}
catch (SocketException e) { }
You can synchronously poll on the socket with any timeout you wish. If Poll() returns true, you can be certain that you can make a call to Receive() that won't block.
Socket s;
byte[] buffer = new byte[4096]; // receive buffer
// ...
// Poll the socket for readability with a 10 ms (10,000 microsecond) timeout.
if (s.Poll(10000, SelectMode.SelectRead))
{
    s.Receive(buffer); // This call will not block
}
else
{
    // Timed out
}
I recommend you read Stevens' UNIX Network Programming chapters 6 and 16 for more in-depth information on non-blocking socket usage. Even though the book has UNIX in its name, the overall sockets architecture is essentially the same in UNIX and Windows (and .net)
You cannot use timeout values which are less than 500ms.
See here for SendTimeout: http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.sendtimeout
Even though MSDN doesn't state the same requirement for the ReceiveTimeout, my experience shows that this restriction is still there.
You can also read more about this on several SO posts:
Setting socket send/receive timeout to less than 500ms in .NET
WinSock recv() timeout: setsockopt()-set value + half a second?
I found this one:
Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
IAsyncResult result = socket.BeginConnect(sIP, iPort, null, null);
bool success = result.AsyncWaitHandle.WaitOne(40, true);
if (!success)
{
    socket.Close();
    throw new ApplicationException("Failed to connect server.");
}
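The same Begin/End pattern can be applied to the receive itself if you need a sub-500 ms receive timeout; a rough sketch, assuming the socket is already connected and the buffer size is just an example:
byte[] buffer = new byte[4096];
IAsyncResult recvResult = socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, null, null);
if (recvResult.AsyncWaitHandle.WaitOne(40))
{
    int bytesRead = socket.EndReceive(recvResult); // data arrived within 40 ms
}
else
{
    // timed out waiting for data; note the receive is still pending until the socket is closed
}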