Timeouts in Silverlight Sockets - C#

I'm using Sockets in my Silverlight application to stream data from a server to a client.
However, I'm not quite sure how timeouts are handled in a Silverlight Socket.
In the documentation, I cannot see anything like ReceiveTimeout for Silverlight.
Are user-defined timeouts possible? How can I set them? How can I get notifications when a send / receive operation times out?
Are there default timeouts? How big are they?
If there are no timeouts: what's the easiest method to implement these timeouts manually?

I've checked the Socket class in Reflector and there's not a single relevant setsockopt call that deals with timeouts - except in the Dispose method. Looks like Silverlight simply relies on the default timeout of the WinSock API.
The Socket class also contains a private "SetSocketOption" method that you might be able to call via reflection - though it is very likely that you will run into a security exception.

Since I couldn't find any nice solution, I solved the problem manually by creating a System.Threading.Timer with code similar to the following:
System.Threading.Timer t;
bool timeout;
[...]
// Initialization
t = new Timer((s) => {
    lock (this) {
        timeout = true;
        Disconnected();
    }
});
[...]
// Before each asynchronous socket operation
t.Change(10000, System.Threading.Timeout.Infinite);
[...]
// In the callback of the asynchronous socket operations
lock (this) {
    t.Change(System.Threading.Timeout.Infinite, System.Threading.Timeout.Infinite);
    if (!timeout) {
        // Perform work
    }
}
This also handles cases where the timeout is produced by simple lag, and it lets the callback return immediately if the operation took too long.

I solved this issue for my project sharpLightFtp as follows:
I created a class which is injected into the UserToken property of a System.Net.Sockets.SocketAsyncEventArgs instance and which holds a System.Threading.AutoResetEvent. The event is used to wait, with a timeout, for a signal after ConnectAsync, ReceiveAsync and SendAsync (see line 22 for getting a custom enhanced SocketAsyncEventArgs instance, line 270 for creating and enhancing the SocketAsyncEventArgs instance, line 286 for sending the signal and line 30 for waiting).
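A condensed sketch of that pattern (the type and member names here are illustrative, not the actual sharpLightFtp code): the Completed handler sets an AutoResetEvent stored in UserToken, and the caller waits on that event with a timeout.

using System.Net.Sockets;
using System.Threading;

// Minimal sketch of the AutoResetEvent-based timeout pattern described above.
// The names (SocketOperationToken, SendWithTimeout) are illustrative only.
internal sealed class SocketOperationToken
{
    public readonly AutoResetEvent Signal = new AutoResetEvent(false);
}

internal static class SocketTimeoutExtensions
{
    // Returns false if the send did not complete within the given timeout.
    public static bool SendWithTimeout(this Socket socket, byte[] buffer, int timeoutMilliseconds)
    {
        var token = new SocketOperationToken();
        var args = new SocketAsyncEventArgs { UserToken = token };
        args.SetBuffer(buffer, 0, buffer.Length);
        args.Completed += (s, e) => ((SocketOperationToken)e.UserToken).Signal.Set();

        // SendAsync returns false when the operation completed synchronously,
        // in which case the Completed event is not raised.
        if (!socket.SendAsync(args))
        {
            return args.SocketError == SocketError.Success;
        }
        return token.Signal.WaitOne(timeoutMilliseconds)
            && args.SocketError == SocketError.Success;
    }
}

The same shape works for ConnectAsync and ReceiveAsync; on a timeout you would typically close the socket rather than reuse the pending SocketAsyncEventArgs.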

Calling EndConnect after BeginConnect

According to this MSDN article, the socket.EndConnect method should be called in the AsyncCallback delegate provided in the original socket.BeginConnect call.
What is not clear (and the MSDN article is silent here) is whether EndConnect should be called after a timeout (and the socket is NOT connected). socket.EndConnect throws an exception in this case.
What is the proper procedure to follow after timeout? What are the consequences if EndConnect is not called (either after a successful connection or timeout without connection)? My code appears to work fine without calling EndConnect.
Here is some example code covering the main ideas in the question:
// Member variables
private static ManualResetEvent m_event;
private static Socket m_socket;

// Static constructor of the class (a static constructor cannot have an access modifier)
static CMyTestConnection()
{
    // Create an event that can be used to wake this thread when the connection completes
    m_event = new ManualResetEvent(false);
}

private static void TestConnection(object sender, EventArgs e)
{
    // Create connection endpoint
    IPAddress ip = IPAddress.Parse("200.1.2.3"); // Deliberately incorrect
    IPEndPoint ipep = new IPEndPoint(ip, 12345); // Also deliberately incorrect
    EndPoint ep = (EndPoint)ipep;

    // Attempt connection
    m_event.Reset();
    m_socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    m_socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, 1);
    m_socket.BeginConnect(ep, ConnectCompletedCallback, m_socket);
}

private static void ConnectCompletedCallback(IAsyncResult ar)
{
    // The asynchronous connection call has completed. Either we have connected (success) or
    // timed out without being able to connect (failure).
    m_event.Set();
    Socket s = (Socket)ar.AsyncState;
    if (s.Connected)
    {
        // Success...should EndConnect only be called here?
        s.EndConnect(ar);
    }
    else
    {
        // Or should EndConnect also be called here (in a try/catch block)?
        s.Close();
    }
}
You invited me to this chat room. I am assuming this is the question to which you're referring, but it's hard for me to know for sure. Your message in the chat room doesn't have a real URL. I looked at your question links in your profile, and the only one I recognize is this one, which isn't closed at the moment. So there's no need to vote to re-open.
That said, the answer is still the same as already provided in the comments: you always call the EndXXX method when you've called BeginXXX (the few known exceptions don't apply here). There's nothing in your question, even after the recent edit, that would indicate what more you need.
You don't show how the timeout is implemented, so there's not even enough information to understand the code you posted. But if you are closing the socket, thus causing your callback to be invoked where EndConnect() will throw an exception, you should be calling EndConnect(). Failing to do so can potentially leave unmanaged resources dangling, which would then eventually be exhausted, or at the very least lead to performance problems.
The source code for .NET is readily available, so you can easily examine the implementation yourself. In the case of Socket.EndConnect(), we can see that for the current implementation, if the socket has already been disposed, all that happens is an exception is thrown. So, in theory, you could ignore sockets that have already been closed. I.e. this is an exception to the general concern about leaving resources dangling, in the specific "socket is already closed" scenario. But only if your timeout is implemented by closing the socket.
There are a couple of problems here though, related to race conditions:
Depending on how the timeout is implemented (you didn't share that part, so the question is still incomplete), you may have code that got as far as starting to call Socket.Close(), but which has not set the disposed flag. You'll be dealing with a connected socket that is about to become disconnected, and you need to have try/catch in place to handle that scenario.
Your callback assumes (it seems…again, there's not enough context in your question) that the Connected property is a reliable way to detect that there's been a timeout, but the Connected property could theoretically be reset to false after being connected, but before your callback gets to execute (e.g. some other type of error on the socket).
As far as the question of calling EndConnect() on a successful connection, that is much more clear: you must do so. If your code appears to work even though you haven't, that's just you getting lucky. We can see in the implementation that the EndConnect() method does useful work to configure the socket state when called after a successful connection, so if you fail to call the method, your socket will be in some indeterminate, incompletely configured state.
Naturally, if your timeout is implemented in some other way, where the socket is not closed before the callback is invoked, then you are in the same situation as if the connection had completed, and you must call EndConnect() to ensure that the appropriate cleanup and socket configuration occurs. I.e. that would be the same as the "successful connection" scenario.
The bottom line is, there is zero benefit to not calling EndConnect() in the event of a close/dispose-based timeout. The only hypothetical benefit might be that you can avoid try/catch, but you can't get away without that, because of the race conditions that exist. And if there's not such a timeout, not only is there not a benefit to not calling the method, there is real harm in failing to call it.
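As an illustration of the advice above, a connect callback can simply always call EndConnect() and use try/catch to absorb the exception thrown when a close/dispose-based timeout has already torn the socket down. This is a minimal sketch under that assumption, not the asker's actual timeout implementation:

// Sketch: always complete the BeginConnect, tolerating the case where a
// close/dispose-based timeout has already disposed the socket.
private static void ConnectCompletedCallback(IAsyncResult ar)
{
    Socket s = (Socket)ar.AsyncState;
    try
    {
        s.EndConnect(ar);          // completes socket setup on success
        // ... the connection is usable here ...
    }
    catch (ObjectDisposedException)
    {
        // The timeout path closed the socket before this callback ran.
    }
    catch (SocketException)
    {
        // The connect itself failed (host unreachable, refused, etc.).
        s.Close();
    }
    finally
    {
        m_event.Set();             // wake whoever is waiting on the connect
    }
}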
On a related note, there's not enough context in your question to make any real assessments of the rest of your code (since you didn't show how you're implementing the timeout, nor how the rest of your network I/O is handled). But I will say that in most cases, the "reuse address" option is unnecessary and should not be used. Most people wind up using it because they get into a situation where they can't start a new listening socket after they have somehow stopped a previous one, but that problem only comes up when the first listening socket and/or the associated connected sockets have not been closed or shut down correctly. The correct approach in that case is to handle the socket closure/shutdown correctly, not to add to the problem by setting "reuse address".

.Net Socket Read function blocking issue

I have a client-server application in C#/.NET, and I am using a TCP Socket for it. I use the following function to aggressively close the socket object.
void CloseSocket(Socket socket)
{
    if (socket != null)
    {
        socket.Shutdown(SocketShutdown.Both);
        socket.Close();
    }
}
Under normal conditions this function works perfectly, and my Read call returns 0 bytes.
But whenever the client process is terminated from Task Manager, the server program blocks inside the Read function of the network stream.
How can I work around this blocking Read call? I don't want to use asynchronous reads, because the whole project uses a blocking strategy, so right now I can't change it to the async pattern.
Thanks in advance.
I'm assuming that what you are saying is that when the connection isn't closed cleanly by the client, the server can end up blocking at Read indefinitely, even if the client has actually terminated abruptly. If so: yes, that happens. So if you want to use the synchronous read methods, you should use timeouts, in particular ReceiveTimeout. If you have a multi-message protocol, it may be worthwhile adding some kind of heartbeat message periodically, to allow you to correctly identify true zombies from idle connections (for example: if you are sending a heartbeat every minute, and you haven't seen any activity on a connection for 3 minutes, hit it with a shovel).
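A minimal sketch of that suggestion (the timeout value and the loop shape are illustrative; for a NetworkStream you would set ReadTimeout instead, which maps to the same socket option): set ReceiveTimeout and treat a SocketError.TimedOut as a dead connection once the heartbeat interval has clearly been exceeded.

// Sketch: blocking read loop that cannot hang forever on a vanished client.
static void ReceiveLoop(Socket socket)
{
    socket.ReceiveTimeout = 180000;   // 3 minutes: "missed three heartbeats"
    byte[] buffer = new byte[4096];
    try
    {
        int read;
        while ((read = socket.Receive(buffer)) > 0)
        {
            // ... process 'read' bytes ...
        }
        // read == 0: the peer closed the connection cleanly.
    }
    catch (SocketException ex)
    {
        if (ex.SocketErrorCode == SocketError.TimedOut)
        {
            // No data (and no heartbeat) for the whole timeout period:
            // treat the connection as a zombie.
        }
        // Other socket errors: connection reset, aborted, etc.
    }
    finally
    {
        CloseSocket(socket);          // the aggressive-close helper from the question
    }
}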
You can try this; it may help you:
public void close()
{
    if (clientSocket != null)
    {
        sendCommand("QUIT");
    }
    cleanup();
}

private void cleanup()
{
    if (clientSocket != null)
    {
        clientSocket.Close();
        clientSocket = null;
    }
    logined = false;
}

How do I prevent Socket/Port Exhaustion?

I am attempting to performance test a website by hitting it with requests across multiple threads. Each thread executes n times (in a for loop).
However, I am running into problems. Specifically the WebException ("Unable to connect to remote server") with the inner exception:
An operation on a socket could not be performed because the system
lacked sufficient buffer space or because a queue was full
127.0.0.1:52395
I am attempting to run 100 threads at 500 iterations per thread.
Initially I was using HttpWebRequest in System.Net to make the GET request to the server. Currently I am using WebClient as I assumed that each iteration was using a new socket (so 100 * 500 sockets in a short period of time). I assumed WebClient (which is instantiated once per thread) would only use one socket.
I don't need 50 000 sockets open at once, as I would like to send the GET request, receive the response, and close the socket, freeing it for use in the next loop iteration. I understand that it would be a problem to have that many sockets open simultaneously.
However, even with WebClient, many sockets are being requested, resulting in a large number of sockets sitting in the TIME_WAIT state (checked using netstat). This causes other applications (like internet browsers) to hang and stop functioning.
I can run my test with fewer iterations and/or fewer threads, as it appears the sockets do eventually leave this TIME_WAIT state. However, this is not a solution because it doesn't adequately test the abilities of the web server.
Question:
How do I explicitly close a socket (from the client side) after each thread iteration in order to prevent TIME_WAIT states and socket exhaustion?
Code:
Class that wraps the HttpRequest
Edit: Wrapped WebClient in a using block, so a new one is instantiated, used and disposed for every iteration. The problem still persists.
public sealed class HttpGetTest : ITest {
    private readonly string m_url;

    public HttpGetTest( string url ) {
        m_url = url;
    }

    void ITest.Execute() {
        using( WebClient webClient = new WebClient() ) {
            using( Stream stream = webClient.OpenRead( m_url ) ) {
            }
        }
    }
}
The part of my ThreadWrapperClass that creates a new thread:
public void Execute() {
    Action Hammer = () => {
        for( int i = 1; i <= m_iterations; i++ ) {
            // Where m_test is an ITest injected through constructor
            m_test.Execute();
        }
    };
    ThreadStart work = delegate {
        Hammer();
    };
    Thread thread = new Thread( work );
    thread.Start();
}
Do you understand the purpose of TIME_WAIT? It's a period during which it would be unsafe to reuse the port because lost packets (that have been successfully retransmitted) from the previous transaction might yet be delivered within that time period.
You could probably tweak it down in the registry somewhere, but I question if this is a sensible next step.
My experience of creating realistic load in a test environment has proved very frustrating. Certainly running your load-tester from localhost is by no means realistic, and most network tests I have made using the .NET HTTP APIs seem to require more grunt in the client than in the server itself.
As such, it's better to move to a second machine for generating load on your server... however domestic routing equipment is rarely up to the job of supporting anywhere near the number of connections that would cause any sort of load on a well written server app, so now you need to upgrade your routing/switching equipment as well!
Lastly, I've had some really strange and unexpected performance issues around the .NET HTTP client API. At the end of the day, they all use HttpWebRequest to do the heavy lifting. IMO it's nowhere near as performant as it could be. DNS is synchronous, even when calling the APIs asynchronously (although if you're only requesting from a single host, this isn't an issue), and after sustained usage CPU usage creeps up until the client becomes CPU constrained rather than IO constrained. If you're looking to generate sustained and heavy load, any request-heavy app reliant on HttpWebRequest is IMO a bogus investment.
All in all, a pretty tricky job, and ultimately something that can only be proved in the wild, unless you've got plenty of cash to spend on an armada of better equipment.
[Hint: I got much better performance from my own client written using the async Socket APIs and a 3rd-party DNS client library]
Q: How do I explicitly close a socket ... in order to prevent
TIME_WAIT states?
A: Dude, TIME_WAIT is an integral - and important! - part of TCP/IP itself!
You can tune the OS to reduce TIME_WAIT (which can have negative repercussions).
And you can tune the OS to increase the number of ephemeral ports:
http://msdn.microsoft.com/en-us/library/aa560610%28v=bts.20%29.aspx
Here's a link on why TIME_WAIT exists ... and why it's a Good Thing:
http://www.serverframework.com/asynchronousevents/2011/01/time-wait-and-its-design-implications-for-protocols-and-scalable-servers.html
It's not an issue of closing sockets or releasing resources in your app. TIME_WAIT is a TCP stack timeout on released sockets that prevents their re-use until it is virtually impossible for any 'left over' packets from a previous connection to that socket to still be in flight.
For test purposes, you can reduce the wait time from the default (some minutes, AFAIK) to a smaller value. When load-testing servers, I set it at six seconds.
It's in the registry somewhere - you'll find it if you Google.
Found it:
Change TIME_WAIT delay
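For reference, the registry value usually cited for this is TcpTimedWaitDelay under the Tcpip\Parameters key. A hedged sketch of setting it from C# on a test machine (requires administrator rights and generally a reboot to take effect; as noted above, lowering TIME_WAIT can have negative repercussions, so keep this to test boxes):

using Microsoft.Win32;

// Sketch: shorten TIME_WAIT to 30 seconds on a Windows load-test machine.
Registry.SetValue(
    @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters",
    "TcpTimedWaitDelay",
    30,                              // seconds; the accepted range is roughly 30-300
    RegistryValueKind.DWord);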
It looks like you are not forcing your WebClient to get rid of the resources that it has allocated. You are wrapping the returned stream in a using block, but your WebClient still holds resources.
Either wrap your WebClient instantiation in a using block, or manually call Dispose() on it once you are done reading from the URL.
Try this:
public sealed class HttpGetTest : ITest {
    private readonly string m_url;

    public HttpGetTest( string url ) {
        m_url = url;
    }

    // Note: an explicit interface implementation cannot have an access modifier
    void ITest.Execute() {
        using( var m_webClient = new WebClient() )
        {
            using( Stream stream = m_webClient.OpenRead( m_url ) )
            {
            }
        }
    }
}
You don't need to mess around with TIME_WAIT to accomplish what you want.
The problem is that you are disposing the WebClient every time you call Execute(). When you do that, you close the socket connection with the server, and the TCP port stays busy for the TIME_WAIT period.
A better approach is to create the WebClient in the constructor of your HttpGetTest class and reuse the same object throughout the test.
WebClient uses keep-alive by default and will reuse the same connection for all of its requests, so in your case there will be only 100 open connections.
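A sketch of that suggestion, adapted from the asker's HttpGetTest class (the IDisposable addition is mine, so the single WebClient is still cleaned up when the test object itself is disposed):

// Sketch: one WebClient per HttpGetTest instance (i.e. per thread),
// reused across all iterations so the keep-alive connection is reused too.
public sealed class HttpGetTest : ITest, IDisposable {
    private readonly string m_url;
    private readonly WebClient m_webClient = new WebClient();

    public HttpGetTest( string url ) {
        m_url = url;
    }

    void ITest.Execute() {
        // OpenRead reuses the client's existing keep-alive connection.
        using( Stream stream = m_webClient.OpenRead( m_url ) ) {
        }
    }

    public void Dispose() {
        m_webClient.Dispose();
    }
}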

SocketAsyncEventArgs buffer is full of zeroes

I'm writing a message layer for my distributed system. I'm using IOCP, i.e. the Socket.XXXAsync methods.
Here's something pretty close to what I'm doing (in fact, my receive function is based on his):
http://vadmyst.blogspot.com/2008/05/sample-code-for-tcp-server-using.html
What I've found now is that at the start of the program (two test servers talking to each other), I consistently get a number of SAEA objects whose .Buffer is entirely filled with zeroes, yet whose .BytesTransferred equals the size of the buffer (1024 in my case).
What does this mean? Is there a special condition I need to check for? My system interprets this as an incomplete message and moves on, but I'm wondering if I'm actually missing some data. I was under the impression that if nothing was being received, you'd not get a callback. In any case, I can see in WireShark that there aren't any zero-length packets coming in.
I've found the following when I Googled it, but I'm not sure my problem is the same:
http://social.msdn.microsoft.com/Forums/en-US/ncl/thread/40fe397c-b1da-428e-a355-ee5a6b0b4d2c
http://go4answers.webhost4life.com/Example/socketasynceventargs-buffer-not-ready-121918.aspx
I am not sure what is going on in the linked example. It appears to be using asynchronous sockets in a synchronous way; I cannot see any callbacks or similar in the code. You may need to rethink whether you need synchronous or asynchronous sockets :).
The problem at hand probably stems from your functions trying to read/write the buffer before the network transmit/receive has completed. Try using the callback functionality included in the async Socket. E.g.
// This goes into your accept function, to begin receiving the data
socketName.BeginReceive(yourbuffer, 0, yourbuffer.Length,
    SocketFlags.None, new AsyncCallback(OnReceiveData), socketName);

// In your callback function you know that the socket has finished receiving data.
// This callback will fire when the receive is complete.
private void OnReceiveData(IAsyncResult input) {
    Socket inSocket = (Socket)input.AsyncState; // This is just a typecast
    int bytesRead = inSocket.EndReceive(input); // Number of bytes actually received
    // Pull the data out of the socket as you already have before.
    // state.Data.Write ......
}

Socket.SendAsync is not sending in-order on Mono/Linux

There is a single-threaded server using the .NET Socket class with the TCP protocol, and Socket.Poll(), Socket.Select(), Socket.Receive().
To send, I used:
public void SendPacket(int clientid, byte[] packet)
{
    clients[clientid].socket.Send(packet);
}
But it was very slow when sending a lot of data to one client (halting the whole main thread), so I replaced it with this:
public void SendPacket(int clientid, byte[] packet)
{
    using (SocketAsyncEventArgs e = new SocketAsyncEventArgs())
    {
        e.SetBuffer(packet, 0, packet.Length);
        clients[clientid].socket.SendAsync(e);
    }
}
It works fine on Windows with .NET (I don't know if it's perfect), but on Linux with Mono, packets are either dropped or reordered (I don't know which). Reverting to the slow version with Socket.Send() works on Linux. Source for the whole server.
How to write non-blocking SendPacket() function that works on Linux?
I'm going to take a guess that it has to do with your using statement and your SendAsync call. Perhaps e falls out of scope and is being disposed while SendAsync is still processing the buffer. But then this might throw an exception. I am really just taking a guess. Try removing the using statement and see what happens.
I would say: by not abusing the async method. You will find no documentation stating that SendAsync is actually guaranteed to maintain order. It queues items for a scheduler, which distributes them to threads, and by relying on an ordering that the documentation does not promise, you open yourself up to implementation details.
The best approach is to:
Have a queue per socket.
When you write data into this queue and there is no worker thread running, start a work item (ThreadPool) to process the queue.
This way you have separate, distinct queues that maintain order. Only one thread will ever process one queue / socket. A sketch of this pattern follows below.
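A minimal sketch of that idea (class and member names are illustrative, not taken from the linked server source). This variant drains the queue from the SendAsync completion callback rather than a dedicated ThreadPool work item, but the ordering guarantee is the same: only one send is ever in flight per socket.

// Sketch: per-socket send queue that preserves ordering.
using System;
using System.Collections.Generic;
using System.Net.Sockets;

public sealed class ClientConnection
{
    private readonly Socket socket;
    private readonly Queue<byte[]> pending = new Queue<byte[]>();
    private bool sending;                       // true while a SendAsync is in flight

    public ClientConnection(Socket socket)
    {
        this.socket = socket;
    }

    public void EnqueueSend(byte[] packet)
    {
        lock (pending)
        {
            pending.Enqueue(packet);
            if (sending)
                return;                         // the in-flight send will pick it up
            sending = true;
        }
        SendNext();
    }

    private void SendNext()
    {
        byte[] packet;
        lock (pending)
        {
            if (pending.Count == 0)
            {
                sending = false;
                return;
            }
            packet = pending.Dequeue();
        }

        var e = new SocketAsyncEventArgs();
        e.SetBuffer(packet, 0, packet.Length);
        // For robustness you would also check e.BytesTransferred on completion; see the next answer.
        e.Completed += (s, args) => { args.Dispose(); SendNext(); };
        if (!socket.SendAsync(e))               // completed synchronously
        {
            e.Dispose();
            SendNext();
        }
    }
}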
I got the same problem; Linux and Windows do not react the same way to SendAsync. Sometimes Linux truncates the data, but there is a workaround. First of all you need to use a queue. Each time you use SendAsync you have to check the callback.
If e.Offset + e.BytesTransferred < e.Buffer.Length, you just have to call e.SetBuffer(e.Offset + e.BytesTransferred, e.Buffer.Length - e.BytesTransferred - e.Offset); and call SendAsync again.
I don't know why Mono on Linux believes the send is completed before all the data has been sent; it's strange, but I'm sure it does.
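A sketch of that check in a SendAsync Completed handler (assuming, as in the answer above, that the buffer holds exactly one packet starting at offset 0):

// Sketch: resume a partially completed SendAsync until the whole buffer is out.
private void OnSendCompleted(object sender, SocketAsyncEventArgs e)
{
    Socket socket = (Socket)sender;

    if (e.SocketError != SocketError.Success)
    {
        e.Dispose();                 // give up on this send
        return;
    }

    int sentSoFar = e.Offset + e.BytesTransferred;
    if (sentSoFar < e.Buffer.Length)
    {
        // Mono/Linux may report completion before the whole buffer was sent:
        // advance the offset and send the remainder.
        e.SetBuffer(sentSoFar, e.Buffer.Length - sentSoFar);
        if (socket.SendAsync(e))
            return;                  // this handler fires again when done
        OnSendCompleted(socket, e);  // completed synchronously; handle inline
    }
    else
    {
        e.Dispose();                 // entire packet sent
    }
}

Wire it up with e.Completed += OnSendCompleted before the first SendAsync call.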
Just like @mathieu above, 10 years later I can confirm that on Unity's Mono on Linux the completed callback is sometimes invoked before all bytes have been sent. For me it happened only with large packets.
