WebService and Polling - C#

I'd like to implement a WebService containing a method whose reply may be delayed by anywhere from under a second to about an hour (depending on whether the data is already cached or needs to be fetched).
Basically my question is: what would be the best way to implement this if the client can only connect to the WebService (no server-initiated notification is possible)?
AFAIK this is only possible with some kind of polling. But polling is bad, so I'd rather avoid it. The other extreme would be to leave the connection open until the method is done, but I guess this could end up slowing down the web server and the network. I considered combining these two techniques: the client calls the method, and the server returns after at least 10 seconds with either the actual result or a message that the client needs to poll again.
What are your thoughts?

You probably want to have a look at Comet.

I would suggest a sort of intelligent polling, if possible:
On first request, return a token to represent the request. This is what gets presented in future requests, so it's easy to check whether or not that request has really completed.
On future requests, hold the connection open for a certain amount of time (e.g. a minute, possibly specified by the client) and return either the result or a response of "still no results; please try again at X", where X is your best guess about when the result will be ready.
Advantages:
You allow the client to use the "hold a connection open" model, which is relatively expensive (in terms of connections) but allows the response to be served as soon as it's ready. Make sure you don't hold onto a thread per connection though! (And have some kind of time limit...)
By telling the client when to come back, you can implement a backoff policy: even if you don't know when the result will be ready, you could back off for 1, 2, 4, 8, 16, 30, 30, 30, 30... minutes. (You should potentially check that the client isn't ignoring this.) You don't end up with masses of wasted polls for long misses, but you still get quick results quickly. A rough sketch of this token scheme is below.
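To make the shape of this concrete, here is a minimal sketch of the contract such a service might expose. All of the names (IDelayedService, StartRequest, Poll, PollResponse) are hypothetical, invented for illustration:

using System;

// Hypothetical contract for the token-based long-polling scheme described above.
public sealed class PollResponse
{
    public bool IsComplete { get; set; }    // true once the real payload is ready
    public string Payload { get; set; }     // the result, valid when IsComplete is true
    public DateTime? RetryAt { get; set; }  // server's best guess for the next poll time
}

public interface IDelayedService
{
    // First call: kicks off the work and returns an opaque token.
    Guid StartRequest(string query);

    // Later calls: the server holds the connection open for up to
    // maxHoldSeconds, then returns either the result or a retry hint.
    PollResponse Poll(Guid token, int maxHoldSeconds);
}

The server is free to answer a Poll early the moment the result arrives, which is what gives you quick results without constant polling.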

I think that for something which could take an hour to respond, a web service is not the best mechanism to use.
Why is polling bad? Surely if you adjust the frequency of the polling it won't be so bad. Perhaps double the time between polls, with a maximum of about five minutes.

Some web services I've worked with return a "please try again in ..." XML message when they can't respond immediately. I realise that this is just a refinement of the polling technique, but if your server can determine at the time of the request what the likely delay is going to be, it could tell the client that and then forget about the request, leaving the client to ask again once the polling interval has expired.

There are timeouts on IIS and on the client side which will prevent you from leaving the connection open.
This is also not practical, because resources/connections are blocked on the server.
Why do you want the user to wait for such a long running task? Let them look up the status of the operation somewhere.


429 Too Many Requests only from production server side, not from localhost, not from browser

I read this post: C# (429) Too Many Requests
and I understand the response code, but... why is this status code only returned when the call is made from the server side (backend) of the production (hosted) app? The service never returns this code when I call it (the same service) from Chrome's address bar, or when I make the call from the server side (backend) of my localhost.
CASE 1 (works fine from localhost; the service URL is not localhost, it is hosted)
App A (localhost) calls App B (hosted) --> works fine
for (int i = 0; i < 1000; i++)
{
    HttpClient client = new HttpClient();
    client.BaseAddress = new Uri(url);
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    String response = client.GetStringAsync(urlParameters).Result;
    client.Dispose();
}
CASE 2 (works fine)
Chrome browser calls App B (hosted) --> works fine
CASE 3 (similar to case 1 but with far fewer requests - DOES NOT WORK)
App A (hosted) calls App B (hosted) --> 429
Why? What is the problem? How can I solve it?
What's Happening
The HTTP 429 response code indicates you have been rate limited. The idea is to prevent one caller from overwhelming a service, making it less available to other callers.
Most Common
That limiting can be based on many things. Most common are
Number of calls per unit time (usually per second)
Number of concurrent calls
The General Case
A rate limiter may also forgive a short burst of calls that happens occasionally, may allow more calls before hitting the brakes based on who you are (using your IP or an API key for example), dynamically adjust its limits based on total system load, or do other things.
Probably Happening Here
Based on your description, I would guess the number of concurrent calls could be causing production rate limiting. Rather than hitting the external API hard trying to guess what the rules are, try reaching out to them to ask. If that is not an option, running multiple requests in parallel could validate this theory.
Handling
A great way to deal with this is to back off your requests when you receive an HTTP 429.
The service should return a Retry-After header indicating how many seconds you should wait before trying again. If it does, wait that long before resubmitting your request.
If the service does not provide that header (I work with a major one that does not), use exponential backoff instead.
Depending on your needs, you may want to tell your own caller to try again later (return an HTTP 429 yourself) or you may want to queue up pending requests and work off the queue to submit them all.
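As a rough illustration of that handling logic (a sketch only; the retry cap of 5 and the 1-second starting delay are arbitrary choices, not values from any particular service):

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

static async Task<string> GetWithBackoffAsync(HttpClient client, string url)
{
    TimeSpan delay = TimeSpan.FromSeconds(1);       // arbitrary starting delay
    for (int attempt = 0; attempt < 5; attempt++)   // arbitrary retry cap
    {
        HttpResponseMessage response = await client.GetAsync(url);
        if (response.StatusCode != (HttpStatusCode)429)
        {
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }

        // Prefer the server's Retry-After hint when it is present.
        TimeSpan? retryAfter = response.Headers.RetryAfter?.Delta;
        await Task.Delay(retryAfter ?? delay);
        delay = TimeSpan.FromSeconds(delay.TotalSeconds * 2);   // exponential backoff
    }
    throw new HttpRequestException("Still rate limited after retries.");
}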
Preventing
If you know the rate limits, you can pre-emptively limit your outbound call rate so you get into this situation less often.
For call-per-second limits, you can use a counter variable that you reset (in a thread-safe way) every second. If the known call limit would be exceeded, calculate when the counter will reset (store a timestamp when it does) and delay processing that long.
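A minimal sketch of that counter approach (the class name PerSecondThrottle is invented here, and the reset logic is just one reasonable way to do it):

using System;
using System.Threading.Tasks;

class PerSecondThrottle
{
    private readonly int _maxPerSecond;
    private readonly object _gate = new object();
    private DateTime _windowStart = DateTime.UtcNow;
    private int _count;

    public PerSecondThrottle(int maxPerSecond) { _maxPerSecond = maxPerSecond; }

    public async Task WaitAsync()
    {
        while (true)
        {
            TimeSpan wait;
            lock (_gate)   // reset and increment happen under the lock (thread-safe)
            {
                DateTime now = DateTime.UtcNow;
                if (now - _windowStart >= TimeSpan.FromSeconds(1))
                {
                    _windowStart = now;   // store the timestamp of the reset
                    _count = 0;
                }
                if (_count < _maxPerSecond)
                {
                    _count++;
                    return;               // under the limit: proceed immediately
                }
                // Over the limit: compute how long until the window resets.
                wait = _windowStart + TimeSpan.FromSeconds(1) - now;
            }
            if (wait > TimeSpan.Zero)
                await Task.Delay(wait);   // delay processing until the reset
        }
    }
}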
For a concurrent-call limit, a SemaphoreSlim works nicely. Set the maximum count to whatever your concurrent rate limit is. Acquire the semaphore before making a request and release it (in a finally block) after your call completes.
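For example (a sketch assuming a concurrent-call limit of 5; adjust the semaphore's count to the real limit):

using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class ConcurrencyLimitedClient
{
    // The maximum count matches the (assumed) concurrent-call limit of 5.
    private static readonly SemaphoreSlim Slots = new SemaphoreSlim(5, 5);
    private static readonly HttpClient Client = new HttpClient();

    public static async Task<string> GetAsync(string url)
    {
        await Slots.WaitAsync();    // acquire a slot before making the request
        try
        {
            return await Client.GetStringAsync(url);
        }
        finally
        {
            Slots.Release();        // always release, even if the call failed
        }
    }
}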
If you have multiple servers subject to the same rate limit (e.g. if rate limiting is based on an API key rather than IP address), it gets harder to self-limit, but you can set self-limiting parameters (calls per second and concurrent calls) in a configuration file, and tune them over time to maximize your throughput without hitting excessive HTTP 429's.

HttpListener setting a total connection timeout

I am trying to control the maximum total duration of a single connection in HttpListener. I am aware of the TimeoutManager property and the five or so different timeout values that it contains, but it is unclear whether those values together cover every place where delay may occur in a connection.
I am looking for something more along the lines of: "If we have a connection that lasts more than x s from the moment of opening the connection until now, abort it without sending anything else or waiting for anything else."
EDIT
To clarify, the scenario that I was experimenting with involves the server trying to send the response and the client not receiving it. This causes HttpListenerResponse.OutputStream.Write() to hang indefinitely. I was trying to find a method that I can call from another thread to hard-abort the connection. I tried using OutputStream.Close() and got "Cannot Close Stream until all bytes are written". I also tried HttpListenerResponse.Abort(), which produced no visible effect.
None of those properties will do what you want. HttpListener is intended to control the request flow, incoming and outgoing data; it doesn't manage the time between when the request has been fully received and when you send a response. Taking care of that is your responsibility.
You should create your own mechanism to abort the request if the total time is higher than desired. A timer can be enough: when a new connection is created, enqueue a timer with the total timeout as its expiration time; if the request ends before the timer expires, cancel the timer; otherwise, let the timer abort the request.
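A minimal sketch of that timer idea (note the asker reported that Abort() had no visible effect in their test, so treat this as the shape of the mechanism rather than a verified fix):

using System;
using System.Net;
using System.Threading;

static void Handle(HttpListenerContext context, TimeSpan totalTimeout)
{
    // One-shot timer: hard-aborts the connection if handling runs too long.
    var watchdog = new Timer(_ => context.Response.Abort(),
                             null, totalTimeout, Timeout.InfiniteTimeSpan);
    try
    {
        // ... read the request and write the response here ...
    }
    finally
    {
        watchdog.Dispose();   // finished in time: cancel the pending abort
    }
}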

How many simultaneous (concurrent) connections are actually active during many async requests

My understanding is the point of Task is to abstract out threads, and that a new thread is not guaranteed per Task.
I'm debugging in VS2010, and I have something similar to this:
var request = WebRequest.Create(URL);
Task.Factory.FromAsync<WebResponse>(
        request.BeginGetResponse,
        request.EndGetResponse,
        null)                    // the state object required by this overload
    .ContinueWith(
        t => { /* ... stuff to do with response ... */ });
If I make X calls to this, e.g. start up X async web requests, how am I to calculate how many simultaneous (concurrent) connections are actually being made at any given time during execution? I assume that somehow it is opening only the max it can (in the case X is very high), and the other Tasks are blocked while waiting?
Any insight into this or how I can check with the debugger to determine how many active (open) connections are existent at a given point in execution would be great.
Basically, I'm wondering if it's handled for me, or if I have to take special consideration so that I do not appear to be attacking a server?
This won't really be specific to Task. The external connection is created as soon as you make your call to Task.Factory.FromAsync. The "task" that the Task is performing is simply waiting for the response to get back (not for it to be sent in the first place). Thus the call to BeginGetResponse will fail if your machine is unable to send any more requests, and the response will contain an error message if the server is rejecting your requests because it believes you are flooding it.
The only real place that Task comes into play here is the amount of time between when the response is actually received by the machine and when your continuation runs. If you are getting lots of responses, or otherwise have lots of work in the thread pool, it could take some time for it to get to your continuation.
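One concrete place to look, not mentioned above: in System.Net, outgoing HttpWebRequest connections are pooled per host by ServicePoint, and ServicePointManager exposes both the per-host limit and the current count of open connections. A small sketch (the limit of 20 and the example host are arbitrary):

using System;
using System.Net;

// Raise the default per-host connection limit before issuing requests
// (classic .NET Framework client apps default to a small number).
ServicePointManager.DefaultConnectionLimit = 20;   // arbitrary example value

// Inspect the pool for a specific host at any point during execution:
ServicePoint sp = ServicePointManager.FindServicePoint(new Uri("http://example.com"));
Console.WriteLine("Limit: {0}, currently open: {1}",
                  sp.ConnectionLimit, sp.CurrentConnections);

Watching CurrentConnections in the debugger is one way to answer the "how many are open right now" question directly.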

Handling Timeouts in a Socket Server

I have an asynchronous socket server that contains a thread-safe collection of all connected clients. If there's no activity coming from a client for a set amount of time (i.e. timeout), the server application should disconnect the client. Can someone suggest the best way to efficiently track this timeout for each connected client and disconnect when the client times out? This socket server must be very high performance and at any given time, hundreds of client could be connected.
One solution is to associate each client with a last-activity timestamp and have a timer periodically poll the collection, disconnecting any connection that has timed out based on that timestamp. However, this solution doesn't seem very good to me, because the timer thread would have to lock the collection while it polls (preventing any other connections/disconnections) for the entire process of checking every connected client and disconnecting the timed-out ones.
Any suggestions/ideas would be greatly appreciated. Thanks.
If this is a new project or if you're open to a major refactor of your project, have a look at Reactive Extensions. Rx has an elegant solution for timeouts in asynchronous calls:
var getBytes = Observable.FromAsyncPattern<byte[], int, int, int>(
    _readStream.BeginRead, _readStream.EndRead);
getBytes(buffer, 0, buffer.Length)
    .Timeout(timeout);
Note, the code above is just intended to demonstrate how to timeout in Rx and it would be more complex in a real project.
Performance-wise, I couldn't say unless you profile your specific use case. But I have seen a talk where they used Rx in a complex data-driven platform (probably like your requirements) and they said that their software was able to make decisions within less than 30ms.
Code-wise, I find that using Rx makes my code look more elegant and less verbose.
Your solution is quite OK for many cases, as long as the number of connected clients is small relative to the cost of enumerating the collection of their contexts. If you allow some wiggle room in the timeout (i.e. it's OK to disconnect a client anywhere between 55 and 65 seconds of inactivity), I'd just run the purging routine every so often.
The other approach, which worked great for me, was using a queue of activity tokens. Let's use C#:
class Token
{
    public ClientContext TheClient;   // the client we've observed activity on
    public DateTime TheTime;          // the time of the observed activity
}

class ClientContext
{
    // ... whatever you need to know about the client
    public DateTime LastActivity;     // the time the last activity happened on this client
}
Every time activity happens on a particular client, a token is generated and pushed into a FIFO queue, and LastActivity is updated in the ClientContext.
There is another thread which runs the following loop (a rough sketch follows the steps):
extracts the oldest token from the FIFO queue;
checks whether TheTime in this token matches TheClient.LastActivity;
if it does, shuts down the client;
looks at the next oldest token in the queue and calculates how much time is left until that client would need to be shut down;
Thread.Sleep(<this time>);
repeat.
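Here is one possible shape of that watchdog loop, using the classes above (the 60-second idle timeout and the Disconnect() method on ClientContext are assumptions for the sketch):

using System;
using System.Collections.Concurrent;
using System.Threading;

static readonly ConcurrentQueue<Token> ActivityQueue = new ConcurrentQueue<Token>();
static readonly TimeSpan IdleTimeout = TimeSpan.FromSeconds(60);   // assumed value

static void WatchdogLoop()
{
    while (true)
    {
        Token token;
        if (!ActivityQueue.TryDequeue(out token))
        {
            Thread.Sleep(IdleTimeout);   // nothing to watch yet
            continue;
        }

        // Sleep until this token's client would time out.
        TimeSpan remaining = token.TheTime + IdleTimeout - DateTime.UtcNow;
        if (remaining > TimeSpan.Zero)
            Thread.Sleep(remaining);

        // Shut the client down only if no newer activity superseded this token.
        if (token.TheClient.LastActivity == token.TheTime)
            token.TheClient.Disconnect();   // Disconnect() is assumed to exist
    }
}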
The cost of this approach is small but constant, i.e. O(1) overhead per activity event. One could come up with solutions that have a faster best case, but it seems to me that it's hard to beat its worst-case performance.
I think it is easier to control the timeout from within the connection thread. So you may have something like this:
// accept the socket and open the stream
stream.ReadTimeout = 10000; // 10 seconds
while (true)
{
    int bytes = stream.Read(buffer, 0, length);
    if (bytes == 0)
        break;   // client closed the socket
    // process the data
}
The stream.Read() call will either return data (which means the client is alive), return 0 (the client closed the socket), or throw an IOException, both on abnormal disconnect and when nothing arrives within ReadTimeout; the latter case is your inactivity timeout.

client-server question

If I have a client that is connected to a server and the server crashes, how can I determine, from my client, that the connection is down? The idea is that in my client's while loop I wait to read a line from the server (String a = sr.ReadLine();), and if the server crashes while the client is waiting to receive that line, how do I close the thread that contains my while loop?
Many have told me that in that while(alive) { .. } I should just change the alive value to false, but if my program is currently waiting for a line to read, it will never exit the while because it is blocked in sr.ReadLine().
I was thinking that if I can't send a line to the server, I should just close the client thread with .Abort(). Any ideas?
Have a timeout parameter in your ReadLine method which takes a TimeSpan value and times out after that interval if no response is received:
public string ReadLine(TimeSpan timeout)
{
    // ... your logic ...
}
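One possible way to fill that in, sketched with Task.Run (this assumes the StreamReader is passed to the method; note the abandoned read still occupies a pool thread until data arrives or the stream is closed):

using System;
using System.IO;
using System.Threading.Tasks;

public static string ReadLine(StreamReader sr, TimeSpan timeout)
{
    // Run the blocking read on the thread pool and wait with a deadline.
    Task<string> readTask = Task.Run(() => sr.ReadLine());
    if (readTask.Wait(timeout))
        return readTask.Result;

    // No data arrived in time; treat the server as unresponsive.
    throw new TimeoutException("Server did not respond within " + timeout + ".");
}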
For examples, check these SO posts:
Implementing a timeout on a function returning a value
Implement C# Generic Timeout
Is the server app your own, or something off the shelf?
If it's yours, send a heartbeat every couple of seconds to let the clients know that the connection and service are still alive. (This is a bit more reliable than just seeing if the connection is closed, since it is possible for the connection to remain open while the server app is locked up.)
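If you control the server, the heartbeat can be as simple as a timer that writes a marker line to every client (a sketch; the Clients collection, the "PING" marker, and the 5-second interval are all invented for illustration):

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;

// 'Clients' stands in for your thread-safe collection of connected client writers.
static readonly ConcurrentBag<StreamWriter> Clients = new ConcurrentBag<StreamWriter>();

static readonly Timer Heartbeat = new Timer(_ =>
{
    foreach (StreamWriter client in Clients)
    {
        try { client.WriteLine("PING"); client.Flush(); }
        catch (IOException) { /* connection is dead; clean it up elsewhere */ }
    }
}, null, TimeSpan.Zero, TimeSpan.FromSeconds(5));   // 5-second interval, arbitrary

Clients then filter "PING" lines out of their normal reads and treat the absence of any line for a couple of intervals as a dead connection.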
The server crashing is nothing special from your client's point of view. There are several external factors that can make the connection go down: the client itself is one of them, internet/LAN problems are another.
It doesn't matter why something fails; the server should handle it anyway. Servers going down will make your users scream ;)
Regarding multithreading, I suggest that you look at the BeginXXX/EndXXX asynchronous methods. They give you much more power and a more robust solution.
Try to avoid any strategy that relies on Thread.Abort(). If you cannot avoid it, make sure you understand the idiom for that mechanism, which involves running the work in a separate AppDomain and catching ThreadAbortException.
If the server crashes, I imagine you will have more problems than just fixing a while loop. Your program may enter an unstable state for other reasons; state should not be overlooked. That being said, a simple "server timed out" message may suffice. You could take it a step further and ping, then give a slightly more advanced message: "server appears to be down".
