I have written a small WinForms application that sends HTTP requests to every IP address within my local network to discover a certain device of mine. With my particular subnet mask, that's 512 addresses. I originally wrote this using a BackgroundWorker, but I wanted to try out HttpClient and the async/await pattern to achieve the same thing. The code below uses a single instance of HttpClient, and I wait until all the requests have completed. The issue is that the main thread gets blocked. I know this because I have a PictureBox with a loading GIF, and it does not animate uniformly. I put the GetAsync method in a Task.Run as suggested here, but that didn't work either.
private async void button1_Click(object sender, EventArgs e)
{
    var addresses = networkUtils.generateIPRange..
    await MakeMultipleHttpRequests(addresses);
}
public async Task MakeMultipleHttpRequests(IPAddress[] addresses)
{
    List<Task<HttpResponseMessage>> httpTasks = new List<Task<HttpResponseMessage>>();
    foreach (var address in addresses)
    {
        Task<HttpResponseMessage> response = MakeHttpGetRequest(address.ToString());
        httpTasks.Add(response);
    }
    try
    {
        if (httpTasks.ToArray().Length != 0)
        {
            await Task.WhenAll(httpTasks.ToArray());
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine("\thttp tasks did not complete Exception : {0}", ex.Message);
    }
}
private async Task<HttpResponseMessage> MakeHttpGetRequest(string address)
{
    var url = string.Format("http://{0}/getStatus", address);
    var cts = new System.Threading.CancellationTokenSource();
    cts.CancelAfter(TimeSpan.FromSeconds(10));
    HttpResponseMessage response = null;
    var request = new HttpRequestMessage(HttpMethod.Get, url);
    response = await httpClient.SendAsync(request, cts.Token);
    return response;
}
I have read about a similar issue here, but my GUI thread is not doing much. I have also read here that I may be running out of threads. Is this the issue, and how can I resolve it?
I know it's SendAsync, because if I replace the code with the simple task below, there is no blocking.
await Task.Run(() =>
{
    Thread.Sleep(1000);
});
So one of the issues here is that you are creating 500+ tasks, one after another, in quick succession, with a timeout set outside the task creation.
Just because you ask to run 500+ tasks doesn't mean all 500+ are going to run at the same time. They get queued up and run when the scheduler deems it possible.
You set a 10-second timeout at the time of creation, but the tasks could sit in the scheduler for 10 seconds before they even get executed.
If you want your HTTP requests to time out organically, you can do that like this when you create the HttpClient:
private static readonly HttpClient _httpClient = new HttpClient
{
    Timeout = TimeSpan.FromSeconds(10)
};
So, by moving the timeout to the HttpClient, your method should now look like this:
private static Task<HttpResponseMessage> MakeHttpGetRequest(string address)
{
    return _httpClient.SendAsync(new HttpRequestMessage(HttpMethod.Get, new UriBuilder
    {
        Host = address,
        Path = "getStatus"
    }.Uri));
}
Try using that method and see if it improves your lock-up issue in Debug mode.
As for the lock-up you were seeing: it happens because you are in Debug mode and the debugger is trying to say "hey, you got an exception" 500 times, all at once, because the tasks were all spawned at the same time. Run it in Release mode and see if it still locks up.
What I would consider doing is batching out your operations. Do 20, then wait until those 20 finish, do 20 more, so on and so forth.
If you'd like to see a slick way of batching tasks, there's a sketch below.
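A minimal batching sketch, assuming the addresses array and the MakeHttpGetRequest method from the question (plus a using System.Linq directive): it starts 20 requests, awaits the whole batch, then moves on to the next 20.

const int batchSize = 20;
for (int i = 0; i < addresses.Length; i += batchSize)
{
    // take the next slice of up to 20 addresses and start their requests
    var batch = addresses
        .Skip(i)
        .Take(batchSize)
        .Select(a => MakeHttpGetRequest(a.ToString()));

    // wait for the whole batch to finish before starting the next one
    await Task.WhenAll(batch);
}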
On .NET Framework, the number of connections to a server is controlled by the ServicePointManager class.
For a client, the default connection limit is 2 in client processes.
No matter how many HttpClient.SendAsync invocations you make, only 2 will be active at the same time.
But you can manage the connections yourself.
On .NET Core there is no concept of a service point manager, and the equivalent default limit is int.MaxValue.
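For illustration, a one-line sketch for .NET Framework; the value 50 is arbitrary and would need tuning for your network:

// raise the default per-host connection limit before issuing any requests
System.Net.ServicePointManager.DefaultConnectionLimit = 50;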
I have a web app that connects to an external API.
That API has a limit of 3 connections per second.
I have a method that gets employee data for a whole factory.
It works fine, but I've found that if a particular factory has a lot of employees, I hit the API connection limit and get an error.
(429) API calls exceeded...maximum 3 per Second
So I decided to use await Task.Delay(1000) to add a 1-second delay every time this method is used.
Now it seems to have reduced the number of errors I get, but I am still getting a few limit errors.
Is there another method I could use to ensure my limit is not reached?
Here is my code:
public async Task<YourSessionResponder> GetAll(Guid factoryId)
{
    UserSession.AuthData sessionManager = new UserSession.AuthData
    {
        UserName = "xxxx",
        Password = "xxxx"
    };

    ISessionHandler sessionMgr = new APIclient();
    YourSessionResponder response;
    response = await sessionMgr.GetDataAsync(sessionManager, new ListerRequest
    {
        FactoryId = factoryId
    });
    await Task.Delay(1000);
    return response;
}
I call it like this:
var yourEmployees = GetAll(factoryId);
I have a web app that connects to an external API.
Your current code limits the number of outgoing requests made by a single incoming request to your API. What you need to do is limit all of your outgoing requests, app-wide.
It's possible to do this using a SemaphoreSlim:
private static readonly SemaphoreSlim Mutex = new(1);

public async Task<YourSessionResponder> GetAll(Guid factoryId)
{
    ...
    YourSessionResponder response;
    await Mutex.WaitAsync();
    try
    {
        response = await sessionMgr.GetDataAsync(...);
        await Task.Delay(1000);
    }
    finally
    {
        Mutex.Release();
    }
    return response;
}
But I would take a different approach...
Is there another method I could use to ensure my limit is not reached?
Generally, I recommend just retrying on 429 errors, using de-correlated jittered exponential backoff (see Polly for an easy implementation). That way, when you're "under budget" for the time period, your requests go through immediately, and they only slow down when you hit your API limit.
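As a hedged sketch of what that could look like, using the Polly and Polly.Contrib.WaitAndRetry packages (the HttpClient-based wrapper here is illustrative, not the question's APIclient):

using Polly;
using Polly.Contrib.WaitAndRetry;

static readonly HttpClient client = new HttpClient();

// retry up to 5 times on 429 responses, waiting a de-correlated jittered backoff
static readonly IAsyncPolicy<HttpResponseMessage> retryPolicy = Policy
    .HandleResult<HttpResponseMessage>(r => (int)r.StatusCode == 429)
    .WaitAndRetryAsync(Backoff.DecorrelatedJitterBackoffV2(
        medianFirstRetryDelay: TimeSpan.FromSeconds(1), retryCount: 5));

static Task<HttpResponseMessage> GetWithRetryAsync(string url) =>
    retryPolicy.ExecuteAsync(() => client.GetAsync(url));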
From a comment on the question:
I am calling it like this: var yourEmployees = GetAll(factoryId);
Then you're not awaiting the task. While there's a 1-second delay after each network operation, you're still firing off all of the network operations in rapid succession. You need to await the task before moving on to the next one:
var yourEmployees = await GetAll(factoryId);
Assuming that this is happening in some kind of loop or repeated operation, of course. Otherwise, where would all of these different network tasks be coming from? Whatever high-level logic is invoking the multiple network operations, that logic needs to await one before moving on to the next.
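For example, a sketch of what that loop could look like (factoryIds is a hypothetical collection of factory GUIDs):

foreach (var factoryId in factoryIds)
{
    // awaiting each call ensures the request and its 1-second delay
    // finish before the next request starts
    var yourEmployees = await GetAll(factoryId);
}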
I have a UrlList of only 4 URLs which I want to use to make 4 concurrent requests. Does the code below truly make 4 requests which start at the same time?
My testing appears to show that it does, but am I correct in thinking that there will actually be 4 requests retrieving data from the URL target at the same time or does it just appear that way?
static void Main(string[] args)
{
    var t = Do_TaskWhenAll();
    t.Wait();
}

public static async Task Do_TaskWhenAll()
{
    var downloadTasksQuery = from url in UrlList select Run(url);
    var downloadTasks = downloadTasksQuery.ToArray();
    Results = await Task.WhenAll(downloadTasks);
}

public static async Task<string> Run(string url)
{
    var client = new WebClient();
    AddHeaders(client);
    var content = await client.DownloadStringTaskAsync(new Uri(url));
    return content;
}
Correct, when ToArray is called, the enumerable downloadTasksQuery will yield a task for every URL, running your web requests concurrently.
await Task.WhenAll ensures your task completes only when all web requests have completed.
You can rewrite your code to be less verbose, while doing effectively the same, like so:
public static async Task Do_TaskWhenAll()
{
    var downloadTasks = from url in UrlList select Run(url);
    Results = await Task.WhenAll(downloadTasks);
}
There's no need for ToArray because Task.WhenAll will enumerate your enumerable for you.
I advise you to use HttpClient instead of WebClient. With HttpClient, you won't have to create a new instance of the client for each concurrent request; it allows you to reuse the same client for multiple requests, concurrently.
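For example, a minimal sketch of Run using a single shared HttpClient; the header setup from AddHeaders would move to client.DefaultRequestHeaders, applied once instead of per request:

private static readonly HttpClient client = new HttpClient();

public static Task<string> Run(string url)
{
    // GetStringAsync throws an HttpRequestException on failure status codes
    return client.GetStringAsync(new Uri(url));
}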
The short answer is yes: if you generate multiple Tasks without awaiting each one individually, they can run simultaneously, as long as they are truly asynchronous.
When DownloadStringTaskAsync is awaited, a Task is returned from your Run method, allowing the next iteration to occur whilst waiting for the response.
So the next HTTP request is allowed to be sent without waiting for the first to complete.
As an aside, your method can be written more concisely:
public static async Task Do_TaskWhenAll()
{
    Results = await Task.WhenAll(UrlList.Select(Run));
}
Task.WhenAll has an overload that accepts IEnumerable<Task<TResult>> which is returned from UrlList.Select(Run).
No, there is no guarantee that your requests will be executed in parallel, or immediately.
Starting a task merely queues it to the thread pool. If all of the pool's threads are occupied, that task will necessarily wait until a thread frees up.
In your case, since there are a relatively large number of threads available in the pool, and you are queueing only a small number of items, the pool has no problem servicing them as they come in. The more tasks you queue at once, the more likely this is to change.
If you truly need concurrency, you need to be aware of the thread pool's size and how busy it is. The ThreadPool class will help you manage this; a short sketch follows.
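A small sketch of inspecting and adjusting the pool (the values here are illustrative):

// inspect the pool's configured and currently available threads
ThreadPool.GetMaxThreads(out int maxWorkers, out int maxIo);
ThreadPool.GetAvailableThreads(out int freeWorkers, out int freeIo);
Console.WriteLine($"worker threads: {freeWorkers}/{maxWorkers}, IO threads: {freeIo}/{maxIo}");

// optionally raise the minimum so bursts don't wait on the pool's gradual ramp-up
ThreadPool.SetMinThreads(workerThreads: 64, completionPortThreads: 64);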
We are working on a project developed in UWP (frontend) and REST-MVC-IIS (backend).
I was thinking about a theoretical scenario which might ensue:
From what I know, there is no way to guarantee the order in which requests will be processed and served by IIS.
So in a simple scenario, let's just assume this:
UI:
SelectionChanged(productId=1);
SelectionChanged(productId=2);
private async void SelectionChanged(int productId)
{
    await GetProductDataAsync(productId);
}
IIS:
GetProductDataAsync(productId=1) scheduled on thread pool
GetProductDataAsync(productId=2) scheduled on thread pool
GetProductDataAsync(productId=2) finishes first => send response to client
GetProductDataAsync(productId=1) finishes later => send response to client
As you can see, the request for productId=2, for whatever reason, finished faster than the first request for productId=1.
Because of the way async works, both calls will create continuation tasks on the UI thread, which will override each other if they don't arrive in the correct order, since they populate the same data.
This can be extrapolated to almost any master-detail scenario, where you can end up selecting a master item and getting the wrong details for it (because of the order in which the responses come back from IIS).
What I wanted to know is whether there are best practices for handling this kind of scenario... lots of solutions come to mind, but I don't want to jump the gun and go for one implementation before I see what other options are on the table.
As you presented your code await GetProductDataAsync(productId=2); will always run after await GetProductDataAsync(productId=1); has completed. So, there is no race condition.
If your code was:
await Task.WhenAll(
    GetProductDataAsync(productId: 1),
    GetProductDataAsync(productId: 2));
Then there might be a race condition. And, if that's a problem, it's not particular to async-await but due to the fact that you are making concurrent calls.
If you wrap that code in another method and use ConfigureAwait(false), you'll have only one continuation on the UI thread. (ConfigureAwait takes a bool and returns an awaitable rather than a Task, so it can only be applied to the awaited Task.WhenAll, not to the tasks passed into it.)

async Task GetProductDataAsync()
{
    await Task.WhenAll(
        GetProductDataAsync(productId: 1),
        GetProductDataAsync(productId: 2))
        .ConfigureAwait(false);
}
I think I get what you're saying. Because of the async void event handler, nothing in the UI is awaiting the first call before the second. I am imagining a drop-down of values: when it changes, it fetches the pertinent data.
Ideally, you would probably want to either lock out the UI during the call or implement a CancellationToken.
If you're just looking for a way to meter the calls, keep reading...
I use a singleton repository layer in the UWP application that decides whether to fetch the data from a web service or from a locally cached copy. Additionally, if you want to meter the requests to process one at a time, use SemaphoreSlim. It works like lock, but for async operations (an oversimplified simile).
Here is an example that should illustrate how it works...
public class ProductRepository : IProductRepository
{
    // initializing (1,1) will allow only 1 use of the object at a time
    static SemaphoreSlim semaphoreLock = new SemaphoreSlim(1, 1);

    public async Task<IProductData> GetProductDataByIdAsync(int productId)
    {
        // if the semaphore is in use, subsequent requests will wait here
        await semaphoreLock.WaitAsync();
        try
        {
            using (var client = new HttpClient())
            {
                client.BaseAddress = new Uri("yourbaseurl");
                client.DefaultRequestHeaders.Accept.Clear();
                client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

                string url = "yourendpoint";
                HttpResponseMessage response = await client.GetAsync(url);
                if (response.IsSuccessStatusCode)
                {
                    var json = await response.Content.ReadAsStringAsync();
                    ProductData prodData = JsonConvert.DeserializeObject<ProductData>(json);
                    return prodData;
                }

                // handle non-success; returning null keeps the example compiling
                return null;
            }
        }
        catch (Exception)
        {
            // handle exception; rethrow so callers can see the failure
            throw;
        }
        finally
        {
            // if any requests queued up, the next one will fire here
            semaphoreLock.Release();
        }
    }
}
I have a problem here. In the code below, the async/await pattern is used with HttpListener. When a request is sent via HTTP, a "delay" query-string argument is expected, and its value causes the server to delay processing of that request for the given period. I need the server to keep processing pending requests even after it has stopped accepting new ones.
static void Main(string[] args)
{
    HttpListener httpListener = new HttpListener();
    CountdownEvent sessions = new CountdownEvent(1);
    bool stopRequested = false;

    httpListener.Prefixes.Add("http://+:9000/GetData/");
    httpListener.Start();

    Task listenerTask = Task.Run(async () =>
    {
        while (true)
        {
            try
            {
                var context = await httpListener.GetContextAsync();
                sessions.AddCount();
                Task childTask = Task.Run(async () =>
                {
                    try
                    {
                        Console.WriteLine($"Request accepted: {context.Request.RawUrl}");
                        int delay = int.Parse(context.Request.QueryString["delay"]);
                        await Task.Delay(delay);
                        using (StreamWriter sw = new StreamWriter(context.Response.OutputStream, Encoding.Default, 4096, true))
                        {
                            await sw.WriteAsync("<html><body><h1>Hello world</h1></body></html>");
                        }
                        context.Response.Close();
                    }
                    finally
                    {
                        sessions.Signal();
                    }
                });
            }
            catch (HttpListenerException ex)
            {
                if (stopRequested && ex.ErrorCode == 995)
                {
                    break;
                }
                throw;
            }
        }
    });

    Console.WriteLine("Server is running. ENTER to stop...");
    Console.ReadLine();

    sessions.Signal();
    stopRequested = true;
    httpListener.Stop();

    Console.WriteLine("Stopped accepting requests. Waiting for the pendings...");
    listenerTask.Wait();
    sessions.Wait();

    Console.WriteLine("Finished");
    Console.ReadLine();
    httpListener.Close();
}
The exact problem is that when the server is stopped and HttpListener.Stop is called, all pending requests are aborted immediately, i.e. the code is unable to send the responses back.
In a non-async/await implementation (i.e. a simple Thread-based one), I have the option of aborting the thread (which I suppose is very bad), and that allows me to process the pending requests, because it simply aborts the HttpListener.GetContext call.
Can you please point out what I am doing wrong, and how can I prevent HttpListener from aborting pending requests in the async/await pattern?
It seems that when HttpListener closes the request queue handle, the requests in progress are aborted. As far as I can tell, there is no way to stop HttpListener from doing that - apparently it's a compatibility thing. In any case, that's how its GetContext machinery works: when the handle is closed, the native call that actually gets the request context returns an error immediately.
Thread.Abort doesn't help - really, I've yet to see a place where Thread.Abort is used correctly outside of the "application domain unloading" scenario. Thread.Abort can only abort managed code. Since your code is currently running native code, it will only be aborted when it returns to managed code - which is almost exactly equivalent to just doing this:
var context = await httpListener.GetContextAsync();
if (stopRequested) return;
... and since there's no better cancellation API for HttpListener, this is really your only option if you want to stick with HttpListener.
The shutdown will look like this:
stopRequested = true;
sessions.Wait();
httpListener.Dispose();
listenerTask.Wait();
I'd also suggest using CancellationToken instead of a bool flag - it handles all the synchronization woes for you. If that's not desirable for some reason, make sure you synchronize access to the flag - contractually, the compiler is allowed to omit the check, since it's impossible for the flag to change in single-threaded code.
If you want to, you can make listenerTask complete sooner by sending a dummy HTTP request to yourself right after setting stopRequested - this will cause GetContext to return immediately with the new request, and you can return. This is an approach that's commonly used when dealing with APIs that don't support "nice" cancellation, e.g. UdpClient.Receive.
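A sketch of that trick against the question's prefix (kept synchronous to match the question's Main; the ?delay=0 query is just to satisfy the handler's parsing):

stopRequested = true;

// poke the listener so the pending GetContextAsync returns immediately
using (var client = new HttpClient())
{
    try
    {
        client.GetAsync("http://localhost:9000/GetData/?delay=0").Wait();
    }
    catch
    {
        // ignore failures - we only needed to wake the listener up
    }
}

listenerTask.Wait();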
I am using System.Net.Http to access network resources. When running on a single thread it works perfectly. When I run the code via the TPL, it hangs and never completes until the timeout is hit.
What happens is that all the threads end up waiting on the sendTask.Result line. I am not sure what they are waiting on, but I assume it is something in HttpClient.
The networking code is:
using (var request = new HttpRequestMessage(HttpMethod.Get, "http://google.com/"))
{
    using (var client = new HttpClient())
    {
        var sendTask = client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
        using (var response = sendTask.Result)
        {
            var streamTask = response.Content.ReadAsStreamAsync();
            using (var stream = streamTask.Result)
            {
                // problem occurs in the line above
            }
        }
    }
}
The TPL code that I am using is as follows. The _Do method contains exactly the code above.
var taskEnumerables = Enumerable.Range(0, 100);
var tasks = taskEnumerables
    .Select(x => Task.Factory.StartNew(() => _Do(ref count)))
    .ToArray();
Task.WaitAll(tasks);
I have tried a couple of different schedulers, and the only way that I can get it to work is to write a scheduler that limits the number of running tasks to 2 or 3. However, even this fails sometimes.
I would assume that my problem is in HttpClient, but for the life of me I can't see any shared state in my code. Does anyone have any ideas?
Thanks,
Erick
I finally found the issue. The problem was that HttpClient issues its own additional tasks, so a single task that I start might actually end up spawning five or more tasks.
The scheduler was configured with a limit on the number of tasks. I started the task, which caused the number of running tasks to hit the max limit. The HttpClient then attempted to start its own tasks, but because the limit was reached, it blocked until the number of tasks went down, which of course never happened, as they were waiting for my tasks to finish. Hello deadlock.
The morals of the story:
Tasks might be a global resource
There are often non-obvious interdependencies between tasks
Schedulers are not easy to work with
Don't assume that you control either schedulers or number of tasks
I ended up using another method to throttle the number of connections.
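The answer doesn't say which method was used; one common approach is a SemaphoreSlim that caps the number of requests in flight, sketched here with an illustrative DoAsync wrapper:

private static readonly SemaphoreSlim throttle = new SemaphoreSlim(3);
private static readonly HttpClient sharedClient = new HttpClient();

private static async Task<string> DoAsync(string url)
{
    // at most 3 requests run concurrently; the rest wait here
    await throttle.WaitAsync();
    try
    {
        using (var response = await sharedClient.GetAsync(url))
        {
            return await response.Content.ReadAsStringAsync();
        }
    }
    finally
    {
        throttle.Release();
    }
}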
Erick