The following situation is given:
1. A new job is sent to an API via a POST request. The API returns a JobID and HTTP response code 202.
2. The JobID is then used to poll a status endpoint. Once the response body has a "Finished" property set, you can continue with step 3.
3. The results are queried from a result endpoint using the JobID and can then be processed.
My question is how I can solve this elegantly and cleanly. Are there perhaps ready-to-use libraries that implement exactly this functionality? I could not find anything like this for RestSharp or any other HTTP client.
The current solution looks like this:
async Task<string> PostNewJob()
{
    var restClient = new RestClient("https://baseUrl/");
    var restRequest = new RestRequest("jobs");
    //add headers
    var response = await restClient.ExecutePostTaskAsync(restRequest);
    string jobId = JsonConvert.DeserializeObject<string>(response.Content);
    return jobId;
}
async Task WaitTillJobIsReady(string jobId)
{
    var restClient = new RestClient("https://baseUrl/");
    string jobStatus = string.Empty;
    var request = new RestRequest(jobId) { Method = Method.GET };
    do
    {
        if (!string.IsNullOrEmpty(jobStatus))
            Thread.Sleep(5000); //wait for next status update
        var response = await restClient.ExecuteGetTaskAsync(request, CancellationToken.None);
        jobStatus = JsonConvert.DeserializeObject<string>(response.Content);
    } while (jobStatus != "finished");
}
async Task<List<dynamic>> GetJobResponse(string jobId)
{
    var restClient = new RestClient("https://baseUrl/bulk/" + jobId);
    var restRequest = new RestRequest { Method = Method.GET };
    var response = await restClient.ExecuteGetTaskAsync(restRequest, CancellationToken.None);
    dynamic downloadResponse = JsonConvert.DeserializeObject(response.Content);
    var responseResult = new List<dynamic> { downloadResponse?.ToList() };
    return responseResult;
}
static async Task Main()
{
    var jobId = await PostNewJob();
    await WaitTillJobIsReady(jobId);
    var responseResult = await GetJobResponse(jobId);
    //handle result
}
As @Paulo Morgado said, I should not use Thread.Sleep / Task.Delay in production code. But in my opinion I have to use it in WaitTillJobIsReady(); otherwise, wouldn't I overwhelm the API with GET requests in the loop?
What is the best practice for this type of problem?
Long Polling
There are multiple ways you can handle this type of problem, but as others have already pointed out, no library such as RestSharp currently has this built in. In my opinion, the preferred way of overcoming this would be to modify the API to support some form of long polling, as Nikita suggested. This is where:
The server holds the request open until new data is available. Once available, the server responds and sends the new information. When the client receives the new information, it immediately sends another request, and the operation is repeated. This effectively emulates a server push feature.
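For illustration, a minimal client-side long-polling loop might look like the sketch below. It assumes a hypothetical status endpoint that holds the request open until the job state changes; the URL, query parameter, and response check are invented for the example, not part of the original API.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class LongPollingClient
{
    private static readonly HttpClient client = new HttpClient
    {
        // Allow the server to hold the request open longer than the default 100 s.
        Timeout = TimeSpan.FromMinutes(5)
    };

    public static async Task WaitForJobAsync(string jobId)
    {
        while (true)
        {
            // Hypothetical endpoint: the server blocks until the status changes
            // or its own internal timeout elapses, then responds.
            var body = await client.GetStringAsync(
                "https://baseUrl/jobs/" + jobId + "?longpoll=true");
            if (body.Contains("Finished"))
                return;
            // Not finished yet: immediately re-issue the request;
            // the server, not the client, does the waiting.
        }
    }
}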
Using a scheduler
Unfortunately this isn't always possible. Another, more elegant solution would be to create a service that checks the status, and then use a scheduler such as Quartz.NET or Hangfire to run the service at recurring intervals, such as 500 ms to 3 s, until it succeeds. Once it gets back the "Finished" property, you can mark the task as complete to stop the process from continuing to poll. This would arguably be better than your current solution and offer much more control and feedback over what's going on.
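As a rough sketch of the scheduler idea, here is roughly what the polling job could look like with Quartz.NET 3.x. The fluent API below is Quartz's own; the StatusClient helper and the "jobId" data key are made up for the example.
using System;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

public class CheckJobStatusJob : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        var jobId = context.MergedJobDataMap.GetString("jobId");
        // StatusClient.IsFinishedAsync is a hypothetical wrapper around the status endpoint.
        if (await StatusClient.IsFinishedAsync(jobId))
        {
            // "Finished" received: unschedule the trigger so polling stops.
            await context.Scheduler.UnscheduleJob(context.Trigger.Key);
        }
    }
}

// Wiring it up to poll every 2 seconds:
var scheduler = await StdSchedulerFactory.GetDefaultScheduler();
await scheduler.Start();
var job = JobBuilder.Create<CheckJobStatusJob>()
    .UsingJobData("jobId", jobId)
    .Build();
var trigger = TriggerBuilder.Create()
    .StartNow()
    .WithSimpleSchedule(s => s.WithIntervalInSeconds(2).RepeatForever())
    .Build();
await scheduler.ScheduleJob(job, trigger);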
Using Timers
A better choice than Thread.Sleep would be a Timer. A timer lets you invoke a delegate at specified intervals, which is exactly what you want to do here.
Below is an example of a timer that runs every 2 seconds until it has completed 10 runs. (Taken from the Microsoft documentation.)
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    private static Timer timer;

    static void Main(string[] args)
    {
        var timerState = new TimerState { Counter = 0 };
        timer = new Timer(
            callback: new TimerCallback(TimerTask),
            state: timerState,
            dueTime: 1000,
            period: 2000);

        while (timerState.Counter <= 10)
        {
            Task.Delay(1000).Wait();
        }

        timer.Dispose();
        Console.WriteLine($"{DateTime.Now:HH:mm:ss.fff}: done.");
    }

    private static void TimerTask(object timerState)
    {
        Console.WriteLine($"{DateTime.Now:HH:mm:ss.fff}: starting a new callback.");
        var state = timerState as TimerState;
        Interlocked.Increment(ref state.Counter);
    }

    class TimerState
    {
        public int Counter;
    }
}
Why you don't want to use Thread.Sleep
The reason you don't want to use Thread.Sleep for operations on a recurring schedule is that Thread.Sleep actually relinquishes control, and when the thread regains it is ultimately not up to the thread. It is simply saying it wants to give up its remaining time for at least x milliseconds, but in reality it could take much longer to regain control.
Per the Microsoft documentation:
The system clock ticks at a specific rate called the clock resolution. The actual timeout might not be exactly the specified timeout, because the specified timeout will be adjusted to coincide with clock ticks. For more information on clock resolution and the waiting time, see the Sleep function from the Windows system APIs.
Peter Ritchie actually wrote an entire blog post on why you shouldn't use Thread.Sleep.
EndNote
Overall I would say your current approach has the right idea about how this should be handled; however, you may want to future-proof it by refactoring to use one of the methods mentioned above.
Related
I'm running a method synchronously in parallel using System.Threading.Tasks.Parallel.ForEach. At the end of the method, it needs to make a few dozen HTTP POST requests, which do not depend on each other. Since I'm on .NET Framework 4.6.2, System.Net.Http.HttpClient is exclusively asynchronous, so I'm using Nito.AsyncEx.AsyncContext to avoid deadlocks, in the form:
public static void MakeMultipleRequests(IEnumerable<MyClass> enumerable)
{
    AsyncContext.Run(async () => await Task.WhenAll(enumerable.Select(async c =>
        await getResultsFor(c).ConfigureAwait(false))));
}
The getResultsFor(MyClass c) method then creates an HttpRequestMessage and sends it using:
await httpClient.SendAsync(request);
The response is then parsed and the relevant fields are set on the instance of MyClass.
My understanding is that the synchronous thread will block at AsyncContext.Run(...), while a number of tasks are performed asynchronously by the single AsyncContextThread owned by AsyncContext. When they are all complete, the synchronous thread will unblock.
This works fine for a few hundred requests, but when it scales up to a few thousand over five minutes, some of the requests start returning HTTP 408 Request Timeout errors from the server. My logs indicate that these timeouts are happening at the peak load, when there are the most requests being sent, and the timeouts happen long after many of the other requests have been received back.
I think the problem is that the tasks are awaiting the server handshake inside HttpClient, but they are not continued in FIFO order, so by the time they are continued the handshake has expired. However, I can't think of any way to deal with this, short of using a System.Threading.SemaphoreSlim to enforce that only one task can await httpClient.SendAsync(...) at a time.
My application is very large, and converting it entirely to async is not viable.
This isn't something that can be fixed by wrapping the tasks before blocking. For starters, if all the requests do go through, you may end up nuking the server; right now you're nuking the client. There's a 2-concurrent-requests-per-domain limit in .NET Framework that can be relaxed, but if you set it too high you may end up nuking the server.
You can solve this by using DataFlow blocks in a pipeline to execute requests with a fixed degree of parallelism and then parse them. Let's say you have a class called MyPayload with lots of Items in a property:
ServicePointManager.DefaultConnectionLimit = 1000;

var options = new ExecutionDataflowBlockOptions
{
    MaxDegreeOfParallelism = 10
};

// _client is a shared HttpClient instance
var downloader = new TransformBlock<string, MyPayload>(async url =>
{
    var json = await _client.GetStringAsync(url);
    var data = JsonConvert.DeserializeObject<MyPayload>(json);
    return data;
}, options);

var importer = new ActionBlock<MyPayload>(async data =>
{
    var items = data.Items;
    using (var connection = new SqlConnection(connectionString))
    using (var bcp = new SqlBulkCopy(connection))
    using (var reader = ObjectReader.Create(items))
    {
        bcp.DestinationTableName = destination;
        connection.Open();
        await bcp.WriteToServerAsync(reader);
    }
});

downloader.LinkTo(importer, new DataflowLinkOptions
{
    PropagateCompletion = true
});
I'm using FastMember's ObjectReader to wrap the items in a DbDataReader that can be used to bulk insert the records to a database.
Once you have this pipeline, you can start posting URLs to the head block, downloader:
foreach (var url in hugeList)
{
    downloader.Post(url);
}
downloader.Complete();
Once all URLs are posted, you tell downloader to complete and await the last block in the pipeline to finish with:
await importer.Completion;
Firstly, Nito.AsyncEx.AsyncContext will execute on a threadpool thread; to avoid deadlocks in the way described requires an instance of Nito.AsyncEx.AsyncContextThread, as outlined in the documentation.
There are two possible causes:
- a bug in System.Net.Http.HttpClient in .NET Framework 4.6.2
- the continuation priority issue outlined in the question, in which individual requests are not continued promptly enough and so time out
As described in this answer and its comments on a similar question, it may be possible to deal with the priority problem using a custom TaskScheduler, but throttling the number of concurrent requests using a semaphore is probably the best answer:
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Nito.AsyncEx;

public class MyClass
{
    private static readonly AsyncContextThread asyncContextThread
        = new AsyncContextThread();
    private static readonly HttpClient httpClient = new HttpClient();
    private static readonly SemaphoreSlim semaphore = new SemaphoreSlim(10);

    public HttpRequestMessage Request { get; set; }
    public HttpResponseMessage Response { get; private set; }

    private async Task GetResponseAsync()
    {
        await semaphore.WaitAsync();
        try
        {
            Response = await httpClient.SendAsync(Request);
        }
        finally
        {
            semaphore.Release();
        }
    }

    public static void MakeMultipleRequests(IEnumerable<MyClass> enumerable)
    {
        Task.WaitAll(enumerable.Select(c =>
            asyncContextThread.Factory.Run(() =>
                c.GetResponseAsync())).ToArray());
    }
}
Edited to use AsyncContextThread for executing async code on a non-threadpool thread, as intended. AsyncContext does not do this on its own.
I am working on a protocol and trying to use as much async/await as I can to make it scale well. The protocol will have to support hundreds to thousands of simultaneous connections. Below is a little bit of pseudo code to illustrate my problem.
private static async void DoSomeWork()
{
    var protocol = new FooProtocol();
    await protocol.Connect("127.0.0.1", 1234);
    var i = 0;
    while (i != int.MaxValue)
    {
        i++;
        var request = new FooRequest();
        request.Payload = "Request Nr " + i;
        var task = protocol.Send(request);
        _ = task.ContinueWith(async tmp =>
        {
            var resp = await task;
            Console.WriteLine($"Request {resp.SequenceNr} Successful: {(resp.Status == 0)}");
        });
    }
}
And below is a little pseudo code for the protocol.
public class FooProtocol
{
    private int sequenceNr = 0;
    private SemaphoreSlim ss = new SemaphoreSlim(20, 20);

    public Task<FooResponse> Send(FooRequest fooRequest)
    {
        var tcs = new TaskCompletionSource<FooResponse>();
        ss.Wait();
        var tmp = Interlocked.Increment(ref sequenceNr);
        fooRequest.SequenceNr = tmp;
        // Faking some arbitrary delay. This work is done over sockets.
        Task.Run(async () =>
        {
            await Task.Delay(1000);
            tcs.SetResult(new FooResponse() { SequenceNr = tmp });
            ss.Release();
        });
        return tcs.Task;
    }
}
I have a protocol with request and response pairs, using asynchronous socket programming. The FooProtocol takes care of matching up requests with responses (sequence numbers) and also enforces the maximum number of pending requests (done in the pseudo code, and in my real code, with a SemaphoreSlim, so I am not worried about runaway requests). The DoSomeWork method calls Protocol.Send, but I don't want to await the response; I want to spin around and send the next request until I am blocked by the maximum number of pending requests. When a task does complete, I want to check the response and maybe do some work.
I would like to fix two things:
1) I would like to avoid using Task.ContinueWith(), because it does not fit in cleanly with the async/await pattern.
2) Because I have awaited the connection, I have had to use the async modifier. Now I get warnings from the IDE: "Because this call is not awaited, execution of the current method continues before the call is completed. Consider applying the 'await' operator to the result of the call." I don't want to do that, because as soon as I do, it ruins the protocol's ability to have many requests in flight. The only way I can get rid of the warning is to use a discard. That isn't the worst thing, but I can't help feeling I am missing a trick and fighting this too hard.
Side note: I hope your actual code is using SemaphoreSlim.WaitAsync rather than SemaphoreSlim.Wait.
In most socket code, you do end up with a list of connections, and along with each connection is a "processor" of some kind. In the async world, this is naturally represented as a Task.
So you will need to keep a list of Tasks; at the very least, your consuming application will need to know when it is safe to shut down (i.e., all responses have been received).
Don't preemptively worry about using Task.Run; as long as you aren't blocking (e.g., SemaphoreSlim.Wait), you probably will not starve the thread pool. Remember that during the awaits, no thread pool thread is used.
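A minimal sketch of that idea, building on the FooProtocol types from the question (the moreWork condition and NextRequest helper are placeholders, not part of the original code):
// Track one Task per in-flight request so shutdown can await them all.
var pending = new List<Task>();
while (moreWork) // placeholder loop condition
{
    var request = NextRequest(); // placeholder for producing the next FooRequest
    pending.Add(HandleAsync(protocol, request)); // not awaited here, just tracked
}
await Task.WhenAll(pending); // all responses received: safe to shut down

static async Task HandleAsync(FooProtocol protocol, FooRequest request)
{
    var resp = await protocol.Send(request);
    Console.WriteLine($"Request {resp.SequenceNr} Successful: {(resp.Status == 0)}");
}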
I am not sure that it's a good idea to enforce the maximum concurrency at the protocol level. It seems to me that this responsibility belongs to the caller of the protocol. So I would remove the SemaphoreSlim, and let it do the one thing that it knows to do well:
public class FooProtocol
{
    private int sequenceNr = 0;

    public async Task<FooResponse> Send(FooRequest fooRequest)
    {
        var tmp = Interlocked.Increment(ref sequenceNr);
        fooRequest.SequenceNr = tmp;
        await Task.Delay(1000); // Faking some arbitrary delay
        return new FooResponse() { SequenceNr = tmp };
    }
}
Then I would use an ActionBlock from the TPL Dataflow library to coordinate the process of sending a massive number of requests through the protocol, handling the concurrency, the backpressure (BoundedCapacity), the cancellation (if needed), the error handling, and the status of the whole operation (running, completed, failed etc.). Example:
private static async Task DoSomeWorkAsync()
{
    var protocol = new FooProtocol();
    var actionBlock = new ActionBlock<FooRequest>(async request =>
    {
        var resp = await protocol.Send(request);
        Console.WriteLine($"Request {resp.SequenceNr} Status: {resp.Status}");
    }, new ExecutionDataflowBlockOptions()
    {
        MaxDegreeOfParallelism = 20,
        BoundedCapacity = 100
    });

    await protocol.Connect("127.0.0.1", 1234);
    foreach (var i in Enumerable.Range(0, Int32.MaxValue))
    {
        var request = new FooRequest();
        request.Payload = "Request Nr " + i;
        var accepted = await actionBlock.SendAsync(request);
        if (!accepted) break; // The block has failed irrecoverably
    }
    actionBlock.Complete();
    await actionBlock.Completion; // Propagate any exceptions
}
The BoundedCapacity = 100 configuration means that the ActionBlock will store in its internal buffer at most 100 requests. When this threshold is reached, anyone who wants to send more requests to it will have to wait. The awaiting will happen in the await actionBlock.SendAsync line.
Note: I am running on .NET Framework 4.6.2
Background
I have a long-running Windows Service that, once a minute, queues up a series of business-related tasks that are run on their own threads, each awaited on by the main thread. Only one set of business-related tasks may run at the same time, so as to avoid race conditions. At certain points, each business task makes a series of asynchronous calls, in parallel, to an external API via an HttpClient in a singleton wrapper. This results in anywhere between 20-100 API calls per second being made via HttpClient.
The issue
About twice a week for the past month, a deadlock issue (I believe) has been cropping up. Whenever it does happen, I have been restarting the Windows Service frequently as we can't afford to have the service going down for more than 20 minutes at a time without it causing serious business impact. From what I can see, any one of the business tasks will try sending a set of API calls and further API calls made using the HttpClient will fail to ever return, resulting in the task running up against a fairly generous timeout on the cancellation token that is created for each business task. I can see that the requests are reaching the await HttpClientInstance.SendAsync(request, cts.Token).ConfigureAwait(false) line, but do not advance past it.
For additional clarification: once the first business task starts deadlocking on HttpClient, any new threads attempting to send API requests using the HttpClient end up timing out. New business threads keep being queued up, but they cannot use the instance of HttpClient at all.
Is this a deadlocking situation? If so, how do I avoid it?
Relevant Code
HttpClientWrapper
public static class HttpClientWrapper
{
    private static HttpClientHandler _httpClientHandler;

    //legacy class that is an extension of DelegatingHandler. I don't believe we are using any part of
    //it outside of the inner handler. This could probably be cleaned up a little more to be fair
    private static TimeoutHandler _timeoutHandler;

    private static readonly Lazy<HttpClient> _httpClient =
        new Lazy<HttpClient>(() => new HttpClient(_timeoutHandler));

    public static HttpClient HttpClientInstance => _httpClient.Value;

    public static async Task<Response> CallAPI(string url, HttpMethod httpMethod, CancellationTokenSource cts, string requestObj = "")
    {
        //class that contains fields for logging purposes
        var response = new Response();
        var content = new StringContent(requestObj, Encoding.UTF8, "application/json");
        var request = new HttpRequestMessage(httpMethod, new Uri(url));
        if (!string.IsNullOrWhiteSpace(requestObj))
        {
            request.Content = content;
        }
        HttpResponseMessage resp = null;
        try
        {
            resp = await HttpClientInstance.SendAsync(request, cts.Token).ConfigureAwait(false);
        }
        catch (Exception ex)
        {
            if ((ex.InnerException is OperationCanceledException || ex.InnerException is TaskCanceledException) && !cts.IsCancellationRequested)
                throw new TimeoutException();
            throw;
        }
        response.ReturnedJson = await resp.Content.ReadAsStringAsync();
        // non-relevant post-call variables being set for logging...
        return response;
    }

    //called on start-up of the Windows Service
    public static void SetProxyUse(bool useProxy)
    {
        if (useProxy || !ServerEnv.IsOnServer)
        {
            _httpClientHandler = new HttpClientHandler
            {
                UseProxy = true,
                Proxy = new WebProxy { Address = /* in-house proxy */ },
                AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
            };
        }
        else
        {
            _httpClientHandler = new HttpClientHandler
            {
                UseProxy = false,
                AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
            };
        }
        _timeoutHandler = new TimeoutHandler
        {
            DefaultTimeout = TimeSpan.FromSeconds(120),
            InnerHandler = _httpClientHandler
        };
    }
}
Generalized function from a business class
For more context.
//Code for generating work parameters in each batch of work
...
foreach (var workBatch in batchesOfWork)
{
    var tasks = workBatch.Select(async batch =>
        batch.Result = await GetData(/* work-related params */)
    );
    await Task.WhenAll(tasks);
}
...
GetData() function
//code for formatting the url
try
{
    response = await HttpClientWrapper.CallAPI(formattedUrl, HttpMethod.Get, cts);
}
catch (TimeoutException)
{
    //retry logic
}
...
//JSON deserialization, error handling, etc...
Edit
I forgot to mention that this is also set on start-up:
ServicePointManager
    .FindServicePoint(/* base uri for the API that we are contacting */)
    .ConnectionLeaseTimeout = 60000; // 1 minute
ServicePointManager.DnsRefreshTimeout = 60000;
The code example above shows that a common instance of HttpClient is being used by all the running applications.
Microsoft's documentation recommends that the HttpClient object be instantiated once per application, rather than per use.
That recommendation applies to the requests within one application; its purpose is to ensure common connection settings for all requests made to a specific destination API.
However, when there are multiple applications, the recommended approach is to have one instance of HttpClient per application instance, to avoid the scenario of one application waiting for another to finish.
Removing the static keyword from the HttpClientWrapper class and updating the code so that each application has its own instance of HttpClient should resolve the reported problem.
More information:
https://learn.microsoft.com/en-us/dotnet/api/system.net.http.httpclient?view=netcore-3.1
After taking @David Browne - Microsoft's advice in the comment section, I changed the default number of connections from the default (2) to the API provider's rate limit for my organization (100), and that seems to have done the trick. It has been several days since I installed the change in production, and it is humming along nicely.
Additionally, I slimmed down the HttpClientWrapper class to contain just the CallAPI function and a default HttpClientHandler implementation with the proxy/decompression settings I have above. It doesn't override the default timeout anymore; my thought is that I should just retry the API call if it takes more than the default 100 seconds.
To anyone stumbling upon this thread:
1) One HttpClient used throughout the entirety of your application will be fine, no matter the number of threads or API calls being made with it. Just make sure to increase the number of default connections via ServicePointManager.DefaultConnectionLimit (see the sketch after this list). You also DO NOT have to use the HttpClient in a using context. It will work just fine in a lazy singleton as I demonstrate above. Don't worry about disposing of the HttpClient in a long-running service.
2) Use async-await throughout your application. It is worth the payoff, as it makes the application much more readable and frees up your threads while you are awaiting a response back from the API. This might seem obvious, but it isn't if you haven't used the async-await architecture in an application before.
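A minimal sketch of that setup, assuming the 100-connection limit mentioned above (the class and member names here are illustrative, not the exact code from the service):
using System;
using System.Net;
using System.Net.Http;

public static class ApiClient
{
    static ApiClient()
    {
        // Raise the per-endpoint connection limit before the first request;
        // the .NET Framework default of 2 is far too low for 20-100 calls/second.
        ServicePointManager.DefaultConnectionLimit = 100;
    }

    // One lazily created HttpClient for the lifetime of the service;
    // no using block and no disposal needed.
    private static readonly Lazy<HttpClient> _client =
        new Lazy<HttpClient>(() => new HttpClient());

    public static HttpClient Instance => _client.Value;
}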
I have this code to check whether the server is available or not:
public static async Task<bool> PingServer()
{
    System.Net.NetworkInformation.Ping p1 = new System.Net.NetworkInformation.Ping();
    System.Net.NetworkInformation.PingReply PR = await p1.SendPingAsync("pc2");
    // check when the ping is not successful
    if (PR.Status != System.Net.NetworkInformation.IPStatus.Success)
    {
        return false;
    }
    return true;
}
Now, my problem is that SendPingAsync does not accept a CancellationToken, so how can I cancel the last ping before performing a new one, to avoid piling up a lot of pings?
When the server is lost, it takes seconds for PingServer() to return a false value.
This is how I call PingServer():
var serverAvailable = await PingServer();
if (serverAvailable)
{
    //do some stuff like retrieve a table data from SQL server.
}
Why do I use ping? Because when I want to get a table from SQL Server and the server is disconnected, my app enters break mode.
public async static Task<System.Data.Linq.Table<Equipment>> GetEquipmentTable()
{
    try
    {
        DataClassesDataContext dc = new DataClassesDataContext();
        // note: I tried so many ways to get the table synchronously, but it still freezes my UI!
        return await Task.Run(() => dc.GetTable<Equipment>());
    }
    catch
    {
        return null;
    }
}
EDIT: I used ping to reduce the chance of my app entering break mode when getting a table from the database. Is there any other way to prevent break mode? Is pinging the server before calling dc.GetTable() the best way?
Have you considered using the timeout parameter?
From the documentation:
This overload allows you to specify a time-out value for the operation.
If that doesn't suffice and your problem is that the ping call is blocking, you could perform it on a background thread providing this is tightly controlled.
Consider
public static async Task<bool> PingServer() {
using (var ping = new System.Net.NetworkInformation.Ping()) {
try {
var maxDelay = TimeSpan.FromSeconds(2); //Adjust as needed
var tokenSource = new CancellationTokenSource(maxDelay);
System.Net.NetworkInformation.PingReply PR = await Task.Run(() =>
ping.SendPingAsync("pc2"), tokenSource.Token);
// check when the ping is not success
if (PR.Status != System.Net.NetworkInformation.IPStatus.Success) {
return false;
}
return true;
} catch {
return false;
}
}
}
The ping is performed on a background thread via Task.Run with a cancellation token, and the timeout overload of SendPingAsync bounds the ping itself; if the ping result returns before the allotted time, all is well.
The Ping instance is wrapped in a using block so that it is disposed of when the function exits.
The system you depend on might fail, or the connection might go down right after your ping, while you're executing your code.
This is why a retry strategy is a much more robust approach than simply pinging a system before calling it.
Here is how you can implement a retry: Cleanest way to write retry logic?
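For illustration, a bare-bones retry helper in that spirit might look like this (the attempt count and delay are arbitrary; see the linked question for more complete implementations):
using System;
using System.Threading.Tasks;

public static class Retry
{
    public static async Task<T> RunAsync<T>(Func<Task<T>> operation, int maxAttempts = 3, int delayMs = 500)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch when (attempt < maxAttempts)
            {
                // Swallow the failure and retry until the attempt budget is used up.
                await Task.Delay(delayMs);
            }
        }
    }
}

// Usage: var table = await Retry.RunAsync(() => GetEquipmentTable());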
I would go with a retry approach, but if you still want to stay with your design, you could:
- Schedule a periodic task to ping the system in question (see: Is there a Task based replacement for System.Threading.Timer?)
- Make sure you schedule this periodic task in one central place (application startup or the like)
- Invoke your PingServer from this periodic task, and make sure you call Ping.SendAsync with PingOptions.Timeout set; see this overload
- From your PingServer, set some kind of shared state; it could be a static variable, or an implementation of the Registry pattern
- Make sure your shared state is thread-safe
- The rest of your code can consult this shared state to find out whether the system is online and available
As you can see, this approach is more complex (a sketch follows below), but it will prevent "lots of pings" to the system you depend on.
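Here is a rough sketch of that arrangement; the host name, poll interval, and class names are assumptions for the example. The rest of the code then reads ServerStatus.IsOnline instead of pinging inline.
using System;
using System.Net.NetworkInformation;
using System.Threading;
using System.Threading.Tasks;

public static class ServerStatus
{
    // Thread-safe shared state: 1 = online, 0 = offline.
    private static int _isOnline;

    public static bool IsOnline => Volatile.Read(ref _isOnline) == 1;

    // Call once, in one central place (e.g. application startup).
    public static void StartMonitoring()
    {
        Task.Run(async () =>
        {
            using (var ping = new Ping())
            {
                while (true)
                {
                    try
                    {
                        // The (host, timeout) overload bounds each ping to 2000 ms.
                        var reply = await ping.SendPingAsync("pc2", 2000);
                        Volatile.Write(ref _isOnline,
                            reply.Status == IPStatus.Success ? 1 : 0);
                    }
                    catch
                    {
                        Volatile.Write(ref _isOnline, 0);
                    }
                    await Task.Delay(TimeSpan.FromSeconds(10)); // poll interval
                }
            }
        });
    }
}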
I am trying to consume a service reference, making multiple requests at the same time using a task scheduler. The service includes a synchronous and an asynchronous function that return a result set. I am a bit confused, so I have a couple of initial questions, and then I will share how far I have gotten with each. I am using logging, the Concurrency Visualizer, and Fiddler to investigate. Ultimately I want to use a reactive scheduler to make as many requests as possible.
1) Should I use the async function to make all the requests?
2) If I were to use the synchronous function in multiple tasks what would be the limited resources that would potentially starve my thread count?
Here is what I have so far:
var myScheduler = new MyScheduler();
var myFactory = new TaskFactory(myScheduler);
var myClientProxy = new ClientProxy();
var tasks = new List<Task<Response>>();

foreach (var request in Requests)
{
    var localRequest = request;
    tasks.Add(myFactory.StartNew(() =>
    {
        // log stuff
        return myClientProxy.GetResponsesAsync(localRequest);
        // log some more stuff
    }).Unwrap());
}

Task.WaitAll(tasks.ToArray());
// process all the requests after they are done
This runs, but according to Fiddler it just tries to do all of the requests at once. It could be the scheduler, but I trust that more than I do the code above.
I have also tried to implement it without the Unwrap call, using an async/await delegate instead, and it does the same thing. I have also tried referencing .Result, and that seems to do it sequentially. Using the asynchronous service call with the scheduler/factory, it only gets up to about 20 simultaneous requests at the same time per client.
Yes. It will allow your application to scale better by using fewer threads to accomplish more.
Threads. When you initiate a synchronous operation that is inherently asynchronous (e.g. I/O) you have a blocked thread waiting for the operation to complete. You could however be using this thread in the meantime to execute CPU bound operations.
The simplest way to limit the number of concurrent requests is to use a SemaphoreSlim, which allows you to asynchronously wait to enter it:
async Task ConsumeService()
{
    var client = new ClientProxy();
    var semaphore = new SemaphoreSlim(100);
    var tasks = Requests.Select(async request =>
    {
        await semaphore.WaitAsync();
        try
        {
            return await client.GetResponsesAsync(request);
        }
        finally
        {
            semaphore.Release();
        }
    }).ToList();
    await Task.WhenAll(tasks);
    // TODO: Process responses...
}
Regardless of how you call the WCF service, whether asynchronously or synchronously, you will be bound by the WCF serviceThrottling limits. You should look at these settings and possibly adjust them higher (if you have them set to low values for some reason). In .NET 4 the defaults are pretty good; in older versions of the .NET Framework they were much more conservative.
.NET 4.0
- MaxConcurrentSessions: default is 100 * ProcessorCount
- MaxConcurrentCalls: default is 16 * ProcessorCount
- MaxConcurrentInstances: default is MaxConcurrentCalls + MaxConcurrentSessions
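If you self-host the service, these values can also be set in code through ServiceThrottlingBehavior. Below is a sketch, assuming a placeholder service type called MyService; most deployments set the same values in the serviceThrottling element of the config file instead.
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

var host = new ServiceHost(typeof(MyService)); // MyService is a placeholder service type
var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
if (throttle == null)
{
    throttle = new ServiceThrottlingBehavior();
    host.Description.Behaviors.Add(throttle);
}
// Mirror the .NET 4 defaults listed above, scaled to the machine.
throttle.MaxConcurrentSessions = 100 * Environment.ProcessorCount;
throttle.MaxConcurrentCalls = 16 * Environment.ProcessorCount;
throttle.MaxConcurrentInstances = throttle.MaxConcurrentCalls + throttle.MaxConcurrentSessions;
host.Open();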
1.) Yes.
2.) Yes.
If you want to control the number of simultaneous requests, you can try using Stephen Toub's ForEachAsync method. It allows you to control how many tasks are processed at the same time.
public static class Extensions
{
    public static Task ForEachAsync<T>(this IEnumerable<T> source, int dop, Func<T, Task> body)
    {
        return Task.WhenAll(
            from partition in Partitioner.Create(source).GetPartitions(dop)
            select Task.Run(async delegate {
                using (partition)
                    while (partition.MoveNext())
                        await body(partition.Current);
            }));
    }
}
void Main()
{
    var myClientProxy = new ClientProxy();
    // ConcurrentBag because List<T> is not safe for concurrent Add calls
    var responses = new ConcurrentBag<Response>();

    // Max 10 concurrent requests
    Requests.ForEachAsync<Request>(10, async (r) =>
    {
        var response = await myClientProxy.GetResponsesAsync(r);
        responses.Add(response);
    }).Wait();
}