I've got a Windows service that monitors a table (with a timer) for rows, grabs rows one at a time when they appear, submits the information to a RESTful web service, analyzes the response, and writes some details about the response to a table. Would I gain anything by making this async? My current (stripped down) web service submission code is as follows:
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(new Uri(url));
HttpWebResponse resp;
try
{
    resp = (HttpWebResponse)req.GetResponse();
}
catch (WebException we)
{
    // non-success status codes surface as a WebException;
    // the response, if any, is attached to the exception
    resp = (HttpWebResponse)we.Response;
}
if (resp != null)
{
    Stream respStream = resp.GetResponseStream();
    if (respStream != null)
    {
        using (var reader = new StreamReader(respStream))
        {
            responseBody = reader.ReadToEnd();
        }
    }
    resp.Close();
}
return responseBody;
If you don't care how long it takes to get your response for any individual request, no, there's no particular reason to make it async.
On the other hand, if you're waiting for one request to fully finish before you start your next request you might have trouble handling a large volume. In this scenario you might want to look at parallelizing your code. But that's only worth discussing if you get large numbers of items entered into the database for processing.
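As a rough sketch of that idea (the pendingRows collection and the ProcessRowAsync wrapper around your submit/analyze/record steps are hypothetical names, not your actual code):

// Process all pending rows concurrently instead of one at a time.
var tasks = pendingRows.Select(row => ProcessRowAsync(row)).ToList();
await Task.WhenAll(tasks);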
By using asynchronous methods you avoid dedicating a thread to blocking on the result of the operation. That thread is freed up to do other, more productive work until the async operation completes and there is again work for it to do.
If your service is not in high demand, and is not frequently in a position where there is more work to do than threads to do it, then you don't need to worry; many business apps have usage rates low enough that this simply isn't needed.
If you have a large-scale app serving enough users that you see a very high number of concurrent requests (even if only at peak usage times), then it may be worth switching to the asynchronous counterparts.
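To make the trade-off concrete, here is a minimal sketch of what an asynchronous version of the submission code could look like, using HttpClient (assumes .NET 4.5+; the shared client field and the SubmitAsync name are illustrative assumptions):

private static readonly HttpClient client = new HttpClient();

private static async Task<string> SubmitAsync(string url)
{
    // unlike GetResponse(), GetAsync does not throw on non-success
    // status codes, so the WebException handling above isn't needed
    using (HttpResponseMessage resp = await client.GetAsync(url))
    {
        return await resp.Content.ReadAsStringAsync();
    }
}

While the request is in flight, no thread is blocked, which is exactly the benefit described above.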
I have a web app that connects to an external API.
That API has a limit of 3 connections per second.
I have a method that gets employee data for a whole factory.
It works fine, but I've found that if a particular factory has a lot of employees, I hit the API connection limit and get an error.
(429) API calls exceeded...maximum 3 per Second
So I decided to use await Task.Delay(1000) to add a 1-second delay every time this method is called.
Now it seems to have reduced the number of errors I get, but I am still getting a few limit errors.
Is there another method I could use to ensure my limit is not reached?
Here is my code:
public async Task<YourSessionResponder> GetAll(Guid factoryId)
{
    UserSession.AuthData sessionManager = new UserSession.AuthData
    {
        UserName = "xxxx",
        Password = "xxxx"
    };
    ISessionHandler sessionMgr = new APIclient();
    YourSessionResponder response = await sessionMgr.GetDataAsync(sessionManager, new ListerRequest
    {
        FactoryId = factoryId
    });
    await Task.Delay(1000);
    return response;
}
I call it like this:
var yourEmployees = GetAll(factoryId);
I have a web app that connects to an external API.
Your current code limits the number of outgoing requests made by a single incoming request to your API. What you need to do is limit all of your outgoing requests, app-wide.
It's possible to do this using a SemaphoreSlim:
private static readonly SemaphoreSlim Mutex = new(1);
public async Task<YourSessionResponder> GetAll(Guid factoryId)
{
...
YourSessionResponder response;
await Mutex.WaitAsync();
try
{
response = await sessionMgr.GetDataAsync(...);
await Task.Delay(1000);
}
finally
{
Mutex.Release();
}
return response;
}
But I would take a different approach...
Is there another method I could use to ensure my limit is not reached?
Generally, I recommend just retrying on 429 errors, using de-correlated jittered exponential backoff (see Polly for an easy implementation). That way, when you're "under budget" for the time period, your requests go through immediately, and they only slow down when you hit your API limit.
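As a sketch of that retry approach (this assumes the Polly and Polly.Contrib.WaitAndRetry NuGet packages, and a hypothetical httpClient and url):

using Polly;
using Polly.Contrib.WaitAndRetry;

// De-correlated jittered backoff: up to 5 retries, starting around 1 second.
var delays = Backoff.DecorrelatedJitterBackoffV2(
    medianFirstRetryDelay: TimeSpan.FromSeconds(1), retryCount: 5);

var retryPolicy = Policy
    .HandleResult<HttpResponseMessage>(r => (int)r.StatusCode == 429)
    .WaitAndRetryAsync(delays);

// Requests under the rate limit go through immediately;
// 429s are retried with increasing, jittered delays.
var response = await retryPolicy.ExecuteAsync(() => httpClient.GetAsync(url));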
From a comment on the question:
I am calling it like this: var yourEmployees = GetAll(factoryId);
Then you're not awaiting the task. While there's a 1-second delay after each network operation, you're still firing off all of the network operations in rapid succession. You need to await the task before moving on to the next one:
var yourEmployees = await GetAll(factoryId);
Assuming that this is happening in some kind of loop or repeated operation, of course. Otherwise, where would all of these different network tasks be coming from? Whatever high-level logic is invoking the multiple network operations, that logic needs to await one before moving on to the next.
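For instance, if the calls come from a loop over factories (a hypothetical factoryIds collection), each call should be awaited before the next begins:

foreach (var factoryId in factoryIds)
{
    // awaiting here means the 1-second delay inside GetAll
    // actually spaces the calls out
    var yourEmployees = await GetAll(factoryId);
}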
I have a website, and I am also exploring parallel processing in C#. I thought it would be a good idea to see if I could write my own DDoS test script to see how the site would handle an attack.
However, when I run it, there only seem to be 13 threads in use, and they always return 200 status codes; nothing suggests the responses were slow or inaccurate, and when I visit the site and refresh while the script runs, it loads quickly.
I know there are tools out there for penetration tests and so on, but I was just wondering why I couldn't use a Parallel loop to make enough concurrent HTTP requests that the site would struggle to load and respond quickly. I seem to get more problems from a Twitter rush, just by tweeting out a link to a new page and having hundreds of bots rush the site concurrently to rip, scan, and check it, than from anything I can throw at it using a Parallel loop.
Is there something I am doing wrong that limits the number of concurrent threads, or is this something I cannot control? I could just throw in numerous long-winded search queries that I know would scan the whole DB and return 0 results for each request; I have seen that in action, and depending on the size of the data to be scanned and the complexity of the query, it can cause CPU spikes and slow loads.
So, without a lecture on using other tools: is there a way to throw 100+ parallel requests at a page, rather than the maximum of 13 threads, which it handles perfectly?
Here is the code, the URL and no of HTTP requests to make are passed in as command line parameters.
static void Attack(string url, int limit)
{
Console.WriteLine("IN Attack = {0}, requests = {1}", url, limit);
try
{
Parallel.For(0, limit, i =>
{
HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(url);
webRequest.ServicePoint.ConnectionLimit = limit;
HttpWebResponse webResponse = webRequest.GetResponse() as HttpWebResponse;
int statuscode = Convert.ToInt32(webResponse.StatusCode);
Console.WriteLine("iteration {0} on thread {1} Status {2}", i,
Thread.CurrentThread.ManagedThreadId, statuscode);
});
}
catch (AggregateException exc)
{
exc.InnerExceptions.ToList().ForEach(e =>
{
Console.WriteLine(e.Message);
});
}
catch (Exception ex)
{
Console.WriteLine("In Exception: " + ex.Message.ToString());
}
finally
{
Console.WriteLine("All finished");
}
}
You can try something like this:
var socketsHandler = new SocketsHttpHandler
{
PooledConnectionLifetime = TimeSpan.FromSeconds(1),
PooledConnectionIdleTimeout = TimeSpan.FromSeconds(1),
MaxConnectionsPerServer = 10
};
var client = new HttpClient(socketsHandler);
var tasks = new List<Task<HttpResponseMessage>>();
for (var i = 0; i < limit; i++)
{
    // start every request without awaiting each one individually,
    // so they actually run concurrently
    tasks.Add(client.GetAsync(url));
}
await Task.WhenAll(tasks);
The Parallel.For method uses threads from the ThreadPool. The initial number of threads in the pool is usually small (comparable to the number of logical processors on the machine), and when the pool is starved, new threads are injected at a rate of about one every 500 msec. The easy way to solve your problem is simply to raise the number of threads that are created immediately on demand, using the SetMinThreads method:
ThreadPool.SetMinThreads(1000, 10);
This is not scalable though, because each thread allocates 1MB of memory for its stack, so you can't have millions of them. The scalable solution is to go async, which makes minimal use of threads.
I’m creating an API that serves as the bridge between the app and 2 other APIs, and I want to know the best way to do this. I’m using HttpClient. The app has almost a thousand users, so if I use synchronous calls, does that mean that if one user calls the API, the other users have to wait until the first user gets the response before their API requests proceed? Is there a better way of building an API like this?
Here is a sample of my code using synchronous:
[HttpGet]
[Route("api/apiname")]
public String GetNumberofP([FromUri]GetNumberofPRequest getNPRequest){
var request = JsonConvert.SerializeObject(getNPRequest);
string errorMessage = "";
try{
httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token.gettoken());
var response = httpClient.GetAsync("api/MobileApp/GetNumberP?"
+ "strCardNumber=" + getNPRequest.strCardNumber
+ "&strDateOfBirth=" + getNPRequest.strDateOfBirth).Result;
// read the body so the method actually returns a String
return response.Content.ReadAsStringAsync().Result;
}
catch (Exception e){
throw utils.ReturnException("GetNumberofP", e, errorMessage);
}
}
if I use synchronous calls does that mean that if a user calls the API, then the other users have to wait until the 1st user gets the response before it proceeds to the other API requests
No. When a request comes into the pipeline, the framework dispatches it to its own thread-pool thread. So if 1,000 requests come in at the same time, the 1,000th user will not have to wait for the other 999 requests to finish (though if synchronous code blocks enough pool threads at once, requests can still end up queued).
You are better off using async code for this anyway. For I/O like network requests, performance is usually better when no thread is blocked waiting on the result. Side note: you never want to call .Result, because that blocks the calling thread and effectively makes the async code synchronous.
It's always easy to turn a synchronous call into an asynchronous one, but the other way around is fraught with danger. You should make your API asynchronous.
[HttpGet]
[Route("api/apiname")]
public async Task<string> GetNumberofP([FromUri]GetNumberofPRequest getNPRequest)
{
    httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token.gettoken());
    var response = await httpClient.GetAsync($"api/MobileApp/GetNumberP?strCardNumber={getNPRequest.strCardNumber}&strDateOfBirth={getNPRequest.strDateOfBirth}");
    return await response.Content.ReadAsStringAsync();
}
You should also think about how you create your httpClient: rather than creating a new one for each call, sharing a single instance (or using IHttpClientFactory) avoids exhausting sockets under load.
It seems you're missing the async and await keywords.
public async Task<String> GetNumberofP([FromUri]GetNumberofPRequest getNPRequest){
(...)
var response = await httpClient.GetAsync(...);
We are working on a project developed in UWP(frontend) and REST-MVC-IIS(backend).
I was thinking about a theoretical scenario which might ensue:
From what I know, there is no way to guarantee the order in which requests will be processed and served by IIS.
So in a simple scenario, let's just assume this:
UI:
SelectionChanged(productId=1);
SelectionChanged(productId=2);
private async void SelectionChanged(int productId)
{
await GetProductDataAsync(productId);
}
IIS:
GetProductDataAsync(productId=1) scheduled on thread pool
GetProductDataAsync(productId=2) scheduled on thread pool
GetProductDataAsync(productId=2) finishes first => send response to client
GetProductDataAsync(productId=1) finishes later => send response to client
As you can see, the request for productId=2, for whatever reason, finished faster than the request for productId=1.
Because of the way async works, both calls will create continuation tasks on the UI thread, and since they populate the same data, they will overwrite each other if the responses don't come back in the right order.
This can be extrapolated to almost any master-detail scenario, where you can end up selecting a master item and getting the wrong details for it (because of the order in which the responses come back from IIS).
What I wanted to know is whether there are best practices for handling this kind of scenario... lots of solutions come to mind, but I don't want to jump the gun and commit to one implementation before I see what other options are on the table.
As you presented your code, await GetProductDataAsync(productId=2); will always run after await GetProductDataAsync(productId=1); has completed. So there is no race condition.
If your code was:
await Task.WhenAll(
    GetProductDataAsync(productId: 1),
    GetProductDataAsync(productId: 2));
Then there might be a race condition. And, if that's a problem, it's not particular to async-await but due to the fact that you are making concurrent calls.
If you wrap that code in another method and use ConfigureAwait(false), you'll have only one continuation posted to the UI thread (where the wrapper itself is awaited):
async Task GetProductDataAsync()
{
    // ConfigureAwait(false) keeps this continuation off the UI thread;
    // note it can't be applied to the tasks passed into WhenAll, since
    // WhenAll accepts Tasks, not ConfiguredTaskAwaitables
    await Task.WhenAll(
        GetProductDataAsync(productId: 1),
        GetProductDataAsync(productId: 2)
    ).ConfigureAwait(false);
}
I think I get what you're saying. Because of the async void event handler, nothing in the UI is awaiting the first call before the second. I am imagining a drop-down of values that fetches the pertinent data whenever its selection changes.
Ideally, you would probably want to either lock out the UI during the call or implement a CancellationToken, as sketched below.
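For instance, a minimal sketch of the cancellation idea (the _cts field and the token-accepting overload of GetProductDataAsync are assumptions, not your actual code):

private CancellationTokenSource _cts;

private async void SelectionChanged(int productId)
{
    _cts?.Cancel();                       // abandon any in-flight request
    _cts = new CancellationTokenSource();
    var token = _cts.Token;
    try
    {
        var data = await GetProductDataAsync(productId, token);
        if (!token.IsCancellationRequested)
        {
            // only the latest selection's data ever reaches the UI
        }
    }
    catch (OperationCanceledException)
    {
        // this request was superseded by a newer selection; ignore it
    }
}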
If you're just looking for a way to meter the calls, keep reading...
I use a singleton repository layer in the UWP application that handles whether or not to fetch the data from a web service, or a locally cached copy. Additionally, if you want to meter the requests to process one at a time, use SemaphoreSlim. It works like lock, but for async operations (oversimplified simile).
Here is an example that should illustrate how it works...
public class ProductRepository : IProductRepository
{
//initializing (1,1) will allow only 1 use of the object
static SemaphoreSlim semaphoreLock = new SemaphoreSlim(1, 1);
public async Task<IProductData> GetProductDataByIdAsync(int productId)
{
try
{
//if semaphore is in use, subsequent requests will wait here
await semaphoreLock.WaitAsync();
try
{
using (var client = new HttpClient())
{
client.BaseAddress = new Uri("yourbaseurl");
client.DefaultRequestHeaders.Accept.Clear();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
string url = "yourendpoint";
HttpResponseMessage response = await client.GetAsync(url);
if (response.IsSuccessStatusCode)
{
var json = await response.Content.ReadAsStringAsync();
ProductData prodData = JsonConvert.DeserializeObject<ProductData>(json);
return prodData;
}
                else
                {
                    //handle non-success; return a fallback so every path yields a value
                    return null;
                }
            }
        }
        catch (Exception e)
        {
            //handle exception
            return null;
        }
}
finally
{
//if any requests queued up, the next one will fire here
semaphoreLock.Release();
}
}
}
[ServiceContract]
public interface IEventDismiss
{
[OperationContract]
[return:MessageParameter(Name="response")]
[XmlSerializerFormat]
Response ProcessRequest(Request request);
}
Hello,
Above is my WCF implementation in C#, and it is pretty straightforward. However, things get a little more complicated: when I receive the request, I pass it to another thread to process and produce the response, and finally send that response back.
My algorithm is:
Get the request.
Pass it on to a separate thread by putting it onto a static queue for that thread to process.
Once the thread finishes processing, it puts the response object onto a static queue.
In my function ProcessRequest, a while loop dequeues the response and sends it back to the requester.
public Response ProcessRequest (Request request)
{
bool sWait = true;
Response sRes = new Response();
ResponseProcessor.eventIDQueue.Enqueue(request.EventID);
while (sWait)
{
if (ResponseProcessor.repQ.Count > 0)
{
sRes = ResponseProcessor.repQ.Dequeue();
sWait = false;
}
}
return sRes;
}
Now, before everyone starts to grill me: I too realize this is bad practice, which is why I'm asking the question here, hoping for a better way. With the current code I realize I have the following issues:
My while loop may spin continuously, eating up CPU, since it has no sleep() in it.
My response queue may hand back the wrong response, due to the nature of the async calls.
So I have two questions:
Is there a way to put a sleep in the while loop to eliminate the high CPU usage?
Is there a better way to do this?
There's no point in doing this in the first place. Rather than having the current thread sit around doing nothing while it waits for another thread to compute the response (eating up tons of CPU cycles in the meantime), just compute the response on the current thread and send it back. You gain nothing by queuing the work for another thread to handle.
You are also using Queue objects that cannot be safely accessed from multiple threads, so in addition to being extremely inefficient, the code is subject to race conditions that mean it may not even work.
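A minimal sketch of the direct approach (ResponseProcessor.Process is a hypothetical synchronous method wrapping whatever work the second thread was doing):

public Response ProcessRequest(Request request)
{
    // WCF already dispatches each request on its own thread, so the work
    // can simply run here; the caller gets the right response, and no
    // shared queues or busy-waiting are needed
    return ResponseProcessor.Process(request.EventID);
}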