C#/.NET - Very high CPU usage when using async/await function calls

I'm having some severe performance issues in a project I'm working on. It's a standard web application: users send requests to an API, which triggers some form of computation in various handlers.
The problem is that pretty much any request drives the server's CPU usage up significantly, regardless of what internal computation the corresponding function is supposed to do. For example, we have an endpoint to display a game from the database: the user sends a request containing an ID and the server responds with a JSON object. While this request is being processed, CPU usage goes from 5% (with the app just running) to 25-30%. Several concurrent requests will tank the server, with .NET Core using 60-70% of the CPU.
The request chain looks like:
Controller
[HttpGet("game/{Id}")]
public async Task<IActionResult> GetPerson(string Id)
{
    try
    {
        var response = await _GameService.GetGameAsync(Id);
        return Ok(new FilteredResponse(response, 200));
    }
    catch (ApiException ex)
    {
        // the catch block was cut off in the original snippet; reconstructed here
        return StatusCode(404, ex.Message);
    }
}
Service
public async Task<PlayerFilteredGameState> GetGameAsync(string gameId, string apiKey)
{
    var response = await _ironmanDataHandler.GetGameAsync(gameId);
    var filteredGame = _responseFilterHelper.FilterForPlayer(response, apiKey);
    return filteredGame;
}
Data handler
public async Task<GameState> GetGameAsync(string gameStateId)
{
    using (var db = _dbContextFactory.Create())
    {
        var specifiedGame = await db.GameStateIronMan.FirstOrDefaultAsync(a => a.gameId == gameStateId);
        if (specifiedGame == null)
        {
            throw new ApiException("There is no game with that ID.", 404);
        }
        var deserializedGame = JsonConvert.DeserializeObject<GameState>(specifiedGame.GameState);
        return deserializedGame;
    }
}
I've tried mocking all function return values and database accesses, replacing all computed values with null/new Game(), etc., but it doesn't improve the performance. I've spent a lot of time with different performance analysis tools, but there isn't a single function that uses more than 0.5-1% of the CPU.
After a lot of investigation, the only "conclusion" I've reached is that it seems to have something to do with the internal workings of async/await and the way we use it in our project, because it doesn't matter what we do in the called functions - as soon as we call a function, the performance takes a huge hit.
I also tried making the functions synchronous just to see if there was something wrong with my system; however, performance is massively reduced if I do that (which is good, I suppose).
I'm really at a loss here, because we aren't doing anything out of the ordinary and we're still having major issues.
UPDATE
I've performed a performance analysis in ANTS. I'm not really sure how to present the results, so I took a picture of what the call stack looks like.

If your game state is a large object, deserializing it can be quite taxing.
You could create a test where you just deserialize a saved game state, and do some profiling with various game states (a fresh start, after some time, ...) to see if there are differences.
If you find that deserializing takes a lot of CPU no matter what, you could look into changing the structure and seeing if you can reduce the amount of data that is saved.
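As a rough way to test this, here is a minimal benchmarking sketch; it assumes Newtonsoft.Json and the GameState type from the question, and the file paths are placeholders:
using System;
using System.Diagnostics;
using System.IO;
using Newtonsoft.Json;

public static class GameStateBenchmark
{
    public static void Run(string path)
    {
        var json = File.ReadAllText(path);

        // Warm up so JIT compilation and Json.NET's contract caching
        // don't distort the first measurement.
        JsonConvert.DeserializeObject<GameState>(json);

        const int iterations = 100;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            JsonConvert.DeserializeObject<GameState>(json);
        }
        sw.Stop();

        Console.WriteLine($"{path}: {sw.Elapsed.TotalMilliseconds / iterations:F2} ms per deserialization");
    }
}
Running it against a fresh save and a late-game save (e.g. GameStateBenchmark.Run("fresh.json") vs. GameStateBenchmark.Run("late-game.json")) would show whether the cost grows with the size of the game state.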

Related

EF core query loop asynchronous

For the past few days I have been looking for a way to make my code fully asynchronous, so that when it is called by a REST API I get an immediate response while the process runs in the background.
To do that I simply used
tasks.Add(Task<bool>.Run(() => WholeProcessFunc(parameter)));
where WholeProcessFunc is the function that makes all the calculations (it may be computationally intensive).
It works as expected; however, I read that it is not optimal to wrap the whole process in a Task.Run.
My code needs to compute several Entity Framework queries, where each result depends on the previous one, and it also contains foreach loops.
For instance, I can't understand the best practice for making a function like this async:
public async Task<List<float>> func()
{
    List<float> acsi = new List<float>();
    using (var db = new EFContext())
    {
        long[] ids = await db.table1.Join(db.table2 /*,...*/)
            .Where(/*...*/)
            .Select(/*...*/).ToArrayAsync();
        foreach (long id in ids)
        {
            var all = db.table1.Join(/*...*/)
                .Where(/*...*/);
            float acsi_temp = await all.OrderByDescending(/*...*/)
                .Select(/*...*/).FirstAsync();
            if (acsi_temp < 0) { break; }
            acsi.Add(acsi_temp);
        }
    }
    return acsi;
}
In particular, I have difficulties with the foreach loop and the fact that the result of one query is used in the next.
Finally, there is the break statement, which I don't know how to translate. I read about cancellation tokens - could they be the way?
Is wrapping this whole function up in a Task.Run a solid solution?
For the past few days I have been looking for a way to make my code fully asynchronous, so that when it is called by a REST API I get an immediate response while the process runs in the background.
Well, that's one meaning of the word "asynchronous". Unfortunately, it's completely different from the kind of "asynchronous" that async/await does: async yields to the thread pool, not to the client (browser).
It works as expected however I read that it is not optimal to wrap the whole process in a Task.Run.
It only seems to work as expected. It's likely that once your web site gets higher load, it will start to fail. It's definite that once your web site gets busier and you do things like rolling upgrades, it will start to fail.
Is wrapping this whole function up in a Task.Run a solid solution?
Not at all. Fire-and-forget is inherently dangerous.
A proper solution should be a basic distributed architecture:
A durable queue, such as an Azure Queue or RabbitMQ (if properly configured to be durable).
An independent processor, such as an Azure Function or a Win32 service.
Then the ASP.NET app will encode the work to be done into a queue message, enqueue that to the durable queue, and then return. Some time later, the processor will retrieve the message from that queue and do the actual work.
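A minimal sketch of the enqueue side, assuming Azure Storage Queues (the Azure.Storage.Queues package); CalculationRequest and the connection string are placeholders:
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Microsoft.AspNetCore.Mvc;

[ApiController]
public class CalculationController : ControllerBase
{
    private readonly QueueClient _queue =
        new QueueClient("<storage-connection-string>", "calculations");

    [HttpPost("calculations")]
    public async Task<IActionResult> Enqueue(CalculationRequest request) // CalculationRequest: placeholder DTO for the work item
    {
        // Encode the work to be done and hand it to the durable queue;
        // the independent processor picks it up later.
        await _queue.SendMessageAsync(JsonSerializer.Serialize(request));
        return Accepted(); // respond immediately; the work happens elsewhere
    }
}
The key property is durability: once SendMessageAsync returns, the work survives process restarts and rolling upgrades, which fire-and-forget Task.Run does not.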
You can translate your code to return an IAsyncEnumerable<...>; that way the caller can process the results as they are obtained. In an ASP.NET Core 5 MVC endpoint, this includes writing serialised JSON to the browser:
public async IAsyncEnumerable<float> func()
{
    using (var db = new EFContext())
    {
        //...
        foreach (long id in ids)
        {
            //...
            if (acsi_temp < 0) { yield break; }
            yield return acsi_temp;
        }
    }
}
public async Task<IActionResult> ControllerAction()
{
    if (...)
        return NotFound();
    return Ok(func());
}
Note that if your endpoint itself is an async IAsyncEnumerable coroutine, then in ASP.NET Core 5 your headers would be flushed before your action even started, giving you no way to return any HTTP error codes.
Though for performance, you should try to rework your queries so that you can fetch all the data up front.
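For example, if the per-id query doesn't genuinely depend on the previous iteration's result, the loop can often be collapsed into a single round trip. A sketch with a hypothetical entity (Score, with Id and Value, standing in for the elided Join/Where/Select):
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public async Task<List<float>> FuncBatched(EFContext db, long[] ids)
{
    // One query for all ids instead of one FirstAsync per loop iteration.
    // db.Scores is a hypothetical DbSet; Max stands in for OrderByDescending + First.
    var bestById = await db.Scores
        .Where(s => ids.Contains(s.Id))
        .GroupBy(s => s.Id)
        .Select(g => new { g.Key, Value = g.Max(s => s.Value) })
        .ToDictionaryAsync(x => x.Key, x => x.Value);

    var acsi = new List<float>();
    foreach (long id in ids)
    {
        // Same semantics as the original break: stop at the first missing or negative value.
        if (!bestById.TryGetValue(id, out var value) || value < 0) break;
        acsi.Add(value);
    }
    return acsi;
}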

Azure EventHub: Send Async performance

I have some pretty naive code:
public async Task Produce(string topic, object message, MessageHeader messageHeaders)
{
    try
    {
        var producerClient = _EventHubProducerClientFactory.Get(topic);
        var eventData = CreateEventData(message, messageHeaders);
        messageHeaders.Times?.Add(DateTime.Now);
        await producerClient.SendAsync(new EventData[] { eventData });
        messageHeaders.Times?.Add(DateTime.Now);
        //.....
        Log.Info($"Milliseconds spent: {(messageHeaders.Times[1] - messageHeaders.Times[0]).TotalMilliseconds}");
    }
    catch
    {
        // error handling elided in the original snippet
        throw;
    }
}
private EventData CreateEventData(object message, MessageHeader messageHeaders)
{
    var eventData = new EventData(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(message)));
    eventData.Properties.Add("CorrelationId", messageHeaders.CorrelationId);
    if (messageHeaders.DateTime != null)
        eventData.Properties.Add("DateTime", messageHeaders.DateTime?.ToString("s"));
    if (messageHeaders.Version != null)
        eventData.Properties.Add("Version", messageHeaders.Version);
    return eventData;
}
In the logs I see values of almost 1 second (~800 milliseconds).
What could be the reason for such a long execution time?
The EventHubProducerClient opens connections to the Event Hubs service lazily, waiting until the first time an operation requires it. In your snippet, the call to SendAsync triggers an AMQP connection to be created, an AMQP link to be created, and authentication to be performed.
Unless the client is closed, most future calls won't incur that overhead as the connection and link are persistent. Most being an important distinction in that statement, as the client may need to reconnect in the face of a network error, when activity is low and the connection idles out, or if the Event Hubs service terminates the connection/link.
As Serkant mentions, if you're looking to understand timings, you'd probably be best served using a library like BenchmarkDotNet that works over a large number of iterations to derive statistically meaningful results.
You are measuring the first Send. That will incur some overhead that other Sends won't, so always do a warm-up first: send a single event, then measure the next one.
Another important thing: it is not right to measure just a single Send call. Measure a bunch of calls instead and calculate latency percentiles. That should provide a better figure for your tests.
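A sketch of that measurement approach, assuming the Azure.Messaging.EventHubs producer from the question:
using System;
using System.Diagnostics;
using System.Text;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

public static async Task MeasureSendLatency(EventHubProducerClient producer)
{
    // Warm-up: the first send pays for the AMQP connection, link, and auth.
    await producer.SendAsync(new[] { new EventData(Encoding.UTF8.GetBytes("warmup")) });

    var samples = new double[200];
    for (int i = 0; i < samples.Length; i++)
    {
        var sw = Stopwatch.StartNew();
        await producer.SendAsync(new[] { new EventData(Encoding.UTF8.GetBytes($"event {i}")) });
        samples[i] = sw.Elapsed.TotalMilliseconds;
    }

    // Sort and read off percentiles rather than trusting any single call.
    Array.Sort(samples);
    Console.WriteLine($"p50: {samples[(int)(samples.Length * 0.50)]:F1} ms, " +
                      $"p95: {samples[(int)(samples.Length * 0.95)]:F1} ms");
}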

Call to Web API getting slower and slower with each call on Angular 7 app

My Angular/C# app makes a call to a Web API that hits a stored proc. The C# part of the app executes quickly, but the 'Content Download' gets slower and slower with each call.
I have an Angular service that calls the Web API:
getInvestorsToFunds(params): Observable<InvestorToFund[]> {
  let body = JSON.stringify({ params });
  return this.http.post<InvestorToFund[]>(this.baseUrl + 'api/Investor/getInvestorsToFunds', body)
    .pipe(catchError(this.handleError));
}
And I call that from my component:
let x = forkJoin(
  this.investorService.getInvestorsToFunds(params)
).subscribe(t => {
  this.investorToFunds = t[0] as InvestorToFund[];
});
Any ideas on why each call just gets slower and slower?
OK, I got to the bottom of this and I'll post my answer for any poor soul who faces the same issue.
I read up on memory leaks and the Chrome tools for taking heap snapshots. Sure enough, my memory usage was increasing over time with each page hit. This meant that less memory was available for my app, throttling the data coming in from my API.
Turns out one of my plug-ins was causing the issue - https://github.com/inorganik/countUp.js-angular2. I was on version 6; when I updated to version 7, the memory leaks stopped and the API call executed in about 3 seconds every time, no matter how many pages I clicked through.
Helpful articles:
https://auth0.com/blog/four-types-of-leaks-in-your-javascript-code-and-how-to-get-rid-of-them/
https://developers.google.com/web/tools/chrome-devtools/memory-problems/
It is not a memory leak; you need to unsubscribe from your subscriptions:
class A implements OnDestroy {
  protected ngUnsubscribe: Subject<void> = new Subject<void>();

  ngOnDestroy() {
    this.ngUnsubscribe.next();
    this.ngUnsubscribe.complete();
  }
}
And on EACH subscription:
this.subscription.takeUntil(this.ngUnsubscribe).subscribe(_ => _);
This way, when you navigate away from a component, ngOnDestroy runs and all your subscriptions are cleared from memory.
PS: I had the same issue when I first started. No issues after I implemented this; everything runs smooth as butter.
Any ideas on why each call just gets slower and slower?
The time you are seeing is the backend response time. The backend is getting slower and slower and any changes to your frontend code will not make it faster.
Fix
Fix the backend 🌹

MVC 5 / .NET 4.5 - Long running process

I have a website on Rackspace which does calculations; a calculation can take anywhere from 30 seconds to several minutes. Originally I implemented this with SignalR but had to yank it due to excessive CC usage. Hosted Rackspace sites are really not designed for that kind of use, and the bill went through the roof.
The basic code is below (old code) and works perfectly on my test server, but of course it gets a timeout error on Rackspace if the calculation takes more than 30 seconds, because their watcher kills it. I have been told that the operation must write to the stream to keep it alive. In the days of old I would have started a thread and polled the site until the thread was done; if there is a better way, I would prefer to take it.
It seems that with .NET 4.5 I can use HttpTaskAsyncHandler to accomplish this, but I'm not getting it. The (new code) below is, as I understand it, the handler you would use: take the old code inside the using block and place it in the ProcessRequestAsync task. When I attempt to call CalcHandler / Calc I get a 404 error, which most likely has to do with routing. I was trying to follow the link below but could not get it to work either: the add name is "myHandler" but the example link is "feed" - how did we get from one to the other? They mention creating a class library, but can the code be in the same project as the current code, and how?
http://codewala.net/2012/04/30/asynchronous-httphandlers-with-asp-net-4-5/
As a side note, will HttpTaskAsyncHandler allow me to keep the request alive until it completes, even if it takes several minutes? Basically, should I be using something else for what I am trying to accomplish?
Old code
[Authorize]
[AsyncTimeout(5000)] // does not do anything on Rackspace
public async Task<JsonResult> Calculate(DataModel data)
{
    try
    {
        using (var db = new ApplicationDbContext())
        {
            var result = await CalcualteResult(data);
            return Json(result, JsonRequestBehavior.AllowGet);
        }
    }
    catch (Exception ex)
    {
        LcDataLink.ProcessError(ex);
    }
    return Json(null, JsonRequestBehavior.AllowGet);
}
New code
public class CalcHandler : HttpTaskAsyncHandler
{
    public override System.Threading.Tasks.Task ProcessRequestAsync(HttpContext context)
    {
        Console.WriteLine("test");
        // Note: this Task is created but never started, so it will never complete.
        return new Task(() => System.Threading.Thread.Sleep(5000));
    }
}
It's not the best approach. Usually you need to create a separate process ("worker role" in Azure).
This process will handle the long-running operations and save the result to the database. With SignalR (or by calling an API method every 20 seconds) you then update the status of the operation on the client side (in the browser).
If this process takes too much time to compute, your server becomes potentially vulnerable to DDoS attacks.
Moreover, it depends on configuration, but long-running operations can be killed by the server itself - by default, if I'm not mistaken, after 30 minutes of execution.
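The polling side can be as simple as a status endpoint the client calls every ~20 seconds. A sketch in the question's MVC 5 style; IJobStore and JobStatus are hypothetical, with the worker process writing results that this action only reads:
using System;
using System.Threading.Tasks;
using System.Web.Mvc;

public class CalcStatusController : Controller
{
    private readonly IJobStore _jobs; // hypothetical persistence abstraction over the results table

    public CalcStatusController(IJobStore jobs) { _jobs = jobs; }

    [HttpGet]
    public async Task<ActionResult> Status(Guid jobId)
    {
        // The worker process saves progress/results to the database; this
        // request returns quickly, so no 30-second watchdog ever fires.
        JobStatus status = await _jobs.GetStatusAsync(jobId);
        return Json(new { status.State, status.Result }, JsonRequestBehavior.AllowGet);
    }
}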

Why is AsyncController grinding my server to a halt?

I recently made some (fairly trivial) changes to one of my ASP.NET MVC3 controllers and changed one of the actions into an async action. Basically I took code that looked like this:
public ActionResult MyAction(BindingObject parameters)
{
    // use HttpWebRequest to call an external API and process the results
}
And turned it into code that looks like this:
private delegate ActionResult DoMyAction(BindingObject parameters);

public void MyActionAsync(BindingObject parameters)
{
    AsyncManager.OutstandingOperations.Increment();
    var doMyAction = new DoMyAction(MyAction);
    doMyAction.BeginInvoke(parameters, MyActionCallback, doMyAction);
}

private void MyActionCallback(IAsyncResult ar)
{
    var doMyAction = ar.AsyncState as DoMyAction;
    AsyncManager.Parameters["result"] = doMyAction != null ? doMyAction.EndInvoke(ar) : null;
    AsyncManager.OutstandingOperations.Decrement();
}

public ActionResult MyActionCompleted(ActionResult result)
{
    return result;
}

private ActionResult MyAction(BindingObject parameters)
{
    // use HttpWebRequest to call an external API and process the results
}
This seems to work fine: when I test it locally and call MyAction, breakpoints in each of the methods fire when I would expect them to, and it ultimately returns the expected result.
I would anticipate this change at best improving performance under heavy load, because now my worker threads aren't being eaten up waiting for HttpWebRequest to call the external API, and at worst having no effect at all.
Before pushing this change, my server's CPU usage averaged around 30%, and my W3SVC_W3WP Active Requests perfmon stat hovers around 10-15. The server is Win Server 2008 R2 and the MVC site gets around 50 requests per second.
Upon pushing this change, the CPU shoots up to constant 90-100% usage, and the W3SVC_W3WP Active Requests counter slowly increases until it hits the maximum of 5000 and stays there. The website becomes completely unresponsive (either timing out or giving "Service Unavailable" errors).
My assumption is I'm either implementing the AsyncController incorrectly, missing some additional configuration that's required, or maybe just misunderstanding what the AsyncController is supposed to be used for. In any case, my question is why is this happening?
By async-invoking a delegate you move the work to the thread pool. You still burn a thread; you gain nothing and lose performance.
Async mostly makes sense when you can trigger true async IO.
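For contrast, a sketch of what true async IO would look like here, assuming you can move to the task-based actions introduced with MVC 4 / .NET 4.5; the URL and the ProcessBody helper are placeholders, and BindingObject is the question's type:
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class MyController : Controller
{
    private static readonly HttpClient Http = new HttpClient();

    public async Task<ActionResult> MyAction(BindingObject parameters)
    {
        // No worker thread is blocked while the external API call is in
        // flight; the thread returns to the pool until the response arrives.
        string body = await Http.GetStringAsync("https://external.api/endpoint");
        return Content(ProcessBody(body));
    }

    private string ProcessBody(string body)
    {
        return body; // placeholder for the original result processing
    }
}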
