I recently made some (fairly trivial) changes to one of my ASP.NET MVC3 controllers, converting one of the actions into an async action. Basically I took code that looks like this:
public ActionResult MyAction(BindingObject @params){
    // use HttpWebRequest to call an external API and process the results
}
And turned it into code that looks like this:
private delegate ActionResult DoMyAction(BindingObject @params);

public void MyActionAsync(BindingObject @params){
    AsyncManager.OutstandingOperations.Increment();
    var doMyAction = new DoMyAction(MyAction);
    doMyAction.BeginInvoke(@params, MyActionCallback, doMyAction);
}

private void MyActionCallback(IAsyncResult ar){
    var doMyAction = ar.AsyncState as DoMyAction;
    AsyncManager.Parameters["result"] = doMyAction != null ? doMyAction.EndInvoke(ar) : null;
    AsyncManager.OutstandingOperations.Decrement();
}

public ActionResult MyActionCompleted(ActionResult result){
    return result;
}

private ActionResult MyAction(BindingObject @params){
    // use HttpWebRequest to call an external API and process the results
}
This seems to work fine: when I test it locally by calling MyAction, breakpoints in each of the methods fire when I would expect them to, and it ultimately returns the expected result.
I would anticipate this change, at best, to improve performance under heavy load, because my worker threads are no longer being eaten up waiting for the HttpWebRequest call to the external API, and, at worst, to have no effect at all.
Before pushing this change, my server's CPU usage averaged around 30%, and the W3SVC_W3WP Active Requests perfmon counter hovered around 10-15. The server is Win Server 2008 R2 and the MVC site gets around 50 requests per second.
Upon pushing this change, the CPU shoots up to a constant 90-100% usage, and the W3SVC_W3WP Active Requests counter slowly climbs until it hits the maximum of 5000 and stays there. The website becomes completely unresponsive (either timing out or giving "Service Unavailable" errors).
My assumption is that I'm either implementing the AsyncController incorrectly, missing some additional required configuration, or just misunderstanding what the AsyncController is supposed to be used for. In any case, my question is: why is this happening?
By async-invoking a delegate you just move the work to the thread pool. You still burn a thread for the duration of the call, so you gain nothing and lose performance to the extra overhead.
Async mostly makes sense when you can trigger true async IO, which doesn't hold a thread while waiting.
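For example, in an MVC3 AsyncController you could drive the HttpWebRequest through its Begin/End pair, so no pool thread is blocked while the external API responds. A minimal sketch, assuming the action's work is the external call itself (the URL and the ProcessResponse helper are placeholders, not part of the original code):

public void MyActionAsync(BindingObject @params){
    AsyncManager.OutstandingOperations.Increment();
    var request = (HttpWebRequest)WebRequest.Create("http://external.api/endpoint"); // placeholder URL
    Task.Factory.FromAsync<WebResponse>(request.BeginGetResponse, request.EndGetResponse, null)
        .ContinueWith(t => {
            // ProcessResponse is a hypothetical helper that turns the raw response into an ActionResult
            AsyncManager.Parameters["result"] = ProcessResponse(t.Result);
            AsyncManager.OutstandingOperations.Decrement();
        });
}

public ActionResult MyActionCompleted(ActionResult result){
    return result;
}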
I have some pretty naive code:
public async Task Produce(string topic, object message, MessageHeader messageHeaders)
{
    try
    {
        var producerClient = _EventHubProducerClientFactory.Get(topic);
        var eventData = CreateEventData(message, messageHeaders);
        messageHeaders.Times?.Add(DateTime.Now);
        await producerClient.SendAsync(new EventData[] { eventData });
        messageHeaders.Times?.Add(DateTime.Now);
        //.....
        Log.Info($"Milliseconds spent: {(messageHeaders.Times[1] - messageHeaders.Times[0]).TotalMilliseconds}");
    }
    catch
    {
        //..... (error handling elided in the original snippet)
        throw;
    }
}
private EventData CreateEventData(object message, MessageHeader messageHeaders)
{
    var eventData = new EventData(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(message)));
    eventData.Properties.Add("CorrelationId", messageHeaders.CorrelationId);
    if (messageHeaders.DateTime != null)
        eventData.Properties.Add("DateTime", messageHeaders.DateTime?.ToString("s"));
    if (messageHeaders.Version != null)
        eventData.Properties.Add("Version", messageHeaders.Version);
    return eventData;
}
In the logs I saw values of almost one second (~800 milliseconds).
What could be the reason for such a long execution time?
The EventHubProducerClient opens connections to the Event Hubs service lazily, waiting until the first time an operation requires it. In your snippet, the call to SendAsync triggers an AMQP connection to be created, an AMQP link to be created, and authentication to be performed.
Unless the client is closed, most future calls won't incur that overhead as the connection and link are persistent. Most being an important distinction in that statement, as the client may need to reconnect in the face of a network error, when activity is low and the connection idles out, or if the Event Hubs service terminates the connection/link.
As Serkant mentions, if you're looking to understand timings, you'd probably be best served using a library like BenchmarkDotNet that works over a large number of iterations to derive statistically meaningful results.
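For instance, a minimal BenchmarkDotNet harness for this scenario might look like the following sketch (the connection string, hub name, and payload are placeholders):

using System.Text;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;
using BenchmarkDotNet.Attributes;

public class SendBenchmark
{
    private EventHubProducerClient _producer;

    [GlobalSetup]
    public void Setup() =>
        _producer = new EventHubProducerClient("<connection string>", "<event hub name>");

    [Benchmark]
    public Task Send() =>
        _producer.SendAsync(new[] { new EventData(Encoding.UTF8.GetBytes("payload")) });

    [GlobalCleanup]
    public void Cleanup() => _producer.CloseAsync().GetAwaiter().GetResult();
}

// Run with: BenchmarkDotNet.Running.BenchmarkRunner.Run<SendBenchmark>();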
You are measuring the first 'Send', which incurs overhead that later sends won't. So always warm up first, e.g. send a single event, and then measure the next ones.
Another important point: it isn't meaningful to measure just a single 'Send' call. Measure a batch of calls instead and calculate latency percentiles; that gives a much better figure for your tests.
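A rough sketch of that warm-up-then-measure approach (the connection string, hub name, and payload are placeholders):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Text;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

var producer = new EventHubProducerClient("<connection string>", "<event hub name>");

// Warm up: the first send pays for AMQP connection/link creation and authentication.
await producer.SendAsync(new[] { new EventData(Encoding.UTF8.GetBytes("warm-up")) });

// Measure a batch of sends and report percentiles rather than a single figure.
var latencies = new List<double>();
for (var i = 0; i < 100; i++)
{
    var sw = Stopwatch.StartNew();
    await producer.SendAsync(new[] { new EventData(Encoding.UTF8.GetBytes("payload " + i)) });
    latencies.Add(sw.Elapsed.TotalMilliseconds);
}

latencies.Sort();
Console.WriteLine($"p50: {latencies[49]:F1} ms, p95: {latencies[94]:F1} ms");

await producer.CloseAsync();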
I am having some severe performance issues in a project I'm working on. It's a standard web application project - users send requests to an API which trigger some form of computation in various handlers.
The problem right now is that pretty much any request will drive the CPU usage of the server up significantly, regardless of what internal computation the corresponding function is supposed to do. For example, we have an endpoint to display a game from the database - the user sends a request containing an ID and the server responds with a JSON object. While this request is being processed, the CPU usage goes from 5% (with the app just running) to 25-30%. Several concurrent requests will tank the server, with .NET Core using 60-70% of the CPU.
The request chain looks like:
(Controller)
[HttpGet("game/{Id}")]
public async Task<IActionResult> GetPerson(string Id)
{
    try
    {
        var response = await _GameService.GetGameAsync(Id);
        return Ok(new FilteredResponse(response, 200));
    }
    catch
    {
        // error handling elided in the original snippet
        throw;
    }
}
Service
public async Task<PlayerFilteredGameState> GetGameAsync(string gameId, string apiKey)
{
    var response = await _ironmanDataHandler.GetGameAsync(gameId);
    var filteredGame = _responseFilterHelper.FilterForPlayer(response, apiKey);
    return filteredGame;
}
Data handler
public async Task<GameState> GetGameAsync(string gameStateId)
{
    using (var db = _dbContextFactory.Create())
    {
        var specifiedGame = await db.GameStateIronMan.FirstOrDefaultAsync(a => a.gameId == gameStateId);
        if (specifiedGame == null)
        {
            throw new ApiException("There is no game with that ID.", 404);
        }
        var deserializedGame = JsonConvert.DeserializeObject<GameState>(specifiedGame.GameState);
        return deserializedGame;
    }
}
I've tried mocking all function return values and database accesses, replacing all computed values with null/new Game(), etc., but it doesn't improve the performance. I've spent a lot of time with different performance analysis tools, but there isn't a single function that uses more than 0.5-1% of the CPU.
After a lot of investigation, the only "conclusion" I've reached is that it seems to have something to do with the internal functionality of async/await and the way we use it in our project, because it doesn't matter what we do in the called functions - as soon as we call a function the performance takes a huge hit.
I also tried making the functions synchronous just to see if there was something wrong with my system, but that reduced performance massively (which is good, I suppose).
I really am at a loss here, because we aren't doing anything out of the ordinary and we're still having large issues.
UPDATE
I've performed a performance analysis in ANTS. I'm not really sure how to present the results, so I took a picture of what the call stack looks like.
If your game state is a large object, deserializing it can be quite taxing.
You could create a test where you just deserialize a saved game state, and do some profiling with various game states (a fresh start, after some time, ...) to see if there are differences.
If you find that deserializing takes a lot of CPU no matter what, you could look into changing the structure and seeing if you can reduce the amount of data that is saved.
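A minimal sketch of such a test, assuming a saved game state has been exported to a JSON file (the file name is a placeholder):

using System;
using System.Diagnostics;
using System.IO;
using Newtonsoft.Json;

var json = File.ReadAllText("saved-gamestate.json"); // placeholder sample file

var sw = Stopwatch.StartNew();
var game = JsonConvert.DeserializeObject<GameState>(json);
sw.Stop();

Console.WriteLine($"Deserialized {json.Length:N0} chars in {sw.ElapsedMilliseconds} ms");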
I have a .NET 4.5.2 ASP.NET webapp in which a chunk of code makes async webclient calls back into web pages inside the same webapp. (Yes, my webapp makes async calls back into itself.) It does this in order to screen scrape, grab the html, and hand it to a PDF generator.
I had this all working...except that it was very slow because there are about 15 labor-intensive reports that take roughly 3 seconds each, or 45 seconds in total. Since that is so slow I attempted to generate all these concurrently in parallel, and that's when things hit the fan.
What is happening is that my aspx reports (that get hit by webclient) never make it past the class constructor until timeout; neither Page_Load nor any other page events fire until then. The report generation (and the webclient calls) is triggered when the user clicks Save in the webapp, and a bunch of stuff happens, including this async page generation activity. The webapp requires Windows authentication, which I'm handling fine.
So when the multithreaded stuff kicks off, a bunch of webclient requests are made, and they all get stuck in the reports' class constructor for a few minutes and then time out. During/after the timeout, session data is cleared, and when that happens, the reports cannot get their data.
Here is the multithreaded code:
Parallel.ForEach(folders, (folderPath) =>
{
    ...
    string html = getReportHTML(fullReportURL, aspNetSessionID);
    // hand html to the PDF generator here...
    ...
});
private string getReportHTML(string url, string aspNetSessionID) {
    using (WebClient webClient = new WebClient()) {
        webClient.UseDefaultCredentials = true;
        webClient.Headers.Add(HttpRequestHeader.Cookie, "ASP.NET_SessionId=" + aspNetSessionID);
        string fullReportURL = url;
        byte[] reportBytes = webClient.DownloadData(fullReportURL);
        if (reportBytes != null && reportBytes.Length > 0) {
            string html = Encoding.ASCII.GetString(reportBytes);
            return html;
        }
    }
    return string.Empty;
}
Important points:
Notice I have to include the ASP.NET session cookie, or the web call doesn't work.
webClient.UseDefaultCredentials = true is required for the winauth.
The fragile session state and architecture is not changeable in the short term - it's an old and massive webapp and I am stuck with it. The reports are complex and rely heavily on session state (and prior to session state, many db lookups and calcs occur).
Even though I'm calling reports from my webapp back into the same webapp, I must use an absolute URL - a relative URL throws errors.
When I extract the code samples above into a separate .NET console app, it works well and doesn't get stuck in the constructor. Because of this, the issue must lie (at least in part) in the fact that my web app is making async calls back to itself. I don't know how to avoid doing this. I even flirted with Server.Execute(), which really blows up inside worker threads.
The reports cannot be generated in a windows service or some other process - it must be linked to the webapp's save event.
There's a lot going on here, but I think the most fundamental question/problem is that these concurrent webclient calls hit the ASPX pages and get stuck in the constructor, going no further into page events. And after about 2 minutes, all those threads flood down into the page events, where failures occur because the main webapp's session state is no longer active.
Chicken or egg: I don't know whether the threads unblock and eventually hit page events because the session state was cleared, or the other way around. Or maybe there is no connection.
Any ideas?
I have a website on Rackspace which performs calculations that can take anywhere from 30 seconds to several minutes. Originally I implemented this with SignalR but had to yank it due to excessive CC usage; hosted Rackspace sites are really not designed for that kind of use, and the bill went through the roof.
The basic code is as below (old code), which works perfectly on my test server but of course gets a timeout error on Rackspace if the calculation takes more than 30 seconds, due to their watcher killing it. I have been told that the operation must write to the stream to keep it alive. In the days of old I would have started a thread and polled the site until the thread was done; if there is a better way, I would prefer to take it.
It seems that with .NET 4.5 I can use the HttpTaskAsyncHandler to accomplish this, but I'm not getting it. The (new code) below is, as I understand it, the handler you would use: taking the old code in the using block and placing it in the ProcessRequestAsync task. When I attempt to call CalcHandler / Calc I get a 404 error, which most likely has to do with routing. I was trying to follow this link but could not get it to work either. The add name is "myHandler" but the example link is "feed" - how did we get from one to the other? They mentioned they created a class library, but can the code be in the same project as the current code, and how?
http://codewala.net/2012/04/30/asynchronous-httphandlers-with-asp-net-4-5/
As a side note, will the HttpTaskAsyncHandler allow me to keep the request alive until it is completed, even if it takes several minutes? Basically, should I use something else for what I am trying to accomplish?
Old code
[Authorize]
[AsyncTimeout(5000)] // does not do anything on RackSpace
public async Task<JsonResult> Calculate(DataModel data)
{
    try
    {
        using (var db = new ApplicationDbContext())
        {
            var result = await CalcualteResult(data);
            return Json(result, JsonRequestBehavior.AllowGet);
        }
    }
    catch (Exception ex)
    {
        LcDataLink.ProcessError(ex);
    }
    return Json(null, JsonRequestBehavior.AllowGet);
}
New code
public class CalcHandler : HttpTaskAsyncHandler
{
    public override System.Threading.Tasks.Task ProcessRequestAsync(HttpContext context)
    {
        Console.WriteLine("test");
        // Note: this returns a cold (never-started) Task, so the request will hang;
        // Task.Run(() => Thread.Sleep(5000)) or Task.Delay(5000) would actually complete.
        return new Task(() => System.Threading.Thread.Sleep(5000));
    }
}
It's not the best approach. Usually you need to create a separate process (a "worker role" in Azure).
This process will handle the long-running operations and save the result to the database. With SignalR (or by calling an API method every 20 seconds) you then update the status of the operation on the client side (in the browser).
If this process takes too much time to calculate, your server becomes potentially vulnerable to DDoS attacks.
Moreover, depending on configuration, long-running operations can be killed by the server itself - by default, if I'm not mistaken, after 30 minutes of execution.
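A rough sketch of the start-then-poll shape of that pattern, using HostingEnvironment.QueueBackgroundWorkItem (.NET 4.5.2+) as an in-process stand-in for a real worker process; JobStore is a hypothetical status store backed by the database:

// Sketch only: in a production setup the calculation would live in a separate
// worker process, with JobStore (hypothetical) persisting status to the database.
[HttpPost]
public JsonResult StartCalculation(DataModel data)
{
    var jobId = Guid.NewGuid().ToString("N");
    JobStore.SetStatus(jobId, "pending");

    HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
    {
        var result = await CalcualteResult(data); // the existing calculation
        JobStore.SetResult(jobId, result);        // mark the job as done
    });

    return Json(new { jobId });
}

// The browser calls this every ~20 seconds until the job reports "done".
[HttpGet]
public JsonResult CheckCalculation(string jobId)
{
    return Json(JobStore.GetStatus(jobId), JsonRequestBehavior.AllowGet);
}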
I am attempting to call/push a semi-large TIFF and a Gal file to a Java web service.
The platform is Visual Studio 2013, C# windows forms application.
I am pointing to the WSDL file and "The Platform" is generating a service reference class for me.
This is all very abstracted from me, which is a good thing as I am a relative newbie to this arena.
I left the "Generate Task based Code" checked and I get an addSample and addSampleAsync method.
I populate the class fields and push the code up.
The addSample code works fine but blocks the UI.
The async code, addSampleAsync, also works, but is slower and is not completely asynchronous.
addSampleAsync locks the UI for about half of the processing time, and the call to fncTestUpload does not return for that same period of time.
//Dimensioned at class level:
//private static addSamplePortClient Service = new addSamplePortClient();
//private static addSampleResponse Myresult = new addSampleResponse();
//ThisRequest is the WSDL-modeled class object.

//This code works, but is slow: 30 seconds on wifi
ResponseType Myresult = Service.addSample(ThisRequest.Request);
MessageBox.Show(Myresult.Message + Myresult.Code);

//This code locks up the UI for about 15-20 seconds, then takes another 15 to display the message box
fncTestUpload(ThisRequest);
async void fncTestUpload(addSampleRequest SentRequest)
{
    Myresult = await Service.addSampleAsync(SentRequest.Request);
    MessageBox.Show(Myresult.Response.Message + " - " + Myresult.Response.Code);
}
I made the response object a class-level variable in hopes of doing something with it in the function that calls fncTestUpload, which I thought would return immediately when calling an async function. It does not return until after 15 seconds.
I have spent several hours googling this and have not found any answers as to why addSampleAsync is not working as advertised.
Microsoft's tutorials may as well be written in Dilbert's Elbonian. I can't follow them and don't find them helpful, so please don't direct me to one.
When you use the 'await' keyword in your method, you are saying: "OK, you go ahead and do your work; I will return to my caller, and you let me know when you're done."
So the 15 seconds of waiting is the time it takes your service to process the request, after which the state machine generated by the async method resumes your method once the awaited call has finished. That is the normal behavior for await.
As for the MessageBox that takes another 15 seconds: it could be that the Response property is lazy-loading, and is actually trying to load the code/message for the first time when you access those properties.
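One caveat worth adding (an assumption, since the generated client's internals aren't shown): if addSampleAsync does significant synchronous work - for example serializing the large TIFF - before its first internal await, that work runs on the UI thread and would explain the freeze. A sketch of offloading the entire call to the thread pool so the UI stays responsive:

async void fncTestUpload(addSampleRequest SentRequest)
{
    // Task.Run moves any synchronous portion of addSampleAsync off the UI thread;
    // await then resumes on the UI thread, so MessageBox is still safe to call here.
    Myresult = await Task.Run(() => Service.addSampleAsync(SentRequest.Request));
    MessageBox.Show(Myresult.Response.Message + " - " + Myresult.Response.Code);
}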