I have an .aspx page which loads 10 charts asynchronously through jQuery AJAX requests. The requests are made to generic handlers which implement IReadOnlySessionState, since access to a session variable is required but it is only a read; this way I am not affected by the read/write session lock that ASP.NET imposes.
Through the debugger I can see that the calls happen asynchronously, but there seems to be a limit: some of the calls enter the code only after the first few have completed. I am not sure whether this is by design in IIS or a property inside the web.config.
Is there a limit on the number of threads that one user/session can have at one time?
The way IIS and ASP.NET handle threads depends on the version of IIS you are using. There is a limit on the number of worker threads, and there are caps on the number of threads that must be left available, which means that only a certain number of threads can execute at once.
See: http://blogs.msdn.com/b/tmarq/archive/2007/07/21/asp-net-thread-usage-on-iis-7-0-and-6-0.aspx
If your ASP.NET application is using web services (WCF or ASMX) or System.Net to communicate with a backend over HTTP you may need to increase connectionManagement/maxconnection. For ASP.NET applications, this is limited to 12 * #CPUs by the autoConfig feature.
Also from the same article: http://support.microsoft.com/kb/821268
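If you hit that outbound connection cap, it can also be raised programmatically. A minimal sketch, assuming you prefer code over config in Global.asax (the multiplier is illustrative, not a recommendation):

using System;
using System.Net;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Default for ASP.NET with autoConfig is 12 * number of CPUs;
        // raise it if many concurrent backend calls are queuing.
        ServicePointManager.DefaultConnectionLimit = 48 * Environment.ProcessorCount;
    }
}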
If you have long-running HTTP requests from AJAX, your best bet is to use asynchronous HTTP request handlers. Then the requests can wait on an I/O thread, since ASP.NET has far more I/O threads than worker threads.
See: http://www.asp.net/web-forms/tutorials/aspnet-45/using-asynchronous-methods-in-aspnet-45
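To tie this back to the question above, here is a minimal sketch of such an async handler, assuming .NET 4.5; the handler name, session key, and data call are hypothetical:

using System.Threading.Tasks;
using System.Web;
using System.Web.SessionState;

// Async handler: the worker thread is released while awaiting I/O.
// IReadOnlySessionState gives read access to Session without taking
// the exclusive session lock.
public class ChartDataHandler : HttpTaskAsyncHandler, IReadOnlySessionState
{
    public override async Task ProcessRequestAsync(HttpContext context)
    {
        var userId = (string)context.Session["UserId"]; // read-only session access
        string json = await LoadChartJsonAsync(userId); // hypothetical I/O-bound call
        context.Response.ContentType = "application/json";
        context.Response.Write(json);
    }

    private static Task<string> LoadChartJsonAsync(string userId)
    {
        // Placeholder for a real async data call (e.g. HttpClient or async ADO.NET).
        return Task.FromResult("{}");
    }
}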
Related
I am trying to determine what will happen when my Web API methods are being called simultaneously by two clients. In order to do so I am generating two async requests in Python:
rs = (grequests.post(url, data=json), grequests.post(url, data=json))
grequests.map(rs)
The requests call a non-async method in my Web API which changes my resource, a static class instance. From what I've understood, API requests should not run concurrently, which is what I want, since I don't want the two requests to interfere with each other. However, if I set a breakpoint in my API method, the two requests seem to run concurrently (in parallel). This leads to some unwanted results, since the two requests are changing the same resource in parallel.
Is there any way to make sure that the Web API handles the requests non-concurrently?
Requests to your Web API methods are handled by the IIS application pool, which dispatches each synchronous request it receives onto its own thread from a pool of threads. There is no way to tell IIS to run these threads non-concurrently.
I believe you have a misunderstanding of what a "non-async" Web API method is. When a Web API method is async, that means it gives its application pool thread back to the pool while it's in a wait state. This has the advantage that other requests don't have to initialize a new thread (which is somewhat expensive). It also helps minimize the number of concurrent threads, which in turn minimizes the number of requests that the application pool has to queue up.
For non-async methods, the application pool dedicates a thread to each request for the request's entire duration, and it will not share that thread with any other request.
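A rough sketch of the difference just described (the controller, URL, and backend call are hypothetical; routing configuration is omitted):

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public class ReportsController : ApiController
{
    // Synchronous: this thread is held for the full duration of the call.
    public string GetSync()
    {
        using (var client = new WebClient())
            return client.DownloadString("http://backend/report"); // blocks the thread
    }

    // Asynchronous: the thread goes back to the pool while the I/O is pending.
    public async Task<string> GetAsync()
    {
        using (var client = new HttpClient())
            return await client.GetStringAsync("http://backend/report");
    }
}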
So the short answer to your question is no. There is no way to make sure that the Web API requests are handled non-concurrently. Brian Driscoll's comment is correct. You will have to lock your shared resources during a request.
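A minimal sketch of that locking approach (the resource class and its state are hypothetical; only the lock pattern matters):

public static class SharedResource
{
    private static readonly object SyncRoot = new object();
    private static int _counter;

    public static void Mutate()
    {
        // Serialize all mutations across concurrent requests.
        lock (SyncRoot)
        {
            _counter++;
            // ...any other multi-step state changes happen atomically here...
        }
    }
}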
I am using a Web API service controller, hosted by IIS, and I'm trying to understand how this architecture really works:
When a web page client sends several async requests simultaneously, are all these requests executed in parallel at the Web API controller?
In the IIS app pool, I've noticed the queue size is set to a default value of 1,000. Does this mean that a maximum of 1,000 threads can work in parallel at the same time on the Web API server?
Or is this value only related to the IIS queue?
I've read that IIS maintains some kind of thread queue; does this queue dispatch its work asynchronously, or are all the client requests sent by IIS to the Web API service dispatched synchronously?
The queue size you're looking at specifies the maximum number of requests that will be queued for each application pool (which typically maps to one w3wp worker process). Once the queue length is exceeded, 503 "Server Too Busy" errors will be returned.
Within each worker process, a number of threads can/will run. Each request runs on a thread within the worker process (defaulting to a maximum of 250 threads per process, I believe).
So, essentially, each request is processed on its own thread (concurrently - at least, as concurrently as threads get), but all threads for a particular app pool are (typically) managed by a single process. This means that requests are, indeed, executed concurrently as far as the requests themselves are concerned.
In response to your comment: if you have sessions enabled (which you probably do), then ASP.NET will queue the requests in order to maintain a lock on the session for each request. Try hitting your sleeping action in Chrome and then your quick-responding action in Firefox and see what happens. You should see that the two different sessions allow your requests to be executed concurrently.
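To reproduce that test, something like the following hypothetical controller would do (action names and the sleep duration are made up; action-based routing is assumed):

using System.Threading;
using System.Web.Http;

public class DemoController : ApiController
{
    // With session state enabled, two requests from the SAME session will
    // serialize: Fast waits until Slow releases the session lock.
    [HttpGet]
    public string Slow()
    {
        Thread.Sleep(10000); // simulate a long-running action
        return "slow done";
    }

    [HttpGet]
    public string Fast()
    {
        return "fast done";
    }
}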
Yes, all the requests will be executed in parallel, using threads from the CLR thread pool, subject to limits. The queue size set against the app pool is the point at which IIS starts rejecting requests with a 503 - Service Unavailable status code. Even before this happens, your requests will be queued by IIS/ASP.NET. That is because threads cannot be created at will: there is a limit on the number of concurrent requests that can run, set by MaxConcurrentRequestsPerCPU and a few other parameters.
For 1,000 threads to execute in parallel in a true sense, you would need 1,000 CPU cores. Otherwise, threads have to be time-sliced, and that adds overhead to the system. Hence, there are limits on the number of threads. It is very difficult to answer your questions comprehensively in a single answer here; you will probably need to read up a little, and a good place to start is http://blogs.msdn.com/b/tmarq/archive/2007/07/21/asp-net-thread-usage-on-iis-7-0-and-6-0.aspx.
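To see the gap between queued requests and actually-running threads on your own machine, a quick sketch using the real ThreadPool APIs (output is illustrative):

using System;
using System.Threading;

class ThreadPoolLimits
{
    static void Main()
    {
        int workerMax, ioMax, workerMin, ioMin;
        ThreadPool.GetMaxThreads(out workerMax, out ioMax);
        ThreadPool.GetMinThreads(out workerMin, out ioMin);

        // The max numbers bound how many pool threads can ever run;
        // actual concurrency is further limited by CPU cores and
        // settings like MaxConcurrentRequestsPerCPU.
        Console.WriteLine("Worker threads: min {0}, max {1}", workerMin, workerMax);
        Console.WriteLine("I/O threads:    min {0}, max {1}", ioMin, ioMax);
        Console.WriteLine("CPU cores:      {0}", Environment.ProcessorCount);
    }
}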
I have an ASP.NET web application running on IIS 7, set up in web-garden mode. I want to clear runtime cache items across all worker processes in a single step. I could set up a database key-value, but that would mean a thread on each worker process, on each of my load-balanced web servers, polling for changes on that key-value and flushing the cache. That would be a very bad mechanism, as I flush cache items at most once per day. I also cannot implement push notification using SqlCacheDependency with Service Broker notifications, as I have a MySQL db. Any thoughts? Is there any dirty workaround?
One possible workaround: expose an .aspx page, and hit that page multiple times using the IP and port on which the site is hosted instead of the domain name - ex: http://ip.ip.ip.ip:82/CacheClear.aspx - so that a request for that page might be sent to all the worker processes within that web server, and on Page_Load, clear the cache items. But this is a really dirty hack and may not work in cases where all requests are sent to the same worker process.
You need to set up inter-process communication.
For caching there are two commonly used ways of doing this:
Set up a shared cache (memcached or the like).
Set up a message queue (e.g. MSMQ or RabbitMQ) and use it to spread state to the local caches.
A shared cache is the ultimate solution, as it means the whole cache is distributed, but it is also the most complex: it needs to be set up so the cache load is properly distributed between nodes, and you must make sure it doesn't become a bottleneck.
The second option requires more code on your part, but it is easier if you don't want to share the cache content (as in your case).
The easiest approach is to set up a listener thread or task to handle the cache-clear or individual-entry invalidation messages. This thread will be dormant when there are no messages, so the impact on performance is minimal.
You can also forgo the listener thread by handling messages as part of the usual IIS request pipeline, i.e. set up a filter/module that checks for messages in the queue and processes them before handling the request; but performance-wise the first option is (slightly) better.
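A minimal sketch of the listener-thread option, assuming MSMQ (System.Messaging) and one private queue per worker process; the queue path is hypothetical:

using System.Messaging;
using System.Threading;
using System.Web;

public static class CacheInvalidationListener
{
    // Call once from Application_Start.
    public static void Start()
    {
        var thread = new Thread(Listen) { IsBackground = true };
        thread.Start();
    }

    private static void Listen()
    {
        // Each worker process needs its own queue, because MSMQ delivers
        // any given message to only one receiver.
        using (var queue = new MessageQueue(@".\private$\cacheclear"))
        {
            while (true)
            {
                queue.Receive(); // blocks (dormant) until a message arrives
                ClearRuntimeCache();
            }
        }
    }

    private static void ClearRuntimeCache()
    {
        var cache = HttpRuntime.Cache;
        foreach (System.Collections.DictionaryEntry entry in cache)
            cache.Remove((string)entry.Key);
    }
}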
In a WebForms ASP.NET site (IIS, single app pool), I have a call to a lengthy web service method referenced in Visual Studio as a Service Reference (.NET 4.0). Unfortunately I must wait for information from the web service before I can serve the page to the user. Currently the web service is called synchronously, so the server can't reuse the current thread to process other requests, which has a performance impact.
Of course I can generate asynchronous operations for the service reference in Visual Studio and call BeginGetFoo instead of GetFoo, but I still must somehow wait for the result from the web service.
Here comes the question: if I use AsyncWaitHandle.WaitOne (as below), will it be any better, in terms of whole-application performance, than the synchronous call I use today?
IAsyncResult result = fooSoapClient.BeginGetFoo(null, null);
result.AsyncWaitHandle.WaitOne();
var foo = fooSoapClient.EndGetFoo(result);
And of course, if waiting can be done better, I am open to suggestions.
You want to use an Asynchronous Page. See "Wicked Code: Scalable Apps with Asynchronous Programming in ASP.NET", also Asynchronous Pages in ASP.NET 2.0, which talks about web services and Asynchronous Tasks with RegisterAsyncTask.
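A rough sketch of RegisterAsyncTask wired to the Begin/End pair from the question (this assumes Async="true" in the @ Page directive; the FooSoapClient and Foo types come from the question's service reference):

using System;
using System.Web.UI;

public partial class FooPage : Page
{
    private FooSoapClient fooSoapClient; // generated service-reference client (assumed)
    private Foo foo;

    protected void Page_Load(object sender, EventArgs e)
    {
        fooSoapClient = new FooSoapClient();
        // The worker thread is returned to the pool between Begin and End.
        RegisterAsyncTask(new PageAsyncTask(BeginGetFoo, EndGetFoo, null, null));
    }

    private IAsyncResult BeginGetFoo(object sender, EventArgs e, AsyncCallback cb, object state)
    {
        return fooSoapClient.BeginGetFoo(cb, state);
    }

    private void EndGetFoo(IAsyncResult ar)
    {
        foo = fooSoapClient.EndGetFoo(ar);
        // ...bind foo to the page here...
    }
}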
You'd still be hogging the thread. A "safe" option would be to use ASP.NET MVC's async controllers: http://www.aaronstannard.com/post/2011/01/06/asynchonrous-controllers-ASPNET-mvc.aspx
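For illustration, a minimal MVC-style async controller using the same service client from the question (the action names are made up):

using System;
using System.Web.Mvc;

public class FooController : AsyncController
{
    public void IndexAsync()
    {
        var client = new FooSoapClient(); // assumed service-reference client
        AsyncManager.OutstandingOperations.Increment();
        client.BeginGetFoo(ar =>
        {
            // Runs on completion; no request thread was held while waiting.
            AsyncManager.Parameters["foo"] = client.EndGetFoo(ar);
            AsyncManager.OutstandingOperations.Decrement();
        }, null);
    }

    public ActionResult IndexCompleted(Foo foo)
    {
        return View(foo);
    }
}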
Ideally though, you shouldn't do long-running things on a web request. Have a Windows service or something process the long-running task (which could get kicked off by a web request dropping something on a message queue or putting a task in a database), poll from the client using AJAX, and then update the user when it's done.
If refactoring your code is not acceptable, so that you cannot follow John Saunders's answer, then the only thing you can do is increase the number of threads for the application. This will let you scale better, but at some point it will have diminishing returns and you will start hurting performance. What's more, if you do not have users waiting in the request queue (which only happens above roughly 25 simultaneous users per core on your server), you don't need to do anything. Async programming in the web server helps only with scalability, not actual performance for a single user.
We have a large process in our application that runs once a month. This process typically runs in about 30 minutes and generates 342,000 or so log events. Recently we updated our logging to a centralized model using WCF and are now having difficulty with performance. Whereas the previous solution would complete in about 30 minutes, with the new logging it now takes 3 or 4 hours. The problem, it seems, is that the application is actually waiting for the WCF request to complete before execution continues. The WCF method is already configured as IsOneWay, and I wrapped the client-side call to that WCF method in a different thread to try to prevent this type of problem, but it doesn't seem to have worked. I have thought about using the async WCF calls, but before I try something else I thought I would ask here to see if there is a better way to handle this.
342,000 log events in 30 minutes, if I did my math correctly, comes out to 190 log events per second. I think your problem may have to do with the default throttling settings in WCF.
Even if your method is set to one-way, depending on whether you're creating a new proxy for each logged event, calling the method will still block while the proxy is created and the channel is opened, and, if you're using an HTTP-based binding, it will block until the message has been received by the service (an HTTP-based binding sends back a null response for a one-way method call when the message is received).
The default WCF throttling limits concurrent instances to 10 on the service side, which means only 10 requests will be handled at a time, and any further requests get queued. Pair that with an HTTP binding, and anything after the first 10 requests is going to block at the client until it becomes one of the 10 requests being handled.
Without knowing how your services are configured (instance mode, etc.) it's hard to say more than that, but if you're using per-call instancing, I'd recommend setting MaxConcurrentCalls and MaxConcurrentInstances on your ServiceBehavior to something much higher (the defaults are 16 and 10, respectively).
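A hedged sketch of raising those throttles in code for a self-hosted service (the service type is assumed and the values are illustrative; the same settings exist as a serviceThrottling behavior in config):

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

class LoggingHost
{
    static void Main()
    {
        var host = new ServiceHost(typeof(LoggingService)); // assumed service type

        // Raise the throttles well above the defaults (16 calls / 10 instances).
        var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior();
            host.Description.Behaviors.Add(throttle);
        }
        throttle.MaxConcurrentCalls = 256;
        throttle.MaxConcurrentInstances = 256;

        host.Open();
        Console.WriteLine("Listening...");
        Console.ReadLine();
        host.Close();
    }
}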
Also, to build on what others have mentioned about aggregating multiple events and submitting them all at once, I've found it helpful to set up a static Logger.LogEvent(eventData) method. That way it's simple to use throughout your code, and you can control in your LogEvent method how you want logging to behave throughout your application, such as configuring how many events should get submitted at a time.
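A minimal sketch of such a batching wrapper (the LogEventData type, the LoggingServiceClient proxy, and its LogBatch operation are assumptions; only the buffering pattern matters):

using System.Collections.Generic;

public static class Logger
{
    private const int BatchSize = 1000;          // illustrative threshold
    private static readonly object SyncRoot = new object();
    private static readonly List<LogEventData> Buffer = new List<LogEventData>();

    public static void LogEvent(LogEventData eventData)
    {
        LogEventData[] batch = null;
        lock (SyncRoot)
        {
            Buffer.Add(eventData);
            if (Buffer.Count >= BatchSize)
            {
                batch = Buffer.ToArray();
                Buffer.Clear();
            }
        }

        // Send outside the lock so other callers aren't blocked by the WCF call.
        if (batch != null)
        {
            using (var client = new LoggingServiceClient()) // hypothetical WCF proxy
                client.LogBatch(batch); // hypothetical one-way batch operation
        }
    }
}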
Making a call to another process or remote service (i.e. calling a WCF service) is about the most expensive thing you can do in an application. Doing it 342,000 times is just sheer insanity!
If you must log to a centralized service, you need to accumulate batches of log entries and then, only when you have, say, 1,000 or so in memory, send them all to the service in one hit. This will give you a reasonable performance improvement.
log4net has a buffering system that exists outside the context of the calling thread, so it won't hold up your call while it logs. Its usage should be clear from the many appender config examples - search for the term bufferSize. It's used on many of the slower appenders (e.g. remoting, email) to keep the source thread moving without waiting on the slower logging medium, and there is also a generic buffering meta-appender that may be used "in front of" any other appender.
We use it with an AdoNetAppender in a system of similar volume and it works wonderfully.
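For what it's worth, a hedged sketch of wiring that buffering meta-appender up in code rather than XML (the console appender stands in for a slower appender such as AdoNetAppender; the buffer size is illustrative):

using log4net;
using log4net.Appender;
using log4net.Config;
using log4net.Layout;

class LoggingSetup
{
    static void Main()
    {
        var inner = new ConsoleAppender
        {
            Layout = new PatternLayout("%date %-5level %message%newline")
        };
        inner.ActivateOptions();

        // Buffer 512 events, then forward them to the slow appender in one go.
        var buffer = new BufferingForwardingAppender { BufferSize = 512 };
        buffer.AddAppender(inner);
        buffer.ActivateOptions();

        BasicConfigurator.Configure(buffer);

        var log = LogManager.GetLogger(typeof(LoggingSetup));
        log.Info("buffered log event");
    }
}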
There's always traditional syslog; there are plenty of syslog daemons that run on Windows. It's designed to be a more efficient way of centralised logging than WCF, which is meant for less intensive operations, especially if you're not using the TCP/IP WCF configuration.
In other words, have a go with this - the correct tool for the job.