If I create a web application and host it on a Windows Server, then as I understand it, IIS handles the initial request and routes it to the appropriate website or application. I'm under the impression that a w3wp.exe (worker process) instance is created for each application. IIS works with the worker process, which in turn works with the web application.
What happens if the application gets twenty requests per second? Will the worker process create twenty instances of the application to handle each request, or will it queue the requests, passing them to a single instance of the application as and when it can?
I suspect it's the latter. If that is the case, am I right to think that the worker process will keep an application alive whilst it is getting requests?
I'm trying to fully understand what a web application does when it handles many concurrent requests. I've tried asking this question before but struggled with the wording, so hopefully this makes sense.
EDIT:
Thanks to Mason I realised that the answer was right in front of me! Web applications use DLLs, which can't run by themselves. It's the w3wp.exe (worker process) which calls the DLLs to handle the requests.
The number of worker processes per web site is controlled in the application pool advanced settings (in IIS management console).
The number of concurrent requests each worker process can handle depends on the IIS version. In IIS 7 it was configured in the same place; for more recent versions you will have to check your machine.config (look for maxWorkerThreads).
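For a rough sense of how many requests a single worker process can serve in parallel, you can inspect the CLR thread pool limits from code. This is only a sketch; the values it prints depend entirely on the machine.config/processModel settings (including maxWorkerThreads) in effect on your machine.

    using System;
    using System.Threading;

    // Minimal sketch: print the effective CLR thread pool limits for a process.
    // maxWorkerThreads in machine.config (<processModel>) feeds into these values,
    // so the numbers you see depend on your own configuration.
    class ThreadPoolInfo
    {
        static void Main()
        {
            ThreadPool.GetMaxThreads(out int maxWorker, out int maxIo);
            ThreadPool.GetMinThreads(out int minWorker, out int minIo);

            Console.WriteLine($"Max worker threads: {maxWorker}, max I/O threads: {maxIo}");
            Console.WriteLine($"Min worker threads: {minWorker}, min I/O threads: {minIo}");
        }
    }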
Related
Can anyone explain the differences, in IIS, between application pools, worker processes and app domains? Also, how do they work together? I've read a couple of articles but it's still a little confusing.
Does each website created in IIS become an application?
Is each application associated with one worker process?
Where do app domains come into the picture?
Let me try to explain this in other words.
On a server you can have many ASP.NET sites running together. Each site is an app domain.
You must assign an application pool to each of them. Many application domains (sites) can share the same application pool, and because they share the same application pool they run under the same process(es), under the same account, and with the same pool settings. If the pool restarts, all sites under that pool restart.
Now, each pool can have one or more worker processes. Each worker process is a separate program that runs your site; it has its own static variables, its own start/stop calls, and so on. Different worker processes do not communicate with each other, and the only way to exchange data between them is through shared files or a shared database. If you have more than one worker process and one of them is busy with a long-running calculation, the others can still handle incoming web requests and serve content.
When you assign many worker processes to a single pool you create what is called a web garden, and your site behaves almost as if it were running on more than one computer.
Each worker process can have many threads.
How having more worker processes affects you:
When you have a single worker process everything is simpler: throughout your application all static variables are shared, and you use lock to synchronize access to them.
When you assign more than one worker process you still use lock for the static variables within each process, but each worker process now has its own copy of those static variables, and if you have a shared resource (e.g. the creation of a thumbnail on disk) you need to synchronize your worker processes with a Mutex.
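A minimal sketch of that cross-process synchronization (the mutex name and the thumbnail scenario are made up for illustration; a plain lock would only protect threads inside one w3wp.exe):

    using System.IO;
    using System.Threading;

    public static class ThumbnailWriter
    {
        // A named mutex is visible to every worker process on the machine,
        // unlike lock(...), which only synchronizes threads in one process.
        private static readonly Mutex CrossProcessLock =
            new Mutex(false, @"Global\MyApp_ThumbnailMutex");

        public static void WriteThumbnail(string path, byte[] bytes)
        {
            CrossProcessLock.WaitOne();
            try
            {
                if (!File.Exists(path))
                {
                    File.WriteAllBytes(path, bytes);
                }
            }
            finally
            {
                CrossProcessLock.ReleaseMutex();
            }
        }
    }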
One more note: it may sound as if more worker processes automatically give you smoother, more parallel page loads. There is, however, a small issue with the ASP.NET session handler: it locks requests from the same session for the duration of a page load. That is both good and bad, depending on whether you know about it and handle it, or change it.
So let's talk about a single site with many worker processes. Here you face the issue that you need to synchronize changes to your shared resources with a Mutex. But the pages/handlers that use session are not served in parallel for the same user, because the session lock serializes them. This is actually convenient at first, because it saves you from doing that synchronization in many places yourself.
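If that per-session lock does become a problem, ASP.NET lets you relax it for requests that only read session data, so they can run in parallel with other requests from the same user. A minimal sketch (the controller is hypothetical; the Web Forms directive is shown as a comment):

    // Web Forms: declare the page's session usage as read-only in the page directive:
    //   <%@ Page Language="C#" EnableSessionState="ReadOnly" ... %>

    // ASP.NET MVC: mark a controller as read-only with respect to session.
    using System.Web.Mvc;
    using System.Web.SessionState;

    [SessionState(SessionStateBehavior.ReadOnly)]
    public class ReportsController : Controller
    {
        public ActionResult Index()
        {
            // Reads from Session are fine here; writes should not happen.
            return View();
        }
    }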
Some questions on this topic:
Web app blocked while processing another web app on sharing same session
jQuery Ajax calls to web service seem to be synchronous
ASP.NET Server does not process pages asynchronously
Replacing ASP.Net's session entirely
Note that this session lock does not affect different sites.
Across different sites, having more worker processes can help prevent one site from blocking another with a long-running process.
Also, across different sites, having more pools can help, because each pool has at least one worker process. But remember (and see for yourself using Process Explorer) that each worker process takes up more of your server's memory, and one big server with 16 GB of memory and a SQL Server instance cannot run too many different worker processes; for example, on a server with 100 shared sites, you cannot have 100 different pools.
One IIS server may have multiple application pools.
One web application binds to one application pool.
One application pool may have more than one worker process (when Web Garden is enabled).
One worker process can have multiple app domains. One app domain lives only in one worker process.
One app domain may have multiple threads. One thread can be shared by different app domains at different times.
What this means for ASP.NET developers: to make your web site scalable, don't use in-proc session state and don't rely on locking static class variables for synchronization.
Yes, though not every application is a website. You can have an application that is nested under a website.
Yes, every application has to have one worker process (application pool), though one application pool can serve several applications. A single web application can be distributed (web garden/farm), meaning that it will run in multiple processes.
Each process will run in its own app domain (every application pool is a separate app domain).
From MSDN.
Create a Web Application:
An application is a grouping of content at the root level of a Web site or a grouping of content in a separate folder under the Web site's root directory.
Application Pools:
An application pool defines a group of one or more worker processes, configured with common settings that serve requests to one or more applications that are assigned to that application pool. Because application pools allow a set of Web applications to share one or more similarly configured worker processes, they provide a convenient way to isolate a set of Web applications from other Web applications on the server computer. Process boundaries separate each worker process; therefore, application problems in one application pool do not affect Web sites or applications in other application pools. Application pools significantly increase both the reliability and manageability of your Web infrastructure.
From the source link: http://weblogs.asp.net/owscott/archive/2007/09/02/application-vs-appdomain.aspx
An application is an IIS term, but it's one that ASP.NET utilizes.
Essentially it creates a sandbox, or a set of boundaries to separate
different sites, or parts of sites, from the others.
An AppDomain is a .NET term. (In IIS7, AppDomains play a larger role
within IIS, but for the most part it's an ASP.NET term)
The worker process is used to process the request of the web application.
A test WCF webservice that I have hosted using IIS 7.5 is consistently slow to respond to calls made after a period of inactivity (i.e. the first call of each day).
From researching this topic I gather that the problem of "application warmup" is commonly encountered when using IIS (e.g. see here).
I have taken the usual steps that are recommended to try and mitigate this problem:
Installed the Application Initialization Module.
Disabled the application pool Idle Time-out, and the Regular Recycling Time Interval (i.e. set them to '0').
Edited the applicationhost.config file so that autoStart=True and startMode="alwaysRunning" for the necessary app pool, and preloadEnabled="true" for my application.
With these settings, I expect the application pool to immediately spin up a worker process when IIS is started, and spin up a new worker process when the existing one exits. Also, I expect the application to be loaded within the worker process.
However, for the first call of each day, the logs show the difference in time between the client making a call, and the webservice receiving the call, can be as much as 10 seconds. Subsequent calls are typically handled in well under 2 seconds.
Curiously, the long response time is not reproduced by making a call following an iisreset command. I would expect that such a heavy-handed operation would put the webservice in a similarly "cold" situation, however this does not seem to be the case.
I would like to know:
What could still be causing the delay in the application "warming up"?
What is the difference in the state of the webservice following iisreset and a long period of inactivity?
Should I resort to a "heart beat" solution to regularly ping the service to keep it alive?
Thanks in advance for any tips or insight.
I'll try to help with your questions:
What could still be causing the delay in the application "warming up"?
Warming up an application does not mean warming up its resources. For instance, if you configure auto-start with AppFabric in your WCF application (https://msdn.microsoft.com/en-us/library/ee677260(v=azure.10).aspx), and this application accesses a database using EF, warming up will not initialize your DbContext.
If you want these resources initialized once your application has warmed up, you need to implement a method that initializes them yourself: cache, DbContext, etc.
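One way to run that kind of initialization when the worker process is preloaded (rather than on the first request) is a preload/auto-start provider. A rough sketch, assuming the class is registered as a serviceAutoStartProvider in applicationHost.config and referenced by the application; the class and DbContext names are hypothetical:

    using System.Web.Hosting;

    // Called by IIS when the application is preloaded, before any request arrives.
    public class WarmUp : IProcessHostPreloadClient
    {
        public void Preload(string[] parameters)
        {
            // Touch the expensive resources so the first real request doesn't pay
            // the cost: open a database connection to warm the connection pool,
            // prime caches, force the EF model to build, and so on.
            using (var db = new MyDbContext())  // hypothetical EF context
            {
                db.Database.Connection.Open();
            }
        }
    }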
What is the difference in the state of the webservice following iisreset and a long period of inactivity?
After a long period of inactivity, the application pool has probably shut down, and it is restarted when the next request arrives, much like a recycle.
This link has interesting information about the difference between iisreset and recycling application pools, and it may help answer your question: https://fullsocrates.wordpress.com/2012/07/25/iisreset-vs-recycling-application-pools/
Should I resort to a "heart beat" solution to regularly ping the service to keep it alive?
If you keep accessing your service, it will probably keep its resources initialized in memory, so it can be a good approach.
However, if your application pool is configured to recycle at some time interval, it will still be recycled and the resources you hold in memory will be lost.
If that looks like a problem to you, just turn the feature off by going to IIS -> Application Pools -> Advanced Settings and setting Regular Time Interval = 0.
These are just suggestions; you will need to run some tests and find out which solution works best for you.
How can I host a Windows Service in IIS and keep that service running as if it were running on Windows?
Could I use some feature of a WCF service?
I don't have access to Windows itself, only to IIS. Inside that service I'll create a thread which will process some data at a scheduled time.
In short, you can't.
A more detailed answer is that there are 2 problems:
IIS worker processes are launched only when an HTTP request comes in. This means you can't start your service with the system.
IIS worker processes are recycled (i.e. restarted) under several conditions. For example, a worker process is restarted if no HTTP request comes in for a long time. This means you can't control when your service is shut down, unless you have access to the application pool recycling configuration. Keep in mind that the recycling logic only ensures that all pending HTTP requests complete; it does not wait for background threads to finish (the sketch below shows how to at least get notified of the shutdown).
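If you do run background work inside the worker process anyway, you can at least ask ASP.NET to tell you when the app domain is being torn down, so the work can stop cleanly or persist its state. A minimal sketch using HostingEnvironment.RegisterObject (the worker class is made up); note that this does not prevent recycling:

    using System.Web.Hosting;

    public class BackgroundWorker : IRegisteredObject
    {
        private volatile bool _stopping;

        public BackgroundWorker()
        {
            // Tell ASP.NET about this object so Stop() is called on shutdown.
            HostingEnvironment.RegisterObject(this);
        }

        public void DoWork()
        {
            while (!_stopping)
            {
                // ... process the scheduled data here ...
            }
        }

        public void Stop(bool immediate)
        {
            // Called when the app domain is shutting down (recycle, idle timeout, ...).
            _stopping = true;
            HostingEnvironment.UnregisterObject(this);
        }
    }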
You can come up with a partial solution this way:
Create a WCF service method that checks if your long-running thread is alive and if not, starts it.
Create a very simple Windows service that periodically (say, once every 5 seconds) calls that method. Deploy the service somewhere, e.g. on your own machine. A rough sketch of both pieces follows below.
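A rough sketch of both pieces (all type names, the binding and the endpoint URL are assumptions for illustration):

    using System;
    using System.ServiceModel;
    using System.Threading;

    // 1. A WCF operation that makes sure the long-running thread is alive.
    [ServiceContract]
    public interface IKeepAlive
    {
        [OperationContract]
        void EnsureWorkerRunning();
    }

    public class KeepAliveService : IKeepAlive
    {
        private static Thread _worker;
        private static readonly object Sync = new object();

        public void EnsureWorkerRunning()
        {
            lock (Sync)
            {
                if (_worker == null || !_worker.IsAlive)
                {
                    _worker = new Thread(DoScheduledWork) { IsBackground = true };
                    _worker.Start();
                }
            }
        }

        private static void DoScheduledWork()
        {
            // ... the long-running / scheduled processing goes here ...
        }
    }

    // 2. The caller that runs outside IIS (a small Windows service or console app)
    //    and pings the method every few seconds.
    public static class Pinger
    {
        public static void Main()
        {
            var factory = new ChannelFactory<IKeepAlive>(
                new BasicHttpBinding(),
                new EndpointAddress("http://localhost/MyApp/KeepAlive.svc")); // assumed URL

            while (true)
            {
                var channel = factory.CreateChannel();
                try { channel.EnsureWorkerRunning(); }
                catch (Exception) { /* the app pool may be recycling; try again later */ }
                finally { ((IClientChannel)channel).Abort(); }

                Thread.Sleep(TimeSpan.FromSeconds(5));
            }
        }
    }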
The only question that remains is: do you really need to avoid windows services? Could you find a place to host the service? There are some use cases when a windows service is the best or even the only way.
You can't, in a nutshell.
However, you can make use of the health monitoring API, specifically the heartbeat functionality. See:
http://msdn.microsoft.com/en-us/library/system.web.management.webheartbeatevent.aspx
for details on the class you will need to implement in order to be called when there is work to do.
This answer on SO might also help:
Understanding heartbeat in ASP.NET health monitoring
Once you have implemented a WebHeartbeatEvent-derived class, you can check your DB (or whatever else you need to check) to see if there is work to do.
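A rough sketch of the health-monitoring hook; this version listens for the heartbeat with a custom event provider (deriving a custom event class is another route), the provider name and the work check are made up, and it still needs to be registered under healthMonitoring in web.config and mapped to the heartbeat events before ProcessEvent will ever fire:

    using System.Web.Management;

    public class WorkPollingHeartbeatProvider : WebEventProvider
    {
        public override void ProcessEvent(WebBaseEvent raisedEvent)
        {
            if (raisedEvent is WebHeartbeatEvent)
            {
                // On each heartbeat, check the database (or queue, or folder)
                // for pending work and kick it off.
            }
        }

        public override void Flush() { }

        public override void Shutdown() { }
    }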
A better solution IMHO is to scrap the service entirely and redesign the system to be 100% web based, as services become a deployment and maintenance nightmare, as I assume you are now finding out...
I am not too clear about the IIS lifecycle, but my general understanding is:
Every couple of hours IIS resets itself. This is apparently done to fix up any memory leaks, resource deadlocks, etc.; i.e. it seems to be a cleanup operation.
Every couple more hours (I think I read 23 hours) the server just stops listening to inbound requests and runs Application_End. An external page request will restart the app.
Can I get a bit more reasoning to why these behaviors occur? Especially with regards to item #2... My server runs internal scheduling behaviors which completely died last night. The reason was that Application_End occurs and no customer requests were happening to start the IIS server again. This seems weird. Why not just clean up memory leaks etc. and then keep IIS running exactly as it was? The only reason I can think of is that it lets the server reclaim memory/cpu used by IIS, but that seems nonsensical and the cause of bugs, such as my scheduler issue!
Each website in IIS sits in an application pool, and you have three different sections that influence when an application pool recycles its worker process: Recycling, Performance and Health. When the process recycles, a new worker process (w3wp.exe) is created first to handle any new requests. Any existing requests are completed on the old process before that is then closed. Application_Start and Application_End will run on each process so you can set up and tear down resources appropriately.
The Recycling settings have the most direct impact on when the worker process will recycle, and you can choose to restart after a specific number of minutes running, a number of requests processed, or at a specific time each day. In a web farm, using a specific time can ensure that you never have all the servers in the farm recycle at the same time. You can turn all of these off so your worker process doesn't recycle, but as you stated in your question this leaves the server vulnerable to memory leaks and hanging threads, which will stop IIS serving any requests for websites in that application pool.
The Performance settings can shut down a worker process if it has been idle for a specified number of minutes or if the CPU reaches a specified threshold. You can also increase the number of worker processes for an application pool and create a web garden.
The Health settings monitor worker processes and will shut them down if they fail repeatedly and will check that they start and stop within a specified time.
Technically, IIS is not stopping or resetting. It's the application pool that is being recycled which ensures that the application domain in which your web applications run does not bog down over time due to bugs/inefficiencies in your code, bugs in the framework, etc.
The IIS model is actually very good for the health of a long-running application. Windows Services for example don't get these benefits. If the process crashes, it's done. But because IIS can measure various aspects of your web application like response time, memory consumption, inactivity, etc, it can offer to reset your application under certain circumstances. They're all configurable but you should always strive to develop web applications in such a way that one request does not depend upon a prior request.
You should also not rely on things happening in the web application that are not directly in response to a web request. So if you are starting up a background thread to do some background tasks then I'd recommend moving that out into a separate process (such as a Windows Service or Scheduled Task.) Although if you really don't want to do that, there is an IIS 7 Application Warm-Up Module that will periodically ping your web application in order to start it.
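If you do move the background work out of the web application, a minimal Windows Service with a timer is often all it takes. A rough sketch (the service name, interval, and the work method are placeholders):

    using System.ServiceProcess;
    using System.Timers;

    // A self-contained background worker hosted outside IIS, so it is not
    // affected by application pool recycling or idle shutdown.
    public class SchedulerService : ServiceBase
    {
        private readonly Timer _timer = new Timer(60000); // fire every minute

        public SchedulerService()
        {
            ServiceName = "MySchedulerService";
            _timer.Elapsed += (sender, args) => RunScheduledTasks();
        }

        protected override void OnStart(string[] args) => _timer.Start();

        protected override void OnStop() => _timer.Stop();

        private void RunScheduledTasks()
        {
            // ... the work that used to run on a background thread in the web app ...
        }

        public static void Main() => Run(new SchedulerService());
    }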
If you are using in-process session state and the resets are causing problems, you may want to consider using a SQL-based session state provider.
In any event, you can read more about configuring the IIS 7 application pool recycling behavior here: http://technet.microsoft.com/en-us/library/cc753179(WS.10).aspx
I think the other posters have already answered your main question sufficiently well however I'd like to address the final part of your question.
Why not just clean up memory leaks etc. and then keep IIS running exactly as it was? The only reason I can think of is that it lets the server reclaim memory/cpu used by IIS, but that seems nonsensical and the cause of bugs, such as my scheduler issue!
and
Why do I need to wait for a web page request to start up my pool, rather then having the server automatically running and gung-ho about receiving client web requests?
Let's think about the following scenario and what would happen if IIS behaved in this way. If we have a machine hosting several thousand websites (i.e. your typical shared hosting environment), each website has its own application pool (w3wp.exe) running. Say that IIS started up a worker process for each website regardless of whether a request to that site had been made: you'd have a few thousand processes starting up at once, each idling at say 2MB of RAM. If you've got 2000 websites you've just allocated 4GB of RAM to sit and essentially do nothing, and the OS might start eating into the page file without any real need.
Is this desirable? I think you'd agree that the answer is no.
These behaviors can be controlled by changing the app pool recycle settings of your website. Our production website at work recycles its pool every night at 3am, but our QA environment recycles several times a day.
Question: When a web application gets started, it executes Application_Start in global.asax.
Now, a web application gets started as soon as the first request for a page in that application reaches the server.
But my question is: how long will the application keep running before it is stopped?
I mean the case where, after the first page request, there's no further traffic on the server.
I need to know because I intend to start a server that listens on a tcp port in global.asax.
And when the application stops, the server ceases to listen to its port.
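A rough sketch of what that might look like in Global.asax.cs (the port number is arbitrary); it makes the dependency on application lifetime obvious, since the listener lives and dies with the application:

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;
    using System.Web;

    public class Global : HttpApplication
    {
        private static TcpListener _listener;

        protected void Application_Start(object sender, EventArgs e)
        {
            _listener = new TcpListener(IPAddress.Any, 9000);
            _listener.Start();

            var acceptThread = new Thread(() =>
            {
                try
                {
                    while (true)
                    {
                        using (var client = _listener.AcceptTcpClient())
                        {
                            // ... handle the connection ...
                        }
                    }
                }
                catch (SocketException)
                {
                    // Thrown when the listener is stopped during shutdown.
                }
            }) { IsBackground = true };
            acceptThread.Start();
        }

        protected void Application_End(object sender, EventArgs e)
        {
            // When the application stops (or the pool recycles), the port is released.
            if (_listener != null)
            {
                _listener.Stop();
            }
        }
    }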
It depends on your IIS settings. Your application will run in an application pool, which takes a bunch of settings defining the behaviour of this pool.
The thing you're looking for are recycling settings. In IIS 7, you can access these easily from the management console. Go to Application Pools, right click on the application pool your app runs in (if you don't know which one that is, then it's probably the DefaultAppPool) and select recycling.
Here you'll find the options you have to control the recycling behaviour of your app pool, which in turn controls when your app 'resets'.
In a word (well, two): shared hosting.
On shared hosting (GoDaddy/webhost4life etc.), beware: this timeout could well be lower, and you don't have the option to configure it in those hosting environments. I've had cases where the app pool is recycled after 5 minutes at certain peak times, so you might have to investigate 'wakeup' routines that poke your app to keep it in memory. I do this for a few shared hosting apps to great effect using pingalive.com.
Hope this helps, even if in an abstract way.
Jim