I am wondering whether a static class in an ASP.NET MVC app can be initialized more than once. I initially designed my app so that a static component fetches some data from the database and serves as a cache, and I added a refresh method to the class, which is called from the static constructor. The refresh method is also available through the administration portion of the app. At some point I noticed that the data was being updated without this manual trigger, which implies that the static constructor ran more than once.
There are several scenarios where I could reasonably see this happening, such as an unhandled exception causing re-initialization. But I am having trouble reproducing it, so I would like to know for sure.
The most common scenarios would be:

- a reload of the web application
  - touched Web.config
  - touched binaries
  - abnormal termination (out of memory, permission errors)
- a reload of the Application Pool
- a restart of IIS
- a restart of w3wp.exe (at least once every 29 hours!)
The app domain will get reloaded (recompiling the dynamic parts as necessary), and this will void any statically initialized data.

You can work around that by persisting the static data somewhere if creating it is expensive, or by avoiding reloads of the AppDomain, the Application Pool, or the IIS server.
Update: Phil Haack just published a relevant blog entry here: http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx
Bye Bye App Domain
it does a better job of explaining the above. Notably, IIS will recycle its worker process every 29 hours at a minimum, and shared hosters will recycle the AppDomain much more often (perhaps after 20 minutes of idle time).
So tell ASP.NET, “Hey, I’m working here!”
outlines techniques you can apply to get notified of an AppDomain teardown; you could use this to make your singleton instance behave 'correctly'.
Recommendation
I suggest you read it :)
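The "I'm working here" technique the post outlines is based on `System.Web.Hosting.IRegisteredObject`. A minimal sketch (the class name and the `Persist` method are illustrative, not from the post):

```csharp
using System.Web.Hosting;

// Hypothetical cache host; Persist() stands in for whatever you do
// to save expensive state before the AppDomain goes away.
public class CacheHost : IRegisteredObject
{
    public CacheHost()
    {
        // Ask ASP.NET to notify this object before AppDomain teardown.
        HostingEnvironment.RegisterObject(this);
    }

    public void Stop(bool immediate)
    {
        // Called when the AppDomain is shutting down: flush cached data
        // somewhere durable so the next AppDomain can reload it cheaply.
        Persist();
        HostingEnvironment.UnregisterObject(this);
    }

    private void Persist()
    {
        // e.g. serialize the cache to disk or a database
    }
}
```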
Static classes are initialized once per AppDomain.
If IIS recycles your AppDomain, everything will re-initialize.
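A minimal illustration of this (type name hypothetical): a static constructor runs at most once per AppDomain, so a timestamp set there changes only when a recycle has happened.

```csharp
using System;

public static class LookupCache
{
    public static readonly DateTime LoadedAtUtc;

    static LookupCache()
    {
        // Runs at most once per AppDomain, on first use of the type.
        // After a recycle, the new AppDomain runs it again.
        LoadedAtUtc = DateTime.UtcNow;
    }
}
```

Logging `LookupCache.LoadedAtUtc` on each request is a cheap way to confirm whether a recycle, rather than a manual refresh, updated the data.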
Related
In order to speed up the response time of our ASP.NET MVC application, we'd like to "warm up" the application after it has been installed (or after the app pool has been recycled). Some frequently used lookup data should be fetched from the SQL Server database and stored in the global System.Runtime.Caching.MemoryCache object that .NET provides.
In a situation where you have a dedicated VM with a dedicated IIS instance for your ASP.NET app, step #1 is to set the app pool to "Always Running".
Given this situation, there are two options I see:
Application warmup as described in this blog post by Scott Gu, based on the System.Web.Hosting.IProcessHostPreloadClient interface. If I understand correctly, this code runs when the app pool is started, before the first request is accepted into the application.
Use the Application_Start event in global.asax.cs. If I understand correctly, this event is called only once, when the application is started for the first time (which would happen automatically after installation, since the app pool is set to be "Always Running" - right?)
So - given this setup - which is the preferred way of "pre-warming" your application? Are there any significant differences between these two approaches? What do I need to be aware of when going with one approach over the other?
Thanks for any inputs, hints, warnings, or further links explaining this in more detail!
The short answer: use IProcessHostPreloadClient -- it will run immediately on startup.
Application_Start is a bit of a misnomer; it actually fires on the first request. That means the site might recycle/restart and then sit idle when it could be warming up.
If your site is on IIS 7 or above, I'm not aware of a reason to use Application_Start.
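A sketch of the preload hookup, assuming your warm-up code lives in a hypothetical WarmUpCaches() method:

```csharp
using System.Web.Hosting;

public class ApplicationPreload : IProcessHostPreloadClient
{
    // Called by IIS when the app pool starts (with "Always Running" set and
    // the serviceAutoStartProvider registered in applicationHost.config),
    // before the first request is accepted.
    public void Preload(string[] parameters)
    {
        WarmUpCaches();
    }

    private void WarmUpCaches()
    {
        // fetch lookup data from SQL Server into MemoryCache here
    }
}
```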
I am working on an ASP.NET MVC web application (C#) that is a module of a larger application as a whole (mostly desktop/Windows-service based - VB.NET). Currently the application makes HTTP calls to a web service (provided as an API) which is its own independent application (also MVC, VB.NET). Possibly not how I would design it, but it is what I have inherited.
My issue is this: If I host the MVC app in local IIS and run the API project in IIS Express, all is good. If I split the two projects up to run in separate Application Pools in local IIS, all is good. However, if I run both apps out of the same pool in IIS, I run into a lot of issues. Namely, timeouts when I make the call to HttpClient.GetAsync(url) - especially on a page that is calling this 9 times to dynamically retrieve different images based on ID (each call is made to MVC app which then makes the call to the API). Some calls make it through, most do not.
The exceptions relate to cancelled tasks (timeout = 100s), but the actions take a fraction of a second, so there should be no timeout. When it fails, execution never even makes it into the functions on the API side; it's as if the HTTP client has given up offering any more connections, or the task is waiting for HTTP to send the request and it never does.
I have tried making it Async all the way through, tried making the HttpClient static, etc. But no joy. Is this simply something that just shouldn't be done - allowing the two apps to share an app pool? If so, I can live with that. But if there is something I can do to handle it more efficiently, that'd be very useful. Any information/resources on this would be very much appreciated. Ta!
We recently ran into the same issue; it took quite a bit of debugging to work out that using the same application pool was the cause of the problem.
I also found this could work if you increase the number of Maximum Worker Processes in the advanced settings for the application pool.
I'm not sure why this is, but I'm guessing all of the requests are being handled by one process, causing a backlog; ultimately there are too many of them, resulting in timeouts. (I'd be happy for someone more knowledgeable about IIS to correct me, though.)
HTH
Edit: On further reading here, Carmelo Pulvirenti's Blog, it seems as though the garbage collector could be to blame. The blog states that multiple applications running in the same pool share memory; the side effects are:
It means that GC runs many times per second in order to provide clean memory to your app. What is the side effect? In server mode, the garbage collector requires stopping all thread activity to clean up the memory (this was improved in .NET Fx 4.5, where the collection does not require stopping all threads). What is the side effect?

- Slow performance of your web application
- High impact on CPU time due to the GC's activity
I have a web api server which basically responds to 2 requests: stop and start.
On start it does several things, for example initializing timers that run a method every X seconds; on stop it just stops the timers.
In order to achieve that, I created a singleton class which handles the logic of the operations. (It needs to be a singleton so that the timers and other variables exist only once across all requests.)
The server runs fine, but recently I got an AccessViolationException while accessing GlobalConfiguration.AppSettings to retrieve a value from my web.config file.
I found out by looking at my logs that the singleton class's finalizer was called, even though I didn't restart the server.
The finalizer calls a method which I regularly use and works fine in other scenarios, and this method uses the GlobalConfiguration class which threw the exception.
I tried to find the cause for this without success.
So basically there are two bugs here:
1. Why was the finalizer called out of the blue? The server could run for a week.
2. The AccessViolationException.
Perhaps the bugs are related? If my application's memory was somehow cleaned up, would that cause the finalizer to be called and an exception when accessing GlobalConfiguration? Just a theory...
Or perhaps I don't handle the singleton properly? But after reading about static variables in C# web servers, I see that they should exist as long as the application exists, so the bug might not be related to how I handle the singleton. I do handle it OK: the singleton has a private static field which holds the actual instance, and initialization occurs via locking and double-checking to prevent multiple threads from creating the singleton.
Is my approach OK? Do you see any possible bug that I didn't anticipate in this behavior, or do you know of any behavior of the .NET Framework that could cause my static singleton to be destroyed?
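For reference, the double-checked locking described above usually looks like this (class name hypothetical); note the `volatile` field, which the classic pattern relies on:

```csharp
public sealed class OperationsManager
{
    private static volatile OperationsManager instance;
    private static readonly object SyncRoot = new object();

    private OperationsManager() { }

    public static OperationsManager Instance
    {
        get
        {
            if (instance == null)             // first check, lock-free
            {
                lock (SyncRoot)
                {
                    if (instance == null)     // second check, under the lock
                        instance = new OperationsManager();
                }
            }
            return instance;
        }
    }
}
```

A simpler alternative with the same guarantees is a `static readonly Lazy<OperationsManager>` field. Note that neither protects against the AppDomain itself being unloaded.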
For your first question, as to why the finalizer was called: the obvious explanation is that the object didn't have any active GC roots, which caused the GC to place it on the finalization queue. As to why the GC didn't find an active root to your object, there are two possibilities:
You have a bug in your code and are not holding onto a reference to the singleton
Your IIS application pool is being recycled (I'm assuming you're hosting the app in IIS)
In its default configuration, IIS automatically restarts each application pool every 1740 minutes (29 hours), which in turn causes the current app domain for your application to be unloaded. In .NET, static variables exist per app domain, which means that when the app domain is unloaded there are no longer any active GC roots to the singleton, which in turn means that the singleton is eligible for garbage collection and, by extension, finalization.
With regards to the AccessViolationException you're getting, you need to remember that when your finalizer executes, everything you know is wrong. By the time your finalizer runs, the GlobalConfiguration object may itself have been GC'd/finalized, and whatever handle it had to the web.config may have been destroyed.
It's also important to note that by default when IIS recycles your application pool it waits for the next HTTP request before it actually recreates the application pool. This is good from a resource utilization perspective as it means that if your application is not getting any requests it will not be loaded into memory, however in your case it also means that none of your timers will exist until your app receives an HTTP request.
In terms of solving this problem you have a couple of options:
Disable IIS's automatic restart. This solves your immediate issue however it raises the question of what happens if the server gets restarted for some other reason. Does your application have some way of persisting state (e.g. in a database) so that if it was previously started it will continue when it comes online?
If so, you would also want to enable auto-start and pre-load for your application so that IIS automatically loads your application without an external HTTP request needing to be made. See this blog post for more details.
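The auto-start wiring from that post lives in applicationHost.config. A sketch with hypothetical names (MyAppPool, MySite, and MyApp.ApplicationPreload, which would be your IProcessHostPreloadClient implementation):

```xml
<applicationPools>
  <add name="MyAppPool" startMode="AlwaysRunning" />
</applicationPools>

<sites>
  <site name="MySite">
    <application path="/"
                 serviceAutoStartEnabled="true"
                 serviceAutoStartProvider="ApplicationPreload" />
  </site>
</sites>

<serviceAutoStartProviders>
  <add name="ApplicationPreload"
       type="MyApp.ApplicationPreload, MyApp" />
</serviceAutoStartProviders>
```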
Host outside of IIS. IIS is primarily designed for hosting websites rather than background services, so rather than fighting against it you could simply switch to a Windows service to host your application. Using Owin you can even host your existing Web API within the service, so you should only need to make minimal code changes.
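A minimal self-host sketch, assuming the Microsoft.AspNet.WebApi.OwinSelfHost package (the URL and port are illustrative); in a real Windows service you would start this in OnStart and dispose it in OnStop:

```csharp
using System;
using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Reuse your existing Web API routes/controllers.
        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes();
        app.UseWebApi(config);
    }
}

public static class Program
{
    public static void Main()
    {
        using (WebApp.Start<Startup>("http://localhost:9000/"))
        {
            Console.WriteLine("Listening on http://localhost:9000/");
            Console.ReadLine(); // keep the host alive
        }
    }
}
```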
I'm trying to determine the cause of a very long (imho) initial start up of an ASP.NET application.
The application uses various third-party libraries and lots of references that I'm sure could be consolidated; however, I'm trying to identify (and apportion blame to) the DLLs and how much each contributes to the extended startup process.
So far, the start up times vary from 2-5 minutes depending on usage of other things on the box. This is unacceptable in my opinion based on the complexity of the site, and I need to reduce this to something in the region of 30 seconds maximum.
To be clear on the scope of the performance I'm looking for, it's the time from first request to the initial Application_Start method being hit.
So where would I start with getting information on which DLLs are loaded and how long they take to load, so I can put together a cost/benefit analysis of which ones to tackle/consolidate?
From an ability perspective, I've been using JetBrains dotTrace for a while, and I'm clear on how to benchmark the application once we're in the application code, but this appears to happen outside of the application code, and is therefore outside of what I currently know.
What I'm looking for is methodologies on how to get visibility of what is happening before the first entry point into my code.
Note: I know that I can call the default page on recycle/upgrade to do an initial load, but I'd rather solve the actual problem rather than papering over it.
Note2: the hardware is more than sufficiently scaled and separated in terms of functionality, therefore I'm fairly sure that this isn't the issue.
Separate answer on profiling/debugging start up code:
w3wp is just a process that runs .NET code, so you can use all the profiling and debugging tools you would use for a normal .NET application.
One tricky point is that the w3wp process starts automatically on the first request to an application, and if your tools do not support attaching to a process whenever it starts, it becomes problematic to investigate your application's startup code.
The trick to solve this is to add another application to the same Application Pool. This way you can trigger w3wp creation by navigating to the other application, then attach/configure your tools against the already-running process. When you finally trigger your original application, the tools will see the loading happen in the existing w3wp process.
With a 2-5 minute delay you may not even need a profiler: simply attach the Visual Studio debugger as suggested above and randomly trigger "Break All" several times during the load of your site. There is a good chance that the slowest portion of the code will be on the stack of one of the many threads. Also watch the debug output; it may give you some clues about what is going on.
You may also use WinDbg to capture the stacks of all threads in a similar way (it could be more lightweight than VS).
Your DLL references are loaded as needed, not all at once.
Do external references slow down my ASP.NET application? (VS: Add Reference dialog)
If startup is taking 2-5 minutes, I would look at what happens in Application_Start, and at what the DLLs do once loaded. Are they trying to connect to a remote service that is very slow? Is the machine far too small for what it's doing (e.g. running a DB with large amounts of data plus the web server on an AWS micro instance or similar)?
Since the load time is probably not the IIS worker process resolving references, I would turn to traditional application profilers (e.g. JetBrains dotTrace, Redgate ANTS) to see where the time is being spent as the DLLs initialize, and in your Application_Start method.
Other things to check along with profiling:

- profile everything; add time tracing to everything and log the information
- if you have many ASPX views that need to be compiled on startup (I think this is the default for the release configuration), then it will take some time
- references to web services or other XML-serialization-related code will need serialization assemblies to be compiled if none are present yet
- access to remote services (including local SQL) may require those services to start up too
- aggressive caching in the application/remote services may require pre-population of the caches
Production:
- What is the goal for startup time? Figure that out first; otherwise you will not be able to reach it.
- What price are you willing to pay to decrease startup time? Adding 1-10 more servers may be cheaper than spending months of development/test time and delaying the product.
- Consider multiple servers, rolling restarts with warm-up calls, and web gardens.
- If caching of DB objects (or caching in general) is an issue, consider existing distributed in-memory caches...
Despite the large number of DLLs, I'm almost sure that for a reasonable application they cannot be the cause of the problem. Most of the time it is static object initialization that causes a slow startup.

In C#, static variables are initialized the first time a type is accessed. I would recommend using a SQL profiler to see which queries are performed during the application's startup, and from there work out which objects are expensive to initialize.
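A tiny example of the kind of initializer to look for (names hypothetical): the query hides in a static field initializer, so the cost lands on whichever request touches the type first rather than at application start.

```csharp
using System.Collections.Generic;

public static class PriceTable
{
    // Runs on first access to PriceTable, not at application start;
    // an expensive query here stalls the first request that touches it.
    private static readonly Dictionary<string, decimal> Prices = LoadPrices();

    private static Dictionary<string, decimal> LoadPrices()
    {
        // imagine a slow database query here
        return new Dictionary<string, decimal>();
    }
}
```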
At the moment I am working on a project admin application in C# 3.5 on ASP.NET. In order to reduce hits to the database, I'm caching a lot of information using static variables. For example, a list of users is kept in memory in a static class. The class reads in all the information from the database on startup and updates the database whenever changes are made, but it never needs to read from the database again.
The class pings other webservers (if they exist) with updated information at the same time as a write to the database. The pinging mechanism is a Windows service to which the cache object registers using a random available port. It is used for other things as well.
The amount of data isn't all that great. At the moment I'm using it just to cache the users (password hashes, permissions, name, email etc.) It just saves a pile of calls being made to the database.
I was wondering if there are any pitfalls to this method and/or if there are better ways to cache the data?
A pitfall: a static field is scoped per app domain, and increased load will make the server generate more app domains in the pool. This is not necessarily a problem if you only read from the statics, but you will get duplicate data in memory, and you will take a hit every time an app domain is created or recycled.
Better to use the Cache object - it's intended for things like this.
Edit: Turns out I was wrong about AppDomains (as pointed out in comments) - more instances of the Application will be generated under load, but they will all run in the same AppDomain. (But you should still use the Cache object!)
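A minimal sketch of the Cache-based approach (the key name and the DB call are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static class UserCache
{
    public static List<string> GetUserNames()
    {
        var users = (List<string>)HttpRuntime.Cache["userNames"];
        if (users == null)
        {
            users = LoadUserNamesFromDb();      // hypothetical DB call
            HttpRuntime.Cache.Insert(
                "userNames", users,
                null,                            // no cache dependency
                DateTime.UtcNow.AddMinutes(20),  // absolute expiration
                Cache.NoSlidingExpiration);
        }
        return users;
    }

    private static List<string> LoadUserNamesFromDb()
    {
        return new List<string>();
    }
}
```

Unlike a static field, the cache entry can expire and be repopulated, and ASP.NET can evict it under memory pressure.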
As long as you can expect that the cache will never grow to a size greater than the amount of available memory, it's fine. Also, be sure that there will only be one instance of this application per database, or the caches in the different instances of the app could "fall out of sync."
Where I work, we have a homegrown O/RM, and we do something similar to what you're doing with certain tables which are not expected to grow or change much. So, what you're doing is not unprecedented, and in fact in our system, is tried and true.
Another pitfall you must consider is thread safety. All of your application's requests run in the same AppDomain but may arrive on different threads, so access to a static variable must account for multiple threads touching it at once. That's probably a bit more overhead than you are looking for; the Cache object is better for this purpose.
Hmmm... The "classic" method would be the application cache, but provided you never update the static variables (or you understand the locking issues if you do), and you understand that they can disappear at any time with an AppDomain restart, I don't really see the harm in using a static.
I suggest you look into ways of having a distributed cache for your app. You can take a look at NCache or indeXus.Net
The reason I suggested that is because you rolled your own ad-hoc way of updating the information you're caching. Static variables/references are fine, but they don't update/refresh (so you'll have to handle aging on your own), and you seem to have a distributed setup.