First off, we have written a library that is used by a Windows Service and an ASP.NET page running in IIS 7.
The library needs to load other libraries as plugins, but some of the external libraries have the same name (with different internal versions). To resolve this binary namespace conflict, we create an AppDomain for each plugin. The library has a Manager object which references a pool of static connections; the AppDomains live and are destroyed inside this pool of static SharedConnections. When the last Manager object is removed, it invokes the cleanup of the SharedConnections, and this cleanup releases the AppDomains we created.
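Roughly, the pattern looks like this (a minimal sketch with hypothetical names, not our actual code):

using System;
using System.Collections.Generic;

public static class SharedConnections
{
    private static readonly object _sync = new object();
    private static readonly Dictionary<string, AppDomain> _pluginDomains =
        new Dictionary<string, AppDomain>();

    // One AppDomain per plugin, so same-named assemblies with different
    // internal versions do not collide.
    public static AppDomain GetOrCreateDomain(string pluginName)
    {
        lock (_sync)
        {
            AppDomain domain;
            if (!_pluginDomains.TryGetValue(pluginName, out domain))
            {
                domain = AppDomain.CreateDomain("Plugin_" + pluginName);
                _pluginDomains[pluginName] = domain;
            }
            return domain;
        }
    }

    // Invoked when the last Manager object is released.
    public static void Cleanup()
    {
        lock (_sync)
        {
            foreach (var domain in _pluginDomains.Values)
                AppDomain.Unload(domain);
            _pluginDomains.Clear();
        }
    }
}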
The Service that relies on our library handles this beautifully. At the beginning of its lifetime it creates the AppDomains and at the conclusion it removes them during cleanup.
The website that relies on our library handles this poorly. At the beginning of its lifetime it creates the AppDomains; however, when IIS decides to unload us after a period of inactivity, the Manager objects are deleted, which invokes the cleanup of the SharedConnection objects as expected, which in turn kills off the AppDomains.
There are two problems:
a) We use lock() around the connection and AppDomain releases so they aren't released twice, which would throw errors. Except, for some reason, on rare occasions the thread that enters the lock and kills the AppDomain ceases to exist and never leaves the lock, causing a deadlock. The only way we can resolve this is to stop the AppPool in IIS and restart it 30-60 seconds later. This does not happen with the Windows Service.
b) When we don't hit the above scenario (which is rare), we occasionally have AppDomain release issues instead; these throw exceptions that crash and restart the website, which is okay-ish.
One other thing I have discovered via debugging: IIS places the website in its own AppDomain, which means we are a child AppDomain making more child AppDomains.
What are we doing wrong? Is there an IIS configuration that might help?
Use Application Request Routing
To solve this exact problem, the IIS team created ARR, which routes requests to a specific version of an application based on a URL, cookie, or header parameter, all easily configurable. ARR works as an HTTP proxy server that does simple routing.
Here is an example:
http://blogs.msdn.com/b/carlosag/archive/2010/04/02/setting-up-a-reverse-proxy-using-iis-url-rewrite-and-arr.aspx
IIS will do its job of recycling application pools and managing domains for you; you don't have to do any of that.
You might be able to avoid problem a) by putting your SharedConnection and AppDomain cleanup code in the destructor of your Manager object. The destructor will be invoked exactly once by the garbage collector after the Manager is disposed and no longer referenced, or when the AppDomain containing the Manager is unloaded. That should eliminate the risk of having your cleanup thread aborted by IIS. (This may count as an abuse of the destructor functionality, but I'm not sure what the negative consequences, if any, might be.)
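Something along these lines (a minimal sketch of the suggestion, with hypothetical names):

public class Manager
{
    private bool _cleanedUp;

    // Runs at most once per instance: either when the GC collects an
    // unreferenced Manager, or while the AppDomain hosting it is unloaded.
    ~Manager()
    {
        Cleanup();
    }

    private void Cleanup()
    {
        if (_cleanedUp) return;
        _cleanedUp = true;
        // Release the SharedConnections and unload the child AppDomains here.
        // Caveat: other managed objects may already have been finalized at
        // this point, so touch as little shared state as possible.
    }
}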
I have a Web API server which basically responds to two requests: stop and start.
On start it does several things, for example, initializing timers that perform a method every X seconds, and on stop it just stops the timers.
In order to achieve that, I created a singleton class which handles the logic of the operations. (It needs to be a singleton so that the timers and other variables exist only once across all requests.)
The server runs fine, but recently I got an AccessViolationException while accessing GlobalConfiguration.AppSettings in order to retrieve a value from my web.config file.
I found out by looking at my logs that the singleton class finalizer was called, even though I didn't restart the server.
The finalizer calls a method which I regularly use and works fine in other scenarios, and this method uses the GlobalConfiguration class which threw the exception.
I tried to find the cause for this without success.
So basically there are two bugs here:
1. Why was the finalizer called out of the blue? The server could run for a week.
2. The AccessViolationException.
Perhaps the bugs are related? If my application's memory was somehow cleaned up, would that cause the finalizer to be called and an exception when accessing the GlobalConfiguration? Just a theory...
Or perhaps I don't handle the singleton properly? But after reading about static variables in C# web servers, I see that they should exist for as long as the application exists, so the bug might not be related to how the singleton is handled. I do handle it OK - the singleton has a private static field which holds the actual instance, and initialization occurs via locking and double-checking to prevent multiple threads creating the singleton.
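For reference, the pattern I use looks essentially like this (class name is a placeholder):

public sealed class OperationsManager
{
    private static volatile OperationsManager _instance;
    private static readonly object _lock = new object();

    private OperationsManager() { }

    public static OperationsManager Instance
    {
        get
        {
            if (_instance == null)          // first check, without the lock
            {
                lock (_lock)
                {
                    if (_instance == null)  // second check, under the lock
                        _instance = new OperationsManager();
                }
            }
            return _instance;
        }
    }
}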
Is my approach OK? Do you see any possible bug that I didn't expect in this behavior, or do you know of any behavior of the .NET Framework that could cause my static singleton to be destroyed?
For your first question as to why the finalizer was called: the obvious explanation is that the object no longer had any active GC roots, which caused the GC to place it on the finalization queue. As to why the GC didn't find an active root to your object, there are two possibilities:
You have a bug in your code and are not holding onto a reference to the singleton
Your IIS application pool is being recycled (I'm assuming you're hosting the app in IIS)
In its default configuration, IIS automatically restarts each application pool every 1740 minutes (29 hours), which in turn causes the current app domain for your application to be unloaded. In .NET, static variables exist per app domain, which means that when the app domain is unloaded there are no longer any active GC roots to the singleton, which in turn means that the singleton is eligible for garbage collection and, by extension, finalization.
With regards to the AccessViolationException you're getting, you need to remember that when your finalizer is executed, everything you know is wrong. By the time your finalizer runs, the GlobalConfiguration object may itself have been GC'd/finalized, and whatever handle it had to the web.config may have been destroyed.
It's also important to note that by default when IIS recycles your application pool it waits for the next HTTP request before it actually recreates the application pool. This is good from a resource utilization perspective as it means that if your application is not getting any requests it will not be loaded into memory, however in your case it also means that none of your timers will exist until your app receives an HTTP request.
In terms of solving this problem you have a couple of options:
Disable IIS's automatic restart. This solves your immediate issue however it raises the question of what happens if the server gets restarted for some other reason. Does your application have some way of persisting state (e.g. in a database) so that if it was previously started it will continue when it comes online?
If so, you would also want to enable auto-start and pre-load for your application so that IIS automatically loads your application without an external HTTP request needing to be made. See this blog post for more details.
Host outside of IIS. IIS is primarily designed for hosting websites rather than background services, so rather than fighting against it you could simply switch to using a Windows service to host your application. Using OWIN you can even host your existing Web API within the service, so you should only need to make minimal code changes; a sketch follows.
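For example, a minimal OWIN self-host sketch (assuming the Microsoft.AspNet.WebApi.OwinSelfHost NuGet package; the URL and names are illustrative, and in a real Windows service the start/dispose calls would live in OnStart/OnStop):

using System;
using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Reuse your existing Web API configuration and controllers.
        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes();
        app.UseWebApi(config);
    }
}

public static class Program
{
    public static void Main()
    {
        using (WebApp.Start<Startup>("http://localhost:9000/"))
        {
            Console.WriteLine("Web API self-host running; press Enter to stop.");
            Console.ReadLine();
        }
    }
}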
I have a RESTful WCF service hosted on IIS 7.5. When a certain operation is called, it returns almost immediately but starts a complex task, dealing with combinatorics and opening big files in memory. After several requests, about 50% of memory is in use by the application pool, although the tasks have completed. When does the IIS pool reclaim memory? I tried calling GC.Collect(), but nothing happened. Is there any way to profile applications like this one? I tried several profilers, but they only show the .NET classes which IIS uses to process the request itself.
Long-running tasks don't typically suit web applications, as they time out or hang the responsiveness of the website/API.
Is it possible to configure a background task to run independently of the IIS site? Then you could push these slow tasks into a queue and process them in the background.
I think the memory usage on the process is an issue but doesn't tell the whole story. What have you managed to profile so far? Do you have unclosed connections lingering? Are you creating instances of multiple classes that are not being disposed of effectively? I would look to profile the call execution plan more than the memory usage, as it may give you more clues as to where items are being left behind.
When you say 50% of memory, how much are we actually talking about in MB? IIS can be a little greedy/lazy when it doesn't need to give up RAM.
The worker process itself will not release memory to the operating system on its own. You can set the process to recycle on a schedule - this restarts the process releasing memory without interfering with running requests.
You probably should not do that though - basically .NET is holding on to the memory to avoid having to reallocate it for later requests. The memory is available for reuse within the WCF process, and if the memory is not used, the OS will page it out and allow it to be reused when other processes need it. See the answer to "When is memory, allocated by .NET process, released back to Windows" for more details.
I had almost the same issue, and I solved it by using Castle.Windsor as an IoC container, adding the svc client class to the container with transient scope, and finally decorating the svc class with:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
All other dependent bindings were added with a transient lifestyle, making them dependent on their instantiator. I'm not sure this will help in your situation, given that you work with large files, but if everything else fails, try implementing IDisposable on your most memory-hungry classes and check that Dispose is called when it should be.
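A sketch of the registration (the type names are placeholders standing in for your actual service and its dependencies):

using System.ServiceModel;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface IRepository { }
public class Repository : IRepository { }

[ServiceContract]
public interface IPriceService
{
    [OperationContract]
    decimal GetPrice(int itemId);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class PriceService : IPriceService
{
    private readonly IRepository _repository;
    public PriceService(IRepository repository) { _repository = repository; }
    public decimal GetPrice(int itemId) { return 0m; }
}

public static class ContainerBootstrapper
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();
        container.Register(
            // A fresh service instance per WCF call...
            Component.For<PriceService>().LifestyleTransient(),
            // ...and transient dependencies that live and die with it.
            Component.For<IRepository>()
                     .ImplementedBy<Repository>()
                     .LifestyleTransient());
        return container;
    }
}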
Hope that helps!
I have code implementing the GoF Proxy pattern in C#. The code has a MathProxy for calculating arithmetic functions.
The left-side example is one implementation, and the right side is the better one for C# (.NET), using AppDomain.
What benefits can I expect using AppDomain especially with Proxy Pattern?
public MathProxy()
{
// Create Math instance in a different AppDomain
var ad = AppDomain.CreateDomain("MathDomain", null, null);
var o = ad.CreateInstance(
"DoFactory.GangOfFour.Proxy.NETOptimized",
"DoFactory.GangOfFour.Proxy.NETOptimized.Math");
_math = (Math)o.Unwrap();
}
Any given Windows process that hosts the CLR can have one or more application domains defined that contain the executable code, data, metadata structures, and resources. In addition to the protection guarantees built in by the process, an application domain further introduces the following guarantees:
Faulty code within an application domain cannot adversely affect code running in a different application domain within the same process.
Code running within an application domain cannot directly access resources in a different application domain.
Code-specific configurations can be configured on a per-application-domain basis. For example, you can configure security-specific settings per application domain.
An AppDomain provides an isolation boundary in the CLR, just as a process provides an isolation boundary at the operating system level.
Difference between AppDomain and Process:
Process:
When a user starts an application, memory and a whole host of resources are allocated for the application. The physical separation of this memory and resources is called a process. An application may launch more than one process. It's important to note that applications and processes are not the same thing at all.
AppDomain:
Microsoft also introduced an extra layer of abstraction/isolation called an AppDomain. The AppDomain is not a physical isolation, but rather a logical isolation within the process. Since more than one AppDomain can exist in a process, we get some benefits. For example, before AppDomains existed, processes that needed to access each other's data had to use a proxy, which introduced extra code and overhead. By using an AppDomain, it is possible to launch several applications within the same process. The same sort of isolation that exists between processes is also available for AppDomains. Threads can execute across application domains without the overhead of inter-process communication; this is all encapsulated within the AppDomain class. Any time an assembly is loaded in an application, it is loaded into an AppDomain. The AppDomain used will be the same as that of the calling code unless otherwise specified. An AppDomain may or may not contain threads, which is different from processes.
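As a small illustration of a thread crossing domains without inter-process communication, here is a hedged sketch using AppDomain.DoCallBack (the callback is a static method so the delegate has no instance target to marshal across):

using System;

public static class CrossDomainDemo
{
    public static void Run()
    {
        var domain = AppDomain.CreateDomain("Isolated");

        // The current thread executes PrintDomainName inside the other domain.
        domain.DoCallBack(PrintDomainName);

        AppDomain.Unload(domain);
    }

    private static void PrintDomainName()
    {
        Console.WriteLine(AppDomain.CurrentDomain.FriendlyName); // "Isolated"
    }
}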
Why you should use AppDomains: Read Post
A good use-case scenario for AppDomains:
"NUnit was written by .NET Framework experts. If you look at the NUnit source, you see that they knew how to dynamically create AppDomains and load assemblies into these domains. Why is a dynamic AppDomain important? What the dynamic AppDomain lets NUnit do is to leave NUnit open, while permitting you to compile, test, modify, recompile, and retest code without ever shutting down. You can do this because NUnit shadow copies your assemblies, loads them into a dynamic domain, and uses a file watcher to see if you change them. If you do change your assemblies, then NUnit dumps the dynamic AppDomain, recopies the files, creates a new AppDomain, and is ready to go again."
All of this info is borrowed from Sacha Barber's article.
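For a flavour of the NUnit technique described above, here is a hedged sketch of creating a disposable AppDomain with shadow copying enabled (the path and names are illustrative):

using System;

public static class DynamicDomainLoader
{
    public static AppDomain CreateShadowCopyDomain(string applicationBase)
    {
        var setup = new AppDomainSetup
        {
            ApplicationBase = applicationBase,
            // Shadow copying loads a copy of each assembly, leaving the
            // original file unlocked so it can be recompiled.
            ShadowCopyFiles = "true" // the property really is a string
        };
        return AppDomain.CreateDomain("DynamicTestDomain", null, setup);
    }
}

// Usage: load and run assemblies in the domain; when a file watcher reports
// a change, call AppDomain.Unload(domain) and create a fresh one.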
I am wondering if a static class in an ASP.NET MVC app could be initialized more than once. I initially designed my app so that a static component would fetch some stuff from the database and serve as a cache, and I added a refresh method to the class, which was called from the constructor. The refresh method was also made available through the administration portion of the app. At some point I noticed that the data was updated without this manual trigger, which implies that the static constructor ran more than once.
There are several scenarios where I could reasonably see this happen, such as an unhandled Exception causing re-initialization. But I am having trouble reproducing this, so I would like to know for sure.
The most usual scenarios would be:
a reload of the web application
touched Web.config
touched binaries
abnormal termination (out-of-memory, permission errors)
a reload of the Application Pool
a restart of IIS
a restart of w3wp.exe (at least once in 29 hours!)
The app-domain will get reloaded (recompiling the dynamic parts as necessary) and this would void any statically initialized data.
You can get around that by persisting the static data somewhere if creating it is expensive, or by avoiding reloads of the AppDomain, the Application Pool, or the IIS server.
Update: Phil Haack just published a relevant blog entry here: http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx
Bye Bye App Domain
It does a better job of explaining the above. Notably, IIS will recycle its worker process every 29 hours at a minimum, and shared hosters will recycle the AppDomain much more often (perhaps after 20 minutes of idle time).
So tell ASP.NET, “Hey, I’m working here!”
outlines techniques you can apply to get notified of an AppDomain teardown - you could use this to get your singleton instance behaving 'correctly'.
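The core of that technique is System.Web.Hosting.IRegisteredObject; a minimal sketch (class name illustrative):

using System.Web.Hosting;

public class JobHost : IRegisteredObject
{
    public JobHost()
    {
        // Tell ASP.NET this object must be notified before the domain goes down.
        HostingEnvironment.RegisterObject(this);
    }

    public void Stop(bool immediate)
    {
        // Called on AppDomain shutdown: first with immediate == false, then
        // again with immediate == true. Finish or persist work, then let go.
        HostingEnvironment.UnregisterObject(this);
    }
}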
Recommendation
I suggest you read it :)
Static classes are initialized once per AppDomain.
If IIS recycles your AppDomain, everything will re-initialize.
Here is the situation that we're in.
We are distributing our assemblies (DLLs only) to our clients (we don't have control over their environment).
They call us, passing a list of item IDs, and we search through our huge database and return the items with the highest price. Since we have an SLA of 30 milliseconds to meet, we cache our items in memory (using Microsoft's MemoryCache). We are caching about a million items.
The problem here is that the cache only lives as long as our client's application. When the process exits, so do all the cached items.
Is there a way I can make my MemoryCache live longer, so that a subsequent process can reuse the cached items?
I have considered having a Windows service and letting all these different processes communicate with the one instance on the same box, but that's going to create a huge mess when it comes to deployment.
We are using AppFabric as our distributed cache, but the only way we can achieve our SLA is to use MemoryCache.
Any help would be greatly appreciated. Thank you
I don't see a way to make sure that your AppDomain lives longer - since all the calling assembly has to do is unload the AppDomain...
One option could be - although messy too - to implement some sort of "persistent MemoryCache"... to achieve performance you could/would use a ConcurrentDictionary persisted in a MemoryMappedFile...
Another option would be to use a local database - it could even be SQLite - and implement the cache interface in memory such that all writes/updates/deletes are "write-through" while reads are pure RAM access...
Another option could be to include an EXE (as an embedded resource, for example) and start it from inside the DLL if it is not already running... the EXE would provide the MemoryCache, and communication could be via IPC (for example, shared memory...). Since the EXE is a separate process, it would stay alive even after your AppDomain unloads... the problem with this is more whether the client likes it and/or permissions allow it... A rough sketch of the shared-memory idea is below.
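Here is that sketch, using a file-backed MemoryMappedFile so the data outlives any single process (the path, map name, and sizing are illustrative; real locking and serialization are omitted):

using System.IO;
using System.IO.MemoryMappedFiles;
using System.Text;

public static class SharedCacheFile
{
    private const string BackingFile = @"C:\cache\items.bin";
    private const long Capacity = 1024 * 1024;

    public static void Write(string payload)
    {
        // CreateFromFile backs the mapping with a real file, so a later
        // process can reopen it after this one exits.
        using (var mmf = MemoryMappedFile.CreateFromFile(
            BackingFile, FileMode.OpenOrCreate, "ItemCache", Capacity))
        using (var stream = mmf.CreateViewStream())
        {
            var bytes = Encoding.UTF8.GetBytes(payload);
            stream.Write(bytes, 0, bytes.Length);
        }
    }
}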
I really like the Windows Service approach, although I agree that it could be a deployment mess...
The basic issue seems to be that you don't have control of the run-time Host - which is what controls the lifespan (and hence the cache).
I'd investigate creating some sort of (lightweight?) host - maybe a .exe or a service.
The bulk of your DLLs would hang off the new host, but you could still deploy a "facade" DLL which in turn calls your main solution (tied to your host). Yes, you could have the external clients call your new host directly, but that would mean changing/re-configuring those external callers, whereas leaving your original DLL/API in place would isolate the external callers from your internal changes.
This would (I assume) mean completely gutting and re-structuring your solution, particularly whatever DLLs the external callers currently hit, because instead of processing the requests itself it's just going to pass the request off to your new host.
Performance
Inter-process communication is more expensive than keeping everything within a process - I'm not sure how the change in approach would affect your performance and ability to hit the SLA.
In particular, spinning up a new instance of the host will incur a performance hit.