Static variables in .NET Web API server [closed] - C#

I have a web api server which basically responds to 2 requests: stop and start.
On start it does several things, for example, initializing timers that perform a method every X seconds, and on stop it just stops the timers.
In order to achieve that, I created a singleton class which handles the logic of the operations. (It needs to be a singleton so that the timers and other variables exist only once across all requests.)
The server runs fine, but recently I got an AccessViolationException while accessing GlobalConfiguration.AppSettings to retrieve a value from my web.config file.
I found out by looking at my logs that the singleton class's finalizer was called, even though I didn't restart the server.
The finalizer calls a method which I use regularly and which works fine in other scenarios, and this method uses the GlobalConfiguration class, which threw the exception.
I tried to find the cause for this without success.
So basically there are two bugs here:
1. Why was the finalizer called out of the blue? The server could run for a week.
2. The AccessViolationException.
Perhaps the bugs are related? If my application's memory was somehow reclaimed, would that cause the finalizer to be called and an exception when accessing GlobalConfiguration? Just a theory...
Or perhaps I don't handle the singleton properly? But after reading about static variables in C# web servers, I see that they should exist for as long as the application exists, so the bug might not be related to how I handle the singleton. I do handle it correctly: the singleton has a private static field which holds the actual instance, and initialization occurs via locking with double-checking to prevent multiple threads from creating the singleton.
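For reference, a minimal sketch of the pattern described above (class and member names are illustrative, not from the original code):

public sealed class OperationsManager
{
    // volatile ensures a fully constructed instance is visible to all threads
    private static volatile OperationsManager _instance;
    private static readonly object _syncRoot = new object();

    private OperationsManager()
    {
        // initialize timers and other state here
    }

    public static OperationsManager Instance
    {
        get
        {
            if (_instance == null)              // first check, without the lock
            {
                lock (_syncRoot)
                {
                    if (_instance == null)      // second check, under the lock
                        _instance = new OperationsManager();
                }
            }
            return _instance;
        }
    }
}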
Is my approach OK? Do you see any possible bug that I didn't expect in this behavior, or do you know of any behavior of the .NET Framework that could cause my static singleton to be destroyed?

For your first question, as to why the finalizer was called: the obvious explanation is that the singleton no longer had any active GC roots, which caused the GC to place it on the finalization queue. As to why the GC didn't find an active root to your object, there are two possibilities:
You have a bug in your code and are not holding onto a reference to the singleton
Your IIS application pool is being recycled (I'm assuming you're hosting the app in IIS)
In its default configuration, IIS automatically restarts each application pool every 1740 minutes (29 hours), which in turn causes the current app domain for your application to be unloaded. In .NET, static variables exist per app domain, which means that when the app domain is unloaded there are no longer any active GC roots to the singleton, which in turn means the singleton is eligible for garbage collection and, by extension, finalization.
With regards to the AccessViolationException you're getting, remember that by the time your finalizer executes, everything you know is wrong: the GlobalConfiguration object may itself have already been GC'd/finalized, and whatever handle it had to the web.config may have been destroyed.
It's also important to note that by default, when IIS recycles your application pool, it waits for the next HTTP request before it actually recreates the pool. This is good from a resource-utilization perspective, as it means your application is not loaded into memory while it isn't receiving requests; however, in your case it also means that none of your timers will exist until your app receives an HTTP request.
In terms of solving this problem you have a couple of options:
Disable IIS's automatic restart. This solves your immediate issue; however, it raises the question of what happens if the server gets restarted for some other reason. Does your application have some way of persisting state (e.g. in a database) so that if it was previously started it will continue when it comes back online?
If so, you would also want to enable auto-start and pre-load for your application so that IIS loads it automatically without an external HTTP request needing to be made. See this blog post for more details.
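A sketch of the settings involved, using appcmd (the pool and site names are placeholders; startMode and preloadEnabled require IIS 7.5+ with the Application Initialization module, or IIS 8+):

%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /recycling.periodicRestart.time:00:00:00
%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /processModel.idleTimeout:00:00:00
%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /startMode:AlwaysRunning
%windir%\system32\inetsrv\appcmd set app "Default Web Site/" /preloadEnabled:true

The first two commands disable the periodic restart and the idle timeout; the last two tell IIS to start the pool and preload the application without waiting for a request.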
Host outside of IIS. IIS is primarily designed for hosting websites rather than background services, so rather than fighting it you could switch to hosting your application in a Windows service. Using OWIN you can even host your existing Web API inside the service, so you should need only minimal code changes.
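For illustration, a minimal self-host sketch (assumes the Microsoft.AspNet.WebApi.OwinSelfHost package; the URL and class names are placeholders):

using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.Routes.MapHttpRoute(
            "Default", "api/{controller}/{id}",
            new { id = RouteParameter.Optional });
        app.UseWebApi(config); // existing ApiControllers work unchanged
    }
}

// In the Windows service:
//   OnStart: _server = WebApp.Start<Startup>("http://localhost:9000/");
//   OnStop:  _server.Dispose();

Because the service process is under your control, your timers live exactly as long as you want them to.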

Related

Can an MVC web app and an API web app run in the same pool?

I am working on an ASP.NET MVC web application (C#) that is a module of a larger application as a whole (mostly desktop/Windows-service based, VB.NET). Currently the application makes HTTP calls to a web service (provided as an API) which is its own independent application (also using MVC, VB.NET). Possibly not how I would design it, but it is what I have inherited.
My issue is this: If I host the MVC app in local IIS and run the API project in IIS Express, all is good. If I split the two projects up to run in separate application pools in local IIS, all is good. However, if I run both apps out of the same pool in IIS, I run into a lot of issues. Namely, timeouts when I call HttpClient.GetAsync(url), especially on a page that makes this call 9 times to dynamically retrieve different images based on ID (each call goes to the MVC app, which then calls the API). Some calls make it through; most do not.
The exceptions relate to cancelled tasks (timeout = 100s), but the actions need only a fraction of a second, so there is no reason to time out. Execution never even makes it into the functions on the API side when it fails; it is as if the HTTP client has given up offering any more connections, or the task is waiting for HTTP to send the request and it never does.
I have tried making it async all the way through, tried making the HttpClient static, etc., but no joy. Is this simply something that just shouldn't be done, allowing the two apps to share an app pool? If so, I can live with that. But if there is something I can do to handle it more efficiently, that'd be very useful. Any information/resources on this would be very much appreciated. Ta!
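For reference, the "static HttpClient" pattern mentioned above is usually written like this (a sketch; names and the timeout value are illustrative):

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class ApiClient
{
    // One shared instance for the app's lifetime avoids exhausting
    // sockets with a fresh connection pool per request.
    private static readonly HttpClient Client = new HttpClient
    {
        Timeout = TimeSpan.FromSeconds(30) // the default is 100s, matching the timeouts described
    };

    public static async Task<byte[]> GetImageAsync(string url)
    {
        using (var response = await Client.GetAsync(url).ConfigureAwait(false))
        {
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsByteArrayAsync().ConfigureAwait(false);
        }
    }
}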
We recently ran into the same issue; it took quite a bit of debugging to work out that using the same application pool was the cause of the problem.
I also found this could work if you increase the number of Maximum Worker Processes in the advanced settings for the application pool.
I'm not sure why this is, but I'm guessing all of the requests are being dealt with by one process, causing a backlog that ultimately results in timeouts. (I'd be happy for someone more knowledgeable in IIS to correct me, though.)
HTH
Edit: On further reading here, Carmelo Pulvirenti's Blog, it seems as though the garbage collector could be to blame. The blog states that multiple applications running in the same pool share memory; the side effects are:
It means that GC runs a lot of time per second in order to provide clean memory to your app. What is the side effect? In server mode, garbage collector requires to stop all threads activities to clean up the memory (this was improved on .NET Fx 4.5, where the collation does not require to stop all threads). What is the side effect?
Slow performance of your web application
High impact on CPU time due to GC’s activity.

IIS App Pool, memory management

I have a RESTful WCF service hosted on IIS 7.5. When some operation is called, it returns almost immediately but kicks off a complex task, dealing with combinatorics and opening big files in memory. After several requests, about 50% of memory is in use by the application pool, although the tasks have completed. When does the IIS pool reclaim memory? I tried calling GC.Collect(), but nothing happened. Is there any way to profile applications like this one? I tried several profilers, but they show only the .NET classes which IIS uses to process the request itself.
Long-running tasks don't typically suit web applications, as they time out or hang the responsiveness of the website/API.
Is it possible to configure a background task to run asynchronously of the IIS site? Then you could push these slow tasks onto a queue and process them in the background, as in the sketch below.
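One way to sketch that inside ASP.NET itself (requires .NET 4.5.2+; for genuinely long tasks a separate service plus a real queue is more robust):

using System.Threading;
using System.Threading.Tasks;
using System.Web.Hosting;

public static class SlowTasks
{
    public static void Enqueue(int itemId)
    {
        // Unlike a bare Task.Run, ASP.NET tracks this work and delays
        // AppDomain shutdown briefly while it drains.
        HostingEnvironment.QueueBackgroundWorkItem(ct => ProcessAsync(itemId, ct));
    }

    private static async Task ProcessAsync(int itemId, CancellationToken ct)
    {
        // the combinatorics / big-file work goes here, observing ct for shutdown
        await Task.Yield();
    }
}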
I think the memory usage on the process is an issue, but it doesn't tell the whole story. What have you managed to profile so far? Do you have unclosed connections lingering? Are you creating instances of multiple classes that are not being disposed of effectively? I would profile the call execution plan more than the memory usage, as it may point you to the calls where items are being left behind.
When you say 50% memory, how much are we actually talking about in MB? IIS can be a little greedy/lazy when it doesn't need to give up RAM.
The worker process itself will not release memory to the operating system on its own. You can set the process to recycle on a schedule; this restarts the process, releasing memory without interfering with running requests.
You probably should not do that, though: basically .NET is holding on to the memory to avoid having to reallocate it for later requests. The memory is available for reuse within the WCF process, and if it is not used, the OS will page it out and allow it to be reused when other processes need it. See the answer to "When is memory, allocated by .NET process, released back to Windows" for more details.
I had an almost similar issue, and I solved it by using Castle Windsor as an IoC container: I added the svc client class to the container with Transient scope, and finally I decorated the svc class with:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
All other dependent bindings were added with a Transient lifestyle, thus making them dependent on their instantiator. I'm not sure this will help in your situation, given the fact that you use large files, but if everything else fails, try implementing IDisposable on your most memory-hungry classes and check whether Dispose is called when it should be.
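A rough sketch of that setup (Castle Windsor 3.x syntax; the type names are stand-ins for the poster's):

using Castle.MicroKernel.Registration;
using Castle.Windsor;
using System.ServiceModel;

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class PriceService // the .svc class: a new instance per WCF call
{
}

public static class ContainerSetup
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();
        // Transient: a fresh instance per resolution, released with its instantiator
        container.Register(Component.For<PriceService>().LifestyleTransient());
        return container;
    }
}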
Hope that helps!

Creating custom AppDomains in IIS prevents their release, and deadlocks the website

First off: we have written a library that is used by a Windows service and an ASP.NET website running in IIS 7.
The library needed to load other libraries as plugins, but some of the external libraries had the same name (with different internal versions). To resolve this binary namespace conflict, we created an AppDomain for each plugin. The library has a Manager object which references a pool of static connections. Inside the pool of static SharedConnections, the AppDomains live and are destroyed. When the last Manager object is removed, the Manager invokes the cleanup of the SharedConnections. This cleanup releases the AppDomains we created.
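For illustration, the per-plugin AppDomain approach boils down to something like this (a sketch; names and paths are placeholders):

using System;

public static class PluginDomains
{
    public static AppDomain Create(string pluginName, string pluginDirectory)
    {
        // Each plugin gets its own domain so same-named assemblies of
        // different versions can load side by side.
        var setup = new AppDomainSetup { ApplicationBase = pluginDirectory };
        return AppDomain.CreateDomain("Plugin_" + pluginName, null, setup);
    }

    public static void Release(AppDomain domain)
    {
        AppDomain.Unload(domain); // invoked when the last Manager goes away
    }
}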
The Service that relies on our library handles this beautifully. At the beginning of its lifetime it creates the AppDomains and at the conclusion it removes them during cleanup.
The website that relies on our library handles this poorly. At the beginning of its lifetime it creates the AppDomains; however, when IIS decides to unload us after a period of inactivity, the Manager objects are deleted, which invokes the cleanup of the SharedConnection objects as expected, which in turn kills off the AppDomains.
There are two problems:
a) We use lock() around the connection and AppDomain releases so they don't release twice and subsequently throw errors. Except that, on rare occasions, the thread that enters the lock and kills the AppDomain ceases to exist and never leaves the lock, causing a deadlock scenario. The only way we can resolve this is to stop the AppPool in IIS and restart it 30-60 seconds later. This does not happen with the Windows service.
b) When we don't observe the above scenario (which itself rarely happens), we occasionally have AppDomain release issues instead; these throw exceptions that crash and restart the website, which is okay-ish.
Other things I have discovered via debugging: IIS places the website in its own AppDomain, which means we are a child AppDomain making more child AppDomains.
What are we doing wrong? Is there an IIS configuration that might help?
Use Application Request Routing.
To solve this exact problem, the IIS team created ARR to route requests to a specific version of an application based on a URL, cookie, or header parameter, which is easily configurable. ARR works as an HTTP proxy server which does simple routing.
Here is an example:
http://blogs.msdn.com/b/carlosag/archive/2010/04/02/setting-up-a-reverse-proxy-using-iis-url-rewrite-and-arr.aspx
IIS will do its job of recycling application pools and managing app domains for you; you don't have to do any of that.
You might be able to avoid problem a) by putting your SharedConnection and AppDomain cleanup code in the destructor of your Manager object. The destructor will be invoked exactly once by the garbage collector, after the Manager is disposed and no longer referenced, or when the AppDomain containing the Manager is unloaded. That should eliminate the risk of having your cleanup thread aborted by IIS. (This may count as an abuse of the destructor functionality, but I'm not sure what the negative consequences, if any, might be.)
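A sketch of that suggestion (Manager and SharedConnections stand in for the poster's types; the bodies are illustrative):

public static class SharedConnections
{
    public static void Cleanup()
    {
        // release pooled connections and unload the child AppDomains
    }
}

public class Manager
{
    ~Manager()
    {
        // Runs once, on the GC's finalizer thread, including during
        // AppDomain unload, so it is not tied to a request thread
        // that IIS might abort.
        SharedConnections.Cleanup();
    }
}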

static class in ASP.NET MVC app

I am wondering if a static class in an ASP.NET MVC app can be initialized more than once. I initially designed my app so that a static component would fetch some stuff from the database and serve as a cache, and I added a refresh method to the class which was called from the constructor. The refresh method was also made available through the administration portion of the app. At some point I noticed that the data was updated without this manual trigger, which implies that the static constructor ran more than once.
There are several scenarios where I could reasonably see this happen, such as an unhandled Exception causing re-initialization. But I am having trouble reproducing this, so I would like to know for sure.
The most usual scenarios would be:
a reload of the web application
touched Web.config
touched binaries
abnormal termination (out-of-memory, permission errors)
a reload of the Application Pool
a restart of IIS
a restart of w3wp.exe (at least once in 29 hours!)
The app-domain will get reloaded (recompiling the dynamic parts as necessary) and this would void any statically initialized data.
You can get around that by persisting the static data somewhere if creating it is expensive, or avoid reloading the AppDomain, the Application Pool or the IIS server.
Update: Phil Haack just published a relevant blog entry here: http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx
Bye Bye App Domain
It does a better job of explaining the above. Notably, IIS will recycle its worker process every 29 hours at minimum, and shared hosters will recycle the AppDomain much more often (perhaps after 20 minutes of idle time).
So tell ASP.NET, “Hey, I’m working here!”
outlines techniques you can apply to get notified of an AppDomain teardown; you could use this to get your singleton instance behaving 'correctly'. A sketch of that technique follows.
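The technique from that post looks roughly like this (a sketch; the class name is illustrative):

using System.Web.Hosting;

public class BackgroundSingleton : IRegisteredObject
{
    public BackgroundSingleton()
    {
        // Tell ASP.NET this object is working so it is notified before shutdown.
        HostingEnvironment.RegisterObject(this);
    }

    public void Stop(bool immediate)
    {
        // Called when the AppDomain is being taken down:
        // persist state / stop work here, then unregister.
        HostingEnvironment.UnregisterObject(this);
    }
}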
Recommendation
I suggest you read it :)
Static classes are initialized once per AppDomain.
If IIS recycles your AppDomain, everything will re-initialize.
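A quick way to observe this (a sketch, not from the post): log from the static constructor and watch for repeated entries after a recycle.

using System;
using System.Diagnostics;

public static class CachedData
{
    static CachedData()
    {
        // Fires once per AppDomain; a second entry means a recycle happened.
        Trace.WriteLine(string.Format(
            "CachedData initialized in AppDomain {0} at {1:O}",
            AppDomain.CurrentDomain.Id, DateTime.UtcNow));
    }
}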

How to make my appDomain live longer?

Here is the situation that we're in.
We are distributing our assemblies (purely DLL) to our clients (we don't have control over their environment).
They call us by passing a list of item IDs, and we search through our huge database and return the items with the highest price. Since we have an SLA (30 milliseconds) to meet, we cache our items in a memory cache (using Microsoft's MemoryCache). We cache about a million items.
The problem here is that the cache only lives for our client application's lifetime. When the process exits, so do all the cached items.
Is there a way I can make my MemoryCache live longer, so that a subsequent process can reuse the cached items?
I have considered having a Windows service and allowing all these different processes to communicate with one on the same box, but that's going to create a huge mess when it comes to deployment.
We are using AppFabric as our distributed cache, but the only way we can achieve our SLA is to use MemoryCache.
Any help would be greatly appreciated. Thank you
I don't see a way to make sure that your AppDomain lives longer, since all the calling assembly has to do is unload the AppDomain...
One option could be (although messy, too) to implement some sort of "persisting MemoryCache"... to achieve performance you could use a ConcurrentDictionary persisted in a MemoryMappedFile...
Another option would be to use a local database - it could even be SQLite - and implement the cache interface in memory such that all writes/updates/deletes are "write-through" while reads are pure RAM access...
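A rough sketch of that write-through idea, assuming the System.Data.SQLite package and an items(id, price) table (all names are illustrative):

using System.Collections.Concurrent;
using System.Data.SQLite;

public class WriteThroughCache
{
    private readonly ConcurrentDictionary<string, decimal> _ram =
        new ConcurrentDictionary<string, decimal>();
    private readonly SQLiteConnection _db;

    public WriteThroughCache(SQLiteConnection openConnection)
    {
        _db = openConnection;
    }

    public void Put(string id, decimal price)
    {
        _ram[id] = price; // RAM first: reads stay within the SLA
        using (var cmd = _db.CreateCommand())
        {
            // write-through: the value survives process exit
            cmd.CommandText = "INSERT OR REPLACE INTO items (id, price) VALUES (@id, @p)";
            cmd.Parameters.AddWithValue("@id", id);
            cmd.Parameters.AddWithValue("@p", price);
            cmd.ExecuteNonQuery();
        }
    }

    public bool TryGet(string id, out decimal price)
    {
        return _ram.TryGetValue(id, out price); // pure RAM access
    }
}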
Another option could be to include an EXE (as an embedded resource, for example) and start it from inside the DLL if it is not already running... the EXE would provide the MemoryCache, and communication could be via IPC (for example shared memory...). Since the EXE is a separate process, it would stay alive even after your AppDomain unloads... the problem with this is more whether the client likes it and/or permissions allow it...
I really like the Windows service approach, although I agree it could be a deployment mess...
The basic issue seems to be that you don't have control of the run-time host, which is what controls the lifespan (and hence the cache).
I'd investigate creating some sort of (lightweight?) host - maybe an .exe or a service.
The bulk of your DLLs would hang off the new host, but you could still deploy a "facade" DLL which in turn calls your main solution (tied to your host). Yes, you could have the external clients call your new host directly, but that would mean changing/re-configuring those external callers, whereas leaving your original DLL/API in place would isolate the external callers from your internal changes.
This would (I assume) mean completely gutting and restructuring your solution, particularly whatever DLLs the external callers currently hit, because instead of processing the requests itself it would just pass the request off to your new host.
Performance
Inter-process communication is more expensive than keeping everything within a process; I'm not sure how the change in approach would affect your performance and ability to hit the SLA.
In particular, spinning up a new instance of the host will incur a performance hit.
