I have a RESTful WCF service hosted on IIS 7.5. When a certain operation is called, it returns almost immediately but starts a complex task that deals with combinatorics and opens big files in memory. After several requests, about 50% of memory is in use by the application pool, even though the tasks have completed. When does the IIS pool reclaim memory? I tried calling GC.Collect(), but nothing happened. Is there any way to profile applications like this one? I tried several profilers, but they only show the .NET classes that IIS uses to process the request itself.
Long-running tasks don't typically suit web applications, as they time out and hang the responsiveness of the website/API.
Is it possible to configure a background task to run asynchronously from the IIS site? That way you could push these slow tasks into a queue and process them in the background.
I think the memory usage of the process is an issue, but it doesn't tell the whole story. What have you managed to profile so far? Do you have unclosed connections lingering? Are you creating instances of multiple classes that are not being disposed of effectively? I would profile the call execution plan rather than the memory usage, as it may give you more clues as to where items are being left behind.
When you say 50% memory, how much are we actually talking about in MB? IIS can be a little greedy/lazy when it doesn't need to give up RAM.
The worker process itself will not release memory to the operating system on its own. You can set the process to recycle on a schedule; this restarts the process and releases its memory without interfering with running requests.
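For reference, a scheduled recycle can also be set programmatically through Microsoft.Web.Administration. This is a minimal sketch; the pool name "MyWcfPool" is hypothetical, and it needs to run elevated:

using System;
using Microsoft.Web.Administration;

// Schedule a daily recycle at 03:00 so the worker process hands the
// memory back outside business hours.
using (var serverManager = new ServerManager())
{
    ApplicationPool pool = serverManager.ApplicationPools["MyWcfPool"];
    pool.Recycling.PeriodicRestart.Schedule.Add(TimeSpan.FromHours(3));
    serverManager.CommitChanges();
}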
You probably should not do that, though: basically, .NET is holding on to the memory to avoid having to reallocate it for later requests. The memory is available for reuse within the WCF process, and if it is not used, the OS will page it out and allow it to be reused when other processes need it. See the answer to "When is memory, allocated by .NET process, released back to Windows" for more details.
I had a similar issue, and I solved it by using Castle.Windsor as an IoC container: I added the svc client class to the container with Transient scope, and finally I decorated the svc class with:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
All other dependent bindings were added with the Transient lifestyle, making them dependent on their instantiator. I'm not sure this will help in your situation, given that you work with large files, but if everything else fails, try implementing IDisposable on your most memory-hungry classes and check that Dispose is called when it should be.
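For illustration, the registration might look something like this (a sketch; the service and interface names are hypothetical):

using System.ServiceModel;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

[ServiceContract]
public interface IPricingService
{
    [OperationContract]
    decimal GetPrice(int itemId);
}

// PerCall: WCF creates a fresh service instance for every request, so
// per-request state becomes eligible for collection as soon as the call ends.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class PricingService : IPricingService
{
    public decimal GetPrice(int itemId) { return 0m; /* real work here */ }
}

public static class ContainerSetup
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();
        // Transient: a new instance per resolution, released together
        // with whatever resolved it.
        container.Register(Component.For<IPricingService>()
                                    .ImplementedBy<PricingService>()
                                    .LifestyleTransient());
        return container;
    }
}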
Hope that helps!
Related
I am working on an ASP.NET MVC web application (C#) that is a module of a larger application as a whole (mostly desktop/Windows-service based, VB.NET). Currently the application makes HTTP calls to a web service (provided as an API), which is its own independent application (also MVC, VB.NET). Possibly not how I would design it, but it is what I have inherited.
My issue is this: if I host the MVC app in local IIS and run the API project in IIS Express, all is good. If I split the two projects up to run in separate application pools in local IIS, all is good. However, if I run both apps out of the same pool in IIS, I run into a lot of issues, namely timeouts when I make the call to HttpClient.GetAsync(url), especially on a page that calls this 9 times to dynamically retrieve different images based on ID (each call goes to the MVC app, which then calls the API). Some calls make it through; most do not.
The exceptions relate to cancelled tasks (the timeout is 100 s), but the actions take only a fraction of a second, so there is no reason to time out. When it fails, execution never even makes it into the functions on the API side; it is as if the HttpClient has given up offering any more connections, or the task is waiting for HTTP to send the request and it never does.
I have tried making it async all the way through, tried making the HttpClient static, etc., but no joy. Is this simply something that just shouldn't be done, letting the two apps share an app pool? If so, I can live with that. But if there is something I can do to handle it more efficiently, that'd be very useful. Any information/resources on this would be very much appreciated. Ta!
We recently ran into the same issue, and it took quite a bit of debugging to work out that using the same application pool was the cause of the problem.
I also found this could work if you increase the number of Maximum Worker Processes in the advanced settings for the application pool.
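For what it's worth, the same setting can be changed programmatically via Microsoft.Web.Administration (a sketch; "SharedPool" is a hypothetical pool name):

using Microsoft.Web.Administration;

// Equivalent to Advanced Settings > Maximum Worker Processes in IIS Manager.
using (var serverManager = new ServerManager())
{
    serverManager.ApplicationPools["SharedPool"].ProcessModel.MaxProcesses = 4;
    serverManager.CommitChanges();
}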
I'm not sure why this is, but I'm guessing all of the requests are being dealt with by one process, causing a backlog that ultimately results in timeouts. (I'd be happy for someone more knowledgeable in IIS to correct me, though.)
HTH
Edit: On further reading here (Carmelo Pulvirenti's blog), it seems the garbage collector could be to blame. The blog states that multiple applications running in the same pool share memory, with the following side effects:
It means that GC runs many times per second in order to provide clean memory to your app. What is the side effect? In server mode, the garbage collector has to stop all thread activity to clean up the memory (this was improved in .NET Fx 4.5, where collection does not require stopping all threads). What is the side effect?
Slow performance of your web application
High impact on CPU time due to the GC's activity.
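If you want to verify which GC mode your pool is actually running under, a tiny diagnostic like this (a sketch; drop it into a test page or console host) can tell you:

using System;
using System.Runtime;

// True when the server (multi-heap) collector is in use.
Console.WriteLine("Server GC: " + GCSettings.IsServerGC);
Console.WriteLine("Latency mode: " + GCSettings.LatencyMode);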
I have a service that runs on a domain controller and is randomly accessed by other computers on the network. I can't shut down the service and run it only when needed (that would defeat the purpose of running it as a service anyway).
The problem is that the memory used by the service doesn't seem to ever get cleared, and increases every time the service is queried by a remote computer.
Is there a way to set a limit on the RAM used by the application?
I've found a few references to using MaxWorkingSet, but none of them actually tell me how to use it. Can I use MaxWorkingSet to limit the RAM used to, for example, 35 MB? And if so, how? (What is the syntax, etc.?)
Otherwise, is there a function like "clearall()" that I could use to reset the variables and memory at the end of each run through? I've tried using GC.Collect(), but it didn't work.
Strictly speaking, MaxWorkingSet only affects the working set, which is the amount of physical memory the process uses. To restrict overall memory usage, you need the Job Object API. But be careful if your program really needs that much memory: a lot of code doesn't handle OutOfMemoryException, and the .NET runtime can behave strangely when memory runs short.
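To answer the syntax part of the question, setting MaxWorkingSet looks like this (note that the OS treats it as a soft limit on physical memory only):

using System;
using System.Diagnostics;

// Cap the working set of the current process at roughly 35 MB.
// Changing this may require elevated privileges.
Process current = Process.GetCurrentProcess();
current.MaxWorkingSet = (IntPtr)(35 * 1024 * 1024);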
For the Job Object route, you need to:
Create a Win32 Job object
Set the maximum memory to the job
Assign your process to the job
Here is a wrapper for .NET.
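If you'd rather not take a dependency, here is a minimal P/Invoke sketch of those three steps (error handling omitted; the constants and struct layouts follow the Win32 headers):

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class JobMemoryLimit
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string lpName);

    [DllImport("kernel32.dll")]
    static extern bool SetInformationJobObject(IntPtr hJob, int infoClass,
        ref JOBOBJECT_EXTENDED_LIMIT_INFORMATION info, int cbLength);

    [DllImport("kernel32.dll")]
    static extern bool AssignProcessToJobObject(IntPtr hJob, IntPtr hProcess);

    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_BASIC_LIMIT_INFORMATION
    {
        public long PerProcessUserTimeLimit;
        public long PerJobUserTimeLimit;
        public uint LimitFlags;
        public UIntPtr MinimumWorkingSetSize;
        public UIntPtr MaximumWorkingSetSize;
        public uint ActiveProcessLimit;
        public UIntPtr Affinity;
        public uint PriorityClass;
        public uint SchedulingClass;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct IO_COUNTERS
    {
        public ulong ReadOperationCount, WriteOperationCount, OtherOperationCount;
        public ulong ReadTransferCount, WriteTransferCount, OtherTransferCount;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
    {
        public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
        public IO_COUNTERS IoInfo;
        public UIntPtr ProcessMemoryLimit;
        public UIntPtr JobMemoryLimit;
        public UIntPtr PeakProcessMemoryUsed;
        public UIntPtr PeakJobMemoryUsed;
    }

    const int JobObjectExtendedLimitInformation = 9;
    const uint JOB_OBJECT_LIMIT_PROCESS_MEMORY = 0x00000100;

    // 1) create the job, 2) set the memory limit, 3) assign the process.
    public static void LimitCurrentProcess(ulong maxBytes)
    {
        IntPtr job = CreateJobObject(IntPtr.Zero, null);
        var info = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
        info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
        info.ProcessMemoryLimit = (UIntPtr)maxBytes;
        SetInformationJobObject(job, JobObjectExtendedLimitInformation,
            ref info, Marshal.SizeOf(typeof(JOBOBJECT_EXTENDED_LIMIT_INFORMATION)));
        AssignProcessToJobObject(job, Process.GetCurrentProcess().Handle);
    }
}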
Besides, you could try this approach to GC (for .NET Framework 4.5.1 or newer):
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(2, GCCollectionMode.Forced, true, true);
(for older versions, though it sometimes doesn't work)
GC.Collect(2, GCCollectionMode.Forced);
The third parameter of the newer GC.Collect() overload tells the runtime whether to perform the collection immediately (blocking). In older versions, GC.Collect() only notifies the runtime and leaves the decision to it.
As for programming advice, I suggest wrapping each query in a class of its own. The class can be explicitly disposed after the query is done, which may help the GC work smarter.
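A sketch of that idea (QueryContext and its contents are hypothetical names):

using System;
using System.IO;

// One instance per query; disposing it releases the large buffers
// deterministically instead of waiting for a Gen2 collection.
public sealed class QueryContext : IDisposable
{
    private MemoryStream _buffer = new MemoryStream();

    public void Run()
    {
        // ... do the work for one query using _buffer ...
    }

    public void Dispose()
    {
        if (_buffer != null) { _buffer.Dispose(); _buffer = null; }
    }
}

// Usage:
// using (var ctx = new QueryContext()) { ctx.Run(); }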
Finally, there are indeed some things in the .NET Framework that you need to manage yourself. For example, the GDI handle returned by Bitmap.GetHbitmap() must be released manually.
I have a question regarding the high memory usage of a Web Role running an MVC application (C#), with Simple Injector for DI and Entity Framework 6 for the DAL. The application runs on an Azure Cloud Service as a Web Role with 2 x Standard A2 instances (2 cores, 3.5 GB RAM) and also runs a caching service (co-located role) configured to use 20% of memory.
The problem is that when an instance is started or rebooted, the memory usage of the w3wp.exe process is only around 500-600 MB (with all other apps, total memory usage is around 50%), but even if no requests are coming in, it keeps growing until it reaches about 1.7 GB and stops (total memory usage around 90%). What I also noticed is that the memory sometimes drops randomly, and of course after a reboot or republishing.
After monitoring the memory heaps, I noticed that it is the Gen2 heap that grows and stays large, and after debugging locally with ANTS Memory Profiler I saw that the largest share of Gen2 is taken by Entity Framework objects with the class names "TypeUsage" and "MetadataProperty" (in the "System.Data.Entity.Core.Metadata.Edm" namespace).
Now my questions are:
Is this a memory leak in our code, and if so, how can I solve it? (I checked and already tried disposing the DbContext that is created on every request.)
Is this a memory leak in EF? If so, what can I do about it; maybe use another DAL framework?
Is this normal behavior that I should leave as it is?
There is a very low chance that this is a memory leak in EF. This is not OK, and you shouldn't leave it like this: your code leaks memory.
The best way to find the leak is to use a memory profiler (ANTS is a good option, I used dotMemory). The profiler will show you the leaked objects and it should also show you two other important things:
The stack trace of the location in code where the object was created
The object tree that keeps a reference to your leaked object and doesn't allow it to be collected.
These should help you understand how the objects were created and why they weren't GC'ed.
You mentioned that most of the memory is in Gen2. That means that your leaked objects are referenced by something "long lived". This could be a static variable, ASP.Net Application, or something similar.
The random drop of memory may occur when IIS recycles your application. By default that happens every 29 hours, but IIS may be configured differently or may decide to recycle your application for some other reason.
"But what I noticed is that memory drops sometimes randomly..."
It's probably not a memory leak, but rather uncontrolled growth of the GC heap. I faced something similar some years ago.
The problem is that, by default, the garbage collector lets the process memory grow until it exceeds some bound relative to the total memory available in the OS. When your process runs in a cloud environment, which is a kind of shared hosting, it's possible that it hasn't yet reached that bound from the OS point of view, so the memory is not collected, yet it exceeds the memory limit imposed on the shared process.
I'd recommend forcing the garbage collector to collect memory explicitly by calling GC.Collect(0) periodically, after a certain number of operations. Maybe it can solve the problem.
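Something along these lines (a sketch; the threshold of 1000 is arbitrary and should be tuned to your workload):

using System;
using System.Threading;

static class PeriodicGc
{
    private static int _operations;

    // Call after each completed operation; forces a gen-0 collection
    // every 1000th call.
    public static void AfterOperation()
    {
        if (Interlocked.Increment(ref _operations) % 1000 == 0)
            GC.Collect(0);
    }
}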
I had a similar problem (a web app with EF and lots of TypeUsage instances taking up memory in the dump) and found that setting "Enable 32-Bit Applications" on the application pool reduced the memory use considerably.
Here is the situation that we're in.
We are distributing our assemblies (purely DLL) to our clients (we don't have control over their environment).
They call us by passing a list of item IDs, and we search through our huge database and return the items with the highest price. Since we have an SLA (30 milliseconds) to meet, we cache our items in memory (using Microsoft's MemoryCache). We are caching about a million items.
The problem here is that the cache only lives for the client application's lifetime. When the process exits, so do all the cached items.
Is there a way I can make my MemoryCache live longer, so that a subsequent process can reuse the cached items?
I have considered having a Windows service and letting all these different processes communicate with the one on the same box, but that's going to create a huge mess when it comes to deployment.
We are using AppFabric as our distributed cache, but the only way we can meet our SLA is to use MemoryCache.
Any help would be greatly appreciated. Thank you
I don't see a way to make sure that your AppDomain lives longer - since all the calling assembly has to do is unload the AppDomain...
One option could be (although messy too) to implement some sort of "persisting MemoryCache"... to achieve performance you could use a ConcurrentDictionary persisted in a MemoryMappedFile...
Another option would be to use a local database (it could even be SQLite) and implement the cache interface in-memory, such that all writes/updates/deletes are "write-through" while reads are pure RAM access...
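A sketch of that write-through idea (IBackingStore is a hypothetical abstraction over the local database, e.g. a SQLite table):

using System.Collections.Concurrent;

public interface IBackingStore
{
    void Put(string key, byte[] value);
    byte[] Get(string key);
}

public sealed class WriteThroughCache
{
    private readonly ConcurrentDictionary<string, byte[]> _ram =
        new ConcurrentDictionary<string, byte[]>();
    private readonly IBackingStore _store;

    public WriteThroughCache(IBackingStore store) { _store = store; }

    public void Set(string key, byte[] value)
    {
        _ram[key] = value;      // reads stay pure RAM access
        _store.Put(key, value); // persisted so the next process can warm up from disk
    }

    public byte[] Get(string key)
    {
        byte[] value;
        return _ram.TryGetValue(key, out value) ? value : _store.Get(key);
    }
}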
Another option could be to include an EXE (as an embedded resource, for example) and start it from inside the DLL if it is not already running... The EXE would provide the MemoryCache, and communication could be via IPC (for example, shared memory). Since the EXE is a separate process, it would stay alive even after your AppDomain is unloaded. The problem with this is more whether the client likes it and/or permissions allow it...
I really like the Windows Service approach, although I agree it could be a deployment mess...
The basic issue seems to be that you don't have control of the runtime host, which is what controls the lifespan (and hence the cache).
I'd investigate creating some sort of (lightweight?) host, maybe a .exe or a service.
The bulk of your DLLs would hang off the new host, but you could still deploy a "facade" DLL which in turn calls your main solution (tied to your host). Yes, you could have the external clients call your new host directly, but that would mean changing / re-configuring those external callers, whereas leaving your original DLL / API in place would isolate the external callers from your internal changes.
This would (I assume) mean completely gutting and restructuring your solution, particularly whatever DLLs the external callers currently hit, because instead of processing the requests itself it would just pass the requests off to your new host.
Performance
Inter-process communication is more expensive than keeping it within a process - I'm not sure how the change in approach would affect your performance and ability to hit the SLA.
In particular, spinning up a new instance of the host will incur a performance hit.
We have a web service that uses up more and more private bytes until that application stops responding. The managed heap (mostly Gen2) will show some 200-250 MB, while private bytes shows over 1GB. What are possible causes of a memory leak outside of the managed heap?
I've already checked for the following:
Prolific dynamic assemblies (Xml serialization, regex, etc.)
Session state (turned off)
System.Policy.Evidence memory leak (SP1 installed)
Threading deadlock (no use of Join, only lock)
Use of SQLOLEDB (using SqlClient)
What other sources can I check for?
Make sure your app is compiled in release mode. If you compile under debug mode and deploy that, simply instantiating a class that has an event defined (the event doesn't even need to be raised) will cause a small piece of memory to leak. Instantiating enough of these objects over a long enough period of time will cause all the memory to be used. I've seen web apps that would use up all the memory within a matter of hours, simply because a debug build was used. Compiling as a release build immediately and permanently fixed the problem.
I would recommend you view snapshots of the heap at various times and see what's using up the memory. If your application is using Java, then jmap works extremely well; you just give it the PID of the Java process.
If using something else, try Lambda Probe (http://www.lambdaprobe.org/d/index.htm). It doesn't show as much detail, but will at least show you memory use.
I had a bad memory leak in my JDBC code that was eventually traced to a change in the JDBC specification a few years ago that I missed (with respect to closing statements and such). It took a combination of Lambda Probe and then jmap to localize the problem enough to fix it.
Cheers,
-R
Also look for:
COM Assemblies being loaded
DB Connections not being closed
Cache & State (Session, Application)
Try forcing the garbage collector (GC) to run (write a page that does it when it loads), or try the instrumentation, but that's a bit hit-and-miss in my experience. Another thing would be to keep it running and see if it runs out of memory.
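A sketch of such a page's code-behind (diagnostic-only; don't expose it publicly):

using System;
using System.Web.UI;

public partial class ForceGc : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect(); // second pass reclaims objects freed by finalizers
        Response.Write("Total managed memory: " + GC.GetTotalMemory(false));
    }
}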
What could be happening is that there is plenty of memory and Windows does not signal your app to clean up. This makes the app look like it's using more and more memory because it can, when in fact the system can reclaim the memory when it needs to. SQL Server and Exchange do this a lot. The idea is: why force an unnecessary cleanup when there are plenty of resources?
Rob
Garbage collection does not run until a request for memory is denied due to lack of available memory. This can often make things look like a memory leak when there isn't one.
Do you have any events and event handlers within the service? Services often have static variables, and if you create event handlers from those static instances, connected to a non-static instance object, the static will hold a reference to the instance forever, which stops it from being released.
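A sketch of that pattern (the names are hypothetical):

using System;

public static class ServiceEvents
{
    // Static event: its invocation list lives for the process lifetime.
    public static event EventHandler QueryReceived;
}

public class RequestHandler
{
    public RequestHandler()
    {
        // The static event now references this instance; without a
        // matching -= the instance can never be collected.
        ServiceEvents.QueryReceived += OnQuery;
    }

    private void OnQuery(object sender, EventArgs e) { /* ... */ }

    // Unsubscribe when done so the instance becomes collectible.
    public void Detach() { ServiceEvents.QueryReceived -= OnQuery; }
}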
Double-check that trace is not enabled. I've seen instances of trace slowly consuming memory until the app reaches its app pool limit.
For what it is worth, my issue was not with the service, but with the HttpClient that was calling it.
The client was not properly disposed, so it kept the connection open and the memory locked.
After disposing the client the service released the memory as expected.
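A sketch of that fix (reusing one shared HttpClient instance across calls is usually even better, as it also avoids socket exhaustion):

using System;
using System.Net.Http;
using System.Threading.Tasks;

static async Task<string> CallServiceAsync(Uri url)
{
    // Disposing the client closes the connection and releases its buffers.
    using (var client = new HttpClient())
    {
        return await client.GetStringAsync(url);
    }
}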