I can't seem to understand why ASP.NET Core is not collecting garbage. Last week I let a web service run for a few days, and my memory usage reached 20 GB. The GC didn't seem to be working, so to test it I wrote a very simple web method that returns a large collection of strings. The application started off using only 124 MB, but each time I called the web method the memory usage kept climbing until it reached 411 MB. It would have gone higher if I had kept calling the web method, but I decided to stop testing.
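For reference, the test endpoint was something like the sketch below (a hypothetical reconstruction; the route, names, and sizes are illustrative, not the original code):

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class TestController : ControllerBase
{
    // Each call allocates roughly 1,000,000 strings * 100 chars * 2 bytes ~ 200 MB.
    [HttpGet]
    public List<string> Get()
    {
        var list = new List<string>(1_000_000);
        for (int i = 0; i < 1_000_000; i++)
            list.Add(new string('x', 100));
        return list;
    }
}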
Does anyone know why the GC is not working? As you can see from the performance monitor, the GC was called (the yellow marker on the graph), but it did not collect the garbage from memory. I would have thought the GC would eagerly collect anything that no longer had a reference to it.
Any help will be GREATLY appreciated. Thanks! :)
I ran into the same problem, and after a long day found a solution in one of the GitHub issues filed about high memory consumption.
Actually, this is expected behaviour on a multi-core machine with plenty of memory, and it is called "Server" garbage collection mode. This mode is optimized for server load and runs the GC only when it is really needed (when the machine starts to run low on memory).
The solution is to change GC mode to "Workstation" mode.
You can do this by adding a setting to your .csproj:
<PropertyGroup>
  <ServerGarbageCollection>false</ServerGarbageCollection>
</PropertyGroup>
Workstation mode is meant to use less memory but run the GC more often.
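If you want to confirm which mode the runtime actually picked up, GCSettings exposes it at run time; a minimal check (the class name is illustrative):

using System;
using System.Runtime;

class GcModeCheck
{
    static void Main()
    {
        // Prints False once the .csproj setting above takes effect.
        Console.WriteLine("Server GC: " + GCSettings.IsServerGC);
        Console.WriteLine("Latency mode: " + GCSettings.LatencyMode);
    }
}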
It is well-documented by Sebastien Ros here:
https://github.com/sebastienros/memoryleak
On a typical web server environment the CPU resource is more critical than memory, hence using the Server GC is better suited.
The server is caching the output: 100,000,000 * 4 characters ~ 411 MB.
I have a question regarding the high memory usage of a Web Role running an MVC application, with Simple Injector for DI and Entity Framework 6 for the DAL. The application runs on an Azure Cloud Service as a Web Role with 2 x Standard A2 instances (2 cores, 3.5 GB RAM), and also runs a CachingService (co-located role) configured with 20% memory usage.
The problem is that when an instance is started or rebooted, the memory usage of the w3wp.exe process is only around 500-600 MB (with all other apps, total memory usage is around 50%), but even if no requests are coming in it keeps growing until it reaches around 1.7 GB and stops (with all other apps, total memory usage is around 90%). I also noticed that the memory sometimes drops randomly, and of course after a reboot or republishing.
After monitoring the memory heaps I noticed that it is the Gen2 heap that grows and stays large, and after debugging locally with ANTS Memory Profiler I saw that the largest share of Gen2 is taken up by Entity Framework objects with the class names "TypeUsage" and "MetadataProperty" (from the "System.Data.Entity.Core.Metadata.Edm" namespace).
Now my questions are:
Is this a memory leak in our code, and if so, how can I solve it? (I checked, and already tried disposing the DbContext that is created for every request.)
Is this a memory leak in EF? If so, what can I do about it - maybe use another DAL framework?
Is this normal behavior that I should leave as it is?
There is a very low chance that this is a memory leak in EF. This is not OK, and you shouldn't leave it like this: your code leaks memory.
The best way to find the leak is to use a memory profiler (ANTS is a good option; I used dotMemory). The profiler will show you the leaked objects, and it should also show you two other important things:
The stack trace of the location in code where the object was created
The object tree which keeps reference to your leaked object and doesn't allow it to be collected.
These should help you understand how the objects were created and why they weren't GC'ed.
You mentioned that most of the memory is in Gen2. That means that your leaked objects are referenced by something "long lived". This could be a static variable, ASP.Net Application, or something similar.
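To illustrate (a hypothetical example, not your code): any object reachable from a static root survives every collection and eventually migrates to Gen2.

using System.Collections.Generic;

// Hypothetical example of a "long lived" root: everything added to this
// static list stays reachable forever, so the GC can never collect it
// and it eventually migrates to Gen2.
public static class RequestLog
{
    public static readonly List<string> Entries = new List<string>();
}

public class SomeHandler
{
    // Called per request: the list grows without bound, matching the
    // symptom of an ever-growing Gen2 heap.
    public void Handle(string requestInfo)
    {
        RequestLog.Entries.Add(requestInfo);
    }
}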
The random drops in memory may occur when IIS recycles your application. By default that happens every 29 hours, but IIS may be configured differently or may decide to recycle your application for some other reason.
"But what I noticed is that memory drops sometimes randomly..."
It's probably not a memory leak but rather uncontrolled growth of the heap before garbage collection kicks in. I faced something similar some years ago.
The problem is that, by default, the garbage collector lets the process memory grow until it exceeds some bound relative to the total memory available in the OS. When your process runs in a cloud environment, which is a kind of shared hosting, it may never reach that bound from the OS point of view, so the memory is not collected, yet it still exceeds the memory limit imposed on the shared process.
I'd recommend forcing the garbage collector to collect memory explicitly by calling GC.Collect(0) periodically, after a certain number of operations. Maybe that will solve the problem.
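A minimal sketch of what that could look like (the class name and the threshold of 1000 are illustrative):

using System;
using System.Threading;

static class GcThrottle
{
    private static int _operations;

    // Call after each unit of work; every 1000th call forces a
    // generation-0 collection. The threshold of 1000 is arbitrary.
    public static void AfterOperation()
    {
        if (Interlocked.Increment(ref _operations) % 1000 == 0)
        {
            GC.Collect(0); // collect generation 0 only
        }
    }
}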
I had a similar problem (a web app with EF and lots of TypeUsage objects taking up memory in the dump) and found that setting "Enable 32-Bit Applications" on the application pool reduced the memory use considerably.
We got an "out of memory" issue on production servers. What API can we use to get the live memory usage (physical and managed) of an ASP.NET application?
Thanks.
PS: we're forbidden to profile memory with tools.
Also, an out-of-memory error doesn't necessarily mean what you think. Heavy memory fragmentation due to excessive object creation can be reported as out of memory, because the GC cannot find a contiguous block of memory large enough for the next (or last attempted) allocation.
.Net performance counters can help you with a number of managed memory related stats.
This page details some of the ones available- http://msdn.microsoft.com/en-us/library/x2tyfybc.aspx
You can use normal windows memory counters to get an overview of non-managed memory- http://msdn.microsoft.com/en-us/library/aa965225(VS.85).aspx
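Since profiling tools are off the table, the same numbers are also available programmatically; a sketch using only standard framework APIs (the helper class is illustrative):

using System;
using System.Diagnostics;

static class MemoryStats
{
    public static void Log()
    {
        // Managed heap size as the GC currently sees it (no forced collection).
        long managedBytes = GC.GetTotalMemory(false);

        using (var proc = Process.GetCurrentProcess())
        {
            Console.WriteLine(
                "Managed: {0} MB, Private: {1} MB, Working set: {2} MB",
                managedBytes / 1024 / 1024,
                proc.PrivateMemorySize64 / 1024 / 1024, // committed private bytes
                proc.WorkingSet64 / 1024 / 1024);       // physical memory in use
        }
    }
}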
Well, the first thing I can suggest is to take a memory dump when your w3wp.exe usage is high. You should take these dumps and analyze them yourself, or get an expert to do it.
http://blogs.msdn.com/tess will tell you how if you are interested in doing it.
BUT... before you do any of the real exercise... two things that you absolutely must do.
Turn Debug = False in all your web.config files http://aspalliance.com/1341
Turn Trace = False in all your web.config files, since the trace data is kept in memory and worsens any memory shortage.
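For reference, both settings live in the same section of web.config; a minimal example:

<configuration>
  <system.web>
    <compilation debug="false" />
    <trace enabled="false" />
  </system.web>
</configuration>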
In addition to the performance counters, you could use the profiling tools shipped with Visual Studio to monitor memory usage. This step can help you identify memory issues before the code is released to production. Here's how to do it with VS2010: http://msdn.microsoft.com/en-us/library/dd264934.aspx
A memory-intensive program that I wrote ran out of memory: it threw an OutOfMemory exception. During attempts to reduce memory usage, I started calling GC.GetTotalMemory(true) (to write the total memory usage to a debug file), which triggers a garbage collection.
For some reason, when calling this function I no longer get an out-of-memory exception. If I remove the calls again (keeping everything else the same), the exception is thrown again. In my understanding, collections are made automatically when memory pressure increases, so I don't understand this behavior.
Can anyone explain why the out-of-memory exception is only thrown when there are no calls to GC.Collect?
Update:
I'm using VS 2010, but I'm down-targeting the application to framework 3.5. I believe that fragmentation is indeed causing my problems.
I did some tests: when the exception is thrown, a call to GC.GetTotalMemory tells me I am using ~800 * 10^6 bytes. However, Task Manager tells me the application is using 1700 MB, a rather large discrepancy. I'm now planning to allocate memory only once and never deallocate any large arrays, reusing them instead. Luckily, my program allows me to accomplish this without too much fuss.
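A minimal sketch of that allocate-once strategy (the class name and array size are illustrative):

using System;

static class BufferPool
{
    // Allocated once; at this size the array lives on the large object heap,
    // so reusing it avoids repeated LOH allocations and fragmentation.
    private static readonly double[] Buffer = new double[10000000];

    public static void ProcessBatch(double[] input)
    {
        Array.Clear(Buffer, 0, Buffer.Length);
        Array.Copy(input, Buffer, input.Length);
        // ... work on Buffer[0..input.Length) instead of allocating a new array ...
    }
}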
I solved the problem by doing some smarter memory management. In particular by using a CustomList according to the suggestions on http://www.simple-talk.com/dotnet/.net-framework/the-dangers-of-the-large-object-heap/
Is your app running at full CPU? I'm pretty sure automatic garbage collection only occurs when the application is idle. Otherwise, you have to run a manual cycle.
I'm fairly sure that running out of memory does not force a garbage collection. That probably sounds incredibly unintuitive to you but I think this was done for a good reason. It prevents the program from entering a death-spiral where it constantly tries to find more space and getting all objects firmly lodged into gen #2. From which it is very hard to recover again.
The true argument you pass to GetTotalMemory() forces a full garbage collection. I would guess that this happens to free up enough space in the Large Object Heap to satisfy the memory allocation. This will of course work only once. If your program just keeps running, gobbling up memory beyond the 1.5 gigabytes or so that it has already consumed then OOM is just around the corner again. This time without any way to recover. Surviving an OOM requires drastic measures.
You'll need a good memory profiler to find out what's really going on. Unmanaged C++ in your project is always a fertile source of memory leaks. The unmanaged kind, always hard to trouble-shoot.
GC.Collect is merely a "suggestion" to free unused memory - it does not guarantee its release.
[Edit]
It appears that, while once true when I was learning the JVM years ago, this may not be the case in .NET anymore. The MSDN Library says that GC.Collect "Forces an immediate garbage collection of all generations." Good stuff (for me, anyway) about this here.
If you have unmanaged resources that occupy a lot of memory, the garbage collector won't really recognize that memory pressure. If you clean up those resources in finalizers, then forcing a collection will result in those unmanaged resources being freed, while if you don't force a collection, the garbage collector might not realize that it needs to be collecting.
If you are performing large unmanaged allocations, you can use GC.AddMemoryPressure to tell the GC that, so it can take it into account when deciding whether to run a collection or not.
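For example (a sketch; the wrapper type is illustrative, the GC and Marshal calls are standard):

using System;
using System.Runtime.InteropServices;

class UnmanagedBuffer : IDisposable
{
    private readonly IntPtr _buffer;
    private readonly long _size;

    public UnmanagedBuffer(long size)
    {
        _size = size;
        _buffer = Marshal.AllocHGlobal((IntPtr)size);
        // Tell the GC about the unmanaged allocation so it can factor
        // it into its collection heuristics.
        GC.AddMemoryPressure(size);
    }

    public void Dispose()
    {
        Marshal.FreeHGlobal(_buffer);
        GC.RemoveMemoryPressure(_size);
        GC.SuppressFinalize(this);
    }
}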
I was browsing the Microsoft Connect site and saw bug reports where people make the same claim you do: that an OutOfMemoryException occurs which can be resolved by periodically calling GC.Collect. I saw one report where the lead engineer from the garbage collector team responded that a bug was fixed in .NET 4.0 that should resolve a fragmentation issue with the large object heap. That is why I asked which version you were using.
It is certainly possible that you have stumbled upon a bug in the garbage collector. As with all GC related issues this could be very version dependent.
My advice would be to:
make sure you have the latest patches and service pack
refactor the code so that it is not as memory intensive
reuse LOH objects as much as possible instead of creating new ones
continue using GC.Collect at strategic points if necessary as a workaround
We have a 64-bit C#/.NET 3.0 application that runs on a 64-bit Windows server. From time to time the app uses large amounts of memory, which is available. In some instances the application stops allocating additional memory and slows down significantly (500+ times slower). When I check the memory in Task Manager, the amount of memory used barely changes. The application keeps running very slowly and never throws an out-of-memory exception.
Any ideas? Let me know if more data is needed.
You might try enabling server mode for the Garbage Collector. By default, all .NET apps run in Workstation Mode, where the GC tries to do its sweeps while keeping the application running. If you turn on server mode, it temporarily stops the application so that it can free up memory (much) faster, and it also uses different heaps for each processor/core.
Most server apps will see a performance improvement using the GC server mode, especially if they allocate a lot of memory. The downside is that your app will basically stall when it starts to run out of memory (until the GC is finished).
To enable this mode, insert the following into your app.config or web.config:
<configuration>
  <runtime>
    <gcServer enabled="true"/>
  </runtime>
</configuration>
The moment you hit the physical memory limit, the OS will start paging (that is, writing memory out to disk). This will indeed cause the kind of slowdown you are seeing.
Solutions?
Add more memory - this will only help until you hit the new memory limit
Rewrite your app to use less memory
Figure out if you have a memory leak and fix it
If memory is not the issue, perhaps your application is hitting CPU very hard? Do you see the CPU hitting close to 100%? If so, check for large collections that are being iterated over and over.
As with 32-bit Windows operating systems, there is a 2GB limit on the size of an object you can create while running a 64-bit managed application on a 64-bit Windows operating system.
Investigating Memory Issues (MSDN article)
There is an awful lot of good stuff mentioned in the other answers. However, I'm going to chip in my two pence (or cents - depending on where you're from!) anyway.
Assuming that this is indeed a 64-bit process as you have stated, here are a few avenues of investigation...
Which memory usage are you checking? Mem Usage or VMem Size? VMem size is the one that actually matters, since that applies to both paged and non-paged memory. If the two numbers are far out of whack, then the memory usage is indeed the cause of the slow-down.
What's the actual memory usage across the whole server when things start to slow down? Does the slow down also apply to other apps? If so, then you may have a kernel memory issue - which can be due to huge amounts of disk accessing and low-level resource usage (for example, create 20000 mutexes, or load a few thousand bitmaps via code that uses Win32 HBitmaps). You can get some indication of this on the Task Manager (although Windows 2003's version is more informative directly on this than 2008's).
When you say that the app gets significantly slower, how do you know? Are you using vast dictionaries or lists? Could it not just be that the internal data structures are getting so big as to complicate the work any internal algorithms are performing? When you get to huge numbers, some algorithms can start to become slower by orders of magnitude.
What's the CPU load of the application when it's running at full pelt? Is it actually the same as when the slow-down occurs? If the CPU usage decreases as the memory usage goes up, then whatever it's doing is taking the OS longer to fulfill, meaning that it's probably putting too much load on the OS. If there's no difference in CPU load, then my guess is that it's the internal data structures getting so big as to slow down your algos.
I would certainly be looking at running a Perfmon on the application - starting off with some .Net and native memory counters, Cache hits and misses, and Disk Queue length. Run it over the course of the application from startup to when it starts to run like an asthmatic tortoise, and you might just get a clue from that as well.
Having skimmed through the other answers, I'd say there's a lot of good ideas. Here's one I didn't see:
Get a memory profiler, such as SciTech's MemProfiler. It will tell you what's being allocated, by what, and it will show you the whole slice n dice.
It also has video tutorials in case you don't know how to use it. In my case, I discovered I had IDisposable instances that I wasn't wrapping in using(...).
We have a web service that uses up more and more private bytes until that application stops responding. The managed heap (mostly Gen2) will show some 200-250 MB, while private bytes shows over 1GB. What are possible causes of a memory leak outside of the managed heap?
I've already checked for the following:
Prolific dynamic assemblies (Xml serialization, regex, etc.)
Session state (turned off)
System.Policy.Evidence memory leak (SP1 installed)
Threading deadlock (no use of Join, only lock)
Use of SQLOLEDB (using SqlClient)
What other sources can I check for?
Make sure your app is compiled in release mode. If you compile in debug mode and deploy that, simply instantiating a class that has an event defined (the event doesn't even need to be raised) will cause a small piece of memory to leak. Instantiating enough of these objects over a long enough period of time will cause all the memory to be used. I've seen web apps that would use up all the memory within a matter of hours simply because a debug build was used; compiling as a release build immediately and permanently fixed the problem.
I would recommend you view snapshots of the heap at various times and see what's using up the memory. If your application is using Java, then jmap works extremely well - you just give it the PID of the Java process.
If using something else, try Lambda Probe (http://www.lambdaprobe.org/d/index.htm). It doesn't show as much detail, but will at least show you memory use.
I had a bad memory leak in my JDBC code that ended up being traced to a change in the JDBC specification a few years ago that I missed (with respect to closing statements and such). It took a combination of Lambda Probe and then jmap to localize the problem enough to fix it.
Cheers,
-R
Also look for:
COM Assemblies being loaded
DB Connections not being closed
Cache & State (Session, Application)
Try forcing the garbage collector (GC) to run (write a page that does it when it loads), or try instrumentation, but that's a bit hit and miss in my experience. Another option would be to keep it running and see if it actually runs out of memory.
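A minimal sketch of such a page's code-behind (the page itself is hypothetical and should be debug-only; the GC and Page members are standard):

using System;

public partial class ForceGc : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        long before = GC.GetTotalMemory(false);
        GC.Collect();                  // full collection, all generations
        GC.WaitForPendingFinalizers(); // let finalizers release resources
        GC.Collect();                  // collect what finalization freed
        long after = GC.GetTotalMemory(false);
        Response.Write(string.Format("Freed ~{0} MB",
            (before - after) / 1024 / 1024));
    }
}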
What could be happening is that there is plenty of memory and Windows does not signal your app to clean up. This makes the app look like it's using more and more memory because it can, when in fact the system can reclaim the memory whenever it needs to. SQL Server and Exchange do this a lot. The idea is: why cause an unnecessary cleanup when there are plenty of resources?
Rob
Garbage collection does not run until a request for memory is denied due to lack of available memory. This can often make things look like a memory leak when there isn't one.
Do you have any events and event handlers within the service? Services often have static variables, and if you create event handlers from those static instances connected to a non-static instance object, the static will hold a reference to the instance forever, which prevents it from ever being released.
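A minimal illustration of that pattern (the types are hypothetical):

using System;

public static class ServiceEvents
{
    public static event EventHandler SomethingHappened;
}

public class Worker
{
    public Worker()
    {
        // The static event now holds a reference to this Worker, so the
        // instance can never be collected until the handler is removed:
        ServiceEvents.SomethingHappened += OnSomethingHappened;
    }

    private void OnSomethingHappened(object sender, EventArgs e) { }

    // The fix: unsubscribe when the instance is done with the event.
    public void Detach()
    {
        ServiceEvents.SomethingHappened -= OnSomethingHappened;
    }
}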
Double-check that trace is not enabled. I've seen instances of trace slowly consuming memory until the app reaches its app pool limit.
For what it is worth, my issue was not with the service, but with the HttpClient that was calling it.
The client was not properly disposed, so it kept the connection open and the memory locked.
After disposing the client the service released the memory as expected.
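For illustration, the fix amounted to disposing the client (a sketch; the URL and class name are placeholders):

using System.Net.Http;
using System.Threading.Tasks;

static class ServiceCaller
{
    public static async Task CallAsync()
    {
        using (var client = new HttpClient())
        {
            var response = await client.GetAsync("http://example.com/api/values");
            response.EnsureSuccessStatusCode();
        } // Dispose() closes the connection and releases the associated memory.
    }
}

That said, for high-throughput callers a single long-lived HttpClient (or IHttpClientFactory on .NET Core) is usually preferred over create-and-dispose per call; the key point is simply not to leak undisposed clients.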