Consider this simple example program that puts ints into a list:
void Main()
{
    Experiment experiment = new();
    var task = Task.Run(experiment.Start);
}

public class Experiment
{
    public async Task Start()
    {
        List<int> values = new();
        for (int i = 0; i < 1000000000; i++)
            values.Add(i);
        await Task.CompletedTask;
    }
}
When run, this uses about 7 GB of memory, and that data just stays there. Even if I clear the list or set it to null, the program still takes up 7 GB. When I run the task again, RAM usage suddenly drops to 10 MB and then shoots back up to 7 GB, which makes me think the data is only released when I start a new task with this method.
Why does the memory not get released when the task is done? I don't understand why the list is not temporary and keeps occupying memory. What am I doing wrong?
.NET uses garbage collection to release unused memory.
Garbage collection is more likely to run when memory is getting low. But other than that, there is no way to predict when it will run, or when your memory will be released.
In either case, garbage collection does not run as soon as the memory is no longer needed (or when the task is done in your case).
This is normal behavior. When you're running low on memory, garbage collection should take care of it soon enough.
Maybe add task.Wait() in the main program. As written, I think the main program exits before the task is complete, and the task may still be referenced when it exits, which would delay the GC.
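For illustration, a minimal sketch of that suggestion (reusing the Experiment class from the question):
void Main()
{
    Experiment experiment = new();
    var task = Task.Run(experiment.Start);
    task.Wait(); // block until the task has finished before Main returns
}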
The general advice is that you should not call GC.Collect from your code, but what are the exceptions to this rule?
I can only think of a few very specific cases where it may make sense to force a garbage collection.
One example that springs to mind is a service, that wakes up at intervals, performs some task, and then sleeps for a long time. In this case, it may be a good idea to force a collect to prevent the soon-to-be-idle process from holding on to more memory than needed.
Are there any other cases where it is acceptable to call GC.Collect?
If you have good reason to believe that a significant set of objects - particularly those you suspect to be in generations 1 and 2 - are now eligible for garbage collection, and that now would be an appropriate time to collect in terms of the small performance hit.
A good example of this is if you've just closed a large form. You know that all the UI controls can now be garbage collected, and a very short pause as the form is closed probably won't be noticeable to the user.
UPDATE 2.7.2018
As of .NET 4.5 - there is GCLatencyMode.LowLatency and GCLatencyMode.SustainedLowLatency. When entering and leaving either of these modes, it is recommended that you force a full GC with GC.Collect(2, GCCollectionMode.Forced).
As of .NET 4.6 - there is the GC.TryStartNoGCRegion method (used to set the read-only value GCLatencyMode.NoGCRegion). This can itself, perform a full blocking garbage collection in an attempt to free enough memory, but given we are disallowing GC for a period, I would argue it is also a good idea to perform full GC before and after.
Source: Microsoft engineer Ben Watson's: Writing High-Performance .NET Code, 2nd Ed. 2018.
See:
https://msdn.microsoft.com/en-us/library/system.runtime.gclatencymode(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/dn906204(v=vs.110).aspx
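As a rough sketch of that recommendation (my own illustration, assuming .NET 4.5+; GCSettings and GCLatencyMode live in the System.Runtime namespace):
// assumes: using System; using System.Runtime;
GCLatencyMode previousMode = GCSettings.LatencyMode;
GC.Collect(2, GCCollectionMode.Forced);       // full blocking collection before entering the mode
GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;

// ... latency-sensitive work ...

GCSettings.LatencyMode = previousMode;        // leave the low-latency mode
GC.Collect(2, GCCollectionMode.Forced);       // full blocking collection after leaving it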
I use GC.Collect only when writing crude performance/profiler test rigs; i.e. I have two (or more) blocks of code to test - something like:
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
TestA(); // may allocate lots of transient objects
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
TestB(); // may allocate lots of transient objects
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
...
So that TestA() and TestB() run with as similar state as possible - i.e. TestB() doesn't get hammered just because TestA left it very close to the tipping point.
A classic example would be a simple console exe (a Main method short enough to be posted here, for example) that shows the difference between looped string concatenation and StringBuilder.
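A rough sketch of such a rig (my own illustration, not code from the answer; assumes using System; using System.Diagnostics; using System.Text;):
static void Main()
{
    GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
    var sw = Stopwatch.StartNew();
    string s = "";
    for (int i = 0; i < 50000; i++) s += i;            // looped string concatenation
    Console.WriteLine("concat:        " + s.Length + " chars, " + sw.ElapsedMilliseconds + " ms");

    GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
    sw = Stopwatch.StartNew();
    var sb = new StringBuilder();
    for (int i = 0; i < 50000; i++) sb.Append(i);      // StringBuilder
    Console.WriteLine("StringBuilder: " + sb.Length + " chars, " + sw.ElapsedMilliseconds + " ms");

    GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
}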
If I need something precise, then this would be two completely independent tests - but often this is enough if we just want to minimize (or normalize) the GC during the tests to get a rough feel for the behaviour.
During production code? I have yet to use it ;-p
The best practice is to not force a garbage collection in most cases. (Every system I have worked on that had forced garbage collections had underlying problems that, if solved, would have removed the need to force the garbage collection and sped the system up greatly.)
There are a few cases where you know more about memory usage than the garbage collector does. This is unlikely to be true in a multi-user application, or in a service that is responding to more than one request at a time.
However, in some batch-type processing you do know more than the GC. E.g. consider an application that:
Is given a list of file names on the command line
Processes a single file, then writes the result out to a results file.
While processing the file, creates a lot of interlinked objects that cannot be collected until the processing of the file has completed (e.g. a parse tree)
Does not keep much state between the files it has processed.
You may be able to make a case (after careful testing) that you should force a full garbage collection after you have processed each file.
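A rough sketch of that batch pattern (my own illustration; ProcessFile is a hypothetical helper standing in for the real parsing work):
// assumes: using System; using System.IO;
static void Main(string[] args)
{
    foreach (string fileName in args)
    {
        // Builds a large, interlinked object graph (e.g. a parse tree) while working
        string result = ProcessFile(fileName);          // hypothetical helper
        File.WriteAllText(fileName + ".out", result);

        // Everything from that file is now unreachable; collect before the next file
        GC.Collect();
        GC.WaitForPendingFinalizers();
    }
}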
Another case is a service that wakes up every few minutes to process some items and does not keep any state while it's asleep. Then forcing a full collection just before going to sleep may be worthwhile.
The only time I would consider forcing a collection is when I know that a lot of objects have been created recently and very few objects are currently referenced.
I would rather have a garbage collection API where I could give it hints about this type of thing without having to force a GC myself.
See also "Rico Mariani's Performance Tidbits"
These days I think some of the above cases would be better served by using a short-lived worker process to do each batch of work and letting the OS do the resource recovery.
One case is when you are trying to unit test code that uses WeakReference.
In large 24/7 or 24/6 systems -- systems that react to messages, RPC requests or that poll a database or process continuously -- it is useful to have a way to identify memory leaks. For this, I tend to add a mechanism to the application to temporarily suspend any processing and then perform full garbage collection. This puts the system into a quiescent state where the memory remaining is either legitimately long lived memory (caches, configuration, &c.) or else is 'leaked' (objects that are not expected or desired to be rooted but actually are).
Having this mechanism makes it a lot easier to profile memory usage as the reports will not be clouded with noise from active processing.
To be sure you get all of the garbage, you need to perform two collections:
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
The first collection causes any objects with finalizers to be finalized (but does not actually collect those objects). The second collection then collects the now-finalized objects.
You can call GC.Collect() when you know something about the nature of the app the garbage collector doesn't.
As the author, it's often tempting to think this is likely or normal. However, the truth is the GC amounts to a pretty well-written and tested expert system, and it's rare you'll know something about the low level code paths it doesn't.
The best example I can think of where you might have some extra information is an app that cycles between idle periods and very busy periods. You want the best performance possible for the busy periods and therefore want to use the idle time to do some clean up.
However, most of the time the GC is smart enough to do this anyway.
One instance where it is almost necessary to call GC.Collect() is when automating Microsoft Office through Interop. COM objects for Office don't like to automatically release and can result in the instances of the Office product taking up very large amounts of memory. I'm not sure if this is an issue or by design. There's lots of posts about this topic around the internet so I won't go into too much detail.
When programming with Interop, every single COM object should be manually released, usually through the use of Marshal.ReleaseComObject(). In addition, triggering garbage collection manually can help "clean up" a bit. Calling the following code when you're done with the Interop objects seems to help quite a bit:
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
In my personal experience, using a combination of ReleaseComObject and manually calling garbage collection greatly reduces the memory usage of Office products, specifically Excel.
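A minimal sketch of that pattern for Excel (my own illustration; assumes a reference to the Microsoft.Office.Interop.Excel assembly, and the file path is a placeholder):
// assumes: using System.Runtime.InteropServices;
// assumes: using Excel = Microsoft.Office.Interop.Excel;
var app = new Excel.Application();
var workbooks = app.Workbooks;
var workbook = workbooks.Open(@"C:\temp\report.xlsx");

// ... read or write cells ...

workbook.Close(false);
app.Quit();

// Release every COM object that was touched, then collect twice
Marshal.ReleaseComObject(workbook);
Marshal.ReleaseComObject(workbooks);
Marshal.ReleaseComObject(app);
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();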
As a memory fragmentation solution.
I was getting out-of-memory exceptions while writing a lot of data into a memory stream (reading from a network stream). The data was written in 8K chunks. After reaching 128 MB there was an exception, even though there was a lot of memory available (but it was fragmented). Calling GC.Collect() solved the issue. I was able to handle over 1 GB after the fix.
Have a look at this article by Rico Mariani. He gives two rules when to call GC.Collect (rule 1 is: "Don't"):
When to call GC.Collect()
I was doing some performance testing on array and list:
private static int count = 100000000;

private static List<int> GetSomeNumbers_List_int()
{
    var lstNumbers = new List<int>();
    for (var i = 1; i <= count; i++)
    {
        lstNumbers.Add(i);
    }
    return lstNumbers;
}

private static int[] GetSomeNumbers_Array()
{
    var lstNumbers = new int[count];
    for (var i = 1; i <= count; i++)
    {
        lstNumbers[i - 1] = i + 1;
    }
    return lstNumbers;
}

private static int[] GetSomeNumbers_Enumerable_Range()
{
    return Enumerable.Range(1, count).ToArray();
}

static void performance_100_Million()
{
    var sw = new Stopwatch();
    sw.Start();
    var numbers1 = GetSomeNumbers_List_int();
    sw.Stop();
    //numbers1 = null;
    //GC.Collect();
    Console.WriteLine(String.Format("\"List<int>\" took {0} milliseconds", sw.ElapsedMilliseconds));

    sw.Reset();
    sw.Start();
    var numbers2 = GetSomeNumbers_Array();
    sw.Stop();
    //numbers2 = null;
    //GC.Collect();
    Console.WriteLine(String.Format("\"int[]\" took {0} milliseconds", sw.ElapsedMilliseconds));

    sw.Reset();
    sw.Start();
    // getting System.OutOfMemoryException in GetSomeNumbers_Enumerable_Range method
    var numbers3 = GetSomeNumbers_Enumerable_Range();
    sw.Stop();
    //numbers3 = null;
    //GC.Collect();
    Console.WriteLine(String.Format("\"int[]\" Enumerable.Range took {0} milliseconds", sw.ElapsedMilliseconds));
}
and I got an OutOfMemoryException in the GetSomeNumbers_Enumerable_Range method. The only workaround was to deallocate the memory with:
numbers = null;
GC.Collect();
You should try to avoid using GC.Collect(), since it's very expensive. Here is an example:
public void ClearFrame(ulong timeStamp)
{
    if (RecordSet.Count <= 0) return;
    if (Limit == false)
    {
        var seconds = (timeStamp - RecordSet[0].TimeStamp) / 1000;
        if (seconds <= _preFramesTime) return;
        Limit = true;
        do
        {
            RecordSet.Remove(RecordSet[0]);
        } while (((timeStamp - RecordSet[0].TimeStamp) / 1000) > _preFramesTime);
    }
    else
    {
        RecordSet.Remove(RecordSet[0]);
    }
    GC.Collect(); // AVOID
}
TEST RESULT: CPU USAGE 12%
When you change to this:
public void ClearFrame(ulong timeStamp)
{
    if (RecordSet.Count <= 0) return;
    if (Limit == false)
    {
        var seconds = (timeStamp - RecordSet[0].TimeStamp) / 1000;
        if (seconds <= _preFramesTime) return;
        Limit = true;
        do
        {
            RecordSet[0].Dispose(); // Bitmap destroyed!
            RecordSet.Remove(RecordSet[0]);
        } while (((timeStamp - RecordSet[0].TimeStamp) / 1000) > _preFramesTime);
    }
    else
    {
        RecordSet[0].Dispose(); // Bitmap destroyed!
        RecordSet.Remove(RecordSet[0]);
    }
    //GC.Collect();
}
TEST RESULT: CPU USAGE 2-3%
In your example, I think that calling GC.Collect isn't the issue; rather, there is a design issue.
If you are going to wake up at intervals (set times), then your program should be crafted for a single execution (perform the task once) and then terminate. Then you set the program up as a scheduled task to run at the scheduled intervals.
This way, you don't have to concern yourself with calling GC.Collect (which you should rarely, if ever, have to do).
That being said, Rico Mariani has a great blog post on this subject, which can be found here:
http://blogs.msdn.com/ricom/archive/2004/11/29/271829.aspx
One useful place to call GC.Collect() is in a unit test when you want to verify that you are not creating a memory leak (e.g. if you are doing something with WeakReferences or ConditionalWeakTable, dynamically generated code, etc.).
For example, I have a few tests like:
WeakReference w = CodeThatShouldNotMemoryLeak();
Assert.IsTrue(w.IsAlive);
GC.Collect();
GC.WaitForPendingFinalizers();
Assert.IsFalse(w.IsAlive);
It could be argued that using WeakReferences is a problem in and of itself, but it seems that if you are creating a system that relies on such behavior then calling GC.Collect() is a good way to verify such code.
There are some situations where it is better safe than sorry.
Here is one situation.
It is possible to author an unmanaged DLL in C# using IL rewrites (because there are situations where this is necessary).
Now suppose, for example, the DLL creates an array of bytes at the class level, because many of the exported functions need access to it. What happens when the DLL is unloaded? Is the garbage collector automatically called at that point? I don't know, but being an unmanaged DLL it is entirely possible the GC isn't called. And it would be a big problem if it wasn't called. When the DLL is unloaded, so too is the garbage collector, so who is going to be responsible for collecting any possible garbage, and how would they do it? Better to employ C#'s garbage collector. Have a cleanup function (available to the DLL client) where the class-level variables are set to null and the garbage collector is called.
Better safe than sorry.
The short answer is: never!
using (var stream = new MemoryStream())
{
    bitmap.Save(stream, ImageFormat.Png);
    techObject.Last().Image = Image.FromStream(stream);
    bitmap.Dispose();

    // Without this code, I had an OutOfMemory exception.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    //
}
Another reason is when you have a SerialPort opened on a USB COM port, and then the USB device is unplugged. Because the SerialPort was opened, the resource holds a reference to the previously connected port in the system's registry. The system's registry will then contain stale data, so the list of available ports will be wrong. Therefore the port must be closed.
Calling SerialPort.Close() on the port calls Dispose() on the object, but it remains in memory until garbage collection actually runs, causing the registry to remain stale until the garbage collector decides to release the resource.
From https://stackoverflow.com/a/58810699/8685342:
try
{
    if (port != null)
        port.Close(); // this will throw an exception if the port was unplugged
}
catch (Exception ex) // of type 'System.IO.IOException'
{
    System.GC.Collect();
    System.GC.WaitForPendingFinalizers();
}

port = null;
If you are creating a lot of new System.Drawing.Bitmap objects, the Garbage Collector doesn't clear them. Eventually GDI+ will think you are running out of memory and will throw a "The parameter is not valid" exception. Calling GC.Collect() every so often (not too often!) seems to resolve this issue.
I am still pretty unsure about this.
I have been working for 7 years on an application server. Our bigger installations use 24 GB of RAM. It is highly multithreaded, and ALL calls to GC.Collect() ran into really terrible performance issues.
Many third-party components used GC.Collect() when they thought it was clever to do so right then.
So a simple bunch of Excel reports blocked the app server for all threads several times a minute.
We had to refactor all the third-party components to remove the GC.Collect() calls, and everything worked fine after doing this.
But I am also running servers on Win32, and there I started to make heavy use of GC.Collect() after getting an OutOfMemoryException.
But I am pretty unsure about this too, because I often noticed that when I get an OOM on 32-bit and retry the same operation again without calling GC.Collect(), it just works fine.
One thing I wonder about is the OOM exception itself...
If I had written the .NET Framework and I couldn't allocate a memory block, I would call GC.Collect(), defrag memory (??), try again, and only if I still couldn't find a free memory block would I throw the OOM exception.
Or at least make this behavior a configurable option, given the drawbacks of the performance issues with GC.Collect.
Now I have lots of code like this in my app to "solve" the problem:
public static TResult ExecuteOOMAware<T1, T2, TResult>(Func<T1, T2, TResult> func, T1 a1, T2 a2)
{
    int oomCounter = 0;
    int maxOOMRetries = 10;
    do
    {
        try
        {
            return func(a1, a2);
        }
        catch (OutOfMemoryException)
        {
            oomCounter++;
            if (oomCounter >= maxOOMRetries)
            {
                throw;
            }
            else
            {
                Log.Info("OutOfMemory-Exception caught, trying to fix. Counter: " + oomCounter.ToString());
                System.Threading.Thread.Sleep(TimeSpan.FromSeconds(oomCounter * 10));
                GC.Collect();
            }
        }
    } while (oomCounter < maxOOMRetries);

    return default(TResult); // never reached
}
(Note that the Thread.Sleep() behavior is really app-specific, because we are running an ORM caching service, and the service takes some time to release all the cached objects if RAM exceeds some predefined values. So it waits a few seconds the first time, and waits longer with each occurrence of OOM.)
One good reason for calling GC is on small ARM computers with little memory, like the Raspberry Pi (running with Mono).
If unallocated memory fragments use too much of the system RAM, then the Linux OS can become unstable.
I have an application where I have to call GC every second (!) to get rid of memory overflow problems.
Another good solution is to dispose of objects when they are no longer needed. Unfortunately this is not so easy in many cases.
This isn't that relevant to the question, but for XSLT transforms in .NET (XslCompiledTransform) you might have no choice. Another candidate is the MSHTML control.
If you are using a version of .NET earlier than 4.5, manual collection may be inevitable (especially if you are dealing with many 'large objects').
This link describes why:
https://blogs.msdn.microsoft.com/dotnet/2011/10/03/large-object-heap-improvements-in-net-4-5/
There is a small object heap (SOH) and a large object heap (LOH).
We can call GC.Collect() to clear dereferenced objects on the SOH and move surviving objects to the next generation.
In .NET 4.5.1 and later, we can also compact the LOH by setting GCSettings.LargeObjectHeapCompactionMode.
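A quick sketch of that API (my own illustration; GCSettings is in the System.Runtime namespace and this requires .NET Framework 4.5.1 or later):
// assumes: using System; using System.Runtime;
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(); // the next blocking full collection compacts the LOH, then the mode resets to Default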
I am trying to track down a memory leak in a larger C# program which spawns multiple threads. In the process, I have created a small side program which I am using to test some basic things, and I found some behavior that I really do not understand.
class Program
{
    static void test()
    {
    }

    static void Main(string[] args)
    {
        while (true)
        {
            Thread test_thread = new Thread(() => test());
            test_thread.Start();
            Thread.Sleep(20);
        }
    }
}
Running this program, I see that the memory usage of the program increases steadily without stopping. In just a few minutes the memory usage goes well over 100MB and keeps climbing. If I comment out the line test_thread.Start();, the memory used by the program maxes out at about a few megabytes, and levels out. I also tried forcing garbage collection at the end of the while loop using GC.Collect(), but it did not seem to do anything.
I thought that the thread would be dereferenced as soon as the function is finished executing allowing the GC to mop it up, but this doesn't seem to be happening. I must not be understanding something deeper here, and I would appreciate some help with fixing this leak. Thanks in advance!
This is by design, your test program is supposed to exhibit runaway memory usage. You can see the underlying reason from Taskmgr.exe. Use View + Select Columns and tick "Handles". Observe how the number of handles for your process is steadily increasing. Memory usage goes up along with that, reflecting the unmanaged memory used by the handle objects.
The design choice was a very courageous one: the CLR uses 5 operating system objects per thread, plumbing used for synchronization. These objects are themselves disposable, but the design choice was to not make the Thread class implement IDisposable. That would be quite a hardship on .NET programmers; it is very difficult to make the Dispose() call at the right time. Courage that wasn't exhibited in the Task class design, btw, causing lots of hand-wringing and the general advice not to bother.
This is not normally a problem in a well-designed .NET program, where the GC runs often enough to clean up those OS objects, and where Thread objects are created sparingly, using the ThreadPool for very short-running threads like the ones your test program uses.
It can be; we can't see your real program. Do beware of drawing too many conclusions from such a synthetic test. You can see GC statistics with Perfmon.exe, which gives you an idea of whether it is running often enough. A decent .NET memory profiler is the weapon of choice; GC.Collect() is the backup weapon. For example:
static void Main(string[] args) {
    int cnt = 0;
    while (true) {
        Thread test_thread = new Thread(() => test());
        test_thread.Start();
        if (++cnt % 256 == 0) GC.Collect();
        Thread.Sleep(20);
    }
}
And you'll see it bounce back and forth now, never getting much higher than 4 MB.
Or is it already too late once the finalizer has been reached?
Basically I'm creating some code to log to a MySql database. Each log entry is represented by an object and stored in a queue until it gets flushed to the database in a batch insert / update. I figured it'd be inefficient to create a new object on the heap every time I wanted to write an entry (especially since I might want to write an entry or two in performance sensitive areas). My solution was to create a pool of objects and reuse them.
Basically I'm trying to not re-invent the wheel by letting the .NET garbage collector tell me when an object is no longer needed and can be added back to the pool. The problem is that I need a way to abort garbage collection from the destructor. Is that possible?
Can you? Yes.
Should you? No, it is almost certainly a terrible idea.
The general rule C# developers should remember is the following:
If you find yourself writing a finalizer, you probably did something wrong.
The memory allocators used by well-established managed VMs (such as the CLR or JVM) are extremely fast. One of the things that slows down the garbage collector in these systems is the use of customized finalizers. In an effort to optimize the runtime, you are actually giving up a very fast operation in favor of a much slower operation. Furthermore, the semantics of "bringing an object back to life" are difficult to understand and reason about.
Before you consider using a finalizer, you should understand everything in the following articles.
Never write a finalizer again (well, almost never)
DG Update: Dispose, Finalization, and Resource Management
Connection pooling is a feature virtually any major DB connection implementation is already going to natively support, so there is no reason to handle this manually. You'll be able to simply create a new connection for each operation and know that behind the scenes the connections will actually be pooled.
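For illustration, a minimal sketch of relying on the driver's built-in pooling (my own code, assuming the MySql.Data client library; the connection string and SQL are placeholders):
// assumes: using MySql.Data.MySqlClient;
static void WriteLogEntry(string message)
{
    // A "new" connection per operation; the provider returns a pooled connection under the hood
    using (var conn = new MySqlConnection("Server=localhost;Database=logs;Uid=app;Pwd=secret"))
    using (var cmd = new MySqlCommand("INSERT INTO log (message) VALUES (@msg)", conn))
    {
        cmd.Parameters.AddWithValue("@msg", message);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}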
To answer the literal question that you asked, yes. You can ensure that an object is not going to be GCed after it is finalized. You can do so simply by creating a reference to it from some "live" location.
This is a really bad idea though. Take a look at this example:
public class Foo
{
    public string Data;
    public static Foo instance = null;

    ~Foo()
    {
        Console.WriteLine("Finalized");
        instance = this;
    }
}

public static void Bar()
{
    new Foo() { Data = "Hello World" };
}

static void Main(string[] args)
{
    Bar();
    GC.Collect();
    GC.WaitForPendingFinalizers();

    Console.WriteLine(Foo.instance.Data);
    Foo.instance = null;

    GC.Collect();
    GC.WaitForPendingFinalizers();
}
This will print out:
Finalized
Hello World
So here we had an object end up being finalized, and we then accessed it later on. The problem however is that this object has been marked as "finalized". When it is finally hit by the GC again it's not finalized a second time.
You could re-register for finalization in the destructor, like so:
~YourClass()
{
    System.GC.ReRegisterForFinalize(this);
}
And from there you'd probably want something to reference the object so it doesn't just get finalized again, but this is one way to do it.
http://msdn.microsoft.com/en-us/library/system.gc.reregisterforfinalize(v=vs.110).aspx
The sample code below has a memory leak. If I comment out the two lines inside RefreshTimer_Elapsed, then the memory leak is gone. Does anybody know what's wrong? Thanks for the help.
static void RefreshTimer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    Thread innerThread = new Thread(delegate() { });
    innerThread.Start();
}

static void Main(string[] args)
{
    System.Timers.Timer RefreshTimer = new System.Timers.Timer();
    RefreshTimer.Interval = 5000;
    RefreshTimer.Elapsed += new System.Timers.ElapsedEventHandler(RefreshTimer_Elapsed);
    RefreshTimer.Start();
    for (; ; )
    { }
}
Are you sure there's a memory leak? Or do you just notice that your memory grows?
Until the garbage collector cleans up all the threads you create, memory will grow, but it's not leaking; the garbage collector knows that this is dead memory.
The only way memory "leaks" in a managed environment like .NET or Java is when you have objects being referenced that are never used or needed. That's not the case here. You're just creating a bunch of threads and forgetting about them immediately. As soon as they're no longer referenced by RefreshTimer_Elapsed and the thread stops running, there are no more references and they are free to be cleaned up.
You won't see the memory drop until the garbage collector is ready to do a cleanup. You can try to force this, but it's not generally recommended for performance reasons.
What you see might just be resources not yet reclaimed by the garbage collector because there is no memory pressure.
Also, you have a busy for loop in your Main routine; you probably want a Thread.Sleep statement there for testing, unless this is somehow part of the test...
To force a garbage collection for your testing only, you could replace your for loop with:
while (true)
{
    Thread.Sleep(5000);
    GC.Collect();
    GC.WaitForPendingFinalizers();
}
In general, when examining 'memory leaks' or resource problems in managed code, I would recommend using a profiler (e.g. Redgate ANTS) and taking a closer look at the allocations over time.
I think it's because you keep creating new threads.
The Timer object needs to be disposed!
It appears you are creating new items: the code is called repeatedly, and some kind of loop may be developing at runtime, untidily filling memory with multiple copies of objects because each called item does not fully complete.
RefreshTimer_Elapsed makes a new thread every interval. What kind of work is the anonymous method doing? Is it completing? Every thread you make gets 1 MB of virtual memory allocated by Windows.
If your threads never finish, then every interval you will consume another 1 MB of memory.