Best Practice for Forcing Garbage Collection in C#

In my experience, most people will tell you that it is unwise to force a garbage collection. But in cases where you are working with large objects that don't always get collected in generation 0, and where memory is an issue, is it OK to force the collect? Is there a best practice for doing so?

The best practice is to not force a garbage collection.
According to MSDN:
"It is possible to force garbage
collection by calling Collect, but
most of the time, this should be
avoided because it may create
performance issues. "
However, if you can reliably test your code to confirm that calling Collect() won't have a negative impact then go ahead...
Just try to make sure objects are cleaned up when you no longer need them. If you have custom objects, look at using the "using statement" and the IDisposable interface.
This link has some good practical advice with regards to freeing up memory / garbage collection etc:
http://msdn.microsoft.com/en-us/library/66x5fx1b.aspx

Look at it this way - is it more efficient to throw out the kitchen garbage when the garbage can is at 10% or let it fill up before taking it out?
By not letting it fill up, you are wasting your time walking to and from the garbage bin outside. This is analogous to what happens when the GC thread runs - all the managed threads are suspended while it is running. And if I am not mistaken, the GC thread can be shared among multiple AppDomains, so garbage collection affects all of them.
Of course, you might encounter a situation where you won't be adding anything to the garbage can anytime soon - say, if you're going to take a vacation. Then, it would be a good idea to throw out the trash before going out.
This MIGHT be one time that forcing a GC can help - if your program idles, the memory in use is not garbage-collected because there are no allocations.

The best practice is to not force a garbage collection in most cases. (Every system I have worked on that had forced garbage collections had underlying problems that, if solved, would have removed the need to force the garbage collection and sped the system up greatly.)
There are a few cases when you know more about memory usage than the garbage collector does. This is unlikely to be true in a multi-user application, or a service that is responding to more than one request at a time.
However, in some batch-type processing you do know more than the GC. For example, consider an application that:
Is given a list of file names on the command line
Processes a single file then writes the result out to a results file.
While processing the file, creates a lot of interlinked objects that cannot be collected until the processing of the file has completed (e.g. a parse tree).
Does not keep much state between the files it has processed.
You may be able to make a case (after careful testing) that you should force a full garbage collection after you have processed each file.
Another case is a service that wakes up every few minutes to process some items, and does not keep any state while it's asleep. Then forcing a full collection just before going to sleep may be worthwhile.
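A minimal sketch of the batch scenario above, assuming a hypothetical ProcessFile helper; the forced full collection runs only between files, where the program arguably knows more than the GC does:

// Sketch only: ProcessFile is a hypothetical placeholder for the
// parse-and-write step described in the list above.
static void Main(string[] args)
{
    foreach (string path in args)
    {
        ProcessFile(path);               // builds a large, interlinked parse tree

        // Between files nearly everything from the previous file is garbage,
        // so a full blocking collection here is comparatively cheap.
        // Only keep this after careful measurement shows it helps.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
    }
}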
The only time I would consider forcing a collection is when I know that a lot of objects have been created recently and very few objects are currently referenced.
I would rather have a garbage collection API where I could give it hints about this type of thing without having to force a GC myself.
See also "Rico Mariani's Performance Tidbits"

I think the example given by Rico Mariani was good: it may be appropriate to trigger a GC if there is a significant change in the application's state. For example, in a document editor it may be OK to trigger a GC when a document is closed.

There are few general guidelines in programming that are absolute. Half the time, when somebody says 'you're doing it wrong', they're just spouting a certain amount of dogma. In C, it used to be fear of things like self-modifying code or threads, in GC languages it is forcing the GC or alternatively preventing the GC from running.
As is the case with most guidelines and good rules of thumb (and good design practices), there are rare occasions where it does make sense to work around the established norm. You do have to be very sure you understand the case, that your case really requires the abrogation of common practice, and that you understand the risks and side-effects you can cause. But there are such cases.
Programming problems are widely varied and require a flexible approach. I have seen cases where it makes sense to block GC in garbage collected languages and places where it makes sense to trigger it rather than waiting for it to occur naturally. 95% of the time, either of these would be a signpost of not having approached the problem right. But 1 time in 20, there probably is a valid case to be made for it.

I've learned not to try to outsmart the garbage collector. With that said, I just stick to the using keyword when dealing with unmanaged resources like file I/O or database connections.
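For example, a minimal sketch of the file I/O case (the file name is illustrative):

// The using statement guarantees Dispose is called even if an exception
// is thrown, so the file handle is released deterministically without
// touching the GC. Requires System.IO.
using (var reader = new StreamReader("input.txt"))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        Console.WriteLine(line);
    }
}   // reader.Dispose() runs here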

One case I recently encountered that required manual calls to GC.Collect() was when working with large C++ objects that were wrapped in tiny managed C++ objects, which in turn were accessed from C#.
The garbage collector never got called because the amount of managed memory used was negligible, but the amount of unmanaged memory used was huge. Manually calling Dispose() on the objects would have required that I keep track of when objects are no longer needed myself, whereas calling GC.Collect() will clean up any objects that are no longer referenced.
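A related option, not mentioned in the original answer, is to tell the GC about the unmanaged allocation so it schedules collections sooner; GC.AddMemoryPressure and GC.RemoveMemoryPressure exist for exactly this wrapper scenario. A sketch, assuming a hypothetical NativeImageWrapper:

// Hypothetical wrapper: a tiny managed object holding a large native allocation.
public sealed class NativeImageWrapper : IDisposable
{
    private readonly long _unmanagedBytes;

    public NativeImageWrapper(long unmanagedBytes)
    {
        _unmanagedBytes = unmanagedBytes;
        // ... allocate the native image here ...
        // Tell the GC that this small managed object really represents a large
        // unmanaged allocation, so collections are scheduled sooner.
        GC.AddMemoryPressure(_unmanagedBytes);
    }

    public void Dispose()
    {
        // ... free the native image here ...
        GC.RemoveMemoryPressure(_unmanagedBytes);
    }
}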

Not sure if it is a best practice, but when working with large amounts of images in a loop (i.e. creating and disposing a lot of Graphics/Image/Bitmap objects), I regularly call GC.Collect().
I think I read somewhere that the GC only runs when the program is (mostly) idle, and not in the middle of an intensive loop, so that could look like an area where a manual GC could make sense.

I think you already listed the best practice, and that is NOT to use it unless REALLY necessary. I would strongly recommend looking at your code in more detail, potentially using profiling tools if needed, to answer these questions first:
Do you have something in your code that is declaring items at a larger scope than needed?
Is the memory usage really too high?
Compare performance before and after using GC.Collect() to see if it really helps.
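One rough way to do that comparison (a sketch, not a substitute for a profiler): time the forced collection and look at GC.GetTotalMemory before and after.

long bytesBefore = GC.GetTotalMemory(false);        // approximate managed heap size
var sw = System.Diagnostics.Stopwatch.StartNew();

GC.Collect();
GC.WaitForPendingFinalizers();

sw.Stop();
long bytesAfter = GC.GetTotalMemory(false);
Console.WriteLine("Reclaimed {0:N0} bytes in {1} ms",
                  bytesBefore - bytesAfter, sw.ElapsedMilliseconds);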

Suppose your program doesn't have a memory leak, but objects accumulate and cannot be collected in Gen 0 because:
1) They are referenced for a long time, so they get into Gen1 & Gen2;
2) They are large objects (larger than about 85 KB), so they go onto the LOH (Large Object Heap). And the LOH does not compact, as Gen0, Gen1 & Gen2 do.
Check the ".NET Memory" performance counters and you can see that problem 1) is really not a problem. Generally, every 10 Gen0 GCs trigger 1 Gen1 GC, and every 10 Gen1 GCs trigger 1 Gen2 GC. Theoretically, Gen1 & Gen2 could never be collected if there is no pressure on Gen0 (if the program's memory usage is really weird). That has never happened to me.
For problem 2), you can check the ".NET Memory" performance counters to verify whether the LOH is getting bloated. If it is really an issue for your application, perhaps you can create a large-object pool as this blog suggests: http://blogs.msdn.com/yunjin/archive/2004/01/27/63642.aspx
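As a quick sanity check from code (a sketch; note that the LOH compaction setting only exists on .NET 4.5.1 and later, which postdates this answer), GC.CollectionCount shows how often each generation has actually run, and the LOH can be compacted once on demand:

// How often has each generation been collected so far?
Console.WriteLine("Gen0: {0}  Gen1: {1}  Gen2: {2}",
                  GC.CollectionCount(0), GC.CollectionCount(1), GC.CollectionCount(2));

// .NET 4.5.1+ only: request a one-time LOH compaction on the next full collection.
System.Runtime.GCSettings.LargeObjectHeapCompactionMode =
    System.Runtime.GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();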

I would like to add that:
Calling GC.Collect() (+ WaitForPendingFinalizers()) is one part of the story.
As rightly mentioned by others, GC.Collect() is a non-deterministic collection and is left to the discretion of the GC itself (the CLR).
Even if you add a call to WaitForPendingFinalizers, it may not be deterministic.
Take the code from this MSDN link and run it with the object loop iteration count as 1 or 2. You will find what non-deterministic means (set a breakpoint in the object's destructor).
Precisely: the destructor is not called by Wait...() when there are just 1 (or 2) lingering objects. [Citation required.]
If your code is dealing with unmanaged resources (ex: external file handles), you must implement destructors (or finalizers).
Here is an interesting example:
Note: If you have already tried the above example from MSDN, the following code is going to clear the air.
class Program
{
    static void Main(string[] args)
    {
        SomePublisher publisher = new SomePublisher();

        for (int i = 0; i < 10; i++)
        {
            SomeSubscriber subscriber = new SomeSubscriber(publisher);
            subscriber = null;
        }

        GC.Collect();
        GC.WaitForPendingFinalizers();

        Console.WriteLine(SomeSubscriber.Count.ToString());
        Console.ReadLine();
    }
}

public class SomePublisher
{
    public event EventHandler SomeEvent;
}

public class SomeSubscriber
{
    public static int Count;

    public SomeSubscriber(SomePublisher publisher)
    {
        publisher.SomeEvent += new EventHandler(publisher_SomeEvent);
    }

    ~SomeSubscriber()
    {
        SomeSubscriber.Count++;
    }

    private void publisher_SomeEvent(object sender, EventArgs e)
    {
        // TODO: something
        string stub = "";
    }
}
I suggest you first analyze what the output could be, then run it, and then read the reason below:
The destructor is only implicitly called once the program ends.
In order to deterministically clean the object, one must implement IDisposable and make an explicit call to Dispose(). That's the essence! :)
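As an illustration (mine, not part of the original answer), here is a sketch of how the SomeSubscriber above could be cleaned up deterministically: unsubscribe from the publisher in Dispose, so the subscriber is no longer rooted through the event and needs no finalizer at all.

public class SomeSubscriber : IDisposable
{
    private readonly SomePublisher _publisher;

    public SomeSubscriber(SomePublisher publisher)
    {
        _publisher = publisher;
        _publisher.SomeEvent += publisher_SomeEvent;
    }

    public void Dispose()
    {
        // Removing the handler drops the publisher's reference to this object,
        // so it can be collected without waiting for a finalizer.
        _publisher.SomeEvent -= publisher_SomeEvent;
    }

    private void publisher_SomeEvent(object sender, EventArgs e) { }
}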

Large objects are allocated on LOH (large object heap), not on gen 0. If you're saying that they don't get garbage-collected with gen 0, you're right. I believe they are collected only when the full GC cycle (generations 0, 1 and 2) happens.
That being said, I believe on the other side GC will adjust and collect memory more aggressively when you work with large objects and the memory pressure is going up.
It is hard to say whether to collect or not, and in which circumstances. I used to call GC.Collect() after disposing of dialog windows/forms with numerous controls etc. (because by that time the form and its controls had ended up in gen 2 due to creating many instances of business objects/loading much data - no large objects, obviously), but I actually didn't notice any positive or negative effects in the long term by doing so.

One more thing: triggering GC.Collect explicitly may NOT improve your program's performance. It is quite possible to make it worse.
The .NET GC is well designed and tuned to be adaptive, which means it can adjust the Gen0/1/2 thresholds according to the "habits" of your program's memory usage. So, it will adapt to your program after it has been running for some time. Once you invoke GC.Collect explicitly, the thresholds are reset, and .NET has to spend time adapting to your program's "habits" again.
My suggestion is to always trust the .NET GC. If any memory problem surfaces, check the ".NET Memory" performance counters and diagnose your own code.

Not sure if it is a best practice...
Suggestion: do not implement this or anything when unsure. Reevaluate when facts are known, then perform before/after performance tests to verify.

However, if you can reliably test your code to confirm that calling Collect() won't have a negative impact then go ahead...
IMHO, this is similar to saying "If you can prove that your program will never have any bugs in the future, then go ahead..."
In all seriousness, forcing the GC is useful for debugging/testing purposes. If you feel like you need to do it at any other times, then either you are mistaken, or your program has been built wrong. Either way, the solution is not forcing the GC...

There are some scenarios where there will definitely be little to no negative impact on your system when forcing a garbage collection, e.g. on a date roll or at a scheduled time when the system is not in use.
Aside from such times you would need to test performance of your code before and after implementing the forced collect to ensure that it is actually beneficial.

I do NOT recommend manual garbage collection. I assure you that you're not disposing of large objects properly. Make use of the USING statement. Whenever you instantiate an object, be sure to DISPOSE of it when you are through using it. This sample code creates a connection with USING statements, then instantiates a shipping-label object, uses it, and disposes of it properly.
Using con As SqlConnection = New SqlConnection(DB_CONNECTION_STRING)
    con.Open()
    Using command As SqlCommand = New SqlCommand(sqlStr, con)
        Using reader As SqlDataReader = command.ExecuteReader()
            While reader.Read()
                code_here()
            End While
        End Using
    End Using
End Using

Dim f1 As frmShippingLabel
f1 = New frmShippingLabel
f1.PrintLabel()
f1.Dispose()

Related

High CPU usage after long running

I have an issue with my application; I hope somebody can give me suggestions on how to fix it.
I have a multithreaded application. It runs 10-20 threads and in each thread I execute some complicated tasks.
Thread thread = new Thread(ProcessThread);
thread.Start();

private void ProcessThread()
{
    while (IsRunning)
    {
        // do some very complex operations: grab HTTP pages. Save to files. Read from files. Run another threads etc.
    }
}
At the beginning the app uses about 10% CPU and 140 MB of memory. But after 1,000 executions CPU usage is 25%-30% and memory is 1,200 MB. I know that I probably have a memory leak in my code, and I will try to fix it. But what happened with the CPU? Why does it grow? Each execution does the same operations as in the beginning and later (for example open a web page, grab some info and save it to a file).
I think the issue could be with the GC. The more memory the app takes, the more CPU is needed to clean the memory?
The other question: could you please advise a good tool to measure what takes CPU in my app?
And maybe you can recommend a good tool to analyze memory and check where it leaks? I tried JetBrains dotMemory but didn't understand much. Maybe you can help me.
Here is the stat:
http://prntscr.com/dev067
http://prntscr.com/dev7a2
As I see it, I don't really have too much unmanaged memory. But at the same time I see problems with strings and can't understand what's wrong, as the GC should clean them up.
Appreciate any suggestions and recommendations what I can improve.
Unless you are wrapping native types, I doubt you have a memory leak. The memory usage is most likely due to the way the GC works.
The GC will not collect all dead items in a cycle. It will use generations. That means it will collect older objects only if it needs the space. So the memory usage will increase until the first collection. Then, only a fraction of the items is collected, which will lower memory usage somewhat, but not completely. And the GC will also not necessarily return freed memory to the OS.
In higher editions of Visual Studio you'll find a memory profiler.
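A tiny sketch (mine, not the answerer's) that makes this generational behavior visible; the exact generation numbers can vary by runtime and GC mode:

object survivor = new object();
Console.WriteLine(GC.GetGeneration(survivor));   // typically 0

GC.Collect();                                    // survivor is still rooted...
Console.WriteLine(GC.GetGeneration(survivor));   // ...so it is promoted (typically 1)

GC.Collect();
Console.WriteLine(GC.GetGeneration(survivor));   // typically 2 now

Console.WriteLine("Gen 2 collections so far: " + GC.CollectionCount(2));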
It is very unclear what "some complicated tasks" are.
And this especially worries me:
// do some very complex operations: grab HTTP pages. Save to files. Read from files. Run another threads etc.
Well, if you are indeed starting new threads from your processing thread, then it totally makes sense for CPU usage to rise. From a performance perspective, creation of new threads is costly, and that is the reason we use thread pools (recycling threads).
The same goes for memory usage. More threads = more memory required for each thread (each thread has its own stack, requiring additional memory)...
Also, I don't believe that GC is a culprit here.
There are a lot of questions that need to be answered before I can help you find the cause of such behavior, and before we can blame the GC :):
1) Do you start all, let's say, 20 threads at the start of your program?
2) Do you create new threads from already running ones?
3) What is the termination condition for those threads? Do they actually terminate?
I would recommend using dotTrace for determining CPU usage.
Unfortunately, I haven't used any tool for profiling memory usage so I cannot recommend any.
I looked at your screenshot and from that I see you have many objects which are surviving generation 0 collection, so they are being promoted to generation 1 and then to generation 2. This could be a sign of a memory leak, but not necessarily. Without seeing your code it is hard to say. All I can tell is that you are keeping your objects for a very long time. That may be needed, but again I cannot tell without seeing the code.
A bit about GC
When the GC wakes up to clean up, it analyzes the managed heap to see which objects are not rooted and marks them as ready for collection. This is called the mark phase. Then it starts deallocating the memory. This is called the sweep phase. Objects it cannot clean up go to generation 1. Sometime later the GC wakes up again and repeats the above, but this time, since there are items in generation 1, any of those that cannot be collected go to generation 2. This could be a bad sign. Do the objects really need to be around for that long?
So what can you do?
You say the GC must clean stuff. Well yes the GC will clean things but only if you are not referencing them. If an object is rooted then GC will not be able to clean it.
Write Code with GC in mind
To start with the investigation, you need to investigate your code and make sure your code is written with GC in mind. Go through your code and make a list of all the objects you are using. If any object is of a class which implements IDisposable then wrap them in a using statement:
using (Font font1 = new Font("Arial", 10.0f))
{
    // work here depends on font1
}
If you need font1 in one of your classes as a class-level variable, then you have to make the decision about when to call Dispose on it. At the very least this class should implement IDisposable and call Dispose on font1. Users of this class (any code which calls new on this class) should either use it with the using statement or call Dispose on it.
Do not keep objects longer than you need them.
Still need more investigation?
Once you have investigated your code and ensured your code is friendlier to the GC and you still have issues then investigate using a tool. Here is a good article to help you with this part.
Some people have the misconception that calling GC.Collect() will solve the memory leak issue. This is not true at all. Forcing a garbage collection will still follow the same rules and if objects are rooted, you can call GC.Collect() indefinitely, the objects will still not be cleaned up.
Here is a sample application which will show this:
public class Writer : IDisposable
{
    public void Dispose()
    {
    }

    public void Write(string s)
    {
        Console.WriteLine(s);
    }
}

class Program
{
    static void Main(string[] args)
    {
        Writer writer = new Writer();
        writer.Write("1");

        writer.Dispose();
        // writer will still be around because I am referencing it (rooted)
        writer.Write("2");

        GC.Collect();
        // calling GC.Collect() has no impact since writer is still rooted
        writer.Write("3");

        Console.ReadKey();
    }
}
I can only comment on why you have a CPU issue and not GC.
I suggest you verify that you are creating only the number of threads you expect. If you happen to have threads launching threads, it's easy to slip up.
As mentioned in another comment, thread creation and destruction is costly. If you are creating and destroying threads all the time, this will have a notable impact upon your performance.
I suspect the cause of your slowdown is memory thrashing, meaning the amount of memory used by each thread is large enough, and/or the total amount used by all threads is large enough to cause memory pages to be swapped in and out all the time. In certain situations, the processor can spend more time swapping memory to and from disk than it spends executing the thread.
Here are my suggestions:
Use a thread pool (see the sketch after this list). This minimizes the time you need for thread creation and destruction. It will also bound the number of threads you are using.
Make sure each thread executes a reasonable amount of time before it is swapped out -- i.e. you don't have the threads context thrashing.
Make sure the amount of memory each thread is using is not much larger than you expect to prevent memory thrashing.
If you have to use a large amount of memory for each thread, make sure your pattern of usage is such that there isn't a lot of memory paging.
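Here is the sketch referred to in the first suggestion: a minimal example (urlsToGrab and ProcessPage are hypothetical names) that queues work onto the shared CLR thread pool via Task.Run instead of creating a Thread per item. This assumes .NET 4.5 or later and using System.Threading.Tasks.

// Sketch: queue work onto the shared thread pool instead of creating
// and destroying a Thread per work item.
var pending = new List<Task>();
foreach (var url in urlsToGrab)
{
    pending.Add(Task.Run(() => ProcessPage(url)));
}
Task.WaitAll(pending.ToArray());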
Run your application with "Start collecting allocations data immediately" enabled, get a snapshot after a while and look at the memory traffic view.
To understand what occupies memory, open "All objects" grouped by type and look at which types take the most memory.
For performance profiling I could recommend JetBrains dotTrace (timeline mode, for your case).

Can I force garbage collector to collect specific object? [duplicate]

I need to dispose of an object so it can release everything it owns, but it doesn't implement IDisposable, so I can't use it in a using block. How can I make the garbage collector collect it?
You can force a collection with GC.Collect(). Be very careful using this, since a full collection can take some time. The best-practice is to just let the GC determine when the best time to collect is.
Does the object contain unmanaged resources but does not implement IDisposable? If so, it's a bug.
If it doesn't, it shouldn't matter if it gets released right away, the garbage collector should do the right thing.
If it "owns" anything other than memory, you need to fix the object to use IDisposable. If it's not an object you control this is something worth picking a different vendor over, because it speaks to the core of how well your vendor really understands .Net.
If it does just own memory, even a lot of it, all you have to do is make sure the object goes out of scope. Don't call GC.Collect() — it's one of those things that if you have to ask, you shouldn't do it.
You can't perform garbage collection on a single object. You could request a garbage collection by calling GC.Collect(), but this will affect all objects subject to cleanup. It is also highly discouraged, as it can have a negative effect on the performance of later collections.
Also, calling Dispose on an object does not clean up its memory. It only allows the object to remove references to unmanaged resources. For example, calling Dispose on a StreamWriter closes the stream and releases the Windows file handle. The memory for the object on the managed heap does not get reclaimed until a subsequent garbage collection.
Chris Sells also discussed this on .NET Rocks. I think it was during his first appearance but the subject might have been revisited in later interviews.
http://www.dotnetrocks.com/default.aspx?showNum=10
This article by Francesco Balena is also a good reference:
When and How to Use Dispose and Finalize in C#
http://www.devx.com/dotnet/Article/33167/0/page/1
Garbage collection in .NET is non deterministic, meaning you can't really control when it happens. You can suggest, but that doesn't mean it will listen.
Tell us a little bit more about the object and why you want to do this. We can make some suggestions based on that. Code always helps. And depending on the object, there might be a Close method or something similar; maybe the intended usage is to call that. If there is no Close or Dispose type of method, you probably don't want to rely on that object, as you will probably get memory leaks if it does in fact contain resources which need to be released.
If the object goes out of scope and has no external references, it will be collected rather fast (likely on the next collection).
BEWARE of fragmentation: in many cases GC.Collect() or IDisposal is not very helpful, especially for large objects (the LOH is for objects of roughly 85 KB and up; it performs no compaction and is subject to high levels of fragmentation for many common use cases), which will then lead to out-of-memory (OOM) issues even with potentially hundreds of MB free. As time marches on, things get bigger, though perhaps not this size threshold for LOH-relegated objects, and high degrees of parallelism exacerbate the issue simply because more objects (likely varying in size) are instantiated and released in less time.
Arrays are the usual suspects for this problem (it's also often hard to identify due to non-specific exceptions and assertions from the runtime; something like "high % of large object heap fragmentation" would be swell). The remedy for code suffering from this problem is to implement an aggressive re-use strategy.
The ObjectPool class in System.Collections.Concurrent from the parallel extensions beta 1 samples helps (unfortunately there is no simple ubiquitous pattern that I have seen, like maybe some attached property/extension methods?). It is simple enough to drop in or re-implement for most projects: you assign a generator Func<> and use Get/Put helper methods to re-use your previous objects and forgo the usual garbage collection. It is usually sufficient to focus on arrays and not the individual array elements.
It would be nice if .NET 4 updated all of the .ToArray() methods everywhere to include .ToArray(T target).
Getting the hang of using SOS/windbg (.loadby sos mscoreei for CLRv4) to analyze this class of issue can help. Thinking about it, the current garbage collection system is more like garbage re-cycling (using the same physical memory again); ObjectPool is analogous to garbage re-using. If anybody remembers the 3 R's, reducing your memory use is a good idea too, for performance's sake ;)
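A minimal sketch of the Get/Put idea (an illustration only, not the actual sample class from the parallel extensions download):

using System;
using System.Collections.Concurrent;

public class SimpleObjectPool<T>
{
    private readonly ConcurrentBag<T> _items = new ConcurrentBag<T>();
    private readonly Func<T> _generator;

    public SimpleObjectPool(Func<T> generator)
    {
        _generator = generator;
    }

    public T Get()
    {
        T item;
        return _items.TryTake(out item) ? item : _generator();
    }

    public void Put(T item)
    {
        _items.Add(item);   // returned objects are reused instead of collected
    }
}

// Usage sketch: reuse big buffers instead of letting them churn the LOH.
// var pool = new SimpleObjectPool<byte[]>(() => new byte[1 << 20]);
// byte[] buffer = pool.Get();  /* ... use it ... */  pool.Put(buffer);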

Undesirable Garbage Collection

In a section titled "Forcing a Garbage Collection" from the book "C# 2010 and the .NET 4 Platform" by Andrew Troelsen, it is written:
"Again, the whole purpose of the .NET garbage collector is to manage memory on our behalf. However, in some very rare circumstances, it may be beneficial to programmatically force a garbage collection using GC.Collect(). Specifically:
• Your application is about to enter into a block of code that you don’t want interrupted by a possible garbage collection.
...
"
But stop! Is there such a case when garbage collection is undesirable? I have never seen/read anything like that (because of my limited development experience, of course). If you have done something like that in your practice, please share. For me it's a very interesting point.
Thank you!
Yes, there's absolutely a case when garbage collection is undesirable: when a user is waiting for something to happen, and they have to wait longer because the code can't proceed until garbage collection has completed.
That's Troelsen's point: if you have a specific point where you know a GC isn't problematic and is likely to be able to collect significant amounts of garbage then it may be a good idea to provoke it then, to avoid it triggering at a less opportune moment.
I run a recipe related website, and I store a massive graph of recipes and their ingredient usage in memory. Due to the way I pivot this information for quick access, I have to load several gigs of data into memory when the application loads before I can organize the data into a very optimized graph. I create a huge amount of tiny objects on the heap that, once the graph is built, become unreachable.
This is all done when the web application loads, and probably takes 4-5 seconds to do. After I do so, I call GC.Collect(); because I'd rather re-claim all that memory now rather than potentially block all threads during an incoming HTTP request while the garbage collector is freaking out cleaning up all these short lived objects. I also figure it's better to clean up now since the heap is probably less fragmented at this time, since my app hasn't really done anything else so far. Delaying this might result in many more objects being created, and the heap needing to be compressed more when GC runs automatically.
Other than that, in my 12 years of .NET programming, I've never come across a situation where I wanted to force the garbage collector to run.
The recommendation is that you should not explicitly call Collect in your code. Can you find circumstances where it's useful?
Others have detailed some, and there are no doubt more. The first thing to understand, though, is don't do it. It's a last resort: investigate other options, learn how the GC works, look at how your code is impacted, and follow best practices for your designs.
Calling Collect at the wrong point will make your performance worse. Worse still, relying on it makes your code very fragile. The rare conditions required to make a call to Collect beneficial, or at least not harmful, can be utterly undone by a simple change to the code, which will result in unexpected OOMs, sluggish performance and such.
I call it before performance measurements so that the GC doesn't falsify the results.
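Concretely, that pre-measurement step looks something like this (a sketch; RunOperationUnderTest is a hypothetical placeholder for whatever is being measured):

// Stabilize the heap so earlier allocations don't trigger a collection
// in the middle of the timed region.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();

var sw = System.Diagnostics.Stopwatch.StartNew();
RunOperationUnderTest();
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds + " ms");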
Another situation is unit tests testing for memory leaks:

object doesItLeak = /*...*/;   // the object you want to have tested
WeakReference reference = new WeakReference(doesItLeak);

GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();

Assert.That(!reference.IsAlive);
Besides those, I did not encounter a situation in which it would actually be helpful.
Especially in production code, GC.Collect should never be found IMHO.
It would be very rare, but GC can be a moderately expensive process, so if there's a particular section that's timing sensitive, you don't want that section interrupted by GC.
Your application is about to enter into a block of code that you don't want interrupted by a possible garbage collection. ...
A very suspect argument (that is nevertheless used a lot).
Windows is not a Real Time OS. Your code (Thread/Process) can always be pre-empted by the OS scheduler. You do not have a guaranteed access to the CPU.
So it boils down to: how does the time for a GC-run compare to a time-slot (~ 20 ms) ?
There is very little hard data available about that; I have searched a few times.
From my own observation (very informal), a gen-0 collection is < 40 ms, usually a lot less. A full gen-2 can run into ~100 ms, probably more.
So the 'risk' of being interrupted by the GC is of the same order of magnitude as being swapped out for another process. And you can't control the latter.

Why is it always necessary to implement IDisposable on an object that has an IDisposable member?

From what I can tell, it is an accepted rule that if you have a class A that has a member m that is IDisposable, A should implement IDisposable and it should call m.Dispose() inside of it.
I can't find a satisfying reason why this is the case.
I understand the rule that if you have unmanaged resources, you should provide a finalizer along with IDisposable so that if the user doesn't explicitly call Dispose, the finalizer will still clean up during GC.
However, with that rule in place, it seems like you shouldn't need to have the rule that this question is about. For instance...
If I have a class:
class MyImage
{
    private Image _img;
    ...
}
Convention states that I should have MyImage : IDisposable. But if Image has followed conventions and implemented a finalizer and I don't care about the timely release of resources, what's the point?
UPDATE
Found a good discussion on what I was trying to get at here.
But if Image has followed conventions and implemented a finalizer and I don't care about the timely release of resources, what's the point?
You've missed the point of Dispose entirely. It's not about your convenience. It's about the convenience of other components that might want to use those unmanaged resources. Unless you can guarantee that no other code in the system cares about the timely release of resources, and the user doesn't care about timely release of resources, you should release your resources as soon as possible. That's the polite thing to do.
In the classic Prisoner's Dilemma, a lone defector in a world of cooperators gains a huge benefit. But in your case, being a lone defector produces only the tiny benefit of you personally saving a few minutes by writing low-quality, best-practice-ignoring code. It's your users and all the programs they use that suffer, and you gain practically nothing. Your code takes advantage of the fact that other programs unlock files and release mutexes and all that stuff. Be a good citizen and do the same for them. It's not hard to do, and it makes the whole software ecosystem better.
UPDATE: Here is an example of a real-world situation that my team is dealing with right now.
We have a test utility. It has a "handle leak" in that a bunch of unmanaged resources aren't aggressively disposed; it's leaking maybe half a dozen handles per "task". It maintains a list of "tasks to do" when it discovers disabled tests, and so on. We have ten or twenty thousand tasks in this list, so we very quickly end up with so many outstanding handles -- handles that should be dead and released back into the operating system -- that soon none of the code in the system that is not related to testing can run. The test code doesn't care. It works just fine. But eventually the code being tested can't make message boxes or other UI and the entire system either hangs or crashes.
The garbage collector has no reason to know that it needs to run finalizers more aggressively to release those handles sooner; why should it? Its job is to manage memory. Your job is to manage handles, so you've got to do that job.
But if Image has followed conventions and implemented a finalizer and I don't care about the timely release of resources, what's the point?
Then there isn't one, if you don't care about timely release and you can ensure that the disposable object is written correctly (in truth I never make an assumption like that, not even with MS's code; you never know when something accidentally slipped by). The point is that you should care, as you never know when it will cause a problem. Think about an open database connection: leaving it hanging around means that it isn't returned to the pool, and you can run out of connections if you have several requests come in for one.
Nothing says you have to do it if you don't care. Think of it this way: it's like releasing variables in an unmanaged program. You don't have to, but it is highly advisable. If for no other reason, the person inheriting the program doesn't have to wonder why it wasn't taken care of and then try to clean it up.
Firstly, there's no guaranteeing when an object will be cleaned up by the finalizer thread - think about the case where a class has a reference to a sql connection. Unless you make sure this is disposed of promptly, you'll have a connection open for an unknown period of time - and you won't be able to reuse it.
Secondly, finalization is not a cheap process - you should be making sure that if your objects are disposed of properly you're calling GC.SuppressFinalize(this) to prevent finalization happening.
Expanding on the "not cheap" aspect, the finalizer thread is a high-priority thread. It will take resources away from your main application if you give it too much to do.
Edit: OK, here's a blog article by Chris Brumme about finalization, including why it is expensive. (I knew I'd read loads about this somewhere.)
If you don't care about the timely release of resources, then indeed there is no point. If you can be sure that the code is only for your consumption and you've got plenty of free memory/resources why not let GC hoover it up when it chooses to. OTOH, if someone else is using your code and creating many instances of (e.g.) MyImage, it's going to be pretty difficult to control memory/resource usage unless it disposes nicely.
Many classes require that Dispose be called to ensure correctness. If some C# code uses an iterator with a "finally" block, for example, the code in that block will not run if an enumerator is created with that iterator and not disposed. While there a few cases where it would be impractical to ensure objects were cleaned up without finalizers, for the most part code which relies upon finalizers for correct operation or to avoid memory leaks is bad code.
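A short sketch of that iterator case: the finally block only runs when the enumerator is disposed (or iteration runs to completion), so abandoning the enumerator skips the cleanup. Requires System.Collections.Generic and System.IO.

static IEnumerable<int> ReadNumbers(string path)
{
    var reader = new StreamReader(path);
    try
    {
        string line;
        while ((line = reader.ReadLine()) != null)
            yield return int.Parse(line);
    }
    finally
    {
        reader.Dispose();   // runs when the enumerator is disposed or iteration finishes
    }
}

// foreach disposes the enumerator, so the file is closed when the loop ends:
//   foreach (int n in ReadNumbers("numbers.txt")) { /* ... */ }
// Calling GetEnumerator(), iterating part-way and then abandoning it without
// Dispose skips the finally block, leaving the file open until the underlying
// stream is eventually finalized.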
If your code acquires ownership of an IDisposable object, then unless either the object's class is sealed or your code creates the object by calling a constructor (as opposed to a factory method), you have no way of knowing what the real type of the object is, and whether it can be safely abandoned. Microsoft may have originally intended that it should be safe to abandon any type of object, but that is unrealistic, and the belief that it should be safe to abandon any type of object is unhelpful. If an object subscribes to events, allowing for safe abandonment will require either adding a level of weak indirection to all events, or a level of (non-weak) indirection to all other accesses. In many cases, it's better to require that a caller Dispose an object correctly than to add significant overhead and complexity to allow for abandonment.
Note also, by the way, that even when objects try to accommodate abandonment it can still be very expensive. Create a Microsoft.VisualBasic.Collection (or whatever it's called), add a few objects, and create and Dispose a million enumerators: no problem, it executes very quickly. Now create and abandon a million enumerators: major snooze fest unless you force a GC every few thousand enumerators. The Collection object is written to allow for abandonment, but that doesn't mean it doesn't have a major cost.
If an object you're using implements IDisposable, it's telling you it has something important to do when you're finished with it. That important thing may be to release unmanaged resources, or unhook from events so that it doesn't handle events after you think you're done with it, etc, etc. By not calling the Dispose, you're saying that you know better about how that object operates than the original author. In some tiny edge cases, this may actually be true, if you authored the IDisposable class yourself, or you know of a bug or performance problem related to calling Dispose. In general, it's very unlikely that ignoring a class requesting you to dispose it when you're done is a good idea.
Talking about finalizers - as has been pointed out, they have a cost, which can be avoided by Disposing the object (if it uses SuppressFinalize). Not just the cost of running the finalizer itself, and not just the cost of having to wait till that finalizer is done before the GC can collect the object. An object with a finalizer survives the collection in which it is identified as being unused and needing finalization. So it will be promoted (if it's not already in gen 2). This has several knock on effects:
The next higher generation will be collected less frequently, so after the finalizer runs, you may be waiting a long time before the GC comes around to that generation and sweeps your object away. So it can take a lot longer to free memory.
This adds unnecessary pressure to the generation the object is promoted to. If it's promoted from gen 0 to gen 1, then gen 1 will fill up earlier than it needs to.
This can lead to more frequent garbage collections at higher generations, which is another performance hit.
If the object's finalizer isn't completed by the time the GC comes around to the higher generation, the object can be promoted again. Hence in a bad case you can cause an object to be promoted from gen 0 to gen 2 without good reason.
Obviously if you're only doing this on one object it's not likely to cost you anything noticeable. If you're doing it as general practice because you find calling Dispose on objects you're using tiresome, then it can lead to all of the problems above.
Dispose is like a lock on a front door. It's probably there for a reason, and if you're leaving the building, you should probably lock the door. If it wasn't a good idea to lock it, there wouldn't be a lock.
Even if you don't care in this particular case, you should still follow the standard because you will care in some cases. It's much easier to set a standard and follow it always based on specific guidelines than have a standard that you sometimes disregard. This is especially true as your team grows and your product ages.

How to avoid garbage collection in real time .NET application?

I'm writing a financial C# application which receives messages from the network, translates them into different objects according to the message type and finally applies the application business logic to them.
The point is that after the business logic is applied, I'm very sure I will never need this instance again. Rather than wait for the garbage collector to free them, I'd like to explicitly "delete" them.
Is there a better way to do this in C#? Should I use a pool of objects to always reuse the same set of instances, or is there a better strategy?
The goal is to prevent garbage collection from using any CPU during a time-critical process.
Don't delete them right away. Calling the garbage collector for each object is a bad idea. Normally you really don't want to mess with the garbage collector at all, and even time critical processes are just race conditions waiting to happen if they're that sensitive.
But if you know you'll have busy vs light load periods for your app, you might try a more general GC.Collect() when you reach a light period to encourage cleanup before the next busy period.
Look here: http://msdn.microsoft.com/en-us/library/bb384202.aspx
You can tell the garbage collector that you're doing something critical at the moment, and it will try to be nice to you.
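The linked page covers GC latency modes; a sketch of what that looks like in code (GCSettings.LatencyMode lives in System.Runtime, and LowLatency should only be held for short, critical sections):

// Requires: using System.Runtime;
GCLatencyMode oldMode = GCSettings.LatencyMode;
try
{
    // Suppress blocking full collections while the critical section runs.
    GCSettings.LatencyMode = GCLatencyMode.LowLatency;
    // ... time-critical message processing here ...
}
finally
{
    GCSettings.LatencyMode = oldMode;   // always restore the previous mode
}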
You hit on it yourself: use a pool of objects and reuse those objects. The semantics of the calls to those objects would need to be hidden behind a factory facade. You'll need to grow the pool in some pre-defined way; perhaps double the size every time it hits the limit (a high-water algorithm), or grow by a fixed percentage. I'd really strongly advise you not to call GC.Collect().
When the load on your pool gets low enough you could shrink the pool, and that will eventually trigger a garbage collection; let the CLR worry about it.
Attempting to second-guess the garbage collector is generally a very bad idea. On Windows, the garbage collector is a generational one and can be relied upon to be pretty efficient. There are some noted exceptions to this general rule - the most common being the occurrence of a one-time event that you know for a fact will have caused a lot of old objects to die - once objects are promoted to Gen2 (the longest lived) they tend to hang around.
In the case you mention, you sound as though you are generating a number of short-lived objects - these will result in Gen0 collections. These happen relatively often anyway, and are the most efficient. You could avoid them by having a reusable pool of objects, if you prefer, but it is best to determine for certain whether GC is a performance problem before taking such action - the CLR Profiler is the tool for doing this.
It should be noted that the garbage collector is different on different .NET frameworks - on the compact framework (which runs on the Xbox 360 and on mobile platforms) it is a non-generational GC and as such you must be much more careful about what garbage your program generates.
Forcing a GC.Collect() is generally a bad idea, leave the GC to do what it does best. It sounds like the best solution would be to use a pool of objects that you can grow if necessary - I've used this pattern successfully.
This way you avoid not only the garbage collection but the regular allocation cost as well.
Finally, are you sure that the GC is causing you a problem? You should probably measure and prove this before implementing any perf-saving solutions - you may be causing yourself unnecessary work!
"The goal being to avoid the garbage collection to use any CPU during
a time critical process"
Q: If by time critical, you mean you're listening to some esoteric piece of hardware and you can't afford to miss the interrupt?
A: If so, then C# isn't the language to use; you want Assembler, C or C++ for that.
Q: If by time critical, you mean while there are lots of messages in the pipe, and you don't want to let the garbage collector slow things down?
A: If so, you are worrying needlessly. By the sound of things your objects are very short lived, which means the garbage collector will recycle them very efficiently, without any apparent lag in performance.
However, the only way to know for sure is to test it: set it up to run overnight processing a constant stream of test messages. I'll be stunned if your performance stats can spot when the GC kicks in (and even if you can spot it, I'll be even more surprised if it actually matters).
Get a good understanding and feel for how the garbage collector behaves, and you will understand why what you are thinking of here is not recommended, unless you really want the CLR to spend time rearranging objects in memory a lot.
http://msdn.microsoft.com/en-us/magazine/bb985010.aspx
http://msdn.microsoft.com/en-us/magazine/bb985011.aspx
How intensive is the app? I wrote an app that captures 3 sound cards (Managed DirectX, 44.1KHz, Stereo, 16-bit), in 8KB blocks, and sends 2 of the 3 streams to another computer via TCP/IP. The UI renders an audio level meter and (smooth) scrolling title/artist for each of the 3 channels. This runs on PCs with XP, 1.8GHz, 512MB, etc. The App uses about 5% of the CPU.
I stayed clear of manually calling GC methods. But I did have to tune a few things that were wasteful. I used RedGate's ANTS profiler to home in on the wasteful portions. An awesome tool!
I wanted to use a pool of pre-allocated byte arrays, but the managed DX Assembly allocates byte buffers internally, then returns that to the App. It turned out that I didn't have to.
If it is absolutely time critical then you should use a deterministic platform like C/C++. Even calling GC.Collect() will generate CPU cycles.
Your question starts off with the suggestion that you want to save memory by getting rid of objects. This is a space-critical optimization. You need to decide what you really want, because the GC is better at optimizing this situation than a human.
From the sound of it, it seems like you're talking about deterministic finalization (destructors in C++), which doesn't exist in C#. The closest thing that you will find in C# is the Disposable pattern. Basically you implement the IDisposable interface.
The basic pattern is this:
public class MyClass : IDisposable
{
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed)
            return;

        if (disposing)
        {
            // Dispose managed resources here
        }

        _disposed = true;
    }
}
You could keep a limited number of instances of each type in a pool, and reuse the instances you are already done with. The size of the pool would depend on the amount of messages you'll be processing.
Instead of creating a new instance of an object every time you get a message, why don't you reuse objects that have already been used? This way you won't be fighting against the garbage collector and your heap memory won't be getting fragmented.**
For each message type, you can create a pool to hold the instances that are not in use. Whenever you receive a network message, you look at the message type, pull a waiting instance out of the appropriate pool and apply your business logic. After that, you put that instance of the message object back into its pool.
You will most likely want to "lazy load" your pool with instances so your code scales easily. Therefore, your pool class will need to detect when a null instance has been pulled and fill it in before handing it out. Then when the calling code puts it back in the pool it's a real instance.
** "Object pooling is a pattern to use that allows objects to be reused rather than allocated and deallocated, which helps to prevent heap fragmentation as well as costly GC compactions."
http://geekswithblogs.net/robp/archive/2008/08/07/speedy-c-part-2-optimizing-memory-allocations---pooling-and.aspx
In theory the GC shouldn't run if your CPU is under heavy load or unless it really needs to. But if you have to, you may want to just keep all of your objects in memory, perhaps a singleton instance, and never clean them up unless you're ready. That's probably the only way to guarantee when the GC runs.
