GC Behavior and CLR Thread Hijacking - c#

I was reading about the GC in the book CLR via C#, specifically about when the CLR wants to start a collection. I understand that it has to suspend the threads before a collection occurs, but it mentions that it has to do this when the thread instruction pointer reaches a safe point. In the cases where it's not in a safe point, it tries to get to one quickly, and it does so by hijacking the thread (inserting a special function pointer in the thread stack). That's all fine and dandy, but I thought managed threads by default were safe?
I had initially thought it might have been referring to unmanaged threads, but the CLR lets unmanaged threads continue executing because any object being used should have been pinned anyway.
So, what is a safe point in a managed thread, and how can the GC determine what that is?
EDIT:
I don't think I was being specific enough. According to this MSDN article, even when Thread.Suspend is called, the thread will not actually be suspended until a safe point is reached. It goes on to state that a safe point is a point in a thread's execution at which a garbage collection can be performed.
I think I was unclear in my question. I realize that a Thread can only be suspended at a safe point and they have to be suspended for a GC, but I can't seem to find a clear answer as to what a safe point is. What determines a point in code as being safe?

'Safe Points' are where we are:
1. Not in a catch block.
2. Not inside a finally.
3. Not inside a lock.
4. Not inside a p/invoke'd call (in managed code). Not running unmanaged code in the CLR.
5. The memory tree is walkable.
Point #5 is a bit confusing, but there are times when the memory tree will not be walkable. For example, after optimization, the CLR may new an object and not assign it directly to a variable. As far as the GC could tell, that object would look like a dead object ready to be collected. When this happens, the compiler instructs the GC not to run a collection yet.
Here's a blog post on msdn with a little bit more information: http://blogs.msdn.com/b/abhinaba/archive/2009/09/02/netcf-gc-and-thread-blocking.aspx
EDIT: Well, sir, I was WRONG about #4. See here in the 'Safe Point' section. If we are inside a p/invoke (unmanaged) code section then it is allowed to run until it comes back out to managed code again.
However, according to this MSDN article, if we are in an unmanaged portion of CLR code, then it is not considered safe and they will wait until the code returns to managed. (I was close, at least).

Actually none of the answers I found so far on SO explains the 'why', i.e. what makes a certain point in code unsafe. And for that, from what I've read in "Pro .NET Memory Management", the answer seems to be: in principle every point in code can be a safe point, as long as GCInfo is generated by the JIT to fully describe the GC roots for that given point in code.
However, generating a safe point for every instruction is both impractical (think of the memory overhead: we'd be talking about GCInfo for every CPU instruction) and unnecessary (what really matters is the "time-to-safe-point" (TTSP) latency, so it's sufficient to generate safe points with a granularity that keeps that latency small enough).
Therefore, the JIT compiler uses some heuristics to decide how often safe points are generated, trading memory overhead (not too many) against GC latency due to the TTSP delay (not too few). Most of the time it's sufficient to rely on method call sites to act as safe points, since they occur frequently enough to keep the TTSP delay very small. One exception is a tight loop within which no method calls are made, in which case the JIT may decide to inject safe points at the loop repetition boundary.
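To make the tight-loop case concrete, here is an illustrative sketch (not decompiled output) of the kind of loop where the JIT may choose to emit a poll at the loop back-edge, since there are no call sites inside the body to act as safe points:

// Illustrative only: a long-running loop with no method calls in its body.
// Call sites normally double as safe points; in a loop like this the JIT
// may insert a GC poll at the loop boundary so the thread can reach a safe
// point quickly (keeping the time-to-safe-point small) when a GC is requested.
long sum = 0;
for (int i = 0; i < 1_000_000_000; i++)
{
    sum += i;   // no calls here, so no "free" safe points inside the body
}
Console.WriteLine(sum);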
So to sum it up, nothing fundamentally makes a particular point in code "unsafe" for GC. It's only a matter of tradeoff by JIT to decide how often safe-points are inserted.

Related

Undesirable Garbage Collection

In a title "Forcing a Garbage Colection" from book "C# 2010 and the .NET 4 Platform" by Andrew Troelsen written:
"Again, the whole purpose of the .NET garbage collector is to manage memory on our behalf. However, in some very rare circumstances, it may be beneficial to programmatically force a garbage collection using GC.Collect(). Specifically:
• Your application is about to enter into a block of code that you don’t want interrupted by a possible garbage collection.
...
"
But stop! Is there really such a case where garbage collection is undesirable? I've never seen or read about anything like that (because of my limited development experience, of course). If in your own practice you have done something like that, please share. For me it's a very interesting point.
Thank you!
Yes, there's absolutely a case when garbage collection is undesirable: when a user is waiting for something to happen, and they have to wait longer because the code can't proceed until garbage collection has completed.
That's Troelsen's point: if you have a specific point where you know a GC isn't problematic and is likely to be able to collect significant amounts of garbage then it may be a good idea to provoke it then, to avoid it triggering at a less opportune moment.
I run a recipe related website, and I store a massive graph of recipes and their ingredient usage in memory. Due to the way I pivot this information for quick access, I have to load several gigs of data into memory when the application loads before I can organize the data into a very optimized graph. I create a huge amount of tiny objects on the heap that, once the graph is built, become unreachable.
This is all done when the web application loads, and probably takes 4-5 seconds to do. After I do so, I call GC.Collect(); because I'd rather reclaim all that memory now than potentially block all threads during an incoming HTTP request while the garbage collector is busy cleaning up all these short-lived objects. I also figure it's better to clean up now, since the heap is probably less fragmented at this time - my app hasn't really done anything else so far. Delaying this might result in many more objects being created, and the heap needing to be compacted more when the GC runs automatically.
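As a rough sketch of that startup pattern (the loader name here is hypothetical, not my actual code):

protected void Application_Start()
{
    // Hypothetical call that builds the in-memory recipe graph, creating
    // millions of short-lived helper objects along the way.
    RecipeGraph.LoadAndBuild();

    // Reclaim the construction garbage now, before any HTTP requests arrive,
    // rather than letting a large collection land in the middle of a request.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();
}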
Other than that, in my 12 years of .NET programming, I've never come across a situation where I wanted to force the garbage collector to run.
The recommendation is that you should not explicitly call Collect in your code. Can you find circumstances where it's useful?
Others have detailed some, and there are no doubt more. The first thing to understand, though, is don't do it. It's a last resort: investigate other options, learn how the GC works and how your code is impacted by it, and follow best practices for your designs.
Calling Collect at the wrong point will make your performance worse. Worse still, relying on it makes your code very fragile. The rare conditions required to make a call to Collect beneficial, or at least not harmful, can be utterly undone by a simple change to the code, which will result in unexpected OOMs, sluggish performance and the like.
I call it before performance measurements so that the GC doesn't falsify the results.
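Something along these lines (a sketch, not a full benchmarking harness; RunWorkload is a placeholder for the code being measured):

// Collect before timing so a pending GC from earlier allocations
// doesn't land inside the measured region and skew the numbers.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();

var sw = System.Diagnostics.Stopwatch.StartNew();
RunWorkload();   // placeholder: the code being measured
sw.Stop();
Console.WriteLine(sw.Elapsed);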
Another situation is unit tests testing for memory leaks:
object doesItLeak = /*...*/; // the object you want to have tested
WeakReference reference = new WeakReference(doesItLeak);

doesItLeak = null; // drop the strong local reference so only the weak one remains
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();

Assert.That(!reference.IsAlive);
Besides those, I did not encounter a situation in which it would actually be helpful.
Especially in production code, GC.Collect should never be found IMHO.
It would be very rare, but GC can be a moderately expensive process, so if there's a particular section that's timing sensitive, you don't want that section interrupted by GC.
Your application is about to enter into a block of code that you don't want interrupted by a possible garbage collection. ...
A very suspect argument (that is nevertheless used a lot).
Windows is not a Real Time OS. Your code (Thread/Process) can always be pre-empted by the OS scheduler. You do not have a guaranteed access to the CPU.
So it boils down to: how does the time for a GC-run compare to a time-slot (~ 20 ms) ?
There is very little hard data available about that, I searched a few times.
From my own observation (very informal), a gen-0 collection is < 40 ms, usually a lot less. A full gen-2 can run into ~100 ms, probably more.
So the 'risk' of being interrupted by the GC is of the same order of magnitude as being swapped out for another process. And you can't control the latter.

Why is it always necessary to implement IDisposable on an object that has an IDisposable member?

From what I can tell, it is an accepted rule that if you have a class A that has a member m that is IDisposable, A should implement IDisposable and it should call m.Dispose() inside of it.
I can't find a satisfying reason why this is the case.
I understand the rule that if you have unmanaged resources, you should provide a finalizer along with IDisposable so that if the user doesn't explicitly call Dispose, the finalizer will still clean up during GC.
However, with that rule in place, it seems like you shouldn't need to have the rule that this question is about. For instance...
If I have a class:
class MyImage
{
    private Image _img;
    // ...
}
Convention states that I should have MyImage : IDisposable. But if Image has followed conventions and implemented a finalizer and I don't care about the timely release of resources, what's the point?
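For reference, the convention in question would look roughly like this (a sketch, not the full dispose pattern):

class MyImage : IDisposable
{
    private Image _img;

    public void Dispose()
    {
        // Forward disposal to the owned member so its resources
        // are released as soon as the wrapper is done with.
        if (_img != null)
            _img.Dispose();
    }
}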
UPDATE
Found a good discussion on what I was trying to get at here.
But if Image has followed conventions and implemented a finalizer and I don't care about the timely release of resources, what's the point?
You've missed the point of Dispose entirely. It's not about your convenience. It's about the convenience of other components that might want to use those unmanaged resources. Unless you can guarantee that no other code in the system cares about the timely release of resources, and the user doesn't care about timely release of resources, you should release your resources as soon as possible. That's the polite thing to do.
In the classic Prisoner's Dilemma, a lone defector in a world of cooperators gains a huge benefit. But in your case, being a lone defector produces only the tiny benefit of you personally saving a few minutes by writing low-quality, best-practice-ignoring code. It's your users and all the programs they use that suffer, and you gain practically nothing. Your code takes advantage of the fact that other programs unlock files and release mutexes and all that stuff. Be a good citizen and do the same for them. It's not hard to do, and it makes the whole software ecosystem better.
UPDATE: Here is an example of a real-world situation that my team is dealing with right now.
We have a test utility. It has a "handle leak" in that a bunch of unmanaged resources aren't aggressively disposed; it's leaking maybe half a dozen handles per "task". It maintains a list of "tasks to do" when it discovers disabled tests, and so on. We have ten or twenty thousand tasks in this list, so we very quickly end up with so many outstanding handles -- handles that should be dead and released back into the operating system -- that soon none of the code in the system that is not related to testing can run. The test code doesn't care. It works just fine. But eventually the code being tested can't make message boxes or other UI and the entire system either hangs or crashes.
The garbage collector has no reason to know that it needs to run finalizers more aggressively to release those handles sooner; why should it? Its job is to manage memory. Your job is to manage handles, so you've got to do that job.
But if Image has followed conventions and implemented a finalizer and I don't care about the timely release of resources, what's the point?
Then there isn't one, if you don't care about timely release and you can ensure that the disposable object is written correctly (in truth, I never make an assumption like that, not even with MS's code - you never know when something accidentally slipped by). The point is that you should care, as you never know when it will cause a problem. Think about an open database connection: leaving it hanging around means that it isn't returned to the pool, and you can run out of connections if several requests come in at once.
Nothing says you have to do it if you don't care. Think of it this way: it's like freeing memory in an unmanaged program. You don't have to, but it is highly advisable. If for no other reason, the person inheriting the program doesn't have to wonder why it wasn't taken care of and then try to clean it up.
Firstly, there's no guaranteeing when an object will be cleaned up by the finalizer thread - think about the case where a class has a reference to a sql connection. Unless you make sure this is disposed of promptly, you'll have a connection open for an unknown period of time - and you won't be able to reuse it.
Secondly, finalization is not a cheap process - you should be making sure that if your objects are disposed of properly you're calling GC.SuppressFinalize(this) to prevent finalization happening.
Expanding on the "not cheap" aspect, the finalizer thread is a high-priority thread. It will take resources away from your main application if you give it too much to do.
Edit: OK, here's a blog article by Chris Brumme about finalization, including why it is expensive. (I knew I'd read loads about this somewhere.)
If you don't care about the timely release of resources, then indeed there is no point. If you can be sure that the code is only for your consumption and you've got plenty of free memory/resources why not let GC hoover it up when it chooses to. OTOH, if someone else is using your code and creating many instances of (e.g.) MyImage, it's going to be pretty difficult to control memory/resource usage unless it disposes nicely.
Many classes require that Dispose be called to ensure correctness. If some C# code uses an iterator with a "finally" block, for example, the code in that block will not run if an enumerator is created from that iterator and not disposed. While there are a few cases where it would be impractical to ensure objects were cleaned up without finalizers, for the most part code which relies upon finalizers for correct operation or to avoid memory leaks is bad code.
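A small, self-contained sketch of the iterator case mentioned above:

using System;
using System.Collections.Generic;

static class IteratorDemo
{
    // The finally block inside an iterator runs only when the enumerator is
    // disposed (or enumeration completes). Abandoning the enumerator skips it,
    // no matter how many garbage collections happen later.
    static IEnumerable<int> Numbers()
    {
        try
        {
            yield return 1;
            yield return 2;
        }
        finally
        {
            Console.WriteLine("cleanup");
        }
    }

    static void Main()
    {
        var e = Numbers().GetEnumerator();
        e.MoveNext();   // consumed the first item only
        // No Dispose() and no foreach here, so "cleanup" is never printed.
    }
}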
If your code acquires ownership of an IDisposable object, then unless either the object's class is sealed or your code creates the object by calling a constructor (as opposed to a factory method), you have no way of knowing what the real type of the object is, and whether it can be safely abandoned. Microsoft may have originally intended that it should be safe to abandon any type of object, but that is unrealistic, and the belief that it should be safe to abandon any type of object is unhelpful. If an object subscribes to events, allowing for safe abandonment will require either adding a level of weak indirection to all events, or a level of (non-weak) indirection to all other accesses. In many cases, it's better to require that a caller Dispose an object correctly than to add significant overhead and complexity to allow for abandonment.
Note also, btw, that even when objects try to accommodate abandonment it can still be very expensive. Create a Microsoft.VisualBasic.Collection (or whatever it's called), add a few objects, and create and Dispose a million enumerators. No problem - it executes very quickly. Now create and abandon a million enumerators. Major snooze fest unless you force a GC every few thousand enumerators. The Collection object is written to allow for abandonment, but that doesn't mean it doesn't have a major cost.
If an object you're using implements IDisposable, it's telling you it has something important to do when you're finished with it. That important thing may be to release unmanaged resources, or unhook from events so that it doesn't handle events after you think you're done with it, etc, etc. By not calling the Dispose, you're saying that you know better about how that object operates than the original author. In some tiny edge cases, this may actually be true, if you authored the IDisposable class yourself, or you know of a bug or performance problem related to calling Dispose. In general, it's very unlikely that ignoring a class requesting you to dispose it when you're done is a good idea.
Talking about finalizers - as has been pointed out, they have a cost, which can be avoided by Disposing the object (if it uses SuppressFinalize). Not just the cost of running the finalizer itself, and not just the cost of having to wait till that finalizer is done before the GC can collect the object. An object with a finalizer survives the collection in which it is identified as being unused and needing finalization. So it will be promoted (if it's not already in gen 2). This has several knock on effects:
The next higher generation will be collected less frequently, so after the finalizer runs, you may be waiting a long time before the GC comes around to that generation and sweeps your object away. So it can take a lot longer to free memory.
This adds unnecessary pressure to the collection the object is promoted to. If it's promoted from gen 0 to gen 1, then now gen 1 will fill up earlier than it needs to.
This can lead to more frequent garbage collections at higher generations, which is another performance hit.
If the object's finalizer isn't completed by the time the GC comes around to the higher generation, the object can be promoted again. Hence in a bad case you can cause an object to be promoted from gen 0 to gen 2 without good reason.
Obviously if you're only doing this on one object it's not likely to cost you anything noticeable. If you're doing it as general practice because you find calling Dispose on objects you're using tiresome, then it can lead to all of the problems above.
Dispose is like a lock on a front door. It's probably there for a reason, and if you're leaving the building, you should probably lock the door. If it wasn't a good idea to lock it, there wouldn't be a lock.
Even if you don't care in this particular case, you should still follow the standard because you will care in some cases. It's much easier to set a standard and follow it always based on specific guidelines than have a standard that you sometimes disregard. This is especially true as your team grows and your product ages.

C# .NET object disposal

Should be an easy one. Let's say I have the following code:
void Method()
{
    AnotherMethod(new MyClass());
}

void AnotherMethod(MyClass obj)
{
    Console.WriteLine(obj.ToString());
}
If I call "Method()", what happens to the MyClass object that was created in the process? Does it still exist in the stack after the call, even though nothing is using it? Or does it get removed immediately?
Do I have to set it to null to get GC to notice it quicker?
After the call to Method completes, your MyClass object is alive but there are no references to it from a rooted value. So it will live until the next time the GC runs where it will be collected and the memory reclaimed.
There is really nothing you can do to speed up this process other than to force a GC; however, this is likely a bad idea. The GC is designed to clean up such objects, and any attempt you make to make it faster will likely result in it being slower overall. You'll also find that a GC, while correctly cleaning up managed objects, may not actually reduce your process's memory usage. This is because the GC keeps memory around for future use. It's a very complex system that's typically best left to its own devices.
If I call "Method()", what happens to the MyClass object that was created in the process?
It gets created on the GC heap. Then a reference to its location in the heap is placed on the stack. Then the call to AnotherMethod happens. Then the object's ToString method is called and the result is printed out. Then AnotherMethod returns.
Does it still exist in the stack after the call, even though nothing is using it?
Your question is ambiguous. By "the call" do you mean the call to Method, or AnotherMethod? It makes a difference because at this point, whether the heap memory is a candidate for garbage collection depends upon whether you compiled with optimizations turned on or off. I'm going to slightly change your program to illustrate the difference. Suppose you had:
void Method()
{
    AnotherMethod(new MyClass());
    Console.WriteLine("Hello world");
}
With optimizations off, we sometimes actually generate code that would be like this:
void Method()
{
    var temp = new MyClass();
    AnotherMethod(temp);
    Console.WriteLine("Hello world");
}
In the unoptimized version, the runtime will actually choose to treat the object as not-collectable until Method returns, after the WriteLine. In the optimized version, the runtime can choose to treat the object as collectible as soon as AnotherMethod returns, before the WriteLine.
The reason for the difference is because making object lifetime more predictable during debugging sessions often helps people understand their programs.
Or does it get removed immediately?
Nothing gets collected immediately; the garbage collector runs when it feels like it ought to run. If you need some resource like a file handle to be cleaned up immediately when you're done with it then use a "using" block. If not, then let the garbage collector decide when to collect memory.
Do I have to set it to null to get GC to notice it quicker?
Do you have to set what to null? What variable did you have in mind?
Regardless, you do not have to do anything to make the garbage collector work. It runs on its own just fine without prompting from you.
I think you're overthinking this problem. Let the garbage collector do its thing and don't stress about it. If you're having a real-world problem with memory not being collected in a timely manner, then show us some code that illustrates that problem; otherwise, just relax and learn to love automatic storage reclamation.
Actually, the instance will be allocated on the heap, but have a look at Eric Lippert's article, The Stack Is an Implementation Detail.
In any case, because there will be no more references to the instance after the function executes, it will be (or, more accurately, can be) deleted by the garbage collector at some undefined point in the future. As to exactly when that happens, it's undefined, but you also (essentially) don't need to worry about it; the GC has complicated algorithms that help it determine what and when to collect.
Does it still exist in the stack after the call
Semantics are important here. You asked if it still exists on the stack after the method call. The answer there is "no". It was removed from the stack. But that's not the final story. The object does still exist, it's just no longer rooted. It won't be destroyed or collected until the GC runs. But at this point it's no longer your concern. The GC is much better at deciding when to collect something than you or I are.
Do I have to set it to null to get GC to notice it quicker?
There's almost never a good reason to do that, anyway. The only time that helps is if you have a very long running method and an object that you are done with early that otherwise won't go out of scope until the end of the method. Even then, setting it to null will only help in the rare case where the GC decides to run during the method. But in that case you're probably doing something else wrong as well.
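As a sketch of that rare case (sizes and timings here are made up for illustration):

static void LongRunningMethod()
{
    byte[] big = new byte[100_000_000];   // only needed for the first step
    Console.WriteLine(big.Length);

    // Without this, the local can keep the array reachable for the rest of
    // the method (particularly in debug builds, where lifetimes are extended).
    big = null;

    System.Threading.Thread.Sleep(600000);   // long tail of unrelated work
}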
In C#, the new MyClass() instance is only reachable while Method() is active. Once AnotherMethod() has finished executing, it goes out of scope and becomes unrooted. It then remains on the heap until the GC runs its collection cycle and identifies it as an unreferenced memory block. So it is still "alive" on the heap, but it is inaccessible.
The GC keeps track of what objects can possibly still be referenced later in the code. It then, at intervals, checks to see if there are any objects still alive that could not possibly be referenced later in the code, and cleans them up.
The mechanics of this are somewhat complex, and when these collections will happen depends on a variety of factors. The GC is designed to do these collections at the most optimal time (that it can establish) and so, while it is possible to force it to do a collection, it is almost always a bad idea.
Setting the variable to null will have very little overall effect on how soon the object is dealt with. While it can, in some small corner cases, be of benefit it is not worth littering your code with redundant assignments which will not affect your codes performance and only harm readability.
The GC is designed to be as effective as possible without you needing to think about it. To be honest, the only thing you really need to be mindful of is being careful when allocating really large objects that will stay alive for a long time, and that's generally quite rare in my experience.
As far as I know, the object is only valid inside your method context. After the method "Method()" has executed, it becomes eligible for garbage collection (and, if it has a finalizer, it is queued for finalization).

Best Practice for Forcing Garbage Collection in C#

In my experience it seems that most people will tell you that it is unwise to force a garbage collection but in some cases where you are working with large objects that don't always get collected in the 0 generation but where memory is an issue, is it ok to force the collect? Is there a best practice out there for doing so?
The best practice is to not force a garbage collection.
According to MSDN:
"It is possible to force garbage
collection by calling Collect, but
most of the time, this should be
avoided because it may create
performance issues. "
However, if you can reliably test your code to confirm that calling Collect() won't have a negative impact then go ahead...
Just try to make sure objects are cleaned up when you no longer need them. If you have custom objects, look at using the "using statement" and the IDisposable interface.
This link has some good practical advice with regards to freeing up memory / garbage collection etc:
http://msdn.microsoft.com/en-us/library/66x5fx1b.aspx
Look at it this way - is it more efficient to throw out the kitchen garbage when the garbage can is at 10% or let it fill up before taking it out?
By not letting it fill up, you are wasting your time walking to and from the garbage bin outside. This is analogous to what happens when the GC thread runs - all the managed threads are suspended while it is running. And if I am not mistaken, the GC thread can be shared among multiple AppDomains, so garbage collection affects all of them.
Of course, you might encounter a situation where you won't be adding anything to the garbage can anytime soon - say, if you're going to take a vacation. Then, it would be a good idea to throw out the trash before going out.
This MIGHT be one time that forcing a GC can help - if your program idles, the memory in use is not garbage-collected because there are no allocations.
The best practice is to not force a garbage collection in most cases. (Every system I have worked on that had forced garbage collections had underlying problems that, if solved, would have removed the need to force the garbage collection and sped the system up greatly.)
There are a few cases when you know more about memory usage than the garbage collector does. This is unlikely to be true in a multi-user application, or a service that is responding to more than one request at a time.
However, in some batch-type processing you do know more than the GC. E.g. consider an application that:
Is given a list of file names on the command line
Processes a single file then write the result out to a results file.
While processing the file, creates a lot of interlinked objects that cannot be collected until the processing of the file has completed (e.g. a parse tree)
Does not keep much state between the files it has processed.
You may be able to make a case (after careful testing) that you should force a full garbage collection after you have processed each file.
Another case is a service that wakes up every few minutes to process some items, and does not keep any state while it's asleep. Then forcing a full collection just before going to sleep may be worthwhile.
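A sketch of that batch shape (Parse and Analyze are placeholder helpers, not real APIs):

using System;
using System.IO;

static class BatchProcessor
{
    static void Main(string[] args)
    {
        foreach (var path in args)
        {
            // Each file produces a large, self-contained object graph that
            // is dead as soon as its results have been written out.
            var tree = Parse(path);
            File.WriteAllText(path + ".out", Analyze(tree));
            tree = null;

            // After careful testing, a full blocking collection between files
            // may be defensible here, since almost everything is now garbage.
            GC.Collect();
            GC.WaitForPendingFinalizers();
        }
    }

    // Placeholder implementations standing in for the real per-file work.
    static object Parse(string path) => File.ReadAllText(path);
    static string Analyze(object tree) => ((string)tree).Length.ToString();
}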
The only time I would consider forcing a collection is when I know that a lot of objects have been created recently and very few objects are currently referenced.
I would rather have a garbage collection API where I could give it hints about this type of thing without having to force a GC myself.
See also "Rico Mariani's Performance Tidbits"
I think the example given by Rico Mariani was good: it may be appropriate to trigger a GC if there is a significant change in the application's state. For example, in a document editor it may be OK to trigger a GC when a document is closed.
There are few general guidelines in programming that are absolute. Half the time, when somebody says 'you're doing it wrong', they're just spouting a certain amount of dogma. In C, it used to be fear of things like self-modifying code or threads, in GC languages it is forcing the GC or alternatively preventing the GC from running.
As is the case with most guidelines and good rules of thumb (and good design practices), there are rare occasions where it does make sense to work around the established norm. You do have to be very sure you understand the case, that your case really requires the abrogation of common practice, and that you understand the risks and side-effects you can cause. But there are such cases.
Programming problems are widely varied and require a flexible approach. I have seen cases where it makes sense to block GC in garbage collected languages and places where it makes sense to trigger it rather than waiting for it to occur naturally. 95% of the time, either of these would be a signpost of not having approached the problem right. But 1 time in 20, there probably is a valid case to be made for it.
I've learned not to try to outsmart the garbage collector. With that said, I just stick to using the using keyword when dealing with unmanaged resources like file I/O or database connections.
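For example (the usual pattern, sketched):

// Dispose runs at the end of the block, even if an exception is thrown,
// so the file handle is released deterministically instead of waiting
// for a garbage collection and finalization.
using (var reader = new System.IO.StreamReader("data.txt"))
{
    Console.WriteLine(reader.ReadLine());
}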
One case I recently encountered that required manual calls to GC.Collect() was when working with large C++ objects that were wrapped in tiny managed C++ objects, which in turn were accessed from C#.
The garbage collector never got called because the amount of managed memory used was negligible, but the amount of unmanaged memory used was huge. Manually calling Dispose() on the objects would require that I keep track of when objects are no longer needed myself, whereas calling GC.Collect() will clean up any objects that are no longer referenced.
Not sure if it is a best practice, but when working with large amounts of images in a loop (i.e. creating and disposing a lot of Graphics/Image/Bitmap objects), I regularly call GC.Collect().
I think I read somewhere that the GC only runs when the program is (mostly) idle, and not in the middle of an intensive loop, so that could look like an area where manual GC could make sense.
I think you already listed the best practice and that is NOT to use it unless REALLY necessary. I would strongly recommend looking at your code in more detail, using profiling tools potentially if needed to answer these questions first.
Do you have something in your code that is declaring items at a larger scope than needed?
Is the memory usage really too high?
Compare performance before and after using GC.Collect() to see if it really helps.
Suppose your program doesn't have a memory leak; objects accumulate and cannot be GC-ed in Gen 0 because:
1) They are referenced for a long time, so they get into Gen1 & Gen2;
2) They are large objects (≥ 85 KB), so they go to the LOH (Large Object Heap). And the LOH isn't compacted the way Gen0, Gen1 & Gen2 are.
Check the ".NET Memory" performance counters and you can see that problem 1) is really not a problem. Generally, every 10 Gen0 GCs will trigger 1 Gen1 GC, and every 10 Gen1 GCs will trigger 1 Gen2 GC. Theoretically, Gen1 & Gen2 could never be collected if there is no pressure on Gen0 (if the program's memory usage is really weird). That has never happened to me.
For problem 2), you can check the ".NET Memory" performance counter to verify whether the LOH is getting bloated. If it is really an issue for your program, perhaps you can create a large-object pool as this blog suggests: http://blogs.msdn.com/yunjin/archive/2004/01/27/63642.aspx.
I would like to add that:
Calling GC.Collect() (+ WaitForPendingFinalizers()) is one part of the story.
As rightly mentioned by others, GC.Collect() is a non-deterministic collection and is left to the discretion of the GC itself (the CLR).
Even if you add a call to WaitForPendingFinalizers, it may not be deterministic.
Take the code from this msdn link and run the code with the object loop iteration as 1 or 2. You will find what non-deterministic means (set a break point in the object's destructor).
Precisely: the destructor is not called even after Wait...() when there are just 1 (or 2) lingering objects. [Citation reqd.]
If your code is dealing with unmanaged resources (ex: external file handles), you must implement destructors (or finalizers).
Here is an interesting example:
Note: If you have already tried the above example from MSDN, the following code is going to clear the air.
class Program
{
    static void Main(string[] args)
    {
        SomePublisher publisher = new SomePublisher();
        for (int i = 0; i < 10; i++)
        {
            SomeSubscriber subscriber = new SomeSubscriber(publisher);
            subscriber = null;
        }
        GC.Collect();
        GC.WaitForPendingFinalizers();
        Console.WriteLine(SomeSubscriber.Count.ToString());
        Console.ReadLine();
    }
}

public class SomePublisher
{
    public event EventHandler SomeEvent;
}

public class SomeSubscriber
{
    public static int Count;

    public SomeSubscriber(SomePublisher publisher)
    {
        publisher.SomeEvent += new EventHandler(publisher_SomeEvent);
    }

    ~SomeSubscriber()
    {
        SomeSubscriber.Count++;
    }

    private void publisher_SomeEvent(object sender, EventArgs e)
    {
        // TODO: something
        string stub = "";
    }
}
I suggest you first analyze what the output could be, then run it, and then read the reason below:
(The destructor is only implicitly called once the program ends.)
In order to deterministically clean the object, one must implement IDisposable and make an explicit call to Dispose(). That's the essence! :)
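For instance, a sketch of making the subscriber from the example above cleanable on demand (unhooking the event is what actually lets it go):

public class SomeSubscriber : IDisposable
{
    private readonly SomePublisher _publisher;

    public SomeSubscriber(SomePublisher publisher)
    {
        _publisher = publisher;
        _publisher.SomeEvent += publisher_SomeEvent;
    }

    public void Dispose()
    {
        // Removing the handler drops the publisher's reference to this
        // subscriber, so it no longer has to wait for the program to end.
        _publisher.SomeEvent -= publisher_SomeEvent;
    }

    private void publisher_SomeEvent(object sender, EventArgs e)
    {
        // TODO: something
    }
}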
Large objects are allocated on LOH (large object heap), not on gen 0. If you're saying that they don't get garbage-collected with gen 0, you're right. I believe they are collected only when the full GC cycle (generations 0, 1 and 2) happens.
That being said, I believe that, on the other hand, the GC will adjust and collect memory more aggressively when you work with large objects and the memory pressure is going up.
It is hard to say whether to collect or not, and under which circumstances. I used to do GC.Collect() after disposing of dialog windows/forms with numerous controls etc. (because by that time the form and its controls had ended up in gen 2 due to creating many instances of business objects/loading much data - no large objects, obviously), but I actually didn't notice any positive or negative effects of doing so in the long term.
One more thing: triggering a GC.Collect explicitly may NOT improve your program's performance. It is quite possible to make it worse.
The .NET GC is well designed and tuned to be adaptive, which means it can adjust the Gen0/1/2 thresholds according to the "habits" of your program's memory usage. So it will have adapted to your program after some time running. Once you invoke GC.Collect explicitly, the thresholds will be reset, and .NET has to spend time adapting to your program's "habits" again.
My suggestion is to always trust the .NET GC. If a memory problem surfaces, check the ".NET Memory" performance counters and diagnose your own code.
Not sure if it is a best practice...
Suggestion: do not implement this or anything when unsure. Reevaluate when facts are known, then perform before/after performance tests to verify.
However, if you can reliably test your code to confirm that calling Collect() won't have a negative impact then go ahead...
IMHO, this is similar to saying "If you can prove that your program will never have any bugs in the future, then go ahead..."
In all seriousness, forcing the GC is useful for debugging/testing purposes. If you feel like you need to do it at any other times, then either you are mistaken, or your program has been built wrong. Either way, the solution is not forcing the GC...
There are some scenarios where there will definitely be very little to no negative impact on your system when forcing a garbage collection, e.g. on a date roll or at a scheduled time when the system is not in use.
Aside from such times you would need to test performance of your code before and after implementing the forced collect to ensure that it is actually beneficial.
I do NOT recommend manual garbage collection. I assure you that you're simply not disposing of large objects properly. Make use of the Using statement. Whenever you instantiate a disposable object, be sure to Dispose of it when you are through using it. This sample code creates a connection with Using statements. Then it instantiates a shipping label object, uses it, and disposes of it properly.
Using con As SqlConnection = New SqlConnection(DB_CONNECTION_STRING)
    con.Open()
    Using command As SqlCommand = New SqlCommand(sqlStr, con)
        Using reader As SqlDataReader = command.ExecuteReader()
            While reader.Read()
                code_here()
            End While
        End Using
    End Using
End Using

Dim f1 As frmShippingLabel
f1 = New frmShippingLabel
f1.PrintLabel()
f1.Dispose()

How to avoid garbage collection in real time .NET application?

I'm writing a financial C# application which receives messages from the network, translates them into different objects according to the message type, and finally applies the application business logic on them.
The point is that after the business logic is applied, I'm very sure I will never need this instance again. Rather than to wait for the garbage collector to free them, I'd like to explicitly "delete" them.
Is there a better way to do so in C#? Should I use a pool of objects to always reuse the same set of instances, or is there a better strategy?
The goal is to prevent garbage collection from using any CPU during a time-critical process.
Don't delete them right away. Calling the garbage collector for each object is a bad idea. Normally you really don't want to mess with the garbage collector at all, and even time critical processes are just race conditions waiting to happen if they're that sensitive.
But if you know you'll have busy vs light load periods for your app, you might try a more general GC.Collect() when you reach a light period to encourage cleanup before the next busy period.
Look here: http://msdn.microsoft.com/en-us/library/bb384202.aspx
You can tell the garbage collector that you're doing something critical at the moment, and it will try to be nice to you.
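Roughly, the idea behind that link (it is a hint to the GC, not a guarantee; ProcessMessageBurst is a placeholder for the time-critical work):

using System.Runtime;

void ProcessCriticalSection()
{
    GCLatencyMode previous = GCSettings.LatencyMode;
    try
    {
        // Ask the GC to avoid blocking collections while the critical
        // burst of messages is being processed.
        GCSettings.LatencyMode = GCLatencyMode.LowLatency;
        ProcessMessageBurst();
    }
    finally
    {
        GCSettings.LatencyMode = previous;
    }
}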
You hit on it yourself - use a pool of objects and reuse those objects. The semantics of the calls to those objects would need to be hidden behind a factory facade. You'll need to grow the pool in some pre-defined way - perhaps double the size every time it hits the limit (a high-water algorithm), or by a fixed percentage. I'd really strongly advise you not to call GC.Collect().
When the load on your pool gets low enough you could shrink the pool and that will eventually trigger a garbage collection -- let the CLR worry about it.
Attempting to second-guess the garbage collector is generally a very bad idea. On Windows, the garbage collector is a generational one and can be relied upon to be pretty efficient. There are some noted exceptions to this general rule - the most common being the occurrence of a one-time event that you know for a fact will have caused a lot of old objects to die - once objects are promoted to Gen2 (the longest lived) they tend to hang around.
In the case you mention, you sound as though you are generating a number of short-lived objects - these will result in Gen0 collections. These happen relatively often anyway, and are the most efficient. You could avoid them by having a reusable pool of objects, if you prefer, but it is best to be certain that GC is a performance problem before taking such action - the CLR profiler is the tool for doing this.
It should be noted that the garbage collector is different on different .NET frameworks - on the compact framework (which runs on the Xbox 360 and on mobile platforms) it is a non-generational GC and as such you must be much more careful about what garbage your program generates.
Forcing a GC.Collect() is generally a bad idea; leave the GC to do what it does best. It sounds like the best solution would be to use a pool of objects that you can grow if necessary - I've used this pattern successfully.
This way you avoid not only the garbage collection but the regular allocation cost as well.
Finally, are you sure that the GC is causing you a problem? You should probably measure and prove this before implementing any perf-saving solutions - you may be causing yourself unnecessary work!
"The goal being to avoid the garbage collection to use any CPU during
a time critical process"
Q: If by time critical, you mean you're listening to some esoteric piece of hardware, and you can't afford to miss the interrupt?
A: If so then C# isn't the language to use, you want Assembler, C or C++ for that.
Q: If by time critical, you mean while there are lots of messages in the pipe, and you don't want to let the garbage collector slow things down?
A: If so, you are worrying needlessly. By the sounds of things your objects are very short-lived, which means the garbage collector will recycle them very efficiently, without any apparent lag in performance.
However, the only way to know for sure is to test it: set it up to run overnight processing a constant stream of test messages. I'll be stunned if your performance stats can spot when the GC kicks in (and even if you can spot it, I'll be even more surprised if it actually matters).
Get a good understanding and feel for how the garbage collector behaves, and you will understand why what you are thinking of here is not recommended - unless you really want the CLR to spend a lot of time rearranging objects in memory.
http://msdn.microsoft.com/en-us/magazine/bb985010.aspx
http://msdn.microsoft.com/en-us/magazine/bb985011.aspx
How intensive is the app? I wrote an app that captures 3 sound cards (Managed DirectX, 44.1KHz, Stereo, 16-bit), in 8KB blocks, and sends 2 of the 3 streams to another computer via TCP/IP. The UI renders an audio level meter and (smooth) scrolling title/artist for each of the 3 channels. This runs on PCs with XP, 1.8GHz, 512MB, etc. The App uses about 5% of the CPU.
I stayed clear of manually calling GC methods. But I did have to tune a few things that were wasteful. I used RedGate's ANTS profiler to home in on the wasteful portions. An awesome tool!
I wanted to use a pool of pre-allocated byte arrays, but the managed DX Assembly allocates byte buffers internally, then returns that to the App. It turned out that I didn't have to.
If it is absolutely time critical then you should use a deterministic platform like C/C++. Even calling GC.Collect() will generate CPU cycles.
Your question starts off with the suggestion that you want to save memory by getting rid of objects. This is a space-critical optimization. You need to decide what you really want, because the GC is better at optimizing this situation than a human.
From the sound of it, it seems like you're talking about deterministic finalization (destructors in C++), which doesn't exist in C#. The closest thing that you will find in C# is the Disposable pattern. Basically you implement the IDisposable interface.
The basic pattern is this:
public class MyClass : IDisposable
{
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed)
            return;

        if (disposing)
        {
            // Dispose managed resources here
        }

        _disposed = true;
    }
}
You could have a limited number of instances of each type in a pool, and reuse the instances you are already done with. The size of the pool would depend on the number of messages you'll be processing.
Instead of creating a new instance of an object every time you get a message, why don't you reuse objects that have already been used? This way you won't be fighting against the garbage collector and your heap memory won't be getting fragmented.**
For each message type, you can create a pool to hold the instances that are not in use. Whenever you receive a network message, you look at the message type, pull a waiting instance out of the appropriate pool and apply your business logic. After that, you put that instance of the message object back into its pool.
You will most likely want to "lazy load" your pool with instances so your code scales easily. Therefore, your pool class will need to detect when a null instance has been pulled and fill it up before handing it out. Then when the calling code puts it back in the pool it's a real instance.
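A rough sketch of such a lazily filled, per-message-type pool (the names are illustrative, not from any particular library):

using System.Collections.Generic;

public sealed class MessagePool<T> where T : class, new()
{
    private readonly Stack<T> _free = new Stack<T>();

    public T Rent()
    {
        // "Lazy load": hand out a pooled instance if one is waiting,
        // otherwise create a real instance on demand.
        return _free.Count > 0 ? _free.Pop() : new T();
    }

    public void Return(T message)
    {
        _free.Push(message);   // available for the next incoming message
    }
}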
** "Object pooling is a pattern to use that allows objects to be reused rather than allocated and deallocated, which helps to prevent heap fragmentation as well as costly GC compactions."
http://geekswithblogs.net/robp/archive/2008/08/07/speedy-c-part-2-optimizing-memory-allocations---pooling-and.aspx
In theory the GC shouldn't run while your CPU is under heavy load, unless it really needs to. But if you have to, you may want to just keep all of your objects in memory, perhaps in a singleton instance, and never clean them up unless you're ready. That's probably the only way to guarantee when the GC runs.
