Can objects be automatically assigned to the second generation of the GC? - c#

During an interview I was asked whether there can be some object that will automatically be assigned to the second generation of the garbage collector, and I didn't know what to answer.
Is this possible?
Maybe if an object is too large to be kept in generation zero or one?

Newly allocated objects form a new generation of objects and are implicitly generation 0 collections, unless they are large objects, in which case they go on the large object heap in a generation 2 collection.
(Link: Fundamentals of Garbage Collection)
So yes, large objects automatically go to generation 2.
When is an object considered large?
In the Microsoft® .NET Framework 1.1 and 2.0, if an object is greater than or equal to 85,000 bytes it's considered a large object. This number was determined as a result of performance tuning. When an object allocation request comes in and meets that size threshold, it will be allocated on the large object heap. What does this mean exactly? To understand this, it may be beneficial to explain some fundamentals about the .NET garbage collector.
(Link: CLR Inside Out: Large Object Heap Uncovered)
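A quick way to verify this yourself is GC.GetGeneration, which reports the generation an object currently belongs to. A minimal sketch (the small array's output can vary with GC state, but the large one reliably reports 2):

using System;

class Program
{
    static void Main()
    {
        var small = new byte[1000];   // well under the 85,000-byte threshold
        var large = new byte[100000]; // over the threshold, so it lands on the LOH

        Console.WriteLine(GC.GetGeneration(small)); // typically 0
        Console.WriteLine(GC.GetGeneration(large)); // 2 (the LOH is collected with gen 2)
    }
}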


What are the different heaps in .NET?

I was profiling the memory usage of a Windows Forms application in dotMemory, and I noticed that my application had heaps numbered 0-4, all of varying sizes, as well as the large object heap.
I was just wondering if anyone had a good explanation of what each heap is for and what is typically stored in each heap?
The other answers seem to be missing the fact that there is a difference between heaps and generations. I don't see why a commercial profiler would confuse the two concepts, so I strongly suspect it's heaps and not generations after all.
When the CLR GC is using the server flavor, it creates a separate heap for each logical processor in the process's affinity mask. The reason for this breakdown is mostly to improve the scalability of allocations, and to perform GC in parallel. These are separate memory regions, but you can of course have object references between the heaps and can consider them a single logical heap.
So, assuming that you have four logical processors (e.g. an i5 CPU with HyperThreading enabled), you'll have four heaps under server GC.
The Large Object Heap has an unfortunate, confusing name. It's not a heap in the same sense as the per-processor heaps. It's a logical abstraction on top of multiple memory regions that contain large objects.
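If you want to confirm which flavor a process is running under, the runtime exposes it through GCSettings. A small sketch (server GC is normally switched on via configuration, e.g. gcServer in the app config, not from code):

using System;
using System.Runtime;

class Program
{
    static void Main()
    {
        // True when the process is running the server GC flavor.
        Console.WriteLine($"Server GC: {GCSettings.IsServerGC}");

        // Under server GC, expect roughly one heap per logical processor.
        Console.WriteLine($"Logical processors: {Environment.ProcessorCount}");
    }
}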
You have different heaps because of how the .NET garbage collector works. It uses a generational GC, which separates objects based on how long they have survived. The use of different heaps allows the garbage collector to clean up memory more efficiently.
According to MSDN:
The heap is organized into generations so it can handle long-lived and short-lived objects. Garbage collection primarily occurs with the reclamation of short-lived objects that typically occupy only a small part of the heap.
Generation 0. This is the youngest generation and contains short-lived objects. An example of a short-lived object is a temporary variable. Garbage collection occurs most frequently in this generation.
Newly allocated objects form a new generation of objects and are implicitly generation 0 collections, unless they are large objects, in which case they go on the large object heap in a generation 2 collection.
Most objects are reclaimed for garbage collection in generation 0 and do not survive to the next generation.
Generation 1. This generation contains short-lived objects and serves as a buffer between short-lived objects and long-lived objects.
Generation 2. This generation contains long-lived objects. An example of a long-lived object is an object in a server application that contains static data that is live for the duration of the process.
Objects that are not reclaimed in a garbage collection are known as survivors, and are promoted to the next generation.
Important data quickly gets put on the garbage collector's back burner (higher generations) and is checked for deletion less often. This lowers the amount of time wasted checking memory that truly needs to persist, which lets you see performance gains from an efficient garbage collector.
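You can observe this "back burner" effect at runtime: GC.CollectionCount reports how many times each generation has been collected, and gen 0 is collected far more often than the higher generations. A minimal sketch (exact counts depend on heap size and GC tuning):

using System;

class Program
{
    static void Main()
    {
        // Churn through temporaries so gen 0 fills up and is collected repeatedly.
        for (int i = 0; i < 1000000; i++)
        {
            var temp = new byte[128];
        }

        // Expect counts like: Gen 0 >> Gen 1 >> Gen 2.
        for (int gen = 0; gen <= GC.MaxGeneration; gen++)
            Console.WriteLine($"Gen {gen}: {GC.CollectionCount(gen)} collections");
    }
}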
When it comes to managed objects, there is the Small Object Heap (SOH), divided into three generations, and the Large Object Heap (LOH).
Large Object Heap (LOH)
Objects larger than 85 KB go straight to the LOH. There are some risks if you have too many large objects; that's a different discussion, but for more details have a look at The Dangers of the Large Object Heap.
Small Object Heap (SOH) : Gen0, Gen1, Gen2
The garbage collector uses a clever algorithm to run a collection only when it is required. A full garbage collection is an expensive operation that shouldn't happen too often, so the GC has broken the SOH into three parts, and as you have noticed, each Gen has a specified budget of memory.
Every small object (<85 KB) initially goes to Gen0. When Gen0 is full, a garbage collection runs for Gen0 only. It checks all instances in Gen0 and releases the memory used by any unreachable objects (objects that are no longer referenced), then promotes the surviving (still in use) instances to Gen1.
The above process is what actually occurs, even when you execute the code below (calling it manually is not required):
// Perform a collection of generation 0 only.
GC.Collect(0);
In this way, the garbage collector first clears the memory allocated for short-lived instances (strings, which are immutable; variables in methods or smaller scopes).
As the GC keeps doing this, at some point Gen1 overflows. Then it performs the same operation on Gen1: it clears all the unreachable objects in Gen1 and promotes the surviving ones to Gen2.
The above process is what occurs when you execute the code below (calling it manually is not required):
// Perform a collection of all generations up to and including 1.
GC.Collect(1);
As the GC keeps doing this, at some point Gen2 overflows, and the GC then tries to clean Gen2 as well.
The above process is what occurs, even when you execute the code below (calling it manually is not required):
// Perform a collection of all generations up to and including 2.
GC.Collect(2);
If the amount of memory that needs to be promoted from Gen1 to Gen2 is greater than the memory available to Gen2, the GC throws an OutOfMemoryException.

Manual GC Gen2 data allocation

I'm prototyping a managed DirectX game engine before moving to the C++ syntax horror.
So let's say I've got some data (e.g. an array or a HashSet of references) that I'm sure will stay alive throughout the whole application's life. Since performance is crucial here and I'm trying to avoid any lag spikes from generation promotion, is there any way to initialize an object (allocate its memory) straight into the GC's generation 2? I couldn't find an answer for that, but I'm pretty sure I've seen someone do that before.
Alternatively, since there would be no real need to "manage" that piece of memory, would it be possible to allocate it with unmanaged code but expose it to the rest of the code as a .NET type?
You can't allocate directly in Gen 2. All allocations happen in either Gen 0 or on the large object heap (if they are 85000 bytes or larger). However, pushing something to Gen 2 is easy: Just allocate everything you want to go to Gen 2 and force GCs at that point. You can call GC.GetGeneration to inspect the generation of a given object.
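A minimal sketch of that trick, using GC.GetGeneration to confirm the promotion (two collections are needed because each collection promotes a survivor by one generation):

using System;

class Program
{
    static void Main()
    {
        // Allocate the long-lived data up front; it starts in Gen 0.
        var lookup = new int[10000]; // 40,000 bytes: small object heap, not LOH
        Console.WriteLine(GC.GetGeneration(lookup)); // 0

        // Force two collections so the still-referenced array is promoted 0 -> 1 -> 2.
        GC.Collect();
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(lookup)); // 2
    }
}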
Another thing to do is keep a pool of objects. I.e. instead of releasing objects and thus making them eligible for GC, you return them to a pool. This reduces allocations and thus the number of GCs as well.
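A minimal pooling sketch (the ObjectPool type and its Rent/Return names are just for illustration, not a framework API):

using System;
using System.Collections.Concurrent;
using System.Text;

class ObjectPool<T> where T : new()
{
    private readonly ConcurrentBag<T> _items = new ConcurrentBag<T>();

    // Hand out a pooled instance if one is available; otherwise create one.
    public T Rent() => _items.TryTake(out T item) ? item : new T();

    // Instead of dropping the reference (making it garbage), put it back.
    public void Return(T item) => _items.Add(item);
}

class Program
{
    static void Main()
    {
        var pool = new ObjectPool<StringBuilder>();

        var sb = pool.Rent();
        sb.Clear().Append("reused, not collected"); // reset state before each use
        Console.WriteLine(sb);
        pool.Return(sb);
    }
}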

PerfView GC Trigger Reasons

I have been investigating some garbage collection issues in a C# server app. I'm currently using PerfView to do this. After collecting some data and getting a load of GC stats, I'm a little confused about one of the columns, 'Trigger Reason'. I'm getting two values, 'AllocLarge' and 'AllocSmall'. I have searched through the help and Google and can't find exactly what these two terms mean.
The .NET GC treats objects larger than 85K (a large object) very differently than other objects (small objects). In particular, large objects are only collected in 'Generation 2' (the most expensive kind of GC). 'AllocLarge' means a GC was triggered while allocating a large object (and thus must have provoked a Gen 2 GC). 'AllocSmall' means a GC happened in response to an allocation of an 'ordinary' object.
Note that in general it is bad to have short-lived large objects (since these force expensive GCs). You can see everywhere you allocated a large object by looking at the 'GC Alloc Stats' view and looking for the pseudo-frame 'LargeObject'. Double-click on that (which brings you to the 'callers' view), and you will see where you are allocating large objects.

Object memory relocation?

Is there any way to tell the .NET runtime not to relocate an object in memory?
As I see it, an object can be relocated by the GC when:
Moving from one generation to another
Being moved from the finalization queue to the f-reachable queue
Something else (maybe an optimization mechanism?)
Also, I thought immutable objects (strings) are automatically recreated each time, so they must be created in a new location.
(just a theoretical question)
As an implementation detail, the .NET Framework can move an object in memory during the final (compaction) stage of garbage collection. But this doesn't necessarily mean moving between generations: when performing a generation 2 GC, objects in gen 2 can be moved even though they don't change generation (because there is nowhere to go beyond gen 2).
The finalization queue and the f-reachable queue have nothing to do with this, they contain only references to objects, not the objects themselves.
I'm not sure what this has to do with immutable objects. The runtime doesn't give them any special treatment (except for strings).
Telling the runtime not to relocate an object (also known as “pinning” the object) is an unusual requirement and should have a really good reason, because it can negatively affect the performance of the GC. To temporarily pin an object in unsafe code, you can use the fixed statement. To do it permanently or from safe code, you can use GCHandle.Alloc(), specifying GCHandleType.Pinned.
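A minimal sketch of the safe-code route with GCHandle (freeing the handle is essential, since the object stays pinned until then):

using System;
using System.Runtime.InteropServices;

class Program
{
    static void Main()
    {
        var buffer = new byte[256];

        // Pin the array so the GC cannot relocate it while the handle is alive.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            IntPtr address = handle.AddrOfPinnedObject();
            Console.WriteLine($"Pinned at 0x{address.ToInt64():X}");
            // ...hand 'address' to unmanaged code here...
        }
        finally
        {
            handle.Free(); // release the pin as soon as possible
        }
    }
}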
Pinning an object tells the GC not to move it when compacting memory to create large chunks of free space. Pinned pointers are created using the fixed keyword.
Useful scenario
Let's think of a scenario where we have an array of ints that needs to be passed to some unmanaged function, and the unmanaged function reads the array's values and makes some changes. If the array is not pinned, the changed values could not be written back, because the pointer to the array could have been invalidated by the GC moving it. (See the sketch below.)
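A sketch of that scenario (FillFromNative is a purely hypothetical stand-in for a real [DllImport] function; compiling requires the /unsafe option):

using System;

class Program
{
    // Hypothetical stand-in for an unmanaged function that writes into the array.
    static unsafe void FillFromNative(int* data, int length)
    {
        for (int i = 0; i < length; i++)
            data[i] = i * 2;
    }

    static unsafe void Main()
    {
        int[] values = new int[4];

        // 'fixed' pins the array for the duration of the block, so the raw
        // pointer stays valid even if a GC runs while the callee is using it.
        fixed (int* p = values)
        {
            FillFromNative(p, values.Length);
        }

        Console.WriteLine(string.Join(", ", values)); // 0, 2, 4, 6
    }
}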
Not sure if this is useful in the question's context, but in a managed scenario you could use the Marshal class to allocate memory, copy a structure into the allocated memory, and get a pointer back. That memory wouldn't be moved around by the GC. Later on you could retrieve the structure from the allocated memory using the pointer from before.
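A minimal sketch of that approach (the Point3 struct is just an example type):

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct Point3
{
    public float X, Y, Z;
}

class Program
{
    static void Main()
    {
        var point = new Point3 { X = 1f, Y = 2f, Z = 3f };

        // Unmanaged memory: the GC neither moves nor frees it.
        IntPtr ptr = Marshal.AllocHGlobal(Marshal.SizeOf<Point3>());
        try
        {
            // Copy the structure into the unmanaged block.
            Marshal.StructureToPtr(point, ptr, fDeleteOld: false);

            // Later, read it back from the same (stable) pointer.
            Point3 copy = Marshal.PtrToStructure<Point3>(ptr);
            Console.WriteLine($"{copy.X}, {copy.Y}, {copy.Z}");
        }
        finally
        {
            Marshal.FreeHGlobal(ptr); // unmanaged memory must be freed by hand
        }
    }
}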

Short-lived objects

What is the overhead of generating a lot of temporary objects (e.g. for interim results) that "die young" (never promoted to the next generation during a garbage collection interval)? I'm assuming that the "new" operation is very cheap, as it is really just a pointer increment. However, what are the hidden costs of dealing with this temporary "litter"?
Not a lot - the garbage collector is very fast for gen0. It also tunes itself, adjusting the size of gen0 depending on how much it manages to collect each time it goes. (If it's managed to collect a lot, it will reduce the size of gen0 to collect earlier next time, and vice versa.)
The ultimate test is how your application performs though. Perfmon is very handy here, showing how much time has been spent in GC, how many collections there have been of each generation etc.
As you say, the allocation itself is very inexpensive. The cost of generating lots of short-lived objects is more frequent garbage collections, as they are triggered when generation 0's budget is exhausted. However, a generation 0 collection is fairly cheap, so as long as your objects really are short-lived, the overhead is most likely not significant.
On the other hand, the common example of concatenating lots of strings in a loop pushes the garbage collector significantly, so it all depends on the number of objects you create. It doesn't hurt to think about allocation.
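The classic illustration, with the allocation behavior spelled out in comments (a sketch for comparison, not a benchmark):

using System;
using System.Text;

class Program
{
    static void Main()
    {
        // Strings are immutable, so each '+=' allocates a brand-new string of
        // growing length; the loop leaves thousands of short-lived strings
        // behind for the GC.
        string slow = "";
        for (int i = 0; i < 10000; i++)
            slow += "x";

        // StringBuilder appends into one growable internal buffer instead,
        // producing almost no garbage.
        var sb = new StringBuilder();
        for (int i = 0; i < 10000; i++)
            sb.Append("x");
        string fast = sb.ToString();

        Console.WriteLine(slow.Length == fast.Length); // True
    }
}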
The cost of garbage collection is that managed threads are suspended during compaction.
In general, this isn't something you should probably be worrying about, and it sounds like it starts to fall very close to "micro-optimization". The GC was designed with the assumption that a "well tuned application" will have all of its allocations in Gen0 - meaning that they all "die young". Any time you allocate a new object it is always in Gen0. A collection won't occur until the Gen0 threshold is passed and there isn't enough available space in Gen0 to hold the next allocation.
The "new" operation is actually a bunch of things:
allocating memory
running the type's constructor
returning a pointer to the memory
incrementing the next object pointer
Although the new operation is designed and written efficiently, it is not free and does take time to allocate new memory. The allocator needs to track which chunks are available for allocation, and the newly allocated memory is zeroed.
Creating a lot of objects that die young will also trigger garbage collection more often and that operation can be expensive. Especially with "stop the world" garbage collectors.
Here's an article from the MSDN on how it works:
http://msdn.microsoft.com/en-us/magazine/bb985011.aspx
Note that it describes why garbage collection is expensive: the GC needs to build the object graph before it can start collecting.
If these objects are never promoted out of Generation 0 then you will see pretty good performance. The only hidden cost I can see is that if you exceed your Generation 0 budget you will force the GC to compact the heap but the GC will self-tune so this isn't much of a concern.
Garbage collection is generational in .NET. Short-lived objects are collected first and frequently. A Gen 0 collection is cheap, but depending on the number of objects you're creating, the total cost could be significant. I'd run a profiler to find out whether it is affecting performance. If it is, consider switching them to structs; as value types, they do not need to be collected (see the sketch below).
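A small sketch of the struct suggestion (locals of a value type live on the stack, so the loop below allocates nothing on the heap; note that boxing a struct would put it back on the heap):

using System;

struct Point
{
    public int X, Y;
}

class Program
{
    static void Main()
    {
        long total = 0;

        // Each Point is a stack value; no heap allocation, nothing to collect.
        for (int i = 0; i < 1000000; i++)
        {
            Point p;
            p.X = i;
            p.Y = i + 1;
            total += p.X + p.Y;
        }

        Console.WriteLine(total);
    }
}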
