private Button btnNew = new Button(); // wired up in the constructor: btnNew.Click += OnClick;
private DataGrid grid;

private void OnClick(object sender, EventArgs e)
{
    grid = new DataGrid(); // each click replaces the previous grid
}
Hello, I wrote code like the sample above, and I want to know what is going on in heap and stack memory every time a user clicks btnNew. For example, is a new block of heap memory assigned to this grid each time? Is an older block removed and replaced by the new one? Or does the older block remain in heap memory while a new block is allocated as well?
Does this block of code allocate a huge amount of memory over several clicks?
**The DataGrid could be replaced with any component; I want to know about this kind of new statement usage and its memory allocation.**
Sorry for my bad English!
what is going on with respect to heap and stack memory?
Since the button is a reference type and is declared at class level, it will be allocated on the heap, not on the stack.
Is a new block in heap memory assigned to this button?
Yes, if memory is available; otherwise unreachable objects are collected first and then the new block is allocated.
Does this block of code allocate a large amount of memory on a single click?
No, but it will if you add a thousand buttons.
Check out this article, Memory in .NET - what goes where by Jon Skeet, to understand the memory internals better.
Cheers
This is a huge topic. This is akin to asking "you type www.amazon.com into a browser. What happens next?" To answer that question fully you have to explain the architecture of the entire internet. To answer your question fully you have to understand the entire memory model of a modern operating system.
You should start by reading about the fundamentals of memory and garbage collection, here:
http://msdn.microsoft.com/en-us/library/ee787088.aspx
and then ask more specific questions about things you don't understand.
Using the new statement allocates memory on the heap; in general, new memory is allocated each time. If the btnNew reference was the only reference to a button object, that object becomes eligible for garbage collection, so the memory will be freed again. The same happens for multiple clicks, but you should be aware that the garbage collector does not work in real time. So in a high-frequency loop allocating large objects, new can become a problem in C#.
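To make that caveat concrete, here is a minimal, purely illustrative sketch of the pattern being warned about (the sizes and loop count are made up):

for (int i = 0; i < 10000; i++)
{
    // Each iteration abandons the previous array, leaving ~1 MB of
    // garbage per pass for the (non-real-time) collector to reclaim.
    var buffer = new byte[1000000];
    buffer[0] = 1; // touch it so it is actually used
}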
If Button is a class (a reference type), it is allocated on the heap. (Value types are allocated on the stack in the current CLR implementation, unless they are contained by a reference type or captured in a closure, in which case they live on the heap.)
The garbage collector has pre-allocated segments of memory of different sizes corresponding to generations 0, 1 and 2. When you new up an object, it is allocated in generation 0. This allocation is really fast, since it just moves a pointer by a delta equal to the size of the object. As a prerequisite step before executing the constructor, the CLR clears the object's fields to their default values.
Periodically, all threads are paused and the garbage collector runs. It builds a graph of reachable objects by traversing "roots"; all unreachable objects are discarded. The generation segments are moved around / compacted to avoid fragmentation. Gen 0 is collected more frequently than gen 1, and so on (since gen-0 objects are likely to be short-lived). After the collection, the app threads resume.
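A small sketch of that promotion in action, using the documented GC.GetGeneration and GC.Collect APIs:

object o = new object();
Console.WriteLine(GC.GetGeneration(o)); // 0 - freshly allocated
GC.Collect();
Console.WriteLine(GC.GetGeneration(o)); // 1 - survived one collection
GC.Collect();
Console.WriteLine(GC.GetGeneration(o)); // 2 - survived two collections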
For more on this, refer to documents explaining the garbage collector and generations. Here's one.
We have written a custom indexing engine for a multimedia-matching project written in C#.
The indexing engine is written in unmanaged C++ and can hold a significant amount of unmanaged memory in the form of std:: collections and containers.
Every unmanaged index instance is wrapped by a managed object; the lifetime of the unmanaged index is controlled by the lifetime of the managed wrapper.
We have ensured (via custom, tracking C++ allocators) that every byte that is being consumed internally by the indexes is being accounted for, and we update (10 times per second) the managed garbage collector's memory pressure value with the deltas of this value (Positive deltas call GC.AddMemoryPressure(), negative deltas call GC.RemoveMemoryPressure()).
These indexes are thread-safe, and can be shared by a number of C# workers, so there may be multiple references in use for the same index. For that reason, we can not call Dispose() freely, and instead rely on the garbage collector to track reference sharing and eventually to trigger the finalization of the indexes once they are not in use by a worker process.
Now, the problem is that we are running out of memory. Full collections are in fact executed relatively often; however, with the help of a memory profiler, we can find a very large number of "dead" index instances being held in the finalization queue at the point where the process runs out of memory after exhausting the paging file.
We can actually circumvent the problem if we add a watchdog thread that calls GC::WaitForPendingFinalizers() followed by a GC::Collect() on low memory conditions, however, from what we have read, calling GC::Collect() manually severely disrupts garbage collection efficiency, and we don't want that.
We have even added, to no avail, a pessimistic pressure factor (tried up to 4x) to exaggerate the amount of unmanaged memory reported to the .net side, to see if we could coax the garbage collector to empty the queue faster. It seems as if the thread that processes the queue is completely unaware of the memory pressure.
At this point we feel we need to implement manual reference counting and call Dispose() as soon as the count reaches zero, but this seems like overkill, especially because the whole purpose of the memory pressure API is precisely to account for cases like ours.
Some facts:
.Net version is 4.5
App is in 64-bit mode
Garbage collector is running in concurrent server mode.
Size of an index is ~800MB of unmanaged memory
There can be up to 12 "alive" indexes at any point in time.
Server has 64GB of RAM
Any ideas or suggestions are welcome
Well, there will be no answer other than "if you want to dispose of an external resource deterministically, you have to do it yourself".
The AddMemoryPressure() method does not guarantee that a garbage collection will be triggered immediately. Instead, the CLR uses unmanaged memory allocation/deallocation stats to adjust its own GC thresholds, and a GC is triggered only if it is considered appropriate.
Note that RemoveMemoryPressure() does not trigger GC at all (theoretically it can do it due to side effects from actions such as setting GCX_PREEMP but let's skip it for brevity). Instead it decreases the current mempressure value, nothing more (simplifying again).
The actual algorithm is undocumented; however, you may look at the implementation in CoreCLR. In short, your bytesAllocated value has to exceed some dynamically calculated limit before the CLR triggers a GC.
Now the bad news:
In a real app the process is totally unpredictable, as each GC collection and any third-party code influence the GC limits. The GC may be called, may be called later, or may not be called at all.
The GC tunes its limits trying to minimize costly gen 2 collections (you're interested in those, as you're working with long-lived index objects, and they're always promoted to the next generation because of the finalizer). So DDOSing the runtime with huge memory-pressure values may backfire: you'll raise the bar high enough that setting the memory pressure has (almost) no chance of triggering the GC at all.
(NB: the last issue will be fixed with a new AddMemoryPressure() implementation, but definitely not today.)
UPD: more details.
OK, let's move on :)
Part 2, or "never underestimate what _undocumented_ means"
As I've said above, you are interested in gen 2 collections, as you are using long-lived objects.
It's a well-known fact that a finalizer runs almost immediately after the object is collected (assuming that the finalizer queue is not filled with other objects).
As a proof: just run this gist.
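The gist is not reproduced here, but a minimal stand-in demonstrating the same point might look like this:

using System;

class Finalizable
{
    ~Finalizable() { Console.WriteLine("finalizer ran"); }
}

class Program
{
    static void Main()
    {
        new Finalizable();             // instance is immediately unreachable
        GC.Collect();                  // collect it...
        GC.WaitForPendingFinalizers(); // ...and its finalizer runs right away
        Console.WriteLine("done");
    }
}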
The real reason why your indexes are not freed is pretty obvious: the generation the objects belong to is not being collected.
And now we return to the original question: how much memory do you think you have to allocate to trigger a gen 2 collection?
As I've said above, the actual numbers are undocumented. In theory, gen 2 may not be collected at all until you consume very large chunks of memory.
And now the really bad news: for the server GC, "in theory" and "what really happens" are the same.
Here is one more gist; on .NET 4.6 x64 the output will look like this:
GC low latency:
Allocated, MB: 512.19 GC gen 0|1|2, MB: 194.19 | 317.81 | 0.00 GC count 0-1-2: 1-0-0
Allocated, MB: 1,024.38 GC gen 0|1|2, MB: 421.19 | 399.56 | 203.25 GC count 0-1-2: 2-1-0
Allocated, MB: 1,536.56 GC gen 0|1|2, MB: 446.44 | 901.44 | 188.13 GC count 0-1-2: 3-1-0
Allocated, MB: 2,048.75 GC gen 0|1|2, MB: 258.56 | 1,569.75 | 219.69 GC count 0-1-2: 4-1-0
Allocated, MB: 2,560.94 GC gen 0|1|2, MB: 623.00 | 1,657.56 | 279.44 GC count 0-1-2: 4-1-0
Allocated, MB: 3,073.13 GC gen 0|1|2, MB: 563.63 | 2,273.50 | 234.88 GC count 0-1-2: 5-1-0
Allocated, MB: 3,585.31 GC gen 0|1|2, MB: 309.19 | 723.75 | 2,551.06 GC count 0-1-2: 6-2-1
Allocated, MB: 4,097.50 GC gen 0|1|2, MB: 686.69 | 728.00 | 2,681.31 GC count 0-1-2: 6-2-1
Allocated, MB: 4,609.69 GC gen 0|1|2, MB: 593.63 | 1,465.44 | 2,548.94 GC count 0-1-2: 7-2-1
Allocated, MB: 5,121.88 GC gen 0|1|2, MB: 293.19 | 2,229.38 | 2,597.44 GC count 0-1-2: 8-2-1
That's right: in the worst case you have to allocate ~3.5 GB to trigger a gen 2 collection. I'm pretty sure your allocations are much smaller :)
NB: dealing with gen 1 objects does not make it any better. The size of the gen 0 segment may exceed 500 MB. You have to try really hard to trigger a garbage collection on the server GC :)
Summary: the Add/RemoveMemoryPressure approach will have (almost) no influence on garbage collection frequency, at least on the server GC.
Now, the last part of the question: what possible solutions do we have?
In short, the simplest possible approach is to do ref-counting via disposable wrappers.
To be continued
we can find a very large number of "dead" index instances being held in the finalization queue
It does not make sense that these "dead" instances are not getting finalized. After all, you found out that GC::WaitForPendingFinalizers() actually works. So what must be going on is that they are in fact finalized; they are just waiting for the next collection to run so they can be destroyed, and that takes a while. That is not unlikely: after all, you already called GC::RemoveMemoryPressure() for them and, hopefully, released the big unmanaged allocation.
So this is surely just a false signal; these objects only take up GC heap, not unmanaged heap, and the GC heap is not your problem.
We have ensured (via custom, tracking C++ allocators) that every byte...
I don't much like the sound of that. It is pretty important that the GC calls have some correspondence to actually creating and finalizing the managed objects. It is very simple to do: you call AddMemoryPressure in your constructor and RemoveMemoryPressure in your finalizer, right after you call the C++ delete operator. The value you pass only needs to be an estimate of the corresponding unmanaged allocation; it doesn't have to be accurate down to the byte, and being off by a factor of 2 is not a grave problem. It also doesn't matter that the C++ allocation happens later.
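A minimal sketch of that pattern, using Marshal.AllocHGlobal as a stand-in for the real C++ allocation (the class name and the idea of passing the size estimate in are assumptions for illustration):

using System;
using System.Runtime.InteropServices;

class NativeIndexWrapper
{
    private readonly IntPtr buffer;       // stand-in for the unmanaged index
    private readonly long estimatedBytes; // rough size; off by 2x is fine

    public NativeIndexWrapper(long estimatedBytes)
    {
        this.estimatedBytes = estimatedBytes;
        buffer = Marshal.AllocHGlobal(new IntPtr(estimatedBytes));
        GC.AddMemoryPressure(estimatedBytes); // in the constructor
    }

    ~NativeIndexWrapper()
    {
        Marshal.FreeHGlobal(buffer);             // the "C++ delete"
        GC.RemoveMemoryPressure(estimatedBytes); // right after the delete
    }
}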
calling GC::Collect() manually severely disrupts garbage collection efficiency
Don't panic. Odds are pretty high that, since your unmanaged allocations are so large, you rarely collect "naturally" and actually need forced collections, like the kind GC::AddMemoryPressure() triggers. That is just as "forced" as calling GC::Collect(), albeit with a heuristic that avoids collecting too frequently, one you might not particularly care about right now :)
Garbage collector is running in concurrent server mode
Don't. Use the workstation GC; it is much more conservative about heap segment size.
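The GC flavor is chosen in the application's config file (the gcServer element under runtime), not in code. As a sanity check, the documented GCSettings class lets you confirm at runtime which flavor the process actually got:

using System;
using System.Runtime;

class GcFlavorCheck
{
    static void Main()
    {
        // False means the workstation GC is in use.
        Console.WriteLine("Server GC: " + GCSettings.IsServerGC);
        Console.WriteLine("Latency mode: " + GCSettings.LatencyMode);
    }
}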
I want to suggest a brief read about "finalizers are not guaranteed to run". You can test it easily yourself by continuously generating good old Bitmaps:
private void genButton_Click(object sender, EventArgs e)
{
    Task.Run(() => GenerateNewBitmap());
}

private void GenerateNewBitmap()
{
    // Changing the size also changes collection behavior:
    // if this is a small bitmap, then collection happens.
    var size = picBox.Size;
    Bitmap bmp = new Bitmap(size.Width, size.Height);
    // Generate some pixels and Invoke it onto the UI if you wish
    picBox.Invoke((Action)(() => { picBox.Image = bmp; }));
    // Call again for an infinite loop
    Task.Run(() => GenerateNewBitmap());
}
It seems that on my machine, if I generate more than 500K pixels per bitmap, I can't keep generating forever: .NET gives me an OutOfMemoryException.
This behavior of the Bitmap class was true in 2005 and it is still true in 2015. The Bitmap class matters here because it has been in the library for a long time, accumulating bug fixes and performance improvements along the way. My take: if it can't do something I need, then I need to change what I need.
First, the thing about a disposable object is that you need to call Dispose yourself. No, you really need to call it yourself. Seriously. I suggest enabling the relevant rules in Visual Studio's Code Analysis and making appropriate use of using blocks, etc.
Second, calling a Dispose method does not mean calling delete (or free) on the unmanaged side. What I did, and what I think you should do, is use reference counting. If your unmanaged side uses C++, then I suggest shared_ptr. Visual Studio has supported shared_ptr since VS2012, as far as I know.
Therefore, with reference counting, calling Dispose on your managed object decreases the reference count on your unmanaged object, and the unmanaged memory gets deleted only when that reference count reaches zero.
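On the managed side, the idea looks roughly like this (a sketch; the type name, the handle field and FreeNativeIndex are hypothetical stand-ins for the real interop calls):

using System;
using System.Threading;

class SharedIndexHandle
{
    private int refCount = 1;       // the creating worker holds one reference
    private readonly IntPtr handle; // hypothetical unmanaged index handle

    public SharedIndexHandle(IntPtr handle) { this.handle = handle; }

    public void AddRef()
    {
        Interlocked.Increment(ref refCount);
    }

    public void Release()
    {
        // Free the unmanaged memory only when the last reference is gone.
        if (Interlocked.Decrement(ref refCount) == 0)
            FreeNativeIndex(handle);
    }

    private static void FreeNativeIndex(IntPtr h)
    {
        // hypothetical: P/Invoke into the C++ side to delete the index
    }
}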
I was asked this question in an interview: how can a memory leak occur in C# when, as we all know, the garbage collector is responsible for all memory management? How is that possible?
From MSDN:
A memory leak occurs when memory is allocated in a program and is
never returned to the operating system, even though the program does
not use the memory any longer. The following are the four basic types of memory leaks:
In a manually managed memory environment: Memory is dynamically allocated and referenced by a pointer. The pointer is erased before the memory is freed. After the pointer is erased, the memory can no longer be accessed and therefore cannot be freed.
In a dynamically managed memory environment: Memory is disposed of but never collected, because a reference to the object is still active. Because a reference to the object is still active, the garbage collector never collects that memory. This can occur with a reference that is set by the system or the program.
In a dynamically managed memory environment: The garbage collector can collect and free the memory but never returns it to the operating system. This occurs when the garbage collector cannot move the objects that are still in use to one portion of the memory and free the rest.
In any memory environment: Poor memory management can result when many large objects are declared and never permitted to leave scope. As a result, memory is used and never freed.
Dim DS As DataSet
Dim cn As New SqlClient.SqlConnection("data source=localhost;initial catalog=Northwind;integrated security=SSPI")
cn.Open()
Dim da As New SqlClient.SqlDataAdapter("Select * from Employees", cn)
Dim i As Integer
DS = New DataSet()
For i = 0 To 1000
    da.Fill(DS, "Table" + i.ToString)
Next
Although this code is obviously inefficient and not practical, it is meant to demonstrate that if objects are added to a collection (such as adding the tables to the DataSet collection), the objects are kept active as long as the collection remains alive. If a collection is declared at the global level of the program, and objects are declared throughout the program and added to that collection, this means that even though the objects are no longer in scope, the objects remain alive because they are still being referenced.
You may also check this reference:
Identify And Prevent Memory Leaks In Managed Code
The above link gives a very good conclusion
Although .NET reduces the need for you to be concerned with memory,
you still must pay attention to your application's use of memory to
ensure that it is well-behaved and efficient. Just because an
application is managed doesn't mean you can throw good software
engineering practices out the window and count on the GC to perform
magic.
Just holding on to a reference to an object when you actually don't have any use for it anymore is a good way to leak. Particularly so when you store it in a static variable; that creates a reference that lives for the life of the AppDomain unless you explicitly set it back to null. If such a reference is stored in a collection, then you can get a true leak that can crash your program with OOM. Not usual.
Such leaks can be hard to find. Events in particular can be tricky that way: subscribing an event handler causes the event source object to hold a reference to the event handler's target object. This is hard to see in C# code, because you never explicitly pass this when you subscribe the event. If the event source object lives for a long time, you can get in trouble with all of the subscriber objects staying referenced.
The tricky ones do require a memory profiler to diagnose.
An example would be a class Child containing a ClickEventHandler method subscribed to an event ClickEvent of another class Parent.
Collection of the Child instance is blocked until the Parent instance becomes unreachable. Even if Child goes out of scope, it won't be collected by the GC until Parent is.
None of the subscribers subscribing to a broadcaster (the event) will be collected by the GC until the broadcaster becomes unreachable.
So it's a one-way relation:
Broadcaster(ClickEvent) -> Subscribers(ClickEventHandler)
Collection of all the ClickEventHandlers is blocked until the ClickEvent's owner becomes unreachable!
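In code, that relation looks roughly like this (using the Parent/Child/ClickEvent names from the example above):

using System;

class Parent
{
    public event EventHandler ClickEvent;

    public void RaiseClick()
    {
        if (ClickEvent != null) ClickEvent(this, EventArgs.Empty);
    }
}

class Child
{
    public Child(Parent parent)
    {
        // Subscribing stores a delegate that references this Child in the
        // Parent's invocation list, so a long-lived Parent keeps every
        // subscribed Child alive until the handler is removed (-=) or the
        // Parent itself becomes unreachable.
        parent.ClickEvent += ClickEventHandler;
    }

    private void ClickEventHandler(object sender, EventArgs e) { }
}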
As an example:
You have an application with a main form and a static variable Popups[] collectionOfPopups.
You store every popup object the application creates in this static array and never remove any of them.
Each new popup will then take up memory that the GC will never release.
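A minimal sketch of that leak (Popup here is a hypothetical stand-in for the real popup class, and a List is used in place of the array):

using System.Collections.Generic;

class Popup { /* hypothetical popup window */ }

static class PopupTracker
{
    // A static collection is rooted for the life of the AppDomain, so
    // everything added to it stays reachable and is never collected.
    static readonly List<Popup> collectionOfPopups = new List<Popup>();

    public static void Show()
    {
        var popup = new Popup();
        collectionOfPopups.Add(popup); // added but never removed: a leak
    }
}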
Read about how the GC works:
http://msdn.microsoft.com/en-us/library/ee787088.aspx
It explains everything.
There can be multiple reasons, but here is one:
Consider two classes:
class A
{
    private B b;
}

class B
{
    private A a;
}
If you create an object of each of those classes and cross-link them, and after that both objects go out of scope, each still holds a link to the other. Note, however, that the .NET garbage collector is a tracing collector and handles such cycles fine: once no outside root references either object, both are collected. Cross-links like these only cause genuine leaks in reference-counting schemes (for example, across COM interop boundaries).
I have a design and I am not sure if garbage collection will occur correctly.
I have some magic apples; some are yummy, some are bad.
I have a dictionary: BasketList = Dictionary<basketID, Basket> (a list of baskets).
Each Basket object holds a single Apple, and each Basket stores a reference to an object AppleSeperation. AppleSeperation stores 2 dictionaries, YummyApples = Dictionary<basketID, Apple> and BadApples = Dictionary<basketID, Apple>, so when I'm asked where an apple is, I know.
An Apple object stores BasketsImIn = Dictionary<ID, Basket>, which points back to the Baskets it is in.
My question is, if I delete a basket from BasketList and make sure I delete the Apple from BadApples and/or YummyApples, will garbage collection happen properly, or will there be some messy references lying around?
You are right to be thinking carefully about this; having the various references has implications not just for garbage collection, but for your application's proper functioning.
However, as long as you are careful to remove all references you have set, and you don't have any other free-standing variables holding onto the reference, the garbage collector will do its work.
The GC is actually fairly sophisticated at collecting unreferenced objects. For example, it can collect two objects that reference each other, but have no other 'living' references in the application.
See http://msdn.microsoft.com/en-us/library/ee787088.aspx for fundamentals on garbage collection.
From the above link, garbage collection happens when any of the following is true:
The system has low physical memory.
The memory that is used by allocated objects on the managed heap surpasses an acceptable threshold. This means that a threshold of acceptable memory usage has been exceeded on the managed heap. This threshold is continuously adjusted as the process runs.
The GC.Collect method is called. In almost all cases, you do not have to call this method, because the garbage collector runs continuously. This method is primarily used for unique situations and testing.
If you are performing the clean-up properly, then you need not worry!
I am a bit confused about the fact that in C# only the reference types get garbage collected.
That means GC picks only the reference types for memory de-allocation.
So what happens with the value types, as they also occupy memory on the stack?
For a start, whether they're on the stack or part of the heap depends on what context they're part of - if they're within a reference type, they'll be on the heap anyway. (You should consider how much you really care about the stack/heap divide anyway - as Eric Lippert has written, it's largely an implementation detail.)
However, basically value type memory is reclaimed when the context is reclaimed - so when the stack is popped by you returning from a method, that "reclaims" the whole stack frame. Likewise if the value type value is actually part of an object, then the memory is reclaimed when that object is garbage collected.
The short answer is that you don't have to worry about it :) (This assumes you don't have anything other than the memory to worry about, of course - if you've got structs with references to native handles that need releasing, that's a somewhat different scenario.)
I am a bit confused about the fact that in C# only the reference types get garbage collected.
This is not a fact. Or, rather, the truth or falsity of this statement depends on what you mean by "get garbage collected". The garbage collector certainly looks at value types when collecting; those value types might be alive and holding on to a reference type:
struct S { public string str; }
...
S s = default(S); // local variable of value type
s.str = M();
When the garbage collector runs, it certainly looks at s, because it needs to determine whether s.str is still alive.
My suggestion: clarify precisely what you mean by the verb "gets garbage collected".
GC picks only the reference types for memory de-allocation.
Again, this is not a fact. Suppose you have an instance of
class C { int x; }
the memory for the integer will be on the garbage-collected heap, and therefore reclaimed by the garbage collector when the instance of C becomes unrooted.
Why do you believe the falsehood that only the memory of reference types is deallocated by the garbage collector? The correct statement is that memory that was allocated by the garbage collector is deallocated by the garbage collector, which I think makes perfect sense. The GC allocated it so it is responsible for cleaning it up.
So what happens with the value types, as they also occupy memory on the stack?
Nothing at all happens to them. Nothing needs to happen to them. The stack is a million bytes. The size of the stack is determined when the thread starts up; it starts at a million bytes and it stays a million bytes throughout the entire execution of the thread. Memory on the stack is neither created nor destroyed; only its contents are changed.
There are too many verbs used in this question, like destroyed, reclaimed, deallocated, removed. That doesn't correspond well with what actually happens. A local variable simply ceases to be, Norwegian parrot style.
A method has a single point of entry; the first thing that happens is that the CPU stack pointer is adjusted, creating a "stack frame": storage space for the local variables. The CLR guarantees that this space is initialized to 0, although that's not otherwise a feature you use much in C# because of the definite-assignment rule.
A method has a single point of exit, even if your method code is peppered with multiple return statements. At that point the stack pointer is simply restored to its original value; in effect it "forgets" that the local variables were ever there. Their values are not 'scrubbed' in any way, the bytes are still there, but they won't last long: the next call in your program is going to overwrite them again. The CLR zero-initialization rule ensures that you can never observe those old values, which would be insecure.
This is very, very fast; it takes no more than a single processor cycle. A visible side effect of this behavior in the C# language is that value types cannot have a finalizer, ensuring no extra work has to be done.
A value type on the stack is removed from the stack when it goes out of scope.
Value types are destroyed as soon as they go out of scope.
Value types would get deallocated when the stack frame is removed after the method has executed, I would assume.
I would also like to add that the stack exists at the thread level, while the heap exists at the application-domain level. So when a thread ends, the stack memory used by that specific thread is reclaimed.
Every value type instance in .NET will be part of something else, which could be a larger enclosing value type instance, a heap object, or a stack frame. Whenever any of those things come into being, any structures within them will also come into being; those structures will then continue to exist as long as the thing containing them does. When the thing that contains the structure ceases to exist, the structure will as well. There is no way to destroy a structure without destroying the container, and there is no way to destroy something that contains one or more structures without destroying the structures contained therein.
I have a problem where a couple of 3-dimensional arrays allocate a huge amount of memory, and the program sometimes needs to replace them with bigger/smaller ones, which throws an OutOfMemoryException.
Example: there are 5 allocated 96 MB arrays (200x200x200, 12 bytes of data in each entry) and the program needs to replace them with 210x210x210 (111 MB) ones. It does so in a manner similar to this:
array1 = new Vector3[210,210,210];
Where array1 through array5 are the same fields used previously. This should make the old arrays candidates for garbage collection, but seemingly the GC does not act quickly enough and leaves the old arrays allocated before allocating the new ones, which causes the OOM; whereas if they were freed before the new allocations, the space would be enough.
What I'm looking for is a way to do something like this:
GC.Collect(array1) // this would set the reference to null and free the memory
array1 = new Vector3[210,210,210];
I'm not sure whether a full garbage collection would be a good idea, since that code may (in some situations) need to be executed fairly often.
Is there a proper way of doing this?
This is not an exact answer to the original question, "how to force GC", yet I think it will help you reexamine your issue.
After seeing your comment,
Putting in the GC.Collect() does seem to help, although it still does not solve the problem completely - for some reason the program still crashes when about 1.3 GB are allocated (I'm using System.GC.GetTotalMemory(false) to find the real amount allocated).
I suspect you may have memory fragmentation. If an object is large (85,000 bytes under the .NET 2.0 CLR, if I remember correctly; I don't know whether that has changed), it is allocated in a special heap, the Large Object Heap (LOH). The GC does reclaim the memory used by unreachable objects in the LOH, yet for performance reasons it does not perform compaction in the LOH as it does in the other heaps (gen 0, gen 1 and gen 2).
If you frequently allocate and deallocate large objects, the LOH becomes fragmented, and even though you have more free memory in total than you need, you may no longer have a contiguous block of it, hence the OutOfMemory exception.
I can think of two workarounds at the moment.
Move to a 64-bit machine/OS and take advantage of it :) (easiest, but possibly hardest as well, depending on your resource constraints).
If you cannot do #1, then try to allocate one huge chunk of memory up front and reuse it (this may require writing a helper class that presents a smaller array, which in fact resides within the larger one) to avoid fragmentation; a sketch follows below. This may help a little, yet it may not completely solve the issue, and you may have to deal with the added complexity.
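Such a helper might look roughly like this (a sketch; the class name and flat indexing scheme are illustrative): one backing array is allocated once, and 3-D views index into it, so the LOH block is never churned.

class Flat3DArray<T>
{
    private readonly T[] data;
    private readonly int ny, nz;

    public Flat3DArray(int nx, int ny, int nz)
    {
        this.ny = ny;
        this.nz = nz;
        data = new T[nx * ny * nz]; // one big block, allocated once
    }

    public T this[int x, int y, int z]
    {
        // Row-major mapping from (x, y, z) to the flat backing array.
        get { return data[(x * ny + y) * nz + z]; }
        set { data[(x * ny + y) * nz + z] = value; }
    }
}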
It seems you've run into a LOH (Large Object Heap) fragmentation issue.
Large Object Heap
CLR Inside Out Large Object Heap Uncovered
You can check whether you're having LOH fragmentation issues using SOS.
Check this question for an example of how to use SOS to inspect the LOH.
Forcing a garbage collection is not always a good idea (it can actually extend the lifetimes of objects in some situations by promoting them to a higher generation). If you have to, you would use:
array1 = null;
GC.Collect();
array1 = new Vector3[210,210,210];
Isn't this just large object heap fragmentation? Objects > 85,000 bytes are allocated on the large object heap. The GC frees up space in this heap but never compacts the remaining objects. This can result in insufficient contiguous memory to successfully allocate a large object.
Alan.
If I had to speculate, your problem is not really that you are going from Vector3[200,200,200] to Vector3[210,210,210]; most likely you have taken similar steps before this one:
i.e.
// first you have
Vector3[10,10,10];
// then
Vector3[20,20,20];
// then maybe
Vector3[30,30,30];
// .. and so on ..
// ...
// then
Vector3[200,200,200];
// and eventually you try
Vector3[210,210,210] // and you get an OutOfMemoryException..
If that is true, I would suggest a better allocation strategy. Try over-allocating - maybe doubling the size every time - as opposed to always allocating just the space that you need. Especially if these arrays are ever used by objects that need to pin the buffers (i.e. if they have ties to native code).
So, instead of the above, have something like this:
// first start with an arbitrary size
Vector3[64,64,64];
// then double that
Vector3[128,128,128];
// and then... so in three steps you get to where
// it would otherwise have taken you twenty.
Vector3[256,256,256];
They might not be getting collected because they're being referenced somewhere you're not expecting.
As a test, try changing your references to WeakReferences instead and see if that resolves your OOM problem. If it doesn't, then you're referencing them somewhere else.
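A small sketch of such a test, using a large byte array as a stand-in for one of the Vector3 arrays (purely illustrative):

var weak = new WeakReference(new byte[100 * 1024 * 1024]); // ~100 MB target
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
// If nothing else holds a strong reference, the target is gone by now.
Console.WriteLine(weak.IsAlive ? "still rooted somewhere" : "collected");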
I understand what you're trying to do and pushing for immediate garbage collection is probably not the right approach (since the GC is subtle in its ways and quick to anger).
That said, if you want that functionality, why not create it?
public static void Collect(ref object o)
{
    o = null;
    GC.Collect();
}
An OutOfMemory exception internally triggers one GC cycle automatically and attempts the allocation again before actually throwing the exception to your code. The only way you could be having OutOfMemory exceptions is if you're holding references to too much memory. Clear the references as soon as you can by assigning them null.
Part of the problem may be that you're allocating a multidimensional array, which is represented as a single contiguous block of memory on the large object heap (more details here). This can block other allocations as there isn't a free contiguous block to use, even if there is still some free space somewhere, hence the OOM.
Try allocating it as a jagged array - Vector3[210][210][210] - which spreads the sub-arrays around memory rather than as a single block, and see if that improves matters.
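A sketch of that jagged allocation (assuming the same Vector3 struct from the question; each innermost row is its own small heap object of roughly 2.5 KB instead of one contiguous ~111 MB block):

var array1 = new Vector3[210][][];
for (int i = 0; i < 210; i++)
{
    array1[i] = new Vector3[210][];
    for (int j = 0; j < 210; j++)
    {
        array1[i][j] = new Vector3[210]; // small array, stays off the LOH
    }
}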
John, creating objects larger than 85,000 bytes will make the object end up in the large object heap. The large object heap is never compacted; instead, the free space is reused.
This means that if you are allocating larger arrays every time, you can end up in situations where the LOH is fragmented, hence the OOM.
You can verify this is the case by breaking with the debugger at the point of OOM and getting a dump; submitting this dump to MS through a Connect bug (http://connect.microsoft.com) would be a great start.
What I can assure you of is that the GC will do the right thing trying to satisfy your allocation request; this includes kicking off a GC to clean up old garbage to satisfy new allocation requests.
I don't know what is the policy of sharing out memory dumps on Stackoverflow, but I would be happy to take a look to understand your problem more.