Avoiding memory leaks using ArrayPool within a class - c#

I have a class which I am currently using to build byte arrays (for network packets), it starts with some initial buffer, and resizes as more data is written to it. I want to reduce the amount of GC caused by this util by using an ArrayPool, but am having a hard time figuring out a good way to prevent memory leaks without too much overhead.
My main idea was to make my class IDisposable and return the array back to the pool in the Dispose() call.
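Roughly sketched, the idea looks like this (simplified; the real class has more write methods, and the growth logic here is only illustrative):

using System;
using System.Buffers;

public sealed class ByteBuilder : IDisposable
{
    private byte[] _buffer = ArrayPool<byte>.Shared.Rent(256); // initial buffer
    private int _length;

    public void AddData(ReadOnlySpan<byte> data)
    {
        if (_length + data.Length > _buffer.Length)
        {
            // Rent a bigger array, copy the contents over, and return the old one.
            byte[] bigger = ArrayPool<byte>.Shared.Rent(
                Math.Max(_buffer.Length * 2, _length + data.Length));
            Array.Copy(_buffer, bigger, _length);
            ArrayPool<byte>.Shared.Return(_buffer);
            _buffer = bigger;
        }
        data.CopyTo(_buffer.AsSpan(_length));
        _length += data.Length;
    }

    public void Dispose()
    {
        if (_buffer != null)
        {
            ArrayPool<byte>.Shared.Return(_buffer);
            _buffer = null; // makes double-dispose a no-op
        }
    }
}

Used like this: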
using ByteBuilder builder = new ByteBuilder();
builder.AddData(); // Add a bunch of data
// write data to network stream
...
// builder is disposed and memory is returned to the ArrayPool at the end of the method call
The problem here is that if it happens to be used without the using declaration, the memory would never be returned. Is there some way I can guarantee how someone uses it?
Another idea I had was to use a finalizer to return the memory to the array pool, but it seems that this has significant overhead from what I have read. I am allocating a lot of arrays, and many of them could be small, so having a finalizer seems like it may not be worth the trade off.

Related

C# Too Much Memory Usage

I have a process with multiple steps defined (let's say a generic implementation of a strategy pattern), where all steps pass a common ProcessParameter object around (reading from and writing to it).
This ProcessParameter is an object having many arrays and collections. Example:
class ProcessParameter
{
    public List<int> NumbersAllStepsNeed { get; set; }
    public List<int> OhterNumbersAllStepsNeed { get; set; }
    public List<double> SomeOtherData { get; set; }
    public List<string> NeedThisToo { get; set; }
    ...
}
Once the steps finished, I'd like to make sure the memory is freed and not hang around, because this can have a big memory footprint and other processes need to run too.
Do I do that by running:
pParams.NumbersAllStepsNeed = null;
pParams.OhterNumbersAllStepsNeed = null;
pParams.SomeOtherData = null;
...
or should ProcessParameter implement IDisposable, with the Dispose method doing that, so that I just need to call pParams.Dispose() (or wrap it in a using block)?
What is the best and most elegant way to clean the memory footprint of the used data of one process running?
Does having arrays instead of lists change anything? Or Mixed?
The actual param type I need is collections/array of custom objects.
Am I looking in the right direction?
UPDATE
Great questions! Thanks for the comments!
I used to have this process running as a single run and I could see memory usage go very high and then gradually down to "normal".
The problem came when I started chaining these processes on top of each other with different starting parameters. That is when memory went unreasonably high, so I want to include a cleaning step between two processes and am looking for the best way to do that.
There is a DB; this params object is a sort of "cache" to speed things up.
Good point on IDisposable, I do not keep unmanaged resources in the params object.
Whilst using the Disposal pattern is a good idea, I don't think it will give you any extra benefits in terms of freeing up memory.
Two things that might:
Call GC.Collect()
However, I really wouldn't bother (unless perhaps you are getting out of memory exceptions). Calling GC.Collect() explicitly may hurt performance, and the garbage collector really does do a good job on its own. (But see LOH - below.)
Be aware of the Large Object Heap (LOH)
You mentioned that it uses a "big memory footprint". Be aware that any single memory allocation for 85,000 bytes or above comes from the large object heap (LOH). The LOH doesn't get compacted like the small object heap. This can lead to the LOH becoming fragmented and can result in out of memory errors even when you have plenty of available memory.
When might you stray into the LOH? Any memory allocation of 85,000 bytes or more, so on a 64-bit system that would be any array (or list or dictionary) of 8-byte elements with 10,625 elements or more, image manipulation, large strings etc.
Three strategies to help minimise fragmentation of the LOH:
i. Redesign to avoid it. Not always practical. But a list of lists or dictionary of dictionaries might avoid the limit. This can make the implementation more complex so I wouldn't unless you really need to, but on the plus side this can be very effective.
ii. Use fixed sizes. If all or most of your memory allocations in the LOH are the same size then this will help minimise any fragmentation. For example for dictionaries and lists set the capacity (which sets the size of the internal array) to the largest size you are likely to use (see the sketch after this list). Not so practical if you are doing image manipulation.
iii. Force the garbage collector to compact the LOH:
System.Runtime.GCSettings.LargeObjectHeapCompactionMode = System.Runtime.GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();
You do need to be using .NET Framework 4.5.1 or later for that.
This is probably the simplest approach. In my own applications I have a couple of instances where I know I will be straying into the LOH and that fragmentation can be an issue and I set
System.Runtime.GCSettings.LargeObjectHeapCompactionMode = System.Runtime.GCLargeObjectHeapCompactionMode.CompactOnce;
as standard in the destructor - but only call GC.Collect() explicitly if I get an out of memory exception when allocating.
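To make strategy ii concrete, here is a minimal sketch (the capacity value is an assumed worst case, not a recommendation):

using System.Collections.Generic;

static class FixedSizeBuffers
{
    // Assumed worst case - measure your real upper bound in practice.
    private const int MaxExpectedItems = 500000;

    // Presizing means the backing arrays are allocated once at a fixed size,
    // so repeated runs produce same-sized LOH blocks that can be reused
    // instead of fragmenting the heap.
    public static List<int> CreateNumberList()
    {
        return new List<int>(MaxExpectedItems);
    }

    public static Dictionary<int, string> CreateLookup()
    {
        return new Dictionary<int, string>(MaxExpectedItems);
    }
}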
Hope this helps.

Memory pressure informer/counter/statistics

I have an application that saves/caches many objects in a static field. When there are a lot of objects saved during the lifetime of the application, there is out of memory exception being thrown because the cache grows so large.
Is there any class or object that informs me that the memory is running out and there will be an outofmemoryexception throwing soon so that I can know that I need to free up some memory by removing some of these cached objects? I'm looking for a sign that there is memory pressure in the application so that I can take precautionary action during application runtime before the memory exception is thrown.
I also have an application that uses as much ram as it possibly can. You have a couple of options here depending on your specific scenario (all have drawbacks).
One of the simplest is to preallocate a circular buffer of classes (or bytes) that you are caching to consume all RAM up front. This usually is not preferred because consuming all the RAM on a box when you don't need to is just plain rude.
Another way to handle large caches (or keep from throwing out of memory exceptions) is to first check that the memory I need exists before allocation. This has the drawback of needing to serialize memory allocations; otherwise, in the time between the available-RAM check and the allocation, you may run out of RAM.
Here is a sample of the best way I have found to do this in .NET:
//Wait for enough memory
var temp = new System.Diagnostics.PerformanceCounter("Memory", "Available MBytes");
long freeMemory = Convert.ToInt64(temp.NextValue()) * (long)1000000; // conservative bytes-per-MB approximation
long neededMemory = (long)bytesToAllocate;
int attempts = 1200; // two minutes at 100 ms per attempt
while (freeMemory < neededMemory)
{
    // Signal that memory needs to be freed
    Console.WriteLine("Waiting for enough free memory. Free memory (bytes): " + freeMemory + ", needed memory (bytes): " + neededMemory);
    System.Threading.Thread.Sleep(100);
    freeMemory = Convert.ToInt64(temp.NextValue()) * (long)1000000;
    --attempts;
    if (0 == attempts)
        throw new OutOfMemoryException("Could not get enough free memory. File:" + Path.GetFileName(wavFileURL)); // wavFileURL comes from the surrounding application
}
//Try and allocate the memory we need.
Furthermore, once I am inside the while loop I signal that some memory needs to be freed (depending on your application). NOTE: I tried to simplify the code by using a Sleep statement, but ultimately you would want some type of non-polling operation if possible.
I am not sure about the specifics of your application, but if you are multithreaded, or run many different executables, then it may be better to serialize this memory check when allocating memory to the cache. If that is the case I use a Semaphore to make sure that only one thread and/or process can allocate RAM as needed. Similar to this:
Semaphore namedSemaphore = new Semaphore(1, 1, "MemoryAllocationSemaphore"); // named semaphores are cross-process
if (bytesToAllocate > 2000000) // if we need less than 2MB then don't bother locking
{
    if (!namedSemaphore.WaitOne((int)TimeSpan.FromMinutes(15).TotalMilliseconds))
    {
        throw new TimeoutException("Waited over 15 minutes for acquiring memory allocation semaphore");
    }
}
Then a little later on call this:
namedSemaphore.Release();
in a finally block to release the semaphore.
Another option is to use a multiple-reader, single-writer lock (such as the ReaderWriterLock class) to have multiple threads pulling data from the cache. This has the advantage that while you hold the write lock you can check memory pressure, clean up, and then add more to the cache, yet still have multiple threads processing data. The drawback is of course that getting data into the cache is bottlenecked through a single fixed point.
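A rough sketch of that approach using the newer ReaderWriterLockSlim (the class name, cleanup, and pressure heuristic here are placeholders, not a drop-in solution):

using System;
using System.Collections.Generic;
using System.Threading;

class PressureAwareCache<TKey, TValue>
{
    private readonly Dictionary<TKey, TValue> _items = new Dictionary<TKey, TValue>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public bool TryGet(TKey key, out TValue value)
    {
        _lock.EnterReadLock(); // many readers can run concurrently
        try { return _items.TryGetValue(key, out value); }
        finally { _lock.ExitReadLock(); }
    }

    public void Add(TKey key, TValue value)
    {
        _lock.EnterWriteLock(); // single writer: a safe point to check memory pressure
        try
        {
            if (GC.GetTotalMemory(false) > (1L << 30)) // placeholder heuristic: > 1 GB managed
                _items.Clear();                        // placeholder cleanup; a real cache would evict selectively
            _items[key] = value;
        }
        finally { _lock.ExitWriteLock(); }
    }
}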
If you have so much data cached that you're running out of memory, that indicates something seriously wrong with your caching approach. Even as a 32-bit process, you have a 2 GB address space.
You should consider using a limited cache where the user can set a preferred cache size and you automatically remove objects when this size is reached. You can implement certain caching strategies to pick which objects to remove from the cache: oldest objects or the least frequently used objects or a combination of these.
You can also try mapping some of your cache to disk and only keep the most frequently used objects in memory. When an object is needed that's not in memory, you can de-serialize it from disk and put it back in memory. If an object is then unused for a period of time, you can swap it to disk.
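A bare-bones sketch of such a size-limited cache with least-recently-used eviction (the names and capacity handling are made up for illustration; it ignores the disk-swapping part):

using System.Collections.Generic;

class LruCache<TKey, TValue>
{
    private readonly int _capacity;
    private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> _map =
        new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>();
    private readonly LinkedList<KeyValuePair<TKey, TValue>> _order =
        new LinkedList<KeyValuePair<TKey, TValue>>();

    public LruCache(int capacity) { _capacity = capacity; }

    public void Add(TKey key, TValue value)
    {
        LinkedListNode<KeyValuePair<TKey, TValue>> existing;
        if (_map.TryGetValue(key, out existing))
        {
            _order.Remove(existing); // replacing an existing entry
        }
        else if (_map.Count >= _capacity)
        {
            // Evict the least recently used entry (the tail of the list).
            LinkedListNode<KeyValuePair<TKey, TValue>> oldest = _order.Last;
            _order.RemoveLast();
            _map.Remove(oldest.Value.Key);
        }
        _map[key] = _order.AddFirst(new KeyValuePair<TKey, TValue>(key, value));
    }

    public bool TryGet(TKey key, out TValue value)
    {
        LinkedListNode<KeyValuePair<TKey, TValue>> node;
        if (_map.TryGetValue(key, out node))
        {
            _order.Remove(node);   // move to the front:
            _order.AddFirst(node); // it is now the most recently used
            value = node.Value.Value;
            return true;
        }
        value = default(TValue);
        return false;
    }
}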

Detecting when about to run out of memory (getting the amount of "free physical memory")

I'm transferring images from a high-FPS camera into a memory buffer (a List), and as those images are pretty large, the computer runs out of memory pretty quickly.
What I would like to do is to stop the transfer some time before the application runs out of memory. During my testing, I have found it to be consistent with the "Free Physical Memory" indicator getting close to zero.
Now the problem is that I can't find a way to actually get this value programmatically; in XP, it is not even displayed anywhere (just in the Vista/7 Task Manager).
I have tried all the ways I could find (WMI, performance counters, MemoryStatus, ...), but everything I got from those was just the "Available Physical Memory," which is of course not the same.
Any ideas?
Update
Unfortunately, I need the data to be in memory (yes, I know I can't guarantee it will be in physical memory, but still), because the data is streamed in real-time and I need to preview it in memory after it's been stored there.
Correlation is not causation. You can "run out of memory" even with loads of physical memory still free. Physical memory is almost certainly irrelevant; what you are probably running out of is address space.
People tend to think of "memory" as consuming space on a chip, but that hasn't been true for over a decade. Memory in modern operating systems is often better thought of as a large disk file that has a big hardware cache sitting on top of it to speed it up. Physical memory is just a performance optimization of disk-based memory.
If you're running out of physical memory then your performance is going to be terrible. But the scarce resource is actually address space that you are running out of. A big list has to have a large contiguous block of address space, and there might not be any block left that is large enough.
Don't do that. Pull down a reasonably sized block, dump it to disk, and then deal with the file on disk as needed.
I'm late to the party, but have you considered using the System.Runtime.MemoryFailPoint class? It does a bunch of stuff to ensure that the requested allocation would succeed and throws InsufficientMemoryException if it fails; you can catch this and stop your transfer. You can probably predict an average size of incoming frames and try to allocate 3 or 4 of them, then stop acquisition when a failure occurs. Maybe something like this?
const int AverageFrameSize = 10 * 1024 * 1024; // 10MB

void Image_OnAcquired(...)
{
    try
    {
        var memoryCheck = new MemoryFailPoint(AverageFrameSize * 3);
    }
    catch (InsufficientMemoryException ex)
    {
        _camera.StopAcquisition();
        StartWaitingForFreeMemory();
        return;
    }

    // if you get here, there's enough memory for at least a few
    // more frames
}
I doubt it'd be 100% foolproof, but it's a start. It's definitely more reliable than the performance counter for reasons that are explained in the other answers.
You can't use the free memory counter on its own in Vista/7 as a guide since it may be close to zero all the time. The reason for this is Vista/7's superfetch which uses free memory to cache stuff from disk that it thinks you're likely to use.
Linky: http://www.codinghorror.com/blog/2006/09/why-does-vista-use-all-my-memory.html
In addition if you're running a 32-bit C# process you are limited to 2GB of memory per process anyway (actually more like 1.5GB in practice before things become unstable) so even if your box shows you have loads of free memory you will still get an out of memory exception when your process hits the 2GB limit.
As Tergiver comments above, the real solution is to avoid holding all of the file in memory and instead swapping bits of the image in and out of memory as required.
Thanks for all the answers.
I've been thinking about it some more and came to a conclusion that it would be quite difficult (if not impossible) to do what I initially wanted, that is to somehow detect when the application is about to run out of memory.
All answers seem to point in the same direction (to somehow keep data out of memory), however unfortunately I cannot go there, as I really "need" the data to stay inside the memory (physical if possible).
As I have to make a compromise, I decided to create a setting for user to decide the memory usage limit for captured data. It is at least easy to implement.
Wanted to add my own answer because the otherwise good answer by OwenP has two important errors in the way it is using System.Runtime.MemoryFailPoint.
The first mistake is very simple to fix: the constructor signature is public MemoryFailPoint(int sizeInMegabytes), so the AverageFrameSize argument should be in megabytes, not bytes. Also note the following about the size:
MemoryFailPoint operates at a granularity of 16 MB. Any values smaller than 16 MB are treated as 16 MB, and other values are treated as the next largest multiple of 16 MB.
The second mistake is that the MemoryFailPoint instance must be kept alive until after the memory you wish to use has been allocated, and then disposed!
This can be a bit harder to fix and might require design changes to be made depending on what OP's actual code looks like.
The reason that you have to dispose it in this fashion is that the MemoryFailPoint class keeps a process-wide record of memory reservations made from its constructor. This is done to ensure that if two threads perform a memory check at roughly the same time, they will not both succeed unless there is enough memory to meet the demands of both threads. (Otherwise the MemoryFailPoint class would be useless in multi-threaded applications!)
The memory "reserved" by the constructor is unreserved when calling Dispose(). Thus the thread should dispose the MemoryFailPoint-instance as soon as possible after it has allocated the required memory, but not before that.
(The "as soon as possible" part is preferred but not critical. Delaying the dispose can lead to other memory-checks failing needlessly, but at least you err on the conservative side.)
The above requirement is what requires alteration to the code's design. Either the method checking for memory has to also perform the allocation, or it has to pass the MemoryFailPoint instance out to the caller, which makes it the caller's responsibility to dispose of it at the correct time. (The latter is what the example code on MSDN does.)
Using the first approach (and a fixed buffer-size) might look something like this:
const int FrameSizeInMegabytes = 10; // 10MB (perhaps more is needed?)
const int FrameSizeInBytes = FrameSizeInMegabytes << 20;
// shifting by 20 is the same as multiplying with 1024 * 1024.

bool TryCreateImageBuffer(int numberOfImages, out byte[,] imageBuffer)
{
    // check that it is theoretically possible to allocate the array.
    if (numberOfImages < 0 || numberOfImages > 0x7FFFFFC7)
        throw new ArgumentOutOfRangeException("numberOfImages",
            "Outside allowed range: 0 <= numberOfImages <= 0x7FFFFFC7");

    // check that we have enough memory to allocate the array.
    MemoryFailPoint memoryReservation = null;
    try
    {
        memoryReservation =
            new MemoryFailPoint(FrameSizeInMegabytes * numberOfImages);
    }
    catch (InsufficientMemoryException)
    {
        imageBuffer = null;
        return false;
    }

    // if you get here, there's likely to be enough memory
    // available to create the buffer. Normally we can't be
    // 100% sure because another thread might allocate memory
    // without first reserving it with MemoryFailPoint in
    // which case you have a race condition for the allocate.
    // Because of this the allocation should be done as soon
    // as possible - the longer we wait the higher the risk.
    imageBuffer = new byte[numberOfImages, FrameSizeInBytes];

    // Now that we have allocated the memory we can go ahead and call dispose
    memoryReservation.Dispose();
    return true;
}
0x7FFFFFC7 is the maximum indexer allowed in any dimension on arrays of single-byte types and can be found on the MSDN page about arrays.
The second approach (where the caller is responsible for the MemoryFailPoint instance) might look something like this:
const int AverageFrameSizeInMegabytes = 10; // 10MB

/// <summary>
/// Tries to create a MemoryFailPoint instance for enough megabytes to
/// hold as many images as specified by <paramref name="numberOfImages"/>.
/// </summary>
/// <returns>
/// A MemoryFailPoint instance if the requested amount of memory was
/// available (at the time of this call), otherwise null.
/// </returns>
MemoryFailPoint GetMemoryFailPointFor(int numberOfImages)
{
    MemoryFailPoint memoryReservation = null;
    try
    {
        memoryReservation =
            new MemoryFailPoint(AverageFrameSizeInMegabytes * numberOfImages);
    }
    catch (InsufficientMemoryException)
    {
        return null;
    }
    return memoryReservation;
}
This looks a lot simpler (and is more flexible), but it is now up to the caller to handle the MemoryFailPoint instance and dispose of it at the correct point in time. (Added some mandatory documentation since I didn't come up with a good and descriptive name for the method.)
Important: What "reserved" means in this context
Memory is not "reserved" in the sense that it is guaranteed to be available (to the calling thread). It only means that when a thread uses MemoryFailPoint to check for memory, assuming it succeeds, it adds its memory size to a process-wide (static) "reserved" amount that the MemoryFailPoint class keeps track of. This reservation will cause any other call to MemoryFailPoint (e.g. from other threads) to perceive the total amount of free memory as the actual amount minus the current process-wide (static) "reserved" amount. (When MemoryFailPoint instances are disposed, they subtract their amount from the reserved total.) However, the actual memory allocation system itself doesn't know or care about this so-called "reservation", which is one of the reasons that MemoryFailPoint doesn't have strong guarantees.
Note also that memory "reserved" is simply kept track of as an amount. Since it isn't an actual reservation of a specific segment of memory this further weakens the guarantees as is illustrated by the following frustrated comment found in the reference source:
// Note that multiple threads can still ---- on our free chunk of address space, which can't be easily solved.
It's not hard to guess what the censored word is.
Here is an interesting article about how to overcome the 2GB limit on arrays.
Also if you need to allocate very large data structures you will need to know about <gcAllowVeryLargeObjects> which you can set in your app-config.
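For reference, enabling it is just a runtime element in app.config (this requires .NET Framework 4.5 or later, and it only has an effect in 64-bit processes):

<configuration>
  <runtime>
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>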
It is worth noting that this doesn't really have anything to do with physical memory exclusively, which is what the OP really wanted. As a matter of fact, one of the things MemoryFailPoint will try to do before it gives up and reports failure is to increase the size of the page file. But it will do a very decent job of avoiding an OutOfMemoryException if used correctly, which is at least half of what the OP wanted.
If you really want to force data into physical memory then, as far as I know, you have to go native with AllocateUserPhysicalPages which isn't the easiest thing in the world with a plethora of things that can go wrong, requires the appropriate permissions and is almost certainly overkill. The OS doesn't really like to be told how to manage memory so it doesn't make it easy to do so...
Getting an OutOfMemoryException just means that the current memory allocation could not be honored. It doesn't necessarily mean that the system or even the process is running out of memory. Imagine a hello world type application that starts off by allocating a 2 GB chunk of memory. On a 32 bit system, that will most likely trigger an exception despite the fact that the process hasn't really allocated any significant memory at this point.
A common source of OutOfMemoryExceptions is not enough contiguous memory being available. That is, plenty of memory is available, but no chunk is big enough to honor the current request. In other words, trying to avoid OOM by watching the free memory counters is not really feasible.

How to implement my own byte array creation and disposal

BACKGROUND:
In running my app through a profiler, it looks like the hotspots are all involved in allocating a large number of temporary new byte[] arrays.
In one session under CLR Profiler, a few short runs (3-5 seconds' worth of CPU time outside the profiler) produced over a gigabyte of garbage, the majority of it byte[] allocations, and this triggered over 500 collections.
In some cases it appears that the application is spending upwards of 10% of its CPU time performing collections.
Clearly a rewrite is in order.
So, I am thinking of replacing the new byte[] allocations with a pool class that could reuse the buffer at a later time.
Something like this ...
{
    byte[] temp = Pool.AllocateBuffer(1024);
    ...
}
QUESTION:
How can I force the application to call code in the routine Pool.deAllocate(temp) when temp is no longer needed.
In the above code fragment, temp is a pool-allocated byte[] buffer, but when it goes out of scope it simply becomes garbage. That is not a correctness problem, but the buffer never gets returned to the pool for reuse.
I know I could replace the "return 0;" with "Pool.deAllocate(temp); return 0", but I'm trying to force the recovery to occur.
Is this even remotely possible?
You could implement a Buffer class which implements IDisposable and returns the buffer to the pool when it's disposed. You can then give access to the underlying byte array, and so long as everyone plays nicely you can take advantage of reuse.
Be warned though:
Your buffers will quickly end up in gen 2, which may not be ideal for other reasons
If a malicious piece of code keeps a reference to the byte array, they could spy on data used by other code
You need to remember to dispose of buffers at the right time.
I actually have some code in MiscUtil to do this - see CachingBufferManager, CachedBuffer etc. I can't say I've used it much, mind you... and from what I remember, I made it a bit more complicated than I really needed to...
EDIT: To respond to the comments...
You can't force application code to release buffers, no. There's no automatic release mechanism in C# - a using statement is the closest we've got.
You could implement an implicit conversion to byte[] in your buffer class to allow you to call methods which have byte array parameters. Personally I'm not much of a fan of implicit conversions, but it's certainly available as an option.
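For illustration, a minimal sketch of such a disposable wrapper (the class name is invented, and I'm using the framework's ArrayPool<byte> from System.Buffers - newer than this answer - as the pool for brevity, rather than the MiscUtil classes):

using System;
using System.Buffers;

public sealed class PooledBuffer : IDisposable
{
    private byte[] _bytes;

    public PooledBuffer(int minimumSize)
    {
        _bytes = ArrayPool<byte>.Shared.Rent(minimumSize);
    }

    public byte[] Bytes
    {
        get
        {
            if (_bytes == null)
                throw new ObjectDisposedException("PooledBuffer");
            return _bytes;
        }
    }

    // Implicit conversion so the buffer can be passed straight to methods
    // taking byte[] - the caveat about implicit conversions above applies.
    public static implicit operator byte[](PooledBuffer buffer)
    {
        return buffer.Bytes;
    }

    public void Dispose()
    {
        if (_bytes != null)
        {
            ArrayPool<byte>.Shared.Return(_bytes);
            _bytes = null; // makes double-dispose a no-op and use-after-dispose throw
        }
    }
}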

Force garbage collection of arrays, C#

I have a problem where a couple of 3-dimensional arrays allocate a huge amount of memory and the program sometimes needs to replace them with bigger/smaller ones and throws an OutOfMemoryException.
Example: there are 5 allocated 96MB arrays (200x200x200, 12 bytes of data in each entry) and the program needs to replace them with 210x210x210 (111MB). It does it in a manner similar to this:
array1 = new Vector3[210,210,210];
Where array1-array5 are the same fields used previously. This should make the old arrays candidates for garbage collection, but seemingly the GC does not act quickly enough and leaves the old arrays allocated before allocating the new ones - which causes the OOM - whereas if they were freed before the new allocations the space would be enough.
What I'm looking for is a way to do something like this:
GC.Collect(array1) // this would set the reference to null and free the memory
array1 = new Vector3[210,210,210];
I'm not sure if a full garbage collection would be a good idea since that code may (in some situations) need to be executed fairly often.
Is there a proper way of doing this?
This is not an exact answer to the original question, "how to force GC", yet I think it will help you to reexamine your issue.
After seeing your comment,
Putting the GC.Collect(); does seem to help, although it still does not solve the problem completely - for some reason the program still crashes when about 1.3GB are allocated (I'm using System.GC.GetTotalMemory( false ); to find the real amount allocated).
I will suspect you may have memory fragmentation. If the object is large (85000 bytes under the .NET 2.0 CLR if I remember correctly; I do not know whether it has been changed or not), the object will be allocated in a special heap, the Large Object Heap (LOH). GC does reclaim the memory being used by unreachable objects in the LOH, yet it does not perform compaction in the LOH as it does for the other heaps (gen0, gen1, and gen2), for performance reasons.
If you frequently allocate and deallocate large objects, the LOH will become fragmented, and even though you have more free memory in total than what you need, you may not have a contiguous memory space anymore, and hence will get an OutOfMemory exception.
I can think of two workarounds at the moment.
Move to 64-bit machine/OS and take advantage of it :) (Easiest, but possibly hardest as well depending on your resource constraints)
If you cannot do #1, then try to allocate a huge chunk of memory first and use it (it may require writing a helper class to manipulate a smaller array, which in fact resides in a larger array) to avoid fragmentation. This may help a little bit, yet it may not completely solve the issue and you may have to deal with the complexity.
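A very rough sketch of workaround #2 (all sizes and names here are assumptions, and freeing individual pieces is deliberately not supported):

using System;

class SlabAllocator
{
    // One big allocation up front; it lands on the LOH once and stays put.
    private readonly byte[] _slab = new byte[100 * 1024 * 1024]; // assumed 100 MB budget
    private int _offset;

    public ArraySegment<byte> Allocate(int size)
    {
        if (_offset + size > _slab.Length)
            throw new OutOfMemoryException("Slab exhausted");
        ArraySegment<byte> segment = new ArraySegment<byte>(_slab, _offset, size);
        _offset += size;
        return segment;
    }

    // Simplification: reset the whole slab between processing runs
    // instead of freeing individual segments.
    public void Reset() { _offset = 0; }
}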
Seems you've run into LOH (Large object heap) fragmentation issue.
Large Object Heap
CLR Inside Out Large Object Heap Uncovered
You can check to see if you're having LOH fragmentation issues using SOS.
Check this question for an example of how to use SOS to inspect the LOH.
Forcing a Garbage Collection is not always a good idea (it can actually promote the lifetimes of objects in some situations). If you have to, you would use:
array1 = null;
GC.Collect();
array1 = new Vector3[210,210,210];
Isn't this just large object heap fragmentation? Objects > 85,000 bytes are allocated on the large object heap. The GC frees up space in this heap but never compacts the remaining objects. This can result in insufficient contiguous memory to successfully allocate a large object.
Alan.
If I had to speculate, your problem is not really that you are going from Vector3[200,200,200] to a Vector3[210,210,210], but that most likely you have similar previous steps before this one:
i.e.
// first you have
Vector3[10,10,10];
// then
Vector3[20,20,20];
// then maybe
Vector3[30,30,30];
// .. and so on ..
// ...
// then
Vector3[200,200,200];
// and eventually you try
Vector3[210,210,210] // and you get an OutOfMemoryException..
If that is true, I would suggest a better allocation strategy. Try over-allocating - maybe doubling the size every time as opposed to always allocating just the space that you need. Especially if these arrays are ever used by objects that need to pin the buffers (i.e. if they have ties to native code).
So, instead of the above, have something like this:
// first start with an arbitrary size
Vector3[64,64,64];
// then double that
Vector3[128,128,128];
// and then.. so in three steps you go to where otherwise
// it would have taken you 20..
Vector3[256,256,256];
They might not be getting collected because they're being referenced somewhere you're not expecting.
As a test, try changing your references to WeakReferences instead and see if that resolves your OOM problem. If it doesn't then you're referencing them somewhere else.
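A quick diagnostic sketch of that test (assuming the array1 field and Vector3 type from the question):

// Swap the strong reference for a weak one, force a collection,
// and see whether the old array actually dies.
WeakReference weak = new WeakReference(array1);
array1 = new Vector3[210, 210, 210];
GC.Collect();
Console.WriteLine("Old array still alive: " + weak.IsAlive);
// True here means something else still references the old array.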
I understand what you're trying to do and pushing for immediate garbage collection is probably not the right approach (since the GC is subtle in its ways and quick to anger).
That said, if you want that functionality, why not create it?
public static void Collect<T>(ref T o) where T : class
{
    // Null out the caller's reference so the object becomes unreachable,
    // then ask the GC to run. (Generic so it works with typed fields like
    // Vector3[] - a ref object parameter wouldn't accept those.)
    o = null;
    GC.Collect();
}
An OutOfMemory exception internally triggers a GC cycle automatically once and attempts the allocation again before actually throwing the exception to your code. The only way you could be having OutOfMemory exceptions is if you're holding references to too much memory. Clear the references as soon as you can by assigning them null.
Part of the problem may be that you're allocating a multidimensional array, which is represented as a single contiguous block of memory on the large object heap (more details here). This can block other allocations as there isn't a free contiguous block to use, even if there is still some free space somewhere, hence the OOM.
Try allocating it as a jagged array - Vector3[210][210][210] - which spreads the arrays around memory rather than as a single block, and see if that improves matters. For example:
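A sketch of that jagged allocation (using the question's Vector3 type and sizes):

// Each leaf row is its own small array (210 * 12 bytes, about 2.5 KB),
// well below the 85,000-byte LOH threshold, so no single 111 MB
// contiguous block is ever required.
Vector3[][][] array1 = new Vector3[210][][];
for (int x = 0; x < 210; x++)
{
    array1[x] = new Vector3[210][];
    for (int y = 0; y < 210; y++)
    {
        array1[x][y] = new Vector3[210];
    }
}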
John, creating objects > 85000 bytes will make the object end up in the large object heap. The large object heap is never compacted; instead, the free space is reused again.
This means that if you are allocating larger arrays every time, you can end up in situations where LOH is fragmented, hence the OOM.
You can verify this is the case by breaking with the debugger at the point of OOM and getting a dump; submitting this dump to MS through a Connect bug (http://connect.microsoft.com) would be a great start.
What I can assure you is that the GC will do the right thing trying to satisfy your allocation request; this includes kicking off a GC to clean up the old garbage to satisfy the new allocation requests.
I don't know what is the policy of sharing out memory dumps on Stackoverflow, but I would be happy to take a look to understand your problem more.
