Large object heap waste - C#

I noticed that my application runs out of memory quicker than it should. It creates many byte arrays of several megabytes each. However, when I looked at memory usage with VMMap, it appears .NET allocates much more than needed for each buffer. To be precise, when allocating a buffer of 9 megabytes, .NET creates a heap of 16 megabytes. The remaining 7 megabytes cannot be used to create another buffer of 9 megabytes, so .NET creates another 16 megabytes. So each 9MB buffer wastes 7MB of address space!
Here's a sample program that throws an OutOfMemoryException after allocating 106 buffers in 32-bit .NET 4:
using System.Collections.Generic;

namespace CSharpMemoryAllocationTest
{
    class Program
    {
        static void Main(string[] args)
        {
            var buffers = new List<byte[]>();
            for (int i = 0; i < 130; ++i)
            {
                buffers.Add(new byte[9 * 1000 * 1024]);
            }
        }
    }
}
Note that you can increase the size of the array to 16 * 1000 * 1024 and still allocate the same number of buffers before running out of memory.
VMMap shows this:
Also note how there's an almost 100% difference between the total Size of the Managed Heap and the total Committed size (1737MB vs. 946MB).
Is there a reliable way around this problem on .NET, i.e. can I coerce the runtime into allocating no more than what I actually need, or maybe much larger Managed Heaps that can be used for several contiguous buffers?

Internally the CLR allocates memory in segments. From your description it sounds like the 16 MB allocations are segments and your arrays are allocated within them. The remaining space is reserved and not really wasted under normal circumstances, as it will be used for other allocations. If you don't have any allocations that fit within the remaining chunks, that space is essentially overhead.
As your arrays are allocated using contiguous memory, only a single one of them fits within a segment, hence the overhead in this case.
The default segment size is 16 MB, but if your allocation is larger than that the CLR will allocate larger segments. I'm not aware of the details, but e.g. if I allocate 20 MB, VMMap shows me 24 MB segments.
One way to reduce the overhead is to make allocations that fit the segment sizes, if possible. But keep in mind that these are implementation details and could change with any update of the CLR.
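A minimal sketch of that idea, assuming the roughly 16 MB segment size observed with VMMap (an implementation detail that may differ between CLR versions); the 4096-byte margin is an arbitrary allowance for object headers and bookkeeping:

using System.Collections.Generic;

class SegmentSizedBuffers
{
    static void Main()
    {
        const int SegmentSize = 16 * 1024 * 1024;   // assumed LOH segment size (observed, not guaranteed)
        var buffers = new List<byte[]>();
        for (int i = 0; i < 100; ++i)
        {
            // Allocate close to a full segment instead of 9 MB plus ~7 MB of dead space.
            buffers.Add(new byte[SegmentSize - 4096]);
        }
    }
}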

The CLR reserves a 16MB chunk from the OS in one go, but only actively occupies 9MB of it.
I believe you are expecting two of the 9MB buffers to share one heap segment. The difficulty is that the second buffer would then be split over two segments:
Heap 1 = 9MB + 7MB
Heap 2 = 2MB
The problem then is that if the original 9MB buffer is deleted, we are left with two segments that cannot be tidied up, because their contents are shared across segments.
To improve performance, the approach is to keep each buffer within a single segment.
If you are worried about memory usage, don't be. High memory usage is not a bad thing with .NET: if nothing else needs the memory, what's the problem? The GC will kick in at some point and memory will be tidied up. The GC will only kick in:
When the CLR deems it necessary
When the OS tells the CLR to give back memory
When forced to by the code
But memory usage, especially in this example, shouldn't be a concern. Holding on to memory saves CPU cycles; if the runtime tidied up memory constantly, your CPU usage would be high and your process (and every other process on your machine) would run much slower. Forcing a collection from code looks like the sketch below.
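For completeness, a minimal example of the "forced to by the code" case; explicit collections are rarely a good idea outside of tests and diagnostics:

using System;

class ForcedCollection
{
    static void Main()
    {
        // Force a full, blocking collection; normally the CLR's own scheduling is better.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
    }
}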

This is an age-old symptom of the buddy-system heap management algorithm, where powers of 2 are used to split each block recursively in a binary tree, so for 9MB the next size up is 16MB. If you drop your array size down to 8MB, you will see your usage drop by half. This is not a new problem; native programmers deal with it too.
The small object heap (objects of less than 85,000 bytes) is managed differently, but at 9MB your arrays are on the large object heap. As of .NET 4.5, the large object heap does not participate in compaction; large objects are immediately promoted to generation 2.
You can't coerce the algorithm, but you can certainly adapt your code by figuring out which sizes will most efficiently fill the binary segments.
If you need to fill your process space with 9 MB arrays, either:
Figure out how to save 1MB to reduce the arrays to 8MB segments
Write or use a segmented array class that abstracts an array of 1 or 2MB array segments behind an indexer property (see the sketch below), the same way you build an unlimited bitfield or a growable ArrayList. Actually, I thought one of the built-in containers already did this.
Move to 64-bit
Reclaiming the fragmented portion of a buddy-system heap is an optimization with logarithmic returns (i.e. you are approximately running out of memory anyway). At some point you'll have to move to 64-bit whether it's convenient or not, unless your data size is fixed.
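A minimal sketch of the segmented array idea from the second bullet, assuming 1MB chunks; it is not a complete collection type, just enough to show the indexer trick:

class ChunkedByteArray
{
    const int ChunkSize = 1024 * 1024;    // 1MB pieces pack into heap segments far better than one 9MB block
    private readonly byte[][] chunks;

    public ChunkedByteArray(long length)
    {
        Length = length;
        chunks = new byte[(length + ChunkSize - 1) / ChunkSize][];
        for (int i = 0; i < chunks.Length; i++)
            chunks[i] = new byte[ChunkSize];
    }

    public long Length { get; private set; }

    // One logical index over many physical chunks.
    public byte this[long index]
    {
        get { return chunks[index / ChunkSize][index % ChunkSize]; }
        set { chunks[index / ChunkSize][index % ChunkSize] = value; }
    }
}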

Related

Impose a Memory Limit on the CLR

I'm trying to test the error handling of my code under limited memory situations.
I'm also keen to see how the performance of my code is affected in low memory situations where perhaps the GC has to run more often.
Is there a way of running a .NET application (or NUnit test suite) with limited memory? I know with Java you can limit the amount of memory the JVM has access to - is there something similar in .NET?
This is not an option in the CLR. Memory is managed very differently; there are at least 10 distinct heaps in a .NET process. A .NET program can use the entire virtual memory space available in a Windows process without restrictions.
The simplest approach is to just allocate the memory when your program starts. You have to be a bit careful: you cannot swallow too much in one gulp, because the address space is fragmented by the mix of code and data it contains at different addresses. Memory is allocated from the holes in between. To put a serious dent into the available address space you have to allocate at least a gigabyte, and that's not possible with a single allocation.
So just use a loop to allocate smaller chunks, say one megabyte at a time:
private static List<byte[]> Gobble = new List<byte[]>();

static void Main(string[] args) {
    for (int megabyte = 0; megabyte < 1024; megabyte++)
        Gobble.Add(new byte[1024 * 1024]);
    // etc..
}
Note that this is very fast; the allocated address space is just reserved and doesn't occupy any RAM.
You can enlist your process into a Windows Job Object. You can set memory (and other) limits for the job. This is the cleanest and the only sane way to limit the amount of memory that your process can use.
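A rough P/Invoke sketch of that Job Object approach, assuming the documented kernel32 signatures and struct layouts (verify against the Win32 documentation before relying on it; on older versions of Windows a process can belong to only one job, which can make this fail):

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class JobMemoryLimit
{
    const int JobObjectExtendedLimitInformation = 9;
    const uint JOB_OBJECT_LIMIT_PROCESS_MEMORY = 0x00000100;

    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_BASIC_LIMIT_INFORMATION
    {
        public long PerProcessUserTimeLimit;
        public long PerJobUserTimeLimit;
        public uint LimitFlags;
        public UIntPtr MinimumWorkingSetSize;
        public UIntPtr MaximumWorkingSetSize;
        public uint ActiveProcessLimit;
        public UIntPtr Affinity;
        public uint PriorityClass;
        public uint SchedulingClass;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct IO_COUNTERS
    {
        public ulong ReadOperationCount, WriteOperationCount, OtherOperationCount;
        public ulong ReadTransferCount, WriteTransferCount, OtherTransferCount;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
    {
        public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
        public IO_COUNTERS IoInfo;
        public UIntPtr ProcessMemoryLimit;
        public UIntPtr JobMemoryLimit;
        public UIntPtr PeakProcessMemoryUsed;
        public UIntPtr PeakJobMemoryUsed;
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string lpName);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetInformationJobObject(IntPtr hJob, int infoClass,
        ref JOBOBJECT_EXTENDED_LIMIT_INFORMATION info, int cbInfo);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool AssignProcessToJobObject(IntPtr hJob, IntPtr hProcess);

    // Cap the committed memory of the current process at the given number of bytes.
    public static void LimitCurrentProcess(ulong bytes)
    {
        IntPtr job = CreateJobObject(IntPtr.Zero, null);
        var info = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
        info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
        info.ProcessMemoryLimit = (UIntPtr)bytes;
        SetInformationJobObject(job, JobObjectExtendedLimitInformation,
            ref info, Marshal.SizeOf(typeof(JOBOBJECT_EXTENDED_LIMIT_INFORMATION)));
        AssignProcessToJobObject(job, Process.GetCurrentProcess().Handle);
    }
}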

Array Allocation in .NET

This question is about the allocation of arrays in .NET. I have a sample program below in which the largest array I can get has the length shown. If I increase length by 1, it gives an OutOfMemoryException. But if I keep that length and remove the comments, I am able to allocate two different big arrays. Both arrays are smaller than the .NET permissible object size of 2 GB and the total memory is also less than the virtual memory. Can anyone offer any thoughts?
using System;
using System.Diagnostics;

class Program
{
    static int length = 203423225;
    static double[] d = new double[length];
    //static int[] i = new int[15000000];

    static void Main(string[] args)
    {
        Console.WriteLine((sizeof(double) * (double)length) / (1024 * 1024));
        Console.WriteLine(d.Length);
        //Console.WriteLine(i.Length);
        Console.WriteLine(Process.GetCurrentProcess().VirtualMemorySize64.ToString());
    }
}
A 32-bit process must allocate virtual memory for the array from the address space it has available, by default 2 gigabytes, which contains a mix of both code and data. Allocations are made from the holes between the existing allocations.
Such allocations fail not because there is no more virtual memory left; they fail because the available holes are not big enough. And you are asking for a big hole: getting 1.6 gigabytes is very rare and will only work in very simple programs that don't load any additional DLLs. A poorly based DLL is a good way to cut a large hole in two, drastically reducing the odds of such an allocation succeeding. The more typical works-on-the-first-try allocation is around 650 megabytes. The second allocation didn't fail because there was another hole available. The odds go down considerably after a program has been running for a while and the address space has become fragmented; even a 90 MB allocation can then fail.
You can get insight into how the virtual memory address space is carved up for a program with SysInternals' VMMap utility.
A simple workaround is to set the EXE project's Platform target setting to AnyCPU and run the program on a 64-bit operating system. It will have gobs of addressable virtual memory space available; you'll only be limited by the maximum allowed size of the paging file and the .NET 2 gigabyte object size limit, a limitation that's addressed in .NET 4.5 with the new <gcAllowVeryLargeObjects> config element. Even a 32-bit program can take advantage of the full 4 gigabyte 32-bit address space on a 64-bit operating system with the /LARGEADDRESSAWARE option of editbin.exe; you'll have to run it in a post-build event.
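As an illustration of that last option, the post-build event is roughly the following single command (the exact path to editbin.exe depends on the Visual Studio installation, so this assumes the tool is reachable from the build environment):

editbin.exe /LARGEADDRESSAWARE "$(TargetPath)"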
This happens because the memory allocated for an array must be contiguous (i.e. the array must be allocated as one large block of memory). Even if there is enough free space in total to hold the array, the allocation will still fail unless the largest free region is big enough for the entire array.
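To make the contiguity point concrete, a hedged sketch: the same total number of doubles can often be obtained as several smaller blocks when one contiguous block fails, because each block only needs a modest free hole. The class and method names here are invented for the example:

using System;

class PartitionedAllocation
{
    static double[][] AllocateInParts(int totalLength, int parts)
    {
        var blocks = new double[parts][];
        for (int i = 0; i < parts; i++)
            blocks[i] = new double[totalLength / parts];   // each block needs a far smaller hole
        return blocks;
    }

    static void Main()
    {
        // Roughly 100 MB per block instead of ~1.6 GB in one piece.
        var big = AllocateInParts(203423225, 16);
        Console.WriteLine(big.Length);
    }
}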

Large Object Heap Fragmentation, Issues with arrays

I am writing an analysis application in C# that has to deal with a lot of memory. I use ANTS Memory Profiler 7.4 for optimizing my memory management. While doing so, I realized that all of the double[,] arrays I use (and I need them) are placed on the LOH, although the largest of these arrays is about 24,000 bytes. As far as I know, objects should not be put there before 85,000 bytes. The problem is that since I have several thousand instances of these double[,] arrays, I get a lot of memory fragmentation (about 25% of my total memory usage is free memory that I cannot use). Some of the arrays stored on the LOH are even only 1,036 bytes in size. The problem is that sometimes I have to perform larger analyses, and then I end up with an out-of-memory exception because of the massive memory loss due to LOH fragmentation.
Does anyone know why this is happening although it should not be a large object by definition?
The threshold size for putting arrays of doubles on the LOH is much lower than for other types. The reason for this is that items on the LOH are always 64-bit aligned, and doubles benefit greatly from being 64-bit aligned.
Note that this only affects programs running in 32 bits. Programs running in 64 bits have objects that are always aligned on a 64-bit boundary, so that LOH heuristic is not used for 64 bit programs.
The threshold size is 1000 doubles.
Also see https://connect.microsoft.com/VisualStudio/feedback/details/266330/
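A quick way to observe that threshold for one-dimensional double arrays on 32-bit .NET Framework (an implementation detail, so the exact behavior may vary by runtime version): objects allocated directly on the LOH report generation 2 immediately.

using System;

class DoubleLohThreshold
{
    static void Main()
    {
        // Expected on 32-bit .NET Framework: 0 (small object heap) then 2 (large object heap).
        Console.WriteLine(GC.GetGeneration(new double[999]));
        Console.WriteLine(GC.GetGeneration(new double[1000]));
    }
}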

When does the zero-fill operation take place for an array in C#?

Consider this in C#:
int[] a = new int[1024];
All integers will be set to 0. Traditionally, this is done by a highly optimized zero-fill operation (e.g. memset). However, as this array is allocated by the CLR, I can think of a pretty straightforward optimization: the CLR could maintain a zeroed memory area to skip this zero-filling at the time we request our memory.
Now, does CLR perform such an optimization? (I'm currently using .NET 4.0)
This operates at a much lower level. Windows keeps a low-priority kernel thread alive whose only job is zeroing the content of memory pages; it is called the "zero page thread". Those pages are kept in a pool, ready for use by any process that generates a page fault for reserved but not committed pages. Any code in Windows benefits from this, not just managed code. The intention is security: not zeroing the RAM content of mapped memory pages would allow a program to spy on the memory of another process.
This won't happen with your array; it is too small. It gets allocated in the gen #0 heap, a heap that will always have mapped pages. Large arrays, however, get allocated in the Large Object Heap; large is 85,000 bytes, or 8,000 bytes for an array of double. LOH allocations can take advantage of getting the pages pre-initialized to zero. Whether it actually does is hard to tell, since the source code for that isn't available anywhere. I'd say it is likely, considering the amount of CPU cycles it saves.
The CLR only allocates a bit more memory than the managed heap consumes; if it runs out of unallocated memory for the managed heap, it requests more from the operating system. Pre-zeroing the memory would only be an advantage if your process working set is already big and you have a lot of unused memory that was garbage collected. That memory is then free for allocation from the managed heap.
If this is the case for your application, you should think about reusing the array rather than dropping it, to avoid the reallocation on the managed heap. A sketch of that reuse pattern follows.
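A minimal illustration, assuming a method that repeatedly needs a scratch buffer (the names here are invented for the example); Array.Clear makes the zero-fill explicit and only pays for it when it is actually required:

using System;

class BufferReuse
{
    // One long-lived buffer instead of a new allocation on every call.
    static readonly int[] Scratch = new int[1024];

    static void Process()
    {
        Array.Clear(Scratch, 0, Scratch.Length);   // explicit zero-fill before reuse
        // ... fill and use Scratch ...
    }
}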
Have you looked at the IL produced for this code? A newarr instruction is likely generated for the allocation in question. Per ECMA-335, newarr creates a new zero-based, one-dimensional array and initializes its elements to 0 of the appropriate type. In this case, since it's an array of int, the elements will be initialized to 0 at creation.

Memory limitations in a 64-bit .Net application?

On my laptop, running 64 bit Windows 7 and with 2 Gb of free memory (as reported by Task Manager), I'm able to do:
var x = new Dictionary<Guid, decimal>( 30 * 1024 *1024 );
Without having a computer with more RAM at my hands, I'm wondering if this will scale so that on a computer with 4 Gb free memory, I'll be able to allocate 60M items instead of "just" 30M and so on?
Or are there other limitations (of .Net and/or Windows) that I'll bump into before I'm able to consume all available RAM?
Update: OK, so I'm not allowed to allocate a single object larger than 2 Gb. That's important to know! But then I'm of course curious to know if I'll be able to fully utilize all memory by allocating 2 Gb chunks like this:
var x = new List<Dictionary<Guid, decimal>>();
for ( var i = 0 ; i < 10 ; i++ )
    x.Add( new Dictionary<Guid, decimal>( 30 * 1024 * 1024 ) );
Would this work if the computer have >20Gb free memory?
There's a 2 GiB limitation on all objects in .NET; you are never allowed to create a single object that exceeds 2 GiB. If you need a bigger object, you need to make sure it is built from parts smaller than 2 GiB, so you cannot have a contiguous array larger than 2 GiB or a single string larger than 512 MiB. I'm not entirely sure about the string limit, but I've done some testing on the issue and was getting OutOfMemoryExceptions when I tried to allocate strings bigger than 512 MiB.
These limits, though, are subject to heap fragmentation, and even though the GC does try to compact the heap, large objects (the cutoff is a somewhat arbitrary crossover around 85,000 bytes) end up on the large object heap, which is not compacted. Strictly speaking, and somewhat as a side note, if you can keep short-lived allocations below this threshold, it is better for your overall GC memory management and performance.
Update: The 2 GB single-object memory limit has been lifted on 64-bit with the release of .NET 4.5.
You'll need to set gcAllowVeryLargeObjects in your app.config.
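For reference, the app.config entry looks like this (this is the documented <gcAllowVeryLargeObjects> element; it only has an effect in 64-bit processes):

<configuration>
  <runtime>
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>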
The maximum number of elements in an array is still 2^32-1, though.
See Single objects still limited to 2 GB in size in CLR 4.0? for more details.
