Passing large data structures to unmanaged code using a fixed pointer - C#

I'm developing an application which consists of two parts:
C# front-end
C++ number cruncher
In some cases the amount of data passed from C# to C++ can be really large. I'm talking about gigabytes and maybe more. There's a large array of doubles in particular, and I wanted to pass a pinned/fixed pointer to this array to the C++ code. The number crunching can take up to several hours to finish. I'm worried about any problems this use of pinned pointers might trigger. As I see it, the garbage collector will not be able to touch this large memory region for a long time. Can this cause any problems? Should I consider a different strategy?
I thought that instead of passing the whole array I could provide an interface for building the array from within the C++ code, so that the memory is owned by the unmanaged part of the application. But in the end both strategies create a large chunk of memory that the C# garbage collector cannot relocate for a long time. Am I missing something?

You don't have a problem. Large arrays are allocated in the Large Object Heap. Pinning them cannot have any detrimental effect, because the LOH is not compacted. "Large" here means an array of doubles with 1000 or more elements for 32-bit code, or any array equal to or larger than 85,000 bytes.
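For reference, a minimal sketch of the pinning approach. The native export name Crunch and the DLL name are hypothetical placeholders; substitute your own entry point.

using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // Hypothetical export from the C++ number cruncher.
    [DllImport("NumberCruncher.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern void Crunch(IntPtr data, long length);

    public static void RunCruncher(double[] data)
    {
        // Pin the array so the GC cannot move it while native code holds the pointer.
        GCHandle handle = GCHandle.Alloc(data, GCHandleType.Pinned);
        try
        {
            Crunch(handle.AddrOfPinnedObject(), data.LongLength);
        }
        finally
        {
            handle.Free();   // always unpin, even if the native call throws
        }
    }
}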

For your specific use case, it may be worthwhile to use a memory-mapped file as the shared memory buffer between your C# and C++ code. This circumvents the garbage collector altogether, and it also lets the OS paging system deal with memory pressure instead of GC-managed memory.
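If you go that way, a minimal sketch of the C# side might look like this. The mapping name "CruncherBuffer" and the element count are placeholders; the C++ side would open the same mapping by name with OpenFileMapping/MapViewOfFile.

using System;
using System.IO.MemoryMappedFiles;

// Create a named, pagefile-backed mapping large enough for the doubles.
// The C++ process opens the same mapping by name and reads/writes it directly,
// so no managed array needs to be pinned at all.
long count = 10000000;                                   // placeholder element count
long bytes = count * sizeof(double);

using (var mmf = MemoryMappedFile.CreateNew("CruncherBuffer", bytes))
using (var accessor = mmf.CreateViewAccessor(0, bytes))
{
    for (long i = 0; i < count; i++)
        accessor.Write(i * sizeof(double), (double)i);   // fill with your real data here

    // ... signal the C++ process that "CruncherBuffer" is ready, then wait for results ...
}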

Related

HeapAlloc, HeapCreate: is this memory garbage-collectable in C#?

I have created a memory allocation library whose allocations can't be collected by the GC. (https://github.com/10sa/Unmanaged-Memory)
The heap this library allocates from is obtained with the WinAPI GetProcessHeap() function. You can also create a separate heap and have the library allocate from it; in that case the heap is created with the HeapCreate function.
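For context, the kind of P/Invoke declarations such a library presumably builds on look roughly like this (a sketch, not the library's actual code):

using System;
using System.Runtime.InteropServices;

static class Win32Heap
{
    [DllImport("kernel32.dll")]
    public static extern IntPtr GetProcessHeap();

    [DllImport("kernel32.dll")]
    public static extern IntPtr HeapAlloc(IntPtr hHeap, uint dwFlags, UIntPtr dwBytes);

    [DllImport("kernel32.dll")]
    public static extern bool HeapFree(IntPtr hHeap, uint dwFlags, IntPtr lpMem);

    public static void Demo()
    {
        // Allocate 1 KB from the default process heap; the GC never sees this memory.
        IntPtr heap = GetProcessHeap();
        IntPtr block = HeapAlloc(heap, 0, (UIntPtr)1024);
        // ... use the block via Marshal.Copy or unsafe pointers ...
        HeapFree(heap, 0, block);
    }
}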
Question is,
1. Is this memory area (obtained via GetProcessHeap()) managed by the GC?
2. If I create a new heap using the HeapCreate function, can that heap be collected by the GC?
3. If the answer to either of the above is yes, how can I create a memory region in C# that is not collected, without using the global (process) heap?
1. No; the clue is in the name ("unmanaged memory"), and in the fact that it is being allocated by the OS, not the CLR.
2. No!
3. N/A.
What are you trying to do here? There are already extensive inbuilt mechanisms for allocating unmanaged memory through the CLR without needing an external tool.
Additionally, in the examples: allocating 4 bytes in unmanaged memory is terribly, terribly expensive and unnecessary. Usually when we talk about unmanaged memory we're talking about slabs of memory - huge chunks that we then sub-divide and partition internally via clever code.
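For comparison, a sketch of the built-in route: grab one large slab with Marshal.AllocHGlobal, partition it yourself, and free the whole thing with a single call. The slab size and offsets here are arbitrary placeholders.

using System;
using System.Runtime.InteropServices;

// One 64 MB slab of unmanaged memory, carved up by our own bookkeeping
// instead of many tiny unmanaged allocations.
const int SlabSize = 64 * 1024 * 1024;
IntPtr slab = Marshal.AllocHGlobal(SlabSize);
try
{
    IntPtr firstChunk = slab;              // bytes [0, 1024)
    IntPtr secondChunk = slab + 1024;      // bytes [1024, 2048)
    // ... partition the rest internally ...
}
finally
{
    Marshal.FreeHGlobal(slab);             // one call frees the whole slab
}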
In Win32/Win64 and C/C++ the Heap APIs were a godsend, allowing any amount of temporary allocation, with all allocations from the custom heap being freed in a single call. They are likely unnecessary in .NET.
Allow me some skepticism for Microsoft's advice on good practices. I remember a division meeting where we were told that memory leaks are an inescapable fact of life.

How to obtain remaining free stack / heap memory in C# using managed code?

Is there a way to obtain the remaining free stack / heap memory in C# using only managed code?
To be specific, I do not mean the memory still available within what is currently allocated, but all the memory that could (if necessary) be allocated in the future, given the main memory of the host system.
The information will be used to take measures on systems with low free memory to prevent running out of system memory.
There is a Win32 function called VirtualQuery that can be used to determine the size of the call stack. There are a bunch of C# examples in these questions:
Checking available stack size in C
Checking stack size in C#
For big heap allocations you could try MemoryFailPoint, which checks whether the allocation is possible and throws a different exception (InsufficientMemoryException) than OOM:
http://msdn.microsoft.com/en-us/library/system.runtime.memoryfailpoint.aspx
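A minimal sketch of typical MemoryFailPoint usage (the 512 MB figure is just a placeholder):

using System;
using System.Runtime;

try
{
    // Ask the CLR whether roughly 512 MB could be allocated before committing to the work.
    using (var gate = new MemoryFailPoint(512))
    {
        // ... perform the memory-hungry operation here ...
    }
}
catch (InsufficientMemoryException ex)
{
    // Not enough memory is expected to be available; degrade gracefully instead of dying with OOM.
    Console.WriteLine(ex.Message);
}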
The answer is mainly covered by AbdElRaheim; here is an additional note about the heap on 32-bit systems.
If you want to go all the way with checking space for heap allocations (which, by the way, is mostly interesting for non-x64 programs): you need not only the total amount of free memory but also a map of all regions, so you can see what is already allocated. The most interesting piece of information you would be looking for is the DLLs loaded into your address space - even having 1 GB free does not mean you can allocate a 1 GB block; the free space could be split into multiple chunks that the GC can't combine if some random native DLL is loaded in the middle.
If you want to go that far - VirtualQuery is a possible starting point.
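If you do go down that road, a rough P/Invoke sketch for walking the address space with VirtualQuery and finding the largest free region could look like this (field layout and flag values per the Win32 documentation; treat it as a starting point, not a finished tool):

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct MEMORY_BASIC_INFORMATION
{
    public IntPtr BaseAddress;
    public IntPtr AllocationBase;
    public uint AllocationProtect;
    public IntPtr RegionSize;
    public uint State;      // MEM_COMMIT = 0x1000, MEM_RESERVE = 0x2000, MEM_FREE = 0x10000
    public uint Protect;
    public uint Type;
}

static class AddressSpace
{
    [DllImport("kernel32.dll")]
    static extern IntPtr VirtualQuery(IntPtr lpAddress, out MEMORY_BASIC_INFORMATION lpBuffer, IntPtr dwLength);

    // Walk the whole address space and report the largest free region.
    public static long LargestFreeBlock()
    {
        long largest = 0;
        IntPtr address = IntPtr.Zero;
        MEMORY_BASIC_INFORMATION info;
        while (VirtualQuery(address, out info, (IntPtr)Marshal.SizeOf(typeof(MEMORY_BASIC_INFORMATION))) != IntPtr.Zero)
        {
            if (info.State == 0x10000 /* MEM_FREE */)
                largest = Math.Max(largest, info.RegionSize.ToInt64());
            address = new IntPtr(info.BaseAddress.ToInt64() + info.RegionSize.ToInt64());
        }
        return largest;
    }
}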

When does the zero-fill operation take place for an array in C#?

Consider this in C#:
int[] a = new int[1024];
All integers will be set to 0. Traditionally, this is done by a highly optimized zero-fill operation (e.g. memset). However, as this array is allocated by the CLR, I can think of a pretty straightforward optimization: the CLR could maintain a zeroed memory area so it can skip this zero-filling at the time we request our memory.
Now, does CLR perform such an optimization? (I'm currently using .NET 4.0)
This operates at a much lower level. Windows keeps a low-priority kernel thread alive whose only job is zeroing the content of memory pages; it is called the "zero page thread". Those pages are kept in a pool, ready for use by any process that generates a page fault for reserved-but-not-committed pages. Any code in Windows benefits from this, not just managed code. The intention is security: not zeroing the RAM content of mapped memory pages would allow a program to spy on the memory of another process.
This won't happen with your array, it is too small. It gets allocated in the gen #0 heap, a heap that will always have mapped pages. Large arrays, however, get allocated in the Large Object Heap; "large" means 85,000 bytes, or 8,000 bytes for an array of double. LOH allocations can take advantage of getting pages pre-initialized to zero. Whether the CLR actually does so is hard to tell, since the source code for that isn't available anywhere. I'd say it is likely, considering the amount of CPU cycles it saves.
The CLR only allocates a bit more memory than the managed heap consumes, and if it runs out of unallocated memory for the managed heap it requests more from the operating system. Pre-zeroing the memory would only be an advantage if your process working set is already big and you have a lot of unused memory that was garbage collected; that memory is then free for allocation from the managed heap.
If this is the case for your application you should think of reusing the array rather than dropping it to avoid the reallocation on the managed heap.
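For instance, a tiny sketch of that reuse idea (BufferCache is a made-up helper name for illustration):

using System;

// Keep one large buffer alive and clear it between uses instead of
// allocating a fresh array (and paying for zero-fill plus GC work) each time.
static class BufferCache
{
    static double[] _buffer;

    public static double[] Rent(int length)
    {
        if (_buffer == null || _buffer.Length < length)
            _buffer = new double[length];      // allocate only when it has to grow
        else
            Array.Clear(_buffer, 0, length);   // explicit zero-fill on reuse
        return _buffer;
    }
}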
Have you looked at the IL produced for this code? A call to newarr is likely generated for the allocation in question. Per ECMA-335, newarr creates a new zero-based, one-dimensional array and initializes the elements of the array to 0 of the appropriate type. In this case, since it's an array of int, the elements will be initialized to 0 at creation.

C# large byte array and memory leak if not nulled quickly

I have a class that has a byte array holding from 1,048,576 bytes up to 134,217,728 bytes.
In the Dispose method I set the array to null, and the method calling Dispose calls GC.Collect after that.
If I dispose right away I will get my memory back, but if I wait like 10 hours and dispose, memory usage doesn't change.
Memory usage is based on OS memory allocation. It may be freed immediately, or it may not be; this depends on OS utilization, the application, etc. You free it in the runtime, but this doesn't mean the OS always gets it back. I have to think there's a heuristic based on memory patterns (i.e. time, here) that affects the decision of when to return it. This was addressed here:
Explicitly freeing memory in c#
Note: if the memory isn't released, that doesn't automatically mean it's used. It's up to the CLR whether it releases chunks of memory or not, but this memory isn't wasted.
That said, if you want a technical explanation, you'll want to read the literature about the Large Object Heap: http://msdn.microsoft.com/en-us/magazine/cc534993.aspx
Basically, it's a zone of memory where very large objects (more than 85kB) are allocated. It differs from the other zones of memory in that it's never compacted, and thus can become fragmented. I think what happens in your case is:
Case 1: you allocate the object and immediately call GC.Collect. The object is allocated at the end of the heap, then freed. The CLR sees a free segment at the end of the heap and releases it to the OS.
Case 2: you allocate the object and wait for a while. In the meantime, another object is allocated in the LOH. Now your object isn't the last one anymore. Then, when you call GC.Collect, your object is erased, but the other object(s) are still at the end of the memory segment, so the CLR cannot release the memory to the OS.
Just a guess based on my knowledge of memory management in .NET. I may be completely wrong.
Your findings are not unusual, but it doesn't mean that anything is wrong either. In order to be collected, something must prompt the GC to collect (often an attempted allocation). As a result, you can build an app that consumes a bunch of memory, releases it, and then goes idle. If there is no memory pressure on the machine, and if your app doesn't try to do anything after that, the GC won't fire (because it doesn't need to). Once you get busy, the GC will kick in and do its job. This behavior is very commonly mistaken for a leak.
BTW:
Are you using that very large array more than once? If so, you might be better off keeping it around and reusing it. Reason: any object larger than 85,000 bytes is allocated on the Large Object Heap. That heap only gets GC'd on Generation 2 collections. So if you are allocating and reallocating arrays very often, you will be causing a lot of Gen 2 (expensive) collections.
(note: that doesn't mean that there's a hard and fast rule to always reuse large arrays, but if you are doing a lot of allocation/deallocation/allocation of the array, you should measure how much it helps if you re-use).

Large Arrays, and LOH Fragmentation. What is the accepted convention?

I have another active question HERE regarding some hopeless memory issues that possibly involve LOH fragmentation, among possibly other unknowns.
What my question now is, what is the accepted way of doing things?
If my app needs to be done in Visual C#, and needs to deal with large arrays to the tune of int[4000000], how can I not be doomed by the garbage collector's refusal to deal with the LOH?
It would seem that I am forced to make any large arrays global, and never use the word "new" around any of them. So, I'm left with ungraceful global arrays with "maxindex" variables instead of neatly sized arrays that get passed around by functions.
I've always been told that this was bad practice. What alternative is there?
Is there some kind of function to the tune of System.GC.CollectLOH("Seriously") ?
Is there possibly some way to outsource garbage collection to something other than System.GC?
Anyway, what are the generally accepted rules for dealing with large (>85Kb) variables?
Firstly, the garbage collector does collect the LOH, so do not be immediately scared by its presence. The LOH gets collected when generation 2 gets collected.
The difference is that the LOH does not get compacted, which means that if you have an object in there that has a long lifetime then you will effectively be splitting the LOH into two sections — the area before and the area after this object. If this behaviour continues to happen then you could end up with the situation where the space between long-lived objects is not sufficiently large for subsequent assignments and .NET has to allocate more and more memory in order to place your large objects, i.e. the LOH gets fragmented.
Now, having said that, the LOH can shrink in size if the area at its end is completely free of live objects, so the only problem is if you leave objects in there for a long time (e.g. the duration of the application).
Starting from .NET 4.5.1, LOH could be compacted, see GCSettings.LargeObjectHeapCompactionMode property.
Strategies to avoid LOH fragmentation are:
Avoid creating large objects that hang around. Basically this just means large arrays, or objects which wrap large arrays (such as the MemoryStream which wraps a byte array), as nothing else is that big (components of complex objects are stored separately on the heap so are rarely very big). Also watch out for large dictionaries and lists as these use an array internally.
Watch out for double arrays — the threshold for these going into the LOH is much, much smaller — I can't remember the exact figure but it's only a few thousand elements.
If you need a MemoryStream, consider making a chunked version that backs onto a number of smaller arrays rather than one huge array. You could also make custom versions of IList and IDictionary that use chunking to avoid stuff ending up in the LOH in the first place.
Avoid very long Remoting calls, as Remoting makes heavy use of MemoryStreams which can fragment the LOH during the length of the call.
Watch out for string interning — for some reason these are stored as pages on the LOH and can cause serious fragmentation if your application continues to encounter new strings to intern, i.e. avoid using string.Intern unless the set of strings is known to be finite and the full set is encountered early on in the application's life. (See my earlier question.)
Use Son of Strike to see what exactly is using the LOH memory. Again see this question for details on how to do this.
Consider pooling large arrays.
Edit: the LOH threshold for double arrays appears to be 8k.
It's an old question, but I figure it doesn't hurt to update answers with changes introduced in .NET. It is now possible to defragment the Large Object Heap. Clearly the first choice should be to make sure the best design choices were made, but it is nice to have this option now.
https://msdn.microsoft.com/en-us/library/xe0c2357(v=vs.110).aspx
"Starting with the .NET Framework 4.5.1, you can compact the large object heap (LOH) by setting the GCSettings.LargeObjectHeapCompactionMode property to GCLargeObjectHeapCompactionMode.CompactOnce before calling the Collect method, as the following example illustrates."
GCSettings can be found in the System.Runtime namespace
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();
The first thing that comes to mind is to split the array up into smaller ones, so they don't reach the size that makes the GC put them in the LOH. You could split the arrays into smaller ones of, say, 10,000 elements, and build an object which would know which array to look in based on the indexer you pass.
Now I haven't seen the code, but I would also question why you need an array that large. I would potentially look at refactoring the code so all of that information doesn't need to be stored in memory at once.
You've got it wrong. You do NOT need to have an array of size 4000000 and you definitely do not need to call the garbage collector.
Write your own IList implementation. Like "PagedList"
Store items in arrays of 65536 elements.
Create an array of arrays to hold the pages.
This allows you to access basically all your elements with ONE redirection only. And, as the individual arrays are smaller, fragmentation is not an issue...
...if it is... then REUSE pages. Don't throw them away on dispose; put them on a static "PageList" and pull from there first. All this can be done transparently within your class.
The really good thing is that this list is pretty dynamic in its memory usage. You may want to resize the holder array (the redirector). Even when not, it is only about 512 kb of data per page.
Second-level arrays hold basically 64k elements each, which is 512 kb per page for class references (8 bytes per element; 256 kb on 32 bit), or 64 kb per page for an array of byte-sized structs.
Technically:
Turn
int[]
into
int[][]
Decide whether 32 or 64 bit is better as you want ;) Both have advantages and disadvantages.
Dealing with ONE large array like that is unwieldy in any language - if you have to, then basically allocate it at program start and never recreate it. That is the only solution.
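A bare-bones sketch of that paged design (not thread-safe; illustration only, with the page size taken from the suggestion above):

using System;
using System.Collections.Generic;

// One big logical array of ints stored as fixed-size pages; every access
// costs exactly one extra indirection (page = index >> shift, slot = index & mask).
// Retired pages go back into a static pool so they can be reused instead of
// leaving holes behind.
class PagedList
{
    const int PageShift = 16;                 // 65536 elements per page
    const int PageSize = 1 << PageShift;
    const int PageMask = PageSize - 1;

    static readonly Stack<int[]> _spare = new Stack<int[]>();   // reuse pool
    readonly List<int[]> _pages = new List<int[]>();

    public int this[int index]
    {
        get { return _pages[index >> PageShift][index & PageMask]; }
        set { _pages[index >> PageShift][index & PageMask] = value; }
    }

    public void AddPage()
    {
        _pages.Add(_spare.Count > 0 ? _spare.Pop() : new int[PageSize]);
    }

    public void Clear()
    {
        foreach (var page in _pages) _spare.Push(page);   // return pages to the pool
        _pages.Clear();
    }
}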
This is an old question, but with .NET Standard 1.1 (.NET Core, .NET Framework 4.5.1+) there is another possible solution:
Using ArrayPool<T> in the System.Buffers package, we can pool arrays to avoid this problem.
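A minimal sketch of the ArrayPool route (note the rented array may be longer than requested and is not cleared):

using System;
using System.Buffers;

// Rent a large buffer from the shared pool instead of allocating it with new.
int[] buffer = ArrayPool<int>.Shared.Rent(4000000);
try
{
    // ... use buffer[0 .. 3999999]; it may actually be longer than requested ...
}
finally
{
    ArrayPool<int>.Shared.Return(buffer);   // returning it lets the next caller reuse the same memory
}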
I am adding an elaboration to the answer above about how the issue can arise. Fragmentation of the LOH is not only dependent on the objects being long-lived. If you have multiple threads and each of them creates big lists that go onto the LOH, you can get the situation where the first thread needs to grow its List but the next contiguous bit of memory is already taken up by a List from a second thread, so the runtime allocates new memory for the first thread's List - leaving behind a rather big hole. This is what's happening currently on one project I've inherited: even though the LOH is approx 4.5 MB, the runtime has a total of 117 MB of free memory but the largest free memory segment is 28 MB.
Another way this can happen without multiple threads is if you have more than one list being added to in some kind of loop; as each grows beyond the memory initially allocated to it, the lists leapfrog each other as they expand beyond their allocated spaces.
A useful link is: https://www.simple-talk.com/dotnet/.net-framework/the-dangers-of-the-large-object-heap/
Still looking for a solution for this; one option may be to use some kind of pooled objects and request from the pool when doing the work. If you're dealing with large arrays, another option is to develop a custom collection, e.g. a collection of collections, so that you don't have just one huge list but break it up into smaller lists, each of which avoids the LOH.
