I've been using locked bitmaps a lot recently, and I keep getting "attempted to access invalid memory" errors. This is mostly because the bitmap has been moved in memory. Some people use GCHandle.Alloc() to allocate memory in the CLR and pin it. Does Bitmap.LockBits() do the same? I don't understand the difference between "locking" memory and "pinning" memory. Can you also explain the terminology and the differences, if any?
GCHandle.Alloc is a more general method that allocates a handle to any managed object and optionally pins it in memory. Pinning prevents the GC from moving the object around, which is especially useful when you have to pass some data, for example an array, to unmanaged code.
GCHandle.Alloc will not help you access the bitmap's data in any way, because pinning it only keeps the managed Bitmap object itself from being moved around (and from being garbage collected).
Bitmap, however, is a wrapper around the native GDI+ BITMAP structure. It doesn't keep its data in any managed array that you would have to pin; it just manages a native handle to a GDI+ bitmap object. Because of that, Bitmap.LockBits is the way of telling the bitmap that you want to access its memory, and it is just a wrapper around the GdipBitmapLockBits function. So your need to call it has more to do with the fact that you are working with GDI+ bitmaps than with the fact that you are working in a managed environment with a GC.
Once you have called LockBits you can access the bitmap's memory using pointers through BitmapData.Scan0 - that is the address of the first byte of data. You should not have problems as long as you do not access memory beyond BitmapData.Scan0 + Height * Stride.
And remember to call UnlockBits when you are done.
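A minimal sketch of that pattern, assuming a 24bpp image and a project compiled with /unsafe (the pixel inversion is just placeholder work):

using System.Drawing;
using System.Drawing.Imaging;

static unsafe void InvertInPlace(Bitmap bmp)
{
    Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    try
    {
        byte* scan0 = (byte*)data.Scan0.ToPointer();
        for (int y = 0; y < data.Height; y++)
        {
            // Each row starts at Scan0 + y * Stride; Stride may include padding.
            byte* row = scan0 + y * data.Stride;
            for (int x = 0; x < data.Width * 3; x++)
                row[x] = (byte)(255 - row[x]);
        }
    }
    finally
    {
        bmp.UnlockBits(data); // always unlock, even if something throws
    }
}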
In your case, an "attempted to access invalid memory" error is most probably caused by an invalid memory allocation in the unsafe part of your code, e.g. the allocated array is smaller than the number of pixels you are trying to put into it.
There is also no need to think about pinning the objects unless your image data is smaller than 85,000 bytes, since only objects under that size are allocated on the regular heap and can be moved by the GC; larger ones go to the Large Object Heap, which is not compacted.
Another story would be if you pass the object to unmanaged code, for example to a C++ library for faster processing. In that case your exception is very possible if the passed image array goes out of scope and gets garbage collected. Here you can use GCHandle.Alloc(imageArray, GCHandleType.Pinned); and then call Free once you no longer need it.
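Something along these lines - the DLL name and the ProcessImage import are made up here, just to show the pinning pattern:

using System;
using System.Runtime.InteropServices;

// Hypothetical native entry point - replace with your actual import.
[DllImport("imageproc.dll")]
static extern void ProcessImage(IntPtr pixels, int length);

static void ProcessPinned(byte[] imageArray)
{
    GCHandle handle = GCHandle.Alloc(imageArray, GCHandleType.Pinned);
    try
    {
        IntPtr ptr = handle.AddrOfPinnedObject(); // stable address while pinned
        ProcessImage(ptr, imageArray.Length);
    }
    finally
    {
        handle.Free(); // unpin as soon as the native side is done with the buffer
    }
}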
I am working with some C# and C++ unmanaged code and there are two things I don't understand when dealing with memory. If someone can help me understand:
If a variable is dynamically allocated in C# (using new) and then passed to the unmanaged C++ code, does that variable's memory need to be deallocated manually by the user in the C++ code?
If a variable is dynamically allocated in unmanaged C++ (using new) and then passed to C#, is it safe to say the garbage collector will deallocate that memory?
No; since the object is allocated on the managed heap, the GC will handle deallocation as usual. The problem is that you must tell it not to deallocate or change the address of the object while it is being used from unmanaged code, because the GC can't know how long you are going to use the object from the unmanaged side. This is done by pinning the object.
See the answer to this question.
No; since the object is allocated on the unmanaged C++ heap, the GC won't touch it. You have to deallocate it yourself using delete.
Edit:
If you need to allocate an object in managed code and deallocate it in unmanaged code, or vice versa, it's good to know there is an OS heap for this purpose that you can use via the Marshal.AllocHGlobal and Marshal.FreeHGlobal calls from C#; there are corresponding calls on the C++ side.
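For example, a rough sketch of allocating on that heap from C# and freeing it again (the copy is only there to show how data gets into the block):

using System;
using System.Runtime.InteropServices;

static void RoundTrip(byte[] managedData)
{
    // Allocate on the process heap - the GC never touches this block.
    IntPtr unmanagedBlock = Marshal.AllocHGlobal(managedData.Length);
    try
    {
        Marshal.Copy(managedData, 0, unmanagedBlock, managedData.Length);
        // ... hand unmanagedBlock to native code here ...
    }
    finally
    {
        // Free here, or let the native side free it with LocalFree
        // (AllocHGlobal wraps LocalAlloc on Windows).
        Marshal.FreeHGlobal(unmanagedBlock);
    }
}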
It's really simple!
Depends
Depends
Eh, sorry about that.
Under typical conditions, C# will keep track of the memory and get rid of it any time after it's no longer used on the C# side. It has no way of tracking references on the C++ side, so one common mistake in interop is that the memory is deallocated before the unmanaged side is done with it (resulting in loads of FUN). This only applies to cases where the memory is directly referenced, not when it's copied (the typical case being a byte[] that's pinned for the duration of the unmanaged call). Don't use automatic marshalling when the lifetime of the object/pointer being passed to unmanaged code is supposed to be longer than the run of the invoked method.
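To illustrate the simple case mentioned above (the import below is hypothetical): the default marshaller pins the byte[] only for the duration of the call, so this is fine as long as the native side does not keep the pointer after the call returns.

using System.Runtime.InteropServices;

static class Cruncher
{
    // Hypothetical import: the marshaller pins 'buffer' only while Crunch runs.
    [DllImport("cruncher.dll")]
    public static extern int Crunch(byte[] buffer, int length);

    public static int CrunchOnce()
    {
        byte[] buffer = new byte[1024];
        // Safe: the native side must not hold on to the pointer afterwards.
        return Crunch(buffer, buffer.Length);
    }
}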
Under typical conditions, C# has no way of tracking memory allocations in the C++ code, so you can't rely on automatic memory management. There are exceptions (e.g. some COM scenarios), but you'll almost always need to manage the memory manually. This usually means sending the pointer back to the C++ code to do the deallocation, unless it used a global allocator of some kind (e.g. the COM task allocator, CoTaskMemAlloc). Remember that in the unmanaged world there is no single memory manager that you can safely invoke to dispose of memory; you don't really have the necessary information anyway.
This only applies to pointers, of course. Passing integers is perfectly fine, and using automatic marshalling usually means the marshaller takes care of most of the subtleties (though still only in the simplest cases, so be careful). Unmanaged code is unmanaged - you need to understand perfectly how the memory is allocated, and how, when, and by whom it is cleaned up.
As a rule of thumb, whichever component or object allocates memory should also deallocate it: for every new, a matching delete issued by the same party that did the new.
That is the ideal. If it can't be followed - for example because your C++ program may terminate and no longer exist when the allocated memory's lifetime ends - then your C# side should clean up, and vice versa.
I have a service that's using ASP.NET Web API. Each HTTP request translates to a thread that needs to do some data manipulation (possibly changing the data). The API layer is written in C# and the data manipulation is written in C++. The C# layer calls the native library and supplies a pointer to some managed buffer.
Couple of questions:
How can I make sure there are no races? Is std::mutex in the native library enough in this case? (Do managed threads map to native threads? Will they share the same std::mutex?)
How can I make sure that the GC doesn't release the pointer to the managed buffer while the native library is manipulating it?
Do you need a shared buffer? If the buffer is only ever used on one thread, you save yourself a lot of trouble. Managed threads do not map to native threads 1:1, but I'm not sure if that has any effect on your scenario.
You need to fix the buffer, and keep it fixed the whole time the native code has a pointer to it - releasing it is the least of your worries; .NET memory is moved around all the time. This is done using a fixed block.
Fixing managed memory:
// Requires an unsafe context (compile with /unsafe); TheNativeFunction is a stand-in for your import.
unsafe void CallNative()
{
    byte[] theBuffer = new byte[256];
    fixed (byte* ptr = &theBuffer[0])
    {
        // The pointer is now fixed - the GC is prohibited from moving the memory
        TheNativeFunction(ptr);
    }
    // Unfixed again once execution leaves the fixed block
}
However, note that prohibiting the GC from moving memory around can cause you quite a bit of trouble - it can prevent heap compaction altogether in a high-throughput server, for example.
If you don't need to work with the memory in the managed environment, you can simply allocate unmanaged memory for the task, such as by using Marshal.AllocHGlobal.
I'm developing an application which consists of two parts:
C# front-end
C++ number cruncher
In some cases the amount of data passed from C# to C++ can be really large - I'm talking about gigabytes, maybe more. In particular, there's a large array of doubles, and I wanted to pass a pinned/fixed pointer to this array to the C++ code. The number crunching can take up to several hours to finish. I'm worried about any problems that could be triggered by this use of pinned pointers. As I see it, the garbage collector will not be able to touch this large memory region for a long time. Can this cause any problems? Should I consider a different strategy?
I thought that instead of passing the whole array I could provide an interface for building this array from within the C++ code, so that the memory is owned by the unmanaged part of the application. But in the end both strategies create a large chunk of memory that the C# garbage collector cannot relocate for a long time. Am I missing something?
You don't have a problem. Large arrays are allocated in the Large Object Heap. Pinning them cannot have any detrimental effect; the LOH is not compacted. "Large" here means an array of doubles with 1000 or more elements in 32-bit code, or any object of 85,000 bytes or more.
For your specific use case, it may be worthwhile to use a memory-mapped file as the shared memory buffer between your C# and C++ code. This circumvents the garbage collector altogether, and also lets the OS pager deal with memory-pressure issues instead of the GC-managed heap.
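Roughly like this, assuming the C++ side opens the same named mapping with OpenFileMapping/MapViewOfFile (the name and capacity below are made up):

using System.IO.MemoryMappedFiles;

// Create (or open) a named shared-memory region backed by the OS pager.
using (var mmf = MemoryMappedFile.CreateOrOpen("MySharedBuffer", 1L << 30)) // 1 GB
using (var accessor = mmf.CreateViewAccessor())
{
    double[] samples = { 1.0, 2.0, 3.0 };
    accessor.WriteArray(0, samples, 0, samples.Length); // copy doubles into the shared region
}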
I have an external method that receives some parameters, allocates memory and returns a pointer.
[DllImport("some.dll", CallingConvention = CvInvoke.CvCallingConvention)]
public static extern IntPtr cvCreateHeader(
Size size,
int a,
int b);
I'm well aware that it is bad practice to allocate unmanaged memory in a managed application but in this case I have no choice as the dll is 3rd party.
There is an equivalent function that releases the memory, and I do know the size of the allocated array.
How do I pin the returned pointer so the GC does not move it (without going unsafe)? 'fixed' won't do it, as this pointer is used widely throughout the class.
Is there a better methodology for this p/Invoke?
No, you are getting back a pointer to memory that will never move. Memory allocated from a native heap stays put; there's nothing similar to the compacting strategy that a garbage collector uses. That strategy can only work when the memory management system can find all the pointers that point to an allocated chunk of memory, so that it can update them when the chunk moves. Nothing like that exists for native code; there is no reliable way to find those pointers.
Do not bother looking for a way to pin the pointer. There isn't one because there is no need for one.
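So the usual shape is simply: declare the create and release imports, keep the IntPtr around, and call the release function when you are done. A sketch (the cvReleaseHeader name and the use of System.Drawing.Size are assumptions - use whatever the library actually exposes, with the calling convention from the question):

using System;
using System.Drawing; // assuming Size is System.Drawing.Size here
using System.Runtime.InteropServices;

class HeaderWrapper : IDisposable
{
    [DllImport("some.dll")]
    private static extern IntPtr cvCreateHeader(Size size, int a, int b);

    // Assumed release counterpart - check the library's documentation.
    [DllImport("some.dll")]
    private static extern void cvReleaseHeader(IntPtr header);

    private IntPtr _header;

    public HeaderWrapper(Size size, int a, int b)
    {
        _header = cvCreateHeader(size, a, b); // native heap allocation, never moves
    }

    public void Dispose()
    {
        if (_header != IntPtr.Zero)
        {
            cvReleaseHeader(_header);
            _header = IntPtr.Zero;
        }
    }
}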
What functionality does the stackalloc keyword provide? When and Why would I want to use it?
From MSDN:
Used in an unsafe code context to allocate a block of memory on the stack.
One of the main features of C# is that you do not normally need to access memory directly, as you would do in C/C++ using malloc or new. However, if you really want to explicitly allocate some memory you can, but C# considers this "unsafe", so you can only do it if you compile with the unsafe setting. stackalloc allows you to allocate such memory.
You almost certainly don't need it for writing managed code. It is feasible that in some cases you could write faster code if you access memory directly - it basically allows you to use pointer manipulation, which suits some problems. Unless you have a specific problem to which unsafe code is the only solution, you will probably never need this.
Stackalloc will allocate data on the stack, which can be used to avoid the garbage that would be generated by repeatedly creating and destroying arrays of value types within a method.
public unsafe void DoSomeStuff()
{
    byte* unmanaged = stackalloc byte[100];
    byte[] managed = new byte[100];
    //Do stuff with the arrays
    //When this method exits, the unmanaged array gets immediately destroyed.
    //The managed array no longer has any handles to it, so it will get
    //cleaned up the next time the garbage collector runs.
    //In the mean-time, it is still consuming memory and adding to the list of crap
    //the garbage collector needs to keep track of. If you're doing XNA dev on the
    //Xbox 360, this can be especially bad.
}
Paul,
As everyone here has said, that keyword directs the runtime to allocate on the stack rather than the heap. If you're interested in exactly what this means, check out this article.
http://msdn.microsoft.com/en-us/library/cx9s2sy4.aspx
This keyword is used for unsafe memory manipulation. By using it, you have the ability to use pointers (a powerful and painful feature from C/C++).
stackalloc directs the .NET runtime to allocate memory on the stack.
Most other answers are focused on the "what functionality" part of the OP's question.
I believe this answers the when and why:
When do you need this?
For the best worst-case performance with cache locality of multiple small arrays.
Now in an average app you won't need this, but for realtime-sensitive scenarios it gives more deterministic performance: no GC is involved and you are all but guaranteed a cache hit.
(Because worst-case performance is more important than average performance.)
Keep in mind that the default stack size in .NET is small, though!
(I think it's 1 MB for normal apps and 256 KB for ASP.NET?)
Practical uses could, for example, include realtime sound processing.
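For instance, a sketch along those lines: processing audio in fixed-size blocks with a stack buffer as scratch space, so the hot path never touches the managed heap (compile with /unsafe; the block size and the averaging step are arbitrary):

using System;

public static unsafe void SoftenBlock(float* samples, int count)
{
    const int BlockSize = 256;
    float* scratch = stackalloc float[BlockSize]; // lives on the stack, gone when the method returns

    for (int start = 0; start < count; start += BlockSize)
    {
        int len = Math.Min(BlockSize, count - start);

        // Copy the block to the stack buffer...
        for (int i = 0; i < len; i++)
            scratch[i] = samples[start + i];

        // ...and write back a simple two-sample average (a toy "processing" step).
        for (int i = 1; i < len; i++)
            samples[start + i] = 0.5f * (scratch[i] + scratch[i - 1]);
    }
}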
As Steve pointed out, it is only used in an unsafe code context (e.g. when you want to use pointers).
If you don't use unsafe code in your C# application, then you will never need this.