Safe way to cast (V)C++ long* to C# Int32*? - c#

I'm currently writing a DLL in C++/CLI which will act as a "proxy" between an unmanaged program and another C# DLL. The calling program requires my proxy DLL to implement various procedures that will be called by the unmanaged program. So far, no problem.
But: One of the functions has the following prototype:
extern "C" __declspec ( dllexport ) long Execute(unsigned long command, long nInBytes, byte bInData[], long nOutBytes, long* pnUsedOutBytes, byte bOutData[])
Well, my proxy DLL simply calls the C# DLL which provides the following function prototype (which was also given by the documentation of the calling program):
unsafe public UInt32 Execute(UInt32 command, Int32 nInBytes, byte* pInData, Int32 nOutBytes, Int32* pnUsedOutBytes, byte* pOutData);
The compiler throws an error (C2664) at parameter 5, pnUsedOutBytes, and tells me that long* cannot be cast to int*. Well, OK: long and int currently have the same implementation, which might change at some point in the future, so the error is understandable (though the non-pointer long arguments do not produce an error?).
Back to the actual question: What is the best way to call my C# function? I've already read that (of course) the best solution is to use .NET types when calling a .NET function. So: Is it safe to do a simple type cast when calling the function, or might there be some circumstance where this cast will not work?
Using this line calms down the compiler, but is it really safe?
curInstance->Execute(command, nInBytes, pInData, nOutBytes, (System::Int32*)pnUsedOutBytes, pOutData);
Thanks in advance!

No, don't use that cast. Suppose that the value stored where pnUsedOutBytes points was greater than 2^32 (which presupposes that long is wider than 32 bits). Best case, the call to Execute would overwrite the low 32 bits and leave the bits above them alone, resulting in a wrong answer.
The solution is to call Execute with a pointer to a 32-bit data type. Create one in your proxy, give it a sensible starting value if needed, make the call, and copy the resulting value into the long that pnUsedOutBytes points to.
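A minimal sketch of that approach, placed inside the exported Execute of the proxy (it assumes curInstance is the managed wrapper object used in the question's call; error handling omitted):
// Inside the body of the exported Execute shown above:
System::Int32 usedOutBytes = 0;    // 32-bit temporary the C# side can safely write to
unsigned long result = curInstance->Execute(command, nInBytes, bInData,
                                            nOutBytes, &usedOutBytes, bOutData);
*pnUsedOutBytes = usedOutBytes;    // copy the value back into the caller's long
return (long)result;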
Oh, and don't paraphrase error messages. The error message did not say that you can't cast long* to int*; you can. What it almost certainly said is that the compiler can't convert long* to int*. That's correct: there is no implicit conversion between the two types. Adding a cast tells the compiler to do it; with that you have an explicit conversion.

The easiest solution is just to fix the signature of the exported function:
extern "C" __declspec ( dllexport ) int32_t Execute(uint32_t command, int32_t nInBytes, byte bInData[], int32_t nOutBytes, int32_t* pnUsedOutBytes, byte bOutData[])
LoadLibrary will give you no grief whatsoever about the difference between int32_t and int and long, since they are all 32-bit integral types.
(Actually, LoadLibrary won't give you any grief for a bunch of actual errors either, ... but in this case you aren't using an incompatible type)

Related

Pass C# array of user defined struct to COM via MIDL

I have a custom struct defined in MIDL:
struct Foo
{
[string] char* Bar;
int Baz;
};
I'm trying to pass this from a C# application to a C++ application. The interface assembly was generated with tlbimp.exe.
I should mention that I have next to no knowledge of MIDL and COM interop, but we're stuck using this interop method for the time being, so there's no way around that for now.
I tried various ways of defining an array of such a type, none with success. Ideally I'd want to call it on the C# side just like this:
ComObject.SomeFunction(someArray);
However, I may not be able to do that without a size parameter, so this would also be fine:
ComObject.SomeFunction(someArray.Length, someArray);
I tried the following ways to define it, none of which were successful:
// For some reason it interprets this as a ref parameter... I assume because it's equivalent to a pointer?
HRESULT SomeFunction(struct Foo array[]);

// Tried with a pointer as well, but I think the memory layout messes it up
HRESULT SomeFunction(struct Foo** array);

HRESULT SomeFunction(int length, [size_is(length)] struct Foo array[]);

// I don't know how to marshal that on the C# side
HRESULT SomeFunction(int length, [size_is(length)] struct Foo** array);
What's the right way to define this in the MIDL and to invoke it on the C# side? Ugly temporary solutions welcome, since this entire thing will be refactored anyway in two months, but we need this before then. For example, currently I'm considering passing in two arrays of simple types (char* and int) and then combine them into the appropriate structure on the C++ side.

Where is a function passed as an UnmanagedFunctionPointer to C executed?

In C, there is a function that accepts a pointer to a comparison function. Its P/Invoke declaration in C# is:
[DllImport("mylibrary.dll", CallingConvention = CallingConvention.Cdecl)]
private static extern int set_compare(IntPtr id, [MarshalAs(UnmanagedType.FunctionPtr)] CompareFunction cmp);
In C#, a delegate is passed to the C function:
[UnmanagedFunctionPointer(CallingConvention.Cdecl)]
delegate int CompareFunction(ref IntPtr left, ref IntPtr right);
Currently, I accept a Func<T,T,int> comparer in the constructor of a generic class and convert it to the delegate. "mylibrary.dll" owns the data; the managed C# library knows how to convert the pointers to T and then compare the Ts.
// ... in the constructor
CompareFunction cmpFunc = (ref IntPtr left, ref IntPtr right) =>
{
    var l = GenericFromPointer<T>(left);
    var r = GenericFromPointer<T>(right);
    return comparer(l, r);
};
I also have the option to write a CompareFunction in C for the most important data types, which cover 90%+ of the cases, but I hope to avoid modifications to the native library.
The question is: when setting the compare function with P/Invoke, does every subsequent call to that function from C code incur marshaling overhead, or is the delegate called from C as if it had originally been written in C?
I imagine that, when compiled, the delegate is a sequence of machine instructions in memory, but do not understand if/why C code would need to ask .NET to make the actual comparison, instead of just executing these instructions in place?
I am mostly interested in better understanding how interop works. However, this delegate is used for binary search on big data sets, and if every subsequent call has some overheads as a single P/Invoke, rewriting comparers in native C could be a good option.
I imagine that, when compiled, the delegate is a sequence of machine instructions in memory, but do not understand if/why C code would need to ask .NET to make the actual comparison, instead of just executing these instructions in place?
I guess you're a bit confused about how .NET works. C doesn't ask .NET to execute code.
First, your lambda is turned into a compiler-generated class instance (because you're closing over the comparer variable), and then a delegate to a method of this class is used. And it's an instance method since your lambda is a closure.
A delegate is similar to a function pointer. So, like you say, it points to executable code. Whether this code is generated from a C source or a .NET source is irrelevant at this point.
It's in the interop case that this starts to matter. P/Invoke won't pass your delegate as-is as a function pointer to C code. It will pass a function pointer to a thunk which calls the delegate. Visual Studio will display this as a [Native to Managed Transition] stack frame. This is needed for various reasons, such as marshaling or passing additional parameters (like the instance of the class backing your lambda).
As to the performance considerations of this, here's what MSDN says, quite obviously:
Thunking. Regardless of the interoperability technique used, special transition sequences, which are known as thunks, are required each time a managed function calls a native function, and vice versa. Because thunking contributes to the overall time that it takes to interoperate between managed code and native code, the accumulation of these transitions can negatively affect performance.
So, if your code requires a lot of transitions between managed and native code, you should get better performance by doing your comparisons on the C side if possible, so that you avoid the transitions.
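To make the transition concrete, here is a small sketch from the C# side, inside the question's generic class (the keep-alive field is not in the question's code; it matters because the GC must not collect the delegate while native code still holds the function pointer):
// Keep a reference so the delegate (and its native-callable thunk) outlives
// the native code that stores the function pointer.
private CompareFunction _keepAlive;

public void Register(IntPtr id, Func<T, T, int> comparer)
{
    _keepAlive = (ref IntPtr left, ref IntPtr right) =>
        comparer(GenericFromPointer<T>(left), GenericFromPointer<T>(right));

    // What actually crosses the boundary is a thunk, not the delegate itself;
    // Marshal.GetFunctionPointerForDelegate exposes the same kind of pointer
    // that the P/Invoke marshaller hands to set_compare below.
    IntPtr thunk = Marshal.GetFunctionPointerForDelegate(_keepAlive);

    set_compare(id, _keepAlive);
}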

Returning an unsigned long array from a C DLL into C# as uint[]: why is a MarshalDirectiveException thrown?

I am using a pointer to an unsigned long array (to manipulate the data) and then sending it back to C#.
In C#:
[DllImport("some_dll.dll")]
private static extern uint[] func(uint[] x, uint[] y, uint[] z);
The C header:
__declspec(dllexport) unsigned long* __stdcall func(unsigned long[],
unsigned long[], unsigned long[]);
The error:
MarshalDirectiveException
Cannot marshal 'return value': Invalid managed/unmanaged type combination
Kindly let me know what is causing the problem.
The message means that the p/invoke marshaller isn't capable of marshalling that return value into a uint[].
As I see it you have the following options:
1. Declare the C# function as returning IntPtr. On the managed side you then copy the memory into a uint[] allocated in your C# code; Marshal.Copy can do that. Somehow you'll need to find out the length of the array. You also need to handle deallocation; your C# code cannot do that, so it will have to call another function in the native code and ask the native code to deallocate.
2. Allocate the uint[] in the C# code before the call to your native code. Instead of using a function return value, you pass the uint[] as a parameter. This requires the calling code, the C# code, to know how big the array needs to be.
If you can choose option 2, it will result in simpler code on both sides of the interface. My guess is that the return array is the same length as the input arrays, in which case choose option 2; a sketch of that approach follows.
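A sketch of option 2, assuming the native export can be changed to fill a caller-supplied buffer (the extra result and length parameters are assumptions, not part of the original signature):
// Hypothetical revised C export:
//   __declspec(dllexport) void __stdcall func(unsigned long x[], unsigned long y[],
//                                             unsigned long z[], unsigned long result[], int n);
[DllImport("some_dll.dll")]
private static extern void func(uint[] x, uint[] y, uint[] z, [Out] uint[] result, int n);

static uint[] CallFunc(uint[] x, uint[] y, uint[] z)
{
    var result = new uint[x.Length];    // allocated on the managed side
    func(x, y, z, result, x.Length);    // native code fills it in
    return result;
}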

Client getting a byte array returned from a C++ COM DLL

I have this declaration in C++ COM header and IDL files:
//Header file:
#define MAX_LENGTH 320
typedef BYTE PRE_KEY [MAX_LENGTH];
//IDL file:
#define MAX_COUNT 10
HRESULT Save([in] DWORD dwCommand, [in]float fdata[MAX_COUNT], [out] PRE_KEY* phKey);
This is the C# client code:
//After C# interop compilation, the method's signature in C# becomes:
Save(uint dwCommand, float[] fdata, out byte[] phKey);
//The code to call the C++ COM server:
uint dwCommand = 2;
float[] fdata = new float[dwCommand];
fdata[0] = 1;
fdata[1] = 2;
byte[] phKey = new byte[320];
Save(dwCommand, fdata, out phKey);
The code crashes in ntdll.dll before the call returns to C#, even though the C++ server has already finished processing and is no longer on the stack.
Can anyone figure out how to resolve this issue? Since the C# signature is generated by interop-compiling the IDL file, I can't just do something in the C++ IDL file and then manually change the C# signature.
What is funny about this is that I have another, similar call which returns the exact same phKey from C++ to C#, and it works perfectly. The only difference is that in that call phKey is inside a structure, and the entire structure is an [out] param. I really can't see why it can be returned within a structure but not directly as a parameter.
The [out] attribute on your IDL declaration is a serious interop problem. It means that your COM server allocates the array and the caller must release it. That very rarely comes to a good end: there is no guarantee whatsoever that your server and your client use the same heap. It will always fail when you allocate with the C runtime allocator, the malloc() function or the new[] operator, because the CRT uses its own private heap, one that the caller can never get to unless both sides share the exact same version of the CRT. The odds of that are very small in general, and zero when you interop through the CLR.
Which is why it bombs: the CLR knows it needs to release the array after copying it into a managed array. It will use CoTaskMemFree(), which works on the heap reserved for COM interop allocations. Surely you didn't use CoTaskMemAlloc() to allocate the array.
The general solution to this problem is to have the caller supply the array and the callee fill it in. That requires [in, out] on the parameter, plus an extra parameter that indicates the size of the passed array, with [size_is] to tell the marshaller about it. Very efficient, since no allocation is required. Using the automation SAFEARRAY type avoids having to specify that extra argument, because the CLR knows about that type. A sketch of the [in, out] version follows.
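A rough sketch of the caller-allocates version in IDL (the cbKey parameter name is illustrative; the server writes at most cbKey bytes into the buffer supplied by the caller):
HRESULT Save([in] DWORD dwCommand,
             [in] float fdata[MAX_COUNT],
             [in] long cbKey,
             [in, out, size_is(cbKey)] BYTE phKey[]);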

C# Marshal byte[] to COM SAFEARRAY parameter with "ref object" signature

I've been going round and round in circles on Google on this, and I can find all kinds of discussion, lots of suggestions, but nothing seems to work. I have an ActiveX component which takes an image as a byte array. When I do a TLB import, it comes in with this signature:
int HandleImage([MarshalAs(UnmanagedType.Struct)] ref object Bitmap);
How do I pass a byte[] to that?
There's another function which can return the data with a similar signature, and it works because I can pass "null" in. The type that comes back is a byte[1..size] (non-zero bounded byte[]). But even if I try to pass in what came back, it still gets a type mismatch exception.
More details:
I've been editing the method in the IDispatch interface signature (using ILSpy to extract the interface from the auto-generated interop assembly). I've tried just about every combination of the following, it always gets Type mismatch exception:
Adding and removing the "ref"
Changing the parameter datatype to "byte[]" or "Array"
Marshalling as [MarshalAs(UnmanagedType.SafeArray, SafeArraySubType = VarEnum.VT_UI1)]. After playing around with MarshalAs quite a bit, I'm becoming convinced that IDispatch does not use those attributes.
Also tried using the "ref object" interface as is, and passing it different types: byte[], Array.CreateInstance(typeof(byte) (which I think are both identical, but I found someone suggesting it, so it couldn't hurt to try).
Here's an example of Delphi code that creates a proper array to pass in:
var
image: OLEVariant;
buf: Pointer;
image := VarArrayCreate([0, Stream.Size], VarByte);
Buf := VarArrayLock(image);
Stream.ReadBuffer(Buf^, Stream.Size);
VarArrayUnlock(image);
Here's the C code to do the same thing. I guess if I can't get it to work from C#, I can invoke it through managed C++, although I'd rather have everything in one project:
long HandleImage(unsigned char* Bitmap, int Length)
{
    VARIANT vBitmap;
    VariantInit(&vBitmap);
    VariantClear(&vBitmap);

    SAFEARRAYBOUND bounds[1];
    bounds[0].cElements = Length;
    bounds[0].lLbound = 1;

    SAFEARRAY* arr = SafeArrayCreate(VT_UI1, 1, bounds);
    SafeArrayLock(arr);
    memcpy(arr->pvData, Bitmap, Length);
    SafeArrayUnlock(arr);

    vBitmap.parray = arr;
    vBitmap.vt = VT_ARRAY | VT_UI1;

    long result;
    static BYTE parms[] = VTS_PVARIANT;
    InvokeHelper(0x5e, DISPATCH_METHOD, VT_I4, (void*)&result, parms, &vBitmap);

    SafeArrayDestroy(arr);
    VariantClear(&vBitmap);
    return result;
}
I finally figured out how to do it in 100% C# code. Apparently Microsoft never considered the idea that someone might use a method with this signature to pass data in, since it marshals correctly going the other direction (it properly comes back as a byte[]).
Also, ICustomMarshaler doesn't get called on IDispatch calls; it never hit the breakpoints in my custom marshaler (except for the static method that returns an instance of it).
The answer by Hans Passant in this question got me on the right track: Calling a member of IDispatch COM interface from C#
The copy of IDispatch there doesn't include the Invoke method, but it can be added to the interface, using types in System.Runtime.InteropServices.ComTypes as appropriate: http://msdn.microsoft.com/en-us/library/windows/desktop/ms221479%28v=vs.85%29.aspx
That means you get 100% control over marshaling arguments. Microsoft doesn't expose an implementation of the VARIANT structure, so you have to define your own: http://limbioliong.wordpress.com/2011/09/19/defining-a-variant-structure-in-managed-code-part-2/
The input parameters of Invoke are a variant array, so you have to marshal those to an unmanaged array, and there's a variant output parameter.
So now that we have a variant, what should it contain? This is where automatic marshaling falls down. Instead of directly embedding a pointer to the SAFEARRAY, it needs a pointer to another variant, and that variant should point to the SAFEARRAY.
You can build SAFEARRAYs via P/Invoking these methods: http://msdn.microsoft.com/en-us/library/windows/desktop/ms221145%28v=vs.85%29.aspx
So the main variant should be VT_VARIANT | VT_BYREF; it should point to another variant of type VT_UI1 | VT_ARRAY, and that one should point to a SAFEARRAY generated via SafeArrayCreate(). That outermost variant should then be copied into a block of unmanaged memory, and its IntPtr passed to the Invoke method.
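For reference, here is a rough sketch of the SafeArrayCreate route in C# (the P/Invoke declarations are based on the oleaut32 documentation; building the two nested VARIANTs described above is not shown):
[StructLayout(LayoutKind.Sequential)]
struct SAFEARRAYBOUND
{
    public uint cElements;
    public int lLbound;
}

[DllImport("oleaut32.dll")]
static extern IntPtr SafeArrayCreate(ushort vt, uint cDims, ref SAFEARRAYBOUND rgsabound);

[DllImport("oleaut32.dll")]
static extern int SafeArrayAccessData(IntPtr psa, out IntPtr ppvData);

[DllImport("oleaut32.dll")]
static extern int SafeArrayUnaccessData(IntPtr psa);

// Build a one-dimensional VT_UI1 SAFEARRAY holding the image bytes.
static IntPtr ToByteSafeArray(byte[] data)
{
    var bound = new SAFEARRAYBOUND { cElements = (uint)data.Length, lLbound = 0 };
    IntPtr psa = SafeArrayCreate(17 /* VT_UI1 */, 1, ref bound);
    SafeArrayAccessData(psa, out IntPtr pvData);
    Marshal.Copy(data, 0, pvData, data.Length);
    SafeArrayUnaccessData(psa);
    return psa;    // caller must eventually release it with SafeArrayDestroy
}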
