Consider the following setup: a C# application with a C++ library. C# elements are filled from C++ via a callback. On the C# side the callback is defined like this:
void callbackTester(IntPtr pData, UInt32 length)
{
int[] data = new int[length];
Marshal.Copy(pData, data, (int)0, (int)length);
//using data.. on c#
}
Now, on the C++ side the callback is defined like this:
typedef void (__stdcall *func)(uint8_t* pdata, uint32_t length);
and C++ invokes the callback like this:
void onData()
{
std::vector<uint8_t> dataToCallback;
// fill in dataToCallback
_callback(&(dataToCallback[0]), dataToCallback.size());
// where _callback is a pointer to that c# function
}
My task: get the array from the C++ side on the C# side using the callback.
So, when the C++ object calls onData(), it calls my callback from C#. So far so good. I have made a C++ tester program which uses this, and I receive the array correctly on the callback side. When I use the C# tester, I receive garbage.
For example: if I send a uint8_t array of {1, 1}, I get {1, 1} in the C++ tester, but {0xfeeeabab, 0xfeeefeee} on the C# side... obviously, the conversion between the C++ uint8_t* pointer and the C# IntPtr is not working as I expect.
Any suggestions? Thanks a lot.
The issue appears to be that C++'s uint8_t is an unsigned byte, while C#'s int is a signed 4-byte integer, so you have a simple type mismatch. The C# type that matches uint8_t is byte.
Your callback should be:
void callbackTester(IntPtr pData, uint length)
{
byte[] data = new byte[length];
Marshal.Copy(pData, data, 0, (int)length);
}
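For completeness, here is a minimal sketch of how the managed side might be wired up, assuming the delegate is handed to the native library through some registration call (not shown here):
using System;
using System.Runtime.InteropServices;

static class CallbackDemo
{
    // Matches the native typedef: void (__stdcall *func)(uint8_t* pdata, uint32_t length)
    [UnmanagedFunctionPointer(CallingConvention.StdCall)]
    delegate void DataCallback(IntPtr pData, uint length);

    // Keep a reference in a field so the GC cannot collect the delegate
    // while native code still holds the function pointer.
    static readonly DataCallback s_callback = CallbackTester;

    static void CallbackTester(IntPtr pData, uint length)
    {
        byte[] data = new byte[length]; // uint8_t is one byte per element
        Marshal.Copy(pData, data, 0, (int)length);
        // using data.. on C#
    }
}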
One thing to check: on the C# side you are expecting 4 bytes per element ("int[] data"), but on the C++ side you are only allocating 1 byte per element (uint8_t).
With that mismatch you read four times as much memory as was allocated, which can cause an access violation; that is why you see magic debug values [1]. Adjust the allocation or the length usage accordingly.
[1] http://en.wikipedia.org/wiki/Magic_number_(programming)#Magic_debug_values
I just noticed std::byte in C++17.
I am asking because I use the code below to send a byte array to C++ to play an audio sound.
C#:
[DllImport ("AudioStreamer")]
public static extern void playSound (byte[] audioBytes);
C++:
#define EXPORT_API __declspec(dllexport)
extern "C" void EXPORT_API playSound(unsigned char* audioBytes)
With the new byte type in C++17, it looks like I might be able to do this now:
C#:
[DllImport ("AudioStreamer")]
public static extern void playSound (byte[] audioBytes);
C++:
#define EXPORT_API __declspec(dllexport)
extern "C" void EXPORT_API playSound(byte[] audioBytes)
I am not sure if this will even work, because the compiler I use does not support std::byte from C++17 yet.
So, is std::byte in C++17 equivalent to byte in C#? Is there a reason not to use std::byte over unsigned char*?
According to the C++ reference:
Like the character types (char, unsigned char, signed char) std::byte can be used to access raw memory occupied by other objects.
This tells me that you can freely replace
unsigned char audioBytes[]
with
std::byte audioBytes[]
in a function header, and everything is going to work, provided that you plan to treat bytes as bytes, not as numeric objects.
std::byte is equivalent to both unsigned char and char in C++ in the sense that it is a type that represents 1 byte of raw memory.
If you used unsigned char* in your interface, you can easily replace it with std::byte*.
In your C# code this requires no changes at all; on the C++ side it makes your type system stricter (which is a good thing), because you will no longer be able to treat your std::byte values as text characters or as small integers.
Of course, this is a C++17 feature, which may or may not be properly supported by your compiler.
I am developing a wrapper library that allows my project to use an x86 C++ DLL in an AnyCPU environment. I have no control over the DLL, so I am using DllImport in C#.
There is a provided function declared in C++: int __stdcall Func(int V, unsigned char *A)
and a sample declaration provided in VB: Private Declare Function Func Lib "lib.dll" Alias "_Func@8" (ByVal V As Long, A As Any) As Long
This function requests a device to add/deduct a value to/from a card, passing Convert.ToInt64(decimalValue) as V and some customized information in A.
Here is the description of A:
It is a byte pointer containing 7 bytes.
The first 5 bytes store info that will be passed to the card log (the last 4 digits of the receipt number should be included in the first 2 bytes; the other 3 could be A3A4A5).
The last 2 bytes store info that will be passed to the device (the last 4 digits of the receipt number).
On return, A contains 32 bytes of data.
After hours and hours of research and attempts, I cannot produce any result other than an 'Access Violation Exception'. Please see the following draft code:
[DllImport("lib.dll", EntryPoint="_Func#8")]
public static external Int64 Func(Int64 V, StringBuilder sb);
string ReceiptNum = "ABC1234";
decimal Amount = 10m;
byte[] A = new byte[32];
A[0] = Convert.ToByte(ReceiptNum.Substring(3, 2));
A[1] = Convert.ToByte(ReceiptNum.Substring(5));
A[2] = Convert.ToByte("A3");
A[3] = Convert.ToByte("A4");
A[4] = Convert.ToByte("A5");
A[5] = Convert.ToByte(ReceiptNum.Substring(3, 2));
A[6] = Convert.ToByte(ReceiptNum.Substring(5));
StringBuilder sb = new StringBuilder(
new ASCIIEncoding().GetString(A), A.Length
);
Int64 Result = Func(Convert.ToInt64(Amount), sb);
And at this point it throws the exception. I have tried passing IntPtr, byte*, byte (via A[0]), by value, by reference, and none of them works. (I tried deploying as x86 as well.)
Would appreciate any help! Thanks for your time!
PS - The reason for using StringBuilder is that the library contains a function that accepts a "char *Data" parameter and causes the same exception; the solution there was to pass a StringBuilder as the pointer. That function's VB declaration is: Private Declare Function Func1 Lib "lib.dll" Alias "_Func1@12" (ByVal c As Byte, ByVal o As Byte, ByVal Data As String) As Long
Your extern declaration is wrong.
StringBuilder is a complex structure containing an array of C# char.
C# chars are UTF-16 (two bytes each, with complex rules for decoding multi-char Unicode characters). Probably not what you are seeking.
If your data is a raw byte buffer, you should go for byte[].
Int64 is also C# long.
Well, your native method signature takes int, and you're trying to pass a long long. That's not going to work, rather obviously. The same is true with the return value. Don't assume that VB maps clearly to VB.NET, much less C# - Long means a 32-bit integer in VB, but not in .NET. Native code is a very complex environment, and you better know what you're doing when trying to interface with native.
StringBuilder should only be used for character data. That's not your case, and you should use byte[] instead. No matter the fun things you're doing, you're trying to pass invalid unicode data instead of raw bytes. The confusion is probably from the fact that C doesn't distinguish between byte[] and string - both are usually represented as char*.
Additionally, I don't see how you'd expect this wrapper to work in an AnyCPU environment. If the native DLL is 32-bit, you can only use it from a 32-bit process. AnyCPU isn't magic, it just defers the decision of bit-ness to runtime, rather than compile-time.
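A minimal sketch along those lines, assuming the decorated name is the stdcall _Func@8 and that values like "A3" are meant as hex bytes (so they are parsed with base 16):
[DllImport("lib.dll", EntryPoint = "_Func@8", CallingConvention = CallingConvention.StdCall)]
public static extern int Func(int V, byte[] A);

byte[] A = new byte[32];         // 32 bytes, since the device writes back into it
A[2] = Convert.ToByte("A3", 16); // parse as hex; Convert.ToByte("A3") alone would throw
// ... fill the remaining header bytes as the device expects ...
int result = Func((int)Amount, A);
Because byte[] is blittable, the marshaller passes a pinned pointer to the first element, and the 32 bytes the device writes back are visible in A after the call.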
I've found that the implementation of the GetBytes function in the .NET Framework is something like:
public unsafe static byte[] GetBytes(int value)
{
byte[] bytes = new byte[4];
fixed(byte* b = bytes)
*((int*)b) = value;
return bytes;
}
I'm not so sure I understand the full details of these two lines:
fixed(byte* b = bytes)
*((int*)b) = value;
Could someone provide a more detailed explanation here? And how should I implement this function in standard C++?
Could someone provide a more detailed explanation here?
The MSDN documentation for fixed comes with numerous examples and explanation -- if that's not sufficient, then you'll need to clarify which specific part you don't understand.
And how should I implement this function in standard C++?
#include <cstring>
#include <vector>
std::vector<unsigned char> GetBytes(int value)
{
std::vector<unsigned char> bytes(sizeof(int));
std::memcpy(&bytes[0], &value, sizeof(int));
return bytes;
}
Fixed tells the garbage collector not to move a managed type so that you can access that type with standard pointers.
In C++, if you're not using C++/CLI (i.e. not using .NET) then you can just use a byte-sized pointer (char) and loop through the bytes in whatever you're trying to convert.
Just be aware of endianness...
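For instance, a quick way to see what your machine does:
byte[] b = BitConverter.GetBytes(1);
Console.WriteLine(BitConverter.IsLittleEndian); // True on x86/x64
Console.WriteLine(BitConverter.ToString(b));    // "01-00-00-00" on a little-endian machine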
First fixed has to be used because we want to assign a pointer to a managed variable:
The fixed statement prevents the garbage collector from relocating a
movable variable. The fixed statement is only permitted in an unsafe
context. Fixed can also be used to create fixed size buffers.
The fixed statement sets a pointer to a managed variable and "pins"
that variable during the execution of the statement. Without fixed,
pointers to movable managed variables would be of little use since
garbage collection could relocate the variables unpredictably. The
C# compiler only lets you assign a pointer to a managed variable in a
fixed statement. Ref.
Then we declare a pointer to byte and point it at the start of the byte array.
Then we cast the byte pointer to an int pointer, dereference it, and assign the int passed in to that location.
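So, for example (a hypothetical driver for the GetBytes shown above; the project must be compiled with /unsafe):
byte[] bytes = GetBytes(0x01020304);
// On a little-endian machine the low-order byte comes first:
// bytes[0] == 0x04, bytes[1] == 0x03, bytes[2] == 0x02, bytes[3] == 0x01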
The function creates a byte array that contains the same binary data as your platform's representation of the integer value. In C++, this can be achieved (for any type really) like so:
#include <algorithm> // for std::copy

int value = 42; // or any type, really!
unsigned char b[sizeof(int)];
unsigned char const * const p = reinterpret_cast<unsigned char const *>(&value);
std::copy(p, p + sizeof(int), b);
Now b is an array of as many bytes as the size of the type int (or whichever type you used).
In C# you need to say fixed to obtain a raw pointer, since usually you do not have raw pointers in C# on account of objects not having a fixed location in memory -- the garbage collector can move them around at any time. fixed prevents this and fixes the object in place so a raw pointer can make sense.
You can implement GetBytes() for any POD type with a simple function template.
#include <vector>
template <typename T>
std::vector<unsigned char> GetBytes(T value)
{
return std::vector<unsigned char>(reinterpret_cast<unsigned char*>(&value),
reinterpret_cast<unsigned char*>(&value) + sizeof(value));
}
Here is a C++ header-only library that may be of help: BitConverter.
The idea of implementing the GetBytes function in C++ is straightforward: compute each byte of the value according to the specified layout. For example, say we need the bytes of an unsigned 16-bit integer in big-endian order. We can divide the value by 256 to get the first byte and take the remainder as the second byte.
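A sketch of that idea for a 16-bit value:
static byte[] GetBytesBigEndian(ushort value)
{
    byte[] bytes = new byte[2];
    bytes[0] = (byte)(value / 256); // quotient: the high byte comes first in big endian
    bytes[1] = (byte)(value % 256); // remainder: the low byte
    return bytes;
}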
For floating-point numbers, the algorithm is a little bit more complicated. We need to get the sign, exponent, and mantissa of the number, and encode them as bytes. See https://en.wikipedia.org/wiki/Double-precision_floating-point_format
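For example, the three fields of a double can be pulled out like this in C# (a sketch of the layout described at the link above, not part of the BitConverter library):
long bits = BitConverter.DoubleToInt64Bits(3.14);
int sign = (int)((bits >> 63) & 1);         // 1 sign bit
int exponent = (int)((bits >> 52) & 0x7FF); // 11 exponent bits (biased by 1023)
long mantissa = bits & 0xFFFFFFFFFFFFFL;    // 52 mantissa bits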
I'm having trouble figuring out the best way to have a Delphi function operate on a byte array from .NET.
The Delphi signature looks like this:
procedure Encrypt(
var Bytes: array of byte;
const BytesLength: Integer;
const Password: PAnsiChar); stdcall; export;
The C# code looks like this:
[DllImport("Encrypt.dll",
CallingConvention = CallingConvention.StdCall,
CharSet = CharSet.Ansi)]
public static extern void Encrypt(
ref byte[] bytes,
int bytesLength,
string password);
Omitting var and ref before the byte array declaration seemed to fail, but is it required since I'll be changing only the contents of the array and not the array itself?
Also, for some reason I can't seem to get the length of the array in Delphi: if I remove the BytesLength parameter, then Length(Bytes) does not work; if I add the BytesLength parameter, Length(Bytes) starts to work, but BytesLength has the wrong value.
Make the first parameter of the Delphi Encrypt be Bytes: PByte and you should be good to go.
An open array, as you have it, expects to be passed both the pointer to the first element and the length, which explains what you describe in your question.
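In other words, assuming the Delphi parameter is changed to Bytes: PByte, the matching C# declaration would look something like this:
[DllImport("Encrypt.dll",
    CallingConvention = CallingConvention.StdCall,
    CharSet = CharSet.Ansi)]
public static extern void Encrypt(
    byte[] bytes,
    int bytesLength,
    string password);
A byte[] marshals as a pointer to its first element and, being blittable, is pinned for the duration of the call, so the contents Delphi writes into the buffer are visible to the caller without ref.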
I need to call an external DLL from C#. This is the header definition:
enum WatchMode {
WATCH_MODE_SYSTEM = 0,
WATCH_MODE_APPLICATION = 1 };
LONG ADS_API WDT_GetMode ( LONG i_hHandle, WatchMode * o_pWatchMode );
I've added the enum and the call in C#:
public enum WatchMode
{
WATCH_MODE_SYSTEM = 0,
WATCH_MODE_APPLICATION = 1
}
[DllImport("AdsWatchdog.dll")]
internal static extern long WDT_GetMode(long hHandle, ref WatchMode watchmode);
This generates an AccessViolationException. I know the DLL is 'working' because I've also added a call to GetHandle, which returns the hHandle mentioned above. I've tried changing the parameter to an int (ref int watchmode) but get the same error. Does anyone know how I can P/Invoke the above call?
You're running into a parameter-size difference between C# and C++. In the C++/Windows world, LONG is a 4-byte signed integer; in the C# world, long is an 8-byte signed integer. You should change your C# signature to take an int.
ffpf is wrong in saying that you should use an IntPtr here. It will fix this particular problem on a 32-bit machine, since an IntPtr will marshal as an int. If you run this on a 64-bit machine, it will marshal as an 8-byte signed integer again and will crash.
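A corrected sketch (assuming ADS_API means stdcall, which is typical for this kind of DLL):
[DllImport("AdsWatchdog.dll", CallingConvention = CallingConvention.StdCall)]
internal static extern int WDT_GetMode(int hHandle, ref WatchMode watchmode);
The enum marshals as its underlying 32-bit int, so ref WatchMode matches the native WatchMode* parameter.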
The Managed, Native, and COM Interop Team released the PInvoke Interop Assistant on CodePlex. Maybe it can generate the proper signature.
http://www.codeplex.com/clrinterop/Release/ProjectReleases.aspx?ReleaseId=14120