The pinvoke documentation for GetExitCodeProcess shows exit codes returned as unsigned integers (uint). How do I handle a process with negative exit code values? Is LPDWORD correctly mapped to uint, or is that a bug in the pinvoke doc?
pinvoke doc:
http://www.pinvoke.net/default.aspx/kernel32.getexitcodeprocess
win32 api doc:
http://msdn.microsoft.com/en-us/library/ms683189(v=vs.85).aspx
DWORD is an unsigned integer:
A 32-bit unsigned integer. The range is 0 through 4294967295 decimal.
This type is declared in WinDef.h as follows:
typedef unsigned long DWORD;
No bug here.
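To actually recover a negative exit code (e.g. one set via Environment.Exit(-1)), keep the uint signature and reinterpret the bits with an unchecked cast. A minimal sketch, assuming you already have a valid process handle from elsewhere:

```csharp
using System;
using System.Runtime.InteropServices;

class ExitCodeExample
{
    // Matches the Win32 signature: BOOL GetExitCodeProcess(HANDLE, LPDWORD)
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GetExitCodeProcess(IntPtr hProcess, out uint lpExitCode);

    // Reinterprets the DWORD bits as a signed value: 0xFFFFFFFF becomes -1.
    static int GetSignedExitCode(IntPtr hProcess)
    {
        if (!GetExitCodeProcess(hProcess, out uint exitCode))
            throw new System.ComponentModel.Win32Exception();
        return unchecked((int)exitCode);
    }
}
```

The cast is lossless: both types are 32 bits wide, so the same bit pattern simply gets a signed interpretation on the managed side.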
Related
I have a struct in c++:
struct some_struct {
    uchar* data;
    size_t size;
};
I want to pass it between managed (C#) and native (C++) code. What is the equivalent of size_t in C#?
P.S. I need an exact match in size, because any byte difference will result in huge problems while wrapping.
EDIT:
Both the native and managed code are under my full control (I can edit whatever I want).
There is no C# equivalent to size_t.
The C# sizeof() operator always returns an int value regardless of platform, so technically the C# equivalent of size_t is int, but that's no help to you.
(Note that Marshal.SizeOf() also returns an int.)
Also note that no C# object can be larger than 2 GB as far as sizeof() and Marshal.SizeOf() are concerned. Arrays can be larger than 2 GB, but you cannot use sizeof() or Marshal.SizeOf() with arrays.
For your purposes, you will need to know what type the code in the DLL uses for size_t and use an integral type of the matching size in C#.
One important thing to realise is that in C/C++ size_t will generally have the same number of bits as intptr_t but this is NOT guaranteed, especially for segmented architectures.
I know lots of people say "use UIntPtr", and that will normally work, but it's not GUARANTEED to be correct.
From the C/C++ definition of size_t, size_t
is the unsigned integer type of the result of the sizeof operator;
The best equivalent for size_t in C# is the UIntPtr type. It's 32-bit on 32-bit platforms, 64-bit on 64-bit platforms, and unsigned.
You'd be better off using nint/nuint, which are wrappers around IntPtr/UIntPtr.
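As a sketch, assuming a conventional flat-memory platform where size_t matches the pointer size, the struct above could be mirrored like this (the name SomeStruct is just for illustration):

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct SomeStruct
{
    public IntPtr data;  // uchar* - raw pointer to the native buffer
    public nuint size;   // size_t - pointer-sized unsigned integer (C# 9+)
}
```

nuint compiles down to System.UIntPtr, so it occupies 4 bytes in a 32-bit process and 8 bytes in a 64-bit process, matching size_t on mainstream platforms (though, as noted above, that equivalence is not guaranteed by the C standard).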
I've noticed that DWord and QWord values, when written to the Registry, are supposed to be signed integers, not unsigned. This code will throw an exception if value is a UInt64 or UInt32:
registryKey.SetValue(name, value);
According to MSDN DWORD is a 32-bit unsigned integer (range: 0 through 4294967295 decimal) https://msdn.microsoft.com/en-us/library/cc230318.aspx
So, to write a new DWORD value to the Registry I need to cast it to a signed integer like so:
UInt32 unsignedValue = (UInt32)someValue;
Int32 signedValue = (Int32)unsignedValue;
registryKey.SetValue(name, signedValue);
Passing an unsigned value to the SetValue method will throw an exception.
Am I missing something, or am I just doing this wrong?
For historical reasons, the .NET APIs/libraries normally use "signed" types instead of "signed + unsigned" pairs.
But in the end, a signed int and an unsigned int occupy the same memory space, and no special handling is done for negative values. So you can do as you said: cast the unsigned value to signed and write it with SetValue; if you then look at the value in Regedit, you'll see it was written "unsigned".
Note that if your program is compiled in "checked" mode, the more correct code would be:
uint unsignedValue = ... // Your original value
int signedValue = unchecked((int)unsignedValue);
registryKey.SetValue(name, signedValue);
Because in "checked" mode casting between int and uint can throw an exception if the conversion isn't possible.
Note that as written here:
This overload of SetValue stores 64-bit integers as strings (RegistryValueKind.String). To store 64-bit numbers as RegistryValueKind.QWord values, use the SetValue(String, Object, RegistryValueKind) overload that specifies RegistryValueKind.
Clearly you'll have to do the same signed/unsigned handling for QWord values as well.
From the RegistryKey.SetValue page's example:
// Numeric values that cannot be interpreted as DWord (int) values
// are stored as strings.
It seems the stored values are signed ints or strings.
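The string fallback can be avoided by using the three-argument overload and specifying the value kind explicitly. A sketch (the method names are just for illustration; the unchecked casts are still needed because the boxed value is converted with checked semantics internally):

```csharp
using Microsoft.Win32;

static class RegistryDwordExample
{
    static void WriteDword(RegistryKey key, string name, uint value)
    {
        // Explicit value kind: stored as a real REG_DWORD, never as a string.
        key.SetValue(name, unchecked((int)value), RegistryValueKind.DWord);
    }

    static void WriteQword(RegistryKey key, string name, ulong value)
    {
        // Same idea for 64-bit values: stored as REG_QWORD.
        key.SetValue(name, unchecked((long)value), RegistryValueKind.QWord);
    }
}
```

When reading the value back with GetValue, reverse the trick: cast the returned int/long back to uint/ulong with another unchecked cast.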
Here's a piece of code for obtaining the time when a .NET assembly was built. Note this line:
int secondsSince1970 = System.BitConverter.ToInt32(b, i + c_LinkerTimestampOffset);
This code extracts the TimeDateStamp member of the IMAGE_FILE_HEADER structure that is stored inside the assembly. The structure is defined as follows:
typedef struct _IMAGE_FILE_HEADER {
    WORD  Machine;
    WORD  NumberOfSections;
    DWORD TimeDateStamp;
    DWORD PointerToSymbolTable;
    DWORD NumberOfSymbols;
    WORD  SizeOfOptionalHeader;
    WORD  Characteristics;
} IMAGE_FILE_HEADER, *PIMAGE_FILE_HEADER;
and DWORD is defined as follows:
typedef unsigned long DWORD;
and the struct documentation says that TimeDateStamp is a number of seconds since an arbitrary moment in the past, so it can't be negative.
Why does the C# code use signed type int to store that unsigned value?
It is because unsigned int is not a CLS-compliant type, and all .NET libraries should follow the Common Language Specification.
More info about CLS compliance:
http://msdn.microsoft.com/en-us/library/12a7a7h3.aspx
Is this list correct?
unsigned int(c) -> uint(c#)
const char*(c) -> String(c#)
unsigned int*(c) -> uint[](c#)
unsigned char*(c) -> byte[](c#)
I think there's a mistake here, because with these four parameters for a native function I get a PInvokeStackImbalance.
C function is:
bool something
(unsigned char *a,
unsigned int a_length,
unsigned char *b,
unsigned int *b_length);
PInvoke is:
[DllImport("lib.dll", EntryPoint = "something")]
public static extern bool something(
byte[] a,
uint a_length,
byte[] b,
uint[] b_length);
First, PInvoke.net is your friend.
Second, your conversions are correct, except that you should use a StringBuilder for functions that take a char* as a buffer to fill ([in, out]).
Your stack imbalance may be due to the use of different calling conventions. The default calling convention for C# P/Invoke is __stdcall, but your C function is probably __cdecl. If that is the case, you will need to add the CallingConvention to your DllImport attribute.
EDIT: Also, as Groo pointed out, if the pointer arguments in your C function are actually just pointers to a single unsigned int (as opposed to expecting an array), then you should use ref uint instead of a uint[].
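Putting both fixes together — Cdecl calling convention (assuming the C library was built that way) and ref uint for the single-value output parameter — the declaration would look roughly like:

```csharp
using System.Runtime.InteropServices;

static class NativeMethods
{
    [DllImport("lib.dll", EntryPoint = "something",
               CallingConvention = CallingConvention.Cdecl)]
    public static extern bool something(
        byte[] a,
        uint a_length,
        byte[] b,
        ref uint b_length);  // unsigned int* pointing at one value, not an array
}
```

One more subtlety: C# marshals bool as a 4-byte Win32 BOOL by default, so if the native return type is a 1-byte C99 _Bool / C++ bool, you may also need [return: MarshalAs(UnmanagedType.U1)] on the declaration.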
I am using C# in Mono and I'm trying to use pinvoke to call a Linux shared library.
The c# call is defined as:
[DllImport("libaiousb")]
extern static ulong AIOUSB_Init();
The Linux function is defined as follows:
unsigned long AIOUSB_Init() {
return(0);
}
The compile command for the Linux code is:
gcc -ggdb -std=gnu99 -D_GNU_SOURCE -c -Wall -pthread -fPIC
-I/usr/include/libusb-1.0 AIOUSB_Core.c -o AIOUSB_Core.dbg.o
I can call the function ok but the return result is bonkers. It should be 0 but I'm getting some huge mangled number.
I've put printf's in the Linux code just before the function value is returned and it is correct.
One thing I've noticed that is a little weird: the printf should occur before the function returns. However, I see the function return to C#, then C# prints the return result, and only finally is the printf output displayed.
From here:
An unsigned long can hold all the values between 0 and ULONG_MAX inclusive. ULONG_MAX must be at least 4294967295. The long types must contain at least 32 bits to hold the required range of values.
For this reason a C unsigned long is usually translated to a .NET UInt32:
[DllImport("libaiousb")]
extern static uint AIOUSB_Init();
You're probably running that on a system where a C unsigned long is 32 bits. A C# ulong is 64 bits. If you want to make sure the return value is a 64-bit unsigned integer, include stdint.h and return a uint64_t from AIOUSB_Init(). (The out-of-order printf, incidentally, is just stdout buffering: the C side's output isn't flushed until later, while C#'s Console writes immediately.)