C# struct containing a byte array and a long value

This is my code to use a long variable together with its bytes, but when the program runs, an exception is thrown:
An unhandled exception of type 'System.TypeLoadException' occurred in Test.exe
Additional information: Could not load type 'Test.MyU32' from assembly 'Test,
Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' because it contains an
object field at offset 0 that is incorrectly aligned or overlapped by a non-object
field.
[StructLayout(LayoutKind.Explicit)]
public struct MyU32
{
[FieldOffset(0)]
[MarshalAs(UnmanagedType.ByValArray, SizeConst = 4)]
public byte[] Bytes;
[FieldOffset(0)]
public long Value;
}
Please help me figure out how to handle it!

Your code doesn't work because you're overlapping a reference and a value type (a 64-bit integer). You can overlap different value types, and you can overlap different references, but you can't mix them.
But even when they work, such low-level hacks are usually a bad idea in C#. I recommend using properties that perform the transformation instead of low-level unions.
Perhaps what you actually want is:
internal static class ByteIntegerConverter
{
public static UInt32 LoadLittleEndian32(byte[] buf, int offset)
{
return
(UInt32)(buf[offset + 0])
| (((UInt32)(buf[offset + 1])) << 8)
| (((UInt32)(buf[offset + 2])) << 16)
| (((UInt32)(buf[offset + 3])) << 24);
}
public static void StoreLittleEndian32(byte[] buf, int offset, UInt32 value)
{
buf[offset + 0] = (byte)value;
buf[offset + 1] = (byte)(value >> 8);
buf[offset + 2] = (byte)(value >> 16);
buf[offset + 3] = (byte)(value >> 24);
}
}
UInt32 value = ByteIntegerConverter.LoadLittleEndian32(buf, offset);
// do something with `value`
ByteIntegerConverter.StoreLittleEndian32(buf, offset, value);
This always uses little endian regardless of the computer's native endianness. If you want native endianness you could check BitConverter.IsLittleEndian and use different shift constants on big-endian machines.
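As a sketch of the native-endian variant (an illustration built on BitConverter, not part of the original answer):

```csharp
using System;

internal static class NativeByteIntegerConverter
{
    // Reads a UInt32 using the CPU's native byte order.
    public static uint LoadNative32(byte[] buf, int offset)
        => BitConverter.ToUInt32(buf, offset);

    // Portable little-endian read built on BitConverter:
    // reverse the 4-byte slice first when running on a big-endian CPU.
    public static uint LoadLittleEndian32(byte[] buf, int offset)
    {
        byte[] tmp = new byte[4];
        Array.Copy(buf, offset, tmp, 0, 4);
        if (!BitConverter.IsLittleEndian)
            Array.Reverse(tmp);
        return BitConverter.ToUInt32(tmp, 0);
    }
}
```

On a little-endian machine both methods agree; they differ only on big-endian hardware.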

I'm not perfectly sure, but I think the problem is caused by the overlap between a value type and a reference type. Overlapping should only be possible between value types: if you could overlap a value type with a reference type, you could change the reference directly, which is forbidden for obvious safety reasons.
Since byte[] is a reference type (as are all arrays in .NET), you can't have Value overlapping Bytes.
If you are used to C, your structure (without the explicit layout) would be "similar" to:
struct MyU32
{
byte* Bytes;
long Value;
}
but it is not similar to:
struct MyU32
{
byte Bytes[4];
long Value;
}
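For completeness, a legal explicit-layout union keeps every overlapping field a value type. A sketch (note that which byte field maps to which part of Value depends on the machine's endianness):

```csharp
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
public struct MyU32Union
{
    [FieldOffset(0)] public uint Value; // value types may legally overlap
    [FieldOffset(0)] public byte B0;    // low byte on little-endian CPUs
    [FieldOffset(1)] public byte B1;
    [FieldOffset(2)] public byte B2;
    [FieldOffset(3)] public byte B3;    // high byte on little-endian CPUs
}
```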

I solved my problem.
Thanks to everybody who put time into it.
public struct MyU32
{
[MarshalAs(UnmanagedType.ByValArray, SizeConst = 4)]
public byte[] Bytes;
public uint Value
{
get { return (uint)(Bytes[0] | (Bytes[1] << 8) | (Bytes[2] << 16) | (Bytes[3] << 24)); }
set
{
Bytes[0] = (byte)(value & 0xFF);
Bytes[1] = (byte)(value>>8 & 0xFF);
Bytes[2] = (byte)(value>>16 & 0xFF);
Bytes[3] = (byte)(value>>24 & 0xFF);
}
}
}

Related

byte array from specific index as struct in c# without making a copy

Currently I'm writing client-server code and deal a lot with C++ structs passed over the network.
I know about the approaches provided in Reading a C/C++ data structure in C# from a byte array, but they are all about making a copy.
I want to have something like that:
struct/*or class*/ SomeStruct
{
public uint F1;
public uint F2;
public uint F3;
}
Later in my code I want to have something like that:
byte[] Data; //16 bytes that I got from network
SomeStruct PartOfDataAsSomeStruct { get { return /*make SomeStruct instance based on this.Data starting from index 4, without copying it. So when I do PartOfDataAsSomeStruct.F1 = 132465; it also changes bytes 4, 5, 6 and 7 in this.Data.*/; } }
If this is possible, please tell me how.
Like so?
byte[] data = new byte[16];
// 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
Console.WriteLine(BitConverter.ToString(data));
ref SomeStruct typed = ref Unsafe.As<byte, SomeStruct>(ref data[4]);
typed.F1 = 42;
typed.F2 = 3;
typed.F3 = 9;
// 00-00-00-00-2A-00-00-00-03-00-00-00-09-00-00-00
Console.WriteLine(BitConverter.ToString(data));
This coerces the data from the middle of the byte-array using a ref-local that is an "interior managed pointer" to the data. Zero copies.
If you need multiple items (like how a vector would work), you can do the same thing with spans and MemoryMarshal.Cast
Note that it uses CPU-endian rules for the elements - little endian in my case.
For spans:
byte[] data = new byte[256];
// create a span of some of it
var span = new Span<byte>(data, 4, 128);
// now coerce the span
var typed = MemoryMarshal.Cast<byte, SomeStruct>(span);
Console.WriteLine(typed.Length); // 10 of them fit
typed[3].F1 = 3; // etc
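If you need a fixed byte order rather than CPU-endian rules, a sketch using System.Buffers.Binary.BinaryPrimitives (an assumption that you target .NET Core 2.1 or later) reads and writes an explicit endianness over a span:

```csharp
using System;
using System.Buffers.Binary;

byte[] data = new byte[16];
// Write a field at offset 4 in little-endian order, whatever the CPU is.
BinaryPrimitives.WriteUInt32LittleEndian(data.AsSpan(4), 42);
uint f1 = BinaryPrimitives.ReadUInt32LittleEndian(data.AsSpan(4));
Console.WriteLine(f1); // 42
```

Unlike the Unsafe.As coercion above, this copies the value out, but the byte order is guaranteed on every platform.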
Thank you for the correction, Marc Gravell. And thank you for the example.
Here is a way using a class and bitwise operators, without pointers, to do the same thing:
class SomeClass
{
public byte[] Data;
public SomeClass()
{
Data = new byte[16];
}
public uint F1
{
get
{
uint ret = (uint)(Data[4] << 24 | Data[5] << 16 | Data[6] << 8 | Data[7]);
return ret;
}
set
{
Data[4] = (byte)(value >> 24);
Data[5] = (byte)(value >> 16);
Data[6] = (byte)(value >> 8);
Data[7] = (byte)value;
}
}
}
Testing:
SomeClass sc = new SomeClass();
sc.F1 = 0b_00000001_00000010_00000011_00000100;
Console.WriteLine(sc.Data[4].ToString() + " " + sc.Data[5].ToString() + " " + sc.Data[6].ToString() + " " + sc.Data[7].ToString());
Console.WriteLine(sc.F1.ToString());
//Output:
//1 2 3 4
//16909060

Mimick C++ nested structs with union in C#

I know this question has been asked many times before, and I've tried to read through all the previous questions without much luck.
I am trying to convert the following C++ struct to C#, for use with socket communication.
enum class packet_type
{
read_mem,
get_base_addr,
get_pid,
completed
};
struct copy_mem
{
unsigned int dest_process_id;
unsigned long long dest_address;
unsigned int src_process_id;
unsigned long long src_address;
unsigned int size;
};
struct get_base_addr
{
unsigned int process_id;
};
struct get_pid
{
size_t len;
wchar_t name[256];
};
struct completed
{
unsigned long long result;
};
struct PacketHeader
{
//uint32_t magic;
packet_type type;
};
struct Packet
{
PacketHeader header;
union
{
copy_mem copy_memory;
get_base_addr get_base_address;
get_pid get_pid;
completed completed;
} data;
};
And this is my current C# implementation
public enum PacketType
{
read_mem = 0,
get_base_addr = 1,
get_pid = 2,
completed = 3
}
[StructLayout(LayoutKind.Sequential)]
public struct PacketHeader
{
public PacketType type;
}
[StructLayout(LayoutKind.Sequential)]
public struct get_base_addr
{
uint process_id;
};
[StructLayout(LayoutKind.Sequential)]
public struct get_pid
{
public ulong len;
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = 256)]
public string name;
}
[StructLayout(LayoutKind.Sequential)]
public struct copy_mem
{
public uint dest_process_id;
public ulong dest_address;
public uint src_process_id;
public ulong src_address;
public uint size;
}
[StructLayout(LayoutKind.Sequential)]
public struct completed
{
public ulong result;
};
[StructLayout(LayoutKind.Explicit, Pack = 0, CharSet = CharSet.Unicode)]
public struct Packet
{
[FieldOffset(0)] //
public PacketHeader header;
[FieldOffset(4)]
public copy_mem CopyMem; //28
[FieldOffset(32)]
public get_base_addr GetBaseAddress;
[FieldOffset(36)]
public get_pid GetPid;
[FieldOffset(300)]
public completed Completed;
}
I am then using this method to convert the struct to a byte array for the socket transmission:
public static byte[] RawSerialize<T>(T item)
{
int rawSize = Marshal.SizeOf(typeof(T));
IntPtr buffer = Marshal.AllocHGlobal(rawSize);
var a = Marshal.SizeOf(item);
var b = Marshal.SizeOf(buffer);
Marshal.StructureToPtr(item, buffer, false);
byte[] rawData = new byte[rawSize];
Marshal.Copy(buffer, rawData, 0, rawSize);
Marshal.FreeHGlobal(buffer);
return rawData;
}
The issue is that var a = Marshal.SizeOf(item); reports a size of 312, but the actual struct should be 528 bytes when I do sizeof(Packet) in C++
Your assumptions seem to be wrong. First of all, the wchar_t type may have different lengths on different machines. On mine, an x64 Linux box, it's 4 bytes - that alone makes get_pid a 1032-byte struct. You might be interested in using a char16_t or char32_t type instead (see e.g. here).
Since the union in Packet overlaps all fields, this also makes Packet a 1040 byte-sized struct: 4 bytes for PacketHeader, 1032 bytes for get_pid - which is the "longest" struct in there by far - and 4 bytes for padding. Padding, sadly, is platform specific.
To get rid of padding from the C/C++ compiler, you'd need to use attributes such as GCC's __attribute__ ((packed)) or Visual C++'s #pragma pack(1) (see e.g. this SO answer).
Careful though, the field offsets in C# are wrong as well: Except for the header, all field offsets in Packet have to be [FieldOffset(4)] - since in C++ it's a union that starts at byte 4 (assuming zero padding).
For portability, also be aware that an unsigned long long is platform specific as well and that the only guarantee for it is to be at least 64 bit long. If you need exactly 64 bit, you may want to use uint64_t instead (see e.g. here).
Here's the code I used to determine sizes (Linux x64, GCC 9.3):
int main() {
std::cout << "packet_type: " << sizeof(packet_type) << std::endl;
std::cout << "copy_mem: " << sizeof(copy_mem) << std::endl;
std::cout << "get_base_addr: " << sizeof(get_base_addr) << std::endl;
std::cout << "get_pid: " << sizeof(get_pid) << std::endl;
std::cout << "completed: " << sizeof(completed) << std::endl;
std::cout << "PacketHeader: " << sizeof(PacketHeader) << std::endl;
std::cout << "Packet: " << sizeof(Packet) << std::endl;
std::cout << "wchar_t: " << sizeof(wchar_t) << std::endl;
return 0;
}
With padding (default structs):
packet_type: 4
copy_mem: 40
get_base_addr: 4
get_pid: 1032
completed: 8
PacketHeader: 4
Packet: 1040
wchar_t: 4
No padding (__attribute__ ((packed))):
packet_type: 4
copy_mem: 28
get_base_addr: 4
get_pid: 1032
completed: 8
PacketHeader: 4
Packet: 1036
wchar_t: 4
As was pointed out in the comments, setting the Packet struct's GetPid field to [FieldOffset(4)] will result in the following runtime error:
Unhandled exception. System.TypeLoadException: Could not load type 'Packet' from assembly 'StructSize, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' because it contains an object field at offset 4 that is incorrectly aligned or overlapped by a non-object field.
One way to work around this is to define the get_pid struct like so:
[StructLayout(LayoutKind.Sequential, Pack = 0)]
public unsafe struct get_pid
{
public ulong len;
public fixed byte name[256];
}
This still assumes that the name string is 128 characters long, each a 2-byte UTF-16 code unit. With this change, the name field is accessed as a byte*. To get the string back, either of the following two methods should work:
public static unsafe string GetName(get_pid gp) =>
new string((sbyte*) gp.name, 0, 256, Encoding.Unicode);
public static unsafe string GetName(get_pid gp) =>
Marshal.PtrToStringUni(new IntPtr(gp.name), 128); // length is in characters, not bytes
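Going the other way, a sketch of a hypothetical SetName helper (not part of the original answer) that copies a string's UTF-16 bytes into the fixed buffer; the struct is repeated so the snippet is self-contained, and it requires compiling with /unsafe:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;

[StructLayout(LayoutKind.Sequential, Pack = 0)]
public unsafe struct get_pid
{
    public ulong len;
    public fixed byte name[256]; // 256 bytes = 128 UTF-16 characters
}

public static class GetPidHelpers
{
    public static unsafe void SetName(ref get_pid gp, string value)
    {
        byte[] utf16 = Encoding.Unicode.GetBytes(value);
        int count = Math.Min(utf16.Length, 256); // don't overrun the buffer
        for (int i = 0; i < count; i++)
            gp.name[i] = utf16[i];
        gp.len = (ulong)(count / 2); // length in characters (an assumption about the protocol)
    }
}
```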

int to byte[] consistency over network

I have a struct that gets used all over the place and that I store as byteArray on the hd and also send to other platforms.
I used to do this by getting a string version of the struct and using getBytes(utf-8) and getString(utf-8) during serialization. With that I guess I avoided the little and big endian problems?
However that was quite a bit of overhead and I am now using this:
public static explicit operator byte[] (Int3 self)
{
byte[] int3ByteArr = new byte[12];//4*3
int x = self.x;
int3ByteArr[0] = (byte)x;
int3ByteArr[1] = (byte)(x >> 8);
int3ByteArr[2] = (byte)(x >> 0x10);
int3ByteArr[3] = (byte)(x >> 0x18);
int y = self.y;
int3ByteArr[4] = (byte)y;
int3ByteArr[5] = (byte)(y >> 8);
int3ByteArr[6] = (byte)(y >> 0x10);
int3ByteArr[7] = (byte)(y >> 0x18);
int z = self.z;
int3ByteArr[8] = (byte)z;
int3ByteArr[9] = (byte)(z >> 8);
int3ByteArr[10] = (byte)(z >> 0x10);
int3ByteArr[11] = (byte)(z >> 0x18);
return int3ByteArr;
}
public static explicit operator Int3(byte[] self)
{
int x = self[0] + (self[1] << 8) + (self[2] << 0x10) + (self[3] << 0x18);
int y = self[4] + (self[5] << 8) + (self[6] << 0x10) + (self[7] << 0x18);
int z = self[8] + (self[9] << 8) + (self[10] << 0x10) + (self[11] << 0x18);
return new Int3(x, y, z);
}
It works quite well for me, but I am not quite sure how little/big endian works. Do I still have to take care of something here to be safe when some other machine receives an int I sent as a byte array?
Your current approach will not work when your application runs on a system that uses big-endian byte order. On such a system you wouldn't need reordering at all.
You don't need to reverse byte arrays yourself, and you don't need to check the endianness of the system yourself.
The static method IPAddress.HostToNetworkOrder converts an integer to an integer with big-endian (network) order.
The static method IPAddress.NetworkToHostOrder converts an integer back to the order your system uses.
These methods check the endianness of the system and reorder the bytes only when needed.
For getting bytes from an integer and back, use BitConverter:
public struct ThreeIntegers
{
public int One;
public int Two;
public int Three;
}
public static byte[] ToBytes(this ThreeIntegers value )
{
byte[] bytes = new byte[12];
byte[] bytesOne = IntegerToBytes(value.One);
Buffer.BlockCopy(bytesOne, 0, bytes, 0, 4);
byte[] bytesTwo = IntegerToBytes(value.Two);
Buffer.BlockCopy(bytesTwo , 0, bytes, 4, 4);
byte[] bytesThree = IntegerToBytes(value.Three);
Buffer.BlockCopy(bytesThree , 0, bytes, 8, 4);
return bytes;
}
public static byte[] IntegerToBytes(int value)
{
int reordered = IPAddress.HostToNetworkOrder(value);
return BitConverter.GetBytes(reordered);
}
And converting from bytes to struct
public static ThreeIntegers GetThreeIntegers(byte[] bytes)
{
int rawValueOne = BitConverter.ToInt32(bytes, 0);
int valueOne = IPAddress.NetworkToHostOrder(rawValueOne);
int rawValueTwo = BitConverter.ToInt32(bytes, 4);
int valueTwo = IPAddress.NetworkToHostOrder(rawValueTwo);
int rawValueThree = BitConverter.ToInt32(bytes, 8);
int valueThree = IPAddress.NetworkToHostOrder(rawValueThree);
return new ThreeIntegers(valueOne, valueTwo, valueThree);
}
If you use BinaryReader and BinaryWriter for saving and sending to other platforms, then BitConverter and the byte-array manipulation can be dropped.
// BinaryWriter.Write have overload for Int32
public static void SaveThreeIntegers(ThreeIntegers value)
{
using(var stream = CreateYourStream())
using (var writer = new BinaryWriter(stream))
{
int reorderedOne = IPAddress.HostToNetworkOrder(value.One);
writer.Write(reorderedOne);
int reorderedTwo = IPAddress.HostToNetworkOrder(value.Two);
writer.Write(reorderedTwo);
int reorderedThree = IPAddress.HostToNetworkOrder(value.Three);
writer.Write(reorderedThree);
}
}
For reading value
public static ThreeIntegers LoadThreeIntegers()
{
using(var stream = CreateYourStream())
using (var reader = new BinaryReader(stream))
{
int rawValueOne = reader.ReadInt32();
int valueOne = IPAddress.NetworkToHostOrder(rawValueOne);
int rawValueTwo = reader.ReadInt32();
int valueTwo = IPAddress.NetworkToHostOrder(rawValueTwo);
int rawValueThree = reader.ReadInt32();
int valueThree = IPAddress.NetworkToHostOrder(rawValueThree);
return new ThreeIntegers(valueOne, valueTwo, valueThree);
}
}
Of course you can refactor the methods above into a cleaner solution, or add them as extension methods for BinaryWriter and BinaryReader.
Yes, you do. When endianness changes, a serialization that simply preserves the in-memory byte order will run into trouble.
Take the int value 385. On a big-endian system its four bytes are stored as
00 00 01 81
Reading those same bytes on a little-endian machine interprets them as
81 01 00 00
which translates back to 2164326400.
If you use the BitConverter class, there is a bool property, BitConverter.IsLittleEndian, describing the endianness of the system. BitConverter can also produce the byte arrays for you.
You will have to decide on one endianness and reverse the byte arrays according to the serializing or deserializing system's endianness.
The description on MSDN is actually quite detailed. There they use Array.Reverse for simplicity. I am not certain that your casting to/from byte in order to do the bit manipulation is in fact the fastest way of converting, but that is easily benchmarked.
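A minimal sketch of that normalization (an illustration, assuming you standardize on big-endian/network order on the wire):

```csharp
using System;

public static class NetOrder
{
    // Serialize an int as big-endian bytes regardless of the local CPU.
    public static byte[] ToNetworkOrder(int value)
    {
        byte[] bytes = BitConverter.GetBytes(value); // CPU order
        if (BitConverter.IsLittleEndian)
            Array.Reverse(bytes); // normalize to big-endian
        return bytes;
    }

    // Read big-endian bytes back into an int on any CPU.
    public static int FromNetworkOrder(byte[] bytes)
    {
        byte[] tmp = (byte[])bytes.Clone(); // don't mutate the caller's array
        if (BitConverter.IsLittleEndian)
            Array.Reverse(tmp);
        return BitConverter.ToInt32(tmp, 0);
    }
}
```

NetOrder.FromNetworkOrder(NetOrder.ToNetworkOrder(385)) round-trips to 385 on any machine.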

Getting upper and lower byte of an integer in C# and putting it as a char array to send to a com port, how?

In C I would do this
int number = 3510;
char upper = number >> 8;
char lower = number && 8;
SendByte(upper);
SendByte(lower);
Where upper and lower would both = 54
In C# I am doing this:
int number = Convert.ToInt16("3510");
byte upper = byte(number >> 8);
byte lower = byte(number & 8);
char upperc = Convert.ToChar(upper);
char lowerc = Convert.ToChar(lower);
data = "GETDM" + upperc + lowerc;
comport.Write(data);
However in the debugger number = 3510, upper = 13 and lower = 0
this makes no sense, if I change the code to >> 6 upper = 54 which is absolutely strange.
Basically I just want to get the upper and lower byte from the 16 bit number, and send it out the com port after "GETDM"
How can I do this? It is so simple in C, but in C# I am completely stumped.
Your masking is incorrect - you should be masking against 255 (0xff) instead of 8. Shifting works in terms of "bits to shift by" whereas bitwise and/or work against the value to mask against... so if you want to only keep the bottom 8 bits, you need a mask which just has the bottom 8 bits set - i.e. 255.
Note that if you're trying to split a number into two bytes, it should really be a short or ushort to start with, not an int (which has four bytes).
ushort number = Convert.ToUInt16("3510");
byte upper = (byte) (number >> 8);
byte lower = (byte) (number & 0xff);
Note that I've used ushort here instead of byte as bitwise arithmetic is easier to think about when you don't need to worry about sign extension. It wouldn't actually matter in this case due to the way the narrowing conversion to byte works, but it's the kind of thing you should be thinking about.
You probably want to and it with 0x00FF
byte lower = Convert.ToByte(number & 0x00FF);
Full example:
ushort number = Convert.ToUInt16("3510");
byte upper = Convert.ToByte(number >> 8);
byte lower = Convert.ToByte(number & 0x00FF);
char upperc = Convert.ToChar(upper);
char lowerc = Convert.ToChar(lower);
data = "GETDM" + upperc + lowerc;
Even though the accepted answer fits the question, I consider it incomplete, for the simple reason that the question says int, not short, in its title, which is misleading in search results. As we know, Int32 in C# has 32 bits and thus 4 bytes. I will post an example here that is useful when a full Int32 is in play. In the case of an Int32 we have:
LowWordLowByte
LowWordHighByte
HighWordLowByte
HighWordHighByte.
And as such, I have created the following method for converting an Int32 value into a little-endian hex string in which every byte is separated from the others by a space. This is useful when you transmit data and want the receiver to process it faster: it can just Split(" ") and get each byte as a standalone hex string.
public static String IntToLittleEndianWhitespacedHexString(int pValue, uint pSize)
{
String result = String.Empty;
pSize = pSize < 4 ? pSize : 4;
byte tmpByte = 0x00;
for (int i = 0; i < pSize; i++)
{
tmpByte = (byte)((pValue >> i * 8) & 0xFF);
result += tmpByte.ToString("X2") + " ";
}
return result.TrimEnd(' ');
}
Usage:
String value1 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x927C, 4);
String value2 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x3FFFF, 4);
String value3 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x927C, 2);
String value4 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x3FFFF, 1);
The result is:
7C 92 00 00
FF FF 03 00
7C 92
FF.
If it is hard to understand the method I created, then the following might be more comprehensible:
public static String IntToLittleEndianWhitespacedHexString(int pValue)
{
String result = String.Empty;
byte lowWordLowByte = (byte)(pValue & 0xFF);
byte lowWordHighByte = (byte)((pValue >> 8) & 0xFF);
byte highWordLowByte = (byte)((pValue >> 16) & 0xFF);
byte highWordHighByte = (byte)((pValue >> 24) & 0xFF);
result = lowWordLowByte.ToString("X2") + " " +
lowWordHighByte.ToString("X2") + " " +
highWordLowByte.ToString("X2") + " " +
highWordHighByte.ToString("X2");
return result;
}
Remarks:
Of course, instead of uint pSize there could be an enum specifying Byte, Word, DoubleWord
Instead of converting to hex string and creating the little endian string, you can convert to chars and do whatever you want to do.
Hope this will help someone!
Shouldn't it be:
byte lower = (byte) ( number & 0xFF );
To be a little more creative
[System.Runtime.InteropServices.StructLayout( System.Runtime.InteropServices.LayoutKind.Explicit )]
public struct IntToBytes {
[System.Runtime.InteropServices.FieldOffset(0)]
public int Int32;
[System.Runtime.InteropServices.FieldOffset(0)]
public byte First;
[System.Runtime.InteropServices.FieldOffset(1)]
public byte Second;
[System.Runtime.InteropServices.FieldOffset(2)]
public byte Third;
[System.Runtime.InteropServices.FieldOffset(3)]
public byte Fourth;
}
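A usage sketch for that union (the struct is repeated here so the snippet is self-contained; which field holds the low byte depends on the CPU's endianness, and the comments assume little-endian):

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
public struct IntToBytes
{
    [FieldOffset(0)] public int Int32;
    [FieldOffset(0)] public byte First;
    [FieldOffset(1)] public byte Second;
    [FieldOffset(2)] public byte Third;
    [FieldOffset(3)] public byte Fourth;
}

class Demo
{
    static void Main()
    {
        var c = new IntToBytes { Int32 = 3510 }; // 3510 == 0x0DB6
        // On a little-endian CPU, First is the low byte and Second the next one:
        Console.WriteLine(c.First);  // 182 (0xB6) on little-endian
        Console.WriteLine(c.Second); // 13  (0x0D) on little-endian
    }
}
```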

How to pass an array of bytes as a Pointer in C#

I have two questions. They both concern a function in C++ which I am trying to translate to C#.
C++ code
void Func_X_2(LPBYTE stream, DWORD key, BYTE keyByte)
{
stream[0] ^= (stream[0] + LOBYTE(LOWORD(key)) + keyByte);
stream[1] ^= (stream[1] + HIBYTE(LOWORD(key)) + keyByte);
stream[2] ^= (stream[2] + LOBYTE(HIWORD(key)) + keyByte);
stream[3] ^= (stream[3] + HIBYTE(HIWORD(key)) + keyByte);
stream[4] ^= (stream[4] + LOBYTE(LOWORD(key)) + keyByte);
stream[5] ^= (stream[5] + HIBYTE(LOWORD(key)) + keyByte);
stream[6] ^= (stream[6] + LOBYTE(HIWORD(key)) + keyByte);
stream[7] ^= (stream[7] + HIBYTE(HIWORD(key)) + keyByte);
}
First question:
DWORD is UInt32 and BYTE is byte, but what is LPBYTE? How do I use it as an array?
Second question:
How do I use LOBYTE, HIBYTE, LOWORD and HIWORD in C#?
EDIT
This is how the function is being called:
C++ code
Func_X_2((LPBYTE)keyArray, dwArgs[14], keyByte);
keyArray is a DWORD (UInt32), dwArgs is an array of DWORDs, and keyByte is a byte.
Thanks in advance.
LPBYTE is a pointer to a byte array. The equivalent in C# would be a variable of type byte[]. So you could translate your function like so:
public static void Func_X_2(byte[] stream, int key, byte keyByte)
{
stream[0] ^= (byte)(stream[0] + BitConverter.GetBytes(LoWord(key))[0] + keyByte);
stream[1] ^= (byte)(stream[1] + BitConverter.GetBytes(LoWord(key))[1] + keyByte);
stream[2] ^= (byte)(stream[2] + BitConverter.GetBytes(HiWord(key))[0] + keyByte);
stream[3] ^= (byte)(stream[3] + BitConverter.GetBytes(HiWord(key))[1] + keyByte);
stream[4] ^= (byte)(stream[4] + BitConverter.GetBytes(LoWord(key))[0] + keyByte);
stream[5] ^= (byte)(stream[5] + BitConverter.GetBytes(LoWord(key))[1] + keyByte);
stream[6] ^= (byte)(stream[6] + BitConverter.GetBytes(HiWord(key))[0] + keyByte);
stream[7] ^= (byte)(stream[7] + BitConverter.GetBytes(HiWord(key))[1] + keyByte);
}
public static int LoWord(int dwValue)
{
return (dwValue & 0xFFFF);
}
public static int HiWord(int dwValue)
{
return (dwValue >> 16) & 0xFFFF;
}
LPBYTE stands for Long Pointer to Byte, so it's effectively a Byte array.
If you have a uint, u (you have to be careful when shifting signed quantities):
LOWORD(u) = (u & 0xFFFF);
HIWORD(u) = (u >> 16);
And assuming only the bottom 16 bits are set (i.e. the top 16 bits are zero):
LOBYTE(b) = (b & 0xFF);
HIBYTE(b) = (b >> 8);
[...] what is LPBYTE? How to use it as an array?
It is a pointer to BYTE: a typedef, usually for unsigned char. You use it as you would use an unsigned char* to point to the first element of an array of unsigned characters. It is defined in windef.h:
typedef unsigned char BYTE;
typedef BYTE far *LPBYTE;
How to use LOBYTE,HIBYTE,LOWORD,HIWORD in C#?
These are macros that fetch parts of a WORD or DWORD. They are very easy to implement (as bit-fiddling operations) and are also defined in windef.h. You can simply take the definitions out and paste them into custom C# functions:
#define LOWORD(l) ((WORD)((DWORD_PTR)(l) & 0xffff))
#define HIWORD(l) ((WORD)((DWORD_PTR)(l) >> 16))
#define LOBYTE(w) ((BYTE)((DWORD_PTR)(w) & 0xff))
#define HIBYTE(w) ((BYTE)((DWORD_PTR)(w) >> 8))
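A direct C# translation of those four macros might look like this (a sketch; unsigned parameter types are used to sidestep sign extension):

```csharp
public static class WinDefMacros
{
    public static ushort LoWord(uint l) => (ushort)(l & 0xFFFF); // LOWORD
    public static ushort HiWord(uint l) => (ushort)(l >> 16);    // HIWORD
    public static byte LoByte(ushort w) => (byte)(w & 0xFF);     // LOBYTE
    public static byte HiByte(ushort w) => (byte)(w >> 8);       // HIBYTE
}
```

For example, LoByte(LoWord(0x12345678)) yields 0x78 and HiByte(HiWord(0x12345678)) yields 0x12.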
You may want to look at this SO post also for bit manipulation in C#.
