Bit manipulation of UInt32 bitfields in C#

I have two enums in C#
public class PlayerAttributes
{
private UInt32 m_AttributeFlagsMask;
private UInt32 m_AttributeFlagsBitmap;
[Flags]
public enum EAttributesFlagsBmp
{
AIn = 0,
BIn = (1 << 1), //2
CIn = (1 << 2), //4
DIn = (1 << 3), //8
EIn = (1 << 4), //16
FIn = (1 << 5), //32
GIn = (1 << 6) //64
}
[Flags]
public enum EAttributeFlagsMask
{
None = 0,
AIn = (1 << 1), //2
BIn = (1 << 2), //4
CIn = (1 << 3), //8
DIn = (1 << 4), //16
EIn = (1 << 5), //32
FIn = (1 << 6) //64
}
public UInt32 AttributeFlagsMask { get { return m_AttributeFlagsMask; } private set { m_AttributeFlagsMask = value; } }
public UInt32 AttributeFlagsBitmap { get { return m_AttributeFlagsBitmap; } private set { m_AttributeFlagsBitmap = value; } }
public bool SetAInAndBIn(bool a_in, bool b_in)
{
UInt32 flag = 0;
if (a_in && !b_in)
{
flag = ((UInt32)PlayerAttributes.EAttributesFlagsBmp.AIn | ~(UInt32)PlayerAttributes.EAttributesFlagsBmp.BIn);
}
else if (b_in && !a_in)
{
flag = (~(UInt32)PlayerAttributes.EAttributesFlagsBmp.AIn | (UInt32)PlayerAttributes.EAttributesFlagsBmp.BIn);
}
AttributeFlagsBitmap = flag;
return true;
}
}
The above code doesn't seem to set the value correctly.
What I want is:
In case 1, AIn should be set and BIn should be unset (all other bits should be unchanged).
In case 2, BIn should be set and AIn should be unset (all other bits should be unchanged).
How do I achieve this?

Not sure if that is what you want, but if the following line
((UInt32)PlayerAttributes.EAttributesFlagsBmp.AIn | ~(UInt32)PlayerAttributes.EAttributesFlagsBmp.BIn)
is expected to set A and unset B while not changing the others, it will not work (since ~BIn has every bit except B set, the OR actually turns on all the other bits). To set A and unset B without changing anything else, you should use:
((UInt32)PlayerAttributes.EAttributesFlagsBmp.AIn & ~(UInt32)PlayerAttributes.EAttributesFlagsBmp.BIn)
Just as with regular booleans, when you negate one side you need to change Or to And (and vice-versa) to keep the same behavior.
EDIT:
The reason the other flags get erased is that you build a value containing only those 2 flags and then overwrite whatever is already in the property when you assign it. To change only those two bits, you must combine the new bits with the value that is already in the property and then assign the result back. Like this:
AttributeFlagsBitmap = (AttributeFlagsBitmap |
(UInt32)PlayerAttributes.EAttributesFlagsBmp.AIn) &
~(UInt32)PlayerAttributes.EAttributesFlagsBmp.BIn;
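Putting it together, a minimal sketch of SetAInAndBIn using that set-then-clear pattern (this assumes AIn and BIn are distinct non-zero bits; in the question's EAttributesFlagsBmp, AIn is 0, so it would also need its own bit, e.g. 1 << 0, for the OR to have any effect):
public bool SetAInAndBIn(bool a_in, bool b_in)
{
    UInt32 bits = AttributeFlagsBitmap;
    if (a_in && !b_in)
    {
        // set A, clear B, leave every other bit untouched
        bits |= (UInt32)EAttributesFlagsBmp.AIn;
        bits &= ~(UInt32)EAttributesFlagsBmp.BIn;
    }
    else if (b_in && !a_in)
    {
        // set B, clear A, leave every other bit untouched
        bits |= (UInt32)EAttributesFlagsBmp.BIn;
        bits &= ~(UInt32)EAttributesFlagsBmp.AIn;
    }
    AttributeFlagsBitmap = bits;
    return true;
}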

Related

Set Bits of a Byte by a ByteValue

I have a Byte scaleValue with the value 3 (0000.0011 binary)
Now I want to set bits 3 and 4 (Scale) of the Config byte (see image) with the help of my scaleValue byte, but it does not work.
Before: 0000.0000 (if configByte has the init value 0)
After: 0001.1000
Here is my code:
configByte = (byte) (configByte | (scaleValue << 3));
Byte Config:
If configByte is the entire 8-bit chunk, and scaleValue is the value (currently in bits 0/1) that you want to inject into bits 3/4, then fundamentally you need:
configByte = (byte)(configByte | (scaleValue << 3));
However, this assumes that:
bits 3/4 in configByte are currently zero
only bits 0/1 in scaleValue are set (or not)
If those two assumptions aren't true, then you need to mask out the issue with ... masks:
configByte = (byte)((configByte & 231) | ((scaleValue & 3) << 3));
The & 231 (0xE7, i.e. everything except bits 3/4) clears bits 3/4 in the old value. The & 3 keeps just bits 0/1 of the new value (before the shift).
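The same clear-then-inject pattern generalizes to any bit position and width. A small sketch (the SetBits helper name and its parameters are made up for illustration, not part of the original answer):
// Writes 'value' into 'width' bits of 'target' starting at bit 'offset',
// leaving all other bits untouched.
static byte SetBits(byte target, byte value, int offset, int width)
{
    int fieldMask = ((1 << width) - 1) << offset;   // width 2, offset 3 -> 0001_1000 (24)
    return (byte)((target & ~fieldMask) | ((value << offset) & fieldMask));
}

// configByte = SetBits(configByte, scaleValue, 3, 2); // equivalent to the masked one-liner above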
You can use these extension methods (note that extension methods have to live in a static class):
public static class ByteExtensions
{
public static bool GetBit(this byte data, byte position)
{
byte mask = (byte)(1 << position);
return (data & mask) != 0;
}
public static byte SetBit(this byte data, byte position, bool value)
{
byte mask = (byte)(1 << position);
if (value)
{
return (byte)(data | mask);
}
else
{
return (byte)(data & (~mask));
}
}
}
static void Main(string[] args)
{
byte data = 0b00000100; // bit 2 is set
if (data.GetBit(2))
{
// reached, because bit 2 is set
}
data = data.SetBit(4, true);  // 0b00010100
data = data.SetBit(2, false); // 0b00010000
}

Convert double to UInt16

I'm making a function that will allow the user to pass a double value, and then return a UInt16.
This is my code:
public static UInt16 Value_To_BatteryVoltage(double value)
{
var ret = ((int)value << 8);
var retMod = (value % (int)value) * 10;
return (UInt16)(ret + retMod);
}
Basically what it does is as follows, function call:
Value_To_BatteryVoltage(25.10)
Will return: 6401
I can check the result by doing:
public static double VoltageLevel(UInt16 value)
{
return ((value & 0xFF00) >> 8) + ((value & 0x00FF) / 10.0);
}
This is working as expected, BUT, if I do:
Value_To_BatteryVoltage(25.11) //notice the 0.11
I get the wrong result, because:
public static UInt16 Value_To_BatteryVoltage(double value)
{
var ret = ((int)value << 8); // returns 6400 OK
var retMod = (value % (int)value) * 10; //returns 0.11 x 10 = 1.1 WRONG!
return (UInt16)(ret + retMod); //returns 6401, because (UInt16)(6400 + 1.1) = 6401, the same as for 25.10, so I've lost precision
}
So the question is, is there some way to do this kind of conversion without losing precision?
If I understand the question, you want to store the characteristic (integer part) in the upper 8 bits of the UInt16 and the mantissa (fractional part) in the lower 8 bits.
This is one way to do it: I treat the double like a string and split it at the decimal point. For example:
public static UInt16 Value_To_BatteryVoltage(double value)
{
string[] number = value.ToString().Split('.'); // note: assumes '.' is the decimal separator (ToString(CultureInfo.InvariantCulture) makes that explicit)
UInt16 c = (UInt16)(UInt16.Parse(number[0]) << 8);
UInt16 m = UInt16.Parse(number[1]);
return (UInt16)(c + m);
}
And here is the output:
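If you would rather avoid string parsing (and its dependence on the current culture's decimal separator), a purely arithmetic sketch is also possible. It assumes the low byte is meant to hold hundredths, so the decoder has to divide by 100.0 instead of 10.0; that change to the encoding is an assumption, not part of the original answer:
public static UInt16 Value_To_BatteryVoltage(double value)
{
    int whole = (int)value;                             // 25
    int frac = (int)Math.Round((value - whole) * 100);  // 0.11 -> 11
    return (UInt16)((whole << 8) | (frac & 0xFF));
}
public static double VoltageLevel(UInt16 value)
{
    return ((value & 0xFF00) >> 8) + ((value & 0x00FF) / 100.0); // note the 100.0
}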

Parse bits in a byte to enum

I'm working on a dll that parses binary data I get from a Home Automation module.
But I need some advice on some code I have.
So I get a message with some bytes, and each bit indicates a certain condition in this case.
In the code I have at the moment, each condition is an enum value; I put the enum values in an array and check whether the corresponding bit is set.
private void ParseZoneConditionFlag1(int Flag1) // Flag1 = hex representation of the byte
{
Zone_Status_ZoneConditionFlagEnum[] FlagArray = new Zone_Status_ZoneConditionFlagEnum[8];
FlagArray[0] = Zone_Status_ZoneConditionFlagEnum.Faulted;
FlagArray[1] = Zone_Status_ZoneConditionFlagEnum.Tampered;
FlagArray[2] = Zone_Status_ZoneConditionFlagEnum.Trouble;
FlagArray[3] = Zone_Status_ZoneConditionFlagEnum.Bypassed;
FlagArray[4] = Zone_Status_ZoneConditionFlagEnum.Inhibited;
FlagArray[5] = Zone_Status_ZoneConditionFlagEnum.Low_Battery;
FlagArray[6] = Zone_Status_ZoneConditionFlagEnum.Loss_Supervision;
FlagArray[7] = Zone_Status_ZoneConditionFlagEnum.Reserved;
base.CheckBitsSet(FlagArray, Flag1, ZoneConditionFlags_List);
}
private void ParseZoneConditionFlag2(int Flag2)
{
Zone_Status_ZoneConditionFlagEnum[] FlagArray = new Zone_Status_ZoneConditionFlagEnum[8];
FlagArray[0] = Zone_Status_ZoneConditionFlagEnum.Alarm_Memory;
FlagArray[1] = Zone_Status_ZoneConditionFlagEnum.Bypass_Memory;
FlagArray[2] = Zone_Status_ZoneConditionFlagEnum.Reserved;
FlagArray[3] = Zone_Status_ZoneConditionFlagEnum.Reserved;
FlagArray[4] = Zone_Status_ZoneConditionFlagEnum.Reserved;
FlagArray[5] = Zone_Status_ZoneConditionFlagEnum.Reserved;
FlagArray[6] = Zone_Status_ZoneConditionFlagEnum.Reserved;
FlagArray[7] = Zone_Status_ZoneConditionFlagEnum.Reserved;
base.CheckBitsSet(FlagArray, Flag2, ZoneConditionFlags_List);
}
And the method were I check the actual bits
protected void CheckBitsSet<T>(T[] ConstantArray, int HexValue, List<T> DestinationList)
{
byte b = (byte) HexValue;
for (int i = 0; i < ConstantArray.Length; i++)
{
if(IsBitSet(b, i))
{
DestinationList.Add(ConstantArray[i]);
}
}
}
public bool IsBitSet(byte b, int pos)
{
return (b & (1 << pos)) != 0;
}
This works, but I wonder if there is a cleaner way to do this.
By cleaner I mean without having to add the right enum values to an array each time.
How about just:
[Flags]
enum MyFlags : short
{
None = 0,
Faulted = 1 << 0,
Tampered = 1 << 1,
Trouble = 1 << 2,
Bypassed = 1 << 3,
Inhibited = 1 << 4,
LowBattery = 1 << 5,
LossOfSupervision = 1 << 6,
AlarmMemory = 1 << 8,
BypassMemory = 1 << 9
}
static bool IsSet(MyFlags value, MyFlags flag)
{
return ((value & flag) == flag);
}
and read the value as a 2-byte value (short, being careful about endianness), and then cast to MyFlags.
To check for any flag, just:
MyFlags value = ...
bool isAlarmMemory = IsSet(value, MyFlags.AlarmMemory);
It gets trickier when you talk about composite flags, i.e.
bool memoryProblem = IsSet(value, MyFlags.AlarmMemory | MyFlags.BypassMemory);
as you need to figure out whether you mean "is any of these flags set?" vs "are all of these flags set?"
It comes down to the test:
return ((value & flag) == flag); // means "are all set"
return ((value & flag) != 0); // means "is any set"
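For example, with only one of the two memory flags present, the two tests disagree (a quick illustration using the enum above):
MyFlags value = MyFlags.AlarmMemory;
MyFlags memoryFlags = MyFlags.AlarmMemory | MyFlags.BypassMemory;

bool allSet = (value & memoryFlags) == memoryFlags; // false: BypassMemory is missing
bool anySet = (value & memoryFlags) != 0;           // true: AlarmMemory is present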
For reading:
// this is just some garbage that I'm pretending is a message from
// your module; I'm assuming the byte numbers in the image are
// zero-based, so the two that we want are data[7] and data[8] (the 6, 3)
byte[] data = { 12, 63, 113, 0, 13, 123, 14, 6, 3, 14, 15 };
// and I'm assuming "byte 7" and "byte 8" (image) are zero-based;
// MyFlags uses byte 7 *first*, so it is little-endian; we can get that
// via:
short flagsRaw = (short)(data[7] | (data[8] << 8));
MyFlags flags = (MyFlags)flagsRaw;
// flags has value Tampered | Trouble | AlarmMemory | BypassMemory,
// which is what we expect for {6,3}
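As a side note (not part of the original answer), on .NET 4 and later the built-in Enum.HasFlag gives the same "are all of these set" semantics as the IsSet helper, at the cost of a little boxing overhead on older runtimes:
MyFlags flags = MyFlags.Tampered | MyFlags.Trouble | MyFlags.AlarmMemory;

bool isAlarmMemory = flags.HasFlag(MyFlags.AlarmMemory);                      // true
bool bothMemory = flags.HasFlag(MyFlags.AlarmMemory | MyFlags.BypassMemory);  // false: BypassMemory is not set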
Use this:
[Flags]
public enum MyEnum
{
Value1 = 1,
Value2 = 2,
Value3 = 4,
Value5 = 8
}
(...)
void Func(int flag)
{
MyEnum @enum = (MyEnum)flag;
// Testing whether a flag is set
if ((@enum & MyEnum.Value1) != 0) // sth
}

How to get two (0~15) numbers as properties with one byte as backing field?

I'm making a tile based 2d platformer and every byte of memory is precious. I have one byte field that can hold values from 0 to 255, but what I need is two properties with values 0~15. How can I turn one byte field into two properties like that?
do you mean just use the lower 4 bits for one value and the upper 4 bits for the other?
To get the two values out of one byte (called packed here, since byte is a reserved word in C#), use:
a = packed & 15;
b = packed / 16;
Setting is just the reverse:
packed = a | b * 16;
Using the shift operator is better, but compiler optimizers usually do this for you nowadays:
packed = a | (b << 4);
To piggy back off of sradforth's answer, and to answer your question about properties:
private byte _myByte;
public byte LowerHalf
{
get
{
return (byte)(_myByte & 15);
}
set
{
_myByte = (byte)(value | UpperHalf * 16);
}
}
public byte UpperHalf
{
get
{
return (byte)(_myByte / 16);
}
set
{
_myByte = (byte)(LowerHalf | value * 16);
}
}
Below are some properties and a backing store; I've tried to write them in a way that makes the logic easy to follow.
private byte hiAndLo = 0;
private const byte LoMask = 15; // 00001111
private const byte HiMask = 240; // 11110000
public byte Lo
{
get
{
// ----&&&&
return (byte)(this.hiAndLo & LoMask);
}
set
{
if (value > LoMask)
{
// Values over 15 are too high.
throw new OverflowException();
}
// &&&&0000
// 0000----
// ||||||||
this.hiAndLo = (byte)((this.hiAndLo & HiMask) | value);
}
}
public byte Hi
{
get
{
// &&&&XXXX >> 0000&&&&
return (byte)((this.hiAndLo & HiMask) >> 4);
}
set
{
if (value > LoMask)
{
// Values over 15 are too high.
throw new OverflowException();
}
// -------- << ----0000
// XXXX&&&&
// ||||||||
this.hiAndLo = (byte)((hiAndLo & LoMask) | (value << 4 ));
}
}
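A quick usage sketch of the Hi/Lo properties above (the containing TilePair class name is made up for illustration):
var packed = new TilePair();   // hypothetical class holding hiAndLo plus the Hi/Lo properties
packed.Lo = 7;                 // low nibble: 0111
packed.Hi = 12;                // high nibble: 1100 after the shift

Console.WriteLine(packed.Lo);  // 7
Console.WriteLine(packed.Hi);  // 12
// Both values now share a single byte of storage: 1100_0111.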

Win api in C#. Get Hi and low word from IntPtr

I am trying to process a WM_MOUSEMOVE message in C#.
What is the proper way to get an X and Y coordinate from lParam which is a type of IntPtr?
Try:
(note that this was the initial version, read below for the final version)
IntPtr xy = value;
int x = unchecked((short)xy);
int y = unchecked((short)((uint)xy >> 16));
The unchecked normally isn't necessary (because "default" C# projects compile unchecked).
Consider that these are the definitions of the used macros:
#define LOWORD(l) ((WORD)(((DWORD_PTR)(l)) & 0xffff))
#define HIWORD(l) ((WORD)((((DWORD_PTR)(l)) >> 16) & 0xffff))
#define GET_X_LPARAM(lp) ((int)(short)LOWORD(lp))
#define GET_Y_LPARAM(lp) ((int)(short)HIWORD(lp))
Where WORD == ushort, DWORD == uint. I'm cutting some ushort->short conversions.
Addendum:
One and a half years later, having experienced the "vagaries" of 64-bit .NET, I concur with Celess (though note that 99% of Windows messages still carry 32-bit payloads for compatibility reasons, so I don't think the problem is really big right now; it's more about the future, and because if you are going to do something, you should do it correctly).
The only thing I would make different is this:
IntPtr xy = value;
int x = unchecked((short)(long)xy);
int y = unchecked((short)((long)xy >> 16));
Instead of doing the check "is the IntPtr 4 or 8 bytes long?", I take the worst case (8 bytes) and cast xy to a long. With a little luck the double cast (to long and then to short/uint) will be optimized by the compiler. In the end, the explicit conversion of IntPtr to int is a red herring: if you use it you are putting yourself at risk in the future. You should always use the long conversion and then use the result directly or re-cast it to what you need, showing future programmers that you knew what you were doing.
A test example: http://ideone.com/a4oGW2 (sadly 32-bit only, but if you have a 64-bit machine you can test the same code yourself).
Correct for both 32 and 64-bit:
Point GetPoint(IntPtr _xy)
{
uint xy = unchecked(IntPtr.Size == 8 ? (uint)_xy.ToInt64() : (uint)_xy.ToInt32());
int x = unchecked((short)xy);
int y = unchecked((short)(xy >> 16));
return new Point(x, y);
}
- or -
int GetIntUnchecked(IntPtr value)
{
return IntPtr.Size == 8 ? unchecked((int)value.ToInt64()) : value.ToInt32();
}
int Low16(IntPtr value)
{
return unchecked((short)GetIntUnchecked(value));
}
int High16(IntPtr value)
{
return unchecked((short)(((uint)GetIntUnchecked(value)) >> 16));
}
These also work:
int Low16(IntPtr value)
{
return unchecked((short)(uint)value); // classic unchecked cast to uint
}
int High16(IntPtr value)
{
return unchecked((short)((uint)value >> 16));
}
- or -
int Low16(IntPtr value)
{
return unchecked((short)(long)value); // presumption about internals
} // is what framework lib uses
int High16(IntPtr value)
{
return unchecked((short)((long)value >> 16));
}
Going the other way
public static IntPtr GetLParam(Point point)
{
return (IntPtr)((point.Y << 16) | (point.X & 0xffff));
} // mask ~= unchecked((int)(short)x)
- or -
public static IntPtr MakeLParam(int low, int high)
{
return (IntPtr)((high << 16) | (low & 0xffff));
} // (IntPtr)x is same as 'new IntPtr(x)'
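A quick round-trip check of the helpers above (a usage sketch, not part of the original answer):
IntPtr lParam = MakeLParam(100, 200);
int x = Low16(lParam);        // 100
int y = High16(lParam);       // 200
Point pt = GetPoint(lParam);  // {X=100, Y=200}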
The accepted answer is a good translation of the C definition. If we were dealing with just the raw 'void*' directly, it would be mostly OK. However, when using 'IntPtr' in a .NET 64-bit execution environment, 'unchecked' will not stop conversion-overflow exceptions from being thrown from inside IntPtr: the unchecked block does not affect conversions that happen inside IntPtr functions and operators. The accepted answer currently states that the use of 'unchecked' is not necessary. However, 'unchecked' is absolutely necessary, as is always the case when casting to negative values from a larger type.
On 64-bit, from the accepted answer:
var xy = new IntPtr(0x0FFFFFFFFFFFFFFF);
int x = unchecked((short)xy); // <-- throws
int y = unchecked((short)((uint)xy >> 16)); // gets lucky, 'uint' implicit 'long'
y = unchecked((short)((int)xy >> 16)); // <-- throws
xy = new IntPtr(0x00000000FFFF0000); // 0, -1
x = unchecked((short)xy); // <-- throws
y = unchecked((short)((uint)xy >> 16)); // still lucky
y = (short)((uint)xy >> 16); // <-- throws (short), no longer lucky
On 64-bit, using extrapolated version of DmitryG's:
var ptr = new IntPtr(0x0FFFFFFFFFFFFFFF);
var xy = IntPtr.Size == 8 ? (int)ptr.ToInt64() : ptr.ToInt32(); // <-- throws (int)
int x = unchecked((short)xy); // fine, if gets this far
int y = unchecked((short)((uint)xy >> 16)); // fine, if gets this far
y = unchecked((short)(xy >> 16)); // also fine, if gets this far
ptr = new IntPtr(0x00000000FFFF0000); // 0, -1
xy = IntPtr.Size == 8 ? (int)ptr.ToInt64() : ptr.ToInt32(); // <-- throws (int)
On performance
return IntPtr.Size == 8 ? unchecked((int)value.ToInt64()) : value.ToInt32();
The IntPtr.Size property returns a constant, as a compile-time literal, that is capable of being inlined across assemblies. It is thus possible for the JIT to have nearly all of this optimized out. You could also do:
return unchecked((int)value.ToInt64());
- or -
return unchecked((int)(long)value);
- or -
return unchecked((uint)value); // traditional
and all 3 of these will always call the equivalent of IntPtr.ToInt64(). ToInt64() and 'operator long' are also capable of being inlined, but are less likely to be: there is much more code in the 32-bit version than in the Size-constant check. I would submit that the solution at the top is maybe more semantically correct. It's also important to be aware of sign-extension artifacts, which fill all 64 bits regardless on something like (long)int_val; I've pretty much glossed over that here, but it may additionally affect inlining on 32-bit.
Usage
if (Low16(wParam) == NativeMethods.WM_CREATE) { }
var x = Low16(lParam);
var point = GetPoint(lParam);
A 'safe' IntPtr mockup is shown below for future travelers.
Run it without setting the WIN32 define on a 32-bit machine to get a solid simulation of the 64-bit IntPtr behavior.
public struct IntPtrMock
{
#if WIN32
int m_value;
#else
long m_value;
#endif
int IntPtr_ToInt32() {
#if WIN32
return (int)m_value;
#else
long l = m_value;
return checked((int)l);
#endif
}
public static explicit operator int(IntPtrMock value) { //(short) resolves here
#if WIN32
return (int)value.m_value;
#else
long l = value.m_value;
return checked((int)l); // throws here if any high 32 bits
#endif // check forces sign stay signed
}
public static explicit operator long(IntPtrMock value) { //(uint) resolves here
#if WIN32
return (long)(int)value.m_value;
#else
return (long)value.m_value;
#endif
}
public int ToInt32() {
#if WIN32
return (int)m_value;
#else
long l = m_value;
return checked((int)l); // throws here if any high 32 bits
#endif // check forces sign stay signed
}
public long ToInt64() {
#if WIN32
return (long)(int)m_value;
#else
return (long)m_value;
#endif
}
public IntPtrMock(long value) {
#if WIN32
m_value = checked((int)value);
#else
m_value = value;
#endif
}
}
public static IntPtr MAKELPARAM(int low, int high)
{
return (IntPtr)((high << 16) | (low & 0xffff));
}
public Main()
{
var xy = new IntPtrMock(0x0FFFFFFFFFFFFFFF); // simulate 64-bit, overflow smaller
int x = unchecked((short)xy); // <-- throws
int y = unchecked((short)((uint)xy >> 16)); // got lucky, 'uint' implicit 'long'
y = unchecked((short)((int)xy >> 16)); // <-- throws
int xy2 = IntPtr.Size == 8 ? (int)xy.ToInt64() : xy.ToInt32(); // <-- throws
int xy3 = unchecked(IntPtr.Size == 8 ? (int)xy.ToInt64() : xy.ToInt32()); //ok
// proper 32-bit lParam, overflow signed
var xy4 = new IntPtrMock(0x00000000FFFFFFFF); // x = -1, y = -1
int x2 = unchecked((short)xy4); // <-- throws
int xy5 = IntPtr.Size == 8 ? (int)xy4.ToInt64() : xy4.ToInt32(); // <-- throws
var xy6 = new IntPtrMock(0x00000000FFFF0000); // x = 0, y = -1
int x3 = unchecked((short)xy6); // <-- throws
int xy7 = IntPtr.Size == 8 ? (int)xy6.ToInt64() : xy6.ToInt32(); // <-- throws
var xy8 = MAKELPARAM(-1, -1); // WinForms macro
int x4 = unchecked((short)xy8); // <-- throws
int xy9 = IntPtr.Size == 8 ? (int)xy8.ToInt64() : xy8.ToInt32(); // <-- throws
}
Usually, for low-level mouse processing I have used the following helper (it also takes into account that the IntPtr size depends on x86/x64):
//...
Point point = WinAPIHelper.GetPoint(msg.LParam);
//...
static class WinAPIHelper {
public static Point GetPoint(IntPtr lParam) {
return new Point(GetInt(lParam));
}
public static MouseButtons GetButtons(IntPtr wParam) {
MouseButtons buttons = MouseButtons.None;
int btns = GetInt(wParam);
if((btns & MK_LBUTTON) != 0) buttons |= MouseButtons.Left;
if((btns & MK_RBUTTON) != 0) buttons |= MouseButtons.Right;
return buttons;
}
static int GetInt(IntPtr ptr) {
return IntPtr.Size == 8 ? unchecked((int)ptr.ToInt64()) : ptr.ToInt32();
}
const int MK_LBUTTON = 1;
const int MK_RBUTTON = 2;
}
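For context, a minimal sketch of how such a helper might be called from a WinForms WndProc override when handling WM_MOUSEMOVE (the surrounding form class is assumed; 0x0200 is the documented WM_MOUSEMOVE value):
protected override void WndProc(ref Message m)
{
    const int WM_MOUSEMOVE = 0x0200;
    if (m.Msg == WM_MOUSEMOVE)
    {
        // lParam packs the client-area X in the low word and Y in the high word
        Point point = WinAPIHelper.GetPoint(m.LParam);
        MouseButtons buttons = WinAPIHelper.GetButtons(m.WParam);
        // ... react to the move ...
    }
    base.WndProc(ref m);
}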
