I have a byte scaleValue with the value 3 (0000.0011 in binary).
Now I want to set bits 3 and 4 (Scale) of the byte Config (see image) with the help of my byte scaleValue, but it does not work.
Before: 0000.0000 (if configByte has the initial value 0)
After: 0001.1000
Here is my code:
configByte = (byte) (configByte | (scaleValue << 3));
Byte Config: (image showing the bit layout of the Config byte; bits 3 and 4 are the Scale field)
If configByte is the entire 8-bit chunk, and scaleValue is the value (currently in bits 0/1) that you want to inject into bits 3/4, then fundamentally you need:
configByte = (byte)(configByte | (scaleValue << 3));
However, this assumes that:
bits 3/4 in configByte are currently zero
only bits 0/1 in scaleValue are set (or not)
If those two assumptions aren't true, then you need to mask out the unwanted bits:
configByte = (byte)((configByte & 231) | ((scaleValue & 3) << 3));
The & 231 (0b1110_0111) clears bits 3/4 in the old value; the & 3 keeps just bits 0/1 in the new value (before the shift).
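To make this concrete, here is a small worked sketch (the starting values are assumed for illustration):

byte configByte = 0b0000_0001; // some unrelated bit already set
byte scaleValue = 0b0000_0011; // scale value 3 in bits 0/1
configByte = (byte)((configByte & 0b1110_0111) | ((scaleValue & 0b0000_0011) << 3));
// configByte is now 0b0001_1001: bits 3/4 hold the scale
// and the unrelated bit 0 is untouched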
You can use these extension methods (note that extension methods must be declared in a static class to compile):

public static class ByteExtensions
{
    public static bool GetBit(this byte data, byte position)
    {
        byte mask = (byte)(1 << position);
        return (data & mask) != 0;
    }

    public static byte SetBit(this byte data, byte position, bool value)
    {
        byte mask = (byte)(1 << position);
        if (value)
        {
            return (byte)(data | mask);
        }
        else
        {
            return (byte)(data & ~mask);
        }
    }
}
static void Main(string[] args)
{
    byte data = 0b0000_0100; // bit 2 is set
    if (data.GetBit(2))
    {
        // reached, because bit 2 is set
    }
    data = data.SetBit(4, true);  // data is now 0b0001_0100
    data = data.SetBit(2, false); // data is now 0b0001_0000
}
I'm decoding a .BMP file, and I'm at a point where I need to handle 16-bit colors. The entire codebase uses 32-bit colors (R, G, B, A), so I need to convert the color to a 24-bit RGB value (one byte for each color).
Each component of the color is 5-bit as per the specification (1 bit is wasted). My code is as follows:
ushort color = BitConverter.ToUInt16(data, 54 + i);
byte blue = (byte)((color & 0b0_00000_00000_11111) / 31f * 255);
byte green = (byte)(((color & 0b0_00000_11111_00000) >> 5) / 31f * 255);
byte red = (byte)(((color & 0b0_11111_00000_00000) >> 10) / 31f * 255);
However, this doesn't seem particularly efficient. I tried doing color << (8 - 5), which makes the process much faster and avoids floating-point conversions, but it isn't accurate - a value of 31 (11111) converts to 248. Is there a way to achieve this with some other bit-manipulation hack, or am I forced to convert each number to a float just to change the color space?
Not only can the floating-point conversions be avoided, but also the multiplications and divisions. From my implementation:
internal struct Color16Rgb555
{
private const ushort redMask   = 0b01111100_00000000;
private const ushort greenMask = 0b00000011_11100000;
private const ushort blueMask  = 0b00000000_00011111;
private ushort _value;
internal Color16Rgb555(ushort value) => _value = value;
internal byte R => (byte)(((_value & redMask) >> 7) | ((_value & redMask) >> 12));
internal byte G => (byte)(((_value & greenMask) >> 2) | ((_value & greenMask) >> 7));
internal byte B => (byte)(((_value & blueMask) << 3) | ((_value & blueMask) >> 2));
}
Usage:
var color = new Color16Rgb555(BitConverter.ToUInt16(data, 54 + i));
byte blue = color.B;
byte green = color.G;
byte red = color.R;
It produces 255 for 31 because it fills the remaining 3 bits with the 3 most significant bits of the actual 5-bit value.
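As a quick check of the bit-replication trick at the extremes and a midpoint:

// 5-bit 31: (0b11111 << 3) | (0b11111 >> 2) = 0b11111000 | 0b00000111 = 255
// 5-bit 0:  stays 0
// 5-bit 16: (0b10000 << 3) | (0b10000 >> 2) = 0b10000000 | 0b00000100 = 132 (vs. 16 / 31f * 255 ≈ 131.6)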
But assuming your data is a byte array, you have an even more convenient option if you use my drawing libraries:
// to interpret your data array as 16BPP pixels with RGB555 format:
var my16bppBitmap = BitmapDataFactory.CreateBitmapData(
data, // your back buffer
new Size(pixelWidth, pixelHeight), // size in pixels
stride, // the size of one row in bytes
KnownPixelFormat.Format16bppRgb555);
// now you can get/set pixels normally
Color somePixel = my16bppBitmap.GetPixel(0, 0);
// For better performance obtain a row first.
var row = my16bppBitmap[0]; // or FirstRow (+MoveNextRow if you wish)
Color32 asColor32 = row[0]; // accessing pixels regardless of PixelFormat
ushort asUInt16 = row.ReadRaw<ushort>(0); // if you know that it's a 16bpp format
I'm making a function that will allow the user to pass a double value, and then return a UInt16.
This is my code:
public static UInt16 Value_To_BatteryVoltage(double value)
{
var ret = ((int)value << 8);
var retMod = (value % (int)value) * 10;
return (UInt16)(ret + retMod);
}
Basically it works as follows. The function call:
Value_To_BatteryVoltage(25.10)
will return 6401.
I can check the result by doing:
public static double VoltageLevel(UInt16 value)
{
return ((value & 0xFF00) >> 8) + ((value & 0x00FF) / 10.0);
}
This is working as expected, BUT, if I do:
Value_To_BatteryVoltage(25.11) //notice the 0.11
I get the wrong result, because:
public static UInt16 Value_To_BatteryVoltage(double value)
{
var ret = ((int)value << 8); // returns 6400 OK
var retMod = (value % (int)value) * 10; //returns 0.11 x 10 = 1.1 WRONG!
return (UInt16)(ret + retMod); //returns 6400, because (UInt16)(6400 + 1.1) = 6401 same as 25.10 so I've lost precision
}
So the question is, is there some way to do this kind of conversion without losing precision?
If I understand the question, you want to store the characteristic (integer part) in the upper 8 bits of the UInt16 and the mantissa (fractional part) in the lower 8 bits.
This is one way to do it: treat the double like a string and split it at the decimal point. For example:
public static UInt16 Value_To_BatteryVoltage(double value)
{
    // Note: ToString() uses the current culture's decimal separator; this assumes '.'
    string[] number = value.ToString().Split('.');
    UInt16 c = (UInt16)(UInt16.Parse(number[0]) << 8); // integer part -> upper byte
    UInt16 m = UInt16.Parse(number[1]);                // fractional digits -> lower byte
    return (UInt16)(c + m);
}
I'm making a tile-based 2D platformer and every byte of memory is precious. I have one byte field that can hold values from 0 to 255, but what I need is two properties with values 0-15. How can I turn one byte field into two properties like that?
Do you mean just use the lower 4 bits for one value and the upper 4 bits for the other?

To get two values from one byte (byte is a C# keyword, so the field is called packed here), use:

a = (byte)(packed & 15); // lower 4 bits
b = (byte)(packed / 16); // upper 4 bits

Setting is just the reverse:

packed = (byte)(a | b * 16);

Using the shift operator is better, but the compiler optimizers usually do this for you nowadays:

packed = (byte)(a | (b << 4));
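For example, packing a = 5 and b = 9:

packed = (byte)(5 | (9 << 4)); // 0b1001_0101 = 149
a = (byte)(packed & 15);       // 5
b = (byte)(packed >> 4);       // 9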
To piggyback off of sradforth's answer, and to answer your question about properties:
private byte _myByte;

public byte LowerHalf
{
    get
    {
        return (byte)(_myByte & 15); // low nibble
    }
    set
    {
        // assumes value is 0-15; keeps the existing upper half
        _myByte = (byte)(value | UpperHalf * 16);
    }
}

public byte UpperHalf
{
    get
    {
        return (byte)(_myByte / 16); // high nibble
    }
    set
    {
        // assumes value is 0-15; keeps the existing lower half
        _myByte = (byte)(LowerHalf | value * 16);
    }
}
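A quick usage sketch, assuming the properties above live on some class (the Tile name here is made up):

var tile = new Tile(); // hypothetical class containing _myByte and the two properties
tile.UpperHalf = 9;
tile.LowerHalf = 5;
// _myByte is now 0b1001_0101 (149): two 0-15 values packed into a single byte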
Below are some properties and some backing store, I've tried to write them in a way that makes the logic easy to follow.
private byte hiAndLo = 0;
private const byte LoMask = 15; // 00001111
private const byte HiMask = 240; // 11110000
public byte Lo
{
get
{
// ----&&&&
return (byte)(this.hiAndLo & LoMask);
}
set
{
if (value > LoMask)
{
// Values over 15 are too high.
throw new OverflowException();
}
// &&&&0000
// 0000----
// ||||||||
this.hiAndLo = (byte)((this.hiAndLo & HiMask) | value);
}
}
public byte Hi
{
get
{
// &&&&XXXX >> 0000&&&&
return (byte)((this.hiAndLo & HiMask) >> 4);
}
set
{
if (value > LoMask)
{
// Values over 15 are too high.
throw new OverflowException();
}
// -------- << ----0000
// XXXX&&&&
// ||||||||
this.hiAndLo = (byte)((this.hiAndLo & LoMask) | (value << 4));
}
}
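Usage looks like this (again assuming a hypothetical containing class), including the range check:

var reg = new PackedNibbles(); // hypothetical class containing hiAndLo and the properties
reg.Hi = 12;
reg.Lo = 7;
// hiAndLo == 0b1100_0111 (199)
reg.Lo = 16; // throws OverflowException: 16 does not fit in 4 bits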
In C I would do this
int number = 3510;
char upper = number >> 8;
char lower = number && 8;
SendByte(upper);
SendByte(lower);
Where upper and lower would both = 54
In C# I am doing this:
int number = Convert.ToInt16("3510");
byte upper = byte(number >> 8);
byte lower = byte(number & 8);
char upperc = Convert.ToChar(upper);
char lowerc = Convert.ToChar(lower);
data = "GETDM" + upperc + lowerc;
comport.Write(data);
However in the debugger number = 3510, upper = 13 and lower = 0
This makes no sense. If I change the code to >> 6, upper = 54, which is absolutely strange.
Basically I just want to get the upper and lower byte from the 16 bit number, and send it out the com port after "GETDM"
How can I do this? It is so simple in C, but in C# I am completely stumped.
Your masking is incorrect - you should be masking against 255 (0xff) instead of 8. Shifting works in terms of "bits to shift by" whereas bitwise and/or work against the value to mask against... so if you want to only keep the bottom 8 bits, you need a mask which just has the bottom 8 bits set - i.e. 255.
Note that if you're trying to split a number into two bytes, it should really be a short or ushort to start with, not an int (which has four bytes).
ushort number = Convert.ToUInt16("3510");
byte upper = (byte) (number >> 8);
byte lower = (byte) (number & 0xff);
Note that I've used ushort here instead of byte as bitwise arithmetic is easier to think about when you don't need to worry about sign extension. It wouldn't actually matter in this case due to the way the narrowing conversion to byte works, but it's the kind of thing you should be thinking about.
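A quick check with the value from the question:

ushort number = 3510;               // 0x0DB6
byte upper = (byte)(number >> 8);   // 0x0D = 13
byte lower = (byte)(number & 0xff); // 0xB6 = 182
// reassembling: (ushort)((upper << 8) | lower) == 3510 again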
You probably want to and it with 0x00FF
byte lower = Convert.ToByte(number & 0x00FF);
Full example:
ushort number = Convert.ToUInt16("3510");
byte upper = Convert.ToByte(number >> 8);
byte lower = Convert.ToByte(number & 0x00FF);
char upperc = Convert.ToChar(upper);
char lowerc = Convert.ToChar(lower);
data = "GETDM" + upperc + lowerc;
Even though the accepted answer fits the question, I consider it incomplete: the question says int, not short, in its header, which is misleading in search results, and as we know, Int32 in C# has 32 bits and thus 4 bytes. I will post here an example that is useful in the case of an Int32. For an Int32 we have:
LowWordLowByte
LowWordHighByte
HighWordLowByte
HighWordHighByte.
And as such, I have created the following method for converting an Int32 value into a little-endian hex string, in which every byte is separated from the others by a whitespace. This is useful when you transmit data and want the receiver to process it faster: the receiver can just Split(" ") and get the bytes represented as standalone hex strings.
public static String IntToLittleEndianWhitespacedHexString(int pValue, uint pSize)
{
String result = String.Empty;
pSize = pSize < 4 ? pSize : 4;
byte tmpByte = 0x00;
for (int i = 0; i < pSize; i++)
{
tmpByte = (byte)((pValue >> i * 8) & 0xFF);
result += tmpByte.ToString("X2") + " ";
}
return result.TrimEnd(' ');
}
Usage:
String value1 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x927C, 4);
String value2 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x3FFFF, 4);
String value3 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x927C, 2);
String value4 = ByteArrayUtils.IntToLittleEndianWhitespacedHexString(0x3FFFF, 1);
The result is:
7C 92 00 00
FF FF 03 00
7C 92
FF
If the method I created is hard to understand, then the following might be a more comprehensible one:
public static String IntToLittleEndianWhitespacedHexString(int pValue)
{
String result = String.Empty;
byte lowWordLowByte = (byte)(pValue & 0xFF);
byte lowWordHighByte = (byte)((pValue >> 8) & 0xFF);
byte highWordLowByte = (byte)((pValue >> 16) & 0xFF);
byte highWordHighByte = (byte)((pValue >> 24) & 0xFF);
result = lowWordLowByte.ToString("X2") + " " +
lowWordHighByte.ToString("X2") + " " +
highWordLowByte.ToString("X2") + " " +
highWordHighByte.ToString("X2");
return result;
}
Remarks:
Of course, instead of uint pSize there could be an enum specifying Byte, Word, or DoubleWord.
Instead of converting to a hex string and building the little-endian string, you can convert to chars and do whatever you want to do.
Hope this will help someone!
Shouldn't it be:
byte lower = (byte) ( number & 0xFF );
To be a little more creative, you can use an explicit-layout struct (effectively a C-style union):
[System.Runtime.InteropServices.StructLayout( System.Runtime.InteropServices.LayoutKind.Explicit )]
public struct IntToBytes {
[System.Runtime.InteropServices.FieldOffset(0)]
public int Int32;
[System.Runtime.InteropServices.FieldOffset(0)]
public byte First;
[System.Runtime.InteropServices.FieldOffset(1)]
public byte Second;
[System.Runtime.InteropServices.FieldOffset(2)]
public byte Third;
[System.Runtime.InteropServices.FieldOffset(3)]
public byte Fourth;
}
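A usage sketch for the explicit-layout struct; note that which field holds the least significant byte depends on the machine's endianness (on a little-endian PC, First is the low byte):

var c = new IntToBytes { Int32 = 3510 }; // 0x00000DB6
byte lower = c.First;  // 0xB6 on a little-endian machine
byte upper = c.Second; // 0x0D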
I was looking for a way to convert IEEE floating point numbers to the IBM floating point format for an old system we are using.
Is there a general formula we can use in C# to this end?
Use:
// https://en.wikipedia.org/wiki/IBM_hexadecimal_floating-point
//
// float2ibm(-118.625F) == 0xC276A000
// 1 100 0010 0111 0110 1010 0000 0000 0000
//
// IBM/370 single precision, 4 bytes
// xxxx.xxxx xxxx.xxxx xxxx.xxxx xxxx.xxxx
// s|-exp--| |--------fraction-----------|
// (7) (24)
//
// value = (-1)**s * 16**(e - 64) * .f range = 5E-79 ... 7E+75
//
static int float2ibm(float fromFormat)
{
byte[] bytes = BitConverter.GetBytes(fromFormat);
int fconv = (bytes[3] << 24) | (bytes[2] << 16) | (bytes[1] << 8)| bytes[0];
if (fconv == 0)
return 0;
int fmant = (0x007fffff & fconv) | 0x00800000;
int t = (int)((0x7f800000 & fconv) >> 23) - 126;
while (0 != (t & 0x3)) {
++t;
fmant >>= 1;
}
fconv = (int)(0x80000000 & fconv) | (((t >> 2) + 64) << 24) | fmant;
return fconv; // Big-endian order
}
I adapted this from a piece of code with the signature static void float_to_ibm(int from[], int to[], int n, int endian). The code above runs correctly on a PC: from is a little-endian IEEE float, and the return value is a big-endian IBM float, stored in an int.
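For example, round-tripping the value from the comment block above:

int ibmBits = float2ibm(-118.625F);
Console.WriteLine(ibmBits.ToString("X8")); // prints C276A000, matching the documented example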
An obvious approach would be to use a textual representation of the number as the interchange format.
I recently had to convert one float format to another. It looks like the XDR format uses an odd format for its floats, so when converting from XDR to standard floats, this code did it.
#include <rpc/rpc.h>
#include <stdio.h>   /* fprintf */
#include <stdlib.h>  /* exit */
#include <stdbool.h> /* bool */
// Read in an XDR float array, copy to a standard float array.
// The 'out' array needs to be allocated before the function call.
bool convertFromXdrFloatArray(float *in, float *out, long size)
{
XDR xdrs;
xdrmem_create(&xdrs, (char *)in, size*sizeof(float), XDR_DECODE);
for(int i = 0; i < size; i++)
{
if(!xdr_float(&xdrs, out++)) {
fprintf(stderr, "%s:%d:ERROR:xdr_float\n", __FILE__, __LINE__);
exit(1);
}
}
xdr_destroy(&xdrs);
return true;
}
Using speeding's answer, I added the following that may be useful in some cases:
/// <summary>
/// Converts an IEEE floating number to its string representation (4 or 8 ASCII codes).
/// It is useful for SAS XPORT files format.
/// </summary>
/// <param name="from_">IEEE number</param>
/// <param name="padTo8_">When true, the output is 8 characters rather than 4</param>
/// <returns>Printable string according to the hardware's endianness</returns>
public static string Float2IbmAsAsciiCodes(float from_, bool padTo8_ = true)
{
StringBuilder sb = new StringBuilder();
string s;
byte[] bytes = BitConverter.GetBytes(Float2Ibm(from_)); // Float2Ibm is the float2ibm method above; big-endian order
if (BitConverter.IsLittleEndian)
{
// Revert bytes order
for (int i = 3; i > -1; i--)
sb.Append(Convert.ToChar(bytes[i]));
s = sb.ToString();
if (padTo8_)
s = s.PadRight(8, '\0');
return s;
}
else
{
for (int i = 0; i < 4; i++) // the converted value has 4 bytes
sb.Append(Convert.ToChar(bytes[i]));
s = sb.ToString();
if (padTo8_)
s = s.PadRight(8, '\0');
return s;
}
}