Hex number operations in C#

I am making an application in C# and I have hex numbers such as 0x0FF8, 0xFFFA, etc.
I only want the 12 rightmost bits. Suppose I have the number 0x0FF8: I want to operate on just FF8 (12 bits), and this is a signed number, so its decimal value is -8.
In my application I first have to find out whether the number is negative, and then get its value.
I am not sure how to do this efficiently in C#, and it has to be very fast.
The number representation (0x0FF8 = -8) is explained here: http://www.swarthmore.edu/NatSci/echeeve1/Ref/BinaryMath/NumSys.html

To get only the rightmost twelve bits you could do:
var right12 = 0x0FFF & yourNumber;
To find out whether it is negative or positive:
var positive = yourNumber >= 0;
var absoluteValue = Math.Abs(yourNumber); // assuming yourNumber is an Int32
var low12 = 0xFFF & absoluteValue;
This performs a bitwise AND against a bit mask for the twelve bits you want to keep.

To check whether a signed integer value is negative, you have to check its leftmost bit. If the bit is set, the value is negative.
However, you only have 12 bits, while an int has 32 bits. So when you put the 12 bits into an int, the other 20 bits are zero (not set). The leftmost bit (#31) is therefore not set, and the int value is not treated as negative.
You have to check bit #11 and set the other 20 bits if it is set:
int Value = 0x0FF8;
// Check bit #11
if ((Value & 0x0800) != 0)
{
    // Set the other 20 bits to make the int value negative
    Value |= unchecked((int)0xFFFFF000);
}
You can also do the same thing by using short instead of int. A short only has 16 bits, so:
short Value = 0x0FF8;
// Check bit #11
if ((Value & 0x0800) != 0)
{
    // Set the other 4 bits to make the short value negative
    Value |= unchecked((short)0xF000);
}
The int version is probably the best one to use to avoid casts in the code.
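Putting the two steps together, a minimal sketch (the helper name is just for illustration) that masks off the low 12 bits and sign-extends them:
// Interpret the low 12 bits of 'raw' as a two's complement number.
static int SignExtend12(int raw)
{
    int value = raw & 0x0FFF;                 // keep only the low 12 bits
    if ((value & 0x0800) != 0)                // bit #11 set -> negative
        value |= unchecked((int)0xFFFFF000);  // set the upper 20 bits
    return value;
}
// e.g. SignExtend12(0x0FF8) == -8, so it is negative and Math.Abs gives 8.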

Related

display the 16 bits for the integer

I need your help; here is part of my code, but there is a problem that I can't solve.
Please help me.
This is what I have done so far; I can get a positive integer in 16-bit binary form:
Console.WriteLine("Enter an integer : ");
string inputnumber = Console.ReadLine();
int num = int.Parse(inputnumber);
string value = Convert.ToString(num, 2).PadLeft(16, '0');
Console.WriteLine("The bits are : {0}", value);
Console.ReadKey();
The issue is how to get a negative integer in 16-bit binary form.
For example, when I input 5, I get: 0000000000000101
and for -5 I need: 1111111111111011
In C#, int is a 32-bit type. You should use short (a 16-bit type) instead. For positive numbers up to 32767 the lower 16 bits of an int and a short are the same, but for negative numbers they differ.
short num = short.Parse(inputnumber);
This is correct behavior, since that is how the value is stored in the computer: the so-called two's complement representation, where the most significant bit (the leftmost) tells you that the number is negative. Also keep in mind that an int contains 32 bits.
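For example, a minimal sketch of the short-based version (same console flow as the original snippet, assuming the input fits in a short):
Console.WriteLine("Enter an integer : ");
// short.Parse throws if the input is outside -32768..32767
short num = short.Parse(Console.ReadLine());
// Convert.ToString(num, 2) emits the two's complement bits for negative values,
// so -5 becomes 1111111111111011
string value = Convert.ToString(num, 2).PadLeft(16, '0');
Console.WriteLine("The bits are : {0}", value);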

How to (theoretically) print all possible double precision numbers in C#?

For a little personal research project I want to generate a string list of all possible values a double precision floating point number can have.
I've found the "r" formatting option, which guarantees that the string can be parsed back into the exact same bit representation:
string s = myDouble.ToString("r");
But how to generate all possible bit combinations? Preferably ordered by value.
Maybe using the unchecked keyword somehow?
unchecked
{
//for all long values
myDouble[i] = myLong++;
}
Disclaimer: It's more of a theoretical question; I am not going to read all the numbers... :)
using unsafe code:
ulong i = 0; // ulong is 64-bit, like double
unsafe
{
    double* d = (double*)&i;
    for (; i < ulong.MaxValue; i++)
        Console.WriteLine(*d);
}
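If you would rather avoid unsafe code, a sketch of the same enumeration using BitConverter.Int64BitsToDouble (which reinterprets a signed 64-bit integer as a double, so iterating over all long values covers the same bit patterns):
for (long i = long.MinValue; i < long.MaxValue; i++)
    Console.WriteLine(BitConverter.Int64BitsToDouble(i).ToString("r"));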
You can start with all possible values 0 <= x < 1. You can create those by having zero for the exponent and using different values for the mantissa.
The mantissa is stored in 52 bits of the 64 bits that make up a double precision number, so that makes for 2^52 = 4503599627370496 different numbers between 0 and 1.
From the description of the double format you can figure out how the bit pattern (eight bytes) should look for those numbers, and then you can use the BitConverter.ToDouble method to do the conversion.
Then you can set the sign bit to get the negative version of all those numbers.
All those numbers are unique; beyond that you will start getting duplicate values, because there are several ways to express the same value when the exponent is non-zero. For each new non-zero exponent you would get the values that were not possible to express with the previously used exponents.
The values between 0 and 1 will keep you busy for the foreseeable future, however, so you can just start with those.
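A minimal sketch of that approach, assuming the standard IEEE 754 field layout (1 sign bit, 11 exponent bits, 52 mantissa bits); the helper name is just for illustration:
static double FromBits(int sign, int exponent, long mantissa)
{
    long bits = ((long)(sign & 1) << 63)
              | ((long)(exponent & 0x7FF) << 52)
              | (mantissa & 0xFFFFFFFFFFFFF);
    return BitConverter.Int64BitsToDouble(bits);
}
// e.g. print the first few mantissa patterns for a fixed sign and exponent:
// for (long m = 0; m < 10; m++)
//     Console.WriteLine(FromBits(0, 0, m).ToString("r"));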
This should be doable in safe code: create a bit string, convert it to a double, output, increment, repeat... a LOT.
string bstr = "01010101010101010101010101010101"; // 32 bits here instead of 64; adjust as needed
long v = 0;
// leftmost character is the most significant bit
for (int i = 0; i < bstr.Length; i++) v = (v << 1) + (bstr[i] - '0');
double d = BitConverter.ToDouble(BitConverter.GetBytes(v), 0);
// increment bstr and loop

BitArray change bit within range

How can I ensure that, when changing a bit in a BitArray, the value the BitArray represents remains within a range?
Example:
Given the range [-5.12, 5.12] and
a = 0100000000000000011000100100110111010010111100011010100111111100 ( = 2.048)
By changing a bit at a random position, I need to ensure that the new value remains in the given range.
I'm not 100% sure what you are doing, and this answer assumes you are currently storing a as a 64-bit value (long). The following code may help point you in the right direction.
const double minValue = -5.12;
const double maxValue = 5.12;
var initialValue = Convert.ToInt64("100000000000000011000100100110111010010111100011010100111111100", 2);
var changedValue = ChangeRandomBit(initialValue); // however you are doing this
var changedValueAsDouble = BitConverter.Int64BitsToDouble(changedValue);
if ((changedValueAsDouble < minValue) || (changedValueAsDouble > maxValue))
{
    // Do something
}
It looks like a double (64 bits, and the result has a decimal point).
As you may know, it has a sign bit, an exponent and a fraction, so you cannot change a random bit and still have a value in the range, with some exceptions:
the sign bit can be changed without problem if your range is [-x; +x] (same x);
changing an exponent or fraction bit will require checking the new value against the range, but
changing an exponent or fraction bit from 1 to 0 will make |a| smaller.
I don't know what you are trying to achieve, care to share? Perhaps you are trying to validate or correct something, in which case you may have a look at this.
Here's an extension method that undoes the set bit if the new value of the float is outside the given range. (This is an example only: it relies on the BitArray holding a float and performs no checks, which is pretty rough, so adapt a solution out of this, including changing to double.)
static class Extension
{
    public static void SetFloat(this BitArray array, int index, bool value, float min, float max)
    {
        bool old = array.Get(index);
        array.Set(index, value);
        // Reinterpret the 32 bits as a float to check the range
        byte[] bytes = new byte[4];
        array.CopyTo(bytes, 0);
        float f = BitConverter.ToSingle(bytes, 0);
        if (f < min || f > max)
            array.Set(index, old); // out of range: undo the change
    }
}
Example use:
static void Main(string[] args)
{
    float f = 2.1f;
    byte[] bytes = System.BitConverter.GetBytes(f);
    BitArray array = new BitArray(bytes);
    array.SetFloat(20, true, -5.12f, 5.12f);
}
If you can actually limit your precision, then this becomes a lot easier. For example, given the range:
[-5.12, 5.12]
If I multiply 5.12 by 100, I get:
[-512, 512]
And the integer 512 in binary is, of course:
1000000000
So now you know you can set any of the first 9 bits and you'll stay below 512 as long as the 10th bit is 0. If you set the 10th bit, you will have to set all the other bits to 0. With a little extra effort, this can be extended to deal with two's complement negative values too (although I might be inclined just to convert them to positive values).
Now if you actually need to accommodate the 3 decimal places of 2.048, you'll need to multiply all your values by 1000 instead, and it gets a little more difficult because 5120 in binary is 1010000000000.
You know you can do anything you want with everything except the most significant bit (MSB) if the MSB is 0. In this case, if the MSB is 1 but the next two bits are 0, you can do anything you want with the remaining bits.
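A rough sketch of that scaled-integer idea (the class, method name, and scale are assumptions for illustration, not the poster's code):
static class ScaledMutation
{
    // Values are scaled by 1000 so that 3 decimal places survive: 2.048 -> 2048,
    // and the range [-5.12, 5.12] becomes [-5120, 5120].
    const int Limit = 5120;

    // Flip one bit of the scaled value; keep the change only if it stays in range.
    public static bool TryFlipBit(ref int scaledValue, int bitIndex)
    {
        int candidate = scaledValue ^ (1 << bitIndex);
        if (candidate < -Limit || candidate > Limit)
            return false;               // out of range: reject the mutation
        scaledValue = candidate;
        return true;
    }
}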
The logic involved in dealing directly with the number in IEEE 754 floating-point format is probably going to be torturous.
Or you could just go with the "mutate the value and then test it" approach: if it's out of range, go back and try again. That might be suitable in practice, but it isn't guaranteed to terminate.
A final thought: depending on exactly what you are doing, you might also want to look at Gray codes. The idea of a Gray code is that consecutive values differ by only a single bit flip. With naturally encoded binary, a flip of the MSB has orders of magnitude more impact on the final value than a flip of the LSB.
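For reference, a minimal sketch of the standard binary/Gray code conversions (not specific to the question's representation):
static ulong BinaryToGray(ulong n)
{
    return n ^ (n >> 1);        // adjacent values differ by exactly one bit
}

static ulong GrayToBinary(ulong g)
{
    ulong n = g;
    for (int shift = 1; shift < 64; shift <<= 1)
        n ^= n >> shift;        // undo the XOR cascade
    return n;
}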

How to work with the bits in a byte

I have a single byte which contains two values. Here's the documentation:
The authority byte is split into two fields. The three least significant bits carry the user’s authority level (0-5). The five most
significant bits carry an override reject threshold. If these bits are
set to zero, the system reject threshold is used to determine whether
a score for this user is considered an accept or reject. If they are
not zero, then the value of these bits multiplied by ten will be the
threshold score for this user.
Authority Byte:
7 6 5 4 3 ......... 2 1 0
Reject Threshold .. Authority
I don't have any experience working with bits in C#.
Can someone please help me convert a byte and get the values as mentioned above?
I've tried the following code:
BitArray BA = new BitArray(mybyte);
But the length comes back as 29, and I would have expected 8, one for each bit in the byte.
-- Thanks for everyone's quick help. Got it working now! Awesome internet.
Instead of BitArray, you can more easily use the built-in bitwise AND and right-shift operators as follows:
byte authorityByte = ...
int authorityLevel = authorityByte & 7;
int rejectThreshold = authorityByte >> 3;
To get the single byte back, you can use the bitwise OR and left-shift operators:
int authorityLevel = ...
int rejectThreshold = ...
Debug.Assert(authorityLevel >= 0 && authorityLevel <= 7);
Debug.Assert(rejectThreshold >= 0 && rejectThreshold <= 31);
byte authorityByte = (byte)((rejectThreshold << 3) | authorityLevel);
Your use of the BitArray is incorrect. This:
BitArray BA = new BitArray(mybyte);
..here mybyte will be implicitly converted to an int. When that happens, you're triggering this constructor:
BitArray(int length);
..therefore, it's creating it with a specific length.
Looking at MSDN (http://msdn.microsoft.com/en-us/library/x1xda43a.aspx) you want this:
BitArray BA = new BitArray(new byte[] { myByte });
Length will then be 8 (as expected).
To get the value of the five most significant bits in a byte as an integer, shift the byte to the right by 3 (i.e. by 8 - 5), and set the three upper bits to zero using a bitwise AND operation, like this:
byte orig = ...
int rejThreshold = (orig >> 3) & 0x1F;
>> is the "shift right" operator. It moves bits 7..3 into positions 4..0, dropping the three lower bits.
0x1F is the binary number 00011111, which has the upper three bits set to zero and the lower five bits set to one. ANDing with this number zeroes out the three upper bits.
This technique can be generalized to get other bit patterns and other integral data types. You shift the bits that you want into the least significant position, then apply a mask that "cuts out" the number of bits you want. In some cases shifting is not necessary (e.g. when you get the least significant group of bits). In other cases, such as above, masking is not necessary, because you get the most significant group of bits in an unsigned type (if the type were signed, ANDing would be required).
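As a sketch of that general pattern (a hypothetical helper, not part of the question's code):
// Extract 'count' bits (0 < count < 32) starting at bit position 'start' (0 = least significant).
static int GetBits(int value, int start, int count)
{
    return (value >> start) & ((1 << count) - 1);
}
// For the authority byte described above:
// int authorityLevel  = GetBits(authorityByte, 0, 3);  // bits 0..2
// int rejectThreshold = GetBits(authorityByte, 3, 5);  // bits 3..7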
You're using the wrong constructor (probably).
The one that you're using is probably this one, while you need this one:
var bitArray = new BitArray(new [] { myByte } );

What does the CreateMask function of BitVector32 do?

What does the CreateMask() function of BitVector32 do?
I did not understand what a mask is.
I would like to understand the following lines of code. Does CreateMask just set a bit to true?
// Creates and initializes a BitVector32 with all bit flags set to FALSE.
BitVector32 myBV = new BitVector32( 0 );
// Creates masks to isolate each of the first five bit flags.
int myBit1 = BitVector32.CreateMask();
int myBit2 = BitVector32.CreateMask( myBit1 );
int myBit3 = BitVector32.CreateMask( myBit2 );
int myBit4 = BitVector32.CreateMask( myBit3 );
int myBit5 = BitVector32.CreateMask( myBit4 );
// Sets the alternating bits to TRUE.
Console.WriteLine( "Setting alternating bits to TRUE:" );
Console.WriteLine( " Initial: {0}", myBV.ToString() );
myBV[myBit1] = true;
Console.WriteLine( " myBit1 = TRUE: {0}", myBV.ToString() );
myBV[myBit3] = true;
Console.WriteLine( " myBit3 = TRUE: {0}", myBV.ToString() );
myBV[myBit5] = true;
Console.WriteLine( " myBit5 = TRUE: {0}", myBV.ToString() );
What is the practical application of this?
It returns a mask which you can use to retrieve the bits you are interested in more easily.
You might want to check out Wikipedia for what a mask is.
In short: a mask is a bit pattern with 1s for the bits you are interested in and 0s for the others.
If you have something like 01010 and you are interested in getting the last 3 bits, your mask would look like 00111. When you then perform a bitwise AND on 01010 and 00111, you get the last three bits (00010), since AND is only 1 if both bits are set, and only the lowest three bits are set in the mask.
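The same masking in C# (binary literals as written here need C# 7 or later):
int value = 0b01010;       // 10
int mask  = 0b00111;       // keep only the lowest three bits
int low3  = value & mask;  // 0b00010 == 2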
A few example CreateMask calls might make this easier to understand:
BitVector32.CreateMask() => 1 (binary 1)
BitVector32.CreateMask(1) => 2 (binary 10)
BitVector32.CreateMask(2) => 4 (binary 100)
BitVector32.CreateMask(4) => 8 (binary 1000)
CreateMask(int) returns the given number multiplied by 2.
NOTE: The first bit is the least significant bit, i.e. the bit farthest to the right.
BitVector32.CreateMask() is a substitution for the left shift operator (<<), which in most cases results in multiplication by 2 (left shift is not circular, so you may start losing digits; more is explained here):
BitVector32 vector = new BitVector32();
int bit1 = BitVector32.CreateMask();
int bit2 = BitVector32.CreateMask(bit1);
int bit3 = 1 << 2;
int bit5 = 1 << 4;
Console.WriteLine(vector.ToString());
vector[bit1 | bit2 | bit3 | bit5] = true;
Console.WriteLine(vector.ToString());
Output:
BitVector32{00000000000000000000000000000000}
BitVector32{00000000000000000000000000010111}
Check this other post as well.
Also, CreateMask does not return the given number multiplied by 2.
CreateMask creates a bit mask based on a specific position in the 32-bit word (that's the parameter that you are passing), which is generally 2^x when you are talking about a single bit (flag).
I stumbled upon this question trying to find out what CreateMask does exactly. I did not feel the current answers answered the question for me. After some reading and experimenting, I would like to share my findings:
Basically, what Maksymilian says is almost correct: "BitVector32.CreateMask is a substitution for the left shift operator (<<), which in most cases results in multiplication by 2".
Because << is a binary operator and CreateMask only takes one argument, I would like to add that BitVector32.CreateMask(x) is equivalent to x << 1.
Border cases
However, BitVector32.CreateMask(x) is not equivalent to x << 1 for two border cases:
BitVector32.CreateMask(int.MinValue):
An InvalidOperationException will be thrown. int.MinValue corresponds to 10000000000000000000000000000000. This seems a bit odd, especially considering that every other value with a 1 as the leftmost bit (i.e. negative numbers) works fine. In contrast, int.MinValue << 1 would not throw an exception and would just return 0.
When you call BitVector32.CreateMask(0) (or BitVector32.CreateMask()), it returns 1
(i.e. 00000000000000000000000000000000 becomes 00000000000000000000000000000001),
whereas 0 << 1 would just return 0.
Multiplication by 2
CreateMask is almost always equivalent to multiplication by 2. Other than the two special cases above, it differs when the second bit from the left is different from the leftmost bit. Since an int is signed, the leftmost bit indicates the sign, and in that scenario the sign is flipped. E.g. CreateMask(-1) (or 11111111111111111111111111111111) results in -2 (or 11111111111111111111111111111110), but CreateMask(int.MaxValue) (or 01111111111111111111111111111111) also results in -2.
Anyway, you probably shouldn't use it for this purpose. As I understand it, when you use a BitVector32 you should really only consider it a sequence of 32 bits. The fact that ints are used in combination with the BitVector32 is probably just because it's convenient.
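A small sketch that checks these observations directly (the expected results, based on the behaviour described above, are in the comments):
using System;
using System.Collections.Specialized;

class CreateMaskDemo
{
    static void Main()
    {
        Console.WriteLine(BitVector32.CreateMask());              // 1
        Console.WriteLine(BitVector32.CreateMask(0));             // 1, not 0 << 1 == 0
        Console.WriteLine(BitVector32.CreateMask(-1));            // -2 (sign flipped)
        Console.WriteLine(BitVector32.CreateMask(int.MaxValue));  // -2
        // BitVector32.CreateMask(int.MinValue) throws InvalidOperationException
    }
}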
When is CreateMask useful?
I honestly don't know. It seems from the documentation and the name "previous" of the function's argument that it was intended to be used in some kind of sequence: "Use CreateMask() to create the first mask in a series and CreateMask(int) for all subsequent masks."
However, in the code example they use it to create the masks for the first 5 bits, and subsequently do some operations on those bits. I cannot imagine they expect you to write 32 calls in a row to CreateMask to be able to work with the bits near the left.
