C# Rotate bits to left overflow issue

I've been trying to get this to work for several days now; I've read a thousand guides and other people's questions, but I still can't find a way to do it properly.
What I want to do is to rotate the bits to the left, here's an example.
Original number = 10000001 = 129
What I need = 00000011 = 3
I have to rotate the bits to left a certain amount of times (it depends on what the user types), here's what I did:
byte b = (byte)129;
byte result = (byte)((byte)b << 1);
Console.WriteLine(result);
Console.Write("Press any key to continue . . . ");
Console.ReadKey(true);
The issue with this is that it throws an OverflowException when I use the << operator with that number. (Note that if I use a number whose first bit is a 0, for example 3 = 00000011, it works as intended and returns 6.)
The problem is, if the first bit is a 1, I get the OverflowException. I know this isn't rotating, it's just shifting: the first bit goes away, a 0 pops up at the end of the byte, and I can then change it with an OR 00000001 operation to make it a 1 (if the first bit was a 1; if it was a 0 I just leave it there).

You're getting an overflow exception because you're operating in a checked context, apparently.
You can get around that by putting the code in an unchecked context - or just by making sure you don't perform the cast back to byte on a value that can be more than 255. For example:
int shifted = b << rotateLeftBits;
int highBits = shifted & 0xff;
int lowBits = shifted >> 8; // Previously high bits, rotated
byte result = (byte) (highBits | lowBits);
This will work for rotate sizes of up to 8. For greater sizes, just use rotateLeftBits % 8 (and normalize to a non-negative number if you might sometimes want to rotate right).
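Putting those pieces together, a minimal sketch of the approach described above (the method name RotateLeft is my own):

```csharp
using System;

class Program
{
    // Widen to int, shift, then recombine the two halves.
    // Works even in a checked context, because the int never overflows.
    static byte RotateLeft(byte b, int rotateLeftBits)
    {
        rotateLeftBits %= 8;                       // normalize to -7..7
        if (rotateLeftBits < 0) rotateLeftBits += 8; // ...then to 0..7
        int shifted = b << rotateLeftBits;
        int highBits = shifted & 0xff;
        int lowBits = shifted >> 8;                // previously high bits, rotated
        return (byte)(highBits | lowBits);
    }

    static void Main()
    {
        Console.WriteLine(RotateLeft(129, 1)); // 10000001 -> 00000011, prints 3
        Console.WriteLine(RotateLeft(3, 1));   // prints 6
    }
}
```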

<< is a shift operator, not a rotate one.
If you want to rotate, you can use (with suitable casting):
b = (b >> 7) | ((b & 0x7f) << 1);
The first part of that moves the leftmost bit down to the rightmost position; the second part shifts all the other bits left.
OR-ing them together with | combines the two.
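For completeness, here is that expression with the casting spelled out. In C#, shift and bitwise operators on byte operands produce an int, so a cast back to byte is needed:

```csharp
using System;

class Program
{
    static void Main()
    {
        byte b = 129; // 10000001
        // Rotate left by one: bring bit 7 down to bit 0,
        // shift the remaining seven bits left by one.
        b = (byte)((b >> 7) | ((b & 0x7f) << 1));
        Console.WriteLine(b); // prints 3 (00000011)
    }
}
```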

Thanks for your answers!
Shortly after I made this post I came up with an idea to solve this problem; let me show you (before you ask: it works!):
byte b = (byte)129;
b = (byte)((byte)b & 127);
byte result = (byte)((byte)b << 1);
result = (byte)((byte)result | 1);
Console.WriteLine(result);
What this does is: it removes the first bit (in case it is a 1), shifts the resulting zero to the left (which doesn't generate an overflow), and once the shift is done, changes that 0 back to a 1. If the first bit was a 0, it just moves the zero along. (Note that this is just a piece of the whole code, and since it is partially written in Spanish (the comments and variables), I doubt you would understand most of it, so I decided to extract only the problematic part to show you.)
I will still try the things you told me and see how it goes. Again, thanks a lot for your answers!

Please try this function. It rotates in both directions (left and right rotation of an 8-bit value). [I haven't tested this function!]
// just for 8-bit values (byte)
byte rot(byte value, int rotation)
{
    rotation %= 8;
    int result;
    if (rotation < 0)
    {
        result = value << (8 + rotation);
    }
    else
    {
        result = value << rotation;
    }
    byte[] resultBytes = BitConverter.GetBytes(result);
    result = resultBytes[0] | resultBytes[1];
    return (byte)result;
}
short rot(short value, int rotation) { ... }
int rot(int value, int rotation) { ... }

Related

What does the << mean?

Thanks for taking a look at this question.
I saw the following piece of code inside a traditional for block, but was not sure what its significance was inside its context.
index <<= 1;
For further context, here is the full block of code.
ulong index = 1;
int distance = 0;
for (int i = 0; i < 64; i++)
{
    if ((hash1 & index) != (hash2 & index))
    {
        distance++;
    }
    index <<= 1;
}
Is it simply making sure that index is still 1 and, if it isn't, returning its value to 1?
Secondly, what is this called, so I can read up on it some more?
Finally, thank you for your time and consideration in this matter.
The code in question is spinning through a pair of 64-bit hashes (probably as ulongs, like the index), and checking how many bits differ between them. I'm going to use 4-bit values for example purposes, but the principle is the same.
if ((hash1 & index) != (hash2 & index))
The & operator is doing a bitwise-AND operation. When the hash is ANDed with the index value, you get either 0 or the index value back, depending on whether that specific bit was 0 or 1. (1010 & 0010 == 0010 and 1010 & 0100 == 0000).
If both ANDs produce a 0, or both produce the index value, then the two bits of the hash match. Otherwise, they don't, and we distance++; to indicate that they are off by one more bit than we knew of before.
index <<= 1;
This line merely bumps the index digit to the next bit. It does this by taking the old index (which starts as 1, equal to 0001), and left shifting by one place (<< 1), then setting that back into the index variable (<<= instead of <<). So after the first loop, index will be 0010, then 0100, and so on.
This has the effect of multiplying by 2, but that's not its intended use here.
So overall, you'd get a distance of 2 by running 0011 and 1111 through this algorithm, because two bits are different.
The code
index <<= 1;
is a left shift by one bit. In this case it has the same effect as multiplying by two, but see the comments for cautions.
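The loop discussed above can be wrapped into a small method (the name HammingDistance is mine; counting differing bits this way is known as the Hamming distance):

```csharp
using System;

class Program
{
    // Counts how many of the 64 bit positions differ between two hashes.
    static int HammingDistance(ulong hash1, ulong hash2)
    {
        ulong index = 1;
        int distance = 0;
        for (int i = 0; i < 64; i++)
        {
            // The AND isolates one bit of each hash; if the isolated
            // bits differ, that position contributes to the distance.
            if ((hash1 & index) != (hash2 & index))
            {
                distance++;
            }
            index <<= 1; // move the probe to the next bit position
        }
        return distance;
    }

    static void Main()
    {
        Console.WriteLine(HammingDistance(0b0011, 0b1111)); // prints 2
    }
}
```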

C# possibilities tree from integers array

How can I build a possibilities tree from an integer array in C#? I need to generate all possible variants of the array if, at every step, one element is deleted from the array.
For example, if we have an array of three integers [1,2,3], then the tree should look like this: tree view
I would approach this as a binary arithmetic problem:
static void Main(string[] args)
{
    int[] arr = { 1, 2, 3 };
    PickElements(0, arr);
}
static void PickElements<T>(int depth, T[] arr, int mask = -1)
{
    int bits = Math.Min(32, arr.Length);
    // keep just the bits from mask that are represented in arr
    mask &= ~(-1 << bits);
    if (mask == 0) return;
    // UI: write the options
    for (int i = 0; i < depth; i++)
        Console.Write('>'); // indent to depth
    for (int i = 0; i < arr.Length; i++)
    {
        if ((mask & (1 << i)) != 0)
        {
            Console.Write(' ');
            Console.Write(arr[i]);
        }
    }
    Console.WriteLine();
    // recurse, taking away one bit (naive and basic bit sweep)
    for (int i = 0; i < bits; i++)
    {
        // try and subtract each bit separately; if it
        // is different, recurse
        var childMask = mask & ~(1 << i);
        if (childMask != mask) PickElements(depth + 1, arr, childMask);
    }
}
For a TreeView, simply replace the Console.Write etc with node creation, presumably passing the parent node in (and down) as part of the recursion (in place of depth, perhaps).
To see what this is doing, consider the binary; -1 is:
11111111111111...111111111111111
we then look at bits, which we derive from the array length, and find to be 3 in this example. We only need to look at 3 bits, then; the line:
~(-1 << bits)
computes a mask for this, because:
-1 = 1111111....1111111111111
(-1 << 3) = 1111111....1111111111000 (left-shift back-fills with 0)
~(-1 << 3) = 0000000....0000000000111 (binary inverse)
we then apply this to our input mask, so we're only ever looking at the least significant 3 bits, via mask &= .... If that turns out to be zero, we've run out of things to do, so stop recursing.
The UI update is simple enough; we just scan over the 3 bits that we care about, checking whether the current bit is "on" for our mask; 1 << i creates a mask with just the "i-th set bit"; the & and != 0 checks whether that bit is set. If it is, we include the element in the output.
Finally, we need to start taking away bits, to look at the sub-tree; we could probably be more sophisticated about this, but I chose just to scan all the bits and try them - worst case this is 32 bit tests per level, which is nothing. As before, 1 << i creates a mask of just the "i-th set bit". This time we want to disable that bit, so we "negate" and "and" via mask & ~(...). It is possible that this bit was already disabled, so the childMask != mask check ensures we only actually recurse when we have disabled a bit that was previously enabled.
The end result is that we end up with the masks being successively:
11..1111111111111111 (special case for first call; all set)
110 (first bit disabled)
100 (first and second bits disabled)
010 (first and third bits disabled)
101 (second bit disabled)
100 (second and first bits disabled)
001 (second and third bits disabled)
011 (third bit disabled)
010 (third and first bits disabled)
001 (third and second bits disabled)
Note that for a simpler combination example, it would be possible to just iterate in a single for, using the bits to pick elements; however, I've done it a recursive way because we need to build a tree of successive subtractions, rather than just flat possibilities in no particular order.

Setting all low order bits to 0 until two 1s remain (for a number stored as a byte array)

I need to set all the low order bits of a given BigInteger to 0 until only two 1 bits are left. In other words leave the highest and second-highest bits set while unsetting all others.
The number could be any combination of bits. It may even be all 1s or all 0s. Example:
MSB 0000 0000
1101 1010
0010 0111
...
...
...
LSB 0100 1010
We can easily handle corner cases such as 0, 1, powers of 2, etc. What I'm not sure about is how to apply the popular bit-manipulation algorithms to an array of bytes representing one number.
I have already looked at bithacks but have the following constraints. The BigInteger structure only exposes underlying data through the ToByteArray method which itself is expensive and unnecessary. Since there is no way around this, I don't want to slow things down further by implementing a bit counting algorithm optimized for 32/64 bit integers (which most are).
In short, I have a byte [] representing an arbitrarily large number. Speed is the key factor here.
NOTE: In case it helps, the numbers I am dealing with have around 5,000,000 bits. They keep on decreasing with each iteration of the algorithm so I could probably switch techniques as the magnitude of the number decreases.
Why I need to do this: I am working with a 2D graph and am particularly interested in coordinates whose x and y values are powers of 2. So (x+y) will always have two bits set and (x-y) will always have consecutive bits set. Given an arbitrary coordinate (x, y), I need to transform an intersection by getting values with all bits unset except the first two MSB.
Try the following (not sure if it's actually valid C#, but it should be close enough):
// find the next non-zero byte (I'm assuming little endian) or return -1
int find_next_byte(byte[] data, int i) {
    while (data[i] == 0) --i;
    return i;
}
// find a bit mask of the next non-zero bit or return 0
int find_next_bit(int value, int b) {
    while (b > 0 && ((value & b) == 0)) b >>= 1;
    return b;
}

byte[] data;
int i = find_next_byte(data, data.Length - 1);
// find the first 1 bit
int b = find_next_bit(data[i], 1 << 7);
// try to find the second 1 bit
b = find_next_bit(data[i], b >> 1);
if (b > 0) {
    // found 2 bits, removing the rest
    if (b > 1) data[i] &= ~(b - 1);
} else {
    // we only found 1 bit, find the next non-zero byte
    i = find_next_byte(data, i - 1);
    b = find_next_bit(data[i], 1 << 7);
    if (b > 1) data[i] &= ~(b - 1);
}
// remove the rest (a memcpy would be even better here,
// but that would probably require unmanaged code)
for (--i; i >= 0; --i) data[i] = 0;
Untested.
Probably this would be a bit more performant if compiled as unmanaged code or even with a C or C++ compiler.
As harold noted correctly, if you have no a priori knowledge about your number, this O(n) method is the best you can do. If you can, you should keep the position of the highest two non-zero bytes, which would drastically reduce the time needed to perform your transformation.
I'm not sure if this is getting optimised out or not, but this code appears to be 16x faster than ToByteArray. It also avoids the memory copy, and it means you get the result as a uint[] instead of a byte[], so you should see further improvements there.
//create delegate to get private _bits field
var par = Expression.Parameter(typeof(BigInteger));
var bits = Expression.Field(par, "_bits");
var lambda = Expression.Lambda(bits, par);
var func = (Func<BigInteger, uint[]>)lambda.Compile();
//test call our delegate
var bigint = BigInteger.Parse("3498574578238348969856895698745697868975687978");
int time = Environment.TickCount;
for (int y = 0; y < 10000000; y++)
{
    var x = func(bigint);
}
Console.WriteLine(Environment.TickCount - time);
//compare time to ToByteArray
time = Environment.TickCount;
for (int y = 0; y < 10000000; y++)
{
    var x = bigint.ToByteArray();
}
Console.WriteLine(Environment.TickCount - time);
From there, finding the top 2 bits should be pretty easy. The first bit will be in the first int, I presume; then it is just a matter of searching for the second topmost bit. If it is in the same integer, just set the first bit to zero and find the topmost remaining bit; otherwise search for the next non-zero int and find its topmost bit.
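As a sketch of that search, assuming the uint[] stores the magnitude least-significant element first (which is how BigInteger's internal _bits array is laid out; the helper name TopTwoBits is my own):

```csharp
using System;

class Program
{
    // Returns the positions of the two highest set bits,
    // or -1 entries when fewer than two bits are set.
    static (int, int) TopTwoBits(uint[] bits)
    {
        int first = -1, second = -1;
        // scan from the most significant uint downwards
        for (int i = bits.Length - 1; i >= 0 && second == -1; i--)
        {
            for (int b = 31; b >= 0; b--)
            {
                if ((bits[i] & (1u << b)) != 0)
                {
                    int pos = i * 32 + b;
                    if (first == -1) first = pos;
                    else { second = pos; break; }
                }
            }
        }
        return (first, second);
    }

    static void Main()
    {
        // 0b1010 has bits 3 and 1 set
        var (hi, next) = TopTwoBits(new uint[] { 0b1010, 0 });
        Console.WriteLine($"{hi} {next}"); // prints "3 1"
    }
}
```

With both positions in hand, the transformation is just clearing everything below the second position.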
EDIT: to make things simple just copy/paste this class into your project. This creates extension methods that means you can just call mybigint.GetUnderlyingBitsArray(). I added a method to get the Sign also and, to make it more generic, have created a function that will allow accessing any private field of any object. I found this to be slower than my original code in debug mode but the same speed in release mode. I would advise performance testing this yourself.
static class BigIntegerEx
{
    private static Func<BigInteger, uint[]> getUnderlyingBitsArray;
    private static Func<BigInteger, int> getUnderlyingSign;

    static BigIntegerEx()
    {
        getUnderlyingBitsArray = CompileFuncToGetPrivateField<BigInteger, uint[]>("_bits");
        getUnderlyingSign = CompileFuncToGetPrivateField<BigInteger, int>("_sign");
    }

    private static Func<TObject, TField> CompileFuncToGetPrivateField<TObject, TField>(string fieldName)
    {
        var par = Expression.Parameter(typeof(TObject));
        var field = Expression.Field(par, fieldName);
        var lambda = Expression.Lambda(field, par);
        return (Func<TObject, TField>)lambda.Compile();
    }

    public static uint[] GetUnderlyingBitsArray(this BigInteger source)
    {
        return getUnderlyingBitsArray(source);
    }

    public static int GetUnderlyingSign(this BigInteger source)
    {
        return getUnderlyingSign(source);
    }
}

Invert 1 bit in C#

I have 1 bit in a byte (always in the lowest order position) that I'd like to invert.
ie given 00000001 I'd like to get 00000000 and with 00000000 I'd like 00000001.
I solved it like this:
bit > 0 ? 0 : 1;
I'm curious to see how else it could be done.
How about:
bit ^= 1;
This simply XORs the value with 1, which toggles its lowest bit.
If you want to flip bit #N, counting from 0 on the right towards 7 on the left (for a byte), you can use this expression:
bit ^= (1 << N);
This won't disturb any other bits, but if the value is only ever going to be 0 or 1 in decimal value (ie. all other bits are 0), then the following can be used as well:
bit = 1 - bit;
Again, if there is only going to be one bit set, you can use the same value for 1 as in the first to flip bit #N:
bit = (1 << N) - bit;
Of course, at that point you're not actually doing bit-manipulation in the same sense.
The expression you have is fine as well, but again will manipulate the entire value.
Also, if you had expressed a single bit as a bool value, you could do this:
bit = !bit;
Which toggles the value.
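A quick demo of the toggling approaches above:

```csharp
using System;

class Program
{
    static void Main()
    {
        int bit = 1;
        bit ^= 1;               // XOR toggles the lowest bit
        Console.WriteLine(bit); // prints 0

        bit = 1 - bit;          // arithmetic version, valid when bit is 0 or 1
        Console.WriteLine(bit); // prints 1

        int value = 0b00000001;
        value ^= 1 << 3;        // flip bit #3 without disturbing the others
        Console.WriteLine(Convert.ToString(value, 2).PadLeft(8, '0')); // prints 00001001
    }
}
```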
More of a joke:
Of course, the "enterprisey" way would be to use a lookup table:
byte[] bitTranslations = new byte[256];
bitTranslations[0] = 1;
bitTranslations[1] = 0;
bit = bitTranslations[bit];
Your solution isn't correct because if bit == 2 (10) then your assignment will yield bit == 0 (00).
This is what you want:
bit ^= 1;

What does the CreateMask function of BitVector32 do?

What does the CreateMask() function of BitVector32 do?
I did not get what a Mask is.
Would like to understand the following lines of code. Does create mask just sets bit to true?
// Creates and initializes a BitVector32 with all bit flags set to FALSE.
BitVector32 myBV = new BitVector32( 0 );
// Creates masks to isolate each of the first five bit flags.
int myBit1 = BitVector32.CreateMask();
int myBit2 = BitVector32.CreateMask( myBit1 );
int myBit3 = BitVector32.CreateMask( myBit2 );
int myBit4 = BitVector32.CreateMask( myBit3 );
int myBit5 = BitVector32.CreateMask( myBit4 );
// Sets the alternating bits to TRUE.
Console.WriteLine( "Setting alternating bits to TRUE:" );
Console.WriteLine( " Initial: {0}", myBV.ToString() );
myBV[myBit1] = true;
Console.WriteLine( " myBit1 = TRUE: {0}", myBV.ToString() );
myBV[myBit3] = true;
Console.WriteLine( " myBit3 = TRUE: {0}", myBV.ToString() );
myBV[myBit5] = true;
Console.WriteLine( " myBit5 = TRUE: {0}", myBV.ToString() );
What is the practical application of this?
It returns a mask which you can use to retrieve the bit of interest more easily.
You might want to check out Wikipedia for what a mask is.
In short: a mask is a bit pattern with 1s in the positions you are interested in and 0s everywhere else.
If you have something like 01010 and you are interested in getting the last 3 bits, your mask would look like 00111. Then, when you perform a bitwise AND of 01010 and 00111, you get the last three bits (00010), since AND is only 1 if both bits are set, and none of the bits besides the first three are set in the mask.
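That masking step looks like this in code:

```csharp
using System;

class Program
{
    static void Main()
    {
        int value = 0b01010;
        int mask  = 0b00111; // keep only the last three bits
        int last3 = value & mask;
        Console.WriteLine(Convert.ToString(last3, 2).PadLeft(5, '0')); // prints 00010
    }
}
```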
An example might be easier to understand:
BitVector32.CreateMask() => 1 (binary 1)
BitVector32.CreateMask(1) => 2 (binary 10)
BitVector32.CreateMask(2) => 4 (binary 100)
BitVector32.CreateMask(4) => 8 (binary 1000)
CreateMask(int) returns the given number multiplied by 2.
NOTE: The first bit is the least significant bit, i.e. the bit farthest to the right.
BitVector32.CreateMask() is a substitute for the left shift operator (<<), which in most cases results in multiplication by 2 (left shift is not circular, so you may start losing digits; more is explained here)
BitVector32 vector = new BitVector32();
int bit1 = BitVector32.CreateMask();
int bit2 = BitVector32.CreateMask(bit1);
int bit3 = 1 << 2;
int bit5 = 1 << 4;
Console.WriteLine(vector.ToString());
vector[bit1 | bit2 | bit3 | bit5] = true;
Console.WriteLine(vector.ToString());
Output:
BitVector32{00000000000000000000000000000000}
BitVector32{00000000000000000000000000010111}
Check this other post.
Also, CreateMask does not return the given number multiplied by 2.
CreateMask creates a bit mask based on a specific position in the 32-bit word (that's the parameter you are passing), which is generally 2^x when you are talking about a single bit (flag).
I stumbled upon this question trying to find out what CreateMask does exactly. I did not feel the current answers answered the question for me. After some reading and experimenting, I would like to share my findings:
Basically what Maksymilian says is almost correct: "BitVector32.CreateMask is a substitution for the left shift operator (<<), which in most cases results in multiplication by 2".
Because << is a binary operator and CreateMask only takes one argument, I would like to add that BitVector32.CreateMask(x) is equivalent to x << 1.
Bordercases
However, BitVector32.CreateMask(x) is not equivalent to x << 1 in two border cases:
BitVector32.CreateMask(int.MinValue):
An InvalidOperationException will be thrown. int.MinValue corresponds to 10000000000000000000000000000000. This seems a bit odd, especially considering every other value with a 1 as the leftmost bit (i.e. negative numbers) works fine. In contrast, int.MinValue << 1 would not throw an exception and would just return 0.
When you call BitVector32.CreateMask(0) (or BitVector32.CreateMask()). This will return 1
(i.e. 00000000000000000000000000000000 becomes 00000000000000000000000000000001),
whereas 0 << 1 would just return 0.
Multiplication by 2
CreateMask almost always is equivalent to multiplication by 2. Other than the above two special cases, it differs when the second bit from the left is different from the leftmost bit. An int is signed, so the leftmost bit indicates the sign. In that scenario the sign is flipped. E.g. CreateMask(-1) (or 11111111111111111111111111111111) results in -2 (or 11111111111111111111111111111110), but CreateMask(int.MaxValue) (or 01111111111111111111111111111111) also results in -2.
Anyway, you probably shouldn't use it for this purpose. As I understand, when you use a BitVector32, you really should only consider it a sequence of 32 bits. The fact that they use ints in combination with the BitVector32 is probably just because it's convenient.
When is CreateMask useful?
I honestly don't know. It seems from the documentation and the name "previous" of the argument of the function that they intended it to be used in some kind of sequence: "Use CreateMask() to create the first mask in a series and CreateMask(int) for all subsequent masks.".
However, in the code example, they use it to create the masks for the first 5 bits, to subsequently do some operations on those bits. I cannot imagine they expect you to write 32 calls in a row to CreateMask to be able to do some stuff with the bits near the left.
