Circular shift Int32 digits using C#

Members,
What I am trying to do is to shift the digits of an Int32 to the right or left (not the bits!).
So if I shift the constant
123456789
by 3
I should get
789123456
No digits get lost, because we are talking about a circular shift.
After a bit of testing I've come up with this method, which works:
static uint[] Pow10 = new uint[] { 1, 10, 100, 1000, 10000, 100000, 1000000, 10000000, 100000000, uint.MaxValue };
static uint RotateShift10(uint value, int shift)
{
    int r = (int)Math.Floor(Math.Log10(value) + 1);
    while (r < shift)
        shift = shift - r;
    if (shift < 0) shift = 9 + shift;
    uint x = value / Pow10[shift];
    uint i = 0;
    while (true)
    {
        if (x < Pow10[i])
            return x + (value % Pow10[shift]) * Pow10[i];
        i += 1;
    }
}
The solution I am looking for should be arithmetic, not a string conversion followed by a rotation.
I also assume that:
The Int32 value has no 0-digits in it, to prevent any loss of digits.
The Int32 is a non-negative number.
A positive rotation integer shifts to the right, and a negative one to the left.
My algorithm already does all of that; I would like to know whether there is a way to tweak it a bit, or whether there is a better arithmetic solution to the problem.

Because I just can't resist a 'has to have an arithmetic approach' challenge :D, I fiddled around with the following:
static uint RotateShift(uint value, int shift)
{
    int len = (int)Math.Log10(value) + 1;
    shift %= len;
    if (shift < 0) shift += len;
    uint pow = (uint)Math.Pow(10, shift);
    return (value % pow) * (uint)Math.Pow(10, len - shift) + value / pow;
}
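To see why this works, take value = 123456789 and shift = 3: len = 9 and pow = 10^3 = 1000, so value % pow = 789 and value / pow = 123456, and the result is 789 * 10^(9-3) + 123456 = 789123456.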
Edit: also some test results:
foreach (var val in new uint[] { 123456789, 12345678 })
    foreach (var shift in new[] { 3, -3, 1, -1, 11, -11, 18 })
    {
        Console.WriteLine("Value {0} Shift {1} -> {2}", val, shift, RotateShift(val, shift));
    }
Value 123456789 Shift 3 -> 789123456
Value 123456789 Shift -3 -> 456789123
Value 123456789 Shift 1 -> 912345678
Value 123456789 Shift -1 -> 234567891
Value 123456789 Shift 11 -> 891234567
Value 123456789 Shift -11 -> 345678912
Value 123456789 Shift 18 -> 123456789
Value 12345678 Shift 3 -> 67812345
Value 12345678 Shift -3 -> 45678123
Value 12345678 Shift 1 -> 81234567
Value 12345678 Shift -1 -> 23456781
Value 12345678 Shift 11 -> 67812345
Value 12345678 Shift -11 -> 45678123
Value 12345678 Shift 18 -> 78123456

Related

C# getting upper 4 bits of uint32 starting with first significant bit

I have an (in fact pseudo-random) uint32 number, and I need its first 4 bits starting with the first bit that is not 0, e.g.
...000100101 => 1001
1000...0001 => 1000
...0001 => 0001
...0000 => 0000
etc
I understand I have to use something like this:
uint num = 1157; // some random number
uint high = num >> offset;
The problem is I don't know where the first set bit is, so I can't use >> with a constant shift. Can someone explain how to find this offset?
You can first calculate the index of the highest set bit (HSB) and then shift accordingly. You can do this like:
int hsb = -4;
for (uint cnum = num; cnum != 0; cnum >>= 1, hsb++) ;
if (hsb < 0) {
    hsb = 0;
}
uint result = num >> hsb;
So we first determine the number of bits in num minus four: we keep incrementing hsb while shifting cnum (a copy of num) to the right, until there are no set bits left in cnum.
Next we clamp hsb at zero, so that if num has fewer than four significant bits it is not shifted at all. The result is the original num shifted to the right by hsb.
If I run this on 0x123, I get 0x9 in the csharp interactive shell:
csharp> uint num = 0x123;
csharp> int hsb = -4;
csharp> for(uint cnum = num; cnum != 0; cnum >>= 1, hsb++);
csharp> if(hsb < 0) {
> hsb = 0;
> }
csharp> uint result = num >> hsb;
csharp> result
9
0x123 is 0001 0010 0011 in binary. So:
0001 0010 0011
1 001
And 1001 is 9.
Determining the position of the most significant non-zero bit is the same as computing the logarithm with base 2. There are "bit shift tricks" to do that quickly on a modern CPU:
int GetLog2Plus1(uint value)
{
    // smear the highest set bit into every lower position
    value |= value >> 1;
    value |= value >> 2;
    value |= value >> 4;
    value |= value >> 8;
    value |= value >> 16;
    // population count of the smeared value = index of the highest set bit + 1
    value -= value >> 1 & 0x55555555;
    value = (value >> 2 & 0x33333333) + (value & 0x33333333);
    value = (value >> 4) + value & 0x0F0F0F0F;
    value += value >> 8;
    value += value >> 16;
    value &= 0x0000003F;
    return (int) value;
}
This will return a number from 0 to 32:
Value | Log2 + 1
-------------------------------------------+---------
0b0000_0000_0000_0000_0000_0000_0000_0000U | 0
0b0000_0000_0000_0000_0000_0000_0000_0001U | 1
0b0000_0000_0000_0000_0000_0000_0000_0010U | 2
0b0000_0000_0000_0000_0000_0000_0000_0011U | 2
0b0000_0000_0000_0000_0000_0000_0000_0100U | 3
...
0b0111_1111_1111_1111_1111_1111_1111_1111U | 31
0b1000_0000_0000_0000_0000_0000_0000_0000U | 32
0b1000_0000_0000_0000_0000_0000_0000_0001U | 32
... |
0b1111_1111_1111_1111_1111_1111_1111_1111U | 32
(Math nitpickers will notice that the logarithm of 0 is undefined. However, I hope the table above demonstrates how that is handled and makes sense for this problem.)
You can then compute the most significant non-zero bits taking into account that you want the 4 least significant bits if the value is less than 8 (where log2 + 1 is < 4):
var log2Plus1 = GetLog2Plus1(value);
var bitsToShift = Math.Max(log2Plus1 - 4, 0);
var upper4Bits = value >> bitsToShift;
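Putting the two snippets together, a minimal sketch (the GetUpper4Bits wrapper name is mine, and it assumes the GetLog2Plus1 method above is in scope):
static uint GetUpper4Bits(uint value)
{
    var log2Plus1 = GetLog2Plus1(value);
    var bitsToShift = Math.Max(log2Plus1 - 4, 0);
    return value >> bitsToShift;
}
// e.g. GetUpper4Bits(0x123) == 9, i.e. 0001 0010 0011 -> 1001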

Comparing bits efficiently ( overlap set of x )

I want to compare a stream of bits of arbitrary length to a mask in c# and return a ratio of how many bits were the same.
The mask to check against is anywhere between 2 bits and 8k long (with 90% of the masks being 5 bits long); the input can be anywhere between 2 bits and ~500k, with an average input length of 12k (but yeah, most of the time it will be comparing 5 bits with the first 5 bits of that 12k).
Now my naive implementation would be something like this:
bool[] mask = new[] { true, true, false, true };
float dendrite(bool[] input) {
    int correct = 0;
    for (int i = 0; i < mask.Length; i++) {
        if (input[i] == mask[i])
            correct++;
    }
    return (float)correct / (float)mask.Length;
}
but I expect this is better handled (more efficient) with some kind of binary operator magic?
Anyone got any pointers?
EDIT: the datatype is not fixed at this point in my design, so if ints or byte arrays work better, I'd also be a happy camper; I'm trying to optimize for efficiency here, and the faster the computation, the better.
eg if you can make it work like this:
int[] mask = new[] { 1, 1, 0, 1 };
float dendrite(int[] input) {
    int correct = 0;
    for (int i = 0; i < mask.Length; i++) {
        if (input[i] == mask[i])
            correct++;
    }
    return (float)correct / (float)mask.Length;
}
or this:
int mask = 13; // 1101
float dendrite(int input) {
    return // your magic here;
} // would return 0.75 for an input of 101
  // (1100101 in binary), because it matches
  // 3 bits of the 4-bit mask == .75
ANSWER:
I ran each proposed answer against the others; Fredou's and Marten's solutions ran neck and neck, but Fredou submitted the fastest, leanest implementation in the end. Of course, since the average result varies quite wildly between implementations, I might have to revisit this post later on. :) But that's probably just me messing up in my test script. (I hope; too late now, going to bed =)
sparse1.Cyclone
1317ms 3467107ticks 10000iterations
result: 0,7851563
sparse1.Marten
288ms 759362ticks 10000iterations
result: 0,05066964
sparse1.Fredou
216ms 568747ticks 10000iterations
result: 0,8925781
sparse1.Marten
296ms 778862ticks 10000iterations
result: 0,05066964
sparse1.Fredou
216ms 568601ticks 10000iterations
result: 0,8925781
sparse1.Marten
300ms 789901ticks 10000iterations
result: 0,05066964
sparse1.Cyclone
1314ms 3457988ticks 10000iterations
result: 0,7851563
sparse1.Fredou
207ms 546606ticks 10000iterations
result: 0,8925781
sparse1.Marten
298ms 786352ticks 10000iterations
result: 0,05066964
sparse1.Cyclone
1301ms 3422611ticks 10000iterations
result: 0,7851563
sparse1.Marten
292ms 769850ticks 10000iterations
result: 0,05066964
sparse1.Cyclone
1305ms 3433320ticks 10000iterations
result: 0,7851563
sparse1.Fredou
209ms 551178ticks 10000iterations
result: 0,8925781
(Test script copied here; if I destroyed yours while modifying it, let me know: https://dotnetfiddle.net/h9nFSa )
How about this one - dotnetfiddle example
using System;
namespace ConsoleApplication1
{
    public class Program
    {
        public static void Main(string[] args)
        {
            int a = Convert.ToInt32("0001101", 2);
            int b = Convert.ToInt32("1100101", 2);
            Console.WriteLine(dendrite(a, 4, b));
        }
        private static float dendrite(int mask, int len, int input)
        {
            // int.MaxValue >> (31 - len) keeps the low 'len' bits of the input
            return 1 - getBitCount(mask ^ (input & (int.MaxValue >> (31 - len)))) / (float)len;
        }
        private static int getBitCount(int bits)
        {
            // classic parallel bit count (popcount)
            bits = bits - ((bits >> 1) & 0x55555555);
            bits = (bits & 0x33333333) + ((bits >> 2) & 0x33333333);
            return ((bits + (bits >> 4) & 0xf0f0f0f) * 0x1010101) >> 24;
        }
    }
}
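To trace the arithmetic in the test case above: mask = 0b1101 (13), len = 4 and input = 0b1100101 (101). The low 4 bits of the input are 0b0101 (5), and 13 ^ 5 = 0b1000 has one set bit, so dendrite returns 1 - 1/4 = 0.75.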
A 64-bit version here - dotnetfiddle
using System;
namespace ConsoleApplication1
{
    public class Program
    {
        public static void Main(string[] args)
        {
            ulong a = Convert.ToUInt64("0000000000000000000000000000000000000000000000000000000000001101", 2);
            ulong b = Convert.ToUInt64("1110010101100101011001010110110101100101011001010110010101100101", 2);
            Console.WriteLine(dendrite(a, 4, b));
        }
        private static float dendrite(ulong mask, int len, ulong input)
        {
            return 1 - getBitCount(mask ^ (input & (ulong.MaxValue >> (64 - len)))) / (float)len;
        }
        private static ulong getBitCount(ulong bits)
        {
            bits = bits - ((bits >> 1) & 0x5555555555555555UL);
            bits = (bits & 0x3333333333333333UL) + ((bits >> 2) & 0x3333333333333333UL);
            return unchecked(((bits + (bits >> 4)) & 0xF0F0F0F0F0F0F0FUL) * 0x101010101010101UL) >> 56;
        }
    }
}
I came up with this code:
static float dendrite(ulong input, ulong mask)
{
    // bits that are set in the mask and also set in the input
    ulong samebits = mask & ~(input ^ mask);
    // count the number of matching bits
    int correct = cardinality(samebits);
    // count the number of bits set in the mask
    int inmask = cardinality(mask);
    // compute the fraction (0.0 to 1.0)
    return inmask == 0 ? 0f : correct / (float)inmask;
}
// this is a little hack to count the number of bits set to one in a 64-bit word
static int cardinality(ulong word)
{
    const ulong mult = 0x0101010101010101;
    const ulong mask1h = (~0UL) / 3 << 1;
    const ulong mask2l = (~0UL) / 5;
    const ulong mask4l = (~0UL) / 17;
    word -= (mask1h & word) >> 1;
    word = (word & mask2l) + ((word >> 2) & mask2l);
    word += word >> 4;
    word &= mask4l;
    return (int)((word * mult) >> 56);
}
This will check 64 bits at a time. If you need more than that, you can split the input data into 64-bit words, compare them one by one, and compute the average result.
Here's a .NET fiddle with the code and a working test case:
https://dotnetfiddle.net/5hYFtE
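Note that this answer scores only the bits that are set in the mask. For example, with mask = 0b1101 (13) and input = 0b0101 (5): input ^ mask = 0b1000, so samebits = 13 & ~0b1000 = 0b0101, correct = 2 and inmask = 3, giving 2/3 ≈ 0.67, whereas a positional comparison over all 4 mask bits would give 3/4. That difference in semantics helps explain why the averaged results above vary between implementations.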
I would change the code to something along these lines:
// hardcoded bitmask
byte mask = 255;
float dendrite(byte input) {
    int correct = 0;
    // store the XORed result (the cast is needed because ^ on bytes yields an int)
    byte xored = (byte)(input ^ mask);
    // loop through each of the 8 bits
    for (int i = 0; i < 8; i++) {
        // if the bit is 0 then it was correct
        if ((xored & (1 << i)) == 0)
            correct++;
    }
    return (float)correct / 8f;
}
The above uses a mask and input of 8 bits, but of course you could modify this to use a 4-byte integer and so on.
Not sure if this will work as expected, but it might give you some clues on how to proceed.
For example, if you only want to check the first 4 bits, you could change the code to something like:
float dendrite(byte input) {
    // hardcoded bitmask, i.e. 1101
    byte mask = 13;
    // number of bits to check
    byte bits = 4;
    int correct = 0;
    // store the XORed result
    byte xored = (byte)(input ^ mask);
    // loop through each bit; notice that we only check the first 4 bits
    for (int i = 0; i < bits; i++) {
        // if the bit is 0 then it was correct
        if ((xored & (1 << i)) == 0)
            correct++;
    }
    return (float)correct / (float)bits;
}
Of course it might be faster to actually use an int instead of a byte.

C# get bits from int as int

I need to get 4 ints from an integer with the following format:
int1 14 bits
int2 14 bits
int3 3 bits
int4 1 bit
I've found a lot of articles on reading individual bits from an int, but I can't find anything on reading multiple values from a single integer, so any help would be appreciated!
Assuming it is from left to right:
int int1 = x >> 18;
int int2 = (x >> 4) & 0x3fff;
int int3 = (x >> 1) & 7;
int int4 = x & 1;
So let's assume your 32-bit int is arranged bit-wise as follows, and your target variables are X, Y, Z and W.
31                                0 # bit index
XXXXXXXXXXXXXX YYYYYYYYYYYYYY ZZZ W # arrangement
......14...... ......14...... .3. 1 # bits per variable
............18 .............4 ..1 0 # required right-shift
To get at X, you would shift your integer right by 18 bits, then mask it by ((1<<14)-1) (that is 0x3FFF), etc.:
x = (i >> 18) & 0x3FFF
y = (i >> 4) & 0x3FFF
z = (i >> 1) & 7 # ((1<<3)-1) = 7
w = i & 1
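A minimal C# sketch of the same extraction (the Unpack name and tuple return type are my own choices for illustration):
static (int x, int y, int z, int w) Unpack(int i)
{
    int x = (i >> 18) & 0x3FFF; // top 14 bits
    int y = (i >> 4) & 0x3FFF;  // next 14 bits
    int z = (i >> 1) & 0x7;     // next 3 bits
    int w = i & 0x1;            // lowest bit
    return (x, y, z, w);
}
// e.g. packing (5 << 18) | (9 << 4) | (3 << 1) | 1 and unpacking it again yields (5, 9, 3, 1)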
You can use bitwise AND to get this (these masks isolate each field in place; you still need to shift the results down to get the plain values):
int source = somevalue;
int int1 = 16383 & source;      // 0x00003FFF
int int2 = 268419072 & source;  // 0x0FFFC000
int int3 = 1879048192 & source; // 0x70000000

Get the sum of powers of 2 for a given number - C#

I have a table with different codes, and their Ids are powers of 2 (2^0, 2^1, 2^2, 2^3, ...).
Based on different conditions my application will assign a value to the "Status" variable.
For example:
Status = 272 (which is 2^8 + 2^4)
Status = 21 (which is 2^4 + 2^2 + 2^0)
If Status = 21 then my method (C#) should tell me that 21 is the sum of 16 + 4 + 1.
You can test each bit of the input value to see whether it is set:
int value = 21;
for (int i = 0; i < 32; i++)
{
    int mask = 1 << i;
    if ((value & mask) != 0)
    {
        Console.WriteLine(mask);
    }
}
Output:
1
4
16
for (uint currentPow = 1; currentPow != 0; currentPow <<= 1)
{
    if ((currentPow & QStatus) != 0)
        Console.WriteLine(currentPow); // or save or print some other way
}
For QStatus == 21 it will give:
1
4
16
Explanation:
A power of 2 has exactly one 1 in its binary representation. We take that one to be the rightmost (least significant) bit and iteratively push it leftwards (towards more significant positions) until the number overflows and becomes 0. Each time we check whether currentPow & QStatus is not 0.
This can probably be done much cleaner with an enum with the [Flags] attribute set.
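A minimal sketch of that idea (the StatusCodes enum and its member names are made up for illustration):
using System;

[Flags]
enum StatusCodes
{
    None  = 0,
    CodeA = 1 << 0, // 1
    CodeB = 1 << 2, // 4
    CodeC = 1 << 4, // 16
    CodeD = 1 << 8  // 256
}

class FlagsDemo
{
    static void Main()
    {
        var status = (StatusCodes)21; // CodeA | CodeB | CodeC
        foreach (StatusCodes flag in Enum.GetValues(typeof(StatusCodes)))
        {
            if (flag != StatusCodes.None && status.HasFlag(flag))
                Console.WriteLine((int)flag); // prints 1, 4, 16
        }
    }
}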
This is basically binary (because binary is also base 2). You can bitshift values around!
uint i = 87;
uint mask;
for (int j = 0; j < 8 * sizeof(uint); j++) // 32 bits, not 4 bytes
{
    mask = 1u << j;
    if ((i & mask) != 0)
    {
        // 2^j is a factor
    }
}
You can use bitwise operators for this (assuming that you have few enough codes that the values stay in an integer variable).
a & (a - 1) gives you back a with its lowest set bit cleared. You can use that to get the value of the corresponding flag, like:
while (QStatus != 0) {
    uint nxtStatus = QStatus & (QStatus - 1);
    processFlag(QStatus ^ nxtStatus); // QStatus ^ nxtStatus isolates the lowest set bit
    QStatus = nxtStatus;
}
processFlag will be called with the set values in increasing order (e.g. 1, 4, 16 if QStatus is originally 21).
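Tracing this for QStatus = 21 (binary 10101): 21 & 20 = 20 and 21 ^ 20 = 1 is processed; then 20 & 19 = 16 and 20 ^ 16 = 4 is processed; then 16 & 15 = 0 and 16 ^ 0 = 16 is processed; QStatus is now 0 and the loop ends.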

Convert ushort to string

I have a ushort that consists of two bytes.
15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
Y = bits 10-0, two's complement mantissa integer.
N = bits 15-11, two's complement integer.
X = Y * 2^N
I need to output X as a string.
This is what I have tried:
private string ConvertLinearToString(ushort data)
{
    int N;
    int Y;
    int X;
    N = Convert.ToInt16(GetBitRange((byte)data, 0, 5));
    Y = Convert.ToInt16(GetBitRange((byte)data, 6, 11));
    X = Convert.ToInt16(Y * Math.Pow(2, (double)N));
    return Convert.ToString(X);
}
private byte GetBitRange(byte b, int offset, int count)
{
    return Convert.ToByte((b >> offset) & ((1 << count) - 1));
}
I'm stuck trying to convert the GetBitRange() formula to use ushort, and also on how to handle the two's complement.
You can get the two's complement behavior by using a left shift to throw away the bits you don't want, followed by a right shift to sign-extend. If you implement GetBitRange using 32-bit integers like this:
private static int GetBitRange(int data, int offset, int count)
{
    return data << offset >> (32 - count);
}
Then just let the ushorts get converted to ints in ConvertLinearToString:
private static string ConvertLinearToString(ushort data)
{
    var n = GetBitRange(data, 16, 5);
    var y = GetBitRange(data, 21, 11);
    var value = y * Math.Pow(2, n);
    return value.ToString();
}
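As a quick sanity check (my own example value, not from the answer): for data = 0xF805 the top 5 bits are 11111, so n = -1, and the low 11 bits are 00000000101, so y = 5; the method then returns (5 * 2^-1).ToString(), i.e. "2.5".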
Just convert your unsigned short to an integer and use the method to do the conversion.
ushort u = 10;
string s = Convert.ToString((int)u);
This solution is reasonably safe from overflow. There may be some future version of C# where this operation would no longer be safe.
