Get Sign of a Number without Logical Statement in C#

I want to get the sign of a number without a logical statement. There is already a predefined method, Math.Sign(), but I need to implement it in my own style.
The C# code I tried:
public int GetSign(int value)
{
    int bitFlag = 1;
    var m = Convert.ToString(value, 2);
    int length = m.Length;
    if (m[length - 1] == '1')
    {
        bitFlag = -1;
    }
    return bitFlag;
}
Condition:
If the Last bit is 1 then return -1
If the Last bit is 0 then return 1
Kindly assist me: how can I remove the above if statement?

An interesting thing about bit shifting:
If you right-shift a signed value, the leading (sign) bit is propagated to the right.
Example byte : 10000000
Example byte >> 1 : 11000000
Integers take 32 bits to represent. So what happens if we shift the bits by 31 places? The leading bit is always propagated, meaning all non-negative numbers become 0 and all negative numbers become -1.
Therefore :
public static int signOfInt(int input)
{
    return (input >> 31);
}
will return 0 for non-negative numbers and -1 for negative numbers.
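If you want 1 instead of 0 for non-negative input, a common branchless variation (my addition, not part of the original answer) is to OR the shifted value with 1. Note that unlike Math.Sign this maps 0 to 1:
public static int GetSignBranchless(int value)
{
    // value >> 31 is -1 (all ones) for negatives and 0 otherwise;
    // OR-ing with 1 turns 0 into 1 and leaves -1 unchanged.
    return (value >> 31) | 1;
}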

I think this will do it:
public int GetSign(int value)
{
    return -(((value & 1) << 1) - 1);
}
For an odd value, ((1 << 1) - 1) = 1, which negates to -1; for an even value, ((0 << 1) - 1) = -1, which negates to 1, matching the condition in the question.

Related

Counting Precision Digits

How to count precision digits on a C# decimal type?
e.g. 12.001 = 3 precision digits.
I would like to throw an error if a precision greater than x is present.
Thanks.
public int CountDecPoint(decimal d){
    string[] s = d.ToString().Split('.');
    return s.Length == 1 ? 0 : s[1].Length;
}
Normally the decimal separator is ., but to deal with different cultures this code is better (Application.CurrentCulture comes from Windows Forms; CultureInfo.CurrentCulture is the equivalent elsewhere):
public int CountDecPoint(decimal d){
    string[] s = d.ToString().Split(Application.CurrentCulture.NumberFormat.NumberDecimalSeparator[0]);
    return s.Length == 1 ? 0 : s[1].Length;
}
You can get the "scale" of a decimal like this:
static byte GetScale(decimal d)
{
    return BitConverter.GetBytes(decimal.GetBits(d)[3])[2];
}
Explanation: decimal.GetBits returns an array of four int values, of which we take only the last one. As described on the linked page, we need only the second-to-last byte of the four bytes that make up this int, and we extract it with BitConverter.GetBytes.
Examples: The scale of the number 3.14m is 2. The scale of 3.14000m is 5. The scale of 123456m is 0. The scale of 123456.0m is 1.
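A quick check of those values (a minimal sketch of my own, assuming the GetScale method above):
Console.WriteLine(GetScale(3.14m));     // 2
Console.WriteLine(GetScale(3.14000m));  // 5
Console.WriteLine(GetScale(123456m));   // 0
Console.WriteLine(GetScale(123456.0m)); // 1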
If the code may run on a big-endian system, you will likely have to change it to BitConverter.GetBytes(decimal.GetBits(d)[3])[BitConverter.IsLittleEndian ? 2 : 1] or something similar. I have not tested that. See the comments by relatively_random below.
I know I'm resurrecting an ancient question, but here's a version that doesn't rely on string representations and actually ignores trailing zeros. If that's even desired, of course.
public static int GetMinPrecision(this decimal input)
{
    if (input < 0)
        input = -input;
    int count = 0;
    input -= decimal.Truncate(input);
    while (input != 0)
    {
        ++count;
        input *= 10;
        input -= decimal.Truncate(input);
    }
    return count;
}
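To illustrate the difference from GetScale above (my own example): 3.14000m has a scale of 5, but this method ignores the trailing zeros:
Console.WriteLine(3.14000m.GetMinPrecision()); // 2
Console.WriteLine(123456m.GetMinPrecision());  // 0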
I would like to throw an error if a precision greater than x is present
This looks like the simplest way:
void AssertPrecision(decimal number, int decimals)
{
    if (number != decimal.Round(number, decimals, MidpointRounding.AwayFromZero))
        throw new Exception();
}
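A quick illustration of the behavior (my own example):
AssertPrecision(12.001m, 3); // passes: rounding to 3 decimals changes nothing
AssertPrecision(12.001m, 2); // throws: 12.001m != 12.00m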

Minimum number of bits to represent number

What is the most efficient way to find out how many bits are needed to represent some random int number?
For example, the number 30,000 is represented in binary as
111010100110000
So it needs 15 bits
You may try:
Math.Floor(Math.Log(30000, 2)) + 1
or
(int) Math.Log(30000, 2) + 1
int v = 30000; // 32-bit word to find the log base 2 of
int r = 0;     // r will be lg(v)
while ((v >>= 1) != 0) // unroll for more speed...
{
    r++;
}
For more advanced methods, see here http://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious
Note that this computes the index of the leftmost set bit (14 for 30000). If you want the number of bits, just add 1.
Try log(number)/log(2), then round it up to the next whole number. (Watch out for exact powers of two: there the logarithm is already whole, and you need to add 1 instead of rounding up.)
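On modern .NET (assuming .NET Core 3.0 or later, which none of the answers above require), System.Numerics.BitOperations gives you the integer log directly; a minimal sketch:
using System.Numerics;

static int BitsNeeded(uint value)
{
    // BitOperations.Log2 returns the index of the highest set bit
    // (it returns 0 for an input of 0), so bits needed = index + 1.
    return value == 0 ? 1 : BitOperations.Log2(value) + 1;
}
// BitsNeeded(30000) == 15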

Checking bits in a byte using for loop

I was recently studying C# when I came across the following for loop:
// Display the bits within a byte.
using System;

class ShowBits {
    static void Main() {
        int t;
        byte val;
        val = 123;
        for (t = 128; t > 0; t = t/2) {
            if ((val & t) != 0)
                Console.Write("1 ");
            if ((val & t) == 0)
                Console.Write("0 ");
        }
    }
}
I am not able to understand why t = t/2 is done in the increment/decrement section of the for loop. Please explain.
Decimal 128 is binary 10000000 - i.e. a mask for just the most significant bit of the byte. When you divide it by two, you get 01000000, i.e. the second most significant bit, etc.
Using & between the original value and the mask and just comparing with 0 indicates whether that bit is set in the original value.
Another alternative would be to shift the original value instead:
for (int i = 7; i >= 0; i--)
{
    int shifted = val >> i;
    // Take the bottom-most bit of the shifted value
    Console.Write("{0} ", shifted & 1);
}
// For val = 123 (binary 01111011) this prints: 0 1 1 1 1 0 1 1
It's looping over decreasing powers of two and using each value as a mask.
(base 10): 128, 64, 32, 16, 8, 4, 2, 1
(base 2): 10000000, 01000000, 00100000, 00010000, 00001000, 00000100, 00000010, 00000001
128 is written as 10000000 in binary, so we check whether the highest bit in the byte is on. Then we do t = t/2, which is t = 128/2 = 64, written as 01000000 in binary, and so on. Each division by two shifts the single set bit one place to the right.
The t is used as a mask for the bits in val.
So it starts at 128, 10000000 in binary.
When it is divided by 2, it becomes 64 - or 01000000.
This goes until it reaches 0.
Then in each iteration, the '&' is used to mask the bits in val with the current bit in t.

Question On Validate Function On Byte Integer in C#

How can I validate that a value is an xx-byte integer (signed or unsigned), where xx stands for 1, 2, 4, or 8?
Suppose I need to validate whether 65 (currently a string value) is a 1-byte integer or not. How can I write a tiny function to validate it? I don't know the exact meaning of "byte integer".
bool Is1Byte(string val)
{
    try
    {
        int num = int.Parse(val);
        return (num >= -128) && (num <= 127);
    }
    catch (Exception)
    {
        return false;
    }
}
It sounds like what you need is something that will test whether a number fits within a 1-byte integer. A 1-byte integer can hold a number between 0 and 255 (if unsigned) or between -128 and 127 (if signed), so you just need something that tests whether the number falls within this range. byte is the unsigned type in C# (sbyte is the signed one), so you just need:
return (x >= 0 && x <= 255);
Why these values? It's because a byte is eight bits of storage, which can store 2 to the 8th = 256 possible values.
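A sketch that generalizes this to all four sizes, signed and unsigned (my own illustration, assuming C# 8 for the switch expressions; the helper name is hypothetical):
static bool FitsInBytes(string val, int bytes, bool signed)
{
    // Parse into the widest type first, then range-check against the target size.
    if (signed)
    {
        if (!long.TryParse(val, out long n)) return false;
        return bytes switch
        {
            1 => n >= sbyte.MinValue && n <= sbyte.MaxValue,
            2 => n >= short.MinValue && n <= short.MaxValue,
            4 => n >= int.MinValue && n <= int.MaxValue,
            8 => true, // long.TryParse already enforced the 8-byte range
            _ => false
        };
    }
    if (!ulong.TryParse(val, out ulong u)) return false;
    return bytes switch
    {
        1 => u <= byte.MaxValue,
        2 => u <= ushort.MaxValue,
        4 => u <= uint.MaxValue,
        8 => true, // ulong.TryParse already enforced the 8-byte range
        _ => false
    };
}
// FitsInBytes("65", 1, signed: false) == true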

Number of unset bit left of most significant set bit?

Assuming the 64-bit integer 0x000000000000FFFF, which would be represented as
00000000 00000000 00000000 00000000
00000000 00000000 >11111111 11111111
how do I find the number of unset bits to the left of the most significant set bit (the one marked with >)?
In straight C (long long is 64-bit on my setup), taken from similar Java implementations (updated after a little more reading on Hamming weight):
A little more explanation: the top part sets every bit to the right of the most significant 1, and then negates the result. (i.e. all the 0's to the 'left' of the most significant 1 are now 1's and everything else is 0.)
Then I used a Hamming Weight implementation to count the bits.
unsigned long long i = 0x0000000000000000LLU; // the input value (the question's example would be 0x000000000000FFFFLLU)
i |= i >> 1;
i |= i >> 2;
i |= i >> 4;
i |= i >> 8;
i |= i >> 16;
i |= i >> 32;
// Highest bit in input and all lower bits are now set. Invert to set the bits to count.
i = ~i;
i -= (i >> 1) & 0x5555555555555555LLU;                                // each 2 bits now contains a count
i = (i & 0x3333333333333333LLU) + ((i >> 2) & 0x3333333333333333LLU); // each 4 bits now contains a count
i = (i + (i >> 4)) & 0x0f0f0f0f0f0f0f0fLLU;                           // each 8 bits now contains a count
i *= 0x0101010101010101LLU;                                           // add each byte to all the bytes above it
i >>= 56;                                                             // the number of bits
printf("Leading 0's = %llu\n", i);
I'd be curious to see how this compares efficiency-wise. I tested it with several values, though, and it seems to work.
Based on: http://www.hackersdelight.org/HDcode/nlz.c.txt
template<typename T> int clz(T v) {int n=sizeof(T)*8;int c=n;while (n){n>>=1;if (v>>n) c-=n,v>>=n;}return c-v;}
If you'd like a version that allows you to keep your lunch down, here you go:
int clz(uint64_t v) {
    int n = 64, c = 64;
    while (n) {
        n >>= 1;
        if (v >> n) c -= n, v >>= n;
    }
    return c - v;
}
As you'll see, you can save cycles here by careful analysis of the assembler, but the strategy is not a terrible one. The while loop runs lg(64) = 6 times; each pass converts the problem into counting the leading bits of an integer of half the size.
The if statement inside the while loop asks the question: "can I represent this integer in half as many bits?", or analogously, "if I cut this in half, have I lost it?". After the if() payload completes, our number is always in the lowest n bits.
At the final stage, v is either 0 or 1, and this completes the calculation correctly.
If you are dealing with unsigned integers, you could do this:
#include <math.h>
#include <stdint.h>

int numunset(uint64_t number)
{
    int nbits = sizeof(uint64_t) * 8;
    if (number == 0)
        return nbits;
    int first_set = floor(log2(number));
    return nbits - first_set - 1;
}
I don't know how it will compare in performance to the loop-and-count methods that have already been offered, because log2() could be expensive.
Edit:
This could cause some problems with high-valued integers, since the log2() function casts to double and some numerical issues may arise. You could use the log2l() function, which works with long double. A better solution would be to use an integer log2() function, as in this question.
// smear: set all bits below the most significant set bit
x |= x >> 1;
x |= x >> 2;
x |= x >> 4;
x |= x >> 8;
x |= x >> 16;
x |= x >> 32;
// x is now all 1s from the MSB of the input down to bit 0 (or 0 if the input was 0),
// so the bits that remain unset are exactly the leading zeros.
return 64 - count_bits_set(x);
Where count_bits_set is the fastest version of counting bits you can find. See https://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel for various bit counting techniques.
I'm not sure I understood the problem correctly. I think you have a 64-bit value and want to find the number of leading zeros in it.
One way would be to find the most significant bit and simply subtract its position from 63 (assuming lowest bit is bit 0). You can find out the most significant bit by testing whether a bit is set from within a loop over all 64 bits.
Another way might be to use the (non-standard) __builtin_clz in gcc.
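For what it's worth, in C# (the language of the other questions on this page) the same operation is exposed directly on .NET Core 3.0 and later; a minimal sketch:
using System;
using System.Numerics;

ulong value = 0x000000000000FFFF;
// LeadingZeroCount returns 64 for an input of 0 and compiles down to
// the hardware lzcnt instruction where available.
Console.WriteLine(BitOperations.LeadingZeroCount(value)); // 48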
I agree with the binary search idea. However, two points are important here:
The range of valid answers to your question is from 0 to 64 inclusive; in other words, there are 65 possible answers. I think (and am almost sure) that everyone who posted the "binary search" solution missed this point, so they'll get a wrong answer for either zero or a number with the MSB set.
If speed is critical, you may want to avoid the loop. There's an elegant way to achieve this using templates.
The following template code correctly finds the MSB of any unsigned integer type.
// helper
template <int bits, typename T>
bool IsBitReached(T x)
{
    const T cmp = T(1) << (bits ? (bits-1) : 0);
    return (x >= cmp);
}

template <int bits, typename T>
int FindMsbInternal(T x)
{
    if (!bits)
        return 0;

    int ret;
    if (IsBitReached<bits>(x))
    {
        ret = bits;
        x >>= bits;
    }
    else
        ret = 0;

    return ret + FindMsbInternal<bits/2, T>(x);
}

// Main routine
template <typename T>
int FindMsb(T x)
{
    const int bits = sizeof(T) * 8;
    if (IsBitReached<bits>(x))
        return bits;

    return FindMsbInternal<bits/2>(x);
}
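For the 0x000000000000FFFF value from the question, FindMsb returns 16 (the 1-based position of the most significant set bit), so the number of leading unset bits is 64 - 16 = 48.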
Here you go, pretty trivial to update as you need for other sizes...
int bits_left(unsigned long long value)
{
    static unsigned long long mask = 0x8000000000000000;
    int c = 64;

    // doh
    if (value == 0)
        return c;

    // check byte by byte to see what has been set
    if (value & 0xFF00000000000000)
        c = 0;
    else if (value & 0x00FF000000000000)
        c = 8;
    else if (value & 0x0000FF0000000000)
        c = 16;
    else if (value & 0x000000FF00000000)
        c = 24;
    else if (value & 0x00000000FF000000)
        c = 32;
    else if (value & 0x0000000000FF0000)
        c = 40;
    else if (value & 0x000000000000FF00)
        c = 48;
    else if (value & 0x00000000000000FF)
        c = 56;

    // skip the all-zero bytes, then scan bit by bit
    value <<= c;
    while (!(value & mask))
    {
        value <<= 1;
        c++;
    }
    return c;
}
Same idea as user470379's, but counting down ...
Assume all 64 bits are unset. While value is larger than 0 keep shifting the value right and decrementing number of unset bits:
/* untested */
int countunsetbits(uint64_t val) {
    int x = 64;
    while (val) { x--; val >>= 1; }
    return x;
}
Try
#include <limits.h>
#include <stdint.h>

int countBits(uint64_t value)
{
    int result = sizeof(value) * CHAR_BIT; // 64 for a 64-bit value
    while (value != 0)
    {
        --result;
        value = value >> 1; // remove bottom bits until all 1s are gone
    }
    return result;
}
Use log base 2 to get the position of the most significant set bit:
log2(2) = 1, meaning 0b10 -> 1
log2(4) = 2, and 5-7 => 2.xx, so 0b100 -> 2
log2(8) = 3, and 9-15 => 3.xx, so 0b1000 -> 3
log2(16) = 4, and so on...
The numbers in between become fractions of the log result, so truncating the value to an int gives you the position of the most significant set bit. Once you get this position, say b, 64 - b - 1 will be the answer.
#include <math.h>
#include <stdint.h>

int get_pos_msd(uint64_t n)
{
    return (int)log2(n); // position of the most significant set bit; undefined for n == 0
}

// last_zero = 64 - get_pos_msd(n) - 1
