I want to recover this long value, which was mistakenly converted to an int:
long longValue = 11816271602;
int intValue = (int)longValue; // gives -1068630286
long ActualLong = ?
Right-shifting the int by 32 bits (intValue >> 32) gives an incorrect result.
Well, the initial value
long longValue = 11816271602L; // 0x02C04DFEF2
is five bytes long. When you cast the value to Int32, which is four bytes long,
int intValue = (int)longValue; // 0xC04DFEF2 (note the leading 02 byte is gone)
you inevitably lose the leading byte and cannot restore it.
Unfortunately this is not possible. If you take a look at the binary representation you can see the reason:
10 1100 0000 0100 1101 1111 1110 1111 0010
As you can see, this number is 34 bits wide, not 32, so the top two bits are lost in the conversion.
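A short sketch of the truncation, using the literal values from the question:

```csharp
using System;

long longValue = 11816271602L;             // 0x2_C04D_FEF2 - needs 34 bits
int intValue = unchecked((int)longValue);  // keeps only the low 32 bits
Console.WriteLine(intValue);               // -1068630286

// Widening back only sign-extends; the lost high bits are gone for good
long roundTrip = intValue;
Console.WriteLine(roundTrip == longValue); // False
```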
Related
I have a question: I need to convert two ushort numbers, let's say 1 and 2, into one byte, something like
0010 for the value 2 and 0001 for the value 1,
so the result is a byte with value 00100001. Is this possible? I am not a master low-level coder.
This should work:
(byte)(((value1 & 0xF) << 4) | (value2 & 0xF))
I am not a master low level coder.
Well, now is the time to become one!
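For the values in the question (2 in the high nibble, 1 in the low nibble), a quick check of that expression:

```csharp
using System;

ushort value1 = 2;
ushort value2 = 1;
byte packed = (byte)(((value1 & 0xF) << 4) | (value2 & 0xF));
Console.WriteLine(Convert.ToString(packed, 2).PadLeft(8, '0')); // 00100001
Console.WriteLine(packed);                                      // 33
```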
Edit: this answer was made before the question was clear enough to understand exactly what was required. See other answers.
Use a 'bit mask' on the two numbers, then bitwise-OR them together.
I can't quite tell how exactly you want it, but let's say you wanted the first 4 bits of the first ushort, then the last 4 bits of the second ushort. Note that ushort is 16 bits wide.
ushort u1 = 44828; //10101111 00011100 in binary
ushort u2 = 65384; //11111111 01101000 in binary
int u1_first4bits = (u1 & 0xF000) >> 8;
The 'mask' is 0xF000. It masks over u1:
44828 1010 1111 0001 1100
0xF000 1111 0000 0000 0000
bitwise-AND 1010 0000 0000 0000
The problem is, this new number is still 16 bits long
- we must shift it by 8 bits with >> 8 to make it
0000 0000 1010 0000
Then another mask operation on the second number:
int u2_last4bits = u2 & 0x000F;
Illustrated:
65384 1111 1111 0110 1000
0x000F 0000 0000 0000 1111
bitwise-AND 0000 0000 0000 1000
Here, we did not need to shift the bits, as they are already where we want them.
Then we bitwise-OR them together:
byte b1 = (byte)(u1_first4bits | u2_last4bits);
//b1 is now 10101000 which is 168
Illustrated:
u1_first4bits 0000 0000 1010 0000
u2_last4bits 0000 0000 0000 1000
bitwise-OR 0000 0000 1010 1000
Notice that u1_first4bits and u2_last4bits needed to be of type int - this is because bitwise operations on smaller integral types are promoted to int in C#. To create our byte b1, we had to cast the bitwise-OR result to a byte.
Assuming you want to take the two ushorts (16 bits each) and convert them to a 32-bit representation (an integer), you can use the BitArray class, fill it with a 4-byte array, and convert it to an integer.
The following example will produce:
00000000 00000010 00000000 00000001
which is
131073
as integer.
ushort x1 = 1;
ushort x2 = 2;
//get the bytes of the ushorts. 2 bytes per number.
byte[] b1 = System.BitConverter.GetBytes(x1);
byte[] b2 = System.BitConverter.GetBytes(x2);
//Combine the two arrays to one array of length 4.
byte[] result = new byte[4];
result[0] = b1[0];
result[1] = b1[1];
result[2] = b2[0];
result[3] = b2[1];
//fill the bitArray.
BitArray br = new BitArray(result);
//test output.
int c = 0;
for (int i = br.Length - 1; i >= 0; i--)
{
    Console.Write(br.Get(i) ? "1" : "0");
    if (++c == 8)
    {
        Console.Write(" ");
        c = 0;
    }
}
//convert to int and output.
int[] array = new int[1];
br.CopyTo(array, 0);
Console.WriteLine();
Console.Write(array[0]);
Console.ReadLine();
Of course, you can alter this example and throw away one byte per ushort, but then it would no longer be a correct "conversion".
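If the goal is just the combined 32-bit value, the same result can be had without BitArray by shifting (a minimal sketch using the same x1/x2 as above):

```csharp
using System;

ushort x1 = 1;
ushort x2 = 2;
int combined = (x2 << 16) | x1;  // x2 in the high 16 bits, x1 in the low 16
Console.WriteLine(combined);     // 131073
```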
This question already has answers here:
No overflow exception for int in C#?
(6 answers)
Closed 6 years ago.
We have an instance where a value being assigned to an integer is larger than the int max value (2,147,483,647). It doesn't throw an error, it just assigns a smaller number to the integer. How is this number calculated?
This has been fixed by changing the int to a long but I'm interested as to how the smaller value is being calculated and assigned to the int.
int holds a 32-bit number, which means it has 32 binary digits of 0 or 1, stored in two's complement (a leading 0 means non-negative, a leading 1 means negative). For example:
1 in decimal == 0000 0000 0000 0000 0000 0000 0000 0001 as int32 binary
2 147 483 647 == 0111 1111 1111 1111 1111 1111 1111 1111
So, if you'll increment int.MaxValue, you will get next result:
2 147 483 648 == 1000 0000 0000 0000 0000 0000 0000 0000
In two's complement representation this binary number equals int.MinValue, or -2 147 483 648.
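You can watch the wrap-around happen, and opt into an exception instead, with unchecked/checked:

```csharp
using System;

int max = int.MaxValue;            // 2,147,483,647
int wrapped = unchecked(max + 1);  // wraps to int.MinValue, no exception
Console.WriteLine(wrapped);        // -2147483648

try
{
    int boom = checked(max + 1);   // now the overflow throws
}
catch (OverflowException)
{
    Console.WriteLine("overflow detected");
}
```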
int.MaxValue: 2,147,483,647
The logic in such loops is to keep track of the lowest number found. By starting the tracker at int.MaxValue, any lower number in the data will replace it.
Sample code:
using System;
class Program
{
    static void Main()
    {
        int[] integerArray = new int[]
        {
            10000,
            600,
            1,
            5,
            7,
            3,
            1492
        };
        // This will track the lowest number found
        int lowestFound = int.MaxValue;
        foreach (int i in integerArray)
        {
            // Because int.MaxValue is the initial value,
            // the first comparison always succeeds
            if (lowestFound > i)
            {
                lowestFound = i;
                Console.WriteLine(lowestFound);
            }
        }
    }
}
Output
10000
600
1
How to remove the leftmost bit?
I have a hexadecimal value BF
Its binary representation is 1011 1111
How can I remove the first bit, which is 1, so that it becomes 0111 1110? In other words, how do I also append a "0" at the end?
To set bit N of variable x to 0
x &= ~(1 << N);
How it works: The expression 1 << N is one bit shifted N times to the left. For N = 7, this would be
1000 0000
The bitwise NOT operator ~ inverts this to
0111 1111
Then the result is bitwise ANDed with x, giving:
xxxx xxxx
0111 1111
--------- [AND]
0xxx xxxx
Result: bit 7 (zero-based count starting from the LSB) is turned off, all others retain their previous values.
To set bit N of variable x to 1
x |= 1 << N;
How it works: this time we take the shifted bit and bitwise OR it with x, giving:
xxxx xxxx
1000 0000
--------- [OR]
1xxx xxxx
Result: Bit 7 is turned on, all others retain their previous values.
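Applied to the question's value 0xBF, a minimal sketch of both operations:

```csharp
using System;

int x = 0xBF;                         // 1011 1111
x &= ~(1 << 7);                       // clear bit 7: 0011 1111
Console.WriteLine(x.ToString("X2"));  // 3F

x |= 1 << 7;                          // set bit 7 again: 1011 1111
Console.WriteLine(x.ToString("X2"));  // BF
```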
Finding highest order bit set to 1:
If you don't know which is the highest bit set to 1 you can find out on the fly. There are many ways of doing this; a reasonable approach is
int x = 0xbf;
int highestSetBit = -1; // assume that to begin with, x is all zeroes
while (x != 0) {
++highestSetBit;
x >>= 1;
}
At the end of the loop, highestSetBit will be 7 as expected.
int i=0xbf;
int j=(i<<1) & 0xff;
or you could do:
(i*2) & 0xff
if you'd rather not do bit twiddling. >>1 is the equivalent of /2, and <<1 is the equivalent of *2.
Given that I have a uint value of 2402914, and I would like to grab the leftmost 17 bits, where is the fault in my logic in this code:
int testop = 0;
byte[] myArray = BitConverter.GetBytes(2402914);
fixed (byte* p = &myArray[0])
{
testop = *p >> 15;
}
my expected output is
50516.
You might want to get your expectations to match reality. A right-shift is equivalent to dividing by 2. You are effectively dividing by 2 fifteen times, which is the same as saying you are dividing by 2^15 = 32768. Note that 2402914 / 32768 = 73 (truncating the remainder).
Therefore, I would expect the result to be 73, not 50516.
In fact,
2402914_10 = 0000 0000 0010 0100 1010 1010 0110 0010_2
So that the left-most seventeen bits are
0000 0000 0010 0100 1
Note that
0000 0000 0010 0100 1 = 1 * 1 + 0 * 2 + 0 * 4 + 1 * 8 + 0 * 16 + 0 * 32 + 1 * 64
= 73
Note that you can obtain this result more simply with
int testop = 2402914 >> 15;
*p just gives you the first byte; it is equivalent to p[0]. You'll have to use shifting and ORing to combine bits from the first three bytes (or the last three bytes, depending on endianness...)
If this code is not a simplified version of something more complicated and you're actually trying to just extract the leftmost 17 bits from an int, this should do:
int testop = (someInt >> 15) & 0x1ffff;
(Edit: Added & 0x1ffff to make it work for negative integers too; thanks to #James.)
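Checked against the question's input:

```csharp
using System;

int someInt = 2402914;                   // 0000 0000 0010 0100 1010 1010 0110 0010
int testop = (someInt >> 15) & 0x1FFFF;  // top 17 bits of the 32-bit value
Console.WriteLine(testop);               // 73
```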
Wow, this has been a really fun puzzle to figure out. Not the programming part, but trying to figure out where you got the number 50516 and what you are trying to do with your code. It looks like you are taking the 16 least significant bits and ROTATING them LEFT 9 bits.
2402914: 0000 0000 0010 0100 1010 1010 0110 0010
left 9: 0100 1001 0101 0100 1100 010
match: ^^^^ ^^^
>>50516: 1100 0101 0101 0100
match: ^ ^^^^ ^^^^
right 7: 1 0101 0100 110 0010
int value2 = value & 0xffff;
int rotate9left = ((value2 << 9) & 0xffff) | ((value2) >> (16 - 9));
I don't know why you are using a byte array, but it seems like you think your fixed() statement is looping through the array, which it is not. The statement in the fixed block takes the byte value at myArray[0] and SHIFTs it right 15 bits (shifting fills with 0s, as opposed to rotating, which wraps the front bits around to the back). Any shift of 8 or more on a byte value gives you zero.
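Putting the rotation together with the question's input reproduces 50516 (the answer's two lines with value filled in):

```csharp
using System;

int value = 2402914;
int value2 = value & 0xFFFF;  // low 16 bits: 0xAA62
int rotate9left = ((value2 << 9) & 0xFFFF) | (value2 >> (16 - 9));
Console.WriteLine(rotate9left);  // 50516
```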
From what I understand, you can apply the bit-shift operator directly to the int datatype, rather than going through the trouble of the unsafe code.
For example:
2402914 >> 15 = 73
This ties to the result predicted by Jason.
Further, I note that
2402914 >> 5 = 75091
and 2402914 >> 6 = 37545
This suggests that your required result cannot be achieved by any similar right shift.
Say I have two byte variables:
byte a= 255;
byte b= 121;
byte c= (byte) (a + b);
Console.WriteLine(c.ToString());
output:120
Please explain how this addition works. I know it exceeds the size limit of a byte, but I don't know exactly what operation is performed in this situation, because it doesn't look like the result is simply chopped.
Thanks
EDIT: sorry, the answer is 120.
You are overflowing the byte range (its maximum is 255), so the value wraps around, starting again from 0.
So: a + b is an integer = 376
Your code is equivalent to:
byte c = (byte)376;
That's one of the reasons why adding two bytes returns an integer. Casting it back to a byte should be done at your own risk.
If you want to store the integer 376 into bytes you need an array:
byte[] buffer = BitConverter.GetBytes(376);
As you can see the resulting array contains 4 bytes now which is what is necessary to store a 32 bit integer.
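A compact sketch of what happens, with a checked variant that throws instead of truncating:

```csharp
using System;

byte a = 255;
byte b = 121;
int sum = a + b;                 // byte operands are promoted to int: 376
byte c = (byte)sum;              // unchecked cast keeps the low 8 bits
Console.WriteLine(c);            // 120

// byte d = checked((byte)sum);  // would throw OverflowException instead
```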
It becomes obvious when you look at the binary representation of the values:
var | decimal | binary
----|---------|------------
a   |     255 |   1111 1111
b   |     121 |   0111 1001
    |         |
a+b |     376 | 1 0111 1000
This gets truncated to 8 bits; the overflow bit is disregarded when casting the result to byte:
c   |         |   0111 1000 => 120
As others are saying, you are overflowing; the a+b operation results in an int, which you are explicitly casting to a byte. Documentation is here, essentially in an unchecked context, the cast is done by truncating the most significant bits.
I guess you mean byte c = (byte)(a + b);
On my end the result here is 120, and that is what I would expect.
a + b equals 376, and all bits that represent 256 and up get stripped (since a byte holds only 8 bits), so 120 is what you are left with inside your byte.