Let's say I have two byte variables:
byte a = 255;
byte b = 121;
byte c = (byte)(a + b);
Console.WriteLine(c.ToString());
output:120
Please explain how this is adding the values. I know it's exceeding the size limit of a byte, but I don't know exactly what operation it performs in that situation, because it doesn't look like it's simply chopping the result.
Thanks
EDIT: sorry, the answer is 120.
You are overflowing the byte's maximum value of 255, so it wraps around and starts again from 0.
So: a + b is an integer = 376
Your code is equivalent to:
byte c = (byte)376;
That's one of the reasons why adding two bytes returns an integer. Casting it back to a byte should be done at your own risk.
If you want to store the integer 376 into bytes you need an array:
byte[] buffer = BitConverter.GetBytes(376);
As you can see, the resulting array contains 4 bytes, which is what is needed to store a 32-bit integer.
It gets obvious when you look at the binary representation of the values:
var  | decimal | binary
-----|---------|------------
a    |     255 |   1111 1111
b    |     121 |   0111 1001
a+b  |     376 | 1 0111 1000

This gets truncated to 8 bits; the overflow bit is disregarded when the result is cast to byte:

c    |         |   0111 1000  =>  120
As others are saying, you are overflowing; the a+b operation results in an int, which you are explicitly casting to a byte. Documentation is here; essentially, in an unchecked context, the cast is done by truncating the most significant bits.
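To make that concrete, here is a small sketch of my own (not from the linked documentation), assuming the usual using System;: the default unchecked cast truncates, while a checked cast turns the lost bits into an exception.

byte a = 255;
byte b = 121;
int sum = a + b;                       // byte + byte is evaluated as int: 376

byte truncated = unchecked((byte)sum); // unchecked cast keeps only the low 8 bits
Console.WriteLine(truncated);          // prints 120

try
{
    byte guarded = checked((byte)sum); // checked cast refuses to drop the overflow bits
}
catch (OverflowException)
{
    Console.WriteLine("376 does not fit in a byte");
}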
I guess you mean byte c= (byte)(a + b);
On my end the result here is 120, and that is what I would expect.
a+b equals 376, and all bits that represent 256 and up get stripped (since a byte only holds 8 bits), so 120 is what you are left with inside your byte.
Related
I want to recover this long value, which was mistakenly converted to an int:
long longValue = 11816271602;
int intValue = (int)longValue; // gives -1068630286
long ActualLong = ?
Right-shifting by 32 bits, (intValue >> 32), gives an incorrect result.
Well, the initial value
long longValue = 11816271602L; // 0x02C04DFEF2
is five bytes long. When you cast the value to Int32 which is four bytes long
int intValue = (int)longValue; // 0xC04DFEF2 (note 1st byte 02 absence)
you inevitably lose the 1st byte and can't restore it.
Unfortunately this is not possible. If you take a look at the binary representation you can see the reason:
10 1100 0000 0100 1101 1111 1110 1111 0010
As you can see, this number is 34 bits wide, not just 32.
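A quick sketch of my own showing why no cast can get the original value back once those top bits are gone:

long longValue = 11816271602L;                // 0x2C04DFEF2, 34 significant bits
int intValue = (int)longValue;                // keeps only the low 32 bits: -1068630286

long signExtended = intValue;                 // -1068630286
long zeroExtended = (uint)intValue;           // 3226337010

Console.WriteLine(signExtended == longValue); // False
Console.WriteLine(zeroExtended == longValue); // False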
The best way to describe my misunderstanding is with the code itself:
var emptyByteArray = new byte[2];
var specificByteArray = new byte[] {150, 105}; //0x96 = 150, 0x69 = 105
var bitArray1 = new BitArray(specificByteArray);
bitArray1.CopyTo(emptyByteArray, 0); //[0]: 150, [1]:105
var hexString = "9669";
var intValueForHex = Convert.ToInt32(hexString, 16); //16 indicates to convert from hex
var bitArray2 = new BitArray(new[] {intValueForHex}) {Length = 16}; //Length=16 truncates the BitArray
bitArray2.CopyTo(emptyByteArray, 0); //[0]:105, [1]:150 (inversed, why??)
I've been reading that the BitArray iterates from the LSB to the MSB. What's the best way for me to initialize the BitArray from a hex string, then?
I think you are thinking about it wrong. Why are you even using a BitArray? Endianness is a byte-related convention; BitArray is just an array of bits. Since it is least-significant bit first, the correct way to store a 32-bit number in a bit array is with bit 0 at index 0 and bit 31 at index 31. This isn't just my personal bias towards little-endianness (bit 0 should be in byte 0, not byte 3, for goodness' sake), it's because BitArray stores bit 0 of a byte at index 0 in the array. It also stores bit 0 of a 32-bit integer in bit 0 of the array, no matter the endianness of the platform you are on.
For example, instead of your value 9669, let's look at 1234. No matter what platform you are on, that 16-bit number has the bit representation below. Because we write a hex number with the most significant hex digit 1 on the left and the least significant hex digit 4 on the right, bit 0 is on the right (a human convention):
1 2 3 4
0001 0010 0011 0100
No matter how an architecture orders the bytes, bit 0 of a 16-bit number always means the least-significant bit (the right-most here) and bit 15 means the most-significant bit (the left-most here). Due to this, your bit array will always be like this, with bit 0 on the left because that's the way I read an array (with index 0 being bit 0 and index 15 being bit 15):
---4--- ---3--- ---2--- ---1---
0 0 1 0 1 1 0 0 0 1 0 0 1 0 0 0
What you are doing is trying to impose the byte order you want onto an array of bits where it doesn't belong. If you want to reverse the bytes, then you'll get this in the bit array which makes a lot less sense, and means you'll have to reverse the bytes again when you get the integer back out:
---2--- ---1--- ---4--- ---3---
0 1 0 0 1 0 0 0 0 0 1 0 1 1 0 0
I don't think this makes any kind of sense for storing an integer. If you want to store the big-endian representation of a 32-bit number in the BitArray, then what you are really storing is a byte array that just happens to be the big-endian representation of a 32-bit number, and you should convert to a byte array first, make it big-endian if necessary, and only then put it in the BitArray:
int number = 0x1234;
byte[] bytes = BitConverter.GetBytes(number);   // little-endian byte order on most platforms
if (BitConverter.IsLittleEndian)
{
    bytes = bytes.Reverse().ToArray();          // force big-endian order (Reverse needs System.Linq)
}
BitArray ba = new BitArray(bytes);
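Continuing that snippet, a small sanity check of my own: copying the bits back out into a 4-byte buffer (the BitArray holds 32 bits) shows the big-endian byte order was preserved.

var bytesBack = new byte[4];
ba.CopyTo(bytesBack, 0);
// bytesBack is { 0x00, 0x00, 0x12, 0x34 }
Console.WriteLine(BitConverter.ToString(bytesBack)); // 00-00-12-34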
I have two input integers and an output List<int> myoutputlist. My inputs are, let's say:
A=0x110101
B=0x101100
Then I calculate a C integer that depends on the A and B numbers. I have already coded its algorithm, so I can compute C. C shows which bits should be changed: a value of 1 marks a bit to change, a value of 0 marks a bit to leave unchanged. Only one bit should be changed at a time. Since C depends on the A and B inputs, sometimes 1 bit, sometimes 3 bits, sometimes 8 bits need to be changed. For the given A and B values, I have the following C integer:
C=0x010010 (1 marks a bit to change; the second and fifth bits should be changed in this case)
As C contains the value "1" two times, there should be 2 results in this case:
Result 1 - changing only the second bit, other bits the same as A (0x110101):
Change the second bit of A => D1 = 1101[1]1 (changed bit in brackets)
Result 2 - changing only the fifth bit, other bits the same as A (0x110101):
Change the fifth bit of A => D2 = 1[1]0101
What I am thinking of is using a for loop, shifting C step by step, applying an &1 mask, and checking whether it is equal to 1:
for (int i = 0; i < 32; i++)
{
    int D = (C >> i) & 1;                          // check whether the i-th bit of C is 1
    if (D == 1)
    {
        int E = (A & ~(1 << i)) | ((1 << i) & ~B); // clear the i-th bit of A, then replace it with "not B"'s bit
        myoutputlist.Add(E);
    }
}
I need to do lots of calculations, but the disturbing issue is that I need to check (D == 1) 32 times. I will use this many millions of times, and some calculations take about 2 minutes. I am looking for a faster way. Is there any idea or trick?
I hope I understood your question right.
You are looking for the XOR operator.
C = A ^ B
A 110101
B 101100
--------
C 011001
XOR will always be 1 if the two inputs are "different". See:
| A | B | XOR |
|---+---+-----|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
Then you will be able to loop through the bits of C like this:
for (int i = 0; i < 32; i++)
{
    bool bit = (C & (1 << i)) != 0;
}
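To address the speed concern from the question, here is a hedged sketch of a common alternative: instead of testing all 32 positions, peel off only the set bits of C with the classic c & (c - 1) trick. The names A, B and myoutputlist follow the question (interpreted as binary values, as in the example above), and the loop body mirrors the question's formula; whether this is measurably faster for your data is something you would have to benchmark.

using System;
using System.Collections.Generic;

class BitFlipSketch
{
    static void Main()
    {
        int A = 0b110101;
        int B = 0b101100;
        int C = A ^ B;                          // bits where A and B differ
        var myoutputlist = new List<int>();

        for (int c = C; c != 0; c &= c - 1)     // one iteration per set bit only
        {
            int mask = c & -c;                  // isolates the lowest set bit of c
            int E = (A & ~mask) | (mask & ~B);  // replace that bit of A with "not B"'s bit
            myoutputlist.Add(E);
        }

        foreach (var e in myoutputlist)
            Console.WriteLine(Convert.ToString(e, 2));
    }
}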
I need to convert 16-bit XRGB1555 into 24-bit RGB888. My function for this is below, but it's not perfect; i.e. a value of 0b11111 will give 248 as the pixel value, not 255. This function is for little-endian, but it can easily be modified for big-endian.
public static Color XRGB1555(byte b0, byte b1)
{
    return Color.FromArgb(0xFF, (b1 & 0x7C) << 1, ((b1 & 0x03) << 6) | ((b0 & 0xE0) >> 2), (b0 & 0x1F) << 3);
}
Any ideas how to make it work?
You would normally copy the highest bits down to the bottom bits, so if you had five bits as follows:
Bit position: 4 3 2 1 0
Bit variable: A B C D E
You would extend that to eight bits as:
Bit position: 7 6 5 4 3 2 1 0
Bit variable: A B C D E A B C
That way, all zeros remain all zeros, all ones become all ones, and values in between scale appropriately.
(Note that A,B,C etc aren't supposed to be hex digits - they are variables representing a single bit).
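Here is a hedged sketch (my own) of how that bit replication could look when applied to the method from the question, keeping the same b0/b1 little-endian layout; Color comes from System.Drawing:

using System.Drawing;

public static class Xrgb1555Sketch
{
    // Copy the top 3 of the 5 bits into the bottom 3 bits: 0 -> 0, 31 -> 255.
    private static int Expand5To8(int fiveBits) => (fiveBits << 3) | (fiveBits >> 2);

    public static Color XRGB1555(byte b0, byte b1)
    {
        int value = (b1 << 8) | b0;        // little-endian: b0 is the low byte
        int r = (value >> 10) & 0x1F;      // bits 10..14
        int g = (value >> 5) & 0x1F;       // bits 5..9
        int b = value & 0x1F;              // bits 0..4
        return Color.FromArgb(0xFF, Expand5To8(r), Expand5To8(g), Expand5To8(b));
    }
}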
I'd go with a lookup table. Since there are only 32 different values it even fits in a cache-line.
You can get the 8 bit value from the 5 bit value with:
return (x << 3) | (x >> 2);
The rounding might not be perfect, though. I.e. the result isn't always the closest to the input, but it is never further away than 1/255.
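For completeness, a tiny sketch (my own) of the lookup-table idea, precomputing all 32 expanded values with that same formula:

using System.Linq;

static class Expand5Table
{
    // 32 entries: index is the 5-bit value, entry is the expanded 8-bit value.
    public static readonly byte[] Five2Eight =
        Enumerable.Range(0, 32).Select(x => (byte)((x << 3) | (x >> 2))).ToArray();
}

// usage: byte eightBit = Expand5Table.Five2Eight[fiveBitValue];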
Is there any difference between arithmetic + and bitwise OR? In what way do they differ?
uint a = 10;
uint b = 20;
uint arithmeticresult = a + b;
uint bitwiseOR = a | b;
Both the results are 30.
Edit : Small changes to hide my stupidity.
(10 | 20) == 10 + 20 only because the 1-bits do not appear in the same digit.
    01010 = 10
or  10100 = 20
   —————————
    11110 = 30
However,
     011 = 3            011 = 3
or   110 = 6        +   110 = 6
   —————————         ——¹———————
     111 = 7           1001 = 9
       ^                   ^
   (1|1 == 1)          (1+1 = 2)
Counterexample:
2 + 2 == 4
2 | 2 == 2
Bitwise OR means: for each bit position in the two numbers, if either or both bits are on, then the result bit is on. Example:
0b01101001
|
0b01011001
=
0b01111001
(0b is a prefix for binary literals supported in some programming languages)
At the bit level, addition is similar to bitwise OR, except that it carries:
0b01101001
+
0b01011001
=
0b11000010
In your case, 10+20 and 10|20 happen to be the same because 10 (0b1010) and 20 (0b10100) have no 1s in common, meaning no carry happens in addition.
Try setting a = 230 and b = 120, and you'll observe the difference in results.
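For instance, a quick check of my own:

uint a = 230;             // 1110 0110
uint b = 120;             // 0111 1000
Console.WriteLine(a + b); // 350
Console.WriteLine(a | b); // 254 (1111 1110)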
The reason is very simple. In the arithmetic addition operation, each bit-wise add may generate a carry bit, which is added into the next bit pair at the subsequent position. In the case of bitwise OR, it just performs ORing, which never generates a carry bit.
The fact that you're getting the same result in your case is that the numbers coincidentally don't generate any carry bit during addition.
Bit-wise arithmetic addition (animated diagram): http://www.is.wayne.edu/drbowen/casw01/AnimAdd.gif
Bitwise OR goes through every bit of two digits and applies the following truth table:
A B | A|B
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1
Meanwhile the arithmetic + operator actually goes through every bit applying the following table (where c is the carry-in, a and b are the bits of your number, s is the sum and c' is the carry out):
C A B | S C'
0 0 0 | 0 0
0 0 1 | 1 0
0 1 0 | 1 0
0 1 1 | 0 1
1 0 0 | 1 0
1 0 1 | 0 1
1 1 0 | 0 1
1 1 1 | 1 1
For obvious reasons, the carry-in starts-off being 0.
As you can see, sum is actually a lot more complicated. As a side effect of this, though, there is an easy trick you can use to detect overflow when adding positive signed numbers. More specifically, we expect that a+b >= a|b; if that fails, then you have an overflow!
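A minimal sketch of that check (my own, assuming both operands are non-negative ints): in an unchecked context the wrapped-around sum drops below a | b exactly when the addition overflowed.

static bool AddOverflows(int a, int b)
{
    // assumes a >= 0 and b >= 0, as stated above
    int sum = unchecked(a + b);   // wraps around on overflow
    return sum < (a | b);         // without overflow, a + b >= a | b always holds
}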
The case where the two operations give the same result is when, every time a bit in one of the two numbers is set, the corresponding bit in the second number is NOT set. That is to say, you have three possible states: both bits aren't set, the bit is set in A but not B, or the bit is set in B but not A. In that case the arithmetic + and the bitwise OR would produce the same result... as would the bitwise XOR, for that matter.
Using arithmetic operations to manipulate bitmasks can produce unexpected results and even overflow. For instance, turning on the n-th bit with addition when it is already on will instead turn off the n-th bit and carry into the (n+1)-th bit. This will cause overflow if there are only n bits.
Example of turning on bit 2:
Arithmetic ADD     Bitwise OR

    0101               0101
  + 0100             | 0100
    ----               ----
    1001               0101   // expected result: 0101
Likewise, using arithmetic subtraction to turn off the n-th bit will fail if the n-th bit was not already on.
Example of turning off bit 2:
Arithmetic SUB     Bitwise AND~

    0001               0001
  - 0100            &~ 0100
    ----               ----
    0001               0001
  + 1100             & 1011
    ----               ----
    1101               0001   // expected result: 0001
So bitwise operators are safer than arithmetic operators when you are working with bitmasks.
The following bitwise operations have analogous arithmetic operations:
                     Bitwise           Arithmetic
Check n-th bit       x & (1 << n)      !(x - (1 << n))
Turn on n-th bit     x |= (1 << n)     x += (1 << n)
Turn off n-th bit    x &= ~(1 << n)    x -= (1 << n)
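A short sketch of my own showing the hazard from the examples above in code: the arithmetic version carries when the bit is already set, while OR is idempotent.

int mask = 0b0101;                              // bit 2 is already set
int viaAdd = mask + 0b0100;                     // 0b1001: the add carried into bit 3
int viaOr  = mask | 0b0100;                     // 0b0101: OR leaves the already-set bit alone

Console.WriteLine(Convert.ToString(viaAdd, 2)); // 1001
Console.WriteLine(Convert.ToString(viaOr, 2));  // 101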
Try a = 1 and b = 1 ;)
+ and | give different results when the bits at the same position are both 1:
  00000010          00000010
| 00000010        + 00000010
  --------          --------
  00000010          00000100