how does this assignment work in c#? - c#

My question is: how does this assignment happen in C#? I mean, how does it calculate the answer 1 (with 257), and how does it calculate 0 (with 256)?
the code is:
int intnumber = 257;
byte bytenumber = (byte)intnumber; // the output of this code is 1
int intnumber = 256;
byte bytenumber = (byte)intnumber; // the output of this code is 0
My question is: what happens, so that the output of the first snippet is 1 and of the second one is 0?

A byte only occupies one byte in memory. An int occupies 4 bytes in memory. Here is the binary representation of some int values you've mentioned:
(most significant byte on the left, least significant byte on the right)
255: 00000000 00000000 00000000 11111111
256: 00000000 00000000 00000001 00000000
257: 00000000 00000000 00000001 00000001
You can also see how this works when casting negative int values to a byte. An int value of -255, when cast to a byte, is 1.
-255: 11111111 11111111 11111111 00000001
When you cast an int to a byte, only the least significant byte is assigned to the byte value. The three higher significance bytes are ignored.
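For example, a minimal sketch added here for illustration (the casts of constants need unchecked to compile):
int intNumber = 257;                         // 00000000 00000000 00000001 00000001
byte byteNumber = (byte)intNumber;           // keeps only the lowest byte: 00000001
Console.WriteLine(byteNumber);               // 1
Console.WriteLine(unchecked((byte)256));     // 0
Console.WriteLine(unchecked((byte)(-255)));  // 1, the lowest byte of -255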

A single byte only goes up to 255. The value wraps around to 0 for 256, 1 for 257, and so on.
The most significant bits are discarded and you're left with the rest.

255 is the maximum value that can be represented in a single byte:
Hex code: FF
256 does not fit in 1 byte. It takes 2 bytes to represent it:
01 00
Since you're trying to put that value into a variable of type byte (which, of course, can only hold 1 byte), the higher byte is "cropped" away, leaving only:
00
The same happens for 257 and, in fact, for any value.

1 is assigned because 257 exceeds the maximum byte value (255) by 2 units, and the overflow wraps around past 0 to 1.
0 is assigned because 256 exceeds it by 1 unit, wrapping to 0.

The byte data type holds a number between 0 and 255. When converting an int to a byte, the result is the number modulo 256.
byte = int % 256
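As a quick sketch (added for illustration), the cast and the modulo agree for non-negative values:
int n = 257;
Console.WriteLine((byte)n);   // 1
Console.WriteLine(n % 256);   // 1
// Note: for negative ints, C#'s % keeps the sign of the dividend,
// so the cast and the modulo only agree for non-negative values.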

Related

Why is ~1 in an 8-bit byte equal to -2?

I would expect that, due to the NOT, 00000001 would turn into 11111110, otherwise known as 127, or -126 if the far-left bit were counted as the sign under sign-and-magnitude.
Even in the case of two's complement, I would expect the result to be -127.
Why is it that the result is -2?
In two's complement:
-x = ~x + 1
By subtracting one from both sides we can see that:
~x = -x - 1
And so in your example, if we set x = 1 we get:
~1 = -1 - 1 = -2
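A quick check of the identity in C# (a sketch added here, not part of the original answer):
int x = 1;
Console.WriteLine(~x);      // -2
Console.WriteLine(-x - 1);  // -2; the two expressions match for any int x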
Consider how the numbers wrap around.
If we start with 00000010 (2) and take away one then it is:
00000010
- 00000001
---------
00000001
Which is 1. We "borrow 1" from the column to the left just as we do with decimal subtraction, except that because it's binary 10 - 1 is 1 rather than 9.
Take 1 away again and we of course get zero:
00000001
- 00000001
---------
00000000
Now, take 1 away from that, and we're borrowing 1 from the column to the left every time, and that borrowing wraps us around, so 0 - 1 = -1 is:
00000000
- 00000001
-----------
11111111
So -1 is all-ones.
This is even easier to see in the other direction, in that 11111111 plus one must be 00000000 as it keeps carrying one until it is lost to the left, so if x is 11111111 then it must be the case that x + 1 == 0, so it must be -1.
Take away another one and we have:
11111111
- 00000001
--------
11111110
So -2 is 11111110, and of course ~1 means flipping every bit of 00000001, which is also 11111110. So ~1 must be -2.
Another factor to note here is that arithmetic and complements in C# always convert up to int for anything smaller. For a byte the value 11111110 is 254, but because ~ converts up to int first, you get -2 rather than 254.
byte b = 1;
var i = ~b; // i is an int, and is -2
b = unchecked((byte)~b); // Forced back into a byte, now is 254
To convert a negative two's-complement number to its decimal representation we have to:
start scanning the bit string from right to left, until the first '1' is encountered
invert every bit to the left of that first '1'
Thus, in 11111110 the sign bit is 1 (a negative number), and the method above yields 00000010, which is decimal 2. In total, we thus get -2.
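Here is a small sketch of that rule for an 8-bit value (my own illustration, with a hypothetical Magnitude helper):
static int Magnitude(byte bits)
{
    int i = 0;
    while (i < 8 && ((bits >> i) & 1) == 0) i++;        // scan from the right for the first 1
    int result = bits;
    for (int j = i + 1; j < 8; j++) result ^= 1 << j;   // invert every bit to the left of it
    return result;
}
// Magnitude(0b11111110) == 2, and the set sign bit tells us the value is -2.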

c# - Get Specific Bit and Get first 14 bits of a ushort value

After reading all of the questions and answers on bit shifting/masking, I simply cannot wrap my head around it. I'm just not understanding how it works on a fundamental level. I've been able to achieve various techniques by using BitArray and BitConverter instead, but I really would like to understand bit shifting/masking better.
The specific need I have is to do the following:
I have a ushort: 0x810E (33038)
Using bit shifting/masking, I'd like to know how to:
Get the 16th bit. Result: 1
Get the 15th bit. Result: 0
Get a range of bits to create a new ushort value, specifically the first 14 bits. Result: 270
As I said, I'm able to perform these tasks using BitArray which is how I've gotten the correct results, but I'd like to understand how to perform these operations using bit shifting/masking.
Any help would be appreciated.
Masking single bits
As you probably know, ushort is a 16-bit value, so your given number 0x810E can also be written as
10000001 00001110
Because there is no shifting operator for ushort, the value is first converted to int.
So, if you want to get the 15th bit, you can take a single bit
00000000 00000000 00000000 00000001
and shift it 14 times to the left (right side is filled with 0)
00000000 00000000 01000000 00000000
and you have created a bitmask.
Now if you combine the mask and the value with a bitwise AND, you'll get only the value of the 15th bit:
00000000 00000000 10000001 00001110
& 00000000 00000000 01000000 00000000
= 00000000 00000000 00000000 00000000
which is 0 again. To access this bit, you'll have to shift the whole result back to the right 14 times and cast it to a ushort.
This can be expressed with the following code:
ushort value_15 = (ushort)(((1 << 14) & value) >> 14);
Can we do better?
Although this approach seems to be correct, there is a simpler way to do this: shift the original value to the right 14 times
(the result is 00000000 00000000 00000000 00000010, the left side is filled with 0) and perform a simple bitwise AND with 1:
00000000 00000000 00000000 00000010
& 00000000 00000000 00000000 00000001
= 00000000 00000000 00000000 00000000
This results in C# in:
ushort value_15 = (ushort)((value >> 14) & 1);
So you avoid one additional shift, and you get the same result even when using
signed numbers (where the highest bit, used for the sign, is preserved by the shift).
Masking a bit range
To mask a bit range, all you have to do is change your mask. So, to get the value of the lower 14 bits you can use the mask
00000000 00000000 10000001 00001110
& 00000000 00000000 00111111 11111111
= 00000000 00000000 00000001 00001110
In C# this can be expressed with
ushort first14bits = (ushort)((0xFFFF >> 2) & value);
where 0xFFFF is 00000000 00000000 11111111 11111111.
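Putting the pieces together, a short sketch based on the formulas above (variable names are mine):
ushort value = 0x810E;                            // 10000001 00001110
int bit16 = (value >> 15) & 1;                    // 1, the most significant bit
int bit15 = (value >> 14) & 1;                    // 0
ushort first14 = (ushort)(value & 0x3FFF);        // 270 (0x3FFF == 0xFFFF >> 2)
Console.WriteLine($"{bit16} {bit15} {first14}");  // 1 0 270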

C# primitive silent overflow - CLR via c#

I'm reading CLR via C# by Jeffrey Richter, and on page 115 there is an example of an overflow resulting from arithmetic operations on primitives. Can someone please explain?
Byte b = 100;
b = (Byte) (b+200); // b now contains 44 (or 2C in Hex).
I understand that there should be an overflow, since byte is an unsigned 8-bit value, but why does its value equal 44?
100+200 is 300; 300 is (in bits):
1 0010 1100
Of this, only the last 8 bits are kept, so:
0010 1100
which is: 44
The binary representation of 300 is 100101100. That's nine bits, one more than the byte type has room for. Therefore the higher bit is discarded, causing the result to be 00101100. When you translate this value to decimal you get 44.
A byte can represent integers in the range between 0 and 255 (inclusive). When you pass a value greater than 255, like 300, the value 300 - 256 = 44 is stored. This happens because a byte consists of 8 bits and each bit can be either 0 or 1. So you can represent 2^8 = 256 integers using a byte, starting from 0.
Actually, you have to divide your number by 256; only the remainder of that division can be represented by a byte.
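A small sketch to confirm the arithmetic (added for illustration):
byte b = 100;
b = (byte)(b + 200);           // b + 200 is computed as an int (300), then truncated to a byte
Console.WriteLine(b);          // 44
Console.WriteLine(300 % 256);  // 44, the same remainder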

C# Unsigned bytes Encryption to Java Signed bytes Decryption

I have an application in C# that encrypts part of my files (because they are big files) using RijndaelManaged. So I convert my file to byte arrays and encrypt only a part of it.
Then I want to decrypt the file using Java. So I have to decrypt only the part of the file (meaning those bytes) that was encrypted in C#.
Here the problem comes. Because in C# we have unsigned bytes and in Java we have signed bytes, my encryption and decryption are not working the way I want.
In C# I have joined the encrypted bytes and the normal bytes together and saved them with File.WriteAllBytes. So I can't use sbyte here, or at least I don't know how to:
byte[] myEncryptedFile = new byte[myFile.Length];
for (long i = 0; i < encryptedBlockBytes.Length; i++)
{
myEncryptedFile[i] = encryptedBlockBytes[i];
}
for (long i = encryptedBlockBytes.Length; i < myFile.Length; i++)
{
myEncryptedFile[i] = myFileBytes[i];
}
File.WriteAllBytes(@"C:\enc_file.big", myEncryptedFile);
( And there is an exact same code for decryption in Java )
So my questions are:
Is there any WriteAllSBytes in C#?
Or can I use unsigned bytes in Java?
Or any other solutions to my problem?
Although you cannot use unsigned bytes in Java, you may simply ignore the issue.
AES - and all modern symmetric ciphers - operates on bytes, and the input and output have been defined to be bytes (or octets). Input and output have been standardized by NIST and test vectors are available.
If you look at the separate bit content of the bytes then {200, 201, 202} in C# and {(byte)200, (byte)201, (byte)202} in Java are identical. This is because Java uses a two's-complement representation of bytes.
Take the number 200 as an integer: this will be 11001000 in binary, representing the number -56 in Java if used in a (signed) byte in two's complement. Now symmetric ciphers will simply transform these bits into other bits (normally operating on a full block of bits).
Once you have retrieved the answer you will see that they are identical both in C# and Java when you look at the separate bits. C# will however interpret those as unsigned values and Java as signed values.
If you want to print out or use these values as signed numbers in Java then you have to convert them to positive signed integers. The way to do this is to use int p = b & 0xFF.
This does the following (I'll use the number 200 again):
The (negative) byte value is expanded to a signed integer, remembering the sign bit:
11001000 becomes 11111111 11111111 11111111 11001000
This value is "masked" with 0xFF or 00000000 00000000 00000000 11111111 by performing the binary AND operator:
11111111 11111111 11111111 11001000 & 00000000 00000000 00000000 11111111 = 00000000 00000000 00000000 11001000
This value is identical to the value 200 as a signed integer.
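The same bit reinterpretation can be seen from the C# side using sbyte, which matches Java's signed byte (a sketch added for illustration):
byte unsignedByte = 200;                           // 11001000
sbyte signedByte = unchecked((sbyte)unsignedByte); // -56, the same bits reinterpreted as signed
int backToPositive = signedByte & 0xFF;            // 200 again, after sign extension and masking
Console.WriteLine($"{unsignedByte} {signedByte} {backToPositive}"); // 200 -56 200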

C# Decimal.Epsilon

Why doesn't Decimal data type have Epsilon field?
From the manual, the range of decimal values is ±1.0 × 10^-28 to ±7.9 × 10^28.
The description of Double.Epsilon:
Represents the smallest positive Double value greater than zero
So it seems, Decimal has such a (non-trivial) value too. But why isn't it easily accessible?
I do understand that +1.0 × 10^-28 is exactly the smallest positive Decimal value greater than zero:
decimal Decimal_Epsilon = new decimal(1, 0, 0, false, 28); //1e-28m;
By the way, there are a couple of questions that give information about Decimal data type's internal representation:
decimal in c# misunderstanding?
What's the second minimum value that a decimal can represent?
Here's an example where Epsilon would be useful.
Let's say I have a weighted sum of values from some sampling set and the sum of weights (or count) of samples taken. Now I want to compute the weighted mean value. But I know that the sum of weights (or count) may still be zero. To prevent division by zero I could do if... else... and check for the zero. Or I could write it like this:
T weighted_mean = weighted_sum / (weighted_count + T.Epsilon)
This code is shorter to my eye. Alternatively, I can skip the + T.Epsilon and instead initialize with:
T weighted_count = T.Epsilon;
I can do this when I know that the values of real weights are never close to Epsilon.
And for some data types and use cases this may even be faster, since it does not involve branches. As I understand it, processors cannot take both branches for computation, even when the branches are short. And I may know that the zeros occur randomly at a 50% rate :=) For Decimal, the speed aspect is likely not important in the first place, though.
My code may be generic (for example, generated) and I do not want to write separate code for decimals. Therefore one would like Decimal to have a similar interface to the other real-valued types.
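As a concrete sketch of that pattern for decimal (using the 1e-28 value as a stand-in for the missing Decimal.Epsilon; names are illustrative):
decimal epsilon = new decimal(1, 0, 0, false, 28);  // 1e-28m
decimal weightedSum = 0m;
decimal weightedCount = epsilon;                    // seed so the division below never hits zero
// ... accumulate weightedSum and weightedCount from the samples here ...
decimal weightedMean = weightedSum / weightedCount; // 0 when no samples were added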
Contrary to that definition, epsilon is actually a concept used to eliminate the ambiguity of conversion between binary and decimal representations of values. For example, 0.1 in decimal doesn't have a simple binary representation, so when you declare a double as 0.1, it is actually setting that value to an approximate representation in binary. If you add that binary representation number to itself 10 times (mathematically), you get a number that is approximately 1.0, but not exactly. An epsilon will let you fudge the math, and say that the approximate representation of 0.1 added to itself can be considered equivalent to the approximate representation of 0.2.
This approximation that is caused by the nature of the representations is not needed for the decimal value type, which is already a decimal representation. This is why any time you need to deal with actual numbers and numbers which are themselves not approximations (i.e. money as opposed to mass), the correct floating point type to use is decimal and not double.
Smallest number I can calculate for decimal is:
public static decimal DecimalEpsilon = (decimal) (1 / Math.Pow(10, 28));
This is from running the following in a C# Interactive Window:
for (int power = 0; power <= 50; power++) { Console.WriteLine($"1 / 10^{power} = {((decimal)(1 / (Math.Pow(10, power))))}"); }
Which has the following output:
1 / 10^27 = 0.000000000000000000000000001
1 / 10^28 = 0.0000000000000000000000000001
1 / 10^29 = 0
1 / 10^30 = 0
If we just think about the 96 bit mantissa, the Decimal type can be thought of as having an epsilon equal to the reciprocal of a BigInteger constructed with 96 set bits. That is obviously too small a number to represent with current intrinsic value types.
In other words, we would need a "BigReal" value to represent such a small fraction.
And frankly, that is just the "granularity" of the epsilon. We would then need to know the exponent (bits 16-23 of the highest Int32 from GetBits()) to arrive at the "real" epsilon for a GIVEN decimal value.
Obviously, the meaning of "epsilon" for Decimal is variable. You can use the granularity epsilon with the exponent and come up with a specific epsilon for a GIVEN decimal.
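For example, a simplified sketch of my own that derives a per-value epsilon from the scale alone (ignoring the 96-bit granularity), reading bits 16-23 of the flags word returned by decimal.GetBits:
static decimal EpsilonOf(decimal d)
{
    int flags = decimal.GetBits(d)[3];          // the fourth element holds the sign and the scale
    byte scale = (byte)((flags >> 16) & 0xFF);  // bits 16-23: digits after the decimal point
    return new decimal(1, 0, 0, false, scale);  // one unit in the last place of d
}
// EpsilonOf(1.0m) == 0.1m, EpsilonOf(1.00m) == 0.01m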
But consider the following rather problematic situation:
[TestMethod]
public void RealEpsilonTest()
{
var dec1 = Decimal.Parse("1.0");
var dec2 = Decimal.Parse("1.00");
Console.WriteLine(BitPrinter.Print(dec1, " "));
Console.WriteLine(BitPrinter.Print(dec2, " "));
}
DEC1: 00000000 00000001 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00001010
DEC2: 00000000 00000010 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 01100100
Despite the two parsed values seemingly being equal, their representation is not the same!
The moral of the story is... be very careful that you thoroughly understand Decimal before THINKING that you understand it !!!
HINT:
If you want the epsilon for Decimal (theoretically), create a UNION ([StructLayout(LayoutKind.Explicit)]) combining Decimal (128 bits), BigInteger (96 bits), and Exponent (8 bits). The getter for Epsilon would return the correct BigReal value based on the granularity epsilon and the exponent; assuming, of course, the existence of a BigReal definition (which, I've been hearing for quite some time, will be coming).
The granularity epsilon, by the way, would be a constant or a static field...
static BigReal grain = new BigReal(1 / new BigInteger(new byte[] { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF }));
HOMEWORK: Should the last byte to BigInteger be 0xFF or 0x7F (or something else altogether)?
PS: If all of that sounds rather more complicated than you were hoping, ... consider that comp science pays reasonably well. /-)
