Byte conversion in C#

I don't get the correct result using this code. After entering 300 as an int, I get 44 as the converted byte value.
I was expecting 255, as that is the closest byte value to 300.
Console.Write("Enter int value - ");
int val1 = Convert.ToInt32(Console.ReadLine());
// converting int to byte
byte bval1 = (byte)val1;
Console.WriteLine("int explicit conversion");
Console.WriteLine("byte - {0}", bval1);

You have just experienced byte overflow. Try to use types that can actually hold the numbers you work with.
[edit]
The conversion can also be checked in C#:
bval1 = checked((byte)val1);
which throws the appropriate exception (OverflowException) when the value is too big.

A single unsigned byte can hold the range 0 to 255, or 0x00 to 0xFF. 300 is greater than 255, so it "wraps around", i.e. begins counting again from 0. 300 - 256 = 44; that's your wrap.
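A minimal sketch of the wrap (the variable names are illustrative): the unchecked cast keeps only the low 8 bits, which is the same as taking the value modulo 256 or masking with 0xFF.

```csharp
using System;

int val = 300;
// The explicit cast in an unchecked context keeps only the low 8 bits.
byte wrapped = unchecked((byte)val);
Console.WriteLine(wrapped);      // 44
Console.WriteLine(val % 256);    // 44: same as taking the remainder
Console.WriteLine(val & 0xFF);   // 44: same as masking the low byte
```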

Related

representing a hexadecimal value by converting it to char

I am outputting the value 0x11a1 by converting it to char.
Then I multiply 0x11a1 by itself and output it again, but I do not get what I expect:
by doing {int hgvvv = chch0;} and writing that to the console, I can see that the computer thinks 0x11a1 * 0x11a1 equals 51009, but it actually equals 20367169.
As a result I do not get what I want.
Could you please explain to me why?
char chch0 = (char)0x11a1;
Console.WriteLine(chch0);
chch0 = (char)(chch0 * chch0);
Console.WriteLine(chch0);
int hgvvv = chch0;
Console.WriteLine(hgvvv);
We know that 1 byte is 8 bits.
We know that a char in C# is 2 bytes, which is 16 bits.
If we multiply 0x11a1 X 0x11a1 we get 0x136c741.
0x136c741 in binary is 0001001101101100011101000001
Since we only have 16 bits, we only see the last 16 bits, which are: 1100011101000001
1100011101000001 in hex is 0xc741.
This is 51009 that you are seeing.
You are being limited by the type size of char in c#.
Hope this answer cleared things up!
By enabling the checked context in your project, or by adding it in your code like this:
checked {
char chch0 = (char)0x11a1;
Console.WriteLine(chch0);
chch0 = (char)(chch0 * chch0); // OverflowException
Console.WriteLine(chch0);
int hgvvv = chch0;
Console.WriteLine(hgvvv);
}
You will see that you get an OverflowException, because the char type (2 bytes) can only store values up to Char.MaxValue = 0xFFFF.
The value you expect (20367169) is larger than 0xFFFF, so you end up with only the two least significant bytes the type was able to store. Which is:
Console.WriteLine(20367169 & 0xFFFF);
// prints: 51009
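If the goal is the true product rather than the truncated char, one sketch is to let the multiplication happen in int and only cast back to char when a char is actually needed:

```csharp
using System;

char chch0 = (char)0x11A1;
// char operands are promoted to int, so the multiplication itself does not truncate.
int product = chch0 * chch0;
Console.WriteLine(product);         // 20367169
// Casting back to char keeps only the low 16 bits.
char truncated = (char)product;
Console.WriteLine((int)truncated);  // 51009
```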

How does type casting work from double to byte

I have code where a type cast returns a value much smaller than the original value.
static void Main(string[] args)
{
double a = 345.09;
byte c = (byte) a;
Console.WriteLine(c);
Console.ReadLine();
}
This returns the value 89. What is the reason behind it?
What happens is that only the last 8 bits are kept when you cast the number. Let me explain with an example using ints. I use int for simplicity, because if you cast a double to a non-floating-point type, the decimal places are discarded anyway.
First we look at the binary representation of the number 345:
string binaryInt = Convert.ToString(345, 2);
Console.WriteLine(binaryInt);
The output is :
1 0101 1001
When we now take only the last 8 bits, which is the size of a byte type:
string eightLastBits = binaryInt.Substring(binaryInt.Length-8);
Console.WriteLine(eightLastBits);
We get:
0101 1001
If we convert this to byte again you will see that the result is 89:
byte backAgain = Convert.ToByte(eightLastBits, 2);
Console.WriteLine(backAgain);
Output:
89
EDIT:
As pointed out by InBetween the int example is the real deal and what really happens during casting.
Looking at the implementation of the conversion of double to byte:
/// <internalonly/>
byte IConvertible.ToByte(IFormatProvider provider) {
return Convert.ToByte(m_value);
}
If we look at the Convert.ToByte method implementation, we see that the double is first converted to an integer:
public static byte ToByte(double value) {
return ToByte(ToInt32(value));
}
The range of byte is 0 to 255; you get 89 because the value starts counting again from 0 once it goes past 255, i.e. 345 % 256 = 89:
byte c = (byte) a; // 345 - 256 = 89
Console.WriteLine(c); // 89
The reason behind the range of byte:
byte stores 8 bits of data. 0000 0000 and 1111 1111 (8-bit binary values) are the min and max
values a byte variable can hold,
which as integers are 0 and 255.
Note that this is one of the few cases where the result isn't specified.
From the language spec, omitting the irrelevant cases
For a conversion from float or double to an integral type, the processing depends on the overflow checking context (The checked and unchecked operators) in which the conversion takes place:
In an unchecked context, the conversion always succeeds, and proceeds as follows.
If the value of the operand is NaN or infinite, [...]
Otherwise, the source operand is rounded towards zero to the nearest integral value. If this integral value is within the range of the destination type, [...]
Otherwise, the result of the conversion is an unspecified value of the destination type
Presumably, the implementation simply stripped off the higher bits after converting to the nearest integral value, leaving just the least significant 8 bits, since that's likely the most convenient thing to do at that point. But do note that according to this spec, 0 is just as valid a value as 89 for this conversion.
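To contrast the unspecified unchecked cast with the always-checked Convert path, a small sketch:

```csharp
using System;

double a = 345.09;
// The unchecked cast truncates toward zero; for out-of-range values the result
// is unspecified by the language, though 89 is what you will typically see.
Console.WriteLine((byte)a);
try
{
    // Convert.ToByte always range-checks and throws instead of wrapping.
    byte b = Convert.ToByte(a);
    Console.WriteLine(b);
}
catch (OverflowException)
{
    Console.WriteLine("OverflowException");
}
```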

Logic Behind Type-Casting [Explicitly] in c#

Explicit type casting basically means there is a possible loss of precision (or data).
Example:
short s = 256;
byte b = (byte) s;
Console.WriteLine(b);
// output : 0
or
short s = 257;
byte b = (byte) s;
Console.WriteLine(b);
// output : 1
or
short s = 1024;
byte b = (byte)s;
Console.WriteLine(b);
Console.ReadKey();
// output : 0
What is the basis behind this output?
short is a 2-byte number; byte is 1 byte!
When you cast from two bytes to one, you lose the high
8 bits: 1024 (short: "0000 0100" "0000 0000")
becomes (byte: "0000 0000") = 0.
The basis behind your output is simple:
Every number is represented as bits, and every 8 bits make 1 byte.
A byte holds numbers from 0 to 255.
If you convert a bigger type to a smaller one in programming, you lose bits, not just precision.
In your case you lose every bit above the 8th: if the value of the number you are converting fits in its low 8 bits, you keep it; if not, you get only what those low 8 bits contain.
P.S. Use the Windows calculator in programmer mode, or find an online converter, to see numbers as bits; it will make this clearer.
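The masking view above can be checked directly; a minimal sketch:

```csharp
using System;

short s = 1024;
// The cast keeps only the low 8 bits, exactly what masking with 0xFF does.
Console.WriteLine((byte)s);   // 0
Console.WriteLine(s & 0xFF);  // 0
s = 257;                      // 0x0101: the low byte is 0x01
Console.WriteLine((byte)s);   // 1
```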

Cannot implicitly convert type 'long' to 'ulong'. (+random ulong)

I've been declaring a class with a ulong field, but when I want to assign it in the constructor, the program tells me that something is wrong, and I don't understand what. It is a ulong; I don't want it to convert anything.
public class new_class
{
ulong data;
new_class()
{
Random rnd = new Random();
data = rnd.Next() * 4294967296 + rnd.Next(); //4294967296 = 2^32
}
}
Bonus points if you can tell me whether this declaration of a random ulong is correct: it should first generate a random int (32 bits) and store it in the first 32 bits of the ulong, then generate a new random number and store it in the second 32 bits.
Random.Next() only generates 31 bits of true randomness:
A 32-bit signed integer greater than or equal to zero and less than MaxValue.
Since the result is always positive, the most significant bit is always 0.
You could generate two ints and combine them to get a number with 62 bits of randomness (note both casts: there is no implicit conversion from int to ulong):
data = ((ulong)rnd.Next() << 32) + (ulong)rnd.Next();
If you want 64 bits of randomness, another method is to generate 8 random bytes using NextBytes:
byte[] buffer = new byte[8];
rnd.NextBytes(buffer);
data = BitConverter.ToUInt64(buffer, 0);
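Wrapped up as a small helper (the name NextUInt64 is my own, not a framework API), the NextBytes approach looks like:

```csharp
using System;

// Hypothetical helper: fill 8 random bytes and reinterpret them as one 64-bit value.
static ulong NextUInt64(Random rnd)
{
    byte[] buffer = new byte[8];
    rnd.NextBytes(buffer);
    return BitConverter.ToUInt64(buffer, 0);
}

Random rnd = new Random();
Console.WriteLine(NextUInt64(rnd));
```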
The problem here is that you're getting a long as the result of your expression. The error message should make that pretty clear:
Cannot implicitly convert type 'long' to 'ulong'.
An explicit conversion exists (are you missing a cast?)
You'd have to explicitly cast the result to ulong to assign it to data, making it
data = (ulong)(rnd.Next() * 4294967296 + rnd.Next());
Your intent would be clearer, though, if you were to simply shift bits:
ulong data = ((ulong)rnd.Next() << 32)
           | ((ulong)rnd.Next());
It would also be faster, as bit shifts are a much simpler operation than multiplication.

converting hex number to signed short

I'm trying to convert a string that includes a hex value into its equivalent signed short in C#
for example:
the equivalent hex number of -1 is 0xFFFF (in two bytes)
I want to do the inverse, i.e I want to convert 0xFFFF into -1
I'm using
string x = "FF";
short y = Convert.ToInt16(x,16);
but the output y is 255 instead of -1; I need the signed equivalent.
Can anyone help me?
Thanks
When your input is "FF", you have the hex string representation of a single byte.
If you convert it to a short (two bytes), the top bit of the resulting 16-bit value is 0, so no sign is applied and you get 255.
A string of "FFFF", on the other hand, represents two bytes whose top bit is 1, so the result is negative (-1) when assigned to a signed type like Int16, and 65535 when assigned to an unsigned type like ushort:
string number = "0xFFFF";
short n = Convert.ToInt16(number, 16);
ushort u = Convert.ToUInt16(number, 16);
Console.WriteLine(n);
Console.WriteLine(u);
number = "0xFF";
byte b = Convert.ToByte(number, 16);
short x = Convert.ToInt16(number, 16);
ushort z = Convert.ToUInt16(number, 16);
Console.WriteLine(b);
Console.WriteLine(x);
Console.WriteLine(z);
Output:
-1
65535
255
255
255
You're looking to convert the string representation of a signed byte, not a short.
You should use Convert.ToSByte(string) instead.
A simple unit test to demonstrate
[Test]
public void MyTest()
{
short myValue = Convert.ToSByte("FF", 16);
Assert.AreEqual(-1, myValue);
}
Please see http://msdn.microsoft.com/en-us/library/bb311038.aspx for full details on converting between hex strings and numeric values.
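A more general sketch of the same idea: parse the hex string as the unsigned type of the intended width, then reinterpret the bits as the signed type (the unchecked casts below are where the sign appears):

```csharp
using System;

// 8-bit: "FF" parses to 255 unsigned; reinterpreting the bits as sbyte gives -1.
sbyte s8 = unchecked((sbyte)Convert.ToByte("FF", 16));
Console.WriteLine(s8);    // -1

// 16-bit: "FFFF" parses to 65535 unsigned; reinterpreting as short gives -1.
short s16 = unchecked((short)Convert.ToUInt16("FFFF", 16));
Console.WriteLine(s16);   // -1
```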
