display the 16 bits for the integer - c#

I need your help; here is a part of my code, but there is a problem that I can't solve.
Please help me.
This is what I have done so far; I can get a positive integer in 16-bit binary form:
Console.WriteLine("Enter an integer : ");
string inputnumber = Console.ReadLine();
int num = int.Parse(inputnumber);
string value = Convert.ToString(num, 2).PadLeft(16, '0');
Console.WriteLine("The bits are : {0}", value);
Console.ReadKey();
The issue is: how do I get the negative value of an integer in 16-bit binary form?
For example, when I input 5, I get: 0000000000000101
and for -5 I need: 1111111111111011

In C#, int is a 32-bit type. You should use short (a 16-bit type) instead. For positive numbers up to 32767 the lower 16 bits of int and short are the same, but for negative numbers they differ.
short num = short.Parse(inputnumber);
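Putting it together, a minimal sketch of the corrected program (the question's code with only the parse and variable type changed):
Console.WriteLine("Enter an integer : ");
string inputnumber = Console.ReadLine();
short num = short.Parse(inputnumber); // 16-bit signed type
// For a negative short, Convert.ToString(value, 2) already returns the full
// 16-bit two's-complement pattern, e.g. -5 -> "1111111111111011";
// PadLeft only matters for non-negative inputs.
string value = Convert.ToString(num, 2).PadLeft(16, '0');
Console.WriteLine("The bits are : {0}", value);
Console.ReadKey();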

This is correct behavior, since that is how the value is stored in the computer. It is the so-called two's complement representation, in which the most significant bit (the leftmost one) tells you that the number is negative. Also keep in mind that an int contains 32 bits.
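If you would rather keep int, you can mask off the low 16 bits before converting; a minimal sketch:
int num = -5;
// Masking keeps only the low 16 bits; the two's-complement pattern survives.
string bits = Convert.ToString(num & 0xFFFF, 2).PadLeft(16, '0');
Console.WriteLine(bits); // prints 1111111111111011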

Related

representing a hexadecimal value by converting it to char

I am outputting the char 0x11a1 by converting it to char.
Then I multiply 0x11a1 by itself and output it again, but I do not get what I expect:
by assigning the result to an int (int hgvvv = chch0;) and printing it to the console, I can see that the computer thinks 0x11a1 * 0x11a1 equals 51009, but it actually equals 20367169.
As a result I do not get what I want.
Could you please explain to me why?
char chch0 = (char)0x11a1;
Console.WriteLine(chch0);
chch0 = (char)(chch0 * chch0);
Console.WriteLine(chch0);
int hgvvv = chch0;
Console.WriteLine(hgvvv);
We know that 1 byte is 8 bits.
We know that a char in C# is 2 bytes, which is 16 bits.
If we multiply 0x11a1 by 0x11a1 we get 0x136C741.
0x136C741 in binary is 0001001101101100011101000001 (28 bits).
Since we only have 16 bits, we only see the last 16 bits, which are: 1100011101000001.
1100011101000001 in hex is 0xC741, which is the 51009 you are seeing.
You are being limited by the size of the char type in C#.
Hope this answer cleared things up!
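To see the truncation in numbers, here is a small demo of my own using the values from the question:
int full = 0x11a1 * 0x11a1;        // the real product: 20367169 (0x136C741)
char truncated = (char)full;       // the cast keeps only the low 16 bits
Console.WriteLine(full);           // 20367169
Console.WriteLine((int)truncated); // 51009 (0xC741)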
By enabling the checked context in your project or by adding it this way in your code:
checked
{
    char chch0 = (char)0x11a1;
    Console.WriteLine(chch0);
    chch0 = (char)(chch0 * chch0); // throws OverflowException
    Console.WriteLine(chch0);
    int hgvvv = chch0;
    Console.WriteLine(hgvvv);
}
You will see that you get an OverflowException, because the char type (2 bytes) can only store values up to Char.MaxValue = 0xFFFF.
The value you expect (20367169) is larger than 0xFFFF, so you only get the two least significant bytes that the type was able to store, which is:
Console.WriteLine(20367169 & 0xFFFF);
// prints: 51009
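If you actually want the mathematically correct product, let the result stay an int; char operands are promoted to int for the multiplication anyway. A sketch:
char chch0 = (char)0x11a1;
int product = chch0 * chch0; // char * char is computed in int, no truncation
Console.WriteLine(product);  // prints: 20367169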

How to (theoretically) print all possible double precision numbers in C#?

For a little personal research project I want to generate a string list of all possible values a double precision floating point number can have.
I've found the "r" formatting option, which guarantees that the string can be parsed back into the exact same bit representation:
string s = myDouble.ToString("r");
But how to generate all possible bit combinations? Preferably ordered by value.
Maybe using the unchecked keyword somehow?
unchecked
{
    // for all long values
    myDouble[i] = myLong++;
}
Disclaimer: it's more of a theoretical question; I am not going to read all the numbers... :)
Using unsafe code:
ulong i = 0; // ulong is 64 bits wide, like double
unsafe
{
    double* d = (double*)&i;
    for (; i < ulong.MaxValue; i++)
        Console.WriteLine(*d);
}
(Note that the loop condition skips the final all-ones pattern, which is a NaN.)
You can start with all possible values 0 <= x < 1. You can create those by having zero for the exponent and using different values for the mantissa.
The mantissa is stored in 52 of the 64 bits that make up a double precision number, so that makes 2^52 = 4503599627370496 different numbers between 0 and 1.
From the description of the double format you can figure out how the bit pattern (eight bytes) should look for those numbers; then you can use the BitConverter.ToDouble method to do the conversion.
Then you can set the sign bit (the most significant bit) to get the negative version of all those numbers.
All those numbers are unique, and thanks to the implicit leading mantissa bit there are no duplicates among the other patterns either: each new non-zero exponent gives you values that were not possible to express with the previously used exponents.
The values between 0 and 1 will keep you busy for the foreseeable future anyway, so you can just start with those.
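A safe-code sketch of that first step: walk all 2^52 zero-exponent bit patterns and flip the sign bit for the negative twins (BitConverter.Int64BitsToDouble reinterprets the raw bits, avoiding the byte-array round trip):
const long mantissaCount = 1L << 52;
for (long mantissa = 0; mantissa < mantissaCount; mantissa++)
{
    double positive = BitConverter.Int64BitsToDouble(mantissa);
    // Setting bit 63 (the sign bit) yields the negative counterpart.
    double negative = BitConverter.Int64BitsToDouble(mantissa | long.MinValue);
    Console.WriteLine(positive.ToString("r"));
    Console.WriteLine(negative.ToString("r"));
}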
This should be doable in safe code: create a bit string, convert it to a double, output, increment, repeat... A LOT.
string bstr = "01010101010101010101010101010101"; // 32 bits here instead of 64, adjust as needed
long v = 0;
// read the string most-significant bit first
for (int i = 0; i < bstr.Length; i++) v = (v << 1) + (bstr[i] - '0');
double d = BitConverter.ToDouble(BitConverter.GetBytes(v), 0);
// increment bstr and loop
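The increment step is not shown above; a hypothetical helper for it (assuming, as in the loop above, that the string is most-significant-bit first):
static string IncrementBitString(string bstr)
{
    char[] bits = bstr.ToCharArray();
    for (int i = bits.Length - 1; i >= 0; i--)
    {
        if (bits[i] == '0') { bits[i] = '1'; break; }
        bits[i] = '0'; // carry into the next position to the left
    }
    return new string(bits); // wraps to all zeros after the last pattern
}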

converting hex number to signed short

I'm trying to convert a string that contains a hex value into its equivalent signed short in C#.
For example:
the hex equivalent of -1 is 0xFFFF (in two bytes).
I want to do the inverse, i.e. I want to convert 0xFFFF into -1.
I'm using
string x = "FF";
short y = Convert.ToInt16(x, 16);
but the output y is 255 instead of -1; I need the signed equivalent.
Can anyone help me?
Thanks
When your input is "FF" you have the string representation, in hex, of a single byte.
If you convert it to a short (two bytes), the most significant bit of the short is not set, so no sign is applied to the converted number and you get the value 255.
A string representation of "FFFF", on the other hand, fills both bytes, and its most significant bit is 1; so the result is negative if assigned to a signed type like Int16, and 65535 if assigned to an unsigned type like ushort:
string number = "0xFFFF";
short n = Convert.ToInt16(number, 16);
ushort u = Convert.ToUInt16(number, 16);
Console.WriteLine(n);
Console.WriteLine(u);
number = "0xFF";
byte b = Convert.ToByte(number, 16);
short x = Convert.ToInt16(number, 16);
ushort z = Convert.ToUInt16(number, 16);
Console.WriteLine(b);
Console.WriteLine(x);
Console.WriteLine(z);
Output:
-1
65535
255
255
255
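Alternatively, if the input really is a single hex byte like "FF" and you want -1 in a short, you can sign-extend it yourself; a sketch (this cast-based variant is my addition, not from the answers here):
byte raw = Convert.ToByte("FF", 16); // 255
short extended = (sbyte)raw;         // reinterpret as signed (-1), then widen
Console.WriteLine(extended);         // prints: -1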
You're looking to convert the string representation of a signed byte, not a short.
You should use Convert.ToSByte(string, int) instead.
A simple unit test to demonstrate:
[Test]
public void MyTest()
{
    short myValue = Convert.ToSByte("FF", 16);
    Assert.AreEqual(-1, myValue);
}
Please see http://msdn.microsoft.com/en-us/library/bb311038.aspx for full details on converting between hex strings and numeric values.

Hex number operations in c#

I am making an application in C# and I have hex numbers such as 0x0FF8, 0xFFFA, etc.
I only want the 12 rightmost bits. Suppose I have the number 0x0FF8;
I want to operate just on FF8 (12 bits), treated as a signed number.
As a 12-bit signed value, 0xFF8 is -8 in decimal. In my application I first have to find out whether the number is negative, and then get its value.
I am not sure how to do this efficiently in C#, and it has to be very fast.
The number representation is 0x0FF8 = -8; please see the link http://www.swarthmore.edu/NatSci/echeeve1/Ref/BinaryMath/NumSys.html
To get only the rightmost twelve bits you can do:
var right12 = 0x0FFF & yourNumber;
To find out whether it is negative or positive:
var positive = yourNumber >= 0;
var absoluteValue = Math.Abs(yourNumber); // assuming yourNumber is an Int32
var low12 = 0xFFF & absoluteValue;
This does a bitwise AND against a bit mask for the twelve bits you want to keep.
To check whether a signed integer value is negative, you have to check its leftmost bit: if the bit is set, the value is negative.
However, you only have 12 bits, while an int has 32 bits. So when you put the 12 bits into an int, the other 20 bits are zero (not set). The leftmost bit (#31) is therefore not set, and the int value is not seen as negative.
You have to check bit #11 and, if it is set, set the other 20 bits:
int Value = 0x0FF8;
// Check bit #11 (the sign bit of the 12-bit value)
if ((Value & 0x0800) != 0)
{
    // Set the other 20 bits to make the int value negative
    // (0xFFFFF000 is a uint literal, hence the unchecked cast)
    Value |= unchecked((int)0xFFFFF000);
}
You can do the same thing with short instead of int. A short only has 16 bits, so:
short Value = 0x0FF8;
// Check bit #11
if ((Value & 0x0800) != 0)
{
    // Set the remaining 4 bits to make the short value negative
    // (the constant 0xF000 does not fit in a short, hence the unchecked cast)
    Value |= unchecked((short)0xF000);
}
The int version is probably the best one to use, to avoid short/int casts elsewhere in the code.
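Putting the mask and the sign extension together, a small helper sketch (the name SignExtend12 is mine, not from the answers above):
static int SignExtend12(int raw)
{
    int low12 = raw & 0xFFF;                 // keep bits 0..11
    if ((low12 & 0x800) != 0)                // bit #11 is the 12-bit sign bit
        low12 |= unchecked((int)0xFFFFF000); // extend the sign into bits 12..31
    return low12;
}
// SignExtend12(0x0FF8) == -8, SignExtend12(0x07F8) == 2040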

Arbitrarily large integers in C#

How can I implement this Python code in C#?
Python code:
print(str(int(str("e60f553e42aa44aebf1d6723b0be7541"), 16)))
Result:
305802052421002911840647389720929531201
But in C# I have problems with such big numbers.
Can you help me?
I get different results in Python and C#. Where is the mistake?
Primitive types (such as Int32 or Int64) have a finite length that is not enough for such a big number. For example:
Data type     Maximum positive value
Int32         2,147,483,647
UInt32        4,294,967,295
Int64         9,223,372,036,854,775,807
UInt64        18,446,744,073,709,551,615
Your number   305,802,052,421,002,911,840,647,389,720,929,531,201
To represent that number you would need 128 bits. Since .NET Framework 4.0 there is a data type for arbitrarily large integers: System.Numerics.BigInteger. You do not need to specify any size, because it is inferred from the number itself (which also means you may even get an OutOfMemoryException when you, for example, multiply two very big numbers).
To come back to your question, first parse your hexadecimal number. Note that with NumberStyles.AllowHexSpecifier, a leading hex digit with its high bit set (here 'e') makes BigInteger.Parse interpret the value as a negative two's-complement number; prepend a "0" to get the positive value (this is most likely why you got different results in Python and C#):
// requires: using System.Numerics; using System.Globalization;
string bigNumberAsText = "e60f553e42aa44aebf1d6723b0be7541";
BigInteger bigNumber = BigInteger.Parse("0" + bigNumberAsText,
    NumberStyles.AllowHexSpecifier);
Then simply print it to the console:
Console.WriteLine(bigNumber.ToString());
You may be interested in calculating how many bits you need to represent an arbitrary number; you can use this function (if I remember correctly, the original implementation comes from Numerical Recipes in C):
public static uint GetNeededBitsToRepresentInteger(BigInteger value)
{
    uint neededBits = 0;
    while (value != 0) // for negative inputs, pass BigInteger.Abs(value)
    {
        value >>= 1;
        ++neededBits;
    }
    return neededBits;
}
Then, to calculate the required size of a number written as a string:
public static uint GetNeededBitsToRepresentInteger(string value,
    NumberStyles numberStyle = NumberStyles.None)
{
    return GetNeededBitsToRepresentInteger(
        BigInteger.Parse(value, numberStyle));
}
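For example, applied to the question's number (with the leading "0" so that it parses as positive, as above):
uint bits = GetNeededBitsToRepresentInteger(
    "0e60f553e42aa44aebf1d6723b0be7541",
    NumberStyles.AllowHexSpecifier);
Console.WriteLine(bits); // 128: the top hex digit 0xE has its high bit set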
If you just want to be able to use larger numbers, there is BigInteger, which can hold arbitrarily many digits.
To find the number of bits you need to store a positive BigInteger N, you can use:
BigInteger N = ...;
// floor(log2(N)) + 1 bits are needed (a plain ceiling would be off by one
// for exact powers of two)
int nBits = (int)Math.Floor(BigInteger.Log(N, 2.0)) + 1;
