Are there compiling differences between Hexadecimal and Decimal numbers? - c#

int foo = 510;
int bar = 0x0001FE;
Are there compiling differences between these two declarations?
For example, does your computer read these two values differently?

No, but hex is clearer for a human to read when you are looking at bit or byte values.
Internally everything is binary anyway; the compiler converts both literals to the same binary representation.
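A minimal sketch to confirm this (assuming a console app; the variable names are just the ones from the question):
int foo = 510;
int bar = 0x0001FE;
Console.WriteLine(foo == bar);               // True - same value, same type
Console.WriteLine(Convert.ToString(foo, 2)); // 111111110 - the bit pattern both literals compile to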

.NET decimal cross-platform standard

If I save a C# decimal array to a binary blob with a binary writer, will I be able to seamlessly read this blob in Java, Python or R? Do these languages have a corresponding data type, or is there a cross-platform standard that .NET decimal adheres to?
I am converting all my numeric data to decimals before storing, because when an array of decimals is diffed and then compressed, the resulting blob is much smaller for real-world data than a float blob compressed by any algorithm (my tests show roughly 2x less space). And decimals do not lose precision on diffing. But I am not sure if I could read these blobs from any languages other than .NET ones.
I am reading the wiki page but cannot answer this simple question myself.
Update:
Currently I take an unsafe reference to the decimal[] array and cast it to a (byte*) pointer, but this is equivalent to a binary writer. The question is about cross-platform decimal interop, not serialization. See my comment on the first answer.
The documentation for the System.Decimal type does not state what internal representation it uses (unlike System.Single and System.Double which use IEEE-754). Given this, you cannot assume that the raw binary state representation of any decimal number will be consistent between .NET Framework versions or even between physical machines running the same version.
I assume you'd be using the BinaryWriter.Write(Decimal value) method. The documentation for this method does not make any statement about the actual format being used.
Looking at the .NET Reference Source code, we see that BinaryWriter.Write(Decimal) uses an internal method, Decimal.GetBytes(Byte[] buffer), which uses an undocumented format that might change in the future or may differ per machine.
However! The Decimal type does provide a public static method, Decimal.GetBits, which does make guarantees about the format of the data it returns, so I suggest you use that instead:
// Writing
Int32[] bits = Decimal.GetBits(myDecimal);
binaryWriter.Write( (byte)bits.Length );
foreach(Int32 component in bits) binaryWriter.Write( component );

// Reading
byte count = binaryReader.ReadByte();
Int32[] bits = new Int32[ count ];
for(Int32 i = 0; i < bits.Length; i++ ) {
    bits[i] = binaryReader.ReadInt32();
}
Decimal value = new Decimal( bits );
This approach uses 17 bytes to store a Decimal instance in a way that is documented and guaranteed to work regardless of the internal design of the Decimal type. 16 bytes store the actual value, and 1 byte stores the number of 32-bit integers needed to represent the decimal (in case the number ever changes from 4).
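For anyone decoding the blob in another language, the layout GetBits guarantees is: elements 0-2 are the low, middle and high 32 bits of a 96-bit integer, and element 3 holds the scale in bits 16-23 and the sign in bit 31. A minimal C# sketch of reading those fields back (the helper name is just illustrative):
static void DescribeDecimal(decimal d)
{
    int[] bits = Decimal.GetBits(d);
    int scale = (bits[3] >> 16) & 0xFF;   // number of digits after the decimal point (0-28)
    bool isNegative = bits[3] < 0;        // the sign is stored in bit 31
    Console.WriteLine($"low={bits[0]:X8} mid={bits[1]:X8} high={bits[2]:X8} scale={scale} negative={isNegative}");
}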

intBitsToFloat method in Java VS C#?

I'm getting the wrong number when converting bits to float in C#.
Let's use this bit value: 1065324597
In Java, if I want to convert from bits to float I would use intBitsToFloat method
int intbits= 1065324597;
System.out.println(Float.intBitsToFloat(intbits));
Output: 0.9982942, which is the correct output I want to get in C#.
However, in C# I used
int intbits= 1065324597;
Console.WriteLine((float)intbits);
Output: 1.065325E+09 Wrong!!
My question is: how would you do intBitsToFloat in C#?
My attempt:
I looked at the documentation here: http://msdn.microsoft.com/en-us/library/aa987800(v=vs.80).aspx
but I still have the same problem.
Just casting is an entirely different operation. You need BitConverter.ToSingle(byte[], int) having converted the int to a byte array - and possibly reversed the order, based on the endianness you want. (EDIT: Probably no need for this, as the same endianness is used for both conversions; any unwanted endianness will just fix itself.) There's BitConverter.DoubleToInt64Bits for double, but no direct float equivalent.
Sample code:
int x = 1065324597;
byte[] bytes = BitConverter.GetBytes(x);
float f = BitConverter.ToSingle(bytes, 0);
Console.WriteLine(f);
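Note that on newer runtimes (.NET Core 2.0 / .NET Standard 2.1 and later) there is a direct single-precision equivalent, BitConverter.Int32BitsToSingle, so the byte-array round trip is not needed there. A small sketch:
int intbits = 1065324597;
float f = BitConverter.Int32BitsToSingle(intbits); // reinterprets the bits, no numeric conversion
Console.WriteLine(f);                              // 0.9982942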
To add on top of what Jon Skeet said: for a big float, if you don't want the "E+" (scientific notation) output, you can use:
intbits.ToString("N0");
Just try this...
var myBytes = BitConverter.GetBytes(1065324597);
var mySingle = BitConverter.ToSingle(myBytes,0);
BitConverter.GetBytes converts your integer into a four-byte array. Then BitConverter.ToSingle converts that array into a float (Single).

Does Integer.valueOf() in java translate to int.Parse() in C#?

I am trying to learn Java coming from a C# background! I have found some nuances during my journey, for example that C# doesn't have Integer as a reference type; it only has int as a value type. This led me to wonder whether this translation is correct:
String line ="Numeric string";//Java
string line = "Numeric string";//C#
int size;
size = Integer.valueOf(line, 16).intValue(); //In Java
size = int.Parse(line,System.Globalization.NumberStyles.Integer);//In C#
No, that's not quite a valid translation - because the 16 in the Java code means the string should be parsed as hexadecimal (radix 16). The Java code is closer to:
int size = Convert.ToInt32(line, 16);
That overload of Convert.ToInt32 allows you to specify the base.
Alternatively you could use:
int size = int.Parse("11", NumberStyles.AllowHexSpecifier);
I'm not keen on the name "allow hex specifier" as it suggests that a prefix of "0x" will be accepted... but in reality it means it's always interpreted as hex.
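A small sketch of that behaviour (assuming using System.Globalization; the literal "FF" is just an example value):
Console.WriteLine(int.Parse("FF", NumberStyles.AllowHexSpecifier)); // 255
// int.Parse("0xFF", NumberStyles.AllowHexSpecifier) throws a FormatException - the prefix is not accepted.
Console.WriteLine(Convert.ToInt32("0xFF", 16));                     // 255 - Convert.ToInt32 with base 16 does accept an optional 0x prefix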
In Java, this is better:
Integer.parseInt(line, 16);
It parses a string into an int, while Integer.valueOf converts to an Integer.

convert double value to binary value

How can I convert a double value to a binary value?
I have a value like this: 125252525235558554452221545332224587265. I want to convert it to binary format (1's and 0's), so I am keeping it in a double and then trying to convert it to binary. I am using C#.NET.
Well, you haven't specified a platform or what sort of binary value you're interested in, but in .NET there's BitConverter.DoubleToInt64Bits which lets you get at the IEEE 754 bits making up the value very easily.
In Java there's Double.doubleToLongBits which does the same thing.
Note that if you have a value such as "125252525235558554452221545332224587265" then you've got more information than a double can store accurately in the first place.
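For example, a minimal C# sketch of that route:
double d = 123.5;
long bits = BitConverter.DoubleToInt64Bits(d);
Console.WriteLine(bits.ToString("X16"));                       // 405EE00000000000 - the raw IEEE 754 pattern in hex
Console.WriteLine(Convert.ToString(bits, 2).PadLeft(64, '0')); // the same pattern as 64 binary digits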
In C, you can do it for instance this way, which is a classic use of the union construct:
int i;
union {
    double x;
    unsigned char byte[sizeof (double)];
} converter;
converter.x = 5.5555555555556e18;
for (i = 0; i < sizeof converter.byte; i++)
    printf("%02x ", converter.byte[i]);
If you stick this in a main() and run it, it might print something like this:
~/src> gcc -o floatbits floatbits.c
~/src> ./floatbits
ba b5 f6 15 53 46 d3 43
Note though that this, of course, is platform-dependent in its endianness. The above is from a Linux system running on a Sempron CPU, i.e. it's little endian.
A decade late but hopefully this will help someone:
// Converts a double value to a string in base 2 for display.
// Example: 123.5 --> "0:10000000101:1110111000000000000000000000000000000000000000000000"
// Created by Ryan S. White in 2020, Released under the MIT license.
string DoubleToBinaryString(double val)
{
    long v = BitConverter.DoubleToInt64Bits(val);
    string binary = Convert.ToString(v, 2);
    return binary.PadLeft(64, '0').Insert(12, ":").Insert(1, ":");
}
If you mean you want to do it yourself, then this is not a programming question.
If you want to make a computer do it, the easiest way is to use a floating point input routine and then display the result in its hex form. In C++:
double f = atof ("5.5555555555556E18");
unsigned char *b = (unsigned char *) &f;
for (int j = 0; j < 8; ++j)
    printf (" %02x", b [j]);
A double value already IS a binary value. It is just a matter of the representation that you wish it to have. In a programming language when you call it a double, then the language that you use will interpret it in one way. If you happen to call the same chunk of memory an int, then it is not the same number.
So it depends what you really want... If you need to write it to disk or to network, then you need to think about BigEndian/LittleEndian.
For these huge numbers (which cannot be represented accurately using a double) you need to use some specialized class to hold the information.
C# provides the Decimal type:
The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding. For example, the following code produces a result of 0.9999999999999999999999999999 rather than 1.
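The code that quote refers to is not included above; a minimal sketch of that kind of example is:
decimal dividend = Decimal.One;
decimal divisor = 3;
// Displays 0.9999999999999999999999999999 rather than 1.
Console.WriteLine(dividend / divisor * divisor);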
If you need bigger precision than this, you need to make your own class, I guess. There is one here for ints: http://sourceforge.net/projects/cpp-bigint/ although it seems to be for C++.

Hexadecimal notation and signed integers

This is a follow-up question. So, Java stores integers in two's complement and you can do the following:
int ALPHA_MASK = 0xff000000;
In C# this requires the use of an unsigned integer, uint, because it interprets this to be 4278190080 instead of -16777216.
My question is: how do you declare negative values in hexadecimal notation in C#, and how exactly are integers represented internally? What are the differences from Java here?
C# (rather, .NET) also uses two's complement, but it supports both signed and unsigned types (which Java doesn't). A bit mask is more naturally an unsigned thing - why should one bit be different from all the other bits?
In this specific case, it is safe to use an unchecked cast:
int ALPHA_MASK = unchecked((int)0xFF000000);
To "directly" represent this number as a signed value, you write
int ALPHA_MASK = -0x1000000; // == -16777216
Hexadecimal is not (or should not be) any different from decimal: to represent a negative number, you write a negative sign followed by the digits representing the absolute value.
Well, you can use an unchecked block and a cast:
unchecked
{
int ALPHA_MASK = (int)0xff000000;
}
or
int ALPHA_MASK = unchecked((int)0xff000000);
Not terribly convenient, though... perhaps just use a literal integer?
And just to add insult to injury, a negative hexadecimal literal will work too:
-0x7F000000
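A quick sketch (assuming a console snippet) verifying that the unchecked cast and the negative literal shown earlier describe the same value:
int fromCast = unchecked((int)0xFF000000);
int fromLiteral = -0x1000000;
Console.WriteLine(fromCast == fromLiteral); // True
Console.WriteLine(fromCast);                // -16777216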
