I figure the best way to really get a feel for double-precision numbers is to play around with them a bit, and one of the things I want to do is look at their (almost) binary representation. For this, in C#, the function BitConverter.DoubleToInt64Bits is very useful, as it (after converting to hexadecimal) lets me see what the "real" nature of the floating-point number is.
The problem is that I can't seem to find an equivalent function in Python. Is there a way to do the same thing as BitConverter.DoubleToInt64Bits in Python?
Thank you.
EDIT:
An answer below suggested using binascii.hexlify(struct.pack('d', 123.456)) to convert the double into a hexadecimal representation, but I am still getting strange results.
For example,
binascii.hexlify(struct.pack('d', 123.456))
does indeed return '77be9f1a2fdd5e40' but if I run the code that should be equivalent in C#, i.e.
BitConverter.DoubleToInt64Bits(123.456).ToString("X")
I get a completely different number: "405EDD2F1A9FBE77". Where have I made my mistake?
How about using struct.pack and binascii.hexlify?
>>> import binascii
>>> import struct
>>> struct.pack('d', 0.0)
'\x00\x00\x00\x00\x00\x00\x00\x00'
>>> binascii.hexlify(struct.pack('d', 0.0))
'0000000000000000'
>>> binascii.hexlify(struct.pack('d', 1.0))
'000000000000f03f'
>>> binascii.hexlify(struct.pack('d', 123.456))
'77be9f1a2fdd5e40'
The struct format character d represents the C double type (8 bytes = 64 bits). For other formats, see Format characters.
UPDATE
By specifying @, =, <, >, or ! as the first character of the format, you can indicate the byte order. (Byte Order, Size, and Alignment)
>>> binascii.hexlify(struct.pack('<d', 123.456)) # little-endian
'77be9f1a2fdd5e40'
>>> binascii.hexlify(struct.pack('>d', 123.456)) # big-endian
'405edd2f1a9fbe77'
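As a cross-check on the C# side, a small sketch (assuming a little-endian machine, which is what BitConverter.IsLittleEndian reports on x86/x64): dumping the raw bytes shows they come out in the same order as Python's '<d' packing, while DoubleToInt64Bits prints the value most-significant digit first.
double value = 123.456;
// Machine-order bytes: 77-BE-9F-1A-2F-DD-5E-40 on a little-endian machine,
// matching binascii.hexlify(struct.pack('<d', 123.456)) above.
Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(value)));
// The same 64 bits read as one long and printed most-significant digit first:
// 405EDD2F1A9FBE77, matching struct.pack('>d', ...).
Console.WriteLine(BitConverter.DoubleToInt64Bits(value).ToString("X"));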
Related
I work on a .net project and need a math expression parser to calculate simple formulas.
I used mXparser, but it seems unable to work with big decimal numbers (more than 16 digits).
For example, the result of the formula has to be 2469123211254289589
but it returns 2.46912321125428E+17, and when I use decimal.Parse to convert it to decimal it gives me 2469123211254280000.
Is there another parser to solve this problem?
or
Is there another way to deal with this problem?
If you're happy dealing with integers then you should be able to use BigInteger to carry out these sorts of operations.
It has no theoretical upper or lower bound, so you shouldn't have a problem (unless you run out of memory to store the number, that is).
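A rough sketch of what that looks like (the two operands below are made up purely so that their sum is the 19-digit value from the question; BigInteger lives in System.Numerics and may need a reference to that assembly):
System.Numerics.BigInteger a = System.Numerics.BigInteger.Parse("1234561605627144789"); // invented operand
System.Numerics.BigInteger b = System.Numerics.BigInteger.Parse("1234561605627144800"); // invented operand
// The arithmetic stays in integer space, so no digits are lost.
Console.WriteLine(a + b); // 2469123211254289589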
Hello, I am using Visual Studio 2015 with .NET Framework 4.5 (if it matters), and ReSharper keeps suggesting that I switch from decimal numbers to hex. Why is that? Is there any performance bonus if I'm using hex?
There is absolutely no performance difference between the format of numeric literals in a source language, because the conversion is done by the compiler. The only reason to switch from one representation to another is readability of your code.
Two common cases for using hexadecimal literals are representing colors and bit masks. Since color representation is often split at byte boundaries, parsing a number 0xFF00FF is much easier than 16711935: hex format tells you that the red and blue components are maxed out, while the green component is zero. Decimal format, on the other hand, requires you to perform the conversion.
Bit masks are similar: when you use hex or octal representation, it is very easy to see what bits are ones and what bits are zero. All you need to learn is a short table of sixteen bit patterns corresponding to hex digits 0 through F. You can immediately tell that 0xFF00 has the upper eight bits set to 1, and the lower eight bits set to 0. Doing the same with 65280 is much harder for most programmers.
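A quick illustration (the constants below are just samples, not tied to any particular API):
// Colors: each pair of hex digits is one channel, readable at a glance.
const int Magenta = 0xFF00FF;     // red = FF, green = 00, blue = FF
const int MagentaDec = 16711935;  // same value in decimal; the channels are hidden
// Bit masks: hex shows immediately which bits survive the mask.
int value = 0x12AB;
int upperByte = (value & 0xFF00) >> 8;  // 0x12 -- the mask keeps the upper eight bits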
There is absolutely no performance difference when writing constants in your code in decimal vs. hex. Both will be translated to the exact same IL and ultimately JITted to the same machine code.
Use whichever representation makes more sense for the work you are doing, is clearer and easier to understand in the context of the problem your code solves.
I have a problem where I want to pack very specific data into specific bits and store them in a float. I have already investigated ways of simply packing numbers into floats, but simple math won't work for what I want, because I need to use the full 32 bits to store very specific values. If it were a 32-bit integer, this would be very easy.
So what I want to do is encode the data as a 32-bit integer and then turn this bit data directly into a float with all the bits remaining the same (unless someone has a better suggestion on how to do it). What languages will allow me to do a conversion like this? Obviously not JavaScript or Python, because they don't support 32-bit floats. Will C# or C++ do it?
I need to decode the data in a GLSL or HLSL vertex shader. The shader would, of course, receive the 32-bit float. Is there an operator that will turn the float directly into an integer with all the same bits, instead of an ordinary cast? Or perhaps some other way to read the bits directly?
UPDATE: Eric Postpischil showed how to easily do the direct conversion in C in an answer below. Now I just need to know if there's a way to do a direct conversion from float to int or bit data in a vertex shader. Can anyone help on that part?
You can do this in C with:
#include <stdint.h>

/* Reinterpret the 32 bits of a uint32_t as a float, without converting the value. */
float IntegerToFloat(uint32_t u)
{
    return (union { uint32_t u; float f; }) {u} .f;
}

/* Reinterpret the 32 bits of a float as a uint32_t. */
uint32_t FloatToInteger(float f)
{
    return (union { float f; uint32_t u; }) {f} .u;
}
Naturally, this requires that float be 32 bits in the C implementation, and that uint32_t be a supported type (but you can use another 32-bit integer type if it is not, likely unsigned int). Some of the resulting float values may be NaNs, which might not remain unchanged in certain operations, such as conversion for printing or display and conversion back. Even normal float values will not generally remain unchanged unless they are displayed with sufficient precision and the C implementation uses correct rounding for decimal-to-binary and binary-to-decimal conversions.
So abusing the bits like this is a bad idea unless there is truly no alternative.
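Since the question also asks whether C# can do it: yes. A sketch of the same reinterpretation there (this goes through a byte array with BitConverter, which works on .NET Framework; newer runtimes also expose BitConverter.SingleToInt32Bits / Int32BitsToSingle for the same purpose):
// Reinterpret the 32 bits of a uint as a float and back, without converting the value.
static float IntegerToFloat(uint u)
{
    return BitConverter.ToSingle(BitConverter.GetBytes(u), 0);
}

static uint FloatToInteger(float f)
{
    return BitConverter.ToUInt32(BitConverter.GetBytes(f), 0);
}
On the shader side, GLSL (3.30 and later) provides floatBitsToInt / intBitsToFloat and HLSL provides asint / asfloat, which perform the same bit-level reinterpretation.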
This is really annoying: I have two hex numbers, and I am 90% sure that one of them is exactly 2 increments higher. However, when I type them into an online hex-to-decimal calculator, they come out the same. How can this be?
lower number at
0x00010471000001BF001F = 18766781122258862000
higher number at
0x00010471000001BF0021 = 18766781122258862000
What is going on?
The calc I used is...
http://www.rapidtables.com/convert/number/hex-to-decimal.htm
The higher number is 2 higher instead of 1; 0x00010471000001BF0020 is in between. I think your problem is related to an overflow issue, because the numbers are very large. Probably the calculator you are using converts the values to floating point, which loses accuracy.
The values you are posting need at least 9 bytes (or at least 65 bits) to represent.
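You can see that rounding step with a few lines of C# (a sketch; it uses BigInteger from System.Numerics, as in the answer further down):
System.Numerics.BigInteger low = System.Numerics.BigInteger.Parse("00010471000001BF001F", System.Globalization.NumberStyles.HexNumber);
System.Numerics.BigInteger high = System.Numerics.BigInteger.Parse("00010471000001BF0021", System.Globalization.NumberStyles.HexNumber);
Console.WriteLine(high - low);                   // 2 -- the values really do differ
// A double has only 53 significant bits, so both round to the same value,
// which is effectively what the online converter shows you.
Console.WriteLine((double)low == (double)high);  // True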
First, basic knowledge of hex should tell you that 0x20 is between 0x1F and 0x21, so the higher number is the lower number + 2.
Second, if you use an unknown tool, you have to be sure it's reliable. Your tool obviously can't handle such large numbers.
Wolfram Alpha gives you the correct answers:
http://www.wolframalpha.com/input/?i=0x00010471000001BF001F+in+decimal
http://www.wolframalpha.com/input/?i=0x00010471000001BF0021+in+decimal
First things first, why did you classify this question under the C# tag?
The problem is most likely caused by the value being too big; the converter doesn't work well with big numbers.
Just because this is tagged with C#.
Add a reference to .NET component System.Numerics.
To convert from large hex to integer use BigInteger.
System.Numerics.BigInteger a;
System.Numerics.BigInteger.TryParse("00010471000001BF001F", System.Globalization.NumberStyles.HexNumber,null,out a);
Console.WriteLine(a.ToString());
System.Numerics.BigInteger.TryParse("00010471000001BF0021", System.Globalization.NumberStyles.HexNumber, null, out a);
Console.WriteLine(a.ToString());
output
18766781122258862111
18766781122258862113
I'm trying to find a CRC32 computation algorithm that outputs the data as a positive 8-character hex string (like WinRAR's CRC, for example).
All the algorithms I found return a positive/negative integer that I don't know how to handle...
I'll bet that all of the CRC32 algorithms you found return a 32-bit integer (e.g. int or uint). When you view it, you're probably viewing it as a base-10 number. By viewing I mean formatting the integer as a string without passing any format options to Int32.ToString or String.Format.
If you were to put Visual Studio into Hexadecimal View, you would get your "expected" output. The algorithms are all correct in this case, however, it is your expectation that is incorrect!
Instead, use a Number Format which produces the string representation you desire:
uint crc32 = Crc32Class.GetMeAValue(data);
// For example, we'll write the output to the console in base-10 and base-16
Console.WriteLine("dec: {0}", crc32);
Console.WriteLine("hex: {0:X8}", crc32);