If I save a C# decimal array to a binary blob with a BinaryWriter, will I be able to seamlessly read this blob in Java, Python, or R? Do these languages have a corresponding data type, or is there a cross-platform standard that .NET decimal adheres to?
I am converting all my numeric data to decimals before storing, because when an array of decimals is diffed and then compressed, the resulting blob is much smaller for real-world data than a float blob compressed by any algorithm (my tests show c. 2x less space). And decimals do not lose precision on diffing. But I am not sure if I could read these blobs from any language other than .NET ones.
I am reading the wiki page but cannot answer this simple question myself.
Update:
Currently I take an unsafe reference to the decimal[] array and cast it to a (byte*) pointer, but this is equivalent to BinaryWriter. The question is about cross-platform decimal interop, not serialization. See my comment on the first answer.
The documentation for the System.Decimal type does not state what internal representation it uses (unlike System.Single and System.Double which use IEEE-754). Given this, you cannot assume that the raw binary state representation of any decimal number will be consistent between .NET Framework versions or even between physical machines running the same version.
I assume you'd be using the BinaryWriter.Write(Decimal value) method. The documentation for this method does not make any statement about the actual format being used.
Looking at the .NET Reference Source, we see that BinaryWriter.Write(Decimal) uses an internal method, Decimal.GetBytes(Byte[] buffer), which uses an undocumented format that might change in the future or differ per machine.
However! The Decimal type does provide a public static method, GetBits, which does make guarantees about the format of the data it returns, so I suggest you use that instead:
// Writing
Int32[] bits = Decimal.GetBits( myDecimal );  // note: GetBits is static
binaryWriter.Write( (byte)bits.Length );
foreach( Int32 component in bits ) binaryWriter.Write( component );
// Reading
byte count = binaryReader.ReadByte();
Int32[] bits = new Int32[ count ];
for( Int32 i = 0; i < bits.Length; i++ ) {
    bits[i] = binaryReader.ReadInt32();
}
Decimal value = new Decimal( bits );
This approach uses 17 bytes to store a Decimal instance in a way that is documented and guaranteed to work regardless of the internal design of the Decimal type. 16 bytes store the actual value, and 1 byte stores the number of 32-bit integers needed to represent the decimal (in case the number ever changes from 4).
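Since the GetBits layout is documented (ints 0 through 2 hold the low, middle, and high 32 bits of a 96-bit unsigned integer; int 3 holds the sign in bit 31 and the base-10 scale in bits 16-23), a reader in Java, Python, or R only needs to reimplement that decoding. A minimal sketch of the decode side in C#, which translates directly to any other language:
// Decode the four documented GetBits integers into sign, scale,
// and the 96-bit unscaled value: value = sign * unscaled / 10^scale.
int[] bits = decimal.GetBits(123.45m);

uint lo  = (uint)bits[0];   // bits 0-31 of the 96-bit integer
uint mid = (uint)bits[1];   // bits 32-63
uint hi  = (uint)bits[2];   // bits 64-95
bool isNegative = (bits[3] & unchecked((int)0x80000000)) != 0;
int scale = (bits[3] >> 16) & 0xFF;  // power-of-ten divisor, 0..28

System.Numerics.BigInteger unscaled =
    ((System.Numerics.BigInteger)hi << 64) | ((ulong)mid << 32) | lo;
// For 123.45m: unscaled == 12345, scale == 2, isNegative == false.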
Related
When I initialize a ulong with the value 18446744073709551615 and then add 1 to it and display it to the console, it displays 0, which is totally expected.
I know this question sounds stupid but I have to ask it. If my computer has a 64-bit architecture CPU, how is my calculator able to work with numbers larger than 18446744073709551615?
I suppose floating-point has a lot to do here.
I would like to know exactly how this happens.
Thank you.
working with larger numbers than 18446744073709551615
"if my Computer has a 64-bit architecture CPU" --> The architecture bit size is largely irrelevant.
Consider how you are able to add 2 decimal digits whose sum is more than 9. There is a carry generated and then used when adding the next most significant decimal place.
The CPU can do the same but with base 18446744073709551616 instead of base 10. It uses a carry bit as well as a sign and overflow bit to perform extended math.
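To make the carry idea concrete, here is a toy sketch (a hypothetical helper, not how any real CPU or library implements it) that adds two equal-length unsigned numbers one 64-bit "digit" at a time, in base 2^64, propagating the carry exactly as in decimal addition:
// Adds two little-endian multi-word numbers of equal length.
// Each ulong element is one "digit" in base 2^64.
static ulong[] Add(ulong[] a, ulong[] b)
{
    var result = new ulong[a.Length + 1];
    ulong carry = 0;
    for (int i = 0; i < a.Length; i++)
    {
        ulong sum = a[i] + b[i];               // wraps around modulo 2^64
        ulong c1 = sum < a[i] ? 1UL : 0UL;     // wrap-around means a carry out
        result[i] = sum + carry;
        ulong c2 = result[i] < sum ? 1UL : 0UL;
        carry = c1 + c2;                       // never exceeds 1
    }
    result[a.Length] = carry;                  // final carry, as in 99 + 1 = 100
    return result;
}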
I suppose floating-point has a lot to do here.
This has nothing to do with floating point.
You say you're using ulong, which means you're using unsigned 64-bit arithmetic. The largest value you can store is therefore "all ones" for 64 bits, aka UInt64.MaxValue, which is the value you've discovered: https://learn.microsoft.com/en-us/dotnet/api/system.uint64.maxvalue
If you want to store arbitrarily large numbers: there are APIs for that, for example BigInteger. However, arbitrary size comes at a cost, so it isn't the default, and certainly isn't what you get when you use ulong (or double, or decimal, etc; all the compiler-level numeric types have fixed size).
So: consider using BigInteger
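A minimal sketch of that (BigInteger lives in the System.Numerics namespace):
using System.Numerics;

ulong max = ulong.MaxValue;   // 18446744073709551615, "all ones" in 64 bits
BigInteger big = max;         // implicit widening conversion, no data loss
big += 1;                     // 18446744073709551616, no wrap-around
Console.WriteLine(big);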
Either way, you have a 64-bit processor that is limited to doing 64-bit math natively. The problem is a bit hard to explain without an explicit example of how this is solved with BigInteger in the System.Numerics namespace (available in .NET Framework 4.8, for example). The basic idea is to 'decompose' the number into an array representation.
The mathematical term 'decompose' here means:
"express (a number or function) as a combination of simpler components."
Internally BigInteger uses an internal array (actually multiple internal constructs) and a helper class called BigIntegerBuilder. It can implicitly convert a UInt64 integer without problem, and for even bigger numbers you can use the + operator, for example.
BigInteger bignum = new BigInteger(18446744073709551615);
bignum += 1;
You can read about the implicit conversion and the + operator here:
https://referencesource.microsoft.com/#System.Numerics/System/Numerics/BigInteger.cs
public static BigInteger operator +(BigInteger left, BigInteger right)
{
    left.AssertValid();
    right.AssertValid();

    if (right.IsZero) return left;
    if (left.IsZero) return right;

    int sign1 = +1;
    int sign2 = +1;

    BigIntegerBuilder reg1 = new BigIntegerBuilder(left, ref sign1);
    BigIntegerBuilder reg2 = new BigIntegerBuilder(right, ref sign2);

    if (sign1 == sign2)
        reg1.Add(ref reg2);
    else
        reg1.Sub(ref sign1, ref reg2);

    return reg1.GetInteger(sign1);
}
In the code above from Reference Source you can see that a BigIntegerBuilder is used to add the left and right operands, which are themselves BigInteger values.
Interestingly, it keeps its internal state in a private array called "_bits", and that is the answer to your question: BigInteger keeps track of an array of 32-bit integers and is therefore able to handle big integers, even beyond 64 bits.
You can drop this code into a console application or LINQPad (which has the .Dump() method I use here) and inspect:
// Requires: using System.Numerics; using System.Reflection;
BigInteger bignum = new BigInteger(18446744073709551615);
bignum.GetType().GetField("_bits",
    BindingFlags.NonPublic | BindingFlags.Instance).GetValue(bignum).Dump();
A detail about BigInteger is revealed in a comment in its source code on Reference Source, quoted below: for values in the range int.MinValue < n <= int.MaxValue, BigInteger stores the value directly in the _sign field and leaves _bits null; for all other values, the _bits field is used.
Obviously, the internal array needs to be convertible to a base-10 representation so humans can read it; the ToString() method converts the BigInteger to such a string representation.
For a deeper understanding, consider using .NET source stepping to step into the code and see how the mathematics is carried out. But for a basic understanding: BigInteger keeps its value in an array of 32-bit words, which is converted to a readable format on demand, and that is what allows numbers bigger than even Int64.
// For values int.MinValue < n <= int.MaxValue, the value is stored in sign
// and _bits is null. For all other values, sign is +1 or -1 and the bits are in _bits
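The same reflection trick shows both cases from that comment (a sketch for LINQPad or a console app; requires using System.Numerics and using System.Reflection, and the uint[] cast matches the field type in the Reference Source):
BigInteger small = new BigInteger(42);   // int.MinValue < 42 <= int.MaxValue
BigInteger large = BigInteger.Parse("340282366920938463463374607431768211455");  // 2^128 - 1
FieldInfo bitsField = typeof(BigInteger).GetField("_bits",
    BindingFlags.NonPublic | BindingFlags.Instance);
Console.WriteLine(bitsField.GetValue(small) == null);          // True: stored in _sign
Console.WriteLine(((uint[])bitsField.GetValue(large)).Length); // 4 (four 32-bit words)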
I have a system that stores a decimal value on a Mifare card
The C# decimal value is converted to a byte array of 16 bytes using this code:
byte[] tmp;
MemoryStream memStream = new MemoryStream();
BinaryWriter writer = new BinaryWriter(memStream);
try
{
    try
    {
        writer.Write(m_Value);  // m_Value is the decimal being stored
        tmp = memStream.ToArray();
    }
    finally
    {
        memStream.Close();
    }
}
finally
{
    writer.Close();
}
which gives a 16-byte representation.
Now we have a customer who needs to read these 16 bytes and convert them back to a decimal number, but in C.
How is this done?
Can anyone find a definition/spec of the C# byte representation? I cannot.
You don't need to use MemoryStream, BinaryWriter, etc, here; just decimal.GetBits(value) is fine. The contents are fully documented, here: https://msdn.microsoft.com/en-us/library/system.decimal.getbits(v=vs.110).aspx. There is a constructor that accepts the same format.
Note that this gives you an int[], not a byte[], but... getting between int and byte is trivial; probably the easiest is to use shift and mask.
The BinaryWriter version is not officially documented, but is probably just the integers from GetBits() written sequentially.
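As a sketch of that shift-and-mask approach (the helper names here are mine, not part of any API), flattening the four GetBits ints into 16 little-endian bytes and reading them back:
static byte[] DecimalToBytes(decimal value)
{
    int[] bits = decimal.GetBits(value);  // lo, mid, hi, flags
    var bytes = new byte[16];
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            bytes[i * 4 + j] = (byte)(bits[i] >> (8 * j));  // little-endian
    return bytes;
}

static decimal BytesToDecimal(byte[] bytes)
{
    var bits = new int[4];
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            bits[i] |= bytes[i * 4 + j] << (8 * j);
    return new decimal(bits);
}
The C side would read the same 16 bytes as four little-endian 32-bit words and interpret them per the GetBits documentation.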
It is not a good idea to rely on the internal implementation of System.Decimal, as it is subject to change.
It is much better to use Decimal.GetBits(value), which is well documented.
The other question is how you are going to pass your values to unmanaged code. For example, if you are using P/Invoke you just pass a decimal argument to the method and it will be automatically marshaled to native code (if you want to specify the memory representation you can use a [MarshalAs] attribute, e.g. [MarshalAs(UnmanagedType.Currency)]).
As far as I know, C++ does not have a decimal type by default.
However, if you really need the memory representation of System.Decimal you can easily obtain it from the source code:
Decimal.cs and Decimal.cpp.
I am maintaining a C# desktop application on Windows 7, using Visual Studio 2013. Somewhere in the code there is the following line, which tries to create a 0.01 decimal value using the Decimal(Int32[]) constructor:
decimal d = new decimal(new int[] { 1, 0, 0, 131072 });
First question is, is it different from the following?
decimal d = 0.01M;
If it is not different, why did the developer go through the trouble of coding it like that?
I need to change this line in order to create dynamic values. Something like:
decimal d = (decimal) (1 / Math.Pow(10, digitNumber));
Am I going to cause some unwanted behavior this way?
It seems useful to me when the source of the decimal consists of bits.
The decimal type in .NET has an implementation that is based on a sequence of bit fields (not just one stream of bits like an int), so it can be useful to construct a decimal from bits when you communicate with other systems that return a decimal as a blob of bytes (over a socket, from a piece of memory, etc.).
It is then easy to convert the set of bits to a decimal, with no need for fancy conversion code. Also, you can construct a decimal from the inputs defined in the standard, which makes it convenient for testing the .NET Framework too.
The decimal(int[] bits) constructor allows you to give a bitwise definition of the decimal you're creating; bits must be a four-int array where:
bits[0], bits[1], and bits[2] make up the 96-bit integer number.
bits[3] contains the scale factor and sign.
It just allows you to get really precise with the definition of the decimal; judging from your example, I don't think you need that level of control.
See here for more detail on using that constructor, or here for other constructors that may be more appropriate for you.
To answer your question more specifically: if digitNumber is the desired number of decimal places (the scale), then decimal d = new decimal(new int[] { 1, 0, 0, digitNumber << 16 }); does what you want, since the scale factor goes in bits 16-23 of the last int in the array.
The definition in the XML documentation is:
//
// Summary:
// Initializes a new instance of System.Decimal to a decimal value represented
// in binary and contained in a specified array.
//
// Parameters:
// bits:
// An array of 32-bit signed integers containing a representation of a decimal
// value.
//
// Exceptions:
// System.ArgumentNullException:
// bits is null.
//
// System.ArgumentException:
// The length of the bits is not 4.-or- The representation of the decimal value
// in bits is not valid.
So for some unknown reason the original developer wanted to initialize his decimal this way. Maybe he just wanted to confuse someone in the future.
It can't possibly affect your code if you change this to
decimal d = 0.01m;
because
(new decimal(new int[] { 1, 0, 0, 131072})) == 0.01m
You should know exactly how a decimal is stored in memory.
You can use this method to generate the desired value:
public static decimal Base10FractionGenerator(int digits)
{
if (digits < 0 || digits > 28)
throw new ArgumentException($"'{nameof(digits)}' must be between 0 and 28");
return new decimal(new[] { 1, 0, 0, digits << 16 });
}
Use it like:
Console.WriteLine(Base10FractionGenerator(0));
Console.WriteLine(Base10FractionGenerator(2));
Console.WriteLine(Base10FractionGenerator(5));
Here is the result
1
0.01
0.00001
The particular constructor you're talking about generates a decimal from four 32-bit values. Unfortunately, newer versions of the Common Language Infrastructure (CLI) leave its exact format unspecified (presumably to allow implementations to support different decimal formats) and now merely guarantee a minimum precision and range for decimal numbers. However, earlier versions of the CLI do define that format exactly as Microsoft's implementation does, so it is probably kept that way in Microsoft's implementation for backward compatibility. Still, it is not ruled out that other implementations of the CLI will interpret the four 32-bit values of the Decimal constructor differently.
Decimals are exact numerics; you can use == or != to test for equality.
Perhaps this line of code comes from some other place where it made sense at some particular point in time.
I'd clean it up.
I'm using the software SharpDevelop (C#).
I've created a list of integers (array) like this:
int[] name = new int[number-of-elements] { elements-separated-by-commas };
In the {} I would like to put 1000 integers, some with over 70 digits.
But when I do that I get the following error:
Integral constant is too large (CS1021).
So how do I solve this problem?
The error does not mean that you have too many integers in your array. It means that one of the integers is larger than the maximum value representable in an int in C#, i.e. above 2,147,483,647.
If you need representation of 70-digit numbers, use BigInteger:
BigInteger[] numbers = new[] {
BigInteger.Parse("1234567890123456789012345678")
, BigInteger.Parse("2345678901234567890123456789")
, ...
};
In .NET Framework 4.0, Microsoft introduced System.Numerics.dll, which contains a BigInteger structure that can represent an arbitrarily large signed integer. For more information you can refer to http://msdn.microsoft.com/en-us/library/system.numerics.biginteger%28v=vs.100%29.aspx
BigInteger[] name =
{
BigInteger.Parse("9999999999999999999999999999999999999999999999999999999999999999999999"),
BigInteger.Parse("9999999999999999999999999999999999999999999999999999999999999999999999")
};
For older versions of the framework you can use the IntX library. You can obtain the package either from NuGet with the Install-Package IntX command or from https://intx.codeplex.com/
IntX[] name =
{
IntX.Parse("9999999999999999999999999999999999999999999999999999999999999999999999"),
IntX.Parse("9999999999999999999999999999999999999999999999999999999999999999999999")
};
Another problem is that the largest integer literal you can define in C# is a ulong, with a max value of 18,446,744,073,709,551,615 (larger values lead to a compile error), which is obviously not enough in your case; the easy solution is to use BigInteger.Parse, or IntX.Parse in the case of the IntX library.
How can I convert a double value to binary?
I have a value like the one below: 125252525235558554452221545332224587265. I want to convert this to binary format (1s and 0s), so I am keeping it in a double and then trying to convert it to binary. I am using C# / .NET.
Well, you haven't specified a platform or what sort of binary value you're interested in, but in .NET there's BitConverter.DoubleToInt64Bits which lets you get at the IEEE 754 bits making up the value very easily.
In Java there's Double.doubleToLongBits which does the same thing.
Note that if you have a value such as "125252525235558554452221545332224587265" then you've got more information than a double can store accurately in the first place.
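To illustrate the precision loss (a minimal sketch; the exact formatting of the double output depends on the runtime): the 39-digit value has far more significant digits than a double's roughly 15-17, whereas System.Numerics.BigInteger preserves it exactly:
using System.Numerics;

double approx = 125252525235558554452221545332224587265d;  // silently rounded
BigInteger exact = BigInteger.Parse("125252525235558554452221545332224587265");
Console.WriteLine(approx);  // prints an approximation, about 1.2525E+38
Console.WriteLine(exact);   // prints all 39 digits exactly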
In C, you can do it for instance this way, which is a classic use of the union construct:
int i;
union {
    double x;
    unsigned char byte[sizeof (double)];
} converter;

converter.x = 5.5555555555556e18;
for (i = 0; i < sizeof converter.byte; i++)
    printf("%02x ", converter.byte[i]);
If you stick this in a main() and run it, it might print something like this:
~/src> gcc -o floatbits floatbits.c
~/src> ./floatbits
ba b5 f6 15 53 46 d3 43
Note though that this, of course, is platform-dependent in its endianness. The above is from a Linux system running on a Sempron CPU, i.e. it's little endian.
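For comparison, roughly the C# counterpart of the union trick (a sketch; BitConverter exposes the platform's byte order):
double x = 5.5555555555556e18;
byte[] bytes = BitConverter.GetBytes(x);          // the raw IEEE-754 storage
Console.WriteLine(BitConverter.IsLittleEndian);   // True on x86/x64
Console.WriteLine(BitConverter.ToString(bytes));  // BA-B5-F6-15-53-46-D3-43 on a little-endian machine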
A decade late but hopefully this will help someone:
// Converts a double value to a string in base 2 for display.
// Example: 123.5 --> "0:10000000101:1110111000000000000000000000000000000000000000000000"
// Created by Ryan S. White in 2020, Released under the MIT license.
string DoubleToBinaryString(double val)
{
long v = BitConverter.DoubleToInt64Bits(val);
string binary = Convert.ToString(v, 2);
return binary.PadLeft(64, '0').Insert(12, ":").Insert(1, ":");
}
If you mean you want to do it yourself, then this is not a programming question.
If you want to make a computer do it, the easiest way is to use a floating point input routine and then display the result in its hex form. In C++:
double f = atof ("5.5555555555556E18");
unsigned char *b = (unsigned char *) &f;
for (int j = 0; j < 8; ++j)
printf (" %02x", b [j]);
A double value already IS a binary value; it is just a matter of the representation that you wish it to have. When a programming language calls a chunk of memory a double, it interprets it one way; if you happen to call the same chunk of memory an int, it is not the same number.
So it depends what you really want... If you need to write it to disk or to network, then you need to think about BigEndian/LittleEndian.
For these huge numbers (which cannot be represented accurately using a double) you need a specialized class to hold the information.
C# provides the Decimal type:
The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding. For example, the following code produces a result of 0.9999999999999999999999999999 rather than 1.
If you need more precision than this, you will need to make your own class, I guess. There is one here for ints: http://sourceforge.net/projects/cpp-bigint/ although it seems to be for C++.