In Java, the BigDecimal class represents values as A × pow(10, -B), where A (the unscaled value) is a two's-complement integer of arbitrary bit length and B (the scale) is a 32-bit integer.
In C#, Decimal represents values as pow(-1, s) × c × pow(10, -e), where the sign s is 0 or 1, the coefficient c satisfies 0 ≤ c < pow(2, 96), and the scale e satisfies 0 ≤ e ≤ 28.
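For reference, this layout can be inspected from C# with decimal.GetBits, which returns the three 32-bit words of the coefficient plus a flags word carrying the scale and sign (a quick illustration):
int[] bits = decimal.GetBits(-1234.5678m);
int lo = bits[0], mid = bits[1], hi = bits[2], flags = bits[3];
int scale = (flags >> 16) & 0xFF; // 4 for this value
bool isNegative = flags < 0;      // the sign is bit 31 of the flags word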
I want to convert a Java BigDecimal to something like the C# Decimal, in Java.
Can you help me?
I have something like this:
class CS_likeDecimal
{
private int hi;  // most significant 32 bits of c
private int mid; // middle 32 bits of c
private int lo;  // least significant 32 bits of c
.....
public CS_likeDecimal(BigDecimal data)
{
....
}
}
In fact I found this: What's the best way to represent System.Decimal in Protocol Buffers?.
It is a protocol buffer definition for sending a C# decimal, but the protobuf-net project uses it to send messages between C# endpoints (and I want to send them between C# and Java):
message Decimal {
optional uint64 lo = 1; // the first 64 bits of the underlying value
optional uint32 hi = 2; // the last 32 bits of the underlying value
optional sint32 signScale = 3; // the number of decimal digits, and the sign
}
Thanks,
The Decimal I use in protobuf-net is primarily intended to support the likely scenario of protobuf-net being used at both ends of the pipe, and it supports a fixed range. It sounds like the ranges of the two types in question are not the same, so they are not robustly compatible.
I would suggest explicitly using an alternative representation. I don't know what representations are available to Java's BigDecimal - whether there is a pragmatic byte[] version, or a string version.
If you are confident that the scale and range won't be a problem, then it should be possible to fudge between the two layouts with some bit-fiddling.
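For example, a plain-string round trip is the simplest portable option: the Java side sends bigDecimal.toPlainString(), and the .NET side parses it. A minimal sketch of the C# end, assuming the value fits in decimal's range and scale:
// Parse a value produced by Java's BigDecimal.toPlainString()
decimal value = decimal.Parse("-1234.5678",
    System.Globalization.NumberStyles.Number,
    System.Globalization.CultureInfo.InvariantCulture);

// Going the other way, this string can be fed to Java's new BigDecimal(String)
string wire = value.ToString(System.Globalization.CultureInfo.InvariantCulture);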
I needed to write a BigDecimal to/from .Net Decimal converter.
Using this reference:
http://msdn.microsoft.com/en-us/library/system.decimal.getbits.aspx
I wrote this code, which should work:
public static byte[] BigDecimalToNetDecimal(BigDecimal paramBigDecimal) throws IllegalArgumentException
{
// .Net Decimal target
byte[] result = new byte[16];
// Unscaled absolute value
BigInteger unscaledInt = paramBigDecimal.abs().unscaledValue();
int bitLength = unscaledInt.bitLength();
if (bitLength > 96)
throw new IllegalArgumentException("BigDecimal too big for .Net Decimal");
// Byte array
byte[] unscaledBytes = unscaledInt.toByteArray();
int unscaledFirst = 0;
if (unscaledBytes[0] == 0)
unscaledFirst = 1;
// Scale
int scale = paramBigDecimal.scale();
if (scale > 28)
throw new IllegalArgumentException("BigDecimal scale exceeds .Net Decimal limit of 28");
result[1] = (byte)scale;
// Copy unscaled value into bytes 4-15 (right-aligned)
for (int pSource = unscaledBytes.length - 1, pTarget = 15; (pSource >= unscaledFirst) && (pTarget >= 4); pSource--, pTarget--)
{
result[pTarget] = unscaledBytes[pSource];
}
// Signum at byte 0
if (paramBigDecimal.signum() < 0)
result[0] = -128;
return result;
}
public static BigDecimal NetDecimalToBigDecimal(byte[] paramNetDecimal)
{
int scale = paramNetDecimal[1];
int signum = paramNetDecimal[0] >= 0 ? 1 : -1;
byte[] magnitude = new byte[12];
for (int ptr = 0; ptr < 12; ptr++) magnitude[ptr] = paramNetDecimal[ptr + 4];
BigInteger unscaledInt = new BigInteger(signum, magnitude);
return new BigDecimal(unscaledInt, scale);
}
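On the .NET side, a matching decoder might look like the sketch below (my addition for illustration, not part of the original answer). It assumes the 16-byte layout produced above: the sign flag in byte 0, the scale in byte 1, and the 96-bit magnitude big-endian in bytes 4-15.
public static decimal NetDecimalFromBytes(byte[] b)
{
    bool isNegative = (b[0] & 0x80) != 0;
    byte scale = b[1];
    int hi  = (b[4]  << 24) | (b[5]  << 16) | (b[6]  << 8) | b[7];
    int mid = (b[8]  << 24) | (b[9]  << 16) | (b[10] << 8) | b[11];
    int lo  = (b[12] << 24) | (b[13] << 16) | (b[14] << 8) | b[15];
    return new decimal(lo, mid, hi, isNegative, scale);
}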
Related
I need to read 8 bool values and create a byte from them. How is this done?
Rather than hardcoding the following 1's and 0's, how can I create that binary value from a series of Boolean values in C#?
byte myValue = 0b001_0000;
There are many ways of doing it, for example building it from an array:
bool[] values = ...;
byte result = 0;
for (int i = values.Length - 1; i >= 0; --i) // assuming you store them "in reverse"
{
    if (values[i])
        result |= (byte)(1 << (values.Length - 1 - i));
}
My solution with Linq:
public static byte CreateByte(bool[] bits)
{
if (bits.Length > 8)
{
throw new ArgumentOutOfRangeException();
}
return (byte)bits.Reverse().Select((val, i) => Convert.ToByte(val) << i).Sum();
}
The call to Reverse() is optional, depending on whether you want index 0 to be the LSB (without Reverse) or the MSB (with Reverse).
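For example, a quick usage check (with Reverse() in place, index 0 maps to the most significant bit):
byte b = CreateByte(new[] { true, false, false, false, false, false, false, false });
// b == 128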
var values = new bool[8];
values [7] = true;
byte result = 0;
for (var i = 0; i < 8; i++)
{
//edited to use bit shifting because of community complaints :D
if (values [i]) result |= (byte)(1 << i);
}
// result => 128
This might be absolutely overkill, but I felt like playing around with SIMD. It could've probably been written even better but I don't know SIMD all that well.
If you want the reverse bit order to what this generates, just remove the shuffling part from the SIMD approach and change (7 - i) to just i in the fallback loop.
For those not familiar with SIMD, this approach is about 3 times faster than a normal for loop.
public static byte ByteFrom8Bools(ReadOnlySpan<bool> bools)
{
if (bools.Length < 8)
Throw();
static void Throw() // Throwing in a separate method helps JIT produce better code, or so I've heard
{
throw new ArgumentException("Not enough booleans provided");
}
// these are JIT compile time constants, only one of the branches will be compiled
// depending on the CPU running this code, eliminating the branch entirely
if(Sse2.IsSupported && Ssse3.IsSupported)
{
// copy out the 64 bits all at once
ref readonly bool b = ref bools[0];
ref bool refBool = ref Unsafe.AsRef(b);
ulong ulongBools = Unsafe.As<bool, ulong>(ref refBool);
// load our 64 bits into a vector register
Vector128<byte> vector = Vector128.CreateScalarUnsafe(ulongBools).AsByte();
// this is just to propagate the 1 set bit in true bools to the most significant bit
Vector128<byte> allTrue = Vector128.Create((byte)1);
Vector128<byte> compared = Sse2.CompareEqual(vector, allTrue);
// reverse the bytes we care about, leave the rest in their place
Vector128<byte> shuffleMask = Vector128.Create((byte)7, 6, 5, 4, 3, 2, 1, 0, 8, 9, 10, 11, 12, 13, 14, 15);
Vector128<byte> shuffled = Ssse3.Shuffle(compared, shuffleMask);
// move the most significant bit of each byte into a bit of int
int mask = Sse2.MoveMask(shuffled);
// returning byte = returning the least significant byte from int
return (byte)mask;
}
else
{
// fall back to a more generic algorithm if there aren't the correct instructions on the CPU
byte bits = 0;
for (int i = 0; i < 8; i++)
{
bool b = bools[i];
bits |= (byte)(Unsafe.As<bool, byte>(ref b) << (7 - i));
}
return bits;
}
}
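Example usage (the flag values are just an illustration):
bool[] flags = { true, false, false, false, false, false, false, true };
byte packed = ByteFrom8Bools(flags); // 0b1000_0001 == 129, index 0 is the MSB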
I want to create a custom data type which is 4 bits (nibble).
One option is this -
byte source = 0xAD;
var hiNybble = (source & 0xF0) >> 4; //Left hand nybble = A
var loNybble = (source & 0x0F); //Right hand nybble = D
However, I want to store each 4 bits in an array.
So for example, a 2 byte array,
00001111 01010000
would be stored in the custom data type array as 4 nibbles -
0000
1111
0101
0000
Essentially I want to operate on 4 bit types.
Is there any way I can convert the array of bytes into array of nibbles?
Appreciate an example.
Thanks.
You can encapsulate a stream returning 4-bit samples by reading then converting (written from a phone without a compiler to test. Expect typos and off-by-one errors):
public static int ReadNibbles(this Stream s, byte[] data, int offset, int length)
{
if (s == null)
{
throw new ArgumentNullException(nameof(s));
}
if (data == null)
{
throw new ArgumentNullException(nameof(data));
}
if (data.Length < offset + length)
{
throw new ArgumentOutOfRangeException(nameof(length));
}
var readBytes = s.Read(data, offset, length / 2);
for (int n = readBytes * 2 - 1, k = readBytes - 1; k >= 0; k--)
{
data[offset + n--] = (byte)(data[offset + k] & 0xf);
data[offset + n--] = (byte)(data[offset + k] >> 4);
}
return readBytes * 2;
}
To do the same for 12-bit integers (assuming MSB nibble ordering):
public static int Read(this Stream stream, ushort[] data, int offset, int length)
{
if (stream == null)
{
throw new ArgumentNullException(nameof(stream));
}
if (data == null)
{
throw new ArgumentNullException(nameof(data));
}
if (data.Length < offset + length)
{
throw new ArgumentOutOfRangeException(nameof(length));
}
if (length < 2)
{
throw new ArgumentOutOfRangeException(nameof(length), "Cannot read fewer than two samples at a time");
}
// we always need a multiple of two
length -= length % 2;
//   3 bytes     length samples
// ----------- * -------------- = N bytes
//  2 samples          1
int rawCount = (length / 2) * 3;
// This will place GC load. Something like a buffer pool or keeping
// the allocation as a field on the reader would be a good idea.
var rawData = new byte[rawCount];
int readBytes = 0;
// if the underlying stream returns fewer bytes than requested, keep reading
while (readBytes < rawCount)
{
int c = stream.Read(rawData, readBytes, rawCount - readBytes);
if (c <= 0)
{
// End of stream
break;
}
readBytes += c;
}
// unpack
int k = 0;
for (int i = 0; i < readBytes; i += 3)
{
// EOF in second byte is undefined
if (i + 1 >= readBytes)
{
throw new InvalidOperationException("Unexpected EOF");
}
data[(k++) + offset] = (ushort)((rawData[i + 0] << 4) | (rawData[i + 1] >> 4));
// EOF in third byte == only one sample
if (i + 2 < readBytes)
{
data[(k++) + offset] = (ushort)(((rawData[i + 1] & 0xf) << 8) | rawData[i + 2]);
}
}
return k;
}
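If all you need is an in-memory conversion rather than a stream wrapper, a minimal byte[]-to-nibbles sketch (MSB nibble first, matching the ordering in the question) could look like this:
public static byte[] ToNibbles(byte[] bytes)
{
    var nibbles = new byte[bytes.Length * 2];
    for (int i = 0; i < bytes.Length; i++)
    {
        nibbles[2 * i]     = (byte)(bytes[i] >> 4);   // high nibble first
        nibbles[2 * i + 1] = (byte)(bytes[i] & 0x0F); // then low nibble
    }
    return nibbles;
}
For the example in the question, ToNibbles(new byte[] { 0b0000_1111, 0b0101_0000 }) yields { 0x0, 0xF, 0x5, 0x0 }.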
The best way to do this would be to look at the source for one of the existing integral data types. For example Int16.
If you look a that type, you can see that it implements a handful of interfaces:
[Serializable]
public struct Int16 : IComparable, IFormattable, IConvertible, IComparable<short>, IEquatable<short> { /* ... */ }
The implementation of the type isn't very complicated. It has a MaxValue and a MinValue, a couple of CompareTo overloads, a couple of Equals overloads, the System.Object overrides (GetHashCode, GetType, ToString plus some overloads), a handful of Parse and TryParse overloads, and a range of IConvertible implementations.
In other places, you can find things like arithmetic, comparison and conversion operators.
BUT:
What System.Int16 has that you can't have is this:
internal short m_value;
That's a native type (16-bit integer) member that holds the value. There is no 4-bit native type. The best you are going to be able to do is have a native byte in your implementation that will hold the value. You can write accessors that constrain it to the lower 4 bits, but you can't do much more than that. If someone creates a Nibble array, it will be implemented as an array of those values. As far as I know, there's no way to inject your implementation into that array. Similarly, if someone creates some other collection (e.g., List<Nibble>), then the collection will be of instances of your type, each of which will take up 8 bits.
However
You can create specialized collection classes, NibbleArray, NibbleList, etc. C#'s syntax allows you to provide your own collection initialization implementation for a collection, your own indexing method, etc.
So, if someone does something like this:
var nyblArray = new NibbleArray(32);
nyblArray[4] = 0xd;
Then your code can, under the covers, create a 16-element byte array, set the low nibble of the third byte to 0xd.
Similarly, you can implement code to allow:
var newArray = new NibbleArray { 0x1, 0x3, 0x5, 0x7, 0x9, 0xa};
or
var nyblList = new NibbleList { 0x0, 0x2, 0xe};
A normal array will waste space, but your specialized collection classes will do what you are talking about (with the expense of some bit-twizzling).
The closest you can get to what you want is to use an indexer:
// Indexer declaration
public int this[int index]
{
// get and set accessors
}
Within the body of the indexer you can translate the index to the actual byte that contains your 4 bits.
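For example, a minimal NibbleArray sketch along these lines (the class name and the low-nibble-at-even-index packing are my assumptions):
public class NibbleArray
{
    private readonly byte[] _data;

    public NibbleArray(int length)
    {
        Length = length;
        _data = new byte[(length + 1) / 2]; // two nibbles per byte
    }

    public int Length { get; }

    public int this[int index]
    {
        get
        {
            byte b = _data[index / 2];
            // even index: low nibble, odd index: high nibble
            return (index & 1) == 0 ? b & 0x0F : (b >> 4) & 0x0F;
        }
        set
        {
            value &= 0x0F; // constrain to 4 bits
            int i = index / 2;
            if ((index & 1) == 0)
                _data[i] = (byte)((_data[i] & 0xF0) | value);
            else
                _data[i] = (byte)((_data[i] & 0x0F) | (value << 4));
        }
    }
}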
The next thing you can do is operator overloading. You can redefine +, -, *...
I'm looking to get the last 10 bits of a 32-bit integer.
Essentially what I did was take a number and jam it into the first 22 bits; I can extract that number back out quite nicely with
int test = Convert.ToInt32(uIActivityId >> nMethodBits); // method bits == 10, so this just pushes the last ten bits off the range
Where test results in the number I put in in the first place.
But now I'm stumped. So my question is: how do I get the last ten bits of a 32-bit integer?
First create a mask that has a 1 bit for those bits you want and a 0 bit for those you're not interested in. Then use binary & to keep only the relevant bits.
const uint mask = (1 << 10) - 1; // 0x3FF
uint last10 = input & mask;
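Putting it together with the packing scheme described in the question (a 22-bit value in the high bits and a 10-bit value in the low bits; the variable names here are hypothetical):
const int nMethodBits = 10;
const uint lowMask = (1u << nMethodBits) - 1; // 0x3FF

uint activityId = 123456; // fits in 22 bits
uint methodId = 789;      // fits in 10 bits

// pack: activity id in the upper 22 bits, method id in the lower 10
uint packed = (activityId << nMethodBits) | (methodId & lowMask);

// unpack
uint activityIdBack = packed >> nMethodBits; // the original 22-bit value
uint methodIdBack = packed & lowMask;        // the last 10 bits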
Here is another try:
List<bool> BitsOfInt(int input, int bitCount)
{
    // BitArray enumerates from the least significant bit, so the last bitCount
    // bits of the value are the first bitCount entries of the list
    List<bool> outArray = new BitArray(BitConverter.GetBytes(input)).OfType<bool>().ToList();
    return outArray.GetRange(0, bitCount);
}
int i = Convert.ToInt32("11100111101", 2);
int mask = Convert.ToInt32("1111111111", 2);
int test = i & mask;
int j = Convert.ToInt32("1100111101", 2);
if (test == j)
    System.Console.Out.WriteLine("it works");
public bool[] GetLast10Bits(int input) {
    // BitArray indexes from the least significant bit, so the first 10 entries
    // are the last 10 bits of the value
    BitArray array = new BitArray(new[] { input });
    return Enumerable.Range(0, 10).Select(i => array[i]).ToArray();
}
I was looking for a way to convert IEEE floating-point numbers to IBM floating-point format for an old system we are using.
Is there a general formula we can use in C# to this end?
Use:
// https://en.wikipedia.org/wiki/IBM_hexadecimal_floating-point
//
// float2ibm(-118.625F) == 0xC276A000
// 1 100 0010 0111 0110 1010 0000 0000 0000
//
// IBM/370 single precision, 4 bytes
// xxxx.xxxx xxxx.xxxx xxxx.xxxx xxxx.xxxx
// s|-exp--| |--------fraction-----------|
// (7) (24)
//
// value = (-1)**s * 16**(e - 64) * .f range = 5E-79 ... 7E+75
//
static int float2ibm(float fromFormat)
{
byte[] bytes = BitConverter.GetBytes(fromFormat);
int fconv = (bytes[3] << 24) | (bytes[2] << 16) | (bytes[1] << 8)| bytes[0];
if (fconv == 0)
return 0;
int fmant = (0x007fffff & fconv) | 0x00800000;
int t = (int)((0x7f800000 & fconv) >> 23) - 126;
while (0 != (t & 0x3)) {
++t;
fmant >>= 1;
}
fconv = (int)(0x80000000 & fconv) | (((t >> 2) + 64) << 24) | fmant;
return fconv; // Big-endian order
}
I adapted this from a piece of C code called static void float_to_ibm(int from[], int to[], int n, int endian).
The code above runs correctly on a PC.
The input is an IEEE float in little-endian byte order; the return value is a big-endian IBM float, stored in an int.
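A quick sanity check against the example value in the comment above:
int ibmBits = float2ibm(-118.625F);
Console.WriteLine(ibmBits.ToString("X8")); // expected: C276A000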
An obvious approach would be to use a textual representation of the number as the interchange format.
I recently had to convert one float format to another. It looks like the XDR format uses an odd layout for its floats, so when converting from XDR to standard floats, this code did it.
#include <rpc/rpc.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
// Read in an XDR float array, copy to a standard float array.
// The 'out' array needs to be allocated before the function call.
bool convertFromXdrFloatArray(float *in, float *out, long size)
{
XDR xdrs;
xdrmem_create(&xdrs, (char *)in, size*sizeof(float), XDR_DECODE);
for(int i = 0; i < size; i++)
{
if(!xdr_float(&xdrs, out++)) {
fprintf(stderr, "%s:%d:ERROR:xdr_float\n", __FILE__, __LINE__);
exit(1);
}
}
xdr_destroy(&xdrs);
return true;
}
Using speeding's answer, I added the following that may be useful in some cases:
/// <summary>
/// Converts an IEEE floating-point number to its IBM representation as a string of ASCII codes (4 or 8 characters).
/// It is useful for the SAS XPORT file format.
/// </summary>
/// <param name="from_">IEEE number</param>
/// <param name="padTo8_">When true, the output is 8 characters rather than 4</param>
/// <returns>Printable string according to the hardware's endianness</returns>
public static string Float2IbmAsAsciiCodes(float from_, bool padTo8_ = true)
{
StringBuilder sb = new StringBuilder();
string s;
byte[] bytes = BitConverter.GetBytes(Float2Ibm(from_)); // Big-endian order
if (BitConverter.IsLittleEndian)
{
// Reverse the byte order
for (int i = 3; i > -1; i--)
sb.Append(Convert.ToChar(bytes[i]));
s = sb.ToString();
if (padTo8_)
s = s.PadRight(8, '\0');
return s;
}
else
{
for (int i = 0; i < 4; i++)
sb.Append(Convert.ToChar(bytes[i]));
s = sb.ToString();
if (padTo8_)
s = s.PadRight(8, '\0');
return s;
}
}
Scenario:
I have a string of hexadecimal characters which encode 8-bit signed integers. Each pair of characters represents a byte that uses the leftmost (most significant) bit as the sign (rather than two's complement). I am converting these to signed ints within a loop and wondered if there is a better way to do it; there are too many conversions, and I am sure there is a more efficient method I am missing.
Current Code:
string strData = "FFC000407F"; // example input data, encodes: -127, -64, 0, 64, 127
int v;
for (int x = 0; x < strData.Length/2; x++)
{
v = HexToInt(strData.Substring(x * 2, 2));
Console.WriteLine(v); // do stuff with v
}
private int HexToInt(string _hexData)
{
string strBinary = Convert.ToString(Convert.ToInt32(_hexData, 16), 2).PadLeft(_hexData.Length * 4, '0');
int i = Convert.ToInt32(strBinary.Substring(1, 7), 2);
i = (strBinary.Substring(0, 1) == "0" ? i : -i);
return i;
}
Question:
Is there a more streamlined and direct approach to reading two hex characters and converting them to an int when they represent a signed int (-127 to 127) using the leftmost bit as the sign?
Just convert it to an int and handle the sign bit by testing the size of the converted number and masking off the sign bit.
private int HexToInt(string _hexData)
{
int number = Convert.ToInt32(_hexData, 16);
if (number >= 0x80)
return -(number & 0x7F);
return number;
}
Like this: (Tested)
(int)unchecked((sbyte)Convert.ToByte("FF", 16))
Explanation:
The unchecked cast to sbyte will perform a direct cast to a signed byte, interpreting the most significant bit as a sign bit.
However, it has a different range (it treats the byte as two's complement rather than sign-and-magnitude), so it won't help you here:
sbyte SignAndMagnitudeToTwosComplement(byte b)
{
    // 1 if the sign bit is set, 0 otherwise
    var isNegative = (b & 0x80) >> 7;
    // for negative values, flip the low 7 magnitude bits and add 1 (two's complement);
    // positive values pass through unchanged
    return (sbyte)((b ^ 0x7F * isNegative) + isNegative);
}
Then:
sbyte ReadSignAndMagnitudeByte(string hex)
{
return SignAndMagnitudeToTwosComplement(Convert.ToByte(hex,16));
}