Bit shifting for uint in C#

If I want to convert 4 bytes into an int, I can do this:
int i1 = -12345; // any example value
byte[] b = BitConverter.GetBytes(i1);
int i2 = BitConverter.ToInt32(b, 0);
int i3 = b[0] | (b[1] << 8) | (b[2] << 16) | (b[3] << 24);
and then i1, i2, and i3 will all be equal.
But how do I do the same for a uint? This:
uint u1 = uint.MaxValue-1000;
byte[] b = BitConverter.GetBytes(u1);
uint u2 = BitConverter.ToUInt32(b,0);
uint u3 = (uint)(b[0] | (b[1]<<8) | (b[2]<<16) | (b[3]<<24));
results in an overflow for large uints.

It would only throw that exception in a checked context. See: http://msdn.microsoft.com/en-us/library/y3d0kef1(v=vs.80).aspx.
No exception:
uint u1 = uint.MaxValue - 1000;
byte[] b = BitConverter.GetBytes(u1);
uint u2 = BitConverter.ToUInt32(b, 0);
uint u3 = (uint) (b[0] | (b[1] << 8) | (b[2] << 16) | (b[3] << 24));
Exception:
checked
{
    uint u1 = uint.MaxValue - 1000;
    byte[] b = BitConverter.GetBytes(u1);
    uint u2 = BitConverter.ToUInt32(b, 0);
    uint u3 = (uint) (b[0] | (b[1] << 8) | (b[2] << 16) | (b[3] << 24));
}
No exception:
checked
{
    unchecked
    {
        uint u1 = uint.MaxValue - 1000;
        byte[] b = BitConverter.GetBytes(u1);
        uint u2 = BitConverter.ToUInt32(b, 0);
        uint u3 = (uint) (b[0] | (b[1] << 8) | (b[2] << 16) | (b[3] << 24));
        Console.WriteLine(u1 + " " + u2 + " " + u3);
    }
}
Make sure you're not compiling with the /checked option.
The exception is thrown by the cast from int to uint. Applying the shift operators to the bytes (the line with uint u3 = ...) implicitly promotes them to int. A uint with the most significant bit set corresponds to a negative int, which is out of range for uint, so the cast overflows in a checked context. The int version causes no such exception because it contains no cast that could overflow.
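If you want the expression to be safe even in a checked context, you can avoid the int intermediate altogether by doing the shifts on uint operands, so no narrowing cast is needed. A minimal sketch of that alternative:
uint u3 = b[0]
        | ((uint)b[1] << 8)
        | ((uint)b[2] << 16)
        | ((uint)b[3] << 24);
Here each byte is widened to uint before shifting, every | result stays uint, and there is no cast that an overflow check could fire on.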

I ran your code with uint values up to 4,294,967,295 (the maximum), and it works fine in all cases.

Related

C# Pack 5 integers into 1

I'm trying to pack, and unpack, 5 integers (max 999 for each) into a single unique integer using bit-shift operations:
static UInt64 Combine(uint a, uint b, uint c, uint d, uint e)
{
    return (a << 48) | (b << 32) | (c << 16) | (d << 8) | e;
}
However, I am unable to unpack the number back.
Can anyone please guide me to what I could be doing wrong? Thanks.
In order to pack the values 0..999, you need ten bits for each, not eight. Ten bits give you the values 0..1023, whereas eight only give you 0..255. There is a second problem: the operands in your Combine are 32-bit uints, and C# masks the shift count for a 32-bit operand to five bits, so a << 48 actually shifts by only 16 and b << 32 doesn't shift at all. The version below does all its shifting on a 64-bit value instead.
So the function you need is something like:
static UInt64 Combine(uint a, uint b, uint c, uint d, uint e)
{
    UInt64 retval = a;
    retval = (retval << 10) | b;
    retval = (retval << 10) | c;
    retval = (retval << 10) | d;
    retval = (retval << 10) | e;
    return retval;
}
Then, to unpack them, just extract each group of ten bits, one at a time, such as:
static void Extract(UInt64 val, out uint a, out uint b,
                    out uint c, out uint d, out uint e)
{
    e = Convert.ToUInt32(val & 0x3ff); val = val >> 10;
    d = Convert.ToUInt32(val & 0x3ff); val = val >> 10;
    c = Convert.ToUInt32(val & 0x3ff); val = val >> 10;
    b = Convert.ToUInt32(val & 0x3ff); val = val >> 10;
    a = Convert.ToUInt32(val & 0x3ff);
}
Here is another way of storing the numbers. It's a little different, but I thought I'd present it: basically, we're just emulating C-style unions:
using System.Runtime.InteropServices;
using System.Windows.Forms;

namespace Unions
{
    public partial class Form1 : Form
    {
        [StructLayout(LayoutKind.Explicit)]
        struct uShortArray
        {
            [FieldOffset(0)]
            public ushort Bytes01;
            [FieldOffset(2)]
            public ushort Bytes23;
            [FieldOffset(4)]
            public ushort Bytes45;
            [FieldOffset(6)]
            public ushort Bytes67;
            [FieldOffset(0)]
            public long long1;
        }

        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, System.EventArgs e)
        {
            uShortArray ua = default(uShortArray);
            ua.Bytes01 = 999;
            ua.Bytes23 = 164;
            ua.Bytes45 = 581;
            ua.Bytes67 = 43;
            MessageBox.Show($"ua = [Bytes 0 - 1 : {ua.Bytes01}] ... [Byte 2 - 3 : {ua.Bytes23}] ... [Bytes 4 - 5 : {ua.Bytes45}] ... [Bytes 6 - 7 : {ua.Bytes67}] ... [long1 : {ua.long1}]");

            uShortArray ua2 = default(uShortArray);
            Combine(out ua2, 543, 657, 23, 999);
            MessageBox.Show($"ua2 = [Bytes 0 - 1 : {ua2.Bytes01}] ... [Byte 2 - 3 : {ua2.Bytes23}] ... [Bytes 4 - 5 : {ua2.Bytes45}] ... [Bytes 6 - 7 : {ua2.Bytes67}] ... [long1 : {ua2.long1}]");

            uShortArray ua3 = default(uShortArray);
            ua3.long1 = ua.long1; // As you can see, you don't need an extract. You just assign the "extract" value to long1.
            MessageBox.Show($"ua3 = [Bytes 0 - 1 : {ua3.Bytes01}] ... [Byte 2 - 3 : {ua3.Bytes23}] ... [Bytes 4 - 5 : {ua3.Bytes45}] ... [Bytes 6 - 7 : {ua3.Bytes67}] ... [long1 : {ua3.long1}]");
        }

        private void Combine(out uShortArray inUA, ushort in1, ushort in2, ushort in3, ushort in4)
        {
            inUA = default(uShortArray);
            inUA.Bytes01 = in1;
            inUA.Bytes23 = in2;
            inUA.Bytes45 = in3;
            inUA.Bytes67 = in4;
        }
    }
}
This struct only stores 4 values, but you can use a larger type instead of long to hold more numbers. Note that each overlapped field is a full 16-bit ushort (0..65535), so values in the 0..999 range fit with room to spare.

C++ vs C# bitwise operations on 64-bit ints - performance

I have a 2D field of bits stored in an array of 5 unsigned longs.
I am going for the best performance.
I am working in C# but I tried to set a benchmark by implementing my class in C++.
The problem here is that the C# implementation takes about 10 seconds to finish, whereas the C++ takes about 1 second, making it 10 times faster. The C++ is an x64 build in VS2015; the C# is x64, VS2015, .NET 4.6. Both in Release, of course.
EDIT: After optimizing the C# code a little, it still takes 7 to 8 seconds vs the C++'s 1.3 seconds.
Note: C++ in x86 takes about 6 seconds to finish. I am running the code on a 64-bit machine.
Question: What makes the C++ THAT much faster? And is there a way to optimize the C# code to be at least similarly fast? (Maybe some unsafe magic?)
What puzzles me is that we are talking just about iterating through arrays and bitwise operations. Shouldn't it be JITed to pretty much the same thing as C++?
Example code:
There are two simple functions in the implementation, Left() and Right(), shifting the whole field by 1 bit to the left or right respectively, with appropriate bit carrying between the longs.
C++
#include <iostream>
#include <chrono>

using namespace std;
using namespace std::chrono;

class BitField
{
private:
    unsigned long long LEFTMOST_BIT = 0x8000000000000000;
    unsigned long long RIGHTMOST_BIT = 1;

public:
    unsigned long long Cells_l[5];

    BitField()
    {
        for (size_t i = 0; i < 5; i++)
        {
            Cells_l[i] = rand(); // Random initialization
        }
    }

    void Left()
    {
        unsigned long long carry = 0;
        unsigned long long nextCarry = 0;
        for (int i = 0; i < 5; i++)
        {
            nextCarry = (Cells_l[i] & LEFTMOST_BIT) >> 63;
            Cells_l[i] = Cells_l[i] << 1 | carry;
            carry = nextCarry;
        }
    }

    void Right()
    {
        unsigned long long carry = 0;
        unsigned long long nextCarry = 0;
        for (int i = 4; i >= 0; i--)
        {
            nextCarry = (Cells_l[i] & RIGHTMOST_BIT) << 63;
            Cells_l[i] = Cells_l[i] >> 1 | carry;
            carry = nextCarry;
        }
    }
};

int main()
{
    BitField bf;
    high_resolution_clock::time_point t1 = high_resolution_clock::now();
    for (int i = 0; i < 100000000; i++)
    {
        bf.Left();
        bf.Left();
        bf.Left();
        bf.Right();
        bf.Right();
        bf.Left();
        bf.Right();
        bf.Right();
    }
    high_resolution_clock::time_point t2 = high_resolution_clock::now();
    auto duration = duration_cast<milliseconds>(t2 - t1).count();
    cout << "Time: " << duration << endl << endl;
    // Print to avoid compiler optimizations
    for (size_t i = 0; i < 5; i++)
    {
        cout << bf.Cells_l[i] << endl;
    }
    return 0;
}
C#
using System;
using System.Diagnostics;

namespace TestCS
{
    class BitField
    {
        const ulong LEFTMOST_BIT = 0x8000000000000000;
        const ulong RIGHTMOST_BIT = 1;

        static Random rnd = new Random();
        ulong[] Cells;

        public BitField()
        {
            Cells = new ulong[5];
            for (int i = 0; i < 5; i++)
            {
                Cells[i] = (ulong)rnd.Next(); // Random initialization
            }
        }

        public void Left()
        {
            ulong carry = 0;
            ulong nextCarry = 0;
            for (int i = 0; i < 5; i++)
            {
                nextCarry = (Cells[i] & LEFTMOST_BIT) >> 63;
                Cells[i] = Cells[i] << 1 | carry;
                carry = nextCarry;
            }
        }

        public void Right()
        {
            ulong carry = 0;
            ulong nextCarry = 0;
            for (int i = 4; i >= 0; i--)
            {
                nextCarry = (Cells[i] & RIGHTMOST_BIT) << 63;
                Cells[i] = Cells[i] >> 1 | carry;
                carry = nextCarry;
            }
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            BitField bf = new BitField();
            Stopwatch sw = new Stopwatch();

            // Call to remove the compilation time from measurements
            bf.Left();
            bf.Right();

            sw.Start();
            for (int i = 0; i < 100000000; i++)
            {
                bf.Left();
                bf.Left();
                bf.Left();
                bf.Right();
                bf.Right();
                bf.Left();
                bf.Right();
                bf.Right();
            }
            sw.Stop();
            Console.WriteLine($"Done in: {sw.Elapsed.TotalMilliseconds.ToString()}ms");
        }
    }
}
EDIT: Fixed "nextCarry" typos in example code.
I have got enough information from the comments and a deleted answer from @AntoninLejsek to answer this myself.
TL;DR: The C++ compiler does a much better job of optimizing, and C# managed array access costs a lot when done in a loop. However, unsafe code and fixed access alone are not enough to match C++.
It seems we need to optimize the C# code manually to get performance comparable to C++:
Unroll the loops.
Use unsafe code for fixed array access.
Don't access the array repeatedly; store the items in local variables instead.
The following C# code runs as fast as the C++ code (about 100 ms faster, in fact). Compiled on .NET 4.6, VS 2015, Release, x64.
unsafe struct BitField
{
    static Random rnd = new Random();
    public fixed ulong Cells[5];

    public BitField(int nothing)
    {
        fixed (ulong* p = Cells)
        {
            for (int i = 0; i < 5; i++)
            {
                p[i] = (ulong)rnd.Next(); // Just some random number
            }
        }
    }

    public void StuffUnrolledNonManaged()
    {
        ulong u0;
        ulong u1;
        ulong u2;
        ulong u3;
        ulong u4;
        fixed (ulong* p = Cells)
        {
            u0 = p[0];
            u1 = p[1];
            u2 = p[2];
            u3 = p[3];
            u4 = p[4];
        }

        ulong carry = 0;
        ulong nextCarry = 0;
        for (int i = 0; i < 100000000; i++)
        {
            // left
            carry = 0;
            nextCarry = u0 >> 63;
            u0 = u0 << 1 | carry;
            carry = nextCarry;
            nextCarry = u1 >> 63;
            u1 = u1 << 1 | carry;
            carry = nextCarry;
            nextCarry = u2 >> 63;
            u2 = u2 << 1 | carry;
            carry = nextCarry;
            nextCarry = u3 >> 63;
            u3 = u3 << 1 | carry;
            carry = nextCarry;
            u4 = u4 << 1 | carry;
            // left
            carry = 0;
            nextCarry = u0 >> 63;
            u0 = u0 << 1 | carry;
            carry = nextCarry;
            nextCarry = u1 >> 63;
            u1 = u1 << 1 | carry;
            carry = nextCarry;
            nextCarry = u2 >> 63;
            u2 = u2 << 1 | carry;
            carry = nextCarry;
            nextCarry = u3 >> 63;
            u3 = u3 << 1 | carry;
            carry = nextCarry;
            u4 = u4 << 1 | carry;
            // left
            carry = 0;
            nextCarry = u0 >> 63;
            u0 = u0 << 1 | carry;
            carry = nextCarry;
            nextCarry = u1 >> 63;
            u1 = u1 << 1 | carry;
            carry = nextCarry;
            nextCarry = u2 >> 63;
            u2 = u2 << 1 | carry;
            carry = nextCarry;
            nextCarry = u3 >> 63;
            u3 = u3 << 1 | carry;
            carry = nextCarry;
            u4 = u4 << 1 | carry;
            // right
            carry = 0;
            nextCarry = u4 << 63;
            u4 = u4 >> 1 | carry;
            carry = nextCarry;
            nextCarry = u3 << 63;
            u3 = u3 >> 1 | carry;
            carry = nextCarry;
            nextCarry = u2 << 63;
            u2 = u2 >> 1 | carry;
            carry = nextCarry;
            nextCarry = u1 << 63;
            u1 = u1 >> 1 | carry;
            carry = nextCarry;
            u0 = u0 >> 1 | carry;
            // right
            carry = 0;
            nextCarry = u4 << 63;
            u4 = u4 >> 1 | carry;
            carry = nextCarry;
            nextCarry = u3 << 63;
            u3 = u3 >> 1 | carry;
            carry = nextCarry;
            nextCarry = u2 << 63;
            u2 = u2 >> 1 | carry;
            carry = nextCarry;
            nextCarry = u1 << 63;
            u1 = u1 >> 1 | carry;
            carry = nextCarry;
            u0 = u0 >> 1 | carry;
            // left
            carry = 0;
            nextCarry = u0 >> 63;
            u0 = u0 << 1 | carry;
            carry = nextCarry;
            nextCarry = u1 >> 63;
            u1 = u1 << 1 | carry;
            carry = nextCarry;
            nextCarry = u2 >> 63;
            u2 = u2 << 1 | carry;
            carry = nextCarry;
            nextCarry = u3 >> 63;
            u3 = u3 << 1 | carry;
            carry = nextCarry;
            u4 = u4 << 1 | carry;
            // right
            carry = 0;
            nextCarry = u4 << 63;
            u4 = u4 >> 1 | carry;
            carry = nextCarry;
            nextCarry = u3 << 63;
            u3 = u3 >> 1 | carry;
            carry = nextCarry;
            nextCarry = u2 << 63;
            u2 = u2 >> 1 | carry;
            carry = nextCarry;
            nextCarry = u1 << 63;
            u1 = u1 >> 1 | carry;
            carry = nextCarry;
            u0 = u0 >> 1 | carry;
            // right
            carry = 0;
            nextCarry = u4 << 63;
            u4 = u4 >> 1 | carry;
            carry = nextCarry;
            nextCarry = u3 << 63;
            u3 = u3 >> 1 | carry;
            carry = nextCarry;
            nextCarry = u2 << 63;
            u2 = u2 >> 1 | carry;
            carry = nextCarry;
            nextCarry = u1 << 63;
            u1 = u1 >> 1 | carry;
            carry = nextCarry;
            u0 = u0 >> 1 | carry;
        }

        fixed (ulong* p = Cells)
        {
            p[0] = u0;
            p[1] = u1;
            p[2] = u2;
            p[3] = u3;
            p[4] = u4;
        }
    }
}
Testing code
static void Main(string[] args)
{
    BitField bf = new BitField(0);
    Stopwatch sw = new Stopwatch();

    // Call to remove the compilation time from measurements
    bf.StuffUnrolledNonManaged();

    sw.Start();
    bf.StuffUnrolledNonManaged();
    sw.Stop();
    Console.WriteLine($"Non managed access unrolled in: {sw.Elapsed.TotalMilliseconds.ToString()}ms");
}
This code finishes in about 1.1 seconds.
Note: Fixed array access alone is not enough to match the C++ performance. If we don't use the local variables (every instance of u0 replaced by p[0], etc.), the time is about 3.6 seconds.
If we use only fixed access with the code from the question (calling the Left() and Right() functions in a loop), the time is about 5.8 seconds.
Part of the difference may be due to differences between the two versions' code: you didn't assign to nextCarry in the C++ Left or in the C# Right, but those could be typos in the example (they have since been fixed in the question).
You'd want to look at the disassembly of both to see the difference, but primarily it is due to the C++ compiler having more time to spend optimizing the code. In this case it unrolls the loops, inlines all the function calls (including the constructor), and shoves all of the stuff in Cells_l into registers. So there's one big loop using registers and no accesses to memory.
I haven't looked at the C# compiled output but I doubt it does anything close to that.
Also, as mentioned in a comment, replace all the Cells.Length calls in your C# code with 5 (just like you have in the C++ code).

Operator shift overflow? and UInt64

I tried to convert an Objective-C project to C# .NET. It was working great a few months ago, but I updated something and now it gives me bad values. Maybe you will see what I'm doing wrong.
This is the original function from https://github.com/magiconair/map2sqlite/blob/master/map2sqlite.m
uint64_t RMTileKey(int tileZoom, int tileX, int tileY)
{
    uint64_t zoom = (uint64_t) tileZoom & 0xFFLL; // 8bits, 256 levels
    uint64_t x = (uint64_t) tileX & 0xFFFFFFFLL;  // 28 bits
    uint64_t y = (uint64_t) tileY & 0xFFFFFFFLL;  // 28 bits
    uint64_t key = (zoom << 56) | (x << 28) | (y << 0);
    return key;
}
My buggy .NET version:
public UInt64 RMTileKey(int tileZoom, int tileX, int tileY)
{
    UInt64 zoom = (UInt64)tileZoom & 0xFFL; // 8 bits, 256 levels
    UInt64 x = (UInt64)tileX & 0xFFFFFFFL;  // 28 bits
    UInt64 y = (UInt64)tileY & 0xFFFFFFFL;  // 28 bits
    UInt64 key = (zoom << 56) | (x << 28) | (y << 0);
    return key;
}
The parameters are: tileZoom = 1, tileX = 32, tileY = 1012.
UInt64 key = (zoom << 56) | (x << 28) | (y << 0); gives me an incredibly big number.
Precisely:
if zoom = 1, it gives me zoom << 56 = 72057594037927936
if x = 32, it gives me (x << 28) = 8589934592
Maybe 0 was the result before?
I checked the doc at http://msdn.microsoft.com/en-us/library/f96c63ed(v=vs.110).aspx, which says:
If the type is unsigned, they are set to 0. Otherwise, they are filled with copies of the sign bit. For left-shift operators without overflow, the statement
In my case it seems I get an overflow when using an unsigned type, or maybe my .NET conversion is bad?
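For what it's worth, those large numbers are expected rather than an overflow: the key deliberately spreads the three values across 64 bits, so zoom << 56 for zoom = 1 is 2^56 = 72057594037927936, and x << 28 for x = 32 is 2^33 = 8589934592. A minimal sketch of the inverse, assuming the 8/28/28-bit layout from the comments (the Decode name is just for illustration):
public void Decode(UInt64 key, out int tileZoom, out int tileX, out int tileY)
{
    tileZoom = (int)((key >> 56) & 0xFF);    // top 8 bits
    tileX = (int)((key >> 28) & 0xFFFFFFF);  // next 28 bits
    tileY = (int)(key & 0xFFFFFFF);          // low 28 bits
}
Running RMTileKey(1, 32, 1012) through this recovers (1, 32, 1012), so the packing itself behaves the same as the Objective-C original.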

Despite conventional wisdom, using + instead of | to combine bytes into an int always works?

Conventional wisdom has it that when you are ORing bytes together to make an int, you should use the | operator rather than the + operator, otherwise you could have problems with the sign bit.
But this doesn't appear to be the case in C#. It looks like you can happily use the + operator, and it still works even for negative results.
My questions:
Is this really true?
If so, why does it work? (And why do a lot of people think it shouldn't - including me! ;)
Here's a test program which I believe tests every possible combination of four bytes using the + operator and the | operator, and verifies that both approaches yield the same results.
Here's the test code:
using System;
using System.Diagnostics;

namespace Demo
{
    class Program
    {
        int Convert1(byte b1, byte b2, byte b3, byte b4)
        {
            return b1 + (b2 << 8) + (b3 << 16) + (b4 << 24);
        }

        int Convert2(byte b1, byte b2, byte b3, byte b4)
        {
            return b1 | (b2 << 8) | (b3 << 16) | (b4 << 24);
        }

        void Run()
        {
            byte b = 0xff;
            Trace.Assert(Convert1(b, b, b, b) == -1); // Sanity check.
            Trace.Assert(Convert2(b, b, b, b) == -1);

            for (int i = 0; i < 256; ++i)
            {
                Console.WriteLine(i);
                byte b1 = (byte) i;
                for (int j = 0; j < 256; ++j)
                {
                    byte b2 = (byte) j;
                    for (int k = 0; k < 256; ++k)
                    {
                        byte b3 = (byte) k;
                        for (int l = 0; l < 256; ++l)
                        {
                            byte b4 = (byte) l;
                            Trace.Assert(Convert1(b1, b2, b3, b4) == Convert2(b1, b2, b3, b4));
                        }
                    }
                }
            }
            Console.WriteLine("Done.");
        }

        static void Main()
        {
            new Program().Run();
        }
    }
}
[EDIT]
To see how this works, consider this:
byte b = 0xff;
int i1 = b;
int i2 = (b << 8);
int i3 = (b << 16);
int i4 = (b << 24);
Console.WriteLine(i1);
Console.WriteLine(i2);
Console.WriteLine(i3);
Console.WriteLine(i4);
int total = i1 + i2 + i3 + i4;
Console.WriteLine(total);
This prints:
255
65280
16711680
-16777216
-1
Aha! Each shifted byte occupies its own eight bits, so no two terms ever have overlapping bits; adding values with disjoint bits can never generate a carry, which is why + and | give identical results here, even though the top term is negative in two's complement.
Differences:
When bits overlap, | and + will produce different results:
2 | 3 = 3
2 + 3 = 5
When actually using signed bytes, the result will be different:
-2 | -3 = -1
-2 + (-3) = -5
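Both cases are easy to reproduce in C#; a quick sketch:
// Overlapping bits: + carries, | does not.
Console.WriteLine(2 | 3); // 3
Console.WriteLine(2 + 3); // 5
// Signed (sbyte) operands are sign-extended when promoted to int:
sbyte s1 = -2, s2 = -3;
Console.WriteLine(s1 | s2); // -1
Console.WriteLine(s1 + s2); // -5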

How do I convert byte values into decimals?

I'm trying to load some decimal values from a file but I can't work out the correct way to take the raw values and convert them into decimals.
I've read the file out into a byte array, and each chunk of four bytes is supposed to represent one decimal value. To help figure it out, I've constructed a table of how the decimal values 1 through to 46 are represented as four byte chunks.
For instance, the number 1 appears as 0,0,128,63, the number 2 as 0,0,0,64, and so on up to 46, which is 0,0,56,66. The full table is available here.
There is also another series of numbers which go to three decimal places and include negatives, which is here.
The only documentation I have states
They are stored least significant byte first: 1's, 256's, 65536's, 16777216's. This makes the hex sequence 01 01 00 00 into the number 257 (decimal). In C/C++, to read e.g. a float, do: float x; fread(&x, sizeof(float), 1, fileptr);
However, I'm using .NET's File.ReadAllBytes method, so this isn't much help. If anyone can spare a few minutes to look at the example files and see if they can spot a way to convert the values to decimals, I'd be most grateful.
You can use BitConverter.ToSingle to read a float value from a byte array, so to get a sequence of floats, you could do something like this:
byte[] data = File.ReadAllBytes(fileName);
int count = data.Length / 4;
Debug.Assert(data.Length % 4 == 0);
IEnumerable<float> values = Enumerable.Range(0, count)
    .Select(i => BitConverter.ToSingle(data, i * 4));
Have you looked into using the BitConverter class? It converts between byte arrays and various types.
Edit:
MSDN has a helpful comment on the documentation for BitConverter at http://msdn.microsoft.com/en-us/library/system.bitconverter_methods(v=vs.85).aspx:
public static decimal ToDecimal(byte[] bytes)
{
    int[] bits = new int[4];
    bits[0] = ((bytes[0] | (bytes[1] << 8)) | (bytes[2] << 0x10)) | (bytes[3] << 0x18);     // lo
    bits[1] = ((bytes[4] | (bytes[5] << 8)) | (bytes[6] << 0x10)) | (bytes[7] << 0x18);     // mid
    bits[2] = ((bytes[8] | (bytes[9] << 8)) | (bytes[10] << 0x10)) | (bytes[11] << 0x18);   // hi
    bits[3] = ((bytes[12] | (bytes[13] << 8)) | (bytes[14] << 0x10)) | (bytes[15] << 0x18); // flags
    return new decimal(bits);
}

public static byte[] GetBytes(decimal d)
{
    byte[] bytes = new byte[16];
    int[] bits = decimal.GetBits(d);
    int lo = bits[0];
    int mid = bits[1];
    int hi = bits[2];
    int flags = bits[3];
    bytes[0] = (byte)lo;
    bytes[1] = (byte)(lo >> 8);
    bytes[2] = (byte)(lo >> 0x10);
    bytes[3] = (byte)(lo >> 0x18);
    bytes[4] = (byte)mid;
    bytes[5] = (byte)(mid >> 8);
    bytes[6] = (byte)(mid >> 0x10);
    bytes[7] = (byte)(mid >> 0x18);
    bytes[8] = (byte)hi;
    bytes[9] = (byte)(hi >> 8);
    bytes[10] = (byte)(hi >> 0x10);
    bytes[11] = (byte)(hi >> 0x18);
    bytes[12] = (byte)flags;
    bytes[13] = (byte)(flags >> 8);
    bytes[14] = (byte)(flags >> 0x10);
    bytes[15] = (byte)(flags >> 0x18);
    return bytes;
}
The .NET library implements the Decimal.GetBytes() method internally.
I've used the decompiled .NET library to create simple conversion methods between decimal and byte array; you can find them here:
https://gist.github.com/eranbetzalel/5384006#file-decimalbytesconvertor-cs
EDIT: Here is the full source code from my link.
public decimal BytesToDecimal(byte[] buffer, int offset = 0)
{
    var decimalBits = new int[4];
    decimalBits[0] = buffer[offset + 0] | (buffer[offset + 1] << 8) | (buffer[offset + 2] << 16) | (buffer[offset + 3] << 24);
    decimalBits[1] = buffer[offset + 4] | (buffer[offset + 5] << 8) | (buffer[offset + 6] << 16) | (buffer[offset + 7] << 24);
    decimalBits[2] = buffer[offset + 8] | (buffer[offset + 9] << 8) | (buffer[offset + 10] << 16) | (buffer[offset + 11] << 24);
    decimalBits[3] = buffer[offset + 12] | (buffer[offset + 13] << 8) | (buffer[offset + 14] << 16) | (buffer[offset + 15] << 24);
    return new Decimal(decimalBits);
}

public byte[] DecimalToBytes(decimal number)
{
    var decimalBuffer = new byte[16];
    var decimalBits = Decimal.GetBits(number); // returns int[4]: lo, mid, hi, flags
    var lo = decimalBits[0];
    var mid = decimalBits[1];
    var hi = decimalBits[2];
    var flags = decimalBits[3];
    decimalBuffer[0] = (byte)lo;
    decimalBuffer[1] = (byte)(lo >> 8);
    decimalBuffer[2] = (byte)(lo >> 16);
    decimalBuffer[3] = (byte)(lo >> 24);
    decimalBuffer[4] = (byte)mid;
    decimalBuffer[5] = (byte)(mid >> 8);
    decimalBuffer[6] = (byte)(mid >> 16);
    decimalBuffer[7] = (byte)(mid >> 24);
    decimalBuffer[8] = (byte)hi;
    decimalBuffer[9] = (byte)(hi >> 8);
    decimalBuffer[10] = (byte)(hi >> 16);
    decimalBuffer[11] = (byte)(hi >> 24);
    decimalBuffer[12] = (byte)flags;
    decimalBuffer[13] = (byte)(flags >> 8);
    decimalBuffer[14] = (byte)(flags >> 16);
    decimalBuffer[15] = (byte)(flags >> 24);
    return decimalBuffer;
}
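For example, a quick round trip through the two methods above:
decimal original = 123.456m;
byte[] bytes = DecimalToBytes(original);
decimal restored = BytesToDecimal(bytes);
// restored == 123.456m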
As others have mentioned, use the BitConverter class; see the example below:
byte[] bytez = new byte[] { 0x00, 0x00, 0x80, 0x3F };
float flt = BitConverter.ToSingle(bytez, 0); // 1.0
bytez = new byte[] { 0x00, 0x00, 0x00, 0x40 };
flt = BitConverter.ToSingle(bytez, 0); // 2.0
bytez = new byte[] { 0, 0, 192, 190 };
flt = BitConverter.ToSingle(bytez, 0); // -0.375
