How to properly read 16 byte unsigned integer with BinaryReader - c#

I need to parse a binary stream in .NET that contains a 16-byte unsigned integer. I would like to use the BinaryReader.ReadUIntXX() functions, but there isn't a BinaryReader.ReadUInt128() function available. I assume I will have to roll my own function using ReadByte and build an array, but I don't know whether that is the most efficient method.
Thanks!

I would love to take credit for this, but one quick search of the net, and voilà:
http://msdn.microsoft.com/en-us/library/bb384066.aspx
Here is the code sample (which is on the same page)
byte[] bytes = { 0, 0, 0, 25 };
// If the system architecture is little-endian (that is, little end first),
// reverse the byte array.
if (BitConverter.IsLittleEndian)
    Array.Reverse(bytes);
int i = BitConverter.ToInt32(bytes, 0);
Console.WriteLine("int: {0}", i);
// Output: int: 25
The only thing that most developers do not know is the difference between big-endian and little-endian. Well, like most things in life, the human race simply can't agree on very simple things (left- and right-hand-drive cars are a good example as well). Endianness is about the order in which the bytes of a multi-byte value are laid out; that order determines how the value is read back. One byte is eight bits.. then there is signed and unsigned.. but let's stick to the order. Take the 32-bit value 1 (one): stored little-endian the bytes are 01 00 00 00 (least significant byte first); stored big-endian they are 00 00 00 01 (most significant byte first). As the comment in the code suggests, BitConverter uses the machine's byte order, so if the data was written in the other order you have to reverse the array first (see http://en.wikipedia.org/wiki/Endianness). Why can't we all just agree???
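To make that concrete, here is a tiny demonstration of the layout (the output shown assumes a little-endian machine such as x86/x64):
byte[] raw = BitConverter.GetBytes(1);       // the 32-bit value 1
Console.WriteLine(string.Join(" ", raw));    // prints "1 0 0 0" on a little-endian machine
// A big-endian machine would lay out the same value as 0 0 0 1.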
I learned this lesson many years ago when dealing with embedded systems....remember linking? :) Am I showing my age??

I think the comments from 0xA3, SLaks, and Darin Dimitrov answered the question, but to put it all together:
A ReadUInt128() method is not available on the BinaryReader class in .NET, and the only solution I could find was to create my own function. As 0xA3 mentioned, there is a BigInteger type in .NET 4.0. I am in the process of creating my own function based upon everyone's comments.
Thanks!
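For anyone curious, here is a rough sketch of such a helper. My assumptions: the 16 bytes are stored little-endian in the stream, and System.Numerics.BigInteger (.NET 4.0, reference System.Numerics.dll) is acceptable as the return type.
static BigInteger ReadUInt128(BinaryReader reader)
{
    byte[] bytes = reader.ReadBytes(16);
    if (bytes.Length != 16)
        throw new EndOfStreamException();
    // If the stream stores the value big-endian, reverse the array first:
    // Array.Reverse(bytes);
    // BigInteger's byte[] constructor is little-endian and signed (two's complement),
    // so append a zero byte to force an unsigned interpretation of the high bit.
    byte[] unsigned = new byte[17];
    Array.Copy(bytes, unsigned, 16);
    return new BigInteger(unsigned);
}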

A Guid is exactly 16 bytes in size.
Guid guid = new Guid(byteArray);
But you cannot do maths with a Guid. If you need to, you can search for implementations of a BigInteger for .NET on the internet. You can then convert your byte array into a BigInteger.

Related

How do I use this CRC-32C C# library?

I've downloaded this library https://github.com/robertvazan/crc32c.net for a project I'm working on. I need to use CRC in a part of my project, so I downloaded the library as it is obviously going to be much faster than anything I'm going to write in the near future.
I have some understanding of how CRC works; I once made a software implementation of it (as part of learning) that worked. But I have got to be doing something incredibly stupid while trying to get this library to work and not realize it. No matter what I do, I can't seem to get a CRC of 0 even though the array was not changed.
Basically, my question is, how do I actually use this library to check for integrity of a byte array?
The way I understand it, I should call Crc32CAlgorithm.Compute(array) once to compute the CRC the first time, and then call it again on an array that has the previously returned value appended (I've tried appending it, as well as setting the last 4 bytes of the array to zeroes before putting the returned value there); if the second call returns 0, the array was unchanged.
Please help me, I don't know what I'm doing wrong.
EDIT: It doesn't work right when I do this (yes, I realize LINQ is very slow, this is just an example):
using (var hash = new Crc32CAlgorithm())
{
    var array = new byte[] { 1, 2, 3, 4, 5, 6, 7, 8 };
    var crc = hash.ComputeHash(array);
    var arrayWithCrc = array.Concat(crc).ToArray();
    Console.WriteLine(string.Join(" ", hash.ComputeHash(arrayWithCrc)));
}
Console outputs: 199 75 103 72
You do not need to append a CRC to a message and compute the CRC of that in order to check a CRC. Just compute the CRC on the message on one end, send that CRC along with the message, compute CRC on just the message on the other end (not including the sent CRC), and then compare the CRC you computed to the one that was sent with the message.
They should be equal to each other. That's all there is to it. That works for any hash you might use, not just CRCs.
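A minimal sketch of that flow, assuming the library's static Crc32CAlgorithm.Compute(byte[]) helper (which the question already refers to) returns the CRC as a uint:
byte[] message = { 1, 2, 3, 4, 5, 6, 7, 8 };
// Sender: compute the CRC of the message and transmit it alongside the data.
uint sentCrc = Crc32CAlgorithm.Compute(message);
// Receiver: recompute over the message only (not the sent CRC) and compare.
uint recomputedCrc = Crc32CAlgorithm.Compute(message);
bool intact = recomputedCrc == sentCrc;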
If you feel deeply compelled to make use of the lovely mathematical property of CRCs where computing the CRC on the message with its CRC appended gives a specific result, you can. You have to append the CRC bits in the correct order, and you need to look for the "residue" of the CRC, which may not be zero.
In your case, you are in fact appending the bits in the correct order (by appending the bytes in little-endian order), and the result you are getting is the correct residue for CRC-32C. That residue is 0x48674bc7, which, separated into bytes in little-endian order and then converted to decimal, is your 199 75 103 72.
You will find that if you take any sequence of bytes, compute the CRC-32C of that, append that CRC to the sequence in little-endian order, and compute the CRC-32C of the sequence plus CRC, you will always get 0x48674bc7.
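A quick sketch of that check, again assuming the static Compute(byte[]) helper and using the residue constant quoted above:
byte[] message = { 1, 2, 3, 4, 5, 6, 7, 8 };
uint crc = Crc32CAlgorithm.Compute(message);
// Append the CRC in little-endian byte order (BitConverter.GetBytes produces
// little-endian output on x86/x64; check BitConverter.IsLittleEndian to be safe).
byte[] messageWithCrc = message.Concat(BitConverter.GetBytes(crc)).ToArray();
// The CRC-32C of the message plus its appended CRC is always the fixed residue.
bool intact = Crc32CAlgorithm.Compute(messageWithCrc) == 0x48674BC7;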
However, that's a smidge slower than just comparing the two CRCs, since now you have to compute a CRC on four more bytes than before. So, really, there's no need to do it this way.

Converting Int16 to UByte[] is not working in C# with BitConverter.GetBytes()

I am working on a super simple parser/compiler for an example language, and I am having some problems with number conversion. I have the following code as a test:
Console.WriteLine(BitConverter.GetBytes(0x010D)[0]);
Console.WriteLine(BitConverter.GetBytes(0x010D)[1]);
And in the console it prints:
13
1
I am confused because that means that the array is [13, 1]. I would assume that it should go from left to right like the original number does. Is there a way to fix this or do I just need to always treat it like it goes the other way?
Thanks a lot!
P.S.
Apologies if this is dumb, I just can't seem to find anything with my problem, which may well be because this is a user error.
I decided to answer this question because Jon Skeet commented with an appropriate answer.
The solution to this question is really quite simple, and it was just a quirk of working with bytes and binary that I was not aware of.
See:
Endianness Wikipedia Article
GetBytes Docs
Endianness is essentially the order in which the bytes in a number go. In my case, with .NET, the numbers are little-endian, meaning that the least significant byte comes first, followed by the most significant byte. For the question's example, 0x010D would be represented as { 0x0D, 0x01 } in little-endian, as it was. If it were represented in big-endian, however, it would be { 0x01, 0x0D }.
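For example (the exact output assumes a little-endian machine such as x86/x64):
short value = 0x010D;
byte[] bytes = BitConverter.GetBytes(value);   // { 0x0D, 0x01 }: least significant byte first
// If you need the bytes in big-endian order instead, just reverse the array.
if (BitConverter.IsLittleEndian)
    Array.Reverse(bytes);                      // now { 0x01, 0x0D }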
Thanks again to Jon Skeet for your helpful comment!

Reading numbers represented in both-endian format?

Conceptually, I'm having a hard time understanding how a 32-bit unsigned integer (which is 4 bytes) can be represented as 8 bytes, the first four of which are encoded using the little-endian format and the last four of which are encoded using the big-endian format.
I'm specifically referring to the ISO 9660 format which encodes some 16-bit and 32-bit integers in this fashion.
I tried the following but this obviously does not work because the BitConverter.ToUInt32() method only takes the first four bytes from the starting index.
byte[] leastSignificant = reader.ReadBytes(4, Endianness.Little);
byte[] mostSignificant = reader.ReadBytes(4, Endianness.Big);
byte[] buffer = new byte[8];
Array.Copy(leastSignificant, 0, buffer, 0, 4);
Array.Copy(mostSignificant, 0, buffer, 4, 4);
uint actualValue = BitConverter.ToUInt32(buffer, 0);
What is the proper way to read a 32-bit unsigned integer represented as 8 bytes encoded in both-endian format?
This is very typical for an ISO standard. The organization is not very good at creating decent standards, only good at creating compromises among its members. There are two basic ways they do that: either they pick a sucky standard that makes everybody equally unhappy, or they pick more than one so that everybody can be happy. Encoding a number twice falls in the latter category.
There's some justification for doing it this way. Optical disks have lots of bits that are very cheap to duplicate, and their formats are often designed to keep the playback hardware as cheap as possible. Mastering a disk is often very convoluted because of that; the Blu-ray standard is particularly painful.
Since your machine is little-endian, you only care about the little-endian value. Simply ignore the big-endian equivalent. Technically you could add a check that they are the same but that's just wasted effort.
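For instance, with a plain BinaryReader (which always reads little-endian, regardless of the machine) a both-endian field can be handled like this; the file name is just a placeholder:
using (var reader = new BinaryReader(File.OpenRead("image.iso")))
{
    // ISO 9660 both-endian 32-bit field: 4 bytes little-endian followed by
    // the same value repeated as 4 bytes big-endian.
    uint value = reader.ReadUInt32();   // the little-endian half: keep this
    reader.ReadBytes(4);                // skip the big-endian duplicate (or read it as a sanity check)
}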

Converting Uint64 to 5 bytes and vice versa in c#

I have an application that expects 5 bytes derived from a number. Right now I am using a UInt32, which is 4 bytes. I've been told to make a class that uses a byte and a UInt32. However, I'm not sure how to combine them since they are numbers.
I figured the best way may be to use a UInt64 and convert it down to 5 bytes. However, I am not sure how that can be done. I need to be able to convert it to 5 bytes, and I need a separate function in the class to convert it back to a UInt64.
Does anyone have any ideas on the best way to do this?
Thank you
Use BitConverter.GetBytes and then just remove the three bytes you don't need.
To convert it back use BitConverter.ToUInt64 remembering that you'll have to first pad your byte array with three extra (zero) bytes.
Just watch out for endianness. You may need to reverse your array if the application you are sending these bytes to expects the opposite endianness. Also, the endianness will dictate whether you need to add/remove bytes from the start or the end of the array (check BitConverter.IsLittleEndian).
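A rough sketch of both directions, assuming a little-endian layout so that the five low-order bytes sit at the start of the array:
ulong value = 123456789012;    // fits in 5 bytes (anything below 2^40 does)

// UInt64 -> 5 bytes: keep the five low-order bytes, drop the three high (zero) ones.
byte[] five = new byte[5];
Array.Copy(BitConverter.GetBytes(value), 0, five, 0, 5);

// 5 bytes -> UInt64: pad back up to 8 bytes; the high bytes stay zero.
byte[] eight = new byte[8];
Array.Copy(five, 0, eight, 0, 5);
ulong roundTripped = BitConverter.ToUInt64(eight, 0);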

C# analog for getBytes in java

There is a wonderful method on the String class in Java called getBytes.
In C# it's implemented in another class, Encoding, but unfortunately it returns an array of unsigned bytes, which is a problem.
How is it possible to get an array of signed bytes in C# from a string?
Just use Encoding.GetBytes but then convert the byte[] to an sbyte[] by using something like Buffer.BlockCopy. However, I'd strongly encourage you to use the unsigned bytes instead - work round whatever problem you're having with them instead of moving to signed bytes, which were frankly a mistake in Java to start with. The reason there's no built-in way of converting a string to a signed byte array is because it's rarely something you really want to be doing.
If you can tell us a bit about why the unsigned bytes are causing you a problem, we may well be able to help you with that instead.
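For completeness, here is a small sketch of the GetBytes-plus-BlockCopy approach (UTF-8 is just an example encoding; pick whichever one you actually need):
string text = "hello";
byte[] unsigned = Encoding.UTF8.GetBytes(text);

// Reinterpret the same bits as signed bytes; any value >= 0x80 comes out negative.
sbyte[] signed = new sbyte[unsigned.Length];
Buffer.BlockCopy(unsigned, 0, signed, 0, unsigned.Length);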
