I have an application that expects 5 bytes that are derived from a number. Right now I am using a UInt32, which is 4 bytes. I've been told to make a class that uses a byte and a UInt32; however, I'm not sure how to combine them since they are numbers.
I figured the best way may be to use a UInt64 and convert it down to 5 bytes. However, I am not sure how that can be done. I need to be able to convert it to 5 bytes, and I need a separate function in the class to convert it back to a UInt64.
Does anyone have any ideas on the best way to do this?
Thank you
Use BitConverter.GetBytes and then just remove the three bytes you don't need.
To convert it back use BitConverter.ToUInt64 remembering that you'll have to first pad your byte array with three extra (zero) bytes.
Just watch out for endianness. You may need to reverse your array if the application you are sending these bytes to expects the opposite endianness. The endianness will also dictate whether you need to add/remove bytes from the start or the end of the array (check BitConverter.IsLittleEndian).
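For example, a rough sketch of a helper class along those lines (the names FiveByteConverter, ToFiveBytes and FromFiveBytes are made up; adjust the byte order to whatever the receiving application expects):

using System;
using System.Linq;

public static class FiveByteConverter
{
    public static byte[] ToFiveBytes(ulong value)
    {
        byte[] eight = BitConverter.GetBytes(value);   // 8 bytes in the machine's byte order
        if (!BitConverter.IsLittleEndian)
            Array.Reverse(eight);                      // normalise to little-endian
        return eight.Take(5).ToArray();                // keep the low 5 bytes, drop the high 3
    }

    public static ulong FromFiveBytes(byte[] five)
    {
        byte[] eight = new byte[8];                    // pad back out to 8 bytes with zeros
        Array.Copy(five, 0, eight, 0, 5);
        if (!BitConverter.IsLittleEndian)
            Array.Reverse(eight);
        return BitConverter.ToUInt64(eight, 0);
    }
}

// FiveByteConverter.ToFiveBytes(0x1234567890UL) -> { 0x90, 0x78, 0x56, 0x34, 0x12 }
// FiveByteConverter.FromFiveBytes(new byte[] { 0x90, 0x78, 0x56, 0x34, 0x12 }) -> 0x1234567890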
I'm reading up on the ProtectedMemory class in C# (which uses the Data Protection API in Windows (DPAPI)) and I see that in order to use the Protect() Method of the class, the data to be encrypted must be stored in a byte array whose size/length is a multiple of 16.
I know how to convert many different data types to byte array form and back again, but how can I guarantee that the size of a byte array is a multiple of 16? Do I literally need to create an array whose size is a multiple of 16 and keep track of the original data's length using another variable, or am I missing something? With traditional block ciphers all of these details are handled for you automatically with padding settings. Likewise, when I attempt to convert data back to its original form from a byte array, how do I ensure that any additional bytes are ignored, assuming of course that the original data wasn't a multiple of 16 bytes long?
In the code sample provided in the .NET Framework documentation, the byte array utilised just so happens to be 16 bytes long, so I'm not sure what best practice is in relation to this, hence the question.
Yes, just to iterate over the possibilities given in the comments (and give an answer to this nice question), you can use:
- a padding method that is also used for block cipher modes; see all the options on the Wikipedia page on the subject, or
- a length prefix in some form or other. A fixed size of 32 bits / 4 bytes is probably easiest. Do write down the type of encoding for the size (unsigned, little-endian is probably best for C#); a sketch of this option follows after this answer.
Both of these already operate on bytes, so you may need to define a character encoding such as UTF-8 if you use a string.
You could also use a specific encoding of the string, e.g. one defined by ASN.1 / DER and then perform zero padding. That way you can even indicate the type of the data that has been encoded in a platform independent way. You may want to read up on masochism before taking this route.
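For instance, a minimal sketch of the length-prefix option (the class and method names are made up; note that BitConverter writes the length in the machine's byte order, which is little-endian on common .NET platforms):

using System;
using System.Text;

// Hypothetical helper: a 4-byte length prefix, the data itself, then zero
// padding up to the next multiple of 16 bytes (what ProtectedMemory wants).
public static class Blocks
{
    public static byte[] PadToSixteen(byte[] data)
    {
        int total = 4 + data.Length;
        int padded = (total + 15) / 16 * 16;              // round up to a multiple of 16
        byte[] result = new byte[padded];                  // trailing bytes stay zero
        BitConverter.GetBytes(data.Length).CopyTo(result, 0);
        data.CopyTo(result, 4);
        return result;
    }

    public static byte[] Unpad(byte[] padded)
    {
        int length = BitConverter.ToInt32(padded, 0);      // recover the original length
        byte[] data = new byte[length];
        Array.Copy(padded, 4, data, 0, length);
        return data;
    }
}

// Usage:
// byte[] buffer = Blocks.PadToSixteen(Encoding.UTF8.GetBytes("secret"));
// ...Protect / Unprotect the buffer...
// string original = Encoding.UTF8.GetString(Blocks.Unpad(buffer));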
I really hope someone can help me.
I have a single byte that has to show the number of bytes in the byte[] to follow. Now my value is above 255. Is there a way to display/enter a larger number?
A byte holds a value from 0 to 255. To represent 299, you either have to use 2 bytes, or use a scheme (which the receiver will have to use as well) where the value in the byte is interpreted as more than its nominal value in order to expand the possible range of values. For instance, the value could be the length / 2. This would allow lengths of 0 - 510, but would allow only even lengths (odd length arrays would need a pad byte).
You can use two (or more) bytes to represent a number larger than 255. Is that what you want?
short value = 2451;
byte[] data = BitConverter.GetBytes(value);
If this is needed in order to exchange data with some external system, remember to read about Endianness.
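For example, a short sketch that forces a specific byte order, assuming the external system expects big-endian (network) order:

short value = 2451;
byte[] data = BitConverter.GetBytes(value);

// BitConverter uses the machine's byte order, so reverse if the receiver
// expects the opposite one (big-endian is assumed here).
if (BitConverter.IsLittleEndian)
    Array.Reverse(data);

// data is now { 0x09, 0x93 }, since 2451 = 0x0993.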
That depends on what you consider a good approach. You can perform some form of encoding that lets you store values larger than a single byte can hold, e.g. perhaps setting a byte to 0xFF means you will consider the next byte as part of the same value.
[0x01,0x0A,0xFF,0x0A]
would be interpreted as three values: [1, 10, 265].
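Purely as an illustration of that interpretation, a decoder for such a scheme might look like this (the escape rule, 0xFF meaning "add 255 and take the next byte as the remainder", is an assumption based on the example above):

using System;
using System.Collections.Generic;

// Illustrative decoder for the escape scheme described above.
static List<int> Decode(byte[] encoded)
{
    var values = new List<int>();
    for (int i = 0; i < encoded.Length; i++)
    {
        if (encoded[i] == 0xFF)
            values.Add(255 + encoded[++i]);   // escaped value: 255 plus the following byte
        else
            values.Add(encoded[i]);           // plain single-byte value
    }
    return values;
}

// Decode(new byte[] { 0x01, 0x0A, 0xFF, 0x0A }) -> 1, 10, 265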
There is a wonderful method on the String class in Java called getBytes.
In C# the equivalent is implemented in another class, Encoding, but unfortunately it returns an array of unsigned bytes, which is a problem.
How is it possible to get an array of signed bytes in C# from a string?
Just use Encoding.GetBytes but then convert the byte[] to an sbyte[] by using something like Buffer.BlockCopy. However, I'd strongly encourage you to use the unsigned bytes instead - work round whatever problem you're having with them instead of moving to signed bytes, which were frankly a mistake in Java to start with. The reason there's no built-in way of converting a string to a signed byte array is because it's rarely something you really want to be doing.
If you can tell us a bit about why the unsigned bytes are causing you a problem, we may well be able to help you with that instead.
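For completeness, a small sketch of the Encoding.GetBytes plus Buffer.BlockCopy approach (UTF-8 is assumed here; use whatever encoding you actually need):

using System;
using System.Text;

string text = "Hello";
byte[] unsigned = Encoding.UTF8.GetBytes(text);

// Reinterpret the same bits as signed bytes.
sbyte[] signed = new sbyte[unsigned.Length];
Buffer.BlockCopy(unsigned, 0, signed, 0, unsigned.Length);

// Each element keeps its bit pattern, so 0xFF becomes -1, which is what
// Java's String.getBytes would have given you.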
I have an array of three bytes, and I want to convert the array into a double using C#. Kindly guide me.
Well, that depends on what you want the conversion to do.
You can convert 8 bytes (in the right format) into a double using BitConverter.ToDouble - but with only three bytes it's a bit odd - after all, a double has 64 bits of information, normally. How do those three bytes represent a number? What's the format, basically? When you've figured that out, the rest may well be easy.
Well, a double is 8 bytes wide, so with only 3 bytes you won't be able to represent all the possible values.
To do what you want:
byte[] myBytes = { 0, 0, 0, 0, 0, 1, 1, 2 }; // assume you pad your array with enough zeros to make it 8 bytes
var myDouble = BitConverter.ToDouble(myBytes, 0);
Depends on what exactly is stored in the bytes, but you might be able to just pad the array with 5 bytes all containing 0 and then use BitConverter.ToDouble.
Is it possible to get strings, ints, etc. in binary format? What I mean is, assume I have the string "Hello" and I want to store it in binary format, so assume "Hello" is 11110000110011001111111100000000 in binary (I know it's not; I just typed something quickly).
Can I store the above binary not as a string, but in the actual format, with the bits?
In addition to this, is it actually possible to store less than 8 bits? What I am getting at is: if the letter A is the most frequent letter used in a text, can I use 1 bit to store it, with regards to compression, instead of building a binary tree?
Is it possible to get strings, ints, etc. in binary format?
Yes. There are several different methods for doing so. One common method is to make a MemoryStream out of an array of bytes, and then make a BinaryWriter on top of that memory stream, and then write ints, bools, chars, strings, whatever, to the BinaryWriter. That will fill the array with the bytes that represent the data you wrote. There are other ways to do this too.
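For instance, a minimal sketch of the MemoryStream / BinaryWriter approach described above:

using System;
using System.IO;

using var stream = new MemoryStream();
using (var writer = new BinaryWriter(stream))
{
    writer.Write(42);        // int: 4 bytes, little-endian
    writer.Write(true);      // bool: 1 byte
    writer.Write('H');       // char: encoded with the writer's encoding (UTF-8 by default)
    writer.Write("Hello");   // string: length prefix followed by the encoded characters
}

byte[] bytes = stream.ToArray();   // the binary representation of everything written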
Can I store the above binary not as a string, but in the actual format with the bits.
Sure, you can store an array of bytes.
is it actually possible to store less than 8 bits.
No. The smallest unit of storage in C# is a byte. However, there are classes that will let you treat an array of bytes as an array of bits. You should read about the BitArray class.
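A short sketch of BitArray in action:

using System;
using System.Collections;

byte[] data = { 0b1010_0001 };
var bits = new BitArray(data);     // view the bytes as individual bits

// BitArray indexes from the least significant bit of the first byte.
Console.WriteLine(bits[0]);        // True (bit 0 of 0xA1)
Console.WriteLine(bits[7]);        // True (bit 7 of 0xA1)

bits[1] = true;                    // set a single bit...
bits.CopyTo(data, 0);              // ...and copy the bits back into the byte array
Console.WriteLine(data[0]);        // 163 (0xA3)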
What encoding would you be assuming?
What you are looking for is something like Huffman coding, it's used to represent more common values with a shorter bit pattern.
How you store the bit codes is still limited to whole bytes. There is no data type that uses less than a byte. The way that you store variable width bit values is to pack them end to end in a byte array. That way you have a stream of bit values, but that also means that you can only read the stream from start to end, there is no random access to the values like you have with the byte values in a byte array.
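A rough sketch of packing variable-width codes end to end into a byte array (the codes and widths are made up, just to show the mechanics):

using System;
using System.Collections.Generic;

// Appends each code's low "width" bits, most significant bit first,
// into a growing byte buffer.
static byte[] Pack(IEnumerable<(int code, int width)> symbols)
{
    var bytes = new List<byte>();
    int bitPos = 0;                               // next free bit, counted from the MSB

    foreach (var (code, width) in symbols)
    {
        for (int i = width - 1; i >= 0; i--)
        {
            if (bitPos % 8 == 0)
                bytes.Add(0);                     // start a new byte when the last one is full
            if (((code >> i) & 1) != 0)
                bytes[bitPos / 8] |= (byte)(0x80 >> (bitPos % 8));
            bitPos++;
        }
    }
    return bytes.ToArray();
}

// Pack(new[] { (0b1, 1), (0b01, 2), (0b001, 3) }) -> one byte: 0b1010_0100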
What I am getting at is if the letter A is the most frequent letter used in a text, can I use 1 bit to store it with regards to compression instead of building a binary tree.
The algorithm you're describing is known as Huffman coding. To relate to your example, if 'A' appears frequently in the data, then the algorithm will represent 'A' as simply 1. If 'B' also appears frequently (but less frequently than A), the algorithm usually would represent 'B' as 01. Then, the rest of the characters would be 00xxxxx... etc.
In essence, the algorithm performs statistical analysis on the data and generates a code that will give you the most compression.
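As a toy illustration of such a code (the table is hard-coded here; a real Huffman implementation would derive it from the character frequencies in the data):

using System;
using System.Collections.Generic;
using System.Text;

// The most frequent letter gets the shortest code, as described above.
var codes = new Dictionary<char, string>
{
    ['A'] = "1",
    ['B'] = "01",
    ['C'] = "001",
    ['D'] = "000",
};

string input = "ABACAD";
var bits = new StringBuilder();
foreach (char c in input)
    bits.Append(codes[c]);

Console.WriteLine(bits.ToString());   // 10110011000 -- 11 bits instead of 6 * 8 = 48

// The bits are kept in a string here only for readability; to actually store
// them you would pack them into a byte array as in the earlier sketch.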
You can use things like:
BitConverter.GetBytes(1);
Encoding.ASCII.GetBytes("text");
Encoding.Unicode.GetBytes("text");
Once you have the bytes, you can do all the bit twiddling you want. You would need an algorithm of some sort before we can give you much more useful information.
The string is actually stored in binary format, as are all strings.
The difference between a string and another data type is that when your program displays the string, it retrieves the binary and shows the corresponding (ASCII) characters.
If you were to store data in a compressed format, you would need to assign more than 1 bit per character. How else would you identify which character is the most frequent?
If 1 represents an 'A', what does 0 mean? All the other characters?