How can I convert a byte to a number in C#? For example, 00000001 to 1, 00000011 to 3, 00001011 to 11. I have a byte array with numbers encoded as binary bytes, but I need to get those numbers and append them to a string.
You can do this:
// If the system architecture is little-endian (that is, little end first),
// reverse the byte array.
if (BitConverter.IsLittleEndian)
Array.Reverse(bytes);
int i = BitConverter.ToInt32(bytes, 0);
where bytes is your byte[]. You would want to take a look at the BitConverter documentation here.
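For the values in the question, a quick sketch (assuming a 4-byte array holding a single big-endian number) could look like this:
byte[] bytes = { 0x00, 0x00, 0x00, 0x0B };   // 00001011 in the last byte
if (BitConverter.IsLittleEndian)
    Array.Reverse(bytes);                    // flip to match the machine's byte order
int value = BitConverter.ToInt32(bytes, 0);  // 11
string text = value.ToString();              // "11", ready to append to a string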
In C# byte is already an unsigned number ranging from 0 to 255. You can freely assign them to integers, or convert to other numeric types.
Bytes are numbers.
If you want to get the numeric value of a single byte, just call ToString().
If you have an array of bytes that are part of a little-endian single number, you can use the BitConverter class to convert them to a 16, 32, or 64 bit signed or unsigned integer.
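A minimal sketch of both cases (the sample values here are just illustrative):
byte single = 11;                               // a byte is already the number 11
string s = single.ToString();                   // "11"

byte[] little = { 0x0B, 0x00 };                 // 16-bit value stored little-endian
short value = BitConverter.ToInt16(little, 0);  // 11 on a little-endian system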
You can use the built-in Convert class. Note that Convert.ToInt64(value, 2) only accepts a string of binary digits; for actual bytes no base is needed:
foreach (byte b in array) {
    long dec = Convert.ToInt64(b); // a byte already holds the numeric value
}
When converting a byte array to BigInteger, why does an array whose members are all 0 or all 255 come back as a single-item array instead of a multi-item array? For example:
byte[] byteArray = { 0, 0, 0, 0 };
BigInteger newBigInt = new BigInteger(byteArray);
MessageBox.Show(newBigInt.ToString());
byte[] bytes;
bytes = newBigInt.ToByteArray();
MessageBox.Show(string.Join(", ", bytes));
This code returns only a single zero instead of an array of four zeros. The same is true for 255. Does anyone know why this happens, while other arrays with differing values round-trip with all of their items?
BigInteger doesn't waste space on redundant leading bytes. If all the individual byte values are zero, the entire number is zero, which can be represented as a single byte with value zero. So 0, 0, 0, 0 is the same value as 0.
If all values are 255, that represents 0xFFFFFFFF or -1 since it's a signed integer. And since -1 can be represented as 0xFF, 0xFFFF, 0xFFFFFF etc., there's no point in keeping those redundant leading bytes.
BigInteger uses a variable length encoding to represent a number. It has to, since it needs to represent arbitrary large integers.
This is specified in the documentation for ToByteArray:
Returns the value of this BigInteger as a byte array using the fewest number of bytes possible. If the value is zero, returns an array of one byte whose element is 0x00.
...
Negative values are written to the array using two's complement representation in the most compact form possible. For example, -1 is represented as a single byte whose value is 0xFF instead of as an array with multiple elements, such as 0xFF, 0xFF or 0xFF, 0xFF, 0xFF, 0xFF.
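A short sketch illustrating the compact representation described in the quote (the input arrays here are just examples):
// requires using System.Numerics;
var zero = new BigInteger(new byte[] { 0, 0, 0, 0 });
Console.WriteLine(string.Join(", ", zero.ToByteArray()));      // 0

var minusOne = new BigInteger(new byte[] { 255, 255, 255, 255 });
Console.WriteLine(minusOne);                                    // -1
Console.WriteLine(string.Join(", ", minusOne.ToByteArray()));   // 255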
See also Variable Length Quantity/LEB128 for other kinds of variable length encodings used in some protocols.
I am trying to do the following regarding my specification:
The sales counter with byte count N is stored, starting with byte 0, in BIG ENDIAN format as a two's complement ("signed") representation. N corresponds to the number of bytes required to encode the sales counter. At least 5 bytes / 40 bits must be used for the sales counter.
and for this i have created the following code in C#
private static byte[] EncodeUmsatz(long umsatz)
{
// This gives an 8-byte array
byte[] umsatzBytes = BitConverter.GetBytes(umsatz);
// Pad with zeroes to get 16 bytes
int length = 16 * ((umsatzBytes.Length + 15) / 16);
Array.Resize(ref umsatzBytes, length);
// reverse to get big-endian array
Array.Reverse(umsatzBytes, 0, umsatzBytes.Length);
return umsatzBytes;
}
The BitConverter.IsLittleEndian property is false, so this should be right, shouldn't it?
But the test with an external tool says:
"The calculated sales counter does not match the encrypted sales counter (see the DECRYPTED_TURNOVER_VALUE parameter), please check the sales counter encoding (BIG endian, two's complement) or the AES key used."
What I do not know is whether my code produces a two's complement representation.
I am not a specialist with bytes, so does anyone have an idea of what I can try?
So the problem is solved: the C# code is correct for big endian; the problem was that the value of the input parameter was wrong.
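For reference, a quick sanity check on a little-endian machine (assuming the EncodeUmsatz method above; the test value is arbitrary) would be:
byte[] encoded = EncodeUmsatz(259);               // 259 = 0x0103
Console.WriteLine(BitConverter.ToString(encoded));
// 00-00-00-00-00-00-00-00-00-00-00-00-00-00-01-03  -> 16 bytes, big-endian, two's complement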
Let's say I have a fixed string with 245 chars, for example
v0iRfw0rBic4HlLIDmIm5MtLlbKvakb3Q2kXxMWssNctLgw445dre2boZG1a1kQ+xTUZWvry61QBmTykFEJii217m+BW7gEz3xlMxwXZnWwk2P6Pk1bcOkK3Nklbx2ckhtj/3jtj6Nc05XvgpiROJ/zPfztD0/gXnmCenre32BeyJ0Es2r4xwO8nWq3a+5MdaQ5NjEgr4bLg50DaxUoffQ1jLn/jIQ==
then I transform it into a byte array using
System.Text.Encoding.UTF8.GetBytes
and the length of the byte array is 224.
Then I generate another string, e.g.
PZ2+Sxx4SjyjzIA1qGlLz4ZFjkzzflb7pQfdoHfMFDlHwQ/uieDFOpWqnA5FFXYTwpOoOVXVWb9Hw6YUm6rF1rhG7eZaXEWmgFS2SeFItY+Qyt3jI9rkcWhPp8Y5sJ/q5MVV/iePuGVOArgBHhDe/g0Wg9DN4bLeYXt+CrR/bNC1zGQb8rZoABF4lSEh41NXcai4IizOHQMSd52rEa2wzpXoS1KswgxWroK/VUyRvH4oJpkMxkqj565gCHsZvO9jx8aLOZcBq66cYXOpDsi2gboeg+oUpAdLRGSjS7qQPfKTW42FBYPmJ3vrb2TW+g==
but now the array length is 320.
So my question is: how can I determine the maximum length of a byte array resulted from a string fixed to 245 chars?
This is the class that I'm using for generating the random string
static class Utilities
{
static Random randomGenerator = new Random();
internal static string GenerateRandomString(int length)
{
byte[] randomBytes = new byte[randomGenerator.Next(length)];
randomGenerator.NextBytes(randomBytes);
return Convert.ToBase64String(randomBytes);
}
}
According to the RFC 3629:
In UTF-8, characters from the U+0000..U+10FFFF range (the UTF-16
accessible range) are encoded using sequences of 1 to 4 octets.
The maximum number of bytes per UTF-8 character is 4, so the maximum length of your byte array is 4 times 245 = 980.
If you are encoding using the Byte Order Mark (BOM) you'll need 3 extra bytes
[...] the BOM
will always appear as the octet sequence EF BB BF.
so 983 in total.
Additional Info:
In your example, you also converted the byte array to Base64, which encodes 6 bits per character and therefore has a length of 4 * Math.Ceiling(bytes / 3.0); in your case that is 1312 ASCII characters.
According to the design of UTF-8, it is extensible.
https://en.wikipedia.org/wiki/UTF-8
In theory, there is no fixed maximum length for a character.
But of course, the characters defined in the real world are limited.
In practice, the byte length is limited to the character count x 4.
245 chars => 980 bytes
If you are looking for a fixed-length encoding, use Encoding.Unicode.
Also, Encoding provides a method that gives the maximum number of bytes:
Encoding.UTF8.GetMaxByteCount(charCount: 245)
Encoding.Unicode.GetMaxByteCount(charCount: 245)
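For example, a small sketch comparing the exact byte count of one particular string with the worst-case bounds (the sample string is just illustrative):
// requires using System.Text;
string sample = new string('a', 245);                        // a 245-char test string
int exact = Encoding.UTF8.GetByteCount(sample);              // exact bytes for this string (245 for pure ASCII)
int worstCaseUtf8 = Encoding.UTF8.GetMaxByteCount(245);      // upper bound for any 245-char string
int worstCaseUtf16 = Encoding.Unicode.GetMaxByteCount(245);  // bound for UTF-16 (Encoding.Unicode)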
Simply put, you can't. UTF-8 (8-bit Unicode Transformation Format), which you are using, takes 1, 2, 3, or 4 bytes per character (like Tommy said), so the only way to know the exact size is to traverse all the chars and count them (GetByteCount()), or use GetMaxByteCount() for a worst-case bound.
Perhaps, if you are going to keep using Base64-like strings, you don't need UTF-8 at all; you can use ASCII or any other 1-byte-per-char encoding, and your total byte array size will simply be the Length of your string.
I am getting audio using the NAudio library which returns a 32 bit float[] of audio data. I'm trying to find a way to convert this to a 16 bit byte[] for playback.
private void sendData(float[] samples)
{
Buffer.BlockCopy(samples, 0, byteArray, 0, samples.Length);
byte[] encoded = codec.Encode(byteArray, 0, byteArray.Length);
waveProvider.AddSamples(byteArray, 0, byteArray.Length);
s.Send(encoded, SocketFlags.None);
}
The audio being sent to waveProvider is coming out static-y; I don't think I'm converting correctly. How can I convert to a byte array of 16-bit samples?
Buffer.BlockCopy copies a number of bytes, but you're passing it a number of elements. Since a float is 4 bytes and a byte is obviously 1, you're using a fourth of samples to fill up half of byteArray, leaving the rest untouched. That probably won't give you very good audio, to say the least.
What you'll need to do is convert from a floating-point value between −1 and 1 to a 16-bit integer value between −2^15 and 2^15 − 1. If we convert to shorts rather than bytes, it's rather simple:
shortSample = (short)Math.Floor(floatSample * 32767);
(If you know that floatSample will always be less than 1, you should multiply by 32,768 rather than 32,767.)
Of course, you want a byte array rather than a short array. Unfortunately, you've not given enough information for that last step. There are two things that you'll need to know: endianness and signedness. If it's unsigned, you'll need to convert that short to a ushort by adding 32,768. Then you need to split each short or ushort up into two bytes. Endianness determines the order, but you'll probably need to do some bit-shifting.
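As a rough sketch, assuming signed 16-bit little-endian PCM (the method name FloatTo16BitBytes is just for illustration):
byte[] FloatTo16BitBytes(float[] samples)
{
    byte[] bytes = new byte[samples.Length * 2];   // 2 bytes per 16-bit sample
    for (int i = 0; i < samples.Length; i++)
    {
        // clamp to [-1, 1], then scale to the signed 16-bit range
        float clamped = Math.Max(-1f, Math.Min(1f, samples[i]));
        short s = (short)(clamped * 32767);
        bytes[2 * i] = (byte)(s & 0xFF);           // little-endian: low byte first
        bytes[2 * i + 1] = (byte)((s >> 8) & 0xFF);
    }
    return bytes;
}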
I'm trying to understand the following:
I am declaring 64 bytes as the array length (buffer). When I convert it to a Base64 string, the length is 88. Shouldn't the length be 64, since I am passing in 64 bytes? I could be totally misunderstanding how this actually works. If so, could you please explain?
//Generate a cryptographic random number
RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
// Create byte array
byte[] buffer = new byte[64];
// Get random bytes
rng.GetBytes(buffer);
// This line gives me 88 as a result.
// Shouldn't it give me 64 as declared above?
throw new Exception(Convert.ToBase64String(buffer).Length.ToString());
// Return a Base64 string representation of the random number
return Convert.ToBase64String(buffer);
No, Base64 encoding uses a whole byte (one output character) to represent six bits of the data being encoded. The two lost bits are the price of using only alphanumerics, plus and slash as your symbols (basically, excluding byte values that map to non-printable or special characters in plain ASCII/UTF-8). The result that you are getting is (64 * 4 / 3) rounded up to the nearest 4-character boundary.
Base64 encoding converts 3 octets into 4 encoded characters; therefore
ceil(64 / 3) * 4 = 22 * 4 = 88 characters.
Read here.
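The same calculation in code (a one-line sketch of the standard padding formula):
int inputBytes = 64;
int base64Length = 4 * ((inputBytes + 2) / 3);   // 4 * 22 = 88 characters, '=' padding included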
Shouldn't the length only be 64, since I am passing in 64 bytes?
No. You are passing 64 tokens in Base256 notation. Base64 has less information per token, so it needs more tokens. 88 sounds about right.