I'm using a random generator that takes the length in bytes as input and returns a byte array. What I need now is to convert that byte array to an 8-digit integer and from that to a string.
byte[] randomData = this.GetRandomArray(4);
SecretCode = Math.Abs(BitConverter.ToInt32(randomData, 0)).ToString().Substring(0, 7);
But on some occasions the int is shorter than 8 digits and my method fails. How can I make sure that the generated byte array can be converted to an 8-digit int?
One more option:
myString = BitConverter.ToUInt32(randomData, 0).ToString("D8");
Note: using ToUInt32 is a more sensible approach than converting to a signed integer and taking the absolute value. It also doubles the number of values you can generate, since -123 and 123 produce different string outputs, which they won't if you use Math.Abs. The format "D8" pads the result to eight digits with leading zeros.
See https://stackoverflow.com/a/5418425/1967396
You could just use <stringCode>.PadLeft(8, '0') (note that PadLeft takes a char, not a string).
Are you sure that your method is failing on the Substring? As far as I can see, there are a number of issues:
It'll fail if you don't get 4 bytes back (ArgumentException on BitConverter.ToInt32)
It'll fail if the string isn't long enough (your problem from above)
It'll truncate at seven chars, not eight, as you want.
You can use the PadLeft function to pad with zeros. If you want eight digits, the code should look like this:
var s = Math.Abs(BitConverter.ToInt32(randomData, 0))
    .ToString()
    .PadLeft(8, '0')
    .Substring(0, 8);
For seven, replace the 8 with a 7.
You need to concatenate eight zeros before trying to take the Substring(), then take the last 8 characters.
string s = "00000000" + Math.Abs(BitConverter.ToInt32(randomData, 0)).ToString();
SecretCode = s.Substring(s.Length - 8);
Your other option is to use a formatter to ensure the stringification of the bits returns leading zeros.
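For example, building on the ToUInt32 answer above (a sketch, not the only way): the modulo keeps the result to eight digits, since a UInt32 can have up to ten.

// Sketch: modulo keeps the value in [0, 99999999]; "D8" pads with leading zeros.
SecretCode = (BitConverter.ToUInt32(randomData, 0) % 100000000).ToString("D8");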
I have a byte array with 10 elements. I need to convert the entire byte array into a single hex string.
The above is fairly straightforward to do; however, I need to drop the leading 0 if an element contains a value less than 16, i.e. 0xF or less. See the example below.
byte[] myByteArr = {10,11,12,13,14,15,16,17,18,19};
"Regular" method to convert the above to hex should give me 0x0A0B0C0D0E0F10111213
What I actually need is 0xABCDEF10111213
Is there a quick method to just drop the upper nibble and take only the lower nibble if none of the bits in the upper nibble are set?
Thanks in advance!
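One approach (a minimal sketch; ToCompactHex is a hypothetical helper name): format each byte with "X", which emits no leading zero, when its upper nibble is zero, and with "X2" otherwise.

using System.Text;

static string ToCompactHex(byte[] bytes)
{
    var sb = new StringBuilder("0x");
    foreach (byte b in bytes)
        sb.Append(b.ToString(b < 0x10 ? "X" : "X2")); // single digit when the upper nibble is 0
    return sb.ToString();
}

For the array above, ToCompactHex returns "0xABCDEF10111213".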
Let's say I have a fixed string with 245 chars, for example
v0iRfw0rBic4HlLIDmIm5MtLlbKvakb3Q2kXxMWssNctLgw445dre2boZG1a1kQ+xTUZWvry61QBmTykFEJii217m+BW7gEz3xlMxwXZnWwk2P6Pk1bcOkK3Nklbx2ckhtj/3jtj6Nc05XvgpiROJ/zPfztD0/gXnmCenre32BeyJ0Es2r4xwO8nWq3a+5MdaQ5NjEgr4bLg50DaxUoffQ1jLn/jIQ==
then I transform it into a byte array using
System.Text.Encoding.UTF8.GetBytes
and the length of the byte array is 224.
Then I generate another string, e.g.
PZ2+Sxx4SjyjzIA1qGlLz4ZFjkzzflb7pQfdoHfMFDlHwQ/uieDFOpWqnA5FFXYTwpOoOVXVWb9Hw6YUm6rF1rhG7eZaXEWmgFS2SeFItY+Qyt3jI9rkcWhPp8Y5sJ/q5MVV/iePuGVOArgBHhDe/g0Wg9DN4bLeYXt+CrR/bNC1zGQb8rZoABF4lSEh41NXcai4IizOHQMSd52rEa2wzpXoS1KswgxWroK/VUyRvH4oJpkMxkqj565gCHsZvO9jx8aLOZcBq66cYXOpDsi2gboeg+oUpAdLRGSjS7qQPfKTW42FBYPmJ3vrb2TW+g==
but now the array length is 320.
So my question is: how can I determine the maximum length of a byte array resulted from a string fixed to 245 chars?
This is the class that I'm using to generate the random string:
static class Utilities
{
    static Random randomGenerator = new Random();

    internal static string GenerateRandomString(int length)
    {
        // Note: Next(length) returns a value in [0, length), so the byte array
        // (and therefore the Base64 string) has a different length on each call.
        byte[] randomBytes = new byte[randomGenerator.Next(length)];
        randomGenerator.NextBytes(randomBytes);
        return Convert.ToBase64String(randomBytes);
    }
}
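Incidentally, a quick check (a sketch) makes the varying lengths above visible:

using System;

string first = Utilities.GenerateRandomString(245);
string second = Utilities.GenerateRandomString(245);
Console.WriteLine(first.Length);  // varies from run to run
Console.WriteLine(second.Length); // typically a different value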
According to RFC 3629:
In UTF-8, characters from the U+0000..U+10FFFF range (the UTF-16
accessible range) are encoded using sequences of 1 to 4 octets.
The maximum number of bytes per UTF-8 character is 4, so the maximum length of your byte array is 4 times 245 = 980.
If you are encoding with a Byte Order Mark (BOM), you'll need 3 extra bytes:
[...] the BOM
will always appear as the octet sequence EF BB BF.
so 983 in total.
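If you want to see those three bytes from .NET (a sketch; note that Encoding.UTF8.GetBytes itself never prepends the BOM, stream writers do):

using System;
using System.Text;

byte[] bom = Encoding.UTF8.GetPreamble();      // the 3-byte BOM: EF BB BF
Console.WriteLine(BitConverter.ToString(bom)); // prints "EF-BB-BF"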
Additional info:
In your example, you also converted the byte array to Base64, which encodes 6 bits per character and therefore produces 4 * Math.Ceiling(bytes / 3.0) characters; in your case (983 bytes) that is 1312 ASCII characters.
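You can verify that formula directly (a quick sketch):

Console.WriteLine(Convert.ToBase64String(new byte[983]).Length); // prints 1312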
By design, UTF-8 is extensible:
https://en.wikipedia.org/wiki/UTF-8
In theory, there is no maximum length per character.
But real-world characters are limited: RFC 3629 restricts UTF-8 to 4 bytes per character, so in practice byte lengths are limited to character count x 4.
245 chars => 980 bytes
If you are looking for a fixed-length encoding, use Encoding.Unicode.
Also, Encoding provides a method that gives the maximum number of bytes:
Encoding.UTF8.GetMaxByteCount(charCount: 245)
Encoding.Unicode.GetMaxByteCount(charCount: 245)
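A quick sketch of how you might print both (the exact values come from .NET's worst-case formulas, which account for surrogate pairs and encoder fallbacks, so they differ from the simple chars x 4 estimate):

using System;
using System.Text;

Console.WriteLine(Encoding.UTF8.GetMaxByteCount(charCount: 245));    // UTF-8 worst case
Console.WriteLine(Encoding.Unicode.GetMaxByteCount(charCount: 245)); // UTF-16 worst case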
Simply, you can't. UTF-8 (Unicode Transformation Format, which you are using) encodes each char in 1, 2, 3, or 4 bytes (as Tommy said), so the only way to get an exact length is to encode the actual characters, or to use GetMaxByteCount() for the worst case.
Perhaps, if you are going to keep using Base64-like strings, you don't need UTF-8 at all; you can use ASCII or any other 1-byte-per-char encoding, and your total byte array size will be the length of your string.
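For example (a sketch), ASCII maps Base64 output one byte per character:

using System;
using System.Text;

string s = Utilities.GenerateRandomString(245);
Console.WriteLine(Encoding.ASCII.GetBytes(s).Length == s.Length); // True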
I'm trying to write the largest Int64 value to the command line. I tried using 0x1111111111111111, which is 16 ones, and Visual Studio says that is an Int64. I would have assumed that would be an Int16. What am I missing here?
0x is the prefix for hexadecimal and not binary literals. This means that the binary representation of your number is 0001000100010001000100010001000100010001000100010001000100010001
There were no binary literals in C# before C# 7.0 (which added the 0b prefix), so you either have to do the calculation yourself (0x7FFFFFFFFFFFFFFF) or use the Convert class, for example:
short s = Convert.ToInt16("1111111111111111", 2); // "2" for binary
In order to just get the largest Int64 number, you don't need to perform any calculations of your own, as it is already available for you in this field:
Int64.MaxValue
The literal 0x1111111111111111 is a hexadecimal number. Each hexadecimal digit can be represented using four bits so with 16 hexadecimal digits you need 4*16 = 64 bits. You probably intended to write the binary number 1111111111111111. You can convert from a binary literal string to an integer using the following code:
Convert.ToInt16("1111111111111111", 2)
This will return the desired number (-1).
To get the largest Int64 you can use Int64.MaxValue (0x7FFFFFFFFFFFFFFF) or if you really want the unsigned value you can use UInt64.MaxValue (0xFFFFFFFFFFFFFFFF).
The largest Int64 value is Int64.MaxValue. To print this in hex, try:
Console.WriteLine(Int64.MaxValue.ToString("X"));
I have an array of three bytes, and I want to convert the array into a double using C#. Kindly guide me.
Well, that depends on what you want the conversion to do.
You can convert 8 bytes (in the right format) into a double using BitConverter.ToDouble - but with only three bytes it's a bit odd - after all, a double has 64 bits of information, normally. How do those three bytes represent a number? What's the format, basically? When you've figured that out, the rest may well be easy.
Well, a double is 8 bytes, so with 3 bytes you won't be able to produce all the possible values.
To do what you want:
byte[] myBytes = { 0, 0, 0, 0, 0, 1, 1, 2 }; // assume you pad your array with enough zeros to make it 8 bytes
var myDouble = BitConverter.ToDouble(myBytes, 0);
Depends on what exactly is stored in the bytes, but you might be able to just pad the array with 5 bytes all containing 0 and then use BitConverter.ToDouble.
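A minimal sketch of that padding approach (it assumes the three bytes are the low-order bytes of a little-endian 64-bit value; where they actually belong depends entirely on your format):

using System;

byte[] threeBytes = { 0x01, 0x02, 0x03 };
byte[] padded = new byte[8];             // the remaining five bytes stay 0
Array.Copy(threeBytes, 0, padded, 0, 3); // placement is an assumption about the format
double d = BitConverter.ToDouble(padded, 0);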
I want to convert a number between 0 and 4095 (12 bits) to its 3-character hexadecimal string representation in C#.
Example:
2748 to "ABC"
try
2748.ToString("X")
If you want exactly 3 characters and are sure the number is in range, use:
i.ToString("X3")
If you aren't sure if the number is in range, this will give you more than 3 digits. You could do something like:
(i % 0x1000).ToString("X3")
Use a lower case "x3" if you want lower-case letters.
Note: This assumes that you're using a custom, 12-bit representation. If you're just using an int/uint, then Muxa's solution is the best.
Every group of four bits corresponds to one hexadecimal digit.
Therefore, just map the low four bits to a hex digit, then shift the input right by four (>> 4), and repeat.
The easy C solution may be adaptable:
char hexCharacters[17] = "0123456789ABCDEF";
void toHex(char * outputString, long input)
{
    outputString[0] = hexCharacters[(input >> 8) & 0x0F];
    outputString[1] = hexCharacters[(input >> 4) & 0x0F];
    outputString[2] = hexCharacters[input & 0x0F];
}
You could also do it in a loop, but this is pretty straightforward, and a loop has fairly high overhead for only three conversions.
I expect C# has a library function for this sort of thing, though. You could even use sprintf in C, and I'm sure C# has an analog of this functionality.
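A C# analog of the C snippet above might look like this (a sketch; ToHex3 is a hypothetical name, and i.ToString("X3") remains the simpler option):

static string ToHex3(int input)
{
    const string hexCharacters = "0123456789ABCDEF";
    return new string(new[] {
        hexCharacters[(input >> 8) & 0x0F], // high nibble of the 12-bit value
        hexCharacters[(input >> 4) & 0x0F], // middle nibble
        hexCharacters[input & 0x0F],        // low nibble
    });
}

ToHex3(2748) returns "ABC".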
-Adam