As the title implies, I'm getting an "An unhandled exception of type 'System.FormatException'" style message and need some help figuring out why. I'm reading in a file that has a hex value I need to parse. The value I'm reading could look something like this: "0x12345678" (4 bytes in length).
The first byte (note this is little endian) is 'BE' and I can get that with no problem. My issue is taking the next three bytes and converting them into a human-readable int. (Long story, but the program that generates the output I'm parsing takes a human-readable decimal number and converts it to this LE nonsense.)
string parmVal = lineF.Substring((pos + length)); // this is "0x12345678"
string hexID = parmVal.Substring(2, 2); // stores '12'
byte[] testID = new byte[4];
testID[0] = Convert.ToByte(parmVal.Substring(4, 2)); // <-- error here
testID[1] = Convert.ToByte(parmVal.Substring(6, 2));
testID[2] = Convert.ToByte(parmVal.Substring(8, 2));
testID[3] = Convert.ToByte(0);
decimalID = int.Parse(hexID, System.Globalization.NumberStyles.HexNumber); // stores 18 (0x12)
testIDNumber = BitConverter.ToInt32(testID,0); // stores 345678
Interestingly, later in the code I output these values to a CSV file, and the values I print out look correct, even though it's throwing the exception. I tried reading in the last 3 bytes the same way as the first byte, but then when I do the int.Parse() on it, it gets its endianness backwards. Where I'd want it to convert 0x"785634", it's converting 0x"345678".
Try specifying the base in the Convert.ToByte call.
In your case it should be:
testID[0] = Convert.ToByte(parmVal.Substring(4, 2), 16);
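Applying that to the original snippet, a minimal sketch (assuming parmVal really holds "0x12345678" as in the question, and that the code runs on a little-endian machine, which is what BitConverter follows on x86/x64) might look like this:
// Minimal sketch, assuming parmVal holds "0x12345678" as in the question.
string parmVal = "0x12345678";

string hexID = parmVal.Substring(2, 2);                    // "12"
int decimalID = Convert.ToInt32(hexID, 16);                // 18 (0x12)

byte[] testID = new byte[4];
testID[0] = Convert.ToByte(parmVal.Substring(4, 2), 16);   // 0x34
testID[1] = Convert.ToByte(parmVal.Substring(6, 2), 16);   // 0x56
testID[2] = Convert.ToByte(parmVal.Substring(8, 2), 16);   // 0x78
testID[3] = 0;   // pad the high byte so the 3-byte value fits in an Int32

// BitConverter follows the machine's byte order (little-endian on x86/x64),
// so this produces 0x00785634 rather than 0x00345678.
int testIDNumber = BitConverter.ToInt32(testID, 0);        // 7886388 == 0x785634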
Related
So I am outputting the char 0x11a1 by converting it to char.
Then I multiply 0x11a1 by itself and output it again, but I do not get what I expect.
By doing this {int hgvvv = chch0;} and outputting to the console, I can see that the computer thinks that 0x11a1 * 0x11a1 equals 51009, but it actually equals 20367169.
As a result I do not get what I want.
Could you please explain to me why?
char chch0 = (char)0x11a1;
Console.WriteLine(chch0);
chch0 = (char)(chch0 * chch0);
Console.WriteLine(chch0);
int hgvvv = chch0;
Console.WriteLine(hgvvv);
We know that 1 byte is 8 bits.
We know that a char in C# is 2 bytes, which is 16 bits.
If we multiply 0x11a1 X 0x11a1 we get 0x136c741.
0x136c741 in binary is 0001001101101100011101000001
Considering we only have 16 bits, we only see the last 16 bits, which are: 1100011101000001
1100011101000001 in hex is 0xc741.
That is the 51009 you are seeing.
You are being limited by the size of the char type in C#.
Hope this answer cleared things up!
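If the full product is actually what's needed, a small follow-up sketch: keep the arithmetic in an int and only cast to char for display.
char chch0 = (char)0x11a1;

// char operands are promoted to int before the multiplication,
// so storing the result in an int keeps all the bits of the product.
int product = chch0 * chch0;
Console.WriteLine(product);              // 20367169

// Casting the product back to char is what truncates it to 16 bits.
Console.WriteLine((int)(char)product);   // 51009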
By enabling the checked context for your whole project, or by adding it explicitly in your code like this:
checked {
char chch0 = (char)0x11a1;
Console.WriteLine(chch0);
chch0 = (char)(chch0 * chch0); // OverflowException
Console.WriteLine(chch0);
int hgvvv = chch0;
Console.WriteLine(hgvvv);
}
You will see that you get an OverflowException, because the char type (2 bytes) can only store values up to Char.MaxValue = 0xFFFF.
The value you expect (20367169) is larger than 0xFFFF, so you are essentially left with only the two least significant bytes the type was able to store:
Console.WriteLine(20367169 & 0xFFFF);
// prints: 51009
I need to convert the byte data to integer:
byte[] mode = new byte[3] {50, 53, 53};
I tried to convert using BitConverter.ToInt32(mode, 0) but got an exception:
Destination array is not long enough to copy all the items in the collection. Check array index and length.
Update: The expected result after conversion is 255.
It's not clear what you're trying to achieve, but BitConverter.ToInt32 requires 4 bytes of data to work with and you're passing it an array of 3 bytes. Add one more byte and it will work, meaning it will not throw an exception and will do the conversion, but I'm not sure it will give you what you want.
Example:
byte[] mode = new byte[4] {50, 53, 53, 00};
var result = BitConverter.ToInt32(mode, 0); //Result will be 3487026
EDIT: Apparently this array represents text, not an integer. To convert it, you need to know the encoding used. If it is guaranteed to contain only digits, then you can use ASCII:
byte[] mode = new byte[3]{50, 53, 53};
string result = System.Text.Encoding.ASCII.GetString(mode); //Result will be 255
Now, if you want to convert it to an integer, then it is simple. Use any conversion method like int.Parse() or Convert.ToInt32().
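Putting the two steps together, a small sketch (assuming the bytes are always ASCII digits) could be:
byte[] mode = new byte[3] { 50, 53, 53 };                    // ASCII codes for '2', '5', '5'

string text = System.Text.Encoding.ASCII.GetString(mode);   // "255"
int value = int.Parse(text);                                 // 255

Console.WriteLine(value);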
I have this name:
string name = "Centos 64 bit";
I want to generate a 168-bit (or whatever is feasible) uid from this name, and to be able to get the name back from the id (and vice versa).
I tried GetHashCode() without success.
Result would be something like:
Centos 64 bit (=) 91C47A57-E605-4902-894B-74E791F37C1F
One solution I would recommend is to use a hash function and something like a dictionary. So, get a hash (say SHA-256) of your input string and truncate it to 168 bits (21 bytes).
Now, to go back from a uid to original string, you would need to have a dictionary which stores pairs like (input_string, string_uid). input_string is original string and string_uid is the uid generated for input_string using method from first paragraph.
Using this dictionary you can easily go back to original input string using string_uid.
This is one way - of course in case, you are allowed to store mappings between string and uid.
The hash normally gives you the result as a byte array. Converting that byte array to a string is a separate step.
For example, if you have 10 bytes representing integers in the range [0, 255], encoding the byte array as a hex string will take 20 bytes.
So the next question is: do you want the length of the uid, as a string, to be 21 bytes?
Because that would mean the hash output can only be around 10 bytes, which would poorly reflect on the collision resistance of the output.
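A rough sketch of the hash-plus-dictionary idea (the UidRegistry class and its method names are hypothetical; SHA-256 is truncated to 21 bytes, i.e. 168 bits, and the reverse lookup is kept in a Dictionary):
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

class UidRegistry
{
    // Maps the generated uid (as a hex string) back to the original name.
    private readonly Dictionary<string, string> _uidToName = new Dictionary<string, string>();

    public string GetUid(string name)
    {
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(name));

            // Keep only the first 21 bytes (168 bits) of the 32-byte digest.
            byte[] truncated = new byte[21];
            Array.Copy(hash, truncated, 21);

            string uid = BitConverter.ToString(truncated).Replace("-", "");
            _uidToName[uid] = name;   // remember the mapping for the reverse lookup
            return uid;
        }
    }

    public string GetName(string uid)
    {
        // The hash is one-way, so the only way back is the stored mapping.
        return _uidToName.TryGetValue(uid, out var name) ? name : null;
    }
}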
what you want is not achievable. You need to store a lookup table of hash to name. Since you dont give more details of yr system it hard to say if that has to be persistent or in memory. If in memory just use a dictionary of string->string
Here you go sir:
public byte[] GetUID(string name)
{
var bytes = Encoding.ASCII.GetBytes(name);
if (bytes.Length > 21)
throw new ArgumentException("Value is too long to be used as an ID");
var uid = new byte[21];
Buffer.BlockCopy(bytes, 0, uid, 0, bytes.Length);
return uid;
}
public string GetName(byte[] UID)
{
int length = UID.Length;
for (int i = 0; i < UID.Length; i++)
{
if (UID[i] == 0)
{
length = i;
break;
}
}
return Encoding.ASCII.GetString(UID, 0, length);
}
Caveats: it works for strings up to 21 characters in length that only use ASCII characters (no Unicode support) and it doesn't encrypt the string in any way, but I believe it meets your requirements.
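A quick round-trip sketch using those two methods:
byte[] uid = GetUID("Centos 64 bit");   // 21-byte array, zero-padded
string name = GetName(uid);             // "Centos 64 bit"
Console.WriteLine(name);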
I have a string that only contains 1 and 0 and I need to save this to a .txt-File.
I also want it to be as small as possible. Since I have binary code, I can turn it into pretty much anything. Saving the string as-is is not an option, since apparently every character takes up a whole byte, even though it's only a 1 or a 0.
I thought about turning my string into an array of bytes, but trying to convert "11111111" to a byte gave me a System.OverflowException (presumably because it was parsed as the decimal number 11111111).
My next thought was using an ASCII code page or something, but I don't know how reliable that would be. Alternatively, I could turn each 8-bit piece of my string into the corresponding number: 8 characters would turn into at most 3 (255), which seems pretty nice to me. And since I know the highest individual number will be 255, I don't even need any delimiter for decoding.
But I'm sure there's a better way.
So:
What exactly is the best/most efficient way to store a string that only contains 1 and 0?
You could represent all your data as 64 bit integers and then write them to a binary file:
// The string we are working with.
string str = @"1010101010010100010101101";
// The number of bits in a 64 bit integer!
int size = 64;
// Pad the end of the string with zeros so the length of the string is divisible by 64.
if (str.Length % size != 0)
    str += new string('0', size - str.Length % size);
// Convert each 64 character segment into a 64 bit integer.
long[] binary = new long[str.Length / size]
.Select((x, idx) => Convert.ToInt64(str.Substring(idx * size, size), 2)).ToArray();
// Copy the result to a byte array.
byte[] bytes = new byte[binary.Length * sizeof(long)];
Buffer.BlockCopy(binary, 0, bytes, 0, bytes.Length);
// Write the result to file.
File.WriteAllBytes("MyFile.bin", bytes);
EDIT:
If you're only writing up to 64 bits, then it's a one-liner:
File.WriteAllBytes("MyFile.bin", BitConverter.GetBytes(Convert.ToUInt64(str, 2)));
I would suggest using BinaryWriter. Like this:
BinaryWriter writer = new BinaryWriter(File.Open(fileName, FileMode.Create));
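For example, one way to combine BinaryWriter with the bit-packing idea from the question (a sketch, assuming trailing zero-padding to a multiple of 8 bits is acceptable):
using System;
using System.IO;

string bits = "1010101010010100010101101";

// Pad to a multiple of 8 so every byte is complete.
if (bits.Length % 8 != 0)
    bits += new string('0', 8 - bits.Length % 8);

// Pack each group of 8 characters into one byte.
byte[] packed = new byte[bits.Length / 8];
for (int i = 0; i < packed.Length; i++)
    packed[i] = Convert.ToByte(bits.Substring(i * 8, 8), 2);

using (var writer = new BinaryWriter(File.Open("MyFile.bin", FileMode.Create)))
{
    writer.Write(packed);
}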
The problem
I have a byte[] that is converted to a hex string, and then that string is parsed like this: BigInteger.Parse(thatString, NumberStyles.HexNumber).
This seems wasteful since BigInteger is able to accept a byte[], as long as the two's complement is accounted for.
A working (inefficient) example
According to MSDN, the most significant bit of the last byte should be zero in order for the following hex number to be a positive one. The following is an example of a hex number that has this issue:
byte[] ripeHashNetwork = GetByteHash();
foreach (var item in ripeHashNetwork)
{
Console.Write(item + "," );
}
// Output:
// 0,1,9,102,119,96,6,149,61,85,103,67,158,94,57,248,106,13,39,59,238,214,25,103,246
// Convert to Hex string using this http://stackoverflow.com/a/624379/328397
// Output:
// 00010966776006953D5567439E5E39F86A0D273BEED61967F6
Okay, let's pass that string into the static method of BigInteger:
BigInteger bi2 = BigInteger.Parse(thatString, NumberStyles.HexNumber);
// Output bi2.ToString() ==
// {25420294593250030202636073700053352635053786165627414518}
Now that I have a baseline of data, and known conversions that work, I want to make it better/faster/etc.
A not working (efficient) example
Now my goal is to round-trip a byte[] into a BigInteger and make the result look like 25420294593250030202636073700053352635053786165627414518. Let's get started:
So according to MSDN I need a zero in my last byte to keep my number from being interpreted as a negative two's complement value. I'll add the zero and print it out to be sure:
foreach (var item in ripeHashNetwork)
{
Console.Write(item + "," );
}
// Output:
// 0,1,9,102,119,96,6,149,61,85,103,67,158,94,57,248,106,13,39,59,238,214,25,103,246,0
Okay, let's pass that byte[] into the constructor of BigInteger:
BigInteger bi2 = new BigInteger(ripeHashNetwork);
// Output bi2.ToString() ==
// {1546695054495833846267861247985902403343958296074401935327488}
What I skipped over is the sample of what BigInteger does to my byte array if I don't add the trailing zero. What happens is that I get a negative number, which is wrong. I'll post that if you want.
So what am I doing wrong?
When you are going via the hex string, the first byte of your array is becoming the most significant byte of the resulting BigInteger.
When you are adding a trailing zero, the last byte of your array is the most significant.
I'm not sure which case is right for you, but that's why you're getting different answers.
From MSDN: "The individual bytes in the value array should be in little-endian order, from lowest-order byte to highest-order byte". So the mistake is the order of the bytes:
BigInteger bi2 = new BigInteger(ripeHashNetwork.Reverse().ToArray<byte>());
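To verify the round trip, a small sketch (GetByteHash and the hex string are taken from the question above):
using System;
using System.Linq;
using System.Numerics;

byte[] ripeHashNetwork = GetByteHash();   // big-endian bytes, as printed in the question

// BigInteger's byte[] constructor expects little-endian input, so reverse first.
byte[] littleEndian = ripeHashNetwork.Reverse().ToArray();

// If the high bit of the (now) last byte were set, an extra 0x00 byte would be
// needed to keep the value positive; here the original leading 0x00 ends up
// last, so the sign already comes out right.
BigInteger fromBytes = new BigInteger(littleEndian);

BigInteger fromHex = BigInteger.Parse(
    "00010966776006953D5567439E5E39F86A0D273BEED61967F6",
    System.Globalization.NumberStyles.HexNumber);

Console.WriteLine(fromBytes == fromHex);   // expected: True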