Why is the value negative after computing a hash? - c#

Can someone please advise why, after the hash is computed, a negative value is stored in e?
static byte[] bytes;
BigInteger[] numbers = {A, ANeg, Aseed, AseedNeg, C1, C2, C1Neg, C2Neg};
foreach (BigInteger number in numbers)
{
    bytes = number.ToByteArray();
}
SHA1 sha = new SHA1CryptoServiceProvider();
hash = sha.ComputeHash(bytes);
e = new BigInteger(hash);

For the same reason "-3" is negative and "2" isn't: the symbol used to indicate that the number is below zero is present. When someone writes "-3", that symbol is the "-". For a BigInteger built from a byte array, it is the most significant bit of the last byte.

The BigInteger(byte[]) constructor documentation on MSDN offers the following:
if the highest-order bit of the highest-order byte in value is set, the resulting BigInteger value is negative
and
To prevent positive values from being misinterpreted as negative values, you can add a zero-byte value to the end of the array.
Applying the documentation:
e will be negative if the last byte in hash (i.e., hash[hash.Length - 1]) is greater than 0x7f (127).
To interpret the value of hash as an unsigned number, add a zero-byte value to the end of hash, e.g.,
e = new BigInteger(hash.Concat(new byte[]{0}).ToArray());
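For completeness, here is a minimal, self-contained sketch of the whole flow (the input bytes are made up for illustration): compute the SHA-1 hash, check the high bit of the last byte, and append a zero byte so the resulting BigInteger is never negative.
using System;
using System.Linq;
using System.Numerics;
using System.Security.Cryptography;

class HashSignDemo
{
    static void Main()
    {
        byte[] bytes = new BigInteger(12345).ToByteArray(); // arbitrary sample input

        using (SHA1 sha = SHA1.Create())
        {
            byte[] hash = sha.ComputeHash(bytes);

            // The value is negative exactly when the high bit of the last byte is set.
            Console.WriteLine(hash[hash.Length - 1] > 0x7f);

            BigInteger signed = new BigInteger(hash);
            BigInteger unsigned = new BigInteger(hash.Concat(new byte[] { 0 }).ToArray());
            Console.WriteLine(signed);   // may be negative
            Console.WriteLine(unsigned); // always >= 0
        }
    }
}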

Related

How to read negative binary data using BinaryReader.ReadBytes(Int32)

I have a binary data file that contains both negative and positive values stored in two's complement form.
Whenever I try to read the data using the BinaryReader class, I always get a positive number.
private double nextByte(BinaryReader reader)
{
    byte[] bValue = reader.ReadBytes(2);
    StringBuilder hex = new StringBuilder(bValue.Length * 2);
    foreach (byte b in bValue)
        hex.AppendFormat("{0:x2}", b);
    int decValue = int.Parse(hex.ToString(), System.Globalization.NumberStyles.HexNumber);
    return Convert.ToDouble(decValue);
}
For example:
Suppose the data file contains 1011100101011110. The equivalent unsigned decimal value is 47454, while the signed two's-complement value is -18082.
Reading with BinaryReader.ReadBytes(2) and converting this way always gives me the positive value, but I am expecting -18082.
The problem is that the data file contains both positive and negative values, so how can I handle this? Can anyone help me?
If you want to continue with your crazy conversion, you simply need to cast the resulting int to short:
int decValue = 47454;
// int.Parse(hex.ToString(), System.Globalization.NumberStyles.HexNumber);
return (short)decValue;
The cast to short simply trims the two high bytes of the int, leaving the two's-complement representation of the negative value; the resulting short value -18082 is then implicitly converted to -18082.0 to match the double return type.
Note that reading short directly with ReadInt16 is likely what you want.
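For instance, here is a minimal sketch of such a reader; the method name and the big-endian assumption are mine, so adjust it to however your file actually orders the two bytes.
using System;
using System.IO;

static class SignedReaderDemo
{
    // Reads one 16-bit two's-complement value and widens it to double.
    static double NextValue(BinaryReader reader, bool bigEndian)
    {
        byte[] b = reader.ReadBytes(2);
        short value = bigEndian
            ? (short)((b[0] << 8) | b[1])   // big-endian: first byte is the high byte
            : (short)(b[0] | (b[1] << 8));  // little-endian: same byte order as reader.ReadInt16()
        return value;                       // implicit widening to double
    }

    static void Main()
    {
        // 0xB9 0x5E is 1011100101011110 from the question: -18082 when read big-endian.
        using (var reader = new BinaryReader(new MemoryStream(new byte[] { 0xB9, 0x5E })))
        {
            Console.WriteLine(NextValue(reader, bigEndian: true)); // -18082
        }
    }
}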

Is Guid.NewGuid().ToByteArray() converted to a string (number) still unique

I need to generate a unique id which consists of numbers.
Is the following result string uniqueId as unique as the result of guid.ToString()?
Guid guid = Guid.NewGuid();
byte[] guidBytes = guid.ToByteArray();
// Is the result (uniqueId) as unique as guid.ToString()?
string uniqueId = string.Join(string.Empty, guidBytes);
You need a separator between the byte values, or you need to pad each byte with zeros; otherwise different arrays can produce the same string.
Example (padded to three digits): 3,5,6,7,123 => 003005006007123
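To make the collision concrete, here is a small illustration with made-up byte values:
using System;
using System.Linq;

class CollisionDemo
{
    static void Main()
    {
        byte[] a = { 1, 23 };
        byte[] b = { 12, 3 };

        // Without padding, both arrays collapse to the same string.
        Console.WriteLine(string.Join(string.Empty, a)); // 123
        Console.WriteLine(string.Join(string.Empty, b)); // 123

        // Padding every byte to three digits keeps them distinct.
        Console.WriteLine(string.Concat(a.Select(x => x.ToString("000")))); // 001023
        Console.WriteLine(string.Concat(b.Select(x => x.ToString("000")))); // 012003
    }
}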
Yes, there is a 1:1 mapping of byte arrays to Guids. No information is lost during the transformation so you still retain the same uniqueness as using the normal string representation of a Guid.
A Guid really is just a 16 byte number, it does not matter if you show it as {3F2504E0-4F89-41D3-9A0C-0305E82C3301}, 4AQlP4lP00GaDAMF6CwzAQ== or as 224004037063137079211065154012003005232044051001, it all still represents the same number.
EDIT: Oops, as mkysoft pointed out, you do have to deal with leading zeros. Padding the numbers to 3 digits solves the issue:
var guid = new Guid("{3F2504E0-4F89-41D3-9A0C-0305E82C3301}");
Console.WriteLine(string.Join(string.Empty, guid.ToByteArray().Select(x=>x.ToString("000"))));
UPDATE: Actually, I just thought of a better solution: a Guid is a 128-bit number, so by using two 64-bit numbers and left-padding the second half with zeros you get a shorter, but still unique, number.
var guid = new Guid("{3F2504E0-4F89-41D3-9A0C-0305E82C3301}");
var guidBytes = guid.ToByteArray();
Console.WriteLine("{0}{1}", BitConverter.ToUInt64(guidBytes, 0), BitConverter.ToUInt64(guidBytes,8).ToString().PadLeft(20, '0'));
This will output a unique integer that is between 21 and 40 digits long; {3F2504E0-4F89-41D3-9A0C-0305E82C3301} becomes 474322228343976880000086462192878292122.
Or you could use BigInteger.ToString() to handle turning big numbers into strings (since it's really good at that):
var p = Guid.NewGuid().ToByteArray();
Array.Resize(ref p, p.Length + 1);
Console.WriteLine(new BigInteger(p));
The resize is only needed if you require positive numbers (otherwise there is roughly a 50% chance you get a negative number). You could also use System.Security.Cryptography.RandomNumberGenerator.GetBytes to work with a larger or smaller set of bytes (depending on how big you want your identifiers to be).

converting hex number to signed short

I'm trying to convert a string that includes a hex value into its equivalent signed short in C#
for example:
the equivalent hex number of -1 is 0xFFFF (in two bytes)
I want to do the inverse, i.e I want to convert 0xFFFF into -1
I'm using
string x = "FF";
short y = Convert.ToInt16(x,16);
but the output y is 255 instead of -1; I need the signed equivalent.
Can anyone help me?
Thanks
When your input is "FF", you have the hex string representation of a single byte.
If you parse it into a short (two bytes), the value only occupies the low byte, so the sign bit of the 16-bit result is not set and you get 255.
A string of "FFFF", on the other hand, represents two bytes whose highest bit is set, so the result is negative when assigned to a signed type like Int16 and 65535 when assigned to an unsigned type like ushort:
string number = "0xFFFF";
short n = Convert.ToInt16(number, 16);
ushort u = Convert.ToUInt16(number, 16);
Console.WriteLine(n);
Console.WriteLine(u);
number = "0xFF";
byte b = Convert.ToByte(number, 16);
short x = Convert.ToInt16(number, 16);
ushort z = Convert.ToUInt16(number, 16);
Console.WriteLine(n);
Console.WriteLine(x);
Console.WriteLine(z);
Output:
-1
65535
-1
255
255
You're looking to convert the string representation of a signed byte, not a short.
You should use Convert.ToSByte(string, int) with 16 as the base instead.
A simple unit test to demonstrate
[Test]
public void MyTest()
{
    short myValue = Convert.ToSByte("FF", 16);
    Assert.AreEqual(-1, myValue);
}
Please see http://msdn.microsoft.com/en-us/library/bb311038.aspx for full details on converting between hex strings and numeric values.

How do I make BigInteger see the binary representation of this Hex string correctly?

The problem
I have a byte[] that is converted to a hex string, and then that string is parsed like this: BigInteger.Parse(thatString, NumberStyles.HexNumber).
This seems wasteful, since BigInteger can accept a byte[] directly, as long as the two's complement representation is accounted for.
A working (inefficient) example
According to MSDN, the most significant bit of the last byte should be zero for the following hex number to be treated as positive. The following is an example of a hex number that has this issue:
byte[] ripeHashNetwork = GetByteHash();
foreach (var item in ripeHashNetwork)
{
    Console.Write(item + ",");
}
// Output:
// 0,1,9,102,119,96,6,149,61,85,103,67,158,94,57,248,106,13,39,59,238,214,25,103,246
// Convert to Hex string using this http://stackoverflow.com/a/624379/328397
// Output:
// 00010966776006953D5567439E5E39F86A0D273BEED61967F6
Okay, let's pass that string into the static method of BigInteger:
BigInteger bi2 = BigInteger.Parse(thatString, NumberStyles.HexNumber);
// Output bi2.ToString() ==
// {25420294593250030202636073700053352635053786165627414518}
Now that I have a baseline of data, and known conversions that work, I want to make it better/faster/etc.
A non-working (efficient) example
Now my goal is to round-trip a byte[] into BigInteger and make the result look like 25420294593250030202636073700053352635053786165627414518. Let's get started:
So, according to MSDN, I need a zero in my last byte to keep my number from being interpreted as a negative two's-complement value. I'll add the zero and print the array out to be sure:
foreach (var item in ripeHashNetwork)
{
    Console.Write(item + ",");
}
// Output:
// 0,1,9,102,119,96,6,149,61,85,103,67,158,94,57,248,106,13,39,59,238,214,25,103,246,0
Okay, let's pass that byte[] into the constructor of BigInteger:
BigInteger bi2 = new BigInteger(ripeHashNetwork);
// Output bi2.ToString() ==
// {1546695054495833846267861247985902403343958296074401935327488}
What I skipped over is a sample of what BigInteger does to my byte array if I don't add the trailing zero: I get a negative number, which is wrong. I'll post that if you want.
So what am I doing wrong?
When you go via the hex string, the first byte of your array becomes the most significant byte of the resulting BigInteger.
When you add a trailing zero, the last byte of your array is the most significant.
I'm not sure which case is right for you, but that's why you're getting different answers.
From MSDN: "The individual bytes in the value array should be in little-endian order, from lowest-order byte to highest-order byte." So the mistake is the order of the bytes:
BigInteger bi2 = new BigInteger(ripeHashNetwork.Reverse().ToArray<byte>());
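To check the claim, here is a minimal, self-contained sketch (the byte values are taken from the output above) showing that reversing the array reproduces the value obtained by parsing the big-endian hex string:
using System;
using System.Globalization;
using System.Linq;
using System.Numerics;

class ReverseDemo
{
    static void Main()
    {
        byte[] ripeHashNetwork =
        {
            0, 1, 9, 102, 119, 96, 6, 149, 61, 85, 103, 67, 158,
            94, 57, 248, 106, 13, 39, 59, 238, 214, 25, 103, 246
        };

        // Big-endian hex string, as in the "inefficient" example.
        string hex = string.Concat(ripeHashNetwork.Select(b => b.ToString("X2")));
        BigInteger viaString = BigInteger.Parse(hex, NumberStyles.HexNumber);

        // Little-endian byte[] constructor: reverse the array first.
        BigInteger viaBytes = new BigInteger(ripeHashNetwork.Reverse().ToArray());

        Console.WriteLine(viaString == viaBytes); // True: same positive value
    }
}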

Shift a 128-bit signed BigInteger to always be positive

I'm converting a Guid to a BigInteger so I can base62-encode it. This works well; however, I can get negative numbers from the BigInteger. How do I shift the BigInteger so the number is positive? I'll also need to be able to shift it back so I can convert back to a Guid.
// GUID is a 128-bit signed integer
Guid original = new Guid("{35db5c21-2d98-4456-88a0-af263ed87bc2}");
BigInteger b = new BigInteger(original.ToByteArray());
// shift so it's a positive number?
Note: for a URL-safe version of Base64, consider using the modified set of characters for Base64 (http://en.wikipedia.org/wiki/Base64#URL_applications) instead of a custom Base62.
I believe you can append a 0 byte to the array first (so the highest-order byte never has its highest bit set) and then convert to BigInteger, if you really need a positive BigInteger.
Do you mean Base64 encode?
Convert.ToBase64String(Guid.NewGuid().ToByteArray());
If you sometimes get negative numbers, it means your GUID values are large enough to set the most significant bit, and the BigInteger byte[] ctor interprets such data as negative. To make sure the bytes produce a positive value, check that you have at most 16 bytes (128 bits) and that the most significant bit of the last byte is zero (the last byte matters because the array is little endian). If that bit is set, you can simply append a zero byte to your array (again at the end, because it is little endian) so the BigInteger ctor treats the value as positive.
This article, I think, can give you the solution.
In summary: append one more byte, set to 0, if the most significant bit of the last byte is 1:
Guid original = Guid.NewGuid();
byte[] bytes = original.ToByteArray();
if ((bytes[bytes.Length - 1] & 0x80) > 0)
{
    byte[] temp = new byte[bytes.Length];
    Array.Copy(bytes, temp, bytes.Length);
    bytes = new byte[temp.Length + 1];
    Array.Copy(temp, bytes, temp.Length);
}
BigInteger guidPositive = new BigInteger(bytes);
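Since the question also asks about shifting back, here is a minimal round-trip sketch (the helper names are mine): pad with a zero byte on the way out, then trim the byte array back to 16 bytes to recover the Guid.
using System;
using System.Numerics;

static class GuidBigIntegerRoundTrip
{
    public static BigInteger ToPositiveBigInteger(Guid guid)
    {
        byte[] bytes = guid.ToByteArray();          // 16 bytes, little-endian for BigInteger
        if ((bytes[bytes.Length - 1] & 0x80) != 0)  // sign bit set: pad with a zero byte
        {
            Array.Resize(ref bytes, bytes.Length + 1);
        }
        return new BigInteger(bytes);
    }

    public static Guid ToGuid(BigInteger value)
    {
        byte[] bytes = value.ToByteArray();         // 16 or 17 bytes (17 if padded)
        Array.Resize(ref bytes, 16);                // drop the padding / restore the length
        return new Guid(bytes);
    }

    static void Main()
    {
        Guid original = new Guid("{35db5c21-2d98-4456-88a0-af263ed87bc2}");
        BigInteger positive = ToPositiveBigInteger(original);
        Console.WriteLine(positive >= 0);                // True
        Console.WriteLine(ToGuid(positive) == original); // True
    }
}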
