Is there a way to create new "Key" object from an existing BigInteger?
Something like this?
var bigInt = new BigInteger(934157136952);
Key key = new Key(bigInt);
I tried searching the documentation, but couldn't find anything.
Update 1
I also tried this. It gave me an exception with the message "Invalid String".
Key key = Key.Parse(bigInt.ToString(), Network.Main);
It seems it needs a Base58 string as the first argument.
Key key = Key.Parse(new Key().GetWif(Network.Main).ToString(), Network.Main); // Works fine
Why don't you create a Key instance using a byte[]? See the Key implementation of NBitcoin here.
C# supports converting BigInteger to byte arrays.
So your code could hypothetically look like this:
byte [] bytes = new BigInteger(934157136952).ToByteArray();
Key key = new Key(bytes);
The other parameters are optional.
However, in the Key implementation, you can see that your Key must be 32 bytes long. See line 14. So your example will throw an ArgumentException.
So creating an instance of BigInteger from a long won't work; you should instead parse a string. The code below gives a 32-byte BigInteger and compiles successfully on my machine.
byte[] bytes = BigInteger.Parse("11725344435539341571369527625736255434253725643527635436353563846746536545463").ToByteArray();
Key key = new Key(bytes);
Console.WriteLine(key.PubKey);
//prints : 0391c33d7a367471827b59372e79f7d727ec81ddf2abd5762c91151fe5a2b49f92
Update 1
The best way to convert the BigInteger's byte array is to pad it with zeros as follows.
byte[] byteArr = new BigInteger(934157136952).ToByteArray();
byte[] byteArr32 = new byte[32];
byte zero = (new BigInteger(0)).ToByteArray()[0]; // the byte representation of zero in the same encoding as BigInteger.ToByteArray()
for (int i = 0; i < byteArr.Length; i++)
{
    byteArr32[i] = byteArr[i];
}
for (int i = byteArr.Length; i < 32; i++)
{
    byteArr32[i] = zero; // a leading zero in the number becomes a trailing zero in the array, since ToByteArray() is little-endian
}
Key key = new Key(byteArr32);
Update 2
.NET provides a one-line way to resize an array (Array.Resize allocates a new array and copies the contents); for byte arrays the extra elements are zero by default.
byte[] bytes = new BigInteger(934157136952).ToByteArray();
Array.Resize<byte>(ref bytes, 32);
Key key = new Key(bytes);
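As a small sketch of the same idea wrapped in a helper (the name KeyFromBigInteger is just illustrative, and it assumes the value is non-negative and fits in 256 bits): BigInteger.ToByteArray() is little-endian and may append a trailing 0x00 sign byte, which Array.Resize simply drops.
static Key KeyFromBigInteger(BigInteger value)
{
    // Little-endian bytes; may be shorter than 32 bytes, or one byte longer due to the 0x00 sign byte.
    byte[] bytes = value.ToByteArray();
    Array.Resize(ref bytes, 32); // zero-pads short arrays, trims the trailing sign byte
    return new Key(bytes);
}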
When porting a snippet of code from Java to C#, I have come across a specific function which I am struggling to find a solution to. Basically when decoding, an array of bytes from an EC PublicKey needs to be converted to a PublicKey object and everything I have found on the internet doesn't seem to help.
I am developing this on Xamarin.Android using Java.Security libraries and BouncyCastle on Mono 6.12.0.
This is the code I am using in Java:
static PublicKey getPublicKeyFromBytes(byte[] pubKey) throws NoSuchAlgorithmException, InvalidKeySpecException {
    ECNamedCurveParameterSpec spec = ECNamedCurveTable.getParameterSpec("secp256r1");
    KeyFactory kf = KeyFactory.getInstance("EC", new BouncyCastleProvider());
    ECNamedCurveSpec params = new ECNamedCurveSpec("secp256r1", spec.getCurve(), spec.getG(), spec.getN());
    ECPoint point = ECPointUtil.decodePoint(params.getCurve(), pubKey);
    ECPublicKeySpec pubKeySpec = new ECPublicKeySpec(point, params);
    return (ECPublicKey) kf.generatePublic(pubKeySpec);
}
This was the best solution I could come up with which didn't throw any errors in VS. Sadly, it throws an exception and tells me that the spec is wrong:
X9ECParameters curve = CustomNamedCurves.GetByName("secp256r1");
ECDomainParameters domain = new ECDomainParameters(curve.Curve, curve.G, curve.N, curve.H);
ECPoint point = curve.Curve.DecodePoint(pubKey);
ECPublicKeyParameters pubKeySpec = new ECPublicKeyParameters(point, domain);
// Get the encoded representation of the public key
byte[] encodedKey = pubKeySpec.Q.GetEncoded();
// Create a KeyFactory object for EC keys
KeyFactory keyFactory = KeyFactory.GetInstance("EC");
// Generate a PublicKey object from the encoded key data
var pbKey = keyFactory.GeneratePublic(new X509EncodedKeySpec(encodedKey));
I have previously created a PrivateKey in a similar way where I generate a PrivateKey and then export the key in PKCS#8 format, then generating the object from this format. However I couldn't get this to work from an already set array of bytes.
Importing a raw public EC key (e.g. for secp256r1) is possible with pure Xamarin classes, BouncyCastle is not needed for this. The returned key can be used directly when generating the KeyAgreement:
using Java.Security.Spec;
using Java.Security;
using Java.Math;
using Java.Lang;
...
private IPublicKey GetPublicKeyFromBytes(byte[] rawXY) // assuming a valid raw key
{
    int size = rawXY.Length / 2;
    ECPoint q = new ECPoint(new BigInteger(1, rawXY[0..size]), new BigInteger(1, rawXY[size..]));
    AlgorithmParameters algParams = AlgorithmParameters.GetInstance("EC");
    algParams.Init(new ECGenParameterSpec("secp256r1"));
    ECParameterSpec ecParamSpec = (ECParameterSpec)algParams.GetParameterSpec(Class.FromType(typeof(ECParameterSpec)));
    KeyFactory keyFactory = KeyFactory.GetInstance("EC");
    return keyFactory.GeneratePublic(new ECPublicKeySpec(q, ecParamSpec));
}
In the above example rawXY is the concatenation of the x and y coordinates of the public key. For secp256r1, both coordinates are 32 bytes each, so the total raw key is 64 bytes.
However, the Java reference code does not import raw keys, but an uncompressed or compressed EC key. The uncompressed key corresponds to the concatenation of x and y coordinate (i.e. the raw key) plus an additional leading 0x04 byte, the compressed key consists of the x coordinate plus a leading 0x02 (for even y) or 0x03 (for odd y) byte.
For secp256r1 the uncompressed key is 65 bytes, the compressed key 33 bytes. A compressed key can be converted to an uncompressed key using BouncyCastle. An uncompressed key is converted to a raw key by removing the leading 0x04 byte.
To apply the above import in the case of an uncompressed or compressed key, it is necessary to convert it to a raw key, which can be done with BouncyCastle, e.g. as follows:
using Org.BouncyCastle.Asn1.X9;
using Org.BouncyCastle.Crypto.EC;
...
private byte[] ConvertToRaw(byte[] data) // assuming a valid uncompressed (leading 0x04) or compressed (leading 0x02 or 0x03) key
{
    if (data[0] != 4)
    {
        X9ECParameters curve = CustomNamedCurves.GetByName("secp256r1");
        Org.BouncyCastle.Math.EC.ECPoint point = curve.Curve.DecodePoint(data).Normalize();
        data = point.GetEncoded(false);
    }
    return data[1..];
}
Test: Import of a compressed key:
using Java.Util;
using Hex = Org.BouncyCastle.Utilities.Encoders.Hex;
...
byte[] compressed = Hex.Decode("023291D3F8734A33BCE3871D236431F2CD09646CB574C64D07FD3168EA07D3DB78");
pubKey = GetPublicKeyFromBytes(ConvertToRaw(compressed));
Console.WriteLine(Base64.GetEncoder().EncodeToString(pubKey.GetEncoded())); // MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEMpHT+HNKM7zjhx0jZDHyzQlkbLV0xk0H/TFo6gfT23ish58blPNhYrFI51Q/czvkAwCtLZz/6s1n/M8aA9L1Vg==
As can be easily verified with an ASN.1 parser (e.g. https://lapo.it/asn1js/), the exported X.509/SPKI key MFkw... contains the raw key, i.e. the compressed key was imported correctly.
I'm tinkering with RSA signing of data.
I'm using a plaintext string, which I convert to a byte array. I then generate a private key, sign the byte array, and then generate the public key.
Next, I use the same byte array to verify the signature.
But I want to convert the signature to a string in between these steps - the idea is to append it later to the file that's being signed.
static void TestSigning(string privateKey)
{
    string data = "TEST_TEST-TEST+test+TEst";
    Console.WriteLine("==MESSAGE==");
    Console.WriteLine(data);
    byte[] dataByte = Encoding.Unicode.GetBytes(data);
    using (var rsa = new RSACryptoServiceProvider())
    {
        rsa.FromXmlString(privateKey);
        var publicKey = rsa.ToXmlString(false);
        byte[] signature = rsa.SignData(dataByte, CryptoConfig.MapNameToOID("SHA512"));
        string signatureString = Encoding.Unicode.GetString(signature);
        byte[] roundtripSignature = Encoding.Unicode.GetBytes(signatureString);
        Console.WriteLine("==TEST==");
        Console.WriteLine(signature.Length.ToString());
        Console.WriteLine(roundtripSignature.Length.ToString());
        using (var checkRSA = new RSACryptoServiceProvider())
        {
            checkRSA.FromXmlString(publicKey);
            bool verification = checkRSA.VerifyData(
                dataByte,
                CryptoConfig.MapNameToOID("SHA512"),
                roundtripSignature);
            Console.WriteLine("==Verification==");
            Console.WriteLine(verification.ToString());
            Console.ReadKey();
        }
    }
}
Now here's the fun part.
If I use UTF8 encoding, I get byte arrays of different lengths:
256 is the original size
484 is the round trip
UTF7 returns different sizes too:
256 vs 679
Both ASCII and Unicode return matching sizes: 256 vs 256.
I've tried using
var sb = new StringBuilder();
for (int i = 0; i < signature.Length; i++)
{
    sb.Append(signature[i].ToString("x2"));
}
to get the string. I then use the Encoding.UTF8.GetBytes() method.
This time I get sizes of:
256 vs 512
If I remove the format from ToString() I get:
256 vs 670
Signature verification always fails.
It works fine if I use 'signature' instead of roundtripSignature.
My question: why, despite using the same encoding type, do I get different byte arrays and strings? Shouldn't this conversion be lossless?
Unicode isn't a good choice because an arbitrary byte sequence isn't guaranteed to be a valid encoded string: at minimum, \0, CR, LF, <delete>, <backspace> (and the rest of the control codes) can mess things up, and invalid sequences get replaced during decoding, so the round trip is lossy. (See an answer about this for Encrypt/Decrypt for more.)
As @JamesKPolk said, you need to use a suitable binary-to-text encoding. Base64 and hex/Base16 are the most common, but there are plenty of other viable choices.
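For example, here's a minimal sketch of the Base64 route applied to the code above (the variable names are taken from the question's snippet):
string signatureString = Convert.ToBase64String(signature);            // bytes -> text, lossless
byte[] roundtripSignature = Convert.FromBase64String(signatureString); // text -> the same 256 bytes
With this substitution the round-tripped array matches the original, so the verification should succeed.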
I am trying to write an encoded file. The file has 9 to 12 bit symbols. I suspect the 9-bit symbols are not being written correctly, because I am unable to decode the file afterwards; when the file contains only 8-bit symbols, everything works fine. This is how I am writing the file:
File.AppendAllText(outputFileName, WriteBackContent, ASCIIEncoding.Default);
The same goes for reading, with a ReadAllText call.
What is the way to go here?
I am using the ZXing library to encode my file with its Reed-Solomon encoder.
ReedSolomonEncoder enc = new ReedSolomonEncoder(GenericGF.AZTEC_DATA_12); // if I use AZTEC_DATA_8 it works fine because the symbol size is 8 bits
int[] bytesAsInts = Array.ConvertAll(toBytes.ToArray(), c => (int)c);
enc.encode(bytesAsInts, parity);
byte[] bytes = bytesAsInts.Select(x => (byte)x).ToArray();
string contentWithParity = (ASCIIEncoding.Default.GetString(bytes.ToArray()));
WriteBackContent += contentWithParity;
File.AppendAllText(outputFileName, WriteBackContent, ASCIIEncoding.Default);
As shown in the code, I am initializing my encoder with AZTEC_DATA_12, which means 12-bit symbols. Because the RS encoder requires an int array, I convert the data to an int array and write it to the file as above. It works well with AZTEC_DATA_8, because of the 8-bit symbol size, but not with AZTEC_DATA_12.
Main problem is here:
byte[] bytes = bytesAsInts.Select(x => (byte)x).ToArray();
You are basically throwing away part of the result when converting the single integers to single bytes.
If you look at the array after the call to encode(), you can see that some of the array elements have a value higher than 255, so they cannot be represented as bytes. However, in your code quoted above, you cast every single element in the integer array to byte, changing the element when it has a value greater than 255.
So to store the result of encode(), you have to convert the integer array to a byte array in a way that the values are not lost or modified.
In order to make this kind of conversion between byte arrays and integer arrays, you can use the function Buffer.BlockCopy(). An example on how to use this function is in this answer.
Use the samples from the answer and the one from the comment to the answer for both conversions: Turning a byte array to an integer array to pass to the encode() function and to turn the integer array returned from the encode() function back into a byte array.
Here are the sample codes from the linked answer:
// Convert integer array to byte array
byte[] result = new byte[intArray.Length * sizeof(int)];
Buffer.BlockCopy(intArray, 0, result, 0, result.Length);
// Convert byte array to integer array (with bugs fixed)
int bytesCount = byteArray.Length;
int intsCount = bytesCount / sizeof(int);
if (bytesCount % sizeof(int) != 0) intsCount++;
int[] result = new int[intsCount];
Buffer.BlockCopy(byteArray, 0, result, 0, byteArray.Length);
Now about storing the data into files: do not turn the data into a string directly via Encoding.GetString(). Not all bit sequences are valid representations of characters in any given character set, so converting a sequence of arbitrary bytes into a string will sometimes fail or silently corrupt the data.
Instead, either store/read the byte array directly via File.WriteAllBytes() / File.ReadAllBytes(), or use Convert.ToBase64String() and Convert.FromBase64String() to work with a base64-encoded string representation of the byte array.
Combined here is some sample code:
ReedSolomonEncoder enc = new ReedSolomonEncoder(GenericGF.AZTEC_DATA_12); // if you use AZTEC_DATA_8 it works fine because the symbol size is 8 bits
int[] bytesAsInts = Array.ConvertAll(toBytes.ToArray(), c => (int)c);
enc.encode(bytesAsInts, parity);
// Turn int array into byte array without losing values
byte[] bytes = new byte[bytesAsInts.Length * sizeof(int)];
Buffer.BlockCopy(bytesAsInts, 0, bytes, 0, bytes.Length);
// Write to file
File.WriteAllBytes(outputFileName, bytes);
// Read from file
bytes = File.ReadAllBytes(outputFileName);
// Turn byte array to int array
int bytesCount = bytes.Length;
int intsCount = bytesCount / sizeof(int);
if (bytesCount % sizeof(int) != 0) intsCount++;
int[] dataAsInts = new int[intsCount];
Buffer.BlockCopy(bytes, 0, dataAsInts, 0, bytes.Length);
// Decoding
ReedSolomonDecoder dec = new ReedSolomonDecoder(GenericGF.AZTEC_DATA_12);
dec.decode(dataAsInts, parity);
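If you prefer the base64 route mentioned above (for example to keep working with text files as in the original code), a minimal sketch of the alternative to the WriteAllBytes/ReadAllBytes step could look like this; it assumes the file holds just this one base64 string:
// Write the encoded data as base64 text instead of raw bytes
string base64 = Convert.ToBase64String(bytes);
File.WriteAllText(outputFileName, base64);
// ...and read it back later
byte[] restored = Convert.FromBase64String(File.ReadAllText(outputFileName));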
I was wondering if there's a difference in security between the following:
CASE A:
byte[] data = new byte[47];
using (RNGCryptoServiceProvider crypto = new RNGCryptoServiceProvider())
{
    crypto.GetBytes(data);
}
CASE B:
byte[] data = new byte[47];
using (RNGCryptoServiceProvider crypto = new RNGCryptoServiceProvider())
{
    for (int i = 0; i < 47; i++)
    {
        byte[] single = new byte[1];
        crypto.GetBytes(single);
        data[i] = single[0];
    }
}
I ask because I was inspired by the MSDN example, which checks whether each received byte is usable, to avoid the unfair distribution you get from applying modulo to a limited range of values. (I am building a random string generator and I don't want to give the characters early in the alphabet the advantage of an unfair distribution.)
So basically my question is: is there a difference in security between looping GetBytes to get N bytes one at a time (case B) and calling GetBytes once to get N bytes directly (case A)?
Thank you for your time
No, there isn't. The bytes are generated the same way no matter which way you get them.
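As for the bias your MSDN example guards against: it comes from the modulo/mapping step, not from how many bytes you request per call. Here's a minimal sketch of that rejection-sampling idea (the method name and charset parameter are just illustrative):
// Picks a character uniformly by rejecting bytes that would make the modulo wrap unevenly.
static char GetRandomChar(RNGCryptoServiceProvider crypto, string charset)
{
    byte[] buffer = new byte[1];
    int limit = 256 - (256 % charset.Length); // largest multiple of charset.Length that fits in a byte
    while (true)
    {
        crypto.GetBytes(buffer);
        if (buffer[0] < limit)
            return charset[buffer[0] % charset.Length];
    }
}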
I get the following error when I try to create an IV (initialization vector) for a TripleDES encryptor.
Please see the code example:
TripleDESCryptoServiceProvider tripDES = new TripleDESCryptoServiceProvider();
byte[] key = Encoding.ASCII.GetBytes("SomeKey132123ABC");
byte[] v4 = key;
byte[] connectionString = Encoding.ASCII.GetBytes("SomeConnectionStringValue");
byte[] encryptedConnectionString = Encoding.ASCII.GetBytes("");
// Read the key and convert it to byte stream
tripDES.Key = key;
tripDES.IV = v4;
This is the exception I get in VS:
Specified initialization vector (IV) does not match the block size for this algorithm.
Where am I going wrong?
Thank you
MSDN explicitly states that:
...The size of the IV property must be the same as the BlockSize property.
For Triple DES it is 64 bits.
The size of the initialization vector must match the block size - 64 bits in the case of TripleDES. Your initialization vector is much longer than eight bytes.
Furthermore, you should really use a key derivation function like PBKDF2 to create strong keys and initialization vectors from password phrases.
Key should be 24 bytes and IV should be 8 bytes.
tripDES.Key = Encoding.ASCII.GetBytes("123456789012345678901234");
tripDES.IV = Encoding.ASCII.GetBytes("12345678");
The IV must be the same length (in bits) as tripDES.BlockSize. This will be 8 bytes (64 bits) for TripleDES.
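For instance, a small sketch (not from the question, just illustrative) that lets the provider generate a correctly sized key and IV instead of deriving them from ASCII strings:
using (var tripDES = new TripleDESCryptoServiceProvider())
{
    tripDES.GenerateKey(); // 24-byte (192-bit) key by default
    tripDES.GenerateIV();  // 8-byte IV, matching the 64-bit block size
    Console.WriteLine(tripDES.Key.Length); // 24
    Console.WriteLine(tripDES.IV.Length);  // 8
}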
I've upvoted every answer here (well, the ones that were here before mine!) as they're all correct.
However there's a bigger mistake you're making (one which I also made v.early on) - DO NOT USE A STRING TO SEED THE IV OR KEY!!!
A compile-time string literal is a Unicode string. Quite apart from the fact that you won't get either a random or a wide-enough spread of byte values (even a random string contains lots of repeating bytes, because printable characters cover a narrow byte range), it's very easy to hit a character that actually requires 2 bytes instead of 1 - try using 8 of the more exotic characters on the keyboard and you'll see what I mean: when converted to bytes you can end up with more than 8 bytes.
Okay - so you're using ASCII Encoding - but that doesn't solve the non-random problem.
Instead you should use RNGCryptoServiceProvider to initialise your IV and Key and, if you need to capture a constant value for this for future use, then you should still use that class - but capture the result as a hex string or Base-64 encoded value (I prefer hex, though).
To achieve this simply, I've written a macro that I use in VS (bound to the keyboard shortcut CTRL+SHIFT+G, CTRL+SHIFT+H) which uses the .NET crypto RNG to produce a hex string:
Public Sub GenerateHexKey()
    Dim result As String = InputBox("How many bits?", "Key Generator", 128)
    Dim len As Int32 = 128
    If String.IsNullOrEmpty(result) Then Return
    If System.Int32.TryParse(result, len) = False Then
        Return
    End If
    Dim oldCursor As Cursor = Cursor.Current
    Cursor.Current = Cursors.WaitCursor
    Dim buff((len / 8) - 1) As Byte
    Dim rng As New System.Security.Cryptography.RNGCryptoServiceProvider()
    rng.GetBytes(buff)
    Dim sb As New StringBuilder(CType((len / 8) * 2, Integer))
    For Each b In buff
        sb.AppendFormat("{0:X2}", b)
    Next
    Dim selection As EnvDTE.TextSelection = DTE.ActiveDocument.Selection
    Dim editPoint As EnvDTE.EditPoint
    selection.Insert(sb.ToString())
    Cursor.Current = oldCursor
End Sub
Now all you need to do is to turn your hex string literal into a byte array - I do this with a helpful extension method:
public static byte[] FromHexString(this string str)
{
    //null check a good idea
    int NumberChars = str.Length;
    byte[] bytes = new byte[NumberChars / 2];
    for (int i = 0; i < NumberChars; i += 2)
        bytes[i / 2] = Convert.ToByte(str.Substring(i, 2), 16);
    return bytes;
}
There are probably better ways of doing that bit - but it works for me.
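Usage is then a one-liner; the hex literal below is purely an example value, and this assumes FromHexString lives in a static class so it can be called as an extension method (tripDES being the algorithm instance from the question):
byte[] iv = "A1B2C3D4E5F60718".FromHexString(); // 16 hex chars -> 8 bytes, the TripleDES block size
tripDES.IV = iv;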
I do it like this:
var derivedForIv = new Rfc2898DeriveBytes(passwordBytes, _saltBytes, 3);
_encryptionAlgorithm.IV = derivedForIv.GetBytes(_encryptionAlgorithm.LegalBlockSizes[0].MaxSize / 8);
The IV gets its bytes from the Rfc2898DeriveBytes 'smusher', using the block size reported by the algorithm itself via the LegalBlockSizes property.
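Put together, a self-contained sketch of that approach for TripleDES might look like this (the passphrase, salt and iteration count are placeholders; use a real random salt and far more iterations in production):
using System.Security.Cryptography;
using System.Text;
byte[] passwordBytes = Encoding.UTF8.GetBytes("some passphrase");
byte[] saltBytes = { 1, 2, 3, 4, 5, 6, 7, 8 };
using (var tripDES = new TripleDESCryptoServiceProvider())
using (var derived = new Rfc2898DeriveBytes(passwordBytes, saltBytes, 3))
{
    tripDES.Key = derived.GetBytes(24); // 192-bit key
    tripDES.IV = derived.GetBytes(tripDES.LegalBlockSizes[0].MaxSize / 8); // 8-byte IV for the 64-bit block
}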