C# AES-256 Unicode Key

I need to make a strong key for AES-256: a) as Unicode characters, b) as a key in bytes.
a) I have to generate 50 random Unicode characters and then convert them to bytes. Is it possible to use Unicode characters as an AES-256 key?
For example, I want to use this page to create the password.
Is there any way to import all characters from the Windows character table into a program in a Windows Forms app?
b) I'm using this code:
System.Security.Cryptography.AesCryptoServiceProvider key = new System.Security.Cryptography.AesCryptoServiceProvider();
key.KeySize = 256;
key.GenerateKey();
byte[] AESkey = key.Key;
Is this enough, or should I change something?
I also have one more question. Will making an AES key longer than 43 ASCII characters be more secure, or will it be hashed down to 256 bits anyway? And is there a difference between an ASCII key of 43 characters and one of 100?

a) I have to generate 50 random Unicode characters and then convert them to bytes. Is it possible to use Unicode characters as an AES-256 key?
Yes, this is possible. A 256-bit key is 32 bytes, and since you have plenty of characters to spare you can simply encode those bytes as text: base64 takes ceil(32 / 3) * 4 = 44 characters, so 50 is more than enough. You would not be using the additional space provided by Unicode though, and obviously you would need to convert the text back to binary before using it as a key.
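A minimal sketch of that round trip (using Aes.Create here, although the AesCryptoServiceProvider from your own snippet behaves the same way; the class and variable names are just for illustration):
using System;
using System.Security.Cryptography;

class KeyEncodingSketch
{
    static void Main()
    {
        byte[] key;
        using (Aes aes = Aes.Create())
        {
            aes.KeySize = 256;
            aes.GenerateKey();              // 32 random bytes
            key = aes.Key;
        }

        // ceil(32 / 3) * 4 = 44 printable characters
        string printable = Convert.ToBase64String(key);
        Console.WriteLine(printable.Length);

        // Convert back to binary before handing it to AES
        byte[] restored = Convert.FromBase64String(printable);
    }
}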
b) is aes.GenerateKey "enough"?
Yes, aes.GenerateKey is enough to generate a binary AES key.
c) Will making an AES key longer than 43 ASCII characters be more secure, or will it be hashed down to 256 bits anyway? And is there a difference between an ASCII key of 43 characters and one of 100?
An AES key is not hashed at all. It's just 128, 192 or 256 bits (i.e. 16, 24 or 32 bytes) of data that should be indistinguishable from random (to somebody who doesn't know the value, of course). If you want to hash something you'd have to do it yourself - but please read on.
The important thing to understand is that a password is not a key, and that keys for modern ciphers are almost always encoded as binary. For AES there is no such thing as an ASCII key. If you need to encode the key, use base 64.
If you want to use a password then you need to use a key derivation function or KDF. Furthermore, if you want to protect against dictionary and rainbow table attacks you will want to use a password based key derivation function or PBKDF. Such a KDF is also called a "password hash". In the case of .NET your best bet is Rfc2898DeriveBytes, which implements PBKDF2. PBKDF2 is defined in RFC 2898, titled PKCS #5: Password-Based Cryptography Specification Version 2.0, which you may want to read.
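A minimal sketch of that approach; the password, the 16-byte salt and the 100,000 iterations are illustrative values, not anything mandated by the RFC:
using System;
using System.Security.Cryptography;

class PasswordKeySketch
{
    static void Main()
    {
        string password = "correct horse battery staple";   // example only

        // A random salt defeats rainbow tables; store it next to the ciphertext.
        byte[] salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);

        // PBKDF2 stretches the password; use as many iterations as you can afford.
        using (var kdf = new Rfc2898DeriveBytes(password, salt, 100000))
        {
            byte[] aesKey = kdf.GetBytes(32);   // 256-bit AES key
        }
    }
}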

Related

SHA1 with RSA encryption: bad length error

I probably have several misunderstandings.
AFAIK signing a byte array with RSA-SHA1 generates a byte array (signature) of the same length as the RSA key used. Is that right?
On the other hand, signing roughly means generating a hash using SHA1 (so it is 160 bits long) and then, with or without a padding scheme, encrypting it with the private key. Is that right?
Later on, in order to recover this hash (with or without the padding scheme on it) I would need to encrypt the signature with the public key. Is that right?
Something is broken in my logic because I'm not able to encrypt the signature with the public key.
Or my code is wrong. I'm using the .NET RSACryptoServiceProvider and it raises a bad data length error when trying to encrypt a signature... I assume encrypt means applying RSA using the public key, right?
When trying to decrypt it raises a Key Not Found exception, as expected because I only have the public key.
EDIT:
Given a byte array and an RSACryptoServiceProvider I can Encrypt, Decrypt and SignData. I thought that SignData (without a padding scheme, to simplify the question) is a shortcut for applying SHA and then Decrypt. By Encrypt I mean applying the RSA formula using the public key as input, and by Decrypt I mean applying the RSA formula (the very same formula) using the private key as input. Are these definitions OK?
EDIT2:
For example, have a look at the following signed XML: http://www.facturae.gob.es/formato/Versiones/factura_ejemplo2_32v1.xml
And the following PowerShell script:
$signb64="oYR1T06OSaryEDv8VF9/JgWmwf0KSyOXKpBWY4uAD0YoMh7hedEj8GyRnKpVpaFanqycIAwGGCgl vtCNm+qeLvZXuI0cfl2RF421F8Ay+Q0ani/OtzUUE49wuvwTCClPaNdhv2vfUadR8ExR7e/gI/IL 51uc3mEJX+bQ8dxAQ2w=";
$certB64="MIIDtDCCAx2gAwIBAgICAIcwDQYJKoZIhvcNAQELBQAwcjELMAkGA1UEBhMCRVMxDzANBgNVBAgT Bk1hZHJpZDEPMA0GA1UEBxMGTWFkcmlkMQ4wDAYDVQQKEwVNSVR5QzEbMBkGA1UECxMSTUlUeUMg RE5JZSBQcnVlYmFzMRQwEgYDVQQDEwtDQSB1c3VhcmlvczAeFw0wOTEwMTUxNjA5MzRaFw0xMDEw MTUxNjA5MzRaMHExCzAJBgNVBAYTAkVTMQ8wDQYDVQQIEwZNYWRyaWQxDzANBgNVBAcTBk1hZHJp ZDEOMAwGA1UEChMFTUlUeUMxGzAZBgNVBAsTEk1JVHlDIEROSWUgUHJ1ZWJhczETMBEGA1UEAxMK VXN1YXJpbyA1NDCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAroms65axKuQK18YDfD/x6DIn 0zKZ+6bv1K2hItJxel/JvU3JJ80/nY5o0Zbn+PrvlR2xF3poWYcPHLZpesgxhCMfnP7Jb5OUfceL g44m6T9P3PG1lSAZs3H6/TabyWGJy+cNRZMWs13KnB9fDAjJ5Jw0HVkwYNwmb1c7sHCuyxcCAwEA AaOCAVgwggFUMAkGA1UdEwQCMAAwCwYDVR0PBAQDAgXgMB0GA1UdDgQWBBTYhqU2tppJoHl+S1py BOH+dliYhzCBmAYDVR0jBIGQMIGNgBT1oWqod09bsQSMp35I8Q6fxXaPG6FypHAwbjEPMA0GA1UE CBMGTWFkcmlkMQ8wDQYDVQQHEwZNYWRyaWQxDjAMBgNVBAoTBU1JVHlDMRswGQYDVQQLExJNSVR5 QyBETkllIFBydWViYXMxEDAOBgNVBAMTB1Jvb3QgQ0ExCzAJBgNVBAYTAkVTggEDMAkGA1UdEQQC MAAwNgYDVR0SBC8wLYYraHR0cDovL21pbmlzdGVyLThqZ3h5OS5taXR5Yy5hZ2UvUEtJL0NBLmNy dDA9BgNVHR8ENjA0MDKgMKAuhixodHRwOi8vbWluaXN0ZXItOGpneHk5Lm1pdHljLmFnZS9QS0kv Y3JsLmNybDANBgkqhkiG9w0BAQsFAAOBgQAhAN/KVouQrHOgd74gBJqGXyBXfVOeTVW+UTthhfCv DatXzTcrkYPQMfBAQMgGEa5KaQXcqKKhaoCUvrzFqE0HnAGX+ytX41oxZiM2fGNxRZcyUApLEX67 m8HOA/Cs2ZDlpU2W7wiOX5qr+ToTyfXsnRwPWvJ8VUmmXwyMEKcuzg==";
$signb=[System.Convert]::FromBase64String($signB64);
$certb=[System.Convert]::FromBase64String($certB64);
$cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList @(,$certb)
$rsacsp = [System.Security.Cryptography.RSACryptoServiceProvider] $cert.PublicKey.Key;
$signb.Length*8;
$rsacsp;
$rsacsp.Encrypt($signb,0);
I tried:
$rsacsp.Encrypt($signb,[System.Security.Cryptography.RSAEncryptionPadding]::Pkcs1);
instead of
$rsacsp.Encrypt($signb,0);
But I always get a bad length error:
Exception calling "Encrypt" with "2" argument(s): "Bad Length.
EDIT 3:
After reading, I can see my main issue was "On the other hand, signing roughly means generating a hash using SHA1 (so it is 160 bits long) and then, with or without a padding scheme, encrypting it with the private key. Is that right?".
RSA signing (with an n-bit key) can be viewed as an operation that takes an arbitrary byte array and outputs n bits. In order to do that, it uses a hash function like SHA1 that takes an arbitrary byte array and produces a fixed-size output (160 bits for SHA1). Now in theory I could "encrypt" that hash with the private key, but then the output would be 160 bits long too, and that is not the way RSA is implemented. RSA signing needs to apply a padding function after the hash in order to produce an n-bit block before "encrypting" it.
Another source of confusion is the meaning of the Encrypt method of the .NET RSACryptoServiceProvider. It turns out that this method has two parameters: a byte array and a flag indicating the padding function. It takes the byte array, applies the padding and then "encrypts" with the public key, so it is of no use in a signature scenario. The decrypt and encrypt operations in RSACryptoServiceProvider are not symmetrical: you can "decrypt" whatever has been "encrypted", but not the other way around.
In the end the confusion lies in the fact that the "atomic" functions used when encrypting/decrypting and the ones used when signing are the same, but they are used in incompatible ways.
AFAIK signing a byte array with RSA-SHA1 generates a byte array (signature) of the same length as the RSA key used. Is that right?
Usually yes. Since the signature is encoded as an octet stream (aka byte array), it is possible that the signature is actually up to 7 bits larger than the key size; the key size is normally a multiple of 8 (bits), so this doesn't come up much.
On the other hand, signing roughly means generating a hash using SHA1 (so it is 160 bits long) and then, with or without a padding scheme, encrypting it with the private key. Is that right?
No, you should never perform modular exponentiation in RSA without padding; a padding scheme is required for security. Note that you should not talk about encryption here. Encryption is used to provide confidentiality. That RSA signature generation and encryption both use modular exponentiation - although with different keys - doesn't mean one equates to the other.
It is important to note that the padding scheme for PKCS#1 v1.5 encryption is different from the one used for signature generation. Furthermore, there are also the newer OAEP padding scheme for encryption and the PSS padding scheme for signature generation, which are rather distinct.
Later on, in order to recover this hash (with or without the padding scheme on it) I would need to encrypt the signature with the public key. Is that right?
Perform modular exponentiation and then verify the result, yes. But as the verification requires verifying the padding in a secure way you should really let an API handle this.
Something is broken in my logic because I'm not able to encrypt the signature with the public key.
Try something written for verification instead, like the method VerifyHash as seen in this example.
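As a rough illustration of the API shape (not your exact scenario: for an XML signature such as the example file, the bytes that were actually signed are the canonicalized SignedInfo element, not the raw document), verification with RSACryptoServiceProvider looks something like this; the class and method names are just for illustration:
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

class VerifySketch
{
    static bool Check(X509Certificate2 cert, byte[] signedBytes, byte[] signature)
    {
        var rsa = (RSACryptoServiceProvider)cert.PublicKey.Key;

        // Verify the data directly...
        bool okData = rsa.VerifyData(signedBytes, "SHA1", signature);

        // ...or hash it yourself and verify the hash.
        byte[] hash;
        using (SHA1 sha1 = SHA1.Create())
            hash = sha1.ComputeHash(signedBytes);
        bool okHash = rsa.VerifyHash(hash, CryptoConfig.MapNameToOID("SHA1"), signature);

        return okData && okHash;
    }
}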
You can try and find a raw RSA implementation to find out what is within the RSA signature. You should only do this to analyze the signature.
So if you "encrypt" the data with the public key (i.e. just perform modular exponentiation) you would get:
0001ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff003021300906052b0e03021a05000414a2304127e2fe3b8a8203b219feafdd9b58558310
as the result. This is clearly PKCS#1 v1.5 padding for signature generation. It includes an encoded hash value:
SEQUENCE (2 elem)
  SEQUENCE (2 elem)
    OBJECT IDENTIFIER 1.3.14.3.2.26
    NULL
  OCTET STRING (20 byte) A2304127E2FE3B8A8203B219FEAFDD9B58558310
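A hedged sketch of how to produce such a dump yourself with System.Numerics.BigInteger: "encrypting" the signature with the public key is simply computing signature^e mod n. The class and helper names are made up for this example, and again, this is for analyzing a signature only, never for verifying one:
using System;
using System.Numerics;
using System.Security.Cryptography;

class RawRsaSketch
{
    static string RecoverPaddedHash(RSAParameters pub, byte[] signature)
    {
        BigInteger n = FromBigEndian(pub.Modulus);
        BigInteger e = FromBigEndian(pub.Exponent);
        BigInteger s = FromBigEndian(signature);

        BigInteger m = BigInteger.ModPow(s, e, n);   // "encrypt" with the public key

        // Re-encode as a big-endian block as long as the modulus, so the
        // leading 00 byte of the PKCS#1 v1.5 padding becomes visible again.
        byte[] little = m.ToByteArray();
        byte[] em = new byte[pub.Modulus.Length];
        for (int i = 0; i < little.Length && i < em.Length; i++)
            em[em.Length - 1 - i] = little[i];

        return BitConverter.ToString(em).Replace("-", "").ToLowerInvariant();
    }

    static BigInteger FromBigEndian(byte[] bytes)
    {
        // BigInteger expects little-endian; the extra zero byte keeps it positive.
        byte[] little = new byte[bytes.Length + 1];
        for (int i = 0; i < bytes.Length; i++)
            little[i] = bytes[bytes.Length - 1 - i];
        return new BigInteger(little);
    }

    // Usage, with the certificate and signature bytes from the PowerShell example:
    //   var rsa = (RSACryptoServiceProvider)cert.PublicKey.Key;
    //   Console.WriteLine(RecoverPaddedHash(rsa.ExportParameters(false), signatureBytes));
}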

Generating Random Salt and IV - Must be Stored in Encrypted Data?

I'm trying to create some classes that wrap and simplify the use of the .NET cryptography library classes.
But if I have the caller supply only a password, and I generate the salt and IV automatically, it seems like I then need to store the salt and IV along with my encrypted data. (This is because I am told I should never use duplicate IVs and my code wouldn't be very simple if the caller had to generate and track different salts and IVs.)
Ideally, my encrypted data would be as compact as possible. But this adds quite a few bytes to my encrypted results, especially if I'm only encrypting a small amount of data such as a single byte.
My questions are:
Is there any way to reduce the size of my encrypted data without making my classes considerably more work to use?
Can anyone see any other flaws to my thinking here?
You could just use a random salt and generate a key and IV from it. The best way to do this is first to generate a key seed using a Password Based Key Derivation Function or PBKDF. In the case of C#, PBKDF2 as implemented in Rfc2898DeriveBytes is probably the easiest choice. You'll need at least 8 bytes of salt and a high iteration count.
OK, so now you have a key seed and you need to generate a key from it. For this you need a Key Based Key Derivation Function or KBKDF. The most up-to-date one is HKDF (of which you'd only need the HKDF-expand step, to be precise). However, KBKDFs are pretty uncommon in crypto libraries, so it would take some time to implement one.
If you cannot find a good enough library then you may go for KDF1 as specified on this page. Just use the ASCII-encoded strings "key" and "IV" as OtherInfo and set the counter to four bytes of zero. The result is key = Hash(key seed | 00 00 00 00 | "key") and IV = Hash(key seed | 00 00 00 00 | "IV"), where | means concatenation. If you have too many bytes, just use the leftmost ones of the result. SHA-1, SHA-256 and SHA-512 will all do, but I would recommend using one that has enough output bits for your particular key - SHA-256 probably makes the most sense.
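A minimal sketch of that KDF1-style derivation, assuming SHA-256 and a key seed that already came out of Rfc2898DeriveBytes; the class and method names are just for illustration:
using System;
using System.Security.Cryptography;
using System.Text;

static class Kdf1Sketch
{
    // output = leftmost bytes of Hash(key seed | 00 00 00 00 | OtherInfo)
    static byte[] Derive(byte[] keySeed, string otherInfo, int outputBytes)
    {
        byte[] counter = new byte[4];                  // four zero bytes
        byte[] info = Encoding.ASCII.GetBytes(otherInfo);

        byte[] input = new byte[keySeed.Length + counter.Length + info.Length];
        Buffer.BlockCopy(keySeed, 0, input, 0, keySeed.Length);
        Buffer.BlockCopy(counter, 0, input, keySeed.Length, counter.Length);
        Buffer.BlockCopy(info, 0, input, keySeed.Length + counter.Length, info.Length);

        using (SHA256 sha = SHA256.Create())
        {
            byte[] digest = sha.ComputeHash(input);
            byte[] output = new byte[outputBytes];     // keep the leftmost bytes
            Buffer.BlockCopy(digest, 0, output, 0, outputBytes);
            return output;
        }
    }

    public static void DeriveKeyAndIv(byte[] keySeed, out byte[] key, out byte[] iv)
    {
        key = Derive(keySeed, "key", 32);   // 256-bit AES key
        iv  = Derive(keySeed, "IV", 16);    // 128-bit IV
    }
}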
Once you have a key and IV you can use any kind of encryption method. For minimum overhead you can use CTR-mode encryption. Beware that if a man-in-the-middle attack is possible (e.g. over any transport protocol) then you need an authentication tag as well.
So there you have it: just 8 bytes of overhead, unless you need the authentication tag as well. What you should not do is use a static key or IV; neither a static key nor a static IV is considered secure.

Format-preserving Encryption sample

I want to encrypt/decrypt digits into a string (containing only digits and/or uppercase characters) of the same length using Format-preserving Encryption, but I can't find implementation steps. Can anyone please provide a working sample for C# 2.0?
For example:
If I encrypt fixed-length plain text like 99991232 (with or without a fixed key) then the cipher text should be something like 23220978 or ED0FTS. If the length of the encrypted string is less than that of the plain text, that is also all right. But the cipher text must not be longer than the plain text, and it must be of fixed length.
From your question I assume that the plain text is numeric, while the cipher text may be alphanumeric. Because of this it is quite easy to make an encoding scheme: your format preservation is less stringent, and that can be taken advantage of (this won't work if your plain text is also alphanumeric).
First, find a power of 2 that is greater than the number of discrete values that you have, for example, in the numeric case you have 10 discrete values - so you would use 16 (2 ^ 4). Create a 'BaseX' encoding scheme for this (in this case Base16) and decode the plain text to binary using it.
Thus given the plain text:
1, 2, 3, 4
We encode it to:
0001-0010 0011-0100
You can then run this through your length-preserving cipher (one example of a length-preserving cipher is AES in counter mode). Say you get the following value back:
1001-1100 1011-1100
Encode this using your 'BaseX' encoder, and in our case we would get:
9, C, B, C
Which is the same length. I threw together a sample for you (a bit too large to paste here).
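Since that sample is not reproduced here, the following is a hedged sketch of the same idea for a digit-only plain text: pack each digit into a nibble (the Base16 step), XOR with an AES-CTR keystream as the length-preserving cipher, and write each resulting nibble back out as a hex character. The class name and counter-block handling are illustrative, not a standard API:
using System;
using System.Security.Cryptography;
using System.Text;

static class DigitCtrSketch
{
    // Build a CTR keystream by encrypting successive counter blocks with AES-ECB.
    static byte[] KeyStream(byte[] key, byte[] counterBlock, int length)
    {
        byte[] stream = new byte[length];
        using (Aes aes = Aes.Create())
        {
            aes.Key = key;
            aes.Mode = CipherMode.ECB;
            aes.Padding = PaddingMode.None;
            using (ICryptoTransform enc = aes.CreateEncryptor())
            {
                byte[] counter = (byte[])counterBlock.Clone();
                byte[] block = new byte[16];
                for (int off = 0; off < length; off += 16)
                {
                    enc.TransformBlock(counter, 0, 16, block, 0);
                    for (int i = 0; i < 16 && off + i < length; i++)
                        stream[off + i] = block[i];
                    for (int j = 15; j >= 0; j--)      // increment the counter block
                        if (++counter[j] != 0) break;
                }
            }
        }
        return stream;
    }

    // Digits in, hex characters out - same number of characters.
    public static string Encrypt(string digits, byte[] key, byte[] counterBlock)
    {
        byte[] data = new byte[(digits.Length + 1) / 2];   // one nibble per digit (Base16)
        for (int i = 0; i < digits.Length; i++)
        {
            int nibble = digits[i] - '0';
            data[i / 2] |= (byte)(i % 2 == 0 ? nibble << 4 : nibble);
        }

        byte[] ks = KeyStream(key, counterBlock, data.Length);
        for (int i = 0; i < data.Length; i++) data[i] ^= ks[i];

        StringBuilder sb = new StringBuilder(digits.Length);
        for (int i = 0; i < digits.Length; i++)
        {
            int nibble = i % 2 == 0 ? data[i / 2] >> 4 : data[i / 2] & 0x0F;
            sb.Append("0123456789ABCDEF"[nibble]);
        }
        return sb.ToString();
    }
}
Decryption is the same XOR with the same keystream; you just parse hex nibbles on the way in and write digits back out.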
As Henk said, "Format Preserving Encryption" is not defined. I can think of two possible answers:
Use AES and convert the cyphertext byte array to a hex string or to Base64.
Use a simple Vigenère cipher just replacing the characters you want to replace.
You need to specify your requirement more clearly.
ETA: You do not say how secure you need this to be. Standard Vigenère is not secure against any sort of serious attack, but it will keep out casual users. Vigenère can be made absolutely secure, but that requires as much truly random key material as there is plaintext to encipher, which is usually impractical.
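For completeness, a tiny sketch of the digit-alphabet Vigenère option (shift each plaintext digit by the matching key digit modulo 10); as said, this only keeps out casual users, and the class name is illustrative:
using System.Text;

static class DigitVigenere
{
    public static string Encrypt(string digits, string keyDigits)
    {
        var sb = new StringBuilder(digits.Length);
        for (int i = 0; i < digits.Length; i++)
        {
            int p = digits[i] - '0';
            int k = keyDigits[i % keyDigits.Length] - '0';
            sb.Append((char)('0' + (p + k) % 10));      // shift forward by the key digit
        }
        return sb.ToString();
    }

    public static string Decrypt(string digits, string keyDigits)
    {
        var sb = new StringBuilder(digits.Length);
        for (int i = 0; i < digits.Length; i++)
        {
            int c = digits[i] - '0';
            int k = keyDigits[i % keyDigits.Length] - '0';
            sb.Append((char)('0' + (c - k + 10) % 10)); // shift back by the key digit
        }
        return sb.ToString();
    }
}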

Can someone explain how BCrypt verifies a hash?

I'm using C# and BCrypt.Net to hash my passwords.
For example:
string salt = BCrypt.Net.BCrypt.GenerateSalt(6);
var hashedPassword = BCrypt.Net.BCrypt.HashPassword("password", salt);
//This evaluates to True. How? I'm not telling it the salt anywhere, nor
//is it a member of a BCrypt instance because there IS NO BCRYPT INSTANCE.
Console.WriteLine(BCrypt.Net.BCrypt.Verify("password", hashedPassword));
Console.WriteLine(hashedPassword);
How is BCrypt verifying the password against the hash if it's not saving the salt anywhere? The only idea I have is that it's somehow appending the salt at the end of the hash.
Is this a correct assumption?
A BCrypt hash string looks like:
$2a$10$Ro0CUfOqk6cXEKf3dyaM7OhSCvnwM9s4wIX9JeLapehKK5YdLxKcm
\__/\/ \____________________/\_____________________________/
 |  |         Salt                       Hash
 |  Cost
 Version
Where
2a: Algorithm Identifier (BCrypt, UTF8 encoded password, null terminated)
10: Cost Factor (2^10 = 1,024 rounds)
Ro0CUfOqk6cXEKf3dyaM7O: OpenBSD-Base64 encoded salt (22 characters, 16 bytes)
hSCvnwM9s4wIX9JeLapehKK5YdLxKcm: OpenBSD-Base64 encoded hash (31 characters, 24 bytes)
Edit: I just noticed these words fit exactly. I had to share:
$2a$10$TwentytwocharactersaltThirtyonecharacterspasswordhash
$==$==$======================-------------------------------
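If you want to pull those fields apart programmatically, a small sketch following the layout above (offsets assume the usual $2a$-style prefix):
string stored = "$2a$10$Ro0CUfOqk6cXEKf3dyaM7OhSCvnwM9s4wIX9JeLapehKK5YdLxKcm";

string[] parts = stored.Split('$');          // "", "2a", "10", salt + hash
string version = parts[1];                   // 2a
int cost = int.Parse(parts[2]);              // 10
string salt = parts[3].Substring(0, 22);     // 22 characters of salt
string hash = parts[3].Substring(22);        // remaining 31 characters of hash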
BCrypt does create a 24-byte binary hash, using 16-byte salt. You're free to store the binary hash and the salt however you like; nothing says you have to base-64 encode it into a string.
But BCrypt was created by guys who were working on OpenBSD. OpenBSD already defines a format for their password file:
$[HashAlgorithmIdentifier]$[AlgorithmSpecificData]
This means that the "bcrypt specification" is inextricably linked to the OpenBSD password file format. And whenever anyone creates a "bcrypt hash" they always convert it to an ISO-8859-1 string of the format:
$2a$[Cost]$[Base64Salt][Base64Hash]
A few important points:
2a is the algorithm identifier
1: MD5
2: early bcrypt, which had confusion over which encoding passwords are in (obsolete)
2a: current bcrypt, which specifies passwords as UTF-8 encoded
Cost is a cost factor used when computing the hash. The "current" value is 10, meaning the internal key setup goes through 1,024 rounds
10: 2^10 = 1,024 iterations
11: 2^11 = 2,048 iterations
12: 2^12 = 4,096 iterations
the base64 algorithm used by the OpenBSD password file is not the same Base64 encoding that everybody else uses; they have their own:
Regular Base64 Alphabet: ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/
BSD Base64 Alphabet: ./ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
So an implementation of bcrypt cannot use a built-in or standard Base64 library.
Armed with this knowledge, you can now verify a password correctbatteryhorsestapler against the saved hash:
$2a$12$mACnM5lzNigHMaf7O1py1O3vlf6.BA8k8x3IoJ.Tq3IB/2e7g61Km
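With BCrypt.Net (the library from the question) that check is a single call; Verify reads the version, cost and salt out of the stored string itself before re-hashing:
string stored = "$2a$12$mACnM5lzNigHMaf7O1py1O3vlf6.BA8k8x3IoJ.Tq3IB/2e7g61Km";
bool ok = BCrypt.Net.BCrypt.Verify("correctbatteryhorsestapler", stored);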
BCrypt variants
There is a lot of confusion around the bcrypt versions.
$2$
BCrypt was designed by the OpenBSD people. It was designed to hash passwords for storage in the OpenBSD password file. Hashed passwords are stored with a prefix to identify the algorithm used. BCrypt got the prefix $2$.
This was in contrast to the other algorithm prefixes:
$1$: MD5
$5$: SHA-256
$6$: SHA-512
$2a$
The original BCrypt specification did not define how to handle non-ASCII characters, or how to handle a null terminator. The specification was revised to specify that when hashing strings:
the string must be UTF-8 encoded
the null terminator must be included
$2x$, $2y$ (June 2011)
A bug was discovered in crypt_blowfish, a PHP implementation of BCrypt. It was mis-handling characters with the 8th bit set.
They suggested that system administrators update their existing password databases, replacing $2a$ with $2x$, to indicate that those hashes are bad (and need to use the old broken algorithm). They also suggested the idea of having crypt_blowfish emit $2y$ for hashes generated by the fixed algorithm. Nobody else, including canonical OpenBSD, adopted the idea of 2x/2y; this version marker was limited to crypt_blowfish.
The versions $2x$ and $2y$ are not "better" or "stronger" than $2a$. They are remnants of one particular buggy implementation of BCrypt.
$2b$ (February 2014)
A bug was discovered in the OpenBSD implementation of BCrypt. They wrote their implementation in a language that doesn't have built-in support for strings, so they were faking it with a length prefix, a pointer to a character, and indexing of that pointer with []. Unfortunately they were storing the length of their strings in an unsigned char. If a password was longer than 255 characters, it would overflow and wrap at 255. BCrypt was created for OpenBSD, and when they have a bug in their library they decide it's OK to bump the version. This means that everyone else needs to follow suit if you want to remain current with "their" specification.
http://undeadly.org/cgi?action=article&sid=20140224132743
http://marc.info/?l=openbsd-misc&m=139320023202696
If you were doing the right thing from the beginning (storing strings in UTF-8 and also hashing the null terminator) then there is no difference between 2, 2a, 2x, 2y, and 2b: a correct implementation outputs the same result for all of them.
The version $2b$ is not "better" or "stronger" than $2a$. It is a remnant of one particular buggy implementation of BCrypt. But since BCrypt canonically belongs to OpenBSD, they get to change the version marker to whatever they want.
The versions $2x$ and $2y$ are not better, or even preferable, to anything. They are remnants of a buggy implementation and should be summarily forgotten.
The only people who need to care about 2x and 2y are those who may have been using crypt_blowfish back in 2011. And the only people who need to care about 2b are those who may have been running OpenBSD.
All other correct implementations are identical and correct.
How is BCrypt verifying the password with the hash if it's not saving the salt anywhere?
Clearly it is not doing any such thing. The salt has to be saved somewhere.
Let's look up password encryption schemes on Wikipedia. From http://en.wikipedia.org/wiki/Crypt_(Unix) :
The output of the function is not merely the hash: it is a text string which also encodes the salt and identifies the hash algorithm used.
Alternatively, an answer to your previous question on this subject included a link to the source code. The relevant section of the source code is:
StringBuilder rs = new StringBuilder();
rs.Append("$2");
if (minor >= 'a') {
    rs.Append(minor);
}
rs.Append('$');
if (rounds < 10) {
    rs.Append('0');
}
rs.Append(rounds);
rs.Append('$');
rs.Append(EncodeBase64(saltBytes, saltBytes.Length));
rs.Append(EncodeBase64(hashed, (bf_crypt_ciphertext.Length * 4) - 1));
return rs.ToString();
Clearly the returned string is version information, followed by the number of rounds used, followed by the salt encoded as base64, followed by the hash encoded as base64.
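So verification is simply "re-hash and compare". A sketch of what BCrypt.Net's Verify effectively does under the hood (the 29-character prefix is $2a$, the cost, another $ and the 22-character salt); in real code just call Verify:
string stored = "$2a$10$Ro0CUfOqk6cXEKf3dyaM7OhSCvnwM9s4wIX9JeLapehKK5YdLxKcm";
string candidate = "password";

// Re-hash the candidate with the version, cost and salt taken from the stored value...
string recomputed = BCrypt.Net.BCrypt.HashPassword(candidate, stored.Substring(0, 29));

// ...and compare: equal strings means the password was correct.
bool matches = recomputed == stored;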

Generate a printable HMAC Shared key in .Net

I'm using HMACSHA512 to hash data using a shared key. Since the key is shared I'd like for it to be all printable characters for ease of transport. I'm wondering what the best approach is to generating these keys.
I'm currently using the GetBytes() method of RNGCryptoServiceProvider to generate a key, but the byte array it returns contains non-printable characters. So I'm wondering: is it secure to Base64 encode the result, or does that erode the randomness too much and make things much less secure? If that isn't a good approach, can you suggest one?
I do understand that by limiting the keys to printable characters I am limiting the overall breadth of the key space (ie: lopping off 1 of the 8 bits), but I am OK with that.
If you can handle not auto-generating the key then http://www.grc.com/passwords is a good source of VERY random key material.
Base64 wouldn't reduce the underlying entropy of the byte array. You could generate the key and use it in its raw form, but Base64 encode it to transport it to where you need it to be. You'd then Base64 decode it back to the raw form before you use it in the new location. There is no loss of entropy in this operation. The Base64 encoding reduces the entropy to 6 bits per byte instead of 8, but the result of the encoding is longer, so overall the entropy is the same.
The other way you could do it would be to get 24 random bytes, for 192 bits' worth of entropy. Base64 encoding this would give you a 32-character string (256 bits) which still carries the original 192 bits of entropy. You could use this as your shared key directly.
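A short sketch of that approach with the classes already mentioned; feeding HMACSHA512 the ASCII bytes of the Base64 string (rather than the raw bytes) is a choice, and both sides just need to agree on it:
using System;
using System.Security.Cryptography;
using System.Text;

class SharedKeySketch
{
    static void Main()
    {
        // 24 random bytes = 192 bits of entropy.
        byte[] raw = new byte[24];
        using (var rng = new RNGCryptoServiceProvider())
            rng.GetBytes(raw);

        // Base64: 24 bytes -> 32 printable characters, same entropy.
        string printable = Convert.ToBase64String(raw);

        // Use the printable form directly as the shared HMAC key.
        using (var hmac = new HMACSHA512(Encoding.ASCII.GetBytes(printable)))
        {
            byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes("message"));
        }
    }
}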
BASE64 transforms a byte sequence so it uses only certain printable characters.
This transformation does not change the information in any way, just how it is stored. It is also reversible: you can get the original byte sequence by decoding the BASE64 output.
So using BASE64 does not "erode the randomness" or limit the key space in any way.
