I have the following code in C# to encrypt and decrypt:
Crypto.BlockSize = 64;
Crypto.FeedbackSize = 8;
Crypto.Mode = CipherMode.ECB;
Crypto.Padding = PaddingMode.None;
Encryptor = Crypto.CreateEncryptor(Encoding.ASCII.GetBytes("12345678"), Encoding.UTF8.GetBytes("12345678"));
Decryptor = Crypto.CreateDecryptor(Encoding.ASCII.GetBytes("12345678"), Encoding.UTF8.GetBytes("12345678"));
I have coded another encrypter/decrypter in another language.
DES works on 8-byte blocks. When the text to encrypt is 8 bytes, or a multiple of 8, both produce exactly the same output.
The problem is when the size is 12. C# handles the 12 bytes as-is, since it apparently doesn't need any kind of padding, but in the other library I'm forced to use a multiple of 8 as input, so I have tried completing the input with 0x00 and with 0xFF, but then I get different results in the last bytes.
What is C# doing when there is no padding and the input is not a multiple of 8? Where is it getting the rest of the missing bytes from?
I have found an example that uses AES to encrypt text. The code is this:
public static string Encrypt(string PlainText, string Password,
    string Salt = "Kosher", string HashAlgorithm = "SHA1",
    int PasswordIterations = 2, string InitialVector = "OFRna73m*aze01xY",
    int KeySize = 256)
{
    if (string.IsNullOrEmpty(PlainText))
        return "";
    byte[] InitialVectorBytes = Encoding.ASCII.GetBytes(InitialVector);
    byte[] SaltValueBytes = Encoding.ASCII.GetBytes(Salt);
    byte[] PlainTextBytes = Encoding.UTF8.GetBytes(PlainText);
    PasswordDeriveBytes DerivedPassword = new PasswordDeriveBytes(Password, SaltValueBytes, HashAlgorithm, PasswordIterations);
    byte[] KeyBytes = DerivedPassword.GetBytes(KeySize / 8);
    RijndaelManaged SymmetricKey = new RijndaelManaged();
    SymmetricKey.Mode = CipherMode.CBC;
    byte[] CipherTextBytes = null;
    using (ICryptoTransform Encryptor = SymmetricKey.CreateEncryptor(KeyBytes, InitialVectorBytes))
    {
        using (MemoryStream MemStream = new MemoryStream())
        {
            using (CryptoStream CryptoStream = new CryptoStream(MemStream, Encryptor, CryptoStreamMode.Write))
            {
                CryptoStream.Write(PlainTextBytes, 0, PlainTextBytes.Length);
                CryptoStream.FlushFinalBlock();
                CipherTextBytes = MemStream.ToArray();
                MemStream.Close();
                CryptoStream.Close();
            }
        }
    }
    SymmetricKey.Clear();
    return Convert.ToBase64String(CipherTextBytes);
}
My question is: how is the key for the AES algorithm generated? These 2 lines:
PasswordDeriveBytes DerivedPassword = new PasswordDeriveBytes(Password, SaltValueBytes, HashAlgorithm, PasswordIterations);
byte[] KeyBytes = DerivedPassword.GetBytes(KeySize / 8);
First, it creates a derived key of 256 bytes, and later creates a key by getting pseudo-random bytes from this derived key. It has to be divided by 8 because the AES algorithm needs 128, 192 or 256 bits, not bytes. In this case, since the derived key is 256 bytes, the key for AES will be 256 bits.
But why does it do that? Wouldn't it be better to create the derived key with the needed length, not 256 bytes but 256 bits (256 bytes / 8)? Then it wouldn't be necessary to create a new key taking 1/8 of the bytes of the derived key.
Also, the getBytes() method says in its description that it returns pseudo-random key bytes. Doesn't that mean the AES key would be different each time? How can the AES key be generated again for decryption if it is made of pseudo-random key bytes?
Thanks.
First, it creates a derived key of 256 bytes
Where? I don't see any 256-byte key being created.
and later creates a key by getting pseudo-random bytes from this derived key. It has to be divided by 8 because the AES algorithm needs 128, 192 or 256 bits, not bytes
Yes, the function input KeySize (which should be keySize by normal C# naming conventions) is in bits, but GetBytes wants an input in bytes. x / 8 is one of the three right answers for that conversion ((x + 7) / 8 is another, and (x & 7) == 0 ? x / 8 : throw new ArgumentException(nameof(x)) is the third).
But why does it do that? Wouldn't it be better to create the derived key with the needed length, not 256 bytes but 256 bits (256 bytes / 8)? Then it wouldn't be necessary to create a new key taking 1/8 of the bytes of the derived key.
It would be fine to do that, but that is effectively what the code is already doing: GetBytes(KeySize / 8) asks for KeySize bits' worth of bytes (32 bytes for a 256-bit key), so there's no "better" to be had.
Also, the getBytes() method says in its description that it returns pseudo-random key bytes. Doesn't that mean the AES key would be different each time? How can the AES key be generated again for decryption if it is made of pseudo-random key bytes?
I have to make a pedantic point: There is no getBytes method. C# is a case-sensitive language, and the method name is GetBytes.
pseudorandom: noting or pertaining to random numbers generated by a definite computational process to satisfy a statistical test.
PasswordDeriveBytes is an implementation of PBKDF1 (except it continues beyond the limits of PBKDF1), which is a deterministic algorithm. Given the same inputs (password, seed, iteration count, pseudo-random function (hash algorithm)) the same output is produced. Change any of the inputs slightly, and the output is significantly different.
Rfc2898DeriveBytes (an implementation of PBKDF2) is also a deterministic, but chaotic, algorithm.
So you produce the same answer again in either of them (but not across them) by giving all the same inputs.
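To make that determinism concrete, here is a minimal sketch (the password, salt and iteration count are made-up values; Rfc2898DeriveBytes is shown, but the same point holds for PasswordDeriveBytes): deriving twice with identical inputs yields identical bytes, and changing any input changes them completely.
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

class DerivationDemo
{
    static byte[] Derive(string password, byte[] salt, int iterations)
    {
        // PBKDF2 is deterministic for a given (password, salt, iteration count, PRF).
        using (var kdf = new Rfc2898DeriveBytes(password, salt, iterations))
        {
            return kdf.GetBytes(256 / 8);   // 32 bytes = a 256-bit key
        }
    }

    static void Main()
    {
        byte[] salt = Encoding.ASCII.GetBytes("example salt");   // made-up salt, at least 8 bytes

        byte[] first = Derive("MyPassword", salt, 1000);
        byte[] second = Derive("MyPassword", salt, 1000);
        Console.WriteLine(first.SequenceEqual(second));   // True  - same inputs, same key

        byte[] third = Derive("MyPassword!", salt, 1000);
        Console.WriteLine(first.SequenceEqual(third));    // False - any change alters the whole output
    }
}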
When using password-based encryption (PKCS#5) the flow is
Pick a PRF
Pick an iteration count
Generate a random salt
Write down these choices
Apply these three things, plus the password, to generate a key
Encrypt the data
Write down the encrypted data
When decrypting
Read the PRF
Read the iteration count
Read the salt
Apply these three things, plus the password to generate a key
Read the encrypted data
Decrypt it
Party on
While this code is doing that part right, the IV and Salt should not be ASCII (or UTF8) strings, they should be "just bytes" (byte[]). If they need to be transported as strings then they should be base64, or some other "arbitrary" binary-to-text encoding.
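To tie that flow to concrete .NET types, here is a hedged sketch (the class and method names, iteration count, salt size and key size are my own illustrative choices, not a recommendation): the salt is random, the salt and IV are "written down" in front of the ciphertext, and decryption reads them back before deriving the key.
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

static class PasswordBasedEncryption
{
    const int Iterations = 10000;   // illustrative; pick and record your own count
    const int SaltSize = 16;
    const int KeySize = 256;

    // Output layout: [salt][IV][ciphertext], so the decryptor can read the choices back.
    public static byte[] Encrypt(string plainText, string password)
    {
        byte[] salt = new byte[SaltSize];
        using (var rng = new RNGCryptoServiceProvider())
            rng.GetBytes(salt);                        // random salt, stored with the data below

        using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
        using (var aes = Aes.Create())                 // AES/CBC/PKCS7 by default
        {
            aes.Key = kdf.GetBytes(KeySize / 8);
            aes.GenerateIV();

            using (var ms = new MemoryStream())
            {
                ms.Write(salt, 0, salt.Length);        // salt and IV travel with the ciphertext
                ms.Write(aes.IV, 0, aes.IV.Length);
                using (var enc = aes.CreateEncryptor())
                using (var cs = new CryptoStream(ms, enc, CryptoStreamMode.Write))
                {
                    byte[] plain = Encoding.UTF8.GetBytes(plainText);
                    cs.Write(plain, 0, plain.Length);
                    cs.FlushFinalBlock();
                }
                return ms.ToArray();
            }
        }
    }

    public static string Decrypt(byte[] blob, string password)
    {
        using (var aes = Aes.Create())
        {
            byte[] salt = new byte[SaltSize];
            byte[] iv = new byte[aes.BlockSize / 8];
            Array.Copy(blob, 0, salt, 0, salt.Length);          // read the salt back
            Array.Copy(blob, salt.Length, iv, 0, iv.Length);    // read the IV back

            using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
            {
                aes.Key = kdf.GetBytes(KeySize / 8);
                aes.IV = iv;
            }

            int offset = salt.Length + iv.Length;
            using (var dec = aes.CreateDecryptor())
            using (var ms = new MemoryStream(blob, offset, blob.Length - offset))
            using (var cs = new CryptoStream(ms, dec, CryptoStreamMode.Read))
            using (var reader = new StreamReader(cs, Encoding.UTF8))
            {
                return reader.ReadToEnd();
            }
        }
    }
}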
I am converting a classic ASP application to C#, and would like to be able to decrypt strings in C# that were originally encrypted in classic ASP. The classic ASP code is here, and the C# code is here. The problem that I am facing is that the signatures of the Encrypt and Decrypt methods in ASP vs. C# are different. Here is my ASP code for decrypting, which wraps the decrypt code.
Function AESDecrypt(sCypher)
    If sCypher <> "" Then
        Dim bytIn()
        Dim bytOut
        Dim bytPassword()
        Dim lCount
        Dim lLength
        Dim sTemp
        Dim sPassword

        sPassword = "My_Password"
        lLength = Len(sCypher)
        ReDim bytIn(lLength/2-1)
        For lCount = 0 To lLength/2-1
            bytIn(lCount) = CByte("&H" & Mid(sCypher,lCount*2+1,2))
        Next

        lLength = Len(sPassword)
        ReDim bytPassword(lLength-1)
        For lCount = 1 To lLength
            bytPassword(lCount-1) = CByte(AscB(Mid(sPassword,lCount,1)))
        Next

        bytOut = DecryptData(bytIn, bytPassword) ' this is the problem child

        lLength = UBound(bytOut) + 1
        sTemp = ""
        For lCount = 0 To lLength - 1
            sTemp = sTemp & Chr(bytOut(lCount))
        Next

        AESDecrypt = sTemp
    End If
End Function
However, in C# I am struggling to convert this function, because the C# equivalent of DecryptData has more parameters:
public static byte[] DecryptData(byte[] message, byte[] password,
byte[] initialisationVector, BlockSize blockSize,
KeySize keySize, EncryptionMode cryptMode)
{...}
What values can I use for initialisationVector, blockSize, keySize and cryptMode so as to be able to decrypt the same way the classic ASP code does?
Using Phil Fresle's C# Rijndael implementation, you can use the following code to successfully decrypt a value that was encrypted with Phil's ASP/VBScript version.
You can read my answer about encrypting here: Password encryption/decryption between classic asp and ASP.NET
public string DecryptData(string encryptedMessage, string password)
{
    if (encryptedMessage.Length % 2 == 1)
        throw new Exception("The binary key cannot have an odd number of digits");

    byte[] byteArr = new byte[encryptedMessage.Length / 2];
    for (int index = 0; index < byteArr.Length; index++)
    {
        string byteValue = encryptedMessage.Substring(index * 2, 2);
        byteArr[index] = byte.Parse(byteValue, NumberStyles.HexNumber, CultureInfo.InvariantCulture);
    }

    byte[] result = Rijndael.DecryptData(
        byteArr,
        Encoding.ASCII.GetBytes(password),
        new byte[] { },                     // Initialization vector
        Rijndael.BlockSize.Block256,        // Typically 128 in most implementations
        Rijndael.KeySize.Key256,
        Rijndael.EncryptionMode.ModeECB     // not ModeCBC; see the update below
    );
    return ASCIIEncoding.ASCII.GetString(result);
}
Most default implementations will use a key size of 128, 192, or 256 bits. A block size of 128 bits is standard. Although some implementations allow block sizes other than 128 bits, changing the block size just adds another item into the mix to cause confusion when trying to get data encrypted in one implementation to decrypt properly in another.
UPDATE
Turns out I was wrong about one piece here; the EncryptionMode should be set to EncryptionMode.ModeECB, not EncryptionMode.ModeCBC. ECB is less secure (https://crypto.stackexchange.com/questions/225/should-i-use-ecb-or-cbc-encryption-mode-for-my-block-cipher) because it doesn't chain blocks the way CBC does, but that is how it was implemented in the VB version of the encryption.
Interestingly enough, using CBC on an ECB-encrypted value WILL work for the first handful of bytes, up until a certain point (presumably related to the block size), at which point the remainder of the value is mangled. You can see this particularly clearly when encrypting a longish string in the VB version and decrypting it with the code I posted above with a mode of EncryptionMode.ModeCBC.
I know very little about encryption, but my goal is essentially to decrypt strings. I have been given the AES(128) key.
However, I must retrieve the IV from the encrypted string; it is stored as the first 16 bytes (128 bits).
Here's the Salesforce doc for more information (in case what I explained was incorrect):
Encrypts the blob clearText using the specified algorithm and private key. Use this method when you want Salesforce to generate the initialization vector for you. It is stored as the first 128 bits (16 bytes) of the encrypted blob.
http://www.salesforce.com/us/developer/docs/apexcode/Content/apex_classes_restful_crypto.htm (encryptWithManagedIV)
For retrieving the IV I've tried something like this (I don't believe it's right, though):
public string retrieveIv()
{
    string iv = "";
    string input = "bwZ6nKpBEsuAKM8lDTYH1Yl69KkHN1i3XehALbfgUqY=";
    byte[] bytesToEncode = Encoding.UTF8.GetBytes(input);
    for (int i = 0; i <= 15; i++)
    {
        iv += bytesToEncode[i].ToString();
    }
    return iv;
}
(Just ignore the fact that the input is hardcoded and not parameterized; easier for testing purposes)
Then use the best answer from this question to decrypt the string.
The IV shouldn't be expressed as a string - it should be a byte array, as per the AesManaged.IV property.
Also, using Encoding.UTF8 is almost certainly wrong. I suspect you want:
public static byte[] RetrieveIv(string encryptedBase64)
{
    // We don't need to base64-decode everything... just 16 bytes' worth
    encryptedBase64 = encryptedBase64.Substring(0, 24);
    // This will be 18 bytes long (4 characters per 3 bytes)
    byte[] encryptedBinary = Convert.FromBase64String(encryptedBase64);
    byte[] iv = new byte[16];
    Array.Copy(encryptedBinary, 0, iv, 0, 16);
    return iv;
}
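For completeness, a hedged sketch of how the extracted IV might then be used (the class and method names are mine; it assumes the rest of the blob is AES/CBC ciphertext with PKCS7-style padding and UTF-8 plaintext, and that key is the 128-bit key you were given - check the Salesforce docs if those assumptions are wrong):
using System;
using System.Security.Cryptography;
using System.Text;

public static class ManagedIvDecryption
{
    public static string Decrypt(string encryptedBase64, byte[] key)
    {
        byte[] blob = Convert.FromBase64String(encryptedBase64);

        using (var aes = Aes.Create())
        {
            aes.Mode = CipherMode.CBC;          // assumption: AES/CBC with PKCS7 padding
            aes.Padding = PaddingMode.PKCS7;
            aes.Key = key;                      // the 128-bit key you were given

            byte[] iv = new byte[16];
            Array.Copy(blob, 0, iv, 0, 16);     // IV = first 16 bytes of the blob
            aes.IV = iv;

            using (var decryptor = aes.CreateDecryptor())
            {
                // The ciphertext is everything after the IV.
                byte[] plain = decryptor.TransformFinalBlock(blob, 16, blob.Length - 16);
                return Encoding.UTF8.GetString(plain);
            }
        }
    }
}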
I'm working on software which encrypts/decrypts files.
I would like to be able to guess the length of the data after encryption, but I can't use CryptoStream.Length (it throws a NotSupportedException).
Is there any way to guess it?
I'm using RijndaelManaged (.Net Framework 4.0)
This says it much better than I can
http://www.obviex.com/Articles/CiphertextSize.aspx
From there:
In the most generic case, the size of the ciphertext can be calculated as:
CipherText = PlainText + Block - (PlainText MOD Block)
where CipherText, PlainText, and Block indicate the sizes of the ciphertext, plaintext, and encryption block respectively. Basically, the resulting ciphertext size is computed as the size of the plaintext extended to the next block. If padding is used and the size of the plaintext is an exact multiple of the block size, one extra block containing padding information will be added.
Let's say that you want to encrypt a nine-digit Social Security Number (SSN) using the Rijndael encryption algorithm with the 128-bit (16-byte) block size and PKCS #7 padding. (For the purpose of the illustration, assume that dashes are removed from the SSN value before the encryption, so that "123-45-6789" becomes "123456789", and the value is treated as a string, not as a number.) If the digits in the SSN are defined as ASCII characters, the size of the ciphertext can be calculated as:
CipherText = 9 + 16 - (9 MOD 16) = 9 + 16 - 9 = 16 (bytes)
Notice that if the size of the plaintext value is the exact multiple of the block size, an extra block containing padding information will be appended to the ciphertext. For example, if you are to encrypt a 16-digit credit card number (defined as a 16-character ASCII string), the size of the ciphertext will be:
CipherText = 16 + 16 - (16 MOD 16) = 16 + 16 - 0 = 32 (bytes)
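That formula is easy to sanity-check in code; a minimal sketch (PKCS#7-style padding assumed, names are mine):
using System;

static class CiphertextSizeDemo
{
    // CipherText = PlainText + Block - (PlainText MOD Block), as above.
    // With PKCS#7-style padding, an exact multiple gains one full extra block.
    static int CiphertextSize(int plainTextBytes, int blockSizeBytes)
    {
        return plainTextBytes + blockSizeBytes - (plainTextBytes % blockSizeBytes);
    }

    static void Main()
    {
        Console.WriteLine(CiphertextSize(9, 16));    // 16 -> the SSN example above
        Console.WriteLine(CiphertextSize(16, 16));   // 32 -> the credit-card example above
    }
}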
That depends on the cipher you use. Usually the length is the same as the length of the original stream; the worst case is that it gets padded up to a multiple of the cipher's block length.
This is my code with RijndaelManaged:
MemoryStream textBytes = new MemoryStream();
string password = @"myKey123"; // Your Key Here
UnicodeEncoding UE = new UnicodeEncoding();
byte[] key = UE.GetBytes(password);

FileStream fsInput = new FileStream(@"C:\myEncryptFile.txt", FileMode.Open);

RijndaelManaged RMCrypto = new RijndaelManaged();
CryptoStream cs = new CryptoStream(fsInput, RMCrypto.CreateDecryptor(key, key),
    CryptoStreamMode.Read);
cs.CopyTo(textBytes);
cs.Close();
fsInput.Close();

string myDecriptText = Encoding.UTF8.GetString(textBytes.ToArray());
You can use this function to get both length and data.
public static int GetLength(CryptoStream cs, out byte[] data)
{
    var bytes = new List<byte>();
    int b;
    while ((b = cs.ReadByte()) != -1)
        bytes.Add((byte)b);
    data = bytes.ToArray();
    return data.Length;
}
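For example, it could replace the CopyTo call in the snippet above (a usage sketch only; cs and myDecriptText are the variables from that code):
byte[] decrypted;
int length = GetLength(cs, out decrypted);                  // plaintext length in bytes
string myDecriptText = Encoding.UTF8.GetString(decrypted);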
I get the following error when I try to create an IV (initialization vector) for a TripleDES encryptor.
Please see the code example:
TripleDESCryptoServiceProvider tripDES = new TripleDESCryptoServiceProvider();
byte[] key = Encoding.ASCII.GetBytes("SomeKey132123ABC");
byte[] v4 = key;
byte[] connectionString = Encoding.ASCII.GetBytes("SomeConnectionStringValue");
byte[] encryptedConnectionString = Encoding.ASCII.GetBytes("");
// Read the key and convert it to byte stream
tripDES.Key = key;
tripDES.IV = v4;
This is the exception that I get from Visual Studio:
Specified initialization vector (IV) does not match the block size for this algorithm.
Where am I going wrong?
Thank you
MSDN explicitly states that:
...The size of the IV property must be the same as the BlockSize property.
For Triple DES it is 64 bits.
The size of the initialization vector must match the block size - 64 bits in the case of TripleDES. Your initialization vector is much longer than eight bytes.
Further, you should really use a key derivation function like PBKDF2 to create strong keys and initialization vectors from password phrases.
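A hedged sketch of that suggestion for TripleDES (the salt, iteration count and class name are made-up; in practice the salt should be random and stored alongside the ciphertext):
using System.Security.Cryptography;
using System.Text;

static class TripleDesKeyDerivation
{
    public static TripleDESCryptoServiceProvider Create(string passPhrase)
    {
        byte[] salt = Encoding.ASCII.GetBytes("example salt");  // illustrative; use a random, stored salt

        var tripDES = new TripleDESCryptoServiceProvider();
        using (var kdf = new Rfc2898DeriveBytes(passPhrase, salt, 10000))
        {
            tripDES.Key = kdf.GetBytes(24);                     // 192-bit 3DES key
            tripDES.IV = kdf.GetBytes(tripDES.BlockSize / 8);   // 8 bytes for the 64-bit block
        }
        return tripDES;
    }
}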
Key should be 24 bytes and IV should be 8 bytes.
tripDES.Key = Encoding.ASCII.GetBytes("123456789012345678901234");
tripDES.IV = Encoding.ASCII.GetBytes("12345678");
The IV must be the same length (in bits) as tripDES.BlockSize. This will be 8 bytes (64 bits) for TripleDES.
I've upvoted every answer (well, the ones that were here before mine!) as they're all correct.
However, there's a bigger mistake you're making (one which I also made very early on) - DO NOT USE A STRING TO SEED THE IV OR KEY!!!
A compile-time string literal is a Unicode string and, besides the fact that you will not get either a random or a wide-enough spread of byte values (even a random string contains lots of repeating bytes, due to the narrow byte range of printable characters), it's very easy to end up with a character that actually requires 2 bytes instead of 1 - try using 8 of the more exotic characters on the keyboard and you'll see what I mean: when converted to bytes you can end up with more than 8 bytes.
Okay - so you're using ASCII encoding here - but that doesn't solve the non-randomness problem.
Instead you should use RNGCryptoServiceProvider to initialise your IV and Key and, if you need to capture a constant value for future use, you should still use that class - but capture the result as a hex string or a Base64-encoded value (I prefer hex, though).
To achieve this simply, I've written a macro that I use in VS (bound to the keyboard shortcut CTRL+SHIFT+G, CTRL+SHIFT+H) which uses the .NET PRNG to produce a hex string:
Public Sub GenerateHexKey()
    Dim result As String = InputBox("How many bits?", "Key Generator", 128)
    Dim len As Int32 = 128

    If String.IsNullOrEmpty(result) Then Return
    If System.Int32.TryParse(result, len) = False Then
        Return
    End If

    Dim oldCursor As Cursor = Cursor.Current
    Cursor.Current = Cursors.WaitCursor

    Dim buff((len / 8) - 1) As Byte
    Dim rng As New System.Security.Cryptography.RNGCryptoServiceProvider()
    rng.GetBytes(buff)

    Dim sb As New StringBuilder(CType((len / 8) * 2, Integer))
    For Each b In buff
        sb.AppendFormat("{0:X2}", b)
    Next

    Dim selection As EnvDTE.TextSelection = DTE.ActiveDocument.Selection
    Dim editPoint As EnvDTE.EditPoint
    selection.Insert(sb.ToString())

    Cursor.Current = oldCursor
End Sub
Now all you need to do is to turn your hex string literal into a byte array - I do this with a helpful extension method:
public static byte[] FromHexString(this string str)
{
    // null check a good idea
    int NumberChars = str.Length;
    byte[] bytes = new byte[NumberChars / 2];
    for (int i = 0; i < NumberChars; i += 2)
        bytes[i / 2] = Convert.ToByte(str.Substring(i, 2), 16);
    return bytes;
}
There are probably better ways of doing that bit - but it works for me.
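If you'd rather not use a VB macro, here is a plain C# sketch of the same idea (class and method names are mine): let RNGCryptoServiceProvider produce the bytes and format them as hex.
using System.Security.Cryptography;
using System.Text;

static class HexKeyGenerator
{
    public static string GenerateHexKey(int bits)
    {
        byte[] buff = new byte[bits / 8];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(buff);                  // cryptographically strong random bytes
        }

        var sb = new StringBuilder(buff.Length * 2);
        foreach (byte b in buff)
        {
            sb.AppendFormat("{0:X2}", b);        // two hex digits per byte
        }
        return sb.ToString();
    }
}

// e.g. string keyHex = HexKeyGenerator.GenerateHexKey(128);
//      byte[] key = keyHex.FromHexString();    // using the extension method above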
I do it like this:
var derivedForIv = new Rfc2898DeriveBytes(passwordBytes, _saltBytes, 3);
_encryptionAlgorithm.IV = derivedForIv.GetBytes(_encryptionAlgorithm.LegalBlockSizes[0].MaxSize / 8);
The IV gets its bytes from the derive-bytes 'smusher', using the block size as described by the algorithm itself via the LegalBlockSizes property.
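For context, a more self-contained sketch of that approach (names are illustrative; it sizes the IV from the algorithm's BlockSize, which amounts to the same thing for AES):
using System.Security.Cryptography;

static class KdfKeyAndIv
{
    // Derive both the key and the IV for a SymmetricAlgorithm from a password
    // and salt, sizing each from the algorithm's own properties.
    public static SymmetricAlgorithm CreateAes(string password, byte[] saltBytes)
    {
        SymmetricAlgorithm alg = Aes.Create();

        using (var kdf = new Rfc2898DeriveBytes(password, saltBytes, 3))
        {
            alg.Key = kdf.GetBytes(alg.KeySize / 8);    // e.g. 32 bytes for a 256-bit key
            alg.IV = kdf.GetBytes(alg.BlockSize / 8);   // 16 bytes for AES's 128-bit block
        }
        return alg;
    }
}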