I am using the System.Security.Cryptography.RijndaelManaged class in C# (.NET 3.5) to do encryption with these settings:
RijndaelManaged AesCrypto = new RijndaelManaged();
AesCrypto.BlockSize = 128;
AesCrypto.Mode = CipherMode.CBC;
CryptoStream CryptStream = new CryptoStream(memStream1,
    AesCrypto.CreateEncryptor(EncryptionKey1, EncryptionIV1),
    CryptoStreamMode.Write);
And with a 256-bit key and IV. I believe that results in AES-256. Am I right?
Would there be any differences if I am using System.Security.Cryptography.AesManaged class?
Also, we are TRUSTING Microsoft's implementation of AES here. Can this be verified, or should one write their own implementation of AES?
About the differences between AesManaged and RijndaelManaged:
The AES algorithm is essentially the Rijndael symmetric algorithm with a fixed block size and iteration count. This class functions the same way as the RijndaelManaged class but limits blocks to 128 bits and does not allow feedback modes.
Taken from MSDN; here is the source: http://msdn.microsoft.com/en-us/library/system.security.cryptography.aesmanaged.aspx
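If you want to convince yourself of that equivalence, here is a minimal sketch (the key, IV, and plaintext values are made up purely for illustration) that encrypts the same data with both classes and compares the output:

using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

class AesVsRijndaelCheck
{
    static byte[] EncryptCbc(SymmetricAlgorithm alg, byte[] key, byte[] iv, byte[] plain)
    {
        alg.Mode = CipherMode.CBC;
        alg.Padding = PaddingMode.PKCS7;
        using (var enc = alg.CreateEncryptor(key, iv))
            return enc.TransformFinalBlock(plain, 0, plain.Length);
    }

    static void Main()
    {
        byte[] key = new byte[32]; // illustrative 256-bit key (all zeros)
        byte[] iv = new byte[16];  // illustrative 128-bit IV
        byte[] plain = Encoding.UTF8.GetBytes("some sample plaintext");

        byte[] a = EncryptCbc(new AesManaged(), key, iv, plain);
        byte[] b = EncryptCbc(new RijndaelManaged { BlockSize = 128 }, key, iv, plain);

        // With the block size fixed at 128 bits, both should produce identical ciphertext.
        Console.WriteLine(a.SequenceEqual(b) ? "identical" : "different");
    }
}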
I've run into a confusing use of AES with Rfc2898DeriveBytes. Here's the code that I've found....
public static string Decrypt(string encryptionKey, string cipherValue)
{
    byte[] cipherBytes = Convert.FromBase64String(cipherValue);
    using (var encryptor = Aes.Create())
    {
        // Derive the key material from the passphrase and a fixed salt.
        var pdb = new Rfc2898DeriveBytes(encryptionKey, new byte[] { /* 13 element byte array */ });
        if (encryptor != null)
        {
            encryptor.Key = pdb.GetBytes(32); // 256-bit key
            encryptor.IV = pdb.GetBytes(16);  // 128-bit IV
            using (var ms = new MemoryStream())
            {
                using (var cs = new CryptoStream(ms, encryptor.CreateDecryptor(), CryptoStreamMode.Write))
                {
                    cs.Write(cipherBytes, 0, cipherBytes.Length);
                    cs.Close();
                }
                cipherValue = Encoding.Unicode.GetString(ms.ToArray());
            }
        }
    }
    return cipherValue;
}
So, "cipherValue" is an encrypted string...as well as "encryptionKey". The other examples of how to use AES and Rfc2898Derive bytes don't seem to fit this code. The other examples I've seen have something very plain-text in place of the "encryptionKey" parameter up above, but those examples are usually demonstrating encryption rather than decryption.
This code is being used to decrypt a password in the config file of my application. The encryption has already been done and I have no resources available to me to tell me how it was accomplished. I'm assuming that the password was encrypted using the indicated "encryptionKey" and the salt value, along with the default 1000 iterations and the max size Key and IV.
I'm curious mostly about how the "encryptionKey" parameter figures into things. The "cipherValue" is what's being decrypted and is giving me the right output. What methodology was at work here, and what advantages, if any, does this have over the other examples I've seen?
Encryption and security aren't my strong suits yet...let me know if I've left out anything important that might shed more light on this. Thanks in advance!
Rfc2898DeriveBytes is a poorly named implementation of PBKDF2, which is defined in RFC 2898. (Part of why it's poorly named is that RFC 2898 also describes the PBKDF1 algorithm, which is what PasswordDeriveBytes uses.)
You can read the full algorithm in the linked RFC section, but essentially it uses the password as an HMAC key, takes the HMAC of the salt plus some state data, then takes the HMAC of that, and of that, repeating until it has computed the requested number of iterations.
The purpose is to take an input password (low entropy) and predictably turn it into a cryptographic key (with high entropy) in a manner that makes it hard to figure out what the original password is.
As long as all the inputs are the same, it produces the same answer. But changing any input just a little makes a wildly different answer.
If the other approaches you've seen turn the password into a key by just using Encoding.UTF8.GetBytes() (or similar), then yes: this is a better approach (it's harder to break, and it doesn't care how long your password is).
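As a rough sketch of what that means for the posted Decrypt method (the salt and password here are placeholders, not your real values), the same password and salt always reproduce the same key and IV:

using System;
using System.Security.Cryptography;

class Pbkdf2Demo
{
    static void Main()
    {
        // Placeholder values; substitute the real 13-byte salt and passphrase.
        byte[] salt = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 };
        string password = "not-the-real-password";

        // 1000 iterations by default; the same inputs always yield the same byte stream,
        // so the decryptor can re-derive exactly the key and IV the encryptor used.
        var pdb = new Rfc2898DeriveBytes(password, salt);
        byte[] key = pdb.GetBytes(32); // first 32 bytes of output -> AES-256 key
        byte[] iv = pdb.GetBytes(16);  // next 16 bytes of output  -> IV

        Console.WriteLine(BitConverter.ToString(key));
        Console.WriteLine(BitConverter.ToString(iv));
    }
}

Note that the GetBytes calls are stateful: the key must be requested before the IV, in the same order as the code that did the encryption.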
We are trying to integrate with a legacy C# application that uses RijndaelManaged for symmetric encryption. However, it appears that they have used a 13-byte string as the encryption key!
The code is basically:
var initVectorBytes = Encoding.ASCII.GetBytes("16-char string");
var keyBytes = Encoding.ASCII.GetBytes("13-char string");
var symmetricKey = new RijndaelManaged { Mode = CipherMode.CBC };
var decryptor = symmetricKey.CreateDecryptor(keyBytes, initVectorBytes);
var memoryStream = new System.IO.MemoryStream(encryptedbytes);
var cryptoStream = new CryptoStream(memoryStream, decryptor, CryptoStreamMode.Read);
....
In theory this shouldn't work - the docs clearly say "The key size must be 128, 192, or 256 bits", and when we try this (on a Xamarin/Mono compiler - we don't have easy access to .NET at the moment) it throws an exception.
But it apparently works on the legacy system, and they have unit tests that also call CreateDecryptor with a 13-byte key, so presumably a real .NET system does somehow do something with this code. (I note that the docs for .NET 2.0 don't mention key length restrictions; the code is compiled against .NET 3.5, however.)
Is it possible that it uses the Rijndael algorithm with a 104-bit key and block size? Or would it somehow pad the key or something?
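One thing we could try, just to see what a given runtime will accept, is to ask the algorithm itself (this uses only the standard SymmetricAlgorithm members, nothing from the legacy code):

using System;
using System.Security.Cryptography;

class KeySizeCheck
{
    static void Main()
    {
        using (var rijndael = new RijndaelManaged())
        {
            // Print the key sizes this implementation reports as legal.
            foreach (KeySizes ks in rijndael.LegalKeySizes)
                Console.WriteLine("Min {0}, Max {1}, Skip {2}", ks.MinSize, ks.MaxSize, ks.SkipSize);

            // 13 bytes = 104 bits; ValidKeySize says whether CreateDecryptor should accept it.
            Console.WriteLine("104-bit key valid: " + rijndael.ValidKeySize(104));
        }
    }
}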
I'm using AES on WP8 (Windows Phone 8) in C# in Visual Studio, and System.Security.Cryptography does not expose a 'Mode' property on AesManaged.
I've looked up this problem for the past 3 days now, and haven't found any reference or anything to import.
The code I am currently using is:
AesManaged cipher = new AesManaged();
cipher.BlockSize = 8;
/*cipher.Mode = CipherMode.CFB;
cipher.Padding = PaddingMode.None;*/
//cipher.KeySize = 128;
//cipher.FeedbackSize = 8;
cipher.Key = key;
cipher.IV = key;
return cipher;
Setting BlockSize throws the exception 'Specified block size is not valid for this algorithm.'
I was originally using RijndaelManaged, but that isn't available on WP8, although according to this it should be.
The Silverlight version of AES has no Mode property. Here is the MSDN article about that:
"The AES algorithm is essentially the Rijndael symmetric algorithm with a fixed block size and iteration count. This class functions the same way as the .NET Framework RijndaelManaged class but limits blocks to 128 bits and does not allow feedback modes.
The cipher mode is always CBC, and the padding mode is always PKCS7."
You can extract AES from the BouncyCastle library if you need more modes and flexibility. I have done that before.
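If you do go the BouncyCastle route, here is a rough sketch of AES in CFB mode with 8-bit feedback, which seems to be what the commented-out code above is aiming for (the key/IV handling is illustrative only, and reusing the key as the IV is not recommended):

using Org.BouncyCastle.Crypto;
using Org.BouncyCastle.Crypto.Engines;
using Org.BouncyCastle.Crypto.Modes;
using Org.BouncyCastle.Crypto.Parameters;

static class BcAesCfb
{
    // Encrypts (forEncryption = true) or decrypts (false) with AES-CFB8, no padding.
    public static byte[] Process(byte[] key, byte[] iv, byte[] input, bool forEncryption)
    {
        var cipher = new BufferedBlockCipher(new CfbBlockCipher(new AesEngine(), 8));
        cipher.Init(forEncryption, new ParametersWithIV(new KeyParameter(key), iv));

        byte[] output = new byte[cipher.GetOutputSize(input.Length)];
        int len = cipher.ProcessBytes(input, 0, input.Length, output, 0);
        cipher.DoFinal(output, len);
        return output;
    }
}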
So this is how I am doing encryption right now:
public static byte[] Encrypt(byte[] Data, string Password, string Salt)
{
    // Convert the salt string to bytes, one byte per character.
    char[] converter = Salt.ToCharArray();
    byte[] salt = new byte[converter.Length];
    for (int i = 0; i < converter.Length; i++)
    {
        salt[i] = (byte)converter[i];
    }

    // Derive the key and IV from the password and salt.
    PasswordDeriveBytes pdb = new PasswordDeriveBytes(Password, salt);

    MemoryStream ms = new MemoryStream();
    Aes aes = new AesManaged();
    aes.Key = pdb.GetBytes(aes.KeySize / 8);
    aes.IV = pdb.GetBytes(aes.BlockSize / 8);

    // Encrypt the data into the memory stream.
    CryptoStream cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write);
    cs.Write(Data, 0, Data.Length);
    cs.Close();
    return ms.ToArray();
}
I am using this method on data streaming over a network. The problem is that it is a bit slow for what I am trying to do, so I was wondering if anyone has a better way of doing it. I am no expert on encryption; this method was pieced together from different sources, and I am not entirely sure how it works.
I have clocked it at about 0.5-1.5 ms per call and I need to get it down to about 0.1 ms. Any ideas?
I'm pretty sure that performance is the least of your problems here.
Is the salt re-used for each packet? If so, you're using a strong cypher in a weak fashion. You're starting each packet with the cypher in exactly the same state. This is a security flaw. Someone skilled in cryptography would be able to crack your encryption after only a couple thousand packets.
I'm assuming you're sending a stream of packets to the same receiver. In that case, your use of AES will be much stronger if you keep the Aes object around and re-use it. That will make your use of the cypher much, much stronger and speed things up greatly.
As to the performance question, most of your time is being spent initializing the cypher. If you don't re-initialize it every time, you'll speed up quite a lot.
aes.KeySize>>3 would be faster than aes.KeySize / 8.
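A rough sketch of the "derive once, reuse" idea (the class and names here are mine, not from the question): the expensive PasswordDeriveBytes call happens once, and each packet gets a fresh random IV that is prepended to the output so the receiver can use it.

using System;
using System.IO;
using System.Security.Cryptography;

public class PacketEncryptor : IDisposable
{
    private readonly Aes _aes;

    public PacketEncryptor(string password, byte[] salt)
    {
        // Derive the key once; this is the slow part of the original Encrypt method.
        var pdb = new PasswordDeriveBytes(password, salt);
        _aes = new AesManaged();
        _aes.Key = pdb.GetBytes(_aes.KeySize / 8);
    }

    public byte[] Encrypt(byte[] data)
    {
        _aes.GenerateIV(); // new random IV per packet, so no two packets start in the same state
        using (var ms = new MemoryStream())
        {
            ms.Write(_aes.IV, 0, _aes.IV.Length);
            using (var cs = new CryptoStream(ms, _aes.CreateEncryptor(), CryptoStreamMode.Write))
            {
                cs.Write(data, 0, data.Length);
            }
            return ms.ToArray();
        }
    }

    public void Dispose()
    {
        _aes.Clear();
    }
}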
Has anyone been able to use the SSCrypto Framework for Cocoa to encrypt text and then decrypt it in C#/.NET ? Or can someone offer some guidance?
I'm pretty sure my issue has to do with getting the crypto settings correct, but I am far from fluent in Cocoa, so I can't really tell what settings the library is using. However, from my attempt at deciphering it, it seems to use MD5 hashing, CBC mode, and zero padding, and I have no idea whether the IV is set or not...
My C# code looks like this:
public static string Decrypt( string toDecrypt, string key, bool useHashing )
{
    byte[] keyArray;
    byte[] toEncryptArray = Convert.FromBase64String( toDecrypt );

    if( useHashing )
    {
        // Hash the key with MD5 to get a 16-byte key.
        MD5CryptoServiceProvider hashmd5 = new MD5CryptoServiceProvider();
        keyArray = hashmd5.ComputeHash( UTF8Encoding.UTF8.GetBytes( key ) );
        hashmd5.Clear();
    }
    else
    {
        keyArray = UTF8Encoding.UTF8.GetBytes( key );
    }

    TripleDESCryptoServiceProvider tdes = new TripleDESCryptoServiceProvider();
    tdes.Key = keyArray;
    tdes.Mode = CipherMode.CBC;
    tdes.Padding = PaddingMode.Zeros;
    // Note: no IV is set here, so a randomly generated one is used.

    ICryptoTransform cTransform = tdes.CreateDecryptor();
    byte[] resultArray = cTransform.TransformFinalBlock( toEncryptArray, 0, toEncryptArray.Length );
    tdes.Clear();
    return UTF8Encoding.UTF8.GetString( resultArray );
}
When I run encryption on the Cocoa side I get the encrypted text:
UMldOZh8sBnHAbfN6E/9KfS1VyWAa7RN
but that won't decrypt on the C# side with the same key.
Any help is appreciated, thanks.
You could use OpenSSL directly in C# with the OpenSSL.NET wrapper!
A couple of things to watch out for:
1- Make sure that you're interpreting the key and data strings correctly. For example, is the key encoded in ASCII instead of UTF-8? Is it perhaps represented in binhex format instead?
2- You're not initializing the IV (Initialization Vector) before decrypting. It needs to match the IV you're using to encrypt on the Cocoa side.
IIRC, OpenSSL uses what MS calls PKCS7 padding (though OpenSSL refers to it as PKCS5, and I'm not enough of a standards wonk to care why).
One of the classic issues in moving data back and forth from Mac to PC is byte ordering. You didn't say what the execution platform is for the Cocoa code, but that's something to look out for, especially if it's a PowerPC Mac.
It could be something to do with endianness.
Try calling Array.Reverse before decryption (note that Array.Reverse reverses the array in place and returns void):
Array.Reverse( toEncryptArray );
byte[] resultArray = cTransform.TransformFinalBlock( toEncryptArray, 0, toEncryptArray.Length );
You should really post the Cocoa code, too, to give us a chance to find your problem.
But there are some hints hidden in what you have posted:
Decrypting PyPqLI/d18Q= (base64) with the key and IV gives "97737D09E48B0202" (hex). This looks like the plaintext "97737D09E48B" with PKCS7 padding. So I would start by changing the .NET code to use PaddingMode.PKCS7 and look closely at where you pass the plaintext to the Cocoa code.
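For reference, a minimal sketch of that change on the .NET side (the structure mirrors the posted Decrypt method; the actual key and IV bytes are things you would still need to confirm against the Cocoa code):

using System;
using System.Security.Cryptography;
using System.Text;

static class TripleDesDecrypt
{
    public static string Decrypt(byte[] cipherBytes, byte[] keyBytes, byte[] ivBytes)
    {
        using (var tdes = new TripleDESCryptoServiceProvider())
        {
            tdes.Mode = CipherMode.CBC;
            tdes.Padding = PaddingMode.PKCS7; // instead of PaddingMode.Zeros
            tdes.Key = keyBytes;
            tdes.IV = ivBytes;                // must match the IV used on the Cocoa side

            using (ICryptoTransform transform = tdes.CreateDecryptor())
            {
                byte[] plain = transform.TransformFinalBlock(cipherBytes, 0, cipherBytes.Length);
                return Encoding.UTF8.GetString(plain);
            }
        }
    }
}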