Padding is invalid and cannot be removed? - c#

I have looked online for what this exception means in relation to my program, but I can't seem to find a solution or the reason why it's happening to my specific program. I have been using the example provided by MSDN for encrypting and decrypting an XmlDocument using the Rijndael algorithm. The encryption works fine, but when I try to decrypt, I get the following exception:
Padding is invalid and cannot be removed
Can anyone tell me what I can do to solve this issue? My code below is where I get the key and other data. If cryptographyMode is false, it calls the Decrypt method, which is where the exception occurs:
public void Cryptography(XmlDocument doc, bool cryptographyMode)
{
    RijndaelManaged key = null;
    try
    {
        // Create a new Rijndael key.
        key = new RijndaelManaged();
        const string passwordBytes = "Password1234"; // password here
        byte[] saltBytes = Encoding.UTF8.GetBytes("SaltBytes");
        Rfc2898DeriveBytes p = new Rfc2898DeriveBytes(passwordBytes, saltBytes);
        // sizes are divided by 8 because [ 1 byte = 8 bits ]
        key.IV = p.GetBytes(key.BlockSize / 8);
        key.Key = p.GetBytes(key.KeySize / 8);
        if (cryptographyMode)
        {
            Ecrypt(doc, "Content", key);
        }
        else
        {
            Decrypt(doc, key);
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
    finally
    {
        // Clear the key.
        if (key != null)
        {
            key.Clear();
        }
    }
}
private void Decrypt(XmlDocument doc, SymmetricAlgorithm alg)
{
    // Check the arguments.
    if (doc == null)
        throw new ArgumentNullException("doc");
    if (alg == null)
        throw new ArgumentNullException("alg");
    // Find the EncryptedData element in the XmlDocument.
    XmlElement encryptedElement = doc.GetElementsByTagName("EncryptedData")[0] as XmlElement;
    // If the EncryptedData element was not found, throw an exception.
    if (encryptedElement == null)
    {
        throw new XmlException("The EncryptedData element was not found.");
    }
    // Create an EncryptedData object and populate it.
    EncryptedData edElement = new EncryptedData();
    edElement.LoadXml(encryptedElement);
    // Create a new EncryptedXml object.
    EncryptedXml exml = new EncryptedXml();
    // Decrypt the element using the symmetric key.
    byte[] rgbOutput = exml.DecryptData(edElement, alg); // <---- I GET THE EXCEPTION HERE
    // Replace the EncryptedData element with the plaintext XML element.
    exml.ReplaceData(encryptedElement, rgbOutput);
}

Rijndael/AES is a block cipher. It encrypts data in 128-bit (16-byte) blocks. Cryptographic padding is used to make sure that the last block of the message is always the correct size.
Your decryption method is expecting whatever its default padding is and is not finding it. As @NetSquirrel says, you need to explicitly set the padding for both encryption and decryption. Unless you have a reason to do otherwise, use PKCS#7 padding.
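A minimal sketch of setting it explicitly on both sides (key and iv here are assumed to be byte arrays that both sides already share; this is illustrative, not the answer's own code):
using (var aes = new RijndaelManaged())
{
    aes.Mode = CipherMode.CBC;         // must match on the encrypting and decrypting side
    aes.Padding = PaddingMode.PKCS7;   // set explicitly when encrypting AND when decrypting
    aes.Key = key;                     // assumed: shared key bytes
    aes.IV = iv;                       // assumed: shared IV bytes
    using (ICryptoTransform decryptor = aes.CreateDecryptor())
    {
        // use the decryptor in a CryptoStream exactly as in the question's Decrypt method
    }
}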

Make sure that the keys you use to encrypt and decrypt are the same. Even if the padding mode is not explicitly set, encryption and decryption should still line up, because the defaults will be the same on both sides. However, if for some reason you are using a different set of keys for decryption than was used for encryption, you will get this error:
Padding is invalid and cannot be removed
If you are using some algorithm to dynamically generate keys, that will not work: the keys need to be the same for both encryption and decryption. One common way is to have the caller provide the key and IV in the constructor of the class containing the encryption methods, which prevents the encryption/decryption process from having any hand in creating these items. The class then focuses on the task at hand (encrypting and decrypting data) and requires the IV and key to be supplied by the caller.
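A minimal sketch of that constructor-injection pattern (the class and member names are hypothetical, not from the answer):
using System.Security.Cryptography;

// The caller creates and owns the key and IV; this class only performs the transforms.
public sealed class XmlCryptographer
{
    private readonly byte[] key;
    private readonly byte[] iv;

    public XmlCryptographer(byte[] key, byte[] iv)
    {
        this.key = key;   // must be the exact same bytes for encryption and decryption
        this.iv = iv;     // likewise
    }

    public ICryptoTransform CreateEncryptor()
    {
        return new RijndaelManaged { Key = key, IV = iv }.CreateEncryptor();
    }

    public ICryptoTransform CreateDecryptor()
    {
        return new RijndaelManaged { Key = key, IV = iv }.CreateDecryptor();
    }
}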

For the benefit of people searching, it may be worth checking the input being decrypted. In my case, the info being sent for decryption was (wrongly) going in as an empty string. It resulted in the padding error.
This may relate to rossum's answer, but I thought it worth mentioning.
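A minimal guard for that case (the Decrypt call here stands in for whatever decryption routine you use, so the name is hypothetical):
if (string.IsNullOrEmpty(cipherTextBase64))
{
    throw new ArgumentException("Nothing to decrypt.", nameof(cipherTextBase64));
}
string plainText = Decrypt(cipherTextBase64);   // hypothetical decryption helper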

If the same key and initialization vector are used for encoding and decoding, this issue does not come from data decoding but from data encoding.
After you have called the Write method on a CryptoStream object, you must ALWAYS call the FlushFinalBlock method before the Close method.
MSDN documentation on CryptoStream.FlushFinalBlock method says:
"Calling the Close method will call FlushFinalBlock ..."
https://msdn.microsoft.com/en-US/library/system.security.cryptography.cryptostream.flushfinalblock(v=vs.110).aspx
This is wrong. Calling the Close method just closes the CryptoStream and the output Stream.
If you do not call FlushFinalBlock before Close after writing the data to be encrypted, then when decrypting, a call to the Read or CopyTo method on your CryptoStream object will raise a CryptographicException (message: "Padding is invalid and cannot be removed").
This is probably true for all encryption algorithms derived from SymmetricAlgorithm (Aes, DES, RC2, Rijndael, TripleDES), although I have only verified it for AesManaged with a MemoryStream as the output Stream.
So, if you receive this CryptographicException on decryption, read your output Stream's Length property value after you have written your data to be encrypted, then call FlushFinalBlock and read the value again. If it has changed, you know that calling FlushFinalBlock is NOT optional.
And you do not need to perform any padding programmatically, or choose another Padding property value; padding is the FlushFinalBlock method's job.
.........
Additional remark for Kevin:
Yes, CryptoStream calls FlushFinalBlock before calling Close, but by then it is too late: when the CryptoStream Close method is called, the output stream is closed as well.
If your output stream is a MemoryStream, you cannot read its data after it is closed. So you need to call FlushFinalBlock on your CryptoStream before using the encrypted data written to the MemoryStream.
If your output stream is a FileStream, things are worse because writing is buffered. The consequence is that the last bytes written may not make it into the file if you close the output stream before calling Flush on the FileStream. So before calling Close on the CryptoStream, you first need to call FlushFinalBlock on your CryptoStream and then call Flush on your FileStream.
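A minimal encryption sketch illustrating the call order described above, with a MemoryStream as the output stream (plainBytes is assumed to be the data you want to encrypt; this is not the original answer's code):
byte[] cipherBytes;
using (var aes = Aes.Create())
using (var output = new MemoryStream())
using (var crypto = new CryptoStream(output, aes.CreateEncryptor(), CryptoStreamMode.Write))
{
    crypto.Write(plainBytes, 0, plainBytes.Length);
    crypto.FlushFinalBlock();            // writes the final padded block to the MemoryStream
    cipherBytes = output.ToArray();      // safe to read now, before Close/Dispose
}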

I came across this as a regression bug when refactoring code from traditional using blocks to the new C# 8.0 using declaration style, where the block ends when the variable falls out of scope at the end of the method.
Old style:
//...
using (MemoryStream ms = new MemoryStream())
{
    using (CryptoStream cs = new CryptoStream(ms, aesCrypto.CreateDecryptor(), CryptoStreamMode.Write))
    {
        cs.Write(rawCipherText, 0, rawCipherText.Length);
    }
    return Encoding.Unicode.GetString(ms.ToArray());
}
New, less indented style:
//...
using MemoryStream ms = new MemoryStream();
using CryptoStream cs = new CryptoStream(ms, aesCrypto.CreateDecryptor(), CryptoStreamMode.Write);
cs.Write(rawCipherText, 0, rawCipherText.Length);
cs.FlushFinalBlock();
return Encoding.Unicode.GetString(ms.ToArray());
With the old style, the using block for the CryptoStream ended and Dispose() was called before the memory stream was read in the return statement, so the CryptoStream was automatically flushed.
With the new style, the memory stream is read before the CryptoStream is disposed, so I had to call FlushFinalBlock() manually before reading from the memory stream in order to fix this issue. I had to manually flush the final block in both the encrypt and the decrypt methods once they were written in the new using style.

After several rounds of fighting with this, I finally solved the problem.
(Note: I use standard AES as the symmetric algorithm. This answer may not be suitable
for everyone.)
1. Change the algorithm class: replace the RijndaelManaged class with the AesManaged one.
2. Do not explicitly set the KeySize of the algorithm class; leave it at its default.
(This is the most important step. I think there is a bug in the KeySize property.)
Here is a checklist of the arguments you might have missed (a minimal sketch follows this list):
Key (byte array; the length must be exactly 16, 24, or 32 bytes for the different key sizes)
IV (byte array, 16 bytes)
CipherMode (one of CBC, CFB, CTS, ECB, OFB)
PaddingMode (one of ANSIX923, ISO10126, None, PKCS7, Zeros)
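A minimal sketch setting all four explicitly (key and iv are assumed to be byte arrays of the right sizes, obtained elsewhere):
var aes = Aes.Create();            // or AesManaged, as suggested above
aes.Key = key;                     // 16, 24 or 32 bytes
aes.IV = iv;                       // 16 bytes
aes.Mode = CipherMode.CBC;         // must match the encrypting side
aes.Padding = PaddingMode.PKCS7;   // must match the encrypting side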

My issue was that the encryption passphrase didn't match the decryption passphrase, so it threw this error, which is a little misleading.

The solution in my case was that I had inadvertently applied different keys to the encryption and decryption methods.

This will fix the problem:
aes.Padding = PaddingMode.Zeros;

I had the same problem trying to port a Go program to C#. This means that a lot of data has already been encrypted with the Go program. This data must now be decrypted with C#.
The final solution was PaddingMode.None or rather PaddingMode.Zeros.
The cryptographic methods in Go:
import (
"crypto/aes"
"crypto/cipher"
"crypto/sha1"
"encoding/base64"
"io/ioutil"
"log"
"golang.org/x/crypto/pbkdf2"
)
func decryptFile(filename string, saltBytes []byte, masterPassword []byte) (artifact string) {
const (
keyLength int = 256
rfc2898Iterations int = 6
)
var (
encryptedBytesBase64 []byte // The encrypted bytes as base64 chars
encryptedBytes []byte // The encrypted bytes
)
// Load an encrypted file:
if bytes, bytesErr := ioutil.ReadFile(filename); bytesErr != nil {
log.Printf("[%s] There was an error while reading the encrypted file: %s\n", filename, bytesErr.Error())
return
} else {
encryptedBytesBase64 = bytes
}
// Decode base64:
decodedBytes := make([]byte, len(encryptedBytesBase64))
if countDecoded, decodedErr := base64.StdEncoding.Decode(decodedBytes, encryptedBytesBase64); decodedErr != nil {
log.Printf("[%s] An error occur while decoding base64 data: %s\n", filename, decodedErr.Error())
return
} else {
encryptedBytes = decodedBytes[:countDecoded]
}
// Derive key and vector out of the master password and the salt cf. RFC 2898:
keyVectorData := pbkdf2.Key(masterPassword, saltBytes, rfc2898Iterations, (keyLength/8)+aes.BlockSize, sha1.New)
keyBytes := keyVectorData[:keyLength/8]
vectorBytes := keyVectorData[keyLength/8:]
// Create an AES cipher:
if aesBlockDecrypter, aesErr := aes.NewCipher(keyBytes); aesErr != nil {
log.Printf("[%s] Was not possible to create new AES cipher: %s\n", filename, aesErr.Error())
return
} else {
// CBC mode always works in whole blocks.
if len(encryptedBytes)%aes.BlockSize != 0 {
log.Printf("[%s] The encrypted data's length is not a multiple of the block size.\n", filename)
return
}
// Reserve memory for decrypted data. By definition (cf. AES-CBC), it must be the same length as the encrypted data:
decryptedData := make([]byte, len(encryptedBytes))
// Create the decrypter:
aesDecrypter := cipher.NewCBCDecrypter(aesBlockDecrypter, vectorBytes)
// Decrypt the data:
aesDecrypter.CryptBlocks(decryptedData, encryptedBytes)
// Cast the decrypted data to string:
artifact = string(decryptedData)
}
return
}
... and ...
import (
"crypto/aes"
"crypto/cipher"
"crypto/sha1"
"encoding/base64"
"github.com/twinj/uuid"
"golang.org/x/crypto/pbkdf2"
"io/ioutil"
"log"
"math"
"os"
)
func encryptFile(filename, artifact string, masterPassword []byte) (status bool) {
const (
keyLength int = 256
rfc2898Iterations int = 6
)
status = false
secretBytesDecrypted := []byte(artifact)
// Create new salt:
saltBytes := uuid.NewV4().Bytes()
// Derive key and vector out of the master password and the salt cf. RFC 2898:
keyVectorData := pbkdf2.Key(masterPassword, saltBytes, rfc2898Iterations, (keyLength/8)+aes.BlockSize, sha1.New)
keyBytes := keyVectorData[:keyLength/8]
vectorBytes := keyVectorData[keyLength/8:]
// Create an AES cipher:
if aesBlockEncrypter, aesErr := aes.NewCipher(keyBytes); aesErr != nil {
log.Printf("[%s] Was not possible to create new AES cipher: %s\n", filename, aesErr.Error())
return
} else {
// CBC mode always works in whole blocks.
if len(secretBytesDecrypted)%aes.BlockSize != 0 {
numberNecessaryBlocks := int(math.Ceil(float64(len(secretBytesDecrypted)) / float64(aes.BlockSize)))
enhanced := make([]byte, numberNecessaryBlocks*aes.BlockSize)
copy(enhanced, secretBytesDecrypted)
secretBytesDecrypted = enhanced
}
// Reserve memory for encrypted data. By definition (cf. AES-CBC), it must be the same length as the plaintext data:
encryptedData := make([]byte, len(secretBytesDecrypted))
// Create the encrypter:
aesEncrypter := cipher.NewCBCEncrypter(aesBlockEncrypter, vectorBytes)
// Encrypt the data:
aesEncrypter.CryptBlocks(encryptedData, secretBytesDecrypted)
// Encode base64:
encodedBytes := make([]byte, base64.StdEncoding.EncodedLen(len(encryptedData)))
base64.StdEncoding.Encode(encodedBytes, encryptedData)
// Allocate memory for the final file's content:
fileContent := make([]byte, len(saltBytes))
copy(fileContent, saltBytes)
fileContent = append(fileContent, 10)
fileContent = append(fileContent, encodedBytes...)
// Write the data into a new file. This ensures that at least the old version stays healthy
// in case the computer hangs while writing out the file. After a successful write operation,
// the old file can be deleted and the new one renamed.
if writeErr := ioutil.WriteFile(filename+"-update.txt", fileContent, 0644); writeErr != nil {
log.Printf("[%s] Was not able to write out the updated file: %s\n", filename, writeErr.Error())
return
} else {
if renameErr := os.Rename(filename+"-update.txt", filename); renameErr != nil {
log.Printf("[%s] Was not able to rename the updated file: %s\n", fileContent, renameErr.Error())
} else {
status = true
return
}
}
return
}
}
Now, decryption in C#:
public static string FromFile(string filename, byte[] saltBytes, string masterPassword)
{
    var iterations = 6;
    var keyLength = 256;
    var blockSize = 128;
    var encryptedBytesBase64 = File.ReadAllBytes(filename);

    // bytes -> string:
    var encryptedBytesBase64String = System.Text.Encoding.UTF8.GetString(encryptedBytesBase64);

    // Decode base64:
    var encryptedBytes = Convert.FromBase64String(encryptedBytesBase64String);

    var keyVectorObj = new Rfc2898DeriveBytes(masterPassword, saltBytes.Length, iterations);
    keyVectorObj.Salt = saltBytes;

    Span<byte> keyVectorData = keyVectorObj.GetBytes(keyLength / 8 + blockSize / 8);
    var key = keyVectorData.Slice(0, keyLength / 8);
    var iv = keyVectorData.Slice(keyLength / 8);

    var aes = Aes.Create();
    aes.Padding = PaddingMode.Zeros;
    // or ... aes.Padding = PaddingMode.None;

    var decryptor = aes.CreateDecryptor(key.ToArray(), iv.ToArray());
    var decryptedString = string.Empty;

    using (var memoryStream = new MemoryStream(encryptedBytes))
    {
        using (var cryptoStream = new CryptoStream(memoryStream, decryptor, CryptoStreamMode.Read))
        {
            using (var reader = new StreamReader(cryptoStream))
            {
                decryptedString = reader.ReadToEnd();
            }
        }
    }

    return decryptedString;
}
How can the issue with the padding be explained? Just before encryption the Go program checks the padding:
// CBC mode always works in whole blocks.
if len(secretBytesDecrypted)%aes.BlockSize != 0 {
numberNecessaryBlocks := int(math.Ceil(float64(len(secretBytesDecrypted)) / float64(aes.BlockSize)))
enhanced := make([]byte, numberNecessaryBlocks*aes.BlockSize)
copy(enhanced, secretBytesDecrypted)
secretBytesDecrypted = enhanced
}
The important part is this:
enhanced := make([]byte, numberNecessaryBlocks*aes.BlockSize)
copy(enhanced, secretBytesDecrypted)
A new array is created with an appropriate length, so that the length is a multiple of the block size. This new array is filled with zeros. The copy method then copies the existing data into it. It is ensured that the new array is larger than the existing data. Accordingly, there are zeros at the end of the array.
Thus, the C# code can use PaddingMode.Zeros. The alternative PaddingMode.None just ignores any padding, which also works. I hope this answer is helpful for anyone who has to port code from Go to C#, etc.

I came across this error while attempting to pass an un-encrypted file path to the Decrypt method. The solution was to check whether the passed file is encrypted before attempting to decrypt it:
if (Sec.IsFileEncrypted(e.File.FullName))
{
    var stream = Sec.Decrypt(e.File.FullName);
}
else
{
    // non-encrypted scenario
}

Another scenario, again for the benefit of people searching.
For me, this error occurred in the Dispose() method, where it masked a previous error unrelated to encryption.
Once the other component was fixed, this exception went away.

I encountered this padding error when I manually edited the encrypted strings in the file (using Notepad), because I wanted to test how the decryption function would behave if the encrypted content was altered.
The solution for me was to wrap the decryption in a try/catch and report that decryption cannot be carried out:
try
{
    // decryption code here...
}
catch (CryptographicException)
{
    // inform the user that decryption cannot be carried out
}
As I said, my padding error occurred because I manually typed over the encrypted text in Notepad. Maybe my answer will guide you to your solution.

I had the same error. In my case it was because I had stored the encrypted data in a SQL database. The table the data is stored in has a binary(1000) column. When retrieving the data from the database, it would try to decrypt all 1000 bytes, whereas there were actually only 400 bytes of ciphertext. Removing the trailing zeros (the extra 600 bytes) from the result fixed the problem.
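A sketch of that trim step, assuming you are stuck with the fixed-length binary column (note the caveat in the comment; storing the real ciphertext length, or using varbinary, is the more robust fix):
// Caution: ciphertext can legitimately end in 0x00 bytes, so blindly trimming is only
// safe if you know yours cannot; otherwise store the actual length alongside the data.
static byte[] TrimTrailingZeros(byte[] data)
{
    int length = data.Length;
    while (length > 0 && data[length - 1] == 0)
    {
        length--;
    }
    var trimmed = new byte[length];
    Array.Copy(data, trimmed, length);
    return trimmed;
}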

I had this error and was explicitly setting the blocksize: aesManaged.BlockSize = 128;
Once I removed that, it worked.

This can also happen if you have the wrong encryption key with a padding mode set.
I saw this when I was testing concurrency issues and messed up my testbed. I created a new instance of the AES class for each transform (encrypt/decrypt) without setting the key, and this got thrown when I was trying to decrypt the result.

This happened to me when I changed from PlayerPrefs to CPlayerPrefs; all I did was clear the previous PlayerPrefs and let CPlayerPrefs create new ones.

Related

aes decryption not working properly sometime

I am using AES for encryption/decryption of text, but sometimes decryption gives me the exact original value back, while at other times I get an error. I have looked at different answers here but didn't find the root cause of my problem.
private static string DecryptStringFromBytes(byte[] cipherText, byte[] key, byte[] iv)
{
// Declare the string used to hold the decrypted text.
string plaintext = null;
// Create an RijndaelManaged object
// with the specified key and IV.
using (var rijAlg = new System.Security.Cryptography.RijndaelManaged())
{
//Settings
rijAlg.Mode = System.Security.Cryptography.CipherMode.CBC;
rijAlg.Padding = System.Security.Cryptography.PaddingMode.PKCS7;
rijAlg.FeedbackSize = 128;
rijAlg.Key = key;
rijAlg.IV = iv;
// Create a decrytor to perform the stream transform.
var decryptor = rijAlg.CreateDecryptor(rijAlg.Key, rijAlg.IV);
try
{
// Create the streams used for decryption.
using (var msDecrypt = new System.IO.MemoryStream(cipherText))
{
using (var csDecrypt = new System.Security.Cryptography.CryptoStream(msDecrypt, decryptor, System.Security.Cryptography.CryptoStreamMode.Read))
{
using (var srDecrypt = new System.IO.StreamReader(csDecrypt))
{
// Read the decrypted bytes from the decrypting stream
// and place them in a string.
plaintext = srDecrypt.ReadToEnd();
}
}
}
}
catch
{
plaintext = "keyError";
}
}
return plaintext;
}
It throws the error "Padding is invalid and cannot be removed".
I have seen some suggestions to remove the padding, but that doesn't seem like a proper solution.
I am not able to find the cause, as sometimes it runs perfectly without throwing the error.
Any help or suggestion is really appreciated.
For encryption: the encryption is done on the client side in JavaScript, and the encrypted text is passed to the server.
var key = CryptoJS.enc.Utf8.parse("16 digit number here");
var iv = CryptoJS.enc.Utf8.parse("16 digit number here");
var EncryptedString = CryptoJS.AES.encrypt(CryptoJS.enc.Utf8.parse("entered string to encrypt"), key,
{ keySize: 128 / 8, iv: iv, mode: CryptoJS.mode.CBC, padding: CryptoJS.pad.Pkcs7 });
By using a similar encryption routine in .NET to the decryption function you give I was able to successfully round-trip plaintext to ciphertext and back to plaintext, so it seems that the decryption function itself is ok. It therefore seems very likely that the key and/or IV you're using to encrypt does not match byte-for-byte with the values you're using when decrypting.
Given that your encryption code is using the UTF-8 encoded version of string values to form the key and IV, it would be worth doing the same in your decryption code (using Encoding.UTF8.GetBytes()).
However, it would be worth noting that whilst this might resolve the immediate issue, it is in itself a bad practice to use string values directly for keys without some form of key-derivation process (e.g. Rfc2898DeriveBytes), and IVs should be generated randomly for every application of the encryption function. Those are just a few issues with your use of cryptography (and are independent of whether the code works or not).
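As a hedged sketch of that suggestion (the key and IV strings here are placeholders; they must be the exact same 16-character strings the CryptoJS code uses, and the ciphertext is assumed to arrive Base64-encoded):
byte[] key = Encoding.UTF8.GetBytes("0123456789ABCDEF");          // placeholder: same string as the JS key
byte[] iv = Encoding.UTF8.GetBytes("0123456789ABCDEF");           // placeholder: same string as the JS IV
byte[] cipherBytes = Convert.FromBase64String(cipherTextBase64);  // cipherTextBase64: value sent by the client
string plainText = DecryptStringFromBytes(cipherBytes, key, iv);  // the question's own method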

PBKDF2 Python keys vs .NET Rfc2898

I am trying to write a Python module that will encrypt text that our existing .NET classes can decrypt. As far as I can tell, my code lines up, but it isn't decrypting (I get an 'Invalid padding length' error on the C# side). My PKCS7 code looks good, but research indicates that invalid keys could cause this same problem.
What's different between these two setups?
Python:
derived_key = PBKDF2(crm_key, salt, 256 / 8, iterations)
iv = PBKDF2(crm_key, salt, 128 / 8, iterations)
encoder = pkcs7.PKCS7Encoder()
cipher = AES.new(derived_key, AES.MODE_CBC, iv)
decoded = cipher.decrypt(encoded_secret)
#encode - just stepped so i could debug.
padded_secret = encoder.encode(secret) # 1
encodedtext = cipher.encrypt(padded_secret) # 2
based_secret = base64.b64encode(encodedtext) # 3
I thought that based_secret could be passed up to C# and decoded there, but it fails. The equivalent C# encryption code is:
var rfc = new Rfc2898DeriveBytes(key, saltBytes);
// create provider & encryptor
using (var cryptoProvider = new AesManaged())
{
// Set cryptoProvider parameters
cryptoProvider.BlockSize = cryptoProvider.LegalBlockSizes[0].MaxSize;
cryptoProvider.KeySize = cryptoProvider.LegalKeySizes[0].MaxSize;
cryptoProvider.Key = rfc.GetBytes(cryptoProvider.KeySize / 8);
cryptoProvider.IV = rfc.GetBytes(cryptoProvider.BlockSize / 8);
using (var encryptor = cryptoProvider.CreateEncryptor())
{
// Create a MemoryStream.
using (var memoryStream = new MemoryStream())
{
// Create a CryptoStream using the MemoryStream and the encryptor.
using (var cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write))
{
// Convert the passed string to a byte array.
var valueBytes = Encoding.UTF8.GetBytes(plainValue);
// Write the byte array to the crypto stream and flush it.
cryptoStream.Write(valueBytes, 0, valueBytes.Length);
cryptoStream.FlushFinalBlock();
// Get an array of bytes from the
// MemoryStream that holds the
// encrypted data.
var encryptBytes = memoryStream.ToArray();
// Close the streams.
cryptoStream.Close();
memoryStream.Close();
// Return the encrypted buffer.
return Convert.ToBase64String(encryptBytes);
}
}
}
The Python pkcs7 implementation I'm using is:
https://gist.github.com/chrix2/4171336
First off, I verified that Rfc2898 and PBKDF2 are the same thing. Then, as stated above, the problem appears to be a .NET-ism. I found on MSDN that the implementation of GetBytes inside Rfc2898DeriveBytes returns different bytes on each call, i.e. it holds state (see the remarks about halfway down the page).
Example in Python (pseudo output):
derived_key = PBKDF2(key, salt, 32, 1000)
iv = PBKDF2(key, salt, 16, 1000)
print(base64.b64encode(derived_key))
print(base64.b64encode(iv))
$123456789101112134==
$12345678==
Same(ish) code in .NET (again, pseudo output):
var rfc = new Rfc2898DeriveBytes(key, saltBytes);
using (var cryptoProvider = new AesManaged())
{
// Set cryptoProvider parameters
cryptoProvider.BlockSize = cryptoProvider.LegalBlockSizes[0].MaxSize;
cryptoProvider.KeySize = cryptoProvider.LegalKeySizes[0].MaxSize;
cryptoProvider.Key = rfc.GetBytes(cryptoProvider.KeySize / 8);
cryptoProvider.IV = rfc.GetBytes(cryptoProvider.BlockSize / 8);
Console.WriteLine(Convert.ToBase64String(cryptoProvider.Key));
Console.WriteLine(Convert.ToBase64String(cryptoProvider.IV));
}
$123456789101112134==
$600200300==
Subsequent calls to rfc.GetBytes always produce different results. MSDN says the calls are cumulative: calling GetBytes(20) twice returns the same overall byte stream as a single call to GetBytes(20 + 20), i.e. GetBytes(40), just split into two pieces. So the second call continues the derived byte stream; it increases the amount of key material rather than regenerating it.
There are some ways around this issue: generate a longer block of bytes in a single call and slice it into both a derived key AND an IV, or randomly generate an IV, append it to the encoded message, and peel it off before decrypting.
Slicing the python output produces the same results as .NET. It looks like this:
derived_key = PBKDF2(key, salt, 32, 1000)
iv = PBKDF2(key, salt, 32 + 16, 1000) # We need 16, but we're compensating for .NETs 'already called' awesomeness on the GetBytes method
split_key = iv[32:]
print(base64.b64encode(derived_key))
print(base64.b64encode(iv))
print(base64.b64encode(split_key))
$ 123456789101112134== # matches our derived key
$ 12345678== # doesn't match
$ 600200300== # matches. this is the base 64 encoded version of the tailing 16 bytes.
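The same idea expressed on the .NET side, as a sketch (sizes assume a 256-bit key and a 128-bit IV, and the iteration count is illustrative): derive one long block of bytes in a single GetBytes call and slice it yourself, instead of calling GetBytes twice.
var rfc = new Rfc2898DeriveBytes(key, saltBytes, 1000);
byte[] keyVector = rfc.GetBytes(32 + 16);   // one call: 32 key bytes followed by 16 IV bytes
byte[] aesKey = new byte[32];
byte[] aesIv = new byte[16];
Array.Copy(keyVector, 0, aesKey, 0, 32);
Array.Copy(keyVector, 32, aesIv, 0, 16);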
Enjoy,

Trouble with decrypting AES/ECB padded using PKCS5 in C#

I need to decrypt a picture coming from an online service (which is not mine, so I must use this form of encryption).
This picture is encrypted using AES/ECB with a single symmetric key and padded using PKCS5.
I have tried several ways to achieve this, but none of them worked. I use the BouncyCastle cryptography library.
Here's my decryption code:
public static byte[] Decrypt(string input)
{
var cipher = CipherUtilities.GetCipher("AES/ECB/PKCS5Padding");
cipher.Init(false, new KeyParameter(Encoding.UTF8.GetBytes(KEY)));
byte[] todo = Encoding.UTF8.GetBytes(Pad(input));
byte[] bytes = cipher.ProcessBytes(todo);
byte[] final = cipher.DoFinal();
// Write the decrypt bytes & final to memory...
var decryptedStream = new MemoryStream(bytes.Length);
decryptedStream.Write(bytes, 0, bytes.Length);
decryptedStream.Write(final, 0, final.Length);
decryptedStream.Flush();
var decryptedData = new byte[decryptedStream.Length];
decryptedStream.Read(decryptedData, 0, (int)decryptedStream.Length);
return decryptedData;
}
private static string Pad(string data)
{
int len = data.Length;
int toAdd = (16 - len % 16);
for (int i = 0; i < toAdd; i++)
{
data += (char)toAdd;
}
return data;
}
When I try, it raises an InvalidCipherTextException with the message "pad block corrupted" at the byte[] final = cipher.DoFinal(); line.
I tested my padding function and it seemed to work as expected.
I tried to look inside the BouncyCastle source code for my error, and what I found is that the last block doesn't have any padding, which is what is causing the error. So I'm wondering if I'm doing something wrong somewhere else, because the problem may not come from the padding.
Maybe it is the input string, which is retrieved from an HTTP server with this:
// grab the response and print it out to the console along with the status code
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
return new StreamReader(response.GetResponseStream()).ReadToEnd();
What I want to achieve is exactly the same thing as here: C# Decrypting AES/ECB Padded Using PKCS#5
But there are no answers, as that asker didn't try anything.
Thanks in advance, and I'm really sorry for my bad English.
You have an incomplete ciphertext: data encrypted with AES-128 will always be a multiple of 128/8 = 16 bytes. That is, "last block incomplete in decryption" means you have, say, 127 bytes of ciphertext instead of 128.
As said in the comments, you must not pad the ciphertext before decryption. "It worked" only because your function did not actually pad anything, for the reason highlighted above.
Are you sure you're using the same "bitness" flavours of AES? (For example, it may be that you are decrypting an AES-128 ciphertext with AES-192.)
P.S. On an unrelated note, cipher.Init(false, new KeyParameter(Encoding.UTF8.GetBytes(KEY))); also looks suspicious. Are you sure the key is not Base64-encoded or the like?
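Building on the point above that the ciphertext must not be padded (or otherwise transformed as text) before decryption, here is a sketch of fetching the response body as raw bytes and handing them straight to the cipher (keyBytes stands in for however the service's key is actually encoded, which is exactly the open question):
using (var response = (HttpWebResponse)request.GetResponse())
using (var responseStream = response.GetResponseStream())
using (var buffer = new MemoryStream())
{
    responseStream.CopyTo(buffer);            // keep the ciphertext as raw bytes, not text
    byte[] cipherText = buffer.ToArray();     // length should already be a multiple of 16

    var cipher = CipherUtilities.GetCipher("AES/ECB/PKCS5Padding");
    cipher.Init(false, new KeyParameter(keyBytes));   // keyBytes: assumed correct key material
    byte[] plainBytes = cipher.DoFinal(cipherText);   // no manual Pad() before decryption
}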

AesManaged implementation breaks under Mono...?

I've got a program that works perfectly well under Windows and .NET Framework 4; however, under Mono (built in MonoDevelop 2.6), the Encrypt() and Decrypt() functions seem to only half work...
To the point that if I locally encrypt something and then immediately decrypt it (under Mono), the first 10 or so characters of the message are scrambled gibberish, but everything after that looks perfectly valid!
The Encrypt function is as follows:
public byte[] Encrypt(string plainText)
{
DateTime now = DateTime.Now;
string timeStamp = now.Millisecond.ToString("000") + "." + now.Second.ToString("00") + "." +
now.Minute.ToString("00") + "." + now.Hour.ToString("00") + Constants.MessageSplitChar;
plainText = plainText.Insert(0, timeStamp);
MemoryStream memoryStream = new MemoryStream();
lock (this.encryptor)
{
CryptoStream cryptoStream = new CryptoStream(memoryStream, this.encryptor, CryptoStreamMode.Write);
StreamWriter writer = new StreamWriter(cryptoStream);
try
{
writer.Write(plainText);
}
finally
{
writer.Close();
cryptoStream.Close();
memoryStream.Close();
}
}
byte[] encryptedMessage = memoryStream.ToArray();
return this.AppendArrays(BitConverter.GetBytes(encryptedMessage.Length), encryptedMessage);
}
The Decrypt function is as follows:
public string Decrypt(byte[] cipherText)
{
try
{
string plainText = string.Empty;
MemoryStream memoryStream = new MemoryStream(cipherText);
lock (this.decryptor)
{
CryptoStream cryptoStream = new CryptoStream(memoryStream, this.decryptor, CryptoStreamMode.Read);
StreamReader reader = new StreamReader(cryptoStream);
try
{
plainText = reader.ReadToEnd();
plainText = plainText.Substring(plainText.IndexOf("|") + 1);
plainText = plainText.TrimEnd("\0".ToCharArray());
}
finally
{
reader.Close();
cryptoStream.Close();
memoryStream.Close();
}
}
return plainText;
}
catch (Exception ex)
{
CMicroBingoServer.LogManager.Write(ex.ToString(), MessagePriority.Error);
return "DECRYPTION_FAILED";
}
}
Your code does not show how your decryptor and encryptor instances are created.
This can be a problem because if you're reusing the instance then you must check ICryptoTransform.CanReuseTransform. If it returns false then you cannot reuse the same encryptor/decryptor and must create new instances.
That's even more important since Mono and .NET have different defaults for some algorithms. In any case, skipping this check means that any change in a future .NET Framework version (or via configuration files, since cryptography is pluggable using CryptoConfig) is likely to break your code someday.
It may be important how many bytes at the beginning are seemingly gibberish. If the first block comes out as nonsense but the rest is fine, then it could be that your initialization vector is not correct on decryption.
Under the most common block mode, CBC, the IV only affects the decryption of the first block of data, since after that it is the ciphertext itself that acts as the IV for later blocks.
Are you explicitly setting IVs for encrypting and decrypting? If not, then I would imagine the two runtimes have different behaviours when dealing with unset IVs (e.g. Windows uses all zeros and Mono generates a random IV). This would cause Windows to decrypt fine because the IV is the same, whereas Mono may be generating two different IVs for the encrypt and decrypt processes.
I don't know mono stuff well enough to look into the exact solution but something along these lines seems likely.
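As a sketch of what "explicitly setting" might look like in this code (key and iv are assumed to be byte arrays shared by both sides; this is illustrative, not the poster's actual initialization):
var aes = new AesManaged();
aes.Key = key;                                // same key bytes on the encrypting and decrypting side
aes.IV = iv;                                  // same 16 IV bytes on both sides
this.encryptor = aes.CreateEncryptor();
this.decryptor = aes.CreateDecryptor();

// And, per the point above, check whether a transform can safely be reused:
if (!this.decryptor.CanReuseTransform)
{
    this.decryptor = aes.CreateDecryptor();   // otherwise create a fresh transform per operation
}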

How to make this: J2ME encrypt C# decrypt And J2ME decrypt C# encrypt?

C#
string keystr = "0123456789abcdef0123456789abcdef";
string plainText = "www.bouncycastle.org";
RijndaelManaged crypto = new RijndaelManaged();
crypto.KeySize = 128;
crypto.Mode = CipherMode.CBC;
crypto.Padding = PaddingMode.PKCS7;
crypto.Key = keystr.ToCharArray().Select(c=>(byte)c).ToArray();
// get the IV and key for writing to a file
byte[] iv = crypto.IV;
byte[] key = crypto.Key;
// turn the message into bytes
// use UTF8 encoding to ensure that Java can read in the file properly
byte[] plainBytes = Encoding.UTF8.GetBytes(plainText.ToCharArray());
// Encrypt the Text Message using AES (Rijndael) (Symmetric algorithm)
ICryptoTransform sse = crypto.CreateEncryptor();
MemoryStream encryptedFs = new MemoryStream();
CryptoStream cs = new CryptoStream(encryptedFs, sse, CryptoStreamMode.Write);
try
{
cs.Write(plainBytes, 0, plainBytes.Length);
cs.FlushFinalBlock();
encryptedFs.Position = 0;
string result = string.Empty;
for (int i = 0; i < encryptedFs.Length; i++)
{
int read = encryptedFs.ReadByte();
result += read.ToString("x2");
}
}
catch (Exception e)
{
Console.WriteLine(e.Message);
}
finally
{
encryptedFs.Close();
cs.Close();
}
}
Java:
private String key = "0123456789abcdef0123456789abcdef";
private String plainText = "www.bouncycastle.org";
cipherText = performEncrypt(Hex.decode(key.getBytes()), plainText);
private byte[] performEncrypt(byte[] key, String plainText)
{
byte[] ptBytes = plainText.getBytes();
final RijndaelEngine rijndaelEngine = new RijndaelEngine();
cipher = new PaddedBufferedBlockCipher(new CBCBlockCipher(rijndaelEngine));
String name = cipher.getUnderlyingCipher().getAlgorithmName();
message("Using " + name);
byte[]iv = new byte[16];
final KeyParameter keyParameter = new KeyParameter(key);
cipher.init(true, keyParameter);
byte[] rv = new byte[cipher.getOutputSize(ptBytes.length)];
int oLen = cipher.processBytes(ptBytes, 0, ptBytes.length, rv, 0);
try
{
cipher.doFinal(rv, oLen);
}
catch (CryptoException ce)
{
message("Ooops, encrypt exception");
status(ce.toString());
}
return rv;
}
C# produces: ff53bc51c0caf5de53ba850f7ba08b58345a89a51356d0e030ce1367606c5f08
java produces: 375c52fd202696dba679e57f612ee95e707ccb05aff368b62b2802d5fb685403
Can somebody help me to fix my code?
In the Java code, you do not use the IV.
I am not savvy enough in C# to help you directly, but I can give some information.
Rijndael, aka "the AES", encrypts blocks of 16 bytes. To encrypt a long message (e.g. your test message, when encoding, is 20 bytes long), Rijndael must be invoked several times, with some way to chain the invocations together (also, there is some "padding" to make sure that the input length is a multiple of 16). The CBC mode performs such chaining.
In CBC, each block of data is combined (bitwise XOR) with the previous encrypted block prior to being itself encrypted. Since the first block of data has no previous block, we add a new conventional "zero-th block" called the IV. The IV should be chosen as 16 random bytes. The decrypting party will need the IV. The IV needs not be secret (that's the difference between the IV and the key) so it is often transmitted along the message.
In your Java code, you do not specify the IV, you just create a variable called iv and do not use it. So the Rijndael implementation is on its own for that. Chances are that it generated a random IV. Similarly, you do not give an IV to the Rijndael implementation in the C# code. So it is quite plausible that there again a random IV was selected. But not the same than the one in the Java code, hence the distinct results.
(Note: you 20-byte input string is padded to 32 bytes. You give two "results" in hexadecimal, of length 32 bytes each. This is coherent but means that those results do not include the IV -- otherwise they would be 48-byte long.)
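As a sketch of that on the C# side (not the poster's code): the question's code already captured the randomly generated IV into the iv variable, so ship those 16 bytes with the ciphertext and have the Java side read them back and pass them to the cipher (via ParametersWithIV in Bouncy Castle):
byte[] cipherBytes = encryptedFs.ToArray();            // the CryptoStream output from the question
byte[] message = new byte[iv.Length + cipherBytes.Length];
Buffer.BlockCopy(iv, 0, message, 0, iv.Length);
Buffer.BlockCopy(cipherBytes, 0, message, iv.Length, cipherBytes.Length);
// message now starts with the 16 IV bytes, followed by the ciphertext.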
I think the algorithm is built in a slightly different way and/or the key is interpreted in a different way.
