iOS & .NET Produce Different AES256 Results - c#

I've been at this for a few days now. My original (and eventual) goal was to use CommonCrypto on iOS to encrypt a password with a given IV and key, then successfully decrypt it using .NET. After tons of research and failures I've narrowed down my goal to simply producing the same encrypted bytes on iOS and .NET, then going from there.
I've created simple test projects in .NET (C#, Framework 4.5) and iOS (8.1). Please note the following code is not intended to be secure, but rather to winnow down the variables in the larger process. Also, iOS is the variable here. The final .NET encryption code will be deployed by a client, so it's up to me to bring the iOS encryption in line. Unless this is confirmed impossible, the .NET code will not be changed.
The relevant .NET encryption code:
static byte[] EncryptStringToBytes_Aes(string plainText, byte[] Key, byte[] IV)
{
    byte[] encrypted;
    // Create an Aes object
    // with the specified key and IV.
    using (Aes aesAlg = Aes.Create())
    {
        aesAlg.Padding = PaddingMode.PKCS7;
        aesAlg.KeySize = 256;
        aesAlg.BlockSize = 128;
        // Create an encryptor to perform the stream transform.
        ICryptoTransform encryptor = aesAlg.CreateEncryptor(Key, IV);
        // Create the streams used for encryption.
        using (MemoryStream msEncrypt = new MemoryStream())
        {
            using (CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write))
            {
                using (StreamWriter swEncrypt = new StreamWriter(csEncrypt))
                {
                    // Write all data to the stream.
                    swEncrypt.Write(plainText);
                }
                encrypted = msEncrypt.ToArray();
            }
        }
    }
    return encrypted;
}
The relevant iOS encryption code:
+ (NSData *)AES256EncryptData:(NSData *)data withKey:(NSData *)key iv:(NSData *)ivector
{
    Byte keyPtr[kCCKeySizeAES256 + 1]; // Pointer with room for terminator (unused)
    // Pad to the required size
    bzero(keyPtr, sizeof(keyPtr));
    // fetch key data
    [key getBytes:keyPtr length:sizeof(keyPtr)];

    // -- IV LOGIC
    Byte ivPtr[16];
    bzero(ivPtr, sizeof(ivPtr));
    [ivector getBytes:ivPtr length:sizeof(ivPtr)];

    // Data length
    NSUInteger dataLength = data.length;

    // See the doc: For block ciphers, the output size will always be less than or
    // equal to the input size plus the size of one block.
    // That's why we need to add the size of one block here.
    size_t bufferSize = dataLength + kCCBlockSizeAES128;
    void *buffer = malloc(bufferSize);

    size_t numBytesEncrypted = 0;
    CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
                                          keyPtr, kCCKeySizeAES256,
                                          ivPtr,
                                          data.bytes, dataLength,
                                          buffer, bufferSize,
                                          &numBytesEncrypted);
    if (cryptStatus == kCCSuccess) {
        return [NSData dataWithBytesNoCopy:buffer length:numBytesEncrypted];
    }
    free(buffer);
    return nil;
}
The relevant .NET code for passing the passphrase, key, and IV and printing the result:
byte[] c_IV = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };
byte[] c_Key = { 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 };
String passPhrase = "X";
// Encrypt
byte[] encrypted = EncryptStringToBytes_Aes(passPhrase, c_Key, c_IV);
// Print result
for (int i = 0; i < encrypted.Count(); i++)
{
    Console.WriteLine("[{0}] {1}", i, encrypted[i]);
}
The relevant code for passing the parameters and printing the result in iOS:
Byte c_iv[16] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };
Byte c_key[16] = { 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 };
NSString *passPhrase = @"X";

// Convert to data
NSData *ivData = [NSData dataWithBytes:c_iv length:sizeof(c_iv)];
NSData *keyData = [NSData dataWithBytes:c_key length:sizeof(c_key)];

// Convert string to encrypt to data
NSData *passData = [passPhrase dataUsingEncoding:NSUTF8StringEncoding];

NSData *encryptedData = [CryptoHelper AES256EncryptData:passData withKey:keyData iv:ivData];

long size = sizeof(Byte);
for (int i = 0; i < encryptedData.length / size; i++) {
    Byte val;
    NSRange range = NSMakeRange(i * size, size);
    [encryptedData getBytes:&val range:range];
    NSLog(@"[%i] %hhu", i, val);
}
Upon running the .NET code it prints out the following bytes after encryption:
[0] 194
[1] 154
[2] 141
[3] 238
[4] 77
[5] 109
[6] 33
[7] 94
[8] 158
[9] 5
[10] 7
[11] 187
[12] 193
[13] 165
[14] 70
[15] 5
Conversely, iOS prints the following after encryption:
[0] 77
[1] 213
[2] 61
[3] 190
[4] 197
[5] 191
[6] 55
[7] 230
[8] 150
[9] 144
[10] 5
[11] 253
[12] 253
[13] 158
[14] 34
[15] 138
I cannot for the life of me determine what is causing this difference. Some things I've already confirmed:
Both iOS and .NET can successfully decrypt their encrypted data.
The lines of code in the .NET project:
aesAlg.Padding = PaddingMode.PKCS7;
aesAlg.KeySize = 256;
aesAlg.BlockSize = 128;
These lines do not affect the result. They can be commented out and the output is the same; I assume this means they are the default values. I've only left them in to make it obvious that I'm matching iOS's encryption properties as closely as possible for this example.
If I print out the bytes in the iOS NSData objects "ivData" and "keyData", they contain the same list of bytes that I created them with, so I don't think this is a C <-> ObjC bridging problem for the initial parameters.
If I print out the bytes in the iOS variable "passData" it prints the same single byte as .NET (88). So I'm fairly certain they are starting the encryption with the exact same data.
Due to how concise the .NET code is, I've run out of obvious avenues of experimentation. My only thought is that someone may be able to point out a problem in my "AES256EncryptData:withKey:iv:" method. That code has been modified from the ubiquitous iOS AES256 code floating around, because the key we are provided is a byte array, not a string. I'm reasonably well versed in ObjC but not nearly as comfortable with the C-level code, so it's certainly possible I've fumbled the required modifications.
All help or suggestions would be greatly appreciated.

I notice you are using AES-256 but have a 128-bit key (16 bytes x 8 bits)! You cannot count on different implementations padding a too-short key to the required length the same way; that behavior is undefined.
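If AES-256 really is required, one way to remove the key-padding ambiguity is to hand both platforms an explicit 32-byte key. A minimal sketch (my illustration, not the poster's code), assuming you keep the EncryptStringToBytes_Aes helper above and want to mimic what the bzero'd keyPtr buffer in the iOS code effectively produces (the 16 key bytes followed by 16 zero bytes):

byte[] c_IV = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };
byte[] shortKey = { 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 };

// Zero-extend the 16-byte key to the full 32 bytes AES-256 expects.
byte[] fullKey = new byte[32];
Array.Copy(shortKey, fullKey, shortKey.Length); // remaining 16 bytes stay 0

byte[] encrypted = EncryptStringToBytes_Aes("X", fullKey, c_IV);

Whether the two platforms then agree still depends on the plaintext bytes being identical, which is what the next answer is about.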

You're likely dealing with an issue of string encoding. In your iOS code I see that you are passing the string as UTF-8, which results in a one-byte "X". .NET strings are UTF-16 internally, so depending on how the string is turned into bytes on the .NET side you may end up encrypting a two-byte "X".
You can use Encoding.UTF8 (see "How to convert a string to UTF8?") to convert your string to a UTF-8 byte array in .NET. You can also try writing out the byte array of the plain-text string in both cases to confirm that you are in fact passing the same bytes.
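A sketch of that idea (my own; the EncryptUtf8StringToBytes_Aes name is hypothetical), assuming the same Key and IV arrays as in the question and the usual using System.Text; and using System.Security.Cryptography; directives: skip StreamWriter entirely and encrypt an explicit UTF-8 byte array, so there is no doubt which bytes go into the cipher.

static byte[] EncryptUtf8StringToBytes_Aes(string plainText, byte[] Key, byte[] IV)
{
    // Encode the plaintext explicitly as UTF-8 so .NET and iOS agree on the input bytes.
    byte[] plainBytes = Encoding.UTF8.GetBytes(plainText);

    using (Aes aesAlg = Aes.Create())
    {
        aesAlg.Padding = PaddingMode.PKCS7;
        ICryptoTransform encryptor = aesAlg.CreateEncryptor(Key, IV);
        using (MemoryStream msEncrypt = new MemoryStream())
        {
            using (CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write))
            {
                csEncrypt.Write(plainBytes, 0, plainBytes.Length);
                csEncrypt.FlushFinalBlock();
            }
            return msEncrypt.ToArray();
        }
    }
}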

Related

c# DES PaddingMode.None

I have the following code in C# to encrypt and decrypt:
Crypto.BlockSize = 64;
Crypto.FeedbackSize = 8;
Crypto.Mode = CipherMode.ECB;
Crypto.Padding = PaddingMode.None;
Encryptor = Crypto.CreateEncryptor(Encoding.ASCII.GetBytes("12345678"), Encoding.UTF8.GetBytes("12345678"));
Decryptor = Crypto.CreateDecryptor(Encoding.ASCII.GetBytes("12345678"), Encoding.UTF8.GetBytes("12345678"));
I have coded another encrypter/decrypter in another language.
DES uses an 8-byte block size. When the text to encrypt is 8 bytes, or a multiple of 8, both produce exactly the same result.
The problem is when the size is, say, 12. C# happily uses the 12 bytes since it doesn't apply any padding, but in the other library I'm forced to use a multiple of 8 as input, so I have tried completing the block with all 0x00 and with all 0xFF, but then I get different results in the last bytes.
What is C# doing with no padding and an input that isn't a multiple of 8? Where is it getting the rest of the missing bytes?

C# - why does System.Guid flip the bytes in a byte array?

When entering the following code into the C# immediate window, it yields some unusual results, which I can only assume are because internally, System.Guid flips certain bytes:
When using an ordinal byte array from 0 to 15
new Guid(new byte[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15})
[03020100-0504-0706-0809-0a0b0c0d0e0f]
When using a non-ordinal byte array with values 0 to 15
new Guid(new byte[] {3, 2, 1, 0, 5, 4, 7, 6, 8, 9, 10, 11, 12, 13, 14, 15})
[00010203-0405-0607-0809-0a0b0c0d0e0f]
Why are the first 3 groups flipped?
Found on Wikipedia regarding UUID.
Other systems, notably Microsoft's marshalling of UUIDs in their COM/OLE libraries, use a mixed-endian format, whereby the first three components of the UUID are little-endian, and the last two are big-endian.
For example, 00112233-4455-6677-8899-aabbccddeeff is encoded as the bytes 33 22 11 00 55 44 77 66 88 99 aa bb cc dd ee ff
The first 4-byte block belongs to an Int32 value and the next two 2-byte blocks belong to Int16 values, which appear reversed in the Guid because of byte order. Perhaps you should try the other constructor that takes matching integer data types as parameters and gives a more intuitive ordering:
Guid g = new Guid(0xA, 0xB, 0xC,
                  new Byte[] { 0, 1, 2, 3, 4, 5, 6, 7 });
Console.WriteLine("{0:B}", g);
// The example displays the following output:
//    {0000000a-000b-000c-0001-020304050607}
Look at the source code of Guid.cs to see the structure behind it:
// Represents a Globally Unique Identifier.
public struct Guid : IFormattable, IComparable,
                     IComparable<Guid>, IEquatable<Guid>
{
    // Member variables
    private int   _a;  // <<== First group, 4 bytes
    private short _b;  // <<== Second group, 2 bytes
    private short _c;  // <<== Third group, 2 bytes
    private byte  _d;
    private byte  _e;
    private byte  _f;
    private byte  _g;
    private byte  _h;
    private byte  _i;
    private byte  _j;
    private byte  _k;
    ...
}
As you can see, internally Guid consists of a 32-bit integer, two 16-bit integers, and 8 individual bytes. On little-endian architectures the bytes of the first int and two shorts that follow it are stored in reverse order. The order of the remaining eight bytes remains unchanged.
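A quick way to convince yourself nothing is lost (my own sketch, not part of the original answer): the Guid(byte[]) constructor and ToByteArray() use the same mixed-endian layout, so the bytes round-trip exactly even though ToString() shows the first three groups flipped.

byte[] input = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 };
Guid g = new Guid(input);

Console.WriteLine(g);
// 03020100-0504-0706-0809-0a0b0c0d0e0f   (first three groups appear flipped)

Console.WriteLine(BitConverter.ToString(g.ToByteArray()));
// 00-01-02-03-04-05-06-07-08-09-0A-0B-0C-0D-0E-0F   (original byte order comes back)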

C#: AES error: Padding is invalid and cannot be removed. Same key and everything, help

I'm quite new to C#, so please be patient with me. I know this question has been asked a lot of times, but I couldn't find an answer to my problem.
I'm saving some data, and before writing it to a file I convert it to binary and store it in an array, which I encrypt and then write to the file. I encrypt the data in chunks of 32 bytes. In the same way I read the data in chunks of 32 bytes, then decrypt it, and this should repeat until the end of the file. But when it comes to decryption the following error is thrown:
Padding is invalid and cannot be removed.
I use the same key and IV (hardcoded just until I get it working).
Here is my encryption code, which works without problems:
//result
byte[] data = new byte[32];
//setup encryption (AES)
SymmetricAlgorithm aes = Aes.Create();
byte[] key = { 145, 12, 32, 245, 98, 132, 98, 214, 6, 77, 131, 44, 221, 3, 9,50};
byte[] iv = { 15, 122, 132, 5, 93, 198, 44, 31, 9, 39, 241, 49, 250, 188, 80, 7 };
ICryptoTransform encryptor = aes.CreateEncryptor(key, iv);
FileStream fStream = new FileStream(file, FileMode.OpenOrCreate, FileAccess.Write, FileShare.Read, 1024, false);
//prepare data to write (byte array 'data') ...
//encrypt
MemoryStream m = new MemoryStream();
using (Stream c = new CryptoStream(m, encryptor, CryptoStreamMode.Write))
    c.Write(data, 0, data.Length);
data = m.ToArray();
fStream.Write(data, 0, data.Length);
And here is my decryption code:
FileStream fStream = new FileStream(fileName, FileMode.Open, FileAccess.Read, FileShare.Read, 4096, false);
//setup encryption (AES)
SymmetricAlgorithm aes = Aes.Create();
byte[] key = { 145, 12, 32, 245, 98, 132, 98, 214, 6, 77, 131, 44, 221, 3, 9, 50 };
byte[] iv = { 15, 122, 132, 5, 93, 198, 44, 31, 9, 39, 241, 49, 250, 188, 80, 7 };
ICryptoTransform decryptor = aes.CreateDecryptor(key, iv);
//result
byte[] data = new byte[32];
//loop for reading the whole file ...
int len = fStream.Read(data, 0, 32);
//decrypt
MemoryStream m = new MemoryStream();
using (Stream c = new CryptoStream(m, decryptor, CryptoStreamMode.Write))
    c.Write(data, 0, data.Length); // The exception is thrown in this line
data = m.ToArray();
//using the decrypted data and then looping back to reading and decrypting...
I tried everything I could think of (which is not much because I'm very new to cryptography), I searched everywhere, and I couldn't find a solution to my problem. I also helped myself with the book C# in a Nutshell.
If anyone has ideas on why this could happen I'll be really thankful because I have no ideas.
Thank you for your time and answers.
EDIT:
It seems that the size of the encrypted data is 48 bytes (16 bytes more than the original). Why is that so? I thought it only adds bytes if the input is not a multiple of the block size (16 bytes; my data is 32 bytes). Is the encrypted data always larger, and by a constant amount (I need to know that in order to properly read and decrypt)?
Note: I can't directly use other streams because I need to have control over the output format and I believe it is also safer and faster to encrypt in memory.
Based on your edit:
EDIT: It seems that the size of the encrypted data is 48 bytes (16 bytes more than the original). Why is that so? I thought it only adds bytes if the input is not a multiple of the block size (16 bytes; my data is 32 bytes). Is the encrypted data always larger, and by a constant amount (I need to know that in order to properly read and decrypt)?
If the encrypted data is 48 bytes, that's 16 bytes larger than your original array. This makes sense because the default padding is PKCS7, which always pads to the next multiple of the block size, even when the input already matches the block size. If you wish to keep the output at exactly 32 bytes, just change the padding to None:
aes.Padding = PaddingMode.None;
You seem to be treating the length of the plaintext as the length of the ciphertext. That's not a safe assumption.
Why are you copying between FileStream and MemoryStream? You can pass a FileStream directly to the encryptor/decryptor.
In PKCS7 there is a minimum of one padding byte (the padding bytes store the number of padding bytes added). So the output size will be Ceil16(input.Length + 1), or equivalently (input.Length & ~15) + 16.
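To make the arithmetic concrete, here is a small sketch of my own (the Pkcs7CipherTextLength name is hypothetical), assuming AES's 16-byte block size:

// PKCS7 ciphertext length for a given plaintext length (16-byte blocks).
static int Pkcs7CipherTextLength(int plainLength)
{
    return (plainLength & ~15) + 16; // round down to a block boundary, then add one full block
}

// Pkcs7CipherTextLength(31) == 32
// Pkcs7CipherTextLength(32) == 48   <-- the 48 bytes observed in the question
// Pkcs7CipherTextLength(33) == 48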
The short of it is that AES encrypts messages in blocks of 16 bytes. If your message isn't an even multiple of 16 bytes, the algorithm needs to handle the last block differently; specifically, the last block must be "padded" with values the algorithm recognizes as padding (with PKCS7, each padding byte holds the number of padding bytes that were added).
You're doing that yourself, by putting the data into a fixed-length byte array. You padded the data yourself, but the decrypter is now attempting to de-pad the last block and getting byte values it doesn't recognize as the padding that its encrypter counterpart would have added.
The key is not to pad the message yourself. You can use the BitConverter class to convert primitive value types to and from byte arrays, and use that instead of rolling your own fixed-length byte array. Then, when you decrypt, you can read from the decryption stream up to the ciphertext length, but don't expect there to be that many actual bytes in the decrypted result.
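As an illustration of that last point (my own sketch, reusing the decryptor and fStream variables from the question's code and assuming the ciphertext was produced with PKCS7 padding): read the full ciphertext chunk, decrypt it through a CryptoStream in Read mode, and trust the byte count the stream actually returns rather than the ciphertext length.

// 48 bytes of ciphertext correspond to the original 32-byte chunk (see above).
byte[] cipher = new byte[48];
int cipherLen = fStream.Read(cipher, 0, cipher.Length);

using (MemoryStream ms = new MemoryStream(cipher, 0, cipherLen))
using (CryptoStream cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Read))
{
    byte[] plain = new byte[cipherLen];   // upper bound; the real payload is smaller
    int plainLen = 0, n;
    while ((n = cs.Read(plain, plainLen, plain.Length - plainLen)) > 0)
        plainLen += n;
    // plainLen is 32 here; only the first plainLen bytes are real data.
}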

Decoding RTP G.711 A-Law packet problem, difference between C# & C++

First, sorry for my English.
Using C#, I'm trying to decode an RTP A-law packet, but it gives me noise.
I checked my code against the Wireshark code (C++), which produces the voice without noise. I can't find the difference between the Wireshark code and my C# code, because I can't debug Wireshark, but I can compare the bytes produced by my code and by the Wireshark code.
I'll list the bytes produced by the two pieces of code, along with a sample of my code and of the Wireshark code.
For example:
when alaw_exp_table[data[i]] = -8
my code produces the bytes: 248, 255
the Wireshark code produces the bytes: 255, 248
You can see 248, 255 versus 255, 248; I thought it was a simple reversal, but the next example doesn't fit that pattern:
when alaw_exp_table[data[i]] = 8
my code produces the bytes: 8, 0
the Wireshark code produces the bytes: 0, 0
This is the Wireshark code:
int
decodeG711a(void *input, int inputSizeBytes, void *output, int *outputSizeBytes)
{
    guint8 *dataIn = (guint8 *)input;
    gint16 *dataOut = (gint16 *)output;
    int i;
    for (i = 0; i < inputSizeBytes; i++)
    {
        dataOut[i] = alaw_exp_table[dataIn[i]];
    }
    *outputSizeBytes = inputSizeBytes * 2;
    return 0;
}
static short[] alaw_exp_table= {
-5504, -5248, -6016, -5760, -4480, -4224, -4992, -4736,
-7552, -7296, -8064, -7808, -6528, -6272, -7040, -6784,
-2752, -2624, -3008, -2880, -2240, -2112, -2496, -2368,
-3776, -3648, -4032, -3904, -3264, -3136, -3520, -3392,
-22016,-20992,-24064,-23040,-17920,-16896,-19968,-18944,
-30208,-29184,-32256,-31232,-26112,-25088,-28160,-27136,
-11008,-10496,-12032,-11520, -8960, -8448, -9984, -9472,
-15104,-14592,-16128,-15616,-13056,-12544,-14080,-13568,
-344, -328, -376, -360, -280, -264, -312, -296,
-472, -456, -504, -488, -408, -392, -440, -424,
-88, -72, -120, -104, -24, -8, -56, -40,
-216, -200, -248, -232, -152, -136, -184, -168,
-1376, -1312, -1504, -1440, -1120, -1056, -1248, -1184,
-1888, -1824, -2016, -1952, -1632, -1568, -1760, -1696,
-688, -656, -752, -720, -560, -528, -624, -592,
-944, -912, -1008, -976, -816, -784, -880, -848,
5504, 5248, 6016, 5760, 4480, 4224, 4992, 4736,
7552, 7296, 8064, 7808, 6528, 6272, 7040, 6784,
2752, 2624, 3008, 2880, 2240, 2112, 2496, 2368,
3776, 3648, 4032, 3904, 3264, 3136, 3520, 3392,
22016, 20992, 24064, 23040, 17920, 16896, 19968, 18944,
30208, 29184, 32256, 31232, 26112, 25088, 28160, 27136,
11008, 10496, 12032, 11520, 8960, 8448, 9984, 9472,
15104, 14592, 16128, 15616, 13056, 12544, 14080, 13568,
344, 328, 376, 360, 280, 264, 312, 296,
472, 456, 504, 488, 408, 392, 440, 424,
88, 72, 120, 104, 24, 8, 56, 40,
216, 200, 248, 232, 152, 136, 184, 168,
1376, 1312, 1504, 1440, 1120, 1056, 1248, 1184,
1888, 1824, 2016, 1952, 1632, 1568, 1760, 1696,
688, 656, 752, 720, 560, 528, 624, 592,
944, 912, 1008, 976, 816, 784, 880, 848};
And this is my code:
public static void ALawDecode(byte[] data, out byte[] decoded)
{
    int size = data.Length;
    decoded = new byte[size * 2];
    for (int i = 0; i < size; i++)
    {
        // First byte is the less significant byte
        decoded[2 * i] = (byte)(alaw_exp_table[data[i]] & 0xff);
        // Second byte is the more significant byte
        decoded[2 * i + 1] = (byte)(alaw_exp_table[data[i]] >> 8);
    }
}
The alaw_exp_table is the same in my code and in the Wireshark code.
Please tell me what is wrong in my code that produces the noise.
Thanks in advance.
You are probably handling the endianness incorrectly.
Try swapping the two decoding operations in your C# sample, e.g.:
decoded[2 * i + 1] = (byte)(alaw_exp_table[data[i]] & 0xff);
decoded[2 * i] = (byte)(alaw_exp_table[data[i]] >> 8);
You are decoding 8-bit A-law samples into 16-bit signed PCM, so it would make sense for you to use an array of shorts for the output. This is close to what the C code is doing.
If you don't have a particular reason for using a byte array as output, I would suggest just making the A-law lookup table a short array and moving 16-bit signed values around instead of messing with byte ordering.
If you really do care about bytes and byte ordering, you need to get the byte ordering right, as @leppie says. This will depend on what you actually do with the output.
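A minimal sketch of that suggestion (my own illustration; the ALawDecodeToPcm name is hypothetical, and it reuses the alaw_exp_table from the question): decode straight into a short[] and leave byte order to whichever caller actually needs raw bytes.

public static short[] ALawDecodeToPcm(byte[] data)
{
    // One 16-bit signed PCM sample per 8-bit A-law input sample.
    short[] decoded = new short[data.Length];
    for (int i = 0; i < data.Length; i++)
    {
        decoded[i] = alaw_exp_table[data[i]];
    }
    return decoded;
}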

Convert C++ to C#

in c++:
byte des16[16];
....
byte *d = des16+8;
in c#?
byte des16[16];
[0] 207 'Ï' unsigned char
[1] 216 'Ø' unsigned char
[2] 108 'l' unsigned char
[3] 93 ']' unsigned char
[4] 249 'ù' unsigned char
[5] 249 'ù' unsigned char
[6] 100 'd' unsigned char
[7] 0 unsigned char
[8] 76 'L' unsigned char
[9] 50 '2' unsigned char
[10] 104 'h' unsigned char
[11] 118 'v' unsigned char
[12] 104 'h' unsigned char
[13] 191 '¿' unsigned char
[14] 171 '«' unsigned char
[15] 199 'Ç' unsigned char
after
byte *d = des16+8;
d = "L2hvh¿«Ç†¿æ^ òÎL2hvh¿«Ç"
C# (generally speaking) has no pointers. Maybe the following is what you are after:
byte[] des16 = new byte[16];
byte byteAtIndex8 = des16[8];
This code gives you the element at index 8.
If I read your code correctly, you are trying to get the address of the element at index 8. This is - generally speaking - not possible with C# (unless you use unsafe code).
I think this would be more appropriate (though it depends on how d is used):
byte[] des16 = new byte[16];
IEnumerable<byte> d = des16.Skip(8);
Using pure managed code, you cannot use pointers to locations. Since d takes a pointer to the 8th element of the array, the closest analog would be creating an enumeration of des16 skipping the first 8 items. If you are just iterating through the items, this would be the best choice.
I should also mention that Skip() is one of many extension methods available for arrays (and other IEnumerables) in .Net 3.5 (VS2008/VS2010) and up which I could only assume you were using. You wouldn't be able to use it if you are using .Net 2.0 (VS2003/VS2005).
If d is used to access the offset elements in des16 like an array, it could be converted to an array as well.
byte[] d = des16.Skip(8).ToArray();
Note this creates a separate instance of an array which contains the items in des16 excluding the first 8.
Otherwise it's not completely clear what the best use would be without seeing how it is used.
[edit]
It appears you are working with null-terminated strings in a buffer in .Net 2.0 possibly (if Skip() isn't available). If you want the string representation, you can convert it to a native string object.
byte[] des16 = new byte[16];
char[] chararr = Array.ConvertAll(des16, delegate(byte b) { return (char)b; }); //convert to an array of characters
string str = new String(chararr, 8, chararr.Length - 8); // create the string from the characters after index 8
byte[] des16 = new byte[16];
....
byte d = des16[8];
Unless you use unsafe code you cannot retrieve a pointer.
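For completeness, a sketch of the unsafe route mentioned above (my own example; it requires compiling with /unsafe and is rarely the right choice in idiomatic C#):

byte[] des16 = new byte[16];
unsafe
{
    fixed (byte* p = des16)    // pin the array so the GC cannot move it
    {
        byte* d = p + 8;       // the direct equivalent of des16 + 8 in the C++ code
        // d is only valid while the array stays pinned by this fixed block
    }
}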
@JeffMercado, thanks for opening my eyes.
In c++:
byte des16[16];
byte *d = des16+8;
In c#:
byte[] des16 = new byte[16];
byte[] b = new byte[8];
System.Array.Copy(des16, 8, b, 0, 8);
Pointers basically get converted; we can change them to a collection in C#.
If you need to convert a C++ collection (a string) to a byte[] collection in C#, you can use code like the following:
byte[] toBytes = Encoding.ASCII.GetBytes(somestring);
