When I enter the following code into the C# Immediate Window, it yields some unusual results, which I can only assume occur because System.Guid internally flips certain bytes:
When using an ordinal byte array with values 0 to 15:
new Guid(new byte[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15})
[03020100-0504-0706-0809-0a0b0c0d0e0f]
When using a non-ordinal byte array with the same values 0 to 15:
new Guid(new byte[] {3, 2, 1, 0, 5, 4, 7, 6, 8, 9, 10, 11, 12, 13, 14, 15})
[00010203-0405-0607-0809-0a0b0c0d0e0f]
Why are the first 3 groups flipped?
I found the following on Wikipedia regarding UUIDs:
Other systems, notably Microsoft's marshalling of UUIDs in their COM/OLE libraries, use a mixed-endian format, whereby the first three components of the UUID are little-endian, and the last two are big-endian.
For example, 00112233-4455-6677-8899-aabbccddeeff is encoded as the bytes 33 22 11 00 55 44 77 66 88 99 aa bb cc dd ee ff
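You can check this from C# itself. A minimal sketch that parses the Wikipedia example and dumps the bytes Guid actually stores:
// Parse the example GUID from the Wikipedia quote above...
Guid g = new Guid("00112233-4455-6677-8899-aabbccddeeff");
// ...and dump its byte representation: the first three groups come out
// little-endian, the last two keep their order.
Console.WriteLine(BitConverter.ToString(g.ToByteArray()));
// Output: 33-22-11-00-55-44-77-66-88-99-AA-BB-CC-DD-EE-FF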
The first 4-byte block belongs to an Int32 value, and the next two 2-byte blocks belong to Int16 values; they appear reversed in the Guid because of byte order. Perhaps you should try the other constructor, which takes matching integer data types as parameters and gives a more intuitive ordering:
Guid g = new Guid(0xA, 0xB, 0xC,
new Byte[] { 0, 1, 2, 3, 4, 5, 6, 7 } );
Console.WriteLine("{0:B}", g);
// The example displays the following output:
// {0000000a-000b-000c-0001-020304050607}
Look at the source code of Guid.cs to see the structure behind it:
// Represents a Globally Unique Identifier.
public struct Guid : IFormattable, IComparable,
                     IComparable<Guid>, IEquatable<Guid> {
    // Member variables
    private int   _a;   // <<== First group, 4 bytes
    private short _b;   // <<== Second group, 2 bytes
    private short _c;   // <<== Third group, 2 bytes
    private byte  _d;
    private byte  _e;
    private byte  _f;
    private byte  _g;
    private byte  _h;
    private byte  _i;
    private byte  _j;
    private byte  _k;
    ...
}
As you can see, internally a Guid consists of one 32-bit integer, two 16-bit integers, and eight individual bytes. On little-endian architectures the bytes of the int and of the two shorts that follow it are stored in reverse order, while the order of the remaining eight bytes is unchanged.
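A quick way to convince yourself of this is to look at how a plain int is laid out on your machine. On little-endian hardware the int 0x03020100 is stored as the bytes 00 01 02 03, exactly the first group the question observed:
// On x86/x64 this prints True, then 00-01-02-03: the int 0x03020100
// occupies the same four bytes the Guid constructor was given as 0, 1, 2, 3.
Console.WriteLine(BitConverter.IsLittleEndian);
Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(0x03020100)));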
I'm quite new to C#, so please be patient with me. I know this question has been asked a lot of times, but I couldn't find an answer to my problem.
I'm saving some data: before writing it to a file I convert it to binary and store it in an array, which I encrypt and then write to the file. I encrypt the data in chunks of 32 bytes. In the same way I read the data in chunks of 32 bytes, decrypt each chunk, and repeat until the end of the file. But when it comes to decryption, the following error is thrown:
Padding is invalid and cannot be removed.
I use the same key and IV for encryption and decryption (hardcoded just until I get it working).
Here is my encryption code, which works without problems:
//result
byte[] data = new byte[32];
//setup encryption (AES)
SymmetricAlgorithm aes = Aes.Create();
byte[] key = { 145, 12, 32, 245, 98, 132, 98, 214, 6, 77, 131, 44, 221, 3, 9, 50 };
byte[] iv = { 15, 122, 132, 5, 93, 198, 44, 31, 9, 39, 241, 49, 250, 188, 80, 7 };
ICryptoTransform encryptor = aes.CreateEncryptor(key, iv);
FileStream fStream = new FileStream(file, FileMode.OpenOrCreate, FileAccess.Write, FileShare.Read, 1024, false);
//prepare data to write (byte array 'data') ...
//encrypt
MemoryStream m = new MemoryStream();
using (Stream c = new CryptoStream(m, encryptor, CryptoStreamMode.Write))
c.Write(data, 0, data.Length);
data = m.ToArray();
fStream.Write(data, 0, data.Length);
And here is my decryption code:
FileStream fStream = new FileStream(fileName, FileMode.Open, FileAccess.Read, FileShare.Read, 4096, false);
//setup encryption (AES)
SymmetricAlgorithm aes = Aes.Create();
byte[] key = { 145, 12, 32, 245, 98, 132, 98, 214, 6, 77, 131, 44, 221, 3, 9, 50 };
byte[] iv = { 15, 122, 132, 5, 93, 198, 44, 31, 9, 39, 241, 49, 250, 188, 80, 7 };
ICryptoTransform decryptor = aes.CreateDecryptor(key, iv);
//result
byte[] data = new byte[32];
//loop for reading the whole file ...
int len = fStream.Read(data, 0, 32);
//decrypt
MemoryStream m = new MemoryStream();
using (Stream c = new CryptoStream(m, decryptor, CryptoStreamMode.Write))
c.Write(data, 0, data.Length); //The exception is thrown in this line
data = m.ToArray();
//using the decrypted data and then looping back to reading and decrypting...
I tried everything I could think of (which is not much, because I'm very new to cryptography), and I searched everywhere but couldn't find a solution to my problem. I also consulted the book C# in a Nutshell.
If anyone has an idea why this could happen, I'll be really thankful, because I'm out of ideas myself.
Thank you for your time and answers.
EDIT:
It seems that the size of the encrypted data is 48 bytes (16 bytes more than the original). Why is that? I thought padding was only added when the data is not a multiple of the block size (16 bytes; my data is 32 bytes). Is the ciphertext always larger than the plaintext, and by a constant amount? (I need to know that in order to properly read and decrypt.)
Note: I can't use other streams directly, because I need control over the output format, and I believe it is also safer and faster to encrypt in memory.
Based on your edit:
EDIT: It seems that the size of the encrypted data is 48 bytes (16 bytes more than the original). Why is that? I thought padding was only added when the data is not a multiple of the block size (16 bytes; my data is 32 bytes). Is the ciphertext always larger, and by a constant amount? (I need to know that in order to properly read and decrypt.)
If the encrypted data is 48 bytes, that's 16 bytes larger than your original array. This makes sense, because the algorithm pads the data: the default is PKCS7, which pads to the next multiple of the block size even when the input already matches the block size. If you wish to keep the output at exactly 32 bytes, just change the padding to None:
aes.Padding = PaddingMode.None;
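Note that the padding mode has to be set before the transforms are created, and it has to match on both sides. A sketch (your plaintext is then required to be an exact multiple of 16 bytes):
SymmetricAlgorithm aes = Aes.Create();
aes.Padding = PaddingMode.None; // plaintext length must now be a multiple of 16
ICryptoTransform encryptor = aes.CreateEncryptor(key, iv);
ICryptoTransform decryptor = aes.CreateDecryptor(key, iv);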
You seem to be treating the length of the plaintext as the length of the ciphertext. That's not a safe assumption.
Why are you copying between a FileStream and a MemoryStream? You can pass the FileStream directly to the CryptoStream.
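For example, a sketch using the variables from your encryption code, writing the ciphertext straight to the file:
//encrypt straight to the file; no intermediate MemoryStream needed
using (Stream c = new CryptoStream(fStream, encryptor, CryptoStreamMode.Write))
    c.Write(data, 0, data.Length);
// disposing the CryptoStream flushes the final (padded) block to fStream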
In PKCS7 there is a minimum of one padding byte (each padding byte stores the number of padding bytes), so the output size will be Ceil16(input.Length + 1), i.e. (input.Length & ~15) + 16.
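In code, assuming the default PKCS7 padding and AES's 16-byte block, a small helper (the name PaddedLength is mine) makes the relationship concrete:
static int PaddedLength(int plaintextLength)
{
    // PKCS7 always adds at least one byte, so a full extra block is
    // appended when the input is already a multiple of 16.
    return (plaintextLength & ~15) + 16;
}
// PaddedLength(31) == 32, PaddedLength(32) == 48, PaddedLength(33) == 48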
The short of it is that AES encrypts messages in blocks of 16 bytes. If your message isn't an even multiple of 16 bytes, the last block must be "padded" with values the algorithm recognizes as padding (zero in some schemes; under the default PKCS7, each padding byte holds the number of padding bytes added).
You're effectively doing that yourself by putting the data into a fixed-length byte array: you padded the data, and the decrypter is now attempting to strip the padding from the last block but finds byte values it doesn't recognize as the padding its encrypting counterpart would have added.
The key is not to pad the message yourself. You can use the BitConverter class to convert value types to and from byte arrays, and use that instead of rolling your own fixed-size buffer. Then, when you decrypt, read from the decryption stream up to the ciphertext length, but don't expect there to be that many actual bytes in the decrypted result.
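Putting it together for this question: each 32-byte plaintext chunk becomes a 48-byte ciphertext chunk, so the read side has to consume 48 bytes at a time. A sketch using the variable names from the question (and assuming, as in the original code, a fresh CryptoStream per chunk):
byte[] cipherChunk = new byte[48];
int len = fStream.Read(cipherChunk, 0, cipherChunk.Length);
MemoryStream m = new MemoryStream();
using (Stream c = new CryptoStream(m, decryptor, CryptoStreamMode.Write))
    c.Write(cipherChunk, 0, len); // write only the bytes actually read
byte[] data = m.ToArray(); // back to the original 32 bytes of plaintext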
First, sorry for my English.
Using C#, I'm trying to decode RTP A-law packets, but all I get is noise. I checked my code against the Wireshark code, which plays the voice without noise. I can't debug Wireshark, so I can't step through the difference between its C++ code and my C# code, but I can compare the bytes each one produces.
Below are the byte results from the two implementations, along with a sample of my code and the Wireshark code.
For example, when alaw_exp_table[data[i]] = -8:
my code produces the bytes 248, 255
the Wireshark code produces the bytes 255, 248
As you can see, 248,255 versus 255,248 looks like a simple byte swap, but the next example doesn't fit that pattern. When alaw_exp_table[data[i]] = 8:
my code produces the bytes 8, 0
the Wireshark code produces the bytes 0, 0
This is the Wireshark code:
int
decodeG711a(void *input, int inputSizeBytes, void *output, int *outputSizeBytes)
{
    guint8 *dataIn = (guint8 *)input;
    gint16 *dataOut = (gint16 *)output;
    int i;

    for (i = 0; i < inputSizeBytes; i++)
    {
        dataOut[i] = alaw_exp_table[dataIn[i]];
    }
    *outputSizeBytes = inputSizeBytes * 2;
    return 0;
}
static short[] alaw_exp_table= {
-5504, -5248, -6016, -5760, -4480, -4224, -4992, -4736,
-7552, -7296, -8064, -7808, -6528, -6272, -7040, -6784,
-2752, -2624, -3008, -2880, -2240, -2112, -2496, -2368,
-3776, -3648, -4032, -3904, -3264, -3136, -3520, -3392,
-22016,-20992,-24064,-23040,-17920,-16896,-19968,-18944,
-30208,-29184,-32256,-31232,-26112,-25088,-28160,-27136,
-11008,-10496,-12032,-11520, -8960, -8448, -9984, -9472,
-15104,-14592,-16128,-15616,-13056,-12544,-14080,-13568,
-344, -328, -376, -360, -280, -264, -312, -296,
-472, -456, -504, -488, -408, -392, -440, -424,
-88, -72, -120, -104, -24, -8, -56, -40,
-216, -200, -248, -232, -152, -136, -184, -168,
-1376, -1312, -1504, -1440, -1120, -1056, -1248, -1184,
-1888, -1824, -2016, -1952, -1632, -1568, -1760, -1696,
-688, -656, -752, -720, -560, -528, -624, -592,
-944, -912, -1008, -976, -816, -784, -880, -848,
5504, 5248, 6016, 5760, 4480, 4224, 4992, 4736,
7552, 7296, 8064, 7808, 6528, 6272, 7040, 6784,
2752, 2624, 3008, 2880, 2240, 2112, 2496, 2368,
3776, 3648, 4032, 3904, 3264, 3136, 3520, 3392,
22016, 20992, 24064, 23040, 17920, 16896, 19968, 18944,
30208, 29184, 32256, 31232, 26112, 25088, 28160, 27136,
11008, 10496, 12032, 11520, 8960, 8448, 9984, 9472,
15104, 14592, 16128, 15616, 13056, 12544, 14080, 13568,
344, 328, 376, 360, 280, 264, 312, 296,
472, 456, 504, 488, 408, 392, 440, 424,
88, 72, 120, 104, 24, 8, 56, 40,
216, 200, 248, 232, 152, 136, 184, 168,
1376, 1312, 1504, 1440, 1120, 1056, 1248, 1184,
1888, 1824, 2016, 1952, 1632, 1568, 1760, 1696,
688, 656, 752, 720, 560, 528, 624, 592,
944, 912, 1008, 976, 816, 784, 880, 848};
And this is my code:
public static void ALawDecode(byte[] data, out byte[] decoded)
{
    int size = data.Length;
    decoded = new byte[size * 2];
    for (int i = 0; i < size; i++)
    {
        //First byte is the less significant byte
        decoded[2 * i] = (byte)(alaw_exp_table[data[i]] & 0xff);
        //Second byte is the more significant byte
        decoded[2 * i + 1] = (byte)(alaw_exp_table[data[i]] >> 8);
    }
}
The alaw_exp_table is the same in my code and in the Wireshark code.
Please tell me: what is wrong in my code that produces the noise?
Thanks in advance.
You are probably handling the endianness incorrectly.
Try swapping the two decoding operations in your C# sample, e.g.:
decoded[2 * i + 1] = (byte)(alaw_exp_table[data[i]] & 0xff);
decoded[2 * i] = (byte)(alaw_exp_table[data[i]] >> 8);
You are decoding eight bit A-law samples into 16 bit signed PCM, so it would make sense for you to use an array of shorts for the output. This is close to what the C code is doing.
If you don't have a particular reason for using a byte array as output, I would suggest making the A-law lookup table a short array and just moving 16-bit signed values around instead of messing with byte ordering.
If you really do care about bytes and byte ordering, you need to get the byte ordering right, as @leppie says. What is right will depend on what you actually do with the output.
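For example, a sketch of the decoder returning 16-bit samples directly, mirroring the Wireshark C code (it reuses the alaw_exp_table from the question):
public static short[] ALawDecode(byte[] data)
{
    // One A-law byte expands to one 16-bit signed PCM sample.
    short[] decoded = new short[data.Length];
    for (int i = 0; i < data.Length; i++)
        decoded[i] = alaw_exp_table[data[i]];
    return decoded;
}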
In C++:
byte des16[16];
....
byte *d = des16+8;
What is the equivalent in C#?
byte des16[16];
[0] 207 'Ï' unsigned char
[1] 216 'Ø' unsigned char
[2] 108 'l' unsigned char
[3] 93 ']' unsigned char
[4] 249 'ù' unsigned char
[5] 249 'ù' unsigned char
[6] 100 'd' unsigned char
[7] 0 unsigned char
[8] 76 'L' unsigned char
[9] 50 '2' unsigned char
[10] 104 'h' unsigned char
[11] 118 'v' unsigned char
[12] 104 'h' unsigned char
[13] 191 '¿' unsigned char
[14] 171 '«' unsigned char
[15] 199 'Ç' unsigned char
After executing
byte *d = des16+8;
d = "L2hvh¿«Ç†¿æ^ òÎL2hvh¿«Ç"
C# (generally speaking) has no pointers. Maybe the following is what you are after:
byte[] des16 = new byte[16];
byte byteAtIndex8 = des16[8];
This code gives you the element at index 8.
If I read your code correctly, you are trying to get the address of the element at index 8. This is - generally speaking - not possible with C# (unless you use unsafe code).
I think this would be more appropriate (though it depends on how d is used):
byte[] des16 = new byte[16];
IEnumerable<byte> d = des16.Skip(8);
Using pure managed code, you cannot use pointers to locations. Since d takes a pointer to the 8th element of the array, the closest analog would be creating an enumeration of des16 skipping the first 8 items. If you are just iterating through the items, this would be the best choice.
I should also mention that Skip() is one of many extension methods available for arrays (and other IEnumerables) in .Net 3.5 (VS2008/VS2010) and up, which I can only assume you are using. You won't be able to use it on .Net 2.0 or earlier.
If d is used to access the offset elements in des16 like an array, it could be converted to an array as well.
byte[] d = des16.Skip(8).ToArray();
Note this creates a separate instance of an array which contains the items in des16 excluding the first 8.
Otherwise it's not completely clear what the best use would be without seeing how it is used.
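Another option, if you want a view over the tail of the array without copying, is ArraySegment<byte> (available since .Net 2.0, though enumerating it directly requires .Net 4.5):
byte[] des16 = new byte[16];
// A lightweight view over elements 8..15 of des16; no data is copied.
ArraySegment<byte> d = new ArraySegment<byte>(des16, 8, 8);
for (int i = 0; i < d.Count; i++)
    Console.Write("{0:x2} ", d.Array[d.Offset + i]);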
[edit]
It appears you may be working with null-terminated strings in a buffer, possibly on .Net 2.0 (if Skip() isn't available). If you want the string representation, you can convert the bytes to a native string object.
byte[] des16 = new byte[16];
char[] chararr = Array.ConvertAll(des16, delegate(byte b) { return (char)b; }); // convert to an array of characters
string str = new String(chararr, 8, chararr.Length - 8); // create the string from index 8 onward
byte[] des16 = new byte[16];
....
byte d = des16[8];
Unless you use unsafe code you cannot retrieve a pointer.
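If you really do need pointer semantics, a sketch using unsafe code (the project must be compiled with /unsafe) comes closest to the C++ original:
byte[] des16 = new byte[16];
unsafe
{
    fixed (byte* p = des16)
    {
        byte* d = p + 8; // the equivalent of des16 + 8 in C++
        d[0] = 42;       // writes through to des16[8]
    }
}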
@JeffMercado, thanks for opening my eyes.
In C++:
byte des16[16];
byte *d = des16+8;
In C#:
byte[] des16 = new byte[16];
byte[] b = new byte[8];
System.Array.Copy(des16, 8, b, 0, 8);
The pointer arithmetic basically gets converted into a copy: we can turn the slice into a new collection in C#.
If you need to convert a string into a byte[] collection in C#, you can use the code below.
byte[] toBytes = Encoding.ASCII.GetBytes(somestring);