I'm writing software in C# which takes as input a generic class made of primitive types and should generate a byte stream. This byte stream must be sent to a PLC buffer.
I've read many articles here on StackOverflow, which basically boil down to three solutions:
Using BinaryFormatter. This gives me a serialized binary stream (calling Deserialize on it returns the original class), but checking the buffer I discovered that I can't find my data (see details below). Moreover, I'm a little worried by the fact that, according to the Microsoft documentation, the class is deprecated.
Using BinaryWriter. This works correctly, but since I have a lot of different class declarations I would need to use reflection to retrieve each field type dynamically and serialize it, which sounds a little too complicated to me (I'm keeping it as a "plan B" solution; a rough sketch of what I mean is shown below).
Using TypeDescriptor to convert the object. It doesn't work at all; the runtime returns "TypeConverter is unable to convert" my class.
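To make "plan B" concrete, here is a minimal sketch of what I mean by the BinaryWriter + reflection approach (the helper name ToPlcBytes is just illustrative, and note that GetFields does not formally guarantee declaration order, which is part of my concern):
using System;
using System.IO;
using System.Reflection;

static byte[] ToPlcBytes(object obj)
{
    using (var ms = new MemoryStream())
    using (var bw = new BinaryWriter(ms))
    {
        // Walk the public instance fields and write each primitive value.
        foreach (FieldInfo f in obj.GetType().GetFields(BindingFlags.Public | BindingFlags.Instance))
        {
            object value = f.GetValue(obj);
            if (f.FieldType == typeof(short)) bw.Write((short)value);            // 2 bytes, little-endian
            else if (f.FieldType == typeof(char)) bw.Write((byte)(char)value);   // 1 byte per char, as the PLC expects
            else throw new NotSupportedException(f.FieldType.Name);
        }
        return ms.ToArray();
    }
}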
My BinaryFormatter attempt is as follows:
TYPE_S_TESTA test = new TYPE_S_TESTA();
test.b_tipo = 54;
using (MemoryStream ms = new MemoryStream())
{
    BinaryFormatter bf = new BinaryFormatter(); // BinaryFormatter is deprecated
    bf.Serialize(ms, test);
    ms.Seek(0, SeekOrigin.Begin);
    byte[] test3 = ms.ToArray();
}
The class to serialize is defined as follows:
[Serializable]
public class TYPE_S_TESTA
{
    public short b_tipo;
    public char b_modo;
    public char b_area;
    public char b_sorgente;
    public char b_destinatario;
    public short w_lunghezza;
    public short w_contatore;
    public short w_turno;
    public short w_tempo;
}
I already defined one value in my class for test purposes. I expected a 14-byte array with '54' inside (by the way, another question: what is the serialization order? I need exactly the same order as my field definitions). What I see with the debugger in the test3 buffer is instead:
_buffer {byte[512]} byte[]
[0] 0 byte
[1] 1 byte
[2] 0 byte
[3] 0 byte
[4] 0 byte
[5] 255 byte
[6] 255 byte
[7] 255 byte
[8] 255 byte
[9] 1 byte
[10] 0 byte
[11] 0 byte
[12] 0 byte
[13] 0 byte
[14] 0 byte
[15] 0 byte
[16] 0 byte
[17] 12 byte
[18] 2 byte
[19] 0 byte
[20] 0 byte
[21] 0 byte
[22] 71 byte
[23] 70 byte
[24] 97 byte
[25] 99 byte
So, no trace of my 54, and a 512-byte buffer (why is it so big?).
You can declare TYPE_S_TESTA as follows:
using System.Runtime.InteropServices;
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct TYPE_S_TESTA
{
    public short b_tipo;
    public char b_modo;
    public char b_area;
    public char b_sorgente;
    public char b_destinatario;
    public short w_lunghezza;
    public short w_contatore;
    public short w_turno;
    public short w_tempo;
}
You can convert an instance of a TYPE_S_TESTA to a byte array like this:
TYPE_S_TESTA test = new TYPE_S_TESTA();
test.b_tipo = 54;
int size = Marshal.SizeOf(typeof(TYPE_S_TESTA));
byte[] test3 = new byte[size];
IntPtr ptr = Marshal.AllocHGlobal(size);
try
{
    Marshal.StructureToPtr(test, ptr, false); // false: freshly allocated memory, nothing to release
    Marshal.Copy(ptr, test3, 0, size);
}
finally
{
    Marshal.FreeHGlobal(ptr); // free the unmanaged buffer
}
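With LayoutKind.Sequential and Pack = 1 the fields are marshaled in declaration order, and with the default ANSI marshaling each char should become a single byte, so Marshal.SizeOf should return the 14 bytes you expect, with 54 in the first two bytes on a little-endian machine. If you also need to read a reply from the PLC back into the struct, the reverse conversion is symmetrical; a minimal sketch, assuming the received buffer is at least size bytes long:
byte[] reply = test3; // e.g. the bytes received back from the PLC
IntPtr ptr2 = Marshal.AllocHGlobal(size);
try
{
    Marshal.Copy(reply, 0, ptr2, size);
    TYPE_S_TESTA decoded = (TYPE_S_TESTA)Marshal.PtrToStructure(ptr2, typeof(TYPE_S_TESTA));
    // decoded.b_tipo == 54
}
finally
{
    Marshal.FreeHGlobal(ptr2);
}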
I plan to send raw binary data from a GSM module to a Web API controller. I'm simulating sending the data with Fiddler. The data format is 8 bytes, e.g.
0x18 0x01 0x1B 0x02 0x12 0x10 0x2D 0x0A
I receive the data at the controller in a 16-byte array, and the data looks correct:
byte 0 = 49 (Ascii char 1) (binary 0011 0001)
byte 1 = 56 (Ascii char 8) (binary 0011 1000)
I need to combine both these bytes to create a single byte of 0x18 (binary 0001 1000)
Looking at the binary values, it looks like I need to shift byte 0 left 4 places, then use a bitwise operator to combine it with byte 1?
I'm a bit stuck, so any help would be appreciated.
Thank you
Using bit operators:
byte a = 49;                 // ASCII '1'
byte b = 56;                 // ASCII '8'
a <<= 4;                     // move a's low nibble to the high nibble: 0x10
b <<= 4;                     // shifting left then right by 4 clears b's high nibble,
b >>= 4;                     // leaving only its low nibble: 0x08
byte result = (byte)(b + a); // 0x10 + 0x08 = 0x18
Console.WriteLine("{0}", result); // prints 24 (0x18)
I've been at this for a few days now. My original (and eventual) goal was to use CommonCrypto on iOS to encrypt a password with a given IV and key, then successfully decrypt it using .NET. After tons of research and failures I've narrowed down my goal to simply producing the same encrypted bytes on iOS and .NET, then going from there.
I've created simple test projects in .NET (C#, framework 4.5) and iOS (8.1). Please note the following code is not intended to be secure, but rather to winnow down the variables in the larger process. Also, iOS is the variable here. The final .NET encryption code will be deployed by a client, so it's up to me to bring the iOS encryption in line. Unless this is confirmed impossible, the .NET code will not be changed.
The relevant .NET encryption code:
static byte[] EncryptStringToBytes_Aes(string plainText, byte[] Key, byte[] IV)
{
    byte[] encrypted;

    // Create an Aes object with the specified key and IV.
    using (Aes aesAlg = Aes.Create())
    {
        aesAlg.Padding = PaddingMode.PKCS7;
        aesAlg.KeySize = 256;
        aesAlg.BlockSize = 128;

        // Create an encryptor to perform the stream transform.
        ICryptoTransform encryptor = aesAlg.CreateEncryptor(Key, IV);

        // Create the streams used for encryption.
        using (MemoryStream msEncrypt = new MemoryStream())
        {
            using (CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write))
            {
                using (StreamWriter swEncrypt = new StreamWriter(csEncrypt))
                {
                    // Write all data to the stream.
                    swEncrypt.Write(plainText);
                }
                encrypted = msEncrypt.ToArray();
            }
        }
    }
    return encrypted;
}
The relevant iOS encryption code:
+(NSData*)AES256EncryptData:(NSData *)data withKey:(NSData*)key iv:(NSData*)ivector
{
    Byte keyPtr[kCCKeySizeAES256+1]; // Pointer with room for terminator (unused)

    // Pad to the required size
    bzero(keyPtr, sizeof(keyPtr));

    // fetch key data
    [key getBytes:keyPtr length:sizeof(keyPtr)];

    // -- IV LOGIC
    Byte ivPtr[16];
    bzero(ivPtr, sizeof(ivPtr));
    [ivector getBytes:ivPtr length:sizeof(ivPtr)];

    // Data length
    NSUInteger dataLength = data.length;

    // See the doc: For block ciphers, the output size will always be less than or equal to
    // the input size plus the size of one block. That's why we need to add the size of one block here.
    size_t bufferSize = dataLength + kCCBlockSizeAES128;
    void *buffer = malloc(bufferSize);

    size_t numBytesEncrypted = 0;
    CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
                                          keyPtr, kCCKeySizeAES256,
                                          ivPtr,
                                          data.bytes, dataLength,
                                          buffer, bufferSize,
                                          &numBytesEncrypted);

    if (cryptStatus == kCCSuccess) {
        return [NSData dataWithBytesNoCopy:buffer length:numBytesEncrypted];
    }

    free(buffer);
    return nil;
}
The relevant code for passing the passphrase, key, and IV in .NET and printing the result:
byte[] c_IV = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };
byte[] c_Key = { 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 };
String passPhrase = "X";
// Encrypt
byte[] encrypted = EncryptStringToBytes_Aes(passPhrase, c_Key, c_IV);
// Print result
for (int i = 0; i < encrypted.Count(); i++)
{
Console.WriteLine("[{0}] {1}", i, encrypted[i]);
}
The relevant code for passing the parameters and printing the result in iOS:
Byte c_iv[16] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };
Byte c_key[16] = { 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 };
NSString* passPhrase = @"X";
// Convert to data
NSData* ivData = [NSData dataWithBytes:c_iv length:sizeof(c_iv)];
NSData* keyData = [NSData dataWithBytes:c_key length:sizeof(c_key)];
// Convert string to encrypt to data
NSData* passData = [passPhrase dataUsingEncoding:NSUTF8StringEncoding];
NSData* encryptedData = [CryptoHelper AES256EncryptData:passData withKey:keyData iv:ivData];
long size = sizeof(Byte);
for (int i = 0; i < encryptedData.length / size; i++) {
    Byte val;
    NSRange range = NSMakeRange(i * size, size);
    [encryptedData getBytes:&val range:range];
    NSLog(@"[%i] %hhu", i, val);
}
Upon running the .NET code it prints out the following bytes after encryption:
[0] 194
[1] 154
[2] 141
[3] 238
[4] 77
[5] 109
[6] 33
[7] 94
[8] 158
[9] 5
[10] 7
[11] 187
[12] 193
[13] 165
[14] 70
[15] 5
Conversely, iOS prints the following after encryption:
[0] 77
[1] 213
[2] 61
[3] 190
[4] 197
[5] 191
[6] 55
[7] 230
[8] 150
[9] 144
[10] 5
[11] 253
[12] 253
[13] 158
[14] 34
[15] 138
I cannot for the life of me determine what is causing this difference. Some things I've already confirmed:
Both iOS and .NET can successfully decrypt their encrypted data.
The lines of code in the .NET project:
aesAlg.Padding = PaddingMode.PKCS7;
aesAlg.KeySize = 256;
aesAlg.BlockSize = 128;
do not affect the result. They can be commented out and the output is the same, so I assume these are the default values. I've only left them in to make it obvious that I'm matching iOS's encryption properties as closely as possible for this example.
If I print out the bytes in the iOS NSData objects "ivData" and "keyData" it produces the same list of bytes that I created them with- so I don't think this is a C <-> ObjC bridging problem for the initial parameters.
If I print out the bytes in the iOS variable "passData" it prints the same single byte as .NET (88). So I'm fairly certain they are starting the encryption with the exact same data.
Due to how concise the .NET code is, I've run out of obvious avenues of experimentation. My only thought is that someone may be able to point out a problem in my "AES256EncryptData:withKey:iv:" method. That code has been modified from the ubiquitous iOS AES256 code floating around, because the key we are provided is a byte array, not a string. I'm fairly well versed in ObjC but not nearly as comfortable with the C parts, so it's certainly possible I've fumbled the required modifications.
All help or suggestions would be greatly appreciated.
I notice you are using AES-256 but have a 128-bit key (16 bytes x 8 bits). You cannot count on different implementations padding a short key the same way; that behavior is undefined.
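For what it's worth, as far as I can tell the iOS code above ends up zero-padding the 16-byte key to 32 bytes (keyPtr is zeroed and only partially overwritten), while .NET will accept the 16-byte key as-is and perform AES-128. If the goal is only to make the two sides agree for this experiment (not for production use), one option might be to zero-pad the key on the .NET side as well; a minimal sketch:
// Sketch only: zero-extend the 16-byte key to 32 bytes so .NET also performs AES-256
// with the same effective key material the iOS code ends up using. Not for production.
byte[] paddedKey = new byte[32];
Array.Copy(c_Key, paddedKey, c_Key.Length); // remaining 16 bytes stay zero
byte[] encryptedWithPaddedKey = EncryptStringToBytes_Aes(passPhrase, paddedKey, c_IV);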
You're likely dealing with an issue of string encoding. In your iOS code I see that you are passing the string as UTF-8, which results in a one-byte representation of "X". .NET strings are UTF-16 internally, so depending on how the string is written to the stream you may end up with a different byte sequence for "X".
You can use How to convert a string to UTF8? to convert your string to a UTF-8 byte array in .NET. Try writing out the byte array of the plain-text string in both cases to confirm that you are in fact passing the same bytes.
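As a quick way to take the StreamWriter (and its encoding) out of the equation, you could encode the plaintext yourself and write the bytes directly to the CryptoStream. A minimal sketch of the middle of EncryptStringToBytes_Aes, assuming UTF-8 is the encoding both sides agree on (needs using System.Text;):
byte[] plainBytes = Encoding.UTF8.GetBytes(plainText);
using (MemoryStream msEncrypt = new MemoryStream())
{
    using (CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write))
    {
        csEncrypt.Write(plainBytes, 0, plainBytes.Length);
        csEncrypt.FlushFinalBlock(); // apply PKCS7 padding and flush the final block
    }
    encrypted = msEncrypt.ToArray();
}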
I start with an unsigned byte array and convert it to a signed one... so is the printed result correct?
byte[] unsigned = new byte[] {10,100,120,180,200,220,240};
sbyte[] signed = Utils.toSignedByteArray(unsigned);
And the printed output (I just append the values with a StringBuilder):
signed: [10,100,120,-76,-56,-36,-16]
unsigned : [10,100,120,180,200,220,240]
where:
public static sbyte[] toSignedByteArray(byte[] unsigned){
    sbyte[] signed = new sbyte[unsigned.Length];
    Buffer.BlockCopy(unsigned, 0, signed, 0, unsigned.Length);
    return signed;
}
If I change to this I get the same result.
sbyte[] signed = (sbyte[])(Array)unsigned;
Shouldn't -128 (signed) become 0, -118 become 10, and so on, rather than 10 (signed) = 10 (unsigned)!?
Because
sbyte -128 to 127
byte 0 to 255
So??
Signed integers are represented in the Two's complement system.
Examples:
Bits       Unsigned value   2's complement value
00000000     0                  0
00000001     1                  1
00000010     2                  2
01111110   126                126
01111111   127                127
10000000   128               −128
10000001   129               −127
10000010   130               −126
11111110   254                 −2
11111111   255                 −1
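So the conversion in the question just reinterprets the same bit patterns: values up to 127 are unchanged, and anything from 128 to 255 comes out as value − 256 (180 → −76, 200 → −56, and so on). A quick C# illustration:
byte u = 200;                    // bit pattern 1100 1000
sbyte s = unchecked((sbyte)u);   // same bits, read as two's complement
Console.WriteLine(s);            // -56 (i.e. 200 - 256)
Console.WriteLine((sbyte)10);    // 10 - values below 128 are unchanged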
In C++:
byte des16[16];
....
byte *d = des16+8;
In C#?
byte des16[16];
[0] 207 'Ï' unsigned char
[1] 216 'Ø' unsigned char
[2] 108 'l' unsigned char
[3] 93 ']' unsigned char
[4] 249 'ù' unsigned char
[5] 249 'ù' unsigned char
[6] 100 'd' unsigned char
[7] 0 unsigned char
[8] 76 'L' unsigned char
[9] 50 '2' unsigned char
[10] 104 'h' unsigned char
[11] 118 'v' unsigned char
[12] 104 'h' unsigned char
[13] 191 '¿' unsigned char
[14] 171 '«' unsigned char
[15] 199 'Ç' unsigned char
after
byte *d = des16+8;
d = "L2hvh¿«Ç†¿æ^ òÎL2hvh¿«Ç"
C# (generally speaking) has no pointers. Maybe the following is what you are after:
byte[] des16 = new byte[16];
byte byteAtIndex8 = des16[8];
This code gives you the element at index 8.
If I read your code correctly, you are trying to get the address of the element at index 8. This is - generally speaking - not possible with C# (unless you use unsafe code).
I think this would be more appropriate (though it depends on how d is used):
byte[] des16 = new byte[16];
IEnumerable<byte> d = des16.Skip(8);
Using pure managed code, you cannot use pointers to locations. Since d takes a pointer to the 8th element of the array, the closest analog would be creating an enumeration of des16 skipping the first 8 items. If you are just iterating through the items, this would be the best choice.
I should also mention that Skip() is one of many extension methods available for arrays (and other IEnumerables) in .NET 3.5 (VS2008) and up, which I assume you are using. You wouldn't be able to use it if you are on .NET 2.0 (VS2005) or earlier.
If d is used to access the offset elements in des16 like an array, it could be converted to an array as well.
byte[] d = des16.Skip(8).ToArray();
Note this creates a separate instance of an array which contains the items in des16 excluding the first 8.
Otherwise it's not completely clear what the best use would be without seeing how it is used.
[edit]
It appears you are working with null-terminated strings in a buffer, possibly on .NET 2.0 (if Skip() isn't available). If you want the string representation, you can convert it to a native string object.
byte[] des16 = new byte[16];
char[] chararr = Array.ConvertAll(des16, delegate(byte b) { return (char)b; }); // convert to an array of characters
string str = new String(chararr, 8, chararr.Length - 8); // create the string from index 8 onward
byte[] des16 = new byte[16];
....
byte d = des16[8];
Unless you use unsafe code you cannot retrieve a pointer.
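If you really do need the pointer semantics of the C++ version, a minimal sketch using unsafe code (the project must be compiled with /unsafe) might look like this:
byte[] des16 = new byte[16];
unsafe
{
    fixed (byte* p = des16)
    {
        byte* d = p + 8;   // same pointer arithmetic as the C++ byte *d = des16 + 8;
        byte first = d[0]; // only valid while inside the fixed block
    }
}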
@JeffMercado, thanks for opening my eyes.
In C++:
byte des16[16];
byte *d = des16+8;
In C#:
byte[] des16 = new byte[16];
byte[] b = new byte[8];
System.Array.Copy(des16, 8, b, 0, 8);
The pointer arithmetic is essentially replaced by copying the relevant elements into a new collection in C#.
And if the C++ code treats the buffer as a string, you can convert a string to a byte[] collection in C# like this:
byte[] toBytes = Encoding.ASCII.GetBytes(somestring);