Convert class object to byte[] - c#

I'm writing software in C# that takes as input a generic class made of primitive types and should generate a byte stream. This byte stream must be sent to a PLC buffer.
I've read many articles here on Stack Overflow, which basically boil down to three solutions:
Using BinaryFormatter. This gives me a serialized binary stream (calling Deserialize on it returns the original class), but checking the buffer I discovered that I can't find my data (see details below). Moreover, I'm a little worried that, according to the Microsoft documentation, the class is deprecated.
Using BinaryWriter. This works correctly, but since I have a lot of different class declarations I would need to use reflection to retrieve each type dynamically and serialize it, which sounds a little too complicated to me (I'm keeping it as a "plan B"; see the sketch below).
Using TypeDescriptor to convert the object. It doesn't work at all; the runtime returns "'TypeConverter' is unable to convert X class".
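For reference, the reflection-based "plan B" would look roughly like this (an untested sketch; PlcSerializer and ToPlcBytes are hypothetical names, and note that GetFields() does not formally guarantee declaration order, so a real implementation should enforce field ordering explicitly):
using System;
using System.IO;
using System.Reflection;

static class PlcSerializer // hypothetical helper
{
    public static byte[] ToPlcBytes(object obj)
    {
        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            foreach (FieldInfo f in obj.GetType().GetFields())
            {
                object v = f.GetValue(obj);
                if (v is short s) w.Write(s);            // 2 bytes, little-endian
                else if (v is char c) w.Write((byte)c);  // 1 byte per char, as a PLC layout usually expects
                else throw new NotSupportedException(f.FieldType.Name);
            }
            return ms.ToArray();
        }
    }
}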
My BinaryFormatter attempt is as follows:
TYPE_S_TESTA test = new TYPE_S_TESTA();
test.b_tipo = 54;
using (MemoryStream ms = new MemoryStream())
{
    BinaryFormatter bf = new BinaryFormatter(); // BinaryFormatter is deprecated
    bf.Serialize(ms, test);
    ms.Seek(0, SeekOrigin.Begin);
    byte[] test3 = ms.ToArray();
}
The class to serialize is defined as follows:
[Serializable]
public class TYPE_S_TESTA
{
    public short b_tipo;
    public char b_modo;
    public char b_area;
    public char b_sorgente;
    public char b_destinatario;
    public short w_lunghezza;
    public short w_contatore;
    public short w_turno;
    public short w_tempo;
}
I already set one value in the class for test purposes. I expected a 14-byte array with 54 somewhere in it (by the way, a further question: what's the serialization order? I need exactly the same order as my declaration). What I see with the debugger in the test3 buffer is instead:
_buffer {byte[512]} byte[]
[0] 0 byte
[1] 1 byte
[2] 0 byte
[3] 0 byte
[4] 0 byte
[5] 255 byte
[6] 255 byte
[7] 255 byte
[8] 255 byte
[9] 1 byte
[10] 0 byte
[11] 0 byte
[12] 0 byte
[13] 0 byte
[14] 0 byte
[15] 0 byte
[16] 0 byte
[17] 12 byte
[18] 2 byte
[19] 0 byte
[20] 0 byte
[21] 0 byte
[22] 71 byte
[23] 70 byte
[24] 97 byte
[25] 99 byte
So there is no trace of my 54, and the buffer is 512 bytes (why is it so big?).

You can declare TYPE_S_TESTA as follows:
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct TYPE_S_TESTA
{
    public short b_tipo;
    public char b_modo;
    public char b_area;
    public char b_sorgente;
    public char b_destinatario;
    public short w_lunghezza;
    public short w_contatore;
    public short w_turno;
    public short w_tempo;
}
You can convert an instance of a TYPE_S_TESTA to a byte array like this:
TYPE_S_TESTA test = new TYPE_S_TESTA();
test.b_tipo = 54;
int size = Marshal.SizeOf(typeof(TYPE_S_TESTA));
byte[] test3 = new byte[size];
IntPtr ptr = Marshal.AllocHGlobal(size);
// false: the freshly allocated block holds no previous structure to release
Marshal.StructureToPtr(test, ptr, false);
Marshal.Copy(ptr, test3, 0, size);
Marshal.FreeHGlobal(ptr); // release the unmanaged memory
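Converting back (byte array to struct) follows the same pattern; a sketch, assuming test3 was filled as above:
byte[] received = test3;
IntPtr ptr2 = Marshal.AllocHGlobal(size);
Marshal.Copy(received, 0, ptr2, size);
// reinterpret the unmanaged bytes as the struct
TYPE_S_TESTA back = (TYPE_S_TESTA)Marshal.PtrToStructure(ptr2, typeof(TYPE_S_TESTA));
Marshal.FreeHGlobal(ptr2);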

Related

How to interpret byte as int?

I have a byte array received from a C++ program.
arr[0..3] // a real32
arr[4]    // a uint8
How can I interpret arr[4] as an int?
(uint)arr[4] // Err: can't implicitly convert string to int.
BitConverter.ToUInt16(arr[4]) // Err: Invalid argument.
buff[0+4] as int // Err: must be reference or nullable type
Do I have to zero a consecutive byte to interpret it as a UInt16?
OK, here is the confusion. Initially, I defined my class.
byte[] buff;
buff = getSerialBuffer();
public class Reading
{
    public string scale_id;
    public string measure;
    public int measure_revised;
    public float wt;
}
Reading rd = new Reading();
// !! here is the confusion... !!
// Err: Can't implicitly convert 'string' to 'int'
rd.measure = string.Format("{0}", buff[0 + 4]);
// then I thought, maybe I should convert buff[4] to int first ?
// I threw all forms of conversion at it; none worked.
// but, later it turns out:
rd.measure_revised = buff[0+4]; // just ok.
So basically, I don't understand why this happens
rd.measure = string.Format("{0}", buff[0 + 4]);
//Err: Can't implicitly convert 'string' to 'int'
If buff[4] is a byte and a byte is a uint8, what does "can't implicitly convert string to int" mean? It confuses me.
You were almost there. Assuming you wanted a 32-bit int from the first 4 bytes (it's hard to interpret your question):
BitConverter.ToInt32(arr, 0);
This says to take the 4 bytes from arr, starting at index 0, and turn them into a 32-bit int.
Note that BitConverter uses the endianness of the computer, so on x86/x64 this will be little-endian.
If you want to use an explicit endianness, you'll need to construct the int by hand:
int littleEndian = arr[0] | (arr[1] << 8) | (arr[2] << 16) | (arr[3] << 24);
int bigEndian = arr[3] | (arr[2] << 8) | (arr[1] << 16) | (arr[0] << 24);
If instead you wanted a 32-bit floating-point number from the first 4 bytes, see Dmitry Bychenko's answer.
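If you are unsure which byte order the current machine uses, BitConverter exposes it directly:
// true on x86/x64 and most ARM systems
bool isLittleEndian = BitConverter.IsLittleEndian;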
If I've understood you right, you have a byte (not string) array:
byte[] arr = new byte[] {
    182, 243, 157, 63, // Real32 - C# Single or float (e.g. 1.234f)
    123                // uInt8 - C# byte (e.g. 123)
};
To get the float and byte back you can try BitConverter:
// read float / single starting from 0th byte
float realPart = BitConverter.ToSingle(arr, 0);
byte bytePart = arr[4];
Console.Write($"Real Part: {realPart}; Integer Part: {bytePart}");
Outcome:
Real Part: 1.234; Integer Part: 123
Same idea (BitConverter class) if we want to encode arr:
float realPart = 1.234f;
byte bytePart = 123;
// Concat and ToArray require a using System.Linq; directive
byte[] arr =
    BitConverter.GetBytes(realPart)
        .Concat(new byte[] { bytePart })
        .ToArray();
Console.Write(string.Join(" ", arr));
Outcome:
182 243 157 63 123

StructLayoutAttribute.Pack confusion

I have a problem interpreting the results of two pieces of code.
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 0)]
struct MyStruct
{
    public byte b1;
    public char c2;
    public int i3;
}

public class Example
{
    public unsafe static void Main()
    {
        MyStruct myStruct = new MyStruct();
        byte* addr = (byte*)&myStruct;
        Console.WriteLine("Size: {0}", sizeof(MyStruct));
        Console.WriteLine("b1 Offset: {0}", &myStruct.b1 - addr);
        Console.WriteLine("c2 Offset: {0}", (byte*)&myStruct.c2 - addr);
        Console.WriteLine("i3 Offset: {0}", (byte*)&myStruct.i3 - addr);
        Console.ReadLine();
    }
}
The above result is
Size : 8
b1 Offset: 0
c2 Offset: 2
i3 Offset: 4
If I comment out public char c2; and the corresponding Console.WriteLine for c2, I get
Size : 8
b1 Offset: 0
i3 Offset: 4
Now I think I can explain the second scenario: when Pack = 0, the default packing size is the size of the largest element of myStruct. So it is 1 byte + 3 bytes of padding + 4 bytes = 8.
But the same does not seem to apply to the first scenario. My expected result would be (1 byte + 3 bytes of padding) + (2 bytes for the char + 2 bytes of padding) + (4 bytes for the int). So the total size should be 12, with a packing size of 4 bytes (the size of an int), and the respective offsets would be 0, 4, 8.
What am I missing here?
Thanks
To understand alignment, it helps to imagine something reading your struct in X-byte chunks, where X is the type's alignment; in your examples X is 4. If no padding were added, reading the first 4 bytes of your first struct (the one with the char) would pick up the byte, the char, and then one byte of the next int field. Avoiding such partial reads of a field is exactly why padding is needed. To fix the problem, only one byte of padding is required: the first 4-byte read then covers the byte field, the char field, and one byte of padding, and the next 4-byte read covers the int field. Padding the way you expected would be wasteful, because the same goal is achieved with a smaller total size (8 bytes instead of your expected 12).
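To see this directly, here is a small sketch (a hypothetical variant of the struct from the question): with Pack = 1 the padding disappears, so this should print Size: 7 with c2 at offset 1 and i3 at offset 3.
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct MyPackedStruct
{
    public byte b1;
    public char c2;
    public int i3;
}

public class PackedExample
{
    public unsafe static void Main()
    {
        MyPackedStruct s = new MyPackedStruct();
        byte* addr = (byte*)&s;
        Console.WriteLine("Size: {0}", sizeof(MyPackedStruct));
        Console.WriteLine("c2 Offset: {0}", (byte*)&s.c2 - addr);
        Console.WriteLine("i3 Offset: {0}", (byte*)&s.i3 - addr);
    }
}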

Bit shifting c# what's going on here?

So I'm curious, what exactly is going on here?
static void SetUInt16(byte[] bytes, int offset, ushort val)
{
    bytes[offset] = (byte)((val & 0xff00) >> 8); // high byte first
    bytes[offset + 1] = (byte)(val & 0x00ff);    // then the low byte
}
Basically the idea of this code is to write a 16-bit integer into a byte buffer at a specific location, but the problem is that I'm trying to emulate it using
using (var ms = new MemoryStream())
using (var w = new BinaryWriter(ms))
{
    w.Write((ushort)1);
}
I'm expecting to read 1 but instead I'm getting 256. Is this an endianness issue?
The code writes a 16-bit integer in big-endian order: the upper byte is written first. That is not what BinaryWriter does; it writes in little-endian order.
When you decode the data, are you getting 256 where you expect 1? BinaryWriter.Write uses little-endian encoding, while your SetUInt16 method uses big-endian.
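If you actually need big-endian output, on newer frameworks (.NET Core 2.1+ / .NET 5+, an assumption about your target) BinaryPrimitives lets you state the byte order explicitly:
using System.Buffers.Binary;

byte[] bytes = new byte[2];
// produces { 0x00, 0x01 }, the same layout as the SetUInt16 method above
BinaryPrimitives.WriteUInt16BigEndian(bytes, 1);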

Convert C++ to C#

In C++:
byte des16[16];
....
byte *d = des16+8;
In C#?
byte des16[16];
[0] 207 'Ï' unsigned char
[1] 216 'Ø' unsigned char
[2] 108 'l' unsigned char
[3] 93 ']' unsigned char
[4] 249 'ù' unsigned char
[5] 249 'ù' unsigned char
[6] 100 'd' unsigned char
[7] 0 unsigned char
[8] 76 'L' unsigned char
[9] 50 '2' unsigned char
[10] 104 'h' unsigned char
[11] 118 'v' unsigned char
[12] 104 'h' unsigned char
[13] 191 '¿' unsigned char
[14] 171 '«' unsigned char
[15] 199 'Ç' unsigned char
after
byte *d = des16+8;
d = "L2hvh¿«Ç†¿æ^ òÎL2hvh¿«Ç"
C# (generally speaking) has no pointers. Maybe the following is what you are after:
byte[] des16 = new byte[16];
byte byteAtIndex8 = des16[8];
This code gives you the element at index 8.
If I read your code correctly, you are trying to get the address of the element at index 8. This is, generally speaking, not possible in C# (unless you use unsafe code).
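For completeness, with unsafe code enabled the literal equivalent does exist (a sketch):
unsafe
{
    fixed (byte* p = des16)
    {
        byte* d = p + 8; // points at the element with index 8, like the C++ code
    }
}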
I think this would be more appropriate (though it depends on how d is used):
byte[] des16 = new byte[16];
IEnumerable<byte> d = des16.Skip(8);
Using pure managed code, you cannot use pointers to locations. Since d takes a pointer to the element at index 8 of the array, the closest analog would be creating an enumeration of des16 that skips the first 8 items. If you are just iterating through the items, this would be the best choice.
I should also mention that Skip() is one of many extension methods available for arrays (and other IEnumerables) in .NET 3.5 (VS2008) and up, which I can only assume you are using. You wouldn't be able to use it on .NET 2.0 (VS2005).
If d is used to access the offset elements in des16 like an array, it could be converted to an array as well.
byte[] d = des16.Skip(8).ToArray();
Note this creates a separate instance of an array which contains the items in des16 excluding the first 8.
Otherwise it's not completely clear what the best use would be without seeing how it is used.
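Another copy-free option is ArraySegment<byte>, which represents a view over a slice of the original array (a sketch):
ArraySegment<byte> d = new ArraySegment<byte>(des16, 8, des16.Length - 8);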
[edit]
It appears you are working with null-terminated strings in a buffer, possibly on .NET 2.0 (if Skip() isn't available). If you want the string representation, you can convert it to a native string object:
byte[] des16 = new byte[16];
// convert to an array of characters
char[] chararr = Array.ConvertAll(des16, delegate(byte b) { return (char)b; });
// create the string from index 8 through the end of the array
string str = new String(chararr, 8, chararr.Length - 8);
byte[] des16 = new byte[16];
....
byte d = des16[8];
Unless you use unsafe code you cannot retrieve a pointer.
@JeffMercado, thanks for opening my eyes.
In C++:
byte des16[16];
byte *d = des16+8;
In C#:
byte[] des16 = new byte[16];
byte[] b = new byte[8];
System.Array.Copy(des16, 8, b, 0, 8);
The pointer arithmetic is essentially replaced by a copy: we can turn it into a new collection in C#.
And if you need to convert a string to a byte[] collection in C#, you can use code like this:
byte[] toBytes = Encoding.ASCII.GetBytes(somestring);

BinaryFormatter with MemoryStream Question

I am testing BinaryFormatter to see how it will work for me, and I have a simple question: when using it with the string HELLO and converting the MemoryStream to an array, it gives me a 29-element array, with only five of the bytes being the actual data, towards the end:
BinaryFormatter bf = new BinaryFormatter();
MemoryStream ms = new MemoryStream();
byte[] bytes;
string originalData = "HELLO";
bf.Serialize(ms, originalData);
ms.Seek(0, 0);
bytes = ms.ToArray();
returns
- bytes {Dimensions:[29]} byte[]
[0] 0 byte
[1] 1 byte
[2] 0 byte
[3] 0 byte
[4] 0 byte
[5] 255 byte
[6] 255 byte
[7] 255 byte
[8] 255 byte
[9] 1 byte
[10] 0 byte
[11] 0 byte
[12] 0 byte
[13] 0 byte
[14] 0 byte
[15] 0 byte
[16] 0 byte
[17] 6 byte
[18] 1 byte
[19] 0 byte
[20] 0 byte
[21] 0 byte
[22] 5 byte
[23] 72 byte
[24] 69 byte
[25] 76 byte
[26] 76 byte
[27] 79 byte
[28] 11 byte
Is there a way to only return the data encoded as bytes without all the extraneous information?
All of that extraneous information tells the other BinaryFormatter (the one that will deserialize the object) what type of object is being deserialized (in this case, System.String). Depending on the type, it includes other information needed to reconstruct the object (for instance, if it were a StringBuilder, the Capacity would also be encoded in there).
If all you want to do is stuff a string into a MemoryStream buffer:
using (MemoryStream ms = new MemoryStream())
using (TextWriter writer = new StreamWriter(ms))
{
    writer.Write("HELLO");
    writer.Flush();
    byte[] bytes = ms.ToArray();
}
For a simple string, use a BinaryWriter. The overhead will be reduced to a small length prefix.
BinaryFormatter is intended for serializing (complex) object clusters and needs some auxiliary data structures to do that.
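A minimal sketch of that BinaryWriter alternative: the payload becomes just a length prefix followed by the string's bytes.
using (MemoryStream ms = new MemoryStream())
using (BinaryWriter writer = new BinaryWriter(ms))
{
    writer.Write("HELLO");       // writes a 7-bit-encoded length, then the characters
    writer.Flush();
    byte[] bytes = ms.ToArray(); // 6 bytes: 0x05 'H' 'E' 'L' 'L' 'O'
}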
It depends what you actually want. You can get a UTF8 byte array from a string with Encoding.UTF8.GetBytes.
You shouldn't strip away all that "extraneous" information. The deserializer needs it on the other end when you want to reconstitute the object from the serialized data.
Are you just trying to convert the string to a byte array? If that is your goal, you can do something more like:
byte[] bits = System.Text.Encoding.UTF8.GetBytes("HELLO");
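And the reverse direction, to get the string back from those bytes:
string s = System.Text.Encoding.UTF8.GetString(bits); // "HELLO"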
