I am having some trouble interpreting the results of two pieces of code.
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 0)]
struct MyStruct
{
    public byte b1;
    public char c2;
    public int i3;
}

public class Example
{
    public unsafe static void Main()
    {
        MyStruct myStruct = new MyStruct();
        byte* addr = (byte*)&myStruct;
        Console.WriteLine("Size: {0}", sizeof(MyStruct));
        Console.WriteLine("b1 Offset: {0}", &myStruct.b1 - addr);
        Console.WriteLine("c2 Offset: {0}", (byte*)&myStruct.c2 - addr);
        Console.WriteLine("i3 Offset: {0}", (byte*)&myStruct.i3 - addr);
        Console.ReadLine();
    }
}
The above code prints:
Size: 8
b1 Offset: 0
c2 Offset: 2
i3 Offset: 4
If I comment out public char c2; and Console.WriteLine("c2 Offset: {0}", (byte*)&myStruct.c2 - addr);, I get:
Size: 8
b1 Offset: 0
i3 Offset: 4
Now I think I can explain the second scenario: with Pack = 0 the packing size defaults to the size of the largest element of myStruct, so the layout is 1 byte + 3 bytes of padding + 4 bytes = 8.
But the same reasoning does not seem to apply to the first scenario. With a packing size of 4 bytes (the size of int), I expected (1 byte + 3 bytes of padding) + (2 bytes for the char + 2 bytes of padding) + (4 bytes for the int), i.e. a total size of 12 and offsets of 0, 4, 8.
What am I missing here?
Thanks
To understand alignment it might help to picture something reading your struct in X-byte chunks, where X is the type's alignment. In your examples X is 4. If no padding were added, reading the first 4 bytes of your first struct (the one with the char) would pick up the byte, the char, and then one byte of the next int field. Avoiding such partial reads of a field is exactly why padding is needed. To "fix" the problem, only one byte of padding is required: the first 4-byte read then covers the byte field, the padding byte, and the char field, and the next 4-byte read covers the int field. Adding the padding you expected would be wasteful, because the same goal is achieved with a smaller total size (8 bytes instead of your expected 12).
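To make that concrete, here is a minimal sketch (the name ExplicitStruct is made up for illustration, it is not part of your code) that pins the same three fields at the offsets the runtime chose; its size and offsets match your first program's output:

// requires using System.Runtime.InteropServices;
[StructLayout(LayoutKind.Explicit, Size = 8)]
struct ExplicitStruct
{
    [FieldOffset(0)] public byte b1; // 1 byte, followed by 1 byte of padding
    [FieldOffset(2)] public char c2; // 2 bytes, aligned on a 2-byte boundary
    [FieldOffset(4)] public int i3;  // 4 bytes, aligned on a 4-byte boundary
}

// In an unsafe context, sizeof(ExplicitStruct) is 8, and the offsets,
// computed the same way as in your program, come out as 0, 2, 4.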
Related
This might be a real beginner's question, but I've been reading about this and I'm finding it hard to understand.
This is a sample from the MSDN page about this subject (just a little smaller).
using System;

class SetByteDemo
{
    // Display the array contents in hexadecimal.
    public static void DisplayArray(Array arr, string name)
    {
        // Get the array element width; format the formatting string.
        int elemWidth = Buffer.ByteLength(arr) / arr.Length;
        string format = String.Format(" {{0:X{0}}}", 2 * elemWidth);

        // Display the array elements from right to left.
        Console.Write("{0,7}:", name);
        for (int loopX = arr.Length - 1; loopX >= 0; loopX--)
            Console.Write(format, arr.GetValue(loopX));
        Console.WriteLine();
    }

    public static void Main()
    {
        // This is the array to be modified with SetByte.
        short[] shorts = new short[2];

        Console.WriteLine("Initial values of arrays:\n");

        // Display the initial values of the array.
        DisplayArray(shorts, "shorts");

        // Set two individual bytes within the short array.
        Console.WriteLine("\n" +
            " Array values after setting byte 1 = 1 and byte 3 = 10\n");
        Buffer.SetByte(shorts, 1, 1);
        Buffer.SetByte(shorts, 3, 10);

        // Display the array again.
        DisplayArray(shorts, "shorts");
        Console.ReadKey();
    }
}
SetByte should be easy to understand, but if I print the shorts array before doing the SetByte operations, the array looks like this:
{short[2]}
[0]: 0
[1]: 0
After doing the first Buffer.SetByte(shorts, 1, 1); the array becomes
{short[2]}
[0]: 256
[1]: 0
And after setting Buffer.SetByte(shorts, 3, 10); the array becomes
{short[2]}
[0]: 256
[1]: 2560
At the end, in the example they print the array from right to left:
0A00 0100
I don't understand how this works; can someone give me some information about this?
The Buffer class allows you to manipulate memory as if you were using a void pointer in C; it's like a combination of memcpy, memset, and so on for manipulating memory in a fast way in .NET.
When you pass the "shorts" array, the Buffer class "sees" it as a pointer to four consecutive bytes (two shorts, each of them two bytes):
|[0][1]|[2][3]|
short short
So the uninitialized array looks like this:
|[0][0]|[0][0]|
short short
When you do Buffer.SetByte(shorts, 1, 1); you instruct the Buffer class to change the second byte (index 1) in the byte array, so it will be:
|[0][1]|[0][0]|
short short
If you convert the two bytes (0x00, 0x01) to a short it is 0x0100 (note that these are the two bytes one after the other, but in reverse order; that's because the platform is little-endian), or 256.
The second line basically does the same: Buffer.SetByte(shorts, 3, 10); changes the fourth byte (index 3) to 10:
|[0][1]|[0][10]|
short short
And then 0x00,0x0A as a short is 0x0A00 or 2560.
The .NET types use little endianness. That means that the first byte (0th, actually) of a short, int, etc. contains the least significant bits.
After setting the bytes, the array looks like this as a byte[]:
0, 1, 0, 10
As a short[] it is interpreted like this:
0 + 1*256 = 256, 0 + 10*256 = 2560
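If it helps, here is a small sketch (not part of the MSDN sample) that dumps the underlying bytes with Buffer.GetByte after the two SetByte calls, so you can see exactly which byte each short picks up:

short[] shorts = new short[2];
Buffer.SetByte(shorts, 1, 1);   // byte index 1 -> second byte of shorts[0]
Buffer.SetByte(shorts, 3, 10);  // byte index 3 -> second byte of shorts[1]

// Dump the raw bytes in memory order: prints 0 1 0 10.
for (int i = 0; i < Buffer.ByteLength(shorts); i++)
    Console.Write("{0} ", Buffer.GetByte(shorts, i));
Console.WriteLine();

// On a little-endian machine those bytes are read as:
// shorts[0] = 0 + 1 * 256 = 256, shorts[1] = 0 + 10 * 256 = 2560
Console.WriteLine("{0} {1}", shorts[0], shorts[1]);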
I think the part that people might struggle with is that the Buffer.SetByte() method addresses the array differently than a regular assignment with the array indexer [], which works in units of the element type (shorts, doubles, etc.) instead of bytes. To use your example:
the short array is usually seen as
arr = [xxxx, yyyy] (in base 16)
but the SetByte method "sees" it as:
arr = [xx, yy, zz, ww]
so a call like Buffer.SetByte(arr, 1, 5) addresses the second byte in the array, which is still inside the first short, and sets the value there; that's it.
The result should look like:
[05 00, 00 00] in hex, or [1280, 0].
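A quick way to see the difference between the indexer and SetByte (a sketch, assuming a little-endian machine such as x86/x64):

short[] arr = new short[2];

arr[0] = 5;                 // indexer: writes a whole short -> arr is [5, 0]
Console.WriteLine(arr[0]);  // 5

arr = new short[2];
Buffer.SetByte(arr, 1, 5);  // byte view: writes only byte index 1, the high byte
                            // of arr[0] on little-endian -> arr is [1280, 0]
Console.WriteLine(arr[0]);  // 1280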
Basically, explicit type casting means there is a possible loss of precision.
example :
short s = 256;
byte b = (byte) s;
Console.WriteLine(b);
// output : 0
or
short s = 257;
byte b = (byte) s;
Console.WriteLine(b);
// output : 1
or
short s = 1024;
byte b = (byte)s;
Console.WriteLine(b);
Console.ReadKey();
// output : 0
What is the reason behind this output?
A short is a 2-byte number and a byte is 1 byte!
When you cast from two bytes to one, you lose the high-order
8 bits: 1024 (short: "0000 0100" "0000 0000"),
which as a single byte becomes ("0000 0000") = 0.
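In other words, the cast simply keeps the lowest 8 bits, which you can check yourself with a mask (a small sketch, not from the answers above):

short s = 1024;                       // binary: 0000 0100 0000 0000
Console.WriteLine((byte)s);           // keeps only the low byte -> 0
Console.WriteLine((byte)(s & 0xFF));  // masking gives the same result -> 0

short s2 = 257;                       // binary: 0000 0001 0000 0001
Console.WriteLine((byte)s2);          // low byte is 1 -> 1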
The reason behind your output is simple:
Every number is represented as bits, and every 8 bits make 1 byte.
A byte holds numbers from 0 to 255.
If you convert a bigger type to a smaller one in programming, you lose bits, not just precision.
In your case you keep only the last 8 bits: whatever value is held in those last 8 bits (each bit being 1 or 0) is what you get; if there is nothing in them, you get 0.
P.S. Use the Windows calculator in programmer mode, or find a converter online, to see numbers as bits; it will make this clearer.
I do not get the correct result using this code. After entering 300 as an int, I get 44 as the converted byte value.
I was expecting 255, as that is the closest byte value to 300.
Console.Write("Enter int value - ");
val1 = Convert.ToInt32(Console.ReadLine());
// converting int to byte
bval1 = (byte) val1;
Console.WriteLine("int explicit conversion");
Console.WriteLine("byte - {0}", bval1);
You have just experienced a byte overflow. Try to use types that can actually hold the numbers you work with.
[edit]
It turns out the conversion can also be checked in C#:
bval1 = checked((byte)val1);
which throws the appropriate exception (OverflowException) when the value is too big.
A single unsigned byte can hold the range 0 to 255, or 0x00 to 0xFF. 300 is greater than 255, so it "wraps around", i.e. starts counting again from 0. 300 - 256 = 44; that's your wrap.
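To see both behaviours side by side, here is a sketch using the value 300 from the question:

int val = 300;                        // 300 = 0x12C; the low byte is 0x2C = 44

byte wrapped = (byte)val;             // unchecked by default: wraps around
Console.WriteLine(wrapped);           // 44  (300 - 256)

try
{
    Console.WriteLine(checked((byte)val));  // checked: throws instead of wrapping
}
catch (OverflowException)
{
    Console.WriteLine("overflow detected");
}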
I have a control that has a byte array in it.
Every now and then there are two bytes that tell me some info about the number of future items in the array.
So as an example I could have:
...
...
Item [4] = 7
Item [5] = 0
...
...
The value of this is clearly 7.
But what about this?
...
...
Item [4] = 0
Item [5] = 7
...
...
Any idea what that equates to (as a normal int)?
I went to binary and thought it might be 11100000000, which equals 1792. But I don't know if that is how it really works (i.e. does it use all 8 bits of each byte?).
Is there any way to know this without testing?
Note: I am using C# 3.0 and Visual Studio 2008.
BitConverter can easily convert the two bytes in a two-byte integer value:
// assumes byte[] Item = someObject.GetBytes():
short num = BitConverter.ToInt16(Item, 4); // makes a short
// out of Item[4] and Item[5]
A two-byte number has a low and a high byte. The high byte is worth 256 times as much as the low byte:
value = 256 * high + low;
So, for high=0 and low=7, the value is 7. But for high=7 and low=0, the value becomes 1792.
This of course assumes that the number is a simple 16-bit integer. If it's anything fancier, the above won't be enough. Then you need more knowledge about how the number is encoded, in order to decode it.
The order in which the high and low bytes appear is determined by the endianness of the byte stream. In big-endian, you will see high before low (at a lower address), in little-endian it's the other way around.
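Applied to the two examples from the question (a small sketch that assumes Item[4] is the low byte and Item[5] is the high byte, i.e. little-endian order):

byte[] Item = new byte[8];

Item[4] = 7; Item[5] = 0;
Console.WriteLine(256 * Item[5] + Item[4]);  // 7

Item[4] = 0; Item[5] = 7;
Console.WriteLine(256 * Item[5] + Item[4]);  // 1792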
You say "this value is clearly 7", but it depends entirely on the encoding. If we assume full-width bytes, then in little-endian, yes; 7, 0 is 7. But in big endian it isn't.
For little-endian, what you want is
int value = Item[i] | (Item[i + 1] << 8);
and for big-endian:
int value = (Item[i] << 8) | Item[i + 1];
But other encoding schemes are available; for example, some schemes use 7-bit arithmetic, with the 8th bit as a continuation bit. Some schemes (such as UTF-8) put the length information in the first byte (so the first byte has only limited room for data bits) and mark the remaining bytes of the sequence as continuation bytes.
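For completeness, a minimal sketch of such a 7-bit continuation ("varint") decoder, only relevant if your control actually uses that kind of encoding (which the question does not say):

// Reads a little-endian base-128 value starting at 'start':
// the low 7 bits of each byte carry data, the 8th bit says "more bytes follow".
static int ReadVarint(byte[] data, int start)
{
    int value = 0, shift = 0, i = start;
    while (true)
    {
        byte b = data[i++];
        value |= (b & 0x7F) << shift;
        if ((b & 0x80) == 0)   // continuation bit clear: this was the last byte
            return value;
        shift += 7;
    }
}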
If you simply want to put those two bytes next to each other in binary format, and see what that big number is in decimal, then you need to use this code:
ushort num;
if (BitConverter.IsLittleEndian)
{
    // reverse the two bytes so that Item[4] ends up as the high byte
    byte[] tempByteArray = new byte[2] { Item[5], Item[4] };
    num = BitConverter.ToUInt16(tempByteArray, 0);
}
else
{
    num = BitConverter.ToUInt16(Item, 4);
}
If you use short num = BitConverter.ToInt16(Item, 4); as seen in the accepted answer, you are assuming that the first bit of those two bytes is the sign bit (1 = negative and 0 = positive). That answer also assumes you are using a big endian system. See this for more info on the sign bit.
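The signed/unsigned difference only matters when the top bit of the high byte is set; a small sketch (on a little-endian machine, so the second byte is the most significant one):

byte[] bytes = { 0x00, 0xFF };                       // read as 0xFF00
Console.WriteLine(BitConverter.ToInt16(bytes, 0));   // -256 (sign bit set)
Console.WriteLine(BitConverter.ToUInt16(bytes, 0));  // 65280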
If those bytes are the "parts" of an integer, it works like that. But beware that the order of the bytes is platform specific, and it also depends on the length of the integer (16 bit = 2 bytes, 32 bit = 4 bytes, ...).
In case Item[5] is the most significant byte (MSB):
ushort result = BitConverter.ToUInt16(new byte[2] { Item[5], Item[4] }, 0);
int result = 256 * Item[5] + Item[4];
It is possible to get the size of a struct using
Marshal.SizeOf(typeof(mystruct));
Is it possible to get the size of part of a structure (for example, I pass the last field of a structure to a function and it returns the sum of the sizes of the preceding fields)?
As I understand it, is this possible using reflection?
[StructLayout(LayoutKind.Explicit)]
public struct SomeStruct
{
    [FieldOffset(0)]
    public byte b1;
    [FieldOffset(3)]
    public byte b2;
    [FieldOffset(7)]
    public int i1;
    [FieldOffset(12)]
    public int i2;
}

class Program
{
    static FieldOffsetAttribute GetFieldOffset(string fieldName)
    {
        return (FieldOffsetAttribute)typeof(SomeStruct)
            .GetField(fieldName)
            .GetCustomAttributes(typeof(FieldOffsetAttribute), false)[0];
    }

    static void Main(string[] args)
    {
        var someStruct = new SomeStruct { b1 = 1, b2 = 2, i1 = 3, i2 = 4 };

        Console.WriteLine("field b1 offset: {0}", GetFieldOffset("b1").Value);
        Console.WriteLine("field b2 offset: {0}", GetFieldOffset("b2").Value);
        Console.WriteLine("field i1 offset: {0}", GetFieldOffset("i1").Value);
        Console.WriteLine("field i2 offset: {0}", GetFieldOffset("i2").Value);

        Console.ReadLine();
    }
}
Prints:
field b1 offset: 0
field b2 offset: 3
field i1 offset: 7
field i2 offset: 12
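If all you need are the offsets (rather than the attribute objects), Marshal.OffsetOf returns the unmanaged offsets directly, and for LayoutKind.Explicit those match the FieldOffset values, so this short sketch prints the same numbers:

// requires using System.Runtime.InteropServices;
Console.WriteLine(Marshal.OffsetOf(typeof(SomeStruct), "i1"));  // 7
Console.WriteLine(Marshal.OffsetOf(typeof(SomeStruct), "i2"));  // 12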
The memory layout of a struct is not discoverable in .NET. The JIT compiler takes advantage of this and may re-order the fields of a struct to get a more efficient layout. This plays havoc with any attempt to use the struct in a way that bypasses the normal marshaling mechanisms. Yes, Marshal.SizeOf() produces a size for a struct, but that size is only valid after using Marshal.StructureToPtr().
A direct answer to your question: no, you can't discover the size, not even with reflection. By design.
Well, I'm not sure but I think it's impossible due to possible optimization and alignment issues.