Can anyone tell me the size of the following structure, and how to calculate it?
[StructLayoutAttribute(LayoutKind.Sequential, Pack = 2)]
public struct SomeStruct
{
    public short sh;
    public int i;
    public byte b;
    public ushort u;

    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 7)]
    public byte[] arr;
}
I suppose you want to understand how the memory is allocated; otherwise you can just use:
System.Runtime.InteropServices.Marshal.SizeOf(typeof(SomeStruct))
Well, you chose to pack on a 2-byte boundary, so memory is basically allocated in steps of two. A short (signed or unsigned) takes 2 bytes and an int takes 4. A byte would take just one, but because of the packing you specified it effectively takes 2 (a pad byte is inserted before the following ushort). For the same reason, the 7-byte array counts as 8.
So we have
2+4+2+2+8 = 18 bytes
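To see where the pad bytes actually land, you can ask the marshaller for each field's offset. A quick sketch reusing the struct above (Marshal.OffsetOf reports the unmanaged layout):

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 2)]
public struct SomeStruct
{
    public short sh;
    public int i;
    public byte b;
    public ushort u;
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 7)]
    public byte[] arr;
}

class OffsetDemo
{
    static void Main()
    {
        // With Pack = 2: sh at 0, i at 2, b at 6, u at 8 (one pad byte
        // after b), arr at 10, plus one trailing pad byte -> 18 total.
        foreach (var name in new[] { "sh", "i", "b", "u", "arr" })
            Console.WriteLine("{0,-3} offset {1}",
                name, Marshal.OffsetOf(typeof(SomeStruct), name));
        Console.WriteLine("size {0}", Marshal.SizeOf(typeof(SomeStruct)));
    }
}
```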
You can get the exact size of this structure using the following code:
System.Runtime.InteropServices.Marshal.SizeOf(typeof(SomeStruct))
Try this:
int size = System.Runtime.InteropServices.Marshal.SizeOf(new SomeStruct());
Console.WriteLine(size);
or, from a related post, this:
int size;
unsafe
{
    size = sizeof(SomeStruct);
}
The Marshal.SizeOf call will yield 18, as Felice Pollano has explained. (Note that the unsafe sizeof variant only compiles for structs with no managed fields; since SomeStruct contains a byte[], it won't compile for this particular struct, though it works for purely blittable ones.)
Good day. For a current project I need to know how data types are represented as bytes. For example, if I use:
long three = 500;
var bytes = BitConverter.GetBytes(three);
I get the values 244, 1, 0, 0, 0, 0, 0, 0. I get that it is a 64-bit value and that 8 bits go into a byte, thus there are 8 bytes. But how do 244 and 1 make up 500? I tried Googling it, but all I get is "use BitConverter". I need to know how BitConverter works under the hood. If anybody can point me to an article or explain how this stuff works, it would be appreciated.
It's quite simple.
BitConverter.GetBytes((long)1); // {1,0,0,0,0,0,0,0};
BitConverter.GetBytes((long)10); // {10,0,0,0,0,0,0,0};
BitConverter.GetBytes((long)100); // {100,0,0,0,0,0,0,0};
BitConverter.GetBytes((long)255); // {255,0,0,0,0,0,0,0};
BitConverter.GetBytes((long)256); // {0,1,0,0,0,0,0,0}; this 1 is 256
BitConverter.GetBytes((long)500); // {244,1,0,0,0,0,0,0}; this is your 500 = 244 + 1 * 256
If you need the source code, you should check Microsoft's GitHub, since the implementation is open source :)
https://github.com/dotnet
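The same reassembly can also be done by hand: each byte is a base-256 digit, with the least significant digit first. A minimal sketch:

```csharp
using System;

class LittleEndianDemo
{
    static void Main()
    {
        byte[] bytes = BitConverter.GetBytes((long)500); // {244,1,0,0,0,0,0,0}
        long sum = 0;
        for (int i = 0; i < bytes.Length; i++)
            sum += (long)bytes[i] << (8 * i); // byte i is worth 256^i
        Console.WriteLine(sum); // 244 + 1 * 256 = 500
    }
}
```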
From the source code:
// Converts a long into an array of bytes with length
// eight.
[System.Security.SecuritySafeCritical]  // auto-generated
public unsafe static byte[] GetBytes(long value)
{
    Contract.Ensures(Contract.Result<byte[]>() != null);
    Contract.Ensures(Contract.Result<byte[]>().Length == 8);

    byte[] bytes = new byte[8];
    fixed (byte* b = bytes)
        *((long*)b) = value;
    return bytes;
}
This might be a real beginner's question, but I've been reading about this and I'm finding it hard to understand.
This is a sample from the MSDN page about this subject (just a little smaller).
using System;

class SetByteDemo
{
    // Display the array contents in hexadecimal.
    public static void DisplayArray(Array arr, string name)
    {
        // Get the array element width; build the format string.
        int elemWidth = Buffer.ByteLength(arr) / arr.Length;
        string format = String.Format(" {{0:X{0}}}", 2 * elemWidth);

        // Display the array elements from right to left.
        Console.Write("{0,7}:", name);
        for (int loopX = arr.Length - 1; loopX >= 0; loopX--)
            Console.Write(format, arr.GetValue(loopX));
        Console.WriteLine();
    }

    public static void Main()
    {
        // This is the array to be modified with SetByte.
        short[] shorts = new short[2];

        Console.WriteLine("Initial values of arrays:\n");

        // Display the initial values of the array.
        DisplayArray(shorts, "shorts");

        // Set individual bytes inside the short array.
        Console.WriteLine("\n" +
            "  Array values after setting byte 1 = 1 and byte 3 = 10\n");
        Buffer.SetByte(shorts, 1, 1);
        Buffer.SetByte(shorts, 3, 10);

        // Display the array again.
        DisplayArray(shorts, "shorts");
        Console.ReadKey();
    }
}
SetByte should be easy to understand, but if I print the shorts array before doing the SetByte operation, the array looks like this:
{short[2]}
[0]: 0
[1]: 0
After doing the first Buffer.SetByte(shorts, 1, 1); the array becomes
{short[2]}
[0]: 256
[1]: 0
And after setting Buffer.SetByte(shorts, 3, 10); the array becomes
{short[2]}
[0]: 256
[1]: 2560
At the end, in the example they print the array from right to left:
0A00 0100
I don't understand how this works; can someone point me to some information about it?
The Buffer class allows you to manipulate memory as if you were using a void pointer in C; it's like a combination of memcpy, memset, and so on for manipulating memory in a fast way in .NET.
When you pass the shorts array, the Buffer class "sees" it as a pointer to four consecutive bytes (two shorts, each of them two bytes):
|[0][1]|[2][3]|
short short
So the uninitialized array looks like this:
|[0][0]|[0][0]|
short short
When you do Buffer.SetByte(shorts, 1, 1); you instruct the Buffer class to change the second byte in the byte array, so it will be:
|[0][1]|[0][0]|
short short
If you convert the two bytes (0x00, 0x01) to a short it is 0x0100 (note that the two bytes appear one after the other but in reverse order; that's because the CPU is little-endian), or 256.
The second line does basically the same: Buffer.SetByte(shorts, 3, 10); changes the fourth byte (index 3) to 10:
|[0][1]|[0][10]|
short short
And then 0x00,0x0A as a short is 0x0A00 or 2560.
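You can check this byte-by-byte view from code; Buffer.GetByte reads the underlying bytes back out. A small sketch using the same calls as the question (the expected values hold on a little-endian machine):

```csharp
using System;

class SetByteView
{
    static void Main()
    {
        short[] shorts = new short[2];
        Buffer.SetByte(shorts, 1, 1);
        Buffer.SetByte(shorts, 3, 10);

        // The raw bytes, in memory order: 0 1 0 10
        for (int i = 0; i < Buffer.ByteLength(shorts); i++)
            Console.Write("{0} ", Buffer.GetByte(shorts, i));
        Console.WriteLine();

        Console.WriteLine(shorts[0]); // 0x0100 = 256
        Console.WriteLine(shorts[1]); // 0x0A00 = 2560
    }
}
```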
The .NET runtime on common hardware uses little endianness. That means that the first byte (0th, actually) of a short, int, etc. contains the least significant bits.
After setting the array it seems like this as byte[]:
0, 1, 0, 10
As short[] it is interpreted like this:
0 + 1*256 = 256, 0 + 10*256 = 2560
I think the part that people might struggle with is that the Buffer.SetByte() method addresses the array differently than a regular assignment with the array indexer [], which steps through the array in units of the element type (short/double/etc.) instead of bytes. To use your example:
the short array is usually seen as
arr = [xxxx, yyyy] (in base 16)
but the SetByte method "sees" it as:
arr = [xx, yy, zz, ww]
so a call like Buffer.SetByte(arr, 1, 5) addresses the second byte in the array, which is still inside the first short, and sets the value there.
The result would look like:
[0500, 0000] in hex, or [1280, 0].
What is the fastest way to convert a byte[] to float[] and vice versa (without a loop, of course)?
I'm using Buffer.BlockCopy now, but then I need double the memory. I would like some kind of cast.
I need to do this conversion just to send the data through a socket and reconstruct the array at the other end.
Surely msarchet's proposal makes copies too. If you don't want to copy, you are talking about changing the way .NET thinks about a memory area.
But I don't think what you want is possible, as bytes and floats are represented totally differently in memory. A byte uses exactly one byte of memory, but a float uses 4 bytes (32 bits).
If you don't have the memory to store your data twice, just keep the data in the type you use most, and convert the individual values you actually need, when you use them.
How do you want to convert a float (which can represent a value between ±1.5 × 10^−45 and ±3.4 × 10^38) into a byte (which can represent a value between 0 and 255) anyway?
(See more info here:
byte: http://msdn.microsoft.com/en-us/library/5bdb6693(v=VS.100).aspx
float: http://msdn.microsoft.com/en-us/library/b1e65aza.aspx
More about floating types in .NET here: http://csharpindepth.com/Articles/General/FloatingPoint.aspx
You can use StructLayout to achieve this (from Stack Overflow question C# unsafe value type array to byte array conversions):
[StructLayout(LayoutKind.Explicit)]
struct UnionArray
{
    [FieldOffset(0)]
    public Byte[] Bytes;

    [FieldOffset(0)]
    public float[] Floats;
}

static void Main(string[] args)
{
    // From bytes to floats - works
    byte[] bytes = { 0, 1, 2, 4, 8, 16, 32, 64 };
    UnionArray arry = new UnionArray { Bytes = bytes };
    for (int i = 0; i < arry.Bytes.Length / 4; i++)
        Console.WriteLine(arry.Floats[i]);
}
IEnumerable<float> ToFloats(byte[] bytes)
{
    for (int i = 0; i < bytes.Length; i += 4)
        yield return BitConverter.ToSingle(bytes, i);
}
Two ways if you have access to LINQ (note that Cast<float>() throws an InvalidCastException at runtime when applied to bytes, so project with Select instead):
var floatarray = ByteArray.Select(b => (float)b).ToArray();
or just using Array Functions
var floatarray = Array.ConvertAll(ByteArray, item => (float)item);
I have a file format I'm trying to write to using C#. The format encodes integer-based RGB color values as a floating point. It also stores the values as big-endian. I found an example of what I'm trying to do, written in php, here: http://www.colourlovers.com/ase.phps
I can convert the endian-ness easily. I know this code is very verbose, but I'm just using it to watch the bits swap during troubleshooting.
private uint SwapEndian(uint host) {
    uint ReturnValue;
    uint FirstHalf;
    uint LastHalf;

    FirstHalf = (host & 0xFFFF0000);
    FirstHalf = (FirstHalf >> 16);

    LastHalf = (host & 0x0000FFFF);
    LastHalf = (LastHalf << 16);

    ReturnValue = (FirstHalf | LastHalf);
    return ReturnValue;
}
C# won't let me perform bit-shifts on floats. Converting to an int or uint to call my SwapEndian method above loses the encoding information the file format requires.
So, how would you take a floating point number and change its endian-ness without losing the exponent data?
You could just use:
var bytes = BitConverter.GetBytes(floatVal);
And reverse the array (assuming the CPU is little-endian, which you can check), or simply access the array in the order you need.
There is also a way to do this with unsafe code, treating the float* as a byte* so you can get the bytes in the order you want.
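The unsafe variant mentioned above could look like this sketch (it needs to be compiled with /unsafe; the hypothetical helper name is my own):

```csharp
using System;

class FloatBytesUnsafe
{
    // Reads the 4 bytes of a float in reverse order via a byte*,
    // which yields big-endian order on a little-endian CPU.
    static unsafe byte[] ToBigEndianBytes(float value)
    {
        byte[] result = new byte[4];
        byte* p = (byte*)&value;
        for (int i = 0; i < 4; i++)
            result[i] = p[3 - i]; // reverse the byte order
        return result;
    }

    static void Main()
    {
        // 1.0f is 0x3F800000, so on a little-endian CPU this
        // prints the bytes as 3F-80-00-00.
        Console.WriteLine(BitConverter.ToString(ToBigEndianBytes(1.0f)));
    }
}
```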
I'm slightly confused as to why you're shifting by 16 bits rather than 8. Nor do I understand why you need RGB as a float (the components are generally 1 byte each). But anyhow,
you can use a 'fake' union to treat a float as a uint:
[StructLayout(LayoutKind.Explicit)]
public struct FloatIntUnion
{
    [FieldOffset(0)]
    public float f;

    [FieldOffset(0)]
    public uint i;
}
This allows you to assign the float, perform bitwise operations on the uint part of the 'union', and then use the float again as the final value.
However I would probably just use:
var bytes = BitConverter.GetBytes(RGB);
if (BitConverter.IsLittleEndian)
    Array.Reverse(bytes);
return bytes;
until performance actually became an issue (profile first).
In the documentation of hardware that allows us to control it via UDP/IP,
I found the following fragment:
In this communication protocol, DWORD is a 4 bytes data, WORD is a 2 bytes data,
BYTE is a single byte data. The storage format is little endian, namely 4 bytes (32bits) data is stored as: d7-d0, d15-d8, d23-d16, d31-d24; double bytes (16bits) data is stored as: d7-d0 , d15-d8.
I am wondering how this translates to C#?
Do I have to convert stuff before sending it over?
For example, if I want to send over a 32 bit integer, or a 4 character string?
C# itself doesn't define the endianness. Whenever you convert to bytes, however, you're making a choice. The BitConverter class has an IsLittleEndian field to tell you how it will behave, but it doesn't give you the choice. The same goes for BinaryReader/BinaryWriter.
My MiscUtil library has an EndianBitConverter class which allows you to define the endianness; there are similar equivalents for BinaryReader/Writer. No online usage guide I'm afraid, but they're trivial :)
(EndianBitConverter also has a piece of functionality which isn't present in the normal BitConverter, which is to do conversions in-place in a byte array.)
You can also use
IPAddress.NetworkToHostOrder(...)
For short, int or long.
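For example, HostToNetworkOrder and NetworkToHostOrder are inverses; on a little-endian machine both simply reverse the bytes:

```csharp
using System;
using System.Net;

class NetOrderDemo
{
    static void Main()
    {
        int host = 0x0A0B0C0D;
        int net = IPAddress.HostToNetworkOrder(host); // big-endian for the wire
        int back = IPAddress.NetworkToHostOrder(net); // and back again

        Console.WriteLine("{0:X8}", net);  // 0D0C0B0A on a little-endian machine
        Console.WriteLine("{0:X8}", back); // 0A0B0C0D
    }
}
```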
Re little-endian, the short answer (to "do I need to do anything?") is "probably not, but it depends on your hardware". You can check with:
bool le = BitConverter.IsLittleEndian;
Depending on what this says, you might want to reverse portions of your buffers. Alternatively, Jon Skeet has specific-endian converters here (look for EndianBitConverter).
Note that Itaniums (for example) are big-endian. Most Intel chips are little-endian.
Re the specific UDP/IP...?
You need to know about network byte order as well as CPU endian-ness.
Typically for TCP/UDP comms, you always convert data to network byte order using the htons function (and ntohs, and their related functions).
Normally network order is big-endian, but in this case (for some reason!) the comms are little-endian, so those functions are not very useful. This is important because you cannot assume the UDP comms they have implemented follow any other standards. It also makes life difficult if you have a big-endian architecture, since you just can't wrap everything with htons as you should :-(
However, if you're coming from an intel x86 architecture, then you're already little-endian, so just send the data without conversion.
I'm playing around with packed data in UDP multicast and I needed something to reorder UInt16 octets, since I noticed an error in the packet header (in Wireshark), so I made this:
private UInt16 swapOctetsUInt16(UInt16 toSwap)
{
    Int32 tmp = 0;
    tmp = toSwap >> 8;
    tmp = tmp | ((toSwap & 0xff) << 8);
    return (UInt16)tmp;
}
In case of UInt32,
private UInt32 swapOctetsUInt32(UInt32 toSwap)
{
    UInt32 tmp = 0;
    tmp = toSwap >> 24;
    tmp = tmp | ((toSwap & 0xff0000) >> 8);
    tmp = tmp | ((toSwap & 0xff00) << 8);
    tmp = tmp | ((toSwap & 0xff) << 24);
    return tmp;
}
This is just for testing (note: the original code passed "{0}" as the message and the hex string as the second argument, but that Debug.WriteLine overload treats the second argument as a category, which garbles the output; passing the string directly fixes it):
private void testSwap()
{
    UInt16 tmp1 = 0x0a0b;
    UInt32 tmp2 = 0x0a0b0c0d;

    SoapHexBinary shb1 = new SoapHexBinary(BitConverter.GetBytes(tmp1));
    SoapHexBinary shb2 = new SoapHexBinary(BitConverter.GetBytes(swapOctetsUInt16(tmp1)));
    Debug.WriteLine(shb1.ToString());
    Debug.WriteLine(shb2.ToString());

    SoapHexBinary shb3 = new SoapHexBinary(BitConverter.GetBytes(tmp2));
    SoapHexBinary shb4 = new SoapHexBinary(BitConverter.GetBytes(swapOctetsUInt32(tmp2)));
    Debug.WriteLine(shb3.ToString());
    Debug.WriteLine(shb4.ToString());
}
from which the output was:
0B0A
0A0B
0D0C0B0A
0A0B0C0D
If you're parsing and performance is not critical, consider this very simple code (note that BitConverter.ToInt64 returns a long, so the result must be stored in a long):
private static byte[] NetworkToHostOrder(byte[] array, int offset, int length)
{
    return array.Skip(offset).Take(length).Reverse().ToArray();
}

long foo = BitConverter.ToInt64(NetworkToHostOrder(queue, 14, 8), 0);