I am beginning to use DirectX and the SlimDX wrapper in C#. For many methods it is necessary to calculate the size of an object, for example when creating buffers and issuing draw calls; in particular the "stride", which is the number of bytes between successive elements.
So far the data I am passing is a single Vector3, 12 bytes in length, so the buffer size is the number of elements * 12. In this simple case it's easy to see what the size should be, but how should I calculate it for more complicated examples? For instance:
struct VertexType
{
public Vector3 position;
public Vector4 color;
}
Would this be (12 + 16) = 28 bytes in size? Does the fact that it is arranged in a struct add anything to the size of the element?
I have tried using sizeof, but this produces a compile error stating that the type does not have a predefined size. What would be the correct approach?
Try Marshal.SizeOf - http://msdn.microsoft.com/en-us/library/System.Runtime.InteropServices.Marshal.SizeOf(v=vs.110).aspx
using System.Runtime.InteropServices;
VertexType v = new VertexType();
Marshal.SizeOf(typeof(VertexType)); //For the type
Marshal.SizeOf(v); //For an instance
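Here is a minimal, self-contained sketch of that approach. It uses plain-float stand-ins for SlimDX's Vector3 and Vector4 (which marshal to 12 and 16 bytes respectively), since the exact SlimDX types are not shown in the question:

```csharp
using System;
using System.Runtime.InteropServices;

// Stand-ins for SlimDX's Vector3/Vector4: three and four 4-byte floats.
public struct Vector3 { public float X, Y, Z; }
public struct Vector4 { public float X, Y, Z, W; }

public struct VertexType
{
    public Vector3 position; // 12 bytes
    public Vector4 color;    // 16 bytes
}

public static class StrideDemo
{
    public static int Stride()
    {
        // Being a struct adds nothing by itself, though padding can appear
        // when field sizes force alignment gaps. Here: 12 + 16 = 28.
        return Marshal.SizeOf(typeof(VertexType));
    }

    public static void Main()
    {
        Console.WriteLine(StrideDemo.Stride()); // 28
    }
}
```

The stride can then be passed wherever SlimDX expects an element size, instead of hand-counting bytes.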
I'm trying to make my C# struct match some complex padding and packing rules:
Fields should be aligned on 4 byte boundaries.
The entire struct should be a multiple of 16 bytes in size.
Using the StructLayout attribute I can make sure that fields are aligned on 4 byte boundaries.
[StructLayout(LayoutKind.Sequential, Pack=4)]
struct Foo
{
float a;
float b;
}
But looking at the other options for the StructLayout attribute I see no options for padding the struct to multiples of 16 bytes. Is there no option for that in C#?
The only option I see is to manually set the right size, using the Size property of the StructLayout attribute. But that seems brittle to me: every time somebody adds a field to this struct, they must remember to update the size.
After more searching, it does indeed look like I have to manually set Size and FieldOffset to get the right packing rules for constant buffers in DirectX/HLSL/C# interop code.
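As a sketch of that manual approach (the struct name and fields here are invented for illustration): the Size property rounds the marshaled size up, so a 12-byte payload can be padded to the 16-byte multiple that HLSL constant buffers expect.

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical constant-buffer struct: 12 bytes of data, padded to 16.
[StructLayout(LayoutKind.Sequential, Pack = 4, Size = 16)]
public struct LightBuffer
{
    public float R, G, B; // 12 bytes; Size forces 4 trailing padding bytes
}

public static class SizeDemo
{
    public static void Main()
    {
        Console.WriteLine(Marshal.SizeOf(typeof(LightBuffer))); // 16
    }
}
```

The brittleness remains, though: adding a fourth float silently consumes the padding, and a fifth one overflows the declared Size, so the value has to be maintained by hand.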
In my own project this is even a bit more complex, since I use source generators to create these structs, but in the end I was able to figure it out. For those interested, the source code can be found here.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
I have a pretty non-trivial design problem on .NET 4.5. I have a grid that is supposed to have millions of hexahedrons. Each hexahedron has 8 points and 6 quadrilateral faces. Each quadrilateral face may be planar or curved. If it is planar, it is represented by a (class|struct) called Plane, which has 4 doubles for the plane equation and the 4 vertices of the quadrilateral. If the face is curved, it is represented by a single point and a 3x3 matrix.
The main concerns here are performance and garbage collection, assuming a memory limit of 2 GB for any array of blocks. The question is: we have Block, Point, Face, Plane, Curve, and Matrix3x3. Which of them should be classes and which should be structs?
(Ignoring P/Invoke aspects, which is a different matter)
As a very general rule of thumb, you should only make types with small amounts of data (say 32 bytes) into structs.
Note that structs should ideally be immutable.
In terms of speed: it depends on what you're doing, so you would have to perform some timings to really tell. However, when passing items to a method, it is likely to be quicker to pass a reference type than a struct whenever the struct is larger than a reference (which is 32 bits for 32-bit code and 64 bits for 64-bit code).
One very important thing to bear in mind when creating arrays or List<T>: for value types, the total contiguous size of the underlying array is the size of the value in bytes times the number of elements.
For reference types, the total size of the array itself is the size of a reference (32 or 64 bits) times the number of elements.
Since the maximum size of an array is 2^31 bytes, this can be important if the size of the value type exceeds the size of a reference.
Updated
Let's assume we have a Plane type with 4 doubles on 64bit system, and that we have 1 million Planes.
If SPlane is a struct, then it would occupy 4*8 = 32 bytes
If CPlane is a class, then it will occupy 4*8 bytes (fields) + 16 bytes (header) = 48 bytes. And don't forget about the 8 bytes for each reference pointing to each instance.
Let's consider the pros and cons of using a struct, instead of a class.
Pros
GC performance - there are fewer references to keep track of. The GC treats an array of SPlane as one object. An array of CPlane has to be treated as 1 million + 1 objects.
Memory space - an array of SPlane will occupy 32 million bytes in memory. An array of CPlane will occupy 8 million bytes (the contiguous array of references) + 48 million bytes (the individual instances). That's 32 million bytes vs 56 million bytes.
Cons
Performance degradation due to copying
"Resizing"/"expanding" an array of struct planes copies 32 million bytes, whereas with classes only the references would be copied (8 million bytes)
likewise, passing an SPlane as an argument to a method, returning an SPlane, or assigning one variable to another copies 32 bytes, whereas passing a CPlane copies just the reference.
Usual caveats of using value types.
no inheritance (doesn't seem to matter to you anyway)
accidental boxing (implicit casting to object or an interface, calling GetType, or calling a non-overridden ToString). This one can be mitigated by being careful.
no canonical form (no explicit parameterless constructor - you can't constrain the default value of a value-type field). E.g., given struct Person { int height; }, unassigned fields and array elements would by default be filled with persons of height 0.
no circular dependencies. A struct Person cannot contain fields of type Person, as that would lead to an infinite memory layout.
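The accidental-boxing caveat above can be shown in a few lines (type and names invented for the example): the cast to object allocates a heap copy, so later mutations of the original struct are not seen through the boxed reference.

```csharp
using System;

public struct SPoint { public double X, Y; }

public static class BoxingDemo
{
    public static void Main()
    {
        SPoint p = new SPoint { X = 1, Y = 2 };

        object o = p;         // boxing: a copy of p is allocated on the heap
        Type t = p.GetType(); // also boxes, since GetType is defined on object

        // Mutating the original does not affect the boxed copy.
        p.X = 42;
        Console.WriteLine(((SPoint)o).X); // 1
    }
}
```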
Since we don't know your exact use cases, you'll have to make the decision.
Like Matthew Watson suggested, you should measure the performance of both approaches and compare.
Suppose one wants to hold one million 3d points, and allow the following operations:
double GetX(int index) // And likewise GetY and GetZ
void SetX(int index, double value) // And likewise SetY and SetZ
void SetXYZ(int index, double x, double y, double z)
void CopyCoord(int src, int dest)
If one uses a mutable structure type:
struct Point3dStruct { public double X,Y,Z; }
Point3dStruct[] array;
the operations would become:
void init()
{
array = new Point3dStruct[1000000];
}
double GetX(int index)
{ return array[index].X; }
void SetX(int index, double value)
{ array[index].X = value; }
void SetXYZ(int index, double x, double y, double z)
{ array[index].X = x; array[index].Y = y; array[index].Z = z; }
void CopyCoord(int src, int dest)
{ array[dest] = array[src]; }
All operations would be reasonably efficient; 24,000,000 bytes would be required to hold 1,000,000 points, regardless of whether some or all of them were the same or different.
Using a so-called "immutable" struct would require changing the SetX and SetXYZ methods:
void SetX(int index, double value)
{
Point3dStruct temp = array[index];
array[index] = new Point3dStruct(value, temp.Y, temp.Z);
}
void SetXYZ(int index, double x, double y, double z)
{
array[index] = new Point3dStruct(x, y, z);
}
Performance for SetX would be much inferior to that of a simple exposed-field struct; no method would perform better than the exposed-field-struct equivalent. Memory requirements would not be affected by whether the struct was mutable or not.
A mutable class would require code much like a mutable struct except for the init and CopyCoord methods.
void init()
{
array = new Point3dClass[1000000];
for (int i=0; i<1000000; i++)
array[i] = new Point3dClass();
}
void CopyCoord(int src, int dest)
{
array[dest].X = array[src].X;
array[dest].Y = array[src].Y;
array[dest].Z = array[src].Z;
}
Note that accidentally writing array[dest] = array[src] would not copy the values, but would instead totally break the code! Memory usage would require an extra 16 or 32 bytes per element on 32-bit or 64-bit machines (i.e. 16,000,000 or 32,000,000 bytes) regardless of whether all points held the same or different values.
Use of an immutable class would require code similar to an immutable struct, except for the init method:
void init()
{
array = new Point3dClass[1000000];
var zero = new Point3dClass(0.0, 0.0, 0.0);
for (int i=0; i<1000000; i++)
array[i] = zero;
}
Initial memory usage would only be about 4,000,000 or 8,000,000 bytes (on 32- or 64-bit machines, respectively), but every separately-created instance of Point3dClass would add another 12 or 24 bytes. If the array holds references to 1,000,000 different instances of Point3dClass, those would total up to another 12,000,000 or 24,000,000 bytes.
If code will be using methods analogous to CopyCoord more often than it will be using methods analogous to SetX, then an immutable class can be a big win. If it will be using SetX a lot, an exposed-field mutable struct will offer the best performance. Mutable class types may play nicer than mutable structs when stored in collections other than arrays, but they have a substantial performance overhead and must be used with extreme care. The only advantage of immutable structs is that code which is written for an immutable struct can often be changed easily to use an immutable class instead.
I'm doing interop with some native library, which has some non-natural alignment feature which I want to simulate in .NET struct for the layout. Check these two structs:
public struct Int3
{
public int X;
public int Y;
public int Z;
}
public struct MyStruct
{
public short A;
public Int3 Xyz;
public short B;
}
So, within .NET, it uses its own layout rules, where a field's alignment is min(sizeof(primitiveType), StructLayout.Pack). So the layout of MyStruct would be:
[oo--] MyStruct.A (2 bytes data and 2 bytes padding)
[oooo oooo oooo] MyStruct.Xyz (3 int, no padding)
[oo--] MyStruct.B (2 bytes data and 2 bytes padding)
What I want to do is, I want to change the alignment of Int3 to 8 bytes, like something:
[StructLayout(Alignment = 8)]
public struct Int3 { .... }
Then the layout of MyStruct would became:
[oo-- ----] MyStruct.A (2 bytes data and 6 bytes padding, to align the next Xyz to 8)
[oooo oooo oooo ----] MyStruct.Xyz (12 bytes data and 4 bytes padding, for alignment of 8)
[oo-- ----] MyStruct.B (2 bytes data and 6 bytes padding, because the largest alignment in this struct is 8)
So, my question is:
1) Is there an attribute in .NET to control non-natural alignment like this?
2) If there is no such built-in attribute, I know there are others such as StructLayout.Explicit, FieldOffsetAttribute, StructLayout.Size and StructLayout.Pack. With these attributes I can simulate this layout manually, but that is not easy to use. So my second question is: is there a way to hook into .NET's struct layout creation so that I can interfere with the layout? That is, I could create a custom attribute to specify the alignment and calculate the layout myself, but I don't know how to make .NET use that layout.
Regards, Xiang.
There is no other way to 'hook into .NET' like you want that I am aware of than StructLayout.Explicit (which is just such a mechanism). Interop is quite a specialized need and, beyond the standard WinAPI cases, you should not expect it to be easy. In your case, unless you are dealing with truly large numbers of different structs with this unusual alignment, it's better to spell it out longhand with StructLayout.Explicit on MyStruct.
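Spelled out longhand, the layout from the question might look like this with StructLayout.Explicit (offsets chosen by hand to reproduce the [oo-- ----][oooo oooo oooo ----][oo-- ----] picture above):

```csharp
using System;
using System.Runtime.InteropServices;

public struct Int3
{
    public int X, Y, Z; // 12 bytes
}

[StructLayout(LayoutKind.Explicit, Size = 32)]
public struct MyStruct
{
    [FieldOffset(0)]  public short A;  // 2 bytes data, 6 bytes padding
    [FieldOffset(8)]  public Int3 Xyz; // 12 bytes data, 4 bytes padding
    [FieldOffset(24)] public short B;  // 2 bytes data, 6 bytes padding
}

public static class LayoutDemo
{
    public static void Main()
    {
        Console.WriteLine(Marshal.SizeOf(typeof(MyStruct))); // 32
    }
}
```

The downside is exactly what the question anticipates: the offsets and total Size must be recomputed by hand whenever a field changes.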
Almost any structure will be stored as part of a heap object (either as a class field, or as a field of a struct that is stored as a class field, etc.) The .net 32 platform aligns objects on the Large Object Heap to 16-byte boundaries, but other objects to 4-byte boundaries. Unless an object is manually allocated on the LOH, or is an array of more than 999 doubles [due to a truly atrocious hack, IMHO], there is no meaningful way to assure anything more specific than 4-byte alignment. Even if at some moment in time an unpinned struct is 16-byte aligned, any arbitrary GC cycle might relocate it and change that alignment.
I have an integer array of length 900 that contains only binary data [0,1].
I want to shorten the length of the array without losing the binary data (the original array values).
Is it possible to shorten a 900-element array to 10 or 20 elements in C#?
The BitArray class will shrink your int array to about 1/32nd of its length.
You could also apply some compression to the bits before storing them. If the data is only 1s and 0s, run-length encoding may reduce the size drastically in all but the worst cases.
Run length encoding - Wiki article
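A sketch of the run-length idea for a 0/1 array (the encoding format here, first value followed by the run lengths, is one arbitrary choice among many):

```csharp
using System;
using System.Collections.Generic;

public static class Rle
{
    // Encodes a 0/1 array as: [first value, run1, run2, ...]
    public static List<int> Encode(int[] bits)
    {
        var runs = new List<int>();
        if (bits.Length == 0) return runs;
        runs.Add(bits[0]); // remember which value the first run holds
        int count = 1;
        for (int i = 1; i < bits.Length; i++)
        {
            if (bits[i] == bits[i - 1]) count++;
            else { runs.Add(count); count = 1; }
        }
        runs.Add(count);
        return runs;
    }

    public static void Main()
    {
        // 900 zeros with a single one compresses to a handful of numbers.
        int[] bits = new int[900];
        bits[21] = 1;
        Console.WriteLine(string.Join(",", Encode(bits))); // 0,21,1,878
    }
}
```

Note the worst case: alternating 0,1,0,1,... produces one run per element and makes the "compressed" form larger than a plain BitArray.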
Try to use System.Collections.BitArray
Here is the sample code:
using System;
using System.Collections.Generic;
using System.Text;
namespace ConsoleApp
{
class Program
{
static void Main(string[] args)
{
System.Collections.BitArray bits = new System.Collections.BitArray(900);
// Setting all bits to 0
bits.SetAll(false);
// Here I'm setting bit 21 in the array
bits[21] = true;
// etc...
int[] nativeBits = new int[29];
// Packing your bits in int array
bits.CopyTo(nativeBits, 0);
// This prints 2097152, which is 2^21
Console.WriteLine("First element is:"+nativeBits[0].ToString());
}
}
}
The nativeBits array consists of only 29 elements, and you can now convert it to a string.
In fact, you have a binary integer with 900 digits. There are lots of ways you can hold that "number", depending on what you want to do with it and how fast it needs to be.
Ask yourself:
do I need fast set function ( arr[n] = something )
do I need fast retrieval function ( val = arr[n] )
do I need iteration of some kind, for example find next n for which arr[n] is 1
and so on.
Then, ask again or modify your original question.
Otherwise, BitArray
EDIT:
Since we have found out something (a little), I would suggest rolling your own class for that.
The class would have a container such as byte[] and methods to set and unset an item at some position.
Checking for common 1s in two arrays would then be as simple as ANDing them byte by byte.
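A rough sketch of such a class (the name and API are invented for illustration), with a bitwise AND to find the common 1s:

```csharp
using System;

public class PackedBits
{
    private readonly byte[] data;

    public PackedBits(int bitCount) { data = new byte[(bitCount + 7) / 8]; }

    public void Set(int i)   { data[i / 8] |= (byte)(1 << (i % 8)); }
    public void Unset(int i) { data[i / 8] &= (byte)~(1 << (i % 8)); }
    public bool Get(int i)   { return (data[i / 8] & (1 << (i % 8))) != 0; }

    // Returns the bits set in both this and other (assumes equal sizes).
    public PackedBits And(PackedBits other)
    {
        var result = new PackedBits(data.Length * 8);
        for (int i = 0; i < data.Length; i++)
            result.data[i] = (byte)(data[i] & other.data[i]);
        return result;
    }
}

public static class PackedBitsDemo
{
    public static void Main()
    {
        var a = new PackedBits(900); a.Set(21); a.Set(500);
        var b = new PackedBits(900); b.Set(21);
        Console.WriteLine(a.And(b).Get(21));  // True
        Console.WriteLine(a.And(b).Get(500)); // False
    }
}
```

A 900-bit array packs into 113 bytes this way, comparable to what BitArray gives you, but with full control over the operations.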
I use XNA as a nice, easy basis for some graphics processing I'm doing on the CPU, because it already provides a lot of what I need. Currently, my "rendertarget" is an array of a custom Color struct I've written, consisting of three floating-point fields: R, G, B.
When I want to render this on screen, I manually convert this array to the Color struct that XNA provides (only 8 bits of precision per channel) by simply clamping the result within the byte range of 0-255. I then set this new array as the data of a Texture2D (it has a SurfaceFormat of SurfaceFormat.Color) and render the texture with a SpriteBatch.
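The clamping conversion described above might look like the following sketch, assuming the custom struct stores its channels as floats in the 0-1 range (if yours uses a 0-255 scale, drop the multiply):

```csharp
using System;

public static class ColorConvert
{
    // Maps a 0..1 float channel to a 0..255 byte, clamping out-of-range values.
    public static byte ToByteChannel(float c)
    {
        int v = (int)(c * 255f + 0.5f); // round to nearest
        if (v < 0) v = 0;
        if (v > 255) v = 255;
        return (byte)v;
    }

    public static void Main()
    {
        Console.WriteLine(ToByteChannel(0.5f));  // 128
        Console.WriteLine(ToByteChannel(1.7f));  // 255 (clamped)
        Console.WriteLine(ToByteChannel(-0.2f)); // 0 (clamped)
    }
}
```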
What I'm looking for is a way to get rid of this translation process on the CPU and simply send my backbuffer directly to the GPU as some sort of texture, where I want to do some basic post-processing. And I really need a bit more precision than 8 bits there (not necessarily 32-bits, but since what I'm doing isn't GPU intensive, it can't hurt I guess).
How would I go about doing this?
I figured that if I gave Color an explicit size of 32 bytes through StructLayout (so 8 bytes of padding, because my three channels only fill 24 bytes), set the SurfaceFormat of the texture rendered with the SpriteBatch to SurfaceFormat.Vector4 (32 bytes large), and filled the texture with SetData<Color>, it would maybe work. But I get this exception:
The type you are using for T in this method is an invalid size for this resource.
Is it possible to use any arbitrarily made up struct and interpret it as texture data in the GPU like you can with vertices through VertexDeclaration by specifying how it's laid out?
I think I have what I want: I dropped the Color struct I made and use Vector4 for my color information instead. This works if the SurfaceFormat of the texture is also set to Vector4.