C# Generic Arrays and math operations on them - c#

I'm currently involved in a project where I have very large image volumes. These volumes have to be processed very fast (adding, subtracting, thresholding, and so on). Additionally, most of the volumes are so large that they don't even fit into the memory of the system.
For that reason I have created an abstract volume class (VoxelVolume) that hosts the volume and image data and overloads the operators, so that it's possible to perform the regular mathematical operations on volumes. This opened up two more questions, which I will post to Stack Overflow in two additional threads.
Here is my first question. My volume is implemented in a way that it can only contain float array data, but most of the incoming data is from a UInt16 image source. Only operations on the volume can create float array images.
When I started implementing such a volume, the class looked like the following:
public abstract class VoxelVolume<T>
{
...
}
but then I realized that overloading the operators or return values would get more complicated. An example would be:
public abstract class VoxelVolume<T>
{
    ...
    public static VoxelVolume<T> Import<T>(params string[] files)
    {
    }
}
and adding overloaded operators would also be more complicated:
...
public static VoxelVolume<T> operator+(VoxelVolume<T> A, VoxelVolume<T> B)
{
...
}
Let's assume I can overcome the problems described above. Nevertheless, I have different types of arrays that contain the image data. Since I have fixed the type in my volumes to float, there is no problem, and I can do an unsafe operation when adding the contents of two image volume arrays. I have read a few threads here and had a look around the web, but found no really good explanation of what to do when I want to add two arrays of different types in a fast way. Unfortunately, math operations on generics are not possible, since C# is not able to calculate the size of the underlying data type. Of course there might be a way around this problem by using C++/CLI, but currently everything I have done so far runs in 32-bit and 64-bit without my having to do a thing. Switching to C++/CLI seemed to me (please correct me if I'm wrong) to bind me to a certain platform (32-bit), so that I would have to compile two assemblies to let the application run on another platform (64-bit). Is this true?
So, asked shortly: how is it possible to add two arrays of two different types in a fast way? Is it true that the developers of C# haven't thought about this? Switching to a different language (C# -> C++) does not seem to be an option.
I realize that simply performing this operation
float[] A = new float[] { 1, 2, 3 };
byte[] B = new byte[] { 1, 2, 3 };
float[] C = A + B;
is not possible and unnecessary, although it would be nice if it worked. The solution I was trying is the following:
public static class ArrayExt
{
    public static unsafe TResult[] Add<T1, T2, TResult>(T1[] A, T2[] B)
    {
        // Assume the length of both arrays is equal
        TResult[] result = new TResult[A.Length];
        GCHandle h1 = GCHandle.Alloc(A, GCHandleType.Pinned);
        GCHandle h2 = GCHandle.Alloc(B, GCHandleType.Pinned);
        GCHandle hR = GCHandle.Alloc(result, GCHandleType.Pinned);
        void* ptrA = h1.AddrOfPinnedObject().ToPointer();
        void* ptrB = h2.AddrOfPinnedObject().ToPointer();
        void* ptrR = hR.AddrOfPinnedObject().ToPointer();
        for (int i = 0; i < A.Length; i++)
        {
            // This is the line that cannot work: C# cannot dereference or
            // apply '+' to unconstrained generic types (and the pointers
            // would also need to be advanced on each iteration).
            *((TResult *)ptrR) = (TResult)((T1)*ptrA + (T2)*ptrB);
        }
        h1.Free();
        h2.Free();
        hR.Free();
        return result;
    }
}
Please excuse me if the code above is not quite correct; I wrote it without a C# editor. Is a solution such as the one shown above feasible? Please feel free to ask if I made a mistake or described some things incompletely.
Thanks for your help
Martin

This seems to be a (complicated) version of the "why don't we have an INumerical interface" question.
The short answer to the last question is: no, going to unsafe pointers is no solution; the compiler still can't figure out the + in ((T1)*ptrA + (T2)*ptrB).

If you have only a few types like float and UInt16, provide all the needed conversion functions, for example from VoxelVolume<UInt16> to VoxelVolume<float>, and do the math on VoxelVolume<float>. That should be fast enough for most practical cases. You could even provide a generic conversion function from VoxelVolume<T1> to VoxelVolume<T2> (if T1 is convertible to T2); a sketch of one follows below. On the other hand, if you really need a
public static VoxelVolume<T2> operator+(VoxelVolume<T1> A,VoxelVolume<T2> B)
with type conversion from T1 to T2 for each array element, what hinders you from writing such operators?
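A hedged sketch of such a generic conversion helper, shown on the raw backing arrays since the VoxelVolume internals aren't given in the question (VolumeConvert and the array names are illustrative):
using System;

public static class VolumeConvert
{
    // Generic element-wise conversion of the raw backing array; the caller
    // supplies the conversion, e.g. v => (float)v for UInt16 -> float.
    public static TResult[] Convert<TSource, TResult>(TSource[] source, Func<TSource, TResult> convert)
    {
        TResult[] result = new TResult[source.Length];
        for (int i = 0; i < source.Length; i++)
            result[i] = convert(source[i]);
        return result;
    }
}

// Usage: float[] floats = VolumeConvert.Convert(rawUInt16Data, v => (float)v);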

Import, being a member of a generic class, probably doesn't also need to be generic itself. If it does, you definitely shouldn't use the same name T for both the generic parameter of the class and the generic parameter of the method.
What you're probably looking for is Marc Gravell's Generic Operators.
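For reference, a minimal sketch of the expression-tree technique those generic operators are built on (the general idea only, not Marc Gravell's actual code): compile a typed Add delegate once per T, and the compiled delegate then runs at close to native speed.
using System;
using System.Linq.Expressions;

static class GenericOperator<T>
{
    // Compiled once per closed type T; throws on first use if T defines no '+'.
    public static readonly Func<T, T, T> Add = CompileAdd();

    private static Func<T, T, T> CompileAdd()
    {
        ParameterExpression a = Expression.Parameter(typeof(T), "a");
        ParameterExpression b = Expression.Parameter(typeof(T), "b");
        return Expression.Lambda<Func<T, T, T>>(Expression.Add(a, b), a, b).Compile();
    }
}

// Usage: float sum = GenericOperator<float>.Add(1.5f, 2.5f);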
As for your questions about C++/CLI: yes, it could help if you use templates instead of generics, because then all the possible values of typename T are known at compile time and the compiler looks up the operators for each one. Also, you can use /clr:pure or /clr:safe, in which case your code will be MSIL, runnable on AnyCPU just like C#.

Admittedly, I didn't read the whole question (it's quite a bit too long), but:
VoxelVolume<T> where T : ISummand ... T a; a.Add(b)
static float Sum (this VoxelVolume<float> self, VoxelVolume<float> other) {...}
To add a float to a byte in any meaningful sense you have to convert the byte to a float. So convert the array of bytes to an array of floats and then add them; you only lose some memory.
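A minimal sketch of that suggestion, assuming both arrays have equal length:
static float[] AddByteToFloat(float[] a, byte[] b)
{
    // Each byte is implicitly widened to float inside the loop, so the only
    // extra cost over a float+float add is the result allocation.
    float[] result = new float[a.Length];
    for (int i = 0; i < a.Length; i++)
        result[i] = a[i] + b[i];
    return result;
}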

Related

Efficiently convert between equivalent structs in C#

I am communicating between two libraries which use equivalent structs. For example:
struct MyVector {
    public float x, y, z;
}
struct TheirVector {
    public float x, y, z;
}
Currently I am "casting" between these types by copying each of one's members into an instance of the other. Knowing how C# stores structs in memory, isn't there a much more efficient way of doing this with pointers?
Additionally, let's say I have two arrays of these equivalent structs:
MyVector[] array1; // (Length 1000)
TheirVector[] array2; // (Length 1000)
Isn't there a way to copy the whole block of memory from array1 into array2? Or even better, could I treat the entirety of array1 as an array of TheirVector without creating a copy in memory?
I'm sure the answer requires pointers, but I have only used pointers in C++ and have no idea how C# implements them.
Does anyone have any examples on how to achieve something like this?
Thanks in advance!
Yes, if you're absolutely 100% sure that they have the same memory layout, you can coerce between them. The preferred way of doing this (to avoid too much unsafe code / pointers) is spans. For example:
var theirs = MemoryMarshal.Cast<MyVector, TheirVector>(array1);
This gives you theirs as a Span<TheirVector>, without copying any of the actual data. Spans have a very similar API to arrays/vectors, so most things you expect to work should work exactly as you expect. You simply have a typed reference to the same memory, using a different type.
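A self-contained sketch of the technique (MemoryMarshal lives in System.Runtime.InteropServices; on older frameworks you need the System.Memory NuGet package). Note that writes through the span are visible in the original array, since no copy is made:
using System;
using System.Runtime.InteropServices;

struct MyVector { public float x, y, z; }
struct TheirVector { public float x, y, z; }

class SpanCastDemo
{
    static void Main()
    {
        MyVector[] array1 = { new MyVector { x = 1, y = 2, z = 3 } };

        // Reinterpret the same memory as TheirVector - no copying happens.
        Span<TheirVector> theirs = MemoryMarshal.Cast<MyVector, TheirVector>(array1);

        theirs[0].y = 42;                // writes through to array1
        Console.WriteLine(array1[0].y);  // prints 42
    }
}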

Storing a C# reference to an array of structs and retrieving it - possible without copying?

UPDATE: the next version of C# has a feature under consideration that would directly answer this issue. Cf. the answers below.
Requirements:
App data is stored in arrays-of-structs. There is one AoS for each type of data in the app (e.g. one for MyStruct1, another for MyStruct2, etc)
The structs are created at runtime; the more code we write in the app, the more there will be.
I need one class to hold references to ALL the AoS's, and allow me to set and get individual structs within those AoS's
The AoS's tend to be large (1,000s of structs per array); copying those AoS's around would be a total fail - they should never be copied! (they never need to be!)
I have code that compiles and runs, and it works ... but is C# silently copying the AoS's under the hood every time I access them? (see below for full source)
public Dictionary<System.Type, System.Array> structArraysByType;

public void registerStruct<T>()
{
    System.Type newType = typeof(T);
    if( ! structArraysByType.ContainsKey(newType) )
    {
        structArraysByType.Add(newType, new T[1000]); // allowing up to 1k
    }
}

public T get<T>( int index )
{
    return ((T[])structArraysByType[typeof(T)])[index];
}

public void set<T>( int index, T newValue )
{
    ((T[])structArraysByType[typeof(T)])[index] = newValue;
}
Notes:
I need to ensure C# sees this as an array of value types, instead of an array of objects ("don't you DARE go making an array of boxed objects around my structs!"). As I understand it, a generic T[] ensures that (as expected).
I couldn't figure out how to express the type "this will be an array of structs, but I can't tell you which structs at compile time" other than System.Array. System.Array works -- but maybe there are alternatives?
In order to index the resulting array, I have to typecast back to T[]. I am scared that this typecast MIGHT be boxing the array-of-structs; I know that if it were (T) instead of (T[]), it would definitely box; hopefully it doesn't do that with T[]?
Alternatively, I can use the System.Array methods, which definitely box the incoming and outgoing struct. This is a fairly major problem (although I could work around it if that were the only way to make C# work with arrays of structs).
As far as I can see, what you are doing should work fine, but yes, it will return a copy of a struct T instance when you call get, and perform a replacement using a stack-based instance when you call set. Unless your structs are huge, this should not be a problem.
If they are huge and you want to:
read (some) fields of one of the struct instances in your array without creating a copy of it, or
update some of its fields (this requires mutable structs, which are generally a bad idea, but there are good reasons for doing it),
then you can add the following to your class:
public delegate void Accessor<T>(ref T item) where T : struct;
public delegate TResult Projector<T, TResult>(ref T item) where T : struct;

public void Access<T>(int index, Accessor<T> accessor) where T : struct
{
    var array = (T[])structArraysByType[typeof(T)];
    accessor(ref array[index]);
}

public TResult Project<T, TResult>(int index, Projector<T, TResult> projector) where T : struct
{
    var array = (T[])structArraysByType[typeof(T)];
    return projector(ref array[index]);
}
Or simply return a reference to the underlying array itself, if you don't need to abstract it / hide the fact that your class encapsulates them:
public T[] GetArray<T>()
{
    return (T[])structArraysByType[typeof(T)];
}
From which you can then simply access the elements:
var myThingsArray = MyStructArraysType.GetArray<MyThing>();
var someFieldValue = myThingsArray[10].SomeField;
myThingsArray[3].AnotherField = "Hello";
Alternatively, if there is no specific reason for them to be structs (i.e. to ensure sequential, cache-friendly, fast access), you might want to simply use classes.
There is a much better solution that is planned for addition to the next version of C#, but does not yet exist in the language: the "ref return" feature already exists in the CLR, but isn't supported by the C# compiler.
Here's the Issue for tracking that feature: https://github.com/dotnet/roslyn/issues/118
With that, the entire problem becomes trivial: "return ref" the result.
(answer added for future, when the existing answer will become outdated (I hope), and because there's still time to comment on that proposal / add to it / improve it!)
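(Update for later readers: ref returns shipped in C# 7.0, so this is now expressible. A minimal sketch, building on the structArraysByType dictionary from the question:)
// C# 7.0+: returns a reference into the stored array, so reads and writes
// go straight to the element and no struct copy is ever made.
public ref T GetRef<T>(int index) where T : struct
{
    T[] array = (T[])structArraysByType[typeof(T)];
    return ref array[index];
}

// Usage: GetRef<MyThing>(3).AnotherField = "Hello";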

Type-proofing primitive .NET value types via custom structs: Is it worth the effort?

I'm toying with the idea of making primitive .NET value types more type-safe and more "self-documenting" by wrapping them in custom structs. However, I'm wondering if it's actually ever worth the effort in real-world software.
(That "effort" can be seen below: Having to apply the same code pattern again and again. We're declaring structs and so cannot use inheritance to remove code repetition; and since the overloaded operators must be declared static, they have to be defined for each type separately.)
Take this (admittedly trivial) example:
struct Area
{
    public static implicit operator Area(double x) { return new Area(x); }
    public static implicit operator double(Area area) { return area.x; }

    private Area(double x) { this.x = x; }
    private readonly double x;
}

struct Length
{
    public static implicit operator Length(double x) { return new Length(x); }
    public static implicit operator double(Length length) { return length.x; }

    private Length(double x) { this.x = x; }
    private readonly double x;
}
Both Area and Length are basically a double, but augment it with a specific meaning. If you defined a method such as…
Area CalculateAreaOfRectangleWith(Length width, Length height)
…it would not be possible to directly pass in an Area by accident. So far so good.
BUT: you can easily sidestep this apparently improved type safety simply by casting an Area to double, or by temporarily storing an Area in a double variable and then passing that into the method where a Length is expected:
Area a = 10.0;
double aWithEvilPowers = a;
… = CalculateAreaOfRectangleWith( (double)a, aWithEvilPowers );
Question: Does anyone here have experience with extensive use of such custom struct types in real-world / production software? If so:
Has the wrapping of primitive value types in custom structs ever directly resulted in less bugs, or in more maintainable code, or given any other major advantage(s)?
Or are the benefits of custom structs too small for them to be used in practice?
P.S.: About 5 years have passed since I asked this question. I'm posting some of the experience I've gathered since then as a separate answer.
I did this in a project a couple of years ago, with some mixed results. In my case, it helped a lot to keep track of different kinds of IDs, and to get compile-time errors when the wrong type of ID was being used. And I can recall a number of occasions where it prevented actual bugs-in-the-making. So, that was the plus side. On the negative side, it was not very intuitive for other developers - this kind of approach is not very common, and I think other developers got confused by all these new types springing up. Also, I seem to recall we had some problems with serialization, but I can't remember the details (sorry).
So if you are going to go this route, I would recommend a couple of things:
1) Make sure you talk with the other folks on your team first: explain what you're trying to accomplish and see if you can get "buy-in" from everyone. If people don't understand the value, you're going to be constantly fighting against the mentality of "what's the point of all this extra code?"
2) Consider generating your boilerplate code using a tool like T4. It will make maintaining the code much easier. In my case, we had about a dozen of these types, and going the code-generation route made changes much easier and much less error-prone.
That's my experience. Good luck!
John
This is a logical next step after Hungarian notation; see an excellent article from Joel here: http://www.joelonsoftware.com/articles/Wrong.html
We did something similar in one project/API where other developers especially needed to use some of our interfaces, and it led to significantly fewer "false alarm support cases", because it made really obvious what was allowed/needed... I would suspect this means measurably fewer bugs, though we never did the statistics, because the resulting apps were not ours...
In the approximately 5 years since I asked this question, I have often toyed with defining struct value types, but rarely done it. Here are some reasons why:
It's not so much that their benefit is too small, but that the cost of defining a struct value type is too high. There's lots of boilerplate code involved:
Override Equals and implement IEquatable<T>, and override GetHashCode too;
Implement operators == and !=;
Possibly implement IComparable<T> and operators <, <=, > and >= if a type supports ordering;
Override ToString, and implement conversion methods and type-cast operators from/to other related types.
This is simply a lot of repetitive work that is often not necessary when using the underlying primitive types directly. (A sketch of this boilerplate for a single type appears at the end of this answer.)
Often, there is no reasonable default value (e.g. for value types such as zip codes, dates, times, etc.). Since one can't prohibit the default constructor, defining such types as structs is a bad idea, and defining them as classes means some runtime overhead (more work for the GC, more memory dereferencing, etc.).
I haven't actually found that wrapping a primitive type in a semantic struct really offers significant benefits with regard to type safety; IntelliSense / code completion and good choices of variable / parameter names can achieve much of the same benefits as specialized value types. On the other hand, as another answer suggests, custom value types can be unintuitive for some developers, so there's an additional cognitive overhead in using them.
There have been some instances where I ended up wrapping a primitive type in a struct, normally when there are certain operations defined on them (e.g. a DecimalDegrees and Radians type with methods to convert between these).
Defining equality and comparison methods, on the other hand, does not necessarily mean that I'd define a custom value type. I might instead use primitive .NET types and provide well-named implementations of IEqualityComparer<T> and IComparer<T> instead.
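To make the cost concrete, here is roughly what that boilerplate looks like for a single wrapper type (an illustrative sketch; the Meters name and unit are made up for the example):
public struct Meters : IEquatable<Meters>, IComparable<Meters>
{
    private readonly double value;
    public Meters(double value) { this.value = value; }

    // Equality boilerplate
    public bool Equals(Meters other) { return value.Equals(other.value); }
    public override bool Equals(object obj) { return obj is Meters && Equals((Meters)obj); }
    public override int GetHashCode() { return value.GetHashCode(); }
    public static bool operator ==(Meters a, Meters b) { return a.Equals(b); }
    public static bool operator !=(Meters a, Meters b) { return !a.Equals(b); }

    // Ordering boilerplate
    public int CompareTo(Meters other) { return value.CompareTo(other.value); }
    public static bool operator <(Meters a, Meters b) { return a.CompareTo(b) < 0; }
    public static bool operator >(Meters a, Meters b) { return a.CompareTo(b) > 0; }

    // Display and conversion boilerplate
    public override string ToString() { return value + " m"; }
    public static explicit operator double(Meters m) { return m.value; }
}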
I don't often use structs.
One thing you may consider is to make your type require a dimension. Consider this:
public enum length_dim { meters, inches }
public enum area_dim { square_meters, square_inches }

public class Area {
    public Area(double a, area_dim dim) { this.area = a; this.dim = dim; }
    public Area(Area a) { this.area = a.area; this.dim = a.dim; }
    public Area(Length l1, Length l2)
    {
        Debug.Assert(l1.Dim == l2.Dim);
        this.area = l1.Distance * l2.Distance;
        switch (l1.Dim) {
            case length_dim.meters: this.dim = area_dim.square_meters; break;
            case length_dim.inches: this.dim = area_dim.square_inches; break;
        }
    }
    private double area;
    // Named "Value" here because a member cannot share its enclosing type's name.
    public double Value { get { return this.area; } }
    private area_dim dim;
    public area_dim Dim { get { return this.dim; } }
}

public class Length {
    public Length(double dist, length_dim dim)
    { this.distance = dist; this.dim = dim; }
    private length_dim dim;
    public length_dim Dim { get { return this.dim; } }
    private double distance;
    public double Distance { get { return this.distance; } }
}
Notice that nothing can be created from a double alone; the dimension must be specified. My objects are immutable. Requiring the dimension and verifying it would have prevented the Mars Climate Orbiter disaster.
I can't say, from my experience, whether this is a good idea or not. It certainly has its pros and cons. A pro is that you get an extra dose of type safety: why accept a double for an angle when you can accept a type that has true angle semantics (conversion between degrees and radians, values constrained to 0-360 degrees, etc.)? A con is that this is not common, so it could be confusing to some developers.
See this link for a real example of a real commercial product that has several types like you describe:
http://geoframework.codeplex.com/
Types include Angle, Latitude, Longitude, Speed, Distance.

C++ CLI structure to byte array

I have a structure that represents a wire-format packet. In this structure is an array of other structures. I have generic code that handles this very nicely for most cases, but this array-of-structures case is throwing the marshaller for a loop.
Unsafe code is a no-go, since I can't get a pointer to a struct containing an array (argh!).
I can see from this CodeProject article that there is a very nice, generic approach involving C++/CLI that goes something like this:
public ref class Reader abstract sealed
{
public:
    generic <typename T> where T : value class
    static T Read(array<System::Byte>^ data)
    {
        T value;
        pin_ptr<System::Byte> src = &data[0];
        pin_ptr<T> dst = &value;
        memcpy((void*)dst, (void*)src,
            /*System::Runtime::InteropServices::Marshal::SizeOf(T::typeid)*/
            sizeof(T));
        return value;
    }
};
Now if I just had the structure -> byte array / writer version, I'd be set! Thanks in advance!
Using memcpy to copy an array of bytes to a structure is extremely dangerous if you are not controlling the byte packing of the structure. It is safer to marshal and unmarshal a structure one field at a time. Of course, you will lose the generic nature of the sample code you have given.
To answer your real question, though (and consider this pseudocode):
public ref class Writer abstract sealed
{
public:
    generic <typename T> where T : value class
    static array<System::Byte>^ Write(T value)
    {
        // gcnew array allocation replaces the original (admittedly wrong) syntax
        array<System::Byte>^ buffer = gcnew array<System::Byte>(sizeof(T));
        pin_ptr<System::Byte> dst = &buffer[0];
        pin_ptr<T> src = &value;
        memcpy((void*)dst, (void*)src,
            /*System::Runtime::InteropServices::Marshal::SizeOf(T::typeid)*/
            sizeof(T));
        return buffer;
    }
};
This is probably not the right way to go. The CLR is allowed to add padding, reorder the items, and alter the way the structure is stored in memory.
If you want to do this, be sure to add the [System.Runtime.InteropServices.StructLayout] attribute to force a specific memory layout for the structure. In general, I suggest you not mess with the memory layout of .NET types.
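A hedged sketch of what that looks like in C# (the field names are illustrative); Pack = 1 removes padding so the managed layout matches a byte-for-byte wire format:
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct PacketHeader
{
    public ushort Id;     // offset 0
    public uint Length;   // offset 2 - no padding inserted because Pack = 1
    public byte Flags;    // offset 6
}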
Unsafe code can be made to do this, actually. See my post on reading structs from disk: Reading arrays from files in C# without extra copy.
Not altering the structure is certainly sound advice. I use liberal amounts of StructLayout attributes to specify the packing, layout and character encoding. Everything flows just fine.
My issue is just that I need a performant and preferably generic solution: performance because this is a server application, and generic for elegance. If you look at the CodeProject link, you'll see that the StructureToPtr and PtrToStructure methods perform on the order of 20 times slower than a simple unsafe pointer cast. This is one of those areas where unsafe code is full of win. C# will only let you have pointers to primitives (and it's not generic - you can't get a pointer to a generic), so that's why C++/CLI.
Thanks for the pseudocode, grieve; I'll see if it gets the job done and report back.
Am I missing something? Why not create a new array of the same size and initialise each element separately in a loop?
Using an array of byte data is quite dangerous unless you are targeting one platform only... for example, your method doesn't consider differing endianness between the source and destination arrays.
Something I don't really understand about your question is why having an array as a member in your class is causing a problem. If the class comes from a .NET language, you should have no issues; otherwise, you should be able to take the pointer in unsafe code and initialise a new array by going through the elements pointed at one by one and adding them to it.

Getting the size of a field in bytes with C#

I have a class, and I want to inspect its fields and report how many bytes each field takes. I assume all fields are of types like Int32, byte, etc.
How can I find out easily how many bytes does the field take?
I need something like:
Int32 a;
// int a_size = a.GetSizeInBytes;
// a_size should be 4
You can't, basically. It will depend on padding, which may well be based on the CLR version you're using and the processor etc. It's easier to work out the total size of an object, assuming it has no references to other objects: create a big array, use GC.GetTotalMemory for a base point, fill the array with references to new instances of your type, and then call GetTotalMemory again. Take one value away from the other, and divide by the number of instances. You should probably create a single instance beforehand to make sure that no new JITted code contributes to the number. Yes, it's as hacky as it sounds - but I've used it to good effect before now.
Just yesterday I was thinking it would be a good idea to write a little helper class for this. Let me know if you'd be interested.
EDIT: There are two other suggestions, and I'd like to address them both.
Firstly, the sizeof operator: this only shows how much space the type takes up in the abstract, with no padding applied round it. (It includes padding within a structure, but not padding applied to a variable of that type within another type.)
Next, Marshal.SizeOf: this only shows the unmanaged size after marshalling, not the actual size in memory. As the documentation explicitly states:
The size returned is actually the size of the unmanaged type. The unmanaged and managed sizes of an object can differ. For character types, the size is affected by the CharSet value applied to that class.
And again, padding can make a difference.
Just to clarify what I mean about padding being relevant, consider these two classes:
class FourBytes { byte a, b, c, d; }
class FiveBytes { byte a, b, c, d, e; }
On my x86 box, an instance of FourBytes takes 12 bytes (including overhead). An instance of FiveBytes takes 16 bytes. The only difference is the "e" variable - so does that take 4 bytes? Well, sort of... and sort of not. Fairly obviously, you could remove any single variable from FiveBytes to get the size back down to 12 bytes, but that doesn't mean that each of the variables takes up 4 bytes (think about removing all of them!). The cost of a single variable just isn't a concept which makes a lot of sense here.
Depending on the needs of the questioner, Marshal.SizeOf might or might not give you what you want. (Edited after Jon Skeet posted his answer.)
using System;
using System.Runtime.InteropServices;

public class MyClass
{
    public static void Main()
    {
        Int32 a = 10;
        Console.WriteLine(Marshal.SizeOf(a));
        Console.ReadLine();
    }
}
Note that, as jkersch says, sizeof can be used, but unfortunately only with value types. If you need the size of a class, Marshal.SizeOf is the way to go.
Jon Skeet has laid out why neither sizeof nor Marshal.SizeOf is perfect. I guess the questioner needs to decide whether either is acceptable for his problem.
From Jon Skeet's recipe in his answer, I tried to make the helper class he was referring to. Suggestions for improvements are welcome.
public class MeasureSize<T>
{
    private readonly Func<T> _generator;
    private const int NumberOfInstances = 10000;
    private readonly T[] _memArray;

    public MeasureSize(Func<T> generator)
    {
        _generator = generator;
        _memArray = new T[NumberOfInstances];
    }

    public long GetByteSize()
    {
        // Make one instance first to make sure the code is jitted
        _generator();
        // Passing true forces a collection, so pending garbage doesn't skew the numbers
        long oldSize = GC.GetTotalMemory(true);
        for (int i = 0; i < NumberOfInstances; i++)
        {
            _memArray[i] = _generator();
        }
        long newSize = GC.GetTotalMemory(true);
        return (newSize - oldSize) / NumberOfInstances;
    }
}
Usage:
It should be created with a Func that generates new instances of T. Make sure the same instance is not returned every time. E.g., this would be fine:
public long SizeOfSomeObject()
{
    var measure = new MeasureSize<SomeObject>(() => new SomeObject());
    return measure.GetByteSize();
}
It can be done indirectly, without considering the alignment.
The number of bytes that a reference-type instance takes is equal to the size of the service fields plus the size of the type's fields.
Service fields (each takes 4 bytes on x86, 8 bytes on x64):
Sync block index
Pointer to the method table
+ optionally (only for arrays) the array length
So, a class without any fields takes 8 bytes on an x86 machine. If it is a class with one field, a reference to an instance of the same class, then this class takes (on x64):
sync block index + method table pointer + reference = 8 + 8 + 8 = 24 bytes
If it is a value type, it does not have any service fields, and therefore it takes only its fields' size. For example, if we have a struct with one int field, then on an x86 machine it takes only 4 bytes of memory.
I had to boil this down all the way to IL level, but I finally got this functionality into C# with a very tiny library.
You can get it (BSD-licensed) at Bitbucket.
Example code:
using Earlz.BareMetal;
...
Console.WriteLine(BareMetal.SizeOf<int>()); //returns 4 everywhere I've tested
Console.WriteLine(BareMetal.SizeOf<string>()); //returns 8 on 64-bit platforms and 4 on 32-bit
Console.WriteLine(BareMetal.SizeOf<Foo>()); //returns 16 in some places, 24 in others. Varies by platform and framework version
...
struct Foo
{
    int a, b;
    byte c;
    object foo;
}
Basically, what I did was write a quick class-method wrapper around the sizeof IL instruction. This instruction will get the raw amount of memory a reference to an object will use. For instance, if you had an array of T, then the sizeof instruction would tell you how many bytes apart each array element is.
This is extremely different from C#'s sizeof operator. For one, C# only allows pure value types, because it's not really possible to get the size of anything else in a static manner. In contrast, the sizeof instruction works at runtime. So it returns however much memory a reference to the type uses in that particular run.
You can see some more info and a bit more in-depth sample code at my blog
If you have the type, use the sizeof operator. It will return the type's size in bytes.
e.g.
Console.WriteLine(sizeof(int));
will output:
4
You can use method overloading as a trick to determine the field size:
public static int FieldSize(int Field) { return sizeof(int); }
public static int FieldSize(bool Field) { return sizeof(bool); }
public static int FieldSize(SomeStructType Field) { return sizeof(SomeStructType); }
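A quick usage sketch: overload resolution picks the matching FieldSize at compile time, so no boxing or reflection is involved:
int myField = 42;
bool myFlag = true;
Console.WriteLine(FieldSize(myField)); // 4
Console.WriteLine(FieldSize(myFlag));  // 1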
The simplest way is: int size = *((int*)type.TypeHandle.Value + 1);
I know this is an implementation detail, but the GC relies on it, and it needs to be close to the start of the method table for efficiency; plus, considering how complex the GC code is, nobody will dare to change it in the future. In fact, it works for every minor/major version of .NET Framework and .NET Core. (I am currently unable to test 1.0.)
If you want a more reliable way: emit a struct in a dynamic assembly with [StructLayout(LayoutKind.Auto)] and exactly the same fields in the same order, and take its size with the sizeof IL instruction. You may want to emit a static method within the struct which simply returns this value. Then add 2 * IntPtr.Size for the object header. This should give you the exact value.
But if your class derives from another class, you need to find the size of each base class separately and add them, plus 2 * IntPtr.Size again for the header. You can do this by getting the fields with the BindingFlags.DeclaredOnly flag.
System.Runtime.CompilerServices.Unsafe
Use System.Runtime.CompilerServices.Unsafe.SizeOf<T>() where T: unmanaged
(when not running in .NET Core you need to install that NuGet package)
Documentation states:
Returns the size of an object of the given type parameter.
It seems to use the sizeof IL instruction just as Earlz's solution does. (source)
The unmanaged constraint is new in C# 7.3
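A small usage sketch (assuming the System.Runtime.CompilerServices.Unsafe package is referenced):
using System;
using System.Runtime.CompilerServices;

class UnsafeSizeOfDemo
{
    static void Main()
    {
        Console.WriteLine(Unsafe.SizeOf<int>());  // 4
        Console.WriteLine(Unsafe.SizeOf<Guid>()); // 16
    }
}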
