Real size in memory of an array [duplicate] - c#

I was trying to determine the overhead of the header on a .NET array (in a 32-bit process) using this code:
long bytes1 = GC.GetTotalMemory(false);
object[] array = new object[10000];
for (int i = 0; i < 10000; i++)
array[i] = new int[1];
long bytes2 = GC.GetTotalMemory(false);
array[0] = null; // ensure no garbage collection before this point
Console.WriteLine(bytes2 - bytes1);
// Calculate array overhead in bytes by subtracting the size of
// the array elements (40000 for object[10000] and 4 for each
// array), and dividing by the number of arrays (10001)
Console.WriteLine("Array overhead: {0:0.000}",
((double)(bytes2 - bytes1) - 40000) / 10001 - 4);
Console.Write("Press any key to continue...");
Console.ReadKey();
The result was
204800
Array overhead: 12.478
In a 32-bit process, object[1] should be the same size as int[1], but when I allocate new object[1] in the loop instead, the overhead jumps by 3.28 bytes to
237568
Array overhead: 15.755
Anyone know why?
(By the way, if anyone's curious, the overhead for non-array objects, e.g. (object)i in the loop above, is about 8 bytes (8.384). I heard it's 16 bytes in 64-bit processes.)

Here's a slightly neater (IMO) short but complete program to demonstrate the same thing:
using System;
class Test
{
const int Size = 100000;
static void Main()
{
object[] array = new object[Size];
long initialMemory = GC.GetTotalMemory(true);
for (int i = 0; i < Size; i++)
{
array[i] = new string[0];
}
long finalMemory = GC.GetTotalMemory(true);
GC.KeepAlive(array);
long total = finalMemory - initialMemory;
Console.WriteLine("Size of each element: {0:0.000} bytes",
((double)total) / Size);
}
}
But I get the same results - the overhead for any reference type array is 16 bytes, whereas the overhead for any value type array is 12 bytes. I'm still trying to work out why that is, with the help of the CLI spec. Don't forget that reference type arrays are covariant, which may be relevant...
EDIT: With the help of cordbg, I can confirm Brian's answer - the type pointer of a reference-type array is the same regardless of the actual element type. Presumably there's some funkiness in object.GetType() (which is non-virtual, remember) to account for this.
So, with code of:
object[] x = new object[1];
string[] y = new string[1];
int[] z = new int[1];
z[0] = 0x12345678;
lock(z) {}
We end up with something like the following:
Variables:
x=(0x1f228c8) <System.Object[]>
y=(0x1f228dc) <System.String[]>
z=(0x1f228f0) <System.Int32[]>
Memory:
0x1f228c4: 00000000 003284dc 00000001 00326d54 00000000 // Data for x
0x1f228d8: 00000000 003284dc 00000001 00329134 00000000 // Data for y
0x1f228ec: 00000000 00d443fc 00000001 12345678 // Data for z
Note that I've dumped the memory 1 word before the value of the variable itself.
For x and y, the values are:
The sync block, used for locking or the hash code (or a thin lock - see Brian's comment)
Type pointer
Size of array
Element type pointer
Null reference (first element)
For z, the values are:
Sync block
Type pointer
Size of array
0x12345678 (first element)
Different value type arrays (byte[], int[] etc) end up with different type pointers, whereas all reference type arrays use the same type pointer, but have a different element type pointer. The element type pointer is the same value as you'd find as the type pointer for an object of that type. So if we looked at a string object's memory in the above run, it would have a type pointer of 0x00329134.
The word before the type pointer certainly has something to do with either the monitor or the hash code: calling GetHashCode() populates that bit of memory, and I believe the default object.GetHashCode() obtains a sync block to ensure hash code uniqueness for the lifetime of the object. However, just doing lock(x){} didn't do anything, which surprised me...
All of this is only valid for "vector" types, by the way - in the CLR, a "vector" type is a single-dimensional array with a lower-bound of 0. Other arrays will have a different layout - for one thing, they'd need the lower bound stored...
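A quick way to see the vector/non-vector distinction from managed code (just an illustration: Array.CreateInstance with a non-zero lower bound produces the non-vector type, which prints with a [*] suffix):

// A single-dimensional, zero-based array uses the "vector" type...
int[] vector = new int[3];
Console.WriteLine(vector.GetType());    // System.Int32[]
// ...while an array created with a non-zero lower bound gets a different, non-vector type.
Array nonVector = Array.CreateInstance(typeof(int), new[] { 3 }, new[] { 1 });
Console.WriteLine(nonVector.GetType()); // System.Int32[*]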
So far this has been experimentation, but here's the guesswork - the reason for the system being implemented the way it is. From here on, I really am just guessing.
All object[] arrays can share the same JIT code. They're going to behave the same way in terms of memory allocation, array access, Length property and (importantly) the layout of references for the GC. Compare that with value type arrays, where different value types may have different GC "footprints" (e.g. one might have a byte and then a reference, others will have no references at all, etc).
Every time you assign a value within an object[] the runtime needs to check that it's valid. It needs to check that the type of the object whose reference you're using for the new element value is compatible with the element type of the array. For instance:
object[] x = new object[1];
object[] y = new string[1];
x[0] = new object(); // Valid
y[0] = new object(); // Invalid - will throw an exception
This is the covariance I mentioned earlier. Now given that this is going to happen for every single assignment, it makes sense to reduce the number of indirections. In particular, I suspect you don't really want to blow the cache by having to go to the type object for each assignment to get the element type. I suspect (and my x86 assembly isn't good enough to verify this) that the test is something like:
Is the value to be copied a null reference? If so, that's fine. (Done.)
Fetch the type pointer of the object the reference points at.
Is that type pointer the same as the element type pointer (simple binary equality check)? If so, that's fine. (Done.)
Is that type pointer assignment-compatible with the element type pointer? (Much more complicated check, with inheritance and interfaces involved.) If so, that's fine - otherwise, throw an exception.
If we can terminate the search in the first three steps, there's not a lot of indirection - which is good for something that's going to happen as often as array assignments. None of this needs to happen for value type assignments, because that's statically verifiable.
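As a rough illustration only (the real check is JIT-generated native code working directly on type pointers; this helper and its name are made up for the sketch, not a CLR API):

static void CheckedStore(object[] array, int index, object value)
{
    if (value != null)
    {
        Type elementType = array.GetType().GetElementType();
        if (value.GetType() != elementType &&        // cheap exact-match check first
            !elementType.IsInstanceOfType(value))    // then the expensive compatibility check
        {
            throw new ArrayTypeMismatchException();
        }
    }
    array[index] = value; // the runtime performs this same covariance check on the real store
}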
So, that's why I believe reference type arrays are slightly bigger than value type arrays.
Great question - really interesting to delve into it :)

An array is a reference type. All reference types carry two additional word fields: the type reference and a SyncBlock index field, which among other things is used to implement locks in the CLR. So the type overhead on reference types is 8 bytes on 32 bit. On top of that the array itself also stores the length, which is another 4 bytes. This brings the total overhead to 12 bytes.
And as I just learned from Jon Skeet's answer, arrays of reference types have an additional 4 bytes of overhead. This can be confirmed using WinDbg. It turns out that the additional word is another type reference for the type stored in the array. All arrays of reference types are stored internally as object[], with an additional reference to the type object of the actual element type. So a string[] is really just an object[] with an additional type reference to the type string. For details please see below.
Values stored in arrays: Arrays of reference types hold references to objects, so each entry in the array is the size of a reference (i.e. 4 bytes on 32 bit). Arrays of value types store the values inline and thus each element will take up the size of the type in question.
This question may also be of interest: C# List<double> size vs double[] size
Gory Details
Consider the following code
var strings = new string[1];
var ints = new int[1];
strings[0] = "hello world";
ints[0] = 42;
Attaching WinDbg shows the following:
First let's take a look at the value type array.
0:000> !dumparray -details 017e2acc
Name: System.Int32[]
MethodTable: 63b9aa40
EEClass: 6395b4d4
Size: 16(0x10) bytes
Array: Rank 1, Number of elements 1, Type Int32
Element Methodtable: 63b9aaf0
[0] 017e2ad4
Name: System.Int32
MethodTable 63b9aaf0
EEClass: 6395b548
Size: 12(0xc) bytes
(C:\Windows\assembly\GAC_32\mscorlib\2.0.0.0__b77a5c561934e089\mscorlib.dll)
Fields:
MT Field Offset Type VT Attr Value Name
63b9aaf0 40003f0 0 System.Int32 1 instance 42 m_value <=== Our value
0:000> !objsize 017e2acc
sizeof(017e2acc) = 16 ( 0x10) bytes (System.Int32[])
0:000> dd 017e2acc -0x4
017e2ac8 00000000 63b9aa40 00000001 0000002a <=== That's the value
First we dump the array and the one element with value of 42. As can be seen the size is 16 bytes. That is 4 bytes for the int32 value itself, 8 bytes for regular reference type overhead and another 4 bytes for the length of the array.
The raw dump shows the SyncBlock, the method table for int[], the length, and the value of 42 (2a in hex). Notice that the SyncBlock is located just in front of the object reference.
Next, let's look at the string[] to find out what the additional word is used for.
0:000> !dumparray -details 017e2ab8
Name: System.String[]
MethodTable: 63b74ed0
EEClass: 6395a8a0
Size: 20(0x14) bytes
Array: Rank 1, Number of elements 1, Type CLASS
Element Methodtable: 63b988a4
[0] 017e2a90
Name: System.String
MethodTable: 63b988a4
EEClass: 6395a498
Size: 40(0x28) bytes <=== Size of the string
(C:\Windows\assembly\GAC_32\mscorlib\2.0.0.0__b77a5c561934e089\mscorlib.dll)
String: hello world
Fields:
MT Field Offset Type VT Attr Value Name
63b9aaf0 4000096 4 System.Int32 1 instance 12 m_arrayLength
63b9aaf0 4000097 8 System.Int32 1 instance 11 m_stringLength
63b99584 4000098 c System.Char 1 instance 68 m_firstChar
63b988a4 4000099 10 System.String 0 shared static Empty
>> Domain:Value 00226438:017e1198 <<
63b994d4 400009a 14 System.Char[] 0 shared static WhitespaceChars
>> Domain:Value 00226438:017e1760 <<
0:000> !objsize 017e2ab8
sizeof(017e2ab8) = 60 ( 0x3c) bytes (System.Object[]) <=== Notice the underlying type of the string[]
0:000> dd 017e2ab8 -0x4
017e2ab4 00000000 63b74ed0 00000001 63b988a4 <=== Method table for string
017e2ac4 017e2a90 <=== Address of the string in memory
0:000> !dumpmt 63b988a4
EEClass: 6395a498
Module: 63931000
Name: System.String
mdToken: 02000024 (C:\Windows\assembly\GAC_32\mscorlib\2.0.0.0__b77a5c561934e089\mscorlib.dll)
BaseSize: 0x10
ComponentSize: 0x2
Number of IFaces in IFaceMap: 7
Slots in VTable: 196
First we dump the array and the string. Next we dump the size of the string[]. Notice that WinDbg lists the type as System.Object[] here. The object size in this case includes the string itself, so the total size is the 20 from the array plus the 40 for the string.
By dumping the raw bytes of the instance we can see the following: First we have the SyncBlock, then follows the method table for object[], then the length of the array. After that we find the additional 4 bytes with the reference to the method table for string. This can be verified by the dumpmt command as shown above. Finally we find the single reference to the actual string instance.
In conclusion
The overhead for arrays can be broken down as follows (on 32 bit that is)
4 bytes SyncBlock
4 bytes for Method table (type reference) for the array itself
4 bytes for Length of array
Arrays of reference types add another 4 bytes to hold the method table of the actual element type (reference type arrays are object[] under the hood)
I.e. the overhead is 12 bytes for value type arrays and 16 bytes for reference type arrays.
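As a rough sanity check of those numbers (a hypothetical helper, 32-bit figures only; real totals are also rounded up to the heap's allocation granularity):

static long EstimatedArrayBytes32(int length, bool referenceElements, int valueTypeSize = 0)
{
    int overhead = referenceElements ? 16 : 12;               // header breakdown from above
    int perElement = referenceElements ? 4 : valueTypeSize;   // a reference is 4 bytes on 32 bit
    return overhead + (long)length * perElement;
}
// EstimatedArrayBytes32(1, false, 4) -> 16, matching the int[1] dump above
// EstimatedArrayBytes32(1, true)     -> 20, matching the string[1] dump above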

I think you are making some faulty assumptions while measuring, as the memory allocation (via GetTotalMemory) during your loop may be different than the actual required memory for just the arrays - the memory may be allocated in larger blocks, there may be other objects in memory that are reclaimed during the loop, etc.
Here's some info for you on array overhead:
Arrays Undocumented
Article by Jeffrey Richter
.Net Type Internals

Because heap management (which is what GetTotalMemory reports on) can only allocate rather large blocks, which the CLR then hands out in smaller chunks as the program requests them.

I'm sorry for going off topic, but I found some interesting information on memory overhead just this morning.
We have a project which operates on a huge amount of data (up to 2GB). As the major storage we use Dictionary<T,T>; thousands of dictionaries are actually created. After changing it to a List<T> for keys and a List<T> for values (we implemented IDictionary<T,T> ourselves), the memory usage decreased by about 30-40%.
Why?
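For what it's worth, Dictionary<TKey,TValue> keeps a separate bucket array and, for each entry, a cached hash code and a "next" index on top of the key and the value, whereas two plain lists store only the keys and values - which is roughly where that 30-40% can come from. A minimal sketch of the parallel-list idea (not the poster's actual implementation; lookups become O(n)):

class ListDictionary<TKey, TValue>   // hypothetical name
{
    private readonly List<TKey> keys = new List<TKey>();       // System.Collections.Generic
    private readonly List<TValue> values = new List<TValue>();

    public void Add(TKey key, TValue value)
    {
        keys.Add(key);
        values.Add(value);
    }

    public bool TryGetValue(TKey key, out TValue value)
    {
        int index = keys.IndexOf(key);   // linear search instead of hashing
        value = index >= 0 ? values[index] : default(TValue);
        return index >= 0;
    }
}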

Related

Memory usage of dynamic type in c#

Does the dynamic type use more memory than the corresponding concrete type?
For example, does the field use only four bytes?
dynamic foo = (int) 1488;
Short answer:
No. It will actually use 12 bytes on a 32-bit machine and 24 bytes on a 64-bit machine.
Long Answer
A dynamic type will be stored as an object but at run time the compiler will load many more bytes to make sense of what to do with the dynamic type. In order to do that, a lot more memory will be used to figure that out. Think of dynamic as a fancy object.
Here is a class:
class Mine
{
}
Here is the overhead for the above object on 32bit:
--------------------------  -4 bytes
|   Object Header Word   |
|------------------------|  +0 bytes
|  Method Table Pointer  |
|------------------------|  +4 bytes for Method Table Pointer
A total of 12 bytes needs to be allocated to it since the smallest reference type on 32bit is 12 bytes.
If we add one field to that class like this:
class Mine
{
public int Field = 1488;
}
It will still take 12 bytes because the overhead and the int field can fit in the 12 bytes.
If we add another int field, it will take 16 bytes.
However, if we add one dynamic field to that class like this:
class Mine
{
public dynamic Field = (int)1488;
}
It will NOT be 12 bytes. The dynamic field will be treated like an object and thus the size will be 12 + 12 = 24 bytes.
What is interesting is if you do this instead:
class Mine
{
public dynamic Field = (bool)false;
}
An instance of Mine will still take 24 bytes because even though the dynamic field is only a boolean, it is still treated like an object.
On a 64bit machine, an instance of Mine with dynamic will take 48 bytes since the smallest reference type on 64 bit is 24 bytes (24 + 24 = 48 bytes).
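If you want to check these numbers yourself, the GC.GetTotalMemory technique from the first answer can be reused; this is only a sketch and the figures are approximate (the boxed value behind the dynamic field is counted along with each instance):

const int Count = 100000;
var keep = new object[Count];                  // allocated before the first measurement
long before = GC.GetTotalMemory(true);
for (int i = 0; i < Count; i++)
    keep[i] = new Mine();                      // a Mine instance plus the boxed int it references
long after = GC.GetTotalMemory(true);
GC.KeepAlive(keep);
Console.WriteLine("Bytes per instance: {0:0.0}", (after - before) / (double)Count);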
Here are some gotchas you should be aware of and see this answer for size of object.

Why does struct alignment depend on whether a field type is primitive or user-defined?

In Noda Time v2, we're moving to nanosecond resolution. That means we can no longer use an 8-byte integer to represent the whole range of time we're interested in. That has prompted me to investigate the memory usage of the (many) structs of Noda Time, which has in turn led me to uncover a slight oddity in the CLR's alignment decision.
Firstly, I realize that this is an implementation decision, and that the default behaviour could change at any time. I realize that I can modify it using [StructLayout] and [FieldOffset], but I'd rather come up with a solution which didn't require that if possible.
My core scenario is that I have a struct which contains a reference-type field and two other value-type fields, where those fields are simple wrappers for int. I had hoped that that would be represented as 16 bytes on the 64-bit CLR (8 for the reference and 4 for each of the others), but for some reason it's using 24 bytes. I'm measuring the space using arrays, by the way - I understand that the layout may be different in different situations, but this felt like a reasonable starting point.
Here's a sample program demonstrating the issue:
using System;
using System.Runtime.InteropServices;
#pragma warning disable 0169
struct Int32Wrapper
{
int x;
}
struct TwoInt32s
{
int x, y;
}
struct TwoInt32Wrappers
{
Int32Wrapper x, y;
}
struct RefAndTwoInt32s
{
string text;
int x, y;
}
struct RefAndTwoInt32Wrappers
{
string text;
Int32Wrapper x, y;
}
class Test
{
static void Main()
{
Console.WriteLine("Environment: CLR {0} on {1} ({2})",
Environment.Version,
Environment.OSVersion,
Environment.Is64BitProcess ? "64 bit" : "32 bit");
ShowSize<Int32Wrapper>();
ShowSize<TwoInt32s>();
ShowSize<TwoInt32Wrappers>();
ShowSize<RefAndTwoInt32s>();
ShowSize<RefAndTwoInt32Wrappers>();
}
static void ShowSize<T>()
{
long before = GC.GetTotalMemory(true);
T[] array = new T[100000];
long after = GC.GetTotalMemory(true);
Console.WriteLine("{0}: {1}", typeof(T),
(after - before) / array.Length);
}
}
And the compilation and output on my laptop:
c:\Users\Jon\Test>csc /debug- /o+ ShowMemory.cs
Microsoft (R) Visual C# Compiler version 12.0.30501.0
for C# 5
Copyright (C) Microsoft Corporation. All rights reserved.
c:\Users\Jon\Test>ShowMemory.exe
Environment: CLR 4.0.30319.34014 on Microsoft Windows NT 6.2.9200.0 (64 bit)
Int32Wrapper: 4
TwoInt32s: 8
TwoInt32Wrappers: 8
RefAndTwoInt32s: 16
RefAndTwoInt32Wrappers: 24
So:
If you don't have a reference type field, the CLR is happy to pack Int32Wrapper fields together (TwoInt32Wrappers has a size of 8)
Even with a reference type field, the CLR is still happy to pack int fields together (RefAndTwoInt32s has a size of 16)
Combining the two, each Int32Wrapper field appears to be padded/aligned to 8 bytes. (RefAndTwoInt32Wrappers has a size of 24.)
Running the same code in the debugger (but still a release build) shows a size of 12.
A few other experiments have yielded similar results:
Putting the reference type field after the value type fields doesn't help
Using object instead of string doesn't help (I expect it's "any reference type")
Using another struct as a "wrapper" around the reference doesn't help
Using a generic struct as a wrapper around the reference doesn't help
If I keep adding fields (in pairs for simplicity), int fields still count for 4 bytes, and Int32Wrapper fields count for 8 bytes
Adding [StructLayout(LayoutKind.Sequential, Pack = 4)] to every struct in sight doesn't change the results
Does anyone have any explanation for this (ideally with reference documentation) or a suggestion of how I can hint to the CLR that I'd like the fields to be packed, without specifying a constant field offset?
I think this is a bug. You are seeing the side-effect of automatic layout, it likes to align non-trivial fields to an address that's a multiple of 8 bytes in 64-bit mode. It occurs even when you explicitly apply the [StructLayout(LayoutKind.Sequential)] attribute. That is not supposed to happen.
You can see it by making the struct members public and appending test code like this:
var test = new RefAndTwoInt32Wrappers();
test.text = "adsf";
test.x.x = 0x11111111;
test.y.x = 0x22222222;
Console.ReadLine(); // <=== Breakpoint here
When the breakpoint hits, use Debug + Windows + Memory + Memory 1. Switch to 4-byte integers and put &test in the Address field:
0x000000E928B5DE98 0ed750e0 000000e9 11111111 00000000 22222222 00000000
0xe90ed750e0 is the string pointer on my machine (not yours). You can easily see the Int32Wrappers, with the extra 4 bytes of padding that turned the size into 24 bytes. Go back to the struct and put the string last. Repeat and you'll see the string pointer is still first. Violating LayoutKind.Sequential, you got LayoutKind.Auto.
It is going to be difficult to convince Microsoft to fix this, it has worked this way for too long so any change is going to be breaking something. The CLR only makes an attempt to honor [StructLayout] for the managed version of a struct and make it blittable, it in general quickly gives up. Notoriously for any struct that contains a DateTime. You only get the true LayoutKind guarantee when marshaling a struct. The marshaled version certainly is 16 bytes, as Marshal.SizeOf() will tell you.
Using LayoutKind.Explicit fixes it, not what you wanted to hear.
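For completeness, a sketch of what that explicit layout might look like (offsets chosen for 64-bit, reusing the question's Int32Wrapper; the name is made up and I haven't verified the resulting size beyond what is described above):

[StructLayout(LayoutKind.Explicit)]
struct RefAndTwoInt32WrappersExplicit
{
    [FieldOffset(0)]  public string text;      // reference: 8 bytes on 64-bit, must stay pointer-aligned
    [FieldOffset(8)]  public Int32Wrapper x;   // 4 bytes
    [FieldOffset(12)] public Int32Wrapper y;   // 4 bytes, giving 16 bytes in total
}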
EDIT2
struct RefAndTwoInt32Wrappers
{
public int x;
public string s;
}
This code will be 8 byte aligned so the struct will have 16 bytes. By comparison this:
struct RefAndTwoInt32Wrappers
{
public int x,y;
public string s;
}
Will be 4 byte aligned so this struct also will have 16 bytes. So the rationale here is that struct alignment in the CLR is determined by its most aligned field; classes obviously cannot do that, so they will remain 8 byte aligned.
Now if we combine all that and create struct:
struct RefAndTwoInt32Wrappers
{
public int x,y;
public Int32Wrapper z;
public string s;
}
It will have 24 bytes: {x,y} will have 4 bytes each and {z,s} will have 8 bytes each. Once we introduce a reference type in the struct, the CLR will always align our custom struct to match the class alignment.
struct RefAndTwoInt32Wrappers
{
public Int32Wrapper z;
public long l;
public int x,y;
}
This code will have 24 bytes since Int32Wrapper will be aligned the same as long. So the custom struct wrapper will always align to the highest/best aligned field in the structure, or to its own internal most significant fields. So in the case of a ref string that is 8 byte aligned, the struct wrapper will align to that.
In conclusion, a custom struct field inside a struct will always be aligned to the highest aligned instance field in the structure. I'm not sure if this is a bug, but without some evidence I'm going to stick by my opinion that this might be a conscious decision.
EDIT
The sizes are actually accurate only when allocated on the heap; the structs themselves have smaller sizes (the exact sizes of their fields). Further analysis seems to suggest that this might be a bug in the CLR code, but that needs to be backed up by evidence.
I will inspect the CLI code and post further updates if I find something useful.
This is an alignment strategy used by the .NET memory allocator.
public static RefAndTwoInt32s[] test = new RefAndTwoInt32s[1];
static void Main()
{
test[0].text = "a";
test[0].x = 1;
test[0].x = 1;
Console.ReadKey();
}
This code was compiled with .NET 4.0 under x64. In WinDbg, let's do the following:
Let's find the type on the heap first:
0:004> !dumpheap -type Ref
Address MT Size
0000000003e72c78 000007fe61e8fb58 56
0000000003e72d08 000007fe039d3b78 40
Statistics:
MT Count TotalSize Class Name
000007fe039d3b78 1 40 RefAndTwoInt32s[]
000007fe61e8fb58 1 56 System.Reflection.RuntimeAssembly
Total 2 objects
Once we have it, let's see what's at that address:
0:004> !do 0000000003e72d08
Name: RefAndTwoInt32s[]
MethodTable: 000007fe039d3b78
EEClass: 000007fe039d3ad0
Size: 40(0x28) bytes
Array: Rank 1, Number of elements 1, Type VALUETYPE
Fields:
None
We see that this is a ValueType and it's the one we created. Since this is an array, we need to get the ValueType definition of a single element in the array:
0:004> !dumparray -details 0000000003e72d08
Name: RefAndTwoInt32s[]
MethodTable: 000007fe039d3b78
EEClass: 000007fe039d3ad0
Size: 40(0x28) bytes
Array: Rank 1, Number of elements 1, Type VALUETYPE
Element Methodtable: 000007fe039d3a58
[0] 0000000003e72d18
Name: RefAndTwoInt32s
MethodTable: 000007fe039d3a58
EEClass: 000007fe03ae2338
Size: 32(0x20) bytes
File: C:\ConsoleApplication8\bin\Release\ConsoleApplication8.exe
Fields:
MT Field Offset Type VT Attr Value Name
000007fe61e8c358 4000006 0 System.String 0 instance 0000000003e72d30 text
000007fe61e8f108 4000007 8 System.Int32 1 instance 1 x
000007fe61e8f108 4000008 c System.Int32 1 instance 0 y
The structure is actually 32 bytes: 16 bytes of it are reserved for padding, so in actuality every structure is at least 16 bytes in size from the get-go.
If you add the 16 bytes from the ints and the string ref to 0000000003e72d18, plus 8 bytes of EE/padding, you end up at 0000000003e72d30, and this is the starting point of the string reference. Since all references are 8 byte padded from their first actual data field, this makes up our 32 bytes for this structure.
Let's see if the string is actually padded that way:
0:004> !do 0000000003e72d30
Name: System.String
MethodTable: 000007fe61e8c358
EEClass: 000007fe617f3720
Size: 28(0x1c) bytes
File: C:\WINDOWS\Microsoft.Net\assembly\GAC_64\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
String: a
Fields:
MT Field Offset Type VT Attr Value Name
000007fe61e8f108 40000aa 8 System.Int32 1 instance 1 m_stringLength
000007fe61e8d640 40000ab c System.Char 1 instance 61 m_firstChar
000007fe61e8c358 40000ac 18 System.String 0 shared static Empty
>> Domain:Value 0000000001577e90:NotInit <<
Now let's analyse the above program the same way:
public static RefAndTwoInt32Wrappers[] test = new RefAndTwoInt32Wrappers[1];
static void Main()
{
test[0].text = "a";
test[0].x.x = 1;
test[0].y.x = 1;
Console.ReadKey();
}
0:004> !dumpheap -type Ref
Address MT Size
0000000003c22c78 000007fe61e8fb58 56
0000000003c22d08 000007fe039d3c00 48
Statistics:
MT Count TotalSize Class Name
000007fe039d3c00 1 48 RefAndTwoInt32Wrappers[]
000007fe61e8fb58 1 56 System.Reflection.RuntimeAssembly
Total 2 objects
The array holding our struct is 48 bytes now.
0:004> !dumparray -details 0000000003c22d08
Name: RefAndTwoInt32Wrappers[]
MethodTable: 000007fe039d3c00
EEClass: 000007fe039d3b58
Size: 48(0x30) bytes
Array: Rank 1, Number of elements 1, Type VALUETYPE
Element Methodtable: 000007fe039d3ae0
[0] 0000000003c22d18
Name: RefAndTwoInt32Wrappers
MethodTable: 000007fe039d3ae0
EEClass: 000007fe03ae2338
Size: 40(0x28) bytes
File: C:\ConsoleApplication8\bin\Release\ConsoleApplication8.exe
Fields:
MT Field Offset Type VT Attr Value Name
000007fe61e8c358 4000009 0 System.String 0 instance 0000000003c22d38 text
000007fe039d3a20 400000a 8 Int32Wrapper 1 instance 0000000003c22d20 x
000007fe039d3a20 400000b 10 Int32Wrapper 1 instance 0000000003c22d28 y
Here the situation is the same: if we add 8 bytes of string ref to 0000000003c22d18, we end up at the start of the first Int32Wrapper, whose value actually points to the address we are at.
Now we can see that each value looks like an object reference; let's confirm that by peeking at 0000000003c22d20.
0:004> !do 0000000003c22d20
<Note: this object has an invalid CLASS field>
Invalid object
Actually that's correct: since it's a struct, the address alone doesn't tell us whether this is an object or a value type.
0:004> !dumpvc 000007fe039d3a20 0000000003c22d20
Name: Int32Wrapper
MethodTable: 000007fe039d3a20
EEClass: 000007fe03ae23c8
Size: 24(0x18) bytes
File: C:\ConsoleApplication8\bin\Release\ConsoleApplication8.exe
Fields:
MT Field Offset Type VT Attr Value Name
000007fe61e8f108 4000001 0 System.Int32 1 instance 1 x
So in actuality this is more like a union type that gets 8 byte aligned this time around (all of the padding is aligned with the parent struct). If it weren't, we would end up with 20 bytes, and that's not optimal, so the memory allocator will never allow it to happen. If you do the math again, it turns out that the struct is indeed 40 bytes in size.
So if you want to be more conservative with memory, you should never pack it into a custom struct type, but instead use simple arrays. Another way is to allocate memory off the heap (e.g. with VirtualAllocEx); this way you are given your own memory block and you manage it the way you want.
The final question here is why we might suddenly get a layout like that. If you compare the JITed code and performance of incrementing an int[] against incrementing a counter field in a struct[], the second one generates an 8 byte aligned address (being a union), which when JITed translates to more optimized assembly code (a single LEA vs multiple MOVs). However, in the case described here the performance is actually worse, so my take is that this is consistent with the underlying CLR implementation: since it's a custom type that can have multiple fields, it may be easier/better to store the starting address instead of a value (since storing a value would be impossible) and do the struct padding there, thus resulting in a bigger byte size.
Summary: see Hans Passant's answer (probably above). LayoutKind.Sequential doesn't work.
Some testing:
It definitely happens only on 64-bit, and the object reference "poisons" the struct; 32-bit does what you are expecting:
Environment: CLR 4.0.30319.34209 on Microsoft Windows NT 6.2.9200.0 (32 bit)
ConsoleApplication1.Int32Wrapper: 4
ConsoleApplication1.TwoInt32s: 8
ConsoleApplication1.TwoInt32Wrappers: 8
ConsoleApplication1.ThreeInt32Wrappers: 12
ConsoleApplication1.Ref: 4
ConsoleApplication1.RefAndTwoInt32s: 12
ConsoleApplication1.RefAndTwoInt32Wrappers: 12
ConsoleApplication1.RefAndThreeInt32s: 16
ConsoleApplication1.RefAndThreeInt32Wrappers: 16
As soon as the object reference is added, all the wrapper structs expand to 8 bytes rather than their 4-byte size. Expanding the tests:
Environment: CLR 4.0.30319.34209 on Microsoft Windows NT 6.2.9200.0 (64 bit)
ConsoleApplication1.Int32Wrapper: 4
ConsoleApplication1.TwoInt32s: 8
ConsoleApplication1.TwoInt32Wrappers: 8
ConsoleApplication1.ThreeInt32Wrappers: 12
ConsoleApplication1.Ref: 8
ConsoleApplication1.RefAndTwoInt32s: 16
ConsoleApplication1.RefAndTwoInt32sSequential: 16
ConsoleApplication1.RefAndTwoInt32Wrappers: 24
ConsoleApplication1.RefAndThreeInt32s: 24
ConsoleApplication1.RefAndThreeInt32Wrappers: 32
ConsoleApplication1.RefAndFourInt32s: 24
ConsoleApplication1.RefAndFourInt32Wrappers: 40
As you can see, as soon as the reference is added every Int32Wrapper becomes 8 bytes, so it isn't simple alignment. I shrank down the array allocation in case it was a LOH allocation, which is aligned differently.
Just to add some data to the mix - I created one more type from the ones you had:
struct RefAndTwoInt32Wrappers2
{
string text;
TwoInt32Wrappers z;
}
The program writes out:
RefAndTwoInt32Wrappers2: 16
So it looks like the TwoInt32Wrappers struct aligns properly in the new RefAndTwoInt32Wrappers2 struct.

GC.GetTotalMemory use and its return value

I have a binary file saved on disk with a size of 15KB, but why is its measured memory size always only 4 bytes?
long mem1=GC.GetTotalMemory(false);
Object[] array= new Object[1000000];
array[1]=obj; // obj is the object content of the file before it is saved on disk
long mem2=GC.GetTotalMemory(false);
long sizeOfOneElementInArray=(mem2-mem1)/1000000;
I must be wrong about something somewhere. I think the result is incorrect because 4 bytes is not enough to store even a "hello world" string, but why is it incorrect?
Thanks for any help.
Is the assumption that assigning obj to index [1] of the array would take a substantial number of bytes? All you are doing is assigning a reference. Not only that, but all that new Object[1000000] did was create an array (space for 1,000,000 references, plus the memory required by Object[] itself), not allocate 1,000,000 Objects. I am sure someone can elaborate even more about the internal data structures being used and why 4 bytes shows up.
The key thing to realize is that assigning obj to array[1] does not allocate additional memory for obj. If you are trying to determine an approximation, call GC.GetTotalMemory before obj is allocated, then after. In your test, obj is already allocated before you call the first GC.GetTotalMemory.
In general, when the MSDN documentation says things like "A number that is the best available approximation of the number of bytes currently allocated in managed memory", it is a bad idea to rely on it for accurate values :-)
All kidding aside, if your array is not used in the function after line 3 in your example, it is possible that it is being collected between lines 3 and 4. Just a guess.
First, if I use the code as written, the computed size becomes 0 for me, because the array is not used after the assignment, so the GC can collect it. The following assumes the array is not collected (e.g. by using GC.KeepAlive(array) at the end of the method).
Let's look carefully what each of the two important lines of your code do (assuming 32-bit architecture):
Object[] array = new Object[1000000];
This line allocates 1000000 * 4 + 16 bytes. Each element in the array takes 4 bytes, because it's a reference (pointer) to an object. 16 bytes is the overhead of an array.
array[1] = obj;
This line changes one of the references in the array to refer to obj. This line allocates exactly 0 bytes.
I'm not sure why you are confused about the result. It has to be 4, unless there are some unreferenced objects from an earlier part of the code. In that case, it could be less than 4 (even a negative number). Of course, if this were a multi-threaded application, the result could be pretty much anything, depending on what other threads do.
And this all assumes that GetTotalMemory() is precise, which it doesn't have to be.
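To illustrate the point about what is being measured, here is a sketch that measures the array allocation itself (the usual caveat applies: GetTotalMemory is only an approximation):

long before = GC.GetTotalMemory(true);
Object[] array = new Object[1000000];
long after = GC.GetTotalMemory(true);
GC.KeepAlive(array);                      // stop the GC from collecting the array mid-measurement
Console.WriteLine(after - before);        // roughly 1000000 * 4 + 16 bytes on 32-bit, as explained above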
In my experience, Marshal.SizeOf() is generally a better method of getting an object's true size than asking the garbage collector.
http://msdn.microsoft.com/en-us/library/y3ybkfb3.aspx

How much memory array of objects in c# consumes?

Suppose that we have previously instantiated three objects A, B, C of class D.
Now an array is defined as below:
D[] arr = new D[3];
arr[0]=A;
arr[1]=B;
arr[2]=C;
Does the array contain references to the objects, or does it hold separate copies?
C# distinguishes reference types and value types.
A reference type is declared using the word class. Variables of these types contain references, so an array will be an array of references to the objects. Each reference is 4 bytes (on a 32-bit system) or 8 bytes (on a 64-bit system) large.
A value type is declared using the word struct. Values of this type are copied every time you assign them. An array of a value type contains copies of the values, so the size of the array is the size of the struct times the number of elements.
Normally when we say “object”, we refer to instances of a reference type, so the answer to your question is “yes”, but remember the difference and make sure that you don’t accidentally create a large array of a large struct.
An array of reference types only contains references.
In a 32 bit application references are 32 bits (4 bytes), and in a 64 bit application references are 64 bits (8 bytes). So, you can calculate the approximate size by multiplying the array length with the reference size. (There are also a few extra bytes for internal variables for the array class, and some extra bytes are used for memory management.)
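A back-of-the-envelope version of that calculation (a sketch; the array object's own header adds the few extra bytes mentioned above):

int length = 3;                                    // D[] arr = new D[3];
int referenceSize = IntPtr.Size;                   // 4 on 32-bit, 8 on 64-bit
long approxBytes = (long)length * referenceSize;   // 12 or 24 bytes of references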
You can look at the memory occupied by an array using WinDBG + SOS (or PSSCOR2). IIRC, an array of reference types is represented in memory by its length, followed by references to its elements, i.e. its exact size is PLATFORM_POINTER_SIZE * (array.Length + 1)
The array is made out of pointers (32-bit or 64-bit) that point to the objects. An object is a reference type; only value types are copied into the array itself.
As @Yves said, it holds references to the objects. The array is a block of memory, as it is in C.
So its size is sizeof(element) * count, plus the amount of memory needed for the array object's overhead.

C# function normal return value VS out or ref argument

I've got a method in C# that needs to return a very large array (or any other large data structure for that matter).
Is there a performance gain in using a ref or out parameter instead of the standard return value?
i.e. is there any performance or other gain in using
void function(sometype input, ref largearray)
over
largearray function(sometype input)
The amount of stack space used on a 32-bit x86 processor to pass arguments of various types:
byte: 4 bytes
bool: 4 bytes
enum: 4 bytes
char: 4 bytes
short: 4 bytes
int: 4 bytes
long: 8 bytes
float: 4 bytes
double: 8 bytes
decimal: 16 bytes
struct: runtime size of the structure
string: 4 bytes
array: 4 bytes
object: 4 bytes
interface: 4 bytes
pointer: 4 bytes
class instance: 4 bytes
The ones from string onwards are reference types; their size will double on a 64-bit processor.
For a static method call, the first 2 arguments that are up to 4 bytes will be passed through CPU registers, not the stack. For an instance method call only one argument will be passed through registers. The rest are passed on the stack. A 64-bit processor supports passing 4 arguments through registers.
As is clear from the list, the only time you should ever consider passing an argument by ref is for structures. The normal guidance is to do so when the structure is larger than 16 bytes. It isn't always easy to guess the runtime size of a structure; up to 4 fields is usually about right, fewer if those fields are double, long or decimal. This guidance then usually recommends turning your structure into a class, precisely for this reason.
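For example (a sketch of that guidance with made-up types):

struct Big                        // 4 doubles = 32 bytes, above the ~16-byte guideline
{
    public double A, B, C, D;
}

static double SumByValue(Big b)   // copies 32 bytes onto the stack per call
{
    return b.A + b.B + b.C + b.D;
}

static double SumByRef(ref Big b) // passes only a 4- or 8-byte pointer
{
    return b.A + b.B + b.C + b.D;
}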
Also note that there are no savings in intentionally passing an argument as a byte or short; an int is the type that a 32-bit processor is happy with. The same goes for currently available 64-bit processors.
A method's return value, the real topic of your question, is almost always returned in a CPU register. Most types fit comfortably in the EAX or EDX:EAX registers, or an FPU register for floating point values. The only exceptions are large structures and decimal; they are too large to fit in a register. They are handled by reserving space on the stack for the return value and passing a 4-byte pointer to that space as an argument to the method.
There isn't, just return the array
An out parameter returns a reference to an instance of type, which wasn't required to be initialised before sending into a method.
A ref parameter returns a reference to an instance of type, that must be initialised before sending in to a method.
This is about call semantics, NOT performance.
There would be no difference between
void function(sometype input, out largearray output )
and
largearray function(sometype input)
However, if you do
largearray function( sometype input, ref largearray output )
and you require the caller to have pre-allocated the large array, that would of course be faster, but it would only matter if you call the method repeatedly and keep the large array allocated between calls.
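A sketch of that pattern (names are made up): the caller allocates the large array once and the method only fills it, so repeated calls allocate nothing new.

static void FillResults(int input, int[] output)
{
    for (int i = 0; i < output.Length; i++)
        output[i] = input + i;
}

static void Caller()
{
    int[] buffer = new int[1000000];       // allocated once, up front
    for (int call = 0; call < 100; call++)
        FillResults(call, buffer);         // the same buffer is reused on every call
}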
