I can't really understand why the object header doubled in size in 64-bit applications.
The object header was 8 bytes, and in 64-bit it is 16. What are these additional bytes used for?
The object header is made up of two fields: the syncblk and the method table pointer (aka "type handle"). The second field is easy to understand; it is a pointer, so it must grow from 4 to 8 bytes in 64-bit mode.
The syncblk is the much less obvious case. It is a mix of flags and values (lock owner thread id, hash code, sync block index), and there is no reason to make that bigger in 64-bit mode. What matters is what happens after the object is collected by the GC. If the free space was not eliminated by compacting the heap, the object's space participates in the free block list, which works like a doubly-linked list. The second field is the forward pointer to the next free block. The object data space is used to store the size of the free block, the basic reason why an object is never smaller than 12 bytes. And the syncblk stores the back pointer to the previous free block, so it now must be big enough to store a pointer and therefore needs to grow to 8 bytes. So it is 8 + 8 = 16 bytes.
Fwiw, the minimum object size in 64-bit mode is 24 bytes, even though 8 + 8 + 4 = 20 bytes would do just fine; the extra bytes just ensure that everything is aligned to 8. Alignment matters a great deal: you would never want a pointer value to straddle an L1 cache line, which makes accessing it roughly three times slower. The <gcAllowVeryLargeObjects> option, added later, is another reason.
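If you want to see the 24-byte minimum for yourself, a rough sketch (the numbers are approximate and depend on the runtime, bitness and GC state) is to allocate a large batch of empty objects and divide the growth reported by GC.GetTotalMemory:

using System;

class MinObjectSizeSketch
{
    static void Main()
    {
        const int count = 1000000;
        object[] keep = new object[count];          // keeps the objects reachable

        long before = GC.GetTotalMemory(true);
        for (int i = 0; i < count; i++)
            keep[i] = new object();                 // smallest possible reference type
        long after = GC.GetTotalMemory(true);

        // Typically prints ~24 on 64-bit and ~12 on 32-bit (approximate).
        Console.WriteLine((after - before) / count);
        GC.KeepAlive(keep);
    }
}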
Related
I have a huge list of strings that I want to hold in memory as efficiently as possible. I tried holding it in a List<string>, but it uses 24 bytes for each string that has 5 characters, so there must be some overhead somewhere.
Then I tried holding it in a string array. The memory usage was a bit better, but I still have a memory usage problem.
How can I hold a list of strings? I know that "C# reserves 2 bytes for each character". I want to hold a string that has 5 characters as 5 * 2 = 10 bytes, but why does it use 24 bytes instead?
Thank you for any help.
Firstly, note that the difference between a List<string> that was created at the correct size, and a string[] (of the same size) is inconsequential for any non-trivial size; a List<T> is really just a fancy wrapper for T[] with insert/resize/etc capabilities. If you only need to hold the data: T[] is fine, but so is List<T> usually.
As for the string - it isn't C# that reserves anything - it is .NET that defines that a string is an object, which is internally a length (int) plus memory for char data, 2 bytes per char. But: objects in .NET have object headers, padding/alignment, etc - and importantly: a minimum size. So yes, they take more memory than just the raw data you're trying to represent.
If you only need the actual data, you could perhaps store the data not as string, but as raw memory - either a simple large byte[] or byte*, or as a twinned pair of int[]/int* (for lengths and/or offsets into the page) and a char[]/char* (for the actual character data), or a byte[]/byte* if you can work with encoded data (i.e. you're mainly interested in IO work). However, working with such a form will be hugely inconvenient - virtually no common APIs will want to play with you unless you are talking in string. There are some APIs that accept raw byte/char data, but they are largely the encoder/decoder APIs, and some IO APIs. So again: unless that's what you're doing: it won't end well. Very recently, some Span<char> / Span<byte> APIs have appeared which would make this slightly less inconvenient (if you can use the latest .NET Core builds, etc), but: I strongly suspect that in most common cases you're just going to have to accept the string overhead and live with it.
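As a hedged sketch of the "raw memory" idea (the class and member names below are invented for illustration, not an existing API), you can pack all the character data into one shared buffer and keep a parallel list of offsets, paying one object header for the buffer instead of one per string:

using System.Collections.Generic;
using System.Text;

// Columnar string storage: one shared character buffer plus offsets,
// instead of one object header (and padding) per individual string.
class PackedStrings
{
    private readonly StringBuilder _chars = new StringBuilder();
    private readonly List<int> _offsets = new List<int> { 0 };

    public int Add(string value)
    {
        _chars.Append(value);
        _offsets.Add(_chars.Length);
        return _offsets.Count - 2;      // index of the value just added
    }

    // Materializing a string allocates again, so only do it when needed.
    public string Get(int index) =>
        _chars.ToString(_offsets[index], _offsets[index + 1] - _offsets[index]);
}

The trade-off is exactly the inconvenience described above: anything that wants a string forces you to materialize one anyway.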
Minimum size of any object in 64-bit .NET is 24 bytes.
In 32-bit it's a bit smaller, but there's always at least 8 bytes for the object header, and here we'd expect the string to store its length (4 bytes): 8 + 4 + 10 = 22. I'm guessing it also wants/needs all objects to be 4-byte aligned. So if you're storing them as objects, you're not going to get a much smaller representation.
If it's all 7-bit ASCII type characters, you could store them as arrays of bytes but each array would still take up some space.
Your best route (I appreciate this bit is more comment-like) is to come up with different processing algorithms that don't require them all to be in memory at the same time in the first place.
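To make that last suggestion concrete with a hedged sketch: if the strings come from a file, a lazy API such as File.ReadLines streams one line at a time instead of loading the whole list (the file name below is just a placeholder):

using System;
using System.IO;
using System.Linq;

class StreamingSketch
{
    static void Main()
    {
        // File.ReadLines is lazy, unlike File.ReadAllLines which loads
        // everything up front; only the current line is held in memory.
        int count = File.ReadLines("words.txt")        // placeholder file name
                        .Count(line => line.Length == 5);
        Console.WriteLine(count);
    }
}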
15 years ago, while programming in Pascal, I understood why to use powers of two for memory allocation. But this still seems to be state of the art.
C# Examples:
new StringBuilder(256);
new byte[1024];
int bufferSize = 1 << 12;
I still see this thousands of times; I use it myself, and I'm still questioning:
Do we need this in modern programming languages and modern hardware?
I guess it's good practice, but what's the reason?
EDIT
For example, with a byte[] array, as stated by answers here, a power of 2 may make no sense: the array itself uses 16 bytes of overhead (?), so does it make sense to use 240 (= 256 - 16) for the size so that the total fits into 256 bytes?
Do we need this in modern programming languages and modern hardware? I guess it's good practice, but what's the reason?
It depends. There are two things to consider here:
For sizes less than the memory page size, there's no appreciable difference between a power-of-two and an arbitrary number to allocate space;
You mostly use managed data structures with C#, so you won't even know how many bytes are really allocated underneath.
Assuming you're doing low-level allocation with malloc(), using multiples of the page size would be considered a good idea, i.e. 4096 or 8192; this is because it allows for more efficient memory management.
My advice would be to just allocate what you need and let C# handle the memory management and allocation for you.
Sadly, it's quite stupid if you want to keep a block of memory within a single 4k memory page... and people don't even know it :-) (I didn't until 10 minutes ago... I only had a hunch). An example... It's unsafe code and implementation dependent (using .NET 4.5 at 32/64 bits):
byte[] arr = new byte[4096];

fixed (byte* p = arr)
{
    // The array's Length field is stored just before the first element:
    // 4 bytes before it on 32-bit, 8 bytes before it on 64-bit.
    int size = ((int*)p)[IntPtr.Size == 4 ? -1 : -2];
}
So the CLR has allocated at least 4096 + (1 or 2) * sizeof(int) bytes... so it has gone over one 4k memory page. This is logical... it has to keep the size of the array somewhere, and keeping it together with the array is the most intelligent thing (for those that know what Pascal strings and BSTRs are, yes, it's the same principle).
I'll add that all objects in .NET have a syncblk number and a method table pointer (which gives their RuntimeType)... Each is at least an int, if not an IntPtr, so the total is between 8 and 16 bytes per object (this is explained in various places... try searching for ".NET object header" if you are interested).
It still makes sense in certain cases, but I would prefer to analyze case-by-case whether I need that kind of specification or not, rather than blindly use it as good practice.
For example, there might be cases where you want to use exactly 8 bits of information (1 byte) to address a table.
In that case, I would let the table have the size of 2^8.
Object[] table = new Object[256];
By this, you will be able to address any object of the table using only one byte.
Even if the table is actually smaller and doesn't use all 256 places, you still have the guarantee of bidirectional mapping from table to index and from index to table, which could prevent errors that would appear, for example, if you had:
Object[] table = new Object[100];
And then someone (probably someone else) accesses it with a byte value that is out of the table's range.
Maybe this kind of bijective behavior could be good, maybe you could have other ways to guarantee your constraints.
Probably, given the increase in smartness of current compilers, it is not the only good practice anymore.
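A hedged sketch of that bijective guarantee (the type name is invented for the example): with exactly 256 slots, a byte index can never be out of range, so the lookup can never throw.

class ByteIndexedTable
{
    // Exactly 2^8 slots, so every possible byte value is a valid index.
    private readonly object[] _table = new object[256];

    // A byte can only hold 0..255, so this can never throw IndexOutOfRangeException.
    public object Get(byte index) => _table[index];

    public void Set(byte index, object value) => _table[index] = value;
}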
IMHO, arithmetic on exact powers of two is like a fast track: low-level operations on a power of two take fewer steps and simpler bit manipulations, while any other number needs extra work from the CPU.
I also found this possible duplicate: Is it better to allocate memory in the power of two?
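For what it's worth, the classic "fast track" being referred to is that, for a power-of-two size, division and modulo collapse into cheap bit operations; a hedged sketch:

int bufferSize = 1 << 12;                       // 4096, a power of two

// With a power-of-two size, the relatively expensive division and modulo
// can be replaced (by you or the compiler) with bit operations:
int offsetInBuffer = 5000 & (bufferSize - 1);   // same result as 5000 % 4096
int bufferNumber   = 5000 >> 12;                // same result as 5000 / 4096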
Yes, it's good practice, and it has at least one reason.
Modern processors have an L1 cache line size of 64 bytes, and if you use a buffer size of 2^n (for example 1024, 4096, ...), you fill whole cache lines without wasted space.
In some cases, this will help to prevent the false sharing problem (http://en.wikipedia.org/wiki/False_sharing).
I'm pretty new to this, so if the question doesn't make sense, I apologize ahead of time.
An int in C# is 4 bytes, if I am correct. If I have the statement:
int x;
I would assume this takes up 4 bytes of memory. If each memory address refers to 1 byte, then this would take up four address slots? If so, how does x map to the four address locations?
If I have the statement int x; I would assume this is taking up 4 bytes of memory. How does x map to the address of the four bytes?
First off, Mike is correct. C# has been designed specifically so that you do not need to worry about this stuff. Let the memory manager take care of it for you; it does a good job.
Assuming you do want to see how the sausage is made for your own edification: your assumption is not warranted. This statement does not need to cause any memory to be consumed. If it does cause memory to be consumed, the int consumes four bytes of memory.
There are two ways in which the local variable (*) can consume no memory. The first is that it is never used:
void M()
{
int x;
}
The compiler can be smart enough to know that x is never written to or read from, and it can be legally elided entirely. Obviously it then takes up no memory.
The second way that it can take up no memory is if the jitter chooses to enregister the local. It may assign a machine register specifically to that local variable. The variable then has no address associated with it because obviously registers do not have an address. (**)
Assuming that the local does take up memory, the jitter is responsible for keeping track of the location of that memory.
If the local is a perfectly normal local then the jitter will bump the stack pointer by four bytes, thereby reserving four bytes on the stack. It will then associate those four bytes with the local.
If the local is a closed-over outer local of an anonymous function, a local of an iterator block, or a local of an async method then the C# compiler will generate the local as a field of a class; the jitter asks the garbage collector to allocate the class instance and the jitter associates the local with a particular offset from the beginning of the memory buffer associated with that instance by the garbage collector.
All of this is implementation detail subject to change at any time; do not rely upon it.
(*) We know it is a local variable because you said it was a statement. A field declaration is not a statement.
(**) If unsafe code takes the address of a local, obviously it cannot be enregistered.
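The closed-over case is the easiest one to see in a hedged sketch: the compiler hoists x into a field of a generated class (a "display class"), so it lives on the GC heap and survives the method returning.

using System;

class ClosureSketch
{
    static Func<int> Counter()
    {
        int x = 0;              // closed over by the lambda, so the compiler
                                // turns it into a field of a heap-allocated class
        return () => ++x;
    }

    static void Main()
    {
        var next = Counter();
        Console.WriteLine(next());   // 1
        Console.WriteLine(next());   // 2 - x outlived the call to Counter()
    }
}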
There's a lot (and I mean a LOT) that can be said about this. Various topics you're hitting on are things like the stack, the symbol table, memory management, the memory hierarchy, ... I could go on.
BUT, since you're new, I'll try to give an easier answer:
When you create a variable in a program (such as an int), you are telling the compiler to reserve a space in memory for that data. An int is 4 bytes, so 4 consecutive bytes are reserved. The memory location you were referring to only points to the beginning. It is known afterwards that the length is 4 bytes.
Now that memory location (in the case you provided) is not really saved in the same way that a variable would be. Every time there is a command that needs x, the command is instead replaced with a command that explicitly grabs that memory location. In other words, the address is saved in the "code" section of your program, not the "data" section.
This is just a really, REALLY high overview. Hopefully it helps.
You really should not need to worry about these things, since there is no way in C# that you could write code that would make use of this information.
But if you must know, at the machine-code level when we instruct the CPU to access the contents of x, it will be referred to using the address of the first one of those four bytes. The machine instruction that will do this will also contain information about how many bytes to be accessed, in this case four.
If int x; is declared within a function, then the variable will be allocated on the stack, rather than on the heap or in global memory. The address of x in the compiler's symbol table will refer to the first byte of the four-byte integer. However, since it is on the stack, the remembered address will be an offset on the stack rather than a physical address. The variable will then be referenced via an instruction using that offset from the current stack pointer.
Assuming a 32-bit run-time, the offset on the stack will be aligned so the address is a multiple of 4 bytes, i.e. the offset will end in either 0, 4, 8 or 0x0c.
Furthermore because the 80x86 family is little-endian, the first byte of the integer will be the least significant, and the fourth byte will be the most significant, e.g. the decimal value 1,000,000 would be stored as the four bytes 0x40 0x42 0x0f 0x00.
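You can observe that byte order from C# itself; a small hedged check:

using System;

class EndiannessSketch
{
    static void Main()
    {
        byte[] bytes = BitConverter.GetBytes(1000000);   // 1,000,000 == 0x000F4240

        // On a little-endian machine (x86/x64) this prints 40-42-0F-00:
        // the least significant byte comes first in memory.
        Console.WriteLine(BitConverter.ToString(bytes));
        Console.WriteLine(BitConverter.IsLittleEndian);
    }
}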
I have a binary file saved on disk with a size of 15 KB, but why is its measured memory size always only 4 bytes?
long mem1 = GC.GetTotalMemory(false);
Object[] o = new Object[1000000];
o[1] = obj; // obj is the object content of the file before it was saved to disk
long mem2 = GC.GetTotalMemory(false);
long sizeOfOneElementInArray = (mem2 - mem1) / 1000000;
I must be wrong about something somewhere. I think the result is incorrect because 4 bytes is not enough to store even a "hello world" string, but why is it incorrect?
Thanks for any help.
Is the assumption that by assigning obj to index [1] in the array o it would take up a substantial number of bytes? All you are doing is assigning a reference. Not only that, but all that new Object[1000000] did was create an array (space for 1,000,000 references, plus the memory required by the Object[] itself), not allocate 1,000,000 Objects. I am sure someone can elaborate even more on the internal data structures being used and why 4 bytes shows up.
The key thing to realize is that assigning obj to o[1] does not allocate additional memory for obj. If you are trying to determine an approximation, call GC.GetTotalMemory before obj is allocated, then again after. In your test, obj is already allocated before you call the first GC.GetTotalMemory.
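A hedged sketch of that suggestion (LoadObjectFromFile is a hypothetical stand-in for however obj is actually created in your code):

// Measure around the allocation of obj itself; storing a reference
// to it in the array afterwards allocates nothing extra.
long before = GC.GetTotalMemory(true);
object obj = LoadObjectFromFile();      // hypothetical helper, not a real API
long after = GC.GetTotalMemory(true);
Console.WriteLine(after - before);      // rough size of obj's object graph
GC.KeepAlive(obj);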
In general, when the MSDN documentation says things like "a number that is the best available approximation of the number of bytes currently allocated in managed memory", it is a bad idea to rely on it for accurate values :-)
All kidding aside, if your object 'o' is not used in the function after line #3 in your example, it is possible that it is being collected between lines 3 and 4. Just a guess.
First, if I use the code as written, sizeOfOneElementInArray becomes 0 for me, because o is not used after the assignment, so the GC can collect it. The following assumes o is not collected (e.g. by using GC.KeepAlive(o) at the end of the method).
Let's look carefully what each of the two important lines of your code do (assuming 32-bit architecture):
Object[] o = new Object[1000000];
This line allocates 1000000 * 4 + 16 bytes. Each element in the array takes 4 bytes, because it's a reference (pointer) to an object. 16 bytes is the overhead of an array.
o[1] = obj;
This line changes one of the references in o to refer to obj. This line allocates exactly 0 bytes.
I'm not sure why you are confused about the result. It has to be 4, unless there are some unreferenced objects from an earlier part of the code; in that case, it could be less than 4 (even a negative number). Of course, if this were a multi-threaded application, the result could be pretty much anything, depending on what the other threads do.
And this all assumes that GetTotalMemory() is precise, which it doesn't have to be.
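Putting those points together, a hedged version of the test that keeps o alive should report roughly the reference size per element (about 4 on 32-bit, 8 on 64-bit):

long mem1 = GC.GetTotalMemory(true);
Object[] o = new Object[1000000];
long mem2 = GC.GetTotalMemory(true);

// ~4 on 32-bit, ~8 on 64-bit: each slot is just a reference, plus a small
// constant array overhead spread across a million elements.
long sizeOfOneElementInArray = (mem2 - mem1) / 1000000;
Console.WriteLine(sizeOfOneElementInArray);

GC.KeepAlive(o);    // prevents o from being collected between the measurements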
In my experience, Marshal.SizeOf() is generally a better method of getting an object's true size than asking the garbage collector.
http://msdn.microsoft.com/en-us/library/y3ybkfb3.aspx
I would like to estimate the amount of memory used by each session on my server in my ASP.NET web application. A few key questions:
How much memory is allocated just to have each Session instance?
Is the memory usage of each variable equal to its given address space (e.g. 32 bits for an Int32)?
What about variables with variable address space (e.g. String, Array[]s)?
What about custom object instances (e.g. MyCustomObject which holds various other things)?
Is anything added for each variable (e.g. address of Int32 variable to tie it to the session instance) adding to overhead-per-variable?
I would appreciate some help in figuring out how I can predict exactly how much memory each session will consume. Thank you!
The HttpSessionStateContainer class has ten member variables, so that is roughly 40 bytes, plus 8 bytes for the object overhead. It has a session id string and an items collection, so that is something like 50 more bytes when the items collection is empty. It has some more references, but I believe those are references to objects shared by all session objects. So, all in all, that would make about 100 bytes per session object.
If you put a value type like Int32 in the items collection of a session, it has to be boxed. With the object overhead of 8 bytes it comes to 12 bytes, but due to limitations in the memory manager it can't allocate less than 16 bytes for the object. With the four bytes for the reference to the object, an Int32 needs 20 bytes.
If you put a reference type in the items collection, you only store the reference, so that is just four bytes. If it's a literal string, it's already created so it won't use any more memory. A created string will use (8 + 8 + 2 * Length) bytes.
An array of value types will use (Length * sizeof(type)) plus a few more bytes. An array of reference types will use (Length * 4) plus a few more bytes for the references, and each object is allocated separately.
A custom object uses roughly the sum of the sizes of its members, plus some extra padding in some cases, plus the 8 bytes of object overhead. An object containing an Int32 and a Boolean (= 5 bytes) will be padded to 8 bytes, plus the 8 bytes of overhead.
So, if you put a string with 20 characters and three integers in a session object, that will use about (100 + (8 + 8 + 20 * 2) + 3 * 20) = 216 bytes. (However, the session items collection will probably allocate a capacity of 16 items, so that's 64 bytes of which you are using 16, bringing the size to 264 bytes.)
(All the sizes are on a 32 bit system. On a 64 bit system each reference is 8 bytes instead of 4 bytes.)
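If you want to play with those figures, here is a hedged back-of-the-envelope helper that simply encodes the 32-bit numbers from this answer (approximations, not measurements; the type name is invented):

static class SessionSizeEstimate
{
    public const int EmptySession = 100;                 // empty session object, roughly
    public static int BoxedInt32() => 16 + 4;            // 16-byte minimum object + 4-byte reference
    public static int NewString(int length) => 8 + 8 + 2 * length;   // overhead + 2 bytes per char

    // The example from the text: one 20-character string plus three ints.
    public static int Example() => EmptySession + NewString(20) + 3 * BoxedInt32();   // = 216
}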
.NET Memory Profiler is your friend:
http://memprofiler.com/
You can download the trial version for free and run it. Although these things sometimes get complicated to install and run, I found it surprisingly simple to connect to a running web server and inspect all the objects it was holding in memory.
You can probably get some of these numbers using Performance Counters and custom performance counters. I never tested these with ASP.NET, but otherwise they're quite nice for measuring performance.
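As a hedged sketch of the counter approach (the category and counter below exist on the full .NET Framework, but the instance name "w3wp" is only an assumption for an IIS worker process):

using System;
using System.Diagnostics;

class CounterSketch
{
    static void Main()
    {
        // Reads the managed heap size of a named process via a standard counter.
        using (var heapBytes = new PerformanceCounter(
                   ".NET CLR Memory", "# Bytes in all Heaps", "w3wp"))
        {
            Console.WriteLine(heapBytes.NextValue());
        }
    }
}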
This old, old article from Microsoft containing performance tips for ASP (not ASP.NET) states that each session has an overhead of about 10 kilobytes. I have no idea whether this is also applicable to ASP.NET, but it sure is a lot more than the 100 bytes of overhead that Guffa mentions.
When planning a large-scale application, there are some other things you probably need to consider besides rough memory usage.
It depends on the session state provider you choose, and the default in-process sessions are probably not what you will end up with at all.
In the case of out-of-process session storage (which may be preferred for scalable applications), the picture will be completely different and depends on how session objects are serialized and stored.
With SQL session storage, there will not be linear RAM consumption.
I would recommend integration testing with an out-of-process flavour of session state provider from the very beginning for a large-scale application.