This question already has answers here:
Find size of object instance in bytes in c#
(17 answers)
Closed 5 years ago.
I need to know how many bytes my object consumes in memory (in C#), for example how much my Hashtable, SortedList, or List<String> takes.
This may not be accurate, but it's close enough for me:
long size = 0;
object o = new object();
using (Stream s = new MemoryStream()) {
BinaryFormatter formatter = new BinaryFormatter();
formatter.Serialize(s, o);
size = s.Length;
}
I don't think you can get it directly, but there are a few ways to find it indirectly.
One way is to use the GC.GetTotalMemory method to measure the amount of memory used before and after creating your object. This won't be perfect, but as long as you control the rest of the application you may get the information you are interested in.
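For example, a rough sketch of that before/after measurement (the List<string> and the loop are just an illustration, not from the question); the forced collections reduce noise, but other allocations in the process can still skew the number:

using System;
using System.Collections.Generic;

long before = GC.GetTotalMemory(forceFullCollection: true);

var list = new List<string>();                 // the object you want to measure
for (int i = 0; i < 1000; i++)
    list.Add(i.ToString());

long after = GC.GetTotalMemory(forceFullCollection: true);
Console.WriteLine($"Approximate size: {after - before} bytes");

GC.KeepAlive(list);    // keep the object reachable until after the second reading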
Apart from that, you can use a profiler, or use the profiling API to get the information in code, but I don't think that will be easy to use.
See Find out how much memory is being used by an object in C#? for a similar question.
Unmanaged objects:
Marshal.SizeOf(object yourObj);
Value types:
sizeof(T), e.g. sizeof(int)
Managed objects:
It looks like there is no direct way to get the size of a managed object. Ref:
https://learn.microsoft.com/en-us/archive/blogs/cbrumme/size-of-a-managed-object
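A quick illustration of the first two cases (the Point struct here is only an example I made up):

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct Point { public int X; public int Y; }

class Sizes
{
    static void Main()
    {
        // Unmanaged/interop size of an instance or a type:
        Console.WriteLine(Marshal.SizeOf(new Point()));    // 8
        Console.WriteLine(Marshal.SizeOf(typeof(Point)));  // 8

        // Compile-time size of value types:
        Console.WriteLine(sizeof(int));       // 4
        Console.WriteLine(sizeof(double));    // 8
    }
}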
OK, this question has been answered and an answer accepted, but someone asked me to post my answer, so there you go.
First of all, it is not possible to say for sure. It is an internal implementation detail and not documented. However, you can make a good estimate based on the objects contained in the other object.
I had previously touched on this subject in this article:
Now, how do we calculate the memory requirement for our cached objects? Well, as most of you would know, Int32 and float are four bytes, double and DateTime 8 bytes, char is actually two bytes (not one byte), and so on. String is a bit more complex, 2*(n+1), where n is the length of the string. For objects, it will depend on their members: just sum up the memory requirement of all its members, remembering all object references are simply 4 byte pointers on a 32 bit box.
Now, this is actually not quite true, we have not taken care of the overhead of each object in the heap. I am not sure if you need to be concerned about this, but I suppose, if you will be using lots of small objects, you would have to take the overhead into consideration. Each heap object costs as much as its primitive types, plus four bytes for object references (on a 32 bit machine, although BizTalk runs 32 bit on 64 bit machines as well), plus 4 bytes for the type object pointer, and I think 4 bytes for the sync block index. Why is this additional overhead important? Well, let's imagine we have a class with two Int32 members; in this case, the memory requirement is 16 bytes and not 8.
The following code fragment should return the size in bytes of any object passed to it, so long as it can be serialized.
I got this from a colleague at Quixant to resolve a problem of writing to SRAM on a gaming platform. Hope it helps out.
Credit and thanks to Carlo Vittuci.
/// <summary>
/// Calculates the length in bytes of an object
/// and returns the size.
/// </summary>
/// <param name="testObject">The object to measure; it must be serializable.</param>
/// <returns>The serialized length in bytes.</returns>
private int GetObjectSize(object testObject)
{
    BinaryFormatter bf = new BinaryFormatter();
    using (MemoryStream ms = new MemoryStream())
    {
        bf.Serialize(ms, testObject);
        return ms.ToArray().Length;
    }
}
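For example (keep in mind this measures the BinaryFormatter-serialized size, which is only a rough proxy for the in-memory size; the List<string> is just an illustration):

var sample = new List<string> { "alpha", "beta", "gamma" };
int serializedBytes = GetObjectSize(sample);
Console.WriteLine(serializedBytes);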
In debug mode, load SOS and execute the dumpheap command.
I'm making a program in which one of the functions, in order to correctly create the message to be sent, keeps calling a function I have written to add each of the parts to the array. The thing is, in C# you can't do this, because byte arrays (and, if I'm not wrong, any kind of array) have a fixed Length which cannot be changed.
Because of this, I thought of creating two byte-array variables. The first one would get the first two values. The second one would be created once you know how many new bytes you have to add; after this, you would delete the first variable and create it again with the Length of the previous variable plus the Length of the new values, doing the same as you did with the second variable. The code I've written is:
byte[] message_mod_0 = adr_and_func;
byte[] byte_memory_adr = AddAndTypes.ToByteArray(memory_adr);
byte[] message_mod_1 = new byte[2 + byte_memory_adr.Length];
message_mod_1 = AddAndTypes.AddByteArrayToByteArray(message_mod_0, byte_memory_adr);
AddAndTypes.AddByteArrayToByteArray(message_mod_0, AddAndTypes.IntToByte(value));
byte[] CRC = Aux.CRC(message_mod_0);
AddAndTypes.AddByteArrayToByteArray(message_mod_0, CRC);
In this code, the two variables I meant are message_mod_0 and message_mod_1. I also thought of deleting and redeclaring the byte_memory_adr variable, which is needed in order to know the Length of the byte array you want to add to the output message.
The parameters adr_and_func, memory_adr and value are given as input parameters of the function I'm making.
The question can be summed up as: is there any way to delete variables in the same scope they were created? And, in case it can be done, would there be any problem if I created a new variable with the same name after I have deleted the first one? I can't think of any reason why that could happen, but I'm pretty new to this programming language.
Also, I don't know if there is any less messy way of doing this.
This sounds like you are writing your own custom serializer.
I would recommend just using an existing library, like protobuf.net, to define your messages if at all possible.
If this is not possible, you can use a BinaryWriter to write your values to a Stream. If you want to keep it in memory, use a MemoryStream and call .ToArray() when you're done to get an array of all the bytes.
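A minimal sketch of that approach; the parameter names simply mirror the question's variables and the layout is made up, not a real protocol:

using System.IO;

static byte[] BuildMessage(byte[] adrAndFunc, byte[] memoryAdr, int value)
{
    using (var ms = new MemoryStream())
    using (var writer = new BinaryWriter(ms))
    {
        writer.Write(adrAndFunc);   // raw bytes, written as-is
        writer.Write(memoryAdr);
        writer.Write(value);        // 4 bytes, little-endian
        writer.Flush();
        return ms.ToArray();        // one array containing everything written
    }
}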
As for memory, do not worry about it. Unless you have gigabyte-sized messages, the memory requirements should not be an issue; the garbage collector automatically recycles memory when it is no longer needed, and it can do this after the last usage, regardless of scope. If you have huge memory streams, you might want to look at something like RecyclableMemoryStream, since it can avoid some allocation performance and fragmentation issues.
This question already has answers here:
Improving performance of for loop when decimating a buffer
(3 answers)
Closed 4 years ago.
I have a rather strange problem that I cannot figure out. I am using a third-party library that creates a buffer. This buffer can contain doubles, but copying between it and a double array is extremely slow. There must be something going on behind the scenes with this particular data type, specifically when you write to it. For example, the following works but takes over 20 ms, whereas a copy from one double array to another takes 20 us.
Mitov.SignalLab.RealBuffer mitovBuffer = new Mitov.SignalLab.RealBuffer(16384);
double[] doubleBuffer = new double[16384];

private void Test()
{
    for (int i = 0; i < 16384; i++)
    {
        mitovBuffer[i] = doubleBuffer[i];
    }
}
This works but takes 20+ ms. I can get a pointer to the mitovBuffer, and I know that there are 8 bytes stored for each "double" in this buffer. Is there a way I can copy between these two? I've tried all the usual things like Array.Copy, block copies, etc. Each time I get a "cannot convert from double[] to double" error.
Thanks, Tom
Perhaps one reason this function is slow is that Mitov.SignalLab.RealBuffer is a wrapper around a resizable Delphi buffer. If I understand their documentation correctly, the element-wise assignment you are doing goes through layers of abstraction that might even involve resizing the buffer for every write.
The API documentation even says that the class is intended for use within Delphi code, not from other languages:
This is Real(double) Data wrapper buffer. Use this buffer to access and manipulate the Real(double) data from inside your Delphi code. .NET, C++ Builder and Visual C++ users should use the much more convenient and powerful TSLCRealBuffer class.
However, their public API does not document that recommended class. Perhaps the documentation doesn't really reflect the product, but if I were you I'd call their engineers to find out what you are intended to do. Since you won't be able to pin their "buffer" abstraction, I suspect you don't want to use unmanaged code to push bytes into those locations.
If you want to try byte-wise loading, perhaps you might try their documented bytewise methods:
function GetByteSize() : Cardinal - Returns the size of the buffer in bytes.
function GetSize() : Cardinal - Returns the size of the buffer in elements.
function ByteRead() : PByte
function ByteWrite() : PByte
function ByteModify() : PByte
Or perhaps you can put your data into their internal format and then call their public procedure AddCustom(AData : ISLData).
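If ByteWrite() is in fact exposed to C# as something that yields a raw pointer (an IntPtr), a single block copy might be worth trying. This is only a hypothetical sketch under that assumption, not something the Mitov documentation confirms:

using System;
using System.Runtime.InteropServices;

static void CopyInto(IntPtr destination, double[] source)
{
    // Marshal.Copy transfers source.Length doubles (8 bytes each) to the
    // unmanaged address in one block, instead of one element per call.
    Marshal.Copy(source, 0, destination, source.Length);
}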
I have a problem with concatenating two byte[] arrays. One of them has more than 300,000,000 bytes. It's throwing an exception of type System.OutOfMemoryException.
I use this code :
byte[] b3 = by2.Concat(by1).ToArray();
Can anybody help me?
Because of the Concat call, ToArray knows nothing about how big the result array has to be. It can't create a properly sized array and just fill it with data. So it creates a small one, then when it's full creates a new one with twice the size, and so on, over and over again as long as there is more data to fill. This way you need much more memory than just the theoretical (b1.Length + b2.Length) * 2. And things get even trickier, because after a certain point these big arrays are allocated on the LOH, and are not collected as easily by the GC as normal objects.
That's why you should not use ToArray() in this case, but do it the old-fashioned way: allocate a new array whose size equals the combined sizes of the source arrays, and copy the data.
Something like:
var b3 = new byte[b1.Length + b2.Length];
Array.Copy(b1, 0, b3, 0, b1.Length);           // copy b1 into the start of b3
Array.Copy(b2, 0, b3, b1.Length, b2.Length);   // copy b2 right after it
It does not guarantee success, but it makes it more likely. And it executes much, much faster than ToArray().
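An equivalent raw copy with Buffer.BlockCopy (the offsets are byte offsets, which for byte[] are the same as element offsets); this is just an alternative I am adding, not part of the original answer:

var b3 = new byte[b1.Length + b2.Length];
Buffer.BlockCopy(b1, 0, b3, 0, b1.Length);
Buffer.BlockCopy(b2, 0, b3, b1.Length, b2.Length);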
When working with that amount of data, I think you should be working with streams (this of course depends on the application).
Then you can have code that works on the data without requiring it all to be loaded in memory at the same time, and you could create a specialized stream class that acts as a concatenation between two streams.
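For instance, a minimal read-only stream that presents two underlying streams as one continuous sequence might look like this (a sketch I am adding, not code from the answer):

using System;
using System.IO;

public sealed class ConcatStream : Stream
{
    private readonly Stream _first;
    private readonly Stream _second;
    private bool _firstExhausted;

    public ConcatStream(Stream first, Stream second)
    {
        _first = first;
        _second = second;
    }

    public override bool CanRead => true;
    public override bool CanSeek => false;
    public override bool CanWrite => false;
    public override long Length => _first.Length + _second.Length;
    public override long Position
    {
        get => throw new NotSupportedException();
        set => throw new NotSupportedException();
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        if (!_firstExhausted)
        {
            int read = _first.Read(buffer, offset, count);
            if (read > 0)
                return read;
            _firstExhausted = true;   // first stream is done, fall through
        }
        return _second.Read(buffer, offset, count);
    }

    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
    public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();
}

Code that consumes the concatenated data can then read from new ConcatStream(new MemoryStream(by2), new MemoryStream(by1)) without a third 300 MB array ever being allocated.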
Well, the error message speaks for itself: you don't have a free contiguous ~550 MB block of RAM. Maybe the address space is just too fragmented.
Well, you know, requesting a contiguous block of ~600 MB from the system - I'm not surprised. It is quite a large block in itself, and given that you must also have the source arrays in memory, that's over 1 GB of raw data chunks.
You should probably start thinking about other data structures, or try to keep them as files and map them to memory. Edit: memory-mapping a whole file needs the same contiguous area in address space, so it solves nothing. This answer will be deleted.
I want to use WriteFile to write a big (~500 MB) multidimensional array into a file (because BinaryFormatter is very slow at writing big stuff, and there is no other way in the .NET Framework to write multidimensional byte arrays, only single bytes or single-dimensional arrays, and doing a for loop and writing byte by byte is slow).
However, it turns out this is forbidden:
IOException
The OS handle's position is not what FileStream expected. Do not use a handle simultaneously in one FileStream and in Win32 code or another FileStream. This may cause data loss.
Is there any way around this, aside from re-opening the file stream each time I want to write using BinaryFormatter after I wrote using WriteFile?
(I understand this question has been abandoned and will be deleted soon.)
Firstly, WriteFile and BinaryFormatter just don't mix. WriteFile assumes you know the file format, i.e. the interpretation of the bytes that are written to the file. BinaryFormatter is a serializer, based on a file format that is internal to the Microsoft .NET implementation (some would say proprietary, even though the information can be found online). As a consequence, you cannot even pass a file serialized by BinaryFormatter between Microsoft .NET and Mono C#.
Based on OP's description, it is clear that OP should not have used BinaryFormatter in the first place. Otherwise OP would be solely responsible for the loss (unrecoverability) of such data.
As Hans Passant commented, the performance of FileStream.Write should be able to match the Win32 call to WriteFile, asymptotically speaking. What this means is that the time overhead for each call can be modeled as alpha * numberOfBytesWritten + beta, where beta is the pure constant overhead per call. One can make this overhead relatively negligible by increasing the number of bytes written per call.
Given that we cannot directly pass a multidimensional C# array into WriteFile, here is the suggestion. Based on OP's comment, it is assumed the multidimensional array will have size byte[1024, 1024, 1024].
First, allocate a temporary 1D array of sufficient size. Typical recommendations range from 4KB to several MB, but that is only an optimization detail. For this example, we use 1MB = byte[1048576] because it nicely divides the total array size.
Then, we write a top-level for-loop over the outermost dimension.
In the next step, we use the System.Array.Copy utility function to copy the 1024 x 1024 bytes from the innermost two dimensions into the temporary 1D array. This relies on C# specification on multidimensional arrays, as documented on the System.Array.Copy function:
When copying between multidimensional arrays, the array behaves like a long one-dimensional array, where the rows (or columns) are conceptually laid end-to-end.
Once copied into the temporary 1D array, it can be written out to FileStream.Write.
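Putting those steps together, here is a hedged sketch of the chunked write (the byte[1024, 1024, 1024] shape and the 1 MB staging buffer are the assumptions stated above). Note that it uses Buffer.BlockCopy rather than Array.Copy, because Array.Copy throws a RankException when the source and destination arrays have different ranks:

using System;
using System.IO;

static void WriteCube(byte[,,] cube, string path)
{
    const int ChunkBytes = 1024 * 1024;   // 1 MB = one 1024 x 1024 slice of bytes
    byte[] staging = new byte[ChunkBytes];

    using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write))
    {
        // Top-level loop over the outermost dimension; each iteration flattens
        // the inner two dimensions into the staging array and writes it out.
        for (int i = 0; i < cube.GetLength(0); i++)
        {
            Buffer.BlockCopy(cube, i * ChunkBytes, staging, 0, ChunkBytes);
            fs.Write(staging, 0, ChunkBytes);
        }
    }
}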
I know this question has been asked before, but I can't seem to get it working with the answers I've read. I've got a CSV file of ~1.2 GB. If I run the process as 32-bit I get an OutOfMemoryException; it works if I run it as a 64-bit process, but it still takes 3.4 GB of memory. I do know that I'm storing a lot of data in my CustomData class, but still, 3.4 GB of RAM? Am I doing something wrong when reading the file?
dict is a dictionary that just maps each column index to the property it should be saved in. Am I doing the reading the right way?
StreamReader reader = new StreamReader(File.OpenRead(path));
while (!reader.EndOfStream)
{
    String line = reader.ReadLine();
    String[] values = line.Split(';');

    CustomData data = new CustomData();
    string value;
    for (int i = 0; i < values.Length; i++)
    {
        dict.TryGetValue(i, out value);
        Type targetType = data.GetType();
        PropertyInfo prop = targetType.GetProperty(value);
        if (values[i] == null)
        {
            prop.SetValue(data, "NULL", null);
        }
        else
        {
            prop.SetValue(data, values[i], null);
        }
    }
    dataList.Add(data);
}
There doesn't seem to be anything wrong in your usage of the stream reader: you read a line into memory, then forget it.
However, in C# a string is encoded in memory as UTF-16, so on average a character consumes 2 bytes in memory.
If your CSV also contains a lot of empty fields that you convert to "NULL", you add up to 7 bytes for each empty field.
So on the whole, since you basically store all the data from your file in memory, it's not really surprising that you require almost 3 times the size of the file in memory.
The actual solution is to parse your data in chunks of N lines, process them, and then free them from memory.
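A minimal sketch of that chunked approach; the ProcessChunk consumer is a hypothetical placeholder, not something from the original post:

using System.Collections.Generic;
using System.IO;

static void ProcessInChunks(string path, int chunkSize)
{
    var chunk = new List<string>(chunkSize);
    using (var reader = new StreamReader(File.OpenRead(path)))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            chunk.Add(line);
            if (chunk.Count == chunkSize)
            {
                ProcessChunk(chunk);   // hypothetical consumer of one chunk
                chunk.Clear();         // let the GC reclaim the processed lines
            }
        }
        if (chunk.Count > 0)
            ProcessChunk(chunk);       // leftover lines
    }
}

static void ProcessChunk(List<string> lines)
{
    // Placeholder: parse and use the lines here instead of keeping them all.
}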
Note: Consider using a CSV parser; there is more to CSV than just commas or semicolons. What if one of your fields contains a semicolon, a newline, a quote...?
Edit
Actually, each string takes up to 20 + (N/2)*4 bytes in memory; see C# in Depth.
OK, a couple of points here.
As pointed out in the comments, .NET under x86 can only consume about 1.5 GB per process, so consider that your maximum memory in 32-bit.
The StreamReader itself will have an overhead. I don't know whether it caches the entire file in memory or not (maybe someone can clarify?). If so, reading and processing the file in chunks might be a better solution.
The CustomData class: how many fields does it have, and how many instances are created? Note you will need 32 bits for each reference in x86 and 64 bits for each reference in x64. So if you have a CustomData class with 10 fields of type System.Object, each CustomData instance requires 88 bytes before storing any data.
The dataList.Add at the end: I assume you are adding to a generic List? If so, note that List employs a doubling algorithm to resize. If you have 1 GB in a List and it needs 1 more byte of capacity, it will create a 2 GB array and copy the 1 GB into it on resize. So all of a sudden the 1 GB + 1 byte actually requires 3 GB to manipulate. Another alternative is to use a pre-sized array.
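On that last point, a small sketch of pre-sizing the list (the row count here is a made-up assumption; in practice it could be estimated, for example by a quick first pass counting newlines):

using System.Collections.Generic;

// CustomData stands in for the poster's class; the field is only illustrative.
class CustomData { public string Value { get; set; } }

class Example
{
    static void Main()
    {
        const int expectedRows = 5_000_000;                 // assumed, not from the post
        var dataList = new List<CustomData>(expectedRows);  // capacity reserved up front,
                                                            // so no double-and-copy resizes
    }
}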