A process's memory usage includes the following (sometimes reported together as virtual memory):
PrivateMemory: dedicated to the process and cannot be shared with other processes.
SharedMemory: shared with other processes, e.g. the runtime or linked third-party libraries.
CommittedMemory [or PagedMemory]: backed by the page file on disk (ready for use).
ReservedMemory: only declared (address range reserved, not yet backed by anything).
Here is my understanding:
Virtual Memory = PrivateMemory + SharedMemory + CommittedMemory + ReservedMemory;
WorkingSet Memory = PrivateMemory + SharedMemory + CommittedMemory;
Free Memory = 'Virtual Memory' - 'WorkingSet Memory';
I calculate the total memory usage of a process (excluding reserved memory) in a program written in C#. The left screenshot is VMMap and the right is the VS monitor.
The process's total memory size is about 5 GB, of which about 4 GB is reserved memory according to VMMap, and the VS monitor shows VirtualMemorySize64 as about 5 GB. I am confused about how to get the total used memory. Since VMMap shows 4 GB of reserved memory, how can I get the reserved memory with the .NET Process class?
I set the TotalUsageMemory value with the code below. Is it correct?
Int64 TotalUsageMemory = proc.WorkingSet64 + proc.PagedMemorySize64;
The numbers don't add up like that. Whether a page is in the working set or not is independent of whether it is shared or not, and that again is (I believe) independent of whether it is committed or not.
The right counter to look at depends on the question you want to answer. Unfortunately, there is no counter that fully matches the intuitive notion of memory usage. Private bytes is normally what's used for that. Working set does not mean much in practice, since this counter can change at any time due to OS actions. Virtual memory is also quite irrelevant from a performance standpoint.
Normally, memory usage is the memory that was incrementally consumed by starting that process. That's private bytes.
There exists no counter or computation to give you a TotalUsageMemory value.
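For reference, here is a minimal sketch (not from the original posts) of reading the counters the Process class does expose; none of them reports reserved memory separately, which is why these numbers won't line up with VMMap's reserved figure.

// Minimal sketch: print the memory counters exposed by System.Diagnostics.Process.
// None of these corresponds to reserved-but-uncommitted memory on its own.
using System;
using System.Diagnostics;

class MemoryCounters
{
    static void Main()
    {
        using (Process proc = Process.GetCurrentProcess())
        {
            Console.WriteLine($"Private bytes:  {proc.PrivateMemorySize64:N0}");
            Console.WriteLine($"Working set:    {proc.WorkingSet64:N0}");
            Console.WriteLine($"Paged memory:   {proc.PagedMemorySize64:N0}");
            Console.WriteLine($"Virtual memory: {proc.VirtualMemorySize64:N0}");
        }
    }
}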
In my web application I use LEADTOOLS to create a multi-page TIFF file from a stream. The code below shows how I use it.
using (RasterCodecs codecs = new RasterCodecs())
{
    RasterImage ImageToAppened = default(RasterImage);
    RasterImage imageSrc = default(RasterImage);
    codecs.Options.Load.AllPages = true;
    ImageToAppened = codecs.Load(fullInputPath, 1);

    FileInfo fileInfooutputTiff = new FileInfo(fullOutputPath);
    if (fileInfooutputTiff.Exists)
    {
        imageSrc = codecs.Load(fullOutputPath);
        imageSrc.AddPage(ImageToAppened);
        codecs.Save(imageSrc, fullOutputPath, RasterImageFormat.Ccitt, 1);
    }
    else
    {
        codecs.Save(ImageToAppened, fullOutputPath, RasterImageFormat.Ccitt, 1);
    }
}
The code above works properly, and my web application gets many requests, around 2,000. In some cases I get the error below, but afterwards it works properly again for other requests.
You have exceeded the amount of memory allowed for RasterImage allocations. See RasterDefaults::MemoryThreshold::MaximumGlobalRasterImageMemory.
Is this memory limit per request, or for all objects allocated since the application started (a global limit)?
And what is the solution for the above error?
The error you report references the MaximumGlobalRasterImageMemory:
You have exceeded the amount of memory allowed for RasterImage allocations. See RasterDefaults::MemoryThreshold::MaximumGlobalRasterImageMemory.
In the documentation it states:
Gets or sets a value that specifies the maximum size allowed for all RasterImage object allocations.
When allocating a new RasterImage object, if the new allocation causes the total memory used by all allocated RasterImage objects to exceed the value of MaximumGlobalRasterImageMemory, then the allocation will throw an exception.
So it looks like it's for all objects.
These are the specified default values:
On x86 systems, this property defaults to 1.5 GB.
On x64 systems, this property defaults to either 1.5 GB or 75 percent of the system's total physical RAM, whichever is larger.
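If you decide to raise the limit, the property can be set at application start-up. The member below is an assumed .NET spelling of the name given in the error text (RasterDefaults::MemoryThreshold::MaximumGlobalRasterImageMemory); verify it against the SDK documentation for your version.

// Assumed .NET mapping of the member named in the error message; check the LEADTOOLS
// documentation before relying on it. This example raises the global RasterImage cap to 4 GB.
Leadtools.RasterDefaults.MemoryThreshold.MaximumGlobalRasterImageMemory = 4L * 1024 * 1024 * 1024;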
I would advise that you familiarise yourself with the documentation for the SDK.
When handling files with many pages, here are a few general tips that could help with both web and desktop applications:
Avoid loading all pages and adding them to one RasterImage in memory. Instead, loop through them, load them one (or a few) at a time, and append them to the output file without keeping them in memory. Appending to the file can get slower as the page count grows, but this help topic explains how you can speed that up.
You have "using (RasterCodecs codecs ..)" in your code, but the large memory is held by the image, not the codecs object. Consider wrapping your RasterImage objects in "using" scopes as well to speed up their disposal. In other words, go for "using (RasterImage image = ...)", as sketched after these tips.
And the obvious suggestion: go for 64-bit, install as much RAM as you can and increase the value of MaximumGlobalRasterImageMemory.
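Here is a minimal sketch of that disposal suggestion, built only from the calls already shown in the question (plus File.Exists); each RasterImage is disposed as soon as it is no longer needed instead of waiting for the garbage collector.

using (RasterCodecs codecs = new RasterCodecs())
{
    codecs.Options.Load.AllPages = true;

    // Dispose the loaded page as soon as it has been appended/saved.
    using (RasterImage imageToAppend = codecs.Load(fullInputPath, 1))
    {
        if (System.IO.File.Exists(fullOutputPath))
        {
            using (RasterImage imageSrc = codecs.Load(fullOutputPath))
            {
                imageSrc.AddPage(imageToAppend);
                codecs.Save(imageSrc, fullOutputPath, RasterImageFormat.Ccitt, 1);
            }
        }
        else
        {
            codecs.Save(imageToAppend, fullOutputPath, RasterImageFormat.Ccitt, 1);
        }
    }
}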
I have a situation whereby the total # Bytes in all Heaps according to ANTS Memory Profiler is small (about 200MB), but the total Private Bytes/Working Set-Private is around 1.26GB.
The breakdown is something like:
Dictionaries, strings, int arrays, etc: 200MB
Free space: 793MB
The performance counter graph shows the "# Bytes in all Heaps" staying very low consistently - but the Private Bytes and Working Set - Private counters going up in ramps as the app is used.
No action in particular corresponds to a given rise, it seems.
It doesn't look like there are big allocations followed by frees; the # Bytes in All Heaps counter really is very flat. I've tried taking snapshots during all kinds of requests, including while private bytes is ramping up, but there's simply nothing big being allocated, .NET-wise.
What else can I try to diagnose this? I'm installing WinDbg and will see what I can try there, though I haven't used it for memory debugging before.
When the numbers are small, it's cheap to grow an ArrayList from 2 to 4 slots, but that changes once the backing array gets large (say, close to a 2 MB allocation). Would it be more efficient, beyond some size, to grow the array by only a fraction of its current size rather than doubling it? Obviously growing from 1 MB to 2 MB isn't a big deal nowadays; however, if you had 50,000 people per hour running something that doubles an array like this, I'm curious whether that would be a good enough reason to change how it works, not to mention cutting down on unneeded memory (in theory).
A small graphical representation of what I mean:
ArrayList a has 4 elements in it, and that is its current capacity at the moment:
||||
Now let's add another item to the ArrayList; the internal code will double the size of the backing array even though we're only adding one item.
The ArrayList now has a capacity of 8 elements:
||||||||
At these sizes I doubt it makes any difference, but when you're growing from 1 MB to 2 MB every time someone adds something around 1.25 MB to an ArrayList, there's 0.75 MB of unneeded space allocated.
To give you more of an idea of the code that currently runs in C# (the List<T> implementation in System.Collections.Generic): it doubles the size of the backing array every time a user tries to add something to a list whose array is too small. Doubling the size is a good solution and makes sense, until you're growing it far bigger than you actually need.
Here's the source for this particular part of the class:
private void EnsureCapacity(int min)
{
    if (this._items.Length >= min)
        return;

    // This is what I'm referring to
    int num = this._items.Length == 0 ? 4 : this._items.Length * 2;
    if ((uint) num > 2146435071U)
        num = 2146435071;
    if (num < min)
        num = min;
    this.Capacity = num;
}
I'm going to guess that this is how memory management is handled in many programming languages, so this has probably been considered many times before. I'm just wondering whether this kind of efficiency tweak could save system resources by a large amount at massive scale.
As the size of the collection gets larger, so does the cost of creating a new buffer, because you need to copy over all of the existing elements. The fact that the number of copies required is inversely proportional to the expense of each copy is exactly why the amortized cost of adding items to a List is O(1). If the size of the buffer increases linearly, the amortized cost of adding an item to a List actually becomes O(n).
You save on memory, allowing the "wasted" memory to go from being O(n) to being O(1). As with virtually all performance/algorithm decisions, we're once again faced with the quintessential decision of exchanging memory for speed. We can save on memory and have slower adding speeds (because of more copying) or we can use more memory to get faster additions. Of course there is no one universally right answer. Some people really would prefer to have a slower addition speed in exchange for less wasted memory. The particular resource that is going to run out first is going to vary based on the program, the system that it's running on, and so forth. Those people in the situation where the memory is the scarcer resource may not be able to use List, which is designed to be as wildly applicable as possible, even though it can't be universally the best option.
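To make the copying cost concrete, here is a small sketch (hypothetical code, not from the answers above) that counts how many element copies a growing buffer performs under doubling versus fixed-increment growth.

// Counts element copies for a buffer that starts at capacity 4 and grows whenever full.
// Doubling keeps total copies below 2*n (amortized O(1) per Add); a fixed increment
// makes total copies grow roughly quadratically (amortized O(n) per Add).
using System;

class GrowthCostDemo
{
    static long CountCopies(int n, Func<int, int> grow)
    {
        long copies = 0;
        int capacity = 4;
        for (int count = 0; count < n; count++)
        {
            if (count == capacity)
            {
                copies += count;          // every existing element is copied to the new buffer
                capacity = grow(capacity);
            }
        }
        return copies;
    }

    static void Main()
    {
        const int n = 1_000_000;
        Console.WriteLine(CountCopies(n, c => c * 2));     // doubling: ~1 million copies in total
        Console.WriteLine(CountCopies(n, c => c + 1024));  // +1024 each time: ~488 million copies
    }
}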
The idea behind the exponential growth factor for dynamic arrays such as List<T> is that:
The amount of wasted space is always merely proportional to the amount of data in the array. Thus you are never wasting resources on a more massive scale than you are properly using.
Even with many, many reallocations, the total potential time spent copying while creating an array of size N is O(N) -- or O(1) for a single element.
Access time is extremely fast at O(1) with a small coefficient.
This makes List<T> very appropriate for arrays of, say, in-memory tables of references to database objects, for which near-instant access is required but the array elements themselves are small.
Conversely, linear growth of dynamic arrays can result in n-squared memory wastage. This happens in the following situation:
You add something to the array, expanding it to size N for large N, freeing the previous memory block (possibly quite large) of size N-K for small K.
You allocate a few objects. The memory manager puts some in the large memory block just vacated, because why not?
You add something else to the array, expanding it to size N+K for some small K. Because the previously freed memory block now is sparsely occupied, the memory manager does not have a large enough contiguous free memory block and must request more virtual memory from the OS.
Thus virtual memory committed grows quadratically despite the measured size of objects created growing linearly.
This isn't a theoretical possibility. I actually had to fix an n-squared memory leak that arose because somebody had manually coded a linearly-growing dynamic array of integers. The fix was to throw away the manual code and use the library of geometrically-growing arrays that had been created for that purpose.
That being said, I also have seen problems with the exponential reallocation of List<T> (as well as the similarly-growing memory buffer in Dictionary<TKey,TValue>) in 32-bit processes when the total memory required needs to grow past 128 MB. In this case the List or Dictionary will frequently be unable to allocate a 256 MB contiguous range of memory even if there is more than sufficient virtual address space left. The application will then report an out-of-memory error to the user. In my case, customers complained about this since Task Manager was reporting that VM use never went over, say, 1.5GB. If I were Microsoft I would damp the growth of 'List' (and the similar memory buffer in Dictionary) to 1% of total virtual address space.
I was running some tests to see how my logging would perform if, instead of doing File.AppendAllText, I first wrote to a memory stream and then copied it to a file. So, just to see how fast the memory operation is, I did this:
private void button1_Click(object sender, EventArgs e)
{
    using (var memFile = new System.IO.MemoryStream())
    {
        using (var bw = new System.IO.BinaryWriter(memFile))
        {
            for (int i = 0; i < Int32.MaxValue; i++)
            {
                bw.Write(i.ToString() + Environment.NewLine);
            }
            bw.Flush();
        }
        memFile.CopyTo(new System.IO.FileStream(System.IO.Path.Combine("C", "memWriteWithBinaryTest.log"), System.IO.FileMode.OpenOrCreate));
    }
}
When I reached 25413324 I got "Exception of type 'System.OutOfMemoryException' was thrown", even though Process Explorer says I have about 700 MB of free RAM.
(Screenshots of Process Explorer and the WinForm were attached here.)
EDIT: To cut down on the number of objects created on the heap, I rewrote the bw.Write call as:
bw.Write(i);
First of all, you run out of memory because you accumulate all the data in the MemoryStream instead of writing it directly to the FileStream. Use the FileStream directly and you won't need much RAM at all (but you will have to keep the file open).
The amount of physical memory unused is not directly relevant to this exception, as strange as that might sound.
What matters is:
that you have a contiguous chunk of memory available in the process' virtual address space
that the system commit does not exceed the total RAM size + page file size
When you ask the Windows memory manager to allocate you some RAM, it needs to check not how much is available, but how much it has promised to make available to every other process. Such promising is done through commits. To commit some memory means that the memory manager offered you a guarantee that it will be available when you finally make use of it.
So, it can be that the physical RAM is completely used up, but your allocation request still succeeds. Why? Because there is lots of space available in the page file. When you actually start using the RAM you got through such an allocation, the memory manager will simply page something else out. So zero free physical RAM does not mean allocations will fail.
The opposite can happen too; an allocation can fail despite having some unused physical RAM. Your process sees memory through the so-called virtual address space. When your process reads memory at address 0x12340000, that's a virtual address. It might map to RAM at 0x78650000, or at 0x000000AB12340000 (running a 32-bit process on a 64-bit OS), it might point to something that only exists in the page file, or it might not even point at anything at all.
When you want to allocate a block of memory with contiguous addresses, it's this virtual address space that needs a contiguous free range. For a 32-bit process you only get 2 GB or 3 GB of usable address space, so it's not too hard to use it up in such a way that no contiguous chunk of a sufficient size exists, despite there being both free physical RAM and enough total unused virtual address space.
This can be caused by memory fragmentation.
Large objects go onto the large object heap, and they don't get moved around to make room for other things. This can cause fragmentation, where you have gaps in the available memory, which can lead to an out-of-memory error when you try to allocate an object larger than any of the blocks of available memory.
See here for more details.
Any object larger than 85,000 bytes will be placed on the large object heap, except for arrays of doubles for which the threshold is just 1000 doubles (or 8000 bytes).
Also note that 32-bit .Net programs are limited to a maximum of 2GB per object and somewhat less than 4GB overall (perhaps as low as 3GB depending on the OS).
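As a small illustration of that threshold (a hypothetical sketch, not from the answer above), GC.GetGeneration reports large-object-heap allocations as generation 2:

// Demo of the 85,000-byte threshold: the small array starts in generation 0,
// the large one is allocated directly on the large object heap (reported as generation 2).
byte[] small = new byte[80_000];
byte[] large = new byte[90_000];
Console.WriteLine(GC.GetGeneration(small)); // typically 0
Console.WriteLine(GC.GetGeneration(large)); // 2 (large object heap)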
You should not be using a BinaryWriter to write text to a file. Use a TextWriter instead.
Now you are using:
for (int i = 0; i < Int32.MaxValue; i++)
This writes at least 3 bytes per iteration (the number's digits plus the newline). Multiply that by Int32.MaxValue and you need at least 6 GB of memory, given that you are writing it all to a MemoryStream.
Looking further at your code, you are going to write the MemoryStream out to a file anyway, so you can simply do the following:
for (int i = 0; i < int.MaxValue; i++)
{
    File.AppendAllText("filename.log", i.ToString() + Environment.NewLine);
}
or write to an open TextWriter:
using (TextWriter writer = File.AppendText("filename.log"))
{
    for (int i = 0; i < int.MaxValue; i++)
    {
        writer.WriteLine(i);
    }
}
If you want some memory buffering, which IMO is a bad idea for logging as you will lose the last writes if the process crashes, you can use the following constructor to create the TextWriter:
StreamWriter(string path, bool append, Encoding encoding, int bufferSize)
and pass a 'biggish' number for bufferSize. The default is 1024.
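For example (an illustrative sketch; the 64 KB buffer size is an arbitrary value):

// Buffered text logging; 64 * 1024 is just an example value for bufferSize.
using (var writer = new StreamWriter("filename.log", true, System.Text.Encoding.UTF8, 64 * 1024))
{
    for (int i = 0; i < int.MaxValue; i++)
    {
        writer.WriteLine(i);
    }
}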
To answer the actual question: you get an OutOfMemoryException because the MemoryStream keeps resizing its internal buffer and at some point it becomes too big to fit into memory (as discussed in another answer).
I have a desktop application developed in C#. The VM size used by the application is very high. I want to add a watermark to a PDF file that has more than 10,000 pages (10,776 pages to be exact); the VM size increases, and sometimes the application freezes or throws an out-of-memory exception.
Is there a solution to release or decrease the VM size programmatically in C#?
Environment.FailFast :)
In all seriousness though, a large VM size is not necessarily an indication of a memory problem. The various memory metrics are easy to confuse, but I believe VM size is a measurement of the amount of used address space, not necessarily of used physical memory.
Here's another post on the topic: What does "VM Size" mean in the Windows Task Manager?
If you suspect that you have a problem with memory usage in your application, you probably need to consider using a memory profiler to find the root cause (pun intended.) It's a little tricky to get used to at first but it's a valuable skill. You'd be surprised what kind of performance issues surface when you're profiling.
This depends strongly on your source code. With the information given, all I can say is that it would be best to get a memory profiler and check whether there is room for optimization.
Just to demonstrate how memory usage might be optimized, consider the following example. Instead of using string concatenation like this:
string x = "";
for (int i = 0; i < 100000; i++)
{
    x += "!";
}
using a StringBuilder is far more memory- (and time-)efficient as it doesn't allocate a new string for each concatenation:
StringBuilder builder = new StringBuilder();
for (int i = 0; i < 100000; i++)
{
    builder.Append("!");
}
string x = builder.ToString();
The concatenation in the first sample creates a new string object on each iteration, which occupies additional memory that will only be cleaned up when the garbage collector runs.