I am working on a C# application which is designed to run in the system tray all the time. I would therefore like to minimise the amount of memory which the application uses when idle. Using Windows perfmon and the Windows Task Manager I have got some figures for idle memory usage.
Windows XP TaskManager - Mem Usage - 96,300K
PerfMon
.NET CLR Memory
# Bytes in all Heaps - 34,513,708
# Total committed Bytes - 40,591,360
# Total reserved Bytes - 50,319,360
I think these figures mean that my application has been allocated 96 MB of memory by Windows. 50 MB of this has been reserved by the CLR, and the CLR has committed 40 MB of it.
Is there any way to work out what the other 46 MB of memory that hasn't been assigned to the CLR is being used for? I assume this will be a combination of memory used for loading DLLs into the process and memory used by that native code.
EDIT: I have downloaded VMMap and found the following.
Private
Total - 72 MB
Managed Heap - 25 MB
Stack - 16 MB (Seems quite large)
Private Data - 13 MB (Not sure what this is)
Image - 8 MB (Mostly .NET DLLs)
Page Table - 6 MB (Seems quite large)
Heap - 3 MB
Can anyone suggest an interpretation for the Stack, Private Data and Page Table figures?
NOTE: The counters I originally quoted are now showing some bizarre figures.
Windows XP TaskManager - Mem Usage - 43,628K
PerfMon
.NET CLR Memory
# Bytes in all Heaps - 20mb
# Total committed Bytes - 23mb
# Total reserved Bytes - 50mb
This suggests that the CLR has reserved more memory than has been allocated to the process. Obviously that can't be true, so Task Manager must only be showing what is paged in at the moment (the working set).
Note that the difference between the total memory usage (I'm not exactly sure which figure Task Manager is showing; Windows tools have a bad history of using different terms for the same concepts) and "# Total reserved Bytes" may also be used by the CLR, just not by the managed heap (so native allocations by the CLR, loaded DLLs, etc. may also account for some of it).
You may want to check out Sysinternals VMMap to get more detailed information.
Related
We have a C# Windows service (running on Windows Server 2008 + .NET Framework 4.6 + the GC perf hotfix). After running for a few days, # Bytes in all Heaps reaches more than 100 GB (committed), and the private bytes are very high too (110 GB+), but the RAM usage is only 68 GB (Working Set) and 59 GB (Private Working Set). There is only 10 GB of page file on this server.
I made a dump and ran WinDbg + SOS to analyze memory usage, and I found out that there are a lot of Free objects (about 54 GB). Could this be caused by the Free objects? Do those Free objects only take up virtual memory but no physical memory? If not, how is it possible that the committed virtual memory is much larger than the used physical memory plus the page files?
You have just discovered the concept of demand-zero pages.
Let me cite from Windows Internals, 6th edition, part 2, chapter 10, which is about memory management (page 276 in my edition of the book):
For many of those items, the commit charge may represent the potential use of storage rather than the actual. For example, a page of private committed memory does not actually occupy either a physical page of RAM or the equivalent page file space until it's been referenced at least once. Until then, it's a demand-zero page [...] But commit charge accounts for such pages when the virtual space is first created. This ensures that when the page is later referenced, actual physical storage space will be available for it.
This means: Windows will increase the size of either the working set or the page file only when the (committed but not yet accessed) memory is actually touched.
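To make this visible, here is a rough C# sketch (the flag values are the standard Win32 constants; the 256 MB size is arbitrary) that commits a block with VirtualAlloc and only afterwards touches it page by page. The working set barely moves after the commit and only grows at the touching step:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class DemandZeroDemo
{
    const uint MEM_COMMIT = 0x1000, MEM_RESERVE = 0x2000, PAGE_READWRITE = 0x04;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr VirtualAlloc(IntPtr lpAddress, UIntPtr dwSize, uint flAllocationType, uint flProtect);

    static void Report(string label)
    {
        Process p = Process.GetCurrentProcess();
        p.Refresh();
        Console.WriteLine("{0}: working set {1:N0}, private bytes {2:N0}", label, p.WorkingSet64, p.PrivateMemorySize64);
    }

    static void Main()
    {
        const int size = 256 * 1024 * 1024; // 256 MB, chosen arbitrarily

        Report("before commit");

        // Committing raises the commit charge immediately, but the pages
        // are demand-zero: no RAM or page-file space is occupied yet.
        IntPtr block = VirtualAlloc(IntPtr.Zero, new UIntPtr(size), MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        if (block == IntPtr.Zero)
            throw new OutOfMemoryException("VirtualAlloc failed");
        Report("after commit, before first touch");

        // Touching one byte per 4 KB page forces Windows to hand out real
        // physical pages; only now does the working set grow to match.
        for (int offset = 0; offset < size; offset += 4096)
            Marshal.WriteByte(block, offset, 1);
        Report("after touching every page");
    }
}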
Your sentence
Do those free objects only take up virtual memory but no physical memory?
does not really fit the rest of the question. Any memory, regardless of what it is filled with (.NET Free objects, regular .NET objects or even C++ allocations), may consume physical memory (then it's in the working set) or not (then it's in the page file).
I've got a problem with a C# application and a COM component allocating memory:
The C# program calls a function in a COM DLL written in C++ which does matrix processing. The function allocates a lot of memory (around 800 MB in eight 100 MB chunks). This fails (malloc reports "bad allocation") when the function is called from C#.
If I run the same function from a C program, allocating the same amount of memory, then there's no problem allocating memory.
I've got 8 GB of RAM and Win7 x64, and there is plenty of free memory.
How can I fix this so that the allocation succeeds when calling from the C# application?
I tried to google it, but didn't really know what to search for. I searched for setting the heap size etc., but that didn't turn up anything.
I feel a bit lost! All help is appreciated!
The amount of physical memory (8 GB) is not the constraint that limits the memory consumption of your application. Presumably you built a 32-bit application, which has a fundamental limit of 4 GB of directly addressable bytes. For historical reasons, an application not doing any magic gets only half of this: 2 GB. This is what you allocate from, and this space is also used for other needs. 100 MB chunks are large enough that memory/address fragmentation reduces the effectively usable space (you don't just want eight 100 MB chunks, you request contiguous ones).
The easiest solution here is to build 64-bit applications. The limits there are distant.
If you still want 32-bit code:
enable /LARGEADDRESSAWARE on the hosting application binary to extend the limit from 2 to 4 GB
use file mappings, which let you keep your data in physical memory and map it into the limited address space on demand (see the sketch after this list)
allocate smaller chunks
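As a rough sketch of the file-mapping idea (the map name "MatrixData" and the single byte write are placeholders for real matrix work): keep the full 800 MB in a pagefile-backed mapping and map only one 100 MB view into the address space at a time, so the address-space cost is one window rather than the whole data set.

using System;
using System.IO.MemoryMappedFiles;

class ChunkedProcessing
{
    static void Main()
    {
        const long chunk = 100L * 1024 * 1024; // one 100 MB window
        const long total = 8 * chunk;          // the full 800 MB data set

        // Pagefile-backed mapping: the data lives in physical memory or the
        // page file, not in the process's 2 GB address space.
        using (var mmf = MemoryMappedFile.CreateNew("MatrixData", total))
        {
            for (long offset = 0; offset < total; offset += chunk)
            {
                // Only one 100 MB window occupies address space at a time.
                using (var view = mmf.CreateViewAccessor(offset, chunk))
                {
                    view.Write(0, (byte)1); // stand-in for real processing
                } // disposing the view releases its address space again
            }
        }
    }
}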
I have a WPF desktop app that crashed with the following exception:
System.Data.SqlServerCe.SqlCeException (0x80004005): There is not enough memory on the device running SQL Server
However, the memory values at crash time are somewhat not clear to me:
Current Process Current Working Set: 806 MB
Current Process Peak Working Set: 1157 MB
Current Process Current Page Memory Size: 779 MB
Current Process Peak Page Memory Size: 1502 MB
Current Process Private Memory Size: 779 MB
ComputerInfo TotalPhysicalMemory: 5884 MB
ComputerInfo TotalVirtualMemory: 2047 MB
ComputerInfo AvailablePhysicalMemory: 3378 MB
ComputerInfo AvailableVirtualMemory: 166 MB
By the way, the Current Process values are taken from the C# Process class; the ComputerInfo values are taken from the VB.NET ComputerInfo class.
My app is compiled with the x86 configuration. The process is running on a 64-bit Windows 7 machine.
I see that the Available Virtual Memory is 166MB which looks pretty low.
How is it possible that the process crashed when there is plenty of AvailablePhysicalMemory reported by the VB.NET ComputerInfo class?
The high current and peak working set indicates that there is probably a memory leak somewhere, but I still don't get why it crashed when there was plenty of available RAM.
Your assumption that physical memory is in any way relevant is the fundamental cause of your confusion. Remember, the right way to think about memory is that process memory is disk space. Physical memory is just a fast cache on top of the disk. Again, let me emphasize this: if you run out of physical memory then your machine gets slower. It doesn't give an out of memory error.
The relevant resource is virtual address space, not memory. You only get 4GB of virtual address space per 32 bit process and 2GB of that is reserved for the use of the operating system. Suppose you have 166 MB of virtual address space left and that it is divided into four chunks of 42 MB each. If a request for 50MB comes in, that request cannot be fulfilled. Unfortunately the error you get is "out of memory" and not "out of virtual address space", which would be a more accurate error message.
The solution to your problem is either (1) allocate way less than 2GB of user memory per process, (2) implement your own system for mapping memory into and out of virtual address space, or (3) use a 64 bit process that has a far larger amount of available virtual address space.
Each 32-bit process (and you have a 32-bit process, because TotalVirtualMemory is 2047 MB) can address only up to 2 GB of memory, regardless of the available physical memory.
An OutOfMemoryException can be caused by a number of things.
It can be caused when your application doesn't have enough space in the Gen0 managed heap or in the large object heap to process a new allocation. This is a rare case, but will typically happen when the heap is too fragmented to allow a new allocation (sometimes of quite a small size!). In Gen0 this might happen due to an excessive use of pinned objects (when handling interop with unmanaged code); in the LOH this was once a common problem but appears much less frequent in later versions of .NET. It's worth noting that SqlCe access would include unmanaged code; I've not heard of any major issues with this but it's possible that your use of the SqlCe classes is causing problems.
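As a quick illustration of the pinning mechanism (the buffer size and interop call here are hypothetical): a pinned object cannot be moved by the compacting GC, so many long-lived pins leave holes the allocator has to work around, which is why pins should be as short-lived as possible.

using System;
using System.Runtime.InteropServices;

class PinningExample
{
    static void Main()
    {
        byte[] buffer = new byte[4096];

        // Pinning fixes the array's address so unmanaged code can use it;
        // until Free() is called, the GC cannot move or compact around it.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            IntPtr address = handle.AddrOfPinnedObject();
            // ... pass 'address' to unmanaged code here ...
        }
        finally
        {
            handle.Free(); // release the pin as soon as possible
        }
    }
}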
Alternatively, it could be a virtual memory issue - which seems quite plausible given the figures you've posted. Eric Lippert has a good blog post about this kind of issue here. If your application is trying to write pages of memory to disk so that it can keep something else in memory, you might well see the exception because your VM is so limited.
I have an ASP.NET Framework 4.0 website that has a memory leak. To find it I have installed ANTS Memory Profiler.
This is what I do :
Host the website in IIS7
Start ANTS Memory Profiler 8.1
Specify that we are profiling an IIS website and give the URL of the page (built in Release)
Start the test and let the webpage start up (a lot of caching, so about 1 min)
Take a memory snapshot when the first page is loaded and stable
Reload the first page A LOT and watch the memory rise from 110 MB (Private Bytes / Working Set - Private) to 270 MB
Visit a lot of pages on the website and watch it rise to 360 MB
Push it some more; no further rise occurs
Take a memory snapshot and click Class List (with "Classes with source" checked)
This will show classes that are still being kept alive, for example:
sites_mypage_default_asx - 10 320 bytes and 10 live instances
usercontrols_common_pagehead_ascx - 928 bytes and 4 live instances
and so on
I believe/hope that these are the classes that will be cleaned up by the GC.
But this is not where the large footprint is; I have to uncheck "Classes with source" to get the really large ones. For example (sorted by live size in bytes):
string - 1 890 292 bytes
RuntimeMethodInfo - 990 976 bytes
RuntimePropertyInfo - 604 136 bytes
Hashtable+bucket[] - 413 712 bytes
and so on.
The problem is that there is not much I can do about these; when opening the instance retention graph I only see System.* classes, and there is no information about where they are held in my website.
When "Classes with source" was checked I did, however, find a big memory leak that could be fixed (this was before the run above).
But I do not know how to take the next step. Why is my website still taking up 350 MB? That is a lot of data, and I can't see that I cache this much data!?
What should be my next step?
It does not have to be a memory leak; the memory pressure may simply not be high enough for the Garbage Collector to do a more comprehensive job. To fully investigate this issue and check whether it is a real memory leak, you should run a long load test of your web page with average traffic. You can use Visual Studio Ultimate Load Testing if you are lucky enough to have it, or the open-source LoadUI project. During this test, observe these performance counters:
.NET CLR Memory group counters, especially # Bytes in all Heaps and the Gen # heap size counters,
Process : Working Set and Process : Private Bytes
After a few hours of such a test you will clearly see the trend of memory consumption. It may be that memory is released periodically once some threshold is exceeded. But if memory consumption grows all the time, a memory leak becomes the more probable explanation. Then take a full memory dump of the w3wp process at the end of the test and investigate it further.
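If you prefer to automate the observation rather than watch perfmon by hand, a rough C# sketch is below (the "w3wp" instance name is an assumption; with several worker processes the instances are named w3wp#1, w3wp#2, and so on):

using System;
using System.Diagnostics;
using System.Threading;

class CounterLogger
{
    static void Main()
    {
        // Sample the suggested counters for the worker process once a minute.
        PerformanceCounter[] counters =
        {
            new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", "w3wp"),
            new PerformanceCounter("Process", "Working Set", "w3wp"),
            new PerformanceCounter("Process", "Private Bytes", "w3wp")
        };

        while (true)
        {
            foreach (PerformanceCounter counter in counters)
                Console.WriteLine("{0:u} {1}\\{2}: {3:N0}", DateTime.Now, counter.CategoryName, counter.CounterName, counter.NextValue());
            Thread.Sleep(TimeSpan.FromMinutes(1));
        }
    }
}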
As I am a big fan of WinDbg (it is faster, more detailed and cheaper than any GUI-based commercial tool), I suggest you use it, together with the Psscor2 or Psscor4 extension (depending on the .NET version used by your application). After setting up the debugging environment (installing WinDbg and copying the Psscor files into its folder), create a dump of the process. You can do this easily with the Procdump tool:
procdump -ma <PID>
Then load the dump using the File -> Open Crash Dump option and load the appropriate version of Psscor:
.load psscor4
Then execute this command to download symbols from the Microsoft servers if needed (make sure that you have an internet connection):
!symfix
From now on you have access to plenty of very interesting commands (run !help to list them). To see memory usage per type:
!dumpheap -stat
This will produce a long list of types and their memory usage, sorted ascending by total size:
...
0x79b56d84 297,724 12,308,164 SomeNamespace.SomeObject
0x6983ad6c 1,177 19,751,856 SomeNamespace.SomeClass[]
0x79ba4aa0 6,544 46,300,516 System.Byte[]
0x001027a0 527 69,152,092 Free
0x79b9fb08 1,127,896 82,456,640 System.String
To see overall memory usage (-iu means that unrooted objects will also be included):
!heapstat -iu
Heap        Gen0      Gen1       Gen2        LOH
Heap0    6594540   1488744   24322236   19164192
Heap1    8360908    951312   30822196   14358048
Heap2    8207144    386488   23198448   16078256
Heap3    4299844    453440   36015332   16125560
Total   39615576   5301708  179028460   93254272

Free space:                                          Percentage
Heap0    4868516        12       3512    8692736     SOH: 15%  LOH: 45%
Heap1    7221256        12      66200    5232904     SOH: 18%  LOH: 36%
Heap2    7518052        12        520    7677824     SOH: 23%  LOH: 47%
Heap3    3578232        12    6606504    4098640     SOH: 24%  LOH: 25%
Total   28807516        72    8353592   31990912

Unrooted objects:                                    Percentage
Heap0    1688812    258828    8905748    4019992     SOH: 33%  LOH: 20%
Heap1    1052548    270796    9983932    5625984     SOH: 28%  LOH: 39%
Heap2     503560    267112    7697632    4596792     SOH: 26%  LOH: 28%
Heap3     571776    235440    8453980    5205176     SOH: 22%  LOH: 32%
Total    9691432   2179788   53539772   32143328
This information will surely lead you to some conclusions, and further investigation is obviously possible, so do not hesitate to ask.
I've created a program which makes intensive use of C# sockets and an unmanaged C++ DLL with a few useful functions, like this one:
[DllImport(DLLName, CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
private static extern void calcData(float ask, float bid, float volume, float lastTrade, string symbolName, TQuoteType type, IntPtr str, out int size);
I'm using C# multithreading with 8-10 threads, and each one sends data over the sockets every 200 ms.
The program works fine on Windows 7 and Windows Server 2008, but on XP and Windows Server 2003 it throws an Out Of Memory system exception after two days of running.
I can't understand what is happening, because the maximum RAM usage is 17 MB.
Can anyone help me solve this problem?
Have you tried looking for any unusual memory consumption by other processes on the XP / 2003 machine?
Given that the program runs fine in the other environment, it might not even be your program that is causing the problem; it may only be showing a symptom.
As an alternative, you can try to capture a dump when your program terminates using ProcDump, and load it with WinDbg or Visual Studio:
http://technet.microsoft.com/en-us/sysinternals/dd996900.aspx
ProcDump.exe -ma -t PROCESS.EXE
I think it is caused by Heap fragmentation.
If the program runs for days, and your C++ code uses malloc/free or new/delete to allocate and release memory from the heap, then on Windows XP and 2003 you will, by default, run into heap fragmentation. Heap fragmentation is a state in which available memory is broken into small, noncontiguous blocks. When a heap is fragmented, memory allocation can fail even when the total available memory in the heap is enough to satisfy a request, because no single block of memory is large enough.
On versions of Windows after XP and 2003, the Low-Fragmentation Heap (LFH) is enabled by default, which fixes this problem; applications do not need to enable the LFH for their heaps. On Windows XP and 2003 you can enable it in code. This page gives an example. (You don't need to create another heap; you just get the default heap with the GetProcessHeap API.)
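A rough C# sketch of that call (a sketch only, with two caveats: the LFH cannot be enabled while the process runs under a debugger, and a C++ DLL may allocate from its own CRT heap rather than the process default heap, in which case the equivalent call belongs inside the DLL):

using System;
using System.Runtime.InteropServices;

static class LowFragmentationHeap
{
    [DllImport("kernel32.dll")]
    static extern IntPtr GetProcessHeap();

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool HeapSetInformation(IntPtr heapHandle, int heapInformationClass, ref uint heapInformation, UIntPtr heapInformationLength);

    // HeapCompatibilityInformation = 0; writing the value 2 requests the LFH.
    public static bool Enable()
    {
        uint lfh = 2;
        return HeapSetInformation(GetProcessHeap(), 0, ref lfh, new UIntPtr(sizeof(uint)));
    }
}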
First determine whether you have a memory leak or not. Don't use Task Manager for this; use perfmon instead. The Mem Usage column in Task Manager is only the working set, which is definitely not the complete picture for your app. So fire up perfmon, switch to report mode (which is easier to read), and delete the existing counters. Then add these counters for your process:
Process | Working Set
Process | Virtual Bytes
Process | Private Bytes
.NET CLR Memory | # Total reserved Bytes
.NET CLR Memory | # Total committed Bytes
.NET CLR Memory | Large Object Heap size
I'm guessing you have a 32-bit app, so your maximum virtual address space is 2 GB. Run your app and periodically check the counters. If Process | Virtual Bytes gets near 1.9 GB, things will fall apart in strange ways. Fragmentation could also be the issue, but that would only affect native memory and managed objects in the Large Object Heap. If the LOH size gets too high, that could indicate fragmentation in the LOH.
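If you'd rather have the app check itself, here is a rough sketch using the Process class (the 1.9 GB threshold is the same rule of thumb and assumes a 32-bit, non-large-address-aware process):

using System;
using System.Diagnostics;

class AddressSpaceCheck
{
    static void Main()
    {
        // In-process equivalent of watching Process | Virtual Bytes.
        Process process = Process.GetCurrentProcess();
        process.Refresh();
        long virtualBytes = process.VirtualMemorySize64;
        Console.WriteLine("Virtual bytes: {0:N0}", virtualBytes);
        if (virtualBytes > 1900L * 1024 * 1024)
            Console.WriteLine("Warning: virtual address space nearly exhausted.");
    }
}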
If you do see a memory leak, you can then work out whether the leak is in managed code or in native code. It's a managed leak if .NET CLR Memory | # Total reserved Bytes gets rather high; it's a native leak if # Total reserved Bytes stays low while Process | Private Bytes gets high. Remember that managed memory is a subset of the process's total virtual address space.
If you think you're not seeing fragmentation or a memory leak, then there's the possibility that .NET is throwing the Out of Memory exception as a red herring and something else has gone wrong. This is uncommon, but not that uncommon. At this point you'll need a debugger.