Need C# code that will eat up system memory

I need the opposite of good, optimized code. For testing purposes I need a simple program that eats up RAM. Preferably not all memory, so that the OS remains functional, but something that simulates a memory leak: using up a lot of memory and slowing down the OS gradually, over time.
Specifically, can anyone provide code snippets or links to tutorials I can use?
I saw this code as a suggestion on another post:
for (object[] o = null;; o = new[] { o });
But it is not quite what I am looking for as per the description above.
Please help.

Use
Marshal.AllocHGlobal(numBytes)
You can attach this to a timer.
And just don't release the memory (don't call FreeHGlobal).
That's the most straightforward, controllable and predictable way to consume memory.
Marshal.AllocHGlobal
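A minimal sketch of that approach (the class name and the 10 MB per second numbers are arbitrary choices for illustration, not part of the original answer):

using System;
using System.Runtime.InteropServices;
using System.Threading;

class UnmanagedMemoryEater
{
    static void Main()
    {
        // Leak 10 MB of unmanaged memory every second.
        var timer = new Timer(_ =>
        {
            IntPtr p = Marshal.AllocHGlobal(10 * 1024 * 1024);
            // Touch each page so the allocation shows up in the working set,
            // not just in committed address space.
            for (int offset = 0; offset < 10 * 1024 * 1024; offset += 4096)
                Marshal.WriteByte(p, offset, 1);
            // Deliberately no Marshal.FreeHGlobal(p) - that's the leak.
        }, null, 0, 1000);

        Console.ReadLine(); // keep the process (and the timer) alive
        GC.KeepAlive(timer);
    }
}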

The design is below; the question asked for gradually eating up memory.
Gradually eats up memory.
Parameter #1 is the total memory it will eat up, in megabytes (e.g. 6000 is 6 GB).
Parameter #2 is the delay for each iteration, in milliseconds (e.g. 1000 is 1 second).
The committed memory and the working set will be around the same.
It was designed to use XmlNode as the object that takes up memory, because then the COMMITTED MEMORY (memory allocated to the process by the OS) and the WORKING SET MEMORY (memory actually used by the process) end up about the same. If a primitive type such as a byte[] array is used to take up memory, the WORKING SET usually stays near zero, because that memory is never actually touched by the process even though it has been allocated.
Make sure to compile for x64 under Project Properties, on the Build tab. Otherwise, if it's compiled as x86, it will hit an OutOfMemoryException around the 1.7 GB limit. Compiled as x64, the memory it eats up is practically "limitless".
using System;
using System.Collections.Generic;
using System.Threading;
using System.Xml;

namespace MemoryHog
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("    Total Memory To Eat Up in Megs: " + args[0]);
            Console.WriteLine("Millisecs Pause Between Increments: " + args[1]);
            int memoryTotalInMegaBytes = Convert.ToInt32(args[0]);
            int millisecondsToPause = Convert.ToInt32(args[1]);

            Console.WriteLine("Memory Usage: " + GC.GetTotalMemory(false));
            long runningTotal = GC.GetTotalMemory(false);
            long endingMemoryLimit = Convert.ToInt64(memoryTotalInMegaBytes) * 1000 * 1000;

            // Every node is kept rooted in this list, so the GC can never reclaim them.
            List<XmlNode> memList = new List<XmlNode>();
            while (runningTotal <= endingMemoryLimit)
            {
                XmlDocument doc = new XmlDocument();
                for (int i = 0; i < 1000000; i++)
                {
                    XmlNode x = doc.CreateNode(XmlNodeType.Element, "hello", "");
                    memList.Add(x);
                }
                runningTotal = GC.GetTotalMemory(false);
                Console.WriteLine("Memory Usage: " + runningTotal);
                Thread.Sleep(millisecondsToPause);
            }
            Console.ReadLine();
        }
    }
}

The easiest way to create memory leaks in C# is by attaching to events and never detaching; that is what I would do, at least. There is an SO question that talks about this; a minimal sketch of the pattern is below.
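This sketch uses hypothetical Publisher/Subscriber classes: each subscriber stays reachable through the publisher's event delegate list, so without a matching -= the GC can never collect it.

using System;

class Publisher
{
    public event EventHandler SomethingHappened;
}

class Subscriber
{
    // A large payload so each leaked subscriber is noticeable.
    private readonly byte[] payload = new byte[1024 * 1024];

    public Subscriber(Publisher pub)
    {
        // Subscribing stores a delegate (which references 'this') in the
        // publisher. Without unsubscribing, the subscriber lives as long
        // as the publisher does.
        pub.SomethingHappened += (s, e) => Console.WriteLine(payload.Length);
    }
}

class Program
{
    static void Main()
    {
        var pub = new Publisher();
        while (true)
        {
            new Subscriber(pub); // leaks roughly 1 MB per iteration
        }
    }
}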

Related

MemoryCache OutOfMemoryException

I am trying to figure out how the MemoryCache should be used in order to avoid getting out-of-memory exceptions. I come from an ASP.NET background, where the cache manages its own memory usage, so I expected MemoryCache would do the same. This does not appear to be the case, as illustrated in the test program I made below:
using System;
using System.IO;
using System.Linq;
using System.Runtime.Caching;

class Program
{
    static void Main(string[] args)
    {
        var cache = new MemoryCache("Cache");
        for (int i = 0; i < 100000; i++)
        {
            AddToCache(cache, i);
        }
        Console.ReadLine();
    }

    private static void AddToCache(MemoryCache cache, int i)
    {
        var key = "File:" + i;
        var contents = File.ReadAllBytes("File.txt");
        var policy = new CacheItemPolicy
        {
            SlidingExpiration = TimeSpan.FromHours(12)
        };
        policy.ChangeMonitors.Add(
            new HostFileChangeMonitor(
                new[] { Path.GetFullPath("File.txt") }
                    .ToList()));
        cache.Add(key, contents, policy);
        Console.Clear();
        Console.Write(i);
    }
}
The above throws an OutOfMemoryException after reaching approximately 2 GB of memory usage (Any CPU), or after consuming all of my machine's physical memory (x64, 16 GB).
If I remove the cache.Add bit, the program throws no exception. If I include a call to cache.Trim(5) after every cache add, I see that it releases some memory and keeps approximately 150 objects in the cache at any given time (according to cache.GetCount()).
Is calling cache.Trim my program's responsibility? If so, when should it be called (i.e. how can my program know that memory is getting full)? How do you calculate the percentage argument?
Note: I am planning to use the MemoryCache in a long-running Windows service, so proper memory management is critical.
MemoryCache has a background thread that periodically estimates how much memory the process is using and how many keys are in the cache. When it thinks you are getting close to the CacheMemoryLimit, it will Trim the cache. Each time this background thread runs, it checks how close you are to the limits, and it will increase the polling frequency under memory pressure.
If you add items very quickly, the background thread doesn't have a chance to run, and you can run out of memory before the cache can trim and the GC can run (in an x64 process this can result in a massive heap size and multi-minute GC pauses). The trim process/memory estimation is also known to have bugs under some conditions.
If your program is prone to running out of memory due to rapidly loading an excessive number of objects, something with a bounded size, like an LRU cache, is a much better strategy. LRU typically uses a policy based on item count to evict the least recently used items.
I wrote a thread-safe implementation of TLRU (a time-aware least-recently-used policy) that you can easily use as a drop-in replacement for ConcurrentDictionary.
It's available on Github here: https://github.com/bitfaster/BitFaster.Caching
Install-Package BitFaster.Caching
Using it would look something like this for your program, and it will not run out of memory (depending on how big your files are):
using System;
using System.IO;
using BitFaster.Caching.Lru;

class Program
{
    static void Main(string[] args)
    {
        int capacity = 80;
        TimeSpan timeToLive = TimeSpan.FromMinutes(5);
        var lru = new ConcurrentTLru<int, byte[]>(capacity, timeToLive);

        for (int i = 0; i < 100000; i++)
        {
            // Bounded capacity: once 80 items are cached, the least
            // recently used entries are evicted, so memory stays flat.
            var value = lru.GetOrAdd(i, k => File.ReadAllBytes("File.txt"));
        }
        Console.ReadLine();
    }
}
If you really want to avoid running out of memory, you should also consider reading the files into a RecyclableMemoryStream, and using the Scoped class in BitFaster to make the cached values thread safe and avoid races on dispose.

How can I find the start and end memory of a process?

This is my Form1 code:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Diagnostics;

namespace ReadMemory
{
    public partial class Form1 : Form
    {
        List<int> memoryAddresses = new List<int>();

        public Form1()
        {
            InitializeComponent();
            Process proc = Process.GetCurrentProcess();
            IntPtr startOffset = proc.MainModule.BaseAddress;
            IntPtr endOffset = IntPtr.Add(startOffset, proc.MainModule.ModuleMemorySize);
            for (int i = 0; i < startOffset.ToInt64(); i++)
            {
                memoryAddresses.Add(startOffset[i]
            }
        }

        private void modelsToolStripMenuItem_Click(object sender, EventArgs e)
        {
        }
    }
}
I tried to scan all memory addresses from the start to the end and add them to a List.
But I'm getting an error on this line:
memoryAddresses.Add(startOffset[i]
Error 3: Cannot apply indexing with [] to an expression of type 'System.IntPtr'
Also, is using startOffset.ToInt64() in the loop OK, or should I use ToInt32()?
That's just not how Windows works. It is a virtual-memory, demand-paged operating system; every process gets 2 gigabytes of address space, which starts at 0x00010000 and ends at 0x7fffffff for a 32-bit process. Most processes start consuming VM at 0x00400000, the default start address of an EXE. The end of the VM space is always used by Windows to keep track of essential stuff like the threads in the process. There is lots of space in between, used to load DLLs and allocate memory for the heaps.
Seeing the allocations requires VirtualQueryEx(), you can't do it with the Process class. Your code is otherwise invalid, an IntPtr is not an array. Get insight in the way a process uses the virtual memory space with the SysInternals' VMMap utility. The same authors wrote the "Windows Internals" book, an essential book to understand how Windows works internally.
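To give an idea of what that looks like, here is a rough P/Invoke sketch that walks the current process's regions with VirtualQueryEx (the struct is the standard MEMORY_BASIC_INFORMATION layout; this is an illustration, not production code):

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class MemoryWalker
{
    [StructLayout(LayoutKind.Sequential)]
    struct MEMORY_BASIC_INFORMATION
    {
        public IntPtr BaseAddress;
        public IntPtr AllocationBase;
        public uint AllocationProtect;
        public IntPtr RegionSize;
        public uint State;   // MEM_COMMIT = 0x1000, MEM_RESERVE = 0x2000, MEM_FREE = 0x10000
        public uint Protect;
        public uint Type;
    }

    [DllImport("kernel32.dll")]
    static extern IntPtr VirtualQueryEx(IntPtr hProcess, IntPtr lpAddress,
        out MEMORY_BASIC_INFORMATION lpBuffer, IntPtr dwLength);

    static void Main()
    {
        IntPtr hProcess = Process.GetCurrentProcess().Handle;
        long address = 0;
        MEMORY_BASIC_INFORMATION mbi;
        // Walk region by region until VirtualQueryEx fails (end of address space).
        while (VirtualQueryEx(hProcess, (IntPtr)address, out mbi,
                   (IntPtr)Marshal.SizeOf(typeof(MEMORY_BASIC_INFORMATION))) != IntPtr.Zero)
        {
            Console.WriteLine("0x{0:X16} size 0x{1:X} state 0x{2:X}",
                mbi.BaseAddress.ToInt64(), mbi.RegionSize.ToInt64(), mbi.State);
            address = mbi.BaseAddress.ToInt64() + mbi.RegionSize.ToInt64();
        }
    }
}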
An IntPtr value is just a number, it's not an array that you can access by an index. Right now you are looping from zero to startOffset, but I think that you want to loop from startOffset to endOffset.
As a memory address can be either 32 bits or 64 bits, depending on the platform that you run the code on, you need a long (Int64) to handle either type of pointer:
List<long> memoryAddresses = new List<long>();
It's correct to use ToInt64 to turn the pointer value into an integer. The memory address will just be the variable that you use in the loop.
for (long i = startOffset.ToInt64(); i < endOffset.ToInt64(); i++)
{
    memoryAddresses.Add(i);
}
Note: As you are adding an item to the list for each byte in the process memory, the list will be eight times the size of the process memory. It's likely that you won't have enough memory in your process for that.

Parallel GZip Decompression of Log Files - Tweaking MaxDegreeOfParallelism for the Highest Throughput

We have up to 30 GB of GZipped log files per day. Each file holds 100,000 lines and is between 6 and 8 MB when compressed. The simplified code, in which the parsing logic has been stripped out, utilises the Parallel.ForEach loop.
The maximum number of lines processed peaks at a MaxDegreeOfParallelism of 8 on the two-NUMA-node, 32-logical-CPU box (Intel Xeon E7-2820 @ 2 GHz):
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.IO;
using System.IO.Compression;
using System.Threading.Tasks;

namespace ParallelLineCount
{
    public class ScriptMain
    {
        static void Main(String[] args)
        {
            int maxMaxDOP = (args.Length > 0) ? Convert.ToInt16(args[0]) : 2;
            string fileLocation = (args.Length > 1) ? args[1] : "C:\\Temp\\SomeFiles";
            string filePattern = (args.Length > 2) ? args[2] : "*2012-10-30.*.gz";
            string fileNamePrefix = (args.Length > 3) ? args[3] : "LineCounts";

            Console.WriteLine("Start: {0}", DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ss.fffffffZ"));
            Console.WriteLine("Processing file(s): {0}", filePattern);
            Console.WriteLine("Max MaxDOP to be used: {0}", maxMaxDOP.ToString());
            Console.WriteLine("");
            Console.WriteLine("MaxDOP,FilesProcessed,ProcessingTime[ms],BytesProcessed,LinesRead,SomeBookLines,LinesPer[ms],BytesPer[ms]");

            for (int maxDOP = 1; maxDOP <= maxMaxDOP; maxDOP++)
            {
                // Construct ConcurrentStacks for resulting strings and counters
                ConcurrentStack<Int64> TotalLines = new ConcurrentStack<Int64>();
                ConcurrentStack<Int64> TotalSomeBookLines = new ConcurrentStack<Int64>();
                ConcurrentStack<Int64> TotalLength = new ConcurrentStack<Int64>();
                ConcurrentStack<int> TotalFiles = new ConcurrentStack<int>();

                DateTime FullStartTime = DateTime.Now;
                string[] files = System.IO.Directory.GetFiles(fileLocation, filePattern);
                var options = new ParallelOptions() { MaxDegreeOfParallelism = maxDOP };

                // Method signature: Parallel.ForEach(IEnumerable<TSource> source, Action<TSource> body)
                Parallel.ForEach(files, options, currentFile =>
                {
                    string filename = System.IO.Path.GetFileName(currentFile);
                    DateTime fileStartTime = DateTime.Now;
                    using (FileStream inFile = File.Open(fileLocation + "\\" + filename, FileMode.Open))
                    {
                        Int64 lines = 0, someBookLines = 0, length = 0;
                        String line = "";
                        using (var reader = new StreamReader(new GZipStream(inFile, CompressionMode.Decompress)))
                        {
                            while (!reader.EndOfStream)
                            {
                                line = reader.ReadLine();
                                lines++;                // total lines
                                length += line.Length;  // total line length
                                if (line.Contains("book")) someBookLines++; // some special lines that need to be parsed later
                            }
                            TotalLines.Push(lines); TotalSomeBookLines.Push(someBookLines); TotalLength.Push(length);
                            TotalFiles.Push(1); // silly way to count processed files :)
                        }
                    }
                });

                TimeSpan runningTime = DateTime.Now - FullStartTime;
                // MaxDOP,FilesProcessed,ProcessingTime[ms],BytesProcessed,LinesRead,SomeBookLines,LinesPer[ms],BytesPer[ms]
                Console.WriteLine("{0},{1},{2},{3},{4},{5},{6},{7}",
                    maxDOP.ToString(),
                    TotalFiles.Sum().ToString(),
                    Convert.ToInt32(runningTime.TotalMilliseconds).ToString(),
                    TotalLength.Sum().ToString(),
                    TotalLines.Sum(),
                    TotalSomeBookLines.Sum().ToString(),
                    Convert.ToInt64(TotalLines.Sum() / runningTime.TotalMilliseconds).ToString(),
                    Convert.ToInt64(TotalLength.Sum() / runningTime.TotalMilliseconds).ToString());
            }
            Console.WriteLine();
            Console.WriteLine("Finish: " + DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ss.fffffffZ"));
        }
    }
}
Here's a summary of the results (charts omitted), with a clear peak at MaxDegreeOfParallelism = 8.
The CPU load charts showed that most of the load was on a single NUMA node, even when DOP was in the 20 to 30 range.
The only way I've found to make CPU load cross 95% mark was to split the files across 4 different folders and execute the same command 4 times, each one targeting a subset of all files.
Can someone find a bottleneck?
It's likely that one problem is the small buffer size used by the default FileStream constructor. I suggest you use a larger input buffer, such as:
using (FileStream infile = new FileStream(
    name, FileMode.Open, FileAccess.Read, FileShare.None, 65536))
The default buffer size is 4 kilobytes, which has the thread making many calls to the I/O subsystem to fill its buffer. A buffer of 64K means that you will make those calls much less frequently.
I've found that a buffer size of between 32K and 256K gives the best performance, with 64K being the "sweet spot" when I did some detailed testing a while back. A buffer size larger than 256K actually begins to reduce performance.
Also, although this is unlikely to have a major effect on performance, you probably should replace those ConcurrentStack instances with 64-bit integers and use Interlocked.Add or Interlocked.Increment to update them. It simplifies your code and removes the need to manage the collections.
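Concretely, a sketch of that simplification (the counter names are made up for the example, and the file-reading body is elided to a comment):

using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

class InterlockedCounters
{
    static void Main()
    {
        string[] files = Directory.GetFiles("C:\\Temp\\SomeFiles", "*.gz");
        long totalLines = 0, totalLength = 0;
        int totalFiles = 0;

        Parallel.ForEach(files, currentFile =>
        {
            long lines = 0, length = 0;
            // ... decompress and count per file as in the original loop ...

            // One atomic add per file instead of a concurrent-collection push.
            Interlocked.Add(ref totalLines, lines);
            Interlocked.Add(ref totalLength, length);
            Interlocked.Increment(ref totalFiles);
        });

        Console.WriteLine("{0} files, {1} lines, {2} bytes", totalFiles, totalLines, totalLength);
    }
}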
Update:
Re-reading your problem description, I was struck by this statement:
The only way I've found to make CPU load cross 95% mark was to split the files across 4 different folders and execute the same command 4 times, each one targeting a subset of all files.
That, to me, points to a bottleneck in opening files. As though the OS is using a mutual exclusion lock on the directory. And even if all the data is in the cache and there's no physical I/O required, processes still have to wait on this lock. It's also possible that the file system is writing to the disk. Remember, it has to update the Last Access Time for a file whenever it's opened.
If I/O really is the bottleneck, then you might consider having a single thread that does nothing but load files and stuff them into a BlockingCollection or similar data structure so that the processing threads don't have to contend with each other for a lock on the directory. Your application becomes a producer/consumer application with one producer and N consumers.
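A minimal sketch of that producer/consumer shape, assuming the same gzip line-counting as the question (the directory path is a placeholder):

using System;
using System.Collections.Concurrent;
using System.IO;
using System.IO.Compression;
using System.Threading;
using System.Threading.Tasks;

class ProducerConsumerCount
{
    static void Main()
    {
        var queue = new BlockingCollection<byte[]>(boundedCapacity: 16);

        // Single producer: only this thread touches the directory and disk.
        var producer = Task.Run(() =>
        {
            foreach (var file in Directory.GetFiles("C:\\Temp\\SomeFiles", "*.gz"))
                queue.Add(File.ReadAllBytes(file));
            queue.CompleteAdding();
        });

        // N consumers: decompress and count entirely in memory.
        // (For production, wrap GetConsumingEnumerable in a NoBuffering
        // partitioner; this is just a sketch.)
        long totalLines = 0;
        Parallel.ForEach(queue.GetConsumingEnumerable(), compressed =>
        {
            long lines = 0;
            using (var reader = new StreamReader(
                new GZipStream(new MemoryStream(compressed), CompressionMode.Decompress)))
            {
                while (reader.ReadLine() != null) lines++;
            }
            Interlocked.Add(ref totalLines, lines);
        });

        producer.Wait();
        Console.WriteLine("Total lines: " + totalLines);
    }
}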
The reason for this is usually that threads synchronize too much.
Looking for synchronization in your code I can see heavy syncing on the collections. Your threads are pushing the lines individually. This means that each line incurs at best an interlocked operation and at worst a kernel-mode lock wait. The interlocked operations will contend heavily because all threads race to get their current line into the collection. They all try to update the same memory locations. This causes cache line pinging.
Change this to push lines in bigger chunks. Push line-arrays of 100 lines or more. The more the better.
In other words, collect results in a thread-local collection first and only rarely merge into the global results.
You might even want to get rid of the manual data pushing altogether. This is what PLINQ is made for: streaming data concurrently. PLINQ abstracts away all the concurrent collection manipulations in a well-performing way.
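For example, a rough PLINQ version of the counting loop might look like this (no shared collections; the degree of parallelism and the path are arbitrary picks):

using System;
using System.IO;
using System.IO.Compression;
using System.Linq;

class PlinqLineCount
{
    static void Main()
    {
        long totalLines = Directory.GetFiles("C:\\Temp\\SomeFiles", "*.gz")
            .AsParallel()
            .WithDegreeOfParallelism(8)
            .Select(file =>
            {
                long lines = 0;
                using (var reader = new StreamReader(
                    new GZipStream(File.OpenRead(file), CompressionMode.Decompress)))
                {
                    while (reader.ReadLine() != null) lines++;
                }
                return lines; // one result per file; PLINQ merges them
            })
            .Sum();

        Console.WriteLine("Total lines: " + totalLines);
    }
}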
I don't think parallelizing the disk reads is helping you. In fact, this could be seriously impacting your performance by creating contention in reading from multiple areas of storage at the same time.
I would restructure the program to first do a single-threaded read of raw file data into a memory stream of byte[]. Then, do a Parallel.ForEach() on each stream or buffer to decompress and count the lines.
You take an initial IO read hit up front but let the OS/hardware optimize the hopefully mostly sequential reads, then decompress and parse in memory.
Keep in mind that operations like decompression, Encoding.UTF8.ToString(), String.Split(), etc. will use large amounts of memory, so clean up references to, and dispose of, old buffers as you no longer need them.
I'd be surprised if you can't cause the machine to generate some serious waste heat this way.
Hope this helps.
The problem, I think, is that you are using blocking I/O, so your threads cannot fully take advantage of parallelism.
If I understand your algorithm right (sorry, I'm more of a C++ guy) this is what you are doing in each thread (pseudo-code):
while (there is data in the file)
    read data
    gunzip data
Instead, a better approach would be something like this:
N = 0
read data block N
while (there is data in the file)
    asyncRead data block N+1
    gunzip data block N
    N = N + 1
gunzip data block N
The asyncRead call does not block, so basically you have the decoding of block N happening concurrently with the reading of block N+1, so by the time you are done decoding block N you might have block N+1 ready (or close to be ready if I/O is slower than decoding).
Then it's just a matter of finding the block size that gives you the best throughput.
Good luck.
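Translated to C#, a rough sketch of that overlap, treating whole files as the "blocks" (prefetch the next file's bytes on a background task while decompressing the current one; names and path are illustrative):

using System;
using System.IO;
using System.IO.Compression;
using System.Threading.Tasks;

class OverlappedDecompress
{
    static long CountLines(byte[] compressed)
    {
        long lines = 0;
        using (var reader = new StreamReader(
            new GZipStream(new MemoryStream(compressed), CompressionMode.Decompress)))
        {
            while (reader.ReadLine() != null) lines++;
        }
        return lines;
    }

    static void Main()
    {
        string[] files = Directory.GetFiles("C:\\Temp\\SomeFiles", "*.gz");
        if (files.Length == 0) return;
        long total = 0;

        // Kick off the read of block N+1 before decompressing block N.
        Task<byte[]> next = Task.Run(() => File.ReadAllBytes(files[0]));
        for (int n = 0; n < files.Length; n++)
        {
            byte[] current = next.Result;
            if (n + 1 < files.Length)
            {
                string nextFile = files[n + 1];
                next = Task.Run(() => File.ReadAllBytes(nextFile));
            }
            total += CountLines(current); // decompress block N while N+1 loads
        }
        Console.WriteLine("Total lines: " + total);
    }
}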

High memory usage for small application

I'm building a very simple event-based proxy monitor to disable the proxy settings depending on whether a network location is available.
The issue is that the application is a tiny 10 KB and has a minimal interface, yet it uses 10 MB of RAM.
The code is pretty simple:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Net;
using System.Net.NetworkInformation;
using Microsoft.Win32;

namespace WCSProxyMonitor
{
    class _Application : ApplicationContext
    {
        private NotifyIcon NotificationIcon = new NotifyIcon();
        private string IPAdressToCheck = "10.222.62.5";

        public _Application(string[] args)
        {
            if (args.Length > 0)
            {
                try
                {
                    IPAddress.Parse(args[0]); // ?FormatException
                    this.IPAdressToCheck = args[0];
                }
                catch (Exception)
                { }
            }
            this.enableGUIAspects();
            this.buildNotificationContextmenu();
            this.startListening();
        }

        private void startListening()
        {
            NetworkChange.NetworkAddressChanged += new NetworkAddressChangedEventHandler(networkChangeListener);
        }

        public void networkChangeListener(object sender, EventArgs e)
        {
            //foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
            //{
            //    IPInterfaceProperties IPInterfaceProperties = nic.GetIPProperties();
            //}

            // Attempt to ping the domain!
            PingOptions PingOptions = new PingOptions(128, true);
            Ping ping = new Ping();
            // empty buffer
            byte[] Packet = new byte[32];
            // Send
            PingReply PingReply = ping.Send(IPAddress.Parse(this.IPAdressToCheck), 1000, Packet, PingOptions);

            // Get the registry object ready.
            using (RegistryKey RegistryObject = Registry.CurrentUser.OpenSubKey("Software\\Microsoft\\Windows\\CurrentVersion\\Internet Settings", true))
            {
                if (PingReply.Status == IPStatus.Success)
                {
                    this.NotificationIcon.ShowBalloonTip(3000, "Proxy Status", "proxy settings have been enabled", ToolTipIcon.Info);
                    RegistryObject.SetValue("ProxyEnable", 1, RegistryValueKind.DWord);
                }
                else
                {
                    this.NotificationIcon.ShowBalloonTip(3000, "Proxy Status", "proxy settings have been disabled", ToolTipIcon.Info);
                    RegistryObject.SetValue("ProxyEnable", 0, RegistryValueKind.DWord);
                }
            }
        }

        private void enableGUIAspects()
        {
            this.NotificationIcon.Icon = Resources.proxyicon;
            this.NotificationIcon.Visible = true;
        }

        private void buildNotificationContextmenu()
        {
            this.NotificationIcon.ContextMenu = new ContextMenu();
            this.NotificationIcon.Text = "Monitoring for " + this.IPAdressToCheck;
            // Exit comes first:
            this.NotificationIcon.ContextMenu.MenuItems.Add(new MenuItem("Exit", this.ExitApplication));
        }

        public void ExitApplication(object Sender, EventArgs e)
        {
            Application.Exit();
        }
    }
}
My questions are:
Is this normal for an application built on C#
What can I do to reduce the amount of memory being used.
the application is built on the framework of .NET 4.0
Regards.
It doesn't use anywhere near 10 MB of RAM. It uses 10 MB of address space. Address space usage has (almost) nothing whatsoever to do with RAM.
When you load the .NET framework, space for all the code is reserved in your address space. It is not loaded into RAM. The code is loaded into RAM in 4kb chunks called "pages" on an as-needed basis, but space for those pages has to be reserved in the address space so that the process is guaranteed that there is a space in the address space for all the code it might need.
Furthermore, when each page is loaded into RAM, if you have two .NET applications running at the same time then they share that page of RAM. The memory manager takes care of ensuring that shared code pages are only loaded once into RAM, even if they are in a thousand different address spaces.
If you're going to be measuring memory usage then you need to learn how memory works in a modern operating system. Things have changed since the 286 days.
See this related question:
Is 2 GB really my maximum?
And see my article on the subject for a brief introduction to how memory actually works:
http://blogs.msdn.com/b/ericlippert/archive/2009/06/08/out-of-memory-does-not-refer-to-physical-memory.aspx
If you just start your application and then check the amount of memory usage, the number may be high. .NET applications preload about 10 MB of memory when they start. After your app runs for a while, you should see the memory usage drop. Also, just because you see a particular amount of memory in use by your app in Task Manager doesn't mean it is actually using that amount. .NET can also share memory for some components, as well as preallocate memory. If you are really concerned, use a real profiler on your application.
Your app itself is small, but it references classes in the .NET Framework, and those need to be loaded into memory too. When you use Process Explorer from Sysinternals you can see which DLLs are loaded and, if you select some more columns, also how much memory they use. That should help explain where some of the memory footprint is coming from; other reasons, as described in the other answers, may still be valid.
You could try a GC.Collect() to see how much memory is used after that, though it's not recommended to fiddle with the GC in production code.
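For a quick diagnostic check, something like this sketch (diagnostics only, as noted):

using System;

static class MemoryCheck
{
    public static void Report()
    {
        // Force a full collection, wait for finalizers, then collect again
        // so objects resurrected by finalizers are also reclaimed.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        Console.WriteLine("Managed heap after collect: {0:n0} bytes", GC.GetTotalMemory(false));
    }
}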
Regards GJ
Yes, this is normal for C# applications; starting the CLR takes some doing.
As for reducing this, the fewer DLLs you load the better, so see which references you can remove.
For example, I see you are importing Linq but didn't spot any use of it in a quick scan of the code. Can you remove it and reduce the number of DLLs your project depends on?
I also see that you are using Windows Forms; 10 MB is not large for any application using Forms.

Process Memory Size - Different Counters

I'm trying to find out how much memory my own .Net server process is using (for monitoring and logging purposes).
I'm using:
Process.GetCurrentProcess().PrivateMemorySize64
However, the Process object has several different properties that let me read the memory space used:
Paged, NonPaged, PagedSystem, NonPagedSystem, Private, Virtual, WorkingSet
and then the "peak" properties, which I'm guessing just store the maximum values these last ones ever took.
Reading through the MSDN definition of each property hasn't proved too helpful for me. I have to admit my knowledge regarding how memory is managed (as far as paging and virtual memory go) is very limited.
So my question is obviously "which one should I use?", and I know the answer is "it depends".
This process will basically hold a bunch of lists in memory of things that are going on, while other processes communicate with it and query it for stuff. I'm expecting the server this will run on to require lots of RAM, so I'm querying this data over time to be able to estimate RAM requirements compared to the sizes of the lists it keeps inside.
So... Which one should I use and why?
If you want to know how much the GC uses try:
GC.GetTotalMemory(true)
If you want to know what your process uses from Windows (VM Size column in TaskManager) try:
Process.GetCurrentProcess().PrivateMemorySize64
If you want to know what your process has in RAM (as opposed to in the pagefile) (Mem Usage column in TaskManager) try:
Process.GetCurrentProcess().WorkingSet64
See here for more explanation on the different sorts of memory.
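Putting those together, a small sketch that prints the three counters side by side (values in bytes):

using System;
using System.Diagnostics;

class MemoryCounters
{
    static void Main()
    {
        Process proc = Process.GetCurrentProcess();
        // Managed heap only, after a forced full collection.
        Console.WriteLine("GC.GetTotalMemory(true): {0:n0}", GC.GetTotalMemory(true));
        // Committed private memory ("VM Size" in older Task Managers).
        Console.WriteLine("PrivateMemorySize64:     {0:n0}", proc.PrivateMemorySize64);
        // Physical memory currently in RAM ("Mem Usage").
        Console.WriteLine("WorkingSet64:            {0:n0}", proc.WorkingSet64);
    }
}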
OK, I found through Google the same page that Lars mentioned, and I believe it's a great explanation for people that don't quite know how memory works (like me).
http://shsc.info/WindowsMemoryManagement
My short conclusion was:
Private Bytes = The Memory my process has requested to store data. Some of it may be paged to disk or not. This is the information I was looking for.
Virtual Bytes = The Private Bytes, plus the space shared with other processes for loaded DLLs, etc.
Working Set = The portion of ALL the memory of my process that has not been paged to disk. So the amount paged to disk should be (Virtual - Working Set).
Thanks all for your help!
If you want to use the "Memory (Private Working Set)" value as shown in the Windows Vista Task Manager, which is the equivalent of Process Explorer's "WS Private Bytes", here is the code. It's probably best to run this infinite loop in a thread/background task for real-time stats.
using System.Threading;
using System.Diagnostics;

// namespace...class...method
Process thisProc = Process.GetCurrentProcess();
PerformanceCounter PC = new PerformanceCounter();
PC.CategoryName = "Process";
PC.CounterName = "Working Set - Private";
PC.InstanceName = thisProc.ProcessName;

while (true)
{
    String privMemory = (PC.NextValue() / 1000).ToString() + "KB (Private Bytes)";
    // Do something with string privMemory
    Thread.Sleep(1000);
}
To get the value that Task Manager gives, my hat's off to Mike Regan's solution above. However, one change: it is not perfCounter.NextValue()/1000 but perfCounter.NextValue()/1024 (i.e. a real kilobyte). This gives the exact value you see in Task Manager.
Following is a full solution for displaying the "memory usage" (Task Manager's, as given) in a simple way in your WPF or WinForms app (in this case, simply in the title). Just call this method within the new Window constructor:
private void DisplayMemoryUsageInTitleAsync()
{
    origWindowTitle = this.Title; // set WinForms or WPF Window Title to field
    BackgroundWorker wrkr = new BackgroundWorker();
    wrkr.WorkerReportsProgress = true;

    wrkr.DoWork += (object sender, DoWorkEventArgs e) => {
        Process currProcess = Process.GetCurrentProcess();
        PerformanceCounter perfCntr = new PerformanceCounter();
        perfCntr.CategoryName = "Process";
        perfCntr.CounterName = "Working Set - Private";
        perfCntr.InstanceName = currProcess.ProcessName;

        while (true)
        {
            int value = (int)perfCntr.NextValue() / 1024;
            string privateMemoryStr = value.ToString("n0") + "KB [Private Bytes]";
            wrkr.ReportProgress(0, privateMemoryStr);
            Thread.Sleep(1000);
        }
    };

    wrkr.ProgressChanged += (object sender, ProgressChangedEventArgs e) => {
        string val = e.UserState as string;
        if (!string.IsNullOrEmpty(val))
            this.Title = string.Format(@"{0} ({1})", origWindowTitle, val);
    };

    wrkr.RunWorkerAsync();
}
Is this a fair description? I'd like to share this with my team, so please let me know if it is incorrect (or incomplete):
There are several ways in C# to ask how much memory my process is using.
Allocated memory can be managed (by the CLR) or unmanaged.
Allocated memory can be virtual (stored on disk) or loaded (into RAM pages)
Allocated memory can be private (used only by the process) or shared (e.g. belonging to a DLL that other processes are referencing).
Given the above, here are some ways to measure memory usage in C#:
1) Process.VirtualMemorySize64: returns all the memory used by a process - managed or unmanaged, virtual or loaded, private or shared.
2) Process.PrivateMemorySize64: returns all the private memory used by a process - managed or unmanaged, virtual or loaded.
3) Process.WorkingSet64: returns all the private, loaded memory used by a process - managed or unmanaged.
4) GC.GetTotalMemory(): returns the amount of managed memory being watched by the garbage collector.
Working set isn't a good property to use. From what I gather, it includes everything the process can touch, even libraries shared by several processes, so you're seeing double-counted bytes in that counter. Private memory is a much better counter to look at.
I'd also suggest monitoring how often page faults happen. A page fault occurs when you try to access data that has been moved from physical memory to the swap file, and the system has to read the page back from disk before you can access it.
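Following the same PerformanceCounter pattern shown above, a sketch for watching page faults (note that the "Page Faults/sec" counter counts soft and hard faults together, so treat it as a rough signal):

using System;
using System.Diagnostics;
using System.Threading;

class PageFaultMonitor
{
    static void Main()
    {
        var pc = new PerformanceCounter(
            "Process", "Page Faults/sec",
            Process.GetCurrentProcess().ProcessName);

        while (true)
        {
            // NextValue needs two samples to compute a rate, hence the sleep.
            Console.WriteLine("Page faults/sec: {0:n0}", pc.NextValue());
            Thread.Sleep(1000);
        }
    }
}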
