High memory usage for a small application - C#

I'm building a very simple event-based proxy monitor to disable the proxy settings depending on whether a network location is available.
The issue is that the application is a tiny 10 KB with a minimal interface, yet it uses 10 MB of RAM.
The code is pretty simple:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Net;
using System.Net.NetworkInformation;
using Microsoft.Win32;

namespace WCSProxyMonitor
{
    class _Application : ApplicationContext
    {
        private NotifyIcon NotificationIcon = new NotifyIcon();
        private string IPAdressToCheck = "10.222.62.5";

        public _Application(string[] args)
        {
            if (args.Length > 0)
            {
                try
                {
                    IPAddress.Parse(args[0]); //?FormatException
                    this.IPAdressToCheck = args[0];
                }
                catch (Exception)
                { }
            }
            this.enableGUIAspects();
            this.buildNotificationContextmenu();
            this.startListening();
        }

        private void startListening()
        {
            NetworkChange.NetworkAddressChanged += new NetworkAddressChangedEventHandler(networkChangeListener);
        }

        public void networkChangeListener(object sender, EventArgs e)
        {
            //foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
            //{
            //    IPInterfaceProperties IPInterfaceProperties = nic.GetIPProperties();
            //}

            //Attempt to ping the domain!
            PingOptions PingOptions = new PingOptions(128, true);
            Ping ping = new Ping();
            //empty buffer
            byte[] Packet = new byte[32];
            //Send
            PingReply PingReply = ping.Send(IPAddress.Parse(this.IPAdressToCheck), 1000, Packet, PingOptions);

            //Get the registry object ready.
            using (RegistryKey RegistryObject = Registry.CurrentUser.OpenSubKey("Software\\Microsoft\\Windows\\CurrentVersion\\Internet Settings", true))
            {
                if (PingReply.Status == IPStatus.Success)
                {
                    this.NotificationIcon.ShowBalloonTip(3000, "Proxy Status", "proxy settings have been enabled", ToolTipIcon.Info);
                    RegistryObject.SetValue("ProxyEnable", 1, RegistryValueKind.DWord);
                }
                else
                {
                    this.NotificationIcon.ShowBalloonTip(3000, "Proxy Status", "proxy settings have been disabled", ToolTipIcon.Info);
                    RegistryObject.SetValue("ProxyEnable", 0, RegistryValueKind.DWord);
                }
            }
        }

        private void enableGUIAspects()
        {
            this.NotificationIcon.Icon = Resources.proxyicon;
            this.NotificationIcon.Visible = true;
        }

        private void buildNotificationContextmenu()
        {
            this.NotificationIcon.ContextMenu = new ContextMenu();
            this.NotificationIcon.Text = "Monitoring for " + this.IPAdressToCheck;
            //Exit comes first:
            this.NotificationIcon.ContextMenu.MenuItems.Add(new MenuItem("Exit", this.ExitApplication));
        }

        public void ExitApplication(object Sender, EventArgs e)
        {
            Application.Exit();
        }
    }
}
My questions are:
Is this normal for an application built in C#?
What can I do to reduce the amount of memory being used?
The application is built on .NET Framework 4.0.
Regards.

It doesn't use anywhere near 10 MB of RAM. It uses 10 MB of address space. Address space usage has (almost) nothing whatsoever to do with RAM.
When you load the .NET Framework, space for all the code is reserved in your address space. It is not loaded into RAM. The code is loaded into RAM in 4 KB chunks called "pages" on an as-needed basis, but space for those pages has to be reserved in the address space so that the process is guaranteed room for all the code it might need.
Furthermore, when each page is loaded into RAM, if you have two .NET applications running at the same time then they share that page of RAM. The memory manager takes care of ensuring that shared code pages are only loaded once into RAM, even if they are in a thousand different address spaces.
If you're going to be measuring memory usage then you need to learn how memory works in a modern operating system. Things have changed since the 286 days.
See this related question:
Is 2 GB really my maximum?
And my article on the subject for a brief introduction to how memory actually works.
http://blogs.msdn.com/b/ericlippert/archive/2009/06/08/out-of-memory-does-not-refer-to-physical-memory.aspx
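To see the distinction for yourself, here is a small sketch (a plain console app using only standard System.Diagnostics calls) that prints the current process's reserved address space next to what is committed and what is actually resident; even for a trivial program the virtual size dwarfs the working set:

using System;
using System.Diagnostics;

class MemoryReport
{
    static void Main()
    {
        // Refresh to read current values rather than cached ones.
        Process p = Process.GetCurrentProcess();
        p.Refresh();

        // Virtual size: reserved address space. Usually far larger than the RAM in use.
        Console.WriteLine("Virtual size:  {0:n0} KB", p.VirtualMemorySize64 / 1024);

        // Private bytes: memory committed for this process alone.
        Console.WriteLine("Private bytes: {0:n0} KB", p.PrivateMemorySize64 / 1024);

        // Working set: the part that is actually resident in physical RAM right now.
        Console.WriteLine("Working set:   {0:n0} KB", p.WorkingSet64 / 1024);
    }
}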

If you just start your application and then check the memory usage, the number may be high. .NET applications preload about 10 MB of memory when they start. After your app runs for a while you should see the memory usage drop. Also, just because you see a particular amount of memory in use by your app in Task Manager doesn't mean it is actually using that amount: .NET can share memory for some components as well as preallocate memory. If you are really concerned, run a real profiler against your application.

Your app itself is small, but it references classes from the .NET Framework, and those need to be loaded into memory too. If you use Process Explorer from Sysinternals you can see which DLLs are loaded and, if you add a few more columns, how much memory they use. That should help explain where some of the memory footprint is coming from; the reasons described in the other answers may still apply as well.
You could try a GC.Collect() to see how much memory is in use after that, though fiddling with the GC in production code is not recommended.
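For what it's worth, a minimal diagnostic snippet for that idea (drop it into the app temporarily; it just forces a full collection and reports the managed heap size):

// Diagnostics only - don't leave this in production code.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
long managedBytes = GC.GetTotalMemory(true); // GetTotalMemory(true) also waits for the GC to settle
Console.WriteLine("Managed heap after full GC: {0:n0} KB", managedBytes / 1024);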
Regards GJ

Yes, this is normal for C# applications; starting the CLR takes some doing.
As for reducing it: the fewer DLLs you load the better, so see which references you can remove.
For example, I see you are importing LINQ but didn't spot any use of it in a quick scan of the code; can you remove it and reduce the number of DLLs your project depends on?
I also see that you are using Windows Forms; 10 MB is not large for any application using Forms.
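If you want to see which DLLs actually end up loaded into the process (the same list Process Explorer shows), here is a small diagnostic sketch; nothing in it is specific to your project:

using System;
using System.Diagnostics;

class LoadedModules
{
    static void Main()
    {
        // List every module (EXE/DLL) mapped into the current process and its size.
        foreach (ProcessModule module in Process.GetCurrentProcess().Modules)
        {
            Console.WriteLine("{0,-40} {1,8:n0} KB",
                module.ModuleName, module.ModuleMemorySize / 1024);
        }
    }
}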


Memory leak analysis and help requested

I've been using the methodology outlined by Shivprasad Koirala to check for memory leaks from code running inside a C# application (VoiceAttack). It basically involves using the Performance Monitor to track an application's private bytes as well as bytes in all heaps and compare these counters to assess if there is a leak and what type (managed/unmanaged). Ideally I need to test outside of Visual Studio, which is why I'm using this method.
The following portion of code generates the memory profile below (bear in mind the code is formatted a little differently from Visual Studio because this is a function contained within the main C# application):
public void main()
{
    string FilePath = null;
    using (FileDialog myFileDialog = new OpenFileDialog())
    {
        myFileDialog.Title = "this is the title";
        myFileDialog.FileName = "testFile.txt";
        myFileDialog.Filter = "txt files (*.txt)|*.txt|All files (*.*)|*.*";
        myFileDialog.FilterIndex = 1;
        if (myFileDialog.ShowDialog() == DialogResult.OK)
        {
            FilePath = myFileDialog.FileName;
            var extension = Path.GetExtension(FilePath);
            var compareType = StringComparison.InvariantCultureIgnoreCase;
            if (extension.Equals(".txt", compareType) == false)
            {
                FilePath = null;
                VA.WriteToLog("Selected file is not a text file. Action canceled.");
            }
            else
                VA.WriteToLog(FilePath);
        }
        else
            VA.WriteToLog("No file selected. Action canceled.");
    }
    VA.WriteToLog("done");
}
You can see that after running this code the private bytes don't come back to the original count and the bytes in all heaps are roughly constant, which implies that there is a portion of unmanaged memory that was not released. Running this same inline function a few times consecutively doesn't cause further increases to the maximum observed private bytes or the unreleased memory. Once the main C# application (VoiceAttack) closes all the related memory (including the memory for the above code) is released. The bad news is that under normal circumstances the main application may be kept running indefinitely by the user, causing the allocated memory to remain unreleased.
For good measure I threw this same code into VS (with a pair of Thread.Sleep(5000) added before and after the using block for better graphical analysis) and built an executable to track with the Performance Monitor method, and the result is the same. There is an initial unmanaged memory jump for the OpenFileDialog and the allocated unmanaged memory never comes back down to the original value.
Does the memory and leak tracking methodology outlined above make sense? If YES, is there anything that can be done to properly release the unmanaged memory?
Does the memory and leak tracking methodology outlined above make sense?
No. You shouldn't expect unmanaged committed memory (Private Bytes) to always be released. For instance, processes have an unmanaged heap, which is managed to allow for subsequent allocations. And since Windows can page out your committed memory, it isn't critical to minimize each process's committed memory.
If repeated calls don't increase memory use, you don't have a memory leak, you have delayed initialization. Some components aren't initialized until you use them, so their memory usage isn't being taken into account when you establish your baseline.
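One way to separate those two cases with your Private Bytes approach is to run the code once as a warm-up before taking the baseline, so one-time initialization isn't mistaken for a leak. A rough sketch of the idea; MeasureForLeaks and doWork are hypothetical names standing in for whatever routine you are testing:

using System;
using System.Diagnostics;

// Hypothetical harness: doWork stands in for the routine under test.
static void MeasureForLeaks(Action doWork, int iterations)
{
    doWork();                                 // warm-up: pay one-time initialization costs first
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    Process proc = Process.GetCurrentProcess();
    proc.Refresh();
    long baseline = proc.PrivateMemorySize64; // baseline taken AFTER the warm-up

    for (int i = 0; i < iterations; i++)
        doWork();

    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();
    proc.Refresh();
    long after = proc.PrivateMemorySize64;

    // Growth roughly proportional to 'iterations' suggests a real leak;
    // a one-time jump that stops growing is delayed initialization.
    Console.WriteLine("Private bytes delta: {0:n0} KB", (after - baseline) / 1024);
}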

Need C# code that will eat up system memory.

I need the opposite of good, optimized code. For testing purposes I need a simple program to eat up RAM. Preferably not so much that the OS becomes non-functional, but rather something that simulates a memory leak, using up a lot of memory and slowing the OS down gradually, over time.
Specifically, can anyone provide code snippets or links to tutorials I can use?
I saw this code as a suggestion on another post:
for (object[] o = null;; o = new[] { o });
But it is not quite what I am looking for as per the description above.
Please help.
Use
Marshal.AllocHGlobal(numbytes)
You can attach this to a timer, and just don't release the memory (don't call FreeHGlobal).
That's the most straightforward, controllable, and predictable way to consume memory.
Marshal.AllocHGlobal
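A minimal sketch of that approach (the 10 MB block size and one-second interval are arbitrary; scale them to taste):

using System;
using System.Runtime.InteropServices;
using System.Timers;

class MemoryEater
{
    const int BlockSize = 10 * 1024 * 1024;   // 10 MB per tick -- adjust as needed

    static void Main()
    {
        var timer = new Timer(1000);          // allocate once per second
        timer.Elapsed += (s, e) =>
        {
            IntPtr block = Marshal.AllocHGlobal(BlockSize);

            // Touch each page so the memory actually lands in the working set,
            // not just in committed bytes.
            for (int offset = 0; offset < BlockSize; offset += 4096)
                Marshal.WriteByte(block, offset, 1);

            // Deliberately no Marshal.FreeHGlobal(block) -- this is the "leak".
        };
        timer.Start();

        Console.WriteLine("Leaking 10 MB per second. Press Enter to stop.");
        Console.ReadLine();
    }
}

Touching each page is what makes the working set climb along with committed memory; leave that loop out if you only want Private Bytes to grow.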
The design is below; the question asked for gradually eating up memory.
It gradually eats up memory.
Parameter #1 is the amount of memory it will eat up, in megabytes (e.g. 6000 is roughly 6 GB).
Parameter #2 is the delay between iterations, in milliseconds (e.g. 1000 is 1 second).
The committed memory and the working set will be around the same.
It was designed to use XmlNode as the object that takes up memory, because then the COMMITTED MEMORY (memory allocated to the process by the OS) and the WORKING SET (memory actually used by the process) end up roughly the same. If a primitive type such as a byte[] array is used to take up memory, the WORKING SET usually stays near zero, because the memory is not actually being used by the process even though it has been allocated.
Make sure to compile as x64 (Project Properties, Build tab). Otherwise, if it is compiled as 32-bit, it will throw an OutOfMemoryException at around the 1.7 GB limit. In x64 the memory it eats up is practically limitless.
using System;
using System.Xml;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using System.Runtime.InteropServices;

namespace MemoryHog
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine(" Total Memory To Eat Up in Megs: " + args[0]);
            Console.WriteLine("Millisecs Pause Between Increments: " + args[1]);

            int memoryTotalInMegaBytes = Convert.ToInt32(args[0]);
            int millisecondsToPause = Convert.ToInt32(args[1]); // pause between iterations, in milliseconds

            Console.WriteLine("Memory Usage:" + Convert.ToString(GC.GetTotalMemory(false)));
            long runningTotal = GC.GetTotalMemory(false);
            long endingMemoryLimit = Convert.ToInt64(memoryTotalInMegaBytes) * 1000 * 1000;

            List<XmlNode> memList = new List<XmlNode>();
            while (runningTotal <= endingMemoryLimit)
            {
                XmlDocument doc = new XmlDocument();
                for (int i = 0; i < 1000000; i++)
                {
                    XmlNode x = doc.CreateNode(XmlNodeType.Element, "hello", "");
                    memList.Add(x);
                }
                runningTotal = GC.GetTotalMemory(false);
                Console.WriteLine("Memory Usage:" + Convert.ToString(GC.GetTotalMemory(false)));
                Thread.Sleep(millisecondsToPause);
            }
            Console.ReadLine();
        }
    }
}
The easiest way to create memory leaks in C# is by attaching to events and never detaching. That is what I would do, at least. Here is an SO question that talks about this.
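For illustration, a sketch of that pattern: subscribers attach to a long-lived (here static) event and never detach, so the publisher keeps every subscriber - and its payload - alive:

using System;

class Publisher
{
    // A static event lives for the lifetime of the process, so anything that
    // subscribes to it stays reachable until it unsubscribes.
    public static event EventHandler SomethingHappened;

    public static void Raise()
    {
        var handler = SomethingHappened;
        if (handler != null)
            handler(null, EventArgs.Empty);
    }
}

class Subscriber
{
    // Some payload so every leaked subscriber costs noticeable memory.
    private readonly byte[] _payload = new byte[1024 * 1024];

    public Subscriber()
    {
        // Subscribes but never unsubscribes -- the classic .NET "leak".
        Publisher.SomethingHappened += OnSomethingHappened;
    }

    private void OnSomethingHappened(object sender, EventArgs e) { }
}

class Program
{
    static void Main()
    {
        for (int i = 0; i < 100; i++)
            new Subscriber();   // never stored anywhere, yet still rooted via the event

        GC.Collect();           // none of the 100 subscribers (or their 1 MB payloads) are collected
        Console.WriteLine("Roughly 100 MB is still held via Publisher.SomethingHappened.");
        Console.ReadLine();
    }
}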

Determining CPU and RAM usage in a Silverlight 4 Windows Sidebar Gadget

I'm trying to write a Silverlight 4 Windows Sidebar Gadget that, among other things, can monitor the usage of each CPU core (as a percentage) and the usage of RAM (in bytes) of the host computer. I've tried using System.Management, but Visual Studio won't let me add it, as it's not part of Silverlight.
In the end, I'm looking for some method that simply returns the usage of a specific CPU core. Automatically detecting the number of cores would be a bonus. The same goes for RAM.
Extensive searching has led me to believe that this is possible through COM+ automation, but I'm clueless as to how. Any direction would be very much appreciated.
You can use the System.Windows.Analytics class to get system stats.
It has an AverageProcessorLoad member which you can use to get the current CPU usage (a value between 0 and 1), and it's for Silverlight only.
You can simply use it like this:
float averageCPUUsage = System.Windows.Analytics.AverageProcessorLoad;
float myAppCPUUsage = System.Windows.Analytics.AverageProcessLoad;// Get cpu usage by your current app.
Update
But from Silverlight this is as far as we can go. For RAM and processor count you will need something installed on the client side itself to tell you; from the browser you can't.
You can also take a look at a sample of System.Windows.Analytics usage in this article.
A little fragment of code from that article that shows usage of System.Windows.Analytics:
public partial class Page : UserControl
{
    Analytics analytics;

    public Page()
    {
        InitializeComponent();
        CompositionTarget.Rendering += new EventHandler(CompositionTarget_Rendering);
    }

    void CompositionTarget_Rendering(object sender, EventArgs e)
    {
        if (analytics == null)
            analytics = new Analytics();
    }
}

How to limit I/O operations in .NET application?

I'm developing an application (.NET 4.0, C#) that:
1. Scans file system.
2. Opens and reads some files.
The app will work in background and should have low impact on the disk usage. It shouldn't bother users if they are doing their usual tasks and the disk usage is high. And vice versa, the app can go faster if nobody is using the disk.
The main issue is that I don't know the real number and size of the I/O operations, because I use an API (mapi32.dll) to read files. If I ask the API to do something, I don't know how many bytes it reads to handle my request.
So the question is: how do I monitor and manage the disk usage, including the file system scanning and file reading?
Should I check the performance counters used by the standard Performance Monitor tool, or is there some other way?
Using the System.Diagnostics.PerformanceCounter class, attach to the PhysicalDisk counter related to the drive that you are indexing.
Below is some code to illustrate, although it's currently hard-coded to the "C:" drive. You will want to change "C:" to whichever drive your process is scanning. (This is rough sample code only, to show that the performance counters exist - don't take it as providing accurate information and always treat it as a guide only. Adapt it for your own purposes.)
Observe the % Idle Time counter, which indicates how much of the time the drive is doing nothing.
0% idle means the disk is busy, but does not necessarily mean that it is flat-out and cannot transfer more data.
Combine the % Idle Time with Current Disk Queue Length and this will tell you if the drive is getting so busy that it cannot service all the requests for data. As a general guideline, anything over 0 means the drive is probably flat-out busy and anything over 2 means the drive is completely saturated. These rules apply to both SSD and HDD fairly well.
Also, any value that you read is an instantaneous value at a point in time. You should do a running average over a few results, e.g. take a reading every 100ms and average 5 readings before using the information from the result to make a decision (i.e., waiting until the counters settle before making your next IO request).
internal DiskUsageMonitor(string driveName)
{
    // Get a list of the counters and look for "C:"
    var perfCategory = new PerformanceCounterCategory("PhysicalDisk");
    string[] instanceNames = perfCategory.GetInstanceNames();
    foreach (string name in instanceNames)
    {
        if (name.IndexOf("C:") > 0)
        {
            if (string.IsNullOrEmpty(driveName))
                driveName = name;
        }
    }

    _readBytesCounter = new PerformanceCounter("PhysicalDisk",
                                               "Disk Read Bytes/sec",
                                               driveName);
    _writeBytesCounter = new PerformanceCounter("PhysicalDisk",
                                                "Disk Write Bytes/sec",
                                                driveName);
    _diskQueueCounter = new PerformanceCounter("PhysicalDisk",
                                               "Current Disk Queue Length",
                                               driveName);
    _idleCounter = new PerformanceCounter("PhysicalDisk",
                                          "% Idle Time",
                                          driveName);
    InitTimer();
}

internal event DiskUsageResultHander DiskUsageResult;

private void InitTimer()
{
    StopTimer();
    _perfTimer = new Timer(_updateResolutionMillisecs);
    _perfTimer.Elapsed += PerfTimerElapsed;
    _perfTimer.Start();
}

private void PerfTimerElapsed(object sender, ElapsedEventArgs e)
{
    float diskReads = _readBytesCounter.NextValue();
    float diskWrites = _writeBytesCounter.NextValue();
    float diskQueue = _diskQueueCounter.NextValue();
    float idlePercent = _idleCounter.NextValue();

    if (idlePercent > 100)
    {
        idlePercent = 100;
    }

    if (DiskUsageResult != null)
    {
        var stats = new DiskUsageStats
        {
            DriveName = _readBytesCounter.InstanceName,
            DiskQueueLength = (int)diskQueue,
            ReadBytesPerSec = (int)diskReads,
            WriteBytesPerSec = (int)diskWrites,
            DiskUsagePercent = 100 - (int)idlePercent
        };
        DiskUsageResult(stats);
    }
}
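Picking up the earlier point about averaging readings before acting on them, here is a small sketch that smooths the % Idle Time counter over five samples taken 100 ms apart (the sample count and spacing are arbitrary):

using System;
using System.Diagnostics;
using System.Threading;

static class DiskIdleSampler
{
    // Average five samples of "% Idle Time", taken 100 ms apart, so a single
    // spiky reading doesn't drive a throttling decision.
    internal static float AverageIdlePercent(PerformanceCounter idleCounter)
    {
        const int sampleCount = 5;
        float total = 0f;

        idleCounter.NextValue();            // the first reading primes the counter and is meaningless
        for (int i = 0; i < sampleCount; i++)
        {
            Thread.Sleep(100);
            total += Math.Min(idleCounter.NextValue(), 100f);
        }
        return total / sampleCount;
    }
}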
A long time ago Microsoft Research published a paper on this (sorry, I can't remember the URL).
From what I recall:
The program started off doing very few "work items".
They measured how long each "work item" took.
After running for a bit, they could work out how fast a "work item" was with no load on the system.
From then on, if the "work items" were fast (e.g. no other programs were making requests), they made more requests; otherwise they backed off.
The basic idea is:
"if they are slowing me down, then I must be slowing them down, so do less work if I am being slowed down."
Something to ponder: what if there are other processes which follow the same (or a similar) strategy? Which one would run during the "idle time"? Would the other processes get a chance to make use of the idle time at all?
Obviously this can't be done correctly unless there is some well-known OS mechanism for fairly dividing resources during idle time. In Windows, this is done by calling SetPriorityClass.
This document about I/O prioritization in Vista seems to imply that IDLE_PRIORITY_CLASS will not really lower the priority of I/O requests (though it will reduce the scheduling priority for the process). Vista added new PROCESS_MODE_BACKGROUND_BEGIN and PROCESS_MODE_BACKGROUND_END values for that.
In C#, you can normally set the process priority with the Process.PriorityClass property. The new values for Vista are not available though, so you'll have to call the Windows API function directly. You can do that like this:
[DllImport("kernel32.dll", CharSet=CharSet.Auto, SetLastError=true)]
public static extern bool SetPriorityClass(IntPtr handle, uint priorityClass);
const uint PROCESS_MODE_BACKGROUND_BEGIN = 0x00100000;
static void SetBackgroundMode()
{
if (!SetPriorityClass(new IntPtr(-1), PROCESS_MODE_BACKGROUND_BEGIN))
{
// handle error...
}
}
I did not test the code above. Don't forget that it can only work on Vista or better. You'll have to use Environment.OSVersion to check for earlier operating systems and implement a fall-back strategy.
See this question and this one as well for related queries. For a simple solution I would suggest just querying the current disk and CPU usage percentage every so often, and only continuing with the current task when they are under a defined threshold. Just make sure your work is easily broken into tasks, and that each task can be easily and efficiently started and stopped.
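A rough sketch of that kind of back-off loop, assuming the work has already been split into small tasks and that you have a % Idle Time counter like the one in the earlier answer (the queue, the 60% threshold, and the sleeps are all illustrative):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

class ThrottledWorker
{
    // Illustrative back-off loop: only take the next small unit of work while the
    // disk looks idle enough.
    internal void Run(Queue<Action> tasks, PerformanceCounter idleCounter)
    {
        const float minIdlePercent = 60f;   // arbitrary threshold: back off below this
        idleCounter.NextValue();            // prime the counter

        while (tasks.Count > 0)
        {
            Thread.Sleep(100);
            float idle = Math.Min(idleCounter.NextValue(), 100f);

            if (idle >= minIdlePercent)
                tasks.Dequeue()();          // disk is quiet: do one task
            else
                Thread.Sleep(1000);         // disk is busy: back off and re-check
        }
    }
}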
Check whether the screensaver is running? It's a good indication that the user is away from the keyboard.
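On the screensaver idea: there's no managed property for it, but the Win32 SystemParametersInfo call with SPI_GETSCREENSAVERRUNNING (0x0072 in the SDK headers) reports it. A sketch of the P/Invoke, untested here but following the standard pattern:

using System;
using System.Runtime.InteropServices;

static class ScreenSaver
{
    private const uint SPI_GETSCREENSAVERRUNNING = 0x0072;

    [DllImport("user32.dll", SetLastError = true)]
    private static extern bool SystemParametersInfo(
        uint uiAction, uint uiParam, ref bool pvParam, uint fWinIni);

    // Returns true if a screensaver is currently running -- a rough hint that the
    // user is away and heavier I/O is less likely to bother anyone.
    internal static bool IsRunning()
    {
        bool running = false;
        SystemParametersInfo(SPI_GETSCREENSAVERRUNNING, 0, ref running, 0);
        return running;
    }
}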

Process Memory Size - Different Counters

I'm trying to find out how much memory my own .Net server process is using (for monitoring and logging purposes).
I'm using:
Process.GetCurrentProcess().PrivateMemorySize64
However, the Process object has several different properties that let me read the memory space used:
Paged, NonPaged, PagedSystem, NonPagedSystem, Private, Virtual, WorkingSet
and then the "peaks": which i'm guessing just store the maximum values these last ones ever took.
Reading through the MSDN definition of each property hasn't proved too helpful for me. I have to admit my knowledge regarding how memory is managed (as far as paging and virtual goes) is very limited.
So my question is obviously "which one should I use?", and I know the answer is "it depends".
This process will basically hold a bunch of lists in memory of things that are going on, while other processes communicate with it and query it for stuff. I'm expecting the server this will run on to require lots of RAM, so I'm querying this data over time to be able to estimate RAM requirements compared to the sizes of the lists it keeps internally.
So... Which one should I use and why?
If you want to know how much the GC uses try:
GC.GetTotalMemory(true)
If you want to know what your process uses from Windows (the VM Size column in Task Manager) try:
Process.GetCurrentProcess().PrivateMemorySize64
If you want to know what your process has in RAM, as opposed to in the pagefile (the Mem Usage column in Task Manager), try:
Process.GetCurrentProcess().WorkingSet64
See here for more explanation on the different sorts of memory.
OK, I found through Google the same page that Lars mentioned, and I believe it's a great explanation for people who don't quite know how memory works (like me).
http://shsc.info/WindowsMemoryManagement
My short conclusion was:
Private Bytes = The Memory my process has requested to store data. Some of it may be paged to disk or not. This is the information I was looking for.
Virtual Bytes = The Private Bytes, plus the space shared with other processes for loaded DLLs, etc.
Working Set = The portion of ALL the memory of my process that has not been paged to disk. So the amount paged to disk should be (Virtual - Working Set).
Thanks all for your help!
If you want to use the "Memory (Private Working Set)" as shown in Windows Vista task manager, which is the equivalent of Process Explorer "WS Private Bytes", here is the code. Probably best to throw this infinite loop in a thread/background task for real-time stats.
using System.Threading;
using System.Diagnostics;

//namespace...class...method
Process thisProc = Process.GetCurrentProcess();
PerformanceCounter PC = new PerformanceCounter();
PC.CategoryName = "Process";
PC.CounterName = "Working Set - Private";
PC.InstanceName = thisProc.ProcessName;

while (true)
{
    String privMemory = (PC.NextValue() / 1000).ToString() + "KB (Private Bytes)";
    //Do something with string privMemory
    Thread.Sleep(1000);
}
To get the value that Task Manager gives, my hat's off to Mike Regan's solution above. However, one change: it is not perfCounter.NextValue()/1000 but perfCounter.NextValue()/1024 (i.e. a real kilobyte). This gives the exact value you see in Task Manager.
Following is a full solution for displaying the 'memory usage' (Task manager's, as given) in a simple way in your WPF or WinForms app (in this case, simply in the title). Just call this method within the new Window constructor:
private void DisplayMemoryUsageInTitleAsync()
{
    origWindowTitle = this.Title; // set WinForms or WPF Window Title to field
    BackgroundWorker wrkr = new BackgroundWorker();
    wrkr.WorkerReportsProgress = true;

    wrkr.DoWork += (object sender, DoWorkEventArgs e) => {
        Process currProcess = Process.GetCurrentProcess();
        PerformanceCounter perfCntr = new PerformanceCounter();
        perfCntr.CategoryName = "Process";
        perfCntr.CounterName = "Working Set - Private";
        perfCntr.InstanceName = currProcess.ProcessName;

        while (true)
        {
            int value = (int)perfCntr.NextValue() / 1024;
            string privateMemoryStr = value.ToString("n0") + "KB [Private Bytes]";
            wrkr.ReportProgress(0, privateMemoryStr);
            Thread.Sleep(1000);
        }
    };

    wrkr.ProgressChanged += (object sender, ProgressChangedEventArgs e) => {
        string val = e.UserState as string;
        if (!string.IsNullOrEmpty(val))
            this.Title = string.Format(@"{0} ({1})", origWindowTitle, val);
    };

    wrkr.RunWorkerAsync();
}
Is this a fair description? I'd like to share this with my team so please let me know if it is incorrect (or incomplete):
There are several ways in C# to ask how much memory my process is using.
Allocated memory can be managed (by the CLR) or unmanaged.
Allocated memory can be virtual (stored on disk) or loaded (into RAM pages)
Allocated memory can be private (used only by the process) or shared (e.g. belonging to a DLL that other processes are referencing).
Given the above, here are some ways to measure memory usage in C#:
1) Process.VirtualMemorySize64: returns all the memory used by a process - managed or unmanaged, virtual or loaded, private or shared.
2) Process.PrivateMemorySize64: returns all the private memory used by a process - managed or unmanaged, virtual or loaded.
3) Process.WorkingSet64: returns all the private, loaded memory used by a process - managed or unmanaged.
4) GC.GetTotalMemory(): returns the amount of managed memory being watched by the garbage collector.
Working set isn't a good property to use. From what I gather, it includes everything the process can touch, even libraries shared by several processes, so you're seeing double-counted bytes in that counter. Private memory is a much better counter to look at.
I'd suggest also monitoring how often page faults happen. A page fault happens when you try to access data that has been moved from physical memory to the swap file, and the system has to read the page back from disk before you can access it.
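For reference, page faults per second can be watched with the same PerformanceCounter mechanism used elsewhere in this thread; a minimal sketch for the current process (the ten-sample loop is just for illustration):

using System;
using System.Diagnostics;
using System.Threading;

class PageFaultWatcher
{
    static void Main()
    {
        string instance = Process.GetCurrentProcess().ProcessName;

        // "Page Faults/sec" counts both soft and hard faults; sustained high values
        // alongside low available memory are the interesting signal.
        var faults = new PerformanceCounter("Process", "Page Faults/sec", instance);

        faults.NextValue();                // prime the counter
        for (int i = 0; i < 10; i++)
        {
            Thread.Sleep(1000);
            Console.WriteLine("Page faults/sec: {0:n0}", faults.NextValue());
        }
    }
}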
