Reliable way to measure RAM usage of desktop application - c#

I'm working on an automated testing system (written in C#) for an application at work, and I'm having great difficulty measuring the peak RAM usage it needs, e.g. while loading certain files (memory usage is typically much higher during file loading).
First I tried Process.PeakWorkingSet64, and it worked quite well on the machines in use at the time, until the testing system was deployed to more machines and some VMs.
On some of these machines PeakWorkingSet64 was way higher than on others (e.g. 180MB vs 420MB).
I tried various other properties of Process and also tried PerformanceCounter, but I don't know of any other metric that gives me a peak value (I really want the peak, not the current state).
I can't get my head around why the PeakWorkingSet64 value is so much higher on some systems. I always run the exact same software with the exact same workloads on these machines. So if I have software that allocates 1GB of data in RAM, I'd expect every system it runs on to report a max memory usage of around 1GB.
Is there something important I'm missing here?
Any hints on what I could do to measure memory usage reliably from within the testing system?
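One way to take the OS's peak bookkeeping out of the equation is to sample the process yourself and track the maxima. Below is a sketch of that idea (the PeakMemorySampler name and the 50ms interval are my own arbitrary choices, not an established API). Note that PrivateMemorySize64 is often more comparable across machines than the working set, because the working set also reflects how much physical memory the OS happens to leave resident for the process:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

static class PeakMemorySampler
{
    // Runs `workload` while a background thread periodically re-reads the
    // process's memory counters, recording the peaks itself instead of
    // relying on the OS-maintained PeakWorkingSet64.
    public static (long PeakWorkingSet, long PeakPrivateBytes) Measure(
        Process process, Action workload, int intervalMs = 50)
    {
        long peakWs = 0, peakPriv = 0;
        using (var done = new ManualResetEventSlim(false))
        {
            var sampler = new Thread(() =>
            {
                // Wait returns false on timeout, true once Set() is called.
                while (!done.Wait(intervalMs))
                {
                    process.Refresh();  // re-read the cached OS counters
                    peakWs = Math.Max(peakWs, process.WorkingSet64);
                    peakPriv = Math.Max(peakPriv, process.PrivateMemorySize64);
                }
            });
            sampler.IsBackground = true;
            sampler.Start();
            try { workload(); }
            finally { done.Set(); sampler.Join(); }
        }
        return (peakWs, peakPriv);
    }
}
```

A short sampling interval can still miss a brief allocation spike between two samples, so this gives a lower bound on the true peak; for file-loading phases that last seconds, it is usually close enough.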

Related

clr.dll!MetaDataGetDispenser and higher than expected CPU usage

I have a Windows Service written in C# which is running on a customer's server and seems to be working fine without any problems, but the CPU usage for this process is often much higher than I would expect. It is never over 50%, but is still a lot higher than the single figures I see when running in house.
This may simply be due to a higher workload, but I was attempting to determine what exactly the service was doing, and Process Explorer reports a lot of threads with a start address of clr.dll!MetaDataGetDispenser, each with a small CPU usage, but they all add up.
Does anyone know what this is, and what sort of code would be using it?
The service will be using WCF to connect to various clients (and presumably that will involve some sort of serialisation) and will also be accessing a Microsoft SQL Server, but no explicit reflection.

C# Memory Use Discrepancy between two Machines

I wrote a C# scraper and analyzed the markup of 30K URLs to pull certain metrics from them.
I ran the same code on two machines:
my dev box with a 4-core CPU, 8 logical processors and 32GB of RAM. It used up to 300MB of RAM through to the end. As I displayed the WorkingSet size, I could even see the GC kick in and lower memory use, then grow back again.
an EC2 instance, same data, but an instance with only 2 processors and 1.7GB of RAM. Here it used 1.1GB of RAM and, when all threads concluded their work, it went down to 300MB just like my local test.
RAM usage was checked with both Environment.WorkingSet and Task Manager. My network speed is decent, so I don't think it could have affected things, even if that Amazon instance might be a little faster. (EC2 network performance differs per instance, and this one is on the affordable, hence slower, side.)
Why this memory use discrepancy? And can I somehow estimate before hand the memory use in C#?
My guess is that, having a slower CPU in the cloud, the GC preferred to keep allocating rather than cleaning up what was already used. But this is just my theory to excuse its unexpected behavior, based on wishful thinking. Still, with my 32GB of RAM it could have used way more, but it behaved. With 1.7GB of RAM it went all crazy, using 1.1GB of it... I don't get it.
In C++ I just think of how many URLs I fetch at the same time, figure a 256KB average size plus the size of the extracted data, and I can tell, beforehand, quite precisely how much memory will be used. But this C# test left me wondering.
As I plan to release this tool in the wild... I don't feel comfortable taking over half the RAM especially on a lesser machine.
UPDATE: Forgot to mention both machines are Windows 8. Actually one is 8.1 (local) and one Server 2012 (EC2 cloud) both with .NET 4.5.2.
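One way to see how much of such a discrepancy is GC slack rather than live data is to compare the OS-level working set with the managed heap after a forced full collection. A minimal sketch (the MemoryView helper name is made up): the working-set number is expected to vary between machines, while the live managed bytes should be roughly the same everywhere the same workload runs.

```csharp
using System;

static class MemoryView
{
    // OS-level view: depends on machine RAM, GC flavor and paging
    // decisions, so it legitimately differs across machines.
    public static long WorkingSetBytes() => Environment.WorkingSet;

    // Managed-heap view after a full collection: only live objects
    // remain, so this is the number to compare across machines.
    public static long LiveManagedBytes() =>
        GC.GetTotalMemory(forceFullCollection: true);
}
```

If LiveManagedBytes() matches on both machines but WorkingSetBytes() does not, the extra gigabyte is uncollected garbage plus heap the GC has grown but not returned, not a real difference in what the program holds on to.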

Will the C# compiler for a big codebase run dramatically faster on a machine with huge RAM?

I have seen some real slow build times in a big legacy codebase without proper assembly decomposition, running on a 2G RAM machine. So, if I wanted to speed it up without code overhaul, would a 16G (or some other such huge number) RAM machine be radically faster, if the fairy IT department were to provide one? In other words, is RAM the major bottleneck for sufficiently large dotnet projects or are there other dominant issues?
Any input about similar situation for building Java is also appreciated, just out of pure curiosity.
Performance does not improve with additional RAM once you have more RAM than the application uses. You are likely not to see any more improvement by using 128GB of RAM.
We cannot guess the amount needed. Measure by looking at task manager.
It certainly won't do you any harm...
2G is pretty small for a dev machine, I use 16G as a matter of course.
However, build times are going to be gated by file access sooner or later, so whilst you might get a little improvement I suspect you won't be blown away by it. ([EDIT] as a commenter says, compilation is likely to be CPU bound too).
Have you looked into parallel builds? (E.g. see this SO question: Visual Studio 2010, how to build projects in parallel on multicore.)
Or, can you restructure your code base, maybe moving some less frequently updated assemblies into a separate sln and then referencing them as DLLs? (This isn't a great idea in all cases, but sometimes it can be expedient.) From your description of the problem I'm guessing this is easier said than done, but this is how we've achieved good results in our code base.
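For reference, parallel builds can be tried straight from the command line before touching the IDE; a minimal invocation (the solution name is hypothetical):

```shell
# /m (alias /maxcpucount) lets MSBuild build independent projects in
# parallel, defaulting to one worker node per logical CPU.
msbuild BigLegacy.sln /m
```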
The whole RAM issue is actually one of ROI (return on investment). The more RAM you add to a system, the less likely the application is going to have to search for a memory location large enough to store an object of a particular size, and the faster it'll go; however, after a certain point it's so unlikely that the system will pick a location that is not large enough that it's pointless to go any higher. (Note that the read/write speeds of the RAM play a role in this as well.)
In summary: at 2GB of RAM you should definitely upgrade, to something more like 8GB or the suggested 16GB; going beyond that would be almost pointless, because the bottleneck will then come from the processor.
It's also a good idea to note the speed of the RAM, because the RAM itself can become a bottleneck given the maximum clock speed it can handle. Generally, though, 1600MHz is fine.

.NET Performance counters that monitor a machine's computing power

I've been researching a bit about .NET Performance counters, but couldn't find anything detailed (unlike the rest of the framework).
My question is: which are the most relevant performance counters for gauging the maximum computing power of the machine? Well, of the .NET virtual machine, that is...
Thank you,
Chuck Mysak
You haven't described what you mean by "computing power". Here are some of the things you can get through performance counters that might be relevant:
Number of SQL queries executed.
Number of IIS requests completed.
Number of distributed transactions committed.
Bytes of I/O performed (disk, network, etc.).
There are also relative numbers, such as percentages of processor and memory in use which can give an indication of how much of the "power" of your system is actually being used.
However, you will not find anything that correlates cleanly with raw computing "power". In fact, who is to say that the machine's full "power" is being taken advantage of at the time you look at the counters? It sounds like what you really want to do is run a benchmark, which includes the exact work to be performed and the collection of measurements to be taken. You can search the Internet for various benchmark applications. Some of these run directly in .NET while the most popular are native applications which you could shell out to execute.
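Along those lines, here is a minimal benchmark sketch: time a fixed CPU-bound workload with Stopwatch and compare the elapsed time across machines. The workload, helper name, and iteration count are arbitrary choices for illustration, not any standard benchmark:

```csharp
using System;
using System.Diagnostics;

static class MiniBenchmark
{
    // Runs a fixed CPU-bound workload and returns elapsed milliseconds.
    // Comparing this number across machines is a crude but direct gauge
    // of available computing power; no performance counter gives you that.
    public static double RunMs(int iterations = 10_000_000)
    {
        var sw = Stopwatch.StartNew();
        double sum = 0;
        for (int i = 1; i <= iterations; i++)
            sum += Math.Sqrt(i);
        sw.Stop();
        // Use the result so the loop cannot be optimized away entirely.
        if (sum < 0) Console.WriteLine(sum);
        return sw.Elapsed.TotalMilliseconds;
    }
}
```

For anything serious, run the workload several times and take the median, since a single run is noisy (JIT warm-up, other processes, CPU frequency scaling).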

winforms application memory usage

Is there any way I can have my application tell how much memory the user has, and whether the application is getting close to taking up a high percentage of that?
Also, how do you know how much memory the machine gives to the OS, video cards, etc.?
For example, if you have 4GB of memory, how much actual memory is given to applications? Can you configure this?
Is there any way I can have my application tell how much memory the user has, and whether the application is getting close to taking up a high percentage of that?
Yes, it's possible (see some of the other answers), but it's going to be very unlikely that your application really needs to care. What is it that you're doing where you think you need to be this sensitive to memory pressure?
Also, how do you know how much memory the machine gives to the OS, video cards, etc.?
Again, this should be possible using WMI calls, but the bigger question is why do you need to do this?
For example, if you have 4GB of memory, how much actual memory is given to applications? Can you configure this?
No, this isn't a configurable value. When a .NET application starts up the operating system allocates a block of memory for it to use. This is handled by the OS and there is no way to configure the algorithms used to determine the amount of memory to allocate. Likewise, there is no way to configure how much of that memory the .NET runtime uses for the managed heap, stack, large object heap, etc.
I think I read the question a little differently, so hopefully this response isn't too off topic!
You can get a good overview of how much memory your application is consuming by using Windows Task Manager or, even better, Sysinternals Process Explorer. This is a quick way to review your processes at their peaks to see how they are behaving.
Out of the box, an x86 process will only be able to address 2GB of RAM. This means any single process on your machine can only consume up to 2GB. In reality, you're likely to be able to consume only 1.5-1.8GB before getting out-of-memory exceptions.
How much RAM your copy of Windows can actually address will depend on the Windows version and CPU architecture.
Using your example of 4GB RAM, the OS is going to give your applications up to 2GB of RAM to play in (which all processes share) and it will reserve 2GB for itself.
Depending on the operating system you're running, you can tweak this: using the /3GB switch in boot.ini will adjust that ratio to 3GB for applications and 1GB for the OS. This has some impact on the OS, so I'd review that impact first and see if you can live with the tradeoff (YMMV).
For a single application to be able to address more than 2GB (i.e. to benefit from /3GB), you're going to need to set a particular bit in the PE image header. This question/answer already has good info on this subject.
The game changes under x64 architecture. :)
Some good reference information:
Memory Limits for Windows Releases
Virtual Address Space
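A quick way to see which of these limits applies to your own process is to check its bitness at runtime; a small sketch (the helper name is made up):

```csharp
using System;

static class AddressSpace
{
    // A 32-bit process gets a 2GB user address space by default
    // (3GB with /3GB plus the large-address-aware PE flag); a 64-bit
    // process is effectively unconstrained by address space.
    public static string Describe() =>
        Environment.Is64BitProcess
            ? "64-bit process: address space is not the limiting factor"
            : "32-bit process: limited to 2GB (3GB with /3GB + LARGEADDRESSAWARE)";
}
```

Note that a 32-bit process stays 32-bit even on 64-bit Windows (it runs under WOW64), so check Is64BitProcess rather than Is64BitOperatingSystem.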
I think you can use WMI to get all that information
If you don't wish to use WMI, you could use GlobalMemoryStatusEx():
Function Call:
http://www.pinvoke.net/default.aspx/kernel32/GlobalMemoryStatusEx.html
Return Data:
http://www.pinvoke.net/default.aspx/Structures/MEMORYSTATUSEX.html
MemoryLoad will give you a number between 0 and 100 that represents the approximate percentage of physical memory in use, and TotalPhys will tell you the total amount of physical memory in bytes.
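Putting those two links together, here is a minimal P/Invoke sketch (Windows-only; the struct layout follows the MEMORYSTATUSEX documentation linked above, and the Query helper name is my own):

```csharp
using System;
using System.Runtime.InteropServices;

static class MemoryStatus
{
    // Managed mirror of the native MEMORYSTATUSEX structure (64 bytes).
    [StructLayout(LayoutKind.Sequential)]
    public struct MEMORYSTATUSEX
    {
        public uint dwLength;
        public uint dwMemoryLoad;          // 0-100: % of physical memory in use
        public ulong ullTotalPhys;         // total physical memory, bytes
        public ulong ullAvailPhys;
        public ulong ullTotalPageFile;
        public ulong ullAvailPageFile;
        public ulong ullTotalVirtual;
        public ulong ullAvailVirtual;
        public ulong ullAvailExtendedVirtual;
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GlobalMemoryStatusEx(ref MEMORYSTATUSEX lpBuffer);

    public static MEMORYSTATUSEX Query()
    {
        var status = new MEMORYSTATUSEX
        {
            // The API rejects the call unless dwLength is set first.
            dwLength = (uint)Marshal.SizeOf<MEMORYSTATUSEX>()
        };
        if (!GlobalMemoryStatusEx(ref status))
            throw new System.ComponentModel.Win32Exception();
        return status;
    }
}
```

Usage is then e.g. `var s = MemoryStatus.Query();` followed by reading `s.dwMemoryLoad` and `s.ullTotalPhys`.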
Memory is tricky because usable memory is a blend of physical (ram) and virtual (page file) types. The specific blend, and what goes where, is determined by the operating system. Luckily, this is somewhat configurable as Windows allows you to stipulate how much virtual memory to use, if any.
Take note that not all of the memory in 32-bit Windows (XP & Vista) is available for use. Windows may report up to 4GB installed but only 3.1-3.2GB is available for actual use by the operating system and applications. This has to do with legacy addressing issues IIRC.
Good Luck
