.NET Performance counters that monitor a machine's computing power - c#

I've been researching a bit about .NET Performance counters, but couldn't find anything detailed (unlike the rest of the framework).
My question is: which are the most relevant performance counters for measuring the maximum computing power of the machine? Or rather, of the .NET virtual machine.
Thank you,
Chuck Mysak

You haven't described what you mean by "computing power". Here are some of the things you can get through performance counters that might be relevant:
Number of SQL queries executed.
Number of IIS requests completed.
Number of distributed transactions committed.
Bytes of I/O performed (disk, network, etc.).
There are also relative numbers, such as the percentages of processor and memory in use, which can give an indication of how much of the "power" of your system is actually being used.
However, you will not find anything that correlates cleanly with raw computing "power". In fact, who is to say that the machine's full "power" is being taken advantage of at the time you look at the counters? It sounds like what you really want to do is run a benchmark, which includes the exact work to be performed and the collection of measurements to be taken. You can search the Internet for various benchmark applications. Some of these run directly in .NET while the most popular are native applications which you could shell out to execute.
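For illustration, a minimal sketch of reading two of the stock Windows counters from C# (the category, counter, and instance names below are the standard ones; note that the first sample of a rate counter always reads 0, hence the warm-up call):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CounterProbe
    {
        static void Main()
        {
            // "% Processor Time" on the "_Total" instance covers all cores.
            using (var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total"))
            using (var mem = new PerformanceCounter("Memory", "Available MBytes"))
            {
                cpu.NextValue();       // first sample of a rate counter is always 0
                Thread.Sleep(1000);    // let the counter accumulate over an interval
                Console.WriteLine($"CPU in use:    {cpu.NextValue():F1} %");
                Console.WriteLine($"Available RAM: {mem.NextValue():F0} MB");
            }
        }
    }

Even here, note that these are utilization figures, not a measure of the machine's raw "power"; for that you still need a benchmark with a fixed workload.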

Related

Reliable way to measure RAM usage of desktop application

I'm working on an automated testing system (written in C#) for an application at work, and I'm having great difficulty measuring the peak RAM usage it needs, e.g. while loading certain files (memory usage is typically much higher during file loading).
First I tried Process.PeakWorkingSet64, and it worked quite well on the machines in use at that time, until the testing system was deployed to more machines and some VMs.
On some of these machines PeakWorkingSet64 was way higher than on others (e.g. 180MB vs 420MB).
I tried various other properties of Process and also tried to use PerformanceCounter, but I don't know any other metric that gives me a peak value (I really want the peak, not the current state).
I can't really get my head around why the PeakWorkingSet64 value is so much higher on some systems. I always run the exact same software with the exact same workloads on these machines. So if I have software that allocates 1GB of data in RAM, I also expect every system it runs on to report a max memory usage of around 1GB.
Is there something important I'm missing here?
Any hints what I could do to measure memory usage reliably from within the testing system?
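For reference, a minimal sketch of the sampling approach described above, assuming a placeholder executable name; note that PeakWorkingSet64 must be read while the process is still alive, and that working set includes shared pages and is subject to OS trimming, which is one reason it varies between machines:

    using System;
    using System.Diagnostics;

    class PeakMemoryProbe
    {
        static void Main()
        {
            // "AppUnderTest.exe" is a placeholder for the application under test.
            using (var proc = Process.Start("AppUnderTest.exe"))
            {
                long peakWorkingSet = 0, peakPrivate = 0;
                while (!proc.HasExited)
                {
                    proc.Refresh();   // discard cached property values
                    peakWorkingSet = Math.Max(peakWorkingSet, proc.PeakWorkingSet64);
                    // Private bytes exclude shared pages, so they tend to be more
                    // comparable across machines; there is no OS-tracked peak for
                    // them, hence the manual sampling.
                    peakPrivate = Math.Max(peakPrivate, proc.PrivateMemorySize64);
                    System.Threading.Thread.Sleep(100);   // sample every 100 ms
                }
                Console.WriteLine($"Peak working set:   {peakWorkingSet / (1024 * 1024)} MB");
                Console.WriteLine($"Peak private bytes: {peakPrivate / (1024 * 1024)} MB");
            }
        }
    }

Comparing PeakWorkingSet64 against the sampled private bytes on the machines that disagree should tell you whether the difference is real allocation or just OS working-set policy.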

will C# compiler for big codebase run dramatically faster on machine with huge RAM?

I have seen some real slow build times in a big legacy codebase without proper assembly decomposition, running on a 2G RAM machine. So, if I wanted to speed it up without code overhaul, would a 16G (or some other such huge number) RAM machine be radically faster, if the fairy IT department were to provide one? In other words, is RAM the major bottleneck for sufficiently large dotnet projects or are there other dominant issues?
Any input about similar situation for building Java is also appreciated, just out of pure curiosity.
Performance does not improve with additional RAM once you have more RAM than the application uses; you are unlikely to see any further improvement by going to, say, 128GB of RAM.
We cannot guess the amount needed. Measure it by looking at Task Manager.
It certainly won't do you any harm...
2G is pretty small for a dev machine; I use 16G as a matter of course.
However, build times are going to be gated by file access sooner or later, so while you might get a little improvement, I suspect you won't be blown away by it. ([EDIT] as a commenter says, compilation is likely to be CPU-bound too.)
Have you looked into parallel builds? (E.g. see this SO question: Visual Studio 2010, how to build projects in parallel on multicore.)
Or, can you restructure your code base and maybe move some less frequently updated assemblies into a separate sln, and then reference these as DLLs? (This isn't a great idea in all cases, but sometimes it can be expedient.) From your description of the problem I'm guessing this is easier said than done, but this is how we've achieved good results in our code base.
The whole RAM issue is really one of ROI (return on investment). The more RAM you add to a system, the less often the application has to search for a memory location large enough to store an object of a particular size, and the faster it'll go; however, past a certain point it is so unlikely that the system will pick a location that is too small for the object that going higher is pointless. (Note that the read/write speed of the RAM sticks plays a role in this as well.)
In summary: at 2GB of RAM you should definitely upgrade to something more like 8GB or the suggested 16GB; going much beyond that would be almost pointless, because the bottleneck will then come from the processor.
It's also a good idea to note the speed of the RAM, because the RAM itself can become a bottleneck: it can only handle XXXX MHz clock speed at most. Generally, though, 1600MHz is fine.

Designing library performance comparison tests

I am getting ready to perform a series of performance comparisons of various off-the-shelf products.
What do I need to do to show credibility in the tests? How do I design my benchmark tests so that they are respectable?
I am also interested in any suggestions on the actual design of the tests: ways to load data without affecting the tests (Heisenberg Uncertainty Principle), ways to monitor, etc.
This is a bit tricky to answer without knowing what sort of "off the shelf" products you are trying to assess. Are you looking for UI responsiveness, throughput (e.g. email, transactions/sec), startup time, etc - all of these have different criteria for what measures you should track and different tools for testing or evaluating. But to answer some of your general questions:
Credibility - this is important. Try to make sure that whatever you are measuring has little run-to-run variance. Do several runs of the same scenario, discard the outliers (i.e. your lowest and highest), and evaluate your avg/max/min/median values. If you're doing some sort of throughput test, consider making it long-running so you have a good sample set. For example, if you are looking at something like Microsoft Exchange and thus are using its perf counters, try to take frequent samples (once per second or every few seconds) and have the test run for 20 minutes or so. Again, chop off the first few minutes and the last few minutes to eliminate any startup/shutdown noise.
Heisenberg - tricky. In most modern systems, depending on what application/measures you are measuring, you can minimize this impact by being smart about what and how you measure. Sometimes (as in the Exchange example) you'll see near-zero impact. Try to use the least invasive tools possible. For example, if you're measuring startup time, consider using xperf and the events built into the kernel. If you're using perfmon, don't flood the system with extraneous counters that you don't care about. If you're doing some extremely long-running test, reduce your sampling frequency.
Also try to eliminate any sources of environmental variability or possible sources of noise. If you're doing something network-intensive, consider isolating the network. Try to disable any services or applications that you don't care about. Limit any sort of disk I/O, memory-intensive operations, etc. If disk I/O might introduce noise into something that is CPU-bound, consider using an SSD.
When designing your tests, keep repeatability in mind. If you're doing some sort of microbenchmark-style testing (e.g. a perf unit test), have your infrastructure support running the same operation n times in exactly the same way. If you're driving UI, try not to physically drive the mouse; instead use the underlying accessibility layer (MSAA, UIAutomation, etc.) to hit controls directly programmatically.
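As a concrete sketch of the advice above (warm-up run, n identical iterations, trimming the outliers, reporting avg/min/max/median), assuming a caller-supplied action:

    using System;
    using System.Diagnostics;
    using System.Linq;

    static class MicroBench
    {
        public static void Run(string name, Action action, int iterations = 30)
        {
            action();   // warm-up: JIT compilation, caches, lazy initialization
            var samples = new double[iterations];
            for (int i = 0; i < iterations; i++)
            {
                var sw = Stopwatch.StartNew();
                action();
                sw.Stop();
                samples[i] = sw.Elapsed.TotalMilliseconds;
            }
            // Drop the lowest and highest samples to reduce run-to-run variance.
            var trimmed = samples.OrderBy(s => s).Skip(1).Take(iterations - 2).ToArray();
            Console.WriteLine(
                $"{name}: avg {trimmed.Average():F3} ms, min {trimmed.First():F3} ms, " +
                $"max {trimmed.Last():F3} ms, median {trimmed[trimmed.Length / 2]:F3} ms");
        }
    }

A call like MicroBench.Run("Parse", () => int.Parse("12345")) then gives you a stable, repeatable figure for comparisons.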
Again, this is just general advice. If you have more specifics then I can try to follow up with more relevant guidance.
Enjoy!
Your question is very interesting, but a bit vague, because without knowing what you need to test it is not easy to give you concrete pointers.
You can test performance from many different angles; depending on the use or target of the library, you should try one approach or another. I will try to enumerate some of the things you may have to consider when measuring:
Multithreading: if the library uses it, or your software will use the library in a multithreaded context, you may have to test it with many different processor and multiprocessor configurations to see how it reacts.
Startup time: its importance depends on how intensively you will use the library and on the nature of the product being built with it (client, server, …).
Response time: for this, do not take the first execution; execute the same call many times after the first one and take an average. System.Diagnostics.Stopwatch can be very useful for that.
Memory consumption: analyze the growth, and beware of exponential ones ;). Go a step further and measure the quantity of objects being created and disposed (a quick sketch follows this list).
Responsiveness: you should not only measure raw performance; how the user perceives the speed of the product is very important too.
Network: if the library uses resources on the network, you may have to test it with different bandwidth and latency configurations; there is software to simulate these situations.
Data: try to create many different test data packages, covering, for example: one big chunk of raw data, a large set made of many smaller chunks, a long iteration with small pieces of data, …
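Regarding the memory consumption point, a rough checkpointing sketch; GC.GetTotalMemory with a forced collection is approximate and itself perturbs the run, so treat the numbers as trends rather than absolutes:

    using System;

    static class MemoryCheckpoint
    {
        private static long _last = GC.GetTotalMemory(forceFullCollection: true);

        // Prints how much managed memory grew since the previous checkpoint.
        public static void Report(string label)
        {
            long now = GC.GetTotalMemory(forceFullCollection: true);
            Console.WriteLine($"{label}: {(now - _last) / 1024} KB growth");
            _last = now;
        }
    }

Calling Report after each phase of a test run makes exponential growth obvious long before it becomes an out-of-memory problem.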
Tools:
System.Diagnostics.Stopwatch: essential for benchmarking method calls
Performance counters: whenever available, they are very useful for knowing what's happening inside, letting you monitor the software without affecting its performance.
Profilers: there are some good memory and performance profilers on the market, but as you said, they always affect the measurements. They are good for finding bottlenecks in your software, but I don't think you can use them for a comparison test.
Why do you care about the performance? In both cases, the time taken to write the message to wherever you are storing your log will be a lot slower than anything else.
If you are really doing that much logging, then you are likely to need to index your log files so you can find the log entry you need, and at that point you are not doing standard logging.

Good way to do performance logging (C#)

I am trying to get some detailed performance information from an application my company is developing. Examples of information I am trying to get would be how long a network transaction takes, how much CPU/memory the application is using, how long it takes for a given method to complete, etc.
I have had some failed attempts at this in the past (like trying to measure small time periods by using DateTime.Now). On top of that, I don't know much of anything about getting CPU and memory statistics. Are there any good .NET classes, libraries, or frameworks out there that would help me collect this sort of information and/or log it to a text file?
What you are looking for is performance counters. For .NET, you need the PerformanceCounter class.
Performance counters are one way to go, and the System.Diagnostics.Stopwatch class is another good foundational place to look for doing this.
With performance counters (beyond those provided), you will need to manage both the infrastructure for tracking the events and the reporting of the data. The performance counter base classes supply the plumbing for hooking your data into the counter infrastructure, but you will need to provide other reporting infrastructure if you need to report the data in another way (such as to a log file or database).
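To illustrate the custom-counter half of that, a minimal sketch under invented names (the "MyApp" category and "Requests completed" counter are placeholders; creating a category requires administrative rights, so it normally happens once at install time):

    using System.Diagnostics;

    static class AppCounters
    {
        const string Category = "MyApp";   // hypothetical category name

        public static PerformanceCounter OpenRequestCounter()
        {
            if (!PerformanceCounterCategory.Exists(Category))
            {
                // Requires administrative rights; typically an install-time step.
                PerformanceCounterCategory.Create(
                    Category, "MyApp performance counters",
                    PerformanceCounterCategoryType.SingleInstance,
                    "Requests completed", "Total requests completed");
            }
            return new PerformanceCounter(Category, "Requests completed", readOnly: false);
        }
    }

Each completed operation then calls Increment() on the returned counter, and perfmon, or your own reader, picks it up like any built-in counter.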
The Stopwatch class is a wrapper around the high-performance timer, giving you microsecond or nanosecond resolution depending on the processor and platform. If you do not need that high a resolution, you can use System.DateTime.Now.Ticks to get the current tick count for the processor clock and do differential math with that, giving you millisecond or better precision for most operations.
When tracking CPU statistics, be aware that multiple processors and multiple cores will complicate accurate statistics in some cases.
One last caution with performance counters: be aware that not all performance counters exist on all machines. For instance, ASP.NET counters are not present on a machine that does not have IIS installed.
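And for the timing half of the question, a minimal sketch that wraps an operation with Stopwatch and appends the result to a text file (the file name and label are placeholders):

    using System;
    using System.Diagnostics;
    using System.IO;

    static class PerfLog
    {
        public static T Time<T>(string label, Func<T> operation)
        {
            var sw = Stopwatch.StartNew();
            T result = operation();
            sw.Stop();
            // One tab-separated line per measurement; "perf.log" is a placeholder path.
            File.AppendAllText("perf.log",
                $"{DateTime.Now:o}\t{label}\t{sw.Elapsed.TotalMilliseconds:F3} ms{Environment.NewLine}");
            return result;
        }
    }

Wrapping a call as PerfLog.Time("LoadCustomers", () => LoadCustomers()) (LoadCustomers being whatever method you want to measure) gives you per-method timings without scattering DateTime arithmetic through the code.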
For a modern open-source library for performance metrics and monitoring, consider using App.Metrics.
GitHub: https://github.com/AppMetrics/AppMetrics
Website: https://www.app-metrics.io
A platform-independent, open-source performance recorder built on top of Aspect Injector can be found here: https://gitlab.com/hectorjsmith/csharp-performance-recorder. It can be added to any C# project. Instructions on how to use it can be found in the README on GitLab.
Benefits
It's aspect-oriented, meaning you don't have to litter your code with DateTime.Now; you just annotate the appropriate methods.
It takes care of pretty-printing the results; you can do what you like with them, e.g. print them to a file.
Notes
This performance recorder is focused on the timing of methods only. It doesn't cover CPU or memory usage.
For CPU/memory, use performance counters. For more specific information about specific methods (or lines of code) and specific objects, use a profiler. Red Gate makes a great profiler.
http://www.red-gate.com/products/ants_performance_profiler

Determining available bandwidth

What is the best way to determine available bandwidth in .NET?
We have users that access business applications from various remote access points, wired and wireless and at times the bandwidth can be very low based on where the user is. When the applications appear to be running slow, the issue could be due to low bandwidth and not some other issue.
I would like to be able to run some kind of service that would warn users whenever the available bandwidth dips below a specific threshold.
Any thoughts?
Not beyond the obvious approach of downloading a file of a known size and timing how long it takes. The disadvantage of that is that you'd need to waste a lot of bandwidth to do it. Also, if you want to alert when throughput drops below a threshold, you'll have to run the test more or less continuously.
IMHO, I'd live with poor performance in some locations, given that you can't do anything about it if it does occur.
Sorry.
There's no easy way to measure bandwidth without actually using it - which of course will starve the applications. A couple of points to bear in mind though:
1) Is it actually bandwidth that's the problem, or latency? You can measure latency in a less intrusive manner than bandwidth.
2) Are the applications all run from the same server (or at least the same network)? You may find that users will have a good connection to some areas of the net but not others. (It's likely that the last mile will be the limiting factor, but it's not always the case.)
If you're transferring data, simply measure it. You could also download a reference object from somewhere if you want to make it independent of the speed of your server.
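As a sketch of that "download a reference object and time it" approach, with a Ping round-trip thrown in since latency is the cheaper measurement mentioned above (both the host and the URL are placeholders):

    using System;
    using System.Diagnostics;
    using System.Net;
    using System.Net.NetworkInformation;

    class BandwidthProbe
    {
        static void Main()
        {
            // Latency: cheap enough to run frequently.
            using (var ping = new Ping())
            {
                PingReply reply = ping.Send("example.com");   // placeholder host
                Console.WriteLine($"Round trip: {reply.RoundtripTime} ms");
            }

            // Throughput: downloads real data, so run it sparingly.
            var sw = Stopwatch.StartNew();
            byte[] data;
            using (var client = new WebClient())
            {
                data = client.DownloadData("http://example.com/reference.bin");   // placeholder URL
            }
            sw.Stop();
            double mbps = (double)data.Length * 8 / sw.Elapsed.TotalSeconds / 1e6;
            Console.WriteLine($"Approx. bandwidth: {mbps:F2} Mbit/s");
        }
    }

The reference file needs to be large enough to swamp TCP slow-start, which is exactly the bandwidth-wasting trade-off discussed above.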
Without knowing the exact nature of your connection, or how it's used, there are two options that I am aware of.
MultinetGetConnectionPerformance (http://msdn.microsoft.com/en-us/library/aa385342(VS.85).aspx)
System Event Notification Service (http://msdn.microsoft.com/en-us/library/aa377538(VS.85).aspx)
Neither are direct .NET classes, but can be implemented in .NET very easily.
Take a look at both of them and see if they will work for you.
Roy
