Largest heap used in a managed environment? (.NET/Java) - C#

What is the largest heap you have personally used in a managed environment such as Java or .NET? What were some of the performance issues you ran into, and did you run into diminishing returns as the heap grew larger?

I work on a 64-bit .NET system that typically uses 9-12 GB, and sometimes as much as 20 GB. I have not seen any performance problems even while garbage collecting, and I have been looking hard, as I was not expecting it to work so well.
An earlier version hung on to some objects for too long resulting in occasional GCs that freed up 3GB+. Even then, there was no noticeable impact on performance. The system is running on a 16-core server with 32GB RAM, which probably helps...

In .NET on 32-bit Windows, you can only really get to about 1.4 GB of memory usage before things start getting really screwy (out-of-memory exceptions). This is due to a limitation of 32-bit Windows that restricts a single process to 2 GB of address space. There is a /3GB switch you can put in your boot.ini, but that will only get you a little further. If you want to use lots of memory, you should seriously consider running on a 64-bit version of Windows.
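For what it's worth, here is a minimal sketch of how you might check at runtime whether you are actually running as a 64-bit process and how much memory it is using; the names and thresholds shown are illustrative, not from the answer above:

using System;
using System.Diagnostics;

class MemoryCheck
{
    static void Main()
    {
        // A 32-bit process is limited to 2 GB of address space (3 GB with /3GB),
        // so large heaps really need a 64-bit process on 64-bit Windows.
        Console.WriteLine($"64-bit process: {Environment.Is64BitProcess}");
        Console.WriteLine($"64-bit OS:      {Environment.Is64BitOperatingSystem}");

        using (Process p = Process.GetCurrentProcess())
        {
            Console.WriteLine($"Working set:      {p.WorkingSet64 / (1024 * 1024)} MB");
            Console.WriteLine($"Private bytes:    {p.PrivateMemorySize64 / (1024 * 1024)} MB");
            Console.WriteLine($"GC heap (approx): {GC.GetTotalMemory(false) / (1024 * 1024)} MB");
        }
    }
}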

I currently have a production application with 6 GB of memory. You'll need a 64-bit box as well for the JVM to be able to address that much.
The garbage collector is really the only thing (that I've found so far) where performance degrades with size, and then only if you manually kick off a System.gc(), which forces the JVM to bring everything to a screeching halt as it traverses 6 GB worth of objects. It takes a good 20 seconds, too. The default GC behavior does not do this, BTW; you have to be dumb enough to make it do that. It is also worth researching JVM tuning at this size.
You can also find things like distributed and clustered JVMs; sorry, I don't have any good references as I didn't look into this option too closely, although I did find references to larger installations.

I am unsure what you mean by heap, but if you mean memory used, I have used quite a bit: 2 GB+. I have a web app that does image processing, and it requires loading two large scan files into memory to do analysis.
There were performance issues. Windows would swap out lots of RAM, and that would then create a lot of page faults. There was never any need for more than 2 images at a time, as all requests were against those images (I only allowed 1 session per image set at a time).
For instance, setting up the files for initial viewing would take about 5 seconds. Doing simple analysis and zooming would be fairly fast once in memory, on the order of 0.1 to 0.5 seconds.
I still had to optimize, so I ended up pre-parsing the files, chopping them into smaller pieces, and working only with the pieces that were required by the user at the time.
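A rough sketch of that "only load the piece you need" approach, assuming a flat binary scan file; the file name, offset, and tile size are made up for illustration:

using System;
using System.IO;

class TileReader
{
    // Read only the requested region of a large file instead of loading it all.
    static byte[] ReadChunk(string path, long offset, int length)
    {
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            stream.Seek(offset, SeekOrigin.Begin);
            byte[] buffer = new byte[length];
            int read = 0;
            while (read < length)
            {
                int n = stream.Read(buffer, read, length - read);
                if (n == 0) break;          // reached end of file early
                read += n;
            }
            return buffer;
        }
    }

    static void Main()
    {
        // Hypothetical: pull a 4 MB tile starting 512 MB into the scan file.
        byte[] tile = ReadChunk("scan1.bin", 512L * 1024 * 1024, 4 * 1024 * 1024);
        Console.WriteLine($"Loaded {tile.Length} bytes");
    }
}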

I have used from 2 GB to 5 GB of memory in Java, but usually when I get to more than 2 GB I really start thinking about memory optimization. Diminishing returns can range from not optimizing when it's actually necessary (because you have a lot of memory) to not having memory available for the OS/disk caches (which can help your application overall).
For Java, I recommend watching your memory usage per generation over time. Do you create a lot of temporary objects or have long-lasting objects that consume a lot of memory? A lot of optimization of memory can be done when knowing those things.

Related

C# Memory Use Discrepancy between two Machines

I wrote a C# scraper and analyzed the markup of 30K URLs to pull certain metrics from them.
I ran the same code on two machines:
my dev box with a 4-core CPU, 8 logical processors and 32 GB of RAM. It used up to 300 MB of RAM through to the end. As I display the WorkingSet size, I could even see the GC kick in and lower memory use, then grow back again.
an EC2 instance, same data, but with only 2 processors and 1.7 GB of RAM. Here it used 1.1 GB of RAM and, when all threads concluded their work, it went down to 300 MB, just like in my local test.
RAM usage was checked with both Environment.WorkingSet and Task Manager. My network speed is not negligible, so I don't think it could affect things, even if that Amazon instance might be a little faster. (EC2 network performance differs per instance, and this one is on the affordable, hence slower, side.)
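For reference, a minimal sketch of that kind of measurement, comparing the OS-level working set with the managed heap size; the sampling loop and interval are just illustrative:

using System;
using System.Threading;

class MemoryProbe
{
    static void Main()
    {
        for (int i = 0; i < 5; i++)
        {
            long workingSetMb = Environment.WorkingSet / (1024 * 1024);
            long gcHeapMb = GC.GetTotalMemory(false) / (1024 * 1024);  // managed heap only

            // The working set includes code, native allocations, and memory the GC has
            // reserved but not yet returned to the OS, so it is usually the larger number.
            Console.WriteLine($"Working set: {workingSetMb} MB, GC heap: {gcHeapMb} MB");
            Thread.Sleep(2000);
        }
    }
}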
Why this memory use discrepancy? And can I somehow estimate beforehand the memory use in C#?
My guess is that, having a slower CPU in the cloud, the GC preferred to keep allocating more rather than cleaning up what was already used. But this is just my theory to excuse its unexpected behavior, based on wishful thinking. Still, on my 32 GB of RAM it could have used way more, but it behaved. On 1.7 GB of RAM it went all crazy, using 1.1 GB of it... I don't get it.
In C++ I just think of how many URLs I fetch at the same time, I think of 256 KB average size plus the size of the extracted data, and I can tell beforehand, quite precisely, how much memory will be used. But this C# test left me wondering.
As I plan to release this tool in the wild... I don't feel comfortable taking over half the RAM especially on a lesser machine.
UPDATE: Forgot to mention both machines are Windows 8. Actually one is 8.1 (local) and one Server 2012 (EC2 cloud) both with .NET 4.5.2.

Will the C# compiler for a big codebase run dramatically faster on a machine with huge RAM?

I have seen some real slow build times in a big legacy codebase without proper assembly decomposition, running on a 2G RAM machine. So, if I wanted to speed it up without code overhaul, would a 16G (or some other such huge number) RAM machine be radically faster, if the fairy IT department were to provide one? In other words, is RAM the major bottleneck for sufficiently large dotnet projects or are there other dominant issues?
Any input about similar situation for building Java is also appreciated, just out of pure curiosity.
Performance does not improve with additional RAM once you have more RAM than the application uses. You are likely not to see any more improvement by using 128GB of RAM.
We cannot guess the amount needed. Measure by looking at task manager.
It certainly won't do you any harm...
2 GB is pretty small for a dev machine; I use 16 GB as a matter of course.
However, build times are going to be gated by file access sooner or later, so whilst you might get a little improvement I suspect you won't be blown away by it. ([EDIT] as a commenter says, compilation is likely to be CPU bound too).
Have you looked into parallel builds (e.g. see this SO question: Visual Studio 2010, how to build projects in parallel on multicore)?
Or, can you restructure your code base and maybe move some less frequently updated assemblies into a separate sln, and then reference these as DLLs (this isn't a great idea in all cases, but sometimes it can be expedient)? From your description of the problem I'm guessing this is easier said than done, but this is how we've achieved good results in our code base.
The whole RAM issue is actually one of ROI (return on investment). The more RAM you add to a system, the less likely the application is to have to search for a memory location large enough to store an object of a particular size, and the faster it'll go; however, after a certain point it's so unlikely that the system will pick a location that is not large enough that it's pointless to go any higher. (Note that the read/write speed of the RAM plays a role in this as well.)
In summary: at 2 GB of RAM you definitely should upgrade to something more like 8 GB or the suggested 16 GB; going much beyond that would be almost pointless because the bottleneck will then come from the processor.
Also, it's probably a good idea to note the speed of the RAM too, because the RAM itself can become a bottleneck since it can only handle XXXX MHz clock speed at most. Generally, though, 1600 MHz is fine.

.NET out of memory troubleshooting

After reading a few enlightening articles about memory in .NET, I understand that "out of memory" does not refer to physical memory (see SO question 597499).
I thought I understood why a C# app would throw an out-of-memory exception -- until I started experimenting with two servers: both have 2.5 GB of RAM, run Windows Server 2003, and run identical programs.
The only significant difference between the two is that one has 7% of hard drive space left and the other more than 50%.
The server with 7% of storage space left consistently throws an out-of-memory exception, while the other performs consistently well.
My app is a C# web application that processes hundreds of MB of String objects.
Why would this difference happen, seeing that the most likely reason for the out-of-memory issue is running out of contiguous virtual address space?
All I can think of is that you're exhausting the virtual memory. Sounds like you need to run a memory profiler on the app.
I've used the Red Gate profiler in similar situations in the past. You may be surprised how much memory your strings are actually using.
Is the paging file fragmentation different on each machine? High fragmentation could slow down paging operations and thus exacerbate memory issues. If the paging file is massively fragmented, sort it out e.g. bring the server off-line, set the paging file size to zero, defrag the drive, re-create the paging file.
It's hard to give any specific advice on how to deal with perf problems with your string handling without more detail of what you are doing.
Why would this difference happen, seeing that the most likely reason for the out-of-memory issue is running out of contiguous virtual address space?
With 7% free hard disk space, your server is probably running out of space to page out memory from either your process or other processes, hence it has to keep everything in RAM, and therefore you are unable to allocate additional memory more often than on the server with 50% free space.
What solutions do you guys propose?
Since you've already run a profiler and seen at least 600 MB+ of usage from all that string data, you need to start tackling this problem.
The obvious answer would be to not hold all that data in memory. If you are processing a large data set then load a bit, process it and then throw that bit away and load the next bit instead of loading it all up front.
If it's data you need to serve, look at a caching strategy like LRU (least recently used) and keep only the hottest data in memory but leave the rest on disk.
You could even offload the strings into a database (in-memory or disk-based) and let that handle the cache management for you.
A slightly left-of-field solution I've had to use in the past was simply compressing the string data in memory as it arrived and decompressing it again when needed, using SharpZipLib. Surprisingly, it wasn't that slow.
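The same idea can be sketched with the framework's built-in GZipStream instead of SharpZipLib; this is a generic illustration, not the code from that project:

using System;
using System.IO;
using System.IO.Compression;
using System.Text;

static class StringCompressor
{
    // Compress a string to a byte[] so it takes less space while parked in memory.
    public static byte[] Compress(string text)
    {
        byte[] raw = Encoding.UTF8.GetBytes(text);
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(raw, 0, raw.Length);
            }
            return output.ToArray();
        }
    }

    // Decompress back to the original string only when it is actually needed.
    public static string Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            gzip.CopyTo(output);
            return Encoding.UTF8.GetString(output.ToArray());
        }
    }
}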
I would agree that your best bet is to use a memory profiler. I've used .Net Memory Profiler 3.5 and was able to diagnose the issue, which in my case were undisposed Regex statements. They have demo tutorials which will walk you through the process if you're not familiar.
As to your question, any single reference to the strings (the jagged array, for instance) would still prevent the strings from being collected. Without knowing more about your architecture, it would be tough to make a specific recommendation. I would suggest trying to optimize your app before adding memory, though. It will come back to bite you later.
An OutOfMemoryException is more likely to indicate fragmentation in your page file - not that you are out of RAM or disk space.
It is generally (wrongly) assumed that the page file is used as a swap disk - that RAM overflow is written to the page file. All allocated memory is stored in the page file and only data that is under heavy usage is copied to RAM.
There's no simple code fix to this problem other than trying to reduce the memory footprint of your application. But if you really get desperate you can always try PageDefrag, which is a free application originally developed by SysInternals.
There are a few tricks to increase memory (I don't know if it works with a web app, but it looks like it does):
"Out of memory? Easy ways to increase the memory available to your program"
http://blogs.msdn.com/b/calvin_hsia/archive/2010/09/27/10068359.aspx

Hitting a memory limit slows down the .Net application

We have a 64-bit C#/.NET 3.0 application that runs on a 64-bit Windows server. From time to time the app can use a large amount of memory, which is available. In some instances the application stops allocating additional memory and slows down significantly (500+ times slower). When I check the memory from Task Manager, the amount of memory used barely changes. The application keeps on running very slowly and never gives an out-of-memory exception.
Any ideas? Let me know if more data is needed.
You might try enabling server mode for the Garbage Collector. By default, all .NET apps run in Workstation Mode, where the GC tries to do its sweeps while keeping the application running. If you turn on server mode, it temporarily stops the application so that it can free up memory (much) faster, and it also uses different heaps for each processor/core.
Most server apps will see a performance improvement using the GC server mode, especially if they allocate a lot of memory. The downside is that your app will basically stall when it starts to run out of memory (until the GC is finished).
* To enable this mode, insert the following into your app.config or web.config:
<configuration>
<runtime>
<gcServer enabled="true"/>
</runtime>
</configuration>
The moment you are hitting the physical memory limit, the OS will start paging (that is, write memory to disk). This will indeed cause the kind of slowdown you are seeing.
Solutions?
Add more memory - this will only help until you hit the new memory limit
Rewrite your app to use less memory
Figure out if you have a memory leak and fix it
If memory is not the issue, perhaps your application is hitting CPU very hard? Do you see the CPU hitting close to 100%? If so, check for large collections that are being iterated over and over.
As with 32-bit Windows operating systems, there is a 2GB limit on the size of an object you can create while running a 64-bit managed application on a 64-bit Windows operating system.
Investigating Memory Issues (MSDN article)
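As a side note not covered in that answer: on .NET 4.5 and later, the per-object 2 GB limit can be lifted for 64-bit processes with the gcAllowVeryLargeObjects setting, e.g.:
<configuration>
<runtime>
<gcAllowVeryLargeObjects enabled="true"/>
</runtime>
</configuration>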
There is an awful lot of good stuff mentioned in the other answers. However, I'm going to chip in my two pence (or cents - depending on where you're from!) anyway.
Assuming that this is indeed a 64-bit process as you have stated, here's a few avenues of investigation...
Which memory usage are you checking? Mem Usage or VMem Size? VMem size is the one that actually matters, since that applies to both paged and non-paged memory. If the two numbers are far out of whack, then the memory usage is indeed the cause of the slow-down.
What's the actual memory usage across the whole server when things start to slow down? Does the slow down also apply to other apps? If so, then you may have a kernel memory issue - which can be due to huge amounts of disk accessing and low-level resource usage (for example, create 20000 mutexes, or load a few thousand bitmaps via code that uses Win32 HBitmaps). You can get some indication of this on the Task Manager (although Windows 2003's version is more informative directly on this than 2008's).
When you say that the app gets significantly slower, how do you know? Are you using vast dictionaries or lists? Could it not just be that the internal data structures are getting so big so as to complicate the work any internal algorithms are performing? When you get to huge numbers some algorithms can start to become slower by orders of magnitude.
What's the CPU load of the application when it's running at full pelt? Is it actually the same as when the slow-down occurs? If the CPU usage decreases as the memory usage goes up, then whatever it's doing is taking the OS longer to fulfil, meaning that it's probably putting too much load on the OS. If there's no difference in CPU load, then my guess is it's internal data structures getting so big as to slow down your algos.
I would certainly be looking at running a Perfmon on the application - starting off with some .Net and native memory counters, Cache hits and misses, and Disk Queue length. Run it over the course of the application from startup to when it starts to run like an asthmatic tortoise, and you might just get a clue from that as well.
Having skimmed through the other answers, I'd say there's a lot of good ideas. Here's one I didn't see:
Get a memory profiler, such as SciTech's MemProfiler. It will tell you what's being allocated, by what, and it will show you the whole slice n dice.
It also has video tutorials in case you don't know how to use it. In my case, I discovered I had IDisposable instances that I wasn't Using(...)
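For anyone unfamiliar with the pattern being referred to, here is a minimal sketch with a hypothetical disposable resource (the file name is made up):

using System;
using System.IO;

class DisposeExample
{
    static void Main()
    {
        // Wrapping an IDisposable in a using block guarantees Dispose() runs,
        // releasing the resource promptly instead of waiting for a finalizer.
        using (var reader = new StreamReader("input.txt"))   // hypothetical file
        {
            Console.WriteLine(reader.ReadLine());
        }   // reader.Dispose() is called here, even if an exception was thrown
    }
}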

Reasons for seeing high "% Time in GC" in Perf Mon

While monitoring our application in Perf Mon I noticed that the % of Time In GC is anywhere from 20 - 60% while our application is performing a long running process (varies between 30 seconds to 1.5 minutes). This seems a bit excessive to me. This raises two important questions.
Am I correct that this is excessive?
How can I figure out the root cause of the GC spikes?
Yes, this does sound excessive. Reducing the amount of GC would probably be the single best step you could take to reduce the runtime of your application (if that is your goal).
A high "% time in GC" is typically caused by allocating and then throwing away thousands or millions of objects. A good way to find out what's going on is to use a memory profiler tool.
Microsoft provides the free CLR Profiler. This will show you every allocation, but will make your app run 10-60 times slower. You may need to run it on less input data so that it can finish analyzing in a reasonable amount of time.
A great commercial tool is SciTech's .NET Memory Profiler. This imposes much less runtime overhead, and there is a free trial available. By taking multiple snapshots while your process is running, you can find out what type of objects are being frequently allocated (and then destroyed).
Once you've identified the source of the allocations, you then need to examine the code and figure out how those allocations can be reduced. While there are no one-size-fits-all answers, some things I've encountered in the past include:
String.Split can create hundreds of small short-lived strings. If you're doing a lot of string manipulation, it can help to process the string by walking it character-by-character (see the sketch after this list).
Creating arrays or lists of thousands of small classes (say, under 24 bytes in size) can be expensive; if those classes can be treated as value types, it can (sometimes) greatly improve things to change them to structs.
Creating thousands of small arrays can increase memory usage a lot (because each array has a small amount of overhead); sometimes these can be replaced with one large array and indexes into a sub-section of it.
Having a lot of finalizable objects (particularly if they're not being disposed) can put a lot of pressure on the garbage collector; ensure that you're correctly disposing all IDisposable objects, and note that your own types should (almost) never have finalizers.
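A minimal sketch of the character-walking alternative to String.Split mentioned in the list above; the delimiter and input are illustrative:

using System;

class SplitFree
{
    // Find comma-separated fields without allocating the array of substrings
    // that String.Split would create.
    static void Main()
    {
        string line = "alpha,beta,gamma,delta";
        int fieldCount = 0;
        int start = 0;

        for (int i = 0; i <= line.Length; i++)
        {
            if (i == line.Length || line[i] == ',')
            {
                int length = i - start;          // the field is line[start .. i-1]
                Console.WriteLine($"Field {fieldCount}: length {length}");
                fieldCount++;
                start = i + 1;
            }
        }
    }
}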
Microsoft has an article with Garbage Collection Guidelines for improving performance.
Am I correct that this is excessive?
Yes, you are correct.
How can I figure out the root cause of the GC spikes?
1.- Do take a look at PerfView
PerfView is a performance-analysis tool that helps isolate CPU- and memory-related performance issues.
See Also: Improving Managed Code Performance
2.- See if GC.Collect or GC.WaitForPendingFinalizers is being called anywhere in your code or third party library. The latter can cause high CPU utilization.
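The same "% Time in GC" counter that Perf Mon shows can also be read programmatically; a minimal sketch, assuming the standard ".NET CLR Memory" category and that the counter instance matches your process name (adjust as needed):

using System;
using System.Diagnostics;
using System.Threading;

class GcTimeMonitor
{
    static void Main()
    {
        string instance = Process.GetCurrentProcess().ProcessName;

        // Same counter Perf Mon displays as "% Time in GC".
        using (var gcTime = new PerformanceCounter(".NET CLR Memory", "% Time in GC", instance))
        {
            for (int i = 0; i < 10; i++)
            {
                // The first sample may read 0 until a later sample is taken.
                Console.WriteLine($"% Time in GC: {gcTime.NextValue():F1}");
                Thread.Sleep(1000);   // sample once per second
            }
        }
    }
}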
Another reason could be lots of gen-1 or gen-2 collections, each of which takes MUCH more time and is caused by hanging on to objects for a longer time.
I've seen this happen in web apps when buggy objects hang onto actual page objects - forcing the page to live as long as the other objects referring to them.
Breaking the link between objects and pages (in this case) caused GC to drop to very low values. Our site now has 100+ hits/second and GC time is typically 1% or less.
