I've written a little application for consuming memory, either on disk or in RAM. The reasoning behind this is to test how certain parts of an application behave with small amounts of memory, and to test various installers etc. with low disk space. It's quite useful; however, I've currently had to limit the allocations to 2GB.
Unfortunately, sometimes I need to consume more than just 2GB and have to open the application several times, which is rather annoying. So, simply: is there a way I can get around this 2GB limitation on 32-bit OSes?
As far as eating disk space, you can go over the 2GB barrier by creating multiple files, instead of just one. For memory, you might consider having the process automatically spawn enough child processes to consume the amount of RAM you're trying to eat.
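For the disk side, here's a minimal sketch of the multiple-file approach (the directory argument and file names are made up; FileStream.SetLength claims the space without writing every byte):

    using System;
    using System.IO;

    class DiskEater
    {
        static void Main(string[] args)
        {
            // Hypothetical target directory; pass a real path as the first argument.
            string dir = args.Length > 0 ? args[0] : ".";
            const long chunkSize = 1900L * 1024 * 1024; // stay below the 2GB mark per file

            try
            {
                for (int i = 0; ; i++)
                {
                    // CreateNew fails if the file already exists, so reruns don't overwrite.
                    using (var fs = new FileStream(Path.Combine(dir, "chunk" + i + ".bin"), FileMode.CreateNew))
                        fs.SetLength(chunkSize); // claims the space without writing every byte
                }
            }
            catch (IOException ex)
            {
                // The disk is full (or nearly so); stop eating.
                Console.WriteLine("Stopped: " + ex.Message);
            }
        }
    }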
For files on disk, you shouldn't be limited to 2GB; what format are the disks using?
FAT32 = 4GB maximum file size
NTFS = 16TB maximum file size
The .NET memory model is subject to the same limits as all applications: nominally 2GB of address space per application on a 32-bit operating system. There are some generic options which may allow you to increase this:
The /3GB switch in boot.ini increases the per-process limit from 2GB to 3GB (see the example below).
PAE (Physical Address Extensions), which allows access to memory above 4GB through a page-switching scheme. You would normally need to write support for this yourself (or hope Microsoft has done it for you in the framework/CLR).
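For reference, here's roughly what the first option looks like in boot.ini on a 32-bit system; treat this as a sketch, since the ARC path varies per machine:

    [boot loader]
    timeout=30
    default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB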
See this detailed description of your options (for Windows in general).
I wrote a C# scraper and analyzed the markup of 30K URLs to pull certain metrics from them.
I ran the same code on two machines:
my dev box with a 4-core CPU, 8 logical processors, and 32GB of RAM. It used up to 300MB of RAM through to the end. As I display the WorkingSet size, I could even see the GC kick in and lower memory use, then grow it back again.
an EC2 instance with the same data, but with only 2 processors and 1.7GB of RAM. Here it used 1.1GB of RAM and, when all threads had concluded their work, it went down to 300MB just like in my local test.
RAM usage was checked with both Environment.WorkingSet and Task Manager. My network speed is not negligible, so I don't think it could affect things even if that Amazon instance might be a little faster. (EC2 network performance differs per instance, and this one is on the affordable, hence slower, side.)
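(For reference, sampling the working set in code is a one-liner; Environment.WorkingSet reports bytes:)

    Console.WriteLine("Working set: {0:N0} MB", Environment.WorkingSet / (1024 * 1024));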
Why this memory use discrepancy? And can I somehow estimate beforehand the memory use in C#?
My guess is that with a slower CPU in the cloud, the GC preferred to keep allocating rather than cleaning up what was already used. But this is just my theory to excuse its unexpected behavior, based on wishful thinking. Still, with my 32GB of RAM it could have used way more, but it behaved. With 1.7GB of RAM it went all crazy, using 1.1GB of it... I don't get it.
In C++ I just think of how many URLs I fetch at the same time, allow for a 256KB average page size plus the size of the extracted data, and I can tell, beforehand, quite precisely how much memory will be used. But this C# test left me wondering.
As I plan to release this tool into the wild... I don't feel comfortable having it take over half the RAM, especially on a lesser machine.
UPDATE: I forgot to mention that both machines run Windows 8. Actually, one is 8.1 (local) and one is Server 2012 (EC2 cloud), both with .NET 4.5.2.
I'm a little confused with regard to the memory limitations of an application. As far as I can see, if I write a C# application targeting x64, my program will have access to 8TB of virtual address space; does that equal space on the HD?
An OS >= Windows 7 Professional supports 192GB of RAM. So if I had a 192GB system (unfortunately I don't), I could load just over 8.1TB of data into memory (assuming no other processes were running)?
Is virtual memory only used when I have run out of available RAM? I'm assuming there is a performance implication associated with virtual memory vs. using RAM?
Apologies if these appear to be stupid questions, but when it comes to memory management, I'm rather green.
Your question is actually several related questions; taking each individually:
An OS >= Windows 7 Professional supports 192GB of RAM. So if I had a 192GB system (unfortunately I don't), I could load just over 8.1TB of data into memory (assuming no other processes were running)?
No, it would still be 8 TB. That is the maximum amount of addressable space, whether it is in RAM or elsewhere.
However, you could never have the full 8TB in use, even if you somehow unloaded Windows itself, as the OS needs to keep track of the space being used. In total, you could probably get to approximately 7TB.
Is virtual memory only used when I have run out of available RAM?
No. If you have virtual memory turned on, the entirety of RAM is typically preloaded onto your HDD (give or take a few seconds). This allows the OS to unload something to make room if it feels the need, without having to persist the data first. Note that the OS keeps thorough track, so it will know whether or not this is the case.
I'm assuming there is a performance implication associated with virtual memory vs. using RAM?
That depends on your context. Every seek on the hard drive takes a computational eternity, though it is still a fraction of a second. Assuming your process isn't thrashing and repeatedly accessing virtual memory, you should not notice a significant performance hit outside of high-performance computing.
Apologies if these appear to be stupid questions, but when it comes to memory management, I'm rather green.
Your main problem is you have some preconceived notions about how memory works that don't line up with reality. If you are really interested you should look into how memory is used in a modern system.
For instance, most people conceptualize a pointer as pointing to a location in memory, since that is its fundamental role. This isn't quite true. In fact, the pointer contains a piece of information that can be decoded into a location in the addressable space of the system, which isn't always in RAM. This decoding process uses quite a few interesting tricks, but they are beyond the scope of this question.
Normally, you should write applications targeting Any CPU. The .NET loader then decides (depending on the platform it is running on) which version of the runtime environment will execute the application and into what kind of native code it will be compiled. There is no need to specify the platform unless you are using custom native components which will be loaded into the process created for your application. That process is then associated with some virtual address space; how that maps to physical memory is managed by the OS...
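As a sketch, in the project file this is just the platform property (shown here out of context):

    <PropertyGroup>
      <PlatformTarget>AnyCPU</PlatformTarget>
    </PropertyGroup>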
Is it possible to put an artificial limit on the amount of memory a .NET process can use? I'm looking to simulate low-memory situations for testing purposes.
Right now we use a virtual machine for this type of thing. It works, but I'm curious to know if we can figure out a more convenient approach. One that could easily be automated would be ideal.
Edit: As Hans Passant points out, just limiting the amount of virtual memory available to the process won't replace the VM-based tests. The tests are for performance, so I would need to get swapping instead of OutOfMemoryException.
One that could easily be automated would be ideal.
You can use Windows Job Objects to manage this from code. Processes can be associated with a job, and JOBOBJECT_BASIC_LIMIT_INFORMATION allows you to restrict the working set size.
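A minimal sketch of that approach in C# (the 64MB cap is an arbitrary example; note that assigning the current process to a job can fail on older Windows versions if the process is already in a job):

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;

    class JobMemoryLimit
    {
        [StructLayout(LayoutKind.Sequential)]
        struct JOBOBJECT_BASIC_LIMIT_INFORMATION
        {
            public long PerProcessUserTimeLimit;
            public long PerJobUserTimeLimit;
            public uint LimitFlags;
            public UIntPtr MinimumWorkingSetSize;
            public UIntPtr MaximumWorkingSetSize;
            public uint ActiveProcessLimit;
            public UIntPtr Affinity;
            public uint PriorityClass;
            public uint SchedulingClass;
        }

        const uint JOB_OBJECT_LIMIT_WORKINGSET = 0x00000001;
        const int JobObjectBasicLimitInformation = 2;

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string lpName);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool SetInformationJobObject(IntPtr hJob, int infoClass,
            ref JOBOBJECT_BASIC_LIMIT_INFORMATION info, uint cbSize);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool AssignProcessToJobObject(IntPtr hJob, IntPtr hProcess);

        static void Main()
        {
            IntPtr job = CreateJobObject(IntPtr.Zero, null);

            var limits = new JOBOBJECT_BASIC_LIMIT_INFORMATION
            {
                LimitFlags = JOB_OBJECT_LIMIT_WORKINGSET,
                MinimumWorkingSetSize = (UIntPtr)(1024 * 1024),      // 1 MB floor
                MaximumWorkingSetSize = (UIntPtr)(64 * 1024 * 1024)  // 64 MB cap (arbitrary)
            };

            SetInformationJobObject(job, JobObjectBasicLimitInformation,
                ref limits, (uint)Marshal.SizeOf(limits));

            // Put the current process under the job; excess working-set pages
            // get trimmed to the page file rather than staying in RAM.
            AssignProcessToJobObject(job, Process.GetCurrentProcess().Handle);
        }
    }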
Alternatively, you can call SetProcessWorkingSetSize on the process directly, which will restrict the maximum allowable memory usage for that process.
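In .NET you don't even need P/Invoke for this one: the Process class wraps it (a sketch, with arbitrary sizes):

    using System;
    using System.Diagnostics;

    class WorkingSetCap
    {
        static void Main()
        {
            // Process.MinWorkingSet/MaxWorkingSet wrap SetProcessWorkingSetSize.
            Process self = Process.GetCurrentProcess();
            self.MinWorkingSet = (IntPtr)(1024 * 1024);       // 1 MB floor
            self.MaxWorkingSet = (IntPtr)(64 * 1024 * 1024);  // 64 MB cap (arbitrary)
        }
    }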
The windows Application Verifier tool can simulate low resource levels, like low memory: http://msdn.microsoft.com/en-us/library/aa480483.aspx
A more programmatic solution using job objects (as @ReedCopsey also suggests) is here: http://www.codeproject.com/Articles/17831/How-to-Set-the-Maximum-Memory-a-Process-can-Use-at
Right now we use a virtual machine for this type of thing
Which probably means you limited the amount of RAM. That's not much of a test: a Windows process can never run out of RAM. You'll just slow the process down as the operating system swaps pages more frequently.
The real limitation is available virtual memory address space. Which is a fixed number for a 32-bit process, 2 gigabytes. Give or take a few non-standard options to increase that number. An OutOfMemoryException tells you that the process doesn't have a hole left in VM big enough to fit the allocation.
You can limit the amount of VM space by simply allocating a bunch of 50 megabyte byte[] arrays and storing them in a static variable of type List<byte[]> at the beginning of Main().
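A minimal sketch of that trick; 20 chunks (roughly 1GB) is an arbitrary example:

    using System;
    using System.Collections.Generic;

    class Program
    {
        // Keep a static reference so the GC can't collect the ballast.
        static readonly List<byte[]> ballast = new List<byte[]>();

        static void Main()
        {
            const int chunk = 50 * 1024 * 1024; // 50 MB per array
            for (int i = 0; i < 20; i++)        // ~1 GB of address space consumed
                ballast.Add(new byte[chunk]);

            // ... run the rest of the application under the reduced ceiling ...
        }
    }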
Is there any way I can have my application tell how much memory the user has, and whether the application is getting close to taking up a high percentage of that?
Also, how do you know how much memory the machine gives to the OS, video cards, etc.?
For example, if you have 4GB of memory, how much actual memory is given to applications? Can you configure this?
Is there any way I can have my application tell how much memory the user has, and whether the application is getting close to taking up a high percentage of that?
Yes, it's possible (see some of the other answers), but it's going to be very unlikely that your application really needs to care. What is it that you're doing where you think you need to be this sensitive to memory pressure?
Also, how do you know how much memory the machine gives to the OS, video cards, etc.?
Again, this should be possible using WMI calls, but the bigger question is why do you need to do this?
For example, if you have 4GB of memory, how much actual memory is given to applications? Can you configure this?
No, this isn't a configurable value. When a .NET application starts up the operating system allocates a block of memory for it to use. This is handled by the OS and there is no way to configure the algorithms used to determine the amount of memory to allocate. Likewise, there is no way to configure how much of that memory the .NET runtime uses for the managed heap, stack, large object heap, etc.
I think I read the question a little differently, so hopefully this response isn't too off topic!
You can get a good overview of how much memory your application is consuming by using Windows Task Manager, or even better, Sysinternals Process Monitor. This is a quick way to review your processes at their peaks to see how they are behaving.
Out of the box, an x86 process will only be able to address 2GB of RAM. This means any single process on your machine can only consume up to 2GB. In reality, you're likely to be able to consume only 1.5-1.8GB before getting out-of-memory exceptions.
How much RAM your copy of Windows can actually address will depend on the Windows version and CPU architecture.
Using your example of 4GB of RAM, the OS is going to give each application up to 2GB of address space to play in (the physical RAM behind it is shared by all processes), and it will reserve 2GB for itself.
Depending on the operating system you're running, you can tweak this: the /3GB switch in boot.ini will adjust that ratio to 3GB for applications and 1GB for the OS. This has some impact on the OS, so I'd review that impact first and see if you can live with the tradeoff (YMMV).
For a single application to be able to address more than 2GB under /3GB, you're going to need to set a particular bit (the LARGEADDRESSAWARE flag) in the PE image header. This question/answer has good info on this subject already.
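If you're building with Microsoft's toolchain, the usual way to set that flag is editbin from a Visual Studio command prompt (the executable name here is just a placeholder):

    editbin /LARGEADDRESSAWARE YourApp.exe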
The game changes under x64 architecture. :)
Some good reference information:
Memory Limits for Windows Releases
Virtual Address Space
I think you can use WMI to get all that information
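A minimal sketch of the WMI route using System.Management (you'll need a reference to that assembly); Win32_OperatingSystem reports these values in kilobytes:

    using System;
    using System.Management;

    class WmiMemory
    {
        static void Main()
        {
            var searcher = new ManagementObjectSearcher(
                "SELECT TotalVisibleMemorySize, FreePhysicalMemory FROM Win32_OperatingSystem");

            foreach (ManagementObject os in searcher.Get())
            {
                // Both values are reported in kilobytes.
                Console.WriteLine("Total physical: {0} KB", os["TotalVisibleMemorySize"]);
                Console.WriteLine("Free physical:  {0} KB", os["FreePhysicalMemory"]);
            }
        }
    }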
If you don't wish to use WMI, you could use GlobalMemoryStatusEx():
Function Call:
http://www.pinvoke.net/default.aspx/kernel32/GlobalMemoryStatusEx.html
Return Data:
http://www.pinvoke.net/default.aspx/Structures/MEMORYSTATUSEX.html
MemoryLoad will give you a number between 0 and 100 representing the approximate percentage of physical memory in use, and TotalPhys will tell you the total amount of physical memory in bytes.
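Putting those two pinvoke.net pages together, the call looks roughly like this (a sketch following the MEMORYSTATUSEX layout):

    using System;
    using System.Runtime.InteropServices;

    class MemoryStatus
    {
        [StructLayout(LayoutKind.Sequential)]
        class MEMORYSTATUSEX
        {
            public uint dwLength = (uint)Marshal.SizeOf(typeof(MEMORYSTATUSEX));
            public uint dwMemoryLoad;
            public ulong ullTotalPhys;
            public ulong ullAvailPhys;
            public ulong ullTotalPageFile;
            public ulong ullAvailPageFile;
            public ulong ullTotalVirtual;
            public ulong ullAvailVirtual;
            public ulong ullAvailExtendedVirtual;
        }

        [DllImport("kernel32.dll", SetLastError = true)]
        [return: MarshalAs(UnmanagedType.Bool)]
        static extern bool GlobalMemoryStatusEx([In, Out] MEMORYSTATUSEX lpBuffer);

        static void Main()
        {
            var status = new MEMORYSTATUSEX(); // dwLength is set in the field initializer
            if (GlobalMemoryStatusEx(status))
            {
                Console.WriteLine("Memory load:    {0}%", status.dwMemoryLoad);
                Console.WriteLine("Total physical: {0} bytes", status.ullTotalPhys);
            }
        }
    }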
Memory is tricky because usable memory is a blend of physical (RAM) and virtual (page file) types. The specific blend, and what goes where, is determined by the operating system. Luckily, this is somewhat configurable, as Windows allows you to stipulate how much virtual memory to use, if any.
Take note that not all of the memory in 32-bit Windows (XP & Vista) is available for use. Windows may report up to 4GB installed but only 3.1-3.2GB is available for actual use by the operating system and applications. This has to do with legacy addressing issues IIRC.
Good Luck
What is the largest heap you have personally used in a managed environment such as Java or .NET? What were some of the performance issues you ran into, and did you see diminishing returns the larger the heap was?
I work on a 64-bit .Net system that typically uses 9-12 GB, and sometimes as much as 20GB. I have not seen any performance problems even while garbage collecting, and I have been looking hard as I was not expecting it to work so well.
An earlier version hung on to some objects for too long resulting in occasional GCs that freed up 3GB+. Even then, there was no noticeable impact on performance. The system is running on a 16-core server with 32GB RAM, which probably helps...
In .NET on 32-bit Windows, you can only really get to about 1.4GB of memory usage before things start getting really screwy (out-of-memory exceptions). This is due to a limitation of 32-bit Windows that prevents a single process from using more than 2GB of RAM. There is a /3GB switch you can put in your boot.ini, but that will only get you a little bit further. If you want to use lots of memory, you should seriously consider running on a 64-bit version of Windows.
I currently have a production application with 6 GB of memory. You'll need a 64-bit box as well for the JVM to be able to address that much.
The garbage collector is really the only thing (that I've found so far) where performance degrades with size, and then only if you manually kick off System.gc(), which forces the JVM to bring everything to a screeching halt as it traverses 6GB worth of objects. It takes a good 20 seconds, too. The default GC behavior does not do this, BTW; you have to be dumb enough to make it do that. JVM tuning at this size is also worth researching.
You can also find things like distributed and clustered JVMs; sorry, I don't have any good references, as I didn't look into this option too closely, although I did find references to larger installations.
I am unsure what you mean by heap, but if you mean memory used, I have used quite a bit, 2GB+. I have a web app that does image processing and it requires loading 2 large scan files into memory to do analysis.
There were performance issues. Windows would swap out lots of RAM, and that would create a lot of page faults. There was never any need for more than 2 images at a time, as all requests were against those images (I only allowed 1 session per image set at a time).
For instance, setting up the files for initial viewing would take about 5 seconds. Doing simple analysis and zooming was fairly fast once everything was in memory, on the order of 0.1 to 0.5 seconds.
I still had to optimize, so I ended up pre-parsing the files, chopping them into smaller pieces, and working only with the pieces that were required by the user at the time.
I have used from 2GB to 5GB of memory in Java, but usually when I get to more than 2GB I really start thinking about memory optimization. Diminishing returns can range from not optimizing when it's necessary (because you have a lot of memory) to not having memory available for the OS/disk caches (which can help your application overall).
For Java, I recommend watching your memory usage per generation over time. Do you create a lot of temporary objects, or do you have long-lasting objects that consume a lot of memory? A lot of memory optimization can be done once you know those things.