I am testing AppFabric cache performance. To do this, I am hitting the cache host on the LAN from my local machine. The cache host runs Windows Server 2008 and, apart from the bare essentials, has nothing installed on it. It has 8GB of RAM and is a VMware virtual server.
As soon as I hit the cache host, I can see the memory in use increase. But something very fishy is going on somewhere: the total primary data bytes in use is 1.5GB, the object size is 1,744 bytes (measured with ANTS profiler), and the total object count is 2,521,451. I have disabled eviction. And this is the interesting part: as soon as the server hits the throttled state, I can see that 7.72GB of the server's RAM is in use, yet apart from the distributed cache using 1.8GB there is no other application using such a large amount of RAM.
I am using Visual Studio 2010, and I am inserting and reading the objects in parallel.
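For context, the load pattern looks roughly like this. This is a sketch only: the cache name, key format, and payload are placeholders, using the AppFabric client API and Parallel.For from .NET 4:

using System;
using System.Threading.Tasks;
using Microsoft.ApplicationServer.Caching;

class CacheLoadTest
{
    static void Main()
    {
        // "default" is a placeholder cache name.
        DataCache cache = new DataCacheFactory().GetCache("default");

        // Insert and read back ~2.5 million small objects in parallel.
        Parallel.For(0, 2521451, i =>
        {
            string key = "item:" + i;
            cache.Put(key, new byte[1744]);   // ~1,744-byte payload, as measured
            object roundTrip = cache.Get(key);
        });
    }
}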
The question I want to ask is:
Where is my memory going? The server in the throttled state says I am using 7.72GB of the allotted memory, whereas in Task Manager I can see that barely 3GB are in use (if I add up all the running processes' memory).
Gagan, if you're still having this issue, can you download Sysinternals Process Monitor? Run it and add columns for memory private bytes, working set, and virtual size. Peak private bytes and peak working set wouldn't hurt either, and there are other memory columns you can add for fun :)
Task Manager doesn't give you the virtual size (Windows 7 lets you add a Commit Size column; I don't know if Server 2008 also has that). This should give you a clearer picture of where the memory is going.
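If you'd rather check from code than from a tool, System.Diagnostics exposes the same counters. A minimal sketch, run inside the process you want to measure:

using System;
using System.Diagnostics;

class MemoryCounters
{
    static void Main()
    {
        Process me = Process.GetCurrentProcess();
        Console.WriteLine("Private bytes:    {0:N0}", me.PrivateMemorySize64);
        Console.WriteLine("Working set:      {0:N0}", me.WorkingSet64);
        Console.WriteLine("Peak working set: {0:N0}", me.PeakWorkingSet64);
        Console.WriteLine("Virtual size:     {0:N0}", me.VirtualMemorySize64);
    }
}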
Let us know if that helps or you need further help pinning down where your memory is going!
I'm not entirely sure on this, as it's not clear from your question where you're seeing the different RAM usage amounts (VMware guest or host).
When running VMware Server on Server 2008, the memory usage reported by Task Manager on the host does not take the virtual machines' individual usage into account. I noticed this a while ago and am not sure whether it's a bug, a known issue, or by-design behaviour.
Example: I have 3 VMs running on my Server 2008 machine, each running a different variant of Windows, with applications running. Nothing is running on the host, other than the Virtual Machines at the moment. Task manager on the host reports ~2GB RAM used, whereas the guests are using at least 1GB each.
Can you clarify exactly where your RAM usage numbers are coming from?
I am developing an ASP.NET application which needs to look up a record among around 5 million records (around 4GB of data). The client is looking for higher performance, so we decided to use a memory cache. But I am facing an issue while loading the data into the memory cache from ASP.NET. I tried changing the application pool settings, setting the virtual memory limit to 0 and the private memory limit to 0, but nothing worked. Loading is fine up to around 1.5GB and then throws out-of-memory exceptions. There is no issue when I push the data using a console application after unchecking "32 bit" in the build settings of the application properties.
My issue is with ASP.NET. I am using .NET Framework 4.0 on a 4-core server, and the memory available on the server is around 49GB. I also tried enabling "run 32-bit on 64-bit mode" (Enable 32-Bit Applications) in the application pool, but nothing changed.
Could you please suggest a solution?
As already mentioned by John: querying 5,000,000 records is the job of a DB, NOT your code. If you configure the DB correctly (let the DB use memory as a cache, correct indexes, performant SQL queries), I would say there is a 99.9% chance the DB will be MUCH faster than anything you can create in ASP.NET.
Anyhow, if you REALLY want to do it the other way around, you need to create a 64-bit process.
Checklist for doing that (off the top of my head - no guarantee of completeness):
compile (all parts of) the solution as "Any CPU" or "x64"
run IIS on a 64-bit CPU and OS (which should be the case with 49GB of RAM available)
Set the Application-Pool to run as a 64-bit process with no memory limit:
Application Pools -> "Your Pool" -> Advanced Settings...
-> Enable 32-bit Application -> False
-> Private Memory Limit (KB) -> 0
-> Virtual Memory Limit (KB) -> 0
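Once the pool is reconfigured, a quick runtime check confirms the worker process is actually 64-bit. Environment.Is64BitProcess and Environment.Is64BitOperatingSystem are available from .NET 4.0; where you surface the string is up to you:

using System;

static class BitnessCheck
{
    // Returns "64-bit process: True, 64-bit OS: True" once the pool is set up correctly.
    public static string Describe()
    {
        return string.Format("64-bit process: {0}, 64-bit OS: {1}",
            Environment.Is64BitProcess, Environment.Is64BitOperatingSystem);
    }
}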
Thank you guys. I agree that I can do it in my DB. I am handling a huge volume of requests per second. My DB query is like: select a,b,c,d,e from table1 where id = primary key... It is a very simple and efficient query, but although it is efficient, it is not giving the required performance. So we decided to use a cache. We have now resolved the issue by creating a Windows service (which creates a proxy and hosts the cache) and the web application separately; the web application internally calls this Windows service. It is working now. Thank you for all the suggestions.
I wrote a C# scraper and analyzed the markup of 30K URLs to pull certain metrics from them.
I ran the same code on two machines:
my dev box with a 4-core CPU, 8 logical processors, and 32GB of RAM. It used up to 300MB of RAM through to the end. As I display the WorkingSet size, I could even see the GC kick in and lower memory use, then let it grow back again.
an EC2 instance, same data, but with only 2 processors and 1.7GB of RAM. Here it used 1.1GB of RAM and, when all threads concluded their work, it went down to 300MB just like in my local test.
RAM usage was checked with both Environment.WorkingSet and Task Manager. My network speed is not negligible, so I don't think it could affect things even if that Amazon instance might be a little faster. (EC2 network performance differs per instance, and this one is on the affordable, hence slower, side.)
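For what it's worth, this is a minimal sketch of the kind of monitoring described; Environment.WorkingSet and GC.GetTotalMemory are the relevant APIs, and the once-a-second loop is just an example:

using System;
using System.Threading;

class MemoryWatch
{
    static void Main()
    {
        for (int i = 0; i < 60; i++)
        {
            long workingSet = Environment.WorkingSet;    // physical memory mapped into this process
            long managedHeap = GC.GetTotalMemory(false); // the GC's current estimate of the managed heap
            Console.WriteLine("working set {0:N0} B, managed heap {1:N0} B", workingSet, managedHeap);
            Thread.Sleep(1000);
        }
    }
}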
Why this memory use discrepancy? And can I somehow estimate the memory use in C# beforehand?
My guess is that, with a slower CPU in the cloud, the GC preferred to keep allocating rather than cleaning up what was already used. But this is just my theory to excuse its unexpected behavior, based on wishful thinking. Still, with my 32GB of RAM it could have used far more, but it behaved. With 1.7GB of RAM it went all crazy, using 1.1GB of it... I don't get it.
In C++ I just think of how many URLs I fetch at the same time, I figure 256KB average size plus the size of the extracted data, and I can tell beforehand quite precisely how much memory will be used. But this C# test left me wondering.
As I plan to release this tool into the wild... I don't feel comfortable taking up over half the RAM, especially on a lesser machine.
UPDATE: Forgot to mention that both machines are Windows 8-based. Actually, one is 8.1 (local) and one is Server 2012 (EC2 cloud), both with .NET 4.5.2.
I've been looking for a solution for about a day and a half now and just can't get to the point. I am trying to start a *.lnk file on PocketPC 2003 from our C# application. This *.lnk file contains a link to evm.exe, which is a JVM for PocketPC. Among other arguments, -Xms8M is passed, which tells the JVM to reserve at least 8MB of memory.
If started directly from Windows Explorer, there's no problem.
Now I create a process in C# pointing to the *.lnk file. When I try to start it, the JVM console opens and brings up one of two errors: "EVM execution history too large" or "failed to initialize heap (Phase 1)" (or something like that).
If I remove the mentioned parameter, the application comes up with no problem.
Because of this behaviour I assume that too little memory is assigned to the newly created process. Is this realistic? And if so: is there a way to assign more memory to the newly created process? Or am I completely wrong and need to go some other way (if one is available)?
Edit:
this.myStartProcess = new Process
{
    StartInfo = { FileName = appName },
    EnableRaisingEvents = true
};
this.myStartProcess.Start();
Edit 2:
After doing some more research, it turned out that the real problem is that only very limited resources are available, and these get eaten up over time by my launcher application (which is about 1.8 MB in total after starting).
To improve things, I started to study how the garbage collector works in Windows Mobile and ended up using two techniques to bring up the virtual machine.
The first is to reduce the memory taken by my own application by sending it to the background (the form's SendToBack() method) and waiting for the garbage collector to finish (GC.WaitForPendingFinalizers()).
After that, I look for 9 MB of free program memory before trying to bring the VM up. If there isn't enough, I try to shift the needed memory from storage memory to program memory.
These two techniques improved things a lot!
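For reference, the sequence looks roughly like this. A sketch: the helper name is illustrative, and the explicit GC.Collect() calls are an assumption on top of what is described above:

using System;
using System.Windows.Forms;

static class LauncherHelper
{
    // Push the launcher into the background and let the .NET CF garbage
    // collector release as much memory as possible before starting the VM.
    public static void FreeMemoryBeforeLaunch(Form launcher)
    {
        launcher.SendToBack();           // give up the foreground
        GC.Collect();                    // request a collection
        GC.WaitForPendingFinalizers();   // wait for finalizers to release native resources
        GC.Collect();                    // reclaim objects freed by those finalizers
    }
}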
There's still a problem with my launcher application. The allocated bytes (strings and boxed objects, to be concrete) increase over time when my launcher application is in front... about 30 KB in 10 minutes. After 24 hours the device is rebooted automatically, and at the moment I assume the launcher will be in front for about 10 minutes in total during that period. Nevertheless, it's not good to have memory leaks. Has anyone got an idea how to chase this down?
Thanks in advance
Best regards
Marcel
It looks like there are two reasons why this could happen:
The default values provided for the MinWorkingSet and MaxWorkingSet properties are not satisfactory for your requirements. From http://msdn.microsoft.com/en-us/library/system.diagnostics.process.maxworkingset.aspx:
The working set of a process is the set of memory pages currently visible to the process in physical RAM memory. These pages are resident and available for an application to use without triggering a page fault.

The working set includes both shared and private data. The shared data includes the pages that contain all the instructions that your application executes, including the pages in your .dll files and the system.dll files. As the working set size increases, memory demand increases.

A process has minimum and maximum working set sizes. Each time a process resource is created, the system reserves an amount of memory equal to the minimum working set size for the process. The virtual memory manager attempts to keep at least the minimum amount of memory resident when the process is active, but it never keeps more than the maximum size.

The system sets the default working set sizes. You can modify these sizes using the MaxWorkingSet and MinWorkingSet members. However, setting these values does not guarantee that the memory will be reserved or resident.
It is effectively impossible to reserve the memory you require for the JVM on your machine because of the way the OS manages memory (which I would find really surprising, since every modern OS has virtual memory support).
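If the first reason applies, the working-set bounds can in principle be nudged after the process starts. This is an illustration for the full desktop framework only; the Compact Framework may not support these properties at all, and the path below is a placeholder:

using System;
using System.Diagnostics;

class LaunchJvm
{
    static void Main()
    {
        // Placeholder path; substitute the real *.lnk location.
        Process proc = new Process { StartInfo = { FileName = @"\Windows\evm.lnk" } };
        proc.Start();

        // Raise the maximum first so that min <= max always holds. Values are
        // in bytes and are only hints; the OS is free to ignore them.
        proc.MaxWorkingSet = (IntPtr)(16 * 1024 * 1024);
        proc.MinWorkingSet = (IntPtr)(8 * 1024 * 1024);
    }
}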
We are running a .Net 1.1 based Windows Service (not an ASP.Net application), and we are getting System.OutOfMemoryException errors under heavy load.
The service basically hosts an in-memory cache, consisting of an Asset hashtable; nested within that is an Account hashtable, and within that is a class that stores values for a given time period (for the Asset + Account combination). The service serves up aggregates of this data to clients, and also accepts updates to the data. The total number of nodes remains constant throughout the service's lifetime.
In machine.config, we see things such as:
<processModel
enable="true"
timeout="Infinite"
idleTimeout="Infinite"
shutdownTimeout="00:00:05"
requestLimit="Infinite"
requestQueueLimit="5000"
restartQueueLimit="10"
memoryLimit="60"
webGarden="false"
cpuMask="0xffffffff"
userName="machine"
password="AutoGenerate"
/>
These all seem to be related to ASP.NET/IIS applications, but our OutOfMemoryException is not occurring under ASP.NET, and there seems to be no equivalent configuration setting for non-ASP applications.
Does this section perhaps apply to all .Net based applications, not just ASP.Net?
I ask because our service was getting up to around 1.2 GB of memory consumption (we are storing a large database in memory, yes, with good reason) when the error occurred, which is coincidentally roughly equal to 60% of 2GB (the memory "limit" of 32-bit applications). Could this apparent IIS config setting be capping our Windows service's memory as well?
To complicate matters a bit further, we are running this on .NET 1.1 32-bit, under 64-bit Windows Server 2003 (yes, we do have a reason for this unusual configuration), with 12 GB of RAM. From what I understand, each 32-bit process should be able to address up to 4GB of RAM, should it not? Or does this require changes to the registry or a .NET config file?
NOTE: I am aware of the /3GB Windows startup switch, but since we are on 64-bit Windows, I don't think it applies (feel free to correct me if I'm wrong).
Update 1
People seem to agree that processModel configuration is specific to ASP.Net applications only.
One answer says that 32-bit apps on a 64-bit OS still have a 2GB per-process limit, but most references I have been able to find say that each 32-bit process has access to 4GB on a 64-bit OS. (But perhaps this is only enabled by setting the IMAGE_FILE_LARGE_ADDRESS_AWARE bit?)
Some relevant links
How to set the IMAGE_FILE_LARGE_ADDRESS_AWARE bit for C# apps:
http://bytes.com/groups/net-c/569198-net-4gt
IIS6 Available Memory for 32-Bit Application with Web Garden on x64 OS (32Gb Ram)
.NET Debugging Demos Lab 3: Memory:
http://blogs.msdn.com/tess/archive/2008/02/15/net-debugging-demos-lab-3-memory.aspx
This should be useful for pinning down the specifics of the OutOfMemoryException.
Pushing the Limits - Virtual Memory:
http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx
Read this to understand the concepts, and use testlimit to rule out machine/config issues. Once convinced it's your app's fault, read and re-read the articles from Tess' blog.
Final Update
Well, for our situation, this turned out to be a missing .NET service pack... apparently there was an issue where remoting threw this exception, and after the service pack it cleared up entirely!
The processModel configuration element is specific to ASP.NET processes and is not applicable to other .NET processes.
If you are running a 32-bit process on a 64-bit OS, you're still subject to the process limit of a 32-bit process, which is 2GB. The practical limit is actually closer to 1.5 to 1.8GB, depending on your application's characteristics - in other words, it's very unlikely you will ever actually reach the 2GB process barrier.
In order for your Windows service to take advantage of the full 4GB of process space you're expecting, you will need to:
mark your process as LARGE_ADDRESS_AWARE. I believe this can be done using editbin.exe, but I've never done it! It also might open up a new can of worms... :) I'll see if I can't validate.
add /3GB in boot.ini
reboot server
Also consider the memory allocation profile of your application. If you are allocating objects greater than 85K in size, these objects will be allocated on the large object heap. The large object heap is swept, but not compacted like the other heaps, meaning that you could be experiencing fragmentation, which will eventually keep the .NET memory manager from allocating a contiguous block of memory to satisfy the request.
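A tiny illustration of that threshold (the ~85,000-byte cutoff is documented; the array sizes are arbitrary examples):

using System;

class LohThreshold
{
    static void Main()
    {
        byte[] small = new byte[84000]; // below the cutoff: small object heap, compacted by the GC
        byte[] large = new byte[85000]; // at/above ~85,000 bytes: large object heap, swept but not compacted
        Console.WriteLine("{0:N0} / {1:N0} bytes allocated", small.Length, large.Length);
    }
}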
You likely want to take snapshots of the process and review which objects are in which heaps to get a better idea of what's going on within your process's memory space.
Also, check the size of the page file on the server. An inadequately sized page file can also cause problems, considering it's shared across all processes, though that tends to produce system exceptions with some verbiage around 'virtual memory'.
Good luck!
Z
References:
Memory Limits for Windows Releases
Tess Ferrandez, .NET Debugging: Memory
The ProcessModel key is only used for ASP.NET, and even then, on Windows Server 2003 it's pretty much useless because you have the Application Pool configuration.
The /3GB switch does not apply to 64-bit Windows.
As I understand it, you CAN get OutOfMemoryExceptions if you have objects pinned in memory, preventing the GC from effectively defragmenting the managed heap when a collection occurs.
You should strongly consider moving your service to a 64-bit application if you know you are jamming gigabytes of data into it. IMO you're playing with fire if you are coming this close to the limit.
Is there any way I can have my application tell how much memory the user has, and whether the application is getting close to taking up a high percentage of that?
Also, how do you know how much memory the machine gives to the OS, video cards, etc.?
For example, if you have 4GB of memory, how much actual memory is given to applications? Can you configure this?
Is there any way I can have my application tell how much memory the user has, and whether the application is getting close to taking up a high percentage of that?
Yes, it's possible (see some of the other answers), but it's very unlikely that your application really needs to care. What is it that you're doing where you think you need to be this sensitive to memory pressure?
Also, how do you know how much memory the machine gives to the OS, video cards, etc.?
Again, this should be possible using WMI calls, but the bigger question is why do you need to do this?
For example, if you have 4GB of memory, how much actual memory is given to applications? Can you configure this?
No, this isn't a configurable value. When a .NET application starts up, the operating system allocates a block of memory for it to use. This is handled by the OS, and there is no way to configure the algorithms used to determine the amount of memory to allocate. Likewise, there is no way to configure how much of that memory the .NET runtime uses for the managed heap, stack, large object heap, etc.
I think I read the question a little differently, so hopefully this response isn't too off topic!
You can get a good overview of how much memory your application is consuming by using Windows Task Manager, or even better, Sysinternals Process Monitor. This is a quick way to review your processes at their peaks to see how they are behaving.
Out of the box, an x86 process will only be able to address 2GB of RAM. This means any single process on your machine can only consume up to 2GB. In reality, you're likely to be able to consume only 1.5-1.8GB before getting out-of-memory exceptions.
How much RAM your copy of Windows can actually address will depend on the Windows version and CPU architecture.
Using your example of 4GB RAM, the OS is going to give your applications up to 2GB of RAM to play in (which all processes share) and it will reserve 2GB for itself.
Depending on the operating system you're running, you can tweak this: using the /3GB switch in boot.ini will adjust that ratio to 3GB for applications and 1GB for the OS. This has some impact on the OS, so I'd review that impact first and see if you can live with the tradeoff (YMMV).
For a single application to be able to address more than 2GB, you're going to need to set a particular bit in the PE image header. This question/answer already has good info on this subject.
The game changes under x64 architecture. :)
Some good reference information:
Memory Limits for Windows Releases
Virtual Address Space
I think you can use WMI to get all of that information.
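For example, a sketch using System.Management (add a reference to System.Management.dll; Win32_OperatingSystem reports these values in kilobytes):

using System;
using System.Management;

class WmiMemory
{
    static void Main()
    {
        ManagementObjectSearcher searcher = new ManagementObjectSearcher(
            "SELECT TotalVisibleMemorySize, FreePhysicalMemory FROM Win32_OperatingSystem");
        foreach (ManagementObject os in searcher.Get())
        {
            Console.WriteLine("Total physical: {0} KB", os["TotalVisibleMemorySize"]);
            Console.WriteLine("Free physical:  {0} KB", os["FreePhysicalMemory"]);
        }
    }
}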
If you don't wish to use WMI, you could use GlobalMemoryStatusEx():
Function Call:
http://www.pinvoke.net/default.aspx/kernel32/GlobalMemoryStatusEx.html
Return Data:
http://www.pinvoke.net/default.aspx/Structures/MEMORYSTATUSEX.html
MemoryLoad will give you a number between 0 and 100 that represents the approximate percentage of physical memory in use, and TotalPhys will tell you the total amount of physical memory in bytes.
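Putting the two links together, a minimal sketch (the declarations follow the pinvoke.net pages above):

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
class MEMORYSTATUSEX
{
    public uint dwLength = (uint)Marshal.SizeOf(typeof(MEMORYSTATUSEX));
    public uint dwMemoryLoad;            // approximate % of physical memory in use (0-100)
    public ulong ullTotalPhys;           // total physical memory, in bytes
    public ulong ullAvailPhys;
    public ulong ullTotalPageFile;
    public ulong ullAvailPageFile;
    public ulong ullTotalVirtual;
    public ulong ullAvailVirtual;
    public ulong ullAvailExtendedVirtual;
}

class MemoryStatus
{
    [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    static extern bool GlobalMemoryStatusEx([In, Out] MEMORYSTATUSEX lpBuffer);

    static void Main()
    {
        MEMORYSTATUSEX status = new MEMORYSTATUSEX();
        if (GlobalMemoryStatusEx(status))
        {
            Console.WriteLine("Memory load: {0}%", status.dwMemoryLoad);
            Console.WriteLine("Total physical: {0:N0} bytes", status.ullTotalPhys);
        }
    }
}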
Memory is tricky because usable memory is a blend of physical (RAM) and virtual (page file) types. The specific blend, and what goes where, is determined by the operating system. Luckily, this is somewhat configurable, as Windows allows you to stipulate how much virtual memory to use, if any.
Take note that not all of the memory in 32-bit Windows (XP & Vista) is available for use. Windows may report up to 4GB installed, but only 3.1-3.2GB will be available for actual use by the operating system and applications. This has to do with legacy addressing issues, IIRC.
Good Luck