I am developing an ASP.NET application that needs to search for a record among around 5 million records (about 4 GB of data). The client is looking for higher performance and decided on a memory cache, but I am facing an issue while loading the data into the memory cache from ASP.NET. I tried changing the application pool settings, setting both the virtual memory limit and the private memory limit to 0, but nothing worked: loading goes fine until around 1.5 GB and then throws out-of-memory exceptions. There was no issue when I pushed the data using a console application with the 32-bit option unchecked in the build settings of the project properties.
The issue is only with ASP.NET. I am using .NET Framework 4.0 on a 4-core server with around 49 GB of memory available. I also tried enabling the "run 32-bit on 64-bit" setting in the application pool, but nothing changed.
Could you please suggest a solution?
As John already mentioned: querying 5,000,000 records is the job of a DB, NOT your code. If you configure the DB correctly (let the DB use memory as a cache, correct indexes, performant SQL queries), I would say with a 99.9% chance the DB will be MUCH faster than anything you can create in ASP.NET.
Anyhow, if you REALLY want to do it the other way around, you need to create a 64-bit process.
Checklist for doing that (off the top of my head - no guarantee of completeness):
Compile all parts of the solution as "Any CPU" or "x64"
Run IIS on a 64-bit CPU and OS (which should be the case with 49 GB RAM available)
Set the Application-Pool to run as a 64-bit process with no memory limit:
Application Pools -> "Your Pool" -> Advanced Settings...
-> Enable 32-bit Application -> False
-> Private Memory Limit (KB) -> 0
-> Virtual Memory Limit (KB) -> 0
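If you prefer to script those three settings rather than click through the UI, the same changes can be made with appcmd; this is just a sketch, with "YourPool" as a placeholder pool name:

```shell
%windir%\system32\inetsrv\appcmd set apppool "YourPool" /enable32BitAppOnWin64:false
%windir%\system32\inetsrv\appcmd set apppool "YourPool" /recycling.periodicRestart.privateMemory:0
%windir%\system32\inetsrv\appcmd set apppool "YourPool" /recycling.periodicRestart.memory:0
```

The three properties correspond to "Enable 32-Bit Applications", "Private Memory Limit (KB)", and "Virtual Memory Limit (KB)" in the Advanced Settings dialog.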
Thank you, guys. I agree that I could do it in the DB, but I am handling a huge volume of requests per second. My DB query is like: select a,b,c,d,e from table1 where id = primary key. It is a very simple and efficient query, but even so it is not giving the required performance, so we decided to use a cache. We have now resolved the issue by creating a Windows service (which creates a proxy and hosts the cache) and the web application separately; the web application internally calls this Windows service. It is working now. Thank you for all the suggestions.
Related
I am troubleshooting a high-traffic C# .NET Framework 4.6.1 website that uses System.Runtime.Caching.MemoryCache quite extensively. Over the most recent week I've seen slowdowns, and when I set up some PerfMon counters I saw that the MemoryCache gets emptied out every two minutes.
All the other counters in the image show a similar picture (e.g. Cache Hits, Misses, etc.).
Initially, I thought that perhaps the app is running up against maximum limits of MemoryCache. So I added the following to the web.config:
<system.runtime.caching>
  <memoryCache>
    <namedCaches>
      <add name="CacheManager"
           cacheMemoryLimitMegabytes="8000"
           physicalMemoryLimitPercentage="99"
           pollingInterval="00:05:00" />
    </namedCaches>
  </memoryCache>
</system.runtime.caching>
However, nothing changed and the MemoryCache continues to be dumped every 2 minutes.
Some other troubleshooting notes:
There is no memory pressure. The application gets up to 2-3 GB of RAM before the 2-minute mark and then memory drops a bit (as evidenced by the # Bytes in all Heaps counter). The server has 24 GB of RAM.
Every 2 minutes there is a CPU spike because the app has to go back to the database to fetch data again.
The application is running on IIS in Windows Server 2019. It was previously running on Windows Server 2008, where these issues didn't exist.
So what could cause the MemoryCache to drop every 2 minutes?
P.S. With the help of MS Support and a couple of proc dumps, we were able to determine that the application domain is restarting every 2 minutes. We've added HKLM\Software\Microsoft\ASP.NET\FCNMode=1 to stop ASP.NET from responding to file system changes. However, what actually causes the restarts is still unknown; according to Procmon, no changes are happening in the application directory.
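For reference, that registry change can be scripted (FCNMode value 1 means file change notifications are disabled machine-wide):

```shell
reg add "HKLM\SOFTWARE\Microsoft\ASP.NET" /v FCNMode /t REG_DWORD /d 1 /f
```

On .NET Framework 4.5 and later the same behaviour can instead be disabled per application in web.config, which is a smaller hammer than the machine-wide registry key:

```xml
<system.web>
  <httpRuntime fcnMode="Disabled" />
</system.web>
```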
I have made a website. Locally it works fine, no problems at all, but when I publish it to the server (a VPS) it runs terribly slowly: the homepage takes almost 37 seconds to load, while locally it takes about 400 ms.
I have no idea what to do because I tried a lot:
Removed EF 6 and replaced it with Dapper
Used display templates instead of for-each loops and partials
Removed all ViewEngines and added only Razor
Checked for duplicate queries (found one, fixed it)
Checked for unnecessary queries and used joins
Checked whether some functions are called too many times (loops)
Tried to precompile during publishing
That made it faster locally: from 900 ms to 350 ms.
But on the server nothing seems to help. So I turned to the server.
For web application:
Checked if debug was set to true on the server (it was always false)
Changed the connection string from (local) to 127.0.0.1,1234 (I use a non-default port because of hackers; the port is not the real one in this example).
Published as debug, not release
Server settings:
Tried to force 32-bit for the AppPool
Put the web application in its own AppPool
Disabled IPv6
Shut down the other AppPools
Turned off the Windows Firewall (don't worry, I turned it back on)
Restarted the server again.
But then I noticed that the IIS Worker Process was using almost 1.7 GB of memory, a huge share of the total, and I was like "... What? Why?". Later I saw it was using 2.7 GB.
I installed Glimpse on the web application, but I cannot find out what the problem could be.
Some information of the techniques I use in the project:
3 layer: Business, Data, Web
DI with Unity
Bootstrap for design, jQuery
EntityFramework 6 in the first version, Dapper in second
MVC 5
AspNet Identity for user management
VPS information:
4 GB
Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80 GHz
64-bit OS
Windows Server 2012
I can't think of any other information to give, so if I missed something, just ask. If you want to check it out, the website is http://www.zatyscorner.com. I am trying to optimize the start page first to see what is happening.
Hopefully, someone can help me out.
Okay, so back in programming school I was taught to check a few things when there are performance issues: check the server, check the code and... check the queries.
Well, in this case, the queries to SQL were the bad guys. I had one query that fetched about 7,700 records, took 10 of those and showed them... Yeah, that will cause a timeout and an enormous amount of memory use, especially when 3 people are doing it at the same time.
Problem fixed! I am rebuilding most queries with Dapper now and making sure the right number of records is returned.
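In other words, the fix amounts to letting SQL Server do the paging instead of materializing all 7,700 rows and discarding most of them. A sketch with made-up table and column names (OFFSET/FETCH requires SQL Server 2012 or later):

```sql
-- Return only the page of 10 rows the view actually needs,
-- rather than pulling every row into memory and taking 10 in C#.
SELECT Id, Title, CreatedAt
FROM Posts
ORDER BY CreatedAt DESC
OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY;
```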
Feeling like a noob :(
I think changing the Application Pool's Identity to LocalSystem can help, check out this answer here.
I would also suggest enabling output caching in IIS if you haven't done so yet. Enable it for all image files (.png, .jpg, etc.), for .css files, for HTML and .cshtml files, and for .js files too.
Here is how you can do it
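As a sketch, IIS output caching can be configured per extension in web.config under system.webServer; the extensions below are just the ones mentioned above, and the policy is one reasonable choice, not the only one:

```xml
<system.webServer>
  <caching enabled="true" enableKernelCache="true">
    <profiles>
      <!-- Cache each file until it changes on disk -->
      <add extension=".png" policy="CacheUntilChange" />
      <add extension=".jpg" policy="CacheUntilChange" />
      <add extension=".css" policy="CacheUntilChange" />
      <add extension=".js"  policy="CacheUntilChange" />
    </profiles>
  </caching>
</system.webServer>
```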
I'm cross-posting this from the 51Degrees forums as it hasn't gotten much traction there.
I went ahead and implemented the latest NuGet package version of 51Degrees (2.19.1.4) into a site we manage here at work. We are attempting to bring the management of the mobile views for this site in house (it's currently done by a third party), so the only functionality we are interested in is the detection. We disabled the redirect functionality by commenting out the redirect element in the config, and we changed the logging level to Fatal (the log is in the App_Data folder).
To our understanding, those were the only changes needed. And this worked. We could switch our layout view between desktop and mobile based on the information 51degrees was providing.
While testing and promoting through DEV and QA we noted increased memory consumption in the app pool, but nothing that we were overly worried about. The app pool at standard traffic levels consumes roughly 230 MB of memory in PROD. It will spike to 300 MB during peak times, so nothing too worrisome, especially considering we do a fair amount of InProc caching.
As of Sunday we promoted 51Degrees Lite into PROD, but disabled the mobile views (we did this in QA as well). We wanted to see how it would perform in PROD and what kind of impact it would have on the server in a live environment. Just to reiterate, QA revealed increased memory use, but we could not replicate PROD loads and variances.
PROD revealed some concerns. Memory consumption of the app pool on one of the two frontends grew slowly throughout the day, peaking at 560 MB at 11 PM; the other frontend peaked at 490 MB.
We confirmed the problem was isolated to 51degrees by removing it from the site, recycling, and monitoring for another day. App pool memory never exceeded 300MB.
We also ran the app pool through SciTech's memory profiler to confirm. The results showed 51Degrees consuming the majority of the additional memory above the expected baseline. (We can run these tests again in a QA environment if wanted; the numbers will be lower, but they will paint a picture.)
So the questions:
1) What would account for this large memory consumption? While a 500-600 MB app pool isn't the end of the world, having our mobile detection solution more than double our app pool size is worrisome. (While our site isn't the heaviest-traffic site, it does receive a fairly decent number of requests.)
2) Are there any settings that we can apply to prevent or reduce the memory consumption? Ideally, we'd like to limit the memory consumption of 51 degrees to just the memory needed to load the product and monitor incoming requests.
Thanks for any feedback.
I am testing AppFabric cache performance. To do this, I am hitting the cache host on the LAN from my local machine. The cache host is running Windows Server 2008 and has nothing but the bare essentials installed on it. It has 8 GB of RAM and is also a VMware virtual server.
As soon as I hit the cache host, I can see the memory in use increase. But something very fishy is going on somewhere. The total primary data bytes in use is 1.5 GB. The object size is 1,744 bytes (using ANTS profiler) and the total object count is 2,521,451. I have disabled eviction. But, and this is interesting, as soon as the server hits the throttled state I can see that the server's RAM usage is at 7.72 GB, yet apart from the distributed cache using 1.8 GB there is no other application using such a high quantity of RAM.
I am using Visual Studio 2010, and I am inserting and reading the objects in parallel.
The question that I wanted to ask is:
Where is my memory going? The server in the throttled state says I am using 7.72 GB of the allotted memory, whereas in Task Manager I can see that barely 3 GB are being used (if I add up all the running processes' memory).
Gagan, if you're still having this issue, can you download Sysinternals' Process Explorer? You can find it here. Run it, and add columns for private bytes, working set, and virtual size. Peak private bytes and peak working set wouldn't hurt either, and there are other memory columns you can add for fun : )
Task Manager doesn't give you the virtual size (Windows 7 lets you add a Commit Size column; I don't know if Server 2008 also has that). This should give you a clearer picture of where the memory is going.
Let us know if that helps or you need further help pinning down where your memory is going!
I'm not entirely sure on this, as it's not clear in your question where you're seeing the different RAM usage amounts (VMWare guest or host).
When running VMware Server on Server 2008, the memory usage reported by Task Manager on the host does not take into account the virtual machines' individual usage. I noticed this a while ago, and I'm not sure whether it's a bug, a known issue, or by-design behaviour.
Example: I have 3 VMs running on my Server 2008 machine, each running a different variant of Windows, with applications running. Nothing is running on the host, other than the Virtual Machines at the moment. Task manager on the host reports ~2GB RAM used, whereas the guests are using at least 1GB each.
Can you clarify exactly where your RAM usage numbers are coming from?
We are running a .Net 1.1 based Windows Service (not an ASP.Net application), and we are getting System.OutOfMemoryException errors under heavy load.
The service basically hosts an in-memory cache consisting of an Asset hashtable; nested within that is an Account hashtable, and within that is a class that stores values for a given time period (for the Asset+Account combination). The service serves up aggregates of this data to clients and also accepts updates to the data. The total number of nodes remains constant throughout the service's lifetime.
In machine.config, we see things such as:
<processModel
    enable="true"
    timeout="Infinite"
    idleTimeout="Infinite"
    shutdownTimeout="00:00:05"
    requestLimit="Infinite"
    requestQueueLimit="5000"
    restartQueueLimit="10"
    memoryLimit="60"
    webGarden="false"
    cpuMask="0xffffffff"
    userName="machine"
    password="AutoGenerate"
/>
These all seem to be related to ASP.NET/IIS applications, but our OutOfMemoryException is not occurring under ASP.NET, and there seems to be no equivalent configuration setting for non-ASP.NET applications.
Does this section perhaps apply to all .Net based applications, not just ASP.Net?
I ask because our service was up at around 1.2 GB of memory consumption when the error occurred (we are storing a large database in memory, yes, with good reason), which is coincidentally roughly equal to 60% of 2 GB (the memory "limit" of 32-bit applications). Could this apparent IIS config setting be capping our Windows service's memory as well?
To complicate matters a bit further, we are running this on .NET 1.1 32-bit, under 64-bit Windows Server 2003 (yes, we have a reason for this unusual configuration), with 12 GB of RAM. From what I understand, each 32-bit process should be able to address up to 4 GB of RAM, should it not? Or does this require changes to the registry or a .NET config file?
NOTE: I am aware of the /3GB windows startup switch, but since we are on 64 Bit windows, I don't think that should apply (but feel free to correct me if I'm wrong).
Update 1
People seem to agree that processModel configuration is specific to ASP.Net applications only.
One answer says that 32-bit apps on a 64-bit OS still have a 2 GB per-process limit, but almost every reference I have been able to find says that each 32-bit process has access to 4 GB on a 64-bit OS. (But perhaps this is only enabled by setting the IMAGE_FILE_LARGE_ADDRESS_AWARE bit?)
Some relevant links
How to set the IMAGE_FILE_LARGE_ADDRESS_AWARE bit for C# apps:
http://bytes.com/groups/net-c/569198-net-4gt
IIS6 Available Memory for 32-Bit Application with Web Garden on x64 OS (32Gb Ram):
IIS6 Available Memory for 32-Bit Application with Web Garden on x64 OS (32Gb Ram)
.NET Debugging Demos Lab 3: Memory:
http://blogs.msdn.com/tess/archive/2008/02/15/net-debugging-demos-lab-3-memory.aspx
Should be useful to find the specifics of the OutOfMemoryException?
Pushing the Limits - Virtual Memory:
http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx
Read this to understand concepts, and use testlimit to rule out machine/config issues. Once convinced it's your app's fault, read & re-read the articles from Tess' blog.
Final Update
Well, for our situation, this turned out to be a missing .NET service pack... apparently there was an issue with remoting throwing this exception, and after installing the service pack it cleared up entirely!
The processModel configuration element is specific to ASP.NET processes and is not applicable to other .NET processes.
If you are running a 32-bit process on a 64-bit OS, you're still subject to the process limit of a 32-bit process, which is 2 GB. The practical limit is actually closer to 1.5-1.8 GB, depending on your application's characteristics; in other words, it's very unlikely you will ever actually reach the 2 GB process barrier.
In order for your Windows service to take advantage of the full 4 GB of process space you're expecting, you will need to:
Mark your process as LARGE_ADDRESS_AWARE. I believe this can be done using editbin.exe, but I've never done it! It also might open up a new can of worms... :) I'll see if I can validate.
Add /3GB in boot.ini
Reboot the server
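As a sketch of the first step, the flag can be set on the built binary with the Visual Studio command-line tools (the executable name here is a placeholder) and then verified with dumpbin:

```shell
editbin /LARGEADDRESSAWARE MyService.exe
dumpbin /headers MyService.exe | findstr /i "large"
```

If the flag took effect, the dumpbin header summary should include a line to the effect of "Application can handle large (>2GB) addresses".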
Also consider the memory allocation profile of your application. If you are allocating objects greater than 85 KB in size, these objects will be allocated on the large object heap. The large object heap is swept, but not compacted like the other heaps, meaning that you could be experiencing fragmentation, which will eventually keep the .NET memory manager from allocating a contiguous block of memory to satisfy the request.
You likely want to take snapshots of the process and review which objects are in which heaps to get a better idea of what's going on within your process's memory space.
Also, check the size of the page file on the server. An inadequately sized page file can also cause problems, considering it's shared across all processes, though that tends to produce system exceptions with some verbiage around 'virtual memory'.
Good luck!
Z
References:
Memory Limits for Windows Releases
Tess Ferrandez, .NET Debugging: Memory
The ProcessModel key is only used for ASP.NET, and even then, on Windows Server 2003 it's pretty much useless because you have the Application Pool configuration.
The /3GB switch does not apply to 64-bit Windows.
As I understand it, you CAN get OutOfMemoryExceptions if you have objects pinned in memory, preventing the GC from effectively defragmenting the heap when a collection occurs.
You should strongly consider moving your service to a 64-bit application if you know you are jamming gigabytes of data into it. IMO you're playing with fire if you are coming this close to the limit.