51Degrees Memory Consumption - C#

I'm cross-posting this from the 51Degrees forums as it hasn't gotten much traction there.
I went ahead and implemented the latest NuGet package version of 51Degrees (2.19.1.4) into a site we manage here at work. We are attempting to bring the management of the mobile views for this site in house (it's currently done by a third party), so the only functionality we are interested in is the detection. We disabled the redirect functionality by commenting out the redirect element in the config, and we set the logging level to Fatal (the log is in the App_Data folder).
To our understanding, those were the only changes needed. And this worked. We could switch our layout view between desktop and mobile based on the information 51Degrees was providing.
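To give an idea of the integration, the view switch itself is trivial - something along these lines (an illustrative sketch, not our exact code; the layout paths and base controller name are made up; 51Degrees v2.x surfaces its detection results through the standard Request.Browser capabilities):

using System.Web.Mvc;

// Base controller that picks a layout per request based on device detection.
// 51Degrees enriches Request.Browser, so IsMobileDevice reflects its data set.
public class DeviceAwareController : Controller
{
    protected override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        bool isMobile = filterContext.HttpContext.Request.Browser.IsMobileDevice;
        ViewBag.Layout = isMobile
            ? "~/Views/Shared/_MobileLayout.cshtml"
            : "~/Views/Shared/_Layout.cshtml";
        base.OnActionExecuting(filterContext);
    }
}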
While testing and promoting through DEV and QA we noted increased memory consumption in the app pool, but nothing that we were overly worried about. The app pool at standard traffic levels consumes roughly 230 MB of memory in PROD. It will spike to 300 MB during peak times, so nothing too worrisome, especially considering we do a fair amount of InProc caching.
As of Sunday we promoted 51Degrees Lite into PROD, but disabled the mobile views (we did this in QA as well). We wanted to see how it would perform in PROD and what kind of impact it would have on the server in a live environment. Just to reiterate: QA revealed increased memory use, but we could not replicate PROD loads and variances.
PROD revealed some concerns. Memory consumption on the app pool of one of the two front ends grew slowly throughout the day, peaking at 560 MB at 11 PM; the other peaked at 490 MB.
We confirmed the problem was isolated to 51Degrees by removing it from the site, recycling, and monitoring for another day. App pool memory never exceeded 300 MB.
We also ran the app pool through SciTech's memory profiler to confirm. The results showed 51Degrees consuming the majority of the additional memory above the expected baseline. (We can run these tests again in a QA environment if needed. The numbers will be lower, but they will paint a picture.)
So the questions:
1) What would account for this large memory consumption? While a 500-600 MB app pool isn't the end of the world, having our mobile detection solution more than double our app pool size is worrisome. (While our site isn't the heaviest-traffic site, it does receive a fairly decent number of requests.)
2) Are there any settings we can apply to prevent or reduce the memory consumption? Ideally, we'd like to limit 51Degrees to just the memory needed to load the product and monitor incoming requests.
Thanks for any feedback.

Related

Azure Web App Service has Steady CPU Time Increase

I have what is essentially a flashcard web app, hosted on the free tier of Azure and coded in ASP.NET (C#). It is used by a small number of people (40 or so). As the portal graph showed (not reproduced here), the CPU Time was steady for a while and then steadily started increasing around April 1. The problem is that I am now reaching Azure's 60-minute CPU Time per day limit, causing my app to shut down when it reaches that quota.
I am unaware of ANY changes, either in the code or in the website's configuration, that happened at any time period seen on this chart.
Quick note: The large spikes are expected, and I don't believe they're related to the issue. Long story short, it was the day of a competition where the app was used significantly more than usual. This happens every couple of weeks during each competition. I don't believe it's related because it has NEVER been followed by a steady increase shortly after. So the spike is normal; the gradual increase is not.
I have restarted the web service many times. I have redeployed the code. I have turned off many features in the C# code that might increase the CPU Time. I checked the website's request count, and it is actually LOWER after that first spike than before it. Even during periods of no requests (or something small like <5 requests per hour), the CPU Time is still high. So this has nothing to do with request count or something like undisposed threads (which would get cleared upon a web service restart anyway).
One last thing: I have also deployed this EXACT same code to another Azure website, which I have used for years as the test website. The test website does NOT have this issue. The test website connects to the same data and everything. The only difference is that it's not what other users use, so the request count is much lower, and it does not gradually increase. This leads me to believe it is not an issue in my C#/ASP.NET code.
My theory is that there is some configuration in Azure that is causing this, but I don't know what. I didn't change anything around the time the CPU Time started increasing, but I don't see what else it could be. Any ideas would be greatly appreciated, as I've been wracking my brain for weeks on this, and it's causing my production site to go down for hours every day.
EDIT: Also, the CPU Usage is NOT high at this time. So while the CPU is supposedly running at long periods of time, it never approaches 100% CPU at any given moment. So this is also NOT an issue of high CPU usage.

Azure Cloud Service Performance

I have an Azure Cloud Service that hosts a website that generates PDF files. The PDF files can vary in size up to around 10 MB. It's a .NET MVC Entity Framework site that uses PDFsharp/MigraDoc to generate the PDFs.
I've been trying to improve the performance of the PDF generation and I've found that even when increasing the instance size - for example from A2 to A4 - I see little change in performance.
I checked the monitor in the portal and it doesn't show a very high usage, and I also checked the Resource Monitor on one of the Cloud Service servers and the memory usage was at most 50% usage.
(Screenshots of the Azure Portal monitor and the server's Resource Monitor omitted.)
Note: memory usage is at most 50%, CPU usage is low, and most of the network usage displayed appears to come from the remote desktop connection.
I imagine the slowness is either due to slow disk write speed or long garbage collection pauses, but I'm not sure how to prove this.
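One way I can imagine separating the two is timing the MigraDoc render apart from the disk write, and checking which GC mode the role runs under - a rough sketch (document content and file name are placeholders):

using System;
using System.Diagnostics;
using System.IO;
using System.Runtime;
using MigraDoc.DocumentObjectModel;
using MigraDoc.Rendering;

class PdfTiming
{
    static void Main()
    {
        // Server GC generally suits throughput-heavy rendering; worth
        // confirming which mode the web role actually uses.
        Console.WriteLine("Server GC: " + GCSettings.IsServerGC);

        Document doc = new Document();
        Section section = doc.AddSection();
        for (int i = 0; i < 1000; i++)
            section.AddParagraph("Row " + i + ": representative content.");

        PdfDocumentRenderer renderer = new PdfDocumentRenderer(true);
        renderer.Document = doc;

        Stopwatch sw = Stopwatch.StartNew();
        renderer.RenderDocument();            // CPU-bound layout and rendering
        Console.WriteLine("Render: " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        byte[] bytes;
        using (MemoryStream ms = new MemoryStream())
        {
            renderer.PdfDocument.Save(ms, false);  // serialize without touching disk
            bytes = ms.ToArray();
        }
        Console.WriteLine("Serialize: " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        File.WriteAllBytes("test.pdf", bytes);     // isolate the disk write
        Console.WriteLine("Disk write: " + sw.ElapsedMilliseconds + " ms");
    }
}

If the render step dominates, the disk isn't the problem; if the write dominates, a faster drive is the thing to chase.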
I'd greatly appreciate any pointers or ideas that can help me improve the performance. I can also upgrade the instance size to one of the newer instances (D/D2 series) if you think a faster SSD or CPU will help, but since the CPU usage is so low I'm not sure that it would.
Thanks!

Memory Cache Maximum limit in Asp.net

I am developing an ASP.NET application which needs to search for a record among around 5 million records (about 4 GB of data). The client is looking for higher performance and decided to use an in-memory cache. But I'm facing an issue while loading the data into the memory cache from ASP.NET. I tried changing the application pool settings, setting the Virtual Memory Limit to 0 and the Private Memory Limit to 0; nothing worked. It loads fine until around 1.5 GB and then throws OutOfMemoryException. There is no issue when I push the data using a console application with the "32 bit" option unchecked in the build settings of the application properties.
My issue is with ASP.NET. I am using .NET Framework 4.0 on a 4-core server; the memory available on the server is around 49 GB. I also tried enabling "Enable 32-Bit Applications" (32-bit run on 64-bit mode) in the application pool, but nothing changed.
Could you please suggest a solution?
As already mentioned by John: querying 5,000,000 records is the job of a DB and NOT your code. If you configure the DB correctly (let the DB use memory as a cache, correct indexes, performant SQL queries), I would say with a 99.9% chance the DB will be MUCH faster than anything you can create in ASP.NET.
Anyhow, if you REALLY want to do it the other way around, you need to create a 64-bit process.
Checklist for doing that (out of my head - no guarantee for completeness):
compile all parts of the solution as "Any CPU" or "x64"
run IIS on a 64-bit CPU and OS (which should be the case with 49 GB RAM available)
Set the Application-Pool to run as a 64-bit process with no memory limit:
Application Pools -> "Your Pool" -> Advanced Settings...
-> Enable 32-bit Application -> False
-> Private Memory Limit (KB) -> 0
-> Virtual Memory Limit (KB) -> 0
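As a quick sanity check that the pool really ended up 64-bit, something like this logged from Application_Start will tell you what the worker process actually is (a minimal sketch; where you surface the string is up to you):

using System;

public static class BitnessCheck
{
    // Confirms whether the worker process actually runs 64-bit after the
    // app pool changes above; Is64BitProcess is available from .NET 4.0.
    public static string Describe()
    {
        return string.Format(
            "64-bit process: {0}, 64-bit OS: {1}, IntPtr size: {2} bytes",
            Environment.Is64BitProcess,
            Environment.Is64BitOperatingSystem,
            IntPtr.Size);
    }
}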
Thank you guys. I agree that I can do it in my DB. I am handling a huge volume of requests per second. My DB query is like: select a, b, c, d, e from table1 where id = <primary key>. It is a very simple and efficient query, but even so, it is not giving the required performance. So we decided to use a cache. We have now resolved the issue by creating a Windows service (which creates a proxy and hosts the cache) and the web application separately; the web application internally calls this Windows service. It is working now. Thank you for all the suggestions.
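For anyone who stays in-process instead: System.Runtime.Caching.MemoryCache accepts explicit limits at construction time, which matters because its defaults are heuristic and a 32-bit worker can throw OutOfMemoryException long before physical RAM is exhausted. A sketch (the cache name and limit values are illustrative):

using System;
using System.Collections.Specialized;
using System.Runtime.Caching;

class CacheSetup
{
    static void Main()
    {
        NameValueCollection config = new NameValueCollection();
        config.Add("CacheMemoryLimitMegabytes", "4096");    // hard cap on cache size
        config.Add("PhysicalMemoryLimitPercentage", "49");  // % of machine RAM the cache may use
        config.Add("PollingInterval", "00:02:00");          // how often the limits are enforced

        MemoryCache cache = new MemoryCache("RecordCache", config);
        cache.Add("record:42", "payload", DateTimeOffset.Now.AddHours(1));
        Console.WriteLine(cache.GetCount());
    }
}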

CPU usage extremely high on TS deployment

Our application is written in .NET (Framework 3.5). We are experiencing problems with the application's performance when deployed in a terminal services environment. The client is using a TS farm. They have 4 GB of RAM and a decent Xeon processor.
When the application is opened in this environment, it sits at 25% CPU usage even when idle. When deployed in a normal client-server environment, it behaves normally, spiking the CPU usage when necessary and dropping down to 0 when idle.
Does anyone have any ideas what could be causing this? Or, what I could do to investigate? We have no memory leaks that we can find using performance profiling tools.
This is a WinForms application
We don't have a TS environment available to test on
The application is a Business Application.
Basically, capturing and updating of data. It's a massive business application, but there is little multithreading, few listeners, etc. We do have ANTS Profiler (memory/performance), but as mentioned, in our environment we don't have the problem - it only occurs in the TS environment
Well, there are a few questions before we can really get you too far.
Is this a Console Application? WinForms Application? or Windows Service?
Do you have a Terminal Services environment available?
What does your application do?
Depending on what the application does, you might check to see if there is unusually high activity on their hardware that you have not accounted for. An example I have noticed in the past is having a FileSystemWatcher accidentally listening to a "drop location" for reporting on a client server. Things of that nature - items that, while "idle", shouldn't be busy, but are.
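To make that concrete, the kind of thing to look for is a watcher wired up broadly and left running - a hypothetical sketch of the anti-pattern (the path and handler are made up):

using System;
using System.IO;

class WatcherExample
{
    static void Main()
    {
        // Pointed at a busy share with subdirectories included, this keeps the
        // process doing work even when the UI looks completely idle.
        FileSystemWatcher watcher = new FileSystemWatcher(@"\\server\reports\drop");
        watcher.IncludeSubdirectories = true;   // multiplies event volume
        watcher.Changed += delegate(object s, FileSystemEventArgs e)
        {
            Console.WriteLine("Processing " + e.FullPath);  // runs on every change
        };
        watcher.EnableRaisingEvents = true;
        Console.ReadLine();
    }
}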
Otherwise, if you have the ability to do so, you could also use a tool such as ANTS Profiler from RedGate to see WHAT is using the CPU time on the environment.
Look for sections in your application that constantly repaint the window. Factor those out so that when sitting idle it isn't constantly repainting.
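A minimal sketch of factoring that out: mark state dirty and let a timer decide whether a repaint is actually needed (the form and member names are hypothetical):

using System.Windows.Forms;

public class DashboardForm : Form
{
    private readonly Timer _refreshTimer = new Timer();
    private bool _dirty;

    public DashboardForm()
    {
        // A timer that blindly calls Invalidate() forces a redraw cycle even
        // when nothing changed; over terminal services every redraw is also
        // shipped across the wire, which is what pins the CPU at idle.
        _refreshTimer.Interval = 250;
        _refreshTimer.Tick += delegate
        {
            if (_dirty) { Invalidate(); _dirty = false; }
        };
        _refreshTimer.Start();
    }

    public void MarkDirty() { _dirty = true; }
}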

Does processModel memoryLimit apply to ASP.Net only? (System.OutOfMemoryException)

We are running a .Net 1.1 based Windows Service (not an ASP.Net application), and we are getting System.OutOfMemoryException errors under heavy load.
The service basically hosts an in-memory cache consisting of an Asset hashtable; nested within that is an account hashtable, and within that is a class that stores values for a given time period (for the Asset+Account combination). The service serves up aggregates of this data to clients, as well as accepting updates to the data. The total number of nodes remains constant throughout the service's lifetime.
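For reference, the shape of the cache is roughly the following (simplified; real class names differ, and .NET 1.1 has no generics, so it is Hashtables all the way down):

using System.Collections;

public class AssetCache
{
    // assetId -> (accountId -> per-period values for that Asset+Account)
    private Hashtable _assets = Hashtable.Synchronized(new Hashtable());

    public void Update(string assetId, string accountId, object periodValues)
    {
        Hashtable accounts = (Hashtable)_assets[assetId];
        if (accounts == null)
        {
            accounts = Hashtable.Synchronized(new Hashtable());
            _assets[assetId] = accounts;
        }
        // Keys are fixed for the service lifetime; updates replace values only.
        accounts[accountId] = periodValues;
    }
}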
In machine.Config, we see things such as:
<processModel
enable="true"
timeout="Infinite"
idleTimeout="Infinite"
shutdownTimeout="00:00:05"
requestLimit="Infinite"
requestQueueLimit="5000"
restartQueueLimit="10"
memoryLimit="60"
webGarden="false"
cpuMask="0xffffffff"
userName="machine"
password="AutoGenerate"
/>
These all seem to be related to ASP.NET/IIS applications, but our OutOfMemoryException is not occurring under ASP.NET, and there seems to be no equivalent configuration setting for non-ASP.NET applications.
Does this section perhaps apply to all .Net based applications, not just ASP.Net?
I ask because our service was getting up to around 1.2 GB of memory consumption (we are storing a large database in memory, yes, with good reason) when the error occurred, which is coincidentally roughly equal to 60% of 2 GB (the memory "limit" of 32-bit applications). Could this apparent IIS config setting be causing our Windows service's memory to be capped as well?
To complicate matters a bit further, we are running this on .NET 1.1 32-bit, under 64-bit Windows Server 2003 (yes, we do have a reason for this unusual configuration), with 12 GB of RAM. From what I understand, each 32-bit process should be able to address up to 4 GB of RAM, should it not? Or does this require changes to either the registry or a .NET config file?
NOTE: I am aware of the /3GB Windows startup switch, but since we are on 64-bit Windows, I don't think it should apply (feel free to correct me if I'm wrong).
Update 1
People seem to agree that processModel configuration is specific to ASP.Net applications only.
One answer says that 32-bit apps on a 64-bit OS still have a 2 GB per-process limit, but most every reference I have been able to find says that each 32-bit process has access to 4 GB on a 64-bit OS. (But perhaps this is only enabled by setting the IMAGE_FILE_LARGE_ADDRESS_AWARE bit?)
Some relevant links
How to set the IMAGE_FILE_LARGE_ADDRESS_AWARE bit for C# apps:
http://bytes.com/groups/net-c/569198-net-4gt
IIS6 Available Memory for 32-Bit Application with Web Garden on x64 OS (32 GB RAM)
.NET Debugging Demos Lab 3: Memory:
http://blogs.msdn.com/tess/archive/2008/02/15/net-debugging-demos-lab-3-memory.aspx
Should be useful for finding the specifics of the OutOfMemoryException.
Pushing the Limits - Virtual Memory:
http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx
Read this to understand concepts, and use testlimit to rule out machine/config issues. Once convinced it's your app's fault, read & re-read the articles from Tess' blog.
Final Update
Well, for our situation, this turned out to be a missing .NET service pack. Apparently there was an issue with Remoting throwing this exception; after installing the service pack it cleared up entirely!
The processModel configuration element is specific to ASP.NET processes and is not applicable to other .NET processes.
If you are running a 32-bit process on a 64-bit OS, you're still subject to the 32-bit process limit of 2 GB. The practical limit is actually closer to 1.5 to 1.8 GB, depending on your application characteristics - in other words, it's very unlikely you will ever actually reach the 2 GB process barrier.
In order for your Windows service to take advantage of the full 4 GB of process space you're expecting, you will need to:
mark your process as LARGE_ADDRESS_AWARE. I believe this can be done using editbin.exe (command sketched after this list), but I've never done it! It also might open up a new can of worms... :) I'll see if I can't validate.
add /3GB in boot.ini
reboot server
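For the editbin step, the invocation is the standard MSVC tool, run from a Visual Studio command prompt against the built executable (the file name here is yours):

editbin /LARGEADDRESSAWARE YourService.exe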
Also consider the memory allocation profile of your application. If you are allocating objects greater than 85 K in size, these objects will be allocated on the large object heap. The large object heap is swept, but not compacted like the other heaps, meaning you could be experiencing fragmentation which will eventually keep the .NET memory manager from allocating a contiguous block of memory to satisfy a request.
You likely want to take snapshots of the process and review what objects are in which heaps to get a better idea of what's going on within your process memory space.
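The ".NET CLR Memory" performance counters are one cheap way to watch the large object heap between snapshots - a sketch ("MyService" is a placeholder; the instance name must match your process name):

using System;
using System.Diagnostics;

class LohMonitor
{
    static void Main()
    {
        // Per-process CLR heap counters. "Large Object Heap size" includes free
        // blocks, so steady growth here without matching growth in live data is
        // a common fragmentation sign.
        PerformanceCounter loh = new PerformanceCounter(
            ".NET CLR Memory", "Large Object Heap size", "MyService");
        PerformanceCounter all = new PerformanceCounter(
            ".NET CLR Memory", "# Bytes in all Heaps", "MyService");
        Console.WriteLine("LOH: {0:N0} bytes / all heaps: {1:N0} bytes",
            loh.NextValue(), all.NextValue());
    }
}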
Also, check the size of the page file on the server. An inadequately sized page file can also cause problems, considering it's shared across all processes, though that tends to surface as system exceptions with some verbiage around 'virtual memory'.
Good luck!
Z
References:
Memory Limits for Windows Releases
Tess Ferrandez, .NET Debugging: Memory
The processModel key is only used for ASP.NET, and even then, on Windows Server 2003 it's pretty much useless because you have the application pool configuration.
The /3GB switch does not apply to 64-bit Windows.
As I understand it, you CAN get OutOfMemoryExceptions if you have objects pinned in memory, preventing the GC from effectively compacting the heap when a collection occurs.
You should strongly consider moving your service to a 64-bit application if you know you are jamming gigabytes of data into it. IMO you're playing with fire if you are coming this close to the limit.
