Azure Cloud Service Performance - C#

I have an Azure Cloud Service that hosts a website that generates PDF files. The PDF files can vary in size, up to around 10MB. It's a .NET MVC Entity Framework site that uses PDFsharp/MigraDoc to generate the PDFs.
I've been trying to improve the performance of the PDF generation and I've found that even when increasing the instance size - for example from A2 to A4 - I see little change in performance.
I checked the monitor in the portal and it doesn't show very high usage, and I also checked Resource Monitor on one of the Cloud Service servers; memory usage was at most 50%.
Azure Portal Monitor:
Resource Monitor:
Note: the memory usage is at 50%, CPU usage is low and most of the network usage displayed seems to be from the remote desktop connection.
I imagine the slowness is either due to slow disk write speed or long garbage collection pauses, but I'm not sure how to prove this.
I'd greatly appreciate any pointers or ideas that can help me improve the performance. I can also upgrade the instance size to one of the newer instances (D/D2 series) if you think a faster SSD or CPU will help, but since the CPU usage is so low I'm not sure that it would.
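For reference, this is roughly how I plan to instrument a single generation to separate the render step from the disk write and to count gen 2 collections. It's only a sketch; the renderer and Save overload reflect my understanding of the MigraDoc/PDFsharp API, and the method and variable names are just placeholders.

using System;
using System.Diagnostics;
using System.IO;
using MigraDoc.DocumentObjectModel;
using MigraDoc.Rendering;

static class PdfTimings
{
    public static void RenderWithTimings(Document doc, string outputPath)
    {
        int gen2Before = GC.CollectionCount(2);

        var swRender = Stopwatch.StartNew();
        var renderer = new PdfDocumentRenderer(true) { Document = doc };
        renderer.RenderDocument();                        // CPU-bound layout/render step
        swRender.Stop();

        byte[] bytes;
        var swSave = Stopwatch.StartNew();
        using (var ms = new MemoryStream())
        {
            renderer.PdfDocument.Save(ms, false);         // in-memory serialization only
            bytes = ms.ToArray();
        }
        swSave.Stop();

        var swDisk = Stopwatch.StartNew();
        File.WriteAllBytes(outputPath, bytes);            // the actual disk write, isolated
        swDisk.Stop();

        int gen2Delta = GC.CollectionCount(2) - gen2Before;
        Trace.TraceInformation("Render {0} ms, save {1} ms, disk {2} ms, gen 2 GCs {3}",
            swRender.ElapsedMilliseconds, swSave.ElapsedMilliseconds,
            swDisk.ElapsedMilliseconds, gen2Delta);
    }
}

If the disk timing dominates, that would point at the drive; if gen 2 collections pile up during rendering, that would point at the GC.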
Thanks!

Related

Reliable way to measure RAM usage of desktop application

I'm working on an automated testing system (written in C#) for an application at work, and I'm having great difficulty measuring the peak RAM usage it needs, e.g. while loading certain files (memory usage is typically much higher during file loading).
At first I tried Process.PeakWorkingSet64, and it worked quite well on the machines in use at the time, until the testing system got deployed to more machines and some VMs.
On some of these machines PeakWorkingSet64 was way higher than on others (e.g. 180MB vs 420MB).
I tried various other properties of Process and also tried PerformanceCounter, but I don't know of any other metric that gives me a peak value (I really want the peak, not the current state).
I can't really get my head around why the PeakWorkingSet64 value is so much higher on some systems. I always run exactly the same software with exactly the same workloads on these machines, so if the software allocates 1GB of data in RAM, I expect every system it runs on to report a peak memory usage of around 1GB.
Is there something important I'm missing here?
Any hints what I could do to measure memory usage reliably from within the testing system?
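For context, the kind of polling I've been experimenting with looks roughly like this. Tracking private bytes instead of the working set is just an idea I'm trying, since the OS grows and trims the working set differently per machine; the 50 ms interval is arbitrary.

using System;
using System.Diagnostics;
using System.Threading;

static class PeakMemory
{
    // Launches the application under test and returns the highest private-bytes
    // figure observed while it runs.
    public static long MeasurePeakPrivateBytes(string exePath, string arguments)
    {
        long peak = 0;
        using (var p = Process.Start(exePath, arguments))
        {
            while (!p.HasExited)
            {
                p.Refresh();                                   // re-read the process counters
                peak = Math.Max(peak, p.PrivateMemorySize64);  // committed private memory
                Thread.Sleep(50);                              // short spikes between polls are missed
            }
        }
        return peak;
    }
}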

EC2 Instance Selection

We have recently started using AWS free tier for our CRM product.
We are currently facing speed-related issues, so we are planning to change the EC2 instance.
It's a .NET-based website, using ASP.NET, C#, Microsoft SQL Server 2012, and IIS 7.
It would be great if someone could suggest the right EC2 instance for our usage. We are planning to use t2.medium with an MS SQL Enterprise license, Route 53, a 30GB EBS volume, CloudWatch, SES and SNS. Are we missing something here? Also, what would be the approximate monthly billing for this usage?
Thanks in advance. Cheers!!
It's impossible to say for sure what the issue is without some performance monitoring. If you haven't already, set up CloudWatch monitors. Personally, I like to use monitoring services like New Relic, as they can dive deep into your system - down to the stored procedure and ASP.NET code level - to identify bottlenecks.
The primary reason for doing this is to identify if your instance is maxing out on CPU usage, memory usage, swapping to disk, or if your bottleneck is in your networking bandwidth.
That being said, as jas_raj mentioned, the t-series instances are burstable, meaning if you have steady heavy traffic, you won't get good use from them. They're better suited for occasional peaks in load.
The m-series will provide a more stable level of performance but, in some cases, can be exceeded in performance by a bursting t-series machine. When I run CMS, CRM and similar apps in EC2, I typically start with an M3 instance.
There are some other things to consider as well.
Consider putting your DB on RDS or on a separate server with high performance EBS volumes (EBS optimized, provisioned IOPS, etc.).
If you can, separate your app and session state (as well as the data layer) so you can consider using smaller EC2 instances but scale them based on traffic and demand.
As you can imagine, there are a lot of factors that go into performance, but I hope this helps.
You can calculate the pricing based on your options by using Amazon's Simple Monthly Calculator.
Regarding your usage, I don't have a lot of experience on the Windows side of AWS, but I would point out that the amount of CPU allocated to t2 instances is based on a credit system. If that's acceptable for your usage, fine; otherwise, switch to a non-t2 instance for more deterministic CPU performance.
If you have a good understanding of your application, I would suggest you check here for the differences between instance types and for selection suggestions.

C# Memory Use Discrepancy between two Machines

I wrote a C# scraper and analyzed the markup of 30K URLs to pull certain metrics from them.
I ran the same code on two machines:
my dev box with a 4-core CPU, 8 logical processors and 32GB of RAM. It used up to 300MB of RAM by the end. As I display the WorkingSet size, I could even see the GC kick in and lower memory use, then grow back again.
an EC2 instance, same data, but with only 2 processors and 1.7GB of RAM. Here it used 1.1GB of RAM and, when all threads concluded their work, it went down to 300MB just like in my local test.
RAM usage was checked with both Environment.WorkingSet and Task Manager. My network speed is not negligible, so I don't think it could affect things even if that Amazon instance might be a little faster. (EC2 network performance differs per instance, and this one is on the affordable, hence slower, side.)
Why this memory use discrepancy? And can I somehow estimate beforehand the memory use in C#?
My guess is that, with a slower CPU in the cloud, the GC preferred to keep allocating rather than cleaning up what was already used. But this is just my theory to excuse its unexpected behavior, based on wishful thinking. Still, with 32GB of RAM it could have used way more, but it behaved; with 1.7GB of RAM it went all crazy and used 1.1GB of it... I don't get it.
In C++ I just think of how many URLs I fetch at the same time, assume an average of 256KB per page plus the size of the extracted data, and I can tell beforehand, quite precisely, how much memory will be used. But this C# test left me wondering.
As I plan to release this tool into the wild... I don't feel comfortable with it taking up over half the RAM, especially on a lesser machine.
UPDATE: I forgot to mention the operating systems: the local machine is Windows 8.1 and the EC2 instance is Windows Server 2012, both running .NET 4.5.2.
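In case it's relevant, this is a quick check I could add at startup on both machines to compare the GC configuration (GCSettings.IsServerGC needs .NET 4.5+, which both boxes have):

using System;
using System.Runtime;

class GcInfo
{
    static void Main()
    {
        // Server GC keeps larger per-core heaps and collects less eagerly than
        // workstation GC, which can make the steady-state working set much bigger.
        Console.WriteLine("Server GC:      {0}", GCSettings.IsServerGC);
        Console.WriteLine("Latency mode:   {0}", GCSettings.LatencyMode);
        Console.WriteLine("Processors:     {0}", Environment.ProcessorCount);
        Console.WriteLine("64-bit process: {0}", Environment.Is64BitProcess);
    }
}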

51Degrees Memory Consumption

I'm cross-posting this from the 51Degrees forums as it hasn't gotten much traction there.
I went ahead and implemented the latest NuGet package version of 51Degrees (2.19.1.4) into a site we manage here at work. We are attempting to bring the management of the mobile views for this site in house (it's currently done by a third party), so the only functionality we are interested in is the detection. We disabled the redirect functionality by commenting out the redirect element in the config, and we changed the logging level to Fatal (the log is in the App_Data folder).
To our understanding, those were the only changes needed, and this worked. We could switch our layout view between desktop and mobile based on the information 51Degrees was providing.
While testing and promoting through DEV and QA we noted increased memory consumption in the app pool, but nothing that we were overly worried about. The app pool at standard traffic levels consumes roughly 230 MB of memory in PROD. It will spike to 300 MB during peak times, so nothing too worrisome, especially considering we do a fair amount of InProc caching.
As of Sunday we promoted 51Degrees Lite into PROD, but disabled the mobile views (we did this in QA as well). We wanted to see how it would perform in PROD and what kind of impact it would have on the server in a live environment. Just to reiterate, QA revealed increased memory use, but we could not replicate PROD loads and variances.
PROD revealed some concerns. Memory consumption on the app pool of one of the two frontends grew slowly throughout the day, peaking at 560MB at 11 PM; the other frontend peaked at 490MB.
We confirmed the problem was isolated to 51degrees by removing it from the site, recycling, and monitoring for another day. App pool memory never exceeded 300MB.
We also ran the app pool through SciTech's memory profiler to confirm. The results showed 51Degrees consuming the majority of the additional memory above what we expected. (We can run these tests again in a QA environment if wanted; the numbers will be lower, but they will paint a picture.)
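If it helps anyone reproduce the measurement, a counter-polling sketch along these lines would chart the app pool's growth through the day. It assumes a single w3wp worker process; with more than one, the instance name needs the right #n suffix.

using System;
using System.Diagnostics;
using System.Threading;

class AppPoolMemoryLogger
{
    static void Main()
    {
        // "Process" / "Private Bytes" for the w3wp instance; with multiple worker
        // processes the instance names carry #1, #2, ... suffixes instead.
        using (var privateBytes = new PerformanceCounter("Process", "Private Bytes", "w3wp"))
        {
            while (true)
            {
                float mb = privateBytes.NextValue() / (1024f * 1024f);
                Console.WriteLine("{0:u}  {1:N0} MB", DateTime.UtcNow, mb);
                Thread.Sleep(TimeSpan.FromMinutes(1));
            }
        }
    }
}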
So the questions:
1) What would account for this large memory consumption? While a 500-600MB app pool isn't the end of the world, having our mobile detection solution more than double our app pool size is worrisome. (While our site isn't the heaviest traffic site, it does receive a fairly decent number of requests)
2) Are there any settings we can apply to prevent or reduce the memory consumption? Ideally, we'd like to limit the memory consumption of 51Degrees to just what is needed to load the product and monitor incoming requests.
Thanks for any feedback.

Does processModel memoryLimit apply to ASP.Net only? (System.OutOfMemoryException)

We are running a .NET 1.1-based Windows service (not an ASP.NET application), and we are getting System.OutOfMemoryException errors under heavy load.
The service basically hosts an in-memory cache: an Asset hashtable, nested within that an Account hashtable, and within that a class that stores values for a given time period (for the Asset+Account combination). The service serves up aggregates of this data to clients, as well as accepting updates to the data. The total number of nodes remains constant throughout the service's lifetime.
In machine.config, we see things such as:
<processModel
enable="true"
timeout="Infinite"
idleTimeout="Infinite"
shutdownTimeout="00:00:05"
requestLimit="Infinite"
requestQueueLimit="5000"
restartQueueLimit="10"
memoryLimit="60"
webGarden="false"
cpuMask="0xffffffff"
userName="machine"
password="AutoGenerate"
/>
These all seem to be related to ASP.NET/IIS applications, but our OutOfMemoryException is not occurring under ASP.NET, and there seems to be no equivalent configuration setting for non-ASP.NET applications.
Does this section perhaps apply to all .Net based applications, not just ASP.Net?
I ask because our service was getting up to around 1.2GB of memory consumption (we are storing a large database in memory, yes, with good reason) when the error occurred, which is coincidentally roughly equal to 60% of 2GB (the memory "limit" of 32-bit applications). Could this apparently IIS-specific config setting be capping our Windows service's memory as well?
To complicate matters a bit further, we are running this as .NET 1.1 32-bit, under 64-bit Windows Server 2003 (yes, we do have a reason for this unusual configuration), with 12GB of RAM. From what I understand, each 32-bit process should be able to address up to 4GB of RAM, should it not? Or does this require changes to either the registry or a .NET config file?
NOTE: I am aware of the /3GB Windows startup switch, but since we are on 64-bit Windows, I don't think that should apply (but feel free to correct me if I'm wrong).
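As a sanity check I could add to the service's logging, something like the following would confirm the pointer size and the current memory figures. (The *64 properties shown are .NET 2.0+; on 1.1 I'd have to fall back to the older int-sized PrivateMemorySize/VirtualMemorySize properties.)

using System;
using System.Diagnostics;

class ProcessMemoryCheck
{
    static void Main()
    {
        Process p = Process.GetCurrentProcess();

        // IntPtr.Size == 4 means a 32-bit process: 2GB of user address space by default,
        // more only if the image is large-address-aware.
        Console.WriteLine("Pointer size:  {0} bytes", IntPtr.Size);
        Console.WriteLine("Private bytes: {0:N0}", p.PrivateMemorySize64);
        Console.WriteLine("Virtual size:  {0:N0}", p.VirtualMemorySize64);
    }
}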
Update 1
People seem to agree that processModel configuration is specific to ASP.Net applications only.
One answer says that 32-bit apps on a 64-bit OS still have a 2GB per-process limit, but almost every reference I have been able to find says that each 32-bit process has access to 4GB on a 64-bit OS. (But perhaps this is only enabled by setting the IMAGE_FILE_LARGE_ADDRESS_AWARE bit?)
Some relevant links
How to set the IMAGE_FILE_LARGE_ADDRESS_AWARE bit for C# apps:
http://bytes.com/groups/net-c/569198-net-4gt
IIS6 Available Memory for 32-Bit Application with Web Garden on x64 OS (32Gb Ram):
IIS6 Available Memory for 32-Bit Application with Web Garden on x64 OS (32Gb Ram)
.NET Debugging Demos Lab 3: Memory:
http://blogs.msdn.com/tess/archive/2008/02/15/net-debugging-demos-lab-3-memory.aspx
Should be useful to find the specifics of the OutOfMemoryException?
Pushing the Limits - Virtual Memory:
http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx
Read this to understand concepts, and use testlimit to rule out machine/config issues. Once convinced it's your app's fault, read & re-read the articles from Tess' blog.
Final Update
Well, in our situation this turned out to be a missing .NET service pack... apparently there was an issue with remoting throwing this exception, and after installing the service pack it cleared up entirely!
The processModel configuration element is specific to ASP.NET processes and is not applicable to other .NET processes.
If you are running a 32-bit process on a 64-bit OS, you're still subject to the address space limit of a 32-bit process, which is 2GB. The practical limit is actually closer to 1.5 to 1.8GB, depending on your application's characteristics - in other words, it's very unlikely you will ever actually reach the 2GB process barrier.
In order for your Windows service to take advantage of the full 4GB of process space you're expecting, you will need to:
mark your process as LARGE_ADDRESS_AWARE. I believe this can be done using editbin.exe, but I've never done it! It might also open up a new can of worms... :) I'll see if I can't validate that.
add /3GB in boot.ini
reboot server
Also consider the memory allocation profile of your application. If you are allocating objects greater than 85K in size, then these objects will be allocated on the large object heap. The large object heap is swept, but not compacted like the other heaps, meaning that you could be experiencing fragmentation, which will eventually keep the .NET memory manager from allocating a contiguous block of memory to satisfy the request.
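A quick way to see that threshold in action (the cutoff is roughly 85,000 bytes; the exact figure is a runtime implementation detail):

using System;

class LohDemo
{
    static void Main()
    {
        byte[] small = new byte[80 * 1000];  // below the ~85,000-byte threshold: small object heap
        byte[] large = new byte[90 * 1000];  // above the threshold: large object heap

        // LOH objects report generation 2 immediately, because the LOH is only
        // collected with gen 2 and, in this era of the runtime, never compacted.
        Console.WriteLine("small array: gen {0}", GC.GetGeneration(small)); // typically 0
        Console.WriteLine("large array: gen {0}", GC.GetGeneration(large)); // 2
    }
}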
You likely want to take snapshots of the process and review which objects are in which heaps to get a better idea of what's going on within your process's memory space.
Also, check the size of the page file on the server. An inadequately sized page file can also cause problems, considering it's shared across all processes, though that tends to surface as system exceptions with some verbiage about 'virtual memory'.
Good luck!
Z
References:
Memory Limits for Windows Releases
Tess Ferrandez, .NET Debugging: Memory
The ProcessModel key is only used for ASP.NET, and even then, on Windows Server 2003 it's pretty much useless because you have the Application Pool configuration.
The /3GB switch does not apply to 64-bit Windows.
As I understand it, you CAN get OutOfMemoryExceptions if you have objects pinned in memory, preventing the GC from effectively compacting the heap when a collection occurs.
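For example, buffers handed to unmanaged code are often pinned like this; a long-lived pin in the middle of the heap is a classic source of that kind of fragmentation (illustrative sketch only):

using System;
using System.Runtime.InteropServices;

class PinningDemo
{
    static void Main()
    {
        byte[] buffer = new byte[64 * 1024];

        // Pinning fixes the buffer's address so native code can safely use it,
        // but while the handle is held the GC cannot move the object, which
        // leaves holes it cannot compact away.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            IntPtr address = handle.AddrOfPinnedObject();
            Console.WriteLine("Pinned at 0x{0:X}", address.ToInt64());
        }
        finally
        {
            handle.Free();  // release the pin as soon as it is no longer needed
        }
    }
}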
You should strongly consider moving your service to a 64-bit application if you know you are jamming gigabytes of data into it. IMO you're playing with fire if you are coming this close to the limit.
