I acquired an ASP.NET Web Forms website built by some contract developers. The site was pretty bad at first and ran slowly, but it has since been tweaked and runs okay in an isolated environment.
The site sits on a single (beefy) web server and connects to an Oracle database. It lives in a very large organization's data center but is not load balanced. Lately the site has been receiving about 3-4x its typical traffic, and it is crawling with about 4k unique users a day. It runs on IIS 6, by the way. The IT dept. has examined the CPU and memory levels and they appear fine. I know there are some other tweaks I can make to IIS to cache static files, and I am adding OutputCache to controls where it makes sense. However, what else could cause a slowdown that appears to be driven by load? I'm unsure whether the application pool needs more memory allocated or the site is simply a piece of junk. I know the latter is true, but it surprises me that significant load would be a code-only issue.
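For reference, the control-level caching I'm adding looks roughly like this; the control and method names below are placeholders rather than the real ones:

using System;
using System.Web.UI;

// Cache this user control's rendered output for 120 seconds so the expensive
// data binding only runs when the cached copy expires (names are placeholders).
[PartialCaching(120)]
public partial class ProductListControl : UserControl
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // BindProducts();  // hypothetical data-binding call
        }
    }
}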
Any ideas?
Look to see whether indexes are applied for the queries/stored procedures being run. Also, see whether the page is doing selects or updates/deletes. Applying indexes can slow down deletes/updates while speeding up selects.
Usually there are profilers in the database that can be run against the queries, and they will indicate which indexes should be applied to which tables. Ask your database admin whether that can be run on the stored procedures you use.
We have been working on a cloud migration project in which we made the following changes:
SQL Server 2014 to Azure SQL PaaS
Redis cache (Windows port) to Azure Redis PaaS
Static files from shared drives to Azure File Service
Implemented transient fault handling for database interactions (a simplified sketch of the pattern appears after this list).
HttpSessionState changed from SQL Server to custom (Redis PaaS)
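The transient fault handling is conceptually just a retry wrapper around database calls; here is a simplified sketch of the pattern (not our actual code, and the names are illustrative):

using System;
using System.Data.SqlClient;
using System.Threading;

// Simplified sketch of the retry-on-transient-failure pattern; the real project
// uses a proper retry policy with transient-error detection.
static class DbRetry
{
    public static T Execute<T>(Func<T> dbAction, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return dbAction();
            }
            catch (SqlException)
            {
                if (attempt >= maxAttempts) throw;
                // Assume the failure is transient: back off briefly, then retry.
                Thread.Sleep(TimeSpan.FromMilliseconds(200 * attempt));
            }
        }
    }
}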
The application consists of two web applications that use the same database:
One built in the classic .NET Web Forms model.
The other built using .NET MVC4.
After we moved the applications from the existing Rackspace environment (2 servers, each with 4 GB RAM) to Azure, we ran a load test and got the following results:
The MVC4 application is fractionally faster.
The Web Forms application started performing poorly; under the same load, response time increased from 0.46 seconds to 45.8 seconds.
Memory usage is the same, database utilization is around 30%-40%, and CPU utilization is nearly 100% (on all the web servers) at 1,100 concurrent users (at Rackspace, it served 4,500 concurrent users).
We tested the application on 2 D5 Azure VMs, which have more RAM and faster CPUs.
Can anyone explain how such a drastic performance drop (one application performing almost the same, the other performing almost 100 times slower) is possible?
NB: One observation: CPU utilization stays at 100% even 30 minutes after stopping the load test, and then drops quickly.
I will second the notion (emphatically) that you invest as much time and energy as you can in profiling your application to identify bottlenecks. Run profiles on-premises and in Azure and compare, if possible.
Your application clearly has many moving parts and a reasonably large surface area... that's no crime, but it does mean that it's hard to pinpoint the issue(s) you're having without some visibility into runtime behavior. The issue could lie in your Redis caching, in the static file handling, or in the session state loading/unloading/interaction cycle. Or it could be elsewhere. There's no magic answer here... you need data.
That said... I've consulted on several Azure migration projects and my experience tells me that one area to look closer at is the interaction between your ASP.NET Web Forms code and SQL. Overly-chatty applications (ones that average multiple SQL calls per HTTP request) and/or ones that issue expensive queries that perform lots of logic on the database or return large result sets tend to exhibit poor performance in public clouds like Azure, where code and data may not be co-located, "noisy neighbor" problems can impact database performance, etc. These issues aren't unique to Web Forms applications or Azure, but they tend to be exacerbated in older, legacy applications that were written with an assumption of code and data being physically close. Since you don't control (completely) where your code and data live in Azure relative to each other, issues that may be masked in an on-prem scenario can surface when moving to the cloud.
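To make "chatty" concrete, here is a deliberately simplified sketch; the table and column names are made up, but the shape of the problem is typical:

using System.Collections.Generic;
using System.Data.SqlClient;

// Hypothetical schema (an OrderLines table) purely to illustrate round trips.
static class OrderTotals
{
    // Chatty: one database round trip per order. On-premises the per-call latency
    // might be a millisecond; in the cloud it can easily be several times that,
    // multiplied by N calls per request.
    public static Dictionary<int, decimal> GetTotalsChatty(string connStr, IEnumerable<int> orderIds)
    {
        var totals = new Dictionary<int, decimal>();
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            foreach (int id in orderIds)
            {
                using (var cmd = new SqlCommand(
                    "SELECT ISNULL(SUM(Amount), 0) FROM OrderLines WHERE OrderId = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", id);
                    totals[id] = (decimal)cmd.ExecuteScalar();
                }
            }
        }
        return totals;
    }

    // Set-based: the same answer in a single round trip.
    public static Dictionary<int, decimal> GetTotalsInOneCall(string connStr)
    {
        var totals = new Dictionary<int, decimal>();
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "SELECT OrderId, SUM(Amount) FROM OrderLines GROUP BY OrderId", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    totals[reader.GetInt32(0)] = reader.GetDecimal(1);
            }
        }
        return totals;
    }
}

If the Web Forms app leans on the first pattern (often via data binding, as noted below), the extra per-call latency in Azure compounds quickly.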
Some specifics to consider:
take a close look at the use of data binding in your Web Forms app... in practice it tends to encourage expensive queries and transfer of large result sets from database to application, something you might sometimes get away with on-premises but not in the cloud
take another look at your SQL configuration... you don't mention what tier you're using (Basic, Standard, Premium) but this choice can have a big impact on your overall app performance (and budget!). If it turns out (for example) that your Web Forms app does issue expensive queries, then use of a higher tier may help
Azure SQL DB tiers
familiarize yourself with the notion of "cloud native" vs. "cloud capable" applications... generally speaking, just because you can find a way to run an application in the cloud doesn't mean it's ideally suited to do so. From your description it sounds like you've made some effort to utilize some cloud-native services, so that's a good start. But if I had to guess (since we can't see your code) I'd think that you might need some additional refactoring in your Web Forms app to make it more efficient and better able to run in an environment you don't have 100% control over.
More on cloud-native
dated but still relevant advice on Azure migration
If you can give us more details on where you see bottlenecks, we can offer more specific advice.
Best of luck!
There is some loop in the code that causes 100% CPU.
When the problem occurs, take a memory dump (from the Kudu console) and analyze it in WinDbg:
1) List per-thread CPU time with !runaway
2) Check the call stacks of the threads, especially the biggest CPU consumers, with ~*e!clrstack and with ~*kb
We are seeing a very high amount of CPU and memory usage from one of our .NET MVC apps and can't seem to track down the cause. Our group does not have access to the web server itself, but instead gets notified automatically when certain limits are hit (90+% of CPU or memory). Running locally, we can't seem to find the problem. Some items we think might be the culprit:
The app has a number of threads running in the background when users take certain actions
We are using memcached (on a different machine than the web server)
We are using web sockets
Other than that the app is pretty standard as far as web applications go. Couple of forms here, login/logout there, some admin capabilities to manage users and data; nothing super fancy.
I'm looking at two different solutions and wondering what would be best.
Create a page inside the app itself (available only to app admins) that shows information about memory and CPU being used. Are there examples of this or is it even possible?
Use some type of 3rd party profiling service or application that gets installed on the web servers and allows us to drill down to find what is causing the high CPU and memory usage in the app.
I recommend the ASP.NET MVC MiniProfiler: http://miniprofiler.com/
It is simple to implement and extend, can run in production, and can store its results in SQL Server. I have used it many times to find difficult performance issues.
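To give an idea of how little code it takes, here is a rough sketch (the bootstrap calls differ between MiniProfiler versions, so check the docs for the one you install):

using StackExchange.Profiling;

// In Global.asax.cs: start/stop a profile per request (older MiniProfiler API;
// newer versions use MiniProfiler.StartNew() instead of Start()).
protected void Application_BeginRequest()
{
    if (Request.IsLocal) MiniProfiler.Start();
}

protected void Application_EndRequest()
{
    MiniProfiler.Stop();
}

// In controller or page code: wrap suspect sections in named steps.
// Step() is null-safe, so it is harmless when no profiler is running.
public void LoadDashboardData()
{
    using (MiniProfiler.Current.Step("Load dashboard data"))
    {
        // ... expensive work to be measured ...
    }
}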
Another possibility is to use http://getglimpse.com/ in combination with the miniprofiler glimpse-plugin https://github.com/mcliment/miniprofiler-glimpse-plugin
Both tools are open source and don't require admin access to the server.
You can hook up Preemptive's Runtime Intelligence to it: http://www.preemptive.com/
Otherwise a profiler or load test could help find the problem. Do you have anything monitoring the actual machine health (processor usage, memory usage, disk queue lengths, etc.)?
http://blogs.msdn.com/b/visualstudioalm/archive/2012/06/04/getting-started-with-load-testing-in-visual-studio-2012.aspx
Visual Studio has a built-in profiler (depending on version and edition). You may be able to query the problem web server via WMI, or write/provide diagnostic recording/monitoring tools and hand them over to someone who does have access.
Do you have any output caching? What version of IIS? Does the 90% processor usage you are being alerted to show that your web process is actually the one responsible? (Perhaps it's not your app if the alert is improperly configured.)
I had a similar situation and created a system monitor for my app admins, based on this project.
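The core of such a monitor page is just reading a few figures from the current worker process; here is a minimal sketch (not the project linked above; the controller name and admin role are placeholders):

using System.Diagnostics;
using System.Text;
using System.Web.Mvc;

// Minimal admin-only diagnostics action: reports memory and CPU figures for the
// current worker process. It is a rough health check, not a profiler.
[Authorize(Roles = "Admin")]
public class DiagnosticsController : Controller
{
    public ActionResult Index()
    {
        var proc = Process.GetCurrentProcess();
        var sb = new StringBuilder();
        sb.AppendLine("Working set (MB):   " + proc.WorkingSet64 / (1024 * 1024));
        sb.AppendLine("Private bytes (MB): " + proc.PrivateMemorySize64 / (1024 * 1024));
        sb.AppendLine("Total CPU time:     " + proc.TotalProcessorTime);
        sb.AppendLine("Thread count:       " + proc.Threads.Count);
        return Content(sb.ToString(), "text/plain");
    }
}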
I have an online store built with the following languages/technologies: C#, MVC3, StructureMap for DI, SignalR for real-time notifications, and FBConnect for member login.
I am running this site on a dedicated server with a Core 2 Quad CPU @ 2.40GHz and 8GB of RAM, but CPU usage still reaches 60-80% when many users are accessing the site. The site loads photos from the database, but I don't think this is the problem because I've already implemented caching of these photos, which you can see in my older post: MVC3 + Streaming images from the Database is causing very high CPU usage on my server. I've even modified my pages to initially load 20 photos, and to only load more photos when the user scrolls to the bottom of the page.
I've discussed this with a friend who's also a .NET developer, and he said I should probably look into the Session-State modes because it might help. I haven't changed anything with regard to Session-State on my site, so it's still using the default InProc.
My question is: what's the best Session-State mode to use for handling heavy traffic? And will it improve my site's performance?
Just to give you a picture of how the site gets a lot of users, here's how it works:
1. Photos of items for sale are posted by the seller in albums (max 200 photos per album, loaded 20 at a time).
2. The first customer to comment on/reserve an item is the winning buyer.
3. The seller then confirms the comment/reservation of the first buyer.
The site has more than 1,000 users, and at least 80% of these users access the site at the same time.
Is it okay that I'm using the default InProc? Or should I use StateServer or SQLServer mode?
Thanks
I don't think this is the problem because[...]
You are guessing. As long as you are guessing you will fail to handle the performance issue (assuming it is an issue: see below).
You need to measure. Use something like mini-profiler to determine exactly what is taking the time.
But:
the CPU usage still reaches 60-80%
Does the site slow down? Are requests queuing up? Do the users perceive the site as slow?
That level of CPU usage might be quite normal for the rate of requests you are getting.
If you use a separate server to store your sessions, that will ease the session management load on your primary server, but you'll gain network overhead when reading/writing to sessions on the other server.
Using SQLServer mode I believe is the best option in your case, especially since it gives you the benefit of having a "hard" copy of your sessions, in case there is any kind of dispute over who commented/reserved the item first.
You're already using SQL to load your images, so why not just give it one more thing to do?
If I'm overlooking something, I'll let the community point it out.
In my opinion, 800 users hitting the same server at once is not a light load.
Session state has nothing to do with it.
You're loading images from a database and holding them in cache. How many images do you have in cache? 100KB per image? 50K images or more?
If you get to a point that you have too many things in cache (not just images!), ASP.NET will automatically discard things in cache (depending on their importance). When you come to that point, your application will be constantly putting new things in cache that are being erased almost immediately after being inserted which is the same as not having the images in cache.
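You can also see and influence that eviction behaviour directly: give cached images an expiration and a priority when you insert them, and log the removal callback to find out whether the cache is churning. A rough sketch, assuming the images are cached as byte arrays:

using System;
using System.Web;
using System.Web.Caching;

static class ImageCache
{
    // Cache an image with a sliding expiration and a low priority, and trace when
    // ASP.NET evicts it so you can tell whether the cache is constantly churning.
    public static void Add(string key, byte[] imageBytes)
    {
        HttpRuntime.Cache.Insert(
            key,
            imageBytes,
            null,                                 // no cache dependencies
            Cache.NoAbsoluteExpiration,
            TimeSpan.FromMinutes(20),             // sliding expiration
            CacheItemPriority.Low,                // evict these before more important items
            (k, value, reason) => System.Diagnostics.Trace.WriteLine(
                "Cache evicted " + k + ": " + reason));
    }

    public static byte[] Get(string key)
    {
        return HttpRuntime.Cache[key] as byte[];
    }
}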
Anyway, I still think this might not be the case because if you really have 800 users hitting the same server at the same time, that's a lot.
I agree with @Richard. Use something like mini-profiler.
This is a very general question, so I won't be providing any code as my project is fairly large.
I have an ASP.NET project which I've been maintaining and adding to for a year now. There are about 30 pages in total, each mostly having a couple of GridViews and SqlDataSources, and usually not more than 10-15 methods in the code-behind. There is also a fairly hefty LINQ-to-SQL .dbml file, with around 40-50 tables.
The application takes about 30-40 seconds to compile, which I suppose isn't too bad - but the main issue is that when deployed, it's slow at loading pages compared to other applications on the same server and app pool - it can take around 10 seconds to load a simple page. I'm very confident the issue isn't isolated to any specific page - it seems more of a global application issue.
I'm just wondering if there are any settings in the web.config etc. I can use to help speed things up? Or just general tips on common 'mistakes' or issues developers encounter that can cause this. My application is close to completion, and the speed issues are really tainting the customer's view of it.
As a first step, find out the source of the problem: either the application side or the database side.
Application side:
Start by enabling tracing for slow pages and checking the size of the ViewState; a large ViewState sometimes causes slow page loads.
Database side:
Use SQL Profiler to see exactly what is taking a long time to complete.
Useful links:
How to: Enable Tracing for an ASP.NET Application
Improve ASP.NET Performance By Disabling ViewState And Setting Session As ReadOnly
How to Identify Slow Running Queries with SQL Profiler
Most common oversight probably: don't forget to turn off debugging in your web.config before deploying.
<compilation debug="false" targetFramework="4.0">
A few others:
Don't enable session state or viewstate where you don't use it
Use output caching where possible, consider a caching layer in general i.e. for database queries (memcached, redis, etc.)
Minify and combine CSS
Minify javascript
What to do now:
Look at page load in Firebug or Chrome developer tools. Check to make sure you're not sending a huge payload over the wire.
Turn on trace output to see how the server is spending its time.
Check network throughput to your server.
How to avoid this in the future:
Remember that speed is a feature. If your app is slow as a dog, customers can't help but think it sucks. This means you want to run your app on "production" hardware as soon as you can and deploy regularly so that you catch performance problems as they're introduced. It's no fun to have an almost-done app that takes 10 seconds to deliver a page. Hopefully, you get lucky and can fix most of this with config. If you're unlucky, you might have some serious refactoring to do.
For example, if you've used ViewState pretending it was magic and free, you may have to rework some of that dependency.
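In the simplest cases that rework is just turning ViewState off for controls that are re-bound on every request anyway; a small sketch, with placeholder names (GridView1 would be declared in the .aspx markup, and GetProducts() is a hypothetical data-access call):

using System;
using System.Web.UI;

// Sketch: a grid that is re-populated on every request gains nothing from
// ViewState, so disabling it shrinks the page payload.
public partial class ProductsPage : Page
{
    protected void Page_Init(object sender, EventArgs e)
    {
        GridView1.EnableViewState = false;
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        GridView1.DataSource = GetProducts();
        GridView1.DataBind();
    }
}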
Keep perf on a short leash. Your app will be snappy, and people will think you are awesome.
I have used ASP.NET mostly in intranet scenarios and am pretty familiar with it, but for something such as a shopping cart or similar session data there are various possibilities. To name a few:
1) State-Server session
2) SQL Server session
3) Custom database session
4) Cookie
What have you used, what are your success stories or lessons learned, and what would you recommend? This would obviously make a difference in a large-scale public website, so please comment on your experiences.
I have not mentioned in-proc since in a large-scale app this has no place.
Many thanks
Ali
The biggest lesson I learned was one I already knew in theory, but got to see in practice.
Removing all use of sessions entirely from an application (does not necessarily mean all of the site) is something we all know should bring a big improvement to scalability.
What I learnt was just how big an improvement it could be. By removing the use of sessions and adding some code to handle what they had handled before (which at each individual point was a performance loss, as each point was now doing more work than before), the overall gain was massive: actions that had taken many seconds or even a couple of minutes became sub-second, CPU usage became a fraction of what it had been, and the number of machines and amount of RAM went from clearly not enough to cope to a rather over-indulgent amount of hardware.
If sessions cannot be removed entirely (people don't like the way browsers use HTTP authentication, alas), moving much of their use into a few well-defined spots, ideally in a separate application on the server, can have a bigger effect than which session-storage method is used.
In-proc certainly can have a place in a large-scale application; it just requires sticky sessions at the load balancing level. In fact, the reduced maintenance cost and infrastructure overhead by using in-proc sessions can be considerable. Any enterprise-grade content switch you'd be using in front of your farm would certainly offer such functionality, and it's hard to argue for the cash and manpower of purchasing/configuring/integrating state servers versus just flipping a switch. I am using this in quite large scaled ASP.NET systems with no issues to speak of. RAM is far too cheap to ignore this as an option.
In-proc session (at least when using IIS6) can recycle at any time and is therefore not very reliable because the sessions will end when the server decides, not when the session actually times out. The sessions will also expire when you deploy a new version of the web site, which is not true of server-based session providers. This can potentially give your users a bad experience if you update in the middle of their session.
Using SQL Server is the best option because it is possible to have sessions that never expire. However, the cost of the server, disk space, its maintenance, and performance all have to be considered. I was using one on my e-commerce app for several years until we changed providers to one with very little database space. It was a shame that it had to go.
We have been using the state service for about 3 years now and haven't had any issues. That said, we now have the session timeout set to an hour, and in e-commerce that is probably costing us some business versus the never-expire model.
When I worked for a large company, we used a clustered SQL Server in another application that was more critical to keep online. We had multiple redundancy on every part of the system, including the network cards. Keep in mind that adding a state server or service adds a potential single point of failure to the application unless you go the clustered route, which is more expensive to maintain.
There was also an issue when we first switched to the SQL-based approach where binary objects couldn't be serialized into session state. I only had a few, and I modified the code so it wouldn't need the binary serialization, just to get the site online. However, when I went back to fix the serialization issue a few weeks later, it suddenly didn't exist anymore. I am guessing it was fixed in a Windows Update.
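For anyone hitting the same thing: with the StateServer or SQLServer modes, everything placed in session has to be binary-serializable, which in simple cases just means marking the type. A trivial illustration (the type is made up):

using System;

// With StateServer/SQLServer session modes the session contents are
// binary-serialized, so types stored there must be marked [Serializable]
// (or you store a serializable DTO instead). CartItem is an illustrative type.
[Serializable]
public class CartItem
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

// Usage in a page or controller:
//   Session["Cart"] = new CartItem { ProductId = 42, Quantity = 1 };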
If you are concerned about security, state server is a no-no. State server performs absolutely no access checks; anybody who can reach the TCP port the state server uses can access or modify any session state.
In-proc is unreliable (as you mentioned), so it's not worth considering.
Cookies aren't really a session-state replacement since you can't store much data there.
I vote for database-based storage of some kind (if it's needed at all); it has the best chance of scaling.