This is a very general question, so I won't be providing any code as my project is fairly large.
I have an ASP.NET project which I've been maintaining and adding to for a year now. There are about 30 pages in total, each mostly having a couple of GridViews and SqlDataSources, and usually not more than 10-15 methods in the codebehind. There is also a fairly hefty LINQ-to-SQL .dbml file, with around 40-50 tables.
The application takes about 30-40 seconds to compile, which I suppose isn't too bad - but the main issue is that when deployed, it's slow at loading pages compared to other applications on the same server and app pool - it can take around 10 seconds to load a simple page. I'm very confident the issue isn't isolated to any specific page - it seems more of a global application issue.
I'm just wondering if there are any settings in the web.config etc. that I can use to help speed things up? Or just general tips on common 'mistakes' or issues developers encounter that can cause this. My application is close to completion, and the speed issues are really tainting the customer's view of it.
As a first step, find out the source of the problem: application side or database side.
Application side:
Start by enabling trace for slow pages and check the size of the ViewState; a large ViewState will sometimes cause slow page loads.
Database side:
Use SQL Profiler to see what exactly is taking a long time to get done.
Useful links:
How to: Enable Tracing for an ASP.NET Application
Improve ASP.NET Performance By Disabling ViewState And Setting Session As ReadOnly
How to Identify Slow Running Queries with SQL Profiler
Probably the most common oversight: don't forget to turn off debugging in your web.config before deploying.
<compilation debug="false" targetFramework="4.0">
A few others:
Don't enable session state or ViewState where you don't use them
Use output caching where possible, and consider a caching layer in general, e.g. for database queries (memcached, redis, etc.) - see the sketch after this list
Minify and combine CSS
Minify javascript
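On the output-caching point above, here is a minimal codebehind sketch for a WebForms page. The 60-second duration and server-only cacheability are assumptions to tune, and the same effect is more commonly achieved with an <%@ OutputCache %> directive on the page:

protected void Page_Load(object sender, EventArgs e)
{
    // Cache the rendered page on the server for 60 seconds, so repeat
    // requests skip the page lifecycle (and the database) entirely.
    Response.Cache.SetCacheability(HttpCacheability.Server);
    Response.Cache.SetExpires(DateTime.UtcNow.AddSeconds(60));
    Response.Cache.SetValidUntilExpires(true);
}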
What to do now:
Look at page load in Firebug or Chrome developer tools. Check to make sure you're not sending a huge payload over the wire.
Turn on trace output to see how the server is spending its time.
Check network throughput to your server.
How to avoid this in the future:
Remember that speed is a feature. If your app is slow as a dog, customers can't help but think it sucks. This means you want to run your app on "production" hardware as soon as you can and deploy regularly so that you catch performance problems as they're introduced. It's no fun to have an almost-done app that takes 10 seconds to deliver a page. Hopefully, you get lucky and can fix most of this with config. If you're unlucky, you might have some serious refactoring to do.
For example, if you've used ViewState pretending it was magic and free, you may have to rework some of that dependency.
Keep perf on a short leash. Your app will be snappy, and people will think you are awesome.
Related
I acquired an ASP.NET Web Forms website built by some contract developers. The site was pretty bad at first and ran pretty slowly, but it has been tweaked, and in an isolated environment it runs okay.
The site sits on a single (beefy) web server and connects to an Oracle database. The site is within a very large organization's data center but is not load balanced. Lately the site has received about 3-4x the traffic it typically sees, and with about 4k unique users a day the site is crawling. IIS6, by the way. The IT dept. has examined the CPU and memory levels and they appear fine. I know there are some other tweaks I can make to IIS to cache static files, and I am adding OutputCache to controls where it makes sense. However, what else could be the cause of a slowdown that appears to be caused by load? I'm unsure whether the application pool needs more memory allocated or the site is simply a piece of junk. I know the latter is true, but it surprises me that significant load would be a code-only issue.
Any ideas?
Look to see if indexes are applied to the queries/stored procedures being run. Also, see if the page is doing selects or updates/deletes. When you apply indexes, they can slow down deletes/updates while speeding up selects.
Usually there are profilers in the database that can be run against the queries, and they will indicate which indexes should be applied to which tables. See your database admin about running that against the stored procedures you use.
I have this online store built with the following languages/technologies: C#, MVC3, StructureMap for DI, SignalR for real-time notifications, and FBConnect for member login.
I am running this site on a dedicated server with a Core2 Quad CPU @ 2.40GHz and 8GB of RAM, but the CPU usage still reaches 60-80% when many users are accessing the site. The site loads photos from the database, but I don't think this is the problem because I've already implemented caching of these photos, which you can see in my older post, MVC3 + Streaming images from the Database is causing very high CPU usage on my server. I've even modified my pages to initially load 20 photos, and to only load more photos when the user scrolls to the bottom of the page.
I've discussed this with a friend who's also a .NET developer, and he said I should probably look into the Session-State modes because it might help. I haven't changed anything with regard to Session-State on my site, so it's still using the default InProc.
My Question is: What's the best Session-State mode to use that could handle large traffic? And will it improve my site's performance?
Just to give you a picture of how the site gets a lot of users, here's how it works:
1. Photos of items for sale are posted by the seller in albums (max photos/album is 200, and they are loaded 20 at a time).
2. The first customer to comment on/reserve an item is the winning buyer.
3. The seller then confirms the comments/reservations of the first buyer.
The site has more than 1000 users, and at least 80% of these users are accessing the site at the same time.
Is it okay that I'm using the default InProc? Or should I Use StateServer or SQLServer mode?
Thanks
I don't think this is the problem because [...]
You are guessing. As long as you are guessing you will fail to handle the performance issue (assuming it is an issue: see below).
You need to measure. Use something like mini-profiler to determine exactly what is taking the time.
But:
the CPU usage still reaches 60-80%
Does the site slow down? Are requests queuing up? Do the users perceive the site as slow?
That level of CPU usage might be quite normal for the rate of requests you are getting.
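If you do go the mini-profiler route, the wiring is small. A minimal sketch assuming the MVC3-era StackExchange.Profiling package (check the API of the version you actually install; the photo-repository call at the end is hypothetical):

using StackExchange.Profiling;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_BeginRequest()
    {
        MiniProfiler.Start(); // profile every request; filter here if needed
    }

    protected void Application_EndRequest()
    {
        MiniProfiler.Stop();
    }
}

// Then wrap suspect code so its cost shows up in the results:
// using (MiniProfiler.Current.Step("Load album photos"))
// {
//     photos = photoRepository.GetPage(albumId, 0, 20);
// }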
If you use a separate server to store your sessions, that will ease the session-management load on your primary server, but you'll incur network overhead when reading/writing sessions on the other server.
Using SQLServer mode I believe is the best option in your case, especially since it gives you the benefit of having a "hard" copy of your sessions, in case there is any kind of dispute over who commented/reserved the item first.
You're already using SQL to load your images, so why not just give it one more thing to do?
If I'm overlooking something, I'll let the community point it out.
In my opinion, 800 users hitting the same server is not that light.
Session state has nothing to do with it.
You're loading images from a database and holding them in cache. How many images do you have in cache? 100KB per image? 50K images or more?
If you get to a point where you have too many things in cache (not just images!), ASP.NET will automatically discard items from the cache (depending on their importance). When you reach that point, your application will constantly be putting new things into the cache that are erased almost immediately after being inserted, which is the same as not having the images in cache at all.
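One way to confirm whether that churn is actually happening is to register a removal callback when you insert items and log the evictions. A sketch (the one-hour lifetime and the ImageCache wrapper are assumptions for illustration):

using System;
using System.Diagnostics;
using System.Web;
using System.Web.Caching;

public static class ImageCache
{
    public static void Add(string cacheKey, byte[] imageBytes)
    {
        HttpRuntime.Cache.Insert(
            cacheKey, imageBytes,
            null,                          // no dependency
            DateTime.UtcNow.AddHours(1),
            Cache.NoSlidingExpiration,
            CacheItemPriority.Default,
            // Underused = evicted under memory pressure; a stream of these
            // right after inserts means the cache is thrashing.
            (key, value, reason) => Trace.WriteLine("Evicted " + key + ": " + reason));
    }
}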
Anyway, I still think this might not be the case because if you really have 800 users hitting the same server at the same time, that's a lot.
I agree with @Richard. Use something like mini-profiler.
I'm trying to determine the cause of a very long (imho) initial start up of an ASP.NET application.
The application uses various third-party libraries, and lots of references that I'm sure could be consolidated; however, I'm trying to identify (and apportion blame to) the DLLs, and to find out how much each contributes to the extended startup process.
So far, the start up times vary from 2-5 minutes depending on usage of other things on the box. This is unacceptable in my opinion based on the complexity of the site, and I need to reduce this to something in the region of 30 seconds maximum.
To be clear on the scope of the performance I'm looking for, it's the time from first request to the initial Application_Start method being hit.
So where would I start with getting information on which DLLs are loaded and how long they take to load, so I can try to put a cost/benefit together on which ones we need to tackle/consolidate?
From an ability perspective, I've been using JetBrains dotTrace for a while, and I'm clear on how to benchmark the application once we're inside the application code, but this appears to happen outside of the application code, and therefore outside of what I currently know.
What I'm looking for is methodologies on how to get visibility of what is happening before the first entry point into my code.
Note: I know that I can call the default page on recycle/upgrade to do an initial load, but I'd rather solve the actual problem rather than papering over it.
Note2: the hardware is more than sufficiently scaled and separated in terms of functionality, therefore I'm fairly sure that this isn't the issue.
Separate answer on profiling/debugging start up code:
w3wp is just a process that runs .NET code, so you can use all the profiling and debugging tools you would use for a normal .NET application.
One tricky point is that the w3wp process starts automatically on the first request to an application; if your tools do not support attaching to a process whenever it starts, that makes it problematic to investigate your application's startup code.
The trick to solve this is to add another application to the same application pool. This way you can trigger w3wp creation by navigating to the other application, then attach/configure your tools against the already-running process. When you finally hit your original application, the tools will see its loading happen inside the existing w3wp process.
With a 2-5 minute delay you may not even need a profiler - simply attach the Visual Studio debugger the way suggested above and randomly trigger "break all" several times while your site loads. There is a good chance that the slowest portion of the code will be on the stack of one of the many threads. Also watch the debug output - it may give you some clues about what is going on.
You may also use WinDbg to capture the stacks of all threads in a similar way (it could be more lightweight than VS).
Your DLL references are loaded as needed, not all at once.
Do external references slow down my ASP.NET application? (VS: Add Reference dialog)
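As a quick sanity check of that claim in your own app, you can log which assemblies are actually in the AppDomain by the time your code first runs; anything loaded lazily later will not appear yet. A minimal sketch for Global.asax.cs:

protected void Application_Start()
{
    // Dump every assembly currently loaded into the worker process.
    foreach (var asm in AppDomain.CurrentDomain.GetAssemblies())
    {
        System.Diagnostics.Trace.WriteLine("Loaded: " + asm.FullName);
    }
}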
If startup is taking 2-5 minutes, I would look at what happens in Application_Start, and at what the DLLs do once loaded. Are they trying to connect to a remote service that is very slow? Is the machine far too small for what it's doing (e.g. running a DB with large amounts of data plus the web server on an AWS micro instance or similar)?
Since the load time is probably not the IIS worker process resolving references, I would turn to traditional application profilers (e.g. JetBrains dotTrace, Red Gate ANTS) to see where the time is being spent as the DLLs initialize, and in your Application_Start method.
Other options to check alongside profiling:
profile everything, add time tracing to everything, and log the information
if you have many ASPX views that need to be compiled on startup (I think that is the default for the release configuration), then it will take some time
references to web services or other XML-serialization-related code will need serialization assemblies to be compiled if none are present yet
access to remote services (including local SQL) may require those services to start up too
aggressive caching in the application/remote services may require pre-population of the caches
Production:
What is the goal for startup time? Figure that out first; otherwise you will not be able to reach it.
What price are you willing to pay to decrease startup time? Adding 1-10 more servers may be cheaper than spending months of development/test time and delaying the product.
Consider multiple servers, rolling restarts with warm-up calls, and web gardens.
If caching of DB objects (or caching in general) is an issue, consider existing distributed in-memory caches...
Despite the large number of DLLs, I'm almost sure that for a reasonable application they cannot be the cause of the problem. Most of the time it is static object initialization that causes a slow startup.
In C#, static fields are initialized the first time a type is accessed. I would recommend using a SQL profiler to see which queries are performed during the application's start time, and from there work out which objects are expensive to initialize.
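A hypothetical illustration of the kind of code that causes this: the class below looks harmless, but its static field initializer runs (and pays the full cost) the first time anything touches the type, which is often during startup.

using System.Collections.Generic;
using System.Threading;

public static class MessageCache
{
    // Static field initializers run when the type is first accessed,
    // so an expensive call here silently inflates startup time.
    private static readonly List<string> Messages = LoadAllMessages();

    private static List<string> LoadAllMessages()
    {
        Thread.Sleep(5000); // stands in for a slow, unprofiled query
        return new List<string>();
    }

    // The first caller of this property pays the five-second cost above.
    public static int Count
    {
        get { return Messages.Count; }
    }
}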
I have a fairly busy site which does around 10m views a month.
One of my app pools seemed to jam up for a few hours and I'm looking for some ideas on how to troubleshoot it..? I suspect that it somehow ran out of threads but I'm not sure how to determine this retroactively..? Here's what I know:
The site never went 'down', but around 90% of requests started timing out.
I can see a high number of "HttpException - Request timed out." in the log during the outage
I can't find any SQL errors or code errors that would have caused the timeouts.
The timeouts seem to have been site wide on all pages.
There was one page with a bug on it which would have caused errors on that specific page.
The site had to be restarted.
The site is ASP.NET C# 3.5 WebForms..
Possibilities:
Thread depletion: My thought is that the page causing the error may have somehow started jamming up the available threads?
Global code error: Another possibility is that one of my static classes has an undiscovered bug in it somewhere. This is unlikely, as this has never happened before and I can't find any logged errors for these classes, but it is a possibility.
UPDATE
I've managed to trace the issue now while it's occurring. The pages are being loaded normally but for some reason WebResource.axd and ScriptResource.axd are both taking a minute to load. In the performance counters I can see ASP.NET Requests Queued spikes at this point.
The first thing I'd try is Sam Saffron's CPU analyzer tool, which should give an indication if there is something common that is happening too much / too long. In part because it doesn't involve any changes; just run it at the server.
After that, there are various other debugging tools available; we've found that some very ghetto approaches can be insanely effective at seeing where time is spent (of course, it'll only work on the 10% of successful results).
You can of course just open the server profiling tools and drag in various .NET / IIS counters, which may help you spot some things.
Between these three options, you should be covered for:
code dropping into a black hole and never coming out (typically threading related)
code running, but too slowly (typically data access related)
I'm developing a web service whose methods will be called from a "dynamic banner" that will show a sort of queue of messages read from a SQL Server table.
The banner will be under heavy pressure on the home pages of high-traffic sites; every time the banner is loaded, it will call my web service in order to obtain the new queue of messages.
Now: I don't want all this traffic driving queries to the database every time the banner is loaded, so I'm thinking of using the ASP.NET cache (i.e. HttpRuntime.Cache[cacheKey]) to limit database accesses; I will try to have the cache refresh every minute or so.
Obviously I'll try to keep the messages as small as possible, to limit traffic.
But maybe there are other ways to deal with such a scenario; for example, I could write the latest version of the queue to the file system and have the web service access that file; or something mixing the two approaches...
The solution is a C# web service, ASP.NET 3.5, SQL Server 2000.
Any hint? Other approaches?
Thanks
Andrea
It depends on a lot of things:
If there is little change in the data (think of a backend with a "publish" button, or daily batches), then I would definitely use static files (updated via push from the backend). We used this solution on a couple of large sites and it worked really well.
If the data is small enough, memory caching (i.e. the Http Cache) is viable, but beware of locking issues, and also beware that the Http Cache will not work that well under heavy memory load, because items can be expired early if the framework needs memory. I have been bitten by this before! With the above caveats, the Http Cache works quite well.
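A minimal sketch of that approach with the locking caveat handled, assuming the one-minute refresh the question mentions; GetMessagesFromDatabase is a stand-in for the real query:

using System;
using System.Web;
using System.Web.Caching;

public static class MessageQueueCache
{
    private const string CacheKey = "banner-messages";
    private static readonly object SyncRoot = new object();

    public static string[] GetMessages()
    {
        var cached = (string[])HttpRuntime.Cache[CacheKey];
        if (cached != null) return cached;

        // Without this lock, many concurrent requests could all miss the
        // cache at once and stampede the database.
        lock (SyncRoot)
        {
            cached = (string[])HttpRuntime.Cache[CacheKey];
            if (cached != null) return cached; // another thread filled it

            cached = GetMessagesFromDatabase();
            HttpRuntime.Cache.Insert(
                CacheKey, cached, null,
                DateTime.UtcNow.AddMinutes(1), // refresh roughly every minute
                Cache.NoSlidingExpiration);
            return cached;
        }
    }

    private static string[] GetMessagesFromDatabase()
    {
        return new string[0]; // placeholder for the real SQL query
    }
}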
I think caching is a reasonable approach and you can take it a step further and add a SQL Dependency to it.
ASP.NET Caching: SQL Cache Dependency With SQL Server 2000
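In code that looks roughly like the sketch below, assuming you've enabled polling-based notifications for the table with aspnet_regsql and registered the database under <sqlCacheDependency> in web.config; the "BannerDb" and "Messages" names are hypothetical:

public static void CacheMessages(string[] messages)
{
    // The entry is invalidated when ASP.NET's polling detects a change
    // in the Messages table, rather than on a fixed timer.
    var dependency = new SqlCacheDependency("BannerDb", "Messages");
    HttpRuntime.Cache.Insert("banner-messages", messages, dependency);
}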
If you go the file route, keep this in mind.
http://petesbloggerama.blogspot.com/2008/02/aspnet-writing-files-vs-application.html
Writing a file is a better solution IMHO - it's served by IIS kernel code, without the huge ASP.NET overhead, and you can copy the file to CDNs later.
AFAIK dependency caching is not very efficient with SQL Server 2000.
Also, one way to get around the memory limitation mentioned by Skliwz is that if you are using this service outside of the normal application, you can isolate it in its own app pool. I have seen this done before, and it helps as well.
Thanks all. As the data are small in size but the underlying tables will change, I think I'll go the HttpCache way: what I actually need is a way to reduce db access even while the data are changing (which is the reason for not using a direct SQL dependency, as suggested by @Bloodhound).
I'll do some stress testing before going public, I think.
Thanks again all.
Of course you could (should) also use the caching features in the SixPack library.
Forward (normal) cache, based on HttpCache, which works by putting attributes on your class. Simplest to use, but in some cases you have to wait for the content to actually be fetched from the database.
Pre-fetch cache, built from scratch, which after the first call will start refreshing the cache behind the scenes, so that in some cases you are guaranteed to have content without waiting.
More info on the SixPack library homepage. Note that the code (especially the forward cache) is load tested.
Here's an example of simple caching:

[Cached]
public class MyTime : ContextBoundObject
{
    // The attribute argument is the cache duration (1 here), so repeat
    // calls within that window return the cached value and the
    // Console line below only prints on a cache miss.
    [CachedMethod(1)]
    public DateTime Get()
    {
        Console.WriteLine("Get invoked.");
        return DateTime.Now;
    }
}