ASP.NET startup performance profiling (C#)

I'm trying to determine the cause of a very long (imho) initial start up of an ASP.NET application.
The application uses various third-party libraries and lots of references that I'm sure could be consolidated; however, I'm trying to identify (and apportion blame to) the DLLs and work out how much each contributes to the extended startup process.
So far, the start-up times vary from 2 to 5 minutes depending on what else is running on the box. This is unacceptable in my opinion given the complexity of the site, and I need to reduce it to something in the region of 30 seconds maximum.
To be clear on the scope of the performance I'm looking for, it's the time from first request to the initial Application_Start method being hit.
So where would I start getting information on which DLLs are loaded and how long they take to load, so that I can put together a cost/benefit case for which ones we need to tackle or consolidate?
From an ability perspective, I've been using JetBrains dotTrace for a while, and I'm clear on how to benchmark the application once we're inside the application code, but this appears to happen before the application code runs, and therefore outside of what I currently know.
What I'm looking for is methodologies on how to get visibility of what is happening before the first entry point into my code.
Note: I know that I can call the default page on recycle/upgrade to do an initial load, but I'd rather solve the actual problem rather than papering over it.
Note2: the hardware is more than sufficiently scaled and separated in terms of functionality, therefore I'm fairly sure that this isn't the issue.

Separate answer on profiling/debugging start up code:
w3wp is just a process that runs .NET code, so you can use all the profiling and debugging tools you would use for a normal .NET application.
One tricky point is that the w3wp process starts automatically on the first request to an application, and if your tools do not support attaching to a process whenever it starts, it becomes problematic to investigate your application's startup code.
The trick to solve this is to add another application to the same application pool. This way you can trigger w3wp creation by navigating to the other application, then attach/configure your tools against the already-running process. When you finally trigger your original application, the tools will see its loading happen in the existing w3wp process.
With a 2-5 minute delay you may not even need a profiler: simply attach the Visual Studio debugger the way suggested above and trigger "Break All" several times at random while your site loads. There is a good chance that the slowest portion of the code will be on the stack of one of the threads. Also watch the debug output; it may give you some clues about what is going on.
You may also use WinDbg to capture the stacks of all threads in a similar way (it can be more lightweight than VS).

Your DLL references are loaded as needed, not all at once.
Do external references slow down my ASP.NET application? (VS: Add Reference dialog)
If startup is taking 2-5 minutes, I would look at what happens in Application_Start, and at what the DLLs do once loaded. Are they trying to connect to a remote service that is very slow? Is the machine far too small for what it's doing (e.g. running a DB with large amounts of data plus the web server on an AWS micro instance or similar)?
Since the load time is probably not the IIS worker process resolving references, I would turn to traditional application profilers (e.g. JetBrains dotTrace or Red Gate ANTS) to see where the time is being spent as the DLLs initialize and in your Application_Start method.
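If you want rough numbers before reaching for a profiler, here is a minimal sketch of timing the work inside Application_Start with a Stopwatch (the log path and the named steps are placeholders for whatever your Global.asax actually does):

using System;
using System.Diagnostics;
using System.IO;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        var total = Stopwatch.StartNew();
        var step = Stopwatch.StartNew();

        // RegisterRoutes();          // hypothetical expensive step
        Log("RegisterRoutes", step);

        step.Restart();
        // WarmUpCaches();            // hypothetical expensive step
        Log("WarmUpCaches", step);

        Log("Application_Start total", total);
    }

    private static void Log(string label, Stopwatch sw)
    {
        // Append timings to a simple text log for later comparison.
        File.AppendAllText(@"C:\temp\startup-timings.log",
            string.Format("{0:o} {1}: {2} ms{3}",
                DateTime.UtcNow, label, sw.ElapsedMilliseconds, Environment.NewLine));
    }
}

This only covers time spent inside your own startup code; the time before Application_Start is hit (assembly loading, JIT, view compilation) still needs the attach-a-debugger/profiler approach described above.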

Things to check along with profiling:
profile everything, add time tracing to everything and log the information
if you have many ASPX views that need to be compiled on startup (I think this is the default for the release configuration) then it will take some time
references to web services or other XML-serialization-related code will need serialization assemblies compiled if none are present yet
access to remote services (including local SQL) may require those services to start up too
aggressive caching in the application or remote services may require pre-population of the caches
Production:
What is the goal for start-up time? Figure that out first; otherwise you will not be able to reach it.
What price are you willing to pay to decrease start-up time? Adding 1-10 more servers may be cheaper than spending months of development/test time and delaying the product.
Consider multiple servers, rolling restarts with warm-up calls (a sketch follows this list), and web gardens
If caching of DB objects (or caching in general) is an issue, consider existing distributed in-memory caches...
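For the warm-up calls mentioned above, a minimal sketch (the URL is a placeholder) of a small console utility you could run from a deployment script so the first real user does not pay the start-up cost:

using System;
using System.Net;

class WarmUp
{
    static void Main()
    {
        var urls = new[] { "http://myserver/myapp/" }; // placeholder: list the pages worth pre-loading
        using (var client = new WebClient())
        {
            foreach (var url in urls)
            {
                try
                {
                    client.DownloadString(url); // triggers app-pool start, view compilation and JIT work
                    Console.WriteLine("Warmed up " + url);
                }
                catch (WebException ex)
                {
                    Console.WriteLine("Warm-up failed for " + url + ": " + ex.Message);
                }
            }
        }
    }
}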

Despite the large number of DLLs, I'm almost sure that for a reasonable application they cannot be the cause of the problem. Most of the time it is static object initialization that causes slow startup.
In C#, static variables are initialized when a type is accessed for the first time. I would recommend using SQL Profiler to see which queries are performed during the application's start time, and from there work out which objects are expensive to initialize.
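To illustrate (the type and member names here are made up), this is the pattern that typically hides the cost, plus a Lazy<T> variant (available from .NET 4.0) that defers the work and makes it visible to a profiler:

using System;
using System.Collections.Generic;

static class ProductCatalog
{
    // Runs inside the type initializer the first time ProductCatalog is touched;
    // the cost shows up as "slow startup" with no obvious culprit on the stack.
    public static readonly List<string> AllProducts = LoadAllProductsFromDatabase();

    // Deferred alternative: the query only runs on first access to CachedProducts.Value,
    // so the expensive call happens at a visible, attributable point.
    public static readonly Lazy<List<string>> CachedProducts =
        new Lazy<List<string>>(LoadAllProductsFromDatabase);

    private static List<string> LoadAllProductsFromDatabase()
    {
        // Placeholder for the real (expensive) database query.
        return new List<string> { "example" };
    }
}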

Make a .NET executable load faster the first time

I made a simple Windows Forms executable in C# (by simple, I mean it has about 20 methods, but it's just one window), targeting .NET Framework 2.0. When the application loads, it doesn't do anything other than the default InitializeComponent(); call in the constructor.
The first time I open it, the application takes about 7 seconds to load on my Windows 8 machine. It then takes less than a second the next times I open it.
On a Windows XP machine I tried, it takes about 30 seconds to load the first time. A few friends of mine, when testing the application, also complain that it takes a long time to load the first time (about 30 seconds too). After that it's faster (1 or 2 seconds).
I assume this could be because the .NET Framework is not loaded yet, so it takes some time to load on their machines.
Have you ever experienced the same problem with .NET applications?
Do you have any clue why this happens and how can I fix this?
EDIT - I see some people are suggesting NGEN. However, if this needs to be done on every machine that will use the application, it can't be the solution for this. I want to release my application to a broad audience of "common users", and it makes no sense to require them to do extra steps to use my application. It's already bad enough that we require the .NET Framework. My application should just be a standalone EXE without any dependencies (except for the framework).
Thank you.
You can try pre-generating the native image using NGEN which .NET will use when your application loads.
You can find more information here - http://msdn.microsoft.com/en-GB/library/6t9t5wcf(v=vs.80).aspx
Native images are platform-dependent and usually not transferable, so you'll need to do this on each machine you deploy to.
This is most likely caused by just-in-time compilation of the CIL. You can compile your code for the environment that you are running on using NGen. However, you will most likely lose the platform-agnostic nature of .NET if you go down this route.
This blog entry on MSDN explains the performance benefits of NGen in some detail, I suggest giving it a read.
Update following comments
As Lloyd points out in his answer, if you give your users an installer NGen can be run at this point for the environment that the application is being installed on.
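A rough sketch of what that might look like as a custom install step, assuming the app targets .NET 2.0 as stated (the class, method and paths are illustrative, not a specific installer API):

using System;
using System.Diagnostics;
using System.IO;

static class NgenStep
{
    public static void InstallNativeImage(string assemblyPath)
    {
        // ngen.exe ships with the .NET Framework; point at the framework
        // version the application targets (v2.0.50727 here, per the question).
        string ngen = Path.Combine(
            Environment.GetEnvironmentVariable("WINDIR"),
            @"Microsoft.NET\Framework\v2.0.50727\ngen.exe");

        var psi = new ProcessStartInfo(ngen, "install \"" + assemblyPath + "\"")
        {
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var process = Process.Start(psi))
        {
            process.WaitForExit(); // NGen needs admin rights, which an installer usually has
        }
    }
}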
However, if NGen isn't an option, then I'd recommend starting your application with a profiler attached. It can highlight any performance bottlenecks in your code. If you have any singletons or expensive static initializers these will be highlighted as such and will give you the opportunity to refactor them.
There are many great .Net profilers out there (see this SO question for details), personally I'd recommend having a look at dotTrace - they also offer a free trial period for a month which may be all that's required for your application.
[...] targeting .NET Framework 2.0. When the application loads, it doesn't do anything else than the default InitializeComponent(); in the constructor.
Actually that's not true. An application also loads types, initializes static constructors, etc.
In most cases when I have performance issues, I simply use a profiler... There can be a lot going on and a profiler is the easiest way to get some insights. There are some different options available; personally, I'm a fan of Red-Gate's profilers and they have a trial you can use.
It's worth noting that the way this works has changed across .NET Framework versions. If you cannot get the performance you want on 2.0, I'd simply try a newer framework version. Granted, Windows XP might be a small issue there...

High CPU and Memory usage from .NET MVC app

We are seeing a very high amount of CPU and memory usage from one of our .NET MVC apps and can't seem to track down the cause. Our group does not have access to the web server itself but instead gets notified automatically when certain limits are hit (90+% of CPU or memory). Running locally we can't seem to find the problem. Some items we think might be the culprit:
The app has a number of threads running in the background when users take certain actions
We are using memcached (on a different machine than the web server)
We are using web sockets
Other than that the app is pretty standard as far as web applications go. Couple of forms here, login/logout there, some admin capabilities to manage users and data; nothing super fancy.
I'm looking at two different solutions and wondering what would be best.
Create a page inside the app itself (available only to app admins) that shows information about memory and CPU being used. Are there examples of this or is it even possible?
Use some type of 3rd party profiling service or application that gets installed on the web servers and allows us to drive down to find what is causing the high CPU and memory usage in the app.
I recommend the ASP.NET MVC MiniProfiler: http://miniprofiler.com/
It is simple to implement and extend, can run in production, and can store its results in SQL Server. I have used it many times to find difficult performance issues.
Another possibility is to use Glimpse (http://getglimpse.com/) in combination with the MiniProfiler Glimpse plugin: https://github.com/mcliment/miniprofiler-glimpse-plugin
Both tools are open source and don't require admin access to the server.
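Getting started is roughly this (a sketch based on the MiniProfiler package for classic ASP.NET MVC; the exact calls vary slightly between versions):

using System.Web;
using StackExchange.Profiling;

public class MvcApplication : HttpApplication
{
    protected void Application_BeginRequest()
    {
        // Profile every request; in production you might restrict this to admin users.
        MiniProfiler.Start();
    }

    protected void Application_EndRequest()
    {
        MiniProfiler.Stop();
    }
}

You can then wrap suspect sections in using (MiniProfiler.Current.Step("load dashboard")) { ... } and render the results widget in your layout to see per-request timings.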
You can hook up PreEmptive's Runtime Intelligence to it: http://www.preemptive.com/
Otherwise a profiler, or load test could help find the problem. Do you have anything monitoring the actual machine health? (Processor usage, memory usage, disk queue lengths, etc..).
http://blogs.msdn.com/b/visualstudioalm/archive/2012/06/04/getting-started-with-load-testing-in-visual-studio-2012.aspx
Visual Studio has a built-in profiler (depending on version and edition). You may be able to run WMI queries against the web server that has the issues, or write/provide diagnostic recording/monitoring tools and hand them over to someone who does have access.
Do you have any output caching? What version of IIS? Does the 90% processor usage you are being alerted to show that your web process is actually the one responsible? (Perhaps it's not your app if the alert is improperly configured.)
I had a similar situation and created a system monitor for my app admins based on this project
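In that spirit, a minimal sketch of the in-app monitoring page idea from the question (the controller name, route and role are placeholders):

using System;
using System.Diagnostics;
using System.Web.Mvc;

[Authorize(Roles = "Admin")]
public class HealthController : Controller
{
    public ActionResult Index()
    {
        var process = Process.GetCurrentProcess();
        var stats = new
        {
            WorkingSetMb = process.WorkingSet64 / (1024 * 1024),
            PrivateMemoryMb = process.PrivateMemorySize64 / (1024 * 1024),
            ThreadCount = process.Threads.Count,
            TotalProcessorTime = process.TotalProcessorTime.ToString(),
            UptimeMinutes = (int)(DateTime.Now - process.StartTime).TotalMinutes
        };
        // Returns the worker-process numbers as JSON; it won't explain why CPU is high,
        // but it confirms whether this app is the one consuming the resources.
        return Json(stats, JsonRequestBehavior.AllowGet);
    }
}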

ASP.NET application - ideas on how to speed it up

This is a very general question, so I won't be providing any code as my project is fairly large.
I have an ASP.NET project which I've been maintaining and adding to for a year now. There are about 30 pages in total, each mostly having a couple of GridViews and SqlDataSources, and usually not more than 10-15 methods in the code-behind. There is also a fairly hefty LINQ-to-SQL dbml file with around 40-50 tables.
The application takes about 30-40 seconds to compile, which I suppose isn't too bad - but the main issue is that when deployed, it's slow at loading pages compared to other applications on the same server and app pool - it can take around 10 seconds to load a simple page. I'm very confident the issue isn't isolated to any specific page - it seems more of a global application issue.
I'm just wondering if there are any settings in the web.config etc. I can use to help speed things up? Or just general tips on common 'mistakes' or issues developers encounter that can cause this. My application is close to completion, and the speed issues are really tainting the customer's view of it.
As a first step, find out the source of the problem: either the application side or the database side.
Application side:
Start by enabling tracing for slow pages and check the size of the ViewState; a large ViewState can sometimes cause slow page loads.
Database side:
Use SQL Profiler to see exactly what is taking a long time to complete
Useful links:
How to: Enable Tracing for an ASP.NET Application
Improve ASP.NET Performance By Disabling ViewState And Setting Session As ReadOnly
How to Identify Slow Running Queries with SQL Profiler
Most common oversight probably: don't forget to turn off debugging in your web.config before deploying.
<compilation debug="false" targetFramework="4.0">
A few others:
Don't enable session state or ViewState where you don't use them
Use output caching where possible, and consider a caching layer in general, e.g. for database queries (memcached, Redis, etc.; see the sketch after this list)
Minify and combine CSS
Minify JavaScript
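For the caching-layer point above, a minimal sketch using the built-in ASP.NET cache (the key, duration and LoadProductsFromDatabase are placeholders); the same shape applies to memcached or Redis clients:

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static class ProductCache
{
    public static List<string> GetProducts()
    {
        const string key = "products";

        // Serve from cache when possible instead of hitting the database on every request.
        var cached = HttpRuntime.Cache[key] as List<string>;
        if (cached != null)
            return cached;

        var products = LoadProductsFromDatabase(); // placeholder for the real query
        HttpRuntime.Cache.Insert(key, products, null,
            DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
        return products;
    }

    private static List<string> LoadProductsFromDatabase()
    {
        return new List<string> { "example" };
    }
}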
What to do now:
Look at page load in Firebug or Chrome developer tools. Check to make sure you're not sending a huge payload over the wire.
Turn on trace output to see how the server is spending its time.
Check network throughput to your server.
How to avoid this in the future:
Remember that speed is a feature. If your app is slow as a dog, customers can't help but think it sucks. This means you want to run your app on "production" hardware as soon as you can and deploy regularly so that you catch performance problems as they're introduced. It's no fun to have an almost-done app that takes 10 seconds to deliver a page. Hopefully, you get lucky and can fix most of this with config. If you're unlucky, you might have some serious refactoring to do.
For example, if you've used ViewState pretending it was magic and free, you may have to rework some of that dependency.
Keep perf on a short leash. Your app will be snappy, and people will think you are awesome.

Hints and tips for a Windows service I am creating in C# and Quartz.NET

I have a project ongoing at the moment which is to create a Windows Service that essentially moves files around multiple paths. A job might be, every 60 seconds, to get all files matching a regular expression from an FTP server and transfer them to a network path, and so on. These jobs are stored in a SQL database.
Currently, the service takes the form of a console application, for ease of development. Jobs are added using an ASP.NET page, and can be edited using another ASP.NET page.
I have some issues though, some relating to Quartz.NET and some general issues.
Quartz.NET:
1: This is the biggest issue I have. Seeing as I'm developing the application as a console application for the time being, I'm having to create a new Quartz.NET scheduler in all my files/pages. This is causing multiple confusing errors, and I just don't know how to instantiate the scheduler in one global file and access it from my ASP.NET pages (so I can get details into a grid view to edit, for example).
2: My manager suggested I could look into having multiple 'configurations' inside Quartz.NET. By this I mean that at any given time an administrator can change the application's configuration so that only specifically chosen jobs run. What would be the easiest way of doing this in Quartz.NET?
General:
1: One thing that's crucial in this application is assurance that the file has been moved and is actually on the target path (after the move the original file is deleted, so it would be disastrous if the file were deleted when it hadn't actually been copied!). I also need to make sure that the file contents match on the initial path and the target path, to give peace of mind that what has been copied is right. I'm currently doing this by MD5-hashing the initial file, copying the file, and, before deleting it, making sure that the file exists on the server. Then I hash the file on the server and make sure the hashes match up (a sketch of this sequence follows this list). Is there a simpler way of doing this? I'm concerned that the hashing may put strain on the system.
2: This relates to the above question but isn't as important, as not even my manager has any idea how I'd do this, though I'd love to implement it. An issue would arise if a job executes while a file is still being written to: a half-written file would be transferred, making it totally useless, and the original file would be destroyed while it's being written to! Is there a way of checking for this?
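For reference, a minimal sketch of the copy-verify-delete sequence described in point 1 (paths are placeholders and error handling is kept to the essentials):

using System;
using System.IO;
using System.Security.Cryptography;

static class SafeMove
{
    public static void Move(string sourcePath, string targetPath)
    {
        File.Copy(sourcePath, targetPath, false);

        // Only delete the source once the target exists and the hashes match.
        if (!File.Exists(targetPath) || HashFile(sourcePath) != HashFile(targetPath))
        {
            throw new IOException("Copy verification failed; source file was not deleted.");
        }

        File.Delete(sourcePath);
    }

    private static string HashFile(string path)
    {
        using (var md5 = MD5.Create())
        using (var stream = File.OpenRead(path))
        {
            return BitConverter.ToString(md5.ComputeHash(stream));
        }
    }
}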
As you've discovered, running the Quartz scheduler inside an ASP.NET application presents many problems. Check out Marko Lahma's response to your question about running the scheduler inside an ASP.NET web app:
Quartz.Net scheduler works locally but not on remote host
As far as preventing race conditions between your jobs (eg. trying to delete a file that hasn't actually been copied to the file system yet), what you need to implement is some sort of job-chaining:
http://quartznet.sourceforge.net/faq.html#howtochainjobs
In the past I've used the TriggerListeners and JobListeners to do something similar to what you need. Basically, you register event listeners that wait to execute certain jobs until after another job is completed. It's important that you test out those listeners, and understand what's happening when those events are fired. You can easily find yourself implementing a solution that seems to work fine in development (false positive) and then fails to work in production, without understanding how and when the scheduler does certain things with regards to asynchronous job execution.
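As a rough sketch of that idea against the Quartz.NET 2.x listener API (the job names are placeholders, and the listener signatures differ between versions):

using Quartz;

public class CopyThenDeleteListener : IJobListener
{
    public string Name { get { return "CopyThenDeleteListener"; } }

    public void JobToBeExecuted(IJobExecutionContext context) { }

    public void JobExecutionVetoed(IJobExecutionContext context) { }

    public void JobWasExecuted(IJobExecutionContext context, JobExecutionException jobException)
    {
        // Only fire the follow-up job if the copy job completed without an exception.
        if (jobException == null && context.JobDetail.Key.Name == "copyFileJob")
        {
            context.Scheduler.TriggerJob(new JobKey("deleteSourceFileJob"));
        }
    }
}

The listener is registered once against the scheduler (in 2.x via scheduler.ListenerManager.AddJobListener(...)), so the chaining logic lives in one place rather than in each job.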
Good luck! Schedulers are fun!

What could cause a Windows Service to hang when a Console App doing the exact same thing using the exact same base libraries doesn't?

I hate asking questions like this - they're so undefined... and undefinable, but here goes.
Background:
I've got a DLL that is the guts of an application that is a timed process. My timer receives a configuration for the interval at which it runs and a delegate that should be run when the interval elapses. I've got another DLL that contains the process that I inject.
I created two applications, one Windows Service and one Console Application. Each of the applications read their own configuration file and load the same libraries pushing the configured timer interval and delegate into my timed process class.
Problem:
Yesterday and for the last n weeks, everything was working fine in our production environment using the Windows Service. Today, the Windows Service runs for a period of around 20-30 minutes and then hangs (with a timer interval of 30 seconds), but the console application runs without issue and has for the past 4 hours. Detailed logging doesn't indicate any failure. It's as if the Windows Service just...dies quietly - without stopping.
Given that my Windows Service and Console Applications are doing the exact same thing, I can only think that there is something that is causing the Windows Service process to hang - but I have no idea what could be causing that. I've checked the configuration files, and they're both identical - I even copied and pasted the contents of one into the other just to be sure. No dice.
Can anyone make suggestions as to what might cause a Windows Service to hang, when a counterpart Console Application using the same base libraries doesn't; or can anyone point me in the direction of tools that would allow me to diagnose what could be causing this issue?
Thanks for everyone's help - still digging.
You need to figure out what changed on the production server. At first, the IT guys responsible will swear that nothing changed, but you have to be persistent. I've seen this happen so often I've lost count. Software doesn't spoil. Period. The change must have been to the environment.
Difference in execution: You have two apps running the same code. The most likely difference (and culprit) is that the service is running with a different set of security credentials than your console app and might fall victim to security vagaries. Check on that first. Which Windows account is running the service? What is its role and scope? Is there any third-party security software running on the server, perhaps killing errant apps? Do you have to register your service with a third-party security service? Is your .NET assembly properly signed? Are your .NET assemblies properly registered and configured on the server? Last but not least, don't forget that a debugger user, which you most likely are, gets away with a lot more than many other account types.
Another thought: since timing seems to be part of the issue, check the scheduled tasks on the machine. Perhaps there's a process that is set to go off every 30 minutes that is interfering with your own.
You can debug a Windows service by running it interactively within Visual Studio. This may help you to isolate the problem by setting (perhaps conditional) breakpoints.
Alternatively, you can use the Visual Studio "Attach to process" dialog window to find the service process and attach to it with the "Debug CLR" option enabled. Again this allows you to set breakpoints as needed.
Are you using any assertions? If an assertion fires without being re-directed to write to a log file, your service will hang. If the code throws an unhandled exception, perhaps because of a memory leak, then your service process will crash. If you set the Service Control Manager (SCM) to restart your process in the event of a crash, you should be able to see that the service has been restarted. As you have identical code running in both environments, these two situations don't seem likely. But remember that your service is being hosted by the SCM, which means a very different environment to the one in which your console app is running.
I often use a "heartbeat", where each active thread in the service sends a regular (say every 30 seconds) message to a local MSMQ. This enables manual or automated monitoring, and should give you some clues when these heartbeat messages stop arriving.
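A minimal sketch of that heartbeat, assuming a local private MSMQ queue (the queue path and interval are placeholders):

using System;
using System.Messaging;
using System.Threading;

public class Heartbeat
{
    private const string QueuePath = @".\private$\myservice-heartbeat";
    private readonly Timer _timer;

    public Heartbeat()
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        // Send a heartbeat every 30 seconds from a thread-pool thread.
        _timer = new Timer(Send, null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
    }

    private void Send(object state)
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Send(string.Format("alive {0:o}", DateTime.UtcNow));
        }
    }
}

If the messages stop arriving while the process is still running, you know a worker thread is stuck rather than the whole process having exited.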
Another possibility is some sort of permissions problem, because the service is probably running under a different local/domain user than the console.
After the hang, can you use the SCM to stop the service? If you can't, then there is probably some sort of thread deadlock problem. After the service appears to hang, you can go to a command-line and type sc queryex servicename. This should give you the current STATE of the service.
I would probably put in some file logging just to see how far the program is getting. It may give you a better idea of what is looping/hanging/deadlocked/crashing.
You can try these techniques
Logging: start logging the flow of the code in the service. Make this parameter-driven so you don't have a deluge after you are done. You should log all function names, parameters, and timestamps.
Attach debugger: locally or remotely attach a debugger with the code to the running service and set appropriate breakpoints (these can be based on the data gathered from logging).
PerfMon: run this utility and gather information about the machine the service is running on for any additional clues (high CPU spikes, I/O spikes, excessive paging, etc.).
Microsoft provides a good resource on debugging a Windows Service. That essentially sounds like what you'd have to do, given that your question is so generic. With that said, have any changes been made to the system over the last few days that could adversely affect the service? Have you made any updates to the code that change the way the service might work?
Again, I think you're going to have to do some serious debugging to find your problem.
What type of timer are you using in the Windows service? I've seen numerous people on SO have problems with timers and Windows services. Here is a good tutorial just to make sure you are setting it up correctly and using the right type of timer. Hope that helps.
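As an illustration of the usual gotchas (this is a generic sketch, not the questioner's actual code): keep the timer in a field so it isn't garbage collected, use System.Timers.Timer rather than the WinForms timer, and catch exceptions in the Elapsed handler, where they are otherwise silently swallowed:

using System;
using System.ServiceProcess;
using System.Timers;

public class FileMoverService : ServiceBase
{
    private Timer _timer; // held in a field so the timer is not collected

    protected override void OnStart(string[] args)
    {
        _timer = new Timer(30000); // 30-second interval, as in the question
        _timer.Elapsed += OnElapsed;
        _timer.AutoReset = true;
        _timer.Start();
    }

    private void OnElapsed(object sender, ElapsedEventArgs e)
    {
        try
        {
            // DoWork(); // placeholder for the real file-moving logic
        }
        catch (Exception ex)
        {
            // Log it; an exception that escapes here disappears silently and the
            // service can look "hung" even though the process is still alive.
            System.Diagnostics.Trace.TraceError(ex.ToString());
        }
    }

    protected override void OnStop()
    {
        if (_timer != null) _timer.Stop();
    }
}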
Another potential problem in reference to psasik's answer is if your application is relying on something only available when being run in User Mode.
Running as a service runs in a different desktop session (session 0, I believe), which can cause some issues in my experience if you are trying to determine the state of something that can only be seen in user mode.
Smells like a threading issue to me. Is there any threading or async work being done at all? One crucial question is "does the service hang on the same line of code or same method every time?" Use your logging to find out the last thing that happens before a hang, and if so, post the problem code.
One other tool you may consider is a good profiler. If it is .NET code, I believe Red Gate ANTS can monitor it and give you a good picture of any thread-lock scenarios.
