How to find a performance bottleneck in an application? - C#

The question is a bit general, but I'll try my best to ask it correctly.
So I started supporting a system which is composed of the following parts:
Client Application (.net/c#)
WebService Server (IIS 6)
DB (Oracle 10g)
In the client application, the user makes report requests, which are processed by the web service, and the data is taken from the DB.
Some of these reports are processed VERY slowly: about 1 hour to process roughly 100,000 rows from one table.
So I want to know which part of the system is slowing the whole thing down.
What are the most common practices for finding this out? I understand that the question is complex and general, and that recommending a particular tool is probably impossible. But what are the common tactics? What is the usual way to find out whether the system is slowing down on the DB, the web service, or the client side?

You can use profilers to find "hot spots" based on different profiling methods (e.g. CPU sampling, instrumentation, etc.). Personally, for the C# client I use the Visual Studio profiler, JetBrains dotTrace, and ANTS Memory Profiler; you can also use Windows Performance Monitor. For database profiling I use SQL Server Profiler and the Red Gate profiler. There are plenty more profilers available online; do a bit of research and pick one that suits you.
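Before reaching for a full profiler, coarse timing around each tier already tells you where the hour goes. A minimal sketch with System.Diagnostics.Stopwatch (the helper and the report-query calls are illustrative names, not from the original system):

    using System;
    using System.Diagnostics;

    static class TierTimer
    {
        // Times a single step and writes the elapsed time to the trace log.
        public static T Measure<T>(string label, Func<T> step)
        {
            var sw = Stopwatch.StartNew();
            T result = step();
            sw.Stop();
            Trace.WriteLine(label + ": " + sw.ElapsedMilliseconds + " ms");
            return result;
        }
    }

    // Hypothetical usage inside the web service method:
    //   var rows   = TierTimer.Measure("DB query",   () => RunReportQuery(reportId));
    //   var report = TierTimer.Measure("Formatting", () => BuildReport(rows));

If the DB step dominates, move on to Oracle-side tracing; if not, wrap the web service call the same way on the client to see how much time is spent in the service and in transport.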

You may have answered your own question in the "where", i.e. have a look at the queries called when running the reports. As for the "how", have a look at the tools/profilers available to profile your application and the database.

Related

Has anyone used the ScaleArc tool to handle automatic failover of servers and load balancing?

I have come across a tool called ScaleArc which helps with handling auto failover, caching, dynamic load balancing, etc.
Has anyone used it? And how can I download it and integrate it with SSMS?
I can attest to using ScaleArc successfully within our SSAS CRM automotive platform consisting of 4000+ Databases on 100+ SQL Servers in our production environment.
ScaleArc is an appliance that you add your SQL Server Availability Groups to; it provides caching of current login activity and customizable load balancing across all the replicas within that group. The only requirement is that you have SQL Server Always On configured on your servers, and ScaleArc does the rest; very easy to implement!
We have experienced automatic failovers within our single-point-of-failure servers and NONE of our customers even noticed. We have also conducted daytime maintenance that required us to fail over to secondary nodes during business hours; again, no customer impact.
If uptime is of extreme concern to you, then ScaleArc is your answer. Prior to ScaleArc we experienced a lot of downtime; now we have a 99.9999% uptime record on our single-point-of-failure servers.
I hope this helps!
Michael Atkins, Director, IT Operations
I will second Mr. Atkins' experience, although my implementation was much smaller.
It enabled us to reduce our SQL Server footprint from 22 SQL Servers to 12 with smaller SKUs, dropping our overall costs by more than the ScaleArc investment. Adding to that the increased uptime and the drop in issue investigations, it was a great investment.
I put ScaleArc in place for a customer support forum and it does a great job with failovers. It also makes patching much easier, as you can leverage the script builder to take servers in and out of rotation gracefully and then add them to the run book for your patching and deployment activities. Our uptime went up considerably after the ScaleArc implementation, to >99.99%.
The automatic load balancing algorithm actually improved performance by sending calls to a 2nd datacenter 65ms away. When we saw this in the report, we were able to dig into the code and find two bugs that needed to be addressed.
I strongly recommend that you take a look at ScaleArc. It may not be right for every engagement, but is well worth the time to see.
Michael Schaeffer
Senior Business Program Manager

Huge performance drop after moving to Azure [closed]

We have been working on a cloud migration project where we made the following changes:
SQL Server 2014 to Azure SQL PaaS
Redis cache [Windows port] to Azure Redis PaaS
Static files from shared drives to Azure File Service
Implemented transient fault handling for database interactions
HttpSessionState changed from SQL Server to Custom [Redis PaaS]
The application has two web applications that use the same database:
One built in the classic .NET style with Web Forms.
The other built using .NET MVC4.
After we moved the applications from the existing Rackspace environment [2 servers, each with 4 GB RAM] to Azure, we ran a load test and got the following results:
The MVC4 application is fractionally faster.
The Web Forms application started performing poorly: with the same load, response time increased from 0.46 seconds to 45.8 seconds.
Memory usage is the same, database utilization is around 30%-40%, and CPU utilization is nearly 100% (on all web servers) at 1,100 concurrent users (at Rackspace, it served 4,500 concurrent users).
We tested the application on 2 D5 Azure VMs, which have more RAM and faster CPUs.
Can anyone highlight how such a drastic performance drop (one application performing almost the same, the other performing almost 100 times slower) is possible?
NB: One observation: the CPU utilization stays at 100% even 30 minutes after stopping the load test, and then drops quickly.
I will second the notion (emphatically) that you invest as much time and energy as you can in profiling your application to identify bottlenecks. Run profiles on-premises and in Azure and compare, if possible.
Your application clearly has many moving parts and a reasonably large surface area... that's no crime, but it does mean that it's hard to pinpoint the issue(s) you're having without some visibility into runtime behavior. The issue could lie in your Redis caching, in the static file handling, or in the session state loading/unloading/interaction cycle. Or it could be elsewhere. There's no magic answer here... you need data.
That said... I've consulted on several Azure migration projects and my experience tells me that one area to look closer at is the interaction between your ASP.NET Web Forms code and SQL. Overly-chatty applications (ones that average multiple SQL calls per HTTP request) and/or ones that issue expensive queries that perform lots of logic on the database or return large result sets, tend to exhibit poor performance in public clouds like Azure, where code and data may not be co-located, "noisy neighbor" problems can impact database performance, etc. These issues aren't unique to Web Forms applications or Azure, but they tend to be exacerbated in older, legacy applications that were written with an assumption of code and data being physically close. Since you don't control (completely) where your code and data live in Azure relative to each other, issues that may be masked in an on-prem scenario can surface when moving to the cloud.
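To make the "chatty" point concrete, here is a minimal sketch of collapsing a query-per-row loop into a single set-based call; the table, columns, and connection string are invented for illustration, not taken from the poster's schema:

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public static class OrderTotals
    {
        // One round-trip instead of one query per order (the classic N+1 pattern).
        public static Dictionary<int, decimal> ForCustomer(string connectionString, int customerId)
        {
            var totals = new Dictionary<int, decimal>();
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "SELECT OrderId, SUM(LineTotal) FROM dbo.OrderLines " +
                "WHERE CustomerId = @customerId GROUP BY OrderId", conn))
            {
                cmd.Parameters.AddWithValue("@customerId", customerId);
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        totals[reader.GetInt32(0)] = reader.GetDecimal(1);
                }
            }
            return totals;
        }
    }

On-premises, the extra round-trips of the chatty version cost a fraction of a millisecond each; with code and data in different parts of a datacenter, each one can cost several milliseconds, which multiplies quickly at 1,100 concurrent users.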
Some specifics to consider:
take a close look at the use of data binding in your Web Forms app... in practice it tends to encourage expensive queries and transfer of large result sets from database to application, something you might sometimes get away with on-premises but not in the cloud
take another look at your SQL configuration... you don't mention what tier you're using (Basic, Standard, Premium) but this choice can have a big impact on your overall app performance (and budget!). If it turns out (for example) that your Web Forms app does issue expensive queries, then use of a higher tier may help
Azure SQL DB tiers
familiarize yourself with the notion of "cloud native" vs. "cloud capable" applications... generally speaking, just because you can find a way to run an application in the cloud doesn't mean it's ideally suited to do so. From your description it sounds like you've made some effort to utilize some cloud-native services, so that's a good start. But if I had to guess (since we can't see your code) I'd think that you might need some additional refactoring in your Web Forms app to make it more efficient and better able to run in an environment you don't have 100% control over.
More on cloud-native
dated but still relevant advice on Azure migration
If you can give us more details on where you see bottlenecks, we can offer more specific advice.
Best of luck!
There is some loop in the code that causes the 100% CPU.
When the problem occurs, take a memory dump (e.g. from the Kudu console) and analyze it in WinDbg:
1) list per-thread CPU time with !runaway
2) check the call stacks of the threads, specifically the greatest CPU consumer, with
~*e!clrstack and with ~*kb

EC2 Instance Selection

We have recently started using AWS free tier for our CRM product.
We are currently facing speed-related issues, so we are planning to change the EC2 instance.
It's a .NET-based website using ASP.NET, C#, Microsoft SQL Server 2012, and IIS 7.
It would be great if someone could suggest the right EC2 instance for our usage. We are planning to use t2.medium with an MS SQL Enterprise license, Route 53, a 30 GB EBS volume, CloudWatch, SES, and SNS. Are we missing something here? Also, what would be the approximate monthly bill for this usage?
Thanks in advance. Cheers!!
It's impossible to say for sure what the issue is without some performance monitoring. If you haven't already, set up CloudWatch monitors. Personally, I like to use monitoring services like New Relic, as they can dive deep into your system - down to the stored procedure and ASP.NET code level - to identify bottlenecks.
The primary reason for doing this is to identify if your instance is maxing out on CPU usage, memory usage, swapping to disk, or if your bottleneck is in your networking bandwidth.
That being said, as jas_raj mentioned, the t-series instances are burstable, meaning if you have steady heavy traffic, you won't get good use from them. They're better suited for occasional peaks in load.
The m-series will provide a more stable level of performance but, in some cases, can be exceeded in performance by a bursting t-series machine. When I run CMS, CRM and similar apps in EC2, I typically start with an M3 instance.
There are some other things to consider as well.
Consider putting your DB on RDS or on a separate server with high performance EBS volumes (EBS optimized, provisioned IOPS, etc.).
If you can, separate your app and session state (as well as the data layer) so you can consider using smaller EC2 instances but scale them based on traffic and demand.
As you can imagine, there are a lot of factors that go into performance, but I hope this helps.
You can calculate the pricing based on your options by using Amazon's Simple Monthly Calculator.
Regarding your usage, I don't have a lot of experience on the Windows side with AWS, but I would point out that the amount of CPU allocated to t2 instances is based on a credit system. If that's acceptable for your usage, fine; otherwise switch to a non-t2 instance for more deterministic CPU performance.
If you have a good understanding of your application, I would suggest you check here for differences between instance types and selection suggestions.

High CPU and Memory usage from .NET MVC app

We are seeing a very high amount of CPU and memory usage from one of our .NET MVC apps and can't seem to track down the cause. Our group does not have access to the web server itself but instead gets notified automatically when certain limits are hit (90+% of CPU or memory). Running locally, we can't seem to find the problem. Some items we think might be the culprit:
The app has a number of threads running in the background when users take certain actions
We are using memcached (on a different machine than the web server)
We are using web sockets
Other than that the app is pretty standard as far as web applications go. Couple of forms here, login/logout there, some admin capabilities to manage users and data; nothing super fancy.
I'm looking at two different solutions and wondering what would be best.
Create a page inside the app itself (available only to app admins) that shows information about memory and CPU being used. Are there examples of this or is it even possible?
Use some type of 3rd party profiling service or application that gets installed on the web servers and allows us to drill down to find what is causing the high CPU and memory usage in the app.
I recommend the ASP.NET MVC MiniProfiler: http://miniprofiler.com/
It is simple to implement and extend, can run in production, and can store its results in SQL Server. I have used it many times to find difficult performance issues.
Another possibility is to use http://getglimpse.com/ in combination with the MiniProfiler Glimpse plugin: https://github.com/mcliment/miniprofiler-glimpse-plugin
Both tools are open source and don't require admin access to the server.
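As a rough idea of what MiniProfiler instrumentation looks like in a controller (a sketch only; it assumes the StackExchange.Profiling package is installed and profiling is started per request, and the step names and data-layer call are made up):

    using System.Web.Mvc;
    using StackExchange.Profiling;

    public class ReportsController : Controller
    {
        public ActionResult Heavy()
        {
            // Step() is an extension method and is safe to call even when
            // MiniProfiler.Current is null (i.e. profiling is not active).
            using (MiniProfiler.Current.Step("Load data"))
            {
                LoadDataFromCacheOrDb();   // hypothetical data-layer call
            }
            using (MiniProfiler.Current.Step("Render view"))
            {
                return View();
            }
        }

        private void LoadDataFromCacheOrDb() { /* ... */ }
    }

The timings are shown per request, so chatty data access and slow rendering stand out quickly.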
You can hook up PreEmptive's Runtime Intelligence to it: http://www.preemptive.com/
Otherwise a profiler, or load test could help find the problem. Do you have anything monitoring the actual machine health? (Processor usage, memory usage, disk queue lengths, etc..).
http://blogs.msdn.com/b/visualstudioalm/archive/2012/06/04/getting-started-with-load-testing-in-visual-studio-2012.aspx
Visual Studio has a built-in profiler (depending on version and edition). You may be able to run WMI queries against the web server that has the issues, or write/provide diagnostic recording/monitoring tools and hand them over to someone who does have access.
Do you have any output caching? What version of IIS? Does the 90% processor usage you are being alerted about actually come from your web process? (Perhaps it's not your app if the alert is improperly configured.)
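On the WMI suggestion: something along these lines can read processor load remotely, assuming the System.Management assembly is referenced and your account has remote WMI permissions on the box (the machine name is a placeholder):

    using System;
    using System.Management;

    class RemoteCpuCheck
    {
        static void Main()
        {
            // Connect to the remote web server's WMI namespace.
            var scope = new ManagementScope(@"\\WEBSERVER01\root\cimv2"); // placeholder name
            scope.Connect();

            var query = new ObjectQuery("SELECT Name, LoadPercentage FROM Win32_Processor");
            using (var searcher = new ManagementObjectSearcher(scope, query))
            {
                foreach (ManagementObject cpu in searcher.Get())
                {
                    Console.WriteLine("{0}: {1}%", cpu["Name"], cpu["LoadPercentage"]);
                }
            }
        }
    }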
I had a similar situation and I created a system monitor for my app admins based on this project
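For the "page inside the app" option, a bare-bones version using Windows performance counters might look like this (a sketch only; the controller name is invented, and the app pool identity needs permission to read the counters):

    using System;
    using System.Diagnostics;
    using System.Web.Mvc;

    public class HealthController : Controller
    {
        // Keep the counters long-lived: the very first % Processor Time sample is always 0.
        private static readonly PerformanceCounter Cpu =
            new PerformanceCounter("Processor", "% Processor Time", "_Total");
        private static readonly PerformanceCounter FreeMemory =
            new PerformanceCounter("Memory", "Available MBytes");

        public ActionResult Index()
        {
            var stats = new
            {
                CpuPercent = Cpu.NextValue(),
                AvailableMemoryMb = FreeMemory.NextValue(),
                ManagedHeapMb = GC.GetTotalMemory(false) / (1024 * 1024)
            };
            return Json(stats, JsonRequestBehavior.AllowGet);
        }
    }

Restrict the route to app admins, as the question suggests.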

What is the most cost-effective way to break up a centralised database?

Following on from this question...
What to do when you’ve really screwed up the design of a distributed system?
... the client has reluctantly asked me to quote for option 3 (the expensive one), so they can compare prices to a company in India.
So, they want me to quote (hmm). In order for me to get this as accurate as possible, I will need to decide how I'm actually going to do it. Here's 3 scenarios...
Scenarios
Split the database
My original idea (perhaps the trickiest) will yield the best speed for both the website and the desktop application. However, it may require some synchronising between the two databases, as the two "systems" are so heavily connected. I've learnt that synchronisation, if not done properly and not tested thoroughly, can be hell on earth.
Implement caching on the smallest system
To side-step the sync option (which I'm not fond of), I figured it may be more productive (and cheaper) to move the entire central database and web service to their office (i.e. in-house), and have the website (still on the hosted server) download data from the central office and store it in a small database (acting as a cache)...
Set up a new server in the customer's office (in-house).
Move the central database and web service to the new in-house server.
Keep the web site on the hosted server, but alter the web service URL so that it points to the office server.
Implement a simple cache system for images and most frequently accessed data (such as product information).
... the down-side is that when the end-user in the office updates something, their customers will effectively be downloading the data from a 60KB/s upload connection (albeit once, as it will be cached).
Also, not all data can be cached, for example when a customer updates their order. Also, connection redundancy becomes a huge factor here; what if the office connection is offline? Nothing to do but show an error message to the customers, which is nasty, but a necessary evil.
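For reference, the "simple cache system" in step 4 is intended to be something roughly like this (a sketch using System.Runtime.Caching; the key scheme, expiry, and fetch delegate are placeholders):

    using System;
    using System.Runtime.Caching;

    public static class OfficeDataCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        // Return the cached copy if we have one; otherwise pull it over the
        // office's slow upload link once and keep it for the given lifetime.
        public static T GetOrFetch<T>(string key, Func<T> fetchFromOffice, TimeSpan lifetime)
        {
            object cached = Cache.Get(key);
            if (cached != null)
                return (T)cached;

            T value = fetchFromOffice();   // web service call to the in-house server
            if (value != null)             // MemoryCache cannot store null values
                Cache.Set(key, value, new CacheItemPolicy
                {
                    AbsoluteExpiration = DateTimeOffset.Now.Add(lifetime)
                });
            return value;
        }
    }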
Mystery option number 3
Suggestions welcome!
SQL replication
I had considered MSSQL replication. But I have no experience with it, so I'm worried about how conflicts are handled, etc. Is this an option? Considering there are physical files involved, and so on. Also, I believe we'd need to upgrade from SQL express to SQL non-free, and buy two licenses.
Technical
Components
ASP.Net website
ASP.net web service
.Net desktop application
MSSQL 2008 express database
Connections
Office connection: 8 mbit down and 1 mbit up contended line (50:1)
Hosted virtual server: Windows 2008 with 10 megabit line
Having just read your original question related to this for the first time, I'd say that you may have laid the foundation for resolving the problem simply because you are communicating with the database through a web service.
This web service may well be the saving grace as it allows you to split the communications without affecting the client.
A good while back I was involved in designing just such a system.
The first thing we identified was the data which rarely changes, and we immediately locked all of this out of consideration for distribution. A manual administration process through the web server was the only way to change this data.
The second thing we identified was the data that should be owned locally. By this I mean data that only one person or location at a time would need to update, but that may need to be viewed at other locations. We fixed all of the keys on the related tables to ensure that duplication could never occur and that no auto-incrementing fields were used.
The third item was the tables that were truly shared - and although we worried a lot about these during stages 1 & 2, in our case this part was straightforward.
When I'm talking about a server here I mean a DB Server with a set of web services that communicate between themselves.
As designed our architecture had 1 designated 'master' server. This was the definitive for resolving conflicts.
The rest of the servers were, in the first instance, a large cache of anything covered by item 1. In fact it wasn't so much a cache as a database duplicate, but you get the idea.
The second function of each non-master server was to coordinate changes with the master. This involved a very simplistic process of passing most of the work through transparently to the master server.
We spent a lot of time designing and optimising all of the above - to finally discover that the single best performance improvement came from simply compressing the web service requests to reduce bandwidth (but it was over a single channel ISDN, which probably made the most difference).
The fact is that if you do have a web service then this will give you greater flexibility about how you implement this.
I'd probably start by investigating the feasibility of implementing one of the SQL Server replication methods.
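On the compression point: IIS-level gzip is the easiest win if the service is plain HTTP, but payloads can also be compressed explicitly before they cross the slow link. A rough, generic sketch (not tied to the poster's service):

    using System.IO;
    using System.IO.Compression;

    public static class PayloadCompression
    {
        // Compress a serialized payload before sending it over the slow office link.
        public static byte[] Compress(byte[] data)
        {
            using (var output = new MemoryStream())
            {
                using (var gzip = new GZipStream(output, CompressionMode.Compress))
                {
                    gzip.Write(data, 0, data.Length);
                }
                return output.ToArray();
            }
        }

        public static byte[] Decompress(byte[] compressed)
        {
            using (var input = new MemoryStream(compressed))
            using (var gzip = new GZipStream(input, CompressionMode.Decompress))
            using (var output = new MemoryStream())
            {
                gzip.CopyTo(output);   // requires .NET 4.0+; copy in a loop on older frameworks
                return output.ToArray();
            }
        }
    }

Already-compressed content such as product images won't shrink much, but XML/SOAP payloads usually compress very well.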
Usual disclaimers apply:
Splitting the database will not help a lot, but it will add a lot of headaches. IMO, you should first try to optimize the database: update some indexes or maybe add a few more, optimize some queries, and so on. For database performance tuning I recommend reading some articles on simple-talk.com.
Also, in order to save bandwidth, you can add bulk processing to your Windows client and add zipping (compression) to your web service.
And you should probably upgrade to MS SQL 2008 Express; it's also free.
It's hard to recommend a good solution for your problem with the information I have; it's not clear where the bottleneck is. I strongly recommend that you profile your application to find the exact location of the bottleneck (e.g. is it in the database, in a saturated connection, and so on) and add a description of it to the question.
EDIT 01/03:
When the bottleneck is the upload connection, you can only do the following:
1. Add compression (archiving) of messages to the service and the client
2. Implement bulk operations and use them
3. Try to reduce the number of operations per use case for the most frequent cases
4. Add a local database for the Windows clients, perform all operations against it, and synchronize the local DB with the main one on a timer.
SQL replication will not help you much in this case. The fastest and cheapest solution is to increase the upload bandwidth, because all the other approaches (except the first one) will take a lot of time to implement.
If you choose to rewrite the service to support bulk operations, I recommend having a look at the Agatha Project
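To make the bulk-operation idea concrete, the contract can be as simple as an envelope that carries many small operations in one web service call (all names here are illustrative, not from Agatha or the poster's code):

    using System;
    using System.Collections.Generic;

    // One envelope carrying many small operations, sent in a single web service call.
    [Serializable]
    public class BatchRequest
    {
        public List<Operation> Operations = new List<Operation>();
    }

    [Serializable]
    public class Operation
    {
        public string Name;                                 // e.g. "UpdateOrderStatus"
        public Dictionary<string, string> Arguments = new Dictionary<string, string>();
    }

    [Serializable]
    public class BatchResponse
    {
        public List<string> Results = new List<string>();   // one entry per operation, same order
    }

Over a 1 Mbit contended uplink, turning many small calls into one request pays off mainly through fewer round-trips and a single compressible payload.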
Actually, hearing how many users they have on that one connection, it may be time to up the bandwidth at the office (not at all my normal response). If you factor out the CRM system, what else is a top consumer of the bandwidth? It may be that they have simply reached the point of needing more bandwidth, period.
But I am still curious to see how much of the information you are passing actually gets used. Make sure you are transferring efficiently; is there any chance you could add some quick, easy measurements to see how much data people actually consume when looking at it?
