IIS Performance "Architecture" - C#

I was just wondering what will have the best performance.
Let's say we have 3 physical servers, where each server has 32 cores and 64 GB RAM, and the application is a "standard" ASP.NET application. Load balancing is already in place.
Setup #1 - One application consumes all
- One IIS server with 1 application running on each physical server. (total of 3 application "endpoints")
Setup #2 - Shared resources
- One IIS server with 16 applications in a web farm on each physical server. (total of 48 application "endpoints")
Setup #3 - Virtualization
- 15 virtual servers on each physical server, each running one application. (total of 45 application "endpoints")
What would have the best performance, and why?

It depends! Much depends on what the application is doing and where it spends its time.
In broad terms, though:
If an application is compute-bound -- i.e. little of each request's time is spent retrieving data from an external source such as a database -- then in most cases setup #1 will likely be fastest. IIS is itself highly multi-threaded, and giving it control of the machine's resources allows it to self-tune.
If the application is data-bound -- i.e. more than (say) 40% of the time taken for each request is spent getting and waiting for data -- then setup #2 may be better. This is especially the case for less-well-written applications that do synchronous in-process database accesses: even if a thread is sitting around waiting for a database access to complete, it's still consuming resources.
As discussed in "How to increase thread-pool threads on IIS 7.0", you'll run out of thread-pool threads eventually. However, as discussed on MSDN (http://blogs.msdn.com/b/david.wang/archive/2006/03/14/thoughts-on-application-pools-running-out-of-threads.aspx), by creating multiple IIS worker processes you're really just papering over the cracks of larger underlying issues.
Unless there are other reasons -- such as manageability -- I'd not recommend setup #3, as the overhead of managing additional operating systems in entire virtual machines is quite considerable.
So: monitor your system, use something like the MiniProfiler (http://code.google.com/p/mvc-mini-profiler/) to figure out where the issues in the code lie, and use asynchronous non-blocking calls whenever you can.
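To illustrate that last point, here is a minimal sketch of a non-blocking data access path in an ASP.NET MVC action (MVC 4+/.NET 4.5; the controller, table, and connection string are hypothetical placeholders). While the query is in flight, the request thread is returned to the pool instead of sitting around waiting:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading.Tasks;
using System.Web.Mvc;

public class ProductsController : Controller   // hypothetical controller
{
    public async Task<ActionResult> Index()
    {
        using (var conn = new SqlConnection("...connection string..."))
        using (var cmd = new SqlCommand("SELECT Name FROM Products", conn))
        {
            await conn.OpenAsync();   // request thread goes back to the pool here
            var names = new List<string>();
            using (var reader = await cmd.ExecuteReaderAsync())
            {
                while (await reader.ReadAsync())
                    names.Add(reader.GetString(0));
            }
            return View(names);
        }
    }
}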

It really depends on your application: you have to design for each architecture and performance-test your setups. Some applications will run fast on setup #1 and not on the other setups, and the other way around. There are many more things you can optimize for performance in IIS. The key thing is to design your application for monitoring and scaling.

Related

Can an MVC web app and an API web app run in the same pool?

I am working on an ASP.NET MVC web application (C#) that is a module of a larger application as a whole (mostly desktop/Windows service based - VB.NET). Currently the application makes HTTP calls to a web service (provided as an API) which is its own independent application (also using MVC, VB.NET). Possibly not how I would design it, but it is what I have inherited.
My issue is this: If I host the MVC app in local IIS and run the API project in IIS Express, all is good. If I split the two projects up to run in separate Application Pools in local IIS, all is good. However, if I run both apps out of the same pool in IIS, I run into a lot of issues. Namely, timeouts when I make the call to HttpClient.GetAsync(url) - especially on a page that is calling this 9 times to dynamically retrieve different images based on ID (each call is made to MVC app which then makes the call to the API). Some calls make it through, most do not.
The exceptions relate to cancelled tasks (timeout = 100s) - but the actions require a fraction of a second so there is no need to timeout. Execution never even makes it into the functions in the API side when it fails - like the HTTP client has given up offering any more connections, or the task is waiting for HTTP to send the request and it never does.
I have tried making it Async all the way through, tried making the HttpClient static, etc. But no joy. Is this simply something that just shouldn't be done - allowing the two apps to share an app pool? If so, I can live with that. But if there is something I can do to handle it more efficiently, that'd be very useful. Any information/resources on this would be very much appreciated. Ta!
We recently ran into the same issue; it took quite a bit of debugging to work out that using the same application pool was the cause of the problem.
I also found this could work if you increase the number of Maximum Worker Processes in the advanced settings for the application pool.
I'm not sure why this is, but I'm guessing all of the requests are being dealt with by one process, and this is causing a backlog; ultimately there are too many, resulting in timeouts. (I'd be happy for someone more knowledgeable in IIS to correct me though)
HTH
Edit: On further reading here (Carmelo Pulvirenti's Blog) it seems as though the garbage collector could be to blame. The blog states that multiple applications running in the same pool share memory, and the side effects are:
"It means that the GC runs a lot of times per second in order to provide clean memory to your app. What is the side effect? In server mode, the garbage collector requires stopping all thread activity to clean up the memory (this was improved in .NET Fx 4.5, where the collection does not require stopping all threads). What is the side effect? Slow performance of your web application, and high impact on CPU time due to the GC's activity."
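For what it's worth, two client-side settings often come up in this scenario and are worth ruling out before blaming the shared pool alone: creating a new HttpClient per call (which exhausts connections) and the .NET Framework's low default outbound connection limit. A hedged sketch, not confirmed as the asker's root cause:

using System.Net;
using System.Net.Http;

public static class ApiClient
{
    // One shared HttpClient for the application's lifetime,
    // as the asker already attempted with a static instance.
    public static readonly HttpClient Instance = Create();

    private static HttpClient Create()
    {
        // The default outbound connection limit can be as low as 2 per host
        // on .NET Framework; raise it so 9 concurrent image calls don't queue.
        // The value 100 is illustrative, not a recommendation.
        ServicePointManager.DefaultConnectionLimit = 100;
        return new HttpClient();
    }
}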

Best way to throttle an external application's CPU Usage

Ok - here is the scenario:
I host a server application on Amazon AWS hosted windows instances. (I do not have access to the source code - so I cannot resolve the issues from within the applications source code)
These specific instances are able to build up CPU credits during times of idle cpu (less than 10-20% usage) and then spend those CPU credits during times of increased compute requirement.
My server application, however, typically runs at around 15-20% CPU usage when no clients are connected. This is time when I would rather lower the CPU usage to around 5% through throttling, while maintaining enough CPU throughput to accept a TCP socket from incoming clients.
When a connected client is detected, I would like to remove the throttle and allow full access to the reserve of AWS CPU Credits.
I have got code in place that can Suspend and Resume processes via C# using Windows API calls.
I am, however, a bit fuzzy on how to accurately attain a target CPU usage for that process.
What I am doing so far, which is having moderate success:
Looping inside another application
check the CPU usage of the server application using performance counters (I don't like these - they require a 100-1000 ms wait in order to return a % value)
I determine if the current value is above or below the target value - if above, I increase an int value called 'sleep' by 10 ms
If below, 'sleep' is decreased by 10 ms.
Then the application will call:
Process.Suspend();
Thread.Sleep(sleep);
Process.Resume();
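Put together, the loop looks roughly like the sketch below. It assumes the Suspend/Resume wrappers are the usual (undocumented but long-standing) NtSuspendProcess/NtResumeProcess P/Invoke calls; the process name, sampling interval, and target percentage are placeholders:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Threading;

class ProcessThrottler
{
    // Undocumented ntdll calls that suspend/resume a whole process.
    [DllImport("ntdll.dll")] static extern int NtSuspendProcess(IntPtr handle);
    [DllImport("ntdll.dll")] static extern int NtResumeProcess(IntPtr handle);

    static void Main()
    {
        var target = Process.GetProcessesByName("ServerApp")[0];   // placeholder name
        var counter = new PerformanceCounter("Process", "% Processor Time", target.ProcessName);
        const double targetCpu = 5.0;   // desired % CPU
        int sleep = 0;                  // ms to hold the process suspended per cycle

        counter.NextValue();            // first sample is always 0; prime the counter
        while (!target.HasExited)
        {
            Thread.Sleep(500);          // sampling interval
            double cpu = counter.NextValue() / Environment.ProcessorCount;

            // The +/- 10 ms adjustment described in the question.
            sleep = Math.Max(0, sleep + (cpu > targetCpu ? 10 : -10));

            if (sleep > 0)
            {
                NtSuspendProcess(target.Handle);
                Thread.Sleep(sleep);
                NtResumeProcess(target.Handle);
            }
        }
    }
}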
Like I said - this is having moderate success.
But there are several reasons I don't like it:
1. It requires a semi-rapid loop in an external application: this might end up just shifting CPU usage to that application.
2. I'm sure there are better mathematical solutions to work out the ideal sleep time.
I came across this application: http://mion.faireal.net/BES/
It seems to do everything I want, except I need to be able to control it, and I am not a C++ developer.
It also seems to achieve accurate CPU throttling without consuming much CPU itself.
Can someone suggest CPU throttling techniques?
Remember - I cannot modify the source code of the application being throttled. At most, I could inject code into it, but it occurs to me that if I inject suspend code into it, the resume code could never fire, since the injected thread would be suspended along with everything else.
An external agent program might be the best way to go.

Optimising performance of WCF service under sudden load

I am developing a C# WCF service which calls a backend C# server application.
We are running performance tests on the service.
For example each test might consist of these steps
- Logon
- Create application object ( gets created in Sql database by server application )
- Delete application object
- Logoff
We run the test with 100 concurrent users (i.e. unique client threads) with no ramp-up and no user wait time between test steps.
We have done quite a bit of optimisation on the server side, so the tests run quite well when we run them repeatedly - say 100 concurrent threads, each thread repeating the test steps 25 times - with results typically averaging about 1 second response time for each step, which is OK.
However, when we run tests with 100 concurrent users but only run the tests once in each thread, the results are inconsistent - sometimes the test steps take quite a bit longer, and average elapsed time can be 5 seconds for a step in the test.
It appears that under a sudden burst of activity, the service returns inconsistent performance results.
I have tried several things to optimise performance (for compatibility with the client, the WCF binding has to be BasicHttpBinding):
varying the serviceThrottling maxConcurrentCalls and maxConcurrentSessions parameters in the WCF configuration
using a semaphore to limit the number of concurrent requests inside the Wcf service
implementing the methods inside the WCF service as Tasks (.NET version is 4.5)
making the methods async tasks
tuning the size of the ThreadPool using ThreadPool.SetMinThreads (see the sketch after this list)
using a custom attribute to extend WCF with a custom ThreadPool, as per this MSDN article: http://msdn.microsoft.com/en-us/magazine/cc163321.aspx
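For reference, the SetMinThreads tuning mentioned above usually looks something like this for an IIS-hosted service. The numbers are placeholders that should come from profiling; the idea is to avoid the pool's slow thread-injection rate (roughly one new thread per 500 ms) during a sudden burst:

using System;
using System.Threading;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        int worker, io;
        ThreadPool.GetMinThreads(out worker, out io);

        // Let the pool hand out up to 100 worker and 100 IOCP threads
        // immediately under load, instead of ramping up slowly.
        // 100 is a placeholder value, not a recommendation.
        ThreadPool.SetMinThreads(Math.Max(worker, 100), Math.Max(io, 100));
    }
}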
I have found that running tests and varying parameters can be used to tune the application's performance, but we still have the problem that performance results are poorer and more inconsistent when we run, say, 100 concurrent client threads repeating the test steps once.
My question is: what are the best ways of tuning a C# WCF service so it will respond well to a sudden burst of client activity?
Thanks.
Decided to post as an answer instead, so here are the good things to do:
1 - Check concurrent connection limits. Sometimes a small server might be limited to between 2 and 50, which is very low. Your server admin should know what to do.
2 - Load balancing with WCF is possible and helps a lot when the load is split over multiple servers.
3 - Have the IIS host server do ONLY IIS work, i.e. don't have SQL Server running on it as well.
4 - Do not open a WCF service connection, query, and close the connection on every single request. A handshake is needed every single time, and over time with multiple users that becomes a lot of time lost. Instead, open the connection once when the application starts and close it on exit (or on error, obviously).
5 - Use smaller types inside the service. Try to avoid types such as decimal and Int64. Decimal is 128 bits and Int64 is 64 bits, and they perform more slowly than float/double/int. Obviously, if you absolutely need them, use them, but try to limit that.
6 - A single big method makes the overall timing slower for everyone, as the waiting line grows faster and slows IIS, and you might lose new connections to timeouts if you have a lot of users. Smaller methods take longer for everyone because of the extra back-and-forth of data, BUT users will see more progress that way and will feel the software is faster even though it really is not.
BasicHttpBinding works fine in any case.

Force simultaneous threads/tasks for C# load testing app?

Question:
Is there a way to force the Task Parallel Library to run multiple tasks simultaneously? Even if it means making the whole process run slower with all the added context switching on each core?
Background:
I'm fairly new to multithreading, so I could use some assistance. My initial research hasn't turned up much, but I also doubt I know what exactly to search for. Perhaps someone more experienced with multithreading can help me better understand TPL and/or find a better solution.
Our company is planning on deploying a piece of software to all users' machines that will connect to a central server a few times a day, and synchronize some files and MS Access data back to the user's machine. We would like to load-test this concept first and see how the Access DB holds up to lots of simultaneous connections.
I've been tasked with writing a .NET application that behaves like the client app (connecting & syncing with a network location), but does this on multiple threads simultaneously.
I've been getting familiar with the Task Parallel Library (TPL), as this seems like the best (newest) way to handle multithreading and get return values back from each thread easily. However, as I understand it, TPL decides how to run each "task" for the fastest execution possible, splitting the work among the available cores. So let's say I want to run 30 sync jobs on a 2-core machine... the TPL would run 15 on each core, sequentially. This would mean my load test would only be hitting the Access DB with at most 2 connections at the same time. I want to hit the database with lots of simultaneous connections.
You can force the TPL to do this by specifying TaskCreationOptions.LongRunning. According to Reflector (not according to the docs, though) this always creates a new thread. I consider relying on this safe for production use.
Normal tasks will not do, because they don't guarantee concurrent execution. Setting MinThreads is a horrible solution (for production) because you are changing a process-global setting to solve a local problem. And still, you are not guaranteed success.
Of course, you can also start threads. Tasks are more convenient though because of error handling. Nothing wrong with using threads for this use case.
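A minimal sketch of the LongRunning approach; the sync job itself is a placeholder:

using System;
using System.Linq;
using System.Threading.Tasks;

class LoadTest
{
    static void Main()
    {
        // LongRunning hints the scheduler to create a dedicated thread per
        // task instead of queuing the work on the thread pool, so all 30
        // "clients" really run at the same time, even on a 2-core machine.
        var tasks = Enumerable.Range(0, 30)
            .Select(i => Task.Factory.StartNew(
                () => RunSync(i),
                TaskCreationOptions.LongRunning))
            .ToArray();

        Task.WaitAll(tasks);
    }

    static void RunSync(int clientId)
    {
        // Placeholder for the real work: connect to the network share,
        // sync files, open a connection to the Access DB.
        Console.WriteLine("Client {0} syncing...", clientId);
    }
}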
Based on your comment, I think you should reconsider using Access in the first place. It doesn't scale well and has problems once the database grows to a certain size. Especially if this is simply served off some file share on your network.
You can try and simulate load from your single machine but I don't think that would be very representative of what you are trying to accomplish.
Have you considered using SQL Server Express? It's basically a de-tuned version of the full-blown SQL Server which might suit your needs better.

CPU usage extremely high on TS deployment

Our application is written in .NET (Framework 3.5). We are experiencing problems with the application's performance when deployed in a Terminal Services environment. The client is using a TS farm. They have 4 GB RAM and a decent Xeon processor.
When the application is opened in this environment, it sits at 25% CPU usage even when idle. When deployed in a normal client - server environment, it behaves normally, spiking the CPU usage when necessary and drops down to 0 when idle.
Does anyone have any ideas what could be causing this? Or, what I could do to investigate? We have no memory leaks that we can find using performance profiling tools.
This is a WinForms application
We don't have a TS environment available to test on
The application is a Business Application.
Basically, capturing and updating of data. It's a massive business application, but there is little multithreading, listeners, etc. We do have ANTS Profiler (memory/performance) but, as mentioned, we don't have the problem in our environment - it only occurs in the TS environment.
Well, there are a few questions before we can really get you too far.
Is this a Console Application? WinForms Application? or Windows Service?
Do you have a Terminal Services environment available?
What does your application do?
Depending on what the application does, you might check to see if there is unusually high activity on their hardware that you have not accounted for. Examples I have noticed in the past include having a FileSystemWatcher accidentally listening to a "drop location" for reporting on a client server - items that shouldn't be busy while "idle", but are.
Otherwise, if you have the ability to do so, you could also use a tool such as ANTS Profiler from RedGate to see WHAT is using the CPU time on the environment.
Look for sections in your application that constantly repaint the window. Factor those out so that it isn't constantly repainting while sitting idle.
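As an illustrative (hypothetical) example of that idle-repaint pattern: a timer that unconditionally invalidates a form forces a repaint on every tick, which is cheap locally but expensive over Terminal Services, where each repaint is sent over the wire. Invalidating only on actual change keeps the idle CPU near zero. The form and data source below are placeholders:

using System.Windows.Forms;

public class DashboardForm : Form   // hypothetical form
{
    private readonly Timer refreshTimer = new Timer { Interval = 50 };
    private string lastStatus;

    public DashboardForm()
    {
        refreshTimer.Tick += (s, e) =>
        {
            string status = ReadStatus();   // placeholder data source
            if (status != lastStatus)       // repaint only when something changed
            {
                lastStatus = status;
                Invalidate();
            }
        };
        refreshTimer.Start();
    }

    private string ReadStatus() { return "OK"; }
}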
