My code is in ASP.NET MVC (Razor, C#) and the database is SQL Server 2012. Right now the code runs on localhost.
Before moving to the server, I want to verify that the website design is able to scale to and support 100,000 concurrent users.
Question: I am a .NET developer only. Is there any way to test for 100,000 concurrent users on a single machine?
Short answer: yes, you can create load tests in Visual Studio (using the VS Ultimate/Enterprise test tools) with no problem.
Some basic info here: https://msdn.microsoft.com/en-us/library/vstudio/dd293540(v=vs.110).aspx
But...
Your machine will not be able to handle creating 100,000 simultaneous requests, let alone the site/application servicing those requests on a single machine.
You really need to set up a staging environment that mimics your production implementation, then deploy and load test on that with load balancing and all the bells and whistles. Otherwise the load/stress test will be a waste of time: the stats you get back will show 100% timeouts beyond roughly 1,000 concurrent users, which is not a representation of the speed of your app, just the speed of your machine.
Once you have that staging environment set up, I would also suggest spreading the load test over 5-10 PCs/VMs. This will give the best "real-world" results.
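If you just want a rough smoke test long before you get near those numbers, a throwaway console app can fire a batch of concurrent requests at a staging URL. This is only a sketch (the URL and request count are placeholders) and is in no way a replacement for the VS load test tooling or a proper distributed test:

```csharp
// Minimal concurrent-request sketch using HttpClient and Task.WhenAll.
// Not a real load test; just a quick way to see how a staging box behaves.
using System;
using System.Diagnostics;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class LoadSketch
{
    static void Main()
    {
        // Raise the default outbound connection limit so requests actually run in parallel.
        ServicePointManager.DefaultConnectionLimit = 100;
        RunAsync("http://staging.example.com/", 500).GetAwaiter().GetResult(); // placeholders
    }

    static async Task RunAsync(string targetUrl, int concurrentRequests)
    {
        using (var client = new HttpClient())
        {
            var timer = Stopwatch.StartNew();

            // Fire all requests concurrently and wait for every response.
            var tasks = Enumerable.Range(0, concurrentRequests)
                                  .Select(_ => client.GetAsync(targetUrl));
            var responses = await Task.WhenAll(tasks);

            timer.Stop();
            int failures = responses.Count(r => !r.IsSuccessStatusCode);
            Console.WriteLine("{0} requests in {1} ms, {2} failed",
                              concurrentRequests, timer.ElapsedMilliseconds, failures);
        }
    }
}
```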
To answer your question: you cannot test an application (web or desktop) with 100,000 users without the help of testing software. The type of testing you want to perform is known as volume testing or stress testing, in which a number of users access the application at the same time and you observe the behaviour of the application. This can be done with a tool such as HP Performance Center, developed by HP, but it is licensed software; you won't get it for free.
I have a product, and a front end website where people can purchase the product. Upon purchase, I have a system that creates an A record in my DNS server that points to an IP address. It then creates a new IIS website with the bindings required.
All this works well, but I'm now looking at growing the business and to do this I'll need to handle upgrades of the application.
Currently, I have my application running 40 websites. It's all the same code base, and each website uses its own SQL Server database. Each website runs in a separate application pool and operates completely independently.
I've looked at using TeamCity to build the application and then having a manual step that runs MSDeploy for each website, but this isn't ideal since I'd need to a) purchase a full license and b) always remember to add a new website to the TeamCity build.
How do you handle the upgrade and deployments of the same code base running many different websites and separate SQL Server databases?
First thing: it is possible to have a build configuration in TeamCity that builds and deploys to a specific location, whether a local path or a network drive. I don't remember exactly how, but one of the companies I worked with in Perth had exactly the same environment. This assumes that all websites point to the same physical path in the file system.
Now, a word of advice: I don't know how you have it all set up, but if this A record is simply creating a subdomain, I'd shift to a real multi-tenant environment. That is, one single website and one single app pool for all clients, with multiple bindings, each associated with a specific subdomain. This approach is far more scalable and uses far less memory; I've done some benchmark profiling in the past, and the amount of memory each process (app pool) was consuming was a massive waste of resources. There's a catch, though: you will need to prepare your app for a multi-tenant architecture to avoid any sort of bleeding between clients, such as:
Avoiding any per-client singleton component
Avoiding static variables
Cache cannot be global and MUST have a client context associated (see the sketch after this list)
Pay special attention to how you save client files to the file system
Among other things. If you need more details about setting up TeamCity in your current environment, let me know; I could probably find some useful info.
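For the cache point above, a rough sketch of what tenant-scoped caching could look like. The tenant resolution and the cache wrapper here are assumptions, not your actual code; the point is simply that every key is prefixed with the client derived from the subdomain so one client's data can never bleed into another's:

```csharp
// Tenant-scoped cache sketch for a single-site, multi-tenant setup.
// "LoadPricing" in the usage comment is a placeholder for your own data access.
using System;
using System.Runtime.Caching;
using System.Web;

public static class TenantCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    private static string GetTenantId()
    {
        // e.g. "client1" from "client1.example.com" -- placeholder logic.
        return HttpContext.Current.Request.Url.Host.Split('.')[0];
    }

    public static T GetOrAdd<T>(string key, Func<T> factory, TimeSpan ttl)
    {
        string tenantKey = GetTenantId() + ":" + key;   // tenant-scoped key
        var cached = Cache.Get(tenantKey);
        if (cached != null)
            return (T)cached;

        T value = factory();
        Cache.Set(tenantKey, value, DateTimeOffset.UtcNow.Add(ttl));
        return value;
    }
}

// Usage: var pricing = TenantCache.GetOrAdd("pricing", () => LoadPricing(), TimeSpan.FromMinutes(10));
```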
Developing within SharePoint 2010 - all the latest updates for it are installed (SP2 etc.)
Standard farm with 2 application servers, 2 front-end servers, an Active Directory server and 2 SQL servers. All of this is hosted on Windows Azure virtual machines, within a virtual network.
While performing a simple SPWebApplication.Lookup() I noticed that it takes very long to complete - about 16 seconds. For comparison, locally it takes about 1 second, and on another very similar farm, also hosted in Azure, about 2 seconds.
Attempts made to fix the performance degradation:
Checked configs and network settings, pings etc. - looks 100% OK.
Profiled with SQL Profiler - no bottlenecks found; there is actually no heavy SQL behind this request.
Double-checked that all the servers and DBs are upgraded and up to date.
Eliminated all the errors that were found in the ULS and Windows logs - those are clean now.
Investigated metrics with Metalogix Diagnostics Manager - nothing critical was found. It only sometimes showed that the processor queue length is high, but as far as I know the normal value is the number of cores + 1, so 4-5 in my case is fine. It's also worth noting that, from my perspective, such a number is normal for a VM.
Wrote a very simple console app that performs the lookup of the web application (a minimal version is sketched after this list) and profiled it with ANTS Profiler. Noticed that the call tree differs from the result received locally; maybe that's OK, because locally I have a standalone installation.
The result on the farm is not optimistic - several calls have a huge hit count. Though it's clear where the bottleneck is in the call tree, I have run out of ideas about its source. Profiling result here: http://1drv.ms/1kYT3rT
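For reference, a minimal version of that console app might look like the following. The web application URL is a placeholder; the project needs a reference to Microsoft.SharePoint.dll, must target x64/.NET 3.5 as appropriate for SharePoint 2010, and should be run on a farm server under an account with farm access:

```csharp
// Times repeated SPWebApplication.Lookup calls so the 16-second behaviour
// can be reproduced outside of the full application.
using System;
using System.Diagnostics;
using Microsoft.SharePoint.Administration;

class LookupTimer
{
    static void Main()
    {
        var webAppUri = new Uri("http://intranet.contoso.local"); // placeholder URL

        for (int i = 0; i < 5; i++)
        {
            var sw = Stopwatch.StartNew();
            SPWebApplication webApp = SPWebApplication.Lookup(webAppUri);
            sw.Stop();

            Console.WriteLine("Attempt {0}: {1} ms ({2})",
                i + 1, sw.ElapsedMilliseconds,
                webApp != null ? webApp.Name : "not found");
        }
    }
}
```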
It would be great if you could advise.
Thanks in advance.
How many disks do you have on your VM?
Azure has limited IOPS, 500 per disk... organise your databases so they are on different disks to get more IOPS. You can have 16 disks per VM.
http://msdn.microsoft.com/library/azure/dn248436.aspx
I'd like to know my options for the following scenario:
I have a C# winforms application (developed in VS 2010) distributed to a number of offices within the country. The application communicates with a C# web service which lies on a main server at a separate location and there is one database (SQL Server 2012) at a further location. (All servers run Windows Server 2008)
Head Office (where we are) utilizes the same front end to manage certain information in the database, which needs to be readily available to all offices in real time. At the same time, any data they change needs to be readily available to us at Head Office, as we have a real-time dashboard web application that monitors site-wide statistics.
Currently, the users are complaining about the speed at which the application operates. They say it is really slow. We work in a business-critical environment where every minute waiting may mean losing a client.
I have researched the following options, but I do not come from a DB background, so I'm not too sure what the best route for my scenario is.
Terminal Services/sessions (which I've just implemented at Head Office, and they say it's a great improvement, although there's a terrible lag - like remoting onto someone's desktop, which is not nice to work on).
Transactional replication (sounds quite plausible for my scenario, but it would require all offices to have their own SQL Server database on their individual servers, and they have a tendency to "fiddle" and break everything they're left in charge of! I wish we could take over all their servers, but they are franchises, so they have their own IT people on site).
I currently have a whole lot of the look-up data being cached on start-up of the application, but this too takes 2-3 minutes to complete, which is just not acceptable!
Does anyone have any ideas?
With everything running through the web service, there is no need for additional SQL Servers to be deployed locally to the clients. The WS wouldn't be able to communicate with those databases unless the WS was also deployed locally.
Before suggesting any specific improvements, you need to benchmark where your bottlenecks are occurring. What is the latency between the various clients and the web service, and then from the web service and the database? Does the database show any waiting? Once you know the worst case scenario, improve that, and then work your way down.
Some general thoughts, though:
Move the WS closer to the database
Cache the data at the web service level to save on DB calls (see the sketch after this list)
Find the expensive WS calls, and try to optimize their throughput
If the lookup data doesn't change all that often, use a local copy of SQL CE to cache that data, and use the MS Sync Framework to keep the data synchronized to the SQL Server
Use SQL CE for everything on the client computer, and use a background process to sync between the client and WS
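As a rough illustration of the caching suggestion above, a cache-aside sketch at the web service level could look like this. The lookup type, the DB call and the 10-minute lifetime are all placeholders for your own code:

```csharp
// Cache-aside sketch: lookup data is read from the database once and served
// from the in-process ASP.NET cache afterwards, saving repeated DB calls.
using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public class LookupService
{
    public IList<LookupItem> GetLookupData()
    {
        const string cacheKey = "LookupData";

        var cached = HttpRuntime.Cache[cacheKey] as IList<LookupItem>;
        if (cached != null)
            return cached;                               // served without a DB call

        IList<LookupItem> data = GetLookupDataFromDb();   // the expensive query
        HttpRuntime.Cache.Insert(cacheKey, data, null,
            DateTime.UtcNow.AddMinutes(10),               // absolute expiry (placeholder)
            Cache.NoSlidingExpiration);
        return data;
    }

    private IList<LookupItem> GetLookupDataFromDb()
    {
        throw new NotImplementedException();              // your existing data access
    }
}

public class LookupItem
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```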
UPDATE
After your comment, two additional thoughts. If your web service payload(s) is/are large, you can try adding compression on the web service (if it hasn't already been implemented).
You can also update your client to make the WS calls asynchronously, either on a background thread or, if you are using .NET 4.5, with async/await. This would at least keep the UI usable, but wouldn't necessarily fix any issues with data load times.
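A rough sketch of the async/await option, assuming a generated service proxy that exposes an async method; "OrdersServiceClient", "GetOrdersAsync" and the control/field names are placeholders for your own proxy and form:

```csharp
// Async/await sketch for the WinForms client (.NET 4.5+): the UI thread stays
// responsive while the web service call runs.
using System;
using System.Windows.Forms;

public partial class OrdersForm : Form
{
    private async void loadButton_Click(object sender, EventArgs e)
    {
        loadButton.Enabled = false;
        try
        {
            using (var client = new OrdersServiceClient())            // hypothetical WS proxy
            {
                var orders = await client.GetOrdersAsync(branchId);    // non-blocking call
                ordersGrid.DataSource = orders;                        // back on the UI thread
            }
        }
        catch (Exception ex)
        {
            MessageBox.Show("Failed to load data: " + ex.Message);
        }
        finally
        {
            loadButton.Enabled = true;
        }
    }
}
```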
I have an application developed in ASP.NET MVC 3 which uses a SQL Server database.
Apart from this, I have a console application which calls an external web service and updates the same database according to the returned information and business rules (basically we iterate over the records from the web service, process the business rules and update the same database). We have configured the console application with the Windows scheduler to run periodically.
The problem is that when my console application runs, it uses 100% of the CPU (because we're getting more than 2,000 records from the web service), and because of that my MVC application hangs or sometimes works very slowly, since both applications are hosted on the same Windows server.
Could anybody please let me know how I would resolve this problem? I want to keep both on the same server because I have a central database used by both applications.
Thanks in advance.
You haven't given enough detail for anyone to really provide a resolution, so I'll simply suggest how I would approach it.
First, I would review the database schema with a DBA to make sure there aren't things like table locks (or if there are, come up with strategies to compensate for them). I would then use SQL Server Profiler to see where (or if) there are any bottlenecks in SQL Server while these things are running. I would then profile the console application to make sure it's not doing something it doesn't need to be doing. I might even consider profiling the web site to see if there's anything in there that might be contributing to slowness.
After that, I would figure out how to get rid of the console application and work its functionality into the site. Spawning another application on a given web request is not scalable; if more than a couple of those come in at once, you've got the potential to bog the server down very easily.
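If you do fold the console job into the site, one minimal (and admittedly naive) shape is a timer started in Application_Start that runs the same processing code on a background thread; note that an app-pool recycle will stop the timer. Everything below is a sketch with placeholder names, not a drop-in implementation:

```csharp
// Global.asax sketch: run the former console-app work from inside the web app.
// "ProcessWebServiceRecords" is a placeholder for the existing console logic;
// existing route/area registration is omitted for brevity.
using System;
using System.Threading;
using System.Web;

public class MvcApplication : HttpApplication
{
    private static Timer _syncTimer;

    protected void Application_Start()
    {
        // Run every 15 minutes (placeholder interval), starting after 1 minute.
        _syncTimer = new Timer(_ => ProcessWebServiceRecords(),
                               null,
                               TimeSpan.FromMinutes(1),
                               TimeSpan.FromMinutes(15));
    }

    private static void ProcessWebServiceRecords()
    {
        // Existing console-app logic: call the external web service,
        // apply the business rules, update the shared database.
    }
}
```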
We have a very old Windows Forms application that communicates with the server using .NET Remoting.
Can anyone recommend a method or a tool to performance-test this?
Last time I looked, there were no good tools for performance testing a Windows application with many users.
You can use a profiler to see what is going on if the single-user case is too slow.
If you only care about a handful of users, most UI test tools could be used to record a script that you play back on a few machines.
Otherwise you need to write a command-line application that talks to the server in the same way as the Windows Forms app, and then run many copies of that command-line app.
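A sketch of what such a command-line driver could look like against a .NET Remoting server. The interface, the TCP URL and the call being made are placeholders for whatever your WinForms client actually invokes, and the project needs a reference to System.Runtime.Remoting:

```csharp
// Command-line load driver sketch: create the same remoting proxy the
// WinForms app uses and hammer one call from many parallel workers.
using System;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;
using System.Threading.Tasks;

class RemotingLoadDriver
{
    static void Main()
    {
        ChannelServices.RegisterChannel(new TcpChannel(), ensureSecurity: false);

        const int workers = 20;          // run extra copies of this app for more load
        const int callsPerWorker = 100;

        Parallel.For(0, workers, _ =>
        {
            // Same activation the WinForms client performs (placeholder URL).
            var service = (IOrderService)Activator.GetObject(
                typeof(IOrderService), "tcp://appserver:8085/OrderService");

            for (int i = 0; i < callsPerWorker; i++)
                service.GetOrders(customerId: 42);   // placeholder call
        });

        Console.WriteLine("Done.");
    }
}

// Shared contract the real client also references (placeholder).
public interface IOrderService
{
    object GetOrders(int customerId);
}
```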
Alternatively, get the server to log all calls in such a way that they can be played back.
In the past I have thought about getting each client to log its calls to the server in the form of C# statements that could then be compiled to create a performance test program.
There are some systems that claim to record the communication between the client and the server at the network level and then play it back many times. This may work if it is sensible for lots of clients to send the same requests with the same parameters to the server; otherwise there will be lots of scripting within the tool to create custom requests for each client.
To do performance testing on the application, you can use a profiler, such as the ANTS Performance Profiler. I'm not sure if that's going to profile the network communication as well or not.
Alternatively, you can set up performance counters and record information that way when you run your app.