I need to write an ASP.NET application which must handle a very large number of transactions per second - as many as 5000 users may transact at the same time. I think I will use WCF on the back end to communicate with SQL Server. But on the front end, can IIS handle 5000 concurrent users effectively, or is there a simple way to host my application outside of IIS?
It will depend on the characteristics of the machine, but you could always set up a web farm to handle high loads.
You can host a WCF application outside of IIS using WAS, a Windows service, or a plain .NET application (self-hosting).
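For illustration, here is a minimal self-hosting sketch in a console application. The IOrderService/OrderService names are hypothetical stand-ins for your own service contract:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string Ping();
}

public class OrderService : IOrderService
{
    public string Ping() { return "pong"; }
}

class Program
{
    static void Main()
    {
        // Host the service at a known base address instead of relying on IIS.
        using (var host = new ServiceHost(typeof(OrderService),
            new Uri("http://localhost:8080/OrderService")))
        {
            host.AddServiceEndpoint(typeof(IOrderService), new BasicHttpBinding(), "");
            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}

The same ServiceHost code can be moved into the OnStart/OnStop methods of a Windows service for unattended hosting.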
It certainly would be possible to design a system using IIS that could handle the load you describe. Whether this is a good idea or not really depends on the application. I suggest benchmarking some representative loads to determine whether it is quicker to host in IIS or to host the WCF application outside of IIS.
Why do you need it outside IIS? You can get 5000 TPS with IIS. But bear in mind that it depends on a lot of factors: the hardware, how your servers are configured, how heavy your application is, and what its response times are. Also, as suggested, you can build a web farm: put a load balancer in front and several servers behind it. So it is possible; you just need a proper design and, if needed, a budget for hardware upgrades.
We have multiple load balanced IIS web servers for our application backed by a MS SQL server database. We store application configuration information in the database. While the application is running I frequently change the configuration and the changes need to be propagated to the other web servers. Is there a good way to do this? I have been doing it through SignalR (to alert other servers a change has occurred and they should refresh their configuration) but SignalR is not always reliable and sometimes one server does not get the message. Is there a better solution?
Thank you
Updated
I now understand that you need to propagate an application-level configuration change.
You could, as you mentioned, use SignalR. This would require having a central server that hosts the websocket connections, but has the benefit of being "instant".
Alternatively, if your requirements are simple, perhaps a short term in-memory cache would suffice.
If it's more complex than that, I'd recommend looking into event queues (MSMQ, RabbitMQ). In this model, the instance changing the configuration publishes an event to the queue, which is consumed by the other instances on a background thread.
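To make the queue option concrete, here is a rough sketch using MSMQ (System.Messaging); the queue path and message label are made up. Note that a single MSMQ queue is point-to-point, so for a true broadcast each web server would need its own queue (or you would use a pub/sub broker such as RabbitMQ):

using System.Messaging; // requires a reference to System.Messaging.dll

static class ConfigChangeBus
{
    const string QueuePath = @".\Private$\ConfigChanged";

    // Called by the instance that changed the configuration.
    public static void Publish()
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);
        using (var queue = new MessageQueue(QueuePath))
            queue.Send("ConfigChanged", "config");
    }

    // Called on a background thread on a consuming web server.
    public static void WaitForChange()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            var message = queue.Receive(); // blocks until a message arrives
            var body = (string)message.Body; // "ConfigChanged" -> reload configuration
        }
    }
}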
Original Answer
Microsoft Web Deploy was built to do this. It supports synchronizing sites across servers, even down to application pool settings and SSL certificates.
The IIS documentation site has a specific page that is relevant to your use case: Synchronize IIS.
There is a lot involved in configuring Web Deploy so I won't attempt to explain it all here, but for posterity reasons the command to sync a local site to a remote machine would be:
msdeploy.exe -verb:sync
-source:apphostconfig="Default Web Site"
-dest:apphostconfig="Default Web Site",computername=Server1
(The command was split over multiple lines for readability)
As an entirely different approach, you could also use a "pull configuration" system like PowerShell Desired State Configuration or Chef.
I've been away from web development for 6/7 years now and I'm completely lost as to how things are done these days. I'm going through some tutorials on HTML5 and whatnot, but I was hoping to get a helping hand here.
I'm trying to build a (POC) website where the "server" monitors its running applications and, when a certain application is running, changes the content of a hosted page. I don't want the model to be PageLoad->Application Check; I'd rather have something like ServerStart->ApplicationHook->Callback->Model->PageLoad->CheckModel, so a hook is put in place when the server starts and the hook's callback updates a model which the page uses to update itself. Although this architecture may not be the best way, in general I'm just looking for a way to have a long-running process which starts when the server starts up. Eventually I'll move this to a Windows service which calls a web service when changes are made, but for a POC I'd rather keep away from multiple interacting applications, as the Windows service would need to be "called" by the server too and I can't think of an easy implementation of that at the moment.
So, if you were building a page which relied on events on the server and needed to be able to interact with an application on the server separately to an individual page, but the page needs to be able to "post" information back to that application what would you do?
My explanation has been a bit all over the place, so I hope at some point my question has come across clearly! :)
Maybe there are alternatives, but I think your only option for this kind of setup is a Windows service. If you need to talk to it from other components, have it use sockets or listen for HTTP requests on a known port. Doing what you described from a web application is not impossible, but it would certainly be very hard, since it's the web server (the app pool executable) that controls what happens in the process, not your code. In a Windows service, you're in control.
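As a rough sketch of that idea (the process name and port below are made up), a long-running console app or Windows service body could check for a particular application and answer HTTP requests on a known port:

using System;
using System.Diagnostics;
using System.Net;
using System.Text;

class MonitorService
{
    static void Main()
    {
        var listener = new HttpListener();
        // Note: non-admin processes may need a URL ACL (netsh http add urlacl).
        listener.Prefixes.Add("http://localhost:8081/status/"); // known port
        listener.Start();

        while (true)
        {
            // Blocks until the web page (or its model) polls for status.
            HttpListenerContext context = listener.GetContext();
            bool running = Process.GetProcessesByName("notepad").Length > 0;
            byte[] body = Encoding.UTF8.GetBytes(running ? "running" : "stopped");
            context.Response.ContentType = "text/plain";
            context.Response.OutputStream.Write(body, 0, body.Length);
            context.Response.Close();
        }
    }
}

Your page's model can poll this endpoint, and a POST handled by the same listener would give the page a way to push information back to the application.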
Edit: here's an article about the different options for hosting a web service - it seems to me that using a Windows service is, indeed, your best choice. You may be able to use a WCF service but you'll have to talk to a local application on the server and that part may be easier to do just using a Windows service.
I am relatively green with C# and WCF. I have landed on a project where I am creating self hosted WCF services running as Windows services but am starting to wonder if I should use IIS instead (which we don't currently use) as managing all of these services could eventually get cumbersome.
Despite my best efforts, I have yet to find any definitive information about why I might favor one approach over the other. The services are primarily used for utility stuff like resizing images, retrieving files, etc. and are called by both C# and Java clients.
Thanks
The shortest answer would be 'it depends' - on your requirements. You can self-host without problems, but IIS will manage resources more effectively and let you fine-tune things more easily than a self-hosted service.
For instance, in IIS it would be simpler to deploy a new version or remove an old one.
Either way is fine.
Generally, using the built-in IIS hosting capabilities can make deployment and configuration simpler for you. You also get the activation model of http.sys - which means IIS will start the necessary process for you when an appropriate message arrives.
Clients on any platform can connect to the WCF services regardless of whether they are self-hosted or IIS-hosted.
PS: see also how to allow IIS-hosted WCF services to store their configuration data in distinct xxx.config files.
I'm starting to develop the actual code for my website and wanted to know how to develop or design a website that is load-balancer friendly. I read a post on Stack Overflow regarding scalability, and the selected answer stated: "Make sure you consider load balancing when developing your application". How do I go about this?
Your decision will come down to environment. If this is a product for sale, you will not have any control over the load balancing implementation. This means that "sticky sessions," where a user is bound to the same server for the duration of a session, cannot be guaranteed. Sticky sessions allow just about any application to be load-balanced, but they are not as efficient.
If you cannot guarantee an implementation with sticky sessions, avoid the usage of Session state altogether, or look into a shared-session solution.
1) Do not use static fields to store data, statistics, ...
2) Use session state with care - you can still use in-process session state with sticky sessions, but I do not like it.
3) Do not rely on the IP address of the server.
Well, one answer is to reduce reliance upon session variables. It's possible to share session variables between servers via a session state server, but that means all your servers then have a single point of failure at the session server, and it reduces performance.
Basically, just try to make each page as stand-alone and stateless as possible, and you'll be good.
This might be obvious to most of you, but it actually became an issue in our environment when we started to use a load balancer and several web servers: do not rely on the IP addresses of your web servers.
We had a production environment that used a switch and a set of internal IP addresses, including that of the web server (our products usually run in a closed-off environment, not the open Internet). Once you have several web servers, that becomes a problem.
Make sure you have a development/QA environment where you can test your software in a load balanced environment and see the issues in your code as you develop it rather than waiting until the deployment day.
One thing to take into account is the usage of Session data to maintain state.
Because subsequent requests from the same user can be handled by other servers behind the balancer, you cannot use InProc mode; use an out-of-process store such as StateServer or SQLServer mode instead.
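For reference, a minimal web.config sketch pointing session state at a shared state server (the host name here is illustrative); remember that the machineKey must also be identical on every server so that view state and auth tickets validate across the farm:

<system.web>
  <!-- All servers in the farm point at the same out-of-process state server. -->
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=stateserver01:42424"
                timeout="20" />
</system.web>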
I am making a medium-sized standard LOB application. Currently it's a web application, but I am formulating a proposal to revamp it into a remote desktop application. By this I mean that the database and the application server will be hosted in a remote location, and the client application will communicate with the server over the internet (via WCF, web services, or Remoting).
My question is this: the only reason I am shifting this from a web platform is due to the constraints of the web (I don't want to do AJAX or JavaScript to minimize those constraints, so please no JS/AJAX recommendations). I have made traditional desktop applications and they are considerably fast, but I have never made a remote or distributed application, and I am not sure whether the application will be faster than the web version or not.
As I understand it, the remote desktop application would be much faster. For one, there won't be any postbacks involved (I hate them so much). The data will obviously come over the internet, so in that respect, is it better to shift to the remote desktop just for sheer speed and power?
Any help in the right direction would be gratefully received. Many thanks.
Zeeshan
I think the biggest advantage of desktop clients over web applications is freedom in UI design, and you don't have to worry about any inconsistencies in the client environment, although those are not an issue if you are using a client that runs on Silverlight.
Personally I don't like web applications that require a lot of user interaction. Some of them are a pleasure to use, but I think it is very easy to do it the wrong way and end up with a buggy or unresponsive application, probably because of incompatibilities between browsers. (I have IE, Firefox and Chrome installed on my computer and I use one for some websites because they run faster on it, and the others for other sites because those pages only show up correctly on them.) This might not be an issue for a Silverlight client, though.
As for network speed, depending on what goes over the wire, even with binary serialization Remoting can have quite a bit of overhead. For example, along with the data it writes full class names, library names and their versions, so it can get pretty big and slow even for small amounts of data (although it should still be smaller than HTTP). It also has the same problems that HTTP has over unreliable connections, because it uses a similar protocol. For one project we had to write a custom serializer for some objects because binary serialization alone was generating 200K while our custom serializer for those objects generated 50K. We then ended up writing our own network protocol because the one that comes with the runtime was frequently stalling over unreliable wireless networks, and Remoting doesn't give you any control over the socket it creates (which makes sense in terms of encapsulation, but means you can't close it and force it to open a new one).
(I am assuming that you are asking about Remoting vs. a web app, not remote desktop vs. web apps, because of your note about postbacks - you can't avoid those with a remote desktop session.)
Rewriting an application just for sheer speed? No, because the user probably won't see much difference in response time.
You are somewhat ambiguous with your terminology - do you want a client app that runs on the user's machine, or do you want an app that runs on the server and the user connects via remote desktop (RDP)?
If you are talking about a client app that communicates with the server via WCF etc., then yes, it will be faster than a standard web app, although it will still be slower than a purely local desktop app. It will be faster than the web app not just because of the lack of postbacks, but also because you will be sending pure data over the wire, not a massive amount of HTML/JavaScript combined with your data. With a client app you have several options, so consider them carefully - do you want Silverlight, WPF, or a native WinForms app? Each has its positives and negatives.
If instead you are talking about a client app running on the server which the user then accesses via RDP, then you have other considerations to think of. For any more than two concurrent users you will need to buy CALs so the users can connect to the server. At this point you should also be considering whether to run a terminal server or a Citrix-type setup instead of plain remote desktop.
Edit
When using WCF over a WAN (internet) you will certainly have to consider how you will secure it. WCF makes it trivial to secure the channel, but you need to consider how you will do authentication - there are a couple of different ways, but you can easily google that stuff yourself. The method you choose will be important due to the limited resources or skill-sets of the users.
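As one example (only a sketch, and only one of several possible schemes): transport security (HTTPS) to protect the channel, with username/password credentials at the message level for authentication:

using System.ServiceModel;

static class SecureBindingFactory
{
    // HTTPS protects the channel; UserName credentials identify the caller.
    public static WSHttpBinding Create()
    {
        var binding = new WSHttpBinding(SecurityMode.TransportWithMessageCredential);
        binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
        return binding;
    }
}

On the host side you would then plug in your own credential check, for example via a custom UserNamePasswordValidator.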
As for what you write it in, you can't argue with WinForms if that is where your experience is. Personally, I would never again use ASP.NET/AJAX/etc. for a web-type application; it would be WPF or Silverlight all the way (I would only use ASP.NET for simple web sites). You can use the Express (free) versions of Visual Studio to write it in; you don't need Expression (it's just a nice-to-have, and is more aimed at the design side than the actual coding side). Deploying the app need not be difficult - Silverlight or WPF XBAP apps are delivered via the web, so the user has to do nothing (except for the simple install of the Silverlight plugin or the right .NET Framework for WPF - check this link). WinForms or stand-alone WPF require slightly more work, but you can avoid most issues by writing a good installer.
Whichever you choose, make sure you don't underestimate the time for development (because you will have a bit of a learning curve), and also make sure you budget enough time for testing it - especially the security side of it :)
I have been in a similar situation, although started with a Winforms LOB application.
Here's what we found with WinForms...
It's going to be harder to deploy each release to all client machines.
WinForms can't easily be run on other operating systems (with the exception of Mono).
WCF endpoints can get complicated, and you need to manage an endpoint per release/version of your application.
Authentication, Authorization and Security can be tricky to get right!
Here's why you should stick with an HTML web application.
It's going to be easier to deploy, as you just need to copy one set of DLLs into the bin folder. This can be scripted from a continuous integration or staging server.
Security is going to be easy, by using an SSL certificate.
Silverlight/Flash should fill in the gaps that HTML leaves out.
Microsoft has also unified its connected-systems stacks (ASMX, Remoting, etc.) in .NET 3.5 under WCF. It has quite a learning curve - around 4-5 weeks.