Context: My company has been developing a WebAPI application to be hosted by IIS, and the request latency for a single static content file is around 60ms. We benchmarked the same app using WebAPI self-host and the latency for the same content file was around 15ms, which really blew us away.
From a deployment perspective, we love IIS as it gives us great flexibility to do hot deployments by copying DLLs directly to our web servers, without requiring any sort of drain-stopping.
Question: Is it feasible to do similar hot deployments (just copying over DLLs) with self-hosted applications?
Nope, while the self-host is running its DLLs are locked, so you'll have to stop the self-host first. You can do other tricks like deploying to another folder and then re-routing requests, etc., but it's not the same as IIS deployment.
Self-host does allow you to do some neat things, like having a secondary service running on the same address that responds while the primary service is down, e.g. returning a 503 with a Retry-After header. Stopping and starting a service so you can copy files over should only cost a few seconds.
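A minimal sketch of such a fallback responder, assuming a plain HttpListener started on the primary service's base address while the real host is stopped (the URL, port, and Retry-After value are placeholders):

    using System.Net;
    using System.Text;

    // "Maintenance mode" responder: run this on the primary service's address
    // while the real self-host is stopped for a deployment.
    class MaintenanceResponder
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://+:8080/");   // placeholder base address
            listener.Start();

            while (true)
            {
                var context = listener.GetContext();
                context.Response.StatusCode = 503;               // Service Unavailable
                context.Response.AddHeader("Retry-After", "30"); // ask clients to retry in ~30s
                var body = Encoding.UTF8.GetBytes("Deployment in progress, please retry shortly.");
                context.Response.OutputStream.Write(body, 0, body.Length);
                context.Response.Close();
            }
        }
    }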
On the other hand, you are doing something wrong if IIS is taking longer than self-host to deliver static content. IIS can use kernel-mode features of http.sys to deliver static content. One of the OWIN-based hosts has enabled this for self-host, but the default self-host does not. In my experience IIS should definitely be faster than 60ms for small files.
Related
I am trying to run a background service which just writes to a file on a specified interval.
There are two methods that I tried
1) Created the project with the Console application template
2) Created the project with Web Application as template
When I run the app from Visual Studio, both of them run fine. But when I deploy them to IIS, only the web application version works. It must be noted that there is absolutely no difference between the code of the two projects. I have used WebHost as the hosting strategy in both projects, and the Console application has all the same dependencies installed as the Web Application version.
I must also mention that I have used the preloadEnabled="true" option in IIS, as IIS needs a web request to start the application.
I am wondering what the difference between the two project types is, given that the code is the same? I don't want to use the Web Application template.
Edit 1: I forgot to mention that the service will also need to expose an API endpoint for health-check purposes. Will the Windows service approach listen for HTTP requests?
I used the following article for implementing my background service.
https://learn.microsoft.com/en-us/dotnet/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice
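The service itself is essentially the pattern from that article; roughly this (the file path and interval are simplified here):

    using System;
    using System.IO;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Extensions.Hosting;

    // Writes a timestamp to a file on a fixed interval, following the
    // IHostedService/BackgroundService pattern from the linked article.
    public class FileWriterService : BackgroundService
    {
        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                await File.AppendAllTextAsync(@"C:\temp\heartbeat.log",
                    DateTime.UtcNow.ToString("O") + Environment.NewLine, stoppingToken);
                await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
            }
        }
    }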
After years of building background services, I have learned that Windows services are the best tool for these applications. While there are various techniques to keep an IIS application up and running in the background and prevent it from being recycled, in practice applications on IIS are not meant to run forever.
If you intended to build your app in the cloud, I would have suggested something like Azure WebJobs or Azure Functions timer-triggered functions, but for on-premises hosting, even using something like Hangfire inside the web app is not sustainable. The worst case is when you need backward compatibility with Windows servers that don't have the "Application Initialization" module.
My suggestion is to move your application to a simple Windows Service if you control your environment. Windows services consume less memory, are easier to manage, and can run forever without getting recycled.
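If you go that route, a rough sketch with the generic host might look like this; it reuses the FileWriterService sketched in the question, and the small catch-all Kestrel pipeline answers the health-check question from the edit (the WindowsServices package, the port, and the response body are assumptions, not a drop-in solution):

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.Http;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    public static class Program
    {
        public static void Main(string[] args) =>
            Host.CreateDefaultBuilder(args)
                // Requires the Microsoft.Extensions.Hosting.WindowsServices package;
                // this is a no-op when run from the console, so debugging still works.
                .UseWindowsService()
                .ConfigureServices(services => services.AddHostedService<FileWriterService>())
                // Minimal HTTP listener so the service can answer health checks.
                .ConfigureWebHostDefaults(web => web
                    .UseUrls("http://localhost:5080")   // port chosen arbitrarily
                    .Configure(app => app.Run(async context =>
                    {
                        context.Response.StatusCode = StatusCodes.Status200OK;
                        await context.Response.WriteAsync("Healthy");
                    })))
                .Build()
                .Run();
    }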
Web applications are plainly the wrong tool for this.
Being always on and always reachable, web servers are primary targets for hacking. To compensate for that, they are usually run under the most restrictive user rights you can imagine: read rights to their program and this instance's content directory. While I do not know why it worked at all, it probably will stop working in production.
What you wanted to write was either a Service or something executed by the Windows Task Scheduler. Personally I advise the Task Scheduler, as Services have their own set of restrictions. Unless of course there is some detail of the requirements that you did not tell us.
This article could be helpful. It's a step-by-step tutorial on how to convert a console application to a web application.
Currently working on a .NET solution for an application server. I'm using .NET 4.0 running on Windows Server 2008 R2 with IIS 7.5.
My requirements are:
The application server can run multiple Console applications at once on a schedule - Quartz.net looks like a really good solution to this problem - and is working well for me so far
The application server will also host a web application that will report on jobs (what time they ran, what they did, how long they took etc)
I would like to be able to restart the "service" that is running my jobs and trigger ad hoc jobs from the web interface.
The Service that is running my jobs needs to run all the time
Once this is live I will not have direct access to the machine to restart a Windows Service, but I could potentially set up IIS to be able to do this for me.
WCF services look quite promising to me, but I'm not sure where to host one. My current project uses a WCF service to run console applications using the Quartz.net plugin. Configuration for what to run and when to run it is stored in an Oracle database, and my WCF service connects directly to the database to retrieve this information (not sure if that is the intended use of WCF).
If I host the WCF service in IIS/WAS, then running the console applications might be a security concern, from what I've read. At least I can keep the WCF service running all the time using AppFabric. Alternatively, I could host it in a Windows Service and let my web app consume the WCF service to report on the jobs. I'm concerned about using a Windows Service, though, as I won't have direct maintenance access to this machine, and if it breaks I'm in trouble. I would really like to be able to do the maintenance from a web application. A Windows Service also feels a little unnecessary, given that the WCF service could be hosted in IIS.
So my question is - is a WCF Service the right approach to this problem or can anyone think of a better approach?
If a WCF service is a good approach - where should I host it so that I can perform maintenance via a web interface given I will not have direct access to the machine itself?
Should the WCF service be the one to start and schedule the jobs?
I think you're overengineering it, possibly.
The Problem: You have a web site which needs to start up jobs on an ad-hoc basis. Other jobs will be run to a fixed schedule. The web site will report on all/any of these jobs.
For running the scheduled jobs, a Windows Service using Quartz is indeed an ideal solution for the fixed-schedule part. However, to report on those jobs, the Service must collect the data and make it available. A service can be set up to restart on failure, so you can guarantee that it will always be running (barring a minute or two while it restarts if it fails - and why should it fail?). However, any history will be lost unless the Service stores it somewhere it can be retrieved from after a restart.
The simpler solution to the web site getting the history is for the Service to write its data to a database. Then it doesn't worry about a restart: all the history has already been saved, and the data can be read by the web site at any time.
Similarly, if the web site talks directly to the Service (as a WCF Service or otherwise), then what happens if the service is not currently running? Requests fail until the restart is completed. Frustrating for the user. Alternatively, the web site puts the request into the database. The service monitors the database for requests, and starts jobs appropriately when it sees a new request. If a request is written while the service is not running, it will see the request(s) in the DB when it restarts and execute them.
So I think using a WCF service is overkill, and actually introduces some problems: persistence of history, and what to do about requests made while the service is down. These problems don't arise if you go the way I've described.
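To make the request-via-database idea concrete, the polling side of the Service can be as simple as this sketch (the table, column names, and connection string are invented for illustration, and you would use the Oracle provider rather than SqlClient):

    using System;
    using System.Data.SqlClient;   // swap for your Oracle provider (e.g. ODP.NET)
    using System.Threading;

    // Inside the Windows Service: poll a "job_request" table that the web site
    // writes to, run whatever is found, then mark the row as handled.
    class JobRequestPoller
    {
        const string ConnectionString = "...";   // your real connection string here

        public void PollForever(CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                using (var connection = new SqlConnection(ConnectionString))
                {
                    connection.Open();
                    using (var command = new SqlCommand(
                        "SELECT id, job_name FROM job_request WHERE handled = 0", connection))
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            int id = reader.GetInt32(0);
                            string jobName = reader.GetString(1);
                            // Trigger the Quartz job named jobName here, then
                            // UPDATE job_request SET handled = 1 WHERE id = :id
                        }
                    }
                }
                Thread.Sleep(TimeSpan.FromSeconds(10));   // polling interval is arbitrary
            }
        }
    }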
I have been through WCF and related topics and created my first WCF service. It works fine, but the problem is that I don't understand the hosting concept.
Different tutorials do different things: some create a separate console application to host the service and then use it from an ASP.NET app, while others don't host it anywhere in particular and just add a reference to the other project and use it.
I don't understand when to host it where, and why.
Please help me with this issue. I am using Visual Studio 2013 with .NET 4 and ASP.NET C#.
Basically, a WCF service needs to be hosted somewhere so that it can be accessed from somewhere else. There are several ways to do this (and probably more than I know about too), but two of the simplest and most common are to host the service in IIS Express, or in IIS (Internet Information Services).
IIS Express
The simplest way to achieve the first (IIS Express) is to right-click the project in Visual Studio and select View in Browser. That will open a directory listing in your browser, and in that directory you should see a file ending in .svc. Clicking that file should open the service description page, with text like:
You have created a service.
To test this service, you will need to create a client (...)
The URL to that page, is in effect the URL clients will need to connect to your service. It should look something like http://localhost:64835/YourServiceName.svc.
That means the service is hosted locally, at port 64835, and accessible for clients at that address. Since this is in IIS Express however, it will no longer be accessible once you've closed Visual Studio, since it only runs as part of it.
IIS proper
Hosting in IIS means your service will be accessible whenever IIS is running. Once installed, it will usually start up when you log in, and run silently in the background. When it is running, you can start your service simply by accessing the correct URL. If it is not running, it might take a few seconds to start it. The next time it is called, it should respond quickly.
Note that in IIS, an application will by default run on port 80, which is the default port checked by browsers and possibly other clients, which means you don't need to specify it as in the example above. The URL will therefore generally be something simpler, like http://localhost/yourservice/yourservice.svc (although you could configure it to use another port, or another protocol, say https://..., or something else if you like).
Once configured, and the relevant port is open however, your service should be accessible to the rest of the world.
Note: From the outside world, the URL will be different; it could be something like:
http://123.456.789.123/yourservice/yourservice.svc, if that is your IP address, or
http://yourdomain.com/yourservice/yourservice.svc if you've set up a domain.
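Once you have that URL, a client can reach the service with a ChannelFactory pointed at it; a minimal sketch, where the contract and address are placeholders for your own:

    using System;
    using System.ServiceModel;

    // The contract your .svc exposes; shown here only so the sketch compiles.
    [ServiceContract]
    public interface IYourService
    {
        [OperationContract]
        string GetData(int value);
    }

    class Client
    {
        static void Main()
        {
            // The address is whatever the service description page showed you.
            var factory = new ChannelFactory<IYourService>(
                new BasicHttpBinding(),
                new EndpointAddress("http://localhost:64835/YourServiceName.svc"));

            IYourService proxy = factory.CreateChannel();
            Console.WriteLine(proxy.GetData(42));   // call an operation on the contract

            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }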
I would suggest you consider hosting under two categories:
Self Hosting
IIS Hosting
As per your question "when to host where and why", I would say that you self-host a WCF service only during testing. Mostly, self-hosting is not used in live/production environments. For production environments, use IIS hosting.
Self-hosting would be useful for testing on the local machine and on the intranet, while IIS hosting would be useful over the internet.
However, one must be aware that there is no rule as such regarding the usage of a particular hosting technique in any particular situation. With experience, the developer will be the best judge.
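For completeness, self-hosting for local testing is only a few lines with ServiceHost; a rough sketch where the service type, contract, and address are all placeholders:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IYourService
    {
        [OperationContract]
        string GetData(int value);
    }

    public class YourService : IYourService
    {
        public string GetData(int value)
        {
            return "You entered: " + value;
        }
    }

    class Program
    {
        static void Main()
        {
            // Self-hosting: the console process itself keeps the service alive.
            var baseAddress = new Uri("http://localhost:8733/YourService");
            using (var host = new ServiceHost(typeof(YourService), baseAddress))
            {
                host.AddServiceEndpoint(typeof(IYourService), new BasicHttpBinding(), "");
                host.Open();
                Console.WriteLine("Service running at " + baseAddress + " - press Enter to stop.");
                Console.ReadLine();
            }   // Disposing the host shuts the service down with the process.
        }
    }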
I am relatively green with C# and WCF. I have landed on a project where I am creating self-hosted WCF services running as Windows services, but I am starting to wonder if I should use IIS instead (which we don't currently use), as managing all of these services could eventually get cumbersome.
Despite my best efforts, I have yet to find any definitive information about why I might favor one approach over the other. The services are primarily used for utility stuff like resizing images, retrieving files, etc. and are called by both C# and Java clients.
The shortest answer would be 'it depends' - on your requirements. You can self-host without problems, but IIS will manage resources more effectively and let you fine-tune things more easily than a self-hosted setup.
For instance, in IIS it would be simpler to deploy a new version or remove an old one.
Either way is fine.
Generally, using the built-in IIS hosting capabilities can make deployment and configuration simpler for you. You also get the activation model of http.sys, which means IIS will start the necessary process for you when an appropriate message arrives.
Clients on any platform can connect to the WCF services regardless of whether they are self-hosted or IIS-hosted.
P.S. See also: how to allow IIS-hosted WCF services to store their configuration data in distinct xxx.config files.
I need to write an ASP.NET application which must handle a very large number of transactions per second - as many as 5000 users may transact at the same time. I think I will use WCF on the back end to communicate with SQL Server. But on the front end, can IIS handle 5000 concurrent users effectively, or is there any simple way to host my application outside of IIS?
It will depend on the characteristics of the machine, but you could always set up a web farm to handle high loads.
You can host a WCF application outside of IIS using WAS, a Windows Service, or a .NET application.
It certainly would be possible to design a system using IIS that could handle the load you describe. Whether this is a good idea or not really depends on the application. I suggest you benchmark some representative loads to determine whether it is quicker to host in IIS or to host a WCF application outside of IIS.
Why do you need it outside IIS? You can have 5000 TPS with IIS. But bear in mind that it depends on a lot of factors: the hardware, how your servers are configured, how heavy your application is, and what your application's response times are. Also, as suggested, you can have a web farm: use a load balancer with several servers behind it. So it is possible; you just need a proper design and, if needed, a budget for hardware upgrades.