What is the best way to run ServiceStack on Linux / Mono? - c#

The ServiceStack website shows that ServiceStack can run on Mono with any of:
XSP
mod_mono
FastCgi
Console
What are these different configurations and which is preferred for Web Services on Mono?

Update for Linux
From the v4.5.2 Release, ServiceStack now supports .NET Core, which offers significant performance and stability improvements over Mono, is derived from a shared cross-platform code-base, and is supported by Microsoft's well-resourced, active and responsive team. If you're currently running ServiceStack on Mono, we strongly recommend upgrading to .NET Core to take advantage of its superior performance, stability and its top-to-bottom supported technology stack.
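As a rough idea of what the move looks like, here is a minimal sketch of hosting ServiceStack on ASP.NET Core. The Hello DTOs, MyServices, AppHost and the port are illustrative names and values (not taken from the release notes), and the sketch assumes the ServiceStack .NET Core packages plus Kestrel:

// Sketch: minimal ServiceStack host on ASP.NET Core (type names and port are illustrative).
using Funq;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using ServiceStack;

public class Hello : IReturn<HelloResponse> { public string Name { get; set; } }
public class HelloResponse { public string Result { get; set; } }

public class MyServices : Service
{
    public object Any(Hello request)
    {
        return new HelloResponse { Result = "Hello, " + request.Name };
    }
}

public class AppHost : AppHostBase
{
    public AppHost() : base("Hello .NET Core", typeof(MyServices).Assembly) { }
    public override void Configure(Container container) { }
}

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Registers the ServiceStack AppHost as middleware in the ASP.NET Core pipeline.
        app.UseServiceStack(new AppHost());
    }
}

public class Program
{
    public static void Main(string[] args)
    {
        new WebHostBuilder()
            .UseKestrel()                       // cross-platform web server, runs on Linux
            .UseUrls("http://localhost:5000/")  // illustrative port
            .UseStartup<Startup>()
            .Build()
            .Run();
    }
}

The point of the sketch is that your existing services and DTOs carry over largely unchanged; only the hosting entry point differs from the Mono options described below.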
Update for Mono
Our recommended setup for hosting ASP.NET sites on Linux and Mono is to use nginx/HyperFastCgi. We've published a step-by-step guide that goes through creating an Ubuntu VM from scratch, complete with deploy / install / conf / init scripts, at mono-server-config.
We're no longer recommending MonoFastCGI after noticing several stability and performance issues. This blog post provides a good analysis of the performance, memory usage and stability of the different ASP.NET Hosting options in Mono.
Development
XSP is similar to the VS.NET WebDev server: a simple standalone ASP.NET web server written in C#. It is suitable for development or small workloads. You just run it from the root directory of your ServiceStack ASP.NET host, which will make it available at http://localhost:8080.
Production
For external internet services you generally want to host ServiceStack web services as part of a full-featured Web Server. The 2 most popular full-featured web servers for Linux are:
Nginx
Use Mono FastCGI to host ServiceStack ASP.NET hosts in Nginx.
Apache
Use mod_mono to host ServiceStack ASP.NET hosts in an Apache HTTP Server.
Self Hosting
ServiceStack also supports self-hosting, which lets you run your ServiceStack web services on their own in a standalone console application (i.e. without a web server). This is a good idea when you don't need the services of a full-featured web server (e.g. you just need to host web services internally on an intranet).
By default the same ServiceStack console app binary runs on both Windows/.NET and Mono/Linux as-is. If you wish, you can easily daemonize your application to run as a Linux daemon as outlined here. The wiki page also includes instructions for configuring your self-hosted web service to run behind an Nginx or Apache reverse proxy.
Since it provides a good fit for Heroku's concurrency model, as detailed in their 12-factor app, self-hosting is an area we'll be looking to provide increased support for in the near future.
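For reference, a minimal self-hosted console AppHost looks roughly like the sketch below. The Hello DTOs and HelloService are illustrative names, and the using lines assume the v4 packages; the v3 equivalents live under ServiceStack.WebHost.Endpoints and ServiceStack.ServiceInterface.

// Sketch: minimal ServiceStack self-host console app (DTO/service names are illustrative).
using System;
using Funq;
using ServiceStack;

public class Hello : IReturn<HelloResponse> { public string Name { get; set; } }
public class HelloResponse { public string Result { get; set; } }

public class HelloService : Service
{
    public object Any(Hello request)
    {
        return new HelloResponse { Result = "Hello, " + request.Name };
    }
}

public class AppHost : AppHostHttpListenerBase
{
    public AppHost() : base("Self-Host Example", typeof(HelloService).Assembly) { }
    public override void Configure(Container container) { }
}

class Program
{
    static void Main()
    {
        // The same binary runs on Windows/.NET and, via mono, on Linux.
        var appHost = new AppHost();
        appHost.Init();
        appHost.Start("http://*:8080/");
        Console.WriteLine("Listening on http://localhost:8080/ - press Enter to exit");
        Console.ReadLine();
    }
}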
ServiceStack.net Nginx / Mono FastCGI configuration
The servicestack.net website itself (inc. all live demos) runs on an Ubuntu hetzner vServer using Nginx + Mono FastCGI.
This command is used to start the FastCGI background process:
fastcgi-mono-server4 --appconfigdir /etc/rc.d/init.d/mono-fastcgi \
    /socket=tcp:127.0.0.1:9000 /logfile=/var/log/mono/fastcgi.log &
This hosts all applications defined in *.webapp files in the /etc/rc.d/init.d/mono-fastcgi folder, specified using XSP's WebApp File Format, e.g.:
ServiceStack.webapp:
<apps>
    <web-application>
        <name>ServiceStack.Northwind</name>
        <vhost>*</vhost>
        <vport>80</vport>
        <vpath>/ServiceStack.Northwind</vpath>
        <path>/home/mythz/src/ServiceStack.Northwind</path>
    </web-application>
</apps>
This runs the FastCGI Mono process in the background which you can get Nginx to connect to by adding this rule to nginx.conf:
location ~ /(ServiceStack|RedisAdminUI|RedisStackOverflow|RestFiles)\.* {
    root /usr/share/nginx/mono/servicestack.net/;
    index index.html index.htm index.aspx default.htm Default.htm;
    fastcgi_index /default.htm;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_param SCRIPT_FILENAME /usr/share/servicestack.net$fastcgi_script_name;
    include /etc/nginx/fastcgi_params;
}
This will forward any route starting with /ServiceStack or /RedisAdminUI, etc. to the FastCGI Mono server process for processing. Some example apps hosted this way:
http://www.servicestack.net/ServiceStack.Northwind/
http://www.servicestack.net/ServiceStack.Hello/
http://www.servicestack.net/RedisAdminUI/AjaxClient/
http://www.servicestack.net/RedisStackOverflow/
For those interested the full Nginx + FastCGI configuration files for servicestack.net are available for download.

In production we use nginx with unix file sockets
We found a bug/memory leak when using socket communication with nginx, ServiceStack and Mono. This was with 500 concurrent requests: whilst you'd expect a spike in CPU and memory, it never came back down again. We didn't do any further testing to discover where the problem was, but there is a bug logged on Xamarin's Bugzilla that seems similar to the issues we had. Essentially we tried the following and it was good enough for us.
We switched to using unix sockets with the following command params
fastcgi-mono-server4 /filename=/tmp/something.socket /socket=unix \
    /applications=/var/www/
The problem we had with this method is that the permissions of the socket file change every time you run fastcgi-mono-server4, so you have to correct them after you've started it! The other downside is that on our boxes it could only handle about 120 concurrent requests. However, this isn't really an issue for us at the moment, and you can always spawn more processes.
Hope this helps

Disclaimer: I'm the author of the HyperFastCgi server and of the blog post mentioned in ceco's answer.
nginx with HyperFastCgi does the job. HyperFastCgi does not leak memory like the Mono FastCGI server does, and it performs much faster, because it uses the low-level Mono API to pass data between application domains instead of Mono's slow JIT implementation of cross-domain calls. It also has an option to use the native libevent library for socket communication, which is roughly 1.5-2x faster than the current Mono System.Net.Sockets implementation.
Key features of HyperFastCgi:
It offers three different ways to deal with sockets and cross-domain communication:
Managed Listener with Managed Transport (uses only managed code and asynchronous System.Net.Sockets; slow on Mono, due to slow JIT cross-domain calls)
Managed Listener with Combined Transport (uses async System.Net.Sockets as the listener and the low-level Mono API for cross-domain calls; much, much faster)
Native Listener (uses native libevent as the socket library and the low-level Mono API for cross-domain calls; the best performance)
It offers several ways to parallelize web requests: using the ThreadPool, .NET 4.5 Tasks, or single-threading. The last option combined with the Native Listener makes the web server work like NodeJS: all requests are processed asynchronously in a single thread.
It lets you write simple request handlers without using System.Web at all, which increases request-processing performance by 2-2.5x.

There is a helpful and relatively recent blog post regarding the performance of Mono using ServiceStack. I thought it could be of use to some who are about to decide how to host their services: Servicestack performance in Mono.
As it says, the FastCGI Mono server has tons of memory leaks, which I can confirm. I ran ab -n 100000 -c 10 http://myurl on Ubuntu Desktop 14.04 using Mono 3.2.8, Nginx 1.4.6 and FastCGI Mono Server 3.0.11, against a service written using ServiceStack 3.9.71. I don't think it matters which version of ServiceStack I'm using, since the FastCGI Mono Server is the leaky bit. It ate all the memory available: about 1GB out of 2GB in total.
Also, the performance of Nginx + FastCGI Mono Server is bad, at least when compared to other solutions. My sample REST service had about 275 requests per second. The author of the blog had reviewed the code of FastCGI Mono Server and decided to write his own implementation. For some reason it's not working though, at least on my machine.
So the point, I guess, is that you should not use the FastCGI Mono Server. Unless you want to reboot your box often.
As this post is mostly negative, I should say what my intentions are regarding hosting my services. I will probably go for self-hosting with an AppHost inheriting AppHostHttpListenerLongRunningBase behind Nginx. Using the same sample REST service above I get about 1100 requests per second. The better news is that the process had no apparent leaks: I tested it with about 1,000,000 requests and it had consumed less than 100 MB of RAM.
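For anyone wanting to try the same setup, the structural change from the basic self-host sketch earlier on this page is just the base class. A rough sketch follows; the service/DTO names are illustrative, the namespaces follow v4 (in 3.9.x the listener hosts live under ServiceStack.WebHost.Endpoints.*), and the loopback port is an arbitrary choice for sitting behind an Nginx reverse proxy:

// Sketch: self-host using the long-running listener base class, fronted by Nginx on port 80.
using System;
using Funq;
using ServiceStack;

public class Status : IReturn<StatusResponse> { }
public class StatusResponse { public string Result { get; set; } }

public class StatusService : Service
{
    public object Any(Status request) { return new StatusResponse { Result = "OK" }; }
}

public class AppHost : AppHostHttpListenerLongRunningBase
{
    // Unlike the plain HttpListener host, this base class processes requests on its own worker threads.
    public AppHost() : base("LongRunning Self-Host", typeof(StatusService).Assembly) { }
    public override void Configure(Container container) { }
}

class Program
{
    static void Main()
    {
        var appHost = new AppHost();
        appHost.Init();
        // Bind to loopback only and let Nginx reverse-proxy port 80 to it.
        appHost.Start("http://127.0.0.1:8080/");
        Console.WriteLine("Listening on http://127.0.0.1:8080/ - press Enter to exit");
        Console.ReadLine();
    }
}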
P.S. I am not the author of the blog post :)

evhttp-sharp - an HTTP server with a host for NancyFx
https://github.com/kekekeks/evhttp-sharp
Very fast, almost 4 times faster than nancy-libevent2.
http://www.techempower.com/benchmarks/#section=data-r8&hw=i7&test=json&s=2&l=2
There are test results for different configurations:
JSON responses per second:
evhttp-sharp 91,557
nancy-libevent2 17,338
servicestack-nginx-d 953
nancy 896
aspnet-jsonnet-mono 863

Related

lightweight framework for automatic IIS deployment

Some background:
I'm a Java server developer and work mostly with Linux servers.
I use Python as my main scripting language for all trivial tasks, as well as for deployment.
Now I'm going to move to a project using C# on Windows.
Features I like about Fabric:
Fabric is lightweight and easy for a small team to learn.
Python can do a lot more than most deployment frameworks. For example, we manage hundreds of machines via an internal management portal, with a lot of specific configuration and functionality. We implemented the portal using Java and a fancy web framework, and I use Python to access its REST API and retrieve server addresses/ports and other information for deployment. The most convenient way to get that done is with a real programming language.
With Fabric, I can run any command on a remote server, which makes it easy to work around unusual cases.
Again, Python is really good for server customization work, e.g. building an init.d script from a template.
What I have checked for Windows:
Windows Deployment Kit
Ansible
Chef
(The options above might take me too long to learn before making the right choice; I haven't dug into them.)
I have also thought about installing SSH server software on Windows, but I'm looking for a Windows-native way to do Windows deployment.
The question is:
Is there an easy-to-learn, lightweight scripting framework for Windows server deployment? My goal is to automate everything for deployment to multiple IIS instances, and also to handle some daily/weekly ops tasks.
My final solution, which works pretty well:
install cygwin on windows server
install sshd as service
use python fabric to handle deployment
run commands on server using fabric
sync files using rsync
steps of uploading files
upload a random password file to server
start rsync daemon on remote server
upload file with modification check and compression using rsync
stop rsync daemon on remote server
server provision

C# web application without IIS

For web application development using C#, IIS seems to be the standard choice of web server. But are there any other options? I want to use Linux for my web server.
As far as I can see the other options are:
Make your web server program handle the IIS stuff yourself. (As long as you don't need a lot of IIS features, this won't be too expensive)
Use XSP
Use Apache HTTP Server with mod_mono
Use another web server like nginx or lighttpd (Is this even possible with C#?)
Use that OWIN stuff (Are there implementations of this which are mature enough to consider yet?)
Something else I haven't considered ... ?
Which of these options is the most viable for a web application in the long term?
I'm mostly concerned about the long term maintainability of the project, rather than the server being able to handle high volume loads.
Option 1
Install Mono+XSP
Launch the ASP.NET web app using XSP on a different port, e.g. 8001
Proxy this port through nginx to 80
Option 2 (Better)
Wait for the ASP.NET 5 (MVC 6) release with complete Linux support
Use a standalone-application approach (OWIN-like) on a different port, e.g. 8001
Proxy this port through nginx to 80
Either way you will need to install Mono, and I recommend installing the latest version, not the one that, for example, the Ubuntu PPA gives you.
Also, since you are concerned about the long term, you should really wait for Option 2, since it's going to be released in the next couple of months.
If you really cannot wait those months and do not want to work on a beta product and then migrate to stable, the only option I see is NancyFX, a web-app framework written in C# that has full Linux support now.
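If you do go the standalone (OWIN-like) route in the meantime, a minimal Katana self-host listening on port 8001 (to be proxied by nginx to 80) looks roughly like the sketch below. It assumes the Microsoft.Owin.Hosting and Microsoft.Owin.Host.HttpListener packages; the URL and message are just examples.

// Sketch: minimal OWIN (Katana) self-host; run it on 8001 and reverse-proxy through nginx to 80.
using System;
using Microsoft.Owin.Hosting;
using Owin;

class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Answer every request with plain text; swap this for Web API / Nancy middleware as needed.
        app.Run(context =>
        {
            context.Response.ContentType = "text/plain";
            return context.Response.WriteAsync("Hello from a self-hosted OWIN app");
        });
    }
}

class Program
{
    static void Main()
    {
        using (WebApp.Start<Startup>("http://localhost:8001/"))
        {
            Console.WriteLine("Listening on http://localhost:8001/ - press Enter to exit");
            Console.ReadLine();
        }
    }
}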

hot-deployments with Self Host Web Api Services

Context: My company has been developing a WebAPI application to be hosted by IIS, and the request latencies for a single static content file are about 60ms. We investigated benchmarking the same app using WebAPI self-host, and the latencies for the same content file were ~15ms, which really blew us away.
From a deployment perspective, we love IIS as it gives us extreme flexibility in doing hot deployments by copying DLLs directly to our web servers, which doesn't require us to do any sort of drain-stopping.
Question: Is it feasible to do similar hot deployments (just copying over dll's) with self hosted applications?
Nope: while the self-host is executing, the DLLs will be locked, so you'll have to stop the self-host first. You can do other tricks like deploying to another folder and then re-routing requests, etc., but it's not the same as IIS deployment.
Self host allows you to do some neat things like have a secondary service running on the same address that will respond while the primary service is down. e.g. It could return a 503 with a retry-after header. Stopping and starting a service to enable copying files should only cost a few seconds.
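As a rough illustration of that secondary-service trick, a throwaway HttpListener loop that answers 503 with a Retry-After header during the copy window could look like this; the port, header value and message are made up for the sketch:

// Sketch: tiny placeholder service returning 503 + Retry-After while the real self-host is redeployed.
// No error handling and a single-threaded accept loop; illustration only.
using System;
using System.Net;
using System.Text;

class MaintenanceServer
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/");   // same address the real service normally uses
        listener.Start();
        Console.WriteLine("Maintenance responder running on port 8080");

        while (true)
        {
            HttpListenerContext context = listener.GetContext();
            context.Response.StatusCode = 503;
            context.Response.AddHeader("Retry-After", "30");   // seconds
            byte[] body = Encoding.UTF8.GetBytes("Service temporarily unavailable, redeploying.");
            context.Response.OutputStream.Write(body, 0, body.Length);
            context.Response.Close();
        }
    }
}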
On the other hand, you are doing something wrong if IIS is taking longer than self-host to deliver static content. IIS can use kernel-mode functions of http.sys to deliver static content. One of the OWIN-based hosts has enabled this for self-hosting, but the default self-host does not allow it. In my experience IIS should definitely be faster than 60ms for small files.

Web server integration in Windows application

I am developing a Windows application using the .NET framework. I am exploring the best ways to integrate a web server in my application that listens to localhost:8080 (or whatever port). I do not want to compromise on the security of the user so I would like to use some library or existing web server application that is secure and does not have vulnerabilities.
So I've narrowed it down to 2 approaches:
1) Using a popular web server like nginx or Apache, invoking it from the application (and running it as a process with CreateNoWindow = true).
2) Using a web server written in C# or C++, like WebServer.
What would be the best way to do it? I am open to more suggestions.
I have done this 3 ways in .NET:
If it's a web service or data service, you can use ServiceHost to host your own endpoints (see the ServiceHost sketch after this list).
IIS Express is probably the most robust and "proper" solution; unfortunately it's still in beta. You can get it by downloading WebMatrix. http://www.asp.net/WebMatrix
The source code for the web server embedded in Visual Studio is available; it's called Cassini. I've used it on a few projects. It's available as source or even packaged:
http://code.google.com/p/cassini/
http://ultidev.com/Products/Cassini/
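To illustrate the first option, a minimal self-hosted WCF ServiceHost might look like the sketch below. The contract, address and binding are illustrative choices (it assumes a reference to System.ServiceModel), not the only way to set it up:

// Sketch: minimal self-hosted WCF service on http://localhost:8080/status.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IStatusService
{
    [OperationContract]
    string Ping();
}

public class StatusService : IStatusService
{
    public string Ping() { return "pong"; }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(StatusService), new Uri("http://localhost:8080/status"));
        // Expose the contract over a basic HTTP binding at the base address.
        host.AddServiceEndpoint(typeof(IStatusService), new BasicHttpBinding(), "");
        host.Open();

        Console.WriteLine("WCF service listening on http://localhost:8080/status - press Enter to exit");
        Console.ReadLine();
        host.Close();
    }
}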

Distributed application (WCF/Remoting/web servervices) Vs Web application

I am making a medium-sized standard LOB application. Currently it's a web application, but I am formulating a proposal to revamp it into a desktop remote application. By this I mean that the database and the application server will be hosted in a remote location. The client application will communicate with the server via the internet (through either WCF / web services / Remoting).
My question is this: the only reason I am shifting this from a web platform is the constraints of the web (I don't want to do AJAX or JavaScript to minimize those constraints, so please no JS/AJAX recommendations). I have made traditional desktop applications and they are considerably fast, but I have never made a remote or a distributed application. I am not sure whether the speed of the application will be faster than the web or not.
As I understand it, the remote desktop application would be much faster. For one, there won't be any postbacks involved (I hate them so much). The data will obviously come via the internet, so in that respect, is it better to shift to the remote desktop just for sheer speed and power?
Any help in the right direction would be greatly appreciated. Many thanks.
Zeeshan
I think the biggest advantage of desktop clients over web applications is the freedom in UI design, and you don't have to worry about any inconsistencies in the client environment, although those are not an issue if you are using a client that runs on Silverlight.
Personally I don't like web applications that require a lot of user interaction. There are some that are a pleasure to use, but I think it is very easy to do it the wrong way and end up with a buggy or not-so-responsive application (probably because of incompatibilities between browsers; I have IE, Firefox and Chrome installed on my computer and use one for some websites because they run faster on it, and others for other sites because the pages only show up correctly in them). Though this might not be an issue for a Silverlight client.
As for network speed, depending on what goes over the wire, even with binary serialization Remoting can have quite a bit of overhead. For example, along with the data it writes full class names, library names and their versions, so it can get pretty big and slow even for small amounts of data (although it should still be smaller than HTTP). It also has the same problems that HTTP has over unreliable connections, because it uses a similar protocol. For one project we had to write a custom serializer for some objects because binary serialization alone was generating 200K, while our custom serializer for those objects was generating 50K. Then we ended up writing our own network protocol, because the one that comes with the runtime was frequently stalling over unreliable wireless networks, and Remoting doesn't give you any control over the socket it creates (which makes sense in terms of encapsulation, but you can't close it and force it to open a new one).
(I am assuming that you are asking about Remoting vs. a web app, not remote desktop vs. web apps; because of your note about postbacks, you can't avoid them with a remote desktop session.)
Rewriting an application just for sheer speed? No, because the user probably won't see much difference in response time.
You are somewhat ambiguous with your terminology - do you want a client app that runs on the user's machine, or do you want an app that runs on the server and the user connects via remote desktop (RDP)?
If you are talking about a client app that communicates with the server via WCF etc., then yes, it will be faster than a standard web app, although it will still be slower than a native desktop app. It will be faster than the web app not just because of the lack of postbacks, but also because you will be sending pure data through the wire, not a massive amount of HTML/JavaScript combined with your data. With a client app, you have several options, so consider them carefully: do you want Silverlight, WPF, or a native WinForms app? Each has its positives and negatives.
If you were talking about having a client app running on the server which the user then access via RDP, then you have other considerations to think of. For any more than two concurrent users you will need to consider buying CALs so the users can connect to the server. At this point you should also be considering whether you should be running a terminal server or Citrix type setup instead of using remote desktop.
Edit
When using WCF over a WAN (internet) you will certainly have to consider how you will secure it. WCF makes it trivial to secure the channel, but you need to consider how you will do authentication - there are a couple of different ways, but you can easily google that stuff yourself. The method you choose will be important due to the limited resources or skill-sets of the users.
As for what you write it in, you can't argue with WinForms if that is where your experience is. Personally, I would never again use ASP.NET/Ajax/etc. for a web-type application; it would be WPF or Silverlight all the way (I would only use ASP.NET for simple web sites). You can use the Express (free) versions of Visual Studio to write it in; you don't need Expression (it's just a nice-to-have, and is aimed more at the design side than the actual coding side). Deploying the app need not be difficult: Silverlight or WPF XBAPs are delivered via the web and the user has to do nothing (except for the simple install of the Silverlight plugin, or installing the right .NET Framework for WPF; check this link). WinForms or stand-alone WPF require slightly more work, but you can avoid most issues by writing a good installer.
Whichever you choose, make sure you don't underestimate the time for development (because you will have a bit of a learning curve), and also make sure you budget enough time for testing it, especially the security side of it :)
I have been in a similar situation, although we started with a WinForms LOB application.
Here's what we found with WinForms...
It's going to be harder to deploy to all client machines in your release cycle.
WinForms can't be run on other operating systems easily (with the exception of Mono).
WCF endpoints can get complicated, and you need to manage an endpoint per release/version of your application.
Authentication, authorization and security can be tricky to get right!
Here's why you should stick with an HTML web application:
It's going to be easier to deploy, as you just need to copy one set of DLLs into the bin folder. This can be scripted from a continuous integration or staging server.
Security is going to be easy, by using an SSL certificate.
Silverlight/Flash should fill in the gaps that HTML leaves out.
Microsoft has also combined its connected-systems stacks in .NET 3.5 into what it now calls WCF (ASMX/Remoting/etc.). It has quite a learning curve: 4-5 weeks.
