HTTP Proxy server in C#

My company is experimenting with writing a proxy server using the .NET Framework 3.5 and C#. From our research I have read that HttpListener is not a good candidate for a proxy server, though I am unsure why.
We are currently working with the Mentalis proxy example source code, though that will involve, among other things, implementing our own logging and performance counters. Using HttpListener wraps Http.sys, which would give us some of the performance statistics we require out of the box.
So why is HttpListener a bad candidate for HTTP proxy work?
(And yes, we are also considering Squid 3.1, writing or configuring an ICAP server for it.)

HttpListener is in .NET to provide a major building block for a simple HTTP server, where "simple" includes not supporting high operation rates.
Typically, HTTP proxies need to be very low overhead to support many concurrent connections as well as providing the proxy's function (which depends on the type of proxy).
Proxies are detailed in RFC 2616 (§8.1.3), and that immediately provides one requirement that (if I understand HttpListener correctly) is not possible to meet:
The proxy server MUST signal persistent connections separately with its clients and the origin servers (or other proxy servers) that it connects to. Each persistent connection applies to only one transport link.
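To make that concrete, here is a minimal HttpListener accept loop (a sketch; the prefix and status code are illustrative). Note what is missing: any handle on the underlying TCP connection, so signalling persistence separately towards the client and the origin server is out of reach.

    using System;
    using System.Net;

    // Minimal HttpListener loop. http.sys owns connection lifetime and
    // keep-alive, so per-hop persistence management is not exposed here.
    class ListenerSketch
    {
        static void Main()
        {
            HttpListener listener = new HttpListener();
            listener.Prefixes.Add("http://+:8080/");   // illustrative prefix
            listener.Start();
            while (true)
            {
                HttpListenerContext context = listener.GetContext(); // blocks
                // A real proxy would forward the request upstream here and
                // stream the origin's response back; neither hop's connection
                // reuse can be controlled through this API.
                context.Response.StatusCode = 502;
                context.Response.Close();
            }
        }
    }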

You might also consider that the Windows port of nginx was released a few days ago. Many sites with Squid and Varnish experience are very pleased after converting to nginx. Then there is always whatever MS is calling ISA Server these days.
Gone off to look at the Mentalis code now :D

Related

Is WCF built on sockets?

I am trying to understand programming with sockets on a more detailed level, rather than just the API calls. I have a fair understanding of WCF in C# and of socket programming using Winsock in C++. Now I have two main questions:
Does WCF use sockets internally for communication in all cases? In other words, is WCF a wrapper around sockets, built upon them?
Does every kind of network-based communication use sockets in the end for sending/receiving data, and is that something mandated by the OSI model?
A somewhat detailed explanation would be better than just a yes/no answer.
(With acknowledgement to the other SO users who agreed to reopen this question).
As an opening remark, remember that it's 2020 and WCF is obsolete. I'm personally glad to see it gone; I strongly recommend against using WCF for any new projects and advise people to transition away from WCF as soon as possible.
Now, in response to your question (bold emphasis mine):
Does WCF use sockets internally for the communication in all cases. In other words is WCF a wrapper around sockets and is built upon them?
Strictly speaking, no (but in a practical sense, for inter-machine transport, yes).
WCF is a .NET platform that is concerned with "message processing". WCF tries to abstract away the underlying details of message transport (but it does so horribly, and so no-one should use it today), so it is entirely possible to build a WCF application that achieves inter-machine and inter-network communication without ever using Windows' Winsock, or whatever "Socket"-esque API is available for a given computing platform.
Now, while ostensibly WCF is all about abstraction, in practice WCF was geared around SOAP messages (and SOAP is horrible too, but that's another discussion), and SOAP primarily uses HTTP as a message transport - and HTTP primarily uses TCP/IP, and almost every single TCP/IP application on Microsoft Windows will be using the Winsock API somewhere in the process's communication stack. (It can be argued that HTTP applications on Windows will use http.sys, which performs HTTP request/response processing in kernel mode; that necessarily means bypassing Windows' user-mode Winsock API, since http.sys instead uses "Winsock Kernel", which is its own thing.)
In the above paragraph, note the use of the word "primarily" (as opposed to "exclusively" or "always") - because:
WCF doesn't have to use SOAP, it can use other messaging models/protocols/paradigms like net.tcp (which itself is more like a "binary SOAP") or even REST (though REST support came late in WCF's lifespan and it's a total pain to configure correctly, YMMV).
SOAP doesn't have to use HTTP, it can use other transports like SMTP. And WCF expressly supports SOAP's other main transports, like SMTP and FTP.
While HTTP is effectively tied to TCP/IP and Winsock is the only real way a user-mode application will use TCP/IP, other transports like SMTP don't have to use TCP/IP (at least, not in the way you think - see my footnote).
And of course, throughout all of this, user-mode applications are always free to use a different networking programming interface besides Winsock or BSD sockets. For example, Windows' named pipes present a streaming IPC interface just like how TCP behaves - or the vendor of a network-interface-card could have its own exclusive networking API which is somehow simply better than the Sockets API (similar to how GPU vendors in the mid-1990s were pushing their own APIs (Glide, PowerVR, Rendition, etc.) until they all ended up having to support Direct3D and OpenGL - and who uses Metal? hah).
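To illustrate the named-pipes point, a minimal sketch (the pipe name is made up): a pipe behaves like a connected byte stream, much as an accepted TCP socket does, without ever touching Winsock.

    using System;
    using System.IO;
    using System.IO.Pipes;

    // A named pipe server: WaitForConnection is analogous to Socket.Accept,
    // and the connected pipe is read like any other Stream - no Winsock.
    class PipeSketch
    {
        static void Main()
        {
            using (NamedPipeServerStream server = new NamedPipeServerStream("demo-pipe"))
            {
                server.WaitForConnection();
                using (StreamReader reader = new StreamReader(server))
                {
                    Console.WriteLine(reader.ReadLine());
                }
            }
        }
    }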
And while WCF isn't exactly designed with testability in mind, it is still possible to host and run WCF applications inside an integration-testing environment where the actual message transport is just a thin proxy object, or a faked or mocked implementation - so Sockets are completely avoided there as well.
But in practice - in Win32, networking is accomplished using Winsock (Microsoft's implementation of the BSD Sockets API) so if you're using WCF to communicate between machines then I can say with 99% certainty that eventually your messages will pass-through Winsock.
Footnote: Regarding using WCF with SMTP without using sockets: many SMTP e-mail servers, including Microsoft Exchange Server, support "pickup directories" - filesystem directories actively monitored by the e-mail server, which detects when a new file has been added to the folder, reads each file as an SMTP envelope, and processes it the same way as though it had been received by the server's SMTP service endpoint. If a SOAP-in-SMTP message were "dropped" into the pickup directory, and it was destined for a recipient local to that e-mail service, then that message would not pass through Winsock at all.
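In .NET this path is even directly supported by SmtpClient, which can write to a pickup directory instead of opening a connection. A minimal sketch (the directory and addresses are illustrative):

    using System.Net.Mail;

    // The message is written to disk as a file in the pickup directory and
    // never touches Winsock; the mail server polls the folder and takes over.
    class PickupSketch
    {
        static void Main()
        {
            SmtpClient client = new SmtpClient();
            client.DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory;
            client.PickupDirectoryLocation = @"C:\inetpub\mailroot\Pickup";
            client.Send(new MailMessage(
                "soap-sender@example.com", "local-recipient@example.com",
                "SOAP envelope", "<s:Envelope>...</s:Envelope>"));
        }
    }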

.NET WebSocket client and server library

I'm looking for an open source, cross-platform, actively maintained .NET library which provides websocket functionality for both clients and servers, in such a way that most of the code (after connection is established) can use the same abstraction regardless of which side of the connection it is on. Ideally, it would be a platform-independent implementation of System.Net.WebSockets, but I don't really care if it defines its own types, so long as there's some single abstract WebSocket class that can be shared by client and server code.
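To make the requirement concrete, here is a sketch of the kind of side-agnostic code I want to write (assuming the System.Net.WebSockets types; names are illustrative):

    using System;
    using System.Net.WebSockets;
    using System.Threading;
    using System.Threading.Tasks;

    // Code written against the abstract WebSocket base class, oblivious to
    // whether the instance came from ClientWebSocket.ConnectAsync or from a
    // server-side accept.
    static class SharedPump
    {
        public static async Task Echo(WebSocket socket, CancellationToken ct)
        {
            byte[] buffer = new byte[4096];
            while (socket.State == WebSocketState.Open)
            {
                WebSocketReceiveResult result =
                    await socket.ReceiveAsync(new ArraySegment<byte>(buffer), ct);
                if (result.MessageType == WebSocketMessageType.Close)
                    break;
                await socket.SendAsync(
                    new ArraySegment<byte>(buffer, 0, result.Count),
                    result.MessageType, result.EndOfMessage, ct);
            }
        }
    }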
Things that I've looked at and that did not qualify (but correct me if I'm wrong):
System.Net.WebSockets (client only, Win8+ only)
WebSocket4Net (client only)
WebSocket Portable (client only)
Fleck (server only)
WebSocketListener (server only)
SuperWebSocket (server only)
Owin.WebSocket (server only)
PowerWebSockets (proprietary)
XSockets (proprietary)
Alchemy Websockets (last release in 2012, many active bugs in the tracker with no answers)
The only one that I could find that seems to match the requirements is websocket-sharp. However, what worries me there is the sheer number of open issues in the tracker along the lines of clients unable to connect, invalid data frames, etc. - it sounds like it's not very mature yet.
Are there any other candidates that match my requirements that I have missed? Or am I wrong about any of the libraries listed above being client/server only?
Look at Microsoft's SignalR. SignalR is a higher-level abstraction around WebSockets. SignalR also allows the client to be written in .NET (C#). From the SignalR documentation:
The SignalR Hubs API enables you to make remote procedure calls (RPCs) from a server to connected clients and from clients to the server. In server code, you define methods that can be called by clients, and you call methods that run on the client. In client code, you define methods that can be called from the server, and you call methods that run on the server. SignalR takes care of all of the client-to-server plumbing for you.
SignalR also offers a lower-level API called Persistent Connections. For an introduction to SignalR, Hubs, and Persistent Connections, or for a tutorial that shows how to build a complete SignalR application, see SignalR - Getting Started.
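For a flavour of the Hubs API, a minimal sketch (hub and method names are made up; this is ASP.NET SignalR, not a complete application):

    using Microsoft.AspNet.SignalR;

    // Clients call Send; the server fans the message out by invoking a
    // client-side method through the dynamic Clients.All proxy.
    public class ChatHub : Hub
    {
        public void Send(string name, string message)
        {
            // 'broadcastMessage' is whatever handler the clients registered.
            Clients.All.broadcastMessage(name, message);
        }
    }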
Another solution is to make use of Edge.js, a .NET library that utilizes Node.js. You could let Node.js act as both the server and the client of the WebSocket channel, and then use Edge.js as the bridge between the two worlds, Node.js and .NET. Have a look at the following; there are plenty of samples as well: github.com/tjanczuk/edge/tree/master#scripting-clr-from-nodejs. Both are excellent frameworks that are actively maintained.
However, the use of Edge.js does introduce an additional dependency: Node.js.
You can take a look at WebSocketRPC. The library is based on System.Net.WebSockets and is portable. Moreover, it auto-generates JavaScript client code and has support for ASP.NET Core.
I suggest you first try out the samples located inside the GitHub repository.
Disclaimer: I am the author of the library.

Nginx + FastCGI uses Management records? If not, then what?

I am writing a FastCGI application interface library in C#/Mono, running on a plain old Linux box (Vagrant and/or EC2), using nginx as the web server. I am trying to make my implementation comply with the FastCGI 1.0 spec. As such, I am prepared to receive a FCGI_GET_VALUES record and respond with FCGI_GET_VALUES_RESULT. However, in my experience nginx's FastCGI module is not sending this. So, these are the questions I am trying to answer:
(1) OK, the web server's not required to send FCGI_GET_VALUES, it's optional. So, has it fallen out of use? Do other FastCGI server implementations still use this or not? Is there a way to configure Nginx FastCGI to enable it?
(2) Three defined config values go back to the web server in the FCGI_GET_VALUES_RESULT record: max concurrent transport connections the app will accept; max concurrent requests the app will accept; whether the app multiplexes connections. Lacking FCGI_GET_VALUES, what alternative methods, if any, exist to communicate or configure Nginx's FastCGI module with such settings?
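(For reference, here is a sketch of what encoding the FCGI_GET_VALUES_RESULT reply involves, following the spec's record and name-value-pair formats; the limit values themselves are placeholders.)

    using System.Collections.Generic;
    using System.IO;
    using System.Text;

    static class FcgiGetValues
    {
        const byte FCGI_VERSION_1 = 1;
        const byte FCGI_GET_VALUES_RESULT = 10; // management record type
        // Management records always use request id 0 (FCGI_NULL_REQUEST_ID).

        public static byte[] BuildResult()
        {
            Dictionary<string, string> values = new Dictionary<string, string>();
            values["FCGI_MAX_CONNS"] = "10";  // max concurrent transport connections
            values["FCGI_MAX_REQS"] = "50";   // max concurrent requests
            values["FCGI_MPXS_CONNS"] = "0";  // "0" = we do not multiplex connections

            MemoryStream content = new MemoryStream();
            foreach (KeyValuePair<string, string> pair in values)
            {
                byte[] name = Encoding.ASCII.GetBytes(pair.Key);
                byte[] value = Encoding.ASCII.GetBytes(pair.Value);
                // One-byte lengths are valid while both lengths are < 128.
                content.WriteByte((byte)name.Length);
                content.WriteByte((byte)value.Length);
                content.Write(name, 0, name.Length);
                content.Write(value, 0, value.Length);
            }

            byte[] body = content.ToArray();
            MemoryStream record = new MemoryStream();
            record.WriteByte(FCGI_VERSION_1);
            record.WriteByte(FCGI_GET_VALUES_RESULT);
            record.WriteByte(0);                         // requestId (hi byte)
            record.WriteByte(0);                         // requestId (lo byte)
            record.WriteByte((byte)(body.Length >> 8));  // contentLength, big-endian
            record.WriteByte((byte)(body.Length & 0xFF));
            record.WriteByte(0);                         // paddingLength
            record.WriteByte(0);                         // reserved
            record.Write(body, 0, body.Length);
            return record.ToArray();
        }
    }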
1) I recently went on a search for an open source web server with support for FastCGI management messages. I skimmed the source code of several very quickly, including nginx. The only one that looked like it had code to send FCGI_GET_VALUES was OpenLiteSpeed. I didn't get round to testing it before giving up on FastCGI, I'm afraid, and it didn't look like it actually paid any attention to the values it received.
2) I'll cover what I know about each parameter individually:
FCGI_MAX_CONNS: I don't think there's any way to directly specify this in nginx. Maybe you could do something with http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html
OpenLiteSpeed has an option to limit the number of connections to a fastcgi server.
FCGI_MPXS_CONNS and FCGI_MAX_REQS: nginx doesn't support multiplexing FastCGI over a single connection. I couldn't find a web server that did.
For reference, I skimmed through the source code of these web servers, and none of them look like they send FCGI_GET_VALUES:
apache2 (mod_fastcgi, mod_fcgi, mod_proxy_fcgi), caudium, monkey, hiawatha, jetty, lighttpd, nginx, cherokee
Some of them did process FCGI_GET_VALUES_RESULT though.

Options to stack native protocol over http?

We have a client/server system where all communications are done using a native protocol over a binary/SSL stream on TCP. All of our code is written in C# .NET 2.0 and some parts in 3.5. Our protocol is designed to support a variety of messaging patterns, namely Request/Response, and for lack of a better term, one-way messages from either the client or the server on an irregular basis.
Our objective is to add a feature to our system to carry our protocol over HTTP. There are several reasons for doing so, but I don't think I need to explain them here. Please tell me if I should.
The thought is to embed our protocol as application/binary in HTTP requests using the standard request methods (i.e., GET, PUT, and POST, but not DELETE) and following the HTTP specification. This would be rather straightforward to do if our protocol were only request/response. The main concern comes from the one-way messages, and more specifically the unsolicited messages coming from the server. Another important concern is that HTTP is not oriented towards persistent connections, but I believe with HTTP/1.1 this can be overcome. A third concern is that the server connections are not stateless.
We've been designing and prototyping this for a couple weeks, and we've come up with a couple ideas:
Refactor the code in the communication and protocol layers on both the server and client sides. Although much of the code is shared, this is a lot of work that in all likelihood will not be a success. The question here is: can this even be done with our poorly designed protocol?
Use a proxy approach. That is, create an HTTP server using WCF that unwraps the HTTP messages and relays the native messages to and from our server over persistent connections. This would require a layer of abstraction on the client side that would actually maintain two connections to the proxy: one to perform request/response, and the other to carry the unsolicited messages using a delayed-response technique (see the sketch after this list).
HTTP tunneling, which we haven't yet researched.
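For idea 2, a rough sketch of the client half of the delayed-response technique (the URL, timeout, and handler are placeholders; this is the long-polling loop that would carry the unsolicited messages):

    using System;
    using System.IO;
    using System.Net;

    // The proxy holds each GET open until an unsolicited message is
    // available (or a timeout elapses); the client then immediately re-polls.
    class DelayedResponseClient
    {
        public void PollLoop(string url)
        {
            while (true)
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "GET";
                request.KeepAlive = true;   // reuse the persistent connection
                request.Timeout = 120000;   // longer than the proxy's hold time
                try
                {
                    using (WebResponse response = request.GetResponse())
                    using (Stream body = response.GetResponseStream())
                    {
                        // Native protocol frame embedded as application/binary.
                        byte[] frame = ReadFully(body);
                        if (frame.Length > 0)
                            HandleUnsolicitedMessage(frame);
                    }
                }
                catch (WebException)
                {
                    // Timed out with no pending message: just poll again.
                }
            }
        }

        static byte[] ReadFully(Stream s)
        {
            using (MemoryStream ms = new MemoryStream())
            {
                byte[] chunk = new byte[8192];
                int read;
                while ((read = s.Read(chunk, 0, chunk.Length)) > 0)
                    ms.Write(chunk, 0, read);
                return ms.ToArray();
            }
        }

        void HandleUnsolicitedMessage(byte[] frame)
        {
            // Dispatch to the native protocol layer.
        }
    }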
We're hoping that someone has encountered this challenge before and can lend some sound advice.
Please accept my apologies if this is the wrong place to post this question.
For the server-initiated messages, you might consider WebSockets. According to Scott Guthrie's blog, there is support for WebSockets in the ASP.NET MVC 4 beta.

WCF or Custom Socket Architecture

I'm writing a client/server architecture where there are going to be possibly hundreds of clients over multiple virtual machines, mostly on the intranet but some in other locations.
Each client will be gathering data constantly and sending a message to a server every second or so. Each message will probably be about 128 characters in length.
My question is: for this architecture, where I am writing both client and server in .NET, should I go with WCF or with some socket code I've written previously? I need scalability (which the socket code has in mind), reliability, and just the ability to handle that many messages.
I would not make a final decision without performing some proof of concept. Create a very simple service, host it, and use a stress test to get real performance results; then validate the results against your requirements. You mentioned the number of messages, but you didn't mention the expected response time. There is a similar question currently being discussed on the MSDN forum which complains about the slow response time of WCF compared to sockets.
Other requirements are not directly mentioned in your post, so I will make some assumptions for best performance:
Use netTcpBinding - best performance, binary encoding, but it requires .NET on both server and clients. I guess you are going to use net.tcp anyway, because your other choice was direct socket programming.
Don't use security if you don't have to - it reduces performance. That's probably not possible for clients outside your intranet.
Reuse the proxy on clients if possible. Opening a TCP connection is expensive; if you reuse the same proxy, you will have a single connection per proxy. This will affect the instancing of your services - by default, a single service instance will handle all requests from a single proxy.
Set service throttling so that your service host is ready for many clients (a sketch follows below).
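A sketch of those suggestions in code (the service and contract names are placeholders, and the limits are illustrative rather than recommendations):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    class HostSetup
    {
        static void Main()
        {
            ServiceHost host = new ServiceHost(typeof(TelemetryService));
            // No transport security: intranet only!
            NetTcpBinding binding = new NetTcpBinding(SecurityMode.None);
            host.AddServiceEndpoint(typeof(ITelemetryService), binding,
                                    "net.tcp://localhost:9000/telemetry");

            ServiceThrottlingBehavior throttle =
                host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
            if (throttle == null)
            {
                throttle = new ServiceThrottlingBehavior();
                host.Description.Behaviors.Add(throttle);
            }
            throttle.MaxConcurrentCalls = 256;     // messages processed in parallel
            throttle.MaxConcurrentSessions = 800;  // roughly one session per client proxy

            host.Open();
            Console.ReadLine(); // keep the host alive
        }
    }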
Also, you should make some decisions about load balancing. Load balancing for WCF net.tcp connections requires sticky sessions (session affinity), so that after opening the channel the client always calls the service on the same server (because the instance of that service was created only on a single server).
100 requests per second does not sound like much for a WCF service, especially with such a small payload. But it should be quite quick to set up a simple test: a WCF service with one echo method that just returns the input, and a client with a bunch of threads calling it in a loop.
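Something along these lines would do as the harness (endpoint address, thread count, and iteration count are placeholders):

    using System;
    using System.Diagnostics;
    using System.ServiceModel;
    using System.Threading;

    [ServiceContract]
    public interface IEcho
    {
        [OperationContract]
        string Echo(string message);
    }

    public class EchoService : IEcho
    {
        public string Echo(string message) { return message; }
    }

    // Crude throughput probe: a few threads hammering one shared proxy.
    class EchoStressTest
    {
        static int _calls;

        static void Main()
        {
            ChannelFactory<IEcho> factory = new ChannelFactory<IEcho>(
                new NetTcpBinding(SecurityMode.None),
                new EndpointAddress("net.tcp://localhost:9000/echo"));
            IEcho proxy = factory.CreateChannel();

            string payload = new string('x', 128); // ~128-character message
            Stopwatch clock = Stopwatch.StartNew();

            Thread[] threads = new Thread[8];
            for (int i = 0; i < threads.Length; i++)
            {
                threads[i] = new Thread(delegate()
                {
                    for (int j = 0; j < 1000; j++)
                    {
                        proxy.Echo(payload);
                        Interlocked.Increment(ref _calls);
                    }
                });
                threads[i].Start();
            }
            foreach (Thread t in threads) t.Join();

            Console.WriteLine("{0:F0} calls/sec", _calls / clock.Elapsed.TotalSeconds);
        }
    }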
If you already have a working socket implementation you might keep it, but otherwise you can pick WCF and spend your precious development time elsewhere.
From my experience with WCF, I can tell you that its performance under high load is very, very nice. In particular, you can choose between several bindings to meet your requirements in different scenarios (e.g., httpBinding for communication with the outside, netPeerTcpBinding within a local network).
