I'm relatively new to proxies.
I am currently required to design a caching proxy for work.
We have a webservice which serves up data based on calls to it, naturally.
I am required to create a proxy for a rich client application that caches the results of these calls.
The results are basically string names of products identified by a composition of ids.
I could just create a class that acts as my proxy client and caches the results; I was thinking of using the System.Web.Caching.Cache object.
However, I thought I'd ask whether there are any design aspects or considerations I have missed. Is there a commonly known design that I have not found?
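Something like this minimal sketch is what I have in mind (ProductService is a stand-in for our actual web service client, and the composite-id key format is just illustrative):

using System.Collections.Generic;

public class CachingProductServiceProxy
{
    private readonly ProductService _service = new ProductService(); // hypothetical generated client
    private readonly Dictionary<string, string> _cache = new Dictionary<string, string>();
    private readonly object _sync = new object();

    public string GetProductName(int categoryId, int productId)
    {
        string key = categoryId + ":" + productId; // composite id as cache key

        lock (_sync)
        {
            string cached;
            if (_cache.TryGetValue(key, out cached))
                return cached; // cache hit: skip the web service call
        }

        string name = _service.GetProductName(categoryId, productId);

        lock (_sync)
        {
            _cache[key] = name;
        }
        return name;
    }
}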
[UPDATE - 12 Oct 2009]
Seems like System.Web.Caching.Cache is not advisable for client-side caching.
http://msdn.microsoft.com/en-us/library/system.web.caching.cache.aspx:
The Cache class is not intended for use outside of ASP.NET applications. It was designed and tested for use in ASP.NET to provide caching for Web applications. In other types of applications, such as console applications or Windows Forms applications, ASP.NET caching might not work correctly.
First, I've used System.Web.Caching.Cache in WinForms and service solutions and never had a problem with it. I recommend mitigating the disclaimer by testing to ensure the cache works and doesn't leak resources, but none of my testing ever showed this happening.
Alternatively, there is a caching solution in Enterprise Library (the Caching Application Block), and probably a bunch of other open source ones you could consider. There are probably some commercially supported ones too, if you prefer.
However, are you hosting your web service in ASP.NET (an .asmx extension is usually a good indicator)? In that case you're still in ASP.NET, and so the disclaimer shouldn't apply.
Also note that ASP.Net has caching options you can drive directly from the web.config without having to do any coding.
But all of the above is server-side caching.
Assuming your web service is using HTTP (or HTTPS) as the transport layer, proxy and client caching require the use of the HTTP response headers; I think Cache-Control, Expires and possibly Last-Modified are involved. It is then up to the client and any proxies to decide whether they will support caching.
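For instance, on the ASP.NET side a minimal sketch of emitting those headers looks like this (the ten-minute lifetime is an arbitrary choice):

using System;
using System.Web;

public static class CacheHeaderHelper
{
    public static void SetCacheHeaders(HttpResponse response)
    {
        response.Cache.SetCacheability(HttpCacheability.Public);   // Cache-Control: public
        response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(10)); // Expires header
        response.Cache.SetMaxAge(TimeSpan.FromMinutes(10));        // Cache-Control: max-age=600
        response.Cache.SetLastModified(DateTime.UtcNow);           // Last-Modified header
    }
}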
Alternatively, if you're actually trying to write a proxy solution that supports caching (and that would be reinventing a wheel, so you should have a good reason to do it), then you probably want an HTTP handler, with the System.Web.HttpRequest and System.Web.HttpResponse representing the client side of things, and passing the request through to the server using System.Net.HttpWebRequest and System.Net.HttpWebResponse. The System.Web.Caching.Cache could then hold your cached responses on the proxy. That said, there would be lots of rules to implement here, including the HTTP request headers (which also have Cache-Control options).
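To make that concrete, a rough sketch of such a caching forwarding handler follows; the upstream host, cache key scheme and ten-minute lifetime are assumptions, and a real proxy would also need to honour the HTTP caching rules, status codes, headers and non-GET verbs:

using System;
using System.IO;
using System.Net;
using System.Web;
using System.Web.Caching;

public class CachingProxyHandler : IHttpHandler
{
    private const string UpstreamBase = "https://realserver.example.com"; // assumed upstream server

    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string cacheKey = "proxy:" + context.Request.RawUrl;
        string body = context.Cache[cacheKey] as string;

        if (body == null)
        {
            // Forward the request to the real web service.
            var upstream = (HttpWebRequest)WebRequest.Create(UpstreamBase + context.Request.RawUrl);
            using (var response = (HttpWebResponse)upstream.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                body = reader.ReadToEnd();
            }
            // Keep the response body for ten minutes (arbitrary for the sketch).
            context.Cache.Insert(cacheKey, body, null,
                DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
        }

        context.Response.Write(body);
    }
}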
This seems like a duplicate question, but after hours of searching, it seems there is no clear question and answer that summarizes the issues I'm raising here.
We have a web application (built using ASP.NET MVC4) which stores sensitive customer information.
We've decided to migrate our entire application to https.
My question is: aside from the IIS and certificate technical issues, which we already know how to deal with, what should be changed at the code level?
What will happen, for instance, with:
Included external scripts referenced over http, such as http://code.jquery.com/jquery-1.7.1.min.js - will they work automatically, without problems, popup warnings or blocking in client browsers?
Internal links that we've forgotten to change, which redirect to our site using http?
Images/sources which have http in their URLs?
Should we change all references from http to relative URLs, or just specify // without the http/https protocol (as seen in other posts on this subject)?
Should we do nothing - will it happen automatically?
Is there a way to do something in IIS or Global.asax etc, in order to automatically take care of all http leftovers?
What else should we take into account when migrating to https?
Thanks in advance.
For all internal static resources, hopefully you have used the @Url.Content helper, and for all internal dynamic resources the @Html.ActionLink, @Html.BeginForm, ... helpers to generate the links. This way you don't need to worry about anything.
For all external resources you could use the protocol-relative // syntax in the link, which will respect the current protocol.
Since you are switching to HTTPS you might consider marking all your cookies (if any) with the secure flag to ensure that they are transmitted only over a secure channel.
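For the "http leftovers" question, one common approach is a blanket redirect in Global.asax; a minimal sketch, assuming you are happy to permanently redirect every plain-HTTP request:

// In Global.asax.cs
protected void Application_BeginRequest(object sender, EventArgs e)
{
    if (!Request.IsSecureConnection)
    {
        // 301 so browsers and search engines learn the https address.
        Response.RedirectPermanent("https://" + Request.Url.Host + Request.RawUrl);
    }
}

For the cookie flag, setting <httpCookies requireSSL="true" /> under <system.web> in web.config marks all cookies secure without touching code.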
I am making a web service that only needs to serve JSON, and it needs to be scalable.
I have gotten the impression that Nginx is a more scalable web server than IIS 7.5 and that it is extremely simple to manage compared to IIS. Also, Nginx can very easily be used to load balance among several JSON services, using the upstream module.
As I only need to serve JSON, I feel that ASP.NET and IIS are way overkill. I just need some very simple routing and a simple auth-cookie mechanism I can easily write myself.
Right now I am using MVC3, but it feels too bloated when I only serve JSON, and I am very annoyed by having to write custom auth attributes to get a default-deny policy, having to write an HttpModule to work around Forms authentication's default redirection of unauthorized requests, and in general needing to read and learn a lot to stay in control of the framework. I have also considered WCF, but my previous experience with it was that there was too much bloat and configuration for my needs, and too much to know about to stay in control.
I prefer simplicity and want to avoid "framework overhead" when I just need to handle some Http for a simple, fast and scalable async json service. So I am considering a setup like this:
An Nginx webserver on a linux box that load balances among and proxies webrequests to async json services.
The JSON services are written as Windows Services using HttpListener to do async handling of web requests (see the sketch below).
What are your thoughts about this architecture?
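For the second point, a bare-bones sketch of the kind of service I mean (port and payload are placeholders; a real Windows Service would start the listener in OnStart rather than Main):

using System;
using System.Net;
using System.Text;

class JsonService
{
    static HttpListener listener;

    static void Main()
    {
        listener = new HttpListener();
        listener.Prefixes.Add("http://+:8081/"); // placeholder port
        listener.Start();
        listener.BeginGetContext(OnRequest, null);
        Console.ReadLine(); // keep the process alive for the sketch
    }

    static void OnRequest(IAsyncResult ar)
    {
        HttpListenerContext context = listener.EndGetContext(ar);
        listener.BeginGetContext(OnRequest, null); // immediately accept the next request

        byte[] body = Encoding.UTF8.GetBytes("{\"hello\":\"world\"}"); // placeholder JSON
        context.Response.ContentType = "application/json";
        context.Response.ContentLength64 = body.Length;
        context.Response.OutputStream.Write(body, 0, body.Length);
        context.Response.OutputStream.Close();
    }
}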
EDIT: Actually, I think it would be more performant to use FastCGI from nginx to the Windows services instead of proxying HTTP requests? What are your thoughts?
Just implement an ASHX - basically an IHttpHandler for IIS - which scales really well, and most of the issues you describe just "go away"... it gives you full control over the whole request/response processing. For a nice tutorial see http://www.dotnetperls.com/ashx
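A minimal sketch of such a handler, serving JSON via JavaScriptSerializer (the class name and payload are placeholders):

<%@ WebHandler Language="C#" Class="ProductsHandler" %>

using System.Web;
using System.Web.Script.Serialization;

public class ProductsHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        var payload = new { id = 42, name = "example" }; // placeholder data
        context.Response.ContentType = "application/json";
        context.Response.Write(new JavaScriptSerializer().Serialize(payload));
    }
}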
My feeling is that it would be more than sufficient to configure a simple web application that has one request handler that returns the relevant Json response.
You may want to look at optimising the ASP.NET pipeline. You can read more about this and other ASP.NET optimisations here. If the request handling is lightweight from a processing perspective, you may also want to bump up the thread limits. This is also covered in the article referenced.
From an Nginx point of view, you may want to check that the number of worker processes matches your CPU count.
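As a rough sketch, the nginx side (hostnames, ports and counts are placeholders) might look like:

worker_processes 4;  # match your CPU count

events { worker_connections 1024; }

http {
    upstream json_backends {
        server 10.0.0.1:8081;  # your HttpListener-based services
        server 10.0.0.2:8081;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://json_backends;
        }
    }
}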
Hope this helps.
I am new to RESTful web services. We are taking the REST route for building our public web services to be consumed by our clients, and I have a few questions.
Are there any limitations with pure REST web services? And if so, would a hybrid REST web service take care of those limitations?
I am thinking about using SSL + a Hash-based Message Authentication Code (HMAC) in the Authorization header for security, along with IP-based filtering. What do you guys think about it?
Are there any good client side tools for testing?
Currently I am using the following:
http://code.google.com/p/rest-client/
And what about some kind of client-side code-generation tool?
The following links are my source of info.
http://msdn.microsoft.com/en-us/library/dd203052.aspx
Link
The first thing to keep in mind is that a REST service should be stateless, which is very different when compared to a SOAP/RPC type of service interface. Using REST methodology requires you to rethink how you want your clients to interact with the service, breaking down the interactions into clear and concise method calls.
REST
+ Lightweight messages, very little overhead (other than the XML itself)
+ Easily readable results, can easily test with a web browser
+ Easy to implement
- Looser interface, loose type checking
SOAP
+ More rigid, with a strict contract definition
+ Plenty of development tools available.
Looking through the WCF MSDN documentation, WCF SOAP support was integrated from the start while REST support is a recently added feature. I myself am having a hard time finding documentation for authentication/security for REST services, as most of the documentation is directed towards SOAP.
Client side generation tools: I haven't come across any for REST services as REST doesn't define a service contract as SOAP does. WADL is an attempt to do that for REST services.
http://en.wikipedia.org/wiki/Web_Application_Description_Language
http://wadl.codeplex.com/
I'm interested in reading more responses dealing with authentication and security, as I'm looking into that myself.
This is a good starting point of a WCF REST WebService:
REST / SOAP endpoints for a WCF service
(BTW: Stack Overflow has nice REST-style URLs.)
You can test a REST service with just a web browser (go to the URL and get the XML or JSON). Fiddler is also a good tool, as is the Firebug plugin for Firefox. I usually make a thin service-interface project and a separate (unit-tested) logic project.
For authentication I would first generate a Guid and a timestamp, and then a hash based on those (.NET supports SHA256 and SHA512). The Guid can be stored on the server (in a database table) to map it to some concrete numerical id. Then you can have a REST URL like:
/myobject/1?timestamp=20100802201000&hash=4DR7HGJPRE54Y
and just disable the hash & timestamp check in the development environment (e.g. with AOP). With the timestamp I would check that the stamp is between 15 minutes back and 15 minutes forward in time (which should be enough to prevent replay attacks).
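A sketch of what the server-side check could look like, following that outline (the shared secret, the "|" separator and the Base64 encoding are my own choices for illustration):

using System;
using System.Globalization;
using System.Security.Cryptography;
using System.Text;

public static class RequestAuthenticator
{
    public static string ComputeHash(Guid clientGuid, string timestamp, string sharedSecret)
    {
        using (SHA256 sha = SHA256.Create())
        {
            byte[] input = Encoding.UTF8.GetBytes(clientGuid + "|" + timestamp + "|" + sharedSecret);
            return Convert.ToBase64String(sha.ComputeHash(input));
        }
    }

    public static bool IsValid(Guid clientGuid, string timestamp, string hash, string sharedSecret)
    {
        // The stamp format matches the example URL above: yyyyMMddHHmmss.
        DateTime stamp;
        if (!DateTime.TryParseExact(timestamp, "yyyyMMddHHmmss",
                CultureInfo.InvariantCulture, DateTimeStyles.None, out stamp))
            return false;

        // Allow 15 minutes of clock skew in either direction.
        if (Math.Abs((DateTime.UtcNow - stamp).TotalMinutes) > 15)
            return false;

        return hash == ComputeHash(clientGuid, timestamp, sharedSecret);
    }
}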
Will your service be visible to the public/internet, and is your client a jQuery or Silverlight client? Then you still have a problem: you don't want to include a secret key in the client software code.
So you need to generate the hash on the server and use some kind of cookie to store the client session. (This can be done e.g. with a separate login page/application in a folder with a different config file.) I remember that this book did have something on the topic:
If you want to enable the HttpContext when using WCF, you need to set <serviceHostingEnvironment aspNetCompatibilityEnabled="true"> under <system.serviceModel>.
Then you can check current user identity from HttpContext.Current.User.Identity.Name.
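A minimal sketch of the pieces involved (the contract and service names are hypothetical):

using System.ServiceModel;
using System.ServiceModel.Activation;
using System.Web;

[ServiceContract]
public interface IMyRestService
{
    [OperationContract]
    string WhoAmI();
}

// The attribute ties the service into the ASP.NET pipeline; it works
// together with aspNetCompatibilityEnabled="true" in web.config.
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
public class MyRestService : IMyRestService
{
    public string WhoAmI()
    {
        return HttpContext.Current.User.Identity.Name;
    }
}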
However, if you want to make a pure REST service then you don't use cookies, but HTTP Basic Authentication coupled with SSL/TLS for each call.
I think that it's easy to make a client with just LINQ2Xml or jQuery so maybe client generation is not needed.
Or you can also have both, a SOAP and a REST interface, and use a service reference to make a client.
One thing to keep in mind is that you can take REST as a philosophy (everything should be reachable by a clean URL, without hidden strings attached) or as a dogma (you have to use PUT and DELETE even if that means a lot of hardship down the line).
The emphasis is on simplification - like using simple data types for params instead of structured pileups, not cluttering the interface for superfluous reasons (like towing a giant page "title" in a URL), and not using headers which are not well known and de facto standard.
So, you can design a perfectly RESTful interface using just GET and retain usability and testability from web browsers. You can also use any standard authentication method, or several of them for redundancy, depending on your actual target audience. If you are making an app to run on a corpnet with standardized credentials and tokens, you can continue using those. If you are doing something for very general access, you can use a combination of GET args and/or cookies - it keeps your URLs clean for 99.99% of users.
You can even serve both JSON and XML (like Google Maps, for example) and still be RESTful, but you can't do full-scale SOAP (complex input types etc.). You can do limited SOAP - simple types for requests, always expressible as GET args - and people still get WSDL for documentation.
Hope this paints a flexible enough picture - the way of thinking above any strict dogma.
I've recently discovered a way to implement RESTful services using Global.asax (by handling the Application_BeginRequest event). Basically, I am saying it is possible (and easy) to implement a RESTful web service in classic ASP.NET, without any need for WCF.
It takes approximately 30 lines of code to figure out which method you want to call (from the URL) and pass it parameters (from the query string, via reflection) as well as serialize the result using XmlSerializer. It all leads to a web service that can be accessed through HTTP GET requests, and returns standard XML data.
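A condensed sketch of the idea (MyService and its method are stand-ins, and error handling, security checks and richer type conversion are omitted):

using System;
using System.Linq;
using System.Reflection;
using System.Web;
using System.Xml.Serialization;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // e.g. /MyService/GetProduct?id=5 -> MyService.GetProduct(5)
        string methodName = Request.Url.Segments.Last().Trim('/');
        MethodInfo method = typeof(MyService).GetMethod(methodName);
        if (method == null) return; // not one of ours; let ASP.NET handle it

        // Bind query-string values to the method's parameters via reflection.
        object[] args = method.GetParameters()
            .Select(p => Convert.ChangeType(Request.QueryString[p.Name], p.ParameterType))
            .ToArray();

        object result = method.Invoke(new MyService(), args);

        Response.ContentType = "text/xml";
        new XmlSerializer(result.GetType()).Serialize(Response.OutputStream, result);
        Response.End();
    }
}

public class MyService // stand-in service class
{
    public string GetProduct(int id) { return "Product " + id; }
}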
So with that in mind, is there any reason to use WCF when creating a RESTful web service that will be invoked only through HTTP GET requests? WCF introduces a lot of overhead and restrictions, whereas the Global.asax approach I described above is much easier to implement, customize, and deploy.
Note - JSON endpoints can also be implemented without WCF by using the JavaScriptSerializer.
Also - HTTP POST requests can be handled by Global.asax in a similar way.
So in the end, what would be the reason to use WCF in such a case? Is there better scalability or performance?
You can also use ASP.NET MVC to implement REST quite easily.
What you don't get for free with this approach is:
non-HTTP bindings.
support for multiple message formats.
non-IIS hosting.
control over the process activation.
control over the instance creation.
object pooling.
message queueing.
transactions.
scalability and reliability.
additional technologies built on top of WCF like the OData support.
If none of these applies to your scenario - you don't need WCF.
The answer is: two different pipelines. The WCF pipeline was built specifically for services, while ASP.NET was built for rendering content via HTTP. Both have their pluses and minuses.
If you are comfortable with the ASP.net stack, and don't have to worry about things like standards, then ASP.Net is fine.
But if you want the best of both worlds, try WCF Data Services, all the cool built-in features of WCF, with none of the hassles. Currently, MVC does not have a view engine to generate OData.
Disclaimer: I've tried Googling for something that will do what I want, but no luck there. I'm hoping someone here might be able to lend a hand.
Background
I have a .NET class library that accesses a secure web service with the WSE 2.0 library. The web service provides a front-end to a central database (it's actually part of a data-sharing network spanning multiple customers) and the class library provides a simple wrapper around the web service calls to make it accessible from a legacy VB6 application. The legacy application uses the class library to retrieve and publish information to the web service. Currently, the application and class library DLL are both installed client-side on multiple workstations.
The Problem
The catch is that the web service we are accessing uses HTTPS and a valid X509 client certificate needs to be presented to the web service in order to access it. Since all of our components live on the client machine, this has led to deployment problems. For example, we have to download and install per-user certificates on each client machine, one for each user who might need to access the web service through our application. What's more, the web server itself must be accessed through a VPN (OpenVPN in particular), which means a VPN client has to be installed and configured on every client machine. It is a major pain (some of our customers have dozens of workstations).
The Proposed Solution
The proposed solution is to move all of this logic to a central server on the customer site. In this scenario, our legacy application would communicate with a local server, which will then go off and forward requests to the real web service. In addition, all of the X509 certificates would be installed on the server, instead of on each individual client computer, as part of the effort to simplify and centralize deployment.
So far, we've come up with three options:
Find a ready-made SOAP proxy server which can take incoming HTTP-based SOAP requests, modify the Host header and routing-related parts of the SOAP message (so they are pointing to the real web server), open an SSL connection to the real web server, present the correct client certificate to the server (based on a username-to-certificate mapping), forward the modified request, read the response, convert it back to plaintext, and send it back to the client.
Write a proxy server by hand that does everything I just mentioned.
Think of a completely different and hopefully better way to solve this problem.
Rationale
The rationale for trying to find and/or write a SOAP proxy server is that our existing .NET wrapper library wouldn't have to be modified at all. We would simply point it at the proxy server instead of the real web service endpoint, using a plain HTTP connection instead of HTTPS. The proxy server will handle the request, modify it so that the real web service will accept it (i.e. things like changing the SOAPAction header so that it is correct), handle the SSL/certificate handshake, and send the raw response data back to the client.
However, this sounds like an awful hack to me at best. So, what are my options here?
Do I bite the bullet and write my own HTTP/SSL/SOAP/X509 aware proxy server to do all this?
Or... is there a ready-made solution with an extensible enough API that I can easily make it do what I want?
Or...should I take a completely different approach?
The key issues we are trying to solve are (a) centralizing where certificates are stored to simplify installation and management of certificates and (b) setting things up so that the VPN connection to the web server only occurs from a single machine, instead of needing every client to have VPN client software installed.
Note we do not control the web server that is hosting the web service.
EDIT: To clarify, I have already implemented a (rather crappy) proxy server in C# that does meet the requirements, but something feels fundamentally wrong to me about this whole approach to the problem. So, ultimately, I am looking either for reassurance that I am on the right track, or helpful advice telling me I'm going about this the completely wrong way, and any tips for doing it a better way (if there is one, which I suspect there is).
Apache Camel would fit the bill perfectly. Camel is a lightweight framework for doing exactly this kind of application integration. I've used it to do some similar http proxying in the past.
Camel uses a very expressive DSL for defining routes between endpoints. In your case you want to stand up a server that is visible to all the client machines at your customer site, and whatever requests it receives you want to route 'from' this endpoint 'to' your secure endpoint via https.
You'll need to create a simple class that defines the route. It should extend RouteBuilder and override the configure method:

public class WebServiceProxy extends RouteBuilder
{
    public void configure()
    {
        from("jetty:http://0.0.0.0:8080/myServicePath")
            .to("https://mysecureserver/myServicePath");
    }
}
Add this to a Camel context and you'll be good to go.
CamelContext context = new DefaultCamelContext();
context.addRoutes(new WebServiceProxy());
context.start();
This route will create a web server using Jetty, bound to port 8080 on all local interfaces. Any requests sent to /myServicePath will get routed directly to your web service defined by the URI https://mysecureserver/myServicePath. You define the endpoints using simple URIs in the DSL, and Camel takes care of the heavy lifting.
You may need to configure a keystore with your certs in it and make it available to the http component. Post again if you have trouble here ;)
I'd read the Camel docs for the http component for more details; check the unit tests for the project too, as they are chock full of examples and best practices.
HTH.
FYI: To have the http component present your client certificate, you'll need to point the JVM at your keystore (the corresponding javax.net.ssl.trustStore properties control which server certificates are trusted):
System.setProperty("javax.net.ssl.keyStore", "path/to/keystore");
System.setProperty("javax.net.ssl.keyStorePassword", "keystore-password");
You should look into WCF, which supports the WS-Addressing protocol. I believe I've seen articles (in MSDN, I think) on writing routers using WCF.
You should also get rid of WSE 2.0 as soon as possible. It's very badly obsolete (having been replaced by WSE 3.0, which is also obsolete). All of its functions have been superseded by WCF.
I believe an ESB (Enterprise Service Bus) could be a viable, robust solution to your problem. There is an open source ESB called Mule, which I've never used. I did mess around with ALSB (AquaLogic Service Bus) a while back, but it would be expensive for what you are describing. Anyway, the thing that you would want to look at in particular is the routing. I'm not sure it would be a simple plug 'n play, but it is indeed another option.
You can also do this with Microsoft ISA Server, a commercial Proxy/Cache server. It will do many of the things you need out of the box. For anything that is not possible out of the box, you can write an extension to the server to get it done.
ISA Server is not free.
ISA is now being renamed to "Microsoft Forefront Threat Management Gateway". It is much more than a web proxy server, though - it has support for many protocols and lots of features. Maybe more than you need.
There is a service virtualization tool from Microsoft available on Codeplex called the Managed Service Engine which is intended to decouple the client from the web service implementation. It might fill the bill or give you a running start. I haven't really investigated it thoroughly, just skimmed an article in MSDN and your description reminded me of it.
http://www.codeplex.com/servicesengine
http://msdn.microsoft.com/en-us/magazine/dd727511.aspx
Your security model doesn't make sense to me. What is the purpose of using HTTPS? Usually it is to authenticate the service to the clients. In that case, why does the server need to keep the clients' certificates? It is the clients who should be keeping the server's X509 Certificate.
Why do you need to go through VPN? If you need to authenticate clients, there are better ways to do that. You can either enable mutual authentication in SSL, or use XML-Security and possibly WS-Security to secure the service at the SOAP level. Even if you do use SSL to authenticate clients, you still shouldn't keep all the client certificates on the server, but rather use PKI and verify the client certificates to a trusted root.
Finally, specifically for your proposed proxy-based solution, I don't see why you need anything SOAP-specific. Don't you just need a web server that can forward any HTTP request to a remote HTTPS server? I don't know how to do this offhand, but I'd be investigating the likes of Apache and IIS...