Summary: I need to know if there is an existing lightweight implementation of REST+JSON in the .NET world that does not use WCF. If not, I am looking for some folks who would be interested in starting a joint venture for an open source project.
I do not know about you, but I was a big fan of WCF when it came out, and I praised its design for its modularity and extensibility. However, as I used it more and more often, fundamental issues started to come to light, to the point that I now feel it has to be scrapped and redesigned. That may seem like a big statement, but I believe these are major issues:
First of all, WCF internally uses SOAP for its messages, which means that if the transport message is not SOAP, we incur the cost of transforming to and from SOAP on every call. This is expensive and time-consuming.
Transforming the outgoing message requires "plugging in" a message inspector and "stealing" the message. As the name implies, this is an inspector (meant to be used for inspection and logging), so using it to change the message is frankly a hack.
WCF was designed around WSDL, not REST, and the world has changed a lot since 2001. Implementing REST also requires stealing the message.
The channel stack is unnecessarily heavy.
The main stack is protocol-agnostic. This is not an advantage; it is a fundamental flaw. As you know, access to a lot of protocol-level information was added later because it was impossible to implement some important user scenarios. For example, the client's IP address in TCP was not accessible and was added later (now accessible using OperationContext.Current.IncomingMessageProperties[RemoteEndpointMessageProperty.Name]; see the snippet after this list).
Interoperability with other platforms can be an issue.
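For reference, here is a minimal sketch of how that later-added client IP is retrieved inside a service operation (assuming .NET 3.5 or later):

using System.ServiceModel;
using System.ServiceModel.Channels;

public static class ClientInfo
{
    // Returns the caller's IP address from the incoming message properties.
    public static string GetClientIp()
    {
        var endpoint = (RemoteEndpointMessageProperty)OperationContext.Current
            .IncomingMessageProperties[RemoteEndpointMessageProperty.Name];
        return endpoint.Address; // endpoint.Port is also available
    }
}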
Now it seems that a lot of designs are moving towards the simplicity of JSON and REST. I just love their simplicity, and I can see my washing machine consuming JSON and hosting a REST service in 5-10 years! I believe their implementation in .NET was a hack, and we seriously need a very lightweight and simple framework (because these are simple and lightweight) to host REST+JSON services inside and outside IIS. I hope such a framework exists, but if not, I am really eager to get something going with a number of like-minded folks.
So what do you think? Does such a framework exist? If not, is anyone interested?
MVC that spits out JSON instead of HTML seems like a possibility. You have the freedom to use either the DataContractJsonSerializer or Json.NET to serialize your data contracts.
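For example, a controller action can return JSON directly (a minimal sketch; Order is a made-up POCO here, and JsonRequestBehavior requires ASP.NET MVC 2 or later):

using System.Web.Mvc;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }
}

public class OrdersController : Controller
{
    // GET /orders/get/42 returns {"Id":42,"Status":"Shipped"}
    public JsonResult Get(int id)
    {
        var order = new Order { Id = id, Status = "Shipped" };
        return Json(order, JsonRequestBehavior.AllowGet);
    }
}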
Take a look at OpenRasta. It looks like it addresses many of your concerns.
If you really don't want to use IIS, you can implement your own HTTP listener process. This lets you write your own standalone application to respond to HTTP requests (which may be run as a service if you so desire) without any of the overhead of IIS, WCF, or any other container process framework. Your process would live on top of the HTTP.sys functionality exposed by Windows, and exposed by the .NET Framework through the HttpListener class.
Take a look at http://msdn.microsoft.com/en-us/library/system.net.httplistener.aspx
Note that you will need to write your own infrastructure for matching incoming requests and dispatching them to corresponding handlers (the equivalent of ASP.NET MVC's UrlRoutingModule/RouteTable.Routes/MvcRouteHandler), and you will need to flow the HttpListenerContext everywhere in order to examine the incoming request and complete it. But this gives you the ultimate in flexibility in what you can do.
And it certainly performs - I have benchmarked a basic HttpListener implementation on a standard desktop-class machine at over 3000 requests served/second, so the framework itself will not hold you back.
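To give a feel for how little code is involved, here is a minimal sketch of a single-threaded JSON endpoint built directly on HttpListener (the URL prefix and payload are illustrative; real code needs routing, threading, and error handling):

using System;
using System.Net;
using System.Text;

class MiniRestHost
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/api/"); // may require URL ACL registration
        listener.Start();
        while (true)
        {
            HttpListenerContext context = listener.GetContext(); // blocks until a request arrives
            byte[] body = Encoding.UTF8.GetBytes("{\"message\":\"hello\"}");
            context.Response.ContentType = "application/json";
            context.Response.ContentLength64 = body.Length;
            context.Response.OutputStream.Write(body, 0, body.Length);
            context.Response.Close();
        }
    }
}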
There is MicroRest, an open source project I started a while ago. Here's the blurb I wrote:
MicroRest is a tiny REST framework - 5 classes, around 500 lines of code. All output is JSON. It allows you to add REST capabilities to your ASP.NET applications without needing to go through the huge ugly mess of WCF REST (which doesn't provide 'clean' URLs in 3.5). It also allows you to use POCOs (and complex objects in some cases) inside your REST methods, where WCF restricts you to using ints and strings.
Contributions are very welcome - it does rely on System.Web.Routing right now, so it needs Cassini/IIS Express as an embedded web server. I'm looking at writing a custom route parser so it can move to Kayak at some point.
Related
I have been researching whether or not to use WCF for a new project we are going to be working on.
Basically, the only reason that may prevent us from using it is that the new project must be able to communicate with a legacy server that talks binary messages via .NET's TcpClient class.
I am wondering if I can perhaps write a custom binding to send and receive messages from the server. I have found that I can write custom bindings and encodings, but I am not sure if I can read messages as raw bytes rather than SOAP messages.
One possible solution I thought of is to write a custom encoding that transforms the bytes into SOAP messages and vice versa. But I have not checked up on this or thought it through much.
Jason,
I'd suggest (not sure if this is an answer, but I can't post comments, sorry) that you go with either a full-sockets or a full-WCF solution (meaning both client and server).
Given that you have a legacy server talking via sockets, it's much easier to make a sockets client with whatever custom protocol you have, plus parsing, basic error handling, etc.
...and you'll get it to be faster too (I'm not sure about the purpose and nature of the app, the communication you're having with the server, or whether you need some other WCF features, etc.).
See this thread which is more or less your case...
WCF TCP client with Java Socket server on custom XML messages
...basically, you'd need to write a transport channel - which would again pretty much have to support everything that you'd need it to do for a 'sockets-only' client + extra work and layers.
That normally only makes sense if you're
a) going to reuse that for later development, so e.g. you could just plug that solution in for different servers, or if you have many developers and a large code base, etc. (if you don't, making the sockets solution a separate lib and reusing it is still easier), or
b) in need of some specific feature that isn't easily reproducible "by hand" with sockets - though to support any of that you'd have to wrap it up pretty thoroughly anyway, or...
c) using some 3rd-party lib - which I'm not aware of for such cases (usually this falls into "too custom" territory),
hope this helps some
Not sure if you've seen Carlos Figueira's series of posts on WCF extensibility but they're well worth a read if you want to understand what's possible and how to go about doing it.
http://blogs.msdn.com/b/carlosfigueira/archive/2011/03/14/wcf-extensibility.aspx
There is an example of a custom TCP-based binding for JSON-RPC that you may be able to use as a basis for a new transport binding in your case.
http://blogs.msdn.com/b/carlosfigueira/archive/2011/12/08/wcf-extensibility-transport-channels-request-channels-part-1.aspx
I searched a lot, apologies if I missed something obvious. And thanks for reading the looong text below.
I have a 3rd-party (read: no way to access/change the source) application here. It consists of a server (a Windows service) and an API that talks to the server via remoting.
For several reasons I'd like to expose this API over WCF (see subject: One reason is a WCF client).
The problem is, the API is:
(1) unchangeable (follows from the 3rd-party rule)
(2) not using WCF itself (it is serializable/MarshalByRef where necessary for remoting)
(3) using lots of interfaces and internal implementation classes
Following (1), I cannot apply the (quite intrusive) WCF attributes myself.
Following (2), the API itself can be used "over the wire" (they support remoting via TCP and HTTP), but remoting is not good enough for me.
Following (3), I mostly have interfaces, which WCF won't handle well because it cannot (de-)serialize them. The implementation classes could be sent over, but I cannot access them.
The general usage of this API is based on a single interface (and its members/properties), so typical usage looks like
var entryPoint = new ApiClientEntryPoint();
entryPoint.SomeMethodCall();
entryPoint.PropertyExposingAnInterface.SomeOtherMethodCall();
and so on.
What I'd really like to do is generate (with as little effort/code as possible) a proxy (not in the typical WCF sense) that I expose via WCF and that serializes this hierarchy mapping every call/property on the client to the real thing on the server.
The closest I've come so far is stumbling upon this project, but I wonder if there are more/other tools available that take a medium-to-large part of this work off my shoulders.
If there is any other general advice, or better approaches to wrapping something preexisting and unchangeable in WCF, please share.
My advice is to use a facade pattern. Create a new WCF service that is specific to your usage and wrap the 3rd party service. Clients would talk to your service and you would talk to the 3rd party. But clients would not talk to the 3rd party directly.
This would work in most but not all scenarios. I'm not sure of your particular scenario so YMMV.
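A minimal sketch of such a facade, reusing the ApiClientEntryPoint from the question (the GetStatus/RunJob operations are made up for illustration):

using System.ServiceModel;

[ServiceContract]
public interface IApiFacade
{
    [OperationContract]
    string GetStatus();

    [OperationContract]
    void RunJob(int jobId);
}

// The facade owns the 3rd-party API privately; WCF clients never see it.
public class ApiFacade : IApiFacade
{
    private readonly ApiClientEntryPoint _api = new ApiClientEntryPoint();

    public string GetStatus()
    {
        _api.SomeMethodCall(); // illustrative delegation to the real API
        return "ok";           // map the real return values here
    }

    public void RunJob(int jobId)
    {
        _api.SomeMethodCall(); // again illustrative
    }
}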
BTW, you can look at WCF RIA Services, which is good for exposing services to Silverlight and lets you avoid a lot of the hand-coding of service plumbing. But again, depending on your particular scenario, it might not be the best way to go.
Edit:
It's now clear that the API is too big and/or the usage patterns of the clients are too varied in order to effectively use a facade. The only other thing I can suggest is to look at using a code generation tool. Use reflection (assuming it is a .NET API?) to pull apart the API and then codegen new services using the details you gathered. You could look at the T4 templates built into Visual Studio or you could look at a more "robust" tool such as CodeSmith. But I'm guessing this would be some painful code to write. I'm not aware of an automated solution for this.
Is the API well documented? If so, is the documentation in a parseable format such as XML or well-structured HTML? In that case you might be able to codegen from the documentation as opposed to reflecting through the code. This might be quicker depending on the particulars.
Okay, hare-brained scheme #1 on my side:
Use Visual Studio's Refactor menu to "Extract Interface" on ApiClientEntryPoint.
Create a new WCF service which implements the above Interface and get VS to generate the method stubs for you.
For PropertyExposingAnInterface.SomeOtherMethodCall you will have to flatten the interfaces, as there is no concept of a "nested" service operation (see the sketch below).
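A sketch of what that flattening could look like - the nested property access happens inside the service, so the client only sees a flat operation (names follow the question's example; the contract itself is illustrative):

using System.ServiceModel;

[ServiceContract]
public interface IFlattenedApi
{
    // entryPoint.PropertyExposingAnInterface.SomeOtherMethodCall()
    // exposed as a single top-level operation.
    [OperationContract]
    void SomeOtherMethodCall();
}

public class FlattenedApi : IFlattenedApi
{
    private readonly ApiClientEntryPoint _entryPoint = new ApiClientEntryPoint();

    public void SomeOtherMethodCall()
    {
        _entryPoint.PropertyExposingAnInterface.SomeOtherMethodCall();
    }
}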
Your only other option will be to use T4 code gen, which will probably take longer than the above idea.
I've been banging my head trying to figure some things out. So, I'm looking for advice and research material (via links). Here's the scenario:
We have a library (say, CommonLib) that contains resources needed by several other applications (say, AppA, AppB, AppC, and so on...). Now, the way this currently works is that when an AppA instance starts, it checks to see whether the specific port is in use. If it isn't, it kicks CommonLib ("Hey, wake up") and the service is started. Then AppA is happy and off we go.
Now, I've done a lot of research on Remoting.Channels and I've come to the conclusion that I'm starting an application built on a technology that is considered 'legacy'. Well...I don't like that. Honestly, WCF is way more overhead than we require and not fully implemented in Mono. We are targeting multi-platform compatibility (Windows, Mono, Linux) so we are researching all options.
The idea of remoting started, in the first place, because we wanted CommonLib to be a guaranteed single instance (as I understand it, a singleton is pretty much only guaranteed to be a singleton within a given AppDomain - feel free to correct me if I'm wrong). Anyway, I realized the power of remoting and decided to begin some experimental implementation. I have been successful in my initial use of the MarshalByRefObject. But, I'm concerned about the continued implementation of this legacy technology.
So, with all this... I am considering how I can implement CommonLib (as a host application) and, without remoting, get MarshalByRefObject-like behavior through a Stream, a standard TCP socket, or some other way. What I'm thinking is, instead of instancing AppA to get CommonLib running, just implement CommonLib as the base app. Then you select which app (really just a 'hosted' .dll) you want instanced within CommonLib. CommonLib would then load that .dll into the CommonLib framework along with whatever custom controls that hosted app uses. Along with this idea, I'd forgo the requirement (for now) that CommonLib must be a genuine singleton.
So... that is a detail of our scenario. Again, my question really has two parts: (a) what technology(ies) should I research, and (b) do I need to be concerned with the legacy status of the remoting technology?
Any other advice, comments, or questions are more than welcome.
UPDATE 1: I'm starting off with this snippet. This will allow me to load a file (or script) with a list of apps (or plug-ins) that have been installed. I can create this file in XML or binary format. When a new app is installed, its file & path can be added. Hmmm... I don't necessarily need to use MarshalByRefObject.
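A sketch of that plug-in list idea, assuming an XML format (the file layout and names are made up; Assembly.LoadFrom does the actual loading):

using System;
using System.Reflection;
using System.Xml.Linq;

class PluginLoader
{
    // plugins.xml (hypothetical): <plugins><plugin path="AppA.dll"/></plugins>
    public static void LoadAll(string configPath)
    {
        XDocument doc = XDocument.Load(configPath);
        foreach (XElement plugin in doc.Root.Elements("plugin"))
        {
            string path = (string)plugin.Attribute("path");
            Assembly asm = Assembly.LoadFrom(path); // loads the hosted app into CommonLib
            Console.WriteLine("Loaded " + asm.FullName);
        }
    }
}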
While WCF may not be as complete in Mono, Mono 2.6 provides everything required for Silverlight/Moonlight, so a WCF-based implementation should be perfectly feasible. As long as you don't try anything exotic (different transports, inspectors, etc.) it should be more than sufficient to provide an RPC stack that is reliable between Windows / Mono / etc.
The key difference between WCF and remoting is in usage - remoting is based around an object that pretends to be at the other end, whereas WCF is based around a service; the point being that you should base your interactions around discrete methods (rather than accessing properties, etc.) - this also has the advantage of helping to make it explicit when you are crossing the boundary.
Another option would be to write a very basic socket server; very lightweight, and you could use something like protobuf-net to provide a portable (cross-platform) serializer implementation (you shouldn't really trust BinaryFormatter between the two - it is... flaky).
In short - I wouldn't build around MarshalByRefObject at all; I would write a service layer, something like:
interface IMyService {
    void Method1();
    int Method2(string s);
}
and abstract these details away from the caller. If you end up using WCF, then that is all you need; and for existing remoting support I would write an IMyService implementation that encapsulates (privately) the whole MarshalByRefObject story. Ditto if I wrote a socket server.
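For example, a sketch of hiding the remoting plumbing behind that interface (the URL is illustrative; Activator.GetObject returns a transparent proxy to the remote MarshalByRefObject):

using System;

public class RemotingMyService : IMyService
{
    private readonly IMyService _remote;

    public RemotingMyService()
    {
        // Callers of this class never see the remoting machinery.
        _remote = (IMyService)Activator.GetObject(
            typeof(IMyService), "tcp://localhost:9000/MyService");
    }

    public void Method1() { _remote.Method1(); }
    public int Method2(string s) { return _remote.Method2(s); }
}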
I'm not sure that .NET Remoting is obsoleted by WCF. I think they have somewhat different use cases; WCF (deliberately) has no concept of "marshal by reference" because it's designed for distributed and (relatively) loosely coupled apps that might need to avoid chatty protocols due to latency etc. If your components are naturally tightly coupled, latency will be low but performance needs to be high, preserving rich .NET types is important, etc. then Remoting may still be a good fit. Anyway, I wouldn't worry about being "legacy", "legacy" technologies at least on Windows/.NET have a way of staying around for quite some time if they get a decent amount of usage. Remoting still exists in the latest (4.0) version of .NET.
None of this is meant as a claim that Remoting necessarily is the best fit for your situation ...
What are the differences, advantages, and disadvantages between remoting and sockets? Which is the best way to implement client-server functionality?
Sockets are raw binary streams between two endpoints. You would need to write your own RPC (etc.) layer to process the messages and deal with a lot of infrastructure code. However, since they are so close to the metal, this can be very, very efficient. It is not tied to any specific architecture, as long as both ends talk the same message format. Tools like protobuf-net can help you construct binary messages for streams (rather than rolling your own serialization code).
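A sketch of what that can look like with protobuf-net (the Ping message and port are made up; SerializeWithLengthPrefix frames each message so the receiver knows where it ends):

using System.Net.Sockets;
using ProtoBuf;

[ProtoContract]
public class Ping
{
    [ProtoMember(1)] public int Sequence { get; set; }
    [ProtoMember(2)] public string Payload { get; set; }
}

class PingClient
{
    public static void Send(string host, int port, Ping message)
    {
        using (var client = new TcpClient(host, port))
        using (NetworkStream stream = client.GetStream())
        {
            // Length-prefixed so multiple messages can share one stream.
            Serializer.SerializeWithLengthPrefix(stream, message, PrefixStyle.Base128);
        }
    }
}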
Remoting is a .NET-specific tool, and it is very brittle regarding versioning. I wouldn't recommend remoting for client/server - use things like WCF instead.
WCF is a more flexible comms stack - a lot of power and complexity, but arguably a bit of bloat too (XML, complex security, etc.). It is data-contract based, so roughly open (client and server can be different), but still a bit .NET-focused.
Edit: For info, protobuf-net provides an RPC stack too; at the moment only an HTTP implementation is provided, but at some point I'll add raw TCP/IP.
Direct socket manipulation can give you more power, flexibility, performance, and unfortunately complexity when compared to Remoting or WCF. However, if you need the benefits of low-level TCP/IP, such as non-blocking IO and custom protocols, tools such as Ragel and frameworks like Mina can ease the complexity burden. I recommend trying the higher-level APIs like WCF first and only use direct sockets if these don't meet your needs.
I second most of what Marc Gravell wrote - specifically, remoting and internal serialization are "easy" but very easy to break, and they often do not scale well to a public network (I'm not that familiar with .NET Remoting, but I guess it needs a well-known registry service, which is often problematic when going outside the clean lab environment).
Implementing a standard or even roll-your-own RPC is harder but safer in the long run: you do not have problems with code revisions (or they are easier to control), scaling is fully controlled by your own code, and it's easy to develop components using various technologies.
There are many, many tools that help you easily build RPC mechanisms over sockets, but I really like to use plain old HTTP - get a simple embedded HTTP server running inside your server process, and your client just needs an HTTP client to send messages. If you develop your own simple RESTful call semantics (instead of using some bloated message format like SOAP or XML-RPC), then there is really almost nothing to do :-)
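The client side of that approach really is tiny - a minimal sketch (the URL and operation are illustrative):

using System.IO;
using System.Net;

class SimpleHttpClient
{
    public static string GetJobStatus(int jobId)
    {
        // One RESTful GET; no SOAP or XML-RPC machinery involved.
        WebRequest request = WebRequest.Create(
            "http://localhost:8080/jobs/" + jobId + "/status");
        using (WebResponse response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}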
I would say that in choosing between sockets and remoting, you should consider what type of application you are developing. Sockets are definitely for your own protocol implementations and low-level programming, and they are the only way to go if you have to communicate with other TCP/IP applications. Remoting is a preferred way to develop new .NET communication applications, where you don't need to come down to the TCP/IP stack and ensure your application talks to other (probably legacy) applications. If you can go .NET-only, it's better to choose .NET 3.5 and the WCF framework instead of .NET 2.0 remoting; the latter is a dead and unsupported technology.
I have two programs. One is in C# and another one in Java.
Those programs will, most probably, always run on the same machine.
What would be the best way to let them talk to each other?
So, to clarify the problem:
This is a personal project (so professional/costly libraries are a no go).
The message volume is low, there will be about 1 to 2 messages per second.
The messages are small, a few primitive types should do the trick.
I would like to keep the complexity low.
The Java application is deployed as a single jar as a plugin for another application. So the fewer external libraries I have to merge, the better.
I have total control over the C# application.
As said earlier, both application have to run on the same computer.
Right now, my solution would be to use sockets with some sort of CSV-like format.
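A sketch of the C# side of that idea - one line per message, comma-separated fields (the port and field names are illustrative; the Java side just reads lines off the same socket):

using System.IO;
using System.Net.Sockets;

class CsvMessenger
{
    public static void Send(int port, string type, params string[] fields)
    {
        using (var client = new TcpClient("localhost", port))
        using (var writer = new StreamWriter(client.GetStream()))
        {
            // e.g. "score,42,player1" followed by a newline
            writer.WriteLine(type + "," + string.Join(",", fields));
        }
    }
}

// Usage: CsvMessenger.Send(9000, "score", "42", "player1");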
I am the author of jni4net, an open source interprocess bridge between the JVM and the CLR. It's built on top of JNI and P/Invoke; no C/C++ code is needed. I hope it will help you.
Kyle has the right approach in asking about the interaction. There is no "correct" answer without knowing what the usage patterns are likely to be.
Any architectural decision -- especially at this level -- is a trade-off.
You must ask yourself:
What kind of messages need to be passed between the systems?
What types of data need to be shared?
Is there an important requirement to support complex model objects or will primitives + arrays do?
What is the volume of the data?
How frequently will the interactions occur?
What is the acceptable communication latency?
Until you have an understanding of the answers, or potential answers, to those questions, it will be difficult to choose an implementation architecture. Once we know which factors are important, it will be far easier to choose the more suitable implementation candidates that reflect the requirements of the running system.
I've heard good things about IKVM, the JVM that's made with .NET.
Ice from ZeroC is a really high-performance "enterprisey" interop layer that supports Java and .NET, amongst others. I think of it as an updated CORBA - it even has its own object-oriented interface definition language called Slice (like CORBA's IDL, but actually quite readable).
The feature set is extensive, with far more on offer than web services, but clearly it isn't an open standard, so not a decision to make lightly. The generated code it spits out is somewhat ugly too...
I realize you're talking about programs on the same machine, but I've always liked the idea of passing messages in XML over HTTP.
Your server could be a web server that's ready to accept an XML payload. Your client can send HTTP messages with XML in the body, and receive an HTTP response with XML in it.
One reason I like this is that HTTP is such a widely used protocol that it's easy to accept or create HTTP POST or GET requests in any language (in the event that you decide to change either the client or server language in the future). HTTP and XML have been around for a while, so I think they're here to stay.
Another reason I like it is that your server could be used by other clients, too, as long as they know HTTP and XML.
I used JNBridge (http://www.jnbridge.com/jnbpro.htm) on a relatively simple project where we had a .NET client app using a relatively significant jar file full of business object logic that we didn't want to port. It worked quite nicely, but I wouldn't say we fully exercised the capabilities of JNBridge.
I am a big fan of Thrift, an interoperability stack from Facebook. You said the code will probably run on the same machine, so it could be overkill, but you can still use it.
If they are separate programs running as independent applications, you may use sockets. I know it's a bit complex to define a communication protocol, but it'll be quite straightforward.
However if you have just two separate programs but want to run them as single application, then I guess IKVM is a better approach as suggested by marxidad.
It appears a very similar question has been asked before here on Stack Overflow (I was searching Google for "java windows shared memory"):
Efficient data transfer from Java to C++ on windows
From the answer, I would suggest you investigate:
"Your fastest solution will be memory
mapping a shared segment of memory,
and them implementing a ring-buffer or
other message passing mechanism. In
C++ this is straight forward, and in
Java you have the FileChannel.map
method which makes it possible."
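For the C# side of that shared-memory approach, .NET 4 offers MemoryMappedFile (a minimal sketch; the file path, size, and layout are made up, and the Java process would map the same backing file with FileChannel.map):

using System.IO;
using System.IO.MemoryMappedFiles;

class SharedMemoryWriter
{
    public static void WriteSequence(int value)
    {
        // Both processes open the same backing file to share the segment.
        using (var mmf = MemoryMappedFile.CreateFromFile(
            @"C:\temp\shared.dat", FileMode.OpenOrCreate, "shared", 4096))
        using (MemoryMappedViewAccessor accessor = mmf.CreateViewAccessor())
        {
            accessor.Write(0, value); // e.g. a ring-buffer write index
        }
    }
}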