Creative use of MarshalByRefObject - c#

I've been banging my head trying to figure some things out. So, I'm looking for advice and research material (via links). Here's the scenario:
We have a library (say, CommonLib) that contains resources needed by several other applications (say, AppA, AppB, AppC, and so on...). Now, the way this currently works is that AppA starts up and checks whether the specific port is available. If it isn't, it kicks CommonLib ("Hey, wake up") and the service is started. Then AppA is happy and off we go.
Now, I've done a lot of research on Remoting.Channels and I've come to the conclusion that I'm starting an application built on a technology that is considered 'legacy'. Well...I don't like that. Honestly, WCF is way more overhead than we require and not fully implemented in Mono. We are targeting multi-platform compatibility (Windows, Mono, Linux) so we are researching all options.
The idea of remoting started, in the first place, because we wanted CommonLib to be a guaranteed single instance (as I understand it, a singleton is pretty much only guaranteed to be a singleton within a given AppDomain - feel free to correct me if I'm wrong). Anyway, I realized the power of remoting and decided to begin some experimental implementation. I have been successful in my initial use of the MarshalByRefObject. But, I'm concerned about the continued implementation of this legacy technology.
So, with all this...I am considering how I can implement CommonLib (as a host application) and, without remoting, implement MarshalByRefObject through Stream, standard TCP Socket, or some other way. What I'm thinking is, instead of instancing AppA to get CommonLib running, just implement CommonLib as the base app. Then, you select what app (really just a 'hosted' .dll) you want instanced within CommonLib. CommonLib would then load that .dll into the CommonLib framework along with whatever custom controls that hosted app uses. Along with this idea, I'd forego the requirement (for now) that CommonLib must be a genuine singleton.
So...that is a detail of our scenario. Again, my question is really 2 parts: (a) What technology(ies) should I research, and (b) Do I need to be concerned with the legacy status of the remoting technology?
Any other advice, comments, or questions are more than welcome.
UPDATE 1: I'm starting off with this snippet. This will allow me to load a file (or script) with a list of apps (or plug-ins) that have been installed. I can create this file in an XML or binary format. When a new app is installed, its name and path can be added to the file. Hmmm...I don't necessarily need to use MarshalByRefObject.
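Something along these lines is what I have in mind (just a rough sketch; the plugins.xml layout and the IHostedApp interface are assumptions made for illustration, not part of the existing design):

using System;
using System.Reflection;
using System.Xml.Linq;

// Hypothetical plug-in manifest loader: "plugins.xml" lists each installed
// app's assembly path; a new entry is appended when an app is installed.
static class PluginCatalog
{
    public static void LoadAll(string manifestPath)
    {
        XDocument doc = XDocument.Load(manifestPath);
        foreach (XElement entry in doc.Root.Elements("plugin"))
        {
            string assemblyPath = (string)entry.Attribute("path");
            Assembly assembly = Assembly.LoadFrom(assemblyPath);

            // Instantiate every public type that implements the (assumed)
            // IHostedApp contract exposed by CommonLib.
            foreach (Type type in assembly.GetExportedTypes())
            {
                if (typeof(IHostedApp).IsAssignableFrom(type) && !type.IsAbstract)
                {
                    var app = (IHostedApp)Activator.CreateInstance(type);
                    app.Start();
                }
            }
        }
    }
}

// Assumed contract each hosted app would implement.
public interface IHostedApp
{
    void Start();
}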

While WCF may not be as complete in Mono, Mono 2.6 provides everything required for Silverlight / Moonlight, so a WCF-based implementation should be perfectly feasible. As long as you don't try anything exotic (different transports, inspectors, etc.) it should be more than sufficient to provide an RPC stack that is reliable between Windows / Mono / etc.
The key difference between WCF and remoting is in usage - remoting is based around an object that pretends to be at the other end, whereas WCF is based around a service; the point being you should base your interactions around discrete methods (rather than accessing properties etc) - this also has the advantage of making it explicit when you are crossing the boundary.
Another option would be to write a very basic socket server; very lightweight, and you could use something like protobuf-net to provide a portable (cross-platform) serializer implementation (you shouldn't really trust BinaryFormatter between the two - it is... flakey).
In short - I wouldn't build around MarshalByRefObject at all; I would write a service layer, something like:
interface IMyService {
    void Method1();
    int Method2(string s);
}
and abstract these details away from the caller. If you end up using WCF, then that is all you need; and for existing remoting support I would write an IMyService implementation that encapsulates (privately) the whole MarshalByRefObject story. Ditto if I wrote a socket server.
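For example, here's a rough sketch of how the remoting details might be hidden behind that interface; the RemotingMyService / RemoteMyService names and the URL-based activation are illustrative assumptions, not a prescribed design:

using System;

// Sketch only: hides the remoting proxy behind the IMyService contract above.
// Callers never see MarshalByRefObject; swapping in a WCF or socket-based
// implementation later just means writing another IMyService class.
public class RemotingMyService : IMyService
{
    private readonly RemoteMyService _proxy;

    public RemotingMyService(string url)
    {
        // Activator.GetObject returns a transparent proxy to the remote object.
        _proxy = (RemoteMyService)Activator.GetObject(typeof(RemoteMyService), url);
    }

    public void Method1() { _proxy.Method1(); }
    public int Method2(string s) { return _proxy.Method2(s); }
}

// The remoting-specific type stays a private implementation detail.
public class RemoteMyService : MarshalByRefObject
{
    public void Method1() { /* ... */ }
    public int Method2(string s) { return s == null ? 0 : s.Length; }
}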

I'm not sure that .NET Remoting is obsoleted by WCF. I think they have somewhat different use cases; WCF (deliberately) has no concept of "marshal by reference" because it's designed for distributed and (relatively) loosely coupled apps that might need to avoid chatty protocols due to latency etc. If your components are naturally tightly coupled, latency will be low, performance needs to be high, preserving rich .NET types is important, and so on, then Remoting may still be a good fit. Anyway, I wouldn't worry too much about the "legacy" label; "legacy" technologies, at least on Windows/.NET, have a way of staying around for quite some time if they get a decent amount of usage. Remoting still exists in the latest (4.0) version of .NET.
None of this is meant as a claim that Remoting necessarily is the best fit for your situation ...

Can C# Mono create tunnel interfaces multiplatform?

I haven't been able to find an answer on this for the past 2 days:
Can C# Mono, or a supported third-party framework, be used to create and manage a layer 2 tunnel over a virtual interface across the 3 major platforms (OSX, Windows, Linux)?
At a high level, an application like Hamachi or Tunngle would be a real-world example of what I'd like to achieve at a basic level.
The intention behind this question is whether it would be possible to write effective cross-platform code or whether I would have to resort to platform-specific code to implement the virtual interfaces.
That depends. Since L2TP is actually accomplished using UDP datagrams, there's no reason why you can't implement it in C#. However, integrating it with the operating system (as a virtual interface driver etc.) is more or less impossible - I'd expect the only real way would be to have a small native wrapper that calls the managed code that does most of the work.
In other words - you can write Hamachi in .NET just fine. Writing the Hamachi Network Adapter is the tricky part. Also, if you just want to add L2TP capabilities to your applications, there's no problem (instead of TcpClient/UdpClient etc., you'd just use your own class that communicates with your L2TP class). However, integrating it to the IP infrastructure does require you to write a driver, which is usually a native-only territory.
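To make that concrete, here's a minimal sketch of the "your own class instead of TcpClient/UdpClient" idea; the UdpTunnel name and the trivial framing are assumptions for illustration, not a real L2TP implementation (which would build proper L2TP headers):

using System;
using System.Net;
using System.Net.Sockets;

// Application traffic is handed to this class and sent as UDP datagrams
// to the tunnel peer, instead of going through the OS network stack.
public class UdpTunnel : IDisposable
{
    private readonly UdpClient _client;
    private readonly IPEndPoint _peer;

    public UdpTunnel(string peerAddress, int peerPort)
    {
        _client = new UdpClient();
        _peer = new IPEndPoint(IPAddress.Parse(peerAddress), peerPort);
    }

    public void Send(byte[] payload)
    {
        _client.Send(payload, payload.Length, _peer);
    }

    public byte[] Receive()
    {
        IPEndPoint from = null;
        return _client.Receive(ref from);
    }

    public void Dispose() { _client.Close(); }
}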
It might be that there are some ready-to-use solutions that provide the virtual network adapter and can call into your DLLs, but I'm not aware of any. A very unsafe way would also be to create hooks on Socket calls, but I'm not going to elaborate since that's extremely tricky and a bad idea overall :D
In other words, you have the option to use a hybrid approach - have the minimal native drivers for all the platforms you want to support, and let them call your managed library to do all the real work - the managed library can then be platform independent (as long as Mono is supported there :)).
Now, each OS probably has its own VPN client, which you could conceivably use from .NET. However, that also means that your application will have to be able to support each of these OSes and their different VPN clients separately - and that will be tricky.
If you want to go the way of writing your own network interface driver, a good way to start on Windows is the Driver Development Kit, which has some sample source code for NIC drivers. Windows uses NDIS (http://en.wikipedia.org/wiki/Network_Driver_Interface_Specification), which has some support even on the *nix family of OSes, so it might be possible to do this relatively easily - but don't forget, you're still writing a driver. Unless you have significant experience with C/C++/ASM and OS kernels and driver models, you're probably out of your league here. This is the stuff that leads to BSODs :)
There are also some related technologies like TDI (Transport Driver Interface) or WFP (Windows Filtering Platform) which could be used to do all this in user-space, rather than kernel/driver-space. However, those are Windows technologies. You'll have to find the equivalents on the other OSes you want to support, and you'll have to do some magic to make it all work in one cross-platform application. And while doing all this, you want to maintain performance - which requires very careful programming in .NET (it's easy to write reliable code in .NET, but it's harder to get cutting-edge performance; C/C++/ASM is quite the opposite - it's relatively easy to do things fast, but reliability suffers).

Succinct and light-weight API: REST+JSON in .NET

Summary: I need to know if there is an existing light-weight implementation of REST+JSON in the .NET world which does not use WCF. If not, I am looking for some folks who would be interested in starting a joint venture for an Open Source project.
I do not know about you, but I was a big fan of WCF when it came out, and I praised its design for its modularity and extensibility. However, as I used it more and more often, fundamental issues started to come to light, to the point that I now feel it has to be scrapped and redesigned. That may seem like a big statement, but I believe these are major issues:
First of all, WCF internally uses SOAP messages, which means that if the transport message is not SOAP, we incur the cost of transforming to and from SOAP for every call. This is expensive and time consuming.
Transforming the outgoing message requires "plugging in" a message inspector and "stealing" the message. As the name implies, this is an inspector (meant to be used for inspection and logging), so using it to change the message is frankly a hack.
It was designed around WSDL, and the world has changed so much since 2001; WCF was designed for WSDL and not REST, so implementing REST also requires stealing the message.
Channel stack is unnecessarily heavy.
The main stack is protocol agnostic. This is not an advantage, it is a fundamental flaw. As you know, access to a lot of protocol-level information was added later because it was impossible to implement some important user scenarios. For example, the client's IP address in TCP was not accessible and was added later (now accessible using OperationContext.Current.IncomingMessageProperties[RemoteEndpointMessageProperty.Name]).
Interoperability with other platforms can be an issue.
Now it seems that a lot of designs are moving towards the simplicity of JSON and REST. I just love their simplicity and I can see my washing machine consuming JSON in 5-10 years and hosting a REST service! I believe their implementation in .NET was a hack, and we seriously need a very lightweight and simple framework (because these are simple and lightweight) to host REST+JSON services inside and outside IIS. I hope such a framework exists, but if not, I am really eager to get something going with a number of like-minded folks.
So what do you think? Does such a framework exist? If not, is anyone interested?
MVC that spits out JSON instead of HTML seems like a possibility. You have the freedom to either use the DataContractJsonSerializer or JSON.Net to serialize your data contracts.
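For example, a rough sketch of an MVC action that returns JSON (the controller name and the payload shape are made up for illustration):

using System.Web.Mvc;

// Sketch of an ASP.NET MVC action that returns JSON instead of a view.
public class ProductsController : Controller
{
    public ActionResult Details(int id)
    {
        var product = new { Id = id, Name = "Sample", Price = 9.99m };

        // Json() uses the built-in serializer; JSON.Net could be swapped in
        // via a custom ActionResult if finer control is needed.
        return Json(product, JsonRequestBehavior.AllowGet);
    }
}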
Take a look at OpenRasta. It looks like it addresses many of your concerns.
If you really don't want to use IIS, you can implement your own HTTP listener process. This lets you write your own standalone application to respond to HTTP requests (which may be run as a service if you so desire) without any of the overhead of IIS, WCF or any other container process framework. Your process would live on top of the HTTP.sys functionality exposed by Windows, and exposed by the .NET Framework through the HttpListener class.
Take a look at http://msdn.microsoft.com/en-us/library/system.net.httplistener.aspx
Note that you will need to write your own infrastructure for matching incoming requests and dispatching them to corresponding handlers (the equivalent of ASP.Net MVC's UrlRoutingModule/RouteTable.Routes/MvcRouteHandler), and you will need to flow the HttpListenerContext everywhere in order to examine the incoming request and complete it. But this gives you the ultimate in flexibility in what you can do.
And it certainly performs - I have benchmarked a basic HttpListener implementation on a standard desktop-class machine at over 3000 requests served/second, so the framework itself will not hold you back.
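For reference, a bare-bones sketch of what such an HttpListener host might look like (the prefix, port and fixed JSON response are placeholders; the routing/dispatch infrastructure mentioned above is left out):

using System;
using System.Net;
using System.Text;

// Minimal HttpListener host: no IIS, no WCF. Answers every request with a
// fixed JSON body; a real framework would match the URL against a route table.
class Program
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");   // assumed prefix
        listener.Start();
        Console.WriteLine("Listening on http://localhost:8080/ ...");

        while (true)
        {
            HttpListenerContext context = listener.GetContext();   // blocks

            byte[] body = Encoding.UTF8.GetBytes("{\"status\":\"ok\"}");
            context.Response.ContentType = "application/json";
            context.Response.ContentLength64 = body.Length;
            context.Response.OutputStream.Write(body, 0, body.Length);
            context.Response.OutputStream.Close();
        }
    }
}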
There is MicroRest, an open source project I started a while back. Here's the blurb I wrote:
MicroRest is a tiny REST framework - 5 classes, around 500 lines of code. All output is JSON. It allows you to add REST capabilities to your ASP.NET applications without needing to go through the huge ugly mess of WCF rest (which doesn't provide 'clean' URLs in 3.5). It also allows you to use POCOs (and complex objects in some cases) inside your REST methods, where WCF restricts you to using ints and strings
Contributions are very welcome - it does rely on System.Web.Routing right now, so it needs Cassini/IISExpress as an embedded web server. I'm looking at writing a custom route parser so it can move to Kayak at some point.

How to Implement Loose Coupling with a SOA Architecture

I've been doing a lot of research lately about SOA and ESB's etc.
I'm working on redesigning some legacy systems at work now and would like to build them with more of a SOA architecture than they currently have. We use these services in about 5 of our websites, and one of the biggest problems we have right now with our legacy system is that almost every time we make bug fixes or updates we need to re-deploy our 5 websites, which can be quite a time-consuming process.
My goal is to make the interfaces between services loosely coupled so that changes can be made without having to re-deploy all the dependent services and websites.
I need the ability to extend an already existing service interface without breaking or updating any of its dependencies. Have any of you encountered this problem before? How did you solve it?
I suggest looking at a different style of services than maybe you've been doing so far. Consider services that collaborate with each other using events, rather than request/response. I've been using this approach for many years with clients in various verticals with a great deal of success. I've written up quite a bit about these topics in the past 4 years. Here's one place where you can get started:
http://www.udidahan.com/2006/08/28/podcast-business-and-autonomous-components-in-soa/
Hope that helps.
There are a couple of approaches you can take. Our SOA architecture involves XML messages sent to and from the services. One way we achieve what you describe is by avoiding the use of a data-binding library tied to our XML schema and instead using a generic XML parser to get just the data nodes you want, ignoring those you aren't interested in. This way the service can add new nodes to the message without breaking anyone currently using it. We typically only do this when we need just one or two pieces of information from a larger schema structure.
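As an illustration of that "take only what you need" style of parsing, here's a rough sketch (the element names are invented for the example):

using System.Xml;

// "Loose in" parsing: pull only the nodes we care about and ignore the rest,
// so new elements added to the message don't break this consumer.
public static class OrderMessageReader
{
    public static string ReadCustomerName(string xml)
    {
        var doc = new XmlDocument();
        doc.LoadXml(xml);

        // Grab just the one node we need; anything else in the message,
        // including elements added by a newer service version, is ignored.
        XmlNode node = doc.SelectSingleNode("//Order/Customer/Name");
        return node == null ? null : node.InnerText;
    }
}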
Alternatively, the other (preferred) solution we use is versioning. A version of a service adheres to a particular schema/interface. When the schema changes (e.g. the interface is extended or modified), we create a new version of the service. We may have 2 or 3 versions on the go at any one time. In time, we deprecate and then remove older versions, while eventually migrating dependent code onto newer versions. This way those dependent on the service can continue using the existing version while some particular dependency can 'upgrade' to the new one. Which versions of a service are called is defined in a configuration file for the dependent code. Note that it is not only the schema which gets versioned, but all of the underlying implementation code as well.
Hope this helps.
What you're asking isn't an easy topic. There are many ways you can go about making your Service Oriented Architecture loosely coupled.
I suggest checking out Thomas Erl's SOA book series. It explains everything pretty clearly and in-depth.
There are a few common practices to achieve loose coupling for services:
Use the doc/literal style of web services; think in terms of data (the wire format) instead of RPC; avoid schema-based data binding.
Abide strictly by the contract when sending data out, but make few assumptions when processing incoming data; XPath is a good tool for that (loose in, tight out).
Use an ESB and avoid any direct point-to-point communication between services.
Here is a rough checklist for evaluating whether your SOA implements loose coupling:

Location of the called system (its physical address): Does your application use direct URLs for accessing systems, or is the application decoupled via an abstraction layer that is responsible for maintaining connections between systems? The Services Registry and the service group paradigm used in SAP NetWeaver CE are good examples of what such an abstraction might look like. Using an enterprise service bus (ESB) is another example. The point is that the application should not hard code the physical address of the called system in order to truly be considered loosely coupled.

Number of receivers: Does the application specify which systems are the receivers of a service call? A loosely coupled composite will not specify particular systems but will leave the delivery of its messages to a service contract implementation layer. A tightly coupled application will explicitly call the receiving systems in order; a loosely coupled application simply makes calls to the service interface and allows the service contract implementation layer to take care of the details of delivering messages to the right systems.

Availability of systems: Does your application require that all the systems that you are connecting to be up and running all the time? Obviously, this is a very difficult requirement, especially if you want to connect to external systems that are not under your control. If the answer is that all systems must be running all the time, the application is tightly coupled in this regard.

Data format: Does the application reuse the data formats provided by the backend systems, or are you using a canonical data type system that is independent of the type systems used in the called applications? If you are reusing the data types of the backend systems, you probably have to struggle with data type conversions in your application, and this is not a very loosely coupled approach.

Response time: Does the application require called systems to respond within a certain timeframe, or is it acceptable for the application to receive an answer minutes, hours, or even days later?

remoting vs socket

What are the differences, advantages, and disadvantages between remoting and sockets... and which is the best way to implement server-client functionality?
Sockets are raw binary streams between two endpoints. You would need to wrap your own RPC (etc) layer to process the messages, and deal with a lot of infrastructure code. However, since they are so close to the metal this can be very, very efficient. It is not tied to any specific architecture, as long as both ends talk the same message format. Tools like protobuf-net can help you construct binary messages for streams (rather than rolling your own serialization code).
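To give an idea of what that looks like, here's a rough sketch of length-prefixed protobuf-net messages over a NetworkStream (the Ping contract type is invented for illustration):

using System.IO;
using System.Net.Sockets;
using ProtoBuf;

// A tiny message contract; protobuf-net serializes it compactly and portably.
[ProtoContract]
public class Ping
{
    [ProtoMember(1)] public int Sequence { get; set; }
    [ProtoMember(2)] public string Payload { get; set; }
}

public static class Wire
{
    public static void Send(NetworkStream stream, Ping message)
    {
        // Length prefix lets the other side read exactly one message.
        Serializer.SerializeWithLengthPrefix(stream, message, PrefixStyle.Base128);
    }

    public static Ping Receive(NetworkStream stream)
    {
        return Serializer.DeserializeWithLengthPrefix<Ping>(stream, PrefixStyle.Base128);
    }
}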
Remoting is a .NET-specific tool, and is very brittle with regard to versioning. I wouldn't recommend remoting for client/server - use things like WCF instead.
WCF is a more flexible comms stack - a lot of power and complexity, but arguably a bit of bloat too (xml, complex security etc). It is data-contract based, so roughly open (client/server can be different), but still a bit .NET focused.
edit For info, protobuf-net provides an RPC stack too; at the moment there is only an HTTP implementation provided, but at some point I'll add raw TCP/IP.
Direct socket manipulation can give you more power, flexibility, performance, and unfortunately complexity when compared to Remoting or WCF. However, if you need the benefits of low-level TCP/IP, such as non-blocking IO and custom protocols, tools such as Ragel and frameworks like Mina can ease the complexity burden. I recommend trying the higher-level APIs like WCF first and only use direct sockets if these don't meet your needs.
I second most of what Marc Gravell wrote - specifically, remoting and internal serialization are "easy" but are very easy to break, and often do not scale well to a public network (I'm not that familiar with .NET remoting, but I guess it needs a well-known registry service, which is often problematic when going out of the clean lab environment).
Implementing a standard or even roll-your-own RPC is harder but safer in the long run: you do not have problems with code revisions (or they are easier to control), scaling is fully controlled by your own code, and it's easy to develop components using various technologies.
There are many many many tools that help you easily build RPC mechanisms over sockets, but I really like to use plain old HTTP - get a simple HTTP embedded server running inside your server process and your client just needs to have an HTTP client to send messages. If you develop your own simple RESTful call semantics (instead of using some bloated message format like SOAP or XML-RPC), then there is really almost nothing to do :-)
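The client side of that can be as small as a WebClient wrapper; a rough sketch (the base URL and path are placeholders, not part of any particular framework):

using System.Net;

// Client side of the "plain old HTTP" approach: the server process embeds an
// HTTP listener, and the client just posts simple request bodies to it.
public class SimpleHttpRpcClient
{
    private readonly string _baseUrl;

    public SimpleHttpRpcClient(string baseUrl) { _baseUrl = baseUrl; }

    public string Call(string path, string body)
    {
        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "text/plain";
            return client.UploadString(_baseUrl + path, body);
        }
    }
}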
I would say that in choosing between sockets and remoting you should consider what type of application you are developing. Sockets are definitely for your own protocol implementations and low-level programming, and the only way to go if you have to communicate with other TCP/IP applications. Remoting is a preferred way to develop new .NET communication applications, where you don't need to come down to the TCP/IP stack and ensure your application talks to the others (probably legacy applications). If you can go .NET-only, it's better to choose .NET 3.5 and the WCF framework instead of .NET 2.0 remoting; the latter is a dead and unsupported technology.

Java and C# interoperability

I have two programs. One is in C# and another one in Java.
Those programs will, most probably, always run on the same machine.
What would be the best way to let them talk to each other?
So, to clarify the problem:
This is a personal project (so professional/costly libraries are a no go).
The message volume is low, there will be about 1 to 2 messages per second.
The messages are small, a few primitive types should do the trick.
I would like to keep the complexity low.
The Java application is deployed as a single jar as a plugin for another application. So the fewer external libraries I have to merge, the better.
I have total control over the C# application.
As said earlier, both application have to run on the same computer.
Right now, my solution would be to use sockets with some sort of csv-like format.
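As a rough sketch of that plan from the C# side (the port and field layout are placeholders; the Java side would listen on the same port and read lines with a BufferedReader):

using System.IO;
using System.Net.Sockets;

// Sockets + CSV-ish lines: one small message per line over a TCP connection.
public static class CsvSocketSender
{
    public static void Send(string host, int port, int id, string name, double value)
    {
        using (var client = new TcpClient(host, port))
        using (var writer = new StreamWriter(client.GetStream()))
        {
            // One message per line: id,name,value
            writer.WriteLine("{0},{1},{2}", id, name, value);
            writer.Flush();
        }
    }
}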
I am the author of jni4net, an open source interprocess bridge between the JVM and CLR. It's built on top of JNI and P/Invoke. No C/C++ code needed. I hope it will help you.
Kyle has the right approach in asking about the interaction. There is no "correct" answer without knowing what the usage patterns are likely to be.
Any architectural decision -- especially at this level -- is a trade-off.
You must ask yourself:
What kind of messages need to be passed between the systems?
What types of data need to be shared?
Is there an important requirement to support complex model objects or will primitives + arrays do?
What is the volume of the data?
How frequently will the interactions occur?
What is the acceptable communication latency?
Until you have an understanding of the answers, or potential answers, to those questions, it will be difficult to choose an implementation architecture. Once we know which factors are important, it will be far easier to choose the more suitable implementation candidates that reflect the requirements of the running system.
I've heard good things about IKVM, the JVM that's made with .NET.
Ice from ZeroC is a really high-performance "enterprisey" interop layer that supports Java and .NET amongst others. I think of it as an updated Corba - it even has its own object-oriented interface definition language called Slice (like Corba's IDL, but actually quite readable).
The feature set is extensive, with far more on offer than web services, but clearly it isn't an open standard, so not a decision to make lightly. The generated code it spits out is somewhat ugly too...
I realize you're talking about programs on the same machine, but I've always liked the idea of passing messages in XML over HTTP.
Your server could be a web server that's ready to accept an XML payload. Your client can send HTTP messages with XML in the body, and receive an HTTP response with XML in it.
One reason I like this is that HTTP is such a widely used protocol that it's easy to accept or create HTTP POST or GET requests in any language (in the event that you decide to change either the client or server language in the future). HTTP and XML have been around for a while, so I think they're here to stay.
Another reason I like it is that your server could be used by other clients, too, as long as they know HTTP and XML.
I used JNBridge (http://www.jnbridge.com/jnbpro.htm) on a relatively simple project where we had a .NET client app using a relatively significant jar file full of business object logic that we didn't want to port. It worked quite nicely, but I wouldn't say we fully exercised the capabilities of JNBridge.
I am a big fan of Thrift, an interoperability stack from Facebook. You said the code will probably run on the same machine, so it could be overkill, but you can still use it.
If they are separate programs running as independent applications, you may use sockets. I know it's a bit complex to define a communication protocol, but it'll be quite straightforward.
However if you have just two separate programs but want to run them as single application, then I guess IKVM is a better approach as suggested by marxidad.
It appears a very similar question has been asked before here on stack overflow (I was searching Google for java windows shared memory):
Efficient data transfer from Java to C++ on windows
From the answer, I would suggest you investigate:
"Your fastest solution will be memory mapping a shared segment of memory, and then implementing a ring-buffer or other message passing mechanism. In C++ this is straightforward, and in Java you have the FileChannel.map method which makes it possible."
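On the .NET side, the rough counterpart to Java's FileChannel.map is MemoryMappedFile (available from .NET 4). Here's a sketch, assuming both processes agree on the file and on a simple length-prefixed layout; the path and layout are invented for illustration:

using System.IO;
using System.IO.MemoryMappedFiles;
using System.Text;

// Writes a message into a shared memory-mapped file: a 4-byte length header
// followed by the UTF-8 payload. The Java process maps the same file and
// reads the length, then the bytes.
public static class SharedMemoryWriter
{
    public static void Write(string message)
    {
        byte[] bytes = Encoding.UTF8.GetBytes(message);
        using (var mmf = MemoryMappedFile.CreateFromFile(@"C:\temp\ipc.dat", FileMode.OpenOrCreate, "ipc", 4096))
        using (var accessor = mmf.CreateViewAccessor())
        {
            accessor.Write(0, bytes.Length);                 // length header
            accessor.WriteArray(4, bytes, 0, bytes.Length);  // payload
        }
    }
}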
