remoting vs socket - c#

What is the difference between remoting and sockets, what are their advantages and disadvantages, and which is the best way to implement server-client functionality?

Sockets are raw binary streams between two endpoints. You would need to wrap your own RPC (etc) layer to process the messages, and deal with a lot of infrastructure code. However, since they are so close to the metal this can be very, very efficient. It is not tied to any specific architecture, as long as both ends talk the same message format. Tools like protobuf-net can help you construct binary messages for streams (rather than rolling your own serialization code).
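For a feel of the infrastructure code involved, here is a minimal length-prefix framing sketch over a raw stream (a sketch only; the 4-byte prefix is just one common convention, not part of any particular library):

using System;
using System.IO;

static class Framing
{
    // Write one message as [4-byte length][payload].
    // BitConverter uses the host byte order (little-endian on typical .NET hosts),
    // so both ends must agree on it.
    public static void WriteMessage(Stream stream, byte[] payload)
    {
        byte[] lengthPrefix = BitConverter.GetBytes(payload.Length);
        stream.Write(lengthPrefix, 0, lengthPrefix.Length);
        stream.Write(payload, 0, payload.Length);
    }

    // Read one length-prefixed message; returns null when the peer closes the stream.
    public static byte[] ReadMessage(Stream stream)
    {
        byte[] lengthPrefix = new byte[4];
        if (!FillBuffer(stream, lengthPrefix)) return null;
        byte[] payload = new byte[BitConverter.ToInt32(lengthPrefix, 0)];
        if (!FillBuffer(stream, payload)) return null;
        return payload;
    }

    // TCP is a stream, so a single Read may return fewer bytes than requested.
    static bool FillBuffer(Stream stream, byte[] buffer)
    {
        int offset = 0;
        while (offset < buffer.Length)
        {
            int read = stream.Read(buffer, offset, buffer.Length - offset);
            if (read == 0) return false;
            offset += read;
        }
        return true;
    }
}

The payload bytes themselves could then come from protobuf-net or whatever serializer you prefer.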
Remoting is a .NET-specific tool, and is very brittle with regard to versioning. I wouldn't recommend remoting for client/server - use things like WCF instead.
WCF is a more flexible comms stack - a lot of power and complexity, but arguably a bit of bloat too (XML, complex security, etc.). It is data-contract based, so roughly open (client and server can be different), but still somewhat .NET-focused.
Edit: For info, protobuf-net provides an RPC stack too; at the moment only an HTTP implementation is provided, but at some point I'll add raw TCP/IP.

Direct socket manipulation can give you more power, flexibility, and performance than Remoting or WCF - and, unfortunately, more complexity. However, if you need the benefits of low-level TCP/IP, such as non-blocking IO and custom protocols, tools such as Ragel and frameworks like Mina can ease the complexity burden. I recommend trying the higher-level APIs like WCF first and only using direct sockets if they don't meet your needs.

I second most of what Marc Gravell wrote - specifically, remoting and internal serialization are "easy" but very easy to break, and often do not scale well to a public network (I'm not that familiar with .NET remoting, but I guess it needs a well-known registry service, which is often problematic once you leave a clean lab environment).
Implementing a standard or even a roll-your-own RPC is harder but safer in the long run: you do not have problems with code revisions (or they are easier to control), scaling is fully controlled by your own code, and it's easy to develop components using various technologies.
There are many, many tools that help you easily build RPC mechanisms over sockets, but I really like to use plain old HTTP - run a simple embedded HTTP server inside your server process, and your client just needs an HTTP client to send messages. If you develop your own simple RESTful call semantics (instead of using a bloated message format like SOAP or XML-RPC), then there is really almost nothing left to do :-)
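As a rough sketch of that embedded-HTTP idea in C# (assuming HttpListener is available to you; the URL prefix and the echo handler are made up for illustration):

using System;
using System.IO;
using System.Net;
using System.Text;

class EmbeddedHttpServer
{
    public static void Run()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/api/"); // illustrative prefix
        listener.Start();

        while (true)
        {
            HttpListenerContext context = listener.GetContext(); // blocks until a request arrives
            string body;
            using (var reader = new StreamReader(context.Request.InputStream, Encoding.UTF8))
                body = reader.ReadToEnd();

            // Dispatch on context.Request.HttpMethod and Url here; this sketch just echoes.
            byte[] response = Encoding.UTF8.GetBytes("received: " + body);
            context.Response.ContentLength64 = response.Length;
            context.Response.OutputStream.Write(response, 0, response.Length);
            context.Response.OutputStream.Close();
        }
    }
}

The client side then only needs WebClient or HttpWebRequest to talk to it.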

I would say that when choosing between sockets and remoting, you should consider what type of application you are developing. Sockets are definitely for your own protocol implementations and low-level programming, and the only way to go if you have to communicate with other TCP/IP applications. Remoting is the preferred way to develop new .NET-to-.NET communication applications where you don't need to go down to the TCP/IP stack or make your application talk to others (probably legacy applications). If you can stay entirely within .NET, it's better to choose .NET 3.5 and the WCF framework over .NET 2.0 remoting; the latter is a dead and unsupported technology.

Related

Which technology to choose for PubSub between java service and C# client

Which technology would you suggest for PubSub between a Java service and a C# desktop client?
What do you think about CometD? Is there a nice .NET API for it?
The server and client will run within the same organization, so we can use different protocols.
Is CometD the right choice at all, or would it be better to use TCP instead of HTTP?
Since your apps run within the same organization, you will likely be able to use more efficient transport than HTTP, or even TCP, depending on your situation and requirements.
The Data Distribution Service (DDS) is a standard by the OMG based on Pub/Sub. Standardized language bindings are C, C++, Java and Ada, but C# and others are available as well. Different languages as well as operating systems can be mixed in your system. Structured datatypes to be distributed are specified in a language-neutral format (per the standard, a subset of OMG IDL), which is then translated into language-specific interfaces and datatypes to be used by your applications.
It can use different transport layers, like UDP or TCP. Although I cannot determine from your brief description whether DDS would be your best choice, I think it is worth investigating. See this Wikipedia entry for a very brief introduction and list of references.
Warning: I have only used CometD (not Atmosphere or any non-Java solutions).
I like CometD as it was very fast to get started with and the documentation was good. Also, the JavaScript API worked without any issues.
The Bayeux specification has a C# implementation too: https://github.com/Oyatel/CometD.NET
http://bugs.cometd.org/browse/COMETD-23
You might want to look at messaging protocols such as AMQP and STOMP. There's decent support for both protocols in both Java and .NET, and you can pick from message brokers such as RabbitMQ, ActiveMQ, and others.

Should I use WCF to implement a given binary network protocol?

I have a client device (a POS handheld) which communicates via TCP/IP or RS232 with its server. The protocol is a given binary format, which I cannot change. I have to implement a server for that device. My impression is that WCF would be a better choice than implementing anything by hand, but because it would take quite some time to give it a try, I would like to ask for advice on whether it's a good idea and whether WCF can be fine-tuned to that level of detail.
I found some questions which are similar to mine, but in those cases the OP always had full control over client and server. That's not the case for my scenario.
If WCF is a good idea - which I assume - some starting points would be much appreciated. Most documentation focuses on SOAP, REST, ... and not on the lower levels I would have to work at.
Having worked with WCF for many years (and liking it), I don't think it's the best option for your needs. As Phil mentioned, its sweet spot is around web services, not low-level communication. To implement that in WCF, you'd need to write a custom transport, which, like almost all of the low-level channel programming in WCF, involves a lot of code. This transport would need to use sockets to understand the device protocol, and you'd need to somehow convert the messages from the protocol into WCF messages.
If the protocol is simple, I think a "pure" socket-based implementation would be the best way to go. The socket handling code (to communicate with the device) would be needed in a WCF solution anyway, but you can create your own message types instead of having to conform to the (rather SOAP-friendly) message model used by WCF.
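A bare-bones shape of that socket approach might look like the following (a sketch under assumptions: the thread-per-device model and buffer size are arbitrary, and the actual parsing depends entirely on the device's given protocol):

using System.Net;
using System.Net.Sockets;
using System.Threading;

class DeviceServer
{
    public static void Listen(int port)
    {
        var listener = new TcpListener(IPAddress.Any, port);
        listener.Start();
        while (true)
        {
            TcpClient device = listener.AcceptTcpClient();
            // One thread per handheld keeps the sketch simple; real code might prefer async IO.
            new Thread(() => HandleDevice(device)).Start();
        }
    }

    static void HandleDevice(TcpClient device)
    {
        using (device)
        using (NetworkStream stream = device.GetStream())
        {
            var buffer = new byte[4096];
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Parse the 'read' bytes according to the device's binary protocol here,
                // then write the protocol's response bytes back with stream.Write(...).
            }
        }
    }
}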
One advantage of going all the way and implementing a custom WCF transport that "talks" that protocol would come if you were to expose it to many different people who are already used to the WCF way of implementing services - you'd have to bear the initial (very high) cost of writing a WCF transport, but later on people could write services for that device using the nice contract model that WCF provides.
WCF has quite a learning curve as is, and if you need to be customizing lots of very low-level things, the curve will be steeper.
Also, the reason WCF was created was to allow the developer to not worry about lower level implementation details. It seems like you want the best of both worlds, which means you will probably be spending most of your time fighting WCF to get it to work how you want.
Disclaimer: While I have a basic understanding of WCF, I am not an expert and I could be wrong.

Object serialization and networking in C#

I'm working on a simple network project, and would like to transfer objects directly over a TcpListener/TcpClient connection. I would like to avoid the WCF overhead, and just have a simple way of serializing the object on the way out, sending it over the network, and finally restoring it to the original on the other end.
Thanks
Remoting is out of favor now that there is WCF. WCF is highly optimized for performance and will win over remoting in most cases. See http://msdn.microsoft.com/en-us/library/bb310550.aspx. You don't mention whether you are worried about runtime overhead or the overhead of learning how to use WCF. That being said, you can reduce the runtime overhead by using the binary TCP transport instead of the HTTP one. It works well, though HTTP (SOAP) is, of course, highly popular now. Your service can support multiple transports (i.e., TCP and HTTP) to work well with .NET clients (TCP transport) and other standards-compliant clients (HTTP SOAP transport).
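To illustrate the multiple-transports point, a self-hosted WCF service can simply register one endpoint per binding (a sketch; the contract, service class, and addresses are invented for the example):

using System;
using System.ServiceModel;

[ServiceContract]
public interface ICalculator
{
    [OperationContract]
    int Add(int a, int b);
}

public class CalculatorService : ICalculator   // hypothetical implementation
{
    public int Add(int a, int b) { return a + b; }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(CalculatorService));

        // Binary TCP transport for .NET clients...
        host.AddServiceEndpoint(typeof(ICalculator),
            new NetTcpBinding(), "net.tcp://localhost:8731/calc");

        // ...and a SOAP/HTTP endpoint for standards-compliant clients.
        host.AddServiceEndpoint(typeof(ICalculator),
            new BasicHttpBinding(), "http://localhost:8732/calc");

        host.Open();
        Console.WriteLine("Service running; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}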
Look into .NET Remoting; it makes things easy!
And it's too big a topic to show a sample in a comment here =)
Read here: http://msdn.microsoft.com/en-us/library/kwdt6w2k(VS.71).aspx
Well, if you need ONLY serialization/deserialization behavior without involving WCF or Remoting, there are plenty of ways:
Standard serialization via SerializableAttribute and BinaryFormatter
XML Serialization
Json.NET
Google's Protocol Buffers via protobuf-net
All approaches have strengths and weaknesses, but I believe that for your needs #1 will be enough: it's reasonably compact and needs no external libraries.
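A minimal sketch of option 1 over a TCP connection (the ChatMessage type and the connection handling are invented for the example; both ends need to share the same assembly and version of the serialized type, which is exactly the versioning brittleness mentioned earlier):

using System;
using System.Net.Sockets;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class ChatMessage            // hypothetical payload type, shared by both ends
{
    public string From;
    public string Text;
}

static class Sender
{
    public static void Send(string host, int port, ChatMessage message)
    {
        using (var client = new TcpClient(host, port))
        using (NetworkStream stream = client.GetStream())
        {
            new BinaryFormatter().Serialize(stream, message);
        }
    }
}

static class Receiver
{
    // 'listener' is assumed to be a TcpListener that has already been started.
    public static ChatMessage ReceiveOne(TcpListener listener)
    {
        using (TcpClient client = listener.AcceptTcpClient())
        using (NetworkStream stream = client.GetStream())
        {
            return (ChatMessage)new BinaryFormatter().Deserialize(stream);
        }
    }
}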

Creative use of MarshalByRefObject

I've been banging my head trying to figure some things out. So, I'm looking for advice and research material (via links). Here's the scenario:
We have a library (say, CommonLib) that contains resources needed by several other applications (say, AppA, AppB, AppC, and so on...). Now, the way this currently works is: an AppA instance starts and checks to see if the specific port is available. If it isn't, it kicks CommonLib ("Hey, wake up") and the service is started. Then AppA is happy and off we go.
Now, I've done a lot of research on Remoting.Channels and I've come to the conclusion that I'm starting an application built on a technology that is considered 'legacy'. Well...I don't like that. Honestly, WCF is way more overhead than we require and not fully implemented in Mono. We are targeting multi-platform compatibility (Windows, Mono, Linux) so we are researching all options.
The idea of remoting started, in the first place, because we wanted CommonLib to be a guaranteed single instance (as I understand it, a singleton is pretty much only guaranteed to be a singleton within a given AppDomain - feel free to correct me if I'm wrong). Anyway, I realized the power of remoting and decided to begin some experimental implementation. I have been successful in my initial use of the MarshalByRefObject. But, I'm concerned about the continued implementation of this legacy technology.
So, with all this...I am considering how I can implement CommonLib (as a host application) and, without remoting, implement MarshalByRefObject through Stream, standard TCP Socket, or some other way. What I'm thinking is, instead of instancing AppA to get CommonLib running, just implement CommonLib as the base app. Then, you select what app (really just a 'hosted' .dll) you want instanced within CommonLib. CommonLib would then load that .dll into the CommonLib framework along with whatever custom controls that hosted app uses. Along with this idea, I'd forego the requirement (for now) that CommonLib must be a genuine singleton.
So...that is a detail of our scenario. Again, my question is really 2 parts: (a) What technology(ies) should I research, and (b) Do I need to be concerned with the legacy status of the remoting technology?
Any other advice, comments, or questions are more than welcome.
UPDATE 1: I'm starting off with this snippet. This will allow me to load a file (or script) with a list of apps (or plug-ins) that have been installed. I can create this file in XML or binary format. When a new app is installed, the file & path can be added. Hmmm...I don't necessarily need to use MarshalByRefObject.
While WCF may not be as complete in Mono, Mono 2.6 provides everything required for Silverlight/Moonlight, so a WCF-based implementation should be perfectly feasible. As long as you don't try anything exotic (different transports, inspectors, etc.) it should be more than sufficient to provide an RPC stack that is reliable between Windows / Mono / etc.
The key difference between WCF and remoting is in usage - remoting is based around an object that pretends to be at the other end, whereas WCF is based around a service; the point being you should base your interactions around discrete methods (rather than accessing properties etc) - this also has the advantage of helping make it explicit when you are crossing the boundary.
Another option would be to write a very basic socket server; very lightweight, and you could use something like protobuf-net to provide a portable (cross-platform) serializer implementation (you shouldn't really trust BinaryFormatter between the two - it is... flaky).
In short - I wouldn't build around MarshalByRefObject at all; I would write a service layer, something like:
interface IMyService {
    void Method1();
    int Method2(string s);
}
and abstract these details away from the caller. If you end up using WCF, then that is all you need; and for existing remoting support I would write an IMyService implementation that encapsulates (privately) the whole MarshalByRefObject story. Ditto if I wrote a socket server.
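For instance (a sketch, not Marc's actual code; the URL and class names are illustrative), the remoting flavour could keep MarshalByRefObject as a private detail on both sides of the contract:

using System;

// Server side: the remoted object itself never leaks outside this class.
// (The host still registers a TcpChannel and the well-known service type as usual.)
public class MyRemoteService : MarshalByRefObject, IMyService
{
    public void Method1() { /* ... */ }
    public int Method2(string s) { return s.Length; }
}

// Client side: callers only ever see IMyService, so the transport can later be
// swapped for WCF or a socket layer without touching them.
public class RemotingMyServiceClient : IMyService
{
    private readonly IMyService proxy;

    public RemotingMyServiceClient(string url)   // e.g. "tcp://localhost:8085/MyService"
    {
        proxy = (IMyService)Activator.GetObject(typeof(IMyService), url);
    }

    public void Method1() { proxy.Method1(); }
    public int Method2(string s) { return proxy.Method2(s); }
}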
I'm not sure that .NET Remoting is obsoleted by WCF. I think they have somewhat different use cases; WCF (deliberately) has no concept of "marshal by reference" because it's designed for distributed and (relatively) loosely coupled apps that might need to avoid chatty protocols due to latency etc. If your components are naturally tightly coupled, latency will be low but performance needs to be high, preserving rich .NET types is important, etc., then Remoting may still be a good fit. Anyway, I wouldn't worry about it being "legacy"; "legacy" technologies, at least on Windows/.NET, have a way of staying around for quite some time if they get a decent amount of usage. Remoting still exists in the latest (4.0) version of .NET.
None of this is meant as a claim that Remoting necessarily is the best fit for your situation ...

What is the best way for a C# app to communicate with a Unix C++ app

The ways I can think of:
Web service or soap
Socket
Database table
shared file
Any concise example you know of for a web service?
Web services or SOAP would be fairly easy; however, if the C++ application (or the C# application) isn't naturally a web server, it may be easier to just use socket programming directly.
Sockets are fairly easy to use from both C# and C++. They give you complete control over the type of data transmitted, at the cost of potentially a little more work in the handling.
The biggest issues to watch for, if you use sockets directly, are probably the endianness of binary data and the encoding of text data. Otherwise, it's very easy.
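The usual trick on the .NET side for the endianness part is to convert integers to network byte order explicitly and pick an explicit text encoding (a sketch; the two-field record is made up, and it assumes the common little-endian .NET host):

using System;
using System.IO;
using System.Net;
using System.Text;

static class WireFormat
{
    // Write an int id and a UTF-8 string with lengths in network (big-endian) byte order,
    // so the C++ side can read them with ntohl() and a known encoding.
    public static void Write(Stream stream, int id, string name)
    {
        var writer = new BinaryWriter(stream);
        writer.Write(IPAddress.HostToNetworkOrder(id));

        byte[] nameBytes = Encoding.UTF8.GetBytes(name);
        writer.Write(IPAddress.HostToNetworkOrder(nameBytes.Length));
        writer.Write(nameBytes);
        writer.Flush();
    }

    public static void Read(Stream stream, out int id, out string name)
    {
        var reader = new BinaryReader(stream);
        id = IPAddress.NetworkToHostOrder(reader.ReadInt32());
        int length = IPAddress.NetworkToHostOrder(reader.ReadInt32());
        name = Encoding.UTF8.GetString(reader.ReadBytes(length));
    }
}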
Since you are already aware of the Web service and socket approaches, I'll mention some other options. If you like simplicity, check out XML-RPC. This is what SOAP was before large standards committees and corporate interests began to control the specification. You can find implementations of XML-RPC for just about every major programming language out there. Hessian is an interesting binary protocol that has many fans and supports just about every major language as well. Protocol Buffers is popular within Google. The official version from Google does not support C#; however, the two highest-rep users of SO do provide ports of protobuf for the .NET space.
I will probably be ridiculed for this, but also take a look at CORBA. It's not in vogue these days, but has many substantial technical creds, especially if one end of the communication is C++. IMHO, it's WS-* with OO support and no angle brackets required. For interop, I think it still should have a seat at the table. When engaged in C++ development, I found OmniOrb to be quite effective and efficient. Take a look at this SO Question for some pointers concerning using CORBA in .Net.
Sockets are easiest, and I would always go for them first. If a database is an option, that's also trivial, but it really depends: for queued events it would make sense, but for request/response it's probably not so great.
You can use gSOAP to have a C/C++ program use a web service.
You can also call a CGI program that is written in C++.
I have written a server in C that communicated with a C# client, and the endianness can be a pain to deal with; web services are so much simpler.
Do you want the two to communicate with each other (for instance, over TCP, as many others have pointed out), or do you want to be able to translate objects between C# and C++? If the latter, check out Apache Thrift (http://incubator.apache.org/thrift/).
