I have read about numerous approaches to IPC between two C# applications, with their pros and cons, but I don't feel I have reached a satisfactory answer yet for my use case.
I have an object that already exists and that will change frequently (I am trying to attach my tool to a game and use it to debug elements created with the tool). As a result, I don't believe serialisation is appropriate, as I would essentially be serialising/deserialising the object 60 times a second for no good reason. That also seems to rule out pipes (or am I missing something here?).
As the game is running in Unity, I am limited to .NET 3.5 technologies, so I can't use the new .NET 4 shared-memory class (MemoryMappedFile).
So it seemed like .NET Remoting was the way to go. It is less than ideal: I have no need for network support, the object I want to share is in memory, and there is no real reason for the overhead of proxies and message passing to change it.
However, the tutorial which everyone links to doesn't seem to be good: the source code doesn't compile, and when I got it to compile, it crashed. The tutorial itself makes no reference to the Cache class, which seems central, and even with the source code I can't see how it would fit into my application. Is there a better resource? Is this really the best approach?
Finally, I am left with interoperating with C++ to use the unmanaged functionality for creating shared memory and moving the object into it. Before I go down that rabbit hole, I wanted to confirm there really isn't a better way.
Update - Some more information
At the moment I am just trying things out, so it is 2 console applications. However, in the end I will have one C#/WinForms application (this is .NET 4.0, if that helps) which I will connect to the Unity process (over which I obviously have no control). I have a DLL which is used by both the tool and Unity. I was going to add a class to it that would allow the tool to access the objects (e.g. if I could use pipes, Unity would call into this class to create the pipe, and the tool would then connect to it).
The objects themselves represent essentially a finite state machine whose description is loaded from an XML file. It would be possible to recreate the object from a very minimal amount of data. However, I would rather avoid hand coding a solution that uses some kind of event/message system to keep the objects in sync with regard to which state is active etc.
I think WCF with NetNamedPipeBinding would be an easier/better option.
Example here.
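For illustration, here is a minimal sketch of a named-pipe service and client; the contract, operation and endpoint names are invented for the example, not taken from the linked article:

    // Contract shared by both processes (e.g. in the common DLL).
    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IStateQuery
    {
        [OperationContract]
        string GetActiveState(string machineId); // hypothetical operation
    }

    // Host side (the process that owns the live objects).
    public class StateQuery : IStateQuery
    {
        public string GetActiveState(string machineId)
        {
            // Look up the live object here and report its current state.
            return "Idle";
        }
    }

    class HostProgram
    {
        static void Main()
        {
            using (var host = new ServiceHost(typeof(StateQuery),
                new Uri("net.pipe://localhost/fsm")))
            {
                host.AddServiceEndpoint(typeof(IStateQuery),
                    new NetNamedPipeBinding(), "query");
                host.Open();
                Console.ReadLine(); // keep the host alive
            }
        }
    }

    // Client side (the tool): create a proxy over the same pipe.
    class ClientProgram
    {
        static void Main()
        {
            var factory = new ChannelFactory<IStateQuery>(
                new NetNamedPipeBinding(),
                new EndpointAddress("net.pipe://localhost/fsm/query"));
            IStateQuery proxy = factory.CreateChannel();
            Console.WriteLine(proxy.GetActiveState("fsm1"));
        }
    }

Note that both ends need System.ServiceModel (.NET 3.0 or later).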
What kind of data are we talking about here? If it is symbols / metrics, it could be delimited and stored in a memory mapped file and shared.
Since you're on 3.5, you can't use the MemoryMappedFile class directly, but FileMap should work for you:
https://github.com/tomasr/filemap/tree/master
In my company we are using a solution that is a combination of XML object serialisation and the FileSystemWatcher. It is more or less the same as named pipes, but it is fast and it's working well.
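For what it's worth, a rough sketch of that pattern on the receiving side; the drop directory and message type are made up for the example:

    using System;
    using System.IO;
    using System.Xml.Serialization;

    public class StatusMessage // illustrative message type
    {
        public string State;
        public DateTime Timestamp;
    }

    class Receiver
    {
        static void Main()
        {
            var serializer = new XmlSerializer(typeof(StatusMessage));
            var watcher = new FileSystemWatcher(@"C:\ipc-drop", "*.xml");
            watcher.Created += delegate(object sender, FileSystemEventArgs e)
            {
                // The other process serializes a StatusMessage into the drop
                // folder; deserialize each new file as it appears. Real code
                // should retry, as the file may still be locked by the writer.
                using (var stream = File.OpenRead(e.FullPath))
                {
                    var msg = (StatusMessage)serializer.Deserialize(stream);
                    Console.WriteLine(msg.State);
                }
            };
            watcher.EnableRaisingEvents = true;
            Console.ReadLine();
        }
    }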
Related
I'm getting into socket programming in C#/.NET for a project I am writing.
The project will be a multi-client system in which client services monitor system resources and report back to a central server (or servers).
I was looking into Remoting, WCF, etc. to determine which would be best for me. I settled on socket programming because of a number of requirements:
Sockets are faster than the rest, with little overhead.
I can support more connections for the same server resources.
Sockets can, albeit with modifications, allow me to interact with UNIX-based systems as well.
I can implement encryption on the link myself in code, without having to rely on an SSL certificate.
I may be wrong in my thinking here; if I am, please do tell. Some suggest WCF as it is "easy to use" and does everything I want, but I believe it is that much slower because of the overhead.
My main issue is that while a client machine will not keep a connection open, there could be thousands, or tens of thousands, of client machines, and I have to assume the server will be hammered with connections, considering that the minimum time between connections per client may be as small as one minute.
Now to my problem at hand: how do I send multiple objects over the link, and more importantly, how do I determine what they are on the other side?
I'm assuming this is possible over one connection, as I have read a number of articles saying so, albeit they describe the methods differently, and so do their examples.
The issue is that I can't find any example that actually does this: no example shows how to send multiple objects and then how to determine what they are on the other side.
Can anyone help me here or point me to examples. I'm quite able to figure stuff out once I have a base to work from.
It feels like the classic case of premature optimization and reinventing the wheel. I might be mistaken, because I don't know what your performance requirements, development resources and time frame are. But I suspect that very smart people had the same problems before you, and they came up with a variety of solutions, including HTTP (SOAP, REST) and XMPP (if you want a stateful protocol). A lot can be done to improve performance even at the app level (minimizing the amount of data sent over the wire, caching, etc.) without the complexity overhead introduced by using sockets directly. You probably know all of this, but I highly recommend you evaluate your decision again.
As to the usual serialization-format suspects like XML and JSON, you might also want to look at Protocol Buffers:
protocol buffers is the name of the binary serialization format used by Google for much of their data communications. It is designed to be:
small in size - efficient data storage (far smaller than xml)
cheap to process - both at the client and server
platform independent - portable between different programming architectures
extensible - to add new data to old messages
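As a concrete example, here is roughly what sending one of these looks like with protobuf-net, a popular C# implementation; the message type is invented, and the length-prefix helpers shown are specific to that library:

    using System.IO;
    using ProtoBuf; // protobuf-net

    [ProtoContract]
    public class CpuMetric // illustrative message type
    {
        [ProtoMember(1)] public string Host;
        [ProtoMember(2)] public double Load;
    }

    static class MetricIo
    {
        // The length prefix lets many messages be streamed back-to-back
        // over a single connection.
        public static void Write(Stream stream, CpuMetric metric)
        {
            Serializer.SerializeWithLengthPrefix(stream, metric,
                PrefixStyle.Base128, 1);
        }

        public static CpuMetric Read(Stream stream)
        {
            return Serializer.DeserializeWithLengthPrefix<CpuMetric>(stream,
                PrefixStyle.Base128, 1);
        }
    }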
You might be trying to reinvent the wheel here. But since you have set your mind on using sockets (been there, done that, learned a lot), read these links on serialization:
How to deal with XML in C#
XML Serialization and Inherited Types
How do I serialize a C# anonymous type to a JSON string?
Parse JSON in C#
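Whichever serializer you choose, the usual answer to "how do I know what arrived?" is to frame each message: write a type tag and a length before the payload. A minimal sketch (the type ids and report names are invented for the example):

    using System;
    using System.IO;

    enum MessageType : byte { CpuReport = 1, DiskReport = 2 } // your own ids

    static class Framing
    {
        // Sender: tag + length + payload, repeated for each object.
        public static void WriteFrame(Stream stream, MessageType type, byte[] payload)
        {
            var writer = new BinaryWriter(stream);
            writer.Write((byte)type);      // what the object is
            writer.Write(payload.Length);  // how many bytes follow
            writer.Write(payload);         // the serialized object itself
            writer.Flush();
        }

        // Receiver: read the tag, then exactly the advertised number of
        // bytes, and dispatch on the tag to the right deserializer.
        public static void ReadFrame(Stream stream)
        {
            var reader = new BinaryReader(stream);
            var type = (MessageType)reader.ReadByte();
            int length = reader.ReadInt32();
            byte[] payload = reader.ReadBytes(length);
            switch (type)
            {
                case MessageType.CpuReport:  /* deserialize a CPU report */  break;
                case MessageType.DiskReport: /* deserialize a disk report */ break;
            }
        }
    }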
I am looking into the possibility of distributing a data structure across multiple machines. I would like it to run in a process on each machine and, using (multicast?), replicate a copy of the full data structure on all of the machines.
Does anyone have any experience in this that could point me in the right direction?
Distributed locking is hard -- and you might well need to lock, unless you're reading only. I suggest you take a look at a distributed caching framework like Microsoft's Velocity (which may be renamed as part of Azure now), or the free, open source and very good memcached.
There are other, pay-for options too -- notably GemFire and Coherence.
I'd give memcached a go; it works pretty well.
You can write your data into a central database; then each instance can access it. If you want to modify the data from an instance, you should implement a mechanism for locking the data in the database. Is this of any help?
Publish the structure through ØMQ using PUB/SUB sockets and then you can switch between TCP or IP multicast depending on your requirements and network quality.
If your data structure is sufficiently organised you should also be able to send updates to the structure without much issue.
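To give a flavour of this in C#, here is a minimal pub/sub sketch; it assumes the NetMQ port of ØMQ, and the endpoint and topic names are invented:

    using System;
    using NetMQ;
    using NetMQ.Sockets;

    class PubSubSketch
    {
        // Publisher: the process that owns the structure broadcasts updates.
        static void Publisher()
        {
            using (var pub = new PublisherSocket())
            {
                pub.Bind("tcp://*:5556");
                // Topic frame first, then the serialized update.
                pub.SendMoreFrame("structure").SendFrame("node 42 changed");
            }
        }

        // Subscriber: each machine keeps its replica current.
        static void Subscriber()
        {
            using (var sub = new SubscriberSocket())
            {
                sub.Connect("tcp://localhost:5556");
                sub.Subscribe("structure");
                string topic = sub.ReceiveFrameString();
                string update = sub.ReceiveFrameString();
                // Apply the update to the local replica here.
                Console.WriteLine("{0}: {1}", topic, update);
            }
        }
    }

Switching to multicast is then largely a matter of changing the transport in the endpoint address (ØMQ supports pgm/epgm transports for that).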
You can also look into Hazelcast, which is a Java-based solution.
Here is a direct link which talks about some of the internals: http://www.hazelcast.com/documentation.jsp#Internals
And there is already a suggestion to use memcached (or repcached, for replication), which should be easy to use as well.
To answer your question directly: you should probably learn about state machine replication and then look for implementations of either virtual synchrony or Paxos for your platform, to use as a building block.
Pragmatically, I would advise considering a coordination service such as ZooKeeper, that would save you a lot of trouble.
My friend and I are having a disagreement over an application development issue. It's a simple production management application.
According to my friend, the front end stores data in XML; a Java program will read the XML document, store it (at the back end), apply some business logic, and store the results in another XML document. A C# front end will then display the result. (He wants to use sockets for communicating the status of the XML.)
I think this is a bad idea. I suggested that whole application should either be written in C# or in Java.
Note: The application is standalone. It's not used over a network.
Have any of you tried this? Please share your thoughts :)
You're right, and your friend's idea is a bad one. In addition, from your question I see several troubled issues. I don't know where to begin, so I will just list them, in no particular order. But the ground rule for reading on is that you must agree that simpler is better; as Einstein said, "Things should be made as simple as possible, but not simpler" (or something like that, I don't remember the exact quote).
The notion of front end and back end do not really apply to a desktop application. Rather, you want to use the MVC pattern for the separation of concerns. Wikipedia might be a good place to start learning or reviewing.
Why use sockets (which are totally unnecessary) if you don't have to? This is the first reason it's a bad idea: you would not need sockets if everything were done in one language, running in the same process space (your app is a desktop or standalone app).
Similarly, why XML (again, unnecessary)? There is no need, because you can just pass Java or C# objects around. With XML, first there is the signal-to-noise issue, as tags are added to the true data. Then there is the time to parse and construct the XML, potentially additional libraries, and so on. That is all extra code introduced by your friend's approach.
These are the most obvious reasons. There are other reasons from a manager's or company's perspective:
To maintain this application, the manager or company needs to hire two different skill sets. This might not be an issue, as most programmers are multilingual anyway, but that's not always the case.
In terms of deployment, you now force your users to have both the JRE and the .NET Framework installed just to be able to run your application, and neither of those is exactly a small footprint.
Talk about making it complex. Use either Java or C#. One really isn't a backend for the other; they both "do the same things" as languages. The only difference is whether you want to leverage the power of .NET or the power of the immensely huge Java frameworks.
This is a desktop application that "is not used over a network"... I'm failing to see a need for a real "backend" at all.
Write the desktop application in a single language, and store your data in XML.
So your application is standalone and doesn't need to work over a network, but your friend insists on connecting the front and back ends of the app using sockets? Something is wrong with this setup; it seems more complicated than it needs to be.
It sounds to me like your friend wants to use Java because he understands Java's XML handling framework better. Personally, I think interop between Java and .NET is overkill. You'll save yourself a lot of man hours and frustration by writing the app in a single language of your choice.
What your friend is suggesting is to keep things modular. It doesn't really matter what language you use, but if you throw it all together in one big project, you might make it non-modular.
For that type of app I would use only one language.
It may be possible to leverage the power of a C# desktop app on the front end and use Java EJB plus a web service (JAX-WS) on the back end. C# apps can read the SOAP WSDL to create the stubs and interfaces needed to access a Java backend implemented with JAX-WS.
So my company stores a lot of data in a FoxPro database, and to get around the performance hit of touching it directly, I was thinking of messaging anything that can be done asynchronously, for a snappier user experience. I started looking at ActiveMQ, but I don't know how well C# will hook into it. I want to hear what all of you think.
Edit: It is going to be a web application. Anything touching this FoxPro is kind of slow (probably because the person who set it up 10 years ago messed it all to hell; some of the table files are incredibly large). We replicate the FoxPro to SQL nightly, and most of our data reads are OK being a day old, so we are focusing on the writes. Plus, the writes affect a critical part of the user experience (purchasing): we store the data in SQL and then just send a message to have it put into FoxPro when it can be. I wish we could just get rid of the FoxPro; unfortunately, the company doesn't want to abandon a very old piece of software they bought that depends on it.
ActiveMQ works well with C# using the Spring.NET integrations and NMS. A post with some links to get you started in that direction is here. Also consider using MSMQ (The System.Messaging namespace) or a .NET based asynchronous messaging solution, with some options here.
MSMQ (Microsoft Message Queueing) may be a great choice. It is part of the OS and present as an optional component (it can be installed via Add/Remove Programs / Windows Components), meaning it's free (as long as you have already paid for Windows, of course). MSMQ provides Win32/COM and System.Messaging APIs. The more modern Windows Communication Foundation (aka Indigo) queued channels also use MSMQ.
Note that MSMQ is not supported on Home SKUs of Windows (XP Home and Vista Home)
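A minimal System.Messaging sketch of the enqueue/dequeue round trip; the queue name is made up for the example:

    using System;
    using System.Messaging; // reference System.Messaging.dll

    class MsmqSketch
    {
        const string Path = @".\private$\foxpro-writes"; // illustrative name

        static void Main()
        {
            if (!MessageQueue.Exists(Path))
                MessageQueue.Create(Path);

            // Web app side: enqueue the write and return immediately.
            using (var queue = new MessageQueue(Path))
                queue.Send("purchase record 123", "purchase");

            // Worker side: drain the queue and apply the writes to FoxPro
            // at whatever pace it can sustain.
            using (var queue = new MessageQueue(Path))
            {
                queue.Formatter =
                    new XmlMessageFormatter(new[] { typeof(string) });
                Message msg = queue.Receive(); // blocks until one arrives
                Console.WriteLine((string)msg.Body);
            }
        }
    }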
It's worth mentioning that the ActiveMQ open source project defines a C# API for messaging called NMS, which allows you to develop against a single C#/.NET API that can then use various messaging back ends such as:
ActiveMQ
MSMQ
TibCo's EMS
any STOMP provider
any JMS provider via StompConnect
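A small sketch of what the NMS API looks like against ActiveMQ; the broker URI and queue name are placeholders:

    using System;
    using Apache.NMS;

    class NmsSketch
    {
        static void Main()
        {
            // The URI scheme selects the provider (ActiveMQ here).
            IConnectionFactory factory =
                new NMSConnectionFactory(new Uri("activemq:tcp://localhost:61616"));
            using (IConnection connection = factory.CreateConnection())
            using (ISession session =
                connection.CreateSession(AcknowledgementMode.AutoAcknowledge))
            {
                connection.Start();
                IDestination queue = session.GetQueue("example.queue");

                using (IMessageProducer producer = session.CreateProducer(queue))
                    producer.Send(session.CreateTextMessage("hello"));

                using (IMessageConsumer consumer = session.CreateConsumer(queue))
                {
                    var message = consumer.Receive() as ITextMessage;
                    Console.WriteLine(message.Text);
                }
            }
        }
    }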
You may want to look at MSMQ. It can be used by .NET and VFP, but you'll need to rewrite your code to use it. Here's an article that tells you how to use MSMQ from VFP: https://learn.microsoft.com/en-us/previous-versions/visualstudio/foxpro/ms917361(v=msdn.10)
Sorry if this isn't what you are asking for...
Have you considered some sort of cache behind the scenes that acts a bit like the "bucket system" used with asynchronous sockets in C/C++ with Winsock? Basically, it works by accepting requests, sending an immediate response back to the web app, and, when it finally gets around to finding your record, updating it in the app via AJAX or any other technology of your choice. Since I'm not a C# programmer I can't provide a specific example. Hope this helps!
Does the Fox app use .CDX indexes? If so, you might be able to improve performance by adding indexes without needing to change any program code. If it uses .IDX indexes, though, the change would have to be done in the actual app.
As the problem is with writes, I would look more towards removing any unneeded indexes on the tables. As is common in RDBMSs, every index on a FoxPro table slows down a write operation, because the indexes need to be updated; and since you aren't reading directly from (or presumably directly querying) the table, you shouldn't need very many indexes. You might also want to look at any triggers or field rules on the tables, as they may be slowing down the write operation. Be sure your referential integrity is still preserved, though.
I have two programs. One is in C# and another one in Java.
Those programs will, most probably, always run on the same machine.
What would be the best way to let them talk to each other?
So, to clarify the problem:
This is a personal project (so professional/costly libraries are a no go).
The message volume is low; there will be about 1 to 2 messages per second.
The messages are small; a few primitive types should do the trick.
I would like to keep the complexity low.
The java application is deployed as a single jar as a plugin for another application. So the less external libraries I have to merge, the better.
I have total control over the C# application.
As said earlier, both application have to run on the same computer.
Right now, my solution would be to use sockets with some sort of CSV-like format.
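For reference, a minimal sketch of the C# end of that approach; the port and record format are invented, and the Java side would simply open a Socket and write matching newline-terminated lines:

    using System;
    using System.IO;
    using System.Net;
    using System.Net.Sockets;

    // C# side: listens, reads one newline-terminated CSV record at a time.
    class CsvListener
    {
        static void Main()
        {
            var listener = new TcpListener(IPAddress.Loopback, 9000);
            listener.Start();
            using (TcpClient client = listener.AcceptTcpClient())
            using (var reader = new StreamReader(client.GetStream()))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    string[] fields = line.Split(','); // e.g. "tick,42,3.14"
                    Console.WriteLine("{0} -> {1} fields", line, fields.Length);
                }
            }
        }
    }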
I am the author of jni4net, an open source interprocess bridge between the JVM and CLR. It's built on top of JNI and P/Invoke. No C/C++ code is needed. I hope it will help you.
Kyle has the right approach in asking about the interaction. There is no "correct" answer without knowing what the usage patterns are likely to be.
Any architectural decision -- especially at this level -- is a trade-off.
You must ask yourself:
What kind of messages need to be passed between the systems?
What types of data need to be shared?
Is there an important requirement to support complex model objects or will primitives + arrays do?
What is the volume of the data?
How frequently will the interactions occur?
What is the acceptable communication latency?
Until you have an understanding of the answers, or potential answers, to those questions, it will be difficult to choose an implementation architecture. Once we know which factors are important, it will be far easier to choose the more suitable implementation candidates that reflect the requirements of the running system.
I've heard good things about IKVM, the JVM implementation that runs on .NET.
Ice from ZeroC is a really high performance "enterprisey" interop layer that supports Java and .net amongst others. I think of it as an updated Corba - it even has its own object oriented interface definition language called Slice (like Corba's IDL, but actually quite readable).
The feature set is extensive, with far more on offer than web services, but clearly it isn't an open standard, so it's not a decision to make lightly. The generated code it spits out is somewhat ugly, too...
I realize you're talking about programs on the same machine, but I've always liked the idea of passing messages in XML over HTTP.
Your server could be a web server that's ready to accept an XML payload. Your client can send HTTP messages with XML in the body, and receive an HTTP response with XML in it.
One reason I like this is that HTTP is such a widely used protocol that it's easy to accept or create HTTP POST or GET requests in any language (in the event that you decide to change either the client or server language in the future). HTTP and XML have been around for a while, so I think they're here to stay.
Another reason I like it is that your server could be used by other clients, too, as long as they know HTTP and XML.
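A tiny sketch of such a server on the C# side using HttpListener (the URL prefix and reply are invented); the Java client could simply POST the XML with HttpURLConnection:

    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    class XmlHttpServer
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/api/");
            listener.Start();

            HttpListenerContext context = listener.GetContext(); // blocks
            string requestXml;
            using (var reader = new StreamReader(context.Request.InputStream))
                requestXml = reader.ReadToEnd();

            // Echo a trivial XML response; a real server would parse the
            // request body and build a meaningful reply.
            byte[] reply = Encoding.UTF8.GetBytes("<response><ok/></response>");
            context.Response.ContentType = "text/xml";
            context.Response.OutputStream.Write(reply, 0, reply.Length);
            context.Response.Close();
        }
    }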
I used JNBridge (http://www.jnbridge.com/jnbpro.htm) on a relatively simple project where we had a .NET client app using a relatively significant jar file full of business object logic that we didn't want to port. It worked quite nicely, but I wouldn't say we fully exercised the capabilities of JNBridge.
I am a big fan of Thrift, an interoperability stack from Facebook. You said the code will probably run on the same machine, so it could be overkill, but you can still use it.
If they are separate programs running as independent applications, you may use sockets. I know it's a bit complex to define a communication protocol, but it'll be quite straightforward.
However, if you have two separate programs that you want to run as a single application, then I guess IKVM is a better approach, as suggested by marxidad.
It appears a very similar question has been asked before here on Stack Overflow (I was searching Google for java windows shared memory):
Efficient data transfer from Java to C++ on windows
From the answer, I would suggest you investigate:
"Your fastest solution will be memory
mapping a shared segment of memory,
and them implementing a ring-buffer or
other message passing mechanism. In
C++ this is straight forward, and in
Java you have the FileChannel.map
method which makes it possible."
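Since both processes are on the same machine, a named Win32 file mapping is one way to get at that from C#, even without the .NET 4 classes. A minimal P/Invoke sketch, with the mapping name and size invented for the example (no error handling or synchronization shown):

    using System;
    using System.Runtime.InteropServices;

    // Named shared memory via Win32, usable on .NET 3.5; the JVM or a
    // C++ process can open the same mapping by name on its side.
    class SharedMemorySketch
    {
        [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
        static extern IntPtr CreateFileMapping(IntPtr hFile, IntPtr lpAttributes,
            uint flProtect, uint dwMaxSizeHigh, uint dwMaxSizeLow, string lpName);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr MapViewOfFile(IntPtr hMapping, uint dwDesiredAccess,
            uint dwOffsetHigh, uint dwOffsetLow, UIntPtr dwNumberOfBytesToMap);

        static readonly IntPtr INVALID_HANDLE_VALUE = new IntPtr(-1);
        const uint PAGE_READWRITE = 0x04;
        const uint FILE_MAP_ALL_ACCESS = 0x000F001F;

        static void Main()
        {
            // Backed by the page file; both processes open "Local\Example".
            IntPtr mapping = CreateFileMapping(INVALID_HANDLE_VALUE, IntPtr.Zero,
                PAGE_READWRITE, 0, 4096, @"Local\Example");
            IntPtr view = MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS,
                0, 0, (UIntPtr)4096);

            Marshal.WriteInt32(view, 42);                // writer side
            Console.WriteLine(Marshal.ReadInt32(view));  // reader sees 42
        }
    }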