Distributed Data Structure across multiple machines with multicast - C#

I am looking into the possibility of distributing a data structure across multiple machines. I would like it to run in a process on each machine and, using (multicast?), replicate a copy of the full data structure on all of the machines.
Does anyone have any experience in this that could point me in the right direction?

Distributed locking is hard -- and you might well need to lock, unless you're reading only. I suggest you take a look at a distributed caching framework like Microsoft's Velocity (which may be renamed as part of Azure now), or the free, open source and very good memcached.
There are other, pay-for options too -- notably GemFire and Coherence.
I'd give memcached a go; it works pretty well.
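For what it's worth, here is a minimal sketch of what reading and writing through memcached looks like from C#, using the Enyim.Caching client (one common .NET client; the key and value here are placeholders, and the server list comes from app.config):

    using Enyim.Caching;
    using Enyim.Caching.Memcached;

    // MemcachedClient picks up its server list from app.config by default.
    var client = new MemcachedClient();

    // One machine writes through the cache...
    client.Store(StoreMode.Set, "structure:node:42", "serialized node data");

    // ...and any machine can read the same entry back.
    var node = client.Get<string>("structure:node:42");

One caveat: memcached partitions keys across servers rather than replicating them, so this gives you a shared copy of the structure, not a full local replica on every machine.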

You can write your data into a central database; then each instance can access it. If you want to modify the data from an instance, you should implement a mechanism for locking the data in the database. Is this of any help?
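To illustrate the locking part, a common approach is optimistic locking with a version column; a rough sketch over plain ADO.NET (the table and column names are made up):

    using System.Data.SqlClient;

    // The UPDATE only succeeds if nobody bumped the version since we read the row.
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        var cmd = new SqlCommand(
            "UPDATE SharedState SET Payload = @payload, Version = Version + 1 " +
            "WHERE Id = @id AND Version = @version", conn);
        cmd.Parameters.AddWithValue("@payload", newPayload);
        cmd.Parameters.AddWithValue("@id", id);
        cmd.Parameters.AddWithValue("@version", versionReadEarlier);

        if (cmd.ExecuteNonQuery() == 0)
        {
            // Someone else modified the row first: re-read and retry.
        }
    }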

Publish the structure through ØMQ using PUB/SUB sockets and then you can switch between TCP or IP multicast depending on your requirements and network quality.
If your data structure is sufficiently organised you should also be able to send updates to the structure without much issue.
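As a rough sketch of the PUB/SUB side in C#, here it is with NetMQ, a native C# port of ØMQ (endpoint addresses and topic names are placeholders):

    using NetMQ;
    using NetMQ.Sockets;

    // Publisher: broadcast every change to the structure.
    using (var pub = new PublisherSocket())
    {
        pub.Bind("tcp://*:5556");            // swap in a pgm:// endpoint for multicast, where supported
        pub.SendMoreFrame("dict-update")     // topic frame
           .SendFrame("key=value");          // payload frame
    }

    // Subscriber: each machine applies updates to its local replica.
    using (var sub = new SubscriberSocket())
    {
        sub.Connect("tcp://publisher-host:5556");
        sub.Subscribe("dict-update");
        string topic = sub.ReceiveFrameString();
        string payload = sub.ReceiveFrameString();
        // apply the update to the local copy here
    }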

You can also look into Hazelcast which is a Java based solution.
Here is a direct link which talks about some of the internals: http://www.hazelcast.com/documentation.jsp#Internals
There is already a suggestion to use memcached (with repcached for replication), which should be easy to use as well.

To answer your question directly: you should probably learn about state machine replication and then look for implementations of either virtual synchrony or Paxos for your platform, to use as a building block.
Pragmatically, I would advise considering a coordination service such as ZooKeeper, which would save you a lot of trouble.

Related

Recommended way of sharing an object between C# processes

I have read numerous different things about IPC between 2 C# applications and their pros and cons, but don't feel like I have reached a satisfactory answer yet for my use case.
I have an object that already exists that will change frequently (I am trying to attach my tool to a game and use it to debug elements created with the tool). As a result, I don't believe serialisation is appropriate, as I would essentially be serialising/de-serialising the object 60 times a second for no good reason. As a result, piping is not possible (or am I missing something here?).
As the game is running in Unity, I am limited to .NET 3.5 technologies, so I can't use the new .NET 4 shared memory class.
So it seemed like .NET Remoting is the way to go. It is less than ideal - I have no need for network support, the object I want to share is in memory, and there is no real reason for the overhead of using proxies and sending messages to change it.
However, the tutorial which everyone links to doesn't seem to be any good - the source code doesn't compile, and when I got it to compile it crashed. The tutorial itself makes no reference to the Cache class, which seems central, and I can't see, even with the source code, how it would fit in with my application. Is there a better resource? Is this really the best approach?
Finally, I am left with interoperating with C++ to use the unmanaged functionality of creating shared memory and moving the object into it. Before I go down that rabbit hole, I wanted to confirm there really isn't a better way.
Update - Some more information
At the moment I am just trying things out, so there are 2 console applications. However, in the end I have one C#/WinForms application (this is .NET 4.0, if that helps) which I will connect to the Unity process (which obviously I have no control over). I have a DLL which is used by both the tool and Unity. I was going to have a class in that DLL which would allow the tool to access the objects (e.g. if I could use pipes, from Unity I'd call into this class to create the pipe and then connect to the pipe from the tool).
The objects themselves represent essentially a finite state machine whose description is loaded from an XML file. It would be possible to recreate the object from a very minimal amount of data. However, I would rather avoid hand coding a solution that uses some kind of event/message system to keep the objects in sync with regard to which state is active etc.
I think WCF with NetNamedPipeBinding would be an easier/better option.
Example here.
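To give a feel for the shape of it, here is a bare-bones sketch of a WCF named-pipe service and client; the contract and the names are invented for illustration:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IStateService
    {
        [OperationContract]
        int GetActiveState();    // hypothetical operation
    }

    public class StateService : IStateService
    {
        public int GetActiveState() { return 0; }
    }

    // Host side (e.g. the class in the shared DLL that Unity calls into):
    var host = new ServiceHost(typeof(StateService), new Uri("net.pipe://localhost"));
    host.AddServiceEndpoint(typeof(IStateService), new NetNamedPipeBinding(), "state");
    host.Open();

    // Client side (the tool):
    var factory = new ChannelFactory<IStateService>(
        new NetNamedPipeBinding(), new EndpointAddress("net.pipe://localhost/state"));
    int state = factory.CreateChannel().GetActiveState();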
What kind of data are we talking about here? If it is symbols / metrics, it could be delimited and stored in a memory mapped file and shared.
Since you're on 3.5, you can't use the MemoryMappedFile class directly, but FileMap should work for you:
https://github.com/tomasr/filemap/tree/master
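If you'd rather not take a dependency, the Win32 calls such a wrapper sits on are only a couple of P/Invoke declarations; a minimal sketch, with error handling omitted:

    using System;
    using System.Runtime.InteropServices;

    // Named shared memory on .NET 3.5, where System.IO.MemoryMappedFiles
    // does not exist yet.
    static class SharedMemory
    {
        const uint PAGE_READWRITE = 0x04;
        const uint FILE_MAP_ALL_ACCESS = 0x000F001F;

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr CreateFileMapping(IntPtr hFile, IntPtr lpAttributes,
            uint flProtect, uint dwMaximumSizeHigh, uint dwMaximumSizeLow, string lpName);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr MapViewOfFile(IntPtr hFileMapping, uint dwDesiredAccess,
            uint dwFileOffsetHigh, uint dwFileOffsetLow, UIntPtr dwNumberOfBytesToMap);

        public static IntPtr Open(string name, uint sizeBytes)
        {
            // Passing INVALID_HANDLE_VALUE (-1) backs the mapping with the page file.
            IntPtr mapping = CreateFileMapping(new IntPtr(-1), IntPtr.Zero,
                PAGE_READWRITE, 0, sizeBytes, name);
            return MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, UIntPtr.Zero);
        }
    }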
In my company we are using a solution that is a combination of XML object serialisation and the FileSystemWatcher. It is more or less the same as named pipes, but it is fast and it is working well.
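Presumably something along these lines (the exchange path and the SharedState type are stand-ins for whatever you actually serialise):

    using System.IO;
    using System.Xml.Serialization;

    var serializer = new XmlSerializer(typeof(SharedState)); // hypothetical type

    // Writer: serialise the object to a file the other process watches.
    using (var stream = File.Create(@"C:\exchange\state.xml"))
        serializer.Serialize(stream, currentState);

    // Reader: re-deserialise whenever the file changes. In practice you need a
    // short retry loop, since Changed can fire while the writer still holds the file.
    var watcher = new FileSystemWatcher(@"C:\exchange", "state.xml");
    watcher.Changed += (s, e) =>
    {
        using (var stream = File.OpenRead(e.FullPath))
            currentState = (SharedState)serializer.Deserialize(stream);
    };
    watcher.EnableRaisingEvents = true;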

Guidance for Migrating MS Access Apps to .Net Apps

I will soon begin the painful *(kidding)* process of migrating multiple, separate, Access applications to "Real" applications *(notice the quotes, no flame wars please)*. Most likely this will be web apps, as the usual reason is multiple users and deployability, but I will take it case by case.
Some of these are traditional Access apps using Access as the back end, and others are using SQL Server (a central one) as the back end.
What I am looking for is a combination of your experience doing this and what resources you used to help.
Websites, apps, standards, best practices, gotchas, don't-forgets, etcetera.
I am a 1 person C# shop with SQL Server back end so whether Web or not I will be looking that direction.
Also, is it overkill or unattainable to try and develop a Framework for this kind of thing? Would there just be TOO MANY variables to even try and walk this path? Anyone ever try this?
Some further info based on the questions below: we currently have ~250 users and they are spread between 5 locations.
What I meant by deployability is perhaps a little vague. I simply meant that we are a non-profit organization and as such we do not have the best bandwidth available, so deploying full apps, even through ClickOnce, can be tricky when combined with the highly fickle nature of my users *(I want that box purple, no green, no get rid of it altogether type stuff...)*.
My idea is to try and develop a "framework", of sorts, that will help to streamline the process of moving an Access App to a .Net App.
Now I fully understand that this "framework" may be nothing more than a set of steps and guidelines; like: use an ORM *(LINQ2SQL or SubSonic)* to generate the DAL, copy the UI to corresponding UserControls, rewrite the business logic.
I am just looking for your experience/expertise to help me streamline my streamlining process... ;)
Those apps which use an Access database to store tables and which need web access should first be upsized to SQL Server. There is a tool for this from the SQL Server group: SQL Server Migration Assistant for Access (SSMA for Access).
Then consider moving to the web only that portion of the app that requires remote access, and leave the rest of the app in Access. That could save a considerable amount of time.
Alternatively consider going to Terminal Server. That along with a VPN means just some software licensing costs and next to no work on your part.
That said, what do you mean by "multiple users" and "deployability"? Possibly we can give you some suggestions there. Access is multi-user out of the box. However, if you have mission-critical data, or can't rekey the data in the event of a corruption, or have more than 25-50 users on the LAN, then you should be moving the data to SQL Server.
Now that it's public, Access 2010 can deploy applications to the web. All kinds of very interesting stuff can be done. For more information check the Microsoft Access product group blog or my blog with the appropriate Access 2010 tags.
Speaking from experience, I think you would need to upgrade on a case-by-case basis. Upgrading is essentially a re-write from scratch, and you should take the opportunity here to re-design as necessary. The type of application structure and code style used for Access (likely to be procedural, I'm guessing) is very different from a well-designed OO .NET app.
You will be able to re-use the SQL Server databases of course and, depending on the apps, maybe even the Access ones. If you're feeling brave you could even try the upsizing wizard, although I wouldn't recommend it as we found the results less than ideal.
I would also advise you to take a look at some kind of ORM tool (we use SubSonic) as this can massively reduce the amount of boilerplate code you need to write. Some ORM tools will also generate the DDL for your database too.
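To make the boilerplate point concrete, here is roughly what a hand-mapped LINQ to SQL entity and query look like; a generated DAL gives you one such class per table (the table and columns are invented):

    using System.Data.Linq;
    using System.Data.Linq.Mapping;

    [Table(Name = "Clients")]       // hypothetical table
    public class Client
    {
        [Column(IsPrimaryKey = true)] public int Id;
        [Column] public string Name;
    }

    var db = new DataContext(connectionString);
    var active = db.GetTable<Client>().Where(c => c.Name.StartsWith("A"));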
We follow these standards (we found it a good idea to pick a standard early on and stick to it) and also found this really useful for getting up and running.
Hope this was some help.

Communication between C# applications - the easy way

I have two C# programs and I want to send some data back and forth between them. (And check if the data arrived to the other application.)
The two programs will always run on the same computer, so no networking capability is required. I've already read some questions with similar topics here, but I'm not entirely sure which is the right method for me. (WCF, Remoting, etc.)
What I want to know, is which one is the easier to implement for a beginner in C#?
(I don't want it to get too complicated anyway, it's only a few integers and some text that I want to send.)
If there isn't a real difference in difficulty, what advantages does one have over the other?
I'd really appreciate some simple example code as well.
Thanks in advance.
You can use pipes to send data between different instances of your application. If you just need to tell the other instance that something has happened, you can send messages from one application to another using the SendMessage API.
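A minimal named-pipe exchange with System.IO.Pipes (available since .NET 3.5; the pipe name and payload are arbitrary):

    using System;
    using System.IO;
    using System.IO.Pipes;

    // Server side: waits for the other program to connect.
    using (var server = new NamedPipeServerStream("demo-pipe"))
    {
        server.WaitForConnection();
        using (var reader = new StreamReader(server))
            Console.WriteLine(reader.ReadLine());
    }

    // Client side: connects and sends a line of text.
    using (var client = new NamedPipeClientStream(".", "demo-pipe"))
    {
        client.Connect();
        using (var writer = new StreamWriter(client) { AutoFlush = true })
            writer.WriteLine("42;hello");
    }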
WCF essentially packages up the various methods of communication between applications (web services, Remoting, MSMQ etc.) in a single package, so that they are programmatically the same in the way that they are used, and the detail of which method is used is left to the configuration of the binding between them. A slight simplification perhaps, but essentially that's what it's about.
It is worth getting into WCF if you need inter-process communication, and this would certainly be my advice as to the way to go with this. It's worth looking at IDesign, who produce a number of articles on the subject, as well as some reusable code libraries that you may find useful. Juval Lowy of IDesign has also written an excellent book on the subject.
Another good point about WCF is that if your requirements ever change and all of a sudden you have to move one of the applications to a different machine, now requiring network capability, you will only need to change configuration on both sides, instead of having to recode.
Plus, as David said, WCF is a good tool to have in your bag.
Cheers, Wagner.
I found MSMQ is simple to implement.

What to use for Messaging with C#

So my company stores a lot of data in a FoxPro database, and to get around the performance hit of touching it directly I was thinking of messaging anything that can be done asynchronously, for a snappier user experience. I started looking at ActiveMQ but don't know how well C# will hook in with it. Wanting to hear what all of you guys think.
Edit: It is going to be a web application. Anything touching this FoxPro is kind of slow (probably because the person who set it up 10 years ago messed it all to hell; some of the table files are incredibly large). We replicate the FoxPro to SQL nightly, and most of our data reads are OK being a day old, so we are focusing on the writes. Plus, the writes affect a critical part of the user experience (purchasing), so we store them in SQL and then just message to have them put into FoxPro when possible. I wish we could just get rid of the FoxPro; unfortunately the company doesn't want to get rid of a very old piece of software they bought that depends on it.
ActiveMQ works well with C# using the Spring.NET integrations and NMS. A post with some links to get you started in that direction is here. Also consider using MSMQ (The System.Messaging namespace) or a .NET based asynchronous messaging solution, with some options here.
MSMQ (Microsoft Message Queuing) may be a great choice. It is part of the OS and present as an optional component (it can be installed via Add/Remove Programs / Windows Components), meaning it's free (as long as you've already paid for Windows, of course). MSMQ provides Win32/COM and System.Messaging APIs. The more modern Windows Communication Foundation (aka Indigo) queued channels also use MSMQ.
Note that MSMQ is not supported on Home SKUs of Windows (XP Home and Vista Home).
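For a feel of the System.Messaging API, a minimal send/receive sketch (the queue path is a placeholder; reference System.Messaging.dll):

    using System.Messaging;

    const string path = @".\Private$\demo-queue";
    if (!MessageQueue.Exists(path))
        MessageQueue.Create(path);

    // Sender
    using (var queue = new MessageQueue(path))
        queue.Send("a few integers and some text", "label");

    // Receiver (blocks until a message arrives)
    using (var queue = new MessageQueue(path))
    {
        queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
        string body = (string)queue.Receive().Body;
    }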
It's worth mentioning that the ActiveMQ open source project defines a C# API for messaging called NMS, which allows you to develop against a single C#/.NET API that can then use various messaging back ends (a minimal producer sketch follows the list) such as:
ActiveMQ
MSMQ
TibCo's EMS
any STOMP provider
any JMS provider via StompConnect
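Here's that producer sketch against the ActiveMQ provider; the broker URI and queue name are placeholders:

    using Apache.NMS;
    using Apache.NMS.ActiveMQ;   // other providers plug into the same NMS interfaces

    IConnectionFactory factory = new ConnectionFactory("tcp://localhost:61616");
    using (IConnection connection = factory.CreateConnection())
    using (ISession session = connection.CreateSession())
    {
        IDestination queue = session.GetQueue("foxpro.writes"); // hypothetical queue
        using (IMessageProducer producer = session.CreateProducer(queue))
        {
            connection.Start();
            producer.Send(session.CreateTextMessage("purchase #123"));
        }
    }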
You may want to look at MSMQ. It can be used by both .NET and VFP, but you'll need to rewrite both sides to use it. Here's an article that tells you how to use MSMQ from VFP: https://learn.microsoft.com/en-us/previous-versions/visualstudio/foxpro/ms917361(v=msdn.10)
Sorry if this isn't what you are asking for...
Have you considered some sort of cache behind the scenes that acts a bit like the "bucket system" used with asynchronous sockets in C/C++ with Winsock? Basically, it works by accepting requests and sending an immediate response back to the web app, and when it finally gets around to finding your record, it updates it on the app via AJAX or any other technology of your choice. Since I'm not a C# programmer I can't provide any specific example. Hope this helps!
Does the Fox app use .CDX indexes? If so, you might be able to improve performance by adding indexes without needing to change any program code. If it uses .IDX indexes, though, the change would have to be done in the actual app.
As the problem is with writes, I would look more towards *removing* any unneeded indexes on the tables. As is common in RDBMSs, every index on a FoxPro table slows down a write operation, as the indexes need to be updated; and since you aren't reading directly from (or presumably directly querying) the table, you shouldn't need very many indexes. You might also want to look at any triggers or field rules on the tables, as they may be slowing down the write operation. Be sure your referential integrity is still preserved, though.

Java and C# interoperability

I have two programs. One is in C# and another one in Java.
Those programs will, most probably, always run on the same machine.
What would be the best way to let them talk to each other?
So, to clarify the problem:
This is a personal project (so professional/costly libraries are a no go).
The message volume is low, there will be about 1 to 2 messages per second.
The messages are small, a few primitive types should do the trick.
I would like to keep the complexity low.
The java application is deployed as a single jar as a plugin for another application. So the less external libraries I have to merge, the better.
I have total control over the C# application.
As said earlier, both application have to run on the same computer.
Right now, my solution would be to use sockets with some sort of CSV-like format.
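That plan is simple enough to sketch; the C# end might look like this, with the Java end opening a plain java.net.Socket to the same port (the port and record format are arbitrary):

    using System.IO;
    using System.Net;
    using System.Net.Sockets;

    // C# side listens on localhost and reads one CSV-style record per line.
    var listener = new TcpListener(IPAddress.Loopback, 9000);
    listener.Start();
    using (TcpClient client = listener.AcceptTcpClient())
    using (var reader = new StreamReader(client.GetStream()))
    {
        string[] fields = reader.ReadLine().Split(';'); // e.g. "42;3.14;hello"
    }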
I am the author of jni4net, an open source interprocess bridge between the JVM and CLR. It's built on top of JNI and P/Invoke. No C/C++ code needed. I hope it will help you.
Kyle has the right approach in asking about the interaction. There is no "correct" answer without knowing what the usage patterns are likely to be.
Any architectural decision -- especially at this level -- is a trade-off.
You must ask yourself:
What kind of messages need to be passed between the systems?
What types of data need to be shared?
Is there an important requirement to support complex model objects or will primitives + arrays do?
What is the volume of the data?
How frequently will the interactions occur?
What is the acceptable communication latency?
Until you have an understanding of the answers, or potential answers, to those questions, it will be difficult to choose an implementation architecture. Once we know which factors are important, it will be far easier to choose the more suitable implementation candidates that reflect the requirements of the running system.
I've heard good things about IKVM, the JVM that's implemented in .NET.
Ice from ZeroC is a really high performance "enterprisey" interop layer that supports Java and .net amongst others. I think of it as an updated Corba - it even has its own object oriented interface definition language called Slice (like Corba's IDL, but actually quite readable).
The feature set is extensive, with far more on offer than web services, but clearly it isn't an open standard, so not a decision to make lightly. The generated code it spits out is somewhat ugly too...
I realize you're talking about programs on the same machine, but I've always liked the idea of passing messages in XML over HTTP.
Your server could be a web server that's ready to accept an XML payload. Your client can send HTTP messages with XML in the body, and receive an HTTP response with XML in it.
One reason I like this is that HTTP is such a widely used protocol that it's easy to accept or create HTTP POST or GET requests in any language (in the event that you decide to change either the client or server language in the future). HTTP and XML have been around for a while, so I think they're here to stay.
Another reason I like it is that your server could be used by other clients, too, as long as they know HTTP and XML.
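On the C# end, HttpListener is enough to stand up such a server without IIS; a minimal sketch (the URL prefix is arbitrary):

    using System.IO;
    using System.Net;
    using System.Text;

    var listener = new HttpListener();
    listener.Prefixes.Add("http://localhost:8080/messages/"); // hypothetical endpoint
    listener.Start();

    HttpListenerContext context = listener.GetContext();   // blocks for one request
    string requestXml = new StreamReader(context.Request.InputStream).ReadToEnd();

    byte[] reply = Encoding.UTF8.GetBytes("<ack/>");
    context.Response.ContentType = "text/xml";
    context.Response.OutputStream.Write(reply, 0, reply.Length);
    context.Response.Close();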
I used JNBridge (http://www.jnbridge.com/jnbpro.htm) on a relatively simple project where we had a .NET client app using a relatively significant jar file full of business object logic that we didn't want to port. It worked quite nicely, but I wouldn't say we fully exercised the capabilities of JNBridge.
I am a big fan of Thrift, an interoperability stack from Facebook. You said the code will probably run on the same machine, so it could be overkill, but you can still use it.
If they are separate programs running as independent applications, you may use sockets. I know it's a bit complex to define a communication protocol, but it'll be quite straightforward.
However if you have just two separate programs but want to run them as single application, then I guess IKVM is a better approach as suggested by marxidad.
It appears a very similar question has been asked before here on Stack Overflow (I was searching Google for "java windows shared memory"):
Efficient data transfer from Java to C++ on windows
From the answer, I would suggest you investigate:
"Your fastest solution will be memory mapping a shared segment of memory, and then implementing a ring-buffer or other message passing mechanism. In C++ this is straightforward, and in Java you have the FileChannel.map method which makes it possible."
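On the C# side, the .NET 4 System.IO.MemoryMappedFiles API can map the same file the Java process maps with FileChannel.map; a very rough sketch (the file name, map name, layout and sizes are all invented, and real code needs synchronisation around the indices):

    using System.IO;
    using System.IO.MemoryMappedFiles;

    using (var mmf = MemoryMappedFile.CreateFromFile(@"C:\exchange\ring.buf",
                                                     FileMode.OpenOrCreate,
                                                     "ring", 4096))
    using (var accessor = mmf.CreateViewAccessor())
    {
        int writeIndex = accessor.ReadInt32(0);     // header: next write offset
        accessor.Write(4 + writeIndex, (byte)42);   // payload byte
        accessor.Write(0, writeIndex + 1);          // publish the new offset
    }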
