RPC, Data Synchronization and RabbitMQ - c#

I need to do some data synchronization between a client and a server. I have decided to use RabbitMQ for the synchronization, in order for my program to be tolerant of network failures. I know how to use RabbitMQ, so no problem there.
All data needs to be stored in a local database and then copied to another database.
My problem is exactly which data to transfer over the network.
Let me show some example code:
This is the way I write to my database today. The body of my method is removed since it is irrelevant for the question.
public static class DatabaseHelper
{
    public static void RegisterCheckout(string checkoutStationId, string employeeId, string destinationId)
    {
        // Insert into database
    }
}
So in short: my program calls DatabaseHelper.RegisterCheckout("123", "321", "456"), and the checkout is put into the local database.
But how do I serialize this method call so that I can reproduce it server-side?
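One common approach is to serialize not the method call itself but a small command message that carries the method's arguments, publish it to a durable queue, and replay it on the server. Here is a minimal sketch; the queue name, the CheckoutMessage type, the host name, and the use of Newtonsoft.Json are my assumptions, not part of the original setup:
using System.Text;
using Newtonsoft.Json;
using RabbitMQ.Client;

// Hypothetical message type mirroring RegisterCheckout's parameters
public class CheckoutMessage
{
    public string CheckoutStationId { get; set; }
    public string EmployeeId { get; set; }
    public string DestinationId { get; set; }
}

public static class CheckoutPublisher
{
    public static void PublishCheckout(string checkoutStationId, string employeeId, string destinationId)
    {
        var msg = new CheckoutMessage
        {
            CheckoutStationId = checkoutStationId,
            EmployeeId = employeeId,
            DestinationId = destinationId
        };
        byte[] body = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(msg));

        var factory = new ConnectionFactory { HostName = "server" }; // host name assumed
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // A durable queue plus persistent messages survive broker restarts,
            // which is what gives you the network-failure tolerance.
            channel.QueueDeclare(queue: "checkouts", durable: true,
                                 exclusive: false, autoDelete: false, arguments: null);
            var props = channel.CreateBasicProperties();
            props.Persistent = true;
            channel.BasicPublish(exchange: "", routingKey: "checkouts",
                                 basicProperties: props, body: body);
        }
    }
}
On the server, a consumer deserializes each message back into a CheckoutMessage and calls its own DatabaseHelper.RegisterCheckout with the three fields, so the "RPC" is really just a replayed command.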

Related

Should a NI LabVIEW NetworkVariableManager be left connected?

I've received some C# code from a colleague for interacting with a cRIO device connected over Ethernet. I'm trying to improve the code quality to make it a bit more comprehensible for future users, but I'm struggling a little to extract the relevant information from the API documentation. My main question is whether leaving a NetworkVariableManager in the Connected state would cause problems.
Right now the code uses a class which looks something like this:
public class RIOVar<T>
{
    public readonly string location;

    public RIOVar(string location)
    {
        this.location = location;
    }

    public T Get()
    {
        // Connect, read one value, then dispose (and thereby disconnect)
        using (NetworkVariableReader<T> reader = new NetworkVariableReader<T>(location))
        {
            reader.Connect();
            return reader.ReadData().GetValue();
        }
    }

    public void Write(T value)
    {
        using (NetworkVariableWriter<T> writer = new NetworkVariableWriter<T>(location))
        {
            writer.Connect();
            writer.WriteValue(value);
        }
    }
}
The actual class does a lot more than this, but the part that actually communicates with the cRIO basically boils down to these two methods and the location data member.
What I'm wondering is whether it would be better to have the reader and writer as class members and Connect them in the constructor (at the point they are constructed, the connection should be possible). What I don't know is whether this would have some adverse effect on the way the computer and the RIO communicate with each other (maybe a connected manager holds some resource, or the program must maintain some sort of register...?), in which case the approach here, connecting the manager only for the duration of the read/write operation, is the better design.
Keeping a variable connected keeps its backing resources in memory:
threads
sockets
data buffers
These resources are listed in the online help, but it's unclear to me if that list is complete:
NationalInstruments.NetworkVariable uses multiple threads to implement the reading and writing infrastructure. When reading or writing in a tight loop insert a Sleep call, passing 0, to allow a context switch to occur thereby giving the network variable threads time to execute.
... snip ...
NationalInstruments.NetworkVariable shares resources such as sockets and data buffers among connections that refer to the same network variable in the same program.
In my opinion, you'd get better runtime performance by connecting and disconnecting as infrequently as possible: connect while the network is reachable, disconnect when it isn't. A sketch of what that could look like follows.
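This sketch of the connected-members variant assumes the reader and writer expose Connect() and Dispose() exactly as in the snippet above; it makes the class itself disposable so the connections are released deterministically:
public class RIOVar<T> : IDisposable
{
    public readonly string location;
    private readonly NetworkVariableReader<T> reader;
    private readonly NetworkVariableWriter<T> writer;

    public RIOVar(string location)
    {
        this.location = location;
        // Connect once up front and keep the backing threads/sockets/buffers alive
        reader = new NetworkVariableReader<T>(location);
        reader.Connect();
        writer = new NetworkVariableWriter<T>(location);
        writer.Connect();
    }

    public T Get()
    {
        return reader.ReadData().GetValue();
    }

    public void Write(T value)
    {
        writer.WriteValue(value);
    }

    public void Dispose()
    {
        reader.Dispose();
        writer.Dispose();
    }
}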

c# datagridview autorefresh in LAN

I am a beginner in C# with a huge problem.
An application with a DataGridView on its main form (an appointment plan for one work day) runs on many PCs in a LAN, with MS Windows Server and a MySQL database.
How can I get changes made on one workstation to appear AUTOMATICALLY on all the other PCs, WITHOUT any user action on them (the application merely running)?
I have a procedure for refreshing the data and the DataGridView; I only need to know WHEN to start this procedure, i.e. WHEN any other workstation has made a change.
Thanks for any help!
A simple solution would be to use a timer and refresh your grid view each time it elapses, so the view is refreshed automatically at a defined interval. The problem is that if you refresh too often, you overload the database with queries. To prevent this, you could build a server application that handles all the data:
Let's say PC 1 starts the client application.
First it connects to the server application (the server stores a reference to the client, e.g. in a list).
When the user on PC 1 makes changes and clicks Save, the software sends the changes to the server (e.g. a custom object with all the needed information).
The server saves the changes to the DB.
The server application sends a response to that specific client indicating whether it worked or not.
If it worked, the server sends a custom object (for example named ChangesDoneEvent) to all clients to indicate that changes have been made.
All connected clients receive that object and now know they have to refresh their grid view.
For further information, search for C# multi-threaded server socket programming. For sending custom objects over the network you will find many resources on the internet too; maybe this will help you: Sending and receiving custom objects using Tcpclient class in C#.
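Here is a minimal sketch of that notification side; the port, the newline framing, and the class name are my assumptions, and a production version would also prune dead clients:
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using System.Text;

public class NotificationServer
{
    private readonly List<TcpClient> clients = new List<TcpClient>();
    private readonly object sync = new object();

    // Run on its own thread: accepts clients and remembers them
    public void Start()
    {
        var listener = new TcpListener(IPAddress.Any, 9000); // port is arbitrary
        listener.Start();
        while (true)
        {
            TcpClient client = listener.AcceptTcpClient(); // blocks until a client connects
            lock (sync) { clients.Add(client); }
        }
    }

    // Called after a change has been saved to the DB:
    // tells every connected client to refresh its grid view.
    public void BroadcastChangesDone()
    {
        byte[] msg = Encoding.UTF8.GetBytes("ChangesDone\n");
        lock (sync)
        {
            foreach (TcpClient client in clients)
            {
                try { client.GetStream().Write(msg, 0, msg.Length); }
                catch { /* client gone; a real server would remove it here */ }
            }
        }
    }
}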
Declare a delegate on your form:
public delegate void autocheck();

private System.Timers.Timer TTTimer = new System.Timers.Timer();

public void autofilldgv()
{
    // System.Timers.Timer elapses on a thread-pool thread,
    // so marshal back to the UI thread before touching controls
    if (this.InvokeRequired)
    {
        this.Invoke(new autocheck(UpdateControls));
    }
    else
    {
        UpdateControls();
    }
}

private void UpdateControls()
{
    // call your refresh method here
    filldgv();
}

void TTTimer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    mymethod();
}

public void mymethod()
{
    // this method is executed on the timer's background thread
    autofilldgv();
}

private void frm_receptionView_Load(object sender, EventArgs e)
{
    this.TTTimer.Interval = 1000; // 1-second interval
    this.TTTimer.Elapsed += new System.Timers.ElapsedEventHandler(TTTimer_Elapsed);
    this.TTTimer.Start();
}
The solution provided above is actually a good way to handle this scenario. Before implementing it, you might also want to think about potential pitfalls. A client PC's IP address could change, and since you are using sockets, the object reference kept in the server's list could end up in a faulted state. You might want to handle this pitfall as well.

Row by row streaming of data while waiting for response

I have a WCF service and client, developed in C#, which handle data transfer from the SQL Server database on the server to the SQL Server database on the client end. I am facing some issues with the current architecture and plan to modify it; I would like to know whether my idea is achievable, or how best I can modify the architecture to suit my needs.
The server-side database server is SQL 2008 R2 SP1 and the client-side servers are SQL 2000.
Before I state the idea, below is the overview and current shortcomings of the architecture design I am using.
Overview:
The client requests a table's data.
The WCF service queries the server database for all pending data for the requested table. This data is loaded into a DataSet.
The WCF service compresses the DataSet using GZIP compression and converts it to bytes for the client to download.
The client receives the byte stream, un-compresses it, and replicates the data from the DataSet to the physical table in the client database. The data is inserted row by row, since I need the primary key column field returned to the server so the row can be flagged off as transferred.
Once the client has finished replicating the data, it uploads the successful rows' primary key fields back to the server, and the server in turn updates each field one by one.
The above procedure uses a basicHttpBinding with streamed transfer mode.
Shortcomings:
This works great for small amounts of data, but when it comes to bulk data, holding the DataSet in memory while the download is ongoing, and again on the client side during replication, is becoming impossible: the DataSet sometimes grows to 4 GB. The server can hold that much data since it is a 32 GB RAM machine, but on the client side I get a System.OutOfMemoryException, since the client machine has 2 GB of RAM.
There are numerous deadlocks while the select query is running and also during the updates, since I am using read committed as the transaction isolation level.
For bulk data it is very slow, and it completely hangs the client machine while the DTS is ongoing.
Idea in mind:
Maintain the same service and the same row-by-row transfer logic, since I cannot change this due to the sensitivity of the data; but rather than downloading bulk data, I plan to use the sample given at http://code.msdn.microsoft.com/Custom-WCF-Streaming-436861e6.
Thus the new flow will be:
Upon receiving the download request, the server opens a connection to the DB with snapshot isolation as the transaction level.
The server builds each row object and sends it to the client on the requested channel. As the client receives each row object, it processes it and sends a success or failure response back to the server on the same method and channel, since I need to update the data within the same snapshot transaction.
This way I reduce the bulk objects held in memory and rely on SQL Server for the snapshot data, which is maintained in tempdb once the transaction is initiated. (A sketch of opening such a transaction is below.)
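For reference, opening such a snapshot transaction with plain ADO.NET looks roughly like this; the connection string and query are placeholders, and the database must have ALLOW_SNAPSHOT_ISOLATION turned on:
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    // All reads and the later flag-off updates must run on this one
    // transaction so they see the same snapshot, kept by SQL Server in tempdb.
    using (SqlTransaction tx = conn.BeginTransaction(IsolationLevel.Snapshot))
    {
        var cmd = new SqlCommand(selectQuery, conn, tx);
        using (SqlDataReader rdr = cmd.ExecuteReader())
        {
            // stream row objects to the client here
        }
        tx.Commit();
    }
}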
Challenge:
How can I send the row object and wait for a confirmation before sending the next one, given that the update to the server row has to occur within the same snapshot transaction? If I create another method on the service to perform the flagging off, the snapshots will be different, and this will cause data-integrity issues if the data changes after the snapshot transaction was initiated.
If this is the wrong approach, then please suggest a better one, as I am open to any suggestions.
If my understanding of snapshot isolation is wrong, then please correct me, as I am new to this.
Update 1:
I would like to achieve something like this when the client is the one requesting:
//Client invokes this method on the server
public Stream GetData(string sTableName)
{
    //Open the snapshot transaction on the server
    SqlDataReader rdr = operations.InitSnapshotTrans("Select * from " + sTableName + " Where Isnull(ColToCheck,'N') <> 'Y'");
    //Check if there are rows available
    if (rdr.HasRows)
    {
        while (rdr.Read())
        {
            SendObj sendobj = Logic.CreateObject(rdr);
            //Here is where I am stuck:
            //at this point I want to write the object to the stream
            //...Write sendobj to Stream
            //Once the client is done processing, it replies with true for success or false for failure.
            if (returnObj == true)
            {
                operations.updateMovedRecord(rdr);
            }
        }
    }
}
For the server-initiated sending I have written the code as such (I used a pub-sub model for this):
public void ServerData(string sTableName)
{
    List<SubList> subscribers = Filter.GetClients();
    if (subscribers == null) return;
    Type type = typeof(ITransfer);
    MethodInfo publishMethodInfo = type.GetMethod("ServerData");
    foreach (SubList subscriber in subscribers)
    {
        try
        {
            //Open the snapshot transaction on the server
            SqlDataReader rdr = operations.InitSnapshotTrans("Select * from " + sTableName + " Where Isnull(ColToCheck,'N') <> 'Y'");
            //Check if there are rows available
            if (rdr.HasRows)
            {
                while (rdr.Read())
                {
                    SendObj sendobj = Logic.CreateObject(rdr);
                    bool rtnVal = Convert.ToBoolean(publishMethodInfo.Invoke(subscriber.CallBackId, new object[] { sendobj }));
                    if (rtnVal == true)
                    {
                        operations.updateMovedRecord(rdr);
                    }
                }
            }
        }
        catch (Exception ex)
        {
            Debug.WriteLine(ex.Message);
        }
    }
}
Just off the top of my head, this sounds like it might take longer. That may or may not be a concern.
Given the requirement in the challenge (that everything happen in the context of one method call), it sounds like what actually needs to happen is for the server to call a method on the client, send a record, and then wait for the client to return confirmation. That way, everything that needs to happen happens in the context of a single call (server to client). I don't know if that's feasible in your situation.
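In WCF that pattern maps onto a duplex contract with a callback. This is only a sketch under assumed names (ITransferService, IRowCallback), not the actual contract from the question:
[ServiceContract(CallbackContract = typeof(IRowCallback))]
public interface ITransferService
{
    [OperationContract]
    void StartTransfer(string tableName);
}

public interface IRowCallback
{
    // A synchronous callback: the server's call blocks until the client
    // returns, so sending a row and confirming it happen inside one call,
    // and therefore inside one snapshot transaction on the server.
    [OperationContract]
    bool ProcessRow(SendObj row);
}
Inside the service, the callback channel is obtained with OperationContext.Current.GetCallbackChannel<IRowCallback>(); for each row, if ProcessRow returns true, the row is flagged off on the same transaction.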
Another option might be to use some kind of double-queue system (perhaps with MSMQ?) so that the server and client can maintain an ongoing conversation within a single session.
I assume there's a reason why you can't just divide the data to be downloaded into manageable chunks and repeatedly run the original process on each chunk. That sounds like the least ambitious option, but you probably would have done it already if it met all your needs.

Async WCF Service

To make this easier to understand: we are using a database that does not have connection pooling built in, so we are implementing our own connection pooler.
OK, so the title probably did not give the best description, so let me first describe what I am trying to do. We have a WCF service (hosted in a Windows service) that needs to be able to take and process multiple requests at once. The WCF service takes a request and tries to talk to one of (say) 10 available database connections. These database connections are all tracked by the WCF service and are marked busy while processing. If a request comes in and all 10 connections are busy, we would like the WCF service to wait and return the response once a connection becomes available.
We have tried a few different things. For example, we could have a while loop (yuck):
[OperationContract(AsyncPattern = true)]
string ExecuteProgram(string clientId, string program, string[] args)
{
    string requestId = DbManager.RegisterRequest(clientId, program, args);
    string response = null;
    // busy-wait until some connection posts a response for this request
    while (response == null)
    {
        response = DbManager.GetResponseForRequestId(requestId);
    }
    return response;
}
Basically the DbManager tracks requests and responses. Each request calls the DbManager, which assigns a request id. When a database connection becomes available, it assigns (say) Responses[requestId] = [the database response]. The request constantly asks the DbManager whether it has a response yet, and when it does, the request can return it.
This has problems all over the place. We could have multiple threads stuck in while loops for who knows how long. That would be terrible for performance and CPU usage (to say the least).
We have also looked into trying this with events / listeners. I don't know how this would be accomplished so the code below is more of how we envisioned it working.
[OperationContract(AsyncPattern = true)]
string ExecuteProgram(string clientId, string program, string[] args)
{
    // register an event
    // listen for that event
    // when that event fires, return its value
}
We have also looked into the DbManager having a queue or using things like Pulse/Monitor.Wait (which we are unfamiliar with).
So, the question is: How can we have an async WCF Operation that returns when it is able to?
WCF supports the async/await keywords in .NET 4.5 (http://msdn.microsoft.com/en-us/library/vstudio/hh191443.aspx). You would need to do a bit of refactoring to make your ExecuteProgram async and to make your DbManager request operation awaitable.
If you need your DbManager to manage the completion of these tasks as results become available for given clientIds, you can map each clientId to a TaskCompletionSource. The TaskCompletionSource can be used to create a Task, and the DbManager can use it to set the result when the response arrives.
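A minimal sketch of that mapping, with hypothetical member names (DoRequestAsync, SetResponse) standing in for whatever DbManager actually exposes:
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class DbManager
{
    private static readonly ConcurrentDictionary<string, TaskCompletionSource<string>> Pending =
        new ConcurrentDictionary<string, TaskCompletionSource<string>>();

    // Called by the service operation: registers the request and returns a
    // Task that completes only when a pooled connection posts the response.
    public static Task<string> DoRequestAsync(string requestId)
    {
        var tcs = new TaskCompletionSource<string>();
        Pending[requestId] = tcs;
        return tcs.Task;
    }

    // Called by whichever database connection finishes the work:
    // completing the Task resumes the awaiting service operation.
    public static void SetResponse(string requestId, string response)
    {
        TaskCompletionSource<string> tcs;
        if (Pending.TryRemove(requestId, out tcs))
        {
            tcs.SetResult(response);
        }
    }
}
The service operation then becomes an async method that simply awaits DbManager.DoRequestAsync(requestId); no thread is blocked while the request waits for a free connection.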
This should work, with a properly-implemented async method to call:
[OperationContract]
string ExecuteProgram(string clientId, string program, string[] args)
{
    Task<string> task = DbManager.DoRequestAsync(clientId, program, args);
    // Result blocks the current thread until the task completes
    return task.Result;
}
Are you manually managing the 10 DB connections? It sounds like you've re-implemented database connection pooling. Perhaps you should be using the connection pooling built into your DB server or driver.
If you only have a single database server (which I suspect is likely), then just use a BlockingCollection for your pool.
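For example, a pool along these lines; DbConnectionWrapper is a stand-in name for whatever your driver's connection type is:
using System.Collections.Concurrent;
using System.Collections.Generic;

public class ConnectionPool
{
    private readonly BlockingCollection<DbConnectionWrapper> pool =
        new BlockingCollection<DbConnectionWrapper>();

    public ConnectionPool(IEnumerable<DbConnectionWrapper> connections)
    {
        foreach (var connection in connections)
        {
            pool.Add(connection);
        }
    }

    public string Execute(string program, string[] args)
    {
        // Take() blocks until a connection is free, so concurrent requests
        // queue up automatically instead of spinning in a while loop.
        DbConnectionWrapper connection = pool.Take();
        try
        {
            return connection.Run(program, args);
        }
        finally
        {
            pool.Add(connection); // hand the connection to the next waiter
        }
    }
}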

WPF, WCF and Code First - Disconnected mode

Some architecture dilemma:
I'm using WPF as my client side, EF Code First as my data access layer, and WCF to connect the two. My problem is how to update the UI again after I make changes to the DB, for example:
User inserts a new "Person" on the UI (ID=0)
User saves the "Person" to the DB (ID=10, for example)
With a single entity it's very simple: I can return the ID and update my UI accordingly (so the next change to this person is treated as an "update"). But what about adding more than one Person at once, or updating other properties that are calculated on the server? Should I return the whole graph? Not to mention it is very hard to remap it on the client side.
Before Code First we could use STEs (self-tracking entities), but they have their own problems. Does anyone know of an established Code First approach?
Would be happy to hear your thoughts.
Thanks!
You can send, as the request to your WCF service, the DateTime of the last update on the client side. The server side then takes all Persons that were updated or added after that DateTime and returns them as the result. This way you get only modified/added Persons from the server side.
So add a lastUpdate column to your Person entity.
EDIT 1
If you want the server to push updates to the client, rather than having the client repeatedly ask the server for news, you can use the long-polling technique familiar from web programming:
(1) The client asks the server: "hey, my last update was at 20:00 10.02.2013". The server then looks into the DB: "is there news after 20:00 10.02.2013?" If yes:
a) it returns the news to the client.
If there is no news in the DB:
b) it does not return null; instead it calls Thread.Sleep(someValue), then repeats the query against the DB, asking again whether there is news. This repeats until news appears in the DB; then the server returns the List<data> updated after that DateTime. Once the client has the data, it goes back to point (1).
So you don't make a lot of requests to the server; you make a single request and wait for the news from the server.
Note two things:
1) If the client waits too long, the channel will time out with an exception (I don't remember the error code, but it's not important now), so you have to catch this exception on the client side and issue a new request to the server side. You should also configure the server-side wait time to be as long as you can, to minimize the number of requests from the client.
2) You have to run this data updater on a new thread, not on the main thread where the application runs.
Here is how it might look in code (it may not work as-is; I just want to show the logic):
Server side:
public List<SomeData> Updater(DateTime clientSideLastUpdate)
{
    while (true)
    {
        List<SomeData> news = dbContext.SomeData
            .Where(e => e.UpdateDateTime > clientSideLastUpdate)
            .ToList();
        if (news.Count > 0)
        {
            return news;
        }
        Thread.Sleep(1000); // pause before polling the DB again, as described above
    }
}
Client side:
public static void Updater()
{
    try
    {
        var news = someServiceReference.Updater(clientSideLastUpdate);
        RenewDataInForms(news);
        Updater();
    }
    catch (ServerDiesOrWhatElseException)
    {
        // the long poll timed out or the channel faulted: just ask again
        Updater();
    }
}
And somewhere in the code you run this updater on a new thread:
Thread updaterThread = new Thread(Updater);
updaterThread.Start();
Edit 2
If you want one request to update all your entities, not only SomeData, add a DTO object that contains a list for each entity you want kept up to date. The server side fills in and returns this DTO object; a sketch is below.
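For illustration only (entity names assumed, not from the original):
// One object carrying every entity list the client keeps in sync
public class UpdatesDto
{
    public List<Person> Persons { get; set; }
    public List<SomeData> SomeDatas { get; set; }
    // ...one list per updatable entity type
}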
Hope it helps.
