.NET architecture design issue - C#

I'm just starting work on a particular piece of development.
We have a .NET WCF application (MySQL, with an EF DAL/ORM) that is called by a threaded job scheduler, which pulls data from one client, stores it in our DB, and passes the latest data to another client, and vice versa.
So, to think of it in terms of messages:
ClientB sends an order to ClientA through our system, which transforms the order into a readable format for ClientA.
ClientA can then send messages back to ClientB through our system to say things like "your order is shipped" or "your order is late".
I need to take these messages and relay them on to ClientB, but I want it to be transactional and for us to have full control over failed messages etc.
My current thought is, for simplicity's sake, to have an OrderMessages table in our DB which receives messages with a state of "Ready", which can then be processed by a factory and forwarded to the relevant client using configuration stored against each client.
Sorry for this being all over the place, but hopefully I've explained what I'm trying to do :/
Neil

Your proposed architecture is a classic queue table pattern. Remus Rusanu's writing is the canonical resource on building such a thing with SQL Server, and the ideas apply to other databases as well.
There is nothing wrong with this architecture. Note that in the case of an error when messaging a client, you cannot know whether the message was received or not. There is no 100% solution to this problem; it is the Two Generals Problem.
If you make the clients pull directly from the database, you avoid this problem, because the clients can use their own transactions in that case.
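For what it's worth, here is a minimal sketch of claiming work from such a queue table, assuming a hypothetical OrderMessages(Id, Payload, State) schema on MySQL/InnoDB (to match the question's stack); the Sent/Failed states and the ForwardToClient helper are assumptions:

```csharp
// Sketch only: claim one "Ready" row inside a transaction so that two
// workers never process the same message. Everything beyond State='Ready'
// (column names, Sent/Failed states, ForwardToClient) is assumed.
using System;
using MySql.Data.MySqlClient;

static void ProcessOneMessage(string connectionString)
{
    using (var conn = new MySqlConnection(connectionString))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction())
        {
            var claim = new MySqlCommand(
                "SELECT Id, Payload FROM OrderMessages " +
                "WHERE State = 'Ready' ORDER BY Id LIMIT 1 FOR UPDATE", conn, tx);

            long id;
            string payload;
            using (var reader = claim.ExecuteReader())
            {
                if (!reader.Read()) { tx.Rollback(); return; }   // nothing waiting
                id = reader.GetInt64(0);
                payload = reader.GetString(1);
            }

            try
            {
                ForwardToClient(payload);            // transform + send (assumed helper)
                SetState(conn, tx, id, "Sent");
            }
            catch (Exception)
            {
                SetState(conn, tx, id, "Failed");    // kept for retry/inspection
            }
            tx.Commit();
        }
    }
}

static void SetState(MySqlConnection conn, MySqlTransaction tx, long id, string state)
{
    var cmd = new MySqlCommand(
        "UPDATE OrderMessages SET State = @state WHERE Id = @id", conn, tx);
    cmd.Parameters.AddWithValue("@state", state);
    cmd.Parameters.AddWithValue("@id", id);
    cmd.ExecuteNonQuery();
}
```

Note how the Two Generals caveat surfaces here: if ForwardToClient succeeds but the process dies before Commit, the same message will be sent again on the next pass.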

Consider leveraging a message platform for publishers and subscribers.
Specifically, consider using a hub and spoke pattern.
Also, BizTalk specializes in workflows across distributed systems.
Also consider the effort involved:
Transactions (short and long)
Error handling
Expected message formats
Orchestrations

Related

Communicating with .net via PHP

I have a .NET WinForms application that I want to allow users to connect to via PHP.
I'm using PHP out of personal choice and to help keep costs low.
Quick overview:
People can connect to my .NET app and start a new thread that will continue running even after they close the browser. They can then log in at any time to see the status of what their thread is doing.
Currently I have come up with two ways to do this:
Idea 1 - Sockets:
When a user connects for the first time and spawns a thread, a GUID is associated with their "web" login details.
The next time PHP connects to the app via a socket, it sends a "GET.UPDATE" command with their GUID, which is then added to a MESSAGE IN QUEUE for that GUID.
The thread spawned by the .NET app checks the MESSAGE IN QUEUE, and when it sees the "GET.UPDATE" command it encodes the data as JSON and adds it to the MESSAGE OUT QUEUE.
The next time there is a PHP socket request from that GUID, it sends the data in the MESSAGE OUT QUEUE.
Idea 2 - Database:
Same idea as above, but commands from PHP get put into a database.
The .NET app thread checks for new IN MESSAGES in the database.
If it gets a GET.UPDATE command, it adds the JSON-encoded data to the database.
Next time PHP connects, it checks the database for new messages and reports the data accordingly.
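For concreteness, a rough sketch of the .NET thread in Idea 2, assuming SQL Server via System.Data.SqlClient; the InMessages/OutMessages tables and the BuildStatusJson helper are illustrative, not a prescribed schema:

```csharp
// Sketch only: each spawned thread watches InMessages for commands
// addressed to its GUID and answers via OutMessages.
using System;
using System.Data.SqlClient;
using System.Threading;

static void WatchInbox(string connectionString, Guid sessionId)
{
    while (true)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Atomically dequeue one command addressed to this session's GUID.
            var dequeue = new SqlCommand(
                "DELETE TOP (1) FROM InMessages " +
                "OUTPUT DELETED.Command " +
                "WHERE SessionId = @id", conn);
            dequeue.Parameters.AddWithValue("@id", sessionId);

            if ((dequeue.ExecuteScalar() as string) == "GET.UPDATE")
            {
                var insert = new SqlCommand(
                    "INSERT INTO OutMessages (SessionId, Payload) VALUES (@id, @p)", conn);
                insert.Parameters.AddWithValue("@id", sessionId);
                insert.Parameters.AddWithValue("@p", BuildStatusJson());  // assumed helper
                insert.ExecuteNonQuery();
            }
        }
        Thread.Sleep(500);   // poll interval: trades latency against DB load
    }
}
```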
I just wondered which of the two above ideas would be best. Messing about with sockets can quickly become a pain. But I'm worried with the database idea that if I have thousands of users, the table could begin to slow down if there are a lot of messages in the queue.
Any advice would be appreciated.
Either solution is acceptable, but if you are looking at a high user load, you may want to reconsider your approach. A WinForms solution is not going to be nearly as robust as a WCF solution if you're looking at thousands of requests. I would not recommend using a database solely for messaging, unless the results of your processes are already stored in the database. If they are, I would not recommend directly exposing the database, but rather gating database access through an exposed API. Databases are made to be highly available/scalable, so I wouldn't worry too much about load unless you are looking at a low-end database like SQLite.
If you are looking at publicly exposing the database and using it as a messaging service for whatever reason, might I suggest PostgreSQL's LISTEN/NOTIFY. Npgsql has good support for this and it's very easy to implement. PostgreSQL is also freely available, with a large community for support.
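A minimal sketch of the receiving side with Npgsql (the channel name orders is made up; the event-arg property names are those of recent Npgsql versions):

```csharp
// Sketch only: wait for NOTIFY events on a channel named "orders".
using System;
using Npgsql;

static void ListenForOrders(string connectionString)
{
    using (var conn = new NpgsqlConnection(connectionString))
    {
        conn.Open();
        conn.Notification += (sender, e) =>
            Console.WriteLine("Notified on {0}: {1}", e.Channel, e.Payload);

        using (var cmd = new NpgsqlCommand("LISTEN orders", conn))
            cmd.ExecuteNonQuery();

        while (true)
            conn.Wait();   // blocks until the server delivers a notification
    }
}
```

On the sending side, a trigger or the application simply issues NOTIFY orders, 'payload'.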

How to effectively communicate between database bound applications?

We have a number of different old-school client-server C# WinForms client-side apps that are essentially front-ends for the database. Then there is a C# server-side Windows service that waits on the client apps to submit orders and then processes them.
The way the server-side service finds out whether there is work to do is that it polls the database. Over the years, the logic for polling for waiting orders has become a lot more complicated due to the myriad of business rules. Because of this, the polling stored proc itself uses quite a bit of SQL Server resources even when there is nothing to do. Add to this the requirement that orders be processed the moment they are submitted, and you've got yourself a performance problem, as the database is being polled constantly.
The setup actually works fine right now, but the load is about to go through the roof and it is obvious that it won't hold up.
What are some effective ways to communicate between a bunch of different client-side apps and a server-side windows service, that will be more future-proof than the current method?
The database server is SQL Server 2005. I can probably get the powers that be to pony up for latest SQL Server if it really comes to that, but I'd rather not fight that battle.
There are numerous ways you can notify the clients.
You can use a ready-made solution like NServiceBus, to publish information from the server to the clients or other servers. NServiceBus uses MSMQ to publish one message to multiple subscribers in a very easy and durable way.
You can use MSMQ or another queuing product to publish messages from the server that will be delivered to the clients.
You can host a WCF service on the Windows service and connect to it from each client using a Duplex channel. Each time there is a change the service will notify the appropriate clients or even all of them. This is more complex to code but also much more flexible. You could probably send enough information back to the clients that they wouldn't need to poll the database at all.
You can have the service broadcast a UDP packet to all clients to notify them that there are changes they need to pull. You can probably include enough information in the packet to allow the clients to decide whether they need to pull data from the server or not. This is very lightweight for the server and the network, but it assumes that all clients are on the same LAN.
Perhaps you can leverage SqlDependency to receive notifications only when the data actually changes (see the sketch just after this list).
You can use any messaging middleware like MSMQ, JMS or TIBCO to communicate between your client and the service.
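To make the SqlDependency option concrete, here is a rough sketch; the query and table are placeholders, Service Broker must be enabled on the database, and the query has to follow the notification rules (two-part table names, no SELECT *, etc.):

```csharp
// Sketch only: one-shot change notification via SqlDependency.
using System.Data.SqlClient;

class OrderWatcher
{
    readonly string connectionString;

    public OrderWatcher(string connectionString)
    {
        this.connectionString = connectionString;
        SqlDependency.Start(connectionString);   // once per app domain
        Subscribe();
    }

    void Subscribe()
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT OrderId, Status FROM dbo.Orders WHERE Status = 'Waiting'", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (sender, e) => Subscribe();  // one-shot: re-arm, re-read

            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read()) { /* hand waiting orders to the processor */ }
        }
    }
}
```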
By far the easiest, and most likely the cheapest, answer is to simply buy a bigger server.
Barring that, you are in for a development effort that has a high probability of early failure. By failure I don't mean that you end up scrapping whatever it is you build. Rather, I mean you launch the changes and orders get screwed up while you are debugging your myriad of business rules.
Quite frankly, I wouldn't consider approaching a communications change under pressure, given your statement about load going "through the roof" in the near term.
If your risk exposure is such that it has to be 100% functional on day one (which is normal when you are expecting a large increase in orders), with no hiccups, then just upsize the DB server. Heck, I wouldn't even install the latest SQL Server on it. Instead, just buy a larger machine, install the exact same OS and DB server (and patch levels), and move your database.
Then look at your architecture to determine what needs to go away and what can be salvaged.
If everybody connects to SQL Server, then there is also the option of Service Broker. Unlike the other messaging/queueing solutions recommended so far, it is entirely contained in your database (no separate product to deploy, administer and configure), it offers a single story vis-a-vis your backup/recovery and high-availability needs (no separate backup for the message store, no separate DR/HA; whatever your DB solution is, it is also your messaging solution), and it offers a uniform programming API (SQL).
Even when everything is within one single SQL Server instance (i.e. there is no need to communicate over the network between multiple SQL Server instances), Service Broker still has an ace that no one can match: activation. With activation you completely eliminate the need to poll, because the system itself will launch your processing code (will 'activate' it) when there are events to process. The processing code can be internal (a T-SQL procedure or a SQLCLR .NET procedure) or external (see the external activator).
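Whether the processing code is an activated procedure or an external service, it drains the queue with RECEIVE rather than polling; here is a rough C# sketch against a hypothetical dbo.TargetQueue (the queue, contract and message types would be created in T-SQL beforehand):

```csharp
// Sketch only: block server-side until a Service Broker message arrives,
// instead of polling. Conversation cleanup is omitted for brevity.
using System.Data.SqlClient;
using System.Text;

static void DrainQueue(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        while (true)
        {
            using (var cmd = new SqlCommand(
                "WAITFOR (RECEIVE TOP (1) conversation_handle, message_type_name, " +
                "message_body FROM dbo.TargetQueue), TIMEOUT 5000;", conn))
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read()) continue;            // timed out; loop again
                string messageType = reader.GetString(1);
                string body = reader.IsDBNull(2)
                    ? ""
                    : Encoding.Unicode.GetString((byte[])reader[2]);
                // dispatch 'body' according to 'messageType' here
            }
        }
    }
}
```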

Intense Distributed C# (WCF) Architecture Design

I want to design a new distributed application, but I have a few queries that I need some genius advice on, hopefully from you people:
Scenario
I currently support a legacy application that is starting to fall through the cracks.
It is a distributed client-server app implemented using .NET Remoting. I can't explain exactly what it does, because I'm not allowed to... but let's just say that it does LOTS of MATHS. I want to re-design and re-write the application using WCF.
Pre-requisites
The server side of the implementation will be hosted in a Windows Service.
The client side will be a windows forms application.
The server side will perform lots of memory-intensive processing.
The server will spit this data out to multiple thin clients (20-ish).
The majority of the time the server will be passing data to the clients, but occasionally the clients will be persisting data back to the server.
The speed at which the data is transmitted is highly important; however, I'm well aware that WCF can handle fast distribution of data.
Encryption/Security is not that important as the app will run on a highly protected local network.
Queries
Given the information above:
1) What sort of design pattern am I best going with? - Bearing in mind I want the server to continually PUSH the newly calculated information immediately to the clients, as opposed to the current implementation, which involves the client pulling from the server continuously.
2)What type of WCF binding should I use to ensure maximum speed of data transfer? (as close to real-time as possible is what I'm after)
3)Should I use a class library to share the common objects between the client and the server applications?
4)What is the best way in which to databind my objects on the client side in order to see live updates continually as data changes?
If I've forgotten anything, feel free to point it out.
Help greatly appreciated.
1) What sort of design pattern am I best going with?
Based on your comments, you're wanting to transform the current polling mechanism to an event-based mechanism. That is, instead of the client constantly checking the server for results, have the server notify the client when a new calculation result is available.
I would recommend using Juval Lowy's Publish-Subscribe Framework for this.
[Diagram: Publishing Client -> Pub/Sub Service -> Subscribing Clients (source: microsoft.com)]
This framework is described in detail in this MSDN article. And you can download the framework's source code for free at Lowy's website, IDesign.net.
Basically, the server logic that performs the calculations inside the Windows service is the Publishing Client in the graphic, and the various WinForm applications are the Subscribing Clients. The Pub/Sub Service lives in your Windows service. It manages the list of subscribing clients and provides a single endpoint for your server to publish calculation results to. In this way, your server performs a calculation and publishes the result once to the Pub/Sub Service endpoint. The Pub/Sub Service is then responsible for publishing the result to the subscribed clients.
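In rough outline, the contracts for such a pub/sub setup look like this; a sketch only, and the type and member names below are made up rather than Lowy's actual framework API:

```csharp
// Illustrative shape of the duplex contracts; names are made up here.
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class CalculationResult
{
    [DataMember] public double Value { get; set; }
}

public interface ICalculationCallback
{
    [OperationContract(IsOneWay = true)]
    void OnResultPublished(CalculationResult result);
}

[ServiceContract(CallbackContract = typeof(ICalculationCallback))]
public interface ICalculationPubSub
{
    [OperationContract] void Subscribe();
    [OperationContract] void Unsubscribe();

    // Called by the publishing server; fanned out to all subscribers.
    [OperationContract(IsOneWay = true)]
    void Publish(CalculationResult result);
}
```

The Pub/Sub Service keeps the callback channel each subscriber hands it (via OperationContext.Current.GetCallbackChannel<ICalculationCallback>()) and invokes OnResultPublished on every stored channel whenever the server calls Publish.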
2) What type of WCF binding should I use to ensure maximum speed of data transfer?
If all of your WCF communication were on a single machine, you'd want to use the NetNamedPipeBinding. However, since you will be distributed, you want to use the NetTcpBinding.
For WCF binding decisions, I have found this chart useful.
3) Should I use a class library to share the common objects between the client and the server applications?
Since you are in control of both the client and server side, I would highly recommend sharing a class library instead of using Visual Studio's "Add Service Reference" feature. For a detailed discussion of this, refer to this SO question-and-answer.
4) What is the best way in which to databind my objects on the client side in order to see live updates continually as data changes?
I suspect this will depend on what controls you use to display the data. One way that immediately comes to mind would be to have your client fill an in-memory data table as each calculation result is received. This data table could then be bound to a ListBox control, for example, that shows the results in calculation order.
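As a hedged sketch of that idea: a BindingList raises change notifications that bound WinForms controls pick up automatically; just remember that WCF callbacks arrive on worker threads, so marshal onto the UI thread first (resultsListBox and CalculationResult below are assumed names):

```csharp
// Sketch only: bind a BindingList so each published result appears live.
using System.ComponentModel;
using System.Windows.Forms;

public partial class ResultsForm : Form
{
    readonly BindingList<CalculationResult> results =
        new BindingList<CalculationResult>();

    public ResultsForm()
    {
        InitializeComponent();
        resultsListBox.DataSource = results;      // a ListBox placed on the form
        resultsListBox.DisplayMember = "Value";
    }

    // Invoked from the WCF callback, which arrives on a worker thread,
    // so marshal onto the UI thread before touching the bound list.
    public void OnResultPublished(CalculationResult result)
    {
        BeginInvoke((MethodInvoker)(() => results.Add(result)));
    }
}
```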
This looks to me like you need to implement the Observer pattern, but distributed: new calculations are made by the service, and WCF just happens to be the mechanism by which you push your notification back to the client.
Generally speaking, you have your business logic housed in a windows service, whereby a type is a Subject (Observable). You could publish an endpoint for clients to register for notifications. This would be a WCF service, with potentially two operations:
RegisterClient(...)
UnregisterClient(...)
When a client is registered with the service, it can receive updates. Broadly speaking, when the service has finished calculating a result, it iterates through all registered clients and initiates a push, the push being a communication through an endpoint on the client.
A client endpoint might typically be
Notify(Result...);
And your server simply calls that when it has new data...
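In sketch form, the push is just a loop over the registered callbacks (INotifyCallback, Result, and the list management below are assumptions, not a prescribed API):

```csharp
// Sketch only: register duplex callbacks, then walk them on each new result,
// dropping clients that have gone away.
using System.Collections.Generic;
using System.ServiceModel;

public interface INotifyCallback
{
    [OperationContract(IsOneWay = true)]
    void Notify(Result result);   // Result: your calculation payload (assumed)
}

[ServiceContract(CallbackContract = typeof(INotifyCallback))]
public interface IRegistration
{
    [OperationContract] void RegisterClient();
    [OperationContract] void UnregisterClient();
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class CalculationService : IRegistration
{
    readonly List<INotifyCallback> clients = new List<INotifyCallback>();

    public void RegisterClient()
    {
        var cb = OperationContext.Current.GetCallbackChannel<INotifyCallback>();
        lock (clients) clients.Add(cb);
    }

    public void UnregisterClient()
    {
        var cb = OperationContext.Current.GetCallbackChannel<INotifyCallback>();
        lock (clients) clients.Remove(cb);
    }

    // Called whenever a new result is ready: push it to every subscriber.
    void PushResult(Result result)
    {
        lock (clients)
        {
            for (int i = clients.Count - 1; i >= 0; i--)
            {
                try { clients[i].Notify(result); }
                catch (CommunicationException) { clients.RemoveAt(i); } // client gone
            }
        }
    }
}
```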
Typically you'd use TCP to maximise throughput.
This is by no means exactly what you should do, but perhaps it's a direction to start in?

Email Notification Service

What are some ideas (using .NET and SQL 2005) for implementing a service that sends emails? The emails are to be data-driven. The date and time an email is to be sent is a field in a table.
I have built several high-volume email notification services for sending data-driven emails in the past. A few recommendations:
Look into using a high-quality email service provider that specializes in managing bounces, unsubscribes, ISP and blacklist management, etc. If sending email is critical to your business but not your main business, it will be worth it. Most will have an API for sending templated messages, click tracking and open rates, and will provide triggers etc.
Look into SQL Server Service Broker to queue the actual messages; otherwise, consider Microsoft Message Queuing Services. There is no need to build your own queuing service. We spent too much time dealing with queuing infrastructure code when this was already solved.
Develop a flexible set of events on your business tier to allow for the triggering of such messages, and put them in your queue asynchronously; this will save you a lot of grief in the long run, as opposed to polling the DB or hacking it in with database triggers.
You can use triggers to send emails on UPDATE/DELETE/INSERT. The triggers can be implemented in .NET; just send mails from there using the classes in the System.Net.Mail namespace.
Here is a good article on how to implement CLR (.NET) triggers.
For a lightweight SMTP server, and to minimize delays, you can use the one recommended in Kenny's answer.
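For illustration only, a bare-bones version of such a trigger might look like the following; every name is a placeholder, and the assembly would need EXTERNAL_ACCESS permission to reach an SMTP server from inside SQL Server:

```csharp
// Sketch only, not production advice: a SQLCLR trigger that mails on insert.
using System.Net.Mail;
using Microsoft.SqlServer.Server;

public class ReminderTriggers
{
    [SqlTrigger(Name = "NotifyOnInsert", Target = "dbo.Reminders", Event = "FOR INSERT")]
    public static void NotifyOnInsert()
    {
        // Synchronous send; see the caveat below.
        var client = new SmtpClient("smtp.example.com");
        client.Send("noreply@example.com", "ops@example.com",
                    "New reminder", "A reminder row was just inserted.");
    }
}
```

Bear in mind that the Send call runs synchronously inside the trigger's transaction, which is exactly why the queue-based suggestions above move the actual sending out of band.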
Thanks everyone for the feedback. For simplicity's sake I've started out with a SP that looks up the reminders to be sent and uses sp_send_dbmail (SQL Database Mail) to send the emails. This runs on a job every minute. I update the record to indicate the reminder was sent with the MailItemId sent back from sp_send_dbmail. The volume of reminders expected is worst case in the 10^2 range per day.
I'd love to hear feedback about any shortcomings people think this solution may have.
By the way, I can't believe Vista doesn't come with a local SMTP server! Luckily Google is more generous; I used Gmail's server for testing.
Usually, I just spin up a process such as http://caspian.dotconf.net/menu/Software/SendEmail/
I was going to suggest SQL Server Notification Services, which will handle the job nicely. But I see that's been dropped from SQL Server 2008, so you probably don't want to go there.
Data Driven SSRS Subscriptions? Just a thought.

How should server push data to rich client

I'm writing a simple accounting program consisting of several C# WinForms clients and a Java server app that reads/writes data in a database. One of the requirements is that all C# clients should receive updates from the server. For example, if user A creates a new invoice from his C# client, other users should see this new invoice from their clients.
My experience is mainly in web development, and I don't know the best way to fulfill this requirement with C# clients and a Java servlet server.
My initial thought is to run ActiveMQ with Glassfish and use the messaging pub/sub method so that updates can be pushed to the C# clients. I will create different topics like newInvoice, cancelInvoice, etc. in order to differentiate the message types. Each message will simply contain the object encoded in JSON.
But it seems to me that this involves quite a lot of work. Given that my user base is very small (just 3 or 4 concurrent users), it seems to me that there should be some simpler solutions. (I'm not familiar with socket programming :) )
I know this is a client-server programming 101 question, but it would be great if any experienced programmer could point me to some simple solutions.
The simplest approach here is often to simply use a poll - i.e. have the clients query for data every (your time interval). That avoids a whole family of issues (firewalls, security, line-of-sight, resolution, client-tracking, etc).
With WCF, you can have callbacks on duplex channels (allowing the server to actively send a message to clients), but this is more complex. I value simplicity, so I usually just poll.
A trick that helps here is designing the system to have an inbuilt mechanism for querying "changes since x" - for example, an audit table, perhaps fed by database triggers. The exact details vary per project, of course.
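A hedged sketch of that "changes since x" poll, against a hypothetical dbo.AuditLog table with a monotonically increasing key:

```csharp
// Sketch only: fetch just the audit rows added since the last id we saw.
using System.Data.SqlClient;

static long PollChanges(string connectionString, long lastSeenId)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "SELECT AuditId, EntityType, EntityId FROM dbo.AuditLog " +
        "WHERE AuditId > @last ORDER BY AuditId", conn))
    {
        cmd.Parameters.AddWithValue("@last", lastSeenId);
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                lastSeenId = reader.GetInt64(0);
                // refresh the affected entity locally, e.g. re-fetch the invoice
            }
        }
    }
    return lastSeenId;   // feed back in on the next poll tick
}
```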
Another option that you might want to look at is ADO.NET Sync Services; this does much of what you ask for in keeping a local copy of the database up to date with the server, but has a few complexities of its own. This is available (IIRC) in the "Local Database Cache" VS template.
Rather than pushing information from the server to 1:N clients, would it not be easier to have the clients poll the server for updates every so often? Or, when the client launches and creates a connection to the server, the server could dynamically generate a new message queue for that client connection, which the client could then poll for updates.
There are several push technologies available to you, like ActiveMQ (as you mentioned), or XMPP. But if you only have 3 or 4 clients to concern yourself with, polling would be the simplest solution. It doesn't scale well, but that isn't really a concern in your case, unless your server is an 8086 or something 8-)
You may want to take a look at StreamHub Push Server - its a popular Comet server written in Java that has a .NET Client SDK for receiving updates from the server in C#. It also has a Java Client SDK and the usual Ajax/Comet web browser support giving you more flexibility in the future to push data to web, Java and C# clients.
