Embeddable Queues? - C#

I have an Application that collects actions and sends them off to a remote server. As these actions aren't time critical (think of them as log lines), I want to queue them up and send them in batches.
At the same time, I want to ensure that no message is ever lost (unless the hard drive crashes).
MSMQ seems rather heavyweight, arcane and weird to use. Also, it needs to be installed as a system component.
Serializing my messages into JSON and storing them in SQLite is trivial and straightforward, but before I do that, I wonder if there is a standardized (preferably AMQP-compatible) queue that doesn't require installation and can be embedded into an app?
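For reference, the SQLite route I have in mind is only a few lines; a minimal sketch, assuming the Microsoft.Data.Sqlite and System.Text.Json packages (table, class, and file names are just placeholders):

```csharp
// Minimal sketch of the SQLite fallback: serialize each action to JSON and append it to a table.
// Assumes the Microsoft.Data.Sqlite and System.Text.Json packages; all names are illustrative.
using System;
using System.Text.Json;
using Microsoft.Data.Sqlite;

public record LogAction(DateTime TimestampUtc, string Message);

public class SqliteActionQueue
{
    private const string ConnectionString = "Data Source=actions.db";

    public SqliteActionQueue()
    {
        using var conn = new SqliteConnection(ConnectionString);
        conn.Open();
        using var cmd = conn.CreateCommand();
        cmd.CommandText = "CREATE TABLE IF NOT EXISTS Actions (Id INTEGER PRIMARY KEY AUTOINCREMENT, Payload TEXT NOT NULL)";
        cmd.ExecuteNonQuery();
    }

    public void Enqueue(LogAction action)
    {
        using var conn = new SqliteConnection(ConnectionString);
        conn.Open();
        using var cmd = conn.CreateCommand();
        cmd.CommandText = "INSERT INTO Actions (Payload) VALUES ($json)";
        cmd.Parameters.AddWithValue("$json", JsonSerializer.Serialize(action));
        cmd.ExecuteNonQuery();   // durable once committed; the sender later SELECTs a batch and DELETEs rows after a successful upload
    }
}
```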

I really think you should reconsider MSMQ.
It is installed by default in the Server versions of Windows.
Installation on non-Server versions of Windows is trivial.
It provides a built-in UI for observing the queues.
I don't know what your standards are for 'heavyweight and arcane', but I just used it for the first time in a project and it was the easiest part of the application. I certainly don't think it's much more heavyweight than storing the queue in a database yourself.
If you prefer to use JSON, you can serialize the messages yourself and store them as strings.
You can configure a queue to be recoverable, so the queue is stored on disk rather than in memory (see the sketch below).
The only serious objection that I can see is having to install MSMQ. If you are having to deploy this application far and wide on different versions of Windows, I can see that as a significant problem.
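For what it's worth, marking a message as recoverable is a one-liner with System.Messaging; a rough sketch (the queue path and payload are made up):

```csharp
using System.Messaging;   // reference System.Messaging.dll (.NET Framework)

static class ActionLogQueue
{
    private const string QueuePath = @".\Private$\ActionLog";   // illustrative local private queue

    public static void Send(string json)
    {
        // Create the queue once if it doesn't exist yet.
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
        {
            var message = new Message
            {
                Body = json,          // e.g. your JSON payload as a string
                Recoverable = true    // persisted to disk, survives service/machine restarts
            };
            queue.Send(message);
        }
    }
}
```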

Graylog2 is a centralized logging solution that accepts log entries from AMQP messages. Perhaps you could adapt it to your use-case.
In any event, Graylog2 shows that AMQP works for jobs like collecting log messages without losing any.
AMQP doesn't require installation, because it is a protocol. You just need the client library for .NET. However you would need to install an MQ broker on a server somewhere on your LAN to manage the message flow. RabbitMQ is widely used because it is easy to install.
Also, once you start sending messages, you will need a process somewhere on the network that receives them and does something with them, such as writing them to a database.

If you want a homebrew solution, you could install RabbitMQ on the logging server, embed RabbitMQ's .NET client into your application, then write a small program to read from the queue and write the events to disk.
RabbitMQ is fairly lightweight: the default install is only a few MB, and it normally uses about 11 MB of memory to run. It also provides an extension to AMQP, Publisher Confirms, which can be used to ensure that once the server accepts the log message, it will not be lost unless the hard disk dies. The extension is non-standard, though, and it's probably not supported by other brokers.
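A rough sketch of publishing with confirms from the .NET client (assuming the RabbitMQ.Client package and its pre-7.x synchronous API; host and queue names are made up):

```csharp
using System;
using System.Text;
using RabbitMQ.Client;   // RabbitMQ.Client NuGet package, pre-7.x synchronous API

class LogPublisher
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "logserver" };   // illustrative host
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.ConfirmSelect();                              // enable publisher confirms
            channel.QueueDeclare("log-events", durable: true,     // durable queue survives a broker restart
                                 exclusive: false, autoDelete: false, arguments: null);

            var props = channel.CreateBasicProperties();
            props.Persistent = true;                              // ask the broker to write the message to disk

            var body = Encoding.UTF8.GetBytes("{\"action\":\"something happened\"}");
            channel.BasicPublish(exchange: "", routingKey: "log-events",
                                 basicProperties: props, body: body);

            channel.WaitForConfirmsOrDie(TimeSpan.FromSeconds(5));   // block until the broker confirms, or throw
        }
    }
}
```

WaitForConfirmsOrDie throws if the broker has not acknowledged the publish, so the application only discards its local copy once the message is safely on the broker.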

Related

Simple persistent message queue in C# for a single process

I'm doing a simple messaging system for Windows Mobile in C#. The application consists of sending and receiving simple text messages via a web service. The message queue should be persistent, avoiding data loss if the connection with the web service fails or the application crashes.
I know about MSMQ, RabbitMQ and DotNetMQ, but they would have to be installed on the device, and these are really simple devices; I don't want to install any additional tools on each of the mobiles just for this simple task.
I have already implemented a function that writes an XML-serialized queue of messages to a file, and I read from and write to this file all the time.
I'd appreciate any better idea to solve this problem.
Thanks
MSMQ does not need to be installed; it is supported natively on Windows Mobile 6.5 devices. BTW: there are still many vendors in the industrial sector providing WM 6.5 based devices, so this is not yet outdated.
The Windows Mobile (CE) based MSMQ is persistent and simple to use. It is normally used for inter-process communication on the device or for client-server communication (which requires MSMQ installed on the 'server').
So the main thread creates an MSMQ queue, one thread in your process fills the queue, and another can 'peek' and, after successful transmission, 'dequeue' messages from the same queue. See here for a simple example.
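A rough illustration of that peek-then-dequeue pattern using System.Messaging (the Compact Framework exposes a similar API); the queue path and the transmission helper are made up:

```csharp
using System.Messaging;

class OutboxWorker
{
    // Hypothetical transmission helper; replace with the real web service call.
    static bool TrySendToWebService(string text) { return true; }

    static void ProcessOne()
    {
        using (var queue = new MessageQueue(@".\Private$\Outbox"))   // illustrative queue path
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

            // Peek leaves the message in the queue while we try to transmit it.
            using (Message peeked = queue.Peek())
            {
                string text = (string)peeked.Body;
                if (TrySendToWebService(text))
                    queue.Receive();   // only after a successful send is the message actually dequeued
            }
        }
    }
}
```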
I don't really know what's available for Windows Mobile, but you can try using a basic queue (normal or concurrent, depending on your app) accompanied by two files. Write everything that is enqueued to one "enqueue log" file and write everything that is dequeued to another "dequeue log" file.
These two files always give you enough information to restore your queue, and you don't need to fully rewrite or fully serialize your queue. It does need to be implemented by hand, though.
About dequeuing:
For example, let's say I have a queue with three messages: "one", "two", "three". Now I want to send the next (also the first) message, "one". I append the line "one - starting removal from queue" to my "dequeue log", then I dequeue "one" from my queue object and send it where I want it to be sent. When it has been sent, I append " - finished removal from queue" to my "dequeue log". Now I have the line "one - starting removal from queue - finished removal from queue" in my log file.
It doesn't matter when I crash; I'll always be able to restore the state of the queue object (at least for now I fail to see any logical mistakes in this process). So IMHO it's not tricky, but still, some code has to be written, and it would be a few pages of code.
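A rough sketch of that idea, just to show the shape of the code (file names and log format are made up; the restore-on-startup replay is left as a comment):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

public class LoggedQueue
{
    private readonly Queue<string> _queue = new Queue<string>();
    private const string EnqueueLog = "enqueue.log";
    private const string DequeueLog = "dequeue.log";

    // On startup, replay both logs: everything enqueued, minus every entry with a
    // "finished removal" marker, rebuilds the in-memory queue after a crash.

    public void Enqueue(string message)
    {
        File.AppendAllText(EnqueueLog, message + Environment.NewLine);   // log first, then mutate memory
        _queue.Enqueue(message);
    }

    public void SendNext(Action<string> send)
    {
        string message = _queue.Peek();
        File.AppendAllText(DequeueLog, message + " - starting removal from queue");
        send(message);   // if we crash here, the dequeue log shows an unfinished removal
        File.AppendAllText(DequeueLog, " - finished removal from queue" + Environment.NewLine);
        _queue.Dequeue();
    }
}
```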
Of course, a better idea is to use SQLite.
I hope this helps.

Architecture of .NET MSMQ-based synchronization system

I have a straightforward, existing ASP.NET MVC web solution. The server-based stuff writes information to a database. I am now going to integrate/synchronize this system with a number of other 3rd-party systems. I want to separate the integration processing from the existing core processing, leaving the existing system as untouched as possible.
My plan is as follows:
Whenever a database write occurs on the core system server, I will write a message to an MSMQ queue.
An entirely separate server-based Windows service will poll that queue, look at each message, and write messages to one or more 'outbound' sync MSMQ queues.
Other Windows services will monitor the 'outbound' sync queues and will talk to the 3rd-party systems as necessary, managing the outbound synchronization.
I have a couple of questions:
Should I have a single Windows service doing all this, or should I have several services: one central 'routing' service and one for each 3rd-party system?
Should I use WCF for any of this? Does that buy me anything, given that the 'trigger' for writing to the initial queue is already 'happening' in a server-based process?
Thanks very much.
To answer your questions:
Should I have a single windows service doing all this
Definitely not. What if you want to scale out the routing service, or relocate it?
Should I use WCF
If you have your heart set on MSMQ, then the only advantage WCF gives you is a convenient, proven way to design and host your service endpoints, and an alternative to mucking around in System.Messaging. I would say at this stage it doesn't matter that much.
Does that buy me anything
Not sure what you mean, but as Wiktor says in his post, you could choose not to use vanilla .NET or WCF and instead use a service bus framework such as MassTransit or NServiceBus.
The benefit here is that it abstracts you away from the messaging subsystem, so you could in theory move away from MSMQ in the future to RabbitMQ or Azure queues.
First, a separate Windows service is always safer than any attempt to integrate this with your ASP.NET runtime.
Second, do not write anything by yourself. Use
http://code.google.com/p/masstransit/
It is straightforward and does everything you need. Reference the library from their NuGet package, read some tutorials, and you will love it.
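The MassTransit API has moved on quite a bit since that Google Code page; as a rough, current-style sketch (in-memory transport to keep it self-contained, with an illustrative message contract and consumer; you would configure your real transport instead):

```csharp
using System;
using System.Threading.Tasks;
using MassTransit;

// Illustrative message contract for the sync events.
public record DatabaseRowChanged(string Table, int RowId);

// A consumer that the routing/sync service would host.
public class DatabaseRowChangedConsumer : IConsumer<DatabaseRowChanged>
{
    public Task Consume(ConsumeContext<DatabaseRowChanged> context)
    {
        Console.WriteLine($"Sync {context.Message.Table}/{context.Message.RowId} to the 3rd-party systems");
        return Task.CompletedTask;
    }
}

public static class Program
{
    public static async Task Main()
    {
        // In-memory transport keeps the sketch runnable; a real deployment would
        // configure RabbitMQ, Azure Service Bus, etc. here instead.
        var bus = Bus.Factory.CreateUsingInMemory(cfg =>
        {
            cfg.ReceiveEndpoint("core-sync", e => e.Consumer<DatabaseRowChangedConsumer>());
        });

        await bus.StartAsync();
        await bus.Publish(new DatabaseRowChanged("Orders", 42));   // called wherever the core system writes to the DB
        await Task.Delay(500);                                     // give the consumer a moment in this toy example
        await bus.StopAsync();
    }
}
```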

How to effectively communicate between database bound applications?

We have a number of different old-school client-server C# WinForms apps that are essentially front-ends for the database. Then there is a C# server-side Windows service that waits for the client apps to submit orders and then processes them.
The way the server-side service finds out whether there is work to do is that it polls the database. Over the years the logic of polling for waiting orders has become a lot more complicated due to the myriad of business rules. Because of this, the polling stored procedure itself uses quite a bit of SQL Server resources even when there is nothing to do. Add to this the requirement that orders be processed the moment they are submitted, and you've got yourself a performance problem, as the database is being polled constantly.
The setup actually works fine right now, but the load is about to go through the roof and it is obvious that it won't hold up.
What are some effective ways to communicate between a bunch of different client-side apps and a server-side windows service, that will be more future-proof than the current method?
The database server is SQL Server 2005. I can probably get the powers that be to pony up for latest SQL Server if it really comes to that, but I'd rather not fight that battle.
There are numerous ways you can notify the clients.
You can use a ready-made solution like NServiceBus, to publish information from the server to the clients or other servers. NServiceBus uses MSMQ to publish one message to multiple subscribers in a very easy and durable way.
You can use MSMQ or another queuing product to publish messages from the server that will be delivered to the clients.
You can host a WCF service on the Windows service and connect to it from each client using a Duplex channel. Each time there is a change the service will notify the appropriate clients or even all of them. This is more complex to code but also much more flexible. You could probably send enough information back to the clients that they wouldn't need to poll the database at all.
You can have the service broadcast a UDP packet to all clients to notify them there are changes they need to pull. You can probably add enough information to the packet to allow the clients to decide whether they need to pull data from the server or not. This is very lightweight for the server and the network, but it assumes that all clients are on the same LAN.
Perhaps you can leverage SqlDependency to receive notifications only when the data actually changes.
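A minimal sketch of how that could look (the table, columns and connection string are illustrative; SqlDependency requires Service Broker to be enabled on the database):

```csharp
using System;
using System.Data.SqlClient;

class OrderWatcher
{
    const string ConnStr = "Data Source=.;Initial Catalog=Orders;Integrated Security=true";   // illustrative

    static void Main()
    {
        SqlDependency.Start(ConnStr);   // sets up the notification infrastructure for this connection string
        Subscribe();
        Console.ReadLine();
        SqlDependency.Stop(ConnStr);
    }

    static void Subscribe()
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT OrderId, Status FROM dbo.Orders WHERE Status = 'Pending'", conn))   // query must follow the notification rules (two-part names, no SELECT *)
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (sender, e) =>
            {
                Console.WriteLine("Pending orders changed: " + e.Info);
                Subscribe();   // a subscription fires only once, so re-register after each notification
            };

            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* executing the command primes the subscription */ }
            }
        }
    }
}
```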
You can use any messaging middleware like MSMQ, JMS or TIBCO to communicate between your client and the service.
By far the easiest, and most likely the cheapest, answer is to simply buy a bigger server.
Barring that, you are in for a development effort that has a high probability of early failure. By failure I don't mean that you end up scrapping whatever it is you build. Rather, I mean you launch the changes and orders will be screwed up while you are debugging your myriad of business rules.
Quite frankly, I wouldn't consider approaching a communications change under that kind of pressure, presuming your statement about load going "through the roof" in the near term is accurate.
If your risk exposure is such that it has to be 100% functional on day one (which is normal when you are expecting a large increase in orders), with no hiccups, then just upsize the DB server. Heck, I wouldn't even install the latest SQL Server on it. Instead, just buy a larger machine, install the exact same OS and DB server (and patch levels), and move your database.
Then look at your architecture to determine what needs to go away and what can be salvaged.
If everybody connects to SQL Server, then there is also the option of Service Broker. Unlike the other messaging/queueing solutions recommended so far, it is entirely contained in your database (no separate product to deploy, administer and configure), it offers a single story vis-a-vis your backup/recovery and high-availability needs (no separate backup for the message store, no separate DR/HA; whatever your DB solution is, it is also your messaging solution), and it offers a uniform programming API (SQL).
Even when everything is within one single SQL Server instance (i.e. there is no need to communicate over the network between multiple SQL Server instances), Service Broker still has an ace that no one can match: activation. With activation you completely eliminate the need to poll, because the system itself will launch your processing code (will 'activate' it) when there are events to process. The processing code can be internal (a T-SQL procedure or a SQLCLR .NET procedure) or external (see the external activator).

Server Push vs Client Pull for Agent-Server Topology

I need to create a system comprising of 2 components:
A single server that processes and stores data. It also periodically sends out updates to the agents.
Multiple agents that are installed at remote endpoints. These collect data in (often, but not always) long-running operations, and this data needs to get to the server.
I'm using C# .NET, and ideally I want to use a standards-compliant communications method (i.e. one that could theoretically work with Java too, as we may well also use Java agents in the future). Are there any alternatives to web services? What are my options?
The way I see it I have 3 options using web services, and have made the following observations:
Client pull
No open port required at the agent, as it acts like a client
Would need to poll the server for updates
Server push
Open port at the agent, as it acts like a server
Server must poll agents for results
Hybrid
Open port at the agent, as it acts like both a client and a server
No polling; server pushes out updates when required, client sends results when they are available
The 'hybrid' (where agents are both client and server) seems the obvious choice - but this application will typically be installed in enterprise and government environments, and I'm concerned they may have an issue with opening a port at the agent. Am I dwelling too much on this?
Are there any other pros and cons I've missed out?
Our friends at http://www.infrastructures.org swear by pull-based mechanisms: http://www.infrastructures.org/papers/bootstrap/bootstrap.html
A major reason why they prefer client-pull over server-push is that clients may be down, and clients must (in general) apply all the operations pushed by servers. If this criteria isn't important in your case, perhaps their conclusion won't be your conclusion, but I do think it is worth reading the "Push vs Pull" section of their paper to determine for yourself.
I would say that in this day and age you can seriously consider only pull technologies. The problem with push is that clients are often hidden behind Network Address Translation (NAT) devices like wireless routers, broadband modems or company firewalls, and they are, more often than not, unreachable from the server.
Making outbound connections ('phoning home'), especially on well-known ports like HTTP/HTTPS, can basically be assumed to be possible even on the most restricted networks.
If you use some kind of messaging server (JMS for Java; I'm not sure what the C# equivalent is), then the messaging server is the only machine that needs to open a port, and you can have two-way communication from your agent to the messaging server and from the server to the messaging server. This would allow you to accomplish the hybrid model without needing to open a port on the agent.
IMHO, your best option is the pull option; it can satisfy your main system requirements as follows:
The first part (data needs to get to the server) can obviously be done by invoking a web method that sends that data as a parameter.
The second part (the server periodically sends out updates to the agents) can still be done through regular client pulls, via a web service method that "asks" for the updates since the last pull (some sort of timestamp to pick up the updates it missed); a minimal sketch of this follows below.
The hybrid method seems a bit weird to me, given that I think of an agent as a part of the system that might go "offline" quite often. What would the server do if a push failed? That's usually a tough question/decision, especially if you're not sure whether it is an intended "going offline" or a system/network failure, etc.
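A minimal sketch of that incremental pull, assuming a plain HTTP endpoint (the URL, query parameter and class name are made up):

```csharp
// The agent remembers the timestamp of its last successful pull and asks the server
// only for the updates it has missed since then.
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class UpdatePoller
{
    private static readonly HttpClient Http = new HttpClient();
    private DateTime _lastPullUtc = DateTime.MinValue;

    public async Task PollOnceAsync()
    {
        string url = $"https://server.example.com/api/updates?since={_lastPullUtc:O}";
        string updatesJson = await Http.GetStringAsync(url);   // server returns whatever the agent missed

        // ... apply the updates locally ...

        _lastPullUtc = DateTime.UtcNow;   // advance the watermark only after a successful pull
    }
}
```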

How would you notify clients about changed data on the server using .NET 2.0?

Imagine a WinForms client app that displays fairly complex calculated data fetched from a server app with .NET Remoting over an HttpChannel.
Since the client app might be running for a whole workday, I need a method to notify the client that new data is available so the user is able to start a reload of the data when he needs to.
Currently I am using remoted .NET events, serializing the event to the client and then rethrowing the event on the client side.
I am not very happy with this setup and plan to reimplement it.
Important for me is:
.NET 2.0-based technology
ease of use
low complexity
robust enough to survive a server or client restart and still be functional
When limited to .NET 2.0, how would you implement such a feature? What technologies / libraries would you use?
I am looking for inspiration on how to attack the problem.
Edit:
The client and server exist in the same organisation, typically a LAN, perhaps a WAN/VPN situation.
This mechanism should only make the client aware that there is new data available. I'd like to keep remoting for getting the actual data to the client since that is working pretty well. MSMQ comes with Windows, doesn't it? So it should be OK to use it, but I'm open to any alternative.
I've implemented a similar notification mechanism using MSMQ. The client machine opens a local, public queue, and then advises the server of its queue name. When changes occur, the server pushes notifications into all the client queues that it has been made aware of. This way the client will know that data is ready, even if it wasn't running when the notification was sent.
The only downside is that it requires MSMQ on the clients, so this may not work if you don't have that kind of control over your client's machines.
For an extra level of redundancy (for example, if a client machine is completely down, and therefore the client queue is unavailable) you could queue notifications on the server prior to dissemination to clients. Notifications in the server queues are only removed when the client is successfully contacted (or perhaps after 3 failed attempts, etc.)
Also in that regard, if the server fails to deliver messages to a client a measured number of times, over a measured period of time, then support entities are notified, error alerts go out, and the client queue is removed from the list of destinations. When I say "measured" I mean a frequency/duration that makes sense to the setting. In my case, it was 5 retries with 5 minute intervals between attempts.
It might also make sense to have the client "renew" its notification subscription at intervals. If a renewal doesn't occur, then eventually the client queue is removed from the destination list by a "groomer" process in the service.
It sounds as though you need to implement a message-queue based solution. It is easy to implement, can survive reboots, and the technology is mature both on the server (MSMQ, MQSeries) and on the client (System.Messaging).
If you can't find anything built-in, and assuming you know the addresses of all the clients, you could send them a UDP message when data changes. Using UdpClient, this is very easy. The datagram doesn't even need to contain any data if the client app can assume that any UDP data on a certain port means it needs to get new data from the server.
If necessary, you can even make this a broadcast packet (if you don't know who the clients are and they are on the same subnet as the server), so long as the server isn't too "chatty".
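A rough sketch of both halves with UdpClient (the port number is arbitrary):

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;

static class ChangeNotifier
{
    const int Port = 9050;   // arbitrary well-known port, agreed between server and clients

    // Server side: broadcast a tiny "data changed" datagram; the payload barely matters.
    public static void Broadcast()
    {
        using (var sender = new UdpClient { EnableBroadcast = true })
        {
            byte[] ping = Encoding.ASCII.GetBytes("changed");
            sender.Send(ping, ping.Length, new IPEndPoint(IPAddress.Broadcast, Port));
        }
    }

    // Client side: typically run on a background thread in the WinForms app.
    public static void WaitForChange()
    {
        using (var listener = new UdpClient(Port))
        {
            var anyone = new IPEndPoint(IPAddress.Any, 0);
            listener.Receive(ref anyone);   // blocks until a datagram arrives
            // ...then use the existing remoting call to reload the data...
        }
    }
}
```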
Whatever solution you decide on, I would urge you to avoid having the clients poll. This will create a lot of unnecessary network traffic and still won't perform all that well.
I would usually use a UI timer on the client to periodically hit the server to see if there is new or updated data (assuming you have a mechanism to identify that you have new data, such as timestamps for new rows, file timestamps, or a table with last-calculated dates).
That way the server doesn't have to know about the clients. The clients can check at their leisure, etc.
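A minimal sketch of that approach with a WinForms timer (the server call is a stub here; in the real app it would go over the existing remoting channel):

```csharp
using System;
using System.Windows.Forms;

public class MainForm : Form
{
    private readonly Timer _pollTimer = new Timer();   // WinForms timer: ticks on the UI thread
    private DateTime _lastSeenUtc = DateTime.MinValue;

    public MainForm()
    {
        _pollTimer.Interval = 60000;   // check once a minute; tune to taste
        _pollTimer.Tick += OnPollTick;
        _pollTimer.Start();
    }

    private void OnPollTick(object sender, EventArgs e)
    {
        DateTime latest = GetServerLastCalculatedUtc();   // hypothetical call to the server
        if (latest > _lastSeenUtc)
        {
            _lastSeenUtc = latest;
            // notify the user that new data is available / trigger the reload
        }
    }

    private DateTime GetServerLastCalculatedUtc()
    {
        return DateTime.UtcNow;   // stub; the real call would return the server's last-calculated timestamp
    }
}
```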
