I am creating a message-based architecture that currently uses polling clients to retrieve messages. For obvious reasons, I would like to register my clients with SQL Server 2008 in order to receive an event when a message is inserted into a table.
I have been round-and-round the web researching SQL Server Service Broker, CLR stored procedures, and StreamInsight, but I can't seem to find what I am looking for: a way for SQL Server to alert my services that a message has been received. Basically an event-driven rather than polling model.
Does this exist? Any ideas on where to start? Are there any examples?
Yes, this does exist. I've had success using SQL Service Broker. I'm unfamiliar with the other options you listed.
Setting up SSB is a pain because there are so many moving parts and details, but it works nicely. The key to avoiding polling is a stored procedure that you create and call from C#. That short procedure contains a WAITFOR (RECEIVE ...) statement, which blocks your open, transacted connection until a message is available in your queue or your timeout hits. In C#, whether you get a result or a timeout, immediately run the procedure again to wait for the next item.
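The C# side of that loop might look like the following sketch. The procedure name `dbo.WaitForNextMessage` and the connection string are assumptions; substitute your own.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class BrokerListener
{
    // Runs forever: call the blocking receive procedure, process any
    // message it returns, and immediately wait again.
    public static void Listen(string connectionString)
    {
        while (true)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction())
                using (var cmd = new SqlCommand("dbo.WaitForNextMessage", conn, tx))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    // Must exceed the WAITFOR ... TIMEOUT inside the procedure,
                    // otherwise ADO.NET aborts the call before SQL Server returns.
                    cmd.CommandTimeout = 60;

                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Process the message body here.
                        }
                    }
                    // Committing removes the received message from the queue;
                    // rolling back would put it back for another attempt.
                    tx.Commit();
                }
            }
            // Timeout or message: either way, loop around and wait again.
        }
    }
}
```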
You'll want to limit the number of open connections you have to SQL ... to 1 if possible. If you have multiple interested parties, push all their stuff through that one connection and distribute it with a C# server by some other means.
Related
I am currently using a SqlDependency with a SQL Server 2012 Service Broker, and I want to have two servers configured, both listening to the Service Broker and pulling off the queue, but each message should only be pulled off the queue once in total. Each machine should pull down what it can, and if too many messages are coming in, the machines should share the load. Right now I start two instances of the program and both are listening. Once a new message is added, they both pull the same message off the queue and run the code.
Is SqlDependency not the solution to what I want to do? What is the better solution to something like this?
Once a new message is added they both pull off the same message off the queue and run the code
The behavior you describe is how SqlDependency is designed to work: if there are multiple listeners, all listeners are notified. For example, you can see this described in the SignalR SQL backplane documentation:
Notice how all VMs receive notification from SQL Server, including the VM that initiated the update.
If you want to distribute SQL notifications across a pool of worker VMs, you need a way to share state. Note that the SQL notification is only an indication that something changed; it doesn't indicate what changed. One approach is to add a table to the database to act as a queue of jobs or actions. Subscribers can query this queue on each notification and claim an action by updating or deleting from this table. (Appropriate locks would have to be configured on the table.)
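The claim step in that approach might be sketched like this. The table and column names are made up; the key idea is combining `DELETE ... OUTPUT` with `READPAST` so that concurrent workers never claim the same row:

```sql
-- Hypothetical work-queue table the subscribers share
CREATE TABLE dbo.PendingActions (
    ActionId INT IDENTITY PRIMARY KEY,
    Payload  NVARCHAR(MAX) NOT NULL,
    QueuedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);

-- Each subscriber runs this on notification. READPAST skips rows
-- another worker has already locked, so each action is claimed exactly once.
DELETE TOP (1) FROM dbo.PendingActions WITH (ROWLOCK, READPAST)
OUTPUT deleted.ActionId, deleted.Payload;
```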
Alternatively, you can do this using other tools for shared state, such as a message queue (eg. RabbitMQ), or distributed cache (eg. Redis)
You don't need SQL Notifications or SQLDependency. Each instance can execute:
WAITFOR (
    RECEIVE TOP(1) * FROM {NameOfQueue}
), TIMEOUT @timeoutvalue;
This command will WAIT, holding the connection open, until either a message is available or the timeout occurs. On a timeout you receive no message, so just connect and try again.
Each message can only be RECEIVEd by a single process. Internally, the row in the Service Broker queue is locked, and other readers skip past locked rows (READPAST).
Because the SQL can be a little bit tricky, I've written what I think is a helpful wrapper class that you are free to use.
I have a .NET WinForms application that I want to allow users to connect to via PHP.
I'm using PHP out of personal choice and to help keep costs low.
Quick overview:
People can connect to my .net app and start a new thread that will continue running even after they close the browser. They can then login at any time to see the status of what their thread is doing.
Currently I have come up with two ways to do this:
Idea 1 - Sockets:
When a user connects for the first time and spawns a thread, a GUID is associated with their "web" login details.
The next time PHP connects to the app via a socket, PHP sends a "GET.UPDATE" command with their GUID, which is then added to a MESSAGE IN QUEUE for that GUID.
The thread spawned by the .NET app checks the MESSAGE IN QUEUE, and when it sees the "GET.UPDATE" command it encodes the data into JSON and adds it to the MESSAGE OUT QUEUE.
The next time there is a PHP socket request from that GUID, it sends the data in the MESSAGE OUT QUEUE.
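The per-GUID in/out queues in Idea 1 could be sketched with thread-safe collections. All names here are illustrative, not from the original app:

```csharp
using System;
using System.Collections.Concurrent;

// One pair of queues per logged-in user, keyed by their session GUID.
class SessionQueues
{
    public readonly ConcurrentQueue<string> In  = new ConcurrentQueue<string>();
    public readonly ConcurrentQueue<string> Out = new ConcurrentQueue<string>();
}

class MessageRouter
{
    private readonly ConcurrentDictionary<Guid, SessionQueues> sessions =
        new ConcurrentDictionary<Guid, SessionQueues>();

    // Called when the PHP side sends "GET.UPDATE" over the socket.
    public void EnqueueCommand(Guid sessionId, string command)
    {
        sessions.GetOrAdd(sessionId, _ => new SessionQueues()).In.Enqueue(command);
    }

    // Called by the worker thread after processing a command:
    // publish the JSON-encoded result for the next PHP request.
    public void PublishResult(Guid sessionId, string json)
    {
        sessions.GetOrAdd(sessionId, _ => new SessionQueues()).Out.Enqueue(json);
    }

    // Called on the next PHP socket request for this GUID.
    public bool TryDequeueResult(Guid sessionId, out string json)
    {
        json = null;
        return sessions.TryGetValue(sessionId, out var q) && q.Out.TryDequeue(out json);
    }
}
```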
Idea 2 - Database:
Same idea as above, but commands from PHP get put into a database.
The .NET app thread checks for new IN MESSAGES in the database.
If it gets a GET.UPDATE command, it adds the JSON-encoded data to the database.
The next time PHP connects, it will check the database for new messages and report the data accordingly.
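For Idea 2, the message table might look something like this sketch (table name, columns, and index are assumptions, not a prescribed schema):

```sql
CREATE TABLE Messages (
    MessageId   INT IDENTITY PRIMARY KEY,
    SessionGuid UNIQUEIDENTIFIER NOT NULL,
    Direction   CHAR(3) NOT NULL,        -- 'IN' (PHP -> app) or 'OUT' (app -> PHP)
    Body        NVARCHAR(MAX) NOT NULL,  -- command, or JSON-encoded result
    Processed   BIT NOT NULL DEFAULT 0,
    CreatedAt   DATETIME NOT NULL DEFAULT GETUTCDATE()
);

-- An index like this keeps the "any new messages?" poll cheap even as the
-- table grows; processed rows can be deleted or archived periodically.
CREATE INDEX IX_Messages_Pending
    ON Messages (SessionGuid, Direction, Processed)
    INCLUDE (Body);
```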
I just wondered which of the two ideas above would be best. Messing about with sockets can quickly become a pain. But I'm worried with the database idea that if I have thousands of users, the table could begin to slow down if there are a lot of messages in the queue.
Any advice would be appreciated.
Either solution is acceptable, but if you are looking at a high user load, you may want to reconsider your approach. A WinForms solution is not going to be nearly as robust as a WCF solution if you're looking at thousands of requests. I would not recommend using a database solely for messaging, unless results of your processes are already stored in the database. If they are, I would not recommend directly exposing the database, but rather gating database access through an exposed API. Databases are made to be highly available/scalable, so I wouldn't worry too much on load unless you are looking at a low-end database like SQLite.
If you are looking at publicly exposing the database and using it as a messaging service for whatever reason, might I suggest Postgresql's LISTEN/NOTIFY. Npgsql has good support for this and it's very easy to implement. Postgresql is also freely available with a large community for support.
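A minimal Npgsql listener might look like the sketch below. The channel name `job_updates` is made up; in recent Npgsql versions notifications arrive via the connection's `Notification` event, and `Wait()` blocks until one arrives (older versions expose slightly different event-args property names):

```csharp
using System;
using Npgsql;

class PgListener
{
    public static void Listen(string connectionString)
    {
        using (var conn = new NpgsqlConnection(connectionString))
        {
            conn.Open();

            // Fired whenever any session runs NOTIFY job_updates, 'payload'
            // (e.g. from PHP via pg_notify).
            conn.Notification += (sender, e) =>
                Console.WriteLine($"Notified on '{e.Channel}': {e.Payload}");

            using (var cmd = new NpgsqlCommand("LISTEN job_updates", conn))
                cmd.ExecuteNonQuery();

            while (true)
                conn.Wait();   // blocks until a NOTIFY arrives; no polling
        }
    }
}
```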
We have a number of different old school client-server C# WinForm client-side apps that are essentially front-ends for the database. Then there is a C# server-side windows service that waits on the client apps to submit orders and then it processes them.
The way the server-side service finds out whether there is work to do is that it polls the database. Over the years the logic of polling for waiting orders has gotten a lot more complicated due to the myriad of business rules. So because of this, the polling stored proc itself uses quite a bit of SQL Server resources even if there is nothing to do. Add to this the requirement that the orders be processed the moment they are submitted and you got yourself a performance problem, as the database is being polled constantly.
The setup actually works fine right now, but the load is about to go through the roof, and it is obvious that it won't hold up.
What are some effective ways to communicate between a bunch of different client-side apps and a server-side windows service, that will be more future-proof than the current method?
The database server is SQL Server 2005. I can probably get the powers that be to pony up for latest SQL Server if it really comes to that, but I'd rather not fight that battle.
There are numerous ways you can notify the clients.
You can use a ready-made solution like NServiceBus, to publish information from the server to the clients or other servers. NServiceBus uses MSMQ to publish one message to multiple subscribers in a very easy and durable way.
You can use MSMQ or another queuing product to publish messages from the server that will be delivered to the clients.
You can host a WCF service on the Windows service and connect to it from each client using a Duplex channel. Each time there is a change the service will notify the appropriate clients or even all of them. This is more complex to code but also much more flexible. You could probably send enough information back to the clients that they wouldn't need to poll the database at all.
You can have the service broadcast a UDP packet to all clients to notify them there are changes they need to pull. You can probably add enough information in the packet to allow the clients to decide whether they need to pull data from the server or not. This is very lightweight for the server and the network, but it assumes that all clients are on the same LAN.
Perhaps you can leverage SqlDependency to receive notifications only when the data actually changes.
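A minimal SqlDependency sketch for that last option follows. The table and column names are placeholders, and the query must obey the query-notification rules (two-part table names, an explicit column list, no `SELECT *`):

```csharp
using System;
using System.Data.SqlClient;

class ChangeListener
{
    private readonly string connectionString;
    public ChangeListener(string cs) { connectionString = cs; }

    public void Subscribe()
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT OrderId, Status FROM dbo.Orders", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += OnOrdersChanged;

            conn.Open();
            // Executing the command registers the notification subscription.
            using (var reader = cmd.ExecuteReader()) { /* consume results */ }
        }
    }

    private void OnOrdersChanged(object sender, SqlNotificationEventArgs e)
    {
        // A notification fires only once; re-subscribe, then re-query
        // to find out what actually changed.
        Subscribe();
    }
}

// Once per AppDomain, at startup / shutdown:
// SqlDependency.Start(connectionString);
// SqlDependency.Stop(connectionString);
```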
You can use any messaging middleware like MSMQ, JMS or TIBCO to communicate between your client and the service.
By far the easiest, and most likely the cheapest, answer is to simply buy a bigger server.
Barring that, you are in for a development effort that has a high probability of early failure. By failure I don't mean that you end up scrapping whatever it is you build. Rather, I mean you launch the changes and orders get screwed up while you are debugging your myriad of business rules.
Quite frankly, I wouldn't consider approaching a communications change under pressure, given your statement that load is about to go "through the roof" in the near term.
If your risk exposure is such that it has to be 100% functional on day one with no hiccups (which is normal when you are expecting a large increase in orders), then just upsize the DB server. Heck, I wouldn't even install the latest SQL Server on it. Instead, just buy a larger machine, install the exact same OS and DB server (and patch levels), and move your database.
Then look at your architecture to determine what needs to go away and what can be salvaged.
If everybody connects to SQL Server, then there is also the option of Service Broker. Unlike the other messaging/queueing solutions recommended so far, it is entirely contained in your database (no separate product to deploy, administer, and configure), it offers a single story for your backup/recovery and high-availability needs (no separate backup for the message store, no separate DR/HA; whatever your DB solution is, it is also your messaging solution), and it offers a uniform programming API (SQL).
Even when everything is within one single SQL Server instance (i.e. there is no need to communicate over the network between multiple SQL Server instances), Service Broker still has an ace that no one can match: activation. With activation you eliminate the need to poll entirely, because the system itself will launch ('activate') your processing code when there are events to process. The processing code can be internal (a T-SQL procedure or a SQLCLR .NET procedure) or external (see the external activator).
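Internal activation is a queue-level setting; a sketch with made-up queue and procedure names:

```sql
-- When messages arrive, Service Broker launches the procedure itself
-- (up to 4 concurrent readers here), so nothing ever has to poll.
ALTER QUEUE dbo.OrderQueue
WITH ACTIVATION (
    STATUS = ON,
    PROCEDURE_NAME = dbo.ProcessOrderMessages,
    MAX_QUEUE_READERS = 4,
    EXECUTE AS OWNER
);
```

The activated procedure would typically contain the same WAITFOR (RECEIVE ...) loop discussed above, committing after each message it processes.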
We're migrating our databases to an offsite data center that contains newer, more robust servers. I have a process that imports data from my application to our local SQL Server, and it works great. However, I've moved my database to the new server, and I am periodically receiving RPC timeout errors, or errors stating that an RPC call can't be made.
The old sql server really only contained my database and a couple of other custom application databases. That said, the new server is hosting other databases as well as our Sharepoint database and Team Foundation Server database. While looking at the SQL Profiler, I notice many frequent RPC calls from a TFSService even though no one is using TFS at the time. Similarly, Sharepoint is constantly connecting through RPC as well, but unlike TFS, people are actively using it.
To me, those databases should be either by themselves or together on their own SQL server. Am I wrong? Do you think the RPC calls from TFS and SharePoint could be hogging my connection? If that's the case, and if I'm not permitted to move those databases to another SQL server, is there a way to configure TFS and SharePoint to tone down the amount of "needless" interactions with the database? Any other ideas I should look for?
By the way, I've received this error from my machine as well as a from a virtual machine that exists in the data center so I don't think it's a connection (distance) issue.
Thank You.
Team Foundation Server 2010 has a notifications system built-in (not to be confused with the events/alerts system that sends E-Mail or SOAP events).
Each application tier periodically polls a table in the Tfs_Configuration database asking "have there been any notifications that I'm subscribed to since I last checked?". An example of a notification is when somebody changes a configuration setting: all the application tiers pick up that change almost immediately without having to restart.
In the SQL Profiler, this will look like a lot of activity and load on your server, but it's really not.
We have an internal app(Thick Client) that relies on our central SQL server. The app is a Desktop app that allows the users to work in "Offline" mode (e.g. Outlook). What I need to accomplish is a way to accurately tell if SQL is available or not.
What I have so far:
I currently use the following method -->
internal static void CheckSQLAvailability()
{
using (TcpClient tcpc = new TcpClient())
{
try
{
tcpc.Connect(Settings.Default.LiveSQLServer, Settings.Default.LiveSQLServerPort);
IsSQLAvailable = true;
}
catch
{
IsSQLAvailable = false;
}
}
}
I am not crazy about this approach for the following reasons.
Prone to false negatives
Needs to be "manually" called
Seems "smelly" (the try/catch)
I had thought to use a timer and just call this every X(3??) minutes and also, if a negative result, try a second time to reduce the false negatives.
There is a similar question here --> Detecting if SQL server is running
but it differs from mine in these ways:
I am only checking 1 server
I am looking for a reactive way versus proactive
So in the end, is there a more elegant way to do this? It would all be "in-network" detection.
P.S. To offer some background as requested in an answer below: my app is a basic CRUD app that can connect to our central SQL Server or a local SQL Express server. I have a merge replication module that keeps them in sync, and the DAL is bound to a User.Setting value. I can already manually flip them from central to local and back; I just want to implement a way to have it happen automatically. I have a NetworkChangeDetection class that works quite well but, obviously, does not detect the remote SQL Server's availability.
Consider what the Windows cluster monitor does for a SQL Server cluster resource: it actually connects and runs a dummy query (SELECT @@VERSION). This indicates that SQL Server is running, is actively listening for requests, and is able to run a request and return a result. For the clustering monitor, the response to this query is the 'heartbeat' of the server; if it fails to get a response, for whatever reason, it may initiate a cluster failover.
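That heartbeat translates to only a few lines of C# (a sketch; the timeout value is arbitrary and belongs in your connection string):

```csharp
using System.Data.SqlClient;

static class SqlHealth
{
    // Returns true only if SQL Server accepted a connection AND
    // executed a trivial query, not merely if a socket opened.
    public static bool IsSqlAvailable(string connectionString)
    {
        try
        {
            // Put "Connect Timeout=5" (or similar) in the connection string
            // so a dead server doesn't block the app for the default 15 s.
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT @@VERSION", conn))
            {
                conn.Open();
                cmd.ExecuteScalar();   // proves the server can run a request
                return true;
            }
        }
        catch (SqlException)
        {
            return false;
        }
    }
}
```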
Only checking the TCP connection has several drawbacks in my opinion:
it omits non-TCP protocols like local shared memory (LPC) or remote named pipes (SMB)
it requires a hard-coded TCP port number instead of letting instance port auto-discovery do its work (SQL Browser and friends)
it only establishes that an OS-level socket can be opened; it does not validate that SQL Server itself is in a runnable state (a non-yielding scheduler might block acceptance of network IO requests, scheduler overload and worker starvation may do the same, memory resource exhaustion, etc.)
Unfortunately there is no way to get a notification from SQL Server itself saying 'hey, I'm active, won't you send some requests?'. I don't know all the details of your thick client, but perhaps you should investigate a different metaphor: clients do all work locally, on SQL Express instances, and those instances synchronize the data with the server when it is available. Service Broker was designed specifically with this connect-retry mode in mind, and it would hide server-availability concerns behind its asynchronous, loosely coupled programming API.