I'm currently building an application that receives thousands of small messages through a basic web service; the messages are published to a message queue (RabbitMQ). The application uses dependency injection, with StructureMap as its container.
I have a separate worker application that consumes the message queue and persists the messages to a database (SQL Server).
I have implemented the SQL Server and RabbitMQ connections as thread-local singletons.
In an ideal world this all works fine, but if the SQL Server or RabbitMQ connection is broken I need to reopen it, or potentially dispose of and recreate/reconnect the resources.
I wrote a basic class to act as a factory that, before returning a resource, checks that it is connected/open/working and, if not, disposes of it and recreates it. I'm not sure if this is "best practice" or if I'm trying to solve a problem that has already been solved.
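For illustration, here's a minimal sketch of that kind of self-checking factory, assuming the RabbitMQ .NET client (IConnection exposes an IsOpen flag); the ConnectionGuard name and thread-local strategy are just one way to do it:

```csharp
using System;
using RabbitMQ.Client;

// Hands out a thread-local connection, recreating it if it has
// been closed or has faulted since the last use.
public class ConnectionGuard
{
    [ThreadStatic]
    private static IConnection _connection;

    private readonly ConnectionFactory _factory;

    public ConnectionGuard(ConnectionFactory factory)
    {
        _factory = factory;
    }

    public IConnection GetConnection()
    {
        // IsOpen turns false once the connection is closed or broken.
        if (_connection == null || !_connection.IsOpen)
        {
            if (_connection != null)
                _connection.Dispose();       // discard the broken instance

            _connection = _factory.CreateConnection();
        }
        return _connection;
    }
}
```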
Can anyone offer suggestions on how I could implement long-running tasks that perform lots of small units of work (in my case a single INSERT statement) without instantiating objects for each one, but which can gracefully recover from errors such as dropped connections?
RabbitMQ connections seem to be expensive, and during high workloads I can quickly run out of handles, so I'd like to reuse the same connection (per thread).
The Enterprise Library 5.0 Integration Pack for Windows Azure contains a block for transient fault handling. It allows you to specify retry behavior in case of errors.
It was designed with Windows Azure in mind but I'm sure it would be easy to write a custom policy based on what it offers you.
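For example, a retry around the INSERT might look like this (a sketch; the detection strategy is an assumption you'd tune, and exact namespaces vary between versions of the block):

```csharp
using System;
using System.Data.SqlClient;
using Microsoft.Practices.TransientFaultHandling;

// Decide which exceptions are worth retrying (tune for your environment).
public class InsertTransientStrategy : ITransientErrorDetectionStrategy
{
    public bool IsTransient(Exception ex)
    {
        return ex is SqlException || ex is TimeoutException;
    }
}

public static class ReliableInsert
{
    public static void Execute(Action insert)
    {
        // Retry up to 5 times, 2 seconds apart, on transient failures.
        var policy = new RetryPolicy<InsertTransientStrategy>(
            new FixedInterval(5, TimeSpan.FromSeconds(2)));

        policy.ExecuteAction(insert);
    }
}
```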
You can make a connection factory for RabbitMQ that maintains a connection pool and is responsible for handing out connections to tasks. Check that a connection is OK before handing it out; if it isn't, start a new thread that closes/cleans up the connection and then returns it to the pool. Meanwhile, hand a working connection to the caller.
It sounds complicated, but it's the usual pattern for working with hard-to-initialize resources.
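A rough sketch of that pattern with the RabbitMQ .NET client (pool policy and cleanup details are illustrative):

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;
using RabbitMQ.Client;

public class PooledConnectionFactory
{
    private readonly ConnectionFactory _factory;
    private readonly ConcurrentBag<IConnection> _pool = new ConcurrentBag<IConnection>();

    public PooledConnectionFactory(ConnectionFactory factory)
    {
        _factory = factory;
    }

    public IConnection Take()
    {
        IConnection connection;
        while (_pool.TryTake(out connection))
        {
            if (connection.IsOpen)
                return connection;                   // healthy: hand it out

            // Broken: clean it up off the caller's thread...
            var broken = connection;
            Task.Run(() => { try { broken.Close(); } catch { /* already dead */ } });
        }

        // ...and meanwhile give the caller a working connection.
        return _factory.CreateConnection();
    }

    public void Return(IConnection connection)
    {
        if (connection.IsOpen)
            _pool.Add(connection);
    }
}
```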
We have a ServiceStack service (API) which provides HTTP endpoints hosted using AppSelfHostBase.
Those services then query the database using ServiceStack.OrmLite.MySql. All methods are implemented using the async/await pattern. Database connections are registered manually with Funq, with Request reuse scope, and injected into a property of the base DAL class.
This all works fine when this service is accessed only by HTTP requests.
We have another Windows service which calls this API. Since the two could be hosted on the same server, we've implemented a local IRestClientAsync to wrap the service calls, so the API service methods can be loaded into the Windows service and accessed more efficiently (e.g. 1200 req/sec compared to 400 req/sec). This Windows service has a lot of threads running at the same time.
By doing this we broke the Request lifecycle, and we are getting the following error:
“There is already an open DataReader associated with this Connection which must be closed first.”
We tried handling this manually with custom connection providers that separate connections per thread using ThreadLocal and CallContext. This didn't work reliably.
We tried handling the Request lifecycle by calling OnBeginRequest(null); and OnEndRequest(); manually, but performance was poor (close to that of HTTP calls) and we still got “open DataReader” errors.
We are using the RequestContext.UseThreadStatic option, since the threads are instantiated from a Quartz.NET job.
What could be the best solution for managing database connections? Can we make the current solution work reliably?
The first thing I would do is stop using the async APIs with MySql, since they're not truly asynchronous: they create new threads behind the scenes to fake asynchrony, which makes them even less efficient than the sync APIs. You also can't use multiple readers with the same db connection, which is what ends up throwing this exception.
So I'd first go back to using the sync APIs with MySql. If it's still an issue, use transient scope (i.e. no re-use) instead of Request scope and let database connection pooling do its job; Request scope holds on to the connection longer, taking up more resources than necessary.
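For example, in AppHost.Configure (a sketch; connectionString is a placeholder):

```csharp
// Inside AppHost.Configure(Funq.Container container):
// register the factory once, then resolve a fresh connection per use and
// let ADO.NET connection pooling do its job.
container.Register<IDbConnectionFactory>(
    new OrmLiteConnectionFactory(connectionString, MySqlDialect.Provider));

container.Register<System.Data.IDbConnection>(
        c => c.Resolve<IDbConnectionFactory>().OpenDbConnection())
    .ReusedWithin(Funq.ReuseScope.None);   // transient: no re-use
```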
Background:
I have developed a couple of WCF services for importing data. When receiving data, my services publish the request on an EasyNetQ service bus, hooked up to a RabbitMQ server.
The consumer then takes the request, serializes it to XML, and sends it as a parameter to a stored procedure for handling. The stored procedure in turn performs a table merge to insert or update the data.
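For context, the consumer's database call is essentially this shape (a sketch; the procedure and parameter names, connection, and requestXml are placeholders):

```csharp
using System.Data;
using System.Data.SqlClient;

// Pass the serialized request to the merge procedure.
using (var cmd = new SqlCommand("dbo.ImportRequest", connection))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@payload", SqlDbType.Xml).Value = requestXml;
    cmd.ExecuteNonQuery();
}
```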
The Problem:
My problem is that I can sometimes ack quite a good number of messages per second, but sometimes get very poor throughput, which in turn causes my queues to build up in RabbitMQ.
My application uses the following technologies:
TopShelf for hosting the web services.
Windsor for dependency injection.
Interceptors for logging, exception handling, and performance timing.
EasyNetQ as the message bus.
RabbitMQ as the message broker.
I have tried the following things:
Executed the same message several times; the execution time varies strongly. When executing the stored procedure in SQL Server Management Studio, the execution time is about the same for all repetitions.
Wired up my solution against a local RabbitMQ server and a local database.
Removed interceptors for transaction handling.
Changed my db connection class from creating/opening a new connection for each call to reusing the existing connection (removed the using statement for the SQL connection).
Does anyone have any ideas of what could be causing my problem?
Thanks in advance.
Matias
Assuming the slowness comes from RabbitMQ: check the disk I/O in case you are keeping your messages durable and persistent. If you are not involving the disk, check the memory watermarks; if you are running high on memory, RabbitMQ will flush its messages to disk, and this will lead to significant slowness while that happens.
I am using WCF and I am putting a chatroom facility in my C# program, so I need to be able to send information from the server to the clients for two events:
When a user connects/disconnects I update the list of connected users and send that back to all clients for display in a TextBlock
When a user posts a message, I need the server to send that message out to all clients
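For reference, a minimal duplex contract covering those two events might look like this (all names are illustrative):

```csharp
using System.ServiceModel;

// Operations the SERVER invokes on each connected client.
public interface IChatCallback
{
    [OperationContract(IsOneWay = true)]
    void UserListChanged(string[] connectedUsers);

    [OperationContract(IsOneWay = true)]
    void MessagePosted(string user, string message);
}

[ServiceContract(CallbackContract = typeof(IChatCallback))]
public interface IChatService
{
    [OperationContract]
    void Connect(string user);

    [OperationContract]
    void PostMessage(string message);
}
```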
So I am looking for advice on the best way of implementing this. I was going to use netTcpBinding for duplex callbacks to clients, but then I ran into some issues regarding not being able to call back the client if the connection is closed. I need to use per-call instances for scalability. I was advised in this thread that I shouldn't leave connections open as it would 'significantly limit scalability' - WCF duplex callbacks, how do I send a message to all clients?
However I had a look through the book Programming WCF Services and the author seems to state that this is not an issue because 'In between calls, the client holds a reference on a proxy that doesn’t have an actual object at the end of the wire. This means that you can dispose of the expensive resources the service instance occupies long before the client closes the proxy'
So which is correct, is it fine to keep proxies open on clients?
But even if that is fine, it leads to another issue: if the service instances are destroyed between calls, how can they make duplex callbacks to update the clients? Regarding per-call instances, the author of Programming WCF Services says 'Because the object will be discarded once the method returns, you should not spin off background threads or dispatch asynchronous calls back into the instance'.
Would I be better off having clients poll the service for updates? I would imagine that this is much less efficient than duplex callbacks; clients could end up polling the service 50+ times as often as a duplex callback would fire. But maybe there is no other way? Would this be scalable? I envisage several hundred concurrent users.
Since I am guilty of telling you that server callbacks won't scale, I should probably explain a bit more. Let me start by addressing your questions:
Without owning the book in question, I can only assume that the author is either referring to http-based transports or request-response only, with no callbacks. Callbacks require one of two things- either the server needs to maintain an open TCP connection to the client (meaning that there are resources in use on the server for each client), or the server needs to be able to open a connection to a listening port on the client. Since you are using netTcpBinding, your situation would be the former. wsDualHttpBinding is an example of the latter, but that introduces a lot of routing and firewall issues that make it unworkable over the internet (I am assuming that the public internet is your target environment here- if not, let us know).
You have intuitively figured out why server resources are required for callbacks. Again, wsDualHttpBinding is a bit different, because in that case the server is actually calling back to the client over a new connection in order to send the async reply. This basically requires ports to be opened on the client's side and punched through any firewalls, something that you can't expect of the average internet user. Lots more on that here: WSDualHttpBinding for duplex callbacks
You can architect this a few different ways, but it's understandable if you don't want the overhead (and potential for delay) of the clients constantly hammering the server for updates. Again, at several hundred concurrent users, you are likely still within the range that one good server could handle using callbacks, but I assume you'd like to have a system that can scale beyond that if needed (or at peak times). What I'd do is this:
Use callback proxies (I know, I told you not to)... Clients connecting create new proxies, which are stored in a thread-safe collection and occasionally checked for liveness (and purged if found to be dead); see the sketch after this list.
Instead of having the server post messages directly from one client to another, have the server post the messages to some Message Queue Middleware. There are tons of these out there- MSMQ is popular with Windows, ActiveMQ and RabbitMQ are FOSS (Free Open Source Software), and Tibco EMS is popular in big enterprises (but can be very expensive). What you probably want to use is a topic, not a queue (more on queues vs topics here).
Have a thread (or several threads) on the server dedicated to reading messages off the topic; if a message is addressed to a live session on that server, deliver it to that session's callback proxy.
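A minimal sketch of steps 1 and 3, assuming a duplex callback contract like the IChatCallback sketched in the question (registry and purge details are illustrative):

```csharp
using System.Collections.Concurrent;
using System.ServiceModel;

public static class CallbackRegistry
{
    // Step 1: thread-safe collection of live callback proxies, keyed by session.
    private static readonly ConcurrentDictionary<string, IChatCallback> Clients =
        new ConcurrentDictionary<string, IChatCallback>();

    // Call this from the service while a client request is in progress.
    public static void Register(string sessionId)
    {
        Clients[sessionId] = OperationContext.Current
            .GetCallbackChannel<IChatCallback>();
    }

    // Step 3: called by the topic-reader thread for each message pulled
    // off the messaging middleware.
    public static void Deliver(string user, string message)
    {
        foreach (var entry in Clients)
        {
            var channel = entry.Value as ICommunicationObject;
            if (channel == null || channel.State != CommunicationState.Opened)
            {
                IChatCallback dead;
                Clients.TryRemove(entry.Key, out dead);   // purge dead proxies
                continue;
            }
            entry.Value.MessagePosted(user, message);
        }
    }
}
```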
This architecture should allow you to automatically scale out by simply adding more servers, and load balancing new connections among them. The message queueing infrastructure would be the only limiting factor, and all of the ones I mentioned would scale beyond any likely use case you'd ever see. Because you'd be using topics and not queues, every message would be broadcast to each server- you might need to figure out a better way of distributing the messages, like using hash-based partitioning.
I have an application that sends email and fax notifications when an item is complete. My implementation is working but it takes several seconds to construct (i.e. connecting to our servers). This ends up freezing the UI for several seconds until the notification services have been fully constructed and used. I'm already pushing the problem as far as it will go by injecting factories and creating my services at the last possible minute.
What options do I have for injecting external services that takes several seconds to construct? I'm thinking of instructing my container that these services are singletons, which would only construct the services once per application start-up.
Simple solution
I would give those services a longer lifetime (single instance).
The problem, though, is that TCP connections are usually disconnected if nothing has happened for a while, which means that you need some kind of keep-alive packets to keep them open.
More robust solution
IMHO the connection setup is not the problem; contacting a service should not take long. You haven't specified what the external services are. I'm guessing they are some kind of web services hosted in IIS. If so, make sure that the application pools aren't recycled too often, since starting a new application in IIS can take time.
The other thought is: do you really need to wait for the service call to complete? Why not queue the action (to be handled by the thread pool or in a separate thread) and let the user continue? If required, simply show a message box when the service has been called and something failed.
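For example, with the TPL (a sketch; notificationService and completedItem are your existing objects, and the error reporting depends on your UI framework):

```csharp
using System.Threading.Tasks;
using System.Windows.Forms;

// Fire the notification in the background so the UI stays responsive.
// Must be started from the UI thread for FromCurrentSynchronizationContext to work.
Task.Factory.StartNew(() => notificationService.Send(completedItem))
    .ContinueWith(t =>
    {
        // Only bother the user if the call actually failed.
        if (t.IsFaulted)
            MessageBox.Show("Notification failed: " + t.Exception.InnerException.Message);
    }, TaskScheduler.FromCurrentSynchronizationContext());
```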
These services should be bootstrapped on application startup and then configured via DI as singletons, which are injected into any classes that take them in their constructors.
I can recommend Unity or Spring.Net. I've found Unity very easy to use for simple injection, so give that a look.
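With Unity, for instance, the singleton registration is one line per service (interface and type names here are placeholders for your notification services):

```csharp
using Microsoft.Practices.Unity;

var container = new UnityContainer();

// ContainerControlledLifetimeManager = one instance per container (a singleton),
// so the slow connection setup happens only once per application start-up.
container.RegisterType<IEmailNotifier, EmailNotifier>(
    new ContainerControlledLifetimeManager());
container.RegisterType<IFaxNotifier, FaxNotifier>(
    new ContainerControlledLifetimeManager());
```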
I'm creating an application that I want to put into the cloud. This application has one main function.
It hosts socket CLIENT sessions on behalf of other users (think of Beejive IM for the iPhone, where it hosts IM sessions for clients to maintain state on those IM networks, allowing the client to connect/disconnect at will, without breaking the IM network connection).
Now, the way I've planned it, one 'worker instance' can likely only handle a finite number of client sessions (let's say 50,000 for argument's sake). Those sessions will be very long-lived worker tasks.
The issue I'm trying to get my head around is that I will sometimes need to perform tasks on specific client sessions (e.g. if I need to disconnect a client session). With Azure, would I be able to queue up a smaller task that only the instance hosting that specific client session would be able to dequeue?
Right now I'm contemplating GoGrid as my provider, and I solve this issue by using Apache's ActiveMQ messaging software. My web app enqueues 'disconnect' tasks that are assigned to a specific instance ID. Each client session is therefore assigned to a specific instance ID, and each instance dequeues only the 'disconnect' tasks that are assigned to it.
I'm wondering if it's feasible to do something similar on Azure, and how I would generally do it. I like the idea of not having to set up many different VMs to scale, but instead just deploying a single package. Also, it would be nice to make use of Azure's queues instead of integrating a third-party product such as Apache ActiveMQ, or even MSMQ.
I'd be very concerned about building a production application on Azure until the feature set, pricing, and licensing terms are finalized. For starters, you can't even do a cost comparison between it and e.g. GoGrid or EC2 or Mosso, so I don't see how it could possibly end up a front-runner. Also, we know that all of these systems will have glitches as they mature. Amazon's services are in much wider use than any of the others, and have been publicly available for many years. IMHO choosing Azure is a recipe for pain while they stabilize.
Have you considered Amazon's Simple Queue Service for queueing?
I think you can absolutely use Windows Azure for this. My recommendation would be to create a queue for each session you're tracking. Then enqueue the disconnect message (for example) on the queue for that session. The worker instance that's handling that connection should be the only one polling that queue, so it should handle performing the task on that connection.
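A sketch of that queue-per-session idea using the Azure storage client (API shown is the Microsoft.WindowsAzure.Storage flavour; connectionString and sessionId are placeholders):

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

var account = CloudStorageAccount.Parse(connectionString);
var client = account.CreateCloudQueueClient();

// One control queue per client session (queue names must be lowercase).
CloudQueue queue = client.GetQueueReference("session-" + sessionId);
queue.CreateIfNotExists();

// Web role side: ask whichever instance owns this session to disconnect it.
queue.AddMessage(new CloudQueueMessage("disconnect"));

// Worker side: the instance hosting the session is the only poller.
CloudQueueMessage msg = queue.GetMessage();
if (msg != null && msg.AsString == "disconnect")
{
    // ... tear down the socket session here ...
    queue.DeleteMessage(msg);
}
```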
Regarding the application hosting socket connections for clients to connect to, I'd double-check what's allowed, as I think only HTTP and HTTPS connections can be made with Azure.