I have an application that sends email and fax notifications when an item is complete. My implementation is working but it takes several seconds to construct (i.e. connecting to our servers). This ends up freezing the UI for several seconds until the notification services have been fully constructed and used. I'm already pushing the problem as far as it will go by injecting factories and creating my services at the last possible minute.
What options do I have for injecting external services that take several seconds to construct? I'm thinking of instructing my container that these services are singletons, which would mean the services are only constructed once per application start-up.
Simple solution
I would give those services a longer lifetime (single instance).
The problem, though, is that TCP connections are usually disconnected if nothing has happened for a while, which means you need some kind of keep-alive packets to keep them open.
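If you do give the services a long lifetime, a small periodic no-op call is usually enough to stop the connection from idling out. A minimal sketch, assuming your service exposes some cheap call (Ping() here is a hypothetical name, and the interval should be shorter than whatever idle timeout is dropping you):

```csharp
using System;
using System.Threading;

// Minimal keep-alive sketch: invoke a cheap call on a timer so the connection stays warm.
public sealed class KeepAlive : IDisposable
{
    private readonly Timer _timer;

    public KeepAlive(Action ping, TimeSpan interval)
    {
        // The timer fires on a thread-pool thread, so the UI is never blocked.
        _timer = new Timer(_ => ping(), null, interval, interval);
    }

    public void Dispose()
    {
        _timer.Dispose();
    }
}

// Usage (hypothetical service): new KeepAlive(() => notificationService.Ping(), TimeSpan.FromSeconds(30));
```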
More robust solution
IMHO the connection setup is not the problem; contacting a service should not take long. You haven't specified what the external services are; I'm guessing they are some kind of web services hosted in IIS. If so, make sure that the application pools aren't recycled too often, since starting a new application in IIS can take time.
The other thought is: do you really need to wait for the service to complete? Why not queue the action (to be handled by the thread pool or a separate thread) and let the user continue? If required, simply show a message box when the service has been called and something failed.
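As a rough sketch of that idea (the factory/service interfaces are made-up stand-ins for the ones in the question, and a WinForms MessageBox is assumed for the failure report):

```csharp
using System;
using System.Threading;
using System.Windows.Forms;

// Hypothetical abstractions standing in for the real factory/service in the question.
public interface INotificationService { void Send(string itemId); }
public interface INotificationServiceFactory { INotificationService Create(); }

public class ItemCompletionHandler
{
    private readonly INotificationServiceFactory _factory;
    private readonly SynchronizationContext _uiContext;

    public ItemCompletionHandler(INotificationServiceFactory factory)
    {
        _factory = factory;
        _uiContext = SynchronizationContext.Current; // construct this on the UI thread
    }

    public void OnItemComplete(string itemId)
    {
        // Hand the slow work to the thread pool so the UI stays responsive.
        ThreadPool.QueueUserWorkItem(_ =>
        {
            try
            {
                var service = _factory.Create(); // slow connection work happens off the UI thread
                service.Send(itemId);
            }
            catch (Exception ex)
            {
                // Report failures back on the UI thread, as suggested above.
                _uiContext.Post(s => MessageBox.Show("Notification failed: " + ex.Message), null);
            }
        });
    }
}
```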
These services should be bootstrapped on application startup and registered with the DI container as singletons, which are then injected into any classes that take them in their constructor.
I can recommend Unity or Spring.Net. I've found Unity very easy to use for simple injection, so give that a look.
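In Unity, a singleton lifetime is expressed with ContainerControlledLifetimeManager. A sketch, with the notifier interfaces and implementations standing in for your real email/fax services:

```csharp
using Microsoft.Practices.Unity; // classic Unity namespace; newer packages use Unity / Unity.Lifetime

// Hypothetical stand-ins for the slow-to-construct notification services.
public interface IEmailNotifier { void Notify(string itemId); }
public interface IFaxNotifier { void Notify(string itemId); }
public class SmtpEmailNotifier : IEmailNotifier { public void Notify(string itemId) { /* connect + send */ } }
public class FaxNotifier : IFaxNotifier { public void Notify(string itemId) { /* connect + send */ } }

public static class Bootstrapper
{
    public static IUnityContainer Configure()
    {
        var container = new UnityContainer();

        // ContainerControlledLifetimeManager = one instance per container, i.e. a singleton,
        // so the expensive connection work happens only once per application start-up.
        container.RegisterType<IEmailNotifier, SmtpEmailNotifier>(new ContainerControlledLifetimeManager());
        container.RegisterType<IFaxNotifier, FaxNotifier>(new ContainerControlledLifetimeManager());

        return container;
    }
}
```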
Related
I want to deploy Windows services in parallel for redundancy and load-balancing purposes.
How can I be sure that when the client sends a request to both of these services, only one of them processes the actual call?
Example:
When the client or other services send a message to start a manufacturing process, both of these services will receive that request. I want to make sure that only one of them processes it, so that the manufacturing process does not get started twice!
Do they need to be able to talk to each other?
Is there a possibility to sync those services?
Which is the most elegant/robust way of handling this problem?
Look into using a mutex so that a given message is only picked up by one of the two services.
Mutex Description C#
Although, you'll need to make sure this can work in the way you want: a mutex can coordinate across process boundaries, but if this is deployed to two different machines, or to cloud services, the Mutex isn't going to work.
For that you'll need to figure out another way of communicating across the applications, usually using a database or MSMQ to create a message queue that each service can pop messages off as it needs them.
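For the same-machine case, the mutex approach looks roughly like this (the mutex name and the startManufacturing callback are illustrative):

```csharp
using System;
using System.Threading;

public static class ManufacturingGate
{
    // Both services call TryStart when a request arrives; the named, machine-wide
    // mutex ensures only one of them runs the process at a time.
    public static bool TryStart(Action startManufacturing)
    {
        using (var mutex = new Mutex(false, @"Global\ManufacturingProcessLock"))
        {
            if (!mutex.WaitOne(TimeSpan.Zero))
                return false; // the other service already owns the lock, skip this request

            try
            {
                startManufacturing();
                return true;
            }
            finally
            {
                mutex.ReleaseMutex();
            }
        }
    }
}
```

Note that a mutex alone only prevents concurrent starts: if the second service picks the request up after the first has already finished, you still need to record somewhere (e.g. in the database) that it was handled, which is one reason the MSMQ approach below is the safer bet.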
The safest way, and also the best practice, for your example, would be to retrieve (not to peek) messages from a queue leveraging MSMQ. This gives you a clear explanation of the use case: https://learn.microsoft.com/en-us/previous-versions/windows/desktop/msmq/ms706253(v=vs.85)
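A sketch of the receive side (the queue path and message type are assumptions; the key point is that Receive, unlike Peek, removes the message, so only one consumer ever gets it):

```csharp
using System.Messaging; // add a reference to System.Messaging

public static class ManufacturingConsumer
{
    public static string ReceiveNextRequest()
    {
        // Both services can read from the same queue; Receive removes the message,
        // so a given request is delivered to exactly one of them.
        using (var queue = new MessageQueue(@".\private$\manufacturing-requests"))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

            Message message = queue.Receive(); // blocks until a message is available
            return (string)message.Body;
        }
    }
}
```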
My worker role exposes a WCF service and has multiple instances.
I want my client to call this service and make all instances working concurrently.
I'm trying to figure out what is the best way to do this scatter-gather task.
(And I'm trying to avoid service bus and use WCF only)
I can't think of any good way to do this without something like service bus topics. Or using custom functionality that does nearly the same thing. Why are you trying to avoid Service Bus?
There's really no way to make a client-side call to multiple server instances simultaneously, just using Azure's built-in services. Even using Service Bus topics, there's no way to guarantee that multiple subscribers will consume a message at the same time and execute at the same time (even with a message embargo time, you still cannot absolutely guarantee each subscriber will consume + process a message at an exact time).
This will need to be an application-side action. For example: you can queue up your WCF requests. Your queue-reader can then direct-connect to an internal endpoint on each instance, triggering an action to run in parallel. This won't give you exact parallel operation, but it will be pretty close. As another option, you could have several threads available per instance and run the same request on each thread (again, managed by you).
In essence, this is an architectural facet of your app. Azure won't be able to facilitate a parallel-call across instances; you can take advantage of queues, internal services, etc. to accomplish this.
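A sketch of that queue-reader fan-out, assuming the reader runs inside the role and the role defines an internal endpoint (the endpoint name "InternalWcf" and the per-instance call are placeholders):

```csharp
using System;
using System.Linq;
using System.Net;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class ScatterGather
{
    // Discover every instance of the current role and invoke callInstance against each
    // one's internal endpoint in parallel. callInstance would open a channel to that
    // endpoint and trigger the work (left as a placeholder here).
    public static void RunOnAllInstances(Action<IPEndPoint> callInstance)
    {
        var endpoints = RoleEnvironment.CurrentRoleInstance.Role.Instances
            .Select(i => i.InstanceEndpoints["InternalWcf"].IPEndpoint)
            .ToList();

        // "Pretty close" to simultaneous, as described above, but not guaranteed exact.
        Parallel.ForEach(endpoints, callInstance);
    }
}
```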
I'm currently in the process of building an application that receives thousands of small messages through a basic web service; the messages are published to a message queue (RabbitMQ). The application makes use of Dependency Injection, using StructureMap as its container.
I have a separate worker application that consumes the message queue and persists the messages to a database (SQL Server).
I have implemented the SQL Connection and RabbitMQ connections as singletons (Thread Local).
In an ideal world this all works fine, but if the SQL Server or RabbitMQ connection is broken I need to reopen it, or potentially dispose of and recreate/reconnect the resources.
I wrote a basic class to act as a factory that, before it returns a resource, checks that it is connected/open/working and, if not, disposes and recreates it - I'm not sure if this is "best practice" or if I'm trying to solve a problem that has already been solved.
Can anyone offer suggestions on how I could implement long running tasks that do a lot of small tasks (in my case a single INSERT statement) that don't require object instantiation for each task, but can gracefully recover from errors such as dropped connections?
RabbitMQ connections seem to be expensive, and during high workloads I can quickly run out of handles, so I'd like to reuse the same connection (per thread).
The Enterprise Library 5.0 Integration Pack for Windows Azure contains a block for transient fault handling. It allows you to specify retry behavior in case of errors.
It was designed with Windows Azure in mind but I'm sure it would be easy to write a custom policy based on what it offers you.
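If you'd rather not take the dependency, a hand-rolled policy in the same spirit is small. A sketch (isTransient is a predicate you would tailor to the SQL Server/RabbitMQ exceptions you actually see):

```csharp
using System;
using System.Threading;

public static class Retry
{
    // Run an action, retrying up to maxAttempts times with a fixed delay whenever
    // the thrown exception is classified as transient by the supplied predicate.
    public static void Execute(Action action, int maxAttempts, TimeSpan delay, Func<Exception, bool> isTransient)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                action();
                return;
            }
            catch (Exception ex)
            {
                if (attempt >= maxAttempts || !isTransient(ex))
                    throw;
                Thread.Sleep(delay); // a real policy might back off exponentially instead
            }
        }
    }
}
```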
You can make a connection factory for RabbitMQ that has a connection pool. It would be responsible for handing out connections to tasks. You should check to see that the connections are ok. If not, start a new thread that closes/cleans the connection then returns it to the thread pool. Meanwhile return a functioning connection to the user.
It sounds complicated, but it's the pattern for working with hard-to-initialize resources.
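A simplified sketch of that idea for the RabbitMQ side, keeping one connection per thread and replacing it if the broker has dropped it (the host name is an assumption, and the cleanup here is done inline rather than on a separate thread as the answer above suggests):

```csharp
using RabbitMQ.Client;

public class RabbitConnectionProvider
{
    private static readonly ConnectionFactory Factory =
        new ConnectionFactory { HostName = "localhost" }; // assumed broker address

    [ThreadStatic]
    private static IConnection _connection;

    // Returns a working connection for the calling thread, recreating it if the
    // previous one has been closed or dropped by the broker.
    public static IConnection GetConnection()
    {
        if (_connection == null || !_connection.IsOpen)
        {
            if (_connection != null)
            {
                try { _connection.Close(); } catch { /* already broken, ignore */ }
            }
            _connection = Factory.CreateConnection();
        }
        return _connection;
    }
}
```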
I have created a C# application that I want to split into server and client side. The client side should have only a UI, and the server side should manage logic and database.
But I'm not sure about something: my application will be used by many users at the same time. If I move my application to a server, will WCF create another instance of the application for every user that logs in, or will there be only one instance of the application for all users?
If the second scenario is true, how do I create separate application instances for every user that wants to use my service? I want to keep my application logic on the server and have users share the same database, but also make every instance independent (so several users can use the WCF service with different data). Something like PHP does: same code, a new instance of the code for every user, shared database.
By default, WCF is stateless and instanceless. This basically means that any call may be issued by any client, without any clients or calls knowing about each other.
As long as you have kept that in mind while designing the service, you're good to go. What problems do you expect to occur, given how your service is built at this moment?
The server-side (handled by WCF) will usually not hold any state at all: all method calls would be self-contained and fairly atomic. For example, GetUsers, AddOrder etc. So there's no 'instance' of the app, and in fact the WCF service does not know that it's a particular app using it.
This is on purpose: if you wanted to write a web app, or simple query tool, it could use those same methods on the WCF service without having to be an 'app', or create an instance of anything on the server.
WCF can have objects with a long lifetime, that are stateful, a bit like remoting, but if you're following the pattern of 99.9% of other designs with WCF, you'll be using WCF as a web service.
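In code, that stateless shape looks something like this (the contract and Order type are made-up examples along the lines of the GetUsers/AddOrder methods mentioned above):

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class Order
{
    [DataMember] public string Product { get; set; }
    [DataMember] public int Quantity { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    // Each operation carries everything it needs; no per-user state lives on the server.
    [OperationContract]
    void AddOrder(int userId, Order order);

    [OperationContract]
    Order[] GetOrders(int userId);
}

// PerCall: a fresh service object per request, the usual stateless web-service style.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class OrderService : IOrderService
{
    public void AddOrder(int userId, Order order)
    {
        // write to the shared database
    }

    public Order[] GetOrders(int userId)
    {
        // read this user's data back from the shared database
        return new Order[0];
    }
}
```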
Edit: from the sounds of your comments, you need to do some seriously and potentially in-depth reading about client-server architectures and the use of WCF. Start from scratch with something small, then try to apply it to your current application.
We have a large process in our application that runs once a month. This process typically runs in about 30 minutes and generates 342000 or so log events. Recently we updated our logging to a centralized model using WCF and are now having difficulty with performance. Whereas the previous solution would complete in about 30 minutes, with the new logging it now takes 3 or 4 hours.

The problem, it seems, is that the application is actually waiting for the WCF request to complete before execution continues. The WCF method is already configured as IsOneWay, and I wrapped the client-side call to that WCF method in a different thread to try to prevent this type of problem, but it doesn't seem to have worked. I have thought about using the async WCF calls, but before I tried something else I thought I would ask here to see if there is a better way to handle this.
342000 log events in 30 minutes, if I did my math correctly, comes out to 190 log events per second. I think your problem may have to do with the default throttling settings in WCF.

Even if your method is set to one-way, depending on whether you're creating a new proxy for each logged event, calling the method will still block while the proxy is created and the channel is opened, and if you're using an HTTP-based binding, it will block until the message has been received by the service (an HTTP-based binding sends back a null response for a one-way method call when the message is received).

The default WCF throttling limits concurrent instances to 10 on the service side, which means only 10 requests will be handled at a time and any further requests will get queued. Pair that with an HTTP binding, and anything after the first 10 requests is going to block at the client until it's one of the 10 requests getting handled.

Without knowing how your services are configured (instance mode, etc.) it's hard to say more than that, but if you're using per-call instancing, I'd recommend setting MaxConcurrentCalls and MaxConcurrentInstances on your ServiceBehavior to something much higher (the defaults are 16 and 10, respectively).
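If you self-host the logging service, the throttle can be raised in code (the same settings also live under serviceThrottling in config). A sketch, with LoggingService standing in for your service type and the numbers purely illustrative:

```csharp
using System.ServiceModel;
using System.ServiceModel.Description;

public static class LoggingHostSetup
{
    public static ServiceHost Start()
    {
        var host = new ServiceHost(typeof(LoggingService)); // LoggingService = your service implementation

        var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior();
            host.Description.Behaviors.Add(throttle);
        }

        // Raise the limits well above the defaults so calls aren't queued behind the throttle.
        throttle.MaxConcurrentCalls = 200;
        throttle.MaxConcurrentInstances = 200;
        throttle.MaxConcurrentSessions = 200;

        host.Open();
        return host;
    }
}
```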
Also, to build on what others have mentioned about aggregating multiple events and submitting them all at once, I've found it helpful to setup a static Logger.LogEvent(eventData) method. That way it's simple to use throughout your code, and you can control in your LogEvent method how you want logging to behave throughout your application, such as configuring how many events should get submitted at a time.
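A sketch of such a facade with simple batching built in (ILoggingService and EventData are placeholders for your WCF proxy and event type, and the batch size is something you'd tune to your volume):

```csharp
using System.Collections.Generic;
using System.Threading;

// Placeholder types standing in for the real WCF proxy and log-event class.
public interface ILoggingService { void SubmitEvents(EventData[] events); }
public class EventData { public string Message { get; set; } }

public static class Logger
{
    private const int BatchSize = 100;
    private static readonly object Sync = new object();
    private static readonly List<EventData> Buffer = new List<EventData>();

    public static ILoggingService Service; // assign the WCF client proxy at startup

    public static void LogEvent(EventData eventData)
    {
        EventData[] batch = null;
        lock (Sync)
        {
            Buffer.Add(eventData);
            if (Buffer.Count >= BatchSize)
            {
                batch = Buffer.ToArray();
                Buffer.Clear();
            }
        }

        if (batch != null)
        {
            // Push the whole batch in one WCF call on a worker thread so the caller never waits.
            ThreadPool.QueueUserWorkItem(_ => Service.SubmitEvents(batch));
        }
    }
}
```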
Making a call to another process or remote service (i.e. calling a WCF service) is about the most expensive thing you can do in an application. Doing it 342,000 times is just sheer insanity!
If you must log to a centralized service, you need to accumulate batches of log entries and then, only when you have, say, 1000 or so in memory, send them all to the service in one hit. This will give you a reasonable performance improvement.
log4net has a buffering system that exists outside the context of the calling thread, so it won't hold up your call while it logs. Its usage should be clear from the many appender config examples - search for the term bufferSize. It's used on many of the slower appenders (e.g. remoting, email) to keep the source thread moving without waiting on the slower logging medium, and there is also a generic buffering meta-appender that can be placed "in front of" any other appender.
We use it with an AdoNetAppender in a system of similar volume and it works wonderfully.
There's always traditional syslog; there are plenty of syslog daemons that run on Windows. It's designed to be a more efficient way of doing centralised logging than WCF, which is intended for less intensive operations, especially if you're not using the TCP/IP WCF configuration.
In other words, give it a go - it's the correct tool for the job.