I am working on a project where we have decided to split our background tasks (network-, CPU-, and IO-intensive) into three Windows services.
Now the question is whether we should host all three services in a single process or create three independent services with their own processes.
The Windows Service project template allows multiple services to be created; when installed, they'll create separate entries in the Service Control Manager (SCM) and can be controlled independently. The benefit here is better code management and code reuse.
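For context, the single-process option from the template boils down to something like the following sketch (the three service classes are placeholders for our actual workers):

```csharp
using System.ServiceProcess;

// Placeholder service classes; each would override OnStart/OnStop
// with the real network, CPU, and IO work.
class NetworkService : ServiceBase { }
class CpuService : ServiceBase { }
class IoService : ServiceBase { }

static class Program
{
    static void Main()
    {
        // All three run in one process but get separate SCM entries,
        // so each can still be started and stopped independently.
        ServiceBase.Run(new ServiceBase[]
        {
            new NetworkService(),
            new CpuService(),
            new IoService()
        });
    }
}
```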
However, performance is the primary reason we're splitting into multiple services in the first place, so if single-process hosting like this has any performance drawback, I'd rather let go of this benefit.
Please advise.
My suggestion is to go for separate Windows services created using Topshelf or another such technology, so they are independent of the platform. A minimal Topshelf host is sketched below.
Scalability: easily scalable as needed; if one service is used more heavily than the others, that service can be scaled up by running multiple instances of it.
Parallel processing: because the services are independent, they can work in parallel, which improves performance.
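For illustration, a minimal Topshelf host for one of the three services might look like this; NetworkWorker and the service names are hypothetical:

```csharp
using Topshelf;

// Hypothetical worker; the real one would do the network-intensive tasks.
class NetworkWorker
{
    public void Start() { /* begin processing */ }
    public void Stop()  { /* stop processing and clean up */ }
}

class Program
{
    static void Main()
    {
        HostFactory.Run(cfg =>
        {
            cfg.Service<NetworkWorker>(s =>
            {
                s.ConstructUsing(name => new NetworkWorker());
                s.WhenStarted(w => w.Start());
                s.WhenStopped(w => w.Stop());
            });
            cfg.RunAsLocalSystem();
            cfg.SetServiceName("NetworkWorker");   // one exe per service,
            cfg.SetDisplayName("Network Worker");  // each deployed on its own
        });
    }
}
```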
Related
I've created a BackgroundService in a WebAPI based on the code examples here: https://learn.microsoft.com/en-us/dotnet/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice. The article doesn't give any guidance for implementing this in a multi-server environment. My use-case involves a FileSystemWatcher monitoring a shared network folder for changes. It works great.
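For reference, the hosted service is essentially this shape (the class name, share path, and handler are illustrative, not my exact code):

```csharp
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class WatcherService : BackgroundService
{
    private FileSystemWatcher? _watcher;

    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Watch the shared network folder; path is a placeholder.
        _watcher = new FileSystemWatcher(@"\\server\share")
        {
            EnableRaisingEvents = true
        };
        _watcher.Created += (s, e) => HandleFile(e.FullPath);
        return Task.CompletedTask; // event-driven, nothing to loop on
    }

    private void HandleFile(string path) { /* process the new file */ }

    public override void Dispose()
    {
        _watcher?.Dispose();
        base.Dispose();
    }
}
```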
The issue is there will be multiple instances of this and I don't want all of the instances responding - just one. Is this feasible, and if so, what steps do I need to implement? I've read about using queues, but I can't see how that will help. Also, Hangfire or similar is not an option. Do I need to re-examine my logic?
I can think of multiple ways to achieve this, with pros and cons.
Individual service
If you need only one instance of this, implement it as a standalone service and deploy it on one server only. True, you can't leverage background processes, but do you really need to?
Configuration
Have a config value indicating which server(s) should run the background service; it could even be a comma-separated list of server names. This will require some deployment handling, though, to switch the config on for the server running the background service.
Persist value in db
If there is a single database somewhere, you can have the services communicate through it. Have a table storing which server executes the background service, and once the first one locks it, the others just sleep. Some keep-alive logic needs to be implemented as well.
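A rough sketch of that idea, assuming a hypothetical LeaderLease table (LockName, Owner, ExpiresAt) seeded with one 'file-watcher' row; each instance calls this periodically as its keep-alive, and only the instance holding the lease runs the watcher:

```csharp
using Microsoft.Data.SqlClient;

public static class LeaderLease
{
    // Returns true if this instance owns (or just renewed) the lease.
    public static bool TryAcquire(string connectionString, string owner)
    {
        const string sql = @"
            UPDATE LeaderLease
               SET Owner = @owner,
                   ExpiresAt = DATEADD(SECOND, 30, SYSUTCDATETIME())
             WHERE LockName = 'file-watcher'
               AND (Owner = @owner OR ExpiresAt < SYSUTCDATETIME());";

        using var conn = new SqlConnection(connectionString);
        using var cmd = new SqlCommand(sql, conn);
        cmd.Parameters.AddWithValue("@owner", owner);
        conn.Open();
        // One row updated means we hold the lease; if the current owner
        // stops renewing, the lease expires and another instance takes over.
        return cmd.ExecuteNonQuery() == 1;
    }
}
```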
I would honestly go with solution one: individually scalable, deployable, and no workarounds needed.
If a background service is part of the application, the implication is that it should be running on all instances.
You need to go with a microservice architecture.
One microservice will use the file watcher and prepare a queue.
Then you can have another microservice that works off the queue messages (this one you can scale with multiple instances).
You can also add another service/microservice to keep an eye on the health of the file watcher and handle failover.
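To illustrate the first piece, here is a hedged sketch of a watcher that publishes file events to a queue; I'm using Azure Service Bus as an example broker, and the queue name and connection string are placeholders:

```csharp
using System.IO;
using Azure.Messaging.ServiceBus;

public class WatcherPublisher
{
    private readonly ServiceBusSender _sender;

    public WatcherPublisher(string connectionString)
    {
        var client = new ServiceBusClient(connectionString);
        _sender = client.CreateSender("file-events"); // placeholder queue
    }

    public void Watch(string path)
    {
        var watcher = new FileSystemWatcher(path) { EnableRaisingEvents = true };
        // Publish one message per file event; competing consumers on the
        // queue can then be scaled out with multiple instances.
        watcher.Created += async (s, e) =>
            await _sender.SendMessageAsync(new ServiceBusMessage(e.FullPath));
    }
}
```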
We have created a .NET Core Web API project which uses a SQL Server database. Now we are planning to deploy this project to Microsoft Azure.
While deploying this application, we are also considering enabling the autoscaling option (horizontal scaling).
Before we do, there are some questions we want to clarify.
Do we need to add some additional code to our application for autoscaling to work properly?
Properly in the sense that, because of horizontal scaling, there can be more than one instance of the application running. Since we are using a database, will more than one running instance cause race conditions (i.e., two instances accessing the same data at the same time)? Can we add transactions (or use locking) in our code to avoid these kinds of scenarios?
Are there any best practices to follow while implementing this kind of application?
Thank you and waiting for your answers!
Consider the following points when designing an autoscaling strategy:
The system must be designed to be horizontally scalable. Avoid making assumptions about instance affinity; do not design solutions that require that the code is always running in a specific instance of a process. When scaling a cloud service or web site horizontally, do not assume that a series of requests from the same source will always be routed to the same instance. For the same reason, design services to be stateless to avoid requiring a series of requests from an application to always be routed to the same instance of a service.
When designing a service that reads messages from a queue and processes them, do not make any assumptions about which instance of the service handles a specific message, because autoscaling could start additional instances of a service as the queue length grows. The Competing Consumers pattern describes how to handle this scenario.
If the solution implements a long-running task, design this task to support both scaling out and scaling in. Without due care, such a task could prevent an instance of a process from being shut down cleanly when the system scales in, or it could lose data if the process is forcibly terminated. Ideally, refactor a long-running task and break up the processing that it performs into smaller, discrete chunks. The Pipes and Filters pattern provides an example of how you can achieve this. Alternatively, you can implement a checkpoint mechanism that records state information about the task at regular intervals, and save this state in durable storage that can be accessed by any instance of the process running the task. In this way, if the process is shut down, the work that it was performing can be resumed from the last checkpoint by using another instance.
For more information, see the doc: https://github.com/Huachao/azure-content/blob/master/articles/best-practices-auto-scaling.md
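To make the checkpoint mechanism concrete, here is a hedged sketch; the chunk-reading, processing, and checkpoint helpers are hypothetical stand-ins for app-specific storage:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public class ChunkedTask
{
    public async Task RunAsync(CancellationToken stoppingToken)
    {
        // Resume from the last persisted checkpoint (durable storage,
        // e.g. a table or blob that any instance can read).
        int position = await LoadCheckpointAsync();

        while (!stoppingToken.IsCancellationRequested)
        {
            var chunk = await ReadChunkAsync(position, batchSize: 100);
            if (chunk.Count == 0) break;

            await ProcessAsync(chunk);
            position += chunk.Count;

            // Persist after every chunk; if this instance is shut down
            // during scale-in, another instance resumes from here.
            await SaveCheckpointAsync(position);
        }
    }

    // Hypothetical helpers: storage and processing are app-specific.
    Task<int> LoadCheckpointAsync() => Task.FromResult(0);
    Task<IReadOnlyList<string>> ReadChunkAsync(int pos, int batchSize) =>
        Task.FromResult((IReadOnlyList<string>)new string[0]);
    Task ProcessAsync(IReadOnlyList<string> chunk) => Task.CompletedTask;
    Task SaveCheckpointAsync(int pos) => Task.CompletedTask;
}
```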
Regarding this:
Properly in the sense that, because of horizontal scaling, there can be more than one instance of the application running. Since we are using a database, will more than one running instance cause race conditions (i.e., two instances accessing the same data at the same time)? Can we add transactions (or use locking) in our code to avoid these kinds of scenarios?
Please keep in mind that even if the app is running on a single machine, requests will still be handled concurrently. This means that even on a single machine, two requests can cause the same entry in the database to be updated. So the questions above about race conditions apply to single-instance web apps as well.
Try to avoid locking: the whole point of (horizontal) scaling is to gain performance benefits, and by using locks you effectively remove those benefits, as only one process at a time can use the locked resource.
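As an alternative to locks, optimistic concurrency lets every instance proceed and only handles the rare conflict. A minimal EF Core sketch, where the Account entity and AppDbContext are hypothetical:

```csharp
using System.ComponentModel.DataAnnotations;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Account
{
    public int Id { get; set; }
    public decimal Balance { get; set; }

    [Timestamp] // mapped to SQL Server rowversion, used as a concurrency token
    public byte[] RowVersion { get; set; } = default!;
}

public class AppDbContext : DbContext // hypothetical context
{
    public DbSet<Account> Accounts => Set<Account>();
}

public static class TransferService
{
    public static async Task DebitAsync(AppDbContext db, int id, decimal amount)
    {
        var account = await db.Accounts.FindAsync(id);
        if (account == null) return;

        account.Balance -= amount;
        try
        {
            // EF Core issues UPDATE ... WHERE RowVersion = <original value>;
            // zero rows affected means another instance changed the row first.
            await db.SaveChangesAsync();
        }
        catch (DbUpdateConcurrencyException)
        {
            // Reload and retry, or surface the conflict to the caller.
        }
    }
}
```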
Other points of consideration:
If you are using an in-memory cache, you might want to swap it out for a distributed cache (see the sketch after this list).
The guidance at the MS docs.
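For the cache point, a minimal sketch of swapping the in-memory cache for the Redis distributed cache provider; the endpoint is a placeholder:

```csharp
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Per-instance cache (what horizontal scaling breaks):
// builder.Services.AddMemoryCache();

// Shared cache across all instances instead.
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "my-redis:6379"; // placeholder endpoint
    options.InstanceName = "webapi:";        // key prefix for this app
});

var app = builder.Build();
app.Run();
```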
We're looking to replace a chunk of our business logic with WF4 workflows.
They're all pretty typical workflows: User action creates an instance, database effort, next user confirms, etc.
Our requirements for the workflow host are:
Create workflows from XAML definitions stored in a database (DynamicActivity)
Support workflows on different versions
Support long time-based events (we're currently aware of notifications after 5 days and rolling back a workflow after 30 days)
Support many instances of many workflows (we've identified 10 workflows with about 4000 in-flight, of which only a few are processing at any one time)
Retain all state after a service restart (including the time-based event)
Authenticate the calling user (WindowsAuthentication, if possible)
As part of the migration effort, I built some POCs using "WCF Workflow Service Application" projects, but from what I can see, not all of these requirements are immediately possible.
I gather that #2 is done through WCF Routing, and my understanding is that the WorkflowServiceHost (WSH) will handle #3 for us (is this true, given #5?), but I can't see how #1 would work from the default project structure.
I've solved #1 using WorkflowApplication instances, but this relied on using bookmarks to resume for each input event, and I wasn't convinced that WorkflowApplication would scale to our needs without unloading idle workflows, which breaks the Delay activity.
So, if you've stuck with me this far:
Is there a way to achieve all of this using WSH, either in the default project or by implementing some of it ourselves?
Are we better off writing our own "DurableDelay" activity that records the true wake time and unloads the workflow to be resumed by the host process, given the long durations and potential need to unload and reload workflows?
If WSH isn't going to do it, is there an existing alternative?
I'm not averse to writing our own host service to handle workflow lifecycle, and we've even drawn up the proposed design, but I didn't want to start down that route if it turns out that there is a ready-made solution.
Cheers
You can achieve #1 by using a VirtualPathProvider to load your workflows from a database instead of the file system. See How To Build Workflow Services with a Database Repository for more information about this.
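A rough sketch of what that provider could look like; the DbPathProvider/DbVirtualFile names and the database lookup are hypothetical:

```csharp
using System;
using System.IO;
using System.Text;
using System.Web.Hosting;

// Serves .xamlx workflow definitions from a database instead of the
// file system. LoadXamlFromDb is a hypothetical data-access call.
public class DbPathProvider : VirtualPathProvider
{
    public override bool FileExists(string virtualPath)
    {
        return virtualPath.EndsWith(".xamlx", StringComparison.OrdinalIgnoreCase)
            || Previous.FileExists(virtualPath);
    }

    public override VirtualFile GetFile(string virtualPath)
    {
        if (virtualPath.EndsWith(".xamlx", StringComparison.OrdinalIgnoreCase))
        {
            return new DbVirtualFile(virtualPath, LoadXamlFromDb(virtualPath));
        }
        return Previous.GetFile(virtualPath);
    }

    private static string LoadXamlFromDb(string virtualPath)
    {
        throw new NotImplementedException(); // hypothetical DB lookup
    }
}

public class DbVirtualFile : VirtualFile
{
    private readonly string _xaml;

    public DbVirtualFile(string virtualPath, string xaml) : base(virtualPath)
    {
        _xaml = xaml;
    }

    public override Stream Open()
    {
        return new MemoryStream(Encoding.UTF8.GetBytes(_xaml));
    }
}

// Registered at application start, e.g. in Global.asax:
// HostingEnvironment.RegisterVirtualPathProvider(new DbPathProvider());
```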
Workflow versioning (#2) is something that is not supported in .NET 4.0, but in .NET 4.5 you have better support for real versioning; see What's New in Windows Workflow Foundation 4.5. However, if you don't need to change a workflow after it starts, and just need new instances to start with the new version while already-executing instances finish using their previous workflow definition, then you can implement versioning at the database level and just treat each version of a workflow definition as a different workflow service.
You can then use Workflow Services hosted in IIS (AppFabric) with a SQL Server instance store to get #3, #4 and #5 almost for free.
Finally for #6 and assuming you stick with .NET 4.0 you can take a look at WF Security Pack CTP 1.
I'm developing the same kind of workflows.
I also first gave a look to workflow services, but as our workflows were completely integrated into a business layer, I didn't want to use WCF to access them.
So I'm now using WorkflowApplication as the host, so I can instantiate and manipulate the host myself.
The biggest problem was resuming workflows that use a Delay activity (you need to check the database yourself for expired timers); a rough sketch of that resume loop is below.
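A sketch of that loop, assuming a hypothetical query for due instance IDs and the standard SqlWorkflowInstanceStore; loading a persisted instance lets its expired timer fire:

```csharp
using System;
using System.Activities;
using System.Activities.DurableInstancing;

public class DelayResumer
{
    private readonly SqlWorkflowInstanceStore _store =
        new SqlWorkflowInstanceStore("connection string here"); // placeholder

    public void ResumeDue(Activity definition)
    {
        // GetDueInstanceIds is a hypothetical query against your own
        // table of instance IDs and wake times.
        foreach (Guid id in GetDueInstanceIds())
        {
            var app = new WorkflowApplication(definition);
            app.InstanceStore = _store;
            app.Load(id); // re-registers the timer; expired Delays resume
            app.Run();
        }
    }

    private Guid[] GetDueInstanceIds() { return new Guid[0]; } // stub
}
```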
I'm designing an application back-end. For now, it is a .NET process (a console application) which hosts various communication frameworks such as Agatha and NServiceBus.
I need to periodically update my datastore with values (coming from the application while it's running).
I found three possible ways:
Accept command line arguments, so I can call my console app with -update.
On start-up, a background thread will periodically invoke the update method.
Create an updater.exe app which will do the updates, but this means code duplication, since it will in some way need to query the data from the source in order to save it to the datastore.
Which one is better?
Use the simplest thing that will work. It sounds like option 1 is the way to go based on the info you have given (a minimal sketch follows at the end).
Option 2 has threads, and threads always complicate programs: they are more difficult to debug and write, with a greater chance of bugs.
Option 3 would mean that you have two apps; if you make a change, you will have to deploy new versions of both, increasing maintenance costs.
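For completeness, a minimal sketch of option 1; RunUpdate and StartHost are hypothetical placeholders for the real update and hosting code:

```csharp
using System;
using System.Linq;

static class Program
{
    static void Main(string[] args)
    {
        if (args.Contains("-update", StringComparer.OrdinalIgnoreCase))
        {
            RunUpdate(); // hypothetical: push current values to the datastore
            return;      // exit without starting the full host
        }

        StartHost(); // hypothetical: Agatha / NServiceBus hosting as usual
    }

    static void RunUpdate() { }
    static void StartHost() { }
}
```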
I have nearly completed a Quartz.NET-based Windows service (using ADO.NET, not RAM jobs). The service copies/moves files to various paths depending on a schedule. I have some concerns, however. It is very important that this service has some sort of detection method/system that will detect when the program has failed for whatever reason, whether it's files failing to be copied or the whole scheduler crashing. Just wondering what you guys think is the best way to do this? I have a couple of vague ideas, but I'm looking to hear some more input.
Here are the methods that we use:
We monitor the Windows service itself using the IT monitoring system. We use one of those commercial products that monitor servers, services, databases, etc., but there are open-source projects that can do this for you if you don't already have one in place.
We log fatal exceptions to a database table and have a separate service monitoring that table for exceptions (a listener sketch is at the end of this answer).
We also use an ADO.NET store, so we monitor the Quartz.NET tables for things like stuck triggers.
With things like this you can definitely go down the over-engineering path. Just keep in mind the cost-benefit of adding each of these options, and then decide how much work you want to put into monitoring vs. the cost of an outage.
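As a sketch of the exception-logging idea, a Quartz.NET (3.x-style) job listener that writes failures to the monitored table; LogToDatabase is a hypothetical call:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Quartz;

public class FailureListener : IJobListener
{
    public string Name => "failure-listener";

    public Task JobToBeExecuted(IJobExecutionContext context,
        CancellationToken ct = default) => Task.CompletedTask;

    public Task JobExecutionVetoed(IJobExecutionContext context,
        CancellationToken ct = default) => Task.CompletedTask;

    public async Task JobWasExecuted(IJobExecutionContext context,
        JobExecutionException? jobException, CancellationToken ct = default)
    {
        if (jobException != null)
        {
            // Write to the table the monitoring service watches.
            await LogToDatabase(context.JobDetail.Key, jobException);
        }
    }

    private Task LogToDatabase(JobKey key, Exception ex) =>
        Task.CompletedTask; // hypothetical: INSERT into the exceptions table
}

// Registration on the scheduler:
// scheduler.ListenerManager.AddJobListener(new FailureListener());
```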