I've been looking at stateful services within Service Fabric and digging through the examples, specifically the WordCount sample. Its WordCountService has a RunAsync method that looks like this:
protected override async Task RunAsync(CancellationToken cancellationToken)
{
IReliableQueue<string> inputQueue = await this.StateManager.GetOrAddAsync<IReliableQueue<string>>("inputQueue");
while (true)
{
cancellationToken.ThrowIfCancellationRequested();
try
{
using (ITransaction tx = this.StateManager.CreateTransaction())
{
ConditionalValue<string> dequeuReply = await inputQueue.TryDequeueAsync(tx);
if (dequeuReply.HasValue)
{
//... {more example code here }
}
await Task.Delay(TimeSpan.FromMilliseconds(100), cancellationToken);
}
catch (TimeoutException)
{
//Service Fabric uses timeouts on collection operations to prevent deadlocks.
//If this exception is thrown, it means that this transaction was waiting the default
//amount of time (4 seconds) but was unable to acquire the lock. In this case we simply
//retry after a random backoff interval. You can also control the timeout via a parameter
//on the collection operation.
Thread.Sleep(TimeSpan.FromSeconds(new Random().Next(100, 300)));
continue;
}
catch (Exception exception)
{
//For sample code only: simply trace the exception.
ServiceEventSource.Current.MessageEvent(exception.ToString());
}
}
}
Essentially, in this example, the service is polling the ReliableQueue every 100ms for messages. Is there a way to do this without the poll? Can we subscribe to an event or something that gets triggered when a message is successfully added to the ReliableQueue?
I'd recommend using a ReliableDispatcher in your service, or just use a Dispatcher Service.
Using the Dispatcher Service allows you to write a method that is invoked whenever an item is enqueued on the underlying reliable queue.
For example:
public override async Task OnItemDispatchedAsync(
ITransaction transaction,
int value,
CancellationToken cancellationToken)
{
// Do something with the value that has been dequeued
}
Both the Reliable Dispatcher and Dispatcher Service can be used via a NuGet package and there's full documentation and samples on GitHub to get you started:
Example of using Dispatcher Service
Example of using Reliable Dispatcher
No, currently there are no events you can use for ReliableQueue. You have to poll for new items.
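If the constant 100 ms poll bothers you, a common mitigation is to stretch the delay while the queue stays empty and reset it once an item arrives. A rough sketch of that idea, reusing the RunAsync context from the question (the delay values are arbitrary and the processing is elided):
IReliableQueue<string> inputQueue =
    await this.StateManager.GetOrAddAsync<IReliableQueue<string>>("inputQueue");

TimeSpan delay = TimeSpan.FromMilliseconds(100);  // poll quickly while busy
TimeSpan maxDelay = TimeSpan.FromSeconds(5);      // cap the backoff when idle

while (true)
{
    cancellationToken.ThrowIfCancellationRequested();

    using (ITransaction tx = this.StateManager.CreateTransaction())
    {
        ConditionalValue<string> reply = await inputQueue.TryDequeueAsync(tx);
        if (reply.HasValue)
        {
            // ... process reply.Value ...
            await tx.CommitAsync();
            delay = TimeSpan.FromMilliseconds(100);  // reset on activity
            continue;
        }
    }

    // Queue was empty: wait a little longer each time, up to the cap.
    await Task.Delay(delay, cancellationToken);
    delay = TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, maxDelay.Ticks));
}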
I'm currently reading in data via a SerialPort connection in an asynchronous Task in a console application that will theoretically run forever (always picking up new serial data as it comes in).
I have a separate Task that is responsible for pulling that serial data out of a HashSet type that gets populated from my "producer" task above and then it makes an API request with it. Since the "producer" will run forever, I need the "consumer" task to run forever as well to process it.
Here's a contrived example:
TagItems = new HashSet<Tag>();
Sem = new SemaphoreSlim(1, 1);
SerialPort = new SerialPort("COM3", 115200, Parity.None, 8, StopBits.One);
// serialport settings...
try
{
var producer = StartProducerAsync(cancellationToken);
var consumer = StartConsumerAsync(cancellationToken);
await producer; // this feels weird
await consumer; // this feels weird
}
catch (Exception e)
{
Console.WriteLine(e); // when I manually throw an error in the consumer, this never triggers for some reason
}
Here's the producer / consumer methods:
private async Task StartProducerAsync(CancellationToken cancellationToken)
{
using var reader = new StreamReader(SerialPort.BaseStream);
while (SerialPort.IsOpen)
{
var readData = await reader.ReadLineAsync()
.WaitAsync(cancellationToken)
.ConfigureAwait(false);
var tag = new Tag {Data = readData};
await Sem.WaitAsync(cancellationToken);
TagItems.Add(tag);
Sem.Release();
await Task.Delay(100, cancellationToken);
}
reader.Close();
}
private async Task StartConsumerAsync(CancellationToken cancellationToken)
{
while (!cancellationToken.IsCancellationRequested)
{
await Sem.WaitAsync(cancellationToken);
if (TagItems.Any())
{
foreach (var item in TagItems)
{
await SendTagAsync(item, cancellationToken);
}
}
Sem.Release();
await Task.Delay(1000, cancellationToken);
}
}
I think there are multiple problems with my solution but I'm not quite sure how to make it better. For instance, I want my "data" to be unique, so I'm using a HashSet, but that data type isn't concurrent-friendly, so I'm having to lock with a SemaphoreSlim, which I'm guessing could present performance issues with large amounts of data flowing through.
I'm also not sure why my catch block never triggers when an exception is thrown in my StartConsumerAsync method.
Finally, are there better / more modern patterns I can be using to solve this same problem in a better way? I noticed that Channels might be an option but a lot of producer/consumer examples I've seen start with a producer having a fixed number of items that it has to "produce", whereas in my example the producer needs to stay alive forever and potentially produces infinitely.
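For reference, this is roughly the System.Threading.Channels shape I had in mind (Tag and SendTagAsync are the same types/methods as above, and I'd still need to handle de-duplication somewhere):
var channel = Channel.CreateUnbounded<Tag>();

// Producer: push each tag onto the channel as it arrives from the serial port.
async Task ProduceAsync(CancellationToken ct)
{
    using var reader = new StreamReader(SerialPort.BaseStream);
    while (SerialPort.IsOpen)
    {
        var line = await reader.ReadLineAsync().WaitAsync(ct);
        await channel.Writer.WriteAsync(new Tag { Data = line }, ct);
    }
    channel.Writer.Complete();
}

// Consumer: ReadAllAsync keeps waiting for items until the writer completes.
async Task ConsumeAsync(CancellationToken ct)
{
    await foreach (var tag in channel.Reader.ReadAllAsync(ct))
    {
        await SendTagAsync(tag, ct);
    }
}

await Task.WhenAll(ProduceAsync(cancellationToken), ConsumeAsync(cancellationToken));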
First things first, starting multiple asynchronous operations and awaiting them one by one is wrong:
// Wrong
await producer;
await consumer;
The reason is that if the first operation fails, the second operation becomes fire-and-forget. Allowing tasks to escape your supervision and continue running unattended can only contribute to your program's instability. Nothing good can come of that.
// Correct
await Task.WhenAll(producer, consumer);
Now regarding your main issue, which is how to make sure that a failure in one task will cause the timely completion of the other task. My suggestion is to hook the failure of each task with the cancellation of a CancellationTokenSource. In addition, both tasks should watch the associated CancellationToken, and complete cooperatively as soon as possible after they receive a cancellation signal.
var cts = new CancellationTokenSource();
Task producer = StartProducerAsync(cts.Token).OnErrorCancel(cts);
Task consumer = StartConsumerAsync(cts.Token).OnErrorCancel(cts);
await Task.WhenAll(producer, consumer);
Here is the OnErrorCancel extension method:
public static Task OnErrorCancel(this Task task, CancellationTokenSource cts)
{
return task.ContinueWith(t =>
{
if (t.IsFaulted) cts.Cancel();
return t;
}, default, TaskContinuationOptions.DenyChildAttach, TaskScheduler.Default).Unwrap();
}
Instead of doing this, you can also just add an all-enclosing try/catch block inside each task, and call cts.Cancel() in the catch.
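In other words, something along these lines, sketched against the producer from the question (the signature now takes the CancellationTokenSource so the failing side can signal the other):
private async Task StartProducerAsync(CancellationTokenSource cts)
{
    try
    {
        using var reader = new StreamReader(SerialPort.BaseStream);
        while (SerialPort.IsOpen)
        {
            cts.Token.ThrowIfCancellationRequested();
            // ... read, add to the collection, delay, as before ...
        }
    }
    catch
    {
        cts.Cancel();  // tell the consumer to wind down
        throw;         // still surface the failure to the awaiter
    }
}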
My actor interacts with a non-Akka thing that has an async disposal. This disposal can take 5-10 seconds. I do this in PostStop() like so:
protected override void PostStop()
{
async Task DisposeThing()
{
Debug.WriteLine("Before Delay");
await Task.Delay(10000); // This would be the actual call to dispose the thing
Debug.WriteLine("After Delay");
};
ActorTaskScheduler.RunTask(async () =>
{
try
{
Debug.WriteLine("Before DisposeThing");
await DisposeThing();
Debug.WriteLine("After DisposeThing");
}
catch (Exception ex)
{
Debug.WriteLine($"An exception occured: {ex}");
}
finally
{
Debug.WriteLine("actor done disposing.");
}
});
base.PostStop();
}
Full gist here.
The parent does _childActor.Tell(PoisonPill.Instance). I also tried _childActor.GracefulStop with a large enough timeout.
In both cases, this prints:
Before DisposeThing
Before Delay
And that's it, the rest is never executed. Not even the finally executes (which I guess breaks C#? using doesn't work anymore, for instance).
Silently dropping await continuations (including finallys) could lead to some really tricky-to-understand bugs, so this leaves me with two questions:
when does Akka decide to simply drop an ongoing async function, is there a consistent model to be understood?
how should I write this in a way that is guaranteed to execute and not terminate the actor before disposal is done?
Update:
After sleeping on this I think I understand what's going on. Keep in mind this is mostly conjecture from someone who has been looking at Akka.Net for the past 2 days (e.g. this thread), and I'm posting it because no one has answered yet.
The way Akka.Net implements async continuations is by having the actor executing the async function send ActorTaskSchedulerMessages to itself. This message points to the remaining work to be done after an await returns, and when the actor gets to process that message, it executes the continuation, up until the next await (or the end of the function if there is no other await).
When you tell an actor to stop with a PoisonPill for instance, once that message is processed, no further messages are processed for that actor. This is fine when those messages are user-defined. However, this ends up also silently dropping any async continuations since they're also implemented as actor messages.
Indeed when running a program using the above actor, we can see this in the console:
[INFO][2022-01-11 2:59:43 PM][Thread 0004][akka://ActorSystem/user/$a/$a] Message [ActorTaskSchedulerMessage] from [akka://ActorSystem/user/$a/$a#132636847] to [akka://ActorSystem/user/$a/$a#132636847] was not delivered. 2 dead letters encountered. If this is not an expected behavior then [akka://ActorSystem/user/$a/$a#132636847] may have terminated unexpectedly. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
If this understanding is correct, this makes async extremely unreliable inside functions passed to ReceiveAsync, ActorTaskScheduler.RunTask etc., as you cannot ever assume anything after an await will get to execute, including exception handlers, cleanup code inside finallys, using statement disposal, etc. Actors can be stopped at any time.
I suppose then that since language primitives lose their meaning, what you need to do is wrap your Task-returning functions inside their own little actors and rely on Akka semantics rather than language semantics.
You captured what the issue was.
This is one solution I came up with:
using Akka.Actor;
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using static Akka.NET_StackOverflow_Questions_tryout.Questions._70655287.ChildActor;
namespace Akka.NET_StackOverflow_Questions_tryout.Questions._70655287
{
public class ParentActor:ReceiveActor
{
private readonly IActorRef _child;
public ParentActor()
{
_child = Context.ActorOf(ChildActor.Prop());
Context.Watch(_child);
Receive<ShutDown>(s =>
{
_child.Forward(s);
});
Receive<Terminated>(t => {
var tt = t;
});
}
public static Props Prop()
{
return Props.Create(() => new ParentActor());
}
}
public class ChildActor : ReceiveActor
{
public ChildActor()
{
ReceiveAsync<ShutDown>(async _ =>
{
async Task DisposeThing()
{
Debug.WriteLine("Before Delay");
await Task.Delay(10000); // This would be the actual call to dispose the thing
Debug.WriteLine("After Delay");
};
await DisposeThing()
.ContinueWith(async task =>
{
if (task.IsFaulted || task.IsCanceled)
return; //you could notify the parent of this issue
await Self.GracefulStop(TimeSpan.FromSeconds(10));
});
});
}
protected override void PostStop()
{
base.PostStop();
}
public static Props Prop()
{
return Props.Create(()=> new ChildActor());
}
public sealed class ShutDown
{
public static ShutDown Instance => new ShutDown();
}
}
}
So instead of stopping the _childActor from the parent's side, you send a shutdown message to the child so it shuts down in a defined order: first it disposes the non-Akka thing (to ensure it is truly no longer alive in memory), and only then does it stop itself, which notifies the parent!
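For completeness, wiring this up from the outside might look roughly like this (the system and actor names are illustrative):
var system = ActorSystem.Create("ActorSystem");
IActorRef parent = system.ActorOf(ParentActor.Prop(), "parent");

// Ask the child (via the parent) to dispose its resource and then stop itself;
// the parent's Context.Watch will receive the Terminated message afterwards.
parent.Tell(ChildActor.ShutDown.Instance);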
I've been working on a project and saw the code below. I am new to the async/await world. As far as I know, only a single task is performed in the method, so why is it decorated with async/await? What benefits do I get by using async/await, and what is the drawback if I remove async/await, i.e. make it synchronous? I am a little bit confused, so any help will be appreciated.
[Route("UpdatePersonalInformation")]
public async Task<DataTransferObject<bool>> UpdatePersonalInformation([FromBody] UserPersonalInformationRequestModel model)
{
DataTransferObject<bool> transfer = new DataTransferObject<bool>();
try
{
model.UserId = UserIdentity;
transfer = await _userService.UpdateUserPersonalInformation(model);
}
catch (Exception ex)
{
transfer.TransactionStatusCode = 500;
transfer.ErrorMessage = ex.Message;
}
return transfer;
}
Service code
public async Task<DataTransferObject<bool>> UpdateUserPersonalInformation(UserPersonalInformationRequestModel model)
{
DataTransferObject<bool> transfer = new DataTransferObject<bool>();
await Task.Run(() =>
{
try
{
var data = _userProfileRepository.FindBy(x => x.AspNetUserId == model.UserId)?.FirstOrDefault();
if (data != null)
{
var userProfile = mapper.Map<UserProfile>(model);
userProfile.UpdatedBy = model.UserId;
userProfile.UpdateOn = DateTime.UtcNow;
userProfile.CreatedBy = data.CreatedBy;
userProfile.CreatedOn = data.CreatedOn;
userProfile.Id = data.Id;
userProfile.TypeId = data.TypeId;
userProfile.AspNetUserId = data.AspNetUserId;
userProfile.ProfileStatus = data.ProfileStatus;
userProfile.MemberSince = DateTime.UtcNow;
if(userProfile.DOB==DateTime.MinValue)
{
userProfile.DOB = null;
}
_userProfileRepository.Update(userProfile);
transfer.Value = true;
}
else
{
transfer.Value = false;
transfer.Message = "Invalid User";
}
}
catch (Exception ex)
{
transfer.ErrorMessage = ex.Message;
}
});
return transfer;
}
What benefits I am getting by using async/await
Normally, on ASP.NET, the benefit of async is that your server is more scalable - i.e., can handle more requests than it otherwise could. The "Synchronous vs. Asynchronous Request Handling" section of this article goes into more detail, but the short explanation is that async/await frees up a thread so that it can handle other requests while the asynchronous work is being done.
However, in this specific case, that's not actually what's going on. Using async/await in ASP.NET is good and proper, but using Task.Run on ASP.NET is not. Because what happens with Task.Run is that another thread is used to run the delegate within UpdateUserPersonalInformation. So this isn't asynchronous; it's just synchronous code running on a background thread. UpdateUserPersonalInformation will take another thread pool thread to run its synchronous repository call and then yield the request thread by using await. So it's just doing a thread switch for no benefit at all.
A proper implementation would make the repository asynchronous first, and then UpdateUserPersonalInformation can be implemented without Task.Run at all:
public async Task<DataTransferObject<bool>> UpdateUserPersonalInformation(UserPersonalInformationRequestModel model)
{
DataTransferObject<bool> transfer = new DataTransferObject<bool>();
try
{
var data = _userProfileRepository.FindBy(x => x.AspNetUserId == model.UserId)?.FirstOrDefault();
if (data != null)
{
...
await _userProfileRepository.UpdateAsync(userProfile);
transfer.Value = true;
}
else
{
transfer.Value = false;
transfer.Message = "Invalid User";
}
}
catch (Exception ex)
{
transfer.ErrorMessage = ex.Message;
}
return transfer;
}
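The repository itself isn't shown in the question, but assuming it wraps an EF Core DbContext, the async update could be a thin sketch like this (AppDbContext and UserProfiles are illustrative names):
public class UserProfileRepository
{
    private readonly AppDbContext _context;  // assumed EF Core DbContext

    public UserProfileRepository(AppDbContext context) => _context = context;

    public async Task UpdateAsync(UserProfile profile)
    {
        _context.UserProfiles.Update(profile);
        await _context.SaveChangesAsync();   // the only genuinely asynchronous step
    }
}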
The await keyword only indicates that the execution of the current function is halted until the Task which is being awaited is completed. This means if you remove the async, the method will continue execution and therefore immediately return the transfer object, even if the UpdateUserPersonalInformation Task is not finished.
Take a look at this example:
private void showInfo()
{
Task.Delay(1000);
MessageBox.Show("Info");
}
private async void showInfoAsync()
{
await Task.Delay(1000);
MessageBox.Show("Info");
}
In the first method, the MessageBox is immediately displayed, since the newly created Task (which only waits a specified amount of time) is not awaited. However, the second method specifies the await keyword, therefore the MessageBox is displayed only after the Task is finished (in the example, after 1000ms elapsed).
But in both cases the delay Task is run asynchronously in the background, so the main thread (for example the UI) will not freeze.
The async-await mechanism is mainly used:
when you have some long calculation that takes time and you want it to run in the background
in UI code, when you don't want the main thread to get stuck, which would be reflected in UI performance (see the sketch after the link below)
you can read more here:
https://learn.microsoft.com/en-us/dotnet/csharp/async
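As a small illustration of the UI case, a WinForms/WPF-style handler might look like this (HeavyCalculation and the control names are stand-ins for your own code):
private async void calculateButton_Click(object sender, EventArgs e)
{
    // Offload the CPU-bound work so the UI thread stays responsive.
    int result = await Task.Run(() => HeavyCalculation());

    // Execution resumes on the UI thread here, so touching controls is safe.
    resultLabel.Text = result.ToString();
}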
Time Outs
The main use of async and await is preventing timeouts by waiting for long operations to complete. However, there is another less-known but very powerful use.
If you don't await the long operation, you will get a result back, such as a null, even though the actual request has not completed yet.
Cancellation Tokens
Async requests have a default parameter you can add:
public async Task<DataTransferObject<bool>> UpdatePersonalInformation(
[FromBody] UserPersonalInformationRequestModel model,
CancellationToken cancellationToken){..}
A CancellationToken allows the request to stop when the user changes pages or interrupts the connection. A good example is a user with a search box: every time a letter is typed you filter and search results from your API. Now imagine the user types a fairly long string of, say, 15 characters. That means 15 requests are sent and 15 requests need to be completed. Even if the front end is no longer awaiting the first 14 results, the API is still doing all 15 requests.
A cancellation token simply tells the API to stop working on requests that are no longer needed.
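In practice you just keep passing the token down to whatever supports it. A minimal sketch for the search-box scenario (_searchService and SearchAsync are illustrative):
[HttpGet("search")]
public async Task<IActionResult> Search(string term, CancellationToken cancellationToken)
{
    // ASP.NET Core signals this token when the client disconnects or cancels.
    var results = await _searchService.SearchAsync(term, cancellationToken);
    return Ok(results);
}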
I would like to chime in on this because most answers, although good, do not point to a definite time when to use async/await and when not.
From my experience, if you are developing anything with a front end, add async/await to your methods when you expect output from other threads to feed into your UI. This is the best strategy for handling multithreaded output, and Microsoft should be commended for coming out with this when they did. Without async/await you would have to add more code to handle thread output to the UI (e.g. Event, Event Handler, Delegate, Event Subscription, Marshaller).
You don't need it anywhere else, except when used strategically for slow peripherals.
I have the following two applications
Angular 6/7 App
.Net Core Web API
I am making GET request to API using Angular's HttpClient as shown below
this.subscription = this.httpClient.get('api/Controller/LongRunningProcess')
.subscribe((response) =>
{
// Handling response
});
API controller's LongRunningProcess method has the following code
[HttpGet]
[Route("LongRunningProcess")]
public async Task<IActionResult> LongRunningProcess(CancellationToken cancellationToken)
{
try
{
// Dummy long operation
await Task.Factory.StartNew(() =>
{
for (int i = 0; i < 10; i++)
{
// Option 1 (Not working)
if (cancellationToken.IsCancellationRequested)
break;
// Option 2 (Not working)
cancellationToken.ThrowIfCancellationRequested();
Thread.Sleep(6000);
}
}, cancellationToken);
}
catch (OperationCanceledException e)
{
Console.WriteLine($"{nameof(OperationCanceledException)} thrown with message: {e.Message}");
}
return Ok();
}
Now I want to cancel this long-running process so I am unsubscribing from client side as shown below
// On cancel button's click
this.subscription.unsubscribe();
The above code will cancel the request, and I can see that it is canceled in the Network tab of the browser.
But it does not set IsCancellationRequested to true in the LongRunningProcess method of the API, so the operation keeps going.
[Note]: Neither Option 1 nor Option 2 in the API method works, even if I make the call using Postman.
Question: Is there any way to cancel that LongRunningProcess method's operation?
When Angular cancels the request, you can get the cancellation token from the HTTP context:
CancellationToken cancellationToken = HttpContext.RequestAborted;
if (cancellationToken.IsCancellationRequested)
{
// The client has aborted the request
}
You don't need break in this case; just use it like this:
[HttpGet]
[Route("LongRunningProcess")]
public async Task<IActionResult> LongRunningProcess(CancellationToken cancellationToken)
{
for (int i = 0; i < 10; i++)
{
cancellationToken.ThrowIfCancellationRequested();
// Dummy long operation
await Task.Factory.StartNew(() => Thread.Sleep(60000));
}
return Ok();
}
You can read more about it here.
This is because your dummy long operation does not monitor the cancellationToken. I'm also not sure it is actually your intention to start 10 one-minute tasks all in parallel without any delay, which is what your code does.
In order to have a dummy long operation, the code would be like
[HttpGet]
[Route("LongRunningProcess")]
public async Task<IActionResult> LongRunningProcess(CancellationToken cancellationToken)
{
// Dummy long operation
await Task.Run(() =>
{
for (var i = 0; i < 60; i++)
{
if (cancellationToken.IsCancellationRequested)
break;
Task.Delay(1000).Wait();
}
});
return Ok();
}
Task.Run is just equivalent to Task.Factory.StartNew, by the way.
However, if you just need a dummy long-running operation in your web API, then you can also simply use Task.Delay, which supports a cancellation token. Task.Delay throws an exception when the request is canceled, so add exception handling code if you need to do something after request cancellation.
[HttpGet]
[Route("LongRunningProcess")]
public async Task<IActionResult> LongRunningProcess(CancellationToken cancellationToken)
{
// Dummy long operation
await Task.Delay(60000, cancellationToken);
return Ok();
}
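If you do need to react to the cancellation, the exception handling is just a try/catch around the awaited call, e.g.:
try
{
    await Task.Delay(60000, cancellationToken);
}
catch (TaskCanceledException)
{
    // The client aborted the request; clean up here if needed.
    return StatusCode(499);  // non-standard "client closed request" code, purely illustrative
}
return Ok();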
Any HTTP observables still running at the time will complete and run their logic unless you unsubscribe in ngOnDestroy(). Whether the consequences are trivial or not will depend on what you do in the subscribe handler. If you try to update something that doesn't exist anymore, you may get an error.
Tip: the Subscription contains a closed boolean property that may be useful in advanced cases. For HTTP this will be set when it completes. In Angular it might be useful in some situations to set a _isDestroyed property in ngOnDestroy() which can be checked by your subscribe handler.
Tip 2: If handling multiple subscriptions you can create an ad-hoc new Subscription() object and add(...) any other subscriptions to it - so when you unsubscribe from the main one it will unsubscribe all the added subscriptions too.
So, best practice is to use takeUntil() and unsubscribe from http calls when the component is destroyed.
import { Subject } from 'rxjs';
import { takeUntil } from 'rxjs/operators';
.....
private destroy$ = new Subject<void>();
this.httpClient.get('api/Controller/LongRunningProcess')
  .pipe(takeUntil(this.destroy$))
  .subscribe((response) => { /* handle response */ });
.....
ngOnDestroy(): void {
  this.destroy$.next(); // trigger the unsubscribe
  this.destroy$.complete(); // finalize & clean up the subject stream
}
var cancellationTokenSource = new CancellationTokenSource();
cancellationTokenSource.CancelAfter(2000); // give up after 2 seconds

using (var response = await _httpClient.GetAsync("emp",
    HttpCompletionOption.ResponseHeadersRead, cancellationTokenSource.Token))
{
    response.EnsureSuccessStatusCode();
    var stream = await response.Content.ReadAsStreamAsync();
    var emp = await JsonSerializer.DeserializeAsync<List<empDto>>(stream, _options);
}
Further, we can also use a CancellationTokenSource like this on the HttpClient side; it is essentially a way to terminate the request after a certain time interval.
1. In Angular, subscription.unsubscribe() closes the channel and causes ASP.NET Core to cancel the API caller's request; that's good.
2. Don't use await Task.Run(()... This creates a result/task that should be disposed; if it isn't, the task keeps going, and your pattern doesn't permit this - that's why it continues to run.
3. Simply await this.YourLongRunningFunction(); I'm pretty sure that when the owning thread throws the OperationCanceled exception your task will end.
4. If 3 doesn't work, then pass a cancellation token to your long-running task and set it when you catch your OperationCanceled exception (see the sketch below).
I'm writing a Windows Service that will kick off multiple worker threads that will listen to Amazon SQS queues and process messages. There will be about 20 threads listening to 10 queues.
The threads will have to be always running and that's why I'm leaning towards to actually using actual threads for the worker loops rather than threadpool threads.
Here is a top-level implementation. The Windows service will kick off multiple worker threads and each will listen to its queue and process messages.
protected override void OnStart(string[] args)
{
for (int i = 0; i < _workers; i++)
{
new Thread(RunWorker).Start();
}
}
Here is the implementation of the work
public async void RunWorker()
{
while(true)
{
// .. get message from amazon sqs sync.. about 20ms
var message = sqsClient.ReceiveMessage();
try
{
await PerformWebRequestAsync(message);
await InsertIntoDbAsync(message);
}
catch(SomeExeception)
{
// ... log
//continue to retry
continue;
}
sqsClient.DeleteMessage();
}
}
I know I can perform the same operation with Task.Run and execute it on the threadpool thread rather than starting individual thread, but I don't see a reason for that since each thread will always be running.
Do you see any problems with this implementation? How reliable would it be to leave threads always running in this fashion and what can I do to make sure that each thread is always running?
One problem with your existing solution is that you call your RunWorker in a fire-and-forget manner, albeit on a new thread (i.e., new Thread(RunWorker).Start()).
RunWorker is an async method; it will return to the caller when the execution point hits the first await (i.e. await PerformWebRequestAsync(message)). If PerformWebRequestAsync returns a pending task, RunWorker returns and the new thread you just started terminates.
I don't think you need a new thread here at all, just use AmazonSQSClient.ReceiveMessageAsync and await its result. Another thing is that you shouldn't be using async void methods unless you really don't care about tracking the state of the asynchronous task. Use async Task instead.
Your code might look like this:
List<Task> _workers = new List<Task>();
CancellationTokenSource _cts = new CancellationTokenSource();
protected override void OnStart(string[] args)
{
for (int i = 0; i < _MAX_WORKERS; i++)
{
_workers.Add(RunWorkerAsync(_cts.Token));
}
}
public async Task RunWorkerAsync(CancellationToken token)
{
while(true)
{
token.ThrowIfCancellationRequested();
// .. get message from amazon sqs sync.. about 20ms
var message = await sqsClient.ReceiveMessageAsync().ConfigureAwait(false);
try
{
await PerformWebRequestAsync(message);
await InsertIntoDbAsync(message);
}
catch(SomeExeception)
{
// ... log
//continue to retry
continue;
}
sqsClient.DeleteMessage();
}
}
Now, to stop all pending workers, you could simply do this (from the main "request dispatcher" thread):
_cts.Cancel();
try
{
Task.WaitAll(_workers.ToArray());
}
catch (AggregateException ex)
{
ex.Handle(inner => inner is OperationCanceledException);
}
Note, ConfigureAwait(false) is optional for Windows Service, because there's no synchronization context on the initial thread, by default. However, I'd keep it that way to make the code independent of the execution environment (for cases where there is synchronization context).
Finally, if for some reason you cannot use ReceiveMessageAsync, or you need to call another blocking API, or simply do a piece of CPU intensive work at the beginning of RunWorkerAsync, just wrap it with Task.Run (as opposed to wrapping the whole RunWorkerAsync):
var message = await Task.Run(
() => sqsClient.ReceiveMessage()).ConfigureAwait(false);
Well, for one I'd use a CancellationTokenSource instantiated in the service and passed down to the workers. Your while statement would become:
while(!cancellationTokenSource.IsCancellationRequested)
{
//rest of the code
}
This way you can cancel all your workers from the OnStop service method.
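For example, assuming the CancellationTokenSource is a field on the service (named _cancellationTokenSource here):
private readonly CancellationTokenSource _cancellationTokenSource = new CancellationTokenSource();

protected override void OnStop()
{
    // Signal every worker loop to exit; each thread finishes its current iteration and returns.
    _cancellationTokenSource.Cancel();
}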
Additionally, you should watch for:
If you're playing with thread states from outside of the thread, then a ThreadStateException, or ThreadInterruptedException or one of the others might be thrown. So, you want to handle a proper thread restart.
Do the workers need to run without pause in-between iterations? I would throw in a sleep in there (even a few ms's) just so they don't keep the CPU up for nothing.
You need to handle ThreadStartException and restart the worker, if it occurs.
Other than that there's no reason why those 10 threads can't run for as long as the service runs (days, weeks, months at a time).