How To: Cancel a Web Service Function Request - C#

This might be discussed elsewhere, but I can't find it. I have a .NET web service with a function that loops through a date range, runs calculations, and updates records in a database. Given a long date range, this function can take quite some time to complete, so I need a way to stop it.
Is there a way to make the web service function call run on an identifiable thread so that I can cancel that thread if need be? Or am I over- or under-thinking this? I am using a C# .NET web page with jQuery to perform the AJAX calls to the web service function. Any help will be greatly appreciated.

Add a Cancel() method to your web service that sets a state variable. Then have your long-running operation periodically check this variable and stop if it's set (with appropriate thread-safety protection, of course).
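A minimal sketch of that flag approach (the class and method names are illustrative, not from the original post):

```csharp
using System;

public class CalculationService
{
    // Volatile so the worker loop always sees the latest value written by Cancel().
    private volatile bool _cancelRequested;

    public void Cancel()
    {
        _cancelRequested = true;
    }

    // Returns the number of days actually processed before completion or cancellation.
    public int RunCalculations(DateTime start, DateTime end)
    {
        var processed = 0;
        for (var day = start; day <= end; day = day.AddDays(1))
        {
            if (_cancelRequested)
                break; // Stop cleanly between iterations.

            // ... run calculations and update records for 'day' ...
            processed++;
        }
        return processed;
    }
}
```

A volatile field is the simplest protection here; a CancellationTokenSource, as in the next answer, is the more idiomatic option on .NET 4 and later.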

You need two web service methods:
StartCalculation(parms), which spawns a long-running operation and returns an ID
CancelCalculation(ID), which cancels the calculation by terminating the long-running operation
The implementation of the 'long-running operation' depends on your service host (IIS, Windows service, etc.).
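A rough sketch of how those two methods could hang together, assuming one CancellationTokenSource per running operation keyed by ID (all names here are illustrative):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class CalculationHost
{
    private readonly ConcurrentDictionary<Guid, CancellationTokenSource> _running =
        new ConcurrentDictionary<Guid, CancellationTokenSource>();

    // Spawns the long-running operation and returns an ID the caller can use to cancel it.
    public Guid StartCalculation(DateTime start, DateTime end)
    {
        var id = Guid.NewGuid();
        var cts = new CancellationTokenSource();
        _running[id] = cts;

        Task.Run(() =>
        {
            for (var day = start; day <= end; day = day.AddDays(1))
            {
                cts.Token.ThrowIfCancellationRequested();
                // ... process one day ...
            }
        }, cts.Token);

        return id;
    }

    // Returns false if the ID is unknown.
    public bool CancelCalculation(Guid id)
    {
        CancellationTokenSource cts;
        if (!_running.TryRemove(id, out cts))
            return false;
        cts.Cancel();
        return true;
    }
}
```

A real implementation would also remove entries when operations complete; this sketch leaves them in place for brevity.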

Sure, you can do that. If you're using .NET 4, you can easily cancel a task:
CancellationTokenSource cts = new CancellationTokenSource();
var processingTask = Task.Factory.StartNew(() =>
{
    foreach (var item in StuffToProcess())
    {
        cts.Token.ThrowIfCancellationRequested();
        // Do your processing in a loop
    }
});
var cancelTask = Task.Factory.StartNew(() =>
{
    Thread.Sleep(/* The time you want to allow before cancelling your processing */);
    cts.Cancel();
});
try
{
    Task.WaitAll(processingTask, cancelTask);
}
catch (AggregateException ae)
{
    ae.Flatten().Handle(x =>
    {
        if (x is OperationCanceledException)
        {
            // Do anything you need to do when the task was canceled.
            return true;
        }
        // Return false for unhandled exceptions, true for handled ones.
        return false;
    });
}


Can I update one entity in parallel threads in C#?

Using: ASP.NET Core, Entity Framework Core, ABP 4.5
I have a user registration and initialization flow, but it takes a long time and I want to parallelize it. The problem comes from updating the same entity from several places, each touching a different field.
My goal:
1. The endpoint should respond as soon as possible;
2. Long initialization is processed in the background;
Code before (minor details omitted for brevity):
public async Task<ResponceDto> Rgistration(RegModel input)
{
    var user = await _userRegistrationManager.RegisterAsync(input.EmailAddress, input.Password, false);
    var result = await _userManager.AddToRoleAsync(user, defaultRoleName);
    user.Code = GenerateCode();
    await SendEmail(user.EmailAddress, user.Code);
    await AddSubEntities(user);
    await AddSubCollectionEntities(user);
    await CurrentUnitOfWork.SaveChangesAsync();
    return user.MapTo<ResponceDto>();
}
private async Task AddSubEntities(User user)
{
    var newSubEntity = new NewSubEntity { User = user, UserId = user.Id };
    await _subEntityRepo.InsertAsync(newSubEntity);
    // a few more one-to-one entities...
}
private async Task AddSubCollectionEntities(User user)
{
    List<AnotherEntity> collection = GetSomeCollection(user.Type);
    await _anotherEntitieRepo.GetDbContext().AddRangeAsync(collection);
    // a few more one-to-many collections...
}
Attempted change:
public async Task<ResponceDto> Rgistration(RegModel input)
{
    var user = await _userRegistrationManager.RegisterAsync(input.EmailAddress, input.Password, false);
    Task.Run(async () => {
        var result = await _userManager.AddToRoleAsync(user, defaultRoleName);
    });
    Task.Run(async () => {
        user.Code = GenerateCode();
        await SendEmail(user.EmailAddress, user.Code);
    });
    Task.Run(async () => {
        using (var unitOfWork = UnitOfWorkManager.Begin())
        { // long operation; the default unitOfWork is out of scope
            try
            {
                await AddSubEntities(user);
            }
            finally
            {
                unitOfWork.Complete();
            }
        }
    });
    Task.Run(async () => {
        using (var unitOfWork = UnitOfWorkManager.Begin())
        {
            try
            {
                await AddSubCollectionEntities(user);
            }
            finally
            {
                unitOfWork.Complete();
            }
        }
    });
    await CurrentUnitOfWork.SaveChangesAsync();
    return user.MapTo<ResponceDto>();
}
Errors:
Here I get a lot of different concurrency-related errors. Frequent ones:
A second operation started on this context before a previous operation completed. This is usually caused by different threads using the same instance of DbContext.
On a few registration calls: Cannot insert duplicate key row in object 'XXX' with unique index 'YYY'. The duplicate key value is (70). The statement has been terminated.
I thought every request ran on its own thread on the server, but apparently not.
Or all users are successfully registered, but some sub-entity is missing from the database. It's much easier not to register the user at all than to figure out where initialization went wrong =(
How do I keep the user entity "open" for updating and at the same time "closed" to changes initiated by other requests? How can I make this code thread-safe and fast? Can anyone help with advice?
Using Task.Run in ASP.NET is rarely a good idea.
Async methods run on the thread pool anyway, so wrapping them in Task.Run is simply adding overhead without any benefit.
The purpose of using async in ASP.NET is simply to prevent threads being blocked so they are able to serve other HTTP requests.
Ultimately, your database is the bottleneck; if all these operations need to happen before you return a response to the client, then there's not much you can do other than to let them happen.
If it is possible to return early and allow some operations to continue running on the background, then there are details here showing how that can be done.
Task.Run is not the same as parallel. It takes a thread from the pool and runs the work on that thread, and since you're not awaiting it, the rest of the code moves on. But that is because you're essentially orphaning that work: when the action returns, all the scoped services will be disposed, including things like your context. Any tasks that haven't finished yet will error out as a result.
The thread pool is a limited resource, and within the context of a web application it equates directly to the throughput of your server. Every thread you take is one less request you can service. As a result, you're more likely to end up queuing requests, which only adds to processing time. It's virtually never appropriate to use Task.Run in a web environment.
Also, EF Core (or old EF, for that matter) does not support parallel operations on the same context instance. So even without the other problems described above, that alone will stop you cold from doing what you're trying to do here.
The queries you have here are not complex. Even if you were inserting hundreds of things at once, it should still take only milliseconds to complete. If there is any significant delay, look first at the resources of your database server and your network latency.
More likely than not, the slow-down is coming from sending the email. That too can likely be optimized. I was once in a situation where emails took 30 seconds to send, until I finally figured out that an IT admin had deliberately introduced a 30-second delay on our Exchange server. Regardless, it is generally preferable to background things like sending email, since it isn't core to your app's functionality. However, that means actually processing it in the background, i.e. queuing it and processing it via something like a hosted service or an entirely separate worker process.
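As a sketch of that "queue it and process it in the background" idea, assuming the System.Threading.Channels and Microsoft.Extensions.Hosting packages are available (EmailJob and all other names are illustrative, not from the original post):

```csharp
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public record EmailJob(string Address, string Code);

// A thin wrapper over an unbounded channel shared between the
// request handler (producer) and the background sender (consumer).
public class EmailQueue
{
    private readonly Channel<EmailJob> _channel =
        Channel.CreateUnbounded<EmailJob>();

    public ValueTask EnqueueAsync(EmailJob job) =>
        _channel.Writer.WriteAsync(job);

    public ValueTask<EmailJob> DequeueAsync(CancellationToken ct) =>
        _channel.Reader.ReadAsync(ct);
}

// Drains the queue for the lifetime of the host.
public class EmailSenderService : BackgroundService
{
    private readonly EmailQueue _queue;

    public EmailSenderService(EmailQueue queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            var job = await _queue.DequeueAsync(stoppingToken);
            // ... send the email; failures should be logged and retried,
            // not allowed to kill the loop ...
        }
    }
}
```

The registration endpoint then only enqueues the job and returns, while the hosted service does the slow SMTP work off the request path.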

Parallel queued background tasks with hosted services in ASP.NET Core

I'm doing some tests with the new "Background tasks with hosted services in ASP.NET Core" feature introduced in version 2.1, more specifically with queued background tasks, and a question about parallelism came to mind.
I'm currently following the Microsoft tutorial strictly, and when simulating a workload with several requests from the same user enqueuing tasks, I noticed that all work items execute in order, so there is no parallelism.
My question is: is this behavior expected? And if so, in order to make execution parallel, is it OK to fire and forget instead of waiting for each work item to complete?
I've searched for a couple of days on this specific scenario without luck, so if anyone has any guidance or examples to provide, I would be really glad.
Edit: The code from the tutorial is quite long, so the link for it is https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-2.1#queued-background-tasks
The method which executes the work item is this:
public class QueuedHostedService : IHostedService
{
    ...
    public Task StartAsync(CancellationToken cancellationToken)
    {
        _logger.LogInformation("Queued Hosted Service is starting.");
        _backgroundTask = Task.Run(BackgroundProceessing);
        return Task.CompletedTask;
    }

    private async Task BackgroundProceessing()
    {
        while (!_shutdown.IsCancellationRequested)
        {
            var workItem =
                await TaskQueue.DequeueAsync(_shutdown.Token);
            try
            {
                await workItem(_shutdown.Token);
            }
            catch (Exception ex)
            {
                _logger.LogError(ex,
                    $"Error occurred executing {nameof(workItem)}.");
            }
        }
    }
    ...
}
The main point of the question is to find out whether this specific technology can be used to execute several work items at the same time, since the server can handle the workload.
I tried firing and forgetting the work item, and it worked the way I intended, with several tasks executing in parallel at the same time. I'm just not sure whether this is an OK practice, or whether there is a better or more proper way of handling this situation.
The code you posted executes the queued items in order, one at a time, but in parallel to the web server. An IHostedService by definition runs in parallel to the web server. This article provides a good overview.
Consider the following example:
_logger.LogInformation("Before()");
for (var i = 0; i < 10; i++)
{
    var j = i;
    _backgroundTaskQueue.QueueBackgroundWorkItem(async token =>
    {
        var random = new Random();
        await Task.Delay(random.Next(50, 1000), token);
        _logger.LogInformation($"Event {j}");
    });
}
_logger.LogInformation("After()");
We add ten tasks, each of which waits a random amount of time. If you put this code in a controller method, the events are still logged even after the controller method returns. But each item executes in order, so the output looks like this:
Event 0
Event 1
...
Event 8
Event 9
In order to introduce parallelism we have to change the implementation of the BackgroundProceessing method in the QueuedHostedService.
Here is an example implementation that allows two Tasks to be executed in parallel:
private async Task BackgroundProceessing()
{
    var semaphore = new SemaphoreSlim(2);

    void HandleTask(Task task)
    {
        semaphore.Release();
    }

    while (!_shutdown.IsCancellationRequested)
    {
        await semaphore.WaitAsync();
        var item = await TaskQueue.DequeueAsync(_shutdown.Token);
        var task = item(_shutdown.Token);
        task.ContinueWith(HandleTask);
    }
}
With this implementation the events are no longer logged in order, since each task waits a random amount of time. So the output could be:
Event 0
Event 1
Event 2
Event 3
Event 4
Event 5
Event 7
Event 6
Event 9
Event 8
Edit: Is it OK in a production environment to execute code this way, without awaiting it?
I think the reason most devs have a problem with fire-and-forget is that it is often misused.
When you execute a Task using fire-and-forget, you are basically saying that you do not care about the result of that function. You do not care if it exits successfully, if it is canceled, or if it threw an exception. But for most Tasks you do care about the result.
You do want to make sure a database write went through
You do want to make sure a Log entry is written to the hard drive
You do want to make sure a network packet is sent to the receiver
And if you care about the result of the Task then fire-and-forget is the wrong method.
That's it, in my opinion. The hard part is finding a Task where you really do not care about the result.
You can add the QueuedHostedService once or twice for every CPU in the machine.
So something like this:
for (var i = 0; i < Environment.ProcessorCount; ++i)
{
    services.AddHostedService<QueuedHostedService>();
}
You can hide this in an extension method and make the concurrency level configurable to keep things clean.
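That extension method could look something like this (all names are illustrative; AddTransient is used directly because newer ASP.NET Core versions deduplicate AddHostedService registrations by implementation type, so the 2.x-era multiple-registration trick needs the explicit form there):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public static class HostedServiceCollectionExtensions
{
    // Registers 'concurrency' independent instances of the same hosted service
    // type; each instance runs its own processing loop against the shared queue.
    public static IServiceCollection AddHostedServiceInstances<THostedService>(
        this IServiceCollection services, int concurrency)
        where THostedService : class, IHostedService
    {
        for (var i = 0; i < concurrency; ++i)
        {
            services.AddTransient<IHostedService, THostedService>();
        }
        return services;
    }
}

// A no-op hosted service, just to demonstrate the registration.
public class NoOpHostedService : IHostedService
{
    public Task StartAsync(CancellationToken cancellationToken) => Task.CompletedTask;
    public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;
}
```

With the question's setup this would be called as services.AddHostedServiceInstances<QueuedHostedService>(Environment.ProcessorCount);.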

How to handle a reply for long-running REST request?

Background: We have import-functions that can take anywhere from a few seconds to 1-2 hours to run depending on the file being imported. We want to expose a new way of triggering imports, via a REST request.
Ideally the REST service would be called, trigger the import and reply with a result when done. My question is: since it can take up to two hours to run, is it possible to reply or will the request timeout for the caller? Is there a better way for this kind of operation?
What I use in these cases is an asynchronous operation that returns no result (a void result in the case of a C# Web API) and then sends the result asynchronously using a message queue.
E.g.
[HttpPut]
[Route("update")]
public void Update()
{
    var task = Task.Run(() => this.engine.Update());
    task.ContinueWith(t => publish(t, "Update()"));
}

How to run WCF web service and multithread task inside windows service separately?

I have a Windows service with a WCF web service hosted inside. An infinite task needs to read some logs from a device every 2 seconds. At the same time, the web service methods should work properly when they are called. In my case, when I debug, it seems that web service method calls interrupt the infinite task, so my task is not running on a different thread.
How can I optimize my code so it works separately from the WCF web service? Where is the problem?
On Windows service start:
protected override void OnStart(string[] args)
{
    // ... other code for starting the WCF web service ...
    Work();
}
The Work method:
public async void Work()
{
    log.Debug("operation started");
    Methods checkE = new Methods();
    try
    {
        await checkE.PullLogs();
    }
    catch (Exception ex)
    {
        log.Error(ex.Message);
    }
}
This is the PullLogs method:
public async Task PullLogs()
{
    while (true)
    {
        // ... some code ...
        Parallel.ForEach(tasks, task =>
        {
            byte[] dataArrayPC;
            byte[] dataArrayCT;
            byte[] rezult;
            PTest p = new PTest();
            if (p.PingIt(task.Ip))
            {
                try
                {
                    SDKCommunication con = new SDKCommunication(task.Id, task.Ip, port, timeout, false);
                    // ... some code ...
                    while (indexPC <= indexCT)
                    {
                        int broj = con.ReadLogs(address, LOGS, indexPC, 16, 0);
                        rezult = con.GetLogs(broj);
                        readEventData(rezult);
                        indexPC = indexPC + 16;
                        if (maxiterrations > itteration)
                        {
                            // send to Java web service
                        }
                        itteration++;
                    }
                    con.Dispose();
                    else { log.Debug("error in sdk"); con.Dispose(); }
                }
                catch (Exception e) { log.Debug(e.Message); }
            }
            else { log.Error("no connection to device: " + task.Ip); }
        });
        await Task.Delay(2000);
    }
}
EDIT:
One more question: is it better to use while(true) with Task.Delay(2000), or a timer that ticks every 2 seconds?
Thanks
I'm not sure you are seeing what you think you are seeing. It sounds like you observed in the debugger that a WCF service call interrupted your PullLogs code ... but perhaps the code was executing concurrently on different threads, and the debugger simply switched you from one to the other when a breakpoint was hit, or something similar.
In general, your WCF service method calls should execute on the IO completion thread pool, and your TPL code in the Parallel.ForEach should execute with the default TaskScheduler on the default thread pool.
See here for more on specifying your own synchronization context (which determines which threads WCF code can execute on):
Synchronization Contexts in WCF
If your goal is to make sure that no WCF service calls are processed while your PullLogs code is running, then you will need a synchronization approach, like locking on the same object from the WCF methods, and also from the PullLogs code.
If instead your goal is to make sure that these two parts of your code are isolated, so that both are available to run simultaneously, then I don't think you need to do anything.
Now if you have observed that while your PullLogs code is executing, the WCF service is not as available as you want it to be, then I would take this to indicate that some resource (hardware threads, who knows) is being oversubscribed by the parallel loop in your PullLogs method. In that case probably the best you would be able to do is to naively limit concurrency in that loop to some smaller value ... which might slow down your PullLogs method, but could also go a long way towards making sure your WCF service remains available.
If you want to give this a try, you can create a LimitedConcurrencyTaskScheduler as is done here:
How to: Create a Task Scheduler That Limits Concurrency
and then in your code, supply an instance of this task scheduler to your call to Parallel.ForEach.
Again I don't think this should be necessary unless you have noticed an actual problem (so far you only indicate that you noticed some behavior in the debugger that you didn't expect, but that sounds perfectly reasonable to me).
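A simpler alternative to a custom scheduler, if it fits: Parallel.ForEach accepts a ParallelOptions whose MaxDegreeOfParallelism caps how many loop bodies run at once, leaving the rest of the thread pool free for WCF calls. A small self-contained sketch (the limit of 2 and the dummy work are arbitrary):

```csharp
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class ThrottledLoop
{
    // Runs 'iterations' dummy work items under the given parallelism cap and
    // returns the highest number of loop bodies observed running at once.
    public static int MaxConcurrencyObserved(int limit, int iterations)
    {
        var gate = new object();
        var concurrent = 0;
        var maxObserved = 0;

        Parallel.ForEach(
            Enumerable.Range(0, iterations),
            new ParallelOptions { MaxDegreeOfParallelism = limit },
            item =>
            {
                var now = Interlocked.Increment(ref concurrent);
                lock (gate) { if (now > maxObserved) maxObserved = now; }
                Thread.Sleep(10); // Simulate talking to one device.
                Interlocked.Decrement(ref concurrent);
            });

        return maxObserved;
    }
}
```

In the PullLogs loop this would mean passing new ParallelOptions { MaxDegreeOfParallelism = n } as the second argument to Parallel.ForEach, for some small n.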

How to test concurrency scenarios in .NET?

I've worked with concurrency, but I don't know a good way to test it.
I would like to know if there is any way to "force" Tasks to execute in a specific order to simulate test cases.
For example:
Client #1 makes a request
Server starts retrieving data to Client #1
Client #2 makes another request while the server is still responding to Client #1
Assert << something >>
I have seen some people using custom TaskSchedulers. Does it make sense?
I have run into that problem on several occasions too. Eventually I created a helper that can start a bunch of threads to execute concurrent actions. The helper provides synchronization primitives and logging mechanisms. Here is a code fragment from a unit test:
[Test]
public void TwoCodeBlocksInParallelTest()
{
    // This static method runs the provided Action delegates in parallel using threads
    CTestHelper.Run(
        c =>
        {
            Thread.Sleep(1000); // Here should be the code that provides something
            CTestHelper.AddSequenceStep("Provide"); // Record a sequence step for the post-test expectations
            CTestHelper.SetEvent();
        },
        c =>
        {
            CTestHelper.WaitEvent(); // Wait until we can consume what is provided
            CTestHelper.AddSequenceStep("Consume"); // Record a sequence step for the post-test expectations
        },
        TimeSpan.FromSeconds(10)); // Timeout parameter: if the threads deadlock or take too long, they are terminated and a timeout exception is thrown
    // After Run() completes we can check that the recorded sequence steps are in the correct order
    Expect(CTestHelper.GetSequence(), Is.EqualTo(new[] { "Provide", "Consume" }));
}
It can be used to test client/server scenarios or synchronization in components, or simply to run a thread with a timeout. I'll continue to improve it over the coming weeks. Here is the project page:
Concurrency Testing Helper
This shouldn't be too difficult to simulate using tasks:
private async Task DoSomeAsyncOperation()
{
    // This is just to simulate some work;
    // replace this with a useful call to the server
    await Task.Delay(3000);
}
Now, let's consume it:
public async Task TestServerLoad()
{
    var firstTaskCall = DoSomeAsyncOperation();
    await Task.Delay(1000); // Assume it takes about a second to execute work against the server
    var secondCall = DoSomeAsyncOperation();
    await Task.WhenAll(firstTaskCall, secondCall); // Wait till both complete
}
This is the basic producer-consumer problem in concurrency. If you want to test that case, just put a Thread.Sleep(100) in the part of the server that responds to consumers. That way your server will have a delay before sending the response, and you can invoke service requests simply by creating new threads in a loop.
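One way to force a specific interleaving deterministically, rather than relying on sleeps, is to gate the competing tasks on TaskCompletionSource signals. A sketch of the two-client scenario above (all names are illustrative):

```csharp
using System.Threading.Tasks;

public static class InterleavingScenario
{
    public static async Task<string> RunAsync()
    {
        var firstRequestStarted = new TaskCompletionSource<bool>();
        var secondRequestDone = new TaskCompletionSource<bool>();

        // "Client #1": starts, then deliberately holds its request open
        // until client #2 has finished, guaranteeing the overlap.
        var client1 = Task.Run(async () =>
        {
            firstRequestStarted.SetResult(true); // Server began serving #1.
            await secondRequestDone.Task;        // Keep #1 in flight.
            return "client1-done";
        });

        // "Client #2": only starts once #1 is in flight, so the overlap
        // being tested is deterministic instead of timing-dependent.
        var client2 = Task.Run(async () =>
        {
            await firstRequestStarted.Task;
            // ... make the second request here and assert on shared state ...
            secondRequestDone.SetResult(true);
            return "client2-done";
        });

        await Task.WhenAll(client1, client2);
        return client1.Result + ", " + client2.Result;
    }
}
```

Because each step only proceeds when the previous one has signaled, the "Assert << something >>" step always observes the same interleaving on every run.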
