Is there a way to figure out in ASP.NET Web API beta whether the HTTP request was cancelled (aborted by the user or for any other reason)? I'm looking for a kind of out-of-the-box cancellation token that signals that the request has been aborted, so that long-running operations can be aborted as well.
Possibly related question: what is the use case for the CancellationTokenModelBinder class? What is the reason to have a separate binder for a cancellation token?
You could check Response.IsClientConnected from time to time to see if the browser is still connected to the server.
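For example, a long-running action could poll it between chunks of work. This is only a rough sketch: it assumes IIS hosting (so HttpContext.Current is available) and a later Web API version for IHttpActionResult, and DoNextChunkOfWorkAsync and the chunk count are placeholders, not real APIs:
public async Task<IHttpActionResult> GetLongRunning()
{
    var response = HttpContext.Current.Response; // only available when hosted under IIS

    for (var i = 0; i < 100; i++) // 100 chunks of work, purely illustrative
    {
        if (!response.IsClientConnected)
        {
            // the client aborted the request, so stop the long-running operation
            return StatusCode(HttpStatusCode.NoContent);
        }
        await DoNextChunkOfWorkAsync(); // placeholder for your own logic
    }

    return Ok();
}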
I'd like to sum up a bit. The only approach that seems to work is checking Response.IsClientConnected.
Here are some technical details about what is going on behind the scenes:
here and here
This approach has some flaws:
Works only under IIS (no self-hosting, no Dev Server);
According to some SO answers it may be slow (it does not react immediately after the client disconnects): here;
There are concerns regarding the cost of this call: here
In the end, I came up with the following piece of code to inject a CancellationToken based on IsClientConnected into the Web API controller:
public class ConnectionAbortTokenAttribute : System.Web.Http.Filters.ActionFilterAttribute
{
    private readonly string _paramName;
    private Timer _timer;
    private CancellationTokenSource _tokenSource;
    private CancellationToken _token;

    public ConnectionAbortTokenAttribute(string paramName)
    {
        _paramName = paramName;
    }

    public override void OnActionExecuting(System.Web.Http.Controllers.HttpActionContext actionContext)
    {
        object value;
        if (!actionContext.ActionArguments.TryGetValue(_paramName, out value))
        {
            // no args with defined name found
            base.OnActionExecuting(actionContext);
            return;
        }

        var context = HttpContext.Current;
        if (context == null)
        {
            // consider the self-hosting case (?)
            base.OnActionExecuting(actionContext);
            return;
        }

        _tokenSource = new CancellationTokenSource();
        _token = _tokenSource.Token;

        // inject
        actionContext.ActionArguments[_paramName] = _token;

        // stop timer on client disconnect
        _token.Register(() => _timer.Dispose());

        _timer = new Timer
        (
            state =>
            {
                if (!context.Response.IsClientConnected)
                {
                    _tokenSource.Cancel();
                }
            }, null, 0, 1000 // check each second. Opts: make configurable; increase/decrease.
        );

        base.OnActionExecuting(actionContext);
    }

    /*
     * Is this guaranteed to be called?
     */
    public override void OnActionExecuted(System.Web.Http.Filters.HttpActionExecutedContext actionExecutedContext)
    {
        if (_timer != null)
            _timer.Dispose();

        if (_tokenSource != null)
            _tokenSource.Dispose();

        base.OnActionExecuted(actionExecutedContext);
    }
}
If you add a CancellationToken parameter to your controller methods, it will be automatically injected by the framework, and when a client calls xhr.abort() the token will be automatically cancelled.
Something similar to
public Task<string> Get(CancellationToken cancellationToken = default(CancellationToken))
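The bound token can then simply be passed down to whatever long-running work the action does. A minimal sketch (GetStringSlowlyAsync is a made-up placeholder, not a framework API):
public async Task<string> Get(CancellationToken cancellationToken = default(CancellationToken))
{
    // the framework binds this token and cancels it when the client aborts the request
    // (e.g. via xhr.abort()), so pass it down to the long-running work
    return await GetStringSlowlyAsync(cancellationToken); // placeholder for your own logic
}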
For MVC you can also refer to
HttpContext.Current.Response.IsClientConnected
HttpContext.Response.ClientDisconnectedToken
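For example, inside an MVC action (a minimal sketch; ClientDisconnectedToken requires IIS 7.5+ with the integrated pipeline):
// inside an MVC controller action
if (!Response.IsClientConnected || Response.ClientDisconnectedToken.IsCancellationRequested)
{
    // the browser has gone away, stop doing expensive work
}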
For .NET Core, HttpContext.RequestAborted is the request's cancellation token:
services.AddTransient<ICustomInterface>(provider => {
    var accessor = provider.GetService<IHttpContextAccessor>();
    var cancellationToken = accessor.HttpContext.RequestAborted;
    // pass the token on to your ICustomInterface implementation
    return new CustomImplementation(cancellationToken); // CustomImplementation is a placeholder name
});
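In an ASP.NET Core controller you can also get the same token without the accessor, either via HttpContext.RequestAborted or by binding a CancellationToken action parameter (standard framework behaviour; DoWorkAsync is a placeholder):
[HttpGet]
public async Task<IActionResult> Get(CancellationToken cancellationToken)
{
    // cancellationToken is HttpContext.RequestAborted and is signalled when the client disconnects
    await DoWorkAsync(cancellationToken); // placeholder for your own work
    return Ok();
}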
I'm working on a project to demonstrate the Authorization Code flow. Therefore I don't want to use any library that handles the authentication for me; I want to implement the whole process myself.
I created a Blazor Server app (SignalR).
On the index page there is a single "Connect" button that starts the whole authentication process, which works as follows.
Index.razor
//on button click
protected async Task ConnectClick()
{
    await ConnectService.Connect();
}
ConnectService.cs
public void CreateSession()
{
    if (!_httpContextAccessor.HttpContext.Request.Cookies.TryGetValue("userId", out string userId))
    {
        _httpContextAccessor.HttpContext.Response.Cookies.Append("userId", Guid.NewGuid().ToString());
    }
}

public async Task Connect()
{
    CreateSession();

    //Generate random string as "state" parameter for ACF
    var state = Guid.NewGuid().ToString();

    var authorizeArgs = new Dictionary<string, string>
    {
        {"client_id", ...},
        {"scope", ...},
        {"redirect_uri", ".../Auth/ConnectCallback"},
        {"response_type", "code"},
        {"state", state}
    };

    //Save state to cookie to verify in later step
    _httpContextAccessor.HttpContext.Response.Cookies.Append("state", state);

    var url = ... //prepare url, not important
    _navigationManager.NavigateTo(url);
}

public async Task ConnectCallback(string code, string state)
{
    //Verify state
    if (!_httpContextAccessor.HttpContext.Request.Cookies.TryGetValue("state", out string stateValue) || stateValue != state)
    {
        throw new AuthenticationException();
    }

    ... //rest of authentication steps

    _httpContextAccessor.HttpContext.Request.Cookies.TryGetValue("userId", out string userId);
    _memoryCache.Set(userId, access_token);
    _navigationManager.NavigateTo("/mypage");
}
ConnectCallback.razor
@page "/Auth/ConnectCallback"
...
@code {
    protected override async Task OnInitializedAsync()
    {
        await AuthService.ConnectCallback(HttpContextAccessor.HttpContext.Request.Query["code"][0], HttpContextAccessor.HttpContext.Request.Query["state"][0]);
    }
}
I know that a library would handle this in a much cleaner way, but the goal is to show the flow in a small demo app.
This is the latest state. I don't know if it is better to save the access token directly in the browser, but for now I keep it in memory, paired with the userId.
What happens is that whenever I try to append a cookie I receive:
System.InvalidOperationException: Headers are read-only, response has already started.
Now, I understand I'm doing something wrong. Does anyone know what the proper way to do this would be, or what I am doing wrong here? I can't seem to find a solution to this anywhere.
You can create a CookieController that you will use for cookie management and redirect to it.
IsEssential indicates that the cookie is necessary for the website to function correctly.
[Route("[controller]")]
[ApiController]
public class CookieController : ControllerBase
{
    [HttpGet("SetStateCookie")]
    public async Task<ActionResult> SetStateCookie()
    {
        CookieOptions opt = new CookieOptions
        {
            IsEssential = true
        };
        Response.Cookies.Append("state", $"{Guid.NewGuid()}", opt);
        return Redirect("/");
    }
}
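On the Blazor side you would then set the cookie by redirecting to that endpoint with a full page load instead of appending it from the circuit. A sketch of that call (forceLoad: true makes a real HTTP request, so the controller can still write response headers):
// instead of _httpContextAccessor.HttpContext.Response.Cookies.Append("state", state);
_navigationManager.NavigateTo("/Cookie/SetStateCookie", forceLoad: true);
Note that with this approach the controller, not the Blazor component, generates the state value, so the later verification step has to read it back from the cookie rather than from a local variable.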
I need to call Redis (using StackExchange.Redis) from my ASP.NET Web API (.NET Framework 4.7.2) in order to delete a key in a fire-and-forget way, and I am running some stress tests.
As I am comparing the various ways to get the maximum speed:
I have already tested executing the command with the FireAndForget flag;
I have also measured a simple command to Redis by awaiting it.
I am now looking for a way to collect the commands received in a window of 15 ms and execute them all in one go by pipelining them.
I first tried using a Task.Run action to call Redis, but the problem I am observing is that under stress the memory of my Web API keeps climbing.
The memory fills up with System.Threading.IThreadPoolWorkItem[] objects with the following code:
[HttpPost]
[Route("api/values/testpostfireforget")]
public ApiResult<int> DeleteFromBasketId([FromBody] int basketId)
{
    var response = new DeleteFromBasketResponse<int>();
    var cpt = Interlocked.Increment(ref counter);

    Task.Run(async () => {
        await db.StringSetAsync($"BASKET_TO_DELETE_{cpt}", cpt.ToString())
            .ConfigureAwait(false);
    });

    return response;
}
So I think that under stress my API keeps enqueuing background tasks in memory and executes them one after the other as fast as it can, but more slowly than the requests are coming in...
So I am searching for a way to have only one long-lived background thread running with the ASP.NET Web API that could capture the commands to send to Redis and execute them by pipelining them.
I was thinking of running a background task by implementing the IHostedService interface, but it seems that in this case the background task would not share any state with my current HTTP request. So implementing an IHostedService would be handy for a scheduled background task, but not in my case, or I do not know how...
Based on the StackExchange.Redis documentation, you can use the CommandFlags.FireAndForget flag:
[HttpPost]
[Route("api/values/testpostfireforget")]
public ApiResult<int> DeleteFromBasketId([FromBody] int basketId)
{
    var response = new DeleteFromBasketResponse<int>();
    var cpt = Interlocked.Increment(ref counter);
    db.StringSet($"BASKET_TO_DELETE_{cpt}", cpt.ToString(), flags: CommandFlags.FireAndForget);
    return response;
}
Edit 1: another solution, based on the comments.
You can use a producer/consumer approach. Something like this should work:
public class MessageBatcher
{
    private readonly IDatabase target;
    private readonly BlockingCollection<Action<IDatabaseAsync>> tasks = new();
    private Task worker;

    public MessageBatcher(IDatabase target) => this.target = target;

    public void AddMessage(Action<IDatabaseAsync> task) => tasks.Add(task);

    public IDisposable Start(int batchSize)
    {
        var cancellationTokenSource = new CancellationTokenSource();
        worker = Task.Factory.StartNew(state =>
        {
            var count = 0;
            var tokenSource = (CancellationTokenSource)state;
            var box = new StrongBox<IBatch>(target.CreateBatch());
            tokenSource.Token.Register(b => ((StrongBox<IBatch>)b).Value.Execute(), box);
            foreach (var task in tasks.GetConsumingEnumerable(tokenSource.Token))
            {
                var batch = box.Value;
                task(batch);
                if (++count == batchSize)
                {
                    batch.Execute();
                    box.Value = target.CreateBatch();
                    count = 0;
                }
            }
        }, cancellationTokenSource, cancellationTokenSource.Token, TaskCreationOptions.LongRunning, TaskScheduler.Current);

        return new Disposer(worker, cancellationTokenSource);
    }

    private class Disposer : IDisposable
    {
        private readonly Task worker;
        private readonly CancellationTokenSource tokenSource;

        public Disposer(Task worker, CancellationTokenSource tokenSource) => (this.worker, this.tokenSource) = (worker, tokenSource);

        public void Dispose()
        {
            tokenSource.Cancel();
            worker.Wait();
            tokenSource.Dispose();
        }
    }
}
Usage:
private readonly MessageBatcher batcher;

ctor(MessageBatcher batcher) // ensure that the passed batcher is a singleton and has already been started
{
    this.batcher = batcher;
}
[HttpPost]
[Route("api/values/testpostfireforget")]
public ApiResult<int> DeleteFromBasketId([FromBody] int basketId)
{
    var response = new DeleteFromBasketResponse<int>();
    var cpt = Interlocked.Increment(ref counter);
    batcher.AddMessage(db => db.StringSetAsync($"BASKET_TO_DELETE_{cpt}", cpt.ToString(), flags: CommandFlags.FireAndForget));
    return response;
}
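To wire this up, the batcher would be registered as a singleton and started once at application startup, roughly like this (a sketch assuming the Microsoft.Extensions.DependencyInjection container; the connection string, the batch size and the way the IDatabase is obtained are assumptions):
// Startup.ConfigureServices
services.AddSingleton<MessageBatcher>(sp =>
{
    var muxer = ConnectionMultiplexer.Connect("localhost"); // assumed connection string
    var batcher = new MessageBatcher(muxer.GetDatabase());
    batcher.Start(batchSize: 100); // keep the returned IDisposable if you want a clean shutdown
    return batcher;
});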
I set up a CancellationTokenSource with a handler:
public class AppTimeout
{
    public async Task Invoke(HttpContext httpContext)
    {
        var cancellationTokenSource = CancellationTokenSource.CreateLinkedTokenSource(httpContext.RequestAborted);
        cancellationTokenSource.CancelAfter(myTimestamp);
        cancellationTokenSource.Token.Register(() =>
        {
            log.info("...");
        });
        await _next(httpContext);
    }
}
My problem is that if only one request times out, the callback registered on cancellationTokenSource.Token is called for every request that has been processed by the Invoke method, even requests that already finished within the correct time.
Do you know why I encounter this behaviour and how to fix it, please?
using var registration = timeoutCancellationTokenSource.Token.Register(() => {
    log.info($"timeout path is {path}");
});
// your other code here...
Now it will unregister correctly when complete, i.e. when leaving the scope of the using.
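Applied to the middleware from the question, that might look like the following sketch (myTimestamp, log and _next come from the original snippet; the linked CancellationTokenSource is also wrapped in a using so it is disposed per request):
public async Task Invoke(HttpContext httpContext)
{
    using var cancellationTokenSource =
        CancellationTokenSource.CreateLinkedTokenSource(httpContext.RequestAborted);
    cancellationTokenSource.CancelAfter(myTimestamp);

    // the registration is disposed when this request leaves the middleware,
    // so the callback can no longer fire for requests that already completed
    using var registration = cancellationTokenSource.Token.Register(() =>
    {
        log.info("...");
    });

    await _next(httpContext);
}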
I work with some Wi-Fi devices such as cameras.
The basic flow that I implemented:
Someone presses a button.
The button calls my Web API endpoint.
My Web API endpoint calls one of the camera's APIs (via an HTTP request).
Processing each request takes 5 seconds, and between requests there should be a 1 second delay. For instance, if you press the button 2 times with a one second delay between presses: first we expect 5 seconds for processing the first press, then a one second delay, and finally 5 seconds for processing the second press.
To do that I am using queued background tasks in a fire-and-forget manner in a .NET Core 3.1 project, and it works fine when I am dealing with just one camera.
But the new requirement of the project is that the background tasks should handle multiple cameras. That means one queue per camera, and the queues should work in parallel based on the flow that I described above.
For example, say we have 2 cameras, camera-001 and camera-002, and 2 connected buttons, btn-cam-001 and btn-cam-002, and the order of pressing (0.5 sec delay after each press) is: 2x btn-cam-001 and 1x btn-cam-002.
What really happens is FIFO. First the requests of btn-cam-001 will be processed and then btn-cam-002.
What I expect and need: camera-002 should not have to wait; the first requests towards both cameras 001/002 should be processed at the same time (based on the example), as if each camera had its own queue and its own process.
The question is: how can I achieve that in .NET Core 3.1?
Appreciate any help.
My current background service:
public class QueuedHostedService : BackgroundService
{
    public IBackgroundTaskQueue TaskQueue { get; }
    private readonly ILogger _logger;

    public QueuedHostedService(IBackgroundTaskQueue taskQueue, ILoggerFactory loggerFactory)
    {
        TaskQueue = taskQueue;
        _logger = loggerFactory.CreateLogger<QueuedHostedService>();
    }

    protected override async Task ExecuteAsync(CancellationToken cancellationToken)
    {
        _logger.LogInformation("Queued Hosted Service is starting.");

        while (!cancellationToken.IsCancellationRequested)
        {
            var workItem = await TaskQueue.DequeueAsync(cancellationToken);
            try
            {
                await workItem(cancellationToken);
            }
            catch (Exception exception)
            {
                _logger.LogError(exception, $"Error occurred executing {nameof(workItem)}.");
            }
        }

        _logger.LogInformation("Queued Hosted Service is stopping.");
    }
}
And the current BackgroundTaskQueue:
public class BackgroundTaskQueue : IBackgroundTaskQueue
{
    private readonly SemaphoreSlim _signal = new SemaphoreSlim(0);
    private readonly ConcurrentQueue<Func<CancellationToken, Task>> _workItems =
        new ConcurrentQueue<Func<CancellationToken, Task>>();

    public void QueueBackgroundWorkItem(Func<CancellationToken, Task> workItem)
    {
        if (workItem is null)
        {
            throw new ArgumentNullException(nameof(workItem));
        }

        _workItems.Enqueue(workItem);
        _signal.Release();
    }

    public async Task<Func<CancellationToken, Task>> DequeueAsync(CancellationToken cancellationToken)
    {
        await _signal.WaitAsync(cancellationToken);
        _workItems.TryDequeue(out var workItem);
        return workItem;
    }
}
My current endpoint:
[HttpPost("hit")]
public ActionResult TurnOnAsync([FromBody] HitRequest request, CancellationToken cancellationToken = default)
{
    try
    {
        var camera = ConfigurationHelper.GetAndValidateCamera(request.Device, _configuration);

        _taskQueue.QueueBackgroundWorkItem(async x =>
        {
            await _cameraRelayService.TurnOnAsync(request.Device, cancellationToken);
            Thread.Sleep(TimeSpan.FromSeconds(1));
        });

        return Ok();
    }
    catch (Exception exception)
    {
        _logger.LogError(exception, "Error when turning on the lamp {DeviceName}.", request.Device);
        return StatusCode(StatusCodes.Status500InternalServerError, exception.Message);
    }
}
Instead of a single BackgroundTaskQueue you could have one per camera. You could store the queues in a dictionary, having the camera as the key:
public IDictionary<IDevice, IBackgroundTaskQueue> TaskQueues { get; }
Then in your end-point use the queue that is associated with the requested camera:
_taskQueues[camera].QueueBackgroundWorkItem(async x =>
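A rough registration sketch of that idea, assuming cameras are identified by a name read from configuration (the string-keyed dictionary and the cameraNames variable below are assumptions, not code from the question):
// Startup.ConfigureServices: one queue and one hosted service per camera
var taskQueues = new Dictionary<string, IBackgroundTaskQueue>();
foreach (var name in cameraNames) // e.g. { "camera-001", "camera-002" }
{
    var queue = new BackgroundTaskQueue();
    taskQueues[name] = queue;
    // each QueuedHostedService drains exactly one camera's queue, so cameras are processed in parallel
    services.AddSingleton<IHostedService>(sp =>
        new QueuedHostedService(queue, sp.GetRequiredService<ILoggerFactory>()));
}
services.AddSingleton<IReadOnlyDictionary<string, IBackgroundTaskQueue>>(taskQueues);
The endpoint then looks up the queue for the requested device before queuing the work item.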
I've got an architectural problem I just can't find a suitable solution for. In my web application based on ASP.NET Core MVC 2.2, I want to pull data from a JWT-secured API and publish it to connected clients using SignalR. Furthermore, the data should only be fetched if at least one client is actually connected to the SignalR hub. My problem: I can't find a way to cancel the Task.Delay inside the while loop when an additional client connects. To clarify, let me show you what I came up with so far.
First of all, here's the API client class:
public class DataApiClient : IDataApiClient {
    private readonly HttpClient httpClient;
    private readonly string dataApiUrl;

    public DataApiClient(HttpClient httpClient, IOptionsMonitor<DataSettings> dataSettings) {
        this.httpClient = httpClient;
        dataApiUrl = dataSettings.CurrentValue.dataApiUrl;
    }

    public async Task<DataOverview> GetData(string accessToken) {
        DataOverview dataOverview = new DataOverview();
        try {
            httpClient.DefaultRequestHeaders.Accept.Clear();
            // more httpClient setup
            Task<Stream> streamTask = httpClient.GetStreamAsync(dataApiUrl);
            dataOverview = serializer.ReadObject(await streamTask) as DataOverview;
        } catch (Exception e) {
            Debug.WriteLine(e.Message);
        }
        return dataOverview;
    }
}
SignalR hub:
public interface IDataClient {
    Task ReceiveData(DataOverview dataOverview);
}

public class DataHub : Hub<IDataClient> {
    private volatile static int UserCount = 0;

    public static bool UsersConnected() {
        return UserCount > 0;
    }

    public override Task OnConnectedAsync() {
        Interlocked.Increment(ref UserCount);
        return base.OnConnectedAsync();
    }

    public override Task OnDisconnectedAsync(Exception exception) {
        Interlocked.Decrement(ref UserCount);
        return base.OnDisconnectedAsync(exception);
    }
}
And a BackgroundService that gets the work done:
public class DataService : BackgroundService {
    private readonly IHubContext<DataHub, IDataClient> hubContext;
    private readonly IDataApiClient dataApiClient;
    private readonly IAccessTokenGenerator accessTokenGenerator;
    private AccessToken accessToken;

    public DataService(IHubContext<DataHub, IDataClient> hubContext, IDataApiClient dataApiClient, IAccessTokenGenerator accessTokenGenerator) {
        this.hubContext = hubContext;
        this.dataApiClient = dataApiClient;
        this.accessTokenGenerator = accessTokenGenerator;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken) {
        while (!stoppingToken.IsCancellationRequested) {
            if (DataHub.UsersConnected()) {
                if (NewAccessTokenNeeded()) {
                    accessToken = accessTokenGenerator.GetAccessToken();
                }
                DataOverview dataOverview = dataApiClient.GetData(accessToken.Token).Result;
                dataOverview.LastUpdated = DateTime.Now.ToString();
                await hubContext.Clients.All.ReceiveData(dataOverview);
            }
            // how to cancel this delay, as soon as an additional user connects to the hub?
            await Task.Delay(60000);
        }
    }

    private bool NewAccessTokenNeeded() {
        return accessToken == null || accessToken.ExpireDate < DateTime.UtcNow;
    }
}
So my questions are:
My main problem: how can I cancel the Task.Delay() inside the ExecuteAsync() while loop the moment an additional user connects, so that the newly connected client gets data immediately and doesn't have to wait until the Delay() task is over? I guess this would have to be triggered from OnConnectedAsync(), but calling the service from there doesn't seem to be a good solution.
Is this architecture even good? If not, how would you implement such a scenario?
Is there a better way to keep a count of currently connected SignalR users? I read that a static property in a Hub can be problematic if more than one SignalR server is involved (but that is not the case for me).
Does IHostedService/BackgroundService even make sense here? Since services that get added using AddHostedService() are transient now, doesn't the while loop inside the ExecuteAsync() method defeat the purpose of this approach?
What would be the best place to store a token such as a JWT access token, so that transient instances can access it while it's valid and update it when it has expired?
I also read about injecting a reference to a specific IHostedService, but that seems just wrong. Also, this discussion on GitHub made me feel that there has to be a better way to design the communication between SignalR and continuously running services.
Any help would be greatly appreciated.