I want to log when configuration is changed.
I do this in Program.cs or Startup.cs:
ChangeToken.OnChange(
() => configuration.GetReloadToken(),
state => logger.Information("Configuration reloaded"),
(object)null
);
But I get double change reports, so it needs to be debounced. The advice is to do this:
ChangeToken.OnChange(
() => configuration.GetReloadToken(),
state => { Thread.Sleep(2000); logger.Information("Configuration reloaded"); },
(object)null
);
I'm using 2000 here as I'm not sure what's a reasonable value.
I've found that sometimes I still get multiple change detections, separated by 2000 milliseconds. So the debounce doesn't work for me, just causes a delay between reported changes. If I set a high value then I only get one report, but that isn't ideal (and conceals the problem).
So I'd like to know:
Is this really debouncing, or just queueing reported changes?
I've used values from 1000 to 5000 to varying success. What are others using?
Is the sleep issued to the server's main thread? I hope not!
The multiple-change-detection issue is discussed here (and in at least a dozen other issues across multiple repos), and it's something they have so far refused to address with a built-in mechanism.
The MS docs use a file hashing approach, but I think that debouncing is better.
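For reference, the hashing idea is to compare a hash of the settings file on every reload callback and only react when the hash actually changes. Here's a minimal sketch of that idea (not the docs' exact code; the file name, hash algorithm and 250 ms settle delay are assumptions):

using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Threading;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Primitives;

public static class HashedReloadWatcher
{
    private static byte[] _lastHash = Array.Empty<byte>();

    // Call once at startup; "appsettings.json" is assumed to be the watched file.
    public static void Watch(IConfiguration configuration, ILogger logger)
    {
        ChangeToken.OnChange(configuration.GetReloadToken, () =>
        {
            Thread.Sleep(250); // give the writer a moment to release the file lock

            byte[] hash;
            using (var sha = SHA256.Create())
            using (var stream = File.OpenRead("appsettings.json"))
            {
                hash = sha.ComputeHash(stream);
            }

            // Only report a change when the file contents actually changed.
            if (!hash.SequenceEqual(_lastHash))
            {
                _lastHash = hash;
                logger.LogInformation("Configuration reloaded");
            }
        });
    }
}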
My solution uses async (avoiding mixing async and sync calls in a way that could accidentally blow something up) and a hosted service that debounces change detections.
Debouncer.cs:
public sealed class Debouncer : IDisposable {
public Debouncer(TimeSpan? delay) => _delay = delay ?? TimeSpan.FromSeconds(2);
private readonly TimeSpan _delay;
private CancellationTokenSource? previousCancellationToken = null;
public async Task Debounce(Action action) {
_ = action ?? throw new ArgumentNullException(nameof(action));
Cancel();
previousCancellationToken = new CancellationTokenSource();
try {
await Task.Delay(_delay, previousCancellationToken.Token);
await Task.Run(action, previousCancellationToken.Token);
}
catch (TaskCanceledException) { } // can swallow exception as nothing more to do if task cancelled
}
public void Cancel() {
if (previousCancellationToken != null) {
previousCancellationToken.Cancel();
previousCancellationToken.Dispose();
previousCancellationToken = null; // prevent a second Cancel() from touching a disposed CTS
}
}
public void Dispose() => Cancel();
}
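To see the debounce semantics in isolation, here's a quick illustrative sketch (top-level statements, purely for demonstration): three rapid calls, and only the last action actually runs once the delay elapses.

var debouncer = new Debouncer(TimeSpan.FromMilliseconds(500));

// Three "changes" in quick succession; the first two are cancelled by the third.
_ = debouncer.Debounce(() => Console.WriteLine("change 1"));
_ = debouncer.Debounce(() => Console.WriteLine("change 2"));
_ = debouncer.Debounce(() => Console.WriteLine("change 3"));

await Task.Delay(1000); // only "change 3" is printed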
ConfigWatcher.cs:
public sealed class ConfigWatcher : IHostedService, IDisposable {
public ConfigWatcher(IServiceScopeFactory scopeFactory, ILogger<ConfigWatcher> logger) {
_scopeFactory = scopeFactory;
_logger = logger;
}
private readonly IServiceScopeFactory _scopeFactory;
private readonly ILogger<ConfigWatcher> _logger;
private readonly Debouncer _debouncer = new(TimeSpan.FromSeconds(2));
private void OnConfigurationReloaded() {
_logger.LogInformation("Configuration reloaded");
// ... can do more stuff here, e.g. validate config
}
public Task StartAsync(CancellationToken cancellationToken) {
ChangeToken.OnChange(
() => { // resolve config from scope rather than ctor injection, in case it changes (this hosted service is a singleton)
using var scope = _scopeFactory.CreateScope();
var configuration = scope.ServiceProvider.GetRequiredService<IConfiguration>();
return configuration.GetReloadToken();
},
async () => await _debouncer.Debounce(OnConfigurationReloaded)
);
return Task.CompletedTask;
}
public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;
public void Dispose() => _debouncer.Dispose();
}
Startup.cs:
services.AddHostedService<ConfigWatcher>(); // registered as singleton
Hopefully, someone else can answer your questions, but I did run into this issue and found this Gist by cocowalla.
The code provided by cocowalla debounces instead of just waiting. It successfully deduplicated the change callback for me.
Cocowalla also includes an extension method so you can simply call OnChange on the IConfiguration.
Here's a sample:
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Primitives;
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
class Program
{
public static async Task Main(string[] args)
{
var configuration = new ConfigurationBuilder()
.SetBasePath(Directory.GetCurrentDirectory())
.AddJsonFile(path: "appsettings.json", optional: false, reloadOnChange: true)
.Build();
configuration.OnChange(() => Console.WriteLine("configuration changed"));
while (true)
{
await Task.Delay(1000);
}
}
}
public class Debouncer : IDisposable
{
private readonly CancellationTokenSource cts = new CancellationTokenSource();
private readonly TimeSpan waitTime;
private int counter;
public Debouncer(TimeSpan? waitTime = null)
{
this.waitTime = waitTime ?? TimeSpan.FromSeconds(3);
}
public void Debounce(Action action)
{
var current = Interlocked.Increment(ref this.counter);
Task.Delay(this.waitTime).ContinueWith(task =>
{
// Is this the last task that was queued?
if (current == this.counter && !this.cts.IsCancellationRequested)
action();
task.Dispose();
}, this.cts.Token);
}
public void Dispose()
{
this.cts.Cancel();
}
}
public static class IConfigurationExtensions
{
/// <summary>
/// Perform an action when configuration changes. Note this requires config sources to be added with
/// `reloadOnChange` enabled
/// </summary>
/// <param name="config">Configuration to watch for changes</param>
/// <param name="action">Action to perform when <paramref name="config"/> is changed</param>
public static void OnChange(this IConfiguration config, Action action)
{
// IConfiguration's change detection is based on FileSystemWatcher, which will fire multiple change
// events for each change - Microsoft's code is buggy in that it doesn't bother to debounce/dedupe
// https://github.com/aspnet/AspNetCore/issues/2542
var debouncer = new Debouncer(TimeSpan.FromSeconds(3));
ChangeToken.OnChange<object>(config.GetReloadToken, _ => debouncer.Debounce(action), null);
}
}
In the sample the debounce delay is 3 seconds; for my small JSON file, deduplication stops working once the delay drops below about 230 milliseconds.
I'm using the code you posted in your new answer, but I get the same error when I add a file and the UpdateVergadering method is called.
VergaderingRepository:
private readonly IDbContextFactory<ApplicationDbContext> _factory;
public VergaderingRepository(IDbContextFactory<ApplicationDbContext> dbContextFactory, IDbContextFactory<ApplicationDbContext> factory)
{
_factory = factory;
}
public async ValueTask<int> UpdateVergadering(Vergadering vergadering)
{
using var dbContext = _factory.CreateDbContext();
dbContext.Set<Vergadering>().Update(vergadering);
return await dbContext.SaveChangesAsync();
}
public async ValueTask<Vergadering> GetVergaderingVoorLiveNotulenAsync (int vergaderingId)
{
using var dbContext = _factory.CreateDbContext();
dbContext.ChangeTracker.QueryTrackingBehavior = QueryTrackingBehavior.NoTracking;
return await dbContext.Set<Vergadering>().SingleOrDefaultAsync(x => x.Id == vergaderingId);
}
The error I get:
System.InvalidOperationException: 'The instance of entity type 'Bestuurslid' cannot be tracked because another instance with the same key value for {'Id'} is already being tracked. When attaching existing entities, ensure that only one entity instance with a given key value is attached.
Your code never completes the component render because you loop inside OnInitialized. You also confuse OnInitialized with OnInitializedAsync.
Here's a demo page that shows how to use System.Timers.Timer with an event handler hooked up to the timer to handle the data get and UI update. OnInitializedAsync does the initial data get, sets up the timer, wires up the event handler, and completes.
@page "/"
@implements IDisposable
<PageTitle>Index</PageTitle>
<h1>Hello, world!</h1>
Welcome to your new app.
<div class="alert alert-success">
@_message
</div>
@code {
private string? _message = "Not Set";
private System.Timers.Timer _timer = new System.Timers.Timer(2000);
protected async override Task OnInitializedAsync()
{
// Initial data get
_message = await GetData();
// set up the timer and hook up the event handler
_timer.AutoReset = true;
_timer.Elapsed += this.OnTimerElapsed;
_timer.Start();
}
// Event handler for the timer
private async void OnTimerElapsed(object? sender, System.Timers.ElapsedEventArgs e)
{
_message = await GetData();
// must marshal back through InvokeAsync as the timer event may fire on a different thread
await this.InvokeAsync(StateHasChanged);
}
private async ValueTask<string> GetData()
{
// emulate an async call to a Db or API
await Task.Delay(100);
return DateTime.Now.ToLongTimeString();
}
// Dispose of the event handler when the Renderer has finished with the component
public void Dispose()
=> _timer.Elapsed -= this.OnTimerElapsed;
}
Update on DbContexts and Async behaviour
Set up a DbContextFactory:
services.AddDbContextFactory<MyDbContext>(
options =>
options.UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=Test"));
And then use the factory to get Db context instances as you need them.
public sealed class MeetingBroker
{
private readonly IDbContextFactory<MyDbContext> _factory;
public MeetingBroker(IDbContextFactory<MyDbContext> factory)
{
_factory = factory;
}
public async ValueTask<Vergadering> GetVergaderingByIdAsync(int vergaderingId)
{
using var dbContext = _factory.CreateDbContext();
// if you aren't editing the data then you don't need tracking. Improves performance.
dbContext.ChangeTracker.QueryTrackingBehavior = QueryTrackingBehavior.NoTracking;
return await dbContext.Set<Vergadering>().SingleOrDefaultAsync(x => x.Id == vergaderingId);
}
}
DbContextFactory Update
You've implemented the factory, but not the "Unit of Work" pattern. Your implementation uses the same context for all activity within the repository and will cause usage clashes.
Blazor lives in an async world so you need to code for situations where you have parallel processes running on the same resources.
Your Repository Pattern should look like this:
private readonly IDbContextFactory<ApplicationDbContext> _factory;
public VergaderingRepository(IDbContextFactory<ApplicationDbContext> dbContextFactory)
{
// assigned the factory not a context
_factory = dbContextFactory;
}
public async ValueTask<Vergadering> GetVergaderingVoorLiveNotulenAsync (int vergaderingId)
{
// creates a context for each transaction
using var dbContext = _factory.CreateDbContext();
return await dbContext.Set<Vergadering>().SingleOrDefaultAsync(x => x.Id == vergaderingId);
}
private readonly IDbContextFactory<ApplicationDbContext> _factory;
public VergaderingRepository(IDbContextFactory<ApplicationDbContext> factory)
=> _factory = factory;
public async ValueTask<Vergadering> GetVergaderingVoorLiveNotulenAsync (int vergaderingId)
{
using var dbContext = _factory.CreateDbContext();
// Turning off tracking as this is only a query.
dbContext.ChangeTracker.QueryTrackingBehavior = QueryTrackingBehavior.NoTracking;
return await dbContext.Set<Vergadering>().SingleOrDefaultAsync(x => x.Id == vergaderingId);
}
public async ValueTask<int> UpdateVergadering(Vergadering vergadering)
{
using var dbContext = _factory.CreateDbContext();
// Tracking is required for updates
//dbContext.ChangeTracker.QueryTrackingBehavior = QueryTrackingBehavior.NoTracking;
dbContext.Set<Vergadering>().Update(vergadering);
return await dbContext.SaveChangesAsync();
}
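For completeness, the factory and the repository still need to be registered with the container. A minimal sketch of that side (the connection string name is an assumption, and the repository is registered by its concrete type here; adjust if you expose an interface):

// In Program.cs / Startup.ConfigureServices
services.AddDbContextFactory<ApplicationDbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("Default")));
services.AddScoped<VergaderingRepository>();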
I have a console app that has multiple BackgroundServices, each reading from the same Kafka topic using the Confluent.Kafka nuget package (v1.6.2). The topic has 3 partitions.
When the app starts, all the background services have their constructors called, however only one of the ExecuteAsync methods is ever called. If I add a Task.Delay() - the number of milliseconds doesn't seem to matter - at the start of each ExecuteAsync, everything works fine and all the background services run.
No exceptions are raised, as far as I can tell.
Does anyone have an idea of what may be happening, or where to look further?
Here's the code:
using Confluent.Kafka;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
namespace KafkaConsumer
{
class Program
{
static void Main(string[] args)
{
CreateHostBuilder(args).Build().Run();
}
private static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureServices((hostContext, services) =>
{
services.AddHostedService<ConsumerA>();
services.AddHostedService<ConsumerB>();
services.AddHostedService<ConsumerC>();
});
}
public class ConsumerA : BackgroundService
{
private readonly ILogger<ConsumerA> _logger;
private readonly IConsumer<Ignore, string> _consumer;
public ConsumerA(ILogger<ConsumerA> logger)
{
_logger = logger;
var config = new ConsumerConfig()
{
BootstrapServers = @"server:port",
GroupId = "Group1",
AutoOffsetReset = AutoOffsetReset.Earliest
};
_consumer = new ConsumerBuilder<Ignore, string>(config).Build();
_logger.LogInformation("ConsumerA constructor");
}
protected override async Task ExecuteAsync(CancellationToken cancellationToken)
{
// await Task.Delay(10);
_logger.LogInformation("ConsumerA starting");
_consumer.Subscribe(new List<string> { "topic" });
while (!cancellationToken.IsCancellationRequested)
{
_ = _consumer.Consume(cancellationToken);
}
}
}
public class ConsumerB : BackgroundService
{
private readonly ILogger<ConsumerB> _logger;
private readonly IConsumer<Ignore, string> _consumer;
public ConsumerB(ILogger<ConsumerB> logger)
{
_logger = logger;
var config = new ConsumerConfig()
{
BootstrapServers = @"server:port",
GroupId = "Group1",
AutoOffsetReset = AutoOffsetReset.Earliest
};
_consumer = new ConsumerBuilder<Ignore, string>(config).Build();
_logger.LogInformation("ConsumerB constructor");
}
protected override async Task ExecuteAsync(CancellationToken cancellationToken)
{
// await Task.Delay(10);
_logger.LogInformation("ConsumerB starting");
_consumer.Subscribe(new List<string> { "topic" });
while (!cancellationToken.IsCancellationRequested)
{
_ = _consumer.Consume(cancellationToken);
}
}
}
public class ConsumerC : BackgroundService
{
private readonly ILogger<ConsumerC> _logger;
private readonly IConsumer<Ignore, string> _consumer;
public ConsumerC(ILogger<ConsumerC> logger)
{
_logger = logger;
var config = new ConsumerConfig()
{
BootstrapServers = @"server:port",
GroupId = "Group1",
AutoOffsetReset = AutoOffsetReset.Earliest
};
_consumer = new ConsumerBuilder<Ignore, string>(config).Build();
_logger.LogInformation("ConsumerC constructor");
}
protected override async Task ExecuteAsync(CancellationToken cancellationToken)
{
// await Task.Delay(10);
_logger.LogInformation("ConsumerC starting");
_consumer.Subscribe(new List<string> { "topic" });
while (!cancellationToken.IsCancellationRequested)
{
_ = _consumer.Consume(cancellationToken);
}
}
}
}
And the output:
(with no delays):
info: KafkaConsumer.ConsumerA[0]
ConsumerA constructor
info: KafkaConsumer.ConsumerB[0]
ConsumerB constructor
info: KafkaConsumer.ConsumerC[0]
ConsumerC constructor
info: KafkaConsumer.ConsumerA[0]
ConsumerA starting
(with delays added):
info: KafkaConsumer.ConsumerA[0]
ConsumerA constructor
info: KafkaConsumer.ConsumerB[0]
ConsumerB constructor
info: KafkaConsumer.ConsumerC[0]
ConsumerC constructor
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
Content root path: c:\users\..\kafkaconsumer\bin\Debug\net5.0
info: KafkaConsumer.ConsumerC[0]
ConsumerC starting
info: KafkaConsumer.ConsumerA[0]
ConsumerA starting
info: KafkaConsumer.ConsumerB[0]
ConsumerB starting
When starting up the BackgroundServices, the framework is evidently doing something like this:
var starting1 = service1.ExecuteAsync(...); //all called in sequence without awaits inbetween
var starting2 = service2.ExecuteAsync(...);
var starting3 = service3.ExecuteAsync(...);
...
//will await the startings all at once later on
Of course, when it does this in one of your services, it immediately gets trapped in a synchronous loop in which it blockingly polls the Kafka consumer. The thread of execution is never yielded back to the framework to continue calling other services.
You can get around this by doing your synchronous looping on separate threads, leaving the framework to happily go about its business:
protected override Task ExecuteAsync(CancellationToken cancellationToken)
{
return Task.Run(() => { //runs the below on a separate thread from the threadpool
_logger.LogInformation("ConsumerC starting");
_consumer.Subscribe(new List<string> { "topic" });
while (!cancellationToken.IsCancellationRequested)
{
_ = _consumer.Consume(cancellationToken);
}
});
}
When dealing with async APIs, there's a general expectation that you don't sit on your given thread, as doing so can cause problems for things above you that are expecting the thread back. When you await things, the point of execution 'stays' in your code but really the thread is given back to the caller while a continuation is queued to carry on doing your stuff at the right time (transparently, mostly).
Unfortunately as far as I know the Kafka libraries don't have APIs for playing along with this, and so they require full threads of their own.
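If you want to make it explicit that each consumer gets its own dedicated thread instead of borrowing a thread-pool thread for the lifetime of the app, a variation on the same idea looks like this (a sketch, not a requirement):

protected override Task ExecuteAsync(CancellationToken cancellationToken)
{
    // TaskCreationOptions.LongRunning hints the scheduler to create a dedicated thread
    // for this blocking poll loop rather than occupying a thread-pool thread.
    return Task.Factory.StartNew(() =>
    {
        _consumer.Subscribe(new List<string> { "topic" });
        while (!cancellationToken.IsCancellationRequested)
        {
            _ = _consumer.Consume(cancellationToken);
        }
    }, cancellationToken, TaskCreationOptions.LongRunning, TaskScheduler.Default);
}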
This is because your ExecuteAsync method is declared async but never awaits anything before entering the blocking loop, so it runs synchronously and never yields control back to the host.
Write your ExecuteAsync method like this:
protected override async Task ExecuteAsync(CancellationToken cancellationToken)
{
// await Task.Delay(10);
_logger.LogInformation("ConsumerC starting");
await Task.Run(() => _consumer.Subscribe(new List<string> { "topic" }));
while (!cancellationToken.IsCancellationRequested)
{
await Task.Run(() => _consumer.Consume(cancellationToken));
}
}
I'm working on a program where I receive data from SignalR, perform processing, and then send a SignalR message back to the client once the processing has finished. I've found a couple of resources for how to do this, but I can't quite figure out how to implement it in my project.
Here's what my code looks like:
Bootstrapping
public static void Main(string[] args)
{
CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
List<ISystem> systems = new List<ISystem>
{
new FirstProcessingSystem(),
new SecondProcessingSystem(),
};
Processor processor = new Processor(
cancellationToken: cancellationTokenSource.Token,
systems: systems);
processor.Start();
CreateHostBuilder(args).Build().Run();
cancellationTokenSource.Cancel();
}
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseStartup<Startup>();
});
public class Startup
{
// This method gets called by the runtime. Use this method to add services to the container.
// For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
public void ConfigureServices(IServiceCollection services)
{
services.AddSignalR();
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseRouting();
app.UseEndpoints(endpoints =>
{
endpoints.MapHub<TestHub>("/testHub");
});
}
}
TestHub.cs
public class TestHub : Hub
{
public async Task DoStuff(Work work)
{
FirstProcessingSystem.ItemsToProcess.Add(work);
}
}
Work.cs
public class Work
{
public readonly string ConnectionId;
public readonly string Data;
public Work(string connectionId, string data)
{
ConnectionId = connectionId;
Data = data;
}
}
Processor.cs
public class Processor
{
readonly CancellationToken CancellationToken;
readonly List<ISystem> Systems;
public Processor(
CancellationToken cancellationToken,
List<ISystem> systems)
{
CancellationToken = cancellationToken;
Systems = systems;
}
public void Start()
{
Task.Run(() =>
{
while (!CancellationToken.IsCancellationRequested)
{
foreach (var s in Systems)
s.Process();
}
});
}
}
Systems
public interface ISystem
{
void Process();
}
public class FirstProcessingSystem : ISystem
{
public static ConcurrentBag<Work> ItemsToProcess = new ConcurrentBag<Work>();
public void Process()
{
while (!ItemsToProcess.IsEmpty)
{
Work work;
if (ItemsToProcess.TryTake(out work))
{
// Do things...
SecondProcessingSystem.ItemsToProcess.Add(work);
}
}
}
}
public class SecondProcessingSystem : ISystem
{
public static ConcurrentBag<Work> ItemsToProcess = new ConcurrentBag<Work>();
public void Process()
{
while (!ItemsToProcess.IsEmpty)
{
Work work;
if (ItemsToProcess.TryTake(out work))
{
// Do more things...
// Hub.Send(work.ConnectionId, "Finished");
}
}
}
}
I know that I can perform the processing in the Hub and then send back the "Finished" call, but I'd like to decouple my processing from my inbound messaging so that I can add more ISystems when needed.
Can someone please help with this? (Also, if someone has a better way to structure my program, I'd appreciate the feedback.)
ASP.NET Core has a very powerful dependency injection system, so why not use it? By creating your worker services outside of dependency injection, you'll have a hard time using anything provided by ASP.NET Core.
Since your "processing systems" seem to be long-running services, you'd typically have them implement IHostedService, then create a generic service starter (taken from here):
public class BackgroundServiceStarter<T> : IHostedService where T : IHostedService
{
readonly T _backgroundService;
public BackgroundServiceStarter(T backgroundService)
{
_backgroundService = backgroundService;
}
public Task StartAsync(CancellationToken cancellationToken)
{
return _backgroundService.StartAsync(cancellationToken);
}
public Task StopAsync(CancellationToken cancellationToken)
{
return _backgroundService.StopAsync(cancellationToken);
}
}
then register them to the DI container in ConfigureServices:
// make the classes injectable
services.AddSingleton<FirstProcessingSystem>();
services.AddSingleton<SecondProcessingSystem>();
// start them up
services.AddHostedService<BackgroundServiceStarter<FirstProcessingSystem>>();
services.AddHostedService<BackgroundServiceStarter<SecondProcessingSystem>>();
Now that you have all that set up, you can simply inject a reference to your SignalR hub by adding an IHubContext<TestHub> parameter to the constructor of whatever class needs it (as described in some of the links you posted).
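For instance, SecondProcessingSystem could take the hub context in its constructor and push the "Finished" message itself once it has processed an item. A rough sketch along those lines (the "Finished" client method name and the simple polling loop are assumptions, not part of your original code):

// Requires Microsoft.AspNetCore.SignalR, System.Collections.Concurrent,
// System.Threading and System.Threading.Tasks.
public class SecondProcessingSystem : IHostedService
{
    public static ConcurrentBag<Work> ItemsToProcess = new ConcurrentBag<Work>();

    private readonly IHubContext<TestHub> _hubContext;
    private readonly CancellationTokenSource _cts = new CancellationTokenSource();

    public SecondProcessingSystem(IHubContext<TestHub> hubContext)
    {
        _hubContext = hubContext; // injected by the container
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        _ = Task.Run(async () =>
        {
            while (!_cts.IsCancellationRequested)
            {
                if (ItemsToProcess.TryTake(out var work))
                {
                    // Do more things...
                    // "Finished" is whatever method name the client listens for.
                    await _hubContext.Clients.Client(work.ConnectionId)
                        .SendAsync("Finished", work.Data);
                }
                else
                {
                    await Task.Delay(100);
                }
            }
        });
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        _cts.Cancel();
        return Task.CompletedTask;
    }
}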
When I try to register more than one hosted service with AddHostedService, only the first one's StartAsync method is ever invoked.
services.AddHostedService<HostServiceBox>(); // StartAsync is called
services.AddHostedService<HostServiceWebSocket>(); // DOES NOT WORK - StartAsync not called
services.AddHostedService<HostServiceLogging>(); // DOES NOT WORK - StartAsync not called
Below is code that works.
I got around the problem by creating a helper.
Startup.cs:
public void ConfigureServices(IServiceCollection services)
{
JwtBearerConfiguration(services);
services.AddCors(options => options.AddPolicy("CorsPolicy", builder =>
{
builder
.AllowAnyMethod()
.AllowAnyHeader()
.AllowAnyOrigin()
.AllowCredentials();
}));
services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
services.AddSignalR();
services.AddHostedService<HostServiceHelper>(); // <===== StartAsync is called
}
HostServiceHelper.cs:
public class HostServiceHelper : IHostedService
{
private static IHubContext<EngineHub> _hubContext;
public HostServiceHelper(IHubContext<EngineHub> hubContext)
{
_hubContext = hubContext;
}
public Task StartAsync(CancellationToken cancellationToken)
{
return Task.Run(() =>
{
Task.Run(() => ServiceWebSocket(), cancellationToken);
Task.Run(() => ServiceBox(), cancellationToken);
Task.Run(() => ServiceLogging(), cancellationToken);
}, cancellationToken);
}
public void ServiceLogging()
{
// your own CODE
}
public void ServiceWebSocket()
{
// your own CODE
}
public void ServiceBox()
{
// your own CODE
}
public Task StopAsync(CancellationToken cancellationToken)
{
// Your logic here
throw new NotImplementedException();
}
}
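For what it's worth, the likely cause of the original symptom is that one of the earlier services blocks inside StartAsync: the host starts hosted services one after another and awaits each StartAsync before calling the next. An alternative to the helper above is to keep one class per service but move the blocking work off StartAsync, for example by deriving from BackgroundService (a sketch, assuming ServiceWebSocket contains your loop and accepts a cancellation token):

public class HostServiceWebSocket : BackgroundService
{
    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Hand the blocking loop to the thread pool so start-up returns promptly
        // and the host can go on to start the remaining hosted services.
        return Task.Run(() => ServiceWebSocket(stoppingToken), stoppingToken);
    }

    private void ServiceWebSocket(CancellationToken stoppingToken)
    {
        // your own CODE
    }
}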
OK, this is 2022 and .NET 6 is out. Nowadays it is not a problem to run multiple hosted services, as long as they are represented by different classes. Just like this:
public class Program
{
public static void Main(string[] args)
{
IHost host = Host.CreateDefaultBuilder(args)
.ConfigureServices((hostContext, services) =>
{
services.AddHostedService<Worker>();
services.AddHostedService<Worker2>();
}).Build();
host.Run();
}
}
Both workers will run.
But what if we need multiple instances of the same service class to run in parallel? That still seems to be impossible.
See relevant discussion here: https://github.com/dotnet/runtime/issues/38751
I ended up implementing my own utility function to start multiple tasks in parallel and collect all the exceptions properly. Here:
/// <summary>
/// Runs multiple cancelable tasks in parallel. If any of the tasks terminates, all others are cancelled.
/// </summary>
public static class TaskBunchRunner
{
public class BunchException : Exception
{
public AggregateException Agg { get; }
public BunchException(AggregateException agg) : base("Task bunch failed", agg)
{
Agg = agg;
}
public override string Message => $"Task bunch failed: {Agg.Message}";
public override string ToString() => $"BunchException -> {Agg.ToString()}";
}
public static async Task Bunch(this IEnumerable<Func<CancellationToken, Task>> taskFns, CancellationToken ct)
{
using CancellationTokenSource combinedTcs = CancellationTokenSource.CreateLinkedTokenSource(ct);
CancellationToken ct1 = combinedTcs.Token;
Task[] tasks = taskFns.Select(taskFn => Task.Run(() => taskFn(ct1), ct1)).ToArray();
// If any of the tasks terminated, it may be because of an error or a cancellation.
// In both cases we cancel all of them.
await Task.WhenAny(tasks); // this await will never throw
combinedTcs.Cancel();
var allTask = Task.WhenAll(tasks); // this will collect exceptions in an AggregateException
try
{
await allTask;
}
catch (Exception)
{
if (allTask.Exception != null) throw new BunchException(allTask.Exception);
throw;
}
// Why not just await Task.WhenAll() and let it throw whatever it is?
// Because await will unwrap the aggregated exception and rethrow just one of the inner exceptions,
// losing the information about others. We want all the exceptions to be logged, that is why
// we get the aggregated exception from the task. We also throw it wrapped into a custom exception, so the
// outer await (in the caller's scope) does not unwrap it again. :facepalm:
}
}
Now we create a single hosted service and make its ExecuteAsync method run several tasks as a bunch:
class MySingleService
{
private readonly string _id;
public MySingleService(string id){ _id = id; }
public async Task RunAsync(CancellationToken ct)
{
await Task.Delay(500, ct);
Console.WriteLine($"Message from service {_id}");
await Task.Delay(500, ct);
}
}
class MyHostedService: BackgroundService
{
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
MySingleService[] individuals = new[]
{
new MySingleService("1"),
new MySingleService("2"),
new MySingleService("3"),
};
await individuals
.Select<MySingleService, Func<CancellationToken, Task>>(s => s.RunAsync)
.Bunch(stoppingToken);
}
}
public class Program
{
public static void Main(string[] args)
{
IHost host = Host.CreateDefaultBuilder(args)
.ConfigureServices((hostContext, services) =>
{
services.AddHostedService<MyHostedService>();
}).Build();
host.Run();
}
}
Note 1: the TaskBunchRunner class was taken from a real project and proven to work, while the usage example is made up and not tested.
Note 2: The Bunch method was designed for background services, which do not naturally complete, they keep running until cancelled or failed. So if one of the tasks in a bunch successfully completes, others will be cancelled (which is probably not what you would want). If you need support for completion, I suggest checking the result of WhenAny: if the race winner has run to completion, we need to remove it from the array and WhenAny again.
I know it is not exactly what the OP asked for. But may be useful for someone who ends up here having the same problem as I had.
A hosted service is usually a single task so I'd do it with a singleton.
// Hosted Services
services.AddSingleton<IHostedService, HttpGetCurrencyPairRequestSyncingService>();
services.AddSingleton<IHostedService, HttpPostCurrencyPairRequestSyncingService>();
And when in my class,
public class CurrencyPairCacheManagementService : BaseHostedService<CurrencyPairCacheManagementService>
, ICurrencyPairCacheManagementService, IHostedService, IDisposable
{
private ICurrencyPairService _currencyPairService;
private IConnectionMultiplexer _connectionMultiplexer;
public CurrencyPairCacheManagementService(IConnectionMultiplexer connectionMultiplexer,
IServiceProvider serviceProvider) : base(serviceProvider)
{
_currencyPairService = serviceProvider.GetService<CurrencyPairService>();
_connectionMultiplexer = connectionMultiplexer;
InitializeCache(serviceProvider);
}
/// <summary>
/// Operation Procedure for CurrencyPair Cache Management.
///
/// Updates every 5 seconds.
///
/// Objectives:
/// 1. Pull the latest currency pair dataset from cold storage (DB)
/// 2. Cross reference checking (MemoryCache vs Cold Storage)
/// 3. Update Currency pairs
/// </summary>
/// <param name="stoppingToken"></param>
/// <returns></returns>
/// <exception cref="NotImplementedException"></exception>
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
_logger.LogInformation("CurrencyPairCacheManagementService is starting.");
stoppingToken.Register(() => _logger.LogInformation("CurrencyPairCacheManagementService is stopping."));
while (!stoppingToken.IsCancellationRequested)
{
var currencyPairs = _currencyPairService.GetAllActive();
await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
}
_logger.LogWarning("CurrencyPairCacheManagementService background task is stopping.");
}
public void InitializeCache(IServiceProvider serviceProvider)
{
var currencyPairs = _currencyPairService.GetAllActive();
// Load them individually to the cache.
// This way, we won't have to update the entire collection if we were to remove, update or add one.
foreach (var cPair in currencyPairs)
{
// Naming convention => PREFIX + CURRENCYPAIRID
// Set the object into the cache
}
}
public Task InproPair(CurrencyPair currencyPair)
{
throw new NotImplementedException();
}
}
ExecuteAsync gets hit first, before carrying on with whatever you want the service to do. You might also want to remove the generics declaration I have, because my base class is generic (if you don't make your hosted service base class generic, I don't think you'll need to implement IHostedService and IDisposable explicitly).
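For context, since the BaseHostedService<T> mentioned above isn't shown, here's a hedged sketch of what such a generic base class commonly looks like; the names and members are assumptions, not the actual code behind this answer:

// Hypothetical sketch only.
public abstract class BaseHostedService<T> : BackgroundService where T : class
{
    protected readonly ILogger<T> _logger;

    protected BaseHostedService(IServiceProvider serviceProvider)
    {
        // Resolve a logger typed to the derived service.
        _logger = serviceProvider.GetRequiredService<ILogger<T>>();
    }
}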
How can I use .NET Core's default dependency injection in Hangfire?
I am new to Hangfire and searching for an example which works with ASP.NET Core.
See full example on GitHub https://github.com/gonzigonz/HangfireCore-Example.
Live site at http://hangfirecore.azurewebsites.net/
Make sure you have the Core version of Hangfire:
dotnet add package Hangfire.AspNetCore
Configure your IoC by defining a JobActivator. Below is the configuration for use with the default ASP.NET Core container:
public class HangfireActivator : Hangfire.JobActivator
{
private readonly IServiceProvider _serviceProvider;
public HangfireActivator(IServiceProvider serviceProvider)
{
_serviceProvider = serviceProvider;
}
public override object ActivateJob(Type type)
{
return _serviceProvider.GetService(type);
}
}
Next, register Hangfire as a service in the Startup.ConfigureServices method:
services.AddHangfire(opt =>
opt.UseSqlServerStorage("Your Hangfire Connection string"));
Configure Hangfire in the Startup.Configure method. In relation to your question, the key is to configure Hangfire to use the new HangfireActivator we just defined above. To do so you have to provide Hangfire with the IServiceProvider, and this can be achieved by simply adding it to the parameter list of the Configure method. At runtime, DI will provide this service for you:
public void Configure(
IApplicationBuilder app,
IHostingEnvironment env,
ILoggerFactory loggerFactory,
IServiceProvider serviceProvider)
{
...
// Configure hangfire to use the new JobActivator we defined.
GlobalConfiguration.Configuration
.UseActivator(new HangfireActivator(serviceProvider));
// The rest of the hangfire config as usual.
app.UseHangfireServer();
app.UseHangfireDashboard();
}
When you enqueue a job, use the registered type, which usually is your interface. Don't use a concrete type unless you registered it that way. You must use the type registered with your IoC, or Hangfire won't find it.
For Example say you've registered the following services:
services.AddScoped<DbManager>();
services.AddScoped<IMyService, MyService>();
Then you could enqueue DbManager with an instantiated version of the class:
BackgroundJob.Enqueue(() => dbManager.DoSomething());
However, you could not do the same with MyService: enqueuing an instantiated version would fail, because only the interface is registered with DI. In this case you would enqueue like this:
BackgroundJob.Enqueue<IMyService>( ms => ms.DoSomething());
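The same rule applies to recurring jobs; a quick sketch:

// "my-service-daily" is just an example job id; IMyService is resolved from the container at run time.
RecurringJob.AddOrUpdate<IMyService>("my-service-daily", ms => ms.DoSomething(), Cron.Daily());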
DoritoBandito's answer is incomplete or deprecated.
public class EmailSender {
private readonly IDbContext _dbContext;
private readonly IEmailService _emailService;
public EmailSender(IDbContext dbContext, IEmailService emailService)
{
_dbContext = dbContext;
_emailService = emailService;
}
public void Send(int id, string message)
{
// build and send the email using _dbContext and _emailService
}
}
Register services:
services.AddTransient<IDbContext, TestDbContext>();
services.AddTransient<IEmailService, EmailService>();
Enqueue:
BackgroundJob.Enqueue<EmailSender>(x => x.Send(13, "Hello!"));
Source:
http://docs.hangfire.io/en/latest/background-methods/passing-dependencies.html
Note: if you want a full sample, see my blog post on this.
All of the answers in this thread are wrong/incomplete/outdated. Here's an example with ASP.NET Core 3.1 and Hangfire.AspNetCore 1.7.
Client:
//...
using Hangfire;
// ...
public class Startup
{
// ...
public void ConfigureServices(IServiceCollection services)
{
//...
services.AddHangfire(config =>
{
// configure hangfire per your requirements
});
}
}
public class SomeController : ControllerBase
{
private readonly IBackgroundJobClient _backgroundJobClient;
public SomeController(IBackgroundJobClient backgroundJobClient)
{
_backgroundJobClient = backgroundJobClient;
}
[HttpPost("some-route")]
public IActionResult Schedule([FromBody] SomeModel model)
{
_backgroundJobClient.Schedule<SomeClass>(s => s.Execute(model));
return Ok();
}
}
Server (same or different application):
{
//...
services.AddScoped<ISomeDependency, SomeDependency>();
services.AddHangfire(hangfireConfiguration =>
{
// configure hangfire with the same backing storage as your client
});
services.AddHangfireServer();
}
public interface ISomeDependency { }
public class SomeDependency : ISomeDependency { }
public class SomeClass
{
private readonly ISomeDependency _someDependency;
public SomeClass(ISomeDependency someDependency)
{
_someDependency = someDependency;
}
// the function scheduled in SomeController
public void Execute(SomeModel someModel)
{
}
}
As far as I am aware, you can use .NET Core's dependency injection the same as you would for any other service.
You can use a service which contains the jobs to be executed, which can be enqueued like so:
var jobId = BackgroundJob.Enqueue<IJobService>(x => x.SomeTask(passParamIfYouWish));
Here is an example of the Job Service class
public class JobService : IJobService
{
private IClientService _clientService;
private INodeServices _nodeServices;
//Constructor
public JobService(IClientService clientService, INodeServices nodeServices)
{
_clientService = clientService;
_nodeServices = nodeServices;
}
//Some task to execute
public async Task SomeTask(Guid subject)
{
// Do some job here
Client client = _clientService.FindUserBySubject(subject);
}
}
And in your projects Startup.cs you can add a dependency as normal
services.AddTransient<IClientService, ClientService>();
Not sure whether this answers your question or not.
Currently, Hangfire is deeply integrated with ASP.NET Core. Install Hangfire.AspNetCore to set up the dashboard and DI integration automatically. Then you just need to define your dependencies using ASP.NET Core as always.
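To make that concrete, a minimal sketch of a current setup (this assumes the Hangfire.SqlServer storage package and a "HangfireDb" connection string; adapt the storage to your needs):

// Startup.ConfigureServices
services.AddHangfire(config =>
    config.UseSqlServerStorage(Configuration.GetConnectionString("HangfireDb")));
services.AddHangfireServer();

// Startup.Configure
app.UseHangfireDashboard();

// Anywhere you need it, enqueue against the registered type;
// Hangfire.AspNetCore's job activator resolves it from the container.
BackgroundJob.Enqueue<IMyService>(s => s.DoSomething());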
If you are trying to quickly set up Hangfire with ASP.NET Core (tested in ASP.NET Core 2.2) you can also use Hangfire.MemoryStorage. All the configuration can be performed in Startup.cs:
using Hangfire;
using Hangfire.MemoryStorage;
public void ConfigureServices(IServiceCollection services)
{
services.AddHangfire(opt => opt.UseMemoryStorage());
JobStorage.Current = new MemoryStorage();
}
protected void StartHangFireJobs(IApplicationBuilder app, IServiceProvider serviceProvider)
{
app.UseHangfireServer();
app.UseHangfireDashboard();
//TODO: move cron expressions to appsettings.json
RecurringJob.AddOrUpdate<SomeJobService>(
x => x.DoWork(),
"* * * * *");
RecurringJob.AddOrUpdate<OtherJobService>(
x => x.DoWork(),
"0 */2 * * *");
}
public void Configure(IApplicationBuilder app, IServiceProvider serviceProvider)
{
StartHangFireJobs(app, serviceProvider);
}
Of course, everything is stored in memory and is lost once the application pool is recycled, but it is a quick way to see that everything works as expected with minimal configuration.
To switch to SQL Server database persistence, you should install Hangfire.SqlServer package and simply configure it instead of the memory storage:
services.AddHangfire(opt => opt.UseSqlServerStorage(Configuration.GetConnectionString("Default")));
I had to start Hangfire in the Main function. This is how I solved it:
public static void Main(string[] args)
{
var host = CreateWebHostBuilder(args).Build();
using (var serviceScope = host.Services.CreateScope())
{
var services = serviceScope.ServiceProvider;
try
{
var liveDataHelper = services.GetRequiredService<ILiveDataHelper>();
var justInitHangfire = services.GetRequiredService<IBackgroundJobClient>();
//This was causing an exception (HangFire is not initialized)
RecurringJob.AddOrUpdate(() => liveDataHelper.RePopulateAllConfigDataAsync(), Cron.Daily());
// Use the context here
}
catch (Exception ex)
{
var logger = services.GetRequiredService<ILogger<Program>>();
logger.LogError(ex, "Can't start " + nameof(LiveDataHelper));
}
}
host.Run();
}
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
WebHost.CreateDefaultBuilder(args)
.UseStartup<Startup>();
}
Actually, there is an easy way to do dependency-injection-based job registration.
You just need to use the following code in your Startup:
public class Startup {
public void Configure(IApplicationBuilder app)
{
var factory = app.ApplicationServices
.GetService<IServiceScopeFactory>();
GlobalConfiguration.Configuration.UseActivator(
new Hangfire.AspNetCore.AspNetCoreJobActivator(factory));
}
}
However, I personally wanted job self-registration, including on-demand jobs (recurring jobs that are never executed except by a manual trigger on the Hangfire dashboard), which was a little more complex than just that. I was (for example) facing issues with job service activation, which is why I decided to share most of my implementation code.
//I wanted an interface to declare my jobs, including the job Id.
public interface IBackgroundJob {
string Id { get; set; }
void Invoke();
}
//I wanted to retrieve the jobs by id. Here's my extension method for that:
public static IBackgroundJob GetJob(
this IServiceProvider provider,
string jobId) => provider
.GetServices<IBackgroundJob>()
.SingleOrDefault(j => j.Id == jobId);
//Now i needed an invoker for these jobs.
//The invoker is basically an example of a dependency injected hangfire job.
internal class JobInvoker {
public JobInvoker(IServiceScopeFactory factory) {
Factory = factory;
}
public IServiceScopeFactory Factory { get; }
public void Invoke(string jobId)
{
//hangfire jobs should always be executed within their own scope.
//The default AspNetCoreJobActivator should technically already do that.
//Let's just say I have trust issues.
using (var scope = Factory.CreateScope())
{
scope.ServiceProvider
.GetJob(jobId)?
.Invoke();
}
}
//Now i needed to tell hangfire to use these jobs.
//Reminder: The serviceProvider is in IApplicationBuilder.ApplicationServices
public static void RegisterJobs(IServiceProvider serviceProvider) {
var factory = serviceProvider.GetService<IServiceScopeFactory>();
GlobalConfiguration.Configuration.UseActivator(new Hangfire.AspNetCore.AspNetCoreJobActivator(factory));
var manager = serviceProvider.GetService<IRecurringJobManager>();
var config = serviceProvider.GetService<IConfiguration>();
var jobs = serviceProvider.GetServices<IBackgroundJob>();
foreach (var job in jobs) {
var jobConfig = config.GetJobConfig(job.Id);
var schedule = jobConfig?.Schedule; //this is a cron expression
if (String.IsNullOrWhiteSpace(schedule))
schedule = Cron.Never(); //this is an on demand job only!
manager.AddOrUpdate(
recurringJobId: job.Id,
job: GetJob(job.Id),
cronExpression: schedule);
}
}
//and last but not least...
//My Method for creating the hangfire job with injected job id
private static Job GetJob(string jobId)
{
var type = typeof(JobInvoker);
var method = type.GetMethod("Invoke");
return new Job(
type: type,
method: method,
args: new object[] { jobId });
}
}
Using the above code, I was able to create Hangfire job services with full dependency injection support. Hope it helps someone.
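To tie it together, the registration side might look roughly like this (MyNightlyJob is a made-up example implementation; storage configuration is omitted):

// Hypothetical example job; Id is matched against the job config, otherwise it falls back to Cron.Never().
public class MyNightlyJob : IBackgroundJob
{
    public string Id { get; set; } = "MyNightlyJob";
    public void Invoke()
    {
        // do the actual work here
    }
}

// Startup.ConfigureServices
services.AddHangfire(config => { /* storage config */ });
services.AddHangfireServer();
services.AddTransient<JobInvoker>();                    // so the activator can resolve the invoker
services.AddTransient<IBackgroundJob, MyNightlyJob>();  // one registration per job

// Startup.Configure
JobInvoker.RegisterJobs(app.ApplicationServices);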
Use the below code for Hangfire configuration
using eForms.Core;
using Hangfire;
using Hangfire.SqlServer;
using System;
using System.ComponentModel;
using System.Web.Hosting;
namespace eForms.AdminPanel.Jobs
{
public class JobManager : IJobManager, IRegisteredObject
{
public static readonly JobManager Instance = new JobManager();
//private static readonly TimeSpan ZeroTimespan = new TimeSpan(0, 0, 10);
private static readonly object _lockObject = new Object();
private bool _started;
private BackgroundJobServer _backgroundJobServer;
private JobManager()
{
}
public int Schedule(JobInfo whatToDo)
{
int result = 0;
if (!whatToDo.IsRecurring)
{
if (whatToDo.Delay == TimeSpan.Zero)
int.TryParse(BackgroundJob.Enqueue(() => Run(whatToDo.JobId, whatToDo.JobType.AssemblyQualifiedName)), out result);
else
int.TryParse(BackgroundJob.Schedule(() => Run(whatToDo.JobId, whatToDo.JobType.AssemblyQualifiedName), whatToDo.Delay), out result);
}
else
{
RecurringJob.AddOrUpdate(whatToDo.JobType.Name, () => RunRecurring(whatToDo.JobType.AssemblyQualifiedName), Cron.MinuteInterval(whatToDo.Delay.TotalMinutes.AsInt()));
}
return result;
}
[DisplayName("Id: {0}, Type: {1}")]
[HangFireYearlyExpirationTime]
public static void Run(int jobId, string jobType)
{
try
{
Type runnerType;
if (!jobType.ToType(out runnerType)) throw new Exception("Provided job has undefined type");
var runner = runnerType.CreateInstance<JobRunner>();
runner.Run(jobId);
}
catch (Exception ex)
{
throw new JobException($"Error while executing Job Id: {jobId}, Type: {jobType}", ex);
}
}
[DisplayName("{0}")]
[HangFireMinutelyExpirationTime]
public static void RunRecurring(string jobType)
{
try
{
Type runnerType;
if (!jobType.ToType(out runnerType)) throw new Exception("Provided job has undefined type");
var runner = runnerType.CreateInstance<JobRunner>();
runner.Run(0);
}
catch (Exception ex)
{
throw new JobException($"Error while executing Recurring Type: {jobType}", ex);
}
}
public void Start()
{
lock (_lockObject)
{
if (_started) return;
if (!AppConfigSettings.EnableHangFire) return;
_started = true;
HostingEnvironment.RegisterObject(this);
GlobalConfiguration.Configuration
.UseSqlServerStorage("SqlDbConnection", new SqlServerStorageOptions { PrepareSchemaIfNecessary = false })
//.UseFilter(new HangFireLogFailureAttribute())
.UseLog4NetLogProvider();
//Add infinity Expiration job filter
//GlobalJobFilters.Filters.Add(new HangFireProlongExpirationTimeAttribute());
//Hangfire comes with a retry policy that is automatically set to 10 retries and backs off over several minutes.
//In the following we remove this attribute and add our own custom one, which adds significant backoff time and
//custom logic to determine how much to back off and what to do in the case of failures.
// The trick here is we can't just remove the filter as you'd expect using Remove;
// we first have to find it, then save the Instance, then remove it.
try
{
object automaticRetryAttribute = null;
//Search hangfire automatic retry
foreach (var filter in GlobalJobFilters.Filters)
{
if (filter.Instance is Hangfire.AutomaticRetryAttribute)
{
// found it
automaticRetryAttribute = filter.Instance;
System.Diagnostics.Trace.TraceError("Found hangfire automatic retry");
}
}
//Remove the default Hangfire automaticRetryAttribute
if (automaticRetryAttribute != null)
GlobalJobFilters.Filters.Remove(automaticRetryAttribute);
//Add custom retry job filter
GlobalJobFilters.Filters.Add(new HangFireCustomAutoRetryJobFilterAttribute());
}
catch (Exception) { }
_backgroundJobServer = new BackgroundJobServer(new BackgroundJobServerOptions
{
HeartbeatInterval = new System.TimeSpan(0, 1, 0),
ServerCheckInterval = new System.TimeSpan(0, 1, 0),
SchedulePollingInterval = new System.TimeSpan(0, 1, 0)
});
}
}
public void Stop()
{
lock (_lockObject)
{
if (_backgroundJobServer != null)
{
_backgroundJobServer.Dispose();
}
HostingEnvironment.UnregisterObject(this);
}
}
void IRegisteredObject.Stop(bool immediate)
{
Stop();
}
}
}
Admin Job Manager
public class Global : System.Web.HttpApplication
{
void Application_Start(object sender, EventArgs e)
{
if (Core.AppConfigSettings.EnableHangFire)
{
JobManager.Instance.Start();
new SchedulePendingSmsNotifications().Schedule(new Core.JobInfo() { JobId = 0, JobType = typeof(SchedulePendingSmsNotifications), Delay = TimeSpan.FromMinutes(1), IsRecurring = true });
}
}
protected void Application_End(object sender, EventArgs e)
{
if (Core.AppConfigSettings.EnableHangFire)
{
JobManager.Instance.Stop();
}
}
}