Hangfire Custom State Expiration - C#

I have implemented a custom state, "Blocked", from which a job moves into the Enqueued state once certain external requirements have been fulfilled.
Sometimes these external requirements are never fulfilled, which leaves the job stuck in the Blocked state. What I'd like is for jobs in this state to automatically expire after some configurable time.
Is there any support for such a requirement? There is the ExpirationDate field, but from looking at the code it seems to be used only for final states.
The state is as simple as can be:
internal sealed class BlockedState : IState
{
    internal const string STATE_NAME = "Blocked";

    public Dictionary<string, string> SerializeData()
    {
        return new Dictionary<string, string>();
    }

    public string Name => STATE_NAME;
    public string Reason => "Waiting for external resource";
    public bool IsFinal => false;
    public bool IgnoreJobLoadException => false;
}
and is used simply as:
_hangfireBackgroundJobClient.Create(() => Console.WriteLine("hello world"), new BlockedState());
At a later stage the job is then moved forward via:
_hangfireBackgroundJobClient.ChangeState(jobId, new EnqueuedState(), BlockedState.STATE_NAME);

I would go for a custom IBackgroundProcess implementation, taking the DelayedJobScheduler as an example; it picks up delayed jobs on a regular basis and enqueues them.
In this custom implementation I would use JobStorageConnection.GetAllItemsFromSet("blocked") to get all the blocked job ids (where the DelayedJobScheduler uses JobStorageConnection.GetFirstByLowestScoreFromSet).
Then I would fetch each blocked job's data with JobStorageConnection.GetJobData(jobId). For each of them, depending on its CreatedAt field, I would do nothing if the job has not expired, or change its state to another state (Failed?) if it has. A sketch of such a process follows below.
The custom job process can be declared like this :
app.UseHangfireServer(storage, options,
    new IBackgroundProcess[] {
        new MyCustomJobProcess(
            myTimeSpanForExpiration,
            (IBackgroundJobStateChanger) new BackgroundJobStateChanger(filterProvider)) });
A difficulty here is obtaining an IBackgroundJobStateChanger, as the server does not seem to expose its own.
If you use a custom filter provider as an option for your server, pass its value as filterProvider; otherwise use (IJobFilterProvider) JobFilterProviders.Providers.
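Here is a minimal sketch of such a process, under a couple of assumptions worth flagging: it presumes a custom state handler (not shown) keeps a set named "blocked" up to date with the blocked job ids, and the class name, polling interval and TimeoutException reason are all illustrative:

using System;
using Hangfire;
using Hangfire.Server;
using Hangfire.States;
using Hangfire.Storage;

internal sealed class MyCustomJobProcess : IBackgroundProcess
{
    private readonly TimeSpan _expiration;
    private readonly IBackgroundJobStateChanger _stateChanger;

    public MyCustomJobProcess(TimeSpan expiration, IBackgroundJobStateChanger stateChanger)
    {
        _expiration = expiration;
        _stateChanger = stateChanger;
    }

    public void Execute(BackgroundProcessContext context)
    {
        using (var connection = (JobStorageConnection)context.Storage.GetConnection())
        {
            // Assumes a custom IStateHandler maintains the "blocked" set
            // as jobs enter and leave BlockedState.
            foreach (var jobId in connection.GetAllItemsFromSet("blocked"))
            {
                var jobData = connection.GetJobData(jobId);
                if (jobData == null) continue;

                if (jobData.CreatedAt.Add(_expiration) < DateTime.UtcNow)
                {
                    // Expired: move the job out of Blocked, e.g. to Failed.
                    _stateChanger.ChangeState(new StateChangeContext(
                        context.Storage,
                        connection,
                        jobId,
                        new FailedState(new TimeoutException("External requirement was never fulfilled.")),
                        BlockedState.STATE_NAME));
                }
            }
        }

        // Poll at a modest interval, honoring server shutdown.
        context.CancellationToken.WaitHandle.WaitOne(TimeSpan.FromSeconds(15));
    }
}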

Can you take advantage of EventWaitHandle?
Have a look at Generic Timeout.
For example:
// action : your job
// timeout : your desired expiration, in milliseconds
void DoSomething(Action action, int timeout)
{
    EventWaitHandle waitHandle = new EventWaitHandle(false, EventResetMode.ManualReset);
    AsyncCallback callback = ar => waitHandle.Set();
    // Note: delegate BeginInvoke is supported on .NET Framework only.
    action.BeginInvoke(callback, null);
    if (!waitHandle.WaitOne(timeout))
    {
        // Expired here
    }
}

Related

NServiceBus events lost when published in separate thread

I've been working on getting long running messages working with NServiceBus on an Azure transport. Based on this document, I thought I could get away with firing off the long process in a separate thread, marking the event handler task as complete, and then listening for custom OperationStarted or OperationComplete events. I noticed the OperationComplete event is not received by my handlers in most cases. In fact, the only time it is received is when I publish it immediately after the OperationStarted event is published. Any actual processing in between somehow prevents the completion event from being received. Here is my code:
Abstract class used for long running messages
public abstract class LongRunningOperationHandler<TMessage> : IHandleMessages<TMessage> where TMessage : class
{
    protected ILog _logger => LogManager.GetLogger<LongRunningOperationHandler<TMessage>>();

    public Task Handle(TMessage message, IMessageHandlerContext context)
    {
        var opStarted = new OperationStarted
        {
            OperationID = Guid.NewGuid(),
            OperationType = typeof(TMessage).FullName
        };
        var errors = new List<string>();
        // Fire off the long running task in a separate thread
        Task.Run(() =>
        {
            try
            {
                _logger.Info($"Operation Started: {JsonConvert.SerializeObject(opStarted)}");
                context.Publish(opStarted);
                ProcessMessage(message, context);
            }
            catch (Exception ex)
            {
                errors.Add(ex.Message);
            }
            finally
            {
                var opComplete = new OperationComplete
                {
                    OperationType = typeof(TMessage).FullName,
                    OperationID = opStarted.OperationID,
                    Errors = errors
                };
                context.Publish(opComplete);
                _logger.Info($"Operation Complete: {JsonConvert.SerializeObject(opComplete)}");
            }
        });
        return Task.CompletedTask;
    }

    protected abstract void ProcessMessage(TMessage message, IMessageHandlerContext context);
}
Test Implementation
public class TestLongRunningOpHandler : LongRunningOperationHandler<TestCommand>
{
    protected override void ProcessMessage(TestCommand message, IMessageHandlerContext context)
    {
        // If I remove this, or lessen it to something like 200 milliseconds, the
        // OperationComplete event gets handled
        Thread.Sleep(1000);
    }
}
Operation Events
public sealed class OperationComplete : IEvent
{
    public Guid OperationID { get; set; }
    public string OperationType { get; set; }
    public bool Success => !Errors?.Any() ?? true;
    public List<string> Errors { get; set; } = new List<string>();
    public DateTimeOffset CompletedOn { get; set; } = DateTimeOffset.UtcNow;
}

public sealed class OperationStarted : IEvent
{
    public Guid OperationID { get; set; }
    public string OperationType { get; set; }
    public DateTimeOffset StartedOn { get; set; } = DateTimeOffset.UtcNow;
}
Handlers
public class OperationHandler : IHandleMessages<OperationStarted>,
                                IHandleMessages<OperationComplete>
{
    static ILog logger = LogManager.GetLogger<OperationHandler>();

    public Task Handle(OperationStarted message, IMessageHandlerContext context)
    {
        return PrintJsonMessage(message);
    }

    public Task Handle(OperationComplete message, IMessageHandlerContext context)
    {
        // This is not hit if ProcessMessage takes too long
        return PrintJsonMessage(message);
    }

    private Task PrintJsonMessage<T>(T message) where T : class
    {
        var msgObj = new
        {
            Message = typeof(T).Name,
            Data = message
        };
        logger.Info(JsonConvert.SerializeObject(msgObj, Formatting.Indented));
        return Task.CompletedTask;
    }
}
I'm certain that the context.Publish() calls are being hit because the _logger.Info() calls are printing messages to my test console. I've also verified they are hit with breakpoints. In my testing, anything that runs longer than 500 milliseconds prevents the handling of the OperationComplete event.
If anyone can offer suggestions as to why the OperationComplete event is not hitting the handler when any significant amount of time has passed in the ProcessMessage implementation, I'd be extremely grateful to hear them. Thanks!
-- Update --
In case anyone else runs into this and is curious about what I ended up doing:
After an exchange with the developers of NServiceBus, I decided on using a watchdog saga that implemented the IHandleTimeouts interface to periodically check for job completion. I was using saga data, updated when the job was finished, to determine whether to fire off the OperationComplete event in the timeout handler. This presented another issue: when using In-Memory Persistence, the saga data was not persisted across threads even when it was locked by each thread. To get around this, I created an interface specifically for long running, in-memory data persistence. This interface was injected into the saga as a singleton, and thus used to read/write saga data across threads for long running operations.
I know that In-Memory Persistence is not recommended, but for my needs configuring another type of persistence (like Azure tables) was overkill; I simply want the OperationComplete event to fire under normal circumstances. If a reboot happens during a running job, I don't need to persist the saga data. The job will be cut short anyway and the saga timeout will handle firing the OperationComplete event with an error if the job runs longer than a set maximum time.
The cause of this is that if ProcessMessage is fast enough, your background task may use the current context before it gets invalidated, for example by being disposed.
By returning from Handle successfully, you're telling NServiceBus "I'm done with this message", so it may do what it wants with the context as well, such as invalidating it. In a background processor you need an endpoint instance, not a message context.
By the time the new task starts running, you don't know whether Handle has returned or not, so you should consider the message as already consumed and thus unrecoverable. If errors happen in your separate task, you can't retry them.
Avoid long running processes without persistence. The sample you mention has a server that stores a work item from a message, and a process that polls this storage for work items. Perhaps not ideal if you scale out processors, but it won't lose messages.
To avoid constant polling, merge the server and the processor, poll unconditionally once when it starts, and in Handle schedule a polling task. Take care that this task only polls if no other polling task is running, otherwise it may become worse than constant polling. You may use a semaphore to control this, as in the sketch below.
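A minimal sketch of that semaphore guard (SchedulePollIfIdle and PollForWorkItems are illustrative names, not part of any NServiceBus API):

// Ensures at most one polling task runs at a time.
private static readonly SemaphoreSlim _pollGate = new SemaphoreSlim(1, 1);

private async Task SchedulePollIfIdle()
{
    // WaitAsync(0) returns immediately: true if we acquired the gate,
    // false if another polling task is already running.
    if (!await _pollGate.WaitAsync(0))
        return;

    try
    {
        await PollForWorkItems(); // hypothetical: drain stored work items
    }
    finally
    {
        _pollGate.Release();
    }
}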
To scale out, you must have more servers. You need to measure whether the cost of N processors polling is greater than sending to N servers in a round-robin fashion, for some N, to know which approach actually performs better. In practice, polling is good enough for a low N.
Modifying the sample for multiple processors may require less deployment and configuration effort: you just add or remove processors, while adding or removing servers requires changing their endpoints in all the places (e.g. config files) that point to them.
Another approach would be to break the long process into steps. NServiceBus has sagas. This is usually done for a known or bounded number of steps. For an unknown number of steps it's still feasible, although some might consider it an abuse of the seemingly intended purpose of sagas.

Single instance WCF service with concurrent tasks (that can be limited)

I am trying to build a WCF service that -
Is single instance
Allows clients to make multiple requests to functions (e.g. StartJob)
StartJob(request) 'queues' the request to the TaskFactory (one instance) running on a concurrent task scheduler (implemented as per the example)
As tasks in the task factory are completed, the response is returned
While a task is running and more requests come in, they get queued (provided the max concurrent count has been reached)
The objective is to build a system that accepts requests from clients and queues them for processing.
Currently, my code (shown below) runs all requests simultaneously without taking the task scheduler's max concurrent number into account.
Questions
What am I missing out?
Any good example/reference I can look at? (I am sure this is not an uncommon use case)
Code
IService
[ServiceContract]
public interface ISupportService
{
    [OperationContract]
    Task<TaskResponse> StartTask(TaskRequest taskRequest);
}
Service
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
public class SupportService : ISupportService
{
    private static TaskRequestHandler taskRequestHandler;

    public SupportService()
    {
        taskRequestHandler = TaskRequestHandler.GetInstance();
    }

    public Task<TaskResponse> StartTask(TaskRequest taskRequest)
    {
        var tcs = new TaskCompletionSource<TaskResponse>();
        if (!IsTaskRequestValid(taskRequest))
        {
            tcs.SetResult(new TaskResponse());
            return tcs.Task;
        }
        taskRequestHandler.StartTaskAsync(taskRequest, lockHandler).ContinueWith(task => { tcs.SetResult(task.Result); });
        return tcs.Task;
    }
}
TaskRequestHandler
public class TaskRequestHandler
{
    private static readonly TaskRequestHandler instance = new TaskRequestHandler();

    private ConcurrentTaskScheduler taskScheduler;
    private TaskFactory taskFactory;

    private TaskRequestHandler()
    {
        taskScheduler = new ConcurrentTaskScheduler(2);
        taskFactory = new TaskFactory(taskScheduler);
    }

    public static TaskRequestHandler GetInstance()
    {
        return instance;
    }

    // public so SupportService can call it (it was private in the original)
    public Task<TaskResponse> StartTaskAsync(TaskRequest request, LockHandler lockHandler)
    {
        var tcs = new TaskCompletionSource<TaskResponse>();
        taskFactory.StartNew(() =>
        {
            // Some task with tcs.SetResult()
        });
        return tcs.Task;
    }
}
Aaaah! A big miss on my part. The action executed in the taskFactory was completing before I expected it to, so all the tasks appeared to be running in parallel.
Once I updated the action code to monitor completion correctly and raise the right callbacks, the above code worked fine.
However, I made a minor change:
There is no need for StartTask(TaskRequest taskRequest) to return a Task. Just returning the TaskResponse suffices, as WCF takes care of the async and sync functionality of every OperationContract.
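For reference, a sketch of the corrected shape: the TaskCompletionSource is completed inside the scheduled work item, so the returned Task only finishes when the work itself does (DoWork is a placeholder for the actual long-running action):

private Task<TaskResponse> StartTaskAsync(TaskRequest request)
{
    var tcs = new TaskCompletionSource<TaskResponse>();
    taskFactory.StartNew(() =>
    {
        try
        {
            // Runs on the limited-concurrency scheduler, so at most
            // two of these execute at the same time.
            var response = DoWork(request);
            tcs.SetResult(response); // completes the returned Task only now
        }
        catch (Exception ex)
        {
            tcs.SetException(ex);
        }
    });
    return tcs.Task;
}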

Cache object with ObjectCache in .Net with expiry time

I am stuck in a scenario.
My code is like below :
Update: it's not about how to use the data cache; I am already using it and it works. It's about extending it so the method doesn't call the external source between the time an item expires and the time fresh data is fetched.
cachedData = (string)this.GetDataFromCache(cache, cacheKey);
if (String.IsNullOrEmpty(cachedData))
{
    // get the data. It takes 100ms
    SetDataIntoCache(cache, cacheKey, cachedData, DateTime.Now.AddMilliseconds(500));
}
So the user hits the cache and gets data from it; if the item has expired, the code calls the service, gets fresh data, and saves it to the cache. The problem is that whenever there is a pending request (a request in progress), another request is still sent to the service because the cached object has expired. In the end there should be at most 2-3 calls per second, but there are 10-20 calls per second to the external service.
Is there an optimal way of doing this so that requests don't conflict, other than creating my own custom class with arrays, time stamps, etc.?
btw, the saving code for the cache is:
private void SetDataIntoCache(ObjectCache cacheStore, string cacheKey, object target, DateTime slidingExpirationDuration)
{
    CacheItemPolicy cacheItemPolicy = new CacheItemPolicy();
    // Note: despite the parameter name, this sets an absolute expiration.
    cacheItemPolicy.AbsoluteExpiration = slidingExpirationDuration;
    cacheStore.Add(cacheKey, target, cacheItemPolicy);
}
Use the double-checked locking pattern:
var cachedItem = (string)this.GetDataFromCache(cache, cacheKey);
if (String.IsNullOrEmpty(cachedItem)) // if there is no cache yet, or it has expired
{
    lock (_lock) // we lock only in this case
    {
        // check again: another thread might have put the item in the cache already
        cachedItem = (string)this.GetDataFromCache(cache, cacheKey);
        if (String.IsNullOrEmpty(cachedItem))
        {
            // get the data; takes ~100ms
            SetDataIntoCache(cache, cacheKey, cachedItem, DateTime.Now.AddMilliseconds(500));
        }
    }
}
This way, while there is an item in your cache (not expired yet), all requests complete without locking. But if there is no cache entry yet, or it has expired, only one thread will get the data and put it into the cache.
Make sure you understand that pattern, because there are some caveats when implementing it in .NET.
As noted in the comments, it is not necessary to use one "global" lock object to protect every single cache access. Suppose you have two methods in your code, and each of those methods caches an object using its own cache key (but still using the same cache). Then you should use two separate lock objects, because if you use one "global" lock object, calls to one method will unnecessarily wait for calls to the other method, even though they never work with the same cache keys. A sketch of per-key locks follows.
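One way to get a lock object per cache key (a sketch; the names are illustrative):

private static readonly ConcurrentDictionary<string, object> _locks =
    new ConcurrentDictionary<string, object>();

private static object GetLockFor(string cacheKey)
{
    // Each distinct key gets its own lock object, so callers working
    // with different keys never block each other.
    return _locks.GetOrAdd(cacheKey, _ => new object());
}

// usage: lock (GetLockFor(cacheKey)) { ...double-checked load as above... }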
I have adapted the solution from Micro Caching in .NET for use with System.Runtime.Caching.ObjectCache for MvcSiteMapProvider. The full implementation has an ICacheProvider interface that allows swapping between System.Runtime.Caching and System.Web.Caching, but this is a cut down version that should meet your needs.
The most compelling feature of this pattern is that it uses a lightweight version of a lazy lock to ensure that the data is loaded from the data source only once after the cache expires, regardless of how many concurrent threads are attempting to load the data.
using System;
using System.Runtime.Caching;
using System.Threading;

public interface IMicroCache<T>
{
    bool Contains(string key);
    T GetOrAdd(string key, Func<T> loadFunction, Func<CacheItemPolicy> getCacheItemPolicyFunction);
    void Remove(string key);
}

public class MicroCache<T> : IMicroCache<T>
{
    public MicroCache(ObjectCache objectCache)
    {
        if (objectCache == null)
            throw new ArgumentNullException("objectCache");
        this.cache = objectCache;
    }

    private readonly ObjectCache cache;
    private ReaderWriterLockSlim synclock = new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);

    public bool Contains(string key)
    {
        synclock.EnterReadLock();
        try
        {
            return this.cache.Contains(key);
        }
        finally
        {
            synclock.ExitReadLock();
        }
    }

    public T GetOrAdd(string key, Func<T> loadFunction, Func<CacheItemPolicy> getCacheItemPolicyFunction)
    {
        LazyLock<T> lazy;
        bool success;

        synclock.EnterReadLock();
        try
        {
            success = this.TryGetValue(key, out lazy);
        }
        finally
        {
            synclock.ExitReadLock();
        }

        if (!success)
        {
            synclock.EnterWriteLock();
            try
            {
                if (!this.TryGetValue(key, out lazy))
                {
                    lazy = new LazyLock<T>();
                    var policy = getCacheItemPolicyFunction();
                    this.cache.Add(key, lazy, policy);
                }
            }
            finally
            {
                synclock.ExitWriteLock();
            }
        }

        return lazy.Get(loadFunction);
    }

    public void Remove(string key)
    {
        synclock.EnterWriteLock();
        try
        {
            this.cache.Remove(key);
        }
        finally
        {
            synclock.ExitWriteLock();
        }
    }

    private bool TryGetValue(string key, out LazyLock<T> value)
    {
        value = (LazyLock<T>)this.cache.Get(key);
        if (value != null)
        {
            return true;
        }
        return false;
    }

    private sealed class LazyLock<T>
    {
        private volatile bool got;
        private T value;

        public T Get(Func<T> activator)
        {
            if (!got)
            {
                if (activator == null)
                {
                    return default(T);
                }
                lock (this)
                {
                    if (!got)
                    {
                        value = activator();
                        got = true;
                    }
                }
            }
            return value;
        }
    }
}
Usage
// Load the cache as a static singleton so all of the threads
// use the same instance.
private static IMicroCache<string> stringCache =
    new MicroCache<string>(System.Runtime.Caching.MemoryCache.Default);

public string GetData(string key)
{
    return stringCache.GetOrAdd(
        key,
        () => LoadData(key),
        () => LoadCacheItemPolicy(key));
}

private string LoadData(string key)
{
    // Load data from persistent source here
    return "some loaded string";
}

private CacheItemPolicy LoadCacheItemPolicy(string key)
{
    var policy = new CacheItemPolicy();
    // This ensures the cache will survive application
    // pool restarts in ASP.NET/MVC
    policy.Priority = CacheItemPriority.NotRemovable;
    policy.AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(1);
    // Load Dependencies
    // policy.ChangeMonitors.Add(new HostFileChangeMonitor(new string[] { fileName }));
    return policy;
}
NOTE: As was previously mentioned, you are probably not gaining anything by caching a value that takes 100ms to retrieve for only 500ms. You should most likely choose a longer time period to hold items in the cache. Are the items really that volatile in the data source that they could change that quickly? If so, maybe you should look at using a ChangeMonitor to invalidate any stale data so you don't spend so much of the CPU time loading the cache. Then you can change the cache time to minutes instead of milliseconds.
You will have to use locking to make sure a request is not sent while the cache has expired and another thread is already getting the data from the remote/slow service. It will look something like this (there are better implementations out there that are easier to use, but they require separate classes):
private static readonly object _Lock = new object();

...

data = (string)this.GetDataFromCache(cache, cacheKey);
if (data == null)
{
    lock (_Lock)
    {
        data = (string)this.GetDataFromCache(cache, cacheKey);
        if (String.IsNullOrEmpty(data))
        {
            // get the data; takes ~100ms
            SetDataIntoCache(cache, cacheKey, data, DateTime.Now.AddMilliseconds(500));
        }
    }
}
return data;
Also, make sure your service doesn't return null, as the code above will assume that no cached value exists and will fetch the data on every request. That is why more advanced implementations typically use something like a CacheObject wrapper, which supports storing null values; a sketch follows.
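A sketch of the wrapper idea (CacheObject here is illustrative, not a specific library's type): cache a wrapper instead of the raw value, so a legitimately-null result can still be cached and distinguished from a cache miss.

public sealed class CacheObject
{
    public object Value { get; set; }
}

private object GetOrLoad(ObjectCache cache, string cacheKey, Func<object> load)
{
    var wrapper = (CacheObject)cache.Get(cacheKey);
    if (wrapper == null) // a genuine miss; a cached null is still a wrapper
    {
        wrapper = new CacheObject { Value = load() };
        cache.Add(cacheKey, wrapper, DateTimeOffset.Now.AddMilliseconds(500));
    }
    return wrapper.Value;
}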
By the way, 500 milliseconds is too short a time to cache; you will end up burning lots of CPU cycles just adding and removing cache items, which will be evicted too soon for any other request to benefit. You should profile your code to see whether caching actually helps.
Remember, a cache involves a lot of code for locking, hashing and other data movement, which costs a good number of CPU cycles; and although CPU cycles are cheap, in a multi-threaded, multi-connection server the CPU has lots of other things to do.
Original Answer https://stackoverflow.com/a/16446943/85597
private string GetDataFromCache(
    ObjectCache cache,
    string key,
    Func<string> valueFactory)
{
    var newValue = new Lazy<string>(valueFactory);
    // The line below returns the existing item, or adds
    // the new value if it doesn't exist
    var value = cache.AddOrGetExisting(key, newValue, DateTimeOffset.Now.AddMilliseconds(500)) as Lazy<string>;
    // Lazy<T> handles the locking itself
    return (value ?? newValue).Value;
}
// usage...
var cachedValue = this.GetDataFromCache(cache, cacheKey, () => {
    // get the data here...
    // this factory will be called only once;
    // Lazy will automatically do the necessary locking
    return data; // the freshly fetched value
});

Preventing simultaneous calls to a WCF function

I have a WCF service running as a windows service. This service references a DLL containing a class X with a public method func1. func1 calls another method func2(private) asynchronously using tasks(TPL). func2 performs a long running task independently. The setup is :
WCF
public string wcfFunc()
{
    X obj = new X();
    return obj.func1();
}
DLL
public class X
{
    static bool flag;

    public X()
    {
        flag = true;
    }

    public string func1()
    {
        if (!flag)
            return "Already in action";
        Task t = null;
        t = Task.Factory.StartNew(() => func2(), TaskCreationOptions.LongRunning);
        return "started";
    }

    void func2()
    {
        try
        {
            flag = false;
            // Does a long running database processing work through .NET code
        }
        catch (Exception)
        {
        }
        finally
        {
            flag = true;
        }
    }
}
The WCF function is called from a website used by multiple users. No two executions of the database processing in func2 may run at the same time. Any user can trigger it, but if any other user attempts to trigger it while an execution is in progress, it should report that the processing is already running.
I tried to use a static variable 'flag' to check this, but it does not seem to be working.
Any solutions? Thanks in advance.
You can read the following article. To prevent multiple concurrent calls to the WCF service method, you need to ensure that only one instance of your service can be created, in addition to setting the concurrency mode.
In short, make the following changes to your ServiceBehavior:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single, InstanceContextMode = InstanceContextMode.Single)]
public class YourService : IYourService
{ ... }
NOTE: This will disable concurrency in all the methods exposed by your service. If you do not want that, you will have to move the needed method to a separate service and then configure it as above.
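If you'd rather keep the service concurrent and only guard the processing itself, here is a sketch of the question's flag approach made race-free: a plain static bool check-then-set is not atomic, but Interlocked.CompareExchange is (the names mirror the question's class):

public class X
{
    // 0 = idle, 1 = running; static so it is shared across instances.
    private static int _running;

    public string func1()
    {
        // Atomically set _running to 1, but only if it is currently 0.
        if (Interlocked.CompareExchange(ref _running, 1, 0) == 1)
            return "Already in action";

        Task.Factory.StartNew(() =>
        {
            try
            {
                func2(); // the long running database processing
            }
            finally
            {
                Interlocked.Exchange(ref _running, 0); // reset only when done
            }
        }, TaskCreationOptions.LongRunning);

        return "started";
    }

    void func2() { /* ... */ }
}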

Logging server events in signalR

I'm writing a C#-based web application using SignalR. So far I have a 'lobby' area (where open communication is allowed) and a 'session' area (where groups of 5 people can engage in private conversation, and any server interactions are only shown to the group).
What I'd like to do is create a 'logging' object in memory - one for each session (so if there are three groups of five people, I'd have three logging objects).
The 'session' area inherits from Hubs (and IDisconnect), and has several methods (Join, Send, Disconnect, etc.). The methods pass data back to the JavaScript client, which calls client-side JS functions. I've tried using a constructor method:
public class Session : Hub, IDisconnect
{
    public class Logger
    {
        public List<Tuple<string, string, DateTime>> Log;

        public List<Tuple<string, string, DateTime>> AddEvent(string evt, string msg, DateTime time)
        {
            if (Log == null)
            {
                Log = new List<Tuple<string, string, DateTime>>();
            }
            Log.Add(new Tuple<string, string, DateTime>(evt, msg, time));
            return Log;
        }
    }

    public Logger eventLog = new Logger();

    public Session()
    {
        eventLog = new Logger();
        eventLog.AddEvent("LOGGER INITIALIZED", "Logging started", DateTime.Now);
    }

    public Task Join(string group)
    {
        eventLog.AddEvent("CONNECT", "User connect", DateTime.Now);
        return Groups.Add(Context.ConnectionId, group);
    }

    public Task Send(string group, string message)
    {
        eventLog.AddEvent("CHAT", "Message Sent", DateTime.Now);
        return Clients[group].addMessage(message);
    }

    public Task Interact(string group, string payload)
    {
        // deserialise the data
        // pass the data to the worker
        // broadcast the interaction to everyone in the group
        eventLog.AddEvent("INTERACTION", "User interacted", DateTime.Now);
        return Clients[group].interactionMade(payload);
    }

    public Task Disconnect()
    {
        // grab someone from the lobby?
        eventLog.AddEvent("DISCONNECT", "User disconnect", DateTime.Now);
        return Clients.leave(Context.ConnectionId);
    }
}
But this results in the Logger being recreated every time a user interacts with the server.
Does anyone know how I'd be able to create one Logger per new session, and add elements to it? Or is there a simpler way to do this and I'm just overthinking the problem?
Hubs are created and disposed of all the time! Never put data in them that you expect to last (unless it's static).
I'd recommend creating your logger object as its own class (not extending Hub/IDisconnect).
Once you have that, create a static ConcurrentDictionary on the hub which maps SignalR groups (use these to represent your sessions) to loggers.
When a "Join" method is triggered on your hub, it's as easy as looking up the group that the connection is in and sending the logging data to the group's logger. A sketch follows.
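A minimal sketch of that mapping, reusing the Logger class from the question (the dictionary is static, so it survives hub instances being created and disposed per call):

public class Session : Hub, IDisconnect
{
    private static readonly ConcurrentDictionary<string, Logger> _loggers =
        new ConcurrentDictionary<string, Logger>();

    public Task Join(string group)
    {
        // One Logger per session/group, created on first use.
        var logger = _loggers.GetOrAdd(group, g => new Logger());
        logger.AddEvent("CONNECT", "User connect", DateTime.Now);
        return Groups.Add(Context.ConnectionId, group);
    }
}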
Check out https://github.com/davidfowl/JabbR when it comes to making "rooms" and other sorts of groupings via SignalR.
Hope this helps!
