C# .NET - detect if code is running as a Hangfire job

Is it possible to detect if code is running as a Hangfire job in C#?
Thank you
I would expect that some global system variable exists that indicates the code is running as part of a Hangfire job.

I would expect that some global system variable exists that indicates
the code is running as part of a Hangfire job.
I would be curious about your use case. In my opinion, having code that depends explicitly on its execution context is more difficult to maintain and test.
Is it because your code depends on HttpContext, which is not available when running the task with Hangfire?
Anyway, you could achieve what you want with AsyncLocal and Hangfire server filters:
using System.Threading;
using Hangfire.Common;
using Hangfire.Server;

public static class HangfireTaskMonitor
{
    // AsyncLocal flows with the logical call context, so the flag is visible
    // throughout the async call chain of the job.
    public static AsyncLocal<bool> IsRunningInBackground { get; } = new AsyncLocal<bool>();
}

public class ContextTrackedAttribute : JobFilterAttribute, IServerFilter
{
    public void OnPerforming(PerformingContext filterContext)
    {
        HangfireTaskMonitor.IsRunningInBackground.Value = true;
    }

    public void OnPerformed(PerformedContext filterContext)
    {
        HangfireTaskMonitor.IsRunningInBackground.Value = false;
    }
}
Then put the [ContextTracked] attribute on your job method and test HangfireTaskMonitor.IsRunningInBackground.Value wherever you need it.
The idea is somewhat simplified for clarity. For a more decoupled solution, I would have the HangfireTaskMonitor injected as a singleton instead of being a static class, and have the filter be a plain filter registered during Hangfire configuration instead of an attribute.
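For completeness, a minimal usage sketch; the ReportService/SendReport names are illustrative, not from the original answer:

public class ReportService
{
    // Decorate the method that Hangfire executes so the server filter runs around it.
    [ContextTracked]
    public void SendReport(int reportId)
    {
        if (HangfireTaskMonitor.IsRunningInBackground.Value)
        {
            // Running inside a Hangfire worker: avoid anything that needs HttpContext.
        }
        else
        {
            // Running inline, e.g. directly from a controller action.
        }
    }
}

// Enqueued as usual: BackgroundJob.Enqueue<ReportService>(s => s.SendReport(42));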

Related

Do private static variables get shared between multiple sessions/users in a web application?

I have an MVC web application in which I have written some code to get the forms cookie. Sometimes when I log out, some other user's name (a user I have never logged in as) gets displayed in the textbox. I believe it's due to the private static variables, but I'm not sure. Below is the code I have implemented; can anyone help me with this? Thanks
// This is the code I am using for the forms cookie
private static string _formsCookieName;
private static string FormsCookieName
{
    get
    {
        if (string.IsNullOrWhiteSpace(_formsCookieName))
        {
            _formsCookieName = FormsAuthentication.FormsCookieName;
        }
        return _formsCookieName;
    }
}

private static string _formsCookiePath;
private static string FormsCookiePath
{
    get
    {
        if (string.IsNullOrWhiteSpace(_formsCookiePath))
        {
            _formsCookiePath = FormsAuthentication.FormsCookiePath;
        }
        return _formsCookiePath;
    }
}

public static UserSession LogoutAuthentication(HttpContextBase context)
{
    UserSession session = null;
    string cookieName = FormsCookieName;
    try
    {
        HttpCookie httpCookie = context.Request.Cookies[cookieName];
        httpCookie.Expires = DateTime.Now;
        FormsAuthentication.SignOut();
    }
    catch
    {
    }
    return session;
}
Yes, static variables are shared amongst all threads.
Don't use static properties for values that should live only within the lifespan of your request. You can't even use [ThreadStatic] in ASP.NET, because you don't control the thread pool and the same thread can be reused to handle different requests.
And even when you DO want a static value that is mutated by different threads, you need to have locks (or some other synchronization) in place to avoid race conditions.
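As an aside, a minimal sketch of a thread-safe lazy static using Lazy<T>; this only rewrites the question's lazy initialization to avoid the race, it is not the per-request design recommended below:

// Lazy<T> is thread-safe by default (ExecutionAndPublication), so the factory
// runs at most once even if several requests hit the property concurrently.
private static readonly Lazy<string> _formsCookieName =
    new Lazy<string>(() => FormsAuthentication.FormsCookieName);

private static string FormsCookieName => _formsCookieName.Value;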
Your FormsCookieName property is request dependent, therefore it should only exist during the lifespan of the request. The poor man's way of doing it would be to instantiate it in Application_BeginRequest and dispose it in Application_EndRequest of Global.asax.cs, assuming .NET Framework 4.5.
The correct way of doing it, though, is using a DI container. DI containers not only inject dependencies, they also manage object lifecycles. All major DI containers have an HttpContext lifecycle manager of sorts, and .NET Core comes with a DI container built in. With it, your code would become:
public void ConfigureServices(IServiceCollection services)
{
    // Scoped: one instance per HTTP request, disposed when the request ends.
    services.AddScoped<IFormsCookieName, FormsCookieName>();
}
And your controller:
public class FooController : ControllerBase
{
    public FooController(IFormsCookieName formsCookieName)
    {
        // Receives a FormsCookieName instance that can safely use its non-static properties.
    }
}
EDIT: Full configuration of Unity would be too long and off-topic for Stack Overflow. But the basic idea is that the dependency injection container will create an instance of a non-static FormsCookieName in the scope of your HttpContext and then dispose it at the end of the request. This ensures that every HttpContext gets its own copy of FormsCookieName and no data gets mixed up between requests.
I recommend Unity as a DI container. It's maintained by Microsoft, and its performance has seen a lot of improvements in the latest versions.
Configuring a DI container isn't hard, and it provides lots of benefits.
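For reference, the IFormsCookieName abstraction used in the registration above is not shown in the answer; a minimal sketch of what it might look like (the member names are illustrative and simply mirror the question's use of FormsAuthentication):

// Hypothetical shape of the injected abstraction: instance members instead of statics,
// so the container can hand each request its own copy.
public interface IFormsCookieName
{
    string CookieName { get; }
    string CookiePath { get; }
}

public class FormsCookieName : IFormsCookieName
{
    public string CookieName { get; } = FormsAuthentication.FormsCookieName;
    public string CookiePath { get; } = FormsAuthentication.FormsCookiePath;
}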

Quartz job not starting

I'm trying to use Abp.Quartz for scheduling jobs.
I'm working with .NET Core 2.2 and ABP 4.5.
I did everything as in the docs here:
https://aspnetboilerplate.com/Pages/Documents/Quartz-Integration , except that I resolved the job manager in the PostInitialize method.
In the end I also tried exactly the same as in the docs (starting the scheduler from a controller).
But that didn't work either. The job is not starting.
namespace Cloud
{
    [DependsOn(typeof(AbpZeroCoreModule),
        typeof(AbpQuartzModule))]
    public class CloudCoreModule : AbpModule
    {
        public override void PreInitialize()
        {
        }

        public override void Initialize()
        {
            IocManager.RegisterAssemblyByConvention(typeof(CloudCoreModule).GetAssembly());
        }

        public override void PostInitialize()
        {
            IocManager.Resolve<IQuartzScheduleJobManager>().ScheduleAsync<ApiRequestQueues.ApiRequestProcessor>(
                job =>
                {
                    job
                        .WithIdentity("RevolutApiProcessor")
                        .WithDescription("A job witch processing request front");
                },
                trigger =>
                {
                    trigger
                        .StartNow()
                        .WithSimpleSchedule(
                            schedule =>
                            {
                                schedule
                                    .RepeatForever()
                                    .WithIntervalInSeconds(5)
                                    .Build();
                            });
                });
        }
    }
}
And here is the ApiRequestProcessor class:
public class ApiRequestProcessor : JobBase, ITransientDependency
{
    public override async Task Execute(IJobExecutionContext context)
    {
        //some work
    }
}
Better late to the party than never.
I've managed to get this working, though without the Abp.Quartz package. If you're able to use the standard Quartz package, you can use the following steps:
Define and set up your jobs as normal in Startup.cs (a rough scheduling sketch follows the job class below)
Ensure that your job class implements both IJob, for Quartz to be happy, and ITransientDependency, for ASP.NET Boilerplate to be happy
Very important: if your job code uses a DbContext in any way, you need to ensure that the [UnitOfWork] attribute is on your job class (which is in any case important to ensure your jobs run atomically and do not affect any other transactions which may be occurring on your server)
In short, a working bare-bones job class might look like this:
using System;
using System.Threading.Tasks;
using Abp.Dependency;
using Abp.Domain.Uow;
using Quartz;

namespace MyProject.MyJobService
{
    [UnitOfWork]
    public class SimpleJob : IJob, ITransientDependency
    {
        public async Task Execute(IJobExecutionContext context)
        {
            Console.WriteLine("Hello from quartz job!");
        }
    }
}
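As a rough illustration of the first step (job setup in Startup.cs), scheduling with the plain Quartz API could look like the sketch below. This is an assumption-laden sketch: it uses the standard Quartz classes (StdSchedulerFactory, JobBuilder, TriggerBuilder) rather than Abp.Quartz, and it does not show hooking a custom IJobFactory into ABP's IoC container, which is what makes constructor injection inside the job work.

// Somewhere in startup code that can await (e.g. an async configuration method):
var scheduler = await new StdSchedulerFactory().GetScheduler();
await scheduler.Start();

var job = JobBuilder.Create<SimpleJob>()
    .WithIdentity("SimpleJob")
    .Build();

var trigger = TriggerBuilder.Create()
    .StartNow()
    .WithSimpleSchedule(s => s.WithIntervalInSeconds(5).RepeatForever())
    .Build();

await scheduler.ScheduleJob(job, trigger);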
More information and discussion can be found here (also how I finally arrived at this solution): https://github.com/aspnetboilerplate/aspnetboilerplate/issues/3757
When in doubt, carefully read what Quartz is telling you in the application output - it usually gives clues as to what is wrong, but I'm confident my solution will work for you 😊

Share Data Between Threads when using SpecFlow + SpecRunner

I am working on a test suite implementation which uses SpecFlow + SpecRunner and xUnit. We are trying to do parallel test execution, and I would like to know whether there is a way to run a hook at the beginning of the test run and store the token value in a static variable so that it can be shared among threads.
To summarize: is there a mechanism SpecFlow offers to share data between threads during parallel execution?
We can share the data using one of the approaches below:
Scenario Context
Context Injection
Here, approaches 1 and 2 will not have any issues with multiple threads, since their lifetime is specific to the scenario level.
Approach 1: we can define the token generation step within the BeforeScenario hook, and the generated token value can be stored in the ScenarioContext.
We can then access the token directly from the scenario context anywhere, as shown below.
Here, the token will be generated before each scenario run and it will not affect parallel execution. For more details, see Parallel Execution:
Scenarios and their related hooks (Before/After scenario, scenario block, step) are isolated in the different threads during execution and do not block each other. Each thread has a separate (and isolated) ScenarioContext.
Hooks Class:
public class CommonHooks
{
    [BeforeScenario]
    public static void Setup()
    {
        // Add token generation step
        var adminToken = "<Generated Token>";
        ScenarioContext.Current["Token"] = adminToken;
    }
}
Step Class:
[Given(@"I Get the customer details""(.*)""")]
public void WhenIGetTheCustomerDetails(string endpoint)
{
    if (ScenarioContext.Current.ContainsKey("Token"))
    {
        var token = ScenarioContext.Current["Token"].ToString();
        // Now the token variable holds the token value from the scenario context
        // and it can be used in the subsequent steps
    }
    else
    {
        Assert.Fail("Unable to get the Token from the Scenario Context");
    }
}
If you wish to share the same token across multiple steps, you can assign the token value in the step class constructor and use it from there.
For example:
[Binding]
public class CustomerManagementSteps
{
    public readonly string token;

    public CustomerManagementSteps()
    {
        token = ScenarioContext.Current["Token"].ToString();
    }

    [Given(@"I Get the customer details""(.*)""")]
    public void WhenIGetTheCustomerDetails(string endpoint)
    {
        // Now the token variable holds the token value from the scenario context
        // and it can be used in the subsequent steps
    }
}
Approach 2: Context Injection details, with an example, can be found at the link below:
Context Injection
Updated
Given the downvote and comments, I've updated my code example to better show exactly one way you can use dependency injection here with code of your own design. This shared data will last the lifetime of the scenario and be used by all bindings. I think that's what you're looking for unless I'm mistaken.
// Stores whatever data you want to share
// Write this however you want, it's your code
// You can use more than one of these custom data classes of course
public class SomeCustomDataStructure
{
    // If this is run in parallel, this should be thread-safe. Using List<T> for simplicity purposes
    // Use EF, ConcurrentCollections, synchronization (like lock), etc...
    // Again, do NOT copy this code for parallel uses as List<int> is NOT thread-safe
    // You can force things to not run in parallel so this can be useful by itself
    public List<int> SomeData { get; } = new List<int>();
}

// Will be injected and the shared instance between any number of bindings.
// Lifespan is that of a scenario.
public class CatalogContext : IDisposable
{
    public SomeCustomDataStructure CustomData { get; private set; }

    public CatalogContext()
    {
        // Init shared data however you want here
        CustomData = new SomeCustomDataStructure();
    }

    // Added to show Dispose WILL be called at the end of a scenario
    // Feel free to do cleanup here if necessary.
    // You do NOT have to implement IDisposable, but it's supported and called.
    public void Dispose()
    {
        // Below obviously not thread-safe as mentioned earlier.
        // Simple example is all.
        CustomData.SomeData.Clear();
    }
}

[Binding]
public class SomeSteps
{
    // Data shared here via instance field, accessible to multiple steps
    private readonly CatalogContext catalogContext;

    // Dependency injection handled automatically here.
    // Will get the same instance between other bindings.
    public SomeSteps(CatalogContext catalogContext)
    {
        this.catalogContext = catalogContext;
    }

    [Given(@"the following ints")]
    public void GivenTheFollowingInts(int[] numbers)
    {
        // This will be visible to all other steps in this binding,
        // and all other bindings sharing the context
        catalogContext.CustomData.SomeData.AddRange(numbers);
    }
}

Console app with MVC, Ninject and WCF Service (Dispose issue?)

I have an MVC application with all the Ninject stuff wired up properly. Within the application I wanted to add functionality to call a WCF service, which then sends bulk messages (i.e. bulk printing) to a RabbitMQ queue.
A 'processor' app subscribes to messages in the queue and processes them. This is where I also want to update some stuff in the database, so I want all my services and repositories from the MVC app to be available too.
The processor app implements the following:
public abstract class KernelImplementation
{
    private IKernel _kernel;

    public IKernel Kernel
    {
        get
        {
            if (_kernel != null)
                return _kernel;
            else
            {
                _kernel = new StandardKernel(new RepositoryModule(),
                    new DomainModule(),
                    new ServiceModule(),
                    new MessageModule());
                return _kernel;
            }
        }
    }
}
All Ninject repository bindings are specified within RepositoryModule, which is also used within the MVC app, and look like this:
Bind<IReviewRepository>().To<ReviewRepository>().InCallScope();
The processor class
public class Processor : KernelImplementation
{
    private readonly IReviewPrintMessage _reviewPrintMessage;

    public Processor()
    {
        _reviewPrintMessage = Kernel.Get<IReviewPrintMessage>();
        [...]
        _bus.Subscribe<ReviewPrintContract>("ReviewPrint_Id",
            (reviewPrintContract) => _reviewPrintMessage.ProcessReviewPrint(reviewPrintContract));
        //calling ProcessReviewPrint where I want my repositories to be available
    }
}
Everything works fine until I update the database from the MVC app or directly in the database. The processor app doesn't know anything about those changes, and the next time it tries to process something it works on a 'cached' DbContext. I'm sure it's something to do with not disposing the DbContext properly, but I'm not sure what scope should be used for a console app (I tried all sorts of different scopes to no avail).
The only solution I can think of at the moment is to call WCF service back from the processor app and perform all the necessary updates within the service, but I would want to avoid that.
UPDATE: Adding update logic
Simplified ReviewPrintMessage:
public class ReviewPrintMessage : IReviewPrintMessage
{
    private readonly IReviewService _reviewService;

    public ReviewPrintMessage(IReviewService reviewService)
    {
        _reviewService = reviewService;
    }

    public void ProcessReviewPrint(ReviewPrintContract reviewPrintContract)
    {
        var review = _reviewService.GetReview(reviewPrintContract.ReviewId);
        [...]
        //do all sorts of stuff here
        [...]
        _reviewService.UpdateReview(review);
    }
}
UpdateReview method in ReviewService:
public void UpdateTenancyAgreementReview(TenancyAgreementReview review)
{
    _tenancyAgreementReviewRepository.Update(review);
    _unitOfWork.Commit();
}
RepositoryBase:
public abstract class EntityRepositoryBase<T> where T : class
{
    protected MyContext _dataContext;

    protected EntityRepositoryBase(IDbFactory dbFactory)
    {
        this.DbFactory = dbFactory;
        _dbSet = this.DataContext.Set<T>();
    }

    [...]

    public virtual void Update(T entity)
    {
        try
        {
            DataContext.Entry(entity).State = EntityState.Modified;
        }
        catch (Exception exception)
        {
            throw new EntityException(string.Format("Failed to update entity '{0}'", typeof(T).Name), exception);
        }
    }
}
Context itself is bound like this:
Bind<MyContext>().ToSelf().InCallScope();
From the description of scopes I thought that Transient scope was the right choice, but as I said earlier I tried all sorts, including RequestScope, TransientScope, NamedScope and even Singleton (although I knew it wouldn't be the desired behaviour), but none of them seem to dispose the context properly.
What you'll need is one DbContext instance per transaction.
Now, other "applications" like web applications or WCF services may do one transaction per request (and thus use something like InRequestScope()). Also note that these applications create an object graph for each request. However, that is a concept unknown to your console application.
Furthermore, scoping only affects the instantiation of objects. Once they are instantiated, scoping does not have any effect on them.
So one way to solve your issue would be to create the (relevant) object tree/graph per transaction and then you could use InCallScope() (InCallScope really means "once per instantiation of an object graph", see here).
That would mean that you'd need a factory for IReviewPrintMessage (have a look at ninject.extensions.factory) and create an instance of IReviewPrintMessage every time you want to execute IReviewPrintMessage.ProcessReviewPrint.
Now you have re-created the "per request pattern".
However, regarding CompositionRoot this is not recommended.
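To make the factory idea above concrete, here is a minimal sketch using ninject.extensions.factory; the IReviewPrintMessageFactory interface and its bindings are assumptions for illustration, not part of the original answer.

using Ninject;
using Ninject.Extensions.Factory;

// Hypothetical factory interface; Ninject.Extensions.Factory generates the
// implementation when it is bound with ToFactory().
public interface IReviewPrintMessageFactory
{
    IReviewPrintMessage CreateHandler();
}

// In the composition root (e.g. a Ninject module):
// Bind<IReviewPrintMessageFactory>().ToFactory();
// Bind<IReviewPrintMessage>().To<ReviewPrintMessage>();
// Bind<MyContext>().ToSelf().InCallScope();   // as in the question

public class Processor : KernelImplementation
{
    private readonly IReviewPrintMessageFactory _handlerFactory;

    public Processor()
    {
        _handlerFactory = Kernel.Get<IReviewPrintMessageFactory>();

        // The bus subscription stays as in the question, but the callback now asks the
        // factory for a new handler, so every message gets a fresh object graph and
        // therefore a fresh InCallScope'd MyContext:
        // contract => _handlerFactory.CreateHandler().ProcessReviewPrint(contract)
    }
}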
Alternative: you can also re-create only the DbContext as needed. Instead of passing it around everywhere (DbContext as an additional parameter on almost every method) you can use SynchronizationContext local storage (or, if you don't use TPL/async-await, a ThreadLocal). I've already described this method in more detail here.
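A rough sketch of that alternative, assuming AsyncLocal<T> (.NET 4.6+) as the ambient storage; AmbientDbContext is an illustrative name, and MyContext is the DbContext from the question.

using System;
using System.Threading;

// Hypothetical ambient holder: one MyContext per logical flow (one per message/transaction),
// created when the work starts and disposed when it ends.
public static class AmbientDbContext
{
    private static readonly AsyncLocal<MyContext> _current = new AsyncLocal<MyContext>();

    public static MyContext Current => _current.Value;

    public static IDisposable BeginScope()
    {
        _current.Value = new MyContext();
        return new Scope();
    }

    private sealed class Scope : IDisposable
    {
        public void Dispose()
        {
            _current.Value?.Dispose();
            _current.Value = null;
        }
    }
}

// Usage inside the message handler:
// using (AmbientDbContext.BeginScope())
// {
//     reviewPrintMessage.ProcessReviewPrint(reviewPrintContract);
// }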

StructureMap doesn't appear to be ready during HttpModule constructor -- is this correct?

This question is more to confirm my diagnosis of an issue we encountered--or to find alternative explanations.
We have an HttpModule which intercepts every request made to our WebForms application. Its job is to translate specific querystring parameters which our integration partners send.
More importantly, it was wired to StructureMap like this:
public class SomeModule : IHttpModule
{
    public SomeModule()
    {
        ObjectFactory.BuildUp(this);
    }

    public IDependency Dependency { get; set; }
}
During a previous release it appeared that the module wasn't being injected by the time it executed its request processing. That led to some (ugly) defensive checks being added, like:
public class SomeModule : IHttpModule
{
    public SomeModule()
    {
        ObjectFactory.BuildUp(this);
        if (Dependency == null)
        {
            // HACK: Not sure why this corrects the issue!
            Dependency = ObjectFactory.GetInstance<IDependency>();
        }
    }

    public IDependency Dependency { get; set; }
}
You'll notice the HACK comment -- it resolved the issue but without good reason.
Well, this same module has been repurposed on another site, and the previous hack no longer worked. After looking at it for some time I made the change to move the StructureMap call outside the constructor, and lo and behold, it works.
public class SomeModule : IHttpModule
{
    public IDependency Dependency { get; set; }

    public void Init(HttpApplication context)
    {
        Initialize();
        // the rest of the code
    }

    private bool _initialized;

    private void Initialize()
    {
        if (_initialized)
        {
            return;
        }

        ObjectFactory.BuildUp(this);
        _initialized = true;
    }
}
So, I have a few questions around this behavior:
My suspicion is that StructureMap was not fully initialized/configured when the HttpModule constructor was being called -- agree/disagree, any insight?
I haven't found any reference materials that state when to expect StructureMap to be initialized and ready to service requests. Is there any such documentation?
I wouldn't expect the behavior to be very predictable when you're trying to build up a type in its own constructor. Many actions that are normally considered safe are not safe in a constructor (like using a virtual property of the class).
Where you moved the code to looks much better and I would leave it there. If you can't have the container create the instance itself (and therefore are forced to build it up) some sort of Initialize method is the preferred place to do the build up action.
To answer the question you have at the end of your post:
It is up to the developer to determine when StructureMap is initialized. In a web application, this is almost always done in the Global.asax in Application_Start(). In that case, I would expect the container to be ready when your module is called.
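For illustration, the usual place to configure the container in that setup is Global.asax; a minimal sketch using the legacy static ObjectFactory API from the question (SomeDependency is a placeholder concrete type, not from the original post):

// Global.asax.cs
public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Configure the container once at application start; the module then calls
        // ObjectFactory.BuildUp(this) from Init (as in the accepted fix) rather than
        // from its constructor.
        ObjectFactory.Initialize(x =>
        {
            x.For<IDependency>().Use<SomeDependency>();
        });
    }
}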
