I need to use network-validated time when scheduling jobs.
Solution?
With the latest .NET Quartz library, does anyone knowledgeable in the inner workings of Quartz.NET know which class(es) actually implement starting the job at the specified time? (details below)
Or is there an alternative C# library / API that already supports NTP queries and that can schedule jobs?
Please note that the approach of feeding the difference between local time and network time into a non-NTP scheduler does not work for me, because I need to prevent users from cheating by changing their local system time or timezone.
Quartz .NET Details - where I got stuck in my investigation
I cannot find how the StartTimeUtc property of ITrigger (set with the StartAt method below) is actually used.
var sleepTrigger = (ISimpleTrigger)TriggerBuilder.Create()
    .WithIdentity("SleepTimeTrigger")
    .StartAt(sleepRunTime)
    .WithSimpleSchedule(x => x.WithIntervalInHours(24).RepeatForever())
    .Build();
i.e. I need to check the specific implementation that uses the StartTimeUtc timestamp in order to change the scheduling source code / add an option to schedule independently of local system time and use network time instead.
There is a static SystemTime class in the Quartz namespace that exposes two Func<DateTimeOffset> delegates you can replace to return your NTP-tuned date/time. It is the "official" Quartz date/time source; it exists primarily to allow easy unit testing, but it can also be used to customize the date/time. Here is a sample usage:
SystemTime.Now = () =>
{
    //return your custom datetime here
    /*
    var ntpTime = new NtpTime(server);
    return ntpTime.NowDateTimeOffset;
    */
};

SystemTime.UtcNow = () =>
{
    //return your custom datetime here
    /*
    var ntpTime = new NtpTime(server);
    return ntpTime.UtcNowDateTimeOffset;
    */
};
Notice that the code above will not compile as-is; you need to implement your own method of getting the current DateTimeOffset for both Now and UtcNow (NtpTime above is just a placeholder). There are many ways to get the time from an NTP server; you can find some approaches here and here. For performance reasons, I suggest that your implementation caches the network time and increments it locally instead of asking the NTP server on every call.
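To illustrate that caching suggestion, here is a minimal sketch of a cached network clock wired into SystemTime. GetNetworkTime() is a hypothetical placeholder for your actual NTP query, and the ten-minute refresh interval is arbitrary:

using System;
using System.Diagnostics;

public static class NetworkClock
{
    private static readonly object Sync = new object();
    private static readonly TimeSpan RefreshInterval = TimeSpan.FromMinutes(10);
    private static DateTimeOffset _lastNtpUtc;
    private static Stopwatch _sinceLastSync;

    public static DateTimeOffset UtcNow
    {
        get
        {
            lock (Sync)
            {
                if (_sinceLastSync == null || _sinceLastSync.Elapsed > RefreshInterval)
                {
                    _lastNtpUtc = GetNetworkTime(); //hypothetical: query your NTP server here
                    _sinceLastSync = Stopwatch.StartNew();
                }
                //advance the cached value locally between NTP refreshes
                return _lastNtpUtc + _sinceLastSync.Elapsed;
            }
        }
    }

    //hypothetical placeholder - implement with your preferred NTP client
    private static DateTimeOffset GetNetworkTime()
    {
        throw new NotImplementedException();
    }
}

//wire it into Quartz so internal time checks flow through your clock:
SystemTime.UtcNow = () => NetworkClock.UtcNow;
SystemTime.Now = () => NetworkClock.UtcNow.ToLocalTime();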
Related
Introduction
Hello all, we're currently working on a microservice platform that uses Azure EventHubs and events to send data between the services.
Let's just name these services: CustomerService, OrderService and MobileBFF.
The CustomerService mainly sends updates (with events) which will then be stored by the OrderService and MobileBFF to be able to respond to queries without having to call the CustomerService for this data.
All three of these services, plus our developers on the DEV environment, use the same ConsumerGroup to connect to these event hubs.
We currently make use of only 1 partition but plan to expand to multiple later. (You can see our code is already made to be able to read from multiple partitions)
Exception
Every now and then we're running into an exception (once it starts, it usually keeps throwing this error for an hour or so). So far we've only seen this error on DEV/TEST environments.
The exception:
Azure.Messaging.EventHubs.EventHubsException(ConsumerDisconnected): At least one receiver for the endpoint is created with epoch of '0', and so non-epoch receiver is not allowed. Either reconnect with a higher epoch, or make sure all epoch receivers are closed or disconnected.
All consumers of the EventHub store their SequenceNumber in their own database. This allows each consumer to consume events separately and store the last processed SequenceNumber in its own SQL database. When the service (re)starts, it loads the SequenceNumber from the db and then requests events from there onwards until no more events can be found. It then sleeps for 100ms and retries. Here's the (somewhat simplified) code:
var consumerGroup = EventHubConsumerClient.DefaultConsumerGroupName;
string[] allPartitions = null;

await using (var consumer = new EventHubConsumerClient(consumerGroup, _inboxOptions.EventHubConnectionString, _inboxOptions.EventHubName))
{
    allPartitions = await consumer.GetPartitionIdsAsync(stoppingToken);
}

var allTasks = new List<Task>();

foreach (var partitionId in allPartitions)
{
    //This is required if you reuse variables inside a Task.Run();
    var partitionIdInternal = partitionId;
    allTasks.Add(Task.Run(async () =>
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                await using (var consumer = new EventHubConsumerClient(consumerGroup, _inboxOptions.EventHubConnectionString, _inboxOptions.EventHubName))
                {
                    EventPosition startingPosition;
                    //hoisted so it stays in scope for the processing loop below
                    EventHubInboxManager<T, EH> messageProcessor;
                    using (var testScope = _serviceProvider.CreateScope())
                    {
                        messageProcessor = testScope.ServiceProvider.GetService<EventHubInboxManager<T, EH>>();
                        //Obtains starting position from the database or sets to "Earliest" or "Latest" based on configuration
                        startingPosition = await messageProcessor.GetStartingPosition(_inboxOptions.InboxIdentifier, partitionIdInternal);
                    }

                    while (!stoppingToken.IsCancellationRequested)
                    {
                        bool processedSomething = false;

                        await foreach (PartitionEvent partitionEvent in consumer.ReadEventsFromPartitionAsync(partitionIdInternal, startingPosition, stoppingToken))
                        {
                            processedSomething = true;
                            startingPosition = await messageProcessor.Handle(partitionEvent);
                        }

                        if (processedSomething == false)
                        {
                            await Task.Delay(100, stoppingToken);
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                //Log error / delay / retry
            }
        }
    }));
}
The exception is thrown on the following line:
await using (var consumer = new EventHubConsumerClient(consumerGroup, _inboxOptions.EventHubConnectionString, _inboxOptions.EventHubName))
More investigation
The code described above is running in the MicroServices (which are hosted as AppServices in Azure)
Next to that, we're also running one Azure Function that also reads events from the EventHub (probably using the same consumer group).
According to the documentation here: https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-features#consumer-groups it should be possible to have 5 consumers per consumer group. The docs seem to suggest having only one, but it's not clear to us what could happen if we don't follow this guidance.
We did do some tests manually spawning multiple instances of our service that reads events, and when there were more than 5 this resulted in a different error which stated quite clearly that there could only be 5 consumers per partition per consumer group (or something similar).
Furthermore, it seems (we're not 100% sure) that this issue started happening when we rewrote the code (above) to spawn one thread per partition (even though we only have 1 partition in the EventHub). Edit: we did some more log-digging and also found a few exceptions from before merging in the code that spawns one thread per partition.
That exception indicates that there is another consumer configured to use the same consumer group and asserting exclusive access over the partition. Unless you're explicitly setting the OwnerLevel property in your client options, the likely candidate is that there is at least one EventProcessorClient running.
To remediate, you can:
Stop any event processors running against the same Event Hub and Consumer Group combination, and ensure that no other consumers are explicitly setting the OwnerLevel.
Run these consumers in a dedicated consumer group; this will allow them to co-exist with the exclusive consumer(s) and/or event processors.
Explicitly set the OwnerLevel to 1 or greater for these consumers; that will assert ownership and force any other consumers in the same consumer group to disconnect.
(note: depending on what the other consumer is, you may need to test different values here. The event processor types use 0, so anything above that will take precedence.)
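For the third option, a minimal sketch with the current Azure.Messaging.EventHubs client might look like this; consumerGroup, connectionString, eventHubName, partitionId, startingPosition, and cancellationToken are placeholders matching the question's code:

using Azure.Messaging.EventHubs.Consumer;

var readOptions = new ReadEventOptions
{
    //OwnerLevel > 0 makes this an exclusive ("epoch") reader; the service
    //disconnects competing readers in the same consumer group that have a
    //lower OwnerLevel or none at all
    OwnerLevel = 1
};

await using (var consumer = new EventHubConsumerClient(consumerGroup, connectionString, eventHubName))
{
    await foreach (PartitionEvent partitionEvent in consumer.ReadEventsFromPartitionAsync(partitionId, startingPosition, readOptions, cancellationToken))
    {
        //process the event
    }
}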
To add to Jesse's answer, I think the exception message is part of the old SDK.
If you look into the docs, there are 3 types of receiving modes defined there:
Epoch
Epoch is a unique identifier (epoch value) that the service uses, to enforce partition/lease ownership.
The epoch feature provides users the ability to ensure that there is only one receiver on a consumer group at any point in time...
Non-epoch:
... There are some scenarios in stream processing where users would like to create multiple receivers on a single consumer group. To support such scenarios, we do have ability to create a receiver without epoch and in this case we allow up to 5 concurrent receivers on the consumer group.
Mixed:
... If there is a receiver already created with epoch e1 and is actively receiving events and a new receiver is created with no epoch, the creation of new receiver will fail. Epoch receivers always take precedence in the system.
In our app, we store questions with the question's start date, end date and result date. We need to send a notification to the app (iPhone and Android) once a question's start date arrives.
Can anybody let me know how we can achieve this?
We don't want to use a pull method, i.e. checking at a particular time interval whether a question's start date has arrived and then sending the notification.
I have a URL that sends the notification for a question. I need to call this URL when the question's start date arrives.
Thanks.
Take a look at Quartz:
Quartz.NET is a full-featured, open source job scheduling system that can be used from smallest apps to large scale enterprise systems
Quartz Enterprise Scheduler .NET
You can create a new Quartz Job, let's call it QuestionSenderJob. Then your application can schedule a task in the Quartz scheduler; you can have many instances of the same Job with custom data - in your case QuestionId.
Additionally, it supports storing job scheduling data in your SQL database (there are DDL scripts included), so you can create some relations if you need them, e.g. for a UI.
You can find table-creation SQL scripts in the "database/dbtables" directory of the Quartz.NET distribution
Lesson 9: JobStores
This way you leave firing at the right moment to the Quartz engine.
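If you adopt the database-backed store, the scheduler is configured through properties. Here is a minimal sketch assuming SQL Server and Quartz.NET 2.x property names; the instance name and connection string are placeholders to adapt:

using System.Collections.Specialized;
using Quartz;
using Quartz.Impl;

NameValueCollection properties = new NameValueCollection();
properties["quartz.scheduler.instanceName"] = "QuestionScheduler";
//persist jobs and triggers in the database instead of RAM
properties["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz";
properties["quartz.jobStore.driverDelegateType"] = "Quartz.Impl.AdoJobStore.StdAdoDelegate, Quartz";
properties["quartz.jobStore.tablePrefix"] = "QRTZ_";
properties["quartz.jobStore.dataSource"] = "default";
properties["quartz.dataSource.default.connectionString"] = "Server=.;Database=Quartz;Trusted_Connection=True;";
properties["quartz.dataSource.default.provider"] = "SqlServer-20";

ISchedulerFactory factory = new StdSchedulerFactory(properties);
IScheduler scheduler = factory.GetScheduler();
scheduler.Start();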
Once you have gone through the Quartz.NET basics, see this code snippet I made for your case to schedule a job. Perhaps some modifications will be necessary, though.
IDictionary<string, object> jobData = new Dictionary<string, object> { { "QuestionId", questionId } };
var questionDate = new DateTime(2016, 09, 01);
var questionTriggerName = string.Format("Question{0}_Trigger", questionId);

var questionTrigger = TriggerBuilder.Create()
    .WithIdentity(questionTriggerName, "QuestionSendGroup")
    .StartAt(questionDate)
    .UsingJobData(new Quartz.JobDataMap(jobData))
    .Build();

scheduler.ScheduleJob(questionSenderJob, questionTrigger);
Then in the Job you will get your questionId through the IJobExecutionContext.

public class QuestionSenderJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        JobDataMap dataMap = context.JobDetail.JobDataMap;
        // Extract question Id and send message
    }
}
What about using the Task Scheduler Managed Wrapper?
You say you do not want to use polling, but if you write your own class that encapsulates a timer (e.g. System.Threading.Timer) and checks the time every second, that will not take many resources. Depending on how exact you need it, you could also check less often, e.g. every minute. Maybe you should reconsider polling.
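A minimal sketch of that idea follows; Question, QuestionStore and SendNotification are hypothetical stand-ins for your data access and push call:

using System;
using System.Threading;

public sealed class QuestionNotifier : IDisposable
{
    private readonly Timer _timer;

    public QuestionNotifier()
    {
        //check once per minute; tighten the period if you need more precision
        _timer = new Timer(CheckDueQuestions, null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
    }

    private void CheckDueQuestions(object state)
    {
        //QuestionStore.GetQuestionsDueBy is a hypothetical lookup of questions
        //whose start date has arrived since the last check
        foreach (var question in QuestionStore.GetQuestionsDueBy(DateTime.UtcNow))
        {
            SendNotification(question);
        }
    }

    private void SendNotification(Question question)
    {
        //hypothetical: call the notification URL for this question
    }

    public void Dispose()
    {
        _timer.Dispose();
    }
}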
If you use any third-party service to manage your push notifications, such as Azure Notification Hub, Parse.com, ..., they offer an integrated way to schedule push notifications, either by passing in a send date or by letting them run a job periodically. I'm a user of the Azure service and it works very well.
The best implementation I can advise right now is to send the notification from a server.
All you need is a good scheduler that can dispatch operations.
For me, my server is powered by JavaScript (NodeJS), so I use "node-schedule". All I do is:
var schedule = require('node-schedule');
//Reporting rule at minute 1 every hour
var rule = new schedule.RecurrenceRule();
rule.minute = 1;
schedule.scheduleJob(rule, function () {
    console.log(new Date().toTimeString() + ' Testing Scheduler! Executing Every other minute');
    //sendPush()
});
I have a CSV importer tool I use at my company that imports 10-20k records at a time, but it can take a couple of hours. The issue is that the application connects to an API using an OAuth token that expires after an hour.
To me this sounds like a job for a timer, but the actual code that imports and needs the OAuth token lives in modules, since each vendor I have to upload for has their own mappings to the API we use.
So I need to programmatically check whether 3590 seconds (or 50 minutes) have passed, so that I can refresh my OAuth token.
Does anyone know how I can do this? If a timer is not the best way to go, what way would you suggest?
It'd be nice if the timer had an Elapsed property I could access from my other objects (like I can with BackgroundWorker).
You could just make it part of your processing loop:
//DateTime.MinValue forces a refresh on the first iteration
DateTime lastReset = DateTime.MinValue;
TimeSpan resetInterval = TimeSpan.FromMinutes(50);

foreach (var whatever in enumerable)
{
    if ((DateTime.Now - lastReset) > resetInterval)
    {
        ResetAuthToken();
        lastReset = DateTime.Now;
    }

    ProcessWhatever();
}
I would suggest that you use the timer's Elapsed event. This will be triggered based on the interval, say 50 minutes, which you can read from the configuration file of the Windows service.
Then in the Elapsed handler, you can just update a global variable [property] with the auth token, to be used for the subsequent API calls.
In case you just want to keep the session alive, you can refresh the token as itmse86 said. However, the timer's Elapsed event will come in handy for you.
Reference here
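A minimal sketch of that approach, where RequestNewToken() is a hypothetical call against your OAuth endpoint:

using System;
using System.Timers;

public class TokenKeeper
{
    private readonly Timer _timer;

    //read by the import modules before each API call
    public string CurrentToken { get; private set; }

    public TokenKeeper(double refreshMinutes)
    {
        CurrentToken = RequestNewToken();

        _timer = new Timer(TimeSpan.FromMinutes(refreshMinutes).TotalMilliseconds);
        _timer.Elapsed += (sender, e) => CurrentToken = RequestNewToken();
        _timer.AutoReset = true;
        _timer.Start();
    }

    private string RequestNewToken()
    {
        //hypothetical: call your OAuth endpoint and return the fresh token
        throw new NotImplementedException();
    }
}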
Assume I have two Quartz.net jobs that
downloads a CSV file with a delta of changes for a period (e.g. 24h) and then imports the data (called IncrementalImportJob)
downloads a CSV file with all the records and then imports the data (called FullImportJob)
The requirement is that IncrementalImportJob runs at least once per period (e.g. 24h). If that window is missed, or the job didn't complete successfully, then FullImportJob should run instead. The reason is that changes for that (missed) day would otherwise not be imported. This condition is rather exceptional.
The FullImportJob requires resources (time, CPU, database, memory) to import all the data, which may impact other systems. Further, the delta of changes are often minimal or non-existent. So the goal is to favour running the IncrementalImportJob when possible.
How does one configure quartz.net to run FullImportJob if IncrementalImportJob hasn't completed successfully in a specific time period (say 24h)?
Searching the web for "quartz.net recovery" and "quartz.net misfire" doesn't reveal whether it's supported or whether it's even possible.
There is native misfire handling in Quartz.NET; however, it only goes as far as specifying whether the job should fire immediately again, after a period of time, or a certain number of times after misfiring.
I think one option is to handle this internally from IncrementalImportJob.
try
{
    //download data
    //import data
}
catch (Exception e) //something went wrong
{
    //log the error
    UpdateFullImportJobTrigger(sched);
}
//Reschedule FullImportJob to run at a time of your choosing.
public void UpdateFullImportJobTrigger(IScheduler sched)
{
    ITrigger oldTrigger = sched.GetTrigger(new TriggerKey("oldTrigger", "group1"));
    TriggerBuilder tb = oldTrigger.GetTriggerBuilder();

    //if you want it to run based on a schedule use this:
    ITrigger newTrigger = tb
        .WithSimpleSchedule(x => x
            .WithIntervalInSeconds(10)
            .WithRepeatCount(10))
        .Build();
    sched.RescheduleJob(oldTrigger.Key, newTrigger);

    //or use a simple trigger if you want it to run immediately and only once so that
    //it runs again on schedule the next time.
}
This is one way of doing it. Another would be abstracting this logic into a maintenance job that checks the logs every so often and, if it finds a failure message from IncrementalImportJob, fires FullImportJob. However, this depends to some extent on your logging system (most people use NLog or log4net); a listener-based variation is sketched below.
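That variation avoids log-scraping by using a Quartz job listener. Here is a minimal sketch assuming the Quartz.NET 2.x synchronous API; the job and group names ("FullImportJob", "IncrementalImportJob", "importGroup") are placeholders:

using Quartz;
using Quartz.Impl.Matchers;

public class IncrementalImportListener : IJobListener
{
    private readonly IScheduler _scheduler;

    public IncrementalImportListener(IScheduler scheduler)
    {
        _scheduler = scheduler;
    }

    public string Name
    {
        get { return "IncrementalImportListener"; }
    }

    public void JobToBeExecuted(IJobExecutionContext context) { }

    public void JobExecutionVetoed(IJobExecutionContext context) { }

    public void JobWasExecuted(IJobExecutionContext context, JobExecutionException jobException)
    {
        //if the incremental import threw, fire the full import right away
        if (jobException != null)
        {
            _scheduler.TriggerJob(new JobKey("FullImportJob", "importGroup"));
        }
    }
}

//register it so it only watches the incremental job:
scheduler.ListenerManager.AddJobListener(
    new IncrementalImportListener(scheduler),
    KeyMatcher<JobKey>.KeyEquals(new JobKey("IncrementalImportJob", "importGroup")));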
If, on the other hand, your concern is that the job never ran in the first place because, for instance, the app/database/server was down, you could schedule FullImportJob to fire a few hours later and check whether IncrementalImportJob has fired, as follows:
//this is done from FullImportJob
//how you retrieve triggerKey will depend on whether
//you are using RAMJobStore or ADO.NET JobStore
public void Execute(IJobExecutionContext context)
{
    ITrigger incImportJobTrigger = context.Scheduler.GetTrigger(triggerKey);

    //if the job has been rescheduled with a new time quartz will set this to null
    if (!incImportJobTrigger.GetPreviousFireTimeUtc().HasValue) return;

    DateTimeOffset utcTime = incImportJobTrigger.GetPreviousFireTimeUtc().Value;
    DateTime previousFireTime = utcTime.LocalDateTime;

    if (previousFireTime.Day == DateTime.Now.Day) return;

    //IncrementalImportJob has not run today, let's run FullImportJob
}
Hope this helps.
OK, a little bit of background here. I have a large-scale web application (MVC3) which does all kinds of unimportant stuff. I need this web application to have the ability to schedule ad-hoc Quartz.NET jobs in an Oracle database. Then, I want the jobs to be executed later on via a Windows service. Ideally, I'd like to schedule them to run at even intervals, but with the option to add jobs via the web app.
Basically, the desired architecture is some variation of this:
Web app <--> Quartz.NET <--> Database <--> Quartz.NET <--> Windows Service
What I have coded up so far:
A Windows service which (for now) schedules AND runs the jobs. This obviously isn't going to be the case in the long run, but I'm wondering if I can keep just this and modify it to basically represent both "Quartz.NET"s in the diagram above.
The web app (details I guess aren't very important here)
The jobs (which are actually just another windows service)
And a couple important notes:
It HAS to be run from a windows service, and it HAS to be scheduled through the web app (to reduce load on IIS)
The architecture above can be rearranged a little bit, assuming the above bullet still applies.
Now, a few questions:
Is this even possible?
Assuming (1) passes, what do you guys think is the best architecture for this? See first bullet on what I've coded up.
Can somebody maybe give me a few Quartz methods that will help me out with querying the DB for jobs to execute once they're already scheduled?
There will be a bounty on this question as soon as it is eligible. If the question is answered in a satisfactory way before then, I will still award the bounty to the poster of the answer. So, in any case, if you give a good answer here, you'll get a bounty.
I'll try answering your questions in the order you have them.
Yes, it's possible to do this. It's actually a common way of working with Quartz.Net. In fact, you can also write an ASP.Net MVC application that manages Quartz.Net schedulers.
Architecture. Ideally and at a high level, your MVC application will use the Quartz.Net API to talk to a Quartz.Net server that is installed as a windows service somewhere. Quartz.Net uses remoting to communicate remotely, so any limitations of using remoting apply (like it's not supported in Silverlight, etc). Quartz.Net provides a way to install it as a windows service out of the box, so there really isn't much work to be done here, other than configuring the service itself to use (in your case) an AdoJobStore, and also enabling remoting. There is some care to be taken around how to install the service properly, so if you haven't done that yet, take a look at this post.
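For reference, here is a minimal sketch of the server-side (Windows service) configuration; the port, bind name, and Oracle provider name are assumptions based on the Quartz.NET 2.x configuration scheme, so verify them against your version:

using System.Collections.Specialized;
using Quartz;
using Quartz.Impl;

NameValueCollection properties = new NameValueCollection();
properties["quartz.scheduler.instanceName"] = "ServerScheduler";
//expose the scheduler to remote clients (the MVC app) via remoting
properties["quartz.scheduler.exporter.type"] = "Quartz.Simpl.RemotingSchedulerExporter, Quartz";
properties["quartz.scheduler.exporter.port"] = "555";
properties["quartz.scheduler.exporter.bindName"] = "QuartzScheduler";
properties["quartz.scheduler.exporter.channelType"] = "tcp";
//persist jobs and triggers in Oracle through the AdoJobStore
properties["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz";
properties["quartz.jobStore.dataSource"] = "default";
properties["quartz.dataSource.default.connectionString"] = "<your Oracle connection string>";
properties["quartz.dataSource.default.provider"] = "OracleODP-20"; //pick the provider matching your Oracle client

IScheduler scheduler = new StdSchedulerFactory(properties).GetScheduler();
scheduler.Start();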
Internally, in your MVC application you'll want to get a reference to the scheduler and store it as a singleton. Then in your code you'll schedule jobs and get information about the scheduler through this unique instance. You could use something like this:
public class QuartzScheduler
{
    private readonly ISchedulerFactory _schedulerFactory;
    private IScheduler _scheduler;

    public QuartzScheduler(string server, int port, string scheduler)
    {
        Address = string.Format("tcp://{0}:{1}/{2}", server, port, scheduler);
        _schedulerFactory = new StdSchedulerFactory(getProperties(Address));

        try
        {
            _scheduler = _schedulerFactory.GetScheduler();
        }
        catch (SchedulerException)
        {
            MessageBox.Show("Unable to connect to the specified server", "Connection Error", MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
        }
    }

    public string Address { get; private set; }

    private NameValueCollection getProperties(string address)
    {
        NameValueCollection properties = new NameValueCollection();
        properties["quartz.scheduler.instanceName"] = "RemoteClient";
        properties["quartz.scheduler.proxy"] = "true";
        properties["quartz.threadPool.threadCount"] = "0";
        properties["quartz.scheduler.proxy.address"] = address;
        return properties;
    }

    public IScheduler GetScheduler()
    {
        return _scheduler;
    }
}
This code sets up your Quartz.Net client. Then, to access the remote scheduler, just call
GetScheduler()
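For example, from the MVC side; the server name, port, and scheduler name are placeholders that must match the service configuration, and MyAdHocJob is a placeholder IJob implementation:

var quartz = new QuartzScheduler("localhost", 555, "QuartzScheduler"); //create once, reuse as a singleton
IScheduler scheduler = quartz.GetScheduler();

//schedule an ad-hoc job through the remote scheduler
IJobDetail job = JobBuilder.Create<MyAdHocJob>()
    .WithIdentity("adHocJob", "webAppGroup")
    .Build();

ITrigger trigger = TriggerBuilder.Create()
    .WithIdentity("adHocTrigger", "webAppGroup")
    .StartNow()
    .Build();

scheduler.ScheduleJob(job, trigger);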
Querying
Here is some sample code to get all the jobs from the scheduler:
public DataTable GetJobs()
{
    DataTable table = new DataTable();
    table.Columns.Add("GroupName");
    table.Columns.Add("JobName");
    table.Columns.Add("JobDescription");
    table.Columns.Add("TriggerName");
    table.Columns.Add("TriggerGroupName");
    table.Columns.Add("TriggerType");
    table.Columns.Add("TriggerState");
    table.Columns.Add("NextFireTime");
    table.Columns.Add("PreviousFireTime");

    var jobGroups = GetScheduler().GetJobGroupNames();
    foreach (string group in jobGroups)
    {
        var groupMatcher = GroupMatcher<JobKey>.GroupContains(group);
        var jobKeys = GetScheduler().GetJobKeys(groupMatcher);
        foreach (var jobKey in jobKeys)
        {
            var detail = GetScheduler().GetJobDetail(jobKey);
            var triggers = GetScheduler().GetTriggersOfJob(jobKey);
            foreach (ITrigger trigger in triggers)
            {
                DataRow row = table.NewRow();
                row["GroupName"] = group;
                row["JobName"] = jobKey.Name;
                row["JobDescription"] = detail.Description;
                row["TriggerName"] = trigger.Key.Name;
                row["TriggerGroupName"] = trigger.Key.Group;
                row["TriggerType"] = trigger.GetType().Name;
                row["TriggerState"] = GetScheduler().GetTriggerState(trigger.Key);

                DateTimeOffset? nextFireTime = trigger.GetNextFireTimeUtc();
                if (nextFireTime.HasValue)
                {
                    row["NextFireTime"] = TimeZone.CurrentTimeZone.ToLocalTime(nextFireTime.Value.DateTime);
                }

                DateTimeOffset? previousFireTime = trigger.GetPreviousFireTimeUtc();
                if (previousFireTime.HasValue)
                {
                    row["PreviousFireTime"] = TimeZone.CurrentTimeZone.ToLocalTime(previousFireTime.Value.DateTime);
                }

                table.Rows.Add(row);
            }
        }
    }

    return table;
}
You can view this code on Github