Conditional logic based on time of day? - c#

I would like to run code conditionally based on the time of day. The code is inside a while loop in several worker tasks that run throughout my program's lifetime. Performing the comparison on every loop iteration seems wasteful; is there a more efficient way to get the desired result?
To restate my question in code, I am asking whether there is a more efficient way to duplicate this functionality, perhaps using timers or some other scheduling mechanism:
while (workerNotCanceled)
{
    var time = DateTime.Now;
    if (time.Hour > 8 && time.Hour < 16)
        DoWork();
}

I don't know what DoWork() does, but it's quite probable that the time comparison is negligible compared to it. You will only see a lot of time comparisons after 16:00 and before 8:00. If the loop is entered outside the working time frame, you could block the thread until it should do its work: if it is after 16:00, sleep until 8:00 the next day; if it is before 8:00, sleep until 8:00 the same day. Note that with Thread.Sleep you will be unable to cancel the loop outside the working time frame. If you need that, you can use a cancellation token instead.
while (workerNotCanceled)
{
    var time = DateTime.Now;
    if (time.Hour >= 8 && time.Hour < 16) // >= 8 so 8:00 itself counts as working time (avoids a negative sleep below)
    {
        DoWork();
    }
    else if (time.Hour >= 16)
    {
        // Sleep until 8:00 the next day.
        DateTime nextStart = DateTime.Now.Date.AddDays(1).AddHours(8);
        Thread.Sleep(nextStart - DateTime.Now);
    }
    else
    {
        // Sleep until 8:00 the same day.
        DateTime nextStart = DateTime.Now.Date.AddHours(8);
        Thread.Sleep(nextStart - DateTime.Now);
    }
}
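If you do go the cancellation-token route, a minimal sketch could look like the following; it assumes a CancellationTokenSource named cts takes over the role of the workerNotCanceled flag, which is my assumption, not part of the original code:
while (!cts.Token.IsCancellationRequested)
{
    var time = DateTime.Now;
    if (time.Hour >= 8 && time.Hour < 16)
    {
        DoWork();
    }
    else
    {
        // Sleep until the next 8:00, but wake up early if cancellation is requested.
        DateTime nextStart = time.Hour >= 16
            ? DateTime.Now.Date.AddDays(1).AddHours(8)
            : DateTime.Now.Date.AddHours(8);
        cts.Token.WaitHandle.WaitOne(nextStart - DateTime.Now);
    }
}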

So, I figure there would be different ways to approach this. If I had to do it, I probably would go for some variation of the strategy pattern and divide & conquer.
Divide & Conquer:
Separate the switching / time checking from the actual job. So I'd find some way to exchange a "do the job" strategy for a "do nothing" or "drop the job" strategy.
That could be done using Quartz or a similar scheduling framework inside the app, which would trigger "switch off" and "switch on" jobs at the appropriate times.
The same could be done with cron or the Windows Task Scheduler, which could trigger an API in your app. This opens up an attack vector, though.
Strategy pattern
That's relatively simple here: You'd have an interface with two implementations. The "switch on/off" jobs then simply "plug in" the appropriate implementation.
Example:
interface IWorker
{
    void DoWork(); // maybe with an argument
}

class ActiveWorker : IWorker
{
    public void DoWork()
    {
        workerService.DoWork(); // replace with whatever is appropriate for you
    }
}

class InactiveWorker : IWorker
{
    public void DoWork()
    {
        // Maybe just do nothing?
        // Maybe actively drop a request?
    }
}
In your consumer, you'd then have
IWorker worker; // Set up initially based on DateTime.Now

void ConsumingLooper()
{
    //...
    worker.DoWork(); // Depending on which implementation is assigned to 'worker',
                     // this will either handle or drop the request
}
Don't forget to add measures to handle the case where the looper wants to call worker.DoWork() while it is being switched out. I left that out for brevity, and there are many different ways to achieve it; you may want to pick your favorite.
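As one possible measure, here is a minimal sketch of a small switch object that the scheduled jobs can flip while the looper keeps calling through a single reference; the WorkerSwitch name and its members are my own invention, not part of any framework:
class WorkerSwitch : IWorker
{
    private readonly IWorker active;
    private readonly IWorker inactive;
    private volatile IWorker current;

    public WorkerSwitch(IWorker active, IWorker inactive)
    {
        this.active = active;
        this.inactive = inactive;
        current = inactive;
    }

    // Called by the scheduled "switch on" / "switch off" jobs.
    public void SwitchOn() { current = active; }
    public void SwitchOff() { current = inactive; }

    // The looper always goes through the switch; reads and writes of a
    // reference field are atomic, and volatile keeps the latest value visible.
    public void DoWork() { current.DoWork(); }
}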

Related

What is the best approach to schedule events?

I have an application in which the user is able to create different notifications, like sticky notes, and set their starting times. When the user presses the start button, a timer starts and these reminders should pop up at the times they were set for. I've searched for other answers, like this one, but the problem here is that the notifications' times are all different.
So what is the best way to schedule the events that activate the notifications?
I can think of two possible ways with their Pros and Cons:
Run a single DispatcherTimer that ticks every second, checks whether the time for any notification has come, and pops it up. Pros: a single DispatcherTimer instance. Cons: ticking every second and checking all notifications is an overhead.
Create a DispatcherTimer for each notification and let each handle its own time. Pros: every timer ticks just once, to pop its notification. Cons: too many timers is an overhead and may be hard to control.
Am I on the right track? Which of the two approaches is better, resource wise? Is there a third better way I am overlooking?
EDIT: If it makes any difference, the notifications should also auto close after some user-defined time and repeat at regular user-defined intervals.
I've used many methods to schedule events in C# applications (threads, timers, the Quartz API...), and I think that the Quartz.NET API -link- is the best tool you'll find (for me, at least). It's easy and simple to use.
Example of your job class:
public class HelloJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        Console.WriteLine("Greetings from HelloJob!");
    }
}
Example from the internet:
// Instantiate the Quartz.NET scheduler
var schedulerFactory = new StdSchedulerFactory();
var scheduler = schedulerFactory.GetScheduler();

// Instantiate the JobDetail object passing in the type of your
// class. Your class needs to implement the IJob interface.
var job = new JobDetail("job1", "group1", typeof(HelloJob));

// Instantiate a trigger using the basic cron syntax.
// Example: run at 1 AM every Monday - Friday.
var trigger = new CronTrigger(
    "trigger1", "group1", "job1", "group1", "0 0 1 ? * MON-FRI");

// Add the job to the scheduler and start it so the trigger can fire
scheduler.AddJob(job, true);
scheduler.ScheduleJob(trigger);
scheduler.Start();
You'll find a helpful code example in the QuickStart guide here.
Regards.
If the notification system is going to be used inside a single process, continue with a single DispatcherTimer. Make sure the timer is set to the nearest notification, and each time a new notification is created or the timer fires, change the interval to the next nearest notification.
That way you avoid doing processing on every tick.
E.g.: the first time somebody creates a notification, point the timer at that time. If someone creates another notification that is due before the first one fires, change the timer to the second one; if the second time is after the first, change the timer after dispatching the first notification's callback. If this is threaded, you may need to work hard to get thread safety.
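Here is a rough sketch of that single-timer idea; the Notification class with a Time property and a Show() method is a placeholder I made up, not part of your code:
// Assumes: using System.Windows.Threading; using System.Linq;
DispatcherTimer timer = new DispatcherTimer();
List<Notification> pending = new List<Notification>();

void Init()
{
    timer.Tick += OnTick;
}

void Add(Notification n)
{
    pending.Add(n);
    Arm(); // re-arm in case the new notification is now the nearest one
}

void Arm()
{
    timer.Stop();
    if (pending.Count == 0) return;
    TimeSpan due = pending.Min(x => x.Time) - DateTime.Now;
    timer.Interval = due > TimeSpan.Zero ? due : TimeSpan.FromMilliseconds(1);
    timer.Start();
}

void OnTick(object sender, EventArgs e)
{
    DateTime now = DateTime.Now;
    foreach (var n in pending.Where(x => x.Time <= now).ToList())
    {
        n.Show();          // pop the notification
        pending.Remove(n);
    }
    Arm();                 // schedule the next nearest notification
}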
If notifications are needed across processes, use the Windows Task Scheduler, which already knows how to run timers and call your code on time. You may need some sort of IPC (WCF net.pipe, MSMQ, etc.) to deliver the notifications.

Implement timeout for function/block

I would like to do something like this pseudo-code:
try for new Timespan(0,0,4) {
    // maximum execution time of this block: 4 seconds
    result = longRunningFunction(parameter);
    doSthWithResult(result);
    ...
} catch (TimeOutException) {
    Console.WriteLine("TimeOut occured");
}
Is such a construct available in C#, and if not, how would I implement a behaviour that allows trying to execute a function/block for a certain amount of time?
If that helps: I am looking for a solution for ASP.NET Web API (although I could use the timeout in a WinForms app as well).
If you want the longRunningFunction() itself to stop, then you need to implement logic in that method to do so. How to do that depends on the exact implementation of the method, which you haven't provided, so that would be unanswerable given your current question.
However, in many cases it's sufficient to simply abandon an operation, letting it run to completion on its own but simply ignoring the result. You might call that "getting on with your life". :)
If that's the case here, then something like this should work for you:
Task<T> resultTask = Task.Run(() => longRunningFunction(parameter));

// maximum execution time of this block: 4 seconds
await Task.WhenAny(resultTask, Task.Delay(4000));

if (resultTask.IsCompleted)
{
    doSthWithResult(resultTask.Result);
}
else
{
    Console.WriteLine("TimeOut occurred");
}
Replace T in the resultTask declaration with whatever the actual return type for longRunningFunction() is.
Note that the above is opportunistic: even if the long-running operation takes longer than 4 seconds and the Task.Delay(4000) wins the race, as long as the operation completes by the time your code reaches the if (resultTask.IsCompleted) check, it will still be considered a success. If you want a strict cut-off, ignoring the result if the Task.Delay(4000) completes first even if the operation has finished by the time you actually check, you can look at which task finished first:
Task winner = await Task.WhenAny(resultTask, Task.Delay(4000));

if (resultTask == winner)
{
    doSthWithResult(resultTask.Result);
}
else
    ...
Finally: even if you do need longRunningFunction() to stop, you can use the above technique and then interrupt the operation in the else clause where you report the time-out (via whatever mechanism is appropriate in your case; again, without the actual code it's not possible to say exactly what that would be).
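One hedged way to do that, if you can change the method's signature, is cooperative cancellation; the token-accepting overload of longRunningFunction below is an assumption, and your real method would have to observe the token itself:
using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(4)))
{
    try
    {
        // The operation must check cts.Token periodically and bail out when
        // cancellation is requested; otherwise it keeps running anyway.
        var result = await Task.Run(() => longRunningFunction(parameter, cts.Token), cts.Token);
        doSthWithResult(result);
    }
    catch (OperationCanceledException)
    {
        Console.WriteLine("TimeOut occurred");
    }
}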

Recalculating the "state" of 50 objects every second

I want to recalculate the "StrategyState" of some object every second. I do not want to create a Timer for that, because at the same time, from another thread, I may access the "StrategyState" property (also, a Timer would probably be too heavyweight for my simple problem). I defined the StrategyState type myself this way:
public enum StrategyState
{
    OnlyKill,
    Working,
    ClosingStage1,
    ClosingStage2,
    Closed
}
I'm not sure whether it will be "thread-safe" to write such an object from one thread and read it from another thread.
So I was thinking of "lazy-updating" my StrategyState State field, like this:
....
if ( /* State was not updated for one second or more. */ ) {
    RecalculateState();
}
switch (State) {
    .... // Work
How do I test whether the state was not updated for one second or more, without adding too much latency?
I can obviously create a Stopwatch, but note that I need to update about 50 states in total, for different objects in different threads. I'm not sure I should add 50 Stopwatches to the system.
Probably it's better to add one Stopwatch and share it, because I guess the Stopwatch class is likely thread-safe.
What can you suggest?
Just add a DateTime for the last evaluation time:
private DateTime m_lastStateEvaluation = DateTime.MinValue;
And then
if ((DateTime.Now - m_lastStateEvaluation).TotalSeconds >= 1)
{
    // Evaluate state, then record when it was done.
    m_lastStateEvaluation = DateTime.Now;
}
This won't add too much time at all to the operation.
Incidentally, using a lock statement will resolve any threading issues if you use a timer. And you could have a single timer handle all 50 objects if you want.
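A rough sketch of that single-timer-plus-lock suggestion; the strategies list and the RecalculateState method on each object are assumptions for illustration:
private readonly object stateLock = new object();
private readonly List<Strategy> strategies = new List<Strategy>(); // your ~50 objects
private System.Threading.Timer evaluationTimer;

private void StartEvaluationTimer()
{
    // One timer re-evaluates every strategy once per second on a pool thread.
    evaluationTimer = new System.Threading.Timer(_ =>
    {
        lock (stateLock)
        {
            foreach (var s in strategies)
                s.RecalculateState();
        }
    }, null, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(1));
}

// Readers take the same stateLock before looking at a strategy's State.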
I've implemented it like this:
private long recalculateStateTimestamp;
private const long oneSecond = 10000000; // one second, expressed in ticks
.....
long newTime = DateTime.Now.Ticks;
if (newTime - recalculateStateTimestamp < oneSecond)
{
    return;
}
recalculateStateTimestamp = newTime;
I assume this is one of the fastest ways to implement it. Also, it is partly thread-safe (enough for me).
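For what it's worth, a fully thread-safe variant of the same check can be sketched with Interlocked; this is my own variation, not part of the original code:
long newTime = DateTime.UtcNow.Ticks;
long lastTime = Interlocked.Read(ref recalculateStateTimestamp);
if (newTime - lastTime < TimeSpan.TicksPerSecond)
{
    return;
}
// Only the thread that wins the compare-and-swap performs the recalculation;
// the other threads see the updated timestamp and skip it.
if (Interlocked.CompareExchange(ref recalculateStateTimestamp, newTime, lastTime) == lastTime)
{
    RecalculateState();
}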

When is Parallel.Invoke useful?

I'm just diving into learning about the Parallel class in the 4.0 Framework and am trying to understand when it would be useful. At first after reviewing some of the documentation I tried to execute two loops, one using Parallel.Invoke and one sequentially like so:
static void Main()
{
    DateTime start = DateTime.Now;
    Parallel.Invoke(BasicAction, BasicAction2);
    DateTime end = DateTime.Now;
    var parallel = end.Subtract(start).TotalSeconds;

    start = DateTime.Now;
    BasicAction();
    BasicAction2();
    end = DateTime.Now;
    var sequential = end.Subtract(start).TotalSeconds;

    Console.WriteLine("Parallel:{0}", parallel.ToString());
    Console.WriteLine("Sequential:{0}", sequential.ToString());
    Console.Read();
}

static void BasicAction()
{
    for (int i = 0; i < 10000; i++)
    {
        Console.WriteLine("Method=BasicAction, Thread={0}, i={1}", Thread.CurrentThread.ManagedThreadId, i.ToString());
    }
}

static void BasicAction2()
{
    for (int i = 0; i < 10000; i++)
    {
        Console.WriteLine("Method=BasicAction2, Thread={0}, i={1}", Thread.CurrentThread.ManagedThreadId, i.ToString());
    }
}
There is no noticeable difference in time of execution here, or am I missing the point? Is it more useful for asynchronous invocations of web services or...?
EDIT: I replaced DateTime with Stopwatch and replaced the writes to the console with a simple addition operation.
UPDATE - Big Time Difference Now: Thanks for clearing up the problems I had when I involved the console.
static void Main()
{
    Stopwatch s = new Stopwatch();
    s.Start();
    Parallel.Invoke(BasicAction, BasicAction2);
    s.Stop();
    var parallel = s.ElapsedMilliseconds;

    s.Reset();
    s.Start();
    BasicAction();
    BasicAction2();
    s.Stop();
    var sequential = s.ElapsedMilliseconds;

    Console.WriteLine("Parallel:{0}", parallel.ToString());
    Console.WriteLine("Sequential:{0}", sequential.ToString());
    Console.Read();
}

static void BasicAction()
{
    Thread.Sleep(100);
}

static void BasicAction2()
{
    Thread.Sleep(100);
}
The test you are doing is nonsensical; you are testing to see whether something that you cannot perform in parallel is faster if you perform it in parallel.
Console.WriteLine handles synchronization for you, so it will always act as though it is running on a single thread.
From here:
...call the SetIn, SetOut, or SetError method, respectively. I/O
operations using these streams are synchronized, which means multiple
threads can read from, or write to, the streams.
Any advantage that the parallel version gains from running on multiple threads is lost through the marshaling done by the console. In fact I wouldn't be surprised to see that all the thread switching actually means that the parallel run would be slower.
Try doing something else in the actions (a simple Thread.Sleep would do) that can be processed by multiple threads concurrently and you should see a large difference in the run times. Large enough that the inaccuracy of using DateTime as your timing mechanism will not matter too much.
It's not a matter of execution time. The output to the console is determined by how the actions are scheduled to run. To get an accurate execution time, you should be using Stopwatch. At any rate, you are using Console.WriteLine, so it will appear as though everything is in one thread of execution. Anything you have tried to attain by using Parallel.Invoke is lost by the nature of Console.WriteLine.
On something simple like that the run times will be the same. What Parallel.Invoke is doing is running the two methods at the same time.
In the first case you'll have lines spat out to the console in a mixed up order.
Method=BasicAction2, Thread=6, i=9776
Method=BasicAction, Thread=10, i=9985
// <snip>
Method=BasicAction, Thread=10, i=9999
Method=BasicAction2, Thread=6, i=9777
In the second case you'll have all the BasicAction lines before the BasicAction2 lines.
What this shows you is that the two methods are running at the same time.
In the ideal case (if the number of delegates equals the number of parallel threads and there are enough CPU cores), the duration of the operations becomes MAX(AllDurations) instead of SUM(AllDurations) (where AllDurations is a list of each delegate's execution time, like {1 sec, 10 sec, 20 sec, 5 sec}). In the less ideal case it moves in that direction.
It's useful when you don't care about the order in which the delegates are invoked, but you do care that execution blocks until every delegate has completed; so yes, it can be a situation where you need to gather data from various sources before you can proceed (they could be web services or other kinds of sources).
Parallel.For can be used much more often, I think. For Parallel.Invoke it's pretty much required that you have different tasks and each takes a substantial amount of time to execute, and I guess if you don't have an idea of the possible range of execution times (which is true for web services), Invoke will shine the most.
Maybe your static constructor needs to build two independent dictionaries for your type to use; you can run the methods that fill them in parallel using Invoke() and roughly halve the time if they both take about the same time, for example.
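A hypothetical sketch of that last idea; the Catalog class and the LoadPrices/LoadNames helpers are made up for illustration:
// Assumes: using System.Collections.Generic; using System.Threading.Tasks;
class Catalog
{
    static readonly Dictionary<int, decimal> Prices = new Dictionary<int, decimal>();
    static readonly Dictionary<int, string> Names = new Dictionary<int, string>();

    static Catalog()
    {
        // Each delegate fills its own dictionary, so no locking is needed, and
        // Parallel.Invoke blocks until both delegates have completed.
        Parallel.Invoke(
            () => LoadPrices(Prices),
            () => LoadNames(Names));
    }

    static void LoadPrices(Dictionary<int, decimal> target) { /* e.g. read a file or call a service */ }
    static void LoadNames(Dictionary<int, string> target) { /* e.g. read a file or call a service */ }
}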

Output individual data from structure based on timer

I'm creating a "man-in-the-middle" style application that applies network latency to the transmissions (not for malicious use, I should declare).
However, I'm having difficulty with the correct output mechanism for the data structure (LinkedList<string> buffer = new LinkedList<string>();).
What should happen:
Read data into structure from clientA.
if (buffer.First != null && buffer.Last != null)
{
    buffer.AddAfter(buffer.Last, ServerRead.ReadLine().ToString());
}
else
    buffer.AddFirst(ServerRead.ReadLine().ToString());
Use an individual or overall timer to track when to release the data to clientB (an adjustable timer, to adjust the latency).
When the timer for an item in the structure fires, release that packet to clientB.
Clean up the freed data structure node:
if (buffer.First != null)
{
    clientWrite.WriteLine(buffer.First.Value.ToString());
    clientWrite.Flush();
    buffer.RemoveFirst();
}
I have been trying to use System.Windows.Forms.Timer to create a global timer that triggers a thread which handles the data output to clientB. However, I'm finding this technique to be too slow, even when setting myTimer.Interval = 1;. This creates a concurrency problem between clearing up the list and adding to it; the temporary solution is locking the resource, but I feel this adds to the slow performance of the data output.
Question:
I need some ideas for a solution that can store data in a data structure and apply a timer (like an egg-timer effect) to the stored data; when that timer runs out, the data is sent on its way to the other client.
Regards, House.
The linked list will work, and it's unlikely that locking it (if done properly) will cause poor performance. Still, you'd probably be much better off using a ConcurrentQueue<T>. It's thread-safe, so you don't have to do any explicit locking.
I would suggest using System.Threading.Timer rather than the Windows Forms timer. Note, though, that you're still going to be limited to about 15 ms resolution. That is, even with a timer interval of 1, your effective delay times will be in the range of 15 to 25 ms rather than 1 ms. It's just the way the timers are implemented.
Also, since you want to delay each item for a specified period of time (which I assume is constant), you need some notion of "current time." I don't recommend using DateTime.Now or any of its variants, because the time can change. Rather, I use Stopwatch to get an application-specific time.
Also, you'll need some way to keep track of release times for the items. A class to hold the item, and the time it will be sent. Something like:
class BufferItem
{
    public string Data { get; private set; }
    public TimeSpan ReleaseTime { get; private set; }

    public BufferItem(string d, TimeSpan ts)
    {
        Data = d;
        ReleaseTime = ts;
    }
}
Okay. Let's put it all together.
// the application clock
Stopwatch AppTime = Stopwatch.StartNew();

// Amount of time to delay an item
TimeSpan DelayTime = TimeSpan.FromSeconds(1.0);

ConcurrentQueue<BufferItem> ItemQueue = new ConcurrentQueue<BufferItem>();

// Timer will check items for release every 15 ms.
System.Threading.Timer ReleaseTimer = new System.Threading.Timer(CheckRelease, null, 15, 15);
Receiving an item:
// When an item is received:
// compute its release time and add it to the buffer.
var item = new BufferItem(data, AppTime.Elapsed + DelayTime);
ItemQueue.Enqueue(item);
The timer proc.
void CheckRelease(object state)
{
    BufferItem item;
    // Release every item whose release time has passed.
    while (ItemQueue.TryPeek(out item) && item.ReleaseTime <= AppTime.Elapsed)
    {
        if (ItemQueue.TryDequeue(out item))
        {
            // send the item
        }
    }
}
That should perform well and you shouldn't have any concurrency issues.
If you don't like that 15 ms timer ticking all the time even when there aren't any items, you can make the timer a one-shot and have the CheckRelease method re-initialize it with the next release time after dequeuing items. Of course, you'll also have to make the receive code initialize it the first time, or when there aren't any items in the queue. You'll need a lock to synchronize access when updating the timer.
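A minimal sketch of that one-shot variant; it assumes the ReleaseTimer above is created disarmed (dueTime and period of Timeout.Infinite) and that a timerLock object guards the re-arming, both of which are my assumptions:
void ScheduleNextRelease()
{
    lock (timerLock)
    {
        BufferItem next;
        if (ItemQueue.TryPeek(out next))
        {
            TimeSpan due = next.ReleaseTime - AppTime.Elapsed;
            if (due < TimeSpan.Zero) due = TimeSpan.Zero;
            // One-shot: fire once at the next release time, then stay idle
            // until CheckRelease (or the receive code) calls this method again.
            ReleaseTimer.Change(due, Timeout.InfiniteTimeSpan);
        }
        // If the queue is empty, leave the timer disarmed; the receive code
        // calls ScheduleNextRelease after enqueuing the next item.
    }
}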
