I want to recalculate the "StrategyState" of some objects every second. I don't want to create a Timer for that, because the "StrategyState" property can be accessed from another thread at the same time (also, a Timer would probably be too heavy for my simple problem). I defined the StrategyState type myself this way:
public enum StrategyState
{
OnlyKill,
Working,
ClosingStage1,
ClosingStage2,
Closed
}
I'm not sure whether it is thread-safe to write such an object from one thread and read it from another.
So I was thinking of "lazy-updating" my StrategyState State field, like this:
....
if ( /* State was not updated for one second or more. */ ) {
RecalculateState();
}
switch (State) {
.... // Work
How do I test whether the state was not updated for one second or more, without adding too much latency?
I could obviously create a Stopwatch, but note that I need to update about 50 states in total, for different objects in different threads. I'm not sure I should add 50 Stopwatches to the system.
It's probably better to add one Stopwatch and share it, since I guess the Stopwatch class is likely thread-safe.
What can you suggest?
Just add a DateTime for the last evaluation time:
private DateTime m_lastStateEvaluation = DateTime.MinValue;
And then
if ((DateTime.Now - m_lastStateEvaluation).TotalSeconds >= 1)
{
// Evaluate state
m_lastStateEvaluation = DateTime.Now;
}
This won't add much time at all to the operation.
Incidentally, using a lock statement will resolve any threading issues if you do use a timer. And you could have a single timer handle all 50 objects if you want.
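A minimal sketch of that suggestion, assuming a hypothetical Strategy class with a RecalculateState method (the names are placeholders, not from the question): one System.Threading.Timer services all objects, and each object guards its state with its own lock.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

enum StrategyState { OnlyKill, Working, ClosingStage1, ClosingStage2, Closed }

// Hypothetical strategy object; RecalculateState stands in for the real evaluation.
class Strategy
{
    private readonly object _sync = new object();
    private StrategyState _state;

    public StrategyState State
    {
        get { lock (_sync) { return _state; } }
    }

    public void RecalculateState()
    {
        lock (_sync) { _state = StrategyState.Working; /* real evaluation here */ }
    }
}

class Program
{
    static void Main()
    {
        var strategies = new List<Strategy> { new Strategy(), new Strategy() };

        // One timer services all objects; each object serializes access with its lock.
        using var timer = new Timer(_ =>
        {
            foreach (var s in strategies) s.RecalculateState();
        }, null, dueTime: 0, period: 1000);

        Thread.Sleep(1500); // let the timer fire at least once
        Console.WriteLine(strategies[0].State); // Working
    }
}
```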
I've implemented it like this:
private long recalculateStateTimestamp;
private const long oneSecond = 10000000; // TimeSpan.TicksPerSecond
.....
long newTime = DateTime.Now.Ticks;
if (newTime - recalculateStateTimestamp < oneSecond)
{
return;
}
recalculateStateTimestamp = newTime;
I assume this is one of the fastest ways to implement it. It is also only partly thread-safe, but that's enough for me.
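For what it's worth, the remaining race in that check (two threads can both see a stale timestamp and recalculate twice) can be closed with Interlocked.CompareExchange, still without a lock. A sketch, with a counter standing in for the real recalculation:

```csharp
using System;
using System.Threading;

class LazyState
{
    private long _lastTicks; // ticks of the last recalculation
    private static readonly long OneSecond = TimeSpan.TicksPerSecond;

    public int Recalculations; // for demonstration only

    public void MaybeRecalculate()
    {
        long now = DateTime.UtcNow.Ticks;  // UtcNow avoids DST jumps that Now is subject to
        long last = Interlocked.Read(ref _lastTicks);
        if (now - last < OneSecond) return;

        // Only the thread that wins the compare-exchange performs the recalculation.
        if (Interlocked.CompareExchange(ref _lastTicks, now, last) == last)
        {
            Recalculations++; // stand-in for RecalculateState()
        }
    }
}

class Program
{
    static void Main()
    {
        var s = new LazyState();
        s.MaybeRecalculate(); // first call: timestamp is 0, so it recalculates
        s.MaybeRecalculate(); // within the same second: skipped
        Console.WriteLine(s.Recalculations); // 1
    }
}
```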
I would like to run code conditionally based on the time of day. The code is within a while loop in several worker tasks that run throughout my program's lifetime. Performing the comparison on every loop iteration seems wasteful; is there a more efficient way to get the desired result?
To restate my question in code, I am asking if there is a more efficient way to duplicate this functionality, perhaps using timers or some other scheduling mechanism:
while( workerNotCanceled )
{
var time = DateTime.Now;
if (time.Hour > 8 && time.Hour < 16)
DoWork();
}
I don't know what DoWork() does, but it's quite probable that the time comparison is negligible compared to it. You will only get a lot of time comparisons after 16:00 and before 8:00. If the loop is entered outside the time frame, you could block the thread until it should do its work: if it is after 16:00, it sleeps until 8:00 the next day; if it's before 8:00, it sleeps until 8:00 the same day. Note that when you use Thread.Sleep you will be unable to cancel the loop outside the working time frame. If you want to do that, you can use a cancellation token.
while( workerNotCanceled )
{
var time = DateTime.Now;
if (time.Hour >= 8 && time.Hour < 16)
DoWork();
else if(time.Hour >= 16)
{
DateTime nextStart = DateTime.Now.Date.AddDays(1).AddHours(8);
Thread.Sleep(nextStart - DateTime.Now);
}
else
{
DateTime nextStart = DateTime.Now.Date.AddHours(8);
Thread.Sleep(nextStart - DateTime.Now);
}
}
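A sketch of the cancellation-token variant mentioned above: waiting on the token's WaitHandle instead of calling Thread.Sleep lets the sleep be interrupted immediately when the worker is cancelled. The DoWork body and the timings here are placeholders.

```csharp
using System;
using System.Threading;

class Program
{
    static void DoWork() => Console.WriteLine("working");

    static void Main()
    {
        using var cts = new CancellationTokenSource();
        var token = cts.Token;

        var worker = new Thread(() =>
        {
            while (!token.IsCancellationRequested)
            {
                var time = DateTime.Now;
                if (time.Hour >= 8 && time.Hour < 16)
                {
                    DoWork();
                    break; // demo only: run once and stop
                }
                DateTime nextStart = time.Hour >= 16
                    ? time.Date.AddDays(1).AddHours(8)
                    : time.Date.AddHours(8);
                var delay = nextStart - DateTime.Now;
                if (delay < TimeSpan.Zero) delay = TimeSpan.Zero;
                // WaitOne returns true if cancellation was signalled during the wait,
                // so the loop exits promptly even outside working hours.
                if (token.WaitHandle.WaitOne(delay))
                    break;
            }
        });
        worker.Start();

        Thread.Sleep(100);
        cts.Cancel(); // wakes the worker immediately if it is waiting
        worker.Join();
    }
}
```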
So, I figure there would be different ways to approach this. If I had to do it, I probably would go for some variation of the strategy pattern and divide & conquer.
Divide & Conquer:
Separate the switching / time checking from the actual job. So, I'd find some way to exchange a "do the job" strategy for a "do nothing" or "drop job" strategy.
That could be done using Quartz or a similar scheduling framework inside the app, which would trigger "switch off" and "switch on" jobs at the appropriate times.
The same could be done with cron or the Windows Task Scheduler triggering an API in your app. This opens up an attack vector, though.
Strategy pattern
That's relatively simple here: You'd have an interface with two implementations. The "switch on/off" jobs then simply "plug in" the appropriate implementation.
Example:
interface IWorker
{
void DoWork(); // maybe with an argument
}
class ActiveWorker : IWorker
{
public void DoWork()
{
workerService.DoWork(); // replace with whatever is appropriate for you.
}
}
class InactiveWorker : IWorker
{
public void DoWork()
{
// Maybe just do nothing?
// Maybe actively drop a request?
}
}
In your consumer, you'd then have
IWorker worker; // Setup initially based on DateTime.Now
void ConsumingLooper()
{
//...
worker.DoWork(); // Based on which impl. is set to 'worker' will
// either handle or drop
}
Don't forget to add measures to handle the case where the looper wants to call worker.DoWork() while it is being switched out. I left that out for brevity, and also because there are many different ways to achieve it; you may want to pick your favorite.
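One way to handle that case is to publish the swap atomically: reference assignments are atomic in .NET, and Volatile.Read/Write keep the looper from caching a stale reference. A minimal sketch, reusing the class names from the example above:

```csharp
using System;
using System.Threading;

interface IWorker { void DoWork(); }

class ActiveWorker : IWorker
{
    public void DoWork() => Console.WriteLine("handled");
}

class InactiveWorker : IWorker
{
    public void DoWork() { /* actively drop the request */ }
}

class Consumer
{
    private IWorker _worker = new InactiveWorker();

    // The "switch on/off" jobs swap the implementation atomically.
    public void SetWorker(IWorker w) => Volatile.Write(ref _worker, w);

    // The looper always sees either the old or the new worker, never a torn value.
    public void LoopOnce() => Volatile.Read(ref _worker).DoWork();
}

class Program
{
    static void Main()
    {
        var consumer = new Consumer();
        consumer.LoopOnce();                    // dropped silently
        consumer.SetWorker(new ActiveWorker()); // the "switch on" job fires
        consumer.LoopOnce();                    // prints "handled"
    }
}
```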
I'm trying to implement the Parallel.ForEach pattern and track progress, but I'm missing something regarding locking. The following example counts to 10,000 when threadCount = 1, but not when threadCount > 1. What is the correct way to do this?
class Program
{
static void Main()
{
var progress = new Progress();
var ids = Enumerable.Range(1, 10000);
var threadCount = 2;
Parallel.ForEach(ids, new ParallelOptions { MaxDegreeOfParallelism = threadCount }, id => { progress.CurrentCount++; });
Console.WriteLine("Threads: {0}, Count: {1}", threadCount, progress.CurrentCount);
Console.ReadKey();
}
}
internal class Progress
{
private Object _lock = new Object();
private int _currentCount;
public int CurrentCount
{
get
{
lock (_lock)
{
return _currentCount;
}
}
set
{
lock (_lock)
{
_currentCount = value;
}
}
}
}
The usual problem with calling something like count++ from multiple threads (which share the count variable) is that this sequence of events can happen:
Thread A reads the value of count.
Thread B reads the value of count.
Thread A increments its local copy.
Thread B increments its local copy.
Thread A writes the incremented value back to count.
Thread B writes the incremented value back to count.
This way, the value written by thread A is overwritten by thread B, so the value is actually incremented only once.
Your code adds locks around operations 1, 2 (get) and 5, 6 (set), but that does nothing to prevent the problematic sequence of events.
What you need to do is to lock the whole operation, so that while thread A is incrementing the value, thread B can't access it at all:
lock (progressLock)
{
progress.CurrentCount++;
}
If you know that you will only need incrementing, you could create a method on Progress that encapsulates this.
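A sketch of that encapsulation: Progress grows an Increment method, so the read-modify-write happens under a single lock acquisition.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class Progress
{
    private readonly object _lock = new object();
    private int _currentCount;

    public int CurrentCount
    {
        get { lock (_lock) { return _currentCount; } }
    }

    // The read, the add, and the write all happen under one lock acquisition.
    public void Increment()
    {
        lock (_lock) { _currentCount++; }
    }
}

class Program
{
    static void Main()
    {
        var progress = new Progress();
        Parallel.ForEach(Enumerable.Range(1, 10000),
            new ParallelOptions { MaxDegreeOfParallelism = 4 },
            id => progress.Increment());
        Console.WriteLine(progress.CurrentCount); // 10000
    }
}
```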
Old question, but I think there is a better answer.
You can report progress using Interlocked.Increment(ref progress) that way you do not have to worry about locking the write operation to progress.
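A minimal sketch of that approach, using a plain int since Interlocked cannot target a property:

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        int progress = 0;
        Parallel.ForEach(Enumerable.Range(1, 10000),
            id => Interlocked.Increment(ref progress)); // atomic, no lock needed
        Console.WriteLine(progress); // 10000
    }
}
```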
The easiest solution would actually have been to replace the property with a field and lock on a shared object:
lock (progressLock) { ++progress.CurrentCount; }
(I personally prefer the look of the preincrement over the postincrement, as the "++." thing clashes in my mind! But the postincrement would of course work the same.)
This would have the additional benefit of decreasing overhead and contention, since updating a field is faster than calling a method that updates a field.
Of course, encapsulating it as a property can have advantages too. IMO, since field and property syntax is identical, the ONLY advantage of using a property over a field, when the property is autoimplemented or equivalent, is when you have a scenario where you may want to deploy one assembly without having to build and deploy dependent assemblies anew. Otherwise, you may as well use faster fields! If the need arises to check a value or add a side effect, you simply convert the field to a property and build again. Therefore, in many practical cases, there is no penalty to using a field.
However, we are living in a time where many development teams operate dogmatically, and use tools like StyleCop to enforce their dogmatism. Such tools, unlike coders, are not smart enough to judge when using a field is acceptable, so invariably the "rule that is simple enough for even StyleCop to check" becomes "encapsulate fields as properties", "don't use public fields" et cetera...
Remove lock statements from properties and modify Main body:
object sync = new object();
Parallel.ForEach(ids, new ParallelOptions {MaxDegreeOfParallelism = threadCount},
id =>
{
lock(sync)
progress.CurrentCount++;
});
The issue here is that ++ is not atomic: one thread can read and increment the value between another thread reading the value and storing the (now incorrect) incremented value. This is probably compounded by the fact that there's a property wrapping your int.
e.g.
Thread 1     Thread 2
reads 5      .
.            reads 5
.            writes 6
writes 6!    .
The locks around the setter and getter don't help, as there's nothing to stop the two lock blocks themselves from being entered out of order.
Ordinarily, I'd suggest using Interlocked.Increment, but you can't use this with a property.
Instead, you could expose _lock and have the lock block be around the progress.CurrentCount++; call.
It is better to store the result of any database or file-system operation in a local buffer variable instead of locking it; locking reduces performance.
I have a class with a bunch of methods in. For example
private int forFunction(String exceptionFileList, FileInfo z, String compltetedFileList, String sourceDir)
{
atPDFNumber++;
exceptionFileList = "";
int blankImage = 1;
int pagesMissing = 0;
//delete the images currently in the folder
deleteCreatedImages();
//Get the amount of pages in the pdf
int numberPDFPage = numberOfPagesPDF(z.FullName);
//Convert the pdf to images on the users pc
convertToImage(z.FullName);
//Check the images for blank pages
blankImage = testPixels(@"C:\temp", z.FullName);
//Check if the conversion couldnt convert a page because of an error
pagesMissing = numberPDFPage - numberOfFiles;
return pagesMissing; // the method is declared int, so return the result
}
Now what I'm trying to do is access that class from a thread... but not just one thread, maybe about 5 threads, to speed up processing, since one is a bit slow.
Now in my mind, that's going to be chaos... I mean one thread changing variables while another thread is busy with them, etc., and locking each and every variable in all of those methods... not going to have a good time...
So what I'm proposing, though I don't know if it's the right way, is this:
public void MyProc()
{
if (this method is open, 4 other threads must wait)
{
mymethod(var,var);
}
if (this method is open, 4 other threads must wait and done with first method)
{
mymethod2();
}
if (this method is open, 4 other threads must wait and done with first and second method)
{
mymethod3();
}
if (this method is open, 4 other threads must wait and done with first and second and third method)
{
mymethod4();
}
}
Would this be the right way to approach the problem of multiple threads accessing multiple methods at the same time?
The threads will only access the class 5 times, and no more, since the workload will be divided equally.
Yes, that is one of your options. The conditional expressions you have, however, should be replaced with lock statements, or even better, make the method synchronized:
[MethodImpl(MethodImplOptions.Synchronized)]
private int forFunction(String exceptionFileList, FileInfo z, String compltetedFileList, String sourceDir)
It is not really a conditional, because there is nothing conditional going on here: the next thread must wait, and then it must go on. It literally sleeps without executing any instructions and is then woken up from the outside.
Note also that when you are worried about variables getting messed up during parallel execution of a non-synchronized method, this only applies to member variables (class fields). It does not apply to local variables declared inside the method, as each thread has its own copy of those.
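A small illustration of both points, using a hypothetical Processor class: the Synchronized attribute serializes the whole method (for instance methods it behaves like wrapping the body in lock (this)), and the local variable is per-thread either way.

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

class Processor
{
    private int _processed; // shared field: this is what needs the synchronization

    // Equivalent to lock (this) around the body for instance methods
    // (lock (typeof(Processor)) for static ones).
    [MethodImpl(MethodImplOptions.Synchronized)]
    public void Process()
    {
        int local = _processed; // locals are per-thread regardless of synchronization
        _processed = local + 1;
    }

    public int Processed => _processed;
}

class Program
{
    static void Main()
    {
        var p = new Processor();
        Parallel.For(0, 1000, i => p.Process());
        Console.WriteLine(p.Processed); // 1000, since the method is fully serialized
    }
}
```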
I have a property definition in a class that contains only counters. It must be thread-safe, and it isn't, because the get and set are not under the same lock. How do I do that?
private int _DoneCounter;
public int DoneCounter
{
get
{
return _DoneCounter;
}
set
{
lock (sync)
{
_DoneCounter = value;
}
}
}
If you're looking to implement the property in such a way that DoneCounter = DoneCounter + 1 is guaranteed not to be subject to race conditions, it can't be done in the property's implementation. That operation is not atomic; it is actually three distinct steps:
Retrieve the value of DoneCounter.
Add 1
Store the result in DoneCounter.
You have to guard against the possibility that a context switch could happen in between any of those steps. Locking inside the getter or setter won't help, because that lock's scope exists entirely within one of the steps (either 1 or 3). If you want to make sure all three steps happen together without being interrupted, then your synchronization has to cover all three steps. Which means it has to happen in a context that contains all three of them. That's probably going to end up being code that does not belong to whatever class contains the DoneCounter property.
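A sketch of that caller-side synchronization, with a hypothetical Counters class: the lock lives in the code that performs the increment, so one acquisition spans all three steps.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class Counters
{
    public int DoneCounter { get; set; } // plain property, no internal locking
}

class Program
{
    static readonly object CounterLock = new object();

    static void Main()
    {
        var counters = new Counters();
        Parallel.ForEach(Enumerable.Range(1, 1000), i =>
        {
            // One lock acquisition spans the read, the add, and the write.
            lock (CounterLock)
                counters.DoneCounter = counters.DoneCounter + 1;
        });
        Console.WriteLine(counters.DoneCounter); // 1000
    }
}
```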
It is the responsibility of the person using your object to take care of thread safety. In general, no class that has read/write fields or properties can be made "thread-safe" in this manner. However, if you can change the class's interface so that setters aren't necessary, then it is possible to make it more thread-safe. For example, if you know that DoneCounter only increments and decrements, then you could re-implement it like so:
private int _doneCounter;
public int DoneCounter { get { return _doneCounter; } }
public int IncrementDoneCounter() { return Interlocked.Increment(ref _doneCounter); }
public int DecrementDoneCounter() { return Interlocked.Decrement(ref _doneCounter); }
Using the Interlocked class provides atomic operations, i.e. they are inherently thread-safe, as in this LINQPad example:
void Main()
{
var counters = new Counters();
counters.DoneCounter += 34;
var val = counters.DoneCounter;
val.Dump(); // 34
}
public class Counters
{
int doneCounter = 0;
public int DoneCounter
{
get { return Interlocked.CompareExchange(ref doneCounter, 0, 0); }
set { Interlocked.Exchange(ref doneCounter, value); }
}
}
If you're expecting not just that some threads will occasionally write to the counter at the same time, but that lots of threads will keep doing so, then you want to have several counters, at least one cache line apart from each other, and have different threads write to different counters, summing them when you need the tally.
This keeps most threads out of each other's way, which stops them from flushing each other's values out of the cores and slowing each other down. (You still need Interlocked unless you can guarantee each thread will stay separate.)
For the vast majority of cases, you just need to make sure the occasional bit of contention doesn't mess up the values, in which case Sean U's answer is better in every way (striped counters like this are slower for uncontended use).
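A rough sketch of such striped counters (the stripe count and padding here are arbitrary choices, not tuned values): each thread picks a slot from its thread ID, the slots are padded apart so they sit on different cache lines, and reads sum all slots.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Minimal striped counter: writers mostly touch distinct cache lines.
class StripedCounter
{
    private const int Stripes = 16;
    private const int Pad = 16; // 16 longs = 128 bytes between used slots
    private readonly long[] _slots = new long[Stripes * Pad];

    public void Increment()
    {
        int slot = (Environment.CurrentManagedThreadId % Stripes) * Pad;
        Interlocked.Increment(ref _slots[slot]); // still interlocked, per the caveat above
    }

    public long Sum()
    {
        long total = 0;
        for (int i = 0; i < Stripes; i++)
            total += Interlocked.Read(ref _slots[i * Pad]);
        return total;
    }
}

class Program
{
    static void Main()
    {
        var c = new StripedCounter();
        Parallel.For(0, 100_000, i => c.Increment());
        Console.WriteLine(c.Sum()); // 100000
    }
}
```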
What exactly are you trying to do with the counters? Locks don't really do much for integer properties, since reads and writes of integers are atomic with or without locking. The only benefit one gets from locks here is the addition of memory barriers; one can achieve the same effect by using Thread.MemoryBarrier() before and after reading or writing a shared variable.
I suspect your real problem is that you are trying to do something like "DoneCounter+=1", which--even with locking--would perform the following sequence of events:
Acquire lock
Get _DoneCounter
Release lock
Add one to value that was read
Acquire lock
Set _DoneCounter to computed value
Release lock
Not very helpful, since the value might change between the get and set. What would be needed would be a method that would perform the get, computation, and set without any intervening operations. There are three ways this can be accomplished:
Acquire and keep a lock during the whole operation
Use Threading.Interlocked.Increment to add a value to _Counter
Use a Threading.Interlocked.CompareExchange loop to update _Counter
Using any of these approaches, it's possible to compute a new value of _Counter based on the old value, in such a fashion that the value written is guaranteed to be based upon the value _Counter had at the time of the write.
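A sketch of the third option, the compare-exchange loop: compute the new value from a snapshot of the old one, then publish it only if no other thread changed the variable in the meantime. The "double and add one" computation is an arbitrary stand-in, to show the pattern is not limited to increments.

```csharp
using System;
using System.Threading;

class Program
{
    static int _counter;

    // Hypothetical "new value based on old value" update.
    static int UpdateCounter()
    {
        while (true)
        {
            int old = Volatile.Read(ref _counter);
            int updated = old * 2 + 1;
            // Publish only if _counter still holds the value we read;
            // otherwise another thread won, and we retry from a fresh snapshot.
            if (Interlocked.CompareExchange(ref _counter, updated, old) == old)
                return updated;
        }
    }

    static void Main()
    {
        _counter = 3;
        Console.WriteLine(UpdateCounter()); // 7
    }
}
```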
You could declare the _DoneCounter field as volatile, which guarantees that threads always see up-to-date values; note, however, that volatile alone does not make compound operations like _DoneCounter++ atomic. See this:
http://msdn.microsoft.com/en-us/library/x13ttww7%28v=vs.71%29.aspx
I'm creating a "man-in-the-middle" style application that applies network latency to the transmissions (not for malicious use, I should declare).
However, I'm having difficulty with the correct output mechanism for the data structure (LinkedList<string> buffer = new LinkedList<string>();).
What should happen:
Read data into structure from clientA.
if (buffer.First != null && buffer.Last != null)
{
buffer.AddAfter(buffer.Last, ServerRead.ReadLine().ToString());
}
else
buffer.AddFirst(ServerRead.ReadLine().ToString());
Using an individual or overall timer to track when to release the data to ClientB. (adjustable timer to adjust latency)
Timer on item in structure triggers, thus releasing the packet to clientB.
Clean up free data structure node
if (buffer.First != null)
{
clientWrite.WriteLine(buffer.First.Value.ToString());
clientWrite.Flush();
buffer.RemoveFirst();
}
However, I have been trying to use System.Windows.Forms.Timer to create a global timer that triggers a thread which handles the data output to clientB, and I'm finding this technique to be too slow, even when setting myTimer.Interval = 1. This creates a concurrency problem when clearing up the list while adding to it; the temporary solution is locking the resource, but I feel this is adding to the slow performance of the data output.
Question:
I need some ideas for a solution that can store data in a data structure and apply a timer (like an egg-timer effect) to each item stored; when that timer runs out, the item is sent on its way to the other client.
Regards, House.
The linked list will work, and it's unlikely that locking it (if done properly) will cause poor performance. That said, you'd probably be much better off using ConcurrentQueue. It's thread-safe, so you don't have to do any explicit locking.
I would suggest using System.Threading.Timer rather than the Windows Forms timer. Note, though, that you're still going to be limited to about 15 ms resolution. That is, even with a timer interval of 1, your effective delay times will be in the range of 15 to 25 ms rather than 1 ms. It's just the way the timers are implemented.
Also, since you want to delay each item for a specified period of time (which I assume is constant), you need some notion of "current time." I don't recommend using DateTime.Now or any of its variants, because the time can change. Rather, I use Stopwatch to get an application-specific time.
Also, you'll need some way to keep track of release times for the items. A class to hold the item, and the time it will be sent. Something like:
class BufferItem
{
public string Data { get; private set; }
public TimeSpan ReleaseTime { get; private set; }
public BufferItem(string d, TimeSpan ts)
{
Data = d;
ReleaseTime = ts;
}
}
Okay. Let's put it all together.
// the application clock
Stopwatch AppTime = Stopwatch.StartNew();
// Amount of time to delay an item
TimeSpan DelayTime = TimeSpan.FromSeconds(1.0);
ConcurrentQueue<BufferItem> ItemQueue = new ConcurrentQueue<BufferItem>();
// Timer will check items for release every 15 ms.
System.Threading.Timer ReleaseTimer = new System.Threading.Timer(CheckRelease, null, 15, 15);
Receiving an item:
// When an item is received:
// Compute release time and add item to buffer.
var item = new BufferItem(data, AppTime.Elapsed + DelayTime);
ItemQueue.Enqueue(item); // ConcurrentQueue uses Enqueue, not Add
The timer proc.
void CheckRelease(object state)
{
BufferItem item;
while (ItemQueue.TryPeek(out item) && item.ReleaseTime <= AppTime.Elapsed)
{
if (ItemQueue.TryDequeue(out item))
{
// send the item
}
}
}
That should perform well and you shouldn't have any concurrency issues.
If you don't like that 15 ms timer ticking all the time even when there aren't any items, you could make the timer a one-shot and have the CheckRelease method re-initialize it with the next release time after dequeuing items. Of course, you'll also have to make the receive code initialize it the first time, or whenever there aren't any items in the queue, and you'll need a lock to synchronize access to updating the timer.
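A minimal sketch of that one-shot pattern: the timer is created disarmed (Timeout.Infinite) and armed with Change only when there is something to release. The 10 ms due time and the console message are just for the demo; in the real code you'd compute nextRelease - AppTime.Elapsed.

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // Created disarmed: it never fires until Change is called.
        Timer releaseTimer = null;
        releaseTimer = new Timer(_ =>
        {
            Console.WriteLine("release items due now");
            // After dequeuing, re-arm for the next item's release time, e.g.:
            // releaseTimer.Change(nextDelayMs, Timeout.Infinite);
            // With no re-arm, the timer stays quiet until the receive code arms it.
        }, null, Timeout.Infinite, Timeout.Infinite);

        releaseTimer.Change(10, Timeout.Infinite); // arm once: fires a single time
        Thread.Sleep(200);
        releaseTimer.Dispose();
    }
}
```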