Block variable for read - C#

I have the following code:
private DateTime lastUploadActivityTime = DateTime.Now;
private void HttpSendProgress(object sender, HttpProgressEventArgs e)
{
// update variable
lastUploadActivityTime = DateTime.Now;
......
boolThreadAvailableTargetSiteActive = false;
}
// this method is executed in different thread, than method above
private void ThreadCheckAvailableTargetSite()
{
while (boolThreadAvailableTargetSiteActive)
{
if (lastUploadActivityTime.AddSeconds(5) <= DateTime.Now)
{
MessageBox.Show("BREAK");
boolThreadAvailableTargetSiteActive = false;
}
Thread.Sleep(500);
}
}
I need to block the variable lastUploadActivityTime in the first method (while lastUploadActivityTime = DateTime.Now; runs) so that the second method cannot read it (lastUploadActivityTime.AddSeconds(5) <= DateTime.Now) at the same time. How can I do that? Would a Mutex help me prevent the variable from being read?

The lock keyword ensures that one thread does not enter a critical section of code while another thread is in that section. If another thread tries to enter locked code, it will wait (block) until the lock is released. Best practice is to define a private object to lock on, or a private static object to protect data common to all instances.
private object syncLock = new object();
private DateTime lastUploadActivityTime = DateTime.Now;
private void HttpSendProgress(object sender, HttpProgressEventArgs e)
{
// update variable
lock (syncLock)
{
lastUploadActivityTime = DateTime.Now;
}
}
// this method is executed in different thread, than method above
private void ThreadCheckAvailableTargetSite()
{
while (boolThreadAvailableTargetSiteActive)
{
lock (syncLock)
{
if (lastUploadActivityTime.AddSeconds(5) <= DateTime.Now)
{
MessageBox.Show("BREAK");
boolThreadAvailableTargetSiteActive = false;
}
}
Thread.Sleep(500);
}
}

A Mutex would be overkill; use lock instead in both methods to synchronize the read and the write.
http://msdn.microsoft.com/en-us/library/c5kehkcz.aspx
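For the sake of comparison, here is a rough sketch of what protecting the same field with a Mutex could look like (reusing the field names from the code above; this is not from the original answer). It works, but a Mutex is a kernel object meant for cross-process synchronization, so it is considerably slower than lock for an in-process case like this:
private readonly Mutex mutex = new Mutex();
private DateTime lastUploadActivityTime = DateTime.Now;
private void HttpSendProgress(object sender, HttpProgressEventArgs e)
{
    mutex.WaitOne();              // acquire the mutex; blocks if another thread holds it
    try
    {
        lastUploadActivityTime = DateTime.Now;
    }
    finally
    {
        mutex.ReleaseMutex();     // always release, even if an exception is thrown
    }
}
The reading side (ThreadCheckAvailableTargetSite) would need the same WaitOne / ReleaseMutex pair around the comparison, which is exactly why lock is the simpler choice here.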


C# - How to check if Multi-threading execution has finished? [duplicate]

I have a Windows Forms app in which I check all the serial ports to see if a particular device is connected.
This is how I spin off each thread. The code below already runs off the main GUI thread.
foreach (cpsComms.cpsSerial ser in availPorts)
{
Thread t = new Thread(new ParameterizedThreadStart(lookForValidDev));
t.Start((object)ser);//start thread and pass it the port
}
I want the next line of code to wait until all the threads have finished.
I've tried using t.Join() in there, but that just processes them linearly.
List<Thread> threads = new List<Thread>();
foreach (cpsComms.cpsSerial ser in availPorts)
{
Thread t = new Thread(new ParameterizedThreadStart(lookForValidDev));
t.Start((object)ser);//start thread and pass it the port
threads.Add(t);
}
foreach(var thread in threads)
{
thread.Join();
}
Edit
I was looking back at this, and I like the following better
availPorts.Select(ser =>
{
Thread thread = new Thread(lookForValidDev);
thread.Start(ser);
return thread;
}).ToList().ForEach(t => t.Join());
Use the AutoResetEvent and ManualResetEvent Classes:
private ManualResetEvent manual = new ManualResetEvent(false);
void Main(string[] args)
{
AutoResetEvent[] autos = new AutoResetEvent[availPorts.Count];
manual.Set();
for (int i = 0; i < availPorts.Count; i++) // Count, not Count - 1, or the last port is skipped and WaitAll sees a null handle
{
AutoResetEvent Auto = new AutoResetEvent(false);
autos[i] = Auto;
var port = availPorts[i]; // copy so the lambda does not capture the loop variable i
Thread t = new Thread(() => lookForValidDev(Auto, (object)port));
t.Start();//start thread and pass it the port
}
WaitHandle.WaitAll(autos);
manual.Reset();
}
void lookForValidDev(AutoResetEvent auto, object obj)
{
try
{
manual.WaitOne();
// do something with obj
}
catch (Exception)
{
}
finally
{
auto.Set();
}
}
The simplest and safest way to do this is to use a CountdownEvent. See Albahari.
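A minimal sketch of that approach, assuming .NET 4 or later and reusing the availPorts / lookForValidDev names from the question (here lookForValidDev is called with the port directly rather than through ParameterizedThreadStart):
using (var countdown = new CountdownEvent(availPorts.Count))
{
    foreach (cpsComms.cpsSerial ser in availPorts)
    {
        var port = ser; // copy so the lambda does not capture the loop variable
        new Thread(() =>
        {
            try { lookForValidDev(port); }
            finally { countdown.Signal(); } // count down even if the check throws
        }).Start();
    }
    countdown.Wait(); // blocks until every thread has signalled
}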
Store the threads in a list as they are spawned, then iterate over the list and call Join on each. You still join them one at a time, but it does what you want: the code after the loop only runs once every thread has finished.
You can use a CountDownLatch:
public class CountDownLatch
{
private int m_remain;
private EventWaitHandle m_event;
public CountDownLatch(int count)
{
Reset(count);
}
public void Reset(int count)
{
if (count < 0)
throw new ArgumentOutOfRangeException();
m_remain = count;
m_event = new ManualResetEvent(false);
if (m_remain == 0)
{
m_event.Set();
}
}
public void Signal()
{
// The last thread to signal also sets the event.
if (Interlocked.Decrement(ref m_remain) == 0)
m_event.Set();
}
public void Wait()
{
m_event.WaitOne();
}
}
Example how to use it:
void StartThreads()
{
CountDownLatch latch = new CountDownLatch(availPorts.Count);
foreach (cpsComms.cpsSerial ser in availPorts)
{
Thread t = new Thread(new ParameterizedThreadStart(lookForValidDev));
//start thread and pass it the port and the latch
t.Start((object)new Pair(ser, latch));
}
DoSomeWork();
// wait for all the threads to signal
latch.Wait();
DoSomeMoreWork();
}
// In each thread (the latch comes from the Pair that was passed to the thread)
void NameOfRunMethod()
{
while(running)
{
// do work
}
// Signal that the thread is done running
latch.Signal();
}

Is this the right paradigm for a preemptive, exclusive function in a multithreading environment?

I am implementing a preemptive, exclusive function in a multithreaded environment: if a cancel request arrives even while the function is not running, then the next time the function runs it should know about that request and not run. I came across various ways to do this in C# using ManualResetEvent and the like (something like the answer to this question: Synchronizing a Timers.Timer elapsed method when stopping), but I was wondering whether something as simple as the code below would suffice. Are there any inadvertent bugs that I am introducing here?
bool cancel = false;
bool running = false;
object Lock = new object();
void PremptiveExclusiveFunction() {
lock(Lock) {
if(running)
return;
running = true;
}
for(int i=0; i < numIter; i++) {
lock(Lock) {
if(cancel) {
cancel = false;
running = false;
return;
}
}
// iteration code
}
lock(Lock) {
running = false;
}
}
void Stop() {
lock(Lock) {
cancel = true;
}
}
As far as I know, this seems to handle my 3 requirements:
1. ability to preempt
2. exclusivity in time, where only one copy of this function can be running
3. a cancel request not being lost because Stop is called before PreemptiveExclusiveFunction
I'd be grateful if more experienced minds could point out if I am indeed missing something.
The entire function body can be locked to eliminate the running boolean:
object @lock = new object();
volatile bool cancel = false;
void Function () {
if (!Monitor.TryEnter(@lock))
return;
try {
for (var i = 0; i < 100; i++) {
if (cancel) {
cancel = false;
return;
}
// code
}
} finally {
Monitor.Exit(@lock);
}
}
void Stop () {
cancel = true;
}
Notice the volatile keyword:
http://msdn.microsoft.com/en-us/library/vstudio/x13ttww7(v=vs.100).aspx
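To illustrate why it matters (an illustrative sketch, not part of the answer above): without volatile, the JIT is allowed to read the flag once and cache it, so in an optimized build a loop like this may never notice that Stop() set the flag:
bool cancel = false;          // no volatile: the read below may be hoisted out of the loop
void Spin()
{
    while (!cancel)           // can effectively become an infinite loop after optimization
    {
        // work
    }
}
// Declaring the field as "volatile bool cancel" forces a fresh read on every
// iteration and makes the write performed by Stop() visible to the looping thread.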

How to stop thread when Lock is encountered?

I have the following code that starts some threads:
List<Stuff> lNewStuff = new List<Stuff>();
// populate lNewStuff
for (int i = 0; i < accounts.Length; i++)
{
Account aTemp = _lAccounts.Find(item => item.ID == accounts[i]);
Thread tTemp = new Thread(() => aTemp.ExecuteMe(lNewStuff));
tTemp.Start();
}
Then in the Account class you have the ExecuteMe method that has a lock:
public class Account
{
private Object lockThis = new Object();
public void ExecuteMe(List<Stuff> lNewStuff)
{
//Ensure only one thread at a time can run this code
lock (lockThis)
{
//main code processing
}
}
}
Now, sometimes the thread starts with lNewStuff == null, since there is sometimes no new Stuff for that Account ID. This is normal for this project. The thread should always try to run, but when lNewStuff is null I want the thread to die rather than wait when a lock is encountered.
So specifically:
If lNewStuff is null and there is a lock then terminate the thread. (how to do this?)
If lNewStuff is null and there is no lock then run normally (does this already)
If lNewStuff is not null and there is a lock then wait for the lock to finish (does this already)
if lNewStuff is not null and there is no lock then run normally (does this already)
When lNewStuff is null you could use Monitor.TryEnter and only continue if the lock is granted:
public class Account
{
private readonly object lockThis = new object();
public void ExecuteMe(List<Stuff> lNewStuff)
{
bool lockTaken = false;
try
{
if (lNewStuff == null)
{
// non-blocking - only takes the lock if it's available
Monitor.TryEnter(lockThis, ref lockTaken);
}
else
{
// blocking - equivalent to the standard lock statement
Monitor.Enter(lockThis, ref lockTaken);
}
if (lockTaken)
{
// main code processing
}
}
finally
{
if (lockTaken)
{
Monitor.Exit(lockThis);
}
}
}
}
"If lNewStuff is null and there is a lock then terminate the thread. (how to do this?)"
Do you still want to start a thread at all if lNewStuff is null? If the answer is no, the solution is very simple.
List<Stuff> lNewStuff = new List<Stuff>();
// populate lNewStuff
for (int i = 0; i < accounts.Length; i++)
{
Account aTemp = _lAccounts.Find(item => item.ID == accounts[i]);
if(lNewStuff!=null)
{
Thread tTemp = new Thread(() => aTemp.ExecuteMe(lNewStuff));
tTemp.Start();
}
}
Also, you should use a single lock object:
private Object lockThis = new Object(); // this creates a new lock object for every Account instance, so it does not protect the critical section across accounts
Change this to
private static Object lockThis = new Object();
Just to be different:
public class Foo : IDisposable
{
private Semaphore _blocker;
public Foo(int maximumAllowed)
{
_blocker = new Semaphore(maximumAllowed, maximumAllowed); // honour the requested capacity instead of hard-coding 1
}
public void Dispose()
{
if(_blocker != null)
{
_blocker.Close(); // Close releases the handle; a separate Dispose call is redundant
}
}
public void LimitedSpaceAvailableActNow(object id)
{
var gotIn = _blocker.WaitOne(0);
if(!gotIn)
{
Console.WriteLine("ID:{0} - No room!", id);
return;
}
Console.WriteLine("ID:{0} - Got in! Taking a nap...", id);
try
{
Thread.Sleep(1000);
}
finally
{
_blocker.Release(); // always give the slot back, even if the work throws
}
}
}
Test rig:
void Main()
{
using(var foo = new Foo(1))
{
Enumerable.Range(0, 10)
.Select(t =>
Tuple.Create(t, new Thread(foo.LimitedSpaceAvailableActNow)))
.ToList()
.AsParallel()
.ForAll(t => t.Item2.Start(t.Item1));
Console.ReadLine();
}
}
Output:
ID:4 - Got in! Taking a nap...
ID:8 - No room!
ID:0 - No room!
ID:7 - No room!
ID:2 - No room!
ID:6 - No room!
ID:5 - No room!
ID:9 - No room!
ID:1 - No room!
ID:3 - No room!

Performant locking pattern

I'm working on the below code and am trying to make it as fast as it can be.
Basically the execute method gets called every time an event gets triggered in the system. What I am testing for is to see whether x number of minutes have passed since a reduce was last performed. If x number of minutes have passed then we should execute the task.
Since the events can be triggered from any thread and happen quite quickly, I thought that triggering the task out side of the lock (even though its a task) would be better than having it in the lock.
Does anyone have any feedback on how this can be improved?
public class TriggerReduce
{
private readonly object _lock = new object();
private readonly int _autoReduceInterval = 5;
private DateTime _lastTriggered;
public void Execute(object sender, EventArgs e)
{
var currentTime = DateTime.Now;
if (currentTime.Subtract(_lastTriggered).Duration().TotalMinutes > _autoReduceInterval)
{
var shouldRun = false;
lock (_lock)
{
if (currentTime.Subtract(_lastTriggered).Duration().TotalMinutes > _autoReduceInterval)
{
_lastTriggered = currentTime;
shouldRun = true;
}
}
if (shouldRun)
{
Task.Factory.StartNew(() =>
{
//Trigger reduce which is a long running task
}, TaskCreationOptions.LongRunning);
}
}
}
}
Oh, I wouldn't do that! Put the 'if (currentTime' and the 'shouldRun' stuff back inside the lock.
Don't change/check state outside a lock - it's sure to screw up.
In this case, a thread that has just set 'shouldRun' to true may have its decision reversed by another thread that enters and sets 'shouldRun' back to false before getting stuck on the lock. The first thread then never reaches 'StartNew', and the later thread won't either, because the first thread already set _lastTriggered to the current time.
OTOH :) since 'shouldRun' is a local variable and not a field, it is not shared state. Only one thread can get inside the lock, double-check the interval, and update the _lastTriggered time.
I don't like this kind of double-check but, at the moment, can't see why it would not work.
Would it be helpful to avoid the lock and use Interlocked.Exchange instead?
E.g.
private long _lastTriggeredTicks;
private DateTime lastTriggered
{
get
{
var l = Interlocked.Read( ref _lastTriggeredTicks );
return new DateTime( l );
}
set
{
Interlocked.Exchange( ref _lastTriggeredTicks, value.Ticks ); // store the Ticks, not the DateTime struct itself
}
}
From what I understand Interlocked is faster than a lock statement.
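As a sketch of that idea (my own illustration, assuming the fields from the question with the DateTime stored as ticks; not part of the original answer), the whole double-check can be made lock-free with Interlocked.CompareExchange:
private long _lastTriggeredTicks = DateTime.MinValue.Ticks;
private readonly int _autoReduceInterval = 5;
public void Execute(object sender, EventArgs e)
{
    var nowTicks = DateTime.Now.Ticks;
    var last = Interlocked.Read(ref _lastTriggeredTicks);
    if (nowTicks - last <= TimeSpan.FromMinutes(_autoReduceInterval).Ticks)
        return; // the interval has not elapsed yet
    // Only the thread that wins this compare-and-swap starts the task;
    // a concurrent caller sees a changed value, loses the race and returns.
    if (Interlocked.CompareExchange(ref _lastTriggeredTicks, nowTicks, last) == last)
    {
        Task.Factory.StartNew(() =>
        {
            //Trigger reduce which is a long running task
        }, TaskCreationOptions.LongRunning);
    }
}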
public class TriggerReduce //StartNew is fast and returns fast
{
private readonly object _lock = new object();
private readonly int _triggerIntervalMins = 5;
private DateTime _nextTriggerAt = DateTime.MinValue;
private bool inTrigger = false;
public void Execute(object sender, EventArgs e)
{
lock (_lock)
{
var currentTime = DateTime.Now;
if (_nextTriggerAt > currentTime)
return;
_nextTriggerAt = currentTime.AddMinutes(_triggerIntervalMins);//runs X mins after last task started running (or longer if task took longer than X mins)
}
Task.Factory.StartNew(() =>
{
//Trigger reduce which is a long running task
}, TaskCreationOptions.LongRunning);
}
}
public class TriggerReduce // variant for when you want to wait for the long-running task to finish before recalculating the next trigger time
{
private readonly object _lock = new object();
private readonly int _triggerIntervalMins = 5;
private DateTime _nextTriggerAt = DateTime.MinValue;
private bool inTrigger = false;
public void Execute(object sender, EventArgs e)
{
DateTime currentTime;
lock (_lock)
{
currentTime = DateTime.Now;
if (inTrigger || (_nextTriggerAt > currentTime))
return;
inTrigger = true;
}
Task.Factory.StartNew(() =>
{
//Trigger reduce which is a long running task
}, TaskCreationOptions.LongRunning).Wait(); // wait for the task so the next trigger time is based on its completion
lock (_lock)
{
inTrigger = false;
_nextTriggerAt = DateTime.Now.AddMinutes(_triggerIntervalMins);//runs X mins after task finishes
//_nextTriggerAt = currentTime.AddMinutes(_triggerIntervalMins);//runs X mins after last task started running (or longer if task took longer than X mins)
}
}
}
Use Monitor.TryEnter.
if (Monitor.TryEnter(_lock))
{
try
{
if (currentTime.Subtract(_lastTriggered).Duration().TotalMinutes >
_autoReduceInterval)
{
_lastTriggered = currentTime;
shouldRun = true;
}
}
finally
{
Monitor.Exit(_lock);
}
}
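For context, here is a sketch of how that fragment could slot into the question's Execute method (reusing the question's fields; this completion is mine, not part of the original answer):
public void Execute(object sender, EventArgs e)
{
    var currentTime = DateTime.Now;
    var shouldRun = false;
    if (Monitor.TryEnter(_lock))     // skip the check entirely if another thread holds the lock
    {
        try
        {
            if (currentTime.Subtract(_lastTriggered).Duration().TotalMinutes > _autoReduceInterval)
            {
                _lastTriggered = currentTime;
                shouldRun = true;
            }
        }
        finally
        {
            Monitor.Exit(_lock);
        }
    }
    if (shouldRun)
    {
        Task.Factory.StartNew(() =>
        {
            //Trigger reduce which is a long running task
        }, TaskCreationOptions.LongRunning);
    }
}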
I think you already have a fairly reasonable approach. The big problem is that you are accessing _lastTriggered outside of the lock. The double-checked locking idiom is not going to work here. Simplify your code so that it looks like this.
public void Execute(object sender, EventArgs e)
{
var currentTime = DateTime.Now;
var shouldRun = false;
lock (_lock)
{
TimeSpan span = currentTime - _lastTriggered;
if (span.TotalMinutes > _autoReduceInterval)
{
_lastTriggered = currentTime;
shouldRun = true;
}
}
if (shouldRun)
{
Task.Factory.StartNew(() =>
{
//Trigger reduce which is a long running task
}, TaskCreationOptions.LongRunning);
}
}

Executing method by timer inside threadpool

In my multi-threaded web app I invoke SomeMethod on the ThreadPool, and it can throw an exception. Suppose I want to make a few more attempts if it throws on the first call. I decided to use a System.Timers.Timer inside my action for the retries. Can I use the code below? Is it safe?
static void Caller()
{
ThreadPool.QueueUserWorkItem(action =>
{
try
{
SomeMethod();
Console.WriteLine("Done.");
}
catch
{
var t = new System.Timers.Timer(1000);
t.Start();
var count = 0;
t.Elapsed += new System.Timers.ElapsedEventHandler((o, a) =>
{
var timer = o as System.Timers.Timer;
count++;
var done = false;
Exception exception = null;
try
{
Console.WriteLine(count);
SomeMethod();
done = true;
}
catch (Exception ex)
{
exception = ex;
}
if (done || count == 10)
{
Console.WriteLine(String.Format("Stopped. done: {0}, count: {1}", done, count));
t.Stop();
if (!done) throw exception;
}
});
}
});
Thread.Sleep(100000);
}
static void SomeMethod()
{
var x = 1 / new Random().Next(0, 2);
}
You should Dispose each Timer after use, that's for sure. But you could probably do something even simpler:
static void Main()
{
ThreadPool.QueueUserWorkItem(action =>
{
while (TrySomeMethod() == false)
Thread.Sleep(1000);
});
// wait here
Console.Read();
}
static bool TrySomeMethod()
{
try
{
SomeMethod();
return true;
}
catch
{
return false;
}
}
I do not think that using a timer in a thread pool thread is a safe approach. I may be wrong, but the timer will raise its Elapsed event after the thread pool work item has already finished executing, and in that case the exception will be thrown there. Also, you are not disposing the timer, which leads to resource leaks. If you explain why you need the timer, I will try to find a safe solution...
I don't see the point of using a Timer within a ThreadPool queue, because the ThreadPool would spawn a new thread, and the Timer would spawn a new thread as well.
I would just have a loop within that delegate, because it would not block the main thread either way. Groo showed a good example of that.
