Does Timer.Change() ever return false? - c#

The .NET System.Threading Timer class has several overloaded Change() methods that return "true if the timer was successfully updated; otherwise, false."
Ref: http://msdn.microsoft.com/en-us/library/yz1c7148.aspx
Does this method ever actually return false? What would cause this to return false?

Joe Duffy (the development lead, architect, and founder of the Parallel Extensions to the .NET Framework team at Microsoft) explains in Concurrent Programming on Windows, p. 373:
Note that although Change is typed as returning a bool, it will actually never return anything but true. If there is a problem changing the timer, such as the target object already having been deleted, an exception will be thrown.

This can in fact return false if the unmanaged extern ChangeTimerNative returns false. However, this is awfully unlikely.
Take note of Microsoft's code:
bool status = false;
bool bLockTaken = false;

// prepare here to prevent threadabort from occurring which could
// destroy m_lock state. lock(this) can't be used due to critical
// finalizer and thinlock/syncblock escalation.
RuntimeHelpers.PrepareConstrainedRegions();
try
{
}
finally
{
    do
    {
        if (Interlocked.CompareExchange(ref m_lock, 1, 0) == 0)
        {
            bLockTaken = true;
            try
            {
                if (timerDeleted != 0)
                    throw new ObjectDisposedException(null, Environment.GetResourceString("ObjectDisposed_Generic"));
                status = ChangeTimerNative(dueTime, period);
            }
            finally
            {
                m_lock = 0;
            }
        }
        Thread.SpinWait(1); // yield to processor
    }
    while (!bLockTaken);
}

return status;
PLEASE NOTE that ChangeTimerNative calls the ChangeTimerQueueTimer Windows API function, so you can read that documentation to get a feel for how it might fail.

On checking the managed source, the only case in which it returns false is if the AppDomain timer (created on demand if one does not already exist), represented by the private class AppDomainTimerSafeHandle, has SafeHandle.IsInvalid set to true.
Since AppDomainTimerSafeHandle inherits from SafeHandleZeroOrMinusOneIsInvalid, IsInvalid comes from that base class: if the unmanaged infrastructure, when asked to create the timer, hands back a handle that is zero or minus one, the handle is considered invalid, exactly as the base class name suggests.
All cases point to this being extremely unlikely.
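In practice, then, a false return can be treated as a rare failure path and handled defensively, with the disposed case surfacing as an exception as Duffy describes. A minimal sketch (the class, field names, and intervals are illustrative, not from the posts above):

using System;
using System.Threading;

class Poller : IDisposable
{
    private readonly Timer _timer;

    public Poller()
    {
        // Start disabled; Restart() turns the timer on.
        _timer = new Timer(_ => Console.WriteLine("tick"), null, Timeout.Infinite, Timeout.Infinite);
    }

    public void Restart(TimeSpan dueTime, TimeSpan period)
    {
        try
        {
            // Change almost always returns true; per the answers above,
            // problems such as a deleted target surface as exceptions instead.
            if (!_timer.Change(dueTime, period))
            {
                Console.WriteLine("Timer.Change reported failure (extremely unlikely).");
            }
        }
        catch (ObjectDisposedException)
        {
            // The timer was already disposed; ignore or recreate as appropriate.
        }
    }

    public void Dispose() => _timer.Dispose();
}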

Related

Alternative to a monitor for checking for the existence of a record, and writing it if it doesn't exist

So I have a bit of C# code that looks like the below (simplified for the purpose of the question, any bugs are from me making these changes). This code can be called from multiple threads or contexts, in an asynchronous fashion. The whole purpose of this is to make sure that if a record already exists, it is used, and if it doesn't it gets created. May or may not be great design, but this works as expected.
var timeout = TimeSpan.FromMilliseconds(500);
bool lockTaken = false;
try
{
    Monitor.TryEnter(m_lock, timeout, ref lockTaken); // m_lock declared statically above
    if (lockTaken)
    {
        var myDBRecord = _DBContext.MyClass.SingleOrDefault(x => x.ForeignKeyId1 == ForeignKeyId1
                                                              && x.ForeignKeyId2 == ForeignKeyId2);
        if (myDBRecord == null)
        {
            myDBRecord = new MyClass
            {
                ForeignKeyId1 = ForeignKeyId1,
                ForeignKeyId2 = ForeignKeyId2
                // ...datapoints
            };
            _DBContext.MyClass.Add(myDBRecord);
            _DBContext.SaveChanges();
        }
    }
    else
    {
        throw new Exception("Can't get lock");
    }
}
finally
{
    if (lockTaken)
    {
        Monitor.Exit(m_lock);
    }
}
The problem is that if a lot of requests come in, they can overwhelm the monitor, timing out if they have to wait too long. While the timeout for the lock could certainly be shorter, what is the preferred approach, if any, to addressing this type of problem? Anything that tries to see whether the monitored code needs to be entered would itself have to be part of that atomic operation.
I would suggest that you get rid of the monitor altogether and instead handle the duplicate key exception. You have to handle the condition where you are trying to insert a duplicate value anyway, so why not do so directly?
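A minimal sketch of that approach, assuming Entity Framework and a unique constraint on the (ForeignKeyId1, ForeignKeyId2) pair; the exact exception handling depends on your provider, so treat this as illustrative rather than the asker's actual code:

// Assumes a unique index/constraint exists on (ForeignKeyId1, ForeignKeyId2).
var myDBRecord = _DBContext.MyClass.SingleOrDefault(x => x.ForeignKeyId1 == ForeignKeyId1
                                                       && x.ForeignKeyId2 == ForeignKeyId2);
if (myDBRecord == null)
{
    try
    {
        _DBContext.MyClass.Add(new MyClass
        {
            ForeignKeyId1 = ForeignKeyId1,
            ForeignKeyId2 = ForeignKeyId2
            // ...datapoints
        });
        _DBContext.SaveChanges();
    }
    catch (DbUpdateException)
    {
        // Another caller won the race and inserted the row first;
        // re-read it instead of treating this as a failure.
        myDBRecord = _DBContext.MyClass.Single(x => x.ForeignKeyId1 == ForeignKeyId1
                                                 && x.ForeignKeyId2 == ForeignKeyId2);
    }
}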

c# lock function during async file write [duplicate]

I want to start some new threads each for one repeating operation. But when such an operation is already in progress, I want to discard the current task. In my scenario I need very current data only - dropped data is not an issue.
In MSDN I found the Mutex class, but as I understand it, it waits for its turn, blocking the current thread. I also want to ask: does something already exist in the .NET Framework that does the following:
Is some method M already being executed?
If so, return (and let me increase some counter for statistics)
If not, start method M in a new thread
The lock(someObject) statement, which you may have come across, is syntactic sugar around Monitor.Enter and Monitor.Exit.
However, if you use the monitor in this more verbose way, you can also use Monitor.TryEnter which allows you to check if you'll be able to get the lock - hence checking if someone else already has it and is executing code.
So instead of this:
var lockObject = new object();
lock (lockObject)
{
    // do some stuff
}
try this (option 1):
int _alreadyBeingExecutedCounter;
var lockObject = new object();
if (Monitor.TryEnter(lockObject))
{
    // you'll only end up here if you got the lock when you tried to get it - otherwise you'll never execute this code.
    // do some stuff

    // call exit to release the lock
    Monitor.Exit(lockObject);
}
else
{
    // didn't get the lock - someone else was executing the code above - so I don't need to do any work!
    Interlocked.Increment(ref _alreadyBeingExecutedCounter);
}
(you'll probably want to put a try..finally in there to ensure the lock is released)
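For completeness, here is option 1 again with that try..finally added, so the lock is released even if the protected code throws:

if (Monitor.TryEnter(lockObject))
{
    try
    {
        // do some stuff
    }
    finally
    {
        // always release the lock, even if the work above throws
        Monitor.Exit(lockObject);
    }
}
else
{
    // didn't get the lock - someone else was executing the code above
    Interlocked.Increment(ref _alreadyBeingExecutedCounter);
}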
or dispense with the explicit lock altogether and do this
(option 2)
private int _inUseCount;

public void MyMethod()
{
    if (Interlocked.Increment(ref _inUseCount) == 1)
    {
        // do some stuff
    }
    Interlocked.Decrement(ref _inUseCount);
}
[Edit: in response to your question about this]
No - don't use this to lock on. Create a privately scoped object to act as your lock.
Otherwise you have this potential problem:
public class MyClassWithLockInside
{
    public void MethodThatTakesLock()
    {
        lock (this)
        {
            // do some work
        }
    }
}
public class Consumer
{
    private static MyClassWithLockInside _instance = new MyClassWithLockInside();

    public void ThreadACallsThis()
    {
        lock (_instance)
        {
            // Having taken a lock on our instance of MyClassWithLockInside,
            // do something long running
            Thread.Sleep(6000);
        }
    }

    public void ThreadBCallsThis()
    {
        // If thread B calls this while thread A is still inside the lock above,
        // this method will block as it tries to get a lock on the same object
        // ["this" inside the class = _instance outside]
        _instance.MethodThatTakesLock();
    }
}
In the above example, some external code has managed to disrupt the internal locking of our class just by taking out a lock on something that was externally accessible.
Much better to create a private object that you control, and that no one outside your class has access to, in order to avoid these sorts of problems; this includes not using this, or the type itself (typeof(MyClassWithLockInside)), for locking.
One option would be to work with a reentrancy sentinel:
You could define an int field (initialized to 0), update it via Interlocked.Increment on entering the method, and only proceed if the result is 1. At the end, just do an Interlocked.Decrement.
Another option:
From your description it seems that you have a producer-consumer scenario...
For this case it might be helpful to use something like BlockingCollection, as it is thread-safe and mostly lock-free...
Another option would be to use ConcurrentQueue or ConcurrentStack...
You might find some useful information on the following site (the PDF is also downloadable - I recently downloaded it myself). The Advanced Threading chapters on Suspend and Resume or Aborting may be what you are interested in.
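As a rough sketch of the BlockingCollection suggestion, here is a single-consumer worker that drops new submissions while one is still queued; the class and member names are illustrative, and note that an item can still be queued while a previous one is executing, so this only approximates the "skip if already running" behaviour:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class DropIfBusyWorker
{
    // Bounded to one item: if a work item is already waiting, new ones are dropped.
    private readonly BlockingCollection<Action> _queue =
        new BlockingCollection<Action>(boundedCapacity: 1);

    private int _droppedCount; // statistics, as in the question

    public DropIfBusyWorker()
    {
        // Single consumer loop; GetConsumingEnumerable blocks until work arrives.
        Task.Run(() =>
        {
            foreach (var work in _queue.GetConsumingEnumerable())
                work();
        });
    }

    public void Submit(Action work)
    {
        // TryAdd returns false immediately when the bounded collection is full.
        if (!_queue.TryAdd(work))
            Interlocked.Increment(ref _droppedCount);
    }
}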
You should use the Interlocked class's atomic operations for best performance, since they don't require system-level synchronization (any "standard" primitive needs it, and that involves system-call overhead).
// Simple non-reentrant mutex without ownership; easy to extend to support
// these features (just set an owner after acquiring the lock - compare a Thread
// reference with Thread.CurrentThread, for example - check for matching identity,
// and add a counter for reentrancy).
// Can't use bool because it's not supported by CompareExchange.
private int _lock;

public bool TryLock()
{
    // if (Interlocked.Increment(ref _inUseCount) == 1)
    // That kind of code is buggy, since the counter can change between the
    // increment returning and the condition check - the increment is atomic,
    // this "if" isn't. Use CompareExchange instead.
    // Checks if the value is 0, then changes it to 1 atomically, and returns
    // the original value; returns true if this thread successfully took the lock.
    return Interlocked.CompareExchange(ref _lock, 1, 0) == 0;
}

public bool Release()
{
    // Returns true if the lock was occupied; false if it was already free.
    return Interlocked.CompareExchange(ref _lock, 0, 1) == 1;
}
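A hypothetical usage sketch for the question's scenario (the method and counter names are illustrative):

private int _alreadyBeingExecutedCounter; // statistics, as in the question

public void RunIfNotAlreadyRunning()
{
    if (!TryLock())
    {
        // Someone else is already executing; just count the skipped call and return.
        Interlocked.Increment(ref _alreadyBeingExecutedCounter);
        return;
    }
    try
    {
        // do the repeating operation here
    }
    finally
    {
        Release();
    }
}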

What happens if I Monitor.Enter conditionally while another thread is in the critical section without a lock?

I'm attempting to reimplement functionality from a system class (Lazy<T>) and I found this unusual bit of code. I get the basic idea. The first thread to try for a value performs the calculations. Any threads that try while that's happening get locked at the gate, wait until release, and then go get the cached value. Any later calls notice the sentinel value and don't bother with the locks any more.
bool lockWasTaken = false;
var obj = Volatile.Read<object>(ref this._locker);
object returnValue = null;
try
{
    if (obj != SENTINEL_VALUE)
    {
        Monitor.Enter(obj, ref lockWasTaken);
    }
    if (this.cachedValue != null) // always true after code has run once
    {
        returnValue = this.cachedValue;
    }
    else // only happens on the first thread to lock and enter
    {
        returnValue = SomeCalculations();
        this.cachedValue = returnValue;
        Volatile.Write<object>(ref this._locker, SENTINEL_VALUE);
    }
    return returnValue;
}
finally
{
    if (lockWasTaken)
    {
        Monitor.Exit(obj);
    }
}
But let's say, after a change in the code, that another method resets this._locker to its original value and then goes in to lock and recalculate the cached value. While it does this, another thread happens to be picking up the cached value, so it's inside the locked section, but without a lock. What happens? Does it just execute normally while the thread with the lock also runs in parallel?
While it does this, another thread happened to be picking up the cached value, so it's inside the locked section, but without a lock. What happens? Does it just execute normally while the thread with the lock also goes in parallel?
Yes, it'll just execute normally.
That being said, this code looks like it could be replaced entirely by using Lazy<T>. The Lazy<T> class provides a thread-safe way to handle lazy instantiation of data, which appears to be the goal of this code.
Basically, the entire code could be replaced by:
// Have a field like the following:
Lazy<object> cachedValue = new Lazy<object>(() => SomeCalculations());
// Code then becomes:
return cachedValue.Value;

Checking if an object exists after calling Activator.GetObject

I'm developing a project with passive replication where servers exchange messages among themselves. The locations of each server are well-known by every other server.
So, it may happen that when a server comes up, it will check the other servers, which may not have come up yet. When I call Activator.GetObject, is invoking a method on the object and expecting an IOException (as in the example below) the only way to find out that the other servers are down?
try
{
    MyType replica = (MyType)Activator.GetObject(
        typeof(IMyType),
        "tcp://localhost:" + location + "/Server");
    replica.ping();
}
catch (IOException) {} // server is down
I do this and it works most of the time (even though it is slow), but sometimes it blocks in a method called NegotiateStream.ProcessRead during the process, and I can't understand why...
When a server is down, the timeout has always been slow for me (using a TcpChannel, which doesn't let you set the timeout properly in .NET Remoting). Below is a workaround showing how I use my Ping function (it's likely a bit more complex than your needs require, so I'll explain the parts that matter for you):
[System.Diagnostics.DebuggerHidden] // ignore the annoying breaks when we get exceptions here.
internal static bool Ping<T>(T svr)
{
    // Check type T for a defined Ping function
    if (svr == null) return false;
    System.Reflection.MethodInfo PingFunc = typeof(T).GetMethod("Ping");
    if (PingFunc == null) return false;

    // Create a new thread to call ping, and create a timeout of 5 seconds
    TimeSpan timeout = TimeSpan.FromSeconds(5);
    Exception pingexception = null;
    System.Threading.Thread ping = new System.Threading.Thread(
        delegate()
        {
            try
            {
                // just call the ping function
                // use svr.Ping() in most cases
                // PingFunc.Invoke is used in my case because I use
                // reflection to determine if the Ping function is
                // defined in type T
                PingFunc.Invoke(svr, null);
            }
            catch (Exception ex)
            {
                pingexception = ex;
            }
        }
    );
    ping.Start(); // start the ping thread.

    if (ping.Join(timeout)) // wait for the thread to return for the time specified by timeout
    {
        // if the ping thread returned and no exception was thrown, we know the connection is available
        if (pingexception == null)
            return true;
    }
    // if the ping thread times out... return false
    return false;
}
Hopefully the comments explain what I do here, but I'll give you a breakdown of the whole function. If you're not interested, just skip down to where I explain the ping thread.
DebuggerHidden Attribute
I set the DebuggerHidden attribute because when debugging, exceptions can be thrown constantly in the ping thread, and they are expected. It is easy enough to comment this out should debugging this function become necessary.
Why I use reflection and a generic type
The 'svr' parameter is expected to be a type with a Ping function. In my case, I have a few different remotable interfaces implemented on the server with a common Ping function. In this way, I can just call Ping(svr) without having to cast or specify a type (unless the remote object is instantiated as an 'object' locally). Basically, this is just for syntactical convenience.
The Ping Thread
You can use whatever logic you want to determine an acceptable timeout, in my case, 5 seconds is good. I create a TimeSpan 'timeout' with a value of 5 seconds, an Exception pingexception, and create a new thread that tries to call 'svr.Ping()', and sets 'pingexception' to whatever exception is thrown when calling 'svr.Ping()'.
Once I call 'ping.Start()', I immediately use the boolean method ping.Join(TimeSpan) to wait for the thread to return successfully, or move on if the thread doesn't return within the specified amount of time. However, if the thread finished executing but an exception was thrown, we still don't want Ping to return true because there was a problem communicating with the remote object. This is why I use the 'pingexception' to make sure that no exceptions occurred when calling svr.Ping(). If 'pingexception' is null at the end, then I know I'm safe to return true.
Oh and to answer the question you originally asked (....sometimes it blocks on a method called NegotiateStream.ProcessRead during the process, and I can't understand why...), I have never been able to figure out the timeout issues with .NET Remoting, so this method is what I've baked and cleaned up for our .NET Remoting needs.
I've used an improved version of this with generics:
internal static TResult GetRemoteProperty<T, TResult>(string Url, System.Linq.Expressions.Expression<Func<T, TResult>> expr)
{
    T remoteObject = GetRemoteObject<T>(Url);
    System.Exception remoteException = null;
    TimeSpan timeout = TimeSpan.FromSeconds(5);

    System.Threading.Tasks.Task<TResult> t = new System.Threading.Tasks.Task<TResult>(() =>
    {
        try
        {
            if (expr.Body is System.Linq.Expressions.MemberExpression)
            {
                System.Reflection.MemberInfo memberInfo = ((System.Linq.Expressions.MemberExpression)expr.Body).Member;
                System.Reflection.PropertyInfo propInfo = memberInfo as System.Reflection.PropertyInfo;
                if (propInfo != null)
                    return (TResult)propInfo.GetValue(remoteObject, null);
            }
        }
        catch (Exception ex)
        {
            remoteException = ex;
        }
        return default(TResult);
    });
    t.Start();

    if (t.Wait(timeout))
        return t.Result;
    throw new NotSupportedException();
}
internal static T GetRemoteObject<T>(string Url)
{
    return (T)Activator.GetObject(typeof(T), Url);
}
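A hypothetical usage sketch of those helpers; the interface, property, and URL are illustrative rather than taken from the original post:

// Assumes IMyType exposes a Status property on the remote server.
string url = "tcp://localhost:" + location + "/Server";
try
{
    string status = GetRemoteProperty<IMyType, string>(url, s => s.Status);
    Console.WriteLine("Server is up, status: " + status);
}
catch (NotSupportedException)
{
    // The call did not complete within the 5 second timeout; treat the server as down.
}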

What is this unusual code in ThreadPool?

I was using Reflector to peruse some of the source for the .NET ThreadPool, when it showed this:
private static bool QueueUserWorkItemHelper(WaitCallback callBack, object state, ref StackCrawlMark stackMark, bool compressStack)
{
    bool flag = true;
    if (callBack == null)
    {
        throw new ArgumentNullException("WaitCallback");
    }
    EnsureVMInitialized();
    if (ThreadPoolGlobals.useNewWorkerPool)
    {
        try
        {
            return flag;
        }
        finally
        {
            QueueUserWorkItemCallback callback = new QueueUserWorkItemCallback(callBack, state, compressStack, ref stackMark);
            ThreadPoolGlobals.workQueue.Enqueue(callback, true);
            flag = true;
        }
    }
    // code below here removed
}
The try/finally block struck me as very unidiomatic C#. Why write it like this? What is the difference if you got rid of the try/finally and moved the return to the end?
I understand how Reflector works and that this might not be the original source. If you think that's the case, can you suggest what the original source might have been?
Microsoft has published the source to .NET - though I still use Reflector due to easier browsing. This is the actual code snippet from .NET 4.0.
//
// If we are able to create the workitem, we need to get it in the queue without being interrupted
// by a ThreadAbortException.
//
try { }
finally
{
    QueueUserWorkItemCallback tpcallBack = new QueueUserWorkItemCallback(callBack, state, compressStack, ref stackMark);
    ThreadPoolGlobals.workQueue.Enqueue(tpcallBack, true);
    success = true;
}
Actually, this pattern of an empty try block with the real code in the finally block is described in Jeffrey Richter's book "CLR via C#".
The thing is that if something goes wrong and the thread is aborted, finally blocks are guaranteed to execute - or at least they are far more likely to execute than try blocks. For more details, look at the section of that book which covers exceptions and error handling.
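A minimal sketch of that idiom applied to ordinary user code (the class, queue, and method names are illustrative): because the runtime does not inject an asynchronous ThreadAbortException while a finally block is executing, the enqueue and the bookkeeping after it run as a unit.

using System;
using System.Collections.Concurrent;

static class AbortSafeQueue
{
    private static readonly ConcurrentQueue<Action> _workQueue = new ConcurrentQueue<Action>();

    internal static bool Enqueue(Action workItem)
    {
        bool enqueued = false;
        try { }
        finally
        {
            // A thread abort cannot interrupt between these two statements,
            // so the flag can never disagree with the queue's contents.
            _workQueue.Enqueue(workItem);
            enqueued = true;
        }
        return enqueued;
    }
}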
