Lock using atomic operations - C#

Yes, I'm aware that the following question could be answered with "Use the lock keyword instead" or something similar, but since this is just for fun, I don't care about that.
I've made a simple lock using atomic operations:
public class LowLock
{
    volatile int locked = 0;

    public void Enter(Action action)
    {
        var s = new SpinWait();
        while (true)
        {
            var inLock = locked; // release-fence (read)
            // If CompareExchange equals 1, we won the race.
            if (Interlocked.CompareExchange(ref locked, 1, inLock) == 1)
            {
                action();
                locked = 0; // acquire fence (write)
                break; // exit the while loop
            }
            else s.SpinOnce(); // lost the race. Spin and try again.
        }
    }
}
I'm using the lock above in a simple for loop that adds an item to a normal List<int>, with the purpose of making the Add method thread-safe when wrapped inside the Enter method from a LowLock.
The code looks like:
static void Main(string[] args)
{
    var low = new LowLock(); // declared here so the sample compiles; the original snippet references it without showing the declaration
    var numbers = new List<int>();
    var sw = Stopwatch.StartNew();
    var cd = new CountdownEvent(10000);
    for (int i = 0; i < 10000; i++)
    {
        ThreadPool.QueueUserWorkItem(o =>
        {
            low.Enter(() => numbers.Add(i));
            cd.Signal();
        });
    }
    cd.Wait();
    sw.Stop();
    Console.WriteLine("Time = {0} | results = {1}", sw.ElapsedMilliseconds, numbers.Count);
    Console.ReadKey();
}
Now the tricky part is that when the main thread hits the Console.WriteLine that prints the time and the number of elements in the list, the count should equal the count given to the CountdownEvent (10000). It works most of the time, but sometimes there are only 9983 elements in the list, other times 9993. What am I overlooking?

I suggest you take a look at the SpinLock structure as it appears to do exactly what you want.
That said, this all looks really dicey, but I'll have a stab at it.
You appear to be trying to use 0 to mean 'unlocked' and 1 to mean 'locked'.
In which case the line:
if (Interlocked.CompareExchange(ref locked, 1, inLock) == 1)
isn't correct at all. It's just replacing the locked variable with the value of 1 (locked) if its current value is the same as the last time you read it via inLock = locked (and acquiring the lock if so). Worse, it is entering the mutual exclusion section if the original value was 1 (locked), which is the exact opposite of what you want to be doing.
You should actually be atomically checking that the lock has not been taken (original value == 0) and take it if you can (new value == 1), by using 0 (unlocked) as both the comparand argument as well as the value to test the return value against:
if (Interlocked.CompareExchange(ref locked, 1, 0) == 0)
Now even if you fixed this, we also need to be certain that the List<T>.Add method will 'see' an up to date internal-state of the list to perform the append correctly. I think Interlocked.CompareExchange uses a full memory barrier, which should create this pleasant side-effect, but this does seem a little dangerous to rely on (I've never seen this documented anywhere).
I strongly recommend staying away from such low-lock patterns except in the most trivial (and obviously correct) of scenarios unless you are a genuine expert in low-lock programming. We mere mortals will get it wrong.
EDIT: Updated the compare value to 0.
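For reference, here is a minimal sketch of the SpinLock approach suggested at the top of this answer (the wrapper class and its name are illustrative, not part of the original answer; it assumes using System; and using System.Threading;). Note that SpinLock is a mutable struct: keep it in a field and never copy it.
public class SpinLockBox
{
    private SpinLock spinLock = new SpinLock(enableThreadOwnerTracking: false);

    public void Enter(Action action)
    {
        bool lockTaken = false;
        try
        {
            spinLock.Enter(ref lockTaken); // spins until the lock is acquired
            action();
        }
        finally
        {
            if (lockTaken) spinLock.Exit(); // always release, even if action() throws
        }
    }
}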

Interlocked.CompareExchange returns the original value of the variable, so you want something like this:
public class LowLock
{
    int locked = 0;

    public void Enter(Action action)
    {
        var s = new SpinWait();
        while (true)
        {
            // If CompareExchange equals 0, we won the race.
            if (Interlocked.CompareExchange(ref locked, 1, 0) == 0)
            {
                action();
                Interlocked.Exchange(ref locked, 0);
                break; // exit the while loop
            }
            s.SpinOnce(); // lost the race. Spin and try again.
        }
    }
}
I've removed the volatile and used a full fence (Interlocked.Exchange) to reset the flag, because volatile is hard to reason about.
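One refinement worth considering (my addition, not part of the answer above): release the flag in a finally block so that an exception thrown by action() cannot leave the lock held forever. A sketch of the same Enter method with that change:
public void Enter(Action action)
{
    var s = new SpinWait();
    // Spin until we win the race for the flag.
    while (Interlocked.CompareExchange(ref locked, 1, 0) != 0)
        s.SpinOnce();
    try
    {
        action();
    }
    finally
    {
        Interlocked.Exchange(ref locked, 0); // always release, even on exception
    }
}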

Related

While updating a value in a ConcurrentDictionary, is it better to lock the dictionary or the value?

I am performing two updates on a value I get from TryGetValue. I would like to know which of these is better:
Option 1: Locking only the out value?
if (HubMemory.AppUsers.TryGetValue(ConID, out OnlineInfo onlineinfo))
{
    lock (onlineinfo)
    {
        onlineinfo.SessionRequestId = 0;
        onlineinfo.AudioSessionRequestId = 0;
        onlineinfo.VideoSessionRequestId = 0;
    }
}
Option 2: Locking the whole dictionary?
if (HubMemory.AppUsers.TryGetValue(ConID, out OnlineInfo onlineinfo))
{
    lock (HubMemory.AppUsers)
    {
        onlineinfo.SessionRequestId = 0;
        onlineinfo.AudioSessionRequestId = 0;
        onlineinfo.VideoSessionRequestId = 0;
    }
}
I'm going to suggest something different.
Firstly, you should be storing immutable types in the dictionary to avoid a lot of threading issues. As it is, any code could modify the contents of any items in the dictionary just by retrieving an item from it and changing its properties.
Secondly, ConcurrentDictionary provides the TryUpdate() method to allow you to update values in the dictionary without having to implement explicit locking.
TryUpdate() takes three parameters: the key of the item to update, the new (updated) item, and the original item that you got from the dictionary and then updated.
TryUpdate() then checks that the original has NOT been updated by comparing the value currently in the dictionary with the original that you pass to it. Only if it is the SAME does it actually update it with the new value and return true. Otherwise it returns false without updating it.
This allows you to detect and respond appropriately to cases where some other thread has changed the value of the item you're updating while you were updating it. You can either ignore this (in which case, the first change gets priority) or try again until you succeed (in which case, the last change gets priority). What you do depends on your situation.
Note that this requires that your type implements IEquatable<T>, since that is used by the ConcurrentDictionary to compare values.
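Conceptually (and ignoring the fine-grained locking that ConcurrentDictionary performs internally), TryUpdate() behaves roughly like the following sketch, which uses the default equality comparer for the value type. This is an illustration only, not the real implementation; it assumes using System.Collections.Generic;.
// Conceptual sketch only - the real TryUpdate performs this check-and-swap atomically.
public static bool TryUpdateSketch<TKey, TValue>(
    IDictionary<TKey, TValue> dict, TKey key, TValue newValue, TValue comparisonValue)
{
    if (dict.TryGetValue(key, out TValue current) &&
        EqualityComparer<TValue>.Default.Equals(current, comparisonValue))
    {
        dict[key] = newValue;
        return true;
    }
    return false;
}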
Here's a sample console app that demonstrates this:
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

namespace Demo
{
    sealed class Test : IEquatable<Test>
    {
        public Test(int value1, int value2, int value3)
        {
            Value1 = value1;
            Value2 = value2;
            Value3 = value3;
        }

        public Test(Test other) // Copy ctor.
        {
            Value1 = other.Value1;
            Value2 = other.Value2;
            Value3 = other.Value3;
        }

        public int Value1 { get; }
        public int Value2 { get; }
        public int Value3 { get; }

        #region IEquatable<Test> implementation (generated using Resharper)

        public bool Equals(Test other)
        {
            if (other is null)
                return false;
            if (ReferenceEquals(this, other))
                return true;
            return Value1 == other.Value1 && Value2 == other.Value2 && Value3 == other.Value3;
        }

        public override bool Equals(object obj)
        {
            return ReferenceEquals(this, obj) || obj is Test other && Equals(other);
        }

        public override int GetHashCode()
        {
            unchecked
            {
                return (Value1 * 397) ^ Value2;
            }
        }

        public static bool operator ==(Test left, Test right)
        {
            return Equals(left, right);
        }

        public static bool operator !=(Test left, Test right)
        {
            return !Equals(left, right);
        }

        #endregion
    }

    static class Program
    {
        static void Main()
        {
            var dict = new ConcurrentDictionary<int, Test>();
            dict.TryAdd(0, new Test(1000, 2000, 3000));
            dict.TryAdd(1, new Test(4000, 5000, 6000));
            dict.TryAdd(2, new Test(7000, 8000, 9000));
            Parallel.Invoke(() => update(dict), () => update(dict));
        }

        static void update(ConcurrentDictionary<int, Test> dict)
        {
            for (int i = 0; i < 100000; ++i)
            {
                for (int attempt = 0; ; ++attempt)
                {
                    var original = dict[1];
                    var modified = new Test(original.Value1 + 1, original.Value2 + 1, original.Value3 + 1);
                    var updatedOk = dict.TryUpdate(1, modified, original);
                    if (updatedOk) // Updated OK so don't try again.
                        break;     // In some cases you might not care, so you would never try again.
                    Console.WriteLine($"dict.TryUpdate() returned false in iteration {i} attempt {attempt} on thread {Thread.CurrentThread.ManagedThreadId}");
                }
            }
        }
    }
}
There's a lot of boilerplate code there to support the IEquatable<T> implementation and also to support the immutability.
Fortunately, C# 9 introduced the record type, which makes immutable types much easier to implement. Here's the same sample console app using a record instead. Note that this record type is immutable and also implements IEquatable<T> for you:
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

namespace System.Runtime.CompilerServices // Remove this if compiling with .Net 5
{                                         // This is to allow earlier versions of .Net to use records.
    class IsExternalInit {}
}

namespace Demo
{
    record Test(int Value1, int Value2, int Value3);

    static class Program
    {
        static void Main()
        {
            var dict = new ConcurrentDictionary<int, Test>();
            dict.TryAdd(0, new Test(1000, 2000, 3000));
            dict.TryAdd(1, new Test(4000, 5000, 6000));
            dict.TryAdd(2, new Test(7000, 8000, 9000));
            Parallel.Invoke(() => update(dict), () => update(dict));
        }

        static void update(ConcurrentDictionary<int, Test> dict)
        {
            for (int i = 0; i < 100000; ++i)
            {
                for (int attempt = 0; ; ++attempt)
                {
                    var original = dict[1];
                    var modified = original with
                    {
                        Value1 = original.Value1 + 1,
                        Value2 = original.Value2 + 1,
                        Value3 = original.Value3 + 1
                    };
                    var updatedOk = dict.TryUpdate(1, modified, original);
                    if (updatedOk) // Updated OK so don't try again.
                        break;     // In some cases you might not care, so you would never try again.
                    Console.WriteLine($"dict.TryUpdate() returned false in iteration {i} attempt {attempt} on thread {Thread.CurrentThread.ManagedThreadId}");
                }
            }
        }
    }
}
Note how much shorter record Test is compared to class Test, even though it provides the same functionality. (Also note that I added class IsExternalInit to allow records to be used with .Net versions prior to .Net 5. If you're using .Net 5, you don't need that.)
Finally, note that you don't need to make your class immutable. The code I posted for the first example will work perfectly well if your class is mutable; it just won't stop other code from breaking things.
Addendum 1:
You may look at the output and wonder why so many retry attempts are made when the TryUpdate() fails. You might expect it to only need to retry a few times (depending on how many threads are concurrently attempting to modify the data). The answer to this is simply that the Console.WriteLine() takes so long that it's much more likely that some other thread changed the value in the dictionary again while we were writing to the console.
We can change the code slightly to only print the number of attempts OUTSIDE the loop like so (modifying the second example):
static void update(ConcurrentDictionary<int, Test> dict)
{
    for (int i = 0; i < 100000; ++i)
    {
        int attempt = 0;
        while (true)
        {
            var original = dict[1];
            var modified = original with
            {
                Value1 = original.Value1 + 1,
                Value2 = original.Value2 + 1,
                Value3 = original.Value3 + 1
            };
            var updatedOk = dict.TryUpdate(1, modified, original);
            if (updatedOk) // Updated OK so don't try again.
                break;     // In some cases you might not care, so you would never try again.
            ++attempt;
        }
        if (attempt > 0)
            Console.WriteLine($"dict.TryUpdate() took {attempt} retries in iteration {i} on thread {Thread.CurrentThread.ManagedThreadId}");
    }
}
With this change, we see that the number of retry attempts drops significantly. This shows the importance of minimising the amount of time spent in code between TryUpdate() attempts.
Addendum 2:
As noted by Theodor Zoulias below, you could also use ConcurrentDictionary<TKey,TValue>.AddOrUpdate(), as the example below shows. This is probably a better approach, but it is slightly harder to understand:
static void update(ConcurrentDictionary<int, Test> dict)
{
    for (int i = 0; i < 100000; ++i)
    {
        int attempt = 0;
        dict.AddOrUpdate(
            1,                        // Key to update.
            key => new Test(1, 2, 3), // Create new element; won't actually be called for this example.
            (key, existing) =>        // Update existing element. Key not needed for this example.
            {
                ++attempt;
                return existing with
                {
                    Value1 = existing.Value1 + 1,
                    Value2 = existing.Value2 + 1,
                    Value3 = existing.Value3 + 1
                };
            }
        );
        if (attempt > 1)
            Console.WriteLine($"dict.AddOrUpdate() took {attempt - 1} retries in iteration {i} on thread {Thread.CurrentThread.ManagedThreadId}");
    }
}
If you just need to lock the dictionary value, for instance to make sure the three values are set at the same time, then it doesn't really matter which object you lock on, as long as it is a reference type, it's the same instance every time, and everything else that needs to read or modify those values also locks on that same instance.
You can read more on how the Microsoft CLR implementation deals with locking, and how and why locks work with reference types, here:
Why Do Locks Require Instances In C#?
If you are trying to maintain consistency between the dictionary and the value, that is to say, if you are trying to protect not only the internal consistency of the dictionary but also the setting and reading of the objects inside it, then your lock is not appropriate at all.
You would need to place a lock around the entire statement (including the TryGetValue) and every other place where you add to the dictionary or read/modify the value. Once again, the object you lock over is not important, just as long as it's consistent.
Note 1: it is normal to use a dedicated instance to lock on (i.e. some instantiated object), either static or an instance member depending on your needs, as there is less chance of you shooting yourself in the foot.
Note 2: there are a lot more ways to implement thread safety here, depending on your needs: whether you are happy with stale values, whether you need every ounce of performance, how experienced you are with low-lock coding, and how much effort and built-in safety you want to bake in. That is entirely up to you and your solution.
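To illustrate Note 1, here is a sketch of Option 1 rewritten to lock on a dedicated instance instead of on the value itself. It reuses the types and names from the question; _onlineInfoLock is an illustrative name, not something from the original code.
// A private, dedicated lock object shared by all code that touches these fields.
private static readonly object _onlineInfoLock = new object();

if (HubMemory.AppUsers.TryGetValue(ConID, out OnlineInfo onlineinfo))
{
    lock (_onlineInfoLock)
    {
        onlineinfo.SessionRequestId = 0;
        onlineinfo.AudioSessionRequestId = 0;
        onlineinfo.VideoSessionRequestId = 0;
    }
}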
The first option (locking on the entry of the dictionary) is more efficient because it is unlikely to create significant contention for the lock. For this to happen, two threads should try to update the same entry at the same time. The second option (locking on the entire dictionary) is quite possible to create contention under heavy usage, because two threads will be synchronized even if they try to update different entries concurrently.
The first option is also more in the spirit of using a ConcurrentDictionary<K,V> in the first place. If you are going to lock on the entire dictionary, you might as well use a normal Dictionary<K,V> instead. Regarding this dilemma, you may find this question interesting: When should I use ConcurrentDictionary and Dictionary?

Concurrent modification of double[][] elements without locking

I have a jagged double[][] array that may be modified concurrently by multiple threads. I should like to make it thread-safe, but if possible, without locks. The threads may well target the same element in the array, that is why the whole problem arises. I have found code to increment double values atomically using the Interlocked.CompareExchange method: Why is there no overload of Interlocked.Add that accepts Doubles as parameters?
My question is: will it stay atomic if there is a jagged array reference in Interlocked.CompareExchange? Your insights are much appreciated.
With an example:
public class Example
{
    double[][] items;

    public void AddToItem(int i, int j, double addendum)
    {
        double newCurrentValue = items[i][j];
        double currentValue;
        double newValue;
        SpinWait spin = new SpinWait();
        while (true)
        {
            currentValue = newCurrentValue;
            newValue = currentValue + addendum;
            // This is the step of which I am uncertain:
            newCurrentValue = Interlocked.CompareExchange(ref items[i][j], newValue, currentValue);
            if (newCurrentValue == currentValue) break;
            spin.SpinOnce();
        }
    }
}
Yes, it will still be atomic and thread-safe. Any calls to the same cell will be passing the same address-to-a-double. Details like whether it is in an array or a field on an object are irrelevant.
However, the line:
double newCurrentValue = items[i][j];
is not atomic - that could in theory give a torn value (especially on x86). That's actually OK in this case because in the torn-value scenario it will just hit the loop, count as a collision, and redo - this time using the known-atomic value from CompareExchange.
It seems that you eventually want to add some value to an array item.
I suppose there are only updates of values (the array itself stays the same piece of memory) and all updates are done via this AddToItem method.
So you have to read the updated value each time (otherwise you lose changes made by another thread, or get an infinite loop).
public class Example
{
    double[][] items;

    public void AddToItem(int i, int j, double addendum)
    {
        var spin = new SpinWait();
        while (true)
        {
            var valueAtStart = Volatile.Read(ref items[i][j]);
            var newValue = valueAtStart + addendum;
            var oldValue = Interlocked.CompareExchange(ref items[i][j], newValue, valueAtStart);
            if (oldValue.Equals(valueAtStart))
                break;
            spin.SpinOnce();
        }
    }
}
Note that we have to use a volatile read of items[i][j]. Volatile.Read is used to avoid some unwanted optimizations that are allowed under the .NET memory model (see the ECMA-334 and ECMA-335 specifications).
Since we update value atomically (via Interlocked.CompareExchange) it's enough to read items[i][j] via Volatile.Read.
If not all changes to this array are done in this method, then it's better to write a loop in which you create a local copy of the array, modify it, and update the reference to the new array (using Volatile.Read and Interlocked.CompareExchange).
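Here is a hedged sketch of that last suggestion, assuming items is the only shared reference and the arrays are treated as immutable once published (the class name and array sizes are illustrative; it assumes using System.Threading;):
public class CopyOnWriteExample
{
    double[][] items = { new double[4], new double[4], new double[4], new double[4] };

    public void AddToItem(int i, int j, double addendum)
    {
        var spin = new SpinWait();
        while (true)
        {
            double[][] snapshot = Volatile.Read(ref items);

            // Copy the outer array and clone only the row we are changing;
            // untouched rows can be shared between the old and new snapshots.
            var newItems = (double[][])snapshot.Clone();
            newItems[i] = (double[])snapshot[i].Clone();
            newItems[i][j] += addendum;

            // Publish the new snapshot only if nobody replaced the array in the meantime.
            if (ReferenceEquals(Interlocked.CompareExchange(ref items, newItems, snapshot), snapshot))
                break;

            spin.SpinOnce();
        }
    }
}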

Why is a lock required here?

I'm looking at an example on p. 40 of Stephen Cleary's book:
// Note: this is not the most efficient implementation.
// This is just an example of using a lock to protect shared state.
static int ParallelSum(IEnumerable<int> values)
{
    object mutex = new object();
    int result = 0;
    Parallel.ForEach(source: values,
        localInit: () => 0,
        body: (item, state, localValue) => localValue + item,
        localFinally: localValue =>
        {
            lock (mutex)
                result += localValue;
        });
    return result;
}
and I'm a little confused about why the lock is needed, because if all we're doing is summing a collection of ints, say {1, 5, 6}, we shouldn't need to care about the order in which the shared sum result is updated.
(1 + 5) + 6 = 1 + (5 + 6) = (1 + 6) + 5 = ...
Can someone explain where my thinking here is flawed?
I guess I'm also a little confused about why the body of the method can't simply be
int result = 0;
Parallel.ForEach(values, (val) => { result += val; });
return result;
Operations such as addition are not atomic and thus are not thread-safe. In the example code, if the lock were omitted, it's entirely possible that two addition operations execute near-simultaneously, resulting in one of the updates being overwritten (a lost update). There is a thread-safe method for adding to integers: Interlocked.Add(ref int, int). However, since this isn't being used, the lock is required in the example code to ensure that at most one non-atomic addition operation is in progress at a time (in sequence and not in parallel).
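For illustration, here is a sketch of the same ParallelSum shape using Interlocked.Add in the localFinally delegate instead of the lock. This is not from the book; it is just a variation on the example above.
static int ParallelSumInterlocked(IEnumerable<int> values)
{
    int result = 0;
    Parallel.ForEach(source: values,
        localInit: () => 0,
        body: (item, state, localValue) => localValue + item,
        // Each partition's subtotal is folded into the shared total atomically.
        localFinally: localValue => Interlocked.Add(ref result, localValue));
    return result;
}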
It's not about the order in which result is updated; it's the race condition of updating it. Remember that the += operator is not atomic, so one thread may not see another thread's update before it touches the value.
The statement result += localValue; is really saying result = result + localValue;. You are both reading and updating a resource (variable) shared by different threads. This could easily lead to a race condition. lock makes sure this statement is accessed by only a single thread at any given moment in time.
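To see the lost-update problem described above in isolation, here is a small self-contained sketch (the counts and names are arbitrary). Without synchronization the final total is usually well below 2,000,000.
using System;
using System.Threading.Tasks;

static class RaceDemo
{
    static int total = 0;

    static void Main()
    {
        var t1 = Task.Run(() => { for (int i = 0; i < 1_000_000; i++) total++; });
        var t2 = Task.Run(() => { for (int i = 0; i < 1_000_000; i++) total++; });
        Task.WaitAll(t1, t2);

        // total++ is a read-modify-write, so concurrent increments get lost.
        Console.WriteLine(total); // typically prints less than 2000000

        // Interlocked.Increment(ref total), or a lock around total++, fixes this.
    }
}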

Resolving threading conflicts in C#

Let's suppose there is a static variable accessed by two threads:
public static int val = 1;
Now suppose thread 1 executes a statement like this:
if (val == 1)
{
    val += 1;
}
However, after the check and before the addition in the above statement, thread 2 changes the value of val to something else.
That would cause some nasty errors, and this is happening in my code.
Is there any way for thread 1 to notice that the value of val has been changed, so that instead of adding it goes back and performs the check again?
Specifically for your example, you could:
var originalValue = Interlocked.CompareExchange(ref val,
    2,  // update val to this value
    1); // if val matches this value

if (originalValue == 1)
{
    // the update occurred
}
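If you want the "go back and perform the check again" behaviour from the question, you can wrap the same CompareExchange in a retry loop, as in this sketch:
// Sketch: retry until either the condition no longer holds or our update wins.
while (true)
{
    int current = Volatile.Read(ref val);

    if (current != 1)
        break; // the check failed; nothing to add

    if (Interlocked.CompareExchange(ref val, current + 1, current) == current)
        break; // our compare-and-swap succeeded

    // Another thread changed val between the read and the swap; loop and check again.
}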
You could use a lock:
var lockObj = new object();

if (val == 1) // is the value what we want
{
    lock (lockObj) // let's lock it
    {
        if (val == 1) // now that it's locked, check again just to be sure
        {
            val += 1;
        }
    }
}
If going that route, you'll have to make sure any code that modifies val also uses a lock with the same lock object.
lock (lockObj)
{
    // code that changes `val`
}
You could use a lock around accessing val or updating val. See http://msdn.microsoft.com/en-us/library/system.threading.readerwriterlockslim.aspx
If you wish to increment an integral value in a thread-safe way, you can use the static Interlocked.Increment method: Interlocked.Increment Method (Int32)

ContainsKey vs. try/catch

I have a list of generated Vector2s that I have to check against a dictionary to see if they exist, and this function gets executed every tick.
Which would run fastest / be better: doing it this way?
public static bool exists(Vector2 Position, Dictionary<Vector2, object> ToCheck)
{
    try
    {
        object Test = ToCheck[Position];
        return (true);
    }
    catch
    {
        return (false);
    }
}
Or should I stick with the norm?
public static bool exists(Vector2 Position, Dictionary<Vector2, object> ToCheck)
{
    if (ToCheck.ContainsKey(Position))
    {
        return (true);
    }
    return (false);
}
Thanks for the input :)
Side note: (The value for the key doesn't matter at this point, or I would use TryGetValue instead of ContainsKey.)
I know it's an old question, but just to add a bit of empirical data...
Running 50,000,000 look-ups on a dictionary with 10,000 entries and comparing relative times to complete:
..if every look-up is successful:
a straight (unchecked) run takes 1.2 seconds
a guarded (ContainsKey) run takes 2 seconds
a handled (try-catch) run takes 1.21 seconds
..if 1 out of every 10,000 look-ups fail:
a guarded (ContainsKey) run takes 2 seconds
a handled (try-catch) run takes 1.37 seconds
..if 16 out of every 10,000 look-ups fail:
a guarded (ContainsKey) run takes 2 seconds
a handled (try-catch) run takes 3.27 seconds
..if 250 out of every 10,000 look-ups fail:
a guarded (ContainsKey) run takes 2 seconds
a handled (try-catch) run takes 32 seconds
..so a guarded test adds a constant overhead and nothing more, while a try/catch test operates almost as fast as no test when it never fails, but kills performance proportionally to the number of failures.
Code I used to run tests:
using System;
using System.Collections.Generic;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            Test(0);
            Test(1);
            Test(16);
            Test(250);
        }

        private static void Test(int failsPerSet)
        {
            Dictionary<int, bool> items = new Dictionary<int, bool>();
            for (int i = 0; i < 10000; i++)
                if (i >= failsPerSet)
                    items[i] = true;

            if (failsPerSet == 0)
                RawLookup(items, failsPerSet);
            GuardedLookup(items, failsPerSet);
            CaughtLookup(items, failsPerSet);
        }

        private static void RawLookup(Dictionary<int, bool> items, int failsPerSet)
        {
            int found = 0;
            DateTime start;
            Console.Write("Raw (");
            Console.Write(failsPerSet);
            Console.Write("): ");
            start = DateTime.Now;
            for (int i = 0; i < 50000000; i++)
            {
                int pick = i % 10000;
                if (items[pick])
                    found++;
            }
            Console.WriteLine(DateTime.Now - start);
        }

        private static void GuardedLookup(Dictionary<int, bool> items, int failsPerSet)
        {
            int found = 0;
            DateTime start;
            Console.Write("Guarded (");
            Console.Write(failsPerSet);
            Console.Write("): ");
            start = DateTime.Now;
            for (int i = 0; i < 50000000; i++)
            {
                int pick = i % 10000;
                if (items.ContainsKey(pick))
                    if (items[pick])
                        found++;
            }
            Console.WriteLine(DateTime.Now - start);
        }

        private static void CaughtLookup(Dictionary<int, bool> items, int failsPerSet)
        {
            int found = 0;
            DateTime start;
            Console.Write("Caught (");
            Console.Write(failsPerSet);
            Console.Write("): ");
            start = DateTime.Now;
            for (int i = 0; i < 50000000; i++)
            {
                int pick = i % 10000;
                try
                {
                    if (items[pick])
                        found++;
                }
                catch
                {
                }
            }
            Console.WriteLine(DateTime.Now - start);
        }
    }
}
Definitely use the ContainsKey check; exception handling can add a large overhead.
Throwing exceptions can negatively impact performance. For code that routinely fails, you can use design patterns to minimize performance issues.
Exceptions are not meant to be used for conditions you can check for.
I recommend reading the MSDN documentation on exceptions generally, and on exception handling in particular.
Never use try/catch as a part of your regular program path. It is really expensive and should only catch errors that you cannot prevent. ContainsKey is the way to go here.
Side note: No, you would not. If the value mattered, you would check with ContainsKey whether it exists and retrieve it if it does, not use try/catch.
Side note: (The value for the key doesn't matter at this point, or I would use TryGetValue instead of ContainsKey.)
The answer you accepted is correct, but just to add, if you only care about the key and not the value, maybe you're looking for a HashSet rather than a Dictionary?
In addition, your second code snippet is a method which literally adds zero value. Just use ToCheck.ContainsKey(Position), don't make a method which just calls that method and returns its value but does nothing else.
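For completeness, here are sketches of the two alternatives mentioned in the answers above: TryGetValue when you also need the value, and HashSet<Vector2> when you only care about membership. Vector2 here stands in for whatever key type you are actually using.
// If you need the value too, one look-up does both the check and the retrieval:
public static bool TryGetExisting(Vector2 position, Dictionary<Vector2, object> toCheck, out object value)
{
    return toCheck.TryGetValue(position, out value);
}

// If only membership matters, a HashSet avoids storing values entirely:
public static bool Exists(Vector2 position, HashSet<Vector2> toCheck)
{
    return toCheck.Contains(position);
}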
