In the following example, when the "Submit" button is clicked, the value of the static variable Count is incremented. But is this operation thread safe? Is using the Application object the proper way of doing such an operation? The questions apply to Web Forms applications as well.
The count always seems to increase as I click the Submit button.
View(Razor):
@{
Layout = null;
}
<html>
<body>
<form>
<p>@ViewBag.BeforeCount</p>
<input type="submit" value="Submit" />
</form>
</body>
</html>
Controller:
public class HomeController : Controller
{
public ActionResult Index()
{
ViewBag.BeforeCount = StaticVariableTester.Count;
StaticVariableTester.Count += 50;
return View();
}
}
Static Class:
public class StaticVariableTester
{
public static int Count;
}
No, it's not. The += operator is performed in three steps: read the value of the variable, add the operand (here 50), and assign the new value back. Expanded:
var count = StaticVariableTester.Count;
count = count + 50;
StaticVariableTester.Count = count;
A thread could be preempted between any two of these steps. This means that if Count is 0, and two threads execute += 50 concurrently, it's possible Count will be 50 instead of 100.
T1 reads Count as 0.
T2 reads Count as 0.
T1 adds 0 + 50.
T2 adds 0 + 50.
T1 assigns 50 to Count.
T2 assigns 50 to Count.
Count equals 50.
Additionally, a thread could also be preempted between your first two statements, which means two concurrent requests might both set ViewBag.BeforeCount to 0 and only then increment StaticVariableTester.Count.
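You can reproduce this lost update outside of MVC with a small console sketch (hypothetical demo code, not part of the question):

using System;
using System.Threading.Tasks;

class LostUpdateDemo
{
    static int _count;

    static void Main()
    {
        // 100,000 parallel "+= 50" operations; without synchronization the total
        // usually ends up below the expected 5,000,000 because concurrent updates
        // overwrite each other, exactly as in the interleaving above.
        Parallel.For(0, 100000, i => { _count += 50; });
        Console.WriteLine(_count);
    }
}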
Use a lock
private static readonly object _countLock = new object(); // static, so all controller instances (one per request) share the same lock
public ActionResult Index()
{
lock(_countLock)
{
ViewBag.BeforeCount = StaticVariableTester.Count;
StaticVariableTester.Count += 50;
}
return View();
}
Or use Interlocked.Add
public static class StaticVariableTester
{
private static int _count;
public static int Count
{
get { return _count; }
}
public static int IncrementCount(int value)
{
//increments and returns the old value of _count
return Interlocked.Add(ref _count, value) - value;
}
}
public ActionResult Index()
{
ViewBag.BeforeCount = StaticVariableTester.IncrementCount(50);
return View();
}
The increment is not atomic, so it is not thread safe.
Check out Interlocked.Add:
Adds two 32-bit integers and replaces the first integer with the sum, as an atomic operation.
You'd use it like this:
Interlocked.Add(ref StaticVariableTester.Count, 50);
Personally I'd wrap this in your StaticVariableTester class:
public class StaticVariableTester
{
private static int count;
public static void Add(int i)
{
Interlocked.Add(ref count, i);
}
public static int Count
{
get { return count; }
}
}
If you want the returned values (as per dcastro's comment) then you could always do:
public static int AddAndGetNew(int i)
{
return Interlocked.Add(ref count, i);
}
public static int AddAndGetOld(int i)
{
return Interlocked.Add(ref count, i) - i;
}
In your code you could do
ViewBag.BeforeCount = StaticVariableTester.AddAndGetOld(50);
If a method (instance or static) only references variables scoped within that method, then it is thread safe because each thread has its own stack. You can also achieve thread safety by using any of a variety of synchronization mechanisms.
This operation is not thread safe because it uses a shared variable: StaticVariableTester.Count.
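To illustrate the distinction, a minimal sketch (hypothetical names, not from the question):

public static class ThreadSafetyExamples
{
    private static int _shared; // shared by every thread in the process

    // Thread safe: touches only local variables, which live on each thread's own stack.
    public static int SquareLocal(int x)
    {
        int result = x * x;
        return result;
    }

    // Not thread safe: reads and writes the shared static field without synchronization.
    public static int AddToShared(int x)
    {
        _shared += x;
        return _shared;
    }
}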
What Makes a Method Thread-safe? What are the rules?
I was trying to debug an issue today with duplicate ids in our application, and I noticed that running code from the Immediate Window does not act as expected. Here is the sample program I was using to test:
class Program
{
static void Main(string[] args)
{
ClassA a = new ClassA();
ClassB b = new ClassB();
Console.WriteLine(string.Format("A: {0}", a.Id));
Console.WriteLine(string.Format("B: {0}", b.Id));
Console.ReadLine();
}
}
public class ClassA
{
private static int _id = MySingleton.Instance.GetId(typeof(ClassA));
public int Id { get { return _id; } }
}
public class ClassB
{
private static int _id = MySingleton.Instance.GetId(typeof(ClassB));
public int Id { get { return _id; } }
}
public class MySingleton
{
private int _id = 0;
private object lockObj = new object();
Dictionary<string, int> _cache = new Dictionary<string, int>();
private static readonly MySingleton _mySingleton = new MySingleton();
public int GetId(Type t)
{
if (_cache.ContainsKey(t.FullName))
{
return _cache[t.FullName];
}
else
{
lock (lockObj)
{
Add(t.FullName, _id++);
return _id;
}
}
}
public void Add(string key, int value)
{
if (!_cache.ContainsKey(key))
_cache.Add(key, value);
}
public static MySingleton Instance
{
get { return _mySingleton; }
}
}
If I run this code, the output is
A: 1
B: 2
But if I put a breakpoint on the static int _id and check the value of MySingleton.Instance.GetId(typeof(ClassA)) in the Immediate Window, it shows me a value of 1 the first time but then persists as 0, so the output is:
A: 0
B: 2
If I put the breakpoint in ClassB and run MySingleton.Instance.GetId(typeof(ClassA)) in the Immediate window, it shows me a value of 2 the first time, but persists as 1 for any subsequent calls:
A: 1
B: 1
What I would actually expect, because I am using _id++ instead of ++_id, is that the output should be:
A: 0
B: 1
Why is this happening?
First a few observations:
When you call GetId the first time, the cache is not filled for that type. The value is added to the cache and then increased. But that increased (and I assume incorrect) value is returned (1 for ClassA), NOT the value inside the cache, which is lower. So if you call GetId a second time for the same type, you get a value that is one lower (0 for ClassA).
When you use your debugger and watch results, those "watches" might modify the results. In your case, the watch window shows the incorrect value (1 for ClassA), and after that, the program uses the correct value (0 for ClassA).
Try to fix your code like this:
lock (lockObj)
{
Add(t.FullName, _id); // <-- remove the ++
return _id++; // <-- add the ++
}
This way, the original value of _id is returned before it is incremented.
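In context, the whole method would then look something like this (a sketch; the cache read still happens outside the lock, which the next observation addresses):

public int GetId(Type t)
{
    if (_cache.ContainsKey(t.FullName))
    {
        return _cache[t.FullName];
    }

    lock (lockObj)
    {
        Add(t.FullName, _id);  // store the current value...
        return _id++;          // ...and return it before incrementing
    }
}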
Another observation:
You are using a Dictionary<K,V>. It is not thread-safe. You should protect it with a lock not only during write operations but also during read operations. With your code as it is now, a read during a write might lead to bad results. Consider using ConcurrentDictionary<K,V>.
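For example, a sketch of GetId built on ConcurrentDictionary (assuming the ids only need to be unique and stable per type):

using System;
using System.Collections.Concurrent;
using System.Threading;

public class MySingleton
{
    private static readonly MySingleton _mySingleton = new MySingleton();
    private readonly ConcurrentDictionary<string, int> _cache = new ConcurrentDictionary<string, int>();
    private int _id = -1; // first id handed out will be 0, matching the _id++ expectation

    public static MySingleton Instance
    {
        get { return _mySingleton; }
    }

    public int GetId(Type t)
    {
        // GetOrAdd is thread safe; the value factory may run more than once under
        // contention (so an id can occasionally be skipped), but each type still
        // gets exactly one stable id.
        return _cache.GetOrAdd(t.FullName, _ => Interlocked.Increment(ref _id));
    }
}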
I am confused about the correctness of my code under multithreading, as I sometimes get the wrong result.
It looks like it might fail. Below is the code.
public class MyKeyValue
{
public double Key { get; set; }
public double Value { get; set; }
}
public class CollMyKeyValue : List<MyKeyValue>
{
public void SumUpValues(CollMyKeyValue collection)
{
int count =0;
Parallel.For(count, this.Count,
(i) =>
{
this[count].Value = this[count].Value + collection[count].Value;
Interlocked.Increment(ref count);
});
}
}
Assuming the keys are the same in both collections, I want to add the values of one collection into the other. Is it thread safe?
I have not put the this[count].Value = this[count].Value + collection[count].Value; statement in a thread-safe block.
Just remove the interlocked increment:
public void SumUpValues(CollMyKeyValue collection)
{
//int count =0;
Parallel.For(0, this.Count,
(i) =>
{
this[i].Value = this[i].Value + collection[i].Value;
//Interlocked.Increment(ref count);
});
}
Your version is altering the index variable inside the loop. The For loop does this automatically; in the parallel version each thread gets an i (or a set of is) to process, so incrementing it inside the loop makes no sense.
I'm not sure what you're trying to do, but I guess you mean this:
public void SumUpValues(CollMyKeyValue collection)
{
Parallel.For(0, this.Count, (i) =>
{
this[i].Value += collection[i].Value;
});
}
The first parameter tells Parallel.For where to start; altering that makes no sense. You get i as the parameter to the loop body, which tells you which iteration you're in.
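A quick usage sketch of the corrected method (assuming the keys line up index-by-index, as the question states):

var first = new CollMyKeyValue
{
    new MyKeyValue { Key = 1, Value = 10 },
    new MyKeyValue { Key = 2, Value = 20 }
};
var second = new CollMyKeyValue
{
    new MyKeyValue { Key = 1, Value = 5 },
    new MyKeyValue { Key = 2, Value = 7 }
};

first.SumUpValues(second);

// Each index is handled by exactly one iteration, so no extra locking is needed:
// first[0].Value == 15, first[1].Value == 27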
So I have this recursive factorial function in C#. I am using it to deal with BigInteger. The problem arises when I want to deal with large integers: because my function is recursive, it causes a StackOverflowException. Now, the simple solution is to not make the function recursive. I am wondering if there is a way to get around this? I'm thinking along the lines of allocating more RAM to the stack.
BigInteger Factorial(BigInteger n)
{
return n == 1 ? 1 : n * Factorial(n - 1);
}
It would be nice if you could express recursive functions in C# without worrying about the stack, but unfortunately that is not directly possible, and no matter how big you make the stack there will always be situations where you run out of stack space. Furthermore, your performance will likely be pretty horrendous. If you have a tail-recursive function like this factorial, something can be done that pretty much lets you express your function in the original recursive way, without the huge penalty.
Unfortunately, C# does not directly support tail-recursive calls, but a workaround is possible using a so-called "trampoline" construction.
See for example: http://bartdesmet.net/blogs/bart/archive/2009/11/08/jumping-the-trampoline-in-c-stack-friendly-recursion.aspx and http://www.thomaslevesque.com/2011/09/02/tail-recursion-in-c/
From the last blog comes the following code, which will allow you to perform the factorial as a tail-recursive function without stack problems.
public static class TailRecursion
{
public static T Execute<T>(Func<RecursionResult<T>> func)
{
do
{
var recursionResult = func();
if (recursionResult.IsFinalResult)
return recursionResult.Result;
func = recursionResult.NextStep;
} while (true);
}
public static RecursionResult<T> Return<T>(T result)
{
return new RecursionResult<T>(true, result, null);
}
public static RecursionResult<T> Next<T>(Func<RecursionResult<T>> nextStep)
{
return new RecursionResult<T>(false, default(T), nextStep);
}
}
public class RecursionResult<T>
{
private readonly bool _isFinalResult;
private readonly T _result;
private readonly Func<RecursionResult<T>> _nextStep;
internal RecursionResult(bool isFinalResult, T result, Func<RecursionResult<T>> nextStep)
{
_isFinalResult = isFinalResult;
_result = result;
_nextStep = nextStep;
}
public bool IsFinalResult { get { return _isFinalResult; } }
public T Result { get { return _result; } }
public Func<RecursionResult<T>> NextStep { get { return _nextStep; } }
}
class Program
{
static void Main(string[] args)
{
BigInteger result = TailRecursion.Execute(() => Factorial(50000, 1));
}
static RecursionResult<BigInteger> Factorial(int n, BigInteger product)
{
if (n < 2)
return TailRecursion.Return(product);
return TailRecursion.Next(() => Factorial(n - 1, n * product));
}
}
You can create a new thread with the stack size you want:
var tcs = new TaskCompletionSource<BigInteger>();
int stackSize = 1024*1024*1024;
new Thread(() =>
{
tcs.SetResult(Factorial(10000));
},stackSize)
.Start();
var result = tcs.Task.Result;
But as mentioned in the comments, an iterative approach would be better for this.
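For reference, the iterative version the comments allude to is straightforward (a sketch):

using System.Numerics;

static BigInteger Factorial(BigInteger n)
{
    BigInteger result = 1;
    // A plain loop multiplies up from 2 to n, so the stack never grows.
    for (BigInteger i = 2; i <= n; i++)
        result *= i;
    return result;
}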
I have the following code:
static void Main(string[] args)
{
TaskExecuter.Execute();
}
class Task
{
int _delay;
private Task(int delay) { _delay = delay; }
public void Execute() { Thread.Sleep(_delay); }
public static IEnumerable GetAllTasks()
{
Random r = new Random(4711);
for (int i = 0; i < 10; i++)
yield return new Task(r.Next(100, 5000));
}
}
static class TaskExecuter
{
public static void Execute()
{
foreach (Task task in Task.GetAllTasks())
{
task.Execute();
}
}
}
I need to change the loop in the Execute method to run in parallel with multiple threads. I tried the following, but it isn't working since GetAllTasks returns IEnumerable and not a list:
Parallel.ForEach(Task.GetAllTasks(), task =>
{
//Execute();
});
Parallel.ForEach works with IEnumerable<T>, so adjust your GetAllTasks to return IEnumerable<Task>.
Also, .NET has a widely used Task class; I would avoid naming your own class like that to avoid confusion.
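A sketch of the first suggestion, keeping the question's Task name for the moment (this Task is still the question's own class, which hides System.Threading.Tasks.Task):

// Type the iterator as IEnumerable<Task> so Parallel.ForEach can infer TSource.
public static IEnumerable<Task> GetAllTasks()
{
    Random r = new Random(4711);
    for (int i = 0; i < 10; i++)
        yield return new Task(r.Next(100, 5000));
}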
Parallel.ForEach takes an IEnumerable<TSource>, so your code should be fine. However, you need to perform the Execute call on the task instance that is passed as a parameter to your lambda statement.
Parallel.ForEach(Task.GetAllTasks(), task =>
{
task.Execute();
});
This can also be expressed as a one-line lambda expression:
Parallel.ForEach(Task.GetAllTasks(), task => task.Execute());
There is also another subtle bug in your code that you should pay attention to. Per its internal implementation, Parallel.ForEach may enumerate the elements of your sequence in parallel. However, you are calling an instance method of the Random class in your enumerator, which is not thread-safe, possibly leading to race issues. The easiest way to work around this would be to pre-populate your sequence as a list:
Parallel.ForEach(Task.GetAllTasks().ToList(), task => task.Execute());
This worked in LINQPad. I just renamed your Task class to Work and also returned an IEnumerable<T> from GetAllTasks:
class Work
{
int _delay;
private Work(int delay) { _delay = delay; }
public void Execute() { Thread.Sleep(_delay); }
public static IEnumerable<Work> GetAllTasks()
{
Random r = new Random(4711);
for (int i = 0; i < 10; i++)
yield return new Work(r.Next(100, 5000));
}
}
static class TaskExecuter
{
public static void Execute()
{
foreach (Work task in Work.GetAllTasks())
{
task.Execute();
}
}
}
void Main()
{
System.Threading.Tasks.Parallel.ForEach(Work.GetAllTasks(), new Action<Work>(task =>
{
task.Execute();
}));
}
If I use the TPL I run into problems in the Parse.. methods: I use Console.Write to build up a line, but sometimes one thread is too fast and writes into the other method's row. How do I lock, or is there some better way?
Parallel.Invoke(
() => insertedOne = Lib.ParseOne(list),
() => insertedTwo = Lib.ParseTwo(list),
() => insertedThree = Lib.ParseThree(list));
Example of the Parse.. methods:
public static int ParseOne(string[] _list) {
Console.Write("blabla");
Console.Write("blabla");
return 0;
}
public static int ParseTwo(string[] _list) {
Console.Write("hahahah");
Console.Write("hahahah");
return 0;
}
public static int ParseThree(string[] _list) {
Console.Write("egegege");
Console.Write("egegege");
return 0;
}
To print your blablas, hahahahs and egegeges as a single, indivisible unit, you can write your method as:
public static int ParseThree(string[] _list)
{
lock (Console.Out)
{
Console.Write("egegege");
Console.Write("egegege");
}
return 0;
}
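The other Parse.. methods need the same treatment so that each method's writes stay together; a sketch for ParseOne:

public static int ParseOne(string[] _list)
{
    // Locking on the same shared object (Console.Out) in every Parse method
    // means only one method at a time can run its pair of writes.
    lock (Console.Out)
    {
        Console.Write("blabla");
        Console.Write("blabla");
    }
    return 0;
}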
Why don't you run all the tasks in one thread, one after the other?
System.Threading.Tasks.Task.Factory.StartNew(()=>
{
insertedOne = Lib.ParseOne(list);
insertedTwo = Lib.ParseTwo(list);
insertedThree = Lib.ParseThree(list);
});
This way you won't have that race condition at all.