I'm developing an ASP.NET Web Forms application using C#. I have a method which creates a new Order for a customer. It looks similar to this:
private Order CreateOrder(string userName) {
    // Fetch current order
    Order order = FetchOrder(userName);
    if (order.OrderId == 0) {
        // Has no order yet, create a new one
        order.OrderNumber = Utility.GenerateOrderNumber();
        order.Save();
    }
    return order;
}
The problem here is that it is possible for one customer, in two requests (threads), to cause this method to be called twice at the same time. This can cause two orders to be created.
How can I properly lock this method, so it can only be executed by one thread at a time per customer?
I tried:
Mutex mutex = null;
private Order CreateOrder(string userName) {
    if (mutex == null) {
        mutex = new Mutex(true, userName);
    }
    mutex.WaitOne();
    // Code from above
    mutex.ReleaseMutex();
    mutex = null;
    return order;
}
This works, but on some occasions it hangs on WaitOne and I don't know why. Is there an error, or should I use another method to lock?
Thanks
Pass false for initiallyOwned in the Mutex constructor. If you create the mutex and initially own it, the later WaitOne just adds another level of ownership, so you would need to call ReleaseMutex again to fully release it.
You should always use try/finally when releasing a mutex. Also, make sure that the key (userName) is correct.
Mutex mutex = null;
private Order CreateOrder(string userName) {
    mutex = mutex ?? new Mutex(false, userName);
    mutex.WaitOne();
    try {
        // Code from above
    } finally {
        mutex.ReleaseMutex();
    }
    mutex = null;
    return order;
}
In your code, you are creating the mutex lazily. This leads to race conditions.
E.g. it can happen that the mutex is only partially constructed when you call WaitOne() from another thread.
It can also happen that you create two mutex instances.
etc...
You can avoid this by creating the instance eagerly - i.e. as in Michael's code.
(Be sure to initialize it to a non-owned state.)
Mutex is a kernel-level synchronization primitive - it is more expensive than Monitor (which is what lock uses).
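If everything runs in a single app domain, one lighter-weight option is a per-customer lock object kept in a ConcurrentDictionary and used with Monitor via lock. This is only a sketch, not your method: it reuses FetchOrder, Order and Utility from the question and assumes .NET 4+ for ConcurrentDictionary; it will not coordinate across processes or a web farm the way a named Mutex (or a database constraint) would.
// Sketch: per-userName locking with Monitor (lock) instead of a named Mutex.
// ConcurrentDictionary lives in System.Collections.Concurrent (.NET 4+).
// Only works within a single process/app domain.
private static readonly ConcurrentDictionary<string, object> userLocks =
    new ConcurrentDictionary<string, object>();

private Order CreateOrder(string userName)
{
    // One lock object per customer, created on first use.
    object userLock = userLocks.GetOrAdd(userName, _ => new object());

    lock (userLock)
    {
        Order order = FetchOrder(userName);
        if (order.OrderId == 0)
        {
            order.OrderNumber = Utility.GenerateOrderNumber();
            order.Save();
        }
        return order;
    }
}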
Unless I'm missing something, can't you just use a regular lock?
private object _locker = new object();

private Order CreateOrder(string userName)
{
    lock (_locker)
    {
        // Fetch current order
        Order order = FetchOrder(userName);
        if (order.OrderId == 0)
        {
            // Has no order yet, create a new one
            order.OrderNumber = Utility.GenerateOrderNumber();
            order.Save();
        }
        return order;
    }
}
I have always avoided locking in a web-based application - let the web server deal with the threads, and instead build in duplicate detection.
What do you think you're going to get by locking on CreateOrder? It seems to me that you may avoid creating two orders simultaneously, but you're still going to end up with two orders created.
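For example, if the database enforces one open order per customer with a unique constraint, CreateOrder can simply catch the duplicate-key error and re-fetch. This is only a sketch of the duplicate-detection idea: it assumes SQL Server (error numbers 2627/2601 for unique constraint/index violations), that order.Save() surfaces the SqlException, and that the constraint has actually been added to the schema.
private Order CreateOrder(string userName)
{
    Order order = FetchOrder(userName);
    if (order.OrderId == 0)
    {
        order.OrderNumber = Utility.GenerateOrderNumber();
        try
        {
            order.Save();
        }
        catch (System.Data.SqlClient.SqlException ex)
        {
            // 2627 / 2601 = unique constraint / unique index violation in SQL Server.
            if (ex.Number != 2627 && ex.Number != 2601) throw;
            // Another request created the order first - just use theirs.
            order = FetchOrder(userName);
        }
    }
    return order;
}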
It's easier to do this:
Define a class somewhere like so:
public class MyLocks {
public static object OrderLock;
static MyLocks() {
OrderLock = new object();
}
}
Then, when using the lock, do this:
lock(MyLocks.OrderLock) {
// put your code here
}
It's not very complicated then. It's lightweight to define locks for whatever purpose, as they are just tiny objects in memory that are shared across multiple threads.
I want to start some new threads, each for one repeating operation. But when such an operation is already in progress, I want to discard the current task. In my scenario I only need very current data - dropped data is not an issue.
In MSDN I found the Mutex class, but as I understand it, it waits for its turn, blocking the current thread. I also want to ask: does something already exist in the .NET Framework that does the following:
Is some method M already being executed?
If so, return (and let me increase some counter for statistics)
If not, start method M in a new thread
The lock(someObject) statement, which you may have come across, is syntactic sugar around Monitor.Enter and Monitor.Exit.
However, if you use the monitor in this more verbose way, you can also use Monitor.TryEnter which allows you to check if you'll be able to get the lock - hence checking if someone else already has it and is executing code.
So instead of this:
var lockObject = new object();
lock(lockObject)
{
// do some stuff
}
try this (option 1):
private int _alreadyBeingExecutedCounter;
private readonly object lockObject = new object(); // shared fields, not locals, so every call uses the same lock
if (Monitor.TryEnter(lockObject))
{
// you'll only end up here if you got the lock when you tried to get it - otherwise you'll never execute this code.
// do some stuff
//call exit to release the lock
Monitor.Exit(lockObject);
}
else
{
// didn't get the lock - someone else was executing the code above - so I don't need to do any work!
Interlocked.Increment(ref _alreadyBeingExecutedCounter);
}
(you'll probably want to put a try..finally in there to ensure the lock is released)
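For example, the same check with the release wrapped in a finally block, so the lock is freed even if the work throws:
if (Monitor.TryEnter(lockObject))
{
    try
    {
        // do some stuff
    }
    finally
    {
        Monitor.Exit(lockObject);
    }
}
else
{
    // didn't get the lock - someone else is already doing the work
    Interlocked.Increment(ref _alreadyBeingExecutedCounter);
}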
or dispense with the explicit lock altogether and do this
(option 2)
private int _inUseCount;

public void MyMethod()
{
    try
    {
        if (Interlocked.Increment(ref _inUseCount) == 1)
        {
            // do some stuff
        }
    }
    finally
    {
        Interlocked.Decrement(ref _inUseCount);
    }
}
[Edit: in response to your question about this]
No - don't use this to lock on. Create a privately scoped object to act as your lock.
Otherwise you have this potential problem:
public class MyClassWithLockInside
{
public void MethodThatTakesLock()
{
lock(this)
{
// do some work
}
}
}
public class Consumer
{
private static MyClassWithLockInside _instance = new MyClassWithLockInside();
public void ThreadACallsThis()
{
lock(_instance)
{
// Having taken a lock on our instance of MyClassWithLockInside,
// do something long running
Thread.Sleep(6000);
}
}
public void ThreadBCallsThis()
{
// If thread B calls this while thread A is still inside the lock above,
// this method will block as it tries to get a lock on the same object
// ["this" inside the class = _instance outside]
_instance.MethodThatTakesLock();
}
}
In the above example, some external code has managed to disrupt the internal locking of our class just by taking out a lock on something that was externally accessible.
Much better to create a private object that you control, and that no one outside your class has access to, to avoid this sort of problem; this includes not locking on this or on the type itself (typeof(MyClassWithLockInside)).
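A minimal sketch of the safer shape:
public class MyClassWithLockInside
{
    // Privately scoped lock object - nothing outside this class can lock on it.
    private readonly object _syncRoot = new object();

    public void MethodThatTakesLock()
    {
        lock (_syncRoot)
        {
            // do some work
        }
    }
}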
One option would be to work with a reentrancy sentinel:
You could define an int field (initialized to 0) and update it via Interlocked.Increment on entering the method, proceeding only if the result is 1. At the end just do an Interlocked.Decrement.
Another option:
From your description it seems that you have a producer/consumer scenario...
For this case it might be helpful to use something like BlockingCollection, as it is thread-safe and mostly lock-free...
Another option would be to use ConcurrentQueue or ConcurrentStack...
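For example, a small producer/consumer sketch with BlockingCollection (from System.Collections.Concurrent); WorkItem and Process are placeholders for your own type and processing logic:
// One consumer drains the queue on a background task; producers just Add
// and are never blocked by the processing itself.
BlockingCollection<WorkItem> queue = new BlockingCollection<WorkItem>();

Task consumer = Task.Run(() =>
{
    foreach (WorkItem item in queue.GetConsumingEnumerable())
    {
        Process(item);
    }
});

// Producers, from any thread:
queue.Add(new WorkItem());

// On shutdown, let the consumer finish what is queued and stop:
queue.CompleteAdding();
consumer.Wait();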
You might find some useful information on the following site (the PDF is also downloadable - I recently downloaded it myself). The Advanced Threading chapters on Suspend and Resume or Aborting may be what you are interested in.
You should use the Interlocked class's atomic operations for best performance, since they avoid system-level synchronization (any "standard" primitive needs it, and that involves system-call overhead).
// Simple non-reentrant mutex without ownership; easy to extend to support
// those features (record the owner after acquiring the lock - compare a Thread
// reference with Thread.CurrentThread, for example - check for a matching
// identity, and add a counter for reentrancy).
// Can't use bool because it's not supported by CompareExchange.
private int _lock;

public bool TryLock()
{
    // if (Interlocked.Increment(ref _inUseCount) == 1)
    // That kind of code is risky - the counter can change between the increment
    // returning and the condition check; the increment is atomic, the "if" isn't.
    // Use CompareExchange instead:
    // if the value is 0, change it to 1 atomically and return the original value.
    // Returns true if this thread successfully acquired the lock.
    return Interlocked.CompareExchange(ref _lock, 1, 0) == 0;
}

public bool Release()
{
    // Returns true if the lock was occupied; false if it was already free.
    return Interlocked.CompareExchange(ref _lock, 0, 1) == 1;
}
I'm writing a program that will analyze changes in the stock market.
Every time the candles on the stock charts are updated, my algorithm scans every chart for certain pieces of data. I've noticed that this process takes about 0.6 seconds each time, freezing my application. It's not getting stuck in a loop, and there are no other problems like exceptions slowing it down. It just takes a bit of time.
To solve this, I'm trying to see if I can thread the algorithm.
In order to call the algorithm to check over the charts, I have to call this:
checkCharts.RunAlgo();
As a Thread needs something to run, I'm trying to figure out how to pass it RunAlgo(), but I'm not having any luck.
How can I have a thread run this method on my checkCharts object? Due to back-propagating data, I can't create a new checkCharts object; I have to keep using the method on the existing object.
EDIT:
I tried this:
M4.ALProj.BotMain checkCharts = new ALProj.BotMain();
Thread algoThread = new Thread(checkCharts.RunAlgo);
It tells me that the checkCharts part of checkCharts.RunAlgo gives me: "An object reference is required for the non-static field, method, or property 'M4.ALProj.BotMain'".
In a specific if statement, I was going to put the algoThread.Start(); Any idea what I did wrong there?
The answer to your question is actually very simple:
Thread myThread = new Thread(checkCharts.RunAlgo);
myThread.Start();
However, the more complex part is to make sure that when the method RunAlgo accesses variables inside the checkCharts object, this happens in a thread-safe manner.
See Thread Synchronization for help on how to synchronize access to data from multiple threads.
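For example, a sketch of guarding shared state inside the BotMain class with a private lock object; Candle and the member names here are made up, since the real fields aren't shown in the question:
// Assumes some chart data that both the UI/feed thread and the algo thread touch.
private readonly object _chartLock = new object();
private List<Candle> _candles = new List<Candle>();

public void UpdateCandles(List<Candle> latest)   // called by the UI/feed thread
{
    lock (_chartLock)
    {
        _candles = latest;
    }
}

public void RunAlgo()                            // called on the worker thread
{
    List<Candle> snapshot;
    lock (_chartLock)
    {
        // Copy under the lock, then analyze outside it so the UI isn't blocked.
        snapshot = new List<Candle>(_candles);
    }
    // ... analyze snapshot ...
}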
I would rather use Task.Run than Thread. Task.Run utilizes the ThreadPool, which has been optimized to handle various loads effectively. You will also get all the goodies of Task.
await Task.Run(() => checkCharts.RunAlgo());
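For example, called from an async event handler the UI stays responsive while the algorithm runs, and you're back on the UI thread afterwards. This is a sketch: the handler name and the shouldRunAlgo condition are placeholders for your own code.
private async void OnCandlesUpdated(object sender, EventArgs e)
{
    if (shouldRunAlgo)
    {
        await Task.Run(() => checkCharts.RunAlgo());
        // Back on the UI thread here - safe to update controls.
    }
}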
Try this code block. It's basic boilerplate, but you can build on and extend it quite easily.
//If M4.ALProj.BotMain needs to be recreated for each run then comment this line and uncomment the one in DoRunParallel()
private static M4.ALProj.BotMain checkCharts = new M4.ALProj.BotMain();
private static object SyncRoot = new object();
private static System.Threading.Thread algoThread = null;
private static bool ReRunOnComplete = false;
public static void RunParallel()
{
lock (SyncRoot)
{
if (algoThread == null)
{
System.Threading.ThreadStart TS = new System.Threading.ThreadStart(DoRunParallel);
algoThread = new System.Threading.Thread(TS);
algoThread.Start();
}
else
{
//Received a recalc call while still calculating
ReRunOnComplete = true;
}
}
}
public static void DoRunParallel()
{
bool ReRun = false;
try
{
//If M4.ALProj.BotMain needs to be recreated for each run then uncomment this line and comment private static version above
//M4.ALProj.BotMain checkCharts = new M4.ALProj.BotMain();
checkCharts.RunAlgo();
}
finally
{
lock (SyncRoot)
{
algoThread = null;
ReRun = ReRunOnComplete;
ReRunOnComplete = false;
}
}
if (ReRun)
{
RunParallel();
}
}
I'm attempting to reimplement functionality from a system class (Lazy<T>) and I found this unusual bit of code. I get the basic idea. The first thread to try for a value performs the calculations. Any threads that try while that's happening get locked at the gate, wait until release, and then go get the cached value. Any later calls notice the sentinel value and don't bother with the locks any more.
bool lockWasTaken = false;
var obj = Volatile.Read<object>(ref this._locker);
object returnValue = null;
try
{
if (obj != SENTINEL_VALUE)
{
Monitor.Enter(obj, ref lockWasTaken);
}
if (this.cachedValue != null) // always true after code has run once
{
returnValue = this.cachedValue;
}
else //only happens on the first thread to lock and enter
{
returnValue = SomeCalculations();
this.cachedValue = returnValue;
Volatile.Write<object>(ref this._locker, SENTINEL_VALUE);
}
return returnValue;
}
finally
{
if (lockWasTaken)
{
Monitor.Exit(obj);
}
}
But let's say, after a change in the code, that another method resets the this._locker to its original value and then goes in to lock and recalculate the cached value. While it does this, another thread happened to be picking up the cached value, so it's inside the locked section, but without a lock. What happens? Does it just execute normally while the thread with the lock also goes in parallel?
While it does this, another thread happened to be picking up the cached value, so it's inside the locked section, but without a lock. What happens? Does it just execute normally while the thread with the lock also goes in parallel?
Yes, it'll just execute normally.
That being said, it looks like this code could be replaced entirely by Lazy<T>. The Lazy<T> class provides a thread-safe way to handle lazy instantiation of data, which appears to be the goal of this code.
Basically, the entire code could be replaced by:
// Have a field like the following:
Lazy<object> cachedValue = new Lazy<object>(() => SomeCalculations());
// Code then becomes:
return cachedValue.Value;
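If you want the thread-safety behaviour to be explicit rather than relying on the default, Lazy<T> also takes a LazyThreadSafetyMode; ExecutionAndPublication (the default for this constructor) guarantees the factory runs only once:
// Same field, with the thread-safety mode spelled out explicitly
// (LazyThreadSafetyMode lives in System.Threading).
Lazy<object> cachedValue = new Lazy<object>(
    () => SomeCalculations(),
    LazyThreadSafetyMode.ExecutionAndPublication);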
I have 2 threads to are triggered at the same time and run in parallel. These 2 threads are going to be manipulating a string value, but I want to make sure that there are no data inconsistencies. For that I want to use a lock with Monitor.Pulse and Monitor.Wait. I used a method that I found on another question/answer, but whenever I run my program, the first thread gets stuck at the Monitor.Wait level. I think that's because the second thread has already "Pulsed" and "Waited". Here is some code to look at:
string currentInstruction;

public void nextInstruction()
{
    Action[] actions = {
        fetch,
        decode
    };
    Parallel.Invoke(actions);
    _pc++;
}

public void fetch()
{
    lock (irLock)
    {
        currentInstruction = "blah";
        GiveTurnTo(2, irLock);
        WaitTurn(1, irLock);
    }
    decodeEvent.WaitOne();
}

public void decode()
{
    decodeEvent.Set();
    lock (irLock)
    {
        WaitTurn(2, irLock);
        currentInstruction = "decoding...";
        GiveTurnTo(1, irLock);
    }
}
// Below are the methods I talked about before.
// Wait for turn to use lock object
public static void WaitTurn(int threadNum, object _lock)
{
// While( not this threads turn )
while (threadInControl != threadNum)
{
// "Let go" of lock on SyncRoot and wait utill
// someone finishes their turn with it
Monitor.Wait(_lock);
}
}
// Pass turn over to other thread
public static void GiveTurnTo(int nextThreadNum, object _lock)
{
threadInControl = nextThreadNum;
// Notify waiting threads that it's someone else's turn
Monitor.Pulse(_lock);
}
Any idea how to get 2 parallel threads to communicate (manipulate the same resources) within the same cycle using locks or anything else?
You want to run two pieces of code in parallel, but lock them at the start using the same variable?
As nvoigt mentioned, it already sounds wrong. What you have to do is remove the lock from there. Use it only when you are about to access something exclusively.
By the way, "data inconsistencies" can be avoided by not having them in the first place. Do not use the currentInstruction field directly (is it a field?), but provide a thread-safe CurrentInstruction property.
private object _currentInstructionLock = new object();
private string _currentInstruction;
public string CurrentInstruction
{
get { return _currentInstruction; }
set
{
lock(_currentInstructionLock)
_currentInstruction = value;
}
}
Another thing is naming: local variable names starting with _ are bad style. Some people (including me) use that prefix to distinguish private fields. Property names should start with an uppercase letter and local variable names with a lowercase one.
I have a question about improving the efficiency of my program. I have a Dictionary<string, Thingey> defined to hold named Thingeys. This is a web application that will create multiple named Thingeys over time. Thingeys are somewhat expensive to create (not prohibitively so), but I'd like to avoid it whenever possible. My logic for getting the right Thingey for the request looks a lot like this:
private Dictionary<string, Thingey> Thingeys;
public Thingey GetThingey(Request request)
{
string thingeyName = request.ThingeyName;
if (!this.Thingeys.ContainsKey(thingeyName))
{
// create a new thingey on 1st reference
Thingey newThingey = new Thingey(request);
lock (this.Thingeys)
{
if (!this.Thingeys.ContainsKey(thingeyName))
{
this.Thingeys.Add(thingeyName, newThingey);
}
// else - oops someone else beat us to it
// newThingey will eventually get GCed
}
}
return this.Thingeys[thingeyName];
}
In this application, Thingeys live forever once created. We don't know how to create them or which ones will be needed until the app starts and requests begin coming in. The question I have about the above code is that there are occasional instances where a redundant newThingey is created, because we get multiple simultaneous requests for it before it's been added. We end up creating two of them but only adding one to our collection.
Is there a better way to get Thingeys created and added that doesn’t involve check/create/lock/check/add with the rare extraneous thingey that we created but end up never using? (And this code works and has been running for some time. This is just the nagging bit that has always bothered me.)
I'm trying to avoid locking the dictionary for the duration of creating a Thingey.
This is the standard double-checked locking problem. The way it is implemented here is unsafe and can cause various problems - potentially up to a crash in the first check if the internal state of the dictionary is corrupted badly enough.
It is unsafe because you are checking it without synchronization, and if your luck is bad enough you can hit it while some other thread is in the middle of updating the internal state of the dictionary.
A simple solution is to place the first check under a lock as well. The problem with this is that it becomes a global lock, and in a web environment under heavy load it can become a serious bottleneck.
If we are talking about .NET environment, there are ways to work around this issue by piggybacking on the ASP.NET synchronization mechanism.
Here is how I did it in the NDjango rendering engine: I keep one global dictionary and one dictionary per rendering thread. When a request comes in, I check the local dictionary first - this check does not have to be synchronized, and if the thingy is there I just take it.
If it is not, I lock the global dictionary and check whether it is there; if it is, I add it to my thread dictionary and release the lock. If it is not in the global dictionary, I add it there first, while still under the lock.
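A rough sketch of that two-level lookup (not the NDjango code itself; Thingey and Request are the types from the question):
// Global dictionary shared by all threads, plus one dictionary per thread.
private static readonly Dictionary<string, Thingey> globalThingeys =
    new Dictionary<string, Thingey>();
private static readonly object globalLock = new object();

[ThreadStatic]
private static Dictionary<string, Thingey> threadThingeys;

public Thingey GetThingey(Request request)
{
    string key = request.ThingeyName;
    if (threadThingeys == null)
        threadThingeys = new Dictionary<string, Thingey>();

    // 1. Unsynchronized check against this thread's private dictionary.
    Thingey thingey;
    if (threadThingeys.TryGetValue(key, out thingey))
        return thingey;

    // 2. Fall back to the shared dictionary under the lock.
    lock (globalLock)
    {
        if (!globalThingeys.TryGetValue(key, out thingey))
        {
            thingey = new Thingey(request);
            globalThingeys.Add(key, thingey);
        }
    }

    // Cache it per-thread so the next lookup needs no lock.
    threadThingeys[key] = thingey;
    return thingey;
}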
Well, from my point of view simpler code is better, so I'd only use one lock:
private readonly object thingeysLock = new object();
private readonly Dictionary<string, Thingey> thingeys;
public Thingey GetThingey(Request request)
{
string key = request.ThingeyName;
lock (thingeysLock)
{
Thingey ret;
if (!thingeys.TryGetValue(key, out ret))
{
ret = new Thingey(request);
thingeys[key] = ret;
}
return ret;
}
}
Locks are really cheap when they're not contended. The downside is that this means that occasionally you will block everyone for the whole duration of the time you're creating a new Thingey. Clearly to avoid creating redundant thingeys you'd have to at least block while multiple threads create the Thingey for the same key. Reducing it so that they only block in that situation is somewhat harder.
I would suggest you use the above code but profile it to see whether it's fast enough. If you really need "only block when another thread is already creating the same thingey" then let us know and we'll see what we can do...
EDIT: You've commented on Adam's answer that you "don't want to lock while a new Thingey is being created" - you do realise that there's no getting away from that if there's contention for the same key, right? If thread 1 starts creating a Thingey, then thread 2 asks for the same key, your alternatives for thread 2 are either waiting or creating another instance.
EDIT: Okay, this is generally interesting, so here's a first pass at the "only block other threads asking for the same item".
private readonly object dictionaryLock = new object();
private readonly object creationLocksLock = new object();
private readonly Dictionary<string, Thingey> thingeys;
private readonly Dictionary<string, object> creationLocks;
public Thingey GetThingey(Request request)
{
string key = request.ThingeyName;
Thingey ret;
bool entryExists;
lock (dictionaryLock)
{
entryExists = thingeys.TryGetValue(key, out ret);
// Atomically mark the dictionary to say we're creating this item,
// and also set an entry for others to lock on
if (!entryExists)
{
thingeys[key] = null;
lock (creationLocksLock)
{
creationLocks[key] = new object();
}
}
}
// If we found something, great!
if (ret != null)
{
return ret;
}
// Otherwise, see if we're going to create it or whether we need to wait.
if (entryExists)
{
object creationLock;
lock (creationLocksLock)
{
creationLocks.TryGetValue(key, out creationLock);
}
// If creationLock is null, it means the creating thread has finished
// creating it and removed the creation lock, so we don't need to wait.
if (creationLock != null)
{
lock (creationLock)
{
Monitor.Wait(creationLock);
}
}
// We *know* it's in the dictionary now - so just return it.
lock (dictionaryLock)
{
return thingeys[key];
}
}
else // We said we'd create it
{
Thingey thingey = new Thingey(request);
// Put it in the dictionary
lock (dictionaryLock)
{
thingeys[key] = thingey;
}
// Tell anyone waiting that they can look now
lock (creationLocksLock)
{
Monitor.PulseAll(creationLocks[key]);
creationLocks.Remove(key);
}
return thingey;
}
}
Phew!
That's completely untested, and in particular it isn't in any way, shape or form robust in the face of exceptions in the creating thread... but I think it's the generally right idea :)
If you're looking to avoid blocking unrelated threads, then additional work is needed (and should only be necessary if you've profiled and found that performance is unacceptable with the simpler code). I would recommend using a lightweight wrapper class that asynchronously creates a Thingey and using that in your dictionary.
Dictionary<string, ThingeyWrapper> thingeys = new Dictionary<string, ThingeyWrapper>();
private class ThingeyWrapper
{
    public Thingey Thing { get; private set; }

    private object creationLock;
    private Request request;

    public ThingeyWrapper(Request request)
    {
        creationLock = new object();
        this.request = request;
    }

    public void WaitForCreation()
    {
        object flag = creationLock;
        if (flag != null)
        {
            lock (flag)
            {
                if (request != null) Thing = new Thingey(request);
                creationLock = null;
                request = null;
            }
        }
    }
}
public Thingey GetThingey(Request request)
{
string thingeyName = request.ThingeyName;
ThingeyWrapper output;
lock (this.Thingeys)
{
if(!this.Thingeys.TryGetValue(thingeyName, out output))
{
output = new ThingeyWrapper(request);
this.Thingeys.Add(thingeyName, output);
}
}
output.WaitForCreation();
return output.Thing;
}
While you are still locking on all calls, the creation process is much more lightweight.
Edit
This issue has stuck with me more than I expected it to, so I whipped together a somewhat more robust solution that follows this general pattern. You can find it here.
IMHO, if this piece of code is called from many threads simultaneously, it is recommended to check twice.
(But I'm not sure that you can safely call ContainsKey while some other thread is calling Add, so it might not be possible to avoid the lock for the first check after all.)
If you just want to avoid a Thingey being created but not used, just create it within the locking block:
private Dictionary<string, Thingey> Thingeys;
public Thingey GetThingey(Request request)
{
string thingeyName = request.ThingeyName;
if (!this.Thingeys.ContainsKey(thingeyName))
{
lock (this.Thingeys)
{
// only one thread can create the same Thingey
if (!this.Thingeys.ContainsKey(thingeyName))
{
Thingey newThingey = new Thingey(request);
this.Thingeys.Add(thingeyName, newThingey);
}
}
}
return this.Thingeys[thingeyName];
}
You have to ask yourself whether the specific ContainsKey operation and the getter are themselves thread-safe (and will stay that way in newer versions), because they may and will be invoked while another thread has the dictionary locked and is performing the Add.
Typically, .NET locks are fairly efficient if used correctly, and I believe that in this situation you're better off doing this:
Thingey thingey;
bool exists;
lock (thingeys) {
    exists = thingeys.TryGetValue(thingeyName, out thingey);
}
if (!exists) {
    thingey = new Thingey(request);
    lock (thingeys) {
        Thingey existing;
        if (thingeys.TryGetValue(thingeyName, out existing)) {
            // someone else got there first - use the stored instance
            thingey = existing;
        } else {
            thingeys.Add(thingeyName, thingey);
        }
    }
}
return thingey;
Well, I hope I'm not being too naive in giving this answer, but what I would do, as Thingeys are expensive to create, would be to add the key with a null value first. That is, something like this:
private Dictionary<string, Thingey> Thingeys;

public Thingey GetThingey(Request request)
{
    string thingeyName = request.ThingeyName;
    if (!this.Thingeys.ContainsKey(thingeyName))
    {
        lock (this.Thingeys)
        {
            if (!this.Thingeys.ContainsKey(thingeyName))
            {
                // reserve the key first, then create the Thingey on 1st reference
                this.Thingeys.Add(thingeyName, null);
                Thingey newThingey = new Thingey(request);
                Thingeys[thingeyName] = newThingey;
            }
            // else - oops someone else beat us to it
            // but it doesn't matter anymore since we only created one Thingey
        }
    }
    return this.Thingeys[thingeyName];
}
I modified your code in a rush so no testing was done.
Anyway, I hope my idea is not so naive. :D
You might be able to buy a little bit of speed efficiency at the expense of memory. If you create an immutable array that lists all of the created Thingeys and reference the array with a static variable, then you can check the existence of a Thingey outside of any lock, since immutable arrays are always thread-safe. Then, when adding a new Thingey, you create a new array with the additional Thingey and replace the old one (in the static variable) in a single atomic set operation. Some new Thingeys may be missed because of race conditions, but the program shouldn't fail; it just means that on rare occasions extra duplicate Thingeys will be made.
This will not replace the need for duplicate checking when creating a new Thingy, and it will use a lot of memory resources, but it will not require that the lock be taken or held while creating a Thingy.
I'm thinking of something along these lines, sorta:
private Dictionary<string, Thingey> Thingeys;
// An immutable list of (most of) the thingeys that have been created.
private string[] existingThingeys;
public Thingey GetThingey(Request request)
{
string thingeyName = request.ThingeyName;
// Reference the same list throughout the method, just in case another
// thread replaces the global reference between operations.
string[] localThingyList = existingThingeys;
// Check to see if we already made this Thingey. (This might miss some,
// but it doesn't matter.)
// This operation on an immutable array is thread-safe.
if (localThingyList.Contains(thingeyName))
{
// But referencing the dictionary is not thread-safe.
lock (this.Thingeys)
{
if (this.Thingeys.ContainsKey(thingeyName))
return this.Thingeys[thingeyName];
}
}
Thingey newThingey = new Thingey(request);
Thingey ret;
// We haven't locked anything at this point, but we have created a new
// Thingey that we probably needed.
lock (this.Thingeys)
{
// If it turns out that the Thingey was already there, then
// return the old one.
if (!Thingeys.TryGetValue(thingeyName, out ret))
{
// Otherwise, add the new one.
Thingeys.Add(thingeyName, newThingey);
ret = newThingey;
}
}
// Update our existingThingeys array atomically.
string[] newThingyList = new string[localThingyList.Length + 1];
Array.Copy(localThingyList, newThingyList, localThingyList.Length);
newThingyList[localThingyList.Length] = thingeyName;
existingThingeys = newThingyList; // Voila!
return ret;
}