I have a service that first looks for a value in a local SQL table. If the value does not exist, it makes a remote call. If the remote call returns a value, that value is added to the local SQL table.
I have noticed Unique Index exceptions in the log file saying that the string value XXX already exists.
To me that means that the following happened:
Request1_localSqlCheck
Request1_remoteCheck
Request2_localSqlCheck
Request2_remoteCheck
Request1_AddValueLocallyIfRecievedByRemote
Request2_AddValueLocallyIfRecievedByRemote // Same value Added here causes exception.
I want to make all 3 steps atomic by locking them:
Lock
{
Request1_localSqlCheck
Request1_remoteCheck
Request1_AddValueLocallyIfRecievedByRemote
}
Will this work putting a lock around these 3 method calls?
You could use a Mutex to block the code execution until the server returns the value.
For example:
private readonly Mutex m = new Mutex();
public int ThreadSafeMethod() {
    // you can do this immediately to skip the mutex
    if (Request1_localSqlCheck() == 1) return 1;
    m.WaitOne();
    try {
        // check again: another thread may have added the value while we waited
        if (Request1_localSqlCheck() == 1) return 1;
        Request1_remoteCheck();
        return Request1_AddValueLocallyIfRecievedByRemote();
    } finally {
        m.ReleaseMutex();
    }
}
Just to be clear, this can be really problematic if the code is called very frequently, because it will slow down all the other threads that have to wait for the mutex.
IMHO, if the "unique" error is not a big deal, you can stick with it; it can be annoying, but sometimes it is better to try and fail than to wait for the mutex to be released and slow everything down.
I have a network application that uses Lua scripts. Upon starting the application I create a global Lua state and load all the script files, which contain various functions, and for every client that connects I create a Lua thread for that connection.
// On start
var GL = luaL_newstate();
// register functions...
// load scripts...
// On connection
connection.State = lua_newthread(GL);
When a request that uses a script comes in, I get the global function and call it.
var NL = connection.State;
var result = lua_resume(NL, 0);
if (result != 0 && result != LUA_YIELD)
{
// error...
result = 0;
}
if (result == 0)
{
// function returned...
}
Now, some scripts require a response to something from the client, so I yield in those functions, to wait for it. When the response comes in, the script is resumed with lua_resume(NL, 1).
// Lua
text("How are you?")
local response = select("Good", "Bad")
// Host
private int select(IntPtr L)
{
// send response request...
return lua_yield(L, 1);
}
// On response
lua_pushstring(NL, response);
var result = lua_resume(NL, 1);
// ...
My problem is that I need to be able to cancel that yield, and return from the Lua function, without executing any more code in the Lua function, and without adding additional code to the scripts. In other words, I basically want to make the Lua thread throw an exception, get back to the start, and forget it ever executed that function.
Is that possible?
One thing I thought might work, but didn't, was calling lua_error. The result was an SEHException on the lua_error call, I assume because the script isn't currently running but is yielded.
While I didn't find a way to wipe a thread's slate clean (I don't think it's possible), I did find a solution by figuring out how lua_newthread works.
When the thread is created, a reference to it is put on the "global" state's stack, and it doesn't get collected until it's removed from there. All you have to do to clean up the thread is remove it from the stack with lua_remove. This means you have to create new threads regularly, but that's not much of a problem for me.
I'm now keeping track of the created threads and their indices on the stack, so I can remove them when I'm done with them for whatever reason (cancel, error, etc.). All other indices are updated, since the removal shifts down the ones that came after it.
if (sessionOver)
{
    // drop the thread's reference from the global state's stack so it can be collected
    lua_remove(GL, thread.StackIndex);

    // the removal shifts every entry above it down by one, so fix up the saved indices
    foreach (var t in threads)
    {
        if (t.StackIndex > thread.StackIndex)
            t.StackIndex--;
    }
}
I've got a routine called GetEmployeeList that loads when my Windows Application starts.
This routine pulls in basic employee information from our Active Directory server and retains this in a list called m_adEmpList.
We have a few Windows accounts set up as Public Profiles that most of our employees on our manufacturing floor use. This m_adEmpList gives our employees the ability to log in to select features using those Public Profiles.
Once all of the Active Directory data is loaded, I attempt to "auto logon" that employee based on the System.Environment.UserName if that person is logged in under their private profile. (employees love this, by the way)
If I do not thread GetEmployeeList, the Windows Form will appear unresponsive until the routine is complete.
The problem with GetEmployeeList is that we have had times when the Active Directory server was down, the network was down, or a particular computer was not able to connect over our network.
To get around these issues, I have included a ManualResetEvent m_mre with the THREADSEARCH_TIMELIMIT timeout so that the process does not run forever. I cannot log someone in using their Private Profile with System.Environment.UserName until I have the list of employees.
I realize I am not showing ALL of the code, but hopefully it is not necessary.
public static ADUserList GetEmployeeList()
{
if ((m_adEmpList == null) ||
(((m_adEmpList.Count < 10) || !m_gotData) &&
((m_thread == null) || !m_thread.IsAlive))
)
{
m_adEmpList = new ADUserList();
m_thread = new Thread(new ThreadStart(fillThread));
m_mre = new ManualResetEvent(false);
m_thread.IsBackground = true;
m_thread.Name = FILLTHREADNAME;
try {
m_thread.Start();
m_gotData = m_mre.WaitOne(THREADSEARCH_TIMELIMIT * 1000);
} catch (Exception err) {
Global.LogError(_CODEFILE + "GetEmployeeList", err);
} finally {
if ((m_thread != null) && (m_thread.IsAlive)) {
// m_thread.Abort();
m_thread = null;
}
}
}
return m_adEmpList;
}
I would like to just put a basic lock on something like m_adEmpList, but I'm not sure it is a good idea to lock the very object I need to populate, especially since the actual data population happens in another thread in the routine fillThread.
If the ManualResetEvent's WaitOne times out before the data I need has been collected, there is probably a network issue and m_adEmpList has few records (if any), so I would need to try to pull this information again the next time.
If anyone understands what I'm trying to explain, I'd like to see a better way of doing this.
It just seems too forced, right now. I keep thinking there is a better way to do it.
I think you're going about the multithreading part the wrong way. Threads should cooperate, not compete for resources, and that competition is exactly what's bothering you here. Another problem is that your timeout is simultaneously too long (it annoys users) and too short (the AD server may be a bit slow but still there and serving). Your goal should be to let the thread run in the background and update the list when it is finished; in the meantime, you present some fallbacks to the user along with a notification that the user list is still being populated.
A few more notes on your code above:
You have a variable m_thread that is only used locally. Further, your code contains a redundant check whether that variable is null.
If you create a user list with defaults/fallbacks first and then update it through a function (make sure you check the InvokeRequired flag of the displaying control!), you won't need a lock. This means the thread does not touch the list stored as a member, but works on a separate list it has exclusive access to; the update function then replaces (!) the member list, so it is again for exclusive use by the UI. A minimal sketch of this pattern follows these notes.
Lastly, if the AD server is really not there, try to forward the error from the background thread to the UI in some way, so that the user knows what's broken.
If you want, you can add an event to signal the thread to stop, but in most cases that won't even be necessary.
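A minimal sketch of that background-load pattern, assuming this lives inside the WinForms form; LoadAdUsers, UpdateEmployeeList and ShowAdError are hypothetical placeholders for the AD query, the UI update and the error display:
private void StartEmployeeListLoad()
{
    var worker = new Thread(() =>
    {
        ADUserList freshList;
        try
        {
            // slow AD query; runs off the UI thread on a list only this thread touches
            freshList = LoadAdUsers();
        }
        catch (Exception ex)
        {
            // forward the failure to the UI instead of swallowing it
            BeginInvoke((Action)(() => ShowAdError(ex)));
            return;
        }
        // hand the finished list to the UI thread, which replaces the member list
        BeginInvoke((Action)(() => UpdateEmployeeList(freshList)));
    });
    worker.IsBackground = true;
    worker.Start();
}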
I have a process A that reads in some data produced by some other process B. The data is 'exchanged' via the file system. To ensure that the file exists, process A currently checks for the file's existence like this:
while (!File.Exists(FileLocation))
{
Thread.Sleep(100);
}
This only seems to work 99 percent of the time. The other 1 percent of the time, process A establishes that the file exists but process B has not written everything yet (i.e. some data is missing).
Is there another, simpler way to make the above situation more bulletproof? Thanks.
Is there another, simpler way to make the above situation more bulletproof?
You could use a Mutex for reliable inter-process synchronization. Another possibility is to use a FileSystemWatcher.
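For the Mutex route, a minimal sketch, assuming both processes agree on the name (@"Global\FileExchangeMutex" here is just a placeholder) and each holds the mutex only while it reads or writes the file:
using (var mutex = new Mutex(false, @"Global\FileExchangeMutex"))
{
    mutex.WaitOne();
    try
    {
        // read or write the shared file here
    }
    finally
    {
        mutex.ReleaseMutex();
    }
}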
After determining that the file exists, you can try opening the file for exclusive access, which will fail if another process still has the file open:
try
{
    using (var stream = File.Open("foo", FileMode.Open, FileAccess.Read, FileShare.None))
    {
        // nobody else has the file open, so it is safe to read it here
    }
}
catch (IOException)
{
    // the writer still has the file open; go back to waiting and retry
}
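Putting the two together, one possible shape for process A's wait loop (FileLocation and the 100 ms delay come from the question):
while (true)
{
    if (File.Exists(FileLocation))
    {
        try
        {
            using (var stream = File.Open(FileLocation, FileMode.Open, FileAccess.Read, FileShare.None))
            {
                // process B has finished writing; read the data from stream here
                break;
            }
        }
        catch (IOException)
        {
            // B is still writing; fall through and wait
        }
    }
    Thread.Sleep(100);
}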
Given that you say that you can change both processes' code, you can use an EventWaitHandle to communicate between the processes.
In your program that creates the file, in the Main() method you can create an EventWaitHandle and keep it around until the end of the program. You'll need to pass the EventWaitHandle object around in your program so that it is available to the bit of code that creates the file (or provide some method that the file-creating code can call to set the event).
using (EventWaitHandle readySignaller = new EventWaitHandle(false, EventResetMode.ManualReset, "MySignalName"))
{
// Rest of program goes here...
// When your program creates the file, do this:
readySignaller.Set();
}
Then have some code like this in the program that's waiting for the file:
// Returns true if the wait was successful.
// Once this has returned true, it will return false until the file is created again.
public static bool WaitForFileToBeCreated(int timeoutMilliseconds) // Pass Timeout.Infinite to wait infinitely.
{
using (EventWaitHandle readySignaller = new EventWaitHandle(false, EventResetMode.ManualReset, "MySignalName"))
{
bool result = readySignaller.WaitOne(timeoutMilliseconds);
if (result)
{
readySignaller.Reset();
}
return result;
}
}
NOTE: If the wait succeeds, note that I reset the signal, and it will remain reset until the other process sets it again. You can handle the logic differently if you need to; this is just an example.
Essentially what we are (logically) doing here is sharing a bool between two processes. You have to be careful about the order in which you set and reset that shared bool.
Try the FileSystemWatcher.
Listens to the file system change notifications and raises events when
a directory, or file in a directory, changes.
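A rough sketch of that approach; the directory and file name are placeholders for wherever process B writes its output, and since the Created event can fire before B has finished writing, you would still combine it with the exclusive-open probe shown above:
var watcher = new FileSystemWatcher(@"C:\exchange") { Filter = "data.bin" };
watcher.Created += (sender, e) =>
{
    // the file now exists, but process B may still be writing it;
    // probe it with FileShare.None (as above) before reading
};
watcher.EnableRaisingEvents = true;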
While looking at the source code of System.ServiceModel.Channels.BufferManager, I noticed this method:
void TuneQuotas()
{
if (areQuotasBeingTuned)
return;
bool lockHeld = false;
try
{
try { }
finally
{
lockHeld = Monitor.TryEnter(tuningLock);
}
// Don't bother if another thread already has the lock
if (!lockHeld || areQuotasBeingTuned)
return;
areQuotasBeingTuned = true;
}
finally
{
if (lockHeld)
{
Monitor.Exit(tuningLock);
}
}
//
// DO WORK... (code removed for brevity)
//
areQuotasBeingTuned = false;
}
Obviously, they want only one thread to run TuneQuotas(), and other threads to not wait if it is already being run by another thread. I should note that the code removed was not try protected.
I'm trying to understand the advantages of this method above over just doing this:
void TuneQuotas()
{
if(!Monitor.TryEnter(tuningLock)) return;
//
// DO WORK...
//
Monitor.Exit(tuningLock);
}
Any ideas why they might have bothered with all that? I suspect the way they use the finally blocks is to guard against a thread-abort scenario, but I still don't see the point because, even with all this code, TuneQuotas() would be locked out for good if that one thread doesn't make it all the way to the end to set areQuotasBeingTuned = false, for one reason or another. So is there something cool about this pattern that I'm missing?
EDIT:
As a side note, it seems the method exists in .NET 4.0, which I confirmed using this code running on framework 4 (although I cannot confirm that the content of the method hasn't changed from what I found on the web):
var buffMgr = BufferManager.CreateBufferManager(1, 1);
var pooledBuffMgrType = buffMgr.GetType()
.GetProperty("InternalBufferManager")
.GetValue(buffMgr, null)
.GetType();
Debug.WriteLine(pooledBuffMgrType.Module.FullyQualifiedName);
foreach (var methodInfo in pooledBuffMgrType
.GetMethods(BindingFlags.Instance | BindingFlags.NonPublic))
{
Debug.WriteLine(methodInfo.Name);
}
which outputs:
C:\Windows\Microsoft.Net\assembly\GAC_MSIL\System.Runtime.DurableInstancing\v4.0_4.0.0.0__31bf3856ad364e35\System.Runtime.DurableInstancing.dll
ChangeQuota
DecreaseQuota
FindMostExcessivePool
FindMostStarvedPool
FindPool
IncreaseQuota
TuneQuotas
Finalize
MemberwiseClone
I'll add some comments:
void TuneQuotas()
{
if (areQuotasBeingTuned)
return; //fast-path, does not require locking
bool lockHeld = false;
try
{
try { }
finally
{
//finally-blocks cannot be aborted by Thread.Abort
//The thread could be aborted after getting the lock and before setting lockHeld
lockHeld = Monitor.TryEnter(tuningLock);
}
// Don't bother if another thread already has the lock
if (!lockHeld || areQuotasBeingTuned)
return; //areQuotasBeingTuned could have switched to true in the mean-time
areQuotasBeingTuned = true; //prevent others from needlessly trying to lock (trigger fast-path)
}
finally //ensure the lock being released
{
if (lockHeld)
{
Monitor.Exit(tuningLock);
}
}
//
// DO WORK... (code removed for brevity)
//
//this might be a bug. There should be a call to Thread.MemoryBarrier,
//or areQuotasBeingTuned should be volatile
//if not, the write might never reach other processor cores
//maybe this doesn't matter for x86
areQuotasBeingTuned = false;
}
The simple version you gave does not protect against some problems. At the very least it is not exception-safe (the lock won't be released). Interestingly, the "sophisticated" version doesn't either.
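For reference, a sketch of the simplified version with the obvious exception-safety fix; this still leaves the pre-.NET 4 window between TryEnter and try that the WCF code guards against:
void TuneQuotas()
{
    if (!Monitor.TryEnter(tuningLock))
        return;
    try
    {
        //
        // DO WORK...
        //
    }
    finally
    {
        Monitor.Exit(tuningLock);
    }
}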
This method has been removed from .NET 4.
Until .NET 4.0 there was essentially a bug in the code that was generated by a lock statement. It would generate something similar to the following:
Monitor.Enter(lockObject)
// see next paragraph
try
{
// code that was in the lock block
}
finally
{
Monitor.Exit(lockObject);
}
This means that if an exception occurred between Enter and try, the Exit would never be called. As usr alluded to, this could happen due to Thread.Abort.
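For comparison, from .NET 4.0 onward the compiler emits something closer to the following, using the Monitor.Enter(object, ref bool) overload so the "lock taken" flag is set reliably even if the acquisition is interrupted (a rough sketch, not the exact generated code):
bool lockTaken = false;
try
{
    Monitor.Enter(lockObject, ref lockTaken);
    // code that was in the lock block
}
finally
{
    if (lockTaken)
    {
        Monitor.Exit(lockObject);
    }
}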
Your example:
if(!Monitor.TryEnter(tuningLock)) return;
//
// DO WORK...
//
Monitor.Exit(tuningLock);
Suffers from this problem and more. The window in which this code can be interrupted and Exit not be called is basically the whole block of code, and it can happen due to any exception (not just one from Thread.Abort).
I have no idea why much of the code in .NET was written the way it was, but I surmise that this code was written to avoid the problem of an exception between Enter and try. Let's look at some of the details:
try{}
finally
{
lockHeld = Monitor.TryEnter(tuningLock);
}
Finally blocks basically generate a constrained execution region in IL. Constrained execution regions cannot be interrupted by anything. So, putting the TryEnter in the finally block above ensures that lockHeld reliably holds the state of the lock.
That block of code is contained in a try/finally block whose finally statement calls Monitor.Exit if lockHeld is true. This means there is no point between the Enter and the try block at which an interruption could leave the lock unreleased.
FWIW, this method was still in .NET 3.5 and is visible in the WCF 3.5 source code (not the .NET source code). I don't know yet what's in 4.0; but I would imagine it would be the same; there's no reason to change working code even if the impetus for part of its structure no longer exists.
For more details on what lock used to generate see http://blogs.msdn.com/b/ericlippert/archive/2007/08/17/subtleties-of-c-il-codegen.aspx
Any ideas why they might have bothered with all that?
After running some tests, I think I see one reason (if not THE reason): they probably bothered with all that because it is MUCH faster!
It turns out Monitor.TryEnter is an expensive call IF the object is already locked (if it's not locked, TryEnter is still very fast, no problems there). So all threads except the first one are going to experience the slowness.
I didn't think this would matter much, since, after all, each thread only tries to take the lock once and then moves on (it's not like they'd be sitting there trying in a loop). However, I wrote some code for comparison, and it showed that the cost of TryEnter (when already locked) is significant. In fact, on my system each call took about 0.3 ms without the debugger attached, which is several orders of magnitude slower than a simple boolean check.
So I suspect this showed up in Microsoft's test results and they optimized the code as above by adding the fast-path boolean check. But that's just my guess.
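A rough sketch of the kind of comparison described above (not the original benchmark): one thread holds the lock while the main thread times contended Monitor.TryEnter calls against a plain boolean check using Stopwatch:
var gate = new object();
var quotasBeingTuned = true;   // stands in for the areQuotasBeingTuned flag
const int iterations = 100000;

// keep the lock held while the contended TryEnter path is measured
var holder = new Thread(() => { lock (gate) Thread.Sleep(10000); }) { IsBackground = true };
holder.Start();
Thread.Sleep(100);   // give the holder time to take the lock

var sw = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
{
    if (Monitor.TryEnter(gate)) Monitor.Exit(gate);   // fails while the lock is held
}
sw.Stop();
Console.WriteLine("TryEnter on a held lock: {0} ms per call", sw.Elapsed.TotalMilliseconds / iterations);

sw.Restart();
var skipped = 0;
for (int i = 0; i < iterations; i++)
{
    if (quotasBeingTuned) skipped++;   // the boolean fast path
}
sw.Stop();
Console.WriteLine("Boolean fast path: {0} ms per call ({1} skipped)", sw.Elapsed.TotalMilliseconds / iterations, skipped);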
In my C# application, multiple clients access the same server. To process one client at a time, the code below was written. In the code I used the Monitor class and also the Queue class. Will this code affect performance? If I use the Monitor class, should I remove the Queue class from the code?
Sometimes the remote server machine where my application runs as a service is totally down. Is the code below the reason, because all the clients end up waiting in a queue? When I check with the netstat -an command at the command prompt, for 8 clients it shows 50 connections stuck in TIME_WAIT.
Below is my code where the client accesses the server:
if (Id == "")
{
System.Threading.Monitor.Enter(this);
try
{
if (Request.AcceptTypes == null)
{
queue.Enqueue(Request.QueryString["sessionid"].Value);
string que = "";
que = queue.Dequeue();
TypeController.session_id = que;
langStr = SessionDatabase.Language;
filter = new AllThingzFilter(SessionDatabase, parameters, langStr);
TypeController.session_id = "";
filter.Execute();
Request.Clear();
return filter.XML;
}
else
{
TypeController.session_id = "";
filter = new AllThingzFilter(SessionDatabase, parameters, langStr);
filter.Execute();
}
}
finally
{
System.Threading.Monitor.Exit(this);
}
}
Locking this is pretty wrong; it won't work at all if every thread uses a different instance of whatever class this code lives in. It isn't clear from the snippet whether that's the case, but fix that first. Create a separate object just to store the lock and make it static, or give it the same scope as the shared object you are trying to protect (also not clear from the snippet).
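A minimal sketch of that suggestion (the field name is a placeholder); every request then contends on the same lock no matter which instance handles it:
private static readonly object _requestLock = new object();

// ...inside the request handler, instead of Monitor.Enter(this):
lock (_requestLock)
{
    // enqueue/dequeue the session id and run the filter here
}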
You might still have trouble since this sounds like a deadlock rather than a race. Deadlocks are pretty easy to troubleshoot with the debugger since the code got stuck and is not executing at all. Debug + Break All, then Debug + Windows + Threads. Locate the worker threads in the thread list. Double click one to select it and use Debug + Call Stack to see where it got stuck. Repeat for other threads. Look back through the stack trace to see where one of them acquired a lock and compare to other threads to see what lock they are blocking on.
That could still be tricky if the deadlock is intricate and involves multiple interleaved locks, in which case logging might help. Really hard-to-diagnose mandelbugs might require a rewrite that cuts back on the amount of threading.