I am using DotNetOpenAuth in conjunction with Mono 2.10. When context.Application.UnLock() is called, an exception is thrown indicating the lock was never acquired in the first place. I've modified the code as shown below.
My question is: does the modified code serve the same purpose, and does Mono under Apache even support locking in this way?
Original
context.Application.Lock();
try
{
if ((store = (IRelyingPartyApplicationStore)context.Application[ApplicationStoreKey]) == null)
{
context.Application[ApplicationStoreKey] = store = new StandardRelyingPartyApplicationStore();
}
}
finally
{
context.Application.UnLock();
}
My Modifications
lock (app)
{
try
{
if ((store = (IRelyingPartyApplicationStore)context.Application[ApplicationStoreKey]) == null)
{
context.Application[ApplicationStoreKey] = store = new StandardRelyingPartyApplicationStore();
}
}
finally
{
//context.Application.UnLock();
}
}
Actually, Application.Lock() is not the same thing as lock(app).
Application.Lock() locks all threads in the pool, whereas lock(app) can only lock the threads of the current pool.
If you have problems with the Application data, save it in a static variable instead; there you can use lock(), which is faster and is the approach suggested by Microsoft.
For more details, read also this similar answer: https://stackoverflow.com/a/10964038/159270
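A minimal sketch of that static-variable suggestion, assuming the same DotNetOpenAuth store types as the question (the field and helper names here are only illustrative):

private static readonly object _storeSync = new object();
private static IRelyingPartyApplicationStore _store;

private static IRelyingPartyApplicationStore GetStore()
{
    // Guard the lazy initialization with an ordinary lock instead of
    // HttpApplicationState.Lock()/UnLock().
    lock (_storeSync)
    {
        if (_store == null)
        {
            _store = new StandardRelyingPartyApplicationStore();
        }
        return _store;
    }
}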
By the way, this is the code behind Application.Lock():
public void Lock()
{
this._lock.AcquireWrite();
}
internal virtual void AcquireWrite()
{
lock (this)
{
while (this._lock != 0)
{
try
{
Monitor.Wait(this);
continue;
}
catch (ThreadInterruptedException)
{
continue;
}
}
this._lock = -1;
}
}
Related
I have a simple logging mechanism that should be thread safe. It works most of the time, but every now and then I get an exception on the line "_logQ.Enqueue(s);" saying that the queue is not long enough. Looking in the debugger there are sometimes just hundreds of items, so I can't see it being a resource issue. The queue is supposed to expand as needed. If I catch the exception, as opposed to letting the debugger pause at it, I see the same error. Is there something not thread safe here? I don't even know how to start debugging this.
static void ProcessLogQ(object state)
{
try
{
while (_logQ.Count > 0)
{
var s = _logQ.Dequeue();
string dir="";
Type t=Type.GetType("Mono.Runtime");
if (t!=null)
{
dir ="/var/log";
}else
{
dir = @"c:\log";
if (!Directory.Exists(dir))
Directory.CreateDirectory(dir);
}
if (Directory.Exists(dir))
{
File.AppendAllText(Path.Combine(dir, "admin.log"), DateTime.Now.ToString("hh:mm:ss ") + s + Environment.NewLine);
}
}
}
catch (Exception)
{
}
finally
{
_isProcessingLogQ = false;
}
}
public static void Log(string s) {
if (_logQ == null)
_logQ = new Queue<string> { };
lock (_logQ)
_logQ.Enqueue(s);
if (!_isProcessingLogQ) {
_isProcessingLogQ = true;
ThreadPool.QueueUserWorkItem(ProcessLogQ);
}
}
Note that the threads all call Log(string s). ProcessLogQ is private to the logger class.
* Edit *
I made a mistake in not mentioning that this is in a .NET 3.5 environment, therefore I can't use Task or ConcurrentQueue. I am working on fixes for the current example within .NET 3.5 constraints.
** Edit **
I believe I have a thread-safe version for .NET 3.5 listed below. I start the logger thread once from a single thread at program start, so there is only one thread running to log to the file (t is a static Thread):
static void ProcessLogQ()
{
while (true) {
try {
lock (_logQ)
while (_logQ.Count > 0) {
var s = _logQ.Dequeue ();
string dir = "../../log";
if (!Directory.Exists (dir))
Directory.CreateDirectory (dir);
if (Directory.Exists (dir)) {
File.AppendAllText (Path.Combine (dir, "s3ol.log"), DateTime.Now.ToString ("hh:mm:ss ") + s + Environment.NewLine);
}
}
} catch (Exception ex) {
Console.WriteLine (ex.Message);
} finally {
}
Thread.Sleep (1000);
}
}
public static void startLogger(){
lock (t) {
if (t.ThreadState != ThreadState.Running)
t.Start ();
}
}
private static void multiThreadLog(string msg){
lock (_logQ)
_logQ.Enqueue(msg);
}
Look at the Task Parallel Library. All the hard work is already done for you. If you're doing this to learn about multithreading, read up on locking techniques and the pros and cons of each.
Further, you're checking whether _logQ is null outside your lock statement; from what I can deduce, it's a static field that you're not initializing inside a static constructor. You can avoid this null check (which is critical code and should be inside a lock!) and ensure thread safety by making the field static readonly and initializing it inside the static constructor.
Further, you're not properly handling the queue state. Since there's no lock during the check of the queue count, it could vary on every iteration, and you're also missing a lock as you're dequeuing items.
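A rough sketch of those two points for .NET 3.5 (the class and field names follow the question's code, but the exact shape is only illustrative):

using System.Collections.Generic;

static class Logger
{
    // static readonly field, initialized in the static constructor,
    // so no null check is ever needed
    private static readonly Queue<string> _logQ;

    static Logger()
    {
        _logQ = new Queue<string>();
    }

    public static void Log(string s)
    {
        lock (_logQ)
        {
            _logQ.Enqueue(s);
        }
    }

    static void ProcessLogQ(object state)
    {
        while (true)
        {
            string s;
            lock (_logQ)
            {
                // the count check and the dequeue happen under the same lock
                if (_logQ.Count == 0)
                    break;
                s = _logQ.Dequeue();
            }
            // append s to the log file here, outside the lock
        }
    }
}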
Excellent resource:
http://www.yoda.arachsys.com/csharp/threads/
For a thread-safe queue, you should use the ConcurrentQueue instead:
https://msdn.microsoft.com/en-us/library/dd267265(v=vs.110).aspx
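A minimal usage sketch (note that ConcurrentQueue ships with .NET 4.0 and later, so it would not fit the .NET 3.5 constraint mentioned in the question's edit):

using System;
using System.Collections.Concurrent;

var logQ = new ConcurrentQueue<string>();

// Enqueue is safe from any thread, no explicit lock needed
logQ.Enqueue("message");

// TryDequeue atomically checks for an item and removes it;
// it returns false when the queue is empty
string s;
while (logQ.TryDequeue(out s))
{
    Console.WriteLine(s);
}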
While keeping in mind that:
I am using a blocking queue that waits forever until something is added to it
I might get a FileSystemWatcher event twice
The updated code:
{
FileProcessingManager processingManager = new FileProcessingManager();
processingManager.RegisterProcessor(new ExcelFileProcessor());
processingManager.RegisterProcessor(new PdfFileProcessor());
processingManager.Completed += new ProcessingCompletedHandler(ProcessingCompletedHandler);
processingManager.Completed += new ProcessingCompletedHandler(LogFileStatus);
while (true)
{
try
{
var jobData = (JobData)fileMonitor.FileQueue.Dequeue();
if (jobData == null)
break;
_pool.WaitOne();
Application.Log(String.Format("{0}:{1}", DateTime.Now.ToString(CultureInfo.InvariantCulture), "Thread launched"));
Task.Factory.StartNew(() => processingManager.Process(jobData));
}
catch (Exception e)
{
Application.Log(String.Format("{0}:{1}", DateTime.Now.ToString(CultureInfo.InvariantCulture), e.Message));
}
}
}
What are your suggestions for making the code multi-threaded while taking into consideration the possibility that two identical string paths may be added to the blocking queue? I have left open the possibility that this might happen, and in that case the file would be processed twice. The thing is that sometimes I get it twice and sometimes not, which is really awkward; if you have suggestions on this, please tell me.
The null check is for exiting the loop; I intentionally add a null from outside the threaded loop to signal it to stop.
For multi-threading this... I would probably add a "Completed" event to your FileProcessingManager and register for it. One argument of that event will be the "bool" return value you currently have. Then in that event handler, I would do the checking of the bool and re-queueing of the file. Note that you will have to keep a reference to the FileMonitorManager. So, I would have this ThreadProc method be in a class where you keep the FileMonitorManager and FileProcessingManager instances in a property.
To deduplicate, in ThreadProc, I would create a List outside of the while loop. Then inside the while loop, before you process a file, lock that list, check to see if the string is already in there, if not, add the string to the list and process the file, if it is, then skip processing.
Obviously, this is based on little information surrounding your method but my 2 cents anyway.
Rough code, from Notepad:
private static FileMonitorManager fileMon = null;
private static FileProcessingManager processingManager = new FileProcessingManager();
private static void ThreadProc(object param)
{
processingManager.RegisterProcessor(new ExcelFileProcessor());
processingManager.RegisterProcessor(new PdfFileProcessor());
processingManager.Completed += ProcessingCompletedHandler;
var procList = new List<string>();
while (true)
{
try
{
var path = (string)fileMon.FileQueue.Dequeue();
if (path == null)
break;
bool processThis = false;
lock(procList)
{
if(!procList.Contains(path))
{
processThis = true;
procList.Add(path);
}
}
if(processThis)
{
Thread t = new Thread (new ParameterizedThreadStart(processingManager.Process));
t.Start (path);
}
}
catch (System.Exception e)
{
Console.WriteLine(e.Message);
}
}
}
private static void ProcessingCompletedHandler(bool status, string path)
{
if (!status)
{
fileMon.FileQueue.Enqueue(path);
Console.WriteLine("\n\nError on file: " + path);
}
else
Console.WriteLine("\n\nSuccess on file: " + path);
}
I have a regular Queue object in C# (4.0) and I'm using BackgroundWorkers that access this Queue.
The code I was using is as follows:
do
{
while (dataQueue.Peek() == null // nothing waiting yet
&& isBeingLoaded == true // and worker 1 still actively adding stuff
)
System.Threading.Thread.Sleep(100);
// otherwise ready to do something:
if (dataQueue.Peek() != null) // because maybe the queue is complete and also empty
{
string companyId = dataQueue.Dequeue();
processLists(companyId);
// use up the stuff here //
} // otherwise nothing was there yet, it will resolve on the next loop.
} while (isBeingLoaded == true // still have stuff coming at us
|| dataQueue.Peek() != null); // still have stuff we haven’t done
However, I guess when dealing with threads I should be using a ConcurrentQueue.
I was wondering if there were examples of how to use a ConcurrentQueue in a do/while loop like the above?
Everything I tried with TryPeek wasn't working.
Any ideas?
You can use a BlockingCollection<T> as a producer-consumer queue.
My answer makes some assumptions about your architecture, but you can probably mold it as you see fit:
public void Producer(BlockingCollection<string> ids)
{
// assuming this.CompanyRepository exists
foreach (var id in this.CompanyRepository.GetIds())
{
ids.Add(id);
}
ids.CompleteAdding(); // nothing left for our workers
}
public void Consumer(BlockingCollection<string> ids)
{
while (true)
{
string id = null;
try
{
id = ids.Take();
} catch (InvalidOperationException) {
}
if (id == null) break;
processLists(id);
}
}
You could spin up as many consumers as you need:
var companyIds = new BlockingCollection<string>();
Producer(companyIds);
Action process = () => Consumer(companyIds);
// 2 workers
Parallel.Invoke(process, process);
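If you would rather keep the original do/while shape with a ConcurrentQueue instead, a rough sketch could look like the following (still assuming your isBeingLoaded flag means worker 1 is actively adding items, and processLists is your existing method):

var dataQueue = new ConcurrentQueue<string>();

do
{
    string companyId;
    if (dataQueue.TryDequeue(out companyId))      // atomically checks for an item and removes it
    {
        processLists(companyId);
    }
    else if (isBeingLoaded)                       // nothing yet, but the producer is still adding
    {
        System.Threading.Thread.Sleep(100);
    }
} while (isBeingLoaded || !dataQueue.IsEmpty);    // keep going while stuff is coming or still queued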
Is it possible to detect whether the same thread is trying to release the lock?
We have many places in the code that look like:
try
{
try
{
if(!Monitor.TryEnter(obj, 2000))
{
throw new Exception("can not lock");
}
}
finally
{
Monitor.Exit(obj);
}
}
catch
{
//Log
}
The above code is very simplified; the actual Enter and Exit calls are located in a custom object (a lock manager).
The problem is that with this structure we get a SynchronizationLockException when trying to Exit, because it looks like a thread that did not succeed in acquiring the lock still tries to release it in the finally block.
So the question is: how can I know whether the thread calling Monitor.Exit is the same thread that did Monitor.Enter?
I thought I could use CurrentThread.Id to match up the Enter and Exit calls, but I'm not sure that is "safe" enough.
So the question is: how can I know whether the thread calling Monitor.Exit is the same thread that did Monitor.Enter?
You can't, easily, as far as I'm aware. You can't find out which thread owns a monitor.
However, this is just a coding issue - you should change your code so that it doesn't even attempt to release the monitor when it shouldn't. So your code above could be rewritten as:
if (!Monitor.TryEnter(obj, 2000))
{
throw new Exception(...);
}
try
{
// Presumably other code
}
finally
{
Monitor.Exit(obj);
}
Or even better, if you're using .NET 4, use the overload of TryEnter which accepts a ref bool lockTaken parameter:
bool gotMonitor = false;
try
{
Monitor.TryEnter(obj, ref gotMonitor);
if (!gotMonitor)
{
throw new Exception(...);
}
// Presumably other code
}
finally
{
if (gotMonitor)
{
Monitor.Exit(obj);
}
}
Since you think that putting the call to Monitor.Exit inside a try-catch is "dirty", here's a very simple idea that tries to take the dirtiness away. A lock is reentrant for the same thread, and if one thread has acquired it successfully, an attempt from another thread will fail until it is released. So you could consider something like:
public void Exit(object key) {
if(!IsActive) {
return;
}
if(LockDictionary.ContainsKey(key)) {
var syncObject=LockDictionary[key];
if(Monitor.TryEnter(syncObject.SyncObject, 0)) {
SetLockExit(syncObject);
Monitor.Exit(syncObject.SyncObject);
Monitor.Exit(syncObject.SyncObject);
}
}
}
We call Monitor.Exit twice because the lock has been taken twice: once in the outer code, and once just here.
I know this is an older question, but here's my answer anyway.
I would move the try-finally construct inside the if:
try
{
if(Monitor.TryEnter(obj, 2000))
{
try
{
// code here
}
finally
{
Monitor.Exit(obj);
}
}
else
{
throw new Exception("Can't acquire lock");
}
}
catch
{
// log
}
I'm profiling the code below, which lives inside a singleton, and found that a lot of Rate objects are kept in memory although I clear them.
protected void FetchingRates()
{
int count = 0;
while (true)
{
try
{
if (m_RatesQueue.Count > 0)
{
List<RateLog> temp = null;
lock (m_RatesQueue)
{
temp = new List<RateLog>();
temp.AddRange(m_RatesQueue);
m_RatesQueue.Clear();
}
foreach (RateLog item in temp)
{
m_ConnectionDataAccess.InsertRateLog(item);
}
temp.Clear();
temp = null;
}
count++;
Thread.Sleep(int.Parse(ConfigurationManager.AppSettings["RatesIntreval"].ToString()));
}
catch (Exception ex)
{
}
}
}
The insertion into the queue is done by:
public void InsertLogRecord(RateLog msg)
{
try
{
if (m_RatesQueue != null)
{
//lock (((ICollection)m_queue).SyncRoot)
lock (m_RatesQueue)
{
//insert new job to the line and release the thread to continue working.
m_RatesQueue.Add(msg);
}
}
}
catch (Exception ex)
{
}
}
The worker inserts a rate log into the DB as follows:
internal int InsertRateLog(RateLog item)
{
try
{
SqlCommand dbc = GetStoredProcCommand("InsertRateMonitoring");
if (dbc == null)
return 0;
dbc.Parameters.Add(new SqlParameter("@HostName", item.HostName));
dbc.Parameters.Add(new SqlParameter("@RateType", item.RateType));
dbc.Parameters.Add(new SqlParameter("@LastUpdated", item.LastUpdated));
return ExecuteNonQuery(dbc);
}
catch (Exception ex)
{
return 0;
}
}
Does anyone see a possible memory leak?
I would suggest that the first place to start is to stop swallowing all exceptions.
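For example, a minimal change to the empty catch blocks (a sketch; how and where you log is up to you):

catch (Exception ex)
{
    // At the very least, surface the failure instead of hiding it,
    // then rethrow so the caller knows something went wrong.
    Console.Error.WriteLine(ex);
    throw;
}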
You are certainly clearing the queue and the temporary list temp (which is unnecessary since it is eligible for collection even before you assign null to the reference). At this point I think your problem is more likely related to the following line.
m_ConnectionDataAccess.InsertRateLog(item);
You are passing a reference to a RateLog to another method. You have not provided any details on this method so I cannot eliminate the possibility that it is storing its own copy of the reference in a separate data structure.
I have experienced the same issue. There is probably a real explanation for this, but I couldn't find it.
I assumed that because I was in a while(true) loop the GC wouldn't run. I don't know if this is an artefact of MS's implementation of the .NET Framework (.NET 3.5), but it is what I experienced.
The way I mitigated the memory pile up was by putting GC.Collect(); at the bottom of the loop.
I have a feeling it was something to do with undisposed SqlConnection objects.
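If undisposed ADO.NET objects are indeed the cause, wrapping them in using blocks so they are released deterministically is the usual remedy. A hedged sketch based on the question's InsertRateLog (GetStoredProcCommand and ExecuteNonQuery are the question's own helpers, so the surrounding connection handling may differ):

internal int InsertRateLog(RateLog item)
{
    // Dispose the command deterministically instead of leaving it to the GC
    using (SqlCommand dbc = GetStoredProcCommand("InsertRateMonitoring"))
    {
        if (dbc == null)
            return 0;
        dbc.Parameters.Add(new SqlParameter("@HostName", item.HostName));
        dbc.Parameters.Add(new SqlParameter("@RateType", item.RateType));
        dbc.Parameters.Add(new SqlParameter("@LastUpdated", item.LastUpdated));
        return ExecuteNonQuery(dbc);
    }
}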
There is no need to clear and null out the List<RateLog> temp. It will be collected by the GC anyway, because once the function scope is left there are no more references to it, so it becomes eligible for collection.