Broken lock strategy - analysis and correction - C#

I'm asking this primarily as a sanity check: In a C# (8.0) application I've got this bit of code, which spuriously fails with an "object is not synchronized" exception from Monitor.Pulse() (I've omitted irrelevant code for clarity):
// vanilla multiple-producer single-consumer queue stuff:
private Queue<Message> messages = new Queue<Message>();

private void ConsumerThread () {
    Queue<Message> myMessages = new Queue<Message>();
    while (...) {
        lock (messages) {
            // wait
            while (messages.Count == 0)
                Monitor.Wait(messages);
            // swap
            (messages, myMessages) = (myMessages, messages);
        }
        // process
        while (myMessages.Count > 0)
            DoStuff(myMessages.Dequeue());
    }
}

public void EnqueueMessage (...) {
    Message message = new Message(...);
    lock (messages) {
        messages.Enqueue(message);
        Monitor.Pulse(messages);
    }
}
I'm fairly new to C# and also I was stressed when I wrote that. Now I am reviewing that code to fix the exception and I'm immediately raising an eyebrow at the fact that I reassigned messages inside the consumer's lock.
I looked around and found Is it bad to overwrite a lock object if it is the last statement in the lock?, which validates my raised eyebrow.
However, I still don't have a lot of confidence (inexperience + stress), so, just to confirm: Is the following analysis of why this is broken correct?
If the following happens, in this order:
Stuff happens to be in the queue.
Consumer thread locks messages (and will skip wait loop).
EnqueueMessage tries to lock messages, waits for lock.
Consumer thread swaps messages and myMessages, releases lock.
EnqueueMessage takes lock.
EnqueueMessage adds the item to messages and calls Monitor.Pulse(messages), except messages is no longer the same object that it locked in step (3), since it was swapped out from under us in (4). Possible consequences include:
Calling Monitor.Pulse on a non-locked object (what used to be myMessages) -- hence the aforementioned exception.
Enqueueing to the wrong queue and the consequences of that.
Even weirder stuff if the consumer thread manages to complete another full loop cycle while EnqueueMessage is still somewhere in its lock{}.
Right? I'm pretty sure that's right, it feels very basic, but I just want to confirm because I'm completely burnt out right now.
Then, whether that's correct or not: Does the following proposed fix make sense?
It seems to me like the fix is super simple: Instead of using messages as the monitor object, just use some dedicated dummy object that won't be changed:
private readonly object messagesLock = new object();
private Queue<Message> messages = new Queue<Message>();

private void ConsumerThread () {
    Queue<Message> myMessages = new Queue<Message>();
    while (...) {
        lock (messagesLock) {
            while (messages.Count == 0)
                Monitor.Wait(messagesLock);
            (messages, myMessages) = (myMessages, messages);
        }
        // process, as before
        ...
    }
}

public void EnqueueMessage (...) {
    ...;
    lock (messagesLock) {
        messages.Enqueue(...);
        Monitor.Pulse(messagesLock);
    }
}
Where the intent is to avoid any issues caused by swapping out the lock object in strange places.
And that should work... right?

Nobody has needed to hand-roll this with Queue since the concurrent collections arrived in .NET 4 (2010 - correct me if I'm wrong on the dates).
It is trivial with BlockingCollection:
BlockingCollection<Message> myMessages = new BlockingCollection<Message>();

private void ConsumerThread () {
    while (...)
    {
        var message = myMessages.Take();
        ...
    }
}

public void EnqueueMessage (Message msg) {
    ...;
    myMessages.Add(msg);
}
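For a clean shutdown you would typically pair that with CompleteAdding. A rough sketch of how the consumer side could look under that assumption (DoStuff is the same processing method as in the question; the Shutdown method is made up here):

private readonly BlockingCollection<Message> queue = new BlockingCollection<Message>();

private void ConsumerThread () {
    // Blocks while the collection is empty; the loop ends once CompleteAdding()
    // has been called and the remaining items have been drained.
    foreach (Message message in queue.GetConsumingEnumerable())
        DoStuff(message);
}

public void EnqueueMessage (Message msg) {
    queue.Add(msg);
}

public void Shutdown () {
    // Signal "no more producers"; lets the consumer finish and exit.
    queue.CompleteAdding();
}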

Related

Creating a buffer for Consumer and Producer threads using Queue c# .NET

I am writing a Windows service application that collects data from sensors such as temperature, pressure, volume, etc.
The frequency at which the data is read is pretty high: there could be a hundred sensors, and each could be sending a reading roughly once per second.
I need to store this data in an Oracle database, and for obvious reasons I don't want to hit the database at such a high rate.
Hence I want to create a buffer.
My plan is to create a buffer using the standard .NET Queue: a few threads keep enqueueing data into the queue, and another timer-driven thread keeps writing to the database at regular intervals.
What I want to know is: is this thread safe?
If it is not, what is the best way of creating an in-memory buffer?
To answer your question, as long as you lock accesses, you can have multiple threads access a regular queue.
For me though, I wanted to use regular queues with locks to keep them thread safe, and I have been doing this in C# for one of my programs. I just use a regular Queue and then put a lock around every access to it (Enqueue, Dequeue, Count). It is completely thread safe as long as you lock all the accesses.
My setup comes from the tutorial/example here: http://www.albahari.com/threading/part2.aspx#_ProducerConsumerQWaitHandle
My situation is a little different than yours, but pretty similar. For me, my data can come in very fast, and if I don't queue it I lose the data if multiple come in at the same time. Then I have a thread running that slowly takes items off the queue and processes them. This hand-off uses an AutoResetEvent to hold my working-thread until data is ready to be processed. In your case you would use a timer or something that happens regularly.
I copy/pasted my code and tried to change the names. Hopefully I didn't completely break it by missing some name changes, but you should be able to get the gist.
public class MyClass : IDisposable
{
private Thread sensorProcessingThread = null;
private Queue<SensorData> sensorQueue = new Queue<SensorData>();
private readonly object _sensorQueueLocker = new object();
private EventWaitHandle _whSensorEvent = new AutoResetEvent(false);
public MyClass () {
sensorProcessingThread = new Thread(sensorProcessingThread_DoWork);
sensorProcessingThread.Start();
}
public void Dispose()
{
// Signal the end by sending 'null'
EnqueueSensorEvent(null);
sensorProcessingThread.Join();
_whSensorEvent.Close();
}
// The fast sensor data comes in, locks queue, and then
// enqueues the data, and releases the EventWaitHandle
private void EnqueueSensorEvent( SensorData wd )
{
lock ( _sensorQueueLocker )
{
sensorQueue.Enqueue(wd);
_whSensorEvent.Set();
}
}
// When asynchronous events come in, I just throw them into queue
private void OnSensorEvent( object sender, MySensorArgs e )
{
EnqueueSensorEvent(new SensorData(sender, e));
}
// I have several types of events that can come in,
// they just get packaged up into the same "SensorData"
// struct, and I worry about the contents later
private void FileSystem_Changed( object sender, System.IO.FileSystemEventArgs e )
{
EnqueueSensorEvent(new SensorData(sender, e));
}
// This is the slower process that waits for new SensorData,
// and processes it. Note, if it sees 'null' as data,
// then it knows it should quit the while(true) loop.
private void sensorProcessingThread_DoWork( object obj )
{
while ( true )
{
SensorData wd = null;
lock ( _sensorQueueLocker )
{
if ( sensorQueue.Count > 0 )
{
wd = sensorQueue.Dequeue();
if ( wd == null )
{
// Quit the loop, thread finishes
return;
}
}
}
if ( wd != null )
{
try
{
// Call specific handlers for the type of SensorData that was received
if ( wd.isSensorDataType1 )
{
SensorDataType1_handler(wd.sender, wd.SensorDataType1Content);
}
else
{
FileSystemChanged_handler(wd.sender, wd.FileSystemChangedContent);
}
}
catch ( Exception exc )
{
// My sensor processing also has a chance of failing to process completely, so I have a retry
// methodology that gives up after 5 attempts
if ( wd.NumFailedUpdateAttempts < 5 )
{
wd.NumFailedUpdateAttempts++;
lock ( _sensorQueueLocker )
{
sensorQueue.Enqueue(wd);
}
}
else
{
log.Fatal("Can no longer try processing data", exc);
}
}
}
else
_whSensorEvent.WaitOne(); // No more tasks, wait for a signal
}
}
}
Something you could maybe look at is Reactive Extensions (Rx) for .NET from Microsoft. Check out: https://msdn.microsoft.com/en-us/data/gg577611.aspx and at the bottom of the page is a PDF tutorial, "Curing the asynchronous blues": http://go.microsoft.com/fwlink/?LinkId=208528 This is something very different, but maybe you will see something you like.
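If you want a feel for what the Rx route could look like for this buffering scenario, here is a minimal sketch, assuming the System.Reactive package and the SensorData type from the code above; Buffer collects everything that arrived in each time window so the database only gets hit once per window:

using System;
using System.Collections.Generic;
using System.Reactive.Linq;
using System.Reactive.Subjects;

public class SensorBuffer
{
    private readonly Subject<SensorData> readings = new Subject<SensorData>();

    public SensorBuffer()
    {
        // Every 5 seconds, flush whatever arrived in that window as a single batch.
        readings.Buffer(TimeSpan.FromSeconds(5))
                .Where(batch => batch.Count > 0)
                .Subscribe(WriteBatchToDatabase);
    }

    // Called by the fast sensor event handlers.
    public void OnReading(SensorData data)
    {
        readings.OnNext(data);
    }

    private void WriteBatchToDatabase(IList<SensorData> batch)
    {
        // One round trip to the database per batch instead of one per reading.
    }
}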

How to: Write a thread-safe method that may only be called once?

I'm attempting to write a thread-safe method which may only be called once (per object instance). An exception should be thrown if it has been called before.
I have come up with two solutions. Are they both correct? If not, what's wrong with them?
With lock:
public void Foo()
{
lock (fooLock)
{
if (fooCalled) throw new InvalidOperationException();
fooCalled = true;
}
…
}
private object fooLock = new object();
private bool fooCalled;
With Interlocked.CompareExchange:
public void Foo()
{
if (Interlocked.CompareExchange(ref fooCalled, 1, 0) == 1)
throw new InvalidOperationException();
…
}
private int fooCalled;
If I'm not mistaken, this solution has the advantage of being lock-free (which seems irrelevant in my case) and of requiring fewer private fields.
I am also open to justified opinions which solution should be preferred, and to further suggestions if there's a better way.
Your Interlocked.CompareExchange solution looks the best, and (as you said) is lock-free. It's also significantly less complicated than other solutions. Locks are quite heavyweight, whereas CompareExchange can be compiled down to a single CAS cpu instruction. I say go with that one.
The double-checked locking pattern is what you are after:
class Foo
{
private object someLock = new object();
private bool someFlag = false;
void SomeMethod()
{
// to prevent locking on subsequent calls
if(someFlag)
throw new Exception();
// to make sure only one thread can change the contents of someFlag
lock(someLock)
{
if(someFlag)
throw new Exception();
someFlag = true;
}
//execute your code
}
}
In general, when faced with issues like these, try to follow well-known patterns like the one above.
This makes the code recognizable and less error-prone, since you are less likely to miss something when following a pattern, especially when it comes to threading.
In your case the first if does not make a lot of sense but often you will want to execute the actual logic and then set the flag. A second thread would be blocked while you are executing your (maybe quite costly) code.
About the second sample:
Yes, it is correct, but don't make it more complicated than it needs to be. You should have a very good reason not to use simple locking, and in this situation it makes the code harder to follow (because Interlocked.CompareExchange() is less widely known) without achieving anything; as you pointed out, being lock-free rather than taking a lock to set a boolean flag is not a real benefit in this case.
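There is also a Task-based trick (below, from another answer): calling Start on a task that has already been started, or that has already completed, throws, so the framework enforces the call-once rule for you.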
Task task = new Task((Action)(() => { Console.WriteLine("Called!"); }));

public void Foo()
{
    task.Start();
}

public void Bar()
{
    Foo();
    Foo(); // this line throws - the exact exception depends on whether the task
           // is still in progress or has already completed
}

Serially process ConcurrentQueue and limit to one message processor. Correct pattern?

I'm building a multithreaded app in .net.
I have a thread that listens to a connection (abstract, serial, tcp...).
When it receives a new message, it adds it to the queue via AddMessage, which then calls startSpool. startSpool checks whether the spool is already running; if it is, it returns, otherwise it starts the spool in a new thread. The reason for this is that the messages HAVE to be processed serially, FIFO.
So, my questions are...
Am I going about this the right way?
Are there better, faster, cheaper patterns out there?
My apologies if there is a typo in my code, I was having problems copying and pasting.
ConcurrentQueue<IMyMessage> messages = new ConcurrentQueue<IMyMessage>();
const int maxSpoolInstances = 1;
object lcurrentSpoolInstances = new object();
int currentSpoolInstances = 0;
Thread spoolThread;
public void AddMessage(IMyMessage message)
{
this.messages.Enqueue(message);
this.startSpool();
}
private void startSpool()
{
bool run = false;
lock (lcurrentSpoolInstances)
{
if (currentSpoolInstances <= maxSpoolInstances)
{
this.currentSpoolInstances++;
run = true;
}
else
{
return;
}
}
if (run)
{
this.spoolThread = new Thread(new ThreadStart(spool));
this.spoolThread.Start();
}
}
private void spool()
{
IMyMessage message;
while (this.messages.Count > 0)
{
// TODO: Is this below line necessary or does the TryDequeue cover this?
message = null;
this.messages.TryDequeue(out message);
if (message != null)
{
// My long running thing that does something with this message.
}
}
lock (lcurrentSpoolInstances)
{
this.currentSpoolInstances--;
}
}
This would be easier using BlockingCollection<T> instead of ConcurrentQueue<T>.
Something like this should work:
class MessageProcessor : IDisposable
{
BlockingCollection<IMyMessage> messages = new BlockingCollection<IMyMessage>();
public MessageProcessor()
{
// Starting the single consumer here in the constructor prevents the race in the
// original code, where startSpool could end up starting multiple threads.
Task.Factory.StartNew(this.Spool, TaskCreationOptions.LongRunning);
}
public void AddMessage(IMyMessage message)
{
this.messages.Add(message);
}
private void Spool()
{
foreach(IMyMessage message in this.messages.GetConsumingEnumerable())
{
// long running thing that does something with this message.
}
}
public void FinishProcessing()
{
// This will tell the spooling you're done adding, so it shuts down
this.messages.CompleteAdding();
}
void IDisposable.Dispose()
{
this.FinishProcessing();
}
}
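Usage would then look something like this (MyMessage standing in for whatever IMyMessage implementation you have):

using (var processor = new MessageProcessor())
{
    processor.AddMessage(new MyMessage("hello"));
    processor.AddMessage(new MyMessage("world"));
    // Dispose calls CompleteAdding, so no new messages are accepted and the
    // consumer loop ends once the queue has been drained.
}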
Edit: If you wanted to support multiple consumers, you could handle that via a separate constructor. I'd refactor this to:
public MessageProcessor(int numberOfConsumers = 1)
{
for (int i=0;i<numberOfConsumers;++i)
StartConsumer();
}
private void StartConsumer()
{
// Start one dedicated, long-running consumer task.
Task.Factory.StartNew(this.Spool, TaskCreationOptions.LongRunning);
}
This would allow you to start any number of consumers. Note that this breaks the rule of having it be strictly FIFO - the processing will potentially process "numberOfConsumer" elements in blocks with this change.
Multiple producers are already supported. The above is thread safe, so any number of threads can call Add(message) in parallel, with no changes.
I think that Reed's answer is the best way to go, but for the sake of academics, here is an example using the concurrent queue -- you had some races in the code you posted (depending on how you handle incrementing currentSpoolInstances).
The changes I made (below) were:
Switched to a Task instead of a Thread (uses thread pool instead of incurring the cost of creating a new thread)
added the code to increment/decrement your spool instance count
changed the "if currentSpoolInstances <= max ... to just < to avoid having one too many workers (probably just a typo)
changed the way that empty queues were handled to avoid a race: I think you had a race, where your while loop could have tested false, (you thread begins to exit), but at that moment, a new item is added (so your spool thread is exiting, but your spool count > 0, so your queue stalls).
private ConcurrentQueue<IMyMessage> messages = new ConcurrentQueue<IMyMessage>();
const int maxSpoolInstances = 1;
object lcurrentSpoolInstances = new object();
int currentSpoolInstances = 0;
public void AddMessage(IMyMessage message)
{
this.messages.Enqueue(message);
this.startSpool();
}
private void startSpool()
{
lock (lcurrentSpoolInstances)
{
if (currentSpoolInstances < maxSpoolInstances)
{
this.currentSpoolInstances++;
Task.Factory.StartNew(spool, TaskCreationOptions.LongRunning);
}
}
}
private void spool()
{
IMyMessage message;
while (true)
{
// you do not need to null message because it is an "out" parameter, had it been a "ref" parameter, you would want to null it.
if(this.messages.TryDequeue(out message))
{
// My long running thing that does something with this message.
}
else
{
lock (lcurrentSpoolInstances)
{
if (this.messages.IsEmpty)
{
this.currentSpoolInstances--;
return;
}
}
}
}
}
Check 'Pipelines pattern': http://msdn.microsoft.com/en-us/library/ff963548.aspx
Use BlockingCollection for the 'buffers'.
Each Processor (e.g. ReadStrings, CorrectCase, ...) should run in a Task.
HTH..
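Not the article's code, but a rough sketch of the shape it describes, assuming three stages with BlockingCollection buffers between them (the stage names and transformation are stand-ins):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

class Pipeline
{
    public static void Run(IEnumerable<string> input)
    {
        var buffer1 = new BlockingCollection<string>(boundedCapacity: 32);
        var buffer2 = new BlockingCollection<string>(boundedCapacity: 32);

        var readStrings = Task.Factory.StartNew(() =>
        {
            foreach (var line in input)
                buffer1.Add(line);
            buffer1.CompleteAdding();                 // tell the next stage we're done
        }, TaskCreationOptions.LongRunning);

        var correctCase = Task.Factory.StartNew(() =>
        {
            foreach (var line in buffer1.GetConsumingEnumerable())
                buffer2.Add(line.ToUpperInvariant()); // stand-in transformation
            buffer2.CompleteAdding();
        }, TaskCreationOptions.LongRunning);

        var writeOutput = Task.Factory.StartNew(() =>
        {
            foreach (var line in buffer2.GetConsumingEnumerable())
                Console.WriteLine(line);              // final consumer
        }, TaskCreationOptions.LongRunning);

        Task.WaitAll(readStrings, correctCase, writeOutput);
    }
}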

Using WCF service via async interface from worker thread, how do I ensure that events are sent from the client "in order"

I am writing a Silverlight class library to abstract the interface to a WCF service. The WCF service provides a centralized logging service. The Silverlight class library provides a simplified log4net-like interface (logger.Info, logger.Warn, etc) for logging. From the class library I plan to provide options such that logged messages can be accumulated on the client and sent in "bursts" to the WCF logging service, rather than sending each message as it occurs. Generally, this is working well. The class library does accumulate messages and it does send collections of messages to the WCF logging service, where they are logged by an underlying logging framework.
My current problem is that the messages (from a single client with a single thread - all logging code is in button click events) are becoming interleaved in the logging service. I realize that at least part of this is probably due to the instancing (PerCall) or synchronization of the WCF logging service. However, it also seems that my messages are occurring in such rapid succession that the "bursts" of messages leaving on the async calls are actually "leaving" the client in a different order than they were generated.
I have tried to set up a producer consumer queue as described here with a slight (or should that be "slight" with air quotes) change that the Work method blocks (WaitOne) until the async call returns (i.e. until the async callback executes). The idea is that when one burst of messages is sent to the WCF logging service, the queue should wait until that burst has been processed before sending the next burst.
Maybe what I am trying to do is not feasible, or maybe I am trying to solve the wrong problem, (or maybe I just don't know what I am doing!).
Anyway, here is my producer/consumer queue code:
internal class ProducerConsumerQueue : IDisposable
{
EventWaitHandle wh = new AutoResetEvent(false);
Thread worker;
readonly object locker = new object();
Queue<ObservableCollection<LoggingService.LogEvent>> logEventQueue = new Queue<ObservableCollection<LoggingService.LogEvent>>();
LoggingService.ILoggingService loggingService;
internal ProducerConsumerQueue(LoggingService.ILoggingService loggingService)
{
this.loggingService = loggingService;
worker = new Thread(Work);
worker.Start();
}
internal void EnqueueLogEvents(ObservableCollection<LoggingService.LogEvent> logEvents)
{
//Queue the next burst of messages
lock(locker)
{
logEventQueue.Enqueue(logEvents);
//Is this Set conflicting with the WaitOne on the async call in Work?
wh.Set();
}
}
private void Work()
{
while(true)
{
ObservableCollection<LoggingService.LogEvent> events = null;
lock(locker)
{
if (logEventQueue.Count > 0)
{
events = logEventQueue.Dequeue();
if (events == null || events.Count == 0) return;
}
}
if (events != null && events.Count > 0)
{
System.Diagnostics.Debug.WriteLine("1. Work - Sending {0} events", events.Count);
//
// This seems to be the key...
// Send one burst of messages via an async call and wait until the async call completes.
//
loggingService.BeginLogEvents(events, ar =>
{
try
{
loggingService.EndLogEvents(ar);
System.Diagnostics.Debug.WriteLine("3. Work - Back");
wh.Set();
}
catch (Exception ex)
{
}
}, null);
System.Diagnostics.Debug.WriteLine("2. Work - Waiting");
wh.WaitOne();
System.Diagnostics.Debug.WriteLine("4. Work - Finished");
}
else
{
wh.WaitOne();
}
}
}
#region IDisposable Members
public void Dispose()
{
EnqueueLogEvents(null);
worker.Join();
wh.Close();
}
#endregion
}
In my test it is essentially called like this:
//Inside of LogManager, get the LoggingService and set up the queue.
ILoggingService loggingService = GetTheLoggingService();
ProducerConsumerQueue loggingQueue = new ProducerConsumerQueue(loggingService);
//Inside of client code, get a logger and log with it
ILog logger = LogManager.GetLogger("test");
for (int i = 0; i < 100; i++)
{
logger.InfoFormat("logging message [{0}]", i);
}
Internally, logger/LogManager accumulates some number of logging messages (say 25) before adding that group of messages to the queue. Something like this:
internal void AddNewMessage(string message)
{
lock(logMessages)
{
logMessages.Add(message);
if (logMessages.Count >= 25)
{
ObservableCollection<LogMessage> messages = new ObservableCollection<LogMessage>(logMessages);
logMessages.Clear();
loggingQueue.EnqueueLogEvents(messages);
}
}
}
So, in this case I would expect to have 4 bursts of 25 messages each. Based on the Debug statements in my ProducerConsumerQueue code (maybe not the best way to debug this?), I would expect to see something like this:
Work - Sending 25 events
Work - Waiting
Work - Back
Work - Finished
Repeated 4 times.
Instead I am seeing something like this:
1. Work - Sending 25 events
2. Work - Waiting
4. Work - Finished
1. Work - Sending 25 events
2. Work - Waiting
3. Work - Back
4. Work - Finished
1. Work - Sending 25 events
2. Work - Waiting
3. Work - Back
4. Work - Finished
1. Work - Sending 25 events
2. Work - Waiting
3. Work - Back
3. Work - Back
4. Work - Finished
I would have expected the queue to allow multiple bursts of messages to be added, but to completely process one burst (waiting on the async call to complete) before processing the next burst. It doesn't seem to be doing this; it does not seem to be reliably waiting on the completion of the async call. I do have a call to Set in EnqueueLogEvents - maybe that is cancelling the WaitOne in the Work method?
So, I have a few questions:
1. Does my explanation of what I am trying to accomplish make sense (is the explanation clear, not whether it is a good idea)?
2. Is what I am trying to do (transmit the messages from the client, from a single thread, in the order they occurred, completely processing one set of messages at a time) a good idea?
3. Am I close?
4. Can it be done?
5. Should it be done?
Thanks for any help!
[EDIT]
After more investigation and thanks to Brian's suggestion, we were able to get this working. I have copied the modified code. The key is that we are now using the "wh" wait handle strictly for ProducerConsumerQueue functions. Rather than using wh to wait for the async call to complete, we are now waiting on res.AsyncWaitHandle, which is returned by the BeginLogEvents call.
internal class LoggingQueue : IDisposable
{
EventWaitHandle wh = new AutoResetEvent(false);
Thread worker;
readonly object locker = new object();
bool working = false;
Queue<ObservableCollection<LoggingService.LogEvent>> logEventQueue = new Queue<ObservableCollection<LoggingService.LogEvent>>();
LoggingService.ILoggingService loggingService;
internal LoggingQueue(LoggingService.ILoggingService loggingService)
{
this.loggingService = loggingService;
worker = new Thread(Work);
worker.Start();
}
internal void EnqueueLogEvents(ObservableCollection<LoggingService.LogEvent> logEvents)
{
lock (locker)
{
logEventQueue.Enqueue(logEvents);
//System.Diagnostics.Debug.WriteLine("EnqueueLogEvents calling Set");
wh.Set();
}
}
private void Work()
{
while (true)
{
ObservableCollection<LoggingService.LogEvent> events = null;
lock (locker)
{
if (logEventQueue.Count > 0)
{
events = logEventQueue.Dequeue();
if (events == null || events.Count == 0) return;
}
}
if (events != null && events.Count > 0)
{
//System.Diagnostics.Debug.WriteLine("1. Work - Sending {0} events", events.Count);
IAsyncResult res = loggingService.BeginLogEvents(events, ar =>
{
try
{
loggingService.EndLogEvents(ar);
//System.Diagnostics.Debug.WriteLine("3. Work - Back");
}
catch (Exception ex)
{
}
}, null);
//System.Diagnostics.Debug.WriteLine("2. Work - Waiting");
// Block until async call returns. We are doing this so that we can be sure that all logging messages
// are sent FROM the client in the order they were generated. ALSO, we don't want interleave blocks of logging
// messages from the same client by sending a new block of messages before the previous block has been
// completely processed.
res.AsyncWaitHandle.WaitOne();
//System.Diagnostics.Debug.WriteLine("4. Work - Finished");
}
else
{
wh.WaitOne();
}
}
}
#region IDisposable Members
public void Dispose()
{
EnqueueLogEvents(null);
worker.Join();
wh.Close();
}
#endregion
}
As I mentioned in my initial question and in my comments to Jon and Brian, I still don't know if doing all of this work is a good idea, but at least the code does what I wanted it to do. That means that I at least have the choice of doing it this way or some other way (such as restoring order after the fact) rather than not having the choice.
Can I suggest that there's a simple alternative to all this coordination? Have a sequence using a cheap monotonically increasing ID (e.g. with Interlocked.Increment()) so that no matter what order things happen at the client or server, you can regenerate the original ordering later on.
That should let you be efficient and flexible, sending whatever you want asynchronously without waiting for acknowledgement, but without losing the ordering.
Obviously that means the ID (or possibly a guaranteed-unique timestamp field) would need to be part of your WCF service, but if you control both ends that should be reasonably simple.
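A minimal sketch of the client side of that suggestion, assuming you add a (hypothetical) SequenceNumber field to the LogEvent data contract:

using System.Threading;

public static class LogSequence
{
    private static long current;

    // Thread-safe, monotonically increasing stamp; cheap enough to call per message.
    public static long Next()
    {
        return Interlocked.Increment(ref current);
    }
}

// When building each log event on the client:
//     logEvent.SequenceNumber = LogSequence.Next();
// The bursts can then be sent asynchronously without waiting for acknowledgement,
// and the service (or a later query) restores the original order by SequenceNumber.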
The reason you are getting that kind of sequencing is that you are trying to use the same wait handle that the producer-consumer queue is using for a different purpose. That is going to cause all kinds of chaos. At some point things will go from bad to worse and the queue will eventually get live-locked. You really should create a separate WaitHandle to wait for completion of the logging service. Or, if BeginLogEvents fits the standard pattern, it will return an IAsyncResult that contains a WaitHandle you can use instead of creating your own.
As a side note, I really do not like the producer-consumer pattern presented on the Albahari website. The problem is that it is not safe for multiple consumers (obviously that is of no concern to you). And I say that with all due respect, because I think his website is one of the best resources for multithreaded programming. If BlockingCollection is available to you then use that instead.

How do I stop the "The database file is locked" exception?

I have a multithreaded app that uses SQLite. When two threads try to update the db at once I get the exception
Additional information: The database file is locked
I thought it would retry in a few milliseconds. My queries aren't complex. The most complex one (which happens frequently) is update, select, run trivial code, update/delete, commit. Why does it throw the exception? How can I make it retry a few times before throwing an exception?
SQLite only allows one writer at a time, which is why you get this error message.
You should synchronize the access to the database (create an object, and "lock" it) whenever you go to update. This will cause the second thread to block and wait until the first thread's update finishes automatically.
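A minimal sketch of that idea, assuming System.Data.SQLite, a single shared connection, and made-up table/column names; every writer funnels through the same lock, so two threads can no longer collide on the write:

private static readonly object dbLock = new object();

public void UpdateSensorValue(SQLiteConnection connection, int id, double value)
{
    lock (dbLock)   // only one thread talks to the database at a time
    {
        using (var cmd = connection.CreateCommand())
        {
            cmd.CommandText = "UPDATE sensor SET value = @value WHERE id = @id";
            cmd.Parameters.AddWithValue("@value", value);
            cmd.Parameters.AddWithValue("@id", id);
            cmd.ExecuteNonQuery();
        }
    }
}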
Try to make your transaction/commit blocks as short as possible. The only time you can deadlock/block is within a transaction, so if you don't hold them open you won't have the problem.
That said, there are times when you need transactions (mostly on data updates), but don't keep one open while you "run trivial code" if you can avoid it.
A better approach may be to use an update queue, if you can do the database updates out of line with the rest of your code. For example, you could do something like:
m_updateQueue.Add(()=>InsertOrder(o));
Then you could have a dedicated update thread that processed the queue.
That code would look similar to this (I haven't compiled or tested it):
class UpdateQueue : IDisposable
{
private object m_lockObj;
private Queue<Action> m_queue;
private volatile bool m_shutdown;
private Thread m_thread;
public UpdateQueue()
{
m_lockObj = new Object();
m_queue = new Queue<Action>();
m_thread = new Thread(ThreadLoop);
m_thread.Start();
}
public void Add(Action a)
{
lock(m_lockObj)
{
m_queue.Enqueue(a);
Monitor.Pulse(m_lockObj);
}
}
public void Dispose()
{
if (m_thread != null)
{
m_shutdown = true;
lock (m_lockObj)
{
Monitor.PulseAll(m_lockObj);
}
m_thread.Join();
m_thread = null;
}
}
private void ThreadLoop()
{
while (! m_shutdown)
{
Action a;
lock (m_lockObj)
{
if (m_queue.Count == 0)
{
Monitor.Wait(m_lockObj);
}
if (m_shutdown)
{
return;
}
a = m_queue.Dequeue();
}
a();
}
}
}
Or, you could use something other than SQLite.
