I have a multithreaded app that uses SQLite. When two threads try to update the database at once, I get the exception
Additional information: The database file is locked
I thought it would retry after a few milliseconds. My queries aren't complex. The most complex one (which happens frequently) is: update, select, run trivial code, update/delete, commit. Why does it throw the exception? How can I make it retry a few times before throwing an exception?
SQLite only allows one writer to the database at a time, which is why you get this error message.
You should synchronize access to the database (create a shared object and "lock" it) whenever you go to update. That way the second thread automatically blocks and waits until the first thread's update finishes.
Also, try to make your transaction/commit blocks as short as possible. The only time you can deadlock or block is inside a transaction, so if you keep transactions to a minimum you largely avoid the problem.
That said, there are times when you do need transactions (mostly on data updates), but don't hold one open while you "run trivial code" if you can avoid it. A lock around a short transaction might look like the sketch below.
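Here is a minimal sketch of the shared-lock-object idea, assuming the System.Data.SQLite provider; the table, column, and method names are made up for illustration:

static readonly object _dbLock = new object();

void UpdateOrderTotal(SQLiteConnection conn, int orderId, decimal total)
{
    lock (_dbLock)   // only one thread touches the database at a time
    {
        using (var tx = conn.BeginTransaction())
        using (var cmd = conn.CreateCommand())
        {
            cmd.Transaction = tx;
            cmd.CommandText = "UPDATE Orders SET Total = @total WHERE Id = @id";
            cmd.Parameters.AddWithValue("@total", total);
            cmd.Parameters.AddWithValue("@id", orderId);
            cmd.ExecuteNonQuery();
            tx.Commit();   // keep the transaction as short as possible
        }
    }
}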
A better approach may be to use an update queue, if you can do the database updates out of line with the rest of your code. For example, you could do something like:
m_updateQueue.Add(()=>InsertOrder(o));
Then you could have a dedicated update thread that processed the queue.
That code would look similar to this (I haven't compiled or tested it):
class UpdateQueue : IDisposable
{
    private object m_lockObj;
    private Queue<Action> m_queue;
    private volatile bool m_shutdown;
    private Thread m_thread;

    public UpdateQueue()
    {
        m_lockObj = new Object();
        m_queue = new Queue<Action>();
        m_thread = new Thread(ThreadLoop);
        m_thread.Start();
    }

    public void Add(Action a)
    {
        lock (m_lockObj)
        {
            m_queue.Enqueue(a);
            Monitor.Pulse(m_lockObj);
        }
    }

    public void Dispose()
    {
        if (m_thread != null)
        {
            m_shutdown = true;
            lock (m_lockObj)
            {
                // Pulse must be called while holding the lock
                Monitor.PulseAll(m_lockObj);
            }
            m_thread.Join();
            m_thread = null;
        }
    }

    private void ThreadLoop()
    {
        while (!m_shutdown)
        {
            Action a;
            lock (m_lockObj)
            {
                // loop on the condition: the wait can return without anything queued
                while (m_queue.Count == 0 && !m_shutdown)
                {
                    Monitor.Wait(m_lockObj);
                }
                if (m_shutdown)
                {
                    return;
                }
                a = m_queue.Dequeue();
            }
            a();
        }
    }
}
Or, you could use something other than SQLite.
I'm asking this primarily as a sanity check: in a C# (8.0) application I've got this bit of code, which spuriously fails with an "object is not synchronized" exception from Monitor.Pulse() (I've omitted irrelevant code for clarity):
// vanilla multiple-producer single-consumer queue stuff:
private Queue<Message> messages = new Queue<Message>();

private void ConsumerThread () {
    Queue<Message> myMessages = new Queue<Message>();
    while (...) {
        lock (messages) {
            // wait
            while (messages.Count == 0)
                Monitor.Wait(messages);
            // swap
            (messages, myMessages) = (myMessages, messages);
        }
        // process
        while (myMessages.Count > 0)
            DoStuff(myMessages.Dequeue());
    }
}

public void EnqueueMessage (...) {
    Message message = new Message(...);
    lock (messages) {
        messages.Enqueue(message);
        Monitor.Pulse(messages);
    }
}
I'm fairly new to C# and also I was stressed when I wrote that. Now I am reviewing that code to fix the exception and I'm immediately raising an eyebrow at the fact that I reassigned messages inside the consumer's lock.
I looked around and found Is it bad to overwrite a lock object if it is the last statement in the lock?, which validates my raised eyebrow.
However, I still don't have a lot of confidence (inexperience + stress), so, just to confirm: Is the following analysis of why this is broken correct?
If the following happens, in this order:
1. Stuff happens to be in the queue.
2. Consumer thread locks messages (and will skip the wait loop).
3. EnqueueMessage tries to lock messages, waits for the lock.
4. Consumer thread swaps messages and myMessages, releases the lock.
5. EnqueueMessage takes the lock.
6. EnqueueMessage adds an item to messages and calls Monitor.Pulse(messages), except messages isn't the same object that it locked in step (3), since it was swapped out from under us in (4). Possible consequences include:
   - Calling Monitor.Pulse on a non-locked object (what used to be myMessages) -- hence the aforementioned exception.
   - Enqueueing to the wrong queue, and the consequences of that.
   - Even weirder stuff if the consumer thread manages to complete another full loop cycle while EnqueueMessage is still somewhere in its lock {} block.
Right? I'm pretty sure that's right, it feels very basic, but I just want to confirm because I'm completely burnt out right now.
Then, whether that's correct or not: Does the following proposed fix make sense?
It seems to me like the fix is super simple: Instead of using messages as the monitor object, just use some dedicated dummy object that won't be changed:
private readonly object messagesLock = new object();
private Queue<Message> messages = new Queue<Message>();

private void ConsumerThread () {
    Queue<Message> myMessages = new Queue<Message>();
    while (...) {
        lock (messagesLock) {
            while (messages.Count == 0)
                Monitor.Wait(messagesLock);
            (messages, myMessages) = (myMessages, messages);
        }
    }
    ...
}

public void EnqueueMessage (...) {
    ...;
    lock (messagesLock) {
        messages.Enqueue(...);
        Monitor.Pulse(messagesLock);
    }
}
Where the intent is to avoid any issues caused by swapping out the lock object in strange places.
And that should work... right?
Nobody has needed to use a plain Queue for multi-threading since the concurrent collections were added in .NET 4 (correct me if I am wrong with dates).
It is trivial with a concurrent collection:
BlockingCollection<Message> myMessages = new BlockingCollection<Message>();

private void ConsumerThread () {
    while (...)
    {
        var message = myMessages.Take();
    }
    ...
}

public void EnqueueMessage (Message msg) {
    ...;
    myMessages.Add(msg);
}
I'm writing a program that will analyze changes in the stock market.
Every time the candles on the stock charts are updated, my algorithm scans every chart for certain pieces of data. I've noticed that this process takes about 0.6 seconds each time, freezing my application. It's not getting stuck in a loop, and there are no other problems like exceptions slowing it down; it just takes a bit of time.
To solve this, I'm trying to see if I can thread the algorithm.
In order to call the algorithm to check over the charts, I have to call this:
checkCharts.RunAlgo();
Since a thread needs something to run, I'm trying to figure out how to run RunAlgo(), but I'm not having any luck.
How can I have a thread run this method on my checkCharts object? Due to back-propagating data, I can't start a new checkCharts object; I have to keep calling that method on the existing object.
EDIT:
I tried this:
M4.ALProj.BotMain checkCharts = new ALProj.BotMain();
Thread algoThread = new Thread(checkCharts.RunAlgo);
The checkCharts part of checkCharts.RunAlgo gives me "An object reference is required for the non-static field, method, or property 'M4.ALProj.BotMain'."
In a specific if statement, I was going to put the algoThread.Start();. Any idea what I did wrong there?
The answer to your question is actually very simple:
Thread myThread = new Thread(checkCharts.RunAlgo);
myThread.Start();
However, the more complex part is to make sure that when the method RunAlgo accesses variables inside the checkCharts object, this happens in a thread-safe manner.
See Thread Synchronization for help on how to synchronize access to data from multiple threads.
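For example, if RunAlgo publishes results that the UI thread later reads, guard that shared state with a lock object. This is only a sketch; the fields and methods here are invented for illustration, not taken from your BotMain:

public class BotMain
{
    private readonly object _resultsLock = new object();
    private List<string> _signals = new List<string>();   // stand-in for whatever RunAlgo produces

    public void RunAlgo()
    {
        // heavy scanning work happens here, touching only local data...
        List<string> newSignals = ScanCharts();

        // ...and only the final publish of shared state is locked
        lock (_resultsLock)
        {
            _signals = newSignals;
        }
    }

    public List<string> SignalsSnapshot()
    {
        lock (_resultsLock)
        {
            return new List<string>(_signals);   // copy so callers see a stable list
        }
    }

    private List<string> ScanCharts()
    {
        return new List<string>();               // placeholder for the real algorithm
    }
}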
I would rather use Task.Run than Thread. Task.Run uses the ThreadPool, which is optimized to handle varying loads effectively. You also get all the goodies of Task.
await Task.Run(() => checkCharts.RunAlgo());
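For instance, called from a UI event handler it could look like the following sketch (the handler and RefreshChartsView are placeholders for your own code):

private async void OnCandlesUpdated(object sender, EventArgs e)
{
    // run the algorithm on a thread-pool thread so the UI stays responsive
    await Task.Run(() => checkCharts.RunAlgo());

    // execution resumes on the UI thread here, so it is safe to update controls
    RefreshChartsView();
}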
Try this code block. It's basic boilerplate, but you can build on and extend it quite easily.
//If M4.ALProj.BotMain needs to be recreated for each run then comment this line and uncomment the one in DoRunParallel()
private static M4.ALProj.BotMain checkCharts = new M4.ALProj.BotMain();
private static object SyncRoot = new object();
private static System.Threading.Thread algoThread = null;
private static bool ReRunOnComplete = false;

public static void RunParallel()
{
    lock (SyncRoot)
    {
        if (algoThread == null)
        {
            System.Threading.ThreadStart TS = new System.Threading.ThreadStart(DoRunParallel);
            algoThread = new System.Threading.Thread(TS);
            algoThread.Start();
        }
        else
        {
            //Received a recalc call while still calculating
            ReRunOnComplete = true;
        }
    }
}

public static void DoRunParallel()
{
    bool ReRun = false;
    try
    {
        //If M4.ALProj.BotMain needs to be recreated for each run then uncomment this line and comment private static version above
        //M4.ALProj.BotMain checkCharts = new M4.ALProj.BotMain();
        checkCharts.RunAlgo();
    }
    finally
    {
        lock (SyncRoot)
        {
            algoThread = null;
            ReRun = ReRunOnComplete;
            ReRunOnComplete = false;
        }
    }
    if (ReRun)
    {
        RunParallel();
    }
}
I have 2 threads that are triggered at the same time and run in parallel. These 2 threads are going to be manipulating a string value, but I want to make sure that there are no data inconsistencies. For that I want to use a lock with Monitor.Pulse and Monitor.Wait. I used a method that I found in another question/answer, but whenever I run my program, the first thread gets stuck at the Monitor.Wait level. I think that's because the second thread has already "Pulsed" and "Waited". Here is some code to look at:
string currentInstruction;

public void nextInstruction()
{
    Action[] actions = {
        fetch,
        decode
    };
    Parallel.Invoke(actions);
    _pc++;
}

public void fetch()
{
    lock (irLock)
    {
        currentInstruction = "blah";
        GiveTurnTo(2, irLock);
        WaitTurn(1, irLock);
    }
    decodeEvent.WaitOne();
}

public void decode()
{
    decodeEvent.Set();
    lock (irLock)
    {
        WaitTurn(2, irLock);
        currentInstruction = "decoding...";
        GiveTurnTo(1, irLock);
    }
}

// Below are the methods I talked about before.

// Wait for turn to use lock object
public static void WaitTurn(int threadNum, object _lock)
{
    // While( not this thread's turn )
    while (threadInControl != threadNum)
    {
        // "Let go" of the lock and wait until
        // someone finishes their turn with it
        Monitor.Wait(_lock);
    }
}

// Pass turn over to other thread
public static void GiveTurnTo(int nextThreadNum, object _lock)
{
    threadInControl = nextThreadNum;
    // Notify waiting threads that it's someone else's turn
    Monitor.Pulse(_lock);
}
Any idea how to get 2 parallel threads to communicate (manipulate the same resources) within the same cycle using locks or anything else?
You want to run 2 pieces of code in parallel, but you lock both of them at the start on the same variable?
As nvoigt mentioned, that already sounds wrong. What you have to do is remove the lock from there and use it only when you are about to access something exclusively.
Btw, "data inconsistencies" can be avoided by simply not allowing them. Do not use the currentInstruction field directly (is it a field?), but provide a thread-safe CurrentInstruction property:
private object _currentInstructionLock = new object();
private string _currentInstruction;

public string CurrentInstruction
{
    get { return _currentInstruction; }
    set
    {
        lock (_currentInstructionLock)
            _currentInstruction = value;
    }
}
Another thing is naming: local variable names starting with _ are bad style. Some people (including me) use that prefix to distinguish private fields. Property names should start with an uppercase letter and local variables with a lowercase one.
I have a SQL Server CLR stored proc that retrieves a large set of rows, then runs a process and updates a count in another table.
Here's the flow:
select -> process -> update count -> mark the selected rows as processed
The nature of the process is that it should not count the same set of data twice, and the SP is called with a GUID as an argument.
So I'm keeping a list of GUIDs (a static list in the SP) that are currently in process, and halting execution for subsequent calls to the SP with the same argument until the one currently in process finishes.
The code that removes the GUID when a process finishes is in a finally block, but it doesn't run every time. There are instances (like when the user cancels the execution of the SP) where the SP exits without calling the finally block and without removing the GUID from the list, so subsequent calls keep waiting indefinitely.
Can you give me a way to make sure that my finally block is called no matter what, or any other solution to make sure only one ID is in process at any given time?
Here's a sample of the code with the processing bits removed
[Microsoft.SqlServer.Server.SqlProcedure]
public static void TransformSurvey(Guid PublicationId)
{
    AutoResetEvent autoEvent = null;
    bool existing = false;

    //check if the process is already running for the given Id
    //concurrency handler holds a dictionary of publicationIds and AutoResetEvents
    lock (ConcurrencyHandler.PublicationIds)
    {
        existing = ConcurrencyHandler.PublicationIds.TryGetValue(PublicationId, out autoEvent);
        if (!existing)
        {
            //there's no process in progress. so OK to start
            autoEvent = new AutoResetEvent(false);
            ConcurrencyHandler.PublicationIds.Add(PublicationId, autoEvent);
        }
    }
    if (existing)
    {
        //wait on the shared object
        autoEvent.WaitOne();
        lock (ConcurrencyHandler.PublicationIds)
        {
            ConcurrencyHandler.PublicationIds.Add(PublicationId, autoEvent); //add this again as the exiting thread has removed this from the list
        }
    }
    try
    {
        // ... do the processing here..........
    }
    catch (Exception ex)
    {
        //exception handling
    }
    finally
    {
        //remove the pubid
        lock (ConcurrencyHandler.PublicationIds)
        {
            ConcurrencyHandler.PublicationIds.Remove(PublicationId);
            autoEvent.Set();
        }
    }
}
Wrapping the code at a higher level is a good solution; another option could be a using statement with IDisposable.
public class SQLCLRProcedure : IDisposable
{
    public bool Execute(Guid guid)
    {
        // Do work
        return true;
    }

    public void Dispose()
    {
        // Remove GUID
        // Close Connection
    }
}

using (SQLCLRProcedure procedure = new SQLCLRProcedure())
{
    procedure.Execute(guid);
}
This isn't verified in a compiler but it's commonly referred to as the IDisposable Pattern.
http://msdn.microsoft.com/en-us/library/system.idisposable.aspx
I'm building a multithreaded app in .net.
I have a thread that listens to a connection (abstract, serial, tcp...).
When it receives a new message, it adds it via AddMessage, which then calls startSpool. startSpool checks whether the spool is already running; if it is, it returns, otherwise it starts the spool in a new thread. The reason for this is that the messages HAVE to be processed serially, FIFO.
So, my questions are...
Am I going about this the right way?
Are there better, faster, cheaper patterns out there?
My apologies if there is a typo in my code, I was having problems copying and pasting.
ConcurrentQueue<IMyMessage> messages = new ConcurrentQueue<IMyMessage>();
const int maxSpoolInstances = 1;
object lcurrentSpoolInstances;
int currentSpoolInstances = 0;
Thread spoolThread;

public void AddMessage(IMyMessage message)
{
    this.messages.Add(message);
    this.startSpool();
}

private void startSpool()
{
    bool run = false;
    lock (lcurrentSpoolInstances)
    {
        if (currentSpoolInstances <= maxSpoolInstances)
        {
            this.currentSpoolInstances++;
            run = true;
        }
        else
        {
            return;
        }
    }
    if (run)
    {
        this.spoolThread = new Thread(new ThreadStart(spool));
        this.spoolThread.Start();
    }
}

private void spool()
{
    Message.ITimingMessage message;
    while (this.messages.Count > 0)
    {
        // TODO: Is this below line necessary or does the TryDequeue cover this?
        message = null;
        this.messages.TryDequeue(out message);
        if (message != null)
        {
            // My long running thing that does something with this message.
        }
    }
    lock (lcurrentSpoolInstances)
    {
        this.currentSpoolInstances--;
    }
}
This would be easier using BlockingCollection<T> instead of ConcurrentQueue<T>.
Something like this should work:
class MessageProcessor : IDisposable
{
    BlockingCollection<IMyMessage> messages = new BlockingCollection<IMyMessage>();

    public MessageProcessor()
    {
        // Move this to constructor to prevent race condition in existing code (you could start multiple threads...)
        Task.Factory.StartNew(this.Spool, TaskCreationOptions.LongRunning);
    }

    public void AddMessage(IMyMessage message)
    {
        this.messages.Add(message);
    }

    private void Spool()
    {
        foreach (IMyMessage message in this.messages.GetConsumingEnumerable())
        {
            // long running thing that does something with this message.
        }
    }

    public void FinishProcessing()
    {
        // This will tell the spooling you're done adding, so it shuts down
        this.messages.CompleteAdding();
    }

    void IDisposable.Dispose()
    {
        this.FinishProcessing();
    }
}
Edit: If you wanted to support multiple consumers, you could handle that via a separate constructor. I'd refactor this to:
public MessageProcessor(int numberOfConsumers = 1)
{
    for (int i = 0; i < numberOfConsumers; ++i)
        StartConsumer();
}

private void StartConsumer()
{
    // Move this to constructor to prevent race condition in existing code (you could start multiple threads...)
    Task.Factory.StartNew(this.Spool, TaskCreationOptions.LongRunning);
}
This would allow you to start any number of consumers. Note that this breaks the rule of having it be strictly FIFO - with this change, the processing can potentially handle up to numberOfConsumers elements at a time.
Multiple producers are already supported. The above is thread safe, so any number of threads can call Add(message) in parallel, with no changes.
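A rough usage sketch of the class above (MyMessage stands in for whatever implements IMyMessage in your code):

// start two consumers; producers on any thread can just call AddMessage
using (var processor = new MessageProcessor(numberOfConsumers: 2))
{
    for (int i = 0; i < 100; i++)
    {
        processor.AddMessage(new MyMessage(i));
    }
    // Dispose calls FinishProcessing, which signals the consumers to stop once the queue drains
}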
I think that Reed's answer is the best way to go, but for the sake of academics, here is an example using the concurrent queue -- you had some races in the code that you posted (depending upon how you handle incrementing currentSpoolInstances).
The changes I made (below) were:
Switched to a Task instead of a Thread (uses thread pool instead of incurring the cost of creating a new thread)
Added the code to increment/decrement your spool instance count.
Changed the "if currentSpoolInstances <= max" to just "<" to avoid having one too many workers (probably just a typo).
Changed the way empty queues are handled to avoid a race: your while loop could test false (your thread begins to exit), but at that moment a new item is added, so your spool thread exits while your spool count is still > 0 and the queue stalls.
private ConcurrentQueue<IMyMessage> messages = new ConcurrentQueue<IMyMessage>();
const int maxSpoolInstances = 1;
object lcurrentSpoolInstances = new object();
int currentSpoolInstances = 0;

public void AddMessage(IMyMessage message)
{
    this.messages.Enqueue(message);
    this.startSpool();
}

private void startSpool()
{
    lock (lcurrentSpoolInstances)
    {
        if (currentSpoolInstances < maxSpoolInstances)
        {
            this.currentSpoolInstances++;
            Task.Factory.StartNew(spool, TaskCreationOptions.LongRunning);
        }
    }
}

private void spool()
{
    IMyMessage message;
    while (true)
    {
        // you do not need to null message because it is an "out" parameter; had it been a "ref" parameter, you would want to null it.
        if (this.messages.TryDequeue(out message))
        {
            // My long running thing that does something with this message.
        }
        else
        {
            lock (lcurrentSpoolInstances)
            {
                if (this.messages.IsEmpty)
                {
                    this.currentSpoolInstances--;
                    return;
                }
            }
        }
    }
}
Check 'Pipelines pattern': http://msdn.microsoft.com/en-us/library/ff963548.aspx
Use BlockingCollection for the 'buffers'.
Each processor (e.g. ReadStrings, CorrectCase, ...) should run in a Task.
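A minimal two-stage sketch of that shape, using BlockingCollection buffers between stages (the file source and the ToUpperInvariant "correction" are just placeholders):

var rawLines = new BlockingCollection<string>();
var correctedLines = new BlockingCollection<string>();

// stage 1: read strings into the first buffer
var readStrings = Task.Factory.StartNew(() =>
{
    foreach (string line in File.ReadLines("input.txt"))
        rawLines.Add(line);
    rawLines.CompleteAdding();          // tells the next stage no more items are coming
}, TaskCreationOptions.LongRunning);

// stage 2: consume the first buffer, produce into the second
var correctCase = Task.Factory.StartNew(() =>
{
    foreach (string line in rawLines.GetConsumingEnumerable())
        correctedLines.Add(line.ToUpperInvariant());
    correctedLines.CompleteAdding();
}, TaskCreationOptions.LongRunning);

Task.WaitAll(readStrings, correctCase);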
HTH..