I have a question. I have an Access database and a WPF application. The application is built as 32-bit, and the Access database is 32-bit as well. Every once in a while the application shows me an error when connecting to the database: "External component has thrown an exception." Even if I swallow the error or try to connect to the database again, it doesn't work. I have to restart the application, and then it works again until it throws the error again within the next 15 or so database transactions, if you know what I mean.
How can I recover from that error so I can connect to the database again, or even prevent it from being thrown in the first place?
Please help me.
I don't know exactly what code would be relevant, but here is one of the data-access methods:
try
{
    List<IDModel> output = new List<IDModel>();
    using (OleDbConnection connection = new OleDbConnection(Conn))
    {
        await connection.OpenAsync();
        // OleDb binds parameters by position, so the parameter name is informational only
        using (OleDbCommand Command = new OleDbCommand("SELECT * FROM DATA WHERE [STATUS] = @status;", connection))
        {
            Command.Parameters.AddWithValue("@status", _status);
            using (var reader = await Command.ExecuteReaderAsync())
            {
                while (await reader.ReadAsync())
                {
                    output.Add(InsertID(
                        (int)reader["ID"], (string)reader["STANDARD"], (string)reader["NAZIV"], (string)reader["POSLOVNA ENOTA"],
                        (string)reader["IZVOR NESKLADNOSTI"], (string)reader["ODDELEK"], (string)reader["OPIS"],
                        reader["SLIKA 1"].ToString(),
                        reader["SLIKA 2"].ToString(), reader["SLIKA 3"].ToString(), reader["SLIKA 4"].ToString(),
                        reader["SLIKA 5"].ToString(), reader["EXCEL 1"].ToString(), reader["PDF 1"].ToString(),
                        reader["SLIKA 6"].ToString(), reader["EXCEL 2"].ToString(), reader["PDF 2"].ToString(),
                        reader["KOREKCIJA"].ToString(),
                        reader["SLIKA 7"].ToString(), reader["EXCEL 3"].ToString(), reader["PDF 3"].ToString(),
                        reader["KOREKTIVNI"].ToString(), reader["VZROK"].ToString(),
                        reader["OCENA"].ToString(), reader["OPOMBA"].ToString(),
                        reader["NESKLADNOST ODPRL"].ToString(), reader["KOREKCIJA PODAL"].ToString(), reader["NESKLADNOST ZAPRL"].ToString(),
                        reader["NESKLADNOST VALIDIRAL"].ToString(), reader["ROK ZA REŠITEV"].ToString(),
                        (bool)reader["BIG EVENT"],
                        reader["NESKLADNOST ODPRTA"].ToString(), reader["KOREKCIJA PODANA"].ToString(),
                        reader["NESKLADNOST ZAPRTA"].ToString(), reader["NESKLADNOST VALIDIRANA"].ToString()));
                }
            }
        }
        return output;
    }
}
catch (Exception)
{
    // just rethrows; once this fires, further connections keep failing until the app restarts
    throw;
}
Best regards!
I would assume the problem is with the Access database. In my experience, Access simply fails sometimes, without apparent reason. I have spent quite a bit of time investigating Access crashes without reaching a solution. There are some things that might help:
Ensure you are only ever using the database from a single thread.
Ensure all database-related objects are correctly disposed.
Ensure the databases are not too large; there is a 2 GB limit on database size. Running compaction might help.
Add better error handling: catch errors and retry the operation, and if repeated failures occur, dispose of and recreate the database connection (see the sketch after this list).
Move the database access to another process. This can help with error handling, allowing you to restart the entire process if needed.
My preferred solution is to avoid using Access at all. Spend the time to port the data to some real database instead.
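To illustrate the retry-and-recreate suggestion above, here is a minimal, hedged sketch; the method name WithRetry, the attempt count and the back-off delay are illustrative and not part of the original code, Conn is the connection string from the question, and the System.Data.OleDb and System.Threading.Tasks namespaces are assumed. The existing command/reader code would run inside the delegate passed to it.
public async Task<T> WithRetry<T>(Func<OleDbConnection, Task<T>> work, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            // a fresh connection per attempt, so a broken one is never reused
            using (var connection = new OleDbConnection(Conn))
            {
                await connection.OpenAsync();
                return await work(connection);
            }
        }
        catch (OleDbException) when (attempt < maxAttempts)
        {
            OleDbConnection.ReleaseObjectPool();  // drop pooled (and possibly broken) connections
            await Task.Delay(500 * attempt);      // brief back-off before the next attempt
        }
    }
}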
I'm working on a multithreaded C# program that uses SQLite. I'm having a problem where sometimes running SQLiteCommand.ExecuteNonQuery() to update some rows complains "SQLite error (5): database is locked". I'm aware that this happens because the database gets locked while an insert or update is going on, so if another query to modify the DB comes along, the second query will get this "database is locked" error. So I'm trying to implement workarounds, but I'm not sure how I should do it.
I was trying to make it so that if the database locked error is thrown then the program waits a bit and tries again until it works. But somehow no exception gets caught, and the code just exits the try-catch block even though the "database is locked" message still gets printed in the debug output. I'm not entirely sure whether the SQL query gets rejected or accepted.
I also tried using TransactionScope, and I haven't had the "database is locked" error since then, but because of the random nature of the problem I can't be 100% sure whether TransactionScope actually solves it, only does so to an extent, or doesn't and I was just lucky so far.
SQLiteConnection connection = new SQLiteConnection("Data Source=DB.db;Version=3;");
connection.Open();
SQLiteCommand command = connection.CreateCommand();
command.CommandText = inputQuery;
try
{
command.ExecuteNonQuery();
}
catch (SQLiteException sqle)
{
Debug.WriteLine("Database error: " + sqle.Message);
return false;
}
catch (Exception e)
{
Debug.WriteLine("Database error: " + e.Message);
return false;
}
finally
{
connection.Close();
}
So I'd really like someone to help me find out 1) how to eliminate the database is locked problem or 2) how to detect if the DB is locked error happens. Thanks in advance.
To detect whether the SQLite DB is locked, I used the approach from Determine whether SQLite database is locked.
The idea is to try to acquire the lock and release it immediately; if that does not fail, the DB is not locked.
The following C# code worked for me:
public bool IsDatabaseLocked(string dbPath)
{
bool locked = true;
SQLiteConnection connection = new SQLiteConnection($"Data Source={dbPath};Version=3;");
connection.Open();
try
{
SQLiteCommand beginCommand = connection.CreateCommand();
beginCommand.CommandText = "BEGIN EXCLUSIVE"; // tries to acquire the lock
// CommandTimeout is set to 0 to get error immediately if DB is locked
// otherwise it will wait for 30 sec by default
beginCommand.CommandTimeout = 0;
beginCommand.ExecuteNonQuery();
SQLiteCommand commitCommand = connection.CreateCommand();
commitCommand.CommandText = "COMMIT"; // releases the lock immediately
commitCommand.ExecuteNonQuery();
locked = false;
}
catch(SQLiteException)
{
// database is locked error
}
finally
{
connection.Close();
}
return locked;
}
Then, once you have determined that the DB is locked, you can either wait for it to be unlocked:
public async Task WaitForDbToBeUnlocked(string dbPath, CancellationToken token)
{
while (IsDatabaseLocked(dbPath))
{
await Task.Delay(TimeSpan.FromSeconds(1), token);
}
}
or send a cancellation message to the other thread (via a CancellationTokenSource, for example) before running your query.
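As a rough, hedged sketch of that cancellation idea (the method name and the writerCts parameter are hypothetical; it reuses IsDatabaseLocked and WaitForDbToBeUnlocked from above, and assumes the writing thread observes writerCts.Token and stops its work when cancellation is requested):
public async Task CancelWriterAndWaitAsync(string dbPath, CancellationTokenSource writerCts)
{
    if (IsDatabaseLocked(dbPath))
    {
        writerCts.Cancel();   // ask the other thread to abandon its write
        await WaitForDbToBeUnlocked(dbPath, CancellationToken.None);
    }
    // now it should be safe to run your own query
}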
If you're receiving error code 5 (busy) you can limit this by using an immediate transaction. If you're able to begin an immediate transaction, SQLite guarantees that you won't receive a busy error until you commit.
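For illustration, a minimal sketch of this with System.Data.SQLite, issuing BEGIN IMMEDIATE as plain SQL (the connection string is the one from the question; the placeholder comment marks where your own write commands would go):
using (var connection = new SQLiteConnection("Data Source=DB.db;Version=3;"))
{
    connection.Open();
    using (var begin = connection.CreateCommand())
    {
        begin.CommandText = "BEGIN IMMEDIATE";   // takes the write lock up front, or fails with busy
        begin.ExecuteNonQuery();
    }
    // ... run the INSERT/UPDATE commands on this connection here ...
    using (var commit = connection.CreateCommand())
    {
        commit.CommandText = "COMMIT";           // releases the lock
        commit.ExecuteNonQuery();
    }
}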
Also note that SQLite doesn't have row-level locking: the entire database is locked. Using a WAL journal, you can have one writer and multiple readers. With other journaling modes, you can have either one writer or multiple readers, but not both simultaneously.
SQLite Documentation on 'SQLITE_BUSY'
In my Windows application I try to connect to SQL Server 2008 with the following code:
SqlConnection connection = new SqlConnection(Properties.Settings.Default.KargarBandarConnectionString);
SqlCommand command = new SqlCommand("Select IsAdmin from Users where UserName=@UserName And Password=@Password", connection);
SqlDataReader dataReader = null;
command.Parameters.AddWithValue("@UserName", UserNameTextBox.Text.Trim());
command.Parameters.AddWithValue("@Password", PasswordTextBox.Text);
try
{
connection.Open();
dataReader = command.ExecuteReader();
if (dataReader.HasRows)
{
while (dataReader.Read())
{
IsAdmin = dataReader.GetBoolean(0);
}
this.DialogResult = DialogResult.OK;
}
else
{
FMessageBox.ShowWarning("error");
UserNameTextBox.Focus();
}
}
catch (Exception ex)
{
if (progressForm != null)
progressForm.Close();
FMessageBox.ShowError(ex.Message);
}
finally
{
if (dataReader != null)
{
dataReader.Close();
dataReader.Dispose();
}
if (connection != null)
{
connection.Close();
connection.Dispose();
}
}
Everything works properly, but sometimes I get the following error:
timeout expired. the timeout period elapsed prior to obtaining a
connection from the pool ...
How can this be solved?
The reason you're getting this exception is that you have exhausted your connection pool, i.e. the number of "available" connections in your application.
Every time you open a connection, one is pulled from the connection pool if possible, or a new one is created if not.
However, to prevent runaway connection usage, there is a limit of 100 (I think this is configurable), and if you try to use more than 100 simultaneous connections, the code will not create new ones; instead it waits for one to be returned to the pool, and you get a timeout if it waits too long.
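If you genuinely need more simultaneous connections (rather than fixing a leak), the pool limit can be raised in the connection string; a hedged example with a placeholder connection string:
// "Max Pool Size" is the SqlClient connection-string keyword for the pool limit (default 100);
// the rest of this connection string is only a placeholder.
var connectionString =
    "Data Source=.;Initial Catalog=MyDb;Integrated Security=True;Max Pool Size=200;";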
So, for the particular example of code you've shown, I would:
Close the connection before showing an error message to the user.
However, unless 100 users are seeing the error message and leaving it open at the same time, it is unlikely that the code you've shown is the cause of this problem.
Other than that, I would go through the entire application and ensure you don't have connection leaks anywhere else.
This particular type of exception can occur in one spot even though the problem is somewhere else. Example: a report leaks an open connection every time it runs, you run it 100 times successfully, then someone tries to log in, and the exception occurs in the login form.
That happens if you either:
leak connections (leaving them for GC to deal with rather than disposing them)
just have too much happening, such that the pool is exhausted
The first is the most common, and I expect it relates a lot to the fact that you are over-complicating your error handling. This makes it easy to miss, and hard to spot that you've missed it. The code shown looks OK, but it would be far preferable to use using blocks for all the IDisposable elements rather than finally. Also, don't keep the connection open while you show modal things like the message box, unless you need the connection afterwards. Frankly, a lot of benefit could be gained here by cleanly separating the UI and data-access code; then there is no temptation to stick a message box in the middle of a database query.
However! To be explicit, I believe this code is the victim of some other code that is hogging connections. Look at your other data access code for the cause of this.
Refactor your code to look something like this. Implement using blocks. The other answers here are very important, be sure to understand them.
bool res=false;
try
{
using(var connection = new SqlConnection(Properties.Settings.Default.KargarBandarConnectionString))
using (var cmd = connection.CreateCommand())
{
cmd.CommandText = "Select IsAdmin from Users where UserName=@UserName And HashedAndSaltedPassword=@PwdHash";
cmd.Parameters.AddWithValue("@UserName", UserNameTextBox.Text.Trim());
cmd.Parameters.AddWithValue("@PwdHash", SaltAndHash(PasswordTextBox.Text));
connection.Open();
var result = cmd.ExecuteScalar();
if (result != null)
{
res = Convert.ToBoolean(result);
this.DialogResult = DialogResult.OK;
}
}
}
catch (Exception ex)
{
if (progressForm != null){progressForm.Close();}
FMessageBox.ShowError(ex.Message);
}
if(res==false)
{
FMessageBox.ShowWarning("error");
UserNameTextBox.Focus();
}
In one of my C# applications, I have written the SQL connection code as follows:
try
{
myConnection = new SqlConnection(m_resourceDB.GetResourceString(nSiteID, ApplicationID.XClaim,(short)nResID ) );
myConnection.Open();
}
I want to handle unknown SQL Server issues such as the database being down or a timeout.
For this I thought of introducing a loop of 3 attempts with a 3-minute sleep between attempts, and if the problem is still there I will exit the loop.
I don't know whether my idea is right or not. I would like some expert advice on this. Any example?
I would say simply: the code that talks to connections etc should not be doing a sleep/retry, unless that code is already asynchronous. If the top-level calling code wants to catch an exception and set up a timer (not a sleep) and retry, then fine - but what you want to avoid is things like:
var data = dal.GetInfo();
suddenly taking 3 minutes. You might get away with it if it is an async/callback, and you have clearly advertised that this method may take minutes to execute. But even that feels like a stretch. And if you are up at the application logic, why not just catch the exception the first time, tell the user, and let the user click the button again at some point in the future?
If you are running a service with no user interface, then by all means, keep on looping until things start working, but at least log the errors to the EventLog while you're at it, so that the server admin can figure out when and why things go wrong.
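As a rough sketch of that service-style loop (the event source name "MyService" and the 3-minute delay are assumptions, and it presumes System.Data.SqlClient, System.Diagnostics and System.Threading are available):
static void WaitForDatabase(string connectionString)
{
    while (true)
    {
        try
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();   // success: the database is reachable again
                return;
            }
        }
        catch (SqlException ex)
        {
            // log so the server admin can see when and why connections fail
            EventLog.WriteEntry("MyService", "Database connection failed: " + ex.Message,
                EventLogEntryType.Warning);
            Thread.Sleep(TimeSpan.FromMinutes(3));   // back off before the next attempt
        }
    }
}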
For a client application, I would not suggest that you make the user wait 9 minutes before telling them things are not working as they should. Try to connect, assess the error condition, and let the user know what is going wrong so that they can take it further.
If you are using the SqlException class, you can check the exception's Class property and decide from that what is going wrong, for example:
switch (sqlEx.Class)
{
    case 20:
        // server not found
        break;
    case 11:
        // database not found
        break;
}
All the classes have the SQL Server message on them; it is a matter of testing the different conditions.
It really depends on how you want your application to behave.
If your database access is dealt with on the same thread as your UI then whilst you are attempting to connect to a database it will become unresponsive.
The default time period for a connection timeout is already pretty long and so running it in a for loop 3 times would triple that and leave you with frustrated users.
In my opinion, unless you're specifically attempting to hide connection issues from the user, it is far better to report back that a connection attempt has failed and ask the user whether they wish to retry, keeping a count of the reconnection attempts you'll allow before informing the user that they can't continue or putting the application into an "out of service" state.
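A rough sketch of that bounded-retry idea (the delegates stand in for your own connection and UI code; none of these names come from the question):
bool ConnectWithUserRetries(Func<bool> tryConnect, Func<bool> askUserToRetry, Action enterOutOfService, int maxAttempts = 3)
{
    for (int attempt = 1; attempt <= maxAttempts; attempt++)
    {
        if (tryConnect())
            return true;                                   // connected, carry on
        if (attempt == maxAttempts || !askUserToRetry())   // report the failure, ask the user
            break;
    }
    enterOutOfService();                                   // give up or go "out of service"
    return false;
}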
I want to handle unknown SQL Server issues such as the database being down or a timeout.
Surround the connection operation with a using statement so the connection is always disposed, even when something goes wrong:
using (var sqlcon = new SqlConnection(constr))
{ }
Use a try/catch statement to capture the exception:
try
{
con.Open();
try
{
//Execute Queries
// ....
}
catch
{
// command related or other exception
}
finally
{
con.Close();
}
}
catch
{
// connection error
}
To prevent exceptions of this type, check these:
Troubleshooting Timeout SqlExceptions
You can set the CommandTimeout to a larger value on a SqlCommand:
objCmd.CommandTimeout = 600;
You can catch the SqlException.
SqlException.Class
Gets the severity level of the error returned from the .NET Framework Data Provider for SQL Server.
SqlException.Errors
Gets a collection of one or more SqlError objects that give detailed information about exceptions generated by the .NET Framework Data Provider for SQL Server.
SqlCommand cmd = new SqlCommand();
cmd.Connection = new SqlConnection("CONNECTION_STRING");
cmd.CommandText = "SELECT * FROM ....";
// cmd.CommandType = System.Data.CommandType.Text;
try
{
cmd.Connection.Open();
try
{
SqlDataReader reader = cmd.ExecuteReader();
// ....
}
finally
{
cmd.Connection.Close();
}
}
catch (SqlException ex)
{
// ex.Class contains the ErrorCode, depends on your dataprovider
foreach (SqlError error in ex.Errors)
{
// error.LineNumber
// error.Message
}
}
The best approach would be to put it in a try/catch statement and display the error in a better format. If it fails once, retrying three times in a row will not change anything unless you disconnect and send the request again separately, as a new request.
Use try/catch, like this:
try
{
// executing code
}
catch (Exception err)
{
Display["ErrorMsg"] = err.Message + "|" + err.GetBaseException() + "|" + Request.Url.ToString();
}
Good Luck.
I am using different SQL procedures in an application.
The first procedure inserts some rows, then there is some processing in C# code, then a second procedure does some updates, then again some code processing, and then a third procedure deletes some records and inserts new ones. When all of this is done on Server 1, the data is fetched from that server and sent to Server 2, where records are deleted and new records are inserted.
If there is an error at any stage, on any server, in any procedure, I want to roll back all the records.
I cannot use BEGIN TRAN directly because the processing takes time and I cannot block the tables, as other users are using the same tables in parallel. So kindly tell me how I can achieve this without blocking the tables for other users.
Thanks in advance.
Edited (Added code example):
I tried TransactionScope but I am getting an exception while opening the connection. I configured MSDTC, but maybe it is not configured properly:
"Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool."
using (TransactionScope ts = new TransactionScope(TransactionScopeOption.Required))
{
try
{
dl.SetBookReadyToLive(13570, false);
//SetBookReadyToLive
dl.AddTestSubmiitedTitleID(23402);
dl.AddBookAuthorAtLIve(13570, 1);
ts.Complete();
}
catch (Exception ex)
{
Response.Write(ex.Message);
}
}
public void SetBookReadyToLive(long BookID, bool status)
{
try
{
if (dbConMeta.State != ConnectionState.Open)
dbConMeta.Open();
SqlCommand cmd = new SqlCommand("spSetBookReadyToLive", dbConMeta);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Clear();
cmd.Parameters.Add("@BookID", BookID);
cmd.Parameters.Add("@status", status);
cmd.ExecuteNonQuery();
if (dbConMeta.State == ConnectionState.Open)
dbConMeta.Close();
}
catch
{
if (dbConMeta.State == ConnectionState.Open)
dbConMeta.Close();
}
}
I get the exception when opening the connection inside that method.
I am using SQL Server 2000. I have set up the MSDTC configuration on the machine where SQL Server is installed and also on my PC, from where I am running the code, but I still get the same exception.
Kindly help me configure it.
You can use the TransactionScope class. It generally works well, but in the case of distributed SQL servers, as in your case, it requires MSDTC to be enabled on both servers and configured properly (security has to be granted for executing network transactions, distributed transactions and so on).
Here is a copy-paste from an example on MSDN; you could "almost" use it like this... :)
// Create the TransactionScope to execute the commands, guaranteeing
// that both commands can commit or roll back as a single unit of work.
using (TransactionScope scope = new TransactionScope())
{
using (SqlConnection connection1 = new SqlConnection(connectString1))
{
// Opening the connection automatically enlists it in the
// TransactionScope as a lightweight transaction.
connection1.Open();
// Create the SqlCommand object and execute the first command.
SqlCommand command1 = new SqlCommand(commandText1, connection1);
returnValue = command1.ExecuteNonQuery();
writer.WriteLine("Rows to be affected by command1: {0}", returnValue);
// If you get here, this means that command1 succeeded. By nesting
// the using block for connection2 inside that of connection1, you
// conserve server and network resources as connection2 is opened
// only when there is a chance that the transaction can commit.
using (SqlConnection connection2 = new SqlConnection(connectString2))
{
// The transaction is escalated to a full distributed
// transaction when connection2 is opened.
connection2.Open();
// Execute the second command in the second database.
returnValue = 0;
SqlCommand command2 = new SqlCommand(commandText2, connection2);
returnValue = command2.ExecuteNonQuery();
writer.WriteLine("Rows to be affected by command2: {0}", returnValue);
}
}
// The Complete method commits the transaction. If an exception has been thrown,
// Complete is not called and the transaction is rolled back.
scope.Complete();
}
source: TransactionScope Class
To minimize locks you can specify the IsolationLevel with the constructor overload that takes a TransactionOptions; the default is Serializable, and if a weaker level is acceptable for your scenario you could set it to ReadCommitted.
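A minimal sketch of that (assuming a reference to System.Transactions; the body of the scope is a placeholder):
var options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.ReadCommitted,    // instead of the Serializable default
    Timeout = TransactionManager.DefaultTimeout
};
using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    // ... open connections and execute commands here ...
    scope.Complete();
}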
Note: personally I would not use this unless absolutely needed, because it's a bit of a pain to keep the DTC configured everywhere, and distributed transactions are in general slower than local ones, but it really depends on your BL / DAL logic.
Short answer: the same way you would do it in MS SQL Management Studio.
You open a connection to a server.
Open a transaction for a specific server
You run your queries related to this server
You make sure to keep your connection alive while you... [go back to 1. for next server]
If all your queries worked, commit all your changes.
Else, rollback all your queries.
Warning: the first table will most likely be locked until you're done with all your servers/queries. What you could do to help with this: if you have a lot of data, you can transfer it to temporary tables on every server before doing step 2. Once this is done, you open the transaction, do your fast operations, then commit/rollback as soon as possible.
Note: I know you asked how to achieve this without locking the tables, which is why I added an idea in the "warning" part.
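For illustration, here is a minimal sketch of those steps across two servers, reusing connectString1 and connectString2 from the MSDN example above; the stored procedure names are placeholders. Note that this is not truly atomic: if the second commit fails after the first has committed you are left to clean up manually, which is exactly the gap MSDTC/TransactionScope closes.
using (var connection1 = new SqlConnection(connectString1))
using (var connection2 = new SqlConnection(connectString2))
{
    connection1.Open();
    connection2.Open();
    SqlTransaction tx1 = connection1.BeginTransaction();
    SqlTransaction tx2 = connection2.BeginTransaction();
    try
    {
        using (var cmd1 = new SqlCommand("EXEC spDoWorkOnServer1", connection1, tx1))
            cmd1.ExecuteNonQuery();
        using (var cmd2 = new SqlCommand("EXEC spDoWorkOnServer2", connection2, tx2))
            cmd2.ExecuteNonQuery();
        tx1.Commit();   // commit only after every query has succeeded
        tx2.Commit();   // if this fails after tx1 committed, manual cleanup is needed
    }
    catch
    {
        tx1.Rollback(); // a failure before the commits rolls back both servers
        tx2.Rollback();
        throw;
    }
}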
I'm performing a large number of INSERTS to a SQLite database. I'm using just one thread. I batch the writes to improve performance and have a bit of security in case of a crash. Basically I cache up a bunch of data in memory and then when I deem appropriate, I loop over all of that data and perform the INSERTS. The code for this is shown below:
public void Commit()
{
using (SQLiteConnection conn = new SQLiteConnection(this.connString))
{
conn.Open();
using (SQLiteTransaction trans = conn.BeginTransaction())
{
using (SQLiteCommand command = conn.CreateCommand())
{
command.CommandText = "INSERT OR IGNORE INTO [MY_TABLE] (col1, col2) VALUES (?,?)";
command.Parameters.Add(this.col1Param);
command.Parameters.Add(this.col2Param);
foreach (Data o in this.dataTemp)
{
this.col1Param.Value = o.Col1Prop;
this.col2Param.Value = o.Col2Prop;
command.ExecuteNonQuery();
}
}
this.TryHandleCommit(trans);
}
conn.Close();
}
}
I now employ the following gimmick to get the thing to eventually work:
private void TryHandleCommit(SQLiteTransaction trans)
{
try
{
trans.Commit();
}
catch (Exception e)
{
Console.WriteLine("Trying again...");
this.TryHandleCommit(trans);
}
}
I create my DB like so:
public DataBase(String path)
{
//build connection string
SQLiteConnectionStringBuilder connString = new SQLiteConnectionStringBuilder();
connString.DataSource = path;
connString.Version = 3;
connString.DefaultTimeout = 5;
connString.JournalMode = SQLiteJournalModeEnum.Persist;
connString.UseUTF16Encoding = true;
using (connection = new SQLiteConnection(connString.ToString()))
{
//check for existence of db
FileInfo f = new FileInfo(path);
if (!f.Exists) //build new blank db
{
SQLiteConnection.CreateFile(path);
connection.Open();
using (SQLiteTransaction trans = connection.BeginTransaction())
{
using (SQLiteCommand command = connection.CreateCommand())
{
command.CommandText = DataBase.CREATE_MATCHES;
command.ExecuteNonQuery();
command.CommandText = DataBase.CREATE_STRING_DATA;
command.ExecuteNonQuery();
//TODO add logging
}
trans.Commit();
}
connection.Close();
}
}
}
I then export the connection string and use it to obtain new connections in different parts of the program.
At seemingly random intervals, though far too often to ignore or otherwise work around, I get an unhandled SQLiteException: Database file is locked. This occurs when I attempt to commit the transaction. No errors seem to occur prior to that. It does not always happen; sometimes the whole thing runs without a hitch.
No reads are being performed on these files before the commits finish.
I have the very latest SQLite binary.
I'm compiling for .NET 2.0.
I'm using VS 2008.
The db is a local file.
All of this activity is encapsulated within one thread / process.
Virus protection is off (though I think that was only relevant if you were connecting over a network?).
As per Scotsman's post I have implemented the following changes:
Journal Mode set to Persist
DB files stored in C:\Docs + Settings\ApplicationData via System.Windows.Forms.Application.AppData windows call
No inner exception
Witnessed on two distinct machines (albeit very similar hardware and software)
Have been running Process Monitor - no extraneous processes are attaching themselves to the DB files - the problem is definitely in my code...
Does anyone have any idea whats going on here?
I know I just dropped a whole mess of code, but I've been trying to figure this out for way too long. My thanks to anyone who makes it to the end of this question!
brian
UPDATES:
Thanks for the suggestions so far! I've implemented many of the suggested changes. I feel that we are getting closer to the answer...however...
The code above technically works, however it is non-deterministic! It is not guaranteed to do anything aside from spin in neutral forever. In practice it seems to succeed somewhere between the 1st and 10th iteration. If I batch my commits at a reasonable interval the damage is mitigated, but I really do not want to leave things in this state...
More suggestions welcome!
It looks like you failed to link the command with the transaction you've created.
Instead of:
using (SQLiteCommand command = conn.CreateCommand())
You should use:
using (SQLiteCommand command = new SQLiteCommand("<INSERT statement here>", conn, trans))
Or you can set its Transaction property after its construction.
While we are at it - your handling of failures is incorrect:
The command's ExecuteNonQuery method can also fail and you are not really protected. You should change the code to something like:
public void Commit()
{
using (SQLiteConnection conn = new SQLiteConnection(this.connString))
{
conn.Open();
SQLiteTransaction trans = conn.BeginTransaction();
try
{
using (SQLiteCommand command = conn.CreateCommand())
{
command.Transaction = trans; // Now the command is linked to the transaction and won't try to create a new one (which is probably why your database gets locked)
command.CommandText = "INSERT OR IGNORE INTO [MY_TABLE] (col1, col2) VALUES (?,?)";
command.Parameters.Add(this.col1Param);
command.Parameters.Add(this.col2Param);
foreach (Data o in this.dataTemp)
{
this.col1Param.Value = o.Col1Prop;
this.col2Param.Value = o.Col2Prop;
command.ExecuteNonQuery();
}
}
trans.Commit();
}
catch (SQLiteException ex)
{
// You need to rollback in case something wrong happened in command.ExecuteNonQuery() ...
trans.Rollback();
throw;
}
}
}
Another thing is that you don't need to cache anything in memory. You can depend on the SQLite journaling mechanism to store incomplete transaction state.
Run Sysinternals Process Monitor and filter on the filename while running your program, to rule out any other process doing something to it and to see what exactly your program is doing to the file. Long shot, but it might give a clue.
We had a very similar problem using nested Transactions with the TransactionScope class. We thought all database actions occurred on the same thread...however we were caught out by the Transaction mechanism...more specifically the Ambient transaction.
Basically there was a transaction higher up the chain which, by the magic of ADO.NET, the connection automatically enlisted in. The result was that, even though we thought we were writing to the database on a single thread, the write didn't really happen until the topmost transaction was committed. At this 'indeterminate' point the database was written to, causing it to be locked outside of our control.
The solution was to ensure that the sqlite database did not directly take part in the ambient transaction by ensuring we used something like:
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew))
{
    ...
    scope.Complete();
}
Things to watch for:
don't use connections across multiple threads/processes.
I've seen it happen when a virus scanner would detect changes to the file and try to scan it. It would lock the file for a short interval and cause havoc.
I started facing this same problem today: I'm studying asp.net mvc, building my first application completely from scratch. Sometimes, when I'd write to the database, I'd get the same exception, saying the database file was locked.
I found it really strange, since I was completely sure that there was just one connection open at that time (based on process explorer's listing of active file handles).
I've also built the whole data access layer from scratch, using System.Data.SQLite .Net provider, and, when I planned it, I took special care with connections and transactions, in order to ensure no connection or transaction was left hanging around.
The tricky part was that setting a breakpoint on ExecuteNonQuery() command and running the application in debug mode would make the error disappear!
Googling, I found something interesting on this site: http://www.softperfect.com/board/read.php?8,5775. There, someone replied to the thread suggesting that the author put the database path on the anti-virus ignore list.
I added the database file to the ignore list of my anti-virus (Microsoft Security Essentials) and it solved my problem. No more database locked errors!
Is your database file on the same machine as the app or is it stored on a server?
You should create a new connection in every thread. I would simplify the creation of a connection and use this everywhere: connection = new SQLiteConnection(connString.ToString());
and use a database file on the same machine as the app and test again.
Why the two different ways of creating a connection?
These guys were having similar problems (mostly, it appears, with the journaling file being locked, maybe TortoiseSVN interactions ... check the referenced articles).
They came up with a set of recommendations (correct directories, changing journaling types from delete to persist, etc). http://sqlite.phxsoftware.com/forums/p/689/5445.aspx#5445
The journal mode options are discussed here: http://www.sqlite.org/pragma.html . You could try TRUNCATE.
Is there a stack trace during the exception into SQL Lite?
You indicate you "batch my commits at a reasonable interval". What is the interval?
I would always use a Connection, Transaction and Command in a using clause. In your first code listing you did, but your third (creating the tables) you didn't. I suggest you do that too, because (who knows?) maybe the commands that create the table somehow continue to lock the file. Long shot... but worth a shot?
Do you have Google Desktop Search (or another file indexer) running? As previously mentioned, Sysinternals Process Monitor can help you track it down.
Also, what is the filename of the database? From PerformanceTuningWindows:
Be VERY, VERY careful what you name your database, especially the extension
For example, if you give all your databases the extension .sdb (SQLite Database, nice name hey? I thought so when I choose it anyway...) you discover that the SDB extension is already associated with APPFIX PACKAGES.
Now, here is the cute part, APPFIX is an executable/package that Windows XP recognizes, and it will, (emphasis mine) ADD THE DATABASE TO THE SYSTEM RESTORE FUNCTIONALITY
This means, stay with me here, every time you write ANYTHING to the database, the Windows XP system thinks a bloody executable has changed and copies your ENTIRE 800 meg database to the system restore directory....
I recommend something like DB or DAT.
While the lock is reported on the COMMIT, the lock is on the INSERT/UPDATE command. Check for record locks not being released earlier in your code.