I have five threads, each doing an OracleBulkCopy (1 million records each) into the same table (EXCEL_DATA) at the same time. At some point I get the error below:
ORA-00604: error occurred at recursive SQL level 1
ORA-00054: resource busy and acquire with NOWAIT specified
I am using the code below for the OracleBulkCopy:
using (OracleConnection con = new OracleConnection(ConnectionString))
{
    con.Open();
    using (var bulkcopy = new OracleBulkCopy(con, options))
    {
        OracleTransaction tran =
            con.BeginTransaction(IsolationLevel.ReadCommitted);
        bulkcopy.DestinationTableName = DestinationTable;
        foreach (var mapping in columnMappings)
            bulkcopy.ColumnMappings.Add(mapping);
        bulkcopy.BulkCopyTimeout = TimeOut.Value;
        try
        {
            bulkcopy.WriteToServer(dataTable);
            tran.Commit();
        }
        catch (Exception ex)
        {
            tran.Rollback();
        }
    }
}
It sounds like the table (or a section of it) is locked - quite reasonable during a bulk copy, especially since you have an explicit transaction - and that is blocking inserts from the competing bulk copies. That isn't very surprising. The best thing I can say is... "don't do that". In particular, this is an IO-bound operation: your main bottleneck is very likely the network, with the secondary limit being the back-end server - which must also observe the ACID rules you have specified. For these reasons, running these operations in parallel is unlikely to give any significant performance benefit, but is very likely to cause timeouts due to blocking.
So: instead of doing these in parallel... do them in series.
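For example, a minimal sketch of running the loads in series, reusing the names from the question (dataTables is a hypothetical collection holding the five per-thread DataTable instances, and the explicit transaction is dropped for simplicity):

foreach (DataTable dataTable in dataTables)
{
    using (OracleConnection con = new OracleConnection(ConnectionString))
    {
        con.Open();
        using (var bulkcopy = new OracleBulkCopy(con, options))
        {
            bulkcopy.DestinationTableName = DestinationTable;
            foreach (var mapping in columnMappings)
                bulkcopy.ColumnMappings.Add(mapping);
            bulkcopy.BulkCopyTimeout = TimeOut.Value;
            // Each load completes before the next begins, so the copies
            // never compete for locks on EXCEL_DATA.
            bulkcopy.WriteToServer(dataTable);
        }
    }
}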
I need to block database reads at the row level while I'm executing update logic for the same row. What would a nice, clean solution look like? Here is some code I'm working on:
using (SqlConnection conn = new SqlConnection(Configuration.ConnectionString)) {
    conn.Open();
    using (SqlTransaction tran = conn.BeginTransaction(IsolationLevel.Serializable)) {
        //TODO lock table row with ID=primaryKey (block other reads and updates)
        using (SqlCommand cmd = new SqlCommand("SELECT Data FROM MyTable WHERE ID=@primaryKey", conn, tran)) {
            cmd.Parameters.AddWithValue("@primaryKey", primaryKey);
            using (var reader = cmd.ExecuteReader()) {
                data = PopulateData(reader);
            }
        }
        if (isUpdateNeeded(data)) {
            ChangeExternalSystemStateAndUpdateData(data); //EDIT - concurrent calls not allowed
            WriteUpdatesToDatabase(conn, tran, data); //EDIT
            tran.Commit();
        }
    } //release lock and allow other processes to read row with ID=primaryKey
}
EDIT:
I have the following constraints:
Code can be executed from different app pools simultaneously, so an in-memory lock is not an option.
ChangeExternalSystemStateAndUpdateData() must be executed only once in the scope of a given MyTable row; concurrent calls will cause problems.
Usually the problem here isn't so much row locking, but rather: other SPIDs acquiring read locks between your read and your update - which can lead to deadlock scenarios; the common fix here is to acquire a write lock in the initial read:
SELECT Data FROM MyTable WITH (UPDLOCK) WHERE ID=@primaryKey
Note that if another SPID is explicitly using NOLOCK then they'll still blitz past it.
You could also try adding ROWLOCK, i.e. WITH (UPDLOCK, ROWLOCK) - personally I'd keep it simple initially. Since you're in a serializable context, trying to be too granular may be a lost cause - key-range locks, etc.
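For illustration, a minimal sketch of the question's initial read rewritten with the hint (same names as the question's code; the command is enlisted in the transaction via the constructor):

using (SqlConnection conn = new SqlConnection(Configuration.ConnectionString))
{
    conn.Open();
    using (SqlTransaction tran = conn.BeginTransaction(IsolationLevel.Serializable))
    {
        // UPDLOCK takes the write lock up front, so no other SPID can grab a
        // conflicting lock between this read and the later update.
        using (SqlCommand cmd = new SqlCommand(
            "SELECT Data FROM MyTable WITH (UPDLOCK) WHERE ID=@primaryKey", conn, tran))
        {
            cmd.Parameters.AddWithValue("@primaryKey", primaryKey);
            using (var reader = cmd.ExecuteReader())
            {
                data = PopulateData(reader);
            }
        }
        // ...update logic and tran.Commit() as in the question...
    }
}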
I got this exception message:
Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim.
The only line of my code that was implicated in the stack trace was the SqlDataAdapter.Fill call in this method:
public static DataTable ExecuteSQLReturnDataTable(string sql, CommandType cmdType, params SqlParameter[] parameters)
{
    using (DataSet ds = new DataSet())
    using (SqlConnection connStr = new SqlConnection(CPSConnStr))
    using (SqlCommand cmd = new SqlCommand(sql, connStr))
    {
        cmd.CommandType = cmdType;
        cmd.CommandTimeout = EXTENDED_TIMEOUT;
        foreach (var item in parameters)
        {
            cmd.Parameters.Add(item);
        }
        try
        {
            cmd.Connection.Open();
            new SqlDataAdapter(cmd).Fill(ds);
            return ds.Tables[0];
        }
        finally
        {
            cmd.Connection.Close();
        }
    }
}
This is a general-purpose method that I use for all sorts of queries; I haven't changed it recently, nor have I ever seen this particular exception before.
What can I do to guard against this exception being thrown again?
You can catch the deadlock exception and retry X number of times before giving up.
There's no magic solution to avoid deadlocks. If SQL Server detects a deadlock it's going to pick one of the processes to kill. In some cases you may have had deadlocks where your process was the one that was lucky enough to continue.
You can use SQL Profiler to capture the deadlocks. I had to do this in the past to try and figure out what was actually causing the deadlocks. The less often this happens the harder it is to track down. In our test environment we just created some testing code to hammer the database from a few different machines to try and cause a deadlock.
In our case we made some changes to our indexes and modified database triggers to reduce the deadlocks as best we could. In the end we still had to implement the retries as a "just in case".
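As a sketch of that retry approach - wrapping the question's helper, and assuming SQL Server's deadlock-victim error number of 1205 (requires System.Data.SqlClient and System.Threading):

public static DataTable ExecuteWithDeadlockRetry(string sql, CommandType cmdType,
                                                 params SqlParameter[] parameters)
{
    const int maxRetries = 3;
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return ExecuteSQLReturnDataTable(sql, cmdType, parameters);
        }
        catch (SqlException ex)
        {
            // 1205 = "chosen as the deadlock victim"; rethrow anything else,
            // or give up after the final attempt.
            if (ex.Number != 1205 || attempt == maxRetries) throw;
            Thread.Sleep(100 * attempt); // brief backoff before retrying
        }
    }
}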
It might have helped if you had shown the SQL that was passed to ExecuteSQLReturnDataTable. Meanwhile, read Minimizing Deadlocks.
Of course, you may have to also look at whatever else is contributing to the deadlock.
I am writing an application which updates data in separate but related database tables. I would like to perform the update queries within a transaction from within my code. I know of two potential ways of doing this:
Use a System.Data.SqlClient.SqlTransaction to wrap each update in one transaction.
Use a System.Transactions.TransactionScope to wrap the separate update operations in one transaction.
My question is: will either of these models result in the transactions being rolled back if the process is terminated in the middle, i.e. if someone literally kills the process forcefully? For example, if I have this code which uses the first model and the process is terminated between the two commands, will the transaction be rolled back?
try
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (var tran = conn.BeginTransaction("MyTran"))
        {
            using (var firstCommand = new SqlCommand(firstQuery, conn, tran))
            {
                firstCommand.ExecuteNonQuery();
            }
            //PROCESS IS TERMINATED HERE.
            using (var secondCommand = new SqlCommand(secondQuery, conn, tran))
            {
                secondCommand.ExecuteNonQuery();
            }
            tran.Commit();
        }
    }
}
catch (Exception)
{
    //do whatever.
}
Or if I have the following code which uses the second model and the process is terminated between the two method calls, will the transaction be rolled back?
try
{
using (var scope = new TransactionScope())
{
MyFirstUpdate();
//PROCESS IS TERMINATED HERE.
MySecondUpdate();
scope.Complete();
}
}
catch (Exception)
{
//do whatever.
}
I have not been able to find any information in MSDN or elsewhere which indicates what the results will be in either of these cases.
If a transaction is not committed, it is rolled back. Killing the process prevents the COMMIT from being applied, since TransactionScope.Complete is never called, and it terminates the connection, which causes a rollback.
If we're talking about abruptly terminating a process, none of the "rollback" logic gets a chance to run and no messages are sent to the server. This is different from general exception-handling, where finally blocks will usually ensure rollbacks occur.
The behaviour of SQL Server is to apply a timeout to connections - if the connection doesn't send a keep-alive message before the timeout it is considered dead and is closed at the SQL Server end. If there is a transaction pending, it will be rolled back when the associated connection is closed.
In my Windows application I try to connect to SQL Server 2008 with the following code:
SqlConnection connection = new SqlConnection(Properties.Settings.Default.KargarBandarConnectionString);
SqlCommand command = new SqlCommand("Select IsAdmin from Users where UserName=@UserName And Password=@Password", connection);
SqlDataReader dataReader = null;
command.Parameters.AddWithValue("@UserName", UserNameTextBox.Text.Trim());
command.Parameters.AddWithValue("@Password", PasswordTextBox.Text);
try
{
connection.Open();
dataReader = command.ExecuteReader();
if (dataReader.HasRows)
{
while (dataReader.Read())
{
IsAdmin = dataReader.GetBoolean(0);
}
this.DialogResult = DialogResult.OK;
}
else
{
FMessageBox.ShowWarning("error");
UserNameTextBox.Focus();
}
}
catch (Exception ex)
{
if (progressForm != null)
progressForm.Close();
FMessageBox.ShowError(ex.Message);
}
finally
{
if (dataReader != null)
{
dataReader.Close();
dataReader.Dispose();
}
if (connection != null)
{
connection.Close();
connection.Dispose();
}
}
Everything works properly, but sometimes I get the following error:
timeout expired. the timeout period elapsed prior to obtaining a
connection from the pool ...
How can this be solved?
The reason you're getting this exception is that you have exhausted your connection pool and the number of "available" connections in your application.
Every time you open a connection, one is pulled from the connection pool if possible, or a new one is created if not.
However, to prevent runaway connection usage, there is a limit (100 by default, and configurable): if you try to use more than that many simultaneous connections, the code will not create new ones; instead it waits for one to be returned to the pool, and you get this timeout if it waits too long.
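If you genuinely need more simultaneous connections (rather than needing to fix a leak), the cap can be raised in the connection string; a sketch with hypothetical server/database names:

// "Max Pool Size" raises the default cap of 100; "Min Pool Size" keeps a few
// connections warm. Note that raising the cap treats the symptom, not a leak.
var connStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=true;" +
              "Max Pool Size=200;Min Pool Size=5";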
So, for the particular example of code you've shown, I would:
Close the connection before showing an error message to the user.
However, unless 100 users are seeing the error message and leaving it there at the same time, it is unlikely the code you've shown is the cause of this problem.
Other than that, I would go through the entire application and ensure you don't have any connection leaks anywhere else.
This particular type of exception can occur in one spot even though the problem is somewhere else. Example: A report is leaking an open connection every time it runs, and you run it 100 times successfully, then someone tries to log in, and the exception occurs in the login form.
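To make the leak pattern concrete, a minimal sketch (BuildReport and connectionString are hypothetical stand-ins):

// LEAKS: if BuildReport throws, Close is never reached, and the pooled
// connection is only reclaimed whenever the GC gets around to finalizing it.
SqlConnection conn = new SqlConnection(connectionString);
conn.Open();
BuildReport(conn);
conn.Close();

// SAFE: a using block returns the connection to the pool deterministically,
// even when an exception is thrown.
using (SqlConnection conn2 = new SqlConnection(connectionString))
{
    conn2.Open();
    BuildReport(conn2);
}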
That happens if you either:
leak connections (leaving them for GC to deal with rather than disposing them)
just have too much happening, such that the pool is exhausted
The first is the most common, and I expect it relates a lot to the fact that you are over-complicating your error handling. This makes it easy to miss, and hard to spot that you've missed it. The code shown looks OK, but it would be far preferable to use using blocks for all the IDisposable elements, rather than finally. Also, don't keep the connection open while you show modal things like the message box, unless you need the connection afterwards. Frankly, a lot could be gained here by cleanly separating the UI and data-access code; then there is no temptation to stick a message box in the middle of a database query.
However! To be explicit, I believe this code is the victim of some other code that is hogging connections. Look at your other data access code for the cause of this.
Refactor your code to look something like this, with using blocks. The other answers here are very important; be sure to understand them.
bool res = false;
try
{
    using (var connection = new SqlConnection(Properties.Settings.Default.KargarBandarConnectionString))
    using (var cmd = connection.CreateCommand())
    {
        cmd.CommandText = "Select IsAdmin from Users where UserName=@UserName And HashedAndSaltedPassword=@PwdHash";
        cmd.Parameters.AddWithValue("@UserName", UserNameTextBox.Text.Trim());
        cmd.Parameters.AddWithValue("@PwdHash", SaltAndHash(PasswordTextBox.Text));
        connection.Open();
        var result = cmd.ExecuteScalar();
        if (result != null)
        {
            res = Convert.ToBoolean(result);
            this.DialogResult = DialogResult.OK;
        }
    }
}
catch (Exception ex)
{
    if (progressForm != null) { progressForm.Close(); }
    FMessageBox.ShowError(ex.Message);
}
if (res == false)
{
    FMessageBox.ShowWarning("error");
    UserNameTextBox.Focus();
}
I'm performing a large number of INSERTs into a SQLite database. I'm using just one thread. I batch the writes to improve performance and to have a bit of security in case of a crash. Basically I cache a bunch of data in memory and then, when I deem it appropriate, loop over all of that data and perform the INSERTs. The code for this is shown below:
public void Commit()
{
using (SQLiteConnection conn = new SQLiteConnection(this.connString))
{
conn.Open();
using (SQLiteTransaction trans = conn.BeginTransaction())
{
using (SQLiteCommand command = conn.CreateCommand())
{
command.CommandText = "INSERT OR IGNORE INTO [MY_TABLE] (col1, col2) VALUES (?,?)";
command.Parameters.Add(this.col1Param);
command.Parameters.Add(this.col2Param);
foreach (Data o in this.dataTemp)
{
this.col1Param.Value = o.Col1Prop;
this.col2Param.Value = o.Col2Prop;
command.ExecuteNonQuery();
}
}
this.TryHandleCommit(trans);
}
conn.Close();
}
}
I now employ the following gimmick to get the thing to eventually work:
private void TryHandleCommit(SQLiteTransaction trans)
{
try
{
trans.Commit();
}
catch (Exception e)
{
Console.WriteLine("Trying again...");
this.TryHandleCommit(trans);
}
}
I create my DB like so:
public DataBase(String path)
{
//build connection string
SQLiteConnectionStringBuilder connString = new SQLiteConnectionStringBuilder();
connString.DataSource = path;
connString.Version = 3;
connString.DefaultTimeout = 5;
connString.JournalMode = SQLiteJournalModeEnum.Persist;
connString.UseUTF16Encoding = true;
using (connection = new SQLiteConnection(connString.ToString()))
{
//check for existence of db
FileInfo f = new FileInfo(path);
if (!f.Exists) //build new blank db
{
SQLiteConnection.CreateFile(path);
connection.Open();
using (SQLiteTransaction trans = connection.BeginTransaction())
{
using (SQLiteCommand command = connection.CreateCommand())
{
command.CommandText = DataBase.CREATE_MATCHES;
command.ExecuteNonQuery();
command.CommandText = DataBase.CREATE_STRING_DATA;
command.ExecuteNonQuery();
//TODO add logging
}
trans.Commit();
}
connection.Close();
}
}
}
I then export the connection string and use it to obtain new connections in different parts of the program.
At seemingly random intervals, though far too often to ignore or work around, I get an unhandled SQLiteException: Database file is locked. This occurs when I attempt to commit the transaction. No errors seem to occur before then. This does not always happen; sometimes the whole thing runs without a hitch.
No reads are being performed on these files before the commits finish.
I have the very latest SQLite binary.
I'm compiling for .NET 2.0.
I'm using VS 2008.
The db is a local file.
All of this activity is encapsulated within one thread / process.
Virus protection is off (though I think that was only relevant if you were connecting over a network?).
As per Scotsman's post I have implemented the following changes:
Journal Mode set to Persist
DB files stored in C:\Docs + Settings\ApplicationData via System.Windows.Forms.Application.AppData windows call
No inner exception
Witnessed on two distinct machines (albeit very similar hardware and software)
Have been running Process Monitor - no extraneous processes are attaching themselves to the DB files - the problem is definitely in my code...
Does anyone have any idea what's going on here?
I know I just dropped a whole mess of code, but I've been trying to figure this out for way too long. My thanks to anyone who makes it to the end of this question!
brian
UPDATES:
Thanks for the suggestions so far! I've implemented many of the suggested changes. I feel that we are getting closer to the answer...however...
The code above technically works, but it is non-deterministic! It is not guaranteed to do anything aside from spin in neutral forever. In practice it seems to work somewhere between the 1st and 10th iteration. If I batch my commits at a reasonable interval the damage is mitigated, but I really do not want to leave things in this state...
More suggestions welcome!
It looks like you failed to link the command with the transaction you've created.
Instead of:
using (SQLiteCommand command = conn.CreateCommand())
You should use:
using (SQLiteCommand command = new SQLiteCommand("<INSERT statement here>", conn, trans))
Or you can set its Transaction property after its construction.
While we are at it - your handling of failures is incorrect:
The command's ExecuteNonQuery method can also fail and you are not really protected. You should change the code to something like:
public void Commit()
{
using (SQLiteConnection conn = new SQLiteConnection(this.connString))
{
conn.Open();
SQLiteTransaction trans = conn.BeginTransaction();
try
{
using (SQLiteCommand command = conn.CreateCommand())
{
command.Transaction = trans; // Now the command is linked to the transaction and don't try to create a new one (which is probably why your database gets locked)
command.CommandText = "INSERT OR IGNORE INTO [MY_TABLE] (col1, col2) VALUES (?,?)";
command.Parameters.Add(this.col1Param);
command.Parameters.Add(this.col2Param);
foreach (Data o in this.dataTemp)
{
this.col1Param.Value = o.Col1Prop;
this.col2Param.Value = o.Col2Prop;
command.ExecuteNonQuery();
}
}
trans.Commit();
}
catch (SQLiteException ex)
{
// You need to rollback in case something wrong happened in command.ExecuteNonQuery() ...
trans.Rollback();
throw;
}
}
}
Another thing is that you don't need to cache anything in memory. You can depend on SQLite's journaling mechanism to store incomplete transaction state.
Run Sysinternals Process Monitor and filter on the filename while running your program, to rule out any other process doing anything to it and to see exactly what your program is doing to the file. Long shot, but it might give a clue.
We had a very similar problem using nested transactions with the TransactionScope class. We thought all database actions occurred on the same thread... however, we were caught out by the transaction mechanism - more specifically, the ambient transaction.
Basically, there was a transaction higher up the chain which, by the magic of ADO.NET, the connection automatically enlisted in. The result was that, even though we thought we were writing to the database on a single thread, the write didn't really happen until the topmost transaction was committed. At this 'indeterminate' point the database was written to, causing it to be locked outside of our control.
The solution was to ensure that the SQLite database did not directly take part in the ambient transaction by ensuring we used something like:
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew))
{
    ...
    scope.Complete();
}
Things to watch for:
don't use connections across multiple threads/processes.
I've seen it happen when a virus scanner would detect changes to the file and try to scan it. It would lock the file for a short interval and cause havoc.
I started facing this same problem today: I'm studying ASP.NET MVC, building my first application completely from scratch. Sometimes, when I'd write to the database, I'd get the same exception, saying the database file was locked.
I found it really strange, since I was completely sure that there was just one connection open at that time (based on process explorer's listing of active file handles).
I've also built the whole data access layer from scratch, using the System.Data.SQLite .NET provider, and, when I planned it, I took special care with connections and transactions, to ensure no connection or transaction was left hanging around.
The tricky part was that setting a breakpoint on the ExecuteNonQuery() call and running the application in debug mode would make the error disappear!
Googling, I found something interesting on this site: http://www.softperfect.com/board/read.php?8,5775. There, someone replied to the thread suggesting that the author put the database path on the anti-virus ignore list.
I added the database file to the ignore list of my anti-virus (Microsoft Security Essentials) and it solved my problem. No more database locked errors!
Is your database file on the same machine as the app or is it stored on a server?
You should create a new connection in every thread. I would simplify the creation of a connection and use this everywhere: connection = new SQLiteConnection(connString.ToString());
and use a database file on the same machine as the app and test again.
Why the two different ways of creating a connection?
These guys were having similar problems (mostly, it appears, with the journal file being locked, maybe TortoiseSVN interactions... check the referenced articles).
They came up with a set of recommendations (correct directories, changing journaling types from delete to persist, etc). http://sqlite.phxsoftware.com/forums/p/689/5445.aspx#5445
The journal mode options are discussed here: http://www.sqlite.org/pragma.html . You could try TRUNCATE.
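If your System.Data.SQLite version doesn't expose TRUNCATE on SQLiteJournalModeEnum, a sketch of setting it with a plain PRAGMA instead (reusing the question's connString builder):

using (SQLiteConnection conn = new SQLiteConnection(connString.ToString()))
{
    conn.Open();
    using (SQLiteCommand cmd = conn.CreateCommand())
    {
        // PRAGMA journal_mode returns the mode now in effect, so read it back.
        cmd.CommandText = "PRAGMA journal_mode=TRUNCATE;";
        var mode = cmd.ExecuteScalar(); // e.g. "truncate"
    }
}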
Is there a stack trace from inside SQLite when the exception occurs?
You indicate you "batch my commits at a reasonable interval". What is the interval?
I would always use a Connection, Transaction and Command in a using clause. In your first code listing you did, but in your third (creating the tables) you didn't. I suggest you do that too, because (who knows?) maybe the commands that create the table somehow continue to lock the file. Long shot... but worth a shot?
Do you have Google Desktop Search (or another file indexer) running? As previously mentioned, Sysinternals Process Monitor can help you track it down.
Also, what is the filename of the database? From PerformanceTuningWindows:
Be VERY, VERY careful what you name your database, especially the extension
For example, if you give all your databases the extension .sdb (SQLite Database, nice name hey? I thought so when I chose it anyway...) you'll discover that the SDB extension is already associated with APPFIX PACKAGES.
Now, here is the cute part: APPFIX is an executable/package that Windows XP recognizes, and it will (emphasis mine) ADD THE DATABASE TO THE SYSTEM RESTORE FUNCTIONALITY.
This means, stay with me here, every time you write ANYTHING to the database, the Windows XP system thinks a bloody executable has changed and copies your ENTIRE 800 meg database to the system restore directory....
I recommend something like .db or .dat.
While the lock is reported on the COMMIT, the lock is on the INSERT/UPDATE command. Check for record locks not being released earlier in your code.