Improve querying speed on SQLite [duplicate] - C#

I recently read about SQLite and thought I would give it a try. When I insert one record it performs okay. But when I insert one hundred it takes five seconds, and as the record count increases so does the time. What could be wrong? I am using the SQLite wrapper (System.Data.SQLite):
dbcon = new SQLiteConnection(connectionString);
dbcon.Open();
//---INSIDE LOOP
SQLiteCommand sqlComm = new SQLiteCommand(sqlQuery, dbcon);
nRowUpdatedCount = sqlComm.ExecuteNonQuery();
//---END LOOP
dbcon.Close();

Wrap BEGIN/END statements around your bulk inserts. SQLite is optimized for transactions.
dbcon = new SQLiteConnection(connectionString);
dbcon.Open();
SQLiteCommand sqlComm;
sqlComm = new SQLiteCommand("begin", dbcon);
sqlComm.ExecuteNonQuery();
//---INSIDE LOOP
sqlComm = new SQLiteCommand(sqlQuery, dbcon);
nRowUpdatedCount = sqlComm.ExecuteNonQuery();
//---END LOOP
sqlComm = new SQLiteCommand("end", dbcon);
sqlComm.ExecuteNonQuery();
dbcon.Close();

I read everywhere that creating transactions is the solution to slow SQLite writes, but it can be long and painful to rewrite your code and wrap all your SQLite writes in transactions.
I found a much simpler, safe and very efficient method: I enable a (disabled by default) SQLite 3.7.0 optimisation: the Write-Ahead Log (WAL).
The documentation says it works on all Unix (i.e. Linux and OS X) and Windows systems.
How? Just run the following commands after initializing your SQLite connection:
PRAGMA journal_mode = WAL
PRAGMA synchronous = NORMAL
My code now runs ~600% faster: my test suite now runs in 38 seconds instead of 4 minutes :)
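For reference, here is a minimal C# sketch (System.Data.SQLite) of issuing those PRAGMAs right after opening the connection; connectionString is assumed to be the same one used in the question:
// Minimal sketch, assuming System.Data.SQLite and the question's connectionString.
using (var dbcon = new SQLiteConnection(connectionString))
{
    dbcon.Open();
    // Issue both PRAGMAs once, right after opening the connection.
    using (var pragma = new SQLiteCommand(
        "PRAGMA journal_mode = WAL; PRAGMA synchronous = NORMAL;", dbcon))
    {
        pragma.ExecuteNonQuery();
    }
    // ... normal inserts/queries go here, no transaction rewrite required ...
}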

Try wrapping all of your inserts (aka, a bulk insert) into a single transaction:
string insertString = "INSERT INTO [TableName] ([ColumnName]) VALUES (@value)";
SQLiteCommand command = new SQLiteCommand();
command.Parameters.AddWithValue("@value", value);
command.CommandText = insertString;
command.Connection = dbConnection;
SQLiteTransaction transaction = dbConnection.BeginTransaction();
try
{
//---INSIDE LOOP
command.Parameters["@value"].Value = value;
nRowUpdatedCount = command.ExecuteNonQuery();
//---END LOOP
transaction.Commit();
return true;
}
catch (SQLiteException ex)
{
transaction.Rollback();
}
By default, SQLite wraps every INSERT in its own transaction, which slows down the process:
INSERT is really slow - I can only do few dozen INSERTs per second
Actually, SQLite will easily do 50,000 or more INSERT statements per second on an average desktop computer. But it will only do a few dozen transactions per second.
Transaction speed is limited by disk drive speed because (by default) SQLite actually waits until the data really is safely stored on the disk surface before the transaction is complete. That way, if you suddenly lose power or if your OS crashes, your data is still safe. For details, read about atomic commit in SQLite.
By default, each INSERT statement is its own transaction. But if you surround multiple INSERT statements with BEGIN...COMMIT then all the inserts are grouped into a single transaction. The time needed to commit the transaction is amortized over all the enclosed insert statements and so the time per insert statement is greatly reduced.

See "Optimizing SQL Queries" in the ADO.NET help file SQLite.NET.chm. Code from that page:
using (SQLiteTransaction mytransaction = myconnection.BeginTransaction())
{
using (SQLiteCommand mycommand = new SQLiteCommand(myconnection))
{
SQLiteParameter myparam = new SQLiteParameter();
int n;
mycommand.CommandText = "INSERT INTO [MyTable] ([MyId]) VALUES(?)";
mycommand.Parameters.Add(myparam);
for (n = 0; n < 100000; n ++)
{
myparam.Value = n + 1;
mycommand.ExecuteNonQuery();
}
}
mytransaction.Commit();
}

Related

Slow SQL data retrieval with SqlDataReader.Read() in C# vs SSMS

I am doing a simple SQL query to get lots of data.
The complexity of the query is not an issue. It takes around 200ms to execute.
However the amount of data seems to be the issue.
We retrieve around 40k rows.
Each row has 8 columns and the amount of data is around a few hundred bytes per row. Say we download 15 MB in total for this query.
What boggles my mind is that:
when I execute the query from basic C# code it takes 1 min 44 secs.
But when I do it from SSMS it takes 10 secs. Of course I do this from the same machine, and I'm using the same database.
And I clearly see the UI and the rows being populated in real time. In 10 secs the whole data table is full.
We tried:
to set the same SET things as the ones from SSMS,
to change the transaction isolation level,
to ignore the execution plan (with the OPTION(RECOMPILE)),
to ignore locks (with the WITH(NOLOCK)).
It doesn't change anything.
Makes sense: it's the read that is slow. Not the query (IMHO).
It is the while(reader.Read()) that takes time.
And, we tried with an empty while loop. So this excludes boxing/unboxing stuff or putting the result in memory.
Here is a test program we made to figure out that it is the Read() that takes the time:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;
using System.Transactions;
namespace SqlPerfTest
{
class Program
{
const int GroupId = 1234;
static readonly DateTime DateBegin = new DateTime(2017, 6, 19, 0, 0, 0, DateTimeKind.Utc);
static readonly DateTime DateEnd = new DateTime(2017, 10, 20, 0, 0, 0, DateTimeKind.Utc);
const string ConnectionString = "CENSORED";
static void Main(string[] args)
{
TransactionOptions transactionOptions = new TransactionOptions
{
IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted
};
using (var transactionScope = new TransactionScope(TransactionScopeOption.Required, transactionOptions))
{
using (SqlConnection connection = new SqlConnection(ConnectionString))
{
connection.Open();
SetOptimizations(connection);
ShowUserOptions(connection);
DoPhatQuery(connection).Wait(TimeSpan.FromDays(1));
}
transactionScope.Complete();
}
}
static void SetOptimizations(SqlConnection connection)
{
SqlCommand cmd = connection.CreateCommand();
Console.WriteLine("===================================");
cmd.CommandText = "SET QUOTED_IDENTIFIER ON";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET ANSI_NULL_DFLT_ON ON";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET ANSI_PADDING ON";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET ANSI_WARNINGS ON";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET ANSI_NULLS ON";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET CONCAT_NULL_YIELDS_NULL ON";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET ARITHABORT ON";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET DEADLOCK_PRIORITY -1";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET QUERY_GOVERNOR_COST_LIMIT 0";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET TEXTSIZE 2147483647";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
}
static void ShowUserOptions(SqlConnection connection)
{
SqlCommand cmd = connection.CreateCommand();
Console.WriteLine("===================================");
cmd.CommandText = "DBCC USEROPTIONS";
using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
{
Console.WriteLine(cmd.CommandText);
while (reader.HasRows)
{
while (reader.Read())
{
Console.WriteLine("{0} = {1}", reader.GetString(0), reader.GetString(1));
}
reader.NextResult();
}
}
}
static async Task DoPhatQuery(SqlConnection connection)
{
Console.WriteLine("===================================");
SqlCommand cmd = connection.CreateCommand();
cmd.CommandText =
#"SELECT
p.[Id],
p.[UserId],
p.[Text],
FROM [dbo].[Post] AS p WITH (NOLOCK)
WHERE p.[Visibility] = #visibility
AND p.[GroupId] = #groupId
AND p.[DatePosted] >= #dateBegin
AND p.[DatePosted] < #dateEnd
ORDER BY p.[DatePosted] DESC
OPTION(RECOMPILE)";
cmd.Parameters.Add("#visibility", SqlDbType.Int).Value = 0;
cmd.Parameters.Add("#groupId", SqlDbType.Int).Value = GroupId;
cmd.Parameters.Add("#dateBegin", SqlDbType.DateTime).Value = DateBegin;
cmd.Parameters.Add("#dateEnd", SqlDbType.DateTime).Value = DateEnd;
Console.WriteLine(cmd.CommandText);
Console.WriteLine("===================================");
DateTime beforeCommit = DateTime.UtcNow;
using (SqlDataReader reader = await cmd.ExecuteReaderAsync(CommandBehavior.CloseConnection))
{
DateTime afterCommit = DateTime.UtcNow;
Console.WriteLine("Query time = {0}", afterCommit - beforeCommit);
DateTime beforeRead = DateTime.UtcNow;
int currentRow = 0;
while (reader.HasRows)
{
while (await reader.ReadAsync())
{
if (currentRow++ % 1000 == 0)
Console.WriteLine("[{0}] Rows read = {1}", DateTime.UtcNow, currentRow);
}
await reader.NextResultAsync();
}
Console.WriteLine("[{0}] Rows read = {1}", DateTime.UtcNow, currentRow);
DateTime afterRead = DateTime.UtcNow;
Console.WriteLine("Read time = {0}", afterRead - beforeRead);
}
}
}
}
As you can see above, we reproduce the same SET options as SSMS.
We also tried every trick we could find to speed things up:
using async calls, WITH(NOLOCK), OPTION(RECOMPILE), a bigger PacketSize in the connection string, and a SequentialAccess reader. None of it helped.
Still, SSMS is 50 times faster.
More info
Our database is an Azure database. We actually have 2 databases, one in Europe and one in West US.
Since we are located in Europe, the same query is faster when we use the Europe database, but it still takes around 30 secs while it is practically instant in SSMS.
The data transfer speed does influence this, but it's not the main issue.
We can also reduce the data transfer time by projecting fewer columns, which of course quickens the Read() iteration. Say we retrieve only our ID column: then the while(Read()) loop lasts 5 secs.
But it's not an option as we need all these columns.
We know how to 'solve' this issue: we can approach our problem differently, and make small queries daily and cache these results in an Azure Table or something.
But we want to know WHY SSMS is faster. What's the trick.
We used Entity Framework in C#, Dapper in C#, and the example above is plain ADO.NET. I have seen a few people on the interwebz with potentially a similar issue. To me, it feels like it's the SqlDataReader that is slow.
Like, it doesn't pipeline the download of the rows using multiple connections or something.
Question
So my question here is this: how the hell does Management Studio manage to be 50 times faster to download the result of our query? What's the trick?
Thanks guys.
What boggles my mind is that: when I execute the query from basic C# code it takes 1 min and 44 secs. But when I do it from SSMS it takes 10 secs
You can't execute a parameterized query directly in SSMS so you're comparing different things. When you use local variables instead of parameters in SSMS, SQL Server estimates row counts using overall average density statistics. With a parameterized query, SQL Server uses the statistics histogram and supplied parameter values for initial compilation. Different estimates can result in different plans, although the estimates from the histogram are usually more accurate and yield a better plan (theoretically).
Try updating statistics and executing the query from SSMS using sp_executesql and parameters. I would expect the same performance as the app code, good or bad.
For grins, have you tried ditching the DataReader and slapping the results into a DataTable? I have seen the DataReader be slow in certain situations.
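A rough sketch of that idea, reusing the SqlCommand (cmd) built in the test program above; DataTable.Load drains the reader for you:
// Rough sketch: let DataTable.Load consume the reader instead of a manual Read() loop.
var table = new DataTable();
using (SqlDataReader reader = cmd.ExecuteReader())
{
    table.Load(reader);   // pulls every row into memory in one go
}
Console.WriteLine("Rows loaded = {0}", table.Rows.Count);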

Execute DELETE only when the SELECT returns

I have a routine where I update the local database with other database data.
I simply execute a DELETE and then an INSERT INTO tblX (SELECT * FROM tblY, where tblY is a linked table), as below.
The problem is that in some cases the SELECT takes a long time after the DELETE, and I'd like to reduce the chance of a user querying this table while it's processing.
I'd like to know if there is some mechanism to execute the DELETE only after the return of the SELECT.
conn = new OleDbConnection(Conexao.getConexaoPainelGerencialLocal());
conn.Open();
OleDbCommand cmd = new OleDbCommand(" DELETE * FROM tblClienteContato; ", conn);
cmd.ExecuteNonQuery();
cmd = new OleDbCommand(" INSERT INTO tblClienteContato " +
" SELECT * FROM tblClienteContatoVinculada;", conn);
cmd.ExecuteNonQuery();
It sounds like what you need to do is wrap both of those commands in a transaction.
The cool thing about a transaction is that it either ALL WORKS or ALL FAILS, meaning that if something happens to stop the select statement, the database will not finalise the delete statement.
This looks like a really good example to work with:
https://msdn.microsoft.com/en-us/library/93ehy0z8(v=vs.110).aspx
Note that they have one command object, and replace the CommandText, rather than create a new object each time. This is probably important.
Try something like this:
conn = new OleDbConnection(Conexao.getConexaoPainelGerencialLocal());
OleDbCommand cmd = new OleDbCommand();
OleDbTransaction transaction = null;
try {
conn.Open();
transaction = conn.BeginTransaction(IsolationLevel.ReadCommitted);
cmd.Connection = conn;
cmd.Transaction = transaction;
cmd.CommandText = " DELETE * FROM tblClienteContato; ";
cmd.ExecuteNonQuery();
cmd.CommandText = " INSERT INTO tblClienteContato " +
" SELECT * FROM tblClienteContatoVinculada;";
cmd.ExecuteNonQuery();
// The data isn't _finally_ completed until this happens
transaction.Commit();
}
catch (Exception ex)
{
// Something has gone wrong.
// do whatever error messaging you do
Console.WriteLine(ex.Message);
try
{
// Attempt to roll back the transaction.
// this means your records won't be deleted
transaction.Rollback();
}
catch
{
// Do nothing here; transaction is not active.
}
}
You should look into BeginTransaction, Commit and rollback, here's an example:
_con.Open();
_con_trans = _con.BeginTransaction();
using(SqlCommand cmd = _con.CreateCommand())
{
cmd.CommandText = "delete from XXXXX";
cmd.CommandType = CommandType.Text;
cmd.Transaction = _con_trans;
cmd.ExecuteNonQuery();
}
using(SqlCommand cmd = _con.CreateCommand())
{
cmd.CommandText = "insert into XXXX";
cmd.CommandType = CommandType.Text;
cmd.Transaction = _con_trans;
cmd.ExecuteNonQuery();
}
_con_trans.Commit();
_con_trans = null;
_con.Close();
This way, everything is wrapped under a single transaction, so when the delete begins, the table will be locked for reading and writing.
Without knowing the schema of the table, it is hard to identify why the delete process is taking an extended amount of time.
An alternative to wrapping the commands within a transaction would be to simply delete the table itself rather than the data within it by using the DROP TABLE command, and then recreate the table utilizing the SELECT...INTO...FROM statement. A potential advantage to this is that the schemas will match identically, and any inherent conversions (e.g. decimal to int) will not need to be done.
using (conn = new OleDbConnection(Conexao.getConexaoPainelGerencialLocal())) {
conn.Open();
using (OleDbCommand cmd = new OleDbCommand()) {
cmd.CommandText = "DROP TABLE tblClienteContato; ";
cmd.ExecuteNonQuery();
cmd.CommandText = "SELECT * INTO tblClienteContato FROM tblClienteContatoVinculada;";
cmd.ExecuteNonQuery();
}
}
The following does not apply here (MS Access), but may apply to other SQL variants.
Another option is to utilize the TRUNCATE command, which will delete everything in the table in one fell swoop. There is no logging of the individual rows and the indexes (if present) don't need to be recalculated on each and every line being deleted. The catch to this method is that this will not work within the transaction. If there is an Identity column the value will be reset as well. There are other potential cons to this but without knowing the design of the table I have no way of identifying them.
using (conn = new OleDbConnection(Conexao.getConexaoPainelGerencialLocal())) {
conn.Open();
using (OleDbCommand cmd = new OleDbCommand()) {
cmd.CommandText = "TRUNCATE TABLE tblClienteContato; ";
cmd.ExecuteNonQuery();
cmd.CommandText = " INSERT INTO tblClienteContato " +
" SELECT * FROM tblClienteContatoVinculada;";
cmd.ExecuteNonQuery();
}
}
As Greg commented, I created temporary tables to receive the data from the external database and then transfer it to the definitive tables, so the probability of users being impacted is very low.
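A minimal sketch of that staging approach, assuming a hypothetical local table tblClienteContato_Staging with the same schema as tblClienteContato; only the final swap touches the live table:
// Sketch only: tblClienteContato_Staging is a hypothetical staging table
// mirroring tblClienteContato's schema. Requires System.Data.OleDb.
using (var conn = new OleDbConnection(Conexao.getConexaoPainelGerencialLocal()))
{
    conn.Open();
    using (var cmd = new OleDbCommand { Connection = conn })
    {
        // 1. Run the slow, linked-table SELECT into the staging table;
        //    users can keep querying the live table meanwhile.
        cmd.CommandText = "DELETE * FROM tblClienteContato_Staging;";
        cmd.ExecuteNonQuery();
        cmd.CommandText = "INSERT INTO tblClienteContato_Staging SELECT * FROM tblClienteContatoVinculada;";
        cmd.ExecuteNonQuery();
        // 2. Swap the fresh data into the live table; both statements are
        //    fast local operations, so the window with an empty table is tiny.
        cmd.CommandText = "DELETE * FROM tblClienteContato;";
        cmd.ExecuteNonQuery();
        cmd.CommandText = "INSERT INTO tblClienteContato SELECT * FROM tblClienteContato_Staging;";
        cmd.ExecuteNonQuery();
    }
}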

To optimize the query performance

I have a stored procedure which returns a minimum of around 40K rows, and it takes about 20 seconds in SSMS 2008 R2 (the database resides in SQL Azure), but when I run the same SP in my C# application using EF 5 or plain ADO.NET it takes 70-80 seconds.
The table has a non-clustered index on ScenarioID.
The SP is just a select statement with a where condition: SELECT * FROM Cost WHERE ScenarioID = @ID
using (SqlConnection con = new SqlConnection(constr))
{
using (SqlCommand cmd = new SqlCommand("sp_GetActCostsByID", con))
{
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add("#ID", SqlDbType.VarChar).Value = ID;
con.Open();
DataTable dt = new DataTable();
DateTime timee = DateTime.Now;
Console.WriteLine(timee);
dt.Load(cmd.ExecuteReader());
timee = DateTime.Now;
Console.WriteLine(timee);
}
}
Is there any way to increase the performance?
My execution plan: (screenshot not included)
Your nonclustered index on ScenarioID might not be helping, if you're not INCLUDEing all the columns you're trying to return, as it will need to do lookups to get those other columns - if there are lots of rows for that Scenario, you could end up with an ordinary table scan. And this comes down to statistics, so can vary from server to server.
If you can avoid the need for lookups, you'll get more consistent performance.
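If lookups are indeed the problem, one option is a covering index on ScenarioID that INCLUDEs the returned columns. A hedged sketch only: the INCLUDE column names below are placeholders, not the real Cost schema, and constr is the connection string from the question:
// Hypothetical covering index; replace the INCLUDE list with the columns
// the query actually returns from the Cost table.
using (var con = new SqlConnection(constr))
using (var cmd = new SqlCommand(
    @"CREATE NONCLUSTERED INDEX IX_Cost_ScenarioID_Covering
      ON dbo.Cost (ScenarioID)
      INCLUDE (CostValue, CostDate);", con))
{
    con.Open();
    cmd.ExecuteNonQuery();
}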

Improving SQLite Performance

Well, I have a file.sql that contains 20,000 INSERT commands.
Sample From the .sql file
INSERT INTO table VALUES
(1,-400,400,3,154850,'Text',590628,'TEXT',1610,'TEXT',79);
INSERT INTO table VALUES
(39,-362,400,3,111659,'Text',74896,'TEXT',0,'TEXT',14);
And I am using the following code to create an in-memory SQLite database, pull the values into it, and then calculate the time elapsed:
using (var conn = new SQLiteConnection(@"Data Source=:memory:"))
{
conn.Open();
var stopwatch = new Stopwatch();
stopwatch.Start();
using (var cmd = new SQLiteCommand(conn))
{
using (var transaction = conn.BeginTransaction())
{
cmd.CommandText = File.ReadAllText(@"file.sql");
cmd.ExecuteNonQuery();
transaction.Commit();
}
}
var timeelapsed = stopwatch.Elapsed.TotalSeconds <= 60
? stopwatch.Elapsed.TotalSeconds + " seconds"
: Math.Round(stopwatch.Elapsed.TotalSeconds/60) + " minutes";
MessageBox.Show(string.Format("Time elapsed {0}", timeelapsed));
conn.Close();
}
Things I have tried:
Using a file database instead of the in-memory one.
Using begin transaction and commit transaction [AS SHOWN IN MY CODE].
Using Firefox's extension named SQLite Manager to test whether the slowing-down problem comes from the script; however, I was surprised that the same 20,000 lines that I am trying to process with my code were pulled into the database in JUST 4ms!!!
Using PRAGMA synchronous = OFF, as well as PRAGMA journal_mode = MEMORY.
Appending begin transaction; and commit transaction; to the beginning and end of the .sql file respectively.
As the SQLite documentation says, SQLite is capable of processing 50,000 commands per second. And that is real, and I made sure of it using SQLite Manager [AS DESCRIBED IN THE THIRD THING THAT I'VE TRIED]; however, I am getting my 20,000 commands done in 4 minutes, which tells me that something is wrong.
QUESTION: What is the problem I am facing? Why is the execution so slow?!
The SQLite.Net documentation recommends the following construct for transactions:
using (SQLiteConnection conn = new SQLiteConnection(@"Data Source=:memory:"))
{
conn.Open();
using (SQLiteTransaction trans = conn.BeginTransaction())
{
using (SQLiteCommand cmd = new SQLiteCommand(conn))
{
cmd.CommandText = File.ReadAllText(@"file.sql");
cmd.ExecuteNonQuery();
}
trans.Commit();
}
conn.Close();
}
Are you able to manipulate the text file contents to something like:
INSERT INTO table (col01, col02, col03, col04, col05, col06, col07, col08, col09, col10, col11)
SELECT 1,-400,400,3,154850,'Text',590628,'TEXT',1610,'TEXT',79
UNION ALL
SELECT 39,-362,400,3,111659,'Text',74896,'TEXT',0,'TEXT',14
;
Maybe try "batching them" into groups of 100 as a initial test.
http://sqlite.org/lang_select.html
SQLite seems to support the UNION ALL statement.
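As a rough sketch of that batching idea, assuming you can extract the raw value tuples from file.sql as strings (e.g. "1,-400,400,3,154850,'Text',..."); the helper names and the IEnumerable input are hypothetical:
// Sketch only: build multi-row INSERTs of the SELECT ... UNION ALL SELECT ... form
// in batches of 100 and run them inside one transaction.
// Needs: using System.Collections.Generic; using System.Data.SQLite;
static void InsertBatched(SQLiteConnection conn, IEnumerable<string> valueTuples)
{
    const int batchSize = 100;
    var batch = new List<string>(batchSize);
    using (var transaction = conn.BeginTransaction())
    {
        foreach (string tuple in valueTuples)
        {
            batch.Add(tuple);
            if (batch.Count == batchSize)
            {
                ExecuteBatch(conn, batch);
                batch.Clear();
            }
        }
        if (batch.Count > 0)
            ExecuteBatch(conn, batch);
        transaction.Commit();
    }
}
static void ExecuteBatch(SQLiteConnection conn, List<string> batch)
{
    // The value strings come straight from the trusted file.sql, already SQL-formatted.
    string sql = "INSERT INTO [table] SELECT " + string.Join(" UNION ALL SELECT ", batch) + ";";
    using (var cmd = new SQLiteCommand(sql, conn))
    {
        cmd.ExecuteNonQuery();
    }
}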

