Confusing performance of executing bulk insert in a transaction and without - C#

I'm bulk inserting a million records through a temporary table to compare performance two ways: with a transaction and without. Before each test I drop the tables if they exist and recreate them.
//Fills the data to insert; called from the constructor
private void FillData()
{
_insertData = new List<TransactionDto>();
for (var i = 1; i <= 1000000; i++)
{
_insertData.Add(new TransactionDto(i, $"Insert{i}"));
}
}
private void PrepareDbTables(NpgsqlConnection connection)
{
var query = @"DROP TABLE IF EXISTS TransactionTest;
CREATE TABLE TransactionTest(id integer,text varchar(24))";
connection.Query(query);
}
private void DropAndCreateTempTable(NpgsqlConnection connection)
{
var query = @"DROP TABLE IF EXISTS TmpTransactions";
connection.Execute(query);
query = @"CREATE TEMP TABLE TmpTransactions (id integer, text varchar(24));";
connection.Execute(query);
}
Here are 2 tests:
[Fact]
public void CheckInsertBulkSpeedWithOutTransaction()
{
var sw = new Stopwatch();
using (var con = new NpgsqlConnection(ConnectionString))
{
con.Open();
//Delete and create new table TransactionTest
PrepareDbTables(con);
DropAndCreateTempTable(con);
sw.Start();
InsertBulkWithTempTable(null, _insertData, con);
sw.Stop();
}
_output.WriteLine(
$"Test completed. InsertBulk without transaction {_insertData.Count} elements for: {sw.ElapsedMilliseconds} ms");
}
The same test with a transaction:
[Fact]
public void CheckInsertBulkSpeedWithTransaction()
{
var sw = new Stopwatch();
using (var con = new NpgsqlConnection(ConnectionString))
{
con.Open();
//Delete and create new table TransactionTest
PrepareDbTables(con);
DropAndCreateTempTable(con);
sw.Start();
using (var transaction = con.BeginTransaction(IsolationLevel.ReadUncommitted))
{
InsertBulkWithTempTable(transaction, _insertData, con);
transaction.Commit();
transaction.Dispose();
}
sw.Stop();
}
_output.WriteLine(
$"Test completed. InsertBulk with transaction {_insertData.Count} elements for: {sw.ElapsedMilliseconds} ms");
}
The main method that inserts the records:
private void InsertBulkWithTempTable(NpgsqlTransaction transaction, List<TransactionDto> data, NpgsqlConnection connection)
{
using (var writer =
connection.BeginBinaryImport(
"COPY TmpTransactions(id,text) FROM STDIN(Format BINARY)"))
{
foreach (var dto in data)
{
writer.WriteRow(dto.Id, dto.Text);
}
writer.Complete();
}
var query =
"INSERT INTO TransactionTest select * from TmpTransactions";
//connection.Query(query, transaction);
connection.Execute(query);
}
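One detail worth noting about the final INSERT ... SELECT: the transaction passed into the method is never used, because Execute is called without it. A minimal sketch of passing it through, assuming Dapper (whose Execute takes a named transaction parameter; passing it positionally, as in the commented-out Query call, would bind it to the param argument instead):

// Sketch: hand the (possibly null) transaction to Dapper explicitly so the
// INSERT ... SELECT actually runs inside it in the transactional test.
var query = "INSERT INTO TransactionTest SELECT * FROM TmpTransactions";
connection.Execute(query, transaction: transaction);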
The results of these tests are different each time I run them, and it doesn't matter whether I use Execute() or Query().
Test completed. InsertBulk without transaction 1000000 elements for: 7451 ms
Test completed. InsertBulk with transaction 1000000 elements for: 4676 ms
Test completed. InsertBulk without transaction 1000000 elements for: 6336 ms
Test completed. InsertBulk with transaction 1000000 elements for: 8776 ms
I'm trying to figure out what this depends on.
Any ideas? Any help is appreciated.
Thanks.

Related

SQL Insert statements in C# on Azure DB - Running very slow

I'm working on an importer in our web application. With the code I currently have, when connecting to a local SQL Server it runs fine and within reason. I'm also creating a .sql script that users can download.
Example 1
40k records, 8 columns: takes from 1 minute 30 seconds to 2 minutes
When I move it to production on an Azure App Service, it runs VERY slowly.
Example 2
40k records, 8 columns: takes from 15 to 18 minutes
The current database is set to: Pricing tier: Standard S2: 50 DTUs
Here is the code:
using (var sqlConnection = new SqlConnection(connectionString))
{
try
{
var generatedScriptFilePathInfo = GetImportGeneratedScriptFilePath(trackingInfo.UploadTempDirectoryPath, trackingInfo.FileDetail);
using (FileStream fileStream = File.Create(generatedScriptFilePathInfo.GeneratedScriptFilePath))
{
using (StreamWriter writer = new StreamWriter(fileStream))
{
sqlConnection.Open();
sqlTransaction = sqlConnection.BeginTransaction();
await writer.WriteLineAsync("/* Insert Scripts */").ConfigureAwait(false);
foreach (var item in trackingInfo.InsertSqlScript)
{
errorSqlScript = item;
using (var cmd = new SqlCommand(item, sqlConnection, sqlTransaction))
{
cmd.CommandTimeout = 800;
cmd.CommandType = CommandType.Text;
await cmd.ExecuteScalarAsync().ConfigureAwait(false);
}
currentRowLine++;
rowsProcessedUpdateEveryXCounter++;
rowsProcessedTotal++;
// append insert statement to the file
await writer.WriteLineAsync(item).ConfigureAwait(false);
}
// write out a couple of blank lines to separate insert statements from post scripts (if there are any)
await writer.WriteLineAsync(string.Empty).ConfigureAwait(false);
await writer.WriteLineAsync(string.Empty).ConfigureAwait(false);
}
}
}
catch (OverflowException exOverFlow)
{
sqlTransaction.Rollback();
sqlTransaction.Dispose();
trackingInfo.IsSuccessful = false;
trackingInfo.ImportMetricUpdateError = new ImportMetricUpdateErrorDTO(trackingInfo.ImportMetricId)
{
ErrorLineNbr = currentRowLine + 1, // add one to go ahead and count the record we are on to sync up with the file
ErrorMessage = string.Format(CultureInfo.CurrentCulture, "{0}", ImporterHelper.ArithmeticOperationOverflowFriendlyErrorText),
ErrorSQL = errorSqlScript,
RowsProcessed = currentRowLine
};
await LogImporterError(trackingInfo.FileDetail, exOverFlow.ToString(), currentUserId).ConfigureAwait(false);
await UpdateImportAfterFailure(trackingInfo.ImportMetricId, exOverFlow.Message, currentUserId).ConfigureAwait(false);
return trackingInfo;
}
catch (Exception ex)
{
sqlTransaction.Rollback();
sqlTransaction.Dispose();
trackingInfo.IsSuccessful = false;
trackingInfo.ImportMetricUpdateError = new ImportMetricUpdateErrorDTO(trackingInfo.ImportMetricId)
{
ErrorLineNbr = currentRowLine + 1, // add one to go ahead and count the record we are on to sync up with the file
ErrorMessage = string.Format(CultureInfo.CurrentCulture, "{0}", ex.Message),
ErrorSQL = errorSqlScript,
RowsProcessed = currentRowLine
};
await LogImporterError(trackingInfo.FileDetail, ex.ToString(), currentUserId).ConfigureAwait(false);
await UpdateImportAfterFailure(trackingInfo.ImportMetricId, ex.Message, currentUserId).ConfigureAwait(false);
return trackingInfo;
}
}
Questions
Is there any way to speed this up on Azure? Or is the only way to upgrade the DTUs?
We are also looking into SqlBulkCopy. Will this help at all, or will it still be slow on Azure? https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlbulkcopy?redirectedfrom=MSDN&view=dotnet-plat-ext-5.0
Desired results
Run at the same speed as when running against a local SQL Server database.
For now, I updated my code to batch the insert statements based on the record count. If the record count is over 10k, it batches them by dividing the total by 10.
This helped performance BIG TIME on our Azure instance. I was able to add 40k records within 30 seconds. I also think part of the issue was how many different slots use our App Service on Azure.
We will also probably move to SqlBulkCopy later on, as users need to import larger Excel files (see the sketch after the code below).
Thanks everyone for the help and insights!
// apply the create table SQL script if found.
if (string.IsNullOrWhiteSpace(trackingInfo.InsertSqlScript.ToString()) == false)
{
int? updateEveryXRecords = GetProcessedEveryXTimesForApplyingInsertStatementsValue(trackingInfo.FileDetail);
trackingInfo.FileDetail = UpdateImportMetricStatus(trackingInfo.FileDetail, ImportMetricStatus.ApplyingInsertScripts, currentUserId);
int rowsProcessedUpdateEveryXCounter = 0;
int rowsProcessedTotal = 0;
await UpdateImportMetricsRowsProcessed(trackingInfo.ImportMetricId, rowsProcessedTotal, trackingInfo.FileDetail.ImportMetricStatusHistories).ConfigureAwait(false);
bool isBulkMode = trackingInfo.InsertSqlScript.Count >= 10000;
await writer.WriteLineAsync("/* Insert Scripts */").ConfigureAwait(false);
int insertCounter = 0;
int bulkCounter = 0;
int bulkProcessingAmount = 0;
int lastInsertCounter = 0;
if (isBulkMode == true)
{
bulkProcessingAmount = trackingInfo.InsertSqlScript.Count / 10;
}
await LogInsertBulkStatus(trackingInfo.FileDetail, isBulkMode, trackingInfo.InsertSqlScript.Count, bulkProcessingAmount, currentUserId).ConfigureAwait(false);
StringBuilder sbInsertBulk = new StringBuilder();
foreach (var item in trackingInfo.InsertSqlScript)
{
if (isBulkMode == false)
{
errorSqlScript = item;
using (var cmd = new SqlCommand(item, sqlConnection, sqlTransaction))
{
cmd.CommandTimeout = 800;
cmd.CommandType = CommandType.Text;
await cmd.ExecuteScalarAsync().ConfigureAwait(false);
}
currentRowLine++;
rowsProcessedUpdateEveryXCounter++;
rowsProcessedTotal++;
// append insert statement to the file
await writer.WriteLineAsync(item).ConfigureAwait(false);
// Update database with the insert statement created count to alert the user of the status.
if (updateEveryXRecords.HasValue)
{
if (updateEveryXRecords.Value == rowsProcessedUpdateEveryXCounter)
{
await UpdateImportMetricsRowsProcessed(trackingInfo.ImportMetricId, rowsProcessedTotal, trackingInfo.FileDetail.ImportMetricStatusHistories).ConfigureAwait(false);
rowsProcessedUpdateEveryXCounter = 0;
}
}
}
else
{
sbInsertBulk.AppendLine(item);
if (bulkCounter < bulkProcessingAmount)
{
errorSqlScript = string.Format(CultureInfo.CurrentCulture, "IsBulkMode is True | insertCounter = {0}", insertCounter);
bulkCounter++;
}
else
{
// display to the end user
errorSqlScript = string.Format(CultureInfo.CurrentCulture, "IsBulkMode is True | currentInsertCounter value = {0} | lastInsertCounter (insertCounter when the last bulk insert occurred): {1}", insertCounter, lastInsertCounter);
await ApplyBulkInsertStatements(sbInsertBulk, writer, sqlConnection, sqlTransaction, trackingInfo, rowsProcessedTotal).ConfigureAwait(false);
bulkCounter = 0;
sbInsertBulk.Clear();
lastInsertCounter = insertCounter;
}
rowsProcessedTotal++;
}
insertCounter++;
}
// get the remaining records after finishing the forEach insert statement
if (isBulkMode == true)
{
await ApplyBulkInsertStatements(sbInsertBulk, writer, sqlConnection, sqlTransaction, trackingInfo, rowsProcessedTotal).ConfigureAwait(false);
}
}
/// <summary>
/// Applies the bulk insert statements.
/// </summary>
/// <param name="sbInsertBulk">The sb insert bulk.</param>
/// <param name="writer">The writer.</param>
/// <param name="sqlConnection">The SQL connection.</param>
/// <param name="sqlTransaction">The SQL transaction.</param>
/// <param name="trackingInfo">The tracking information.</param>
/// <param name="rowsProcessedTotal">The rows processed total.</param>
/// <returns>Task</returns>
private async Task ApplyBulkInsertStatements(
StringBuilder sbInsertBulk,
StreamWriter writer,
SqlConnection sqlConnection,
SqlTransaction sqlTransaction,
ProcessImporterTrackingDTO trackingInfo,
int rowsProcessedTotal)
{
var bulkInsertStatements = sbInsertBulk.ToString();
using (var cmd = new SqlCommand(bulkInsertStatements, sqlConnection, sqlTransaction))
{
cmd.CommandTimeout = 800;
cmd.CommandType = CommandType.Text;
await cmd.ExecuteScalarAsync().ConfigureAwait(false);
}
// append insert statement to the file
await writer.WriteLineAsync(bulkInsertStatements).ConfigureAwait(false);
// Update database with the insert statement created count to alert the user of the status.
await UpdateImportMetricsRowsProcessed(trackingInfo.ImportMetricId, rowsProcessedTotal, trackingInfo.FileDetail.ImportMetricStatusHistories).ConfigureAwait(false);
}
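Since the plan above is to move to SqlBulkCopy eventually, here is a minimal sketch of what that could look like for this importer. It assumes the rows have already been materialized into a DataTable (building that table from the parsed file is not shown), reuses the existing sqlConnection and sqlTransaction, and uses placeholder values for the table name and batch size; it is not the code the post actually ran.

private async Task BulkCopyRowsAsync(DataTable rows, SqlConnection sqlConnection, SqlTransaction sqlTransaction)
{
    // Sketch only: one round trip per batch instead of one per INSERT statement,
    // which is what matters most against the latency of an Azure SQL database.
    using (var bulkCopy = new SqlBulkCopy(sqlConnection, SqlBulkCopyOptions.Default, sqlTransaction))
    {
        bulkCopy.DestinationTableName = rows.TableName; // placeholder: set to the real target table
        bulkCopy.BatchSize = 5000;                      // assumption: tune for the DTU tier
        bulkCopy.BulkCopyTimeout = 800;                 // matches the 800-second command timeout used above
        await bulkCopy.WriteToServerAsync(rows).ConfigureAwait(false);
    }
}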

Multithreading and TPL do not speed up execution C#

I have a program that uses SQL Server to pull information from a database, then performs a series of insertions into other tables and sends an email with the data that was retrieved.
The program takes around 3 and a half minutes to execute, and there are only 5 rows of data in the database. I am trying to reduce this time in any way I can and have tried multithreading, which seems to slow it down further, and TPL, which neither increases nor reduces the time. Does anyone know why I am not seeing performance improvements?
I am using an Intel Core i5, which I know has 2 cores, so I understand that using more than 2 cores will reduce performance. Here is how I am incorporating the use of tasks:
private static void Main(string[] args)
{
Util util = new Util(); //Util object
List<Data> dataList = new List<Data>(); //List of Data Objects
//Reads each row of data and creates Data obj for each
//Then adds each object to the list
dataList = util.getData();
var stopwatch = Stopwatch.StartNew();
var tasks = new Task[dataList.Count];
int i = 0; //Count
foreach (Data data in dataList)
{
//Perform insertions and send email with data
tasks[i++] = Task.Factory.StartNew(() => util.processData(data));
}
Task.WaitAll(tasks); //Wait for completion
Console.WriteLine("DONE: {0}", stopwatch.ElapsedMilliseconds);
}
Util Class:
class Util
{
// create and open a connection object
SqlConnection conn = new SqlConnection("**Connection String**");
//Gets all results from table, and adds object to list
public List<Data> getData()
{
conn.Open();
SqlCommand cmd = new SqlCommand("REF.GET_DATA", conn);
cmd.CommandType = CommandType.StoredProcedure;
SqlDataReader reader = cmd.ExecuteReader();
List<Data> dataList = new List<Data>();
while (reader.Read())
{
//** Take data from table and assigns them to variables
//** Removed for simplicity
Data data = new Data(/* pass variables here */);
dataList.Add(data); //Add object to datalist
}
return dataList;
}
public void processData(Data data)
{
//** Perform range of trivial operations on data
//** Removed for simplicity
byte[] results = data.RenderData(); //THIS IS WHAT TAKES A LONG TIME TO COMPLETE
data.EmailFile(results);
return;
} //END handleReport()
}
Am I using tasks in the wrong place? Should I instead be making use of parallelism in the util.processData() method? I also tried using await and async around the util.processData(data) call in the main method with no improvements.
EDIT:
Here is the renderData function:
//returns byte data of the report results, which will be attached to the email.
public byte[] RenderData(string format, string mimeType, ReportExecution.ParameterValue[] parameters)
{
ReportExecutionService res = new ReportExecutionService();
res.Credentials = System.Net.CredentialCache.DefaultCredentials;
res.Timeout = 600000;
//Prepare Render arguments
string historyID = null;
string deviceInfo = String.Empty;
string extension = String.Empty;
string encoding = String.Empty;
ReportExecution.Warning[] warnings = null;
string[] streamIDs = null;
byte[] results = null;
try
{
res.LoadReport(reportPath, historyID);
res.SetExecutionParameters(parameters, "en-gb"); //"/LSG Reporting/Repossession Sales (SAL)/SAL004 - Conveyancing Matter Listing"
results = res.Render(format, deviceInfo, out extension, out mimeType, out encoding, out warnings, out streamIDs);
}
catch (Exception ex)
{
Console.WriteLine(ex.StackTrace);
}
return results;
}
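For what it's worth, Render here is a network call to the SSRS web service rather than CPU-bound work, so the two-core argument is not necessarily the limiting factor. A minimal sketch of capping the concurrency explicitly instead of starting one task per row (the value 4 is an arbitrary assumption; util and dataList are the variables from the question):

// Sketch: bound how many processData calls run at once and let the rest queue.
// Requires using System.Threading.Tasks;
var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };
Parallel.ForEach(dataList, options, data => util.processData(data));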

Export a large CSV file in parallel to SQL server

I have a large CSV file... 10 columns, 100 million rows, roughly 6 GB in size on my hard disk.
I want to read this CSV file line by line and then load the data into a Microsoft SQL Server database using SQL bulk copy.
I have read a couple of threads on here and also on the internet. Most people suggest that reading a CSV file in parallel doesn't buy much in terms of efficiency, as the tasks/threads contend for disk access.
What I'm trying to do is read the CSV line by line and add the rows to a blocking collection of 100K rows. Once this collection is full, I spin up a new task/thread to write the data to SQL Server using the SqlBulkCopy API.
I have written this piece of code, but I'm hitting a runtime error that says "Attempt to invoke bulk copy on an object that has a pending operation." This scenario looks like something that could easily be solved using the .NET 4.0 TPL, but I'm not able to get it to work. Any suggestions on what I'm doing wrong?
public static void LoadCsvDataInParalleToSqlServer(string fileName, string connectionString, string table, DataColumn[] columns, bool truncate)
{
const int inputCollectionBufferSize = 1000000;
const int bulkInsertBufferCapacity = 100000;
const int bulkInsertConcurrency = 8;
var sqlConnection = new SqlConnection(connectionString);
sqlConnection.Open();
var sqlBulkCopy = new SqlBulkCopy(sqlConnection.ConnectionString, SqlBulkCopyOptions.TableLock)
{
EnableStreaming = true,
BatchSize = bulkInsertBufferCapacity,
DestinationTableName = table,
BulkCopyTimeout = (24 * 60 * 60),
};
BlockingCollection<DataRow> rows = new BlockingCollection<DataRow>(inputCollectionBufferSize);
DataTable dataTable = new DataTable(table);
dataTable.Columns.AddRange(columns);
Task loadTask = Task.Factory.StartNew(() =>
{
foreach (DataRow row in ReadRows(fileName, dataTable))
{
rows.Add(row);
}
rows.CompleteAdding();
});
List<Task> insertTasks = new List<Task>(bulkInsertConcurrency);
for (int i = 0; i < bulkInsertConcurrency; i++)
{
insertTasks.Add(Task.Factory.StartNew((x) =>
{
List<DataRow> bulkInsertBuffer = new List<DataRow>(bulkInsertBufferCapacity);
foreach (DataRow row in rows.GetConsumingEnumerable())
{
if (bulkInsertBuffer.Count == bulkInsertBufferCapacity)
{
SqlBulkCopy bulkCopy = x as SqlBulkCopy;
var dataRows = bulkInsertBuffer.ToArray();
bulkCopy.WriteToServer(dataRows);
Console.WriteLine("Inserted rows " + bulkInsertBuffer.Count);
bulkInsertBuffer.Clear();
}
bulkInsertBuffer.Add(row);
}
},
sqlBulkCopy));
}
loadTask.Wait();
Task.WaitAll(insertTasks.ToArray());
}
private static IEnumerable<DataRow> ReadRows(string fileName, DataTable dataTable)
{
using (var textFieldParser = new TextFieldParser(fileName))
{
textFieldParser.TextFieldType = FieldType.Delimited;
textFieldParser.Delimiters = new[] { "," };
textFieldParser.HasFieldsEnclosedInQuotes = true;
while (!textFieldParser.EndOfData)
{
string[] cols = textFieldParser.ReadFields();
DataRow row = dataTable.NewRow();
for (int i = 0; i < cols.Length; i++)
{
if (string.IsNullOrEmpty(cols[i]))
{
row[i] = DBNull.Value;
}
else
{
row[i] = cols[i];
}
}
yield return row;
}
}
}
Don't.
Parallel access may or may not give you a faster read of the file (it won't, but I'm not going to fight that battle...), but parallel writes will certainly not give you a faster bulk insert. That is because a minimally logged bulk insert (i.e. the really fast bulk insert) requires a table lock. See Prerequisites for Minimal Logging in Bulk Import:
Minimal logging requires that the target table meets the following conditions:
...
- Table locking is specified (using TABLOCK).
...
Parallel inserts, by definition, cannot obtain concurrent table locks. QED. You are barking up the wrong tree.
Stop getting your sources from random findings on the internet. Read The Data Loading Performance Guide, which is the guide to ... performant data loading.
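To connect that prerequisite back to the code in the question: SqlBulkCopyOptions.TableLock is already set, but eight concurrent writers cannot each hold the table lock. A minimal sketch of the single-writer shape this implies (batches is a hypothetical IEnumerable<DataTable> produced by the CSV-reading side; connectionString and table are the question's parameters):

// Sketch: one SqlBulkCopy writer holding the TABLOCK, fed sequentially.
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.TableLock, null))
    {
        bulkCopy.DestinationTableName = table;
        bulkCopy.BatchSize = 100000;
        bulkCopy.EnableStreaming = true;
        foreach (DataTable batch in batches) // hypothetical batches from the reader
        {
            bulkCopy.WriteToServer(batch);
        }
    }
}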
I would recommend you stop reinventing the wheel. Use SSIS; this is exactly what it is designed to handle.
http://joshclose.github.io/CsvHelper/
https://efbulkinsert.codeplex.com/
If possible for you, I suggest you read your file into a List<T> using the aforementioned csvhelper and write to your db using bulk insert as you are doing or efbulkinsert which I have used and is amazingly fast.
using CsvHelper;
public static List<T> CSVImport<T,TClassMap>(string csvData, bool hasHeaderRow, char delimiter, out string errorMsg) where TClassMap : CsvHelper.Configuration.CsvClassMap
{
errorMsg = string.Empty;
var result = Enumerable.Empty<T>();
MemoryStream memStream = new MemoryStream(Encoding.UTF8.GetBytes(csvData));
StreamReader streamReader = new StreamReader(memStream);
var csvReader = new CsvReader(streamReader);
csvReader.Configuration.RegisterClassMap<TClassMap>();
csvReader.Configuration.DetectColumnCountChanges = true;
csvReader.Configuration.IsHeaderCaseSensitive = false;
csvReader.Configuration.TrimHeaders = true;
csvReader.Configuration.Delimiter = delimiter.ToString();
csvReader.Configuration.SkipEmptyRecords = true;
List<T> items = new List<T>();
try
{
items = csvReader.GetRecords<T>().ToList();
}
catch (Exception ex)
{
while (ex != null)
{
errorMsg += ex.Message + Environment.NewLine;
foreach (var val in ex.Data.Values)
errorMsg += val.ToString() + Environment.NewLine;
ex = ex.InnerException;
}
}
return items;
}
Edit - I don't understand what you are doing with the bulk insert. You want to bulk insert the whole list or data table, not row by row.
You can create a stored procedure and pass the file location like below:
CREATE PROCEDURE [dbo].[CSVReaderTransaction]
@Filepath varchar(100) = ''
AS
-- STEP 1: Start the transaction
BEGIN TRANSACTION
-- STEP 2 &amp; 3: checking @@ERROR after each statement
EXEC ('BULK INSERT Employee FROM ''' + @Filepath
+''' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'' )')
-- Rollback the transaction if there were any errors
IF @@ERROR <> 0
BEGIN
-- Rollback the transaction
ROLLBACK
-- Raise an error and return
RAISERROR ('Error in inserting data into employee Table.', 16, 1)
RETURN
END
COMMIT TRANSACTION
You can also add a BATCHSIZE option to the WITH clause, alongside FIELDTERMINATOR and ROWTERMINATOR.
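For completeness, calling that procedure from C# might look like the sketch below (connectionString and the file path are placeholders):

using (var connection = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.CSVReaderTransaction", connection))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@Filepath", @"C:\data\employees.csv"); // placeholder path
    connection.Open();
    cmd.ExecuteNonQuery();
}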

Inserting using Linq to Sql but table is still empty

The data is inserted using LINQ to SQL; the id is generated, but the database table is empty.
Using a stored procedure there is no problem, but when inserting using LINQ the id is generated every time and the table stays empty.
The code is below:
Int32 t = 2;
using (EduDataClassesDataContext db =new EduDataClassesDataContext())
{
using (var scope = new TransactionScope())
{
db.Connection.ConnectionString = Common.EdukatingConnectionString;
UserLogin userlog = new UserLogin();
userlog.Username = userinfo.Username;
userlog.Password = userinfo.Password;
userlog.UserTypeId = t;
userlog.FullName = userinfo.FullName;
db.UserLogins.InsertOnSubmit(userlog);
db.SubmitChanges();
Int64 n = userlog.Id;
UserInformation userinfor = new UserInformation();
userinfor.FirstName = userinfo.FirstName;
userinfor.LastName = userinfo.LastName;
userinfor.MobileNum = userinfo.MobileNum;
userinfor.Email = userinfo.Email;
userinfor.Gender = userinfo.Gender;
userinfor.Address = userinfo.Address;
userinfor.UserLoginId = n;
userinfor.CreatedBy = n;
userinfor.OrganizationName = userinfo.OrganizationName;
userinfor.DateOfBirth = userinfo.DateOfBirth;
userinfor.CreatedDate = DateTime.Now;
db.UserInformations.InsertOnSubmit(userinfor);
db.SubmitChanges();
}
}
When you are using a TransactionScope, you need to call the Complete method in order to commit the transaction in the database.
using (var db = new EduDataClassesDataContext())
using (var scope = new TransactionScope())
{
...
db.UserInformations.InsertOnSubmit(userinfor);
db.SubmitChanges();
// The Complete method commits the transaction. If an exception has been thrown,
// Complete is not called and the transaction is rolled back.
scope.Complete();
}
Failing to call this method aborts the transaction, because the transaction manager interprets this as a system failure, or as equivalent to an exception thrown within the scope of the transaction.

Getting exception "duplicate transaction identifier" while using Transaction.Current.DependentClone

I am creating a mechanism to bulk-insert (import) a lot of new records into an Oracle database. I am using multiple threads and dependent transactions:
Creation of threads:
const int ThreadCount = 4;
using (TransactionScope transaction = new TransactionScope())
{
List<Thread> threads = new List<Thread>(ThreadCount);
for (int i = 0; i < ThreadCount; i++)
{
Thread thread = new Thread(WorkerThread);
thread.Start(Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete));
threads.Add(thread);
}
threads.ForEach(thread => thread.Join());
transaction.Complete();
}
The method that does the actual work:
private void WorkerThread(object transaction)
{
using (DependentTransaction dTx = (DependentTransaction)transaction)
using (TransactionScope ts = new TransactionScope(dTx))
{
// The actual work, the inserts on the database, are executed here.
}
}
During this operation, I get an exception of type System.Data.OracleClient.OracleException with the message ORA-24757: duplicate transaction identifier.
What am I doing wrong? Am I implementing the dependent transaction incorrectly? Is it incompatible with Oracle? If so, is there a workaround?
I don't know why you get this exception, but a workaround might be:
const int ThreadCount = 4;
using (var connection = new OracleConnection(MyConnectionstring))
{
connection.Open();
using (var transaction = connection.BeginTransaction())
{
List<Thread> threads = new List<Thread>(ThreadCount);
for (int i = 0; i < ThreadCount; i++)
{
Thread thread = new Thread(WorkerThread);
thread.Start(transaction);
threads.Add(thread);
}
threads.ForEach(thread => thread.Join());
transaction.Commit();
}
}
and the worker method would look something like:
private void WorkerThread(object transaction)
{
OracleTransaction trans = (OracleTransaction) transaction;
using (OracleCommand command = trans.Connection.CreateCommand())
{
command.Transaction = trans; // assign the pending transaction to the command before executing
command.CommandText = "INSERT INTO mytable (x) values (1) ";
command.ExecuteNonQuery();
}
}
