A bit of pseudocode for you; the actual system is much more verbose:
using (var insertCmd = new SqlCommand("insert new row in database, select the ID that was inserted", conn))
using (var updateCmd = new SqlCommand("update the row with @data1 where id = @idOfInsert", conn))
{
    // Got a whole lot of inserts AND updates to process - those two have to be separated in this system.
    // I have to make sure all the data that was readied earlier in the system is inserted.
    // My MS SQL server is known to throw timeout errors, no matter how long SqlCommand.CommandTimeout is.
    for (int i = 0; i < 100000; i++)
    {
        if (i % 100 == 99) // every 100th item
        {
            // sleep for 10 seconds, so the SQL server isn't locked while I do my work
            System.Threading.Thread.Sleep(1000 * 10);
        }
        var id = insertCmd.ExecuteScalar().ToString();
        updateCmd.Parameters.Clear(); // otherwise the parameters pile up across iterations
        updateCmd.Parameters.AddWithValue("@data1", i);
        updateCmd.Parameters.AddWithValue("@idOfInsert", id);
        updateCmd.ExecuteNonQuery();
    }
}
How would I make sure that the ExecuteScalar and ExecuteNonQuery are able to recover from exceptions? I have thought of using (I'M VERY SORRY) a goto and exceptions for flow control, such as this:
Restart:
try {
    updateCmd.ExecuteNonQuery();
} catch (SqlException) {
    System.Threading.Thread.Sleep(1000 * 10); // sleep for 10 seconds
    goto Restart;
}
Is there an entirely different way to do it?
Instead of a goto you can use a loop:
bool sqlQueryHasNotSucceeded = true;
while (sqlQueryHasNotSucceeded)
{
    try
    {
        updateCmd.ExecuteNonQuery();
        sqlQueryHasNotSucceeded = false;
    }
    catch (Exception e)
    {
        LogError(e);
        System.Threading.Thread.Sleep(1000 * 10);
    }
}
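One caveat with the loop above: if the server stays unreachable, it never exits. A bounded variant is a small step further; here is a minimal sketch (the RunWithRetry name, attempt count, and delay are illustrative, not from the question):
// Requires: using System; using System.Data.SqlClient;
// Minimal bounded-retry sketch; names and limits are illustrative.
static void RunWithRetry(Action action, int maxAttempts = 3, int delayMs = 10 * 1000)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            action();
            return; // success - stop retrying
        }
        catch (SqlException) when (attempt < maxAttempts)
        {
            // Only swallow the exception while attempts remain;
            // the final failure propagates to the caller.
            System.Threading.Thread.Sleep(delayMs);
        }
    }
}
Call it as: RunWithRetry(() => updateCmd.ExecuteNonQuery());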
I'm working on an importer in our web application. With the code I currently have, when connecting to a local SQL Server it runs fine and within reason. I'm also creating a .sql script that users can download as well.
Example 1
40k records, 8 columns, takes between 1 minute 30 seconds and 2 minutes
When I move it to production on an Azure App Service, it runs VERY slowly.
Example 2
40k records, 8 columns, takes between 15 and 18 minutes
The current database is set to: Pricing tier: Standard S2: 50 DTUs
Here is the code:
using (var sqlConnection = new SqlConnection(connectionString))
{
try
{
var generatedScriptFilePathInfo = GetImportGeneratedScriptFilePath(trackingInfo.UploadTempDirectoryPath, trackingInfo.FileDetail);
using (FileStream fileStream = File.Create(generatedScriptFilePathInfo.GeneratedScriptFilePath))
{
using (StreamWriter writer = new StreamWriter(fileStream))
{
sqlConnection.Open();
sqlTransaction = sqlConnection.BeginTransaction();
await writer.WriteLineAsync("/* Insert Scripts */").ConfigureAwait(false);
foreach (var item in trackingInfo.InsertSqlScript)
{
errorSqlScript = item;
using (var cmd = new SqlCommand(item, sqlConnection, sqlTransaction))
{
cmd.CommandTimeout = 800;
cmd.CommandType = CommandType.Text;
await cmd.ExecuteScalarAsync().ConfigureAwait(false);
}
currentRowLine++;
rowsProcessedUpdateEveryXCounter++;
rowsProcessedTotal++;
// append insert statement to the file
await writer.WriteLineAsync(item).ConfigureAwait(false);
}
// write out a couple of blank lines to separate insert statements from post scripts (if there are any)
await writer.WriteLineAsync(string.Empty).ConfigureAwait(false);
await writer.WriteLineAsync(string.Empty).ConfigureAwait(false);
}
}
}
catch (OverflowException exOverFlow)
{
sqlTransaction.Rollback();
sqlTransaction.Dispose();
trackingInfo.IsSuccessful = false;
trackingInfo.ImportMetricUpdateError = new ImportMetricUpdateErrorDTO(trackingInfo.ImportMetricId)
{
ErrorLineNbr = currentRowLine + 1, // add one to go ahead and count the record we are on to sync up with the file
ErrorMessage = string.Format(CultureInfo.CurrentCulture, "{0}", ImporterHelper.ArithmeticOperationOverflowFriendlyErrorText),
ErrorSQL = errorSqlScript,
RowsProcessed = currentRowLine
};
await LogImporterError(trackingInfo.FileDetail, exOverFlow.ToString(), currentUserId).ConfigureAwait(false);
await UpdateImportAfterFailure(trackingInfo.ImportMetricId, exOverFlow.Message, currentUserId).ConfigureAwait(false);
return trackingInfo;
}
catch (Exception ex)
{
sqlTransaction.Rollback();
sqlTransaction.Dispose();
trackingInfo.IsSuccessful = false;
trackingInfo.ImportMetricUpdateError = new ImportMetricUpdateErrorDTO(trackingInfo.ImportMetricId)
{
ErrorLineNbr = currentRowLine + 1, // add one to go ahead and count the record we are on to sync up with the file
ErrorMessage = string.Format(CultureInfo.CurrentCulture, "{0}", ex.Message),
ErrorSQL = errorSqlScript,
RowsProcessed = currentRowLine
};
await LogImporterError(trackingInfo.FileDetail, ex.ToString(), currentUserId).ConfigureAwait(false);
await UpdateImportAfterFailure(trackingInfo.ImportMetricId, ex.Message, currentUserId).ConfigureAwait(false);
return trackingInfo;
}
}
Questions
Is there any way to speed this up on Azure? Or is the only way to upgrade the DTUs?
We are looking into SqlBulkCopy as well. Will this help, or will it still be slow on Azure? https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlbulkcopy?redirectedfrom=MSDN&view=dotnet-plat-ext-5.0
Desired results
Run at the same speed as when running against a local SQL Server database
For now, I updated my code to batch the insert statements based on the record count. If the record count is over 10k, it batches them by dividing the total by 10.
This helped performance BIG TIME on our Azure instance. I was able to add 40k records within 30 seconds. I also think part of the issue was how many different slots our app service on Azure uses.
We will also probably move to SqlBulkCopy later on as users need to import larger Excel files.
Thanks everyone for the help and insights!
// apply the insert SQL scripts if any were generated.
if (trackingInfo.InsertSqlScript.Count > 0)
{
int? updateEveryXRecords = GetProcessedEveryXTimesForApplyingInsertStatementsValue(trackingInfo.FileDetail);
trackingInfo.FileDetail = UpdateImportMetricStatus(trackingInfo.FileDetail, ImportMetricStatus.ApplyingInsertScripts, currentUserId);
int rowsProcessedUpdateEveryXCounter = 0;
int rowsProcessedTotal = 0;
await UpdateImportMetricsRowsProcessed(trackingInfo.ImportMetricId, rowsProcessedTotal, trackingInfo.FileDetail.ImportMetricStatusHistories).ConfigureAwait(false);
bool isBulkMode = trackingInfo.InsertSqlScript.Count >= 10000;
await writer.WriteLineAsync("/* Insert Scripts */").ConfigureAwait(false);
int insertCounter = 0;
int bulkCounter = 0;
int bulkProcessingAmount = 0;
int lastInsertCounter = 0;
if (isBulkMode == true)
{
bulkProcessingAmount = trackingInfo.InsertSqlScript.Count / 10;
}
await LogInsertBulkStatus(trackingInfo.FileDetail, isBulkMode, trackingInfo.InsertSqlScript.Count, bulkProcessingAmount, currentUserId).ConfigureAwait(false);
StringBuilder sbInsertBulk = new StringBuilder();
foreach (var item in trackingInfo.InsertSqlScript)
{
if (isBulkMode == false)
{
errorSqlScript = item;
using (var cmd = new SqlCommand(item, sqlConnection, sqlTransaction))
{
cmd.CommandTimeout = 800;
cmd.CommandType = CommandType.Text;
await cmd.ExecuteScalarAsync().ConfigureAwait(false);
}
currentRowLine++;
rowsProcessedUpdateEveryXCounter++;
rowsProcessedTotal++;
// append insert statement to the file
await writer.WriteLineAsync(item).ConfigureAwait(false);
// Update database with the insert statement created count to alert the user of the status.
if (updateEveryXRecords.HasValue)
{
if (updateEveryXRecords.Value == rowsProcessedUpdateEveryXCounter)
{
await UpdateImportMetricsRowsProcessed(trackingInfo.ImportMetricId, rowsProcessedTotal, trackingInfo.FileDetail.ImportMetricStatusHistories).ConfigureAwait(false);
rowsProcessedUpdateEveryXCounter = 0;
}
}
}
else
{
sbInsertBulk.AppendLine(item);
if (bulkCounter < bulkProcessingAmount)
{
errorSqlScript = string.Format(CultureInfo.CurrentCulture, "IsBulkMode is True | insertCounter = {0}", insertCounter);
bulkCounter++;
}
else
{
// display to the end user
errorSqlScript = string.Format(CultureInfo.CurrentCulture, "IsBulkMode is True | currentInsertCounter value = {0} | lastInsertCounter (insertCounter when the last bulk insert occurred): {1}", insertCounter, lastInsertCounter);
await ApplyBulkInsertStatements(sbInsertBulk, writer, sqlConnection, sqlTransaction, trackingInfo, rowsProcessedTotal).ConfigureAwait(false);
bulkCounter = 0;
sbInsertBulk.Clear();
lastInsertCounter = insertCounter;
}
rowsProcessedTotal++;
}
insertCounter++;
}
// get the remaining records after finishing the forEach insert statement
if (isBulkMode == true)
{
await ApplyBulkInsertStatements(sbInsertBulk, writer, sqlConnection, sqlTransaction, trackingInfo, rowsProcessedTotal).ConfigureAwait(false);
}
}
/// <summary>
/// Applies the bulk insert statements.
/// </summary>
/// <param name="sbInsertBulk">The sb insert bulk.</param>
/// <param name="wrtier">The wrtier.</param>
/// <param name="sqlConnection">The SQL connection.</param>
/// <param name="sqlTransaction">The SQL transaction.</param>
/// <param name="trackingInfo">The tracking information.</param>
/// <param name="rowsProcessedTotal">The rows processed total.</param>
/// <returns>Task</returns>
private async Task ApplyBulkInsertStatements(
StringBuilder sbInsertBulk,
StreamWriter writer,
SqlConnection sqlConnection,
SqlTransaction sqlTransaction,
ProcessImporterTrackingDTO trackingInfo,
int rowsProcessedTotal)
{
var bulkInsertStatements = sbInsertBulk.ToString();
using (var cmd = new SqlCommand(bulkInsertStatements, sqlConnection, sqlTransaction))
{
cmd.CommandTimeout = 800;
cmd.CommandType = CommandType.Text;
await cmd.ExecuteScalarAsync().ConfigureAwait(false);
}
// append insert statement to the file
await writer.WriteLineAsync(bulkInsertStatements).ConfigureAwait(false);
// Update database with the insert statement created count to alert the user of the status.
await UpdateImportMetricsRowsProcessed(trackingInfo.ImportMetricId, rowsProcessedTotal, trackingInfo.FileDetail.ImportMetricStatusHistories).ConfigureAwait(false);
}
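Since SqlBulkCopy is on the roadmap anyway, here is a minimal sketch of what the copy itself could look like (a sketch only: the dbo.ImportTarget table name and the BulkInsertAsync helper are placeholders, and the DataTable would be built from the parsed Excel rows rather than from INSERT strings):
// Requires: using System.Data; using System.Data.SqlClient; using System.Threading.Tasks;
private static async Task BulkInsertAsync(string connectionString, DataTable rows)
{
    using (var connection = new SqlConnection(connectionString))
    {
        await connection.OpenAsync().ConfigureAwait(false);
        using (var bulkCopy = new SqlBulkCopy(connection))
        {
            bulkCopy.DestinationTableName = "dbo.ImportTarget"; // placeholder
            bulkCopy.BatchSize = 5000;    // rows per round trip
            bulkCopy.BulkCopyTimeout = 0; // 0 means no timeout for the copy
            await bulkCopy.WriteToServerAsync(rows).ConfigureAwait(false);
        }
    }
}
One bulk copy replaces tens of thousands of individual INSERT round trips, which is usually the dominant cost against Azure SQL.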
I have a discord bot that gets its data from a SQLite database. I am using the System.Data.SQLite namespace.
My problem is this code part:
m_dbConnection.Open();
SQLiteDataReader sqlite_datareader;
SQLiteCommand sqlite_cmd;
sqlite_cmd = m_dbConnection.CreateCommand();
sqlite_cmd.CommandText = SQLCommand; //SQLCommand is a command parameter
sqlite_datareader = sqlite_cmd.ExecuteReader();
while (sqlite_datareader.Read())
{
int i = 0;
while (true)
{
try
{
string temp = "";
try
{
temp = sqlite_datareader.GetString(i).ToString();
}
catch (Exception e)
{
Console.WriteLine(e.Message);
try
{
temp = sqlite_datareader.GetInt32(i).ToString();
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
break;
}
}
output.Add(temp);
i++;
}
catch (Exception)
{
break;
}
}
}
For this example the variable SQLCommand is "SELECT Money FROM Users WHERE UserId = 12345 AND ServerID = 54321".
When I execute this command in a SQL editor, I get the value "10", so the command works. But when I pass this command to my method to fetch that same data, I get the error Specified cast is not valid. at the line temp = sqlite_datareader.GetString(i).ToString();.
The value of i is 0, to get the very first row that the SQL command selected. I don't know why this happens; every other SQLite command works and gives me what I want. Why isn't this one working too?
Try using it this way
while (sqlite_datareader.Read())
{
    for (int i = 0; i < sqlite_datareader.FieldCount; i++)
    {
        var colName = sqlite_datareader.GetName(i);
        var colValue = sqlite_datareader[i];
    }
}
Please note that the purpose of
while (sqlite_datareader.Read()) {..}
is to fetch all the rows, one per iteration.
Therefore I would like to point out the problems in your code:
1) while (true) {...}
is an infinite loop. Of course, in this scenario it would eventually hit the break and quit, but this is still not good practice.
2) int i = 0;
You declared this and increase it by one inside the while loop. The problem here is that i indexes columns, not rows: say you have 100 rows and 10 columns; i would be increased towards 99, but since there are only 10 columns, trying to read an invalid column index gives you an error.
Wrapping your code in try/catch (or nested try/catch) statements makes the error go away, but it's a nasty solution.
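To avoid exceptions for flow control entirely, you can ask the reader what it has. A minimal sketch against the same sqlite_datareader (assuming output is the List<string> from the question):
while (sqlite_datareader.Read())
{
    // FieldCount is the number of columns in this result set, so i can
    // never run past the end of the row.
    for (int i = 0; i < sqlite_datareader.FieldCount; i++)
    {
        // IsDBNull guards against NULLs; GetValue returns the column's
        // native type (string, long, ...), so ToString() works for all of them.
        output.Add(sqlite_datareader.IsDBNull(i)
            ? string.Empty
            : sqlite_datareader.GetValue(i).ToString());
    }
}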
Occasionally I am getting a SqlException on the line marked below.
It occurs when we have network problems and the server cannot be found.
public void markComplete(int aTKey)
{
SqlCommand mySqlCommand = null;
SqlConnection myConnect = null;
mySqlCommand = new SqlCommand();
try
{
myConnect = new SqlConnection(ConfigurationManager.ConnectionStrings["foo"].ConnectionString);
mySqlCommand.Connection = myConnect;
mySqlCommand.Connection.Open(); //<<<<<<<< EXCEPTION HERE <<<<<<<<<<<<<<<<<<
mySqlCommand.CommandType = CommandType.Text;
mySqlCommand.CommandText =
" UPDATE dbo.tb_bar " +
" SET LastUpdateTime = CONVERT(Time,GETDATE()) " +
" WHERE AKey = " + aTKey;
mySqlCommand.ExecuteNonQuery();
mySqlCommand.Connection.Close();
}
finally
{
if(mySqlCommand != null)
{
mySqlCommand.Dispose();
}
}
}
I have two questions, so maybe this should be split into 2 SO questions:
Is my finally statement sufficiently defensive?
Rather than just failing, how involved would it be to amend the method so that instead of crashing it waits for, say, 10 minutes and then tries to open the connection again - trying a maximum of 3 times, and if there is still no valid connection it moves on?
Use parameters. Do not concatenate strings to create SQL statements. Read about SQL Injection.
Instead of using try...finally you can simplify your code with the using statement. You need to dispose of all instances that implement the IDisposable interface, so you also need to use the using statement with the SqlConnection. In fact, that is even more important than disposing the SqlCommand, since it allows ADO.Net to use connection pooling.
You don't need to do the entire procedure again and again, just the connection.Open and the ExecuteNonQuery.
Using the constructor that accepts a string and an SqlConnection saves you the need to set them via properties.
You don't need to specify CommandType.Text - it's the default value.
Here is a basic implementation of retry logic with some improvements to your code:
public void markComplete(int aTKey)
{
var sql = " UPDATE dbo.tb_bar " +
" SET LastUpdateTime = CONVERT(Time,GETDATE()) " +
" WHERE AKey = #aTKey";
using(var myConnect = new SqlConnection(ConfigurationManager.ConnectionStrings["foo"].ConnectionString))
{
using(var mySqlCommand = new SqlCommand(sql, myConnect))
{
mySqlCommand.Parameters.Add("#aTKey", SqlDbType.Int).Value = aTKey;
var success = false;
var attempts = 0;
do
{
attempts++;
try
{
mySqlCommand.Connection.Open();
mySqlCommand.ExecuteNonQuery();
success = true;
}
catch(Exception ex)
{
// Log exception here
System.Threading.Thread.Sleep(1000);
}
} while (!success && attempts < 3); // stop on success or after 3 attempts
}
}
}
Update:
Well, I've had some free time and I remember writing a general retry method a few years back. Couldn't find it but here is the general idea:
static void RetryOnException(Action action, int numberOfRetries, int timeoutBetweenRetries)
{
var success = false;
var exceptions = new List<Exception>();
var currentAttempt = 0;
do
{
currentAttempt++;
try
{
action();
success = true;
}
catch(Exception ex)
{
exceptions.Add(ex);
System.Threading.Thread.Sleep(timeoutBetweenRetries);
}
} while (!success && currentAttempt < numberOfRetries);
// Note: The Exception will only be thrown in case all retries fails.
// If the action completes without throwing an exception at any point, all exceptions before will be swallowed by this method. You might want to log them for future analysis.
if(!success && exceptions.Count > 0)
{
throw new AggregateException($"Failed all {numberOfRetries} retries.", exceptions);
}
}
Using this method you can retry all sorts of things, while keeping your methods simpler and cleaner.
So here is how it should be used:
public void markComplete(int aTKey)
{
var sql = " UPDATE dbo.tb_bar " +
" SET LastUpdateTime = CONVERT(Time,GETDATE()) " +
" WHERE AKey = #aTKey";
using(var myConnect = new SqlConnection(ConfigurationManager.ConnectionStrings["foo"].ConnectionString))
{
using(var mySqlCommand = new SqlCommand(sql, myConnect))
{
mySqlCommand.Parameters.Add("#aTKey", SqlDbType.Int).Value = aTKey;
// You can do this inside a `try...catch` block or let the AggregateException propagate to the calling method
RetryOnException(
() => {
mySqlCommand.Connection.Open();
mySqlCommand.ExecuteNonQuery();
}, 3, 1000);
}
}
}
I have the following code, which I'm trying to use to test whether it's possible to have transactions and a NotifyAfter property used to raise an event (I have already tried substituting the event for one I create/raise myself, but it only gets raised after all the rows have been copied). The following link suggests that it's not possible:
MSDN
Has anyone had any experience with this? Thanks
using (SqlConnection connection = new SqlConnection(connectionString))
{
connection.Open();
{
try
{
using (SqlBulkCopy copy = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity |SqlBulkCopyOptions.UseInternalTransaction))
{
//Column mapping for the required columns.
for (int count = 0; count < numberOfColumns; count++)
{
copy.ColumnMappings.Add(count, count);
}
//SQLBulkCopy parameters.
copy.DestinationTableName = dataTableName;
copy.BatchSize = batchSize;
copy.SqlRowsCopied += new SqlRowsCopiedEventHandler(OnSqlRowsCopied);
copy.NotifyAfter = 5;
copy.WriteToServer(fullDataTable);
}
}
//Error(s) occurred while trying to commit the transaction.
catch (InvalidOperationException transactionEx)
{
//uploadTransaction.Rollback();
status = "The current transaction has been rolled back due to an error. \n\r" + transactionEx.Message;
MessageBox.Show(status, "Error Message:");
alreadyCaught = true;
throw;
}
}
I would presume that, because of the transaction, the processing only occurs once the transaction is committed, hence you won't get the event raised until after that.
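If you need both a transaction and mid-copy notifications, one thing worth trying (a sketch, not verified against your setup) is to manage the transaction yourself via the SqlBulkCopy(SqlConnection, SqlBulkCopyOptions, SqlTransaction) overload instead of SqlBulkCopyOptions.UseInternalTransaction:
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    using (SqlBulkCopy copy = new SqlBulkCopy(connection, SqlBulkCopyOptions.KeepIdentity, transaction))
    {
        copy.DestinationTableName = dataTableName;
        copy.BatchSize = batchSize;
        copy.NotifyAfter = 5;
        copy.SqlRowsCopied += (sender, e) =>
            Console.WriteLine(e.RowsCopied + " rows copied so far");
        try
        {
            copy.WriteToServer(fullDataTable);
            transaction.Commit(); // all batches commit together
        }
        catch
        {
            transaction.Rollback();
            throw;
        }
    }
}
Whether SqlRowsCopied actually fires while the external transaction is still open is exactly the behaviour you would want to verify first.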
I've written a small console app that I point to a folder containing DBF/FoxPro files.
It then creates a table in SQL based on each DBF table, then does a bulk copy to insert the data into SQL. It works quite well for the most part, except for a few snags:
1) Some of the FoxPro tables contain 5000000+ records and the connection expires before the insert completes.
Here is my connection string:
<add name="SQL" connectionString="data source=source_source;persist security info=True;user id=DBFToSQL;password=DBFToSQL;Connection Timeout=20000;Max Pool Size=200" providerName="System.Data.SqlClient" />
Error message:
"Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding."
CODE:
using (SqlConnection SQLConn = new SqlConnection(SQLString))
using (OleDbConnection FPConn = new OleDbConnection(FoxString))
{
ServerConnection srvConn = new Microsoft.SqlServer.Management.Common.ServerConnection(SQLConn);
try
{
FPConn.Open();
string dataString = String.Format("Select * from {0}", tableName);
using (OleDbCommand Command = new OleDbCommand(dataString, FPConn))
using (OleDbDataReader Reader = Command.ExecuteReader(CommandBehavior.SequentialAccess))
{
tbl = new Table(database, tableName, "schema");
for (int i = 0; i < Reader.FieldCount; i++)
{
col = new Column(tbl, Reader.GetName(i), ConvertTypeToDataType(Reader.GetFieldType(i)));
col.Nullable = true;
tbl.Columns.Add(col);
}
tbl.Create();
BulkCopy(Reader, tableName);
}
}
catch (Exception ex)
{
// LogText(ex, @"C:\LoadTable_Errors.txt", tableName);
throw; // rethrow without resetting the stack trace
}
finally
{
SQLConn.Close();
srvConn.Disconnect();
}
}
private DataType ConvertTypeToDataType(Type type)
{
switch (type.ToString())
{
case "System.Decimal":
return DataType.Decimal(18, 38);
case "System.String":
return DataType.NVarCharMax;
case "System.Int32":
return DataType.Int;
case "System.DateTime":
return DataType.DateTime;
case "System.Boolean":
return DataType.Bit;
default:
throw new NotImplementedException("ConvertTypeToDataType Not implemented for type : " + type.ToString());
}
}
private void BulkCopy(OleDbDataReader reader, string tableName)
{
using (SqlConnection SQLConn = new SqlConnection(SQLString))
{
SQLConn.Open();
SqlBulkCopy bulkCopy = new SqlBulkCopy(SQLConn);
bulkCopy.DestinationTableName = "schema." + tableName;
try
{
bulkCopy.WriteToServer(reader);
}
catch (Exception ex)
{
//LogText(ex, @"C:\BulkCopy_Errors.txt", tableName);
}
finally
{
SQLConn.Close();
reader.Close();
}
}
}
My 2nd & 3rd errors are the following:
I understand what the issues are, but how to rectify them i'm not so sure
2) "The provider could not determine the Decimal value. For example, the row was just created, the default for the Decimal column was not available, and the consumer had not yet set a new Decimal value."
3) SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.
I found a result on Google that indicated what the issue is: [A]... and a possible workaround [B] (but I'd like to keep my decimal values as decimals and my dates as dates, as I'll be doing further calculations against the data).
What I want to do as a solution:
1) Either increase the connection time (but I don't think I can increase it any more than I have), or alternatively, is it possible to split the OleDbDataReader's results and do an incremental bulk insert?
2) Is it possible to have the bulk copy ignore rows with errors, or log the records that do error out to a CSV file or something to that effect?
So where you do the "for" statement, I would probably break it up to take so many at a time:
int i = 0;
int MaxCount = 1000;
while (i < Reader.FieldCount)
{
var tbl = new Table(database, tableName, "schema");
for (int j = i; j < MaxCount; j++)
{
col = new Column(tbl, Reader.GetName(j), ConvertTypeToDataType(Reader.GetFieldType(j)));
col.Nullable = true;
tbl.Columns.Add(col);
i++;
}
tbl.Create();
BulkCopy(Reader, tableName);
}
So, "i" keeps track of the overall count, "j" keeps track of the incremental count (ie your max at one time count) and when you have created your 'batch', you create the table and Bulk Copy it.
Does that look like what you would expect?
Cheers,
Chris.
This is my current attempt at the bulk copy method. It works for about 90% of the tables, but I get an OutOfMemory exception with the bigger tables... I'd like to split the reader's data into smaller sections, without having to pass it into a DataTable and store it in memory first (which is the cause of the OutOfMemory exception on the bigger result sets).
UPDATE
I modified the code below to match how it looks in my solution. It ain't pretty, but it works. I'll definitely do some refactoring and update my answer again.
private void BulkCopy(OleDbDataReader reader, string tableName, Table table)
{
Console.WriteLine(tableName + " BulkCopy Started.");
try
{
DataTable tbl = new DataTable();
List<Type> typeList = new List<Type>();
foreach (Column col in table.Columns)
{
tbl.Columns.Add(col.Name, ConvertDataTypeToType(col.DataType));
typeList.Add(ConvertDataTypeToType(col.DataType));
}
int batch = 1;
int counter = 0;
DataRow tblRow = tbl.NewRow();
while (reader.Read())
{
counter++;
int colcounter = 0;
foreach (Column col in table.Columns)
{
try
{
tblRow[colcounter] = reader[colcounter];
}
catch (Exception)
{
tblRow[colcounter] = GetDefault(typeList[colcounter]); // default for this column's type, not the first column's
}
colcounter++;
}
tbl.LoadDataRow(tblRow.ItemArray, true);
if (counter == BulkInsertIncrement)
{
Console.WriteLine(tableName + " :: Batch >> " + batch);
counter = PerformInsert(tableName, tbl, batch);
batch++;
}
}
if (counter > 0)
{
Console.WriteLine(tableName + " :: Batch >> " + batch);
PerformInsert(tableName, tbl, counter);
}
tbl = null;
Console.WriteLine("BulkCopy Success!");
}
catch (Exception ex)
{
Console.WriteLine("BulkCopy Fail!");
SharedLogger.Write(ex, @"C:\BulkCopy_Errors.txt", tableName);
Console.WriteLine(ex.Message);
}
finally
{
reader.Close();
reader.Dispose();
}
Console.WriteLine(tableName + " BulkCopy Ended.");
Console.WriteLine("*****");
Console.WriteLine("");
}