I have a ListView on a C# Windows Forms application and I want to insert all of its items into a SQL table. I can do this by iterating through the items and inserting them one by one, but I need to make sure that either all items are inserted without any errors, or, if even one item fails, the application raises that error and no items are inserted at all.
How can I do this?
Hello,
You are going to want to use a transaction that wraps all of your inserts on the same connection. I'll skip bulk inserts etc. for the moment and assume you have a single loop that puts all the records in.
using (SqlConnection connection = new SqlConnection(connectionString))
{
    try
    {
        connection.Open();
        using (SqlTransaction mysingleTransaction = connection.BeginTransaction())
        {
            // Loop through your rows of data (e.g. one INSERT statement per ListView item),
            // creating each SqlCommand on the existing transaction. If any statement fails,
            // execution drops into the catch block and nothing is committed.
            using (SqlCommand command = new SqlCommand("", connection, mysingleTransaction))
            {
                command.CommandType = System.Data.CommandType.Text;
                foreach (var commandString in sqlCommandList)
                {
                    command.CommandText = commandString;
                    command.ExecuteNonQuery();
                }
            }
            // Once all your inserts have succeeded, commit them as a single unit.
            mysingleTransaction.Commit();
        }
    }
    catch (Exception ex) // an error occurred
    {
        // Do what you want with the error; because Commit() was never called,
        // disposing the transaction rolls everything back.
    }
}
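To tie this back to the ListView in your question, here is a minimal sketch of what the loop could look like with parameterized commands, assuming a table named Items with a single Name column and a ListView called listView1 (both names are illustrative, not from your code); error handling would wrap it exactly like the try/catch above:
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        // Hypothetical table/column names; adjust to your schema.
        using (SqlCommand command = new SqlCommand(
            "INSERT INTO Items (Name) VALUES (@name)", connection, transaction))
        {
            command.Parameters.Add("@name", System.Data.SqlDbType.NVarChar, 200);
            foreach (ListViewItem item in listView1.Items)
            {
                command.Parameters["@name"].Value = item.Text;
                command.ExecuteNonQuery(); // any failure throws before Commit() is reached
            }
        }
        transaction.Commit(); // all rows were inserted, so make them permanent
    }
}
Using parameters also lets you reuse one command for every row and keeps the inserts safe from whatever text happens to be in the ListView.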
I'm trying to save the data I'm inserting into a local Firebird database.
I've tried running a SQL command containing commit; from the C# code after inserting the data, but it doesn't seem to work. The information is sent but the database isn't saving it.
This is the code I'm using to insert the data.
FbConnectionStringBuilder csb = new FbConnectionStringBuilder
{
DataSource = "localhost",
Port = 3050,
Database = #"D:\db\DBUTENTI.FDB",
UserID = "SYSDBA",
Password = "masterkey",
ServerType = FbServerType.Default
};
using (FbConnection myConn = new FbConnection(csb.ToString()))
{
if (myConn.State == ConnectionState.Closed)
{
try
{
myConn.Open();
Console.WriteLine("CONNECTION OPENED");
string Id = txt_Id.Text;
string Utente = txt_User.Text;
string Password = txt_Password.Text;
FbCommand cmd = new FbCommand("insert into utenti(id, utente, password) values(@id, @utente, @password)", myConn);
cmd.Parameters.AddWithValue("@id", Id);
cmd.Parameters.AddWithValue("@utente", Utente);
cmd.Parameters.AddWithValue("@password", Password);
cmd.ExecuteNonQuery();
myConn.Close();
Console.WriteLine("CONNECTION CLOSED");
}
catch (Exception exc)
{
Console.WriteLine(exc.Message);
}
}
}
The code runs without any errors/exceptions, but I have to manually commit in the ISQL Tool to see the changes.
Thanks to anyone who is willing to help.
If your workaround (solution) is to manually commit in ISQL, the problem is that you had an active transaction in ISQL (and one is started as soon as you start ISQL). This transaction cannot see changes from transactions committed after the transaction in ISQL started (ie: the changes in your program).
ISQL by default starts transactions with the SNAPSHOT isolation level (which is somewhat equivalent to the SQL standard REPEATABLE READ). If you want ISQL to be able to see changes made by your program, you either need to relax its isolation level to READ COMMITTED, or - as you already found out - you need to explicitly commit (so a new transaction is used).
For example, to switch ISQL to use READ COMMITTED, you can use the statement:
set transaction read committed record_version;
This will only change the transaction setting for the current session.
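If you also want the C# side to be explicit about its transaction boundaries (not strictly needed here, since your program's insert is already being committed), a minimal sketch using the FirebirdSql.Data.FirebirdClient API would be:
myConn.Open();
using (FbTransaction tx = myConn.BeginTransaction())
using (FbCommand cmd = new FbCommand(
    "insert into utenti(id, utente, password) values(@id, @utente, @password)", myConn, tx))
{
    cmd.Parameters.AddWithValue("@id", Id);
    cmd.Parameters.AddWithValue("@utente", Utente);
    cmd.Parameters.AddWithValue("@password", Password);
    cmd.ExecuteNonQuery();
    tx.Commit(); // commits this transaction; new transactions (e.g. a fresh ISQL session) will see the row
}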
For details, see
Firebird 2.5 Language Reference, Transaction Statements
ISQL, Transaction Handling
I can't fetch more than 700,000 rows from SQL Server using C# - I get an "out of memory" exception. Please help me out.
This is my code:
using (SqlConnection sourceConnection = new SqlConnection(constr))
{
sourceConnection.Open();
SqlCommand commandSourceData = new SqlCommand("select * from XXXX ", sourceConnection);
reader = commandSourceData.ExecuteReader();
}
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(constr2))
{
bulkCopy.DestinationTableName = "destinationTable";
try
{
// Write from the source to the destination.
bulkCopy.WriteToServer(reader);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
finally
{
reader.Close();
}
}
I have made a small console app based on the first given solution, but it ends with the same exception. I have also posted screenshots of the process memory before and after: before processing, and after adding the command timeout on the read side, when the RAM usage peaks.
That code should not cause an OOM exception. When you pass a DataReader to SqlBulkCopy.WriteToServer you are streaming the rows from the source to the destination. Somewhere else you are retaining stuff in memory.
SqlBulkCopy.BatchSize controls how often SQL Server commits the rows loaded at the destination, limiting the lock duration and the log file growth (if not minimally logged and in simple recovery mode). Whether you use one batch or not should have no impact on the amount of memory used either in SQL Server or in the client.
Here's a sample that copies 10M rows without growing memory:
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace SqlBulkCopyTest
{
class Program
{
static void Main(string[] args)
{
var src = "server=localhost;database=tempdb;integrated security=true";
var dest = src;
var sql = "select top (1000*1000*10) m.* from sys.messages m, sys.messages m2";
var destTable = "dest";
using (var con = new SqlConnection(dest))
{
con.Open();
var cmd = con.CreateCommand();
cmd.CommandText = $"drop table if exists {destTable}; with q as ({sql}) select * into {destTable} from q where 1=2";
cmd.ExecuteNonQuery();
}
Copy(src, dest, sql, destTable);
Console.WriteLine("Complete. Hit any key to exit.");
Console.ReadKey();
}
static void Copy(string sourceConnectionString, string destinationConnectionString, string query, string destinationTable)
{
using (SqlConnection sourceConnection = new SqlConnection(sourceConnectionString))
{
sourceConnection.Open();
SqlCommand commandSourceData = new SqlCommand(query, sourceConnection);
var reader = commandSourceData.ExecuteReader();
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destinationConnectionString))
{
bulkCopy.BulkCopyTimeout = 60 * 10;
bulkCopy.DestinationTableName = destinationTable;
bulkCopy.NotifyAfter = 10000;
bulkCopy.SqlRowsCopied += (s, a) =>
{
var mem = GC.GetTotalMemory(false);
Console.WriteLine($"{a.RowsCopied:N0} rows copied. Memory {mem:N0}");
};
// Write from the source to the destination.
bulkCopy.WriteToServer(reader);
}
}
}
}
}
Which outputs:
. . .
9,830,000 rows copied. Memory 1,756,828
9,840,000 rows copied. Memory 798,364
9,850,000 rows copied. Memory 4,042,396
9,860,000 rows copied. Memory 3,092,124
9,870,000 rows copied. Memory 2,133,660
9,880,000 rows copied. Memory 1,183,388
9,890,000 rows copied. Memory 3,673,756
9,900,000 rows copied. Memory 1,601,044
9,910,000 rows copied. Memory 3,722,772
9,920,000 rows copied. Memory 1,642,052
9,930,000 rows copied. Memory 3,763,780
9,940,000 rows copied. Memory 1,691,204
9,950,000 rows copied. Memory 3,812,932
9,960,000 rows copied. Memory 1,740,356
9,970,000 rows copied. Memory 3,862,084
9,980,000 rows copied. Memory 1,789,508
9,990,000 rows copied. Memory 3,903,044
10,000,000 rows copied. Memory 1,830,468
Complete. Hit any key to exit.
NB: Per DavidBrowne's answer, it seems I'd misunderstood how the batching of the SqlBulkCopy class works. The refactored code may still be useful to you, so I've not deleted this answer (as the code is still valid), but the answer is not to set the BatchSize as I'd believed. Please see David's answer for an explanation.
Try something like this; the key being setting the BatchSize property to limit how many rows you deal with at once:
using (SqlConnection sourceConnection = new SqlConnection(constr))
{
sourceConnection.Open();
SqlCommand commandSourceData = new SqlCommand("select * from XXXX ", sourceConnection);
using (var reader = commandSourceData.ExecuteReader()) { //add a using statement for your reader so you don't need to worry about close/dispose
//keep the connection open or we'll be trying to read from a closed connection
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(constr2))
{
bulkCopy.BatchSize = 1000; //Write a few pages at a time rather than all at once; thus lowering memory impact. See https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlbulkcopy.batchsize?view=netframework-4.7.2
bulkCopy.DestinationTableName = "destinationTable";
try
{
// Write from the source to the destination.
bulkCopy.WriteToServer(reader);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
throw; //we've caught the top level Exception rather than something specific; so once we've logged it, rethrow it for a proper handler to deal with up the call stack
}
}
}
}
Note that because the SqlBulkCopy class takes an IDataReader as an argument we don't need to download the full data set. Instead, the reader gives us a way to pull back records as required (hence us leaving the connection open after creating the reader). When we call the SqlBulkCopy's WriteToServer method, internally it has logic to loop multiple times, selecting BatchSize new records from the reader, then pushing those to the destination table before repeating / completing once the reader has sent all pending records. This works differently to, say, a DataTable, where we'd have to populate the data table with the full set of records, rather than being able to read more back as required.
One potential risk of this approach is, because we have to keep the connection open, any locks on our source are kept in place until we close our reader. Depending on the isolation level and whether other queries are trying to access the same records, this may cause blocking; whilst the data table approach would have taken a one-off copy of the data into memory and then closed the connection, avoiding any blocks. If this blocking is a concern you should look at changing the isolation level of your query, or applying hints... Exactly how you approach that would depend on the requirements though.
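For example, if dirty reads were acceptable for your scenario, the source query in the snippet above could be relaxed with a table hint, or you could opt into snapshot isolation instead; both are only sketches and the right choice depends on your requirements:
// Option 1 (hypothetical): a NOLOCK hint, if reading uncommitted data is acceptable.
SqlCommand commandSourceData = new SqlCommand("select * from XXXX with (nolock)", sourceConnection);

// Option 2 (hypothetical): snapshot isolation, which avoids blocking writers without dirty reads.
// Requires ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON on the source database.
using (var setIsolation = new SqlCommand("set transaction isolation level snapshot;", sourceConnection))
{
    setIsolation.ExecuteNonQuery();
}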
NB: In reality, instead of running the above code as is, you'd want to refactor things a bit, so the scope of each method is contained. That way you can reuse this logic to copy other queries to other tables.
You'd also want to make the batch size configurable rather than hard-coded so you can adjust to a value that gives a good balance of resource usage vs performance (which will vary based on the host's resources).
You may also want to use async methods, to allow other parts of your program to progress whilst you're waiting on data to flow from/to your databases.
Here's a slightly amended version:
public async Task<SqlDataReader> ExecuteReaderAsync(string connectionString, string query)
{
SqlConnection connection = null;
SqlCommand command = null;
try
{
connection = new SqlConnection(connectionString); //not in a using as we want to keep the connection open until our reader's finished with it.
connection.Open();
command = new SqlCommand(query, connection);
return await command.ExecuteReaderAsync(CommandBehavior.CloseConnection); //tell our reader to close the connection when done.
}
catch
{
//if we have an issue before we've returned our reader, dispose of our objects here
command?.Dispose();
connection?.Dispose();
//then rethrow the exception
throw;
}
}
public async Task CopySqlDataAsync(string sourceConnectionString, string sourceQuery, string destinationConnectionString, string destinationTableName, int batchSize)
{
using (var reader = await ExecuteReaderAsync(sourceConnectionString, sourceQuery))
await CopySqlDataAsync(reader, destinationConnectionString, destinationTableName, batchSize);
}
public async Task CopySqlDataAsync(IDataReader sourceReader, string destinationConnectionString, string destinationTableName, int batchSize)
{
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destinationConnectionString))
{
bulkCopy.BatchSize = batchSize;
bulkCopy.DestinationTableName = destinationTableName;
await bulkCopy.WriteToServerAsync(sourceReader);
}
}
public void CopySqlDataExample()
{
try
{
var constr = ""; //todo: define connection string; ideally pulling from config
var constr2 = ""; //todo: define connection string #2; ideally pulling from config
var batchSize = 1000; //todo: replace hardcoded batch size with value from config
var task = CopySqlDataAsync(constr, "select * from XXXX", constr2, "destinationTable", batchSize);
task.Wait(); //waits for the task to complete; any exception is rethrown wrapped in an AggregateException
}
catch (AggregateException es)
{
var e = es.InnerExceptions[0]; //get the wrapped exception
Console.WriteLine(e.Message);
//throw; //to rethrow AggregateException
ExceptionDispatchInfo.Capture(e).Throw(); //to rethrow the wrapped exception
}
}
Something went horribly wrong in your design if you are even trying to process 700k rows in C#; that this fails is to be expected.
If this is data retrieval for display: there is no way the user will be able to process that amount of data, and filtering down from 700k rows in the GUI is just a waste of time and bandwidth. 25-100 rows at once is about the limit. Do the filtering or pagination on the query side so you do not end up retrieving orders of magnitude more than you can actually process.
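A minimal sketch of server-side paging, assuming SQL Server 2012 or later (for OFFSET/FETCH) and illustrative table/column names:
// Hypothetical example: fetch one page of rows instead of the whole table.
int pageSize = 100;
int pageIndex = 0; // zero-based page number chosen by the UI

using (var connection = new SqlConnection(constr))
using (var command = new SqlCommand(
    @"select Id, Name
      from dbo.SomeTable
      order by Id
      offset @offset rows fetch next @pageSize rows only;", connection))
{
    command.Parameters.AddWithValue("@offset", pageIndex * pageSize);
    command.Parameters.AddWithValue("@pageSize", pageSize);
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // bind the row to the grid / ListView here
        }
    }
}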
If this is some form of bulk insert or bulk modification: do that kind of operation in SQL Server, not in your code. Retrieving the data, processing it in C# and then posting it back just adds layers of overhead, and once you add the two-way network transfer you will easily triple the time this takes.
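For instance, when the source and destination tables live on the same SQL Server instance, a single set-based statement keeps all the data on the server; this is only a sketch with illustrative object names and an illustrative filter:
// Hypothetical example: let SQL Server move the rows itself instead of
// round-tripping 700k rows through the client.
using (var connection = new SqlConnection(constr))
using (var command = new SqlCommand(
    @"insert into dbo.DestinationTable (Id, Name)
      select Id, Name
      from dbo.SourceTable
      where ModifiedDate >= @since;", connection))
{
    command.Parameters.AddWithValue("@since", new DateTime(2024, 1, 1));
    command.CommandTimeout = 600; // large set-based inserts can take a while
    connection.Open();
    int rowsCopied = command.ExecuteNonQuery();
    Console.WriteLine($"{rowsCopied:N0} rows copied on the server.");
}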
I am trying to move data from some tables in one SQL Server database to another SQL Server database, and I am planning to write a WCF REST service to do it using SqlBulkCopy. On a button click I want to copy Table1 from the source server to Table1 on the destination server, and likewise for Table2 and Table3. I am blocked on a couple of things. Is SqlBulkCopy the best option with a WCF REST service to transfer data? (I was asked not to use SSIS for this task.) Also, if there is any exception while moving data from source to destination, the destination data should be reverted, like a transaction. How do I implement this transaction functionality? Any pointer would help.
Based on the info you gave, your solution would be something like the sample code below. Pointers for SqlBulkCopy: BCopyTutorial1, BCopyTutorial2. SqlBulkCopy with transaction scope examples: TransactionBulkCopy.
My version is a simplified version of these, based on your question.
Interface:
[ServiceContract]
public interface IYourBulkCopyService {
[OperationContract]
void PerformBulkCopy();
}
Implementation:
public class YourBulkCopyService : IYourBulkCopyService {
public void PerformBulkCopy() {
string sourceCs = ConfigurationManager.AppSettings["SourceCs"];
string destinationCs = ConfigurationManager.AppSettings["DestinationCs"];
// Open a sourceConnection to the AdventureWorks database.
using (SqlConnection sourceConnection = new SqlConnection(sourceCs)) {
sourceConnection.Open();
// Get data from the source table as a SqlDataReader.
SqlCommand commandSourceData = new SqlCommand(
"SELECT * FROM YourSourceTable", sourceConnection);
SqlDataReader reader = commandSourceData.ExecuteReader();
//Set up the bulk copy object inside the transaction.
using (SqlConnection destinationConnection = new SqlConnection(destinationCs)) {
destinationConnection.Open();
using (SqlTransaction transaction = destinationConnection.BeginTransaction()) {
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(
destinationConnection, SqlBulkCopyOptions.KeepIdentity,
transaction)) {
bulkCopy.BatchSize = 10;
bulkCopy.DestinationTableName =
"YourDestinationTable";
// Write from the source to the destination.
try {
bulkCopy.WriteToServer(reader);
transaction.Commit();
} catch (Exception ex) {
// If any error, rollback
Console.WriteLine(ex.Message);
transaction.Rollback();
} finally {
reader.Close();
}
}
}
}
}
}
}
I have a Windows Forms application with .NET 4 and Entity Framework for the data layer.
I need one method to run inside a transaction, but even in simple tests I couldn't make it work.
In BLL:
public int Insert(List<Estrutura> lista)
{
    using (TransactionScope scope = new TransactionScope())
    {
        int id = this._dal.Insert(lista);
        scope.Complete();
        return id;
    }
}
In DAL:
public int Insert(List<Estrutura> lista)
{
    using (Entities ctx = new Entities(ConnectionType.Custom))
    {
        ctx.AddToEstrutura(lista);
        return ctx.SaveChanges(); //<---exception is thrown here
    }
}
"The underlying provider failed on Open."
Anyone have any ideas?
PROBLEM RESOLVED - MY SOLUTION
I solved my problem by making some changes.
In one of my DAL methods I use a bulk insert, and in the others Entity Framework.
The transaction problem was caused by the fact that the bulk insert (which runs in a SQL transaction) does not participate in a TransactionScope.
So I separated the Entity Framework code from that DAL method and used the SQL transaction there, running the remaining trivial statements with ExecuteScalar().
I believe this is not the most elegant way to do it, but it solved my transaction problem.
Here is the code of my DAL
using (SqlConnection sourceConnection = new SqlConnection(Utils.ConnectionString()))
{
sourceConnection.Open();
using (SqlTransaction transaction = sourceConnection.BeginTransaction())
{
StringBuilder query = new StringBuilder();
query.Append("INSERT INTO...");
SqlCommand command = new SqlCommand(query.ToString(), sourceConnection, transaction);
using (SqlBulkCopy bulk = new SqlBulkCopy(sourceConnection, SqlBulkCopyOptions.KeepNulls, transaction))
{
bulk.BulkCopyTimeout = int.MaxValue;
bulk.DestinationTableName = "TABLE_NAME";
bulk.WriteToServer(myDataTable);
StringBuilder updateQuery = new StringBuilder();
//another simple insert or update can be performed here
updateQuery.Append("UPDATE... ");
command.CommandText = updateQuery.ToString();
command.Parameters.Clear();
command.Parameters.AddWithValue("#SOME_PARAM", DateTime.Now);
command.ExecuteNonQuery();
transaction.Commit();
}
}
}
thanks for the help
According to the all-mighty Google, it seems that EF opens and closes the connection with each call to the database. Because of that, the TransactionScope sees multiple connections and promotes the transaction to a distributed transaction. The way to get around this is to open and close the connection manually while using it.
Here's the information on the distributed transactions issue.
Here's how to manually open and close the connection.
A small code sample:
public int Insert(List<Estrutura> lista)
{
    using (TransactionScope scope = new TransactionScope())
    {
        using (Entities ctx = new Entities(ConnectionType.Custom))
        {
            ctx.Connection.Open(); // keep one connection open so the transaction is not promoted
            int id = this._dal.Insert(ctx, lista);
            scope.Complete();
            return id;
        }
    }
}
public int Insert(Entities ctx, List<Estrutura> lista)
{
    ctx.AddToEstrutura(lista);
    return ctx.SaveChanges();
}
Instead of employing TransactionScope, it is better to employ the Unit of Work pattern when working with Entity Framework. Please refer to:
unit of work pattern
and also:
unit of work and persistence ignorance
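A minimal sketch of the idea, reusing the Entities/Estrutura types from the question (the wrapper class and method names here are illustrative, not a prescribed API):
// Hypothetical Unit of Work wrapper: every change registered through it shares
// one context, and a single SaveChanges call commits them in one database transaction.
public class UnitOfWork : IDisposable
{
    private readonly Entities _ctx = new Entities(ConnectionType.Custom);

    public void RegisterNew(Estrutura item)
    {
        _ctx.AddToEstrutura(item); // assumes the generated AddToEstrutura accepts a single entity
    }

    public int Commit()
    {
        return _ctx.SaveChanges(); // one transaction covering every pending change
    }

    public void Dispose()
    {
        _ctx.Dispose();
    }
}

// Usage: either every item in the list is saved, or none of them are.
public int Insert(List<Estrutura> lista)
{
    using (var uow = new UnitOfWork())
    {
        foreach (var item in lista)
        {
            uow.RegisterNew(item);
        }
        return uow.Commit();
    }
}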