I want to perform a bulk insert from CSV to a MySQL database using C#; I'm using MySql.Data.MySqlClient for the connection. The CSV columns map to multiple tables and the inserts depend on primary key values, for example:
CSV (columns & values):
emp_name, address, country
---------------------------
jhon, new york, usa
amanda, san diago, usa
Brad, london, uk

DB Schema (CountryTbl) & values:
country_Id, Country_Name
1, usa
2, UK
3, Germany

DB Schema (EmployeeTbl):
Emp_Id (AutoIncrement), Emp_Name

DB Schema (AddressTbl):
Address_Id (AutoIncrement), Emp_Id, Address, countryid
Problem statement:
1. Read the data from the CSV and get the CountryId from CountryTbl for the respective employee.
2. Insert the data into EmployeeTbl and AddressTbl with that CountryId.
Approach 1
Follow the problem-statement steps above, but that will be a performance hit (row-by-row read and insert).
Approach 2
Use "Bulk Insert" option "MySqlBulkLoader", but that needs csv files to read, and looks that this option is not going to work for me.
Approach 3
Use a stored procedure for the upload. But I don't want to use stored procedures.
Please suggest whether there is any other option by which I can do a bulk upload, or suggest another approach.
Unless you have hundreds of thousands of rows to upload, bulk loading (your approach 2) probably is not worth the extra programming and debugging time it will cost. That's my opinion, for what it's worth (2x what you paid for it :)
Approaches 1 and 3 are more or less the same. The difference lies in whether you issue the queries from C# or from your stored procedure. You still have to work out the queries. So let's deal with 1.
The solutions to these sorts of problems depend on make and model of RDBMS. If you decide you want to migrate to SQL Server, you'll have to change this stuff.
Here's what you do. For each row of your employee csv ...
... Put a row into the employee tbl
INSERT INTO EmployeeTbl (Emp_Name) VALUES (@emp_name);
Notice this query uses the INSERT ... VALUES form of the insert query. When this query (or any insert query) runs, it drops the autoincremented Emp_Id value where a subsequent invocation of LAST_INSERT_ID() can get it.
... Put a row into the address table
INSERT INTO AddressTbl (Emp_Id, Address, countryid)
SELECT LAST_INSERT_ID() AS Emp_Id,
       @address          AS Address,
       country_Id        AS countryid
FROM CountryTbl
WHERE Country_Name = @country;
Notice this second INSERT uses the INSERT ... SELECT form of the insert query. The SELECT part of all this generates one row of data with the column values to insert.
It uses LAST_INSERT_ID() to get Emp_Id,
it uses a value provided by your C# program for @address, and
it looks up the countryid value from your pre-existing CountryTbl.
Notice, of course, that you must use the C# Parameters.AddWithValue() method to set the values of the @ parameters in these queries. Those values come from your CSV file.
Finally, wrap each thousand rows or so of your csv in a transaction, by preceding their INSERT statements with a START TRANSACTION; statement and ending them with a COMMIT; statement. That will get you a performance improvement, and if something goes wrong the entire transaction will get rolled back so you can start over.
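For reference, here is a minimal C# sketch of that loop using MySql.Data.MySqlClient, batched into transactions as described; the connectionString, the csvRows collection, and its property names are assumptions, not part of the original answer.

using MySql.Data.MySqlClient;

// A rough sketch only: parameterized row-by-row inserts, committed in batches of
// ~1000 rows. connectionString, csvRows and its property names are assumptions.
using (var conn = new MySqlConnection(connectionString))
{
    conn.Open();
    MySqlTransaction tx = conn.BeginTransaction();
    int rowsInBatch = 0;

    foreach (var csvRow in csvRows)   // csvRow assumed to expose EmpName, Address, Country
    {
        using (var emp = new MySqlCommand(
            "INSERT INTO EmployeeTbl (Emp_Name) VALUES (@emp_name);", conn, tx))
        {
            emp.Parameters.AddWithValue("@emp_name", csvRow.EmpName);
            emp.ExecuteNonQuery();    // LAST_INSERT_ID() now holds the new Emp_Id
        }

        using (var addr = new MySqlCommand(
            "INSERT INTO AddressTbl (Emp_Id, Address, countryid) " +
            "SELECT LAST_INSERT_ID(), @address, country_Id " +
            "FROM CountryTbl WHERE Country_Name = @country;", conn, tx))
        {
            addr.Parameters.AddWithValue("@address", csvRow.Address);
            addr.Parameters.AddWithValue("@country", csvRow.Country);
            addr.ExecuteNonQuery();
        }

        if (++rowsInBatch >= 1000)    // commit every ~1000 rows, as suggested above
        {
            tx.Commit();
            tx = conn.BeginTransaction();
            rowsInBatch = 0;
        }
    }

    tx.Commit();                      // commit the final partial batch
}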
Related
I was given a task to insert over 1000 rows with 4 columns. The table in question does not have a PK or FK. Let's say it contains the columns ID, CustomerNo, and Description. The records that need to be inserted can have the same CustomerNo and Description values.
I read about importing data to a temporary table, comparing it with the real table, removing duplicates, and moving new records to the real table.
I also could have 1000 queries that check if such a record already exists and insert data if it does not. But I'm too ashamed to try that out for obvious reasons.
I'm not expecting any specific code, because I did not give any specific details. What I'm hoping for is some pseudocode or general advice for completing such tasks. I can't wait to give some upvotes!
So the idea is, you don't want to insert an entry if there's already an entry with the same ID?
If so, after you import your data into a temporary table, you can accomplish what you're looking for in the where clause of a select statement:
insert into table
select ID, CustomerNo, Description from #data_source
where (#data_source.ID not in (select table.ID from table))
I would suggest that you load the data into a temp table or a table variable. Then you can do a SELECT INTO using the DISTINCT keyword, which will remove the duplicated records.
You will always need to read the target table, unless you bulk load the target table into a temp table (at which point you will have two temp tables), compare both, eliminate duplicates, and then insert into the target table. Even that is not entirely accurate, because a new insert can land in the target table while you are doing this.
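To make the temp-table idea concrete, here is a hedged C# sketch: bulk copy the incoming rows into a temp table on an open connection, then insert only the rows whose ID is not already present. The #data_source column types, the TargetTable name, and the sourceDataTable variable are placeholders for your own objects.

using System.Data;
using System.Data.SqlClient;

// Sketch: stage the incoming rows in a temp table, then insert only the new IDs.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    using (var create = new SqlCommand(
        "CREATE TABLE #data_source (ID int, CustomerNo int, Description nvarchar(255));", conn))
        create.ExecuteNonQuery();

    using (var bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "#data_source";
        bulk.WriteToServer(sourceDataTable);   // a DataTable with matching columns
    }

    using (var insert = new SqlCommand(
        "INSERT INTO TargetTable (ID, CustomerNo, Description) " +
        "SELECT DISTINCT s.ID, s.CustomerNo, s.Description " +
        "FROM #data_source s " +
        "WHERE s.ID NOT IN (SELECT t.ID FROM TargetTable t);", conn))
        insert.ExecuteNonQuery();
}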
I have a SQL script to migrate data from an old table to a new one, run with FluentMigrator's Execute method.
This is my script:
INSERT INTO [Demo].[C]([key], [value], [tempID]) SELECT [name], [value], [userID] FROM [Demo].[A]
INSERT INTO [Demo].[B]([parentID], [propertyID]) SELECT [tempID], [id] FROM [Demo].[C] WHERE [tempID] IS NOT NULL
UPDATE [Demo].[C] SET [tempID] = NULL
The userProperty table has about 11 million rows, and:
in step one, I have to insert into some columns of the C table (11 million rows);
in step two, I have to insert data from the C table into the B table (11 million rows);
in step three, I have to update the C table (11 million rows).
That's 11 million rows per step, but I'm getting this error:
The error was The transaction log for database 'test' is full
due to 'ACTIVE_TRANSACTION'.
I want to find the fastest way to do this, because it is a one-time script.
Your transaction log file is full and there is no disk space left for it to grow (if the auto-grow option is specified).
Execute the query below to get more details about your transaction log file settings:
SELECT [type_desc]
,[name]
,[physical_name]
,[size]
,[max_size]
,[growth]
FROM [sys].[database_files];
There are different possible solutions to your problem. For example, get more disk space and enable the auto-grow option, execute the steps separately, etc.
A few things to check for sure:
is your database under the FULL or SIMPLE recovery model;
if it is using the FULL recovery model, check whether regular backups of the transaction log are being made (if they are not, the log will grow as much as it can and eat your disk space).
If your database does not need to be under the FULL recovery model, you can switch it to SIMPLE.
If you are running these in the same query window, SSMS is running it all in one implicit transaction. Try putting each of these in a separate explicit transaction. Also, clear your T-log.
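As a rough illustration of the "separate explicit transactions" suggestion, here is a C# sketch that runs each of the three statements in its own transaction so log space can be reused between steps; the SqlConnection handling is an assumption (with FluentMigrator you would instead split the statements across separate migrations or Execute calls).

using System.Data.SqlClient;

// Sketch: commit each migration step on its own instead of one giant transaction.
var statements = new[]
{
    "INSERT INTO [Demo].[C]([key], [value], [tempID]) SELECT [name], [value], [userID] FROM [Demo].[A]",
    "INSERT INTO [Demo].[B]([parentID], [propertyID]) SELECT [tempID], [id] FROM [Demo].[C] WHERE [tempID] IS NOT NULL",
    "UPDATE [Demo].[C] SET [tempID] = NULL"
};

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    foreach (var sql in statements)
    {
        using (var transaction = connection.BeginTransaction())
        using (var command = new SqlCommand(sql, connection, transaction))
        {
            command.CommandTimeout = 0;   // large statements can run for a while
            command.ExecuteNonQuery();
            transaction.Commit();         // commit each step separately
        }
    }
}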
I have some data that needs to be imported into SQL Server.
I have the following fields:
ID Param1 Param2
The way it needs to go into the table is not that straightforward.
It needs to go in as
ID Param1 5655 DateTime
ID Param2 5555 DateTime
As such, it needs to insert 2 records into the table for each row from the input file. I'm wondering what the best way to do this in SQL Server is in terms of importing the file. I can do a BULK INSERT, but the columns need to match exactly, and in my case they do not.
I am also using .NET and C#. I'm wondering whether importing the file into a DataTable, etc., and then using a foreach loop to further manipulate it may be the best approach.
The question was a little bit unclear to me, but if I'm understanding you correctly, there are many ways of doing it. One simple way is to use a temp table:
create a temp table:
CREATE TABLE #TBL (ID int, param1 datetime, param2 datetime);
bulk insert from file into temp table
BULK INSERT #TBL FROM 'D:\data.txt' WITH (FIELDTERMINATOR = ' ');
Now you can insert into the permanent table using a specific query on the temp table (assuming your table structure is (ID, param)):
INSERT INTO TABLE_NAME (id, PARAM)
SELECT DISTINCT T.ID, T.PARAM1
FROM #TBL T
UNION
SELECT DISTINCT T.ID, T.PARAM2
FROM #TBL T
Since you are using C#, you can make use of Table-Valued Parameters to stream in the data in any way you like. You can read a row from a file, split it apart, and pass in 2 rows instead of mapping columns 1 to 1. I detailed a similar approach in this answer:
How can I insert 10 million records in the shortest time possible?
The main difference here is that, in the while loop inside of the GetFileContents() method, you would need to call yield return twice, once for each piece.
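As a hedged sketch of that idea (not the linked answer's actual code), an iterator along these lines could read each input line and yield two SqlDataRecord rows per line; the TVP column names and types here are assumptions about your target table.

using System;
using System.Collections.Generic;
using System.Data;
using System.IO;
using Microsoft.SqlServer.Server;

// Hypothetical iterator in the spirit of the linked answer's GetFileContents():
// each input line ("ID Param1 Param2") is split and yielded as two TVP rows,
// roughly (ID, ParamName, ParamValue, CreatedOn). Names and types are assumptions.
static IEnumerable<SqlDataRecord> GetFileContents(string path)
{
    var record = new SqlDataRecord(
        new SqlMetaData("ID", SqlDbType.Int),
        new SqlMetaData("ParamName", SqlDbType.NVarChar, 50),
        new SqlMetaData("ParamValue", SqlDbType.Int),
        new SqlMetaData("CreatedOn", SqlDbType.DateTime));

    foreach (string line in File.ReadLines(path))
    {
        string[] parts = line.Split(' ');   // ID, Param1 value, Param2 value

        record.SetInt32(0, int.Parse(parts[0]));
        record.SetString(1, "Param1");
        record.SetInt32(2, int.Parse(parts[1]));
        record.SetDateTime(3, DateTime.Now);
        yield return record;                // first row for this input line

        record.SetString(1, "Param2");
        record.SetInt32(2, int.Parse(parts[2]));
        yield return record;                // second row for the same input line
    }
}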
I am doing a conversion with SqlBulkCopy. I currently have an IList collection of classes which, basically, I can convert to a DataTable for use with SqlBulkCopy.
Problem is that I can have 3 records with the same ID.
Let me explain .. here are 3 records
ID Name Address
1 Scott London
1 Mark London
1 Manchester
Basically I need to insert them sequentially: I insert record 1 if it doesn't exist; then, for the next record, if the ID already exists I need to update the record rather than insert a new one (notice the ID is still 1), so in the case of the second record I replace both the Name and Address columns for ID 1.
Finally, on the 3rd record, you'll notice that Name doesn't exist, but it is ID 1 and has an address of Manchester, so I need to update the record WITHOUT changing Name, only updating the address to Manchester. Hence, after the 3rd record, ID 1 would be:
ID Name Address
1 Mark Manchester
Any ideas how I can do this? I am at a loss.
Thanks.
EDIT
OK, a little update. I will manage and merge my records before using SqlBulkCopy. Is it possible to get a list of what succeeded and what failed, or is it a case of all or nothing? I presume there is no other alternative to SqlBulkCopy but to do updates?
It would be ideal to be able to insert everything and have the rows that failed put into a temp table, so I only need to worry about correcting the ones in my failed table, as I know the others are all OK.
Since you need to process that data into a DataTable anyway (unless you are writing a custom IDataReader), you should merge the records before giving them to SqlBulkCopy; for example (in pseudo code):
/* create an empty data-table */
foreach (row in list) {
    var row = /* try to get existing row from data-table based on id */;
    if (row == null) { row = /* create and append row to data-table */; }
    else { /* merge non-trivial properties into existing row */ }
}
Then pass the DataTable to SqlBulkCopy once you have the desired data.
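For illustration, a fuller sketch of that merge, assuming a simple source item with Id, Name and Address properties (those names are assumptions, not from the question):

using System;
using System.Data;

// Merge the IList items into a DataTable keyed on ID before bulk copying.
var table = new DataTable();
table.Columns.Add("ID", typeof(int));
table.Columns.Add("Name", typeof(string));
table.Columns.Add("Address", typeof(string));
table.PrimaryKey = new[] { table.Columns["ID"] };

foreach (var item in list)
{
    DataRow row = table.Rows.Find(item.Id);      // existing row with this ID?
    if (row == null)
    {
        row = table.NewRow();                    // no: create and append it
        row["ID"] = item.Id;
        table.Rows.Add(row);
    }
    // either way: overwrite only the non-trivial (non-empty) properties
    if (!string.IsNullOrEmpty(item.Name)) row["Name"] = item.Name;
    if (!string.IsNullOrEmpty(item.Address)) row["Address"] = item.Address;
}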
Re the edit: in that scenario, I would upload to a staging table (just a regular table that has a schema like the uploaded data, but typically no foreign keys, etc.), then use regular TSQL to move the data into the transactional tables. In addition to full TSQL support, this also allows better logging of operations. In particular, look at the OUTPUT clause of INSERT, which can help with complex bulk operations.
You can't do updates with bulk copy (bulk insert), only insert. Hence the name.
You need to fix the data before you insert them. If this means you have updates to pre-existing rows, you can't insert those as that will generate the key conflict.
You can either bulk insert into a temporary table, and run the appropriate insert or update statements, only insert the new rows and issue update statements for the rest, or delete the pre-existing rows after fetching them and fixing the data before reinserting.
But there's no way to persuade bulk copy to update an existing row.
I have a table whose schema is very simple: an ID column as the unique primary key (uniqueidentifier type) and some other nvarchar columns. My current goal is, for 5000 inputs, to work out which ones are already contained in the table and which are not. The inputs are strings, and I have a C# function which converts a string into a uniqueidentifier (GUID). My logic is: if the ID already exists, then I treat the string as already contained in the table.
My question is: if I need to find out which of the 5000 input strings are already contained in the DB and which are not, what is the most efficient way?
BTW: my current implementation is to convert the string to a GUID using C# code, then invoke a stored procedure which queries whether the ID exists in the database and returns the result back to the C# code.
My working environment: VSTS 2008 + SQL Server 2008 + C# 3.5.
My first instinct would be to pump your 5000 inputs into a single-column temporary table X, possibly index it, and then use:
SELECT X.thecol
FROM X
JOIN ExistingTable ON ExistingTable.thecol = X.thecol
to get the ones that are present, and (if both sets are needed)
SELECT X.thecol
FROM X
LEFT JOIN ExistingTable ON ExistingTable.thecol = X.thecol
WHERE ExistingTable.thecol IS NULL
to get the ones that are absent. Worth benchmarking, at least.
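As a rough C# sketch of that approach (names such as #X, thecol, ExistingTable and the ConvertToGuid helper are placeholders for your own names and your existing conversion code), you could bulk copy the 5000 GUIDs into the temp table and run the join:

using System;
using System.Data;
using System.Data.SqlClient;

// Pump the 5000 inputs into a temp table and join against the existing table.
var input = new DataTable();
input.Columns.Add("thecol", typeof(Guid));
foreach (string s in inputStrings)
    input.Rows.Add(ConvertToGuid(s));            // your existing string-to-GUID function

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    using (var create = new SqlCommand(
        "CREATE TABLE #X (thecol uniqueidentifier PRIMARY KEY);", conn))
        create.ExecuteNonQuery();

    using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "#X" })
        bulk.WriteToServer(input);

    // the ones already present; flip to a LEFT JOIN ... IS NULL for the absent ones
    using (var present = new SqlCommand(
        "SELECT X.thecol FROM #X X JOIN ExistingTable E ON E.thecol = X.thecol;", conn))
    using (SqlDataReader reader = present.ExecuteReader())
        while (reader.Read())
            Console.WriteLine(reader.GetGuid(0));
}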
Edit: as requested, here are some good docs & tutorials on temp tables in SQL Server. Bill Graziano has a simple intro covering temp tables, table variables, and global temp tables. Randy Dyess and SQL Master discuss performance issue for and against them (but remember that if you're getting performance problems you do want to benchmark alternatives, not just go on theoretical considerations!-).
MSDN has articles on tempdb (where temp tables are kept) and optimizing its performance.
Step 1. Make sure you have a problem to solve. Five thousand inserts isn't a lot to insert one at a time in a lot of contexts.
Are you certain that the simplest way possible isn't sufficient? What performance issues have you measured so far?
What do you need to do with those entries that do or don't exist in your table??
Depending on what you need, maybe the new MERGE statement in SQL Server 2008 could fit your bill - update what's already there, insert new stuff, all wrapped neatly into a single SQL statement. Check it out!
http://blogs.conchango.com/davidportas/archive/2007/11/14/SQL-Server-2008-MERGE.aspx
http://www.sql-server-performance.com/articles/dba/SQL_Server_2008_MERGE_Statement_p1.aspx
http://blogs.msdn.com/brunoterkaly/archive/2008/11/12/sql-server-2008-merge-capability.aspx
Your statement would look something like this:
MERGE INTO
(your target table) AS t
USING
(your source table, e.g. a temporary table) AS s
ON t.ID = s.ID
WHEN NOT MATCHED THEN -- row does not exist in base table
....(do whatever you need to do)
WHEN MATCHED THEN -- row exists in base table
... (do whatever else you need to do)
;
To make this really fast, I would load the "new" records from e.g. a TXT or CSV file into a temporary table in SQL server using BULK INSERT:
BULK INSERT YourTemporaryTable
FROM 'c:\temp\yourimportfile.csv'
WITH
(
FIELDTERMINATOR =',',
ROWTERMINATOR =' |\n'
)
BULK INSERT combined with MERGE should give you the best performance you can get on this planet :-)
Marc
PS: here's a note from TechNet on MERGE performance and why it's faster than individual statements:
In SQL Server 2008, you can perform multiple data manipulation language (DML) operations in a single statement by using the MERGE statement. For example, you may need to synchronize two tables by inserting, updating, or deleting rows in one table based on differences found in the other table. Typically, this is done by executing a stored procedure or batch that contains individual INSERT, UPDATE, and DELETE statements. However, this means that the data in both the source and target tables are evaluated and processed multiple times; at least once for each statement.
By using the MERGE statement, you can replace the individual DML statements with a single statement. This can improve query performance because the operations are performed within a single statement, therefore, minimizing the number of times the data in the source and target tables are processed. However, performance gains depend on having correct indexes, joins, and other considerations in place. This topic provides best practice recommendations to help you achieve optimal performance when using the MERGE statement.
Try to ensure you end up running only one query - i.e. if your solution consists of running 5000 queries against the database, that'll probably be the biggest consumer of resources for the operation.
If you can insert the 5000 IDs into a temporary table, you could then write a single query to find the ones that don't exist in the database.
If you want simplicity, since 5000 records is not very many, then from C# just use a loop to generate an insert statement for each of the strings you want to add to the table. Wrap each insert in a TRY...CATCH block. Send them all up to the server in one shot, like this:
BEGIN TRY
INSERT INTO table (theCol, field2, field3)
SELECT theGuid, value2, value3
END TRY BEGIN CATCH END CATCH
BEGIN TRY
INSERT INTO table (theCol, field2, field3)
SELECT theGuid, value2, value3
END TRY BEGIN CATCH END CATCH
BEGIN TRY
INSERT INTO table (theCol, field2, field3)
SELECT theGuid, value2, value3
END TRY BEGIN CATCH END CATCH
If you have a unique index or primary key defined on your string GUID, then the duplicate inserts will fail. Checking ahead of time to see whether the record exists just duplicates work that SQL is going to do anyway.
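A hedged sketch of generating that batch from C# (MyTable and the GuidFromString helper are placeholders; your real field2/field3 values would replace the NULLs):

using System;
using System.Data.SqlClient;
using System.Text;

// Build one big batch of guarded inserts and send it in a single round trip.
var sql = new StringBuilder();
foreach (string s in inputStrings)
{
    Guid id = GuidFromString(s);              // your existing string-to-GUID conversion
    sql.AppendLine("BEGIN TRY");
    sql.AppendLine(
        $"  INSERT INTO MyTable (theCol, field2, field3) VALUES ('{id}', NULL, NULL);");
    sql.AppendLine("END TRY BEGIN CATCH END CATCH");
}

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(sql.ToString(), conn))
{
    conn.Open();
    cmd.ExecuteNonQuery();                     // duplicates fail silently inside their CATCH
}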
If performance is really important, then consider downloading the 5000 GUIDs to your local station and doing all the analysis locally. Reading 5000 GUIDs should take much less than 1 second. This is simpler than bulk importing into a temp table (which is the only way you will get performance from a temp table) and doing an update using a join to the temp table.
Since you are using SQL Server 2008, you could use table-valued parameters. They are a way to provide a table as a parameter to a stored procedure.
Using ADO.NET you could easily pre-populate a DataTable and pass it as a SqlParameter.
Steps you need to perform:
Create a custom Sql Type
CREATE TYPE MyType AS TABLE
(
    UniqueId INT NOT NULL,
    [Column] NVARCHAR(255) NOT NULL
)
Create a stored procedure which accepts the Type
CREATE PROCEDURE spInsertMyType
@Data MyType READONLY
AS
xxxx
Call using C#
SqlCommand insertCommand = new SqlCommand(
    "spInsertMyType", connection);
insertCommand.CommandType = CommandType.StoredProcedure;

SqlParameter tvpParam =
    insertCommand.Parameters.AddWithValue(
        "@Data", dataReader);
tvpParam.SqlDbType = SqlDbType.Structured;
Links: Table-valued Parameters in Sql 2008
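For completeness, a small sketch of the "pre-populate a DataTable" variant mentioned above; the column names follow the MyType definition, while the row values and connectionString are purely illustrative:

using System.Data;
using System.Data.SqlClient;

// Build a DataTable shaped like dbo.MyType and pass it as a structured parameter.
var data = new DataTable();
data.Columns.Add("UniqueId", typeof(int));
data.Columns.Add("Column", typeof(string));
data.Rows.Add(1, "first value");
data.Rows.Add(2, "second value");

using (var connection = new SqlConnection(connectionString))
using (var insertCommand = new SqlCommand("spInsertMyType", connection))
{
    insertCommand.CommandType = CommandType.StoredProcedure;

    SqlParameter tvpParam = insertCommand.Parameters.AddWithValue("@Data", data);
    tvpParam.SqlDbType = SqlDbType.Structured;
    tvpParam.TypeName = "dbo.MyType";          // the table type created above

    connection.Open();
    insertCommand.ExecuteNonQuery();
}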
Definitely do not do it one-by-one.
My preferred solution is to create a stored procedure with one parameter that can take XML in the following format:
<ROOT>
  <MyObject ID="60EAD98F-8A6C-4C22-AF75-000000000000"/>
  <MyObject ID="60EAD98F-8A6C-4C22-AF75-000000000001"/>
  ....
</ROOT>
Then, in the procedure with the argument of type NVARCHAR(MAX), you convert it to XML, after which you use it as a table with a single column (let's call it @FilterTable). The stored procedure looks like:
CREATE PROCEDURE dbo.sp_MultipleParams(@FilterXML NVARCHAR(MAX))
AS BEGIN
    SET NOCOUNT ON

    DECLARE @x XML
    SELECT @x = CONVERT(XML, @FilterXML)

    -- temporary table (must have it, because we cannot join on the XML directly)
    DECLARE @FilterTable TABLE (
        "ID" UNIQUEIDENTIFIER
    )

    -- insert into temporary table
    -- important: XML iS CaSe-SenSiTiVe
    INSERT @FilterTable
    SELECT x.value('@ID', 'UNIQUEIDENTIFIER')
    FROM @x.nodes('/ROOT/MyObject') AS R(x)

    SELECT o.ID,
           SIGN(SUM(CASE WHEN t.ID IS NULL THEN 0 ELSE 1 END)) AS FoundInDB
    FROM @FilterTable o
    LEFT JOIN dbo.MyTable t
        ON o.ID = t.ID
    GROUP BY o.ID
END
GO
You run it as:
EXEC sp_MultipleParams '<ROOT><MyObject ID="60EAD98F-8A6C-4C22-AF75-000000000000"/><MyObject ID="60EAD98F-8A6C-4C22-AF75-000000000002"/></ROOT>'
And your results look like:
ID FoundInDB
------------------------------------ -----------
60EAD98F-8A6C-4C22-AF75-000000000000 1
60EAD98F-8A6C-4C22-AF75-000000000002 0
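Finally, a hedged C# sketch of calling this procedure: build the <ROOT><MyObject .../></ROOT> payload from your GUIDs (inputGuids here is an assumed IEnumerable<Guid>) and pass it as the NVARCHAR(MAX) parameter:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using System.Xml.Linq;

// Build the XML filter and execute dbo.sp_MultipleParams defined above.
var xml = new XElement("ROOT",
    inputGuids.Select(g => new XElement("MyObject", new XAttribute("ID", g))));

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.sp_MultipleParams", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@FilterXML", SqlDbType.NVarChar, -1).Value = xml.ToString();

    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
        while (reader.Read())
            Console.WriteLine("{0} {1}", reader.GetGuid(0), reader.GetInt32(1));
}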