Bulk insert records into SQL Server database - c#

I have a text file that contains about a million records. What is the best way to insert them into a SQL Server database from C#?
Can I use BULK INSERT?

The best way is to use the bcp utility or an SSIS workflow. Those tools have refinements like caching and batching that you will miss in a naive implementation. The next best option is the BULK INSERT statement, as long as the SQL Server engine itself can reach the file. The last option is the SqlBulkCopy class, which lets your app read the file, optionally process and transform it, and then feed the rows to SqlBulkCopy (for example through an IDataReader).
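For the BULK INSERT route, a minimal sketch of issuing the statement from C# could look like the following; the connection string, table name and file path are placeholders, and the path must be readable by the SQL Server service itself, not just by the client.
using System.Data.SqlClient;
class BulkInsertExample
{
    static void Main()
    {
        // Placeholder connection string and paths; adjust FIELDTERMINATOR/ROWTERMINATOR to your file format.
        const string connectionString = "Server=.;Database=MyDb;Integrated Security=true";
        const string sql = @"
            BULK INSERT dbo.MyTable
            FROM 'C:\data\records.txt'
            WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n', BATCHSIZE = 10000, TABLOCK);";
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.CommandTimeout = 0;   // large loads can exceed the default 30-second timeout
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}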

I recently worked on the same kind of problem and realized there are a couple of solutions to it.
I wrote a batch program (for batch program design, read this - http://msdn.microsoft.com/en-us/magazine/cc164014.aspx).
You can use the SQL Server utilities BCP.exe or OSQL.exe, or the .NET Framework's SqlBulkCopy class.
I ended up using BCP (I had a CSV file and used a format file to load the data) and OSQL (where I had to supply a file to the stored proc).
I also used the .NET Process class and its OutputDataReceived event to log all output of BCP.exe to the console (read this - http://msdn.microsoft.com/en-us/library/system.diagnostics.process.outputdatareceived.aspx); this worked pretty well.
I also tried the SqlBulkCopy class, but it can be slow if you first load the data into a DataTable (http://msdn.microsoft.com/en-us/library/ex21zs8x.aspx); if you use an IDataReader (http://msdn.microsoft.com/en-us/library/434atets.aspx) it can be fast.
Since I had millions of rows, I tried the fast CSVReader (http://www.codeproject.com/Articles/9258/A-Fast-CSV-Reader), which is pretty quick, but down the line there were too many problems with data conversion and I did not have much flexibility on the SQL Server side.
I ended up using BCP and OSQL.
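As a rough sketch of the Process/OutputDataReceived approach described above (the bcp arguments, server, table and file names here are only illustrative):
using System;
using System.Diagnostics;
class BcpRunner
{
    static void Main()
    {
        // Illustrative bcp arguments: import records.csv into dbo.MyTable using a trusted
        // connection (-T), character mode (-c) and a comma field terminator (-t,).
        var startInfo = new ProcessStartInfo("bcp.exe",
            "MyDb.dbo.MyTable in C:\\data\\records.csv -S localhost -T -c -t,")
        {
            UseShellExecute = false,
            RedirectStandardOutput = true
        };
        using (var process = new Process { StartInfo = startInfo })
        {
            // Log each line bcp writes to stdout.
            process.OutputDataReceived += (sender, e) => { if (e.Data != null) Console.WriteLine(e.Data); };
            process.Start();
            process.BeginOutputReadLine();
            process.WaitForExit();
        }
    }
}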

Is using C# a requirement? The fastest way is to use the bcp command line tool: http://msdn.microsoft.com/en-us/library/ms162802.aspx

Bulk Insert SQL Server millions of records

I have a Windows Service application that receives a stream of data with the following format
IDX|20120512|075659|00000002|3|AALI |Astra Agro Lestari Tbk. |0|ORDI_PREOPEN|12 |00000001550.00|00000001291.67|00001574745000|00001574745000|00500|XDS1BXO1| |00001574745000|ݤ
IDX|20120512|075659|00000022|3|ALMI |Alumindo Light Metal Industry Tbk. |0|ORDI |33 |00000001300.00|00000001300.00|00000308000000|00000308000000|00500|--U3---2| |00000308000000|õÄ
This data comes in millions of rows, in sequence (00000002 ... 00198562), and I have to parse it and insert it into a database table in that sequence.
My question is, what is the most effective way to insert this data into my database? I have tried a simple approach: open a SqlConnection, generate a string of SQL INSERT statements, and execute it with a SqlCommand object, but this method takes too long.
I have read that I can use BULK INSERT, but it has to read from a text file. Is it possible to use BULK INSERT for this scenario? (I have never used it before.)
Thank you
Update: I'm aware of SqlBulkCopy, but it requires me to have a DataTable first; is that good for performance? If possible I want to insert directly from my data source into SQL Server without building an in-memory DataTable.
If you are writing this in C# you might want to look at the SqlBulkCopy class.
Lets you efficiently bulk load a SQL Server table with data from another source.
First, download the free LumenWorks.Framework.IO.Csv library.
Second, use code like this:
var sr = new StreamReader(yourStream);
var sbc = new SqlBulkCopy(connectionString) { DestinationTableName = "dbo.YourTable" };
sbc.WriteToServer(new LumenWorks.Framework.IO.Csv.CsvReader(sr, true)); // true = the file has a header row
Yeah, it is really that easy.
You can use SSIS (SQL Server Integration Services) to move data from a source data flow to a destination data flow.
The source can be a text file and the destination can be a SQL Server table, with the transfer executed in bulk insert mode.

SQL Server: BCP vs SQLCMD - what is more efficient?

I'm writing a program in C# to export SQL Server data from one database and import it into another. Since the two servers are not connected, I need to choose a method such as bcp.
What are the differences between these two? Is one more efficient than the other? And in what scenarios?
What are the known limitations/compatibility issues of each?
What other methods exist to export data from SQL Server into files and import from them?
Can I enable compression in these files at the same time as creating them via a command line switch instead of zipping them afterwards?
Please include any other aspects which you think are important when making this decision.
Thanks in advance.
This doesn't cover BCP, but I did write a blog post comparing a couple of approaches for bulk loading data into SQL Server - it compared SqlBulkCopy against batched inserts via SqlDataAdapter.
SqlBulkCopy is worth checking out - the kind of process you'd use is: query database 1 and retrieve a SqlDataReader, then pass that SqlDataReader to SqlBulkCopy to persist the data to database 2.
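A minimal sketch of that reader-to-bulk-copy transfer (the connection strings, query and table names are placeholders):
using System.Data.SqlClient;
class DatabaseTransfer
{
    static void Main()
    {
        const string sourceConnectionString = "Server=ServerA;Database=SourceDb;Integrated Security=true";
        const string targetConnectionString = "Server=ServerB;Database=TargetDb;Integrated Security=true";
        using (var source = new SqlConnection(sourceConnectionString))
        using (var target = new SqlConnection(targetConnectionString))
        {
            source.Open();
            target.Open();
            using (var command = new SqlCommand("SELECT * FROM dbo.SourceTable", source))
            using (var reader = command.ExecuteReader())
            using (var bulkCopy = new SqlBulkCopy(target) { DestinationTableName = "dbo.TargetTable" })
            {
                // Stream rows from database 1 straight into database 2 without buffering them all in memory.
                bulkCopy.WriteToServer(reader);
            }
        }
    }
}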

How to best insert 350,000 rows with ADO.Net

I have a csv file with 350,000 rows, each row has about 150 columns.
What would be the best way to insert these rows into SQL Server using ADO.Net?
The way I've usually done it is to create the SQL statement manually. I was wondering if there is any way I can code it to simply insert the entire datatable into SQL Server? Or some short-cut like this.
By the way I already tried doing this with SSIS, but there are a few data clean-up issues which I can handle with C# but not so easily with SSIS. The data started as XML, but I changed it to CSV for simplicity.
Make a class "CsvDataReader" that implements IDataReader. Just implement Read(), GetValue(int i), Dispose() and the constructor: you can leave the rest throwing NotImplementedException if you want, because SqlBulkCopy won't call them. Use Read() to handle reading each line and GetValue(i) to return the i-th value in the current line.
Then pass it to the SqlBulkCopy with the appropriate column mappings you want.
I get about 30000 records/per sec insert speed with that method.
If you have control of the source file format, make it tab delimited as it's easier to parse than CSV.
Edit : http://www.codeproject.com/KB/database/CsvReader.aspx - tx Mark Gravell.
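A bare-bones sketch of such a reader, assuming a simple delimited file with no quoting or escaping; the class name and delimiter handling are only illustrative, FieldCount is also implemented because SqlBulkCopy reads it, and the remaining members throw as the answer above suggests:
using System;
using System.Data;
using System.IO;
// Minimal IDataReader over a delimited text file, intended only for SqlBulkCopy.WriteToServer.
public class CsvDataReader : IDataReader
{
    private readonly StreamReader _reader;
    private readonly char _delimiter;
    private string[] _current = new string[0];
    private bool _closed;
    public CsvDataReader(string path, int fieldCount, char delimiter)
    {
        _reader = new StreamReader(path);
        FieldCount = fieldCount;
        _delimiter = delimiter;
    }
    public int FieldCount { get; private set; }
    // Advance to the next line; returning false tells SqlBulkCopy the file is finished.
    public bool Read()
    {
        string line = _reader.ReadLine();
        if (line == null) return false;
        _current = line.Split(_delimiter);
        return true;
    }
    // Return the i-th field of the current line as a string; SQL Server converts it on insert.
    public object GetValue(int i) { return _current[i]; }
    public void Close() { _closed = true; _reader.Dispose(); }
    public bool IsClosed { get { return _closed; } }
    public void Dispose() { Close(); }
    public bool IsDBNull(int i) { return false; } // empty strings are passed through as-is
    // The rest is left unimplemented; map columns by ordinal so GetName/GetOrdinal are never needed.
    public int Depth { get { throw new NotImplementedException(); } }
    public DataTable GetSchemaTable() { throw new NotImplementedException(); }
    public bool NextResult() { throw new NotImplementedException(); }
    public int RecordsAffected { get { throw new NotImplementedException(); } }
    public object this[int i] { get { throw new NotImplementedException(); } }
    public object this[string name] { get { throw new NotImplementedException(); } }
    public bool GetBoolean(int i) { throw new NotImplementedException(); }
    public byte GetByte(int i) { throw new NotImplementedException(); }
    public long GetBytes(int i, long fieldOffset, byte[] buffer, int bufferOffset, int length) { throw new NotImplementedException(); }
    public char GetChar(int i) { throw new NotImplementedException(); }
    public long GetChars(int i, long fieldOffset, char[] buffer, int bufferOffset, int length) { throw new NotImplementedException(); }
    public IDataReader GetData(int i) { throw new NotImplementedException(); }
    public string GetDataTypeName(int i) { throw new NotImplementedException(); }
    public DateTime GetDateTime(int i) { throw new NotImplementedException(); }
    public decimal GetDecimal(int i) { throw new NotImplementedException(); }
    public double GetDouble(int i) { throw new NotImplementedException(); }
    public Type GetFieldType(int i) { throw new NotImplementedException(); }
    public float GetFloat(int i) { throw new NotImplementedException(); }
    public Guid GetGuid(int i) { throw new NotImplementedException(); }
    public short GetInt16(int i) { throw new NotImplementedException(); }
    public int GetInt32(int i) { throw new NotImplementedException(); }
    public long GetInt64(int i) { throw new NotImplementedException(); }
    public string GetName(int i) { throw new NotImplementedException(); }
    public int GetOrdinal(string name) { throw new NotImplementedException(); }
    public string GetString(int i) { throw new NotImplementedException(); }
    public int GetValues(object[] values) { throw new NotImplementedException(); }
}
Usage is then roughly: create a SqlBulkCopy with DestinationTableName set, add ordinal column mappings, and call WriteToServer(new CsvDataReader(path, 150, '\t')).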
SqlBulkCopy if it's available. Here is a very helpful explanation of using SqlBulkCopy in ADO.NET 2.0 with C#
I think you can load your XML directly into a DataSet and then map your SqlBulkCopy to the database and the DataSet.
You could go back to XML instead of CSV, load the XML file into a temp table using OPENXML, clean up your data in the temp table, and then finally process it.
I have been following this approach for huge data imports where my XML files happen to be > 500 MB in size, and OPENXML works like a charm.
You would be surprised at how much faster this works compared to manual ADO.NET statements.

Upload a Massive CSV File to SQL Server Database

I need to upload a massive (16 GB, 65+ million records) CSV file to a single table in a SQL Server 2005 database. Does anyone have any pointers on the best way to do this?
Details
I am currently using a C# console application (.NET framework 2.0) to split the import file into files of 50000 records, then process each file. I upload the records into the database from the console application using the SqlBulkCopy class in batches of 5000. To split the files takes approximately 30 minutes, and to upload the entire data set (65+ million records) takes approximately 4.5 hours. The generated file size and the batch upload size are both configuration settings, and I am investigating increasing the value of both to improve performance. To run the application, we use a quad core server with 16GB RAM. This server is also the database server.
Update
Given the answers so far, please note that prior to the import:
The database table is truncated, and all indexes and constraints are dropped.
The database is shrunk, and disk space reclaimed.
After the import has completed:
The indexes are recreated
If you can suggest any different approaches, or ways we can improve the existing import application, I would appreciate it. Thanks.
Related Question
The following question may be of use to others dealing with this problem:
Potential Pitfalls of inserting millions of records into SQL Server 2005 from flat file
Solution
I have investigated the effect of altering the batch size and the size of the split files, and found that batches of 500 records and split files of 200,000 records work best for my application. Using SqlBulkCopyOptions.TableLock also helped. See the answer to this question for further details.
I also looked at using an SSIS DTS package and a BULK INSERT SQL script. The SSIS package appeared quicker, but did not offer me the ability to record invalid records, etc. The BULK INSERT SQL script, whilst slower than the SSIS package, was considerably faster than the C# application. It did allow me to record errors, etc., and for this reason I am accepting the BULK INSERT answer from ConcernedOfTunbridgeWells as the solution. I'm aware that this may not be the best answer for everyone facing this issue, but it answers my immediate problem.
Thanks to everyone who replied.
Regards, MagicAndi
BULK INSERT is run from the DBMS itself, reading files described by a bcp format file from a directory on the server (or one mounted on it). Write an application that splits the file into smaller chunks, places them in an appropriate directory, and then executes a wrapper that runs a series of BULK INSERT statements. You can run several threads in parallel if necessary.
This is probably about as fast as a bulk load gets. Also, if there's a suitable partitioning key available in the bulk load file, put the staging table on a partition scheme.
Also, if you're bulk loading into a table with a clustered index, make sure the data is sorted in the same order as the index. Merge sort is your friend for large data sets.
Have you tried SSIS (SQL Server Integration Services).
The SqlBulkCopy class that you're already using is going to be your best bet. The best you can do from here in your c# code is experiment with your particular system and data to see what batch sizes work best. But you're already doing that.
Going beyond the client code, there might be some things you can do with the server to make the import run more efficiently:
Try setting the database (data and log file) size before starting the import to something large enough to hold the entire set. You don't want to rely on auto-grow in the middle of this.
Depending on how the data is sorted and any indexes on the table, you might do a little better to drop any indexes that don't match the order in which the records are imported, and then recreate them after the import.
Finally, it's tempting to try running this in parallel, with a few threads doing bulk inserts at one time. However, the biggest bottleneck is almost certainly disk performance. Anything you can do to the physical server to improve that (new disks, san, etc) will help much more.
You may be able to save the step of splitting the files as follows:
Instantiate an IDataReader to read the values from the input CSV file. There are several ways to do this: the easiest is probably to use the Microsoft OleDb Jet driver. Google for this if you need more info - e.g. there's some info in this StackOverflow question.
An alternative method is to use a technique like that used by www.csvreader.com.
Instantiate a SqlBulkCopy object, set the BatchSize and BulkCopyTimeout properties to appropriate values.
Pass the IDataReader to SqlBulkCopy.WriteToServer method.
I've used this technique successfully with large files, but not as large as yours.
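A rough sketch of those steps, assuming the 32-bit Jet text driver is installed and the CSV sits in C:\data (folder, file, table names and connection strings are placeholders):
using System.Data.OleDb;
using System.Data.SqlClient;
class CsvToSqlServer
{
    static void Main()
    {
        // For the Jet text driver, the "database" is the folder and the "table" is the file name.
        const string csvConnectionString =
            @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\;" +
            @"Extended Properties=""text;HDR=Yes;FMT=Delimited""";
        const string sqlConnectionString = "Server=.;Database=MyDb;Integrated Security=true";
        using (var csvConnection = new OleDbConnection(csvConnectionString))
        using (var csvCommand = new OleDbCommand("SELECT * FROM [records.csv]", csvConnection))
        using (var sqlConnection = new SqlConnection(sqlConnectionString))
        {
            csvConnection.Open();
            sqlConnection.Open();
            using (var reader = csvCommand.ExecuteReader())
            using (var bulkCopy = new SqlBulkCopy(sqlConnection))
            {
                bulkCopy.DestinationTableName = "dbo.TargetTable";
                bulkCopy.BatchSize = 5000;      // rows per batch sent to the server
                bulkCopy.BulkCopyTimeout = 0;   // no timeout for a very large load
                bulkCopy.WriteToServer(reader);
            }
        }
    }
}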
See this and this blog posts for a comparison.
It seems the best alternative is to use BulkInsert with the TABLOCK option set to true.
Have you tried using the Bulk Insert method in Sql Server?
Lately, I had to upload/import a lot of stuff, too (built a PHP script).
I decided to process them record-for-record.
Of course, it takes longer, but for me, the following points were important:
- easily pause the process
- better debugging
This is just a tip.
regards,
Benedikt
BULK INSERT is probably already the fastest way. You can gain additional performance by dropping indexes and constraints while inserting and reestablishing them later. The highest performance impact comes from clustered indexes.
Have you tried SQL Server Integration Services for this? It might be better able to handle such a large text file
Just to check: your inserts will be faster if there are no indexes on the table you are inserting into.
My scenario for things like that is:
Create an SSIS package on the SQL Server which uses BULK INSERT,
Create a stored procedure inside the database that can run that package from T-SQL code.
After that, send the file for bulk insert to the SQL Server using FTP and call the SSIS package using the stored procedure.

What are the pitfalls of inserting millions of records into SQL Server from flat file?

I am about to start on a journey writing a Windows Forms application that will open a txt file that is pipe delimited and about 230 MB in size. This app will then insert this data into a SQL Server 2005 database (obviously this needs to happen swiftly). I am using C# 3.0 and .NET 3.5 for this project.
I am not asking for the app, just some communal advice and pointers on potential pitfalls. From the site I have gathered that SQL bulk copy is a prerequisite; is there anything else I should think about? (I think that just opening the txt file with a forms app will be a large endeavor; maybe break it into blob data?)
Thank you, and I will edit the question for clarity if anyone needs it.
Do you have to write a winforms app? It might be much easier and faster to use SSIS. There are some built-in tasks available, especially the Bulk Insert task.
Also, worth checking Flat File Bulk Import methods speed comparison in SQL Server 2005.
Update: If you are new to SSIS, check out some of these sites to get you on fast track. 1) SSIS Control Flow Basics 2) Getting Started with SQL Server Integration Services
This is another How to: on importing Excel file into SQL 2005.
This is going to be a streaming endeavor.
If you can, do not use transactions here. The transactional cost will simply be too great.
So what you're going to do is read the file a line at a time and insert it a line at a time. You should dump failed inserts into another file that you can diagnose later to see where they failed.
At first I would go ahead and try a bulk insert of a couple of hundred rows just to see that the streaming is working properly, and then you can open it up all you want.
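A sketch of that streaming loop, assuming a pipe-delimited file with three columns and a matching parameterized INSERT (the table, column and file names are placeholders):
using System;
using System.Data.SqlClient;
using System.IO;
class StreamingLoader
{
    static void Main()
    {
        const string connectionString = "Server=.;Database=MyDb;Integrated Security=true";
        using (var connection = new SqlConnection(connectionString))
        using (var input = new StreamReader(@"C:\data\records.txt"))
        using (var failed = new StreamWriter(@"C:\data\failed.txt"))
        {
            connection.Open();
            // One parameterized insert reused for every line; adjust columns to your file layout.
            using (var insert = new SqlCommand(
                "INSERT INTO dbo.Target (Col1, Col2, Col3) VALUES (@c1, @c2, @c3)", connection))
            {
                insert.Parameters.Add("@c1", System.Data.SqlDbType.VarChar, 50);
                insert.Parameters.Add("@c2", System.Data.SqlDbType.VarChar, 50);
                insert.Parameters.Add("@c3", System.Data.SqlDbType.VarChar, 50);
                string line;
                while ((line = input.ReadLine()) != null)
                {
                    string[] fields = line.Split('|');
                    try
                    {
                        insert.Parameters["@c1"].Value = fields[0];
                        insert.Parameters["@c2"].Value = fields[1];
                        insert.Parameters["@c3"].Value = fields[2];
                        insert.ExecuteNonQuery();
                    }
                    catch (Exception ex)
                    {
                        // Dump the failing line (plus the reason) for later diagnosis.
                        failed.WriteLine(line + "\t" + ex.Message);
                    }
                }
            }
        }
    }
}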
You could try using SqlBulkCopy. It lets you pull from "any data source".
Just as a side note, it's sometimes faster to drop the indices of your table and recreate them after the bulk insert operation.
You might consider switching from full recovery to bulk-logged. This will help to keep your backups a reasonable size.
I totally recommend SSIS, you can read in millions of records and clean them up along the way in relatively little time.
You will need to set aside some time to get to grips with SSIS, but it should pay off. There are a few other threads here on SO which will probably be useful:
What's the fastest way to bulk insert a lot of data in SQL Server (C# client)
What are the recommended learning material for SSIS?
You can also create a package from C#. I have a C# program which reads a 3GL "master file" from a legacy system (parses into an object model using an API I have for a related project), takes a package template and modifies it to generate a package for the ETL.
The size of data you're talking about actually isn't that gigantic. I don't know what your efficiency concerns are, but if you can wait a few hours for it to insert, you might be surprised at how easy this would be to accomplish with a really naive technique of just INSERTing each row one at a time. Batching together a thousand or so rows at a time and submitting them to SQL server may make it quite a bit faster as well.
Just a suggestion that could save you some serious programming time, if you don't need it to be as fast as conceivable. Depending on how often this import has to run, saving a few days of programming time could easily be worth it in exchange for waiting a few hours while it runs.
You could use SSIS for the read & insert, but call it as a package from your WinForms app. Then you could pass in things like source, destination, connection strings etc as parameter/configurations.
HowTo: http://msdn.microsoft.com/en-us/library/aa337077.aspx
You can set up transforms and error handling inside SSIS and even create logical branching based on input parameters.
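A minimal sketch of running a package from your app via the SSIS runtime, along the lines of the HowTo link above (the package path and variable name are placeholders; the project needs a reference to Microsoft.SqlServer.ManagedDTS):
using System;
using Microsoft.SqlServer.Dts.Runtime;
class PackageRunner
{
    static void Main()
    {
        var app = new Application();
        // Load the package from the file system (it could also come from MSDB or the SSIS package store).
        Package package = app.LoadPackage(@"C:\packages\ImportCsv.dtsx", null);
        // Pass runtime values, e.g. the source file, through package variables.
        package.Variables["User::SourceFile"].Value = @"C:\data\records.txt";
        DTSExecResult result = package.Execute();
        Console.WriteLine("Package finished with result: " + result);
    }
}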
If the column format of the file matches the target table where the data needs to end up, I prefer using the command-line utility bcp to load the data file. It's blazingly fast and you can specify an error file for any "odd" records that fail to be inserted.
Your app could kick off the command; you would just need to store the command-line parameters for it (server, database, username/password or trusted connection, table, error file, etc.).
I like this method better than running a BULK INSERT SQL command because the data file isn't required to be on a system accessible by the database server. To use bulk insert you have to specify the path to the data file to load, so it must be a path visible and readable by the system user on the database server that is running the load. Too much hassle for me usually. :-)
