I'm currently building an application that needs a feature to import a user-supplied CSV file as data into a database. Each "cell" in the CSV will be stored in its own row.
Initially I was using parameterized queries to insert each row one by one, but the sheer number of inserts (520,000 in one example file!) made that far too slow, so I had to reconsider. I'm now parsing the CSV file into an IEnumerable<Answer> and handing it over to the following code to be inserted into the database in batches:
public void AddAnswers(IEnumerable<Answer> answers)
{
    const int batchSize = 1000;
    var values = new StringBuilder();
    var i = 0;
    foreach (var answer in answers)
    {
        if (i++ > 0)
        {
            values.Append(",");
        }
        values.AppendFormat("({0},{1},'{2}')", answer.AnswerSetId, answer.QuestionId, answer.Value.Replace("'", "''"));
        if (i == batchSize)
        {
            // We've reached the batch size limit - send what we have so far
            SendAnswerBatch(values.ToString());
            values.Clear();
            i = 0;
        }
    }
    if (i > 0)
    {
        // Ensure any leftovers that didn't reach the maximum batch size are sent over
        SendAnswerBatch(values.ToString());
    }
}

private void SendAnswerBatch(string values)
{
    var query = String.Format("INSERT INTO Answers (AnswerSetId,QuestionId,Value) VALUES {0}", values);
    Context.Database.ExecuteSqlCommand(query);
}
This took a large set of data from over 5 minutes to less than 5 seconds to insert. However, I realise that naively replacing ' with '' is not safe.
Obviously the safest way to insert a single row would be to use a parameterized query, but is there a way to make that work with a batch insert like this?
If at all possible, I also need the solution to be database-agnostic - I had already considered SqlBulkCopy, but the application needs to support multiple database engines.
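For illustration, here is a sketch of the kind of batch I mean, keeping the multi-row VALUES clause but with generated parameter placeholders. This assumes EF's ExecuteSqlCommand convention on SQL Server, where positional values surface as @p0, @p1, ...; other providers name parameters differently, and SQL Server caps a command at around 2,100 parameters, so three columns limits a batch to roughly 700 rows:
// Sketch only: a parameterized multi-row insert. Parameter naming is
// provider-specific; @pN matches what EF generates for SQL Server.
private void SendAnswerBatch(IList<Answer> batch)
{
    var sql = new StringBuilder("INSERT INTO Answers (AnswerSetId,QuestionId,Value) VALUES ");
    var parameters = new List<object>();
    for (var i = 0; i < batch.Count; i++)
    {
        if (i > 0)
        {
            sql.Append(",");
        }
        var p = i * 3;
        sql.AppendFormat("(@p{0},@p{1},@p{2})", p, p + 1, p + 2);
        parameters.Add(batch[i].AnswerSetId);
        parameters.Add(batch[i].QuestionId);
        parameters.Add(batch[i].Value);
    }
    Context.Database.ExecuteSqlCommand(sql.ToString(), parameters.ToArray());
}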
I would suggest you use SqlBulkCopy; when inserting a lot of values, it proved really useful to me.
Place your items into a DataTable and let SqlBulkCopy do the rest.
http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy.aspx
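A minimal sketch of that approach, assuming the IEnumerable<Answer> from the question and that the two ID columns are ints (the connection string and table name are placeholders). Note that SqlBulkCopy is SQL Server-specific, so it won't satisfy the database-agnostic requirement:
// Sketch: copy the answers into a DataTable, then bulk-load it.
var table = new DataTable();
table.Columns.Add("AnswerSetId", typeof(int));
table.Columns.Add("QuestionId", typeof(int));
table.Columns.Add("Value", typeof(string));
foreach (var answer in answers)
{
    table.Rows.Add(answer.AnswerSetId, answer.QuestionId, answer.Value);
}

using (var bulkCopy = new SqlBulkCopy(connectionString))
{
    bulkCopy.DestinationTableName = "dbo.Answers";
    bulkCopy.WriteToServer(table);
}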
Related
I want to insert around 1 million records into a database using LINQ in ASP.NET MVC, but when I try the following code it doesn't work: it throws an OutOfMemoryException, and the loop also took 3 days to run. Can anyone please help me with this?
db.Database.ExecuteSqlCommand("DELETE FROM [HotelServices]");

DataTable tblRepeatService = new DataTable();
tblRepeatService.Columns.Add("HotelCode", typeof(System.String));
tblRepeatService.Columns.Add("Service", typeof(System.String));
tblRepeatService.Columns.Add("Category", typeof(System.String));

foreach (DataRow row in xmltable.Rows)
{
    string[] servicesarr = Regex.Split(row["PAmenities"].ToString(), ";");
    for (int a = 0; a < servicesarr.Length; a++)
    {
        tblRepeatService.Rows.Add(row["HotelCode"].ToString(), servicesarr[a], "PA");
    }
    string[] servicesarrA = Regex.Split(row["RAmenities"].ToString(), ";");
    for (int b = 0; b < servicesarrA.Length; b++)
    {
        tblRepeatService.Rows.Add(row["HotelCode"].ToString(), servicesarrA[b], "RA");
    }
}

HotelAmenties _hotelamenties;
foreach (DataRow hadr in tblRepeatService.Rows)
{
    _hotelamenties = new HotelAmenties();
    _hotelamenties.Id = Guid.NewGuid();
    _hotelamenties.ServiceName = hadr["Service"].ToString();
    _hotelamenties.HotelCode = hadr["HotelCode"].ToString();
    db.HotelAmenties.Add(_hotelamenties);
}
db.SaveChanges();
The tblRepeatService table has around 1 million rows.
Bulk inserts like this are highly inefficient in LINQ to SQL / Entity Framework. Every insert creates at least three objects (the DataRow, the HotelAmenties entity, and its change-tracking record), chewing up memory on objects you don't need.
Given that you already have a DataTable, you can use System.Data.SqlClient.SqlBulkCopy to push the content of the table to a temporary table on the SQL server, then use a single insert statement to load the data into its final destination. This is the fastest way I have found so far to move many thousands of records from memory to SQL.
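A sketch of that pattern using the tblRepeatService DataTable from the question (the staging table name and the final INSERT are assumptions, not the asker's actual schema):
// Sketch: bulk-load the DataTable into a staging table, then do one
// set-based INSERT into the real table.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    using (var bulkCopy = new SqlBulkCopy(conn))
    {
        bulkCopy.DestinationTableName = "dbo.HotelAmentiesStaging";
        bulkCopy.WriteToServer(tblRepeatService);
    }

    using (var cmd = new SqlCommand(
        @"INSERT INTO HotelAmenties (Id, ServiceName, HotelCode)
          SELECT NEWID(), Service, HotelCode FROM HotelAmentiesStaging", conn))
    {
        cmd.ExecuteNonQuery();
    }
}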
If performance doesn't matter too much and this is a one-shot job, you can stick with the approach you're using. Your problem is that you only save at the end, so Entity Framework has to track and generate the SQL for 1 million operations at once. Modify your code so that you save every 1,000 or so inserts instead of only at the end, and it should work just fine:
int i = 0;
foreach (DataRow hadr in tblRepeatService.Rows)
{
    _hotelamenties = new HotelAmenties();
    _hotelamenties.Id = Guid.NewGuid();
    _hotelamenties.ServiceName = hadr["Service"].ToString();
    _hotelamenties.HotelCode = hadr["HotelCode"].ToString();
    db.HotelAmenties.Add(_hotelamenties);

    i++;
    if (i % 1000 == 0)
    {
        // Flush a batch of 1,000 inserts so the context doesn't
        // accumulate a million tracked entities.
        db.SaveChanges();
    }
}
db.SaveChanges(); // commit the final partial batch
I'm working on an import from a CSV file to my ASP.NET MVC3/C#/Entity Framework Application.
Currently this is my code, but I'm looking to optimise:
var excel = new ExcelQueryFactory(file);
var data = from c in excel.Worksheet(0)
           select c;
var dataList = data.ToList();

List<FullImportExcel> importList = new List<FullImportExcel>();
foreach (var s in dataList.ToArray())
{
    if ((s[0].ToString().Trim().Length < 6) && (s[1].ToString().Trim().Length < 7))
    {
        FullImportExcel item = new FullImportExcel();
        item.Carrier = s[0].ToString().Trim();
        item.FlightNo = s[1].ToString().Trim();
        item.CodeFlag = s[2].ToString().Trim();
        // etc. (50 more columns here)
        importList.Add(item);
    }
}
PlannerEntities context = new PlannerEntities();
context.Configuration.AutoDetectChangesEnabled = false;
int count = 0;
foreach (var item in importList)
{
    ++count;
    context = AddToFullImportContext(context, item, count, 100, true);
}
context.SaveChanges(); // commit any rows left over from the last partial batch

private PlannerEntities AddToFullImportContext(PlannerEntities context, FullImportExcel entity, int count, int commitCount, bool recreateContext)
{
    context.Set<FullImportExcel>().Add(entity);

    if (count % commitCount == 0)
    {
        context.SaveChanges();
        if (recreateContext)
        {
            context.Dispose();
            context = new PlannerEntities();
            context.Configuration.AutoDetectChangesEnabled = false;
        }
    }
    return context;
}
This works fine, but isn't as quick as it could be, and the import that I'm going to need to do will be a minimum of 2 million lines every month. Are there any better methods out there for bulk imports?
Am I better off avoiding EF altogether and using SqlConnection to insert the data that way?
Thanks
I do like how you're only committing records every X number of records (100 in your case).
I recently wrote a system that, once a month, needed to update the status of upwards of 50,000 records in one go - updating each record and inserting an audit record for each updated record.
Originally I wrote this with Entity Framework, and it took 5-6 minutes to do this part of the task. SQL Profiler showed me it was executing 100,000 SQL queries - one UPDATE and one INSERT per record (as expected, I guess).
I changed this to a stored procedure that takes a comma-separated list of record IDs, the status, and the user ID as parameters, and does a mass update followed by a mass insert. This now takes 5 seconds.
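For illustration, the application side of a call like that might look something like this (the procedure and parameter names here are made up, not the actual system):
// Sketch: hand the whole batch to one set-based stored procedure call
// instead of issuing one UPDATE and one INSERT per record.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.UpdateRecordStatuses", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@RecordIds", string.Join(",", recordIds));
    cmd.Parameters.AddWithValue("@Status", newStatus);
    cmd.Parameters.AddWithValue("@UserId", userId);
    conn.Open();
    cmd.ExecuteNonQuery();
}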
In your case, for this number of records, I'd recommend writing the data out to a file and handing it to SQL Server's BULK INSERT to import.
http://msdn.microsoft.com/en-us/library/ms188365.aspx
For large numbers of inserts in SQL Server, bulk copy is the fastest way. You can use the SqlBulkCopy class to access bulk copy from code. You will need to create an IDataReader over your List, or use an existing IDataReader implementation for generic lists.
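As a sketch of what that can look like - this assumes the third-party FastMember NuGet package, whose ObjectReader exposes a list as an IDataReader; the column list is abbreviated and the table name is borrowed from the BULK INSERT example below:
// Sketch: stream the importList from the question straight into SqlBulkCopy
// without building a DataTable, via FastMember's ObjectReader.
using (var bulkCopy = new SqlBulkCopy(connectionString))
using (var reader = ObjectReader.Create(importList, "Carrier", "FlightNo", "CodeFlag" /* ... remaining columns */))
{
    bulkCopy.DestinationTableName = "dbo.FullImportExcel2s";
    bulkCopy.WriteToServer(reader);
}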
Thanks to Andy for the heads-up - this was the code used in SQL, with a little help from the ever-helpful Pinal Dave - http://blog.sqlauthority.com/2008/02/06/sql-server-import-csv-file-into-sql-server-using-bulk-insert-load-comma-delimited-file-into-sql-server/ :)
DECLARE @bulkinsert NVARCHAR(2000)
DECLARE @filepath NVARCHAR(100)

SET @filepath = 'C:\Users\Admin\Desktop\FullImport.csv'

SET @bulkinsert =
    N'BULK INSERT FullImportExcel2s FROM ''' +
    @filepath +
    N''' WITH (FIRSTROW = 2, FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'')'

EXEC sp_executesql @bulkinsert
Still a bit of work to do to integrate it into the code, but we're down to 25 seconds for 50,000 rows instead of an hour - a huge improvement!
I have a remote SQL Server. I want to make a local copy of all tables on this server. I don't care about the file format used locally; I'm looking for the fastest approach to getting the data from SQL Server into a file. (Note: a server-side backup is not an option.)
This is my current approach:
Step 1: Create a reader and read all data into a List of objects
while (reader.Read()) {
    var fieldCount = reader.FieldCount;
    for (int i = 0; i < fieldCount; i++) {
        objects.Add(reader.GetValue(i));
    }
}
Step 2: Convert the objects to strings
List<string> test = new List<string>();
foreach (var o in objects) {
    test.Add(o.ToString());
}
Step 3: Write the strings to a (CSV) file
foreach (var s in test) {
    backupFile.Write("\"");
    backupFile.Write(s);
    backupFile.Write("\";");
}
I've measured the performance of these 3 steps:
Step 1 takes 3 seconds
Step 2 takes 2.8 seconds
Step 3 takes 0.8 seconds
I'm looking for a way to speed up step 2. Is there a faster way of getting these objects into a file? (It doesn't have to be a text file; a binary or local database file is also fine.)
Why are you bothering with three steps, exactly? Why not do it in a single pass:
while (reader.Read()) {
    var fieldCount = reader.FieldCount;
    for (int i = 0; i < fieldCount; i++) {
        backupFile.Write("\"");
        backupFile.Write(reader.GetValue(i).ToString());
        backupFile.Write("\";");
    }
    backupFile.WriteLine();
}
Unless, of course, you use two threads: one that pushes the data into a collection, and one that flushes the data to a file.
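A minimal sketch of that two-thread variant, using BlockingCollection (System.Collections.Concurrent) as the hand-off buffer - purely illustrative:
// Sketch: one thread reads values from the DataReader while another
// writes them to the file, so reading and I/O overlap.
var values = new BlockingCollection<string>(boundedCapacity: 10000);

var writerTask = Task.Run(() =>
{
    foreach (var value in values.GetConsumingEnumerable())
    {
        backupFile.Write("\"");
        backupFile.Write(value);
        backupFile.Write("\";");
    }
});

while (reader.Read())
{
    for (int i = 0; i < reader.FieldCount; i++)
    {
        values.Add(reader.GetValue(i).ToString());
    }
}
values.CompleteAdding(); // tell the writer no more values are coming
writerTask.Wait();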
If you don't care about the file format used locally, how about loading your SQL table into a DataTable object and using:
dataTable.WriteXml(@"C:\YourFile.xml");
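A slightly fuller sketch of that idea (connection string and table name are placeholders); writing the schema as well makes the file round-trippable with ReadXml:
// Sketch: pull a table into a DataTable and persist it as XML locally.
var dataTable = new DataTable("MyTable");
using (var conn = new SqlConnection(connectionString))
using (var adapter = new SqlDataAdapter("SELECT * FROM MyTable", conn))
{
    adapter.Fill(dataTable);
}
// WriteSchema embeds the schema so ReadXml can restore column types.
dataTable.WriteXml(@"C:\MyTable.xml", XmlWriteMode.WriteSchema);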
Use the OPENROWSET command. Example:
INSERT INTO OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;Database=C:\testing.xls;',
'SELECT Name, Date FROM [Sheet1$]')
SELECT [Name], GETDATE() FROM MyTable
I'm trying to find a better and faster way to insert a pretty massive amount of data (~50K rows) than the LINQ I'm using now.
The data I'm trying to write to a local database is a list of ORM-mapped entities, serialized and received from WCF.
I'm keen on using SqlBulkCopy, but the problem is that the tables are normalized - they are actually a sequence of interconnected tables with one-to-many relationships.
Here's some code that illustrates my point:
foreach (var meeting in meetingsList)
{
    int meetingId = dbAccess.InsertM(value1, value2...);
    foreach (var competition in meeting.COMPETITIONs)
    {
        int competitionId = dbAccess.InsertC(meetingId, value1, value2...);
        foreach (var competitor in competition.COMPETITORs)
        {
            int competitorId = dbAccess.InsertCO(competitionId, value1, ...);
            // and so on
        }
    }
}
where dbAccess.InsertM looks something like this:
// check if the meeting exists
int meetingId = GetMeeting(meeting, date);
if (meetingId == 0)
{
    // if the meeting doesn't exist, insert a new one
    var m = new MEETING
    {
        NAME = name,
        DATE = date
    };
    _db.InsertOnSubmit(m);
    _db.SubmitChanges();
}
Thanks in advance for any answers.
Bojan
I would still use SqlBulkCopy to quickly copy your data from the external file into a staging table that has the same (flat) structure as the file (you'll need to create that table ahead of time).
Once it's loaded, you can split the data up across multiple tables using, for example, a stored procedure - that should be pretty fast, since everything is already on the server.
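A sketch of the shape of that approach (the staging table, the flat DataTable, and the procedure name are assumptions):
// Sketch: bulk-load the flat rows into a staging table, then let a
// stored procedure fan them out into the normalized tables.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    using (var bulkCopy = new SqlBulkCopy(conn))
    {
        bulkCopy.DestinationTableName = "dbo.MeetingImportStaging";
        bulkCopy.WriteToServer(stagingTable); // a flat DataTable built from the WCF data
    }

    using (var cmd = new SqlCommand("dbo.SplitMeetingImport", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.ExecuteNonQuery();
    }
}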
I need to write some code to insert around 3 million rows of data.
At the same time I need to insert the same number of companion rows.
I.e. the schema looks like this:
Item
- Id
- Title
Property
- Id
- FK_Item
- Value
My first attempt was something vaguely like this:
BaseDataContext db = new BaseDataContext();
foreach (var value in values)
{
    Item i = new Item() { Title = value["title"] };
    ItemProperty ip = new ItemProperty() { Item = i, Value = value["value"] };
    db.Items.InsertOnSubmit(i);
    db.ItemProperties.InsertOnSubmit(ip);
}
db.SubmitChanges();
Obviously this was terribly slow, so I'm now using something like this:
BaseDataContext db = new BaseDataContext();
DataTable dt = new DataTable("Item");
dt.Columns.Add("Title", typeof(string));
foreach (var value in values)
{
    DataRow item = dt.NewRow();
    item["Title"] = value["title"];
    dt.Rows.Add(item);
}
using (System.Data.SqlClient.SqlBulkCopy sb = new System.Data.SqlClient.SqlBulkCopy(db.Connection.ConnectionString))
{
    sb.DestinationTableName = "dbo.Item";
    sb.ColumnMappings.Add(new SqlBulkCopyColumnMapping("Title", "Title"));
    sb.WriteToServer(dt);
}
But this doesn't allow me to add the corresponding 'Property' rows.
I'm thinking the best solution might be to add a Stored Procedure like this one that generically lets me do a bulk insert (or at least multiple inserts, but I can probably disable logging in the stored procedure somehow for performance) and then returns the corresponding ids.
Can anyone think of a better (i.e. more succinct, near equal performance) solution?
To combine the two previous best answers and add the missing piece for the IDs:
1) Use BCP to load the data into a temporary "staging" table defined like this:
CREATE TABLE stage (Title VARCHAR(??), Value {whatever});
and you'll need the appropriate index for performance later:
CREATE INDEX ix_stage ON stage(Title);
2) Use a set-based INSERT to load the Item table:
INSERT INTO Item(Title) SELECT Title FROM stage;
3) Finally, load the Property table by joining stage with Item:
INSERT INTO Property(FK_Item, Value)
SELECT Item.Id, stage.Value
FROM stage
JOIN Item ON Item.Title = stage.Title;
The best way to move that much data into SQL Server is bcp. Assuming the data starts in some sort of file, you'll need to write a small script to funnel it into the two tables. Alternatively, you could use bcp to load the data into a single table and then use a stored procedure to INSERT it into the two tables.
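For illustration, a typical bcp invocation for loading one staging table might look like this (server, database, and file names are placeholders; -T uses a trusted connection, -c character format, and -t sets the field terminator):
bcp MyDatabase.dbo.stage in items.csv -S myserver -T -c -t,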
Bulk copy the data into a temporary table, and then call a stored proc that splits the data into the two tables you need to populate.
You can bulk copy in code as well, using the .NET SqlBulkCopy class.