I have some data that needs to be imported into SQL Server.
I have the following fields:
ID Param1 Param2
The way it needs to go into the table is not that straightforward.
It needs to go in as
ID Param1 5655 DateTime
ID Param2 5555 DateTime
As such, it needs to insert 2 records into the table for one row from the input file. I'm wondering what the best way to do this in SQL Server is in terms of importing the file. I can do a BULK INSERT, but the columns need to match exactly, and in my case they do not.
I am also using .NET C#. I'm wondering if importing the file into a DataTable and then using a foreach loop to further manipulate it may be the best approach.
The question was a little unclear to me, but if I understand you correctly there are many ways to do this. One simple way is to use a temp table:
create a temp table:
CREATE TABLE #TBL (ID int, param1 datetime, param2 datetime);
bulk insert from file into temp table
BULK INSERT #TBL FROM 'D:\data.txt' WITH (FIELDTERMINATOR = ' ');
now you can insert into the permanent table using a query on the temp table (assuming your table structure is (ID, PARAM)):
INSERT INTO TABLE_NAME (ID, PARAM)
SELECT DISTINCT T.ID, T.PARAM1
FROM #TBL T
UNION
SELECT DISTINCT T.ID, T.PARAM2
FROM #TBL T
Since you are using C#, you can make use of Table-Valued Parameters to stream in the data in any way you like. You can read a row from a file, split it apart, and pass in 2 rows instead of mapping columns 1 to 1. I detailed a similar approach in this answer:
How can I insert 10 million records in the shortest time possible?
The main difference here is that, in the while loop inside of the GetFileContents() method, you would need to call yield return twice, once for each piece.
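For illustration, here is a rough, untested sketch of that adaptation. The method name GetFileContents and the column metadata below are assumptions based on the linked answer and the schema described in the question, not an exact copy. Each line of the input file yields two SqlDataRecords, one per Param column:

// Requires: using System; using System.Collections.Generic; using System.Data;
//           using System.IO; using Microsoft.SqlServer.Server;
private static readonly SqlMetaData[] RowSchema =
{
    new SqlMetaData("ID",       SqlDbType.Int),
    new SqlMetaData("Param",    SqlDbType.NVarChar, 50),
    new SqlMetaData("Value",    SqlDbType.Int),
    new SqlMetaData("DateTime", SqlDbType.DateTime)
};

private static IEnumerable<SqlDataRecord> GetFileContents(string path)
{
    foreach (string line in File.ReadLines(path))
    {
        string[] fields = line.Split(' ');   // ID Param1 Param2
        DateTime now = DateTime.Now;

        var row1 = new SqlDataRecord(RowSchema);
        row1.SetInt32(0, int.Parse(fields[0]));
        row1.SetString(1, "Param1");
        row1.SetInt32(2, int.Parse(fields[1]));
        row1.SetDateTime(3, now);
        yield return row1;                   // first yield return

        var row2 = new SqlDataRecord(RowSchema);
        row2.SetInt32(0, int.Parse(fields[0]));
        row2.SetString(1, "Param2");
        row2.SetInt32(2, int.Parse(fields[2]));
        row2.SetDateTime(3, now);
        yield return row2;                   // second yield return
    }
}

The enumerable is then passed as a Structured SqlParameter whose TypeName matches a table type with the same four columns, as described in the linked answer.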
Related
I've been looking at the Postgres multi-row/value insert, which looks something like this in pure SQL:
insert into table (col1, col2, col3) values (1,2,3), (4,5,6)....
The reason I want to use this is that I have a lot of data to insert that is arriving via a queue, which I'm batching into 500/1000-record inserts at a time to improve performance.
However, I have been unable to find an example of doing this from within C#; everything I can find adds only a single record's parameters at a time and then executes, which is too slow.
I have this working using Dapper currently, but I need to expand the SQL to an upsert (insert on conflict update), which everything I have found indicates Dapper can't handle. I have found evidence that Postgres can handle upsert and multi-valued inserts in a single statement.
Tom
I may not have understood your question completely, but for bulk inserts in PostgreSQL this is a good answer.
It gives an example of inserting multiple records from a list (RecordList) into a table (user_data.part_list):
using (var writer = conn.BeginBinaryImport(
    "copy user_data.part_list from STDIN (FORMAT BINARY)"))
{
    foreach (var record in RecordList)
    {
        writer.StartRow();
        writer.Write(record.UserId);
        writer.Write(record.Age, NpgsqlTypes.NpgsqlDbType.Integer);
        writer.Write(record.HireDate, NpgsqlTypes.NpgsqlDbType.Date);
    }
    writer.Complete();
}
COPY is the fastest way but does not work if you want to do UPSERTS with an ON CONFLICT ... clause.
If it's necessary to use INSERT, ingesting n rows (with possibly varying n per invocation) can be elegantly done using UNNEST like
INSERT INTO table (col1, col2, ..., coln) SELECT UNNEST(@p1), UNNEST(@p2), ... UNNEST(@pn);
The parameters p then need to be an array of the matching type. Here's an example for an array of ints:
new NpgsqlParameter()
{
ParameterName = "p1",
Value = new int[]{1,2,3},
NpgsqlDbType = NpgsqlDbType.Array | NpgsqlDbType.Integer
}
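A minimal usage sketch tying the two together (the table and column names here are placeholders, and conn is assumed to be an open NpgsqlConnection):

// using Npgsql; using NpgsqlTypes;
using (var cmd = new NpgsqlCommand(
    "INSERT INTO my_table (col1, col2) SELECT UNNEST(@p1), UNNEST(@p2)", conn))
{
    // each array carries one column's values for the whole batch
    cmd.Parameters.Add(new NpgsqlParameter("p1", NpgsqlDbType.Array | NpgsqlDbType.Integer)
    {
        Value = new int[] { 1, 4 }
    });
    cmd.Parameters.Add(new NpgsqlParameter("p2", NpgsqlDbType.Array | NpgsqlDbType.Integer)
    {
        Value = new int[] { 2, 5 }
    });
    cmd.ExecuteNonQuery();                   // inserts rows (1,2) and (4,5)
}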
If you want to insert many records efficiently, you probably want to take a look at Npgsql's bulk copy API, which doesn't use SQL and is the most efficient option available.
Otherwise, there's nothing special about inserting two rows rather than one:
insert into table (col1, col2, col3) values (@p1_1,@p1_2,@p1_3), (@p2_1,@p2_2,@p2_3)....
Simply add the parameters with the correct name and execute just as you would any other SQL.
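To make that concrete, here is a rough, untested sketch of building such a multi-row statement from a batch; the table name, column names, and the batch item properties are placeholders, and conn is assumed to be an open NpgsqlConnection. An ON CONFLICT ... DO UPDATE clause can be appended to the same command text for the upsert case:

// using System.Text; using Npgsql;
var sql = new StringBuilder("INSERT INTO my_table (col1, col2, col3) VALUES ");
using (var cmd = new NpgsqlCommand())
{
    cmd.Connection = conn;
    for (int i = 0; i < batch.Count; i++)
    {
        if (i > 0) sql.Append(", ");
        sql.Append($"(@p{i}_1, @p{i}_2, @p{i}_3)");
        cmd.Parameters.AddWithValue($"p{i}_1", batch[i].Col1);
        cmd.Parameters.AddWithValue($"p{i}_2", batch[i].Col2);
        cmd.Parameters.AddWithValue($"p{i}_3", batch[i].Col3);
    }
    cmd.CommandText = sql.ToString();
    cmd.ExecuteNonQuery();                   // one round trip for the whole batch
}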
I want to perform a bulk insert from CSV into a MySQL database using C#; I'm using MySql.Data.MySqlClient for the connection. The CSV columns feed multiple tables and depend on primary key values, for example:
CSV (columns & values):
emp_name, address,country
-------------------------------
jhon,new york,usa
amanda,san diago,usa
Brad,london,uk
DB Schema(CountryTbl) & value
country_Id,Country_Name
1,usa
2,UK
3,Germany
DB Schema(EmployeeTbl)
Emp_Id(AutoIncrement),Emp_Name
DB Schema(AddressTbl)
Address_Id(AutoIncrement), Emp_Id,Address,countryid
Problem statement:
1> Read data from the CSV to get the CountryId from "CountryTbl" for the respective employee.
2> Insert data into EmployeeTbl and AddressTbl with CountryId
Approach 1
Go as per the problem statement steps above, but that will be a performance hit (row-by-row read and insert).
Approach 2
Use "Bulk Insert" option "MySqlBulkLoader", but that needs csv files to read, and looks that this option is not going to work for me.
Approach 3
Use a stored proc and use the procedure for the upload. But I don't want to use a stored proc.
Please suggest if there is any other option by which I can do a bulk upload, or suggest any other approach.
Unless you have hundreds of thousands of rows to upload, bulk loading (your approach 2) probably is not worth the extra programming and debugging time it will cost. That's my opinion, for what it's worth (2x what you paid for it :)
Approaches 1 and 3 are more or less the same. The difference lies in whether you issue the queries from C# or from your stored procedure. You still have to work out the queries. So let's deal with 1.
The solutions to these sorts of problems depend on make and model of RDBMS. If you decide you want to migrate to SQL Server, you'll have to change this stuff.
Here's what you do. For each row of your employee csv ...
... Put a row into the employee tbl
INSERT INTO EmployeeTbl (Emp_Name) VALUES (@emp_name);
Notice this query uses the INSERT ... VALUES form of the insert query. When this query (or any insert query) runs, it drops the autoincremented Emp_Id value where a subsequent invocation of LAST_INSERT_ID() can get it.
... Put a row into the address table
INSERT INTO AddressTbl (Emp_Id, Address, countryid)
SELECT LAST_INSERT_ID() AS Emp_Id,
       @address AS Address,
       country_Id AS countryid
FROM CountryTbl
WHERE Country_Name = @country;
Notice this second INSERT uses the INSERT ... SELECT form of the insert query. The SELECT part of all this generates one row of data with the column values to insert.
It uses LAST_INSERT_ID() to get Emp_Id,
it uses a constant provided by your C# program for @address, and
it looks up the countryid value from your pre-existing CountryTbl.
Notice, of course, that you must use the C# Parameters.AddWithValue() method to set the values of the @ parameters in these queries. Those values come from your CSV file.
Finally, wrap each thousand rows or so of your csv in a transaction, by preceding their INSERT statements with a START TRANSACTION; statement and ending them with a COMMIT; statement. That will get you a performance improvement, and if something goes wrong the entire transaction will get rolled back so you can start over.
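A condensed, untested C# sketch of the above (the connection string, the csvRows collection, and its property names are assumptions); it issues the two INSERTs per CSV row and commits a transaction every 1000 rows:

using (var conn = new MySqlConnection(connectionString))
{
    conn.Open();
    MySqlTransaction tx = conn.BeginTransaction();
    int rowCount = 0;

    foreach (var row in csvRows)
    {
        var cmd = new MySqlCommand(
            "INSERT INTO EmployeeTbl (Emp_Name) VALUES (@emp_name); " +
            "INSERT INTO AddressTbl (Emp_Id, Address, countryid) " +
            "SELECT LAST_INSERT_ID(), @address, country_Id " +
            "FROM CountryTbl WHERE Country_Name = @country;",
            conn, tx);
        cmd.Parameters.AddWithValue("@emp_name", row.EmpName);
        cmd.Parameters.AddWithValue("@address", row.Address);
        cmd.Parameters.AddWithValue("@country", row.Country);
        cmd.ExecuteNonQuery();

        if (++rowCount % 1000 == 0)          // commit each batch of ~1000 rows
        {
            tx.Commit();
            tx = conn.BeginTransaction();
        }
    }
    tx.Commit();                             // commit the final partial batch
}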
Here is my requirement:
I want to delete multiple rows of a table
I have the list of all ids that needs to be deleted in the client app.
All I want is to make just a single call from my C# client app to delete all these records.
What I had earlier:
foreach (var id in idList)
{
//Call a Stored Procedure which takes the id
//as parameter and deletes the record
}
This is a problem because there is a database hit for each item in loop.
Then I started using the SQL Server 2008 newly introduced Table Types and Table Valued Parameters for stored procedures for this.
First, have a table type created in the database which has a single column (of the Id's type)
Create a new Stored Procedure which accepts this new table type (Table Valued Parameter)
In the Client code, create a list/Datatable with all the ids to be deleted.
Make a single call to the database to make use of the new stored Procedure.
It is pretty straightforward in the client code, which goes something like this:
SqlParameter parameter = new SqlParameter();
parameter.SqlDbType = System.Data.SqlDbType.Structured;
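For completeness, a fuller sketch of that call might look like the following (the table type, stored procedure, and column names here are made up for illustration):

var idTable = new DataTable();
idTable.Columns.Add("Id", typeof(int));
foreach (var id in idList)
    idTable.Rows.Add(id);

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.DeleteByIdList", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;

    SqlParameter parameter = cmd.Parameters.AddWithValue("@Ids", idTable);
    parameter.SqlDbType = SqlDbType.Structured;
    parameter.TypeName = "dbo.IdListType";   // the table type created in the database

    conn.Open();
    cmd.ExecuteNonQuery();                   // one database hit for the whole list
}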
OK - though I achieved what I set out for, this definitely does not work for SQL Server versions before 2008.
So what I did was to introduce a fall back mechanism:
I introduced a Stored Procedure that takes comma separated id values
My client code will create such comma separated ids, and then simply pass it to the new stored proc.
The Stored Proc will then split all those ids, and then call delete one by one.
So, in a way, I have achieved what I set out for even on SQL Server versions prior to 2008: delete multiple rows with a single database hit.
But I am not that convinced with the last approach I took - A quick search on the internet did not reveal anything much.
So can somebody tell me if I am doing the right thing with comma separated ids and all.
What is the usual best practice to delete multiple records from the client app with a single database hit, provided the ids are known.
Thanks in advance.
Hi, you just concatenate all the values in the list with a delimiter, as follows: 1,2,3,4,5
Here is a sample stored procedure for deleting multiple rows:
CREATE PROCEDURE deleteusers -- deleteusers '1,2,3,4'
    -- Add the parameters for the stored procedure here
    @UsersIds varchar(max)
AS
BEGIN
    DECLARE @abc varchar(max)
    SET @abc = 'delete from users where userid in (' + @UsersIds + ')'
    EXEC(@abc)
END
GO
It will delete the users with userid 1,2,3,4
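From the C# side, the call is then just a matter of joining the ids and passing the string (the connection string and id list are assumed):

string ids = string.Join(",", idList);       // e.g. "1,2,3,4"
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("deleteusers", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@UsersIds", ids);
    conn.Open();
    cmd.ExecuteNonQuery();
}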
You can send a string of ids separated by commas (or some other character to split on) from the C# end.
Create Procedure deleteusers
(
    @ids nvarchar(MAX)
)
as
begin
    declare @pos int, @id varchar(max)

    -- append a trailing comma so the last id is also processed
    set @ids = @ids + ','
    set @pos = CHARINDEX(',', @ids, 1)

    while (@pos > 0)
    begin
        set @id = substring(@ids, 1, @pos - 1)
        delete from TableName where id = @id

        set @ids = stuff(@ids, 1, @pos, '')
        set @pos = CHARINDEX(',', @ids, 1)
    end
end
The dynamic SQL option listed here is probably your best bet, but just to give you one more option:
CREATE PROCEDURE deleteusers -- deleteusers '1,2,3,4'
    -- Add the parameters for the stored procedure here
    @UsersIds varchar(max)
AS
BEGIN
    CREATE TABLE #DeleteUsers (UserId INT)

    -- Split @UsersIds and populate the #DeleteUsers temp table

    DELETE FROM Users WHERE UserId IN (SELECT UserId FROM #DeleteUsers)
END
If you have a comma-delimited list of values, say @list, you can do what you want with this delete statement:
delete from t
    where ',' + @list + ',' like '%,' + cast(t.id as varchar(255)) + ',%'
This is emulating the in expression with string operations.
The downside is that this requires a full-table scan. It will not use an index on id.
Rather than build SQL strings in code, I'd suggest looking at an ORM (object-relational-mapper), such as the Entity Framework or NHibernate.
I have a table with a very simple schema: an ID column as the unique primary key (uniqueidentifier type) and some other nvarchar columns. My current goal is, for 5000 inputs, to determine which ones are already contained in the table and which are not. The inputs are strings, and I have a C# function which converts a string into a uniqueidentifier (GUID). My logic is, if there is an existing ID, then I treat the string as already contained in the table.
My question is, if I need to find out which of the 5000 input strings are already contained in the DB and which are not, what is the most efficient way?
BTW: my current implementation is to convert the string to a GUID using C# code, then invoke/implement a stored procedure which queries whether the ID exists in the database and returns the result back to the C# code.
My working environment: VSTS 2008 + SQL Server 2008 + C# 3.5.
My first instinct would be to pump your 5000 inputs into a single-column temporary table X, possibly index it, and then use:
SELECT X.thecol
FROM X
JOIN ExistingTable E ON E.thecol = X.thecol
to get the ones that are present, and (if both sets are needed)
SELECT X.thecol
FROM X
LEFT JOIN ExistingTable E ON E.thecol = X.thecol
WHERE E.thecol IS NULL
to get the ones that are absent. Worth benchmarking, at least.
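One way to get the 5000 values into that temp table from C# is SqlBulkCopy; the sketch below assumes the names used above (thecol, ExistingTable) plus a uniqueidentifier key, and keeps the connection open so the temp table stays in scope:

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    new SqlCommand("CREATE TABLE #X (thecol UNIQUEIDENTIFIER PRIMARY KEY)", conn)
        .ExecuteNonQuery();

    var tmp = new DataTable();
    tmp.Columns.Add("thecol", typeof(Guid));
    foreach (var g in inputGuids)
        tmp.Rows.Add(g);

    using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "#X" })
        bulk.WriteToServer(tmp);

    var present = new SqlCommand(
        "SELECT X.thecol FROM #X AS X JOIN ExistingTable AS E ON E.thecol = X.thecol", conn);
    using (var reader = present.ExecuteReader())
    {
        while (reader.Read())
            Console.WriteLine(reader.GetGuid(0));   // the ones already in the table
    }
}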
Edit: as requested, here are some good docs & tutorials on temp tables in SQL Server. Bill Graziano has a simple intro covering temp tables, table variables, and global temp tables. Randy Dyess and SQL Master discuss performance issues for and against them (but remember that if you're getting performance problems you do want to benchmark alternatives, not just go on theoretical considerations!-).
MSDN has articles on tempdb (where temp tables are kept) and optimizing its performance.
Step 1. Make sure you have a problem to solve. In many contexts, five thousand rows isn't a lot to insert one at a time.
Are you certain that the simplest way possible isn't sufficient? What performance issues have you measured so far?
What do you need to do with those entries that do or don't exist in your table?
Depending on what you need, maybe the new MERGE statement in SQL Server 2008 could fit your bill - update what's already there, insert new stuff, all wrapped neatly into a single SQL statement. Check it out!
http://blogs.conchango.com/davidportas/archive/2007/11/14/SQL-Server-2008-MERGE.aspx
http://www.sql-server-performance.com/articles/dba/SQL_Server_2008_MERGE_Statement_p1.aspx
http://blogs.msdn.com/brunoterkaly/archive/2008/11/12/sql-server-2008-merge-capability.aspx
Your statement would look something like this:
MERGE INTO
(your target table) AS t
USING
(your source table, e.g. a temporary table) AS s
ON t.ID = s.ID
WHEN NOT MATCHED THEN -- row does not exist in base table
....(do whatever you need to do)
WHEN MATCHED THEN -- row exists in base table
... (do whatever else you need to do)
;
To make this really fast, I would load the "new" records from e.g. a TXT or CSV file into a temporary table in SQL server using BULK INSERT:
BULK INSERT YourTemporaryTable
FROM 'c:\temp\yourimportfile.csv'
WITH
(
FIELDTERMINATOR =',',
ROWTERMINATOR =' |\n'
)
BULK INSERT combined with MERGE should give you the best performance you can get on this planet :-)
Marc
PS: here's a note from TechNet on MERGE performance and why it's faster than individual statements:
In SQL Server 2008, you can perform multiple data manipulation language (DML) operations in a single statement by using the MERGE statement. For example, you may need to synchronize two tables by inserting, updating, or deleting rows in one table based on differences found in the other table. Typically, this is done by executing a stored procedure or batch that contains individual INSERT, UPDATE, and DELETE statements. However, this means that the data in both the source and target tables are evaluated and processed multiple times; at least once for each statement.
By using the MERGE statement, you can replace the individual DML statements with a single statement. This can improve query performance because the operations are performed within a single statement, therefore, minimizing the number of times the data in the source and target tables are processed. However, performance gains depend on having correct indexes, joins, and other considerations in place. This topic provides best practice recommendations to help you achieve optimal performance when using the MERGE statement.
Try to ensure you end up running only one query - i.e. if your solution consists of running 5000 queries against the database, that'll probably be the biggest consumer of resources for the operation.
If you can insert the 5000 IDs into a temporary table, you could then write a single query to find the ones that don't exist in the database.
If you want simplicity, since 5000 records is not very many, then from C# just use a loop to generate an insert statement for each of the strings you want to add to the table. Wrap the insert in a TRY CATCH block. Send em all up to the server in one shot like this:
BEGIN TRY
INSERT INTO table (theCol, field2, field3)
SELECT theGuid, value2, value3
END TRY BEGIN CATCH END CATCH
BEGIN TRY
INSERT INTO table (theCol, field2, field3)
SELECT theGuid, value2, value3
END TRY BEGIN CATCH END CATCH
BEGIN TRY
INSERT INTO table (theCol, field2, field3)
SELECT theGuid, value2, value3
END TRY BEGIN CATCH END CATCH
If you have a unique index or primary key defined on your string GUID, then the duplicate inserts will fail. Checking ahead of time to see if the record does not exist just duplicates work that SQL is going to do anyway.
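A rough, untested sketch of generating that batched script; the table and column names are placeholders, only the GUID column is shown, and StringToGuid stands in for the question's existing string-to-GUID conversion. Inlining the GUID literals is safe here because Guid.ToString() produces only hex digits and dashes:

var sql = new StringBuilder();
foreach (string input in inputStrings)
{
    Guid guid = StringToGuid(input);         // your existing conversion function
    sql.AppendLine("BEGIN TRY");
    sql.AppendLine($"  INSERT INTO YourTable (theCol) VALUES ('{guid}')");
    sql.AppendLine("END TRY BEGIN CATCH END CATCH");
}

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(sql.ToString(), conn))
{
    conn.Open();
    cmd.ExecuteNonQuery();                   // one round trip; duplicates fail inside their CATCH
}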
If performance is really important, then consider downloading the GUIDs to your local station and doing all the analysis locally. Reading 5000 GUIDs should take much less than 1 second. This is simpler than bulk importing to a temp table (which is the only way you will get performance from a temp table) and doing an update using a join to the temp table.
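A rough sketch of that local comparison (the table and column names are assumed; inputGuids is the 5000 converted inputs):

var existing = new HashSet<Guid>();
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT ID FROM YourTable", conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            existing.Add(reader.GetGuid(0));
    }
}

var alreadyThere = inputGuids.Where(g => existing.Contains(g)).ToList();
var notThereYet  = inputGuids.Where(g => !existing.Contains(g)).ToList();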
Since you are using SQL Server 2008, you could use table-valued parameters. They are a way to provide a table as a parameter to a stored procedure.
Using ADO.NET you could easily pre-populate a DataTable and pass it as a SqlParameter.
Steps you need to perform:
Create a custom Sql Type
CREATE TYPE MyType AS TABLE
(
UniqueId INT NOT NULL,
[Column] NVARCHAR(255) NOT NULL
)
Create a stored procedure which accepts the Type
CREATE PROCEDURE spInsertMyType
@Data MyType READONLY
AS
xxxx
Call using C#
SqlCommand insertCommand = new SqlCommand(
"spInsertMyType", connection);
insertCommand.CommandType = CommandType.StoredProcedure;
SqlParameter tvpParam =
    insertCommand.Parameters.AddWithValue(
        "@Data", dataTable);
tvpParam.SqlDbType = SqlDbType.Structured;
Links: Table-valued Parameters in Sql 2008
Definitely do not do it one-by-one.
My preferred solution is to create a stored procedure with one parameter that can take XML in the following format:
<ROOT>
    <MyObject ID="60EAD98F-8A6C-4C22-AF75-000000000000"/>
    <MyObject ID="60EAD98F-8A6C-4C22-AF75-000000000001"/>
    ....
</ROOT>
Then in the procedure, with the argument of type NVARCHAR(MAX), you convert it to XML, after which you use it as a table with a single column (let's call it @FilterTable). The stored procedure looks like:
CREATE PROCEDURE dbo.sp_MultipleParams(@FilterXML NVARCHAR(MAX))
AS BEGIN
    SET NOCOUNT ON

    DECLARE @x XML
    SELECT @x = CONVERT(XML, @FilterXML)

    -- temporary table (must have it, because you cannot join on the XML directly)
    DECLARE @FilterTable TABLE (
        "ID" UNIQUEIDENTIFIER
    )

    -- insert into temporary table
    -- important: XML is case-sensitive
    INSERT @FilterTable
    SELECT x.value('@ID', 'UNIQUEIDENTIFIER')
    FROM @x.nodes('/ROOT/MyObject') AS R(x)

    SELECT o.ID,
           SIGN(SUM(CASE WHEN t.ID IS NULL THEN 0 ELSE 1 END)) AS FoundInDB
    FROM @FilterTable o
    LEFT JOIN dbo.MyTable t
        ON o.ID = t.ID
    GROUP BY o.ID
END
GO
You run it as:
EXEC sp_MultipleParams '<ROOT><MyObject ID="60EAD98F-8A6C-4C22-AF75-000000000000"/><MyObject ID="60EAD98F-8A6C-4C22-AF75-000000000002"/></ROOT>'
And your results look like:
ID FoundInDB
------------------------------------ -----------
60EAD98F-8A6C-4C22-AF75-000000000000 1
60EAD98F-8A6C-4C22-AF75-000000000002 0
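On the C# side, building and sending that XML might look roughly like this (guidList and the connection string are assumptions; uses System.Xml.Linq):

var root = new XElement("ROOT",
    guidList.Select(g => new XElement("MyObject", new XAttribute("ID", g))));

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.sp_MultipleParams", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@FilterXML", root.ToString(SaveOptions.DisableFormatting));
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            Console.WriteLine($"{reader.GetGuid(0)}  {reader.GetInt32(1)}");   // ID, FoundInDB
    }
}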
Obviously I can use BCP, but here is the issue: if one of the records in a batch has an invalid date, I want to redirect that to a separate table/file/whatever, but keep the batch processing running. I don't think SSIS can be installed on the server, which would have helped.
Create a trigger that processes on INSERT. This trigger will do a validation check on your date field. If the row fails the validation, insert it into your separate table; you can also choose whether or not to let the original insert go through.
An important note: by default, triggers do not fire on bulk inserts (BCP and SSIS included). To get this to work, you'll need to specify that you want the trigger to fire, using something like:
BULK INSERT your_database.your_schema.your_table FROM your_file WITH (FIRE_TRIGGERS)
Yeah, if you are using DTS, you should just import into a staging table that uses varchar instead of dates and then massage the data into the proper tables afterwards.
The problem with what Matt said is that you should not use a cursor to manipulate the data afterwards, especially if you have millions of records. Cursors are extremely inefficient and should be avoided.
Use batch processing instead.
But by all means use his idea of a staging table. I wouldn't ever consider importing directly into a production table, as too many things can happen over time to change the data in the input file and cause problems.
You're saying there's a column full of dates in the file, and you want that data to go into a column of type "datetime" in a table in a SQL database? And it'll blow up if one of the values from the file isn't a valid date? I just wanted to make sure I understand this right.
You could create another, temporary, table in the SQL database, of the same structure as the table you want the data from the file to end up in, but with every column of type varchar(255) or something. Sucking the data out of the file and into that table shouldn't fail whether any of the dates is valid or not.
Then, in SQL, you could massage the data however you want. You could use a cursor to select all of the records from the temporary table and loop through them. For each record, you could use the T-SQL ISDATE function to conditionally insert the values from the current record into one table or another.
I'm saying, get the data into the database and then run a script like this:
-- **this is untested, there could be syntax errors**
-- if we have tables like this:
CREATE TABLE temporary (id VARCHAR(255), theDate VARCHAR(255), somethingElse VARCHAR(255))
CREATE TABLE theGood (id INT, theDate DATETIME, somethingElse VARCHAR(255))
CREATE TABLE theBad (id INT, theDate VARCHAR(255))

-- then after getting the data into [temporary], do this:
DECLARE tempCursor CURSOR
    FOR SELECT id, theDate, somethingElse FROM temporary
OPEN tempCursor

DECLARE @id VARCHAR(255)
DECLARE @theDate VARCHAR(255)
DECLARE @somethingElse VARCHAR(255)

FETCH NEXT FROM tempCursor INTO @id, @theDate, @somethingElse
WHILE (@@FETCH_STATUS <> -1)
BEGIN
    IF ISDATE(@theDate) = 1
    BEGIN
        INSERT INTO theGood (id, theDate, somethingElse)
        VALUES (CONVERT(INT, @id), CONVERT(DATETIME, @theDate), @somethingElse)
    END
    ELSE
    BEGIN
        INSERT INTO theBad (id, theDate)
        VALUES (CONVERT(INT, @id), @theDate)
    END
    FETCH NEXT FROM tempCursor INTO @id, @theDate, @somethingElse
END
CLOSE tempCursor
DEALLOCATE tempCursor