Does SqlCommand.ExecuteNonQuery() insert previous rows before an error? - c#

I have a temp table that I created in SQL Server, and the CommandText of my SqlCommand object inserts from the temp table into another table. My question is: does it insert the rows that come before an error row?
So for example, let's say there are 1,000 rows in the temp table and 0 in tableA. I do an insert from the temp table into tableA. There is an error on row 999 and an exception is thrown. Does tableA now have 998 rows in it? Or is it 0?
I have tried googling this question but haven't found anything. I have also read the documentation on SqlCommand.ExecuteNonQuery() and haven't found an answer. I would appreciate any help or leads.

Since your INSERT is a single statement, then no: there will be 0 rows in tableA.
If you had multiple statements in a batch, then each successfully executed statement will perform the requested modifications EXCEPT the statement that errors out, which will leave the tables in the state they were in when the prior statement completed.
If you have the multi-statement batch mentioned above wrapped inside a TRANSACTION then, generally speaking, if one of the statements errors you can roll back the entire batch to the state prior to any of the statements executing.
Note: again, this is generally speaking. There are many external factors that can leave your data in an inconsistent state (server failure, I/O corruption, etc.), in which case SQL Server will try to recover your data using the transaction log.
This is a single statement
INSERT tableA (col1,col2,col3)
SELECT col1,col2,col3
FROM #tmpTable;
An error here (such as datatype mismatch, NULL value on a NOT NULL column, etc) will result in 0 rows being inserted into tableA.
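For illustration, a minimal C# sketch of the scenario (the connection string is a placeholder, and the temp table is assumed to have been created earlier on this same connection):
using System;
using System.Data.SqlClient;

// Minimal sketch: execute the single INSERT ... SELECT and observe that a
// failure leaves tableA unchanged, because the statement is atomic.
using (var conn = new SqlConnection("<your connection string>"))
{
    conn.Open();
    var cmd = new SqlCommand(
        "INSERT tableA (col1, col2, col3) SELECT col1, col2, col3 FROM #tmpTable;",
        conn);
    try
    {
        int rows = cmd.ExecuteNonQuery();   // number of rows inserted on success
        Console.WriteLine($"{rows} rows inserted.");
    }
    catch (SqlException ex)
    {
        // The single statement is rolled back as a unit, so tableA gains 0 rows.
        Console.WriteLine($"Insert failed, nothing was inserted: {ex.Message}");
    }
}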

Related

Is there a way to have Entity Framework Core partially save data?

I'm working on some bulk inserts with Entity Framework Core. To minimize round trips to the database, the new inserts are batched in groups of 100 before being added to the database context and saved using SaveChanges().
The current problem is that if any records in the batch fail to insert because of, e.g., unique key violations on the table, the entire transaction is rolled back. In this scenario it would be ideal to simply discard the records that could not be inserted and insert the rest.
I'm more than likely going to need to write a stored procedure for this, but is there any way to have Entity Framework Core skip over rows that fail to insert?
In your stored procedure, use a MERGE statement instead of a plain INSERT, and then only use the WHEN NOT MATCHED clause:
MERGE INTO targetTable AS existing WITH (HOLDLOCK)
USING @tvp AS incoming
    ON (incoming.PK = existing.PK)
WHEN NOT MATCHED THEN
    INSERT (PK, col1, col2)            -- list your actual columns here
    VALUES (incoming.PK, incoming.col1, incoming.col2);
The records that match will be discarded. The @tvp is the Table Valued Parameter that is being given to the stored proc from your app code (a sketch of the C# call is at the end of this answer).
There are locking considerations when using the MERGE statement that may or may not apply to your scenario. It's worth reading up on concurrency and atomicity for it to make sure you cover the rest of your bases.
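On the C# side, passing the batch to such a stored procedure as a TVP might look roughly like the sketch below; the type name dbo.RecordType, the procedure name dbo.InsertMissingRecords, and the columns are hypothetical placeholders, not names from the question:
using System.Data;
using System.Data.SqlClient;

// Sketch: send the batch of rows to a stored procedure as a table-valued
// parameter. "dbo.RecordType" and "dbo.InsertMissingRecords" are hypothetical.
var table = new DataTable();
table.Columns.Add("PK", typeof(int));
table.Columns.Add("Value", typeof(string));
// ... fill table.Rows from the batch of ~100 entities ...

using (var conn = new SqlConnection("<your connection string>"))
using (var cmd = new SqlCommand("dbo.InsertMissingRecords", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    var p = cmd.Parameters.AddWithValue("@tvp", table);
    p.SqlDbType = SqlDbType.Structured;
    p.TypeName = "dbo.RecordType";      // the user-defined table type

    conn.Open();
    cmd.ExecuteNonQuery();              // the procedure runs the MERGE shown above
}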
If you are going with a stored procedure anyway, you can declare a TVP. In C#, when filling and submitting the TVP fails, the catch block tells you that the batch failed, and you can then fall back to recursion over those 100 rows (a rough sketch follows at the end of this answer).
Recursive function
It breaks the N rows into two halves of n/2 and calls the TVP fill again for each half. If the first half is OK it proceeds, and the half containing the bad row will fail; on whichever half fails, you simply call the recursive function again. This keeps your good records in the TVP and the failed records separate. You can repeat this recursion up to a depth of X, where X is a small number such as 5, 6, or 7. After that, you will be left with only the bad records.
If you need more background on TVPs (table-valued parameters), see the SQL Server documentation.
Note: you cannot use parallel query execution with this approach.
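A rough C# sketch of that split-and-retry idea is below; saveBatch stands in for whatever fills the TVP and calls the stored procedure (or SaveChanges), and maxDepth corresponds to the X mentioned above:
using System;
using System.Collections.Generic;
using System.Linq;

static class BatchSaver
{
    // Sketch: try to save the whole batch; if it fails, split it in half and
    // recurse, up to maxDepth levels. Rows that still fail at the bottom are
    // collected in "failed". saveBatch is a placeholder for your own save call.
    public static void SaveWithBisection<T>(IReadOnlyList<T> batch, int depth, int maxDepth,
                                            Action<IReadOnlyList<T>> saveBatch, List<T> failed)
    {
        if (batch.Count == 0) return;
        try
        {
            saveBatch(batch);                       // whole (sub-)batch succeeded
        }
        catch (Exception) when (depth < maxDepth && batch.Count > 1)
        {
            int mid = batch.Count / 2;              // split and retry each half
            SaveWithBisection(batch.Take(mid).ToList(), depth + 1, maxDepth, saveBatch, failed);
            SaveWithBisection(batch.Skip(mid).ToList(), depth + 1, maxDepth, saveBatch, failed);
        }
        catch (Exception)
        {
            failed.AddRange(batch);                 // give up on these rows
        }
    }
}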

Oracle Unique Constraint Throwing Exception when rows do not exist

I have a table in an Oracle database, let's call it Task, where I'm inserting a bunch of rows from a batch process.
I have a unique constraint set up on four columns (locationId, shelfId, itemId, and batchId), one of which (shelfId) is nullable.
In the process that parses the CSV file's values (read from another database table), the records are batched in groups of 100 and posted to an API for further parsing (into the format of the above-mentioned table), then inserted for later submission to another table (in a different schema, but with the same unique constraint). The issue I'm running into is that there are duplicates, per the above constraint, in the file (they are typically sequential, and I have only ever seen one additional entry). After the records have been parsed they are inserted, and I'm seeing the unique constraint exception being thrown on rows that a) do not already have a row in the table and b) do not violate the unique constraint. When I remove the duplicates from the initial import file I do not get any unique constraint exceptions (which... makes sense, weirdly enough).
I'm using Entity Framework in .NET for the Oracle database, which I wouldn't think has anything to do with this, but it may, judging by the weirdness of this issue. I'm completely stumped as to what to do. I've tried writing additional validation, looking up the records in the table before inserting them, and removing them from the initial file (which works as a workaround), but I'm unsure of a long-term solution.
Example Data:
LocationId  ShelfId  ItemID   BatchId
1           NULL     00AXXFD  1
1           NULL     00AXXFD  1
1           NULL     00FFD12  1
etc...
You are getting the unique key (UK) error because your input data contains duplicates. When you insert all of them at once they are part of the same transaction, so Oracle sees the duplicates and throws the exception even before you commit. After the failure the transaction rolls back, so you don't see any records inserted, and hence no duplicates are found in the table.
The correct approach is to remove duplicates from the input data (as you are doing) before inserting (a sketch of one way to do this in C# follows below).
Alternatively, you could let Oracle enforce the UK by committing after the insertion of each row, so that only the duplicate rows fail.
Note: as I was saying, you may not be committing after inserting each row. It doesn't matter whether the insertion happens one row at a time or all at once; what matters is the transaction scope. JDBC has autocommit=true/false to enable single-operation commits: when it is true, a transaction is committed after every operation. In general it needs to be false so that you can control the transaction scope.
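One way to do the de-duplication in C# before the insert is sketched below; the collection name tasks is hypothetical, and the property names mirror the constraint columns from the question:
using System.Linq;

// Sketch: drop in-batch duplicates on the unique-constraint columns before
// inserting, keeping the first occurrence of each key. "tasks" stands for
// the parsed batch of entities about to be added to the context.
var deduped = tasks
    .GroupBy(t => new { t.LocationId, t.ShelfId, t.ItemId, t.BatchId })
    .Select(g => g.First())
    .ToList();

// Add/save "deduped" instead of "tasks".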

DB Transaction: can we block just a record, not the whole table?

I am trying to insert the data into the Wallets table of a SQL Server database.
There can be many requests at the same time, so because of the sensitive data involved I have to use transactions.
The workflow is the following:
read the amount of the user's wallet
insert the new record based on the previously received data
I tried different isolation levels, but in every case the transaction blocks the whole table, not just the record I am working with. Even ReadUncommitted and RepeatableRead block the whole table.
Is there a way to block only the records I am working with?
Let me add some detail:
I don't use any indexes in the table
The workflow (translating C# into SQL) is the following:
1) Select * from Balance
2) Insert ... INTO Balance
UPDLOCK is used when you want to lock a row or rows during a select statement for a future update statement
Transaction-1:
BEGIN TRANSACTION
SELECT * FROM dbo.Test WITH (UPDLOCK) /*read the amount of the user's wallet*/
/* update the record on same transaction that were selected in previous select statement */
COMMIT TRANSACTION
Transaction-2:
BEGIN TRANSACTION
/* insert a new row in table is allowed as we have taken UPDLOCK, that only prevents updating the same record in other transaction */
COMMIT TRANSACTION
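In C#, that pattern might look roughly like the sketch below; the table and column names (Balance, UserId, Amount) and the variables userId/delta are placeholders, and for the lock to stay on a single row the filtered column should be indexed:
using System.Data;
using System.Data.SqlClient;

// Sketch: read the user's balance with UPDLOCK + ROWLOCK so only that row is
// locked, then insert the new record inside the same transaction.
// userId and delta are assumed to come from the caller.
using (var conn = new SqlConnection("<your connection string>"))
{
    conn.Open();
    using (var tran = conn.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        var read = new SqlCommand(
            "SELECT Amount FROM dbo.Balance WITH (UPDLOCK, ROWLOCK) WHERE UserId = @userId;",
            conn, tran);
        read.Parameters.AddWithValue("@userId", userId);
        var current = (decimal)read.ExecuteScalar();

        var insert = new SqlCommand(
            "INSERT INTO dbo.Balance (UserId, Amount) VALUES (@userId, @amount);",
            conn, tran);
        insert.Parameters.AddWithValue("@userId", userId);
        insert.Parameters.AddWithValue("@amount", current + delta);
        insert.ExecuteNonQuery();

        tran.Commit();
    }
}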
It isn't possible to control the lock escalation process (row - page - table - database); unfortunately, it happens automatically. But you can get some positive effects if you:
reduce the amount of data used in your queries
optimize queries with hints, indexes, etc.
For INSERT INTO a table, the WITH (ROWLOCK) hint can improve performance.
Also, SELECT statements take shared (S/IS) locks, which prevent updates to the data but do not block other readers.
You should use optimistic locking. That will only affect the current row, not the whole table (a rough sketch follows after the links below).
You can read the links below for more reference:
optimistic locking
Optimistic Concurrency
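A rough sketch of the manual optimistic-concurrency pattern in C# is below; it assumes a rowversion column named RowVersion on the Wallets table and an already-open connection, which are not details from the question:
using System.Data.SqlClient;

// Sketch: update only if the row version read earlier is still current.
// "RowVersion" is an assumed rowversion column; conn is an open SqlConnection;
// userId, newAmount and originalRowVersion come from the earlier read.
var update = new SqlCommand(
    @"UPDATE dbo.Wallets
         SET Amount = @newAmount
       WHERE UserId = @userId
         AND RowVersion = @originalRowVersion;", conn);
update.Parameters.AddWithValue("@newAmount", newAmount);
update.Parameters.AddWithValue("@userId", userId);
update.Parameters.AddWithValue("@originalRowVersion", originalRowVersion);

if (update.ExecuteNonQuery() == 0)
{
    // Another request changed the row since it was read: reload and retry,
    // or report a concurrency conflict to the caller.
}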

Error on large data in FluentMigrator Execute(string template, params object[] args);

I have a SQL script that migrates data from an old table to a new one via FluentMigrator's Execute method.
This is my script:
INSERT INTO [Demo].[C]([key], [value], [tempID]) SELECT [name], [value], [userID] FROM [Demo].[A]
INSERT INTO [Demo].[B]([parentID], [propertyID]) SELECT [tempID], [id] FROM [Demo].[C] WHERE [tempID] IS NOT NULL
UPDATE [Demo].[C] SET [tempID] = NULL
The userProperty table has about 11 million rows, and:
in the first step, I insert into some columns of the C table (11 million rows)
in the second step, I insert data from the C table into the B table (11 million rows)
in the third step, I update the C table (11 million rows)
That is roughly 11 million rows per step, but I'm getting this error:
The error was The transaction log for database 'test' is full
due to 'ACTIVE_TRANSACTION'.
I want to find the fastest way of doing this, because it is a one-time script.
Your transaction log file is full and there is no disk space left for it to grow (if the auto-grow option is enabled).
Execute the query below to get more details about your transaction log file settings:
SELECT [type_desc]
,[name]
,[physical_name]
,[size]
,[max_size]
,[growth]
FROM [sys].[database_files];
There might be different solutions to your problem. For example, get more disk space and enable the auto-grow option, execute the steps separately, etc.
A few things to check for sure:
is your database under the FULL or SIMPLE recovery model
if it is using the FULL recovery model, check whether regular backups of the transaction log are being taken (if they are not, the log will grow as much as it can and eat your disk space)
If your database does not need to be under the FULL recovery model, you can switch it to SIMPLE.
If you are running these in the same query window, SSMS runs them all in one implicit transaction. Try putting each of them in a separate explicit transaction. Also, clear your transaction log.
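One common way to keep the log from filling is to break the large statements into smaller batches that each commit on their own. A rough C# sketch for the final UPDATE step is below; it assumes the statement is not wrapped in a single outer migration transaction, and the batch size of 100,000 is an arbitrary starting point:
using System.Data.SqlClient;

// Sketch: run the "set tempID to NULL" step in batches so each batch commits
// separately and the transaction log space can be reused between batches.
using (var conn = new SqlConnection("<your connection string>"))
{
    conn.Open();
    var cmd = new SqlCommand(
        "UPDATE TOP (100000) [Demo].[C] SET [tempID] = NULL WHERE [tempID] IS NOT NULL;",
        conn);
    cmd.CommandTimeout = 0;                    // the loop can take a while

    int affected;
    do
    {
        affected = cmd.ExecuteNonQuery();      // each call commits on its own
    } while (affected > 0);
}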

Show how many rows were deleted

I use a C# program and my database is SQL Server 2008.
When the user deletes some rows from the database, I want to show them in the Windows application how many rows were deleted.
I want to know how I can send that SQL message to C# and show it to the user.
For example, when I delete 4 rows from a table, SQL shows a message like (4 row(s) affected). Now I want to send the number 4 to my C# program. How can I do it? Thank you.
If you are using SqlCommand from your .NET application to perform your delete/update, ExecuteNonQuery() returns the number of rows affected by the command.
See http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.executenonquery.aspx.
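For example, a minimal sketch (the table name, WHERE clause, and connection string are placeholders):
using System.Data.SqlClient;
using System.Windows.Forms;

// Sketch: the return value of ExecuteNonQuery is the number of rows the
// DELETE affected, which can be shown directly to the user.
using (var conn = new SqlConnection("<your connection string>"))
{
    conn.Open();
    var cmd = new SqlCommand("DELETE FROM MyTable WHERE SomeColumn = @value;", conn);
    cmd.Parameters.AddWithValue("@value", someValue);   // someValue comes from the UI

    int rowsDeleted = cmd.ExecuteNonQuery();
    MessageBox.Show($"{rowsDeleted} row(s) deleted.");
}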
If you're using the System.Data.SqlClient.SqlCommand.ExecuteNonQuery method or System.Data.Common.DbCommand.ExecuteNonQuery method, then the return value should be the number of rows affected by your statement (the last statement in your command, I think).
There is a caveat to this...if you execute a batch or stored procedure that does SET NOCOUNT ON, then the number of rows affected by each statement is not reported and ExecuteNonQuery will return -1 instead.
in T-SQL, there is a @@ROWCOUNT variable that you can access in order to get the number of rows affected by the last statement. Obviously you would need to grab that immediately after your DELETE statement, but I believe you could do a RETURN @@ROWCOUNT within your T-SQL if you are using SET NOCOUNT ON.
Alternatives would be to return the value as an OUTPUT parameter, especially if you have a batch of multiple statements and you'd like to know how many rows are affected by each. Some people like to use the T-SQL RETURN statement to report success/failure, so you may want to avoid returning "number of rows affected" for consistency's sake.
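A rough sketch of the OUTPUT-parameter route from C# is below; the procedure name dbo.DeleteRows and the @RowsDeleted parameter are hypothetical, and the procedure is assumed to do SET @RowsDeleted = @@ROWCOUNT right after its DELETE:
using System.Data;
using System.Data.SqlClient;

// Sketch: read the row count back through an OUTPUT parameter.
using (var conn = new SqlConnection("<your connection string>"))
using (var cmd = new SqlCommand("dbo.DeleteRows", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    var rowsDeleted = cmd.Parameters.Add("@RowsDeleted", SqlDbType.Int);
    rowsDeleted.Direction = ParameterDirection.Output;

    conn.Open();
    cmd.ExecuteNonQuery();

    int count = (int)rowsDeleted.Value;        // rows deleted by the procedure
}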
I imagine you would want to do a SELECT COUNT(*) with the same criteria before you issue the DELETE, then capture that number and use it as needed.
Use the @@ROWCOUNT SQL environment variable.
You can return it from a stored procedure if you are using them.
