I am using the IBM DB2 driver for .NET with command chaining.
I first open a DB2Connection and start a transaction. Then I call DB2Connection.BeginChain on my connection to start a bulk insert. I execute a bunch of prepared statements with 0 as the DB2Command.CommandTimeout. Last, I call DB2Connection.EndChain and commit the transaction.
I expect some of the inserts to fail due to duplicate key errors. I trap this by catching a DB2Exception and inspecting the DB2Exception.Errors collection. I know which row failed because I can look at the DB2Error.RowNumber inside the Errors collection.
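Roughly, the flow looks like this (a minimal sketch; the table, columns, and rows collection are placeholders, not my real code):

using (var conn = new DB2Connection(connectionString))
{
    conn.Open();
    DB2Transaction tx = conn.BeginTransaction();

    DB2Command cmd = conn.CreateCommand();
    cmd.Transaction = tx;
    cmd.CommandTimeout = 0;                                   // no per-command timeout
    cmd.CommandText = "INSERT INTO MY_TABLE (ID, NAME) VALUES (?, ?)";
    cmd.Parameters.Add(new DB2Parameter("p1", DB2Type.Integer));
    cmd.Parameters.Add(new DB2Parameter("p2", DB2Type.VarChar));
    cmd.Prepare();

    try
    {
        conn.BeginChain();                                    // start buffering statements
        foreach (var row in rows)
        {
            cmd.Parameters[0].Value = row.Id;
            cmd.Parameters[1].Value = row.Name;
            cmd.ExecuteNonQuery();                            // queued as part of the chain
        }
        conn.EndChain();                                      // flush the chain; errors surface here
        tx.Commit();
    }
    catch (DB2Exception ex)
    {
        foreach (DB2Error err in ex.Errors)
            Console.WriteLine("row {0}: {1}", err.RowNumber, err.Message);
        tx.Rollback();
    }
}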
The problem is that sometimes I trap a DB2Exception when I call DB2Connection.EndChain and the affected row number is negative.
[IBM][DB2] SQL0952N Processing was cancelled due to an interrupt. SQLSTATE=57014
Searching the DB2 documentation for this error seems to indicate that a query has timed out, but I didn't see any information about how this relates to chaining. Did the entire chain time out, or was it a problem with an individual query? If the latter, why didn't I get a valid row number? And why am I timing out at all if my DB2Connection.ConnectionTimeout is 0 and my DB2Command.CommandTimeout is 0?
Related
I am using a REST API that either inserts or updates a record in a MySQL table based on a unique key. When the API is called in parallel, I sometimes get the error 'Deadlock found when trying to get lock; try restarting transaction'. I don't have access to the server to check the logs.
What I'm curious about: even with multiple callers hitting this API at once, if MySQL takes a row-level lock for the insert/update, it seemingly shouldn't deadlock; the other calls should simply wait to acquire the lock.
For example, if caller A calls the API and takes a row-level lock on 'tableA', caller B should wait until caller A releases the lock rather than hit a deadlock. Please help me understand this.
Below is the query I am using:
INSERT INTO tableA (A,B,C,D) VALUES
{{INSERT_CLAUSE}}
ON DUPLICATE KEY UPDATE `D` = (D) + 1, E = now()
Unique Key On columns - A,B,C
P.S. I have gone through the other suggested answers, but nothing clears this up.
I'm using the following code to do a batch insert using the C# driver. I have a unique index, and I want it to fail silently if I try to insert a record that isn't unique.
Even though I have InsertFlags.ContinueOnError set, I still get an error on the InsertBatch call. If I swallow the error as I have shown below, everything works ok. But this certainly feels wrong.
var mio = new MongoInsertOptions { Flags = InsertFlags.ContinueOnError };
// newImages is a list of POCO objects
try
{
    _db.GetCollection("Images").InsertBatch(newImages, mio);
}
catch (WriteConcernException)
{
}
Are you using version 1.8 of the csharp Mongo driver?
If so, try upgrading to version 1.8.1 which contains a fix for the following two issues:
InsertBatch fails when large batch has to be split into smaller sub batches
InsertBatch throws duplicate key exception with too much data...
So your inserts could succeed, but the driver is still throwing an exception on bulk insert operations due to the bug above.
This exception doesn't originate from the database itself, which explains why the inserts succeed even though you still have to catch the exception afterwards: the database is in fact respecting your ContinueOnError flag, but the driver throws anyway.
When I execute a delete stored procedure I get "ORA-01013: user requested cancel of current operation".
It also takes a while (more than 10 seconds) for the exception to be thrown in the application.
When I execute the query in Toad it takes more than 30 seconds; when I cancel it, the output window shows the error above.
I think the data access block cancels the command automatically when it exceeds the timeout.
I am wondering why it takes 30 seconds at all: when I run the SELECT alone, it returns no records; only the DELETE takes time.
DELETE FROM ( SELECT *
FROM VoyageVesselBunkers a
JOIN VoyageVessel b
ON a.VoyageVesselId = b.Id
WHERE a.Id = NVL(null,a.Id)
AND b.VoyageId = NVL('5dd6a8fbb69d4969b27d01e6c6245094',b.VoyageId)
AND a.VoyageVesselId = NVL(null,a.VoyageVesselId) );
Any suggestions?
Anand
If you have uncommitted changes to a data row sitting in a SQL editor (such as SQL Developer, Oracle, etc.), and you try to update the same row via another program (perhaps one that is running in an IDE such as Visual Studio), you will also get this error. To remedy this possible symptom, simply commit the change in the SQL editor.
Your code is setting a timeout (storedProcCommand.CommandTimeout). The error indicates that the stored procedure call is taking longer than the allowed timeout will allow so it is cancelled. You would either need to increase (or remove) the timeout or you would need to address whatever performance issue is causing the procedure call to exceed the allowed timeout.
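For illustration only (a sketch assuming ODP.NET, and that storedProcCommand is the OracleCommand that calls the delete procedure):

storedProcCommand.CommandTimeout = 0;   // 0 = no client-side timeout, so the client never cancels with ORA-01013
storedProcCommand.ExecuteNonQuery();

Alternatively, keep a finite timeout and tune the DELETE (for example, index VoyageVesselId and VoyageId) so the procedure finishes within it.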
ORA-01013 user requested cancel of current operation
Cause: The user interrupted an Oracle operation by entering CTRL-C, Control-C, or another
canceling operation. This forces the current operation to end. This is an informational
message only.
Action: Continue with the next operation.
I am processing extremely large delimited files. These files have been pre-processed to ensure that field and row delimiters are valid. Occasionally a row is processed that fails TSQL constraints (usually a datatype issue). 'Fixing' the input data is not an option in this case.
We use MAXERRORS to set an acceptable number of input errors and ERRORFILE to log failed rows.
The bulk insert completes in SSMS with severity level 16 error messages logged to the messages window for each failed row. Attempting to execute this code via the C# SqlCommand class causes an exception to be thrown when the first severity level 16 error message is generated, causing the batch to fail.
Is there a way to complete the operation and ignore SQL error messages via C# and something like SqlCommand?
Example Command:
BULK INSERT #some-table FROM 'filename'
WITH(FIELDTERMINATOR ='\0',ROWTERMINATOR ='\n',FIRSTROW = 2, MAXERRORS = 100, ERRORFILE = 'some-file')
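The C# side is essentially just this (sketch; the connection string, temp-table creation, and error handling are omitted):

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(bulkInsertSql, conn))   // the BULK INSERT text above
{
    conn.Open();
    cmd.CommandTimeout = 0;
    cmd.ExecuteNonQuery();   // throws SqlException at the first severity 16 row error
}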
Why not use SqlBulkCopy and capture the rows copied using the SqlRowsCopied event? This will more closely mimic the BULK INSERT T-SQL command.
EDIT: It looks like the error handling isn't that robust with SqlBulkCopy. However, here is an example that seems to do what you're looking for:
http://www.codeproject.com/Articles/387465/Retrieving-failed-records-after-an-SqlBulkCopy-exc
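A rough sketch of that approach (System.Data.SqlClient; the destination table name and source reader are placeholders):

using (var bulk = new SqlBulkCopy(connectionString))
{
    bulk.DestinationTableName = "SomeTable";
    bulk.BatchSize = 10000;
    bulk.NotifyAfter = 10000;                       // raise SqlRowsCopied every N rows
    bulk.SqlRowsCopied += (s, e) =>
        Console.WriteLine("{0} rows copied so far", e.RowsCopied);

    try
    {
        bulk.WriteToServer(sourceReader);           // IDataReader over the delimited file
    }
    catch (SqlException ex)
    {
        // a datatype/constraint failure aborts the current batch; the article above
        // shows one way to recover the offending row
        Console.WriteLine(ex.Message);
    }
}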
Since .NET supports all the same datatypes as SQL, you should be able to TryParse in .NET to catch any conversion errors. For dates you also need to test that the value is within the SQL date range, and for text you need to test the length. I do exactly this on some very large inserts where I parse down CSV files. TryParse is pretty fast, and better than a try/catch since it doesn't have the overhead of throwing an error.
And why not do the insert in .NET/C#? There is a class for bulk copy. I use TVP async inserts while parsing and do 10,000 rows at a time.
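An illustrative validator along those lines (the field layout, column length, and SQL datetime floor of 1753-01-01 are assumptions about the target schema):

static bool RowIsValid(string[] fields)
{
    if (!int.TryParse(fields[0], out int id))
        return false;                               // datatype error: not an int

    if (!DateTime.TryParse(fields[1], out DateTime created) ||
        created < new DateTime(1753, 1, 1))         // outside the SQL Server datetime range
        return false;

    if (fields[2].Length > 50)                      // column is, say, varchar(50)
        return false;

    return true;
}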
It appears that an error is raised for each row that fails, and SQL Server just counts them, but they are also passed back to C# (SSIS in my case). I found that wrapping the bulk insert with TRY/CATCH logic and using THROW to re-throw the exception when more than MAXERRORS occur works for me.
BEGIN TRY
BULK INSERT #some-table FROM 'filename'
WITH(FIELDTERMINATOR ='\0',ROWTERMINATOR ='\n',FIRSTROW = 2, MAXERRORS = 100,
ERRORFILE = 'some-file')
END TRY
BEGIN CATCH
THROW;
END CATCH
I have a simple query which returns 25,026 rows:
MySqlCommand cmd = new MySqlCommand("SELECT ID FROM People", DB);
MySqlDataReader reader = cmd.ExecuteReader();
(ID is an int.) If I just do this:
int i = 0;
while (reader.Read()) i++;
i will equal 25026. However, I need to do some processing on each ID in my loop; each iteration ends up taking somewhere in the hundreds of milliseconds.
int i = 0;
MySqlCommand updater = new MySqlCommand("INSERT INTO OtherTable (...)", anotherConnection);
updater.Prepare();
while (reader.Read()) {
    int id = reader.GetInt32(0);
    // do stuff, then
    updater.ExecuteNonQuery();
    i++;
}
However, after about 4:15 of processing, reader.Read() simply returns false. In most of my test runs i equaled 14896, but it sometimes stops at 11920 instead. The DataReader quitting after the same number of records is suspicious, and the runs where it stops after a different number of rows are stranger still.
Why is reader.Read() returning false when there's definitely more rows? There are no exceptions being thrown – not even first chance exceptions.
Update: I mentioned in my response to Shaun's answer that I was becoming convinced that MySqlDataReader.Read() is swallowing an exception, so I downloaded Connector/Net's source code (bzr branch lp:connectornet/6.2 C:/local/path) and added the project to my solution. Sure enough, after 6:15 of processing, an exception!
The call to resultSet.NextRow() throws a MySqlException with a message of "Reading from the stream has failed." The InnerException is a SocketException:
{ Message: "An existing connection was forcibly closed by the remote host",
ErrorCode: 10054,
SocketErrorCode: ConnectionReset }
10054 means the TCP socket was aborted with a RST instead of the normal disconnection handshake (FIN, FIN ACK, ACK), which tells me something screwy is happening to the network connection.
In my.ini, I cranked interactive_timeout and wait_timeout to 1814400 (seconds) to no avail.
So... why is my connection getting torn down after reading for 6:15 (375 sec)?
(Also, why is this exception getting swallowed when I use the official binary? It looks like it should bubble up to my application code.)
Perhaps you have a corrupted table - this guy's problem sounds very similar to yours:
http://forums.asp.net/t/1507319.aspx?PageIndex=2 - repair the table and see what happens.
If that doesn't work, read on:
My guess is that you are hitting some type of deadlock, especially considering you are reading and writing. This would explain why it works with the simple loop but doesn't work when you do updates, and it would also explain why it happens around the same row/time each time.
There was a weird bug in SqlDataReader that squelched exceptions (http://support.microsoft.com/kb/316667). There might be something similar in MySqlDataReader. After your final .Read() call, try calling .NextResult(); even if it's not a deadlock, it might help you diagnose the problem. In these types of situations you want to lean more towards "trust but verify": yes, the documentation says that an exception will be thrown on timeout, but sometimes (very rarely) the documentation lies :) This is especially true for 3rd party vendors - e.g. see http://bugs.mysql.com/bug.php?id=53439 - the MySQL .NET library has had a couple of problems like yours in the past.
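The diagnostic is tiny (sketch):

while (reader.Read()) { /* ... */ }
// if Read() returned false early, this call can surface an exception that the
// driver swallowed instead of throwing it from Read()
bool more = reader.NextResult();
Console.WriteLine("NextResult returned {0}", more);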
Another idea would be to watch what's happening in your database: make sure data is constantly being fetched right up to the row where your code exits.
Failing that, I would just read all the data in, cache it, and then do your modifications. By batching the modifications, the code would be less chatty and execute faster.
Alternatively, reset the reader every 1000 or so rows (and keep track of what row ID you were up to)
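For instance, re-issuing the query from the last ID seen would look roughly like this (sketch of keyset paging; the query text is made up):

int lastId = 0;
while (true)
{
    var pageCmd = new MySqlCommand(
        "SELECT ID FROM People WHERE ID > @lastId ORDER BY ID LIMIT 1000", DB);
    pageCmd.Parameters.AddWithValue("@lastId", lastId);

    var ids = new List<int>();
    using (var pageReader = pageCmd.ExecuteReader())
        while (pageReader.Read()) ids.Add(pageReader.GetInt32(0));

    if (ids.Count == 0) break;                       // no more rows
    foreach (int id in ids)
    {
        // do stuff, then updater.ExecuteNonQuery();
        lastId = id;
    }
}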
Hope something here helps you with your frustration! :)
Since I'm just reading ints, I ultimately just read the entire result set into a List<int>, closed the reader, and then did my processing. This is fine for ints, since even a million of them take up < 100 MB of RAM, but I'm still disappointed that the root issue isn't resolved: if I were reading more than a single int per row, memory would become a very large problem with a large dataset.
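The workaround looks roughly like this (sketch, reusing the names from the question):

var ids = new List<int>();
using (MySqlDataReader reader = cmd.ExecuteReader())
    while (reader.Read()) ids.Add(reader.GetInt32(0));   // reader is closed before any processing

foreach (int id in ids)
{
    // do stuff, then
    updater.ExecuteNonQuery();
}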
Try setting a longer connection timeout.
There are 2 issues that make things a bit more confusing than they should be:
The first, as has been mentioned in another post, is that older versions of the MySQL .NET connector were swallowing a timeout exception. I was using mysql.data.dll version 6.1.x and after upgrading to 6.3.6 the exception was being properly thrown.
The second is the default MySQL server timeouts, specifically net_read_timeout and net_write_timeout (which default to 30 and 60 seconds respectively).
With older versions of mysql.data.dll, when you are performing an action with the data in the datareader loop and you exceed the 60 second default timeout it would just sit there and not do anything. With newer versions it properly throws a timeout exception which helps diagnose the problem.
Hope this helps someone; I stumbled upon this question myself, but the accepted solution was a different approach rather than the actual cause/fix.
TLDR: The fix is to increase net_read_timeout and net_write_timeout on the MySQL server in my.ini, although upgrading mysql.data.dll is also a good idea.
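If editing my.ini isn't an option, the same variables can also be raised for the session right after opening the connection (a sketch, not part of the original answer; values are in seconds):

using (var setCmd = new MySqlCommand(
    "SET SESSION net_read_timeout = 600, SESSION net_write_timeout = 600", connection))
{
    setCmd.ExecuteNonQuery();
}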
Maybe this is a timeout on the server side?
Try this:
1. Add a reference to System.Transactions.
using (TransactionScope scope = new TransactionScope())
{
    // initialize the connection
    // execute your commands
    // ...
    scope.Complete();
}
Write your entire insert/update logic inside the scope's using block. This will definitely help you.
Add the following after creating your Command.
cmd.CommandTimeout = 0;
This sets the CommandTimeout to indefinite. The reason you're getting a timeout is probably that the connection, even though the command has executed, is still in the 'command' phase because of the reader.
Either set CommandTimeout = 0 or read everything first and then run the subsequent operations on the results. Otherwise, the only other issue I could possibly see is that the SQL server is dropping the result set for the specified process id due to a timeout on the server itself.
I've found an article here:
http://corengen.wordpress.com/2010/06/09/mysql-connectornet-hangs-on-mysqldatareader-read/
What this guy experienced was something similar: a hang on the Read method at exactly the same moment, while reading the same record (which I guess is the same thing you are experiencing).
In his case he called another web service during the Read() loop, and that call timed out, causing Read() to hang without an exception.
Could the same thing be happening on your machine: an update in the Read() loop times out (I think that update uses the default 30-second timeout) and causes the same effect?
Maybe a long shot, but reading the two stories they sounded a lot alike.
Try setting the timeout on the MySqlCommand and not the MySqlConnection on both "cmd" and "updater".
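In other words (sketch):

cmd.CommandTimeout = 0;       // the SELECT that feeds the reader
updater.CommandTimeout = 0;   // the prepared INSERT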
What happens if you do: SELECT ID FROM People LIMIT 100 (the MySQL equivalent of TOP 100)?