ORA-01013: user requested cancel of current operation - c#

When I execute a delete stored procedure I get "ORA-01013: user requested cancel of current operation".
It also takes a while (more than 10 seconds) for the exception to be thrown in the application.
When I execute this query in Toad it takes more than 30 seconds; when I cancel it, the output window shows the error above.
I think the data access block cancels the operation automatically when it exceeds the timeout.
I am wondering why it takes 30 seconds at all. When I run the select query alone, it returns no records.
It is only when I call the delete that it takes time.
DELETE FROM ( SELECT *
              FROM VoyageVesselBunkers a
              JOIN VoyageVessel b
                ON a.VoyageVesselId = b.Id
              WHERE a.Id = NVL(null, a.Id)
                AND b.VoyageId = NVL('5dd6a8fbb69d4969b27d01e6c6245094', b.VoyageId)
                AND a.VoyageVesselId = NVL(null, a.VoyageVesselId) );
Any suggestions?
anand

If you have uncommitted changes to a data row sitting in a SQL editor (such as Oracle SQL Developer or Toad) and you try to update the same row from another program (perhaps one running in an IDE such as Visual Studio), you will also get this error. To remedy this possible symptom, simply commit (or roll back) the change in the SQL editor.

Your code is setting a timeout (storedProcCommand.CommandTimeout). The error indicates that the stored procedure call is taking longer than that timeout allows, so it is cancelled. You would either need to increase (or remove) the timeout, or address whatever performance issue is causing the procedure call to exceed it.
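For example (a minimal sketch; it assumes storedProcCommand is an ODP.NET OracleCommand, as the property name in your code suggests):

// Raise (or disable) the timeout before executing the procedure.
storedProcCommand.CommandTimeout = 120;   // seconds
// storedProcCommand.CommandTimeout = 0;  // 0 = wait indefinitely; use with care
storedProcCommand.ExecuteNonQuery();

Either way, a delete this slow usually points at a missing index or a lock wait, so raising the timeout is only a stopgap.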

ORA-01013 user requested cancel of current operation
Cause: The user interrupted an Oracle operation by entering CTRL-C or another canceling operation. This forces the current operation to end. This is an informational message only.
Action: Continue with the next operation.

Related

How do I prevent two (or more) threads that work on a table at the same time from working on the same row?

I am writing a C# WinForms application that fetches data from URLs saved in a table named "Links". Each link has a "Last Checked" and a "Next Check" datetime, and there is an "Interval" that determines "Next Check" based on "Last Checked".
Right now I fetch the ID with a query BEFORE doing the web scraping, then set Last Checked to DateTime.Now and Next Check to null until everything is completed. Both are then updated once the web scraping is done.
The problem with this is that if the ongoing process is aborted, Last Checked will hold a date but Next Check will be left null.
So I need a better way to keep two processes from working on the same row of the same table, but I am not sure how.
For a multithreaded solution, the standard engineering approach is to use a pool of workers and a pool of work.
This is just a conceptual sketch - you should adapt it to your circumstances:
A worker (i.e. a thread) looks at the pool of work. If there is some work available, it marks it as in_progress. This has to be done so that no two threads can take the same work. For example, you could use a lock in C# around the database query that marks a row before returning it (see the sketch after this list).
You need a way of un-marking the work after the thread finishes. Successful or not, in_progress must be reset. Typically you would use a finally block so that you don't miss it in the event of an exception.
If there is no work available, the thread goes to sleep.
Whenever new work arrives (i.e. an INSERT, or a nextcheck falls due), one of the sleeping threads is awakened.
When your program starts, it should clear any in_progress flags left over from a previous crash.
You should take advantage of DBMS transactions so that any changes a worker makes after completing its work are atomic - i.e. other threads perceive them as if they had happened all at once.
By changing the size of the worker pool, you can set the maximum number of simultaneously active workers.
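A minimal C# sketch of the claim/release cycle described above. The table and column names (Links, in_progress, NextCheck) are taken from the question but are assumptions, as is the use of SQL Server; adapt the SQL and the connection handling to your setup:

using System;
using System.Data.SqlClient;   // assumption: SQL Server; swap for your provider

class LinkWorker
{
    static readonly object workLock = new object();
    readonly string connStr;

    public LinkWorker(string connStr) { this.connStr = connStr; }

    // Claim the next due link, or return null if no work is available.
    // The atomic UPDATE guards against other processes; the lock guards
    // against other threads in this process.
    int? ClaimNextLink()
    {
        lock (workLock)
        {
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(
                @"UPDATE TOP (1) Links SET in_progress = 1
                  OUTPUT inserted.Id
                  WHERE in_progress = 0 AND NextCheck <= GETDATE()", conn))
            {
                conn.Open();
                return (int?)cmd.ExecuteScalar();   // null when no row qualified
            }
        }
    }

    public void DoOneUnitOfWork()
    {
        int? id = ClaimNextLink();
        if (id == null) return;   // no work: the caller can sleep
        try
        {
            // ... scrape the link, then update LastCheck/NextCheck ...
        }
        finally
        {
            // Successful or not, always release the claim.
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(
                "UPDATE Links SET in_progress = 0 WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", id.Value);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }
}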
First things first: the controller/worker separation might be a better pattern, as mentioned in the other answer. It will work better if the number of threads gets large and the number of links to check is large.
But if your problem is this:
But problem with it is, if for any reason that scraping gets aborted/finishes halfway/doesn't work properly, LastCheck becomes DateTime.Now but NextCheck is left NULL, and previous LastCheck/NextCheck values are gone, and LastCheck/NextCheck values are updated for a link that is not actually checked
You just need to handle errors better.
The failure will result in an exception. Catch the exception and handle it by resetting the state in the database. For example:
void DoScraping(.....)
{
    try
    {
        // ... do the scraping ...
    }
    catch (Exception err)
    {
        // oh dear, it went wrong - reset lastcheck/nextcheck here
    }
}
What you reset lastcheck/nextcheck to is up to you. You could reset them to what they were at the start: when you determine 'the next thing to do', also read the current lastcheck/nextcheck values into variables, and in the event of failure set them back to what they were before.

How to Control a Long-Running Query

We manage a web application that runs stored procedures, one of which is triggered whenever a user searches for a particular item. Unfortunately, one day this SP became long-running, causing the database to perform very poorly. That eventually caused a lot of problems for our applications. I found out that a long-running query was the problem by running this database script:
SELECT sqltext.TEXT,
       req.session_id,
       req.status,
       req.command,
       req.cpu_time,
       req.total_elapsed_time
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS sqltext
WHERE sqltext.[Text] NOT LIKE '--''check%'
ORDER BY req.cpu_time DESC
What we did to fix this was to execute KILL [SESSION_ID], and after a few seconds our application was back to normal. Now we would like to handle this type of problem proactively. So when I say "control", I mean: is it possible for the web application to terminate the session gracefully (without causing subsequent problems) after a certain period of time, or should this be handled within SQL Server itself?
If anyone still needs further clarification, please feel free to comment.
Do you really need the filter WHERE sqltext.[Text] NOT LIKE '--''check%'?
1. sys.dm_exec_requests also has a start_time column, which you are currently not using in the WHERE clause. Pass in a start time so the query does not go back over all the data from the beginning.
2. Look at the data and try to modify the WHERE clause accordingly.
3. Pull the execution plan for the proc - it is quite likely doing a table scan, which is not good.
Here are the steps to get the execution plan
Step 1
Modify the dates, the proc name, and the database name:
select distinct top 1000 qs.plan_handle, o.name, d.name --, ps.database_id
from sys.dm_exec_query_stats qs,
     sys.dm_exec_procedure_stats ps,
     sys.objects o,
     sys.databases d
where qs.last_execution_time > '2017-03-29 17:06:42.340'
  and qs.last_execution_time < '2017-03-30 18:19:45.653'
  and ps.sql_handle = qs.sql_handle
  and o.object_id = ps.object_id
  and o.name = 'Your proc name here'
  and d.database_id = ps.database_id
  and d.name = 'database name '
Step 2
Set output to grid and save as .sqlplan
You get a link; if you have enough permissions you can click on it and it will open. Make sure you have the query output options set so that enough space is allowed for the XML output.
select query_plan
from sys.dm_exec_query_plan (copy your plan_handle here from step 1; do not use quotes)
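As for controlling this proactively from the web application: setting SqlCommand.CommandTimeout bounds how long the app will wait. When the timeout elapses, the client sends an attention signal and SQL Server cancels the statement server-side, so you are not left hunting for a session to KILL. A sketch (the procedure name and connection string are placeholders):

using System;
using System.Data;
using System.Data.SqlClient;

class SearchRunner
{
    public static void RunSearch(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.SearchItems", conn))   // placeholder proc name
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandTimeout = 30;   // seconds; 0 would mean wait forever
            conn.Open();
            try
            {
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read()) { /* ... consume rows ... */ }
                }
            }
            catch (SqlException ex) when (ex.Number == -2)   // -2 = client timeout
            {
                // The statement was cancelled server-side; log it and show the
                // user a "search took too long" message instead of hanging.
            }
        }
    }
}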

Unwanted DB2 timeout when using command chaining

I am using the IBM DB2 driver for .NET with command chaining.
I first open a DB2Connection and start a transaction. Then I call DB2Connection.BeginChain on my connection to start a bulk insert. I execute a bunch of prepared statements with 0 as the DB2Command.CommandTimeout. Last, I call DB2Connection.EndChain and commit the transaction.
I expect some of the inserts to fail due to duplicate key errors. I trap this by catching a DB2Exception and inspecting the DB2Exception.Errors collection. I know which row failed because I can look at the DB2Error.RowNumber inside the Errors collection.
The problem is that sometimes I trap a DB2Exception when I call DB2Connection.EndChain and the affected row number is negative.
[IBM][DB2] SQL0952N Processing was cancelled due to an interrupt. SQLSTATE=57014
Searching the DB2 documentation for this error seems to indicate that a query has timed out, but I didn't see any information on how this relates to chaining. Did the entire chain time out, or was it a problem with an individual query? If the latter, why didn't I get a valid row number? And why am I timing out at all if my DB2Connection.ConnectionTimeout is 0 and my DB2Command.CommandTimeout is 0?
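For reference, a condensed sketch of the flow described above (assuming the IBM.Data.DB2 provider; the table name, column, and parameter are placeholders):

using System;
using IBM.Data.DB2;

static void BulkInsert(string connectionString, int[] ids)
{
    using (var conn = new DB2Connection(connectionString))
    {
        conn.Open();
        DB2Transaction tx = conn.BeginTransaction();
        conn.BeginChain();   // statements are buffered from here on
        try
        {
            DB2Command cmd = conn.CreateCommand();
            cmd.Transaction = tx;
            cmd.CommandText = "INSERT INTO MYTABLE (ID) VALUES (@id)";
            cmd.CommandTimeout = 0;   // 0 = no client-side timeout
            cmd.Parameters.Add(new DB2Parameter("@id", DB2Type.Integer));
            cmd.Prepare();
            foreach (int id in ids)
            {
                cmd.Parameters[0].Value = id;
                cmd.ExecuteNonQuery();   // queued, not yet executed
            }
            conn.EndChain();   // the chain is flushed; errors surface here
            tx.Commit();
        }
        catch (DB2Exception ex)
        {
            // Expected for duplicate keys: inspect which rows failed.
            foreach (DB2Error err in ex.Errors)
                Console.WriteLine("row {0}: {1}", err.RowNumber, err.Message);
            tx.Rollback();
        }
    }
}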

Pessimistic locking of record?

I am creating a WCF web service, for a Silverlight application, and I need a record to be read/write locked when it is being modified.
I am using MySQL version 5.5.11.
To be more specific, I would like to prevent a request from reading data from a row while that row is being modified.
The two SQL commands for UPDATE and SELECT are actually pretty simple, something like:
Update (should lock for write/read):
UPDATE user SET user = ..... WHERE id = .....
Select (should not be able to read while locked by the query above):
SELECT * FROM user WHERE id = .....
Here is what I tried, but it doesn't seem to lock anything at all:
START TRANSACTION;
SELECT user
FROM user
WHERE id = 'the user id'
FOR UPDATE;
UPDATE user
SET user = 'the user data'
WHERE id = 'the user id';
COMMIT;
How are you determining that it's not locking the record?
When a query is run over a table with locks on it, it will wait for the locks to be released or eventually timeout. Your update transaction would happen so fast that you'd never even be able to tell that it was locked.
The only way you'd be able to tell there was a problem is if you had a query that ran after your transaction started, but returned the original value for user instead of the updated value. Has that happened?
I would have put this in a comment but it was too long; I'll update this with a more complete answer based on your response.
MySQL (InnoDB) uses multi-versioned concurrency control by default, and this is very good behavior (unlike MSSQL's default): a plain SELECT reads a consistent snapshot rather than blocking on row locks, which is why your test showed no locking. Use locking reads (SELECT ... LOCK IN SHARE MODE) to achieve what you want.
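A minimal sketch of the reader side, assuming Connector/NET (MySql.Data); the key point is the LOCK IN SHARE MODE clause, which makes this SELECT wait until the updating transaction from the question commits:

using MySql.Data.MySqlClient;

// Reader that waits for in-flight updates instead of reading a snapshot.
static string ReadUser(string connectionString, string id)
{
    using (var conn = new MySqlConnection(connectionString))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction())
        using (var cmd = new MySqlCommand(
            "SELECT user FROM user WHERE id = @id LOCK IN SHARE MODE",
            conn, tx))
        {
            cmd.Parameters.AddWithValue("@id", id);
            // Blocks here while another transaction holds the row lock
            // (e.g. the SELECT ... FOR UPDATE / UPDATE block above).
            string user = (string)cmd.ExecuteScalar();
            tx.Commit();
            return user;
        }
    }
}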

ThreadPool and GUI wait question

I am new to threads and in need of help. I have a data entry app where inserting a new record takes an exorbitant amount of time (50-75 seconds). My solution was to send the insert statement out via the ThreadPool, and let the user begin entering the data for the record while that insert, which returns the new record ID, is still running. My problem is that a user can hit save before the new ID is returned from that insert.
I tried putting in a Boolean variable which gets set to true via an event from that thread when it is safe to save. I then put in:
while (safeToSave == false)
{
    Thread.Sleep(200);
}
I think that is a bad idea. If I run the save method before that thread returns, it gets stuck.
So my questions are:
Is there a better way of doing this?
What am I doing wrong here?
Thanks for any help.
Doug
Edit for more information:
It is doing an insert into a very large (approaching max size) FoxPro database. The file has about 200 fields and almost as many indexes on it.
And before you ask: no, I cannot change its structure, as it was here before I was and there is a ton of legacy code hitting it. The first problem is that, in order to get a new ID, I must first find the max(id) in the table, then increment and checksum it. That takes about 45 seconds. The first insert is then simply an insert of that new ID and an enterdate field. This table is not (and cannot be) put into a DBC, so that rules out auto-generated IDs and the like.
#joshua.ewer
You have the process correct, and I think for the short term I will just disable the save button, but I will be looking into your idea of passing it into a queue. Do you have any references to MSMQ that I should take a look at?
1) Many :) - for example, you could disable the "save" button while the thread is inserting the record, or you could set up a worker thread that handles a queue of "save requests" (but I think the problem here is that the user wants to modify the newly created record, so disabling the button is probably better).
2) I think we need some more code to be able to understand... (or maybe it is a synchronization issue; I am not a big fan of threads either).
btw, I just don't understand why an insert should take so long... I think you should check that code first! <- just as charles stated before (sorry, didn't read the post) :)
Everyone else, including you, has addressed the core problems (the insert time, and why you do an insert and then an update), so I'll stick to the technical concerns with your proposed solution. If I have the flow right:
Thread 1: starts data entry for the record.
Thread 2: makes a background call to the DB to retrieve the new ID.
The save button is always enabled; if the user tries to save before Thread 2 completes, you put #1 to sleep for 200 ms?
The simplest answer, though not the best, is to just have the button disabled, and have that thread make a callback to a delegate that re-enables it. The user can't start the update operation until you're sure things are set up appropriately.
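Something like this (a sketch; saveButton, recordId, and InsertSkeletonRecord are placeholders for your own control, field, and slow insert call):

using System.Threading;
using System.Windows.Forms;

// Kick off the slow insert and disable Save until the new ID is back.
void BeginInsert()
{
    saveButton.Enabled = false;
    ThreadPool.QueueUserWorkItem(_ =>
    {
        int newId = InsertSkeletonRecord();   // the 50-75 second call
        // Marshal back to the UI thread before touching the button.
        saveButton.Invoke((MethodInvoker)delegate
        {
            recordId = newId;
            saveButton.Enabled = true;
        });
    });
}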
That said, I think a much better solution (though it might be overkill if you're just building a Q&D front end to FoxPro) would be to throw those save operations into a queue. The user can key as quickly as possible; the requests are put into something like MSMQ and complete in their own time, asynchronously.
Use a future rather than a raw ThreadPool action. Execute the future, allow the user to do whatever they want, when they hit Save on the 2nd record, request the value from the future. If the 1st insert finished already, you'll get the ID right away and the 2nd insert will be allowed to kick off. If you are still waiting on the 1st operation, the future will block until it is available, and then the 2nd operation can execute.
You're not saving any time unless the user is slower than the operation.
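In modern .NET, Task<T> plays the role of a future. A minimal sketch (InsertSkeletonRecord and SaveRecord are placeholder names for the slow insert and the final save):

using System.Threading.Tasks;

Task<int> idFuture;

void BeginDataEntry()
{
    // Start the slow insert in the background; it yields the new record ID.
    idFuture = Task.Run(() => InsertSkeletonRecord());
}

void OnSaveClicked()
{
    // Blocks only if the insert hasn't finished yet; otherwise returns at once.
    int newId = idFuture.Result;
    SaveRecord(newId /* plus the user's entered data */);
}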
First, you should probably find out, and fix, the reason why an insert is taking so long... 50-75 seconds is unreasonable for any modern database for a single row insert, and indicates that something else needs to be addressed, like indices, or blocking...
Secondly, why are you inserting the record before you have the data? Normally, data entry apps are coded so that the insert is not attempted until all the necessary data for the insert has been gathered from the user. Are you doing this because you are trying to get the new Id back from the database first, and then "update" the new empty record with the user-entered data later? If so, almost every database vendor has a mechanism where you can do the insert only once, without knowing the new ID, and have the database return the new ID as well... What vendor database are you using?
Is a solution like this possible:
Pre-calculate the unique IDs before a user even starts to add. Keep a list of unique IDs that are already in the table but are effectively placeholders. When a user starts an insert, reserve one of those unique IDs for them; when the user presses save, replace the placeholder with their data.
PS: It's difficult to confirm this, but be aware of the following concurrency issue with what you are proposing (with or without threads): user A starts to add, user B starts to add, user A calculates ID 1234 as the max free ID, user B calculates ID 1234 as the max free ID, user A inserts ID 1234, user B inserts ID 1234 = Boom!
