How to Control a Long-Running Query - c#

We are currently managing a web application that runs stored procedures, one of which is triggered whenever a user searches for a particular item. Unfortunately, one day this SP became long-running and caused the database to perform very poorly, which eventually caused a lot of problems for our applications. I found out that a long-running query was causing the problem by running this database script:
SELECT sqltext.TEXT,
req.session_id,
req.status,
req.command,
req.cpu_time,
req.total_elapsed_time
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS sqltext
WHERE sqltext.[Text] NOT LIKE '--''check%'
ORDER BY req.cpu_time DESC
So what we did to fix this was to execute KILL [SESSION_ID], and after a few seconds our application was back to normal. Now we would like to handle this type of problem proactively. When I say control, I would like to know whether it is possible for the web application to terminate this session gracefully (without causing subsequent problems) after a certain period of time, or whether it should be handled within SQL Server itself?
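For illustration, this is roughly the kind of client-side control I have in mind, assuming the search goes through a plain ADO.NET SqlCommand (the proc name and parameter below are placeholders, not our real ones). Setting CommandTimeout makes the client cancel the request after the given number of seconds, and SQL Server then aborts and rolls back that request:
using System;
using System.Data;
using System.Data.SqlClient;

public static class ItemSearch
{
    // Hypothetical helper: runs the search proc but gives up after 30 seconds
    // instead of letting the request run away on the server.
    public static DataTable Search(string connectionString, string searchTerm)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.usp_SearchItems", connection)) // placeholder proc name
        {
            command.CommandType = CommandType.StoredProcedure;
            command.CommandTimeout = 30; // seconds; 0 means "wait forever"
            command.Parameters.Add("@SearchTerm", SqlDbType.NVarChar, 200).Value = searchTerm;

            var results = new DataTable();
            connection.Open();
            try
            {
                using (var reader = command.ExecuteReader())
                {
                    results.Load(reader);
                }
            }
            catch (SqlException ex)
            {
                if (ex.Number == -2) // -2 is the client-side timeout error
                {
                    // The request was cancelled and rolled back on the server; degrade gracefully here.
                    throw new TimeoutException("Item search exceeded the allowed time.", ex);
                }
                throw;
            }
            return results;
        }
    }
}
Whether this counts as terminating the session "gracefully", or whether it is better handled within SQL Server itself, is exactly what I am asking.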
If anyone still needs further clarification, please feel free to comment.

Do you really need the filter
sqltext.[Text] NOT LIKE '--''check%'
in the WHERE clause?
1. sys.dm_exec_requests also has a start_time column, which you are currently not using in the WHERE clause. Pass in a start time so the query is not going back through all of the data (a parameterized sketch of this is at the end of this answer).
2. Look at the data and try to narrow the WHERE clause accordingly.
3. Pull the execution plan for the proc; it is quite likely doing a table scan, which is not good.
Here are the steps to get the execution plan.
Step 1
Modify the dates, the proc name and the database name as needed:
SELECT DISTINCT TOP 1000 qs.plan_handle, o.name, d.name --, ps.database_id
FROM sys.dm_exec_query_stats qs
    ,sys.dm_exec_procedure_stats ps
    ,sys.objects o
    ,sys.databases d
WHERE qs.last_execution_time > '2017-03-29 17:06:42.340'
  AND qs.last_execution_time < '2017-03-30 18:19:45.653'
  AND ps.sql_handle = qs.sql_handle
  AND o.object_id = ps.object_id
  AND o.name = 'Your proc name here'
  AND d.database_id = ps.database_id
  AND d.name = 'database name'
Step 2
Set the output to grid and save it as a .sqlplan file.
You get a link; if you have enough permissions you can click on it and it will open. Make sure you have the query output
options set so there is enough space for the XML output.
SELECT query_plan
FROM sys.dm_exec_query_plan (copy your plan handle from step 1 here, do not use quotes)
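If the application is going to run this kind of check proactively, here is a hedged sketch of the step 1 lookup with the start time passed in as a parameter (point 1 above) rather than hard-coded, called from C# with plain ADO.NET; all names are placeholders:
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static class PlanHandleLookup
{
    private const string Sql = @"
        SELECT DISTINCT TOP 1000 qs.plan_handle, o.name AS proc_name, d.name AS db_name
        FROM sys.dm_exec_query_stats qs
        JOIN sys.dm_exec_procedure_stats ps ON ps.sql_handle = qs.sql_handle
        JOIN sys.objects o ON o.object_id = ps.object_id
        JOIN sys.databases d ON d.database_id = ps.database_id
        WHERE qs.last_execution_time >= @from
          AND qs.last_execution_time < @to
          AND o.name = @procName
          AND d.name = @dbName";

    // Returns plan handles for the proc executed inside the given window,
    // so the DMV query only looks at a narrow slice instead of all history.
    public static List<byte[]> GetPlanHandles(string connectionString,
        DateTime from, DateTime to, string procName, string dbName)
    {
        var handles = new List<byte[]>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(Sql, connection))
        {
            command.Parameters.Add("@from", SqlDbType.DateTime).Value = from;
            command.Parameters.Add("@to", SqlDbType.DateTime).Value = to;
            command.Parameters.Add("@procName", SqlDbType.NVarChar, 128).Value = procName;
            command.Parameters.Add("@dbName", SqlDbType.NVarChar, 128).Value = dbName;

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    handles.Add((byte[])reader["plan_handle"]);
                }
            }
        }
        return handles;
    }
}
Each returned plan_handle can then be fed to sys.dm_exec_query_plan exactly as in step 2.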

C# Oracle ODP: Is it possible to return multiple query results in a single trip to the server without calling a stored procedure?

I expected to be able to include multiple SELECT statements, each separated by a semicolon, in my query, and get back a DataSet with the same number of DataTables as individual SELECT statements.
I am starting to think that the only way that this can be done is to create a stored procedure with multiple refcursor output parameters.
string sql = #"SELECT
R.DERVN_RULE_NUM
,P.DERVN_PARAM_INPT_IND
,R.DERVN_PARAM_NM
,R.DERVN_PARAM_VAL_DESC
,P.DERVN_PARAM_SPOT_NUM
,R.DERVN_PARAM_VAL_TXT
FROM
FDS_BASE.DERVN_RULE R
INNER JOIN FDS_BASE.DERVN_PARAM P
ON R.DERVN_TY_CD = P.DERVN_TY_CD
AND R.DERVN_PARAM_NM = P.DERVN_PARAM_NM
WHERE
R.DERVN_TY_CD = :DERVN_TY_CD
ORDER BY
R.DERVN_RULE_NUM
,P.DERVN_PARAM_INPT_IND DESC
, P.DERVN_PARAM_SPOT_NUM";
var dataSet = new DataSet();
using (OracleConnection oracleConnection = new OracleConnection(connectionString))
{
    oracleConnection.Open();
    var oracleCommand = new OracleCommand(sql, oracleConnection)
    {
        CommandType = CommandType.Text
    };
    oracleCommand.Parameters.Add(":DERVN_TY_CD", derivationType);
    var oracleDataAdapter = new OracleDataAdapter(oracleCommand);
    oracleDataAdapter.Fill(dataSet);
}
I tried to apply what I read here:
https://www.intertech.com/Blog/executing-sql-scripts-with-oracle-odp/
including changing my SQL to enclose it in a BEGIN END BLOCK in this form:
string sql = #"BEGIN
SELECT 1 FROM DUAL;
SELECT 2 FROM DUAL;
END";
and replacing my end of line character
sql = sql.Replace("\r\n", "\n");
but nothing works.
Is this even possible w/o using a stored procedure using ODP, or must I make a separate trip to the server for each query?
The simplest way to return multiple query results from a single statement is with the CURSOR SQL function. For example:
select
cursor(select * from all_tables) tables,
cursor(select * from all_objects) objects
from dual;
(However, I am not a C# programmer, so I don't know if this solution will work for you. Let me know if the code doesn't work - there's probably another solution using anonymous blocks and OUT parameters.)
must I make a separate trip to the server for each query?
The way this is asked makes it seem like there's a considerable effort or waste of resources going on somewhere that can be saved or reduced, as if making a database query were the equivalent of walking to the shops to get milk, coming back, then walking to the shops again to get bread and coming back.
There isn't any appreciable saving to be had; if this were going to the shops, db querying is like being able to clone yourself X times, the X of you all going to the shops and coming back at different times - some of you find your small things instantly and sprint back with them, some of you find the massive things instantly and stagger back with them, some of you take ages to find your things, etc. (These are metaphors for the speed of query execution and the time required to download large vs small result sets.)
If you have two queries that take ten seconds each to run, you can set them going in parallel and have your results ready and retrieved to the client in 10+x seconds (x being the time required to drag the data over the network), or you could execute them in series and have it be 20+x seconds.
If you think about it, putting two queries in one statement is only the same thing as submitting two statements for execution over different connections. The set of steps the db must take, and the set of steps the client must do to read, are the same. Writing a sproc to handle it is more effort, more complexity to maintain and more places code lives in. Even writing a block to do it is more. None of it saves anything. Even the bytes in the header of the TCP packets, minutiae as they are, are offset by more complex multi-line blocks. If one query takes considerably longer than the other, you might even be hamstrung into having to wait for them all to finish before you can get any results.
Write your "query statement X with Y parameters and return resultset Z" method as async, start two of them, and use Task.WhenAll to wait for them to finish (see the sketch after the next paragraph); if you can handle it, don't do a WhenAll but instead read and use the results as they finish - that's a saving, if the process can logically proceed before all queries deliver.
I get that you're thinking "surely I should just walk to the shops and carry milk and bread back with me - that's more efficient than going twice", but it's a faulty perspective when you consider that the shop is nanoseconds away because you run at the speed of light, you have multiple private unobstructed paths to it, and the bigger spend of time is finding the items you want and loading them into sufficient chained-together carts/dragging them all home. With a cloning approach, if the milk is right there, one of you can take it home and spend 10 minutes making the béchamel with it while the other of you is still waiting 10 minutes for the shop to bake the bread that you'll eat directly when you get home - you can still eat in 10 minutes if you maintain the parallelism, and launching separate operations is not only simpler but keeps you in easy control of that.
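To make the "clone yourself" point concrete, here is a minimal sketch of starting both queries at once from C# and awaiting both, reusing the adapter-based pattern from the question. Task.Run is used for simplicity, the SQL strings are placeholders, and with an ODP.NET version that supports true async you could use ExecuteReaderAsync instead:
using System.Data;
using System.Threading.Tasks;
using Oracle.ManagedDataAccess.Client;

public static class ParallelQueries
{
    // Runs one text command and materializes the result into a DataTable.
    private static DataTable RunQuery(string connectionString, string sql)
    {
        using (var connection = new OracleConnection(connectionString))
        using (var command = new OracleCommand(sql, connection) { CommandType = CommandType.Text })
        using (var adapter = new OracleDataAdapter(command))
        {
            var table = new DataTable();
            adapter.Fill(table); // Fill opens and closes the connection itself
            return table;
        }
    }

    // Starts both queries in parallel and waits for both, instead of
    // running them back to back on a single connection.
    public static async Task<DataTable[]> LoadBothAsync(string connectionString)
    {
        Task<DataTable> first = Task.Run(() => RunQuery(connectionString, "SELECT 1 FROM DUAL"));  // placeholder SQL
        Task<DataTable> second = Task.Run(() => RunQuery(connectionString, "SELECT 2 FROM DUAL")); // placeholder SQL
        return await Task.WhenAll(first, second);
    }
}
If one result is usable on its own, await the individual tasks as they complete instead of using WhenAll, as described above.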

Figuring out what process is running on my SQL that is being called by my c# code

I am working on a .NET nopCommerce application where I have around 5 million+ results in the database and I need to query all of that data for extraction. But the data from SQL is never returned to my code, while my GC keeps on growing (it goes beyond 1 GB); yet when I run the same stored procedure in SQL after providing the respective parameters, it takes less than 2 minutes. I need to somehow figure out why the call from my code is taking so much time.
nopCommerce uses Entity Framework libraries to call the database's stored procedures, but those calls are not async, so I am trying to call the stored procedure in an async way using this function:
await dbcontext.Database.SqlQuery<TEntity>(commandText, parameters).ToListAsync();
As per my research from another SO post, ToListAsync() turns this call into an async one, so the task is sent back to the task library.
Now I need to figure out 3 things that I'm currently unable to do:
1) I need to figure out whether that thread is running in the background. I assume it is, as the GC keeps growing, but I'm just not sure; below is a pic of how I tried that using the Diagnostics tool in Visual Studio:
2) I need to make sure the SQL processes are giving enough time to the database calls from my code. I tried the following queries, but they don't show me any value for the process running the particular data export initiated by my code.
I tried this query:
select top 50
sum(qs.total_worker_time) as total_cpu_time,
sum(qs.execution_count) as total_execution_count,
count(*) as number_of_statements,
qs.plan_handle
from
sys.dm_exec_query_stats qs
group by qs.plan_handle
order by sum(qs.total_worker_time) desc
also tried this one:
SELECT
r.session_id
,st.TEXT AS batch_text
,SUBSTRING(st.TEXT, statement_start_offset / 2 + 1, (
(
CASE
WHEN r.statement_end_offset = - 1
THEN (LEN(CONVERT(NVARCHAR(max), st.TEXT)) * 2)
ELSE r.statement_end_offset
END
) - r.statement_start_offset
) / 2 + 1) AS statement_text
,qp.query_plan AS 'XML Plan'
,r.*
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(r.plan_handle) AS qp
ORDER BY cpu_time DESC
Also, when I use sp_who or sp_who2, the statuses of the processes for my database stay 'runnable', like this, as do the CPU and DiskIO values:
3) I need to know: what if my DB call has completed successfully, but mapping the results to the relevant list is taking a lot of time?
I would very much appreciate someone pointing me in the right direction, maybe with a query that shows the right results, with a way of viewing the running background threads and their status, or with learning more about viewing the GC, threads and CPU utilization in a better way.
Any help will be highly appreciated. Thanks
A couple of diagnostic things to try:
Try adding a TOP 100 clause to the select statement, to see if there's a problem in the communication layer or in the data mapper.
How much data is being returned by the stored procedure? If the procedure is returning more than a million rows, you may not be querying the data you mean.
Have you tried running it both synchronously and asynchronously?
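To separate the database time from the mapping time, one option is a sketch like the following: stream the same procedure through a raw SqlDataReader and just count rows (plain ADO.NET assumed; the proc and parameter names are placeholders). If this finishes in roughly the 2 minutes you see in SSMS, the time is going into materializing the entities, not into SQL:
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;
using System.Threading.Tasks;

public static class ExportDiagnostics
{
    // Streams the proc's result set without building any entities, so the
    // elapsed time reflects only SQL execution plus network transfer.
    public static async Task<int> CountRowsAsync(string connectionString, int storeId)
    {
        var stopwatch = Stopwatch.StartNew();
        int rows = 0;

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.ExportAllProducts", connection)) // placeholder proc name
        {
            command.CommandType = CommandType.StoredProcedure;
            command.CommandTimeout = 0; // no client-side limit while diagnosing
            command.Parameters.AddWithValue("@StoreId", storeId); // placeholder parameter

            await connection.OpenAsync();
            using (var reader = await command.ExecuteReaderAsync())
            {
                while (await reader.ReadAsync())
                {
                    rows++;
                }
            }
        }

        stopwatch.Stop();
        Debug.WriteLine("Streamed " + rows + " rows in " + stopwatch.Elapsed);
        return rows;
    }
}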

Absolute most basic T-SQL DELETE statement causing a timeout

This is going to top some of the weirdest things I've seen. I've tried looking up "simple t-sql delete causing timeout" but all the titles are misleading; they say simple but are not. They deal with deleting millions of records or have complex relationships set up. I do not.
I have four tables:
tblInterchangeControl,
tblFunctionalGroup,
tblTransactionSet,
tblSegment
The latter 3 all associate to tblInterchangeControl via InterchangeControlID. There are no relationships set up. Like I said, as simple as one could get.
The procedure runs a delete statement on all 4 tables like so...
DELETE FROM tblSegment
WHERE (ID_InterchangeControlID = @InterchangeControlID)
DELETE FROM tblTransactionSet
WHERE (ID_InterchangeControlID = @InterchangeControlID)
DELETE FROM tblFunctionalGroup
WHERE (ID_InterchangeControlID = @InterchangeControlID)
DELETE FROM tblInterchangeControl
WHERE (InterchangeControlID = @InterchangeControlID)
The weird part is that if I leave these in the procedure it times out; if I remove them, it does not. I've pinned it down to these delete statements being the cause. But why?!
I included C# because I'm calling this procedure from a C# application. I don't think this is the issue, but maybe. I only say that because my code works just fine when I remove the delete statements from the stored procedure. Then, if I put them back, an exception is thrown saying it has timed out.
In case my comment is the answer.
Most likely you have some locks holding those deletes up.
If you run a query from a command-line SQL tool or from SQL Management Studio, it will take whatever time it needs to complete the query. So yes, most likely it's a client-side issue. And because you mentioned C#, it's probably the ADO.NET command timeout.
Also, I'd suggest profiling the queries by inspecting their execution plans. If you don't have indexes (primary/unique key constraints) on the columns in those WHERE clauses, the deletes will result in full scans, i.e. O(n) operations you don't want.
Update:
OK, looks like it's an ADO.NET error. In your code, just prior to executing the command, increase the timeout:
var myCommand = new SqlCommand("EXEC ..."); // you create it something like this
....
myCommand.CommandTimeout = 300; // 5 minutes
myCommand.ExecuteNonQuery(); // assuming your SP doesn't return anything
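For completeness, a sketch of the same fix in context, assuming the proc is called through a plain SqlCommand (names are placeholders; the important line is setting CommandTimeout before the call executes):
using System.Data;
using System.Data.SqlClient;

public static class InterchangeCleanup
{
    public static void DeleteInterchange(string connectionString, int interchangeControlId)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.DeleteInterchangeControl", connection)) // placeholder proc name
        {
            command.CommandType = CommandType.StoredProcedure;
            command.CommandTimeout = 300; // seconds; the ADO.NET default of 30 is what these deletes are hitting
            command.Parameters.Add("@InterchangeControlID", SqlDbType.Int).Value = interchangeControlId;

            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}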

Deadlocks during logon to ASP app caused by dropping/creating SQL Server views

I have been chasing this issue for a day now and am stumped, so thought I would put it out to you folks for some inspiration. I'm a bit of a novice when it comes to deadlocks and SQL Server lock modes; I rarely need to delve into this.
The short story:
When a user logs into our application, we want to update a SQL Server view based on the fact that they now have a "session", so that when they subsequently run a SQL Server Reporting Services report based on a report model, it includes security settings for their session.
The regular deadlock I've noticed is occurring between the process that DROPs and reCREATEs the view (which I call the AuthRuleCache), and a Microsoft SQL Server Reporting Services 2008 (SSRS) report that tries to select from the view.
If I read the SQL Profiler deadlock event properly, the AuthRuleCache has a Sch-M lock, and the report has an IS lock.
The AuthRuleCache code is C# in a .NET assembly; it's executed when users log into our Classic ASP app.
Obviously I want to avoid the deadlock because it's preventing logins - I don't mind how I achieve this as long as I don't need to compromise any other functionality. I've got full control over the AuthRuleCache and the database, but I would say that we're "light" on enterprise DBA expertise.
Here is an example deadlock event from SQL Profiler:
<deadlock-list>
<deadlock victim="process4785288">
<process-list>
<process id="process4785288" taskpriority="0" logused="0" waitresource="OBJECT: 7:617365564:0 " waittime="13040" ownerId="3133391" transactionname="SELECT" lasttranstarted="2013-01-07T15:16:24.680" XDES="0x8005bd10" lockMode="IS" schedulerid="8" kpid="20580" status="suspended" spid="83" sbid="0" ecid="0" priority="0" trancount="0" lastbatchstarted="2013-01-07T15:15:55.780" lastbatchcompleted="2013-01-07T15:15:55.780" clientapp=".Net SqlClient Data Provider" hostname="MYMACHINE" hostpid="1176" loginname="MYMACHINE\MyUser" isolationlevel="read committed (2)" xactid="3133391" currentdb="7" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">
<executionStack>
<frame procname="adhoc" line="2" stmtstart="34" sqlhandle="0x02000000bd919913e43fd778cd5913aabd70d423cb30904a">
SELECT
CAST(1 AS BIT) [c0_is_agg],
1 [agg_row_count],
COALESCE([dbo_actions2].[ActionOverdue30days], 0) [ActionOverdue30days],
COALESCE([dbo_actions3].[ActionOverdueTotal], 0) [ActionOverdueTotal],
COALESCE([dbo_actions4].[ActionOverdue90daysPLUS], 0) [ActionOverdue90daysPLUS],
COALESCE([dbo_actions5].[ActionOverdue60days], 0) [ActionOverdue60days],
COALESCE([dbo_actions6].[ActionOverdue90days], 0) [ActionOverdue90days],
COALESCE([dbo_actions7].[ActionPlanned30days], 0) [ActionPlanned30days],
COALESCE([dbo_actions8].[ActionPlanned60days], 0) [ActionPlanned60days],
COALESCE([dbo_actions9].[ActionPlanned90days], 0) [ActionPlanned90days],
COALESCE([dbo_actions10].[ActionPlanned90daysPLUS], 0) [ActionPlanned90daysPLUS],
COALESCE([dbo_actions11].[ActionPlannedTotal], 0) [ActionPlannedTotal],
CASE WHEN [dbo_actions12].[CountOfFilter] > 0 THEN 'Overdue0-30days' WHEN [dbo_actions13].[CountOfFilter] > 0 THEN 'Overdue90daysPlus' WHEN [dbo_actions5].[Count </frame>
</executionStack>
<inputbuf>
SET DATEFIRST 7
SELECT
CAST(1 AS BIT) [c0_is_agg],
1 [agg_row_count],
COALESCE([dbo_actions2].[ActionOverdue30days], 0) [ActionOverdue30days],
COALESCE([dbo_actions3].[ActionOverdueTotal], 0) [ActionOverdueTotal],
COALESCE([dbo_actions4].[ActionOverdue90daysPLUS], 0) [ActionOverdue90daysPLUS],
COALESCE([dbo_actions5].[ActionOverdue60days], 0) [ActionOverdue60days],
COALESCE([dbo_actions6].[ActionOverdue90days], 0) [ActionOverdue90days],
COALESCE([dbo_actions7].[ActionPlanned30days], 0) [ActionPlanned30days],
COALESCE([dbo_actions8].[ActionPlanned60days], 0) [ActionPlanned60days],
COALESCE([dbo_actions9].[ActionPlanned90days], 0) [ActionPlanned90days],
COALESCE([dbo_actions10].[ActionPlanned90daysPLUS], 0) [ActionPlanned90daysPLUS],
COALESCE([dbo_actions11].[ActionPlannedTotal], 0) [ActionPlannedTotal],
CASE WHEN [dbo_actions12].[CountOfFilter] > 0 THEN 'Overdue0-30days' WHEN [dbo_actions13].[CountOfFilter] > 0 THEN 'Overdue90daysPlus' WHEN [db </inputbuf>
</process>
<process id="process476ae08" taskpriority="0" logused="16056" waitresource="OBJECT: 7:1854941980:0 " waittime="4539" ownerId="3132267" transactionname="user_transaction" lasttranstarted="2013-01-07T15:16:18.373" XDES="0x9a7f3970" lockMode="Sch-M" schedulerid="7" kpid="1940" status="suspended" spid="63" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2013-01-07T15:16:33.183" lastbatchcompleted="2013-01-07T15:16:33.183" clientapp=".Net SqlClient Data Provider" hostname="MYMACHINE" hostpid="14788" loginname="MYMACHINE\MyUser" isolationlevel="read committed (2)" xactid="3132267" currentdb="7" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">
<executionStack>
<frame procname="adhoc" line="3" stmtstart="202" stmtend="278" sqlhandle="0x02000000cf24d22c6cc84dbf398267db80eb194e79f91543">
DROP VIEW [sec].[actions_authorized] </frame>
</executionStack>
<inputbuf>
IF EXISTS ( SELECT * FROM sys.VIEWS WHERE object_id = OBJECT_ID(N'[sec].[actions_authorized]'))
DROP VIEW [sec].[actions_authorized]
</inputbuf>
</process>
</process-list>
<resource-list>
<objectlock lockPartition="0" objid="617365564" subresource="FULL" dbid="7" objectname="617365564" id="lock932d2f00" mode="Sch-M" associatedObjectId="617365564">
<owner-list>
<owner id="process476ae08" mode="Sch-M"/>
</owner-list>
<waiter-list>
<waiter id="process4785288" mode="IS" requestType="wait"/>
</waiter-list>
</objectlock>
<objectlock lockPartition="0" objid="1854941980" subresource="FULL" dbid="7" objectname="1854941980" id="locke6f0b580" mode="IS" associatedObjectId="1854941980">
<owner-list>
<owner id="process4785288" mode="IS"/>
</owner-list>
<waiter-list>
<waiter id="process476ae08" mode="Sch-M" requestType="convert"/>
</waiter-list>
</objectlock>
</resource-list>
</deadlock>
</deadlock-list>
The LONG story:
I've decided to do this as a Q&A.
Q: Why do you have to make frequent schema changes just to enforce security on reports?
A: Well, I only arrived at this approach because our SSRS reporting mechanism is totally based on report models, and our application supports row-level security by applying rules. The rules themselves are defined in the database as little SQL fragments. These fragments are re-assembled at run-time and applied based on a) who the user is, b) what they are trying to do, and c) what they are trying to do it to. So, each user may have a unique view of the data based on the rules that apply to them. We have users authoring and saving their own reports, so I wanted this security enforced at the model to prevent them from stumbling upon data they should not have access to.
The challenge we faced with report models is that they are based on a data source view (DSV) that can only be made up of static sources, e.g. tables, named-queries, views. You cannot inject some C# code into the DSV to get it to dynamically respond to the particular user running the report. You do get the UserID at the model (SMDL) so you can use this for filtering. Our solution is to get the DSV to expose a view with ALL of the data for ALL of the currently logged in users' unique rulesets (namely, the AuthRuleCache), then the SMDL will filter this back to the unique ruleset of the requesting user. Hey-presto, you've got dynamic row-level, rule-based security in an SSRS report model!
The rules change infrequently, so it's OK for these to behave the same way for the duration of a user's session. Because we have tens of thousands of users, but only a few hundred or so may log in during a 24 hour period, I decided to refresh the AuthRuleCache any time a user logs in and expire it after 24 hours so it contains only security info for users with current sessions.
Q: What form does the AuthRuleCache take?
A: It's a view UNIONing a bunch of other views. Each user has their own view, e.g. widgets_authorized_123, where widgets is the table containing data being secured, and 123 is the user id. Then there's a master view (e.g. widgets_authorized) that UNIONs together all the user views.
Q: That sounds hideously inefficient, are you a moron?
A: Possibly - however thanks to the awesomeness of the SQL Query Processor, it all seems to run nice and fast for live user reports. I experimented with using a cache table to actually hold record-ids for use with the application security and found this led to bloated-tables and delays refreshing and reading from the cache.
Q: Okay, you may still be a moron, but let's explore another option. Can you rebuild the AuthRuleCache asynchronously instead of having the user wait at logon?
A: Well, the first thing the user does after logon is hit a dashboard containing reports based on the model - so we need the security rules up and running immediately after logon.
Q: Have you explored different locking modes and isolation levels?
A: Sort of - I tried altering the database to set READ_COMMITTED_SNAPSHOT ON, but that seemed to make no difference. In retrospect, I think the fact that I'm trying to do a DROP/CREATE VIEW and requiring a Sch-M lock means that Read Committed Snapshot Isolation (RCSI) wouldn't help, because it's about handling concurrency of DML statements, and I'm doing DDL.
Q: Have you explored whole-database database snapshots or mirroring for reporting purposes?
A: I wouldn't rule this out, but I was hoping for more of an application-centric solution rather than making infrastructural changes. This would be a jump in resources utilization and maintenance overhead which I'd need to escalate to other people.
Q: Is there anything else we should know?
A: Yes, the AuthRuleCache refresh process is wrapped in a transaction because I wanted to make sure that nobody gets to see an incomplete/invalid cache, e.g. the widget_authorized view referring to widget_authorized_123 when widget_authorized_123 has been dropped because the user's session has expired. I tested without the transaction, and the deadlocks stopped, but I started getting blocked process reports from SQL Profiler instead. I saw ~15 second delays at login, and sometimes timeouts - so I put the transaction back in.
Q: How often is it happening?
A: The AuthRuleCache is switched off in the production environment at the moment so it's not affecting users. My local testing of 100 sequential logons shows that maybe 10% deadlock or fail. I suspect it is worse for users that have a long-running report model based report on their dashboard.
Q: How about report snapshots?
A: Maybe a possibility - not sure how well this works with parameterized reports. My concern is that we do have some users who will be alarmed if they insert a record but don't see it on the dashboard until half an hour later. Also, I can't always guarantee everyone will use report snapshots correctly all the time, so I don't want to leave the door open for deadlocks to sneak back in at a later date.
Q: Can I see the full T-SQL of the AuthRuleCache refresh transaction?
A: Here are the statements issued inside one transaction captured from SQL Profiler for one user logging on:
Look for expired sessions - we'd delete the associated view if found
SELECT TABLE_SCHEMA + '.' + TABLE_NAME
FROM INFORMATION_SCHEMA.VIEWS
WHERE TABLE_SCHEMA + '.' + TABLE_NAME LIKE 'sec.actions_authorized_%'
AND RIGHT(TABLE_NAME, NULLIF(CHARINDEX('_', REVERSE(TABLE_NAME)), 0) - 1) NOT IN (
SELECT DISTINCT CAST(empid AS NVARCHAR(20))
FROM session
)
Drop any pre-existing view for user 'myuser', id 298
IF EXISTS (
SELECT *
FROM sys.VIEWS
WHERE object_id = OBJECT_ID(N'[sec].[actions_authorized_298]')
)
DROP VIEW [sec].[actions_authorized_298]
Create a view for user id 298
CREATE VIEW [sec].[actions_authorized_298]
AS
SELECT actid
,'myuser' AS username
FROM actions
WHERE actid IN (
SELECT actid
FROM actions
WHERE (
--A bunch of custom where statements generated from security rules in the system prior to this transaction starting
)
)
Get a list of ALL user specific views for the actions entity
SELECT TABLE_SCHEMA + '.' + TABLE_NAME
FROM INFORMATION_SCHEMA.VIEWS
WHERE TABLE_SCHEMA + '.' + TABLE_NAME LIKE 'sec.actions_authorized_%'
Drop the existing master actions view
IF EXISTS (
SELECT *
FROM sys.VIEWS
WHERE object_id = OBJECT_ID(N'[sec].[actions_authorized]')
)
DROP VIEW [sec].[actions_authorized]
Create a new master actions view and we're done
CREATE VIEW [sec].[actions_authorized]
AS
SELECT actid
,username
FROM sec.actions_authorized_182
UNION
SELECT actid
,username
FROM sec.actions_authorized_298
UNION
-- Repeat for a bunch of other per-user custom views, generated from the prior select
-- ...
Thanks for all who offered suggestions. I've settled on a solution that I think will work for us. It may be a while before I get the final code together, but I've done some tests and it's looking positive - I wanted to close this question off with my planned approach.
Firstly, the deadlocks are a totally appropriate consequence of what I was trying to do from the outset. As I understand, recreating a view requires a schema modification lock - and any process in the middle of reading from that view requires a schema stability lock. Dependent on timing, these competing locks resulted in a deadlock in about 10% of logon attempts during busy periods.
When I changed the code to do a SET TRANSACTION ISOLATION LEVEL SERIALIZABLE before running the view drop/recreate, the deadlocks went away because it is much more restrictive about what can happen concurrently, sacrificing response speed for stability.
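The change was in the C# refresh code; here is a minimal sketch of that kind of change (not the actual AuthRuleCache code), assuming plain ADO.NET, a placeholder for the generated CREATE VIEW text, and the BeginTransaction overload that takes an isolation level, which has the same effect as issuing SET TRANSACTION ISOLATION LEVEL SERIALIZABLE first:
using System.Data;
using System.Data.SqlClient;

public static class AuthRuleCacheRefresher
{
    // Sketch: drop and recreate the master view inside one SERIALIZABLE transaction,
    // so competing readers queue either before or after the whole refresh.
    public static void RefreshMasterView(string connectionString, string createViewSql)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction(IsolationLevel.Serializable))
            {
                using (var drop = new SqlCommand(
                    "IF OBJECT_ID(N'[sec].[actions_authorized]', 'V') IS NOT NULL " +
                    "DROP VIEW [sec].[actions_authorized];", connection, transaction))
                {
                    drop.ExecuteNonQuery();
                }

                // createViewSql is the generated CREATE VIEW ... AS SELECT ... UNION ... statement
                using (var create = new SqlCommand(createViewSql, connection, transaction))
                {
                    create.ExecuteNonQuery();
                }

                transaction.Commit();
            }
        }
    }
}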
Unfortunately, instead of deadlocking, I was seeing blocked process reports where processes were waiting upwards of 10 seconds to obtain the necessary locks. Still not really solving my problem.
I had a rethink about my "weird solution" of using a big UNIONed view to combine multiple views. Let me be clear that I didn't arrive at this approach by choice, I am simply trying to work around a limitation in SSRS Report Models whereby you can't implement parameters in the tables/named queries underlying the model.
I found in MS documentation that Partitioned Views can use a similar structure when merging together rows from multiple tables into a single view, example here:
http://msdn.microsoft.com/en-us/library/ms190019(v=sql.105).aspx
So I'm not alone in using views in this way. I need this UNIONed view, but dropping and recreating views is going to be a performance problem. So, I did some testing using Service Broker and found I could queue up the view drop/recreate operation, allowing users to log in rapidly without waiting around for the DDL to complete. I'm going to follow @usr's suggestions and get the transaction as lean as possible, moving stuff not critical to completing a logon (such as expiring old sessions) out of the transaction.
Let's use your example with widgets. I assume there is a table which says which widgets are authorized for each user (if you have user groups it's just a bit more complex).
As you are using User_ID, I assume you have another table with the user logins.
Users (User_ID, Login)
Widgets (Widget_ID, ...)
widgets_authorized (User_ID, Widget_ID)
Rename the table Widgets to AllWidgets
Create the view Widgets:
CREATE VIEW widgets
AS
SELECT AW.*
FROM AllWidgets AW
INNER JOIN widgets_authorized WA ON WA.Widget_ID = AW.Widget_ID
INNER JOIN Users U ON WA.User_ID = U.User_ID
WHERE U.Login = SYSTEM_USER
You can keep the previous model linked to the view Widgets instead of the previous table Widgets; they return the same columns, and the data is filtered according to the connected user.
If you have performance problems try this one, I had a similar problem:
CREATE VIEW widgets
AS
SELECT AW.*
FROM AllWidgets AW
INNER JOIN widgets_authorized WA ON WA.Widget_ID = AW.Widget_ID
WHERE WA.User_ID IN (SELECT U.User_ID FROM Users U WHERE U.Login = SYSTEM_USER)
Another suggestion, closer to your weird solution.
Instead of several views in a single schema, create views with the same name in several schemas, e.g. sec_182.actions_authorized.
Run your query with "FROM actions_authorized" and don't specify the schema explicitly; the SQL engine will use the view that belongs to the connected user's default schema.
The schema and its views may be created by a background process or at user logon (CREATE TRIGGER ... ON ALL SERVER ... AFTER LOGON ...).

Pessimistic locking of record?

I am creating a WCF Web Service, for a Silverlight application, and I need to have a record to be Read/Write Locked when Modified.
I am using MySQL version 5.5.11.
To be more specific, I would like to prevent a request from reading data from a row while it is being modified.
The two SQL commands for UPDATE and SELECT are actually pretty simple, something like:
Update(should lock for write/read):
UPDATE user SET user = ..... WHERE id = .....
Select(should not be able to read when locked from the query above):
SELECT * FROM user WHERE id = .....
Here is what I tried, but it doesn't seem to work or lock anything at all:
START TRANSACTION;
SELECT user
FROM user
WHERE id = 'the user id'
FOR UPDATE;
UPDATE user
SET user = 'the user data'
WHERE id = 'the user id';
COMMIT;
How are you determining that it's not locking the record?
When a query is run over a table with locks on it, it will wait for the locks to be released or eventually timeout. Your update transaction would happen so fast that you'd never even be able to tell that it was locked.
The only way you'd be able to tell there was a problem is if you had a query that ran after your transaction started, but returned the original value for user instead of the updated value. Has that happened?
I would have just put this in a comment but it was too long; I'll update this with a more complete answer based on your response.
MySQL uses multi-versioned concurrency control by default (and this is very, very good behavior, unlike MSSQL). Try using locking reads (LOCK IN SHARE MODE) to achieve what you want.
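For illustration, here is a hedged sketch of what locking reads look like from C# with the MySQL connector, assuming InnoDB tables and the table/column names from the question: the writer's FOR UPDATE takes an exclusive row lock, and a reader using LOCK IN SHARE MODE waits until that writer's transaction commits.
using MySql.Data.MySqlClient;

public static class UserRepository
{
    // Reader: only sees the row once any concurrent writer's transaction has committed.
    public static string ReadUser(string connectionString, string id)
    {
        using (var connection = new MySqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            using (var command = new MySqlCommand(
                "SELECT user FROM user WHERE id = @id LOCK IN SHARE MODE;", connection, transaction))
            {
                command.Parameters.AddWithValue("@id", id);
                var result = command.ExecuteScalar() as string;
                transaction.Commit();
                return result;
            }
        }
    }

    // Writer: FOR UPDATE takes an exclusive row lock; the UPDATE and COMMIT then release it.
    public static void UpdateUser(string connectionString, string id, string newValue)
    {
        using (var connection = new MySqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                using (var select = new MySqlCommand(
                    "SELECT user FROM user WHERE id = @id FOR UPDATE;", connection, transaction))
                {
                    select.Parameters.AddWithValue("@id", id);
                    select.ExecuteScalar();
                }
                using (var update = new MySqlCommand(
                    "UPDATE user SET user = @user WHERE id = @id;", connection, transaction))
                {
                    update.Parameters.AddWithValue("@user", newValue);
                    update.Parameters.AddWithValue("@id", id);
                    update.ExecuteNonQuery();
                }
                transaction.Commit();
            }
        }
    }
}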
