I am working with a multi-threaded console application in which each thread tries to obtain the TOP 1 "File" row that meets certain criteria and locks it. (There is a LockId column which is populated when this happens, so that the next thread picks up the next available unlocked "File" row.)
We put a monitor on the SQL Server database, and each time the deadlock happens between the same two queries.
SELECT TOP 1 F.Id, F.ContentTypeId, F.ManufacturerId, F.DocumentTypeId, F.Name, F.Description, F.VersionId, F.LastChangedVersionOn, F.ReferenceCount, F.LastChangedReferencesOn, F.LastChangedImageOn, F.ImageSize, F.IsStale, F.InvalidFile, CT.Id, CT.Name, CT.MimeType, CT.IsMimeAttachment, CT.Extensions, CT.CanTrackVersions, CT.UseRemoteSource, CT.FullTextFilter, CT.ContentHandler, V.Id, V.Size, V.Hash, V.Title, DT.Id, DT.Code, DT.Ordinal, DT.Name, DT.PluralName, DT.UrlPart
FROM Docs.Files F
INNER JOIN Docs.ContentTypes CT ON CT.Id = F.ContentTypeId
LEFT JOIN Docs.Versions V ON V.Id = F.VersionId
LEFT JOIN Docs.DocumentTypes DT ON DT.Id = F.DocumentTypeId
WHERE (F.LockId IS NULL OR F.LockedOn < DATEADD(hh,-1,GETUTCDATE()))
AND F.IsStale = 1 AND F.InvalidFile = 0
AND ...

(@Id int) UPDATE Docs.Files
SET LastChangedImageOn = GETUTCDATE(), ImageSize = (
SELECT DATALENGTH(FileImage)
FROM Docs.FileImages
WHERE FileId = @Id)
WHERE Id = @Id;
SELECT TOP 1 LastChangedImageOn FROM Docs.Files WHERE Id = @Id
The first query is run when a new thread is created and we try to obtain a new 'File' row.
The second query is run when a thread (possibly a previously created one) is almost done processing the 'File' record. This query is wrapped in a transaction; the isolation level was ReadCommitted.
I'm pretty sure the two queries aren't trying to access the same FileId, because no two threads would ever process the same FileId.
I'm terribly confused as to how I can diagnose this issue. What might be causing the deadlock between these 2 queries?
I'd really really appreciate it if someone could guide me in the right direction. Thanks a lot in advance :)
Hmm... it's been a while since I did anything with SQL Server. But let's try.
You mention that "no two threads would ever process the same FileId", but how can you be sure of this? Is the ID being fed from a source outside the thread?
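More generally, deadlocks in this select-then-update pattern often come from the claim itself not being atomic: two threads can both read the same candidate row before either writes its LockId. Below is a sketch of one way to claim the next row in a single statement using the UPDLOCK/READPAST queue hints. The table and column names come from the question; the uniqueidentifier lock token, the method name, and the ORDER BY choice are assumptions (requires System.Data.SqlClient).

public static int? ClaimNextFile(string connectionString, Guid lockToken)
{
    // Sketch: claim the next available file atomically so two threads can
    // never grab the same row. READPAST skips rows other sessions have locked.
    const string sql = @"
        WITH NextFile AS (
            SELECT TOP (1) Id, LockId, LockedOn
            FROM Docs.Files WITH (UPDLOCK, READPAST, ROWLOCK)
            WHERE (LockId IS NULL OR LockedOn < DATEADD(hh, -1, GETUTCDATE()))
              AND IsStale = 1 AND InvalidFile = 0
            ORDER BY Id
        )
        UPDATE NextFile
        SET LockId = @LockToken, LockedOn = GETUTCDATE()
        OUTPUT inserted.Id;";

    using (var cnn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, cnn))
    {
        cmd.Parameters.AddWithValue("@LockToken", lockToken);
        cnn.Open();
        object id = cmd.ExecuteScalar(); // null when no row is currently available
        return id == null || id == DBNull.Value ? (int?)null : (int)id;
    }
}

The big SELECT with the joins can then run separately by the claimed Id, outside the contended claim step.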
I have an intermittent issue that I can't seem to get to the bottom of, whereby a .NET Framework 4.6 MVC API (C#) makes an Entity Framework 6 (runtime v4.0.30319) stored procedure call that takes a long time to respond.
The stored procedure is hosted on Azure SQL DB.
It is an unremarkable procedure: it interrogates a couple of tables to give a nutritionist their clients' latest data and website interactions.
ALTER PROCEDURE [PORTAL].[GetNutritionistCustomers]
@id_usernut varchar(20)
AS
WITH Activity AS (
SELECT LastActivity = MAX(ExecutionDate),
ded.UserName
FROM portal.DailyExecutionData ded
WHERE ded.ExecutionDate < GETDATE()
GROUP BY ded.UserName
),
Logging AS (
SELECT LastLogin = MAX(l.LoggedOnDate),
l.UserName
FROM PORTAL.Logins l
WHERE l.LoginType = 'Direct Client'
GROUP BY l.UserName
)
SELECT unc.CompanyCode,
a.ACCOUNT,
ad.ADDRESS1,
un.ID_UserNut,
ueu.ExpirationDate,
Expired = CAST(CASE WHEN ueu.ExpirationDate < GETDATE() THEN 1 ELSE 0 END AS BIT),
LastActive = la.LastActivity,
l.LastLogin
FROM UK_Nutritionist un
JOIN UK_NutCompany unc ON un.ID_UserNut = unc.ID_UserNut
JOIN UK_EnabledUsers ueu ON unc.CompanyCode = ueu.UserName
JOIN Klogix.ACCOUNT a ON ueu.ID_User = a.ACCOUNTID
LEFT JOIN Klogix.ADDRESS ad ON a.ADDRESSID = ad.ADDRESSID AND ad.IsPrimary = 1
LEFT JOIN Activity la ON ueu.UserName = la.UserName
LEFT JOIN Logging l ON ueu.UserName = l.UserName
WHERE un.ID_UserNut = @id_usernut
Run in SSMS or ADS, this procedure takes on average about 50-100ms to complete. This is consistent; the tables are well indexed and the query plan is all index seeks. There are no Key or RID Lookup red flags.
Having investigated with the profiler, I've determined what happens: EF creates a connection and calls the procedure. I can see the RPC start, then EF reaches its connection timeout period, then we get RPC completed, the data returns to EF, and the code spins out a list of POCOs to return to the caller as JSON.
[HttpGet]
[Route("Customers")]
public HttpResponseMessage Get()
{
try
{
if (IsTokenInvalid(Request))
return Request.CreateResponse(HttpStatusCode.Unauthorized);
var nutritionistCustomers = new ExcdbEntities().GetNutritionistCustomers(TokenPayload.Nutritionist).Select(x => new NutritionistCustomers()
{
Account = x.ACCOUNT,
Address = x.ADDRESS1,
CompanyCode = x.CompanyCode,
ExpirationDate = x.ExpirationDate,
expired = x.Expired,
LastActive = x.LastActive,
LastLogin = x.LastLogin
}).ToList();
return Request.CreateResponse(HttpStatusCode.OK, GenerateResponse(nutritionistCustomers));
}
catch (Exception e)
{
return Request.CreateResponse(HttpStatusCode.InternalServerError, GenerateResponse(e));
}
}
If I change the timeout on the connection then the amount of time that SQL waits before releasing the data changes.
I thought it might be something to do with the Azure app service that hosts the API, but it turns out that running it in debug in Visual Studio has the same issue.
The short term fix is to recompile the stored procedure. This will then return the data immediately. But at some undetermined point in the future the same procedure will suddenly manifest this behaviour again.
Only this procedure does this; all other EF interactions, whether LINQ queries against tables or views, and all other procedures, seem to behave well.
This has happened before in a now legacy system that had the same architecture, though it was a different procedure on a different database.
Whilst shortening the timeout is a workaround, it's a system-wide value, and even 10 seconds (enough to cope with system hiccups) is way too long for this customer selection screen, which should be instantaneous.
Also, I would rather fix the underlying problem, if anyone has any clue what might be going on.
I have considered an OPTION (RECOMPILE) on the statement, but it just feels wrong to do so.
If anyone else has experienced this and has any insight I'd be most grateful, as it's driving me to distraction.
In the end this turned out to be a concatenation issue.
EF sets CONCAT_NULL_YIELDS_NULL off in its context, and for whatever reason this caused a truly terrible performance issue. Since SET options are part of the plan-cache key, the EF sessions also compiled and cached their own plan, which would explain why the same call looked fine from SSMS and why a recompile only fixed things temporarily.
For what I can only call a legacy issue from previous employees, a date returned by the procedure was built by inexplicably chopping a date into its various dateparts and concatenating them back together.
I replaced this by, you know, just returning the date (and amending the EF object accordingly), and hey presto, no more performance issue.
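For anyone who wants to see the setting in action, here is a minimal sketch (connectionString is assumed; requires System.Data.SqlClient):

using (var cnn = new SqlConnection(connectionString))
using (var cmd = cnn.CreateCommand())
{
    cnn.Open();

    cmd.CommandText = "SET CONCAT_NULL_YIELDS_NULL ON; SELECT 'a' + NULL;";
    Console.WriteLine(cmd.ExecuteScalar() == DBNull.Value); // True: 'a' + NULL is NULL

    cmd.CommandText = "SET CONCAT_NULL_YIELDS_NULL OFF; SELECT 'a' + NULL;";
    Console.WriteLine(cmd.ExecuteScalar());                 // a: NULL treated as an empty string
}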
I am working with a project which involves processing a lot of text files and results in either inserting records into an MS SQL database or updating existing information.
The SQL statements are written and stored in a list until the files have finished being processed.
This list is then processed: each statement was executed one at a time, but as there could be thousands of statements this created a very long-running process.
To attempt to speed this up I introduced some parallel processing, but this occasionally results in the following error:
Transaction (Process ID 94) was deadlocked on lock | communication
buffer resources with another process and has been chosen as the
deadlock victim. Rerun the transaction.
Code as follows:
public static void ParallelNonScalarExecution(List<string> Statements, string conn)
{
ParallelOptions po = new ParallelOptions();
po.MaxDegreeOfParallelism = 8;
CancellationTokenSource cancelToken = new CancellationTokenSource();
po.CancellationToken = cancelToken.Token;
Parallel.ForEach(Statements, po, Statement =>
{
using (SqlConnection mySqlConnection = new SqlConnection(conn))
{
mySqlConnection.Open();
using (SqlCommand mySqlCommand = new SqlCommand(Statement, mySqlConnection))
{
mySqlCommand.CommandTimeout = Timeout;
mySqlCommand.ExecuteNonQuery(); // the statements are UPDATEs, so there is no scalar to read
}
}
});
}
The update statements are, I believe, simple in what they are trying to achieve:
UPDATE TableA SET Notes = 'blahblahblah' WHERE Code = 1
UPDATE TableA SET Notes = 'blahblahblah', Date = '2016-01-01' WHERE Code = 2
UPDATE TableA SET Notes = 'blahblahblah' WHERE Code = 3
UPDATE TableA SET Notes = 'blahblahblah' WHERE Code = 4
UPDATE TableB SET Type = 1 WHERE Code = 100
UPDATE TableA SET Notes = 'blahblahblah', Date = '2016-01-01' WHERE Code = 5
UPDATE TableB SET Type = 1 WHERE Code = 101
What is the best way to overcome this issue?
From what I see you don't want to do what you are doing. I would NOT recommend having multiple update statements that affect the same data/table on different threads. This is a breeding ground for race conditions/deadlocks. In your case it should be safe, but if at any point you changed the WHERE condition and there was overlap, you would have a race condition issue.
If you really wanted to speed this up with multi-threading, then put all of the update statements for TableA on one thread and all of the update statements for TableB on another thread (see the sketch after the example statements below). Another idea is to batch your update statements:
UPDATE TableA SET Notes = 'blahblahblah' WHERE Code IN (1,2,3,4,5)
UPDATE TableA SET Date = '2016-01-01' WHERE Code IN (2,5)
UPDATE TableB SET Type = 1 WHERE Code IN (100,101)
These statements should be able to execute independently in a concurrent environment, as no two statements affect the same column.
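As a sketch of the one-thread-per-table idea (GetTargetTable is a hypothetical helper; in practice you would tag each statement with its target table at the point where you generate it; requires System.Linq):

public static void PerTableExecution(List<string> statements, string conn)
{
    // Each table's statements run sequentially on their own worker, so no
    // two threads ever contend for rows of the same table.
    var groups = statements.GroupBy(s => GetTargetTable(s)); // hypothetical helper
    Parallel.ForEach(groups, group =>
    {
        using (var cnn = new SqlConnection(conn))
        {
            cnn.Open();
            foreach (var statement in group) // sequential within one table
                using (var cmd = new SqlCommand(statement, cnn))
                    cmd.ExecuteNonQuery();
        }
    });
}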
Thread A updates resource x and does not commit, and can continue doing more updates.
Thread B updates resource y and does not commit, and can continue doing more updates.
At this point, both have uncommitted updates.
Now thread A updates resource y and waits on the lock held by thread B.
Thread B is not held up by anything, so it goes on, eventually tries to update resource x, and is blocked by the lock A has on x.
Now they are in a deadlock. It's a stalemate: neither one can proceed to commit, so the system kills one of them.
You have to commit more often to reduce the chances of a deadlock (though that does not eliminate the possibility entirely), or you have to carefully order your updates so that all updates to x are done and committed before going on to do any updates on y.
Code, notice the order of the values is different. So it alternates between locking rows:
static void Main( string[] args )
{
List<int> list = new List<int>();
for(int i = 0; i < 1000; i++ )
list.Add( i );
Parallel.ForEach( list, i =>
{
using( NamePressDataContext db = new NamePressDataContext() )
{
db.ExecuteCommand( @"update EBayDescriptionsCategories set CategoryId = Ids.CategoryId from EBayDescriptionsCategories
join (values (7276, 20870),(240, 20870)) as Ids(Id,CategoryId) on Ids.Id = EBayDescriptionsCategories.Id" );
db.ExecuteCommand( @"update EBayDescriptionsCategories set CategoryId = Ids.CategoryId from EBayDescriptionsCategories
join (values (240, 20870),(7276, 20870)) as Ids(Id,CategoryId) on Ids.Id = EBayDescriptionsCategories.Id" );
}
} );
}
Table def:
CREATE TABLE [dbo].[EDescriptionsCategories](
[CategoryId] [int] NOT NULL,
[Id] [int] NOT NULL,
CONSTRAINT [PK_EDescriptionsCategories] PRIMARY KEY CLUSTERED
(
[Id] ASC
)
)
Exception:
Transaction (Process ID 80) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
The code works only with the WITH (TABLOCK) hint. Is it possible not to lock the whole table just to update those 2 rows in parallel?
Your two statements acquire row locks in different order. That's a classic case for deadlocks. You can fix this by ensuring that the order of locks taken is always in some global order (for example, ordered by ID). You should probably coalesce the two UPDATE statements into one and sort the list of IDs on the client before sending it to SQL Server. For many typical UPDATE plans this actually works fine (not guaranteed, though).
Or, you add retry logic in case you detect a deadlock (SqlException.Number == 1205). This is more elegant because it requires no deeper code changes. But deadlocks have performance implications so only do this for low deadlock rates.
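A minimal sketch of that retry logic (the three attempts and the backoff are arbitrary choices; requires System.Data.SqlClient and System.Threading):

static void ExecuteWithDeadlockRetry(Action action, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            action();
            return;
        }
        catch (SqlException ex) when (ex.Number == 1205 && attempt < maxAttempts)
        {
            // Chosen as the deadlock victim: back off briefly, then rerun.
            Thread.Sleep(100 * attempt);
        }
    }
}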
If your parallel processing generates lots of updates, you could INSERT all those updates into a temp table (which can be done concurrently) and when you are done you execute one big UPDATE that copies all the individual update records to the main table. You just change the join source in your sample queries.
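A sketch of that temp-table variant, reusing the question's table (pendingUpdates is an assumed DataTable filled by the parallel workers; connectionString is assumed):

using (var cnn = new SqlConnection(connectionString))
{
    cnn.Open();

    using (var cmd = new SqlCommand(
        "CREATE TABLE #Pending (Id int PRIMARY KEY, CategoryId int);", cnn))
        cmd.ExecuteNonQuery();

    // Staging inserts never touch the target table, so they cannot deadlock
    // against each other.
    using (var bulk = new SqlBulkCopy(cnn))
    {
        bulk.DestinationTableName = "#Pending";
        bulk.WriteToServer(pendingUpdates);
    }

    // One set-based UPDATE applies everything, acquiring locks in one pass.
    using (var cmd = new SqlCommand(@"
        UPDATE e SET CategoryId = p.CategoryId
        FROM EBayDescriptionsCategories e
        JOIN #Pending p ON p.Id = e.Id;", cnn))
        cmd.ExecuteNonQuery();
}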
Code, notice the order of the values is different. So it alternates between locking rows
No, it doesn't alternate. It acquires the locks in two different order. Deadlock is guaranteed.
Is it possible not to ... update just those 2 rows in parallel?
Not like that it isn't. What you're asking for is the definition of a deadlock. Something's gotta give. The solution must come from your business logic: there should be no attempts to process the same set of IDs from distinct transactions. What that means is entirely business-specific. If you cannot achieve that, then basically you are just begging for deadlocks. There are some things you can do, but none is bulletproof and all come at great cost. The problem is higher up the chain.
I agree with the other answers as regards the locking.
The more pressing question is what you are hoping to gain from this. There's only one cable those commands are travelling down.
You are probably making the overall performance worse by doing this. Far better to do your computation in parallel but serialize (and possibly batch) your updates, as in the sketch below.
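A sketch of that shape: compute in parallel, funnel the statements through one writer (workItems, BuildUpdateStatement, and connectionString are stand-ins; requires System.Collections.Concurrent and System.Threading.Tasks):

var updates = new BlockingCollection<string>();

// Single writer: drains the queue and issues the UPDATEs one after another.
var writer = Task.Run(() =>
{
    using (var cnn = new SqlConnection(connectionString))
    {
        cnn.Open();
        foreach (var sql in updates.GetConsumingEnumerable())
            using (var cmd = new SqlCommand(sql, cnn))
                cmd.ExecuteNonQuery();
    }
});

// Producers: the expensive computation happens concurrently.
Parallel.ForEach(workItems, item => updates.Add(BuildUpdateStatement(item)));
updates.CompleteAdding();
writer.Wait();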
I have the following query which checks goods receipts against purchase orders to see which items were originally ordered and how many have been booked in via goods receipts. E.g. I place a purchase order for 10 banana milkshakes, then I generate a goods receipt stating that I received 5 of those milkshakes on said purchase order.
SELECT t.PONUM, t.ITMNUM, t.ordered,
SUM(t.received) as received,
t.ordered - ISNULL(SUM(t.received),0) as remaining,
SUM(t.orderedcartons) as orderedcartons,
SUM(t.cartonsreceived) as cartonsreceived,
SUM(t.remainingcartons) as remainingcartons
FROM (SELECT pod.PONUM,
pod.ITMNUM, pod.QTY as ordered, ISNULL(grd.QTYRECEIVED, 0) as received,
pod.DELIVERYSIZE as orderedcartons,
ISNULL(grd.DELIVERYSIZERECEIVED, 0) as cartonsreceived,
(pod.DELIVERYSIZE - ISNULL(grd.DELIVERYSIZERECEIVED, 0)) as remainingcartons
FROM TBLPODETAILS pod
LEFT OUTER JOIN TBLGRDETAILS grd
ON pod.PONUM = grd.PONUM and pod.ITMNUM = grd.ITMNUM) t
GROUP BY t.ITMNUM, t.PONUM, t.ordered
ORDER BY t.PONUM
Which returns the following data:
PONUM ITMNUM ordered received remaining orderedcartons cartonsreceived remainingcartons
1 1 5.0000 3.0000 2.0000 5.0000 3.0000 2.0000
Next I have a C# loop to generate update queries based on the data I get back from the above query:
foreach (DataRow POUpdate in dt.Rows) {...
query += "UPDATE MYTABLE SET REMAININGITEMS=" + remainingQty.ToString()
+ ", REMAININGDELIVERYSIZE=" + remainingBoxes.ToString() + " WHERE ITMNUM="
+ itemNumber + " AND PONUM=" + poNumber + ";";
I then execute each update query against the DB. This works fine on my local dev machine.
However, on the production server that first query pulls back over 150,000 records.
Looping over so many rows locks up SQL Server and my app. Is it the foreach? Is it the original SELECT loading all that data into memory? Both? Can I make this into one single query and cut out the C# loop? If so, what's the most efficient way to achieve this?
In SQL, the goal should be to write operations on entire tables at once. The SQL server can be very efficient at doing so, but will need a significant overhead on any interaction, since it needs to deal with consistency, atomicity of transactions, etc. So in a way, your fixed cost per transaction is high, for the server to do its thing, but your marginal cost for additional rows in a transaction is very low - updating 1m rows may be 1/2 as fast as updating 10.
This means that the foreach is going to cause the SQL server to constantly go back and forth with your application, and that fixed cost of locking/unlocking and doing transactions is incurred every time.
Can you write the query to operate in SQL, instead of manipulating data in C#? It seems you want to write a relatively simple update based on your select statement (see, for instance, SQL update from one Table to another based on a ID match).
Try something like the following (not code tested, since I don't have access to your database structure, etc.):
UPDATE MYTABLE
SET REMAININGITEMS = x.remaining,
REMAININGDELIVERYSIZE = x.remainingcartons
From
(SELECT t.PONUM, t.ITMNUM, t.ordered,
SUM(t.received) as received,
t.ordered - ISNULL(SUM(t.received),0) as remaining,
SUM(t.orderedcartons) as orderedcartons,
SUM(t.cartonsreceived) as cartonsreceived,
SUM(t.remainingcartons) as remainingcartons
FROM (SELECT pod.PONUM,
pod.ITMNUM, pod.QTY as ordered, ISNULL(grd.QTYRECEIVED, 0) as received,
pod.DELIVERYSIZE as orderedcartons,
ISNULL(grd.DELIVERYSIZERECEIVED, 0) as cartonsreceived,
(pod.DELIVERYSIZE - ISNULL(grd.DELIVERYSIZERECEIVED, 0)) as remainingcartons
FROM TBLPODETAILS pod
LEFT OUTER JOIN TBLGRDETAILS grd
ON pod.PONUM = grd.PONUM and pod.ITMNUM = grd.ITMNUM) t
GROUP BY t.ITMNUM, t.PONUM, t.ordered
) as x
join MYTABLE on MYTABLE.ITMNUM = x.ITMNUM AND MYTABLE.PONUM = x.PONUM
As KM says in the comments, the problem here is coming back to the client app, and to then operate on each row with another database trip. That's slow, and can lead to stupid little bugs, which could cause bogus data.
Also, concatenating strings into SQL as you're doing is generally considered a very bad idea - SQL Injection (as Joel Coehoorn writes) is a real possibility.
How about:
create view OrderBalance
as
SELECT t.PONUM, t.ITMNUM, t.ordered,
SUM(t.received) as received,
t.ordered - ISNULL(SUM(t.received),0) as remaining,
SUM(t.orderedcartons) as orderedcartons,
SUM(t.cartonsreceived) as cartonsreceived,
SUM(t.remainingcartons) as remainingcartons
FROM (SELECT pod.PONUM,
pod.ITMNUM, pod.QTY as ordered, ISNULL(grd.QTYRECEIVED, 0) as received,
pod.DELIVERYSIZE as orderedcartons,
ISNULL(grd.DELIVERYSIZERECEIVED, 0) as cartonsreceived,
(pod.DELIVERYSIZE - ISNULL(grd.DELIVERYSIZERECEIVED, 0)) as remainingcartons
FROM TBLPODETAILS pod
LEFT OUTER JOIN TBLGRDETAILS grd
ON pod.PONUM = grd.PONUM and pod.ITMNUM = grd.ITMNUM) t
GROUP BY t.ITMNUM, t.PONUM, t.ordered
This seems to have exactly the data that your "MYTABLE" has - maybe you don't even need MYTABLE anymore, and you can just use the view!
If you have other data on MYTABLE, your update becomes:
UPDATE mt
SET REMAININGITEMS = ob.remaining,
REMAININGDELIVERYSIZE = ob.remainingcartons
from MYTABLE mt
join OrderBalance ob
on mt.ITMNUM = ob.ITMNUM
AND mt.PONUM = ob.PONUM
(Although, as David Mannheim writes, it may be better to not use a view and use a solution similar to the one he proposes).
The other answers show you a great way to perform the whole update entirely in RDBMS. If you can do it like that, that's the perfect solution: you cannot beat it with a C# / RDBMS combination because of extra roundtrips and data transfer issues.
However, if your update requires some calculations that for one reason or another cannot be performed in the RDBMS, you should modify your code to construct a single parameterized update in place of the potentially gigantic 150,000-statement batch that you are currently constructing.
using (var upd = conn.CreateCommand()) {
upd.CommandText = @"
UPDATE MYTABLE SET
REMAININGITEMS=@remainingQty
, REMAININGDELIVERYSIZE=@remainingBoxes
WHERE ITMNUM=@itemNumber AND PONUM=@poNumber";
var remainingQtyParam = upd.CreateParameter();
remainingQtyParam.ParameterName = "@remainingQty";
remainingQtyParam.DbType = DbType.Int64; // <<== Correct for your specific type
upd.Parameters.Add(remainingQtyParam);
var remainingBoxesParam = upd.CreateParameter();
remainingBoxesParam.ParameterName = "@remainingBoxes";
remainingBoxesParam.DbType = DbType.Int64; // <<== Correct for your specific type
upd.Parameters.Add(remainingBoxesParam);
upd.Parameters.Add(remainingBoxesParam);
...
foreach (DataRow POUpdate in dt.Rows) {
remainingQtyParam.Value = ...
remainingBoxesParam.Value = ...
upd.ExecuteNonQuery();
}
}
The idea is to turn 150,000 updates that all look the same into a single parameterized update statement that is created once and executed many times.
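If even the per-row round trips are too slow, a table-valued parameter can push all of the computed rows to the server in one call, followed by a single set-based UPDATE. A sketch, assuming conn is a SqlConnection and that a user-defined table type has been created, something like CREATE TYPE dbo.RemainingUpdate AS TABLE (ITMNUM int, PONUM int, RemainingItems int, RemainingBoxes int):

var table = new DataTable();
table.Columns.Add("ITMNUM", typeof(int));
table.Columns.Add("PONUM", typeof(int));
table.Columns.Add("RemainingItems", typeof(int));
table.Columns.Add("RemainingBoxes", typeof(int));
// ... add one row per computed update ...

using (var upd = new SqlCommand(@"
    UPDATE mt SET
        REMAININGITEMS = u.RemainingItems,
        REMAININGDELIVERYSIZE = u.RemainingBoxes
    FROM MYTABLE mt
    JOIN @Updates AS u ON u.ITMNUM = mt.ITMNUM AND u.PONUM = mt.PONUM;", conn))
{
    // The whole batch travels to the server as one structured parameter.
    var p = upd.Parameters.AddWithValue("@Updates", table);
    p.SqlDbType = SqlDbType.Structured;
    p.TypeName = "dbo.RemainingUpdate";
    upd.ExecuteNonQuery();
}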
I'd like to increase performance of very simple select and update queries of .NET & MSSQL 2k8.
My queries always select or update a single row. The DB tables have indexes on the columns I query on.
My test .NET code looks like this:
public static MyData GetMyData(int accountID, string symbol)
{
using (var cnn = new SqlConnection(connectionString))
{
cnn.Open();
var cmd = new SqlCommand("MyData_Get", cnn);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add(CreateInputParam("#AccountID", SqlDbType.Int, accountID));
cmd.Parameters.Add(CreateInputParam("#Symbol", SqlDbType.VarChar, symbol));
SqlDataReader reader = cmd.ExecuteReader();
while (reader.Read())
{
var MyData = new MyData();
MyData.ID = (int)reader["ID"];
MyData.A = (int)reader["A"];
MyData.B = reader["B"].ToString();
MyData.C = (int)reader["C"];
MyData.D = Convert.ToDouble(reader["D"]);
MyData.E = Convert.ToDouble(reader["E"]);
MyData.F = Convert.ToDouble(reader["F"]);
return MyData;
}
}
return null; // no matching row
}
and the according stored procedure looks like this:
PROCEDURE [dbo].[MyData_Get]
@AccountID int,
@Symbol varchar(25)
AS
BEGIN
SET NOCOUNT ON;
SELECT p.ID, p.A, p.B, p.C, p.D, p.E, p.F FROM [MyData] AS p WHERE p.AccountID = @AccountID AND p.Symbol = @Symbol
END
What I'm seeing is that if I run GetMyData in a loop, querying MyData objects, I don't exceed about ~310 transactions/sec. I was hoping to achieve well over 1,000 transactions/sec.
On the SQL Server side, not really sure what I can improve for such a simple query.
ANTS profiler shows me that on the .NET side, as expected, the bottlenecks are cnn.Open and cmd.ExecuteReader; however, I have no idea how I could significantly improve my .NET code.
I've seen benchmarks though where people seem to easily achieve 10s of thousands transactions/sec.
Any advice on how I can significantly improve the performance for this scenario would be greatly appreciated!
Thanks,
Tom
EDIT:
Per MrLink's recommendation, adding "TOP 1" to the SELECT query improved performance from about 310 to about 585 transactions/sec.
EDIT 2:
Arash N suggested adding WITH (NOLOCK) to the SELECT query, and that dramatically improved the performance! I'm now seeing around 2500 transactions/sec.
EDIT 3:
Another slight optimization that I just did on the .NET side helped me gain another 150 transactions/sec: changing while (reader.Read()) to if (reader.Read()) surprisingly made quite a difference. On average I'm now seeing 2719 transactions/sec.
Try using WITH (NOLOCK) in your SELECT statement to increase the performance. This selects the row without taking shared locks (at the risk of reading uncommitted data).
SELECT p.ID, p.A, p.B, p.C, p.D, p.E, p.F FROM [MyData] AS p WITH (NOLOCK) WHERE p.AccountID = @AccountID AND p.Symbol = @Symbol
Some things to consider.
First, you're not closing the server connection (cnn.Close();). Eventually it will get closed by the garbage collector, but until that happens you're creating a brand new connection to the database every time rather than collecting one from the connection pool.
Second, do you have an index set in SQL Server covering the AccountID and Symbol columns?
Third, while AccountID being an int is nice and fast, the Symbol column being varchar(25) is always going to be much slower. Can you change this to an int flag?
Make sure your database connections are actually pooling. If you are seeing a bottleneck in cnn.Open, there would seem to be a good chance they are not getting pooled.
I was hoping to achieve well over 1,000 transactions/sec [when running GetMyData in a loop]
What you are asking for is for GetMyData to run in less than 1ms - this is just pointless optimisation! At the bare minimum this method involves a round trip to the database server (possibly involving network access) - you wouldn't be able to make this method much faster if your query was SELECT 1.
If you have a genuine requirement to make more requests per second then the answer is either to use multiple threads or to buy a faster PC.
There is absolutely nothing wrong with your code - I'm not sure where you have seen people managing 10,000+ transactions per second, but I'm sure this must have involved multiple concurrent clients accessing the same database server rather than a single thread managing to execute queries in less than a 10th of a ms!
Is your method called frequently? Could you batch your requests so you can open your connection, create your parameters, get the result and reuse them for several queries before closing the whole thing up again?
If the data is not frequently invalidated (updated) I would implement a cache layer. This is one of the most effective ways (if used correctly) to gain performance.
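A sketch of such a cache layer over the question's GetMyData, using System.Runtime.Caching (the 30-second lifetime is an arbitrary choice):

static readonly MemoryCache Cache = MemoryCache.Default;

public static MyData GetMyDataCached(int accountID, string symbol)
{
    // Memoize lookups for a short window; only appropriate if slightly
    // stale data is acceptable for this read path.
    string key = "MyData:" + accountID + ":" + symbol;
    var cached = (MyData)Cache.Get(key);
    if (cached != null)
        return cached;

    var data = GetMyData(accountID, symbol); // the original database lookup
    if (data != null)
        Cache.Set(key, data, DateTimeOffset.UtcNow.AddSeconds(30));
    return data;
}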
You could use output parameters instead of a select since there's always a single row.
You could also create the SqlCommand ahead of time and re-use it. If you are executing a lot of queries in a single thread you can keep executing it on the same connection. If not, you could create a pool of them or do cmdTemplate.Clone() and set the Connection.
Try re-using the Command and Prepare'ing it first.
I can't say that it will definitely help, but it seems worth a try.
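A sketch of what that reuse might look like (requests is a stand-in for whatever drives the loop; for a stored procedure, Prepare is close to a no-op, so most of the gain comes from reusing the open connection and the parameter objects):

using (var cnn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("MyData_Get", cnn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    var accountId = cmd.Parameters.Add("@AccountID", SqlDbType.Int);
    var symbol = cmd.Parameters.Add("@Symbol", SqlDbType.VarChar, 25);
    cnn.Open();
    cmd.Prepare();

    foreach (var request in requests) // your (AccountID, Symbol) pairs
    {
        // Only the parameter values change between executions.
        accountId.Value = request.AccountID;
        symbol.Value = request.Symbol;
        using (var reader = cmd.ExecuteReader())
        {
            if (reader.Read())
            {
                // map MyData exactly as in the question's GetMyData
            }
        }
    }
}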
In no particular order...
Have you (or your DBAs) examined the execution plan your stored procedure is getting? Has SQL Server cached a bogus execution plan (either due to oddball parameters or old stats)?
How often are statistics updated on the database?
Do you use temp tables in your stored procedure? If so, are they created up front? If not, you'll be doing a lot of recompiles, as creating a temp table invalidates the execution plan.
Are you using connection pooling? Opening/closing a SQL Server connection is an expensive operation.
Is your table clustered on AccountID and Symbol?
Finally...
Is there a reason you're hitting this table by account and symbol rather than, say, just retrieving all the data for an entire account in one fell swoop?