The thing is that SQL Server sometimes chooses a session as its deadlock victim when two processes lock each other out. One process does an update and the other just a read. During the read, SQL Server takes so-called 'shared locks', which do not block other readers but do block updaters. So far the only way to solve this is to reprocess the victimized thread.
Now this is happening in a web application, and I would like to have a mechanism that can do the reprocessing (say, with a maximum of 5 times) when needed.
I've looked at IHttpModule, which has BeginRequest() and EndRequest() events (amongst other events), but that does not give me the ability to reprocess the request.
In fact, what I need is something that inserts itself between the HTTP handler and the process being called.
I could write something like this:
int maxtries = 5;
while (maxtries > 0)
{
    try
    {
        using (var scope = Session.OpenTransaction())
        {
            // process
            scope.Complete(); // commit
            return result;
        }
    }
    catch (DeadlockException dlex)
    {
        maxtries--;
    }
    catch (Exception ex)
    {
        throw;
    }
}
but I would have to write that for every request, which is tedious and error prone. It would be nice if I could just configure a kind of reprocessing handler via Web.config that is automatically called and does the deadlock reprocessing for me.
If you're getting deadlocks, you've got something wrong in your DB layer. You're missing indices or something similar, or you are doing out-of-sequence updates within transactions that lock dependent entities.
Regardless, using HTTP as a mechanism to handle this error is not the way to go.
If you truly need to retry a deadlock, then you should wrap the attempt in your own function and retry almost exactly as you describe above.
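For example, a generic wrapper along those lines might look like this (a sketch; DeadlockException is a stand-in for however your data layer surfaces SQL Server error 1205, and RetryOnDeadlock is a name I made up):
// Hypothetical reusable wrapper: retry an operation on deadlock only.
public static T RetryOnDeadlock<T>(Func<T> operation, int maxTries = 5)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return operation();
        }
        catch (DeadlockException)
        {
            if (attempt >= maxTries)
                throw; // give up and surface the deadlock
        }
    }
}
// Usage: var order = RetryOnDeadlock(() => ProcessOrder(id));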
BUT I would strongly suggest that you identify the cause of the deadlock and resolve it.
Hope that does not sound too dismissive of your problem, but fix the cause of the problem not the symptoms.
Since you're using MVC, and assuming it is safe to rerun your entire action on DB failure, you can simply write a common base controller class from which all of your controllers inherit (if you don't already have one), and in it override OnActionExecuting, trap the specific exception(s), and retry. This way you'll have the code in only one place, but, again, this assumes it is safe to rerun the entire action in such a case.
Example:
public abstract class MyBaseController : Controller
{
    protected override void OnActionExecuting(
        ActionExecutingContext filterContext
    )
    {
        int maxtries = 5;
        while (maxtries > 0)
        {
            try
            {
                base.OnActionExecuting(filterContext);
                return;
            }
            catch (DeadlockException)
            {
                maxtries--;
            }
            catch (Exception)
            {
                throw;
            }
        }
        throw new Exception("Persistent DB locking - max retries reached.");
    }
}
... and then simply update every relevant controller to inherit from this controller (again, if you don't already have a common controller).
EDIT: Btw, Bigtoe's answer is correct - deadlock is the cause and should be dealt with accordingly. The above solution is really a workaround if the DB layer cannot be reliably fixed. The first attempt should be to review and (re-)structure the queries so as to avoid deadlocks in the first place. Only if that is not practical should the above workaround be employed.
Related
I am developing an application in which I need to consume a REST API that has a token associated with it. After a specific interval the token expires, so if I then try to call the API it throws an exception. To resolve this, should I refresh the token in the catch block and use GoTo to execute the try block again? I read a couple of articles and most of them suggest avoiding GoTo.
Below are the links I referred to for the best approach to follow, but I am still not convinced whether to go with it or not:
1> Is it possible to execute the code in the try block again after an exception in caught in catch block?
2> https://blogs.msdn.microsoft.com/abhinaba/2005/10/01/c-try-and-retry/
Just put a retry count and a continue to skip to the next iteration of a loop in your logic:
int maxRetry = 10;
for (int i = 0; i < maxRetry; i++)
{
    try
    {
        //DO YOUR STUFF
        break; //SUCCESS! EXIT THE LOOP
    }
    catch (Exception)
    {
        //OH NOES! ERROR!
        continue; //RETRY!
    }
}
When it has tried 10 times it exits and that's it.
You can use whichever loop fits your needs: for, while, do-while, etc.
If there is some really bad error that needs to stop the execution of the loop entirely, let it escape by rethrowing it:
catch (VeryWrongException)
{
    throw;
}
catch (Exception)
{
    //OH NOES! ERROR!
    continue; //RETRY!
}
where VeryWrongException is the type of exception you actually want to escalate, instead of retrying via the previous catch block.
Extra: to get an idea of what kinds of exceptions your code can generate (so you can catch them), IntelliSense is your friend.
Catch statements which use GoTo to retry the same logic can be dangerous if they are not used properly.
A better way of dealing with this is to write some retry logic, that will attempt to perform your tasks a limited number of times, ideally allowing you to specify your exception.
If you don't want to write your own retry logic, I can recommend you use an external library such as Polly
An example of its usage would be this:
// Set up the policy
var retryPolicy = Policy
    .Handle<Exception>()
    .WaitAndRetry(
        3,
        retryAttempt => TimeSpan.FromSeconds(5 * retryAttempt)
    );
// Attempt the operation, using Polly to retry a maximum of three times.
retryPolicy.Execute(() =>
{
    // Your Code
});
I am afraid you are trying to solve this problem in the wrong place. If a request to an API fails because of an expired token, you should just throw an exception.
Another class, maybe the one that is responsible for initiating the request in the first place, could resolve the error (refreshing the token) and retry requesting data.
If you merge all this responsibility in one place, things could get complicated really fast.
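As a sketch of that separation (all the types here are hypothetical placeholders), the initiating class might look like:
public class ApiCaller
{
    private readonly IApiClient _client;      // wraps the REST API
    private readonly ITokenProvider _tokens;  // owns the current token

    public ApiCaller(IApiClient client, ITokenProvider tokens)
    {
        _client = client;
        _tokens = tokens;
    }

    public Data GetData()
    {
        try
        {
            return _client.GetData(_tokens.Current);
        }
        catch (TokenExpiredException)
        {
            _tokens.Refresh();                       // resolve the error here...
            return _client.GetData(_tokens.Current); // ...and retry once
        }
    }
}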
I am currently using MassTransit with the Courier pattern.
I've set up an Activity which may fail, and I want to be able to subscribe to this failure and act accordingly.
My problem is, even though I can subscribe to the failure, and even see the exception that caused it, I am unable to pass any arguments to it.
For testing purposes, suppose I have the following activity:
public class MyActivity : ExecuteActivity<MyMessage>
{
    public Task<ExecutionResult> Execute(ExecuteContext<MyMessage> context)
    {
        try
        {
            // .... some code
            throw new FaultException<RegistrationRefusedData>(
                new RegistrationRefusedData(RegistrationRefusedReason.ItemUnavailable));
            // .... some code
        }
        catch (Exception ex)
        {
            return Task.FromResult(context.Faulted(ex));
        }
    }
}
The problem is the reason (RegistrationRefusedReason) I am passing as an argument of the exception. If I subscribe a RoutingSlipActivityFaulted consumer, I can get almost all the information I need:
public class ActivityFaultedConsumer : IMessageConsumer<RoutingSlipActivityFaulted>
{
    public void Consume(RoutingSlipActivityFaulted message)
    {
        string exceptionMessage = message.ExceptionInfo.Message; // OK
        string messageType = message.ExceptionInfo.ExceptionType; // OK
        RegistrationRefusedReason reason = ??????;
    }
}
I feel like I am missing something important here, (maybe misusing the pattern?).
Is there any other way to get parameters from a faulted activity?
So, the case you're describing isn't a Fault. It's a failure to meet a business condition. In this case, you wouldn't want to retry the transaction, you'd want to terminate it. To notify the initiator of the routing slip, you'd Publish a business event signifying that the transaction was not completed due to the business condition.
For instance, in your case, you may do something like:
context.Publish<RegistrationRefused>(new {
CustomerId = xxx,
ItemId = xxxx,
Reason = "Item was unavailable"
});
context.Terminate();
This would terminate the routing slip (the subsequent activities would not be executed), and produce a RoutingSlipTerminated event.
That's the proper way to end a routing slip due to a business condition or rule. Exceptions are for exceptional behavior only, since you'll likely want to retry them to handle the failure.
Kinda raising this from the dead, but I really haven't found a neat solution to this.
Here is my scenario:
I want to implement a request/response, but I want to wait for the execution of a routing slip.
Like Fabio, I want to compensate any previous activities, and I want to pass data back to the request client in case of a fault.
Conveniently, Chris provided a RoutingSlipRequestProxy/RoutingSlipResponseProxy which does just that. I've found 2 approaches, but both of them seem very hacky to me.
Approach 1:
1. The request client waits for ISimpleResponse or ISimpleFailResponse.
2. RoutingSlipRequestProxy sets the ResponseAddress in the variables.
3. The activity sends ISimpleFailResponse to the ResponseAddress.
4. The client waits for either response.
5. The RoutingSlipResponseProxy sends back Fault<ISimpleResponse> to the ResponseAddress.
From what I see, the hackiness comes from steps 4/5 and their order. I am pretty sure it works, but it could easily stop working if messages are consumed out of order.
Sample code: https://github.com/steliyan/Sample-RequestResponse/commit/3fcb196804d9db48617a49c7a8f8c276b47b03ef
Approach 2:
1. The request client waits for ISimpleResponse or ISimpleFailResponse.
2. The activity calls ReviseItinerary with the variables and adds a faulting activity.*
3. The faulting activity faults.
4. The RoutingSlipResponseProxy2 gets the ValidationErrors and sends back ISimpleFailResponse to the ResponseAddress.
* The activity needs to be an Activity and not an ExecuteActivity, because there is no overload of ReviseItinerary that takes variables but no activity log.
This approach seems hacky because an additional faulting activity is added to the itinerary just to be able to add a variable to the routing slip.
Sample code: https://github.com/steliyan/Sample-RequestResponse/commit/e9644fa683255f2bda8ae33d8add742f6ffe3817
Conclusion:
Looking at the MassTransit code, it doesn't seem like a problem to add a FaultedWithVariables overload. However, I think Chris' point is that there should be a better way to design the workflow, though I am not sure about that.
I've got a serviced component which looks something like this (not written by me):
[Transaction(TransactionOption.Required, Isolation = TransactionIsolationLevel.Serializable, Timeout = 120), EventTrackingEnabled(true)]
public class SomeComponent : ServicedComponent
{
    public void DoSomething()
    {
        try
        {
            //some db operation
        }
        catch (Exception err)
        {
            ContextUtil.SetAbort();
            throw;
        }
    }
}
Is the ContextUtil.SetAbort() really required? Won't the exception abort the transaction when the component is left?
Only if you want to manage the transaction manually.
Your component will automatically vote to abort (in case any exception is raised) or to commit, if you decorate your operation with the [AutoComplete] attribute in this way:
[AutoComplete]
public void DoSomething()
EDIT:
For more info about this attribute, see MSDN here:
The transaction automatically calls SetComplete if the method call returns normally. If the method call throws an exception, the transaction is aborted.
Anyway, in the rare case that you really need to manage the transaction manually, it is really important that you don't leave your transactions in doubt. Your code is missing the ContextUtil.SetComplete() that should be explicitly called on success.
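For completeness, the manually managed version should vote explicitly on both paths, roughly like this:
public void DoSomething()
{
    try
    {
        //some db operation
        ContextUtil.SetComplete(); // explicit commit vote on success
    }
    catch
    {
        ContextUtil.SetAbort();    // explicit abort vote on failure
        throw;
    }
}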
So I'm working on an Entity Framework project that'll be used as kind of a DAL, and when running stress tests (starting a couple of updates on entities through Thread()s) I'm getting these:
_innerException = {"Transaction (Process ID 94) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction."}
Here are some examples of how I implemented my classes' methods:
public class OrderController
{
    public Order Select(long orderID)
    {
        using (var ctx = new BackEndEntities())
        {
            try
            {
                var res = from n in ctx.Orders
                              .Include("OrderedServices.Professional")
                              .Include("Agency")
                              .Include("Agent")
                          where n.OrderID == orderID
                          select n;
                return res.FirstOrDefault();
            }
            catch (Exception ex)
            {
                throw ex;
            }
        }
    }

    public bool Update(Order order)
    {
        using (var ctx = new BackEndEntities())
        {
            try
            {
                order.ModificationDate = DateTime.Now;
                ctx.Orders.Attach(order);
                ctx.SaveChanges();
                return true;
            }
            catch (Exception ex)
            {
                throw ex;
            }
        }
    }
}
and:
public class AgentController
{
    public Agent Select(long agentID)
    {
        using (var ctx = new BackEndEntities())
        {
            try
            {
                var res = from n in ctx.Agents.Include("Orders")
                          where n.AgentID == agentID
                          select n;
                return res.FirstOrDefault();
            }
            catch (Exception ex)
            {
                throw ex;
            }
        }
    }

    public bool Update(Agent agent)
    {
        using (var ctx = new BackEndEntities())
        {
            try
            {
                agent.ModificationDate = DateTime.Now;
                ctx.Agents.Attach(agent);
                ctx.ObjectStateManager.ChangeObjectState(agent, System.Data.EntityState.Modified);
                ctx.SaveChanges();
                return true;
            }
            catch (Exception ex)
            {
                throw ex;
            }
        }
    }
}
Obviously, the code here could probably be better, but I'm rather an EF newbie. I think my problem is really a design problem with the context.
I remember someone here mentioning that if my context is NOT shared, I won't run into these deadlock issues.
This does not seem 'shared' to me, as I do a using (new BackEndEntities()) in each method, so what do I have to change to make it more robust?
This DAL will be used in a web service exposed on the internet (after code review, of course), so I have no control over how much it'll be stressed, and lots of different instances might want to update the same entity.
Thanks!
The reason for those deadlocks isn't your code itself but EF's default TransactionScope isolation level, which is SERIALIZABLE.
SERIALIZABLE is the most restrictive isolation level possible; by default you are opting into the heaviest locking, and you can expect a lot of it!
The solution is to specify another TransactionScope depending on the action you want to perform. You can surround your EF actions with something like this:
using (var scope = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot }))
{
    // do something with EF here
    scope.Complete();
}
Read more on this issue:
http://blogs.msdn.com/b/diego/archive/2012/04/01/tips-to-avoid-deadlocks-in-entity-framework-applications.aspx
http://blogs.u2u.be/diederik/post/2010/06/29/Transactions-and-Connections-in-Entity-Framework-40.aspx
http://blog.aggregatedintelligence.com/2012/04/sql-server-transaction-isolation-and.html
https://serverfault.com/questions/319373/sql-deadlocking-and-timing-out-almost-constantly
Deadlock freedom is a pretty hard problem in a big system. It has nothing to do with EF by itself.
Shortening the lifetime of your transactions reduces deadlocks but it introduces data inconsistencies. In those places where you were deadlocking previously you are now destroying data (without any notification).
So choose your context lifetime and your transaction lifetime according to the logical transaction, not according to physical considerations.
Turn on snapshot isolation. This takes reading transactions totally out of the equation.
For writing transactions you need to find a lock ordering. Often it is the easiest way to lock pessimistically and at a higher level. Example: Are you always modifying data in the context of a customer? Take an update lock on that customer as the first statement of your transactions. That provides total deadlock freedom by serializing access to that customer.
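As a sketch of that last idea, assuming SQL Server and plain ADO.NET inside an open transaction (the connection, transaction, and Customers table are placeholders):
// Take an update lock on the customer row as the first statement of the
// transaction. UPDLOCK + HOLDLOCK holds the lock until the transaction ends,
// serializing all writers for this customer.
using (var cmd = new SqlCommand(
    "SELECT CustomerID FROM Customers WITH (UPDLOCK, HOLDLOCK) WHERE CustomerID = @id",
    connection, transaction))
{
    cmd.Parameters.AddWithValue("@id", customerId);
    cmd.ExecuteScalar();
}
// ... the rest of the transaction's reads and writes for this customer follow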
The context is what gives Entity Framework its ability to talk to the database; without a context there's no concept of what goes where. Spinning up a context, therefore, is kind of a big deal that occupies a lot of resources, including external resources like the database. I believe your problem IS the 'new' command, since you would have multiple threads attempting to spin up and grab the same database resource, which definitely would deadlock.
Your code as you've posted it seems to be an anti-pattern. The way it looks, you have your Entity Context spinning up and going out of scope relatively quickly, while your repository CRUD objects seem to be persisting for a much longer time.
The way the companies I have implemented Entity Framework for have traditionally done it is exactly the opposite: the Context is created and kept for as long as the assembly has need of the database, and the repository CRUD objects are created and die in microseconds.
I cannot say where you got your assertion of the context not being shared from, so I don't know under what circumstances that was said, but it is absolutely true that you should not share the context across assemblies. Within the same assembly, I cannot see any reason why you wouldn't, given how many resources it takes to start up a context and how long it takes to do so. The Entity context is quite heavy, and if you were to make your current code work by going single-threaded I suspect you would see some absolutely atrocious performance.
So what I would recommend instead is to refactor this so you have Create(BackEndEntities context) and Update(BackEndEntities context), then have your master thread (the one making all these child threads) create and maintain a BackEndEntities context to pass along to its children, as sketched below. Also be sure that you get rid of your AgentControllers and OrderControllers the instant you're done with them and never, ever reuse them outside of a method. Implementing a good inversion-of-control framework like Ninject or StructureMap can make this a lot easier.
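A rough sketch of that shape, reusing the names from the question (illustrative only, not a drop-in fix):
public class OrderController
{
    // The caller owns the context and decides how long it lives.
    public Order Select(BackEndEntities ctx, long orderID)
    {
        return ctx.Orders.FirstOrDefault(n => n.OrderID == orderID);
    }

    public bool Update(BackEndEntities ctx, Order order)
    {
        order.ModificationDate = DateTime.Now;
        ctx.Orders.Attach(order);
        ctx.ObjectStateManager.ChangeObjectState(order, System.Data.EntityState.Modified);
        return ctx.SaveChanges() > 0;
    }
}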
I have a website built in C#.NET that tends to produce a fairly steady stream of SQL timeouts from various user controls and I want to easily pop some code in to catch all unhandled exceptions and send them to something that can log them and display a friendly message to the user.
How do I, through minimal effort, catch all unhandled exceptions?
This question seems to say it's impossible, but that doesn't make sense to me (and it's about .NET 1.1 in Windows apps):
All unhandled exceptions finally pass through Application_Error in Global.asax. So, to give a general exception message or do logging operations, see Application_Error.
If you need to catch exceptions in all threads, the best approach is to implement an UnhandledExceptionModule and add it to your application; look here for an example.
Use the Application_Error method in your Global.asax file. Inside your Application_Error method implementation call Server.GetLastError(), log the details of the exception returned by Server.GetLastError() however you wish.
e.g.
void Application_Error(object sender, EventArgs e)
{
    // Code that runs when an unhandled error occurs
    log4net.ILog log = log4net.LogManager.GetLogger(typeof(object));
    using (log4net.NDC.Push(this.User.Identity.Name))
    {
        log.Fatal("Unhandled Exception", Server.GetLastError());
    }
}
Don't pay too much attention to the log4net stuff, Server.GetLastError() is the most useful bit, log the details however you prefer.
The ELMAH project sounds worth a try. In its own words:
"ELMAH (Error Logging Modules and Handlers) is an application-wide error logging facility that is completely pluggable. It can be dynamically added to a running ASP.NET web application, or even all ASP.NET web applications on a machine, without any need for re-compilation or re-deployment."
Its list of features includes:
- Logging of nearly all unhandled exceptions.
- A web page to remotely view the entire log of recorded exceptions.
- A web page to remotely view the full details of any one logged exception.
- In many cases, you can review the original yellow screen of death that ASP.NET generated for a given exception, even with customErrors mode turned off.
- An e-mail notification of each error at the time it occurs.
- An RSS feed of the last 15 errors from the log.
- A number of backing storage implementations for the log.
More on using ELMAH from dotnetslackers
You can subscribe to the AppDomain.CurrentDomain.UnhandledException event.
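For example, wiring it up in Global.asax (a minimal sketch; this lets you log such exceptions, but not recover from them, since they have already escaped their thread):
void Application_Start(object sender, EventArgs e)
{
    AppDomain.CurrentDomain.UnhandledException += (s, args) =>
    {
        var ex = args.ExceptionObject as Exception;
        // log ex with whatever logging framework you use
    };
}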
It's probably important to note that you are not supposed to catch unhandled exceptions. If you are having SQL timeout issues, you should specifically catch those.
Do you mean handling it in all threads, including ones created by third-party code? Within "known" threads just catch Exception at the top of the stack.
I'd recommend looking at log4net and seeing if that's suitable for the logging part of the question.
If using .net 2.0 framework, I use the built in Health Monitoring services. There's a nice article describing this method here: https://web.archive.org/web/20210305134220/https://aspnet.4guysfromrolla.com/articles/031407-1.aspx
If you're stuck with the 1.0 framework, I would use ELMAH:
http://msdn.microsoft.com/en-us/library/aa479332.aspx
hope this helps
There are 2 parts to this problem: identifying and handling.
Identifying
This is what you do when the exception is finally caught, which is not necessarily where it is thrown. So the exception at that stage must have enough context information for you to identify what the problem was.
Handling
For handling, you can
a) Add an HttpModule. See
http://www.eggheadcafe.com/articles/20060305.asp
I would suggest this approach only when there is absolutely no context information available and there might be issues with IIS/ASP.NET; in short, for catastrophic situations.
b) Create an abstract class called AbstractBasePage which derives from the Page class, and have all your code-behind classes derive from AbstractBasePage.
The AbstractBasePage can handle the Page.Error event so that all exceptions which percolate up through the n-tier architecture can be caught here (and possibly logged).
I would suggest this because, for the kind of exceptions you are talking about (SqlException), there is enough context information to identify that it was a timeout and take appropriate action. This action might include redirecting the user to a custom error page with an appropriate message for each different kind of exception (SQL, web service, async call timeouts, etc.).
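A minimal sketch of approach (b), using the Page.OnError override (the error-page path is made up):
public abstract class AbstractBasePage : System.Web.UI.Page
{
    protected override void OnError(EventArgs e)
    {
        Exception ex = Server.GetLastError();
        if (ex is System.Data.SqlClient.SqlException)
        {
            // log, then show a friendlier page for DB timeouts/deadlocks
            Server.ClearError();
            Response.Redirect("~/Errors/Database.aspx", false);
        }
        else
        {
            base.OnError(e);
        }
    }
}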
Thanks
RVZ
One short answer is to use (Anonymous) delegate methods with common handling code when the delegate is invoked.
Background: if you have targeted the weak points, or have some boilerplate error-handling code you need to universally apply to a particular class of problem, and you don't want to write the same try..catch for every invocation location (such as updating a specific control on every page, etc.).
Case study: A pain point is web forms and saving data to the database. We have a control that displays the saved status to the user, and we wanted to have common error-handling code as well as common display without copy-paste reuse in every page. Also, each page did its own thing in its own way, so the only really common part of the code was the error handling and display.
Now, before being slammed: this is no replacement for a data-access layer and data-access code. That's all still assumed to exist, with good n-tier separation, etc. This code is UI-layer specific, to allow us to write clean UI code and not repeat ourselves. We're big believers in not quashing exceptions, but certain exceptions shouldn't necessitate the user getting a generic error page and losing their work. There will be SQL timeouts, servers going down, deadlocks, etc.
A solution: the way we did it was to pass an anonymous delegate to a method on a custom control, essentially injecting the try block using anonymous delegates.
// normal form code.
private void Save()
{
    // you can do stuff before and after. normal scoping rules apply
    saveControl.InvokeSave(
        delegate
        {
            // everywhere the save control is used, this code is different
            // but the class of errors and the stage we are catching them at
            // is the same
            DataContext.SomeStoredProcedure();
            DataContext.SomeOtherStoredProcedure();
            DataContext.SubmitChanges();
        });
}
The SaveControl itself has a method like:
public delegate void SaveControlDelegate();

public void InvokeSave(SaveControlDelegate saveControlDelegate)
{
    // I've changed the code from our code.
    // You'll have to make up your own logic.
    // this just gives an idea of common handling.
    retryButton.Visible = false;
    try
    {
        saveControlDelegate.Invoke();
    }
    catch (SqlTimeoutException ex)
    {
        // perform other logic here.
        statusLabel.Text = "The server took too long to respond.";
        retryButton.Visible = true;
        LogSqlTimeoutOnSave(ex);
    }
    // catch other exceptions as necessary. i.e.
    // detect deadlocks
    catch (Exception ex)
    {
        statusLabel.Text = "An unknown Error occurred";
        LogGenericExceptionOnSave(ex);
    }
    SetSavedStatus();
}
There are other ways to achieve this (e.g. a common base class, interfaces), but in our case this was the best fit.
This isn't a replacement to a great tool such as Elmah for logging all unhandled exceptions. This is a targeted approach to handling certain exceptions in a standard manner.
Timeout errors typically occur if you are not forcefully closing your SqlConnections.
So if you had:
try {
    conn.Open();
    cmd.ExecuteReader();
    conn.Close();
} catch (SqlException ex) {
    //do whatever
}
If anything goes wrong in that ExecuteReader, your connection will not be closed. Always add a finally block:
try {
    conn.Open();
    cmd.ExecuteReader();
    conn.Close();
} catch (SqlException ex) {
    //do whatever
} finally {
    if (conn.State != ConnectionState.Closed)
        conn.Close();
}
This is an old question, but the best method (for me) is not listed here. So here we are:
ExceptionFilterAttribute is a nice and easy solution for me. Source: http://weblogs.asp.net/fredriknormen/asp-net-web-api-exception-handling.
public class ExceptionHandlingAttribute : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext context)
    {
        var exception = context.Exception;
        if (exception is SqlTimeoutException)
        {
            //do some handling for this type of exception
        }
    }
}
And attach it to, e.g., HomeController:
[ExceptionHandling]
public class HomeController: Controller
{
}
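Note that OnException(HttpActionExecutedContext) is the Web API flavor of ExceptionFilterAttribute (System.Web.Http.Filters). To apply it to every controller instead of attaching it one by one, you can register it globally, along these lines:
// e.g. in WebApiConfig.Register(HttpConfiguration config):
config.Filters.Add(new ExceptionHandlingAttribute());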