I have a method that performs several sub-operations in order, and if any sub-operation fails I want to roll back the entire operation.
My issue is that the sub-operations are not all database operations. They are mainly system-level changes, like adding something to the Windows registry, creating a folder at a specified path, setting permissions, etc. There can be more sub-operations than these.
I want to do something like this:
CreateUser() {
    CreateUserFtpAccount();
    CreateUserFolder();
    SetUserPermission();
    CreateVirtualDirectoryForUser();
    // ... and many more
}
If the last operation fails, I want to roll back all of the previous operations.
So, what is the standard way to do this? Is there a design pattern to handle it?
Note: I'm using C#/.NET.
Here's one way to do it:
Using the command pattern, you can create undoable actions. With each operation, you register the corresponding command, so that you can undo the executed commands when a failure occurs.
For example, this could all live in a transaction-like context object that implements IDisposable and is used in a using block. Undoable actions would be registered with this context object. On dispose, if the transaction has not been committed, "undo" is carried out for all registered commands. The downside is that you may have to convert some methods to classes; that might be a necessary evil, though. Hope it helps.
Code sample:
using(var txn = new MyTransaction()) {
    txn.RegisterCommand(new CreateUserFtpAccountCommand());
    txn.RegisterCommand(new CreateUserFolderCommand());
    txn.RegisterCommand(new SetUserPermissionCommand());
    txn.RegisterCommand(new CreateVirtualDirectoryForUserCommand());
    txn.Commit();
}
class MyTransaction : IDisposable {
    public void RegisterCommand(UndoableCommand command) { /**/ }
    public void Commit() { /* Runs all registered commands */ }
    public void Dispose() { /* Executes undo for all registered commands if not committed */ }
}

class UndoableCommand {
    public UndoableCommand(Action action) { /**/ }
    public void Execute() { /**/ }
    public void Undo() { /**/ }
}
Update:
You mentioned that you have hundreds of such reversible operations. In this case, you can take a more functional approach and get rid of UndoableCommand completely. You would register delegates instead, like this:
using(var txn = new MyTransaction()) {
    txn.Register(() => ftpManager.CreateUserAccount(user),
                 () => ftpManager.DeleteUserAccount(user));
    txn.Register(() => ftpManager.CreateUserFolder(user, folder),
                 () => ftpManager.DeleteUserFolder(user, folder));
    /* ... */
    txn.Commit();
}

class MyTransaction : IDisposable {
    public void Register(Action operation, Action undoOperation) { /**/ }
    public void Commit() { /* Runs all registered operations */ }
    public void Dispose() { /* Executes undo for all registered and attempted operations */ }
}
As a note, you'd need to be careful with closures with this approach.
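To make this concrete, here is a minimal sketch of what the delegate-based MyTransaction could look like. The rollback-on-dispose behavior and the swallowing of undo exceptions are my assumptions; adapt them to your needs.

using System;
using System.Collections.Generic;

class MyTransaction : IDisposable {
    private readonly List<(Action Run, Action Undo)> _registered = new List<(Action, Action)>();
    private readonly Stack<Action> _undos = new Stack<Action>();
    private bool _committed;

    public void Register(Action operation, Action undoOperation) {
        _registered.Add((operation, undoOperation));
    }

    public void Commit() {
        foreach (var (run, undo) in _registered) {
            run();             // execute the step
            _undos.Push(undo); // remember how to reverse it if a later step fails
        }
        _committed = true;
    }

    public void Dispose() {
        if (_committed) return; // success: nothing to undo
        while (_undos.Count > 0) {
            try { _undos.Pop()(); } // undo attempted steps in reverse order
            catch { /* assumption: log and continue so one failed undo doesn't block the rest */ }
        }
    }
}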
I think your best bet would be to encapsulate the execution and reversal of each step of the process. It makes for much more readable code than nested try/catch blocks. Something like:
public interface IReversableStep
{
    void DoWork();
    void ReverseWork();
}

public void DoEverything()
{
    var steps = new List<IReversableStep>()
    {
        new CreateUserFTPAccount(),
        new CreateUserFolder(),
        // ...
    };

    var completed = new List<IReversableStep>();

    try
    {
        foreach (var step in steps)
        {
            step.DoWork();
            completed.Add(step);
        }
    }
    catch (Exception)
    {
        // if it is necessary to undo the most recent actions first,
        // just reverse the list:
        completed.Reverse();
        completed.ForEach(x => x.ReverseWork());
    }
}
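As a hypothetical illustration of one concrete step (the folder-path parameter is my addition, not part of the answer above):

using System.IO;

public class CreateUserFolder : IReversableStep
{
    private readonly string _path;

    public CreateUserFolder(string path)
    {
        _path = path;
    }

    public void DoWork()
    {
        Directory.CreateDirectory(_path);
    }

    public void ReverseWork()
    {
        // best-effort rollback: remove the folder this step created
        if (Directory.Exists(_path))
            Directory.Delete(_path, recursive: true);
    }
}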
Both NTFS and the registry support enlistment in KTM and MS DTC transactions (and, by extension, TransactionScope). However, Transactional NTFS (TxF) has been deprecated due to its complexity and may be removed in a future version of Windows.
Example of using transactional file system and registry from C#
Transactional NTFS
If not everything fits in a transaction, I would certainly look to the command history patterns presented in other answers to this question.
I'm not aware of any standard pattern for this type of thing, but I'd probably do it with nested try/catch blocks myself, with appropriate rollback code for the non-database operations in each catch block. Use a TransactionScope to ensure all the database operations are transactional.
e.g.:
using (var scope = new TransactionScope())
{
    try
    {
        DoOperationOne();
        try
        {
            DoOperationTwo();
            DoDataBaseOperationOne(); // no try/catch needed: the TransactionScope handles the rollback
            try
            {
                DoOperationThree();
            }
            catch
            {
                RollBackOperationThree();
                throw;
            }
        }
        catch
        {
            RollBackOperationTwo();
            throw;
        }
    }
    catch
    {
        RollbackOperationOne();
        throw;
    }

    scope.Complete(); // the last thing that happens, and only if there are no errors!
}
I am using this basic structure:
using (IDataContext ctx = DataContext.Instance())
{
    try
    {
        ctx.BeginTransaction();
        ctx.Commit();
    }
    catch (Exception)
    {
        ctx.RollbackTransaction();
        throw;
    }
}
What I would like to do is nest transactions so that I can compose larger operations from smaller functions.
Very simplified version:
public void DoSomething()
{
    using (IDataContext ctx = DataContext.Instance())
    {
        try
        {
            ctx.BeginTransaction();
            // make a change to the data
            ctx.Commit();
        }
        catch (Exception)
        {
            ctx.RollbackTransaction();
            throw;
        }
    }
}

public void CallingFunction()
{
    using (IDataContext ctx = DataContext.Instance())
    {
        try
        {
            ctx.BeginTransaction();
            // do some stuff
            DoSomething();
            // do some other stuff
            ctx.Commit();
        }
        catch (Exception)
        {
            ctx.RollbackTransaction();
            throw;
        }
    }
}
So I want to be able to have multiple 'CallingFunctions' that all call DoSomething(), but if an exception is thrown in CallingFunction after DoSomething() has returned, I want DoSomething's changes to be rolled back as well.
DoSomething may be in the same class as CallingFunction or it may be in another class.
Surely this is possible, but I haven't been able to find the answer.
Thank you for your assistance.
After checking my code, I realised that it is using DataContext from the DotNetNuke.Data namespace. Perhaps a DNN expert can assist?
You should inject the context into all the functions that need to use it, instead of having each of them create its own. That way all changes made before the commit are part of the same transaction, and you can roll back everything if anything fails.
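A sketch of what that could look like with the question's IDataContext; passing the context in is the only change:

public void DoSomething(IDataContext ctx)
{
    // make a change to the data using the caller's context,
    // so it participates in the caller's transaction
}

public void CallingFunction()
{
    using (IDataContext ctx = DataContext.Instance())
    {
        try
        {
            ctx.BeginTransaction();
            // do some stuff
            DoSomething(ctx); // shares the caller's transaction
            // do some other stuff
            ctx.Commit();
        }
        catch (Exception)
        {
            ctx.RollbackTransaction(); // rolls back DoSomething's changes too
            throw;
        }
    }
}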
I was looking at this kind of code and wondering: is there anything that can be improved by approaching it from a functional programming perspective?
You don't have to strictly re-implement my example to answer; if you have a different example involving transactions, that would be great.
using (var unitOfWork = _uowManager.Begin())
{
    _paymentRepository.InsertOrUpdate(payment); // returns the payment instance

    _uowManager.SaveChanges(); // executed here so that payment.Id is populated

    _otherRepository.OtherMethod(payment.Id); // could be changed as necessary

    unitOfWork.Complete();
}
The code above is based on ASP.NET Boilerplate and Entity Framework, if that helps.
I think that creating a transactional monad is a great way to handle this. No matter how limited C#'s FP support is, this approach brings a lot of benefits:
a single, global, unified way to handle transactions
don't repeat yourself: you write this class once instead of recreating the transaction logic n times, once per controller function
you can put this class behind an interface and mock it in integration tests, for example to roll back the transaction so that your tests don't affect the database state
I use something like this:
public class Tx
{
    // DbConnectionExtra: a class which has DbTransaction and DbConnection combined
    public DbConnectionExtra _connection { get; set; }

    public T UseTx<T>(Func<T> fnToCall)
    {
        try
        {
            _connection.BeginTransaction();
            var result = fnToCall();
            _connection._transaction.Commit();
            _connection._transaction.Dispose();
            return result;
        }
        catch
        {
            _connection._transaction.Rollback();
            _connection._transaction.Dispose();
            throw;
        }
    }
}
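Usage might look something like this; the connection setup and the wrapped operation are hypothetical:

// myConnection: a DbConnectionExtra assumed to be configured elsewhere
var tx = new Tx { _connection = myConnection };

int affected = tx.UseTx(() =>
{
    // every command issued on the shared connection here
    // commits or rolls back as a single unit
    return SavePaymentAndLedgerEntries(); // hypothetical multi-step write
});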
I want to log EF query (or transaction) times back to the database so that I can monitor performance in a live system. Note that I know more than one query can happen within my UOW. That is fine; I just want to be able to tell which areas of the system are slowing down, etc.
What is the best way for me to do this? My initial thought was to do it in UnitOfWork.Dispose() so that every query would automatically be logged once completed. But this smells a bit to me, because I'd have to call Save() to persist the information; what if the caller didn't want to save?
Is there another, better way I can automatically log the query time to the database?
protected virtual void Dispose(bool disposing)
{
    if (this.logQueryToDatabase)
    {
        var timeTaken = DateTime.UtcNow - this.start;
        LogPerformance(this.callingFnName, timeTaken.TotalMilliseconds);
    }
    this.ctx.Dispose();
}
If you know which actions you want to measure, you can create a simple class to handle this. All you need to do is wrap the action in it.
The profiler class:
public class Profiler : IDisposable
{
    private readonly string _msg;
    private readonly Stopwatch _sw;

    public Profiler(string msg)
    {
        _msg = msg;
        _sw = Stopwatch.StartNew();
    }

    public void Dispose()
    {
        _sw.Stop();
        LogPerformance(_msg, _sw.ElapsedMilliseconds);
    }
}
The usage:
using (new Profiler("Saving products"))
{
    db.SaveChanges();
}
If I use Microsoft's unit of work implementation from this tutorial:
http://www.asp.net/mvc/tutorials/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
public class UnitOfWork : IDisposable
{
    private SchoolContext context = new SchoolContext();
    private GenericRepository<Department> departmentRepository;
    private GenericRepository<Course> courseRepository;

    public GenericRepository<Department> DepartmentRepository
    {
        get
        {
            if (this.departmentRepository == null)
            {
                this.departmentRepository = new GenericRepository<Department>(context);
            }
            return departmentRepository;
        }
    }

    public GenericRepository<Course> CourseRepository
    {
        get
        {
            if (this.courseRepository == null)
            {
                this.courseRepository = new GenericRepository<Course>(context);
            }
            return courseRepository;
        }
    }

    public void Save()
    {
        context.SaveChanges();
    }

    //......
}
I don't need to use transactions when I add related items? For example, when I must add an order and its order positions to the database, I don't need to start a transaction, because if something goes wrong then the Save() method won't execute, yes? Am I right?
_unitOfWork.OrdersRepository.Insert(order);
_unitOfWork.OrderPositionsRepository.Insert(orderPosition);
_unitOfWork.Save();
SaveChanges itself is transactional. Nothing happens at the database level when you call Insert, which, based on the tutorial, merely calls Add on the DbSet. Only once SaveChanges is called on the context does the database get hit, and everything that happened up to that point is sent in one transaction.
You need transactions if you have multiple SaveChanges calls in one method, or a chain of method calls using the same context.
Then you can roll back across the multiple SaveChanges calls when your final update fails.
An example would be multiple repositories wrapping CRUD for an entity under the unit of work (i.e. a generic class). You may have many functions inserting and saving in each repository, yet at the end find an issue that requires rolling back the previous saves.
E.g. in a service layer that needs to hit many repositories and execute a complex operation, as sketched below.
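For example, in EF6 you could span several SaveChanges calls with one explicit transaction; the DbSet names here are illustrative:

using (var context = new SchoolContext())
using (var transaction = context.Database.BeginTransaction())
{
    try
    {
        context.Orders.Add(order);
        context.SaveChanges();  // first save: sent to the database, but not yet committed

        context.OrderPositions.Add(orderPosition);
        context.SaveChanges();  // second save, same transaction

        transaction.Commit();   // both saves become permanent together
    }
    catch
    {
        transaction.Rollback(); // both saves are undone
        throw;
    }
}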
I have an application that involves a database. Previously, upon opening a window, I would query the database and use this to populate aspects of my view model. This worked reasonably well, but could create noticeable pauses when the data access took longer than expected.
The natural solution, of course, is to run the database query asynchronously and then populate the view model when that query completes. This isn't too hard, but it raises some interesting questions regarding error handling.
Previously, if something went wrong with the database query (a pretty big problem, granted), I would propagate the exception through the view model constructor, ultimately making it back up to the caller that wanted to open the window. It could then display an appropriate error and not actually open the window.
Now, however, the window opens right away, then populates later as the query completes. The question, now, is at what point should I check for an error in the background task? The window is already open, so the behavior needs to be different somehow, but what is a clean way to indicate the failure to the user and allow for graceful recovery/shutdown?
For reference, here is a snippet demonstrating the basic pattern:
public ViewModel()
{
    _initTask = InitAsync();
    // Now where do I check on the status of the init task?
}

private async Task InitAsync()
{
    // Do stuff...
}

// ....

public void ShowWindow()
{
    var vm = new ViewModel(); // Previously this could throw an exception that would prevent the window from being shown
    _windowServices.Show(vm);
}
One option I've considered is to use an asynchronous factory method for constructing the ViewModel, allowing the entire thing to be constructed and initialized before attempting to display the window. This preserves the old approach of reporting errors before the window is ever opened. However, it gives up some of the UI responsiveness gained by the current approach, which allows initial loading of the window to occur in parallel with the query and also allows me (in some cases) to update the UI incrementally as each query completes, rather than having the UI compose itself all at once. It avoids locking up the UI thread, but it doesn't reduce the time before the user actually sees the window and can start interacting with it.
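A minimal sketch of that factory-method variant, assuming the names from the snippet above (the private constructor is my addition):

public sealed class ViewModel
{
    private ViewModel() { } // force construction through CreateAsync

    public static async Task<ViewModel> CreateAsync()
    {
        var vm = new ViewModel();
        await vm.InitAsync(); // any failure propagates before the window is shown
        return vm;
    }

    private async Task InitAsync()
    {
        // Do stuff...
        await Task.CompletedTask;
    }
}

// application
public async Task ShowWindowAsync()
{
    var vm = await ViewModel.CreateAsync(); // throws here on failure; the window never opens
    _windowServices.Show(vm);
}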
Maybe use some kind of messaging/mediator between your view model and the underlying service?
Semi-pseudo code using MVVMLight:
public ViewModel()
{
    Messenger.Default.Register<NotificationMessage<Exception>>(this, message =>
    {
        // Handle here
    });

    Task.Run(() => FetchData());
}

public async Task FetchData()
{
    // Some magic happens here
    try
    {
        await Task.Delay(2000); // simulate slow work
        throw new ArgumentException();
    }
    catch (Exception e)
    {
        Messenger.Default.Send(new NotificationMessage<Exception>(this, e, "Aw snap!"));
    }
}
I dealt with a similar problem here. I found it'd be best for me to raise an error event from inside the task, like this:
// ViewModel
public class TaskFailedEventArgs : EventArgs
{
    public Exception Exception { get; private set; }
    public bool Handled { get; set; }
    public TaskFailedEventArgs(Exception ex) { this.Exception = ex; }
}

public event EventHandler<TaskFailedEventArgs> TaskFailed = delegate { };

public ViewModel()
{
    this.TaskFailed += (s, e) =>
    {
        // handle it, e.g.: retry, report or set a property
        MessageBox.Show(e.Exception.Message);
        e.Handled = true;
    };

    _initTask = InitAsync();
    // Now where do I check on the status of the init task?
}

private async Task InitAsync()
{
    try
    {
        // do the async work
    }
    catch (Exception ex)
    {
        var args = new TaskFailedEventArgs(ex);
        this.TaskFailed(this, args);
        if (!args.Handled)
            throw;
    }
}

// application
public void ShowWindow()
{
    var vm = new ViewModel(); // Previously this could throw an exception that would prevent the window from being shown
    _windowServices.Show(vm);
}
The window still shows up, but it should display some kind of progress notification (e.g., using the IProgress<T> pattern) until the end of the operation (and the error info in case it failed).
Inside the error event handler, you may give the user an option to retry or exit the app gracefully, depending on your business logic.
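For the progress part, here is a minimal sketch of the IProgress<T> pattern mentioned above; the string message type and the StatusText property are assumptions:

private async Task InitAsync(IProgress<string> progress)
{
    progress.Report("Loading...");
    await Task.CompletedTask; // placeholder for the real async work
    progress.Report("Done");
}

public ViewModel()
{
    // Progress<T> marshals Report calls back to the captured synchronization context
    var progress = new Progress<string>(msg => StatusText = msg); // StatusText: hypothetical bindable property
    _initTask = InitAsync(progress);
}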
Stephen Cleary has a series of posts on his blog about Async OOP. In particular, about constructors.