Using nested transactions with the DNN DataContext - C#

I am using this basic structure:
using (IDataContext ctx = DataContext.Instance())
{
    try
    {
        ctx.BeginTransaction();
        ctx.Commit();
    }
    catch (Exception)
    {
        ctx.RollbackTransaction();
        throw;
    }
}
What I would like to do is nest transactions so that I can compose larger operations out of smaller functions.
A very simplified version:
public void DoSomething()
{
    using (IDataContext ctx = DataContext.Instance())
    {
        try
        {
            ctx.BeginTransaction();
            // make a change to the data
            ctx.Commit();
        }
        catch (Exception)
        {
            ctx.RollbackTransaction();
            throw;
        }
    }
}

public void CallingFunction()
{
    using (IDataContext ctx = DataContext.Instance())
    {
        try
        {
            ctx.BeginTransaction();
            // do some stuff
            DoSomething();
            // do some other stuff
            ctx.Commit();
        }
        catch (Exception)
        {
            ctx.RollbackTransaction();
            throw;
        }
    }
}
So I want to be able to have multiple 'CallingFunctions' that all call DoSomething(), but if an exception is thrown in CallingFunction after DoSomething has run, I want DoSomething's changes to roll back as well.
DoSomething may be in the same class as CallingFunction or it may be in another class.
Surely this is possible, but I haven't been able to find the answer.
Thank you for your assistance.
After checking my code, I realised that it is using DataContext from the DotNetNuke.Data namespace. Perhaps a DNN expert can assist?

You should inject the context into all the functions that need to use it, instead of having each one create its own. That way all changes before the commit happen on the same transaction, and you can roll everything back if anything fails.
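As a minimal sketch built from the code in the question: the caller owns the transaction and passes the same IDataContext down, so DoSomething's changes are committed or rolled back together with everything else.

public void DoSomething(IDataContext ctx)
{
    // make a change to the data using the caller's ctx
    // (no BeginTransaction/Commit here - the caller controls the transaction)
}

public void CallingFunction()
{
    using (IDataContext ctx = DataContext.Instance())
    {
        try
        {
            ctx.BeginTransaction();
            // do some stuff
            DoSomething(ctx);
            // do some other stuff
            ctx.Commit();
        }
        catch (Exception)
        {
            ctx.RollbackTransaction();
            throw;
        }
    }
}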

Related

Safely release dependency injected wcf client in .net core

I want to use Microsoft's dependency injection in .Net Core (2.2) to inject and safely release WCF clients. I'm using the "WCF Web Service Reference Provider Tool" in VS2019 to add WCF proxy classes to my solution. Using Microsoft.Extensions.DependencyInjection I can register the clients in the services collection, but I can't seem to find a way of hooking into a release lifecycle event (as can be done in various other IoC frameworks, e.g. Autofac), to add code for doing a safe release according to Microsoft's recommendations described here: https://learn.microsoft.com/en-us/dotnet/framework/wcf/samples/use-close-abort-release-wcf-client-resources
Is there any way of doing something like that with the fairly basic dependency injection functionality that comes with the .NET Core framework, or am I forced to use a third-party IoC framework?
Pseudo code:
So basically I want to do something like this:
// Register the channel factory for the service. Make it
// Singleton since you don't need a new one each time.
services.AddSingleton(p => new ChannelFactory<IWcfService>(
    new BasicHttpBinding(),
    new EndpointAddress("http://localhost/WcfService")));

// Register the service interface using a lambda that creates
// a channel from the factory.
// TODO: need a way to handle proper disposal, e.g. like OnRelease().
services.AddTransient<IWcfService>(p =>
        p.GetService<ChannelFactory<IWcfService>>().CreateChannel())
    .OnRelease(CloseChannel); // <--- This is what I would like to do

static void CloseChannel<T>(T channel)
{
    var disp = (ICommunicationObject)channel;
    try
    {
        if (disp.State == CommunicationState.Faulted)
            disp.Abort();
        else
            disp.Close();
    }
    catch (TimeoutException)
    {
        disp.Abort();
    }
    catch (CommunicationException)
    {
        disp.Abort();
    }
    catch (Exception)
    {
        disp.Abort();
        throw;
    }
}
But I need a way to hook into the service release lifecycle event, e.g. something like .OnRelease() in Autofac, so I can do proper disposal.
I don't know if you still need a response, but on my end I resolved this issue by implementing Dispose in the partial class.
Each time the WCF client is disposed, the correct clean-up is made:
public partial class MyWcfClient : IDisposable
{
    protected void Dispose(bool disposing)
    {
        if (disposing)
        {
            bool success = false;
            try
            {
                if (State != CommunicationState.Faulted)
                {
                    Close();
                }
                success = true;
            }
            finally
            {
                if (!success)
                {
                    Abort();
                }
            }
        }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }
}
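As a rough sketch of how that ties back to the registration (assuming the generated client exposes the usual (Binding, EndpointAddress) constructor): because the partial class above makes the client IDisposable, the built-in Microsoft.Extensions.DependencyInjection container will dispose instances it created when the resolving scope (or the root provider) is disposed, so no OnRelease-style hook is needed.

// MyWcfClient is the generated proxy, extended by the partial class above.
// Disposal happens automatically when the scope that resolved it is disposed.
services.AddTransient<IWcfService>(p =>
    new MyWcfClient(
        new BasicHttpBinding(),
        new EndpointAddress("http://localhost/WcfService")));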

Is it possible to wrap a method with constant code from inside of it?

I have many database queries that I'd like to wrap inside the same try-catch style error handler. Trying to keep my code DRY, I figured it would be effective to do something like this, but I couldn't find anything similar. Is there another approach, or is it possible to do exactly this?
I'd like to make an outside method like this:
try
{
    // I would like to put any method here
}
catch (DbEntityValidationException)
{
    // Error handling
}
catch (DbUpdateException)
{
    // Error handling
}
catch (InvalidOperationException)
{
    // Error handling
}
catch (EntityException)
{
    // Error handling
}
And where "// I would like to put any method here" is I would like to put a method like this:
public DatabaseResultEnum DoSmth(int someId)
{
    using (var context = new TestingDatabaseEntities())
    {
        // Any database action
    }
}
It would be really convenient to just call the inner method ("DoSmth()") directly, instead of passing an action into the first one along with its parameters, as in this example: Generic Function wrapper
Thanks in advance!
Use a delegate.
So the caller would use code like:
result = ExceptionChecker((context) => {
    // do something with the DbContext context
    return results...
});
Where you then have
IEnumerable<TRes> ExceptionChecker<TRes>(Func<MyDbContext, IEnumerable<TRes>> function) {
    try {
        using (var ctx = new MyDbContext(...)) {
            return function(ctx);
        }
    } catch (InvalidOperationException) {
        // Handle, then return a fallback (or rethrow) so every path returns a value
        return Enumerable.Empty<TRes>();
    } // etc.
}
Of course, real code should use async/await to avoid blocking threads on long-running queries. And ideally you would manage the context instances better, to make use of EF's support for the unit-of-work pattern (e.g. have a single context per web request), in which case the DbContext instance is passed to the helper. But this shows the approach.
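A rough sketch of that async suggestion, assuming EF's async query methods (such as ToListAsync) are available; the Orders set in the usage line is purely hypothetical:

async Task<TRes> ExceptionCheckerAsync<TRes>(Func<MyDbContext, Task<TRes>> function) {
    try {
        using (var ctx = new MyDbContext()) {
            return await function(ctx);
        }
    } catch (InvalidOperationException) {
        // Handle, log, rethrow or fall back as appropriate
        return default(TRes);
    } // etc.
}

// usage:
// var orders = await ExceptionCheckerAsync(ctx => ctx.Orders.ToListAsync());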
Yes, it is possible.
public T DoSomething<T>(Func<TestingDatabaseEntities, T> func)
{
    try
    {
        using (var context = new TestingDatabaseEntities())
        {
            // Any database action
            return func(context);
        }
    }
    catch (DbEntityValidationException)
    {
        // Error handling
    }
    catch (DbUpdateException)
    {
        // Error handling
    }
    catch (InvalidOperationException)
    {
        // Error handling
    }
    catch (EntityException)
    {
        // Error handling
    }
    return default(T); // so every path returns a value
}
And then to consume:
public DatabaseResultEnum DoSmth(int someId)
{
    return this.DoSomething(context => context
        .DatabaseResultEnum.FirstOrDefault(y => y.Id == someId));
}

Entity Framework 5 Connection Pooling In MultiThreading

I am relatively new to Entity Framework and have limited knowledge of Entity Framework's and SQL Server's connection pooling functionality. I have to implement a multithreaded operation that includes database changes.
The MainMethod looks for new database entries and queues up the tasks using ThreadPool.QueueUserWorkItem().
I also have database operations in the child threads; for those I create the DbContext object in the child thread.
I would like to know if the way I have implemented this will have any performance issues. Will my child threads be creating new connections or reusing available ones? I have read that Entity Framework makes use of SQL Server's pooling functionality, but I don't know exactly how.
public class DataProcessor
{
    public void MainMethod(DataUow dataUow) // Entry point (main thread); DataUow is the repository class
    {
        while (true)
        {
            try
            {
                var toDoMessageList = dataUow.MessageDumps.GetAll().ToList(); // Get records
                if (toDoMessageList.Count() > 0)
                {
                    foreach (var message in toDoMessageList)
                    {
                        ThreadPool.QueueUserWorkItem(RunChildThread, message);
                        message.ThreadHandlingStatus = 2; // Do changes to message
                        dataUow.MessageDumps.Update(message); // and update
                    }
                    dataUow.Commit();
                }
                else
                {
                    Thread.Sleep(500);
                }
            }
            catch (Exception ex)
            {
                // Do something
            }
        }
    }

    private void RunChildThread(object messageDataSent)
    {
        using (var dataContext = new DataUow(new RepositoryProvider(new RepositoryFactories())))
        {
            try
            {
                // Some long task including database changes
                dataContext.Commit(); // Repository method that contains the Entity Framework .SaveChanges()
            }
            catch (Exception ex)
            {
                // Do something
            }
        }
    }
}
DataUow is the repository pattern class.
Thank you.
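A hedged note on the pooling part, not specific to this repository setup: Entity Framework does not pool connections itself; it relies on ADO.NET/SqlClient pooling, which is keyed on the exact connection string. So each DataUow created in RunChildThread opens and closes connections as needed, and those connections come from the shared pool rather than being brand new. Pool behaviour can be tuned via connection-string keywords, for example:

// "MyDb" and the server name are placeholders; with SqlClient, Pooling defaults
// to true and Max Pool Size defaults to 100.
var connectionString =
    "Data Source=.;Initial Catalog=MyDb;Integrated Security=True;" +
    "Pooling=True;Min Pool Size=0;Max Pool Size=100";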

Asynchronous data loading and subsequent error handling

I have an application that involves a database. Previously, upon opening a window, I would query the database and use this to populate aspects of my view model. This worked reasonably well, but could create noticeable pauses when the data access took longer than expected.
The natural solution, of course, is to run the database query asynchronously and then populate the view model when that query completes. This isn't too hard, but it raises some interesting questions regarding error handling.
Previously, if something went wrong with the database query (a pretty big problem, granted), I would propagate the exception through the view model constructor, ultimately making it back up to the caller that wanted to open the window. It could then display an appropriate error and not actually open the window.
Now, however, the window opens right away, then populates later as the query completes. The question, now, is at what point should I check for an error in the background task? The window is already open, so the behavior needs to be different somehow, but what is a clean way to indicate the failure to the user and allow for graceful recovery/shutdown?
For reference, here is a snippet demonstrating the basic pattern:
public ViewModel()
{
    _initTask = InitAsync();
    // Now where do I check on the status of the init task?
}

private async Task InitAsync()
{
    // Do stuff...
}

// ....

public void ShowWindow()
{
    var vm = new ViewModel(); // Previously this could throw an exception that would prevent the window from being shown
    _windowServices.Show(vm);
}
One option I've considered is to use an asynchronous factory method for constructing the ViewModel, allowing the entire thing to be constructed and initialized before attempting to display the window. This preserves the old approach of reporting errors before the window is ever opened. However, it gives up some of the UI responsiveness gained by the current approach, which allows initial loading of the window to occur in parallel with the query and also allows me (in some cases) to update the UI incrementally as each query completes, rather than having the UI compose itself all at once. It avoids locking up the UI thread, but it doesn't reduce the time before the user actually sees the window and can start interacting with it.
Maybe use some kind of messaging/mediator between your viewmodel and underlying service?
Semi-pseudo code using MVVMLight
public ViewModel()
{
    Messenger.Default.Register<NotificationMessage<Exception>>(this, message =>
    {
        // Handle here
    });
    Task.Factory.StartNew(() => FetchData());
}

public async Task FetchData()
{
    // Some magic happens here
    try
    {
        await Task.Delay(2000); // simulate work without blocking the thread
        throw new ArgumentException();
    }
    catch (Exception e)
    {
        Messenger.Default.Send(new NotificationMessage<Exception>(this, e, "Aw snap!"));
    }
}
I dealt with a similar problem here. I found it'd be best for me to raise an error event from inside the task, like this:
// ViewModel
public class TaskFailedEventArgs : EventArgs
{
    public Exception Exception { get; private set; }
    public bool Handled { get; set; }
    public TaskFailedEventArgs(Exception ex) { this.Exception = ex; }
}

public event EventHandler<TaskFailedEventArgs> TaskFailed = delegate { };

public ViewModel()
{
    this.TaskFailed += (s, e) =>
    {
        // handle it, e.g.: retry, report or set a property
        MessageBox.Show(e.Exception.Message);
        e.Handled = true;
    };

    _initTask = InitAsync();
    // Now where do I check on the status of the init task?
}

private async Task InitAsync()
{
    try
    {
        // do the async work
    }
    catch (Exception ex)
    {
        var args = new TaskFailedEventArgs(ex);
        this.TaskFailed(this, args);
        if (!args.Handled)
            throw;
    }
}

// application
public void ShowWindow()
{
    var vm = new ViewModel(); // Previously this could throw an exception that would prevent the window from being shown
    _windowServices.Show(vm);
}
The window still shows up, but it should be displaying some kind of progress notifications (e.g. using IProgress<T> pattern), until the end of the operation (and the error info in case it failed).
Inside the error event handler, you may give the user an option to retry or exit the app gracefully, depending on your business logic.
Stephen Cleary has a series of posts on his blog about Async OOP. In particular, about constructors.
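For completeness, here is a rough sketch of the async factory option mentioned in the question, roughly along the lines of those async-OOP posts; names like CreateAsync and ShowWindowAsync are illustrative, not from the original code. Construction stays cheap, the factory awaits initialization, and failures surface to the caller before the window is ever shown:

public class ViewModel
{
    private ViewModel() { }

    private async Task InitAsync()
    {
        // do the async work, e.g. await the database query
    }

    public static async Task<ViewModel> CreateAsync()
    {
        var vm = new ViewModel();
        await vm.InitAsync(); // exceptions propagate to the caller here
        return vm;
    }
}

// application
public async Task ShowWindowAsync()
{
    var vm = await ViewModel.CreateAsync(); // throws before the window is shown, as before
    _windowServices.Show(vm);
}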

How to ensure that a system level operation is atomic? any pattern?

I have a method which internally performs different sub-operations in order, and if any sub-operation fails I want to roll back the entire operation.
My issue is that the sub-operations are not all database operations. They are mainly system-level changes such as adding something to the Windows registry, creating a folder at a specified path, setting permissions, etc.; there can be more sub-operations than these.
I want to do something like this:
CreateUser()
{
    CreateUserFtpAccount();
    CreateUserFolder();
    SetUserPermission();
    CreateVirtualDirectoryForUser();
    ....
    ....
    ....
    and many more
}
If the last operation fails, I want to roll back all previous operations.
So, what is the standard way to do this? Is there a design pattern to handle this?
Note: I'm using C#/.NET.
Here's one way to do it:
Using the command pattern, you can create undoable actions. With each operation, you register the related commands, so that you can undo the executed commands when a fail condition occurs.
For example, this might all belong in a transaction-like context object that implements IDisposable and is used in a using block. The undoable actions would be registered with this context object. On dispose, if not committed, "undo" is carried out for all registered commands. Hope it helps. The downside is that you may have to convert some methods to classes, but that might be a necessary evil.
Code sample:
using (var txn = new MyTransaction()) {
    txn.RegisterCommand(new CreateUserFtpAccountCommand());
    txn.RegisterCommand(new CreateUserFolderCommand());
    txn.RegisterCommand(new SetUserPermissionCommand());
    txn.RegisterCommand(new CreateVirtualDirectoryForUserCommand());
    txn.Commit();
}

class MyTransaction : IDisposable {
    public void RegisterCommand(UndoableCommand command) { /**/ }
    public void Commit() { /* Runs all registered commands */ }
    public void Dispose() { /* Executes undo for all registered commands */ }
}

class UndoableCommand {
    public UndoableCommand(Action action) { /**/ }
    public void Execute() { /**/ }
    public void Undo() { /**/ }
}
Update:
You mentioned that you have hundreds of such reversible operations. In this case, you can take a more functional approach and get rid of UndoableCommand completely. You would register delegates instead, like this:
using (var txn = new MyTransaction()) {
    txn.Register(() => ftpManager.CreateUserAccount(user),
                 () => ftpManager.DeleteUserAccount(user));
    txn.Register(() => ftpManager.CreateUserFolder(user, folder),
                 () => ftpManager.DeleteUserFolder(user, folder));
    /* ... */
    txn.Commit();
}

class MyTransaction : IDisposable {
    public void Register(Action operation, Action undoOperation) { /**/ }
    public void Commit() { /* Runs all registered operations */ }
    public void Dispose() { /* Executes undo for all registered and attempted operations */ }
}
As a note, you'd need to be careful with closures with this approach.
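One possible (untested) shape for that MyTransaction skeleton, just to make the mechanics concrete: undo only the operations that actually executed, in reverse order, and skip the undos once Commit has succeeded.

class MyTransaction : IDisposable {
    private readonly List<(Action Do, Action Undo)> _ops = new List<(Action, Action)>();
    private int _executed;
    private bool _committed;

    public void Register(Action operation, Action undoOperation)
        => _ops.Add((operation, undoOperation));

    public void Commit() {
        foreach (var op in _ops) {
            op.Do();        // if this throws, Dispose undoes what already ran
            _executed++;
        }
        _committed = true;
    }

    public void Dispose() {
        if (_committed) return;
        for (var i = _executed - 1; i >= 0; i--) // most recent first
            _ops[i].Undo();
    }
}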
I think your best bet would be to encapsulate the execution and reversal of each step of the process. It will make for much more readable code than nested try-catch blocks. Something like:
public interface IReversableStep
{
    void DoWork();
    void ReverseWork();
}

public void DoEverything()
{
    var steps = new List<IReversableStep>()
    {
        new CreateUserFTPAccount(),
        new CreateUserFolder(),
        ...
    };
    var completed = new List<IReversableStep>();
    try
    {
        foreach (var step in steps)
        {
            step.DoWork();
            completed.Add(step);
        }
    }
    catch (Exception)
    {
        // if it is necessary to undo the most recent actions first,
        // just reverse the list:
        completed.Reverse();
        completed.ForEach(x => x.ReverseWork());
    }
}
Both NTFS and the registry support enlistment in KTM and MS DTC transactions (and, by extension, TransactionScope). However, the transactional file system has been deprecated due to complexity and may not be present in some future version of Windows.
Example of using transactional file system and registry from C#
Transactional NTFS
If not everything fits in a transaction, I would certainly look to the command history patterns presented in other answers to this question.
I'm not aware of any standard pattern for this type of thing, but I'd probably do it with nested try/catch blocks myself, with appropriate rollback code for the non-database operations in the catch blocks. Use a TransactionScope to ensure all the database operations are transactional.
e.g.:
using (var scope = new TransactionScope())
{
    try
    {
        DoOperationOne();
        try
        {
            DoOperationTwo();
            DoDataBaseOperationOne(); // no need for a surrounding try/catch, as we're using TransactionScope
            try
            {
                DoOperationThree();
            }
            catch
            {
                RollBackOperationThree();
                throw;
            }
        }
        catch
        {
            RollBackOperationTwo();
            throw;
        }
    }
    catch
    {
        RollbackOperationOne();
        throw;
    }
    scope.Complete(); // the last thing that happens, and only if there are no errors!
}
