Design pattern with prolog and epilog - C#

I'm searching for a design pattern that could implement some prolog code and then epilog code.
Let me explain:
I have a function (a lot of them, actually) that almost all do the same thing.
This is pseudo code, but the real thing is written in C# (.NET 4.5):
public IDatabaseError GetUserByName(string Name)
{
    try
    {
        // Initialize session to database
    }
    catch (Exception)
    {
        // return error with description for this step
    }
    try
    {
        // Try to create 'transaction' object
    }
    catch (Exception)
    {
        // return error with description about this step
    }
    try
    {
        // Execute call to database with session and transaction object
        //
        // Actually in all functions only this section of the code is different
        //
    }
    catch (Exception)
    {
        // Transaction object rollback
        // Return error with description for this step
    }
    finally
    {
        // Close session to database
    }
    return everything-is-ok
}
So - as you can see, the 'prolog' (create session, transaction, other helper objects) and the 'epilog' (close session, rollback transaction, clean memory, etc.) are the same for all functions.
Some restrictions:
I want to keep the session and transaction creation/destruction inside the function, not in the constructor.
The custom code (running in the middle) must be wrapped in try/catch and return a different error for each situation.
I'm open to any Func<>, Action<>, and preferably Task<>-based suggestions.
Any ideas for a design pattern or code refactoring?

This can be achieved by using IDisposable objects, for example:
using (var uow = new UnitOfWork())
using (var t = new TransactionScope())
{
    // query the database and throw exceptions
    // in case of errors
}
Please note the TransactionScope class is an out-of-the-box class you have in System.Transactions that works (not only) with DB connections.
In the UnitOfWork constructor do the "Prologue" code (i.e. open the connection...), and in Dispose do the epilogue part. By throwing an exception when an error occurs you are sure the epilogue part is called anyway.
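For illustration, here is a minimal sketch of such a UnitOfWork; the IDbConnection/SqlConnection choice and the placeholder connection string are assumptions for the example, not something the answer prescribes:

using System;
using System.Data;
using System.Data.SqlClient;

public sealed class UnitOfWork : IDisposable
{
    private readonly IDbConnection _connection;

    public UnitOfWork()
    {
        // "Prologue": open the connection; let failures throw.
        _connection = new SqlConnection("<your connection string>"); // placeholder
        _connection.Open();
    }

    public IDbConnection Connection
    {
        get { return _connection; }
    }

    public void Dispose()
    {
        // "Epilogue": runs even when the using-body throws.
        _connection.Dispose();
    }
}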

It sounds like you're looking for the Template Method Pattern.
The template method pattern will allow you to reduce the amount of duplicated code in similar methods by extracting out only the parts of the method which are different.
For this particular example, you could write a method that does all the grunt work, and then invokes a callback to do the interesting work...
// THIS PART ONLY WRITTEN ONCE
public class Database
{
    // This is the template method - it only needs to be written once,
    // so the prolog and epilog only exist in this method...
    public static IDatabaseError ExecuteQuery(Action<ISession> queryCallback)
    {
        try
        {
            // Initialize session to database
        }
        catch (Exception)
        {
            // return error with description for this step
        }
        try
        {
            // Try to create 'transaction' object
        }
        catch (Exception)
        {
            // return error with description about this step
        }
        try
        {
            // Execute call to database with session and transaction object.
            // This is the only section that differs between functions,
            // so it is delegated to the callback:
            var session = /* the session which was set up at the start of this method */;
            queryCallback(session);
        }
        catch (Exception)
        {
            // Transaction object rollback
            // Return error with description for this step
        }
        finally
        {
            // Close session to database
        }
        return everything-is-ok
    }
}
This is the usage:
// THIS PART WRITTEN MANY TIMES
IDatabaseError error = Database.ExecuteQuery(session =>
{
    // do your unique thing with the database here - no need to write the prolog / epilog...
    // you can use the session variable - it was set up by the template method...
    // you can throw an exception; it will be converted to IDatabaseError by the template method...
});
if (error != null)
{
    // something bad happened!
}
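Since the snippets above are pseudo code, here is a self-contained sketch of what the fleshed-out template method might look like; the DatabaseError class, the ISession/ITransaction interfaces, and the OpenSession factory are illustrative assumptions, not an existing API:

using System;

public interface IDatabaseError
{
    string Description { get; }
}

// Hypothetical error type - purely for illustration.
public sealed class DatabaseError : IDatabaseError
{
    private readonly string _description;
    public DatabaseError(string description) { _description = description; }
    public string Description { get { return _description; } }
}

// Hypothetical session/transaction abstractions - substitute your real ones.
public interface ISession : IDisposable { ITransaction BeginTransaction(); }
public interface ITransaction { void Commit(); void Rollback(); }

public static class Database
{
    public static IDatabaseError ExecuteQuery(Action<ISession> queryCallback)
    {
        ISession session;
        try
        {
            session = OpenSession(); // prolog, step 1
        }
        catch (Exception ex)
        {
            return new DatabaseError("Failed to open session: " + ex.Message);
        }
        try
        {
            ITransaction transaction;
            try
            {
                transaction = session.BeginTransaction(); // prolog, step 2
            }
            catch (Exception ex)
            {
                return new DatabaseError("Failed to begin transaction: " + ex.Message);
            }
            try
            {
                queryCallback(session); // the only per-query code
                transaction.Commit();
            }
            catch (Exception ex)
            {
                transaction.Rollback(); // epilog on failure
                return new DatabaseError("Query failed: " + ex.Message);
            }
        }
        finally
        {
            session.Dispose(); // epilog: the session is always closed
        }
        return null; // null means everything-is-ok
    }

    // Placeholder - plug in your real session factory here.
    private static ISession OpenSession() { throw new NotImplementedException(); }
}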
I hope I have explained better this time :)

Related

How can I structure a try-catch-finally block to handle errors inside finally?

I've got a problem with making calls to a third-party C++ dll which I've wrapped in a class using DllImport to access its functions.
The dll demands that before use a session is opened, which returns an integer handle used to refer to that session when performing operations. When finished, one must close the session using the same handle. So I did something like this:
public void DoWork(string input)
{
    int apiHandle = DllWrapper.StartSession();
    try
    {
        // do work using the apiHandle
    }
    catch (ApplicationException ex)
    {
        // log the error
    }
    finally
    {
        DllWrapper.CloseSession(apiHandle);
    }
}
The problem I have is that CloseSession() sometimes causes the Dll in question to throw an error when running threaded:
System.AggregateException: One or more errors occurred. --->
System.AccessViolationException: Attempted to read or write protected
memory. This is often an indication that other memory is corrupt.
I'm not sure there's much I can do about stopping this error, since it seems to be arising from using the Dll in a threaded manner - it is supposed to be thread safe. But since my CloseSession() function does nothing except call that Dll's close function, there's not much wiggle room for me to "fix" anything.
The end result, however, is that the session doesn't close properly. So when the process tries again, which it's supposed to do, it encounters an open session and just keeps throwing new errors. That session absolutely has to be closed.
I'm at a loss as to how to design an error-handling statement that's more robust and will ensure the session always closes.
I would change the wrapper to include disposal of the external resource and to also wrap the handle. I.e. instead of representing a session by a handle, you would represent it by a wrapper object.
Additionally, wrapping the calls to the DLL in lock statements (as @Serge suggests) could prevent the multithreading issues completely. Note that the lock object is static, so that all DllWrappers are using the same lock object.
public class DllWrapper : IDisposable
{
    private static object _lockObject = new object();
    private int _apiHandle;
    private bool _isOpen;

    public void StartSession()
    {
        lock (_lockObject)
        {
            _apiHandle = ...; // TODO: open the session
        }
        _isOpen = true;
    }

    public void CloseSession()
    {
        const int MaxTries = 10;
        for (int i = 0; _isOpen && i < MaxTries; i++)
        {
            try
            {
                lock (_lockObject)
                {
                    // TODO: close the session
                }
                _isOpen = false;
            }
            catch
            {
            }
        }
    }

    public void Dispose()
    {
        CloseSession();
    }
}
Note that the methods are instance methods, now.
Now you can ensure the closing of the session with a using statement:
using (var session = new DllWrapper())
{
    try
    {
        session.StartSession();
        // TODO: work with the session
    }
    catch (ApplicationException ex)
    {
        // TODO: log the error
        // This is for exceptions not related to closing the session. If such exceptions
        // cannot occur, you can drop the try-catch completely.
    }
} // Closes the session automatically by calling `Dispose()`.
You can improve naming by calling this class Session and the methods Open and Close. The user of this class does not need to know that it is a wrapper. This is just an implementation detail. Also, the naming of the methods is now symmetrical and there is no need to repeat the name Session.
By encapsulating all the session related stuff, including error handling, recovery from error situations and disposal of resources, you can considerably diminish the mess in your code. The Session class is now a high-level abstraction. The old DllWrapper was somewhere at mid distance between low-level and high-level.
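For example, a sketch of the renamed class (the body placeholders follow the same TODO style as above):

public class Session : IDisposable
{
    private static readonly object _lockObject = new object();
    private int _handle;
    private bool _isOpen;

    public void Open()
    {
        lock (_lockObject)
        {
            _handle = ...; // TODO: open the session, as in StartSession above
        }
        _isOpen = true;
    }

    public void Close()
    {
        // TODO: same bounded retry logic as CloseSession above
    }

    public void Dispose()
    {
        Close();
    }
}

Usage then reads naturally: using (var session = new Session()) { session.Open(); /* work */ }.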

C# transaction with handled exception will still roll back?

Considering this piece of code:
using(TransactionScope tran = new TransactionScope()) {
insertStatementMethod1();
insertStatementMethod2();
// this might fail
try {
insertStatementMethod3();
} catch (Exception e) {
// nothing to do
}
tran.Complete();
}
Is anything done in insertStatementMethod1 and insertStatementMethod2 going to be rolled back? In any case?
If I want them to execute anyway, would I need to check whether insertStatementMethod3 will fail before starting the transaction, and build my transaction code based on that?
Update
The code looks similar to this
using (TransactionScope tran = new TransactionScope())
{
    // <standard code>
    yourExtraCode();
    // <standard code>
    tran.Complete();
}
where I get to write the yourExtraCode() method
public void yourExtraCode()
{
    insertStatementMethod1();
    insertStatementMethod2();
    // this call might fail
    insertStatementMethod3();
}
I can only edit the yourExtraCode() method, so I cannot choose whether to be in the transaction scope or not. One simple possible solution would be this:
public void yourExtraCode()
{
    insertStatementMethod1();
    insertStatementMethod2();
    // this call might fail
    if (findOutIfIcanInsert()) // <-- this would come by executing a sql query
    {
        try
        {
            insertStatementMethod3();
        }
        catch (Exception e)
        {
            // nothing to do
        }
    }
}
But that would require looking things up in the db, which would affect performance.
Is there a better way, or do I need to find out before I call the method?
I tried it out and, of course, the transaction was rolled back as expected.
If you don't want your first two methods to be transacted, just move them out from the ambient transaction's scope.
If you don't have control over the code which starts an ambient transaction, you can suppress it by creating a new ambient transaction: using (var scope = new TransactionScope(TransactionScopeOption.Suppress)).
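For example, inside yourExtraCode() you could run the failure-prone insert in a suppressed scope, so it neither participates in nor dooms the ambient transaction (a sketch; whether leaving that insert untransacted is acceptable depends on your consistency requirements):

public void yourExtraCode()
{
    insertStatementMethod1();
    insertStatementMethod2();

    // Opt the risky insert out of the ambient transaction.
    using (var suppressed = new TransactionScope(TransactionScopeOption.Suppress))
    {
        try
        {
            insertStatementMethod3();
            suppressed.Complete();
        }
        catch (Exception)
        {
            // nothing to do - the outer transaction can still commit
        }
    }
}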

Getting "Not logged on in interface XBP" error when calling XBP function module via sap .net connector

I am getting this error while calling BAPI_XBP_JOB_START_IMMEDIATELY:
IRfcFunction rfcFunc = repository.CreateFunction("BAPI_XMI_LOGON");
rfcFunc.SetValue("extcompany", "testC");
rfcFunc.SetValue("extproduct", "testP");
rfcFunc.SetValue("interface", "XBP");
rfcFunc.SetValue("version", "3.0");
rfcFunc.Invoke(dest);
rfcFunc = repository.CreateFunction("BAPI_XBP_JOB_START_IMMEDIATELY");
rfcFunc.SetValue("jobname", "MYSCHEDULEDJOB");
rfcFunc.SetValue("jobcount", "15530600");
rfcFunc.SetValue("external_user_name", "username");
rfcFunc.SetValue("target_server", "devsapsystem");
rfcFunc.Invoke(dest);
The first function module returns a session ID in its output, but the second XBP call gives the message "Not logged on in interface XBP". Is there a problem with the parameters I am passing, or do I need to maintain some session state across these sequential calls?
You will need to execute the function calls in a single session (stateful mode). This is outlined in detail in the JCo documentation - basically you will have to wrap your logic into JCoContext method invocations like this:
try
{
    JCoContext.begin(destination);
    try
    {
        // your function calls here
        // probably bapiTransactionCommit.execute(destination);
    }
    catch (AbapException ex)
    {
        // probably bapiTransactionRollback.execute(destination);
    }
}
catch (JCoException ex)
{
    [...]
}
finally
{
    JCoContext.end(destination);
}
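Note the snippet above is JCo (Java). With the SAP .NET Connector (NCo) that the question uses, the counterpart is, to the best of my knowledge, RfcSessionManager; a hedged sketch:

// Stateful call sequence with NCo 3.x - verify against your connector version.
RfcSessionManager.BeginContext(dest);
try
{
    IRfcFunction logon = repository.CreateFunction("BAPI_XMI_LOGON");
    // ... set parameters as in the question ...
    logon.Invoke(dest);

    IRfcFunction startJob = repository.CreateFunction("BAPI_XBP_JOB_START_IMMEDIATELY");
    // ... set parameters as in the question ...
    startJob.Invoke(dest);
}
finally
{
    RfcSessionManager.EndContext(dest); // always release the stateful session
}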

How to execute a method again and again until it completes successfully in case of timeout?

I have an ASP.NET application. All business logic is in the business layer.
Here is an example method:
public void DoSomething()
{
    PersonClass pc = new PersonClass();
    pc.CreatePerson();
    pc.AssignBasicTask();
    pc.ChangePersonsStatus();
    pc.CreateDefaultSettings();
}
What happens once in a while is that one of the sub-methods can time out, and as a result the process can be left incomplete.
What I'm thinking of in this case, to make sure all steps complete properly, is:
public void DoSomething()
{
    PersonClass pc = new PersonClass();
    var error = null;
    error = pc.CreatePerson();
    if (error != timeout exception)
        error = pc.AssignBasicTask();
    else
        return to step above
    if (error != timeout exception)
        error = pc.ChangePersonsStatus();
    else
        return to step above
    if (error != timeout exception)
        error = pc.CreateDefaultSettings();
    else
        return to step above
}
but that's just an idea; I'm far from sure it's the proper way to handle this.
Of course, this can be done more or less elegantly, with different options for timing out or giving up - but an easy way to achieve what you want would be to define a retry method which keeps retrying an action until it succeeds:
public static class RetryUtility
{
    public static T RetryUntilSuccess<T>(Func<T> action)
    {
        while (true)
        {
            try
            {
                return action();
            }
            catch
            {
                // Swallowing exceptions is BAD, BAD, BAD. You should AT LEAST log it.
            }
        }
    }

    public static void RetryUntilSuccess(Action action)
    {
        // Trick to allow a void method being passed in without duplicating the implementation.
        RetryUntilSuccess(() => { action(); return true; });
    }
}
Then do
RetryUtility.RetryUntilSuccess(() => pc.CreatePerson());
RetryUtility.RetryUntilSuccess(() => pc.AssignBasicTask());
RetryUtility.RetryUntilSuccess(() => pc.ChangePersonsStatus());
RetryUtility.RetryUntilSuccess(() => pc.CreateDefaultSettings());
I must urge you to think about what to do if the method keeps failing - you could be creating an infinite loop. Perhaps it should give up after N retries, or back off with an exponentially rising retry time; you will need to define that, since we cannot know enough about your problem domain to decide.
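For instance, a variant that gives up after a fixed number of attempts and doubles the delay between tries; the initial delay and attempt limit are arbitrary placeholders to tune:

public static T RetryWithBackoff<T>(Func<T> action, int maxAttempts)
{
    TimeSpan delay = TimeSpan.FromMilliseconds(500); // placeholder initial delay
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return action();
        }
        catch (Exception)
        {
            if (attempt >= maxAttempts)
                throw; // give up and let the caller decide
            // log the exception here before retrying
            System.Threading.Thread.Sleep(delay);
            delay = TimeSpan.FromTicks(delay.Ticks * 2); // exponential backoff
        }
    }
}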
You have it pretty close to correct in your pseudo-code, and there are a lot of ways to do this, but here is how I would do it:
PersonClass pc = new PersonClass();

while (true)
    if (pc.CreatePerson())
        break;

while (true)
    if (pc.AssignBasicTask())
        break;
This assumes that your methods return true to indicate success and false to indicate a timeout failure (and probably an exception for any other kind of failure). And while I didn't do it here, I would strongly recommend some sort of try counting to make sure it doesn't just loop forever and ever.
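A minimal way to add that try counting (MaxTries is an arbitrary placeholder):

const int MaxTries = 5;
int tries = 0;
while (!pc.CreatePerson())
{
    if (++tries >= MaxTries)
        throw new TimeoutException("CreatePerson did not succeed after " + MaxTries + " attempts.");
}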
Use a TransactionScope to make sure everything is executed as a unit. More info here: Implementing an Implicit Transaction using Transaction Scope
You should never retry a timed-out operation infinitely; you may end up hanging the server, creating an infinite loop, or both. There should always be a threshold for how many retries are acceptable before quitting.
Sample:
using (TransactionScope scope = new TransactionScope())
{
    try
    {
        // Your code here

        // If no errors were thrown, commit your transaction
        scope.Complete();
    }
    catch
    {
        // Some error handling
    }
}

Additional try statement in catch statement - code smell?

Situation:
My application needs to process the first step in the business rules (the initial try-catch statement). If a certain error occurs when the process calls the helper method during this step, I need to switch to a second process in the catch statement. The backup process uses the same helper method. If the same error occurs during the second process, I need to stop the entire process and throw the exception.
Implementation:
I was going to insert another try-catch statement into the catch statement of the first try-catch statement.
//run initial process
try
{
    //initial information used in helper method
    string s1 = "value 1";
    //call helper method
    HelperMethod(s1);
}
catch (Exception e1)
{
    //backup information if first process generates an exception in the helper method
    string s2 = "value 2";
    //try catch statement for second process.
    try
    {
        HelperMethod(s2);
    }
    catch (Exception e2)
    {
        throw e2;
    }
}
What would be the correct design pattern to avoid code smells in this implementation?
I caused some confusion and left out that when the first process fails and switches to the second process, it will send different information to the helper method. I have updated the scenario to reflect the entire process.
If the HelperMethod needs a second try, there is nothing directly wrong with this, but your code in the catch tries to do way too much, and it destroys the stack trace from e2.
You only need:
try
{
    //call helper method
    HelperMethod();
}
catch (Exception e1)
{
    // maybe log e1, it is getting lost here
    HelperMethod();
}
I wouldn't say it is bad, although I'd almost certainly refactor the second block of code into a second method to keep it comprehensible. And probably catch something more specific than Exception. A second try is sometimes necessary, especially for things like Dispose() implementations that might themselves throw (WCF, I'm looking at you).
The general idea of putting a try-catch inside the catch of a parent try-catch doesn't seem like a code smell to me. I can think of other legitimate reasons for doing this - for instance, when cleaning up an operation that failed where you do not want to ever throw another error (such as if the clean-up operation also fails). Your implementation, however, raises two questions for me: 1) Wim's comment, and 2) do you really want to entirely disregard why the operation originally failed (the e1 exception)? Whether the second process succeeds or fails, your code does nothing with the original exception.
Generally speaking, this isn't a problem, and it isn't a code smell that I know of.
With that said, you may want to look at handling the error within your first helper method instead of just throwing it (and, thus, handling the call to the second helper method in there). That's only if it makes sense, but it is a possible change.
Yes, a more general pattern is to have the basic method include an overload that accepts an int attempt parameter, and then conditionally call itself recursively.
private void MyMethod(ParameterList parameterList)
{
    MyMethod(parameterList, 0);
}

private void MyMethod(ParameterList parameterList, int attempt)
{
    try { HelperMethod(); }
    catch (SomeSpecificException)
    {
        if (attempt < MAXATTEMPTS)
            MyMethod(parameterList, ++attempt);
        else
            throw;
    }
}
It shouldn't be that bad. Just document clearly why you're doing it, and most DEFINITELY try catching a more specific Exception type.
If you need some retry mechanism, which it looks like, you may want to explore different techniques, looping with delays etc.
It would be a little clearer if you called a different function in the catch so that a reader doesn't think you're just retrying the same function, as is, over again. If there's state happening that's not being shown in your example, you should document it carefully, at a minimum.
You also shouldn't throw e2; like that: you should simply throw; if you're going to work with the exception you caught at all. If not, you shouldn't try/catch.
Where you do not reference e1, you should simply catch (Exception) or better still catch (YourSpecificException)
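A small sketch of the difference (HelperException and Log are hypothetical names):

try
{
    HelperMethod();
}
catch (HelperException ex)
{
    Log(ex);   // hypothetical logging call
    throw;     // rethrows and preserves the original stack trace
    // 'throw ex;' here would instead reset the stack trace to this line
}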
If you're doing this to try and recover from some sort of transient error, then you need to be careful about how you implement this.
For example, in an environment where you're using SQL Server Mirroring, it's possible that the server you're connected to may stop being the master mid-connection.
In that scenario, it may be valid for your application to try and reconnect, and re-execute any statements on the new master - rather than sending an error back to the caller immediately.
You need to be careful to ensure that the methods you're calling don't have their own automatic retry mechanism, and that your callers are aware there is an automatic retry built into your method. Failing to ensure this can result in scenarios where you cause a flood of retry attempts, overloading shared resources (such as Database servers).
You should also ensure you're catching exceptions specific to the transient error you're trying to retry. So, in the example I gave, catch SqlException, and then examine it to see whether the error was that the SQL connection failed because the host was no longer the master.
If you need to retry more than once, consider placing an 'automatic backoff' retry delay - the first failure is retried immediately, the second after a delay of (say) 1 second, then doubled up to a maximum of (say) 90 seconds. This should help prevent overloading resources.
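A sketch of that policy; DoDatabaseWork and IsTransient are hypothetical placeholders, and a real transient check would inspect, e.g., specific SqlException error numbers for your environment:

TimeSpan delay = TimeSpan.Zero;               // first retry is immediate
TimeSpan maxDelay = TimeSpan.FromSeconds(90); // cap the backoff
while (true)
{
    try
    {
        DoDatabaseWork(); // the operation being retried
        break;
    }
    catch (SqlException ex)
    {
        if (!IsTransient(ex)) // e.g. inspect ex.Number for known transient codes
            throw;
        System.Threading.Thread.Sleep(delay);
        delay = delay == TimeSpan.Zero
            ? TimeSpan.FromSeconds(1)                                        // second retry after 1 second
            : TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, maxDelay.Ticks)); // then double, capped
        // combine this with an attempt cap, as discussed above
    }
}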
I would also suggest restructuring your method so that you don't have an inner-try/catch.
For example:
bool helper_success = false;
bool automatic_retry = false;

//run initial process
try
{
    //call helper method
    HelperMethod();
    helper_success = true;
}
catch (Exception e)
{
    // check if e is a transient exception. If so, set automatic_retry = true
}

if (automatic_retry)
{
    //try catch statement for second process.
    try
    {
        HelperMethod();
    }
    catch (Exception)
    {
        throw;
    }
}
Here's another pattern:
// set up state for first attempt
if(!HelperMethod(false)) {
// set up state for second attempt
HelperMethod(true);
// no need to try catch since you're just throwing anyway
}
Here, HelperMethod is
bool HelperMethod(bool throwOnFailure)
and the return value indicates whether or not success occurred (i.e., false indicates failure and true indicates success). You could also do:
// could wrap in try/catch
HelperMethod(2, stateChanger);
where HelperMethod is
void HelperMethod(int numberOfTries, StateChanger[] stateChanger)
where numberOfTries indicates the number of times to try before throwing an exception and StateChanger[] is an array of delegates that will change the state for you between calls (i.e., stateChanger[0] is called before the first attempt, stateChanger[1] is called before the second attempt, etc.)
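A sketch of that second shape; the StateChanger delegate, DoLookup, and LookupException are hypothetical names for the pieces left abstract above:

public delegate void StateChanger(); // mutates shared state before an attempt

void HelperMethod(int numberOfTries, StateChanger[] stateChanger)
{
    for (int attempt = 0; attempt < numberOfTries; attempt++)
    {
        stateChanger[attempt](); // set up state for this attempt
        try
        {
            DoLookup(); // the operation that may throw
            return;     // success - stop retrying
        }
        catch (LookupException)
        {
            if (attempt == numberOfTries - 1)
                throw; // out of tries - propagate
        }
    }
}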
This last option indicates that you might have a smelly setup though. It looks like the class that is encapsulating this process is responsible for both keeping track of state (which employee to look up) as well as looking up the employee (HelperMethod). By SRP, these should be separate.
Of course, you need to a catch a more specific exception than you currently are (don't catch the base class Exception!) and you should just throw instead of throw e if you need to rethrow the exception after logging, cleanup, etc.
You could emulate C#'s TryParse method signatures:
class Program
{
    static void Main(string[] args)
    {
        Exception ex;

        Console.WriteLine("trying 'ex'");
        if (TryHelper("ex", out ex))
        {
            Console.WriteLine("'ex' worked");
        }
        else
        {
            Console.WriteLine("'ex' failed: " + ex.Message);
            Console.WriteLine("trying 'test'");
            if (TryHelper("test", out ex))
            {
                Console.WriteLine("'test' worked");
            }
            else
            {
                Console.WriteLine("'test' failed: " + ex.Message);
                throw ex;
            }
        }
    }

    private static bool TryHelper(string s, out Exception result)
    {
        try
        {
            HelperMethod(s);
            result = null;
            return true;
        }
        catch (Exception ex)
        {
            // log here to preserve stack trace
            result = ex;
            return false;
        }
    }

    private static void HelperMethod(string s)
    {
        if (s.Equals("ex"))
        {
            throw new Exception("s can be anything except 'ex'");
        }
    }
}
Another way is to flatten the try/catch blocks, useful if you're using some exception-happy API:
public void Foo()
{
    try
    {
        HelperMethod("value 1");
        return; // finished
    }
    catch (Exception e)
    {
        // possibly log exception
    }

    try
    {
        HelperMethod("value 2");
        return; // finished
    }
    catch (Exception e)
    {
        // possibly log exception
    }
    // ... more here if needed
}
An option for retry (that most people will probably flame) would be to use a goto. C# doesn't have filtered exceptions but this could be used in a similar manner.
const int MAX_RETRY = 3;

public static void DoWork()
{
    //Do Something
}

public static void DoWorkWithRetry()
{
    var @try = 0;
retry:
    try
    {
        DoWork();
    }
    catch (Exception)
    {
        @try++;
        if (@try < MAX_RETRY)
            goto retry;
        throw;
    }
}
In this case you know this "exception" probably will happen, so I would prefer a simple approach and leave exceptions for the unknown events.
//run initial process
try
{
    //initial information used in helper method
    string s1 = "value 1";
    //call helper method
    if (!HelperMethod(s1))
    {
        //backup information if the first process fails in the helper method
        string s2 = "value 2";
        if (!HelperMethod(s2))
        {
            return ErrorOfSomeKind;
        }
    }
    return Ok;
}
catch (ApplicationException)
{
    throw;
}
I know that I've done the above nested try-catch recently to handle decoding data where two third-party libraries throw exceptions on failure to decode (try JSON decode, then try Base64 decode), but my preference is to have functions return a value which can be checked.
I generally only use the throwing of exceptions to exit early and notify something up the chain about the error if it's fatal to the process.
If a function is unable to provide a meaningful response, that is not typically a fatal problem (Unlike bad input data).
It seems like the main risk in nested try catch is that you also end up catching all the other (maybe important) exceptions that might occur.
