CA2000 and disposal of WCF client - C#

There is plenty of information out there concerning WCF clients and the fact that you cannot simply rely on a using statement to dispose of the client. This is because the Close method can throw an exception (e.g. if the server hosting the service doesn't respond).
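For reference, the pattern being warned against is the plain using block below: the implicit Dispose() call at the end of the block invokes Close(), so a failed Close() can throw and mask any exception raised inside the block.

// Unsafe with WCF: the implicit Dispose() calls Close(), which can
// throw (e.g. a TimeoutException or CommunicationException) and mask
// any exception thrown inside the block.
using (MyServiceClient client = new MyServiceClient())
{
    client.DoSomething();
} // implicit client.Dispose() -> Close() may throw here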
I've done my best to implement something that adheres to the numerous suggestions out there.
public void DoSomething()
{
    MyServiceClient client = new MyServiceClient(); // from service reference
    try
    {
        client.DoSomething();
    }
    finally
    {
        client.CloseProxy();
    }
}
public static void CloseProxy(this ICommunicationObject proxy)
{
    if (proxy == null)
        return;

    try
    {
        if (proxy.State != CommunicationState.Closed
            && proxy.State != CommunicationState.Faulted)
        {
            proxy.Close();
        }
        else
        {
            proxy.Abort();
        }
    }
    catch (CommunicationException)
    {
        proxy.Abort();
    }
    catch (TimeoutException)
    {
        proxy.Abort();
    }
    catch
    {
        proxy.Abort();
        throw;
    }
}
This appears to be working as intended. However, when I run Code Analysis in Visual Studio 2010 I still get a CA2000 warning.
CA2000 : Microsoft.Reliability : In method 'DoSomething()', call System.IDisposable.Dispose on object 'client' before all references to it are out of scope.
Is there something I can do to my code to get rid of the warning or should I use SuppressMessage to hide this warning once I am comfortable that I am doing everything possible to be sure the client is disposed of?
Related resources that I've found:
http://www.theroks.com/2011/03/04/wcf-dispose-problem-with-using-statement/
http://www.codeproject.com/Articles/151755/Correct-WCF-Client-Proxy-Closing.aspx
http://codeguru.earthweb.com/csharp/.net/net_general/tipstricks/article.php/c15941/

You could add a call to Dispose in your finally block, after the call to CloseProxy. At that point you can be reasonably sure that Dispose won't throw, although it seems a bit silly to add superfluous code just to keep code analysis happy - I'd probably just suppress the message.
(Whichever option you choose, include very clear comments explaining why the code does what it does.)
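For example, both options might look like this (a sketch based on the question's code; the justification text is illustrative):

// Option 1: an extra Dispose after CloseProxy, purely to satisfy CA2000.
// By this point the proxy is already Closed or Aborted, so Dispose
// (which just calls Close) should not throw.
public void DoSomething()
{
    MyServiceClient client = new MyServiceClient();
    try
    {
        client.DoSomething();
    }
    finally
    {
        client.CloseProxy();
        ((IDisposable)client).Dispose(); // effectively a no-op here, but silences CA2000
    }
}

// Option 2: suppress the warning with a recorded justification
// (requires using System.Diagnostics.CodeAnalysis).
[SuppressMessage("Microsoft.Reliability", "CA2000:DisposeObjectsBeforeLosingScope",
    Justification = "Client is closed or aborted by CloseProxy; Dispose would only call Close again.")]
public void DoSomething()
{
    // ... body as above
}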

Related

Azure Functions 2.x keeps throwing caught exceptions

In my Azure Functions 2.x project, I have part of a function, a try-catch block without a finally, that looks more or less like this.
Dictionary<string, int> varDict = null;
Tuple<string, DateTime> varTupl = null;
try
{
    varDict = await Core.GetDict(lcLat.Value, lcLong.Value);
    varTupl = await Core.GetTupl(lcLat.Value, lcLong.Value);
}
catch (AggregateException ae)
{
    ae.Flatten().Handle(ex =>
    {
        // `log` is an ILogger, the standard Azure Functions parameter
        log.LogError(ex, "");  // writes the ex's error
        Debug.WriteLine("");   // writes the ex's error
        // the written content is omitted for readability's sake
        return true;
    });
}
catch (Exception ex)
{
    // does exactly what the Handle() lambda does
}
if (varDict != null && varTupl != null)
{
    // The code never gets here; HTTP 500 is returned instead
}
else
{
    // Here neither
}
The Run method itself is an async Task<IActionResult>. Core is a public static class containing the GetDict() and GetTupl() methods; each is also a static async Task<T> with its respective T return type. Neither has a try-catch block, only using statements (which are not supposed to throw any exceptions, right?).
The problem is that even though (I assume) the exceptions are raised and bubble up into my try-catch block, and even though my catch block runs and prints the exception with my formatting, my Azure Function keeps returning HTTP error 500 and skips the rest of the code after the try-catch block.
What I have tried:
Disabling the 'Just My Code' debugging option in Visual Studio 2017
Adding a catch for AggregateException (before this it was only catching Exception)
Flattening the AggregateException before calling Handle() on it
Is this common in a local development environment, or am I just handling everything incorrectly?
Also, the output window keeps printing messages even in the idle state (while the HTTP endpoint isn't being invoked; the app is just running in debug mode, waiting for an invocation).
Are these something I should be concerned about? Are they even related to my problem?

Correct way to close WCF 4 channels effectively

I am using the following way to close WCF 4 channels. Is this the right way to do it?
using (IService channel
    = CustomChannelFactory<IService>.CreateConfigurationChannel())
{
    channel.Open();
    // do stuff
} // channel disposed of??
That used to be the commonly accepted way to release WCF client proxies in the "early" days of WCF.
However, things have since changed. It turned out that the implementation of IClientChannel.Dispose() simply invokes the IClientChannel.Close() method, which may throw an exception under some circumstances, such as when the underlying channel isn't open or can't be closed in a timely fashion.
Therefore it's not a good idea to rely on Dispose() (that is, on a using block) to release the channel, since an exception thrown by Close() may leave behind some unreleased resources.
The recommended way instead is to call Close() explicitly and to invoke IClientChannel.Abort() from a catch block in case Close() fails. Here's an example:
try
{
    channel.DoSomething();
    channel.Close();
}
catch
{
    channel.Abort();
    throw;
}
Update:
Here's a reference to an MSDN article that describes this recommendation.
Although not strictly directed at the channel, you can do:
ChannelFactory<IMyService> channelFactory = null;
try
{
    channelFactory = new ChannelFactory<IMyService>();
    channelFactory.Open();
    // Do work...
    channelFactory.Close();
}
catch (CommunicationException)
{
    if (channelFactory != null)
    {
        channelFactory.Abort();
    }
}
catch (TimeoutException)
{
    if (channelFactory != null)
    {
        channelFactory.Abort();
    }
}
catch (Exception)
{
    if (channelFactory != null)
    {
        channelFactory.Abort();
    }
    throw;
}

WCF Proxy usage

This answer was posted in response to this question.
It's a little above my head right now, but is the "higher order function" supposed to be used within a client proxy class? Is this correct usage?:
public class MyProxy
{
    readonly IMyService service =
        new ChannelFactory<IMyService>("IMyService").CreateChannel();

    public ResponseObject Foo(RequestObject request)
    {
        return UseService((IMyService service) =>
            service.Bar(request));
    }

    T UseService<T>(Func<IMyService, T> code)
    {
        bool error = true;
        try
        {
            T result = code(service);
            ((IClientChannel)service).Close();
            error = false;
            return result;
        }
        finally
        {
            if (error)
            {
                ((IClientChannel)service).Abort();
            }
        }
    }
}
All I'm really looking for is some guidance here, and the correct way to do this.
This is actually not too bad. Perhaps you could cast to an ICommunicationObject instead, as the same code is required for your hosts as well.
The way to think about it: Close is the friendly call, meaning "please finish my call and return the proxy to the connection pool". Abort means "I don't care, shut the proxy down because it's dead, and remove it from the pool because it's dead".
Depending on your code, you might want to abstract the "WCF Proxy" parts of the code from the function call parts if it's possible. That way you can unit test your application logic separately from the WCF proxy code.
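As a rough sketch of that separation (the gateway class and its names are illustrative, not from the question): the application logic depends only on the service contract, while the close/abort ritual lives in one reusable method that can be faked in unit tests.

public class MyServiceGateway
{
    readonly ChannelFactory<IMyService> factory =
        new ChannelFactory<IMyService>("IMyService");

    public T Call<T>(Func<IMyService, T> code)
    {
        IMyService channel = factory.CreateChannel();
        bool success = false;
        try
        {
            T result = code(channel);
            ((ICommunicationObject)channel).Close();
            success = true;
            return result;
        }
        finally
        {
            if (!success)
            {
                ((ICommunicationObject)channel).Abort();
            }
        }
    }
}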
You may want to look at a try {} catch (CommunicationException) so you can treat your WCF exceptions separately from application-level exceptions, instead of using the finally.
i.e.:
try
{
    try
    {
        proxy.call();
        // app logic
        ((ICommunicationObject)proxy).Close();
    }
    catch (SomeAppException)
    {
        // recover from app exception
    }
}
catch (CommunicationException)
{
    ((ICommunicationObject)proxy).Abort();
}

Additional try statement in catch statement - code smell?

Situation:
My application needs to process the first step in the business rules (the initial try-catch statement). If a certain error occurs when the process calls the helper method during that step, I need to switch to a second process in the catch statement. The backup process uses the same helper method. If the same error occurs during the second process, I need to stop the entire process and throw the exception.
Implementation:
I was going to insert another try-catch statement into the catch statement of the first try-catch statement.
// run initial process
try
{
    // initial information used in helper method
    string s1 = "value 1";
    // call helper method
    HelperMethod(s1);
}
catch (Exception e1)
{
    // backup information if first process generates an exception in the helper method
    string s2 = "value 2";
    // try-catch statement for second process
    try
    {
        HelperMethod(s2);
    }
    catch (Exception e2)
    {
        throw e2;
    }
}
What would be the correct design pattern to avoid code smells in this implementation?
I caused some confusion and left out that when the first process fails and switches to the second process, it will send different information to the helper method. I have updated the scenario to reflect the entire process.
If the HelperMethod needs a second try, there is nothing directly wrong with this, but your code in the catch tries to do way too much, and it destroys the stack trace from e2.
You only need:
try
{
    // call helper method
    HelperMethod();
}
catch (Exception e1)
{
    // maybe log e1, it is getting lost here
    HelperMethod();
}
I wouldn't say it is bad, although I'd almost certainly refactor the second block of code into a second method, to keep it comprehensible. And probably catch something more specific than Exception. A second try is sometimes necessary, especially for things like Dispose() implementations that might themselves throw (WCF, I'm looking at you).
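A minimal sketch of that refactoring (the method and exception names are placeholders):

// run initial process
try
{
    string s1 = "value 1";
    HelperMethod(s1);
}
catch (SomeSpecificException)   // something more specific than Exception
{
    RunBackupProcess();         // hypothetical second method
}

// ...

private void RunBackupProcess()
{
    // backup information for the second attempt
    string s2 = "value 2";
    HelperMethod(s2);           // if this throws, it simply propagates
}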
The general idea of putting a try-catch inside the catch of a parent try-catch doesn't seem like a code smell to me. I can think of other legitimate reasons for doing this, for instance when cleaning up after a failed operation where you never want to throw another error (such as when the clean-up operation also fails). Your implementation, however, raises two questions for me: 1) Wim's comment, and 2) do you really want to entirely disregard why the operation originally failed (the e1 exception)? Whether the second process succeeds or fails, your code does nothing with the original exception.
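If you do want to preserve the original failure, one option (a sketch, not part of the original answer) is to wrap both exceptions when the backup attempt also fails:

catch (Exception e1)
{
    string s2 = "value 2";
    try
    {
        HelperMethod(s2);
    }
    catch (Exception e2)
    {
        // keep both failures instead of discarding e1
        throw new AggregateException(
            "Both the initial and backup processes failed.", e1, e2);
    }
}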
Generally speaking, this isn't a problem, and it isn't a code smell that I know of.
With that said, you may want to look at handling the error within your first helper method instead of just throwing it (and, thus, handling the call to the second helper method in there). That's only if it makes sense, but it is a possible change.
Yes, a more general pattern is to have the basic method include an overload that accepts an int attempt parameter, and then conditionally call itself recursively.
private void MyMethod(ParameterList parameters)
{ MyMethod(parameters, 0); }

private void MyMethod(ParameterList parameters, int attempt)
{
    try { HelperMethod(); }
    catch (SomeSpecificException)
    {
        if (attempt < MAXATTEMPTS)
            MyMethod(parameters, ++attempt);
        else
            throw;
    }
}
It shouldn't be that bad. Just document clearly why you're doing it, and most DEFINITELY try catching a more specific Exception type.
If you need some retry mechanism, which it looks like you do, you may want to explore different techniques: looping with delays, etc.
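For instance, a simple retry loop with a fixed delay might look like this (a sketch; the attempt count, delay, and exception type are placeholders, and Thread.Sleep requires System.Threading):

const int MaxAttempts = 3;
for (int attempt = 1; ; attempt++)
{
    try
    {
        HelperMethod();
        break; // success
    }
    catch (SomeSpecificException)
    {
        if (attempt == MaxAttempts)
            throw; // out of attempts; let the exception propagate
        Thread.Sleep(TimeSpan.FromSeconds(1)); // wait before retrying
    }
}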
It would be a little clearer if you called a different function in the catch so that a reader doesn't think you're just retrying the same function, as is, over again. If there's state happening that's not being shown in your example, you should document it carefully, at a minimum.
You also shouldn't throw e2; like that: you should simply throw; if you're going to work with the exception you caught at all. If not, you shouldn't try/catch.
Where you do not reference e1, you should simply catch (Exception) or, better still, catch (YourSpecificException).
If you're doing this to try and recover from some sort of transient error, then you need to be careful about how you implement this.
For example, in an environment where you're using SQL Server Mirroring, it's possible that the server you're connected to may stop being the master mid-connection.
In that scenario, it may be valid for your application to try and reconnect, and re-execute any statements on the new master - rather than sending an error back to the caller immediately.
You need to be careful to ensure that the methods you're calling don't have their own automatic retry mechanism, and that your callers are aware there is an automatic retry built into your method. Failing to ensure this can result in scenarios where you cause a flood of retry attempts, overloading shared resources (such as Database servers).
You should also ensure you're catching exceptions specific to the transient error you're trying to retry. So, in the example I gave, SqlException, and then examining to see if the error was that the SQL connection failed because the host was no longer the master.
If you need to retry more than once, consider placing an 'automatic backoff' retry delay - the first failure is retried immediately, the second after a delay of (say) 1 second, then doubled up to a maximum of (say) 90 seconds. This should help prevent overloading resources.
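A sketch of that backoff, using the example values above (IsTransient stands in for whatever check identifies the transient error being retried):

TimeSpan delay = TimeSpan.Zero;                 // first retry is immediate
TimeSpan maxDelay = TimeSpan.FromSeconds(90);   // cap from the text above
while (true)
{
    try
    {
        HelperMethod();
        break; // success
    }
    catch (SqlException ex)
    {
        if (!IsTransient(ex))
            throw;                              // not transient; give up
        Thread.Sleep(delay);
        delay = delay == TimeSpan.Zero
            ? TimeSpan.FromSeconds(1)           // second failure: 1 second
            : TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, maxDelay.Ticks));
    }
}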
I would also suggest restructuring your method so that you don't have an inner-try/catch.
For example:
bool helper_success = false;
bool automatic_retry = false;

// run initial process
try
{
    // call helper method
    HelperMethod();
    helper_success = true;
}
catch (Exception e)
{
    // check if e is a transient exception; if so, set automatic_retry = true
}

if (automatic_retry)
{
    // try-catch statement for second process
    try
    {
        HelperMethod();
    }
    catch (Exception e)
    {
        throw;
    }
}
Here's another pattern:
// set up state for first attempt
if (!HelperMethod(false))
{
    // set up state for second attempt
    HelperMethod(true);
    // no need to try-catch since you're just throwing anyway
}
Here, HelperMethod is
bool HelperMethod(bool throwOnFailure)
and the return value indicates whether or not success occurred (i.e., false indicates failure and true indicates success). You could also do:
// could wrap in try/catch
HelperMethod(2, stateChanger);
where HelperMethod is
void HelperMethod(int numberOfTries, StateChanger[] stateChanger)
where numberOfTries indicates the number of times to try before throwing an exception and StateChanger[] is an array of delegates that will change the state for you between calls (i.e., stateChanger[0] is called before the first attempt, stateChanger[1] is called before the second attempt, etc.)
This last option indicates that you might have a smelly setup though. It looks like the class that is encapsulating this process is responsible for both keeping track of state (which employee to look up) as well as looking up the employee (HelperMethod). By SRP, these should be separate.
Of course, you need to catch a more specific exception than you currently are (don't catch the base class Exception!) and you should just throw instead of throw e if you need to rethrow the exception after logging, cleanup, etc.
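A sketch of that last variant (StateChanger is a hypothetical delegate type, and DoLookup stands in for the actual work):

delegate void StateChanger();   // prepares state before an attempt

void HelperMethod(int numberOfTries, StateChanger[] stateChanger)
{
    for (int attempt = 0; attempt < numberOfTries; attempt++)
    {
        stateChanger[attempt]();      // set up state for this attempt
        try
        {
            DoLookup();               // the actual work, e.g. the employee lookup
            return;                   // success
        }
        catch (SomeSpecificException) // not the base Exception class
        {
            if (attempt == numberOfTries - 1)
                throw;                // out of tries; let it propagate
        }
    }
}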
You could emulate C#'s TryParse method signatures:
class Program
{
    static void Main(string[] args)
    {
        Exception ex;
        Console.WriteLine("trying 'ex'");
        if (TryHelper("ex", out ex))
        {
            Console.WriteLine("'ex' worked");
        }
        else
        {
            Console.WriteLine("'ex' failed: " + ex.Message);
            Console.WriteLine("trying 'test'");
            if (TryHelper("test", out ex))
            {
                Console.WriteLine("'test' worked");
            }
            else
            {
                Console.WriteLine("'test' failed: " + ex.Message);
                throw ex;
            }
        }
    }

    private static bool TryHelper(string s, out Exception result)
    {
        try
        {
            HelperMethod(s);
            result = null;
            return true;
        }
        catch (Exception ex)
        {
            // log here to preserve stack trace
            result = ex;
            return false;
        }
    }

    private static void HelperMethod(string s)
    {
        if (s.Equals("ex"))
        {
            throw new Exception("s can be anything except 'ex'");
        }
    }
}
Another way is to flatten the try/catch blocks, useful if you're using some exception-happy API:
public void Foo()
{
    try
    {
        HelperMethod("value 1");
        return; // finished
    }
    catch (Exception e)
    {
        // possibly log exception
    }

    try
    {
        HelperMethod("value 2");
        return; // finished
    }
    catch (Exception e)
    {
        // possibly log exception
    }
    // ... more here if needed
}
An option for retry (that most people will probably flame) would be to use a goto. C# doesn't have filtered exceptions but this could be used in a similar manner.
const int MAX_RETRY = 3;

public static void DoWork()
{
    // Do something
}

public static void DoWorkWithRetry()
{
    var @try = 0;
retry:
    try
    {
        DoWork();
    }
    catch (Exception)
    {
        @try++;
        if (@try < MAX_RETRY)
            goto retry;
        throw;
    }
}
In this case you know this "exception" probably will happen, so I would prefer a simple approach and leave exceptions for the unknown events.
// run initial process
try
{
    // initial information used in helper method
    string s1 = "value 1";
    // call helper method
    if (!HelperMethod(s1))
    {
        // backup information if the first process fails in the helper method
        string s2 = "value 2";
        if (!HelperMethod(s2))
        {
            return ErrorOfSomeKind;
        }
    }
    return Ok;
}
catch (ApplicationException ex)
{
    throw;
}
I know that I've done the above nested try-catch recently to handle decoding data where two third-party libraries throw exceptions on failure to decode (try JSON decode, then try base64 decode), but my preference is to have functions return a value which can be checked.
I generally only use the throwing of exceptions to exit early and notify something up the chain about the error if it's fatal to the process.
If a function is unable to provide a meaningful response, that is not typically a fatal problem (Unlike bad input data).
It seems like the main risk in nested try catch is that you also end up catching all the other (maybe important) exceptions that might occur.

To handle exceptions in every form or just at Main

I have a question about handling exceptions. I have a WinForms application that uses a web service proxy on each form for data retrieval and processing. Here is where I got really confused and spent a long time deciding which approach is better.
A. For each call to the web service, do a try-catch to display the error message and allow the user to retry the process by clicking the button again.
B. Since the error occurred in the web service, and was probably because the web service was inaccessible, just put a generic try-catch in the WinMain function in Program.cs and show an error message that the web service is inaccessible before the application closes.
The main argument is that A is more user-friendly but needs a lot of try-catch code, while B is easier to code but just lets the application end. I am leaning toward A, but am searching the net for ways to lessen the code that needs to be written. Any ideas?
When you add a web reference, the code generator automatically adds "Async" methods to access the web service.
I would recommend that you use the Async methods rather than the synchronous methods. The nice thing about that is that the EventArgs for the Async methods provide an Error property that you can use to see if the request was successful or not.
private void CheckWebservice(string data)
{
    WebService.Server server = new WebService.Server();
    server.methodCompleted += server_methodCompleted;
    server.methodAsync(data);
}

private void server_methodCompleted(object sender, methodCompletedEventArgs e)
{
    if (e.Error != null)
    {
        if (MessageBox.Show("Error", "Error", MessageBoxButtons.AbortRetryIgnore) == DialogResult.Retry)
        {
            // call method to retry
        }
    }
    else
    {
        if (e.Result == "OK") { /* Great! */ }
    }
}
If you must use the synchronous methods for some reason, then you could, of course, write a class to encapsulate the methods to call your web service so that you can call it from various places without duplicating the code. Your encapsulation class could do all the error handling and return a result.
class CallWebService
{
    public enum Result
    { Unknown, Success, NotAvailable, InvalidData } // etc

    public Result Call(string data)
    {
        Webservice.Server server = new Webservice.Server();
        string result = string.Empty;
        try
        {
            result = server.getResult(data);
        }
        catch (Exception) // replace with appropriate exception class
        {
            return Result.NotAvailable;
        }
        if (result == "OK") return Result.Success;
        else return Result.InvalidData;
    }
}
Encapsulate the webservice call and the try/catch block inside a class =)
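For example, with the CallWebService class from the answer above, each form's handler shrinks to something like this (a sketch; the control names and messages are illustrative):

private void btnLoad_Click(object sender, EventArgs e)
{
    var service = new CallWebService();
    switch (service.Call(txtInput.Text))
    {
        case CallWebService.Result.Success:
            // update the UI with the retrieved data
            break;
        case CallWebService.Result.NotAvailable:
            MessageBox.Show("The web service is not available. Please try again.");
            break;
        default:
            MessageBox.Show("The data could not be processed.");
            break;
    }
}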
