Using Dispose on a Singleton to Clean Up Resources - C#

The question I have might be more to do with semantics than with the actual use of IDisposable. I am working on implementing a singleton class that is in charge of managing a database instance that is created during the execution of the application. When the application closes this database should be deleted.
Right now I have this delete being handled by a Cleanup() method of the singleton that the application calls when it is closing. As I was writing the documentation for Cleanup() it struck me that I was describing what a Dispose() method should be used for, i.e. cleaning up resources. I had originally not implemented IDisposable because it seemed out of place on my singleton; I didn't want anything to dispose the singleton itself. There isn't currently, but in the future there might be, a reason for Cleanup() to be called while the singleton still needs to exist. I think I can include GC.SuppressFinalize(this); in the Dispose method to make this feasible.
My question therefore is multi-parted:
1) Is implementing IDisposable on a singleton fundamentally a bad idea?
2) Am I just mixing semantics here by having a Cleanup() instead of a Dispose(), and since I'm disposing resources I really should use a Dispose?
3) Will implementing Dispose() with GC.SuppressFinalize(this); make it so my singleton is not actually destroyed in the case where I want it to live on after a call to clean up the database?

Implementing IDisposable on a singleton might be a good idea if you use a CAS technique instead of locks to create the singleton. Something like this:
if (instance == null) {
    var temp = new Singleton();
    if (Interlocked.CompareExchange(ref instance, temp, null) != null &&
        temp is IDisposable) {
        ((IDisposable)temp).Dispose();
    }
}
return instance;
We created a temporary object and attempted an atomic compare-and-swap, so we need to dispose of the temporary object if it implements IDisposable and it wasn't the one written to instance's location.
It might be good to avoid locks, but it also adds a bit of overhead if heavy logic runs in the singleton's constructor.
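On .NET 4 and later you can also skip the hand-rolled CAS entirely and let Lazy<T> handle the thread safety. A minimal sketch, where the Cleanup method is a hypothetical stand-in for whatever actually deletes the database:
using System;

public sealed class DatabaseManager : IDisposable
{
    // Lazy<T> with the default LazyThreadSafetyMode.ExecutionAndPublication
    // guarantees the factory runs at most once, even under contention.
    private static readonly Lazy<DatabaseManager> lazy =
        new Lazy<DatabaseManager>(() => new DatabaseManager());

    public static DatabaseManager Instance
    {
        get { return lazy.Value; }
    }

    private DatabaseManager()
    {
        // Create the temporary database here.
    }

    // Hypothetical method standing in for whatever deletes the database;
    // it leaves the singleton instance itself usable afterwards.
    public void Cleanup()
    {
        // e.g. drop the database / delete its file
    }

    public void Dispose()
    {
        Cleanup();
    }
}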
If you don't want other code to clean up or dispose your object, just don't provide any opportunity for it. However, it might be a good idea to provide some kind of Reset() method to allow the singleton to recreate itself if, let's say, you use lazy initialization. Something like this:
public static Singleton GetInstance() {
    if (instance == null) {
        instance = new Singleton(); // here we re-evaluate the cache, for example
    }
    return instance;
}
public static void Reset() {
    instance = null;
}

In short, if you have a singleton and you call Dispose on it, any object that tries to use it after that will be using an object in a disposed state.
Now adding Dispose, and disposing of the object after the application is done with it, isn't necessarily bad. You do have to be careful about when you call it, though. If you are really concerned with the cleanup and you have only one reference to it, you can put the cleanup code in the object's finalizer (~YourClass); that way it will only be called when .NET is sure it is no longer needed (when the application closes, if it is a true singleton).
Am I just mixing semantics here by having a Cleanup() instead of a Dispose() and since I'm disposing resources I really should use a dispose?
Yes, this is just semantics. Dispose is the standard for showing there are things that need to be cleared up after the program is done with the object.
Will implementing 'Dispose()' with GC.SuppressFinalize(this); make it so my singleton is not actually destroyed in the case I want it to live after a call to clean-up the database.
No. What this means is that when you call the Dispose method, the garbage collector won't call the object's custom finalizer. It has no effect on whether or when the object itself is collected; as long as your static field still references the singleton, it stays alive.
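In other words, SuppressFinalize only affects finalization, not lifetime: as long as a rooted reference (such as your static instance field) exists, the object stays alive and usable. A minimal illustration, with hypothetical names:
using System;

public sealed class DatabaseSingleton : IDisposable
{
    public static readonly DatabaseSingleton Instance = new DatabaseSingleton();

    private DatabaseSingleton() { }

    public void Dispose()
    {
        // Release the external resource here (e.g. delete the temporary database).

        // Tell the GC the finalizer no longer needs to run.
        GC.SuppressFinalize(this);

        // Note: the static Instance field still roots this object, so it is
        // not collected; callers can keep using it afterwards as long as its
        // methods tolerate the "already cleaned up" state.
    }

    ~DatabaseSingleton()
    {
        // Fallback cleanup in case Dispose was never called.
    }
}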

I agree with Kevin's answer, but I'd like to add something to it. I'm a bit confused about your statement:
When the application closes this database should be deleted.
Do you really mean deleted? As in destroyed? Are you talking about a real (SQL) database?
You must understand that even if you put your cleanup code in a finalizer, or in the Application_End event (ASP.NET), there is no way you can guarantee these are called. The process can be terminated, or the computer can lose power. It seems more reasonable to delete the database on application startup, or at least to have a fallback cleanup mechanism at startup.
While the finalizer would be a good place to put cleanup when dealing with resources, in your case we're talking about an application resource. What I mean by this is that the resource isn't bound to a single object (your singleton) but is part of the whole application. This can get a bit abstract, and it's perhaps more a matter of perspective.
What I'm trying to say is that when you see that database as an application resource, you will have to tie your initialization and cleanup not to an object, but to the application. In an ASP.NET application this would be Application_Start and Application_End (global.asax). In a Windows Forms application, this would be Program.Main.
Still, when using these mechanisms over finalizers, you have no certainty that your clean up code will execute.
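For a Windows Forms application, a rough sketch of tying the cleanup to the application rather than to the singleton might look like the following; DeleteTempDatabase, its path and the Form passed to Application.Run are placeholders for whatever your real application uses:
using System;
using System.IO;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Clean up anything a previous run left behind: a crash or power loss
        // can skip the shutdown path entirely.
        DeleteTempDatabase();

        try
        {
            Application.EnableVisualStyles();
            Application.Run(new Form());   // stand-in for the real main form
        }
        finally
        {
            // Best-effort cleanup on normal shutdown.
            DeleteTempDatabase();
        }
    }

    // Hypothetical helper; the real work depends on the kind of database used.
    static void DeleteTempDatabase()
    {
        string path = Path.Combine(Path.GetTempPath(), "app.db");
        if (File.Exists(path))
            File.Delete(path);
    }
}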


Should Dispose() ever create new instances of objects?

Using C#.NET 4.0
My company's application makes use of a resource locker to keep records from being edited simultaneously. We use the database to store the start time of a lock as well as the user who acquired the lock. This has led to the following (strange?) implementation of dispose on the resource locker, which happens to be called from a destructor:
protected virtual void Dispose(bool disposing)
{
    lock (this)
    {
        if (lockid.HasValue)
        {
            this.RefreshDataButtonAction = null;
            this.ReadOnlyButtonAction = null;
            try
            {
                // --- the section in question ---
                Dictionary<string, object> parameters = new Dictionary<string, object>();
                parameters.Add("#lockID", lockid.Value);
                parameters.Add("#readsToDelete", null);
                Object returnObject = dbio2.ExecuteScalar("usp_DeleteResourceLockReads", parameters);
                // --- end of the section in question ---
                lockid = null;
            }
            catch (Exception ex)
            {
                Logger.WriteError("ResourceLockingController", "DeleteResourceLocks", ex);
            }
            finally
            {
                ((IDisposable)_staleResourcesForm).Dispose();
                _staleResourcesForm = null;
            }
        }
    }
}
I am concerned about the marked section because we have been logging strange "Handle is not initialized" exceptions from the database call. I read elsewhere that it is not safe to create new objects during Finalize(), but does the same rule apply to Dispose()? Are there any possible side effects that accompany creating new objects during Dispose()?
which happens to be called from a destructor
That's the real problem. You cannot assume that the dbio2 object hasn't been finalized itself. Finalization order is not deterministic in .NET. The outcome would look much like you describe: an internal handle used by the database provider will already have been released, so a "Handle is not initialized" exception is to be expected. Or the dbio2 object was simply already disposed.
This is especially likely to go wrong at program exit. You'll then also run into the 2-second timeout for the finalizer thread; a database operation can easily take longer than that.
You simply cannot rely on a finalizer to do this for you. You must check the disposing argument and not call the dbio2.ExecuteScalar() method when it is false. Which probably ends the usefulness of the destructor as well.
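Concretely, that means something along these lines; this is a sketch of the standard pattern with a hypothetical ReleaseLockInDatabase helper, not the poster's actual class:
using System;
using System.Windows.Forms;

public class ResourceLockingController : IDisposable
{
    private Form _staleResourcesForm;
    private int? lockid;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    ~ResourceLockingController()
    {
        Dispose(false);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Reached only via an explicit Dispose() call, so other managed
            // objects (the dbio2 wrapper, the form) are still safe to use.
            if (lockid.HasValue)
            {
                ReleaseLockInDatabase(lockid.Value);   // hypothetical helper
                lockid = null;
            }
            if (_staleResourcesForm != null)
            {
                _staleResourcesForm.Dispose();
                _staleResourcesForm = null;
            }
        }
        // When called from the finalizer (disposing == false), do nothing:
        // dbio2 and the form may already have been finalized themselves.
    }

    // Placeholder for the stored-procedure call in the original code.
    private void ReleaseLockInDatabase(int id)
    {
    }
}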
Dispose is just a method, like any other method. There are some conventions about things that it should/shouldn't do, but there's nothing from the system's perspective that is wrong with creating objects in a Dispose call.
Making a DB call is a bit concerning to me personally; I wouldn't expect such an expensive and error-prone activity to be performed in a Dispose method, but that's more of a convention/expectation. The system won't have a problem with it.
Yes, but I would not do so unless the object created is within the local scope of the method. IDisposable is an advertisement that this class has some resource (often an unmanaged resource) that should be freed when the object is no longer used. If your Dispose is being called by your finalizer (i.e. you are not calling it directly, but waiting for the GC to do it), that can be an indication that you should be calling it earlier. You never know when the C# destructor will run, so you may be unnecessarily tying up that resource. It could also be an indication that your class doesn't need to implement IDisposable.
In your case you are using the dbio2 object, which I assume represents your DB connection. However, since this is called from the destructor, how do you know the connection is still valid? Your destructor could run an hour after the connection has been lost. You should try to ensure that Dispose is called while you know the dbio2 object is still in scope.

Form.ShowDialog() and dispose

If I have a method like this:
public void Show()
{
    Form1 f = new Form1();
    f.ShowDialog();
}
Do I still need to call Dispose on the form even though it will go out of scope and become eligible for garbage collection?
From some testing, when calling this Show() multiple times, at some point it seems like the GC collects the forms, since I can see the memory spike and then drop at some point.
From MSDN it seems to say you MUST call dispose when the form is not needed anymore.
What tends to happen is that if the object holds purely managed resources, calling Dispose is not strictly required, but it is still strongly advised because it makes disposal deterministic. It isn't always required (in a technical sense) because those managed resources would likely themselves soon be eligible for GC, or there is actually nothing to dispose by default and Dispose is just an extensibility point.
For unmanaged resources, the Dispose Pattern advises implementing a finalizer, which will be called on GC. If types do not implement the finalizer and dispose is not called, then it is possible (well, very likely) that resources would be left unhandled. Finalizers are the last chance offered by the runtime for clearing your stuff up - they are also time-limited.
Note that it does not make GC or managed memory reclamation deterministic; disposal is not delete from C++. A disposed item could be a long way away from actually being collected. However, in the managed world, you don't care about deterministic collection, only resource management - in other words, disposal.
That said, I always make sure I call Dispose or use a using statement if a type is disposable, regardless of whether it uses managed or unmanaged resources - it is the expected convention:
public void Show()
{
    using (var f = new Form1())
    {
        f.ShowDialog();
    } // Disposal, even on exceptions or nested return statements, occurs here.
}
Update:
After a discussion with Servy I feel I have to express this point as the reasoning behind my advice of disposing where possible. In the case of MemoryStream, it is clearly a disposable type, but actually does not dispose of anything currently.
Relying on this, however, is to rely on the implementation of MemoryStream. Were this to change to include an unmanaged resource, this would then mean that a reliance on MemoryStream not having anything to dispose becomes problematic.
Where possible (as is the case with IDisposable) I prefer to rely on the public contract. Working against the contract in this instance would mean I am safe from underlying implementation changes.
Although you rarely have to manually dispose in C# imo, you could try it like this:
public void Show()
{
    using (Form1 f = new Form1())
    {
        f.ShowDialog();
    }
}
Then at the closing brace of the using block it will get disposed of automatically.
ShowDialog has the side effect of keeping the form's GDI objects alive: a form shown with ShowDialog is not disposed automatically when it is closed, so to avoid a GDI leak you need to dispose it yourself. The Show method does not have this implication; a modeless form is disposed when it is closed. It is recommended to dispose a form shown with ShowDialog and not rely on the garbage collector.
You could simply do:
using (var f = new Form1())
    f.ShowDialog();
If you want to explicitly dispose, use
using (Form1 f = new Form1())
{
    f.ShowDialog();
}
This ensures Dispose() is called, and that it happens as soon as the block is exited.
In your specific example, no, it's unlikely that it would be particularly useful. Forms do not hold onto a significant amount of resources, so if it takes a little bit longer for some portion of its code to get cleaned up it isn't going to cause a problem. If that form just happens to be holding onto a control that is used to, say, play a video, then maybe it actually is holding onto a significant number of resources, and if you do release those resources in the Dispose method then it's worth taking the time to call Dispose. For 99% of your forms, though, their Dispose method will be empty, and whether you call it or not is unlikely to have any (or any noticeable) effect on your program.
The reason that it's there is primarily to enable the ability to dispose of resources in those 1% of cases where it's important.
It's also worth noting that when a Form is closed its Dispose method is already called for you. You would only ever need to add a using or explicit Dispose call if you want to dispose of a form's resources before that form is closed (which sounds like a generally bad idea to me). This is easy enough to test: create a project with two forms, have the second form attach an event handler to its Disposed event and show a message box there. When you create an instance of that form, show it and then close it, you'll see the message box pop up right away, even if you keep the Form instance around and without ever adding a using or Dispose call.
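A quick sketch of that test, with placeholder class names:
using System;
using System.Windows.Forms;

public class ChildForm : Form
{
    public ChildForm()
    {
        Text = "Child";
        // Disposed fires when the form's Dispose method runs.
        Disposed += (s, e) => MessageBox.Show("ChildForm was disposed");
    }
}

public class MainForm : Form
{
    public MainForm()
    {
        var open = new Button { Text = "Open child" };
        open.Click += (s, e) =>
        {
            var child = new ChildForm();
            child.Show();   // close the child and watch the message box appear
        };
        Controls.Add(open);
    }
}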
Yes, you need to and MUST call Dispose or use a using statement; otherwise it can happen that the instance remains in memory.
You can check this by putting a Timer on the form and setting a breakpoint in the Timer.Tick handler. Even after closing the form without Dispose, the handler will still be called.

For Microsoft built classes that inherit IDisposable, do I explicitly have to call Dispose?

Regarding the Microsoft built classes that inherit IDisposable, do I explicitly have to call Dispose to prevent memory leaks?
I understand that it is best practice to call Dispose (or better yet use a using block), however when programming, typically I don't always immediately realise that a class inherits from IDisposable.
I also understand that Microsoft implementation of IDisposable is a bit borked, which is why they created the article explaining the correct usage of IDisposable.
Long story short, in which instances is it okay to forget to call Dispose?
There are a couple of issues in the primary question
Do I explicitly have to call Dispose to prevent memory leaks?
Calling Dispose on any type which implements IDisposable is highly recommended and may even be a fundamental part of the type's contract. There is almost no good reason not to call Dispose when you are done with the object. An IDisposable object is meant to be disposed.
But will failing to call Dispose create a memory leak? Possibly. It depends very much on what exactly that object does in its Dispose method. Many free memory, some unhook from events, others free handles, etc. It may not leak memory, but it will almost certainly have a negative effect on your program.
In which instances is it okay to forget to call Dispose?
I'd start with none. The vast majority of objects out there implement IDisposable for good reason. Failing to call Dispose will hurt your program.
It depends on two things:
What happens in the Dispose method
Does the finalizer call Dispose
Dispose functionality
Dispose can do several types of actions, like closing a handle to a resource (such as a file stream), changing the class's state, and releasing other components the class itself uses.
In the case of a resource being released (like a file), there is a functional difference between calling it explicitly and waiting for it to be called during garbage collection (assuming the finalizer calls Dispose).
In the case where there is no state change and only components are released, there will be no memory leak, since the objects will be freed by the GC later.
Finalizer
In most cases, disposable types call the Dispose method from the finalizer. If this is the case, and assuming the context in which Dispose is called doesn't matter, then there's a high chance that you'll notice no difference if the object is not disposed explicitly. But if Dispose is not called from the finalizer, then your code will behave differently.
Bottom line - in most cases, it's better to dispose the object explicitly when you're done with it.
A simple example of where it's better to call Dispose explicitly: assume you're using a FileStream to write some content with sharing disabled; the file then stays locked by the process until the GC gets around to the object. The stream may also not have flushed all of its content to the file, so if the process crashes at some point after the write was over, it is not guaranteed that the data was actually saved.
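For example, a small sketch of the FileStream case described above (path and content are arbitrary):
using System.IO;
using System.Text;

class LogWriter
{
    static void WriteLog(string path)
    {
        // FileShare.None keeps the file locked for as long as the stream is
        // open; without a deterministic Dispose that could be until the GC
        // eventually finalizes the stream.
        using (var stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None))
        using (var writer = new StreamWriter(stream, Encoding.UTF8))
        {
            writer.WriteLine("some content");
        } // Dispose flushes the buffer and releases the lock right here.
    }
}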
It can be safe to not call Dispose, but the problem is knowing when this is the case.
A good 95% of IEnumerator<T> implementations have a Dispose that's safe to ignore, but the 5% is not just 5% that'll cause a bug, but 5% that'll cause a nasty hard to trace bug. More to the point, code that gets passed an IEnumerator<T> will see both the 95% and the 5% and won't be able to dynamically tell them apart (it's possible to implement the non-generic IEnumerable without implementing IDisposable, and how well that turned out can be guessed at by MS deciding to make IEnumerator<T> inherit from IDisposable!).
Of the rest, maybe there's 3 or 4% of the time it's safe. For now. You don't know which 3% without looking at the code, and even then the contract says you have to call it, so the developer can depend on you doing so if they release a new version where it is important.
In summary, always call Dispose(). (I can think of an exception, but it's frankly too weird to even go into the details of, and it's still safe to call it in that case, just not vital).
On the question of implementing IDisposable yourself, avoid the pattern in that accursed document.
I consider that pattern an anti-pattern. It is a good pattern for implementing both IDisposable.Dispose and a finaliser in a class that holds both managed and unmanaged resources. However holding both managed IDisposable and unmanaged resources is a bad idea in the first place.
Instead:
If you have an unmanaged resource, then don't hold any managed resources that implement IDisposable in the same class. Now the Dispose(true) and Dispose(false) code paths are the same, so really they can become:
public class HasUnmanaged : IDisposable
{
    IntPtr unmanagedGoo;

    private void CleanUp()
    {
        if (unmanagedGoo != IntPtr.Zero)
        {
            SomeReleasingMethod(unmanagedGoo);
            unmanagedGoo = IntPtr.Zero;
        }
    }

    public void Dispose()
    {
        CleanUp();
        GC.SuppressFinalize(this);
    }

    ~HasUnmanaged()
    {
        CleanUp();
    }
}
If you have managed resources that need to be disposed, then just do that:
public class HasManaged : IDisposable
{
    IDisposable managedGoo;

    public void Dispose()
    {
        if (managedGoo != null)
            managedGoo.Dispose();
    }
}
There, no cryptic "disposing" bool (how can something be called Dispose and take false for something called disposing?) No worrying about finalisers for the 99.99% of the time you won't need them (the second pattern is way more common than the first). All good.
Really need something that has both a managed and an unmanaged resource? No, you don't really, wrap the unmanaged resource in a class of your own that works as a handle to it, and then that handle fits the first pattern above and the main class fits the second.
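A rough sketch of that split, reusing the same placeholder SomeReleasingMethod from above (the framework's SafeHandle types exist for essentially this purpose):
using System;

// First pattern: owns only the unmanaged resource, so it gets a finalizer.
public sealed class GooHandle : IDisposable
{
    private IntPtr unmanagedGoo;

    public GooHandle(IntPtr goo) { unmanagedGoo = goo; }

    private void CleanUp()
    {
        if (unmanagedGoo != IntPtr.Zero)
        {
            SomeReleasingMethod(unmanagedGoo);
            unmanagedGoo = IntPtr.Zero;
        }
    }

    public void Dispose()
    {
        CleanUp();
        GC.SuppressFinalize(this);
    }

    ~GooHandle() { CleanUp(); }

    // Placeholder for the real release call (usually a P/Invoke).
    private static void SomeReleasingMethod(IntPtr goo) { }
}

// Second pattern: the main class owns only managed disposables, no finalizer.
public class UsesGoo : IDisposable
{
    private GooHandle handle;

    public UsesGoo(IntPtr goo) { handle = new GooHandle(goo); }

    public void Dispose()
    {
        if (handle != null)
        {
            handle.Dispose();
            handle = null;
        }
    }
}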
Only implement the full CA1063 pattern when you're forced to because you inherited from a class that did so. Thankfully, most people aren't creating new ones like that any more.
It is never OK to forget to call Dispose (or, as you say, better yet use using).
I guess if the goal of your program is to cause unmanaged resource leaks, then maybe it would be OK.
The implementation of IDisposable indicates that a class uses unmanaged resources. You should always call Dispose() (or use a using block when possible) when you're sure you're done with the class. Otherwise you are unnecessarily keeping unmanaged resources allocated.
In other words, never forget to call Dispose().
Yes, always call dispose. Either explicitly or implicitly (via using). Take, for example, the Timer class. If you do not explicitly stop a timer, and do not dispose it, then it will keep firing until the garbage collector gets around to collecting it. This could actually cause crashes or unexpected behavior.
It's always best to make sure Dispose is called as soon as you are done with it.
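For example, a small sketch using System.Timers.Timer (the handler is just illustrative):
using System;
using System.Timers;

class TimerDemo
{
    static void Main()
    {
        using (var timer = new Timer(1000))   // fire every second
        {
            timer.Elapsed += (s, e) => Console.WriteLine("tick");
            timer.Start();
            Console.ReadLine();               // let it tick until Enter is pressed
        } // Dispose stops the callbacks deterministically; without it the timer
          // would keep firing until the GC got around to finalizing it.
    }
}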
Microsoft (probably not officially) says it is ok to not call Dispose in some cases.
Stephen Toub from Microsoft writes (about calling Dispose on Task):
In short, as is typically the case in .NET, dispose aggressively if it's easy and correct to do based on the structure of your code. If you start having to do strange gyrations in order to Dispose (or in the case of Tasks, use additional synchronization to ensure it's safe to dispose, since Dispose may only be used once a task has completed), it's likely better to rely on finalization to take care of things. In the end, it's best to measure, measure, measure to see if you actually have a problem before you go out of your way to make the code less sightly in order to implement clean-up functionality.
[emphasis mine]
Another case is wrapped base streams:
var inner = new FileStream(...);
var outer = new StreamReader(inner, Encoding.GetEncoding(1252));
...
outer.Dispose();
inner.Dispose(); // this will trigger an FxCop performance warning about calling Dispose twice
(I have turned off this rule.)
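If you are on .NET 4.5 or later, one way around that particular warning is the StreamReader overload with a leaveOpen flag, so that each stream is disposed exactly once; a sketch:
using System.IO;
using System.Text;

class ReaderExample
{
    static string ReadAll(string path)
    {
        using (var inner = new FileStream(path, FileMode.Open, FileAccess.Read))
        // true = detect BOM, 1024 = buffer size, true = leaveOpen
        using (var outer = new StreamReader(inner, Encoding.GetEncoding(1252), true, 1024, true))
        {
            return outer.ReadToEnd();
        } // outer no longer closes inner, so each stream is disposed exactly once
    }
}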

Should a .Net/C# object call Dispose() on itself?

Below is some sample code written by a colleague. This seems obviously wrong to me but I wanted to check. Should an object call its own Dispose() method from within one of its own methods? It seems to me that only the owner/creator of the object should call Dispose() when it's done with the object and not the object itself.
It's an .asmx web method that calls Dispose() on itself when it's done. (The fact that it's a web method is probably incidental to the question in general.) In our code base we sometimes instantiate web service classes within methods of other web services and then call methods on them. If my code does that to call this method, the object is toast when the method returns and I can't really use the object any more.
[WebMethod]
public string MyWebMethod()
{
    try
    {
        return doSomething();
    }
    catch (Exception exception)
    {
        return string.Empty;
    }
    finally
    {
        Dispose(true);
    }
}
UPDATE:
Found a few links that are related:
Do I need to dispose a web service reference in ASP.NET?
Dispose a Web Service Proxy class?
For sure it's not a good practice. The caller should decide when it is finished using the IDisposable object, not the object itself.
There are very few valid reasons to perform a "self-disposing" action.
Threading is the one I use more often than not.
I have several applications that "fire and forget" threads.
Using this methodology allows the object to dispose of itself.
This helps keep the environment clean without a thread-manager process.
If I ever see that in one of my projects, I would ask why, and I'm 99.9999% sure that I would remove it anyway.
For me this is a kind of red flag / code smell.
There are no technical restrictions on what a Dispose method is allowed to do. The only thing special about it is that Dispose gets called in certain constructs (foreach, using). Because of that, Dispose might reasonably be used to flag an object as no-longer-useable, especially if the call is idempotent.
I would not use it for this purpose however, because of the accepted semantics of Dispose. If I wanted to mark an object as no-longer-useable from within the class itself, then I would create a MarkUnuseable() method that could be called by Dispose or any other place.
By restricting the calls to Dispose to the commonly accepted patterns, you buy the ability to make changes to the Dispose methods in all of your classes with confidence that you will not unexpectedly break any code that deviates from the common pattern.
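A small sketch of that idea (the class, MarkUnuseable and the guard are of course hypothetical):
using System;

public class Connection : IDisposable
{
    private bool unusable;

    public void Send(string message)
    {
        if (unusable)
            throw new InvalidOperationException("This connection can no longer be used.");
        // ... actual work ...
    }

    // Callable from Dispose or from any other place inside the class
    // that decides the object should not be used any further.
    protected void MarkUnuseable()
    {
        unusable = true;
    }

    public void Dispose()
    {
        MarkUnuseable();
        // ... release real resources here ...
    }
}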
Just remove it, but take care to dispose it in all objects that call it.
Technically yes, if that "method" is the finaliser and you are implementing the Finalise and IDisposable pattern as specified by Microsoft.
While a .Net object would not normally call Dispose on itself, there are times when code running within an object may be the last thing that expects to be using it. As a simple example, if a Dispose method can handle the cleanup of a partially-constructed object, it may be useful to have a constructor coded something like the following:
Sub New()
    Dim OK As Boolean = False
    Try
        '... do stuff ...
        OK = True
    Finally
        If Not OK Then Me.Dispose()
    End Try
End Sub
If the constructor is going to throw an exception without returning, then the partially-constructed object, which is slated for abandonment, will be the only thing that will ever have the information and impetus to do the necessary cleanup. If it doesn't take care of ensuring a timely Dispose, nothing else will.
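A rough C# equivalent of that constructor pattern, for comparison:
using System;

public class PartiallyConstructed : IDisposable
{
    public PartiallyConstructed()
    {
        bool ok = false;
        try
        {
            // ... acquire resources, do work that may throw ...
            ok = true;
        }
        finally
        {
            // If we are about to throw out of the constructor, nothing outside
            // will ever hold a reference to this half-built object, so it has
            // to clean up after itself.
            if (!ok)
                Dispose();
        }
    }

    public void Dispose()
    {
        // release whatever was acquired so far
    }
}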
With regard to your particular piece of code, the pattern is somewhat unusual, but it looks somewhat like the way a socket can get passed from one thread to another. There's a call which returns an array of bytes and invalidates a Socket; that array of bytes can be used in another thread to create a new Socket instance which takes over the communications stream established by the other Socket. Note that the data regarding the open socket is effectively an unmanaged resource, but it can't very well be wrapped in an object with a finalizer because it's often going to be handed off to something the garbage-collector can't see.
No! It is unexpected behavior and breaks the best-practice guidelines. Never do anything that is unexpected. Your object should only do what is needed to maintain its state while protecting the integrity of the object for the caller. The caller will decide when it's done with it (or the GC will, if nothing else).

How do I track down where I've been leaking IDisposable objects from?

I've been debugging some code recently that was a bit memory leaky. It's a long running program that runs as a Windows service.
If you find a class wearing an IDisposable interface, it is telling you that some of the resources it uses are outside the abilities of the garbage collector to clean up for you.
The reason it is telling you this is that you, the user of this object, are now responsible for when these resources are cleaned up. Congratulations!
As a conscientious developer, you are nudged towards calling the .Dispose() method when you've finished with the object in order to release those unmanaged resources.
There is the nice using() pattern to help clean up these resources once they are finished with. Which just leaves finding which exact objects are causing the leakiness.
In order to aid tracking down these rogue unmanaged resources, is there any way to query what objects are loitering around waiting to be Disposed at any given point in time?
There shouldn't be any cases where you don't want to call Dispose, but the compiler cannot tell you where you should call dispose.
Suppose you write a factory class which creates and returns disposable objects. Should the compiler bug you for not calling Dispose when the cleanup should be the responsibility of your callers?
IDisposable is more for making use of the using keyword. It's not there to force you to call Dispose() - it's there to enable you to call it in a slick, non-obtrusive way:
class A : IDisposable
{
    public void Dispose() { /* release resources here */ }
    public void method1() { }
}
// ... stuff ...
using (var a = new A())
{
    a.method1();
}
After you leave the using block, Dispose() is called for you.
"Is there any way to detect at the end of the program which objects are loitering around waiting to be Disposed?"
Well, if all goes well, at the end of the program the CLR will call all objects' finalizers, which, if the IDisposable pattern was implemented properly, will call the Dispose() methods. So at the end, everything will be cleaned up properly.
The problem is that if you have a long-running program, chances are some of your IDisposable instances are locking resources that shouldn't be locked. For cases like this, user code should use a using block or call Dispose() as soon as it is done with an object, but there's really no way for anyone except the code's author to know that.
You are not required to call the Dispose method. Implementing the IDisposable interface is a reminder that your class probably is using resources such as a database connection, a file handle, that need to be closed, so GC is not enough.
The best practice AFAIK is to call Dispose or even better, put the object in a using statement.
A good example is the .NET 2.0 Ping class, which runs asynchronously. Unless it throws an exception, you don't actually call Dispose until the callback method. Note that this example has some slightly weird casting due to the way Ping implements the IDisposable interface, but also inherits Dispose() (and only the former works as intended).
private void Refresh(Object sender, EventArgs args)
{
    Ping ping = null;
    try
    {
        ping = new Ping();
        ping.PingCompleted += PingComplete;
        ping.SendAsync(defaultHost, null);
    }
    catch (Exception)
    {
        if (ping != null)
        {
            ((IDisposable)ping).Dispose();
        }
        this.isAlive = false;
    }
}

private void PingComplete(Object sender, PingCompletedEventArgs args)
{
    this.isAlive = (args.Error == null && args.Reply.Status == IPStatus.Success);
    ((IDisposable)sender).Dispose();
}
Can I ask how you're certain that it's specifically objects which implement IDisposable? In my experience the most-likely zombie objects are objects which have not properly had all their event handlers removed (thereby leaving a reference to them from another 'live' object and not qualifying them as unreachable during garbage collection).
There are tools which can help track these down by taking a snapshot of the managed heap and stacks and allowing you to see what objects are considered in-use at a given point in time. A freebie is WinDbg with sos.dll; it'll take some googling for tutorials to show you the commands you need, but it works and it's free. A more user-friendly (don't confuse that with "simple") option is Red Gate's ANTS Profiler running in Memory Profiling mode; it's a slick tool.
Edit: Regarding the usefulness of calling Dispose: it provides a deterministic way to clean up objects. Garbage collection only runs when your app has run out of its allocated memory; it's an expensive task which basically stops your application from executing, looks at all objects in existence, builds a tree of "reachable" (in-use) objects, and then cleans up the unreachable ones. Manually cleaning up an object frees its resources before the GC ever has to run.
Because the method creating the disposable object may legitimately be returning it as a value; that is, the compiler can't tell how the programmer intends to use it.
What if the disposable object is created in one class/module (say a factory) and is handed off to a different class/module to be used for a while before being disposed of? That use case should be OK, and the compiler shouldn't badger you about it. I suspect that's why there's no compile-time warning: the compiler assumes the Dispose call is in another file.
Determining when and where to call Dispose() is a very subjective thing, dependent on the nature of the program and how it uses disposable objects. Subjective problems are not something compilers are very good at. Instead, this is more a job for static analysis, which is the arena of tools like FxCop and StyleCop, or perhaps more advanced compilers like Spec#/Sing#. Static analysis uses rules to determine if subjective requirements, such as "Always ensure .Dispose() is called at some point.", are met.
I am honestly not sure if any static analyzers exist that are capable of checking whether .Dispose() is called. Even for static analysis as it exists today, that might be a bit on the too-subjective side of things. If you need a place to start looking, however, "Static Analysis for C#" is probably the best place.
