Autofac uses lifetime scopes as a way of disposing of all of the components created during a unit of work. While this is a powerful feature, it is easy to write code that doesn't dispose the lifetime scopes properly, which results in the number of tracked disposables growing over time: effectively a memory leak.
Is there a way of monitoring the number of disposable objects being tracked by a lifetime scope at any point in time? I'm interested in writing a tool to help me find issues related to not properly assigning disposables to units of work. At the moment I use a memory profiler to find the leaks, but this is pretty onerous work.
I've looked at the public interface of ILifetimeScope but do not see anything that is of use.
Unfortunately, analytics is one of Autofac's weaker spots at the moment. There is a repository where some work was started on an analytics package but there are definitely some gaps - for example, you can track when an object is activated and you can see when lifetime scopes are disposed, but you can't track when individual objects are disposed as part of a scope. There's just no event for it.
A very simple tracking module for activations would look like this:
using System;
using System.Collections.Generic;
using System.Linq;
using Autofac;
using Autofac.Core;

namespace DiagnosticDemo
{
    public class TrackingModule : Module
    {
        private readonly IDictionary<Type, int> _activations = new Dictionary<Type, int>();
        private readonly object _syncRoot = new object();

        public void WriteActivations()
        {
            foreach (var pair in this._activations.Where(p => p.Value > 0))
            {
                Console.WriteLine("* {0} = {1}", pair.Key, pair.Value);
            }
        }

        protected override void AttachToComponentRegistration(
            IComponentRegistry componentRegistry,
            IComponentRegistration registration)
        {
            if (registration.Ownership == InstanceOwnership.OwnedByLifetimeScope)
            {
                registration.Activated += this.OnRegistrationActivated;
            }
        }

        private void OnRegistrationActivated(
            object sender,
            ActivatedEventArgs<object> e)
        {
            if (e.Instance is IDisposable)
            {
                var type = e.Instance.GetType();
                Console.WriteLine("Activating {0}", type);
                lock (this._syncRoot)
                {
                    if (this._activations.ContainsKey(type))
                    {
                        this._activations[type]++;
                    }
                    else
                    {
                        this._activations[type] = 1;
                    }
                }
            }
        }
    }
}
You would use it something like this:
static void Main(string[] args)
{
    var trackingModule = new TrackingModule();
    var builder = new ContainerBuilder();

    // Register types, then register the module
    builder.RegisterType<Consumer>().As<IConsumer>();
    builder.RegisterType<DisposableDependency>().As<IDependency>();
    builder.RegisterType<NonDisposableDependency>().As<IDependency>();
    builder.RegisterModule(trackingModule);
    var container = builder.Build();

    // Do whatever it is you want to do...
    using (var scope = container.BeginLifetimeScope())
    {
        scope.Resolve<IConsumer>();
    }

    // Dump data
    Console.WriteLine("Activation totals:");
    trackingModule.WriteActivations();
}
However, this isn't going to tell you which items weren't disposed, which I think is what you really want to know. It may give you some ideas, though, or at least help a bit.
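If what you mainly need to spot is scopes that are begun but never disposed (rather than the individual disposables inside them), one rough option is to hook the lifetime scope events. This is only a sketch, and it assumes the ChildLifetimeScopeBeginning / CurrentScopeEnding events that ILifetimeScope has exposed in the Autofac versions I've worked with; a count that keeps climbing suggests scopes are being created without ever being disposed:

using System.Threading;
using Autofac;

// Hypothetical helper (not part of Autofac): counts scopes that have begun but
// not yet ended. Only scopes begun directly from the scope you attach to are
// counted here; subscribe recursively if you also need nested scopes.
public static class ScopeLeakCounter
{
    private static int _liveScopes;

    public static int LiveScopes => Volatile.Read(ref _liveScopes);

    public static void Attach(ILifetimeScope scope)
    {
        scope.ChildLifetimeScopeBeginning += (sender, beginning) =>
        {
            Interlocked.Increment(ref _liveScopes);
            beginning.LifetimeScope.CurrentScopeEnding +=
                (s, ending) => Interlocked.Decrement(ref _liveScopes);
        };
    }
}

Calling ScopeLeakCounter.Attach(container) after building the container and periodically logging LiveScopes alongside WriteActivations gives you a crude "scopes outstanding" figure.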
If you're interested in helping to improve the analytics in Autofac, we'd love to take PRs or specific design ideas for how to improve.
Whitebox is a working but unfinished GUI diagnostics tool for Autofac that does pretty much what you're looking for. It may need an update for the latest version of Autofac - any help getting the project back up and running would be welcome. There are some docs on the old Google Code page.
Related
I've been having some problems with a web-based .NET (C#) application. I'm using the LazyCache library to cache frequent JSON responses (some around 80+ KB) for users belonging to the same company, across user sessions.
One of the things we need to do is keep track of the cache keys for a particular company, so that when any user in the company makes mutating changes to cached items we can clear those items for that company's users, forcing the cache to be repopulated on the next request.
We chose the LazyCache library because we wanted to do this in memory, without needing an external cache such as Redis, as we don't have heavy usage.
One of the problems with this approach is that we need to keep track of all the cache keys belonging to a particular customer every time we cache something, so that when a company user makes a mutating change to the relevant resource we can expire all the cache keys belonging to that company.
To achieve this we have a global cache which all web controllers have access to.
private readonly IAppCache _cache = new CachingService();

protected IAppCache GetCache()
{
    return _cache;
}
A simplified example (forgive any typos!) of our controllers which use this cache would be something like below
[HttpGet]
[Route("{customerId}/accounts/users")]
public async Task<Users> GetUsers([Required]string customerId)
{
    var usersBusinessLogic = await _provider.GetUsersBusinessLogic(customerId);
    var newCacheKey = "GetUsers." + customerId;
    CacheUtil.StoreCacheKey(customerId, newCacheKey);
    return await GetCache().GetOrAddAsync(newCacheKey, () => usersBusinessLogic.GetUsers(), DateTimeOffset.Now.AddMinutes(10));
}
We use a util class with static methods and a static concurrent dictionary to store the cache keys - each company (GUID) can have many cache keys.
private static readonly ConcurrentDictionary<Guid, ConcurrentHashSet<string>> cacheKeys =
    new ConcurrentDictionary<Guid, ConcurrentHashSet<string>>();

public static void StoreCacheKey(Guid customerId, string newCacheKey)
{
    cacheKeys.AddOrUpdate(customerId, new ConcurrentHashSet<string>() { newCacheKey }, (key, existingCacheKeys) =>
    {
        existingCacheKeys.Add(newCacheKey);
        return existingCacheKeys;
    });
}
Within that same util class, when we need to remove all cache keys for a particular company we have a method similar to the one below (which is called when mutating changes are made in other controllers):
public static void ClearCustomerCache(IAppCache cache, Guid customerId)
{
    if (!cacheKeys.TryGetValue(customerId, out var customerCacheKeys))
    {
        return;
    }

    foreach (var cacheKey in customerCacheKeys)
    {
        cache.Remove(cacheKey);
    }

    cacheKeys.TryRemove(customerId, out _);
}
We have recently been seeing performance problems: our web request response times slow down significantly over time, while we don't see a significant change in the number of requests per second.
Looking at the garbage collection metrics, we notice a large Gen 2 heap size and a large object heap size that both seem to keep growing - we don't see the memory being reclaimed.
We are still in the middle of debugging this, but I'm wondering whether the approach described above could lead to the problems we are seeing. We want thread safety, but could there be an issue with the concurrent dictionary above, such that even after we remove items the memory is not freed, leading to excessive Gen 2 collections?
We are also using workstation garbage collection mode. I imagine switching to server mode GC will help (our IIS server has 8 processors and 16 GB of RAM), but I'm not sure switching will fix all the problems.
You may want to take advantage of the ExpirationTokens property of the MemoryCacheEntryOptions class. You can also use it from the ICacheEntry argument passed in the delegate of the LazyCache.Providers.MemoryCacheProvider.GetOrCreateAsync method. For example:
Task<T> GetOrAddAsync<T>(string key, Func<Task<T>> factory,
    int durationMilliseconds = Timeout.Infinite, string customerId = null)
{
    return GetMemoryCacheProvider().GetOrCreateAsync<T>(key, (options) =>
    {
        if (durationMilliseconds != Timeout.Infinite)
        {
            options.SetSlidingExpiration(TimeSpan.FromMilliseconds(durationMilliseconds));
        }
        if (customerId != null)
        {
            options.ExpirationTokens.Add(GetCustomerExpirationToken(customerId));
        }
        return factory();
    });
}
Now the GetCustomerExpirationToken should return an object implementing the IChangeToken interface. Things are becoming a bit complex, but bear with me for a minute. The .NET platform doesn't provide a built-in IChangeToken implementation suitable for this case, since it is mainly focused on file system watchers. Implementing one is not difficult though:
class ChangeToken : IChangeToken, IDisposable
{
    private volatile bool _hasChanged;
    private readonly ConcurrentQueue<(Action<object>, object)>
        registeredCallbacks = new ConcurrentQueue<(Action<object>, object)>();

    public void SignalChanged()
    {
        _hasChanged = true;
        while (registeredCallbacks.TryDequeue(out var entry))
        {
            var (callback, state) = entry;
            callback?.Invoke(state);
        }
    }

    bool IChangeToken.HasChanged => _hasChanged;

    bool IChangeToken.ActiveChangeCallbacks => true;

    IDisposable IChangeToken.RegisterChangeCallback(Action<object> callback,
        object state)
    {
        registeredCallbacks.Enqueue((callback, state));
        return this; // returning null doesn't work
    }

    void IDisposable.Dispose() { } // It is called by the framework after each callback
}
This is a general implementation of the IChangeToken interface, that is activated manually with the SignalChanged method. The signal will be propagated to the underlying MemoryCache object, which will subsequently invalidate all entries associated with this token.
Now what is left to do is to associate these tokens with a customer, and store them somewhere. I think that a ConcurrentDictionary should be quite adequate:
private static readonly ConcurrentDictionary<string, ChangeToken>
    CustomerChangeTokens = new ConcurrentDictionary<string, ChangeToken>();

private static ChangeToken GetCustomerExpirationToken(string customerId)
{
    return CustomerChangeTokens.GetOrAdd(customerId, _ => new ChangeToken());
}
Finally the method that is needed to signal that all entries of a specific customer should be invalidated:
public static void SignalCustomerChanged(string customerId)
{
    if (CustomerChangeTokens.TryRemove(customerId, out var changeToken))
    {
        changeToken.SignalChanged();
    }
}
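To tie the pieces together, here is a usage sketch. GetUsersCached, UpdateUsers, IUsersBusinessLogic and Users are hypothetical stand-ins for the question's real types; the relevant parts are the GetOrAddAsync wrapper (which tags the entry with the customer's token) and SignalCustomerChanged (which evicts everything tagged with it):

public async Task<Users> GetUsersCached(string customerId, IUsersBusinessLogic logic)
{
    return await GetOrAddAsync(
        "GetUsers." + customerId,
        () => logic.GetUsers(),
        durationMilliseconds: (int)TimeSpan.FromMinutes(10).TotalMilliseconds,
        customerId: customerId);
}

public async Task UpdateUsers(string customerId, Users users, IUsersBusinessLogic logic)
{
    await logic.UpdateUsers(users);     // hypothetical mutating call
    SignalCustomerChanged(customerId);  // evicts every entry tagged with this customer's token
}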
Large objects (> 85 KB) are allocated on the gen 2 Large Object Heap (LOH), and they stay where they are allocated:
- The GC scans the LOH and marks dead objects
- Adjacent dead objects are combined into free memory
- The LOH is not compacted
- Further allocations only try to fill in the holes left by dead objects
Because there is no compaction, only reallocation into those holes, the LOH can become fragmented. Long-running server processes can be done in by this - it is not uncommon.
You are probably seeing fragmentation occur over time.
Server GC just happens to be multi-threaded - I wouldn't expect it to solve fragmentation.
You could try breaking up your large objects - this might not be feasible for your application.
You can try setting LargeObjectHeapCompactionMode after a cache clear - assuming clears are infrequent:
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();
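For instance, since the question already funnels company-wide evictions through ClearCustomerCache, one low-risk place to request the compaction is right after such a clear. A sketch, assuming the ClearCustomerCache method and CacheUtil class from the question, and that these clears really are infrequent:

using System;
using System.Runtime;
using LazyCache;

public static class CacheMaintenance
{
    // Clears a company's entries (the question's method) and then asks the GC to
    // compact the LOH on its next blocking gen 2 collection, so the holes left by
    // the evicted 80+ KB JSON blobs don't accumulate as fragmentation.
    public static void ClearCustomerCacheAndCompact(IAppCache cache, Guid customerId)
    {
        CacheUtil.ClearCustomerCache(cache, customerId);

        GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();
    }
}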
Ultimately, I'd suggest profiling the heap to find out what works.
In a previous question about how I visualize the graph of my dependencies, I got the foundation for the code I now use to visualize my dependency graph as it is resolved by Autofac.
Running the code I get a tree like the following:
Usd.EA.Bogfoering.WebApi.Controllers.BogfoerController (3851,7 ms. / 0,0 ms.) Depth: 0
  Usd.EA.Bogfoering.WebApi.Controllers.BogfoerController (3851,7 ms. / 0,4 ms.) Depth: 1
    Usd.Utilities.WebApi.Controllers.UnikOwinContext (0,1 ms. / 0,0 ms.) Depth: 2
      Usd.Utilities.WebApi.Controllers.UnikOwinContext (0,1 ms. / 0,0 ms.) Depth: 3
At first I thought there was a problem with the code, and that it somehow caused the components to be resolved multiple times. As Steven points out, this can happen when a component is registered as InstancePerDependency. But since several of my components are registered as InstancePerLifetimeScope or SingleInstance dependencies, those dependencies shouldn't be resolved twice in the graph.
Steven does mention that "the first resolve of the InstancePerDependency dependency seems to have more dependencies than the next resolve, because this graph only shows resolves. Perhaps this is what's going on." But as I'm seeing InstancePerLifetimeScope components appear multiple times, in several places throughout the graph, I have the feeling that something else is going on here.
What could be going on here?
How the dependencies are registered
The following code is the one we use to register our assemblies:
public static void RegisterAssemblies(this ContainerBuilder containerBuilder, IList<Assembly> assemblies, params Type[] typesToExclude)
{
    if (containerBuilder != null && assemblies.Any())
    {
        var allTypes = assemblies.SelectMany(assembly => assembly.GetTypes()).Where(t => !typesToExclude.Any(t2 => t2.IsAssignableFrom(t))).ToList();

        RegisterAllClassesWithoutAttribute(containerBuilder, allTypes);
        RegisterClassesThatAreSingleton(containerBuilder, allTypes);
        RegisterClassesThatAreInstancePerLifetimeScope(containerBuilder, allTypes);
        RegisterGenericInterfaces(containerBuilder, allTypes);
        RegisterRealOrTestImplementations(containerBuilder, allTypes);
        RegisterAutofacModules(containerBuilder, allTypes);

        containerBuilder.Register(c => UnikCallContextProvider.CurrentContext).As<IUnikCallContext>();
    }
}

private static void RegisterAutofacModules(ContainerBuilder containerBuilder, List<Type> allTypes)
{
    var modules = allTypes.Where(type => typeof(IModule).IsAssignableFrom(type) && type.GetCustomAttribute<DoNotRegisterInIocAttribute>() == null);
    foreach (var module in modules)
    {
        containerBuilder.RegisterModule((IModule) Activator.CreateInstance(module));
    }
}

private static void RegisterRealOrTestImplementations(ContainerBuilder containerBuilder, List<Type> allTypes)
{
    if (StaticConfigurationHelper.UseRealImplementationsInsteadOfTestImplementations)
    {
        var realTypes = allTypes.Where(type => type.GetCustomAttribute<RealImplementationAsInstancePerLifetimeScopeAttribute>() != null).ToArray();
        containerBuilder.RegisterTypes(realTypes).AsImplementedInterfaces()
            .InstancePerLifetimeScope();
    }
    else
    {
        var testTypes = allTypes.Where(type => type.GetCustomAttribute<TestImplementationAsInstancePerLifetimeScopeAttribute>() != null).ToArray();
        containerBuilder.RegisterTypes(testTypes).AsImplementedInterfaces()
            .InstancePerLifetimeScope();
    }
}

private static void RegisterGenericInterfaces(ContainerBuilder containerBuilder, List<Type> allTypes)
{
    var typesAsGenericInterface = allTypes.Where(type => type.GetCustomAttribute<RegisterAsGenericInterfaceAttribute>() != null).ToArray();
    foreach (var type in typesAsGenericInterface)
    {
        var attribute = type.GetCustomAttribute<RegisterAsGenericInterfaceAttribute>();
        containerBuilder.RegisterGeneric(type).As(attribute.Type);
    }
}

private static void RegisterClassesThatAreInstancePerLifetimeScope(ContainerBuilder containerBuilder, List<Type> allTypes)
{
    var typesAsInstancePerDependency = allTypes.Where(type => type.GetCustomAttribute<InstancePerLifetimeScopeAttribute>() != null).ToArray();
    containerBuilder.RegisterTypes(typesAsInstancePerDependency).InstancePerLifetimeScope().AsImplementedInterfaces();
}

private static void RegisterClassesThatAreSingleton(ContainerBuilder containerBuilder, List<Type> allTypes)
{
    var typesAsSingleton = allTypes.Where(type => type.GetCustomAttribute<SingletonAttribute>() != null).ToArray();
    containerBuilder.RegisterTypes(typesAsSingleton).SingleInstance().AsImplementedInterfaces();
}

private static void RegisterAllClassesWithoutAttribute(ContainerBuilder containerBuilder, List<Type> allTypes)
{
    var types = allTypes.Where(type => !typeof(IModule).IsAssignableFrom(type) &&
                                       type.GetCustomAttribute<DoNotRegisterInIocAttribute>() == null &&
                                       type.GetCustomAttribute<SingletonAttribute>() == null &&
                                       type.GetCustomAttribute<RealImplementationAsInstancePerLifetimeScopeAttribute>() == null &&
                                       type.GetCustomAttribute<TestImplementationAsInstancePerLifetimeScopeAttribute>() == null &&
                                       type.GetCustomAttribute<InstancePerLifetimeScopeAttribute>() == null &&
                                       type.GetCustomAttribute<RegisterAsGenericInterfaceAttribute>() == null).ToArray();

    containerBuilder.RegisterTypes(types).AsSelf().AsImplementedInterfaces();
}
The assemblies that are passed to the RegisterAssemblies method could be fetched like this:
private List<Assembly> GetAssemblies()
{
    var assemblies = AssemblyResolveHelper.LoadAssemblies(AppDomain.CurrentDomain.BaseDirectory,
        new Regex(@"Usd.EA.*\.dll"),
        SearchOption.TopDirectoryOnly);
    assemblies.AddRange(AssemblyResolveHelper.LoadAssemblies(AppDomain.CurrentDomain.BaseDirectory,
        new Regex(@"Usd.Utilities.*\.dll"),
        SearchOption.TopDirectoryOnly));
    assemblies.Add(GetType().Assembly);
    return assemblies.Distinct().ToList();
}
The attributes
The attributes used in RegisterAllClassesWithoutAttribute are custom attributes that we manually assign to individual classes:
using System;

[AttributeUsage(AttributeTargets.Class)]
public class DoNotRegisterInIocAttribute : Attribute
{
}
Used like this
[ExcludeFromCodeCoverage]
[DoNotRegisterInIoc]
public sealed class TestClass : ITestClass
When I'm not overriding Autofac's MaxResolveDepth, I get the following error:
Failed An error occurred when trying to create a controller of type
'BogfoerController'. Make sure that the controller has a parameterless
public constructor. An exception was thrown while activating λ:Usd.EA
.Bogfoering.WebApi.Controllers.BogfoerController ->
Usd.EA.Bogfoering.WebApi.Controllers.BogfoerController -> ......
Probable circular dependency between factory-scoped components. Chain
includes 'Activator = DomainWrapper (DelegateActivator), Services =
SomeService, Lifetime = Autofac.Core.Lifetime.CurrentScopeLifetime,
Sharing = None, Ownership = ExternallyOwned'
Short answer:
This is caused by Autofac's behaviour when resolving services from a child ILifetimeScope created by calling BeginLifetimeScope(Action<ContainerBuilder> configurationAction).
Long answer:
I set up a simple test to prove the above statement. I generated 51 test classes, each depending on the previous one.
public class Test0
{
    public Test0() { }
}

public class Test1
{
    public Test1(Test0 test) { }
}

(...)

public class Test50
{
    public Test50(Test49 test) { }
}
I registered them in a newly created container and tried to resolve the Test50 class directly from the container. As you already found out, there is a hard-coded limit of 50 on the resolve depth in the Autofac library, which you can see on the GitHub page. After reaching this limit a DependencyResolutionException is thrown stating "Probable circular dependency between factory-scoped components." And this is exactly what happened in my first test.
Now you have asked why you are seeing multiple registrations of the same dependencies. Here comes the fun part. When resolving your instance, you will probably use the BeginLifetimeScope function to create a new ILifetimeScope. This would still be fine, unless you add some new registrations to the child scope using one of the overloads. See the example below:
using (var scope = container.BeginLifetimeScope(b => { }))
{
    var test = scope.Resolve<Test49>();
}
I'm resolving only 50 dependencies (which previously worked), but now it throws an exception:
As you can see, this is exactly the same behaviour you described: each dependency is now shown twice. In that image you can also see that the dependency graph only reached the Test25 class, which effectively cuts the previous maximum depth in half (a whole 25 dependencies!). We can confirm this by successfully resolving the Test24 class, while an exception is thrown when trying to resolve Test25. It gets even funnier: what do you think happens if we add another scope?
using (var scope1 = container.BeginLifetimeScope(b => { }))
{
    using (var scope2 = scope1.BeginLifetimeScope(b => { }))
    {
        var test2 = scope2.Resolve<Test49>();
    }
}
You probably guessed it: now you can only resolve dependencies down to a depth of 50 / 3 ≈ 16.
Conclusion: creating nested scopes reduces the actual available maximum depth of the dependency graph by a factor of N, where N is the nesting depth of the scope. (Scopes created without extending the container builder do not affect this number.) In my opinion it is absurd to have a hard-coded magic number that is nowhere in the documentation, cannot easily be configured, doesn't even represent the actual maximum depth and, when exceeded, throws a misleading exception stating that you have a circular dependency somewhere in the graph.
Solutions: one way around the issue is to stop using that overload of BeginLifetimeScope. That may not be possible due to architectural limitations, or because a third-party framework is the one using Autofac as its DI container.
Another solution, which you have already mentioned, is overwriting MaxResolveDepth using dirty reflection:
string circularDependencyDetectorTypeName = typeof(IContainer).AssemblyQualifiedName.Replace(typeof(IContainer).FullName, "Autofac.Core.Resolving.CircularDependencyDetector");
Type circularDependencyDetectorType = Type.GetType(circularDependencyDetectorTypeName);
FieldInfo maxResolveDepthField = circularDependencyDetectorType.GetField("MaxResolveDepth", BindingFlags.Static | BindingFlags.NonPublic);
maxResolveDepthField.SetValue(null, 500);
On Autofac's GitHub you can also read that they are already planning to change the behaviour of the CircularDependencyDetector so it can handle infinite dependency depth, but those plans were mentioned in 2018 and they haven't even changed that exception message to date.
Is there a way I can keep track of how much time is taken to resolve an instance via Simple Injector and constructor-based IoC?
I mean something at trace level
Thanks
Resolving instances in Simple Injector is blazingly fast, and should never be a problem, unless your constructors do too much.
Nonetheless, adding tracing can be done using the following extension method (works for Simple Injector v2.x and beyond):
public static void ApplyInterceptor(
    this ContainerOptions options, Func<Func<object>, object> interceptor)
{
    options.Container.ExpressionBuilding += (s, e) =>
    {
        var factory = Expression.Lambda(typeof(Func<object>), e.Expression).Compile();

        e.Expression = Expression.Convert(
            Expression.Invoke(
                Expression.Constant(interceptor),
                Expression.Constant(factory)),
            e.Expression.Type);
    };
}
This ApplyInterceptor extension method can be called to intercept the creation of all types produced by the container, for instance to add this monitoring behavior:
container.Options.ApplyInterceptor(producer =>
{
    var watch = Stopwatch.StartNew();
    object instance = null;
    try
    {
        instance = producer();
        return instance;
    }
    finally
    {
        watch.Stop();
        if (watch.ElapsedMilliseconds > 50)
        {
            string name = instance.GetType().ToFriendlyName();
            Console.WriteLine(
                $"WARNING: {name} took {watch.ElapsedMilliseconds} ms. to resolve.");
        }
    }
});
WARNING: This interceptor gets applied to all registrations in Simple Injector and could severely impact runtime performance, so make sure you only add it in debug builds or when a debugger is attached.
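A small sketch of that guard; ApplyTimingInterceptor here is just a hypothetical wrapper around the container.Options.ApplyInterceptor(...) call shown above:

// Only wire the timing interceptor up when it can actually help you:
// in debug builds, or when a debugger is attached to a release build.
#if DEBUG
ApplyTimingInterceptor(container);
#else
if (System.Diagnostics.Debugger.IsAttached)
{
    ApplyTimingInterceptor(container);
}
#endif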
I am currently disposing many objects when I close my form, even though it probably disposes them automatically. Still, I prefer to follow the "rules" of disposing; hopefully it will stick and help prevent mistakes.
Here is how I currently dispose, which works:
if (connect == true)
{
    Waloop.Dispose();
    connect = false;
    UninitializeCall();
    DropCall();
}
if (KeySend.Checked || KeyReceive.Checked)
{
    m_mouseListener.Dispose();
    k_listener.Dispose();
}
if (NAudio.Wave.AsioOut.isSupported())
{
    Aut.Dispose();
}
if (Wasout != null)
{
    Wasout.Dispose();
}
if (SendStream != null)
{
    SendStream.Dispose();
}
Basically, the first block runs only if a bool is true, meaning that if it isn't, those objects can be ignored, as I think they haven't been created.
The others are just ways for me to dispose something if it's there, but it's not a very good way. I would like to have it in one big function, meaning:
dispose it if it's NOT already disposed, or something like that.
I know that many of the objects have an "IsDisposed" bool, so it should be possible if I can check every object and dispose it when that's false.
How about a helper method which takes objects which implement IDisposable as params?
void DisposeAll(params IDisposable[] disposables)
{
    foreach (IDisposable id in disposables)
    {
        if (id != null) id.Dispose();
    }
}
When you want to dispose multiple objects, call the method with whatever objects you want to dispose.
this.DisposeAll(Wasout, SendStream, m_mouseListener, k_listener);
If you want to avoid calling them out explicitly, then store them all in a List<>:
private List<IDisposable> _disposables;

void DisposeAll()
{
    foreach (IDisposable id in _disposables)
    {
        if (id != null) id.Dispose();
    }
}
You can implement a Disposer class, that will do the work for you, along these lines:
public class Disposer
{
    private List<IDisposable> disposables = new List<IDisposable>();

    public void Register(IDisposable item)
    {
        disposables.Add(item);
    }

    public void Unregister(IDisposable item)
    {
        disposables.Remove(item);
    }

    public void DisposeAll()
    {
        foreach (IDisposable item in disposables)
        {
            item.Dispose();
        }
        disposables.Clear();
    }
}
Then, instead of the ugly code in your main class, you can have something like:
public class Main
{
    //member field
    private Disposer m_disposer;

    //constructor
    public Main()
    {
        ....
        m_disposer = new Disposer();

        //register any available disposables
        m_disposer.Register(m_mouseListener);
        m_disposer.Register(k_listener);
    }

    ...

    public bool Connect()
    {
        ...
        if (isConnected)
        {
            Waloop = ...
            Wasout = ...

            // register additional disposables as they are created
            m_disposer.Register(Waloop);
            m_disposer.Register(Wasout);
        }
    }

    ...

    public void Close()
    {
        //disposal
        m_disposer.DisposeAll();
    }
}
I suggest you use the using statement. So with your code, it would look something like this:
using (WaloopClass Waloop = new WaloopClass())
{
    // Some other code here I know nothing about.
    connect = false; // Testing the current value of connect is redundant.
    UninitializeCall();
    DropCall();
}
Note there is now no need to explicitly Dispose Waloop, as it happens automatically at the end of the using statement.
This will help to structure your code, and makes the scope of the Waloop much clearer.
I am going to suppose that the only problem you’re trying to solve is how to write the following in a nicer way:
if (Wasout != null)
    Wasout.Dispose();
if (SendStream != null)
    SendStream.Dispose();
This is a lot of logic already implemented by the using keyword. using checks that the variable is not null before calling Dispose() for you. Also, using guarantees that thrown exceptions (perhaps from Wasout.Dispose()) will not interrupt the attempts to call Dispose() on the other listed objects (such as SendStream). It seems that using was intended to allow management of resources based on scoping rules, so using using as an alternative way to write o.Dispose() may be considered an abuse of the language. However, the benefits of using's behavior and the concision it enables are quite valuable. Thus, I recommend replacing such mass statically-written batches of "if (o != null) o.Dispose()" with an "empty" using:
using (
    IDisposable _Wasout = Wasout,
    _SendStream = SendStream)
{
}
Note that the order that Dispose() is called in is in reverse of how objects are listed in the using block. This follows the pattern of cleaning up objects in reverse of their instantiation order. (The idea is that an object instantiated later may refer to an object instantiated earlier. E.g., if you are using a ladder to climb a house, you might want to keep the ladder around so that you can climb back down before putting it away—the ladder gets instantiated first and cleaned up last. Uhm, analogies… but, basically, the above is shorthand for nested using. And the unlike objects can be smashed into the same using block by writing the using in terms of IDisposable.)
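For comparison, this is the nested form that the single block above is shorthand for; Dispose runs innermost-first, so SendStream is disposed before Wasout:

using (IDisposable _Wasout = Wasout)
{
    using (IDisposable _SendStream = SendStream)
    {
        // empty: we only want the disposal semantics
    }
}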
Here is a dotnetfiddle of using managing exceptions.
I am having a hard time tracking down a lock issue, so I would like to log every method call's entry and exit. I've done this before with C++ without having to add code to every method. Is this possible with C#?
Probably your best bet would be to use an AOP (aspect oriented programming) framework to automatically call tracing code before and after a method execution. A popular choice for AOP and .NET is PostSharp.
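As a rough sketch of what that can look like (the exact attribute and serialization requirements vary between PostSharp versions, and TraceAttribute/Worker are made-up names), an OnMethodBoundaryAspect lets PostSharp weave entry/exit logging into any method you decorate at build time:

using System;
using PostSharp.Aspects;

[Serializable]
public class TraceAttribute : OnMethodBoundaryAspect
{
    // Runs just before the decorated method body.
    public override void OnEntry(MethodExecutionArgs args)
    {
        Console.WriteLine("Entering {0}.{1}", args.Method.DeclaringType.Name, args.Method.Name);
    }

    // Runs when the decorated method exits, normally or via an exception.
    public override void OnExit(MethodExecutionArgs args)
    {
        Console.WriteLine("Exiting {0}.{1}", args.Method.DeclaringType.Name, args.Method.Name);
    }
}

public class Worker
{
    [Trace] // the aspect can also be multicast over a whole class or assembly
    public void DoWork()
    {
        // method body
    }
}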
If your primary goal is to log function entry/exit points and occasional information in between, I've had good results with a disposable logging object, where the constructor traces the function entry and Dispose() traces the exit. This allows calling code to simply wrap each method's code inside a single using statement. Methods are also provided for arbitrary logs in between. Here is a complete C# ETW event tracing class along with a function entry/exit wrapper:
using System;
using System.Diagnostics;
using System.Diagnostics.Tracing;
using System.Reflection;
using System.Runtime.CompilerServices;

namespace MyExample
{
    // This class traces function entry/exit.
    // Constructor is used to automatically log function entry.
    // Dispose is used to automatically log function exit.
    // Use the "using(FnTraceWrap x = new FnTraceWrap()){ function code }" pattern for function entry/exit tracing.
    public class FnTraceWrap : IDisposable
    {
        string methodName;
        string className;
        private bool _disposed = false;

        public FnTraceWrap()
        {
            StackFrame frame;
            MethodBase method;

            frame = new StackFrame(1);
            method = frame.GetMethod();
            this.methodName = method.Name;
            this.className = method.DeclaringType.Name;

            MyEventSourceClass.Log.TraceEnter(this.className, this.methodName);
        }

        public void TraceMessage(string format, params object[] args)
        {
            string message = String.Format(format, args);
            MyEventSourceClass.Log.TraceMessage(message);
        }

        public void Dispose()
        {
            if (!this._disposed)
            {
                this._disposed = true;
                MyEventSourceClass.Log.TraceExit(this.className, this.methodName);
            }
        }
    }

    [EventSource(Name = "MyEventSource")]
    sealed class MyEventSourceClass : EventSource
    {
        // Global singleton instance
        public static MyEventSourceClass Log = new MyEventSourceClass();

        private MyEventSourceClass()
        {
        }

        [Event(1, Opcode = EventOpcode.Info, Level = EventLevel.Informational)]
        public void TraceMessage(string message)
        {
            WriteEvent(1, message);
        }

        [Event(2, Message = "{0}({1}) - {2}: {3}", Opcode = EventOpcode.Info, Level = EventLevel.Informational)]
        public void TraceCodeLine([CallerFilePath] string filePath = "",
                                  [CallerLineNumber] int line = 0,
                                  [CallerMemberName] string memberName = "", string message = "")
        {
            WriteEvent(2, filePath, line, memberName, message);
        }

        // Function-level entry and exit tracing
        [Event(3, Message = "Entering {0}.{1}", Opcode = EventOpcode.Start, Level = EventLevel.Informational)]
        public void TraceEnter(string className, string methodName)
        {
            WriteEvent(3, className, methodName);
        }

        [Event(4, Message = "Exiting {0}.{1}", Opcode = EventOpcode.Stop, Level = EventLevel.Informational)]
        public void TraceExit(string className, string methodName)
        {
            WriteEvent(4, className, methodName);
        }
    }
}
Code that uses it will look something like this:
public void DoWork(string foo)
{
    using (FnTraceWrap fnTrace = new FnTraceWrap())
    {
        fnTrace.TraceMessage("Doing work on {0}.", foo);
        /*
        code ...
        */
    }
}
A profiler is great for looking at your running code during development, but if you're looking for the ability to do custom traces in production then, as Denis G. mentioned, PostSharp is the perfect tool: you don't have to change all your code, and you can easily switch it on and off.
It's also easy to set up in a few minutes, and Gaël Fraiteur, the creator of PostSharp, even has videos that show how easy it is to add tracing to an existing app.
You will find examples and tutorials in the documentation section.
Using ANTS Profiler from Red Gate would be your best bet. Failing that, look into interceptors in Castle Windsor. That does assume you're loading your types via IoC, though.
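For reference, a hedged sketch of the Windsor interceptor approach; it assumes Castle.DynamicProxy's IInterceptor, and IMyService/MyService are made-up placeholder types:

using System;
using Castle.DynamicProxy;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface IMyService { void DoWork(); }
public class MyService : IMyService { public void DoWork() { } }

// Logs entry and exit around every intercepted call.
public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Console.WriteLine("Entering {0}", invocation.Method.Name);
        try
        {
            invocation.Proceed();
        }
        finally
        {
            Console.WriteLine("Exiting {0}", invocation.Method.Name);
        }
    }
}

public static class ContainerSetup
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();
        container.Register(
            Component.For<LoggingInterceptor>(),
            Component.For<IMyService>().ImplementedBy<MyService>()
                     .Interceptors<LoggingInterceptor>());
        return container;
    }
}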
Reflection is another way: you can use the System.Reflection.Emit methods to "write" code into memory. That code could replace your method's code and execute it, but with appropriate logging added. Good luck with that one, though... It would be easier to use an aspect-oriented programming framework like Aspect#.
It might be worth waiting for the lock issue to take hold, then taking a memory dump and analysing the call stacks on the various threads. You can use DebugDiag or the adplus script (hang mode, in this case) that comes with Debugging Tools for Windows.
Tess Ferrandez also has an excellent lab series on learning to debug various issues using .NET memory dumps. I highly recommend it.
How do you know that it's happening? If this is a multithreaded application, I would recommend testing for the condition and calling System.Diagnostics.Debugger.Break() at runtime when it's detected. Then simply open up the Threads window and step through the call stacks on each relevant thread.
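A minimal sketch of that idea; the detection condition is a placeholder for whatever check fits your app (for example, a watchdog noticing a lock has been held too long):

using System.Diagnostics;

static void BreakIfLockProblemDetected(bool lockProblemDetected)
{
    // "lockProblemDetected" stands in for whatever condition you can actually test.
    if (lockProblemDetected && Debugger.IsAttached)
    {
        // Freezes all threads and drops you into the debugger, so you can inspect
        // each thread's call stack in the Threads window.
        Debugger.Break();
    }
}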