How can Ninject be configured to always deactivate pooled references?

We're using a library that uses pooled objects (ServiceStack.Redis's PooledRedisClientManager). Objects are created and reused for multiple web requests. However, Dispose should be called after each use to release the object back into the pool.
By default, Ninject only deactivates an object reference if it has not been deactivated before.
What happens is that the pool instantiates an object and marks it as active, and Ninject then runs the activation pipeline. At the end of the (web) request, Ninject runs the deactivation pipeline, which calls Dispose (and thus the pool marks the object as inactive). On the next request, the first pooled instance is reused and the pool marks it as active again. At the end of that request, however, Ninject does not run its deactivation pipeline, because the ActivationCache has already marked this instance as deactivated (this check happens in the Pipeline).
Here's a simple sample that we've added in a new MVC project to demonstrate this problem:
public interface IFooFactory
{
IFooClient GetClient();
void DisposeClient(FooClient client);
}
public class PooledFooClientFactory : IFooFactory
{
private readonly List<FooClient> pool = new List<FooClient>();
public IFooClient GetClient()
{
lock (pool)
{
var client = pool.SingleOrDefault(c => !c.Active);
if (client == null)
{
client = new FooClient(pool.Count + 1);
client.Factory = this;
pool.Add(client);
}
client.Active = true;
return client;
}
}
public void DisposeClient(FooClient client)
{
client.Active = false;
}
}
public interface IFooClient
{
void Use();
}
public class FooClient : IFooClient, IDisposable
{
internal IFooFactory Factory { get; set; }
internal bool Active { get; set; }
internal int Id { get; private set; }
public FooClient(int id)
{
this.Id = id;
}
public void Dispose()
{
if (Factory != null)
{
Factory.DisposeClient(this);
}
}
public void Use()
{
Console.WriteLine("Using...");
}
}
public class HomeController : Controller
{
private IFooClient foo;
public HomeController(IFooClient foo)
{
this.foo = foo;
}
public ActionResult Index()
{
foo.Use();
return View();
}
public ActionResult About()
{
return View();
}
}
// In the Ninject configuration (NinjectWebCommon.cs)
private static void RegisterServices(IKernel kernel)
{
kernel.Bind<IFooFactory>()
.To<PooledFooClientFactory>()
.InSingletonScope();
kernel.Bind<IFooClient>()
.ToMethod(ctx => ctx.Kernel.Get<IFooFactory>().GetClient())
.InRequestScope();
}
The solutions that we've come up with thus far are:
Mark these objects as InTransientScope() and use another deactivation mechanism (like an MVC ActionFilter to dispose of the object after each request); a rough sketch of that is shown below. We'd lose the benefits of Ninject's deactivation process and need an indirect route to disposing of the object.
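For illustration only (the attribute name and the HttpContext.Items key are made up, and this is untested): the binding stays transient, each client handed out for a request is remembered in HttpContext.Items, and a global action filter releases it once the result has executed.
public class DisposeFooClientAttribute : ActionFilterAttribute
{
    private const string ItemKey = "PooledFooClient";

    // Called from the Ninject binding below so the filter can find the client later.
    public static void Remember(IFooClient client)
    {
        HttpContext.Current.Items[ItemKey] = client;
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        base.OnResultExecuted(filterContext);
        // Dispose whatever client was created for this request so the pool
        // marks it inactive again (assumes at most one client per request).
        var client = filterContext.HttpContext.Items[ItemKey] as IDisposable;
        if (client != null)
        {
            client.Dispose();
        }
    }
}
// In RegisterServices, instead of InRequestScope():
kernel.Bind<IFooClient>()
    .ToMethod(ctx =>
    {
        var client = ctx.Kernel.Get<IFooFactory>().GetClient();
        DisposeFooClientAttribute.Remember(client);
        return client;
    })
    .InTransientScope();
// ...and register the filter globally: GlobalFilters.Filters.Add(new DisposeFooClientAttribute());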
Write a custom IActivationCache that checks the pool to see if the object is active. Here's what I've written so far, but I'd like someone else's eyes on it to judge how robust it is:
public class PooledFooClientActivationCache : DisposableObject, IActivationCache, INinjectComponent, IDisposable, IPruneable
{
private readonly ActivationCache realCache;
public PooledFooClientActivationCache(ICachePruner cachePruner)
{
realCache = new ActivationCache(cachePruner);
}
public void AddActivatedInstance(object instance)
{
realCache.AddActivatedInstance(instance);
}
public void AddDeactivatedInstance(object instance)
{
realCache.AddDeactivatedInstance(instance);
}
public void Clear()
{
realCache.Clear();
}
public bool IsActivated(object instance)
{
lock (realCache)
{
var fooClient = instance as FooClient;
if (fooClient != null) return fooClient.Active;
return realCache.IsActivated(instance);
}
}
public bool IsDeactivated(object instance)
{
lock (realCache)
{
var fooClient = instance as FooClient;
if (fooClient != null) return !fooClient.Active;
return realCache.IsDeactivated(instance);
}
}
public Ninject.INinjectSettings Settings
{
get
{
return realCache.Settings;
}
set
{
realCache.Settings = value;
}
}
public void Prune()
{
realCache.Prune();
}
}
// Wire it up:
kernel.Components.RemoveAll<IActivationCache>();
kernel.Components.Add<IActivationCache, PooledFooClientActivationCache>();
Specifically for ServiceStack.Redis: use the PooledRedisClientManager.DisposablePooledClient<RedisClient> wrapper so we always get a new object instance, then let the client object be transient since the wrapper takes care of disposing it. This approach does not address the broader problem of pooled objects with Ninject; it only fixes it for ServiceStack.Redis.
var clientManager = new PooledRedisClientManager();
kernel.Bind<PooledRedisClientManager.DisposablePooledClient<RedisClient>>()
.ToMethod(ctx => clientManager.GetDisposableClient<RedisClient>())
.InRequestScope();
kernel.Bind<IRedisClient>()
.ToMethod(ctx => ctx.Kernel.Get<PooledRedisClientManager.DisposablePooledClient<RedisClient>>().Client)
.InTransientScope();
Is one of these approaches more appropriate than the other?

I have not used Redis so far, so I can't tell you how to do it correctly, but I can give you some general input:
Disposing is not the only thing done by the activation pipeline. It also performs property/method injection and executes activation/deactivation actions, for example. Using a custom activation cache that returns false for an instance that has already been activated will cause those other actions to be executed again (e.g. property injection will be done a second time).
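For example, with a binding along these lines (a contrived sketch, not code from your question), a cache that reports the reused pooled instance as not-yet-activated would make Ninject re-run property injection and fire the OnActivation action every time the instance is handed out again:
kernel.Bind<IFooClient>()
    .ToMethod(ctx => ctx.Kernel.Get<IFooFactory>().GetClient())
    .InRequestScope()
    // Both of these run in the activation/deactivation pipeline, so they would
    // execute again on every reuse if IsActivated keeps returning false.
    .OnActivation(client => Console.WriteLine("activated"))
    .OnDeactivation(client => Console.WriteLine("deactivated"));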

Related

Test Environment.Exit() in C#

Is there some C# equivalent of ExpectedSystemExit in Java? I have an exit in my code and would really like to be able to test it. The only thing I've found for C# is a rather ugly workaround.
Example Code
public void CheckRights()
{
if(!service.UserHasRights())
{
Environment.Exit(1);
}
}
Test Code
[TestMethod]
public void TestCheckRightsWithoutRights()
{
MyService service = ...
service.UserHasRights().Returns(false);
???
}
I am using the VS test framework (plus NSubstitute for mocking), but it is no problem to switch to NUnit or anything else for this test.
You should use dependency injection to supply the class under test with an interface that provides the environment exit.
For example:
public interface IEnvironment
{
void Exit(int code);
}
Let's also assume that you have an interface for calling UserHasRights():
public interface IRightsService
{
bool UserHasRights();
}
Now suppose your class to be tested looks like this:
public sealed class RightsChecker
{
readonly IRightsService service;
readonly IEnvironment environment;
public RightsChecker(IRightsService service, IEnvironment environment)
{
this.service = service;
this.environment = environment;
}
public void CheckRights()
{
if (!service.UserHasRights())
{
environment.Exit(1);
}
}
}
Now you can use a mocking framework to check that IEnvironment.Exit() is called under the right conditions. For example, using Moq it might look a bit like this:
[TestMethod]
public static void CheckRights_exits_program_when_user_has_no_rights()
{
var rightsService = new Mock<IRightsService>();
rightsService.Setup(foo => foo.UserHasRights()).Returns(false);
var environment = new Mock<IEnvironment>();
var rightsChecker = new RightsChecker(rightsService.Object, environment.Object);
rightsChecker.CheckRights();
environment.Verify(foo => foo.Exit(1));
}
Ambient contexts and cross-cutting concerns
A method such as Environment.Exit() could be considered to be a cross-cutting concern, and you might well want to avoid passing around an interface for it because you can end up with an explosion of additional constructor parameters. (Note: The canonical example of a cross cutting concern is DateTime.Now.)
To address this issue, you can introduce an "Ambient context" - a pattern which allows you to use a static method while still retaining the ability to unit test calls to it. Of course, such things should be used sparingly and only for true cross-cutting concerns.
For example, you could introduce an ambient context for Environment like so:
public abstract class EnvironmentControl
{
public static EnvironmentControl Current
{
get
{
return _current;
}
set
{
if (value == null)
throw new ArgumentNullException(nameof(value));
_current = value;
}
}
public abstract void Exit(int value);
public static void ResetToDefault()
{
_current = DefaultEnvironmentControl.Instance;
}
static EnvironmentControl _current = DefaultEnvironmentControl.Instance;
}
public class DefaultEnvironmentControl : EnvironmentControl
{
public override void Exit(int value)
{
Environment.Exit(value);
}
public static DefaultEnvironmentControl Instance => _instance.Value;
static readonly Lazy<DefaultEnvironmentControl> _instance = new Lazy<DefaultEnvironmentControl>(() => new DefaultEnvironmentControl());
}
Normal code just calls EnvironmentControl.Current.Exit(). With this change, the IEnvironment parameter disappears from the RightsChecker class:
public sealed class RightsChecker
{
readonly IRightsService service;
public RightsChecker(IRightsService service)
{
this.service = service;
}
public void CheckRights()
{
if (!service.UserHasRights())
{
EnvironmentControl.Current.Exit(1);
}
}
}
But we still retain the ability to unit-test that it has been called:
public static void CheckRights_exits_program_when_user_has_no_rights()
{
var rightsService = new Mock<IRightsService>();
rightsService.Setup(foo => foo.UserHasRights()).Returns(false);
var environment = new Mock<EnvironmentControl>();
EnvironmentControl.Current = environment.Object;
try
{
var rightsChecker = new RightsChecker(rightsService.Object);
rightsChecker.CheckRights();
environment.Verify(foo => foo.Exit(1));
}
finally
{
EnvironmentControl.ResetToDefault();
}
}
For more information about ambient contexts, see here.
I ended up creating a new method which I can then mock in my tests.
Code
public void CheckRights()
{
if(!service.UserHasRights())
{
Exit();
}
}
internal virtual void Exit()
{
Environment.Exit(1);
}
Unit test
[TestMethod]
public void TestCheckRightsWithoutRights()
{
MyService service = ...
service.When(svc => svc.Exit()).DoNotCallBase();
...
service.CheckRights();
service.Received(1).Exit();
}
If your goal is to avoid extra classes/interfaces just to support tests, how do you feel about exposing the Environment.Exit call as an Action via property injection?
class RightsChecker
{
public Action AccessDeniedAction { get; set; }
public RightsChecker(...)
{
...
AccessDeniedAction = () => Environment.Exit(1);
}
}
[Test]
public void TestCheckRightsWithoutRights()
{
...
bool wasAccessDeniedActionExecuted = false;
rightsChecker.AccessDeniedAction = () => { wasAccessDeniedActionExecuted = true; };
...
Assert.That(wasAccessDeniedActionExecuted, Is.True);
}

Hangfire dependency injection lifetime scope

I'm rewriting this entire question because I've realized the cause, but I still need a solution:
I have a recurring job in Hangfire that runs every minute, checks the database, possibly updates some stuff, then exits.
I inject my DbContext into the class containing the job method. I register the DbContext for injection using the following:
builder.RegisterType<ApplicationDbContext>().As<ApplicationDbContext>().InstancePerLifetimeScope();
However, it seems that Hangfire does not create a separate lifetime scope every time the job runs, because the constructor only gets called once, although the job method gets called every minute.
This causes issues for me. If the user updates some values in the database (a DbContext injected somewhere else is used to update them), the context still being used by Hangfire starts returning outdated values that have already been changed.
Hangfire currently uses a shared instance of JobActivator for every worker, which uses the following method to resolve dependencies:
public override object ActivateJob(Type jobType)
It is planned to add a JobActivationContext to this method for Milestone 2.0.0.
For now, there is no way to tell which job a dependency is being resolved for. The only way I can think of to work around this issue is to use the fact that jobs run serially on different threads (I don't know Autofac, so I use Unity as an example).
You could create a JobActivator that can store separate scopes per thread:
public class UnityJobActivator : JobActivator
{
[ThreadStatic]
private static IUnityContainer childContainer;
public UnityJobActivator(IUnityContainer container)
{
// Register dependencies
container.RegisterType<MyService>(new HierarchicalLifetimeManager());
Container = container;
}
public IUnityContainer Container { get; set; }
public override object ActivateJob(Type jobType)
{
return childContainer.Resolve(jobType);
}
public void CreateChildContainer()
{
childContainer = Container.CreateChildContainer();
}
public void DisposeChildContainer()
{
childContainer.Dispose();
childContainer = null;
}
}
Use a JobFilter with IServerFilter implementation to set this scope for every job (thread):
public class ChildContainerPerJobFilterAttribute : JobFilterAttribute, IServerFilter
{
public ChildContainerPerJobFilterAttribute(UnityJobActivator unityJobActivator)
{
UnityJobActivator = unityJobActivator;
}
public UnityJobActivator UnityJobActivator { get; set; }
public void OnPerformed(PerformedContext filterContext)
{
UnityJobActivator.DisposeChildContainer();
}
public void OnPerforming(PerformingContext filterContext)
{
UnityJobActivator.CreateChildContainer();
}
}
And finally setup your DI:
UnityJobActivator unityJobActivator = new UnityJobActivator(new UnityContainer());
JobActivator.Current = unityJobActivator;
GlobalJobFilters.Filters.Add(new ChildContainerPerJobFilterAttribute(unityJobActivator));
We have created a new pull request on Hangfire.Autofac with the workaround described by Dresel. Hopefully it gets merged into the main branch:
https://github.com/HangfireIO/Hangfire.Autofac/pull/4
Edit: With Autofac, .NET 4.5 and Hangfire >= 1.5.0, use the Hangfire.Autofac NuGet package (GitHub).
Working with .NET 4.0 (Autofac 3.5.2 and Hangfire 1.1.1), we set up Dresel's solution with Autofac. The only difference is in the JobActivator:
using System;
using Autofac;
using Hangfire;
namespace MyApp.DependencyInjection
{
public class ContainerJobActivator : JobActivator
{
[ThreadStatic]
private static ILifetimeScope _jobScope;
private readonly IContainer _container;
public ContainerJobActivator(IContainer container)
{
_container = container;
}
public void BeginJobScope()
{
_jobScope = _container.BeginLifetimeScope();
}
public void DisposeJobScope()
{
_jobScope.Dispose();
_jobScope = null;
}
public override object ActivateJob(Type type)
{
return _jobScope.Resolve(type);
}
}
}
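The job filter that opens and disposes the scope around each job is then analogous to Dresel's; presumably something like this (untested sketch):
public class JobScopeFilterAttribute : JobFilterAttribute, IServerFilter
{
    private readonly ContainerJobActivator _activator;

    public JobScopeFilterAttribute(ContainerJobActivator activator)
    {
        _activator = activator;
    }

    public void OnPerforming(PerformingContext filterContext)
    {
        // Open a fresh Autofac lifetime scope before the job runs.
        _activator.BeginJobScope();
    }

    public void OnPerformed(PerformedContext filterContext)
    {
        // Dispose the scope (and everything resolved from it) afterwards.
        _activator.DisposeJobScope();
    }
}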
To work around this problem, I've created a disposable JobContext<T> class that holds an ILifetimeScope which is disposed when Hangfire completes the job. The real job is invoked by reflection.
public class JobContext<T> : IDisposable
{
public ILifetimeScope Scope { get; set; }
public void Execute(string methodName, params object[] args)
{
var instance = Scope.Resolve<T>();
var methodInfo = typeof(T).GetMethod(methodName);
ConvertParameters(methodInfo, args);
methodInfo.Invoke(instance, args);
}
private void ConvertParameters(MethodInfo targetMethod, object[] args)
{
var methodParams = targetMethod.GetParameters();
for (int i = 0; i < methodParams.Length && i < args.Length; i++)
{
if (args[i] == null) continue;
if (!methodParams[i].ParameterType.IsInstanceOfType(args[i]))
{
// try convert
args[i] = args[i].ConvertType(methodParams[i].ParameterType);
}
}
}
void IDisposable.Dispose()
{
if (Scope != null)
Scope.Dispose();
Scope = null;
}
}
There is a JobActivator that inspects the requested type and creates the lifetime scope if necessary.
public class ContainerJobActivator : JobActivator
{
private readonly IContainer _container;
private static readonly string JobContextGenericTypeName = typeof(JobContext<>).ToString();
public ContainerJobActivator(IContainer container)
{
_container = container;
}
public override object ActivateJob(Type type)
{
if (type.IsGenericType && type.GetGenericTypeDefinition().ToString() == JobContextGenericTypeName)
{
var scope = _container.BeginLifetimeScope();
var context = Activator.CreateInstance(type);
var propertyInfo = type.GetProperty("Scope");
propertyInfo.SetValue(context, scope);
return context;
}
return _container.Resolve(type);
}
}
To assist with creating jobs without using string parameters, there is another class with some extension methods.
public static class JobHelper
{
public static object ConvertType(this object value, Type destinationType)
{
var sourceType = value.GetType();
TypeConverter converter = TypeDescriptor.GetConverter(sourceType);
if (converter.CanConvertTo(destinationType))
{
return converter.ConvertTo(value, destinationType);
}
converter = TypeDescriptor.GetConverter(destinationType);
if (converter.CanConvertFrom(sourceType))
{
return converter.ConvertFrom(value);
}
throw new Exception(string.Format("Can't convert value '{0}' of type {1} to destination type {2}", value, sourceType.Name, destinationType.Name));
}
public static Job CreateJob<T>(Expression<Action<T>> expression, params object[] args)
{
MethodCallExpression outermostExpression = expression.Body as MethodCallExpression;
var methodName = outermostExpression.Method.Name;
return Job.FromExpression<JobContext<T>>(ctx => ctx.Execute(methodName, args));
}
}
So, to queue up a job for a method with the following signature:
public class ResidentUploadService
{
public void Load(string fileName)
{
//...
}
}
the code to create the job looks like this:
var localFileName = "Somefile.txt";
var job = JobHelper
.CreateJob<ResidentUploadService>(service => service.Load(localFileName), localFileName);
var state = new EnqueuedState("queuename");
var client = new BackgroundJobClient();
client.Create(job,state);
A solution is supported out-of-the-box since hangfire.autofac 2.2.0.
In your situation, where your dependency is being registered per-lifetime-scope, you should be able to use non-tagged scopes when setting up hangfire.autofac. From the link:
GlobalConfiguration.Configuration.UseAutofacActivator(builder.Build(), false);
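For illustration, a minimal setup combining this with the registration from the question might look like the following (untested sketch):
var builder = new ContainerBuilder();
builder.RegisterType<ApplicationDbContext>().AsSelf().InstancePerLifetimeScope();
// ...register the classes containing the job methods as well...

// Passing "false" makes Hangfire.Autofac begin a plain (non-tagged) lifetime
// scope for each background job, so the registration above should give every
// job execution its own ApplicationDbContext, disposed when the job finishes.
GlobalConfiguration.Configuration.UseAutofacActivator(builder.Build(), false);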

Abstract factories when using dependency injection frameworks

I'm wondering how to properly use abstract factories when using a DI framework and one of the parameters in that factory is a dependency that should be handled by the DI framework.
I am not sure whether to make my abstract factory omit the parameter completely and let my DI container wire it up, or whether I should pass the dependency through myself.
For example, I have a TcpServer and it uses a Session.Factory to create sessions. The Session object actually takes a Processor in its constructor. Should I pass the Processor to the TcpServer and have it pass it on to the Session.Factory, or have my DI container do the wiring?
If I were to have the DI container do the wiring it would look like this:
class Session : ISession
{
public delegate ISession Factory(string name);
...
public Session(string name, Processor processor)
{
...
}
}
class TcpServer : ITcpServer
{
private readonly Session.Factory _sessionFactory;
public TcpServer(Session.Factory sessionFactory)
{
this._sessionFactory = sessionFactory;
}
...
public void OnConnectionReceived()
{
...
var session= _sessionFactory(ip.LocalEndPoint());
...
}
}
Then using a DI container like Ninject I'd be able to do this when configuring the container:
Bind<Session.Factory>().ToMethod(c =>
{
var processor = Kernel.Get<Processor>();
return (name) => new Session(name, processor);
}).InSingletonScope();
My main issue with this approach is that it assumes whoever creates the Session.Factory knows about the processor. In my case, since I am using a DI container, this is actually very convenient but it seems weird to have a factory have its own dependencies. I always imagined a factory not really ever having any members.
If I were to pass the dependency through
class Session : ISession
{
public delegate ISession Factory(string name, Processor processor);
...
public Session(string name, Processor processor)
{
...
}
}
class TcpServer : ITcpServer
{
private readonly Session.Factory _sessionFactory;
private readonly Processor _processor;
public TcpServer(Session.Factory sessionFactory, Processor processor)
{
this._sessionFactory = sessionFactory;
this._processor = processor;
}
...
public void OnConnectionReceived()
{
...
var session = _sessionFactory(ip.LocalEndPoint(), _processor);
...
}
}
I have two issues with the second approach:
The TcpServer doesn't actually do anything with the Processor. It just passes it along. This seems like poor man's DI at work, almost.
In the real program behind this code, the Processor actually has a reference to the TcpServer. Therefore, when using this approach, I get a circular reference. When I break it apart by using the first scenario, it's not an issue.
What do you think is the best approach? I am open to new ideas as well.
Thanks!
Many containers support factories in one way or another, and this is the way you should go.
E.g., taking your example, define an ISessionFactory interface like this:
public interface ISessionFactory
{
ISession CreateSession(string name);
}
For Ninject 2.3, see https://github.com/ninject/ninject.extensions.factory and let the implementation be generated by Ninject:
Bind<ISessionFactory>().AsFactory();
For 2.2, do the implementation yourself:
public class SessionFactory : ISessionFactory
{
private IKernel kernel;
public SessionFactory(IKernel kernel)
{
this.kernel = kernel;
}
public ISession CreateSession(string name)
{
return this.kernel.Get<ISession>(new ConstructorArgument("name", name));
}
}
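Either way, the consuming class only depends on the factory interface; roughly (a sketch based on the code in the question):
class TcpServer : ITcpServer
{
    private readonly ISessionFactory _sessionFactory;

    public TcpServer(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public void OnConnectionReceived()
    {
        // The Processor is supplied by the container inside the factory,
        // so the server never needs to know about it.
        var session = _sessionFactory.CreateSession(ip.LocalEndPoint()); // 'ip' as in the original snippet
        // ...
    }
}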
The pattern I use for abstract factories is a little different from yours. I use something like setter injection on a generic singleton, but wrap the configurable delegate "property" in a more intuitive interface.
I would prefer not to have to register each implementation individually, so I would use some convention that can be checked at application startup. I'm not sure about the Ninject syntax for auto-registering custom conventions, but the logic comes down to scanning the relevant assemblies for reference types T that have static readonly fields of type AbstractFactory<T>, then calling Configure(Func<T>) on that static member using reflection.
An example of the generic abstract factory singleton and how it would be declared on a Session is below.
public class Session {
public static readonly AbstractFactory<Session> Factory = AbstractFactory<Session>.GetInstance();
}
public sealed class AbstractFactory<T>
where T: class{
static AbstractFactory(){
Bolt = new object();
}
private static readonly object Bolt;
private static AbstractFactory<T> Instance;
public static AbstractFactory<T> GetInstance(){
if(Instance == null){
lock(Bolt){
if(Instance == null)
Instance = new AbstractFactory<T>();
}
}
return Instance;
}
private AbstractFactory(){}
private Func<T> m_FactoryMethod;
public void Configure(Func<T> factoryMethod){
m_FactoryMethod = factoryMethod;
}
public T Create() {
if(m_FactoryMethod == null) {
throw new NotImplementedException();
}
return m_FactoryMethod.Invoke();
}
}
Update
If you need to pass parameters into your factory method, you can alter the class like so:
public sealed class AbstractFactory<TDataContract,T>
where T: class{
static AbstractFactory(){
Bolt = new object();
}
private static readonly object Bolt;
private static AbstractFactory<TDataContract,T> Instance;
public static AbstractFactory<TDataContract,T> GetInstance(){
if(Instance == null){
lock(Bolt){
if(Instance == null)
Instance = new AbstractFactory<TDataContract,T>();
}
}
return Instance;
}
private AbstractFactory(){}
private Func<TDataContract,T> m_FactoryMethod;
public void Configure(Func<TDataContract,T> factoryMethod){
m_FactoryMethod = factoryMethod;
}
public T Create(TDataContract data) {
if(m_FactoryMethod == null) {
throw new NotImplementedException();
}
return m_FactoryMethod.Invoke(data);
}
}
Your SessionData, Session and TcpServer might look like
public class SessionData{
public DateTime Start { get; set; }
public string IpAddress { get; set; }
}
public class Session {
public static readonly AbstractFactory<SessionData,Session> Factory = AbstractFactory<SessionData,Session>.GetInstance();
private readonly string _ip;
private readonly DateTime _start;
public Session(SessionData data) {
_ip = data.IpAddress;
_start = DateTime.Now;
}
public event EventHandler<RequestReceivedEventArgs> RequestAdded;
}
public class RequestReceivedEventArgs: EventArgs {
public SessionRequest Request { get; set; }
}
public class TcpServer : ITcpServer
{
private readonly Processor _processor;
public TcpServer(Processor processor)
{
this._processor = processor;
}
public void OnConnectionReceived()
{
var sessionData = new SessionData {
IpAddress = ip.LocalEndPoint(),
Start = DateTime.Now
};
var session = Session.Factory.Create(sessionData);
//...do other stuff
}
public void ServeResponse(SessionRequest request){
_processor.Process(request);
}
}
When configuring your DI container, you can set up the factory such as:
Session.Factory.Configure(sessionData => {
// instead of injecting the processor into the Session, configure events
// that allow the TcpServer to process the data.
// (After all, it is more logical for a server to serve a request than
// it is for a Session to do the processing. Sessions tend to store data
// and state, not invoke processes.)
var session = new Session(sessionData);
session.RequestAdded += (sender, e) => {
Kernel.Get<ITcpServer>().ServeResponse(e.Request);
};
return session;
});

Attempt to use my singleton that references another singleton

I'm calling my custom factory (PhotoServiceFactory), which is a singleton that lets me get back a specific custom service type (in this case FacebookService). FacebookService is also a singleton. In FacebookService I've exposed an instance of FacebookAlbumPhoto through a property. I did this so I don't have to repeat the same instantiation code over and over; I can get an instance through the FacebookService property.
PhotoServiceFactory service = PhotoServiceFactory.CurrentPhotoServiceFactory;
FacebookService facebookService = (FacebookService)service.GetAPIService(APIType.Facebook);
FacebookAlbumPhoto facebookPhoto = facebookService.FacebookAlbumPhoto.GetFacebookAlbumPhoto(selectedPhotoID);
So this is all set up now; I created all of this and am just testing it now.
What's happening is my code is bombing out at this line:
FacebookAlbumPhoto facebookPhoto = facebookService.FacebookAlbumPhoto.GetFacebookAlbumPhoto(selectedPhotoID);
The error I get is when I try to reference the facebookService.FacebookAlbumPhoto instance:
CurrentSession = '_singletonInstance.CurrentSession' threw an exception of type 'System.Threading.ThreadAbortException'
So I don't know if it's because the service singleton is on one thread and it then tries to reference another singleton that's on a completely different thread, and that's just not possible? That it's not possible to nest singletons like this? Or could this be another issue altogether? Because I can't see it.
Here's my ServiceFactory:
public class PhotoServiceFactory
{
private static PhotoServiceFactory _singletonInstance;
private PhotoServiceFactory(){}
public static PhotoServiceFactory CurrentPhotoServiceFactory
{
get
{
_singletonInstance = _singletonInstance ?? (_singletonInstance = new PhotoServiceFactory());
return _singletonInstance;
}
}
public object GetAPIService(APIType apiType)
{
object apiService = null;
switch (apiType)
{
case APIType.Facebook:
apiService = FacebookService.CurrentService;
break;
// rest of code
}
return apiService;
}
}
So the main singleton here, FacebookService, has a property to get its related session.
Here's the FacebookService class:
public class FacebookService
{
private static FacebookService _singletonInstance;
private FacebookService(){}
public FacebookSession CurrentSession
{
get
{
return FacebookSession.GetCurrentSession();
}
}
/// <summary>
/// Gets the current facebook service singleton instance.
/// </summary>
/// <value>The current facebook service.</value>
public static FacebookService CurrentService
{
get
{
_singletonInstance = _singletonInstance ?? (_singletonInstance = new FacebookService());
return _singletonInstance;
}
}
public FacebookAlbumPhoto FacebookAlbumPhoto
{
get
{
return new FacebookAlbumPhoto(); // create an instance automatically so we can start working with this object
}
}
}
Here's the session class:
public class FacebookSession
{
const string loginCallbackUrl = "http://localhost/PhotoUpload/FacebookOauth.aspx";
private FacebookSession()
{
}
public string UserID { get; private set; }
public static FacebookSession GetCurrentSession()
{
//....bunch of other logic is here
FacebookSession facebookSession = CreateNewSession();
return facebookSession;
}
public static FacebookSession CreateNewSession()
{
//...some code here
FacebookSession newFacebookSession = new FacebookSession
//... rest of code...
return newFacebookSession;
}
// ... rest of code
}
UPDATED:
As requested here's my FacebookAlbumPhoto class that I created:
public class FacebookAlbumPhoto : FacebookPhotoBase
{
private FacebookSession currentSession;
public FacebookAlbumPhoto()
{
currentSession = FacebookService.CurrentService.CurrentSession;
}
#region Methods
public FacebookAlbumPhoto GetFacebookAlbumPhoto(string photoID)
{
...more code
FacebookPhotoRequest request = new FacebookPhotoRequest(currentSession.UserID, photoID);
...more code
FacebookAlbumPhoto facebookPhoto = ParseFacebookPhoto(json);
return facebookPhoto;
}
...rest of code
}
Two things. First, remember to read over Skeet's catalogue of singleton implementations.
Second, try breaking your code just before the spot where the exception occurs, and then bring up your "Exceptions" dialogue (Ctrl-Alt-E). Check the "Thrown" checkbox next to Common Language Runtime Exceptions (the second row of the dialogue) and hit OK. Continue debugging your code. The results may tell you where the real problem is.
Don't forget to go back to the Exceptions dialogue and clear that checkbox after you are done. :)
Separate instance creation from initialization.
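In other words, something roughly like this (an illustrative sketch, not your actual classes): keep the creation of FacebookSession cheap and side-effect free, and move the work that actually talks to Facebook into an explicit initialization step that runs once, at a point in the request where it is safe to do so:
public class FacebookSession
{
    // Creation: cheap, no I/O, safe to do from any property getter.
    private static readonly Lazy<FacebookSession> current =
        new Lazy<FacebookSession>(() => new FacebookSession());

    public static FacebookSession Current
    {
        get { return current.Value; }
    }

    public string UserID { get; private set; }

    private FacebookSession() { }

    // Initialization: the "bunch of other logic" from GetCurrentSession()
    // moves here and is called explicitly, once, instead of every time
    // FacebookService.CurrentSession is read.
    public void Initialize()
    {
        // ...talk to Facebook, populate UserID, etc....
    }
}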

Object Context, Repositories and Transactions

I was wondering what the best way is to use transactions with Entity Framework.
Say I have three repositories:
Repo1(ObjectContext context)
Repo2(ObjectContext context)
Repo3(ObjectContext context)
and a service object that takes the three repositories:
Service(Repo1 repo1,Repo2 repo2, Repo3 repo3)
Service.CreateNewObject <- calls repo1, repo2, repo3 to do stuff.
So when I create the service I create the three repositories first and pass them in; each repository takes an object context, so my code looks something like this:
MyObjectContext context = new MyObjectContext();
Repo1 repo = new Repo1(context);
// etc
Now I have a controller class that is responsible for calling different services and components of my application, showing the right forms, etc. What I want to be able to do is wrap everything that happens in one of the controller methods in a transaction, so that if something goes wrong I can roll it back.
The controller takes a few different Service objects, but doesn't know anything about the object context.
My questions are:
Should the context also be passed in to the service layer?
How do I implement a transaction in the controller so that nothing that happens in the service layer is committed until everything has passed?
Sorry if it's a bit hard to understand..
Why doesn't your controller know about the ObjectContext?
This is where I would put it. Check out http://msdn.microsoft.com/en-us/magazine/dd882510.aspx; there, the Command is what commits/rolls back the UnitOfWork (ObjectContext).
If you don't want your controller to know about EF directly (good design), then you'll want to abstract your ObjectContext behind an interface, similar to the approach in the link above.
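For example (an illustrative sketch only; the names are made up), the controller and services could work against something like this instead of the raw ObjectContext:
public interface IUnitOfWork : IDisposable
{
    void Commit();
}

public class EfUnitOfWork : IUnitOfWork
{
    private readonly MyObjectContext _context = new MyObjectContext();

    // The repositories are constructed against this same context, so they
    // all take part in the same unit of work.
    public MyObjectContext Context
    {
        get { return _context; }
    }

    public void Commit()
    {
        _context.SaveChanges(); // nothing is persisted until this is called
    }

    public void Dispose()
    {
        _context.Dispose(); // disposing without Commit() effectively rolls back
    }
}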
How about using a custom TransactionScope, one that commits when all of your services have committed?
public class TransactionScope : Scope<IDbTransaction>
{
public TransactionScope()
{
InitialiseScope(ConnectionScope.CurrentKey);
}
protected override IDbTransaction CreateItem()
{
return ConnectionScope.Current.BeginTransaction();
}
public void Commit()
{
if (CurrentScopeItem.UserCount == 1)
{
TransactionScope.Current.Commit();
}
}
}
So the transaction is only committed when the UserCount is 1, meaning the last service has committed.
The scope classes are (shame we can't do attachments...):
public abstract class Scope<T> : IDisposable
where T : IDisposable
{
private bool disposed = false;
[ThreadStatic]
private static Stack<ScopeItem<T>> stack = null;
public static T Current
{
get { return stack.Peek().Item; }
}
internal static string CurrentKey
{
get { return stack.Peek().Key; }
}
protected internal ScopeItem<T> CurrentScopeItem
{
get { return stack.Peek(); }
}
protected void InitialiseScope(string key)
{
if (stack == null)
{
stack = new Stack<ScopeItem<T>>();
}
// Only create a new item on the stack if this
// is different to the current ambient item
if (stack.Count == 0 || stack.Peek().Key != key)
{
stack.Push(new ScopeItem<T>(1, CreateItem(), key));
}
else
{
stack.Peek().UserCount++;
}
}
protected abstract T CreateItem();
public void Dispose()
{
Dispose(true);
}
protected virtual void Dispose(bool disposing)
{
if (!disposed)
{
if (disposing)
{
// If there are no users for the current item
// in the stack, pop it
if (stack.Peek().UserCount == 1)
{
stack.Pop().Item.Dispose();
}
else
{
stack.Peek().UserCount--;
}
}
// There are no unmanaged resources to release, but
// if we add them, they need to be released here.
}
disposed = true;
}
}
public class ScopeItem<T> where T : IDisposable
{
private int userCount;
private T item;
private string key;
public ScopeItem(int userCount, T item, string key)
{
this.userCount = userCount;
this.item = item;
this.key = key;
}
public int UserCount
{
get { return this.userCount; }
set { this.userCount = value; }
}
public T Item
{
get { return this.item; }
set { this.item = value; }
}
public string Key
{
get { return this.key; }
set { this.key = value; }
}
}
public class ConnectionScope : Scope<IDbConnection>
{
private readonly string connectionString = "";
private readonly string providerName = "";
public ConnectionScope(string connectionString, string providerName)
{
this.connectionString = connectionString;
this.providerName = providerName;
InitialiseScope(string.Format("{0}:{1}", connectionString, providerName));
}
public ConnectionScope(IConnectionDetailsProvider connectionDetails)
: this(connectionDetails.ConnectionString, connectionDetails.ConnectionProvider)
{
}
protected override IDbConnection CreateItem()
{
IDbConnection connection = DbProviderFactories.GetFactory(providerName).CreateConnection();
connection.ConnectionString = connectionString;
connection.Open();
return connection;
}
}
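Usage from a controller method might then look something like this (a sketch; it assumes the repositories obtain their connection from ConnectionScope.Current internally, and service1/service2 stand in for your real services):
using (new ConnectionScope(connectionString, providerName))
using (var transaction = new TransactionScope())
{
    service1.CreateNewObject();
    service2.DoOtherWork();
    // Only the outermost scope (UserCount == 1) actually commits, so any
    // nested Commit() calls made inside the services are harmless.
    transaction.Commit();
}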
Wrap the operation in a TransactionScope.
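Assuming that means System.Transactions.TransactionScope (rather than the custom class above), the controller method could look roughly like this:
using (var scope = new System.Transactions.TransactionScope())
{
    // Work done on connections opened inside the scope enlists in the
    // ambient transaction automatically.
    service.CreateNewObject();

    // If an exception is thrown before Complete(), everything rolls back
    // when the scope is disposed.
    scope.Complete();
}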
You might want to implement the transaction model used by Workflow Foundation. It basically has an interface that all "components" implement. After each one does its main work successfully, the host calls the "commit" method on each; if one failed, it calls the "rollback" method.
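A rough sketch of that idea (not the actual Workflow Foundation types):
public interface ITwoPhaseComponent
{
    void DoWork();
    void Commit();
    void Rollback();
}

public class TwoPhaseHost
{
    public void Run(IEnumerable<ITwoPhaseComponent> components)
    {
        var completed = new List<ITwoPhaseComponent>();
        try
        {
            foreach (var component in components)
            {
                component.DoWork();
                completed.Add(component);
            }
            // Every component succeeded: commit all of them.
            foreach (var component in completed)
                component.Commit();
        }
        catch
        {
            // Something failed: roll back whatever had already done its work.
            foreach (var component in completed)
                component.Rollback();
            throw;
        }
    }
}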
