I'm using Autofac in my application and struggling to get this situation working.
I registered my type with InstancePerLifetimeScope, but this rule applies to the root container too.
I have an instance that should be shared only for requests made within a child lifetime scope, but a new instance should always be returned for requests made directly to the root container.
To better describe:
I have a mobile app that uses Autofac to resolve dependencies.
I build one global container for the application. When the user navigates to a new page, I open a new scope from the container for this page.
The page could access an internal sqlite database using EntityFrameworkCore. The instance of the DbContext used to access the database should be the same for all dependencies in the page's scope.
I also have a ConfigurationService class that I use like a dictionary (key/value), but stored in the database too.
This class should perform its database updates outside of any page scope, without interacting with other changes made from the page.
When a single configuration is changed, it requests an instance of the DbContext, makes the changes, and disposes the instance.
The problem here is that when the ConfigurationService requests the DbContext instance from the container (not a scope), the DbContext becomes a shared instance, and the connection stays alive until the container is disposed.
I could open a scope for this, but that pushes control of the scope lifetime onto every class that falls into the same situation.
UPDATE 1:
The following code represents the dependency hierarchy and the situation I described above quite well.
// CLASSES
public class Context
{
}
public class Repository
{
public Repository(Context context)
{
Context = context;
}
public Context Context { get; }
}
public class ViewModel
{
public ViewModel(Repository repository1, Repository repository2)
{
Repository1 = repository1;
Repository2 = repository2;
}
public Repository Repository1 { get; }
public Repository Repository2 { get; }
}
// SAMPLE
var repository1 = container.Resolve<Repository>();
var repository2 = container.Resolve<Repository>();
var viewModel = container.Resolve<ViewModel>();
The Context reference from repository1 and repository2 variables should be different instances.
The Context reference from viewModel.Repository1 and viewModel.Repository2 should be the same instance.
ViewModel is one of many classes that could have a dependency hierarchy like that.
The way I got it to work for my situation was to create a service that manages the instances as singletons, but only within a scope that is not the root.
// CLASSES
public class ScopeSingletons
{
private readonly List<object> _instances = new List<object>();
public T Get<T>(Func<T> factory)
{
var instance = _instances.OfType<T>().SingleOrDefault();
if (instance == null)
_instances.Add(instance = factory());
return instance;
}
}
// SAMPLE
var builder = new ContainerBuilder();
builder.RegisterType<ScopeSingletons>().AsSelf().InstancePerLifetimeScope();
builder.RegisterType<ViewModel>().AsSelf();
builder.RegisterType<Repository>().AsSelf();
builder.Register<Context>(c =>
{
    // Autofac's root lifetime scope is tagged "root"; in that case always
    // create a fresh Context, otherwise reuse the instance held by the
    // scope's ScopeSingletons service.
    if (Equals((c as IInstanceLookup)?.ActivationScope.Tag, "root"))
        return new Context();
    return c.Resolve<ScopeSingletons>().Get(() => new Context());
});
var container = builder.Build();
var scope = container.BeginLifetimeScope();
var repository1 = container.Resolve<Repository>();
var repository2 = container.Resolve<Repository>();
var viewModel = scope.Resolve<ViewModel>();
In this case, everything works as I need: Context resolutions are unique for every request within the root scope and shared within a child scope.
This is... sort of a complicated thing and somewhat implies there may be some design issues. From my own experience, whenever I find I want to do X except this one weird case where I want Y it means I designed something wrong.
But let's say it's what you need.
From the "requirements" it sounds like the ConfigurationServices usage...
Needs to control the creation and disposal of its own DbContext instances
Is generally a singleton or otherwise lives at the root container level because it's used outside the context of your scope creation mechanism.
If resolved inside a "page" it doesn't have to "share" the DbContext with the page. (It could but it doesn't have to.)
Sounds like a prime use case for the Owned<T> relationship.
I'd set it up so...
ConfigurationServices is a stateless singleton.
When a context is needed in ConfigurationServices it handles the resolution, execution, and cleanup on the context.
It'd look something like this:
public class ConfigurationServices
{
    private readonly Func<Owned<DbContext>> _contextFactory;

    public ConfigurationServices(Func<Owned<DbContext>> contextFactory)
    {
        this._contextFactory = contextFactory;
    }

    public void DoWork()
    {
        // Dispose the Owned<T> wrapper, not just its Value, so the small
        // lifetime scope behind it is released as well.
        using (var ownedContext = this._contextFactory())
        {
            ownedContext.Value.DoSomethingInDatabase();
        }
    }
}
The Func<T> relationship handles resolving - on the fly - a brand new instance of Owned<T>. The Owned<T> part means the container won't hold a reference to it for disposal and will instead let you control disposal.
Given that ConfigurationServices would be a stateless singleton in this case, you could just register it like this:
builder.RegisterType<ConfigurationServices>()
.SingleInstance();
Regardless of where it gets resolved, it'll come from the root container scope (all singletons do), which means the Func<Owned<DbContext>> will also come from there... but since you're using Owned<T>, you won't have an inadvertent memory leak. The container won't hold the references for disposal.
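For completeness, a minimal registration sketch of the whole arrangement (assuming your EF Core context is registered per lifetime scope; Autofac supplies the Func<Owned<DbContext>> relationship automatically, with no extra registration):

```csharp
using Autofac;

var builder = new ContainerBuilder();

// DbContext here stands in for your concrete EF Core context type.
builder.RegisterType<DbContext>().AsSelf().InstancePerLifetimeScope();
builder.RegisterType<ConfigurationServices>().SingleInstance();

var container = builder.Build();

// Resolving anywhere - root or page scope - yields the same singleton,
// and each DoWork() call creates and disposes its own context.
var services = container.Resolve<ConfigurationServices>();
services.DoWork();
```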
Related
In my ASP.NET Core app I need a singleton service that I can reuse for the lifetime of the application. To construct it, I need a DbContext (from EF Core), but that is a scoped service and not thread safe.
Therefore I am using the following pattern to construct my singleton service. It looks kind of hacky, so I was wondering whether this is an acceptable approach that won't lead to any problems?
services.AddScoped<IPersistedConfigurationDbContext, PersistedConfigurationDbContext>();
services.AddSingleton<IPersistedConfigurationService>(s =>
{
ConfigModel currentConfig;
using (var scope = s.CreateScope())
{
var dbContext = scope.ServiceProvider.GetRequiredService<IPersistedConfigurationDbContext>();
currentConfig = dbContext.retrieveConfig();
}
return new PersistedConfigurationService(currentConfig);
});
...
public class ConfigModel
{
string configParam { get; set; }
}
What you're doing is not good and can definitely lead to issues. Since this is being done in the service registration, the scoped service is going to be retrieved once, when your singleton is first injected. In other words, this code here is only going to run once for the lifetime of the service you're registering, which, since it's a singleton, means it's only going to happen once, period. Additionally, the context you're injecting here only exists within the scope you've created, which goes away as soon as the using statement closes. As such, by the time you actually try to use the context in your singleton, it will have been disposed, and you'll get an ObjectDisposedException.
If you need to use a scoped service inside a singleton, then you need to inject IServiceProvider into the singleton. Then, you need to create a scope and pull out your context when you need to use it, and this will need to be done every time you need to use it. For example:
public class PersistedConfigurationService : IPersistedConfigurationService
{
private readonly IServiceProvider _serviceProvider;
public PersistedConfigurationService(IServiceProvider serviceProvider)
{
_serviceProvider = serviceProvider;
}
public async Task Foo()
{
using (var scope = _serviceProvider.CreateScope())
{
var context = scope.ServiceProvider.GetRequiredService<IPersistedConfigurationDbContext>();
// do something with context
}
}
}
Just to emphasize, again, you will need to do this in each method that needs to utilize the scoped service (your context). You cannot persist this to an ivar or something. If you're put off by the code, you should be, as this is an antipattern. If you must get a scoped service in a singleton, you have no choice, but more often than not, this is a sign of bad design. If a service needs to use scoped services, it should almost invariably be scoped itself, not singleton. There's only a few cases where you truly need a singleton lifetime, and those mostly revolve around dealing with semaphores or other state that needs to be persisted throughout the life of the application. Unless there's a very good reason to make your service a singleton, you should opt for scoped in all cases; scoped should be the default lifetime unless you have a reason to do otherwise.
Although Dependency injection: Service lifetimes documentation in ASP.NET Core says:
It's dangerous to resolve a scoped service from a singleton. It may cause the service to have incorrect state when processing subsequent requests.
But in your case this is not the issue. Actually, you are not resolving the scoped service from the singleton; it just gets an instance of the scoped service whenever it requires one. So your code should work properly without any disposed-context error!
But another potential solution can be using IHostedService. Here are the details about it:
Consuming a scoped service in a background task (IHostedService)
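The gist of the linked approach: a hosted service takes IServiceScopeFactory and creates a scope per unit of work. A rough sketch (the service name and the refresh interval are assumptions, not from the linked doc):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class ConfigRefreshService : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;

    public ConfigRefreshService(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // A fresh scope per iteration keeps the scoped DbContext short-lived.
            using (var scope = _scopeFactory.CreateScope())
            {
                var dbContext = scope.ServiceProvider
                    .GetRequiredService<IPersistedConfigurationDbContext>();
                // ... read/refresh configuration with dbContext ...
            }
            await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken);
        }
    }
}
```

Register it with services.AddHostedService<ConfigRefreshService>() alongside the scoped context registration.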
Looking at the name of this service, I think what you need is a custom configuration provider that loads configuration from the database at startup (once only). Why don't you do something like the following instead? It is a better design, more of a framework-compliant approach, and also something you can build as a shared library that other people can benefit from (or that you can benefit from in multiple projects).
public class Program
{
public static void Main(string[] args)
{
CreateWebHostBuilder(args).Build().Run();
}
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
WebHost.CreateDefaultBuilder(args)
.UseStartup<Startup>()
.ConfigureAppConfiguration((context, config) =>
{
var builtConfig = config.Build();
var persistentConfigBuilder = new ConfigurationBuilder();
var connectionString = builtConfig["ConnectionString"];
persistentConfigBuilder.AddPersistentConfig(connectionString);
var persistentConfig = persistentConfigBuilder.Build();
config.AddConfiguration(persistentConfig);
});
}
Here - AddPersistentConfig is an extension method built as a library that looks like this.
public static class ConfigurationBuilderExtensions
{
public static IConfigurationBuilder AddPersistentConfig(this IConfigurationBuilder configurationBuilder, string connectionString)
{
return configurationBuilder.Add(new PersistentConfigurationSource(connectionString));
}
}
class PersistentConfigurationSource : IConfigurationSource
{
public string ConnectionString { get; set; }
public PersistentConfigurationSource(string connectionString)
{
ConnectionString = connectionString;
}
public IConfigurationProvider Build(IConfigurationBuilder builder)
{
return new PersistentConfigurationProvider(new DbContext(ConnectionString));
}
}
class PersistentConfigurationProvider : ConfigurationProvider
{
private readonly DbContext _context;
public PersistentConfigurationProvider(DbContext context)
{
_context = context;
}
public override void Load()
{
// Using _context
// Load Configuration as valuesFromDb
// Set Data
// Data = valuesFromDb.ToDictionary<string, string>...
}
}
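To make the commented-out part concrete, here is a hedged sketch of Load (the ConfigEntry entity and its Key/Value properties are assumptions):

```csharp
public override void Load()
{
    // Data is the protected IDictionary<string, string> that
    // ConfigurationProvider exposes to the configuration system.
    Data = _context.Set<ConfigEntry>()
        .ToDictionary(e => e.Key, e => e.Value,
            StringComparer.OrdinalIgnoreCase);
}
```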
I have the following usage of Autofac
public class Executor
{
private readonly ILifetimeScope rootScope;
public Executor(ILifetimeScope rootScope)
{
this.rootScope = rootScope;
}
public async Task ExecuteAsync(IData data)
{
using (ILifetimeScope childScope = this.rootScope.BeginLifetimeScope(builder =>
{
var module = new ExecutionModule(data);
builder.RegisterModule(module);
}))
{
// resolve services that depend on IData and do the work
}
}
}
public class ExecutionModule : Module
{
private readonly IData data;
public ExecutionModule(IData data)
{
this.data = data;
}
protected override void Load(ContainerBuilder builder)
{
builder.RegisterInstance(this.data).As<IData>();
}
}
ExecuteAsync is called many times during the application's runtime, and IData represents input from the outside (it varies for each ExecuteAsync call) that should be shared with the services inside the lifetime scope.
After running the app for a while I started experiencing memory exhaustion. Later, via additional profiling, I determined that instances of IData survived garbage collections and seemed to be causing a memory leak.
As an experiment, the registration of ExecutionModule (i.e., the RegisterInstance call) was removed from the code, and the memory leak was eliminated.
Autofac's code has interesting remarks for BeginLifetimeScope:
/// The components registered in the sub-scope will be treated as though they were
/// registered in the root scope, i.e., SingleInstance() components will live as long
/// as the root scope.
The question is: if RegisterInstance means registering a singleton, then in this case instances of IData will live as long as the root scope, even though they can only be resolved inside the childScope where they were registered - am I right?
I think that comment probably needs to be revisited and fixed up. A lot has changed in the (looks like six?) years since that was added to the ILifetimeScope interface and while the internals changed, the interface didn't so the docs haven't been revisited. I've added an issue to get those docs updated.
Here's a quick test I ran using Autofac 6:
using System;
using Autofac;
using Autofac.Diagnostics;
namespace AutofacDemo
{
public static class Program
{
public static void Main()
{
var builder = new ContainerBuilder();
builder.RegisterType<Dependency>().SingleInstance();
using var container = builder.Build();
var dep = container.Resolve<Dependency>();
Console.WriteLine("Dependency registered as single instance in root scope:");
Console.WriteLine("ID: {0}", dep.Id);
Console.WriteLine("Scope tag: {0}", dep.ScopeTag);
Console.WriteLine();
var rootScopeTag = dep.ScopeTag;
using var standaloneScope = container.BeginLifetimeScope();
dep = standaloneScope.Resolve<Dependency>();
Console.WriteLine("Dependency registered in root, resolved in child scope:");
Console.WriteLine("ID: {0}", dep.Id);
Console.WriteLine("Scope tag: {0}", dep.ScopeTag);
Console.WriteLine("Resolved from root? {0}", dep.ScopeTag == rootScopeTag);
Console.WriteLine();
using var singleInstanceScope = container.BeginLifetimeScope(
b => b.RegisterType<Dependency>()
.SingleInstance());
dep = singleInstanceScope.Resolve<Dependency>();
Console.WriteLine("Dependency registered as single instance in child scope:");
Console.WriteLine("ID: {0}", dep.Id);
Console.WriteLine("Scope tag: {0}", dep.ScopeTag);
Console.WriteLine("Resolved from root? {0}", dep.ScopeTag == rootScopeTag);
Console.WriteLine();
var instance = new Dependency();
using var registerInstanceScope = container.BeginLifetimeScope(
b => b.RegisterInstance(instance)
.OnActivating(e => e.Instance.ScopeTag = e.Context.Resolve<ILifetimeScope>().Tag));
dep = registerInstanceScope.Resolve<Dependency>();
Console.WriteLine("Dependency registered as instance in child scope:");
Console.WriteLine("ID: {0}", dep.Id);
Console.WriteLine("Scope tag: {0}", dep.ScopeTag);
Console.WriteLine("Resolved from root? {0}", dep.ScopeTag == rootScopeTag);
}
}
public class Dependency
{
public Dependency()
{
this.Id = Guid.NewGuid();
}
public Dependency(ILifetimeScope scope)
: this()
{
this.ScopeTag = scope.Tag;
}
public Guid Id { get; }
public object ScopeTag { get; set; }
}
}
What this does is try out a few ways to resolve a .SingleInstance() or .RegisterInstance<T>() component and show which scope they actually come from. It does this by injecting ILifetimeScope into the constructor - you'll always get the lifetime scope from which the component is being resolved. The console output for this looks like:
Dependency registered as single instance in root scope:
ID: 056e1584-fa2a-4657-b58c-ccc8bfc504d2
Scope tag: root
Dependency registered in root, resolved in child scope:
ID: 056e1584-fa2a-4657-b58c-ccc8bfc504d2
Scope tag: root
Resolved from root? True
Dependency registered as single instance in child scope:
ID: 3b410502-b6f2-4670-a182-6b3eab3d5807
Scope tag: System.Object
Resolved from root? False
Dependency registered as instance in child scope:
ID: 20028683-23c1-48a0-adbe-94c3a8180ed7
Scope tag: System.Object
Resolved from root? False
.SingleInstance() lifetime scope items will always be resolved from the lifetime scope in which they're registered. This stops you from creating a weird captive dependency problem. You'll notice this in the first two resolution operations - resolving both from the root container and a nested scope shows the scope tag is the root scope tag, so we know it's coming from the container in both cases.
RegisterInstance<T> doesn't have dependencies because it's constructed, but I simulate the functionality by adding an OnActivating handler. In this case, we see the object actually gets resolved from a child lifetime scope (sort of an empty System.Object tag) and not the root.
What you'll see in the Autofac source is that any time you BeginLifetimeScope() it creates a "scope restricted registry" which is where the components registered in the child scope are stored. That gets disposed when the scope is disposed.
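A small way to confirm this yourself: register a disposable instance in a child scope and watch who disposes it. This is a sketch under the assumption that the instance is not marked ExternallyOwned():

```csharp
using System;
using Autofac;

public class TrackedData : IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() => Disposed = true;
}

public static class ScopeRestrictionDemo
{
    public static void Main()
    {
        using var container = new ContainerBuilder().Build();
        var data = new TrackedData();

        using (var child = container.BeginLifetimeScope(
            b => b.RegisterInstance(data)))
        {
            child.Resolve<TrackedData>();
        } // the child scope is disposed here, taking the tracked instance with it

        // The child scope, not the root container, held the instance.
        Console.WriteLine("Disposed by child scope? " + data.Disposed);
    }
}
```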
While I can't guarantee what you're seeing isn't an issue, my experience has been that simplifying code into a Stack Overflow question can hide some of the complexities of a real-world system that are actually important. For example:
If the IData is disposable, it'll get held until the child lifetime scope is disposed; and if the disposal of the child scope isn't getting called, that's a leak.
I've seen folks who accidentally use the original ContainerBuilder inside the BeginLifetimeScope lambda and that causes confusion and issues.
If there's something more complex about the threading or execution model that could stop disposal or abort the async thread before disposal occurs, that could be trouble.
I'm not saying you have any of these problems. I am saying that given the above code there's not enough to actually reproduce the issue, so it's possible the leak is a symptom of something larger that has been omitted.
If it was me, I'd:
Make sure I was on the latest Autofac if I wasn't already.
See if it makes a difference registering the data object not inside a module. It shouldn't matter, but... worth a check.
Really dive into the profiler results to see what is holding the data objects. Is it the root container? Or is it each child scope? Is there something holding onto those child scopes and not letting them get disposed or cleaned up?
And, possibly related, we're about to cut a release that includes a couple of fixes for ConcurrentBag usage on some platforms which might help. That should be 6.2.0 and I'll hopefully get that out today.
Given the following scenario...
I am concerned about two things...
1) Is it okay to inject a provider into a business model object - as I did with the Folder implementation, because I want to load subfolders on demand?
2) Since I am injecting the DbContext into the Sql implementation of IFolderDataProvider, the context could be disposed or it could live on forever; therefore, should I instantiate the context in the constructor?
If this design is incorrect, then someone please tell me how business models should be loaded.
//Business model.
interface IFolder
{
int Id { get; }
IEnumerable<IFolder> GetSubFolders();
}
class Folder : IFolder
{
private readonly int id_;
private readonly IFolderDataProvider provider_;
public Folder(int id, IFolderDataProvider provider)
{
id_ = id;
provider_ = provider;
}
public int Id => id_;
public IEnumerable<IFolder> GetSubFolders()
{
return provider_.GetSubFoldersByParentFolderId(id_);
}
}
interface IFolderDataProvider
{
IFolder GetById(int id);
IEnumerable<IFolder> GetSubFoldersByParentFolderId(int id);
}
class SqlFolderDataProvider : IFolderDataProvider
{
private readonly DbContext context_;
public SqlFolderDataProvider(DbContext context)
{
context_ = context;
}
public IFolder GetById(int id)
{
//uses the context to fetch the required folder entity and translates it to the business object.
return new Folder(id, this);
}
public IEnumerable<IFolder> GetSubFoldersByParentFolderId(int id)
{
//uses the context to fetch the required subfolders entities and translates it to the business objects.
}
}
Is it okay to inject a provider into a business model object? - like I did with the Folder implementation because I want to load Sub-folders on demand.
Yes - how else would you be able to call the provider and get the data?
However, the suffix DataProvider is confusing because it suggests the class you use to connect to the database. I recommend changing it to something else. Examples: Repository, Context.
Since I am injecting the DbContext in the Sql implementation of IFolderDataProvider, the context could be disposed or it could live on forever, therefore should I instantiate the context in the constructor?
It won't necessarily live on forever. You decide its lifespan in your ConfigureServices function when you add it as a service, so you can change its scope from Singleton to whatever you like. I personally set the scope of my DbContext service to Transient, and I also initialize it there with the connection string:
services.AddTransient<IDbContext, DbContext>(provider =>
    new DbContext(Configuration.GetConnectionString("DefaultDB")));
I then open and close the database connection in every function in my data-layer files (what you call a provider). I open it inside a using statement, which guarantees closing the connection under any condition (normal or exception). Something like this:
public async Task<Location> GetLocation(int id) {
string sql = "SELECT * FROM locations WHERE id = @p_Id;";
using (var con = _db.CreateConnection()) {
//get results
}
}
Is it okay to inject a provider into a business model object
Yes, if you call it a "business" provider :). Actually, don't take all this terminology ("inject", "provider") too seriously. As long as what you pass (to the business model layer's method/constructor) is an interface declared in the business model layer (and you document any abstraction leaks), you are OK.
should I instantiate the context in the constructor?
This could be seen as an abstraction leak that should be documented. A reused context can be corrupted, can be shared with another thread, and so on - all of which can bring side effects. So developers tend to create one "heavy" object like a DbContext per "user request" (that usually means per service call: using(var context = new DbContext())), but not always; e.g., sometimes I share it with an authentication service call, to check whether the next operation is allowed for this user. By the way, DbContext is quite quick to create, so do not reuse it just for "optimization".
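Applied to the provider from the question, the per-call pattern could look like this sketch (the injected Func<DbContext> factory is an assumption; you could equally new up the context directly):

```csharp
class SqlFolderDataProvider : IFolderDataProvider
{
    private readonly Func<DbContext> contextFactory_;

    public SqlFolderDataProvider(Func<DbContext> contextFactory)
    {
        contextFactory_ = contextFactory;
    }

    public IFolder GetById(int id)
    {
        // One short-lived context per call: nothing shared across
        // threads, no long-lived change tracker.
        using (var context = contextFactory_())
        {
            // ... fetch the folder entity via context and translate it ...
            return new Folder(id, this);
        }
    }

    public IEnumerable<IFolder> GetSubFoldersByParentFolderId(int id)
    {
        using (var context = contextFactory_())
        {
            // ... fetch and translate the subfolder entities ...
            yield break; // placeholder
        }
    }
}
```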
I'm working on a web app where I instantiated the singleton class below in Startup.cs so it can be reused (more like making a programmable session):
app.CreatePerOwinContext<XYZManager>(XYZManager.Create);
But I'm encountering a problem: as soon as UserA logs in to the app, the information inside the XYZManager class gets overwritten when UserB enters (and vice versa) when they perform some action.
The problem, I think, is that they're sharing the same application pool. How can this be resolved - any hack?
Meanwhile, the whole essence of this approach is that I want to be able to call any getter/setter or method inside XYZManager for the currently logged-in user, for example:
HttpContext.GetOwinContext().Get<XYZManager>().GetFullDetails();
But it sometimes returns details for another logged-on user, depending on operations.
public class XYZManager : IDisposable
{
private static XYZManager instance { get; set; }
public static XYZManager Create()
{
var xyzManager = instance ?? (instance = new XYZManager());
xyzManager.ApplicationDbContext = new ApplicationDbContext();
return xyzManager;
}
public string GetFullDetails () {
return "blah blah";
}
}
As described on MSDN, the CreatePerOwinContext method accepts a factory method to create an instance of your class (in this case XYZManager), and it keeps that instance for all requests within the same context, accessible via:
HttpContext.GetOwinContext().Get<XYZManager>()
So each time a new Owin context is created (i.e., a new HTTP request is received), XYZManager.Create will be invoked. In your case this method returns the same instance, so all contexts will share that instance.
Depending on whether you want to share that instance across all contexts, you should return either new instances or the same one. Also note that for singleton shared instances there is a different Owin extension method that will keep the singleton for you.
Check out this answer, as it explains the purpose of the CreatePerOwinContext method and provides some examples of how to create an inter-context shared instance.
This is how you create a per-context shared service:
public class XYZManager : IDisposable
{
public static XYZManager Create()
{
return new XYZManager(new ApplicationDbContext());
}
private readonly ApplicationDbContext DbContext;
public XYZManager(ApplicationDbContext dbContext)
{
DbContext = dbContext;
}
public string SomeInfo {get;set;}
public string GetFullDetails ()
{
return DbContext.getFullDetails();
}
// dispose
}
Note: since you will be creating instances each time a new Owin context is created, it is advisable to make sure any unmanaged objects are disposed.
I have a MVC application with all Ninject stuff wired up properly. Within the application I wanted to add functionality to call a WCF service, which then sends bulk messages (i.e. bulk printing) to RabbitMQ queue .
A 'processor' app subscribes to messages in the queue and process them. This is where I also want to update some stuff in the database, so I want all my services and repositories from the MVC app to be available too.
The processor app implements the following:
public abstract class KernelImplementation
{
private IKernel _kernel;
public IKernel Kernel
{
get
{
if (_kernel != null)
return _kernel;
else
{
_kernel = new StandardKernel(new RepositoryModule(),
new DomainModule(),
new ServiceModule(),
new MessageModule());
return _kernel;
}
}
}
}
All Ninject repository bindings are specified within RepositoryModule, which is also used within MVC app and look like this:
Bind<IReviewRepository>().To<ReviewRepository>().InCallScope();
The processor class
public class Processor : KernelImplementation
{
private readonly IReviewPrintMessage _reviewPrintMessage;
public Processor()
{
_reviewPrintMessage = Kernel.Get<IReviewPrintMessage>();
[...]
_bus.Subscribe<ReviewPrintContract>("ReviewPrint_Id",
(reviewPrintContract) => _reviewPrintMessage.ProcessReviewPrint(reviewPrintContract));
//calling ProcessReviewPrint where I want my repositories to be available
}
}
Everything works fine until I update the database from the MVC app or directly in the database. The processor app doesn't know anything about those changes, and the next time it tries to process something, it works on a 'cached' DbContext. I'm sure it's something to do with not disposing the DbContext properly, but I'm not sure what scope should be used for a console app (I tried all sorts of different scopes to no avail).
The only solution I can think of at the moment is to call the WCF service back from the processor app and perform all the necessary updates within the service, but I would prefer to avoid that.
UPDATE: Adding update logic
Simplified ReviewPrintMessage:
public class ReviewPrintMessage : IReviewPrintMessage
{
private readonly IReviewService _reviewService;
public ReviewPrintMessage(IReviewService reviewService)
{
_reviewService = reviewService;
}
public void ProcessReviewPrint(ReviewPrintContract reviewPrintContract)
{
var review =
_reviewService.GetReview(reviewPrintContract.ReviewId);
[...]
//do all sorts of stuff here
[...]
_reviewService.UpdateReview(review);
}
}
The update method in ReviewService:
public void UpdateTenancyAgreementReview(TenancyAgreementReview review)
{
_tenancyAgreementReviewRepository.Update(review);
_unitOfWork.Commit();
}
RepositoryBase:
public abstract class EntityRepositoryBase<T> where T : class
{
protected MyContext _dataContext;
protected EntityRepositoryBase(IDbFactory dbFactory)
{
this.DbFactory = dbFactory;
_dbSet = this.DataContext.Set<T>();
}
[...]
public virtual void Update(T entity)
{
try
{
DataContext.Entry(entity).State = EntityState.Modified;
}
catch (Exception exception)
{
throw new EntityException(string.Format("Failed to update entity '{0}'", typeof(T).Name), exception);
}
}
}
Context itself is bound like this:
Bind<MyContext>().ToSelf().InCallScope();
From the description of scopes, I thought Transient scope was the right choice, but as I said earlier, I tried all sorts, including RequestScope, TransientScope, NamedScope, and even Singleton (although I knew it wouldn't be the desired behaviour), but none of them seems to dispose the context properly.
What you'll need is one DbContext instance per transaction.
Other "applications", like web applications or WCF services, may do one transaction per request (and thus use something like InRequestScope()). Also note that these applications create an object graph for each request. However, that is a concept unknown to your console application.
Furthermore, scoping only affects the instantiation of objects. Once they are instantiated, scoping does not have any effect on them.
So one way to solve your issue would be to create the (relevant) object tree/graph per transaction; then you could use InCallScope() (InCallScope really means "once per instantiation of an object graph", see here).
That would mean that you'd need a factory for IReviewPrintMessage (have a look at ninject.extensions.factory) and create an instance of IReviewPrintMessage every time you want to execute IReviewPrintMessage.ProcessReviewPrint.
Now you have re-created the "per request pattern".
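With ninject.extensions.factory, the shape of that might be the following sketch (IReviewPrintMessageFactory is an assumed name; the extension generates its implementation for you, and IReviewPrintMessage/ReviewPrintContract are the types from the question):

```csharp
public interface IReviewPrintMessageFactory
{
    // Each call resolves a fresh IReviewPrintMessage object graph,
    // so InCallScope() yields a new MyContext per transaction.
    IReviewPrintMessage CreateReviewPrintMessage();
}

// Binding: kernel.Bind<IReviewPrintMessageFactory>().ToFactory();

public class Processor
{
    private readonly IReviewPrintMessageFactory factory_;

    public Processor(IReviewPrintMessageFactory factory)
    {
        factory_ = factory;
    }

    public void OnMessage(ReviewPrintContract contract)
    {
        // New graph (and thus new context) for every message.
        var handler = factory_.CreateReviewPrintMessage();
        handler.ProcessReviewPrint(contract);
    }
}
```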
However, with regard to the composition root pattern, this is not recommended.
Alternative: you can also re-create only the DbContext as needed. Instead of passing it around everywhere (DbContext as an additional parameter on almost every method), use SynchronizationContext-local storage (or, if you don't use TPL/async-await, a ThreadLocal). I've already described this method in more detail here.
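For illustration, a minimal ThreadLocal-based sketch of that alternative (MyContext is the EF context from the question; note this variant is only safe while a transaction stays on one thread - with async/await you would want the SynchronizationContext-local variant instead):

```csharp
using System;
using System.Threading;

public static class AmbientContext
{
    private static readonly ThreadLocal<MyContext> current_ =
        new ThreadLocal<MyContext>();

    // Repositories call this instead of taking MyContext as a parameter.
    public static MyContext Current =>
        current_.Value ?? throw new InvalidOperationException(
            "No ambient MyContext; wrap the call in BeginTransaction().");

    // Create the context at the start of a transaction; disposing the
    // returned handle disposes the context and clears the slot.
    public static IDisposable BeginTransaction()
    {
        var context = new MyContext();
        current_.Value = context;
        return new Handle(context);
    }

    private sealed class Handle : IDisposable
    {
        private readonly MyContext context_;
        public Handle(MyContext context) { context_ = context; }

        public void Dispose()
        {
            context_.Dispose();
            current_.Value = null;
        }
    }
}
```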