My application is built around "Services" like this:
public class ProfileService : BaseService {
private CommService _commService;
public ProfileService(CommService commService) {
_commService = commService;
}
public ApiResponseDto GetProfile(string profileUsername) {
    using (var db = new AppContext()) { return db.DoStuff(); }
}
}
What I would like to do is push the db instantiation into BaseService, but I don't want to create a DbContext and incur its cost when I don't need it. So I'm thinking about doing something like this:
public class BaseService {
public AppContext _db;
public AppContext db() {
    // Lazily create the context on first use and cache it for later calls.
    return _db ?? (_db = new AppContext());
}
}
And then all of my methods will access the db via db().DoStuff().
I don't like the idea of parentheses everywhere, but I do like the idea of cleaning up my services' footprints.
My question is: if I create an instance of DbContext and don't use it, is there any cost? Or is the object just instantiated for future use? I hate to ask for opinions here as I know it's not allowed, but is this a step in the right direction for keeping things DRY?
Unit of Work Pattern
DbContext is effectively an implementation of the 'unit of work' pattern: once the DbContext is created, all changes made to its DbSets are persisted in one go when you call SaveChanges.
So the further question you need to answer in order to properly answer your question is: What is the scope of the changes that make up your unit of work? In other words - what set of changes need to be made atomically - all succeed, or all fail?
A practical (if simplified) example of this: say you have an API endpoint that exposes an operation allowing the client to submit an order. The controller uses OrderService to submit the order, and then InventoryService to update the inventory associated with the items in the order. If each service has its own DbContext, you run the risk that the OrderService succeeds in persisting the order submission while the InventoryService fails to persist the inventory update.
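To make that concrete, here is a sketch (the entity and member names are hypothetical) of both services receiving the same injected context, so the caller can commit both changes as a single unit of work:

```csharp
// Sketch: both services share one AppContext instance, so a single
// SaveChanges call persists the order and the inventory update together.
public class OrderService
{
    private readonly AppContext _context;
    public OrderService(AppContext context) { _context = context; }

    public void SubmitOrder(Order order) => _context.Orders.Add(order);
}

public class InventoryService
{
    private readonly AppContext _context;
    public InventoryService(AppContext context) { _context = context; }

    public void ReserveStock(Order order)
    {
        foreach (var line in order.Lines)
            line.Product.Stock -= line.Quantity;
    }
}

// In the controller (which owns the unit-of-work boundary):
// _orderService.SubmitOrder(order);
// _inventoryService.ReserveStock(order);
// _context.SaveChanges(); // both changes succeed or fail together
```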
Dependency Injection
To combat this, a common pattern is to create a context per request: let your IoC container create and dispose the context, and make it available for injection into services for the duration of that request. This blog post gives a few options for DbContext management, and includes an example of configuring Ninject to do it.
What this means is your ctor will look like:
public ProfileService(CommService commService, AppContext context) {
_commService = commService;
_context = context;
}
And you can safely use the context there without having to worry about how it was created or where it came from.
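With Ninject (the container used in the linked post), a per-request binding might look like this - a sketch, assuming an AppContext derived from DbContext and the Ninject.Web.Common package:

```csharp
// One AppContext per web request; Ninject disposes it when the request ends.
kernel.Bind<AppContext>().ToSelf().InRequestScope();

// Services are resolved per request and receive that request's context.
kernel.Bind<ProfileService>().ToSelf().InRequestScope();
```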
Mehdi's DbContextScopeFactory
However, my preferred approach for more complex applications is an excellent open source library, documented here: http://mehdi.me/ambient-dbcontext-in-ef6/. Injecting a DbContext per request will work fine for simpler applications, but as your application gets more involved (e.g. multiple contexts per application, multiple databases, etc.), the finer-grained control offered by his IDbContextScopeFactory is invaluable.
Edit to Add - Pros and Cons of Injection vs Construction
Following your comment asking for pros/cons of the approach you proposed, I'd say that generally, injection of dependencies (including DbContext) is a far more flexible and powerful approach, and can still achieve the goal of ensuring your devs don't have to be concerned with dbcontext lifecycle management.
The pros and cons are generally the same for all instances of dependency injection not just db context, but here are a few concrete issues with constructing the context within the service (even in a base service):
Each service will have its own instance of the DbContext - this can lead to consistency problems where your unit of work spans tasks carried out by multiple services (see the example above)
It will be much more difficult to unit test your services, as they construct their own dependency. Injecting the DbContext means you can mock it in your unit tests and test functionality without hitting the database
It introduces unmanaged state into your services - if you are using dependency injection, you want the IoC container to manage the lifecycle of services. When your service has no per-request dependencies, the IoC container will create a single instance of the service for the whole application, which means your dbcontext saved to the private member will be used for all requests/threads - this can be a big problem and should be avoided.
(Note: this is less of an issue if you are not using DI and constructing new instances of the services within controllers, but then you are losing the benefits of DI at the controller level as well...)
All services are now locked to using the same DbContext instance - what if, in the future you decide to split your database and some services need to access a different DbContext? Would you create two different BaseServices? Or pass in configuration data to allow the base service to switch? DI would take care of that, because you would just register the two different Context classes, and then the container would provide each service with the context it needs.
Are you returning IQueryables anywhere? If you are, you run the risk that the IQueryable will try to hit the database after the DbContext has gone out of scope and been disposed, at which point it is no longer available.
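A minimal illustration of that risk (type names are hypothetical): because IQueryable execution is deferred, the query only hits the database when it is enumerated, which here happens after the context has been disposed:

```csharp
public IQueryable<Profile> GetActiveProfiles()
{
    using (var db = new AppContext())
    {
        // Nothing has hit the database yet; this is just an expression tree.
        return db.Profiles.Where(p => p.IsActive);
    }
    // The context is disposed as soon as this method returns.
}

// Caller:
// var profiles = GetActiveProfiles().ToList();
// Throws ObjectDisposedException - the query executes against a disposed context.
```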
From a dev perspective, I think nothing is simpler than the DI approach: simply specify the DbContext in your constructor, and let the DI container take care of the rest.
If you are using DbContext per request, you don't even have to create or dispose the context, and you can be confident that IQueryables will be resolvable at any point in the request call stack.
If you use Mehdi's approach, you do have to create a DbContextScope, but that approach is more appropriate if you are going down a repository pattern path and want explicit control over the context scope.
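For reference, typical usage of Mehdi's library looks roughly like this (a sketch based on the linked article; the service, entity, and enum names are mine):

```csharp
public class OrderService
{
    private readonly IDbContextScopeFactory _dbContextScopeFactory;

    public OrderService(IDbContextScopeFactory dbContextScopeFactory)
    {
        _dbContextScopeFactory = dbContextScopeFactory;
    }

    public void MarkAsShipped(int orderId)
    {
        // The scope creates (or joins) an ambient DbContext and defines
        // the boundary of the unit of work explicitly.
        using (var dbContextScope = _dbContextScopeFactory.Create())
        {
            var db = dbContextScope.DbContexts.Get<AppContext>();
            var order = db.Orders.Find(orderId);
            order.Status = OrderStatus.Shipped;
            dbContextScope.SaveChanges();
        }
    }
}
```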
As you can see, I'm far less concerned about the computational cost of constructing a DbContext when it's not needed (as far as I can tell, it's a fairly low cost until you actually use it to hit the db), and more concerned about the application architecture that permits unit testing and decoupling from dependencies.
Related
The Service Locator is considered an anti-pattern. But is it correct to take all the necessary dependencies in the constructor if they are used only under certain conditions?
Approach 1 (Service Locator)
public class MyType
{
public void MyMethod()
{
if (someRareCondition1)
{
var dep1 = Locator.Resolve<IDep1>();
dep1.DoSomething();
}
if (someRareCondition2)
{
var dep2 = Locator.Resolve<IDep2>();
dep2.DoSomething();
}
}
}
Approach 2 (Constructor injection)
public class MyType
{
private readonly IDep1 dep1;
private readonly IDep2 dep2;
public MyType(IDep1 dep1, IDep2 dep2)
{
this.dep1 = dep1;
this.dep2 = dep2;
}
public void MyMethod()
{
if (someRareCondition1)
{
dep1.DoSomething();
}
if (someRareCondition2)
{
dep2.DoSomething();
}
}
}
You can have many different methods that need different dependencies, but only in certain cases. Is it better to use a Service Locator for performance and memory?
Before I talk about the performance difference between the two approaches, I need to set the stage and talk about the Service Locator anti-pattern, because not every callback to the DI Container is an implementation of Service Locator.
Calls to the DI Container (or an abstraction over it) should be avoided in application code, e.g. inside your MVC controllers or in code that is part of your business layer. Callbacks from these parts of your code base can be considered examples of Service Locator.
Callbacks from parts of your application's startup path, a.k.a. the Composition Root, on the other hand, are not considered Service Locator implementations. That's because the Service Locator anti-pattern is more than the mechanical description of a Resolve API; it is a description of the role such calls play in your application. Calls to the Container from inside the Composition Root are fine, beneficial, or even required for your application to function. Therefore, for the remaining part of my answer, I'll refer to "calling back into the DI Container" rather than "using the Service Locator pattern."
When it comes to performance, there are many things to consider. It would be impossible for me to mention every possible performance bottleneck and tweak, but I'll mention the few things I think are most worthwhile to talk about in the context of your question.
Whether the lazy resolving of dependencies by calling back into the container is faster than constructor injection depends on a lot of factors. In general, I would say that in both cases performance is typically irrelevant, as object composition is unlikely to be your application's performance bottleneck. In most cases I/O takes up the bulk of the time, and time is typically better spent optimizing I/O - it yields better performance gains with less investment.
That said, one thing to realize is that DI Containers are typically highly optimized and can do optimizations during compilation of the generated code that composes your application's object graphs. But these optimizations are thrown out when you start to break up object graphs by calling back into the container lazily. This makes constructor injection a more optimized approach, compared to breaking an object graph in pieces and resolving them one by one.
If I use Simple Injector—the DI Container that I maintain—as an example, it does quite aggressive optimizations on generated Expression trees before it starts compiling them. Those optimizations include:
Reducing size of compiled code by reusing compiled code within the graph.
Optimizing the request of scoped components within the graph, by caching them in variables (closures) inside the compiled method. This prevents duplicate dictionary look-ups.
Your mileage will obviously vary, but most DI Containers perform some kind of optimization. I'm unsure what kinds of optimizations the built-in ASP.NET Core DI Container applies though, but AFAIK its optimizations are limited.
There is overhead in calling your Container's Resolve method. At the very least it causes a dictionary lookup from the requested type to the code that is able to compose the graph for that type, while such look-ups tend not to happen (that much) for resolved dependencies. In practice, calls to Resolve also tend to perform validity checks and other required logic that adds overhead to each call. This is another reason why constructor injection is a more optimized approach compared to doing callbacks.
Modern DI Containers are usually optimized so they can resolve big object graphs with ease (some containers do have a limit on the size of the object graph, but that limit is typically pretty big). Their overhead compared to creating those same object graphs manually (using plain old C#) is usually minimal (although differences and exceptions exist). But this only holds if you follow the best practice of keeping your injection constructors simple. When injection constructors are simple, it doesn't matter if you inject dependencies that are only used part of the time.
When you fail to follow this best practice, for instance by having injection constructors that call back to the database or perform logging to disk, resolution of the object graph can slow down considerably. This is especially painful when you're dealing with components that aren't always used, which seems to be the context of your question. Here's an example of a problematic injection constructor:
// This Injection Constructor does more than just receiving its dependencies.
public OrderShippingService(
ILogger logger, IConfigurationProvider provider)
{
// Storing the incoming dependency; this is fine.
this.logger = logger;
// Here it starts using its dependencies; this is problematic.
logger.LogInfo("Creating OrderShippingService.");
this.config = provider.Load<OrderShippingServiceConfig>();
logger.LogInfo("OrderShippingService Config loaded.");
}
My advice, therefore, is: follow the "simple injection constructors" best practice and make sure that injection constructors do no more than receive and store their incoming dependencies. Do not use dependencies from inside the constructor. This practice also helps when dealing with dependencies that are only used part of the time: when those dependencies are fast to create, the problem goes away, and constructor injection will typically still be faster than doing callbacks.
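Applied to the OrderShippingService above, a constructor following this practice only stores its dependencies, and the configuration load is deferred until first use (a sketch; the lazy-property approach is mine, not from the original answer):

```csharp
public class OrderShippingService
{
    private readonly ILogger logger;
    private readonly IConfigurationProvider provider;
    private OrderShippingServiceConfig config;

    // The constructor does no more than receive and store its dependencies.
    public OrderShippingService(ILogger logger, IConfigurationProvider provider)
    {
        this.logger = logger;
        this.provider = provider;
    }

    // The configuration is loaded the first time it is actually needed,
    // so constructing the object graph stays fast.
    private OrderShippingServiceConfig Config =>
        config ?? (config = provider.Load<OrderShippingServiceConfig>());
}
```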
On top of that, there are other best practices that should be followed, such as the Single Responsibility Principle. Following it prevents constructors from accumulating many dependencies and prevents the Constructor Over-Injection code smell. Object graphs that contain classes with many dependencies tend to become much bigger and, therefore, slower to resolve. This best practice doesn't help when dealing with those sometimes-used dependencies, though.
It might be the case, however, that you're unable to refactor such a slow constructor, which forces you to prevent it from being invoked eagerly. But there are other cases in which eager loading can cause problems. That can happen, for instance, when your application uses Composites or Mediators. Composites and Mediators typically wrap many components and forward an incoming call to a limited subset of them - especially a Mediator, which typically forwards the call to a single component. For instance:
// Component using a mediator abstraction.
public class ShipmentController : Controller
{
private readonly IMediator mediator;
public void ShipOrder(ShipOrderCommand cmd) =>
mediator.Execute(cmd);
public void CancelOrder(CancelOrderCommand cmd) =>
mediator.Execute(cmd);
}
In the code above, the IMediator implementation should forward the Execute call to a component that knows how to handle the supplied command. In this example the ShipmentController forwards two different command types to the mediator.
Even with simple injection constructors, the previous example might cause performance problems when the application contains hundreds of those 'handlers' and those handlers contain deep object graphs of their own, all of which are recreated each time ShipmentController is composed.
The following implementation demonstrates these performance issues:
// I'm using C# 9 record type syntax here, because it keeps the example succinct.
record Mediator(IHandler[] Handlers) : IMediator
{
    public void Execute<T>(T command) =>
        Handlers.OfType<IHandler<T>>().Single().Execute(command);
}
In this example, all handlers are eagerly created before Mediator is, and injected into the Mediator's constructor, while a call to Execute just picks one from the list. This could lead to performance issues when there are many handlers that contain many dependencies of their own. This is because in order to execute one handler, all handlers with their dependencies need to be constructed. Not ideal.
To prevent this performance problem, calling back into the DI Container is an option to consider. This doesn't amount to the Service Locator anti-pattern, because the Mediator implementation (and, therefore, the callback) resides inside your Composition Root. A possible IMediator implementation could look like this:
// As long as this implementation is placed inside the Composition Root,
// this is -not- an implementation of the Service Locator anti-pattern.
record Mediator(IServiceProvider Container) : IMediator
{
public void Execute<T>(T cmd) =>
Container.GetService<IHandler<T>>().Execute(cmd);
}
In this case, only the relevant handler is requested from the DI Container—not all of them. This means that the DI Container at this point, only creates the object graph for that particular handler.
In all cases, however, you should avoid calling back into the DI Container from within application code. I would even argue against injecting a Lazy&lt;T&gt; for a conditionally used dependency, even though some DI Containers support this: it complicates the consumer's code and its tests, and makes it easy to forget to apply Lazy&lt;T&gt; to all constructors for that dependency.
Instead, creating a Proxy would be a better approach. That proxy would live inside the Composition Root and would either wrap a Lazy<T> or call back into the Container:
record DelayedDependencyProxy(IServiceProvider Container) : IDependency
{
private IDependency real;
public object SomeMethod()
{
if (real is null)
real = Container.GetService<RealDependency>();
return real.SomeMethod();
}
}
This Proxy keeps the consumers of IDependency clean and oblivious to the use of any mechanism to delay the creation of the dependency. Instead of injecting RealDependency into consumers of IDependency, you now inject DelayedDependencyProxy.
One last but important note: do avoid premature optimization. Prefer constructor injection over container callbacks even if container callbacks are faster. If you suspect a performance bottleneck caused by constructor injection: measure, measure, measure. Verify whether the bottleneck really is in object composition itself or in the constructor of one of your components. And once fixed, verify that the fix gives a performance boost significant enough to justify the increased complexity it causes. A performance win of 1 millisecond is not significant for most applications.
I'm currently building an application, and as it stands I have a single service for each controller (the service handles the business logic for the controller). Each service has its own DbContext.
I've recognised that several services need to perform the same functions (retrieve the same lists of data from the database and perform the same logic on them before returning them). So ideally I need a way for the services to access common functions.
My first thought is to create a simple helper class that each service could use, with simple functions that take a dbcontext as one of the parameters, so that the functions could perform database queries as well as logic and return the result.
Is this a good idea? Would I run into problems by structuring my code this way, or is there a better more robust and accepted approach I should take?
I'd say you're on the right track, but go one step further with the single responsibility principle: http://blog.codinghorror.com/curlys-law-do-one-thing/. It's a proven strategy for keeping code clean. I avoid "helper" classes per se; they get messy by taking on too many responsibilities. Instead, I try to really think about what my class should do, then give it a really good name to remind me that it only does that one thing.
The fact that your services each have their own DbContext can be a problem. Just make sure that if you call upon more than one dependent service, you pass the same DbContext to them all. If your object graph is large, a container like AutoFac will be a big help.
Is the data being returned the same? Are they using their own unique DB context or is it the same DB context?
Generally I would recommend avoiding a helper class here. A helper class is typically used to manipulate objects rather than perform database queries.
Based on your comment there are two ways you could achieve this, one easier than the other.
Option 1:
If your application really is a simple one and you're not too concerned about doing things the 'correct' way, then you could simply create a base service class, update your services to extend it, and move your common database access into the base class, like so:
abstract class BaseService
{
...
public ICollection<ExampleRecord> GetDatabaseRecords()
{
    using (var context = new ApplicationDbContext())
    {
        /* Your DbContext code, e.g.: */
        return context.Set<ExampleRecord>().ToList();
    }
}
...
}
Then extend BaseService like so:
public class ExampleService : BaseService
{
...
public ICollection<ExampleRecord> GetRecords()
{
return this.GetDatabaseRecords();
}
...
}
This would get the job done and be a better option than what you're currently doing; however, it's generally not the best approach.
Option 2:
If your application is more than a simple one and you're concerned about code maintainability, then I would suggest moving your database access code into a separate repository class and using an IoC container such as StructureMap to inject that repository into your services via dependency injection.
Personally I would recommend option 2 as it's far cleaner, more maintainable/extensible and you're not violating any of the SOLID principles.
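A sketch of what option 2 might look like (the repository and record names are hypothetical): the query moves into a repository behind an interface, and StructureMap injects it into the service:

```csharp
public interface IExampleRecordRepository
{
    ICollection<ExampleRecord> GetRecords();
}

public class ExampleRecordRepository : IExampleRecordRepository
{
    private readonly ApplicationDbContext _context;

    public ExampleRecordRepository(ApplicationDbContext context)
    {
        _context = context;
    }

    public ICollection<ExampleRecord> GetRecords() =>
        _context.Set<ExampleRecord>().ToList();
}

// StructureMap registration:
// x.For<IExampleRecordRepository>().Use<ExampleRecordRepository>();

// The service now depends on the abstraction rather than on a DbContext:
public class ExampleService
{
    private readonly IExampleRecordRepository _repository;

    public ExampleService(IExampleRecordRepository repository)
    {
        _repository = repository;
    }

    public ICollection<ExampleRecord> GetRecords() => _repository.GetRecords();
}
```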
You can use an abstract service to define common methods.
This is a good tutorial covering a generic repository, a service layer, IoC, and unit testing with Entity Framework.
I created an extension method to control the lifecycle of an EF context. My code is below:
public static Entities GetCentralRepositoryContext(this HttpContext httpContext)
{
    if (httpContext.Items["context"] == null)
    {
        httpContext.Items["context"] = new Entities();
    }
    return (Entities)httpContext.Items["context"];
}
I've created many layers in my solution as separate projects and have started to think about IoC. The above code sits in my BL (business logic) project, but for it to work I need a reference to my DL (data layer) project, as that's where the Entities class resides. How can I remove the reference to my DL project and inject the dependency into my extension method? Is this even possible?
The approach you are taking has several problems. First of all, static methods tend to be a problem for loose coupling, and you'll notice this quickly when trying to unit test your code. Besides this, your business layer has a dependency on System.Web, which makes your business layer technology specific, which will make it very hard to move part of the system to for instance a Windows Service, and again makes unit testing almost impossible.
Instead of doing this, start injecting your Entities class into the constructor of all types that need it. At the beginning of each request you can build up the dependency graph of services in your application, specific to that request. At that point you can determine that the Entities instance should have a lifetime of a web request.
This, however, will start to get cumbersome without a DI framework - or at least, a DI framework will make it much easier to do.
When you start writing unit tests, you'll find it very hard to directly use the EF ObjectContext in your application. This article might give you some ideas on how to abstract the ObjectContext behind a testable interface.
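The gist of that approach is to hide the context behind an interface you own, so the business layer no longer references the data layer's concrete type and tests can substitute a fake (a minimal sketch; the member names are mine):

```csharp
// Defined in the business layer (or a shared contracts project).
public interface IUnitOfWork : IDisposable
{
    IQueryable<Customer> Customers { get; }
    void SaveChanges();
}

// Implemented in the data layer, wrapping the EF context. In unit tests,
// an in-memory fake serves the same interface instead.
public class EntitiesUnitOfWork : IUnitOfWork
{
    private readonly Entities _context = new Entities();

    public IQueryable<Customer> Customers => _context.Customers;
    public void SaveChanges() => _context.SaveChanges();
    public void Dispose() => _context.Dispose();
}
```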
Currently I'm trying to understand dependency injection better and I'm using asp.net MVC to work with it. You might see some other related questions from me ;)
Alright, I'll start with an example controller (of an example Contacts Manager asp.net MVC application)
public class ContactsController{
ContactsManagerDb _db;
public ContactsController(){
_db = ContactsManagerDb();
}
//...Actions here
}
Alright, awesome, that's working. My actions can all use the database for CRUD actions. Now I've decided I want to add unit testing, and I've added another constructor to mock a database:
public class ContactsController{
IContactsManagerDb _db;
public ContactsController(){
_db = ContactsManagerDb();
}
public ContactsController(IContactsManagerDb db){
_db = db;
}
//...Actions here
}
Awesome, that's working too; in my unit tests I can create my own implementation of IContactsManagerDb and unit test my controller.
Now, people usually make the following decision (and here is my actual question): get rid of the empty constructor, and use dependency injection to define which implementation to use.
So using StructureMap I've added the following injection rule:
x.For<IContactsManagerDb>().Use<ContactsManagerDb>();
And of course in my testing project I'm using a different IContactsManagerDb implementation:
x.For<IContactsManagerDb>().Use<MyTestingContactsManagerDb>();
But my question is: what problem have I solved, or what have I simplified, by using dependency injection in this specific case?
I fail to see any practical use of it now; I understand the HOW but not the WHY. What's the use of this? Can anyone add to this example and show how this is more practical and useful?
The first example is not unit testable, so it is not good: it creates strong coupling between the different layers of your application and makes them less reusable. The second example is called poor man's dependency injection. It's also discussed here.
What is wrong with poor man's dependency injection is that the code is not self-documenting. It doesn't state its intent to the consumer. A consumer sees this code and could easily call the default constructor without passing any argument, whereas if there were no default constructor it would immediately be clear that this class absolutely requires some contract to be passed to its constructor in order to function normally. And it is really not up to the class to decide which specific implementation to choose; that is up to the consumer of the class.
Dependency injection is useful for 3 main reasons :
It is a method of decoupling interfaces and implementations.
It is good for reducing the amount of boilerplate / factory methods in an application.
It increases the modularity of packages.
As an example, consider a unit test that needs access to a class defined by an interface. In many cases, a unit test for an interface would have to invoke implementations of that interface - thus if an implementation changed, so would the unit test. With DI, however, you can "inject" an interface's implementation at run time into the unit test using the injection API, so that changes to implementations only have to be handled by the injection framework, not by the individual classes that use those implementations.
Another example is in the web world : Consider the coupling between service providers and service definitions. If a particular component needs access to a service - it is better to design to the interface than to a particular implementation of that service. Injection enables such design, again, by allowing you to dynamically add dependencies by referencing your injection framework.
Thus, the various couplings of classes to one another are moved out of factories and individual classes, and dealt with in a uniform, abstract, reusable, and easily-maintained manner when one has a good DI framework. The best tutorials on DI that I have seen are on Google's Guice tutorials, available on YouTube. Although these are not the same as your particular technology, the principles are identical.
First, your example won't compile: _db = ContactsManagerDb(); is missing the new keyword, so the compiler treats it as a call to a method that doesn't exist.
Once you fix that, notice that in your second version the field is typed as the interface IContactsManagerDb, and ContactsManagerDb has to implement that interface for the default constructor to compile.
That makes your first, parameterless constructor irrelevant: you have to have the constructor that takes the interface argument anyway, so why not just use it all the time?
Dependency Injection is all about removing dependencies from the classes themselves. ContactsController doesn't need to know about ContactsManagerDb in order to use IContactsManagerDb to access the Contacts Manager.
I've been using IoC (mostly Unity) and Dependency Injection in .NET for some time now and I really like the pattern as a way to encourage creation of software classes with loose coupling and which should be easier to isolate for testing.
The approach I generally try to stick to is "Nikola's Five Laws of IoC" - in particular not injecting the container itself and only using constructor injection so that you can clearly see all the dependencies of a class from its constructor signature. Nikola does have an account on here but I'm not sure if he is still active.
Anyway, when I end up either violating one of the other laws or generally ending up with something that doesn't feel or look right, I have to question whether I'm missing something, could do it better, or simply shouldn't be using IoC for certain cases. With that in mind here are a few examples of this and I'd be grateful for any pointers or further discussion on these:
Classes with too many dependencies ("Any class having more than 3 dependencies should be questioned for SRP violation"). I know this one comes up a lot in dependency injection questions, but after reading these I still haven't had a eureka moment that solves my problems:
a) In a large application I invariably find I need 3 dependencies just to access infrastructure (examples - logging, configuration, persistence) before I get to the specific dependencies needed for the class to get its (hopefully single responsibility) job done. I'm aware of the approach that would refactor and wrap such groups of dependencies into a single one, but I often find this becomes simply a facade for several other services rather than having any true responsibility of its own. Can certain infrastructure dependencies be ignored in the context of this rule, provided the class is deemed to still have a single responsibility?
b) Refactoring can add to this problem. Consider the fairly common task of breaking apart a class that has become a bit big: you move one area of functionality into a new class and the first class becomes dependent on it. Assuming the first class still needs all the dependencies it had before, it now has one extra dependency. In this case I probably don't mind that this dependency is more tightly coupled, but it's still neater to have the container provide it (as opposed to using new ...()), which it can do even without the new dependency having its own interface.
c) In one specific example I have a class responsible for running various functions through the system every few minutes. As the functions rightly belong in different areas, this class ends up with many dependencies just to be able to execute each one. I'm guessing that in this case other approaches, possibly involving events, should be considered, but so far I haven't tried that because I want to co-ordinate the order the tasks run in and, in some cases, apply logic involving outcomes along the way.
Once I'm using IoC within an application it seems like almost every class I create that is used by another class ends up being registered in and/or injected by the container. Is this the expected outcome or should some classes have nothing to do with IoC? The alternative of just having something new'd up within the code just looks like a code smell since its then tightly coupled. This is kind of related to 1b above too.
I have all my container initialisation done at application startup, registering types for each interface in the system. Some are deliberately single instance lifecycles where others can be new instance each time they are resolved. However, since the latter are dependencies of the former, in practice they become a single instance too since they are only resolved once - at construction time of the single instance. In many cases this doesn't matter, but in some cases I really want a different instance each time I do an operation, so rather than be able to make use of the built in container functionality, I'm forced to either i) have a factory dependency instead so I can force this behaviour or ii) pass in the container so I can resolve each time. Both of these approaches are frowned upon in Nikola's guidance but I see i) as the lesser of two evils and I do use it in some cases.
In a large application I invariably find I need 3 dependencies just to access infrastructure (examples - logging, configuration, persistence)
IMHO, infrastructure is not a dependency. I have no problem using a service locator for getting a logger (private ILogger _logger = LogManager.GetLogger()).
However, persistence is not infrastructure in my point of view. It's a dependency. Break your class into smaller parts.
Refactoring can add to this problem.
Of course. You will get more dependencies until you have successfully refactored all classes. Just hang in there and continue refactoring.
Do create interfaces in a separate project (the Separated Interface pattern) instead of adding dependencies to classes.
In one specific example I have a class responsible for running various functions through the system every few minutes. As the functions rightly belong in different areas, this class ends up with many dependencies just to be able to execute each function.
Then you are taking the wrong approach. The task runner should not have a dependency on all tasks that should run, it should be the other way around. All tasks should register in the runner.
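In other words, invert the relationship: the runner depends on a single task abstraction, and the container supplies every registered implementation (a sketch; the interface and class names are mine):

```csharp
public interface IScheduledTask
{
    void Execute();
}

// The runner knows nothing about concrete tasks; it simply runs whatever
// implementations of IScheduledTask the container has registered.
public class TaskRunner
{
    private readonly IEnumerable<IScheduledTask> _tasks;

    public TaskRunner(IEnumerable<IScheduledTask> tasks)
    {
        _tasks = tasks;
    }

    public void RunAll()
    {
        foreach (var task in _tasks)
            task.Execute();
    }
}
```

Adding a new task then only requires registering another IScheduledTask implementation; the runner itself never changes.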
Once I'm using IoC within an application, it seems like almost every class I create that is used by another class ends up being registered in and/or injected by the container.
I register everything except business objects, DTOs, etc. in my container.
I have all my container initialisation done at application startup, registering types for each interface in the system. Some are deliberately single instance lifecycles where others can be new instance each time they are resolved. However, since the latter are dependencies of the former, in practice they become a single instance too since they are only resolved once - at construction time of the single instance.
Don't mix lifetimes if you can avoid it, or don't take in short-lived dependencies. In this case you could use a simple messaging solution to update the single instances.
You might want to read my guidelines.
Let me answer question 3. Having a singleton depend on a transient is a problem that container profilers try to detect and warn about. Services should only depend on other services with a lifetime greater than or equal to their own. Injecting a factory interface or delegate to solve this is in general a good solution; passing in the container itself is a bad one, since you end up with the Service Locator anti-pattern.
Instead of injecting a factory, you can solve this by implementing a proxy. Here's an example:
public interface ITransientDependency
{
void SomeAction();
}
public class Implementation : ITransientDependency
{
public void SomeAction() { ... }
}
Using this definition, you can define a proxy class in the Composition Root based on the ITransientDependency:
public class TransientDependencyProxy<T> : ITransientDependency
where T : ITransientDependency
{
private readonly UnityContainer container;
public TransientDependencyProxy(UnityContainer container)
{
this.container = container;
}
public void SomeAction()
{
this.container.Resolve<T>().SomeAction();
}
}
Now you can register this TransientDependencyProxy<T> as singleton:
container.RegisterType<ITransientDependency,
TransientDependencyProxy<Implementation>>(
new ContainerControlledLifetimeManager());
While it is registered as a singleton, it will still act as a transient, since it forwards its calls to a transient implementation.
This way you can completely hide that the ITransientDependency needs to be a transient from the rest of the application.
If you need this behavior for many different service types, it will get cumbersome to define proxies for each and every one of them. In that case you could try Unity's interception functionality: you can define a single interceptor that lets you do this for a wide range of service types.