Okay, I'm going to try to keep this short and to the point. I am trying to develop a loosely-coupled, multi-tier service application that is testable and supports dependency injection. Here's what I have:
At the service layer, I have a StartSession method that accepts some key data required to, well, start the session. My service class is a facade and delegates to an instance of the ISessionManager interface that is injected into the service class constructor.
I am using the Repository pattern in the data access layer. So I have an ISessionRepository that my domain objects will work with and that I implement using the data access technology du jour. ISessionRepository has methods for GetById, Add and Update.
Since my service class is just a facade, I think it is safe to say that my ISessionManager implementation is the actual service class in my architecture. This class coordinates the operations with my Session domain/business object. And here's where the shell game and problem comes in.
In my SessionManager class (the concrete ISessionManager), here's how I have StartSession implemented:
public ISession StartSession(object sessionStartInfo)
{
var session = Session.GetSession(sessionStartInfo);
if (session == null)
session = Session.NewSession(sessionStartInfo);
return session;
}
I have three problems with this code:
First, obviously I could move this logic into a StartSession method in my Session class but I think that would defeat the purpose of the SessionManager class which then simply becomes a second facade (or is it still considered a coordinator?). Alas, the shell game.
Second, SessionManager has a tightly-coupled dependence upon the Session class. I considered creating an ISessionFactory/SessionFactory that could be injected into SessionManager, but then I'd have the same tight coupling inside the factory. But maybe that's okay?
Finally, it seems to me that true DI and factory methods don't mix. After all, we want to avoid "new"ing an instance of an object and let the container return the instance to us. And true DI says that we should not reference the container directly. So, how then do I get the concrete ISessionRepository class injected into my Session domain object? Do I have it injected into the factory class then manually pass it into Session when constructing a new instance (using "new")?
Keep in mind that this is also only one operation and I also need to perform other tasks such as saving a session, listing sessions based on various criteria plus work with other domain objects in my solution. Plus, the Session object also encapsulates business logic for authorization, validation, etc. so (I think) it needs to be there.
The key to what I am looking to accomplish is not only functional but testable. I am using DI to break dependencies so we can easily implement unit tests using mocks as well as give us the ability to make changes to the concrete implementations without requiring changes in multiple areas.
Can you help me wrap my head around the best practices for such a design and how I can best achieve my goals for a solid SOA, DDD and TDD solution?
UPDATE
I was asked to provide some additional code, so as succinctly as possible:
[ServiceContract()]
public class SessionService : ISessionService
{
public SessionService(ISessionManager manager) { Manager = manager; }
public ISessionManager Manager { get; private set; }
[OperationContract()]
public SessionContract StartSession(SessionCriteriaContract criteria)
{
var session = Manager.StartSession(Mapper.Map<SessionCriteria>(criteria));
return Mapper.Map<SessionContract>(session);
}
}
public class SessionManager : ISessionManager
{
public SessionManager() { }
public ISession StartSession(SessionCriteria criteria)
{
var session = Session.GetSession(criteria);
if (session == null)
session = Session.NewSession(criteria);
return session;
}
}
public class Session : ISession
{
public Session(ISessionRepository repository, IValidator<ISession> validator)
{
Repository = repository;
Validator = validator;
}
// ISession Properties
public static ISession GetSession(SessionCriteria criteria)
{
return Repository.FindOne(criteria);
}
public static ISession NewSession(SessionCriteria criteria)
{
var session = ????;
// Set properties based on criteria object
return session;
}
public Boolean Save()
{
if (!Validator.IsValid(this))
return false;
return Repository.Save(this);
}
}
And, obviously, there is an ISessionRepository interface and concrete XyzSessionRepository class that I don't think needs to be shown.
2nd UPDATE
I added the IValidator dependency to the Session domain object to illustrate that there are other components in use.
The posted code clarifies a lot. It looks to me like the session class holds state (with behavior), and the service and manager classes strictly perform actions/behavior.
You might look at removing the Repository dependency from the Session and adding it to the SessionManager. So instead of the Session calling Repository.Save(this), your Manager class would have a Save(ISession session) method that would then call Repository.Save(session). This would mean that the session itself would not need to be managed by the container, and it would be perfectly reasonable to create it via "new Session()" (or using a factory that does the same). I think the fact that the Get- and New- methods on the Session are static is a clue/smell that they may not belong on that class (does this code compile? Seems like you are using an instance property within a static method).
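A minimal sketch of that suggestion (hedged: it assumes Session is trimmed down to take only its validator, that ISessionManager gains a Save method, and it reuses the FindOne/Save repository members from the code above):
public class SessionManager : ISessionManager
{
public SessionManager(ISessionRepository repository, IValidator<ISession> validator)
{
Repository = repository;
Validator = validator;
}
public ISessionRepository Repository { get; private set; }
public IValidator<ISession> Validator { get; private set; }
public ISession StartSession(SessionCriteria criteria)
{
// the manager coordinates lookup and creation; Session no longer touches the repository
return Repository.FindOne(criteria) ?? new Session(Validator);
}
public Boolean Save(ISession session)
{
if (!Validator.IsValid(session))
return false;
return Repository.Save(session);
}
}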
Finally, it seems to me that true DI and factory methods don't mix. After all, we want to avoid "new"ing an instance of an object and let the container return the instance to us. And true DI says that we should not reference the container directly. So, how then do I get the concrete ISessionRepository class injected into my Session domain object? Do I have it injected into the factory class then manually pass it into Session when constructing a new instance (using "new")?
This question gets asked a LOT when it comes to managing classes that mix state and service via an IoC container. As soon as you use an abstract factory that uses "new", you lose the benefits of a DI framework from that class downward in the object graph. You can get away from this by completely separating state and service, and having only your classes that provide service/behavior managed by the container. This leads to passing all data through method calls (aka functional programming). Some containers also provide a solution to this very problem (in Windsor it's the Typed Factory Facility).
Edit: wanted to add that functional programming also leads to what Fowler would call "anemic domain models". This is generally considered a bad thing in DDD, so you might have to weigh that against the advice I posted above.
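A rough sketch of that state/service split (every type and member name here is hypothetical, not from the question):
// state: a plain data holder that can be newed up freely and is never managed by the container
public class SessionData
{
public Guid Id { get; set; }
public DateTime StartedOn { get; set; }
}
// a hypothetical persistence abstraction for the sketch
public interface ISessionStore
{
void Save(SessionData data);
}
// service: behavior only, managed by the container; data flows in through method parameters
public class SessionOperations
{
private readonly ISessionStore store;
public SessionOperations(ISessionStore store)
{
this.store = store;
}
public void Save(SessionData data)
{
store.Save(data);
}
}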
Just some comments...
After all, we want to avoid "new"ing an instance of an object and let the container return the instance to us.
This isn't true 100% of the time. You want to avoid "new"ing only across so-called seams, which are basically the lines between layers. If you try to abstract persistence with repositories - that's a seam; if you try to decouple the domain model from the UI (the classic one being a System.Web reference) - there's a seam. If you are in the same layer, then decoupling one implementation from another sometimes makes little sense and just adds complexity (useless abstraction, IoC container configuration, etc.). Another (obvious) reason to abstract something is when you already, right now, need polymorphism.
And true DI says that we should not reference the container directly.
This is true. But another concept you might be missing is the so-called composition root (it's good for things to have a name :). This concept resolves the confusion over "when to use a service locator". The idea is simple - you should compose your dependency graph as early as possible, and there should be only one place where you actually reference the IoC container.
E.g., in an ASP.NET MVC application, the common point for composition is the ControllerFactory.
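For example, in a plain console host the composition root could simply be Main, where the whole graph is wired once. A hedged Pure DI sketch, assuming SessionManager has been refactored to take its repository through its constructor:
static class Program
{
static void Main()
{
// the only place that knows about concrete types and references them directly
ISessionRepository repository = new XyzSessionRepository();
ISessionManager manager = new SessionManager(repository);
ISessionService service = new SessionService(manager);
// ... hand the composed service to whatever hosts it
}
}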
Do I have it injected into the factory class then manually pass it into Session when constructing a new instance
As I see it so far, factories are generally good for two things:
1. Creating complex objects (the Builder pattern helps significantly)
2. Resolving violations of the open/closed and single responsibility principles
public void PurchaseProduct(Product product){
if(product.HasSomething) order.Apply(new FirstDiscountPolicy());
if(product.HasSomethingElse) order.Apply(new SecondDiscountPolicy());
}
becomes:
public void PurchaseProduct(Product product){
order.Apply(DiscountPolicyFactory.Create(product));
}
That way, the class containing PurchaseProduct won't need to be modified when a new discount policy appears, and PurchaseProduct becomes responsible only for purchasing the product instead of knowing which discount to apply.
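A hedged sketch of what that factory could look like (IDiscountPolicy and NoDiscountPolicy are assumptions; it is static here only to match the call above):
public static class DiscountPolicyFactory
{
public static IDiscountPolicy Create(Product product)
{
if (product.HasSomething) return new FirstDiscountPolicy();
if (product.HasSomethingElse) return new SecondDiscountPolicy();
return new NoDiscountPolicy(); // assumed null-object fallback
}
}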
P.S. If you are interested in DI, you should read "Dependency Injection in .NET" by Mark Seemann.
I thought I'd post the approach I ended up following while giving due credit above.
After reading some additional articles on DDD, I finally came across the observation that our domain objects should not be responsible for their own creation or persistence, as well as the notion that it is okay to "new" an instance of a domain object from within the Domain Layer (as Arnis alluded to).
So, I retained my SessionManager class but renamed it SessionService so it would be clearer that it is a Domain Service (not to be confused with the SessionService in the facade layer). It is now implemented like:
public class SessionService : ISessionService
{
public SessionService(ISessionFactory factory, ISessionRepository repository)
{
Factory = factory;
Repository = repository;
}
public ISessionFactory Factory { get; private set; }
public ISessionRepository Repository { get; private set; }
public ISession StartSession(SessionCriteria criteria)
{
var session = Repository.GetSession(criteria);
if (session == null)
session = Factory.CreateSession(criteria);
else if (!session.CanResume)
throw new InvalidOperationException("Cannot resume the session.");
return session;
}
}
The Session class is now more of a true domain object only concerned with the state and logic required when working with the Session, such as the CanResume property shown above and validation logic.
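For illustration only, a hedged sketch of the slimmed-down Session (the EndedOn property and the CanResume rule are invented for the example):
public class Session : ISession
{
public Session(ISessionValidator validator)
{
Validator = validator;
}
public ISessionValidator Validator { get; private set; }
// ISession state properties live here; EndedOn is just an example
public DateTime? EndedOn { get; set; }
public Boolean CanResume
{
get { return EndedOn == null; }
}
}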
The SessionFactory class is responsible for creating new instances and allows me to still inject the ISessionValidator instance provided by the container without directly referencing the container itself:
public class SessionFactory : ISessionFactory
{
public SessionFactory(ISessionValidator validator)
{
Validator = validator;
}
public ISessionValidator Validator { get; private set; }
public Session CreateSession(SessionCriteria criteria)
{
var session = new Session(Validator);
// Map properties
return session;
}
}
Unless someone can point out a flaw in my approach, I'm pretty comfortable that this is consistent with DDD and gives me full support for unit testing, etc. - everything I was after.
I have read dozens of posts regarding this topic, without finding a clear guideline of how to access the Ninject.Kernel without using the Service Locator pattern.
I currently have the following in the classes that need to use CustomerBusiness (which is my service) and it works fine, but I am well aware that it is not the recommended way of doing it.
private CustomerBusiness _customerBusiness;
private IAccountRepository AccountRepository
{
get { return NinjectWebCommon.Kernel.Get<IAccountRepository>(); }
}
private CustomerBusiness CustomerBusiness
{
get
{
if (_customerBusiness == null)
{
_customerBusiness = new CustomerBusiness(AccountRepository);
}
return _customerBusiness;
}
}
public Customer GetCustomer(int id)
{
return CustomerBusiness.GetCustomer(id);
}
This is the Kernel property accessed in the code above:
public static IKernel Kernel
{
get
{
return CreateKernel();
}
}
I've read many suggestions about using a factory for this, but none of them explain how to use this factory. I would really appreciate if anyone could show me the "CustomerFactory" or any other recommended approach including how to use it.
Update
I am using ASP.NET Web Forms and need to access CustomerBusiness from CodeBehind.
Solution
The final solution that I found to be working was the answer with the most votes found at this post:
How can I implement Ninject or DI on asp.net Web Forms?
It looks like this (Note inheritance from PageBase, which is part of Ninject.Web - that is the key!):
public partial class Edit : PageBase
{
[Inject]
public ICustomerBusiness CustomerBusiness { get; set; }
...
The accepted answer below indirectly led me to find this solution.
Since you're using NinjectWebCommon, I assume you have a web application of some sort. You really should only access the Ninject kernel in one place - at the composition root. That is where you build the object graph, and it is the only place you should ever need access to the IoC container. To actually get the dependencies you need, you typically employ constructor injection.
In case of MVC web applications, for example, you have a controller factory using the Ninject kernel and that's the only place which references it.
To expand on your particular situation, your class would accept ICustomerBusiness in its constructor, declaring that it needs an instance of ICustomerBusiness as its dependency:
class CustomerBusinessConsumer : ICustomerBusinessConsumer
{
private readonly ICustomerBusiness customerBusiness;
public CustomerBusinessConsumer(ICustomerBusiness customerBusiness)
{
this.customerBusiness = customerBusiness;
}
...
}
Now, whichever class uses ICustomerBusinessConsumer as its dependency would follow the same pattern (accepting an instance of ICustomerBusinessConsumer as its constructor parameter). You basically never construct your dependencies manually using new (specific exceptions aside).
Then you just have to make sure your classes get their dependencies, and it's the composition root where you do this. What exactly the composition root is depends on the type of application you are writing (console application, WPF application, web service, MVC web application...)
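For instance, a console-style composition root with Ninject might look like this sketch (the concrete classes being bound are assumptions):
var kernel = new StandardKernel();
kernel.Bind<IAccountRepository>().To<AccountRepository>();
kernel.Bind<ICustomerBusiness>().To<CustomerBusiness>();
kernel.Bind<ICustomerBusinessConsumer>().To<CustomerBusinessConsumer>();
// resolve the top of the object graph once; constructor injection supplies everything below it
var consumer = kernel.Get<ICustomerBusinessConsumer>();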
EDIT: To get familiar with the situation in the ASP.NET WebForms realm, I had to look up the details since I've never used it. Unfortunately, WebForms requires you to have a parameterless constructor on each of your Page classes, so you can't use constructor injection all the way from the top of the object graph down to the bottom.
However, after consulting Mark Seemann's chapter on composing objects in WebForms, I can rephrase how to deal with this framework's deficiency while still acting in line with good DI practices:
Have a class responsible for resolving the dependencies, using the Ninject's kernel you have set up. This may be a very thin wrapper around the kernel. Let's call it DependencyContainer.
Create your container and save it in the application context, so that it's ready when you need it
protected void Application_Start(object sender, EventArgs e)
{
this.Application["container"] = new DependencyContainer();
}
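A hedged sketch of what that thin wrapper might look like (the bindings are assumptions; only the resolve methods shown further below are exposed to callers):
public class DependencyContainer
{
private readonly IKernel kernel;
public DependencyContainer()
{
kernel = new StandardKernel();
// register the application's bindings once; the concrete types here are assumptions
kernel.Bind<ICustomerBusiness>().To<CustomerBusiness>();
kernel.Bind<ICustomerBusinessConsumer>().To<CustomerBusinessConsumer>();
}
private IKernel Kernel
{
get { return kernel; }
}
// resolve methods such as ResolveCustomerBusinessConsumer() go here (see below)
}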
Let's suppose your page class (let's call it MainPage) has a dependency on ICustomerBusinessConsumer. Then DependencyContainer has to allow us to retrieve an instance of ICustomerBusinessConsumer:
public ICustomerBusinessConsumer ResolveCustomerBusinessConsumer()
{
return Kernel.Get<ICustomerBusinessConsumer>();
}
Then, in the MainPage class itself, you will resolve its dependencies in the default constructor:
public MainPage()
{
var container = (DependencyContainer) HttpContext.Current.Application["container"];
this.customerBusinessConsumer = container.ResolveCustomerBusinessConsumer();
}
Few notes:
having the dependency container available in the HttpContext must not tempt you to treat it as a service locator. In fact, the best practice here (at least from the standpoint of being true to DI) is to have some kind of "implementor" classes to which you will relay the functionality of your page classes.
For example, each action handled by MainPage will be only relayed to its implementor class. This implementor class will be a dependency of MainPage and, as all other dependencies, will be resolved using the container.
The important part is that these implementor classes should be in assemblies not referencing the ASP.NET assemblies and thus without a chance to access the HttpContext
having to write so much code to achieve this is certainly not ideal, but it is only a consequence of the framework's limitation. In ASP.NET MVC applications, for example, this is dealt with in a much better way. There you have single point where you can compose the object graphs and you don't have to resolve them in each top-level class as you have to in WebForms.
the good thing is that while you have to write some plumbing code in the page class constructors, from there down the object graph you can employ constructor injection
Constructor injection is the preferred method for DI with Ninject; however, it also supports property injection. Take a read of the Ninject page on injection patterns here: https://github.com/ninject/ninject/wiki/Injection-Patterns.
Both of these are injection patterns, unlike service location, which is request-based and not really injection at all.
The flip side of injection is that you need to control construction. When using MVC, all of the wiring to do this is built into the Ninject MVC package on NuGet.
I'm working on a project that is an older 3 tier design, any new functionality added needs to be unit testable.
The problem is the business layer / data layers are tightly coupled like in the sample below. The BL just news up a data layer object... so it's almost impossible to mock up this way. We don't have any Dependency Injection implemented so constructor injection isn't possible. So what is the best way to modify the structure so that the data layer can be mocked up without the use of DI?
public class BLLayer
{
public void GetBLObject(string parameters)
{
using (DLayer dl = new DLayer())
{
DataSet ds = dl.GetData(parameters);
// BL logic here....
}
}
}
You're not ruling out constructor injection per se, you just don't have an IOC container set up. That's fine, you don't need one. You can do Poor Man's Dependency Injection and still keep constructor injection.
Wrap the DataLayer with an interface, and then create a factory that will make IDataLayer objects on command. Add this as a field to the object you're trying to inject into, replacing all new with calls to the factory. Now you can inject your fakes for testing, like this:
interface IDataLayer : IDisposable { ... } // IDisposable so the factory-created instance works in a "using" block
interface IDataLayerFactory
{
IDataLayer Create();
}
public class BLLayer
{
private IDataLayerFactory _factory;
// present a default constructor for your average consumer
public BLLayer() : this(new RealFactoryImpl()) { }
// but also expose an injectable constructor for tests
public BLLayer(IDataLayerFactory factory)
{
_factory = factory;
}
public void GetBLObject(string parameters)
{
using (IDataLayer dl = _factory.Create()) // replace the "new"
{
//BL logic here....
}
}
}
Don't forget to have a default value of the actual factory you want to use in real code.
Dependency Injection is just one of many patterns that fall under the umbrella concept known as "Inversion of Control". The main criterion is to provide a "seam" between components so that you can separate concerns and swap implementations independently. In short, there's more than one way to skin a cat.
Dependency Injection itself has several variants: Constructor Injection (dependencies passed in through the constructor), Property Injection (dependencies represented as read/write properties) and Method Injection (dependencies passed into the method). These patterns assume that the class is "closed for modification" and exposes its dependencies for consumers to change. Legacy code is rarely designed this way, and system-wide architectural changes (such as moving to constructor injection and an IoC container) aren't always straightforward.
Other patterns involve decoupling the resolution and/or construction of objects away from the subject under test. Simple Gang of Four patterns like a Factory can do wonders. A Service Locator is like a global object factory, and while I'm not a huge fan of that pattern, it can be used to decouple dependencies.
In the example that you've outlined above, the test pattern "Subclass to Test" would allow you to introduce seams without system-wide re-architecture. In this pattern, you move object creation calls like "new DLayer()" into a virtual method and then create a subclass of the subject.
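A hedged sketch of "Subclass to Test" applied to the BLLayer example (FakeDLayer is hypothetical and would need DLayer's members to be overridable):
public class BLLayer
{
// creation is isolated in a virtual method, giving tests a seam
protected virtual DLayer CreateDataLayer()
{
return new DLayer();
}
public void GetBLObject(string parameters)
{
using (DLayer dl = CreateDataLayer())
{
DataSet ds = dl.GetData(parameters);
// BL logic here....
}
}
}
// the test subclass overrides only the creation method
public class TestingBLLayer : BLLayer
{
protected override DLayer CreateDataLayer()
{
return new FakeDLayer();
}
}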
Michael Feathers' "Working Effectively with Legacy Code" has a catalog of patterns and techniques that you can use to put your legacy code into a state that would allow you to move towards DI.
If DLayer is only used in the GetBLObject method, I would inject a factory in the method call. Something like this (building on @PaulPhillips' example):
public void GetBLObject(string parameters, IDataLayerFactory dataLayerFactory)
{
using (IDataLayer dl = dataLayerFactory.Create()) // replace the "new"
{
//BL logic here....
}
}
However, it seems that what you really want to work with in the Business Layer is a DataSet. So another way is to let GetBLObject take the DataSet in the method call instead of the string parameter. In order to make that work you could create a class that handles just getting the DataSet from a DLayer. For instance:
public class CallingBusinesslayerCode
{
public void CallingBusinessLayer()
{
// It doesn't show from your code what is returned
// so here I assume that it is void.
new BLLayer().GetBLObject(new BreakingDLayerDependency().GetData("param"));
}
}
public class BreakingDLayerDependency
{
public DataSet GetData(string param)
{
using (DLayer dl = new DLayer()) // you can of course still do ctor injection here instead of the new DLayer()
{
return dl.GetData(param);
}
}
}
public class BLLayer
{
public void GetBLObject(DataSet ds)
{
// Business Logic using ds here.
}
}
One warning: mocking out DataSet (which you have to do in both this and Paul Phillips' solution) can be really cumbersome, so testing this will be possible, but not necessarily a lot of fun.
I'm trying to understand Inversion of Control and how it helps me with my unit testing. I've read several online explanations of IOC and what it does, but I'm just not quite understanding it.
I developed a sample project which included using StructureMap for unit testing, with StructureMap setup code like the following:
private readonly IAccountRepository _accountRepository;
public Logon()
{
_accountRepository = ObjectFactory.GetInstance<IAccountRepository>();
}
The thing I'm not understanding though, is as I see it, I could simply declare the above as the following:
AccountRepository _accountRepository = new AccountRepository();
And it would do the same thing as the prior code. So, I was just wondering if someone can help explain to me in a simple way, what the benefit of using IOC is (especially when dealing with unit testing).
Thanks
Inversion of Control is the concept of letting a framework call back into user code. It's a very abstract concept but in essence describes the difference between a library and a framework. IoC can be seen as the "defining characteristic of a framework." We, as program developers, call into libraries, but frameworks instead call into our code; the framework is in control, which is why we say the control is inverted. Any framework supplies hooks that allow us to plug in our code.
Inversion of Control is a pattern that can only be applied by framework developers, or perhaps when you're an application developer interacting with framework code. IoC does not apply when working with application code exclusively, though.
The act of depending on abstractions instead of implementations is called Dependency Inversion, and Dependency Inversion can be practiced by both application and framework developers. What you refer to as IoC is actually Dependency Inversion, and as Krzysztof already commented: what you're doing is not IoC. I'll discuss Dependency Inversion for the remainder of my answer.
There are basically two forms of Dependency Inversion:
Service Locator
Dependency Injection.
Let's start with the Service Locator pattern.
The Service Locator pattern
A Service Locator supplies application components outside the [startup path of your application] with access to an unbounded set of dependencies. As its most implemented, the Service Locator is a Static Factory that can be configured with concrete services before the first consumer begins to use it. (But you’ll equally also find abstract Service Locators.) [source]
Here's an example of a static Service Locator:
public class Service
{
public void SomeOperation()
{
IDependency dependency =
ServiceLocator.GetInstance<IDependency>();
dependency.Execute();
}
}
This example should look familiar to you, because this is what you're doing in your Logon method: you are using the Service Locator pattern.
We say that a Service Locator supplies access to an unbounded set of dependencies, because the caller can pass in any type it wishes at runtime. This is opposite to the Dependency Injection pattern.
The Dependency Injection pattern
With the Dependency Injection pattern (DI), you statically declare a class's required dependencies, typically by defining them in the constructor. The dependencies are made part of the class's signature. The class itself isn't responsible for getting its dependencies; that responsibility is moved up the call stack. When refactoring the previous Service class with DI, it would likely become the following:
public class Service
{
private readonly IDependency dependency;
public Service(IDependency dependency)
{
this.dependency = dependency;
}
public void SomeOperation()
{
this.dependency.Execute();
}
}
Comparing both patterns
Both patterns are forms of Dependency Inversion, since in both cases the Service class isn't responsible for creating the dependencies and doesn't know which implementation it is using. It just talks to an abstraction. Both patterns give you flexibility over the implementations a class is using and thus allow you to write more flexible software.
There are, however, many problems with the Service Locator pattern, and that's why it is considered an anti-pattern. You are already experiencing these problems, as you are wondering how Service Locator in your case helps you with unit testing.
The answer is that the Service Locator pattern does not help with unit testing. On the contrary: it makes unit testing harder compared to DI. By letting the class call the ObjectFactory (which is your Service Locator), you create a hard dependency between the two. Replacing IAccountRepository for testing also means that your unit test must make use of the ObjectFactory. This makes your unit tests harder to read. But more importantly, since the ObjectFactory is a static instance, all unit tests make use of that same instance, which makes it hard to run tests in isolation and swap implementations on a per-test basis.
I used to use a static Service Locator pattern in the past, and the way I dealt with this was by registering dependencies in a Service Locator that I could change on a per-thread basis (using a [ThreadStatic] field under the covers). This allowed me to run my tests in parallel (which MSTest does by default) while keeping tests isolated. The problem with this, unfortunately, was that it got complicated really fast, it cluttered the tests with all kinds of technical stuff, and it made me spend a lot of time solving these technical problems - time I could have spent writing more tests instead.
But even if you use a hybrid solution where you inject an abstract IObjectFactory (an abstract Service Locator) into the constructor of Logon, testing is still more difficult compared to DI because of the implicit relationship between Logon and its dependencies; a test can't immediately see what dependencies are required. On top of that, besides supplying the required dependencies, each test must now supply a correctly configured ObjectFactory to the class.
Conclusion
The real solution to the problems that Service Locator causes is DI. Once you statically declare a class's dependencies in the constructor and inject them from the outside, all those issues are gone. Not only does this make it very clear what dependencies a class needs (no hidden dependencies), but every unit test is itself responsible for injecting the dependencies it needs. This makes writing tests much easier and prevents you from ever having to configure a DI Container in your unit tests.
The idea behind this is to enable you to swap out the default account repository implementation for a more unit-testable version. In your unit tests you can now instantiate a version that doesn't make a database call, but instead returns fixed data. This way you can focus on testing the logic in your methods and free yourself of the dependency on the database.
This is better on many levels:
1) Your tests are more stable since you no longer have to worry about tests failing due to data changes in the database
2) Your tests will run faster since you don't call out to an external data source
3) You can more easily simulate all your test conditions since your mocked repository can return any type of data needed to test any condition
The key to answering your question is testability, and whether you want to manage the lifetime of the injected objects yourself or let the IoC container do it for you.
Let's say for example that you are writing a class that uses your repository and you want to test it.
If you do something like the following:
public class MyClass
{
public MyEntity GetEntityBy(long id)
{
AccountRepository _accountRepository = new AccountRepository();
return _accountRepository.GetEntityFromDatabaseBy(id);
}
}
When you try to test this method you will find that there are a lot of complications:
1. There must be a database already set up.
2. Your database needs to have the table that has the entity you're looking for.
3. The id that you are using for your test must exist; if you delete it for whatever reason, then your automated test is broken.
If instead you have something like the following:
public interface IAccountRepository
{
AccountEntity GetAccountFromDatabase(long id);
}
public class AccountRepository : IAccountRepository
{
public AccountEntity GetAccountFromDatabase(long id)
{
//... some DB implementation here
}
}
public class MyClass
{
private readonly IAccountRepository _accountRepository;
public MyClass(IAccountRepository accountRepository)
{
_accountRepository = accountRepository;
}
public AccountEntity GetAccountEntityBy(long id)
{
return _accountRepository.GetAccountFromDatabase(id);
}
}
Now that you have that you can test the MyClass class in isolation without the need for a database to be in place.
How is this beneficial? For example, you could do something like this (assuming you are using MSTest with Visual Studio, but the same principles apply to NUnit, for example):
[TestClass]
public class MyClassTests
{
[TestMethod]
public void ShouldCallAccountRepositoryToGetAccount()
{
FakeRepository fakeRepository = new FakeRepository();
MyClass myClass = new MyClass(fakeRepository);
long anyId = 1234;
AccountEntity account = myClass.GetAccountEntityBy(anyId);
Assert.IsTrue(fakeRepository.GetAccountFromDatabaseWasCalled);
Assert.IsNotNull(account);
}
}
public class FakeRepository : IAccountRepository
{
public bool GetAccountFromDatabaseWasCalled { get; private set; }
public AccountEntity GetAccountFromDatabase(long id)
{
GetAccountFromDatabaseWasCalled = true;
return new AccountEntity();
}
}
So, as you can see you are able, very confidently, to test that the MyClass class uses an IAccountRepository instance to get an Account entity from a database without the need to have a database in place.
There are a million things you can still do here to improve the example. You could use a Mocking framework like Rhino Mocks or Moq to create your fake objects instead of coding them yourself like I did in the example.
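For instance, with Moq (using Moq;) the hand-rolled fake could be replaced by something along these lines - a sketch, not a drop-in for the test above:
var repositoryMock = new Mock<IAccountRepository>();
repositoryMock.Setup(r => r.GetAccountFromDatabase(It.IsAny<long>()))
.Returns(new AccountEntity());
var myClass = new MyClass(repositoryMock.Object);
AccountEntity account = myClass.GetAccountEntityBy(1234);
// verify the collaboration instead of exposing a "WasCalled" flag
repositoryMock.Verify(r => r.GetAccountFromDatabase(1234), Times.Once());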
By doing this, the MyClass class is completely independent of the AccountRepository, so that's when the loosely coupled concept comes into play and your application is testable and more maintainable.
With this example you can see the benefits of IoC in itself. Now if you DO NOT use an IoC container you do have to instantiate all the dependencies and inject them appropriately in a Composition Root or configure an IoC container so it can do it for you.
Regards.
I am near the end of a new ASP.NET MVC application I have been developing, and I have realised that I am not 100% sure what should be going on in my controller methods.
Is it better for an action method to decide which services/methods are called and in what order like so:
AccountService _accountService;
BillingService _billingService;
InvoiceService _invoiceService;
...
public ActionResult UpgradeAccountPackage(PackageType packageType, int accountId)
{
_accountService.UpgradeAccountPackage(packageType, accountId);
_billingService.BillForAccountUpgrade(packageType, accountId);
_invoiceService.CreateAccountUpgradeInvoice(packageType, accountId);
}
Or is it better to stick to a single method call to one service and allow this method to call the other services/support method it needs?
public ActionResult UpgradeAccountPackage(PackageType packageType, int accountId)
{
// account service upgrades account then calls the BillingService and InvoiceService
// methods called above within this method
_accountService.UpgradeAccountPackage(packageType, accountId);
}
I have tended to go for the second example here, as it originally seemed like the first method would constitute logic in some way, and would mean the action method has to intrinsically know how the account upgrade process works within my application, which seems like a bad thing.
However, now that my application is almost finished, it has a large service layer, and this approach has led to almost every service having a strong dependency on numerous other services. There is no centralised place which decides the flow of business transactions such as the one mentioned above; you have to dig around a bit in service methods to discover the processes.
I am considering refactoring to more closely resemble the second method above, or introducing a new layer in between the controller and service layer which controls the flow of processes.
Do people tend to use the first or second method? What are peoples opinions?
I prefer the second method - much easier to test (using mocks), and the logic is there for reuse. You end up with facades to your actual business logic, but that's not a bad thing.
I don't understand why your service layer is full of concrete dependencies though...
The thing to remember is that you want classes to rely on interfaces, not implementation. (and then string it all together with a Dependancy Injection tool)
In C#, we can have one class implement many interfaces. So your service implementations can implement many interface, and yet the caller need only know about the part they need.
For example, you might have an AccountTransactionService that implements IDepositor and IWithdrawer. If you implement double-entry accounting, then that could depend on IDepositor and IWithdrawer, which, in actual fact, might just be the same instance of AccountTransactionService - but it doesn't have to be, and the implementation details could be changed afterwards.
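A brief sketch of that idea (the member signatures are made up for illustration):
public interface IDepositor
{
void Deposit(int accountId, decimal amount);
}
public interface IWithdrawer
{
void Withdraw(int accountId, decimal amount);
}
public class AccountTransactionService : IDepositor, IWithdrawer
{
public void Deposit(int accountId, decimal amount) { /* ... */ }
public void Withdraw(int accountId, decimal amount) { /* ... */ }
}
// the double-entry code only knows about the narrow interfaces it needs
public class DoubleEntryPoster
{
private readonly IDepositor depositor;
private readonly IWithdrawer withdrawer;
public DoubleEntryPoster(IDepositor depositor, IWithdrawer withdrawer)
{
this.depositor = depositor;
this.withdrawer = withdrawer;
}
}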
In general, the less one class knows about the other classes in the system, the better.
I lean more toward the first method. Let the controller control what happens. Let the services decide how that happens.
If you add a second layer to control the flow of processes would that leave your ActionMethods only making the one call? If so, it seems unnecessary at that point.
You could have a service layer which depends on multiple repositories (not other services) and which defines the business operations:
public class MyService: IMyService
{
private readonly IAccountRepository _accountRepository;
private readonly IInvoiceRepository _invoiceRepository;
private readonly IBillingRepository _billingRepository;
public MyService(IAccountRepository accountRepository, IInvoiceRepository invoiceRepository, IBillingRepository billingRepository)
{
_accountRepository = accountRepository;
_invoiceRepository = invoiceRepository;
_billingRepository = billingRepository;
}
public void UpgradeAccountPackage(PackageType packageType, int accountId)
{
...
}
}
and then have your controller take IMyService as dependency:
public class HomeController: Controller
{
private readonly IMyService _service;
public HomeController(IMyService service)
{
_service = service;
}
public ActionResult UpgradeAccountPackage(PackageType packageType, int accountId)
{
_service.UpgradeAccountPackage(packageType, accountId);
...
}
}
I define simple CRUD operations with the entities on the repositories and the service aggregates those multiple operations into one business transaction.
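As a hedged sketch of that aggregation (the repository members and the BillingEntry/Invoice types are assumptions):
public void UpgradeAccountPackage(PackageType packageType, int accountId)
{
// one business transaction composed from simple CRUD operations
var account = _accountRepository.GetById(accountId);
account.Package = packageType; // assumed property
_accountRepository.Update(account);
_billingRepository.Add(new BillingEntry(accountId, packageType));
_invoiceRepository.Add(new Invoice(accountId, packageType));
}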
Hope I can explain this somewhat decently, as it's blowing a fuse in my brain today. I'm learning TDD in C#, so I'm still trying to rewire my brain to fit it.
Let's say I have a User class, that previously had a static method to retrieve a User object (simplified below).
public static User GetUser(string username)
{
User user = GetUserFromCache(username);
if(user == null)
{
user = GetUserFromDatabase(username);
StoreObjectInCache(user);
}
return user;
}
So I'm trying to rewrite this to use dependency injection so I can fake out the "GetUserFromDatabase" method if it needs to go there. That means I have to make the function not static. Whereas the data access layer would construct the user object from the database, mapping the returned columns to the object properties, a retrieval from cache would return a true-blue User object. However, in a non-static method, I can't just say
this = GetUserFromCache(username);
Because it just doesn't work that way. Though I'm by no means the world expert in how to dance around this with OO, it looks like I'd almost have to grab the User object from cache and write another mapping function that would store the returned User object properties into the new User instance.
What's the solution here? Any OO magic I'm missing? Is the only solution to refactor everything to use factories instead of having the instantiation logic in the object itself? Or have I been staring at this too long and missing something completely obvious?
I don't think you're missing any magic and I think refactoring to remove the persistence code from your business objects and into your persistence layer is the right way to go both from a unit testing and a design perspective. You may want to think about having the cache sit between your business layer and the persistence layer, mediating the retrieval/update of your business objects to simplify things. You should be able to mock/fake your cache and persistence layer if you separate things out this way.
Before
There is some code that uses the database to fetch a User and place it into the cache.
After
There is some code that uses the database to fetch a User
There is some code that places the User in the cache.
These two sets of code should not be dependent on each other.
public static class UserGetter
{
// GetUserFromDatabase and the cache helpers are the existing methods from the question
public static Func<string, User> Loader { get; set; }
static UserGetter()
{
Loader = GetUserFromDatabase;
}
public static User GetUser(string userName)
{
User user = GetUserFromCache(userName);
if (user == null)
{
user = Loader(userName);
StoreUserInCache(user);
}
return user;
}
}
public void Test1()
{
UserGetter.Loader = Mock.GetUser;
UserGetter.GetUser("Bob");
}
Classically, an interface would be used instead of a Func. If more than one method is involved, an interface is an obvious choice over a Func. If the method implementations themselves are static, a Func is a way to abstract over them.
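For comparison, a hedged sketch of the interface-based version (the names are illustrative and the cache helpers are stubbed):
public interface IUserLoader
{
User Load(string userName);
}
public class UserProvider
{
private readonly IUserLoader loader;
public UserProvider(IUserLoader loader)
{
this.loader = loader;
}
public User GetUser(string userName)
{
User user = GetUserFromCache(userName);
if (user == null)
{
user = loader.Load(userName);
StoreUserInCache(user);
}
return user;
}
// cache helpers as in the snippet above, stubbed for the sketch
private User GetUserFromCache(string userName) { return null; }
private void StoreUserInCache(User user) { }
}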
What I am missing in your example is the context of your call to "GetUser". This is probably because, with a static method, you don't need to think about it - you can call it from everywhere. With DI, this means the repository needs to be referenced by the caller in some way, most likely via a field.
When your cache is a field of some object, probably a facade, you could use this to make your cache a proxy for your database.
So you would have:
class ApplicationFacade{
private IUserRepository users = null;
public void DoStuff(){
this.users.GetUser("my-name");
}
}
where IUserRepository is a common interface for your cache, fake database and real database. Something simple like:
interface IUserRepository{
User GetUser(string username);
}
Your cache could now be a simple object implementing this interface and because cache is injected the DI container can also inject into it.
class Cache : IUserRepository {
private readonly IUserRepository users;
// the inner repository (e.g. the real database implementation) is injected by the DI container
public Cache(IUserRepository users){
this.users = users;
}
public User GetUser(string username){
if (this.NotCached(username)){
this.ToCache(this.users.GetUser(username));
}
return this.FromCache(username);
}
}
Now, depending on what you want, you can inject your fake, cache or database into your facade object, and if you use your cache object you can inject your fake or database into it as desired (or even another cache if you really wanted to).
Of course, the actual injection mechanism depends on your DI container and may require some extra code such as public properties or constructor parameters.
Take a look at Refactoring static method / static field for Testing
The approach suggested there may work for you, if for some reason you can't refactor everything to separate concerns as suggested in another answer.