SOLID-principle attempt, solid or not solid? - c#

In our layered architecture I am designing a BLL component called AppHandover and have written the basic high-level code for it. I want it to follow the SOLID principles, be loosely coupled, adopt separation of concerns and be testable.
Here is what AppHandover should do:
Check if the user owns the app. If not, throw an error.
Remove history if possible (i.e., no more apps are assigned to the user).
Transfer the ownership to the next instance.
Question is: am I on the right track, and does the following sample seem SOLID?
public interface ITransferOwnership
{
    void TransferOwnership(string userId, string appId, TransferDirection transferDirection);
}

public interface IOwnershipVerification
{
    bool UserOwnsApp(string userId, int budgetId, string appId);
}

public interface IPreserveHistoryCheck
{
    bool ShouldDeleteTemporaryBudgetData(string userId, int budgetId);
}

public interface IRemoveHistory
{
    void DeleteTemporaryBudgetData(string userId, int budgetId);
}
Handover process implementation
public class AppHandoverProcess : KonstruktDbContext, ITransferOwnership
{
    private IOwnershipVerification _ownerShipVerification;
    private IPreserveHistoryCheck _preserveHistory;
    private IRemoveHistory _removeHistory;
    private ITransferOwnership _transferOwnership;

    public AppHandoverProcess()
    {
    }

    public AppHandoverProcess(IOwnershipVerification ownerShipVerification,
        IPreserveHistoryCheck preserveHistory,
        IRemoveHistory removeHistory)
    {
        _ownerShipVerification = ownerShipVerification;
        _preserveHistory = preserveHistory;
        _removeHistory = removeHistory;
    }

    public void PerformAppHandover(string userId, string appId, int budgetId)
    {
        if (_ownerShipVerification.UserOwnsApp(userId, budgetId, appId))
        {
            if (_preserveHistory.ShouldDeleteTemporaryBudgetData(userId, budgetId))
            {
                _removeHistory.DeleteTemporaryBudgetData(userId, budgetId);
            }

            //handover logic here..
            _transferOwnership.TransferOwnership(userId, appId, TransferDirection.Forward);
        }
        else
        {
            throw new Exception("AppHandover: User does not own app, data cannot be handed over");
        }
    }
}

Concerning the code you outlined above, I definitely think you're on the right track. I would push the design a little further and define TransferOwnership as an additional injected interface.
Following this approach, your AppHandoverProcess is completely decoupled from its client, and the behaviour will be defined in the service configuration.
Enforcing isolation for TransferOwnership will allow you to easily unit test any object implementing the interface without the need to mock the AppHandoverProcess dependency.
Also, any AppHandoverProcess test should be trivial, as the only thing you'll need to verify is that your services are invoked or that the exception is thrown.
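As a rough sketch of that idea (all four collaborators injected through the constructor; the concrete implementations are wired up in your DI configuration):
public class AppHandoverProcess
{
    private readonly IOwnershipVerification _ownershipVerification;
    private readonly IPreserveHistoryCheck _preserveHistory;
    private readonly IRemoveHistory _removeHistory;
    private readonly ITransferOwnership _transferOwnership;

    public AppHandoverProcess(IOwnershipVerification ownershipVerification,
        IPreserveHistoryCheck preserveHistory,
        IRemoveHistory removeHistory,
        ITransferOwnership transferOwnership)
    {
        _ownershipVerification = ownershipVerification;
        _preserveHistory = preserveHistory;
        _removeHistory = removeHistory;
        _transferOwnership = transferOwnership;
    }

    // PerformAppHandover stays as in your sample, but now delegates the actual
    // transfer to the injected ITransferOwnership implementation.
}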
Hope this makes sense,
Regards.

I would make KonstruktDbContext an injectable dependency. AppHandoverProcess should not inherit from it, as that looks like a different responsibility.
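A minimal sketch of that (assuming KonstruktDbContext is your EF DbContext):
public class AppHandoverProcess : ITransferOwnership
{
    private readonly KonstruktDbContext _dbContext; // injected, not inherited

    public AppHandoverProcess(KonstruktDbContext dbContext /*, other dependencies */)
    {
        _dbContext = dbContext;
    }
}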


How to abstract the DTF framework in the application layer?

I have a question regarding clean architecture and the Durable Task Framework (DTF). But first, let me show you by example what we can do with DTF. DTF enables us to run workflows/orchestrations of individual tasks in the background. Here is an example:
public class EncodeVideoOrchestration : TaskOrchestration<string, string>
{
public override async Task<string> RunTask(OrchestrationContext context, string input)
{
string encodedUrl = await context.ScheduleTask<string>(typeof (EncodeActivity), input);
await context.ScheduleTask<object>(typeof (EmailActivity), input);
return encodedUrl;
}
}
The TaskOrchestration wires together individual tasks into a workflow. Here is how you define the tasks:
public class EncodeActivity : TaskActivity<string, string>
{
protected override string Execute(TaskContext context, string input)
{
Console.WriteLine("Encoding video " + input);
// TODO : actually encode the video to a destination
return "http://<azurebloblocation>/encoded_video.avi";
}
}
public class EmailActivity : TaskActivity<string, object>
{
protected override object Execute(TaskContext context, string input)
{
// TODO : actually send email to user
return null;
}
}
Pretty straightforward, right? Then you create a worker in Program.cs and register all the tasks and orchestrations:
TaskHubWorker hubWorker = new TaskHubWorker("myvideohub", "connectionDetails")
.AddTaskOrchestrations(typeof (EncodeVideoOrchestration))
.AddTaskActivities(typeof (EncodeActivity), typeof (EmailActivity))
.Start();
Using the DTF client you can actually trigger an orchestration:
TaskHubClient client = new TaskHubClient("myvideohub", "connectionDetails");
client.CreateOrchestrationInstance(typeof (EncodeVideoOrchestration), "http://<azurebloblocation>/MyVideo.mpg");
DTF handles all the magic in the background and can use different storage solutions such as Service Bus or even MS SQL.
Say our application is organized into folders like this:
Domain
Application
Infrastructure
UI
In the tasks we run application logic / use cases. But the DTF framework itself is infrastructure, right? If so, how would an abstraction of the DTF framework look in the application layer? Is it even possible to make the application layer unaware of the DTF?
In regards to the Clean Architecture approach, if you want to get rid of DTF in the Application layer, you can do the following (the original repo uses MediatR, so I did as well):
Implement the TaskActivity as a query/command and put it in the Application layer:
using MediatR;
public class EncodeVideoQuery : IRequest<string>
{
    public EncodeVideoQuery(string url)
    {
        Url = url;
    }

    public string Url { get; set; }
}

public class EncodeHandler : IRequestHandler<EncodeVideoQuery, string>
{
    public async Task<string> Handle(EncodeVideoQuery input, CancellationToken cancel)
    {
        Console.WriteLine("Encoding video " + input.Url);
        // TODO : actually encode the video to a destination
        return "http://<azurebloblocation>/encoded_video.avi";
    }
}

public class EmailCommand : IRequest
{
    public string UserEmail { get; set; }
}

public class EmailCommandHandler : IRequestHandler<EmailCommand>
{
    public async Task<Unit> Handle(EmailCommand input, CancellationToken cancel)
    {
        // TODO : actually send email to user
        return Unit.Value;
    }
}
Implement the actual DTF classes (I looked up that they support async) and put them into the "UI" layer. There's no UI, but technically it's a console application:
using MediatR;
public class EncodeActivity : TaskActivity<string, string>
{
    private readonly ISender mediator;

    public EncodeActivity(ISender mediator)
    {
        this.mediator = mediator;
    }

    protected override Task<string> ExecuteAsync(TaskContext context, string input)
    {
        // Perhaps no ability to pass a CancellationToken
        return mediator.Send(new EncodeVideoQuery(input));
    }
}
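A matching sketch for the email side might look like this (assuming EmailCommand implements IRequest so MediatR can dispatch it; the mapping from the activity input to the command is made up here):
public class EmailActivity : TaskActivity<string, object>
{
    private readonly ISender mediator;

    public EmailActivity(ISender mediator)
    {
        this.mediator = mediator;
    }

    protected override async Task<object> ExecuteAsync(TaskContext context, string input)
    {
        // Hypothetical mapping: the orchestration input carries the user's email address.
        await mediator.Send(new EmailCommand { UserEmail = input });
        return null;
    }
}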
I think your question is not really just a single question regarding the code, but a request for the whole concept of how to make that main program "unaware" of the specific DTF library you are going to use.
Well, it involves several areas of functionality you will need to use in order to accomplish that. I added a diagram for how the architecture should look to achieve what you ask for; however, I didn't focus on the syntax there, since the question is about architecture and not the code itself as I understood it, so treat it as pseudo-code - it is just to deliver the concept.
The key idea is that you will have to read the path or name of the DLL you wish to load from a configuration file (such as app.config), but to do that you will need to learn how to create custom configuration elements in a configuration file.
You can read about those in the links:
https://learn.microsoft.com/en-us/dotnet/framework/configure-apps/
https://learn.microsoft.com/en-us/dotnet/api/system.configuration.configuration?view=dotnet-plat-ext-6.0
Next you need to dynamically load the assembly; you can read about how to load assemblies dynamically here: https://learn.microsoft.com/en-us/dotnet/framework/app-domains/how-to-load-assemblies-into-an-application-domain
Once you're past that, remember that the DLL you are loading is still something you need to implement, and it needs to be aware of the specific DTF library you wish to reference; however, it also implements an interface that is well known in your application.
So basically you will have an interface describing the abstraction your program needs from a DTF library (any DTF library), and your proxy DLL, which is loaded at runtime, will act as a mediator between that interface and the actual implementation of the specific DTF library.
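A minimal sketch of that idea (all names here are hypothetical; the assembly path would come from your custom configuration section):
using System;
using System.Configuration;
using System.Linq;
using System.Reflection;

// The abstraction your application needs from any DTF-like library.
public interface IWorkflowEngine
{
    void StartWorkflow(string workflowName, string input);
}

public static class WorkflowEngineLoader
{
    public static IWorkflowEngine Load()
    {
        // Read the proxy DLL path from configuration (app.config / custom section).
        string assemblyPath = ConfigurationManager.AppSettings["WorkflowEngineAssembly"];

        // Load the proxy assembly at runtime and find the type implementing the interface.
        Assembly proxyAssembly = Assembly.LoadFrom(assemblyPath);
        Type engineType = proxyAssembly.GetTypes()
            .First(t => typeof(IWorkflowEngine).IsAssignableFrom(t) && !t.IsAbstract);

        return (IWorkflowEngine)Activator.CreateInstance(engineType);
    }
}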
And so, per your questions:
how would an abstraction of the DTF framework look like in the
application layer?
Look at the diagram I provided.
Is it even possible to make the application layer unaware of the DTF?
Yes, like in any language that supports plugins/extensions/proxies.
You have to fit your implementation with the ubiquitous language. In the specific example: who does the encoding, and when? Whichever entity or service (the client) does the encoding will simply call an IEncode.Encode interface method that'll take care of the "details" involved in invoking DTF.
Yes, the definition for DTF is in the Infrastructure, and it should be treated like everything else in the infrastructure, such as Logging or Notifications. That is: the functionality should be put behind an interface that can be injected into the Domain and used by its Domain clients.
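For example, a rough sketch (the names are made up, and waiting for the orchestration result is omitted): the application layer owns a small interface, and the infrastructure layer implements it on top of DTF.
// Application layer: no reference to DTF at all.
public interface IVideoEncoder
{
    Task<string> EncodeAsync(string sourceUrl);
}

// Infrastructure layer: adapts the interface to the Durable Task Framework.
public class DtfVideoEncoder : IVideoEncoder
{
    private readonly TaskHubClient _client;

    public DtfVideoEncoder(TaskHubClient client)
    {
        _client = client;
    }

    public Task<string> EncodeAsync(string sourceUrl)
    {
        // Kicks off the orchestration; tracking its result would need extra plumbing.
        _client.CreateOrchestrationInstance(typeof(EncodeVideoOrchestration), sourceUrl);
        return Task.FromResult(sourceUrl);
    }
}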
You could wrap the activities in a library that returns simple Tasks, and might mix long-running activities with short-running ones. Something like
public class BusinessContext
{
    private readonly OrchestrationContext context;

    public BusinessContext(OrchestrationContext context)
    {
        this.context = context;
    }

    public async Task<string> SendGreeting(string user)
    {
        return await context.ScheduleTask<string>(typeof(SendGreetingTask), user);
    }

    public async Task<string> GetUser()
    {
        return await context.ScheduleTask<string>(typeof(GetUserTask));
    }
}
Then the orchestration is a bit cleaner
public override async Task<string> RunTask(OrchestrationContext context, string input)
{
//string user = await context.ScheduleTask<string>(typeof(GetUserTask));
//string greeting = await context.ScheduleTask<string>(typeof(SendGreetingTask), user);
//return greeting;
var bc = new BusinessContext(context);
string user = await bc.GetUser();
string greeting = await bc.SendGreeting(user);
return greeting;
}
The Durable Task Framework has already done all the abstractions for you. TaskActivity is your abstraction:
public abstract class TaskActivity<TInput, TResult> : AsyncTaskActivity<TInput, TResult>
{
protected TaskActivity();
protected abstract TResult Execute(TaskContext context, TInput input);
protected override Task<TResult> ExecuteAsync(TaskContext context, TInput input);
}
You can work with the TaskActivity type in your Application Layer; you don't care about its implementation. The implementation of TaskActivity goes to the lower layers (probably the Infrastructure Layer, but some tasks might be more suitable to define as a Domain Service, if they contain domain logic).
If you want, you can also group the task activities; for example, you can define a base class for an email activity:
Domain Layer Service (Abstraction)
public abstract class EmailActivityBase : TaskActivity<string, object>
{
public string From { get; set; }
public string To { get; set; }
public string Body { get; set; }
}
This is your abstraction of an email activity. Your Application Layer is only aware of the EmailActivityBase class.
Infrastructure Layer Implementation
The implementation of this class goes to Infrastructure Layer:
Production email implementation
public class EmailActivity : EmailActivityBase
{
protected override object Execute(TaskContext context, string input)
{
// TODO : actually send email to user
return null;
}
}
Test email implementation
public class MockEmailActivity : EmailActivityBase
{
protected override object Execute(TaskContext context, string input)
{
// TODO : create a file in local storage instead of sending an email
return null;
}
}
Where to Put Task Orchestration Code?
Depending on your application, this may change. For example, if you are using AWS you can use AWS Lambda for orchestration; if you are using Windows Azure, you can use Azure Automation, or you can even create a separate Windows service to execute the tasks (obviously the Windows service will have a dependency on your application). Again, this really depends on your application, but it may not be a bad idea to put these housekeeping jobs in a separate module.

Loosely coupling class

I'm making an application that uses an external API, but I don't want my application to be dependent on the API. So I have been reading about how to achieve this, and I read that the thing I want is loose coupling. I want to loosely couple my class that uses the external API from the rest of my application. My question is how do I achieve this? I've read about different design patterns, but I can't find one that helps with my problem.
public class GoogleCalendarService
{
private const string CalendarId = ".....";
private CalendarService Authenticate(string calendarId)
{
...
}
public void Create(Booking newBooking, string userId)
{
...
InsertEvent(newEvent, userId);
}
private void Insert(Event newEvent, string userId)
{
call authenticate account
....
}
public List<Booking> GetEvents()
{
call authenticate account
...
}
}
Above is my code for the class that uses the external API. In the rest of my application I use this class the following way:
public class MyApplication
{
private void MyFunction()
{
GoogleCalendarService googleCalendarService = new GoogleCalendarService();
googleCalendarService.CreateEvent(..., ...)
}
}
I do this in multiple places in my application. So my question is: how can I loosely couple the API class from the rest?
Edit: I probably want a general calendar service interface that makes it easier to replace the Google calendar service with another calendar service when needed.
that makes it easier to replace the Google calendar service with another calendar service
The main pattern you will want to look at is Adapter. But you would want to use that in combination with Dependency Injection.
The DI first:
public class MyApplication
{
// constructor injection
private IGeneralCalendarService _calendarService;
public MyApplication(IGeneralCalendarService calendarService)
{
_calendarService = calendarService;
}
private void MyFunction()
{
_calendarService.CreateEvent(..., ...)
}
}
And the Adapter would look something like
public class GoogleCalendarServiceAdapter : IGeneralCalendarService
{
// implement the interface by calling the Google API.
}
In addition, you will need generic classes for Event etc. They belong to the same layer as the interface.
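For example, the abstraction and its generic event type might look like this (a sketch; the names are assumptions):
// Lives in the same layer as the interface, with no Google types in it.
public class CalendarEvent
{
    public string Title { get; set; }
    public DateTime Start { get; set; }
    public DateTime End { get; set; }
}

public interface IGeneralCalendarService
{
    void CreateEvent(CalendarEvent calendarEvent, string userId);
    List<CalendarEvent> GetEvents(string userId);
}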
You need to write a wrapper around that API and rewrite every output/input of that API with your wrapper's I/O. After that, you can take advantage of Dependency Injection to use your own code. This way you have an abstraction layer around that API.

Single Responsibility Principle Concerns (Am I thinking about refactoring properly)

My current class PropertyManager looks like this:
public class PropertyManager : IDisposable
{
private readonly IPropertyRepo _propertyRepo;
private readonly IUserTypeRepo _userTypeRepo;
public PropertyManager(IPropertyRepo propertyRepo, IUserTypeRepo userTypeRepo = null)
{
if (propertyRepo == null)
throw new ArgumentNullException("propertyRepo");
_propertyRepo = propertyRepo;
if (userTypeRepo != null)
_userTypeRepo = userTypeRepo;
}
}
My PropertyManager will use the _userTypeRepo in some method to accomplish some task. I think I want to implement a rule that says "Each Manager (Service, Factory, etc.) should be responsible for its own repository."
The idea:
Because the PropertyManager needs to do something with the UserTypeRepo, I should be using the UserManager for such activities.
As such, this means that I will not provide a repo when creating an instance of the UserManager (i.e., var usrMgr = new UserManager(); // no repo). Instead, the UserManager will use the default constructor, which will create a new instance of the IUserTypeRepo, and then it can do its work.
I think this accomplishes some design principles such as Separation of Concerns and Single Responsibility, but then I may be getting away from my Dependency Injection pattern, as the new Managers would now have multiple constructors and look like this:
public class PropertyManager : IDisposable
{
    private readonly IPropertyRepo _propertyRepo;

    public PropertyManager()
    {
        // use the default repo
        _propertyRepo = new PropertyRepo();
    }

    // Used from Controller or Unit Testing
    public PropertyManager(IPropertyRepo propertyRepo)
    {
        if (propertyRepo == null)
            throw new ArgumentNullException("propertyRepo");
        _propertyRepo = propertyRepo;
    }
}

public class UserManager : IDisposable
{
    private readonly IUserRepo _userRepo;

    public UserManager()
    {
        // use the default repo
        _userRepo = new UserRepo();
    }

    // Used from Controller or Unit Testing
    public UserManager(IUserRepo userRepo)
    {
        if (userRepo == null)
            throw new ArgumentNullException("userRepo");
        _userRepo = userRepo;
    }
}
Would this be frowned upon, or am I on the right track? In either case, why? Thanks.
Update. After reading Yawar's post I decided to update my post and I think I have a relevant concern.
Let's think of a real-world example of the above. I have a PropertyManager in real life named "Robert"; one of the jobs he performs each morning at work is to Open() the Property (i.e., he unlocks the Property he is the Manager of). I also have a UserManager who manages people who visit the Property; her name is "Sarah", and she has a function called EnterProperty() (which is what she does in the morning when she physically walks into the building).
Rule: UserManager has a dependency on PropertyManager when using EnterProperty().
According to all accepted standards, this looks like this:
Property Manager
class PropertyManager : IPropertyManager
{
private readonly IPropertyRepo _propertyRepo;
public PropertyManager(IPropertyRepo propertyRepo)
{
if (propertyRepo == null)
throw new ArgumentNullException("propertyRepo");
this._propertyRepo = propertyRepo;
}
// this is when Robert opens the property in the morning
public void Open()
{
_propertyRepo.Open();
}
// this is when Robert closes the property in the evening
public void Close()
{
_propertyRepo.Close();
}
// this answers the question
public bool IsOpen()
{
return _propertyRepo.IsOpen();
}
}
User Manager
class UserManager : IUserManager
{
private readonly IPropertyRepo _propertyRepo;
private readonly IUserRepo _userRepo;
public UserManager(IUserRepo userRepo, IPropertyRepo propertyRepo = null)
{
if (userRepo == null)
throw new ArgumentNullException("userRepo");
this._userRepo = userRepo;
if (propertyRepo != null)
this._propertyRepo = propertyRepo;
}
// this allows Sarah to physically enter the building
public void EnterProperty()
{
if(_propertyRepo.IsOpen())
{
Console.WriteLine("I'm in the building.");
}else{
_propertyRepo.Open(); // here is my issue (explain below)
Console.WriteLine("Even though I had to execute the Open() operation, I'm in the building. Hmm...");
}
}
}
Web API Controller
public void OpenForBusiness()
{
    IPropertyRepo propertyRepo = new PropertyRepo();
    IPropertyManager propertyManager = new PropertyManager(propertyRepo);
    IUserManager userManager = new UserManager(new UserRepo(), propertyRepo);

    // Robert, the `PropertyManager`, opens the `Property` in the morning
    propertyManager.Open();

    // Sarah, the `UserManager`, goes into `Property` after it is opened
    userManager.EnterProperty();
}
Now, everything is cool and I can walk away: I have a Repository Pattern which uses Dependency Injection, supports TDD, and avoids tightly coupled classes, among other benefits.
However, is that truly realistic? (I explain why I ask in a second.)
I think a more real-world (realistic) approach is one that does this:
Web API Controller
public void Method1()
{
    IPropertyManager propMgr = new PropertyManager(new PropertyRepo());
    IUserManager userMgr = new UserManager(new UserRepo()); // no dependencies on any repository but my own

    // 1. Robert, the `PropertyManager`, opens the `Property`
    propMgr.Open();

    // 2. Check to see if `Property` is open before entering
    //    choice a. try to open the door of the `Property`
    //    choice b. call or text Robert, the `PropertyManager`, and ask him if he opened the `Property` yet, so...
    if (propMgr.IsOpen())
    {
        // 3. Sarah, the `UserManager`, arrives at work and enters the `Property`
        userMgr.EnterProperty();
    }
    else
    {
        // sol, that sucks, I can't enter the `Property` until the authorized person - Robert - the `PropertyManager` opens it
        // right???
    }
}
the EnterProperty() method on the UserManager now looks like this:
// this allows Sarah to physically enter the building
public void EnterProperty()
{
Console.WriteLine("I'm in the building.");
}
The promised explanation from above:
If we think in real-world terms we must agree that the latter is preferred over the former. When thinking of a Repository, let's say it is the definition of one's self (i.e., one's Person): the UserRepo, having all the data related to the User, is to the UserManager as the DNA, heartbeat, brain wave pattern, etc. are to a Human (the HumanRepo). As such, allowing the UserManager to know about the PropertyRepo and have access to its Open() method violates all real-world security principles and business rules. In reality this says that through my Constructor() I can get an interface representation of a PropertyRepo that I can use any way I see fit. This is synonymous with the following logic for the HumanRepo:
I, Sarah - a UserManager - through a new instance of myself with the PropertyRepo satisfied through my Constructor(), create a hologram interface of Robert, the PropertyManager, that I can use any way I see fit. Granted, right now I only want to use the IsOpen() method of the PropertyRepo, but I can actually use the Open() method to do it myself if Robert has not yet performed his duty. This is a security concern to me. In the real world this says I don't have to wait for Robert to open the Property; I can use the holo-copy of him and invoke his Open() method to get access.
That doesn't seem right.
I think with the last implementation I get SoC, SRP, DI, the Repository Pattern, TDD, and logical security, and I'm as close to a real-world implementation as possible.
What do you all think?
I think I agree with your SoC and with breaking the PropertyManager class into PropertyManager and UserManager classes. You are almost there.
I would just refactor as shown below:
public class PropertyManager : IDisposable, IPropertyManager
{
private readonly IPropertyRepo _propertyRepo;
// Used from Controller or Unit Testing
public PropertyManager(IPropertyRepo propertyRepo)
{
if (propertyRepo == null)
throw new ArgumentNullException("propertyRepo");
this._propertyRepo = propertyRepo;
}
}
public class UserManager : IDisposable, IUserManager
{
private readonly IUserRepo _userRepo;
// Used from Controller or Unit Testing
public UserManager(IUserRepo userRepo)
{
if (userRepo == null)
throw new ArgumentNullException("userRepo");
this._userRepo = userRepo;
}
}
Note: Just extract IPropertyManager & IUserManager so that the calling classes will depend upon the interfaces and provide the implementation.
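For instance, based on the methods in your updated example, the extracted interfaces could be as simple as:
public interface IPropertyManager
{
    void Open();
    void Close();
    bool IsOpen();
}

public interface IUserManager
{
    void EnterProperty();
}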
Creating a parameterless constructor is useless if you want to (and you should) force the client to provide a concrete implementation of the IPropertyRepo and IUserRepo interfaces:
public PropertyManager(){
// use the default repo
_propertyRepo = new PropertyRepo();
}
I don't think you would need
if (propertyRepo == null)
throw new ArgumentNullException("propertyRepo");
or
if (userRepo == null)
throw new ArgumentNullException("userRepo");
as IPropertyRepo and IUserRepo will be resolved via an IoC container at the startup of your application (say it's MVC; then before calling the controller, the IoC container will resolve them), so there is no need to check for null. I have never checked the dependencies for null in my code.
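For example, with Microsoft.Extensions.DependencyInjection (other containers such as Unity or Autofac have equivalent calls), the composition root registers the abstractions once:
// Composition root (e.g. Startup): the container injects these into controllers and managers.
services.AddScoped<IPropertyRepo, PropertyRepo>();
services.AddScoped<IUserRepo, UserRepo>();
services.AddScoped<IPropertyManager, PropertyManager>();
services.AddScoped<IUserManager, UserManager>();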
From what you have posted here, that's pretty much it.
The Unit of Work pattern is used at the repository layer, not in the manager layer. I would delete that from the title.
Hope this helps!
I think this accomplishes some OOP goal such as Separating Concerns
and the Single Responsibility Principle.
The result is the opposite. Now PropertyManager is tightly coupled to PropertyRepo; previously, they were loosely coupled.
The first approach is better than the latter one. However, PropertyManager and UserManager should not create the other objects on which they rely to do their work. The responsibility for creating and managing objects should be offloaded to an IoC container.
Interfaces describe what can be done, whereas classes describe how it is done. Only classes involve the implementation details—interfaces are completely unaware of how something is accomplished. Because only classes have constructors, it follows that constructors are an implementation detail. An interesting corollary to this is that, aside from a few exceptions, you can consider an appearance of the new keyword to be a code smell. - Gary McLean Hall
Answer for Updated Question:
In your updated question, you combine Service/Manager and, to some extent, Domain into a single class (PropertyManager, UserManager). It becomes a matter of personal preference.
I personally like to keep them separate. In addition, I like to use role-based and claim-based authorization. Let me use my GitHub sample project as a reference. Please feel free to clone it.
User Domain
User class is also used by Entity Framework Code First Fluent API.
public partial class User
{
public int Id { get; set; }
public string UserName { get; set; }
public string FirstName { get; set; }
}
User Service
public class UserService : IUserService
{
private readonly IRepository<User> _repository;
public UserService(IRepository<User> repository)
{
_repository = repository;
}
public async Task<IPagedList<User>> GetUsersAsync(UserPagedDataRequest request)
{
...
}
}
Action Method
Notice that UI-related business logic stays at the UI layer.
public async Task<ActionResult> Login(LoginModel model, string returnUrl)
{
if (ModelState.IsValid)
{
bool result = _activeDirectoryService.ValidateCredentials(
model.Domain, model.UserName, model.Password);
if (result)
{
...
}
}
...
}
You can take quite a different approach (ignoring your repositories, but allowing for them to be injected).
In this system, the property is only readable, with an event system to handle the mutations; the event system also has a rules system which controls what mutations are allowed. This means even if you have a property object you can't mutate it without going through its rules.
This code is more conceptual. The next logical step is to use a full actor model and something like Akka.NET, and you may find your repository pattern just disappearing :)
public class Property
{
public string Name { get; private set; }
private IPropertyRules _rules;
private List<User> _occupants = new List<User>();
private IEventLog _eventLog;
public Property(IPropertyRules rules, IEventLog eventLog)
{
_rules = rules;
_eventLog = eventLog;
}
public ActionResult Do(IAction action, User user)
{
_eventLog.Add(action, user);
if (_rules.UserAllowedTo(action, user, this))
{
switch (action)
{
case Open o:
Open();
return new ActionResult(true, $"{user} opened {Name}");
case Enter e:
Enter(user);
return new ActionResult(true, $"{user} entered {Name}");
}
return new ActionResult(false, $"{Name} does not know how to {action} for {user}");
}
return new ActionResult(false, $"{user} is not allowed to {action} {Name}");
}
private void Enter(User user)
{
_occupants.Add(user);
}
private void Open()
{
IsOpen = true;
}
public bool IsOpen { get; set; }
}
public interface IEventLog
{
void Add(IAction action, User user);
}
public class Enter : IAction
{
}
public interface IPropertyRules
{
bool UserAllowedTo(IAction action, User user, Property property);
}
public class Open : IAction
{
}
public class ActionResult
{
public ActionResult(bool successful, string why)
{
Successful = successful;
WhatHappened = why;
}
public bool Successful { get; private set; }
public string WhatHappened { get; private set; }
}
public interface IAction
{
}
public class User
{
}

How to write unit tests for proxy pattern?

I will be thankful for your attention, time and efforts!
I have the following code:
public class Employee
{
public string FirstName { get; set; }
public string LastName { get; set; }
public string Role { get; set; }
}
public interface IEmployeeRepository
{
Employee GetEmployee(string firstName, string role);
}
public class EmployeeRepository : IEmployeeRepository
{
public Employee GetEmployee(string firstName, string role)
{
//logic here
return new Employee();
}
}
Now I want to implement a cache for EmployeeRepository.
At first I did it using the Proxy design pattern:
public class ProxyEmployeeRepository : IEmployeeRepository
{
private EmployeeRepository _employeeRepository = new EmployeeRepository();
private MemoryCache _cache = new MemoryCache("UsualCache");
public Employee GetEmployee(string firstName, string role)
{
//do not cache administrators
if (role == "admin")
{
return _employeeRepository.GetEmployee(firstName, role);
}
else
{
//get from cache at first
//if absent call _employeeRepository.GetEmployee and add to cache
//...
}
    }
}
But when I wanted to write unit tests for this class, I couldn't do it (I cannot create a mock for _employeeRepository and verify whether it was called or not).
If I implement the cache with the Decorator pattern, then I would have the following code:
public class DecoratorEmployeeRepository : IEmployeeRepository
{
private IEmployeeRepository _employeeRepository;
public DecoratorEmployeeRepository(IEmployeeRepository repository)
{
_employeeRepository = repository;
}
private MemoryCache _cache = new MemoryCache("UsualCache");
public Employee GetEmployee(string firstName, string role)
{
//do not cache administrators
if (role == "admin")
{
return _employeeRepository.GetEmployee(firstName, role);
}
else
{
//get from cache at first
//if absent call _employeeRepository.GetEmployee and add to cache
return null;
}
}
}
and unit tests for it
[TestClass]
public class EmployeeRepositoryTests
{
[TestMethod]
public void GetEmployeeTest_AdminRole()
{
var innerMock = Substitute.For<IEmployeeRepository>();
var employeeRepository = new DecoratorEmployeeRepository(innerMock);
employeeRepository.GetEmployee("Ihor", "admin");
innerMock.Received().GetEmployee(Arg.Any<string>(), Arg.Any<string>());
}
[TestMethod]
public void GetEmployeeTest_NotAdminRole()
{
var innerMock = Substitute.For<IEmployeeRepository>();
var employeeRepository = new DecoratorEmployeeRepository(innerMock);
employeeRepository.GetEmployee("Ihor", "NotAdmin");
innerMock.DidNotReceive().GetEmployee("Ihor", "NotAdmin");
}
}
Is it possible to write unit tests for the first approach with the proxy pattern? I just don't understand how it is possible to cover the proxy class with unit tests...
I know it is too late to answer your question but it might help other new visitors:
I think your problem is a misunderstanding of both patterns. Using composition instead of instantiating the class inside the proxy does not necessarily mean that you have changed your pattern from proxy to decorator. Each of these patterns solves a specific problem. Let me clarify each:
Decorator Pattern:
This pattern is useful when you have different kinds of behaviours in your main class (like caching, logging, lazy loading, etc.) and you want to use each of these, or a combination of them, in different places of your application. For example, in your controller you need only caching, in the admin controller you don't need caching but logging, and in another service you need both plus lazy loading. Therefore you will create three decorators, one for each extra behaviour (caching, logging and lazy loading), and in each place you chain the decorators into each other to provide the required combination of behaviours. The benefit of this pattern is that each class has only one responsibility. Additionally, your application is open to extension and closed to modification. If you need a new behaviour, you can simply implement a new decorator from the interface and add it only to the services or controllers where the new behaviour is required, without modifying the current implementation.
Proxy Pattern:
This pattern is useful when you want to add a specific behaviour (or behaviours) that your class requires, where that behaviour can prevent the actual behaviour (e.g., querying the database) and/or introduce new behaviour. (This is unlike the decorator pattern, which only enhances the main behaviour.) Another usage of this pattern is when instantiating the main class is costly. In contrast to the decorator, you do not need each behaviour (or various combinations of them) separately in several places of your application.
The benefit of this pattern is that it prevents adding several responsibilities to your main class. Besides, it is still closed to modification and open to extension. If the requirements change in the future, you can simply implement a new proxy and replace the current one, or use it separately.
The answer to your question:
Therefore, as I mentioned above, by composing against your interface instead of instantiating the class directly, you are not changing the pattern. In the proxy pattern, the wrapped class can be injected via the interface or as the concrete implementation as well.
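So a testable version of your proxy only needs the wrapped repository handed in through the constructor; it is still a caching proxy. A sketch (the cache key and expiration are made up):
public class ProxyEmployeeRepository : IEmployeeRepository
{
    private readonly IEmployeeRepository _employeeRepository;
    private readonly MemoryCache _cache = new MemoryCache("UsualCache");

    // The wrapped repository is injected, so it can be substituted by a mock in tests.
    public ProxyEmployeeRepository(IEmployeeRepository employeeRepository)
    {
        _employeeRepository = employeeRepository;
    }

    public Employee GetEmployee(string firstName, string role)
    {
        // Do not cache administrators.
        if (role == "admin")
            return _employeeRepository.GetEmployee(firstName, role);

        string key = firstName + ":" + role;
        var employee = (Employee)_cache.Get(key);
        if (employee == null)
        {
            employee = _employeeRepository.GetEmployee(firstName, role);
            if (employee != null)
                _cache.Add(key, employee, DateTimeOffset.UtcNow.AddMinutes(5));
        }
        return employee;
    }
}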

Reusing service calls in unit of work pattern

I have a scenario using Web API, a generic repository, EF6 and the unit of work pattern
(in order to wrap all changes from several calls to the same context).
The Manager layer is used to perform calls to different repositories and also to other managers.
Currently CustomerManager injects both repositories and other managers, like:
public class CustomerManager {
public CustomerManager(IRepository<Customer> _customerRepository, IRepository<Order> orderRepository, IManager itemManager) {
_orderReporsitory = orderReporsitory;
_itemManager = itemManager;
_customerRepository = customerRepository;
}
public bool Save(Customer customer) {
_orderReporsitory.Find...
_itemManager.IsItemUnique(ItemId)
_customerRepository.Save(customer);
}
}
This code does not compile, for reference only.
Approaches like this
http://blog.longle.net/2013/05/11/genericizing-the-unit-of-work-pattern-repository-pattern-with-entity-framework-in-mvc/
will wrap several repositories under a unit of work and flush the changes all together.
My issue also involves adding another manager layer, to be wrapped inside the unit of work as well, allowing calls both to repositories and to other managers
(as I want to reuse some manager logic; like in the example, I am re-using some ItemManager logic).
This code https://stackoverflow.com/a/15527444/310107
using (var uow = new UnitOfWork<CompanyContext>())
{
var catService = new Services.CategoryService(uow);
var custService = new Services.CustomerService(uow);
var cat = new Model.Category { Name = catName };
catService.Add(cat);
custService.Add(new Model.Customer { Name = custName, Category = cat });
uow.Save();
}
is doing something similar to what I need, but I would also like to be able to inject the services to unit test them (and not create instances in the body of my manager/service methods).
What would be the best approach to do this?
Thanks
Your code snippet with the unit of work has several problems, such as:
You create and dispose the unit of work explicitly within that method, forcing you to pass along that unit of work from method to method and class to class.
This causes you to violate the Dependency Inversion Principle, because you now depend on concrete types (CategoryService and CustomerService), which complicates your code and makes it harder to test.
If you need to change the way the unit of work is created, managed or disposed, you will have to make sweeping changes throughout the application; a violation of the Open/Closed Principle.
I expressed these problems in more detail in this answer.
Instead, I propose to have one DbContext, share it through a complete request, and control its lifetime in the application's infrastructure, instead of explicitly throughout the code base.
A very effective way of doing this is by placing your service layer behind a generic abstraction. Although the name of this abstraction is irrelevant, I usually call this abstraction "command handler":
public interface ICommandHandler<TCommand>
{
void Handle(TCommand command);
}
There are a few interesting things about this abstraction:
The abstraction describes one service operation or use case.
Any arguments the operation might have are wrapped in a single message (the command).
Each operation gets its own unique command class.
Your CustomerManager for instance, might look as follows:
[Permission(Permissions.ManageCustomerDetails)]
public class UpdateCustomerDetailsCommand
{
    public Guid CustomerId { get; set; }
    [Required] public string FirstName { get; set; }
    [Required] public string LastName { get; set; }
    [ValidBirthDate] public DateTime DateOfBirth { get; set; }
}

public class UpdateCustomerDetailsCommandHandler
    : ICommandHandler<UpdateCustomerDetailsCommand>
{
    private readonly IRepository<Customer> _customerRepository;
    private readonly IRepository<Order> _orderRepository;
    private readonly IManager _itemManager;

    public UpdateCustomerDetailsCommandHandler(
        IRepository<Customer> customerRepository,
        IRepository<Order> orderRepository,
        IManager itemManager)
    {
        _customerRepository = customerRepository;
        _orderRepository = orderRepository;
        _itemManager = itemManager;
    }

    public void Handle(UpdateCustomerDetailsCommand command)
    {
        var customer = _customerRepository.GetById(command.CustomerId);
        customer.FirstName = command.FirstName;
        customer.LastName = command.LastName;
        customer.DateOfBirth = command.DateOfBirth;
    }
}
This might look like just a bunch of extra code, but having this message and this generic abstraction allows us to easily apply cross-cutting concerns, such as handling the unit of work for instance:
public class CommitUnitOfWorkCommandHandlerDecorator<TCommand>
: ICommandHandler<TCommand> {
private readonly IUnitOfWork unitOfWork;
private readonly ICommandHandler<TCommand> decoratee;
public CommitUnitOfWorkCommandHandlerDecorator(
IUnitOfWork unitOfWork,
ICommandHandler<TCommand> decoratee) {
this.unitOfWork = unitOfWork;
this.decoratee = decoratee;
}
public void Handle(TCommand command) {
this.decoratee.Handle(command);
this.unitOfWork.SaveChanges();
}
}
The class above is a decorator: It both implements ICommandHandler<TCommand> and it wraps ICommandHandler<TCommand>. This allows you to wrap an instance of this decorator around each command handler implementation and allow the system to transparently save the changes made in the unit of work, without any piece of code having to do this explicitly.
It is also possible to create a new unit of work here, but the easiest thing to start with is to let the unit of work live for the duration of the (web) request.
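Wiring it up by hand could look roughly like this (a sketch; in practice your container composes the decorator for you, and the unit of work and repositories shown here are assumed to exist):
ICommandHandler<UpdateCustomerDetailsCommand> handler =
    new CommitUnitOfWorkCommandHandlerDecorator<UpdateCustomerDetailsCommand>(
        unitOfWork,
        new UpdateCustomerDetailsCommandHandler(customerRepository, orderRepository, itemManager));

// The caller only sees ICommandHandler<T>; saving the unit of work happens transparently.
handler.Handle(new UpdateCustomerDetailsCommand
{
    CustomerId = customerId,
    FirstName = "Jane",
    LastName = "Doe",
    DateOfBirth = new DateTime(1980, 1, 1)
});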
This decorator, however, is just the beginning of what you can do with decorators (a validation decorator is sketched after this list as an example). For instance, it will be trivial to:
Apply security checks
Do user input validation
Run the operation in a transaction
Apply a deadlock retry mechanism.
Prevent reposts by doing deduplication.
Register each operation in an audit trail.
Store commands for queuing or background processing.
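As an example from that list, a validation decorator could be sketched like this, using the DataAnnotations attributes already on the command (the custom [ValidBirthDate] attribute would need to derive from ValidationAttribute for this to pick it up):
using System.ComponentModel.DataAnnotations;

public class ValidationCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decoratee;

    public ValidationCommandHandlerDecorator(ICommandHandler<TCommand> decoratee)
    {
        this.decoratee = decoratee;
    }

    public void Handle(TCommand command)
    {
        // Throws ValidationException when [Required] etc. are violated.
        Validator.ValidateObject(command, new ValidationContext(command), validateAllProperties: true);
        this.decoratee.Handle(command);
    }
}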
More information can be found in the articles, here, here and here.
