How to encapsulate business logic in business entities? - C#

In a 3-tier application, I am using my business entities to generate the DbSets on my DbContext.
In the business layer:
public class User
{
    public string Name { get; set; }
}
In the data layer:
public class Context : DbContext
{
    public DbSet<User> Users { get; set; }
}
My question then is how can I encapsulate logic on the entities? I could use extension methods, but I also want some properties, and I don't want them to leak outside the domain layer.

With this type of architecture, it's best to create Interactors that contain all the business logic. That way your domain models (such as User) can be very lightweight.
There are two common ways to go about creating Interactors. One way is to create a Service object. The service can offer all use cases and perform all business logic. This approach works better for simple domain models and for small/medium applications.
Service Interactor Example:
public class UserService
{
    public void ChangeUsername(User user, string name)
    {
        // business logic, e.g. validate the new name, then apply it
        user.Name = name;
    }
}
Another common way to encapsulate business logic is to create an object per use case. Whenever you add a new operation, simply create a new class. This requires more initial work and a better grasp of enterprise architecture, but results in a very scalable solution.
Use Case Interactor Example:
public class ChangeUsernameOperation
{
    public void Execute(User user, string name)
    {
        // business logic for this single use case
        user.Name = name;
    }
}
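For context, here is a minimal usage sketch of both styles (the calling code and literal values are invented for illustration, not part of the original question):
// Sketch only: how calling code might invoke either interactor style.
var user = new User();

// Service style: one object exposes many use cases.
var userService = new UserService();
userService.ChangeUsername(user, "newName");

// Use-case style: one object per operation.
var changeUsername = new ChangeUsernameOperation();
changeUsername.Execute(user, "newName");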

Related

Effective Repository in C# - Where to put methods?

I'm trying to build a new application using the Repository pattern for the first time and I'm a little confused about using a Repository. Suppose I have the following classes:
public class Ticket
{
}
public class User
{
    public List<Ticket> AssignedTickets { get; set; }
}
public class Group
{
    public List<User> GroupMembers { get; set; }
    public List<Ticket> GroupAssignedTickets { get; set; }
}
I need methods that can populate these collections by fetching data from the database.
I'm confused as to which associated Repository class I should put those methods in. Should I design my repositories so that everything returning type T goes in the repository for type T as such?
public class TicketRepository
{
public List<Ticket> GetTicketsForGroup(Group g) { }
public List<Ticket> GetTicketsForUser(User u) { }
}
public class UserRepository
{
public List<User> GetMembersForGroup(Group g) { }
}
The obvious drawback I see here is that I need to start instantiating a lot of repositories. What if my User also has assigned Widgets, Fidgets, and Lidgets? When I populate a User, I need to instantiate a WidgetRepository, a FidgetRepository, and a LidgetRepository all to populate a single user.
Alternatively, do I construct my repository so that everything requesting based on type T is lumped into the repository for type T as listed below?
public class GroupRepository
{
public List<Ticket> GetTickets(Group g) { }
public List<User> GetMembers(Group g) { }
}
public class UserRepository
{
public List<Ticket> GetTickets(User u) { }
}
The advantage I see here is that if I now need my user to have a collection of Widgets, Fidgets, and Lidgets, I just add the necessary methods to the UserRepository pattern and don't need to instantiate a bunch of different repository classes every time I want to create a user, but now I've scattered the concerns for a user across several different repositories.
I'm really not sure which way is right, if any. Any suggestions?
The repository pattern can help you to:
Put things that change for the same reason together
As well as
Separate things that change for different reasons
On the whole, I would expect a "User Repository" to be a repository for obtaining users. Ideally, it would be the only repository that you can use to obtain users, because if you change stuff, like user tables or the user domain model, you would only need to change the user repository. If you have methods on many repositories for obtaining a user, they would all need to change.
Limiting the impact of change is good, because change is inevitable.
As for instantiating many repositories, using a dependency injection tool such as Ninject or Unity to supply the repositories, or using a repository factory, can reduce new-ing up lots of repositories.
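As a rough sketch of what that looks like (the service class and repository interfaces below are invented, mirroring the first option from the question), the consuming code never news up repositories itself; the container or a factory supplies them:
// Sketch only: ITicketRepository and IUserRepository are assumed interfaces
// wrapping the TicketRepository/UserRepository from the question.
public class GroupDashboardService
{
    private readonly ITicketRepository tickets;
    private readonly IUserRepository users;

    // A DI container (Ninject, Unity, ...) or a repository factory supplies these.
    public GroupDashboardService(ITicketRepository tickets, IUserRepository users)
    {
        this.tickets = tickets;
        this.users = users;
    }

    public int CountGroupWorkItems(Group g)
    {
        return tickets.GetTicketsForGroup(g).Count + users.GetMembersForGroup(g).Count;
    }
}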
Finally, you can take a look at the concept of Domain Driven Design to find out more about the key purpose behind domain models and repositories (and also about aggregate roots, which are relevant to what you are doing).
Fascinating question with no right answer. This might be a better fit for programmers.stackexchange.com rather than stackoverflow.com. Here are my thoughts:
Don't worry about creating too many repositories. They are basically stateless objects so it isn't like you will use too much memory. And it shouldn't be a significant burden to the programmer, even in your example.
The real benefit of repositories is for mocking the repository for unit testing. Consider splitting them up based on what is simplest for the unit tests, to make the dependency injection simple and clear. I've seen cases where every query is a repository (they call those "queries" instead of repositories). And other cases where there is one repository for everything.
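To make the testing point concrete, here is a small sketch (the interface and the fake are assumptions, not code from the question) of why interface-based repositories are easy to mock:
using System.Collections.Generic;

// Assumed interface, extracted from the first option in the question.
public interface ITicketRepository
{
    List<Ticket> GetTicketsForUser(User u);
}

// Hand-rolled fake for unit tests; a mocking library would do the same job.
public class FakeTicketRepository : ITicketRepository
{
    public List<Ticket> GetTicketsForUser(User u)
    {
        return new List<Ticket> { new Ticket(), new Ticket() };
    }
}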
As it turns out, the first option was the more practical option in this case. There were a few reasons for this:
1) When making changes to a type and its associated repository (assume Ticket), it was far easier to modify the Ticket and TicketRepository in one place than to chase down every method in every repository that used a Ticket.
2) When I attempted to use interfaces to dictate the types of queries each repository could run, I ran into issues where a single repository couldn't implement a generic interface for type T multiple times when the only differentiation between the interface method implementations was the parameter type.
3) I access data from SharePoint and a database in my implementation, and created two abstract classes to provide data tools to the concrete repositories for either SharePoint or SQL Server. Assume that in the example above Users come from SharePoint while Tickets come from a database. Using my model I would not be able to use these abstract classes, as the group repository would have to inherit from both my SharePoint abstract class and my SQL abstract class. C# does not support multiple inheritance of abstract classes. However, if I'm grouping all Ticket-related behaviours into a TicketRepository and all User-related behaviours into a UserRepository, each repository only needs access to one type of underlying data source (SQL or SharePoint, respectively).
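A compressed sketch of that layout (the base classes and method bodies are invented for illustration) makes the single-inheritance constraint clear:
using System.Collections.Generic;

// Each concrete repository inherits from exactly one data-source base class.
public abstract class SqlRepositoryBase
{
    protected string ConnectionString = "..."; // shared ADO.NET plumbing would live here
}

public abstract class SharePointRepositoryBase
{
    protected string SiteUrl = "..."; // shared SharePoint plumbing would live here
}

public class TicketRepository : SqlRepositoryBase
{
    public List<Ticket> GetTicketsForUser(User u) { return new List<Ticket>(); }
}

public class UserRepository : SharePointRepositoryBase
{
    public List<User> GetMembersForGroup(Group g) { return new List<User>(); }
}
A GroupRepository that returned both tickets and members would need both base classes, which C# does not allow.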

Design issue with interaction between Service Layer and DAL Layer

I have a design problem with my project that I don't know how to fix. I have a DAL Layer which holds Repositories and a Service Layer which holds "Processors". The role of the processors is to provide access to DAL data and perform some validation and formatting logic.
My domain objects all have a reference to at least one object from the Service Layer (to retrieve the values of their properties from the repositories). However, I face two cyclical dependencies. The first "cyclical dependency" comes from my design, since I want my DAL to return domain objects (I mean that it is conceptual), and the second comes from my code.
A domain object is always dependent on at least one Service Object
The domain object retrieves its properties from the repositories by calling methods on the service
The methods of the service call the DAL
However - and there is the problem - when the DAL has finished its job, it has to return domain objects. But to create these objects it has to inject the required Service Object dependencies (as these dependencies are required by the domain objects).
Therefore, my DAL Repositories have dependencies on Service Objects.
And this results in a very clear cyclical dependency. I am confused about how I should handle this situation. Lately I was thinking about letting my DAL return DTOs, but that doesn't seem to be compatible with the onion architecture, because the DTOs are defined in the Infrastructure, but the Core and the Service Layer should not know about the Infrastructure.
Also, I'm not excited about changing the return types of all the methods of my repositories, since I have hundreds of lines of code...
I would appreciate any kind of help, thanks!
UPDATE
Here is my code to make the situation clearer:
My Object (In the Core):
public class MyComplexClass1
{
    MyComplexClass1 Property1 { get; set; }
    MyComplexClass2 Property2 { get; set; }
    private readonly IService MyService;

    public MyComplexClass1(IService MyService)
    {
        this.MyService = MyService;
        this.Property1 = MyService.GetMyComplexClassList1();
        .....
    }
This is my Service Interface (In the Core)
public interface IService
{
MyComplexClass1 GetMyComplexClassList1();
...
}
This is my Repository Interface (In the Core)
public interface IRepoComplexClass1
{
    MyComplexClass1 GetMyComplexClassObject();
    ...
}
Now the Service Layer implements IService, and the DAL Layer implements IRepoComplexClass1.
But my point is that in my repo, I need to construct my Domain Object.
This is the Infrastructure Layer:
using Core;
public class Repo : IRepoComplexClass1
{
    public MyComplexClass1 GetMyComplexClassList1()
    {
        //Retrieve all the stuff...
        //... And now it's time to convert the DTOs to Domain Objects
        //I need to write
        //DomainObject.Property1 = new MyComplexClass1(ID, Service);
        //So my Repository has a dependency on my Service, and my Service has a dependency on my Repository (because my Service methods make use of the Repository). Then, Ninject is completely messed up.
    }
}
I hope it's clearer now.
First of all, architectural guidance like the Onion Architecture and Domain Driven Design (DDD) typically does not fit all cases when designing a system. In fact, using these techniques is discouraged unless the domain has significant complexity to warrant the cost. So, presumably the domain you are modelling is complex enough that it will not fit into a simpler pattern.
IMHO, both the Onion Architecture and DDD try to achieve the same thing. Namely, the ability to have a programmable (and perhaps easily portable) domain for complex logic that is devoid of all other concerns. That is why in Onion, for example, application, infrastructure, configuration and persistence concerns are at the edges.
So, in summary, the domain is just code. It can then utilize those cool design patterns to solve the complex problems at hand without worrying about anything else.
I really like the Onion articles because the picture of concentric barriers is different to the idea of a layered architecture.
In a layered architecture, it is easy to think vertically, up and down, through the layers. For example, you have a service on top which speaks to the outside world (through DTOs or ViewModels); then the service calls the business logic; finally, the business logic calls down to some persistence layer to keep the state of the system.
However, the Onion Architecture describes a different way to think about it. You may still have a service at the top, but this is an application service. For example, a Controller in ASP.NET MVC knows about HTTP, application configuration settings and security sessions. But the job of the controller isn't just to defer work to lower (smarter) layers. The job is to as quickly as possible map from the application side to the domain side. So simply speaking, the Controller calls into the domain asking for a piece of complex logic to be executed, gets the result back, and then persists. The Controller is the glue that is holding things together (not the domain).
So, the domain is the centre of the business domain. And nothing else.
This is why some complain about ORM tools that need attributes on the domain entities. We want our domain completely clean of all concerns other than the problem at hand. So, plain old objects.
So, the domain does not speak directly to application services or repositories. In fact, nothing that the domain calls speaks to these things. The domain is the core, and therefore, the end of the execution stack.
So, for a very simple code example (adapted from the OP):
Repository:
// it is only infrastructure if it doesn't know about specific types directly
public class Repository<T>
{
    public T Find(int id)
    {
        // resolve the entity
        return default(T);
    }
}
Domain Entity:
public class MyComplexClass1
{
    MyComplexClass1 Property1 { get; } // required, because it cannot be set from outside
    MyComplexClass2 Property2 { get; set; }

    // no dependency injection frameworks!
    public MyComplexClass1(MyComplexClass1 property1)
    {
        // actually using the constructor to define the required properties
        // MyComplexClass1 is required and MyComplexClass2 is optional
        this.Property1 = property1;
        .....
    }

    public ComplexCalculationResult CrazyComplexCalculation(MyComplexClass3 complexity)
    {
        var theAnswer = 42;
        return new ComplexCalculationResult(theAnswer);
    }
}
Controller (Application Service):
public class TheController : Controller
{
private readonly IRepository<MyComplexClass1> complexClassRepository;
private readonly IRepository<ComplexCalculationResult> complexResultRepository;
// this can use IoC if needed, no probs
public TheController(IRepository<MyComplexClass1> complexClassRepository, IRepository<ComplexCalculationResult> complexResultRepository)
{
this.complexClassRepository = complexClassRepository;
this.complexResultRepository = complexResultRepository;
}
// I know about HTTP
public void Post(int id, int value)
{
var entity = this.complexClassRepository.Find(id);
var complex3 = new MyComplexClass3(value);
var result = entity.CrazyComplexCalculation(complex3);
this.complexResultRepository.Save(result);
}
}
Now, very quickly you will be thinking, "Woah, that Controller is doing too much". For example, what if we need 50 values to construct MyComplexClass3? This is where the Onion Architecture is brilliant. There is a design pattern for that, called Factory or Builder, and without the constraints of application concerns or persistence concerns, you can implement it easily. So, you refactor these patterns into the domain (and they become your domain services).
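For illustration, a rough sketch of such a refactoring (the builder class is invented, not part of the original answer) could look like this:
// Hypothetical domain-side builder: it knows nothing about HTTP, configuration or persistence.
public class MyComplexClass3Builder
{
    private int value;

    public MyComplexClass3Builder WithValue(int value)
    {
        this.value = value;
        return this;
    }

    // ... more With* methods for the other inputs ...

    public MyComplexClass3 Build()
    {
        return new MyComplexClass3(value);
    }
}
The controller line then shrinks to var complex3 = new MyComplexClass3Builder().WithValue(value).Build(); and any further construction rules stay inside the domain.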
In summary, nothing the domain calls knows about application or persistence concerns. It is the end, the core of the system.
Hope this makes sense, I wrote a little bit more than I intended. :)

Always traverse the Business Layer to get to the Data Layer?

I have set up my VS Solution with the common layers in separate projects: Presentation, Business, Entities, and Data Access Layers. I have this static class AppSettings in the DAL whose Load() method I want to call at Application_Start in the Global.asax.cs. It basically loads up my application settings from the web.config.
My question is: Should I be making a business logic class to access it from my Presentation Layer, or can I access my AppSettings directly from my Presentation Layer to the Data Access Layer (ignoring the Business Layer)?
If so, does the same go for everything? Must I always go through the Business Layer to get to the Data Layer?
public static class AppSettings
{
    public static int ApplicationID { get; set; }
    public static string ServiceEndpoint { get; set; }
    public static string ServiceCode { get; set; }
    public static string ConnectionString { get; set; }

    public static void Load()
    {
        //Connection String
        AppSettings.ConnectionString = System.Configuration.ConfigurationManager.ConnectionStrings["USpace"].ConnectionString;
        //Application Settings
        AppSettings.ApplicationID = Convert.ToInt32(System.Configuration.ConfigurationManager.AppSettings["AppID"]);
        AppSettings.ServiceEndpoint = (string)System.Configuration.ConfigurationManager.AppSettings["ServiceEndpoint"];
        AppSettings.ServiceCode = (string)System.Configuration.ConfigurationManager.AppSettings["ServiceCode"];
    }
}
If I must go through the Business Logic Layer, would the BLL's class look like this?
public static class BLLAppSettings
{
    public static int ApplicationID
    {
        get
        {
            return AppSettings.ApplicationID;
        }
    }
    ...
I would recommend always going through the business logic layer to access the data layer, so that all of the safeguards built into the business logic layer are in play. Would you want the data layer to be used without the business layer?
If your focus is Design Patterns, then by all means, have fun pounding those square pegs in the little round holes.
If your focus is on Application Design, then you focus on the Design Patterns that make sense for your Application, and even for individual parts of your Application.
Knowing the patterns is knowledge. Knowing when, and when not, to use them is wisdom...
It's one man's opinion, but I hope it helps...
Ayende recently posted a few articles against this practice (or at least that is how I understood it):
http://ayende.com/blog/153061/northwind-starter-kit-review-it-is-all-about-the-services
And I agree with him: you have to ask yourself "what is the purpose of this layer?", and if you can't answer, then you can remove this layer and keep your software simple.
So if you have no business operation when you get your data, then deal directly with your data layer!
If the data is in the application's config file (web.config), you don't need to "go through" anything besides System.Configuration.ConfigurationManager.AppSettings.
You should start out by keeping it simple, but within reason. General principles of software engineering should be your guide when designing your application. In this case, my immediate thought is that by having one global AppSettings class you will be coupling your business and data access layers to that class. That may seem reasonable now, but what about when you have 50 different settings and only 20 of them apply to the data access layer? What if, down the road, your business layer has to load the settings from a different source than the DAL? On top of that, in your current design you're coupling both layers to a global singleton. That is typically not a good idea.
Even in smaller apps I would advocate for having different settings objects defined for each layer. In my design, it would be similar to your BLLAppSettings. It would encapsulate the source of the settings, in this case your global AppSettings class. However, where my design would differ is that BLLAppSettings would be a concrete instance of an Interface defined in the BLL layer that must be given to the BLL layer via Constructor, Factory, or Dependency Injection. A similar class, DALAppSettings would be necessary in my recommended design.
In this way, your BLL and DAL are not coupled to the global AppSettings defined in the Presentation Layer. The implementation details of BLLAppSettings and DALAppSettings can vary independently when necessary, but for the time being can remain internally tied to your global AppSettings class.
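A minimal sketch of that recommendation (the interface, its members, and the consuming service are illustrative assumptions, not code from the question):
// Defined in the BLL: the settings contract the business layer depends on.
public interface IBllSettings
{
    int ApplicationID { get; }
    string ServiceEndpoint { get; }
}

// Defined where the layers are wired together: adapts the global AppSettings class.
public class BLLAppSettings : IBllSettings
{
    public int ApplicationID { get { return AppSettings.ApplicationID; } }
    public string ServiceEndpoint { get { return AppSettings.ServiceEndpoint; } }
}

// The business layer receives its settings instead of reaching for a global singleton.
public class SomeBusinessService
{
    private readonly IBllSettings settings;

    public SomeBusinessService(IBllSettings settings)
    {
        this.settings = settings;
    }
}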

Data Access Layer - where should the responsibility for creating and saving be?

I am designing a Data Access Layer with ADO.NET 2.0, C#, and SQL Server 2005. I often fight with my brain over where to place these calls. Which of the two approaches below should I follow for maintainable, robust code?
Method 1
public class Company
{
public string CompanyId
{get;set;}
public string CompanyAddress
{get;set;}
public bool Create()
{
}
public bool Update()
{
}
public bool Delete()
{
}
}
Method 2
public class Company
{
public string CompanyId
{get;set;}
public string CompanyAddress
{get;set;}
}
and I would use another class like the one below to do the core data access:
public class CompanyRepository
{
public Company CreateCompany(string companyId,string companyDescription)
{
}
public bool UpdateCompany(Company updateCompany)
{
}
public bool DeleteCompany(string companyId)
{
}
public List<Company> FindById(string id)
{
}
}
Go with method 2. It is not the Company class's responsibility to read/write from a data source (single responsibility principle). However, I would even go as far as creating an ICompanyRepository interface and then creating a CompanyRepository implementation for the interface. This way you can inject the ICompanyRepository into the class that needs to save/retrieve company information. It also allows easier unit testing and the ability to create a different implementation in the future (switching from a database to xml files or whatever).
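For illustration, here is a sketch of that interface (derived from the CompanyRepository methods in the question; the consuming class is invented):
using System.Collections.Generic;

public interface ICompanyRepository
{
    Company CreateCompany(string companyId, string companyDescription);
    bool UpdateCompany(Company updateCompany);
    bool DeleteCompany(string companyId);
    List<Company> FindById(string id);
}

// Hypothetical consumer: it depends only on the interface, so a unit test can inject
// a fake, and production code can inject the ADO.NET-backed CompanyRepository.
public class CompanyImporter
{
    private readonly ICompanyRepository companies;

    public CompanyImporter(ICompanyRepository companies)
    {
        this.companies = companies;
    }

    public Company Import(string id, string description)
    {
        return companies.CreateCompany(id, description);
    }
}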
If you follow the principle of separation of concerns, you will go with your method 2.
Having different responsibilities in different classes does help with creating testable, maintainable code.
This also produces smaller, more cohesive classes that are easier to write, reason about and check for correctness.
As a note, you can use an ORM instead of hand crafting your data access layer.
I would also vote for the second choice, because:
first you create your data holder,
then you create your operational unit.
So in this case you separate the data from the functions that operate on it, which makes unit testing notably easier and distributes the responsibilities between different domains of your code, possibly making bugs easier to localize.

How would you classify this type of design for classes?

The following type of design, which I have seen used, basically has "thin" classes that exclude any type of behaviour. A secondary class is used to insert/update/delete/get.
Is this wrong? Is it anti OOP?
User.cs
public class User
{
public string Username { get; set; }
public string Password { get; set; }
}
Users.cs
public class Users
{
    public static User LoadUser(int userID)
    {
        DBProvider db = new DBProvider();
        return db.LoadUser(userID);
    }
}
While your User.cs class is lending itself towards a data transfer object, the Users.cs class is essentially where you can apply business rules within the data-access objects.
You may want to think about the naming convention of your classes along with the namespaces. When I look at a users.cs, I'm assuming that it will essentially be a class for working with a list of users.
Another option would be to look into the Active Record Pattern, which would combine the two classes that you've created.
User.cs
public class User
{
public string Username { get; set; }
public string Password { get; set; }
public User(int userID)
{
//data connection
//get records
this.Username = datarecord["username"];
this.Password = datarecord["password"];
}
}
I would classify it as a domain object or business object. One benefit of this kind of design is that it keeps the model agnostic of any business logic, and it can be reused in different kinds of environments.
The second class could be classified as a DAO (Data Access Object).
This pattern is not anti-oop at all and is widely used.
I think you're implementing a domain model and a data-access object. It's a good idea.
The first class is anti-OOP because it contains data without behaviour, a typical example of an anemic domain model. It's typical for people who do procedural programming in an OO language.
However, opinions are divided on whether it makes sense to put DB access logic into the domain model itself (Active Record pattern) or, as in your code, into a separate class (Data Access Object pattern), since DB access is a separate technical concern that should not necessarily be closely coupled with the domain model.
It looks like it could be the Repository pattern; this seems to be an increasingly common pattern and is used to great effect in Rob Conery's Storefront example ASP.NET MVC app.
You're basically abstracting your data access code away from the Model, which is generally a good thing, though I would hope for a little more guts to the model class. Also, from previous experience, calling it Users is confusing; UserRepository might be better. You might also want to consider removing static (which is a hot debate), as that makes mocking easier. Plus the repository should implement an interface so you can mock it and hence replace it with a fake later.
It's not really object-oriented in any sense, since the object is nothing but a clump of data sticking together. Not that that's a terrible thing.
