I have set up my VS solution with the common layers in separate projects: Presentation, Business, Entities, and Data Access. I have a static class AppSettings in the DAL whose Load() method I want to call at Application_Start in Global.asax.cs. It basically loads my application settings from web.config.
My question is: should I write a business logic class to access it from my Presentation Layer, or can I access AppSettings in the Data Access Layer directly from my Presentation Layer (bypassing the Business Layer)?
If so, does the same go for everything? Must I always go through the Business Layer to get to the Data Layer?
public static class AppSettings
{
    public static int ApplicationID { get; set; }
    public static string ServiceEndpoint { get; set; }
    public static string ServiceCode { get; set; }
    public static string ConnectionString { get; set; }

    public static void Load()
    {
        // Connection string
        AppSettings.ConnectionString = System.Configuration.ConfigurationManager.ConnectionStrings["USpace"].ConnectionString;

        // Application settings
        AppSettings.ApplicationID = Convert.ToInt32(System.Configuration.ConfigurationManager.AppSettings["AppID"]);
        AppSettings.ServiceEndpoint = (string)System.Configuration.ConfigurationManager.AppSettings["ServiceEndpoint"];
        AppSettings.ServiceCode = (string)System.Configuration.ConfigurationManager.AppSettings["ServiceCode"];
    }
}
If I must go through the Business Logic Layer, would the BLL class look like this?
public static class BLLAppSettings
{
    public static int ApplicationID
    {
        get
        {
            return AppSettings.ApplicationID;
        }
    }

    // ...
}
I would recommend always going through the business logic layer to access the data layer, so that all of the safeguards built into the business logic layer are in play. Would you want the data layer to be used without the business layer?
If your focus is Design Patterns, then by all means, have fun pounding those square pegs in the little round holes.
If your focus is on Application Design, then you focus on the Design Patterns that make sense for your Application, and even for individual parts of your Application.
Knowing the patterns is knowledge. Knowing when, and when not, to use them is wisdom...
It's one man's opinion, but I hope it helps...
Ayende recently posted a few articles arguing against this practice (at least that's how I understood them):
http://ayende.com/blog/153061/northwind-starter-kit-review-it-is-all-about-the-services
And I agree with him: you have to ask yourself "what is the purpose of this layer?", and if you can't answer that, you can remove the layer and keep your software simple.
So if you have no business operation to perform when you fetch your data, then deal directly with your data layer!
If the data is in the application's config file (web.config), you don't need to "go through" anything besides System.Configuration.ConfigurationManager.AppSettings.
You should start out by keeping it simple, but within reason. General principles of software engineering should be your guide when designing your application. In this case, my immediate thought is that by having one global AppSettings class you will be coupling your business and data access layers to that class. That may seem reasonable now, but what about when you have 50 different settings and only 20 of them apply to the data access layer? What if, down the road, your business layer has to load its settings from a different source than the DAL? On top of that, in your current design you're coupling both layers to a global singleton. That is typically not a good idea.
Even in smaller apps I would advocate for having different settings objects defined for each layer. In my design, it would be similar to your BLLAppSettings. It would encapsulate the source of the settings, in this case your global AppSettings class. However, where my design would differ is that BLLAppSettings would be a concrete instance of an Interface defined in the BLL layer that must be given to the BLL layer via Constructor, Factory, or Dependency Injection. A similar class, DALAppSettings would be necessary in my recommended design.
In this way, your BLL and DAL are not coupled to the global AppSettings defined in the Presentation Layer. The implementation details of BLLAppSettings and DALAppSettings can vary independently when necessary, but for the time being can remain internally tied to your global AppSettings class.
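For illustration, here is a minimal sketch of that arrangement; the names IBllSettings, BllAppSettings and OrderProcessor are invented for this example, and the concrete instance would be supplied via constructor, factory, or your dependency injection container:

// Defined inside the BLL assembly: only the settings the BLL actually needs
public interface IBllSettings
{
    int ApplicationId { get; }
    string ServiceEndpoint { get; }
}

// Concrete adapter that, for now, simply reads from the global AppSettings class
public class BllAppSettings : IBllSettings
{
    public int ApplicationId
    {
        get { return AppSettings.ApplicationID; }
    }

    public string ServiceEndpoint
    {
        get { return AppSettings.ServiceEndpoint; }
    }
}

// A BLL class receives the abstraction, not the static class
public class OrderProcessor
{
    private readonly IBllSettings _settings;

    public OrderProcessor(IBllSettings settings)
    {
        _settings = settings;
    }
}

Because the BLL only sees IBllSettings, you can later swap the adapter for one that reads from a database, a service, or a test stub without touching the business code.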
I have a design problem with my project that I don't know how to fix. I have a DAL layer which holds repositories and a Service Layer which holds "processors". The role of the processors is to provide access to DAL data and perform some validation and formatting logic.
My domain objects all have a reference to at least one object from the Service Layer (to retrieve the values of their properties from the repositories). However, I face two cyclical dependencies. The first "cyclical dependency" comes from my design, since I want my DAL to return domain objects - I mean that it is conceptual - and the second comes from my code.
A domain object is always dependent on at least one Service object.
The domain object retrieves its properties from the repositories by calling methods on the service.
The methods of the service call the DAL.
However - and here is the problem - when the DAL has finished its job, it has to return domain objects. But to create these objects, it has to inject the required Service Object dependencies (as these dependencies are required by the domain objects).
Therefore, my DAL repositories have dependencies on Service objects.
And this results in a very clear cyclical dependency. I am confused about how I should handle this situation. Lastly, I was thinking about letting my DAL return DTOs, but that doesn't seem to be compatible with the onion architecture, because the DTOs are defined in the Infrastructure, while the Core and the Service Layer should not know about Infrastructure.
Also, I'm not excited about changing the return types of all the methods of my repositories, since I have hundreds of lines of code...
I would appreciate any kind of help, thanks !
UPDATE
Here is my code to make the situation clearer:
My Object (In the Core):
public class MyComplexClass1
{
    MyComplexClass1 Property1 { get; set; }
    MyComplexClass2 Property2 { get; set; }

    private readonly IService MyService;

    public MyComplexClass1(IService MyService)
    {
        this.MyService = MyService;
        this.Property1 = MyService.GetMyComplexClassList1();
        // .....
    }
}
This is my Service Interface (In the Core)
public interface IService
{
    MyComplexClass1 GetMyComplexClassList1();
    // ...
}
This my Repository Interface (In the Core)
public interface IRepoComplexClass1
{
    MyComplexClass1 GetMyComplexClassObject();
    // ...
}
Now the Service Layer implements IService, and the DAL Layer Implements IRepoComplexClass1.
But my point is that in my repo, I need to construct my Domain Object
This is the Infrastructure Layer:
using Core;

public class Repo : IRepoComplexClass1
{
    public MyComplexClass1 GetMyComplexClassObject()
    {
        // Retrieve all the stuff...
        // ... and now it's time to convert the DTOs to Domain Objects.
        // I need to write:
        //     DomainObject.Property1 = new MyComplexClass1(ID, Service);
        // So my Repository has a dependency on my Service, and my Service has a dependency
        // on my Repository (because my Service methods make use of the Repository).
        // Then, Ninject is completely messed up.
        return null; // placeholder
    }
}
I hope it's clearer now.
First of all, architectural guidance like the Onion Architecture and Domain Driven Design (DDD) typically does not fit every case when designing a system. In fact, using these techniques is discouraged unless the domain has significant complexity to warrant the cost. So, the domain you are modelling is presumably complex enough that it will not fit into a simpler pattern.
IMHO, both the Onion Architecture and DDD try to achieve the same thing. Namely, the ability to have a programmable (and perhaps easily portable) domain for complex logic that is devoid of all other concerns. That is why in Onion, for example, application, infrastructure, configuration and persistence concerns are at the edges.
So, in summary, the domain is just code. It can then utilize those cool design patterns to solve the complex problems at hand without worrying about anything else.
I really like the Onion articles because the picture of concentric barriers is different to the idea of a layered architecture.
In a layered architecture, it is easy to think vertically, up and down through the layers. For example, you have a service on top which speaks to the outside world (through DTOs or ViewModels), then the service calls the business logic, and finally the business logic calls down to some persistence layer to keep the state of the system.
However, the Onion Architecture describes a different way to think about it. You may still have a service at the top, but this is an application service. For example, a Controller in ASP.NET MVC knows about HTTP, application configuration settings and security sessions. But the job of the controller isn't just to defer work to lower (smarter) layers. The job is to as quickly as possible map from the application side to the domain side. So simply speaking, the Controller calls into the domain asking for a piece of complex logic to be executed, gets the result back, and then persists. The Controller is the glue that is holding things together (not the domain).
So, the domain is the centre of the business domain. And nothing else.
This is why some complain about ORM tools that need attributes on the domain entities. We want our domain completely clean of all concerns other than the problem at hand. So, plain old objects.
So, the domain does not speak directly to application services or repositories. In fact, nothing that the domain calls speaks to these things. The domain is the core, and therefore, the end of the execution stack.
So, for a very simple code example (adapted from the OP):
Repository:
// it is only infrastructure if it doesn't know about specific types directly
public class Repository<T>
{
    public T Find(int id)
    {
        // resolve the entity
        return default(T);
    }
}
Domain Entity:
public class MyComplexClass1
{
    MyComplexClass1 Property1 { get; }        // required, because it cannot be set from outside
    MyComplexClass2 Property2 { get; set; }

    // no dependency injection frameworks!
    public MyComplexClass1(MyComplexClass1 property1)
    {
        // actually using the constructor to define the required properties:
        // MyComplexClass1 is required and MyComplexClass2 is optional
        this.Property1 = property1;
        // .....
    }

    public ComplexCalculationResult CrazyComplexCalculation(MyComplexClass3 complexity)
    {
        var theAnswer = 42;
        return new ComplexCalculationResult(theAnswer);
    }
}
Controller (Application Service):
public class TheController : Controller
{
    private readonly IRepository<MyComplexClass1> complexClassRepository;
    private readonly IRepository<ComplexResult> complexResultRepository;

    // this can use IoC if needed, no probs
    public TheController(IRepository<MyComplexClass1> complexClassRepository, IRepository<ComplexResult> complexResultRepository)
    {
        this.complexClassRepository = complexClassRepository;
        this.complexResultRepository = complexResultRepository;
    }

    // I know about HTTP
    public void Post(int id, int value)
    {
        var entity = this.complexClassRepository.Find(id);
        var complex3 = new MyComplexClass3(value);
        var result = entity.CrazyComplexCalculation(complex3);
        this.complexResultRepository.Save(result);
    }
}
Now, very quickly you will be thinking, "Woah, that Controller is doing too much". For example, what if we need 50 values to construct MyComplexClass3? This is where the Onion Architecture is brilliant. There is a design pattern for that, called Factory or Builder, and without the constraints of application concerns or persistence concerns you can implement it easily. So you refactor these patterns into the domain (and they become your domain services).
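As a rough, hedged sketch of what such a domain-side factory could look like (MyComplexClass3Factory and its method are made-up names, not from the original post):

// Hypothetical domain-side factory; the name and inputs are invented for illustration
public class MyComplexClass3Factory
{
    // In a real system the many raw inputs would arrive as a parameter object
    public MyComplexClass3 CreateFrom(int value)
    {
        // validation, defaulting and derived values live here,
        // free of HTTP, configuration or persistence concerns
        return new MyComplexClass3(value);
    }
}

The controller would then ask the factory for the object instead of constructing it inline, so the construction rules stay inside the domain.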
In summary, nothing the domain calls knows about application or persistence concerns. It is the end, the core of the system.
Hope this makes sense, I wrote a little bit more than I intended. :)
I'm currently wondering what the suggested way is to separate plain model classes (e.g. for use in Entity Framework, Web API, MVC, WCF...) from their application-logic parts (server-side tasks, threads, etc.) while following the DRY principle.
Consider this pseudo example:
public class HorseOfDoom
{
    private Thread _hungerThread = new Thread(() => { /* watch the hunger level */ });
    private Laser _headMountedLaser = new Laser();

    public int Age { get; set; }
    public string Name { get; set; }
    public int Health { get; set; }
    public int HungerLevel { get; set; }

    public HorseOfDoom()
    {
        _hungerThread.Start();
    }

    public void PewPew()
    {
        _headMountedLaser.PewPew();
    }
}
In this class we have both - model properties that describe the model (age, name,..), but also a thread and methods. I can use this class in Entity Framework, WCF and so on.. but what if I want to use the model in a ASP.NET MVC client application without exposing the methods, threads? Do I have to write the same class again? Do I need managers, adapters and facades? Could I use the buddy class pattern?
Use a model fit for the context. DRY is not about repeating lines of code, it's about repeating behaviour. Your view model can have the same properties (copy-paste ftw) as the business model, minus the methods. You can use AutoMapper to map one to the other. Chances are your view model will have more than only those properties, including validation attributes or other data needed by the view in a certain format.
A model to rule them all is not good in the long term. Clean models will allow you to focus better on the context and avoid coupling to other contexts, which might use a very similar or identical model. Things change in time, and it's easier to work with a context-specific model from the beginning, even if that involves copy-paste and it seems you're repeating yourself.
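As a rough sketch of that idea, assuming the instance-based AutoMapper configuration API and a made-up HorseOfDoomViewModel class:

using AutoMapper;

// View model: same shape as the relevant business properties, nothing else
public class HorseOfDoomViewModel
{
    public int Age { get; set; }
    public string Name { get; set; }
    public int Health { get; set; }
    public int HungerLevel { get; set; }
}

public static class HorseMapping
{
    // One-time configuration; afterwards map business objects to view models as needed
    private static readonly IMapper Mapper = new MapperConfiguration(
        cfg => cfg.CreateMap<HorseOfDoom, HorseOfDoomViewModel>()).CreateMapper();

    public static HorseOfDoomViewModel ToViewModel(HorseOfDoom horse)
    {
        return Mapper.Map<HorseOfDoomViewModel>(horse);
    }
}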
I understand that a combination as you show it in your sample is not really desirable - my main point of critique would be the thread, which already implies a very concrete assumption about how the object should behave. The probability is high that the thread contained in the class itself will make it harder to use the class in some environments. From my point of view, the platform that integrates the class should be able to choose how to orchestrate the actions of the class - of course the class can impose some restrictions, like "not to be used in a separate thread, as the class is not implemented in a thread-safe way".
As for the point of whether to combine properties and methods in a class: I don't think that there is a clear and always valid answer. It depends very much on how big the architecture of your application is and whether you are willing to pay the price for the separation in terms of complexity and overhead.
The concept of combining properties and methods in a single class is usually referred to as "Domain Model". It is a very natural approach to design complex business logic.
If you have an architecture that sets out to separate the layers very well, you'd have a Domain Model in the business logic that implements the business rules. These classes combine properties and methods, but these classes are mapped to simpler versions (e.g. DTOs) that only transport the data to other layers. This way, you also de-couple a service interface from the domain model and change them with minimal influences on the other layers. For instance, if you have complex classes in the domain model and you want to present only a part of this information in a web interface or through a service layer, you could create one or more DTO classes that contain exactly the data that is needed. Changes to the domain model will not necessarily affect clients so that you gain freedom in this respect.
In a smaller architecture however, you might not need to separate the layers with DTOs if you can live with the consequences.
As for the mentioned example of WCF, you have separate service and data contracts that you typically implement in different classes. If you have additional methods in a class that serves as a data contract those methods will not be part of the data contract. You'd have to explicitly make the methods that you want to publish part of a service contract. If you don't share the classes with a service client (e.g. through a class library), the client will not even know that these methods exist.
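For example, a hedged sketch of a separate WCF data contract for the horse example above (the class and member names are invented for illustration):

using System.Runtime.Serialization;

// Only the data that should cross the service boundary; no thread, no laser
[DataContract]
public class HorseDto
{
    [DataMember] public int Age { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public int Health { get; set; }
    [DataMember] public int HungerLevel { get; set; }
}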
Let's say that I want to create a blog application with these two simple persistence classes used with EF Code First or NHibernate and returned from repository layer:
public class PostPersistence
{
    public int Id { get; set; }
    public string Text { get; set; }
    public IList<LikePersistence> Likes { get; set; }
}

public class LikePersistence
{
    public int Id { get; set; }
    //... some other properties
}
I can't figure out a clean way to map my persistence models to domain models. I'd like my Post domain model interface to look something like this:
public interface IPost
{
    int Id { get; }
    string Text { get; set; }
    IEnumerable<ILike> Likes { get; }
    void Like();
}
Now what would an implementation underneath look like? Maybe something like this:
public class Post : IPost
{
    private readonly PostPersistence _postPersistence;
    private readonly INotificationService _notificationService;

    public int Id
    {
        get { return _postPersistence.Id; }
    }

    public string Text
    {
        get { return _postPersistence.Text; }
        set { _postPersistence.Text = value; }
    }

    public IEnumerable<ILike> Likes
    {
        //this seems really out of place
        get { return _postPersistence.Likes.Select(likePersistence => new Like(likePersistence)); }
    }

    public Post(PostPersistence postPersistence, INotificationService notificationService)
    {
        _postPersistence = postPersistence;
        _notificationService = notificationService;
    }

    public void Like()
    {
        _postPersistence.Likes.Add(new LikePersistence());
        _notificationService.NotifyPostLiked(Id);
    }
}
I've spent some time reading about DDD, but most examples were theoretical or used the same ORM classes in the domain layer. My solution seems really ugly, because in fact the domain models are just wrappers around ORM classes, and it doesn't seem to be a domain-centric approach. Also, the way IEnumerable<ILike> Likes is implemented bothers me, because it won't benefit from LINQ to SQL. What are other (concrete!) options for creating domain objects with a more transparent persistence implementation?
One of the goals of persistence in DDD is persistence ignorance, which is what you seem to be striving for to some extent. One of the issues that I see with your code samples is that you have your entities implementing interfaces and referencing repositories and services. In DDD, entities should not implement interfaces which are just abstractions of themselves, nor have instance dependencies on repositories or services. If a specific behavior on an entity requires a service, pass that service directly into the corresponding method. Otherwise, all interactions with services and repositories should be done outside of the entity, typically in an application service. The application service orchestrates between repositories and services in order to invoke behaviors on domain entities. As a result, entities don't need to reference services or repositories directly - all they have is some state, and behavior which modifies that state and maintains its integrity. The job of the ORM then is to map this state to table(s) in a relational database. ORMs such as NHibernate allow you to attain a relatively large degree of persistence ignorance.
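Applied to the Post example, a minimal sketch of that idea could look like this (the exact signatures are illustrative, not prescriptive):

using System.Collections.Generic;

public class Post
{
    public int Id { get; private set; }
    public string Text { get; set; }

    private readonly List<Like> _likes = new List<Like>();
    public IEnumerable<Like> Likes { get { return _likes; } }

    // The service is handed to the behaviour that needs it,
    // instead of being held as an instance dependency of the entity
    public void Like(INotificationService notificationService)
    {
        _likes.Add(new Like());
        notificationService.NotifyPostLiked(Id);
    }
}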
UPDATES
"Still I don't want to expose a method with an INotificationService as a parameter, because this service should be internal; the layer above doesn't need to know about it."
In your current implementation of the Post class the INotificationService has the same or greater visibility as the class. If the INotificationService is implemented in an infrastructure layer, it already has to have sufficient visibility. Take a look at hexagonal architecture for an overview of layering in modern architectures.
As a side note, functionality associated with notifications can often be placed into handlers for domain events. This is a powerful technique for attaining a great degree of decoupling.
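A very rough sketch of the domain-event idea, with all names invented for illustration:

// Raised by the Post entity when it is liked
public class PostLikedEvent
{
    public int PostId { get; private set; }

    public PostLikedEvent(int postId)
    {
        PostId = postId;
    }
}

// Lives outside the domain; the domain never sees INotificationService
public class PostLikedNotificationHandler
{
    private readonly INotificationService _notificationService;

    public PostLikedNotificationHandler(INotificationService notificationService)
    {
        _notificationService = notificationService;
    }

    public void Handle(PostLikedEvent domainEvent)
    {
        _notificationService.NotifyPostLiked(domainEvent.PostId);
    }
}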
"And with separate DTO and domain classes, how would you solve the persistence synchronization problem when the domain object doesn't know about its underlying DTO? How do you track changes?"
A DTO and corresponding domain classes exist for very different reasons. The purpose of the DTO is to carry data across system boundaries. DTOs are not in a one-one correspondence with domain objects - they can represent part of the domain object or a change to the domain object. One way to track changes would be to have a DTO be explicit about the changes it contains. For example, suppose you have a UI screen that allows editing of a Post. That screen can capture all the changes made and send those changes in a command (DTO) to a service. The service would load up the appropriate Post entity and apply the changes specified by the command.
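A hedged sketch of that flow; EditPostCommand, PostApplicationService and the repository methods are invented names:

// DTO that carries only the change across the boundary
public class EditPostCommand
{
    public int PostId { get; set; }
    public string NewText { get; set; }
}

// Application service loads the entity and applies the change
public class PostApplicationService
{
    private readonly IPostRepository _posts;

    public PostApplicationService(IPostRepository posts)
    {
        _posts = posts;
    }

    public void Edit(EditPostCommand command)
    {
        var post = _posts.Get(command.PostId);
        post.Text = command.NewText;
        _posts.Save(post);
    }
}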
I think you need to do a bit more research, see all the options, and decide if it is really worth the hassle to go for a full DDD implementation. I've been there myself over the last few days, so I'll tell you my experience.
EF Code First is quite promising, but there are quite a few issues with it; I have an entry on this here:
Entity Framework and Domain Driven Design. With EF your domain models can be persisted by EF without you having to create a separate "persistence" class. You can use POCOs (plain old CLR objects) and get a simple application up and running, but as I said, to me it's not fully mature yet.
If you use LINQ to SQL then the most common approach would be to manually map a "data transfer object" to a business object. Doing it manually can be tough for a big application, so check out a tool like AutoMapper. Alternatively you can simply wrap the DTO in a business object like this:
public class Post
{
    PostPersistence Persistence { get; set; }
    public IList<LikePersistence> Likes { get; set; }
    // .....
}
NHibernate: not sure, I haven't used it in a long time.
My feeling on this (and it's just an opinion, I may be wrong) is that you'll always have to make compromises and you won't find a perfect solution out there. If you give EF a couple more years it may get there. I think an approach that maps DTOs to DDD objects is probably the most flexible, so looking for an automapping tool may be worth your time. If you want to keep it simple, my favourite would be some simple wrappers around DTOs when required.
I am trying to create a small personal project which uses EF to handle data access. My project architecture has a UI layer, a service layer, a business layer and a data access layer. EF is contained within the DAL. I don't think it's right to reference my DAL from my UI, so I want to create custom classes for 'business objects' which are shared between all my layers.
Example: I have a User table. EF creates a User entity. I have a method, say GetListOfUsers(). The presentation layer shouldn't rely on a list of EF User entities, as the UI would then have a direct link to the DAL. Instead, I need to expose a method in the DAL along the lines of:
List<MyUserObject> GetListOfUsers();
That would then call my internal method which gets the list of User entities and transforms them into MyUserObjects, which are then passed back through the layers to my UI.
Is that correct design? I don't feel the UI, or the business layer for that matter, should have any knowledge of Entity Framework.
What this may mean, though, is that I need a 'transformation layer' between my DAL and my business layer, which transforms my entities into my custom objects?
Edit:
Here is an example of what I am doing:
I have a data access project, which will contain the Entity Framework. In this project, I will have a method to get me a list of states.
public class DataAccessor
{
    taskerEntities te = new taskerEntities();

    public List<StateObject> GetStates()
    {
        var transformer = new Transformer();
        var items = (from s in te.r_state select s).ToList();
        var states = new List<StateObject>();
        foreach (var rState in items)
        {
            var s = transformer.State(rState);
            states.Add(s);
        }
        return states;
    }
}
My UI/Business/Service projects mustn't know about Entity Framework objects. Instead, they must know about my custom-built State objects. So, I have a Shared Library project containing my custom-built objects:
namespace SharedLib
{
    public class StateObject
    {
        public int stateId { get; set; }
        public string description { get; set; }
        public Boolean isDefault { get; set; }
    }
}
So, my DAL gets the items into a list of entity objects and then passes them through my transformation method to turn them into custom-built objects. The transformation takes an EF object and outputs a custom object.
public class Transformer
{
    public StateObject State(r_state state)
    {
        var s = new StateObject
        {
            description = state.description,
            isDefault = state.is_default,
            stateId = state.state_id
        };
        return s;
    }
}
This seems to work. But is it a valid pattern?
So, at some point, your UI will have to work with the data and business objects that you have. It's a fact of life. You could try to abstract further, but that would only succeed in deferring the interaction elsewhere.
I agree that business processes should stand alone from the UI. I also agree that your UI should not directly interact with how you access your data. What you have suggested (something along the lines of "GetListOfUsers()") is known as the Repository Pattern.
The purpose of the repository pattern is to:
"separate the logic that retrieves the data and maps it to the entity model from the business logic that acts on the model. The business logic should be agnostic to the type of data that comprises the data source layer."
My recommendation is to use the Repository Pattern to hide HOW you're accessing your data (and, allow a better separation of concerns) and just be concerned with the fact that you "just want a list of users" or you "just want to calculate the sum of all time sheets" or whatever it is that you want your application to actually focus on. Read the link for a more detailed description.
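A minimal sketch of what that could look like for the user example; the IUserRepository interface, the Users set on the context and the mapping details are assumptions for illustration:

using System.Collections.Generic;
using System.Linq;

// Defined where the business layer can see it
public interface IUserRepository
{
    List<MyUserObject> GetListOfUsers();
}

// EF-backed implementation hidden away in the DAL
public class EfUserRepository : IUserRepository
{
    public List<MyUserObject> GetListOfUsers()
    {
        using (var context = new taskerEntities())
        {
            // materialize the EF entities, then map them to the shared objects
            return context.Users
                          .ToList()
                          .Select(u => new MyUserObject { /* copy the relevant properties */ })
                          .ToList();
        }
    }
}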
First, do you really need all those layers in your 'small personal project'?
Second, I think your suggested architecture is a bit unclear.
If I understand you correctly, you want to decouple your UI from your DAL. For that purpose you can, for example, extract an interface from MyUserObject (defined in the DAL, obviously), let's call it IMyUserObject, and instead of referencing the DAL from the UI, reference some abstract project where all types are data-agnostic. Also, I suggest you have a service layer which provides your presentation layer (UI) with concrete objects. If you use MVC you can have a link to the services from your controller class. The service layer in turn can use a Repository or some other technique to deal with the DAL (it depends on the complexity you choose).
Considering the transformation layer, I think people deal with mapping from one type to another when they have one simple model (DTO) to communicate with the DB, another one - the domain model - that deals with all the subtleties of the business logic, and another one - the presentation model - that is best suited to letting the user interact with the data. Such layering separates concerns to good measure, making each task simpler, but making the app more complicated in general.
So you may end up having MyUserObjectDTO, MyUserObject and MyUserObjectView and some mapping or transformation between them.
We normally use abstract functions/interfaces in our projects. Why is this really needed? Why can't we just have a Business Logic Layer, a Data Access Layer and a Presentation Layer only?
Function in Presentation Layer:
abc();
Function in Business Logic Layer:
public void abc()
{
    // Preparing the list
}
Function in Data Access Layer:
public abstract void abc();
Function in Data Access SQLServer Layer:
public override void abc()
{
    // Connection with database
}
Question is: Why is the Data Access Layer required ?
The easiest way to understand this, IMO, is as an abstraction over the data layer.
You have a set of functions to retrieve data from an XML file. But one day your product scales out and XML is no longer enough as a data store, so you move to an embedded database: SQLite. Then one day you need to reuse your library in an enterprise context, so now you need to develop access to SQL Server, Oracle, a web service... With each of these changes you will need to change not only the code that actually accesses the data, but also the code that consumes it. And what about the code that has been using your first XML data access on the client for years and is happy with it? What about backward compatibility?
Having the abstraction, even if it doesn't directly solve all of these problems, definitely makes your application more scalable and more resistant to changes which, in our world, happen all too frequently.
Generally, if you use interfaces in your code, then you gain code maneuverability in the form of Dependency Injection.
This will help you replace parts of your implementation in certain situations, for example providing mock objects during unit testing.
The abstract class or interface is not really a separate layer - it should be part of your business logic layer and it defines the interface that the actual data access layer (SQL data repository, for example) needs to implement to provide the data access service to your business layer.
Without this interface your business layer would be directly dependent on the SQL layer, while the interface removes this dependency: You put the abstract class or the interface into the business logic layer. Then the SQL layer (a separate assembly, for example) implements the abstract class/interface. This way the SQL layer is dependent on the business layer, not the other way around.
The result is a flexible app with an independent business layer that can work with multiple data repositories - all it needs is a layer that implements the interface the business layer defines. And it is not really only about data repositories - your business layer shouldn't be dependent on the context (asp.net vs. console app vs. service etc.), it shouldn't be dependent on the user interface classes, modules interfacing with your business app, etc.
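A small sketch of that arrangement, with all names invented for illustration:

using System;

// Business layer assembly: defines what it needs, knows nothing about SQL
public interface ICustomerRepository
{
    Customer GetById(int id);
}

public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }
}

// Data layer assembly: references the business layer and implements its interface
public class SqlCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id)
    {
        // ADO.NET / ORM code would go here
        throw new NotImplementedException();
    }
}

Note how the dependency arrow points from the SQL assembly to the business assembly, not the other way around.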
Why interfaces:
Have you ever used the using statement in C#?
using (Form f = new Form())
{
}
Here you will see that you can only use classes inside using that implement the IDisposable interface.
Two things which do not know each other can interact with each other through interfaces only.
An interface guarantees that "some" functionality has surely been implemented by this type.
Why layers:
So that you can have separate DLLs which let you reuse the code in different applications.
Basically it is all about code reuse and performance gain.
I think you are talking about a Facade layer.
It is an optional layer which simplifies the functions of the Business Layer. Let's imagine you have a ProductManager and a CategoryManager and you want to do a particular action which involves using both (for example, get me the top 5 products across all categories); then you could use a facade layer that uses ProductManager and CategoryManager.
It is inspired by the Facade Pattern.
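A rough sketch of such a facade; the manager classes are the ones mentioned above, while the method and property names are made up:

using System.Collections.Generic;
using System.Linq;

public class CatalogFacade
{
    private readonly ProductManager _productManager;
    private readonly CategoryManager _categoryManager;

    public CatalogFacade(ProductManager productManager, CategoryManager categoryManager)
    {
        _productManager = productManager;
        _categoryManager = categoryManager;
    }

    // One simple call for the presentation layer; the coordination lives here
    public List<Product> GetTopProductsAcrossCategories(int count)
    {
        return _categoryManager.GetAllCategories()
            .SelectMany(c => _productManager.GetTopProducts(c.Id, count))
            .OrderByDescending(p => p.Rating)
            .Take(count)
            .ToList();
    }
}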
The abstraction helps create functionality, be it through a base class, an interface, or composition which, when used properly, does wonders for maintenance, readability, and reusability of code.
In regards to the code posted in the question, the code marked "Data Access Layer" acts as a common abstraction for the business layer to use. By doing so, the specific implementations of the DAL (such as what's under "Data Access SQLServer Layer" in the sample) are decoupled from the business layer. Now you can make implementations of the DAL that access different databases, or perhaps automatically feed data for testing, etc.
The repository pattern is a fantastic example of this at work in a DAL (example is simplified):
public interface IProductRepository
{
    Product Get(int id);
    ...
}

public class SqlProductRepository : IProductRepository
{
    public Product Get(int id) { ... }
    ...
}

public class MockProductRepository : IProductRepository
{
    private IDictionary<int, Product> _products = new Dictionary<int, Product>()
    {
        { 1, new Product() { Name = "MyItem" } }
    };

    public Product Get(int id) { return _products[id]; }
    ...
}

public class AwesomeBusinessLogic
{
    private IProductRepository _repository;

    public AwesomeBusinessLogic(IProductRepository repository)
    {
        _repository = repository;
    }

    public Product GetOneProduct()
    {
        return _repository.Get(1);
    }
}
Even though this example uses interfaces, the same applies to the use of base classes. The beauty is that now I can feed either SqlProductRepository or MockProductRepository into AwesomeBusinessLogic and not have to change anything about AwesomeBusinessLogic. If another case comes along, all that's needed is a new implementation of IProductRepository and AwesomeBusinessLogic will still handle it without change because it only accesses the repository through the interface.
All of the previous answers explain the need for abstraction layers, but I still want to add a few thoughts of my own.
Let's say that in our project we have just one implementation of a service in each layer. For instance, I have a contact DAL service and a contact BLL service, and we could do something like this:
namespace Stackoverflow
{
    public class ContactDbService
    {
        public Contact GetContactByID(Guid contactID)
        {
            //Fetch a contact from DB
        }
    }
}
Contact BLL service:
namespace Stackoverflow
{
    public class ContactBLLService
    {
        private ContactDbService _dbService;

        public ContactBLLService()
        {
            _dbService = new ContactDbService();
        }

        public bool CheckValidContact(Guid contactID)
        {
            var contact = _dbService.GetContactByID(contactID);
            return contact.Age > 50;
        }
    }
}
without defining any interfaces or abstract classes.
If we do it like that, there are some obvious drawbacks.
Code communication:
Imagine that as your project evolves, your services gain many different methods; how could a maintainer (other than you) know what your services do? Would they have to read your entire service in order to fix a small bug like an InvalidCastException?
By looking at the interface, people get immediate knowledge of the capabilities of the service (at least).
Unit testing
You could test your logic using a fake/mock service to detect bugs in advance as well as prevent regression bugs from happening later.
Easier to change:
By referencing only interfaces/abstract classes in other classes, you can easily replace those interface implementations later without too much effort.
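For example, the ContactBLLService shown earlier could depend on an interface instead of newing up the concrete ContactDbService; a minimal sketch:

using System;

public interface IContactDbService
{
    Contact GetContactByID(Guid contactID);
}

public class ContactBLLService
{
    private readonly IContactDbService _dbService;

    // The concrete implementation (the real DB service, or a mock in tests)
    // is supplied from the outside instead of being created with "new"
    public ContactBLLService(IContactDbService dbService)
    {
        _dbService = dbService;
    }

    public bool CheckValidContact(Guid contactID)
    {
        var contact = _dbService.GetContactByID(contactID);
        return contact.Age > 50;
    }
}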
Abstraction enables you to refactor quickly. Suppose that instead of using SQL Server you decide to use some other provider; if you do not have a data access layer, then you have to do a huge refactor because you are calling data access methods directly. But if you have a data access layer, you only write a new data access layer, inheriting from your abstract data access layer, and you do not change anything in the business layer.