DDD (Domain Driven Design) Application Layer - C#

I have been trying to build an application based on DDD but I have some questions.
Some layers that I have:
- Presentation Layer - MVC
- Application Layer
- Domain Layer
...
First of all, I would like to know if I can do this in the Application Layer (get family info > get default message info > send an email > update the database):
public ApproveFamilyOutput ApproveFamily(ApproveFamilyInput input)
{
    Family family = _familyRepository.GetFamily(input.Username);
    family.Approve();

    DefaultMessage defaultMessage = _defaultMessageRepository.GetDefaultMessage(MessageTypes.FamilyApproved);
    _email.Send(family.GetEmail(), defaultMessage.Subject, defaultMessage.Message);

    _familyRepository.Update(family);
    bool isSaved = _familyRepository.Save();

    return new ApproveFamilyOutput()
    {
        Errors = Helper.GetErrorIfNotSaved(isSaved)
    };
}
Am I thinking well? Is the Application layer responsible to do that job?
The second question is: I need to send some data to the presentation layer according to the privileges that the user has. These privileges are defined in the database. Example:
- The object Family have the Name, LastName, PhoneNumber, Email properties and the user can show/hide each of the values.
How can I handle this?
Can I do something like in the Application Layer:
public GetFamilyOutput GetFamily(GetFamilyInput input)
{
    Family family = _familyRepository.GetFamily(input.Username);
    FamilyConfiguration familyConfiguration = _familyConfigurationRepository.GetConfigurations(family.Id);

    //ProcessConfiguration will set to null the properties that I cannot show
    family.ProcessConfiguration(familyConfiguration);

    return new GetFamilyOutput
    {
        //Map Family Object to the GetFamilyOutput
    };
}
Note: The Family, DefaultMessage and FamilyConfiguration are domain objects created inside the Domain Layer.
What is your opinion?
Thanks :)
Edited:
Note: I liked all the answers below and I used a little of each :) (I can't mark all answers as accepted)

What your application service is doing in #1 is perfectly valid: it coordinates the workflow with very little to no business logic knowledge.
There are certainly a few improvements that could be made, however. For instance:
I do not see any transaction handling. The email should only be sent after a successfully committed transaction.
Sending the email could be seen as a side-effect of the family's approval. I suppose the business experts could have stated: "when a family is approved then notify interested parties by email". Therefore, it could be wise to publish a FamilyApproved domain event and move the email sending logic in an event handler.
Note that you want the handler to be called asynchronously, and only after the domain event was persisted to disk; you want to persist the event in the same transaction as the aggregate.
You could probably further abstract the mailing process into something like emailService.send(MessageTypes.FamilyApproved, family.getEmail()). The application service wouldn't have to know about default messages.
Repositories are usually exclusive to aggregate roots (AR), if DefaultMessage is not an AR then I'd consider naming the DefaultMessageRepository service differently.
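The suggested mailing abstraction could be sketched like this. This is a rough sketch with stub types standing in for the ones in the question; the IEmailService name and shape are assumptions, not an API the author defined:

```csharp
using System;

// Stub types standing in for the ones in the question.
public enum MessageTypes { FamilyApproved }

public class DefaultMessage
{
    public string Subject { get; set; }
    public string Message { get; set; }
}

public interface IDefaultMessageRepository
{
    DefaultMessage GetDefaultMessage(MessageTypes type);
}

public interface IEmailSender
{
    void Send(string to, string subject, string body);
}

// The suggested abstraction: callers only name the message type.
public interface IEmailService
{
    void Send(MessageTypes messageType, string recipientAddress);
}

public class DefaultMessageEmailService : IEmailService
{
    private readonly IDefaultMessageRepository _messages;
    private readonly IEmailSender _sender;

    public DefaultMessageEmailService(IDefaultMessageRepository messages, IEmailSender sender)
    {
        _messages = messages;
        _sender = sender;
    }

    public void Send(MessageTypes messageType, string recipientAddress)
    {
        // The default-message lookup lives here, so the application
        // service never needs to know about DefaultMessage.
        var message = _messages.GetDefaultMessage(messageType);
        _sender.Send(recipientAddress, message.Subject, message.Message);
    }
}
```

With this in place the application service would just call `emailService.Send(MessageTypes.FamilyApproved, family.GetEmail())`.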
As for #2, although authorization checks could be done in the domain, it is much more common to relieve the domain of such tasks and enforce permissions in the application layer. You could even have a dedicated Identity & Access supporting Bounded Context (BC).
"//ProcessConfiguration will set to null the properties that I cannot show"
That solution wouldn't be so great (just like implementing the IFamilyProperty solution) because your domain model becomes polluted by technical authorization concerns. If you are looking to apply DDD then the model should be as faithful as possible to the Ubiquitous Language (UL) and I doubt IFamilyProperty is something your domain experts would mention or even understand. Allowing properties to become null would probably violate some invariants as well.
Another problem with such a solution is that the domain model is rarely adapted for queries (it's built for commands), so it's often preferable to bypass it entirely and go directly to the DB. Implementing authorizations in the domain would prevent you from doing that easily.
For at least these reasons, I think it is preferable to implement authorization checks outside the domain. There you are free to use whatever implementation suits your needs. For instance, I believe that stripping values off a DTO could be legitimate.
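Stripping values off a DTO in the application layer might look something like this. A minimal sketch: the FamilyDto type and the permission-check callback are made up for illustration (the real GetFamilyOutput would play the DTO's role):

```csharp
using System;

// Hypothetical DTO returned by the application layer.
public class FamilyDto
{
    public string Name { get; set; }
    public string LastName { get; set; }
    public string PhoneNumber { get; set; }
    public string Email { get; set; }
}

public static class FamilyDtoFilter
{
    // Blank out the fields the current user may not see.
    // canSee would check the db-defined privileges for a property name.
    public static FamilyDto ApplyReadPermissions(FamilyDto dto, Func<string, bool> canSee)
    {
        if (!canSee(nameof(dto.Name))) dto.Name = null;
        if (!canSee(nameof(dto.LastName))) dto.LastName = null;
        if (!canSee(nameof(dto.PhoneNumber))) dto.PhoneNumber = null;
        if (!canSee(nameof(dto.Email))) dto.Email = null;
        return dto;
    }
}
```

The domain model never learns about authorization; only the outgoing DTO is filtered.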

I was also unsure whether it's OK to place some logic in the Application Service. But things became much clearer once I read Vladimir Khorikov's Domain services vs Application services article. It states that
domain services hold domain logic whereas application services don’t.
and illustrates the idea with great examples. So in your case I think it's totally fine to place these scenarios in the Application Service, as they don't contain domain logic.

Ad 1
I usually move that logic into the domain layer - into services.
The application layer then just calls:
public ApproveFamilyOutput ApproveFamily(ApproveFamilyInput input)
{
    var approveService = diContainer.Get<ApproveService>(); // Or correctly injected by constructor
    var result = approveService.ApproveFamily(input);
    // Convert to output
}
And the domain service (ApproveService class) looks like:
public ApproveResult ApproveFamily(ApproveFamilyInput input)
{
    var family = _familyRepository.GetFamily(input.Username);
    family.Approve();
    _familyRepository.Update(family);
    bool isSaved = _familyRepository.Save();
    if (isSaved)
        _eventPublisher.Publish(family.raisedEvents);
    // return result
}
To make it work (and following hexagonal/onion architecture), the domain layer defines all interfaces for its dependencies (IFamilyRepository, IDefaultMessageRepository, etc.) and the application layer injects the specific implementations into the domain layer.
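A minimal sketch of that wiring, assuming the interface names from the answer (the method signatures are guesses based on the code above, and Family is a stub):

```csharp
using System.Collections.Generic;

// Stub aggregate for the sketch.
public class Family
{
    public List<object> RaisedEvents { get; } = new List<object>();
    public void Approve() { /* domain logic */ }
}

// The domain layer owns these abstractions...
public interface IFamilyRepository
{
    Family GetFamily(string username);
    void Update(Family family);
    bool Save();
}

public interface IEventPublisher
{
    void Publish(IEnumerable<object> events);
}

// ...while the application/infrastructure layer supplies implementations
// (an EF-backed repository, a message-bus publisher, ...) and registers
// them in the IoC container against these domain-owned interfaces.
```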
To make it clear:
1. the domain layer is independent
2. domain objects are pure - they consist of entities and value objects
3. domain objects don't call repositories; that is up to the domain service
4. domain objects raise events
5. unrelated logic is handled by events (event handlers) - for example sending emails; this follows the open-closed principle
class FamilyApprovedHandler : IHandle<FamilyApprovedEvent>
{
    private readonly IDefaultMessageRepository _defaultMessageRepository;
    private readonly IEmailSender _emailSender;
    private readonly IEmailProvider _emailProvider;

    // ctor

    public Task Handle(FamilyApprovedEvent @event)
    {
        var defaultMessage = _defaultMessageRepository.GetDefaultMessage(MessageTypes.FamilyApproved);
        var email = _emailProvider.Generate(@event.Family, defaultMessage.Subject, defaultMessage.Message);
        _emailSender.Send(email);
        return Task.CompletedTask;
    }
}
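Point 4 above ("domain objects raise events") can be sketched like this - a simplified, self-contained version of how Family might record its raisedEvents; the names and the idempotency check are illustrative, not the author's code:

```csharp
using System.Collections.Generic;

public class FamilyApprovedEvent
{
    public string FamilyEmail { get; }
    public FamilyApprovedEvent(string familyEmail) { FamilyEmail = familyEmail; }
}

public class Family
{
    private readonly List<object> _raisedEvents = new List<object>();
    public IReadOnlyList<object> raisedEvents => _raisedEvents;

    public string Email { get; }
    public bool IsApproved { get; private set; }

    public Family(string email) { Email = email; }

    // The aggregate only records the event; the domain service
    // publishes the recorded events after a successful save.
    public void Approve()
    {
        if (IsApproved) return; // approving twice raises no second event
        IsApproved = true;
        _raisedEvents.Add(new FamilyApprovedEvent(Email));
    }
}
```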

As for #1:
Theoretically, the application layer can do what you have described. However, I personally prefer to separate concerns further: there should be a persistence layer. In your case, a developer needs to know to:
1. Get the family from the repository.
2. Call the method to approve the family object.
3. Update the family back in the repository.
4. Persist the repository.
5. Handle any possible errors if there were persistence errors.
I would argue that 2-3-4 should be moved to a persistence layer, to make the code look like:
Family family = _familyRepository.GetFamily(input.Username);
family.Approve().Notify(_email);
This approach gives you more flexibility in how to handle errors and enables some business-logic improvements. For example, you would not send an e-mail if you encountered persistence errors.
Of course, you'd need to have some additional types and extension methods implemented (for "Notify()" as an example).
Finally, I'd argue that e-mail service should be implemented using a repository pattern too (so you have two repositories) and have a persistence-level implementation. My point of view: anything persisted outside of the application requires repository & persistence implementation; e-mails are persisted outside of the application in user's mailbox.
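One way to read that suggestion, as a sketch - the interface and type names here are made up for illustration, not an established API:

```csharp
// Treat outgoing mail like anything else persisted outside the
// application: callers only see a repository-style interface,
// and the infrastructure implementation talks to the mail gateway.
public class Email
{
    public string To { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
}

public interface IEmailRepository
{
    void Add(Email email); // queue the message
    bool Save();           // hand the queued messages to the gateway
}
```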
As for #2:
I would strongly recommend against nullable properties and clearing them out. It gets confusing fast, is very hard to unit-test, and has a lot of "hidden" caveats. Instead, implement classes for your properties. For example:
public class UserPriviledge
{
    //... your db-defined privileges
}

public interface IFamilyProperty
{
    string PropertyName { get; }
    List<UserPriviledge> ReadPriviledges { get; }
    bool IsReadOnly { get; }
}

public interface IFamilyProperty<T> : IFamilyProperty
{
    T PropertyValue { get; }
}

public class FamilyName : IFamilyProperty<string>
{
    public string PropertyName => "Name";
    public string PropertyValue { get; }
    public List<UserPriviledge> ReadPriviledges { get; } = new List<UserPriviledge>();
    public bool IsReadOnly { get; private set; }

    public FamilyName(string familyName)
    {
        this.PropertyValue = familyName;
        this.ReadPriviledges.Add(someUserPrivilege);
        this.IsReadOnly = false;
    }

    public void MakeReadOnly()
    {
        this.IsReadOnly = true;
    }
}

public class Family
{
    public int Id { get; }
    public List<IFamilyProperty> LimitedProperties { get; }
}
With this kind of implementation you can have a similar method that removes the values instead of obfuscating them, or applies even more complicated logic:
public void ApplyFamilyPermissions(Family family, UserEntity user)
{
    // Iterate over a copy so that removing from the original list is safe.
    foreach (var property in family.LimitedProperties.ToList())
    {
        if (property.ReadPriviledges.Intersect(user.Priviledges).Any() == false)
        {
            family.LimitedProperties.Remove(property);
        }
        else if (property.IsReadOnly == false && HasPropertyWriteAccess(property, user) == false)
        {
            property.MakeReadOnly();
        }
    }
}
Note: the code was not verified and I'm pretty sure has some syntax mistakes, but I believe it communicates the idea clearly.


How to expose internal domain access only to the repositories

Let's consider this simplified model:
a Subscription class:
public class Subscription
{
    public string Name { get; }
    public ReadOnlyCollection<Subscriber> Subscribers => _subscribers.AsReadOnly();
    private readonly List<Subscriber> _subscribers;

    public Subscription(string name)
    {
        Name = name;
        _subscribers = new List<Subscriber>();
    }

    public Subscriber AddRecipient(Recipient recipient, ReceivingMethod receivingMethod)
    {
        var subscriber = new Subscriber(this, receivingMethod, recipient);
        _subscribers.Add(subscriber);
        return subscriber;
    }

    internal bool RemoveSubscriber(Subscriber subscriber)
        => _subscribers.Contains(subscriber) && _subscribers.Remove(subscriber);
}
a Recipient class:
public class Recipient
{
    public Guid Id { get; }
    public string Address { get; set; }

    public Recipient(string address) : this(Guid.NewGuid(), address)
    {
    }

    internal Recipient(Guid id, string address)
    {
        Id = id;
        Address = address;
    }

    public Subscriber Subscribe(Subscription subscription, ReceivingMethod receivingMethod)
        => subscription.AddRecipient(this, receivingMethod);
}
and a Subscriber
public class Subscriber : Recipient
{
    public Subscription Subscription { get; set; }
    public ReceivingMethod ReceivingMethod { get; set; }

    internal Subscriber(Subscription subscription, ReceivingMethod method, Recipient recipient)
        : base(recipient.Id, recipient.Address)
    {
        Subscription = subscription;
        ReceivingMethod = method;
    }

    public bool Unsubscribe()
        => Subscription != null && Subscription.RemoveSubscriber(this);
}
A Subscriber results from a Recipient subscribing to a Subscription, and as such the instantiation of that object is internally prohibited. At this point I need to load and populate existing Subscribers from within a repository, whose implementation resides in a different namespace (.Infrastructure) and cannot access the internals of the domain due to their protection level.
I struggle to find the right approach. I considered adding the infrastructure layer as a friend assembly to the domain to allow internal access, but this would make the domain dependent on the infrastructure, while I want it to be independent. Right now the domain holds the repository interfaces. I could add abstract implementations of these, containing access to the models and requiring additional implementation and the injection of a persistence context, but this doesn't feel right.
Can someone explain how this is usually done in a rich domain model?
P.S.: This is an architectural question at the application level and as such I think it fits best on SO.
As #maxdoxdev stated: there probably isn't much wrong with having domain classes with a public constructor.
If you feel that you definitely do not want public constructors, you could opt for a public factory method on the relevant class, or use primitives in your Add methods so that the method itself internally instantiates the required object(s).
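For example, a factory method can keep creation controlled while still giving repositories one supported path to rebuild persisted instances. A simplified, self-contained sketch - not the exact classes from the question:

```csharp
using System;

public class Subscriber
{
    public Guid Id { get; }
    public string Address { get; }

    // The constructor stays non-public...
    private Subscriber(Guid id, string address)
    {
        Id = id;
        Address = address;
    }

    // ...and a public factory method validates and rehydrates
    // instances, e.g. from a persistence model loaded by a repository.
    public static Subscriber Restore(Guid id, string address)
    {
        if (string.IsNullOrWhiteSpace(address))
            throw new ArgumentException("address is required", nameof(address));
        return new Subscriber(id, address);
    }
}
```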
I feel there is nothing wrong with exposing Domain models to repositories, as well as to the other layers of the onion architecture. It would be wrong the other way round (exposing other layers to the Domain).
Furthermore - if your domain model is well encapsulated and its classes protect themselves from being created or put in an incorrect state, blocking access to those classes seems pointless: nothing wrong can happen to them when they are instantiated anywhere in your application, as long as that part of the application has enough information to create those objects.
Onion architecture allows dependencies to point inwards (towards the domain).
Please refer to that image:
https://www.codeguru.com/imagesvr_ce/2236/Onion1.png
or the full article: https://www.codeguru.com/csharp/csharp/cs_misc/designtechniques/understanding-onion-architecture.html
Exposing the Domain to the outer layers of the onion architecture gives you some possibilities, like implementing the CQRS pattern while still keeping Queries and Commands inside the Domain - and therefore keeping validations in one place, etc.
EDIT:
Another thing that I use a lot is the Application Layer as an orchestrator of all the dependencies and holder of the public API.
The Application layer holds the interfaces of repositories, infrastructure and other external dependencies. Those interfaces are implemented in various layers, and everything is injected in the presentation (UI) layer with IoC.
That gives you the flexibility to replace implementations in the outer layers while leaving the application layer untouched, as it only relies on abstractions.
Example:
Controller - accepts DTO and maps it to Query or Command
Application - handles Query or Command by calling abstractions from outer layers and real implementations of Domain
Domain - has rich models that know how to do business actions
Repositories - just implementations of data access
Take a look at this GitHub:
https://github.com/matthewrenze/clean-architecture-demo
That is also related to great Pluralsight video if you are interested.
For completeness I will add an answer to my own question, as the existing one, while pointing in the right direction, lacks concreteness about the example given - in case others are interested in a different approach / concrete implementation.
The Subscriber is the product of a Recipient subscribing to a specific Subscription, which here is the aggregate root and as such initially responsible for creating the Subscriber. It still does that, but in addition I made the constructor of the Subscriber public in order to allow adding loaded entities.
Making the constructor of the Subscriber public introduced the challenge of assuring that the new Subscriber is in a valid state. By this I mean that the Subscriber points to a Subscription, that the Subscription also contains that Subscriber in its collection of Subscribers, and other invariants that were previously handled by the Subscription's creation logic. The solution seems rather simple in the end and consisted of adding an internal method that adds the Subscriber to the Subscription's Subscribers and applies the other rules, which previously was only possible by "subscribing" a Recipient.
So, I enriched the Subscription class:
internal void AddSubscriber(Subscriber subscriber)
{
    if (_subscribers.Contains(subscriber)) return;
    _subscribers.Add(subscriber);
    subscriber.Subscription = this;
}
And changed the Subscriber constructor:
public Subscriber(Subscription subscription, ReceivingMethod receivingMethod, Recipient recipient)
    : base(recipient.EMailAdress, recipient.FirstName, recipient.LastName, recipient.Salutation)
{
    Subscription = subscription;
    ReceivingMethod = receivingMethod;
    subscription.AddSubscriber(this);
}
Now the repository is able to instantiate the Subscribers from the loaded persistence model.
I am still open to a better approach and/or details about the downsides of this approach.

Using service classes and repository classes in projects?

I've recently been entering the advanced stage of C# and I've seen a lot of applications that implement loose coupling and dependency injection. I've seen the word "Service" associated with classes a lot - I suppose you would call them service classes? I've also seen classes in these projects that include the word Repository: say you had a class called 'Player', there would be two more classes, 'PlayerService' and 'PlayerRepository'.
I've checked Lynda, TreeHouse, Udemy and many other sites. I've even googled the subject, but it seems to bring up hundreds of results all leading to different things. None of them really answers my question in simple, plain detail - at least none that I can understand.
Can anyone help explain this? Why do I need them, when should I use them, what are they?
Well, it's hard to give a specific explanation without seeing the code, but in general terms the concept of a Repository refers to data-layer components, and the term Service - mostly in the ASP.NET world - refers to business-layer components.
You separate these layers from each other so they can be maintained, tested and expanded in isolation. Ideal architectures expose the functionality of these layers via interfaces - especially the Repository layer. On the Service layer you can take these dependencies through the constructor as interfaces. Using an IoC container and dependency injection patterns, you can then register concrete classes for these interfaces and build your objects in a central location, a.k.a. the composition root. That allows you to easily manage your dependencies in one place, rather than having each dependency instantiated and passed around in scattered places within your code.
This answer is just a pointer to give you an overview. These are topics you should delve deeper by self research and digest.
The Repository pattern is used to abstract away how a class is persisted. This allows you to change the underlying Database or ORM mapper without influencing the persisted classes. See Using Repository Pattern for Abstracting Data Access from a Cache and Data Store.
A service is used if multiple classes take part in a certain use case and none of these classes should have the responsibility of coordinating the others. (Maybe these classes do not even hold direct references to each other.) In this case, put the code that handles the interplay between the classes into a service method and pass the affected objects to it.
Note that if the affected classes are in a direct parent-child relationship, you could let the parent coordinate its children directly, without introducing a service. But this might lead to code that is hard to understand.
Let me give an example: assume we want to commit Transactions. After a Transaction is committed, we want to update the Person it belongs to with the (denormalized) timestamp of the most recent transaction. As you can see, Person does not hold a direct reference to the transaction.
public class Person
{
    public long Id { get; set; }
    public string Name { get; set; }
    public DateTime? LastTransactionTimestamp { get; set; }
}

public class Transaction
{
    public long Id { get; set; }
    public long PersonId { get; set; }
    public DateTime Timestamp { get; set; }

    public void Commit()
    {
        Timestamp = DateTime.Now;
    }
}
Now we have the problem of where to put the logic. If we put it into the Person class, it would need repository access to load the Transaction (because it holds no direct reference to it), but Person should only be concerned with storing its own data, not loading unrelated data from the DB. If we put it into the Transaction class, it does not know whether it is the latest Transaction for this Person (because it does not see the other transactions).
So the solution is to put the logic into a service. If the service needs DB access, we inject a repository into it.
public class PersonTransactionService
{
    private readonly IDbSet<Transaction> _allTransactions;

    public PersonTransactionService(IDbSet<Transaction> allTransactions)
    {
        _allTransactions = allTransactions;
    }

    public void Commit(Person person, Transaction transaction)
    {
        transaction.Commit();
        var mostRecent = _allTransactions
            .Where(t => t.PersonId == person.Id)
            .OrderBy(t => t.Timestamp)
            .LastOrDefault();
        if (mostRecent != null)
        {
            person.LastTransactionTimestamp = mostRecent.Timestamp;
        }
    }
}

Managing persistence in DDD

Let's say that I want to create a blog application with these two simple persistence classes used with EF Code First or NHibernate and returned from repository layer:
public class PostPersistence
{
    public int Id { get; set; }
    public string Text { get; set; }
    public IList<LikePersistence> Likes { get; set; }
}

public class LikePersistence
{
    public int Id { get; set; }
    //... some other properties
}
I can't figure out a clean way to map my persistence models to domain models. I'd like my Post domain model interface to look something like this:
public interface IPost
{
    int Id { get; }
    string Text { get; set; }
    IEnumerable<ILike> Likes { get; }
    void Like();
}
Now how would an implementation underneath look like? Maybe something like this:
public class Post : IPost
{
    private readonly PostPersistence _postPersistence;
    private readonly INotificationService _notificationService;

    public int Id
    {
        get { return _postPersistence.Id; }
    }

    public string Text
    {
        get { return _postPersistence.Text; }
        set { _postPersistence.Text = value; }
    }

    public IEnumerable<ILike> Likes
    {
        //this seems really out of place
        get { return _postPersistence.Likes.Select(likePersistence => new Like(likePersistence)); }
    }

    public Post(PostPersistence postPersistence, INotificationService notificationService)
    {
        _postPersistence = postPersistence;
        _notificationService = notificationService;
    }

    public void Like()
    {
        _postPersistence.Likes.Add(new LikePersistence());
        _notificationService.NotifyPostLiked(Id);
    }
}
I've spent some time reading about DDD, but most examples were theoretical or used the same ORM classes in the domain layer. My solution seems really ugly, because in fact the domain models are just wrappers around ORM classes, and it doesn't seem to be a domain-centric approach. Also, the way IEnumerable<ILike> Likes is implemented bothers me because it won't benefit from LINQ to SQL. What are other (concrete!) options to create domain objects with a more transparent persistence implementation?
One of the goals of persistence in DDD is persistence ignorance, which is what you seem to be striving for to some extent. One of the issues that I see with your code samples is that you have your entities implementing interfaces and referencing repositories and services. In DDD, entities should not implement interfaces that are just abstractions of themselves, nor have instance dependencies on repositories or services. If a specific behavior on an entity requires a service, pass that service directly into the corresponding method. Otherwise, all interactions with services and repositories should be done outside of the entity, typically in an application service. The application service orchestrates between repositories and services in order to invoke behaviors on domain entities. As a result, entities don't need to reference services or repositories directly - all they have is some state and behavior which modifies that state and maintains its integrity. The job of the ORM then is to map this state to table(s) in a relational database. ORMs such as NHibernate allow you to attain a relatively large degree of persistence ignorance.
UPDATES
Still I don't want to expose a method with an INotificationService as a parameter, because this service should be internal; the layer above doesn't need to know about it.
In your current implementation of the Post class the INotificationService has the same or greater visibility as the class. If the INotificationService is implemented in an infrastructure layer, it already has to have sufficient visibility. Take a look at hexagonal architecture for an overview of layering in modern architectures.
As a side note, functionality associated with notifications can often be placed into handlers for domain events. This is a powerful technique for attaining a great degree of decoupling.
And with separate DTO and domain classes, how would you solve the persistence synchronization problem when the domain object doesn't know about its underlying DTO? How do you track changes?
A DTO and the corresponding domain classes exist for very different reasons. The purpose of the DTO is to carry data across system boundaries. DTOs are not in a one-to-one correspondence with domain objects - they can represent part of a domain object or a change to it. One way to track changes would be to have the DTO be explicit about the changes it contains. For example, suppose you have a UI screen that allows editing of a Post. That screen can capture all the changes made and send them in a command (DTO) to a service. The service would load up the appropriate Post entity and apply the changes specified by the command.
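Such an "edit Post" command might be sketched like this; all names here are illustrative, not from the question:

```csharp
public class Post
{
    public int Id { get; set; }
    public string Text { get; private set; }
    public bool Published { get; private set; }

    public void ChangeText(string text) { Text = text; }
    public void Publish() { Published = true; }
}

// A command DTO that is explicit about which changes it carries.
public class EditPostCommand
{
    public int PostId { get; set; }
    public string NewText { get; set; }  // null means "unchanged"
    public bool? Publish { get; set; }   // null means "unchanged"
}

public class PostApplicationService
{
    // The service would load the Post by command.PostId from a
    // repository (omitted here) and apply only the changes the
    // command specifies.
    public void Handle(EditPostCommand command, Post post)
    {
        if (command.NewText != null) post.ChangeText(command.NewText);
        if (command.Publish == true) post.Publish();
    }
}
```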
I think you need to do a bit more research, see all the options and decide if it is really worth the hassle to go for a full DDD implementation. I've been there myself over the last few days, so I'll tell you my experience.
EF Code First is quite promising, but there are quite a few issues with it; I have an entry on this here:
Entity Framework and Domain Driven Design. With EF, your domain models can be persisted without you having to create a separate "persistence" class. You can use POCOs (plain old CLR objects) and get a simple application up and running, but as I said, to me it's not fully mature yet.
If you use LINQ to SQL, then the most common approach would be to manually map a "data transfer object" to a business object. Doing this manually can be tough for a big application, so check out a tool like AutoMapper. Alternatively, you can simply wrap the DTO in a business object like:
public class Post
{
    PostPersistence Post { get; set; }
    public IList<LikePersistence> Likes { get; set; }
    .....
}
NHibernate: Not sure, haven't used it for a long time.
My feeling (and this is just an opinion; I may be wrong) is that you'll always have to make compromises and you won't find a perfect solution out there. If you give EF a couple more years it may get there. I think an approach that maps DTOs to DDD objects is probably the most flexible, so looking for an automapping tool may be worth your time. If you want to keep it simple, my favourite would be some simple wrappers around DTOs when required.

How would you classify this type of design for classes?

The following type of design I have seen basically has "thin" classes, excluding any type of behaviour. A secondary class is used to insert/update/delete/get.
Is this wrong? Is it anti OOP?
User.cs
public class User
{
    public string Username { get; set; }
    public string Password { get; set; }
}
Users.cs
public class Users
{
    public static User LoadUser(int userID)
    {
        DBProvider db = new DBProvider();
        return db.LoadUser(userID);
    }
}
While your User.cs class lends itself towards a data transfer object, the Users.cs class is essentially where you can apply business rules within the data-access objects.
You may want to think about the naming convention of your classes along with the namespaces. When I look at a Users.cs, I assume it will essentially be a class for working with a list of users.
Another option would be to look into the Active Record Pattern, which would combine the two classes that you've created.
User.cs
public class User
{
    public string Username { get; set; }
    public string Password { get; set; }

    public User(int userID)
    {
        //data connection
        //get records
        this.Username = datarecord["username"];
        this.Password = datarecord["password"];
    }
}
I would classify it as a domain object or business object. One benefit of this kind of design is that it keeps the model agnostic of any business logic, so it can be reused in different kinds of environments.
The second class could be classified as a DAO (Data Access Object).
This pattern is not anti-oop at all and is widely used.
I think you're implementing a domain model and a data-access object. It's a good idea.
The first class is anti-OOP because it contains data without behaviour, a typical example of an anemic domain model. It's typical of people who do procedural programming in an OO language.
However, opinions are divided on whether it makes sense to put DB access logic into the domain model itself (Active Record pattern) or, as in your code, into a separate class (Data Access Object pattern), since DB access is a separate technical concern that should not necessarily be closely coupled with the domain model.
It looks like it could be the Repository pattern; this seems to be an increasingly common pattern and is used to great effect in Rob Conery's Storefront example ASP.NET MVC app.
You're basically abstracting your data access code away from the model, which is generally a good thing, though I would hope for a little more guts in the model class. Also, from previous experience, calling it Users is confusing; UserRepository might be better. You might also want to consider removing static (which is a hot debate), as it makes mocking easier. Plus, the repository should implement an interface so you can mock it and hence replace it with a fake later.
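Put together, those suggestions might look like this; a sketch, with the interface name being just a common convention rather than anything from the question:

```csharp
using System.Collections.Generic;

public class User
{
    public int Id { get; set; }
    public string Username { get; set; }
}

// Callers depend on the abstraction, not on a concrete DB class...
public interface IUserRepository
{
    User LoadUser(int userId);
}

// ...so tests can swap in an in-memory fake like this one.
public class InMemoryUserRepository : IUserRepository
{
    private readonly Dictionary<int, User> _users = new Dictionary<int, User>();

    public void Add(User user) => _users[user.Id] = user;

    public User LoadUser(int userId)
        => _users.TryGetValue(userId, out var user) ? user : null;
}
```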
It's not really object-oriented in any sense, since the object is nothing but a clump of data sticking together. Not that that's a terrible thing.

ORM and layers

Sorry for this post being all over the place... I feel like a dog chasing my tail and I'm all confused at this point.
I'm trying to see the cleanest way of developing a 3-tiered solution (UL, BL, DL) where the DL is using an ORM to abstract access to a DB.
Everywhere I've looked, people use either LINQ to SQL or LLBLGen Pro to generate objects that represent the DB tables, and refer to those classes in all 3 layers.
It seems like 40 years of coding patterns have been ignored - or a paradigm shift has happened and I missed the explanation as to why it's perfectly OK to do so.
Yet there still appears to be some basis for wanting to be data-storage-mechanism agnostic - look at what just happened to LINQ to SQL: a lot of code was written against it, only for MS to drop it... So I would like to isolate the ORM part as best I can; I just don't know how.
So, going back to absolute basics, here are the basic parts that I wish to have assembled in a very very clean way:
The Assemblies I'm starting from:
UL.dll
BL.dll
DL.dll
The main classes:
A Message class that has a property exposing collection (called MessageAddresses) of MessageAddress objects:
class Message
{
    public MessageAddress From { get; }
    public MessageAddresses To { get; }
}
The functions per layer:
The BL exposes a Method to the UI called GetMessage (Guid id) which returns an instance of Message.
The BL in turn wraps the DL.
The DL has a ProviderFactory which wraps a Provider instance.
The DL.ProviderFactory exposes (possibly...part of my questions) two static methods called
GetMessage(Guid id), and
SaveMessage(Message message)
The ultimate goal would be to be able to swap out a provider that was written for Linq2SQL for one for LLBLGen Pro, or another provider that is not working against an ORM (eg VistaDB).
Design Goals:
I would like layer separation.
I would like each layer to only have dependency on layer below it, rather than above it.
I would like ORM generated classes to be in DL layer only.
I would like UL to share Message class with BL.
Therefore, does this mean that:
a) Message is defined in BL
b) The Db/Orm/Manual representation of the DB Table ('DbMessageRecord', or 'MessageEntity', or whatever else ORM calls it) is defined in DL.
c) BL has dependency on DL
d) Before calling DL methods, which do not reference or know about the BL, the BL has to convert its objects to and from the DL entities (e.g. DbMessageRecord)?
UL:
Main()
{
    Guid id = ...; // some existing message id
    Message m = MessageService.GetMessage(id); // MessageService lives in BL.dll
    Console.Write(string.Format("{0} to {1} recipients...", m.From, m.To.Count));
}
BL:
static class MessageService
{
    public static Message GetMessage(Guid id)
    {
        DbMessageRecord message = MessageManager.GetMessage(id);
        DbMessageAddressRecord[] messageAddresses = MessageManager.GetMessageAddresses(id);
        return MapMessage(message, messageAddresses);
    }

    private static Message MapMessage(DbMessageRecord dbMessage, DbMessageAddressRecord[] dbAddresses)
    {
        Message m = new Message(dbMessage.From);
        foreach (DbMessageAddressRecord dbAddressRecord in dbAddresses)
        {
            m.To.Add(new MessageAddress(dbAddressRecord.Name, dbAddressRecord.Address));
        }
        return m;
    }
}
DL:
static class MessageManager
{
    public static DbMessageRecord GetMessage(Guid id);
    public static DbMessageAddressRecord[] GetMessageAddresses(Guid id);
}
Questions:
a) Obviously this is a lot more work, sooner or later.
b) More bugs.
c) Slower.
d) Since the BL now has a dependency on the DL and references classes defined there (e.g. DbMessageRecord), and since those classes are defined by the ORM, it seems you can't rip out one provider and replace it with another, which makes the whole exercise pointless... might as well use the ORM's classes throughout the BL.
e) Or... another assembly is needed between the BL and DL, and another mapping is required, in order to leave the BL independent of the underlying DL classes.
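For what it's worth, option (e) can be lighter than it sounds. A sketch with hypothetical names: the BL defines a provider contract expressed only in its own Message type, and each ORM-specific assembly implements it, keeping the record-to-domain mapping private:

```csharp
using System;

public class Message { } // stub; stands in for the Message class above
public class DbMessageRecord { } // stub; stands in for the ORM-generated record

// BL.dll: the contract is expressed only in BL types
public interface IMessageProvider
{
    Message GetMessage(Guid id);
    void SaveMessage(Message message);
}

// DL.Linq2Sql.dll: one swappable implementation; DbMessageRecord
// never leaves this assembly
public class Linq2SqlMessageProvider : IMessageProvider
{
    public Message GetMessage(Guid id)
    {
        // query the ORM for a DbMessageRecord, then map it to Message
        throw new NotImplementedException();
    }

    public void SaveMessage(Message message)
    {
        // map back to a DbMessageRecord and persist it
        throw new NotImplementedException();
    }
}
```

Swapping Linq2SQL for LLBLGen Pro (or VistaDB) then means writing another implementation of IMessageProvider, not touching the BL.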
Wish I could ask the questions clearer...but I'm really just lost at this point. Any help would be greatly appreciated.
That is a little all over the place, and it reminds me of my first forays into ORM and DDD.
I personally use core domain objects, messaging objects, message handlers, and repositories.
So my UI sends a message to a handler, which in turn hydrates my objects via repositories and executes the business logic in that domain object. I use NHibernate for my data access and FluentNHibernate for typed binding rather than loosey-goosey .hbm config.
So the messaging is all that is shared between my UI and my handlers, and all BL is on the domain.
I know I might have opened myself up for punishment with my explanation; if it's not clear I will defend it later.
Personally I am not a big fan of code-generated objects.
I have to keep adding onto this answer.
Try to think of your messaging as a command rather than as a data entity representing your DB. I'll give you an example of one of my simple classes, and an infrastructure decision that worked very well for me that I can't take credit for:
[Serializable]
public class AddMediaCategoryRequest : IRequest<AddMediaCategoryResponse>
{
    private readonly Guid _parentCategory;
    private readonly string _label;
    private readonly string _description;

    public AddMediaCategoryRequest(Guid parentCategory, string label, string description)
    {
        _parentCategory = parentCategory;
        _description = description;
        _label = label;
    }

    public string Description
    {
        get { return _description; }
    }

    public string Label
    {
        get { return _label; }
    }

    public Guid ParentCategory
    {
        get { return _parentCategory; }
    }
}
[Serializable]
public class AddMediaCategoryResponse : Response
{
    public Guid ID;
}
public interface IRequest<T> : IRequest where T : Response, new() {}
[Serializable]
public class Response
{
    protected bool _success;
    private string _failureMessage = "This is the default error message. If a failure has been reported, it should have overwritten this message.";
    private Exception _exception;

    public Response()
    {
        _success = false;
    }

    public Response(bool success)
    {
        _success = success;
    }

    public Response(string failureMessage)
    {
        _failureMessage = failureMessage;
    }

    public Response(string failureMessage, Exception exception)
    {
        _failureMessage = failureMessage;
        _exception = exception;
    }

    public bool Success
    {
        get { return _success; }
    }

    public string FailureMessage
    {
        get { return _failureMessage; }
    }

    public Exception Exception
    {
        get { return _exception; }
    }

    public void Failed(string failureMessage)
    {
        _success = false;
        _failureMessage = failureMessage;
    }

    public void Failed(string failureMessage, Exception exception)
    {
        _success = false;
        _failureMessage = failureMessage;
        _exception = exception;
    }
}
public class AddMediaCategoryRequestHandler : IRequestHandler<AddMediaCategoryRequest, AddMediaCategoryResponse>
{
    private readonly IMediaCategoryRepository _mediaCategoryRepository;

    public AddMediaCategoryRequestHandler(IMediaCategoryRepository mediaCategoryRepository)
    {
        _mediaCategoryRepository = mediaCategoryRepository;
    }

    public AddMediaCategoryResponse HandleRequest(AddMediaCategoryRequest request)
    {
        MediaCategory parentCategory = null;
        MediaCategory mediaCategory = new MediaCategory(request.Description, request.Label, false);
        Guid id = _mediaCategoryRepository.Save(mediaCategory);
        if (request.ParentCategory != Guid.Empty)
        {
            parentCategory = _mediaCategoryRepository.Get(request.ParentCategory);
            parentCategory.AddCategoryTo(mediaCategory);
        }
        AddMediaCategoryResponse response = new AddMediaCategoryResponse();
        response.ID = id;
        return response;
    }
}
I know this goes on and on, but this basic system has served me very well over the last year or so.
You can see that the handler then allows the domain object to handle the domain-specific logic.
The concept you seem to be missing is IoC / DI (i.e. Inversion of Control / Dependency Injection). Instead of using static methods, each of your layers should only depend on an interface of the next layer, with actual instance injected into the constructor. You can call your DL a repository, a provider or anything else as long as it's a clean abstraction of the underlying persistence mechanism.
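A minimal sketch of that idea using the question's names (the interface and its single method are hypothetical stand-ins for whatever the DL contract ends up being):

```csharp
using System;

public class Message { } // stub; stands in for the Message class above

// The DL abstraction the BL depends on; note: an interface, not statics
public interface IMessageRepository
{
    Message GetMessage(Guid id);
}

public class MessageService
{
    private readonly IMessageRepository _repository;

    // The concrete repository (Linq2SQL-, LLBLGen-, or VistaDB-backed)
    // is chosen by the IoC container and injected here.
    public MessageService(IMessageRepository repository)
    {
        _repository = repository;
    }

    public Message GetMessage(Guid id)
    {
        return _repository.GetMessage(id);
    }
}
```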
As for the objects that represent the entities (roughly mapping to tables) I strongly advise against having two sets of objects (one database-specific and one not). It is OK for them to be referenced by all three layers as long as they are POCOs (they should not really know they're persisted), or, even DTOs (pure structures with no behavior whatsoever). Making them DTOs fits your BL concept better, however I prefer having my business logic spread across my domain objects ("the OOP style") rather than having notion of the BL ("the Microsoft style").
Not sure about LLBLGen, but NHibernate plus any IoC container like SpringFramework.NET or Windsor provides a pretty clean model that supports this.
This is probably too indirect an answer, but last year I wrestled with these sorts of questions in the Java world and found Martin Fowler's Patterns of Enterprise Application Architecture quite helpful (also see his pattern catalog). Many of the patterns deal with the same issues you're struggling with. They are all nicely abstract and helped me organize my thinking to be able to see the problem at a higher level.
I chose an approach that used the iBatis SQL mapper to encapsulate our interactions with the database. (An SQL mapper drives the programming language data model from the SQL tables, whereas an ORM like yours goes the other way around.) The SQL mapper returns lists and hierarchies of Data Transfer Objects, each of which represents a row of some query result. Parameters to queries (and inserts, updates, deletes) are passed in as DTOs too. The BL layer makes calls on the SQL Mapper (run this query, do that insert, etc.) and passes around DTOs. The DTOs go up to the presentation layer (UI) where they drive the template expansion mechanisms that generate XHTML, XML, and JSON representations of the data. So for us, the only DL dependency that flowed up to the UI was the set of DTOs, but they made the UI a lot more streamlined than passing up unpacked field values would.
If you couple the Fowler book with the specific help other posters can give, you'll do fine. This is an area with a lot of tools and prior experience, so there should be many good paths forward.
Edit: #Ciel, You're quite right, a DTO instance is just a POCO (or in my case a Java POJO). A Person DTO could have a first_name field of "Jim" and so on. Each DTO basically corresponds to a row of a database table and is just a bundle of fields, nothing more. This means it's not coupled closely with the DL and is perfectly appropriate to pass up to the UI. Fowler talks about these on p. 401 (not a bad first pattern to cut your teeth on).
Now I'm not using an ORM, which takes your data objects and creates the database. I'm using an SQL mapper, which is just a very efficient and convenient way to package and execute database queries in SQL. I designed my SQL first (I happen to know it pretty well), then I designed my DTOs, and then set up my iBatis configuration to say that "select * from Person where personid = #personid#" should return me a Java List of Person DTO objects. I've not yet used an ORM (e.g. Hibernate, in the Java world), but with one of those you'd create your data model objects first and the database is built from them.
If your data model objects have all sorts of ORM-specific add-ons, then I can see why you would think twice before exposing them up to the UI layer. But there you could create a C# interface that only defines the POCO get and set methods, and use that in all your non-DL APIs, and create an implementation class that has all the ORM-specific stuff in it:
interface Person ...
class ORMPerson : Person ...
Then if you change your ORM later, you can create alternate POCO implementations:
class NewORMPerson : Person ...
and that would only affect your DL layer code, because your BL and UI code uses Person.
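Spelled out a little further (the property names are invented for illustration), that separation might look like:

```csharp
// Only POCO-style getters and setters are visible through the interface
public interface Person
{
    string FirstName { get; set; }
    string LastName { get; set; }
}

// ORM-specific details (mapping attributes, lazy-loading hooks, ...)
// would stay on the implementation class
public class ORMPerson : Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
```

Since the BL and UI compile only against Person, swapping ORMPerson for a NewORMPerson is a DL-only change.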
#Zvolkov (below) suggests taking this approach of "coding to interfaces, not implementations" up to the next level, by recommending that you write your application in such a way that all your code uses Person objects, and that you use a dependency injection framework to dynamically configure your application to create either ORMPersons or NewORMPersons depending on which ORM you want to use that day.
Try centralizing all data access using a repository pattern. As far as your entities are concerned, you can try implementing some kind of translation layer that will map your entities, so it won't break your app. This is just temporary and will allow you to slowly refactor your code.
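That translation layer can be as small as a static mapper that knows both shapes, so the rest of the app only sees the domain type. A sketch reusing the question's class names, with stubs standing in for the real types:

```csharp
public class MessageAddress
{
    public MessageAddress(string name, string address) { Name = name; Address = address; }
    public string Name { get; }
    public string Address { get; }
}

public class Message
{
    public Message(string from) { From = from; }
    public string From { get; }
    public System.Collections.Generic.List<MessageAddress> To { get; } = new System.Collections.Generic.List<MessageAddress>();
}

public class DbMessageRecord { public string From; } // stub for the ORM record
public class DbMessageAddressRecord { public string Name; public string Address; } // stub

// Illustrative mapper: the one place that knows both shapes
public static class MessageTranslator
{
    public static Message ToDomain(DbMessageRecord record, DbMessageAddressRecord[] addresses)
    {
        var message = new Message(record.From);
        foreach (var address in addresses)
        {
            message.To.Add(new MessageAddress(address.Name, address.Address));
        }
        return message;
    }
}
```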
Obviously I do not know the full scope of your code base, so consider the pain and the gain.
My opinion only, YMMV.
When I'm messing with any new technology, I figure it should meet two criteria or I'm wasting my time. (Or I don't understand it well enough.)
It should simplify things, or at worst make them no more complicated.
It should not increase coupling or reduce cohesiveness.
It sounds like you feel like you're headed in the opposite direction, which I know is not the intention for either LINQ or ORMs.
My own perception of the value of this new stuff is that it helps a developer move the boundary between the DL and the BL into slightly more abstract territory. The DL looks less like raw tables and more like objects. That's it. (I usually work pretty hard to do this anyway with somewhat heavier SQL and stored procedures, but I'm probably more comfortable with SQL than average.) But if LINQ and ORMs aren't helping you with this yet, I'd say keep at it, because that's where the end of the tunnel is: simplification, and moving the abstraction boundary a bit.
