Domain Logic leaking into Queries in MVC.NET application using DDD - C#

I am trying to implement a query that fetches a projection of data to an MVC view from a DB managed by a domain model.
I've read that MVC controllers returning static views should request DTOs from query handlers or so-called read model repositories, rather than from aggregate root repositories returning full-fledged domain objects. This way we maximize performance (queries are optimized for exactly the data needed) and reduce the risk of domain model misuse (we can't accidentally change the model through DTOs).
The problem is that some DTO properties can't be mapped directly to a DB table field: they may be populated based on some business rule, or be the result of a condition that is not explicitly stored in the DB. That means the query acts on logic leaking from the domain. I have heard that this is wrong, and that queries should directly filter, order, project, and aggregate data from DB tables (using LINQ queries and EF in my case).
I envision two solutions so far:
1) Read model repositories internally query full domain model objects and use them to populate DTO properties (in particular those requiring business logic). Here we gain no performance benefit, because we act on instantiated domain models.
2) Cache all data that queries will ever require in the DB, probably through the domain repositories (which deal with aggregate roots), so that queries act on plain data fields (with cached values) without touching domain logic. Consistency of the cached data is then maintained by the domain repositories, which adds its own overhead.
Examples:
1) A business rule can be as simple as the string representation of certain objects or data (used across the system), i.e. formatting;
2) A business rule can be a calculated field returning a bool, as in the simple domain model below:
// first aggregate root
public class AssignedForm
{
    public int Id { get; set; }
    public string FormName { get; set; }
    public ICollection<FormRevision> FormRevisions { get; set; }

    public bool HasTrackingInformation
    {
        get
        {
            return FormRevisions.Any(
                fr => fr.RevisionType == ERevisionType.DiffCopy
                      && fr.FormRevisionItems.Any());
        }
    }

    public void CreateNextRevision()
    {
        if (HasTrackingInformation)
        {
            // ...
        }
        // ...
    }
}

public enum ERevisionType { FullCopy = 0, DiffCopy = 1 }

public class FormRevision
{
    public int Id { get; set; }
    public ERevisionType RevisionType { get; set; }
    public ICollection<FormRevisionItem> FormRevisionItems { get; set; }
}
And then we have a read model repository, say IFormTrackingInfoReader, returning a collection of objects:
public class FormTrackingInfo
{
    public int AssignedFormId { get; set; }
    public string AssignedFormName { get; set; }
    public bool HasTrackingInformation { get; set; }
}
The question is how to implement IFormTrackingInfoReader and populate the HasTrackingInformation property while sticking to the DRY principle and without domain logic leaking into the query. I have seen people just return domain objects and use mapping to populate the view model. Probably that is the way to go. Thank you for your help.
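For concreteness, here is a minimal sketch of the reader implemented as a direct EF projection (FormsContext, its AssignedForms set, and the GetAll shape are all assumptions); note that it has to restate the HasTrackingInformation rule inside the query, which is exactly the duplication at issue:

using System.Collections.Generic;
using System.Linq;

public class FormTrackingInfoReader : IFormTrackingInfoReader
{
    private readonly FormsContext _context; // hypothetical EF DbContext

    public FormTrackingInfoReader(FormsContext context)
    {
        _context = context;
    }

    public IReadOnlyList<FormTrackingInfo> GetAll()
    {
        return _context.AssignedForms
            .Select(f => new FormTrackingInfo
            {
                AssignedFormId = f.Id,
                AssignedFormName = f.FormName,
                // Restates the domain rule so EF can translate it to SQL.
                HasTrackingInformation = f.FormRevisions.Any(
                    fr => fr.RevisionType == ERevisionType.DiffCopy
                          && fr.FormRevisionItems.Any())
            })
            .ToList();
    }
}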

I don't like solution 1; the domain model is not persistence ignorant.
Personally, I prefer solution 2. But the "all data that queries will ever require" part may be a problem: if a new query requirement emerges, you may need a data migration (I have heard that event replaying does the trick when using event sourcing). So I'm wondering whether there is a hybrid solution: use value objects to implement the derivations, and create new value object instances from the DTO:
public class SomeDto {
    public String getSomeDerivation() {
        return new ValueObject(/* some data in this DTO */).someDerivation();
    }
}
In this case, I think the domain logic is protected and separated from persistence. But I haven't tried this in real projects.
UPDATE1:
The hybrid solution does not fit your particular FormTrackingInfo case, but your solution 2 does. One example (sorry, I'm not a .NET guy, so it's in Java):
public class CustomerReadModel {
    private String phoneNumber;
    // ... other properties

    public String getPhoneNumber() {
        return phoneNumber;
    }

    public String getAreaCode() {
        return new PhoneNumber(this.phoneNumber).getAreaCode(); // PhoneNumber is a value object.
    }
}
But like I said, I haven't tried it in real projects; I think it's at most an interim solution for when the cached data is not ready.
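For C# readers, the same idea as the Java sketch above might look like this (PhoneNumber is the assumed value object):

public class CustomerReadModel
{
    public string PhoneNumber { get; set; }
    // ... other properties

    public string AreaCode
    {
        // Re-create the value object from raw read-model data for the derivation.
        get { return new PhoneNumber(this.PhoneNumber).GetAreaCode(); }
    }
}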

Related

How to make use of DTO, domain model, value object,

I am designing the different layers of a scheduler in C#. This is going to be a service running in the background without a GUI.
This is my baseline for the architecture (of course only a small snippet of the structure).
I am uncertain about best practice in terms of architecture. I have been reading about POCOs, value objects, DTOs, and domain models, and from what I understand, what is presented below is a wrong approach to DTOs.
In my class ScheduleDTO, I have several methods doing relatively complex manipulations of data coming from the database. CalculatePriority is a simplified example of one of those methods.
Database properties:
ID, Name, Frequency, LastRun
Manipulated properties:
Priority
The purpose of the JobManager is to evaluate all schedules and on-demand runs.
To my understanding, the DTO should only contain data and transfer it between the different layers, and I also believe this should not be the JobManager's responsibility.
public class ScheduleDTO
{
    public Guid ID { get; set; }
    public string Name { get; set; }
    public int Frequency { get; set; }
    public DateTime LastRun { get; set; }

    // Calculation based on the values above
    public double Priority
    {
        get
        {
            return CalculatePriority();
        }
    }

    public double CalculatePriority()
    {
        return (DateTime.Now - LastRun.AddSeconds(Frequency)).TotalSeconds / 100;
    }
}
Should I create some different type of object (POCO, domain model, ...) that manipulates the data in the DTOs?
I really appreciate any help about how to construct the different layers, or anything that could lead me in the right direction.
This is normally handled by a service layer (aka business logic layer, BLL, etc.). The service layer's job is to hold the core business logic. There is a long-standing argument about whether this layer should exist or whether the logic should be integrated into the domain objects. See this post for more details, and do some googling about anemic domain models and transaction scripts.
In general, when I see anything called "Manager", I immediately flag it for close inspection: it is likely violating the Single Responsibility Principle. It tends to produce "God objects", which are a dangerous anti-pattern.
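For example, a minimal sketch of that separation applied to the scheduler above (SchedulePriorityCalculator is a hypothetical service-layer class; the DTO then keeps only data):

using System;

public class SchedulePriorityCalculator
{
    public double CalculatePriority(ScheduleDTO schedule)
    {
        // Seconds elapsed past the expected next run (LastRun + Frequency), scaled down.
        return (DateTime.Now - schedule.LastRun.AddSeconds(schedule.Frequency))
            .TotalSeconds / 100;
    }
}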

Am I supposed to use Objects or ViewModels for MVC or both?

I'm learning MVC. I have an application that I developed using webforms that I'm porting over, but I've hit a bit of a snag.
I'm using the Entity Framework as well.
Currently, my models represent database tables. Generally my controller will grab the data from the table and create a view model that will be passed to the view.
Where I'm a bit confused: when I need to make some transformations based on the model data and pass the result to the view, I'm not sure where that transformation should take place.
In the webforms application I would have a class where I would create new objects from and then all of the transformations would happen there. A good example would be a User; the database would store first and last name, and the object would have a public property for FullName that would get set in the constructor.
I've read some conflicting discussions on thin controllers/fat models, and vice versa, and I'm just a little confused. Thin controllers and fat models seem to be the recommended way according to the Microsoft docs, but they don't really give real-world examples of doing so.
So with an example:
public class UserEntity
{
    public int ID { get; set; }
    public string FName { get; set; }
    public string LName { get; set; }
}

public class UserController : Controller
{
    protected readonly DBContext _context;

    public UserController(DBContext context)
    {
        _context = context;
    }

    public IActionResult Index(int id)
    {
        var _user = _context.Users.Single(u => u.ID == id);
        UserViewModel _viewModel = new UserViewModel
        {
            FirstName = _user.FName,
            LastName = _user.LName,
            FullName = ""
        };
        return View(_viewModel);
    }
}
If the above isn't perfect, forgive me - I just wrote it up for a quick example. It's not intended to be flawless code.
For FullName, where would I put the logic that gives me that information? Now, I realize that in this very simple example I could easily get the full name right there, but let's just pretend it's a much more complex operation than concatenating two strings. Where would I place a GetFullName method?
Would I have a method in my model? Would I instead create a User class and pass it the returned model data? What about a separate class library? If either of the latter, would I pass User objects to my view model, or would I set view model properties from the User object that was created?
Entity Framework often maps a representation of the business directly from a relational data implementation. That is ideal for a clean representation of the business model, but within a web page that direct representation often doesn't translate or play well with the application structure.
People usually end up implementing a pattern called Model-View-ViewModel (MVVM): basically, a transformation of one or more entities into a single object to be used by the view as its model. This transformation solves an abundance of issues. For example:
public class UserModel
{
    private readonly UserEntity user;

    public UserModel(UserEntity user) => this.user = user;

    public string Name => $"{user.FName} {user.LName}";
}
The entity and database store a user's name split into first and last. Placing the entity inside another structure lets you build a representative model that adheres to the view. It's obviously a simple example, but the approach is often used for a more transparent representation, since the view and the database may not coincide with each other exactly.
So now your controller would do something along these lines.
public class UserController : Controller
{
    public IActionResult Index(int id) =>
        View(new UserModel(new UserService().GetUserInformation(id)));
}
Just as I finished answering, I saw a comment that expresses what I'm trying to say quite well:

ViewModels are what the name implies: models for specific views. They aren't domain entities or DTOs. If a method makes sense for a view's model, a good place to put it is in the ViewModel. Validations, notifications, calculated properties etc. are all good candidates. A mortgage calculator on the other hand would be a bad candidate - that's business functionality. – Panagiotis Kanavos
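Following that comment's advice, a minimal sketch of a view model that owns the calculated property (this mirrors the UserViewModel from the controller example, dropping the settable FullName):

public class UserViewModel
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    // The calculated property lives on the view model itself.
    public string FullName => $"{FirstName} {LastName}";
}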

Managing persistence in DDD

Let's say that I want to create a blog application with these two simple persistence classes, used with EF Code First or NHibernate and returned from the repository layer:
public class PostPersistence
{
    public int Id { get; set; }
    public string Text { get; set; }
    public IList<LikePersistence> Likes { get; set; }
}

public class LikePersistence
{
    public int Id { get; set; }
    // ... some other properties
}
I can't figure out a clean way to map my persistence models to domain models. I'd like my Post domain model interface to look something like this:
public interface IPost
{
    int Id { get; }
    string Text { get; set; }
    IEnumerable<ILike> Likes { get; }
    void Like();
}
Now, what would an implementation underneath look like? Maybe something like this:
public class Post : IPost
{
    private readonly PostPersistence _postPersistence;
    private readonly INotificationService _notificationService;

    public int Id
    {
        get { return _postPersistence.Id; }
    }

    public string Text
    {
        get { return _postPersistence.Text; }
        set { _postPersistence.Text = value; }
    }

    public IEnumerable<ILike> Likes
    {
        // this seems really out of place
        get { return _postPersistence.Likes.Select(likePersistence => new Like(likePersistence)); }
    }

    public Post(PostPersistence postPersistence, INotificationService notificationService)
    {
        _postPersistence = postPersistence;
        _notificationService = notificationService;
    }

    public void Like()
    {
        _postPersistence.Likes.Add(new LikePersistence());
        _notificationService.NotifyPostLiked(Id);
    }
}
I've spent some time reading about DDD, but most examples were theoretical or used the same ORM classes in the domain layer. My solution seems really ugly, because the domain models are in fact just wrappers around ORM classes, which doesn't feel like a domain-centric approach. Also, the way IEnumerable<ILike> Likes is implemented bothers me, because it won't benefit from LINQ to SQL. What are other (concrete!) options for creating domain objects with a more transparent persistence implementation?
One of the goals of persistence in DDD is persistence ignorance, which is what you seem to be striving for to some extent. One of the issues I see with your code samples is that you have your entities implementing interfaces and referencing repositories and services. In DDD, entities should not implement interfaces that are just abstractions of themselves, nor hold instance dependencies on repositories or services. If a specific behavior on an entity requires a service, pass that service directly into the corresponding method. Otherwise, all interactions with services and repositories should be done outside of the entity, typically in an application service. The application service orchestrates between repositories and services in order to invoke behaviors on domain entities. As a result, entities don't need to reference services or repositories directly: all they have is some state, and behavior which modifies that state and maintains its integrity. The job of the ORM then is to map this state to table(s) in a relational database. ORMs such as NHibernate allow you to attain a relatively large degree of persistence ignorance.
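A minimal sketch of that shape (IPostRepository, its Save method, and the application service are assumed names; the entity holds only state and behavior):

using System.Collections.Generic;

public class Post
{
    private readonly List<Like> _likes = new List<Like>();

    public int Id { get; private set; }
    public string Text { get; set; }
    public IEnumerable<Like> Likes { get { return _likes; } }

    // The service is passed into the behavior, not stored on the entity.
    public void Like(INotificationService notificationService)
    {
        _likes.Add(new Like());
        notificationService.NotifyPostLiked(Id);
    }
}

public class PostApplicationService
{
    private readonly IPostRepository _repository;          // assumed abstraction
    private readonly INotificationService _notifications;  // from the question

    public PostApplicationService(IPostRepository repository,
                                  INotificationService notifications)
    {
        _repository = repository;
        _notifications = notifications;
    }

    public void LikePost(int postId)
    {
        var post = _repository.Get(postId); // orchestration lives here
        post.Like(_notifications);
        _repository.Save(post);
    }
}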
UPDATES
Still, I don't want to expose a method with an INotificationService parameter, because this service should be internal; the layer above doesn't need to know about it.
In your current implementation of the Post class, the INotificationService already has the same or greater visibility than the class itself. If INotificationService is implemented in an infrastructure layer, it already has to have sufficient visibility. Take a look at hexagonal architecture for an overview of layering in modern architectures.
As a side note, functionality associated with notifications can often be placed into handlers for domain events. This is a powerful technique for attaining a great degree of decoupling.
And with separate DTO and domain classes, how would you solve the persistence synchronization problem when the domain object doesn't know about its underlying DTO? How do you track changes?
A DTO and the corresponding domain classes exist for very different reasons. The purpose of the DTO is to carry data across system boundaries. DTOs are not in a one-to-one correspondence with domain objects: they can represent part of the domain object, or a change to the domain object. One way to track changes is to have the DTO be explicit about the changes it contains. For example, suppose you have a UI screen that allows editing of a Post. That screen can capture all the changes made and send those changes in a command (DTO) to a service. The service would load up the appropriate Post entity and apply the changes specified by the command.
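A hedged sketch of such a change-capturing command (all names hypothetical):

public class EditPostCommand
{
    public int PostId { get; set; }
    public string NewText { get; set; }
}

public class PostCommandService
{
    private readonly IPostRepository _repository; // assumed abstraction

    public PostCommandService(IPostRepository repository)
    {
        _repository = repository;
    }

    public void Handle(EditPostCommand command)
    {
        var post = _repository.Get(command.PostId);
        post.ChangeText(command.NewText); // hypothetical behavior on the entity
        _repository.Save(post);
    }
}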
I think you need to do a bit more research, see all the options, and decide whether it is really worth the hassle to go for a full DDD implementation. I've been there myself over the last few days, so I'll tell you my experience.
EF Code First is quite promising, but there are quite a few issues with it; I have an entry on this here:
Entity Framework and Domain Driven Design. With EF, your domain models can be persisted without you having to create a separate "persistence" class. You can use POCOs (plain old CLR objects) and get a simple application up and running, but as I said, to me it's not fully mature yet.
If you use LINQ to SQL, the most common approach would be to manually map a "data transfer object" to a business object. Doing this manually can be tough for a big application, so check out a tool like AutoMapper. Alternatively, you can simply wrap the DTO in a business object, like:
public class Post
{
    PostPersistence Post { get; set; }
    public IList<LikePersistence> Likes { get; set; }
    // ...
}
NHibernate: not sure, I haven't used it for a long time.
My feeling on this (and it is just an opinion, I may be wrong) is that you'll always have to make compromises and won't find a perfect solution out there. Give EF a couple more years and it may get there. I think an approach that maps DTOs to DDD objects is probably the most flexible, so looking for an auto-mapping tool may be worth your time. If you want to keep it simple, my favourite would be some simple wrappers around DTOs when required.
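To make the AutoMapper suggestion concrete, a small hedged sketch (PostDto is a hypothetical flat DTO; this uses AutoMapper's MapperConfiguration API, and in a real app the configuration would be created once at startup):

using AutoMapper;

public class PostDto
{
    public int Id { get; set; }
    public string Text { get; set; }
}

public static class PostMapping
{
    // Configured once; CreateMap matches properties by name.
    private static readonly IMapper Mapper = new MapperConfiguration(
        cfg => cfg.CreateMap<PostPersistence, PostDto>()).CreateMapper();

    public static PostDto ToDto(PostPersistence source)
    {
        return Mapper.Map<PostDto>(source);
    }
}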

How should I handle creation of composite entities with a hand-rolled DAL?

For reasons beyond my control I cannot use a real ORM, so I am forced to create a custom DAL that sits over the raw data and returns "domain objects" to the consumer. Also for reasons beyond my control I must use Stored Procedures for data access.
I am making use of the Factory and Repository patterns for data access, or at least in basic theory:
The call to SqlCommand and friends is hidden by a Repository class that takes parameters as needed and returns domain objects.
To create the domain object, the Repository has an internal reference to a Factory of its own type (e.g. Customer, Order, etc.). The factory has a single method, Create, which takes a DataRow as its input and maps the DataRow's columns to properties of the domain object.
This seems to work fairly well for basic objects that map to a single table. Now, here's the issue I'm running into: I want some of these domain objects to be richer and have collections of related objects. Here's a concrete example of a domain object in this system I'm working on:
class Case
{
    public string CaseNumber { get; internal set; }
    public ICollection<Message> Messages { get; set; }
}

class Message
{
    public int MessageId { get; internal set; }
    public string Content { get; set; }
}
In simple parlance, a Case has many Messages. My concern is the best way to retrieve the raw data for a Case, since I also need the list of associated messages. It seems to me I can either:
Run a second stored procedure in the CaseRepository when I retrieve a Case, to get all the Messages that belong to it. This doesn't seem like a good idea, because it means every time I look up a Case I'm making two database calls instead of one.
Use one stored procedure that returns two tables (one containing a single row with the information for the Case, one containing zero or more rows with the messages that belong to it) and call two factory methods, i.e. CaseFactory.Create(caseDataRow) and a loop that calls MessageFactory.Create(messageDataRow). This makes more sense, as the Case is the aggregate root (or is pretending to be one, as the case may be) and so should know how to create the messages that hang off it.
The second option seems to offer better performance but requires more code. Is there a third option I'm overlooking, or is #2 the best way to handle this kind of composite object when I can't use a true ORM (or even something like LINQ to SQL)?
As it stands, your repositories are more like table gateways (even via sprocs). You'll need a new layer of repositories, which have access to one or more table gateways, and are able to assemble the composite domain entities from the data returned from many tables.
class Customer
{
    public string Name { get; set; } // etc.
    public Address HomeAddress { get; set; }
    public Order[] Orders { get; set; }
}

interface ICustomerTableGateway { /* ... */ }
interface IAddressTableGateway { /* ... */ }
interface IOrderTableGateway { /* ... */ }

class CustomerRepository
{
    Customer Get(int id)
    {
        var customer = customerTableGateway.Get(id);
        customer.HomeAddress = addressTableGateway.Get(id);
        customer.Orders = orderTableGateway.GetAll(id);
        return customer;
    }
}
If you can return multiple tables from a single sproc, then that simplifies things further :)
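For example, a minimal sketch of the multi-table read using ADO.NET's DataSet (the sproc name and parameter are assumptions; CaseFactory and MessageFactory are the factories described in the question, each taking a DataRow):

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("dbo.GetCaseWithMessages", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.AddWithValue("@CaseNumber", caseNumber);

    var dataSet = new DataSet();
    using (var adapter = new SqlDataAdapter(command))
    {
        // Fill handles opening/closing the connection and splits the two
        // result sets into Tables[0] (case) and Tables[1] (messages).
        adapter.Fill(dataSet);
    }

    var theCase = CaseFactory.Create(dataSet.Tables[0].Rows[0]);
    theCase.Messages = new List<Message>();
    foreach (DataRow row in dataSet.Tables[1].Rows)
        theCase.Messages.Add(MessageFactory.Create(row));
}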

Best "pattern" for Data Access Layer to Business Object

I'm trying to figure out the cleanest way to do this.
Currently I have a customer object:
public class Customer
{
    public int Id { get; set; }
    public string name { get; set; }
    public List<Email> emailCollection { get; set; }

    public Customer(int id)
    {
        this.emailCollection = Email.getEmails(id);
    }
}
Then my Email object is also pretty basic.
public class Email
{
    private int index;
    public string emailAddress { get; set; }
    public int emailType { get; set; }

    public Email(...) { ... }

    public static List<Email> getEmails(int id)
    {
        return DataAccessLayer.getCustomerEmailsByID(id);
    }
}
The DataAccessLayer currently connects to the database and uses a SqlDataReader to iterate over the result set, creating new Email objects and adding them to a List, which it returns when done.
So where and how can I improve upon this?
Should I instead have my DataAccessLayer return a DataTable and leave it up to the Email object to parse it and return a List back to the Customer?
I guess "Factory" is probably the wrong word, but should I have another type of EmailFactory that takes a DataTable from the DataAccessLayer and returns a List to the Email object? I guess that sounds kind of redundant...
Is it even proper practice to have Email.getEmails(id) as a static method?
I might just be throwing myself off by trying to find and apply the best "pattern" to what would normally be a simple task.
Thanks.
Follow up
I created a working example where my domain/business object extracts a customer record by id from an existing database. The XML mapping files in NHibernate are really neat. After I followed a tutorial to set up the session and repository factories, pulling database records was pretty straightforward.
However, I've noticed a huge performance hit.
My original method consisted of a stored procedure on the DB, which was called by a DAL object that parsed the result set into my domain/business object.
I clocked my original method at 30 ms to grab a single customer record. I then clocked the NHibernate method at 3000 ms to grab the same record.
Am I missing something? Or is there just a lot of overhead with this NHibernate route?
Otherwise I like the cleanliness of the code:
protected void Page_Load(object sender, EventArgs e)
{
    ICustomerRepository repository = new CustomerRepository();
    Customer customer = repository.GetById(id);
}

public class CustomerRepository : ICustomerRepository
{
    public Customer GetById(string Id)
    {
        using (ISession session = NHibernateHelper.OpenSession())
        {
            Customer customer = session
                .CreateCriteria(typeof(Customer))
                .Add(Restrictions.Eq("ID", Id))
                .UniqueResult<Customer>();
            return customer;
        }
    }
}
The example I followed had me create a helper class to manage the session; maybe that's why I'm getting this overhead?
public class NHibernateHelper
{
    private static ISessionFactory _sessionFactory;

    private static ISessionFactory SessionFactory
    {
        get
        {
            // Building the session factory is expensive; it is created once
            // here and cached for the lifetime of the application.
            if (_sessionFactory == null)
            {
                Configuration cfg = new Configuration();
                cfg.Configure();
                cfg.AddAssembly(typeof(Customer).Assembly);
                _sessionFactory = cfg.BuildSessionFactory();
            }
            return _sessionFactory;
        }
    }

    public static ISession OpenSession()
    {
        return SessionFactory.OpenSession();
    }
}
With the application I'm working on, speed is of the essence, and ultimately a lot of data will pass between the web app and the database. If it takes an agent 1/3 of a second to pull up a customer record, as opposed to 3 seconds, that would be a huge hit. But if I'm doing something weird and this is a one-time initial setup cost, then it might be worth it, provided the ongoing performance were just as good as executing stored procedures on the DB.
Still open to suggestions!
Updated.
I'm scrapping my ORM/NHibernate route. I found the performance just too slow to justify using it; basic customer queries simply take too long for our environment. Three seconds, compared to sub-second responses, is too much.
If we wanted slow queries, we'd just keep our current implementation. The whole point of the rewrite was to drastically improve response times.
However, having played with NHibernate this past week, it is a great tool! It just doesn't quite fit my needs for this project.
If the configuration you've got works now, why mess with it? It doesn't sound like you've identified any particular needs or issues with the code as it is.
I'm sure a bunch of OO types could huddle around and suggest various refactorings so that the correct responsibilities and roles are respected, and somebody might even try to shoehorn in a design pattern or two. But the code you have now is simple and sounds like it doesn't have any issues - I'd say leave it.
I've implemented a DAL layer by basically doing what NHibernate does, but manually. What NHibernate does is create a proxy class that inherits from your domain object (which should have all its members marked as virtual). All data access code goes into property overrides; it's pretty elegant, actually.
I simplified this somewhat by having my repositories fill out the simple properties themselves and only using a proxy for lazy loading. What I ended up with is a set of classes like this:
public class Product
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public virtual Customer Customer { get; set; }
}

public class ProductLazyLoadProxy : Product
{
    private readonly ICustomerRepository _customerRepository;

    public ProductLazyLoadProxy(ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
    }

    public override Customer Customer
    {
        get
        {
            // Load the related customer on first access only.
            if (base.Customer == null)
                base.Customer = _customerRepository.Get(CustomerId);
            return base.Customer;
        }
        set { base.Customer = value; }
    }
}

public class ProductRepository : IProductRepository
{
    private readonly ICustomerRepository _customerRepository;

    public ProductRepository(ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
    }

    public Product Get(int id)
    {
        var dr = GetDataReaderForId(id);
        return new ProductLazyLoadProxy(_customerRepository)
        {
            Id = Convert.ToInt32(dr["id"]),
            CustomerId = Convert.ToInt32(dr["customer_id"]),
        };
    }
}
But after writing about 20 of these I just gave up and learned NHibernate. With Linq2NHibernate for querying and FluentNHibernate for configuration, the roadblocks are lower than ever nowadays.
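As a taste of FluentNHibernate, a hedged sketch of a mapping for the Product class above (the column name is an assumption):

using FluentNHibernate.Mapping;

public class ProductMap : ClassMap<Product>
{
    public ProductMap()
    {
        Id(x => x.Id);
        // Customer must be virtual so NHibernate can generate the lazy proxy.
        References(x => x.Customer).Column("customer_id").LazyLoad();
    }
}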
Most likely your application has its domain logic set up as transaction scripts. For .NET implementations that use transaction scripts, Martin Fowler recommends the table data gateway pattern. .NET provides good support for this pattern, because the table data gateway works well with record sets, which Microsoft implements with its DataSet-type classes.
Various tools within the Visual Studio environment should increase your productivity. The fact that DataSets can easily be data-bound to various controls (like the DataGridView) makes them a good choice for data-driven applications.
If your business logic is more complex than a few validations, a domain model becomes a good option. Do note that a domain model comes with a whole different set of data access requirements!
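As a rough illustration of the table data gateway idea (table and column names are assumptions, not from the question):

using System.Data;
using System.Data.SqlClient;

// Hedged sketch of a table data gateway over a Customers table.
public class CustomerGateway
{
    private readonly string _connectionString;

    public CustomerGateway(string connectionString)
    {
        _connectionString = connectionString;
    }

    public DataTable FindById(int id)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var adapter = new SqlDataAdapter(
            "SELECT Id, Name FROM Customers WHERE Id = @Id", connection))
        {
            adapter.SelectCommand.Parameters.AddWithValue("@Id", id);
            var table = new DataTable();
            adapter.Fill(table); // opens and closes the connection itself
            return table;
        }
    }
}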
This may be too radical for you, and it doesn't really answer the question, but how about completely scrapping your data layer and opting for an ORM? You would avoid a lot of the code redundancy that spending a week or so on a DAL brings.
That aside, the pattern you're using resembles the repository pattern, sort of. I'd say your options are:
A service object in your Email class - say EmailService - instantiated in the constructor or a property, and accessed via an instance, such as email.Service.GetById(id)
A static method on Email, like Email.GetById(id), which is a similar approach
A completely separate static class that is basically a façade, e.g. EmailManager, with static methods like EmailManager.GetById(int)
The ActiveRecord pattern, where you deal with an instance, like email.Save() and email.GetById() (see the sketch below)
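A hedged sketch of that last, ActiveRecord-style option (DataAccessLayer.SaveEmail is a hypothetical counterpart to the getCustomerEmailsByID call from the question):

using System.Collections.Generic;

public class Email
{
    public int Id { get; set; }
    public string EmailAddress { get; set; }
    public int EmailType { get; set; }

    // Finder lives on the class itself.
    public static List<Email> GetByCustomerId(int customerId)
    {
        return DataAccessLayer.getCustomerEmailsByID(customerId);
    }

    // The instance knows how to persist itself.
    public void Save()
    {
        DataAccessLayer.SaveEmail(this); // hypothetical save method
    }
}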
