Should I create namespaces for Entities and Value Objects? - c#

I am building an application using DDD principles. I am now thinking about the namespace structure in the core of my application. Please see the idea below:
Company.Application.Core.Entities
Company.Application.Core.ValueObjects
However, I cannot find a single example of an application on GitHub which follows this convention. Is there a specific reason not to follow it?
I also have a base class for entities, i.e. Company.Application.Core.Entities.Entity, and a base class for value objects, i.e. Company.Application.Core.ValueObjects.ValueObject.
The alternative option is to put all Value Objects and Entities in: Company.Application.Core

Your approach will work, but such a composition tells a story about your code that is focused on DDD building blocks, not on the inherent features of your domain. In DDD we want to highlight the important things about the domain; technology concerns are no longer the most important ones.
I suggest creating the following namespaces:
YourCompany.YourApplicationName.YourParticularBoundedContextName.Application
Here you can keep all Application-scope building blocks, i.e. Application Services and the DTOs used to pass parameters to Application Services and to return data from them.
YourCompany.YourApplicationName.YourParticularBoundedContextName.Domain
This is the namespace in which you create subnamespaces for Domain-scope building blocks.
YourCompany.YourApplicationName.YourParticularBoundedContextName.Domain.AggregateName
Each Aggregate gets its own namespace, containing the Aggregate Root class, the Entities and VOs used internally by that Aggregate, the Repository interface, an Aggregate Factory if needed, etc.
I don't know whether this is possible in C#, but in Java there is another advantage to having a separate package (namespace) per Aggregate: you can make the Aggregate Root class public and give all the other internally used Entities and VOs package scope, so they are not visible outside the package (namespace). This way you build a public API for your Aggregate that no one can break, because there is a guardian: the compiler :)
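In C#, the closest equivalent is the internal access modifier, which restricts visibility to the assembly rather than the namespace. A minimal sketch of the same guarding idea (type names are hypothetical):

using System.Collections.Generic;

namespace YourCompany.YourApp.Sales.Domain.Orders
{
    // Aggregate Root: the only public entry point into the Aggregate.
    public class Order
    {
        private readonly List<OrderLine> _lines = new List<OrderLine>();

        public void AddLine(string product, int quantity)
        {
            _lines.Add(new OrderLine(product, quantity));
        }
    }

    // Entity used only inside the Aggregate; invisible outside the assembly.
    internal class OrderLine
    {
        internal string Product { get; }
        internal int Quantity { get; }

        internal OrderLine(string product, int quantity)
        {
            Product = product;
            Quantity = quantity;
        }
    }
}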
YourCompany.YourApplicationName.YourParticularBoundedContextName.Infrastructure
Here is the place for repository implementations (each in a subnamespace corresponding to its Aggregate).
Base classes can be kept in:
YourCompany.YourApplicationName.Domain
and can even be kept in a separate project, since you may be able to reuse them in another application.
What is the advantage? When working with the code you focus on features and the domain rather than on technological aspects. You will more often face problems like "what does this process flow look like" than "I want to see all my Entities and VOs at once", so let your code structure support that. By separating Entities (Aggregate parts, really) and VOs (also Aggregate parts) into their own namespaces, you lose the information about what works with what. You can easily end up with a big ball of mud, because you will reuse something that shouldn't be reused.
Please look at:
https://github.com/BottegaIT/ddd-leaven-v2
It is a sample project in Java with packaging done this way; maybe it will help you.
Another example is:
https://github.com/VaughnVernon/IDDD_Samples
which is a sample for Vaughn Vernon's book about DDD.
There is also an article that can be useful:
http://www.codingthearchitecture.com/2015/03/08/package_by_component_and_architecturally_aligned_testing.html

Using separate namespaces for your entity types (which map to database tables etc.) and your DTO types (used for passing data between the client and server layers of your application) is pretty standard practice, even if .Entities and .ValueObjects aren't particularly common choices. I don't think it's worth worrying about too much, as long as you use them consistently.

Related

Solution structure with Repository, DAL, BAL

We would like to create a new project with a clean architecture. So our team decided to have:
Repository pattern
Data Access Layer
Business Access Layer
Common Layer (Abstractions such as IPersonRepository, IPersonService, ICSVExport)
Some core services, such as creating CSV files.
UnitTests
Now what we have is:
PersonsApp.Solution
--PersonsApp.WebUI
-- Controllers (PersonController)
--PersonApp.Persistence
--Core folder
-IGenericRepository.cs (Abstraction)
-IUnitOfWork.cs (Abstraction)
--Infrastructure folder
-DbFactory.cs (Implementation)
-Disposable.cs (Implementation)
-IDbFactory.cs (Abstraction)
-RepositoryBase.cs (Abstraction)
--Models folder
- Here we have the DbContext and EF models (Implementation)
--Repositories
- PersonRepository.cs (Implementation)
--PersonApp.Service
--Core folder
-IPersonService.cs (Abstraction)
-ICSVService.cs (Abstraction)
--Business
-PersonService.cs (Implementation)
--System
-CSVService.cs (Implementation)
--PersonApp.Test
In my view, our structure is a little bit messy.
The first problem is:
PersonApp.Service has abstractions (interfaces) and implementations in one class library.
The second problem is:
PersonApp.Persistence has abstractions (RepositoryBase) and implementations in one class library. But if I move RepositoryBase, IGenericRepository, and IUnitOfWork into a class library called PersonApp.Abstractions, then I will get circular reference errors between PersonApp.Abstractions and PersonApp.Persistence.
What is the best way to organize our solution?
This is probably not a good S.O. question, given that it's asking something opinion-based. When planning out project structure, I aim to keep things simple.

If an abstraction is for polymorphism I will consider moving interfaces into a separate "common" assembly. For example, if I want to provide several possible implementations of a thing, I will have a common assembly that declares the interface, then separate assemblies for the specific implementations. In most cases, though, I use interfaces as contracts so that I can substitute the real implementations with mocks. In those cases I keep the interfaces nested beneath the concrete implementation. I use a VS add-in called NestIn to provide nesting support. This keeps the project structure nice and compact. One caveat: if you are using .NET Standard libraries, file nesting doesn't appear to be supported. (Hopefully this changes / has changed.)
So for a SomeService, my folder project structure would look like:
Services [folder]
SomeService.cs [concrete]
SomeService.dependencies.cs [partial] [nested]
ISomeService.cs [nested]
The .dependencies.cs file is a partial class where I put all dependencies and the constructor. This keeps them tucked out of the way while I'm working on the implementation. I used to rely on #regions way back, but frankly I can't stand them now; partial classes are much better, IMO.
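A minimal sketch of that layout (all names are made up for illustration):

// ISomeService.cs [nested under SomeService.cs in the project tree]
public interface ISomeService
{
    string GetOrderSummary(int orderId);
}

// A dependency, declared inline here to keep the sketch self-contained.
public interface IOrderRepository
{
    string GetOrderSummary(int orderId);
}

// SomeService.cs -- the implementation being actively worked on.
public partial class SomeService : ISomeService
{
    public string GetOrderSummary(int orderId)
    {
        return _orderRepository.GetOrderSummary(orderId);
    }
}

// SomeService.dependencies.cs -- constructor and dependencies, tucked away.
public partial class SomeService
{
    private readonly IOrderRepository _orderRepository;

    public SomeService(IOrderRepository orderRepository)
    {
        _orderRepository = orderRepository;
    }
}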
My repositories live alongside my entities in a Domain assembly.
Entities [folder]
Configuration [folder]
OrderConfiguration.cs
Order.cs
Repositories [folder]
OrderManagementRepository.cs
OrderManagementRepository.dependencies.cs
IOrderManagementRepository.cs
MySystemDbContext.cs
I don't use generic repositories; rather, repositories are designed to pair up with the Controllers or Services they serve. I might have some general-purpose repositories that serve more than one consumer (stuff like lookups, etc.). This pattern evolved for me from wanting to satisfy SRP. The biggest issue with things like generic repositories is that they need to serve multiple masters. While an OrderRepository might serve a single responsibility in being concerned solely with Orders, the problem I see is that many different places will need access to Order information. That means different criteria, and different amounts of data wanted. So instead, if I have an OrderManagementService that deals with orders, order lines, etc. and touches on Products and other bits and bobs in the process of placing orders, I will use an OrderManagementRepository to serve virtually all the data needed by the service, and to wrap the operations supported for managing an order. This means my service typically needs only one repository dependency (rather than an OrderRepository, ProductRepository, etc.) and my OrderManagementRepository has only one reason to change. (But that's getting off topic. :)
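As a sketch, such a service-aligned repository contract might look like this (the entity types are stubs for illustration):

using System.Collections.Generic;

public class Order { public int Id { get; set; } }
public class OrderLine { }
public class Product { }

// One repository shaped around what OrderManagementService needs,
// so the service has a single data dependency.
public interface IOrderManagementRepository
{
    Order GetOrder(int orderId);
    IList<OrderLine> GetOrderLines(int orderId);
    Product GetProduct(int productId);
    void CreateOrder(Order order);
}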
I started relying on nesting a while back, when you needed ReSharper or the like to get "Go to Implementation" for interfaces. Go to Definition would take you to the interface, which, when in a separate namespace or assembly, made navigating around dependencies a pain. By nesting interfaces under their concrete implementations, it's a quick click through from the interface to its concrete implementation and back. I track the current code file in Solution Explorer, so as I navigate through code my project view highlights and expands to the currently viewed file.
Ultimately, your project structure should reflect how you prefer to navigate through the code, making it as intuitive as possible to get around and find the bits you need. That will differ from person to person, so partial classes and nesting work really well for me, as I am a very visual person who uses the project view a lot. They might not serve any benefit for people who are hotkey navigation wizards.

Ultimately though, I'd say keep it simple and adaptable. Trying to plan it out too much in the early stages is like premature optimization. Don't be afraid to move things around as a project grows. A project that grows simply by adding code will invariably turn into an unstable, confusing, tangled mess, no matter how well you try to plan ahead. Good code comes from constant refactoring, which is moving things around and deleting as well as adding. When your style is adaptable and you are building in a way that constantly refines, with code getting better through natural selection, the structure is free to evolve.
Hopefully that might give some food for thought. Good luck in the green fields!
Edit: Regarding polymorphic interfaces vs. contract interfaces. With polymorphic interfaces, where I want to have multiple substitutable concrete implementations, the interface (and any applicable base class) resides in a separate assembly. The nesting solution applies for cases where the only substitution is for mocking purposes (unit testing). A recent example of a polymorphic case was when I needed to replace a built-in SMS service wrapper to support a new SMS provider. This resulted in refactoring a hard-coded concrete class from the original code into an SMSCore assembly containing the ISMSProvider interface and some general common definitions, then two assemblies for the implementations: SMSByMessageMedia and SMSBySoprano.
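In outline, the result looked something like this (signatures are assumptions, not the actual project code):

// SMSCore assembly: the contract and common definitions.
public interface ISMSProvider
{
    void Send(string phoneNumber, string message);
}

// SMSByMessageMedia assembly.
public class MessageMediaProvider : ISMSProvider
{
    public void Send(string phoneNumber, string message)
    {
        // Call the MessageMedia gateway here.
    }
}

// SMSBySoprano assembly.
public class SopranoProvider : ISMSProvider
{
    public void Send(string phoneNumber, string message)
    {
        // Call the Soprano gateway here.
    }
}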
Other cases that come up might be around customizations. For instance, I have a number of personal libraries and such for general-purpose code, and when implementing them for a client there might be some client-specific "isms" to accommodate. These cases are typically resolved by extending the general-purpose implementation (Open-Closed Principle) through overriding, or by implementing a provided interface for the custom dependency that the general-purpose code consumes. In both of these cases the client project is going to have a reference to the concrete implementation(s) anyway, so having extendable classes and dependency interfaces in that assembly/namespace doesn't pose any issues. This saves needing to add several different namespaces and assembly references.

Use auto-generated classes/objects when using a SOAP WS in .NET?

In a current project I have to develop a .NET client application which uses a handful of SOAP web services to communicate with external software.
Fortunately, .NET makes it very easy to consume a SOAP WS, as it generates all the required objects when you add a service reference.
On the other hand, after playing around with these auto-generated classes for a while, I'm not sure whether it's better to use them directly in the business logic or to map them into my own models (e.g. using something like the repository pattern).
Pros for mapping:
- Separation of business logic and data access (WS could change)
- Central point which calls the WS (can validate the responses and do a proper error handling)
- Sometimes WS types are cumbersome to use (e.g. WebService1.TypeA is not compatible with WebService2.TypeA).
- Generated classes cannot/should not be customized.
- ...
Cons for mapping:
Some of the WSDLs used have a complex structure and lots of nested types. If I map them to my own models, I have to duplicate many classes and properties. That's why I have concerns about this solution.
In short, I'm unsure whether duplicating the web service classes into my own namespaces and implementing a repository or facade pattern is a proper way to go, or whether it just blows up the architecture.
Are there any best practices or similar?
In my 20+ years of experience, adding a repository/service layer can be overkill if the lifetime of the project is uncertain or likely to be short. There is the added concern of performance; however, SOAP itself would be more of a bottleneck than an object-mapping layer done correctly. Also, Naked Objects applications don't benefit from separation of concerns.
That being said, if you are connecting to a SOAP endpoint these days, you are likely developing an enterprise application that should be built to be around for a few years and enhanced over time; that is, built to accept growing needs. So as far as your pros and cons go, in my experience it comes down to the return on the time invested. From the information you posted here, the extra effort would be beneficial.
Generation can be a great tool when done right. I do a considerable amount of T4 generation in my projects for similar purposes. As far as best practices go, I generate my classes into a 'Generated' sub-namespace and extend them. This way I can extend the functionality and structure without fear of them being overwritten. In the generated classes I mark everything partial and virtual, so that I have options beyond inheritance. This may be overkill to do all at once, but it is something to consider. Leveraging partial classes is another way to modify and extend the generated classes.
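For example, a hedged sketch of the pattern (names invented for illustration):

// CustomerDto.generated.cs -- regenerated by T4 at any time, never edited by hand.
namespace MyApp.Services.Generated
{
    public partial class CustomerDto
    {
        public virtual string FirstName { get; set; }
        public virtual string LastName { get; set; }
    }
}

// CustomerDto.cs -- hand-written extension of the same class, safe from regeneration.
namespace MyApp.Services.Generated
{
    public partial class CustomerDto
    {
        public string FullName
        {
            get { return FirstName + " " + LastName; }
        }
    }
}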
You can even generate the extended/partial classes. I use T4Toolbox to generate external files and use the 'PreserveExistingFile' option to prevent the file from being overwritten. T4Toolbox (if you aren't already using it) offers a great modular way to manage your generation, and can even generate into other projects.
Even if you don’t add a repository layer, I would encourage you to apply the concepts of the Composite and Façade patterns to simplify the interaction with the external service.
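As a rough sketch of that facade idea (BillingServiceClient stands in for a generated SOAP proxy; all names are hypothetical):

// Stand-ins for the generated proxy and its WS type.
public class WsInvoice
{
    public int Id { get; set; }
    public decimal TotalAmount { get; set; }
}

public class BillingServiceClient
{
    public WsInvoice GetInvoice(int id)
    {
        return new WsInvoice { Id = id, TotalAmount = 0m };
    }
}

// The application's own model.
public class Invoice
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// Facade: one central point that calls the WS, validates the
// response, and maps it to the application's own model.
public class BillingFacade
{
    private readonly BillingServiceClient _client = new BillingServiceClient();

    public Invoice GetInvoice(int invoiceId)
    {
        var wsInvoice = _client.GetInvoice(invoiceId);
        return new Invoice { Id = wsInvoice.Id, Total = wsInvoice.TotalAmount };
    }
}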
So in review, best practices in my experience:
Repository:
if you need it to be long-lived and extendable
Generation:
Use a namespace and class naming that makes it clear that the class is generated and will be overwritten.
Create classes that are partials or extend the generated classes for flexibility
T4Toolbox if using T4
modular T4
preserve custom code

.net n-layer website structural advice required

I'm creating my first .NET/C# website using Entity Framework as my data access layer. I've split my project into layers so that I have DataAccess, BusinessLogic, a separate BusinessObjects layer, and the website itself as the UI (Pages/UserControls/App_Code folder). There is also an additional Utilities plugin project.
The EF model has gone in DA, while the entity creation has gone into BO. It all feels good, but I'm having trouble deciding which logic belongs in App_Code (UI) and which belongs in BusinessLogic.
Are there any guidelines that can help me determine which side of the line things go?
App_Code is just a handy convenience for running code. I would advise you to avoid using that folder. Just create class library projects for all your classes; these comprise your business logic layer. In the web project, put only pages and controls (ASCX and ASPX files). It makes the logical separation clearer.
There is a reference implementation from Microsoft Spain, which employs EF, Unity, WCF, etc. Note, though, that this implementation may be over-engineered for your needs. Instead of copying the same structure wholesale, it is better to decide which parts, concepts, and patterns are useful for you and which are not.
Microsoft N Layer Reference Implementation

How to break apart layers in a strict-layered architecture and promote modularity without causing unnecessary redundancy? [closed]

I've received the go-ahead to start building the foundation for a new architecture for our code base at my company. The impetus for this initiative is the fact that:
Our code base is over ten years old and is finally breaking at the seams as we try to scale.
The top "layers", if you want to call them such, are a mess of classic ASP and .NET.
Our database is filled with a bunch of unholy stored procs which contain thousands of lines of business logic and validation.
Prior developers created "clever" solutions that are non-extensible, non-reusable, and exhibit very obvious anti-patterns; these need to be deprecated in short order.
I've been referencing the MS Patterns and Practices Architecture Guide quite heavily as I work toward an initial design, but I still have some lingering questions before I commit to anything. Before I get into the questions, here is what I have so far for the architecture:
(Diagram: high-level architecture)
(Diagram: business and data layers in depth)
The diagrams basically show how I intend to break apart each layer into multiple assemblies. So in this candidate architecture, we'd have eleven assemblies, not including the top-most layers.
Here's the breakdown, with a description of each assembly:
Company.Project.Common.OperationalManagement : Contains components which implement exception handling policies, logging, performance counters, configuration, and tracing.
Company.Project.Common.Security : Contains components which perform authentication, authorization, and validation.
Company.Project.Common.Communication : Contains components which may be used to communicate with other services and applications (basically a bunch of reusable WCF clients).
Company.Project.Business.Interfaces : Contains the interfaces and abstract classes which are used to interact with the business layer from high-level layers.
Company.Project.Business.Workflows : Contains components and logic related to the creation and maintenance of business workflows.
Company.Project.Business.Components : Contains components which encapsulate business rules and validation.
Company.Project.Business.Entities : Contains data objects that are representative of business entities at a high-level. Some of these may be unique, some may be composites formed from more granular data entities from the data layer.
Company.Project.Data.Interfaces : Contains the interfaces and abstract classes which are used to interact with the data access layer in a repository style.
Company.Project.Data.ServiceGateways : Contains service clients and components which are used to call out to and fetch data from external systems.
Company.Project.Data.Components : Contains components which are used to communicate with a database.
Company.Project.Data.Entities : Contains much more granular entities which represent business data at a low level, suitable for persisting to a database or other data source in a transactional manner.
My intent is that this should be a strict-layered design (a layer may only communicate with the layer directly below it) and the modular break-down of the layers should promote high cohesion and loose coupling. But I still have some concerns. Here are my questions, which I feel are objective enough that they are suitable here on SO...
Are my naming conventions for each module and its respective assembly following standard conventions, or is there a different way I should be going about this?
Is it beneficial to break apart the business and data layers into multiple assemblies?
Is it beneficial to have the interfaces and abstract classes for each layer in their own assemblies?
MOST IMPORTANTLY - Is it beneficial to have an "Entities" assembly for both the business and data layers? My concern here is that if you include the classes that will be generated by LINQ to SQL inside the data access components, then a given entity will be represented in three different places in the code base. Obviously tools like AutoMapper may be able to help, but I'm still not 100% sure. The reason I have them broken apart like this is to (A) enforce a strict-layered architecture and (B) promote looser coupling between layers and minimize breakage when the business domain behind each entity changes. However, I'd like to get some guidance from people who are much more seasoned in architecture than I am.
If you could answer my questions or point me in the right direction I'd be most grateful. Thanks.
EDIT:
Wanted to include some additional details that seem more pertinent after reading Baboon's answer. The database tables are also an unholy mess and are quasi-relational, at best. However, I'm not allowed to fully rearchitect the database and do a data clean-up: the furthest down to the core I can go is to create new stored procs and start deprecating the old ones. That's why I'm leaning toward having entities defined explicitly in the data layer: trying to use the classes generated by LINQ to SQL (or any other ORM) as data entities just doesn't seem feasible.
I would disagree with this standard layered architecture in favor of an onion architecture.
With that in mind, I'll take a shot at your questions:
1. Are my naming conventions for each module and its respective assembly following standard conventions, or is there a different way I should be going about this?
Yes, I would agree that it is not a bad convention, and it is pretty much standard.
2. Is it beneficial to break apart the business and data layers into multiple assemblies?
Yes, but I would rather have one assembly called Domain (usually Core.Domain) and another called Data (Core.Data). The Domain assembly contains all the entities (as per domain-driven design) along with repository interfaces, services, factories, etc. The Data assembly references the Domain and implements the concrete repositories, with an ORM.
3. Is it beneficial to have the interfaces and abstract classes for each layer in their own assemblies?
It depends. In the answer to the previous question, I mentioned separating the repository interfaces into the Domain and the concrete repositories into the Data assembly. This gives you a clean Domain without any "pollution" from any specific data access or other technology. Generally, I write my code with TDD in mind, extracting all dependencies from classes to make them more testable, following the SRP, and thinking about what can go wrong when other people on the team use the architecture :) For example, one big advantage of separating into assemblies is that you control your references and clearly state "no data-access code in the domain!".
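A compact sketch of the split (entity and repository names are illustrative; an in-memory dictionary stands in for the ORM):

// Core.Domain assembly: entities and repository interfaces only,
// no references to any data access technology.
namespace Core.Domain
{
    public class Order
    {
        public int Id { get; set; }
    }

    public interface IOrderRepository
    {
        Order GetById(int id);
        void Save(Order order);
    }
}

// Core.Data assembly: references Core.Domain plus the ORM.
namespace Core.Data
{
    using System.Collections.Generic;
    using Core.Domain;

    public class OrderRepository : IOrderRepository
    {
        // An in-memory store stands in for NHibernate/EF calls.
        private readonly Dictionary<int, Order> _store = new Dictionary<int, Order>();

        public Order GetById(int id)
        {
            return _store[id];
        }

        public void Save(Order order)
        {
            _store[order.Id] = order;
        }
    }
}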
4. Is it beneficial to have an "Entities" assembly for both the business and data layers?
I would disagree and say no. You should have your core entities and map them to the database through an ORM. If you have complex presentation logic, you can have something like ViewModel objects, which are basically entities stripped down to just the data suited for representation in the UI. If you have something like a network in between, you can have special DTO objects as well, to minimize network calls. But having separate data and business entities just complicates matters, I think.
One more thing to add here: since you are starting a new architecture for an application that has already existed for 10 years, you should consider a better ORM tool than LINQ to SQL, either Entity Framework or NHibernate (I would opt for NHibernate).
I would also add that answering as many questions as a whole application architecture raises is hard, so try posting your questions separately and more specifically. Each part of the architecture (UI, service layers, domain, security, and other cross-cutting concerns) could fill a multi-page discussion on its own. Also, remember not to over-architect your solution and thereby complicate things even more than needed!
I actually just started the same thing, so hopefully this will help or at least generate more comments and even help for myself :)
1. Are my naming conventions for each module and its respective assembly following standard conventions, or is there a different way I should be going about this?
According to MSDN's Names of Namespaces guidelines, this seems to be OK. They lay it out as:
<Company>.(<Product>|<Technology>)[.<Feature>][.<Subnamespace>]
For example, Microsoft.WindowsMobile.DirectX.
2. Is it beneficial to break apart the business and data layers into multiple assemblies?
I definitely think it's beneficial to break the business and data layers apart into multiple assemblies. However, in my solution I've created just two assemblies (DataLayer and BusinessLayer). The other details, like Interfaces, Workflows, etc., get directories under each assembly. I don't think you need to split them up at that level.
3. Is it beneficial to have the interfaces and abstract classes for each layer in their own assemblies?
Kind of goes along with the above comments.
4. Is it beneficial to have an "Entities" assembly for both the business and data layers?
Yes. I would say that your data entities might not map directly to what your business model will be. When storing the data to a database or other medium, you might need to change things around so they play nice. The entities that you expose to your service layer should be usable by the UI. The entities you use in your Data Access Layer should be usable by your storage medium. AutoMapper is definitely your friend and can help with the mapping, as you mentioned. So this is how it shapes up:
(Diagram of the layered entity mapping; source: microsoft.com)
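Since AutoMapper came up, the mapping step might look roughly like this (type names are hypothetical; AutoMapper's MapperConfiguration API assumed):

using AutoMapper;

var config = new MapperConfiguration(cfg => cfg.CreateMap<PersonData, PersonBusiness>());
var mapper = config.CreateMapper();
PersonBusiness mapped = mapper.Map<PersonBusiness>(new PersonData { Name = "Ada" });

// Hypothetical stand-ins for a data-layer entity and a business entity.
public class PersonData { public string Name { get; set; } }
public class PersonBusiness { public string Name { get; set; } }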
1) The naming is absolutely fine, just as SwDevMan81 stated.
2) Absolutely. If WCF gets outdated in a few years, you'll only have to change your DAL.
3) The rule of thumb is to ask yourself this simple question: "Can I think of a case where I will make smart use of this?".
When talking about your WCF contracts: yes, definitely put those in a separate assembly; it is key to a good WCF design (I'll go into more detail below).
When talking about an interface defined in AssemblyA and implemented in AssemblyB, with the properties/methods described in that interface used in AssemblyC: you are fine as long as every class defined in AssemblyB is used in AssemblyC through an interface. Otherwise, you'll have to reference both A and B: you lose.
4) The only reason I can think of for moving the same-looking object around three times is bad design: the database relations were poorly crafted, and thus you have to tweak the objects that come out in order to have something you can work with.
If you redo the architecture, you can have another assembly, used in pretty much every project, called "Entities", which holds the data objects. By every project, I mean the WCF projects as well.
On a side note, I would add that the WCF service should be split into three assemblies: the ServiceContracts, the Service itself, and the Entities we talked about. I had a good video on that last point, but it's at work; I'll link it tomorrow!
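In outline, that three-assembly split could look like this (names hypothetical; the attributes are the standard WCF ones):

using System.Runtime.Serialization;
using System.ServiceModel;

// Entities assembly.
[DataContract]
public class OrderDto
{
    [DataMember]
    public int Id { get; set; }
}

// ServiceContracts assembly.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderDto GetOrder(int id);
}

// Service assembly.
public class OrderService : IOrderService
{
    public OrderDto GetOrder(int id)
    {
        return new OrderDto { Id = id };
    }
}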
HTH,
bab.
EDIT: here is the video.

DAL design question

I need to design a data access layer (DAL) using .NET Enterprise Library version 3.5's Data Access Application Block (DAAB).
In my application, I have various logical modules like registration, billing, order management, user management, etc.
I am using C# business entities to map the module objects to database tables and then returning List collections to the client.
I would like to design my DAL in such a way that if tomorrow we decide to use some other data access framework, we need only minimal code changes.
Given this, how do I design my class structure?
I thought I would have a class DbManagerBase, which would be a wrapper over the existing .NET DAAB.
This DbManagerBase class would implement an interface called IDbManagerBase, which would have public methods like ExecuteReader, ExecuteNonQuery, etc.
The client classes, i.e. RegistrationDAL and UserManagementDAL, would have the following code inside each of their methods:
IDbManagerBase obj = new DbManagerBase();
obj.ExecuteReader(myStoredProcName);
...
Is this a good OO design? Is there a better approach, or do I need to use inheritance here?
Can I have all the methods in the DbManagerBase class and the RegistrationDAL and UserManagementDAL classes as static? I guess if the methods are static, then the above interface code won't make any sense... right?
To truly abstract the DAL I'd use the repository pattern.
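A minimal sketch of that idea, with hypothetical entity and repository names (an in-memory list stands in for the DAAB calls a real implementation would make):

using System.Collections.Generic;

public class Registration
{
    public int Id { get; set; }
}

public interface IRegistrationRepository
{
    Registration GetById(int id);
    IList<Registration> GetAll();
    void Add(Registration registration);
}

// Switching to another data access framework later means writing a
// new class against this same interface, leaving callers untouched.
public class DaabRegistrationRepository : IRegistrationRepository
{
    // A real implementation would call the DAAB here
    // (DatabaseFactory.CreateDatabase(), ExecuteReader, etc.).
    private readonly List<Registration> _items = new List<Registration>();

    public Registration GetById(int id)
    {
        return _items.Find(r => r.Id == id);
    }

    public IList<Registration> GetAll()
    {
        return _items;
    }

    public void Add(Registration registration)
    {
        _items.Add(registration);
    }
}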
To answer a few of the questions:
Can I have all the methods in the DbManagerBase class and the RegistrationDAL, UserManagementDAL classes as static?
I would probably go with a non-static approach because it gives you the flexibility to better control instantiation of the DALs (e.g. you could create instances of them from a factory), and it allows you to have two DALs in place that talk to different DBs in a cleaner way. Also, you will not need to create an instance of DbManagerBase in every object, since it would be an instance member.
Regarding IDbManagerBase having ExecuteReader and ExecuteNonQuery, and the call obj.ExecuteReader(myStoredProcName):
I would be careful about baking knowledge of database-specific concepts into too many places. Keep in mind that some DBs do not support stored procedures.
Another point: before implementing a DAL of sorts, I would be sure to read through some code in other open-source data access layers like NHibernate or SubSonic. It is entirely possible they would solve your business problem and reduce your dev time significantly.
If you are looking for a small example of a layered DAL architecture, there is my little project on GitHub (it is very basic, but it shows how you can build interfaces to support a lot of esoteric databases).
