I am still fairly new to C# and I am trying to decide the best way to structure a new program. Here is what I want to do, and I would like feedback on the idea.
Presentation Layer
Business Layer (Separate Class Library)
Data Layer (Separate Class Library)
Model Layer (Separate Class Library)
What I am struggling with is whether it is OK to have the classes in the Data Layer and Business Layer inherit from the types I define in the Model Layer. This way I can extend the types as needed in my Business Layer with any new properties I see fit. I might not use every property from the Model type in my Business Layer class, but is that really a big deal? If this isn't clear enough I can try to put together an example.
The general practice is to use encapsulation, not inheritance, for layer transitions. Consider the following two paradigms (if I understand you correctly):
Model/Data Layer:
Customer
Order
Business Layer:
MyCustomer : Customer
MyOrder : Order
versus
Model/Data Layer:
Customer
Order
Business Layer:
MyCustomer (encapsulates Data.Customer)
MyOrder (encapsulates Data.Order)
There are two main issues when going the first (inheritance) route:
When you modify the base (data/model) class, you're forced to change the business class.
Getting object relationships right is difficult and generally requires a non-polymorphic approach. That is, if the model or data layer exposes a collection of Orders on a Customer object, it's difficult and "kludgy" to get your MyCustomer class to expose a collection of MyOrder objects instead.
Utilizing encapsulation deals with both of these issues, and is definitely the route I'd recommend.
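To make the second paradigm concrete, here's a minimal sketch of the encapsulation route (the specific properties are just invented for the example):

using System.Collections.Generic;
using System.Linq;

// Data/model layer
public class Order
{
    public int Id { get; set; }
}

public class Customer
{
    public string Name { get; set; }
    public List<Order> Orders { get; set; } = new List<Order>();
}

// Business layer: wrap the data object instead of inheriting from it
public class MyOrder
{
    private readonly Order _order;
    public MyOrder(Order order) { _order = order; }
    public int Id => _order.Id;
}

public class MyCustomer
{
    private readonly Customer _customer;
    public MyCustomer(Customer customer) { _customer = customer; }

    // Expose only what the business layer needs, delegating to the wrapped object
    public string Name => _customer.Name;

    // The relationship problem is solved by projecting, not by overriding
    public IEnumerable<MyOrder> Orders => _customer.Orders.Select(o => new MyOrder(o));
}

Note how MyCustomer can freely add or hide members without being affected by changes to the Customer base shape, and the Orders collection comes out as MyOrder objects naturally.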
Judging by your name, I'm assuming you're looking to write a WPF application. If that's the case, look into the Model-View-ViewModel (MVVM) design pattern.
Whether to put the classes of each layer in a separate library depends on whether you will need to write more applications in the future that use these same classes. You might think you need that and be tempted to start doing it right away, but from my experience, I start to have better ideas after coding and having at least some kind of proof of concept. Organizing code (refactoring, splitting into libraries, and so on) partway through the project is easy; trying to do it right before going live is risky and not a good idea.
Regarding the remaining parts of your question, I would refer you to what I learned from Martin Fowler, as I can't say it better than him; if I tried, I would just be repeating after him:
http://martinfowler.com/eaaCatalog/serviceLayer.html
http://martinfowler.com/eaaCatalog/
I think an example would be good - I don't think what you're suggesting is necessarily bad, but it depends on what you're trying to achieve.
Why are you inheriting from classes in your model layer? What is the interface and purpose of the specific model classes, and what is the purpose of the business logic classes which are inheriting from your model classes?
Does inheritance really make sense?
Would inheritance violate Liskov's substitution principle?
EDIT:
wpfwannabe, I would be careful about using inheritance just because it seems to be the easy option. I specifically mentioned the Liskov substitution principle because it deals with determining when inheritance is valid.
In addition, you may also want to look at the 'SOLID' design principles for OO development (which includes Liskov's substitution principle):
http://en.wikipedia.org/wiki/Solid_(object-oriented_design)
I am confused about cyclic references and the communication patterns between view and controller (logic) classes.
The image below is a simple UML diagram in which the logic and view classes hold references to each other and can talk to each other freely.
Cyclic reference UML example
But as far as I know, cyclic references are bad, very bad. My friend showed me the UML below as a solution to overcome this issue: we can just have these classes implement interfaces and reference the interfaces instead of the concrete classes. He says this doesn't count as cyclic referencing, but in my opinion it's just an illusion.
Basic solution UML example
Back to the questions:
Is coupling classes still bad if they form cohesive little structures within the big chunk of our code?
Can the second UML be evaluated as a better design than the first one? And is it still cyclic referencing?
Even for these two cohesively coupled classes (decoupled from all of the other classes in the codebase), should I implement a better solution?
I've read this blog post, which offers better solutions: https://www.sebaslab.com/the-truth-behind-inversion-of-control-part-i-dependency-injection/ , in the "Object Communication and Dependency Injection" section. The author mentions four different communication methods; the second UML is, I think, the first one he mentions.
Do you have a better solution, both for view-logic object communication and for object communication across the whole codebase?
I would not say that cyclic dependencies are always bad. They imply strong coupling between components, but in some cases it can be useful to have multiple strongly coupled classes instead of a single, much larger class.
If you want to reduce coupling, interfaces are the typical way to go. A single interface might be sufficient, i.e. either IController or IView; that should be enough to remove any cycles.
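As a rough sketch of that (all names invented for illustration), extracting an IView on just one side is enough to remove the class-level cycle:

// The controller depends on this abstraction instead of the concrete view
public interface IView
{
    void ShowScore(int score);
}

public class Controller
{
    private IView _view;  // attached after construction to avoid a chicken-and-egg problem

    public void AttachView(IView view) => _view = view;

    public void OnScoreChanged(int score) => _view?.ShowScore(score);
}

public class View : IView
{
    private readonly Controller _controller;

    public View(Controller controller) => _controller = controller;

    public void ShowScore(int score)
    {
        // update the UI here
    }
}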
However, the common model when writing things like UIs is to follow Model-View-ViewModel, where the view references the view model, which in turn references the model. This does not really remove the circular references, since the view needs to know when the view model has changed, but that link is abstracted through the use of events.
Events and delegates can often be used as a lighter-weight alternative to interfaces. There are also patterns using event managers, where you can send and listen to events of a specific type without needing direct references between all the objects, but I'm somewhat skeptical of this, since I suspect it could be difficult to get an overview of which objects send and listen to a given event type.
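For instance, here's a minimal sketch of the plain event/delegate approach (names invented): the logic raises an event and never references the view type, so only one direction of the reference remains:

using System;

public class ScoreLogic
{
    // The logic class raises an event and holds no reference to any view type
    public event Action<int> ScoreChanged;

    private int _score;

    public void AddPoints(int points)
    {
        _score += points;
        ScoreChanged?.Invoke(_score);
    }
}

public class ScoreView
{
    public void Bind(ScoreLogic logic)
    {
        // One-directional reference: the view knows the logic, not vice versa
        logic.ScoreChanged += score => Console.WriteLine($"Score: {score}");
    }
}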
In the end it usually depends on what specifically you want to do. Events are good when you want fairly loose coupling, but they are not well suited for complex interactions. For such cases interfaces might be better, but they do not really add much if the classes are strongly coupled anyway. As a rule of thumb when adding an interface, ask yourself: what other implementations of this interface will be made? If the answer is none, then don't add it, at least not for the moment.
I would also not put too much stock in technical interviewers, since poor interview questions seem fairly common.
I've received the go-ahead to start building the foundation for a new architecture for our code base at my company. The impetus for this initiative is the fact that:
Our code base is over ten years old and is finally breaking at the seams as we try to scale.
The top "layers", if you want to call them such, are a mess of classic ASP and .NET.
Our database is filled with a bunch of unholy stored procs which contain thousands of lines of business logic and validation.
Prior developers created "clever" solutions that are non-extensible, non-reusable, and exhibit very obvious anti-patterns; these need to be deprecated in short order.
I've been referencing the MS Patterns and Practices Architecture Guide quite heavily as I work toward an initial design, but I still have some lingering questions before I commit to anything. Before I get into the questions, here is what I have so far for the architecture:
(High-level)
(Business and Data layers in depth)
The diagrams basically show how I intend to break apart each layer into multiple assemblies. So in this candidate architecture, we'd have eleven assemblies, not including the top-most layers.
Here's the breakdown, with a description of each assembly:
Company.Project.Common.OperationalManagement : Contains components which implement exception handling policies, logging, performance counters, configuration, and tracing.
Company.Project.Common.Security : Contains components which perform authentication, authorization, and validation.
Company.Project.Common.Communication : Contains components which may be used to communicate with other services and applications (basically a bunch of reusable WCF clients).
Company.Project.Business.Interfaces : Contains the interfaces and abstract classes which are used to interact with the business layer from high-level layers.
Company.Project.Business.Workflows : Contains components and logic related to the creation and maintenance of business workflows.
Company.Project.Business.Components : Contains components which encapsulate business rules and validation.
Company.Project.Business.Entities : Contains data objects that are representative of business entities at a high-level. Some of these may be unique, some may be composites formed from more granular data entities from the data layer.
Company.Project.Data.Interfaces : Contains the interfaces and abstract classes which are used to interact with the data access layer in a repository style.
Company.Project.Data.ServiceGateways : Contains service clients and components which are used to call out to and fetch data from external systems.
Company.Project.Data.Components : Contains components which are used to communicate with a database.
Company.Project.Data.Entities : Contains much more granular entities which represent business data at a low level, suitable for persisting to a database or other data source in a transactional manner.
My intent is that this should be a strict-layered design (a layer may only communicate with the layer directly below it) and the modular break-down of the layers should promote high cohesion and loose coupling. But I still have some concerns. Here are my questions, which I feel are objective enough that they are suitable here on SO...
Are my naming conventions for each module and its respective assembly following standard conventions, or is there a different way I should be going about this?
Is it beneficial to break apart the business and data layers into multiple assemblies?
Is it beneficial to have the interfaces and abstract classes for each layer in their own assemblies?
MOST IMPORTANTLY - Is it beneficial to have an "Entities" assembly for both the business and data layers? My concern here is that if you include the classes that will be generated by LINQ to SQL inside the data access components, then a given entity will be represented in three different places in the code base. Obviously tools like AutoMapper may be able to help, but I'm still not 100% sure. The reason I have them broken apart like this is to (a) enforce a strict-layered architecture and (b) promote looser coupling between layers, minimizing breakage when the business domain behind each entity changes. However, I'd like to get some guidance from people who are much more seasoned in architecture than I am.
If you could answer my questions or point me in the right direction I'd be most grateful. Thanks.
EDIT:
Wanted to include some additional details that seem to be more pertinent after reading Baboon's answer. The database tables are also an unholy mess and are quasi-relational, at best. However, I'm not allowed to fully rearchitect the database and do a data clean-up: the furthest down to the core I can go is to create new stored procs and start deprecating the old ones. That's why I'm leaning toward having entities defined explicitly in the data layer--to try to use the classes generated by LINQ to SQL (or any other ORM) as data entities just doesn't seem feasible.
I would disagree with this standard layered architecture in favor of an onion architecture.
With that in mind, I'll take a stab at your questions:
1. Are my naming conventions for each module and its respective assembly following standard conventions, or is there a different way I should be going about this?
Yes, I would agree that it is not a bad convention, and it's pretty much standard.
2. Is it beneficial to break apart the business and data layers into multiple assemblies?
Yes, but I would rather have one assembly called Domain (usually Core.Domain) and another called Data (Core.Data). The Domain assembly contains all the entities (as per domain-driven design) along with repository interfaces, services, factories, etc. The Data assembly references the Domain and implements the concrete repositories with an ORM.
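A minimal sketch of that split (the entity and repository names are just placeholders):

using System;

// Core.Domain: entities and repository contracts, with no data-access references
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public interface IOrderRepository
{
    Order GetById(int id);
    void Save(Order order);
}

// Core.Data: references Core.Domain and implements the contracts with an ORM
public class OrderRepository : IOrderRepository
{
    public Order GetById(int id)
    {
        // e.g. an NHibernate session.Get<Order>(id) or an EF query would go here
        throw new NotImplementedException();
    }

    public void Save(Order order)
    {
        throw new NotImplementedException();
    }
}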
3. Is it beneficial to have the interfaces and abstract classes for each layer in their own assemblies?
That depends on various factors. In the answer to the previous question, I mentioned separating the repository interfaces into the Domain assembly and the concrete repositories into the Data assembly. This gives you a clean Domain without any "pollution" from specific data-access or other technology. Generally, I base my code on TDD-oriented thinking, extracting all dependencies from classes to make them more reusable, following the SRP, and considering what can go wrong when other people on the team use the architecture :) For example, one big advantage of separating into assemblies is that you control your references and clearly state "no data-access code in the domain!".
4. Is it beneficial to have an "Entities" assembly for both the business and data layers?
I would disagree and say no. You should have your core entities and map them to the database through an ORM. If you have complex presentation logic, you can have something like ViewModel objects, which are basically entities dumbed down to just the data suited for representation in the UI. If you have something like a network in between, you can have special DTO objects as well, to minimize network calls. But I think having separate data and business entities just complicates the matter.
One thing to add here as well: if you are starting a new architecture, and you are talking about an application that has already existed for 10 years, you should consider better ORM tools than LINQ to SQL, either Entity Framework or NHibernate (I would opt for NHibernate).
I would also add that answering as many questions as a whole application architecture raises is hard, so try posting your questions separately and more specifically. Each part of the architecture (UI, service layers, domain, security and other cross-cutting concerns) could fill a multi-page discussion of its own. Also, remember not to over-architect your solution and thereby complicate things even more than needed!
I actually just started the same thing, so hopefully this will help or at least generate more comments and even help for myself :)
1. Are my naming conventions for each module and its respective assembly following standard conventions, or is there a different way I should be going about this?
According to MSDN Names of Namespaces, this seems to be ok. They lay it out as:
<Company>.(<Product>|<Technology>)[.<Feature>][.<Subnamespace>]
For example, Microsoft.WindowsMobile.DirectX.
2. Is it beneficial to break apart the business and data layers into multiple assemblies?
I definitely think it's beneficial to break apart the business and data layers into multiple assemblies. However, in my solution I've created just two assemblies (DataLayer and BusinessLayer). The other details, like Interfaces, Workflows, etc., I would create as directories under each assembly. I don't think you need to split them up at that level.
3. Is it beneficial to have the interfaces and abstract classes for each layer in their own assemblies?
Kind of goes along with the above comments.
4. Is it beneficial to have an "Entities" assembly for both the business and data layers?
Yes. I would say that your data entities might not map directly to what your business model will be. When storing the data to a database or other medium, you might need to change things around to make it play nicely. The entities that you expose to your service layer should be usable by the UI. The entities you use in your Data Access Layer should be usable by your storage medium. AutoMapper is definitely your friend and can help with mapping, as you mentioned. So this is how it shapes up:
(diagram omitted; source: microsoft.com)
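As a rough sketch of the mapping step (the entity shapes are invented, and the exact configuration API varies between AutoMapper versions; this uses the MapperConfiguration style):

using AutoMapper;

// Data-layer entity, shaped for the storage medium
public class CustomerData
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// Business entity, shaped for the service layer and UI
public class Customer
{
    public int Id { get; set; }
    public string FullName { get; set; }
}

public static class EntityMapping
{
    public static Customer ToBusinessEntity(CustomerData data)
    {
        var config = new MapperConfiguration(cfg =>
            cfg.CreateMap<CustomerData, Customer>()
               .ForMember(c => c.FullName,
                          opt => opt.MapFrom(d => d.FirstName + " " + d.LastName)));

        return config.CreateMapper().Map<Customer>(data);
    }
}

In a real application you would build the MapperConfiguration once at startup rather than on every call; it's inlined here only to keep the sketch self-contained.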
1) The naming is absolutely fine, just as SwDevMan81 stated.
2) Absolutely. If WCF gets outdated in a few years, you'll only have to change your DAL.
3) The rule of thumb is to ask yourself this simple question: "Can I think of a case where I will make smart use of this?".
When talking about your WCF contracts, yes, definitely put those in a separate assembly: it is key to a good WCF design (I'll go into more details).
When talking about an interface defined in AssemblyA and implemented in AssemblyB, with the properties/methods described in the interface used in AssemblyC, you are fine as long as every class defined in AssemblyB is used in AssemblyC through an interface. Otherwise, you'll have to reference both A and B: you lose.
4) The only reason I can think of for actually moving the same-looking object around three times is bad design: the database relations were poorly crafted, and thus you have to tweak the objects that come out in order to have something you can work with.
If you redo the architecture, you can have another assembly, used in pretty much every project, called "Entities", that holds the data objects. By every project, I mean the WCF service as well.
On a side note, I would add that the WCF service should be split into 3 assemblies: the ServiceContracts, the Service itself, and the Entities we talked about. I have a good video on that last point, but it's at work; I'll link it tomorrow!
HTH,
bab.
EDIT: here is the video.
I have a project structure like so:
CentralRepository.BL
CentralRepository.BO
CentralRepository.DataAccess
CentralRepository.Tests
CentralRepository.Webservices
and there are an awful lot of dependencies between them. I want to leverage Unity to reduce the dependencies, so I'm going to create interfaces for my classes. My question is: which project should the interfaces reside in? My thought is they should be in the BO layer. Can someone give me some guidance on this, please?
On a combinatorial level, you have three options:
Define interfaces in a separate library
Define interfaces together with their consumers
Define interfaces together with their implementers
However, the last option is a really bad idea because it tightly couples the interface to the implementer (or the other way around). Since the whole point of introducing an interface in the first place is to reduce coupling, nothing is gained by doing that.
Defining the interface together with the consumer is often sufficient, and personally I only take the extra step of defining the interface in a separate library when disparate consumers are in play (which mostly tends to happen when you're shipping a public API).
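A minimal sketch of that second option, using hypothetical names based on the question's structure: the consuming library (BL) owns the contract it depends on, and the implementing library (DataAccess) references it:

// In the consumer's library (say, CentralRepository.BL):
// the consumer declares the contract it depends on
public interface IRepository
{
    void Save(object entity);
}

public class ImportService
{
    private readonly IRepository _repository;

    public ImportService(IRepository repository) => _repository = repository;

    public void Import(object entity) => _repository.Save(entity);
}

// In CentralRepository.DataAccess, which references BL and fulfils the contract
public class SqlRepository : IRepository
{
    public void Save(object entity)
    {
        // persist the entity to SQL here
    }
}

Note the dependency arrow now points from DataAccess to BL, not the other way around, which is exactly what lets Unity wire the concrete repository in at composition time.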
BO is essentially your domain objects, or at least that is my assumption. In general, unless you are using a pattern like ActiveRecord, they are state objects only. An interface, on the other hand, specifies behavior. Mixing behavior and state is, by many "best practices", not a good idea. Now I will likely ramble a bit, but I think the background may help.
Now, to the question of where interfaces should exist. There are a couple of choices.
Stick the interfaces in the library they belong to.
Create a separate contract library
The simpler option is to stick them in the same library, but then your mocks rely on that library, as do your tests. Not a huge deal, but it has a slight odor to it.
My normal method is to set up projects like this:
{company}.{program/project}.{concern (optional)}.{area}.{subarea (optional)}
The first two to three bits of the name are covered in yours by the word "CentralRepository". In my case it would be MyCompany.CentralRepository or MyCompany.MyProgram.CentralRepository, but naming convention is not the core part of this post.
The "area" portions are the thrust of this post, and I generally use the following.
Set up a domain object library (your BO): CentralRepository.Domain.Models
Set up a domain exception library: CentralRepository.Domain.Exceptions
All/most other projects reference the above two, as they represent the state in the application. Certainly ALL business libraries use these objects. The persistence library (or libraries) may have a different model, and I may have a view model in the experience library (or libraries).
Set up the core library next: CentralRepository.Core (may have subareas?). This is where the business logic lives (the actual application, as persistence and experience changes should not affect core functionality).
Set up a test library for core: CentralRepository.Core.Test.Unit.VS (I have Unit.VS to show these are unit tests, not integration tests with a unit test library, and I am using VS to indicate MSTest - others will have different naming).
Create tests and then set up the business functionality. As needed, set up interfaces. Example:
You need data from a DAL, so an interface and a mock are set up for the data used in the Core tests. The name here would be something like CentralRepository.Persist.Contracts (you may also use a subarea if there are multiple types of persistence).
The core concept here is "Core as Application" rather than n-tier (they are compatible, but thinking of business logic only, as a paradigm, keeps you loosely coupled with persistence and experience).
Now, back to your question. The way I set up interfaces is based on the location of the "interfaced" classes. So, I would likely have:
CentralRepository.Core.Contracts
CentralRepository.Experience.Service.Contracts
CentralRepository.Persist.Service.Contracts
CentralRepository.Persist.Data.Contracts
I am still working through this, but the core concept is that my IoC and testing should both be considered, and I should be able to isolate testing, which is better achieved if I can isolate the contracts (interfaces). Logical separation is fine (a single library), but I don't generally head that way, due to having at least a couple of green developers who find it difficult to see logical separation without physical separation. Your mileage may vary. :-0
Hope this rambling helps in some way.
I would suggest keeping interfaces wherever their implementers are in the majority of cases, if you're talking assemblies.
Personally, when I'm using a layered approach, I tend to give each layer its own assembly and give it a reference to the layer below it. In each layer, most of the public things are interfaces. So, in the data access layer, I might have ICustomerDao and IOrderDao as public interfaces. I'll also have public DAO factories in the DAO assembly. I'll then have specific implementations marked as internal -- CustomerDaoMySqlImpl or CustomerDaoXmlImpl -- that implement the public interface. The public factory then provides implementations to users (i.e. the domain layer) without the users knowing exactly which implementation they're getting: they provide information to the factory, and the factory turns around and hands them an ICustomerDao that they use.
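Here's a hedged sketch of that arrangement (the Customer type and the factory's selection logic are invented for the example):

public class Customer { /* domain state */ }

// Public contract exposed by the data access assembly
public interface ICustomerDao
{
    Customer GetById(int id);
}

// Concrete implementations stay internal to the assembly
internal class CustomerDaoMySqlImpl : ICustomerDao
{
    public Customer GetById(int id) { /* query MySQL */ return new Customer(); }
}

internal class CustomerDaoXmlImpl : ICustomerDao
{
    public Customer GetById(int id) { /* read the XML store */ return new Customer(); }
}

// The public factory hands callers an ICustomerDao without revealing which one
public static class CustomerDaoFactory
{
    public static ICustomerDao Create(string backingStore) =>
        backingStore == "xml"
            ? (ICustomerDao)new CustomerDaoXmlImpl()
            : new CustomerDaoMySqlImpl();
}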
The reason I mention all this is to lay the foundation for understanding what interfaces are really supposed to be -- contracts between the servicer and client of an API. As such, from a dependency standpoint, you want to define the contract generally where the servicer is. If you define it elsewhere, you're potentially not really managing your dependencies with interfaces and instead just introducing a non-useful layer of indirection.
So anyway, I'd say think of your interfaces as what they are -- a contract to your clients as to what you're going to provide, while keeping private the details of how you're going to provide it. That's probably a good heuristic that will make it more intuitive where to put the interfaces.
Forgive the title, it's simpler than it sounds.
I've got a class, StickerBook. It contains some stickers, a List<Sticker>.
When it's time to see if there are any stickers waiting to be added, should StickerBook.CheckForNewStickers() handle the logic of looking for them and then adding them, or should a new class, NewStickerChecker, check for and then add them to the StickerBook?
A pretty basic concept I know, I just can't wrap my head around it.
Pretty sure there's no right or wrong answer to this. Either way, you seem to be encapsulating the logic rather than making the internals of the class (the List) open to all, which is generally a good thing.
Putting logic into a separate service class (and having this class implement an interface) could make unit testing easier if you then pass this interface to other pieces of code as a dependency. I think it's conceptually easier to mock up a service class and pass around the actual simple entities during testing rather than mock up entities with complex logic on them.
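For example, a minimal sketch of the service-class route (the interface and member names are invented): the checker can be mocked in tests while StickerBook stays a plain entity:

using System.Collections.Generic;

public class Sticker { }

public class StickerBook
{
    private readonly List<Sticker> _stickers = new List<Sticker>();

    public void AddStickers(IEnumerable<Sticker> stickers) => _stickers.AddRange(stickers);
}

// The lookup logic lives behind an interface so it can be mocked in unit tests
public interface INewStickerChecker
{
    IReadOnlyList<Sticker> CheckForNewStickers();
}

public class StickerUpdater
{
    private readonly INewStickerChecker _checker;

    public StickerUpdater(INewStickerChecker checker) => _checker = checker;

    public void Update(StickerBook book) => book.AddStickers(_checker.CheckForNewStickers());
}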
Well, it depends on your general approach to structuring the code and modelling objects. That said, the Single Responsibility Principle states that a class should have only one reason to change, so I would favor NewStickerChecker as a separate class.
There is no black and white here, and this can be subjective. In my experience, if there is an entity which contains a collection of sub-entities (like in your case, where the book contains a list of stickers), you could load them inside the entity.
After all, OOP says that classes contain data and the logic required to manipulate that data ;-)
P.S. Just as a disclaimer: you are talking about entities in the spirit of Business Entities, not in the Entity Framework sense (... not that it would change much anyway, but just for the question's tagging).
As I dig further into MVVM and MVVM Light, I have noticed that MVVM Light does not provide a base class for models.
But from my understanding, messaging and raising notifications could also happen in a model. At least for communication between models, I would find messaging very handy.
So I just decided to derive my Model from ViewModelBase, even though some of its properties (like the design-time ones) will be unused.
But the more I am looking at this, the more I think I have missed something. Is it considered "bad practice" to derive my Models from ViewModelBase?
And is it ok to use Messaging for Model communication?
Derive your view-model classes from whatever you like... MVVM Light offers ViewModelBase to provide an implementation of ICleanup, which is good for managing the life-cycle of ViewModel objects. My own choice has been to implement all the scaffolding for property change notifications in a base class, then derive from that for model classes. About the only strong suggestions I have regarding model classes are:
One size doesn't fit all. How you store your data may be different from how you interact with it; ViewModel objects should be geared towards supporting interaction, not storage. If you need to interact with the same data (Model) in two very different ways, then design two different ViewModels to support those different interactions.
Do use attributes (a la System.ComponentModel) to annotate the models; see the sketch after this list. You can get a lot of your validation work done this way - and validation feedback is the responsibility of the presentation layer (View + ViewModel), not the problem domain (i.e. the Model).
Really good ViewModel classes are also usually stateless enough that they can be recycled/reused within a single user interaction, such that large data lists can be virtualized (WPF supports virtualization) to save RAM.
Remember DRY (Don't Repeat Yourself), KISS (Keep It Simple, Stupid!) and YAGNI (You Ain't Gonna Need It) - these are the principles you should keep in mind above any academic design principles. I've literally wasted weeks on a WPF app implementing academically perfect MVC/MVVM patterns, only to find that they detracted from the overall understandability of the finished solution. So... keep it simple! :)
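Picking up the attribute suggestion from the list above, here's a minimal sketch using System.ComponentModel.DataAnnotations (the model and its rules are invented; how the attributes are enforced depends on the validation framework you pair them with):

using System.ComponentModel.DataAnnotations;

public class CustomerModel
{
    // Declarative validation rules live on the model itself...
    [Required(ErrorMessage = "A name is required.")]
    [StringLength(100)]
    public string Name { get; set; }

    [Range(0, 120)]
    public int Age { get; set; }

    // ...while presenting the validation feedback stays the View/ViewModel's job.
}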
I would take a look at the EventAggregator in the Composite Application Library. The answer in this post has a good description of it. Jeremy Miller's post goes into a bit more detail.