Software design only with interfaces? - C#

Is it a good approach in software design for class interactions to be described only with interfaces? If so, should I always use this approach?
I have to design a class library that needs high testability (I use C#).
This library has one facade and a number of classes with different interactions behind it.
To optimize the library for testability I've replaced most of my classes with interfaces.
When I did this, the connection diagram (Visual Studio class diagram) showed only interfaces.
Is this a reasonable solution to my problem, or is there another approach I should take?
P.S. Maybe this is a well-known technique in software design, but I can't find confirmation in the books I have.

Yes, this is good practice. It allows you to focus on the responsibilities of each class without getting concerned with implementation details. It allows you to see the method call stack and, as you say, gives a high level of testability and maintainability. You're on the right track as far as I can see :)
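For example, here is a minimal sketch (all type names are hypothetical, not from the question) of a facade that depends only on an interface, so a unit test can substitute a fake implementation:

    // The facade depends on an abstraction, never on a concrete class.
    public interface IOrderValidator
    {
        bool IsValid(string orderId);
    }

    public class OrderFacade
    {
        private readonly IOrderValidator _validator;

        // The concrete validator is supplied from outside (constructor injection).
        public OrderFacade(IOrderValidator validator)
        {
            _validator = validator;
        }

        public string Submit(string orderId)
        {
            return _validator.IsValid(orderId) ? "accepted" : "rejected";
        }
    }

    // In a unit test, a trivial stub stands in for the real implementation.
    public class AlwaysValidStub : IOrderValidator
    {
        public bool IsValid(string orderId) => true;
    }

In a class diagram the facade's dependencies then show up as interfaces only, which is exactly the effect described in the question.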

Yes, that is generally a good practice.
I would recommend reading a good design patterns book, for example this one.
It is targeted at Java developers, but as a C# developer I had no trouble understanding all the examples.

By using interfaces you can decompose your application into subsystems to keep it maintainable and easily expandable. Some use cases:
An application may need to communicate with more than one web service endpoint to fulfill the same function, such as direct billing or payment interfaces from different providers (sketched after this list).
A data access layer class that executes SQL against different databases with different drivers.
Processing different objects that implement the same interface using the same thread pool and the same queue.
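As a rough illustration of the payment use case (the provider classes and methods here are invented for the example, not a real API):

    // Two providers behind one interface; the caller never knows which one it gets.
    public interface IPaymentProvider
    {
        bool Charge(decimal amount, string accountId);
    }

    public class DirectBillingProvider : IPaymentProvider
    {
        public bool Charge(decimal amount, string accountId)
        {
            // Call the direct-billing endpoint here.
            return true;
        }
    }

    public class CreditCardProvider : IPaymentProvider
    {
        public bool Charge(decimal amount, string accountId)
        {
            // Call the card processor's endpoint here.
            return true;
        }
    }

    public class CheckoutService
    {
        private readonly IPaymentProvider _provider;

        public CheckoutService(IPaymentProvider provider) => _provider = provider;

        public bool Pay(decimal amount, string accountId) => _provider.Charge(amount, accountId);
    }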

Related

Designing the database access layer

I consider myself quite amateur when it comes to designing a system's architecture, and I currently find myself in the process of doing just that.
Particularly, I am trying to come up with an efficient and maintainable way to re-implement all classes that have methods/functions that query the database to read data, then send it upstream for another layer to process it, and finally receive the processed data to write it back to the database.
Surely this generic problem has already been solved. I intend to follow a DDD approach, so that the methods accessing the database are part of an "Infrastructure" layer. Is there an optimal way of designing a system (or structure of classes) to accomplish this? Should I have just one gateway to read/write from the database that all classes refer to, or should each component have its own way of communicating with the database? Is there a standard approach to this?
I am mindful that the question might be a bit broad, but for the experts out there surely you have gone through this and are able to help.
The following items should be considered:
DAO pattern - Create a DAO layer of DAO objects (one per domain object) that an upper layer (such as a service layer) can make use of; a sketch follows this list.
Architecture - If you are thinking about a micro-services architecture, then UI, service, DB access (DAO) and DB together form a single deployable unit. The design patterns you choose will therefore be aligned with the chosen architectural approach.
API Gateway - An API gateway (aligned with the architectural approach). Think about functional use cases while designing APIs rather than just providing CRUD operations or technology-specific APIs.
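A minimal sketch of the DAO idea (the Employee types and method names are illustrative assumptions, not taken from the question):

    // One DAO per domain object, consumed by a service layer.
    public class Employee
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface IEmployeeDao
    {
        Employee FindById(int id);
        void Save(Employee employee);
    }

    public class EmployeeService
    {
        private readonly IEmployeeDao _dao;

        public EmployeeService(IEmployeeDao dao) => _dao = dao;

        public void Rename(int id, string newName)
        {
            var employee = _dao.FindById(id);
            employee.Name = newName;
            _dao.Save(employee);
        }
    }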
As an accepted practice, your presentation layer, business logic and data access (some call it the back-end layer) should be fairly well separated. Microsoft's MVC is a sensible example of a dedicated effort to achieve this, and AngularJS from Google is an example of MVVM aimed at writing clean code on the client side, as opposed to ASP.NET MVC on the server side. So the best practices are clearly established here.
As for your question, designing something that qualifies as a highly optimized design does not require one particular way; it requires the experience of knowing many ways and the wisdom to choose one or several of them, or even to mix them, to suit your needs.
As for your question about a gateway to data access: maintaining multiple connections is a very resource-consuming job, so ideally a single static data-access class in which a single connection is maintained is an appropriate approach, and all the data-serving and data-manipulating methods should be put there. In web development we are familiar with web services exposing objects with operation/message contracts. Separation and encapsulation are the keys here.
But be aware that nothing is perfect: although ASP.NET MVC boasts the best methodology for separating all these layers, it has Razor to contradict it, and yet Razor is necessary. Being paranoid about tightly coupled back and front ends, or about spaghetti code, only matters when it suits your needs. In the end only experience can teach us the optimal way or ways to do something. That's the returned value of my answer!

How to implement a maintainable and loosely coupled application using DDD and SRP?

The reason for asking this question is that I've been wondering how to stitch all these different concepts together. There are many examples of and discussions on DDD, Dependency Injection, CQRS, SOA and MVC, but not so many examples of how to put them all together in a flexible way.
My goal:
Develop modules that with little or no modification can stand on their own
Changing or reworking the UI should be as easy as possible (i.e. the UI should do as little as possible, and be "stupid")
Use documented patterns and principles
To make it easier to ask a concrete question, the main architecture now looks like this:
The example shows how to add a note to an employee. Employee Management is one bounded context. Employee has several properties, among them an ICollection<Note>.
The bounded context is, in my understanding, the logical place to separate code. Each BC is a module. Most of the time I find each of them can warrant its own UI if needed (i.e. some modules might be made available for Windows Phone).
The Domain holds all business logic.
The infrastructure holds the repository implementations, and services to send mail, save files and other utilities that do not belong in the domain. I'm thinking of making some of the common service features that I have to use in several domains (like sending e-mail) into a sort of API that I can reference, to save implementing the same things across several BCs.
The query layer holds all queries except the GetById that I need in the repository to fetch an object. The query layer can query other persistence instances, and will probably need to change somewhat for each UI.
The Wcf or Web Api is kind of my application layer; it might belong in the infrastructure rather than on the outside. This service also sets up the dependencies, so all the UI needs to do is ask for information and send commands.
The process starts with the blue arrows. Read the model since that has most of the information.
In step 1, the EmployeeDto in this example contains just some of the employee's properties, to show the user information about the employee they need to make a note on (like a note about new experience or something like that).
So, the questions are:
Does implementing a layered architecture like this really involve so much mapping, or have I missed something?
Is it recommended (or even smart) to use a Wcf service to run the main logic like this (it practically is my Application Service)?
Are there alternatives to Wcf without having my domain objects in my UI layer?
Is there anything wrong with this implementation? Any pitfalls to look out for?
Do you have any good examples to recommend looking at that can help me understand how all these concepts are supposed to work together?
Update:
I've read through most of the articles now (quite a bit of reading), except for the paid book (that requires a bit more time). All of them are very good pointers, and the way of thinking of the Wcf service more as an adapter seems to be a good answer to question 2. JGauffin's work on his framework is also very interesting if I'm planning to go that route.
However, as mentioned in some of the comments below, I feel some of the examples tend towards recommending or implementing event and/or command sourcing, message buses and so on. To me it is overkill to plan for that level of scaling right now. Like many business applications, this one has a "large" (in terms of an internal application, think at most a few thousand) number of users working on a large set of data; it is not a highly collaborative domain in the sense of needing the event and command queues often associated with CQRS to cope with that.
Based on the answers below, the approach I'll start with will be based on the model above and the answers like this:
I'll just have to cope with the mapping. The pros outweigh the cons.
I'll pull the application services back into the infrastructure and consider Wcf as an "adapter".
I'll use command objects and send them to the application service, not polluting my domain with domain objects.
To keep complexity down I'll try to manage without event/command sourcing, message buses etc. for now.
In addition I just wanted to link to this blog post by Udi Dahan about CQRS, I think things like this keeps complexity down unless they are really needed.
There is a trade-off between mapping and layers. One reason certain mappings exist is because appropriate abstractions aren't available or feasible. As a result, it is often easier to just explicitly map between layers than trying to implement a framework that infers the mappings, but I digress; this hinges on a philosophical discussion of the issue.
The WCF or WebAPI service should be very thin. Think of it as an adapter in a hexagonal architecture. It should delegate everything to an application service. There is conflation of the term service which causes confusion. Overall, the goal of WCF or WebAPI is to "adapt" your domain to a specific technology such as HTTP. WCF can be thought of as implementing an open host service in DDD lingo.
You mentioned WebAPI which is an alternative if you want HTTP. Most importantly, be aware of the role of this adapting layer. As you state, it is best to have the UI depend on DTOs and generally the contract of a service implemented with WCF or WebAPI or anything else. This keeps things simple and allows you to vary implementation of your domain without affecting consumers of open host services.
You should always be on the lookout for needless complexity. Layering is a trade-off and sometimes it can be overkill. For example, in an app that is primarily CRUD, there is no need to layer this much. Also, as stated above, don't think of WCF services as being application services. Instead, think of them as adapters between a transport technology and your application services. In turn, think of application services as a facade over your domain, regardless of whether your domain is implemented with DDD or a transaction script approach.
What really helped me understand is the referenced article on the hexagonal architecture. This way, you can view your domain as being at the core and you layer things around it, adapting your domain to infrastructure and services. What you have seems to already follow these principles. A great, in-depth resource for all of this is Implementing Domain-Driven Design by Vaughn Vernon, specifically the chapter on architecture.
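To make the "thin adapter" idea concrete, here is a rough sketch under assumed names (the command, service and endpoint types are invented for illustration; in practice the endpoint would be a WCF service or Web API controller):

    // The endpoint only adapts the transport to an application service; no business logic here.
    public class AddNoteCommand
    {
        public int EmployeeId { get; set; }
        public string Text { get; set; }
    }

    public interface IEmployeeAppService
    {
        void AddNote(AddNoteCommand command);
    }

    public class EmployeeEndpoint
    {
        private readonly IEmployeeAppService _appService;

        public EmployeeEndpoint(IEmployeeAppService appService) => _appService = appService;

        // Delegate straight through; the application service fronts the domain.
        public void AddNote(AddNoteCommand command) => _appService.AddNote(command);
    }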
Does implementing a layered architecture like this really involve so much mapping, or have I missed something?
Yes. The thing is that it's not the same object. They are different representations of the same object, each specialized for its use case: a view model contains logic to update the GUI, a DTO is specialized for transfer (it might be normalized to ease transfer), and so on. They might look the same, but they really aren't.
You could of course try to put all adaptations into a single class, but that would not be very fun to work with when your application grows.
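As a rough illustration of those different representations (all type names are hypothetical):

    // The same employee concept in three shapes, each specialized for one layer.
    public class Employee                // domain entity: identity and behavior
    {
        public int Id { get; private set; }
        public string Name { get; private set; }

        public Employee(int id, string name)
        {
            Id = id;
            Name = name;
        }
    }

    public class EmployeeDto             // transfer shape: flat and serializable
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class EmployeeViewModel       // UI shape: display concerns only
    {
        public string DisplayName { get; set; }
    }

    public static class EmployeeMappings
    {
        public static EmployeeDto ToDto(Employee e) =>
            new EmployeeDto { Id = e.Id, Name = e.Name };

        public static EmployeeViewModel ToViewModel(EmployeeDto dto) =>
            new EmployeeViewModel { DisplayName = dto.Name };
    }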
Is it recommended (or even smart) to use a Wcf service to run the main logic like this (it practically is my Application Service)?
You need some kind of networking layer. I wouldn't let all client applications touch my database. It would create a maintenance nightmare if you mess with the database schema (if some of the clients still run the old version).
By using a server it's much easier to maintain version differences.
Do note that a WCF service definition should be treated as constant once it is in use. Any changes should be defined in a new interface (for instance MyService2).
Are there alternatives to Wcf without having my domain objects in my UI layer?
You could take a look at my framework. Start post: http://blog.gauffin.org/2012/10/writing-decoupled-and-scalable-applications-2/
Is there anything wrong with this implementation?
Not that I can see. Looks like you have a pretty good grasp of the concepts and how they should be used.
Any fall pits to look out for?
Don't try to be lazy with the queries and commands. Don't make them a bit more generic to fit several use cases. It will come back and bite you when the application grows. Smaller classes are easier to maintain.
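For instance, a narrowly scoped query class (the names here are invented) tends to age better than one generic query with a pile of optional filter parameters:

    using System;

    // One small, single-purpose query object per use case...
    public class FindEmployeesHiredAfterQuery
    {
        public DateTime HiredAfter { get; }

        public FindEmployeesHiredAfterQuery(DateTime hiredAfter) => HiredAfter = hiredAfter;
    }

    // ...dispatched through a handler abstraction, rather than one
    // GetEmployees(name, department, hiredAfter, includeInactive, ...) method.
    public interface IQueryHandler<TQuery, TResult>
    {
        TResult Handle(TQuery query);
    }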
Do you have any good examples to recommend looking at that can help me understand how all these concepts are supposed to work together?
My linked blog post and all the other articles in that series.

How to change existing Singleton behavior in C#

I have a problem: we are using assemblies developed by another team (the infrastructure team), and there is a Singleton class for which we need slightly different behavior.
We are thinking about a number of possible ways to deal with code implemented by the other development team.
One possibility is to add an additional Instance2 method, but we don't think that's a good idea: it makes our API awkward to use and hard to understand.
Maybe there is a common way to solve this?
If you are using an API you don't like, simply write a wrapper around that API rather than adding methods to it.
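A minimal sketch of that wrapper idea; the singleton here is a made-up stand-in for the infrastructure team's class, not their real API:

    // Hypothetical third-party singleton we cannot change.
    public sealed class InfrastructureCache
    {
        public static InfrastructureCache Instance { get; } = new InfrastructureCache();
        private InfrastructureCache() { }

        public string Get(string key) => key; // stands in for the real lookup
    }

    // Our own abstraction and wrapper; the "slightly different behavior" lives here,
    // not in the third-party API.
    public interface ICache
    {
        string Get(string key);
    }

    public class CustomizedCache : ICache
    {
        public string Get(string key)
        {
            // Adjust behavior before/after delegating to the singleton.
            var value = InfrastructureCache.Instance.Get(key);
            return value ?? string.Empty;
        }
    }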
You can inherit from a singleton for "reuse" or some fine-tuning, using templates (C++) or generics (C#.NET).
I've posted on my blog (www.devartplus.com) a series of posts on this subject:
1) Basic singleton inheritance in C#.NET
2) Thread-safe singleton inheritance in C#.NET
3) Singleton implementations in C++
You are invited to visit those links, and share your opinion.
Good luck.

How should data be accessed? The working practice

I am a newbie C# developer. When I had just started to learn programming, things were pretty simple: you see the problem, you develop a solution, you test it and it works. That simple.
Then you find out about design patterns and the whole abstraction thing, and you begin to spend more time on code that yields no results, always trying to protect the code from possible changes in the future. More time, less result.
Sorry for the boring introduction, but I'm just trying to show how frustrated I am now.
There is a bunch of data-access technologies provided by Microsoft itself, and an even larger bunch of technologies provided by third-party companies.
I don't have a team leader or a super-skilled programmer friend next door, so I have to ask you for advice.
How do you implement data access in your real applications written in C#?
From a very overall perspective, I always hide data access implementation details behind an interface, like this:
public interface IRepository<T> { /*...*/ }
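A slightly fuller (and still hypothetical) shape of such an interface might be:

    using System.Collections.Generic;

    // The concrete class behind this could use NHibernate, Entity Framework,
    // or raw ADO.NET without the callers ever knowing.
    public interface IRepository<T> where T : class
    {
        T GetById(int id);
        IEnumerable<T> GetAll();
        void Add(T entity);
        void Remove(T entity);
    }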
The .NET framework offers a lot of different ways to access data, so I can understand that you are confused. However, at this time, there are only really two or three reasonable options for accessing relational databases:
NHibernate
Entity Framework
(Low-level APIs like IDataReader may still have their place in limited scenarios)
It's often difficult to see the benefit of abstraction without seeing it pay off in a real-world application. The best advice I can give is to read up on the SOLID principles, then, while writing your application, try to think about the ways the client may come to you and say "Now I need it to do this", which may be a subtle change to the functionality or a major one. Think about how that would affect your code and in how many places you'd need to make changes. Once you've made those changes, how confident would you be that you haven't broken something else?
Another idea would be to download one of the sample applications. One of my particular favourites is the Data Access Platform sample provided on Codeplex. Try working through this code and see how the abstraction and pattern implementations minimise the impact on the code overall when it comes time to change.
The bottom line is it's easy to learn a programming language but understanding how to build robust solutions with it takes time. Stick with it though because when you do finally get a good understanding of software architecture it's immensely rewarding.
Some points to consider for the DAL (note: very opinionated, but answers to this question have to be); a sketch combining several of these points follows the list:
Encapsulate logic behind a Repository
Use interface-based coding
Use Dependency Injection
Use a mature ORM like NHibernate/Entity Framework 4.0 (but know when to use SPROCs for db-intensive work)
Use the Unit of Work pattern
Prevent SQL injection attacks by using parameterized queries (or LINQ to Entities, as above)
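A compact sketch combining a few of these points (repository behind an interface, constructor injection, a unit-of-work boundary); all type names here are illustrative, not a prescribed API:

    using System;

    public class Customer
    {
        public string Name { get; set; }
    }

    // Abstractions the rest of the code depends on; a DI container would wire up
    // the NHibernate- or EF-backed implementations at runtime.
    public interface ICustomerRepository
    {
        void Add(Customer customer);
    }

    public interface IUnitOfWork : IDisposable
    {
        void Commit();
    }

    public class RegisterCustomerService
    {
        private readonly ICustomerRepository _customers;
        private readonly IUnitOfWork _unitOfWork;

        public RegisterCustomerService(ICustomerRepository customers, IUnitOfWork unitOfWork)
        {
            _customers = customers;
            _unitOfWork = unitOfWork;
        }

        public void Register(string name)
        {
            _customers.Add(new Customer { Name = name });
            _unitOfWork.Commit(); // one transactional boundary per use case
        }
    }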

Design pattern for handling many parameters and business rules

I am working on a project that is responsible for creating a "job" which is composed of one or more "tasks" which are persisted to a database through a DAL. The job and task columns are composed of values that are set according to business rules.
The class, as it exists now, is getting complicated and unwieldy because the business rules dictate that it needs access to many databases across our system to decide whether a job can be created and/or how it should be set up.
To further complicate things, it needs to be possible to submit a list of jobs, and the whole thing needs to be callable in a variety of ways (as a referenced assembly, via a Windows service, or via a web service).
Here are some examples of the things it does:
Generate a job cost estimate
Take in an account and/or user to which to assign the job
Emit an event for job submission progress tracking
Merge in data from an outside, user-defined list (.csv, .xls, etc.)
Copy files from a local drive to a network accessible drive (if necessary)
My question is: What are the best practices or design patterns to make this as manageable and simple as possible?
It seems like the class needs to be refactored, as it appears to violate the Single Responsibility Principle. I would recommend that each of the bullet points above get its own implementation class. That way you would be implementing the facade pattern, where your main class represents the high-level abstraction of what the system is doing.
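A rough sketch of what that could look like, with each responsibility behind its own abstraction and the main class reduced to a facade (all names invented for illustration):

    // One collaborator per responsibility from the bullet list.
    public class JobRequest
    {
        public string Account { get; set; }
    }

    public interface ICostEstimator    { decimal Estimate(JobRequest request); }
    public interface IFileStager       { void CopyToNetworkShare(JobRequest request); }
    public interface IProgressNotifier { void Report(string stage); }

    // The facade coordinates the steps but contains no business rules itself.
    public class JobSubmissionFacade
    {
        private readonly ICostEstimator _estimator;
        private readonly IFileStager _stager;
        private readonly IProgressNotifier _notifier;

        public JobSubmissionFacade(ICostEstimator estimator, IFileStager stager, IProgressNotifier notifier)
        {
            _estimator = estimator;
            _stager = stager;
            _notifier = notifier;
        }

        public decimal Submit(JobRequest request)
        {
            _notifier.Report("estimating");
            var cost = _estimator.Estimate(request);
            _stager.CopyToNetworkShare(request);
            _notifier.Report("submitted");
            return cost;
        }
    }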
This type of program can get really messy if it isn't kept clean from the ground up. I always try to stick with a basic 3-tier application (Presentation, Business, Data). There is a lot of good information out there about building applications in this manner, and it's best to do some demo projects and read what others have to say about the subject. Here is the MSDN reference.
I had to redesign an application that did something very similar. Once I got my data layer separated and worked out from everything else, my life became a lot easier.
My best advice is to take the time to plan a lot. Use diagrams, flowcharts, etc. When a program is this complex, I like to have the groundwork for my layers laid out before I ever start writing code.
Given your description of the requirements, there's no real "simple" way to go about this. Its requisite functionality is massive and diverse. My only suggestions are to make the entire thing a DLL library (or even a set of DLLs), to separate the various front ends so that referencing the assembly need not rely on the Windows service (for instance), and to stick to basic OOP commandments like loose coupling.
Besides recommending SOLID and going the extra mile to keep it DRY, I'll suggest introducing the concept of rules into the system.
By modeling the rules you can switch to a more configurable / flexible approach. You can combine multiple rules to expose different operations that affect the outcome in jobs and the related tasks.
This allows you to have rules that are composed of other rules. Depending on your scenario, that can greatly simplify how you deal with it, since operations that involve implicit rules spread across all those systems can be expressed as a combination of simple rules. I'd keep it as simple as possible, but as you extend it you might find the need for different ways of combining the rules, and patterns will emerge on their own.
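One hypothetical way to model composable rules (the rule names and the accountId parameter are assumptions made for the sketch):

    // A simple rule abstraction; composite rules are built from simple ones.
    public interface IJobRule
    {
        bool IsSatisfiedBy(string accountId);
    }

    public class AccountHasCreditRule : IJobRule
    {
        // In the real system this would query the billing database.
        public bool IsSatisfiedBy(string accountId) => !string.IsNullOrEmpty(accountId);
    }

    public class AllRules : IJobRule
    {
        private readonly IJobRule[] _rules;

        public AllRules(params IJobRule[] rules) => _rules = rules;

        // Satisfied only when every contained rule is satisfied.
        public bool IsSatisfiedBy(string accountId)
        {
            foreach (var rule in _rules)
                if (!rule.IsSatisfiedBy(accountId))
                    return false;
            return true;
        }
    }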
As for SOLID, I recommend checking out the ebook here and trying to keep an evolving-code approach.
