My team and I are looking for evidence to support either a multi-library approach for related functionality, or condensing all of that functionality within a single service layer. It's important to note that this will sit behind a web API; either approach is valid, but we need to decide which holds more benefit. To illustrate the layer(s) we are looking at, the following is what we'll have:
Solution
WebAPI
Services ---- This is what we're looking at
DataAccess
Bear in mind that if we did use the multi library approach we would still have a Services project, but it would be much thinner and have more specific functionality. We are not planning to independently deploy these libraries, but have everything needing to reference them either in the same solution, or access them via the web api.
What the rest of us would propose is something like the following:
Solution
WebAPI
Services
Services.Geography ---
Services.Membership --- This is the alternative approach
Services.ProductDelivery ---
etc...
The benefits we see in the first option are having all of this code organized within a single library which allows for easier extraction of duplicate code, potentially unit testing, and perhaps some relief from the build process.
The benefits of option two are having a clear delineation in functionality between projects, having isolated code which is potentially portable should the need arise, and generally being able to independently work on and configure different facets of the application.
The drawbacks we see in option one are that the Service layer now becomes responsible for each facet of the application, which bloats that library and in my opinion sort of violates Single Responsibility. We realize that rule is not as applicable to libraries so much as it is methods and classes, but it still seems like there are other benefits to be had by separating functionality. There's also the potential to mistakenly place code somewhere it doesn't belong, or use classes available to the entire project where they may not apply.
The drawbacks in option two are an obvious increase in overhead on project builds, more work in configs (even though this may be desirable), and potentially cluttering the solution with more projects than necessary. I think we'd plan to consolidate like functionality into single projects (i.e., we might build multiple implementations of ProductDelivery within that project so we can switch between them or use different ones for different reasons).
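For illustration, a rough sketch of what we mean by switchable implementations inside Services.ProductDelivery might look like this (all type names here are hypothetical):

    // Services.ProductDelivery (hypothetical types for illustration only)
    public interface IProductDeliveryService
    {
        DeliveryEstimate Estimate(int productId, string postalCode);
    }

    // One implementation per carrier/strategy; consumers only see the interface,
    // so we can switch between them or use different ones for different reasons.
    public class CourierDeliveryService : IProductDeliveryService
    {
        public DeliveryEstimate Estimate(int productId, string postalCode)
        {
            return new DeliveryEstimate { Days = 2 };
        }
    }

    public class PostalDeliveryService : IProductDeliveryService
    {
        public DeliveryEstimate Estimate(int productId, string postalCode)
        {
            return new DeliveryEstimate { Days = 5 };
        }
    }

    public class DeliveryEstimate
    {
        public int Days { get; set; }
    }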
We realize all of our business rules can be accomplished with either approach; we've just reached an impasse in deciding which approach is better practice.
So the question is, which of these two approaches is better practice?
There are two things that would make me choose the first option:
Your services use only one data layer library.
Your services are really short (something like implementing just the CRUD)
Split into libraries, a class of only a few lines sitting alone in an entire library can be awkward, unless you know it will grow a lot later (into several classes, of course).
If not, I would say option 2 is better, because:
it's easier to replace a part of the service: change the library you want, and it's done.
it should be more abstracted, if you want to avoid strong coupling between the libraries
it should be easier to test a specific part of it.
it should be more configurable, and you can configure all of them in the project that references them all (even if for one or several libraries that doesn't change much).
it should be less like a god library
it might be more reusable in other projects, depending on how specialized your libraries are.
And I disagree on these points:
a single library which allows for easier extraction of duplicate code
If you are careful, your duplicate code can be extracted into a parent library common to all the others, so all your duplicate code should still end up extracted (except if there is a lack of communication or people prefer to copy/paste code; but one library would not change that, and it might even be harder to find where the code already exists).
potentially unit testing
Why would several libraries be more difficult to test?
If you have several libraries, you will have to make them more abstracted to allow for change, and then your testing should be easy.
and perhaps some relief from the build process.
Why? If all your libraries are well named, where would the problems be?
Deploying one DLL or several DLLs shouldn't be that hard.
If it's about configuration, with one library or more you will still have roughly the same configuration to write, not necessarily more (but probably a bit more).
I also disagree that single responsibility doesn't apply to libraries. It does.
Each library should be responsible for one business area, not all of them. If you end up with a set of libraries, it can become a framework. Even a framework ends up having a single responsibility, just a much more general one than the responsibility of methods, classes, libraries, etc.
But you might want an opinion from a more advanced architect/developer than me.
If someone disagrees with me, don't hesitate to comment on my answer. I would be happy to learn from your knowledge.
With the comments on my first answer in mind:
The current plan is to have a single data layer. Many of our potential other libraries would be third party API wrappers that don't necessarily need to interact with the database. Those that do could potentially have their own data layer which may or may not interact with the same database or an independent database. I think doing that makes them self contained and able to exist without the rest of the solution. Still not totally sure if this is the approach we want to take yet though.
Dependency injection?
StructureMap as our IoC dependency resolver
You will end up with several libraries anyway, unless all the libraries you use have to be used together.
You will either have your services become a kind of proxy for the third party libraries, or your services will use proxies for the third party libraries.
Either way, the proxy parts should not live together in the same library; it would be harder to change the third party library if you did that.
If you choose the solution where your services use proxies for the third party libraries, you can inject these proxies into your services easily, thanks to dependency injection.
If you change third party library, change the proxy implementation and the injection, and it's done.
But if you choose to make your services the proxies, it's almost the same, only with one layer less, and your service implementations have to be exported into different libraries. You will also have to be more careful when changing a service, because you could end up breaking things elsewhere in your app.
For that last reason, having a proxy layer used by your services sounds better to me at the moment.
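As a minimal sketch of that idea (the proxy interface, the vendor-specific implementation and the wiring are all hypothetical names, assuming StructureMap's For<>().Use<>() registration style):

    using StructureMap;

    // The service depends only on our own proxy interface.
    public interface IGeocodingProxy
    {
        string GetRegion(string postalCode);
    }

    // Adapter around the third party SDK; swapping vendors means swapping this class.
    public class VendorGeocodingProxy : IGeocodingProxy
    {
        public string GetRegion(string postalCode)
        {
            // call the vendor API here and translate its types into ours
            return "Unknown";
        }
    }

    public class GeographyService
    {
        private readonly IGeocodingProxy _geocoding;

        public GeographyService(IGeocodingProxy geocoding)
        {
            _geocoding = geocoding;
        }

        public string RegionFor(string postalCode)
        {
            return _geocoding.GetRegion(postalCode);
        }
    }

    public static class CompositionRoot
    {
        public static GeographyService Build()
        {
            // Changing the third party library only changes this one Use<> line.
            var container = new Container(cfg =>
                cfg.For<IGeocodingProxy>().Use<VendorGeocodingProxy>());

            return container.GetInstance<GeographyService>();
        }
    }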
I'm still thinking about this; more edits will probably follow.
Related
I'm trying to build a microservice-based project on ASP.NET Core (Web API).
So I have independent components which communicate with each other and with the external world.
So I have "connection points" between services: the view model (input data) of one is the equivalent of the request/response of another, and so on.
I think there are some best practices for this case which would let me avoid creating tonnes of identical code, am I right?
To go deeper, let's look at a situation where I have, for example, data in a database and a microservice which gets information from the DB (possibly transforming or widening it a little) and returns it to the caller. Is it possible to avoid duplicate code for storing and returning information from the database?
Thank you.
What you are talking about are models: input models and output models (DTOs).
If your projects are part of the same solution, then you can probably have a shared project or class library, to reuse your models.
If not, create a NuGet package, distribute it via your own feed and use it in all the projects that require it.
In order for this to work, you need to keep this project very simple. Preferably it should not have any dependencies, so you can reference it without unintended consequences. If you keep it very simple, it can work well.
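A sketch of what such a shared, dependency-free contracts project might contain (the names are made up for illustration); each service references it directly, or pulls it in as a NuGet package from your own feed:

    // Shared project / NuGet package, e.g. MyCompany.Contracts (hypothetical name).
    // It contains only plain DTOs and has no dependencies of its own.
    namespace MyCompany.Contracts
    {
        public class CustomerDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public string Email { get; set; }
        }
    }

One service returns CustomerDto from its endpoint and another uses the same type when deserializing the response, so the "connection point" is defined exactly once.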
Mostly it depends on your use case, overall intent and solution architecture.
Micro services are meant to be autonomous from a development and deployment point of view, and they shouldn't know about each other, or should know as little as possible. The more they know about other micro services, the higher the coupling. Each should own its model and the data needed for what it was created for (in order to meet its responsibility). You can achieve this using, for example, event-based integration.
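A rough sketch of what event-based integration can look like (the event type and bus interface are hypothetical; in practice the bus would be a message broker or a service bus library):

    using System;

    // The owning service publishes an event carrying the data it owns.
    public class OrderPlacedEvent
    {
        public Guid OrderId { get; set; }
        public Guid CustomerId { get; set; }
        public decimal Total { get; set; }
        public DateTime PlacedAtUtc { get; set; }
    }

    // Hypothetical bus abstraction: subscribers store their own copy of the data
    // they need instead of calling back into the publishing service.
    public interface IEventBus
    {
        void Publish<TEvent>(TEvent message);
        void Subscribe<TEvent>(Action<TEvent> handler);
    }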
In this scenario I don't see a need for any code reuse. Every micro service will have different input and logic behind it. You should strive for this in your project.
If your micro services are too chatty (for example, they often need to ask other micro services for data), you probably made a mistake with their boundaries and you should consider designing them again. You should also avoid creating micro services which are just thin front ends over their databases.
The next thing to point out is the DRY principle, and why it is not really applicable in the micro services world. In the OOP world it is common to apply this principle, which is why most developers will try to apply it to micro services too. But if you try to apply it across micro services, you'll end up with high coupling and you won't be able to develop them truly independently. Code reuse and data redundancy are not as bad as you probably think.
So to wrap up: as I said at the beginning, it depends. If your "micro services" are part of one solution and you are, for example, referencing them in code, you can't really call them micro services, and you can use a solution like Andrei described. But if they are not, and you really care about their independence (and you're following what I mentioned above), you should not share code among different micro services, and there won't actually be a need to. But if different micro services really do use the same code (even when they're well designed), don't be afraid to just reuse that code. You'll see that it pays off.
Micro services are not a silver bullet for every need, and you should be aware of that. As a further reference I recommend this free book.
The reason for asking this question is that I've been wondering how to stitch all these different concepts together. There are many examples and discussions on e.g. DDD, Dependency Injection, CQRS, SOA and MVC, but not so many examples of how to put them all together in a flexible way.
My goals:
Develop modules that with little or no modification can stand on their own
Changing or reworking the UI should be as easy as possible (i.e. the UI should do as little as possible, and be "stupid")
Use documented patterns and principles
To make it easier to ask a concrete question, the main architecture now looks like this:
The example shows how to add a note to an employee. Employee Management is one bounded context. Employee has several properties, among those an ICollection<Note>.
The bounded context is, in my understanding, the logical place to separate code. Each BC is a module. Most of the time I find each of them can warrant its own UI if needed (i.e. some modules might be made available for Windows Phone).
The Domain holds all business logic.
The infrastructure holds the repository implementations, and services to send mail, save files, and other utilities that do not belong in the domain. I'm thinking of turning some of the common service features that I have to use in several domains (like sending e-mail) into a sort of API that I could reference, to save implementing the same things across several BCs.
The query layer holds all queries except the GetById that the repository needs to fetch an object. The query layer can query other persistence instances, and will probably need to change somewhat for each UI.
The Wcf or Web Api is kind of my application layer; it might belong in infrastructure and not on the outside. This service also sets up the dependencies, so all the UI needs to do is ask for information and send commands.
The process starts with the blue arrows. Read the model since that has most of the information.
In step 1, the EmployeeDto in this example is just a subset of the employee's properties, used to show the user information about the employee they need to make a note on (like a note about new experience or something like that).
So, the questions are:
Does implementing a layered architecture like this really involve so much mapping, or have I missed something?
Is it recommended (or even smart) to use a Wcf service to run the main logic like this (it practically is my Application Service)?
Are there alternatives to Wcf without having my domain objects in my UI layer?
Is there anything wrong with this implementation? Any pitfalls to look out for?
Do you have any good examples to recommend looking at that can help me understand how all these concepts are supposed to work together?
Update:
I've read through most of the articles now (quite a bit of reading) except for the paid book (that requires a bit more time). All of them are very good pointers, and the way of thinking of WCF more as an adapter seems to be a good answer to question 2. JGauffin's work on his framework is also very interesting if I'm planning to go that route.
However, as mentioned in some of the comments beneath, I feel some of the examples tend towards recommending or implementing event and/or command sourcing, message buses and so on. To me it is overkill to plan for that level of scaling right now. Like many business applications, this is a "large" (in terms of an internal application; think a few thousand users at most) user base working on a large set of data, not a highly collaborative domain that needs the event and command queues often associated with CQRS to cope.
Based on the answers below, the approach I'll start with will be based on the model above, adjusted like this:
I'll just have to cope with mapping. The pros outweigh the cons.
I'll pull application services back into the infrastructure and consider Wcf as an "adapter".
I'll use command objects and send them to the application service, not polluting my UI with domain objects (a small sketch follows below).
To keep complexity down I'll try to manage without event/command sourcing, message buses etc. for now.
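As a small sketch of the command-object decision above (the repository and service names are just placeholders; Employee and Note are from the model described earlier):

    using System.Collections.Generic;

    public class AddNoteToEmployeeCommand
    {
        public int EmployeeId { get; set; }
        public string NoteText { get; set; }
    }

    public class EmployeeApplicationService
    {
        private readonly IEmployeeRepository _employees;

        public EmployeeApplicationService(IEmployeeRepository employees)
        {
            _employees = employees;
        }

        public void AddNote(AddNoteToEmployeeCommand command)
        {
            var employee = _employees.GetById(command.EmployeeId);
            employee.AddNote(command.NoteText);   // behaviour stays on the domain entity
            _employees.Save(employee);
        }
    }

    public interface IEmployeeRepository
    {
        Employee GetById(int id);
        void Save(Employee employee);
    }

    public class Employee
    {
        public Employee() { Notes = new List<Note>(); }
        public int Id { get; set; }
        public ICollection<Note> Notes { get; private set; }
        public void AddNote(string text) { Notes.Add(new Note { Text = text }); }
    }

    public class Note
    {
        public string Text { get; set; }
    }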
In addition I just wanted to link to this blog post by Udi Dahan about CQRS; I think things like this keep complexity down unless they are really needed.
There is a trade-off between mapping and layers. One reason certain mappings exist is because appropriate abstractions aren't available or feasible. As a result, it is often easier to just explicitly map between layers than trying to implement a framework that infers the mappings, but I digress; this hinges on a philosophical discussion of the issue.
The WCF or WebAPI service should be very thin. Think of it as an adapter in a hexagonal architecture. It should delegate everything to an application service. There is conflation of the term service which causes confusion. Overall, the goal of WCF or WebAPI is to "adapt" your domain to a specific technology such as HTTP. WCF can be thought of as implementing an open host service in DDD lingo.
You mentioned WebAPI which is an alternative if you want HTTP. Most importantly, be aware of the role of this adapting layer. As you state, it is best to have the UI depend on DTOs and generally the contract of a service implemented with WCF or WebAPI or anything else. This keeps things simple and allows you to vary implementation of your domain without affecting consumers of open host services.
You should always be on the lookout for needless complexity. Layering is a trade-off and sometimes it can be overkill. For example, in an app that is primarily CRUD, there is no need to layer this much. Also, as stated above, don't think of WCF services as being application services. Instead, think of them as adapters between a transport technology and application services. In turn, think of application services as being a facade over your domain, regardless of whether your domain is implemented with DDD or a transaction script approach.
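To illustrate the "thin adapter" point, a hedged sketch of a Web API controller (assuming the ASP.NET Web API 2 style) that only translates HTTP into a call on an application service, reusing the hypothetical AddNoteToEmployeeCommand and EmployeeApplicationService types sketched in the question's update:

    using System.Web.Http;

    // The controller holds no business logic; it adapts HTTP to the application service.
    public class EmployeeNotesController : ApiController
    {
        private readonly EmployeeApplicationService _service;

        public EmployeeNotesController(EmployeeApplicationService service)
        {
            _service = service;
        }

        public IHttpActionResult Post(AddNoteToEmployeeCommand command)
        {
            _service.AddNote(command);
            return Ok();
        }
    }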
What really helped me understand is the referenced article on the hexagonal architecture. This way, you can view your domain as being at the core and you layer things around it, adapting your domain to infrastructure and services. What you have seems to already follow these principles. A great, in-depth resource for all of this is Implementing Domain-Driven Design by Vaughn Vernon, specifically the chapter on architecture.
Does implementing a layered architecture like this really involve so much mapping, or have I missed something?
Yes. The thing is that it's not the same object. It's different representations of the same object, specialized for each use case. A view model contains logic to update the GUI, a DTO is specialized for transfer (and might get normalized to ease transfer), and so on. They might look the same, but they really aren't.
You could of course try to put all adaptations into a single class, but that would not be very fun to work with when your application grows.
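A small sketch of these different representations with explicit mapping (the view model and mapping class are hypothetical; Employee and EmployeeDto echo the question):

    // Domain entity: behaviour and invariants live here.
    public class Employee
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

    // DTO: flat and serialization-friendly, shaped for transfer.
    public class EmployeeDto
    {
        public int Id { get; set; }
        public string FullName { get; set; }
    }

    // View model: shaped for one particular screen.
    public class EmployeeViewModel
    {
        public string DisplayName { get; set; }
        public bool CanAddNote { get; set; }
    }

    public static class EmployeeMappings
    {
        public static EmployeeDto ToDto(Employee e)
        {
            return new EmployeeDto { Id = e.Id, FullName = e.FirstName + " " + e.LastName };
        }

        public static EmployeeViewModel ToViewModel(EmployeeDto dto, bool canAddNote)
        {
            return new EmployeeViewModel { DisplayName = dto.FullName, CanAddNote = canAddNote };
        }
    }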
Is it recommended (or even smart) to use a Wcf service to run the main logic like this (it practically is my Application Service)?
You need some kind of networking layer. I wouldn't let all client applications touch my database. It would create a maintenance nightmare if you mess with the database schema (if some of the clients still run the old version).
By using a server it's much easier to maintain version differences.
Do note that a WCF service definition should be treated as constant once it is in use. Any changes should be defined in a new interface (for instance MyService2).
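A hedged sketch of that versioning idea in WCF terms (the operation names are invented; the point is that the published contract stays untouched and additions go on a new contract):

    using System.ServiceModel;

    // Published contract: treat it as frozen once clients depend on it.
    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        string GetEmployeeName(int employeeId);
    }

    // New operations go on a new contract, so existing clients keep working.
    [ServiceContract]
    public interface IMyService2 : IMyService
    {
        [OperationContract]
        string GetEmployeeEmail(int employeeId);
    }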
Are there alternatives to Wcf without having my domain objects in my UI layer?
You could take a look at my framework. Start post: http://blog.gauffin.org/2012/10/writing-decoupled-and-scalable-applications-2/
Is there anything wrong with this implementation?
Not that I can see. Looks like you have a pretty good grasp of the concepts and how they should be used.
Any pitfalls to look out for?
Don't try to be lazy with the queries and commands. Don't make them a bit more generic to fit several use cases. It will come back and bite you when the application grows. Smaller classes are easier to maintain.
Do you have any good examples to recommend looking at that can help me understand how all these concepts are supposed to work together?
My linked blog post and the other articles in that series.
We are in a situation whereby we have 4 developers with a bit of free time on our hands (talking about 3-4 weeks).
Across our code base, for different projects, there is a fair amount of framework-y code that gets re-written for every new project we start. Since we have some free time on our hands, I'm in the process of creating a "standard" set of libraries that all projects can re-use, such as:
Caching
Logging
Although the two above would rely on libraries such as Enterprise Library, each new project would write its own wrappers around them, etc., so we're consolidating all this code.
I'm looking for suggestions on the standard libraries that you have built in-house and share across many projects.
To give you some context, we build LOB internal apps and public facing websites - i.e. we are not a software house selling shrink-wrap, so we don't need stuff like a licensing module.
Any thoughts would be much appreciated - our developers are yearning to write some code, and I would very much love to give them something to do that would benefit the organization in the long run.
Cheers
Unit testing infrastructure - can you easily run all your unit tests? Do you have unit tests?
Build Process - can you build/deploy an app from scratch, with only 1 or 2 commands?
Some of the major things we do:
Logging (with some wrappers around TraceSource)
Serialization wrappers (so you can serialize/deserialize in one line of code; a sketch follows after this list)
Compression (wrappers for the .NET functionality, to make it so you can do this in one line of code)
Encryption (same thing, wrappers for .NET Framework functionality, so the developer doesn't have to work in byte[]'s all the time)
Context - a class that walks the stack trace to bring back a data structure that has all the information about the current call (assembly, class, member, member type, file name, line number, etc)
etc, etc...
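As an example, a sketch of the one-line serialization wrapper mentioned above, built on DataContractJsonSerializer from the .NET Framework (the class name is just a placeholder):

    using System.IO;
    using System.Runtime.Serialization.Json;
    using System.Text;

    public static class Serializer
    {
        public static string ToJson<T>(T value)
        {
            var serializer = new DataContractJsonSerializer(typeof(T));
            using (var stream = new MemoryStream())
            {
                serializer.WriteObject(stream, value);
                return Encoding.UTF8.GetString(stream.ToArray());
            }
        }

        public static T FromJson<T>(string json)
        {
            var serializer = new DataContractJsonSerializer(typeof(T));
            using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
            {
                return (T)serializer.ReadObject(stream);
            }
        }
    }

Calling code then just does Serializer.ToJson(order) without ever touching streams or encodings.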
Hope that helps
ok, most importantly, don't reinvent the wheel!
Spend some time researching libraries which you can easily leverage:
For logging I highly recommend Log4Net.
For testing, NUnit.
For mocking, Rhino.
Also, take a look at Inversion of Control Containers, I recommend Castle Windsor.
For indexing I recommend Solr (on top of Lucene).
Next, write some wrappers:
These should be the entry point of your API (a common library, but think of it as an API).
Focus on abstracting all the libraries you use internally in your API, so that if you don't want to use Log4Net or Castle Windsor anymore, you can swap them out, by writing well structured abstractions and concentrating on loosely coupled design patterns.
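For example, a minimal sketch of hiding Log4Net behind your own logging abstraction (the interface and class names are made up), so callers never reference log4net directly and the provider can be swapped later:

    using System;

    public interface IAppLogger
    {
        void Info(string message);
        void Error(string message, Exception exception);
    }

    // The only place in the code base that knows about log4net.
    public class Log4NetLogger : IAppLogger
    {
        private readonly log4net.ILog _log;

        public Log4NetLogger(Type source)
        {
            _log = log4net.LogManager.GetLogger(source);
        }

        public void Info(string message)
        {
            _log.Info(message);
        }

        public void Error(string message, Exception exception)
        {
            _log.Error(message, exception);
        }
    }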
Adopt Domain Driven Design:
Think of APIs as domains and modular abstractions that internally use other common APIs, like your common data access library.
Suggestions:
I'd start with a super flexible general DAL library, that makes it super easy to access any type of data and multiple storage mediums.
I'd use Fluent NHibernate for the relational DB stuff, and I'd have all the method calls into your data access layer work through LINQ, as it's a C# language feature (a minimal sketch follows below).
Using LINQ to query DBs, indexes, files, XML, etc.
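A minimal sketch of that idea, assuming NHibernate's LINQ provider (the Query<T>() extension from NHibernate.Linq); the repository name and shape are just placeholders:

    using System.Linq;
    using NHibernate;
    using NHibernate.Linq;   // provides the Query<T>() LINQ extension on ISession

    // Callers only ever see IQueryable<T>, so the same LINQ style can be used
    // whether the data lives in a relational DB, an index, or elsewhere.
    public class Repository<T> where T : class
    {
        private readonly ISession _session;

        public Repository(ISession session)
        {
            _session = session;
        }

        public IQueryable<T> Query()
        {
            return _session.Query<T>();
        }

        public void Add(T entity)
        {
            _session.Save(entity);
        }
    }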
Here is one thing that can keep all developers busy for a month:
Run your apps' unit tests in a profiler with code coverage (nUnit or VS Code Coverage).
Figure out which areas need more tests.
Write unit tests for those sub-systems.
Now, if the system was not written using TDD, chances are it's very monolithic and will require significant refactoring to introduce test surfaces. Hopefully, at the end of it you end up with a more modular, less tightly coupled, more testable system.
My attitude is that one should almost never write standard libraries. Instead, one should refactor existing, working code to remove duplication and improve ease of use and ease of testing.
The result will be very much like a "standard library", except that you will know that it works (you reran your unit tests after every change, right?), and you'll know that it will be used, since it was already being used. Otherwise, you run the risk of creating a wonderful standard library that isn't used and doesn't work when it is used.
A previous job encountered a little downtime while the business sorted out what the next version should be. There were a few things we did that helped:
Migrated from .NET Remoting to WCF
Searched for pain points in the code that all devs just hated to work with, and refactored them
Introduced a good automated build system that would run unit tests and send out emails for failed builds. It would also package and place that version in a shared directory for QA to pick up
Scripted the DB so that you can easily upgrade the database rather than being forced to take an out of date copy polluted with irrelevant data that other devs have been playing with.
Introduced proper bug tracking and triage process
Researched how we could migrate from WinForms to WPF
Looked at CAB (composite application block) or plugin frameworks so configuration would get simpler (at that time, setup and configuration took a tremendous amount of time)
Other things I would do now might be:
Look at PostSharp to weave in cross-cutting concerns, which would simplify logging, exception handling, or anywhere else code is repeated over and over again
Look at AutoMapper so that conversions from one type to another are driven by configuration rather than by changing code in many places (a short sketch follows after this list).
Look at education around TDD (if you don't do it already) or BDD-style unit tests.
Invest time in streamlining automated integration tests (as these are difficult to set up and configure manually, they tend to get dropped from the SDLC).
Look at the viability of dev tools such as ReSharper
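On the AutoMapper point, a hedged sketch of configuration-driven mapping (the exact API differs between AutoMapper versions; this follows the MapperConfiguration style, and the types are invented):

    using AutoMapper;

    public class OrderEntity
    {
        public int Id { get; set; }
        public string CustomerName { get; set; }
    }

    public class OrderDto
    {
        public int Id { get; set; }
        public string CustomerName { get; set; }
    }

    public static class MappingConfig
    {
        // Built once at startup; matching property names are mapped by convention.
        private static readonly IMapper MapperInstance =
            new MapperConfiguration(cfg => cfg.CreateMap<OrderEntity, OrderDto>())
                .CreateMapper();

        public static OrderDto ToDto(OrderEntity entity)
        {
            return MapperInstance.Map<OrderDto>(entity);
        }
    }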
HTH
When/where do you decide to split a large Visual Studio project into multiple smaller projects? When it can be reusable? When the project is too big? (But how big is too big?)
And when you do split the project, do you:
group by database tables
group by similar functionality
other..
Pros of many projects:
Easier to isolate code for unit testing. I like to isolate code that has a dependency on a big external server thing; for example, code that talks to the SMTP server gets its own assembly, code that talks to the database gets its own assembly, and likewise code that talks to the web server and code that is pure business logic like validations.
Pros of few projects:
Visual Studio goes faster
Some developers just don't get your vision about dividing up responsibilities and will start putting classes everywhere, so you end up with the pain of extra projects and the benefits of putting everything into one project.
Each project has a configuration, and when you make a decision about project configuration you often have to make the same change everywhere, such as setting or changing the strong name key.
Pros of many solutions:
You hit the maximum project level later.
Only the stuff in your current solution gets compiled every time you hit F5.
If the project isn't expected to change in the life of your application, why re-compile it over and over? Call it done and move it to its own solution.
Cons of many solutions:
It's up to you to work out the dependencies between solutions and manually compile the dependencies first. This leads to complicated build scripts.
Projects should be cohesive: the logic should be related and accomplish a similar goal.
This answer will depend on the size of the product you are supporting. In general we organize our projects along domain and logic, and we will divide those even further; the more you divide, the more organized you must be, or you are going to hit the dreaded recursive dependency issue.
When I do choose to break up a project, it is because it has grown too large or because two areas are becoming too similar.
When complexity rises I do not split by tables; I generally split by functionality.
Re-usability is another excellent reason to reduce lines of code, as well as to introduce a new project. However, be careful how many "utility" libraries you introduce, because they do have an impact on readability/understandability.
I do not think there is a line in the sand that says that if you hit 3k SLOC you have too much. It is all contextual.
I always have several projects (and therefore a solution), instead of one project with all of my source in it.
In some cases it is unavoidable, because you are using an open source library and want to be able to debug it. But more pragmatically, I typically have my applications provide functionality via plugins. This allows me to change the behavior or offer user-selectable behavior at runtime. In the non-plugin case, it allows you to update one portion of your program without updating everything. There are also cases where you can ship the main application by itself and only download the modules/assemblies when you need them.
One other reason is that you can create smaller test apps to exercise an assembly, rather than building a very large solution and potentially requiring a user to execute several (and irrelevant) GUI operations before even reaching the part you want to test. And this isn't just a testing concern -- maybe you have less-savvy users in your organization that only want to be presented with the bits that concern them.
When the overall purpose of the project remains the same, but the number of classes is becoming large, I tend to create folders and namespaces to better group functionality within the project. Classes that are coupled to each-other tend to go in the same folder/namespace, so that if I need to understand a given class, the related classes are nearby in the Solution Explorer. I usually only create new projects if I realize that a particular piece of functionality is very different in purpose or if there is a common dependency between existing projects.
I usually wind up with a few relatively small Framework projects that define interfaces for loose coupling between other projects, with larger projects for the different types of concrete functionality. That's always at least one project for the UI and one project for logic and data (often split into two projects if the data layer becomes very large in its own right.)
I move code to a new project if it has general functionality (theoretically) usable by other projects too. If the project is large because it represents a complex problem, then namespaces provide a great way to bring order to the code. Here you can, for example, introduce a (sub-)namespace for each SQL table, etc.
I'm currently working on two social networking sites that have a lot in common, yet are distinctively different. I find myself writing a lot of the same code for both (including UI), and was wondering if there is a best practice that will limit duplicating code.
One of the main problems is that these projects are very independent of each other and will likely soon have more differences than similarities. Also, once the initial work is done, they might be handed off to other programmers, so having shared code libraries might end up being a big problem.
Any suggestions from people that might have had to deal with a similar situation?
PS: I'm the only developer on both of these projects, and it looks like it's going to stay that way for a while.
Abstracting shared functionality back to a framework or library with defined interfaces and default implementations is a common way to handle this. For example, your plugin architecture, if you choose to support one, is probably something that could be shared among all of your projects. Most of the time the things you want to share are pretty basic functionality or relatively abstract functionality that can be easily customized. The former are easier to recognize and factor out to common libraries. The latter may sometimes be more work than simply re-implementing the code with minor changes (sharing patterns rather than code).
One thing you want to be careful of is to let the actual re-use drive the design of common libraries rather than coming up with a shared architecture in advance. It's very tempting to get caught up in framework design and abstracting it out for shared use. Unfortunately you often find that the shared use never develops or develops in a different direction than you expected and you end up rewriting or throwing away much of the framework -- or even worse, keeping and maintaining unused code. Let YAGNI (you aren't gonna need it) be your guide and delay refactoring to common libraries until you actually have a need.
There are a couple (at least) of different approaches here, and you could certainly use both. Firstly, you could move some common code into a separate project and just call that code statically. This is pretty easy to do, and I sometimes take this approach with simple helper functions that probably don't belong in a class in my main project - a good example would be a math library or something like that. The other approach is to extract common functionality into a class or interface which you then inherit and extend. Depending on what code you are looking to reuse, you might use either (or both) of these approaches.
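A small sketch of both flavours (all names are hypothetical): a static helper that lives in a common project and is simply called, and a base class that each site inherits and extends:

    // Flavour 1: static helpers in a shared project, called directly.
    namespace Shared.Helpers
    {
        public static class SlugHelper
        {
            public static string ToSlug(string title)
            {
                return title.Trim().ToLowerInvariant().Replace(' ', '-');
            }
        }
    }

    // Flavour 2: a shared base class; each site overrides only what differs.
    namespace Shared.Membership
    {
        public abstract class ProfileServiceBase
        {
            public string BuildDisplayName(string first, string last)
            {
                return first + " " + last;
            }

            public abstract bool CanSendFriendRequest(int fromUserId, int toUserId);
        }
    }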
I suspect you will find it easier than you think. Try it with some simple code: set up a new project in the same solution, reference your library from your existing code and see how it goes. There is also no reason you can't reference your shared project from multiple solutions.
Having shared code libraries need not be a problem if the development gets handed off. For now you can have your 2 sites reference the same library (or libraries) which you maintain, but if and when you split the projects out to other teams you can give a copy of the shared code to each team.