How to Implement Loose Coupling with an SOA Architecture - C#

I've been doing a lot of research lately about SOA, ESBs, etc.
I'm working on redesigning some legacy systems at work now and would like to rebuild them with more of an SOA architecture than they currently have. We use these services in about 5 of our websites, and one of the biggest problems we have right now with our legacy system is that almost every time we make bug fixes or updates we need to re-deploy all 5 websites, which can be quite a time-consuming process.
My goal is to make the interfaces between services loosely coupled so that changes can be made without having to re-deploy all the dependent services and websites.
I need the ability to extend an already existing service interface without breaking or updating any of its dependencies. Have any of you encountered this problem before? How did you solve it?

I suggest looking at a different style of services than you may have been using so far. Consider services that collaborate with each other using events, rather than request/response. I've been using this approach for many years with clients in various verticals with a great deal of success. I've written quite a bit about these topics over the past 4 years. Here's one place where you can get started:
http://www.udidahan.com/2006/08/28/podcast-business-and-autonomous-components-in-soa/
Hope that helps.

There are a couple of approaches you can take. Our SOA architecture involves XML messages sent to and from the services. One way we achieve what you describe is by avoiding the use of a data-binding library tied to our XML schema and instead using a generic XML parser to get just the data nodes you want, ignoring those you aren't interested in. This way the service can add new nodes to the message without breaking anyone currently using it. We typically only do this when we need just one or two pieces of information from a larger schema structure.
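As a minimal sketch of that approach using LINQ to XML (the element names here are hypothetical): the consumer selects only the nodes it cares about, so new sibling elements added by the service later are simply ignored.

```csharp
using System;
using System.Xml.Linq;

class CustomerNameReader
{
    // Reads just Customer/Name and ignores the rest of the message,
    // so the service can add new elements without breaking this consumer.
    public static string ReadCustomerName(string xml)
    {
        var doc = XDocument.Parse(xml);
        var name = doc.Root?.Element("Customer")?.Element("Name")?.Value;
        if (name == null)
            throw new InvalidOperationException("Customer/Name not found");
        return name;
    }
}
```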
Alternatively, the other (preferred) solution we use is versioning. A version of a service adheres to a particular schema/interface. When the schema changes (e.g. the interface is extended or modified), we create a new version of the service. We may have two or three versions on the go at any one time. In time, we deprecate and then remove older versions, while eventually migrating dependent code onto newer versions. This way those dependent on the service can continue using the existing version while a particular dependency can 'upgrade' to the new version. Which versions of a service are called is defined in a configuration file for the dependent code. Note that it is not only the schema which gets versioned, but all of the underlying implementation code as well.
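As a rough illustration of the configuration side (the key and URL are hypothetical): each dependent application resolves the version it has been migrated to from its own configuration, so a new service version can be deployed without touching existing consumers.

```csharp
using System;
using System.Configuration; // requires a reference to System.Configuration

class OrderServiceEndpoint
{
    // The dependent application's app.config decides which version to call:
    //   <appSettings>
    //     <add key="OrderServiceUrl" value="https://services.example.com/orders/v2" />
    //   </appSettings>
    public static Uri Resolve()
    {
        return new Uri(ConfigurationManager.AppSettings["OrderServiceUrl"]);
    }
}
```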
Hope this helps.

What you're asking isn't an easy topic. There are many ways you can go about making your Service Oriented Architecture loosely coupled.
I suggest checking out Thomas Erl's SOA book series. It explains everything pretty clearly and in-depth.

There are a few common practices for achieving loose coupling between services.
Use the doc/literal style of web services; think in terms of data (the wire format) instead of RPC; avoid schema-based data binding.
Abide strictly by the contract when sending out data, but make as few assumptions as possible when processing incoming data; XPath is a good tool for that (loose in, tight out).
Use an ESB and avoid any direct point-to-point communication between services.

Here is a rough checklist for evaluating whether your SOA implements loose coupling:

Location of the called system (its physical address): Does your application use direct URLs for accessing systems, or is the application decoupled via an abstraction layer that is responsible for maintaining connections between systems? The Services Registry and the service group paradigm used in SAP NetWeaver CE are good examples of what such an abstraction might look like. Using an enterprise service bus (ESB) is another example. The point is that the application should not hard-code the physical address of the called system in order to truly be considered loosely coupled.

Number of receivers: Does the application specify which systems are the receivers of a service call? A loosely coupled composite will not specify particular systems but will leave the delivery of its messages to a service contract implementation layer. A tightly coupled application will explicitly call the receiving systems in order; a loosely coupled application simply makes calls to the service interface and allows the service contract implementation layer to take care of the details of delivering messages to the right systems.

Availability of systems: Does your application require that all the systems you are connecting to be up and running all the time? Obviously, this is a very difficult requirement, especially if you want to connect to external systems that are not under your control. If the answer is that all systems must be running all the time, the application is tightly coupled in this regard.

Data format: Does the application reuse the data formats provided by the backend systems, or are you using a canonical data type system that is independent of the type systems used in the called applications? If you are reusing the data types of the backend systems, you probably have to struggle with data type conversions in your application, and this is not a very loosely coupled approach.

Response time: Does the application require called systems to respond within a certain timeframe, or is it acceptable for the application to receive an answer minutes, hours, or even days later?
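On the first point, here is a minimal sketch of such an abstraction layer (all names are hypothetical, not from any particular registry product): callers resolve a logical service name instead of hard-coding a URL, so only the registry implementation knows the physical address.

```csharp
using System;
using System.Collections.Generic;

interface IServiceRegistry
{
    // Maps a logical service name to its current physical address.
    Uri Resolve(string logicalServiceName);
}

class ConfigServiceRegistry : IServiceRegistry
{
    // In a real system this map would be loaded from configuration,
    // a services registry product, or an ESB; it is inlined here only
    // to keep the sketch self-contained.
    private readonly Dictionary<string, Uri> _endpoints =
        new Dictionary<string, Uri>
        {
            { "CustomerService", new Uri("https://internal.example.com/customers") }
        };

    public Uri Resolve(string logicalServiceName)
    {
        return _endpoints[logicalServiceName];
    }
}
```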

Related

Microservices communication message templates approach

I'm trying to build a microservice-based project on ASP.NET Core (Web API).
I have independent components which communicate with each other and with the external world.
So I have "connection points" between services: the view model (input data) of one is the equivalent of the request/response of another, and so on.
I think there are some best practices for this case that would let me avoid creating tons of identical code, am I right?
More concretely, let's look at the situation where I have, for example, data in a database, and a microservice which gets information from the DB (possibly transforming or widening it a little) and gives it to the caller. Is it possible to avoid duplicating the code for storing and returning information from the database?
Thank you.
What you are talking about are models: input models and output models (DTOs).
If your projects are part of the same solution, then you can probably have a shared project or class library, to reuse your models.
If not, create a NuGet package, distribute it via your own feed and use it in all the projects that require it.
In order for this to work, you need to keep this project very simple. Preferably it should have no dependencies at all, so you can reference it without unintended consequences. If you keep it very simple then it can work well.
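As a sketch of what such a shared library might contain (the type and property names are hypothetical): plain DTOs only, no behavior and no references to other projects.

```csharp
using System;

namespace MyCompany.Contracts
{
    // A plain transfer model with no logic and no external dependencies,
    // safe to share between projects via a reference or a NuGet feed.
    public class OrderDto
    {
        public Guid Id { get; set; }
        public string CustomerName { get; set; }
        public decimal Total { get; set; }
    }
}
```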
Mostly it depends on your use case, overall intent and solution architecture.
Microservices are meant to be autonomous from a development and deployment point of view, and they shouldn't know about each other, or should know as little as possible. The more they know about other microservices, the higher the coupling. Each should own its model and the data needed for what it was created for (in order to meet its responsibility). You can achieve this using, for example, event-based integration.
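A rough sketch of what event-based integration can look like (the bus interface and event name are hypothetical, not from any specific library): the publishing service announces a fact about its own data, and subscribers keep whatever copy they need, so neither service queries the other directly.

```csharp
using System;

// An integration event describes a fact that already happened.
public class OrderPlaced
{
    public Guid OrderId { get; set; }
    public decimal Total { get; set; }
}

// Stand-in for whatever message bus you use (NServiceBus, RabbitMQ, ...).
public interface IEventBus
{
    void Publish<TEvent>(TEvent evt);
}

public class OrderService
{
    private readonly IEventBus _bus;
    public OrderService(IEventBus bus) { _bus = bus; }

    public void PlaceOrder(Guid orderId, decimal total)
    {
        // ...persist the order in this service's own database...
        _bus.Publish(new OrderPlaced { OrderId = orderId, Total = total });
    }
}
```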
In this scenario I don't see a need for any code reuse. Every microservice will have different inputs and logic behind them. You should strive for this in your project.
If your microservices are too chatty (for example, they often need to ask other microservices for data), you probably made a mistake with their boundaries and you should consider designing them again. You should also avoid creating microservices which are just browsers for their databases.
The next thing to point out is the DRY principle, and why it is not applicable in the microservices world. In the OOP world it is common to apply this principle, which is why most developers will try to use it with microservices too. But if you try to apply it to microservices, you'll end up with high coupling and you won't be able to develop them truly independently. Code reuse and data redundancy are not as bad as you probably think.
So, to wrap up: as I said at the beginning, it depends. If your "microservices" are part of one solution and you're, for example, referencing them in code, you can't really call them microservices, and you can use a solution like Andrei described. But if they are not, and you really care about their independence (and you're following what I mentioned above), you should not share code among different microservices, and there won't really be a need to. But if different microservices really do use the same code (even when they're well designed), don't be afraid to just reuse it. You'll see that it pays off.
Microservices are not a silver bullet for every need, and you should be aware of that. As a further reference I recommend this free book.

Is it possible to implement microservices with oData.net

I've been reading about the Microservice Architecture and, with the limited valuable information available on the internet, I believe I have a fair understanding of it from the theory point of view. I understand that at a high level this architecture suggests moving away from monoliths and having small, independent services. However, all the examples that I see on the internet suggest writing loosely coupled Windows services (daemons in the case of non-MS implementations) connected to an ESB. I understand that writing small, loosely coupled web services that adhere to SRP also fits the bill of microservices.
That said, OData.NET services, where all OData controllers (microservices?) are deployed as a monolith, seem to be a clear violation of the Microservice Architecture pattern. Is it correct to say that OData.NET is not designed to work as microservices? If your answer is no, then please explain with the help of an example. Also, help me understand how to fit the API gateway pattern into the mix.
OData does fit microservices; however, microservices are not a good fit for OData. What I mean is that there is really nothing stopping you from exposing OData in a microservice.
However, by doing so you typically expose a large part of the inner data structure of the microservice. That in turn increases the coupling between different services, and makes it harder to change a service due to dependencies.
My own personal rule of thumb is to expose as small an API as possible from each service. And the data structures that I expose are not the same as the internal ones. They might be flattened, or a union of data from different internal entities.
My reasoning is: if you are going to create separate services, try to separate them as much as possible. Otherwise you are just building a monolith that happens to run in a couple of different Windows services.
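A small sketch of that rule of thumb (the entity and property names are hypothetical): the exposed model is a flattened union of internal entities, so internal refactorings don't ripple out to consumers.

```csharp
using System;

// Internal entities, free to change shape at any time.
class Customer
{
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public Address Address { get; set; }
}

class Address
{
    public string City { get; set; }
}

// The public wire model: small, flat, and decoupled from the above.
public class CustomerSummary
{
    public Guid Id { get; set; }
    public string DisplayName { get; set; }
    public string City { get; set; }
}

static class CustomerMapping
{
    public static CustomerSummary ToSummary(Customer c)
    {
        return new CustomerSummary
        {
            Id = c.Id,
            DisplayName = c.FirstName + " " + c.LastName,
            City = c.Address.City
        };
    }
}
```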
OData is entirely valid as a method for exposing a microservice; exposing an explicit table, however, isn't microservices. So I don't agree completely with jgauffin. There is no reason why an API cannot be made available using OData. Where I do agree with jgauffin is that an API should have a small and flat footprint that is decoupled from the detailed data structures of the source or destination. It is therefore up to the service calling it to transform the result, but this means that the generic format of the API can be reused as long as the business need is there, and technical platforms can be switched as required.

How to implement a maintainable and loosely coupled application using DDD and SRP?

The reason for asking this question is that I've been wondering how to stitch all these different concepts together. There are many examples and discussions on, e.g., DDD, Dependency Injection, CQRS, SOA, and MVC, but not so many examples of how to put them all together in a flexible way.
My goal:
Develop modules that with little or no modification can stand on their own
Changing or reworking the UI should be as easy as possible (i.e. the UI should do as little as possible, and be "stupid")
Use documented patterns and principles
To make it easier to ask a concrete question, the main architecture now looks like this:
The example shows how to add a note to an employee. Employee Management is one bounded context. Employee has several properties, among those an ICollection<Note>.
The bounded context is, in my understanding, the logical place to separate code. Each BC is a module. Most of the time I find each of them can warrant its own UI if needed (i.e. some modules might be made available for Windows Phone).
The Domain holds all business logic.
The infrastructure holds the repository implementation, and services to send mail, save files, and other utilities that do not belong in the domain. I'm thinking of making some of the common service features that I have to use in several domains (like sending e-mail) into a sort of API that I could reference, to save implementing the same things across several BCs.
The query layer holds all queries except the GetById that I need in the repository to fetch an object. The query layer can query other persistence instances, and will probably need to change somewhat for each UI.
The WCF or Web API service is kind of my application layer; it might belong in infrastructure and not on the outside. This service also sets up the dependencies, so all the UI needs to do is ask for information and send commands.
The process starts with the blue arrows. Read the model since that has most of the information.
In step 1, the EmployeeDto in this example is just a subset of the employee's properties, to show the user information about the employee they need to make a note on (like a note about new experience or something like that).
So, the questions are:
Does implementing a layered architecture like this really involve so much mapping, or have I missed something?
Is it recommended (or even smart) to use a WCF service to run the main logic like this (it practically is my application service)?
Are there alternatives to WCF without having my domain objects in my UI layer?
Is there anything wrong with this implementation? Any pitfalls to look out for?
Do you have any good examples to recommend looking at that can help me understand how all these concepts are supposed to work together?
Update:
I've read through most of the articles now (quite a bit of reading), except for the paid book (which requires a bit more time). All of them are very good pointers, and the way of thinking of WCF more as an adapter seems to be a good answer to question 2. jgauffin's work on his framework is also very interesting if I plan to go that route.
However, as mentioned in some of the comments beneath, I feel some of the examples tend towards recommending or implementing event and/or command sourcing, message buses, and so on. To me it is overkill to plan for that level of scaling right now. Like many business applications, this one has a "large" (in terms of an internal application; think a few thousand at most) number of users working on a large set of data, not a highly collaborative domain in the sense of needing the event and command queues often associated with CQRS to cope with that.
Based on the answers below, the approach I'll start with will be based on the model above and the answers, like this:
I'll just have to cope with mapping. The pros outweigh the cons.
I'll pull application services back into the infrastructure and consider WCF as an "adapter".
I'll use command objects and send them to the application service, not polluting my UI with domain objects.
To keep complexity down, I'll try to manage without event/command sourcing, message buses, etc. for now.
In addition I just wanted to link to this blog post by Udi Dahan about CQRS; I think things like this keep complexity down unless they are really needed.
There is a trade-off between mapping and layers. One reason certain mappings exist is that appropriate abstractions aren't available or feasible. As a result, it is often easier to just explicitly map between layers than to try to implement a framework that infers the mappings; but I digress, as this hinges on a philosophical discussion of the issue.
The WCF or Web API service should be very thin. Think of it as an adapter in a hexagonal architecture. It should delegate everything to an application service. There is a conflation of the term "service" here which causes confusion. Overall, the goal of WCF or Web API is to "adapt" your domain to a specific technology such as HTTP. WCF can be thought of as implementing an open host service in DDD lingo.
You mentioned Web API, which is an alternative if you want HTTP. Most importantly, be aware of the role of this adapting layer. As you state, it is best to have the UI depend on DTOs and generally on the contract of a service implemented with WCF or Web API or anything else. This keeps things simple and allows you to vary the implementation of your domain without affecting consumers of open host services.
You should always be on the lookout for needless complexity. Layering is a trade-off and sometimes it can be overkill. For example, in an app that is primarily CRUD, there is no need to layer this much. Also, as stated above, don't think of WCF services as being application services. Instead, think of them as adapters between a transport technology and application services. In turn, think of application services as a facade over your domain, regardless of whether your domain is implemented with DDD or a transaction script approach.
What really helped me understand is the referenced article on the hexagonal architecture. This way, you can view your domain as being at the core and you layer things around it, adapting your domain to infrastructure and services. What you have seems to already follow these principles. A great, in-depth resource for all of this is Implementing Domain-Driven Design by Vaughn Vernon, specifically the chapter on architecture.
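To make the adapter idea concrete, here is a minimal sketch (the interface and operation names are hypothetical): the Web API controller only translates the HTTP call into an application service call, and contains no logic of its own.

```csharp
using System;
using System.Web.Http; // ASP.NET Web API

public interface IEmployeeAppService
{
    // The application service is a facade over the domain.
    void AddNote(Guid employeeId, string noteText);
}

public class EmployeeController : ApiController
{
    private readonly IEmployeeAppService _appService;

    public EmployeeController(IEmployeeAppService appService)
    {
        _appService = appService; // injected by the container
    }

    [HttpPost]
    public void AddNote(Guid employeeId, [FromBody] string noteText)
    {
        // Pure adaptation: no business rules here.
        _appService.AddNote(employeeId, noteText);
    }
}
```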
Does implementing a layered architecture like this really involve so much mapping, or have I missed something?
Yes. The thing is that it's not the same object; it's different representations of the same object, specialized for each use case. A view model contains logic to update the GUI; a DTO is specialized for transfer (it might be normalized to ease transfer); and so on. They might look the same, but they really aren't.
You could of course try to put all adaptations into a single class, but that would not be very fun to work with when your application grows.
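A small sketch of the distinction (the names are hypothetical): the DTO carries raw data across the wire, while the view model reshapes the same data for display.

```csharp
using System;

public class EmployeeDto            // transfer representation
{
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime HiredUtc { get; set; }
}

public class EmployeeViewModel      // presentation representation
{
    public string FullName { get; set; }
    public string HiredDisplay { get; set; }

    public static EmployeeViewModel From(EmployeeDto dto)
    {
        return new EmployeeViewModel
        {
            FullName = dto.FirstName + " " + dto.LastName,
            HiredDisplay = dto.HiredUtc.ToLocalTime().ToShortDateString()
        };
    }
}
```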
Is it recommended (or even smart) to use a WCF service to run the main logic like this (it practically is my application service)?
You need some kind of networking layer. I wouldn't let all client applications touch my database. It would create a maintenance nightmare if you mess with the database schema (if some of the clients still run the old version).
By using a server it's much easier to maintain version differences.
Do note that a WCF service definition should be treated as constant once it is in use. Any changes should be defined in a new interface (for instance, MyService2).
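For instance (the operation names are hypothetical), the frozen contract stays as-is and the extension lives in a new interface:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string GetEmployeeName(int employeeId);
}

// Existing clients keep using IMyService untouched; new capabilities
// are only added through the new versioned contract.
[ServiceContract]
public interface IMyService2 : IMyService
{
    [OperationContract]
    string GetEmployeeDepartment(int employeeId);
}
```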
Are there alternatives to WCF without having my domain objects in my UI layer?
You could take a look at my framework. Start post: http://blog.gauffin.org/2012/10/writing-decoupled-and-scalable-applications-2/
Is there anything wrong with this implementation?
Not that I can see. Looks like you have a pretty good grasp of the concepts and how they should be used.
Any pitfalls to look out for?
Don't try to be lazy with the queries and commands. Don't make them a bit more generic to fit several use cases; it will come back and bite you when the application grows. Smaller classes are easier to maintain.
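For example (the names are hypothetical), prefer one small command class per use case over a generic command that accumulates optional fields:

```csharp
using System;

// Each command does exactly one thing; none of them will grow
// flags or optional fields to serve unrelated use cases.
public class AddEmployeeNote
{
    public Guid EmployeeId { get; set; }
    public string NoteText { get; set; }
}

public class ChangeEmployeeAddress
{
    public Guid EmployeeId { get; set; }
    public string NewStreet { get; set; }
    public string NewCity { get; set; }
}
```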
Do you have any good examples to recommend looking at that can help me understand how all these concepts are supposed to work together?
My linked blog post and all the other articles in that series.

application completely SOA?

Is it wise to build a large application entirely based on SOA? Or just some portions? User account logins, accounting, GIS mapping, sales, etc.?
In other words, would it be wise to build the GUI of such an application in HTML and JavaScript, doing all its exchanges via AJAX to .NET web services on the back end?
I can't see it being worth losing all the .NET .aspx functionality such as forms authentication, view state, etc. But my co-worker says that if we are going to go SOA, there is no need for .NET on the front end. I think there should be some sort of balance. Where do you draw the line? Should all calls to the database go through the web services?
I just want to say that "with SOA we're building for change, while with traditional systems engineering, we're building for stability."
The problem with stability, of course, is that it only takes the business so far; if the organization requires business agility, then it is much better off implementing SOA.
So it solely depends on what you want to achieve; you are the one who should draw the boundary.
I read this in an article on SOA a few days back, as I too am working on SOA.
EDIT:
Meanwhile I came across this article and thought of sharing it with you.
The video explains the current state of SOA quite well, along with how different people view it.
I'm getting the words of the song 'If I Had a Hammer' coming to mind. SOA is an architectural approach to developing software as a series of services. In my opinion this is best for systems that have less-than-immediate latency, limited bandwidth, high cost of access, etc. (these are all obviously highly subjective). You don't need full SOA; just getting loose coupling between components is, I would argue, a good goal to achieve.
DB calls can go through a service (take ADO.NET Data Services, for example); however, you really have to weigh up what the service is to provide. Take caching: a decent approach to SOA will consider that data may need to be cached to reduce service load. So can your data be stale in the UI? Are you allowing that use case? Is it right for login info to be stale? (A rough example, I know, but possibly something that needs to be addressed.)
All in all, it depends. I think some things lend themselves to SOA very well. If you take a DDD approach, then the services that represent domains would probably do so. In this way your UI talks to domain services and not to rows in a table, as the DB is abstracted behind the domain services.
Don't use one methodology to solve all problems.
See this SO question too
It's a service oriented architecture, not a service exclusive architecture.
Presentation logic and plumbing have to live somewhere; it all depends on where it makes the most sense for it to live.
For example, let's say you have a UI component that relies on a highly chatty but efficient set of calls to a database to generate a complex analysis of something (take your pick). If your web browser is making all those calls, you introduce massive network latency and concurrency issues. If a web service makes all those calls, you are potentially putting presentation logic into it to format that result.
If you are using session state (or web services, period), you are essentially using ASP.NET anyway. Try uninstalling it and see if your web services still run.
If presentation logic needs to live on the server side, it is better for it to live within a framework intended for presentation rather than a web service, IMO. If you haven't looked at MVC 2, do so. It makes it incredibly easy to set up an application that melds browser and server UI support (for example, jQuery validator controls backed by server-side validation).
Conversely, the web browser provides an expressive platform. Assuming browser support and team knowledge, the AJAX/SOA architecture you describe is a good one. I'm using it more and more and trying to make my server pages cleaner and simpler but I have no plans to exclude ASP.Net from my toolkit any time soon.
In an SOA, the client implementation should be completely disconnected from the back-end web service. The service should be able to be consumed by ANY client. If you are using .NET on the back end and front end just because they can be coded to communicate directly, then you are missing the point, because now they are tightly coupled and what you have is a stovepipe application. The client should have no idea how the server side is implemented; it shouldn't matter whether the back-end web service is built using .NET, Java, or whatever.
In a true SOA, you should be able to search for services in the services repository, perhaps tie the outputs in with other services or use XSLT to create alternative outputs that weren't necessarily considered when the original service was built, and consume it in a standard way in any client on the front end.
It sounds like what you're really asking is how to build a single application. The point of an SOA is to provide standard data sets through re-usable interfaces that have no specific application or implementation in mind. Starting out by building a single application with the entire back end comprised of SOA services would be a huge undertaking. In MY mind, each back-end service should be built because of its intrinsic value all on its own, and be provided to the entire SOA "domain". Then when you or I decide to make a client that does X, Y, and Z, we can just go find those capabilities in the SOA and ingest them.

Web services with the repository pattern in C# and WCF?

Can anyone confirm the best way to integrate the repository pattern with web services? Well, actually I have my repository pattern working now in C#. I have 3 projects: DataAccess, Services, and my presentation layer.
The problem is that my presentation layer is a number of things: I have an ASP.NET MVC site, I have a WPF application, and we are about to create another site; plus, an external company needs access to our repository also.
Currently I have just added the services layer as a reference to each of the sites... But isn't the normal way to provide data access via web services (WCF)? If that is the case, will this break the services layer, or should I convert the services layer to a web service?
Does anybody know what the pros and cons of this are, e.g. speed?
I think I understand your dilemma. If I understand correctly then your services layer consists of pure fabrications. http://en.wikipedia.org/wiki/GRASP_(Object_Oriented_Design).
If I assume correctly above, then your services layer should not be impacted at all by the introduction of WCF. WCF is essentially an additional presentation layer that provides interoperability, sitting between your UI presentation layer and any business logic layers. So your WCF services would then call your services layer, which may access repositories as needed.
WCF provides a high degree of interoperability, so I think it is an excellent choice. I would use basicHttp bindings, though, if you intend to interop with different programming languages, as this is the most flexible. Don't worry about the speed; there are plenty of solutions out there to mitigate any bottlenecks that result from WCF.
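As a minimal sketch of consuming such a service over basicHttpBinding (the contract and address are hypothetical):

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IProductService
{
    [OperationContract]
    string GetProductName(int productId);
}

class Client
{
    static void Main()
    {
        // basicHttpBinding speaks plain SOAP 1.1 over HTTP, which keeps the
        // endpoint consumable from non-.NET platforms as well.
        var factory = new ChannelFactory<IProductService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://services.example.com/products"));

        IProductService proxy = factory.CreateChannel();
        Console.WriteLine(proxy.GetProductName(42));
        factory.Close();
    }
}
```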
Good luck, and let me know if I can help in any other way.
Well first - not all callers have to use the same repository API; this is especially true of an external company.
WCF is interface based. This means that if you need to re-use some logic code, it is possible to use IoC/DI to inject WCF rather than a DAL (but using the same interface) - by using assembly sharing. It sounds like this is what you are doing. This works in many cases, but not all; fundamentally web-service based APIs often need to be designed differently in order to be optimal. It also isn't 100% pure from an SOA viewpoint, but it gets the job done, and allows more intelligent domain entities, so in an intranet (etc) scenario it is (IMO) perfectly reasonable.
An external caller would typically just use the wsdl/mex-based APIs (rather than assembly sharing), but anything is possible...
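To illustrate the shared-interface point (all names are hypothetical): the consuming code depends only on an interface, and composition decides whether that interface is backed by the local DAL or by a WCF channel to the same contract.

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface ICustomerRepository
{
    [OperationContract]
    string GetCustomerName(int id);
}

// In-process implementation used by intranet apps that share the assembly.
public class SqlCustomerRepository : ICustomerRepository
{
    public string GetCustomerName(int id)
    {
        // ...query the database here...
        return "example";
    }
}

// The consumer never knows which implementation it received.
public class CustomerScreen
{
    private readonly ICustomerRepository _repo;
    public CustomerScreen(ICustomerRepository repo) { _repo = repo; }
    public void Show(int id) { Console.WriteLine(_repo.GetCustomerName(id)); }
}

class Composition
{
    static void Main()
    {
        // Local wiring:
        var local = new CustomerScreen(new SqlCustomerRepository());
        local.Show(1);

        // Remote wiring over WCF, same interface:
        var factory = new ChannelFactory<ICustomerRepository>(
            new BasicHttpBinding(),
            new EndpointAddress("http://services.example.com/customers"));
        var remote = new CustomerScreen(factory.CreateChannel());
        remote.Show(1);
    }
}
```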
Maybe web services are not the best way; if I have full access to the service assembly, then I suppose it's always better to share the services layer assembly with my applications.
My applications do similar things, but they all need to access the service layer - well, the business logic - and get back information...
In this case, is it always preferable to use assembly sharing with the service layer rather than providing a WCF web service using the HTTP protocol, or TCP on WCF, for example?
Thanks again
Whether to share your Service/API assemblies with your client applications is fairly subjective. If you are a full Microsoft shop and use .NET for your entire application stack, then I would say sharing the API is a great way to gain code reuse (you have to be careful how you design your API so you don't bleed domain concerns, like repositories, into your presentation). If you don't have any plans to migrate your client applications to other platforms (i.e. you plan to stay on .NET for the foreseeable future), then I think it's perfectly acceptable to share your Service/API assemblies (and even then, in a multi-platform client environment, sharing the Service/API with .NET clients should still be acceptable).
There is always a trade-off between the 'architecturally ideal' and the 'practical and achievable within budget'. You can spend a LOT of time, money, and effort trying to achieve the architecturally ideal, when the gap between that and the practical often isn't really that much. The choice NOT to share the API, and essentially recreate it to maintain "correct" SOA by consuming only the contract, can actually increase work and introduce maintenance hassles that quite possibly are not worth it for your particular project at this particular time. Given that you are already generally 'service-oriented', if at a future point in time you need the benefit that contract-only consumption on the client can offer, then you're already set to go there. But don't push too far too soon.
Given your needs, from what I have been able to glean from these posts so far, I think you're on the right track from your services down, too. A repository (a la Evans, DDD) is definitely a domain concern, and as such you really shouldn't have to worry about it from the perspective of your presentation layer. Your services are the gateway to your domain, which is the home of your business logic. Repositories are just a support facility that helps you achieve domain isolation from a data store (they are glorified collections, really, and to be quite frank they can be a bit of a pain in a dynamic and complex domain; simple data mappers (Fowler, PoEAA) are often a lot easier to deal with and less complex in the long run, and allow more adaptable behavior around your data retrieval logic to be centralized in your domain services). Aside from heavy use of AJAX calls to REST services, if you expose an adequate Services/API around your domain, that is the only thing your clients should have to worry about. Wrap up all the rest of your business logic entirely within the confines of your domain, and keep your clients as lightweight as possible and abstracted from concepts like 'Repository' or 'Data Mapper' and whatnot.
In my experience, the only non-service or API concept that needs to be shared across the Client-to-Domain boundary is Context...and it can be notoriously difficult to cross that boundary in a service-oriented application.
