Application completely SOA? - C#

Is it wise to build a large application entirely based on SOA? Or just some portions? User account logins, accounting, GIS mapping, sales, etc.?
In other words, would it be wise to build the GUI for such an application in HTML and JavaScript, doing all its exchanges via AJAX calls to .NET web services on the back end?
I can't see it being worth losing all the .NET .aspx functionality such as forms authentication, view state, etc. But my co-worker says that if we are going to go SOA, there is no need for .NET on the front end. I think there should be some sort of balance. Where do you draw the line? Should all calls to the database go through the web services?

I just want to say that "with SOA we're building for change, while with traditional systems engineering, we're building for stability."
The problem with stability, of course, is that it only takes the business so far; if the organization requires business agility, then it's much better off implementing SOA.
So it solely depends on what you want to achieve; you are the one who should draw the boundary.
I read this in an article on SOA a few days back, as I'm working on SOA too.
EDIT:
Meanwhile I came across this article and thought of sharing it with you.
The video explains the current state of SOA, and how different people view it, quite well.

I'm getting the words of the song 'If I Had a Hammer' coming to mind. SOA is an architectural approach to developing software as a series of services. In my opinion it is best suited to systems that can tolerate less-than-immediate latency, limited bandwidth, high access costs, etc. (these are all obviously highly subjective). You don't need full SOA; just aim for loose coupling between components, which I would argue is a good goal in itself.
DB calls can go through a service; take ADO.NET Data Services, for example. However, you really have to weigh up what the service is meant to provide. Take caching: a decent approach to SOA will consider that data may need to be cached to reduce service load. So can your data be stale in the UI? Are you allowing that use case? Is it right for login info to be stale? (A rough example, I know, but possibly something that needs to be addressed.)
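To make that staleness trade-off concrete, here is a minimal sketch (my own illustration, with made-up names like CachedOrderService): cache a service result for a bounded window, and decide explicitly whether that window is acceptable for the data in question.

    using System;
    using System.Runtime.Caching;

    public class CachedOrderService
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;
        private static readonly TimeSpan MaxStaleness = TimeSpan.FromMinutes(5);

        public string GetOrderStatus(int orderId)
        {
            string key = "order-status-" + orderId;
            if (Cache.Get(key) is string cached)
                return cached; // may be up to 5 minutes stale - is that acceptable here?

            string fresh = CallRemoteService(orderId); // the expensive service call
            Cache.Set(key, fresh, DateTimeOffset.Now.Add(MaxStaleness));
            return fresh;
        }

        // Stand-in for the real web service call.
        private string CallRemoteService(int orderId) => "Shipped";
    }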
All in all - it depends. I think some things lend themselves to SOA very well. If you take a DDD approach, then the services that represent domains probably do. In this way your UI talks to domain services, not to rows in a table, because the DB is abstracted behind the domain services.
Don't use one methodology to solve all problems.
See this SO question too

It's a service oriented architecture, not a service exclusive architecture.
Presentation logic and plumbing have to live somewhere; it all depends on where it makes the most sense for it to live.
For example, let's say you have a UI component that relies on a highly chatty but efficient set of calls to a database to generate a complex analysis of something (take your pick). If your web browser is making all those calls, you introduce massive network latency and concurrency issues. If a web service makes all those calls, you are potentially putting presentation logic into it to format that result.
If you are using Session state (or web services period), you are essentially using ASP.Net anyway. Try uninstalling it and see if your web services still run.
If presentation logic needs to live on the server side, it is better for it to live within a framework intended for presentation than in a web service, IMO. If you haven't looked at MVC 2, do so. It makes it incredibly easy to set up an application that melds browser and server UI support (for example, jQuery validator controls backed by server-side validation).
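For example (a hedged sketch, with invented model and controller names): in MVC, DataAnnotations drive server-side validation, and the framework can mirror the same rules with client-side jQuery validation.

    using System.ComponentModel.DataAnnotations;
    using System.Web.Mvc;

    public class RegisterModel
    {
        [Required, StringLength(20)]
        public string UserName { get; set; }

        [Required]
        public string Email { get; set; }
    }

    public class AccountController : Controller
    {
        [HttpPost]
        public ActionResult Register(RegisterModel model)
        {
            // The server-side check always runs, even if client-side
            // validation was bypassed or JavaScript is disabled.
            if (!ModelState.IsValid)
                return View(model);

            // ... create the account ...
            return RedirectToAction("Index", "Home");
        }
    }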
Conversely, the web browser provides an expressive platform. Assuming browser support and team knowledge, the AJAX/SOA architecture you describe is a good one. I'm using it more and more and trying to make my server pages cleaner and simpler but I have no plans to exclude ASP.Net from my toolkit any time soon.

Client implementation should be completely disconnected from the back-end web service in an SOA. The service should be able to be consumed by ANY client. If you are using .NET on the back end and front end because they can be coded to communicate directly, then you are missing the point: now they are tightly coupled, and what you have is a stovepipe application. The client should have no idea how the server side is implemented; it shouldn't matter whether the back-end web service is built using .NET, Java, or whatever.
In a true SOA, you should be able to search for services in the services repository, perhaps tie the outputs in with other services, or use XSLT to create alternative outputs that weren't necessarily considered when the original service was built, and consume them in a standard way in any front-end client.
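As a rough illustration of the XSLT point (the file names here are placeholders, not part of the original answer), re-shaping a service's XML output takes nothing more than a stylesheet applied by the consumer:

    using System.Xml.Xsl;

    class TransformExample
    {
        static void Main()
        {
            var xslt = new XslCompiledTransform();
            xslt.Load("reshape-output.xslt");       // hypothetical stylesheet
            xslt.Transform("service-response.xml",  // output of the original service
                           "alternative-view.xml"); // a shape the service never anticipated
        }
    }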
It sounds like what you're really asking is how to build a single application. The point of an SOA is to provide standard data sets through re-usable interfaces that have no specific application or implementation in mind. Starting out by building a single application whose entire back end is composed of SOA services would be a huge undertaking. In MY mind, each back-end service should be built because of its intrinsic value all on its own, and be provided to the entire SOA "domain". Then, when you or I decide to make a client that does X, Y, and Z, we can just go find those capabilities in the SOA and ingest them.

.NET distributed layered application

I have been developing n-tier applications in .NET for many years, but I still have no idea how to distribute the tiers/layers (DLLs) to other servers.
Let's say I have an MVC web application with 4 projects: MVC (UI), Business, Service and Data. Everything works fine if all the class library DLLs are on one server.
If I want to scale out the application by distributing the Service layer (DLL) and Data layer (DLL) to 2 other servers, should I convert the class libraries to WCF Service Library projects (with TCP or named pipes as the communication protocol, for better performance)? Or should I use another technology like .NET Remoting or Web API?
Will that be a lot of work?
Is that one of the purposes of creating a multi-tier application?
Thanks.
Update:
Do you have any links (from Microsoft) that explain in detail how to scale out an n-tier application to multiple servers by distributing the DLLs?
If I want to scale out the application by distributing the Service layer (DLL) and Data layer (DLL) to 2 other servers, should I convert the class libraries to WCF Service Library projects (with TCP or named pipes as the communication protocol, for better performance)?
Yep - since they are on different machines, you need some kind of communication mechanism that goes beyond simple DLL invocation.
Or should I use another technology like .NET Remoting or Web API?
Which approach you choose depends on many factors like complexity, performance, and so on. There are many options, like:
WCF webservices
Simple REST calls with WebApi
a message bus, e.g. NServiceBus
...
Obviously, remote calls will also be slower, with a potential impact on performance.
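As a minimal sketch of the WCF option (self-hosted over net.tcp; the contract, names and port below are hypothetical, not taken from the question), the conversion amounts to putting a [ServiceContract] interface in front of the existing layer:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        string GetOrderStatus(int orderId);
    }

    public class OrderService : IOrderService
    {
        // In a real conversion this would delegate to the existing
        // Business/Data layer assemblies.
        public string GetOrderStatus(int orderId) => "Shipped";
    }

    class Host
    {
        static void Main()
        {
            // Self-host the service; in production you might use IIS/WAS instead.
            using (var host = new ServiceHost(typeof(OrderService),
                new Uri("net.tcp://localhost:8081/orders")))
            {
                host.AddServiceEndpoint(typeof(IOrderService),
                    new NetTcpBinding(), string.Empty);
                host.Open();
                Console.WriteLine("Service running. Press Enter to stop.");
                Console.ReadLine();
            }
        }
    }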
Will that be a lot of work?
It will be more work, and in my opinion that "more work" should really be justified. Keep your architecture as simple as possible, or better, only as complex as really needed.
An alternative approach could be to have a deployment pipeline that deploys your entire application to several server instances, with an intelligent load-balancing strategy in front. The only thing you need to pay attention to in that case is properly sharing sessions between your instances (stateless would be better ;)).
My 50 cents...
As far as I know WCF replaced .NET Remoting (MSDN).
Anyway... someone before me said it: if you don't have to scale the application, don't. The communication cost alone between services of any kind will slow things down considerably, probably to the extent that it would be slower than it is now (and I'm assuming speed is the reason for scaling).
Prior to scaling, I would first see where the bottleneck really is. For instance, if the problem is your DB server, then moving the service and data layers to another server is useless, as you will still be using the same database. So you first need to find out what your bottleneck is.
The easiest and least painful way to scale (in my opinion) would be to just add another IIS server and a load balancer that directs traffic to either one of them. You would need to store sessions in a database or on a dedicated server, but that is about all the change you would need. Plus, if one of your servers fails, the other will still operate.
By default, avoid premature optimization.
If you only have a web site, I would keep it as simple as possible and only create logical layering. There are a number of options: a typical 3-tier design, onion architecture, etc. The key is that later, if really, really needed, you could still refactor your code and make your data layer a separate physical tier. But unless you are creating the next Amazon or something, this will probably not be the case.
If you are in the situation, for example, that you have a web site but also have to expose a web API, you could choose to have the web site consume the web API. In fact, your web site would then become a very thin layer (maybe not even using ASP.NET MVC), because most of the logic would live in the web API.
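A sketch of what that thin layer might look like (the endpoint URL is a placeholder): the site just calls the web API over HTTP and renders the result, holding no business logic of its own.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ThinSite
    {
        static async Task Main()
        {
            using (var http = new HttpClient())
            {
                // Hypothetical Web API endpoint; all logic lives behind it.
                var json = await http.GetStringAsync("http://api.example.com/products/7");
                Console.WriteLine(json); // a real site would render this, not print it
            }
        }
    }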
PS: .NET Remoting is old technology; consider WCF or Web API instead.

Web Application Questions

I am having a bit of trouble finding relevant and updated information. A lot of what I find is from 2001/2002, and the majority of it doesn't apply.
Basically, I want to create a server/client application. The server will be run from a single dedicated machine, and I will install the client on numerous other machines (remotes).
What I am not sure about (I have never used an ASP.NET web application) is: do I need to plan ahead for it, or can it be added on top later?
I am assuming I can just create the server/client applications in C#/.NET, then create the ASP.NET web app later to give a web-based front end to the server application. If this is correct, can anyone link me to good resources for this type of information? As I mentioned, everything I have found is either old or doesn't apply.
OK, I think I get what you're asking, even though it's not that clear.
You're looking to build a client/server application initially, and later to provide similar functionality via a web-based application. Correct? If so, then:
To some extent you do need to plan and design for it. This is what I recommend: let's assume you're using a layered architecture for your server-side application, and these layers are:
1. TCP/IP Interface layer
2. Business layer
3. Data layer
The business layer and data layer will be reused in your ASP.NET application as well. Both of these layers MUST be completely agnostic of any TCP/IP and HTTP stuff.
The TCP/IP interface layer translates the TCP/IP-ness of your server application into pure C# method calls on normal data types, and makes calls into your business layer. If you follow this basic design, you will be able to reuse your business layer and data layer.
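A bare-bones sketch of that idea (the wire format, port and class names are all invented for illustration): the TCP layer parses the protocol and makes plain method calls, so the business layer never sees a socket.

    using System;
    using System.IO;
    using System.Net;
    using System.Net.Sockets;

    // Business layer: no knowledge of TCP/IP or HTTP.
    public class OrderService
    {
        public decimal GetOrderTotal(int orderId) => 42.50m; // stub
    }

    // TCP/IP interface layer: translates the wire format into method calls.
    public class TcpInterfaceLayer
    {
        private readonly OrderService _orders = new OrderService();

        public void Run()
        {
            var listener = new TcpListener(IPAddress.Loopback, 9000);
            listener.Start();
            while (true)
            {
                using (var client = listener.AcceptTcpClient())
                using (var reader = new StreamReader(client.GetStream()))
                using (var writer = new StreamWriter(client.GetStream()) { AutoFlush = true })
                {
                    // Wire format (invented): "GETTOTAL <orderId>"
                    var parts = (reader.ReadLine() ?? "").Split(' ');
                    if (parts.Length == 2 && parts[0] == "GETTOTAL")
                        writer.WriteLine(_orders.GetOrderTotal(int.Parse(parts[1])));
                    else
                        writer.WriteLine("ERR unknown command");
                }
            }
        }
    }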
EDIT
ASP.NET applications are assemblies. They run in the process space of another "application" (the worker process), which in turn runs in the process space of IIS. Nonetheless, the architecture I mention in my answer will work for you (I do this all the time) if you're careful about your TCP/IP interface layer being the barrier (and interface), in other words decoupling your TCP/IP-ness from your business layer.
For example, an .aspx page (or MVC controller, or ASP.NET handler) is an "HTTP interface layer". Used correctly, the "page" handles all of the HTTP/HTML stuff and "converts" all of the messaging into regular C# method calls on the business layer, completely decoupling the business layer from any knowledge of ASP.NET, HTTP, sessions and the like. The business layer, in fact, should have no knowledge of or dependency on anything to do with ASP.NET.
So if your TCP/IP service interface layer performs the same function (that is the sole responsibility of service interface layers), then you're good to go. And when the time comes, you'll slap an HTTP service interface layer onto your system (sharing the BL and DAL). Hope that makes sense.
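Continuing the sketch above (the controller name is invented), the HTTP service interface layer does the same translation, so both front ends share one business layer:

    using System.Web.Mvc;

    // HTTP interface layer: handles all HTTP concerns, then makes the same
    // plain C# call into the business layer as the TCP layer does.
    public class OrderController : Controller
    {
        private readonly OrderService _orders = new OrderService();

        public ActionResult Total(int id)
        {
            return Content(_orders.GetOrderTotal(id).ToString());
        }
    }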
It's very common to have more than one project in an ASP.NET-based web site, some of which have nothing to do with the web UI.
A good resource on this will be any beginner's ASP.NET tutorial. (I trust your googling skills :-))
Just make sure you separate the GUI from the implementation (for example, if you use WebForms to test it, make sure you don't rely on any WebForms-specific implementation).
I really recommend reading a bit about ASP.NET before starting the task, but generally, rest assured your C# projects are "pluggable" into an ASP.NET implementation.
Hope I got the question right..
http://www.asp.net/general/videos
http://msdn.microsoft.com/en-us/library/ms178093(v=VS.90).aspx
You don't have to create any client application, the client is the web browser.

How to Implement Loose Coupling with a SOA Architecture

I've been doing a lot of research lately about SOA and ESB's etc.
I'm working on redesigning some legacy systems at work now and would like to build them with more of an SOA architecture than they currently have. We use these services in about 5 of our websites, and one of the biggest problems we have right now with our legacy system is that almost every time we make bug fixes or updates, we need to re-deploy all 5 websites, which can be quite a time-consuming process.
My goal is to make the interfaces between services loosely coupled so that changes can be made without having to re-deploy all the dependent services and websites.
I need the ability to extend an already existing service interface without breaking or updating any of its dependencies. Have any of you encountered this problem before? How did you solve it?
I suggest looking at a different style of services than maybe you've been doing so far. Consider services that collaborate with each other using events, rather than request/response. I've been using this approach for many years with clients in various verticals with a great deal of success. I've written up quite a bit about these topics in the past 4 years. Here's one place where you can get started:
http://www.udidahan.com/2006/08/28/podcast-business-and-autonomous-components-in-soa/
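To give a flavor of the event style (this is my own toy illustration, not Udi's design; a real system would use a message bus such as NServiceBus), the key property is that the publisher has no idea which services react:

    using System;
    using System.Collections.Generic;

    public class OrderPlaced { public int OrderId; }

    public static class Bus
    {
        private static readonly List<Action<OrderPlaced>> Handlers =
            new List<Action<OrderPlaced>>();

        public static void Subscribe(Action<OrderPlaced> handler) => Handlers.Add(handler);

        public static void Publish(OrderPlaced evt)
        {
            // New subscribers can be added without changing the publisher.
            foreach (var handler in Handlers) handler(evt);
        }
    }

    class Demo
    {
        static void Main()
        {
            Bus.Subscribe(e => Console.WriteLine($"Billing saw order {e.OrderId}"));
            Bus.Subscribe(e => Console.WriteLine($"Shipping saw order {e.OrderId}"));
            Bus.Publish(new OrderPlaced { OrderId = 7 });
        }
    }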
Hope that helps.
There are a couple of approaches you can take. Our SOA involves XML messages sent to and from the services. One way we achieve what you describe is by avoiding data-binding libraries tied to our XML schema, and instead using a generic XML parser to get just the data nodes we want, ignoring those we aren't interested in. This way the service can add new nodes to the message without breaking anyone currently using it. We typically only do this when we need just one or two pieces of information from a larger schema structure.
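A small sketch of that technique (the XML and node names are invented): pull out only the nodes you need, so nodes added later by the service are simply ignored.

    using System;
    using System.Xml;

    class TolerantParsing
    {
        static void Main()
        {
            var doc = new XmlDocument();
            doc.LoadXml(@"<Order><Id>42</Id><Status>Shipped</Status>
                            <AddedInV2>ignored</AddedInV2></Order>");

            // XPath selects just the nodes we care about; unknown siblings
            // don't break this consumer.
            var id = doc.SelectSingleNode("/Order/Id").InnerText;
            var status = doc.SelectSingleNode("/Order/Status").InnerText;
            Console.WriteLine($"Order {id}: {status}");
        }
    }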
Alternatively, the other (preferred) solution we use is versioning. A version of a service adheres to a particular schema/interface. When the schema changes (e.g. the interface is extended or modified), we create a new version of the service; we may have 2 or 3 versions on the go at any one time. Over time we deprecate and then remove older versions, while gradually migrating dependent code onto newer versions. This way, code that depends on the service can continue using the existing version while any particular dependency 'upgrades' to the new version at its own pace. Which versions of a service are called is defined in a configuration file for the dependent code. Note that it is not only the schema that gets versioned, but all of the underlying implementation code as well.
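One way to express that in WCF (a hedged sketch; the namespaces and operations are made up) is to version the contract namespaces, so v1 consumers keep working while v2 adds operations:

    using System.ServiceModel;

    [ServiceContract(Namespace = "http://example.com/orders/v1")]
    public interface IOrderServiceV1
    {
        [OperationContract]
        string GetStatus(int orderId);
    }

    [ServiceContract(Namespace = "http://example.com/orders/v2")]
    public interface IOrderServiceV2
    {
        [OperationContract]
        string GetStatus(int orderId);

        [OperationContract]
        string GetTrackingNumber(int orderId); // added in v2; v1 clients unaffected
    }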
Hope this helps.
What you're asking isn't an easy topic. There are many ways you can go about making your Service Oriented Architecture loosely coupled.
I suggest checking out Thomas Erl's SOA book series. It explains everything pretty clearly and in-depth.
There are a few common practices for achieving loose coupling between services:
Use the doc/literal style of web services; think in terms of data (the wire format) instead of RPC, and avoid schema-based data binding.
Abide strictly by the contract when sending out data, but make few assumptions when processing incoming data; XPath is a good tool for that (loose in, tight out).
Use an ESB and avoid any direct point-to-point communication between services.
Here is a rough checklist for evaluating whether your SOA implements Loose Coupling:
Location of the called system (its physical address): Does your application use direct URLs for accessing systems, or is the application decoupled via an abstraction layer that is responsible for maintaining connections between systems? The Services Registry and the service group paradigm used in SAP NetWeaver CE are good examples of what such an abstraction might look like. Using an enterprise service bus (ESB) is another example. The point is that the application should not hard-code the physical address of the called system if it is to be considered truly loosely coupled.
Number of receivers: Does the application specify which systems are the receivers of a service call? A loosely coupled composite will not specify particular systems but will leave the delivery of its messages to a service contract implementation layer. A tightly coupled application will explicitly call the receiving systems in order; a loosely coupled application simply makes calls to the service interface and allows the service contract implementation layer to take care of the details of delivering messages to the right systems.
Availability of systems: Does your application require that all the systems you are connecting to be up and running all the time? Obviously, this is a very difficult requirement, especially if you want to connect to external systems that are not under your control. If the answer is that all systems must be running all the time, the application is tightly coupled in this regard.
Data format: Does the application reuse the data formats provided by the backend systems, or are you using a canonical data type system that is independent of the type systems used in the called applications? If you are reusing the data types of the backend systems, you probably have to struggle with data type conversions in your application, and that is not a very loosely coupled approach.
Response time: Does the application require called systems to respond within a certain timeframe, or is it acceptable for the application to receive an answer minutes, hours, or even days later?

Is a 3 (physical) tier architecture inefficient?

Note: When I refer to tier, I mean a physical tier. Many of the questions on this site relating to "tiers" are referring to logical layers, which is not what I'm asking about.
I am designing an app using a standard "3 layer" architecture, with presentation, business logic (BLL) and data access (DAL) layers. The technology is WPF, C#, LINQ and SQL Server 2008. My question relates to the physical architecture of this app.
I can place the BLL/DAL in a standard DLL which is loaded and run on the user's machine, making a 2-tier architecture: client machine and database server. But it would not be too difficult to turn the BLL/DAL into a WCF service which sits on an app server and is called from the user's machine. This would give me a 3-tier architecture: client machine, app server and database server.
My question is this: what is the advantage of using a 3-tier architecture? I've often been told that 3 tiers add scalability, but it's not immediately apparent to me why this would be. And surely you are going to take a performance hit with the same data having to make two hops over the wire: from database server to app server, then from app server to client machine.
I would appreciate the advice of experienced architects and developers out there.
It depends on the use of your application and your security requirements. If your application is used over the Internet and you store anything that is potentially sensitive in any way, physically separating the database is strongly recommended. Never, ever let anyone from the outside onto any machine with direct access to your database. People can and will attempt to break your security for no better reason than that they have nothing better to do.
Scalability can be a factor as well, both in front of the presentation layer (in front of the web servers) and in the database. Placing a load balancer in front of the presentation layer allows incoming requests to be routed to an array of machines that can be managed independently. Machines can be added to the pool in times of need and removed for maintenance. Placing load balancers between the other layers can have the same impact. The idea is to provide a flexible, dynamic back-end environment that can be adjusted as demand requires.
Whenever you find yourself asking the question "Is X inefficient?" you should, immediately, ask yourself three precursor questions:
By "inefficient," what resource do you think it should be using efficiently and may not be? Time? Space? Bandwidth? Development hours?
Why do you care? No, seriously: If you're going to spend even one minute answering this question, there has to be a reason. What is that reason?
Compared to what?
As far as your comment about scalability is concerned: for a time, I had the unfortunate responsibility of maintaining a system whose architect had been told that minimizing round-trips to the database would make an application scalable. He took that insight and ran with it. You can read about this project here. It occurs to me that I ought to have mentioned that at no point during the entire decade-plus lifetime of that application were there more than four users logged in simultaneously.
Inefficiency is in the eye of the beholder.
For example, having everything happen on the client may increase the memory footprint or CPU/network requirements of the client computers. If this work can be offloaded to a server or server farm, you may save having to do hardware upgrades of client PCs just to run your software. If more resources or upgrades are needed, they can be added in the business logic tier without impacting the clients.
Also, having the business logic on its own tier may be more efficient later (from a software development perspective) when you need to expose some of your application's functionality in a web-based system, or an Outlook add-in, or an iPhone app. You don't want to have to update all of these systems whenever the business logic changes slightly.
Security may be better, as your users don't need direct access to the database server; they are isolated from it by the application server.
It also forces you to think about your application in a modular way, with well-defined interfaces, which may have architectural benefits for the design of your application.
It can be. It depends on what has been implemented and how.
The driving force for creating a 3 tier physical architecture is not necessarily performance related.
The reason scalability is quoted is that a service might run on a server farm, but the clients would be unaware of this. The size of the server farm can be increased to meet demand if the architecture has been designed to support it.
The main advantage of 3-tier applications as you describe them is not scalability; maintainability, maybe.
In order to make your architecture scalable, you need one more technology you didn't mention: services. I would suggest WCF.
Making your BLL a WCF service (or multiple services) would make your application much more scalable, allowing your BLL to run on different (and multiple) machines.

webservices with repository pattern in c# and WCF?

Can anyone confirm the best way to integrate the repository pattern with web services? Well, actually, I have my repository pattern working now in C#. I have 3 projects: DataAccess, Services and my presentation layer.
The problem is that my presentation layer is a number of things: I have an ASP.NET MVC site, I have a WPF application, we are about to create another site, and an external company needs access to our repository as well.
Currently I have just added the services layer as a reference to each of the sites... But isn't the normal way to provide data access via web services (WCF)? If that is the case, will this break the services layer, or should I convert the services layer to a web service?
Does anybody know the pros and cons of each approach, e.g. speed?
I think I understand your dilemma. If I understand correctly, your services layer consists of pure fabrications (http://en.wikipedia.org/wiki/GRASP_(Object_Oriented_Design)).
If I'm right about that, then your services layer should not be impacted at all by the introduction of WCF. WCF is essentially an additional presentation layer that provides interoperability, sitting between your UI presentation layer and any business logic layers. Your WCF services would then call your services layer, which may access repositories as needed.
WCF provides a high degree of interoperability, so I think it is an excellent choice. I would use basicHttpBinding, though, if you intend to interoperate with different programming languages, as it is the most flexible. Don't worry about the speed; there are plenty of solutions out there to mitigate any bottlenecks that result from WCF.
Good luck, and let me know if I can help in any other way.
Well first - not all callers have to use the same repository API; this is especially true of an external company.
WCF is interface-based. This means that if you need to re-use some logic code, it is possible to use IoC/DI to inject WCF rather than a DAL (but using the same interface) by means of assembly sharing. It sounds like this is what you are doing. This works in many cases, but not all; fundamentally, web-service-based APIs often need to be designed differently in order to be optimal. It also isn't 100% pure from an SOA viewpoint, but it gets the job done and allows more intelligent domain entities, so in an intranet (etc.) scenario it is (IMO) perfectly reasonable.
An external caller would typically just use the wsdl/mex-based APIs (rather than assembly sharing), but anything is possible...
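Roughly, the assembly-sharing idea looks like this (all names and the endpoint are placeholders): local and remote callers are both coded against the same shared interface, and DI decides which implementation gets injected.

    using System.ServiceModel;

    // Shared contract assembly, referenced by both the server and .NET clients.
    [ServiceContract]
    public interface ICustomerRepository
    {
        [OperationContract]
        string GetCustomerName(int id);
    }

    // In-process implementation, used where the DAL is directly available.
    public class CustomerRepository : ICustomerRepository
    {
        public string GetCustomerName(int id) => "Alice"; // stub for the real DAL
    }

    // Remote clients receive the same interface, backed by a WCF channel instead.
    public static class CustomerRepositoryFactory
    {
        public static ICustomerRepository CreateRemote()
        {
            var factory = new ChannelFactory<ICustomerRepository>(
                new BasicHttpBinding(),
                new EndpointAddress("http://example.com/customers")); // placeholder
            return factory.CreateChannel();
        }
    }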
Maybe web services are not the best way; if I have full access to the service assembly, then I suppose it is always better to share the services-layer assembly with my applications.
My applications do similar things, but they all need to access the service layer - well, the business logic - and get information back...
In this case, is it always preferable to use assembly sharing with the service layer rather than providing a WCF web service over HTTP, or using TCP with WCF, for example?
Thanks again
Whether to share your Service/API assemblies with your client applications is fairly subjective. If you are a full Microsoft shop and use .NET for your entire application stack, then I would say sharing the API is a great way to gain code reuse (you have to be careful how you design your API so you don't bleed domain concerns, like repositories, into your presentation). If you don't have any plans to migrate your client applications to other platforms (i.e. you plan to stay on .NET for the foreseeable future), then I think it's perfectly acceptable to share your Service/API assemblies (and even in a multi-platform client environment, sharing the Service/API with the .NET clients should still be acceptable). There is always a trade-off between the 'architecturally ideal' and the 'practical and achievable within budget'. You can spend a LOT of time, money, and effort trying to achieve the architecturally ideal, when the gap between that and the practical often isn't really that much. Choosing NOT to share the API, and essentially recreating it so as to maintain "correct" SOA by consuming only the contract, can actually increase work and introduce maintenance hassles that quite possibly are not worth it for your particular project at this particular time. Given that you are already generally 'service-oriented', if at a future point you need the benefits that contract-only consumption on the client can offer, you're already set up to go there. But don't push too far too soon.
Given your needs, from what I have been able to glean from these posts so far, I think you're on the right track from your services down, too. A repository (a la Evans, DDD) is definitely a domain concern, and as such you really shouldn't have to worry about it from the perspective of your presentation layer. Your services are the gateway to your domain, which is the home of your business logic. Repositories are just a support facility that helps you achieve domain isolation from a data store (they are glorified collections, really, and to be quite frank they can be a bit of a pain in a dynamic and complex domain; simple data mappers (Fowler, PoEAA) are often a lot easier to deal with, less complex in the long run, and allow more adaptable data retrieval logic to be centralized in your domain services). Aside from heavy use of AJAX calls to REST services, if you expose adequate services/APIs around your domain, those are the only things your clients should have to worry about. Wrap up all the rest of your business logic entirely within the confines of your domain, and keep your clients as lightweight as possible, abstracted from concepts like 'Repository' or 'Data Mapper' and whatnot.
In my experience, the only non-service or API concept that needs to be shared across the Client-to-Domain boundary is Context...and it can be notoriously difficult to cross that boundary in a service-oriented application.
