WCF N-Tier Architecture - C#

I'm working on a fairly straight forward multi-tier application (WPF, WCF, EF 4, and SQL). As far as architecture is concerned, we were planning to include a single "Common" project which will include both entities as well as service contracts.
Are there any advantages/disadvantages to having entities and service contracts in separate assemblies? Or is it usually good to keep them together?
I'm interested in hearing the opinion of others.
Thanks!

Having the contracts in a separate assembly gives you the option of injecting a different implementation from another assembly: you hand the contracts assembly to a developer, they implement it and give you back a DLL that you can drop into the project folder and wire up with an IoC framework such as StructureMap, without rebuilding.
Having the contracts in the same assembly that contains the entities ties the contracts to the implementations...
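For example, a minimal sketch of that setup, assuming StructureMap as the IoC framework and hypothetical ICustomerRepository/SqlCustomerRepository/Customer types:

// Contracts assembly (e.g. MyProject.Contracts) - interfaces and shared types only, no implementations.
public interface ICustomerRepository
{
    Customer GetById(int id);
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Implementation assembly supplied later by another developer - just drop the DLL in.
public class SqlCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id)
    {
        // real data access would go here
        return new Customer { Id = id, Name = "example" };
    }
}

// Composition root: StructureMap wires the contract to whatever implementation is available.
public static class CompositionRoot
{
    public static ICustomerRepository ResolveCustomerRepository()
    {
        var container = new StructureMap.Container(cfg =>
        {
            cfg.For<ICustomerRepository>().Use<SqlCustomerRepository>();
        });
        return container.GetInstance<ICustomerRepository>();
    }
}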

If you are using a RESTful architecture with other .NET platform consumers - it's helpful to have the Service Contracts in a separate assembly (Shared) so that you can easily share your operation and data contracts with RESTful consumers without exposing any unnecessary data access components to your clients.
I would recommend that you keep the data access and service contracts isolated for this reason.

That is exactly how I structured an e-commerce n-tier app I designed.
There are two common libraries - one for DTO's and another for interfaces.
Then the client and server both referenced those libraries, and the service proxies were generated using the common types.
The main advantage here is ease of maintenance: you don't have to regenerate the proxies when you change the interface, because the client and server pick up the change automatically.
I also had a utilities app that contained all the helper type stuff I needed.
EDIT: Sorry, just re-read your question. In my case, I had multiple interface libraries - one for the workflow library (with composed interfaces) and another for the services (the things being composed into workflow operations).
So in my case it made sense to keep them separate.
If you only have one set of interfaces, and those interfaces all make use of your DTOs, there is no reason to separate them into two libraries - one would be sufficient. Consider, though, whether you may need to share your DTOs across more interface libraries in the future; in that case, keep the DTOs separate from the interfaces from the start.
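To make the shared-library idea concrete, here is a rough sketch with hypothetical IOrderService/OrderDto types in the common assemblies. One way to get the "no regeneration" effect is to build the proxy directly from the shared interface with ChannelFactory, so no generated proxy code is needed at all:

using System.Runtime.Serialization;
using System.ServiceModel;

// Shared contracts/DTO assembly, referenced by both the client and the server.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderDto GetOrder(int orderId);
}

[DataContract]
public class OrderDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public decimal Total { get; set; }
}

// Client side: a typed proxy created straight from the shared interface,
// so nothing has to be regenerated when the contract changes.
public static class OrderServiceClient
{
    public static OrderDto FetchOrder(int orderId)
    {
        var factory = new ChannelFactory<IOrderService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://localhost:8080/orders"));   // hypothetical endpoint
        IOrderService proxy = factory.CreateChannel();
        return proxy.GetOrder(orderId);
    }
}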

Related

Use auto-generated classes/objects when using a SOAP WS in .NET?

At a current project I have to develop a .NET client application which uses a handful of SOAP web services to communicate with an external software.
Fortunately .NET makes it very easy to use a SOAP WS as it generates all the required objects when adding a service reference.
On the other hand, after playing around with these auto-generated classes for a while, I'm not sure whether it's better to use them directly in the business logic or whether I should map them onto my own models (e.g. using something like a repository pattern).
Pros for mapping:
- Separation of business logic and data access (WS could change)
- Central point which calls the WS (can validate the responses and do a proper error handling)
- Sometimes WS types are cumbersome to use (e.g. WebService1.TypeA is not compatible with WebService2.TypeA).
- Generated classes cannot/should not be customized.
- ...
Cons for mapping:
Some of the WSDLs used have a complex structure and lots of nested types. If I map them to my own models I have to duplicate many classes and properties. That is why I have concerns about this solution.
In short, I'm unsure whether duplicating the web service classes into my own namespaces and implementing a repository or facade pattern is the proper way to go, or whether it just bloats the architecture.
Are there any best practices or similar?
In my 20+ years of experience, adding a repository/service layer can be overkill if the lifetime of the project is uncertain or likely to be short-lived. There is the added concern of performance; however, SOAP itself would be more of a bottleneck than a properly implemented object-mapping layer. Also, Naked Object applications don’t benefit from separation of concerns.
That being said, if you are connecting to a SOAP endpoint these days you are likely to be developing an enterprise application that should be built to be around for a few years and enhanced over time. That is, built to accept growing needs. So as far as your pros and cons, in my experience it depends on return on the time investment. From the information you posted here, the extra effort would be beneficial.
Generation can be a great tool when done right. I do a considerable amount of T4 generation in my projects for similar purposes. As far as best practices, I generate my classes into a ‘Generated’ sub namespace and extend them. This way I can extend the functionality and structure without fear of them being overwritten. In the generated classes I mark everything partial and virtual so that I have options outside of inheritance. This may be overkill to do all at once, but is something to consider. Leveraging partial classes could be another way to modify and extend the generated classes.
You can even generate the extended/partial classes. I use T4Toolbox to generate external files and use the ‘PreserveExistingFile’ to prevent the file from being overwritten. T4Toolbox (if you aren’t already using it) offers a great modular way to manage your generation, even generate into other projects.
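A small sketch of the generated-plus-partial approach described above, with hypothetical file and type names:

// Customer.generated.cs - emitted by the T4 template into the Generated sub-namespace;
// everything is partial and virtual, and this file may be overwritten at any time.
namespace MyProject.Models.Generated
{
    public partial class Customer
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
    }
}

// Customer.cs - hand-written half of the partial class; regeneration never touches it.
namespace MyProject.Models.Generated
{
    public partial class Customer
    {
        public string DisplayName
        {
            get { return Id + ": " + Name; }
        }
    }
}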
Even if you don’t add a repository layer, I would encourage you to apply the concepts of the Composite and Façade patterns to simplify the interaction with the external service.
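And a rough sketch of such a facade, mapping the generated web-service types onto your own model; every type name here is a hypothetical stand-in for whatever "Add Service Reference" actually produces in your project:

// Stand-ins for the classes the service reference would generate (hypothetical names).
namespace ExternalWs
{
    public class WsCustomer { public int Id { get; set; } public string Name { get; set; } }
    public class CustomerServiceClient
    {
        public WsCustomer GetCustomer(int id) { return new WsCustomer { Id = id, Name = "from WS" }; }
    }
}

// Your own model: business logic only ever sees CustomerModel and ICustomerGateway.
public class CustomerModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerGateway
{
    CustomerModel GetCustomer(int id);
}

public class CustomerGateway : ICustomerGateway
{
    private readonly ExternalWs.CustomerServiceClient _client = new ExternalWs.CustomerServiceClient();

    public CustomerModel GetCustomer(int id)
    {
        var wsCustomer = _client.GetCustomer(id);   // single point that calls the web service
        // central place for validation, error handling and mapping to your own model
        return new CustomerModel { Id = wsCustomer.Id, Name = wsCustomer.Name };
    }
}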
So in review, best practices in my experience:
Repository:
- if you need it to be long-lived and extendable
Generation:
- Use a namespace and class naming that makes it clear that the class is generated and will be overwritten.
- Create classes that are partials of, or extend, the generated classes for flexibility.
- Use T4Toolbox if generating with T4: keep the generation modular and preserve custom code.

.net n-layer website structural advice required

I'm creating my first .net/c# website using Entity Framework as my data access layer. I've split my project into layers so that I have DataAccess, BusinessLogic, a separate BusinessObjects layer and the website itself is the UI (Pages/UserControls/Appcode folder). There is also an additional Utilities plugin project.
The EF model has gone into DA, while the entity creation has gone into BO. It all feels good, but I'm having trouble deciding which logic classes belong in App_Code (UI) and which belong in BusinessLogic.
Are there any guidelines that can help me determine which side of the line things go?
App_Code is just a handy convenience for you to run code. I would advise you to avoid using that folder. Just create class library projects for all your classes, which would comprise your business logic layer. In the web project, only put pages and controls (ASCX and ASPX files). It makes the logical separation clearer.
There is a reference implementation from Microsoft Spain which employs EF, Unity, WCF, etc. But note that this implementation may be over-engineered for your needs. Before implementing, instead of copying the same structure, it is better to decide which parts, concepts and patterns are useful for you and which are not.
Microsoft N Layer Reference Implementation

wcf architecture - how to design my service contract in a flexible way

I have some entities like: Customers, Orders, Invoices.
For each of them I grouped the CRUD operations, and a few others, into interfaces like: ISvcCustomerMgmt, ISvcOrderMgmt, ISvcInvoicesMgmt, ISvcPaymentsMgmt.
Now I need to create a few WCF service contracts, independent of each other, each composed of one or more of these interfaces:
- one for internal use: ISvcInternal : ISvcCustomerMgmt, ISvcOrderMgmt, ISvcInvoicesMgmt //, maybe more in the future
- one for external use (3rd parties): ISvcExternal : ISvcCustomerMgmt //, maybe more in the future
So, my real services look like this: 1) SvcInternal : ISvcInternal, 2) SvcExternal : ISvcExternal.
The SvcInternal implementation keeps getting bigger, with a lot of operations.
Is this method flexible enough? Do you recommend another approach of splitting them up somehow? Feel free to share your thoughts.
Thank you.
If I had to implement this, I would put all the code and operations in a worker-manager or facade layer that contains all the operations (the real logic).
The services themselves would then be only thin wrappers that pass each request on to the facade layer.
This lets me reuse a great amount of code, and it also lets me expose the same operation in more than one service without re-implementing it.
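A rough sketch of that thin-service-over-facade idea, using the ISvcCustomerMgmt interface from the question and a hypothetical CustomerManager as the facade:

using System.ServiceModel;

// Shared contract and entity (simplified).
[ServiceContract]
public interface ISvcCustomerMgmt
{
    [OperationContract]
    Customer GetCustomer(int id);
}

public class Customer
{
    public int Id { get; set; }
}

// Facade / worker-manager layer: all the real logic lives here and is reused everywhere.
public class CustomerManager
{
    public Customer GetCustomer(int id)
    {
        // real business logic would go here
        return new Customer { Id = id };
    }
}

// Internal service: a thin wrapper that just forwards to the facade.
public class SvcInternal : ISvcCustomerMgmt /*, ISvcOrderMgmt, ISvcInvoicesMgmt */
{
    private readonly CustomerManager _customers = new CustomerManager();
    public Customer GetCustomer(int id) { return _customers.GetCustomer(id); }
}

// External service: exposes the same operation without re-implementing it.
public class SvcExternal : ISvcCustomerMgmt
{
    private readonly CustomerManager _customers = new CustomerManager();
    public Customer GetCustomer(int id) { return _customers.GetCustomer(id); }
}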
One more point: why not differentiate between your internal and external services with different bindings? For example, even if you use WSHttpBinding or BasicHttpBinding for both services, create different endpoints and bindings for them.
In terms of code hierarchy, my suggestion would be to use the folder hierarchy and namespaces to differentiate between them, e.g. Namespace.Interfaces.Internal and vice versa.
Hope that helps.
This can be an endless debate... How you choose to group your service operations is up to you.
One way is to put everything in a single, cover-it-all service, which acts as a façade to cover the internal complexities. But, as you say, that can grow quickly.
Another option is to have one service per entity type, or per aggregate root. An aggregate root is an entity that has an ID and is independently manageable from other entities. An example: you may have an Invoice entity and an InvoiceLine entity; then the Invoice entity is an aggregate root but the InvoiceLine entity is not, because it cannot exist without an Invoice -- therefore, it is not independent.
Yet another approach is to divide up per domain -- that is, divide up the service into smaller services that are each consistent and independent of the other services. Sometimes that is possible, sometimes it isn't. Use your judgment.
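To make the aggregate-root distinction concrete, a minimal sketch of the Invoice/InvoiceLine example:

using System.Collections.Generic;

// Invoice is an aggregate root: it has its own identity and can be managed on its own,
// so it would get service operations of its own.
public class Invoice
{
    public int Id { get; set; }

    private readonly List<InvoiceLine> _lines = new List<InvoiceLine>();
    public IEnumerable<InvoiceLine> Lines { get { return _lines; } }

    public void AddLine(string description, decimal amount)
    {
        _lines.Add(new InvoiceLine(description, amount));
    }
}

// InvoiceLine cannot exist without its Invoice, so it is not an aggregate root and
// is only ever reached through operations on the Invoice aggregate.
public class InvoiceLine
{
    public InvoiceLine(string description, decimal amount)
    {
        Description = description;
        Amount = amount;
    }

    public string Description { get; private set; }
    public decimal Amount { get; private set; }
}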
At our company, services consist of 3 assemblies:
1) the "contract" assembly, which we name [Company].[Project].Contract. This assembly contains the DTO (domain) objects, the interface definitions and a Client class to access the service. It can be shared with anyone who wants to consume your service (a small sketch follows this list).
2) the "business" assembly which we name [Company].[Project].Business. This assembly exposes a factory class that returns interfaces to the internal business worker classes.
3) the "service" assembly, which we name [Company].[Project].Service for a traditional SOAP service, or [Company].[Project].Rest in the case of a REST service. It is the "facade" that publishes the service's interfaces and handles the transport and protocol logic.
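As promised above, a minimal sketch of what the "contract" assembly might contain; the type names are hypothetical:

using System.Runtime.Serialization;
using System.ServiceModel;

// [Company].[Project].Contract - DTOs, interface definitions and a ready-made client.
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    CustomerDto GetCustomer(int id);
}

// Convenience client shipped inside the contract assembly so consumers need nothing else;
// endpoint details come from the consumer's configuration.
public class CustomerServiceClient : ClientBase<ICustomerService>, ICustomerService
{
    public CustomerDto GetCustomer(int id)
    {
        return Channel.GetCustomer(id);
    }
}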
Putting all functionality in one service is a good option to start with, but you will soon find that certain classes belong together naturally, so you will probably end up with a number of domain specific services.
Now, WCF has this great concept of configuration, but those who have field experience with this will agree that this can be very tedious and error-prone, especially when your SOA becomes more complex (as it always does, eventually). This always results in very complex configurations, multiplied by the various environments (development, test, staging, production) the services will run in. Needless to say this might result in errors.
To cope with this, we use the broobu framework, which allows near-zero configuration for WCF services using WS-Discovery and dynamic proxy generation. The only drawback of this solution is that you should preferably use IIS-hosted services with AppFabric 1.1. That way, you use IIS to configure the services: much safer (since you won't use XML config files) and much more scalable.

Common definitions in loose coupled design

I'm trying to put together a very granular, loosely coupled design.
But I can't decide how to handle common definitions.
Right now I separate concerns by putting each one in an external DLL. Through injection and interfaces my domain can use my business logic without knowing the implementation.
The problem I'm having is that for all my components to be loosely coupled, they need to implement the same interfaces. My solution was a separate project (DLL) containing just the definitions.
This started out well, but it seems to be getting bloated and chains all the code together on this one DLL dependency.
What's the most pragmatic way to go about ?
Thanks!
EDIT
Sorry, I think I initially misunderstood your question. So you have one assembly which contains your interfaces, and you have your implementations in other assemblies, using DI to create your dependent objects. I tend to create a core assembly in my application which holds the main behaviours of the app (smart entities, enums and interfaces). This assembly depends on little but is heavily depended on by the rest of the application. Check out this project as an example - whocanhelpme.codeplex.com. You could call this core bloated, but it, by definition, needs to be very rich.
You might find that many of your abstract units follow common design patterns. Here is a site that gives a good description of each one - you may be able to derive names from these (Observer, Factory, Adapter etc.):
http://www.dofactory.com/Patterns/Patterns.aspx
I would say that a layer should only know about the next layer and its interfaces, so it is fine to place interfaces along with their implementations and then add references between layers (assemblies) in the chain.
You can configure DI using the bootstrapper pattern and resolve through the locator. Regarding cross-cutting concerns like logging, caching, etc., there should be a separate assembly referenced by each layer. Here you can also employ contracts, and in the future perhaps replace these cross-cutting functionalities with another assembly implementing the same contracts.
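A minimal bootstrapper sketch, assuming StructureMap as the container and hypothetical layer and cross-cutting interfaces:

// Hypothetical cross-cutting and data-access contracts with their implementations.
public interface ILogger { void Log(string message); }
public class FileLogger : ILogger { public void Log(string message) { /* write to file */ } }

public interface ICustomerRepository { /* data-access contract */ }
public class SqlCustomerRepository : ICustomerRepository { }

// One registry per application (or per layer) wires contracts to implementations.
public class AppRegistry : StructureMap.Registry
{
    public AppRegistry()
    {
        For<ILogger>().Use<FileLogger>();                        // cross-cutting assembly
        For<ICustomerRepository>().Use<SqlCustomerRepository>(); // data access assembly
    }
}

// The bootstrapper builds the container once, in the composition root.
public static class Bootstrapper
{
    public static StructureMap.IContainer Configure()
    {
        return new StructureMap.Container(new AppRegistry());
    }
}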
Hope this helps at least a bit :)

DDD Projects Structure With WCF

I'm starting a new WCF-based project which is composed by an "Engine" and some desktop applications.
But I'm finding it difficult to settle on a project structure.
Engine (Windows service, which hosts WCF services for desktop application access and hosts all my business logic)
Desktop Application (Only Presentation)
Shared
- MyProject.Core (Customers/Customer, Customers/ICustomerService)
Engine
- MyProject.Engine (Customers/CustomerService, Customers/ICustomer, Customers/ICustomerRepository)
- MyProject.Infrastructure.SqlServer (Customers/Customer (LinqToSql specific), Customers/CustomerRepository)
WinForm Application
- MyProject.Core
- MyProject.UI
Am I right?
If you are doing DDD I find it strange that you have no domain model. You have a so-called engine, which has multiple concerns. It implements your business logic and knows about hosting your business logic as a windows service.
I would propose a project structure as follows:
MyProject.Model: Defines abstract repositories, entities, value objects, services (DDD term) and other domain logic. It has no references to other projects
MyProject.DataAccess: implementation of repositories using linq2sql. Has a reference to MyProject.Model
MyProject.ServiceModel: Contains the service contracts and everything else related to exposing your domain model as WCF services. This project would also contain service-specific representations of the domain objects that the service serves and accepts; the reason is that you should probably not decorate your domain classes with the attributes needed for WCF data contracts (see the sketch after this list). This project references MyProject.Model.
MyProject.Service: Contains the app.config for your service and performs dependency injection, through a custom ServiceHost and ServiceHostFactory. It references MyProject.Model, MyProject.ServiceModel and MyProject.DataAccess, plus your favourite DI framework (Castle Windsor, for example).
MyProject.PresentationModel: Defines various view models and commands to use in your UI. It has service references to the services exposed by MyProject.Service
MyProject.WinUI: Your WPF app. References MyProject.PresentationModel.
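Here is the sketch referred to in the MyProject.ServiceModel item: a plain domain entity next to its service-specific representation and contract (the CustomerData/ICustomerService names are hypothetical):

using System.Runtime.Serialization;
using System.ServiceModel;

// MyProject.Model - a plain domain entity, free of WCF attributes.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// MyProject.ServiceModel - the service-facing representation plus the contract.
[DataContract]
public class CustomerData
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }

    public static CustomerData FromDomain(Customer customer)
    {
        return new CustomerData { Id = customer.Id, Name = customer.Name };
    }
}

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    CustomerData GetCustomer(int id);
}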
Note that most of what you have probably read in Eric Evans' book about DDD is only concerned with the contents of MyProject.Model. The other projects make up additional layers not directly addressed in Mr. Evans' book.
Remember that by having a clear separation of concerns, and using dependency injection you will end up with code that is easily tested. With the structure I have proposed above, you should be able to test almost everything, since your UI will contain only XAML.
Anyway, this is just my take on it. Please feel free to ask if some of this needs clarification.
Good luck with the project.
/Klaus
