DTOs that are too heavy to be shared - C#

I have a domain entity currently within my application which exposes functionality via WCF and a RESTful API - where the properties are decorated with various attributes, like SwaggerWCF for example, and some validation rules like [Mandatory].
Now this is fine; however, I am now working on a client library to facilitate consuming the services. The typical pattern I follow here is to break the DTOs out into a separate NuGet package which is then used by both the service and the client.
However, these DTOs are heavy - hell, they probably aren't even DTOs.
How can I expose my lovely POCOs as DTOs and then layer the extra stuff on top on the service side?
I can only see duplication on the road ahead....

DTOs - usually the smaller bits that get transferred - serve two purposes: a) they make the data smaller for transfer, and b) they provide some abstraction over the inner workings so you can avoid exposing everything if needed.
Even if a) is not a concern in your case, since they would be large, b) is still useful for hiding some properties etc. down the road, so creating almost-duplicate DTOs is probably OK, and the duplication can be managed using some kind of auto-mapping.
Having DTOs separates these concerns, and if you need to provide a client library they can be more easily extracted into a separate package.
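For example, the shared NuGet package can hold a slim DTO while the heavy, attribute-decorated entity stays on the service side, with a small mapping helper (or a library like AutoMapper) bridging the two. All names below are invented for illustration:

    // Service-side entity; service/validation attributes omitted here.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string InternalNotes { get; set; }   // never leaves the service
    }

    // Plain DTO, shared with clients via the NuGet package.
    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class CustomerMappings
    {
        // Hand-written mapping; AutoMapper or similar can automate this.
        public static CustomerDto ToDto(this Customer entity)
        {
            return new CustomerDto { Id = entity.Id, Name = entity.Name };
        }
    }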

Related

DDD paradigm: is it possible to use a service from the Infrastructure layer in the ApplicationCore.Domain layer?

Sorry for my English, I am writing a real estate appraisal module and decided to try writing in the DDD paradigm.
I looked at examples and different articles and formed the following picture for myself (simplified):
ApplicationCore.Domain - the core of the business logic; contains all the necessary objects, divided into two types:
Entities - objects that need to be stored in the database; each is a ready-made, complete business logic object.
ValueObjects - everything else that encapsulates behaviour; they are compared by the values of their fields and are part of an Entity.
It is very important to create only valid objects, so I create everything through factories with a validator, and the constructors are private.
This layer should have no references to other dependencies - as isolated as possible.
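For example, a value object built this way might look like the following (Money and its validation rules are invented for illustration):

    using System;

    public class Money
    {
        public decimal Amount { get; private set; }
        public string Currency { get; private set; }

        // Private constructor: instances can only come from the factory.
        private Money(decimal amount, string currency)
        {
            Amount = amount;
            Currency = currency;
        }

        // The factory validates, so every Money in the domain is valid.
        public static Money Create(decimal amount, string currency)
        {
            if (amount < 0)
                throw new ArgumentException("Amount must be non-negative.");
            if (string.IsNullOrWhiteSpace(currency))
                throw new ArgumentException("Currency is required.");
            return new Money(amount, currency);
        }
    }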
ApplicationCore.App - a layer above ApplicationCore.Domain; contains a reference to it.
It works with objects from ApplicationCore.Domain and uses external services through Ports/Adapters.
Port - for example, an abstraction of a repository; its implementation is an Adapter in the Infrastructure layer.
The interaction logic for ApplicationCore lives in AppServices (for a classic implementation) or in Commands/Requests (for CQRS).
Infrastructure - contains references to ApplicationCore.Domain and ApplicationCore.App.
It implements the ports of the ApplicationCore.App layer.
Entities are independent of each other and refer to each other by keys for interaction.
I kept the basic logic of the appraisal system within this paradigm.
But then I needed to add a service that receives additional information for the appraisal from different sources while in listening mode.
I.e. it works in the background, listening on a TCP/IP socket.
The service's settings are stored in the database and change frequently - i.e. a new Entity needs to be introduced.
A chain is formed:
DataProvider (ApplicationCore.Domain) -> IDataProviderRepository (ApplicationCore.App) -> EfDataProviderRepository (Infrastructure).
DataProvider must depend on ITcpIpTransport; without it, it is just a set of settings for TcpIpTransport.
Logically, DataProvider is a service for receiving data, but it has settings and state stored in the database, i.e. it needs to be made an Entity.
If you make DataProvider depend on the Infrastructure layer, then ApplicationCore.Domain will also have ports to external services - is this permissible?
How is it best to implement this?
I have to admit it is a bit hard to understand everything. Especially at the end, you talk about objects where it isn't clear where they live and what they do.
Anyway, you cannot use anything from the layer above (Infrastructure) in the layers below (Application and Domain). I don't know how you would try to do it, but as a simple dependency it would not even let you compile any project: you would end up with a circular dependency.
Given that you would not do this, you can build a service in the domain layer (I do it with static functions to avoid any kind of unwanted implementation). That service uses interfaces that are implemented in the application layer. In the application layer you will also use (or reuse) interfaces that are implemented in the infrastructure layer.
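A minimal sketch of that direction, using the names from the question (the member shapes of ITcpIpTransport and DataProvider are assumptions for illustration):

    using System;

    // Port declared in ApplicationCore.Domain.
    public interface ITcpIpTransport
    {
        string Request(string payload);
    }

    // Entity in ApplicationCore.Domain: it holds only settings and state;
    // the transport is handed in by an outer layer when behaviour is needed,
    // so the domain never references Infrastructure.
    public class DataProvider
    {
        public Guid Id { get; private set; }
        public string Host { get; private set; }
        public int Port { get; private set; }

        public string FetchData(ITcpIpTransport transport)
        {
            // Domain logic builds the request from its own settings.
            return transport.Request(Host + ":" + Port);
        }
    }

    // Infrastructure then implements the port, e.g.:
    // public class SocketTcpIpTransport : ITcpIpTransport { ... }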

Use auto-generated classes/objects when using a SOAP WS in .NET?

At a current project I have to develop a .NET client application which uses a handful of SOAP web services to communicate with an external software.
Fortunately .NET makes it very easy to use a SOAP WS as it generates all the required objects when adding a service reference.
On the other hand, after playing around with these auto-generated classes for a while, I'm not sure whether it's better to use them directly in the business logic or whether I should map them into my own models (e.g. using something like a repository pattern).
Pros for mapping:
- Separation of business logic and data access (WS could change)
- Central point which calls the WS (can validate the responses and do a proper error handling)
- Sometimes WS types are cumbersome to use (e.g. WebService1.TypeA is not compatible with WebService2.TypeA).
- Generated classes cannot/should not be customized.
- ...
Cons for mapping:
- Some of the WSDLs involved have a complex structure with lots of nested types. If I map them to my own models, I have to duplicate many classes and properties. That is why I have concerns about this solution.
In short, I'm unsure whether duplicating the web service classes into my own namespaces and implementing a repository or facade pattern is a proper way to go, or just blows up the architecture.
Are there any best practices or similar?
In my 20+ years of experience, adding a repository/service layer can be overkill if the lifetime of the project is uncertain or likely to be short-lived. There is the added concern of performance, though SOAP itself would be more of a bottleneck than an object-mapping layer done correctly. Also, Naked Objects applications don't benefit from separation of concerns.
That being said, if you are connecting to a SOAP endpoint these days you are likely to be developing an enterprise application that should be built to be around for a few years and enhanced over time. That is, built to accept growing needs. So as far as your pros and cons, in my experience it depends on return on the time investment. From the information you posted here, the extra effort would be beneficial.
Generation can be a great tool when done right. I do a considerable amount of T4 generation in my projects for similar purposes. As far as best practices go, I generate my classes into a 'Generated' sub-namespace and extend them. This way I can extend the functionality and structure without fear of them being overwritten. In the generated classes I mark everything partial and virtual so that I have options outside of inheritance. This may be overkill to do all at once, but it is something to consider. Leveraging partial classes is another way to modify and extend the generated classes.
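A sketch of that layout (the type and namespace names are invented):

    // Generated file, in a 'Generated' sub-namespace; everything is
    // partial and virtual so it can be extended without being edited.
    namespace MyProject.Generated
    {
        public partial class CustomerDto
        {
            public virtual string Name { get; set; }
        }
    }

    // Hand-written file; the generator never touches it.
    namespace MyProject.Generated
    {
        public partial class CustomerDto
        {
            public string DisplayName
            {
                get { return "Customer: " + Name; }
            }
        }
    }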
You can even generate the extended/partial classes. I use T4Toolbox to generate external files and use the 'PreserveExistingFile' option to prevent the file from being overwritten. T4Toolbox (if you aren't already using it) offers a great modular way to manage your generation, and can even generate into other projects.
Even if you don’t add a repository layer, I would encourage you to apply the concepts of the Composite and Façade patterns to simplify the interaction with the external service.
So in review, best practices in my experience:
Repository:
- add one if the project needs to be long-lived and extendable
Generation:
- use namespace and class naming that makes it clear that the class is generated and will be overwritten
- create classes that are partials of, or extend, the generated classes for flexibility
- use T4Toolbox if using T4: keep the generation modular and preserve custom code

ServiceStack and non-database objects

I'm a C# coder with a (Windows) sysadmin background. I've been looking at the various service frameworks in order to create a unified REST-API for various infrastructure components (windows management, hardware management, etc.). I've settled on using ServiceStack as my framework for this, but have a question on how to manage my DTOs. Most of the time my source data is from non-database objects, which include:
- Other web services (usually SOAP-based). I usually bring these in via "Add Web Reference" (most, but not all, are asmx).
- .NET objects (usually WMI/WinRM/PowerShell [System.Management], or Active Directory [System.DirectoryServices])...
- In some unfortunate cases, raw text output I get as a result of invoking a command (via ssh or cmd).
In all of these cases, I will have to call some sort of Save() method to update properties. In addition, there might be some non-CRUD methods I would like to expose to the REST service. Usually I don't need everything from the source data (for example, in the case of web service data, I'm only interested in boxing up certain properties and methods of a particular proxy class). My understanding is that my DTOs should be clean and not have any dependencies. Since I don't believe I have an ORM I can use, what design pattern should I use to map my data to a DTO?
Apologies if I'm misusing any terminology here...
With a variety of backend services and data sources, I think it would be hard to use anything highly structured like a framework to map your data to DTOs. I would keep it simple:
Keep your DTO classes separate from any of your backend classes. Generally resist the temptation to reuse code, use inheritance, etc., in your DTOs (though sometimes I find it useful to declare interfaces for the DTOs to implement). This will keep the interface of your ServiceStack service clean and independent of backend details.
There are extension methods available in ServiceStack to easily map properties between two classes: TranslateTo, PopulateWith, PopulateWithNonDefaultValues, etc. The trick is that while your DTO classes should not be subclasses of, or directly reuse, your backend classes, you will find it convenient to have the property names match up if you want to use these mapping methods.
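A small sketch of that naming convention (both types are invented; TranslateTo is one of the ServiceStack mapping extensions mentioned above, renamed ConvertTo in later ServiceStack versions):

    // Backend class, e.g. hydrated from WMI.
    public class ServerInfo
    {
        public string HostName { get; set; }
        public int CpuCount { get; set; }
    }

    // Clean DTO with deliberately matching property names.
    public class ServerInfoDto
    {
        public string HostName { get; set; }
        public int CpuCount { get; set; }
    }

    // Because the names line up, the mapping is a single call:
    //   var dto = serverInfo.TranslateTo<ServerInfoDto>();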
Keep your ServiceStack service classes simple; their primary responsibility should be translating between DTO classes and lower level model classes, and making one or two method calls on business logic classes to do the actual work.
It sounds like it would be useful for the highest level of your business layer--the classes that your ServiceStack services interact with--to present a clean interface that abstracts away the details about the source and format of a given type of data. So you may want three layers of model classes. From top to bottom: DTOs, business layer POCO classes, and framework-specific classes for specific backend services like web reference generated code or whatever.
I think that's about all there is to it.
I recommend that you define DTOs that meet the requirements of your API, and then have a 'business logic' layer that mediates between the actual objects and your DTOs.
Your ServiceStack services will have a dependency on both the DTO definitions and the business logic layer, and the business logic layer will have a dependency on the DTO definitions and the real-world object definitions. In effect, your REST services and DTOs will act as a facade over the real-world APIs.

How to pass objects between layers of abstraction in C#?

So, I have an application structured with the following layers: a Service Layer, a Business Logic Layer, and a Data Abstraction Layer (DAL).
As of now, I am not using any concept of objects to get data from the bottom-most layer; I am simply using DataTables to get data out. I am not happy with this because it requires the Business Logic Layer to be aware of column names, etc.
In my Business Logic layer I have objects that are loaded from those DataTables and the service layer works with those objects via Collections of those objects.
Here's my question. If I wanted to have the Data Abstraction Layer accept and reply with Objects, how do I avoid referencing the DAL from the Service Layer? I've read that object factories are one way and also I've read I could build object transformation functions and so on.
What is the best way you've successfully employed this? My ultimate goal is to provide pluggable DALs for different Database server vendors.
A standard approach, and one I take often, is to have a common library of object models representing my domain: Customer, Order, OrderLine, etc, that are shared across all layers.
Better still, don't share the types across layers, just share the interfaces and have a factory available when instances are required.
You can then have a pluggable DAL, but your DAL still needs to conform to the contract of returning an ICustomer.
This doesn't stop references from being required - a reference to at least the interfaces is needed. As someone else has commented, the other references, for the strong types or factories, can then be factored out, for example with IoC/DI frameworks.
The way I see it: a common model is a cross-cutting concern in your application design, and shouldn't be seen as breaking the layering.
Update: this is a very vague explanation of the solution, so I stand to have it built upon.
Update 2: and now, of course, to answer your question. A standard way to remove a direct reference to a DLL is to expose the contract via an interface - in your case a DAL method such as GetCustomers. Your Service Layer talks to the interface but requests a DAL instance via a Dependency Injection / Inversion of Control framework or, more simply, a factory. I usually go the factory route for small apps; it involves another DLL. The Service Layer references the interfaces and the factory, the DAL references the interfaces, and the factory references the interfaces and the DAL.
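A minimal sketch of that factory route; all names (ICustomer, ICustomerRepository, DalFactory, SqlCustomerRepository) are invented for illustration:

    using System.Collections.Generic;

    // "Interfaces" assembly, referenced by every layer.
    public interface ICustomer { string Name { get; } }

    public interface ICustomerRepository
    {
        IEnumerable<ICustomer> GetCustomers();
    }

    // Concrete DAL assembly; references only the interfaces.
    public class SqlCustomerRepository : ICustomerRepository
    {
        public IEnumerable<ICustomer> GetCustomers()
        {
            // ... vendor-specific data access goes here ...
            yield break;
        }
    }

    // Factory assembly; references the interfaces and the concrete DAL.
    public static class DalFactory
    {
        public static ICustomerRepository CreateCustomerRepository()
        {
            // Swap this line (or drive it from config) to plug in a
            // different database vendor's DAL.
            return new SqlCustomerRepository();
        }
    }

    // Service Layer usage (references interfaces + factory only):
    //   var customers = DalFactory.CreateCustomerRepository().GetCustomers();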
Define simple objects (or interfaces) in a "Common" assembly, and use those.
For this to work, all layers/assemblies that need to exchange data will need to reference this common assembly; because of that, you'll want to make the common assembly very lean in terms of its dependencies - otherwise you'll pollute the rest of your app.
For a simple application, I like to...
Use an ORM within my DAL to simplify data access (Entity Framework)
Use the repository pattern for CRUD operations (Repository Pattern)
Use the repository pattern as an abstraction of the DAL, mapping the ORM's objects to DTOs or model/domain objects (DTO)
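A rough sketch of that repository shape, using the EF DbContext API (all type names invented):

    using System.Data.Entity;   // EF 4.1+ DbContext API

    public class OrderEntity    // ORM entity, stays inside the DAL
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    public class OrderDto       // what the upper layers see
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    public class MyDbContext : DbContext
    {
        public DbSet<OrderEntity> Orders { get; set; }
    }

    public interface IOrderRepository
    {
        OrderDto Get(int id);
    }

    public class EfOrderRepository : IOrderRepository
    {
        private readonly MyDbContext _db;

        public EfOrderRepository(MyDbContext db) { _db = db; }

        public OrderDto Get(int id)
        {
            // Map the ORM entity to a DTO so EF types never leak out.
            var entity = _db.Orders.Find(id);
            return new OrderDto { Id = entity.Id, Total = entity.Total };
        }
    }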
Also I...
Use a Dependency Injection framework for repositories, this allows you to plug any database (StructureMap)
Write unit tests and mock my repositories (NUnit)
However, if I had a new project, I would go code-first with EF4 instead of having an ORM-DTO mapping layer. (Code-First with EF4)
Peace!
Another approach is to use plain old CLR objects (POCOs). Your data abstraction and business layers can both leverage them. Under the POCO domain model, you'd typically use a tool like NHibernate to manage persistence.
Rather than interfaces, NHibernate introduces a "proxy" to add persistence behaviour invisibly, through decoration (XML mapping files or attributes). You can get quite productive with this approach once you get used to it.
Further, all three layers can leverage the same simple POCO objects which can simplify things somewhat.
Re: different assemblies - either put the objects in a shareable assembly, or (like MS often does) use code generation to generate the same POCO "schema" in the other layers. Sometimes an intermediate assembly is just not shareable... or you'd like to introduce variations in the additional layers (perhaps due to security concerns). So the trick is to depend only on serialization, yet at the same time define the POCO schema once and introduce it to the different layers through tooling (the code generation part).
Hope that helps.

WCF architecture - how to design my service contract in a flexible way

I have some entities like: Customers, Orders, Invoices.
For each one of them I grouped their CRUD operations, and a few others, into interfaces like ISvcCustomerMgmt, ISvcOrderMgmt, ISvcInvoicesMgmt, ISvcPaymentsMgmt.
Now I need to create a few WCF service contracts, independent of each other, each composed of one or more of these interfaces:
one for internal use:

    ISvcInternal : ISvcCustomerMgmt, ISvcOrderMgmt, ISvcInvoicesMgmt
    // maybe more in the future

one for external use (3rd parties):

    ISvcExternal : ISvcCustomerMgmt
    // maybe more in the future
So, my real services look like this: 1) SvcInternal : ISvcInternal, 2) SvcExternal : ISvcExternal.
Looking at the SvcInternal implementation, it keeps growing with a lot of operations.
Is this method flexible enough? Do you recommend another approach of splitting them up somehow? Feel free to share your thoughts.
Thank you.
If I had to implement this, I would put all the code and operations in a worker/manager or facade layer that contains all the operations (the real logic).
My service would then be only a thin layer that simply passes requests on to the facade layer.
This allows me to reuse a great amount of code, and it also allows me to expose the same method in more than one service without reimplementation.
One point, though: why don't you differentiate between your internal and external services with different bindings? E.g. even if you are going to use WSHttpBinding or BasicHttpBinding for both services, create different endpoints and bindings for them.
In terms of code hierarchy, my idea would be to use the folder hierarchy and namespaces to differentiate between these, e.g. Namespace.Interfaces.Internal and vice versa.
Hope that helps.
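A sketch of that thin-service-over-facade idea, reusing the interface name from the question (the Customer type and the facade class are invented):

    public class Customer
    {
        public int Id { get; set; }
    }

    public interface ISvcCustomerMgmt
    {
        Customer GetCustomer(int id);
    }

    // Facade/manager layer: the real logic lives here, reusable anywhere.
    public class CustomerManager
    {
        public Customer GetCustomer(int id)
        {
            // ... real implementation ...
            return new Customer { Id = id };
        }
    }

    // The WCF service is a thin pass-through, so the same operation can
    // be exposed on SvcExternal as well, without reimplementation.
    public class SvcInternal : ISvcCustomerMgmt
    {
        private readonly CustomerManager _manager = new CustomerManager();

        public Customer GetCustomer(int id)
        {
            return _manager.GetCustomer(id);
        }
    }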
This can be an endless debate... How you choose to group your service operations is up to you.
One way is to put everything in a single, cover-it-all service, which acts as a façade to cover the internal complexities. But, as you say, that can grow quickly.
Another option is to have one service per entity type, or per aggregate root. An aggregate root is an entity that has an ID and is independently manageable from other entities. An example: you may have an Invoice entity and an InvoiceLine entity; then the Invoice entity is an aggregate root but the InvoiceLine entity is not, because it cannot exist without an Invoice -- therefore, it is not independent.
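To illustrate the aggregate-root rule with that example (the member shapes are invented):

    using System.Collections.Generic;

    public class Invoice        // aggregate root: has its own ID
    {
        private readonly List<InvoiceLine> _lines = new List<InvoiceLine>();

        public int Id { get; private set; }
        public IEnumerable<InvoiceLine> Lines { get { return _lines; } }

        // Lines can only be created through their owning Invoice.
        public void AddLine(string description, decimal amount)
        {
            _lines.Add(new InvoiceLine(description, amount));
        }
    }

    public class InvoiceLine    // not independent: no life of its own
    {
        internal InvoiceLine(string description, decimal amount)
        {
            Description = description;
            Amount = amount;
        }

        public string Description { get; private set; }
        public decimal Amount { get; private set; }
    }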
Yet another approach is to divide up per domain -- that is, divide up the service into smaller services that are each consistent and independent of the other services. Sometimes that is possible, sometimes it isn't. Use your judgment.
At our company, services consist of 3 assemblies:
1) the "contract" assembly which we name [Company].[Project].Contract. This assembly contains the DTO (Domain) objects, the Interface definitions and a Client class to access the service. This assembly can be shared with those who want to consume your service.
2) the "business" assembly which we name [Company].[Project].Business. This assembly exposes a factory class that returns interfaces to the internal business worker classes.
3) the "service" assembly, which we name [Company].[Project].Service for a traditional SOAP service or [Company].[Project].Rest in case of a REST service, it is the "facade" that publishes the service's interfaces and covers the transport and protocol logic.
Putting all functionality in one service is a good option to start with, but you will soon find that certain classes belong together naturally, so you will probably end up with a number of domain specific services.
Now, WCF has this great concept of configuration, but those with field experience will agree that it can be very tedious and error-prone, especially as your SOA becomes more complex (as it always does, eventually). It results in very complex configurations, multiplied by the various environments (development, test, staging, production) the services will run in. Needless to say, this invites errors.
To cope with this, we use the broobu framework, which allows near-zero configuration for WCF services using WS-Discovery and dynamic proxy generation. The only drawback of this solution is that you should preferably use IIS-hosted services with AppFabric 1.1. This way, you use IIS to configure the services: much safer (since you won't use XML config files) and much more scalable.
