All,
My typical approach for a medium-sized WCF service would be something like:
Define the interface using WCF data contracts and service operations. The data contracts would be POCO DTOs with no CRUD or domain logic.
Model the domain using fully featured business objects.
Provide some mechanism to go from DTO to BO and vice versa (see related question: Pattern/Strategy for creating BOs from DTOs)
Now, a lot of the time (if not always) the data content of the business object and the DTO is nearly identical. How do people feel about creating a library of content objects which are shared by the BO and the DTO? E.g. if we had a WibbleDTO and a WibbleBO, we could create an IWibbleContent interface which both implement. We could even create an IWibbleContent interface and a WibbleContent class to which both the DTO and the BO hold a reference.
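For what it's worth, a minimal sketch of the shared-interface idea (the Wibble names come from the question above; the Id and Name properties are hypothetical):

using System.Runtime.Serialization;

// Shared content contract implemented by both the DTO and the BO.
public interface IWibbleContent
{
    int Id { get; set; }
    string Name { get; set; }
}

// WCF data contract: a plain DTO with no behavior.
[DataContract]
public class WibbleDTO : IWibbleContent
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

// Business object: the same content plus domain logic.
public class WibbleBO : IWibbleContent
{
    public int Id { get; set; }
    public string Name { get; set; }

    public void Rename(string newName)
    {
        // Domain rules would live here.
        Name = newName;
    }
}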
So, specific questions:
Do you ever share content/data interfaces between your DTOs and BOs?
Do you ever share data content classes between your DTOs and BOs?
If not, then I guess, as per my related question, we're left with tedious property-copying code, or we use something like AutoMapper.
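For reference, a minimal sketch of the AutoMapper route (this uses the classic static API from early AutoMapper versions; newer versions configure maps through a MapperConfiguration instead):

// One-time configuration, e.g. at application startup.
Mapper.CreateMap<WibbleBO, WibbleDTO>();
Mapper.CreateMap<WibbleDTO, WibbleBO>();

// Then, instead of hand-written copying code:
WibbleDTO dto = Mapper.Map<WibbleDTO>(bo);
WibbleBO roundTripped = Mapper.Map<WibbleBO>(dto);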
Any comments appreciated.
We use quite a similar approach to the one you describe, with DTOs and BOs.
We rarely have common interfaces; either they are very basic (e.g. an interface to get a BusinessId) or they are specific to a certain implementation, e.g. a calculation which could be made on the client or on the server.
We actually just copy properties. They are usually trivial enough that sharing the code is not worth it.
In the end, more of the code is different than similar.
We have many attributes on these classes, and they are almost never the same.
Most properties are implemented as get; set; on the server, but with OnPropertyChanged notification on the client, which requires explicit backing fields.
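To make that concrete, a sketch of the same property in both worlds (names hypothetical; the client class is assumed to implement INotifyPropertyChanged and declare the PropertyChanged event):

// Server side: a trivial auto-property.
public string Name { get; set; }

// Client side: the same property with change notification,
// which forces an explicit backing field.
private string name;
public string Name
{
    get { return name; }
    set
    {
        if (name == value) return;
        name = value;
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs("Name"));
    }
}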
We don't share much code between the client and server sides, so there is no need for common interfaces.
Even if many of the properties are the same on both classes, there is actually not much to share.
I usually create POCOs and use them through all of my layers - data access to business to UI. In the business layer I have managers that have the POCOs passed back and forth. We are going to look at Entity Framework and/or NHibernate, so I am not sure where that will lead us.
Yeah, we write some extra code, but we keep everything lean and mean. We are using MVC for our UI, which for me was a godsend compared to the bulk of WebForms; I'll never go back. Right now our battle is whether we should send JSON to the AJAX callbacks or use partial views; the latter is what we do most of the time.
Are we correct? Maybe not but it works for us. So many choices, so little time.
Related
I've built a web application with Entity Framework using POCO.
I'm using these POCO classes as my business objects and not just for persisting data which works fine until...
Now I need to add some logic into these classes to do things like totalling up sales, order lines, etc.
Should I add methods to my POCO classes to enable this functionality, or leave them purely for persisting data and create some kind of 'processor' whereby I pass in the business objects and get the values I require out?
Is there a best practice for this?
What is the architectural design you are using or want to use?
For example, if these are your domain entities, you should put as much logic as possible in them. If they are merely data containers and you don't have a real architecture in place, your logic would probably live in some business component.
So if you provide your question with some more details, we can help you better.
I am creating a WPF application that uses a WCF service to interact with the data source. I use DI for both the client and the WCF server to ensure decoupled code, but I am unsure how to handle the data transfer from backend to user interface.
To keep the layers separate, data is currently transferred from the database to the UI through several mapping steps. On the server side, data entities are mapped to domain objects, which in turn are mapped to service data contracts. On the client side, WCF proxy classes are mapped to viewmodels.
Some developers at work claim that this "copying" of data between seemingly identical classes creates a maintenance problem because so many classes must be updated when a change is introduced. Instead they say you should use shared classes across the layers since we control both the client application and the WCF service. I too worry about the amount of work involved and see a potential performance penalty, but on the other hand using a shared class across the layers/abstractions might create tight coupling the way I see it. What is the best approach?
Using DTOs as business objects is not the best decision you can make.
From my experience I can say that, usually, when your objects are identical across all the layers, there is probably a problem with the architecture somewhere.
In a real business scenario it is quite unlikely that the business logic on a server and the business logic on a client have the same context and operate with the same objects. And if they have exactly the same structure as the database... hmmm... that sounds like a data-driven application.
But if it is a data-driven application, where clients access some data, modify it and save it back, then you probably don't really need this complicated layering. It sounds simple, so let's keep it simple. If it is a data-driven application, why not just create a WCF Data Services context on top of your database and let it do all the dirty work for you, so you access your data over WCF without even thinking about DTOs, mappings, etc.?
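For instance, a minimal sketch of such a service (WibbleEntities standing in for a hypothetical EF object context):

using System.Data.Services;
using System.Data.Services.Common;

// Exposes the data model directly over WCF; no DTOs or mapping code needed.
public class WibbleDataService : DataService<WibbleEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Deliberately coarse-grained, as befits a data-driven app; lock down as needed.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead | EntitySetRights.WriteMerge);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}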
If it is not a data-driven application, then you probably have some complicated business logic on your server side, and this business logic usually operates with objects that make sense only in its context. It just doesn't make sense to push these objects all the way through to the UI.
Instead, the UI will probably send commands to the server in order to ask the system to do something. For example, it will send a "DisableAccount(id=123)" command instead of loading AccountDTO, changing its IsEnabled flag to false and pushing it back.
If there is business logic, it will probably be triggered by such a command from the client, which does not need to know how to disable accounts or how to do other things; it just commands the system to do something.
So in this scenario the client (the UI) just does not need the same objects that the server has. It may need some data to display to the user, but that data will definitely be in a format that makes sense for the client's view, not for the business logic. It will probably contain some denormalized data, combined somehow.
Say, the User for the UI is not a DTO mapped to the Users table. It is another DTO, containing user data and statistics from different tables, processed somehow. The client does not need to know the internal structure of the server's data storage, so there is no need to expose it. Get the relevant data and send the appropriate commands; that's it.
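In WCF terms, the contrast might look something like this (a sketch; the contract and type names are made up):

using System.ServiceModel;

public class AccountDTO { public int Id { get; set; } public bool IsEnabled { get; set; } }
public class AccountSummaryDTO { public string DisplayName { get; set; } public decimal Balance { get; set; } }

// CRUD style: load the DTO, flip a flag, push the whole thing back.
[ServiceContract]
public interface IAccountCrudService
{
    [OperationContract]
    AccountDTO GetAccount(int id);

    [OperationContract]
    void SaveAccount(AccountDTO account);
}

// Command style: the client asks the system to do something; the logic
// for disabling accounts stays on the server.
[ServiceContract]
public interface IAccountService
{
    [OperationContract]
    void DisableAccount(int id);

    [OperationContract]
    AccountSummaryDTO GetAccountSummary(int id); // denormalized view data
}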
Having said all this, I should underline that it is NOT a binary choice. For simple features you may use a simple approach; for the features where having business logic makes sense, you may do things differently.
You do not have to choose one approach for everything. So you do not have to always create 3 similar objects just because it is "The Way", or always pass entities all the way through to the UI.
But what you will have to do is clearly separate contexts and define where each approach is going to be used.
In 80% of cases you will probably end up with something simple (like WCF Data Services), where you don't need to do anything, and that is fine, as in a lot of operations you just want to change the data.
But in the other 20% (which is the "core" of your application), where the real business logic lives, you may want this kind of separation not only for objects but also for responsibilities between your layers.
All that mapping indeed creates a maintenance burden. Whether or not it's warranted depends on what you are building, and how complex the business logic is.
However, it's very important to realize that once you start sharing data structures across layers and tiers, the architecture is no longer decoupled. If you do that, you'd essentially be building a monolithic application. Don't get me wrong: there's nothing wrong with building a monolithic application if all you're doing is a glorified CRUD application, but it's essential to make that decision explicitly.
There are at least these alternatives:
Maintain strict layering. The mapping cost remains, but the code is decoupled.
Build a monolithic application. Collapse everything you can collapse. Keep it as simple as possible. It's going to be tightly coupled, but it just may become so simple that it doesn't matter.
Do something radically different, like building a CQRS application or a SOA mashup.
Personally, I prefer the third option these days.
I see nothing sacred about layers. Having layer-specific versions of each and every entity in the model would increase the number of classes a great deal. It's unnecessary, in my view. It violates the DRY principle: why keep repeating yourself?
What does layer purity buy you?
So I'd say the best approach is to pass those model entities around without fear.
I seem to be missing something, and extensive use of Google didn't help to improve my understanding...
Here is my problem:
I like to create my domain model in a persistence-ignorant manner, for example:
I don't want to add virtual if I don't need it otherwise.
I don't like to add a default constructor, because I like my objects to always be fully constructed. Furthermore, the need for a default constructor is problematic in the context of dependency injection.
I don't want to use overly complicated mappings, because my domain model uses interfaces or other constructs not readily supported by the ORM.
One solution to this would be to have separate domain objects and data entities. Retrieval of the constructed domain objects could easily be solved using the repository pattern and building the domain object from the data entity returned by the ORM. Using AutoMapper, this would be trivial and not too much code overhead.
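A sketch of what such a repository might look like (NHibernate's ISession used as the ORM here and AutoMapper for the mapping; the Customer types are hypothetical):

using AutoMapper;
using NHibernate;

public class CustomerEntity { public virtual int Id { get; set; } public virtual string Name { get; set; } }
public class Customer { public int Id { get; set; } public string Name { get; set; } }

// The ORM-mapped data entity never leaves the data access layer;
// the repository hands out fully constructed domain objects.
public class CustomerRepository
{
    private readonly ISession session;

    public CustomerRepository(ISession session)
    {
        this.session = session;
    }

    public Customer GetById(int id)
    {
        CustomerEntity entity = session.Get<CustomerEntity>(id); // data entity from the ORM
        return Mapper.Map<Customer>(entity);                     // build the domain object
    }
}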
But I have one big problem with this approach: It seems that I can't really support lazy loading without writing code for it myself. Additionally, I would have quite a lot of classes for the same "thing", especially in the extended context of WCF and UI:
Data entity (mapped to the ORM)
Domain model
WCF DTO
View model
So, my question is: What am I missing? How is this problem generally solved?
UPDATE:
The answers so far suggest what I already feared: It looks like I have two options:
Make compromises on the domain model to match the prerequisites of the ORM and thus have a domain model the ORM leaks into
Create a lot of additional code
UPDATE:
In addition to the accepted answer, please see my answer for concrete information on how I solved those problems for me.
I would question that matching the prereqs of an ORM is necessarily "making compromises". However, some of these are fair points from the standpoint of a highly SOLID, loosely-coupled architecture.
An ORM framework exists for one sole reason: to take a domain model implemented by you and persist it into a similar DB structure, without you having to implement a large number of bug-prone, near-impossible-to-unit-test SQL strings or stored procedures. ORMs also easily implement concepts like lazy loading: hydrating an object at the last minute before it is needed, instead of building a large object graph yourself.
If you want stored procs, or have them and need to use them (whether you want to or not), most ORMs are not the right tool for the job. If you have a very complex domain structure such that the ORM cannot map the relationship between a field and its data source, I would seriously question why you are using that domain and that data source. And if you want 100% POCO objects with no knowledge of the persistence mechanism behind them, then you will likely end up doing an end run around most of the power of an ORM: if the domain doesn't have virtual members or child collections that can be replaced with proxies, you are forced to eager-load the entire object graph (which may well be impossible if you have a massive interlinked object graph).
While ORMs do require the domain design to carry some knowledge of the persistence mechanism, an ORM still results in much more SOLID designs, IMO. Without an ORM, these are your options:
Roll your own Repository that contains a method to produce and persist every type of "top-level" object in your domain (a "God Object" anti-pattern)
Create DAOs that each work on a different object type. These require you to hand-code the get and set between ADO DataReaders and your objects, a process that an ORM's mapping greatly simplifies in the average case. The DAOs also have to know about each other: to persist an Invoice you need the DAO for the Invoice, which needs DAOs for the InvoiceLine, Customer and GeneralLedger objects as well. And there must be a common, abstracted transaction control mechanism built into all of this.
Set up an ActiveRecord pattern where objects persist themselves (and put even more knowledge about the persistence mechanism into your domain)
Overall, the second option is the most SOLID, but more often than not it turns into a beast-and-two-thirds to maintain, especially when dealing with a domain containing backreferences and circular references. For instance, for fast retrieval and/or traversal, an InvoiceLineDetail record (perhaps containing shipping notes or tax information) might refer directly to the Invoice as well as the InvoiceLine to which it belongs. That creates a 3-node circular reference that requires either an O(n^2) algorithm to detect that the object has been handled already, or hard-coded logic concerning a "cascade" behavior for the backreference. I've had to implement "graph walkers" before; trust me, you DO NOT WANT to do this if there is ANY other way of doing the job.
So, in conclusion, my opinion is that ORMs are the least of all evils, given a sufficiently complex domain. They encapsulate much of what is not SOLID about persistence mechanisms, and reduce the domain's knowledge of its persistence to very high-level implementation details that break down to simple rules ("all domain objects must have all their public members marked virtual").
In short - it is not solved
All good points.
I don't have an answer (but this comment got too long when I decided to add something about stored procs), except to say that my philosophy seems to be identical to yours: I either hand-code or code-generate.
Things like partial classes make this a lot easier than it used to be in the early .NET days. But ORMs (as a distinct "thing" as opposed to something that just gets done in getting to and from the database) still require a LOT of compromises and they are, frankly, too leaky of an abstraction for me. And I'm not big on having a lot of dupe classes because my designs tend to have a very long life and change a lot over the years (decades, even).
As far as the database side goes, stored procs are a necessity in my view. I know that ORMs support them, but most ORM users tend not to use them, and that is a huge negative for me - because they talk about a best practice and then couple to a table-based design, even if it is created from a code-first model. It seems to me they should look at an object datastore if they don't want to use a relational database in a way that utilizes its strengths. I believe in Code AND Database first - i.e. model the database and the object model simultaneously, back and forth, and then work inwards from both ends. I'm going to lay it out right here:
If you let your developers code an ORM against your tables, your app is going to have problems living for years. Tables need to change. More and more people are going to want to knock up against those entities, and now they are all using an ORM generated from tables. And you are going to want to refactor your tables over time. In addition, only stored procedures are going to give you any kind of usable role-based manageability without dealing with every table on a per-column GRANT basis - which is super-painful. If you program well in OO, you have to understand the benefits of controlled coupling. That's all stored procedures are - USE THEM so your database has a well-defined interface. Or don't use a relational database if you just want a "dumb" datastore.
Have you looked at the Entity Framework 4.1 Code First? IIRC, the domain objects are pure POCOs.
This is what we did on our latest project, and it worked out pretty well:
We use EF 4.1 with virtual keywords on our business objects and have our own custom implementation of the T4 template, wrapping the ObjectContext behind an interface for repository-style data access (a sketch follows below).
We use AutoMapper to convert between BO and DTO.
We use AutoMapper to convert between ViewModel and DTO.
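Roughly, the ObjectContext wrapping might look like this (the interface shape here is illustrative, not the actual project code):

using System.Data.Objects;
using System.Linq;

// Repository-style abstraction so the business layer never sees EF directly.
public interface IRepository<T> where T : class
{
    IQueryable<T> Query();
    void Add(T entity);
    void Remove(T entity);
}

public class EfRepository<T> : IRepository<T> where T : class
{
    private readonly ObjectSet<T> set;

    public EfRepository(ObjectContext context)
    {
        set = context.CreateObjectSet<T>();
    }

    public IQueryable<T> Query() { return set; }
    public void Add(T entity) { set.AddObject(entity); }
    public void Remove(T entity) { set.DeleteObject(entity); }
}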
You would think that view models, DTOs and business objects are the same thing, and they might look the same, but they have a very clear separation in terms of concerns.
View models are about the UI screen, DTOs are about the task you are accomplishing, and business objects are primarily concerned with the domain.
There are some compromises along the way, but if you want EF, then the benefits outweigh the things you give up.
Over a year later, I have solved these problems for me now.
Using NHibernate, I am able to map fairly complex Domain Models to reasonable database designs that wouldn't make a DBA cringe.
Sometimes it is necessary to create a new implementation of the IUserType interface so that NHibernate can correctly persist a custom type. Thanks to NHibernate's extensible nature, that is no big deal.
I found no way to avoid adding virtual to my properties without losing lazy loading. I still don't particularly like it, especially because of all the warnings from Code Analysis about virtual properties without derived classes overriding them, but out of pragmatism, I can now live with it.
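For example (a sketch; the class and properties are hypothetical):

// NHibernate can only substitute lazy-loading proxies for virtual members,
// so every mapped property ends up marked virtual.
public class DataExport
{
    public virtual int Id { get; protected set; }
    public virtual string Name { get; set; }
}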
For the default constructor I also found a solution I can live with. I add the constructors I need as public constructors and I add an obsolete protected constructor for NHibernate to use:
[Obsolete("This constructor exists because of NHibernate. Do not use.")]
protected DataExportForeignKey()
{
}
I'm starting a project using EF 4 and POCO.
What is the best practice for sending data to the client? Should I send the POCO, or should I have a DTO instead?
Are there any issue I should be aware of when sending the entity (that is disconnected from the context) to the client ?
Is it a recommended practice to send the POCO to the client layer?
I believe that we are mixing two definitions here that have no relation to each other.
DTO, or Data Transfer Object, is a design pattern: you can use it to transfer data between layers, and DTOs don't have behavior. Martin Fowler explains this very well at: http://www.martinfowler.com/eaaCatalog/dataTransferObject.html
On the other hand we have POCO, or Plain Old CLR Object. But to talk about POCO, we have to know where it started, which is POJO, or Plain Old Java Object. Martin Fowler and two colleagues coined the term, and he explains it here: http://www.martinfowler.com/bliki/POJO.html
So POCOs can have behavior and everything you want. They are the same ordinary classes you write on a daily basis; they were just given that name so they could be referred to in a short, easy-to-remember way.
In answer to your second question, I think the best approach, and the one I always go for, is sending DTOs from the business layer to everything that uses it (e.g. your services, web site, desktop app, mobile app, etc.). This is because they don't have behavior and, in most cases, are little more than bags of properties, so they are lightweight and ideal for use in services; and of course, they don't reveal sensitive data from your business.
That being said, if you are planning to use DTOs, I can recommend you download EntitiesToDTOs, an Entity Framework DTO generator that I recently published on CodePlex; it is free and open source. Go to http://entitiestodtos.codeplex.com
For me, one of the main reasons to use EF4 with POCO is the fact that you don't need DTOs. I can understand using DTOs with traditional EDMX files, where your entities are pretty bloated, but this isn't the case here.
Your POCO obviously needs to be serializable, but there really shouldn't be any issues specific to sending POCO entities that don't also occur with DTOs.
I have a slightly different opinion from those given above.
I believe a DTO or ViewModel is still needed outside of the server layer.
In a real-world application, few views need only one domain object; almost every view needs multiple domain objects.
And all of those domain objects are wrapped in one DTO or ViewModel class.
This is why I insist that a DTO or ViewModel is still needed, even though they are POCOs.
I would consider EF4 entities business models AND viewmodels rolled into one. They already implement PropertyChanged out of the box, for example. Partial classes can provide custom functionality if you need it. Mirroring the entities with your own safety layer creates unnecessary work and maintenance, in my opinion.
I'm a believer in separation of business logic and everything else. However in the case of EF4 the work is already done for you. Go nuts.
POCO = Plain Old CLR (or better: Class) Object
DTO = Data Transfer Object
In this post there is a difference, but frankly most of the blogs I read describe POCO in the way DTO is defined: DTOs are simple data containers used for moving data between the layers of an application.
Are POCO and DTO the same thing?
A POCO follows the rules of OOP. It should (but doesn't have to) have state and behavior. POCO comes from POJO, coined by Martin Fowler [anecdote here]. He used the term POJO as a way to make it more sexy to reject the framework-heavy EJB implementations. POCO should be used in the same context in .NET. Don't let frameworks dictate your object's design.
A DTO's only purpose is to transfer state, and should have no behavior. See Martin Fowler's explanation of a DTO for an example of the use of this pattern.
Here's the difference: POCO describes an approach to programming (good old fashioned object oriented programming), where DTO is a pattern that is used to "transfer data" using objects.
While you can treat POCOs like DTOs, you run the risk of creating an anemic domain model if you do so. Additionally, there's a mismatch in structure, since DTOs should be designed to transfer data, not to represent the true structure of the business domain. The result is that DTOs tend to be flatter than your actual domain.
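A small sketch of that flattening (hypothetical types):

using System.Collections.Generic;
using System.Linq;

public class Customer { public string Name { get; set; } }
public class OrderLine { public decimal Price { get; set; } public int Quantity { get; set; } }

// Domain POCO: structured, with behavior.
public class Order
{
    public Customer Customer { get; set; }
    public IList<OrderLine> Lines { get; set; }

    public decimal Total()
    {
        return Lines.Sum(line => line.Price * line.Quantity);
    }
}

// DTO: flat and behavior-free, shaped for transfer rather than for the domain.
public class OrderDTO
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}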
In a domain of any reasonable complexity, you're almost always better off creating separate domain POCOs and translating them to DTOs. DDD (domain driven design) defines the anti-corruption layer (another link here, but best thing to do is buy the book), which is a good structure that makes the segregation clear.
It's probably redundant for me to contribute since I already stated my position in my blog article, but the final paragraph of that article kind of sums things up:
So, in conclusion, learn to love the POCO, and make sure you don't spread any misinformation about it being the same thing as a DTO. DTOs are simple data containers used for moving data between the layers of an application. POCOs are full-fledged business objects with the one requirement that they are Persistence Ignorant (no get or save methods). Lastly, if you haven't checked out Jimmy Nilsson's book yet, pick it up from your local university stacks. It has examples in C# and it's a great read.
BTW, Patrick I read the POCO as a Lifestyle article, and I completely agree, that is a fantastic article. It's actually a section from the Jimmy Nilsson book that I recommended. I had no idea that it was available online. His book really is the best source of information I've found on POCO / DTO / Repository / and other DDD development practices.
POCO is simply an object that does not take a dependency on an external framework. It is PLAIN.
Whether a POCO has behaviour or not is immaterial.
A DTO may be POCO as may a domain object (which would typically be rich in behaviour).
Typically DTOs are more likely to take dependencies on external frameworks (e.g. attributes) for serialisation purposes, as they typically exist at the boundary of a system.
In typical Onion style architectures (often used within a broadly DDD approach) the domain layer is placed at the centre and so its objects should not, at this point, have dependencies outside of that layer.
I wrote an article for that topic: DTO vs Value Object vs POCO.
In short:
DTO != Value Object
DTO ⊂ POCO
Value Object ⊂ POCO
I think a DTO can be a POCO. DTO is more about the usage of the object while POCO is more of the style of the object (decoupled from architectural concepts).
One example where a POCO is something different from a DTO is when you're talking about POCOs inside your domain model/business logic model, which is a nice OO representation of your problem domain. You could use the POCOs throughout the whole application, but this could have some undesirable side effects, such as knowledge leaks. DTOs are, for instance, used from the service layer which the UI communicates with; the DTOs are a flat representation of the data, and are only used for providing the UI with data and communicating changes back to the service layer. The service layer is in charge of mapping the DTOs both ways to the POCO domain objects.
Update: Martin Fowler says that this approach is a heavy road to take, and should only be taken if there is a significant mismatch between the domain layer and the user interface.
TL;DR:
A DTO describes the pattern of state transfer. A POCO doesn't describe much of anything except that there is nothing special about it. It's another way of saying "object" in OOP. It comes from POJO (Java), coined by Martin Fowler who literally just describes it as a fancier name for 'object' because 'object' isn't very sexy and people were avoiding it as such.
Expanding...
Okay, to explain this in a far more high-brow way than I ever thought would be needed, beginning with your original question about DTOs:
A DTO is an object pattern used to transfer state between layers of concern. DTOs can have behavior (i.e. can technically be POCOs) so long as that behavior doesn't mutate the state. For example, one may have a method that serializes itself. For it to be a proper DTO, it needs to be a simple property bag; it needs to be clear that this object is not a strong model, it has no implied semantic meaning, and it doesn't enforce any form of business rule or invariant. It literally only exists to move data around.
A POCO is a plain object, but what is meant by 'plain' is that it is not special and does not have any specific requirements or conventions. It just means it's a CLR object with no implied pattern to it. A generic term. I've also heard it extended to describe the fact that it also isn't made to work with some other framework. So if your POCO has a bunch of EF decorations all over its properties, for example, then I'd argue it isn't a simple POCO, and that it's more in the realm of a DAO, which I would describe as a combination of a DTO and additional database concerns (e.g. mapping, etc.). POCOs are free and unencumbered, like the objects you learn to create in school.
Here are some examples of different kinds of object patterns to compare:
View Model: used to model data for a view. Usually has data annotations to assist binding and validation for a particular view (i.e. generally NOT a shared object) or, in this day and age, a particular view component (e.g. React). In MVVM, it also acts as a controller. It's more than a DTO; it's not transferring state, it's presenting it, or more specifically, forming that state in a way that is useful to a UI.
Value Object: used to represent values, should be immutable
Aggregate Root: used to manage state and invariants. Should not allow references to internal entities other than by ID
Handlers: used to respond to an event/message.
Attributes: used as decorations to deal with cross-cutting concerns. May only be allowed on certain object levels (e.g. property but not class, method but not property, etc.)
Service: used to perform complex tasks. Typically some form of facade.
Controller: used to control flow of requests and responses. Typically restricted to a particular protocol or acts as some sort of mediator; it has a particular responsibility.
Factory: used to configure and/or assemble complex objects for use when a constructor isn't good enough. Also used to make decisions on which objects need to be created at runtime.
Repository/DAO: used to access data. Typically exposes CRUD operations or is an object that represents the database schema; may be marked up with implementation-specific attributes. In fact, one of these schema DAO objects is actually another kind of DTO...
API Contracts: Likely to be marked up with serialization attributes. Typically needs to have public getters and setters and should be lightweight (not an overly complex graph); methods unrelated to serialization are not typical and discouraged.
These can be seen as just objects, but notice that most of them are generally tied to a pattern or have implied restrictions. So you could call them "objects", or you could be more specific about their intent and call them by what they are. This is also why we have design patterns: to describe complex concepts in a few words. DTO is a pattern. Aggregate root is a pattern. View Model is a pattern (e.g. MVC & MVVM).
A POCO doesn't describe a pattern. It is just a different way of referring to classes/objects in OOP which could be anything. Think of it as an abstract concept; they can be referring to anything. IMO, there's a one-way relationship though because once an object reaches the point where it can only serve one purpose cleanly, it is no longer a POCO. For example, once you mark up your class with decorations to make it work with some framework (i.e. 'instrumenting' it), it is no longer a POCO. Therefore I think there are some logical relationships like:
A DTO is a POCO (until it is instrumented)
A POCO might not be a DTO
A View Model is a POCO (until it is instrumented)
A POCO might not be a View Model
The point in making a distinction between the two is about keeping patterns clear and consistent in an effort not to cross concerns and lead to tight coupling. For example, suppose you have a business object that has methods to mutate state, but is also decorated to hell with EF decorations for saving to SQL Server AND JsonProperty attributes so that it can be sent back over an API endpoint. That object would be intolerant to change and would likely be littered with variants of properties (e.g. UserId, UserPk, UserKey, UserGuid, where some of them are marked up not to be saved to the DB and others marked up not to be serialized to JSON at the API endpoint).
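Something like this, say (a deliberately bad sketch; the attributes are real EF/Json.NET ones, the class itself is made up):

using System;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using Newtonsoft.Json;

// Anti-pattern: one class serving the domain, the database and the API at once.
public class User
{
    [Key]                // EF: this is the primary key...
    [JsonIgnore]         // ...but don't send it over the API
    public int UserPk { get; set; }

    [NotMapped]          // don't save this to the DB...
    [JsonProperty("id")] // ...but do serialize it to JSON
    public Guid UserGuid { get; set; }

    public void Disable() { /* business rule mixed in as well */ }
}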
So if you were to tell me something was a DTO, then I'd probably make sure it was never used for anything other than moving state around. If you told me something was a view model, then I'd probably make sure it wasn't getting saved to a database, and I'd know that it's ok to put 'hacky' things in there to make sure the data is usable by a UI. If you told me something was a Domain Model, then I'd probably make sure it had no dependencies on anything outside of the domain and certainly no dependencies on any technical implementation details (databases, services etc.), only abstractions. But if you told me something was a POCO, you wouldn't really be telling me much at all other than it is not and should not be instrumented.
History
Paraphrased from Fowler's explanation: in a world where objects were fancy (e.g. followed a particular pattern, had instrumentation, etc.), it somehow encouraged people to avoid using not-fancy objects to capture business logic. So they gave them a fancy name: POJO. If you want an example, the one he refers to is an "Entity Bean", which is one of those kinds of objects that have very specific conventions and requirements, etc. If you don't know what that is --> Java Beans.
In contrast, a POJO/POCO is just the regular old object that you'd learn to create in school.
A primary use case for a DTO is in returning data from a web service. In this instance, POCO and DTO are equivalent. Any behavior in the POCO would be removed when it is returned from a web service, so it doesn't really matter whether or not it has behavior.
DTO objects are used to deserialize data into objects from different sources. Those objects are NOT your model (POCO) objects. You need to transform them into your model (POCO) objects; the transformation is mostly a copy operation. You can fill your POCO objects directly from the source if it's an internal source, but that's not advisable if it's an external source. External sources have APIs with descriptions of the schema they use. It's much easier then to load the incoming data into a DTO and afterwards transform it into your POCOs. Yes, it's an extra step, but with a reason. The rule is to load the data from your source into an object. It can be JSON, XML, whatever. When it's loaded, transform that data into what you need in your model. So most of the time the DTO is an object image of the external source. Sometimes you even get the schemas from the source providers, which lets you deserialize even more easily; XML works like that with XSDs.
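As a sketch of that flow (hypothetical names; Json.NET assumed for the deserialization step):

using Newtonsoft.Json;

// DTO mirroring the external source's schema, naming conventions and all.
public class ExternalUserDTO
{
    public string user_name { get; set; }
    public string email_addr { get; set; }
}

// Your own model (POCO).
public class User
{
    public string Name { get; set; }
    public string Email { get; set; }
}

public static class UserMapper
{
    public static User FromExternalJson(string json)
    {
        // Deserialize into the DTO first, then copy into the model.
        ExternalUserDTO dto = JsonConvert.DeserializeObject<ExternalUserDTO>(json);
        return new User { Name = dto.user_name, Email = dto.email_addr };
    }
}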
Here is the general rule: DTO == evil and an indicator of over-engineered software. POCO == good. 'Enterprise' patterns have destroyed the brains of a lot of people in the Java EE world. Please don't repeat the mistake in .NET land.
Don't even call them DTOs. They're called models... period. Models never have behavior. I don't know who came up with this dumb term DTO; it must be a .NET thing is all I can figure. Think of view models in MVC: same dam** thing. Models are used to transfer state between layers server-side or over the wire, period; they are all models. Properties with data. These are the models you pass over the wire. Models, models, models. That's it.
I wish the stupid term DTO would go away from our vocabulary.