I have a graph of objects:
School-->Classes-->Students.
I want to set it up in a way that I can send a School back to the client and have it access its Classes and Students in a lazy-loading way.
Is that possible?
In brief: no.
You can either:
send back all the data needed (including classes and students with your school entity) in a single call ("eager loading")
or:
have separate methods on your WCF service to retrieve detail data in separate calls (something like: List<Class> GetClassesForSchool(int schoolId), List<Student> GetStudentsForClass(int classId)) - see the sketch below
Lazy loading per se only works as long as your Entity Framework object context is still around to be queried for more data - which is certainly not the case when you send entities across the wire using WCF.
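For illustration, the separate-methods option might look roughly like this as a WCF service contract. All the types here are placeholders; once serialized they are plain data with no EF context behind them:

using System.Collections.Generic;
using System.ServiceModel;

// Placeholder types standing in for whatever serializable shapes you return.
public class School  { public int Id { get; set; } public string Name { get; set; } }
public class Class   { public int Id { get; set; } public string Name { get; set; } }
public class Student { public int Id { get; set; } public string Name { get; set; } }

[ServiceContract]
public interface ISchoolService
{
    [OperationContract]
    School GetSchool(int schoolId);                    // main entity only

    [OperationContract]
    List<Class> GetClassesForSchool(int schoolId);     // detail call, made on demand

    [OperationContract]
    List<Student> GetStudentsForClass(int classId);    // detail call, made on demand
}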
I don't think so, because your entity travels across different tiers, and the tier that holds the database connection can't be reached from the other tiers without your intervention.
You'll need to tailor your own solution to do that, or just use data transfer objects, which carry exactly the information a given view needs and none of the information that would be useless to it.
Update:
Read this article if you want to learn more about DTO pattern:
http://aspalliance.com/1215_Implementing_a_Generic_Data_Transfer_Object_in_C.2
Related
I have an application which queries a local store of data (currently backed by an XML file), using Linq to Objects. Periodically, another thread in the application will query a remote server for updated data, and if it exists, will download all of the remote data, deserialise it and replace the local objects with newly deserialised ones before saving the new XML to disk.
I have decided to replace the XML file with a SQLite database, and I intend to use Entity Framework to interact with it. This has prompted me to re-look at the way external changes are applied, and I've decided that only data where the remote entity's updated_at property is newer than the local entity's will be updated (rather than the current approach of replacing the whole data set).
So I must write a method to download the external changes and update or insert the relevant entities into the SQLite database.
What I don't understand is where, in architectural terms, this method should sit. My (potentially naive) thinking is that a generic UpdateFromRemoteObjects<T>(List<T> updatedItems) method could sit in the DbContext class, and would accept a list of entities and update the appropriate DbSet. But this feels like it may be too closely coupled to the DbContext. Should I use a repository to provide a layer to implement this? Or is another application architecture more appropriate?
Many people start with CRC when designing components: Classes have Responsibilities and Collaborators
First consider the single responsibility principle: a class with two or more responsibilities is probably doing too much. This is your reason for not putting the method on the DbContext: this updating stuff is a new distinct responsibility, so create a class for it.
I can see this class doing 2 things: QueryRemoteServerForChanges and UpdateLocalObjects.
Now consider its Collaborators. It seems to need two: an instance of DbContext for the local changes, and an instance of whatever gives access to the remote data.
So: not a repository, and not a layer, but definitely a class with a responsibility.
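A rough sketch of what such a class and its two collaborators could look like, assuming the EF DbContext/DbSet API and a made-up Item entity carrying the updated_at idea from the question:

using System;
using System.Collections.Generic;
using System.Data.Entity;

// Assumed local entity; the question only mentions an updated_at property.
public class Item
{
    public int Id { get; set; }
    public DateTime UpdatedAt { get; set; }
    // ...other fields...
}

public class LocalDbContext : DbContext
{
    public DbSet<Item> Items { get; set; }
}

// Hypothetical collaborator that knows how to query the remote server.
public interface IRemoteDataSource
{
    List<Item> GetChangedItems(DateTime since);
}

// The class with the single responsibility discussed above: it collaborates with
// the DbContext (local data) and the remote data source, and nothing else.
public class RemoteChangeSynchronizer
{
    private readonly LocalDbContext _db;
    private readonly IRemoteDataSource _remote;

    public RemoteChangeSynchronizer(LocalDbContext db, IRemoteDataSource remote)
    {
        _db = db;
        _remote = remote;
    }

    public void Synchronize(DateTime since)
    {
        foreach (var remoteItem in _remote.GetChangedItems(since))
        {
            var local = _db.Items.Find(remoteItem.Id);
            if (local == null)
            {
                _db.Items.Add(remoteItem);                      // insert new entity
            }
            else if (remoteItem.UpdatedAt > local.UpdatedAt)    // update only if the remote copy is newer
            {
                _db.Entry(local).CurrentValues.SetValues(remoteItem);
            }
        }
        _db.SaveChanges();
    }
}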
I am writing a smart client WPF application using MVVM that communicates with a WCF service layer that contains the business logic and domain objects and uses NHibernate to manage persistence. We are in control of both sides of the wire.
Currently, I am working on creating a screen to edit product details; it has a tab control, with each tab representing some aspect of the Product, such as Main Details, Product Class, Container Type and so on. In the end, there will probably be at least 5 of these tabs.
Up to now I have been working on transforming simple domain objects to DTOs using SetResultTransformer and this has been working quite nicely.
Now that I am getting to a more complicated object I am getting a bit stuck. I would like to return a DTO to be displayed that contains the Main Product details, categories and classes. As far as categories and classes are concerned I would not want to return every single property of the domain object.
Questions:
1) How do people go about creating a DTO where there are several one-to-many collections to return, as in this example?
2) Are there any concerns about the DTO becoming too large?
3) When sending the DTO back to the back end, is it better to send the same type of DTO with the updated values, or some other, more command-oriented DTO?
Thanks for any help
Alex
We are currently using pretty big DTOs and it is working pretty well. NHibernate does a lot of lazy loading, so this helps with big objects.
We are using bags for one-to-many relations; they are lazy loaded and work pretty well.
Depending on the type of application, lazy loading can be a bit of a problem. We had some problems with our rich client application with big DTOs, but with some planning and a sound architecture it works pretty well.
I don't know if large DTOs are really a problem with NHibernate, but so far we haven't had any problems.
We are sending the whole object back and forth and it is doing well. NHibernate updates just the changed fields and this is really nice.
I wouldn't serialize the NHibernate objects over web services or something like that (I don't know the WCF service layer and how it communicates with your application). If I am transferring data through web services, I generate new data objects, fill them accordingly, transfer them back and forth, and update the NHibernate objects with those.
Have you tried AutoMapper? I do all my DTO mappings with AutoMapper and it works like a charm.
Have a look at AutoMapper. I'm sure you'll like it.
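For what it's worth, a minimal AutoMapper setup for the product/categories/classes case above could look like this. The domain and DTO class names are just illustrations, and the instance-based MapperConfiguration API shown here is the one in recent AutoMapper versions:

using System.Collections.Generic;
using AutoMapper;

// Stand-ins for the NHibernate-mapped domain classes (illustrative only).
public class Product      { public int Id { get; set; } public string Name { get; set; } public IList<Category> Categories { get; set; } public IList<ProductClass> Classes { get; set; } }
public class Category     { public int Id { get; set; } public string Name { get; set; } public string InternalNotes { get; set; } }
public class ProductClass { public int Id { get; set; } public string Name { get; set; } }

// The DTOs carry only what the screen needs; InternalNotes, for example, never leaves the service.
public class ProductDto      { public int Id { get; set; } public string Name { get; set; } public List<CategoryDto> Categories { get; set; } public List<ProductClassDto> Classes { get; set; } }
public class CategoryDto     { public int Id { get; set; } public string Name { get; set; } }
public class ProductClassDto { public int Id { get; set; } public string Name { get; set; } }

public static class ProductMappings
{
    public static IMapper Build()
    {
        var config = new MapperConfiguration(cfg =>
        {
            cfg.CreateMap<Product, ProductDto>();
            cfg.CreateMap<Category, CategoryDto>();
            cfg.CreateMap<ProductClass, ProductClassDto>();
        });
        return config.CreateMapper();
    }
}

// Usage: var dto = ProductMappings.Build().Map<ProductDto>(product);
// The nested collections are mapped automatically because their element maps are registered.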
Please help me choose the right way to use entities in an n-tier web application.
At the moment I have the following assemblies in it:
The Model (custom entities) describes the fields of the classes that the application uses.
The Validation layer validates data integrity from the UI using reflection attributes (it checks data in all layers).
The BusinessLogicLayer is a business facade for additional logic and caching that uses abstract data providers from the DataAccessLayer.
The DataAccessLayer overrides the abstract data providers using a LinqToSql data context and Linq queries. And here is the point that makes me feel I've gone wrong...
My DataLayer, right before it sends data to the business layer, maps (converts) the data retrieved from the DB to the Model classes (custom entities) using mappers. It looks like this:
internal static model.City ToModel(this City city)
{
    if (city == null)
    {
        return null;
    }

    return new model.City
    {
        Id = city.Id,
        CountryId = city.CountryId,
        AddedDate = city.AddedDate,
        AddedBy = city.AddedBy,
        Title = city.Title
    };
}
So the mapper maps the data object to the corresponding model. Is that the right and common way to work with entities, or should I use the data objects as entities directly (to save time)? Am I clear enough?
You could use your data entities in your project if they are POCOs. Otherwise I would create separate models, as you have done. But do keep them in a separate assembly (not in the DataAccess project).
But I would not expose them through a webservice.
Other suggestions
IMHO people overuse layers. Most applications do not need a lot of layers. My current client had an architecture like yours for all their applications. The problem was that only the data access layer and the presentation layer had any logic in them; all the other layers just took data from the layer below, transformed it, and sent it to the layer above.
The first thing I did was to tell them to scrap all the layers and instead use something like this (requires an IoC container):
Core (contains business rules and data access through an ORM)
Specification (separated interface pattern; contains service interfaces and models)
User interface (might be a web service, WinForms, a web app)
That works for most applications. If you find that Core grows and becomes too large to handle, you can split it up without affecting any of the user interfaces.
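A bare-bones illustration of that split, with made-up names (the IoC registration at the end depends on whichever container you choose):

// Specification assembly: models and service interfaces only, no ORM references.
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public interface IOrderService
{
    Order GetOrder(int id);
}

// Core assembly: business rules plus data access (through the ORM) behind the interface.
public class OrderService : IOrderService
{
    public Order GetOrder(int id)
    {
        // ...query the ORM, apply business rules, return the model...
        return new Order { Id = id, Total = 0m };
    }
}

// User interface assembly: resolves IOrderService from the IoC container and never
// references Core directly, so Core can later be split without touching the UI.
// Registration is container-specific, roughly: container.Register<IOrderService, OrderService>();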
You are already using an ORM; have you thought about using a validation block (FluentValidation or DataAnnotations) for validation? It makes it easy to validate your models in all layers.
It may be common practice to send out DTOs from a service boundary (WCF service, etc.), but if you are directly using your "entities" in your presentation model, I don't see any benefit in doing that.
As to the code snippet you have provided, why not use AutoMapper? It eliminates writing boilerplate mapping code and does it for you, provided you have a set of conventions in place.
Get rid of the model now; removing it later will require refactoring the whole application. The last project I worked on used this architecture, and maintaining the DTO layer and the mappings to the database model layer is a huge pain in the arse that offers no useful benefits.
One of the main annoyances is that LinqToSql does not effectively support a disconnected data model. You cannot update a database table by creating a new DB entity with a primary key matching an existing record and sticking it into the data context; you have to first retrieve the entity from the database, update it, then commit the changes. Managing this results in really nasty update methods that map all the properties from your DTOs onto your LinqToSql classes, and it also breaks the whole deferred execution model of LinqToSql.
Don't even get me started on the problems it causes with properties on parent classes that are collections of child DTOs (e.g. a customer DTO with an Orders property that contains a collection of order DTOs); managing those mappings is really fiddly. I had to do some extensive optimisation because retrieving a few hundred records ended up causing LinqToSql to make 200,000 database calls (admittedly there was also some pretty dumbass code as well, but you get the picture).
The only valid reason to use DTOs is if you want to have multiple pluggable data access layers, e.g. LinqToSql and NHibernate for supporting different DB servers. That way you can swap out the data access layer without having to change any other layers. If you don't need to do this, save yourself a world of pain and just use the LinqToSql entities.
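To make the "retrieve it first, then update, then commit" point concrete, the LinqToSql update path for a disconnected DTO typically ends up looking something like this; ShopDataContext, the Customer entity and CustomerDto are all illustrative names, with the context assumed to be the designer-generated one:

using System.Linq;

// Hypothetical DTO coming back from the client.
public class CustomerDto
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

public class CustomerUpdater
{
    public void UpdateCustomer(CustomerDto dto)
    {
        using (var ctx = new ShopDataContext())   // assumed LinqToSql designer-generated context
        {
            // Per the point above: fetch the existing row, copy the DTO values across, then commit.
            var dbCustomer = ctx.Customers.Single(c => c.CustomerId == dto.CustomerId);
            dbCustomer.Name = dto.Name;
            dbCustomer.Email = dto.Email;
            ctx.SubmitChanges();                  // issues an UPDATE for the modified members
        }
    }
}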
In many posts concerning this topic I come across very simple examples that do not answer my question.
Let's say I have a document table and a user table. In the DAL, written in ADO.NET, I have a method to retrieve all documents matching some criteria. Now in the UI I have a case where I need to show this list along with the names of the creators.
Up to now I have done it with one method in the DAL containing a JOIN statement.
However, every time I have such a complex method I have to do custom mapping to some object that doesn't map 1:1 to the DB.
Should it be put into another layer? If so, I would have to give up the join query in favour of iterating through the results and querying each document's author... which doesn't make sense (performance).
What is the best approach for such scenarios?
For your UI my suggestion is to have a DTO (a ViewModel, for the MVP/MVC people) hold the user's data and the corresponding list of documents.
Custom mapping will always be present, so I suggest you take a look at AutoMapper to ease those mapping pains.
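One possible shape for that DTO/viewmodel, with invented field names based on the document/user tables in the question:

using System;
using System.Collections.Generic;

// The creator's data plus the documents that belong to them, so the UI gets
// everything from the single JOIN query and never re-queries per document.
public class UserDocumentsDto
{
    public int UserId { get; set; }
    public string UserName { get; set; }
    public List<DocumentDto> Documents { get; set; }
}

public class DocumentDto
{
    public int DocumentId { get; set; }
    public string Title { get; set; }
    public DateTime CreatedOn { get; set; }
}

// The DAL keeps its single JOIN and, while iterating the data reader, projects
// the rows into these objects, grouping them by user in memory.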
I ran into the same thing in the past while creating my own custom data access layers. You want your objects to map one to one to your DB, but many times you just need to write one-off custom functions to retrieve joined data. I would not put these custom actions into their own layer.
At times, what I have done was create a general class that was responsible for retrieving data for grids, combo boxes, etc., joining information from a number of tables. This class would return custom objects containing the retrieved results. If you are not satisfied with a tool that performs automatic custom mapping for you, I would suggest creating your own auto-mapping class builder utility.
As long as you split your app into data access, business, and UI layers I think you are headed in the right direction.
I'm building a Web application I would like to scale to many users. Also, I need to expose functionality to trusted third parties via Web Services.
I'm using LLBLGen to generate the data access layer (using SQL Server 2008). The goal is to build a business logic layer that shields the Web App from the details of the DAL and, of course, to provide an extra level of validation beyond the DAL. Also, as far as I can tell right now, the Web Service will essentially be a thin wrapper over the BLL.
The DAL, of course, has its own set of entity objects, for instance, CustomerEntity, ProductEntity, and so forth. However, I don't want the presentation layer to have access to these objects directly, as they contain DAL specific methods and the assembly is specific to the DAL and so on. So, the idea is to create Data Transfer Objects (DTO). The idea is that these will be, essentially, plain old C#/.NET objects that have all the fields of, say, a CustomerEntity that are actually the database table Customer but none of the other stuff, except maybe some IsChanged/IsDirty properties. So, there would be CustomerDTO, ProductDTO, etc. I assume these would inherit from a base DTO class. I believe I can generate these with some template for LLBLGen, but I'm not sure about it yet.
So, the idea is that the BLL will expose its functionality by accepting and returning these DTO objects. I think the Web Service will handle converting these objects to XML for the third parties using it, many may not be using .NET (also, some things will be script callable from AJAX calls on the Web App, using JSON).
I'm not sure the best way to design this and exactly how to go forward. Here are some issues:
1) How should this be exposed to the clients (the presentation tier and the Web Service code)?
I was thinking that there would be one public class that has these methods, and every call would be an atomic operation:
InsertDTO, UpdateDTO, DeleteDTO, GetProducts, GetProductByCustomer, and so forth ...
Then the clients would just call these methods and pass in the appropriate arguments, typically a DTO.
Is this a good, workable approach?
2) What to return from these methods? Obviously, the Get/Fetch sort of methods will return DTOs. But what about Inserts? Part of the signature could be:
InsertDTO(DTO dto)
However, when inserting, what should be returned? I want to be notified of errors. I use autoincrementing primary keys for some tables (though a few tables have natural keys, particularly many-to-many ones).
One option I thought about was a Result class:
class Result
{
    public Exception Error { get; set; }
    public DTO AffectedObject { get; set; }
}
So, on an insert, the DTO would get its ID property (like CustomerDTO.CustomerID) set and then be put in this result object. The client will know there was an error if Result.Error != null, and it can get the ID from the Result.AffectedObject property.
Is this a good approach? One problem is that it seems like it is passing a lot of redundant data back and forth (when it's just the ID). I don't think adding an "int NewID" property would be clean, because some inserts will not have an autoincrementing key like that. Another issue is that I don't think Web Services would handle this well: I believe they would just return the base DTO for AffectedObject in the Result class, rather than the derived DTO. I suppose I could solve this by having a LOT of different kinds of Result objects (maybe derived from a base Result that holds the Error property), but that doesn't seem very clean.
All right, I hope this isn't too wordy but I want to be clear.
1: That is a pretty standard approach that lends itself well to a "repository" implementation for the most unit-testable design.
2: Exceptions (which should be declared as "faults" on the WCF boundary, btw) will get raised automatically. You don't need to handle that directly. For data - there are three common approaches:
use ref on the contract (not very pretty)
return the (updated) object - i.e. public DTO SomeOperation(DTO item);
return just the updated identity information (primary-key / timestamp / etc)
One thing about all of these is that they don't necessitate a different type per operation (contrast your Result class, which would need to be duplicated per DTO).
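As a sketch of the second option on the WCF boundary (type names are placeholders), with the error path declared as a fault rather than carried in a Result property:

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class CustomerDto
{
    [DataMember] public int CustomerID { get; set; }
    [DataMember] public string Name { get; set; }
}

[DataContract]
public class ValidationFault          // surfaced to the client as a declared fault
{
    [DataMember] public string Message { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    // Return the (updated) object: the caller reads the generated CustomerID from
    // the returned DTO, with no per-DTO Result wrapper required.
    [OperationContract]
    [FaultContract(typeof(ValidationFault))]
    CustomerDto InsertCustomer(CustomerDto item);
}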
Q1: You can think of your WCF Data Contract composite types as DTOs to solve this problem. This way your UI layer only has access to the DataContract's DataMember properties. Your atomic operations would be the methods exposed by your WCF Interface.
Q2: Configure your Response data contracts to return a new custom type with your primary keys etc... WCF can also be configured to bubble exceptions back to the UI.
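For Q2, such a response contract can be as small as this (names are placeholders):

using System.Runtime.Serialization;

// Carries back only the generated key (plus anything else the client needs)
// instead of echoing the whole DTO.
[DataContract]
public class InsertProductResponse
{
    [DataMember] public int NewProductID { get; set; }
}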