DTO
I'm building a Web application I would like to scale to many users. Also, I need to expose functionality to trusted third parties via Web Services.
I'm using LLBLGen to generate the data access layer (using SQL Server 2008). The goal is to build a business logic layer that shields the Web App from the details of DAL and, of course, to provide an extra level of validation beyond the DAL. Also, as far as I can tell right now, the Web Service will essentially be a thin wrapper over the BLL.
The DAL, of course, has its own set of entity objects, for instance CustomerEntity, ProductEntity, and so forth. However, I don't want the presentation layer to have access to these objects directly, as they contain DAL-specific methods, the assembly is specific to the DAL, and so on. So, the plan is to create Data Transfer Objects (DTOs): essentially, plain old C#/.NET objects that have all the fields of, say, a CustomerEntity that actually map to the database table Customer, but none of the other stuff, except maybe some IsChanged/IsDirty properties. So there would be CustomerDTO, ProductDTO, etc. I assume these would inherit from a base DTO class. I believe I can generate these with some template for LLBLGen, but I'm not sure about that yet.
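To illustrate, a minimal sketch of what I have in mind (the property names here are just assumptions, not generated LLBLGen output):

public abstract class DtoBase
{
    // simple change-tracking flag, as described above
    public bool IsDirty { get; set; }
}

public class CustomerDTO : DtoBase
{
    public int CustomerID { get; set; }
    public string Name { get; set; }
    // ... one plain property per column of the Customer table
}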
So, the idea is that the BLL will expose its functionality by accepting and returning these DTO objects. I think the Web Service will handle converting these objects to XML for the third parties using it, since many may not be using .NET (also, some things will be script-callable from AJAX calls on the Web App, using JSON).
I'm not sure the best way to design this and exactly how to go forward. Here are some issues:
1) How should this be exposed to the clients (the presentation tier and the Web Service code)?
I was thinking that there would be one public class that has these methods; every call would be an atomic operation:
InsertDTO, UpdateDTO, DeleteDTO, GetProducts, GetProductByCustomer, and so forth ...
Then the clients would just call these methods and pass in the appropriate arguments, typically a DTO.
Is this a good, workable approach?
2) What to return from these methods? Obviously, the Get/Fetch sort of methods will return DTOs. But what about inserts? Part of the signature could be:
InsertDTO(DTO dto)
However, when inserting, what should be returned? I want to be notified of errors, and I use autoincrementing primary keys for some tables (though a few tables have natural keys, particularly many-to-many ones).
One option I thought about was a Result class:
class Result
{
    public Exception Error { get; set; }
    public DTO AffectedObject { get; set; }
}
So, on an insert, the DTO would get its ID property (like CustomerDTO.CustomerID) set and then be put in this Result object. The client would know there was an error if Result.Error != null, and it would know the new ID from the Result.AffectedObject property.
Is this a good approach? One problem is that it seems to pass a lot of redundant data back and forth (when all that's really needed is the ID). I don't think adding an "int NewID" property would be clean, because some inserts will not have an autoincrementing key like that. Another issue is that I don't think Web Services would handle this well: I believe they would just return the base DTO for AffectedObject in the Result class, rather than the derived DTO. I suppose I could solve this by having a LOT of different kinds of Result objects (maybe derived from a base Result, inheriting the Error property), but that doesn't seem very clean.
All right, I hope this isn't too wordy but I want to be clear.
1: That is a pretty standard approach, and one that lends itself well to a "repository" implementation for the best unit-testable approach.
2: Exceptions (which should be declared as "faults" on the WCF boundary, btw) will get raised automatically. You don't need to handle that directly. For data - there are three common approaches:
use ref on the contract (not very pretty)
return the (updated) object - i.e. public DTO SomeOperation(DTO item);
return just the updated identity information (primary-key / timestamp / etc)
One thing about all of these is that none of them necessitates a different type per operation (contrast your Result class, which would need to be duplicated per DTO).
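As a rough sketch, the three options might look like this on a WCF contract (the interface and method names are made up for illustration; CustomerDTO is the DTO from the question):

using System.ServiceModel;

[ServiceContract]
public interface ICustomerService
{
    // option 1: ref on the contract (works, but not very pretty)
    [OperationContract]
    void InsertCustomerByRef(ref CustomerDTO item);

    // option 2: return the updated object, with server-assigned values filled in
    [OperationContract]
    CustomerDTO InsertCustomer(CustomerDTO item);

    // option 3: return just the new identity information
    [OperationContract]
    int InsertCustomerReturningId(CustomerDTO item);
}

Note that each shape works for any DTO, so no per-DTO Result type is needed.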
Q1: You can think of your WCF Data Contract composite types as DTOs to solve this problem. This way your UI layer only has access to the DataContract's DataMember properties. Your atomic operations would be the methods exposed by your WCF Interface.
Q2: Configure your response data contracts to return a new custom type with your primary keys, etc. WCF can also be configured to bubble exceptions back to the UI (as faults).
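A minimal sketch of such a response contract (the type and member names are assumptions):

using System.Runtime.Serialization;

[DataContract]
public class InsertCustomerResponse
{
    // the server-assigned primary key, when there is one
    [DataMember]
    public int CustomerID { get; set; }
}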
Related
Design-wise, is it a good idea to create multiple data contracts for the same entity?
For example, I have a table called [Person]; at the beginning there are only two fields: ID and Name. I use NHibernate to map the entity and mark it as a data contract to expose the original entity to the client.
With further development, more and more columns are added to the table: height, sex, address... and so on.
When a client tries to retrieve a Person object, a large object with lots of useless properties is delivered as well.
Is it a good design to create another class, such as [PersonWithNameOnly] or [PersonLite], for methods that only need a lite version of that DTO? I'm worried that it will create a lot of data contracts.
Yes, it is a good practice to expose a ViewModel containing only what is required by the client.
The principle is to work only with the data the current layer needs, as in a microservices architecture.
However, because developing a new ViewModel for each endpoint can be time-consuming, you will see some projects in which one ViewModel, containing a lot of properties, is used for several endpoints. But obviously, that is not the best practice.
That is why people try to provide solutions to this problem, such as GraphQL, which is able to return only the fields the client has requested.
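As a sketch of the idea (the extra members are taken from the question's example; the attributes are standard WCF data-contract usage):

using System.Runtime.Serialization;

[DataContract]
public class PersonDto
{
    [DataMember] public int ID { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public int Height { get; set; }
    [DataMember] public string Sex { get; set; }
    [DataMember] public string Address { get; set; }
    // ... keeps growing with the table
}

// slimmer contract for the endpoints that only need identity and name
[DataContract]
public class PersonLite
{
    [DataMember] public int ID { get; set; }
    [DataMember] public string Name { get; set; }
}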
Hope it is a bit clearer.
So I'm currently working on a project with a team, and my team and I have come across a certain design scenario that we are trying to come up with a solution for.
Background Info of Current Project Implementation:
This problem involves three main projects in our solution: Repository, Models, and Services. In case it isn't obvious, the purpose of each project is as follows: the Models project contains models of all the data we store in our database; the Repository project has one main database access class that uses generics to interact with different tables depending on the model passed in; and lastly, the Services project contains classes that interface data between the front end and the repository, where in general each service class maps 1-to-1 to a model class. As one would expect, the build dependencies are: Repository relies on Models, and Services relies on both projects.
The Issue:
The current issue we are encountering is that we need a way to ensure that if a developer attempts to query or interact with a specific type of object (call it ModelA), then a specific set of filters is always included by default (these filters are partially based on whether a particular user has permission to view certain objects in a list). A developer should be able to override this filter.
What we want to avoid doing is having an if clause in the repository classes that says "if you're updating this model type, add these filters".
Solutions we have thought of / considered:
One solution we are currently considering is having a function in ServiceA (the service corresponding to ModelA) that appends these filters to a given query, and then requiring that anyone who requests the db context for a model pass in a function that manipulates filtering in some fashion (in other words, to interact with ModelA they would pass in the filter function from ServiceA). The issue with this solution is that a developer always needs to be aware that whenever they interact with ModelA, they must pass in the function from ServiceA. Also, because we don't want every model to enforce certain filter options, we would likely make this an optional parameter, which might then cause issues where developers simply forget to include the function when interacting with ModelA.
Another solution we considered is to have an attribute (let's call it DefaultFilterAttribute) on ModelA that stores a class type that should implement a particular interface (called IFilterProvider). ServiceA would implement this interface, and ModelA's attribute would be given ServiceA as a type. Then the repository methods can check if the entity passed in has a DefaultFilterAttribute on it, and then simply call the method implemented by the class attached to the attribute. Unfortunately, as some of you might have noticed, the way our project dependencies are currently set up, we can't really implement a solution like this.
So I'm wondering if there is a clean solution to this problem, or if potentially we are thinking about the problem and/or design pattern incorrectly, and should be taking a completely different approach.
I think you're making this unnecessarily complex. What you're describing is pretty much the entire purpose of a service layer. Presumably, you'd have something like GetModelAList (that's actually a pretty bad method for a service, but just for illustration). The logic of applying certain filters automatically, then, is encapsulated in that method. The application doesn't know or care how that data is retrieved; it just knows if it wants a list of ModelA instances, it calls that method.
If you then want a way to not apply those filters, you can provide another method such as GetModelAListUnfiltered, or pass a boolean or something in to the original method that determines whether filters are automatically applied. Really you can handle this however you want, but the point is that it's all encapsulated in your service.
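A minimal sketch of that encapsulation, assuming EF-style IQueryable access (every name here is hypothetical):

using System.Collections.Generic;
using System.Linq;

public class ModelAService
{
    private readonly AppDbContext _db;     // assumed EF DbContext exposing a ModelAs set
    private readonly ICurrentUser _user;   // assumed source of the caller's permissions

    public ModelAService(AppDbContext db, ICurrentUser user)
    {
        _db = db;
        _user = user;
    }

    // The default, permission-aware filters live in exactly one place.
    public List<ModelA> GetModelAList()
    {
        var visibleGroups = _user.VisibleGroupIds;   // materialized so EF can translate the query
        return _db.ModelAs
                  .Where(m => visibleGroups.Contains(m.GroupId))
                  .ToList();
    }

    // Explicit opt-out for the rare caller that must bypass the defaults.
    public List<ModelA> GetModelAListUnfiltered()
    {
        return _db.ModelAs.ToList();
    }
}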
Lastly, you haven't specified exactly what your repository is doing, but a repository should be extremely simple, really just returning sets of all objects. Logic like what you're talking about belongs in a service layer. However, even that only applies if you're doing direct database access using something like Dapper or ADO.NET. If you're using a full-fledged ORM like Entity Framework, throw away your repository layer entirely. Yes, you heard me correctly: throw it away completely. A repository wrapped around an ORM is a useless abstraction, only serving to add more code that needs to be maintained and tested for no good reason whatsoever. The benefits some level of abstraction gives you are already provided by the service layer.
Using this:
https://genericunitofworkandrepositories.codeplex.com/
and the following set of blog posts:
http://blog.longle.net/2013/05/11/genericizing-the-unit-of-work-pattern-repository-pattern-with-entity-framework-in-mvc/
We are trying to use those repositories with Breeze, since it handles client-side JavaScript and OData very well.
I was wondering how we could use these with Breeze to handle overriding the BeforeSaveEntity correctly.
We have quite a bit of business logic that needs to happen during the save (modifying properties like ModifiedBy, ModifiedTime, CreatedBy, etc.), but when we change those, they aren't updated by Breeze, so we have to re-query after the save (we've tried manually mapping the changes back, but it requires us to duplicate all of the business logic).
Our second option was to check the type of each entity and then request the correct repository for it, handle the save internally, and then do a new get request on the client to fetch the updated information. This is chatty, though, so we were hoping there is a better way. What would the correct way be to update these objects while bypassing Breeze's save, without returning an error or having to re-fetch the data afterward?
Any examples of Breeze with Business Logic during the save would be very helpful, especially if it happens in a service, repository or something else besides directly in the BeforeSaveEntity method.
This is many questions rolled into one and each is a big topic. The best I can do is point you in some directions.
Before I get rolling, let me explain why you're not seeing the effects of setting properties like ModifiedBy, ModifiedTime, and CreatedBy. The EFContextProvider does not update every property of the modified entities, but rather only those properties mentioned in the EntityInfo.OriginalValuesMap, a dictionary of the property names and original values of just the properties that have changed. If you want to save a property that is only set on the server, just add it to the original values map:
// inside a BeforeSaveEntity override, given an EntityInfo named entityInfo:
var map = entityInfo.OriginalValuesMap;
map["ModifiedBy"] = null;    // the original value does not matter
map["ModifiedTime"] = null;
Now Breeze knows to save these properties as well and their new values will be returned to the client.
Let's return to the bigger picture.
Breeze is first and foremost a client-side JavaScript library. You can do pretty much whatever you want on the server side and make Breeze happy about it, as long as your server speaks HTTP and JSON.
Writing a server that provides all the capabilities you need is not trivial no matter what technology you favor. The authors of Breeze offer some .NET components out of the box to make your job easier, especially when you choose the Web API, EF and SQL Server stacks.
Our .NET demos typically throw everything into one web application. That's not how we roll in practice. In real life we would never instantiate a Breeze EFContextProvider in our Web API controller. That controller (or multiple controllers) would delegate to an external class that is responsible for business logic and data access, perhaps a repository or unit-of-work (UoW) class.
Repository pattern with Breeze .NET components
We tend to create separate projects for the model (POCOs usually), data access (ORM), and web (Web API plus client assets). You'll see this kind of separation in the DocCode Sample and also in John Papa's Code Camper sample, the companion to his Pluralsight course "Building Apps with Angular and Breeze".
Those samples also demonstrate an implementation of the repository pattern that blends the responsibilities of multiple repositories and UoW in one class. This makes sense for the small models in these samples. There is nothing to stop you from refactoring the repositories into separate classes.
We keep our repository class in the same project as the EF data access material as we see no particular value in creating yet another project for this small purpose. It's not difficult to refactor into a separate project if you're determined to do so.
Both the Breeze and Code Camper samples concentrate on Breeze client development. They are thin on server-side logic. That said, you will find valuable clues for applying custom business logic in the BeforeSaveEntities extension point in the "NorthwindRepository.cs" and "NorthwindEntitySaveGuard.cs" files in the DocCode sample. You'll see how to restrict saves to certain types and certain records of those types, based on the user who is making the request.
The logic can be overwhelming if you try to channel all save-changes requests through a single endpoint. You don't have to do that. You could have several save endpoints, each dedicated to a particular business operation that is limited to inserting/updating/deleting entities of just a few types in a highly specific manner. You can be as granular as you please. See "Named Saves" in the "Saving Entities" topic.
Have it your way
Now there are a gazillion ways to implement repository and UoW patterns.
You could go the way set forth by the post you cited. In that case, you don't need the Breeze .NET components. It's pretty trivial to wire up your Web API query methods (IQueryable or not) to repository methods that return IQueryable (or just objects). The Web API doesn't have to know if you've got a Breeze EFContextProvider behind the scenes or something completely different.
Handling the Breeze client's SaveChanges request is a bit trickier. Maybe you can derive from ContextProvider or EFContextProvider; maybe not. Study the "ContextProvider.cs" documentation and the source code, especially the SaveChanges method, and you'll see what you need to do to keep Breeze client happy and interface with however you want to handle change-set saves with your UoW.
Assuming you change nothing on the client-side (that's an assumption, not a given ... you can change the save protocol if you want), your SaveChanges needs to do only two things:
Interpret the "saveBundle" from the client.
Return something structurally similar to the SaveResult.
The saveBundle is a JSON package that you probably don't want to unpack yourself. Fortunately, you can derive a class from ContextProvider that you use simply to turn the saveBundle into a "SaveMap", a dictionary of EntityInfo objects keyed by entity type, which is pretty much what anyone would want to work with when analyzing a change-set for validation and save.
The following might do the trick:
using System;
using System.Collections.Generic;
using System.Data;
using Breeze.ContextProvider;
using Newtonsoft.Json.Linq;

public class SaveBundleToSaveMap : ContextProvider
{
    // never create a public instance
    private SaveBundleToSaveMap() { }

    /// <summary>
    /// Convert a saveBundle into a SaveMap
    /// </summary>
    public static Dictionary<Type, List<EntityInfo>> Convert(JObject saveBundle)
    {
        var dynSaveBundle = (dynamic) saveBundle;
        var entitiesArray = (JArray) dynSaveBundle.entities;
        var provider = new SaveBundleToSaveMap();
        var saveWorkState = new SaveWorkState(provider, entitiesArray);
        return saveWorkState.SaveMap;
    }

    // override the abstract members here, but DO NOT USE ANY OF THEM
}
Then it's up to you how you make use of the "SaveMap" and dispatch to your business logic.
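For instance, a hypothetical dispatch in your Web API save endpoint might look like this (myUow stands in for whatever unit-of-work class you own, and SaveEntities is an invented method on it):

// unpack the change-set, then route each entity type to your own business logic
var saveMap = SaveBundleToSaveMap.Convert(saveBundle);
foreach (var entry in saveMap)
{
    // entry.Key is the entity Type; entry.Value is the List<EntityInfo> for that type
    myUow.SaveEntities(entry.Key, entry.Value);
}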
The SaveResult is a simple structure:
public class SaveResult {
    public List<Object> Entities;      // each of the entity types you serialize to the client
    public List<KeyMapping> KeyMappings;
    public List<Object> Errors;
}

public class KeyMapping {
    public String EntityTypeName;
    public Object TempValue;
    public Object RealValue;
}
Use these classes as is or construct your own. Breeze client cares about the JSON, not these types.
All,
My typical approach for a medium sized WCF service would be something like:
Define the interface using WCF data contracts and service operations. The data contracts would be POCO DTOs with no CRUD or domain logic.
Model the domain using fully featured business objects.
Provide some mechanism to go from DTO to BO and vice versa (see related question: Pattern/Strategy for creating BOs from DTOs)
Now, a lot of the time (if not always) the data content of the business object and the DTO is nearly identical. How do people feel about creating a library of content objects which are shared by the BO and the DTO? E.g., if we had a WibbleDTO and a WibbleBO, we could create an IWibbleContent interface which both implement. We could even create an IWibbleContent interface and a WibbleContent class which both the DTO and the BO hold a reference to.
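A quick sketch of the shared-interface variant (the Wibble names are from above; the members are invented for illustration):

public interface IWibbleContent
{
    int Id { get; set; }
    string Name { get; set; }
}

public class WibbleDTO : IWibbleContent
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class WibbleBO : IWibbleContent
{
    public int Id { get; set; }
    public string Name { get; set; }
    // domain logic (validation, behavior) lives here
}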
So, specific questions:
Do you ever share content/data interfaces between your DTOs and BOs?
Do you ever share data content classes between your DTOs and BOs?
If not, then I guess, as per my related question, we're left with tedious property-copying code, or we use something like AutoMapper.
Any comments appreciated.
We are using quite a similar approach to the one you describe, with DTOs and BOs.
We rarely have common interfaces; either they are very basic (e.g. an interface to get the BusinessId) or they are specific to a certain implementation, e.g. a calculation which could be made on the client or on the server.
We actually just copy properties. They are usually trivial enough that it is not worth sharing the code.
In the end, more of the code is different than similar.
We have many attributes on these classes, and they are almost never the same.
Most properties are implemented as get; set; on the server, but with an OnPropertyChanged event on the client, which requires the use of explicit backing fields.
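For instance, the client-side flavor of a property might look like this (a sketch assuming the usual INotifyPropertyChanged plumbing, with an OnPropertyChanged helper on the class):

// inside the client-side class:
private string name;   // explicit backing field, needed for the notification

public string Name
{
    get { return name; }
    set
    {
        if (name == value) return;
        name = value;
        OnPropertyChanged("Name");   // assumed helper that raises PropertyChanged
    }
}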
We don't share much code between the client and server sides, so there is no need for common interfaces.
Even if many of the properties are the same in both classes, there is actually not much to share.
I usually create POCOs and use them through all of my layers - data access to business to UI. In the business layer I have managers that get the POCOs passed back and forth. We are going to look at Entity Framework and/or NHibernate, so I am not sure where that will lead us.
Yeah, we write some extra code, but we keep everything lean and mean. We are using MVC for our UI, which for me was a godsend compared to the bulk of WebForms; I'll never go back. Right now our battle is whether we should send JSON to the AJAX callbacks or use partial views; the latter is what we do most of the time.
Are we correct? Maybe not but it works for us. So many choices, so little time.
We have the incremental burden of maintaining an EntityTranslator to transform business messages to service messages, and service messages back to business messages, in a .NET and WCF application. In fact, I can't really call them business objects, since we just need to fetch them from the DB and update the same. We read data from a device and store it to the DB, and read data from the DB and store it to the device.
All our classes are simple, plain .NET classes that don't do anything specific.
The classes on the two sides are very similar.
Here is my service entity.
[DataContract]
public class LogInfoServiceEntity
{
    [DataMember]
    public string Data1 { get; set; }

    [DataMember]
    public string Name { get; set; }
}

public class LogInfo
{
    public string Data1 { get; set; }
    public string Name { get; set; }
}
Now I need to define a translator just to create an instance of the type on the other side and copy the data across. We have around 25 classes like this, and we find them very difficult to manage. So we have 25 business-to-service translators and 25 service-to-business translators.
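One of those hand-written translators might look like this (a sketch based on the two classes above):

public static class LogInfoTranslator
{
    public static LogInfoServiceEntity ToServiceEntity(LogInfo source)
    {
        return new LogInfoServiceEntity { Data1 = source.Data1, Name = source.Name };
    }

    public static LogInfo ToBusinessEntity(LogInfoServiceEntity source)
    {
        return new LogInfo { Data1 = source.Data1, Name = source.Name };
    }
}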
I would rather have simple POJO-style classes to store and get the information than use all these translators.
What is the best way to handle this situation? Or is a translator the best way to handle it?
Automapper might be what you're looking for.
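A minimal sketch for the two classes above (AutoMapper maps identically named public properties by convention; the MapperConfiguration API shown is the style from AutoMapper 4.2+):

using AutoMapper;

var config = new MapperConfiguration(cfg =>
{
    // one map per direction; no per-property code needed when names match
    cfg.CreateMap<LogInfo, LogInfoServiceEntity>();
    cfg.CreateMap<LogInfoServiceEntity, LogInfo>();
});
var mapper = config.CreateMapper();

var dto = mapper.Map<LogInfoServiceEntity>(new LogInfo { Data1 = "abc", Name = "device-1" });
var entity = mapper.Map<LogInfo>(dto);

Two CreateMap calls per class pair replace a pair of hand-written translators, so the 25 translator classes collapse into one configuration block.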
The answer is "it depends"; it depends mainly on the complexity of your system. Usually WCF service interfaces should be coarse-grained and should not necessarily map one-to-one to your business layer entities, to prevent additional round-trips to the server.
For instance, the Customer entity in the WCF interface can convey much more information, even information not directly related to the Customer entity in the business layer. But you return this information anyway, because you predict that in 85% of situations the client will need not only the Customer data but also all orders/activities or other supplementary information within the next several minutes.
This is the usual trade-off - whether to return more or less.
In your particular case I would stick with code generation: you can always write a tool which will generate all the external interfaces and translators from your business logic entities.
This may be a daft question, but why don't you use the same classes for the DataContract as you use for the "business messages"?
Normally you keep your contracts separate so you can change your business objects without affecting your data contracts, but what benefit do you get from keeping them separate here?