.NET service: the burden of using an Entity Translator - C#

We carry the incremental burden of maintaining an EntityTranslator to transform business messages to service messages, and service messages back to business messages, in a .NET/WCF application. In fact, I can't really call them business objects, since we just fetch them from the DB and update them: we read data from a device and store it to the DB, and read data from the DB and write it back to the device.
All our classes are simple, plain .NET classes that don't do anything special, and the classes on the two sides are nearly identical.
Here is my service entity, together with the matching business entity:
[DataContract]
public class LogInfoServiceEntity
{
    [DataMember]
    public string Data1 { get; set; }

    [DataMember]
    public string Name { get; set; }
}

public class LogInfo
{
    public string Data1 { get; set; }

    public string Name { get; set; }
}
Now I need to define a translator just to create an instance of the type on the other side and copy the data across. We have around 25 class pairs like this and find them very difficult to manage: 25 business-to-service translators and 25 service-to-business translators.
I would rather have simple POCO-style classes (POJOs, in Java terms) for storing and retrieving the information than maintain all these translators.
What is the best way to handle this situation?
Or
Is a translator the best way to handle it?

AutoMapper might be what you're looking for.
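A minimal sketch using AutoMapper's MapperConfiguration API (the exact surface varies by AutoMapper version; older releases used a static Mapper.CreateMap instead):

// One-time configuration: one CreateMap call per direction replaces a
// hand-written translator pair.
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<LogInfo, LogInfoServiceEntity>();
    cfg.CreateMap<LogInfoServiceEntity, LogInfo>();
});
var mapper = config.CreateMapper();

// Same-named properties are copied by convention.
LogInfoServiceEntity dto = mapper.Map<LogInfoServiceEntity>(new LogInfo { Data1 = "a", Name = "b" });
LogInfo back = mapper.Map<LogInfo>(dto);

With 25 near-identical pairs, the 50 translators collapse into 50 one-line CreateMap calls.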

The answer is "it depends": it hinges on the complexity of your system. WCF service interfaces should usually be coarse-grained, and they need not map one-to-one to your business-layer entities, to prevent additional round-trips to the server.
For instance, a Customer entity in the WCF interface can convey much more information, even information not directly related to the Customer entity in the business layer. You return it anyway because you predict that in 85% of situations the client will need not only the Customer data but also all orders/activities or other supplementary information within the next several minutes.
This is the usual trade-off: whether to return more or less.
In your particular case I would stick with code generation: you can always write a tool that generates all the external interfaces and translators from the business-logic entities.
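If a full generator feels heavyweight, a small reflection-based copier can stand in for the generated code. The sketch below assumes both sides expose public read/write properties with matching names, as in the LogInfo example above; a generator would emit the explicit assignments instead:

public static class Translator
{
    // Copies same-named, same-typed public properties from source onto a
    // new instance of TTarget.
    public static TTarget Translate<TSource, TTarget>(TSource source)
        where TTarget : new()
    {
        var target = new TTarget();
        foreach (var sourceProp in typeof(TSource).GetProperties())
        {
            var targetProp = typeof(TTarget).GetProperty(sourceProp.Name);
            if (targetProp != null && targetProp.CanWrite
                && targetProp.PropertyType == sourceProp.PropertyType)
            {
                targetProp.SetValue(target, sourceProp.GetValue(source, null), null);
            }
        }
        return target;
    }
}

Usage is then LogInfoServiceEntity dto = Translator.Translate<LogInfo, LogInfoServiceEntity>(bo);, and the one method covers all 25 pairs.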

This may be a daft question, but why don't you use the same classes for the DataContract as you use for the "business messages"?
Normally you keep your contracts separate so you can change your business objects without affecting your data contracts; but what benefit do you get from keeping them separate here?
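Concretely, a sketch based on the LogInfo class from the question: DataContractSerializer only serializes members marked [DataMember], so one class can play both roles without leaking extra members into the contract.

[DataContract]
public class LogInfo
{
    [DataMember]
    public string Data1 { get; set; }

    [DataMember]
    public string Name { get; set; }

    // Anything without [DataMember] stays out of the wire format.
}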

Related

Layer design in an application

I have been writing applications that are layered as:
DB <-> DAL <-> BL <-> Service <-> Presentation
And that's all that gets referenced; that is, the Presentation layer doesn't have a reference to the DAL.
We have a new app to write for a client, and the client is proposing something that is foreign to me: the WRITE flow goes through the service layer, but to READ data from the database they want us to put a LINQ query in the presentation layer, going straight to the DAL. That seems strange, but I am being told that my way is old-fashioned and that their proposed way is essentially the same thing.
Also, my business logic usually resides in the BL, which is a separate project, but the client wants the business logic to be in the DTO objects themselves.
Is this normal? Is this basically Domain-Driven Design or something? I find it strange that the LINQ calls to fetch the data for a form sit in the presentation layer, as opposed to my idea of a service-layer method:
public MyPersonObject GetPersonByPersonId(int personId)
And then the same method in the business layer, which might apply some rules to what was fetched, and then the same method in the DAL, which contains the LINQ.
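Sketched out, the chain I have in mind looks like this (all names are placeholders):

// Service layer: a thin pass-through.
public class PersonService
{
    private readonly PersonBusiness business = new PersonBusiness();

    public MyPersonObject GetPersonByPersonId(int personId)
    {
        return business.GetPersonByPersonId(personId);
    }
}

// Business layer: applies rules to what was fetched.
public class PersonBusiness
{
    private readonly PersonDal dal = new PersonDal();

    public MyPersonObject GetPersonByPersonId(int personId)
    {
        MyPersonObject person = dal.GetPersonByPersonId(personId);
        // business rules would be applied here
        return person;
    }
}

// DAL: the only layer that knows about LINQ and the data context.
public class PersonDal
{
    public MyPersonObject GetPersonByPersonId(int personId)
    {
        using (var db = new MyDataContext())   // illustrative context
        {
            return db.Persons.Single(p => p.PersonId == personId);
        }
    }
}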
The client is the client, but have you ever heard of CQRS?
Your client might be influenced by CQRS, which is a newer architectural fashion within domain-driven design. In essence, it separates commands and queries into different paths to the database.
Your client's proposed approach, though, seems to mix traditional DDD with a CQRS that doesn't use event sourcing. That is still okay and normal, IMHO: the query side that feeds the presentation layer is trivial and not inherently complicated. It is like a reporting system that just queries data from the database; you don't need an ORM for that.
Also, my business logic usually resides in the BL, which is a separate project. But the client wants the business logic to be in the DTO objects themselves.
Business logic should live in the domain entities; if it doesn't, you have the Anemic Domain Model anti-pattern. It does not belong in DTOs either: a DTO is a data transfer object whose only role is to carry data between the distribution layer and its consumers.
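In code, the distinction looks something like this (an illustrative sketch, not taken from the question):

// Domain entity: data plus the behaviour that belongs with it.
public class Person
{
    public DateTime DateOfBirth { get; set; }

    public bool IsAdult()
    {
        return DateOfBirth <= DateTime.Today.AddYears(-18);
    }
}

// DTO: a plain carrier with no behaviour at all.
public class PersonDto
{
    public DateTime DateOfBirth { get; set; }
}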
What you describe is in no way DDD. While some DDD implementations do use a split architecture for queries and commands (the CQRS approach), that doesn't remove the need for good layering of your application.
If writes go through a service layer, your software is probably of at least reasonable complexity and as such should decouple presentation from persistence with a layer of abstraction in between. In CQRS, that layer often takes the form of facades that accept queries and return DTOs containing the required data.
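An illustrative sketch of such a read-side facade (the names are made up):

// The presentation layer depends only on this abstraction; how the data
// is actually fetched (LINQ, stored procedures, a read model) stays hidden.
public interface IPersonQueries
{
    PersonDto GetPersonByPersonId(int personId);
}

public class PersonDto
{
    public int PersonId { get; set; }
    public string Name { get; set; }
}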
But the client wants the business logic to be in the DTO objects themselves.
DTO stands for Data Transfer Object. DTOs don't contain any business logic; they have no purpose other than to carry data.

Please help with choosing the right architecture for an n-tier web application

Please help me choose the right way to use entities in an n-tier web application.
At the present moment it has the following assemblies:
The Model (custom entities) describes the fields of the classes that the application uses.
The Validation layer validates data integrity from the UI using the reflection-attributes method (it checks data in all layers).
The BusinessLogicLayer is a business facade for additional logic and caching; it uses abstract data providers from the DataAccessLayer.
The DataAccessLayer overrides the abstract data providers using a LINQ to SQL data context and LINQ queries. And here is the point that makes me feel I've gone wrong...
Right before my data layer sends data to the business layer, it maps (converts) the data retrieved from the DB to the Model classes (custom entities) using mappers. It looks like this:
internal static model.City ToModel(this City city)
{
    if (city == null)
    {
        return null;
    }

    return new model.City
    {
        Id = city.Id,
        CountryId = city.CountryId,
        AddedDate = city.AddedDate,
        AddedBy = city.AddedBy,
        Title = city.Title
    };
}
So the mapper maps the data object onto the describing model. Is this the right and common way to work with entities, or should I use the data objects themselves as the entities (to save time)? Am I being clear enough?
You could use your data entities throughout your project if they are POCOs. Otherwise I would create separate models, as you have done, but keep them in a separate assembly (not in the DataAccess project).
I would not expose them through a web service, though.
Other suggestions:
IMHO, people overuse layers. Most applications do not need many of them. My current client had an architecture like yours for all their applications. The problem was that only the data access layer and the presentation layer had any logic in them; every other layer just took data from the layer below, transformed it, and passed it to the layer above.
The first thing I did was tell them to scrap all the layers and instead use something like this (requires an IoC container):
Core (contains business rules and data access through an ORM)
Specification (Separated Interface pattern; contains service interfaces and models)
User interface (might be a web service, WinForms, a web app)
That works for most applications. If you find that Core grows too large to handle, you can split it up without affecting any of the user interfaces.
You are already using an ORM; have you thought about using a validation block (FluentValidation or DataAnnotations) for validation? It makes it easy to validate your models in all layers.
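A minimal DataAnnotations sketch (the City model mirrors the one in the question; the helper name is made up):

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class City
{
    public int Id { get; set; }

    [Required]
    [StringLength(100)]
    public string Title { get; set; }
}

public static class ModelValidator
{
    // The same attribute-driven check can be called from any layer.
    public static bool IsValid(object model, List<ValidationResult> errors)
    {
        var context = new ValidationContext(model, null, null);
        return Validator.TryValidateObject(model, context, errors, true);
    }
}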
It may be common practice to send DTOs out across a service boundary (a WCF service, etc.), but if you are using your "entities" directly in your presentation model, I don't see any benefit in doing that.
As for the code snippet you provided, why not use AutoMapper? It eliminates the boilerplate mapping code and does the work for you, given a set of conventions.
Get rid of the model now; removing it later will require refactoring the whole application. The last project I worked on used this architecture, and maintaining the DTO layer and its mappings to the database model layer is a huge pain in the arse that offers no useful benefits.

One of the most annoying problems is that LINQ to SQL does not effectively support a disconnected data model. You cannot update a database table by creating a new DB entity with a primary key matching an existing record and sticking it into the data context; you have to retrieve the entity from the database first, update it, then commit the changes. Managing this results in really nasty update methods that map every property from your DTOs onto your LINQ to SQL classes, and it breaks the whole deferred-execution model of LINQ to SQL.

Don't even get me started on the problems it causes with properties on parent classes that are collections of child DTOs (e.g. a customer DTO with an Orders property containing a collection of order DTOs); managing those mappings is really, really fiddly. I had to do some extensive optimisation because retrieving a few hundred records ended up causing LINQ to SQL to make 200,000 database calls (admittedly there was also some pretty dumbass code in there, but you get the picture).
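The update methods in question end up looking something like this (a sketch; the data context and table names are illustrative):

// LINQ to SQL forces a load-then-copy round trip for every update.
public void UpdateCity(model.City dto)
{
    using (var db = new MyDataContext())
    {
        var entity = db.Cities.Single(c => c.Id == dto.Id);
        entity.CountryId = dto.CountryId;
        entity.AddedDate = dto.AddedDate;
        entity.AddedBy = dto.AddedBy;
        entity.Title = dto.Title;
        // ...one assignment per property, for every mapped class...
        db.SubmitChanges();
    }
}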
The only valid reason to use DTOs here is if you want multiple pluggable data access layers, e.g. LINQ to SQL and NHibernate to support different DB servers. That way you can swap out the data access layer later without touching any other layer. If you don't need that, save yourself a world of pain and just use the LINQ to SQL entities.

Sharing content between business objects and DTOs

All,
My typical approach for a medium-sized WCF service would be something like:
Define the interface using WCF data contracts and service operations. The data contracts would be POCO DTOs with no CRUD or domain logic.
Model the domain using fully featured business objects.
Provide some mechanism to go from DTO to BO and vice versa (see the related question: Pattern/Strategy for creating BOs from DTOs).
Now, much (if not all) of the time, the data content of the business object and the DTO is near-identical. How do people feel about creating a library of content objects shared by the BO and the DTO? E.g. if we had a WibbleDTO and a WibbleBO, we could create an IWibbleContent interface which both implement. We could even create an IWibbleContent interface plus a WibbleContent class which both the DTO and the BO hold a reference to.
So, specific questions:
Do you ever share content/data interfaces between your DTOs and BOs?
Do you ever share data content classes between your DTOs and BOs?
If not, then I guess, as per my related question, we're left with tedious copying code, or we use something like AutoMapper.
Any comments appreciated.
We use quite a similar approach to the one you describe, with DTOs and BOs.
We rarely have common interfaces: either they are very basic (e.g. an interface exposing a BusinessId) or they are specific to a certain implementation, e.g. a calculation that could run on either the client or the server.
We simply copy the properties. They are usually trivial enough that sharing the code isn't worth it.
In the end, more of the code is different than similar:
We have many attributes on these classes, and they are almost never the same on both sides.
Most properties are plain get; set; auto-properties on the server, but raise a property-changed event on the client, which requires explicit backing fields.
We don't share much code between the client and server sides, so there is no need for common interfaces.
Even if many of the properties are the same in both classes, there is actually not much to share.
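To illustrate that point about backing fields (a sketch; the Wibble names are borrowed from the question):

using System.ComponentModel;

// Server side: a trivial auto-property is enough.
public class WibbleDto
{
    public string Name { get; set; }
}

// Client side: change notification forces an explicit backing field.
public class WibbleClientModel : INotifyPropertyChanged
{
    private string name;

    public string Name
    {
        get { return name; }
        set
        {
            if (name != value)
            {
                name = value;
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs("Name"));
            }
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}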
I usually create POCOs and use them through all of my layers, from data access to business to UI. In the business layer I have managers that pass the POCOs back and forth. We are going to look at Entity Framework and/or NHibernate, so I am not sure where that will lead us.
Yes, we write some extra code, but we keep everything lean and mean. We are using MVC for our UI, which for me was a godsend compared to the bulk of WebForms; I'll never go back. Right now our battle is whether to send JSON to the AJAX callbacks or to use partial views; the latter is what we do most of the time.
Are we correct? Maybe not, but it works for us. So many choices, so little time.

n-Tier Architecture Feedback Needed

I started my website, like Stack Overflow, with a little technical debt that I'm trying to pay off. Being a contract developer, I've worked in many places and seen many different methods of achieving this result, but the way I'm going is:
Presentation (web)
Business Layer (old-fashioned entity classes and a BL layer)
Data Layer (DA classes talking to SQL Server via stored procedures)
My question primarily concerns the Business Layer. Right now I have an Entity namespace and a BusinessLogic namespace.
The BL has a reference to the DA and the Entity.
The Entity has a reference to the DA.
(The DA is "unaware" of the BL and the Entity.)
I really want all the churning of turning data into entities to occur within the BL; thus, the business logic. However, I want the Entity to be able to access the BL if need be, and thereby remove the Entity's reference to the DA.
So...
Is is "wrong" to have the BL and Entity objects within the same namespace so they can work together?
Essentially, I'm trying have an entity object like Employee (classic example, eh?) and have the Employee have a
public Hashtable[] SubordinateEmployees
property that returns a Hashtable of other Employee objects that report to this employee. But I don't want to load it until it's needed. So for most employees the property would never get accessed, but when it does, it self-loads with a call to the BL, which calls the DA.
Does the question make sense?
If so, does my solution?
Thanks so much in advance!
The usual way to deal with the kind of situation your example represents is with facades. Instead of trying to get the subordinate employees from the Employee object itself, you use a call to the business logic to get them:
hashtable = BL.GetSubordinateEmployees(supervisor);
That way you have a single point of access to the subordinates, and there is only one thing (the BL) accessing the data layer and creating Entities.
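A sketch of what that facade method might look like (BL, DA, and the member names are hypothetical):

using System.Collections;

public static class BL
{
    public static Hashtable GetSubordinateEmployees(Employee supervisor)
    {
        var subordinates = new Hashtable();
        // DA.GetEmployeesReportingTo is a hypothetical DAL call.
        foreach (Employee e in DA.GetEmployeesReportingTo(supervisor.EmployeeId))
        {
            subordinates.Add(e.EmployeeId, e);
        }
        return subordinates;
    }
}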
Let me see if I can show you a better way to think about this.
You have your data access (SQL Server, MySQL, flat XML files, etc.). All of this should be abstracted away; nothing else in your application should care or know how you are getting your data, only that you do. If anything else knows how you are getting your data, you have a layer violation; if the DAL does anything other than get data, you also have a layer violation. Next, have the data access implement an interface, something like IDAL, that your business layer consumes. This is very important for making your code testable, because it forces you to separate your layers.
Your data entities can be placed in the DAL namespace or given their own; giving them their own forces separation. Data entities are dumb objects that should contain very little to no logic; they are only aware of themselves and the data they hold. THEY DO NOT CONTAIN BUSINESS LOGIC, DATA ACCESS LOGIC, OR UI LOGIC; if they do, you have a layer violation. The only function of a data entity is to hold data and be passed from one layer to the next.
The business layer consumes the data access interface (the IDAL we talked about before). You can instantiate it with a factory, an IoC container, or, all else failing, a concrete type, but add a setter property so it can be swapped for testing. The business layer only handles business logic; it doesn't know or care where the data came from or where it's going, only about manipulating the data to comply with the business rules. That includes data validation and filtering (part of which is telling the DAL what data it needs and letting the DAL figure out how to get it). Basically, the business layer handles all logic that isn't UI-related or data-retrieval-related. Just like the DAL, it should implement an interface, for the same reason.
The UI layer accesses the business layer the same way the business layer accesses the DAL, and for the same reason. All the UI layer cares about is displaying data and getting data from the user. The UI layer should not know anything about the business rules, with the possible exception of the data validation required to populate the data entities.
The advantage of this architecture is that it forces separation of concerns, making the application easier to test, more flexible, and easier to maintain. Today you are building a website, but tomorrow you want to let others integrate via a web service: all you have to do is create a web service that implements the IBIZ interface, and you're done. When you fix a bug in the business layer, it's fixed in both your website and your web service.
Taking this to the next level: say you are doing a lot of heavy number crunching and need more powerful servers to handle it. All you have to do is implement IDAL and IBIZ interfaces that are really wrappers around WCF, handling the communication between your servers. Now your application is distributed across multiple servers, and you didn't have to change your code to do it.
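A skeletal version of those interfaces and the injection point (all names, including Customer, are illustrative):

public class Customer
{
    public int Id { get; set; }
}

public interface IDAL
{
    Customer GetCustomer(int id);
}

public interface IBIZ
{
    Customer GetCustomer(int id);
}

public class Biz : IBIZ
{
    // Supplied by a factory or IoC container; the setter lets a test
    // double be swapped in, as described above.
    public IDAL Dal { get; set; }

    public Customer GetCustomer(int id)
    {
        // business rules and validation would run here
        return Dal.GetCustomer(id);
    }
}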

How to Design Data Transfer Objects in Business Logic Layer

I'm building a web application that I would like to scale to many users. I also need to expose functionality to trusted third parties via web services.
I'm using LLBLGen to generate the data access layer (against SQL Server 2008). The goal is to build a business logic layer that shields the web app from the details of the DAL and, of course, provides an extra level of validation beyond the DAL. As far as I can tell right now, the web service will essentially be a thin wrapper over the BLL.
The DAL, of course, has its own set of entity objects, for instance CustomerEntity, ProductEntity, and so forth. However, I don't want the presentation layer to access these objects directly, as they contain DAL-specific methods and their assembly is specific to the DAL. So the idea is to create Data Transfer Objects (DTOs): essentially plain old C#/.NET objects that have all the fields of, say, a CustomerEntity that correspond to the database table Customer, but none of the other stuff, except maybe some IsChanged/IsDirty properties. So there would be CustomerDTO, ProductDTO, etc. I assume these would inherit from a base DTO class. I believe I can generate them with an LLBLGen template, but I'm not sure about that yet.
The idea, then, is that the BLL will expose its functionality by accepting and returning these DTOs. I think the web service will handle converting them to XML for the third parties using it, many of whom may not be using .NET (also, some things will be script-callable from AJAX calls in the web app, using JSON).
I'm not sure of the best way to design this or exactly how to go forward. Here are some issues:
1) How should this be exposed to the clients (the presentation tier and the web service code)?
I was thinking there would be one public class with these methods, where every call is an atomic operation:
InsertDTO, UpdateDTO, DeleteDTO, GetProducts, GetProductByCustomer, and so forth ...
Then the clients would just call these methods and pass in the appropriate arguments, typically a DTO.
Is this a good, workable approach?
2) What should these methods return? Obviously, the Get/Fetch sort of methods will return DTOs. But what about inserts? Part of the signature could be:
InsertDTO(DTO dto)
But what should be returned from an insert? I want to be notified of errors. I also use auto-incrementing primary keys for some tables (though a few tables have natural keys, particularly the many-to-many ones).
One option I thought about is a Result class:
class Result
{
    public Exception Error { get; set; }
    public DTO AffectedObject { get; set; }
}
So, on an insert, the DTO would get its ID property set (like CustomerDTO.CustomerID) and then be put in this Result object. The client knows there was an error if Result.Error != null, and otherwise reads the ID from the Result.AffectedObject property.
Is this a good approach? One problem is that it seems to pass a lot of redundant data back and forth (when all that's needed is the ID). I don't think adding an "int NewID" property would be clean, because some inserts won't have an auto-incrementing key like that. Another issue is that I don't think web services would handle this well: I believe they would just return the base DTO for AffectedObject in the Result class rather than the derived DTO. I suppose I could solve that with a LOT of different kinds of Result objects (maybe derived from a base Result that holds the Error property), but that doesn't seem very clean.
All right, I hope this isn't too wordy, but I wanted to be clear.
1: That is a pretty standard approach, and it lends itself well to a "repository" implementation for the most unit-testable design.
2: Exceptions (which should be declared as "faults" on the WCF boundary, by the way) get raised automatically; you don't need to handle that directly. For data, there are three common approaches:
use ref parameters on the contract (not very pretty)
return the (updated) object, i.e. public DTO SomeOperation(DTO item);
return just the updated identity information (primary key / timestamp / etc.)
One advantage of all of these is that none of them necessitates a different type per operation (contrast your Result class, which would need to be duplicated per DTO).
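A sketch of the fault declaration mentioned in point 2, combined with the return-the-updated-object option (the service and type names are illustrative):

[ServiceContract]
public interface ICustomerService
{
    // The declared fault lets WCF serialize failures back to the caller
    // instead of faulting the channel.
    [OperationContract]
    [FaultContract(typeof(ValidationFault))]
    CustomerDTO InsertCustomer(CustomerDTO item);
}

[DataContract]
public class ValidationFault
{
    [DataMember]
    public string Message { get; set; }
}

The service would then signal an error with throw new FaultException<ValidationFault>(new ValidationFault { Message = "..." });.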
Q1: You can think of your WCF data contract composite types as the DTOs that solve this problem. That way your UI layer only has access to the data contract's DataMember properties. Your atomic operations would be the methods exposed by your WCF interface.
Q2: Configure your response data contracts to return a new custom type carrying your primary keys etc. WCF can also be configured to bubble exceptions back to the UI.
