Repository and Unit of Work across multiple contexts - C#

I'm trying to implement the Repository pattern with UoW for our small application.
As far as I've read, the main idea of the UoW is to expose the repositories via one context and to save everything in one step, e.g. within a transaction, to make sure the operations are atomic.
Everything is fine so far, and I checked examples like http://www.asp.net/mvc/tutorials/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
The problem: we have a logical business model which gets data from the TFS API and from a SQL database via Entity Framework. So we have separate systems and no common context.
UoW still seems like a good idea to implement (rollback if one system could not save properly, etc.), but does it need a Repository as well? If I create a repository which holds just a list of the logical business model and gets the data from EF and the TFS API, I could do everything I'd like in it.
Did I understand these two concepts wrong, or is our environment just not suited to these patterns?
Thanks in advance
Matthias
Edit: To show what I mean:
We have a DataLayer.TFS project, from which we can query TFS-relevant data like this:
IEnumerable<WorkItem> tfsItems = TFS.QueryByParameter(criterias);
And we have a DataLayer.SQL Project, where we have the EF and query data like this:
using (var context = new SimpleContextContainer())
{
    // some tables to lists, etc.
}
Then we have an ugly procedure called Concat, which merges the two lists:
private static List<NewItem> Concat(IEnumerable<WorkItem> tfsItems, IEnumerable<OldDBItem> dbItems)
{
    List<NewItem> result = new List<NewItem>();
    foreach (var tfsItem in tfsItems)
    {
        var tmpItem = new NewItem();
        tmpItem.PbiId = tfsItem.Id;
        // do this for all properties
        result.Add(tmpItem);
    }
    return result;
}
And we have an opposite procedure for splitting the data.

It is not easy to tell what your architecture is here. But I think you can go with something like this:
Business-Layer: Work with your domain objects, your "business".
Domain-Layer: Your domain objects, accessing data from repositories to build them, merge them, provide constraints, etc
Data-Access-Layer: Repositories, one for each source
Persistence-Layer: Unit of Work
Data-Layer: SQL Database, TFS
Data-Access-Layer and persistence layer can be merged in the case of TFS. For the SQL database you get a context from EF, which in that case should be used in the data access layer. So you have methods like AddEntities or DeleteEntities there.
You can have constraints on the different layers of course.
The repository usually gives back fully functional domain objects, so if you achieve that, I think you are on the right track. As far as I understood your comments, the domain layer I described above is already in place. Perhaps you can and should get rid of it, replacing it with a single assembly that defines your business objects and is used by both the repository and your business layer.
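To make this more concrete, here is a minimal sketch of how a unit of work could coordinate the two sources. ITfsRepository and BusinessUnitOfWork are hypothetical names; only SimpleContextContainer and WorkItem come from your code. Note that the TFS API cannot take part in a database transaction, so the "rollback" is best-effort ordering rather than true atomicity.

public interface ITfsRepository
{
    void Add(WorkItem item);
    void SaveChanges();   // pushes pending work items to the TFS API
}

public class BusinessUnitOfWork : IDisposable
{
    private readonly SimpleContextContainer _context = new SimpleContextContainer();
    private readonly ITfsRepository _tfsRepository;

    public BusinessUnitOfWork(ITfsRepository tfsRepository)
    {
        _tfsRepository = tfsRepository;
    }

    public void Save()
    {
        // save the non-transactional system first and only commit the EF
        // context if TFS succeeded; this approximates atomicity but cannot
        // guarantee it
        _tfsRepository.SaveChanges();
        _context.SaveChanges();
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}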

Related

Building applications using multiple datasources for single objects

I'm struggling to find the right solution for my application architecture. In my application I have a single class for customers. The data for filling my customer objects is spread over multiple different types of data sources. The main part is exposed in a read-only Oracle database, other parts are exposed via web services, and I need to save some extra data to another data source (for instance an MS SQL database using Entity Framework), since I only have read-only rights for most data sources (they are managed somewhere else).
For this reason I want to build some kind of central library with connectors to all of my data sources for creating a centralized Customer object to work with. So far so good (I think), but I can't find any documentation or example of best practices for how to achieve such a solution.
EXAMPLE:
* Main Application (multiple applications)
- Central Business Logic Layer (Business-API)
* Webservice Connector
* Oracle Connector
* EntityFramework Connector
Does anyone know if there is some good reading material on this specific subject?
Kind regards
The specific problem you describe with customer objects sounds a lot like the one solved by the Data Mapper pattern, which is technically a kind of Mediator. Quoting from the Wikipedia page for Data Mapper:
A Data Mapper is a Data Access Layer that performs bidirectional transfer of data between a persistent data store (often a relational database) and an in memory data representation (the domain layer). The goal of the pattern is to keep the in memory representation and the persistent data store independent of each other and the data mapper itself. The layer is composed of one or more mappers (or Data Access Objects), performing the data transfer. Mapper implementations vary in scope. Generic mappers will handle many different domain entity types, dedicated mappers will handle one or a few.
Although the language of the problem above speaks of a persistent data store that's singular, there's no reason why it couldn't be several data locations (Mediator pattern hides the details from the collaborators).
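As a rough, hedged illustration of what such a mapper could look like here (all type names below are hypothetical, not taken from the question):

// The mapper is the only place that knows where each field lives; the domain
// object stays ignorant of Oracle, the web service, and EF.
public class CustomerMapper
{
    public Customer Map(OracleCustomerRecord oracleRecord, WebServiceCustomerRecord serviceRecord)
    {
        return new Customer
        {
            Id = oracleRecord.Id,
            Name = oracleRecord.Name,
            CreditLimit = serviceRecord.CreditLimit
        };
    }
}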
There is also an extension of this pattern, known as the Repository pattern.
I suggest the DAO pattern to abstract away any data access. The business logic should not be aware of any data sources; that is the most important aim. Everything else has to be subordinate to it.
You can create a constructor that accepts the data sources, like this:
public class Customer
{
    private readonly OracleConnector oracle;
    private readonly WebServiceConnector webservice;
    private readonly EntityConnector entity;

    public Customer(OracleConnector oracle, WebServiceConnector webservice, EntityConnector entity)
    {
        this.oracle = oracle;
        this.webservice = webservice;
        this.entity = entity;
    }

    public string Name { get; private set; }

    public void Fetch()
    {
        // fetch data from oracle, webservice, and entity
        this.Name = oracle.GetCustomerName();
    }
}
This way only Customer knows how to get the data; all the logic is in one place. You can even make it more testable and more loosely coupled by creating interfaces for the connectors.
public interface IOracleConnector
{
    // add something here
    string GetCustomerName();
}

public class OracleConnector : IOracleConnector
{
    // add the implementation here
}
Then change the Customer constructor to accept an IOracleConnector:
public Customer(IOracleConnector oracle, WebServiceConnector webservice, EntityConnector entity)
{
    // your code here
}
Now, you can create a mock to test Customer without actually connecting to the database.
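For example (a minimal sketch; FakeOracleConnector and the test assertions are illustrative only, and the other two connectors would need fakes of their own):

public class FakeOracleConnector : IOracleConnector
{
    public string GetCustomerName()
    {
        // canned value instead of a real Oracle round trip
        return "Test Customer";
    }
}

// in a unit test:
// var customer = new Customer(new FakeOracleConnector(), fakeWebService, fakeEntity);
// customer.Fetch();
// Assert.AreEqual("Test Customer", customer.Name);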

Repository pattern and localized lookup tables

I am trying to grasp the idea of the repository pattern and to implement it on database structures I've already set up in the past. I'm now trying to find the best practice for working with my lookup tables. I've created a test project to play around with, and this is my database model:
You can see that I have three tables for the lookups: Lookup, Language and LookupLanguage. The Language table simply contains the languages.
The Lookup table holds the different types used throughout the models.
And LookupLanguage links both tables together:
I've created a new project with all the models mapped 1-to-1 to the database tables:
I also created a generic repository and a generic CrudService interface:
public interface ICrudService<T> where T : IsActiveEntity, new()
{
    int Create(T item);
    void Save();
    void Delete(int id);
    T Get(int id);
    IEnumerable<T> GetAll();
    IEnumerable<T> Where(Expression<Func<T, bool>> func, bool showDeleted = false);
    void Restore(int id);
}
Now, according to the following post: When implementing the repository pattern should lookup value / tables get their own Repository?, the repository should hide the underlying database layer. So I think I need a new service and/or repository implementation to get the lookups, but then where do I specify the language in which I need the lookup?
Let's take the status (new, accepted, refused) from the company as an example.
The company model is as follow:
public partial class Company : IsActiveEntity
{
    [Required]
    [MaxLength(50)]
    public string CompanyName { get; set; }

    public System.Guid StatusGuid { get; set; }

    [ForeignKey("StatusGuid")]
    public virtual Lookup Status { get; set; }
}
I guess I don't need a separate repository implementation?
But I do need a separate CompanyService implementation:
interface ICompanyService : ICrudService<Company>
{
    IQueryable<LookupLanguage> GetStatuses(Guid languageguid);
    LookupLanguage GetStatus(Guid statusguid, Guid languageguid);
}
Is this the correct approach, or do I miss something here?
Creating a generic LookupRepository is the better option in your case, both because of your table schema and from a maintenance perspective.
I'm not sure whether you are using both the Service Locator and Repository patterns or just Repository, given the name ICompanyService. But regardless, I agree that repositories should not always represent tables 1-to-1, although they do most of the time.
The SO link you provided has a different table structure than yours. You have a generic lookup table, whereas the link has a separate table for each lookup. In the case where you have separate tables, it makes sense to have the lookup method live in the entity repository, since you will have separate code to fetch the data for each lookup (as they have separate tables with different schemas).
But in your case you have a single table that stores all the lookup types for each language, and it makes sense to have a single LookupRepository that returns the various types of lookups based on Language and LookupType. If you create each lookup method in separate entity repositories (like GetStatuses in CompanyRepository and GetStatuses in ContactRepository), you will have to repeat the logic in each repository.
Think about it: if you change the schema of the lookup table (say, add a column) and want to test all the places the lookups are used, it will be a nightmare if you have lookup methods all over the place, and pretty easy if you have one method in a LookupRepository.
interface ILookupService : ICrudService<Lookup>
{
    IQueryable<Lookup> GetStatuses(Guid languageguid, LookupType lookupType);
    Lookup GetStatus(Guid statusguid, Guid languageguid, LookupType lookupType);
}
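A rough sketch of what the single lookup query could look like inside that service; the context name MyDbContext, the LanguageGuid column and the Type property on Lookup are assumptions about your schema, not taken from your model:

public class LookupService
{
    private readonly MyDbContext _context;

    public LookupService(MyDbContext context)
    {
        _context = context;
    }

    public IQueryable<Lookup> GetStatuses(Guid languageGuid, LookupType lookupType)
    {
        // one query serves every entity that needs this lookup type,
        // so a schema change only has to be handled in this one place
        return _context.LookupLanguages
            .Where(ll => ll.LanguageGuid == languageGuid && ll.Lookup.Type == lookupType)
            .Select(ll => ll.Lookup);
    }
}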
As regards your question, "Is this the correct approach" - this entirely depends on your specific needs.
What you have done doesn't seem to have any real issues. You have implemented the repository pattern using generics which is great. You are using interfaces for your repositories which allows for easier unit testing, also great!
One of your tags indicates you are interested in the Entity Framework, but you do not seem to be using it. The Entity Framework would simplify your code by creating the boilerplate classes for you. You can still use your repository pattern code with the classes created by the Entity Framework.
It seems that you are confusing the idea of a service and a repository. A repository is a general object which allows you to get data from a store without caring about the implementation. In your example, ICompanyService is a repository.
It is a really controversial topic and there are different approaches to this problem. In our data logic we are not using the repository pattern, because we do not want to abstract away most of the benefits of Entity Framework. Instead, we pass the context to the business logic, which is already a combination of the UoW and Repository patterns. Your approach is okay if you follow it consistently across all of your company services. However, from what I have seen so far, putting methods into the related services according to their return values is the best approach, because it makes them easy to find. For instance, if you want to get the company lookups, create an ILookupService and put a GetLookUpsByCompany(int companyId) method there to retrieve them.
I would argue with the linked response. Repositories ARE linked to database entities; the Entity Framework itself being a UoW/repository implementation is the best example of this. On the other hand, services are for domain concerns, and if there is a mismatch between your database entities and domain entities (i.e. you have two separate layers), services can help glue the two together.
In your specific case, you have repositories although you call them services. And you need a repository per database entity, that's just easier to implement and maintain. And also it helps to answer your question: yes, you need the extra repository for the linking table.
A small suggestion: you seem to have a generic query function that only accepts where clauses:
IEnumerable<T> Where(Expression<Func<T, bool>> func, bool showDeleted = false);
If you already follow this route of allowing arbitrary filtering expressions (which itself is a little arguable, as someone will point out that you can't possibly guarantee that every technically possible filter can be executed by the database engine), why don't you allow all possible queries, including ordering, paging, etc.:
IQueryable<T> Query { get; }
This is as easy to implement as your version (you just expose the DbSet) but allows clients to perform more complicated queries, with the same possible concern that such a contract may be too broad.
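A minimal sketch of that broader contract and how a caller might compose a query against it (the IsActive and CompanyName properties in the usage example are just illustrative):

using System.Data.Entity;
using System.Linq;

public class Repository<T> where T : class
{
    private readonly DbContext _context;

    public Repository(DbContext context)
    {
        _context = context;
    }

    // the repository simply exposes the set; filtering, ordering and paging
    // are composed by the caller and still executed by the database
    public IQueryable<T> Query
    {
        get { return _context.Set<T>(); }
    }
}

// usage:
// var page = companyRepository.Query
//     .Where(c => c.IsActive)
//     .OrderBy(c => c.CompanyName)
//     .Skip(20)
//     .Take(10)
//     .ToList();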
Localization is a presentation layer thing. The lower layers of your application should bother with it as little as possible.
I see two different kinds of lookups: translations of coded concepts (Mr/Miss/Mrs) and translations of entity properties (company names maybe, or job titles or product names).
Coded concepts
I would not use lookup tables for coded concepts. There is no need to bother the lower layers at all with this. You will only need to translate them once for the entire application and create simple resource files that contain the translations.
But if you do wish to keep the translations in the database, a separate lookup repository for the codes or even per code system will sort of replace the resource file and serve you fine.
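For example, a minimal sketch of the resource-file route; SalutationResources is a hypothetical generated resource class and the codes are illustrative:

using System.Globalization;

public static class SalutationText
{
    public static string For(string code)
    {
        // the database stores only the code ("MR", "MISS", "MRS");
        // the translations live in .resx files, one per supported culture
        return SalutationResources.ResourceManager.GetString(code, CultureInfo.CurrentUICulture) ?? code;
    }
}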
Entity properties
I can imagine different/nastier localization issues when certain entities have one or more properties that get translated into different languages. Then the translation becomes part of the entity. I'd want the repository to cough up entity objects that contain all translations of the description, in a dictionary or so, because the business layer should not worry about language when querying, caching and updating relations. It should not ask the company repository for the Dutch version of company X. It should simply ask for company X and be served a Company object that contains its name in Dutch, English and French.
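As a sketch of the shape I mean (property names are illustrative, not your actual model):

public class Company
{
    public Guid Id { get; set; }

    // "nl" -> "Bedrijf X", "en" -> "Company X", "fr" -> "Société X"
    public IDictionary<string, string> NameTranslations { get; set; }
}

// the presentation layer picks the right translation at display time:
// var name = company.NameTranslations[CultureInfo.CurrentUICulture.TwoLetterISOLanguageName];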
I've one more remark about the actual database implementation:
I think the lookup tables are distracting from the actual entities, to the point where you have forgotten to create a relation between person and person company. ;) I'd suggest putting all translations of entity properties in a single XML type column instead.
This illustrates why the repository should handle entities plus translations. If you were to make this storage layer level implementation change at some point, i.e. go from lookup tables to xml columns, the repository interfaces should remain the same.

How to create a business model wrapper for a generic database approach?

I'm currently facing a performance problem with creating POCO objects from my database. I'm using Entity Framework 4 as OR-Mapper.
The whole application is a prototype for now.
Let's assume I want to have some business objects, like classes 'Printer' or 'Scanner'. Both classes inherit from a base class called Product.
The business classes already exist.
I am trying a more generic database approach. I don't want to create tables for "Printer" or "Scanner". I want to have three tables: one called Product, plus Property and PropertyValue (which stores all the values assigned to a specific product).
In my business layer I do create a specific object like this:
public Printer GetPrinter(int IDProduct)
{
    Printer item = new Printer();

    // get the product object with EF
    // get all PropertyValues
    // (with reflection) foreach property in item.GetType().GetProperties()
    // {
    //     property.SetValue("specific value")
    // }

    return item;
}
This is what the EF model looks like:
Works fine so far. For now I'm doing performance tests for retrieving multiple sets.
I've created a prototype and improved it several times to increase the performance. It is still far away from being usable.
It takes 919 ms to create 300 objects that contain only 3 properties.
The reason for choosing such a DB design is to have a generic database design. Adding new properties should only be done in the business model.
Am I just too stupid to create a performant way of retrieving xx objects, or is my approach totally wrong? As far as I understand OR mappers, they are basically doing the same thing?
I think you missed the whole point of ORM. The reason people use an ORM is to be able to persist business objects and easily retrieve them again. You are using the ORM just to get data for your business objects' factories. The factories then use reflection to build business objects from the materialized classes retrieved by the ORM. This will always be very slow because:
Query compilation is slow (you can precompile it)
Object materialization is slow (you can't avoid it)
Reflection is slow (you can't avoid it)
IMO, if you want to follow this DB design and keep the generic tables absolutely independent of your business objects, you don't need an ORM, or at least you don't need EF.
The reason for your performance problems is that the generic approach is not followed in your business model, so somewhere you must convert generic data to specific data, and that is a slow operation.
If you want to improve performance, define a set of shared properties and place them in Product. Then either use your current PropertyValue and Property tables for additional non-shared properties, or simply use an ExtendedProperties table storing key/value pairs. Your entities will then be of type Product with an inner type property, the shared properties and a collection of extended properties. That is a genuinely generic approach.
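A sketch of what that could look like (property names are assumptions, not taken from your model):

public class Product
{
    public int Id { get; set; }
    public string ProductType { get; set; }   // e.g. "Printer", "Scanner"

    // shared, strongly typed properties live directly on Product
    public string Name { get; set; }
    public decimal Price { get; set; }

    // anything model-specific goes into key/value pairs instead of
    // being rebuilt via reflection in a factory
    public virtual ICollection<ExtendedProperty> ExtendedProperties { get; set; }
}

public class ExtendedProperty
{
    public int Id { get; set; }
    public int ProductId { get; set; }
    public string Key { get; set; }
    public string Value { get; set; }
}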
Firstly, it's not clear to me what you have in the way of POCOs. Did you hand-code these and your context, or did T4 generate them? There are some great articles here that benchmark performance with no POCOs, T4-generated POCOs/context, and hand-coded POCOs/context. As expected, there are huge performance savings (more than a 15-fold boost in his benchmark) going the POCO route rather than using the classes generated by the Entity Framework. You don't say which DBMS... if MSSQL, have you turned on the profiler to see what's being generated?

Persistence with EntityFramework in ASP.NET MVC application

In my ASP.NET MVC application I need to implement persistence of data. I chose Entity Framework for its ability to create classes, database tables and queries from the entity model, so that I don't have to write SQL table creation or LINQ to SQL queries by hand. So simplicity is my goal.
My approach was to create the model and then a custom HttpModule that gets called at the end of each request and simply calls SaveChanges() on the context. That made my life very hard: Entity Framework kept throwing very strange exceptions. Sometimes it worked, sometimes it did not. First I tried to fix the problems one by one, but when I got yet another one I realized that my general approach was probably wrong.
So what is the general practice for implementing persistence in an ASP.NET MVC application? Do I just call SaveChanges after each change? Isn't that a little inefficient? And I don't know how to do that with the service pattern anyway (services work with entities, so I'd have to pass the context instance to them so that they could save changes if they make some).
Some links to study materials or tutorials are also appreciated.
Note: this question asks about programming practice. I ask those who consider it vague to bear in mind that it is still solving my very particular problem, and that the right technique will save me a lot of technical trouble, before voting to close.
You just need to make sure SaveChanges gets called before your request finishes. At the bottom of a controller action is an ideal place. My controller actions typically look like this:
public ActionResult SomeAction(...)
{
    _repository.DoSomething();
    ...
    _repository.DoSomethingElse();
    ...
    _repository.SaveChanges();
    return View(...);
}
This has the added benefit that if an exception gets thrown, then SaveChanges will not get called. And you can either handle the exception in the action or in Controller.OnException.
It's going to be no more or less efficient than calling a stored procedure that many number of times (with respect to number of connections that need to be made).
Nominally, you would make all your changes to the object set, then SaveChanges to commit all those changes.
So instead of doing this:
mySet.Objects.Add(someObject);
mySet.SaveChanges();
mySet.OtherObjects.Add(someOtherObject);
mySet.SaveChanges();
You just need to do:
mySet.Objects.Add(someObject);
mySet.OtherObjects.Add(someOtherObject);
mySet.SaveChanges();
// Commits Both Changes
Usually your data access is wrapped by an object implementing the repository pattern. You then invoke a Save() method on the repository.
Something like
var customer = customerRepository.Get(id);
customer.FirstName = firstName;
customer.LastName = lastName;
customerRepository.SaveChanges();
The repository can then be wrapped by a service layer that provides view model objects or DTOs.
Isn't that little inefficient ?
Don't prematurely optimise. When you have a performance issue, analyse the performance, identify a cause and then optimise. Repeat.
Update
A repository wraps data access, usually for a single entity. A service layer wraps business logic and can access multiple entities through multiple repositories. It usually deals with 'slim' models or DTOs.
An example could be something like getting a list of invoices for a customer
public CustomerWithInvoices GetCustomerWithInvoices(int id)
{
    var customer = customerRepository.Get(id);
    var invoiceList = invoiceRepository.GetAllInvoicesFor(id);

    // CustomerWithInvoices is the slim view model / DTO holding both results
    return new CustomerWithInvoices
    {
        Customer = customer,
        Invoices = invoiceList
    };
}

Adding behavior to LINQ to Entities models

What's the preferred approach when using L2E to add behavior to the objects in the data model?
Having a wrapper class that implements the behavior you need with only the data you need
using (var dbh = new ffEntities())
{
    var query = from feed in dbh.feeds
                select new FFFeed(feed.name, new Uri(feed.uri), feed.refresh);
    return query.ToList();
}
//Later in a separate place, not even in the same class
foreach (FFFeed feed in feedList) { feed.doX(); }
Using directly the data model instances and have a method that operates over the IEnumerable of those instances
using (var dbh = new ffEntities())
{
    var query = from feed in dbh.feeds select feed;
    return query.ToList();
}
//Later in a separate place, not even in the same class
foreach (feeds feed in feedList) { doX(feed); }
Using extension methods on the data model class so it ends up having the extra methods the wrapper would have.
public static class dataModelExtensions
{
    public static void doX(this feeds source)
    {
        // do X
    }
}
//Later in a separate place, not even in the same class
foreach (feeds feed in feedList) { feed.doX(); }
Which one is best? I tend to favor the last approach, as it's clean and doesn't interfere with the CRUD facilities (I can just use it to insert/update/delete directly, no need to wrap things back), but I wonder if there's a downside I haven't seen.
Is there a fourth approach? I fail at grasping LINQ's philosophy a bit, especially regarding LINQ to Entities.
The entity classes are partial classes as far as I know, so you can add another file that extends them directly using the partial keyword.
Otherwise, I usually have a wrapper class, i.e. my ViewModel (I'm using WPF with MVVM). I also have some generic helper classes with fluent interfaces that I use to add specific query filters to my ViewModel.
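A minimal sketch of the partial-class approach, reusing the feeds entity from the question (the doX body is just a placeholder):

// lives in your own file next to the generated model, so regenerating
// the model does not overwrite it
public partial class feeds
{
    public void doX()
    {
        // behavior that works on this entity's own properties
    }
}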
I think it's a mistake to put behaviors on entity types at all.
The Entity Framework is based around the Entity Data Model, described by one of its architects as "very close to the object data model of .NET, modulo the behaviors." Put another way, your entity model is designed to map relational data into object space, but it should not be extended with methods. Save your methods for business types.
Unlike some other ORMs, you are not stuck with whatever object type comes out of the black box. You can project to nearly any type with LINQ, even if it is shaped differently than your entity types. So use entity types for mapping only, not for business code, data transfer, or presentation models.
Entity types are declared partial when code is generated. This leads some developers to attempt to extend them into business types. This is a mistake. Indeed, it is rarely a good idea to extend entity types. The properties created within your entity model can be queried in LINQ to Entities; properties or methods you add to the partial class cannot be included in a query.
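To illustrate that last point with a hedged example (Person, FirstName and LastName are hypothetical):

public partial class Person
{
    // added in a partial class, so it exists only in CLR code,
    // not in the entity model
    public string FullName
    {
        get { return FirstName + " " + LastName; }
    }
}

// compiles, but throws NotSupportedException at runtime because
// LINQ to Entities cannot translate FullName into SQL:
// var match = context.People.Where(p => p.FullName == "John Doe").ToList();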
Consider these examples of a business method:
public Decimal CalculateEarnings(Guid id)
{
    var timeRecord = (from tr in Context.TimeRecords
                          .Include("Employee.Person")
                          .Include("Job.Steps")
                          .Include("TheWorld.And.ItsDog")
                      where tr.Id == id
                      select tr).First();

    // Calculate has deep knowledge of the entity model
    return EarningsHelpers.Calculate(timeRecord);
}
What's wrong with this method? The generated SQL is going to be ferociously complex, because we have asked the Entity Framework to materialize instances of entire objects merely to get at the minority of properties required by the Calculate method. The code is also fragile. Changing the model will not only break the eager loading (via the Include calls), but will also break the Calculate method.
The Single Responsibility Principle states that a class should have only one reason to change. In the example shown on the screen, the EarningsHelpers type has the responsibility both of actually calculating earnings and of keeping up-to-date with changes to the entity model. The first responsibility seems correct, the second doesn't sound right. Let's see if we can fix that.
public Decimal CalculateEarnings(Guid id)
{
    var timeData = (from tr in Context.TimeRecords
                    where tr.Id == id
                    select new EarningsCalculationContext
                    {
                        Salary = tr.Employee.Salary,
                        StepRates = from s in tr.Job.Steps
                                    select s.Rate,
                        TotalHours = tr.Stop - tr.Start
                    }).First();

    // Calculate has no knowledge of the entity model
    return EarningsHelpers.Calculate(timeData);
}
In the next example, I have rewritten the LINQ query to pick out only the bits of information required by the Calculate method, and project that information onto a type which rolls up the arguments for the Calculate method. If writing a new type just to pass arguments to a method seemed like too much work, I could have also projected onto an anonymous type, and passed Salary, StepRates, and TotalHours as individual arguments. But either way, we have fixed the dependency of EarningsHelpers on the entity model, and as a free bonus we've gotten more efficient SQL, as well.
You might look at this code and wonder what would happen if the Job property of TimeRecord were nullable. Wouldn't I get a null reference exception?
No, I would not. This code will not be compiled and executed as IL; it will be translated to SQL. LINQ to Entities coalesces null references. In the example query shown on the screen, StepRates would simply return null if Job was null. You can think of this as being identical to lazy loading, except without the extra database queries. The code says, "If there is a job, then load the rates from its steps."
An additional benefit of this kind of architecture is that it makes unit testing of the Web assembly very easy. Unit tests should not access a database, generally speaking (put another way, tests which do access a database are integration tests rather than unit tests). It's quite easy to write a mock repository which returns arrays of objects as Queryables rather than actually going to the Entity Framework.
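A hedged sketch of such a test double, again using the feeds entity from the question (IFeedRepository and the in-memory fake are illustrative names):

using System.Linq;

public interface IFeedRepository
{
    IQueryable<feeds> Feeds { get; }
}

public class InMemoryFeedRepository : IFeedRepository
{
    private readonly feeds[] _feeds;

    public InMemoryFeedRepository(params feeds[] feeds)
    {
        _feeds = feeds;
    }

    public IQueryable<feeds> Feeds
    {
        // LINQ to Objects stands in for LINQ to Entities,
        // so unit tests never touch a database
        get { return _feeds.AsQueryable(); }
    }
}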
