Create repository without using IQueryable for specification - c#

I've recently been looking into DDD, repositories and the specification pattern and after reading a hand full of blogs and examples I'm trying to come up with a repository that I'm happy with.
I have been exposing IQueryable on my repositories until recently, but after understanding that IQueryable is a leaky abstraction (because of its deferred execution it effectively crosses the boundary of my data layer) I have changed my repositories to return IEnumerable instead.
So I might have something like this for example:
public interface IUserRepository
{
    IEnumerable<User> All();
    void Save(User item);
    void Delete(User item);
}
I thought okay, that seems good, but what if I wanted to filter the data by first name or email? After reading a blog post I implemented a way of passing ICriteria into the All() method.
public IEnumerable<TEntity> All(ICriteria<TEntity> criteria)
{
    // Set is a DbSet from Entity Framework
    return criteria.BuildQueryFrom(Set).ToList();
}
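For completeness, the ICriteria<TEntity> interface itself looks roughly like this (simplified):
using System.Data.Entity;
using System.Linq;

public interface ICriteria<TEntity> where TEntity : class
{
    // Builds up a query against the given DbSet without executing it.
    IQueryable<TEntity> BuildQueryFrom(DbSet<TEntity> dbSet);
}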
And an example criteria class:
public class AccountById : ICriteria<Account>
{
    private readonly int _id;

    public AccountById(int id)
    {
        _id = id;
    }

    IQueryable<Account> ICriteria<Account>.BuildQueryFrom(DbSet<Account> dbSet)
    {
        return from entity in dbSet
               where entity.Id == _id
               select entity;
    }
}
This works fine and I can build these criteria classes to meet my requirements and pass them into the repos and all works well.
One thing I don't like, though, is being tied to IQueryable, because it means I have to use an ORM that supports LINQ. If I wanted to use SqlCommand in my repository instead, say for performance or so I can write cleaner SQL than the ORM-generated SQL, how would I go about doing that?
I would also like to avoid having to write a new method for each filter like FindById, FindByUsername, FindByEmail etc.
How would I go about creating a repository that lets me specify the criteria to select by without using IQueryable, so it would still work whether I used EF, NHibernate or plain SqlCommand? I'm struggling to find an example that uses SqlCommand and the specification pattern.
How did people do this before ORMs?

Personally, I don't mind IQueryable being a leaky abstraction, because it allows me to write LINQ queries in my service layer and therefore have more testable code. As long as objects that implement IQueryable are kept inside the service layer (i.e. don't return them to the presentation layer) I don't see a problem. It maximizes the testability of your application. See for instance my blog post about testable LINQified repositories.
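For example, here is a minimal sketch of the idea (my own illustration, not code from the linked post): the repository exposes IQueryable, the LINQ query lives in the service layer, and a unit test can swap in an in-memory fake so the same query runs as LINQ to Objects. The User type and its Email property are assumed for the example.
using System.Collections.Generic;
using System.Linq;

public class User
{
    public string Email { get; set; }
}

public interface IQueryableUserRepository
{
    IQueryable<User> Users { get; }
}

public class UserService
{
    private readonly IQueryableUserRepository repository;

    public UserService(IQueryableUserRepository repository)
    {
        this.repository = repository;
    }

    // Against EF this query is translated to SQL; against the fake below
    // it runs in memory, which is what makes the service easy to unit test.
    public User GetByEmail(string email)
    {
        return repository.Users.SingleOrDefault(u => u.Email == email);
    }
}

// Fake repository for unit tests: wraps an in-memory list as IQueryable.
public class FakeUserRepository : IQueryableUserRepository
{
    private readonly List<User> users = new List<User>
    {
        new User { Email = "test@example.com" }
    };

    public IQueryable<User> Users
    {
        get { return users.AsQueryable(); }
    }
}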

Related

Use repository without Entity Framework

I'm trying to figure out how to implement the repository pattern WITHOUT Entity Framework. I need to use a disconnected ADO.NET implementation (without DbContext), and I don't know if it is possible.
I have a repository interface like this:
public interface IRepository<T>
{
    void Add(T newEntity);
    void Remove(T entity);
    void Update(T entity);
    IQueryable<T> GetAll();
    T Get(object key);
    void SaveChanges();
}
So, I just need to know how to use it with an ADO.NET connection and a mapper or something similar.
Your question is about the repository pattern; however, your sample looks more like writing a custom LINQ provider.
That is actually a rather complex thing, and what I think you really need is to use the repository pattern properly, where you have methods like GetCar(int id) that you then implement specifically using whatever framework you want.
The difference is that with a LINQ provider like EF, you're exposing the ability for callers to write any query they want. However, technically this breaks the repository pattern. The idea with the repository pattern is to contain all the queries in a single class that exposes methods for doing only what the application needs. This way you don't get queries all over your code, and you can implement only the operations your application needs.
In other words, EF is not really an example of the repository pattern, it's a way to talk to the database and create classes that represent the entities in the database. Typically you'd use EF or something similar inside your repository and possibly expose the actual entity types for the outside to use.
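For example, a rough sketch of what the ADO.NET flavour of such a repository could look like (the Car entity and the table/column names are illustrative assumptions, not part of your code):
using System.Data.SqlClient;

public class Car
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICarRepository
{
    Car GetCar(int id);
}

public class SqlCarRepository : ICarRepository
{
    private readonly string connectionString;

    public SqlCarRepository(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public Car GetCar(int id)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Id, Name FROM Cars WHERE Id = @Id", connection))
        {
            command.Parameters.AddWithValue("@Id", id);
            connection.Open();

            using (var reader = command.ExecuteReader())
            {
                if (!reader.Read())
                {
                    return null;
                }

                // Map the row to the entity by hand; this is the work
                // EF/NHibernate would otherwise do for you.
                return new Car
                {
                    Id = reader.GetInt32(0),
                    Name = reader.GetString(1)
                };
            }
        }
    }
}
The caller only ever sees ICarRepository, so an EF-based or NHibernate-based implementation could be swapped in later without touching the business logic.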
Now, writing a custom LINQ provider is still interesting, and if you want to do it, check out this series:
http://www.jacopretorius.net/2010/01/implementing-a-custom-linq-provider.html

How can I implement an agnostic data layer?

I am writing a proof of concept application.
When it comes to the data layer we need the ability to connect to different databases, and different technologies might be used:
ADO.NET (SqlCommand etc.)
Entity Framework
NHibernate
What I am saying is that whatever calls our RepositoryService class should be ignorant of the provider used, e.g. Entity Framework, raw ADO.NET, NHibernate, etc.
Is there an example out there, an empty shell I can look at, or a code snippet from you? Just to give an idea of how you would go about it.
Here is a noddy implementation to give you an idea (omitting possible IoC etc.):
public class BusinessService
{
    public List<CustomerDto> GetCustomers()
    {
        RepositoryService repositoryService = new RepositoryService();
        List<CustomerDto> customers = repositoryService.GetCustomers().ToList();
        return customers;
    }
}

public class RepositoryService : IRepository
{
    private string dbProvider;

    public RepositoryService()
    {
        // In here determine the provider from the config file (e.g. Sql, EF, etc.)
        // and call the appropriate repository
        // dbProvider = ???
    }

    public IEnumerable<CustomerDto> GetCustomers()
    {
        // Get the customers from the chosen repository
        throw new NotImplementedException();
    }
}

public interface IRepository
{
    IEnumerable<CustomerDto> GetCustomers();
}

public class SqlRepository : IRepository
{
    public IEnumerable<CustomerDto> GetCustomers()
    {
        throw new NotImplementedException();
    }
}

public class EFRepository : IRepository
{
    public IEnumerable<CustomerDto> GetCustomers()
    {
        throw new NotImplementedException();
    }
}

public class CustomerDto
{
    public string Name { get; set; }
    public string Surname { get; set; }
}
Many thanks
You should be clearer about your objectives (and those of your manager). Accessing your data through some repository interfaces is a first step. The second step is to have a shared object representation of your data table rows (or of your entities, if you want to refine the table mappings).
The idea behind the scenes may be:
a) We don't know ORM technologies well and want to try them without taking the risk of poor performance.
b) Our database is very large and we manipulate huge amounts of data.
c) Our database contains many thousands of tables.
d) ...
The general answer may be:
1) Use the chosen ORM when possible.
2) Downgrade to ADO.NET or even to stored procedures when performance is poor.
Entity Framework and NHibernate use a high-level entity mapping abstraction. Do you want to use this? If not, you may use a lightweight object mapper like Dapper or PetaPoco.
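For example, a minimal sketch of a Dapper-based repository (the Customer class and the Customers table are assumptions for illustration):
using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerRepository
{
    private readonly string connectionString;

    public CustomerRepository(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public IEnumerable<Customer> GetCustomers()
    {
        using (var connection = new SqlConnection(connectionString))
        {
            // Dapper maps the result columns onto Customer properties by name
            // and buffers the results, so the connection can be disposed here.
            return connection.Query<Customer>("SELECT Id, Name FROM Customers");
        }
    }
}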
ORMs are a good way to lower the development cost of your database access code by 70% to 80% (95% if you just read data). Choosing to support all of them at once will ensure those potential cost gains are lost.
PetaPoco is very interesting for a first experiment because it includes its very light mapper source code directly in your C# project and generates table objects with an easy-to-understand T4 transform file (all the source code is small and included in your data access layer). Its major drawback is that its author has not had time to work on it in recent years.
While ORM technologies can make programs easier to write and scale, they have drawbacks:
1) Because you work outside the database, operations between in-memory (or not yet persisted) objects and database data can easily become very costly: if fetching the data for one object generates one request, an operation on a collection of objects will generate as many requests as there are items in the collection.
2) Because of the complex change tracking mechanisms in high-level ORMs, saving data can become very slow if you don't take care of this.
3) The more functionality the ORM offers, the longer your learning curve is.
The way that I generally accomplish this task is to have different concrete implementations of your repository interfaces, so you can have an EFRepository or an NHibernateRepository or an AdoNetRepository or an InMemoryDatabaseRepository implementation.
As long as you encapsulate the construction of your repository (through a factory or dependency injection or whatever), the types that consume your repository don't have to know exactly what kind of repository they are working with.
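For example, a minimal sketch of such a factory against the IRepository interface above, reading the provider name from app settings (the key name and values are assumptions):
using System;
using System.Configuration;

public static class RepositoryFactory
{
    // "RepositoryProvider" is an assumed appSettings key, e.g. "EF" or "Sql".
    public static IRepository Create()
    {
        string provider = ConfigurationManager.AppSettings["RepositoryProvider"];

        switch (provider)
        {
            case "EF":
                return new EFRepository();
            case "Sql":
                return new SqlRepository();
            default:
                throw new InvalidOperationException("Unknown repository provider: " + provider);
        }
    }
}
BusinessService would then ask the factory (or a DI container) for an IRepository instead of newing up RepositoryService directly.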

Which types should my Entity Framework repository and service layer methods return: List, IEnumerable, IQueryable?

I have a concrete repository implementation that returns a IQueryable of the entity:
public class Repository
{
private AppDbContext context;
public Repository()
{
context = new AppDbContext();
}
public IQueryable<Post> GetPosts()
{
return context.Posts;
}
}
My service layer can then apply LINQ as needed in other methods (Where, paging, etc.).
Right now my service layer is set up to return IEnumerable:
public IEnumerable<Post> GetPageOfPosts(int pageNumber, int pageSize)
{
    Repository postRepo = new Repository();
    var posts = (from p in postRepo.GetPosts() // this is IQueryable
                 orderby p.PostDate descending
                 select p)
                .Skip((pageNumber - 1) * pageSize)
                .Take(pageSize);
    return posts;
}
This means in my code-behind I have to call ToList() if I want to bind to a repeater or other control.
Is this the best way to handle the return types or do I need to be converting to list before I return from my service layer methods?
Both approaches are possible; it is only a matter of choice.
If you use IQueryable you have a simple repository which will work in most cases, but it is less testable because the queries defined on the IQueryable are LINQ to Entities, whereas if you mock the repository they become LINQ to Objects in unit tests, so you don't test your real implementation. You need integration tests to test your query logic.
If you use IEnumerable you will have very complex public interfaces on your repositories: you will need a special repository type for every entity which needs a special query exposed on the repository. This kind of repository was more common with stored procedures, where each method on the repository mapped to a single stored procedure. This type of repository provides better separation of concerns and a less leaky abstraction, but at the same time it removes a lot of the ORM and LINQ flexibility.
Finally, you can have a combined approach where you have methods returning IEnumerable for the most common scenarios (queries used more often) and one method exposing IQueryable for rare or complex dynamically built queries.
Edit:
As noted in the comments, using IQueryable has some side effects. When you expose IQueryable you must keep your context alive until you execute the query: IQueryable uses deferred execution in the same way as IEnumerable, so unless you call ToList, First or other functions that execute the query you still need your context alive.
The simplest way to achieve that is to use the disposable pattern in the repository: create the context in its constructor and dispose it when the repository is disposed. Then you can use using blocks and execute queries inside them. This approach is for very simple scenarios where you are happy with a single context per repository. More complex (and common) scenarios require the context to be shared among multiple repositories. In such cases you can use something like a (disposable) context provider/factory and inject the factory into the repository constructor (or allow the provider to create repositories). This leads to a DAL-layer factory and a custom unit of work.
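A rough sketch of the simple variant (one context per repository, disposed together with it), using the AppDbContext and Post types from the question:
using System;
using System.Collections.Generic;
using System.Linq;

public class PostRepository : IDisposable
{
    private readonly AppDbContext context = new AppDbContext();

    public IQueryable<Post> GetPosts()
    {
        return context.Posts;
    }

    public void Dispose()
    {
        context.Dispose();
    }
}

public class PostService
{
    // The query must be executed (ToList) before the using block ends,
    // otherwise the context is already disposed when the query finally runs.
    public List<Post> GetLatestPosts()
    {
        using (var repository = new PostRepository())
        {
            return repository.GetPosts()
                             .OrderByDescending(p => p.PostDate)
                             .Take(10)
                             .ToList();
        }
    }
}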
In other words, your question comes down to determining when and where the AppDbContext is disposed.
If you don't dispose it (meaning it is only disposed when the application exits), there is no problem returning an IEnumerable/IQueryable that holds no actual data yet. However, if the AppDbContext will be disposed, you need to return an IList holding the actual data before it is disposed.
UPDATE:
I think you need to catch the meaning of the following code, even though you may already know it.
// Everything outside these methods refers to your own code.

// The returned IEnumerable can only be used outside this scope if the
// AppDbContext is guaranteed not to have been disposed.
public IEnumerable<Post> GetIEnumerableWithoutActualData()
{
    return context.Posts;
}

// Even if the AppDbContext is disposed, this IEnumerable can still be used,
// because the data has already been materialized by ToList().
public IEnumerable<Post> GetIEnumerableWithActualData()
{
    return context.Posts.ToList();
}
Your return types should always be as high up the inheritance hierarchy as possible (or maybe I should write that as low, if the base is towards the bottom). If all your methods require IQueryable<T>, then all the return values should be of that type.
That said, IEnumerable<T> has an extension method (AsQueryable()) you can call to achieve (what I believe to be) the desired result.
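For example (Post is the entity from the question; note that what AsQueryable() gives you depends on what is underneath):
using System;
using System.Collections.Generic;
using System.Linq;

public class PostConsumer
{
    // If "posts" is really a deferred EF query typed as IEnumerable<Post>,
    // AsQueryable() hands the same IQueryable back and later operators are
    // still translated to SQL; if it is a plain in-memory collection
    // (e.g. after ToList()), later operators run as LINQ to Objects.
    public int CountPostsSince(IEnumerable<Post> posts, DateTime since)
    {
        IQueryable<Post> queryable = posts.AsQueryable();
        return queryable.Count(p => p.PostDate >= since);
    }
}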

Repository Pattern: How to implement a basic Repository including a predicate in C#?

I am new to repositories. I just read about implementing predicates and a Unit of Work (Fowler). I have seen repository interfaces like the following:
public interface IRepository<ET>
{
    ET Add(ET entity);
    ET Remove(int id);
    ET Get(int id);
    IList<ET> Get(Expression<Func<ET, bool>> predicate);
}
Of course the Unit of Work would inject a data context (Microsoft fan) to the new repository, where the Unit of Work would have a .Save() method, calling Save on all data contexts.
There's no Edit method, so I assume you can modify any Entity that pops out of the Repository then call save changes on the Unit of Work.
Is this correct? Leaky? What am I missing? Should OrderBy methods ever be in a repository? Should paging (.Skip().Take()) somehow be implemented in the predicate?
Links to example code out there would be fantastic, especially how to implement the predicate in a repository.
If you are referring to Entity Framework, I would suggest you read this: Link
Update:
I am not an expert in the repository pattern; however, I am using it in my project now. Apart from performance, the following are the benefits I find in this design pattern:
1. It simplifies CRUD operation implementations for all entities.
With one interface:
public interface IDataRepository<T> where T : class
you will then be able to replicate it for other entities easily and quickly:
public class EntityOneRepository : IDataRepository<EntityOne>
public class EntityTwoRepository : IDataRepository<EntityTwo>
2. It keeps my code DRY.
Some entities may have their own methods for data manipulation (e.g. stored procedures).
You can extend them easily without touching other repositories:
public interface IDonationRepository : IDataRepository<Donation>
{
    // method one
    // method two
    // ....
}
As for paging, it can either be done with Skip() and Take(), or you can define your own stored procedure in the database and then call it via EF4; in that case you will benefit from database SP caching as well.
Sometimes, keeping the code clean and logically readable is also important for a better app structure.
The repository interface you've presented is a very easy-to-use CRUD interface that can work well in many types of applications. In general, I'd rather not have paging and sorting parameters or options on my repository; instead I'd rather return an IQueryable and let callers compose those types of operations into the query (as long as you stay on IQueryable, a technology like EF or NHibernate can translate those operators into SQL; if you fall back to IList or IEnumerable they become in-memory operations).
Although I avoid paging and sorting I might have more specific operations on a repository to shield business logic from some details. For example, I might extend IEmployeeRepository from IRepository and add a GetManagers method, or something similar to hide the Where expression needed in the query. It all depends on the application and complexity level.
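For example, a rough sketch of such a specialised repository built on the IRepository<ET> interface from your post (the Employee entity, its IsManager flag and the context type are illustrative assumptions):
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

public class Employee
{
    public int Id { get; set; }
    public bool IsManager { get; set; }
}

public class CompanyContext : DbContext
{
    public DbSet<Employee> Employees { get; set; }
}

public interface IEmployeeRepository : IRepository<Employee>
{
    // Intention-revealing query that hides the Where expression from callers.
    IList<Employee> GetManagers();
}

public class EmployeeRepository : IEmployeeRepository
{
    private readonly CompanyContext context;

    public EmployeeRepository(CompanyContext context)
    {
        // The context is handed in by the unit of work (see below).
        this.context = context;
    }

    public Employee Add(Employee entity)
    {
        return context.Employees.Add(entity);
    }

    public Employee Remove(int id)
    {
        var entity = Get(id);
        if (entity != null)
        {
            context.Employees.Remove(entity);
        }
        return entity;
    }

    public Employee Get(int id)
    {
        return context.Employees.Find(id);
    }

    // The predicate is passed straight through to EF, which translates it to SQL.
    public IList<Employee> Get(Expression<Func<Employee, bool>> predicate)
    {
        return context.Employees.Where(predicate).ToList();
    }

    public IList<Employee> GetManagers()
    {
        return Get(e => e.IsManager);
    }
}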
One important note on this sentence in your post:
Of course the Unit of Work would inject a data context (Microsoft fan) to the new repository, where the Unit of Work would have a .Save() method, calling Save on all data contexts.
Make sure you are using a single data context/object context inside each unit of work, because a context is essentially the underlying unit of work. If you are using multiple contexts in the same logical transaction then you effectively have multiple units of work.
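Continuing the sketch above, a rough shape for that arrangement is a unit of work that owns the single context and hands it to every repository it creates:
using System;

public class UnitOfWork : IDisposable
{
    // One context per unit of work: all repositories created here share it,
    // so Save() commits everything as a single logical transaction.
    private readonly CompanyContext context = new CompanyContext();

    private IEmployeeRepository employees;

    public IEmployeeRepository Employees
    {
        get { return employees ?? (employees = new EmployeeRepository(context)); }
    }

    public void Save()
    {
        context.SaveChanges();
    }

    public void Dispose()
    {
        context.Dispose();
    }
}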
I have a couple sample implementations in this project:
http://odetocode.com/downloads/employeetimecards.zip
The code might make more sense if you read this accompanying article:
http://msdn.microsoft.com/en-us/library/ff714955.aspx
Hope that helps,

LINQ query methods in the data access layer(DAL)

A project based on the classic 3 layers: UI (not important in this question), business logic layer and data access layer. I have several tables: Customers, Products, Orders, Users. The design is supposed to be:
//DAL methods
public IEnumerable<Customer> GetAllCustomers()
public IEnumerable<Product> GetAllProducts()
public IEnumerable<Order> GetAllOrders()
public IEnumerable<User> GetAllUsers()
//BLL methods
public IEnumerable<Order> GetOrders(long CustomerID)
public IEnumerable<Product> GetProducts(long CustomerID)
public IEnumerable<Product> GetProducts(long OrderID)
What confuses me is that all the methods in the DAL are GetAllXXXX, and I have to admit that this design is working fine. In the DAL there is nothing but GetAll methods. In the BLL there is nothing but combined operations (filter/join/select) on top of the GetAll methods. Is it weird? What's the correct way?
No, that's not weird; in fact that is very similar to how I do it.
Only differences for me:
I use IQueryable<T> instead of IEnumerable<T> (to get deferred exec)
I have a generic repository (Repository<T>):
IQueryable<T> Find()
void Add(T)
etc etc
This way, my repositories stay clean/simple.
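A minimal sketch of such a generic repository (assuming Entity Framework underneath; the concrete context type is an assumption and the entity classes are the ones from the question):
using System.Data.Entity;
using System.Linq;

public class StoreContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
    // ... DbSets for the other entities (Customer, Product, User) go here.
}

public class Repository<T> where T : class
{
    // One context per repository keeps the sketch short; in real code the
    // context would normally be injected (DI) or shared via a unit of work.
    private readonly DbContext context = new StoreContext();

    // Returns IQueryable so the caller gets deferred execution and can
    // compose Where/OrderBy/Skip/Take before anything hits the database.
    public IQueryable<T> Find()
    {
        return context.Set<T>();
    }

    public void Add(T entity)
    {
        context.Set<T>().Add(entity);
    }

    public void Remove(T entity)
    {
        context.Set<T>().Remove(entity);
    }

    public void SaveChanges()
    {
        context.SaveChanges();
    }
}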
So your BLL could be implemented like this:
public IEnumerable<Order> GetOrders(long CustomerID)
{
    Repository<Order> orderRepository = new Repository<Order>(); // should use DI here, but i digress
    return orderRepository
        .Find()                                 // no query executed...
        .Where(o => o.CustomerID == CustomerID) // still nothing...
        .ToList();                              // query executed, with BL applied! cool!
}
This makes the BLL do the projection/work/logic. The repository just handles persistence of T; it doesn't care about the actual type or any business logic.
That's how I do it, anyway.
Consider that your data access layer could be providing services like:
Create
Update
Delete
GetSingleCustomer()
CalculateUpperManagementTuesdayReport()
I wouldn't say it's terribly odd, but perhaps your DAL doesn't need to provide those services, as your application doesn't require them.
With your filter/join/select in the BL, I'd prefer IQueryable<T> instead of IEnumerable<T>. This means that the execution of a given statement in the BL code doesn't happen until you call Single(), First(), ToList(), Count(), etc. within the BL code.
My question would be: what would you lose if you merged the current BLL and the DAL? They both seem to be dealing with bridging the gap from persisted data (DBs) to objects. It seems like the simplest thing that would work.
Another way of looking at it would be: is change localized? E.g. if there is a change in the DAL layer, would that be isolated or would it ripple through into the upper layers (which is undesirable)?
The BLL ideally should encapsulate the rules/workflows of your domain, aka domain knowledge, e.g. whether certain customers are treated differently. The DAL exists to transform your data from the persisted state into objects (or data structures to be consumed by higher layers) and vice versa. That's my understanding as of today...
