A project based on the classic 3 layers: UI (not important in this question), business logic layer, and data access layer. I have several tables: Customers, Products, Orders, Users. The design is supposed to be:
//DAL methods
public IEnumerable<Customer> GetAllCustomers()
public IEnumerable<Product> GetAllProducts()
public IEnumerable<Order> GetAllOrders()
public IEnumerable<User> GetAllUsers()
//BLL methods
public IEnumerable<Order> GetOrders(long CustomerID)
public IEnumerable<Product> GetProducts(long CustomerID)
public IEnumerable<Product> GetProducts(long OrderID)
What confuses me is that all the methods in the DAL are GetAllXXXX, and I have to admit that this design works fine. In the DAL there is nothing but GetAll methods; in the BLL there is nothing but combined operations (filter/join/select) on top of those GetAll methods. Is that weird? What's the correct way?
No, that's not weird; in fact that is very similar to how I do it.
Only differences for me:
I use IQueryable<T> instead of IEnumerable<T> (to get deferred execution)
I have a generic repository (Repository<T>):
IQueryable<T> Find()
void Add(T)
etc etc
This way, my repositories stay clean/simple.
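A minimal sketch of such a generic repository, assuming an EF DbContext as the backing store (MyDbContext and the exact members are illustrative, not a prescribed API):

// Illustrative sketch only; assumes an EF DbContext-backed store.
// "MyDbContext" is a placeholder context name, not part of the original answer.
public class Repository<T> where T : class
{
    private readonly DbContext _context;

    // Matches the "new Repository<Order>()" usage in the BLL example below.
    public Repository() : this(new MyDbContext()) { }

    public Repository(DbContext context)
    {
        _context = context;
    }

    // Deferred: no SQL runs until the query is enumerated (ToList, First, Count, ...).
    public IQueryable<T> Find()
    {
        return _context.Set<T>();
    }

    public void Add(T entity)
    {
        _context.Set<T>().Add(entity);
    }

    public void Remove(T entity)
    {
        _context.Set<T>().Remove(entity);
    }
}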
So your BLL could be implemented like this:
public IEnumerable<Order> GetOrders(long CustomerID)
{
Repository<Order> orderRepository = new Repository<Order>(); // should use DI here, but I digress
return orderRepository
.Find() // no query executed...
.Where(o => o.CustomerID == CustomerID) // still nothing...
.ToList(); // query executed, with BL applied! cool!
}
This makes the BLL do the projection/work/logic. The repositories just handle persistence of T; they don't care about the actual type or any business logic.
That's how I do it, anyway.
Consider that your data access layer could be providing services like:
Create
Update
Delete
GetSingleCustomer()
CalculateUpperManagementTuesdayReport()
I wouldn't say it's terribly odd, but perhaps your DAL doesn't need to provide those services, as your application doesn't require them.
Having your filter/join/select in the BL, I'd prefer IQueryable<T> instead of IEnumerable<T>. This means that a given statement in the BL code isn't executed until you call Single(), First(), ToList(), Count(), etc. within the BL code.
My question would be: what would you lose if you merged the current BLL and DAL? They both seem to be dealing with bridging the gap from persisted data (databases) to objects. Seems like the simplest thing that would work.
Another way of looking at it would be: is change localized? E.g. if there is a change in the DAL, would that be isolated, or would it ripple through into the upper layers (which is undesirable)?
The BLL ideally should encapsulate the rules/workflows of your domain, a.k.a. domain knowledge, e.g. if certain customers are treated differently. The DAL exists to transform your data from its persisted state into objects (or data structures to be consumed by higher layers) and vice versa. That's my understanding as of today...
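As a sketch of that split, here is a hedged example of a BLL method applying an invented domain rule (the IsPremium and Total properties are made up for illustration) on top of plain repository data:

public IEnumerable<Order> GetDiscountEligibleOrders(long customerId)
{
    var customerRepository = new Repository<Customer>();
    var orderRepository = new Repository<Order>();

    // Domain rule (invented for illustration): premium customers qualify on every order,
    // everyone else only on orders of 100 or more.
    var customer = customerRepository.Find().Single(c => c.CustomerID == customerId);
    var orders = orderRepository.Find().Where(o => o.CustomerID == customerId);

    return customer.IsPremium
        ? orders.ToList()
        : orders.Where(o => o.Total >= 100m).ToList();
}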
I have a repository that asks for a DbContext in its constructor, and I use Ninject to resolve this dependency. I set the object scope to InRequestScope, which means instantiating an object per HTTP request, but I'm not sure when an HTTP request actually happens. Is it when the app is loaded? Or does it happen when we call SaveChanges()?
My approach for managing the DbContext is as follows: I have a repository asking for a context, as I said, and then the controller asks for this repository in its constructor:
public class PageGroupsController : Controller
{
IGenericRepository<PageGroup> _repository;
public PageGroupsController(IGenericRepository<PageGroup> repository)
{
_repository = repository;
}
// GET: Admin/PageGroups
public ActionResult Index()
{
return View(_repository.Get());
}
}
And the repository:
public class GenericRepository<TEntity> : IGenericRepository<TEntity> where TEntity : class
{
private DbContext _context;
public GenericRepository(DbContext context)
{
_context = context;
}
public IEnumerable<TEntity> Get()
{
return _context.Set<TEntity>().ToList();
}
}
And the NinjectWebCommon.cs which is where I solve the dependencies:
private static void RegisterServices(IKernel kernel)
{
kernel.Bind<DbContext>().To<MyCmsContext>().InRequestScope();
kernel.Bind<IGenericRepository<PageGroup>>().To<GenericRepository<PageGroup>>();
}
Is this approach good at all? I didn't want to use using (var db = new DbContext()) all over the place in my controllers, and I didn't want to make a single context for the whole app either. Is this approach equivalent to the using approach (I mean querying what we need inside a using block), but with less coupling?
Each time a controller action is called from any web client, that is a request. So when someone visits your site and hits /Pagegroups/Index resolved through routing, that is a request. When you do a form submit from the client, that is a request; make an Ajax call, that is a request.
Do you want the DbContext to be constructed for each request? Absolutely, and no "longer" than a request. For simple applications, using using() within actions is perfectly fine, but it does add a bit of boilerplate code repeated everywhere. In more complex, long-lived applications, where you might want to unit test or where more complex logic benefits from being broken down into smaller components shared around, using blocks are a bit of a mess for sharing the DbContext, so an injected DbContext scoped to the request serves that purpose just fine. Every class instance serving a request is given the exact same DbContext instance.
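For reference, the using-block alternative inside an action might look like this (MyCmsContext is the context type from the question; this is a sketch, not a recommendation):

// Simple-app alternative: create and dispose the context per action.
public ActionResult Index()
{
    using (var context = new MyCmsContext())
    {
        var pageGroups = context.Set<PageGroup>().ToList();
        return View(pageGroups);
    }
}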
You don't want a DbContext scoped longer than a request (i.e. a singleton) because while requests from one client may be sequential, requests from multiple users are not. Web servers respond to various user requests at a time on different threads, and EF's DbContext is not thread-safe. This catches out new developers where everything seems to work on their machine when testing, only to find that once deployed to a server and handling concurrent requests, errors start popping up.
Also, as DbContexts age, they get bigger and slower, tracking more and more entity instances. This leads to gradual performance loss, as well as issues where a DbContext serves up cached instances that don't reflect data changes from other sources. A newer development team might get caught out by the cross-thread issue and introduce locking or the like because they want to use EF's caching rather than a shorter lifespan (assuming DbContexts are "expensive" to create all the time; they're not!). This is often the cause of teams calling to abandon EF because it's "slow", without realizing that design decisions prevented them from taking advantage of most of EF's capabilities.
As a general tip, I would strongly recommend avoiding the Generic Repository pattern when working with EF. It will give you no benefit other than pigeon-holing your data logic. The power of EF is its ability to translate operations against objects and their relationships down to SQL; it is not merely a wrapper to get down to data. Methods like this:
public IEnumerable<TEntity> Get()
{
return _context.Set<TEntity>().ToList();
}
are entirely counter-productive. If you have tens of thousands of records and want to order and paginate, and do something like:
var items = repository.Get()
.OrderBy(x => x.CreatedAt)
.Skip(pageNumber * pageSize)
.Take(pageSize)
.ToList();
The problem is that your repository tells EF to load, track, and materialize the entire table before any sorting or pagination takes place. What's worse, if there was any filtering to be done (Where clauses based on search criteria etc.), it wouldn't be applied until the repository had returned all of the records.
Instead, if you just had your controller method do this:
var items = _context.PageGroups
.OrderBy(x => x.CreatedAt)
.Skip(pageNumber * pageSize)
.Take(pageSize)
.ToList();
then EF would compose a SQL query that performs the ordering and fetches just that single page of entities. The same goes for taking advantage of projection with Select to fetch back just the details you need, or for eager loading related entities. Trying to do that with a generic repository gets either very complex (passing expressions around, or lots of arguments to try to handle sorting, pagination, etc.) or very inefficient, often both.
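As a sketch of those last two points, working directly against the context (the view model and the Pages navigation property are assumed; Include comes from the System.Data.Entity or Microsoft.EntityFrameworkCore namespace, depending on the EF version):

// Projection: the generated SQL selects only the columns the view model needs.
var summaries = _context.PageGroups
    .OrderBy(x => x.CreatedAt)
    .Select(x => new PageGroupSummaryViewModel // hypothetical view model
    {
        PageGroupId = x.PageGroupId,
        Name = x.Name
    })
    .Skip(pageNumber * pageSize)
    .Take(pageSize)
    .ToList();

// Eager loading: related Pages come back in the same query instead of lazily later.
var withPages = _context.PageGroups
    .Include(x => x.Pages)
    .Where(x => x.IsActive)
    .ToList();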
Two reasons I recommend considering a repository are: unit testing, and handling low-level common filtering such as soft-delete (IsActive) and/or multi-tenancy (OwnerId) type data; basically any time the data has to conform to standard rules that a repository can enforce in one place. In those cases I recommend non-generic repositories that serve their respective controllers. For instance, if I have a ManagePageGroupsController, I'd have a ManagePageGroupsRepository to serve it.

The key difference in this pattern is that the repository returns IQueryable<TEntity> rather than IEnumerable<TEntity> or even TEntity (unless it's the result of a "Create" method). This allows the consumers to still handle sorting, pagination, projection, etc. as if they were working with the DbContext, while the repository can ensure Where clauses for the low-level rules are in place, assert access rights, and be mocked out easily as a substitute for unit tests. (It's easier to mock a repository method that serves an IQueryable than to mock a DbContext/DbSet.) Unless your application is going to use unit tests, or has a few low-level common considerations like soft-deletes, I'd recommend not bothering with the complexity of trying to abstract the DbContext, and instead fully leveraging everything EF has to offer.
Edit: Expanding on IQueryable
Once you determine that a Repository serves a use for testing or base filtering like IsActive, you can avoid a lot of complexity by returning IQueryable rather than IEnumerable.
Consumers of a repository will often want to do things like filter results, sort results, paginate results, project results to DTOs / ViewModels, or otherwise use the results to perform checks like getting a count or checking if any items exist.
As covered above, a method like:
public IEnumerable<PageGroup> Get()
{
return _context.PageGroups
.Where(x => x.IsActive)
.ToList();
}
would return ALL items from the database, to be stored in memory by the application server, before any of these considerations are applied. If we want to support filtering:
public IEnumerable<PageGroup> Get(PageGroupFilters filters)
{
    var query = _context.PageGroups
        .Where(x => x.IsActive);
    if (!string.IsNullOrEmpty(filters.Name))
        query = query.Where(x => x.Name.StartsWith(filters.Name));
    // Repeat for any other supported filters.
    return query.ToList();
}
Then adding order by conditions:
public IEnumerable<PageGroup> Get(PageGroupFilters filters, IEnumerable<OrderByCondition> orderBy)
{
    var query = _context.PageGroups
        .Where(x => x.IsActive);
    if (!string.IsNullOrEmpty(filters.Name))
        query = query.Where(x => x.Name.StartsWith(filters.Name));
    // Repeat for any other supported filters.
    foreach (var condition in orderBy)
    {
        if (condition.Direction == Directions.Ascending)
            query = query.OrderBy(condition.Expression);
        else
            query = query.OrderByDescending(condition.Expression);
    }
    return query.ToList();
}
then pagination:
public IEnumerable<PageGroup> Get(PageGroupFilters filters, IEnumerable<OrderByCondition> orderBy, int pageNumber = 1, int pageSize = 0)
{
    var query = _context.PageGroups
        .Where(x => x.IsActive);
    if (!string.IsNullOrEmpty(filters.Name))
        query = query.Where(x => x.Name.StartsWith(filters.Name));
    // Repeat for any other supported filters.
    foreach (var condition in orderBy)
    {
        if (condition.Direction == Directions.Ascending)
            query = query.OrderBy(condition.Expression);
        else
            query = query.OrderByDescending(condition.Expression);
    }
    if (pageSize != 0)
        query = query.Skip(pageNumber * pageSize).Take(pageSize);
    return query.ToList();
}
You can hopefully see where this is going. You may just want a count of applicable entities, or to check whether at least one exists, but as above this method will still always return the full list of entities. If we have related entities that might need to be eager loaded, or projected down to a DTO/ViewModel, there is still much more work to be done, or a memory/performance hit to accept.
Alternatively you can add multiple methods to handle scenarios for filtering (GetAll vs. GetBySource, etc.) and pass Expression<Func<T, bool>> predicates as parameters to try to generalize the implementation. This adds considerable complexity, or leaves gaps in what is available to consumers. Often the justification for the Repository pattern is to abstract the data logic (ORM) from the business logic; however, this either cripples the performance and/or capability of your system, or it becomes a lie the minute you introduce expressions through the abstraction. Any expression passed to the repository and fed to EF must conform to EF's rules (no custom functions or system methods that EF cannot translate to SQL, etc.), or you must add considerable complexity to parse and translate expressions within your repository to ensure everything will work. And then, on top of that, supporting synchronous vs. asynchronous versions... It adds up fast.
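For illustration, the expression-passing variant described above can end up with a surface like this (hypothetical signatures, shown only to illustrate how the abstraction grows; requires System.Linq.Expressions and System.Threading.Tasks):

// Hypothetical generalized repository surface; every consumer need becomes another parameter.
public interface IPageGroupRepository
{
    IEnumerable<PageGroup> Get(
        Expression<Func<PageGroup, bool>> filter = null,
        Func<IQueryable<PageGroup>, IOrderedQueryable<PageGroup>> orderBy = null,
        int pageNumber = 1,
        int pageSize = 0);

    Task<IReadOnlyList<PageGroup>> GetAsync(
        Expression<Func<PageGroup, bool>> filter = null,
        Func<IQueryable<PageGroup>, IOrderedQueryable<PageGroup>> orderBy = null,
        int pageNumber = 1,
        int pageSize = 0);

    int Count(Expression<Func<PageGroup, bool>> filter = null);
    bool Any(Expression<Func<PageGroup, bool>> filter = null);
    // ...and so on for projections, eager loads, etc.
}

Any filter passed through these signatures still has to be translatable by EF, which is exactly the leak the abstraction claims to prevent.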
The alternative is IQueryable:
public IQueryable<PageGroup> Get()
{
return _context.PageGroups
.Where(x => x.IsActive);
}
Now when a consumer wants to add filtering, sorting, and pagination:
var pageGroups = Repository.Get()
    .Where(x => x.Name.StartsWith(searchText))
    .OrderBy(x => x.Name)
    .Skip(pageNumber * pageSize).Take(pageSize)
    .ToList();
if they want to simply get a count:
var pageGroupCount = Repository.Get()
    .Where(x => x.Name.StartsWith(searchText))
    .Count();
If we are dealing with a more complex entity like a Customer with Orders and OrderLines, we can eager load or project:
// Top 50 customers by order count.
var customer = ManageCustomerRepository.Get()
.Select(x => new CustomerSummaryViewModel
{
CustomerId = x.Id,
Name = x.Name,
OrderCount = x.Orders.Count()
}).OrderByDescending(x => x.Orders.Count())
.Take(50)
.ToList();
Even for cases where I commonly fetch items by ID and want a repository method like "GetById", I will return IQueryable<T> rather than T:
public IQueryable<PageGroup> GetById(int pageGroupId)
{
    return _context.PageGroups
        .Where(x => x.PageGroupId == pageGroupId);
    // rather than returning a PageGroup and using
    // return _context.PageGroups.SingleOrDefault(x => x.PageGroupId == pageGroupId);
}
Why? Because my caller can still take advantage of projecting the item down to a view model, decide if anything needs to be eager loaded, or do an action like an exists check using Any().
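For example (the view model name is assumed):

// Exists check: translates to an efficient EXISTS query; no entity is materialized.
bool exists = Repository.GetById(pageGroupId).Any();

// Projection: select just the fields the view needs for this one item.
var details = Repository.GetById(pageGroupId)
    .Select(x => new PageGroupDetailsViewModel // hypothetical view model
    {
        PageGroupId = x.PageGroupId,
        Name = x.Name
    })
    .SingleOrDefault();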
The repository does not abstract the DbContext to hide EF from the business logic, but rather to enforce a base set of rules like the check for IsActive, so we don't have to worry about adding .Where(x => x.IsActive) everywhere and the consequences of forgetting it. It's also easy to mock out. For instance, to create a mock of our repository's Get method:
var mockRepository = new Mock<PageGroupRepository>();
mockRepository.Setup(x => x.Get())
    .Returns(buildSamplePageGroups().AsQueryable());
where the buildSamplePageGroups method holds the code that builds a set of test data suitable for the test. That method returns a List<PageGroup> containing the test data, exposed to the mock via AsQueryable() so it matches the repository's IQueryable return type. (For Moq to intercept Get() on a concrete repository class the method must be virtual; otherwise mock an interface instead.) This only gets a bit more complex from a testing perspective if you need to support async operations against the repository, which requires a suitable container for the test data rather than List<T>.
Edit 2: Generic Repositories.
The issue with generic repositories is that you end up compartmentalizing your entities even though, through details like navigation properties, they are related. In creating an order you deal with customers, addresses, orders, products, etc., where the act of creating an order generally only needs a subset of information about these entities. If I have a ManageOrdersController to handle editing and creating orders and use generic repositories, I end up with dependencies on several repositories: Order, Customer, Product, and so on.
The typical argument for generic repositories is the Single Responsibility Principle (SRP) and Don't Repeat Yourself (DRY). An OrderRepository is responsible only for orders, a CustomerRepository only for customers. However, you could equally argue that organizing repositories this way breaks SRP, because the principle behind SRP is that the code within should have one, and only one, reason to change. Especially without an IQueryable implementation, a repository exposing methods that are used by several different controllers and related services has the potential for many reasons to change, as each controller has different concerns about the actions and output of the repository.

DRY is a different argument and comes down to preference. The key to DRY is that it should be applied where code is identical, not merely similar. With an IQueryable implementation there is a valid argument that you could easily have identical methods in multiple repositories, e.g. GetProducts in a ManageOrderRepository and a ManageProductsRepository vs. centralizing it in a ProductsRepository referenced by both ManageOrderController and ManageProductController. However, the implementation of GetProducts is fairly dead simple, amounting to nearly a one-liner. A GetProducts method for a product-related controller may be interested in getting products that are active vs. inactive, whereas getting products to complete an order would likely only ever look at active products. It boils down to deciding whether satisfying DRY is worth having to manage references to a handful (or more) of repository dependencies vs. a single repository (considering things like mock setups for tests). Generic repositories specifically expect all methods across every entity type to conform to a single pattern. Generics are great where that implementation is identical, but fail at that goal the minute the code could benefit from being allowed to be "similar" but serve a unique variation.
Instead, I opt to pair my repository with the controller, having a ManageOrdersRepository. This repository and the methods within have only one reason to ever change: to serve the ManageOrdersController. While other repositories may have similar needs for some of the entities this repository touches, they are free to change to serve the needs of their own controller without impacting the Manage Orders process flow. This keeps constructor dependencies compact and easy to mock.
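A sketch of what such a controller-paired repository might expose (the context and property names here are assumptions, not prescribed):

// Illustrative: one repository serving only the Manage Orders screens.
public class ManageOrdersRepository
{
    private readonly AppDbContext _context; // placeholder context name

    public ManageOrdersRepository(AppDbContext context)
    {
        _context = context;
    }

    public IQueryable<Order> GetOrders()
    {
        return _context.Orders.Where(o => o.IsActive);
    }

    // Only active products are ever offered when composing an order.
    public IQueryable<Product> GetProducts()
    {
        return _context.Products.Where(p => p.IsActive);
    }

    public IQueryable<Customer> GetCustomers()
    {
        return _context.Customers.Where(c => c.IsActive);
    }
}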
I am using ASP.NET MVC 3. I get my view's data in the following sequence:
Controller -> Service Layer -> Repository
In my repository I have a GetAll method that brings back all the records for a specific object, like Category.
So if I need a list of all the categories, then in my controller I would have something like:
IEnumerable<Category> categories = categoryService.GetAll();
In the service layer I would have something like:
public IEnumerable<Category> GetAll()
{
return categoryRepository.GetAll();
}
Now this is what I need to know: where do I actually start filtering the data? Can it be done in any of these 3 layers, or does it have to be in the repository layer? Let's say I need all the parent categories. Do I have .GetAll().Where(x => x.ParentCategoryId == null) in my controller, service layer, or repository layer?
Do I have it like this in my controller:
IEnumerable<Category> categories = categoryService.GetParentCategories();
And in my service layer I can have:
public IEnumerable<Category> GetParentCategories()
{
return categoryRepository.GetAll.Where(x => x.ParentCategoryId == null);
}
Or does my service layer have to look like this:
public IEnumerable<Category> GetParentCategories()
{
return categoryRepository.GetParentCategories();
}
And then in my repository layer like this:
public IEnumerable<Category> GetParentCategories()
{
return GetAll()
.Where(x => x.ParentCategoryId == null);
}
Can someone please help clarify this confusion for me? There might be different scenarios: I might bring back all categories that have an active status, or I might bring back categories with an inactive status. Do I then need a method for each?
You should filter as close to the data source as you can; otherwise you'll be retrieving records into the upper layers that will just be discarded because of a filtering option. That does not scale well, so you need to expose filtering capabilities at all layers that require them, but make sure the actual filtering is performed in the lowest layer possible; generally, that is the database level.
In the example you posted, if you use GetAll, which returns an IEnumerable of all the records, and only then apply the filtering, you'll have problems in the future because you're basically loading an entire table into memory and only then applying a filter.
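A hedged sketch of pushing the filter down while keeping the layering, assuming an EF-backed repository (the Categories DbSet name is assumed):

// Repository: expose a deferred query instead of a materialized list.
public IQueryable<Category> GetAll()
{
    return context.Categories;
}

// Service layer: compose the filter; it still runs as SQL when enumerated.
public IEnumerable<Category> GetParentCategories()
{
    return categoryRepository.GetAll()
        .Where(x => x.ParentCategoryId == null)
        .ToList();
}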
Since you're using EF you could take advantage of the deferred execution properties of the IQueryable. Check:
.NET Entity Framework - IEnumerable VS. IQueryable
Should a Repository return IEnumerable , IQueryable or List?
Update: Following up on your comment you should also check:
LINQ to entities vs LINQ to objects - Are they the same?
You should always try to fetch as little as possible from the database. And you should therefore do all filtering in your repository classes.
Many articles suggest that you create and use generic repositories, but IMHO they do not work very well when your application grows. I recommend that you create proper repository classes with proper search methods like:
emailRepository.GetForUser("Ada");
userRepository.GetNewUsers();
First of all, you hide implementation details like how to identify new users. It also makes the code easier to understand and extend than a generic query.
You can also add some filtering options:
emailRepository.GetForUser("Ada", Filtering.New().Paged(1, 20).SortedBy("FirstName"));
Unlike @João Angelo, I do NOT recommend that you use IQueryable outside of your repository. Doing so moves the database execution outside your repository class, which means that any errors cannot be handled by your repository.
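The point is that a repository which executes the query internally can wrap the database call and handle or translate failures; a minimal sketch (the Email entity, UserName property and RepositoryException type are illustrative, not from the original answer):

// Because ToList() runs inside the repository, database failures surface here
// and can be handled or translated before reaching the caller.
public IEnumerable<Email> GetForUser(string userName)
{
    try
    {
        return context.Emails
            .Where(e => e.UserName == userName)
            .ToList();
    }
    catch (DataException ex) // the concrete exception type depends on the provider
    {
        throw new RepositoryException("Failed to load emails for user " + userName, ex);
    }
}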
I have a concrete repository implementation that returns a IQueryable of the entity:
public class Repository
{
private AppDbContext context;
public Repository()
{
context = new AppDbContext();
}
public IQueryable<Post> GetPosts()
{
return context.Posts;
}
}
My service layer can then apply LINQ as needed for other methods (where, paging, etc.).
Right now my service layer is setup to return IEnumerable:
public IEnumerable<Post> GetPageOfPosts(int pageNumber, int pageSize)
{
Repository postRepo = new Repository();
var posts = (from p in postRepo.GetPosts() //this is IQueryable
orderby p.PostDate descending
select p)
.Skip((pageNumber - 1) * pageSize)
.Take(pageSize);
return posts;
}
This means that in my code-behind I have to call ToList() if I want to bind to a repeater or other control.
Is this the best way to handle the return types or do I need to be converting to list before I return from my service layer methods?
Both approaches are possible; it is only a matter of choice.
If you use IQueryable, you have a simple repository which will work in most cases, but it is less testable because queries defined on the IQueryable are LINQ-to-Entities; if you mock the repository they become LINQ-to-Objects in unit tests, so you don't test your real implementation. You need integration tests to test your query logic.
If you use IEnumerable, you will have very complex public interfaces on your repositories: you will need a special repository type for every entity that needs a special query exposed on the repository. This kind of repository was more common with stored procedures, where each method on the repository mapped to a single stored procedure. This type of repository provides better separation of concerns and a less leaky abstraction, but at the same time it removes a lot of ORM and LINQ flexibility.
Finally, you can have a combined approach where you have methods returning IEnumerable for the most common scenarios (queries used more often) and one method exposing IQueryable for rare or complex dynamically built queries.
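A sketch of that combined shape, using the Post entity from the question (the interface and member names are illustrative):

// Common scenarios return materialized results; one escape hatch exposes IQueryable.
public interface IPostRepository
{
    IEnumerable<Post> GetRecent(int count);                  // common, executes inside the repository
    IEnumerable<Post> GetPage(int pageNumber, int pageSize); // common, executes inside the repository
    IQueryable<Post> Query();                                // rare/dynamic queries compose outside
}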
Edit:
As noted in the comments, using IQueryable has some side effects. When you expose IQueryable you must keep your context alive until you execute the query; IQueryable uses deferred execution in the same way as IEnumerable, so unless you call ToList, First or another function that executes your query, you still need your context alive.
The simplest way to achieve that is to use the disposable pattern in the repository: create the context in its constructor and dispose of it when the repository is disposed. Then you can use using blocks and execute queries inside them. This approach is for very simple scenarios where you are happy with a single context per repository. More complex (and more common) scenarios require the context to be shared among multiple repositories. In such cases you can use something like a context provider / factory (disposable) and inject the factory into the repository constructor (or allow the provider to create repositories). This leads to a DAL-layer factory and a custom unit of work.
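A minimal sketch of the provider/factory idea (all names here are illustrative):

// Illustrative: a disposable provider owns the context; repositories share it.
public class ContextProvider : IDisposable
{
    private readonly AppDbContext _context = new AppDbContext();

    public AppDbContext Context
    {
        get { return _context; }
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}

public class Repository
{
    private readonly AppDbContext _context;

    public Repository(ContextProvider provider)
    {
        _context = provider.Context;
    }

    public IQueryable<Post> GetPosts()
    {
        return _context.Posts;
    }
}

// Usage fragment: the provider's lifetime bounds how long returned IQueryables stay usable.
using (var provider = new ContextProvider())
{
    var repo = new Repository(provider);
    var recent = repo.GetPosts().OrderByDescending(p => p.PostDate).Take(10).ToList();
}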
In other words, your question comes down to determining when (and where) the AppDbContext is disposed.
If you don't dispose it, meaning it is only disposed when the application exits, there is no problem returning an IEnumerable/IQueryable that does not yet hold actual data. However, if the AppDbContext will be disposed, you need to return a type such as IList, holding the actual data, before the context is disposed.
UPDATE:
I think the following code captures the point, though you may already know it.
// Outside of this code refers to your code.
// The returned IEnumerable can only be used outside this scope if the AppDbContext is guaranteed not to be disposed.
public IEnumerable<Post> GetIEnumerableWithoutActualData()
{
    return context.Posts;
}
// Even if the AppDbContext is disposed, this IEnumerable can still be used, because the data is already materialized.
public IEnumerable<Post> GetIEnumerableWithActualData()
{
    return context.Posts.ToList();
}
Your return types should always be as high up in the inheritance hierarchy as possible (or maybe I should say as low, if the base is towards the bottom). If all your callers require IQueryable<T>, then all the return values should expose that type.
That said, IEnumerable<T> has an extension method (AsQueryable()) you can call to achieve (what I believe to be) the desired result.
I've recently been looking into DDD, repositories, and the specification pattern, and after reading a handful of blogs and examples I'm trying to come up with a repository that I'm happy with.
I had been exposing IQueryable on my repositories until recently, but after understanding that IQueryable is a leaky abstraction, because its deferred execution effectively crosses the boundary out of my data layer, I have changed my repositories to return IEnumerable instead.
So I might have something like this for example:
public interface IUserRepository
{
IEnumerable<User> All();
void Save(User item);
void Delete(User item);
}
I thought, okay, that seems good, but what if I wanted to filter the data by first name or email? After reading a blog post I implemented a way of passing ICriteria into the All() method.
public IEnumerable<TEntity> All(ICriteria<TEntity> criteria)
{
return criteria.BuildQueryFrom(Set).ToList();
// Set is a DbSet from EntityFramework
}
And an example criteria class:
public class AccountById : ICriteria<Account>
{
private readonly int _id;
public AccountById(int id)
{
_id = id;
}
IQueryable<Account> ICriteria<Account>.BuildQueryFrom(DbSet<Account> dbSet)
{
return from entity in dbSet
where entity.Id == _id
select entity;
}
}
This works fine and I can build these criteria classes to meet my requirements and pass them into the repos and all works well.
One thing I don't like, though, is being tied to IQueryable, because it means I have to use an ORM that supports LINQ. If I wanted to use SqlCommand in my repository, say for performance's sake, or so I can write cleaner SQL than the ORM-generated SQL, how would I go about doing that?
I would also like to avoid having to write a new method for each filter like FindById, FindByUsername, FindByEmail etc.
How would I go about creating a repository that allows me to specify the criteria I want to select by without using IQueryable, so it would still work whether I used EF, NHibernate, or just plain SqlCommand? I'm struggling to find an example that uses SqlCommand and the specification pattern.
How did people used to do it before ORMs?
Personally, I don't mind IQueryable being a leaky abstraction, because it allows me to write LINQ queries in my service layer and therefore have more testable code. As long as objects that implement IQueryable are kept inside the service layer (i.e. you don't return them to the presentation layer), I don't see a problem. It maximizes the testability of your application. See for instance my blog post about testable LINQified repositories.
Should I be filtering my IQueryable results from the Domain Service?
For example... my 3 portals (websites) access the same domain service layer; depending on the type of user, I call a specific repository method and return the result.
Current Repository Layer:
IQueryable<Products> GetAllProductsForCrazyUserNow(CrazyUser id);
Products GetAProductForCrazyUserNow(CrazyUser id,product id);
IQueryable<Products> GetProductsForNiceUserNow(NiceUser id);
Products GetProductsForNiceUserNow(NiceUser id,product id);
Would it be best just to do this in the Repository Layer:
IQueryable<Products> GetAllProducts();
Products GetAProduct(product id);
Then within the domain service, I simply do the filter, i.e.
var niceMan = GetAllProducts().Where(u => u.Name == "Nice");
NOTE: I have a read-only session and a session which includes CRUD within the repository layer, so please keep that in mind when answering.
Second question: Should I do any filtering at all in the domain service layer? This layer is the only layer that can amend the entity, e.g. Product.Price = 25.00; this is not delegated to the repository layer.
I normally use the repository layer to do simple CRUD work and have the domain layer perform any business logic or in your case any filtering that is needed before passing that data back to the UI layer.
Separating out the business/filtering logic from the repository layer will help keep things clean. Also, in the future if you move to a different type of data access pattern, then you won't have to change how that code works since it will be separated in your domain layer.
Haroon,
I use extension methods on IQueryable<Classes> OUTSIDE of the repo. In fact, I have a set of classes that I call 'Filters', and they usually look something along these lines:
public static class ProductFilters
{
public static IQueryable<Products> NiceMan(
this IQueryable<Products> customQuery, string filterName)
{
if (!string.IsNullOrEmpty(filterName))
customQuery = customQuery.Where(u => u.Name == filterName);
return customQuery;
}
// create lots of other Products based filters here
// and repeat with separate IQueryable<Classes> per type
}
usage:
var niceMan = GetAllProducts().NiceMan("Nice");
I find this a good separation of logic, and it keeps the repo clean. And in answer to the second question: yes, use this filter/extension logic inside the service layer rather than the repo as well.
I've had this same question myself. My hesitation about putting it in the service layer was that in some cases the filtering process could remove the bulk of the records returned from the repo, and I didn't want to pull more data than necessary from the database.
I ended up moving to NHibernate, and had my repository methods accept DetachedCriteria arguments. I then passed the user information to the service layer and had it perform the filtering, not by manipulating an IQueryable, but by constructing a DetachedCriteria object and passing it along to the repo, thus shaping the SQL and limiting the database work.
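A rough sketch of that approach with NHibernate's DetachedCriteria (the Products entity comes from the question; the session field and the "Name" property are assumptions for illustration):

// using NHibernate; using NHibernate.Criterion;

// Repository: executes whatever criteria the service layer built, against its own session.
public IList<Products> GetProducts(DetachedCriteria criteria)
{
    return criteria.GetExecutableCriteria(session).List<Products>();
}

// Service layer: builds the criteria from user information, so the filter becomes SQL.
var criteria = DetachedCriteria.For<Products>()
    .Add(Restrictions.Eq("Name", "Nice"));
var niceProducts = productRepository.GetProducts(criteria);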
So far it seems to work pretty well, and "feels" right since I have my logic pretty solidly in the service layer with the repo only doing basic CRUD.