I have a 'Customer' POCO entity within my Entity Framework 4 project. I want to expose my Customer entities to my upper layers as a generic list rather than an ObjectSet.
I have an IUnitOfWork interface which looks as follows:
public interface IUnitOfWork
{
string Save();
IList<Customer> Customers { get; }
}
Down at my Entity Framework DAL (which implements the above interface) I have the following:
public class EntityContainer : ObjectContext, IUnitOfWork
{
private IObjectSet<Customer> _customers;
public IList<Customer> Customers
{
get
{
if (_customers == null)
{
_customers = CreateObjectSet<Customer>("Customers");
}
return _customers.ToList<Customer>();
}
}
}
However, the CreateObjectSet<Customer>("Customers") approach doesn't work: every time I try to add a new Customer, nothing happens. Interestingly, if I revert to using an IObjectSet<Customer> then the code works. For example:
public interface IUnitOfWork
{
string Save();
IObjectSet<Customer> Customers { get; }
}
public class EntityContainer : ObjectContext, IUnitOfWork
{
private IObjectSet<Customer> _customers;
public IObjectSet<Customer> Customers
{
get
{
if (_customers == null)
{
_customers = CreateObjectSet<Customer>("Customers");
}
return _customers;
}
}
}
IQueryable also works, but I cannot get IList to work and I have no idea why. Does anyone have any ideas?
A correction to the original question: using IQueryable doesn't work, and neither does IEnumerable. This is because the Customer repository needs to provide Add and Delete methods for the entity collection (adding or removing Customer entities in the above example). Neither IQueryable nor IEnumerable lets you add or remove objects; an ICollection or IList must be used instead. This leaves me back at my original problem: I do not want to expose my collection to the repository as an ObjectSet. I want to use a type which is not tied to the Entity Framework, i.e. a generic list.
Does anyone have any more suggestions? I suspect there's a straightforward way of doing this, but I'm not familiar enough with the framework to figure it out.
You seem to be missing a Repository in all of this. The Repository is usually what handles the conversion from ObjectSet<T> to IList<T> (or, in most cases, IEnumerable<T> or IQueryable<T>).
public class EntityContainer : ObjectContext
{
private IObjectSet<Customer> _customers;
public IObjectSet<Customer> Customers
{
get
{
return _customers ??
    (_customers = CreateObjectSet<Customer>("Customers"));
}
}
}
public class CustomerRepository
{
EntityContainer _context = new EntityContainer();
public IQueryable<Customer> FindAll()
{
return _context.Customers;
}
public Customer FindById(int id)
{
return _context.Customers.Single(c => c.Id == id);
}
// And so on.
}
I usually then have my UnitOfWork create the Repositories that should be enlisted in the Unit of Work so that anything done through the repositories is bundled in a single operation.
Keep in mind that my UnitOfWork would only have two methods: one for getting a repository and another for committing the unit of work. All data retrieval is handled by the Repositories.
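To cover the Add/Delete requirement from the edit above, the same repository can forward those operations to the tracked IObjectSet while still exposing only framework-free types to callers. A sketch of the extra members on CustomerRepository (IObjectSet<T> provides AddObject and DeleteObject):
// Additional members on CustomerRepository: adds and deletes go through the
// tracked IObjectSet, so the ObjectContext picks them up when the unit of work saves.
public void Add(Customer customer)
{
    _context.Customers.AddObject(customer);
}
public void Delete(Customer customer)
{
    _context.Customers.DeleteObject(customer);
}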
_customers.ToList() is the culprit. ToList() executes the query and copies all the items from that query into a new collection object. This new collection object does not provide the tracking capabilities that ObjectSet has.
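To illustrate with a hypothetical caller: with the IList-returning property, the Add below goes into a detached List<Customer> produced by ToList(), so the ObjectContext never hears about the new entity and Save() persists nothing.
// 'unitOfWork' is an IUnitOfWork instance; Customers returns a fresh List<Customer> each time.
unitOfWork.Customers.Add(new Customer());
unitOfWork.Save(); // no insert happens - the context was never told about the new entity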
I'm trying to create a simple data access class which acts as a library to return various entities. All my entity classes are generated via the Linq-to-SQL mapper in VS 2013, and all of them can be returned from the dataContext via Find(primary id).
I'd like to just define generic Find, Delete, Update, etc. without having to repeat them for every table/object, but I don't want to create a Repository pattern.
How do I create a generic method that works regardless of the entity type? This is what I have; of course the compiler complains about T, but I thought we could create generic methods in non-generic classes, so what am I doing wrong?
What I'd like to do is something like
var c = DAL<Customer>.Find(id)
var e = DAL<Employee>.Find(id)
... etc.
The code I attempted to write is
public class DAL
{
private string _key;
private DataContext _context = null;
//private DbSet<T> _table = null;
public DAL(string key)
{
_key = key;
_context = new DataContext();
}
public void Dispose()
{
_context.Dispose();
}
public DbSet<T> Find<T>(int id)
{
var d = _context.Set<T>();
//return d.Find(id);
}
}
I'm kind of lost... I do not want to define
public class DAL<T> where T:class
which ties each DAL to a type
This is the first time I'm venturing into generic methods, so any help is appreciated.
You can define a generic method on a non-generic class, but you will need to add a constraint:
public class DAL
{
// .... previously written code
public DbSet<T> Find<T>(int id) where T : class
{
return _context.Set<T>();
}
}
You will also have to call it like this:
var dal = new DAL();
var data = dal.Find<Customer>(1);
Also note that your code has a number of other problems: you define a Dispose method but don't implement the IDisposable interface, you're not actually returning anything from the method, and so on.
Note: for all intents and purposes, this is still a repository pattern.
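For completeness, a sketch of the question's DAL class with those issues addressed (this assumes DataContext is, or derives from, an EF DbContext, since the question's code mixes Linq-to-SQL and EF types):
public class DAL : IDisposable
{
    private readonly DataContext _context = new DataContext();

    // Generic lookup: Set<T>() requires the class constraint, and Find() looks up by primary key.
    public T Find<T>(int id) where T : class
    {
        return _context.Set<T>().Find(id);
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}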
Something like this should work:
public T Find<T>(int id) where T : class
{
return context.Set<T>().Find(id);
}
I'm working on a quite large application. The domain has about 20-30 types, implemented as ORM classes (for example EF Code First or XPO, doesn't matter for the question). I've read several articles and suggestions about a generic implementation of the repository pattern and combining it with the unit of work pattern, resulting a code something like this:
public interface IRepository<T> {
IQueryable<T> AsQueryable();
IEnumerable<T> GetAll(Expression<Func<T, bool>> filter);
T GetByID(int id);
T Create();
void Save(T entity);
void Delete(T entity);
}
public interface IMyUnitOfWork : IDisposable {
void CommitChanges();
void DropChanges();
IRepository<Product> Products { get; }
IRepository<Customer> Customers { get; }
}
Is this pattern suitable for really large applications? Every example has about 2, maximum 3 repositories in the unit of work. As far as I understand the pattern, at the end of the day the number of repository references (lazily initialized in the implementation) equals, or nearly equals, the number of domain entity classes, so that one can use the unit of work for complex business logic. So, for example, let's extend the above code like this:
public interface IMyUnitOfWork : IDisposable {
...
IRepository<Customer> Customers { get; }
IRepository<Product> Products { get; }
IRepository<Orders> Orders { get; }
IRepository<ProductCategory> ProductCategories { get; }
IRepository<Tag> Tags { get; }
IRepository<CustomerStatistics> CustomerStatistics { get; }
IRepository<User> Users { get; }
IRepository<UserGroup> UserGroups { get; }
IRepository<Event> Events { get; }
...
}
How many repositories can be referenced before one starts to think about code smell? Or is this totally normal for the pattern? I could probably split this interface into 2 or 3 different interfaces that all implement IUnitOfWork, but then the usage would be less comfortable.
UPDATE
I've checked a nice solution recommended by @qujck here. My problem with the dynamic repository registration and "dictionary based" approach is that I would like to keep direct references to my repositories, because some of the repositories will have special behaviour.
using (var uow = new MyUnitOfWork()) {
var allowedUsers = uow.Users.GetUsersInRole("myRole");
// ... or
var clothes = uow.Products.GetInCategories("scarf", "hat", "trousers");
}
So here I benefit from having strongly typed IRepository<User> and IRepository<Product> references, hence I can use the special methods (implemented as extension methods or by inheriting from the base interface). If I use a dynamic repository registration and retrieval mechanism, I think I'm going to lose this, or at least have to do some ugly casts all the time.
As for DI, I would try to inject a repository factory into my real unit of work, so it can lazily instantiate the repositories.
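For example, something along these lines (IRepositoryFactory and the lazily initialized fields are just an illustration of the idea):
public interface IRepositoryFactory
{
    IRepository<T> Create<T>() where T : class;
}

public class MyUnitOfWork : IMyUnitOfWork
{
    private readonly IRepositoryFactory _factory;
    private IRepository<Customer> _customers;
    private IRepository<Product> _products;

    public MyUnitOfWork(IRepositoryFactory factory)
    {
        _factory = factory;
    }

    // Each named repository is created on first use via the injected factory.
    public IRepository<Customer> Customers
    {
        get { return _customers ?? (_customers = _factory.Create<Customer>()); }
    }

    public IRepository<Product> Products
    {
        get { return _products ?? (_products = _factory.Create<Product>()); }
    }

    public void CommitChanges() { /* commit the underlying context */ }
    public void DropChanges() { /* discard pending changes */ }
    public void Dispose() { /* dispose the underlying context */ }
}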
Building on my comments above and on top of the answer here.
With a slightly modified unit of work abstraction
public interface IMyUnitOfWork
{
void CommitChanges();
void DropChanges();
IRepository<T> Repository<T>();
}
You can expose named repositories and specific repository methods with extension methods
public static class MyRepositories
{
public static IRepository<User> Users(this IMyUnitOfWork uow)
{
return uow.Repository<User>();
}
public static IRepository<Product> Products(this IMyUnitOfWork uow)
{
return uow.Repository<Product>();
}
public static IEnumerable<User> GetUsersInRole(
this IRepository<User> users, string role)
{
return users.AsQueryable().Where(x => true).ToList();
}
public static IEnumerable<Product> GetInCategories(
this IRepository<Product> products, params string[] categories)
{
return products.AsQueryable().Where(x => true).ToList();
}
}
These provide access to the data as required:
using(var uow = new MyUnitOfWork())
{
var allowedUsers = uow.Users().GetUsersInRole("myRole");
var result = uow.Products().GetInCategories("scarf", "hat", "trousers");
}
The way I tend to approach this is to move the type constraint from the repository class to the methods inside it. That means that instead of this:
public interface IMyUnitOfWork : IDisposable
{
IRepository<Customer> Customers { get; }
IRepository<Product> Products { get; }
IRepository<Orders> Orders { get; }
...
}
I have something like this:
public interface IMyUnitOfWork : IDisposable
{
T Get<T>(/* some kind of filter expression in T */);
void Add<T>(T item);
void Update<T>(T item);
void Delete<T>(/* some kind of filter expression in T */);
...
}
The main benefit of this is that you only need one data access object on your unit of work. The downside is that you don't have type-specific methods like Products.GetInCategories() any more. This can be problematic, so my solution to this is usually one of two things.
Separation of concerns
First, you can rethink where the separation between "data access" and "business logic" lies, so that you have a logic-layer class ProductService with a method GetInCategories() that can do this:
using (var uow = new MyUnitOfWork())
{
var categories = new[] { "scarf", "hat", "trousers" };
var productsInCategory = uow.GetAll<Product>(p => categories.Contains(p.Category));
}
Your data access and business logic code is still separate.
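A sketch of what such a service class might look like (the class shape and the Category property are assumptions for illustration):
public class ProductService
{
    // Business-logic layer: builds the filter and delegates data access to the unit of work.
    public IList<Product> GetInCategories(params string[] categories)
    {
        using (var uow = new MyUnitOfWork())
        {
            // Materialize the results before the unit of work (and its context) is disposed.
            return uow.GetAll<Product>(p => categories.Contains(p.Category)).ToList();
        }
    }
}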
Encapsulation of queries
Alternatively, you can implement the specification pattern: have a namespace MyProject.Specifications containing a base class Specification<T> that holds a filter expression internally, so that you can pass a specification to the unit of work object and the UoW can apply its filter expression. This lets you have derived specifications, which you can pass around, and now you can write this:
using (var uow = new MyUnitOfWork())
{
var searchCategories = new Specifications.Products.GetInCategories("scarf", "hat", "trousers");
var productsInCategories = uow.GetAll<Product>(searchCategories);
}
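A bare-bones version of those specification types might look like this (a sketch, living in the Specifications.Products namespace from the example; the exact shape is a matter of taste):
public abstract class Specification<T>
{
    // The unit of work applies this expression when building the query.
    public abstract Expression<Func<T, bool>> Filter { get; }
}

public class GetInCategories : Specification<Product>
{
    private readonly string[] _categories;

    public GetInCategories(params string[] categories)
    {
        _categories = categories;
    }

    public override Expression<Func<Product, bool>> Filter
    {
        get { return p => _categories.Contains(p.Category); }
    }
}
GetAll<T> then needs an overload that accepts a Specification<T> and applies its Filter to the query.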
If you want a central place to keep commonly-used logic like "get user by role" or "get products in category", then instead of keeping it in your repository (which, strictly speaking, should be pure data access) you could put those methods on the objects themselves. For example, Product could have a static method or an extension method InCategories(params string[]) that returns a Specification<Product>, or even just a filter such as Expression<Func<Product, bool>>, allowing you to write the query like this:
using (var uow = new MyUnitOfWork())
{
var productsInCategory = uow.GetAll(Product.InCategories("scarf", "hat", "trousers"));
}
(Note that this is still a generic method, but type inference will take care of it for you.)
This keeps all the query logic on the object being queried (or on an extensions class for that object), which still keeps your data and logic code nicely separated by class and by file, whilst allowing you to share it as you have been sharing your IRepository<T> extensions previously.
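For instance (a sketch; this assumes Product is declared partial, or you can put the static helper directly on the class, and that it has a string Category property):
public partial class Product
{
    // Returns a reusable filter that the unit of work can translate into a query.
    public static Expression<Func<Product, bool>> InCategories(params string[] categories)
    {
        return p => categories.Contains(p.Category);
    }
}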
Example
To give a more specific example, I'm using this pattern with EF. I didn't bother with specifications; I just have service classes in the logic layer that use a single unit of work for each logical operation ("add a new user", "get a category of products", "save changes to a product" etc). The core of it looks like this (implementations omitted for brevity and because they're pretty trivial):
public class EFUnitOfWork: IUnitOfWork
{
private DbContext _db;
public EFUnitOfWork(DbContext context) {...}
public void Add<T>(T item) where T : class, new() {...}
public void AddAll<T>(IEnumerable<T> items) where T : class, new() {...}
public T Get<T>(Expression<Func<T, bool>> filter) where T : class, new() {...}
public IQueryable<T> GetAll<T>(Expression<Func<T, bool>> filter = null) where T : class, new() {...}
public void Update<T>(T item) where T : class, new() {...}
public void Remove<T>(Expression<Func<T, bool>> filter) where T : class, new() {...}
public void Commit() {...}
public void Dispose() {...}
}
Most of those methods use _db.Set<T>() to get the relevant DbSet, and then just query it with LINQ using the provided Expression<Func<T, bool>>.
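As an illustration of those omitted bodies, Get, GetAll and Commit might look roughly like this:
public T Get<T>(Expression<Func<T, bool>> filter) where T : class, new()
{
    return _db.Set<T>().SingleOrDefault(filter);
}

public IQueryable<T> GetAll<T>(Expression<Func<T, bool>> filter = null) where T : class, new()
{
    IQueryable<T> query = _db.Set<T>();
    return filter == null ? query : query.Where(filter);
}

public void Commit()
{
    _db.SaveChanges();
}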
I'm using LINQ to Entities, and lately I've found that a lot of folks recommend wrapping the data context in a using statement, like this:
using (DataContext db = new DataContext()) {
var xx = db.customers;
}
This makes sense. However, I’m not sure how to incorporate this practice in my model.
For example: I have an interface (let's call it iCustomer) and it is implemented by a repository class (rCustomer) like this:
namespace Models
{
public class rCustomer : iCustomer
{
readonly DataContext db = new DataContext();
public customer getCustomer(Guid id)
{
return db.customers.SingleOrDefault(por => por.id == id);
}
public IQueryable<customer> getTopCustomers()
{
return db.customers.Take(10);
}
//*******************************************
//more methods using db, including add, update, delete, etc.
//*******************************************
}
}
Then, to take advantage of using, I would need to change the methods to look like this:
namespace Models
{
public class rCustomer : iCustomer
{
public customer getCustomer(Guid id)
{
using(DataContext db = new DataContext()) {
return db.customers.SingleOrDefault(por => por.id == id);
}
}
public IQueryable<customer> getTopCustomers()
{
using(DataContext db = new DataContext()) {
return db.customers.Take(10);
}
}
//*******************************************
//more methods using db
//*******************************************
}
}
My question is: is the recommendation to use using really that good? Please take into consideration that this change would be a major one: I have about 25 interface/repository combos, and each has about 20-25 methods, not to mention the need to re-test everything afterwards.
Is there another way?
Thanks!
Edgar.
You can implement a database factory, which will cause your DbContext to be reused.
You can achieve this as follows:
DatabaseFactory class:
public class DatabaseFactory : Disposable, IDatabaseFactory
{
private YourEntities _dataContext;
public YourEntities Get()
{
return _dataContext ?? (_dataContext = new YourEntities());
}
protected override void DisposeCore()
{
if (_dataContext != null)
_dataContext.Dispose();
}
}
Excerpt of the Repository base class:
public abstract class Repository<T> : IRepository<T> where T : class
{
private YourEntities _dataContext;
private readonly IDbSet<T> _dbset;
protected Repository(IDatabaseFactory databaseFactory)
{
DatabaseFactory = databaseFactory;
_dbset = DataContext.Set<T>();
}
protected IDatabaseFactory DatabaseFactory
{
get;
private set;
}
protected YourEntities DataContext
{
get { return _dataContext ?? (_dataContext = DatabaseFactory.Get()); }
}
Your table's repository class:
public class YourTableRepository : Repository<YourTable>, IYourTableRepository
{
private YourEntities _dataContext;
protected new IDatabaseFactory DatabaseFactory
{
get;
private set;
}
public YourTableRepository(IDatabaseFactory databaseFactory)
: base(databaseFactory)
{
DatabaseFactory = databaseFactory;
}
protected new YourEntities DataContext
{
get { return _dataContext ?? (_dataContext = DatabaseFactory.Get()); }
}
}
public interface IYourTableRepository : IRepository<YourTable>
{
}
This works perfectly together with AutoFac constructor injection as well.
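For instance, the Autofac registrations might look like this (a sketch; the right lifetime scope depends on your application type):
var builder = new ContainerBuilder();

// One factory (and therefore one context) per lifetime scope, e.g. per web request.
builder.RegisterType<DatabaseFactory>()
       .As<IDatabaseFactory>()
       .InstancePerLifetimeScope();

builder.RegisterType<YourTableRepository>()
       .As<IYourTableRepository>()
       .InstancePerLifetimeScope();

var container = builder.Build();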
Considering the code provided, I see that you explicitly use readonly DataContext db = new DataContext(); as a class-level field, so its lifetime is tied to the lifetime of the rCustomer instance.
If that is the intent, then instead of rewriting everything you can have rCustomer implement IDisposable, with a Dispose() that looks something like this:
public void Dispose()
{
if(db != null)
db.Dispose();
}
Hope this helps.
As others have mentioned, it's important for the data contexts to be disposed. I won't go into that further.
I see three possible designs for the class that ensure that the contexts are disposed:
1. The second solution you provide, in which you create a data context within the scope of each method of rCustomer that needs it, so that each data context is in a using block.
2. Keep the data context as an instance variable and have rCustomer implement IDisposable, so that when rCustomer is disposed you can dispose of its data context. This means that all rCustomer instances will need to be wrapped in using blocks.
3. Pass an instance of an existing data context into rCustomer through its constructor. If you do this then rCustomer won't be responsible for disposing of it; the user of the class will. This would allow you to use a single data context across several instances of rCustomer, or with several different classes that need access to the data context. This has advantages (less overhead involved in creating new data contexts) and disadvantages (a larger memory footprint, as data contexts tend to hold onto quite a lot of memory through caches and the like).
I honestly think option #1 is a pretty good one, as long as you don't notice it performing too slowly (I'd time/profile it if you think it's causing problems). Due to connection pooling it shouldn't be all that bad. If it is, I'd go with #3 as my next choice. #2 isn't that far behind, but it would likely be a bit awkward and unexpected for other members of your team (if any).
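For what it's worth, option 3 is a small change to rCustomer (a sketch; someId stands in for whatever key you are looking up):
public class rCustomer : iCustomer
{
    private readonly DataContext db;

    // The caller owns the context and is responsible for disposing it.
    public rCustomer(DataContext db)
    {
        this.db = db;
    }

    public customer getCustomer(Guid id)
    {
        return db.customers.SingleOrDefault(por => por.id == id);
    }

    // ... remaining methods unchanged, all using the injected db.
}

// Usage:
using (var db = new DataContext())
{
    var repo = new rCustomer(db);
    var c = repo.getCustomer(someId);
}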
The DataContext class is wrapped in a using statement because it implements the IDisposable interface.
Internal to the DataContext it is using SqlConnection objects and SqlCommand objects. In order to correctly release these connection back to the Sql Connection Pool, they need to be disposed of.
The garbage collector will eventually do this, but it will take two passes due to the way IDisposable objects are managed.
It's strongly encouraged that Dispose is called, and the using statement is a nice way to do this.
Read these links for more indepth explanation:
http://social.msdn.microsoft.com/Forums/en-US/adodotnetentityframework/thread/2625b105-2cff-45ad-ba29-abdd763f74fe/
http://www.c-sharpcorner.com/UploadFile/DipalChoksi/UnderstandingGarbageCollectioninNETFramework11292005051110AM/UnderstandingGarbageCollectioninNETFramework.aspx
An alternative would be to make your rCustomer class implement IDisposable, and then in your Dispose method, you can call Dispose on your DataContext if it is not null. However, this just pushes the Disposable pattern out of your rCustomer class, into whatever types are using rCustomer.
Which approach should I use for a provider pattern with Entity Framework?
public class TestProvider : IDisposable
{
public Entities entities = new Entities();
public IEnumerable<Tag> GetAll()
{
return entities.Tag.ToList();
}
public ...
#region IDisposable Members
public void Dispose()
{
entities.Dispose();
}
#endregion
}
Or is it ok to use:
public class TestProvider
{
public IEnumerable<Tag> GetAll()
{
using (var entities = new Entities())
{
return entities.Tag.ToList();
}
}
public ...
}
Does it have performance implications? What are the pros and cons of each?
It depends on how long TestProvider should exist and what operations you want to perform on the retrieved entities. Generally, an ObjectContext instance should be used for as short a time as possible, but it should also represent a single unit of work. An ObjectContext instance should not be shared. I answered a related question here.
This means that both of your approaches are correct for some scenarios. The first approach is fine if you expect to retrieve entities, modify them and then save them with the same provider instance. The second approach is fine if you just want to retrieve entities, don't need to modify them immediately, and don't want to select anything else.
I'm currently developing a medium-sized application which will access 2 or more SQL databases, on different sites, etc.
I am considering using something similar to this:
http://mikehadlow.blogspot.com/2008/03/using-irepository-pattern-with-linq-to.html
However, I want to use Fluent NHibernate in place of Linq-to-SQL (and of course NHibernate.Linq).
Is this viable?
How would I go about configuring this?
Where would my mapping definitions go etc...?
This application will eventually have many facets - from a WebUI, WCF Library and Windows applications / services.
Also, for example on a "product" table, would I create a "ProductManager" class, that has methods like:
GetProduct, GetAllProducts etc...
Any pointers are greatly received.
In my opinion (and in some other people's opinion as well), a repository should hide data access behind an interface that mimics a collection. That's why a repository should be IQueryable<T> and IEnumerable<T>.
public interface IRepository<T> : IQueryable<T>
{
void Add(T entity);
T Get(Guid id);
void Remove(T entity);
}
public class Repository<T> : IRepository<T>
{
private readonly ISession session;
public Repository(ISession session)
{
this.session = session;
}
public Type ElementType
{
get { return session.Query<T>().ElementType; }
}
public Expression Expression
{
get { return session.Query<T>().Expression; }
}
public IQueryProvider Provider
{
get { return session.Query<T>().Provider; }
}
public void Add(T entity)
{
session.Save(entity);
}
public T Get(Guid id)
{
return session.Get<T>(id);
}
IEnumerator IEnumerable.GetEnumerator()
{
return this.GetEnumerator();
}
public IEnumerator<T> GetEnumerator()
{
return session.Query<T>().GetEnumerator();
}
public void Remove(T entity)
{
session.Delete(entity);
}
}
I do not implement a SubmitChanges-like method in the repository itself, because I want to submit the changes of several repositories used by one user action at once. I hide the transaction management behind a unit of work interface:
public interface IUnitOfWork : IDisposable
{
void Commit();
void RollBack();
}
I use the session of an NHibernate-specific unit of work implementation as the session for the repositories:
public interface INHibernateUnitOfWork : IUnitOfWork
{
ISession Session { get; }
}
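A minimal implementation of that NHibernate-specific unit of work might look like this (a sketch that assumes an ISessionFactory is available and that one transaction spans the whole unit of work):
public class NHibernateUnitOfWork : INHibernateUnitOfWork
{
    private readonly ISession session;
    private readonly ITransaction transaction;

    public NHibernateUnitOfWork(ISessionFactory sessionFactory)
    {
        session = sessionFactory.OpenSession();
        transaction = session.BeginTransaction();
    }

    // The repositories are constructed with this session.
    public ISession Session
    {
        get { return session; }
    }

    public void Commit()
    {
        transaction.Commit();
    }

    public void RollBack()
    {
        transaction.Rollback();
    }

    public void Dispose()
    {
        transaction.Dispose();
        session.Dispose();
    }
}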
In a real application, I use a more complicated repository interface with methods for things like pagination, eager loading, the specification pattern, and access to the other ways of querying that NHibernate offers besides just LINQ. The LINQ implementation in the NHibernate trunk works well enough for most of the queries I need to do.
Here are my thoughts on generic repositories:
Advantage of creating a generic repository vs. specific repository for each object?
I have successfully used that pattern with NHibernate, and haven't found any real shortcomings.
The gist is that truly generic repositories are a bit of a red herring, but the same benefits can be realized by thinking about the problem slightly differently.
Hope that helps.