Disposing in Ninject - c#

I use Ninject for DI; it creates one DbContext per request (shared by all services), and I usually call a few service methods per request (so I can't dispose the DbContext after the first service method has been called).
The question is: should I make WallService or WallManager (and the other services and managers) IDisposable, and what Dispose logic should they contain?
My Business Logic Layer
namespace MySite.BLL.Services
{
    public class WallService
    {
        WallManager wallManager;

        public WallService(MyDbContext db)
        {
            wallManager = new WallManager(db);
        }
    }
}
My Data Access Layer
namespace MySite.DAL.Repositories
{
    public class WallManager
    {
        MyDbContext db;

        public WallManager(MyDbContext db)
        {
            this.db = db;
        }
    }
}
NinjectWebCommon.cs
kernel.Bind<MyDbContext>().ToSelf().InRequestScope().WithConstructorArgument<string>("MyMsSqlString");
kernel.Bind<WallService>().ToSelf().InRequestScope();
MyBaseController.cs
public class MyBaseController : Controller
{
    [Inject]
    public WallService WallService { get; set; }

    // Other Services ..
}

(This is more an extended comment than an answer)
I'm pretty certain you don't need to do this - and not only that you don't need to but you really shouldn't in this case.
The thing is, you are not creating the DbContext instance yourself - you're delegating that responsibility to the IoC library; in that respect the reference is only "passing through", so none of your classes own it and none of them should do anything that could trash it.
Also, because you bound MyDbContext with InRequestScope(), Ninject will take care of disposing it at the end of the request, so you don't need to dispose it yourself anyway.
There is a very good answer on this site about this notion already; although it doesn't directly address your question, I didn't mark this as a duplicate.
One thing I've noticed about your code though.
You're injecting the DbContext, and then using it to create a WallManager instance. That's kind of defeating the purpose of Dependency Injection. Why not directly inject the WallManager into WallService?
i.e.
public class WallService
{
    readonly WallManager _wallManager;

    public WallService(WallManager manager)
    {
        if (manager == null)
        {
            throw new ArgumentNullException("manager");
        }
        _wallManager = manager;
    }
}
Ninject (or any other IOC Library) will have no problem figuring out that it needs to create and inject a DbContext into the manager dependency, once you have registered the WallManager type with it; the idea here is that you register all possible dependency types, and then the library builds the object graph for you.
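For completeness, the registration could look something like this - a sketch extending the NinjectWebCommon.cs bindings shown in the question, where only the WallManager line is new:

```csharp
// Sketch: with WallManager registered, Ninject can build the whole chain
// WallService -> WallManager -> MyDbContext on its own, once per request.
kernel.Bind<MyDbContext>().ToSelf().InRequestScope().WithConstructorArgument<string>("MyMsSqlString");
kernel.Bind<WallManager>().ToSelf().InRequestScope();
kernel.Bind<WallService>().ToSelf().InRequestScope();
```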
This way you don't have to take a dependency on DbContext directly in your WallService... I'm guessing you only take it in order to create the WallManager anyway. If you are also using DbContext directly in the WallService, I would suggest you take another look at the design, since you should limit your data access to one layer.

Related

Dapper with .NET Core - injected SqlConnection lifetime/scope

I'm using .NET Core Dependency Injection to instantiate a SqlConnection object during the application startup, which I'm then planning to inject in my repository. This SqlConnection will be used by Dapper to read/write data from the database within my repository implementation. I am going to use async calls with Dapper.
The question is: should I inject the SqlConnection as transient or as a singleton? Considering the fact that I want to use async, my thought would be to use transient, unless Dapper implements some isolation containers internally and my singleton's scope would still be wrapped within whatever scope Dapper uses internally.
Are there any recommendations/best practices regarding the lifetime of the SqlConnection object when working with Dapper? Are there any caveats I might be missing?
Thanks in advance.
If you provide the SQL connection as a singleton, you won't be able to serve multiple requests at the same time unless you enable MARS, which also has its limitations. Best practice is to use a transient SQL connection and ensure it is properly disposed.
In my applications I pass a custom IDbConnectionFactory to repositories, which is used to create a connection inside a using statement. In this case the repository itself can be a singleton to reduce allocations on the heap.
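A minimal sketch of that factory approach (type names are illustrative, not from a specific library): the repository never holds a connection field; it asks the factory for a fresh connection and disposes it as soon as the work is done. The tracking fake at the bottom exists only to demonstrate the lifetime.

```csharp
using System;

public interface IDbConnectionFactory
{
    // In a real application this would return System.Data.IDbConnection
    // (e.g. a new SqlConnection built from the connection string).
    IDisposable CreateConnection();
}

public class UserRepository
{
    private readonly IDbConnectionFactory factory;

    public UserRepository(IDbConnectionFactory factory)
    {
        this.factory = factory;
    }

    public void DoWork()
    {
        // The connection lives only for the duration of this call.
        using (var connection = factory.CreateConnection())
        {
            // ... run Dapper queries against the connection here ...
        }
    }
}

// Tracking fake: lets us observe that each call creates and disposes a connection.
public class TrackingConnection : IDisposable
{
    public bool Disposed;
    public void Dispose() { Disposed = true; }
}

public class TrackingFactory : IDbConnectionFactory
{
    public TrackingConnection Last;
    public IDisposable CreateConnection() { Last = new TrackingConnection(); return Last; }
}
```

Because the repository depends only on the factory, it can safely be a singleton while every call still gets its own short-lived connection.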
Great question, and already two great answers. I was puzzled by this at first, and came up with the following solution to solve the problem, which encapsulates the repositories in a manager. The manager itself is responsible for extracting the connection string and injecting it into the repositories.
I've found this approach makes testing the repositories individually, say in a mock console app, much simpler, and I've had much luck following this pattern on several larger-scale projects. Though I am admittedly not an expert at testing, dependency injection, or, well, anything really!
The main question I'm left asking myself is whether the DbService should be a singleton or not. My rationale was that there wasn't much point constantly creating and destroying the various repositories encapsulated in DbService, and since they are all stateless I didn't see much problem in allowing them to "live". Though this could be entirely invalid logic.
EDIT: Should you want a ready made solution check out my Dapper repository implementation on GitHub
The repository manager is structured as follows:
/*
 * Db Service
 */
public interface IDbService
{
    ISomeRepo SomeRepo { get; }
}

public class DbService : IDbService
{
    readonly string connStr;
    ISomeRepo someRepo;

    public DbService(string connStr)
    {
        this.connStr = connStr;
    }

    public ISomeRepo SomeRepo
    {
        get
        {
            if (someRepo == null)
            {
                someRepo = new SomeRepo(this.connStr);
            }
            return someRepo;
        }
    }
}
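One caveat worth flagging: since DbService is registered as a singleton, the null check in the lazy property can race when two requests hit it concurrently. A `Lazy<T>` field gives the same cached-instance behaviour thread-safely. This is a sketch with ISomeRepo/SomeRepo reduced to stubs for illustration:

```csharp
using System;

public interface ISomeRepo { }

public class SomeRepo : ISomeRepo
{
    public readonly string ConnStr;
    public SomeRepo(string connStr) { ConnStr = connStr; }
}

public interface IDbService
{
    ISomeRepo SomeRepo { get; }
}

public class DbService : IDbService
{
    readonly string connStr;
    readonly Lazy<ISomeRepo> someRepo;

    public DbService(string connStr)
    {
        this.connStr = connStr;
        // Thread-safe by default: the factory delegate runs at most once,
        // even if several threads read SomeRepo at the same time.
        someRepo = new Lazy<ISomeRepo>(() => new SomeRepo(this.connStr));
    }

    public ISomeRepo SomeRepo
    {
        get { return someRepo.Value; }
    }
}
```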
A sample repository would be structured as follows:
/*
 * Mock Repo
 */
public interface ISomeRepo
{
    IEnumerable<SomeModel> List();
}

public class SomeRepo : ISomeRepo
{
    readonly string connStr;

    public SomeRepo(string connStr)
    {
        this.connStr = connStr;
    }

    public IEnumerable<SomeModel> List()
    {
        // work to return list of SomeModel
    }
}
Wiring it all up:
/*
 * Startup.cs
 */
public IConfigurationRoot Configuration { get; }

public void ConfigureServices(IServiceCollection services)
{
    // ...rest of services
    services.AddSingleton<IDbService, DbService>();
    // ...rest of services
}
And finally, using it:
public class SomeController : Controller
{
    IDbService dbService;

    public SomeController(IDbService dbService)
    {
        this.dbService = dbService;
    }

    public IActionResult Index()
    {
        return View(dbService.SomeRepo.List());
    }
}
I agree with @Andrii Litvinov, both the answer and the comment.
In this case I would go with the approach of a data-source-specific connection factory.
With the same approach in mind, I'll mention a different way - UnitOfWork.
Refer to DalSession and UnitOfWork from this answer. This handles the connection.
Refer to BaseDal from this answer. This is my implementation of a Repository (actually BaseRepository).
UnitOfWork is injected as transient.
Multiple data sources can be handled by creating a separate DalSession for each data source.
UnitOfWork is injected into BaseDal.
Are there any recommendations/best practices regarding the lifetime of the SqlConnection object when working with Dapper?
One thing most developers agree on is that the connection should be as short-lived as possible. I see two approaches here:
Connection per action.
This will of course give the connection the shortest possible lifespan. You enclose the connection in a using block for each action. This is a good approach as long as you do not want to group actions. Even when you do want to group actions, you can use a transaction in most cases.
The problem arises when you want to group actions across multiple classes/methods. You cannot use a using block there. The solution is UnitOfWork, as below.
Connection per unit of work.
Define your unit of work. This will differ per application. In a web application, "connection per request" is a widely used approach.
This makes more sense because there is generally (most of the time) a group of actions we want to perform as a whole. This is explained in the two links I provided above.
Another advantage of this approach is that the application (that uses the DAL) gets more control over how the connection is used. And in my understanding, the application knows better than the DAL how the connection should be used.
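The "connection per unit of work" idea can be sketched in a few lines (illustrative types, not the DalSession/BaseDal implementations linked above): every repository taking part in the unit of work shares one connection, which is created when the unit of work starts and disposed exactly once when it ends.

```csharp
using System;
using System.Collections.Generic;

// Stand-in for a real database connection, recording what happens to it.
public class FakeConnection : IDisposable
{
    public bool Disposed;
    public List<string> Commands = new List<string>();
    public void Dispose() { Disposed = true; }
}

public class UnitOfWork : IDisposable
{
    public FakeConnection Connection { get; } = new FakeConnection();
    public void Dispose() { Connection.Dispose(); }
}

// Two repositories from different "classes/methods" sharing the same connection.
public class OrderRepository
{
    readonly UnitOfWork uow;
    public OrderRepository(UnitOfWork uow) { this.uow = uow; }
    public void Add(string order) { uow.Connection.Commands.Add("insert " + order); }
}

public class AuditRepository
{
    readonly UnitOfWork uow;
    public AuditRepository(UnitOfWork uow) { this.uow = uow; }
    public void Log(string message) { uow.Connection.Commands.Add("log " + message); }
}
```

In a web application the container would typically create the UnitOfWork per request and dispose it when the request ends.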

How to access open DbContext without explicit reference?

I've been working with the entity framework for a little bit now, and I've come across several scenarios where two contexts would try to access the same entity etc, so I'm wondering if perhaps I'm not opening/closing my dbcontexts in the best way.
Currently I basically open a DbContext on each of my controllers the way the basic MVC app was set up originally, which means I have a private dbcontext field on the controller, and I override the controller's dispose method to call dispose on the context.
However I also sometimes make queries against the db in some of my other classes, which can get called from within the controller, which also has a context open.
Is there a way to access an open context without an explicit reference? It wouldn't really make sense for me to pass a DbContext reference around through a dozen different methods.
Using Dependency Injection
As others have said and will probably reiterate, the 'right way' is often considered to be dependency injection.
In my latest project, I've organized things so that I'm almost done with the project and DI has been so effortless that I'm doing it myself (rather than using an injector). One major factor in that has been adhering fairly strictly to this structure:
WebProject
| |
| DataServices
| | |
ViewModels EntityModels
Access to all data services during one unit of work occurs through a single DataServiceFactory instance, that requires an instance of MyDbContext. Another factor has been an entirely RESTful application design - it means I don't have to intersperse persistence functionality throughout my code.
Without Dependency Injection
That said, maybe DI isn't right for you on this project. Maybe:
you don't plan to write unit tests
you need more time to understand DI
your project structure already has EF deeply integrated
In ASP.NET MVC, the unit of work often coincides exactly with the request lifetime - i.e. HttpContext.Current. As a result, you can lazily instantiate a repository 'singleton' per request, instead of using DI. Here is a classic singleton pattern, with the current context as the backing store, to hold your DbContext:
public class RepositoryProxy {
    private static HttpContext Ctx { get { return HttpContext.Current; } }
    private static Guid repoGuid = typeof(MyDbContext).GUID;

    public static MyDbContext Context {
        get {
            MyDbContext repo = Ctx.Items[repoGuid] as MyDbContext;
            if (repo == null) {
                repo = new MyDbContext();
                Ctx.Items[repoGuid] = repo;
            }
            return repo;
        }
    }

    public static void SaveIfContext() {
        MyDbContext repo = Ctx.Items[repoGuid] as MyDbContext;
        if (repo != null) repo.SaveChanges();
    }
}
You can SaveChanges automatically too, if you are feeling especially lazy (you'll still need to call it manually to inspect side-effects, of course, like retrieving the id for a new item):
public abstract class ExtendedController : Controller {
    protected MyDbContext Context {
        get { return RepositoryProxy.Context; }
    }

    protected override void OnActionExecuted(ActionExecutedContext filterContext) {
        RepositoryProxy.SaveIfContext();
        base.OnActionExecuted(filterContext);
    }
}
The "best" way to handle DbContext instances is to make it a parameter on each method that needs it (or, if an entire class needs it, a parameter to the constructor).
This is a basic part of IoC (Inversion of Control) which allows the caller to specify dependencies for a method thus allowing the caller to "control" the behavior of the called method.
You can then add a Dependency Injection framework as well. They can be configured to use a singleton instance of your DbContext and inject that instance into any methods that need it.
Instead of passing an instance of DbContext, I would suggest you pass an instance of some kind of factory class that will create an instance of DbContext for you.
For most scenarios you can just create an interface that your DbContext will implement, something like this:
public interface IDbContext
{
    IDbSet<TEntity> Set<TEntity>() where TEntity : class;
    DbSet Set(Type entityType);
    DbEntityEntry<TEntity> Entry<TEntity>(TEntity entity) where TEntity : class;
    int SaveChanges();
}
and then a factory to return your instance:
public interface IContextFactory
{
    IDbContext Retrieve();
}
The factory can easily be mocked to return whatever implementation you want - which is a good approach, especially if you intend to test the application. Your DbContext will just implement the IDbContext interface with the necessary methods and properties. This approach assumes that, to tell EF about your database tables, you use the OnModelCreating(DbModelBuilder modelBuilder) method with modelBuilder.Configurations.Add(new EntityTypeConfigurationOfSomeType()); instead of having properties of type DbSet<T> all around your context.
To abstract away from Entity Framework completely, you can create yet another layer of abstraction, for example with the UnitOfWork or Repository pattern, but I guess the above covers your concerns for now.

Does ASP.NET MVC 3 cache Filters?

I have an UnitOfWork attribute, something like this:
public class UnitOfWorkAttribute : ActionFilterAttribute
{
    public IDataContext DataContext { get; set; }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        if (filterContext.Controller.ViewData.ModelState.IsValid)
        {
            DataContext.SubmitChanges();
        }
        base.OnActionExecuted(filterContext);
    }
}
As you can see, it has a DataContext property, which is injected by Castle.Windsor. DataContext has a lifestyle of PerWebRequest - meaning a single instance is reused within each request.
The thing is, from time to time I get a "DataContext is disposed" exception in this attribute, and I remember that ASP.NET MVC 3 tries to cache action filters somehow, so maybe that is what causes the problem?
If it is so, how to solve the issue - by not using any properties and trying to use ServiceLocator inside method?
Is it possible to tell ASP.NET MVC to not cache filter if it does cache it?
I would strongly advise against using such a construct, for a couple of reasons:
It is not the responsibility of the controller (or an on the controller decorated attribute) to commit the data context.
This would lead to lots of duplicated code (you'll have to decorate lots of methods with this attribute).
At that point in the execution (in the OnActionExecuted method) you cannot know whether it is actually safe to commit the data.
Especially the third point should have drawn your attention. The mere fact that the model is valid, doesn't mean that it is okay to submit the changes of the data context. Look at this example:
[UnitOfWorkAttribute]
public View MoveCustomer(int customerId, Address address)
{
    try
    {
        this.customerService.MoveCustomer(customerId, address);
    }
    catch { }

    return View();
}
Of course this example is a bit naive. You would hardly ever swallow each and every exception, that would just be plain wrong. But what it does show is that it is very well possible for the action method to finish successfully, when the data should not be saved.
But besides this, is committing the transaction really a concern of MVC? And if you decide it is, do you really want to decorate all action methods with this attribute? Wouldn't it be nicer if you could implement this without having to do anything at the Controller level? Because which attributes are you going to add after this one? Authorization attributes? Logging attributes? Tracing attributes? Where does it stop?
What you can try instead is to model all business operations that need to run in a transaction, in a way that allows you to dynamically add this behavior, without needing to change any existing code, or adding new attributes all over the place. A way to do this is to define an interface for these business operations. For instance:
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}
Using this interface, your controller would look like this:
private readonly ICommandHandler<MoveCustomerCommand> handler;

// constructor
public CustomerController(
    ICommandHandler<MoveCustomerCommand> handler)
{
    this.handler = handler;
}

public View MoveCustomer(int customerId, Address address)
{
    var command = new MoveCustomerCommand
    {
        CustomerId = customerId,
        Address = address,
    };

    this.handler.Handle(command);

    return View();
}
For each business operation in the system you define a class (a DTO and Parameter Object) - in the example, the MoveCustomerCommand class. This class contains merely the data. The implementation is defined in a class that implements ICommandHandler<MoveCustomerCommand>. For instance:
public class MoveCustomerCommandHandler
    : ICommandHandler<MoveCustomerCommand>
{
    private readonly IDataContext context;

    public MoveCustomerCommandHandler(IDataContext context)
    {
        this.context = context;
    }

    public void Handle(MoveCustomerCommand command)
    {
        // TODO: Put logic here.
    }
}
This looks like an awful lot of extra useless code, but this is actually really useful (and if you look closely, it isn't really that much extra code anyway).
Interesting about this is that you can now define one single decorator that handles the transactions for all command handlers in the system:
public class TransactionalCommandHandlerDecorator<TCommand>
    : ICommandHandler<TCommand>
{
    private readonly IDataContext context;
    private readonly ICommandHandler<TCommand> decoratedHandler;

    public TransactionalCommandHandlerDecorator(IDataContext context,
        ICommandHandler<TCommand> decoratedHandler)
    {
        this.context = context;
        this.decoratedHandler = decoratedHandler;
    }

    public void Handle(TCommand command)
    {
        this.decoratedHandler.Handle(command);
        this.context.SubmitChanges();
    }
}
This is not much more code than your UnitOfWorkAttribute, but the difference is that this handler can be wrapped around any implementation and injected into any controller, without the controller to know about this. And directly after executing a command is really the only safe place where you actually know whether you can save the changes or not.
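To make the wiring concrete, here is a hand-composed sketch of the decorator idea above (a container would normally do this composition for you; the fake IDataContext and the command/handler bodies are reduced for illustration):

```csharp
using System.Collections.Generic;

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public interface IDataContext
{
    void SubmitChanges();
}

// Fake context that just counts commits, so the behaviour is observable.
public class FakeDataContext : IDataContext
{
    public int Submits;
    public void SubmitChanges() { Submits++; }
}

public class MoveCustomerCommand
{
    public int CustomerId;
}

public class MoveCustomerCommandHandler : ICommandHandler<MoveCustomerCommand>
{
    public List<int> Moved = new List<int>();
    public void Handle(MoveCustomerCommand command) { Moved.Add(command.CustomerId); }
}

public class TransactionalCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    readonly IDataContext context;
    readonly ICommandHandler<TCommand> decorated;

    public TransactionalCommandHandlerDecorator(IDataContext context,
        ICommandHandler<TCommand> decorated)
    {
        this.context = context;
        this.decorated = decorated;
    }

    public void Handle(TCommand command)
    {
        decorated.Handle(command);
        // Commit only runs after the inner handler finished successfully;
        // an exception in the handler skips the commit entirely.
        context.SubmitChanges();
    }
}
```

The consumer only ever sees `ICommandHandler<MoveCustomerCommand>`, so the transactional behaviour can be added or removed without touching the controller or the handler.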
You can find more information about this way of designing your application in this article: Meanwhile... on the command side of my architecture
Today I half-accidentally found the original cause of the problem.
As can be seen from the question, the filter has a property that is injected by Castle.Windsor. Those who use ASP.NET MVC know that for this to work you need an IFilterProvider implementation that can use the IoC container for dependency injection.
So I started to look at its implementation and noticed that it derives from FilterAttributeFilterProvider, and FilterAttributeFilterProvider has a constructor:
public FilterAttributeFilterProvider(bool cacheAttributeInstances)
So you can control whether or not your attribute instances are cached.
After disabling this cache, the site blew up with NullReferenceExceptions, which led me to one more thing that had been overlooked and was causing undesired side effects.
The original filter provider had not been removed after we added the Castle.Windsor filter provider. So while caching was enabled, the IoC filter provider was creating the instances and the default filter provider was reusing them, with all dependency properties filled in - which was not clearly noticeable, except for the fact that filters were running twice. After caching was disabled, the default provider had to create instances by itself, so the dependency properties were left unfilled - that's why the NullReferenceExceptions occurred.

SOA, TDD, DI & DDD - a shell game?

Okay, I'm going to try and go short and straight to the point. I am trying to develop a loosely-coupled, multi-tier service application that is testable and supports dependency injection. Here's what I have:
At the service layer, I have a StartSession method that accepts some key data required to, well, start the session. My service class is a facade and delegates to an instance of the ISessionManager interface that is injected into the service class constructor.
I am using the Repository pattern in the data access layer. So I have an ISessionRepository that my domain objects will work with and that I implement using the data access technology du jour. ISessionRepository has methods for GetById, Add and Update.
Since my service class is just a facade, I think it is safe to say that my ISessionManager implementation is the actual service class in my architecture. This class coordinates the operations with my Session domain/business object. And here's where the shell game and problem comes in.
In my SessionManager class (the concrete ISessionManager), here's how I have StartSession implemented:
public ISession StartSession(object sessionStartInfo)
{
    var session = Session.GetSession(sessionStartInfo);
    if (session == null)
        session = Session.NewSession(sessionStartInfo);
    return session;
}
I have three problems with this code:
First, obviously I could move this logic into a StartSession method in my Session class but I think that would defeat the purpose of the SessionManager class which then simply becomes a second facade (or is it still considered a coordinator?). Alas, the shell game.
Second, SessionManager has a tightly-coupled dependence on the Session class. I considered creating an ISessionFactory/SessionFactory that could be injected into SessionManager, but then I'd have the same tight coupling inside the factory. But maybe that's okay?
Finally, it seems to me that true DI and factory methods don't mix. After all, we want to avoid "new"ing an instance of an object and let the container return the instance to us. And true DI says that we should not reference the container directly. So, how then do I get the concrete ISessionRepository class injected into my Session domain object? Do I have it injected into the factory class then manually pass it into Session when constructing a new instance (using "new")?
Keep in mind that this is also only one operation and I also need to perform other tasks such as saving a session, listing sessions based on various criteria plus work with other domain objects in my solution. Plus, the Session object also encapsulates business logic for authorization, validation, etc. so (I think) it needs to be there.
The key to what I am looking to accomplish is not only functional but testable. I am using DI to break dependencies so we can easily implement unit tests using mocks as well as give us the ability to make changes to the concrete implementations without requiring changes in multiple areas.
Can you help me wrap my head around the best practices for such a design and how I can best achieve my goals for a solid SOA, DDD and TDD solution?
UPDATE
I was asked to provide some additional code, so as succinctly as possible:
[ServiceContract()]
public class SessionService : ISessionService
{
    public SessionService(ISessionManager manager) { Manager = manager; }

    public ISessionManager Manager { get; private set; }

    [OperationContract()]
    public SessionContract StartSession(SessionCriteriaContract criteria)
    {
        var session = Manager.StartSession(Mapper.Map<SessionCriteria>(criteria));
        return Mapper.Map<SessionContract>(session);
    }
}

public class SessionManager : ISessionManager
{
    public SessionManager() { }

    public ISession StartSession(SessionCriteria criteria)
    {
        var session = Session.GetSession(criteria);
        if (session == null)
            session = Session.NewSession(criteria);
        return session;
    }
}

public class Session : ISession
{
    public Session(ISessionRepository repository, IValidator<ISession> validator)
    {
        Repository = repository;
        Validator = validator;
    }

    // ISession Properties

    public static ISession GetSession(SessionCriteria criteria)
    {
        return Repository.FindOne(criteria);
    }

    public static ISession NewSession(SessionCriteria criteria)
    {
        var session = ????;
        // Set properties based on criteria object
        return session;
    }

    public Boolean Save()
    {
        if (!Validator.IsValid(this))
            return false;
        return Repository.Save(this);
    }
}
And, obviously, there is an ISessionRepository interface and concrete XyzSessionRepository class that I don't think needs to be shown.
2nd UPDATE
I added the IValidator dependency to the Session domain object to illustrate that there are other components in use.
The posted code clarifies a lot. It looks to me like the session class holds state (with behavior), and the service and manager classes strictly perform actions/behavior.
You might look at removing the Repository dependency from the Session and adding it to the SessionManager. So instead of the Session calling Repository.Save(this), your Manager class would have a Save(ISession session) method that would then call Repository.Save(session). This would mean that the session itself would not need to be managed by the container, and it would be perfectly reasonable to create it via "new Session()" (or using a factory that does the same). I think the fact that the Get- and New- methods on the Session are static is a clue/smell that they may not belong on that class (does this code compile? Seems like you are using an instance property within a static method).
Finally, it seems to me that true DI
and factory methods don't mix. After
all, we want to avoid "new"ing an
instance of an object and let the
container return the instance to us.
And true DI says that we should not
reference the container directly. So,
how then do I get the concrete
ISessionRepository class injected into
my Session domain object? Do I have it
injected into the factory class then
manually pass it into Session when
constructing a new instance (using
"new")?
This question gets asked a LOT when it comes to managing classes that mix state and service via an IOC container. As soon as you use an abstract factory that uses "new", you lose the benefits of a DI framework from that class downward in the object graph. You can get away from this by completely separating state and service, and having only your classes that provide service/behavior managed by the container. This leads to passing all data through method calls (aka functional programming). Some containers (Windsor for one) also provide a solution to this very problem (in Windsor it's called the Factory Facility).
Edit: wanted to add that functional programming also leads to what Fowler would call "anemic domain models". This is generally considered a bad thing in DDD, so you might have to weigh that against the advice I posted above.
Just some comments...
After all, we want to avoid "new"ing an instance of an object and let the container return the instance to us.
this isn't 100% true. You want to avoid "new"ing only across so-called seams, which are basically the lines between layers. If you try to abstract persistence with repositories - that's a seam; if you try to decouple the domain model from the UI (the classic one - a System.Web reference), there's a seam. If you are within the same layer, then decoupling one implementation from another sometimes makes little sense and just adds additional complexity (useless abstraction, IoC container configuration, etc.). Another (obvious) reason to abstract something is when you already, right now, need polymorphism.
And true DI says that we should not reference the container directly.
this is true. But another concept you might be missing is the so-called composition root (it's good for things to have a name :). This concept resolves the confusion about "when to use a service locator". The idea is simple - you should compose your dependency graph as early as possible, and there should be only one place where you actually reference the IoC container.
E.g. in an ASP.NET MVC application, the common point for composition is the ControllerFactory.
Do I have it injected into the factory class then manually pass it into Session when constructing a new instance
As I see it so far, factories are generally good for two things:
1. Creating complex objects (the Builder pattern helps significantly)
2. Resolving violations of the open/closed and single responsibility principles
public void PurchaseProduct(Product product)
{
    if (product.HasSomething) order.Apply(new FirstDiscountPolicy());
    if (product.HasSomethingElse) order.Apply(new SecondDiscountPolicy());
}
becomes:
public void PurchaseProduct(Product product)
{
    order.Apply(DiscountPolicyFactory.Create(product));
}
In that way, the class that holds PurchaseProduct won't need to be modified if a new discount policy appears in sight, and PurchaseProduct becomes responsible for purchasing the product only, instead of also knowing which discount to apply.
P.S. If you are interested in DI, you should read "Dependency Injection in .NET" by Mark Seemann.
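A fleshed-out sketch of the DiscountPolicyFactory from the snippet above (the Product flags and the policy behaviour are invented here purely for illustration): adding a new policy means touching only the factory, never PurchaseProduct.

```csharp
public interface IDiscountPolicy
{
    decimal Apply(decimal price);
}

public class FirstDiscountPolicy : IDiscountPolicy
{
    public decimal Apply(decimal price) { return price * 0.90m; } // 10% off
}

public class SecondDiscountPolicy : IDiscountPolicy
{
    public decimal Apply(decimal price) { return price * 0.80m; } // 20% off
}

public class NoDiscountPolicy : IDiscountPolicy
{
    public decimal Apply(decimal price) { return price; }
}

public class Product
{
    public bool HasSomething;
    public bool HasSomethingElse;
}

public static class DiscountPolicyFactory
{
    // The only place that knows which discount applies to which product.
    public static IDiscountPolicy Create(Product product)
    {
        if (product.HasSomething) return new FirstDiscountPolicy();
        if (product.HasSomethingElse) return new SecondDiscountPolicy();
        return new NoDiscountPolicy();
    }
}
```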
I thought I'd post the approach I ended up following while giving due credit above.
After reading some additional articles on DDD, I finally came across the observation that our domain objects should not be responsible for their own creation or persistence, as well as the notion that it is okay to "new" an instance of a domain object from within the Domain Layer (as Arnis alluded to).
So I retained my SessionManager class but renamed it SessionService so it would be clearer that it is a Domain Service (not to be confused with the SessionService in the facade layer). It is now implemented like this:
public class SessionService : ISessionService
{
    public SessionService(ISessionFactory factory, ISessionRepository repository)
    {
        Factory = factory;
        Repository = repository;
    }

    public ISessionFactory Factory { get; private set; }

    public ISessionRepository Repository { get; private set; }

    public ISession StartSession(SessionCriteria criteria)
    {
        var session = Repository.GetSession(criteria);
        if (session == null)
            session = Factory.CreateSession(criteria);
        else if (!session.CanResume)
            throw new InvalidOperationException("Cannot resume the session.");
        return session;
    }
}
The Session class is now more of a true domain object only concerned with the state and logic required when working with the Session, such as the CanResume property shown above and validation logic.
The SessionFactory class is responsible for creating new instances and allows me to still inject the ISessionValidator instance provided by the container without directly referencing the container itself:
public class SessionFactory : ISessionFactory
{
    public SessionFactory(ISessionValidator validator)
    {
        Validator = validator;
    }

    public ISessionValidator Validator { get; private set; }

    public Session CreateSession(SessionCriteria criteria)
    {
        var session = new Session(Validator);
        // Map properties
        return session;
    }
}
Unless someone can point out a flaw in my approach, I'm pretty comfortable that this is consistent with DDD and gives me full support for unit testing, etc. - everything I was after.
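To illustrate the testability claim, here is a sketch of how the final SessionService design can be unit tested with hand-rolled fakes, no container or database required (SessionCriteria, ISession and the stub types are reduced to the minimum needed here and are not the real project types):

```csharp
using System;

public class SessionCriteria { }

public interface ISession
{
    bool CanResume { get; }
}

public class FakeSession : ISession
{
    public bool CanResume { get; set; }
}

public interface ISessionRepository { ISession GetSession(SessionCriteria criteria); }
public interface ISessionFactory { ISession CreateSession(SessionCriteria criteria); }

public class SessionService
{
    readonly ISessionFactory factory;
    readonly ISessionRepository repository;

    public SessionService(ISessionFactory factory, ISessionRepository repository)
    {
        this.factory = factory;
        this.repository = repository;
    }

    public ISession StartSession(SessionCriteria criteria)
    {
        var session = repository.GetSession(criteria);
        if (session == null)
            return factory.CreateSession(criteria);
        if (!session.CanResume)
            throw new InvalidOperationException("Cannot resume the session.");
        return session;
    }
}

// Stubs standing in for the real repository and factory.
public class StubRepository : ISessionRepository
{
    public ISession Result;
    public ISession GetSession(SessionCriteria criteria) { return Result; }
}

public class StubFactory : ISessionFactory
{
    public ISession Created = new FakeSession();
    public ISession CreateSession(SessionCriteria criteria) { return Created; }
}
```

Each of the three branches (new session, resumable session, non-resumable session) can then be asserted independently.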

Confusion over IRepository

I'm a bit confused about when using the "IRepository pattern", when actually to load the data.
Currently I have something like this:
public class MainViewModel : ViewModelBase
{
    // EF4 generated ObjectContext
    private ScorBotEntities context = new ScorBotEntities();

    // Custom IUserRepository class
    private IUserRepository userRepository;

    public MainViewModel()
    {
        this.userRepository = new UserRepository(context.Users);
    }

    public ObservableCollection<User> Users
    {
        get
        {
            return new ObservableCollection<User>(userRepository.GetAll());
        }
    }
}
}
The ScorBotEntities are autogenerated using EF4 (I had a look at POCOs - too much work for a project this size).
You can find the definition of the UserRepository here: http://code.google.com/p/i4prj4-g2/source/browse/ScorBotRobotics/ScorBotRobotics/Repositories/UserRepository.cs
But basically, what I'm wondering about is, why do it even make sense to use a repository here, instead of just writing it like this:
public class MainViewModel : ViewModelBase
{
    private ScorBotEntities context = new ScorBotEntities();

    public MainViewModel()
    {
    }

    public ObservableCollection<User> Users
    {
        get
        {
            return new ObservableCollection<User>(context.Users);
        }
    }
}
It makes sense to abstract functionality away such as with the UsernameAndPassword method. But in that case, perhaps using some Query Objects would be more ideal?
I am a bit baffled that your context has made its way down to your ViewModel. I believe your GUI layer should never see the context. The context must be opened/kept/closed by the IRepository. Let the data layer (IRepository) return an array/list of Users.
There are a couple of different points here. First, your view models should have no knowledge of the repository - keep your view models as simple as possible.
Second, the IRepository is your public API - so you should have dependencies to this (depend on abstractions rather than concrete implementation between layers).
There are a couple of different (perfectly acceptable ways) to implement the IRepository. One is to have the repository encapsulate the context directly. Another is to use the "unit of work" pattern and have your unitOfWork encapsulate the context and pass the unitOfWork object to each repository. Either way, since you're using EF4, testability is much easier than it used to be. For example, EF4 introduced IObjectSet so that it is easy to provide test doubles and mocks to test your repository.
I highly recommend checking out this whitepaper on Testability and Entity Framework 4.
Separation of concerns
What if you want to change the storage of 'Users' from, say, SQL to a flat file?
Then the context would not be needed, and you'd have to change every use of it, instead of just your IRepository implementation.
Also, ideally you would have your IRepository injected, so your MainViewModel doesn't care how it gets its Users.
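The point about injection can be sketched like this (types are simplified stand-ins for the ones in the question - no ViewModelBase, EF context, or ObservableCollection): because MainViewModel depends only on IUserRepository, swapping SQL storage for a flat file means writing a new implementation, not changing the view model.

```csharp
using System.Collections.Generic;

public class User
{
    public string Name;
    public User(string name) { Name = name; }
}

public interface IUserRepository
{
    IEnumerable<User> GetAll();
}

// One possible implementation; an EF-backed or flat-file-backed repository
// would implement the same interface.
public class InMemoryUserRepository : IUserRepository
{
    readonly List<User> users = new List<User> { new User("alice"), new User("bob") };
    public IEnumerable<User> GetAll() { return users; }
}

public class MainViewModel
{
    readonly IUserRepository userRepository;

    // The repository is injected; the view model never sees a context.
    public MainViewModel(IUserRepository userRepository)
    {
        this.userRepository = userRepository;
    }

    public IEnumerable<User> Users
    {
        get { return userRepository.GetAll(); }
    }
}
```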
