I am quite new to the FNH and NH world, so be gentle :P
I have created an application using FNH for data access, which works well while not using lazy loading; however, once I enable lazy loading everything goes pear-shaped (as in, no session is open when I attempt to access the lazy-loaded properties, etc.).
The application layout I have created thus-far has a "Database" singleton which has various methods such as Save(), Refer() and List().
When calling Refer(), a session is opened, the data is retrieved and the session is disposed, meaning there is no session available when attempting to access a lazy-loaded property on the returned object. Example: Database.Refer("username").Person fails, since Person is lazy-loaded and the session has already closed.
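A sketch of the pattern described above (the type and member names — User, Person, sessionFactory — are illustrative assumptions, not the actual code):

```csharp
// Sketch of the Refer() pattern: open session, load, dispose.
public User Refer(string username)
{
    using (var session = sessionFactory.OpenSession())
    {
        return session.Query<User>().Single(u => u.Username == username);
    }
    // The session is disposed here, so a later access to
    // Database.Refer("username").Person has no session to load from
    // and NHibernate throws a LazyInitializationException.
}
```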
I have read that Castle has a SessionManager that could be used for this very scenario but, either it's the late nights or lack of coffee, I can't seem to work out how to hook up FNH to use this manager as, in the spirit of castle, everything is defined in config files.
Am I missing something, or can this not be done? Are there any other session managers (or even more appropriate conventions) that I should look at?
Thanks for any help on this matter.
I don't think that your particular problem is connected with the SessionManager as you've already mentioned that you are capable of starting a new session and disposing it whenever needed.
From what I understand of your post, you are trying to expose an entity to your view (with some lazy-loaded properties), which is already a bad idea because it leads to nasty LazyInitializationException(s).
You should consider making a distinction between your data model and your domain model. The key concept is described on Ayende Rahien's blog:
http://ayende.com/blog/4054/nhibernate-query-only-properties
If you are writing a very simple two-tier application, then it probably will not hurt to micro-manage your session in the data layer (but keep in mind that this is not the best solution).
I would also look into the query that fetches your entity, as it seems to me that you are trying to obtain data that is just a part of your model — in this case Person. This can lead to serious problems like N+1 selects:
What is SELECT N+1?
So in general I think you should focus more on how things are structured in your application instead of searching for a SessionManager as it will not resolve all of your problems.
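For example, one way to avoid the exception is to fetch the association eagerly while the session is still open. A sketch using NHibernate's LINQ provider (the User/Person names are assumptions carried over from the question):

```csharp
using NHibernate.Linq; // for Query<T>() and Fetch()

public User ReferWithPerson(string username)
{
    using (var session = sessionFactory.OpenSession())
    {
        // Fetch() loads the Person association inside the session,
        // so nothing lazy remains by the time the session closes.
        return session.Query<User>()
                      .Fetch(u => u.Person)
                      .Single(u => u.Username == username);
    }
}
```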
For any of you who are still looking for answers on this, I will share with you what I have so far.
This is only a very simple overview of the framework that I have decided to use, and it is by no means the only solution to this problem.
The basic layout of my code is as follows:
NHibernate Repository
(references my model assembly and the UoW assembly)
Based on the Hibernating Rhinos Repository implementation, modified to suit my needs. Found here: http://ayende.com/Wiki/Rhino+Commons.ashx
public T Get(Guid id)
{
    return WrapUOW(() =>
    {
        using (Audit.LockAudit())
            return (T)Session.Get(typeof(T), id);
    });
}
public void LoadFullObject(T partial)
{
    if (partial == null)
        throw new ArgumentNullException("partial");
    if (partial.Id == Guid.Empty)
        return;

    WrapUOW(() =>
    {
        using (Audit.LockAudit())
        {
            LazyInitialiser.InitialiseCompletely(partial, Session);
        }
    });
}
public T SaveOrUpdate(T entity)
{
    using (Audit.LockAudit())
    {
        With.Transaction(() =>
        {
            Enlist(entity);
            Session.SaveOrUpdate(entity);
            entity.HasChanged = false;
        });
    }
    return entity;
}
protected void Enlist(T instance)
{
    if (instance != null && instance.Id != Guid.Empty && !Session.Contains(instance))
    {
        using (Audit.LockAudit())
        {
            Session.Update(instance);
        }
    }
}
References a neat little helper class called 'Lazy Initializer for NHibernate' found here: http://www.codeproject.com/KB/cs/NHibernateLazyInitializer.aspx
This also contains Extension methods for Save, Delete and LoadFullObject
I have bent standards a little in this assembly by also creating a WrapUOW method to help simplify some of my code:
protected static T WrapUOW(Func<T> action)
{
    // (A matching Action overload covers the void case, as in LoadFullObject.)
    IUnitOfWork uow = null;
    if (!UnitOfWork.IsStarted)
        uow = UnitOfWork.Start();
    try
    {
        return action();
    }
    finally
    {
        // Dispose only the unit of work we started ourselves
        if (uow != null)
            uow.Dispose();
    }
}
NHibernate Unit of work
(references my model assembly)
Also based on the HibernatingRhino's UoW implementation and modified to suit
View - not important, just required for MVVM implementation
Binds the values from the ViewModel
Model
Contains my entity classes and hibernate mapping files
ViewModel
Contains two main view base classes, ListPage and MaintenancePage
The ListPage base class just calls the Repository List method based on the object type we are listing. This loads a dehydrated list of entities.
The MaintenancePage takes an entity instance from the ListPage and calls the Repository.LoadFullObject method to rehydrate the entity for use on the screen.
This allows for the use of binding on the screen.
We can also safely call the Repository.SaveOrUpdate method from this page
Related
I am trying to learn and implement domain driven design in a non-web based project. I have a main loop that will do multiple procedures on a lot of entities in a single unit of work. I don't want any of the changes to be persisted unless the entire loop's work is successful. I'm using AutoMapper to convert persistence models to domain models within a repository, and my services are using the repository to retrieve data before doing work.
There are some elements of DDD that are not working well with my project and I am hoping someone can tell me what I have wrong about the whole process.
Here are the DDD ideas I'm struggling with:
Domain services should be used when a process involves multiple aggregate roots interacting with each other
You should pass aggregate root Ids into domain services which will then use repositories to load them
Repositories should return domain models that it constructs from mapped persistence models (in this case I am using AutoMapper)
Here is an example of what I'm trying to do.
using (var scope = serviceProvider.CreateScope())
{
    var unitOfWork = scope.ServiceProvider.GetService<IUnitOfWork>();
    var aggregate1Repo = scope.ServiceProvider.GetService<IAggregate1Repository>();
    var aggregate2Repo = scope.ServiceProvider.GetService<IAggregate2Repository>();
    var aggregate3Repo = scope.ServiceProvider.GetService<IAggregate3Repository>();
    var firstService = scope.ServiceProvider.GetService<IFirstService>();
    var secondService = scope.ServiceProvider.GetService<ISecondService>();

    var aggregate1 = aggregate1Repo.Find(1); // first copy of aggregate1
    var aggregate2 = aggregate2Repo.Find(1000);
    var aggregate3 = aggregate3Repo.Find(123);

    aggregate1.DoSomeInternalWork();
    firstService.DoWork(aggregate1.Id, aggregate2.Id);
    secondService.DoWork(aggregate1.Id, aggregate3.Id);
    aggregate1Repo.Update(aggregate1);

    unitOfWork.Commit();
}
Aggregate1Repo:
public class Aggregate1Repository : IAggregate1Repository
{
    private readonly AppDBContext _dbContext;
    private readonly IMapper _mapper;

    public Aggregate1Repository(AppDBContext context, IMapper mapper)
    {
        _dbContext = context;
        _mapper = mapper;
    }

    public Aggregate1 Find(int id)
    {
        // AsNoTracking() returns an untracked query, so use a predicate
        // rather than Find(), which only exists on the DbSet itself
        return _mapper.Map<Aggregate1>(_dbContext
            .SomeDBSet.AsNoTracking()
            .SingleOrDefault(e => e.Id == id));
    }
}
FirstService:
public class FirstService : IFirstService
{
    private readonly IAggregate1Repository _agg1Repo;
    private readonly IAggregate2Repository _agg2Repo;

    public FirstService(IAggregate1Repository agg1Repo, IAggregate2Repository agg2Repo)
    {
        _agg1Repo = agg1Repo;
        _agg2Repo = agg2Repo;
    }

    public void DoWork(int aggregate1Id, int aggregate2Id)
    {
        var aggregate1 = _agg1Repo.Find(aggregate1Id); // second copy of aggregate1
        var aggregate2 = _agg2Repo.Find(aggregate2Id);
        // do some calculations and modify aggregate1 in some fashion
        // I could update aggregate1 in the repository here,
        // but this copy of aggregate1 doesn't have the changes made prior to this point
    }
}
SecondService:
public class SecondService : ISecondService
{
    private readonly IAggregate1Repository _agg1Repo;
    private readonly IAggregate3Repository _agg3Repo;

    public SecondService(IAggregate1Repository agg1Repo, IAggregate3Repository agg3Repo)
    {
        _agg1Repo = agg1Repo;
        _agg3Repo = agg3Repo;
    }

    public void DoWork(int aggregate1Id, int aggregate3Id)
    {
        var aggregate1 = _agg1Repo.Find(aggregate1Id); // third copy of aggregate1
        var aggregate3 = _agg3Repo.Find(aggregate3Id);
        // do some calculations and modify aggregate1 in some fashion
        // I could update aggregate1 in the repository here,
        // but this copy of aggregate1 doesn't have the changes made prior to this point
    }
}
The problem here is that I'd essentially be doing work on three different copies of aggregate1, since a new object is created by AutoMapper in the repository each time I try to load it. I could put separate calls to aggregate1Repo.Update in the two services, but I'd still be working on three different objects that all represent the same thing. I feel like I must have a fundamental flaw in my thinking, but I don't know what it is.
First off, your problem isn't really related to DDD. It's just a typical ORM/AutoMapper issue.
You should NEVER use AutoMapper to map TO a persistence model or a domain model; this will almost never work.
The reason is that most ORMs track entities and their changes via references (e.g. Entity Framework). So if you use AutoMapper and get new instances back, you break the way the ORM works and run into exactly these problems.
This may be an interesting read for you: Why mapping DTOs to Entities using AutoMapper and EntityFramework is horrible
While it deals with DTO -> Entity, the same applies to Domain Model -> Entity.
Also, Jimmy Bogard (the author of AutoMapper) once commented on a blog post (the post is unavailable now, but the Disqus comments are still there).
Jimmy Bogard commented:
There are definitely places to use AutoMapper, and places not to use it. However, I think this post misses them:
Configuration validation takes care of members that exist on the destination type that aren't mapped. That's easy.
Dependency injection takes care of depending directly on other assemblies. For example, you'd have IRepository in Core and the implementation that references System.Data in another assembly.
AutoMapper was never, ever intended to map back into a behavioral model. AutoMapper is intended to build DTOs, not map back in.
AutoMapper also uses Reflection.Emit and expression tree compilation, cached once. If you use autoprojection, it's faster than any server-side code you could write yourself.
The points you raised are common complaints, but mostly it's people not understanding how to use AutoMapper correctly. However, there are places I absolutely wouldn't use AutoMapper:
When the destination type isn't a projection of the source type. Seems obvious; if AutoMapper isn't "auto" then there's no point. It's supposed to get rid of the brain-dead code you would be forced to write anyway.
Mapping to complex models. I only use AutoMapper to flatten/project, never back to behavioral models. I'm very up front about this and discourage this use whenever I see it.
Anywhere that you're not trying to delete code you would have written anyway.
You prefer explicit over convention. This is a whole other topic, with pros and cons of both approaches.
You prefer not to understand the magic. I build lots of convention-based helpers covering a wide array of scenarios, but I make sure that my team understands what is actually happening underneath the covers.
Your options basically boil down to:
Use event sourcing for your domain model (and build the model as a series of events inside the repository, so for persistence you only save new events),
OR
use your domain model directly as the persistence model.
The latter means that some persistence details leak into your domain model. This may or may not be acceptable for your use case; it usually works well in smaller projects where event sourcing is out of scope.
As for the rest of your example, it's a bit too far from a practical use case, and it's hard to say why your services are created that way.
It could be a badly chosen aggregate root, or a wrong separation of concerns; that's hard to tell from abstract names such as SecondService.
An aggregate root can be seen as a transaction boundary: all entities within that root need to be updated at the same time.
The fact that you pass only IDs to the DoWork methods indicates that they are distinct operations (and hence transactions of their own), or that only the ID should be assigned.
If they are supposed to take part in an outer scope, you should pass the aggregate root references into the services rather than only the IDs:
firstService.DoWork(aggregate1, aggregate2);
secondService.DoWork(aggregate1, aggregate3);
// instead of
firstService.DoWork(aggregate1.Id, aggregate2.Id);
secondService.DoWork(aggregate1.Id, aggregate3.Id);
You can't (and shouldn't) rely on the fact that some ORMs may cache an entity; in other words, don't rely on multiple calls to your repository returning the exact same instance of the entity.
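To make the reference-passing concrete, here is a minimal, self-contained sketch (the aggregate members and the Add calculation are placeholder assumptions, not the asker's real model): once the caller passes the loaded instance in, every service mutates the same object.

```csharp
public class Aggregate1
{
    public int Id { get; set; }
    public int Total { get; private set; }
    public void Add(int amount) { Total += amount; }
}

public class Aggregate2
{
    public int Id { get; set; }
    public int Value { get; set; }
}

public class FirstService
{
    // No repository lookup here: the caller owns loading and saving,
    // so this service works on the single shared Aggregate1 instance.
    public void DoWork(Aggregate1 aggregate1, Aggregate2 aggregate2)
    {
        aggregate1.Add(aggregate2.Value); // placeholder for the real calculation
    }
}
```

Calling DoWork from several services within the same unit of work now accumulates changes on one instance, so the final Update/Commit persists all of them.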
I have a project structured like this :
WebSite --> Services --> Repositories --> Domain Objects
I use Entity Framework 6 and Autofac.
I've been told I should remove all construction logic from my domain objects so they remain as POCO as possible. The thing is, I have properties that should be initialized when a new object is created, such as CreationDate and UserIdCreatedBy.
As an example, if I have a Client object, I would use the following logic in the Client class constructor:
public class Client
{
    public Client()
    {
        this.CreationDate = DateTime.UtcNow;

        var principal = Thread.CurrentPrincipal as CustomPrincipal;
        if (principal != null
            && principal.CustomIdentity != null
            && principal.CustomIdentity.User != null)
        {
            this.UserIdCreatedBy = principal.CustomIdentity.User.UserId;
        }
    }

    // ... properties and such
}
So now, I would like to take this constructor logic out of the domain object into a factory for that object. How can I do it gracefully so that Entity Framework uses it when I call MyContext.Clients.Create()? Is that even possible? I know calling the Thread's CurrentPrincipal is not all that good either, but it's for the example to show the logic could be more complex than a plain default value.
Thanks a lot
Assuming that you use DB storage to store the items (not for manipulating them), I think you could use a separate class for instantiating objects (some kind of factory, as you described).
For example, in my apps I often have a UserManager class. This class does all the work relating to user creation, depending on the login method (email + password, social ID, etc.). This class might also contain methods for changing the password and so on.
UPD:
I use the data layer as something that knows how to create/update/read/delete objects from/to the database. In addition, the class that works with the db can have methods like selectByThis, selectByThat, etc. So you never need to write anything db-specific anywhere in your code except the db layer. (I mean that you never need to write something like .Where(a => a.SomeProp == true); you just use a special method for it, so if you change the db you will only have to change your db layer, not the whole project.)
So yes, when I need some special logic for initializing an object, I use a separate class (some kind of manager). That class does all the work and then just tells the db layer: "hey, I did all the work, so just save this object for me!"
This simplifies maintenance for you, and it is also how you follow the single responsibility principle: one class initializes and does some work, and the other class saves.
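As a sketch of that idea (Client's properties follow the question, but the factory and the injected current-user delegate are assumptions), the initialization moves out of the entity like this:

```csharp
using System;

public class Client
{
    public DateTime CreationDate { get; set; }
    public int? UserIdCreatedBy { get; set; }
    // ... other properties, no constructor logic
}

public class ClientFactory
{
    private readonly Func<int?> currentUserId;

    // "Who is the current user" is injected, so the entity never
    // touches Thread.CurrentPrincipal itself.
    public ClientFactory(Func<int?> currentUserId)
    {
        this.currentUserId = currentUserId;
    }

    public Client Create()
    {
        return new Client
        {
            CreationDate = DateTime.UtcNow,
            UserIdCreatedBy = currentUserId()
        };
    }
}
```

Note that MyContext.Clients.Create() will not call this factory for you; you would call ClientFactory.Create() in your service layer and hand the instance to EF afterwards.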
I'm using ASP .NET MVC (C#) and EntityFramework (database first) for a project.
Let's say I'm on a "Movie detail" page which shows the detail of one movie of my database. I can click on each movie and edit each one.
Therefore, I have a Movie class, and a Database.Movie class generated with EF.
My index action looks like:
public ActionResult MovieDetail(int id)
{
    Movie movie = Movie.GetInstance(id);
    return View("MovieDetail", movie);
}
The GetInstance method is supposed to return an instance of the Movie class, and looks like this for the moment:
public static Movie GetInstance(int dbId)
{
    using (var db = new MoviesEntities())
    {
        Database.Movie dbObject = db.Movies.SingleOrDefault(r => r.Id == dbId);
        if (dbObject != null)
        {
            Movie m = new Movie(dbObject.Id, dbObject.Name, dbObject.Description);
            return m;
        }
        return null;
    }
}
It works fine, but is this a good way to implement it? Is there another, cleaner way to get my instance of the Movie class?
Thanks
is this a good way to implement it?
That's a very subjective question. It's valid, and there's nothing technically wrong with this implementation. For my small-size home projects, I've used similar things.
But for business applications, it's better to keep your entities unrelated to your MVC application. This means that your data context + EF + generated entities should be kept in a separate project (let's call it the 'Data' project), and the actual data is passed in the form of a DTO.
So if your entity resembles this:
public class Person {
    public int Id { get; set; }
    public string Name { get; set; }
}
You'd expect there to be an equivalent DTO class that is able to pass that data:
public class PersonDTO {
    public int Id { get; set; }
    public string Name { get; set; }
}
This means that your 'Data' project only replies with DTO classes, not entities.
public static MovieDTO GetInstance(int dbId)
{
...
}
It makes the most sense that your DTOs are also in a separate project. The reason for all this abstraction is that when you have to change your datacontext (e.g. the application will start using a different data source), you only need to make sure that the new data project also communicates with the same DTOs. How it works internally, and which entities it uses, is only relevant inside the project. From the outside (e.g. from your MVC application), it doesn't matter how you get the data, only that you pass it in a form that your MVC projects already understand (the DTO classes).
All your MVC controller logic will not have to change, because the DTO objects haven't changed. This could save you hours. If you link the entity to your Controller AND View, you'll have to rewrite both if you suddenly decide to change the entity.
If you're worried about the amount of code you'll have to write for converting entities to DTOs and vice versa, you can look into tools like Automapper.
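For reference, the mapping such a tool automates is just a projection. A hand-written version for the Person example (classes repeated here for completeness) might look like this; AutoMapper generates the equivalent at runtime:

```csharp
public class Person {
    public int Id { get; set; }
    public string Name { get; set; }
}

public class PersonDTO {
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class PersonMapper
{
    // Explicit entity-to-DTO projection: the 'Data' project returns
    // PersonDTO, so the MVC project never touches the EF entity.
    public static PersonDTO ToDTO(Person entity)
    {
        return new PersonDTO { Id = entity.Id, Name = entity.Name };
    }
}
```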
The main question: Is this needed?
That, again, is a very subjective question. It's relative to the scope of the project, but also the expected lifetime of the application. If it's supposed to be used for a long time, it might be worth it to keep everything interchangeable. If this is a small scale, short lifetime project, the added time to implement this might not be worth it.
I can't give you a definitive answer on this. Evaluate how well you want the application to adapt to changes, but also how likely it is that the application will change in the future.
Disclaimer: This is how we do it at the company where I work. This is not the only solution to this type of problem, but it's the one I'm familiar with. Personally, I don't like making abstractions unless there's a functional reason for it.
A few things:
The naming you're using is a little awkward and confusing. Generally, you don't ever want to have two classes in your project named the same, even if they're in different namespaces. There's nothing technically wrong with it, but it creates confusion. Which Movie do I need here? And if I'm dealing with a Movie instance, is it Movie or Database.Movie? If you stick to names like Movie and MovieDTO or Movie and MovieViewModel, the class names clearly indicate the purpose (lack of suffix indicates a database-backed entity).
Especially if you're coming from another MVC framework like Rails or Django, ASP.NET's particular flavor of MVC can be a little disorienting. Most other MVC frameworks have a true Model, a single class that functions as the container for all the business logic and also acts as a repository (which could be considered business logic, in a sense). ASP.NET MVC doesn't work that way. Your entities (classes that represent database tables) are and should be dumb. They're just a place for Entity Framework to stuff data it pulls from the database. Your Model (the M in MVC) is really more a combination of your view models and your service/DAL layer. Your Movie class (not to be confused with Database.Movie... see why that naming bit is important) on the other hand is trying to do triple duty, acting as the entity, view model and repository. That's simply too much. Your classes should do one thing and do it well.
Again, if you have a class that's going to act as a service or repository, it should be an actual service or repository, with everything those patterns imply. Even then, you should not instantiate your context in a method. The easiest correct way to handle it is to simply have your context be a class instance variable. Something like:
public class MovieRepository
{
    private readonly MovieEntities context;

    public MovieRepository()
    {
        this.context = new MovieEntities();
    }
}
Even better, though is to use inversion of control and pass in the context:
public class MovieRepository
{
    private readonly MovieEntities context;

    public MovieRepository(MovieEntities context)
    {
        this.context = context;
    }
}
Then, you can employ a dependency injection framework, like Ninject or Unity to satisfy the dependency for you (preferably with a request-scoped object) whenever you need an instance of MovieRepository. That's a bit high-level if you're just starting out, though, so it's understandable if you hold off on going the whole mile for now. However, you should still design your classes with this in mind. The point of inversion of control is to abstract dependencies (things like the context for a class that needs to pull entities from the database), so that you can switch out these dependencies if the need should arise (say perhaps if there comes a time when you're going to retrieve the entities from an Web API instead of through Entity Framework, or even if you just decide to switch to a different ORM, such as NHibernate). In your code's current iteration, you would have to touch every method (and make changes to your class in general, violating open-closed).
An entity model should never act as a view model. Offering data to the views is an essential role of the view model. A view model can easily be recognized because it doesn't have any role or responsibility other than holding data and the business rules that act solely upon that data. It thus has all the advantages of any other pure model, such as unit-testability.
A good explanation of this can be found in Dino Esposito’s The Three Models of ASP.NET MVC Apps.
You can use AutoMapper
What is AutoMapper?
AutoMapper is a simple little library built to solve a deceptively complex problem - getting rid of code that mapped one object to another. This type of code is rather dreary and boring to write, so why not invent a tool to do it for us?
How do I get started?
Check out the getting started guide.
Where can I get it?
First, install NuGet. Then, install AutoMapper from the package manager console:
PM> Install-Package AutoMapper
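A minimal usage sketch (this uses the instance API; older AutoMapper versions expose a static Mapper.CreateMap instead, and the Person/PersonDTO types here stand in for your own classes):

```csharp
using AutoMapper;

// Configure the projection once at startup...
var config = new MapperConfiguration(cfg => cfg.CreateMap<Person, PersonDTO>());
var mapper = config.CreateMapper();

// ...then map entities to DTOs wherever needed.
PersonDTO dto = mapper.Map<PersonDTO>(new Person { Id = 1, Name = "Ada" });
```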
Is it possible to expose the DataContext when extending a class in the DataContext? Consider this:
public partial class SomeClass {
    public object SomeExtraProperty {
        get {
            return this.DataContext.ExecuteQuery<T>("{SOME_REALLY_COMPLEX_QUERY_THAT_HAS_TO_BE_IN_RAW_SQL_BECAUSE_LINQ_GENERATES_CRAP_IN_THIS_INSTANCE}");
        }
    }
}
How can I go about doing this? I have a sloppy version working now, where I pass the DataContext to the view model and from there I pass it to the method I have setup in the partial class. I'd like to avoid the whole DataContext passing around and just have a property that I can reference.
UPDATE FOR @Aaronaught
So, how would I go about writing the code? I know that's a vague question, but from what I've seen online so far, all the tutorials feel like they assume I know where to place the code, how to use it, etc.
Say I have a very simple application structured as (in folders):
Controllers
Models
Views
Where do the repository files go? In the Models folder or can I create a "Repositories" folder just for them?
Past that, how is the repository aware of the DataContext? Do I have to create a new instance of it in each method of the repository? (If so, that seems inefficient... and wouldn't that cause problems with pulling an object out of one instance and using it in a controller that's using a different instance?)
For example I currently have this setup:
public class BaseController : Controller {
    protected DataContext dc = new DataContext();
}

public class XController : BaseController {
    // stuff
}
This way I have a "global" DataContext available to all controllers who inherit from BaseController. It is my understanding that that is efficient (I could be wrong...).
In my Models folder I have a "Collections" folder, which really serve as the ViewModels:
public class BaseCollection {
    // Common properties for the Master page
}

public class XCollection : BaseCollection {
    // X View specific properties
}
So, taking all of this where and how would the repository plug-in? Would it be something like this (using the real objects of my app):
public interface IJobRepository {
    Job GetById(int jobId);
}

public class JobRepository : IJobRepository {
    public Job GetById(int jobId) {
        using (DataContext dc = new DataContext()) {
            return dc.Jobs.Single(j => j.JobId == jobId);
        }
    }
}
Also, what's the point of the interface? Is it so other services can hook up to my app? What if I don't plan on having any such capabilities?
Moving on, would it be better to have an abstraction object that collects all the information for the real object? For example an IJob object which would have all of the properties of the Job + the additional properties I may want to add such as the Name? So would the repository change to:
public interface IJobRepository {
    IJob GetById(int jobId);
}

public class JobRepository : IJobRepository {
    public IJob GetById(int jobId) {
        using (DataContext dc = new DataContext()) {
            return dc.Jobs.Single(j => new IJob {
                Name = dc.SP(jobId) // of course, the projection here is wrong,
                                    // but you get the point...
            });
        }
    }
}
My head is so confused now. I would love to see a tutorial from start to finish, i.e., "File -> New -> Do this -> Do that".
Anyway, @Aaronaught, sorry for slamming such a huge question at you, but you obviously have substantially more knowledge of this than I do, so I want to pick your brain as much as I can.
Honestly, this isn't the kind of scenario that Linq to SQL is designed for. Linq to SQL is essentially a thin veneer over the database; your entity model is supposed to closely mirror your data model, and oftentimes your Linq to SQL "entity model" simply isn't appropriate to use as your domain model (which is the "model" in MVC).
Your controller should be making use of a repository or service of some kind. It should be that object's responsibility to load the specific entities along with any additional data that's necessary for the view model. If you don't have a repository/service, you can embed this logic directly into the controller, but if you do this a lot then you're going to end up with a brittle design that's difficult to maintain - better to start with a good design from the get-go.
Do not try to design your entity classes to reference the DataContext. That's exactly the kind of situation that ORMs such as Linq to SQL attempt to avoid. If your entities are actually aware of the DataContext then they're violating the encapsulation provided by Linq to SQL and leaking the implementation to public callers.
You need to have one class responsible for assembling the view models, and that class should either be aware of the DataContext itself, or various other classes that reference the DataContext. Normally the class in question is, as stated above, a domain repository of some kind that abstracts away all the database access.
P.S. Some people will insist that a repository should exclusively deal with domain objects and not presentation (view) objects, and refer to the latter as services or builders; call it what you like, the principle is essentially the same, a class that wraps your data-access classes and is responsible for loading one specific type of object (view model).
Let's say you're building an auto trading site and need to display information about the domain model (the actual car/listing) as well as some related-but-not-linked information that has to be obtained separately (let's say the price range for that particular model). So you'd have a view model like this:
public class CarViewModel
{
    public Car Car { get; set; }
    public decimal LowestModelPrice { get; set; }
    public decimal HighestModelPrice { get; set; }
}
Your view model builder could be as simple as this:
public class CarViewModelService
{
    private readonly CarRepository carRepository;
    private readonly PriceService priceService;

    public CarViewModelService(CarRepository cr, PriceService ps) { ... }

    public CarViewModel GetCarData(int carID)
    {
        var car = carRepository.GetCar(carID);
        decimal lowestPrice = priceService.GetLowestPrice(car.ModelNumber);
        decimal highestPrice = priceService.GetHighestPrice(car.ModelNumber);
        return new CarViewModel { Car = car, LowestModelPrice = lowestPrice,
                                  HighestModelPrice = highestPrice };
    }
}
That's it. CarRepository is a repository that wraps your DataContext and loads/saves Cars, and PriceService essentially wraps a bunch of stored procedures set up in the same DataContext.
It may seem like a lot of effort to create all these classes, but once you get into the swing of it, it's really not that time-consuming, and you'll ultimately find it way easier to maintain.
Update: Answers to New Questions
Where do the repository files go? In the Models folder or can I create a "Repositories" folder just for them?
Repositories are part of your model if they are responsible for persisting model classes. If they deal with view models (AKA they are "services" or "view model builders") then they are part of your presentation logic; technically they are somewhere between the Controller and Model, which is why in my MVC apps I normally have both a Model namespace (containing actual domain classes) and a ViewModel namespace (containing presentation classes).
how is the repository aware of the DataContext?
In most instances you're going to want to pass it in through the constructor. This allows you to share the same DataContext instance across multiple repositories, which becomes important when you need to write back a View Model that comprises multiple domain objects.
Also, if you later decide to start using a Dependency Injection (DI) Framework then it can handle all of the dependency resolution automatically (by binding the DataContext as HTTP-request-scoped). Normally your controllers shouldn't be creating DataContext instances, they should actually be injected (again, through the constructor) with the pre-existing individual repositories, but this can get a little complicated without a DI framework in place, so if you don't have one, it's OK (not great) to have your controllers actually go and create these objects.
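A sketch of that wiring with Ninject (the module name and bound types are illustrative; InRequestScope comes from the Ninject.Web.Common extension):

```csharp
using Ninject.Modules;
using Ninject.Web.Common; // for InRequestScope()

public class RepositoryModule : NinjectModule
{
    public override void Load()
    {
        // One DataContext per HTTP request, shared by every repository
        // the container resolves during that request.
        Bind<DataContext>().ToSelf().InRequestScope();
        Bind<IJobRepository>().To<JobRepository>();
    }
}
```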
In my Models folder I have a "Collections" folder, which really serve as the ViewModels
This is wrong. Your View Model is not your Model. View Models belong to the View, which is separate from your Domain Model (which is what the "M" or "Model" refers to). As mentioned above, I would suggest actually creating a ViewModel namespace to avoid bloating the Views namespace.
So, taking all of this where and how would the repository plug-in?
See a few paragraphs above - the repository should be injected with the DataContext and the controller should be injected with the repository. If you're not using a DI framework, you can get away with having your controller create the DataContext and repositories, but try not to cement the latter design too much, you'll want to clean it up later.
Also, what's the point of the interface?
Primarily it's so that you can change your persistence model if need be. Perhaps you decide that Linq to SQL is too data-oriented and you want to switch to something more flexible like Entity Framework or NHibernate. Perhaps you need to implement support for Oracle, MySQL, or some other non-Microsoft database. Or, perhaps you fully intend to continue using Linq to SQL, but want to be able to write unit tests for your controllers; the only way to do that is to inject mock/fake repositories into the controllers, and for that to work, they need to be abstract types.
Moving on, would it be better to have an abstraction object that collects all the information for the real object? For example an IJob object which would have all of the properties of the Job + the additional properties I may want to add such as the Name?
This is more or less what I recommended in the first place, although you've done it with a projection which is going to be harder to debug. Better to just call the SP on a separate line of code and combine the results afterward.
Also, you can't use an interface type for your Domain or View Model. Not only is it the wrong metaphor (models represent the immutable laws of your application, they are not supposed to change unless the real-world requirements change), but it's actually not possible; interfaces can't be databound because there's nothing to instantiate when posting.
So yeah, you've sort of got the right idea here, except (a) instead of an IJob it should be your JobViewModel, (b) instead of an IJobRepository it should be a JobViewModelService, and (c) instead of directly instantiating the DataContext it should accept one through the constructor.
Keep in mind that the purpose of all of this is to keep a clean, maintainable design. If you have a 24-hour deadline to meet then you can still get it to work by just shoving all of this logic directly into the controller. Just don't leave it that way for long, otherwise your controllers will (d)evolve into God-Object abominations.
Replace {SOME_REALLY_COMPLEX_QUERY_THAT_HAS_TO_BE_IN_RAW_SQL_BECAUSE_LINQ_GENERATES_CRAP_IN_THIS_INSTANCE} with a stored procedure, then have Linq to SQL import that function.
You can then call the function directly from the data context, get the results, and pass them to the view model.
I would avoid making a property that calls the data context. You should just get the value from a service or repository layer whenever you need it instead of embedding it into one of the objects created by Linq to SQL.
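A sketch of that flow, with assumed names: `GetJobNames` stands in for the method Linq to SQL generates when you import the stored procedure, and `FakeDataContext` is an in-memory stand-in for the real data context so the example is self-contained:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Job { public int Id { get; set; } }
public class JobNameRow { public int JobId { get; set; } public string Name { get; set; } }
public class JobViewModel { public int Id { get; set; } public string Name { get; set; } }

// Mimics the generated Linq to SQL context (hypothetical shape).
public class FakeDataContext
{
    public List<Job> Jobs = new List<Job>();
    public List<JobNameRow> NameRows = new List<JobNameRow>();

    // Stands in for the imported stored procedure.
    public IEnumerable<JobNameRow> GetJobNames(int jobId)
    {
        return NameRows.Where(r => r.JobId == jobId);
    }
}

// The service layer queries the entity and calls the SP separately,
// then combines both results into the view model.
public class JobViewModelService
{
    private readonly FakeDataContext _context;
    public JobViewModelService(FakeDataContext context) { _context = context; }

    public JobViewModel BuildJobViewModel(int jobId)
    {
        var job = _context.Jobs.Single(j => j.Id == jobId);      // entity query
        var row = _context.GetJobNames(jobId).SingleOrDefault(); // SP call on its own line
        return new JobViewModel { Id = job.Id, Name = row != null ? row.Name : null };
    }
}
```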
I'm attempting to test my repository using an in-memory mock context.
I implement this with an in-memory Dictionary, as most people do. The mock implements the members of my repository interface (Add, Remove, Find, etc.) against the in-memory collection.
This works fine in most scenarios:
[TestMethod]
public void CanAddPost()
{
    IRepository<Post> repo = new MockRepository<Post>();
    repo.Add(new Post { Title = "foo" });
    var postJustAdded = repo.Find(t => t.Title == "foo").SingleOrDefault();
    Assert.IsNotNull(postJustAdded); // passes
}
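For reference, a minimal dictionary-backed sketch consistent with that test might look like this (the members of `IRepository<T>` are inferred from the tests shown here, not taken from the actual code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Assumed shape of the repository abstraction.
public interface IRepository<T>
{
    void Add(T entity);
    void Remove(T entity);
    IEnumerable<T> Find(Func<T, bool> predicate);
}

// Dictionary-backed mock, keyed by a surrogate id.
public class MockRepository<T> : IRepository<T> where T : class
{
    private readonly Dictionary<int, T> _items = new Dictionary<int, T>();
    private int _nextId = 1;

    public void Add(T entity) { _items[_nextId++] = entity; }

    public void Remove(T entity)
    {
        foreach (var kv in _items)
        {
            if (ReferenceEquals(kv.Value, entity))
            {
                _items.Remove(kv.Key);
                return; // exit before the enumerator advances again
            }
        }
    }

    public IEnumerable<T> Find(Func<T, bool> predicate)
    {
        return _items.Values.Where(predicate);
    }
}
```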
However, I have the following test, which I cannot get to pass with the mock repository (it works fine for the SQL repository).
Consider that I have three repositories:
Posts (handles user content posts, like a StackOverflow question).
Locations (locations in the world, like "Los Angeles").
LocationPosts (junction table to handle the many-to-many between Posts and Locations).
Posts can be added with no Location, or with a particular Location.
Now, here's my test:
[TestMethod]
public void CanAddPostToLocation()
{
    var location = locationRepository.FindSingle(1); // Get LA
    var post = new Post { Title = "foo", Location = location }; // Create post, with LA as Location.
    postRepository.Add(post); // Add Post to repository
    var allPostsForLocation = locationPostRepository.FindAll(1); // Get all LA posts.
    Assert.IsTrue(allPostsForLocation.Contains(post)); // works for EF, fails for Mock.
}
Basically, with the "real" EF/SQL repository, when I add a Post to a particular Location, Entity Framework is smart enough to add the "LocationPost" record because of the association in the EDMX (the "LocationPosts" navigation property on the "Post" entity).
But how can I make my mock repository smart enough to "mimic" this EF intelligence?
When I do "Add" on my mock repository, it just adds to the Dictionary. It has no smarts to go "Oh, wait, you have a dependent association, let me add that to the OTHER repository for you".
My mock repository is generic, so I don't know how to put the smarts in there.
I have also looked at creating a FakeObjectContext / FakeObjectSet (as advised by Julie Lerman on her blog), but this still does not cover this scenario.
I have a feeling my mocking solution isn't good enough. Can anyone help, or provide an up-to-date article on how to properly mock an Entity Framework 4/SQL Server repository that covers my scenario?
The core of the issue is that I have one repository per aggregate root (which is fine, but is also my downfall).
So Post and Location are both aggregate roots, but neither "owns" the LocationPosts.
Therefore there are three separate repositories, and in an in-memory scenario, three separate Dictionaries. I think I'm missing the "glue" between them in my in-memory repo.
EDIT
Part of the problem is that I am using pure POCOs (no EF code generation). I also do not have any change tracking (no snapshot-based tracking, no proxy classes).
I am under the impression this is where the "smarts" happen.
At the moment, I am exploring a delegate option. I am exposing an event in my generic mock repository (void, accepting a generic T, being the entity) which I invoke after "Add". I then subscribe to this event in my "Post Repository", where I plan to add the related entities to the other repositories.
This should work. I'll post it as an answer if it does.
However, I'm not sure this is the best solution, but then again, this is only to satisfy mocking (the code won't be used for real functionality).
As I said in my EDIT, I explored the delegate option, which worked successfully.
Here's how I did it:
namespace xxxx.Common.Repositories.InMemory // note how this is an 'in-memory' repo
{
    public class GenericRepository<T> : IDisposable, IRepository<T> where T : class
    {
        public delegate void UpdateComplexAssociationsHandler<T>(T entity);
        public event UpdateComplexAssociationsHandler<T> UpdateComplexAssociations;

        // ... snip heaps of code

        public void Add(T entity) // method defined in the IRepository<T> interface
        {
            InMemoryPersistence<T>().Add(entity); // basically a List<T>
            OnAdd(entity); // fire event
        }

        public void OnAdd(T entity)
        {
            if (UpdateComplexAssociations != null) // if there are any subscribers...
                UpdateComplexAssociations(entity); // raise the event, passing through T
        }
    }
}
Then, in my in-memory "Post Repository" (which inherits from the above class):

public class PostRepository : GenericRepository<Post>
{
    public PostRepository(IUnitOfWork uow) : base(uow)
    {
        UpdateComplexAssociations +=
            new UpdateComplexAssociationsHandler<Post>(UpdateLocationPostRepository);
    }

    public void UpdateLocationPostRepository(Post post)
    {
        // do some stuff to interrogate the post, then add to LocationPost.
    }
}
You may also think "hold on, PostRepository derives from GenericRepository, so why are you using delegates? Why don't you just override Add?" The answer is that Add is an implicit interface implementation of IRepository and was not declared virtual in the base class, so it cannot be overridden.
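(Side note: an implicit interface implementation can be overridden when the base class does declare it virtual. A sketch, with hypothetical names, showing that interface dispatch then reaches the override:)

```csharp
public interface IRepository<T> { void Add(T entity); }

public class GenericRepositoryBase<T> : IRepository<T>
{
    // Declaring the implicit interface implementation virtual
    // allows subclasses to override it instead of using events.
    public virtual void Add(T entity) { /* store entity */ }
}

public class DerivedRepository : GenericRepositoryBase<string>
{
    public bool AddCalled;

    public override void Add(string entity)
    {
        base.Add(entity);
        AddCalled = true; // extra per-entity work would go here
    }
}
```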
As I said, not the best solution, but this is a mocking scenario (and a good use case for delegates). I am under the impression that not many people go "this far" in terms of mocking, pure POCOs, and repository/unit-of-work patterns (with no change tracking on the POCOs).
Hope this helps someone else out.