I'm attempting to test my repository using an in-memory mock context.
As most people do, I implement this with an in-memory Dictionary, which backs the members of my repository interface (Add, Remove, Find, etc.) against the in-memory collection.
This works fine in most scenarios:
[TestMethod]
public void CanAddPost()
{
    IRepository<Post> repo = new MockRepository<Post>();
    repo.Add(new Post { Title = "foo" });

    var postJustAdded = repo.Find(t => t.Title == "foo").SingleOrDefault();

    Assert.IsNotNull(postJustAdded); // passes
}
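For reference, the mock is little more than a dictionary behind the interface; roughly like this (a trimmed sketch, assuming IRepository<T> exposes just these members):

// requires: using System; using System.Collections.Generic; using System.Linq;
public class MockRepository<T> : IRepository<T> where T : class
{
    private readonly Dictionary<int, T> _items = new Dictionary<int, T>();
    private int _nextId = 1;

    public void Add(T entity)
    {
        _items.Add(_nextId++, entity); // key is just a surrogate id
    }

    public void Remove(T entity)
    {
        var key = _items.Single(kvp => kvp.Value == entity).Key;
        _items.Remove(key);
    }

    public IEnumerable<T> Find(Func<T, bool> predicate)
    {
        return _items.Values.Where(predicate);
    }
}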
However, I have the following test which I cannot get to pass with the mock repository (it works fine with the SQL repository).
Consider that I have three repositories:
Posts (handles user content posts, like a StackOverflow question).
Locations (locations in the world, like "Los Angeles").
LocationPosts (junction table to handle many-many between Posts/Locations).
Posts can be added on their own, or they can be added with a particular Location.
Now, here's my test:
[TestMethod]
public void CanAddPostToLocation()
{
    var location = locationRepository.FindSingle(1);             // Get LA
    var post = new Post { Title = "foo", Location = location };  // Create post, with LA as Location.
    postRepository.Add(post);                                    // Add Post to repository

    var allPostsForLocation = locationPostRepository.FindAll(1); // Get all LA posts.

    Assert.IsTrue(allPostsForLocation.Contains(post)); // works for EF, fails for Mock.
}
Basically, when I add a Post to a particular Location using the "real" EF/SQL repository, Entity Framework is smart enough to add the "LocationPost" record, because of the association in the EDMX (the "LocationPosts" navigation property on the "Post" entity).
But how can I make my mock repository smart enough to "mimic" this EF intelligence?
When I call "Add" on my mock repository, it just adds to the Dictionary. It has no smarts to go, "Oh, wait, you have a dependent association, let me add that to the OTHER repository for you".
My mock repository is generic, so I don't know how to put the smarts in there.
I have also looked at creating a FakeObjectContext / FakeObjectSet (as advised by Julie Lerman on her blog), but this still does not cover this scenario.
I have a feeling my mocking solution isn't good enough. Can anyone help, or provide an up-to-date article on how to properly mock an Entity Framework 4/SQL Server repository covering my scenario?
The core of the issue is that I have one repository per aggregate root (which is fine, but is also my downfall).
So Post and Location are both aggregate roots, but neither "own" the LocationPosts.
Therefore there are three separate repositories, and in an in-memory scenario, three separate Dictionaries. I think I'm missing the "glue" between them in my in-memory repositories.
EDIT
Part of the problem is that I am using pure POCOs (no EF code generation). I also do not have any change tracking (no snapshot-based tracking, no proxy classes).
I am under the impression this is where the "smarts" happen.
At the moment, I am exploring a delegate option. I am exposing an event in my generic mock repository (void, accepting the generic T, i.e. the entity) which I raise after "Add". I then subscribe to this event in my "Post Repository", where I plan to add the related entities to the other repositories.
This should work. I will post it as an answer if it does.
However, I'm not sure if this is the best solution; but then again, this is only to satisfy mocking (the code won't be used for real functionality).
As I said in my EDIT, I explored the delegate option, which has worked successfully.
Here's how I did it:
namespace xxxx.Common.Repositories.InMemory // note how this is an 'in-memory' repo
{
    public class GenericRepository<T> : IDisposable, IRepository<T> where T : class
    {
        public delegate void UpdateComplexAssociationsHandler<TEntity>(TEntity entity);
        public event UpdateComplexAssociationsHandler<T> UpdateComplexAssociations;

        // ... snip heaps of code

        public void Add(T entity) // method defined in the IRepository<T> interface
        {
            InMemoryPersistence<T>().Add(entity); // basically a List<T>
            OnAdd(entity); // fire the event
        }

        public void OnAdd(T entity)
        {
            if (UpdateComplexAssociations != null) // if there are any subscribers...
                UpdateComplexAssociations(entity); // raise the event, passing through the entity
        }
    }
}
Then, in my in-memory "Post Repository" (which inherits from the above class):
public class PostRepository : GenericRepository<Post>
{
    public PostRepository(IUnitOfWork uow) : base(uow)
    {
        UpdateComplexAssociations +=
            new UpdateComplexAssociationsHandler<Post>(UpdateLocationPostRepository);
    }

    public void UpdateLocationPostRepository(Post post)
    {
        // do some stuff to interrogate the post, then add the related record to LocationPost.
    }
}
You may also think: "hold on, PostRepository derives from GenericRepository, so why are you using delegates? Why don't you just override Add?" The answer is that "Add" is the interface implementation of IRepository<T> and is not virtual, so it cannot be overridden.
As I said, not the best solution, but this is a mocking scenario (and a good case for delegates). I am under the impression that not a lot of people go "this far" in terms of mocking, pure POCOs, and repository/unit-of-work patterns (with no change tracking on the POCOs).
Hope this helps someone else out.
I have been trying to implement a generic repository in MVC with Unit of Work and Dependency Injection with Ninject.
The post I have been following is this one
http://codefizzle.wordpress.com/author/darkey69/
I am not getting anything returned when I try to use the repository in my controllers. I suspect it is because nothing specifically links or injects the EFDbContext, and I cannot seem to work out how to do this.
If anyone has implemented this and can assist that would be greatly appreciated. I won't re-post my code just yet as it is all contained and explained in the post above.
While, ultimately, I would discourage you from using the UoW/Repository patterns with an ORM like Entity Framework, I can still give you an approach I've toyed with, which may be helpful even in something more appropriate for abstracting your context, like a service.
All the link you posted is doing is using generics so you don't have to define a separate implementation of IRepository for each entity type. However, you still have to manually new up an instance of the generic repository for each entity type in your IUnitOfWork implementation, and alter the interface itself to include each entity's repository instance if you want to work with the interface rather than the concrete generic instances. Not only is that cumbersome, it also violates open-closed on both UnitOfWork<T> and IUnitOfWork, and keeping them in sync with changes is going to be loads of fun as well (read: sarcasm).
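To make that concrete, here is a sketch of where that leads (Customer and Order are hypothetical entity types, and a Repository<T>(context) constructor is assumed); both the interface and the class have to grow every time a new entity type is added:

public interface IUnitOfWork
{
    IRepository<Customer> Customers { get; } // a new member is needed here...
    IRepository<Order> Orders { get; }       // ...for every entity type you add
    void Save();
}

public class UnitOfWork : IUnitOfWork
{
    private readonly DbContext _context;

    public UnitOfWork(DbContext context)
    {
        _context = context;
        Customers = new Repository<Customer>(_context);
        Orders = new Repository<Order>(_context);
    }

    public IRepository<Customer> Customers { get; private set; }
    public IRepository<Order> Orders { get; private set; }

    public void Save()
    {
        _context.SaveChanges();
    }
}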
An alternative I've toyed with, though I won't go so far as to recommend it, is to use generic methods instead of generic classes. For example, instead of something like:
public class Repository<T> : IRepository<T>
    where T : class
{
    ...

    public IEnumerable<T> GetAll()
    {
        return _dbSet;
    }
}
You might do:
public class Repository : IRepository
{
    ...

    public IEnumerable<T> GetAll<T>()
        where T : class
    {
        return context.Set<T>();
    }
}
Which means, you only need to new up one Repository instance, and you can then access any entity type in your context off that:
var repo = new Repository(context);
var foos = repo.GetAll<Foo>();
var bars = repo.GetAll<Bar>();
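For completeness, the non-generic interface behind this would expose nothing but generic methods (a sketch; the member list is assumed):

public interface IRepository
{
    IEnumerable<T> GetAll<T>() where T : class;
    void Add<T>(T entity) where T : class;
    void Remove<T>(T entity) where T : class;
}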
This, of course, negates the need entirely for a unit of work.
The reason I won't necessarily recommend this approach is that it hasn't been field tested. As I've said, I've toyed around with it personally a bit, and I feel comfortable with it myself. However, I'd very much be interested to hear what other developers think of this approach.
I'm trying to build a new application using the Repository pattern for the first time and I'm a little confused about using a Repository. Suppose I have the following classes:
public class Ticket
{
}

public class User
{
    public List<Ticket> AssignedTickets { get; set; }
}

public class Group
{
    public List<User> GroupMembers { get; set; }
    public List<Ticket> GroupAssignedTickets { get; set; }
}
I need methods that can populate these collections by fetching data from the database.
I'm confused as to which associated Repository class I should put those methods in. Should I design my repositories so that everything returning type T goes in the repository for type T as such?
public class TicketRepository
{
    public List<Ticket> GetTicketsForGroup(Group g) { }
    public List<Ticket> GetTicketsForUser(User u) { }
}

public class UserRepository
{
    public List<User> GetMembersForGroup(Group g) { }
}
The obvious drawback I see here is that I need to start instantiating a lot of repositories. What if my User also has assigned Widgets, Fidgets, and Lidgets? When I populate a User, I need to instantiate a WidgetRepository, a FidgetRepository, and a LidgetRepository all to populate a single user.
Alternatively, do I construct my repository so that everything requesting based on type T is lumped into the repository for type T as listed below?
public class GroupRepository
{
    public List<Ticket> GetTickets(Group g) { }
    public List<User> GetMembers(Group g) { }
}

public class UserRepository
{
    public List<Ticket> GetTickets(User u) { }
}
The advantage I see here is that if I now need my user to have a collection of Widgets, Fidgets, and Lidgets, I just add the necessary methods to UserRepository and don't need to instantiate a bunch of different repository classes every time I want to create a user; but now I've scattered the concerns for a user across several different repositories.
I'm really not sure which way is right, if any. Any suggestions?
The repository pattern can help you to:
Put things that change for the same reason together
As well as
Separate things that change for different reasons
On the whole, I would expect a "User Repository" to be a repository for obtaining users. Ideally, it would be the only repository that you can use to obtain users, because if you change stuff, like user tables or the user domain model, you would only need to change the user repository. If you have methods on many repositories for obtaining a user, they would all need to change.
Limiting the impact of change is good, because change is inevitable.
As for instantiating many repositories, using a dependency injection tool such as Ninject or Unity to supply the repositories, or using a repository factory, can reduce new-ing up lots of repositories.
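For example, with Ninject (a sketch; the repository interfaces are assumed names, not from the question), a single open-generic binding plus constructor injection removes the manual new-ing:

// requires: using Ninject; using Ninject.Modules;
public class RepositoryModule : NinjectModule
{
    public override void Load()
    {
        Bind(typeof(IRepository<>)).To(typeof(Repository<>)); // one binding covers every entity type
        Bind<IUserRepository>().To<UserRepository>();
        Bind<ITicketRepository>().To<TicketRepository>();
    }
}

public class TicketService
{
    private readonly IUserRepository _users;
    private readonly ITicketRepository _tickets;

    // Ninject supplies both repositories; nothing is new-ed up by hand
    public TicketService(IUserRepository users, ITicketRepository tickets)
    {
        _users = users;
        _tickets = tickets;
    }
}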
Finally, you can take a look at the concept of Domain Driven Design to find out more about the key purpose behind domain models and repositories (and also about aggregate roots, which are relevant to what you are doing).
Fascinating question with no right answer. This might be a better fit for programmers.stackexchange.com rather than stackoverflow.com. Here are my thoughts:
Don't worry about creating too many repositories. They are basically stateless objects so it isn't like you will use too much memory. And it shouldn't be a significant burden to the programmer, even in your example.
The real benefit of repositories is for mocking the repository for unit testing. Consider splitting them up based on what is simplest for the unit tests, to make the dependency injection simple and clear. I've seen cases where every query is a repository (they call those "queries" instead of repositories). And other cases where there is one repository for everything.
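For example, here is a hedged sketch with Moq (ITicketRepository, TicketService, and their members are made-up names): the class under test receives a fake repository instead of a real data access layer.

// requires: using System.Collections.Generic; using Moq;
[TestMethod]
public void GetMyTickets_ReturnsTicketsFromRepository()
{
    // fake repository that returns a canned list instead of hitting the database
    var repo = new Mock<ITicketRepository>();
    repo.Setup(r => r.GetTicketsForUser(It.IsAny<User>()))
        .Returns(new List<Ticket> { new Ticket() });

    var service = new TicketService(repo.Object); // class under test depends on the interface
    var result = service.GetMyTickets(new User());

    Assert.AreEqual(1, result.Count);
}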
As it turns out, the first option was the more practical option in this case. There were a few reasons for this:
1) When making changes to a type and its associated repository (assume Ticket), it was far easier to modify the Ticket and TicketRepository in one place than to chase down every method in every repository that used a Ticket.
2) When I attempted to use interfaces to dictate the types of queries each repository could serve, I ran into issues where a single repository couldn't implement a generic interface for type T multiple times when the only difference between the implementations was the parameter type.
3) I access data from SharePoint and a database in my implementation, and created two abstract classes to provide data tools to the concrete repositories for either Sharepoint or SQL Server. Assume that in the example above Users come from Sharepoint while Tickets come from a database. Using my model I would not be able to use these abstract classes, as the group would have to inherit from both my Sharepoint abstract class and my SQL abstract class. C# does not support multiple inheritance of abstract classes. However, if I'm grouping all Ticket-related behaviours into a TicketRepository and all User-related behaviours into a UserRepository, each repository only needs access to one type of underlying data source (SQL or Sharepoint, respectively).
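A rough sketch of what that split looks like (names are illustrative):

public abstract class SqlRepositoryBase
{
    // connection handling and query helpers for SQL Server live here
}

public abstract class SharePointRepositoryBase
{
    // list-access helpers for SharePoint live here
}

public class TicketRepository : SqlRepositoryBase
{
    public List<Ticket> GetTicketsForUser(User u) { /* SQL query */ return new List<Ticket>(); }
    public List<Ticket> GetTicketsForGroup(Group g) { /* SQL query */ return new List<Ticket>(); }
}

public class UserRepository : SharePointRepositoryBase
{
    public List<User> GetMembersForGroup(Group g) { /* SharePoint list query */ return new List<User>(); }
}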
I'm working on a system where I'd like to have my layers decoupled as much as possible, you know, some kind of modular application to be able to switch databases and stuff without a serious modification of the rest of the system.
So, I've been watching, for the x-th time, one of Robert C. Martin's talks about good practices, clean code, decoupled architecture, etc., to get some inspiration. What I find kind of strange is his description of the FitNesse system and the way they've implemented the store/load methods for WikiPages. I'm linking the video as well: Robert C. Martin - Clean Architecture and Design
What he's describing (at least from my understanding) is that the entity is aware of the mechanism for storing and loading itself from some persistence layer. When he wanted to store WikiPages in-memory, he simply overrode WikiPage and created a new InMemoryWikiPage. When he wanted to store them in a database, he did the same thing...
So, one of my questions is: what is this approach called? I've been learning the whole time about repository patterns and why classes like this should be persistence-ignorant, but I can't seem to find any material on this thing he did. Because my application will consist of modules, I think this may help to solve my problems without the need for a centralized store for my entities... every module would simply take care of itself, including persistence of its entities.
I think the code would look something like this:
public class Person : IEntity
{
    public int ID { get; set; }
    public string Name { get; set; }

    public void Save()
    {
        ..
    }

    public void Update()
    {
    }

    public void Delete()
    {
    }

    ...
}
Seems a bit weird, but... Or maybe I misunderstood what he said in the video?
My second question would be, if you don't agree with this approach, what would be the path you'd take in such modular application?
Please provide an example if possible with some explanation.
I'll answer your second question. I think you will also be interested in Dependency Injection.
I'm not an expert on DI, but I'll try to explain as clearly as I'm able to.
First off, from wikipedia:
Dependency injection is a software design pattern that allows removing hard-coded dependencies and making it possible to change them, whether at run-time or compile-time.
The primary purpose of the dependency injection pattern is to allow selection among multiple implementations of a given dependency interface at runtime, or via configuration files, instead of at compile time.
There are many libraries around that help you implement this design pattern: AutoFac, SimpleInjector, Ninject, Spring .NET, and many others.
In theory, this is what your code would look like (AutoFac example):
var containerBuilder = new ContainerBuilder();
// This is your container builder. It will be used to register interfaces
// with concrete implementations.
Then, you register concrete implementations for interface types:
containerBuilder.RegisterType<MockDatabase>().As<IDatabase>().InstancePerDependency();
containerBuilder.RegisterType<Person>().As<IPerson>().InstancePerDependency();
In this case, InstancePerDependency means that whenever you try to resolve IPerson, you'll get a new instance. It could be for example SingleInstance, so whenever you tried to resolve IPerson, you would get the same shared instance.
Then you build your container, and use it:
var container = containerBuilder.Build();
IPerson myPerson = container.Resolve<IPerson>(); //This will retrieve the object based on whatever implementation you registered for IPerson
myPerson.Id = 1;
myPerson.Save(); //Save your changes
The model I used in this example:
interface IEntity
{
    int Id { get; set; }
    string TableName { get; }
    // etc.
}

interface IPerson : IEntity
{
    void Save();
}

interface IDatabase
{
    void Save(IEntity entity);
}

class SQLDatabase : IDatabase
{
    public void Save(IEntity entity)
    {
        // Your SQL execution (very simplified)
        // yada yada INSERT INTO entity.TableName VALUES (entity.Id)
        // If you use EntityFramework it will be even easier
    }
}

class MockDatabase : IDatabase
{
    public void Save(IEntity entity)
    {
        return;
    }
}

class Person : IPerson
{
    IDatabase _database;

    public Person(IDatabase database)
    {
        this._database = database;
    }

    public void Save()
    {
        _database.Save(this);
    }

    public int Id { get; set; }

    public string TableName
    {
        get { return "Person"; }
    }
}
Don't worry, AutoFac will automatically resolve any Person Dependencies, such as IDatabase.
This way, in case you wanted to switch your database, you could simply do this:
containerBuilder.RegisterType<SqlDatabase>().As<IDatabase>().InstancePerDependency();
I wrote some oversimplified (not production-ready) code which serves just as a kickstart; google "Dependency Injection" for further information. I hope this helps. Good luck.
The pattern you posted is an Active Record.
The difference between the Repository and Active Record patterns is that in Active Record, data query, persistence, and the domain object live in one class, whereas in Repository, data persistence and querying are decoupled from the domain object itself.
Another pattern that you may want to look into is the Query Object. Unlike the repository pattern, where the number of methods grows with every possible query (filter, sorting, grouping, etc.), a query object can use a fluent interface to be expressive [1], or be a dedicated object to which you pass parameters [2].
Lastly, you may look at the Command Query Responsibility Segregation (CQRS) architecture for ideas. I personally follow it loosely, just picking up ideas that help me.
Hope this helps.
Update based on comment
One variation of the Repository pattern is this:
UserRepository
{
    IEnumerable<User> GetAllUsers()
    IEnumerable<User> GetAllByStatus(Status status)
    User GetUserById(int id)
    ...
}
This one does not scale, since the repository gets updated for every additional query that may be requested.
Another variation is to pass a query object as a parameter to the data query:
UserRepository
{
    IEnumerable<User> GetAll(QueryObject query)
    User GetUserById(int id)
    ...
}
var query = new UserQueryObject(status: Status.Single);
var singleUsers = userRepo.GetAll(query);
In the .NET world, some pass a LINQ expression instead of a QueryObject:
var singleUsers = userRepo.GetAll(user => user.Status == Status.Single);
Another variation is to dedicate the repository to retrieving a single entity by its unique identifier and saving it, while query objects are used for all other data retrieval, just like in CQRS.
Update 2
I suggest you get familiar with the SOLID principles. These principles are very helpful in guiding you towards a loosely coupled, highly cohesive architecture.
The Los Techies compilation on SOLID principles contains good introductory articles on the subject.
I am learning the repository pattern and was reading Repository Pattern with Entity Framework 4.1 and Code First
and Generic Repository Pattern - Entity Framework, ASP.NET MVC and Unit Testing Triangle
about how they implement the repository pattern with Entity Framework.
Saying
•Hide EF from upper layer
•Make code better testable
Making code more testable I do understand, but why hide EF from the upper layer?
Looking at their implementations, it seems they just wrap Entity Framework with generic methods for querying it. What is the actual reason for doing this?
I am assuming it is for:
Loose coupling (is that why EF is hidden from the upper layer?)
Avoiding repeatedly writing the same LINQ statements for the same queries
Am I understanding this correctly?
If I write a DataAccessLayer, which is a class with methods like
QueryFooObject(int id)
{
    .. // query foo from entity framework
}

AddFooObject(Foo obj)
{
    .. // add foo to entity framework
}

......

QueryBarObject(int id)
{
    ..
}

AddBarObject(Bar obj)
{
    ...
}
Is that also a Repository Pattern?
Explaination for dummy will be great :)
I don't think you should.
The Entity Framework is already an abstraction layer over your database. The context uses the unit of work pattern and each DBSet is a repository. Adding a Repository pattern on top of this distances you from the features of your ORM.
I talked about this in my blog post:
http://www.nogginbox.co.uk/blog/do-we-need-the-repository-pattern
The main reason for adding your own repository implementation is so that you can use dependency injection and make your code more testable.
EF is not very testable out of the box, but it's quite easy to make a mockable version of the EF data context with an interface that can be injected.
I talked about that here:
http://www.nogginbox.co.uk/blog/mocking-entity-framework-data-context
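A rough sketch of the kind of injectable context I mean (entity names assumed; this is not the code from the linked post):

// requires: using System; using System.Data.Entity;
public interface IDataContext : IDisposable
{
    IDbSet<Post> Posts { get; }
    IDbSet<Location> Locations { get; }
    int SaveChanges();
}

public class EfDataContext : DbContext, IDataContext
{
    // DbContext initialises IDbSet<T> properties and already provides SaveChanges/Dispose
    public IDbSet<Post> Posts { get; set; }
    public IDbSet<Location> Locations { get; set; }
}

Consumers depend on IDataContext, so tests can inject a fake with in-memory IDbSet implementations instead of a real database.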
If we don't need the repository pattern to make EF testable then I don't think we need it at all.
One thing is to increase testability and have loose coupling to the underlying persistence technology. But you will also have one repository per aggregate root object (e.g. an order can be an aggregate root, which also has order lines, which are not aggregate roots), to make domain object persistence more generic.
It also makes it much easier to manage objects, because when you save an order, it will also save its child items (the order lines).
It's also an advantage to keep your queries in a central place; otherwise your queries are scattered around and are harder to maintain.
And the first point you mention, "to hide EF", is a good thing! For instance, saving logic can be hard to implement. There are multiple strategies that apply best in different scenarios, especially when it comes to saving entities which also have changes in related entities.
Using repositories (in combination with UnitOfWork) can centralize this logic too.
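As a hedged sketch of that idea (Order and OrderLine are example types, not from the question): the repository only deals with the aggregate root, and the children ride along when the unit of work saves.

public class OrderRepository
{
    private readonly DbContext _context;

    public OrderRepository(DbContext context)
    {
        _context = context;
    }

    public void Add(Order order)
    {
        // adding the root is enough; EF also tracks the new OrderLines reachable from it
        _context.Set<Order>().Add(order);
    }
}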
Here are some videos with a nice explanation.
Repository systems are good for testing.
One reason being that you can use Dependency Injection.
Basically, you create an interface for your repository, and you reference that interface when you are making the object. Then you can later make a fake object (using Moq, for instance) which implements that interface. Using something like Ninject you can then bind the proper type to that interface. Boom, you've just taken a dependency out of the equation and replaced it with something testable.
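A minimal sketch of that wiring (IPostRepository, EfPostRepository, and PostsController are made-up names; requires the Moq and Ninject packages):

// production binding
var kernel = new StandardKernel();
kernel.Bind<IPostRepository>().To<EfPostRepository>();

// in a test, a Moq fake stands in for the real repository
var fake = new Mock<IPostRepository>();
fake.Setup(r => r.Find(1)).Returns(new Post { Title = "foo" });
kernel.Rebind<IPostRepository>().ToConstant(fake.Object);

var controller = kernel.Get<PostsController>(); // the fake is injected via the controller's constructor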
The idea is to be able to easily swap out implementations of objects for testing purposes.
Hope that makes sense.
The same reason you don't hard-code file paths in your app: loose coupling and encapsulation. Imagine an app with hard-coded references to "c:\windows\fonts" and the problems that can cause. You shouldn't hard-code references to paths, so why should you hard-code references to your persistence layer? Hide your paths behind config settings (or special folders, or whatever your OS supports) and hide your persistence behind a repository. It will be much easier to unit test, deploy to other environments, swap implementations, and reason about your domain objects if the persistence concerns are hidden behind a repository.
When you design your repository classes to resemble your domain objects, to provide the same data context to all repositories, and to facilitate the implementation of a unit of work, the repository pattern makes sense. Please find below a contrived example.
public class StudentRepository
{
    dbcontext ctx;

    public StudentRepository(dbcontext ctx)
    {
        this.ctx = ctx;
    }

    public void EnrollCourse(int courseId)
    {
        // contrived: register the student's enrolment through the shared context
        this.ctx.Courses.Add(new Course() { CourseId = courseId });
    }
}

public class TeacherRepository
{
    dbcontext ctx;

    public TeacherRepository(dbcontext ctx)
    {
        this.ctx = ctx;
    }

    public void EngageCourse(int courseId)
    {
        // contrived: register the teacher's engagement through the shared context
        this.ctx.Courses.Add(new Course() { CourseId = courseId });
    }
}

public class MyUnitOfWork : IDisposable
{
    dbcontext ctx;
    private StudentRepository _studentRepository;
    private TeacherRepository _teacherRepository;

    public MyUnitOfWork(dbcontext ctx)
    {
        this.ctx = ctx;
    }

    public StudentRepository StudentRepository
    {
        get
        {
            if (_studentRepository == null)
                _studentRepository = new StudentRepository(this.ctx);
            return _studentRepository;
        }
    }

    public TeacherRepository TeacherRepository
    {
        get
        {
            if (_teacherRepository == null)
                _teacherRepository = new TeacherRepository(this.ctx);
            return _teacherRepository;
        }
    }

    public void Commit()
    {
        this.ctx.SaveChanges();
    }

    public void Dispose()
    {
        this.ctx.Dispose();
    }
}

// some controller method
public void Register(int courseId)
{
    using (var uw = new MyUnitOfWork(new dbcontext()))
    {
        uw.StudentRepository.EnrollCourse(courseId);
        uw.TeacherRepository.EngageCourse(courseId);
        uw.Commit();
    }
}
I know it is bad to provide only links in an answer here; however, I wanted to share a video which explains various advantages of the repository pattern when using it with Entity Framework. Below is the YouTube link.
https://www.youtube.com/watch?v=rtXpYpZdOzM
It also provides details about how to implement the repository pattern properly.
I am quite new to the FNH and NH world, so be gentle :P
I have created an application using FNH for data access, which works well while not using lazy-loading; however, once I enable lazy-loading everything goes pear-shaped (as in, no session is open when I attempt to access the lazy-loaded properties, etc.).
The application layout I have created thus-far has a "Database" singleton which has various methods such as Save(), Refer() and List().
When calling Refer(), a session is opened, the data is retrieved, and the session is disposed, meaning there is no session available when attempting to access a lazy-loaded property on the returned object. Example: Database.Refer("username").Person fails, since Person is lazy-loaded and the session has already closed.
I have read that Castle has a SessionManager that could be used for this very scenario, but (either it's the late nights or the lack of coffee) I can't seem to work out how to hook FNH up to use this manager as, in the spirit of Castle, everything is defined in config files.
Am I missing something, or can this not be done? Are there any other session managers (or even more appropriate conventions) that I should look at?
Thanks for any help on this matter.
I don't think that your particular problem is connected with the SessionManager as you've already mentioned that you are capable of starting a new session and disposing it whenever needed.
From what I can understand of your post, you are trying to expose an entity (with some lazy-loaded properties) to your view, which is already a bad idea because it leads to nasty LazyInitializationExceptions.
You should consider making a distinction between your data model and your domain model. The key concept has been described on this blog:
Ayende @ Rahien
http://ayende.com/blog/4054/nhibernate-query-only-properties
If you are writing a very simple 2-tier application, then it probably will not hurt to micro-manage your session in the data layer (but keep in mind that this is not the best solution).
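For example (a sketch; sessionFactory and the entity names are assumptions), the data layer can force-load what the view needs before the session goes away:

using (var session = sessionFactory.OpenSession())
{
    var user = session.Get<User>(userId);
    NHibernateUtil.Initialize(user.Person); // load the lazy association now, while the session is still open
    return user;
}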
I would also look into the query that fetches your entity, as it seems to me that you are trying to obtain data that is just a part of your model (in this case, Person). This can lead to serious problems like N+1 selects:
What is SELECT N+1?
So in general I think you should focus more on how things are structured in your application instead of searching for a SessionManager as it will not resolve all of your problems.
For any of you who are still looking for answers on this, I will share with you what I have so far.
This is only a very simple overview of the framework that I have decided to use, and is by far not the only solution for this problem.
The basic layout of my code is as follows:
NHibernate Repository
(references my model assembly and the UoW assembly)
Based on Hibernating Rhinos' Repository implementation, modified to suit my needs. Found here: http://ayende.com/Wiki/Rhino+Commons.ashx
public T Get(Guid Id)
{
    return WrapUOW(() =>
    {
        using (Audit.LockAudit())
            return (T)Session.Get(typeof(T), Id);
    });
}
public void LoadFullObject(T partial)
{
    if (partial == null)
        throw new ArgumentNullException("partial");

    if (partial.Id == Guid.Empty)
        return;

    WrapUOW(() =>
    {
        using (Audit.LockAudit())
        {
            LazyInitialiser.InitialiseCompletely(partial, Session);
        }
    });
}
public T SaveOrUpdate(T entity)
{
    using (Audit.LockAudit())
    {
        With.Transaction(() =>
        {
            Enlist(entity);
            Session.SaveOrUpdate(entity);
            entity.HasChanged = false;
        });
    }

    return entity;
}
protected void Enlist(T instance)
{
    if (instance != null && instance.Id != Guid.Empty && !Session.Contains(instance))
        using (Audit.LockAudit())
        {
            Session.Update(instance);
        }
}
References a neat little helper class called 'Lazy Initializer for NHibernate' found here: http://www.codeproject.com/KB/cs/NHibernateLazyInitializer.aspx
This also contains Extension methods for Save, Delete and LoadFullObject
I have broken standards a little in this assembly by also creating a WrapUOW method to help simplify some of my code:
protected static T WrapUOW(Func<T> action)
{
    IUnitOfWork uow = null;
    if (!UnitOfWork.IsStarted)
        uow = UnitOfWork.Start();

    T result = action();

    if (uow != null)
        uow.Dispose();

    return result;
}
NHibernate Unit of work
(references my model assembly)
Also based on Hibernating Rhinos' UoW implementation, modified to suit
View - not important, just required for the MVVM implementation
Binds the values from the ViewModel
Model
Contains my entity classes and hibernate mapping files
ViewModel
Contains two main view base classes, ListPage and MaintenancePage
The ListPage base class just calls the Repository List method based on the object type we are listing. This loads a dehydrated list of entities.
The MaintenancePage takes an entity instance from the ListPage and calls the Repository.LoadFullObject method to rehydrate the entity for use on the screen.
This allows for the use of binding on the screen.
We can also safely call the Repository.SaveOrUpdate method from this page
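As a rough usage sketch of that flow (Repository<Post> stands in for my concrete repository; the names are illustrative):

// on the maintenance page: rehydrate the entity chosen on the list page, bind it, then save
var repository = new Repository<Post>();

repository.LoadFullObject(selectedPost);   // initialise lazy members so screen binding works
selectedPost.Title = "edited on screen";
repository.SaveOrUpdate(selectedPost);     // persisted inside a unit of work by the repository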