Relationship between EF-generated classes and the model? (C#)

I'm using ASP.NET MVC (C#) and Entity Framework (database first) for a project.
Let's say I'm on a "Movie detail" page which shows the details of one movie from my database. I can click on each movie and edit each one.
Therefore, I have a Movie class, and a Database.Movie class generated with EF.
My index action looks like:
public ActionResult MovieDetail(int id)
{
    Movie movie = Movie.GetInstance(id);
    return View("MovieDetail", movie);
}
The GetInstance method is supposed to return an instance of the Movie class, and looks like this for the moment:
public static Movie GetInstance(int dbId)
{
    using (var db = new MoviesEntities())
    {
        Database.Movie dbObject = db.Movies.SingleOrDefault(r => r.Id == dbId);
        if (dbObject != null)
        {
            Movie m = new Movie(dbObject.Id, dbObject.Name, dbObject.Description);
            return m;
        }
        return null;
    }
}
It works fine, but is this a good way to implement it? Is there another, cleaner way to get my instance of the Movie class?
Thanks

is this a good way to implement it?
That's a very subjective question. It's valid, and there's nothing technically wrong with this implementation. For my small-size home projects, I've used similar things.
But for business applications, it's better to keep your entities unrelated to your MVC application. This means that your data context + EF + generated entities should be kept in a separate project (let's call it the 'Data' project), and the actual data is passed in the form of a DTO.
So if your entity resembles this:
public class Person {
    public int Id { get; set; }
    public string Name { get; set; }
}
You'd expect there to be an equivalent DTO class that is able to pass that data:
public class PersonDTO {
    public int Id { get; set; }
    public string Name { get; set; }
}
This means that your 'Data' project only replies with DTO classes, not entities.
public static MovieDTO GetInstance(int dbId)
{
    ...
}
It makes the most sense that your DTOs are also in a separate project. The reason for all this abstraction is that when you have to change your datacontext (e.g. the application will start using a different data source), you only need to make sure that the new data project also communicates with the same DTOs. How it works internally, and which entities it uses, is only relevant inside the project. From the outside (e.g. from your MVC application), it doesn't matter how you get the data, only that you pass it in a form that your MVC projects already understand (the DTO classes).
All your MVC controller logic will not have to change, because the DTO objects haven't changed. This could save you hours. If you link the entity to your Controller AND View, you'll have to rewrite both if you suddenly decide to change the entity.
If you're worried about the amount of code you'll have to write for converting entities to DTOs and vice versa, you can look into tools like AutoMapper.
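For illustration, here is a minimal sketch of what that conversion step could look like with AutoMapper's classic static API (newer versions configure a MapperConfiguration instead); it assumes MovieDTO's members mirror the Database.Movie entity:

// Configure the map once, e.g. at application startup.
Mapper.CreateMap<Database.Movie, MovieDTO>();

// Inside the 'Data' project:
public static MovieDTO GetInstance(int dbId)
{
    using (var db = new MoviesEntities())
    {
        Database.Movie dbObject = db.Movies.SingleOrDefault(r => r.Id == dbId);
        return dbObject == null
            ? null
            : Mapper.Map<Database.Movie, MovieDTO>(dbObject);
    }
}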
The main question: Is this needed?
That, again, is a very subjective question. It's relative to the scope of the project, but also the expected lifetime of the application. If it's supposed to be used for a long time, it might be worth it to keep everything interchangeable. If this is a small scale, short lifetime project, the added time to implement this might not be worth it.
I can't give you a definitive answer on this. Evaluate how well you want the application to adapt to changes, but also how likely it is that the application will change in the future.
Disclaimer: This is how we do it at the company where I work. This is not the only solution to this type of problem, but it's the one I'm familiar with. Personally, I don't like making abstractions unless there's a functional reason for it.

A few things:
The naming you're using is a little awkward and confusing. Generally, you don't ever want to have two classes in your project named the same, even if they're in different namespaces. There's nothing technically wrong with it, but it creates confusion. Which Movie do I need here? And if I'm dealing with a Movie instance, is it Movie or Database.Movie? If you stick to names like Movie and MovieDTO or Movie and MovieViewModel, the class names clearly indicate the purpose (lack of suffix indicates a database-backed entity).
Especially if you're coming from another MVC framework like Rails or Django, ASP.NET's particular flavor of MVC can be a little disorienting. Most other MVC frameworks have a true Model, a single class that functions as the container for all the business logic and also acts as a repository (which could be considered business logic, in a sense). ASP.NET MVC doesn't work that way. Your entities (classes that represent database tables) are and should be dumb. They're just a place for Entity Framework to stuff data it pulls from the database. Your Model (the M in MVC) is really more a combination of your view models and your service/DAL layer. Your Movie class (not to be confused with Database.Movie... see why that naming bit is important) on the other hand is trying to do triple duty, acting as the entity, view model and repository. That's simply too much. Your classes should do one thing and do it well.
Again, if you have a class that's going to act as a service or repository, it should be an actual service or repository, with everything those patterns imply. Even then, you should not instantiate your context in a method. The easiest correct way to handle it is to simply have your context be a class instance variable. Something like:
public class MovieRepository
{
    private readonly MoviesEntities context;

    public MovieRepository()
    {
        this.context = new MoviesEntities();
    }
}
Even better, though, is to use inversion of control and pass in the context:
public class MovieRepository
{
    private readonly MoviesEntities context;

    public MovieRepository(MoviesEntities context)
    {
        this.context = context;
    }
}
Then, you can employ a dependency injection framework, like Ninject or Unity, to satisfy the dependency for you (preferably with a request-scoped object) whenever you need an instance of MovieRepository. That's a bit high-level if you're just starting out, though, so it's understandable if you hold off on going all the way for now. However, you should still design your classes with this in mind. The point of inversion of control is to abstract dependencies (things like the context for a class that needs to pull entities from the database), so that you can switch out these dependencies if the need should arise (say, if there comes a time when you're going to retrieve the entities from a Web API instead of through Entity Framework, or even if you just decide to switch to a different ORM, such as NHibernate). In your code's current iteration, you would have to touch every method (and make changes to your class in general, violating the open/closed principle).
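As a rough illustration only, the container wiring could look something like this with Ninject (the module name is made up; Unity or another container would be configured along the same lines):

using Ninject.Modules;
using Ninject.Web.Common;   // for InRequestScope()

public class RepositoryModule : NinjectModule
{
    public override void Load()
    {
        // One context per HTTP request, shared by every repository in that request.
        Bind<MoviesEntities>().ToSelf().InRequestScope();
        Bind<MovieRepository>().ToSelf().InRequestScope();
    }
}

The controller then simply declares a MovieRepository constructor parameter and the container supplies it.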

An entity model should never act as a view model. Offering data to the views is the essential role of the view model. A view model can easily be recognized because it doesn't have any role or responsibility other than holding data and the business rules that act solely upon that data. It thus has all the advantages of any other pure model, such as unit-testability.
A good explanation of this can be found in Dino Esposito’s The Three Models of ASP.NET MVC Apps.

You can use AutoMapper
What is AutoMapper?
AutoMapper is a simple little library built to solve a deceptively complex problem - getting rid of code that mapped one object to another. This type of code is rather dreary and boring to write, so why not invent a tool to do it for us?
How do I get started?
Check out the getting started guide.
Where can I get it?
First, install NuGet. Then, install AutoMapper from the package manager console:
PM> Install-Package AutoMapper


Using DDD and AutoMapper, how do you do work on the same aggregate root in multiple services within a single unit of work?

I am trying to learn and implement domain driven design in a non-web based project. I have a main loop that will do multiple procedures on a lot of entities in a single unit of work. I don't want any of the changes to be persisted unless the entire loop's work is successful. I'm using AutoMapper to convert persistence models to domain models within a repository, and my services are using the repository to retrieve data before doing work.
There are some elements of DDD that are not working well with my project and I am hoping someone can tell me what I have wrong about the whole process.
Here are the DDD ideas I'm struggling with:
Domain services should be used when a process involves multiple aggregate roots interacting with each other
You should pass aggregate root Ids into domain services which will then use repositories to load them
Repositories should return domain models that it constructs from mapped persistence models (in this case I am using AutoMapper)
Here is an example of what I'm trying to do.
using (var scope = serviceProvider.CreateScope())
{
    var unitOfWork = scope.ServiceProvider.GetService<IUnitOfWork>();
    var aggregate1Repo = scope.ServiceProvider.GetService<IAggregate1Repository>();
    var aggregate2Repo = scope.ServiceProvider.GetService<IAggregate2Repository>();
    var aggregate3Repo = scope.ServiceProvider.GetService<IAggregate3Repository>();
    var firstService = scope.ServiceProvider.GetService<IFirstService>();
    var secondService = scope.ServiceProvider.GetService<ISecondService>();

    var aggregate1 = aggregate1Repo.Find(1);    // first copy of aggregate1
    var aggregate2 = aggregate2Repo.Find(1000);
    var aggregate3 = aggregate3Repo.Find(123);

    aggregate1.DoSomeInternalWork();
    firstService.DoWork(aggregate1.Id, aggregate2.Id);
    secondService.DoWork(aggregate1.Id, aggregate3.Id);

    aggregate1Repo.Update(aggregate1);
    unitOfWork.Commit();
}
Aggregate1Repo:
public class Aggregate1Repository : IAggregate1Repository
{
    private readonly AppDBContext _dbContext;
    private readonly IMapper _mapper;

    public Aggregate1Repository(AppDBContext context, IMapper mapper)
    {
        _dbContext = context;
        _mapper = mapper;
    }

    public Aggregate1 Find(int id)
    {
        // AsNoTracking() returns an IQueryable, so query by key instead of calling DbSet.Find
        return _mapper.Map<Aggregate1>(_dbContext
            .SomeDBSet.AsNoTracking()
            .SingleOrDefault(x => x.Id == id));
    }
}
FirstService:
public class FirstService : IFirstService
{
    private readonly IAggregate1Repository _agg1Repo;
    private readonly IAggregate2Repository _agg2Repo;

    public FirstService(IAggregate1Repository agg1Repo, IAggregate2Repository agg2Repo)
    {
        _agg1Repo = agg1Repo;
        _agg2Repo = agg2Repo;
    }

    public void DoWork(int aggregate1Id, int aggregate2Id)
    {
        var aggregate1 = _agg1Repo.Find(aggregate1Id); // second copy of aggregate1
        var aggregate2 = _agg2Repo.Find(aggregate2Id);
        // do some calculations and modify aggregate1 in some fashion
        // I could update aggregate1 in the repository here,
        // but this copy of aggregate1 doesn't have the changes made prior to this point
    }
}
SecondService:
public class SecondService : ISecondService
{
    private readonly IAggregate1Repository _agg1Repo;
    private readonly IAggregate3Repository _agg3Repo;

    public SecondService(IAggregate1Repository agg1Repo, IAggregate3Repository agg3Repo)
    {
        _agg1Repo = agg1Repo;
        _agg3Repo = agg3Repo;
    }

    public void DoWork(int aggregate1Id, int aggregate3Id)
    {
        var aggregate1 = _agg1Repo.Find(aggregate1Id); // third copy of aggregate1
        var aggregate3 = _agg3Repo.Find(aggregate3Id);
        // do some calculations and modify aggregate1 in some fashion
        // I could update aggregate1 in the repository here,
        // but this copy of aggregate1 doesn't have the changes made prior to this point
    }
}
The problem here is that I'd essentially be doing work to three different copies of aggregate1 since a new object is created by AutoMapper in the repository each time I try to load it. I could put separate calls to aggregate1Repo.Update in the two services, but I'd still be working on three different objects that all represent the same thing. I feel like I must have a fundamental flaw in my thinking, but I don't know what it is.
First off, your problem isn't really related to DDD. It's just a typical ORM/AutoMapper issue.
You should NEVER use AutoMapper to map TO a persistence model or a domain model; this will almost never work.
The reason for this is that most ORMs (e.g. Entity Framework) track entities and their changes via references. So if you use AutoMapper and get new instances back, you break the way the ORM works and run into exactly these problems.
This may be an interesting read for you: Why mapping DTOs to Entities using AutoMapper and EntityFramework is horrible
While it handles DTO -> Entity, the same applies to Domain Model -> Entity.
Also, Jimmy Bogard (the author of AutoMapper) once commented on a blog post (which is unavailable now, but the Disqus comments are still there).
Jimmy Bogard commented:
There are definitely places to use AutoMapper, and places not to use it. However, I think this post misses them:
Configuration validation takes care of members that exist on the destination type that aren't mapped. That's easy.
Dependency injection takes care of depending directly on other assemblies. For example, you'd have IRepository in Core and the implementation that references System.Data in another assembly.
AutoMapper was never, ever intended to map back into a behavioral model. AutoMapper is intended to build DTOs, not map back in.
AutoMapper also uses Reflection.Emit and expression tree compilation, cached once. If you use autoprojection, it's faster than any server-side code you could write yourself.
The points you raised are common complaints, but mostly it's people not understanding how to use AutoMapper correctly. However, there are places I absolutely wouldn't use AutoMapper:
When the destination type isn't a projection of the source type. Seems obvious; if AutoMapper isn't Auto then there's no point. It's supposed to get rid of the brain-dead code you would be forced to write anyway.
Mapping to complex models. I only use AutoMapper to flatten/project, never back to behavioral models. I'm very up front about this and discourage this use whenever I see it.
Anywhere that you're not trying to delete code you would have written anyway.
You prefer explicit over convention. This is a whole other topic, with pros and cons of both approaches.
You prefer not to understand the magic. I build lots of convention-based helpers covering a wide array of scenarios, but I make sure that my team understands what is actually happening underneath the covers.
Your options basically boil down to
Use event sourcing for your domain model (and build it as a series of events inside the repository, so for persistence you only save new models)
OR
use your domain model directly as persistence model.
The latter will cause some persistence details to leak into your domain model. This may or may not be acceptable for your use case. It usually works well in smaller projects where Event Sourcing is out of scope.
As for the rest of your example, it's a bit too far from a practical use case, and it's hard to say why your services are created that way.
It could be a badly chosen aggregate root or a wrong separation of concerns; it's hard to tell from abstract names such as SecondService.
An aggregate root can be seen as a transaction boundary. All entities within that root need to be updated at the same time.
The fact that you pass only ids into the DoWork methods indicates that they are different operations (and hence transactions on their own), or that only the id should be assigned.
If they are supposed to be used in an outer scope, you should pass the aggregate root references into the services rather than only the ids:
firstService.DoWork(aggregate1, aggregate2);
secondService.DoWork(aggregate1, aggregate3);
// instead of
firstService.DoWork(aggregate1.Id, aggregate2.Id);
secondService.DoWork(aggregate1.Id, aggregate3.Id);
You can't (and shouldn't) rely on the fact that some ORMs may cache an entity; in other words, don't rely on multiple calls to your repository returning the exact same instance of the entity.
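To make that concrete, here is a hedged sketch of the service signature this implies (what happens inside is still whatever your calculations are):

// Sketch only: the service receives the aggregates the caller already loaded,
// so every step in the unit of work mutates the same instances.
public void DoWork(Aggregate1 aggregate1, Aggregate2 aggregate2)
{
    // do some calculations and modify aggregate1 in some fashion;
    // no repository call here - the outer scope persists aggregate1 once
    // and commits the unit of work.
}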

Saving data in MVC without Entity Framework?

The majority of MVC examples I see seem to use Entity Framework. I am currently writing an MVC application that will not use EF (using Dapper instead) and I'm wondering where I should include the data persistence logic?
My first thought was to place it along with the classes for my model. This would mean that my model classes would look something like below:
class User
{
    public int id { get; set; }
    public string name { get; set; }

    public void Create(string name)
    {
        // dapper to perform insert
    }

    public void Remove(int id)
    {
        // dapper to perform delete
    }

    // Update(), Delete() etc.
}
But I haven't used MVC much so I'm not sure if this is a common practice or not.
Is placing data persistence logic in with the Model a good practice or should I be taking a different approach?
Also, I believe that Stack Exchange uses MVC and Dapper - if anyone knows of anywhere that they have spoken about how they structured their code, feel free to point me towards it.
You wouldn't expect to have to open your computer and press a button on the hard drive to save data to it, would you?
Basically the purpose of the MVC pattern, and SOLID design principles in general, is to separate your concerns. Putting logic related to saving, modifying or updating your database inside of your model, whose responsibility is to be an object that contains data, is counter to the philosophy of the pattern you're supposed to subscribe to in MVC.
Your controller is where you want to perform the logic to save your information, but there is still a data access layer that the concern of interacting with the database is abstracted to.
So you would have:
public class MyController : Controller {
    IDataAccessLayer _dataAccessLayer;

    public MyController(IDataAccessLayer dataAccessLayer) {
        _dataAccessLayer = dataAccessLayer;
    }

    public ActionResult Create(Model myModel) {
        _dataAccessLayer.InsertIntoDatabase(myModel);
        return View();
    }
}
By design of MVC, persistence should never be a problem; you can use any ORM you want. For the MVC pattern to work you need a Model (and possibly a ViewModel) to display your data in the View, and the Controller handles the flow of the application.
From the controller you can call any process to save your data.
I think you can use the Repository pattern with Dapper just as well as with EF; a rough sketch follows below.
The main thing to take care of is that your application should not be persistence-aware. You can develop with Dapper and later also provide support for EF without much change at your UI level.
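For illustration, a minimal sketch of a Dapper-backed repository for the User class from the question, assuming a Users table with Id and Name columns (connection string handling is simplified):

using System.Data.SqlClient;
using System.Linq;
using Dapper;

public class UserRepository
{
    private readonly string _connectionString;

    public UserRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public User GetById(int id)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            // Dapper maps the Id/Name columns onto the User properties.
            return connection.Query<User>(
                "SELECT Id, Name FROM Users WHERE Id = @Id", new { Id = id })
                .SingleOrDefault();
        }
    }

    public void Create(User user)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Execute(
                "INSERT INTO Users (Name) VALUES (@Name)", new { Name = user.name });
        }
    }
}

Swapping this class for an EF-based implementation behind the same interface is then invisible to the controllers.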

ASP.NET MVC Design Issues

What's the difference between Model and ViewModel? Should I use both of them, or can I skip one of them? Who grabs the data from the database?
I wonder what's the best/right way to take my data from the database.
One option is to have a DAL (Data Access Layer) and instantiate it in every controller,
then fill the view models with it, like:
var viewmodel = Dal.GetArticles();
Another option is to let the model itself grab the information from the Database:
var model = new ArticlesModel();
model.GetArticles();

public void GetArticles()
{
    var context = new DbContext();
    _articles = context.Articles;
}
Another similar option is to have a static DAL that you can access inside every model,
so each model will have a method to grab the data using the static DAL class (which contains a DbContext instance inside to access the database):
public void GetArticles()
{
    _articles = DAL.GetArticles();
}
So the general question is whether the model itself needs to grab the data from the database, or whether the controller itself can have access to the DAL.
While someone is writing a more useful answer, I will quickly address your points.
Model is the data you want to display.
More often than not, you will want to use object relational mapping so most of your business object classes correspond to database tables and you don't have to manually construct queries.
There are plenty of ORM solutions available, including Entity Framework, NHibernate and (now dying) LINQ to SQL.
There is also an awesome micro-ORM called Dapper which you may like if bigger frameworks feel unneccessarily bloated for your solution.
Make sure you learn about the differences between them.
DAL is more idiomatic in .NET than classes that “know” how to load themselves.
(Although in practice your solution will very likely be a mixture of both approaches—the key is, as usual, to keep the balance.)
My advice is to try keeping your models plain old CLR objects as long as your ORM allows it and as long as this doesn't add an extra level of complexity to the calling code.
These objects, whenever possible (and sensible; there are exceptions to any rule!), should not be tied to a particular database or ORM implementation.
Migrating your code to another ORM, if needed, will then be just a matter of rewriting the data access layer.
You should understand, however, that this is not the main reason to separate DAL.
It is highly unlikely you'll want to change an ORM in the middle of the project, unless your initial choice was really unfit for the purpose or you suddenly gain 100,000 users and your ORM can't handle it. Optimizing for this in the beginning is downright stupid, because it distracts you from creating a great product capable of attracting even a fraction of the hits you're optimizing for. (Disclaimer: I've walked this path before.)
Rather, the benefit of a DAL is that your database access becomes explicit and constrained to the places where you want it to happen. For example, a view that received a model object to display will not be tempted to load something from the database, because that is in fact the job of the controller.
It's also generally good to separate things like business logic, presentation logic and database logic; more often than not it results in better, less bug-ridden code. Also: you are likely to find it difficult to unit-test any code that relies on objects being loaded from the database. On the other hand, creating a "fake" in-memory data access layer is trivial with LINQ.
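For example, a minimal sketch of such a fake (IArticleDal and Article are assumed names, not from the original post; it needs System.Collections.Generic and System.Linq):

// A fake DAL backed by an in-memory list; handy for unit tests.
public class InMemoryArticleDal : IArticleDal
{
    private readonly List<Article> _articles = new List<Article>();

    public void Add(Article article)
    {
        _articles.Add(article);
    }

    public IEnumerable<Article> GetArticles()
    {
        return _articles.ToList();   // plain LINQ to Objects, no database involved
    }
}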
Please keep in mind that, again, there are exceptions to this rule, like lazy properties generated by many ORMs that will load the associated objects on the go, even if called within a view. So what matters is that you make an informed decision about when to allow data access and why. Syntactic sugar might be useful, but if your team has no idea about the performance implications of loading 20,000 objects from the ORM, it will become a problem.
Before using any ORM, learn how it works under the hood.
Choosing between Active Record-style objects and a DAL is mostly a matter of taste, common idioms in .NET, team habits and the possibility that DAL might eventually have to get replaced.
Finally, ViewModels are a different kind of beast.
Try to think of them like this:
You shouldn't have any logic in views that is more sophisticated than an if-then-else.
However, there often is some sophisticated logic in showing things.
Think pagination, sorting, combining different models in one view, understanding UI state.
These are the kinds of thing a view model could handle.
In simple cases, it just combines several different models into one “view-model”:
class AddOrderViewModel {
    // So the user knows what is already ordered
    public IEnumerable<Order> PreviousOrders { get; set; }

    // Object being added (keeping a reference in case of validation errors)
    public Order CurrentOrder { get; set; }
}
Models are just data, controllers combine the data and introduce some logic to describe data to be shown in view models, and views just render view models.
View models also serve as a kind of documentation. They answer two questions:
What data can I use in a view?
What data should I prepare in controller?
Instead of passing objects into ViewData and remembering their names and types, use generic views and put stuff in ViewModel's properties, which are statically typed and available with IntelliSense.
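As a rough sketch, a controller action returning such a strongly typed view model could look like this (orderRepository and the customerId parameter are assumptions for the example):

public ActionResult AddOrder(int customerId)
{
    var viewModel = new AddOrderViewModel
    {
        PreviousOrders = orderRepository.GetOrdersFor(customerId), // assumed repository method
        CurrentOrder = new Order()
    };
    return View(viewModel);   // the view declares AddOrderViewModel as its model type
}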
Also, you'll likely find it useful to create ViewModel hierarchies (but don't take it to extremes!). For example, if your site-wide navigation changes from breadcrumbs to something else, it's cool to just replace a property on the base view model, a partial view to display it, and the logic to construct it in the base controller. Keep it sensible; a small sketch follows.
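A minimal sketch of such a hierarchy (all names here are hypothetical):

public abstract class BaseViewModel
{
    // Site-wide concerns live on the base class, so swapping breadcrumbs for another
    // navigation scheme touches one property, one partial view and the base controller.
    public IList<string> Breadcrumbs { get; set; }
}

public class ArticleListViewModel : BaseViewModel
{
    public IEnumerable<Article> Articles { get; set; }   // Article is an assumed model class
}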
A model represents the structure you'd like your data in and is not concerned with the view that may consume it. A model's intent is purely to represent that structure.
A model may contain properties irrelevant to the view consuming it.
A view-model is designed with the view in mind. A view-model is intended for a 1-to-1 relationship to a view. A view-model only contains the basic fields and properties the view it is intended for requires.
In general you would have your controller contact a repository (In your example your DAL) obtaining the data and then populating either a model or view-model with the results, sending it down to the view.
Model (Domain Model): the heart of the application, representing the biggest and most important business asset, because it captures all the complex business entities, their relationships and their functionality.
ViewModel: Sitting atop the Model is the ViewModel. The two primary goals of the ViewModel are:
1. to make the Model easily consumable by the View and
2. to separate and encapsulate the Model from the View.
E.g.
Model:
public class Product
{
    ...
}
public class Category
{
    ...
}
ViewModel:
public class ProductViewModel
{
    public ProductViewModel(List<Product> products, List<Category> categories)
    {
        this.Products = products;
        this.Categories = categories;
    }

    public List<Product> Products { get; set; }
    public List<Category> Categories { get; set; }
}

ASP.NET MVC - Is it possible to simplify my architecture?

I have just started working on an MVC project and things are going OK, but it looks like I am creating a lot of spaghetti code with just too many objects. Can anyone see how I can simplify this solution before the whole project gets out of hand?
OK, here's my setup:
DAL - has Entity Framework connections and methods to obtain data, then convert the data to my model objects in the model layer
BLL - sends the data back up to the UI
Model - this contains all the model objects that are used throughout the site; anything coming from the DAL is converted into these objects by creating a new object and then populating its properties.
UI - my MVC solution
The DAL,BLL and Model are also used by other solutions.
Now with MVC, I am trying to use the validation annotations ([Required], etc.), which means I have to re-create the model objects with the annotations. This is fine, but if I want to save the data back into the database I need to convert the classes, which is just messy.
Can anyone see how I can use my current model class library with MVC model objects that use the validation annotations?
If I have not explained myself clearly please let me know and I will provide more details.
Thanks
Ideally there needs to be a separation between the domain models on one hand and the MVC models (they are really ViewModels) on the other hand. This separation is really crucial and strongly advised.
These will look very similar in most cases, although a ViewModel can contain extra stuff. Then you can use AutoMapper to convert from one to the other.
For example:
public class User // in entity DLL
{
    [Required]
    public string Name { get; set; }
}

public class UserViewModel : User // in MVC DLL
{
    public string LastVisitedPage { get; set; } // which only MVC needs to know
}

// configure the maps once, e.g. at application startup (classic static API)
Mapper.CreateMap<User, UserViewModel>();
Mapper.CreateMap<UserViewModel, User>();

// then map instances where needed
var viewModel = Mapper.Map<User, UserViewModel>(user);
var entity = Mapper.Map<UserViewModel, User>(viewModel);
You can put the metadata in metadata objects without recreating the model objects. Here is a very simple way of doing it; however, it does require that the model objects themselves are marked as partial. I hope that is OK; if not, this solution will not work for you.
[MetadataType(typeof(PreviousResultsMetaData))]
public partial class PreviousResults
{
    public class PreviousResultsMetaData
    {
        [DisplayName("Class Ranking Score")]
        [Required]
        [Range(0.0, 100.0)]
        public object ClassRankingScore { get; set; }
    }
}
In the example above there is a data model object called PreviousResults that is created elsewhere by some scaffolding code. It defines the POCO object that is sent to and from the database using LINQ. The MetadataType attribute indicates the class that will be used to hold the metadata. Then you simply create plain properties that match the names of your real data members and annotate them.
I hope this helps.
You can use the FluentValidation framework for validation; a minimal example is sketched below. Look here:
http://fluentvalidation.codeplex.com/
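For illustration, a small sketch of a FluentValidation validator for the User class above (the rule shown is purely illustrative):

using FluentValidation;

public class UserValidator : AbstractValidator<User>
{
    public UserValidator()
    {
        RuleFor(u => u.Name)
            .NotEmpty()
            .WithMessage("Name is required");
    }
}

// usage:
// var result = new UserValidator().Validate(user);
// bool isValid = result.IsValid;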
You can perfectly add attributes to your BLL (the business entities). Just add a reference and add a using statement for System.ComponentModel.DataAnnotations. Apart from that, you can implement the IValidatableObject interface (which is pretty easy, see below).
For the mapping, you can use for example AutoMapper, so you don't have to write too much mapping logic yourself (if you can take advantage of the name-mapping magic).
Validate example:
ICollection<ValidationResult> validationErrors = new List<ValidationResult>();
var validationContext = new ValidationContext(this, null, null);
Validator.TryValidateObject(this, validationContext, validationErrors, true);
return validationErrors;

How to expose the DataContext from within a DataContext class?

Is it possible to expose the DataContext when extending a class in the DataContext? Consider this:
public partial class SomeClass {
    public object SomeExtraProperty {
        get {
            return this.DataContext.ExecuteQuery<T>(
                "{SOME_REALLY_COMPLEX_QUERY_THAT_HAS_TO_BE_IN_RAW_SQL_BECAUSE_LINQ_GENERATES_CRAP_IN_THIS INSTANCE}");
        }
    }
}
How can I go about doing this? I have a sloppy version working now, where I pass the DataContext to the view model and from there I pass it to the method I have set up in the partial class. I'd like to avoid passing the DataContext around and just have a property that I can reference.
UPDATE FOR @Aaronaught
So, how would I go about writing the code? I know that's a vague question, but from what I've seen online so far, all the tutorials feel like they assume I know where to place the code and how to use it, etc.
Say I have a very simple application structured as (in folders):
Controllers
Models
Views
Where do the repository files go? In the Models folder or can I create a "Repositories" folder just for them?
Past that, how is the repository aware of the DataContext? Do I have to create a new instance of it in each method of the repository (if so, that seems inefficient... and wouldn't that cause problems with pulling an object out of one instance and using it in a controller that's using a different instance...)?
For example I currently have this setup:
public class BaseController : Controller {
    protected DataContext dc = new DataContext();
}

public class XController : BaseController {
    // stuff
}
This way I have a "global" DataContext available to all controllers who inherit from BaseController. It is my understanding that that is efficient (I could be wrong...).
In my Models folder I have a "Collections" folder, which really serve as the ViewModels:
public class BaseCollection {
    // Common properties for the Master page
}

public class XCollection : BaseCollection {
    // X View specific properties
}
So, taking all of this where and how would the repository plug-in? Would it be something like this (using the real objects of my app):
public interface IJobRepository {
    Job GetById(int JobId);
}

public class JobRepository : IJobRepository {
    public Job GetById(int JobId) {
        using (DataContext dc = new DataContext()) {
            return dc.Jobs.Single(j => (j.JobId == JobId));
        }
    }
}
Also, what's the point of the interface? Is it so other services can hook up to my app? What if I don't plan on having any such capabilities?
Moving on, would it be better to have an abstraction object that collects all the information for the real object? For example an IJob object which would have all of the properties of the Job + the additional properties I may want to add such as the Name? So would the repository change to:
public interface IJobRepository {
    IJob GetById(int JobId);
}

public class JobRepository : IJobRepository {
    public IJob GetById(int JobId) {
        using (DataContext dc = new DataContext()) {
            return dc.Jobs.Single(j => new IJob {
                Name = dc.SP(JobId) // of course, the projection here is wrong,
                                    // but you get the point...
            });
        }
    }
}
My head is so confused now. I would love to see a tutorial from start to finish, i.e., "File -> New -> Do this -> Do that".
Anyway, @Aaronaught, sorry for slamming such a huge question at you, but you obviously have substantially more knowledge of this than I do, so I want to pick your brain as much as I can.
Honestly, this isn't the kind of scenario that Linq to SQL is designed for. Linq to SQL is essentially a thin veneer over the database; your entity model is supposed to closely mirror your data model, and oftentimes your Linq to SQL "entity model" simply isn't appropriate to use as your domain model (which is the "model" in MVC).
Your controller should be making use of a repository or service of some kind. It should be that object's responsibility to load the specific entities along with any additional data that's necessary for the view model. If you don't have a repository/service, you can embed this logic directly into the controller, but if you do this a lot then you're going to end up with a brittle design that's difficult to maintain - better to start with a good design from the get-go.
Do not try to design your entity classes to reference the DataContext. That's exactly the kind of situation that ORMs such as Linq to SQL attempt to avoid. If your entities are actually aware of the DataContext then they're violating the encapsulation provided by Linq to SQL and leaking the implementation to public callers.
You need to have one class responsible for assembling the view models, and that class should either be aware of the DataContext itself, or various other classes that reference the DataContext. Normally the class in question is, as stated above, a domain repository of some kind that abstracts away all the database access.
P.S. Some people will insist that a repository should exclusively deal with domain objects and not presentation (view) objects, and refer to the latter as services or builders; call it what you like, the principle is essentially the same, a class that wraps your data-access classes and is responsible for loading one specific type of object (view model).
Let's say you're building an auto trading site and need to display information about the domain model (the actual car/listing) as well as some related-but-not-linked information that has to be obtained separately (let's say the price range for that particular model). So you'd have a view model like this:
public class CarViewModel
{
    public Car Car { get; set; }
    public decimal LowestModelPrice { get; set; }
    public decimal HighestModelPrice { get; set; }
}
Your view model builder could be as simple as this:
public class CarViewModelService
{
    private readonly CarRepository carRepository;
    private readonly PriceService priceService;

    public CarViewModelService(CarRepository cr, PriceService ps) { ... }

    public CarViewModel GetCarData(int carID)
    {
        var car = carRepository.GetCar(carID);
        decimal lowestPrice = priceService.GetLowestPrice(car.ModelNumber);
        decimal highestPrice = priceService.GetHighestPrice(car.ModelNumber);
        return new CarViewModel { Car = car, LowestModelPrice = lowestPrice,
                                  HighestModelPrice = highestPrice };
    }
}
That's it. CarRepository is a repository that wraps your DataContext and loads/saves Cars, and PriceService essentially wraps a bunch of stored procedures set up in the same DataContext.
It may seem like a lot of effort to create all these classes, but once you get into the swing of it, it's really not that time-consuming, and you'll ultimately find it way easier to maintain.
Update: Answers to New Questions
Where do the repository files go? In the Models folder or can I create a "Repositories" folder just for them?
Repositories are part of your model if they are responsible for persisting model classes. If they deal with view models (AKA they are "services" or "view model builders") then they are part of your presentation logic; technically they are somewhere between the Controller and Model, which is why in my MVC apps I normally have both a Model namespace (containing actual domain classes) and a ViewModel namespace (containing presentation classes).
how is the repository aware of the DataContext?
In most instances you're going to want to pass it in through the constructor. This allows you to share the same DataContext instance across multiple repositories, which becomes important when you need to write back a View Model that comprises multiple domain objects.
Also, if you later decide to start using a Dependency Injection (DI) Framework then it can handle all of the dependency resolution automatically (by binding the DataContext as HTTP-request-scoped). Normally your controllers shouldn't be creating DataContext instances, they should actually be injected (again, through the constructor) with the pre-existing individual repositories, but this can get a little complicated without a DI framework in place, so if you don't have one, it's OK (not great) to have your controllers actually go and create these objects.
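To make the constructor-injection point concrete, here is a rough sketch using the names from your question (JobsController is a hypothetical controller; the actual wiring would be done by the DI framework):

public class JobRepository : IJobRepository
{
    private readonly DataContext dc;

    public JobRepository(DataContext dc)   // a shared, request-scoped context
    {
        this.dc = dc;
    }

    public Job GetById(int JobId)
    {
        return dc.Jobs.Single(j => j.JobId == JobId);
    }
}

public class JobsController : Controller
{
    private readonly IJobRepository jobs;

    public JobsController(IJobRepository jobs)   // supplied by the container
    {
        this.jobs = jobs;
    }
}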
In my Models folder I have a "Collections" folder, which really serve as the ViewModels
This is wrong. Your View Model is not your Model. View Models belong to the View, which is separate from your Domain Model (which is what the "M" or "Model" refers to). As mentioned above, I would suggest actually creating a ViewModel namespace to avoid bloating the Views namespace.
So, taking all of this where and how would the repository plug-in?
See a few paragraphs above - the repository should be injected with the DataContext and the controller should be injected with the repository. If you're not using a DI framework, you can get away with having your controller create the DataContext and repositories, but try not to cement the latter design too much, you'll want to clean it up later.
Also, what's the point of the interface?
Primarily it's so that you can change your persistence model if need be. Perhaps you decide that Linq to SQL is too data-oriented and you want to switch to something more flexible like Entity Framework or NHibernate. Perhaps you need to implement support for Oracle, MySQL, or some other non-Microsoft database. Or perhaps you fully intend to continue using Linq to SQL but want to be able to write unit tests for your controllers; the only way to do that is to inject mock/fake repositories into the controllers, and for that to work, they need to be abstract types.
Moving on, would it be better to have an abstraction object that collects all the information for the real object? For example an IJob object which would have all of the properties of the Job + the additional properties I may want to add such as the Name?
This is more or less what I recommended in the first place, although you've done it with a projection which is going to be harder to debug. Better to just call the SP on a separate line of code and combine the results afterward.
Also, you can't use an interface type for your Domain or View Model. Not only is it the wrong metaphor (models represent the immutable laws of your application, they are not supposed to change unless the real-world requirements change), but it's actually not possible; interfaces can't be databound because there's nothing to instantiate when posting.
So yeah, you've sort of got the right idea here, except (a) instead of an IJob it should be your JobViewModel, (b) instead of an IJobRepository it should be a JobViewModelService, and (c) instead of directly instantiating the DataContext it should accept one through the constructor.
Keep in mind that the purpose of all of this is to keep a clean, maintainable design. If you have a 24-hour deadline to meet then you can still get it to work by just shoving all of this logic directly into the controller. Just don't leave it that way for long, otherwise your controllers will (d)evolve into God-Object abominations.
Replace {SOME_REALLY_COMPLEX_QUERY_THAT_HAS_TO_BE_IN_RAW_SQL_BECAUSE_LINQ_GENERATES_CRAP_IN_THIS INSTANCE} with a stored procedure, then have Linq to SQL import that function.
You can then call the function directly from the data context, get the results and pass it to the view model.
I would avoid making a property that calls the data context. You should just get the value from a service or repository layer whenever you need it instead of embedding it into one of the objects created by Linq to SQL.
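Roughly, once the stored procedure has been imported into the .dbml, the call could look like the sketch below (GetSomeExtraValue, SomeViewModel and the property names are hypothetical; the imported procedure surfaces as a method on the DataContext, and dc here is assumed to be the DataContext held by the builder class):

public SomeViewModel Build(int id)
{
    var entity = dc.SomeClasses.Single(s => s.Id == id);

    // The imported stored procedure is exposed as a DataContext method
    // returning ISingleResult<GetSomeExtraValueResult> (generated type name).
    var extraRow = dc.GetSomeExtraValue(id).Single();

    return new SomeViewModel
    {
        Entity = entity,
        SomeExtraProperty = extraRow
    };
}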
