We normally use abstract functions/interfaces in our projects. Why are they really needed? Why can't we just go with a Business Logic Layer, a Data Access Layer and a Presentation Layer only?
Function in Presentation Layer:
abc();
Function in Business Logic Layer:
public void abc()
{
//Preparing the list
}
Function in Data Access Layer:
public abstract void abc();
Function in Data Access SQLServer Layer:
public override void abc()
{
//Connection with database
}
The question is: why is the Data Access Layer required?
The easiest way to understand this, IMO, is to see it as an abstraction over the data layer.
You have a set of functions to retrieve data from an XML file. But one day your product scales and XML is no longer enough as a data store, so you move to an embedded database such as SQLite. Then one day you need to reuse your library in an enterprise context, so now you have to develop access to SQL Server, Oracle, a web service... With every one of these changes you would need to change not only the code that actually accesses the data, but also the code that consumes it. And what about the clients who have been using your original XML data access for years and are happy with it? What about backward compatibility?
Having the abstraction, even if it does not directly solve all of these problems, definitely makes your application more scalable and more resistant to changes which, in our world, happen all too frequently.
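A minimal sketch of that idea (the IDataStore name and its implementations are illustrative, not from the original post): the consuming code depends only on the abstraction, and each storage technology gets its own implementation.
// Hypothetical abstraction the consuming code depends on
public interface IDataStore
{
    string LoadDocument(string key);
}
// One implementation reads from an XML file...
public class XmlDataStore : IDataStore
{
    public string LoadDocument(string key)
    {
        // parse the XML file and return the matching element
        return "<data/>";
    }
}
// ...another reads from SQLite, SQL Server, a web service, and so on.
public class SqliteDataStore : IDataStore
{
    public string LoadDocument(string key)
    {
        // query the embedded database
        return "row data";
    }
}
Swapping XmlDataStore for SqliteDataStore then touches none of the consuming code, and the old XML implementation can keep shipping for existing clients.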
Generally, if you use interfaces in your code, you gain maneuverability in the form of Dependency Injection.
This helps you replace parts of your implementation in certain situations, for example by providing mock objects during unit testing.
The abstract class or interface is not really a separate layer - it should be part of your business logic layer and it defines the interface that the actual data access layer (SQL data repository, for example) needs to implement to provide the data access service to your business layer.
Without this interface your business layer would be directly dependent on the SQL layer, while the interface removes this dependency: You put the abstract class or the interface into the business logic layer. Then the SQL layer (a separate assembly, for example) implements the abstract class/interface. This way the SQL layer is dependent on the business layer, not the other way around.
The result is a flexible app with an independent business layer that can work with multiple data repositories - all it needs is a layer that implements the interface the business layer defines. And it is not really only about data repositories - your business layer shouldn't be dependent on the context (asp.net vs. console app vs. service etc.), it shouldn't be dependent on the user interface classes, modules interfacing with your business app, etc.
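A minimal sketch of that dependency direction (namespaces and type names are illustrative): the business assembly declares the contract it needs, and the SQL assembly references the business assembly and implements it, not the other way around.
// Business logic layer assembly: owns the contract and the domain type
namespace MyApp.Business
{
    public class Customer
    {
        public int Id { get; set; }
    }
    public interface ICustomerStore
    {
        Customer GetCustomer(int id);
    }
}
// Separate data access assembly: references MyApp.Business and implements the contract
namespace MyApp.Data.SqlServer
{
    using MyApp.Business;
    public class SqlCustomerStore : ICustomerStore
    {
        public Customer GetCustomer(int id)
        {
            // open a connection, query the Customers table, map the row...
            return new Customer { Id = id };
        }
    }
}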
Why interfaces:
Have you ever used the using statement in C#?
using (Form f = new Form())
{
}
Here you will see that only classes that implement the IDisposable interface can be used inside a using block.
Two things that do not know each other can interact only through interfaces.
An interface guarantees that "some" functionality has definitely been implemented by the type.
Why layers:
So that you can have separate DLLs that you can reuse in different applications.
Basically, it is all about code reuse and performance gains.
I think you are talking about Facade layer.
It is an optional layer which simplifies the functions of the Business Layer. Let's imagine you have a ProductManager and a CategoryManager and you want to perform an action that involves both (for example, "get me the top 5 products across all categories"); then you could use a facade layer that coordinates ProductManager and CategoryManager.
It is inspired by the Facade pattern.
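A rough sketch of such a facade (the manager methods and the Product type used below are assumptions for illustration, not from the original post):
using System.Collections.Generic;
using System.Linq;
// Hypothetical facade over the two managers mentioned above
public class CatalogFacade
{
    private readonly ProductManager _products;     // assumed existing manager
    private readonly CategoryManager _categories;  // assumed existing manager
    public CatalogFacade(ProductManager products, CategoryManager categories)
    {
        _products = products;
        _categories = categories;
    }
    // One call for the presentation layer; coordinating both managers is hidden here
    public List<Product> GetTopProductsAcrossAllCategories(int count)
    {
        return _categories.GetAll()                               // assumed method
            .SelectMany(c => _products.GetTopProducts(c, count))  // assumed method
            .Take(count)
            .ToList();
    }
}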
Abstraction helps create functionality, be it through a base class, an interface, or composition, and when used properly it does wonders for the maintainability, readability, and reusability of code.
In regards to the code posted in the question, the code marked "Data Access Layer" acts as a common abstraction for the business layer to use. By doing so, the specific implementations of the DAL (such as what's under "Data Access SQLServer Layer" in the sample) are decoupled from the business layer. Now you can write implementations of the DAL that access different databases, or perhaps feed canned data for testing, etc.
The repository pattern is a fantastic example of this at work in a DAL (example is simplified):
public interface IProductRepository
{
Product Get(int id);
...
}
public class SqlProductRepository : IProductRepository
{
public Product Get(int id) { ... }
...
}
public class MockProductRepository : IProductRepository
{
private IDictionary<int, Product> _products = new Dictionary<int, Product>()
{
{ 1, new Product() { Name = "MyItem" } }
};
public Product Get(int id) { return _products[id]; }
...
}
public class AwesomeBusinessLogic
{
private IProductRepository _repository;
public AwesomeBusinessLogic(IProductRepository repository)
{
_repository = repository;
}
public Product GetOneProduct()
{
return _repository.Get(1);
}
}
Even though this example uses interfaces, the same applies to the use of base classes. The beauty is that now I can feed either SqlProductRepository or MockProductRepository into AwesomeBusinessLogic and not have to change anything about AwesomeBusinessLogic. If another case comes along, all that's needed is a new implementation of IProductRepository and AwesomeBusinessLogic will still handle it without change because it only accesses the repository through the interface.
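As a quick usage sketch of that point (the wiring is shown inline here; in a real app a DI container or test framework would do this), the business logic is constructed with whichever repository fits the situation:
// Production wiring: real data access
var logic = new AwesomeBusinessLogic(new SqlProductRepository());
// Test wiring: no database needed
var testLogic = new AwesomeBusinessLogic(new MockProductRepository());
var product = testLogic.GetOneProduct();   // returns the "MyItem" product without touching SQL Server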
All of the previous answers explain the need for abstraction layers, but I still want to add some thoughts of my own.
Let's say that in our project we have just one implementation of a service in each layer. For instance, I have a contact DAL service and a contact BLL service, and we could do something like this:
namespace Stackoverflow
{
public class ContactDbService
{
public Contact GetContactByID(Guid contactID)
{
//Fetch a contact from DB
}
}
}
Contact BLL service:
namespace Stackoverflow
{
public class ContactBLLService
{
private ContactDbService _dbService;
public ContactBLLService()
{
_dbService = new ContactDbService();
}
public bool CheckValidContact(Guid contactID)
{
var contact = _dbService.GetContactByID(contactID);
return contact.Age > 50;
}
}
}
without defining any interfaces or abstract classes.
If we do like that, there would be some obvious drawbacks.
Code communication:
Imagine that as your project evolves, your services will have many different methods; how could a maintainer (other than you) know what your services do? Would they have to read your entire service just to fix a small bug like an InvalidCastException?
By looking at the interface, people get immediate knowledge of the capabilities of the service (at least at the contract level).
Unit testing
You could test your logic using a fake/mock service to detect bugs in advance as well as prevent regression bugs from happening later.
Easier to change:
By referencing only interfaces/abstract classes from other classes, you can easily replace those implementations later without too much effort.
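As a minimal sketch of the same example refactored around an interface (the interface name and the Contact stub are mine; the rest follows the code above), the BLL now depends only on the contract and can be handed a real or a fake implementation:
using System;
namespace Stackoverflow
{
    // Minimal stand-in for the domain class used in the example
    public class Contact
    {
        public int Age { get; set; }
    }
    // Contract the BLL depends on (the interface name is illustrative)
    public interface IContactDbService
    {
        Contact GetContactByID(Guid contactID);
    }
    public class ContactDbService : IContactDbService
    {
        public Contact GetContactByID(Guid contactID)
        {
            //Fetch a contact from DB
            return new Contact();
        }
    }
    public class ContactBLLService
    {
        private readonly IContactDbService _dbService;
        // The dependency is injected instead of being new-ed up inside the constructor
        public ContactBLLService(IContactDbService dbService)
        {
            _dbService = dbService;
        }
        public bool CheckValidContact(Guid contactID)
        {
            var contact = _dbService.GetContactByID(contactID);
            return contact.Age > 50;
        }
    }
}
A unit test can now pass in a fake IContactDbService, and swapping the real database implementation later touches nothing in the BLL.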
Abstraction enables you to refactor quickly. Imagine that instead of using SQL Server you decide to use some other provider; if you do not have a data access layer, you have to do a huge refactor because you are calling the data access methods directly. But if you have a data access layer, you only write a new one that inherits from your abstract data access layer, and you do not change anything in the business layer.
Related
I'm trying to build a new application using the Repository pattern for the first time and I'm a little confused about using a Repository. Suppose I have the following classes:
public class Ticket
{
}
public class User
{
public List<Ticket> AssignedTickets { get; set; }
}
public class Group
{
public List<User> GroupMembers { get; set; }
public List<Ticket> GroupAssignedTickets { get; set; }
}
I need methods that can populate these collections by fetching data from the database.
I'm confused as to which associated Repository class I should put those methods in. Should I design my repositories so that everything returning type T goes in the repository for type T as such?
public class TicketRepository
{
public List<Ticket> GetTicketsForGroup(Group g) { }
public List<Ticket> GetTicketsForUser(User u) { }
}
public class UserRepository
{
public List<User> GetMembersForGroup(Group g) { }
}
The obvious drawback I see here is that I need to start instantiating a lot of repositories. What if my User also has assigned Widgets, Fidgets, and Lidgets? When I populate a User, I need to instantiate a WidgetRepository, a FidgetRepository, and a LidgetRepository all to populate a single user.
Alternatively, do I construct my repository so that everything requesting based on type T is lumped into the repository for type T as listed below?
public class GroupRepository
{
public List<Ticket> GetTickets(Group g) { }
public List<User> GetMembers(Group g) { }
}
public class UserRepository
{
public List<Ticket> GetTickets(User u) { }
}
The advantage I see here is that if I now need my user to have a collection of Widgets, Fidgets, and Lidgets, I just add the necessary methods to the UserRepository and don't need to instantiate a bunch of different repository classes every time I want to create a user; but now I've scattered the concerns for a user across several different repositories.
I'm really not sure which way is right, if any. Any suggestions?
The repository pattern can help you to:
Put things that change for the same reason together
As well as
Separate things that change for different reasons
On the whole, I would expect a "User Repository" to be a repository for obtaining users. Ideally, it would be the only repository that you can use to obtain users, because if you change stuff, like user tables or the user domain model, you would only need to change the user repository. If you have methods on many repositories for obtaining a user, they would all need to change.
Limiting the impact of change is good, because change is inevitable.
As for instantiating many repositories, using a dependency injection tool such as Ninject or Unity to supply the repositories, or using a repository factory, can reduce new-ing up lots of repositories.
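For example, a rough sketch with Ninject (the interfaces below are assumptions for illustration; the question's repositories would need to implement them):
using Ninject;
using System.Collections.Generic;
// Hypothetical abstractions over the question's concrete repositories
public interface ITicketRepository { List<Ticket> GetTicketsForUser(User u); }
public interface IUserRepository { List<User> GetMembersForGroup(Group g); }
public static class CompositionRoot
{
    public static IKernel BuildKernel()
    {
        var kernel = new StandardKernel();
        kernel.Bind<ITicketRepository>().To<TicketRepository>();  // assumes TicketRepository : ITicketRepository
        kernel.Bind<IUserRepository>().To<UserRepository>();      // assumes UserRepository : IUserRepository
        return kernel;
    }
}
// Usage: consumers are constructor-injected, or resolved once at the composition root:
// var tickets = CompositionRoot.BuildKernel().Get<ITicketRepository>();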
Finally, you can take a look at the concept of Domain Driven Design to find out more about the key purpose behind domain models and repositories (and also about aggregate roots, which are relevant to what you are doing).
Fascinating question with no right answer. This might be a better fit for programmers.stackexchange.com rather than stackoverflow.com. Here are my thoughts:
Don't worry about creating too many repositories. They are basically stateless objects so it isn't like you will use too much memory. And it shouldn't be a significant burden to the programmer, even in your example.
The real benefit of repositories is for mocking the repository for unit testing. Consider splitting them up based on what is simplest for the unit tests, to make the dependency injection simple and clear. I've seen cases where every query is a repository (they call those "queries" instead of repositories). And other cases where there is one repository for everything.
As it turns out, the first option was the more practical option in this case. There were a few reasons for this:
1) When making changes to a type and its associated repository (assume Ticket), it was far easier to modify the Ticket and TicketRepository in one place than to chase down every method in every repository that used a Ticket.
2) When I attempted to use interfaces to dictate the type of queries each repository could perform, I ran into issues where a single repository couldn't implement a generic interface using type T multiple times when the only differentiation between the interface method implementations was the parameter type.
3) I access data from SharePoint and a database in my implementation, and created two abstract classes to provide data tools to the concrete repositories for either Sharepoint or SQL Server. Assume that in the example above Users come from Sharepoint while Tickets come from a database. Using my model I would not be able to use these abstract classes, as the group would have to inherit from both my Sharepoint abstract class and my SQL abstract class. C# does not support multiple inheritance of abstract classes. However, if I'm grouping all Ticket-related behaviours into a TicketRepository and all User-related behaviours into a UserRepository, each repository only needs access to one type of underlying data source (SQL or Sharepoint, respectively).
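A rough sketch of the constraint described in point 3 (all class names here are mine, for illustration): each repository can inherit from only one data-source base class, which works as long as each repository deals with a single underlying source.
using System.Collections.Generic;
// Hypothetical data-source base classes providing shared plumbing
public abstract class SqlRepositoryBase
{
    // shared SQL helpers (connection handling, command execution, ...)
    protected void ExecuteQuery(string sql) { /* ... */ }
}
public abstract class SharePointRepositoryBase
{
    // shared SharePoint helpers (site context, list access, ...)
    protected void QueryList(string listName) { /* ... */ }
}
// Tickets come from the database, so TicketRepository uses the SQL plumbing...
public class TicketRepository : SqlRepositoryBase
{
    public List<Ticket> GetTicketsForUser(User u) { /* query via ExecuteQuery */ return new List<Ticket>(); }
}
// ...and Users come from SharePoint, so UserRepository uses the SharePoint plumbing.
public class UserRepository : SharePointRepositoryBase
{
    public List<User> GetMembersForGroup(Group g) { /* query via QueryList */ return new List<User>(); }
}
// A repository that grouped both sources could not inherit from both base classes.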
I have a design problem with my project that I don't know how to fix. I have a DAL Layer which holds Repositories and a Service Layer which holds "Processors". The role of the processors is to provide access to DAL data and perform some validation and formatting logic.
My domain objects all have a reference to at least one object from the Service Layer (to retrieve the values of their properties from the repositories). However I face two cyclical dependencies. The first "cyclical dependency" comes from my design since I want my DAL to return domain objects - I mean that it is conceptual - and the second comes from my code.
A domain object is always dependent of at least one Service Object
The domain object retrieves his properties from the repositories by calling methods on the service
The methods of the service call the DAL
However - and here is the problem - when the DAL has finished its job, it has to return domain objects. But to create those objects it has to inject the required Service Object dependencies (as these dependencies are required by the domain objects).
Therefore, my DAL Repositories have dependencies on Service Objects.
And this results in a very clear cyclic dependency. I am confused about how I should handle this situation. Lastly, I was thinking about letting my DAL return DTOs, but that doesn't seem compatible with the onion architecture, because the DTOs are defined in the Infrastructure layer, and the Core and the Service Layer should not know about Infrastructure.
Also, I'm not excited about changing the return types of all the methods of my repositories since I have hundreds of lines of code...
I would appreciate any kind of help, thanks !
UPDATE
Here is my code to make the situation more clear :
My Object (In the Core):
public class MyComplexClass1
{
MyComplexClass1 Property1 {get; set;}
MyComplexClass2 Property2 {get; set;}
private readonly IService MyService;
public MyComplexClass1(IService MyService)
{
this.MyService = MyService;
this.Property1 = MyService.GetMyComplexClassList1();
.....
    }
}
This is my Service Interface (In the Core)
public interface IService
{
MyComplexClass1 GetMyComplexClassList1();
...
}
This is my Repository Interface (In the Core)
public interface IRepoComplexClass1
{
MyComplexClass1 GetMyComplexClassObject();
...
}
Now the Service Layer implements IService, and the DAL Layer Implements IRepoComplexClass1.
But my point is that in my repo, I need to construct my Domain Object
This is the Infrastructure Layer
using Core;
public class Repo : IRepoComplexClass1
{
    public MyComplexClass1 GetMyComplexClassObject()
    {
        //Retrieve all the stuff...
        //... And now it's time to convert the DTOs to Domain Objects
        //I need to write:
        //DomainObject.Property1 = new MyComplexClass1(ID, Service);
        //So my Repository has a dependency on my Service, and my Service has a dependency on my Repository (because my Service methods make use of the Repository). Then Ninject is completely messed up.
    }
}
I hope it's clearer now.
First of all, typically architectural guidance like the Onion Architecture and Domain Driven Design (DDD) does not fit all cases when designing a system. In fact, using these techniques is discouraged unless the domain has significant complexity to warrant the cost. So, presumably the domain you are modelling is complex enough that it will not fit into a simpler pattern.
IMHO, both the Onion Architecture and DDD try to achieve the same thing. Namely, the ability to have a programmable (and perhaps easily portable) domain for complex logic that is devoid of all other concerns. That is why in Onion, for example, application, infrastructure, configuration and persistence concerns are at the edges.
So, in summary, the domain is just code. It can then utilize those cool design patterns to solve the complex problems at hand without worrying about anything else.
I really like the Onion articles because the picture of concentric barriers is different to the idea of a layered architecture.
In a layered architecture, it is easy to think vertically, up and down, through the layers. For example, you have a service on top which speaks the outside world (through DTOs or ViewModels), then the service calls the business logic, finally, the business logic calls down to some persistence layer to keep the state of the system.
However, the Onion Architecture describes a different way to think about it. You may still have a service at the top, but this is an application service. For example, a Controller in ASP.NET MVC knows about HTTP, application configuration settings and security sessions. But the job of the controller isn't just to defer work to lower (smarter) layers. The job is to as quickly as possible map from the application side to the domain side. So simply speaking, the Controller calls into the domain asking for a piece of complex logic to be executed, gets the result back, and then persists. The Controller is the glue that is holding things together (not the domain).
So, the domain is the centre of the business domain. And nothing else.
This is why some complain about ORM tools that need attributes on the domain entities. We want our domain completely clean of all concerns other than the problem at hand. So, plain old objects.
So, the domain does not speak directly to application services or repositories. In fact, nothing that the domain calls speaks to these things. The domain is the core, and therefore, the end of the execution stack.
So, for a very simple code example (adapted from the OP):
Repository:
// it is only infrastructure if it doesn't know about specific types directly
public class Repository<T>
{
public T Find(int id)
{
// resolve the entity
return default(T);
}
}
Domain Entity:
public class MyComplexClass1
{
MyComplexClass1 Property1 {get; } // required because it cannot be set from outside
MyComplexClass2 Property2 {get; set;}
private readonly IService MyService;
// no dependency injection frameworks!
public MyComplexClass1(MyComplexClass1 property1)
{
// actually using the constructor to define the required properties
// MyComplexClass1 is required and MyComplexClass2 is optional
this.Property1 = property1;
.....
}
public ComplexCalculationResult CrazyComplexCalculation(MyComplexClass3 complexity)
{
var theAnswer = 42;
return new ComplexCalculationResult(theAnswer);
}
}
Controller (Application Service):
public class TheController : Controller
{
private readonly IRepository<MyComplexClass1> complexClassRepository;
private readonly IRepository<ComplexCalculationResult> complexResultRepository;
// this can use IoC if needed, no probs
public TheController(IRepository<MyComplexClass1> complexClassRepository, IRepository<ComplexCalculationResult> complexResultRepository)
{
this.complexClassRepository = complexClassRepository;
this.complexResultRepository = complexResultRepository;
}
// I know about HTTP
public void Post(int id, int value)
{
var entity = this.complexClassRepository.Find(id);
var complex3 = new MyComplexClass3(value);
var result = entity.CrazyComplexCalculation(complex3);
this.complexResultRepository.Save(result);
}
}
Now, very quickly you will be thinking, "Whoa, that Controller is doing too much." For example, what if we need 50 values to construct MyComplexClass3? This is where the Onion Architecture is brilliant. There is a design pattern for that, called Factory or Builder, and without the constraints of application concerns or persistence concerns you can implement it easily. So you refactor these patterns into the domain (and they become your domain services).
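A tiny sketch of that idea (the builder name is illustrative): a builder living in the domain assembles MyComplexClass3, so the controller never has to know about all the inputs.
// Hypothetical domain-side builder; the controller just feeds it raw values
public class MyComplexClass3Builder
{
    private int value;
    // ...dozens of other inputs could accumulate here
    public MyComplexClass3Builder WithValue(int value)
    {
        this.value = value;
        return this;
    }
    public MyComplexClass3 Build()
    {
        // all construction and validation rules live in the domain, not in the controller
        return new MyComplexClass3(value);
    }
}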
In summary, nothing the domain calls knows about application or persistence concerns. It is the end, the core of the system.
Hope this makes sense, I wrote a little bit more than I intended. :)
I have set up my VS solution with the common layers in separate projects: Presentation, Business, Entities, and Data Access Layers. I have a static class AppSettings in the DAL, and I want to call its Load() method at Application_Start in Global.asax.cs. It basically loads my application settings from web.config.
My question is: should I be making a business logic class to access it from my Presentation Layer, or can I access AppSettings in the Data Access Layer directly from my Presentation Layer (ignoring the Business Layer)?
If so, does the same go for everything? Must I always go through the Business Layer to get to the Data Layer?
public static class AppSettings
{
public static int ApplicationID { get; set; }
public static string ServiceEndpoint { get; set; }
public static string ServiceCode { get; set; }
public static string ConnectionString { get; set; }
public static void Load()
{
//Connection String
AppSettings.ConnectionString = System.Configuration.ConfigurationManager.ConnectionStrings["USpace"].ConnectionString;
//Application Settings
AppSettings.ApplicationID = Convert.ToInt32(System.Configuration.ConfigurationManager.AppSettings["AppID"]);
AppSettings.ServiceEndpoint = (string)System.Configuration.ConfigurationManager.AppSettings["ServiceEndpoint"];
AppSettings.ServiceCode = (string)System.Configuration.ConfigurationManager.AppSettings["ServiceCode"];
}
}
If I must go through the business logic layer, would the BLL's class look like this?
public static class BLLAppSettings
{
public static int ApplicationID
{
get
{
return AppSettings.ApplicationID;
}
}
...
I would recommend always going through the business logic layer to access the data layer, so that all of the safeguards built into the business logic layer are in play. Would you want the data layer to be used without the business layer?
If your focus is Design Patterns, then by all means, have fun pounding those square pegs in the little round holes.
If your focus is on Application Design, then you focus on the Design Patterns that make sense for your Application, and even for individual parts of your Application.
Knowing the patterns is knowledge. Knowing when, and when not, to use them is wisdom...
It's one man's opinion, but I hope it helps...
Ayende recently posted a few articles arguing against this practice (at least that is how I understood them):
http://ayende.com/blog/153061/northwind-starter-kit-review-it-is-all-about-the-services
And I agree with him: you have to ask yourself "what is the purpose of this layer?", and if you can't answer that, then you can remove the layer and keep your software simple.
So if there is no business operation involved when you get your data, then deal directly with your data layer!
If the data is in the application's config file (web.config) you don't need to "go through" anything besides System.ConfigurationManager.AppSettings
You should start out by keeping it simple, but within reason. General principles of software engineering should be your guide when designing your application. In this case, my immediate thought is that by having one global AppSettings class you will be coupling your business and data access layers to that class. That may seem reasonable now, but what about when you have 50 different settings and only 20 of them apply to the data access layer? What if, down the road, your business layer has to load its settings from a different source than the DAL? On top of that, in your current design you're coupling both layers to a global singleton. That is typically not a good idea.
Even in smaller apps I would advocate for having different settings objects defined for each layer. In my design, it would be similar to your BLLAppSettings. It would encapsulate the source of the settings, in this case your global AppSettings class. However, where my design would differ is that BLLAppSettings would be a concrete instance of an Interface defined in the BLL layer that must be given to the BLL layer via Constructor, Factory, or Dependency Injection. A similar class, DALAppSettings would be necessary in my recommended design.
In this way, your BLL and DAL are not coupled to the global AppSettings defined in the Presentation Layer. The implementation details of BLLAppSettings and DALAppSettings can vary independently when necessary, but for the time being can remain internally tied to your global AppSettings class.
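A minimal sketch of that recommendation (the interface and class names are mine): the BLL declares the settings it needs, and the composition root hands it an implementation that happens to read from the global AppSettings class.
// Defined in the BLL project: only the settings the business layer actually needs
public interface IBllSettings
{
    int ApplicationID { get; }
    string ServiceEndpoint { get; }
}
// Concrete adapter, wired up by the Presentation layer / composition root
public class BllAppSettings : IBllSettings
{
    public int ApplicationID { get { return AppSettings.ApplicationID; } }
    public string ServiceEndpoint { get { return AppSettings.ServiceEndpoint; } }
}
// BLL classes receive the settings instead of reaching for a global singleton
public class SomeBusinessService   // hypothetical BLL class
{
    private readonly IBllSettings _settings;
    public SomeBusinessService(IBllSettings settings) { _settings = settings; }
}
A parallel IDalSettings/DalAppSettings pair would serve the data access layer, so each layer depends only on the settings it owns.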
I have a program that deals with (virtual) money order transactions, and there are many classes that have logic requiring data to be saved to a database. My question is whether I should call my repositories directly from the various classes, or whether I should instead raise events that a DatabaseManager class can listen to, so that all repositories are called from that one class.
I don't have experience working with databases and repositories, so would appreciate some deeper insight and tips here. Like on what criteria you would chose different approach etc.
It's probably important to note that the database in this case is not used to retrieve data for performing program logic, except on program startup. So it's basically keeping all data in runtime objects, and just dumping to database for archiving.
I would pass an IRepository to the classes, which they can then call to save data. For one thing it makes testing easier, because you can easily inject a mock repository; for another, it makes it explicit that your classes have such a dependency. You might want to search for the term Dependency Injection.
Simple example:
class Account
{
public Account(IRepository<Account> repository)
{
_Repository = repository;
}
public void ChangeOwner(Owner newOwner)
{
// change ownership
_Repository.Save(this);
}
}
Is it possible to expose the DataContext when extending a class in the DataContext? Consider this:
public partial class SomeClass {
    public object SomeExtraProperty {
        get {
            return this.DataContext.ExecuteQuery<T>("{SOME_REALLY_COMPLEX_QUERY_THAT_HAS_TO_BE_IN_RAW_SQL_BECAUSE_LINQ_GENERATES_CRAP_IN_THIS INSTANCE}");
        }
    }
}
How can I go about doing this? I have a sloppy version working now, where I pass the DataContext to the view model and from there I pass it to the method I have setup in the partial class. I'd like to avoid the whole DataContext passing around and just have a property that I can reference.
UPDATE FOR #Aaronaught
So, how would I go about writing the code? I know that's a vague question, but from what I've seen online so far, all the tutorials feel like they assume I know where to place the code and how use it, etc.
Say I have a very simple application structured as (in folders):
Controllers
Models
Views
Where do the repository files go? In the Models folder or can I create a "Repositories" folder just for them?
Past that, how is the repository aware of the DataContext? Do I have to create a new instance of it in each method of the repository? (If so, that seems inefficient... and wouldn't that cause problems with pulling an object out of one instance and using it in a controller that's using a different instance?)
For example I currently have this setup:
public class BaseController : Controller {
protected DataContext dc = new DataContext();
}
public class XController : BaseController {
// stuff
}
This way I have a "global" DataContext available to all controllers who inherit from BaseController. It is my understanding that that is efficient (I could be wrong...).
In my Models folder I have a "Collections" folder, which really serve as the ViewModels:
public class BaseCollection {
// Common properties for the Master page
}
public class XCollection : BaseCollection {
// X View specific properties
}
So, taking all of this where and how would the repository plug-in? Would it be something like this (using the real objects of my app):
public interface IJobRepository {
public Job GetById(int JobId);
}
public class JobRepository : IJobRepository {
public Job GetById(int JobId) {
using (DataContext dc = new DataContext()) {
return dc.Jobs.Single(j => (j.JobId == JobId));
};
}
}
Also, what's the point of the interface? Is it so other services can hook up to my app? What if I don't plan on having any such capabilities?
Moving on, would it be better to have an abstraction object that collects all the information for the real object? For example an IJob object which would have all of the properties of the Job + the additional properties I may want to add such as the Name? So would the repository change to:
public interface IJobRepository {
public IJob GetById(int JobId);
}
public class JobRepository : IJobRepository {
public IJob GetById(int JobId) {
using (DataContext dc = new DataContext()) {
return dc.Jobs.Single(j => new IJob {
Name = dc.SP(JobId) // of course, the projection here is wrong,
// but you get the point...
});
};
}
}
My head is so confused now. I would love to see a tutorial from start to finish, i.e., "File -> New -> Do this -> Do that".
Anyway, #Aaronaught, sorry for slamming such a huge question at you, but you obviously have substantially more knowledge at this than I do, so I want to pick your brain as much as I can.
Honestly, this isn't the kind of scenario that Linq to SQL is designed for. Linq to SQL is essentially a thin veneer over the database; your entity model is supposed to closely mirror your data model, and oftentimes your Linq to SQL "entity model" simply isn't appropriate to use as your domain model (which is the "model" in MVC).
Your controller should be making use of a repository or service of some kind. It should be that object's responsibility to load the specific entities along with any additional data that's necessary for the view model. If you don't have a repository/service, you can embed this logic directly into the controller, but if you do this a lot then you're going to end up with a brittle design that's difficult to maintain - better to start with a good design from the get-go.
Do not try to design your entity classes to reference the DataContext. That's exactly the kind of situation that ORMs such as Linq to SQL attempt to avoid. If your entities are actually aware of the DataContext then they're violating the encapsulation provided by Linq to SQL and leaking the implementation to public callers.
You need to have one class responsible for assembling the view models, and that class should either be aware of the DataContext itself, or various other classes that reference the DataContext. Normally the class in question is, as stated above, a domain repository of some kind that abstracts away all the database access.
P.S. Some people will insist that a repository should exclusively deal with domain objects and not presentation (view) objects, and refer to the latter as services or builders; call it what you like, the principle is essentially the same, a class that wraps your data-access classes and is responsible for loading one specific type of object (view model).
Let's say you're building an auto trading site and need to display information about the domain model (the actual car/listing) as well as some related-but-not-linked information that has to be obtained separately (let's say the price range for that particular model). So you'd have a view model like this:
public class CarViewModel
{
public Car Car { get; set; }
public decimal LowestModelPrice { get; set; }
public decimal HighestModelPrice { get; set; }
}
Your view model builder could be as simple as this:
public class CarViewModelService
{
private readonly CarRepository carRepository;
private readonly PriceService priceService;
public CarViewModelService(CarRepository cr, PriceService ps) { ... }
public CarViewModel GetCarData(int carID)
{
var car = carRepository.GetCar(carID);
decimal lowestPrice = priceService.GetLowestPrice(car.ModelNumber);
decimal highestPrice = priceService.GetHighestPrice(car.ModelNumber);
return new CarViewModel { Car = car, LowestPrice = lowestPrice,
HighestPrice = highestPrice };
}
}
That's it. CarRepository is a repository that wraps your DataContext and loads/saves Cars, and PriceService essentially wraps a bunch of stored procedures set up in the same DataContext.
It may seem like a lot of effort to create all these classes, but once you get into the swing of it, it's really not that time-consuming, and you'll ultimately find it way easier to maintain.
Update: Answers to New Questions
Where do the repository files go? In the Models folder or can I create a "Repositories" folder just for them?
Repositories are part of your model if they are responsible for persisting model classes. If they deal with view models (AKA they are "services" or "view model builders") then they are part of your presentation logic; technically they are somewhere between the Controller and Model, which is why in my MVC apps I normally have both a Model namespace (containing actual domain classes) and a ViewModel namespace (containing presentation classes).
how is the repository aware of the DataContext?
In most instances you're going to want to pass it in through the constructor. This allows you to share the same DataContext instance across multiple repositories, which becomes important when you need to write back a View Model that comprises multiple domain objects.
Also, if you later decide to start using a Dependency Injection (DI) Framework then it can handle all of the dependency resolution automatically (by binding the DataContext as HTTP-request-scoped). Normally your controllers shouldn't be creating DataContext instances, they should actually be injected (again, through the constructor) with the pre-existing individual repositories, but this can get a little complicated without a DI framework in place, so if you don't have one, it's OK (not great) to have your controllers actually go and create these objects.
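For instance, a rough sketch of the question's JobRepository reworked along these lines (the types come from the question; only the constructor wiring is new):
public class JobRepository : IJobRepository
{
    private readonly DataContext dc;
    // The same DataContext instance can be shared by several repositories within one request
    public JobRepository(DataContext dc)
    {
        this.dc = dc;
    }
    public Job GetById(int jobId)
    {
        return dc.Jobs.Single(j => j.JobId == jobId);
    }
}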
In my Models folder I have a "Collections" folder, which really serve as the ViewModels
This is wrong. Your View Model is not your Model. View Models belong to the View, which is separate from your Domain Model (which is what the "M" or "Model" refers to). As mentioned above, I would suggest actually creating a ViewModel namespace to avoid bloating the Views namespace.
So, taking all of this where and how would the repository plug-in?
See a few paragraphs above - the repository should be injected with the DataContext and the controller should be injected with the repository. If you're not using a DI framework, you can get away with having your controller create the DataContext and repositories, but try not to cement the latter design too much, you'll want to clean it up later.
Also, what's the point of the interface?
Primarily it's so that you can change your persistence model if need be. Perhaps you decide that Linq to SQL is too data-oriented and you want to switch to something more flexible like Entity Framework or NHibernate. Perhaps you need to implement support for Oracle, MySQL, or some other non-Microsoft database. Or, perhaps you fully intend to continue using Linq to SQL, but want to be able to write unit tests for your controllers; the only way to do that is to inject mock/fake repositories into the controllers, and for that to work, they need to be abstract types.
Moving on, would it be better to have an abstraction object that collects all the information for the real object? For example an IJob object which would have all of the properties of the Job + the additional properties I may want to add such as the Name?
This is more or less what I recommended in the first place, although you've done it with a projection which is going to be harder to debug. Better to just call the SP on a separate line of code and combine the results afterward.
Also, you can't use an interface type for your Domain or View Model. Not only is it the wrong metaphor (models represent the immutable laws of your application, they are not supposed to change unless the real-world requirements change), but it's actually not possible; interfaces can't be databound because there's nothing to instantiate when posting.
So yeah, you've sort of got the right idea here, except (a) instead of an IJob it should be your JobViewModel, (b) instead of an IJobRepository it should be a JobViewModelService, and (c) instead of directly instantiating the DataContext it should accept one through the constructor.
Keep in mind that the purpose of all of this is to keep a clean, maintainable design. If you have a 24-hour deadline to meet then you can still get it to work by just shoving all of this logic directly into the controller. Just don't leave it that way for long, otherwise your controllers will (d)evolve into God-Object abominations.
Replace {SOME_REALLY_COMPLEX_QUERY_THAT_HAS_TO_BE_IN_RAW_SQL_BECAUSE_LINQ_GENERATES_CRAP_IN_THIS INSTANCE} with a stored procedure then have Linq to SQL import that function.
You can then call the function directly from the data context, get the results and pass it to the view model.
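As a hedged sketch of that suggestion (JobViewModel, JobViewModelService, and the imported GetReallyComplexValue procedure are made-up names; Linq to SQL generates a method on your DataContext for each imported stored procedure):
using System.Linq;
// Hypothetical view model combining the entity with the extra value
public class JobViewModel
{
    public Job Job { get; set; }
    public object SomeExtraProperty { get; set; }
}
public class JobViewModelService
{
    private readonly DataContext dc;
    public JobViewModelService(DataContext dc) { this.dc = dc; }
    public JobViewModel GetById(int jobId)
    {
        var job = dc.Jobs.Single(j => j.JobId == jobId);
        // call the imported stored procedure on its own line and combine the results afterward
        var extra = dc.GetReallyComplexValue(jobId).FirstOrDefault();   // hypothetical imported proc
        return new JobViewModel { Job = job, SomeExtraProperty = extra };
    }
}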
I would avoid making a property that calls the data context. You should just get the value from a service or repository layer whenever you need it instead of embedding it into one of the objects created by Linq to SQL.