tl;dr
In a good design, should accessing the database be handled in a separate business logic layer (in an ASP.NET MVC model), or is it OK to pass IQueryables or DbContext objects to a controller?
Why? What are the pros and cons of each?
I'm building an ASP.NET MVC application in C#. It uses Entity Framework as its ORM.
Let's simplify this scenario a bit.
I have a database table with cute fluffy kittens. Each kitten has a kitten image link, kitten fluffiness index, kitten name and kitten id. These map to an EF-generated POCO called Kitten. I might use this class in other projects and not just the ASP.NET MVC project.
I have a KittenController which should fetch the latest fluffy kittens at /Kittens. It may contain some logic selecting the kitten, but not too much. I've been arguing with a friend about how to implement this; I won't disclose sides :)
Option 1: db in the controller:
public ActionResult Kittens() // some parameters might be here
{
    using(var db = new KittenEntities()) // db can also be injected
    {
        var result = db.Kittens // this explicit query is here
            .Where(kitten => kitten.fluffiness > 10)
            .Select(kitten => new {
                Name = kitten.name,
                Url = kitten.imageUrl
            })
            .Take(10)
            .ToList(); // materialize before the context is disposed
        return Json(result, JsonRequestBehavior.AllowGet);
    }
}
Option 2: Separate model
public class Kitten
{
    public string Name { get; set; }
    public string Url { get; set; }

    private Kitten(string name, string url)
    {
        Name = name;
        Url = url;
    }

    public static IEnumerable<Kitten> GetLatestKittens(int fluffinessIndex = 10)
    {
        using(var db = new KittenEntities()) // connection can also be injected
        {
            return db.Kittens.Where(kitten => kitten.fluffiness > fluffinessIndex)
                .Select(entity => new Kitten(entity.name, entity.imageUrl))
                .Take(10).ToList();
        }
    } // it's static for simplicity here; in fact it's probably also an object method
    // Also, in practice it might be a service in a services directory creating the
    // objects and fetching them from the DB, and just the kitten MVC _type_ here
}
//----Then the controller:
public ActionResult Kittens() // some parameters might be here
{
return Json(Kitten.GetLatestKittens(10), JsonRequestBehavior.AllowGet);
}
Notes: GetLatestKittens is unlikely to be used elsewhere in the code, but it might be. It's possible to use the constructor of Kitten instead of a static factory method, changing the class accordingly. Basically it's supposed to be a layer above the database entities, so the controller does not have to be aware of the actual database, the mapper, or Entity Framework.
What are some pros and cons for each design?
Is there a clear winner? Why?
Note: Of course, alternative approaches are very valued as answers too.
Clarification 1: This is not a trivial application in practice. This is an application with tens of controllers and thousands of lines of code, and the entities are not only used here but in tens of other C# projects. The example here is a reduced test case.
The second approach is superior. Let's try a lame analogy:
You enter a pizza shop and walk over to the counter. "Welcome to McPizza Maestro Double Deluxe, may I take your order?" the pimpled cashier asks you, the void in his eyes threatening to lure you in. "Yeah I'll have one large pizza with olives". "Okay", the cashier replies and his voice croaks in the middle of the "o" sound. He yells towards the kitchen "One Jimmy Carter!"
And then, after waiting for a bit, you get a large pizza with olives. Did you notice anything peculiar? The cashier didn't say "Take some dough, spin it round like it's Christmas time, pour some cheese and tomato sauce, sprinkle olives and put in an oven for about 8 minutes!" Come to think of it, that's not peculiar at all. The cashier is simply a gateway between two worlds: The customer who wants the pizza, and the cook who makes the pizza. For all the cashier knows, the cook gets his pizza from aliens or slices them from Jimmy Carter (he's a dwindling resource, people).
That is your situation. Your cashier isn't dumb. He knows how to make pizza. That doesn't mean he should be making pizza, or telling someone how to make pizza. That's the cook's job. As other answers (notably Florian Margaine's and Madara Uchiha's) illustrated, there is a separation of responsibilities. The model might not do much, it might be just one function call, it might be even one line - but that doesn't matter, because the controller doesn't care.
Now, let's say the owners decide that pizzas are just a fad (blasphemy!) and you switch over to something more contemporary, a fancy burger joint. Let's review what happens:
You enter a fancy burger joint and walk over to the counter. "Welcome to Le Burger Maestro Double Deluxe, may I take your order?" "yeah, I'll have one large hamburger with olives". "Okay", and he turns to the kitchen, "One Jimmy Carter!"
And then, you get a large hamburger with olives (ew).
Options 1 and 2 are both a bit extreme, rather like a choice between the devil and the deep blue sea, but if I had to choose between the two I would prefer option 1.
First of all, option 2 will throw a runtime exception, because Entity Framework does not support projecting onto an entity type (Select(e => new Kitten(...))), and it does not allow a constructor with parameters in a projection. Now, this note may seem a bit pedantic in this context, but by projecting into the entity and returning a Kitten (or an enumeration of Kittens) you are hiding the real problem with that approach.
Obviously, your method returns two properties of the entity that you want to use in your view - the kitten's name and imageUrl. Because these are only a selection of all Kitten properties, returning a (half-filled) Kitten entity would not be appropriate. So, what type should this method actually return?
You could return object (or IEnumerable<object>) (that's how I understand your comment about the "object method"), which is fine if you pass the result into Json(...) to be processed in JavaScript later. But you would lose all compile-time type information, and I doubt that an object result type is useful for anything else.
You could return some named type that just contains the two properties - maybe called "KittensListDto".
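For illustration, a minimal sketch of that named type and the projection into it, reusing the naming from the example above (EF can project onto a non-entity type via an object initializer; the method and parameter names are illustrative):

// Hypothetical DTO carrying exactly the two properties the list view needs.
public class KittensListDto
{
    public string Name { get; set; }
    public string Url { get; set; }
}

public IEnumerable<KittensListDto> GetLatestKittens(int minFluffiness = 10)
{
    using(var db = new KittenEntities())
    {
        return db.Kittens
            .Where(kitten => kitten.fluffiness > minFluffiness)
            .Select(kitten => new KittensListDto {
                Name = kitten.name,
                Url = kitten.imageUrl
            })
            .Take(10)
            .ToList(); // materialize before the context is disposed
    }
}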
Now, this is only one method for one view - the view that lists kittens. Then you have a details view to display a single kitten, then an edit view, and then perhaps a delete-confirmation view. Four views for an existing Kitten entity, each of which possibly needs different properties, and each of which would need a separate method, projection and DTO type. The same goes for the Dog entity, and for 100 more entities in the project, and you end up with perhaps 400 methods and 400 return types.
And most likely not a single one will ever be reused anywhere other than in this specific view. Why would you want to take 10 kittens with just name and imageUrl anywhere a second time? Do you have a second kittens list view? If so, it exists for a reason, and the two queries are only identical by accident; if one changes, the other does not necessarily change, otherwise the list view is not properly "reused" and should not exist twice. Or is the same list used by an Excel export, maybe? But perhaps the Excel users will want 1000 kittens tomorrow, while the view should still display only 10. Or the view should display the kitten's Age tomorrow, but the Excel users don't want that, because their Excel macros would no longer run correctly with that change. Just because two pieces of code are identical, they don't have to be factored out into a common reusable component if they live in different contexts or have different semantics. You'd better leave them as GetLatestKittensForListView and GetLatestKittensForExcelExport. Or better, don't have such methods in your service layer at all.
In light of these considerations, here is an excursion to a pizza shop as an analogy for why the first approach is superior :)
"Welcome to BigPizza, the custom Pizza shop, may I take your order?" "Well, I'd like to have a Pizza with olives, but tomato sauce on top and cheese at the bottom and bake it in the oven for 90 minutes until it's black and hard like a flat rock of granite." "OK, Sir, custom Pizzas are our profession, we'll make it."
The cashier goes to the kitchen. "There is a psycho at the counter, he wants to have a Pizza with... it's a rock of granite with ... wait ... we need to have a name first", he tells the cook.
"No!", the cook screams, "not again! You know we tried that already." He takes a stack of paper with 400 pages, "here we have rock of granite from 2005, but... it didn't have olives, but paprica instead... or here is top tomato ... but the customer wanted it baked only half a minute." "Maybe we should call it TopTomatoGraniteRockSpecial?" "But it doesn't take the cheese at the bottom into account..." The cashier: "That's what Special is supposed to express." "But having the Pizza rock formed like a pyramid would be special as well", the cook replies. "Hmmm ... it is difficult...", the desparate cashier says.
"IS MY PIZZA ALREADY IN THE OVEN?", suddenly it shouts through the kitchen door. "Let's stop this discussion, just tell me how to make this Pizza, we are not going to have such a Pizza a second time", the cook decides. "OK, it's a Pizza with olives, but tomato sauce on top and cheese at the bottom and bake it in the oven for 90 minutes until it's black and hard like a flat rock of granite."
If option 1 violates the separation of concerns principle by using a database context in the view layer, then option 2 violates the same principle by having presentation-centric query logic in the service or business layer. From a technical viewpoint it does not, but it will end up with a service layer that is anything but "reusable" outside of the presentation layer. And it has much higher development and maintenance costs, because for every required piece of data in a controller action you have to create services, methods and return types.
Now, there actually might be queries or query parts that are reused often, which is why I think option 1 is almost as extreme as option 2 - for example a Where clause on the key (which will probably be used in the details, edit and delete-confirmation views), filtering out "soft deleted" entities, filtering by tenant in a multi-tenant architecture, or disabling change tracking, etc. For such genuinely repetitive query logic I could imagine that extracting it into a service or repository layer (or maybe only into reusable extension methods) might make sense, like:
public IQueryable<Kitten> GetKittens()
{
return context.Kittens.AsNoTracking().Where(k => !k.IsDeleted);
}
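The "maybe only reusable extension methods" variant of the same filter could look like this sketch (the method name is illustrative):

public static class KittenQueryExtensions
{
    // Chainable wherever an IQueryable<Kitten> is in scope; filters out soft-deleted kittens.
    public static IQueryable<Kitten> NotDeleted(this IQueryable<Kitten> kittens)
    {
        return kittens.Where(k => !k.IsDeleted);
    }
}

Usage would then be context.Kittens.AsNoTracking().NotDeleted().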
Anything else that follows after - like projecting properties - is view-specific, and I would not like to have it in this layer. To make this approach possible, IQueryable<T> must be exposed from the service/repository. That does not mean the Select must be directly in the controller action. Especially fat and complex projections (that maybe join other entities by navigation properties, perform groupings, etc.) could be moved into extension methods on IQueryable<T> that are collected in other files, directories, or even another project - but still a project that is an appendix to the presentation layer and much closer to it than to the service layer. An action could then look like this:
public ActionResult Kittens()
{
var result = kittenService.GetKittens()
.Where(kitten => kitten.fluffiness > 10)
.OrderBy(kitten => kitten.name)
.Select(kitten => new {
Name=kitten.name,
Url=kitten.imageUrl
})
.Take(10);
return Json(result,JsonRequestBehavior.AllowGet);
}
Or like this:
public ActionResult Kittens()
{
var result = kittenService.GetKittens()
.ToKittenListViewModel(10, 10);
return Json(result,JsonRequestBehavior.AllowGet);
}
With ToKittenListViewModel() being:
public static IEnumerable<object> ToKittenListViewModel(
this IQueryable<Kitten> kittens, int minFluffiness, int pageItems)
{
return kittens
.Where(kitten => kitten.fluffiness > minFluffiness)
.OrderBy(kitten => kitten.name)
.Select(kitten => new {
Name = kitten.name,
Url = kitten.imageUrl
})
.Take(pageItems)
.AsEnumerable()
.Cast<object>();
}
That's just a basic idea and a sketch showing that another solution could lie somewhere in the middle between options 1 and 2.
Well, it all depends on the overall architecture and requirements, and everything I wrote above might be useless and wrong. Do you have to consider that the ORM or data access technology could be changed in the future? Could there be a physical boundary between controller and database - is the controller disconnected from the context, and does the data need to be fetched via a web service, for example, in the future? That would require a very different approach, one that leans more towards option 2.
Such an architecture is so different that - in my opinion - you simply can't say "maybe" or "not now, but possibly it could be a requirement in the future, or possibly it won't". This is something the project's stakeholders have to define before you can proceed with architectural decisions, as it will increase development costs dramatically, and it will be wasted money in development and maintenance if the "maybe" turns out to never become reality.
I was talking only about queries, or GET requests, in a web app, which rarely have anything I would call "business logic" at all. POST requests and modifying data are a whole different story. If it is forbidden for an order to be changed after it is invoiced, for example, that is a general "business rule" that normally applies no matter which view, web service, background process or whatever tries to change an order. I would definitely put such a check on the order status into a business service or some common component, and never into a controller.
There might be an argument against using IQueryable<T> in a controller action because it is coupled to LINQ-to-Entities and will make unit tests difficult. But what is a unit test going to test in a controller action that contains no business logic, that gets parameters passed in which usually come from a view via model binding or routing (not covered by the unit test), that uses a mocked repository/service returning IEnumerable<T> (so database query and access are not tested), and that returns a View (correct rendering of the view is not tested)?
This is the key phrase there:
I might use this class in other projects and not just the asp.net MVC project.
A controller is HTTP-centric. It is only there to handle HTTP requests. If you want to use your model - i.e. your business logic - in any other project, you can't have any logic in the controllers. You must be able to lift your model out, put it somewhere else, and have all your business logic still work.
So, no, don't access your database from your controller. It kills any possible reuse you might ever get.
Do you really want to rewrite all your db/linq requests in all your projects when you can have simple methods that you reuse?
Another thing: your function in option 1 has two responsibilities: it fetches the result from a mapper object and it displays it. That's too many responsibilities. There is an "and" in the list of responsibilities. Your option 2 only has one responsibility: being the link between the model and the view.
I'm not sure about how ASP.NET or C# does things. But I do know MVC.
In MVC, you separate your application into two major layers: The Presentational layer (which contains the Controller and View), and the Model layer (which contains... the Model).
The point is to separate the 3 major responsibilities in the application:
The application logic, handling request, user input, etc. That's the Controller.
The presentation logic, handling templating, display, formats. That's the View.
The business logic or "heavy logic", handling basically everything else. That's your actual application, basically: where everything your application is supposed to do gets done. This part handles the domain objects that represent the information structures of the application, and it handles the mapping of those objects to permanent storage (be it session, database or files).
As you can see, database handling is found on the Model, and it has several advantages:
The controller is less tied to the model. Because "the work" gets done in the Model, should you want to change your controller, you'll be able to do so more easily if your database handling is in the Model.
You gain more flexibility. In the case where you want to change your mapping scheme (say, switching from MySQL to Postgres), you only need to change it once (in the base Mapper definition).
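In C# terms, that single point of change might look like this sketch (the interface and class names are illustrative, not part of the original answer):

// The model talks only to this abstraction...
public interface IKittenMapper
{
    IEnumerable<Kitten> FindFluffy(int minFluffiness);
}

// ...so switching from MySQL to Postgres means swapping one implementation,
// without touching any controller or view.
public class PostgresKittenMapper : IKittenMapper
{
    public IEnumerable<Kitten> FindFluffy(int minFluffiness)
    {
        // Postgres-specific data access would go here.
        throw new NotImplementedException();
    }
}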
For more information, see the excellent answer here: How should a model be structured in MVC?
I prefer the second approach. It at least separates the controller from the business logic. It is still a little bit hard to unit test (maybe I'm just not good at mocking).
I personally prefer the following approach. The main reason is that it makes unit testing easy for each layer - presentation, business logic, data access. Besides, you can see this approach in a lot of open source projects.
namespace MyProject.Web.Controllers
{
public class MyController : Controller
{
private readonly IKittenService _kittenService ;
public MyController(IKittenService kittenService)
{
_kittenService = kittenService;
}
        public ActionResult Kittens()
        {
            var result = _kittenService.GetLatestKittens(10);
            return Json(result, JsonRequestBehavior.AllowGet);
        }
}
}
namespace MyProject.Domain.Kittens
{
public class Kitten
{
public string Name {get; set; }
public string Url {get; set; }
}
}
namespace MyProject.Services.KittenService
{
public interface IKittenService
{
IEnumerable<Kitten> GetLatestKittens(int fluffinessIndex=10);
}
}
namespace MyProject.Services.KittenService
{
public class KittenService : IKittenService
{
        public IEnumerable<Kitten> GetLatestKittens(int fluffinessIndex = 10)
        {
            using(var db = new KittenEntities())
            {
                return db.Kittens // this explicit query is here
                    .Where(kitten => kitten.fluffiness > fluffinessIndex)
                    .Select(kitten => new Kitten {
                        Name = kitten.name,
                        Url = kitten.imageUrl
                    })
                    .Take(10)
                    .ToList(); // materialize before the context is disposed
}
}
}
}
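To illustrate the testability claim, here is a minimal sketch of a unit test for the controller, assuming NUnit and a mocking library like Moq (neither is prescribed by the answer above), and assuming the action returns Json of the service result as in the question:

[Test]
public void Kittens_ReturnsJson_WithoutTouchingTheDatabase()
{
    // Arrange: fake the service so no database is involved.
    var mock = new Mock<IKittenService>();
    mock.Setup(s => s.GetLatestKittens(10))
        .Returns(new List<Kitten> { new Kitten { Name = "Tom", Url = "tom.png" } });
    var controller = new MyController(mock.Object);

    // Act
    var result = controller.Kittens() as JsonResult;

    // Assert: the action produced a result from the mocked service.
    Assert.IsNotNull(result);
}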
#Win has the idea I'd more or less follow.
Have the presentation layer just present.
The controller simply acts as a bridge; it does nothing really, it is the middle man. It should be easy to test.
The DAL is the hardest part. Some like to separate it out behind a web service; I did that for a project once. That way you can also have the DAL act as an API for others (internally or externally) to consume, so WCF or Web API comes to mind.
That way your DAL is completely independent of your web server. If someone hacks your server, the DAL is probably still secure.
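As a sketch of that idea, assuming ASP.NET Web API and the IKittenService interface from the previous answer (the controller name and route are illustrative):

// Consumers (internal or external) call this endpoint instead of the database.
public class KittensApiController : ApiController
{
    private readonly IKittenService _kittenService;

    public KittensApiController(IKittenService kittenService)
    {
        _kittenService = kittenService;
    }

    // GET api/kittensapi
    public IEnumerable<Kitten> Get()
    {
        return _kittenService.GetLatestKittens(10);
    }
}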
It's up to you I guess.
Single Responsibility Principle. Each of your classes should have one and only one reason to change. #Zirak gives a good example of how each person has a single responsibility in the chain of events.
Let's look at the hypothetical test case you have provided.
public ActionResult Kittens() // some parameters might be here
{
    using(var db = new KittenEntities()) // db can also be injected
    {
        var result = db.Kittens // this explicit query is here
            .Where(kitten => kitten.fluffiness > 10)
            .Select(kitten => new {
                Name = kitten.name,
                Url = kitten.imageUrl
            })
            .Take(10)
            .ToList(); // materialize before the context is disposed
        return Json(result, JsonRequestBehavior.AllowGet);
    }
}
With a service layer in between, it might look something like this.
public ActionResult Kittens() // some parameters might be here
{
using(var service = new KittenService())
{
var result = service.GetFluffyKittens();
return Json(result,JsonRequestBehavior.AllowGet);
}
}
public class KittenService : IDisposable
{
    public IEnumerable<Kitten> GetFluffyKittens()
    {
        using(var db = new KittenEntities()) // db can also be injected
        {
            return db.Kittens // this explicit query is here
                .Where(kitten => kitten.fluffiness > 10)
                .Select(kitten => new Kitten {
                    Name = kitten.name,
                    Url = kitten.imageUrl
                })
                .Take(10)
                .ToList(); // materialize so the query runs before disposal
        }
    }

    public void Dispose() { /* release any held resources */ }
}
With a few more imaginary controller classes, you can see how this would be much easier to reuse. That's great! We have code reuse, but there's even more benefit. Let's say, for example, our kitten website is taking off like crazy; everyone wants to look at fluffy kittens, so we need to partition our database (shard). The constructor for all our db calls now needs to be injected with the connection to the proper database. With our controller-based EF code, we would have to change the controllers because of a DATABASE issue.
Clearly that means our controllers are now dependent upon database concerns. They have too many reasons to change, which can potentially lead to accidental bugs and to retesting code that is unrelated to the change.
With a service, we could do the following, while the controllers are protected from that change.
public class KittenService : IDisposable
{
    public IEnumerable<Kitten> GetFluffyKittens()
    {
        using(var db = GetDbContextForFluffyKittens()) // db can also be injected
        {
            return db.Kittens // this explicit query is here
                .Where(kitten => kitten.fluffiness > 10)
                .Select(kitten => new Kitten {
                    Name = kitten.name,
                    Url = kitten.imageUrl
                })
                .Take(10)
                .ToList();
        }
    }

    protected KittenEntities GetDbContextForFluffyKittens()
    {
        // ... code to determine the least used shard and get connection string ...
        var connectionString = GetShardThatIsntBusy();
        return new KittenEntities(connectionString);
    }

    public void Dispose() { /* release any held resources */ }
}
The key here is to isolate changes from reaching other parts of your code. You should be testing anything that is affected by a change in code, so you want to isolate changes from one another. This has the side effect of keeping your code DRY, so you end up with more flexible and reusable classes and services.
Separating the classes also allows you to centralize behavior that would have either been difficult or repetitive before. Think about logging errors in your data access. In the first method, you would need logging everywhere. With a layer in between you can easily insert some logging logic.
public class KittenService : IDisposable
{
    public IEnumerable<Kitten> GetFluffyKittens()
    {
        Func<IEnumerable<Kitten>> func = () => {
            using(var db = GetDbContextForFluffyKittens()) // db can also be injected
            {
                return db.Kittens // this explicit query is here
                    .Where(kitten => kitten.fluffiness > 10)
                    .Select(kitten => new Kitten {
                        Name = kitten.name,
                        Url = kitten.imageUrl
                    })
                    .Take(10)
                    .ToList();
            }
        };
        return Execute(func);
    }

    protected KittenEntities GetDbContextForFluffyKittens()
    {
        // ... code to determine the least used shard and get connection string ...
        var connectionString = GetShardThatIsntBusy();
        return new KittenEntities(connectionString);
    }

    protected T Execute<T>(Func<T> func)
    {
        try
        {
            return func();
        }
        catch(Exception ex)
        {
            Logging.Log(ex);
            throw; // rethrow without resetting the stack trace
        }
    }

    public void Dispose() { /* release any held resources */ }
}
Either way is not so good for testing. Use dependency injection to get the DI container to create the db context and inject it into the controller constructor.
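A minimal sketch of that, assuming a DI container (Ninject, Autofac, Unity, ...) is configured to supply the context:

public class KittenController : Controller
{
    private readonly KittenEntities _db;

    // The container creates and injects the context, so a test can pass in
    // a context pointed at a test database (or a fake) instead.
    public KittenController(KittenEntities db)
    {
        _db = db;
    }
}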
EDIT: a little more on testing
If you can test, you can see whether your application works per spec before you publish.
If you can't test easily, you won't write your tests.
from that chat room:
Okay, so on a trivial application you write it and it doesn't change very much, but on a non-trivial application you get these nasty things called dependencies, which when you change one breaks a lot of shit. So you use dependency injection to inject a repo that you can fake, and then you can write unit tests in order to make sure your code doesn't break.
If I had (note: really had) to choose between the two given options, I'd say 1 for simplicity, but I don't recommend using it, since it's hard to maintain and causes a lot of duplicate code.
A controller should contain as little business logic as possible. It should only delegate data access, map the result to a ViewModel, and pass it to the View.
If you want to abstract data access away from your controller (which is a good thing), you might want to create a service layer containing a method like GetLatestKittens(int fluffinessIndex).
I don't recommend placing data access logic in your POCOs either; that doesn't allow you to switch to another ORM (NHibernate, for example) and reuse the same POCOs.
After reading a blog post mentioning how it seems wrong to expose a public getter just to facilitate testing, I couldn't find any concrete examples of better practices.
Suppose we have this simple example:
public class ProductNameList
{
    private IList<string> _products = new List<string>(); // initialized so AddProductName won't throw
public void AddProductName(string productName)
{
_products.Add(productName);
}
}
Let's say for object-oriented design reasons, I have no need to publicly expose the list of products. How then can I test whether AddProductName() did its job? (Maybe it added the product twice, or not at all.)
One solution is to expose a read-only version of _products where I can test whether it has only one product name -- the one I passed to AddProductName().
The blog post I mentioned earlier says it's more about interaction (i.e., did the product name get added) rather than state. However, state is exactly what I'm checking. I already know AddProductName() has been called -- I want to test whether the object's state is correct once that method has done its work.
Disclaimer: Although this question is similar to Balancing Design Principles: Unit Testing, (1) the language is different (C# instead of Java), (2) this question has sample code, and (3) I don't feel that question was adequately answered (i.e., code would have helped demonstrate the concept).
Unit tests should test the public API.
If you have "no need to publicly expose the list of products", then why would you care whether AddProductName did its job? What possible difference would it make if the list is entirely private and never, ever affected anything else?
Find out what effect AddProductName has on the state that can be detected using the API, and test that.
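For example, suppose the public API already exposed a ContainsProduct member (hypothetical here, not part of the question's class); then the test observes AddProductName through the API alone, without a test-only getter:

[Test]
public void AddProductName_IsObservableThroughThePublicApi()
{
    var list = new ProductNameList();
    list.AddProductName("Foo");
    // ContainsProduct is a hypothetical public API member, not a test-only accessor.
    Assert.IsTrue(list.ContainsProduct("Foo"));
}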
Very similar question here: Domain Model (Exposing Public Properties)
You could make your accessors protected so you can mock, or you could use internal so that you can access the property in a test, but as you have suggested, IMO that would be wrong.
I think sometimes we get so caught up in wanting to make sure that every little thing in our code is tested. Sometimes we need to take a step back and ask why we are storing this value and what its purpose is. Then, instead of testing that the value gets set, we can start testing that the behaviour of the component is correct.
EDIT
One thing you can do in your scenario is to have a bastard constructor where you inject an IList<string> and then test that you have added a product:
public class ProductNameList
{
private IList<string> _products;
internal ProductNameList(IList<string> products)
{
_products = products;
}
...
}
You would then test it like this:
[Test]
public void FooTest()
{
var productList = new List<string>();
var productNameList = new ProductNameList(productList);
productNameList.AddProductName("Foo");
Assert.IsTrue(productList[0] == "Foo");
}
You will need to remember to make internals visible to your test assembly.
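That is done with the InternalsVisibleTo attribute, assuming the test assembly is named MyProject.Tests:

// In the assembly under test, e.g. in AssemblyInfo.cs:
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyProject.Tests")]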
Make _products protected instead of private. In your mock, you can add an accessor.
To test whether AddProductName() did its job, instead of using a public getter for _products, make a call to GetProductNames() - or whatever equivalent is defined in your API. Such a function may not necessarily be in the same class.
Now, if your API does not expose any way to get information about product names, then AddProductName() has no observable side effects (In which case, it is a meaningless function).
If AddProductName() does have side effects, but they are indirect - say, a method in ProductList that writes the list of product names to a file - then ProductList should be split into two classes: one that manages the list, and another that calls its Add and Get API and performs the side effects.
Here is my question...
I work in Telecom industry and have a piece of software which provides the best network available for a given service number or a site installation address. My company uses the network of the wholesale provider and we have our own network as well. To assess what services a customer might be able to get, I call a webservice to find out the services available on a given telephone exchange and based on the services available, I need to run some checks against either our network or the network of the wholesale provider.
My question is how this can be modelled using interfaces in C#? The software that I have does not make use of any interfaces and whatever classes are there are just to satisfy the fact that code cannot live outside classes.
I am familiar with the concept of interfaces, at least on theoretical level, but not very familiar with the concept of programming to interfaces.
What I am thinking is along the following lines:
Create an interface called IServiceQualification which defines one operation: void Qualify(). Have two classes called QualifyByNumber and QualifyByAddress, both of which implement the interface and fill in the details of the Qualify operation. Am I thinking along the right lines, or is there a different/better way of approaching this?
I have read a few examples of programming to interfaces, but would like to see this utilized in a work situation.
Comments/suggestions are most welcome.
I would probably take it a little bit deeper, but you are on the right track. I would personally create IServiceQualification with a Qualify method and then, below that, an abstract class called ServiceQualification, with an abstract method called Qualify that any kind of qualifier class could implement. This lets you define common behavior among your qualifiers (there is bound to be some) while still creating the separation of concerns at a high level.
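A minimal sketch of that layering (member bodies elided):

public interface IServiceQualification
{
    void Qualify();
}

// Common qualifier behavior lives here; concrete qualifiers fill in Qualify.
public abstract class ServiceQualification : IServiceQualification
{
    public abstract void Qualify();

    // shared helpers for all qualifiers (validation, logging, ...) would go here
}

public class QualifyByNumber : ServiceQualification
{
    public override void Qualify() { /* run checks for a service number */ }
}

public class QualifyByAddress : ServiceQualification
{
    public override void Qualify() { /* run checks for an installation address */ }
}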
Interfaces have a defined purpose, and using them properly lets you implement in any way you want without having your code depend on a specific implementation. So, we can create a service method that looks something like:
public bool ShouldQualify(IServiceQualification qualification)
And no matter which implementation we send it, this method will work. It becomes something you never have to change or modify once it's working. Additionally, it leads you directly to bugs: if someone reports that qualification by address isn't working, you know EXACTLY where to look.
Take a look at the strategy design pattern. Both the problem and the approach that you have described sound like a pretty close fit.
http://www.dofactory.com/Patterns/PatternStrategy.aspx
You should think of interfaces in terms of a contract. An interface specifies that a class implements certain function signatures, meaning your class can call them with known parameters and expect a certain object back - what happens in the middle is up to the developer of the implementation to decide. This loose coupling makes your class system a lot more flexible (it has nothing to do with saving keystrokes, surfash).
Here's an example which is roughly aimed at your situation (but will require more modelling).
public interface IServiceQualification{
bool Qualifies(Service serv);
}
public class ClientTelephoneService : IServiceQualification
{
public bool Qualifies(Service serv){
return serv.TelNumber.Contains("01234");
}
}
public class ClientAddressService : IServiceQualification
{
public bool Qualifies(Service serv){
return serv.Address.Contains("ABC");
}
}
//just a dummy service
public class Service{
public string TelNumber = "0123456789";
public string Address = "ABC";
}
//implementation of a checker which has a list of available services and takes a client
//that implements the interface (meaning we know we can call the Qualifies method)
public class ClassThatReturnsTheAvailableServices
{
    private List<Service> services = new List<Service>(); // your list of all services

    public List<Service> CheckServices(IServiceQualification clientServiceDetails)
    {
        var servicesThatQualify = new List<Service>();
        foreach(var service in services)
        {
            if(clientServiceDetails.Qualifies(service))
            {
                servicesThatQualify.Add(service); // add to the result, not the source list
            }
        }
        return servicesThatQualify;
    }
}
In the projects I worked on I have classes that query/update database, like this one,
public class CompanyInfoManager
{
public List<string> GetCompanyNames()
{
//Query database and return list of company names
}
}
as I keep creating more and more classes of this sort, I realize that maybe I should make this type of class static. By doing so, the obvious benefit is avoiding the need to create a class instance every time I need to query the database. But since there is only one copy of a static class, will this result in hundreds of requests contending for that single copy?
Thanks,
I would not make that class static but instead would use dependency injection and pass in needed resources to that class. This way you can create a mock repository (that implements the IRepository interface) to test with. If you make the class static and don't pass in your repository then it is very difficult to test since you can't control what the static class is connecting to.
Note: The code below is a rough example and is only intended to convey the point, not necessarily compile and execute.
public interface IRepository
{
    DataSet ExecuteQuery(string aQuery); // interface members are implicitly public
//Other methods to interact with the DB (such as update or insert) are defined here.
}
public class CompanyInfoManager
{
private IRepository theRepository;
public CompanyInfoManager(IRepository aRepository)
{
//A repository is required so that we always know what
//we are talking to.
theRepository = aRepository;
}
public List<string> GetCompanyNames()
{
//Query database and return list of company names
string query = "SELECT * FROM COMPANIES";
DataSet results = theRepository.ExecuteQuery(query);
        //Process the results into a list of names...
        var listOfNames = new List<string>();
        return listOfNames;
}
}
To test CompanyInfoManager:
//Class to test CompanyInfoManager
public class MockRepository : IRepository
{
//This method will always return a known value.
public DataSet ExecuteQuery(string aQuery)
{
DataSet returnResults = new DataSet();
//Fill the data set with known values...
return returnResults;
}
}
//This will always contain known values that you can test.
IList<string> names = new CompanyInfoManager(new MockRepository()).GetCompanyNames();
I didn't want to ramble on about dependency injection. Misko Hevery's blog goes into great detail with a great post to get started.
It depends. Will you ever need to make your program multithreaded? Will you ever need to connect to more than one database? Will you ever need to store state in this class? Do you need to control the lifetime of your connections? Will you need data caching in the future? If you answer yes to any of these, a static class will make things awkward.
My personal advice would be to make it an instance as this is more OO and would give you flexibility you might need in the future.
You have to be careful making this class static. In a web app, each request is handled on its own thread. Static utilities can be thread-unsafe if you are not careful. And if that happens you are not going to be happy.
I would highly recommend you follow the DAO pattern. Use a tool like Spring to make this easy for you. All you have to do is configure a datasource and your DB access and transactions will be a breeze.
If you go for a static class, you will have to design it such that it's largely stateless. The usual tactic is to create a base class with common data access functions and then derive from it in specific classes for, say, loading Customers.
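A sketch of that tactic, with illustrative names (the shared helper body is elided):

// Shared, stateless data access plumbing in the base...
public abstract class DataAccessBase
{
    protected DataSet ExecuteQuery(string connectionString, string sql)
    {
        // common ADO.NET plumbing: open connection, fill a DataSet, dispose
        throw new NotImplementedException();
    }
}

// ...entity-specific loading in the derived class.
public class CustomerDataAccess : DataAccessBase
{
    public DataSet LoadCustomers(string connectionString)
    {
        return ExecuteQuery(connectionString, "SELECT * FROM CUSTOMERS");
    }
}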
If object creation is actually the overhead in the entire operation, then you could also look at pooling pre-created objects. However, I highly doubt this is the case.
You might find that a lot of your common data access code could be made into static methods, but a static class for all data access seems like the design is lost somewhere.
Static classes don't have any issues with multi-threaded access per-se, but obviously locks and static or shared state is problematic.
By making the class static, you would have a hard time unit testing it, as you would probably have to manage the reading of the connection string internally in a non-obvious manner, either by reading it inside the class from a configuration file or requesting it from some class that manages these constants. I'd rather instantiate such a class in the traditional way:
var manager = new CompanyInfoManager(connectionString /* ...and possibly other dependencies too */);
and then assign it to a global/public static variable, if that makes sense for the class, i.e.
//this can be accessed globally
public static CompanyInfoManager Manager = manager;
That way you don't sacrifice any flexibility in your unit tests, since all of the class's dependencies are passed in through its constructor.
I've commonly seen examples like this on business objects:
public void Save()
{
if(this.id > 0)
{
ThingyRepository.UpdateThingy(this);
}
else
{
int id = 0;
ThingyRepository.AddThingy(this, out id);
this.id = id;
}
}
So why here, on the business object? This seems contextual or data-related rather than business logic.
For example, a consumer of this object might go through something like this...
...Get form values from a web app...
Thingy thingy = Thingy.CreateNew(Form["name"].Value, Form["gadget"].Value, Form["process"].Value);
thingy.Save();
Or, something like this for an update...
... Get form values from a web app...
Thingy thingy = Thingy.GetThingyByID(Int32.Parse(Form["id"].Value));
thingy.Name = Form["name"].Value;
thingy.Save();
So why is this? Why not contain actual business logic such as calculations, business specific rules, etc., and avoid retrieval/persistence?
Using this approach, the code might look like this:
... Get form values from a web app...
Thingy thingy = Thingy.CreateNew(Form["name"].Value, Form["gadget"].Value, Form["process"].Value);
int id;
ThingyRepository.AddThingy(ref thingy, out id);
Or, something like this for an update...
... get form values from a web app ...
Thingy thingy = ThingyRepository.GetThingyByID(Int32.Parse(Form["id"].Value));
thingy.Name = Form["name"].Value;
ThingyRepository.UpdateThingy(ref thingy);
In both of these examples, the consumer, who knows best what is being done with the object, calls the repository and requests either an ADD or an UPDATE. The object remains DUMB in that context, but still provides its core business logic as it pertains to itself, not how it is retrieved or persisted.
In short, I am not seeing the benefit of consolidating the GET and SAVE methods within the business object itself.
Should I just stop complaining and conform, or am I missing something?
This leads into the Active Record pattern (see P of EAA p. 160).
Personally I am not a fan. Tightly coupling business objects and persistence mechanisms so that changing the persistence mechanism requires a change in the business object? Mixing data layer with domain layer? Violating the single responsibility principle? If my business object is Account then I have the instance method Account.Save but to find an account I have the static method Account.Find? Yucky.
That said, it has its uses. For small projects with objects that directly conform to the database schema and have simple domain logic and aren't concerned with ease of testing, refactoring, dependency injection, open/closed, separation of concerns, etc., it can be a fine choice.
Your domain objects should have no reference to persistence concerns.
Create a repository interface in the domain that represents a persistence service, and implement it outside the domain (you can implement it in a separate assembly).
This way your aggregate root doesn't need to reference the repository (since it's an aggregate root, it should already have everything it needs), and it will be free of any dependency or persistence concern - hence easier to test, and domain-focused.
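A minimal sketch, reusing the question's Thingy (the repository implementation would live in a separate infrastructure assembly; the class names are illustrative):

// In the domain assembly: only the abstraction.
public interface IThingyRepository
{
    Thingy GetThingyByID(int id);
    void Save(Thingy thingy);
}

// Outside the domain: the persistence detail, free to change
// without touching the domain.
public class SqlThingyRepository : IThingyRepository
{
    public Thingy GetThingyByID(int id)
    {
        throw new NotImplementedException(); // SQL access goes here
    }

    public void Save(Thingy thingy)
    {
        throw new NotImplementedException(); // SQL access goes here
    }
}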
While I have no deep understanding of DDD, it makes sense to have one method that does an UPSERT (insert if the record doesn't exist, update otherwise).
The user of the class can then act dumb and simply call Save, whether the record is new or existing.
Having one point of action is much clearer.
EDIT: The decision whether to do an INSERT or an UPDATE is better left to the repository. The user can call Repository.Save(...), which results in a new record if the record is not already in the DB, or an update otherwise.
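A sketch of such a repository Save, with hypothetical Insert/Update helpers and a public Id property (the id check mirrors the question's example):

public void Save(Thingy thingy)
{
    if (thingy.Id > 0)
    {
        Update(thingy); // existing record
    }
    else
    {
        thingy.Id = Insert(thingy); // new record; the generated id comes back
    }
}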
If you don't like their approach, make your own. Personally, Save() instance methods on business objects smell really good to me: one less class name I need to remember. However, I don't have a problem with a factory save, and I don't see why it would be so difficult to have both, e.g.
class myObject
{
    public void Save()
{
myObjFactory.Save(this);
}
}
...
class myObjectFactory
{
public void Save(myObject obj)
{
// Upsert myObject
}
}