Accessing common functions across multiple services - C#

I'm currently building an application, and as it stands I have a single service for each controller (the service handles the business logic for the controller). Each service has its own DbContext.
I've recognised that several services need to perform the same functions (retrieve the same lists of data from the database and perform the same logic on them before returning them), so ideally I need a way for the services to access common functions.
My first thought is to create a simple helper class that each service could use, with simple functions that take a DbContext as one of the parameters, so that the functions could perform database queries as well as logic and return the result.
Is this a good idea? Would I run into problems by structuring my code this way, or is there a better more robust and accepted approach I should take?

I'd say you're on the right track, but go one step further with the single responsibility principle (see http://blog.codinghorror.com/curlys-law-do-one-thing/). It's a proven strategy for keeping code clean. I avoid "helper" classes per se; they can get messy by taking on too many responsibilities. Instead I try to really think about what my class should do, then give it a really good name to remind me that it only does that one thing.
The fact that your services each have their own DbContext can be a problem. Just make sure that if you call upon more than one dependent service, you pass the same DbContext to them all. If your object graph is large, a container like Autofac will be a big help.
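As a minimal sketch of that idea with Autofac (the ApplicationDbContext and service names here are hypothetical stand-ins for your own types, each service taking the context through its constructor):

var builder = new ContainerBuilder();

// One DbContext per lifetime scope: every service resolved within the
// same scope (e.g., a single web request) receives the same instance.
builder.RegisterType<ApplicationDbContext>().InstancePerLifetimeScope();
builder.RegisterType<OrderService>();
builder.RegisterType<CustomerService>();

var container = builder.Build();

using (var scope = container.BeginLifetimeScope())
{
    // Both services share the same ApplicationDbContext here.
    var orders = scope.Resolve<OrderService>();
    var customers = scope.Resolve<CustomerService>();
}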

Is the data being returned the same? Are they using their own unique DbContext or the same one?
Generally I would recommend against creating a helper class here; a helper class is usually meant to manipulate objects rather than perform database queries.
Based on your comment there are two ways you could achieve this, one easier than the other.
Option 1:
If your application really is a simple one and you're not too concerned about doing things the 'correct' way, then you could simply create a base service class, update your services to extend it, and move your common database access into the base class, like so:
abstract class BaseService
{
    ...
    public ICollection<ExampleRecord> GetDatabaseRecords()
    {
        using (var context = new ApplicationDbContext())
        {
            /* Your DbContext code, e.g.: */
            var databaseRecords = context.Set<ExampleRecord>().ToList();
            return databaseRecords;
        }
    }
    ...
}
Then extend BaseService like so:
public class ExampleService : BaseService
{
    ...
    public ICollection<ExampleRecord> GetRecords()
    {
        return this.GetDatabaseRecords();
    }
    ...
}
This would get the job done and be a better option than what you're currently doing; however, it's generally not the best approach.
Option 2:
If your application is more than a simple one and you're concerned about code maintainability, then I would suggest moving your database access code into a separate repository class and using an IoC container such as StructureMap to inject the repository into your services via dependency injection.
Personally I would recommend option 2, as it's far cleaner and more maintainable/extensible, and you're not violating any of the SOLID principles.
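A rough sketch of what option 2 could look like (the repository and entity names are illustrative, not from the question):

public interface IExampleRecordRepository
{
    ICollection<ExampleRecord> GetRecords();
}

public class ExampleRecordRepository : IExampleRecordRepository
{
    public ICollection<ExampleRecord> GetRecords()
    {
        using (var context = new ApplicationDbContext())
        {
            return context.Set<ExampleRecord>().ToList();
        }
    }
}

public class ExampleService
{
    private readonly IExampleRecordRepository _repository;

    // The container supplies the repository; the service never
    // constructs its own data access.
    public ExampleService(IExampleRecordRepository repository)
    {
        _repository = repository;
    }

    public ICollection<ExampleRecord> GetRecords()
    {
        return _repository.GetRecords();
    }
}

The StructureMap wiring is then a single registration, e.g. x.For<IExampleRecordRepository>().Use<ExampleRecordRepository>();, and every service that needs those records just asks for the interface in its constructor.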

You can use an abstract service to define common methods.
This is a good tutorial about a generic repository, a service layer, IoC, and unit testing with Entity Framework.
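In that spirit, a bare-bones generic repository might look like the following (a sketch only, not the tutorial's exact code; EfRepository and the members shown are illustrative):

public interface IRepository<T> where T : class
{
    IEnumerable<T> GetAll();
    T GetById(int id);
    void Add(T entity);
    void Remove(T entity);
}

// One Entity Framework-backed implementation covers every entity type.
public class EfRepository<T> : IRepository<T> where T : class
{
    private readonly DbContext _context;

    public EfRepository(DbContext context)
    {
        _context = context;
    }

    public IEnumerable<T> GetAll() { return _context.Set<T>().ToList(); }
    public T GetById(int id) { return _context.Set<T>().Find(id); }
    public void Add(T entity) { _context.Set<T>().Add(entity); }
    public void Remove(T entity) { _context.Set<T>().Remove(entity); }
}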

Related

Why is an interface used when implementing the repository pattern, with entity framework?

When implementing the repository pattern in conjunction with Entity Framework, why do I see many examples using an interface for their repository class? A specific example of this is referenced here.
What's the point of the interface? Why not just the class? Will there really need to be more than one class subscribing to that very specific interface, for just Employees, for example?
It's a frequently used pattern, quite often employed specifically for unit testing, and not really specific to Entity Framework, the repository pattern, or even data access in general. Another great benefit is that it gives you the chance to later provide an alternate implementation without changing the code that uses it.
Consider for example this code, that uses the dependency injection pattern:
public class EmployeeService
{
    private readonly IEmployeeRepository employeeRepository;

    public EmployeeService(IEmployeeRepository employeeRepository)
    {
        this.employeeRepository = employeeRepository;
    }

    public IEnumerable<Employee> GetAllEmployees()
    {
        IEnumerable<Employee> employeeList = this.employeeRepository.GetAll();
        //Optionally do some processing here
        return employeeList;
    }
}
By having the interface on the repository, you can now work entirely against the interface without ever mentioning the actual repository class; this is the real value of it. It gives two main benefits:
If you want to write an automated unit test for this class, you may give it a fake implementation of IEmployeeRepository, which does not go to the real database but instead returns a hardcoded list, so that you can test your method without worrying about the DB for now. This is called a 'mock', and it is often the main reason for putting that interface there. There are also a couple of libraries that automate the process, all relying on the fact that they can generate a fake class implementing an interface. By far, that's the most common reason for an interface like this.
You may decide sometime in the future that you want to replace Entity Framework with something else, or, say, want to implement a repository over something other than a relational DB. In that case, you would write another repository implementing the very same interface but doing something completely different. Given that the services rely only on the interface, that code will work entirely unmodified as long as the same contract is respected (of course, the code that actually creates the repository and gives it to the service must change, but that's another story). That way the same service works the same no matter where it reads and saves its data.
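To make the first benefit concrete, here is what a hand-written fake could look like in a test (assuming NUnit and an Employee type with a Name property; mocking libraries such as Moq generate this kind of class for you):

public class FakeEmployeeRepository : IEmployeeRepository
{
    // Hardcoded data instead of a database round trip.
    public IEnumerable<Employee> GetAll()
    {
        return new List<Employee>
        {
            new Employee { Name = "Alice" },
            new Employee { Name = "Bob" }
        };
    }
}

[TestFixture]
public class EmployeeServiceTests
{
    [Test]
    public void GetAllEmployees_ReturnsAllRecords()
    {
        var service = new EmployeeService(new FakeEmployeeRepository());

        // The service logic is exercised without touching the real database.
        Assert.AreEqual(2, service.GetAllEmployees().Count());
    }
}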

Session and Transaction management in Service Layer

I have an application that will have several presentation layers (web, mobile, WPF, WCF, a Windows service doing background work, etc.), and we are using NHibernate to persist the domain objects. We will have repositories (a class library) to persist data and a service layer that uses these repositories to persist according to business rules. My question is: we do not know how to implement transaction management in this service layer. We will probably use more than one repository in the same service-layer method, and we need to control the transaction in the service layer. I would like to implement something like this (by attributes):
public class DomainObjectService
{
    [Transactional]
    public bool CreateDomainObject(DomainObject domainObject, /* other parameters */)
    {
        foreach (var item in /* collection */)
        {
            _itemRepository.Save(item);
        }
        if (/* some condition */)
        {
            /* change the domainObject here */
        }
        _domainObjectRepository.Save(domainObject);
        return true;
    }
}
And this Transactional attribute would control my transaction, committing on success and rolling back when we get errors. Is it possible? Or is there another solution to do this?
Thank you
What you have asked does not have a straightforward answer.
The behavior you wish to have sounds like you need to implement a unit of work pattern.
NHibernate's own ISession is in fact an implementation of a unit of work. I personally recommend implementing your own unit of work so that you have greater control over what your specific application considers a unit of work.
The use of attributes in a service layer class really doesn't make a lot of sense to me personally. I have seen people create custom controller attributes in an MVC application that handles transactions but I've never personally agreed with that kind of implementation.
You mentioned using more than one repository in the service layer. This is quite a common practice, but it also means that each of those repositories will need to be operating within the same unit of work. If your application is using dependency injection, then one option is to have each repository accept an ISession in its constructor. Your dependency injection framework of choice could be set up to inject the same ISession into all of the repositories, and your setup could be configured to begin a new transaction every time a new ISession is created.
You also mentioned different presentation layers such as web, mobile, WPF, etc. How you deal with sessions and transactions in each of those different types of applications can be quite different. That is why I always point people toward the unit of work, because each of those application types could have a completely different definition of what it considers a unit of work. For a web application, you would typically start a new unit of work for each web request. For a WPF application, the unit of work could be per screen, or last until the user hits the save button, etc. Also, by implementing a unit of work, you can reuse the same implementation more easily across those different application types.
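As a minimal sketch of such a unit of work over NHibernate (error handling and per-application scoping omitted; how you scope and resolve it is up to your container):

public class UnitOfWork : IDisposable
{
    private readonly ISession _session;
    private readonly ITransaction _transaction;

    public UnitOfWork(ISessionFactory sessionFactory)
    {
        _session = sessionFactory.OpenSession();
        _transaction = _session.BeginTransaction();
    }

    // Hand this same session to every repository in the unit of work.
    public ISession Session { get { return _session; } }

    public void Commit()
    {
        _transaction.Commit();
    }

    public void Dispose()
    {
        // Anything not committed rolls back when the scope ends.
        if (_transaction.IsActive)
            _transaction.Rollback();
        _session.Dispose();
    }
}

A service method would then create (or receive) one UnitOfWork, hand its Session to each repository it uses, and call Commit at the end; disposing without committing rolls everything back.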
Again, this is not a question with a straightforward answer, but in general I make use of a custom unit of work and a dependency injection framework to make this problem much easier to deal with.
Here are some helpful links that you may wish to investigate:
http://nhibernate.info/doc/patternsandpractices/nhibernate-and-the-unit-of-work-pattern.html
Correct use of the NHibernate Unit Of Work pattern and Ninject
Unit of work/repository managers for NHibernate?

Dependency Injection: what's the big improvement?

Currently I'm trying to understand dependency injection better, and I'm using ASP.NET MVC to work with it. You might see some other related questions from me ;)
Alright, I'll start with an example controller (from an example Contacts Manager ASP.NET MVC application):
public class ContactsController
{
    ContactsManagerDb _db;

    public ContactsController()
    {
        _db = new ContactsManagerDb();
    }

    //...Actions here
}
Alright, awesome, that's working. My actions can all use the database for CRUD actions. Now I've decided I want to add unit testing, and I've added another constructor to mock a database:
public class ContactsController
{
    IContactsManagerDb _db;

    public ContactsController()
    {
        _db = new ContactsManagerDb();
    }

    public ContactsController(IContactsManagerDb db)
    {
        _db = db;
    }

    //...Actions here
}
Awesome, that's working too; in my unit tests I can create my own implementation of IContactsManagerDb and unit test my controller.
Now, people usually take the following step (and here is my actual question): get rid of the empty constructor and use dependency injection to define which implementation to use.
So using StructureMap I've added the following injection rule:
x.For<IContactsManagerDb>().Use<ContactsManagerDb>();
And of course in my testing project I'm using a different IContactsManagerDb implementation:
x.For<IContactsManagerDb>().Use<MyTestingContactsManagerDb>();
But my question is: what problem have I solved, or what have I simplified, by using dependency injection in this specific case?
I fail to see any practical use of it right now. I understand the HOW but not the WHY. What's the use of this? Can anyone add to this project, perhaps with an example of how this is more practical and useful?
The first example is not unit testable, so it is not good: it creates a strong coupling between the different layers of your application and makes them less reusable. The second example is called poor man's dependency injection. It's also discussed here.
What is wrong with poor man's dependency injection is that the code is not self-documenting. It doesn't state its intent to the consumer. A consumer sees this code and could easily call the default constructor without passing any argument, whereas if there were no default constructor it would be immediately clear that this class absolutely requires some contract to be passed to its constructor in order to function normally. And it is really not up to the class to decide which specific implementation to use; that is up to the consumer of the class.
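In other words, the class should state its requirement once and leave the choice of implementation entirely to the consumer. With the question's own types, that looks like:

public class ContactsController
{
    private readonly IContactsManagerDb _db;

    // No default constructor: the dependency is mandatory and explicit,
    // and the container (or a test) decides which implementation to pass.
    public ContactsController(IContactsManagerDb db)
    {
        _db = db;
    }

    //...Actions here
}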
Dependency injection is useful for three main reasons:
It is a method of decoupling interfaces and implementations.
It is good for reducing the amount of boilerplate / factory methods in an application.
It increases the modularity of packages.
As an example, consider a unit test that requires access to a class defined as an interface. In many cases, a unit test for an interface would have to invoke implementations of that interface; thus, if an implementation changed, so would the unit test. However, with DI, you can "inject" an interface's implementation at run time into the unit test using the injection API, so that changes to implementations only have to be handled by the injection framework, not by the individual classes that use those implementations.
Another example is in the web world: consider the coupling between service providers and service definitions. If a particular component needs access to a service, it is better to design to the interface than to a particular implementation of that service. Injection enables such design, again, by allowing you to dynamically add dependencies by referencing your injection framework.
Thus, the various couplings of classes to one another are moved out of factories and individual classes and dealt with in a uniform, abstract, reusable, and easily maintained manner when one has a good DI framework. The best tutorials on DI that I have seen are Google's Guice tutorials, available on YouTube. Although these are not the same as your particular technology, the principles are identical.
First, your example won't compile. var _db; is not a valid statement, because the type of the variable has to be inferable at the point of declaration.
You could do var _db = new ContactsManagerDb();, but then your second constructor won't compile because you're trying to assign an IContactsManagerDb to a variable of type ContactsManagerDb.
You could change it to IContactsManagerDb _db; and then make sure that ContactsManagerDb derives from IContactsManagerDb, but then that makes your first constructor irrelevant. You have to have the constructor that takes the interface argument anyway, so why not just use it all the time?
Dependency injection is all about removing dependencies from the classes themselves. ContactsController doesn't need to know about ContactsManagerDb in order to use IContactsManagerDb to access the contacts manager.

How to inject Ninject itself into a static class with extension functions

I have some static classes with extension methods which add 'business logic' to entities using the repository pattern.
Sometimes I need to create a new IRepository in these extension functions.
I'm currently working around it by accessing my Ninject kernel through the object I'm extending, but it's really ugly:
public static IEnumerable<ISomething> GetSomethings(this IEntity entity)
{
    using (var dataContext = entity.kernel.Get<IDataContext>())
        return dataContext.Repository<ISomething>().ToList();
}
I could also make a static constructor that accesses the Ninject kernel somehow from a factory; is there already infrastructure for that in Ninject 2?
Does anybody know a better solution? Does anybody have comments on this way of implementing business logic?
On the issue of extension methods and how they get stuff, you have two approaches:
Service Location - have a global Kernel and drop down to Service Location (which is different from Dependency Injection). The problem here, though, is that your Entity (or its extensions) should not be assuming its context but should instead be demanding it.
Parameter passing - as you are an extension method, have the caller pass in the thing you need (see the sketch after this list).
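A sketch of the second approach, using the question's own types; the dependency becomes a parameter, so the caller (which was itself resolved from the container) supplies it:

public static IEnumerable<ISomething> GetSomethings(
    this IEntity entity, IDataContext dataContext)
{
    // No kernel access here; the caller owns the context and its lifetime.
    return dataContext.Repository<ISomething>().ToList();
}

Note that, unlike the original, this version leaves disposing the context to the caller, which is usually what you want when the context is shared across several calls.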
As you've more or less guessed, the first option (having a global Kernel that becomes a dumping ground) is something that Ninject tries to dissuade you from. In general, the extension for whatever you're using (e.g., MVC or WCF) will provide something if it's appropriate. For example, the WCF extension has http://github.com/ninject/ninject.extensions.wcf/blob/master/source/Ninject.Extensions.Wcf/NinjectServiceHost.cs
The larger issue here is that dependencies like this should probably not propagate down to the Entity level - it should stay at the Service level and be propagated from there (using DDD vocabulary).
You may find this answer by me interesting, as it covers this ground a bit (more from a Ninject-techniques than an architectural-concepts perspective).

DAL design question

I need to design a data access layer (DAL) with the .NET Enterprise Library version 3.5 Data Access Application Block (DAAB).
In my application, I have various logical modules like registration, billing, order management, user management, etc.
I'm using C# business entities to map the module objects to database tables and then return List collections to the client.
I would like to design my DAL in such a way that if tomorrow we decide to use some other data access framework, we should have minimal code changes.
Given this, how do I design my class structure?
I thought I would have a class DbManagerBase which would be a wrapper over existing .net DAAB
This class DbManagerBase would implement an interface called IDbManagerBase which would have public methods like ExecuteReader, ExecuteNonQuery, etc.
The client classes, i.e. RegistrationDAL and UserManagementDAL, would have the following code inside each of their methods:
IDbManagerBase obj = new DbManagerBase();
obj.ExecuteReader(myStoredProcName);
...
Is this a good OOP design? May I know of a better approach, please? Or do I need to use inheritance here?
Can I have all the methods in the DbManagerBase class and the RegistrationDAL and UserManagementDAL classes as static? I guess if I have static methods then the above interface code won't make any sense, right?
To truly abstract the DAL I'd use the repository pattern.
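For example (a sketch with hypothetical Registration types; the stored procedure and column names are made up), each module gets an interface expressed in terms of business entities, and the DAAB calls live only inside the implementation:

public interface IRegistrationRepository
{
    IList<Registration> GetAll();
    void Save(Registration registration);
}

// The only class that knows about the Data Access Application Block;
// moving to another framework later means rewriting this class alone.
public class DaabRegistrationRepository : IRegistrationRepository
{
    public IList<Registration> GetAll()
    {
        var db = DatabaseFactory.CreateDatabase();
        var registrations = new List<Registration>();
        using (var reader = db.ExecuteReader(CommandType.StoredProcedure, "usp_GetRegistrations"))
        {
            while (reader.Read())
            {
                // Map each row to a business entity (columns are made up).
                registrations.Add(new Registration { Name = (string)reader["Name"] });
            }
        }
        return registrations;
    }

    public void Save(Registration registration)
    {
        var db = DatabaseFactory.CreateDatabase();
        db.ExecuteNonQuery("usp_SaveRegistration", registration.Name);
    }
}

The rest of the application sees only IRegistrationRepository and the entities, never ExecuteReader or stored procedure names.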
To answer a few of the questions:
"Can I have all the methods in the DbManagerBase class and the RegistrationDAL and UserManagementDAL classes as static?"
I would probably go with a non-static approach, because it gives the flexibility to better control instantiation of the DALs (e.g., you could create instances of them from a factory). It also allows you to have two DALs in place that talk to different DBs in a cleaner way, and you will not need to create an instance of DbManagerBase in every object, since it would be an instance member.
Regarding IDbManagerBase having ExecuteReader and ExecuteNonQuery, and calls like obj.ExecuteReader(myStoredProcName):
I would be careful about baking knowledge of database-specific concepts into too many places. Keep in mind that some DBs do not support stored procedures.
Another point: before I went about implementing a DAL of sorts, I would be sure to read through some code in open source DALs like NHibernate or SubSonic. It is completely possible they would solve your business problem and reduce your dev time significantly.
If you are looking for a small example of a layered DAL architecture, there is my little project on GitHub (it is very basic, but it shows how you can build interfaces to support a lot of esoteric databases).
