I have an application that will have several presentation layers (web, mobile, WPF, WCF, a Windows service doing background work, etc.), and we are using NHibernate to persist the domain objects. We will have repositories (a class library) to persist data and a service layer that uses these repositories to persist according to business rules. My question is: we do not know how to implement transaction management in this service layer. We will probably use more than one repository in the same service-layer method, so we need to control the transaction at the service layer. I would like to implement something like this (via attributes):
public class DomainObjectService
{
    [Transactional]
    public bool CreateDomainObject(DomainObject domainObject, /* other parameters */)
    {
        foreach (var item in /* collection */)
        {
            _itemRepository.Save(item);
        }

        if (/* some condition */)
        {
            /* change the domainObject here */
        }

        _domainObjectRepository.Save(domainObject);
        return true;
    }
}
Would this Transactional attribute control my transaction, committing on success and rolling back when we get errors? Is that possible? Or is there another solution for this?
Thank you
What you have asked does not have a straightforward answer.
The behavior you wish to have sounds like you need to implement a unit of work pattern.
NHibernate's own ISession is in fact an implementation of a unit of work. I personally recommend implementing your own unit of work so that you have greater control over what your specific application considers a unit of work.
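For illustration, a minimal custom unit of work over ISession might look something like the sketch below. The IUnitOfWork interface and its names are my own illustration, not a prescribed design:
using System;
using NHibernate;

public interface IUnitOfWork : IDisposable
{
    void Commit();
    void Rollback();
}

public class NHibernateUnitOfWork : IUnitOfWork
{
    private readonly ISession _session;
    private readonly ITransaction _transaction;

    public NHibernateUnitOfWork(ISessionFactory sessionFactory)
    {
        // One session and one transaction per unit of work.
        _session = sessionFactory.OpenSession();
        _transaction = _session.BeginTransaction();
    }

    // Repositories created for this unit of work share this session.
    public ISession Session
    {
        get { return _session; }
    }

    public void Commit()
    {
        _transaction.Commit();
    }

    public void Rollback()
    {
        _transaction.Rollback();
    }

    public void Dispose()
    {
        _transaction.Dispose();
        _session.Dispose();
    }
}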
The use of attributes in a service layer class really doesn't make a lot of sense to me personally. I have seen people create custom controller attributes in an MVC application that handle transactions, but I've never agreed with that kind of implementation.
You mentioned using more than one repository in the service layer. This is quite a common practice, but it also means that each of those repositories will need to be operating within the same unit of work. If your application is using dependency injection, then one option is to have each repository accept an ISession in its constructor. Your dependency injection framework of choice could be set up to inject the same ISession into all of the repositories, and your setup could be configured to begin a new transaction every time a new ISession is created.
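Here is a hedged sketch of that wiring with Ninject (any container with per-request scoping has an equivalent; this assumes an ISessionFactory singleton is already registered):
using NHibernate;
using Ninject;
using Ninject.Modules;
using Ninject.Web.Common;

public class DataModule : NinjectModule
{
    public override void Load()
    {
        // Every repository resolved during a single web request receives
        // the same ISession instance.
        Bind<ISession>()
            .ToMethod(ctx => ctx.Kernel.Get<ISessionFactory>().OpenSession())
            .InRequestScope();
    }
}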
You also mentioned different presentation layers such as web, mobile, WPF, etc. How you deal with sessions and transactions in each of those different types of applications can be quite different. That is why I always point people in the unit of work direction, because each of those application types could have a completely different definition of what it considers a unit of work. For a web application, you would typically go with a new unit of work for each web request. For a WPF application, the unit of work could be per screen, or last until the user hits the save button, etc. Also, by implementing a unit of work, you can reuse that same implementation more easily across those different application types.
Again, this is not a question with a straightforward answer, but in general I typically make use of a custom unit of work and a dependency injection framework to make this problem much easier to deal with.
Here are some helpful links that you may wish to investigate:
http://nhibernate.info/doc/patternsandpractices/nhibernate-and-the-unit-of-work-pattern.html
Correct use of the NHibernate Unit Of Work pattern and Ninject
Unit of work/repository managers for NHibernate?
In my project I use Entity Framework 7 and ASP.NET MVC 6 / ASP.NET 5, and I want to create CRUD operations for my own models.
Which approach is better:
1. Use the DbContext directly from the controller. In the following link the author explains why this way is better, but is it really right for controllers?
2. Write my own wrapper. Some best-practices articles say it is best to write your own repository.
I'm not going to swap Entity Framework for anything else, so I don't mind a strong coupling of the data access code to a particular implementation,
and I know that in EF7 DbContext already implements the Unit of Work and Repository patterns.
The answer to your question is primarily opinion-based. No one can definitively say "one way is better than the other" until a lot of other questions are answered. What is the size / scope / budget of your project? How many developers will be working on it? Will it only have (view-based) MVC controllers, or will it have (data-based) API controllers as well? If the latter, how much overlap will there be between the MVC and API action methods, if any? Will it have any non-web clients, like WPF? How do you plan to test the application?
Entity Framework is a Data Access Layer (DAL) tool. Controllers are HTTP client request & response handling tools. Unless your application is pure CRUD (which is doubtful), there will probably be some kind of Business Logic processing that you will need to do between when you receive a web request over HTTP and when you save that request's data to a database using EF (field X is required, if you provide data for field Y you must also provide data for field Z, etc). So if you use EF code directly in your controllers, this means your business processing logic will almost surely be present in the controllers along with it.
Those of us who have a decent amount of experience developing non-trivial applications with .NET tend to develop opinions that neither business nor data access logic should be present in controllers because of certain difficulties that emerge when such a design is implemented. For example when you put web/http request & response logic, along with business logic and data access logic into a controller, you end up having to test all of those application aspects from the controller actions themselves (which is a glaring violation of the Single Responsibility Principle, if you care about SOLID design). Also let's say you develop a traditional MVC application with controllers that return views, then decide you want to extend the app to other clients like iOS / android / WPF / or some other client that doesn't understand your MVC views. If you decide to implement a secondary set of WebAPI data-based controller actions, you will be duplicating business and data access logic in at least 2 places.
Still, this does not make a decision to keep all business & data-access logic in controllers intrinsically "worse" than an alternate design. Any decision you make when designing the architecture of a web application is going to have advantages and disadvantages. There will always be trade-offs no matter which route you choose. Advantages of keeping all of your application code in controllers can include lower cost, complexity, and thus, time to market. It doesn't make sense to over-engineer complex architectures for very simple applications. However unfortunate, I have personally never had the pleasure of developing a simple application, so I am in the "general opinion" boat that keeping business and data access code in controllers is "probably not" a good long-term design decision.
If you're really interested in alternatives, I would recommend reading these two articles. They are a good primer on how one might implement a command & query (CQRS) pattern that controllers can consume. EF does implement both the repository and unit of work patterns out of the box, but that does not necessarily mean you need to "wrap" it in order to move the data access code outside of your controllers. Best of luck making these kinds of decisions for your project.
public async Task<ActionResult> Index()
{
    var user = await query.Execute(new UserById(1));
    return View(user);
}
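The snippet above presupposes some query abstraction; the articles define their own shapes, but a hypothetical minimal version (with User standing in for your entity type) might be:
using System.Threading.Tasks;

// Marker interface tying a query object to its result type.
public interface IQuery<TResult> { }

// The shape that the controller's `query` field would implement.
public interface IQueryProcessor
{
    Task<TResult> Execute<TResult>(IQuery<TResult> query);
}

// The query object used in the action method above.
public class UserById : IQuery<User>
{
    public UserById(int id)
    {
        Id = id;
    }

    public int Id { get; private set; }
}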
Usually I prefer using the Repository pattern along with the UnitOfWork pattern (http://www.asp.net/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application): I instantiate the DbContext in a UnitOfWork instance and inject that DbContext into the repositories. After that I instantiate the UnitOfWork in the controller, and the controller does not know anything about the DbContext:
public ActionResult Index()
{
var user = unitOfWork.UsersRepository.GetById(1); // unitOfWork is dependency injected using Unity or Ninject or some other framework
return View(user);
}
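A hedged sketch of what such a unit of work might look like, following the shape of the linked tutorial (ApplicationDbContext and UserRepository are illustrative names):
using System;

public class UnitOfWork : IDisposable
{
    private readonly ApplicationDbContext _context = new ApplicationDbContext();
    private UserRepository _usersRepository;

    // Repositories are created lazily and all share the same DbContext,
    // which the controller never sees.
    public UserRepository UsersRepository
    {
        get
        {
            if (_usersRepository == null)
            {
                _usersRepository = new UserRepository(_context);
            }
            return _usersRepository;
        }
    }

    public void Save()
    {
        _context.SaveChanges();
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}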
This depends on the lifecycle of your application.
If it will be used, extended and changed for many years, then I'd say creating a wrapper is a good choice.
If it is a small application and, as you have said, you don't intend to change EntityFramework to another ORM, then spare yourself the work of creating a wrapper and use it directly in the controller.
There is no definite answer to this. It all depends on what you are trying to do.
If you are going for code maintainability I would suggest using a wrapper.
I'm currently building an application, and as it stands I have a single service for each controller (the service handles the business logic for the controller). Each service has its own DbContext.
I've recognised that several services need to perform the same functions (retrieve the same lists of data from the database and perform the same logic on them before returning them). So ideally I need a way for the services to access common functions.
My first thought is to create a simple helper class that each service could use, with simple functions that take a DbContext as one of their parameters, so that the functions can perform database queries as well as logic and return the result.
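Roughly something like this (names are just placeholders):
using System.Collections.Generic;
using System.Linq;

public static class ServiceHelper
{
    // Shared query plus shared logic, reusable from any service that
    // passes in its own DbContext.
    public static List<ExampleRecord> GetProcessedRecords(ApplicationDbContext context)
    {
        var records = context.ExampleRecords.ToList();
        // ...common logic applied to the records before returning...
        return records;
    }
}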
Is this a good idea? Would I run into problems by structuring my code this way, or is there a better more robust and accepted approach I should take?
I'd say you're on the right track, but go one step further with the single responsibility principle. http://blog.codinghorror.com/curlys-law-do-one-thing/. It's a proven strategy for keeping code clean. I avoid "helper" classes per se. They can get messy by having too many responsibilities. Instead I try to really think about what my class should do. Then I give it a really good name to remind me that it only does that one thing.
The fact that your services each have their own DbContext can be a problem. Just make sure that if you call upon more than one dependent service, you pass the same DbContext to them all. If your object graph is large, a container like Autofac will be a big help.
Is the data being returned the same? Are they using their own unique DB context or is it the same DB context?
Generally I would recommend avoiding a helper class here; helper classes are usually meant to manipulate an object (or objects) rather than to perform database queries.
Based on your comment there are two ways you could achieve this, one easier than the other.
Option 1:
If your application really is a simple one and you're not too concerned about doing things the 'correct' way, then you could simply create a base service class, have your services extend it, and move your common database access into the base class, like so:
abstract class BaseService
{
    ...
    public ICollection<ExampleRecord> GetDatabaseRecords()
    {
        using (var context = new ApplicationDbContext())
        {
            // Load and return the shared records while the context
            // is still in scope.
            var databaseRecords = /* your DbContext query */;
            return databaseRecords;
        }
    }
    ...
}
Then extend BaseService like so:
public class ExampleService : BaseService
{
...
public ICollection<ExampleRecord> GetRecords()
{
return this.GetDatabaseRecords();
}
...
}
This would get the job done and would be an improvement over what you're currently doing; however, it's generally not the best approach.
Option 2:
If your application is more than a simple one and you're concerned about code maintainability, then I would suggest moving your database access code into a separate repository class and using an IoC container such as StructureMap to inject that repository into your services via dependency injection.
Personally I would recommend option 2 as it's far cleaner, more maintainable/extensible and you're not violating any of the SOLID principles.
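To sketch option 2 (the names are illustrative, and the repository's internals plus the StructureMap registration are omitted):
using System.Collections.Generic;

public interface IExampleRecordRepository
{
    ICollection<ExampleRecord> GetRecords();
}

public class ExampleService
{
    private readonly IExampleRecordRepository _repository;

    // The IoC container supplies the repository via constructor injection.
    public ExampleService(IExampleRecordRepository repository)
    {
        _repository = repository;
    }

    public ICollection<ExampleRecord> GetRecords()
    {
        return _repository.GetRecords();
    }
}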
You can use an abstract service to define common methods.
This is a good tutorial about a generic repository, a service layer, IoC and unit testing with Entity Framework.
I've got some really expensive business logic inside my domain layer where the data must be tracked in order to get a picture of what happened if something fails. Because of this, I want to declare a simple logging interface:
public interface ILogger {
void Log(LogEntry entry);
}
Now my question is: where does this interface belong? Of course logging might be an infrastructure concern (and a bit of a cross-layer concern), but if I place it in the infrastructure layer, my domain services don't get access to it. If I place it in the domain layer, I introduce the concept of logging into my domain, which feels awkward.
I'm already using certain concepts from CQRS and event sourcing inside my application, but raising an event for everything that happens to the data inside a domain service seems like overkill (especially when the data passes through states that are never returned by the domain service until further transformations have been made).
There are some options here.
Use decorators. You say you are already using CQRS, so add decorators to the commands/queries you want to log; a sketch follows after option 2. The downside is that you can only log before and after the execution of the command/query, not during it. And I'm not sure it will be easy to log your events this way as well.
Use your interface. If you choose this path, then indeed your ILogger interface should be in the domain layer, because the domain requires a component that fulfils your logging requirements, and so the domain layer is the one to define the interface. The implementation must live elsewhere, and the infrastructure layer sounds fine to me.
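To illustrate option 1, here is a minimal sketch of a logging decorator. It assumes a generic ICommandHandler<T> abstraction and a LogEntry(string) constructor, neither of which comes from the question, so treat both as placeholders:
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public class LoggingCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> _inner;
    private readonly ILogger _logger;

    public LoggingCommandHandlerDecorator(ICommandHandler<TCommand> inner, ILogger logger)
    {
        _inner = inner;
        _logger = logger;
    }

    public void Handle(TCommand command)
    {
        // As noted above, logging happens only before and after the
        // handler runs, not during its execution.
        _logger.Log(new LogEntry("Executing " + typeof(TCommand).Name));
        _inner.Handle(command);
        _logger.Log(new LogEntry("Executed " + typeof(TCommand).Name));
    }
}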
[...] my domain services don't get access to it
Why not? ILogger should live in the infrastructure layer, but who said that the domain layer has no access to infrastructure members?
As far as I know, infrastructure is unrelated, non-domain-specific code which solves common problems like I/O, networking, database access and so on. And logging is an infrastructure concern.
Infrastructure code should implement or provide cross-layer software pieces, and it might provide an infrastructure-based ILogger implementation. If your domain requires some kind of domain-specific logging code, you'll provide a SomeDomainLogger implemented in the domain layer.
I don't know whether you're already using inversion of control, but it is the best way to load these kinds of infrastructure implementations.
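As a purely illustrative sketch, with ILogger living in the infrastructure layer, a domain-specific logger could still be implemented in the domain layer on top of an infrastructure-provided sink:
public class SomeDomainLogger : ILogger
{
    private readonly ILogger _inner; // infrastructure-provided implementation

    public SomeDomainLogger(ILogger inner)
    {
        _inner = inner;
    }

    public void Log(LogEntry entry)
    {
        // Enrich the entry with domain context here, then delegate
        // the actual writing to the infrastructure logger.
        _inner.Log(entry);
    }
}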
Hi, I created an extension method to control the lifecycle of an EF context. My code is below:
public static Entities GetCentralRepositoryContext(this HttpContext httpcontext)
{
    // Cache a single Entities instance per request in the Items bag
    // of the HttpContext this method extends.
    if (httpcontext.Items["context"] == null)
    {
        httpcontext.Items["context"] = new Entities();
    }
    return (Entities)httpcontext.Items["context"];
}
I've created many layers in my solution as separate projects and have started to think about IoC. The above code sits in my BL (business logic) project, but for it to work I need to reference my DL (data layer) project, as that's where the Entities class resides. How can I remove the reference to my DL project and instead inject the dependency into my extension method? Is this even possible?
The approach you are taking has several problems. First of all, static methods tend to be a problem for loose coupling, and you'll notice this quickly when trying to unit test your code. Besides this, your business layer has a dependency on System.Web, which makes your business layer technology-specific and will make it very hard to move part of the system to, for instance, a Windows service; it also makes unit testing almost impossible.
Instead of doing this, start injecting your Entities class into the constructor of all types that need it. At the beginning of each request you can build up the dependency graph of services in your application, specific to that request. At that point you can determine that the Entities instance should be given a per-web-request lifetime.
This however, will start to get cumbersome to do without a DI framework. Or at least, a DI framework will make this much easier to do.
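A hedged sketch of what that constructor injection could look like, using a hypothetical CustomerService (the container registration that gives Entities a per-web-request lifetime is container-specific and omitted):
public class CustomerService
{
    private readonly Entities _entities;

    // The request-scoped Entities instance is supplied by the container,
    // so this class never touches HttpContext.
    public CustomerService(Entities entities)
    {
        _entities = entities;
    }
}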
When you start writing unit tests, you'll find that it will be very hard when directly using the EF ObjectContext in your application. This article might give you some ideas how to abstract the ObjectContext behind a testable interface.
I would like to hear from you what the main advantages and drawbacks are of applying dependency injection at the controller level and/or the domain level.
Let me explain: if I receive an IUserRepository as a parameter for my User, I may proceed in two ways:
1. I inject IUserRepository directly into my domain object, and then consume User at the controller level without newing up objects; that is, I get them ready-made from the DI container.
2. I inject IUserRepository into my controller (say, Register.aspx.cs), and there I new up all my domain objects using dependencies that come from the DI container.
Yesterday, when I was talking to my friend, he told me that if you get your domain objects from the container you lose control of their lifecycle, as the container manages it for you; he meant that it could be error-prone when dealing with large XML configuration files. I disagree, as you can have a test that loops through every domain object within an assembly and asks the container whether it is a singleton, request-scoped, session-scoped or app-scoped, failing if any of these are true. That is a way of ensuring that this kind of issue won't happen.
I feel more inclined to use the domain approach (1), as I see a large saving in repetitive lines of code at the controller level (of course there will be more lines in the XML file).
Another point my friend raised was this: imagine that for some reason you're obliged to change from DI container A to B, and say B has no support for constructor injection (which is the case for the Seam container in Java, which manipulates bytecode or only does its work via setter injection). His point is that if I have all my wiring at the controller level, I'm able to refactor my code smoothly, as I get access to tools like auto-refactoring and auto-complete, which are unavailable when you're dealing with XML files.
I'm stuck at this point, as I have to make a decision right away.
On which approach should I base my architecture?
Are there other ways of thinking about this?
Do you guys really think this is a relevant concern? Should I worry about it?
If you want to avoid an anemic domain model you have to abandon the classic n-tier, n-layer CRUDY application architecture. Greg Young explains why in this paper on DDDD. DI is not going to change that.
CQRS would be a better option, and DI fits very well into the small, autonomous components this type of architecture tends to produce.
I'm not into the Java sphere, but from the details in your question it seems like you use some kind of MVC framework (since you deal with controllers and a domain). And I do have an opinion about how to use DI in a domain-driven architecture.
First, there are several ways of doing DDD: some use MVC in the presentation layer with no application service layer between MVC and the domain; others use MVP (or MVVM) with no service layer. But I think most people will agree with me that you very rarely inject repositories (or other services) into domain objects. I would recommend injecting repositories into commands (if you use MVC and no service layer), presenters (if you use MVP) or application services (if you use a service layer). I mostly use an application layer where each service gets the repositories it needs injected through its constructor, as sketched below.
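A hedged sketch of that last arrangement (IUserRepository comes from the question; the Save method and service name are illustrative):
public class UserApplicationService
{
    private readonly IUserRepository _userRepository;

    // The repository is injected through the constructor, so the domain
    // object itself stays free of container concerns.
    public UserApplicationService(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    public void Register(User user)
    {
        _userRepository.Save(user);
    }
}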
Second, I wouldn't worry about switching between IoC containers. Most container frameworks today support constructor injection and can auto-resolve parameters. Now, I know that you're a Java developer and I'm an MS developer, but the Microsoft patterns & practices team has a Common Service Locator that can help you produce code that is largely independent of which container framework you use. There is probably something similar in the Java community.
So go for option 2. I hope I've pushed you in the right direction.