We have an existing repository which is based on EF4 / POCO and is working well. We want to add a service layer using WCF Data Services and are looking for some best-practice advice.
So far we have developed a class with an IQueryable property whose getter triggers the repository's 'get all users' method. The problems so far have been two-fold:
1) It required us to decorate the ID field of the POCO to tell the data service which field is the key. This means our POCO is no longer 'pure'.
2) It cannot figure out the relationships between the objects (which is obvious, I guess).
I've now stopped this approach and I'm thinking that maybe we should expose the ObjectContext from the repository and use more of the 'automatic' functionality of EF.
Has anybody got any advice or examples of using the repository pattern with WCF Data Services?
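For reference, a rough sketch of the wrapper approach described above, assuming a User POCO and a hypothetical UserRepository (all names are illustrative only):

using System.Data.Services;
using System.Data.Services.Common;
using System.Linq;

// The decoration the question refers to: it tells WCF Data Services which property is the key.
[DataServiceKey("UserId")]
public class User
{
    public int UserId { get; set; }
    public string Name { get; set; }
}

// Exposes the repository's data as an IQueryable property for the data service to pick up.
public class UserDataSource
{
    private readonly UserRepository repository = new UserRepository();

    public IQueryable<User> Users
    {
        get { return repository.GetAllUsers().AsQueryable(); }
    }
}

public class UserDataService : DataService<UserDataSource>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Users", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}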
I guess it's a matter of being pragmatic. Does decorating the POCO break anything else? If not, perhaps it's the best way to do it.
WCF Data Services and OData are pretty new; I've also been looking for guidance and it seems a bit thin.
Can you expand a bit more on what you want to expose, and who'll be using it?
The issues I've seen so far in our project:
- Having a MyRepository : ObjectContext and a MyDataService : DataService splits logic, so we've created helpers. I suppose we could have inherited from the repository, though (literally just thought of that as I typed this!).
- Query and change interceptors are your friends, but should delegate to helpers (or a base class) to ensure DRY. I.e. if your repository already has GetAllUsers, and does logic that myservice.svc/Users doesn't handle, you may need to implement a query interceptor to do the filtering; again, DRY means a helper (or base method) that both the repository and the interceptor can use (see the sketch after this list).
- ASP.NET compatibility mode allows you to tap in nicely to authentication / authorisation; in a query interceptor, it's a nice way to ensure you only see the things you're allowed to see.
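For example, a query interceptor (defined on the data service class itself) that delegates its filtering rule to a shared helper and reads the ASP.NET principal might look roughly like this (MyDataSource and UserFilters.VisibleTo are assumed names, not real APIs):

using System;
using System.Data.Services;
using System.Linq.Expressions;
using System.Web;

public class MyDataService : DataService<MyDataSource>
{
    // Runs for every query against the Users set; delegates the rule to a helper
    // so the repository's GetAllUsers can reuse exactly the same logic (DRY).
    [QueryInterceptor("Users")]
    public Expression<Func<User, bool>> OnQueryUsers()
    {
        // Requires ASP.NET compatibility mode so HttpContext is available here
        var currentUserName = HttpContext.Current.User.Identity.Name;
        return UserFilters.VisibleTo(currentUserName);
    }
}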
A couple of traps:
- If it's Flash / Flex based, you will probably have issues with Flash / Flex not being able to use HTTP PUT / MERGE / DELETE. You can get around this with the X-HTTP-Method-Override header.
- If it's JavaScript / jQuery, make sure you turn on JSON support.
Overall, I really like it, a super fast way to expose an API, and provided you don't have heavy business logic, it works well.
Related
From what I understand of onion architecture, the domain must contain all the business logic, and enforcing database validations is typically done using services.
My code is inspired by this repo https://github.com/asadsahi/AspNetCoreSpa , which is organised by feature: each folder contains all the validation rules and logic for a specific feature inside the application layer.
What is the best way to share a specific validation across multiple features? Should I create a service and use it for each feature?
And what is the reason they moved all the business logic to the application layer while the domain entities do not have any logic?
I found a good article that covers what I need: Dealing with Duplication in MediatR Handlers.
Excluding sub-handlers or delegating handlers, where should my logic go? Several options are now available to me:
- Its own class (named appropriately)
- Domain service (as was its original purpose in the DDD book)
- Base handler class
- Extension method
- Method on my DbContext
- Method on my aggregate root/entity
As to which one is most appropriate, it naturally depends on what the duplicated code is actually doing. Common query? Method on the DbContext or an extension method to IQueryable or DbSet. Domain behavior? Method on your domain model or perhaps a domain service. There's a lot of options here; it really just depends on what's duplicated and where those duplications lie. If the duplication is within a feature folder, a base handler class for that feature folder would be a good idea.
In the end, I don't really prefer any approach over another. There are tradeoffs with any approach, and I try as much as possible to let the nature of the duplication guide me to the correct solution.
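As a concrete illustration of the 'extension method to IQueryable' option from the quote above, a shared query can live in one place and be reused by several handlers (Customer and the rule itself are made up for the example):

using System.Linq;

// Shared query logic that several feature handlers can reuse instead of duplicating it.
public static class CustomerQueryExtensions
{
    public static IQueryable<Customer> ActiveInRegion(this IQueryable<Customer> customers, string region)
    {
        return customers.Where(c => c.IsActive && c.Region == region);
    }
}

// Any handler can then write: db.Customers.ActiveInRegion(request.Region).ToList();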
In my project I use Entity Framework 7 and ASP.NET MVC 6 / ASP.NET 5. I want to create CRUD for my own models.
Which approach is better:
1) Use the DbContext directly from the controller.
In the following link the author explains why this way is better, but is it right for controllers?
2) Write my own wrapper.
Some best-practice articles say it is better to write your own repository.
I'm not going to swap EF for something else, so I don't mind a strong coupling to a particular data-access implementation,
and I know that in EF7 DbContext already implements the Unit of Work and Repository patterns.
The answer to your question is primarily opinion-based. No one can definitively say "one way is better than the other" until a lot of other questions are answered. What is the size / scope / budget of your project? How many developers will be working on it? Will it only have (view-based) MVC controllers, or will it have (data-based) API controllers as well? If the latter, how much overlap will there be between the MVC and API action methods, if any? Will it have any non-web clients, like WPF? How do you plan to test the application?
Entity Framework is a Data Access Layer (DAL) tool. Controllers are HTTP client request & response handling tools. Unless your application is pure CRUD (which is doubtful), there will probably be some kind of Business Logic processing that you will need to do between when you receive a web request over HTTP and when you save that request's data to a database using EF (field X is required, if you provide data for field Y you must also provide data for field Z, etc). So if you use EF code directly in your controllers, this means your business processing logic will almost surely be present in the controllers along with it.
Those of us who have a decent amount of experience developing non-trivial applications with .NET tend to develop opinions that neither business nor data access logic should be present in controllers because of certain difficulties that emerge when such a design is implemented. For example when you put web/http request & response logic, along with business logic and data access logic into a controller, you end up having to test all of those application aspects from the controller actions themselves (which is a glaring violation of the Single Responsibility Principle, if you care about SOLID design). Also let's say you develop a traditional MVC application with controllers that return views, then decide you want to extend the app to other clients like iOS / android / WPF / or some other client that doesn't understand your MVC views. If you decide to implement a secondary set of WebAPI data-based controller actions, you will be duplicating business and data access logic in at least 2 places.
Still, this does not make a decision to keep all business & data-access logic in controllers intrinsically "worse" than an alternate design. Any decision you make when designing the architecture of a web application is going to have advantages and disadvantages. There will always be trade-offs no matter which route you choose. Advantages of keeping all of your application code in controllers can include lower cost, complexity, and thus, time to market. It doesn't make sense to over-engineer complex architectures for very simple applications. However unfortunate, I have personally never had the pleasure of developing a simple application, so I am in the "general opinion" boat that keeping business and data access code in controllers is "probably not" a good long-term design decision.
If you're really interested in alternatives, I would recommend reading these two articles. They are a good primer on how one might implement a command & query (CQRS) pattern that controllers can consume. EF does implement both the repository and unit of work patterns out of the box, but that does not necessarily mean you need to "wrap" it in order to move the data access code outside of your controllers. Best of luck making these kinds of decisions for your project.
public async Task<ActionResult> Index()
{
    // "query" here is an injected dispatcher that finds and runs the handler for UserById
    var user = await query.Execute(new UserById(1));
    return View(user);
}
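The linked articles describe variations of the abstraction behind that snippet; a minimal sketch of the shape it assumes (interface and class names here are illustrative, not taken from the articles):

using System.Threading.Tasks;

// Marker interface for a query that returns TResult.
public interface IQuery<TResult> { }

public interface IQueryDispatcher
{
    // Finds the matching handler and runs it; the "query" field in the controller
    // above would be an injected IQueryDispatcher.
    Task<TResult> Execute<TResult>(IQuery<TResult> query);
}

// One concrete query per use case; a handler elsewhere runs the actual EF code for it.
public class UserById : IQuery<User>
{
    public UserById(int id) { Id = id; }
    public int Id { get; private set; }
}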
Usually I prefer using Repository pattern along with UnitOfWork pattern (http://www.asp.net/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application) - I instantiate DbContext in an UnitOfWork instance object and I inject that DbContext in the repositories. After that I instantiate UnitOfWork in the controller and the controller does not know anything about the DbContext:
public ActionResult Index()
{
    // unitOfWork is dependency-injected using Unity or Ninject or some other framework
    var user = unitOfWork.UsersRepository.GetById(1);
    return View(user);
}
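A minimal sketch of what such a unit of work might look like (AppDbContext and UserRepository are placeholder names; the linked tutorial builds the same thing with a GenericRepository<T>):

using System;

// The DbContext is created once here and handed to every repository, so all
// repositories share the same change tracker and the same SaveChanges call.
public class UnitOfWork : IDisposable
{
    private readonly AppDbContext context = new AppDbContext();
    private UserRepository users;

    public UserRepository UsersRepository
    {
        get { return users ?? (users = new UserRepository(context)); }
    }

    public void Save()
    {
        context.SaveChanges();
    }

    public void Dispose()
    {
        context.Dispose();
    }
}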
This depends on the lifecycle of your application.
If it will be used, extended and changed for many years, then I'd say creating a wrapper is a good choice.
If it is a small application and, as you have said, you don't intend to change EntityFramework to another ORM, then spare yourself the work of creating a wrapper and use it directly in the controller.
There is no definite answer to this. It all depends on what you are trying to do.
If you are going for code maintainability I would suggest using a wrapper.
I have created quite a few projects where my business logic directly accesses my data layer. Since it's the only way I have been setting up my MVC projects, I cannot say for sure where the system has been lacking.
I would, however, like to improve on this. To remove many return functions from my controllers, I see 2 ways to achieve the same goal:
1) Include these return functions as methods of the model classes (doesn't make sense, since the data context would need to be initialized within every model).
2) Use a repository.
After reading up a bit on repositories, I haven't come across any instances where 'thinning your controllers' is given as a 'pro' of using a repository (a generic repository could be related to this).
For an understanding of the answer I am looking for: I would like to know whether, besides the above-mentioned reason, I should use a repository. Is there really a need for a repository? In this case, my project will only be reading data (full CRUD functionality won't be needed).
There is definitely a need for a repository. Every class should have only one real responsibility where possible; your controller's job is simply to 'give' information to the view. An additional benefit is that if you do create a repository layer then, provided you make interfaces for them, you can make your solution a lot more testable. If your controller knows how to get data from a database (other than via a repository or similar), then your controller is "doing" more than one thing, which violates the single responsibility principle.
I used to use a generic repository pattern via the SharpRepository library, but I found that I needed more fine-grained control over what each of my repositories had access to (for example, there were some repositories I wanted to be read-only, with no mutation at all). As a result I switched back to non-generic repositories. Any half-decent IoC tool will be able to register your repositories based on convention (i.e. IFooRepository maps to FooRepository), so the number of classes is not really a factor.
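As an illustration of the kind of fine-grained, read-only repository described above (IReportRepository, Report and AppDbContext are made-up names):

using System.Collections.Generic;
using System.Linq;

// The interface deliberately exposes no mutation members, so callers can only read.
public interface IReportRepository
{
    Report GetById(int id);
    IEnumerable<Report> GetAll();
}

public class ReportRepository : IReportRepository
{
    private readonly AppDbContext context;

    public ReportRepository(AppDbContext context)
    {
        this.context = context;
    }

    public Report GetById(int id)
    {
        return context.Reports.Find(id);
    }

    public IEnumerable<Report> GetAll()
    {
        return context.Reports.ToList();
    }
}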
As a commenter mentioned, your title doesn't really sum up your question, so I'll summarize it for other answer authors:
Is there a benefit in using the repository pattern to simplify the controller?
I'm trying to wrap my head around repository pattern and dependency injection concepts for my ASP.NET MVC applications.
I ran across the article Repository Pattern with Entity Framework, and really liked how simple the code is. There doesn't appear to be that much code and it's all completely generic. That is, there's no need for multiple repositories for the different objects in the database as most people appear to be doing. This is just what I want.
However, the code is written for code first, which I'm not planning to use.
Questions:
Is there a good reason why the same code couldn't be used for applications that don't use code first?
Can someone recommend a better approach for my applications that don't use code first? (Keeping in mind that I'm absolutely sold on this generic pattern.)
Any other tips to help me move forward?
You can make a repository interface for any underlying data store. You can simply define an interface like so:
public interface IRepository
{
    // The class constraint lets an Entity Framework-backed implementation create object sets for T
    IQueryable<T> GetQueryable<T>() where T : class;
    void Insert<T>(T item) where T : class;
}
Then, you can implement a class behind this which will implement it. It doesn't have to be code-first; you can back it with an ObjectContext created from an EDMX file, for example.
The key here is in creating the right abstraction. You can easily do that with an interface, and then implement it however you want behind the scenes.
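For example, a minimal database-first implementation backed by an ObjectContext generated from an EDMX might look roughly like this (MyEdmxEntities is a placeholder for the generated context):

using System.Linq;

public class EntityFrameworkRepository : IRepository
{
    // MyEdmxEntities is the ObjectContext generated from the EDMX file.
    private readonly MyEdmxEntities context = new MyEdmxEntities();

    public IQueryable<T> GetQueryable<T>() where T : class
    {
        return context.CreateObjectSet<T>();
    }

    public void Insert<T>(T item) where T : class
    {
        context.CreateObjectSet<T>().AddObject(item);
        context.SaveChanges();
    }
}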
Because you're using dependency injection, the implementation doesn't matter as much; as long as you've defined the contract correctly, the implementation (and testing of it) should be simple. And if it doesn't work, or you want a different data store altogether, you just tell your dependency injector to use a different implementation; the contract doesn't change.
The same can be said for any abstraction you create: you can have an interface that reads and writes data (like the article you reference does); you just have to pull the abstraction out.
Have a look at this; I think this link will help you most:
http://www.codeproject.com/Tips/572761/Generic-repository-pattern-using-EF-with-Dependenc
In this link, the generic repository pattern is used with dependency injection in an MVC project without using the code-first approach.
I know there are actually a number of questions similar to this one, but I could not find one that exactly answers my question.
I am building a web application that will
obviously display data to the users :)
have a public API for authenticated users to use
later be ported to mobile devices
So, I am stuck on the design. I am going to use ASP.NET MVC for the website; however, I am not sure how to structure my architecture after that.
Should I:
1) Make the website RESTful and have it act as the API?
In my initial review, the GET returns the full view rather than just the data, which to me seems like it kills the idea of the public API.
Also, should I really be performing business logic in my controller? To be able to scale, wouldn't it be better to have a separate business logic layer on another server, or would pushing my MVC site to another server solve the same problem? I am trying to create a SOLID design, so it also seems better to abstract this into a separate service (which I could just make another class, but then I get back to the problem of scalability...).
2) Make the website not RESTful and create a RESTful WCF service that the website will use.
3) Make both the website and a WCF service RESTful; however, this seems redundant.
I am fairly new to REST, so the problem could possibly be a misunderstanding on my part. Hopefully, I am explaining this well, but if not, please let me know if you need anything clarified.
I would make a separate business logic layer and a (RESTful) WCF layer on top of that. This decouples your BLL from your client. You could even have different clients use the same API (not saying you should, or will, but it gives you the flexibility). Ideally your service layer should not return your domain entities but Data Transfer Objects (which you could map with AutoMapper), though it depends on the scope and specs of your project.
Putting it on another server makes it a different tier; a tier is not the same thing as a layer.
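A minimal sketch of the DTO idea using AutoMapper's classic static API (the entity, DTO, and service names are assumptions; newer AutoMapper versions configure a MapperConfiguration instead):

using AutoMapper;

// DTO exposed by the service layer instead of the EF entity.
public class UserDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class UserService
{
    static UserService()
    {
        // Classic static configuration; done once at startup.
        Mapper.CreateMap<User, UserDto>();
    }

    public UserDto GetUser(int id)
    {
        User entity = LoadFromDatabase(id);   // however the BLL fetches it
        return Mapper.Map<UserDto>(entity);   // the entity itself never crosses the service boundary
    }

    private User LoadFromDatabase(int id)
    {
        // EF query would live here (or in a repository the BLL calls).
        return null;
    }
}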
Plain and simple.... it would be easiest from a complexity standpoint to separate the website and your API. It's a bit cleaner IMO too.
However, here are some tips that you can do to make the process of handling both together a bit easier if you decide on going that route. (I'm currently doing this with a personal project I'm working on)
Keep your controller logic pretty bare. Judging by the fact that you want to make it SOLID, you're probably already doing this.
Separate the model that is returned to the view from the actual model. I like to create models specific to views and have a way of transforming the model into this view-specific model.
Make sure you version everything. You will probably want to allow and support old API requests coming in for quite some time, especially on the phone.
Actually use REST to its fullest and not just as another name for HTTP. Most implementations miss the fact that in any type of response the state should be transferred with it (missing the ST). Allow self-discovery of actions both on the page and in the API responses. For instance, if you allow paging in a resource, always specify it in the API response or on the web page. There's an entire Wikipedia page on this. This immensely aids decoupling, allowing you to sometimes automagically update clients with the latest version.
Now your controller action will probably look something like this pseudo-code:
MyAction(param) {
    // Do something with param
    model = foo.baz(param)
    // return result
    if (isAPIRequest) {
        return WhateverResult(model)
    }
    return View(model.AsViewSpecificModel())
}
One thing I've been toying with myself is making my own type of ActionResult that handles the return logic, so that it is not duplicated throughout the project.
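A rough sketch of what such a result might look like in classic ASP.NET MVC (System.Web.Mvc); this is purely illustrative, not the author's actual implementation:

using System.Linq;
using System.Web.Mvc;

// Returns JSON when the caller asks for it via the Accept header, otherwise renders a view.
public class ViewOrJsonResult : ActionResult
{
    private readonly object model;
    private readonly string viewName;

    public ViewOrJsonResult(object model, string viewName)
    {
        this.model = model;
        this.viewName = viewName;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var acceptTypes = context.HttpContext.Request.AcceptTypes;
        bool wantsJson = acceptTypes != null && acceptTypes.Contains("application/json");

        ActionResult inner = wantsJson
            ? (ActionResult)new JsonResult { Data = model, JsonRequestBehavior = JsonRequestBehavior.AllowGet }
            : new ViewResult { ViewName = viewName, ViewData = new ViewDataDictionary(model) };

        inner.ExecuteResult(context);
    }
}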
I would use the REST service for your website, as it won't add any significant overhead (assuming they're on the same server) and will greatly simplify your codebase. Instead of having 2 APIs: one private (as a DLL reference) and one public, you can "eat your own dogfood". The only caution you'll need to exercise is making sure you don't bend the public API to suit your own needs, but instead having a separate private API if needed.
You can use RestSharp or EasyHttp for the REST calls inside the MVC site.
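For example, a call with RestSharp's classic API might look something like this (the base URL and User type are placeholders; RestSharp 107+ changed the API):

using RestSharp;

public class UserApiClient
{
    public User GetUser(int id)
    {
        var client = new RestClient("http://example.com/api");
        var request = new RestRequest("users/{id}", Method.GET);
        request.AddUrlSegment("id", id.ToString());

        // Execute the GET and deserialize the JSON response body into a User
        var response = client.Execute<User>(request);
        return response.Data;
    }
}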
ServiceStack will probably make the API task easier: you can use your existing domain objects and simply write a set of services that get/update/delete/create the objects, without needing to write two actions for everything in MVC.