I know there are actually a number of questions similar to this one, but I could not find one that exactly answers my question.
I am building a web application that will
obviously display data to the users :)
have a public API for authenticated users to use
later be ported to mobile devices
So, I am stuck on the design. I am going to use ASP.NET MVC for the website, but I am not sure how to structure the architecture behind it.
Should I:
make the website RESTful and act as the API
in my initial review, a GET returns the full view rather than just the data, which seems to me to defeat the purpose of the public API
also, should I really be performing business logic in my controller? To be able to scale, wouldn't it be better to have a separate business logic layer on another server, or would pushing my MVC site to another server solve the same problem? I am trying to create a SOLID design, so it also seems better to abstract this into a separate service (which could just be another class, but then I am back to the scalability problem...)
make the website not be RESTful and create a RESTful WCF service that the website will use
make both the website and a WCF service RESTful, though this seems redundant
I am fairly new to REST, so the problem could possibly be a misunderstanding on my part. Hopefully, I am explaining this well, but if not, please let me know if you need anything clarified.
I would make a separate business logic layer and a (RESTful) WCF layer on top of that. This decouples your BLL from your client. You could even have different clients use the same API (not saying you should, or will, but it gives you the flexibility). Ideally your service layer should not return your domain entities, but Data Transfer Objects (which you could map with AutoMapper), though it depends on the scope and specs of your project.
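As a minimal sketch of that entity-to-DTO mapping with AutoMapper (the Order/OrderDto types and the BLL call are hypothetical, and this uses the classic static API):

using AutoMapper;

// Hypothetical domain entity (lives in the BLL) and its DTO counterpart.
public class Order    { public int Id { get; set; } public decimal Total { get; set; } }
public class OrderDto { public int Id { get; set; } public decimal Total { get; set; } }

public static class OrderMappings
{
    public static void Configure()
    {
        // Configure the entity-to-DTO map once at startup.
        Mapper.CreateMap<Order, OrderDto>();
    }

    public static OrderDto ToDto(Order entity)
    {
        // The service layer returns the DTO, never the domain entity itself.
        return Mapper.Map<OrderDto>(entity);
    }
}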
Putting it on another server makes it a different tier; a tier is not the same thing as a layer.
Plain and simple... it would be easiest from a complexity standpoint to separate the website and your API. It's a bit cleaner IMO too.
However, here are some tips to make the process of handling both together a bit easier, if you decide to go that route. (I'm currently doing this with a personal project I'm working on.)
Keep your controller logic pretty bare. Judging by the fact that you want to make it SOLID, you're probably already doing this.
Separate the model that is returned to the view from the actual model. I like to create models specific to views and have a way of transforming the model into this view-specific model.
Make sure you version everything. You will probably want to allow and support old API requests coming in for quite some time... especially from phones.
Actually use REST to its fullest, not just as another name for HTTP. Most implementations miss the fact that with any type of response, the state should be transferred with it (missing the ST in REST). Allow self-discovery of actions both on the page and in the API responses. For instance, if you allow paging on a resource, always specify the paging links in the API response or on the web page. There's an entire Wikipedia page on this (HATEOAS). This immensely aids decoupling, sometimes allowing you to automagically update clients with the latest version.
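To make that last point concrete, here is a rough sketch of a paged response that transfers its own navigation links along with the data (all names are hypothetical):

using System.Collections.Generic;

// A response that carries its state and discoverable next actions with it,
// so clients can follow links instead of hard-coding URLs.
public class PagedResult<T>
{
    public IList<T> Items { get; set; }
    public string SelfUrl { get; set; } // e.g. "/api/orders?page=2"
    public string NextUrl { get; set; } // null when on the last page
    public string PrevUrl { get; set; } // null when on the first page
}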
Now your controller action will probably look something like this pseudo-code:

public ActionResult MyAction(string param)
{
    // Do something with param
    var model = foo.Baz(param);

    // Return the raw result for API requests, a view otherwise
    if (IsApiRequest())
    {
        return WhateverResult(model);
    }
    return View(model.AsViewSpecificModel());
}
One thing I've been toying with myself is making my own type of ActionResult that handles the return logic, so that it is not duplicated throughout the project.
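A hedged sketch of what such an ActionResult might look like, assuming the API/web decision is made on the Accept header (the class name and the decision rule are hypothetical):

using System.Linq;
using System.Web.Mvc;

// Wraps the "JSON for API clients, view for browsers" decision in one place.
public class DualResult : ActionResult
{
    private readonly object _model;
    public DualResult(object model) { _model = model; }

    public override void ExecuteResult(ControllerContext context)
    {
        var accepts = context.HttpContext.Request.AcceptTypes ?? new string[0];
        ActionResult inner;
        if (accepts.Contains("application/json"))
        {
            inner = new JsonResult { Data = _model, JsonRequestBehavior = JsonRequestBehavior.AllowGet };
        }
        else
        {
            inner = new ViewResult { ViewData = new ViewDataDictionary(_model) };
        }
        inner.ExecuteResult(context);
    }
}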
I would use the REST service for your website, as it won't add any significant overhead (assuming they're on the same server) and will greatly simplify your codebase. Instead of having two APIs, one private (as a DLL reference) and one public, you can "eat your own dogfood". The only caution you'll need to exercise is making sure you don't bend the public API to suit your own needs; create a separate private API if needed instead.
You can use RestSharp or EasyHttp for the REST calls inside the MVC site.
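For example, a rough sketch of an MVC-side call with RestSharp (the base URL and the OrderDto type are assumptions):

using RestSharp;

public OrderDto GetOrder(int id)
{
    var client = new RestClient("http://localhost/api");      // assumed base URL of your own API
    var request = new RestRequest("orders/{id}", Method.GET); // resource template
    request.AddUrlSegment("id", id.ToString());

    // Execute the call and deserialize the JSON body into the DTO
    return client.Execute<OrderDto>(request).Data;
}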
ServiceStack will probably make the API task easier, you can use your existing domain objects, and simply write a set of services that get/update/delete/create the objects without needing to write 2 actions for everything in MVC.
Related
So, I'm working on a DDD application. I'll skip the details, but globally: one of the services aims to retrieve information from a database, process it, and write the "processed data" (an aggregate, actually) into a flat file (and no, I cannot change that; the flat file is to be sent to a printer that can interpret it). Nothing out of the ordinary except for the flat-file part. When writing the code, I was thinking that, of course, I need to write the result of the processed data into a file as part of my application service, and to me that is the same as writing an aggregate to a database using a unit of work through a repository class.
So my question is: is a FlatFileUnitOfWork legitimate as part of DDD? If so, does anyone have a (good) example of it? Because to me it is rather uncommon, and I wasn't able to find a proper example of a "FlatFileUnitOfWork".
Thanks a lot.
NB: The Web API is written in C#
Joining TSeng, I'd say it depends! :)
According to your description, a Unit of Work is very unlikely to be a fitting pattern in your case. What the proper solution is depends, in DDD, as its name suggests, on the domain!
My question would be: what's the business process behind that printing? Is it just a minor matter, or a crucial part of the core domain (e.g. the whole application is about printing out revolutionary, cool-designed concert tickets), or something in between?
If it's just a minor matter, far away from and having little to do with the core domain, then an application event or command might be OK. E.g. you emit an application event in your core domain's context, which is then caught in another context that lets the printer do its job by sending the flat file to it. Alternatively, the printing might belong to the same context (still being a minor issue). In that case, your application service might call (or "command") the proper module of the infrastructure layer, which does the printing via the flat file.
If it's part of the core domain, then it might be, e.g., that a domain service is somehow responsible for composing the crucial printing output, or something like that. In this case, the precise details of the solution would depend on a thorough analysis (knowledge crunching, domain modelling) of the core domain.
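To illustrate the minor-matter case, a hedged sketch of the application-event variant (all names here are hypothetical):

// Stand-in for the processed data your service produces.
public class ProcessedAggregate { }

// Infrastructure module that alone knows about the flat-file detail.
public interface IFlatFilePrinter { void Print(ProcessedAggregate aggregate); }

// Raised by the application layer of the core domain's context once processing is done.
public class DataProcessedEvent
{
    public ProcessedAggregate Aggregate { get; set; }
}

// Lives in the other (printing) context.
public class PrintOnDataProcessed
{
    private readonly IFlatFilePrinter _printer;

    public PrintOnDataProcessed(IFlatFilePrinter printer) { _printer = printer; }

    public void Handle(DataProcessedEvent e)
    {
        _printer.Print(e.Aggregate); // writes the flat file and sends it to the printer
    }
}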
EDIT - Sample Case
For my sample case, I imagine you have a Ticket Printing micro-service, which is your core domain, because you are printing the coolest concert tickets ever and that's the main point of the whole application.
Within this service, I imagine you have a complex domain model for building up that coolest ticket layout, on top of which there's a TicketComposer providing a TicketToPrint value object that contains all the important information you need for printing, e.g. like this:
public TicketToPrint ComposeTicketToPrint(SoldTicket ticket)
{
// ...
}
In that case, you need a TicketPrinter class in your Infrastructure layer that does the job of printing the ticket. Neither your Domain nor your Application layer should even know how it does that. I.e. your application service method would look something like this:
public void PrintSoldTicket(SoldTicketDTO ticketDto)
{
SoldTicket soldTicket = CreateSoldTicket(ticketDto);
var composer = new TicketComposer();
TicketToPrint ticketToPrint = composer.ComposeTicketToPrint(soldTicket);
var printer = new TicketPrinter();
printer.Print(ticketToPrint);
}
And then, at the end of the chain, your TicketPrinter in the Infrastructure layer does the job you are asking about:
public void Print(TicketToPrint ticketToPrint)
{
// Creating the flat file and sending it to the printer...
}
Does this sample answer your question?
From a DDD perspective, the printer looks like a UI layer: it just "displays" the data.
You should have some kind of Presenter that passes the Aggregate to an Infrastructure service responsible for translating the Aggregate into a format the printer understands.
In my project, I use Entity Framework 7 and ASP.NET MVC 6 / ASP.NET 5. I want to create CRUD for my own models.
Which approach is better:
Use the DbContext directly from the controller.
In the following link, the author explains that this way is better, but is it right for controllers?
Make my own wrapper.
Some best-practice articles say it is best to create your own repository.
I'm not going to swap EF for something else, so I don't mind a strong coupling to a particular data-access implementation,
and I know that in EF7 the DbContext already implements the Unit of Work and Repository patterns.
The answer to your question is primarily opinion-based. No one can definitively say "one way is better than the other" until a lot of other questions are answered. What is the size / scope / budget of your project? How many developers will be working on it? Will it only have (view-based) MVC controllers, or will it have (data-based) API controllers as well? If the latter, how much overlap will there be between the MVC and API action methods, if any? Will it have any non-web clients, like WPF? How do you plan to test the application?
Entity Framework is a Data Access Layer (DAL) tool. Controllers are HTTP client request & response handling tools. Unless your application is pure CRUD (which is doubtful), there will probably be some kind of Business Logic processing that you will need to do between when you receive a web request over HTTP and when you save that request's data to a database using EF (field X is required, if you provide data for field Y you must also provide data for field Z, etc). So if you use EF code directly in your controllers, this means your business processing logic will almost surely be present in the controllers along with it.
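For instance, a hedged sketch of the kind of rule meant here, with hypothetical field names:

using System.ComponentModel.DataAnnotations;

public class OrderRequest
{
    public string X { get; set; }
    public string Y { get; set; }
    public string Z { get; set; }
}

// Business rules sitting between the HTTP request and the EF save:
// X is required, and providing Y requires providing Z as well.
public static class OrderRules
{
    public static void Validate(OrderRequest request)
    {
        if (string.IsNullOrEmpty(request.X))
            throw new ValidationException("Field X is required.");
        if (request.Y != null && request.Z == null)
            throw new ValidationException("If Y is provided, Z must be provided too.");
    }
}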
Those of us who have a decent amount of experience developing non-trivial applications with .NET tend to develop opinions that neither business nor data access logic should be present in controllers because of certain difficulties that emerge when such a design is implemented. For example when you put web/http request & response logic, along with business logic and data access logic into a controller, you end up having to test all of those application aspects from the controller actions themselves (which is a glaring violation of the Single Responsibility Principle, if you care about SOLID design). Also let's say you develop a traditional MVC application with controllers that return views, then decide you want to extend the app to other clients like iOS / android / WPF / or some other client that doesn't understand your MVC views. If you decide to implement a secondary set of WebAPI data-based controller actions, you will be duplicating business and data access logic in at least 2 places.
Still, this does not make a decision to keep all business & data-access logic in controllers intrinsically "worse" than an alternate design. Any decision you make when designing the architecture of a web application is going to have advantages and disadvantages. There will always be trade-offs no matter which route you choose. Advantages of keeping all of your application code in controllers can include lower cost, complexity, and thus, time to market. It doesn't make sense to over-engineer complex architectures for very simple applications. However unfortunate, I have personally never had the pleasure of developing a simple application, so I am in the "general opinion" boat that keeping business and data access code in controllers is "probably not" a good long-term design decision.
If you're really interested in alternatives, I would recommend reading these two articles. They are a good primer on how one might implement a command & query (CQRS) pattern that controllers can consume. EF does implement both the repository and unit of work patterns out of the box, but that does not necessarily mean you need to "wrap" it in order to move the data access code outside of your controllers. Best of luck making these kinds of decisions for your project.
public async Task<ActionResult> Index()
{
    var user = await query.Execute(new UserById(1));
    return View(user);
}
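The query object pattern behind that snippet might be declared like this (a sketch; the interface names are illustrative, not taken from the articles):

using System.Threading.Tasks;

public class User { public int Id { get; set; } } // stand-in for your user model

// Marker interface tying a query definition to its result type.
public interface IDefineQuery<TResult> { }

// The query used above: "give me the user with this id".
public class UserById : IDefineQuery<User>
{
    public UserById(int id) { Id = id; }
    public int Id { get; private set; }
}

// The controller depends only on this dispatcher abstraction.
public interface IProcessQueries
{
    Task<TResult> Execute<TResult>(IDefineQuery<TResult> query);
}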
Usually I prefer using the Repository pattern along with the UnitOfWork pattern (http://www.asp.net/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application): I instantiate the DbContext in a UnitOfWork object and inject that DbContext into the repositories. Then I instantiate the UnitOfWork in the controller, and the controller does not know anything about the DbContext:
public ActionResult Index()
{
var user = unitOfWork.UsersRepository.GetById(1); // unitOfWork is dependency injected using Unity or Ninject or some other framework
return View(user);
}
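A minimal sketch of that arrangement (AppDbContext and User stand for your own EF context and entity; the names follow the linked tutorial's style but are illustrative):

using System;
using System.Data.Entity;

public class UsersRepository
{
    private readonly DbContext _context; // the shared context, injected by the UnitOfWork
    public UsersRepository(DbContext context) { _context = context; }

    public User GetById(int id)
    {
        return _context.Set<User>().Find(id);
    }
}

public class UnitOfWork : IDisposable
{
    // The UnitOfWork owns the single DbContext instance and hands it to repositories,
    // so controllers never see EF directly.
    private readonly DbContext _context = new AppDbContext();

    private UsersRepository _usersRepository;
    public UsersRepository UsersRepository
    {
        get { return _usersRepository ?? (_usersRepository = new UsersRepository(_context)); }
    }

    public void Save() { _context.SaveChanges(); }
    public void Dispose() { _context.Dispose(); }
}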
This depends on the lifecycle of your application.
If it will be used, extended and changed for many years, then I'd say creating a wrapper is a good choice.
If it is a small application and, as you have said, you don't intend to change EntityFramework to another ORM, then spare yourself the work of creating a wrapper and use it directly in the controller.
There is no definite answer to this. It all depends on what you are trying to do.
If you are going for code maintainability I would suggest using a wrapper.
I have a set of services I want to be able to access via one endpoint altogether.
Now, I want to build this in WCF myself rather than use an existing framework or product, so that option is out of the question.
Suppose I have 10 contracts, each representing the contract of an independent service that I want to "route" to. What direction should I go?
public partial class ServiceBus : ICardsService
{
    // Proxy to the underlying cards service
    CMSClient cards = new CMSClient();

    public int methodExample()
    {
        return cards.methodExample();
    }
}
So far I've tried using a partial class "ServiceBus" that implements each contract, but then I get more than a few (60+) occurrences of identical function signatures, so I think I should approach this from a different angle.
Anyone got an idea of what I should do, or what direction to research? Currently I'm trying to use a normal WCF service configured with a lot of client endpoints, each directing to one of the services it routes to, and one endpoint for the 'application' to consume.
I'm rather new at WCF, so if anything seems too trivial to mention, please mention it anyway.
Thanks in advance.
I have a set of services I want to be able to access via one endpoint altogether.
...
So far I've tried using a partial class "ServiceBus" that implements each contract
It's questionable whether this kind of "service aggregation" pattern should be achieved by condensing multiple endpoints into an uber facade endpoint. Even when implemented well, this will still result in a brittle single failure point in your solution.
Suppose I have 10 contracts, each representing the contract of an independent service that I want to "route" to. What direction should I go?
Stated broadly, your aim seems to be to decouple the caller and the services so that the caller makes a call and, based on the call context, the call is routed to the relevant service.
One approach would be to do this call mediation on the client side. This is an unusual approach but would involve creating a "service bus" assembly containing the capability to dynamically call a service at run-time, based on some kind of configurable metadata.
The client code would consume the assembly in-process, and at run-time call into the assembly, which would then make a call to the metadata store, retrieving the contract, binding, and address information for the relevant service, construct a WCF channel, and return it to the client. The client can then happily make calls against the channel and dispose it when finished.
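A rough sketch of that run-time channel construction, assuming the binding and address come from some metadata store (the store itself is elided, and BasicHttpBinding is just an assumption):

using System.ServiceModel;

public static class ServiceBusClient
{
    // Builds a typed channel for any contract at run-time.
    public static TContract CreateChannel<TContract>(string address)
    {
        // In a real implementation the binding and address would be looked up
        // in the metadata store rather than hard-coded here.
        var factory = new ChannelFactory<TContract>(
            new BasicHttpBinding(),
            new EndpointAddress(address));
        return factory.CreateChannel();
    }
}

// Usage: make calls against the channel, then close it when finished.
// var cards = ServiceBusClient.CreateChannel<ICardsService>("http://host/cards.svc");
// int result = cards.methodExample();
// ((IClientChannel)cards).Close();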
An alternative is to do the call mediation remotely and luckily WCF does provide a routing service for this kind of thing. This allows you to achieve the service aggregation pattern you are proposing, but in a way which is fully configurable so your overall solution will be less brittle. You will still have a single failure point however, unless you load balance the router service.
I'm not sure about making it client side as I can't access some of the applications (external APIs) that are connecting to our service
Well, any solution you choose will likely involve some consumer rewrite - this is almost unavoidable.
I need to make it simple for the programmers using our api
This does not rule out a client-side library approach. In fact, in some ways this will make it really easy for the developers: all they will need to do is grab a NuGet package, wire it up, and start calling it. However, I agree it's an unusual approach and would also generate a lot of work for you.
I want to implement the aggregation service with one endpoint for a few contracts
Then you need to find a way to avoid having to implement multiple duplicate (or redundant) service operations in a single service implementation.
The simplest way would probably be to define a completely new service contract that exposes only the operations distinct to each of the services, plus a single instance of each of the redundant operations. You would then need some internal routing logic to call the backing service operations depending on what the caller wants to do. On second thought, not so simple, I think.
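A hedged sketch of what that might look like, reusing the CMSClient proxy from the question (the routing key and the commented-out second client are hypothetical):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IAggregatedService
{
    // One shared operation replaces the 60+ duplicate signatures;
    // an explicit routing key picks the backing service.
    [OperationContract]
    int MethodExample(string target);
}

public class AggregatedService : IAggregatedService
{
    public int MethodExample(string target)
    {
        switch (target)
        {
            case "cards": return new CMSClient().methodExample();
            // case "billing": return new BillingClient().methodExample(); // hypothetical
            default: throw new ArgumentException("Unknown target: " + target);
        }
    }
}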
Do you have any examples of a distinct service operation and a redundant one?
I have a small website I implemented with AngularJS, C# and Entity Framework. The whole website is a Single Page Application and gets all of its data from one single C# web service.
My question deals with the interface that the C# web service should expose. On the one hand, the service can provide the entities in a RESTful way, returning them directly or as DTOs. The other approach would be for the web service to return an object for exactly one use case, so that the AngularJS controller only needs to invoke the web service once and can work with the returned model directly.
To clarify, please consider the following two snippets:
// The service returns DTOs, but has to be invoked multiple
// times from the AngularJS controller
public Order GetOrder(int orderId);
public List<Ticket> GetTickets(int orderId);
And
// The service returns the model directly
public OrderOverview GetOrderAndTickets(int orderId);
While the first example exposes a RESTful interface and works with the resource metaphor, it has the huge drawback of only returning parts of the data. The second example returns an object tailored to the needs of the MVC controller, but can most likely only be used in one MVC controller. Also, a lot of mapping needs to be done for common fields in the second scenario.
I found that I did both things from time to time in my web service and want to get some feedback. I do not care too much about performance, although multiple requests are of course problematic, and once they slow the application down too much they will need refactoring. What is the best way to design the web service interface?
I would advise going with the REST approach (general-purpose API design) rather than the single-purpose remote procedure call (RPC) approach. While RPC is going to be very quick at the beginning of your project, the number of endpoints usually becomes a liability when maintaining code. Now, if you are only ever going to have fewer than 20 types of server calls, I would say you can stick with this approach without getting bitten too badly. But if your project is going to live longer than a year, you'll probably end up with far more endpoints than 20.
With a REST-based service, you can always add an optional parameter describing which child records the resource should include, and return them for that particular call.
There is nothing wrong with a RESTful service returning child entities, or having an optional querystring parameter to toggle that behavior:
public OrderOverview GetOrder(int orderId, bool? includeTickets);
When returning a ticket within an order, have each ticket contain a property referring to the URL endpoint of that particular ticket (/api/tickets/{id} or whatever) so the client can then work with the ticket independently of the order.
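A small sketch of that shape (the property names are assumptions):

// Each ticket in the order response links back to its own resource,
// so the client can fetch or modify it independently of the order.
public class TicketDto
{
    public int Id { get; set; }
    public string Url { get { return "/api/tickets/" + Id; } }
}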
In this specific case, I would say it depends on how many tickets you have. Let's say you were to add pagination for the tickets: would you want to fetch the Order every time you get the next set of tickets?
You could always make multiple requests and resolve all the promises at once via $q.all().
The best practice is to wrap HTTP calls in an Angular service that multiple Angular controllers can reference.
With that, I don't think 2 calls to the server is going to be a huge detriment to you. And you won't have to alter the web service, or add any new angular services, when you want to add new views to your site.
Generally, APIs should be written independently of whatever consumes them. If you're pressed for time and you're sure you'll never need to consume it from some other client, you could write it specifically for your web app. But generally, that's how it goes.
I am working on an app which uses WCF as the data layer.
I understand there are certain benefits, such as security. What would be the other benefits or handicaps of such an approach?
Wouldn't serializing and deserializing cost performance?
How about maintenance, testing, and maintainability?
What would be the other drawbacks of such an approach?
So you have a data layer and it is accessed using WCF. First the upside: you can move your data layer wherever you need it, and your applications should not care (as long as the DNS resolves correctly). If it is hosted inside IIS, you gain some security by putting SSL in front of your service. And if your services are well written, you can easily throw them into a load-balanced process.
On the downside, you need to be concerned about how you expose that service. If it communicates the data back as XML, you will suffer a much larger serialization penalty than if you used JSON to serialize the data.
In the middle (neither good nor bad), you would be forcing yourself to be careful (I would hope) in how you format your requests. For example, passing only a key for a delete instead of the entire record to delete. (Believe me, I've seen systems written like this!!)
You should also carefully design your services so that your svc file contains something like this:
public Customer GetCustomer(int customerID)
{
return DataLayer.GetCustomer(customerID);
}
This way you can easily use your data layer directly if some other application is already sitting on your WCF server. A good example: your data layer may be isolated inside your internal network, sheltered behind the DMZ. Your intranet applications may need to access the same data layer, so you can put them on that server and use the data layer directly. Or they can be on a different server and use the data layer libraries directly.
One final note, which we encountered a need for in one situation: if you implement something out in the DMZ that needs to access a server directly instead of being routed through the firewalls, you can easily create a proxy of your data services. The proxy just takes your service interface and implements the calls through the firewall to your service behind the DMZ. It took us maybe one day to implement this.
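A minimal sketch of that proxy idea (IDataService and DataServiceClient are assumed names for your contract and its generated WCF client):

// Runs in the DMZ, implements the same contract as the internal service,
// and simply forwards each call through the firewall.
public class DataServiceProxy : IDataService
{
    private readonly DataServiceClient _inner = new DataServiceClient();

    public Customer GetCustomer(int customerID)
    {
        return _inner.GetCustomer(customerID);
    }

    // ...one forwarding method per operation on IDataService
}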
For testing: well, that is no different from anywhere else you have a data layer. You need to run your tests, use repeatable data in your test setup, and clean up properly after your tests complete. The same goes for maintainability, etc. However, you need a clear approach to versioning your services to accommodate interface changes. But, again, that is the same no matter where your data services live.
Hope this helps some.