Domain Driven Design Windows Azure Web Job - C#

Certain behaviors of my domain model qualify to be delegated to an Azure WebJob. If I continue to use the same domain model class across the website and the WebJob,
it seems to violate separation of concerns: there will be tight coupling between two different processes.
Should a background process (a WebJob in this case) always have its own dedicated domain model, with the behaviors it exposes consumed by only that one process?

If your domain model is free of dependencies on the environment (which it should be), then I don't see a problem with that.
On the contrary: using the same domain model within a bounded context is preferable, because you capture the business rules in one place. That way you can be sure you won't run into impedance-mismatch problems between two models.
If you are using domain events, you already have a basis for communicating with the web job. This is exactly what we've been doing for over a year, and it works great:
Web apps publish domain events to an Azure Storage Queue
The web job receives them and performs the background processing on the same model
So all you need to do is create two separate application / service layers (one for the web application, one for the background worker) and make sure all domain logic is in a reusable library.
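A minimal sketch of that setup; the type names, queue name, and WebJobs wiring below are illustrative, not from the question:

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Storage.Queues;          // web-app side
using Microsoft.Azure.WebJobs;       // WebJob side

// Shared domain library, referenced by both the web app and the WebJob.
public record OrderPlaced(Guid OrderId, DateTime OccurredUtc);

// Web app: publish the event after the domain operation commits.
public class DomainEventPublisher
{
    private readonly QueueClient _queue;
    public DomainEventPublisher(QueueClient queue) => _queue = queue;

    public Task PublishAsync(OrderPlaced evt) =>
        _queue.SendMessageAsync(JsonSerializer.Serialize(evt));
}

// WebJob: triggered by the same queue, runs behavior from the shared model.
public class OrderPlacedHandler
{
    public void Handle([QueueTrigger("domain-events")] string message)
    {
        var evt = JsonSerializer.Deserialize<OrderPlaced>(message);
        // ...invoke the relevant behavior from the shared domain library...
    }
}
```

Both projects reference the same domain library; only the two thin application layers differ.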

Related

In a layered architecture, how can the Application layer know about Web URLs?

I'm currently working on a .NET 5 app that is using a layered architecture (Web / Application / Infrastructure / Domain). If I am to follow the onion/clean architecture pattern, the dependencies should flow in one direction only, e.g.:
Web -> Application -> Infrastructure -> Domain
I now find myself needing to send several emails from the Application layer containing specific front-end URLs. This means that the Application layer will know about the Web layer, breaking the dependency flow.
A sample use case flow would be:
User makes a request, gets handled by a controller in the Web layer
Controller calls a handler on the Application layer
The Application layer uses an email service from the Infrastructure layer to send an email
On step #3 I'm in the Application layer but need Web URLs to construct the email body.
How can I solve this issue?
I've recently solved this problem within my organization. In our case we have an API "market place" used by the company as a whole, then a reverse proxy used by closely integrated clients, and finally an internal load balancer for the API containers.
That's 3 layers of URL knowledge that my Web and App layers shouldn't know about. Even the Web layer shouldn't know this, because that would give it more than one responsibility rather than just being a router (e.g. via MediatR).
Using Rewriters in the Infrastructure
This is what Z. Danev's answer is all about. It works, but you must maintain all the rules for each of those layers, each rewrite may add overhead, and the rules can get tricky depending on the complexity of the data you return.
It is a valid solution, though. Depending on your organization this may be easy, or it may be hard because the rewriters are maintained by other teams, require work tickets, and so on.
Well, if you can't or don't want to do that, then...
Application Layer Dependency Inversion and Patterns
Disclaimer: This solution works great for us, but it does have one drawback: at some level, you have to maintain something that knows about the layers above. So caveat emptor.
The situation I described above is roughly analogous to your problem, though perhaps more complex (you can do the same but simplify it). Without violating your architectural principles, you need to provide an interface (or more than one) that can be injected into your application layer as an application service.
We called ours ILinkBuilderService and created a LinkBuilderService that itself can be wired up through a DI container with individual ILinkBuilder implementations. Each of these implementations could be a MarketPlaceBuilder, a GatewayBuilder, etc., and they are arranged according to the Chain of Responsibility and Strategy patterns, from outermost proxy to innermost.
In this way, the builders inspect the web context (headers, request, etc.) to determine which one should handle the responsibility of building links. Your application layer (e.g. your email sender) simply calls the link building service interface with key data, and this is used to generate client-facing URLs without exposing the application layer to the web context.
Without going into too many details, these builders inspect headers like X-Forwarded-For, custom headers, and other details provided by proxies as the HTTP request hits each endpoint. Chain of Responsibility is key, because it allows the application layer to generate the correct URL no matter which layer the request originated from.
So how does this not break the one-way flow?
Well, you push these builders one-way down into your application layer. Technically, they do reach back up to the web layer for context, but that is encapsulated. This is ok and does not violate your architecture. This is what dependency inversion is all about.
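A condensed sketch of that shape, assuming ASP.NET Core's IHttpContextAccessor; ILinkBuilderService is the name from the answer, everything else is illustrative:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.AspNetCore.Http;

public interface ILinkBuilder
{
    // Returns null when this builder does not recognize the request context.
    Uri? TryBuild(HttpContext context, string routeKey);
}

public interface ILinkBuilderService
{
    Uri BuildClientFacingUrl(string routeKey);
}

public class LinkBuilderService : ILinkBuilderService
{
    private readonly IEnumerable<ILinkBuilder> _builders; // registered outermost proxy first
    private readonly IHttpContextAccessor _accessor;

    public LinkBuilderService(IEnumerable<ILinkBuilder> builders,
                              IHttpContextAccessor accessor)
    {
        _builders = builders;
        _accessor = accessor;
    }

    public Uri BuildClientFacingUrl(string routeKey)
    {
        var context = _accessor.HttpContext
            ?? throw new InvalidOperationException("No active request.");
        // Chain of Responsibility: the first builder that recognizes the
        // request (e.g. via X-Forwarded-For or a custom proxy header) wins.
        foreach (var builder in _builders)
        {
            if (builder.TryBuild(context, routeKey) is { } url)
                return url;
        }
        throw new InvalidOperationException($"No builder handled '{routeKey}'.");
    }
}
```

The application layer sees only ILinkBuilderService; the web context stays encapsulated behind the builders.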
Consider configuring "well known urls" at the web infrastructure level (gateway or load balancer for example) so you can have "mycompany.com/user-action-1" in the email and that will translate to the proper endpoint of your web app.

Application Service code in WebAPI

We are starting a new project and trying to implement some concepts from Domain driven design. We are planning to have following layers:
Web Interface (WebAPI)
Application Services (library)
Domain Services (library)
Data Access Services (Library)
We are thinking about merging the web interface and the application services together, so our WebAPI would talk directly to repositories, the domain model, and domain services.
Is this fine, or should we have a separate project for the application services, with the WebAPI communicating only through them?
Thanks
HTTP should be seen as one of potentially many access ports to reach your application services. If you could be entirely sure that you will never have to speak to your application through a communication channel other than HTTP, then I'd say it's perfectly fine not to have a separate application layer.
However, I'd also say that it's very hard to predict how application needs will evolve, and since adding an additional layer of indirection to segregate the application layer right away shouldn't be very costly (it's just delegation), that's what I'd do.
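As a sketch, the "just delegation" point might look like this (all names are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// The application service owns the use case; the controller is only
// an HTTP adapter in front of it.
public interface IRegisterUserService
{
    Task<Guid> RegisterAsync(string email);
}

[ApiController]
[Route("api/users")]
public class UsersController : ControllerBase
{
    private readonly IRegisterUserService _service;
    public UsersController(IRegisterUserService service) => _service = service;

    // Pure delegation: a queue consumer, gRPC endpoint, or console tool
    // could later call the same IRegisterUserService without this controller.
    [HttpPost]
    public async Task<IActionResult> Register(string email) =>
        Ok(await _service.RegisterAsync(email));
}
```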

Where to locate custom membership, roles, profile providers in a 3-tier setup?

I have a 3-tier ASP.NET MVC 3 project that has a data layer, a service layer, and a presentation layer which calls upon the service layer to get data. I'm actually using the DoFactory Patterns in Action solution.
I want to implement a custom membership, roles, profile provider but I'm not sure exactly where to put it. I was thinking of putting it in the service layer then have the provider call on the DAO objects to get the info.
Any other ideas?
You're thinking along the right lines. Though the UI layer interacts with the client and takes their password, your service layer should process attempts to enter the system.
Your action methods pass along the information to the service objects responsible for authorization.
Your service layer would have no idea whether it is in a web application or not.
The data layer is just the place where that information is stored, not where it is processed.
You might choose to keep the ID of the user in the UI layer, in session. On login the Service layer would take the username/password/whatever and return a UserID. Or, your action methods could pass in a session key into the service layer each time, to get the User information.
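A rough sketch of that placement, with a hypothetical DAO contract; the real MembershipProvider base class has many more members to override:

```csharp
using System.Web.Security;

// Hypothetical data-layer contract; the provider itself lives in the
// service layer and has no web knowledge.
public interface IUserDao
{
    bool CheckPassword(string username, string password);
}

public class ServiceLayerMembershipProvider : MembershipProvider
{
    private readonly IUserDao _userDao;

    // NB: ASP.NET normally instantiates providers from config via a
    // parameterless constructor, so in practice you would resolve the
    // DAO there rather than inject it like this.
    public ServiceLayerMembershipProvider(IUserDao userDao) => _userDao = userDao;

    public override bool ValidateUser(string username, string password) =>
        _userDao.CheckPassword(username, password);

    // The many remaining MembershipProvider members (CreateUser, GetUser,
    // ApplicationName, ...) are omitted from this sketch.
}
```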
Edit due to comment: I'm doing this in my current project (couple $million scope). I have my security decisions in the action methods. (Though of course the tools for making this simple are objects from the Service Layer.) For example, if the current user doesn't have this role or that role, then redirect them to a rejection page, otherwise, do the thing. MyServiceLayerObject.DoThing() has no security inside it.
It's the simplest way for my app and many others. ("Simplest" means it will be screwed up the least. When it comes to security, simple is good!) Since the action method is the gateway to the functionality, having security in the service layer would just cause extra work and actually obscure what security was happening. Now, that's my app, where there is usually one place where each action takes place.
Your app may be different. The more different action methods and (especially) different components are using your Services Layer's functionality, the more you'd want your Service Layer functionality locked down with your authorization scheme. Many people feel that security should always be in the service layer, and that any additional security actions in the UI layer would be bonus redundancy. I don't agree with that.
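The role-check-in-the-action-method approach described above might look roughly like this (all names hypothetical):

```csharp
using System.Web.Mvc;

// Hypothetical service-layer contract for role checks and the guarded work.
public interface ISecurityService
{
    bool CurrentUserIsInRole(string role);
    object DoThing(); // contains no security checks of its own
}

public class ReportsController : Controller
{
    private readonly ISecurityService _security;
    public ReportsController(ISecurityService security) => _security = security;

    public ActionResult SensitiveReport()
    {
        // The authorization decision lives here, at the gateway.
        if (!_security.CurrentUserIsInRole("Auditor"))
            return RedirectToAction("Rejected");
        return View(_security.DoThing());
    }
}
```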
Here is an existing implementation of Membership Providers in 3 tier world that I found when looking for the same thing...
http://elysianonline.com/programming/wcf-wrapper-for-asp-net-membership/
And here ...
http://elysianonline.com/programming/using-the-wcf-membership-provider/

resource request validation, service or business layer responsibility?

Assume you have a Business Layer that will be used by both a front-end, external-facing web application and a back-end, internal-facing application. The external application always has the user's logged-in identity/profile in session. The back-end application is only for internal administrators.
In a scenario where you have the business layer method SensitiveInfoManager.GetResource(id), you can imagine that when external users call this method you would want some sort of validation to ensure that the id passed in does in fact belong to the user requesting it (assuming the database structure lets you establish a link from the requesting user to the resource they are requesting). You can also imagine that a back-end administrator should be able to call the same method; that user is in no way tied to the resource but, by definition of being an internal administrator, should simply be able to request whatever resource they want.
The question is how you accomplish this with maximum reuse and the best separation of concerns. Do you incorporate this validation into the business layer, setting some sort of flag at class level that says "validate me" or "don't validate me" depending on who the consumer is? Or do you front your business layer with a service layer tasked with authorizing the requested resources, forcing the front-end application to channel requests through the service layer while the back-end application may go to the business layer directly?
I think that the Service Layer is the most natural place for the Authorization process.
If however you decide to add the authorization functionality to the Business Layer, then I would create an interface IAuthorizationAuthority that contains all the functionality to check for permissions. I would create two classes that implement this interface (one for the external application and one for the admin application) and use a Dependency Injection library so that you can decide on application level which implementation should be used.
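A sketch of that interface and its two implementations; names beyond IAuthorizationAuthority are illustrative:

```csharp
using System;

public interface IAuthorizationAuthority
{
    bool CanAccessResource(Guid userId, Guid resourceId);
}

// Hypothetical data-layer lookup for user-resource ownership.
public interface IResourceOwnershipDao
{
    bool IsOwnedBy(Guid resourceId, Guid userId);
}

// External application: the resource must belong to the requesting user.
public class ExternalAuthorizationAuthority : IAuthorizationAuthority
{
    private readonly IResourceOwnershipDao _ownership;
    public ExternalAuthorizationAuthority(IResourceOwnershipDao ownership) =>
        _ownership = ownership;

    public bool CanAccessResource(Guid userId, Guid resourceId) =>
        _ownership.IsOwnedBy(resourceId, userId);
}

// Internal admin application: administrators may request any resource.
public class AdminAuthorizationAuthority : IAuthorizationAuthority
{
    public bool CanAccessResource(Guid userId, Guid resourceId) => true;
}
```

Each application then registers the implementation it needs in its DI container at composition time, so the business layer never branches on the consumer.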

3-tier architecture v. 3-server architecture

I'm building a traditional .NET MVC site, so I've got a natural 3-tier software architecture setup (presentation in the form of Views, business layer in the controller, and data layer in the models and data access layer).
When I've deployed such sites, it usually goes either on one server (where the web site and db live), or two servers (a web server and a separate db server).
How does one go about a 3-server architecture (WEB, APP, and DB)? Would the web server just have the presentation (e.g. the physical View/aspx pages), the app server would hold the config file and bin folder, and the db server would remain as is?
My question is essentially, can you simply move the /bin and all app logic onto a separate server from the presentation views? If so, how do you configure the servers to know where to look? If there's a good primer somewhere or someone can give me the lowdown, I'd be forever indebted.
MVC is not a 3-tier architecture. Not every solution needs to be 3-tier or n-tier, but it is still important to understand the distinction. MVC happens to have 3 main elements, but those elements do not work in a "tiered" fashion, they are interdependent:
Model <----- Controller
  ^              |
  |              v
  +----------- View
The View depends on the Model. The Controller depends on the View and Model. These multiple dependency paths therefore do not function as tiers.
Typically a 3-tier solution looks like:
Data Access <--- [Mapper] ---> Domain Model <--- [Presenter/Controller] ---> UI
Presenter/Controller is somewhat optional - in Windows Forms development, for example, you usually don't see it, instead you have a "smart client" UI, which is OK too.
This is a 3-tier architecture because each of the 3 main tiers (Data, Domain, UI) has only one dependency. Classically, the UI depends on the Domain Model (or "Business" model) and the Domain Model depends on the DAL. In more modern implementations, the Domain Model does not depend on the DAL; instead, the relationship is inverted and an abstract mapping layer is injected later on using an IoC container. In either case, each tier only depends on the previous tier.
In an MVC architecture, C is the Controller, V is the UI (Views), and M is the Domain Model. Therefore, MVC is a presentation architecture, not a system architecture. It does not encapsulate the data access. It may not necessarily fully encapsulate the Domain Model, which can be treated as an external dependency. It is not tiered.
If you wanted to physically separate the tiers then it is usually done by exposing the Domain Model as a Web Service (i.e. WCF). This gives you improved scalability and a cleaner separation of concerns - the Domain Model is literally reusable anywhere and can be deployed across many machines - but comes with a significant up-front development cost as well as an ongoing maintenance cost.
The server architecture mirrors the 3-tier diagram above:
Database Server <----- Web Services <----- Application
The "Application" is your MVC application, which shares a Domain Model with the Web Services (through SOAP or REST). Web Services run on a dedicated server (or servers), and the database is, obviously, hosted on its own server. This is a 3-tier, 3-server architecture.
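For illustration, the Web Services tier might expose the Domain Model through a WCF contract like this (the service and DTO names are hypothetical):

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderDto GetOrder(int orderId);

    [OperationContract]
    void PlaceOrder(OrderDto order);
}

// DTO shared over SOAP/REST; the MVC application consumes it through
// a generated client proxy rather than referencing the DAL directly.
[DataContract]
public class OrderDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public decimal Total { get; set; }
}
```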
In some circles, I have seen this discussion phrased as the difference between n-tier and n-layer, where a "layer" in this context potentially represents another machine. In order to have a middle layer under this definition, it must be hosted. For example, if you had a service layer which the presentation layer called to get its data, then the service layer could be on a different machine than the presentation or database. However, that service layer is hosted either as a Windows service or as a web service; i.e., there is a process listening for requests on that machine. Thus, you cannot simply move the bin folder to a different machine and hope to have this work. I would look at WCF (Windows Communication Foundation) for creating these types of services.
ASP.NET MVC does not help you in setting up a 3-tier system. It is really only a front-end pattern.
The main issue you have to solve when implementing a multi-tier system is the transport of objects from one server to another. You have to find a way to serialize all objects, depending on the transport channel. This makes things slower and development more complicated.
There are reasons to have a separate app server: you might have logic in it that other applications need, or the app server might have different permissions than the web server. But it's hard to imagine a high-traffic website where all requests lead to a call to a remote app server.
Next logical scale up would be two web servers and one database server.
Eventually after adding many web servers it might be worth adding a service layer.
You might also want to add a distributed cache, session state server, email server, and other specialized servers at some point too as you scale.
So your question seems to be...
"can you simply move the /bin and all app logic onto a separate server from the presentation views?"
If I am understanding correctly, the files in your bin folder will be the compiled code-behinds for your ASP.NET pages. If that is the case then, no, I believe they need to be on the same machine as the pages themselves.
If you want to have your business logic on a separate machine from the presentation layer, you would need to wrap that code into a separate DLL, expose it via SOAP or some other protocol, and then call those SOAP-exposed services on the other server from the code in your presentation layer.
