I'm working on a DDD application. I'll skip the details, but broadly: one of the services retrieves information from a database, processes it, and writes the processed data (an aggregate, actually) into a flat file. (And no, I cannot change that: the flat file is sent to a printer that can interpret it.) Nothing out of the ordinary except for the flat-file part. While writing the code, I was thinking that writing the processed result into a file is of course part of my application service, and to me it is the same as writing an aggregate to a database using a unit of work through a repository class.
So my question is: is a FlatFileUnitOfWork legitimate as part of DDD? If so, does anyone have a (good) example of it? To me it seems rather uncommon, and I wasn't able to find a proper example of a "FlatFileUnitOfWork".
Thanks a lot.
NB: The Web API is written in C#
Joining TSeng, I'd say it depends! :)
Based on your description, Unit of Work is unlikely to be a fitting pattern in your case. What a proper solution is depends, in DDD, as its name suggests, on the domain! - (EDIT: shortened)
My question would be: what's the business process behind that printing? Is it just a minor matter, or a crucial part of the core domain (e.g. the whole application is about printing out revolutionary, coolly designed concert tickets), or something in between?
If it's just a minor matter, far removed from the core domain, then an application event or command might be OK. E.g. you emit an application event in your core domain's context, which is then caught in another context, which lets the printer do its job by sending that flat file to it. Alternatively, this printing might belong to the same context (still being a minor issue). In that case, your application service might call (or "command") the proper module of the infrastructure layer, which does the printing via the flat file.
If it's part of the core domain then it might happen e.g. that a domain service is somehow responsible for composing that crucial printing stuff - or something like that. In this case, precise details of the solution would depend on a thorough analysis (knowledge crunching, domain modelling) of the core domain.
EDIT - Sample Case
For my sample case, I imagine you have a Ticket Printing micro-service, which is your core domain, - because you are printing the coolest concert tickets ever, and that's the main point of the whole application.
Within this service, I imagine you have a complex domain model for building up that coolest ticket layout, on top of which there's a TicketComposer providing a TicketToPrint value object containing all important information you need for that printing - e.g. like this:
public TicketToPrint ComposeTicketToPrint(SoldTicket ticket)
{
// ...
}
In that case, you need a TicketPrinter class in your Infrastructure layer, which does the job of printing out that ticket. Neither your Domain nor your Application layer should even know how it does that. I.e. your application service method would look something like this:
public void PrintSoldTicket(SoldTicketDTO ticketDto)
{
SoldTicket soldTicket = CreateSoldTicket(ticketDto);
var composer = new TicketComposer();
TicketToPrint ticketToPrint = composer.ComposeTicketToPrint(soldTicket);
var printer = new TicketPrinter();
printer.Print(ticketToPrint);
}
And then, at the end of the chain, your TicketPrinter in the Infrastructure layer does the job you are asking about:
public void Print(TicketToPrint ticketToPrint)
{
// Creating the flat file and sending it to the printer...
}
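To make that last step concrete, here is a minimal sketch of what the flat-file writing could look like, assuming a simple fixed-width layout and made-up property names on TicketToPrint (EventName, SeatNumber, Price, TicketNumber) plus an illustrative drop-folder path; a real printer will dictate its own format:

```csharp
using System.IO;

public class TicketPrinter
{
    // Illustrative folder the printer polls; in reality this would come from configuration.
    private const string DropFolder = @"C:\PrinterDrop";

    public void Print(TicketToPrint ticket)
    {
        // Hypothetical fixed-width layout: 40 chars event name, 10 chars seat, 8 chars price.
        string line =
            ticket.EventName.PadRight(40) +
            ticket.SeatNumber.PadRight(10) +
            ticket.Price.ToString("0.00").PadLeft(8);

        string path = Path.Combine(DropFolder, ticket.TicketNumber + ".txt");
        File.WriteAllText(path, line);
    }
}
```

The point is that this format knowledge lives only here; the Domain and Application layers never see it.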
Does this sample answer your question?
From a DDD perspective, the printer looks like a UI layer: it just "displays" the data.
You should have some kind of Presenter which passes the Aggregate to some Infrastructure service which is responsible for translation of the Aggregate into a format which is understandable by the printer.
Related
Background: I need parts of my system to be able to push various status messages to some data structure so that they can be consumed by a caller, without passing the data structure into methods explicitly, and where the needs of the callers can differ.
Detail: my application has two (and conceivably more) heads, an ASP.NET MVC 5 web site and a Windows service. So normally, while the composition root of a web application would be the web site itself, I am using a separate composition root that both these "front ends" connect to--this allows them to share a common configuration, as almost all of their dependency injection will be 100% identical. Plus, for testing, I've decided to keep most of the code out of the web site as truly unit testing controllers is problematic.
So my code needs to be able to run outside of the context of any web request. Similarly, anything the service does on a schedule needs to be able to be run as an on-demand job from the web site. So most of the heavy-lifting code in my application is NOT in the web site or the service.
Now, back to the needs of my status messages:
Some status messages will be logged, but potentially more will be logged when run as a service. It's okay to queue the log items and save them at the end.
When, say, a job is run on-demand from the web site, fewer things may be logged because any issues the user can take care of will be displayed directly to the user, and for debug purposes we only care about outright errors happening. New messages need to be pushed to the web site immediately (probably through websockets).
Also, a job may be run in debug or verbose mode, so that more informational or warning messages are produced on one run (say, from the web) than on another (from the headless service). Code generating messages shouldn't worry about these details at all, unless something that would hurt performance in production is placed inside compiler directives for debug mode.
Additionally, some of the code pushes errors, warnings, or information into the objects that are returned from a request. These are easy to handle. But other errors, warnings, or information (such as errors that prevent said requested objects from being fetched at all) need to bubble up outside of the normal return values.
Right now I'm using something that seems less than ideal: all my methods have to accept a parameter that they can modify in order to bubble up such errors. For example:
public IReadOnlyCollection<UsableItem> GetUsableItems(
    ReadOnlyHashSet<string> itemIds,
    List<StatusMessage> statusMessages
) {
    var resultItems = _itemService.Get(itemIds);
    var resultItemsByHasFrobDuplicate = resultItems
        .GroupBy(i => i.FrobId)
        .ToLookup(grp => grp.Count() > 1, grp => grp.ToList());

    // (simplified: the real code wraps these strings in StatusMessage objects)
    statusMessages.AddRange(
        resultItemsByHasFrobDuplicate[true]
            .Select(items =>
                $"{items[0].FrobId} is used by multiple items {string.Join(",", items.Select(i => i.UsableItemId))}"));

    return resultItemsByHasFrobDuplicate[false]
        .Select(grp => grp.First())
        .ToList()
        .AsReadOnly();
}
So you can see here that while normally items can be in the return value from the method (and these items can even have their own status messages placed on them), others cannot—the calling code can't deal with duplicates and expects a collection of UsableItem objects that do NOT have duplicate FrobId values. The situation of the duplicates is unexpected and needs to bubble up to the user or the log.
The code would be greatly improved by being able to remove the statusMessages parameter and do something more like CurrentScope.PushMessage(message) and know that these messages will be properly handled based on their severity or other rules (the real messages are an object with several properties).
Oh, and I left something out in the code above. What I really have to do is:
_itemService.Get(itemIds, statusMessages); // -- take the darn parameter everywhere
Argh. That is not ideal.
I instantly thought of MiniProfiler.Current as similar, where it's available anywhere but it's scoped to the current request. But I don't understand how it is able to be static, yet segregate any Step calls between different requests so that a user doesn't get another user's steps in his output. Plus, doesn't it only work for MVC? I need this to work when there is no MVC, just non-web code.
Can anyone suggest a way to improve my code and not have to pass around a list to method after method? Something that will work with unit tests is also important, as I need to be able to set up a means to capture the bubbled errors in my mock within a unit test (or be able to do nothing at all if that's not the desired portion of the system to test).
P.S. I don't mind tactful criticism of my little ToLookup pattern above for separating duplicates. I use that technique a lot and would be interested in a better way.
I think you're just looking at this the wrong way. None of this actually involves or really is related to a request. You simply need some service you can inject which pushes messages out. How it does that is inconsequential, and the whole point of dependency injection is that the class with the dependency shouldn't know or care.
Create an interface for your messaging service:
public interface IMessagingService
{
void PushMessage(string message);
}
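Since the question calls out unit testing: an interface this small is trivial to double. A minimal sketch (the type name is mine):

```csharp
using System.Collections.Generic;

// Captures pushed messages so a test can assert on them; inject a
// do-nothing implementation instead when messages aren't under test.
public class CapturingMessagingService : IMessagingService
{
    public List<string> Messages { get; } = new List<string>();

    public void PushMessage(string message)
    {
        Messages.Add(message);
    }
}
```

In a test you inject it, exercise the class under test, then assert on `Messages`.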
Then, you should alter your class which contains GetUsableItems a bit to inject the messaging service into the constructor. In general, method injection (what you're doing currently by passing List<StatusMessage> into the method) is frowned upon.
public class MyAwesomeClass
{
    protected readonly IMessagingService messenger;

    public MyAwesomeClass(IMessagingService messenger)
    {
        this.messenger = messenger;
    }

    // ... your methods go here ...
}
Then, in your method:
messenger.PushMessage("My awesome message");
The implementation of this interface, then will probably vary based on whether it's injected in the web app or the windows service. Your web app will likely have an implementation that simply utilizes its own code to push messages, whereas the windows service will likely need an implementation that utilizes HttpClient to make requests to your web app. Setup your DI container to inject the right implementation for the right application and you're done.
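As for how MiniProfiler.Current can be static yet per-request: it relies on ambient context rather than anything MVC-specific (historically HttpContext.Items on the web and the logical call context elsewhere). If you ever do want the static CurrentScope.PushMessage shape from the question, AsyncLocal<T> (.NET Framework 4.6+ / .NET Core) gives the same effect outside any web stack, because its value flows with the logical call context. A rough sketch, with illustrative names:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

public static class CurrentScope
{
    // Each logical flow (request, job, test) sees its own list,
    // even though the accessor is static.
    private static readonly AsyncLocal<List<string>> _messages =
        new AsyncLocal<List<string>>();

    public static IDisposable Begin()
    {
        _messages.Value = new List<string>();
        return new Scope();
    }

    // Safe to call anywhere; a no-op outside an active scope.
    public static void PushMessage(string message)
    {
        _messages.Value?.Add(message);
    }

    public static IReadOnlyList<string> Messages
    {
        get { return (IReadOnlyList<string>)_messages.Value ?? new string[0]; }
    }

    private sealed class Scope : IDisposable
    {
        public void Dispose() { _messages.Value = null; }
    }
}
```

Usage would be `using (CurrentScope.Begin()) { ... CurrentScope.PushMessage("oops"); ... }`. That said, the injected-service approach above is easier to test and to vary per host.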
I am working on an app which uses WCF as data layer.
I understand there are certain benefits, such as security. What would be the other benefits or handicaps of such an approach?
Wouldn't serializing and deserializing cost performance?
How about maintenance, testing and maintainability?
What would be the other drawbacks of such an approach?
So you have a data layer and it is accessed using WCF. First, the upside: you can move your data layer wherever you need it and your applications should not care (as long as the DNS resolves correctly). If it is hosted inside IIS, you gain some security by using SSL as the secured layer in front of your service. And if your services are well written, you can easily put them behind a load balancer.
On the downside you need to be concerned about how you expose that service. If it communicates the data back in XML you will suffer a much larger serialization penalty than if you used JSON as your means of serializing data.
In the middle (neither good nor bad), you would be forcing yourself to be careful (I would hope) in how you format your requests. For example, passing only a key for a delete instead of the entire record to delete. (Believe me, I've seen systems written like this!!)
You should also carefully design your services so that your svc file contains something like this:
public Customer GetCustomer(int customerID)
{
return DataLayer.GetCustomer(customerID);
}
This way you can easily use your data layer directly if some other application is already sitting on your WCF server. A good example: you may have your data layer isolated inside your internal network, sheltered by the DMZ. Your intranet may need to access the same data layer, so you can put your intranet applications on that server and use the data layer directly. Or they can be on a different server but use the data layer libraries directly.
One final note...which we encountered a need for in one situation. If you implement something out on the DMZ that needs to directly access a server instead of being routed through the firewalls, you can easily create a proxy of your data services. The proxy just takes your service interface and implements calls through the firewall to your service behind the DMZ. Took us maybe one day to implement this.
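That proxy really is as thin as it sounds. A hedged sketch (the contract, entity type, and endpoint name are illustrative):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    Customer GetCustomer(int customerID);
}

// Runs in the DMZ and forwards each call through the firewall
// to the internal service implementing the same contract.
public class CustomerServiceProxy : ICustomerService
{
    private readonly ICustomerService _inner =
        new ChannelFactory<ICustomerService>("InternalCustomerEndpoint")
            .CreateChannel();

    public Customer GetCustomer(int customerID)
    {
        return _inner.GetCustomer(customerID);
    }
}
```

Because the proxy implements the same interface, callers cannot tell the difference, which is why it took so little time to put in place.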
For testing: well that is no different than anywhere else you have a data layer. You need to do your tests, use repeatable data in your test setup, and proper cleanup after your tests complete. It also does not change for maintainability, etc. However you need to have a clear approach for versioning of your services to encompass interface changes. But, again, that is the same no matter where your data services lie.
Hope this helps some.
I know there are actually a number of questions similar to this one, but I could not find one that exactly answers my question.
I am building a web application that will
obviously display data to the users :)
have a public API for authenticated users to use
later be ported to mobile devices
So, I am stuck on the design. I am going to use asp.net MVC for the website, however I am not sure how to structure my architecture after that.
Should I:
make the website RESTful and act as the API
in my initial review, the GET returns the full view rather than just the data, which to me seems like it kills the idea of the public API
also, should I really be performing business logic in my controller? To scale, wouldn't it be better to have a separate business logic layer on another server, or would moving my MVC site to another server solve the same problem? I am trying to create a SOLID design, so it also seems better to abstract this into a separate service (which could just be another class, but then I'm back to the scalability problem...)
make the website not be RESTful and create a RESTful WCF service that the website will use
make both the website and a WCF service that are restful, however this seems redundant
I am fairly new to REST, so the problem could possibly be a misunderstanding on my part. Hopefully, I am explaining this well, but if not, please let me know if you need anything clarified.
I would make a separate business logic layer and a (restful) WCF layer on top of that. This decouples your BLL from your client. You could even have different clients use the same API (not saying you should, or will, but it gives you the flexibility). Ideally your service layer should not return your domain entities, but Data Transfer Objects (which you could map with Automapper), though it depends on the scope and specs of your project.
Putting it on another server makes it a different tier; a tier is not the same thing as a layer.
Plain and simple.... it would be easiest from a complexity standpoint to separate the website and your API. It's a bit cleaner IMO too.
However, here are some tips that you can do to make the process of handling both together a bit easier if you decide on going that route. (I'm currently doing this with a personal project I'm working on)
Keep your controller logic pretty bare. Judging on the fact that you want to make it SOLID you're probably already doing this.
Separate the model that is returned to the view from the actual model. I like to create models specific to views and have a way of transforming the model into this view specific model.
Make sure you version everything. You will probably want to allow and support old API requests coming in for quite some time.... especially on the phone.
Actually use REST to its fullest, not just as another name for HTTP. Most implementations miss the fact that in any type of response the state should be transferred with it (missing the ST in REST). Allow self-discovery of actions both on the page and in the API responses: for instance, if you allow paging in a resource, always include the paging links in the API response or the web page. There's an entire Wikipedia page on this (HATEOAS). This immensely aids with decoupling, sometimes allowing you to automagically update clients to the latest version.
Now your controller action will probably look something like this pseudo-code:
MyAction(param) {
    // Do something with param
    model = foo.baz(param)

    // Return the result in the right shape
    if (isAPIRequest) {
        return WhateverResult(model)
    }
    return View(model.AsViewSpecificModel())
}
One thing I've been toying with myself is making my own type of ActionResult that handles the return logic, so that it is not duplicated throughout the project.
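In MVC 5 that custom result could look roughly like the following sketch (the content-negotiation rule and the type name are mine, not production code): it renders JSON for API callers and a view otherwise, so the return logic lives in one place.

```csharp
using System.Linq;
using System.Web.Mvc;

public class NegotiatedResult : ActionResult
{
    private readonly string _viewName;
    private readonly object _model;

    public NegotiatedResult(string viewName, object model)
    {
        _viewName = viewName;
        _model = model;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        // Crude negotiation: JSON if the caller asks for it, a view otherwise.
        var accepts = context.HttpContext.Request.AcceptTypes ?? new string[0];

        ActionResult inner = accepts.Contains("application/json")
            ? (ActionResult)new JsonResult
              {
                  Data = _model,
                  JsonRequestBehavior = JsonRequestBehavior.AllowGet
              }
            : new ViewResult
              {
                  ViewName = _viewName,
                  ViewData = new ViewDataDictionary(_model)
              };

        inner.ExecuteResult(context);
    }
}
```

An action would then just `return new NegotiatedResult("MyView", model);` and stay free of branching.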
I would use the REST service for your website, as it won't add any significant overhead (assuming they're on the same server) and will greatly simplify your codebase. Instead of having 2 APIs: one private (as a DLL reference) and one public, you can "eat your own dogfood". The only caution you'll need to exercise is making sure you don't bend the public API to suit your own needs, but instead having a separate private API if needed.
You can use RestSharp or EasyHttp for the REST calls inside the MVC site.
ServiceStack will probably make the API task easier, you can use your existing domain objects, and simply write a set of services that get/update/delete/create the objects without needing to write 2 actions for everything in MVC.
I've recently been handed a code base which does a few things a little differently from how I usually do them.
The main difference is that it seems to pass elements (say for example a drop down list control) down to the business logic layer (in this case a separate project but still in the same solution) where the binding to business data takes place.
My natural approach is always to surface the information that is required up to the UI and bind there.
I'm struggling to match the first technique to any of the standard patterns, but that may be down to the actual implementation rather than the idea behind it.
Has anyone ever encountered this type of architecture before? If so can you explain the advantages?
The solution is an ASP.Net website. Thanks.
I would make the case that this is a bad architecture, since the original developer tightly coupled the business logic to the presentation layer. If you wanted to switch from webforms to, say, MVC, you'd have to refactor chunks of your business layer, which shouldn't be the case!
If it's at all possible, you should consider moving away from developing the site in this fashion. In the interim, you can at least start the decoupling process by splitting the logic up a little bit further. If, say, you have a BindDropDown(DropDownList ddl) method, split the method apart, so you have a GetDropDownData() method that returns your actual business object, and BindDropDown only sets the values of the DropDownList. That way, at least, you'll be more easily able to move away from the tight coupling of the presentation layer and business layer in the future.
Of course, if the site is already designed like that (with a clear demarcation between the presentation layer, the intermediate "presentation binding" layer, and the business layer), I could see a case being made that it's acceptable. It doesn't sound like that's the case, however.
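A sketch of that split, using the method names from above (webforms-style; the business type and property names are illustrative):

```csharp
using System.Collections.Generic;
using System.Web.UI.WebControls;

// Business layer: returns data, no UI references at all.
public IList<Customer> GetDropDownData()
{
    return _repository.GetActiveCustomers();
}

// UI layer (code-behind): the only place that touches the control.
public void BindDropDown(DropDownList ddl)
{
    ddl.DataSource = _businessLayer.GetDropDownData();
    ddl.DataTextField = "Name";   // illustrative property names
    ddl.DataValueField = "Id";
    ddl.DataBind();
}
```

After the split, GetDropDownData can move to any presentation technology untouched, while BindDropDown stays behind as thin webforms glue.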
No, you should not pass UI elements to the Domain Model to bind / Populate.
Your domain model should ideally be able to be used with Windows Forms / WPF / Silverlight / ASP.NET / MVC you name it.
Now, I kinda understand the idea that your business objects should know how to store and render themselves (it's the OO holy grail), but in practice this doesn't work well: those functions often carry dependencies (database middleware, UI components, etc.) that you do not want in your BO assembly, and that severely limits your reusability.
Something that you can do though that gives your users the illusion of your BO knowing how to render itself is using extension classes (in a separate assembly, to contain the dependencies) something like...
public static class AddressUIExtensions
{
public static void DisplayAddress(this Address add, AddressControl control)
{
...
}
}
Then the API user can simply do
var ctrl = new AddressControl();
address.DisplayAddress(ctrl);
but you still have physical separation.
Has anyone ever encountered this type of architecture before?
If so can you explain the advantages?
The only advantage is speed of development - in the short-term; so it's well suited to simple apps, proof-of-concepts (PoC), etc.
Implementing proper abstraction usually takes time and brings complexity. Most of the time that is what you really want, but sometimes an app might be built as a simple throw-away PoC.
In such cases it isn't so much that a room full of people sit down, debate architectures for a couple of hours, and arrive at the decision that binding in the BL makes sense; it's usually a "whatever-gets-it-done-fastest" call by the developers.
Granted, that simple laziness or ignorance will probably be the reason why it's used in other cases.
Your business layer should return a model (a view model) that the UI layer will in turn use to populate what it needs, period. Nothing should be sent to the business layer in terms of UI components, period. It's that simple, and that hard and fast a rule.
I have the following directories:
-UI
-BusinessLogic
-DataAccess
-BusinessObjects
If I have a class that is a client stub to a server-side service that changes state on a server system, where would that go?
this code belongs in the recycle bin ;-)
seriously, if you wrote it and don't know where it goes, then either the code is questionable or your partitioning is questionable; how are we supposed to have more information about your system than you have?
now if you just want some uninformed opinions, those we've got by the petabyte:
it goes in the UI because you said it's a client stub
it goes in the business logic because it implements the effect of a business rule
it goes in the data access layer because it is accessing a state-changing service
it goes in the business object layer because it results in a state change on the server
it would be more helpful if you told us what the stub actually does; without specifics it is hard to know where it belongs, and/or it is easy to argue in a vacuum about where it "should" belong
I would consider this a form of data access, although it's not clear to me that you need to put it in the same project as the rest of your data access classes. Remember that the layers are mainly conceptual -- to help you keep your design clean. Separating them into different projects helps organizationally, but is not mandatory. If it's an actual stub class, then the data access project is probably the natural home for it, but if it's only used in the UI layer, then keeping it there would probably be ok.
I don't think it belongs in any of those. You either need a new directory or a new project entirely. But out of those given, I would have to say BusinessObjects because it's certainly not accessing data according to your description, and rather is simply acting like a local object (stub).
In a web service repository.