I'm building a website in ASP.NET MVC 4 with C#.
The requirement is as follows:
0. Build an order by collecting the required details across several pages.
1. The user should be able to navigate back and forth (to check the values entered) before finally submitting the order.
2. Data collected on subsequent pages may depend on the current selection. For example, if I change the country the product needs to be shipped to, the customs duties and taxes shown on the next page need to change, so the data on subsequent pages needs to be invalidated. If a modification to a field does not impact any other data (e.g. a change in the quantities ordered), the current selection needs to be persisted.
For scenario 1, I'm planning to use the Memento pattern: the object will be serialized and persisted in the database.
However, I'm not sure how to deal with scenario 2. I'm sure there is a design pattern I can use here; a code sample would definitely be helpful.
Initially I thought of the Observer pattern. However, I do not have any active subscribers to act on the change (values are saved in the DB and loaded on the next/previous page). Also, we are mostly looking at a single storage entity here (fields 1 and 2 are populated on page 1, fields 3, 4 and 5 on page 2, etc.).
I would separate the responsibilities like this for your scenarios:
1. managing the page navigation and interactions
2. awareness of the state of the object
Because you are going to have complex page navigation in your scenario, I would recommend using a Mediator to manage the interaction/communication between the pages.
On the other hand, you need to somehow manage the state of the object, and based on that state you need to invalidate data and take action, so you would also want the State pattern.
One level above this sits the piece that knows what the scenario actually is; there you can use something like a use-case/application controller that manages your scenario using those implemented patterns (Mediator, State).
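To make that a bit more concrete, here is a minimal, hedged sketch (all names are made up, not from your code) of a mediator that knows which wizard steps depend on which and invalidates only the dependent ones when a selection changes:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical step abstraction: each wizard page knows which steps it depends on
// and how to invalidate its own persisted data (e.g. clear the duties/taxes values).
public interface IWizardStep
{
    string Name { get; }
    IEnumerable<string> DependsOn { get; }
    void Invalidate();
}

// The mediator is the only object that knows the dependency graph between pages.
public class OrderWizardMediator
{
    private readonly List<IWizardStep> _steps = new List<IWizardStep>();

    public void Register(IWizardStep step)
    {
        _steps.Add(step);
    }

    // Called when a field that affects other pages changes (e.g. the shipping country).
    // Steps that do not depend on the changed step keep their persisted values.
    // Assumes the dependency graph has no cycles.
    public void NotifyChanged(string changedStepName)
    {
        var dependents = _steps.Where(s => s.DependsOn.Contains(changedStepName)).ToList();
        foreach (var step in dependents)
        {
            step.Invalidate();
            NotifyChanged(step.Name);   // cascade the invalidation down the chain
        }
    }
}
```

An application controller one level up could then register the steps for a given order, call NotifyChanged after each POST, and leave the persisted values of untouched steps alone.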
I have an AngularJS application that is backed by a .NET core CQRS API and MongoDB database. Whilst I know and understand most of the technologies really well, MongoDB and document databases as a whole are new to me and I'm still learning.
The data in its simplest form is a document that can have up to 3 tiers of hierarchy (top level, group level and final node). When the document is first created and inserted into the Mongo database it has no tiers/nodes at all, just the high-level info like name, author, etc. Afterwards, in the UI, the user may add any number of new groups and final nodes within those groups.
In a relational database I would simply post the necessary info to the command that would insert the new rows, and the command would be called something like 'AddNewGroup'. I wouldn't have to post all the information, just the key IDs needed and the new information to insert.
However, this approach doesn't seem correct with Mongo. Am I right in assuming that I should post the whole document and have a single update command that overwrites the existing document in the database? Or is there a better way?
Also, should I still break my commands down into the specific kinds of updates being done, e.g. UpdateAuthorName, AddNewGroup, etc., if the whole document is always being updated?
Your Document is the write model, the Aggregate in the Domain-Driven Design world. Since this is CQRS without Event Sourcing, you need to store the Document state along with the generated events. You also need to protect against concurrent writes. That being said, you have two options:
For each command you update only the nested document that changed, e.g. the Document's header.
It has the advantage that it is fast, and the probability of concurrent-modification exceptions is lower if you have separate protection for each document section in place (i.e. a version attribute for each section, as opposed to a single version for the entire document).
It has the disadvantage that it couples the Domain model (class) too tightly to the infrastructure, as you need to put the queries inside the Document class. If you mix the Domain with the Infrastructure you no longer have a pure model, and you lose the ability to safely retry the command. This can be done outside the Domain class, in the infrastructure, if you "teach" the infrastructure repository to react differently based on the emitted events.
It is also an indication that you in fact have multiple write models, one for each of the document's sections (header, body, footer, notes, etc.), since a write model is dictated by the consistency boundary. In this case they would share the same Document ID, though.
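A rough sketch of this first option with the MongoDB C# driver (the class and field names below are assumptions, not your actual model): the command handler touches only the header section and guards it with a per-section version.

```csharp
using System;
using System.Threading.Tasks;
using MongoDB.Driver;

public class DocumentDto
{
    public string Id { get; set; }
    public int HeaderVersion { get; set; }   // version for the header section only
    public HeaderDto Header { get; set; }
}

public class HeaderDto
{
    public string AuthorName { get; set; }
}

public class UpdateAuthorNameHandler
{
    private readonly IMongoCollection<DocumentDto> _documents;

    public UpdateAuthorNameHandler(IMongoCollection<DocumentDto> documents)
    {
        _documents = documents;
    }

    public async Task Handle(string documentId, int expectedHeaderVersion, string newAuthorName)
    {
        var filter = Builders<DocumentDto>.Filter.Eq(d => d.Id, documentId)
                   & Builders<DocumentDto>.Filter.Eq(d => d.HeaderVersion, expectedHeaderVersion);

        var update = Builders<DocumentDto>.Update
            .Set(d => d.Header.AuthorName, newAuthorName)   // touch only the changed section
            .Inc(d => d.HeaderVersion, 1);                   // bump that section's version

        var result = await _documents.UpdateOneAsync(filter, update);
        if (result.MatchedCount == 0)
            throw new InvalidOperationException("The document header was modified concurrently.");
    }
}
```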
For all commands you replace the whole document, no matter what changed inside.
It has the huge advantage that you can have a pure Domain class, with no dependency on the infrastructure whatsoever. The infrastructure just takes the whole state, replaces the persisted state and appends the new events, all in the same transaction.
It has the disadvantage that it is slower than the first solution. This is the course to take if you follow the DDD approach, having identified the Document as an Aggregate (in DDD the Aggregate is fully loaded and fully persisted in response to executing commands).
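And a hedged sketch of the second option (event persistence and the surrounding transaction are omitted; all names are illustrative): the Domain class stays persistence-ignorant, and the repository replaces the whole document, guarded by a single version field.

```csharp
using System;
using System.Threading.Tasks;
using MongoDB.Driver;

public class DocumentState
{
    public string Id { get; set; }
    public int Version { get; set; }
    public string Name { get; set; }
    public string Author { get; set; }
    // groups and nodes omitted for brevity
}

public class DocumentRepository
{
    private readonly IMongoCollection<DocumentState> _documents;

    public DocumentRepository(IMongoCollection<DocumentState> documents)
    {
        _documents = documents;
    }

    public async Task Save(DocumentState state, int expectedVersion)
    {
        state.Version = expectedVersion + 1;

        var filter = Builders<DocumentState>.Filter.Eq(d => d.Id, state.Id)
                   & Builders<DocumentState>.Filter.Eq(d => d.Version, expectedVersion);

        // Replace the entire persisted state; if another writer got there first,
        // the filter matches nothing and the command can be retried safely.
        var result = await _documents.ReplaceOneAsync(filter, state);
        if (result.MatchedCount == 0)
            throw new InvalidOperationException("The document was modified concurrently.");
    }
}
```

If a concurrent write is detected you simply reload the Aggregate, re-run the command and save again, which is safe precisely because the Domain class stays pure.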
In my web app I'm tracking view counts on pages.
Right now, the action in the controller issues a command to the data layer to increment the view count on the model before returning the result of the query.
This action seems to break the rules of Command-Query Separation, since with the request the user agent is submitting a query and unwittingly issuing a command (to increment the view count).
What architectural decisions need to be taken to maintain the Command-Query-Separation in this action?
You should consider CQS relative to the conceptual level of the operation in question. Here are some examples that all seem to violate CQS, but only do so on a different conceptual level:
A ReadFile call on a file system object does not modify the file - but it can update the last-accessed timestamp on the file.
A FindById call to a repository should not change the database - but it can very well add the queried object to a cache.
A GET operation on a REST API should not change the model - but it can update statistical data.
These examples have one thing in common: they leave the state of the model the client works on untouched, but they do modify data outside of that model. And this is not a violation of CQS.
Another way to see this is through the principle of least surprise. A client of the above mentioned REST API would not expect the model to change with a GET request, but he doesn't care if the API updates a statistical counter.
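To make that concrete for your view counter, here is a hedged sketch (the handler interfaces and names are assumed abstractions, not a particular library): the read stays a pure query against the page's model, and the view count is recorded as an explicit, separate command that the GET action fires alongside it.

```csharp
using System.Web.Mvc;

// Assumed, minimal abstractions for the sketch; substitute your own dispatcher if you have one.
public interface IQueryHandler<TQuery, TResult> { TResult Handle(TQuery query); }
public interface ICommandHandler<TCommand> { void Handle(TCommand command); }

public class GetPage
{
    public int PageId { get; private set; }
    public GetPage(int pageId) { PageId = pageId; }
}

public class RecordPageView
{
    public int PageId { get; private set; }
    public RecordPageView(int pageId) { PageId = pageId; }
}

public class PageDto
{
    public string Title { get; set; }
    public string Body { get; set; }
}

public class PagesController : Controller
{
    private readonly IQueryHandler<GetPage, PageDto> _getPage;
    private readonly ICommandHandler<RecordPageView> _recordView;

    public PagesController(IQueryHandler<GetPage, PageDto> getPage,
                           ICommandHandler<RecordPageView> recordView)
    {
        _getPage = getPage;
        _recordView = recordView;
    }

    [HttpGet]
    public ActionResult Details(int id)
    {
        var page = _getPage.Handle(new GetPage(id));   // pure read of the page's model
        _recordView.Handle(new RecordPageView(id));    // statistics live outside that model
        return View(page);
    }
}
```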
I'm currently developing an application using ASP.NET MVC, and now I need to create an interface (web page) that will allow users to pick and choose, from a set of different objects, the ones they'd like to use as the building blocks for constructing a more complex object.
My question is meant to be generic, but to provide an actual example, let's say the application allows users to design pieces of furniture, like wardrobes, kitchen cabinets, etc. So, I've created C# classes representing the basic building blocks of furniture design, like basic shapes (pieces of wood that added together form a box, etc.), doors, doorknobs, drawers, etc. Each of these classes has some common properties (width, height, length) and some specific properties, but all descend from a base class called FurnitureItem, so there are ways for them to be 'connected' together and interchanged. For instance, there are different types of doors that can be used in a wardrobe, like SimpleDoor, SlidingDoor, and so on. The user designing the furniture would have to choose which type of Door object to apply to the current piece. Also, there are other items, like dividing panels, shelves, drawers, etc. The resulting model would of course be a completely customized, modularly designed wardrobe or kitchen cabinet, for example.
The problem is that while I can easily instantiate all the objects I need and connect them together using C#, forming a complete furniture item, I need to provide a way for users to do it through a web interface. That means they would probably have a toolbox or toolbar of some sort and select (maybe drag and drop) items onto a design panel in the web interface. So, in the browser I cannot have my C# class implementation, and if I post the selected item to the server (either a form post or using Ajax), I need to reconstruct the whole collection of objects the user previously chose, so I can fit the newly added item, calculate its dimensions, etc., and then finally return the complete modified set of objects...
I'm trying to think of different ways of caching or persisting these objects while the user is still designing (adding and deleting items), since there may be many round trips to the server, because the proper calculation of dimensions (width, height, etc. of contained objects) is done on the server by methods of my C# classes. It might be nice to store the objects for the current piece of furniture being designed in a session or cache object per user. Even then, I need to be able to provide some kind of ID for the object being added and for the one it is being added to, in a parent/owner kind of way, so I can properly identify the object instance back on the server where the new instance will be connected.
I know it's somewhat confusing, but I hope this gives an idea of the problem I'm facing. In other words, I need to keep a set of interconnected objects on the server, because they are responsible for calculations and for applying some constraints, while allowing the users to manipulate each of these objects and how they are connected, adding and deleting them, through a web interface. So at the end, the whole thing can be persisted in a database. Ideally I even want to give the user a visual representation or feedback, so they can see what they are designing as they go along...
Finally, the question is about what approach I should take to this problem. Are C# classes on the server enough (encapsulating the calculations and maybe generating each item's own graphical representation back to the client)? Will I need to create similar classes in JavaScript to allow a slicker user experience? Will it be easier if I manage to keep the objects alive in a session or cache object between requests? Or should I just instantiate all the objects that form the whole piece of furniture again on each user interaction (for the calculations)? In that case, would I have to post all the objects and all the already-customized properties every time?
Any thoughts or ideas on how to best approach this problem are greatly appreciated...
Thanks!
From the way you've described it, here is what I'm envisioning:
It sounds like you do want a slick-looking UI, so yes, you'll want to divide your logic into two sets: a client-side set for building and a server-side set for validation. I would go heavy on the JavaScript so that the user can happily build their widget disconnected, and then validate everything once it's posted to the server.
Saving to a session opens a whole can of web-farm worms. If these widgets can be recreated in less than a minute (once the user has decided what they like), I would avoid saving partials altogether. If it's absolutely necessary, though, I would save them to the database.
If the number of objects to construct a widget is reasonable, it could all come down at once. But if there are hundreds of types of 'doors' you're going to want to consider asynchronous calls to load them, with possible paging/sorting.
I'm confused about your last part about instantiating/posting all objects that form the whole furniture. This shouldn't be necessary. I imagine the user would do his construction on his client, and then pass up a single widget object to the server for validation.
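For illustration, a hedged sketch of the server side (WardrobeDesign and its members are placeholders for your FurnitureItem composition, not real APIs): the client posts the whole design as JSON, MVC model binding rebuilds the object graph, and your C# classes do the measuring and constraint checking.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

// Placeholder for your composed FurnitureItem graph; child items omitted for brevity.
public class WardrobeDesign
{
    public double Width { get; set; }
    public double Height { get; set; }
    public double Length { get; set; }

    public void RecalculateDimensions() { /* your existing server-side C# calculation */ }

    public IList<string> CheckConstraints()
    {
        // e.g. "sliding door too wide for this carcass"
        return new List<string>();
    }
}

public class FurnitureDesignController : Controller
{
    [HttpPost]
    public ActionResult Validate(WardrobeDesign design)
    {
        design.RecalculateDimensions();          // server stays the authority on measurements
        var errors = design.CheckConstraints();

        if (errors.Any())
            return Json(new { valid = false, errors });

        return Json(new
        {
            valid = true,
            width = design.Width,
            height = design.Height,
            length = design.Length
        });
    }
}
```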
That's what I'm thinking anyway... by the way, hello StackOverflow, this is my first post.
You might want to take a look at Backbone.js for this kind of project. It allows you to create client-side models, collections, views and controllers that would be well suited to your problem domain. It includes built in Ajax code for loading/saving those models/collections to/from the server.
As far as storing objects before the complete object is sent to the server, you could utilize localStorage, and store your object data as a JSON string.
I created a User Control derived from a Tab Page, with certain controls such as a ListView, Buttons and TextBoxes in it, so I can add Tab Pages to a Tab Control dynamically at run time.
How do I handle the events from such controls within each User Page (multiple instances of these user control tabs) in my main form where my Tab Control is going to be located? Initially I want to be able to communicate the data in some of these controls inside each User Page back to the main form.
This isn't just in regard to tab pages, although we use a tab page derived in a similar fashion. Sending events willy-nilly through the UI causes me much confusion (keeping track of the order in which the various events fire, and so on). Instead, I create a controller class which is passed to the various UI components; those components notify the controller of changes, and the UI elements subscribe to events on the controller to receive information about the environment.
To make this concrete, each of the derived Tab Pages is passed a reference to the controller. They may change the state of the controller based on user actions. Say the current record changes: the UI calls a method on the controller telling it about the new record.
The other UI elements on the page are notified of this change because they subscribe to the controller's OnCurrentRecordChange event.
While this introduces another class into the mix, the advantage is that you have a single controller orchestrating the changes to the UI, rather than a bunch of events percolating and passing information around. I find that it also breaks dependencies between collaborating UI elements: I can add, remove or change UI elements, and as long as they all speak to the controller for updates there is far less code rework.
If you have ever found yourself debugging UI "loops" (where changes in one control are triggering changes in other controls which trigger yet more changes which eventually affect the original component) then the extra work of a controller class will pay off immediately.
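To sketch what that looks like in code (Record, RecordController and OrdersTabPage are made-up names, not from your project): the controller exposes the event, and each derived tab page both calls into it and subscribes to it.

```csharp
using System;
using System.Windows.Forms;

public class Record
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class RecordChangedEventArgs : EventArgs
{
    public Record Record { get; private set; }
    public RecordChangedEventArgs(Record record) { Record = record; }
}

// Single source of truth shared by the main form and every tab page.
public class RecordController
{
    public event EventHandler<RecordChangedEventArgs> CurrentRecordChanged;

    public Record CurrentRecord { get; private set; }

    // Any tab page (or the main form) calls this when the user picks a new record.
    public void ChangeCurrentRecord(Record newRecord)
    {
        CurrentRecord = newRecord;
        var handler = CurrentRecordChanged;
        if (handler != null)
            handler(this, new RecordChangedEventArgs(newRecord));
    }
}

// Each dynamically created tab page gets the controller in its constructor.
public class OrdersTabPage : TabPage
{
    private readonly RecordController _controller;

    public OrdersTabPage(RecordController controller)
    {
        Text = "Orders";
        _controller = controller;
        _controller.CurrentRecordChanged += OnCurrentRecordChanged;
    }

    private void OnCurrentRecordChanged(object sender, RecordChangedEventArgs e)
    {
        // Refresh this page's ListView/TextBoxes from e.Record here.
    }

    // Example: a button click on this page tells the controller, not the other pages.
    private void selectButton_Click(object sender, EventArgs e)
    {
        _controller.ChangeCurrentRecord(new Record { Id = 42, Title = "Example" });
    }
}
```

The main form creates one RecordController, passes it to every tab page it adds, and subscribes to the same events to pull data back out of the pages.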
Update: Answering your comment... A first stop would be to read up on the Model View Controller architecture: http://en.wikipedia.org/wiki/Model%E2%80%93View%E2%80%93Controller
For a concrete example: http://www.dotnetheaven.com/Uploadfile/rmcochran/MVC_intro02012006001723AM/MVC_intro.aspx
When working with Windows Forms many people get stuck in a two tier design: UI + Data Layer. The binding system makes this very natural as there is an easy way to get at data (Entity Framework, LINQ) and an easy way to wire data to the UI (the designers). Adding a controller between them isn't as hard as it may seem though.
In my work I use LLBLGen (http://www.llblgen.com/defaultgeneric.aspx) for my low level data layer, but you could substitute LINQ or Entity Framework or any other data access tool and the general overview would be the same.
Above this layer I build my business objects. Many of them are nothing more than facades for the LLBLGen objects (if my business rules don't say much about the entity), while others have a lot of validation built in and they aggregate several low level objects into more usable business objects. Finally there are the business objects that don't directly have entities behind them (communications objects between systems, for example).
The controller object I mention lives alongside my business objects in that it knows about these objects and even hands them out to the UI for data-binding purposes. When the UI wants a change made, it notifies the controller, which uses the business objects to ensure the updates are permitted and, if so, passes the changes back down to the data layer.
In the diagram on Wikipedia, the View is my UI. The Controller is my coordination object which is mediating changes in both directions while the Model is my business object layer (which has a low level below this, but that is an implementation detail that is hidden from the higher layers).
Although going from "View <-> Model" (classic data binding) to "View <-> Controller <-> Model" seems to be adding complexity, the major benefit is that the controller becomes the one stop shopping location for "truth" about the application. The UI requests data from the controller, so the controller knows about all the UI elements that have a given data binding. If things change, an event notifies all the UI elements and they change visually for the user. The nice thing is that there is one "truth" about system state, and that is the truth the controller is managing for the UI.
When data needs to be persisted, the request goes to the controller to update the model. Again, we have a single place for the coordination of the save and the subsequent updates. All of the data-validation and integrity rules are in the business-logic layer (the Model), so the controller's code is kept light.
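A small, hedged sketch of that persistence path (the types are made up; the LLBLGen/EF specifics hide behind the repository interface): the UI only ever talks to the controller, the business object enforces the rules, and the data layer stays invisible to the View.

```csharp
using System;
using System.Collections.Generic;

public interface ICustomerRepository { void Save(Customer customer); }

// Business object: the rules live here, not in the controller or the UI.
public class Customer
{
    public string Name { get; set; }

    public IList<string> Validate()
    {
        var errors = new List<string>();
        if (string.IsNullOrEmpty(Name))
            errors.Add("Name is required.");
        return errors;
    }
}

public class AppController
{
    private readonly ICustomerRepository _repository;

    public AppController(ICustomerRepository repository)
    {
        _repository = repository;
    }

    // Called by the UI; the controller coordinates, the Model decides, the data layer persists.
    public void SaveCustomer(Customer customer)
    {
        var errors = customer.Validate();
        if (errors.Count > 0)
            throw new InvalidOperationException(string.Join("; ", errors));

        _repository.Save(customer);
        // ...then raise a controller event so every bound UI element refreshes.
    }
}
```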
By separating your UI concerns, your coordination concerns and your business-logic concerns, you end up with each having very "lightweight" methods and properties. More importantly, each subsystem can take a very simplistic view of the application, as it focuses on its piece of the puzzle instead of threading events, UI updates and data updates together in one monolithic piece of code.
For further reading I would recommend any of the articles or books about ASP.NET MVC. While this is not Winforms, the basic ideas underlying the MVC can be applied to winforms. There are some good comments here as well: https://stackoverflow.com/questions/2406/looking-for-a-mvc-sample-for-winforms
I should be clear that what I am doing with my projects is probably not considered "pure" MVC, but simply a separation of concerns along the natural fault-lines found in the applications we develop. We don't use a "framework" for this (although some code generation is involved).
Are non-data classes (ones that don't represent anything in the database) still considered part of an application's domain model or not? Would you put them together with your Linq2Sql domain model or somewhere else?
Edit: Info about classes: For example, I have a "StatusMessage" class which is instantiated under certain circumstances, and may be discarded, or displayed to the user. It has nothing to do with data from the database (neither retrieval nor storage). Another example is an "Invitation" class. Users on my site can 'invite' each other, and if they do, an Invitation class is created which will encrypt some information and then output a link that the user can give to someone else. I have >25 of those classes. They are not for data transfer, they do real work, but they are not related to the database, and I wouldn't say they are all 'helpers'?! ....
The domain model is data pertinent to the domain. It can come from any source, and it could even be one-way (e.g. calculated and persisted only, and never read back). The database is just one persistence strategy for domain data.
So yes, data from different places can be part of the domain model.
Personally, I would consider a message to be more of a view-model entity, whereas the state indicating that a particular message is required could live in the domain model. In the case of the invite, I would say the message flows through to a service and thus becomes domain data; it is ultimately passed on and, I suppose, becomes domain data pertinent to the other user (and is, say, displayed using some other view model).
It depends.
If these classes represent a combination of data coming from different tables, process data, take decisions and orchestrate operations, I would consider them business level entities and keep them in the business layer.
If these are some kind of helpers, then it will depend.
ADDED: After having read your extra info on those classes, I think many of them would deserve a rightful place in your business logic. You may wish to draw a line between the domain model and the business logic. I suppose you consider your domain model to contain only database-mapping classes, and that's fine. But then there are also business rules: worker classes that accept user input, process it, take decisions and invoke the necessary operations to enact them. These could include CRUDing something to the database, sending email notifications, initiating scheduler tasks, notifying other services, etc. For many actions the result will only be distantly reflected in the database: some values may change, but it's not as if a complete business-object state goes straight to the database. Therefore, it would make sense to keep these classes together in a dedicated layer.
Another option would be to put the logic of those classes into stored procedures, thus keeping it in the database. Whatever doesn't fit there may be grouped together as helpers.
With "StatusMessage", it may not be necessary to have a separate class at all. Messages belong to the view level. A class could decide which message to show, but the real display work would take place closer to the UI.