I'm finding myself with a bit of an architectural problem: I'm working on a smallish project that, among other things, involves data entry and persistence, with a DAL using a web service with a custom JSON protocol. So far, so good, and it would be a relatively simple matter to slap together some quick-and-dirty DataTable + DataGrid code and be done with it.
This is a learning project, however, and I'm trying to figure out how to do a somewhat cleaner design, specifically MVVM with a WPF GUI, using the Caliburn.Micro framework. The server part is fixed, but I'm doing the entire client part, including the DAL.
With the DG+DT combo, it's pretty easy to do a bunch of edits in the DG and, when the user commits, simply iterate the Rows, checking the RowState property and firing create/update/delete DAL methods as necessary. DataTable doesn't seem very MVVM databinding-friendly, though, and the ViewModels involved shouldn't care what kind of UI control they're being used with. Given that persistence is done through a web service, requiring a batch commit of modifications seems reasonable enough.
So I'm pondering what my design options are.
As I understand it, the DAL should deal with model-layer objects (I don't think it's necessary to introduce DTOs for this project), and these will be wrapped in ViewModels before being databound in editor ViewModels.
The best idea I've been able to come up with so far is to clone the items-to-be-edited collection when firing up an editor ViewModel, then on commit check the databound collection against the copy - that lets me detect new/modified/deleted objects, but it seems somewhat tedious.
I've also toyed with the idea of keeping IsModified and IsNewlyCreated properties (I guess those would go in the ViewModel?). Keeping track of deleted items could probably be handled by keeping the editable items in an ObservableCollection, handling the CollectionChanged event, and adding deleted items to a separate list - roughly like the sketch below.
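Something like this is what I have in mind (ItemViewModel is just a placeholder for whatever my item ViewModels end up being):

    using System.Collections.Generic;
    using System.Collections.ObjectModel;
    using System.Collections.Specialized;

    public class EditorViewModel
    {
        // Items currently shown in the editor (ItemViewModel is a placeholder type).
        public ObservableCollection<ItemViewModel> Items { get; private set; }

        // Items removed from the collection, remembered until commit.
        private readonly List<ItemViewModel> _deleted = new List<ItemViewModel>();

        public EditorViewModel()
        {
            Items = new ObservableCollection<ItemViewModel>();
            Items.CollectionChanged += OnItemsChanged;
        }

        private void OnItemsChanged(object sender, NotifyCollectionChangedEventArgs e)
        {
            if (e.OldItems == null) return;
            foreach (ItemViewModel removed in e.OldItems)
            {
                // Only items that already exist server-side need a delete call on commit.
                if (!removed.IsNewlyCreated)
                    _deleted.Add(removed);
            }
        }
    }

    public class ItemViewModel
    {
        public bool IsNewlyCreated { get; set; }
        public bool IsModified { get; set; }
    }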
As you can see, I'm pretty unsure how to handle this, and any suggestions would be greatly appreciated :)
First of all
1- Don't make any changes until you reach a point where you can't live without them.
2- As you already said, you are creating a learning project and you want a more modular application, so my thoughts would revolve around how to make the application more modular first, before going deep into implementation details.
3- Have you considered using the Prism + MVVM framework?
4- I would still suggest that in your ViewModel you can use a DataTable to bind the data to the grids; the DataTable.GetChanges() method will give you all the changes in the table, so you never need to maintain boolean variables like IsNew or IsModified (see the sketch below).
5- If you are not convinced about using DataTable yet, then use ObservableCollection to bind data to the grid. ObservableCollection does not notify on individual item changes; it only notifies when items are added or removed.
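For point 4, a minimal sketch of committing grid edits via GetChanges() - IDal and its Create/Update/Delete methods stand in for whatever your web-service DAL actually exposes:

    using System.Data;

    public interface IDal
    {
        void Create(DataRow row);
        void Update(DataRow row);
        void Delete(DataRow row);
    }

    public static class GridCommitHelper
    {
        public static void Commit(DataTable table, IDal dal)
        {
            DataTable changes = table.GetChanges();
            if (changes == null) return;                   // nothing was added, modified or deleted

            foreach (DataRow row in changes.Rows)
            {
                switch (row.RowState)
                {
                    case DataRowState.Added:    dal.Create(row); break;
                    case DataRowState.Modified: dal.Update(row); break;
                    case DataRowState.Deleted:  dal.Delete(row); break;
                }
            }

            table.AcceptChanges();                         // mark everything as persisted
        }
    }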
Related
I'm quite confused about the architecture of my MVVM application (formerly WinRT, now targeting UWP) when it comes to data access. I'm unsure how to propagate changes across the UI and where to put access to the data layer.
Here's the basic architecture:
Model layer: contains models that only have auto properties (no navigation properties that reference other models, just Ids; so they are basically just representations of the database). They don't implement INotifyPropertyChanged.
Data access layer: a repository that uses sqlite-net to store models in a database. It exposes the basic CRUD operations. It returns and accepts models from the model layer.
ViewModels:
ViewModels for the Models: They wrap around the models and expose properties. Sometimes I two-way bind content of controls (e.g. TextBoxes) to properties. The setters then access the data layer to persist this change.
PageViewModels for Views: They contain the ViewModels from above and Commands. Many Commands have become very long, as they do the data access, perform domain-specific logic, and update the PageViewModel's properties.
Views (Pages): They bind to the PageViewModels and through DataTemplate to the ViewModels for the models. Sometimes there is two-way databinding, sometimes I use Commands.
I now have several problems with this architecture:
Problem 1: One model can be represented on the screen in several places. For example, a master-detail view that displays a list of all available entities of a type. The user can select one of them and its content is displayed in the detail view. If the user now changes a property (e.g. the model's name) in the detail view, the change should be immediately reflected in the master list. What is the best way of doing this?
Have one ViewModel for the model? I don't think this makes much sense, as the master list needs only very little logic, and the detail view much more.
Let the model implement INotifyPropertyChanged and thus propagate the change to the ViewModels? The problem I have with this, is that the data layer currently doesn't guarantee that the objects it returns for two read operations on one model id are identical - they just contain the data read from the database and are newly created when they are read (I think that's the way sqlite-net works). I'm also not really sure how to avoid memory leaks happening because of all the PropertyChanged event subscriptions from the ViewModels. Should I implement IDisposable and let the PageViewModel call its children's Dispose() method?
I currently have a DataChanged event on my data access layer. It is raised whenever a create, update or delete operation occurs. Each ViewModel that can be displayed simultaneously listens to this event, checks whether the changed model is the one it is the ViewModel for, and then updates its own properties. Again I have the problem of the memory leak, and this becomes slow, as too many ViewModels have to check whether the change is really for them.
Another way?
Problem 2: I'm also not sure whether the place I access data is really well chosen. The PageViewModels have become extremely convoluted and basically do everything. And all ViewModels require knowledge of the data layer with my architecture.
I've been thinking of scrapping data access with sqlite-net and using Entity Framework 7 instead. Would this solve the problems above, i.e. does it guarantee object identity for one model when I use the same context? I also think it would simplify the ViewModels, as I would rarely need explicit read operations; that would be handled through navigation properties.
I've also been wondering whether two-way databinding is a good idea at all in an MVVM application, as it requires the property setter to call the data access layer to persist the changes. Is it better to do only one-way binding and persist all changes through commands?
I'd be really happy if someone could comment on my architecture and suggest improvements or point to good articles on MVVM architecture that focus on my problems.
Have one ViewModel for the model? I don't think this makes much sense, as the master list needs only very little logic, and the detail view much more.
The ViewModel is not dependent on the model. The ViewModel uses the model to address the needs of the view. The ViewModel is the single point of contact for the view, so whatever the view needs, the ViewModel has to provide. That can be a single model or multiple models. But you can break a single ViewModel down into multiple sub-ViewModels to make the logic easier. For example, the detail pane can be separated into a user control with its own ViewModel. Your master page will just have the window that hosts this control, and the MasterViewModel will push the responsibilities down to the sub-ViewModel.
Let the model implement INotifyPropertyChanged and thus propagate the change to the ViewModels? The problem I have with this, is that the data layer currently doesn't guarantee that the objects it returns for two read operations on one model id are identical - they just contain the data read from the database and are newly created when they are read (I think that's the way sqlite-net works). I'm also not really sure how to avoid memory leaks happening because of all the PropertyChanged event subscriptions from the ViewModels. Should I implement IDisposable and let the PageViewModel call its children's Dispose() method?
The danger is not in using INotifyPropertyChanged; as you rightly said, it's in the subscribing and unsubscribing. Wherever there is a need to subscribe to any event - not only INotifyPropertyChanged - you need to use IDisposable so the ViewModel can unsubscribe itself and its child ViewModels. I am not clear on the data layer you describe, but if it publishes the property changed event for any modification, I don't see any problem using INotifyPropertyChanged.
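For example, a minimal sketch of that idea (Item, ItemViewModel and the Name property are placeholders, not your actual types):

    using System;
    using System.ComponentModel;

    // Placeholder model that raises PropertyChanged (stands in for your real model).
    public class Item : INotifyPropertyChanged
    {
        private string _name;
        public event PropertyChangedEventHandler PropertyChanged;

        public string Name
        {
            get { return _name; }
            set
            {
                _name = value;
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs("Name"));
            }
        }
    }

    public class ItemViewModel : IDisposable
    {
        private readonly Item _model;

        public ItemViewModel(Item model)
        {
            _model = model;
            _model.PropertyChanged += OnModelPropertyChanged;
        }

        private void OnModelPropertyChanged(object sender, PropertyChangedEventArgs e)
        {
            // Refresh the corresponding ViewModel property / raise its own change notification here.
        }

        public void Dispose()
        {
            // Unsubscribing removes the reference from the model back to the ViewModel,
            // which is what would otherwise keep the ViewModel alive and cause the leak.
            _model.PropertyChanged -= OnModelPropertyChanged;
        }
    }

    // The PageViewModel would call Dispose() on each child ViewModel when it is
    // deactivated, e.g.: foreach (var child in _children) child.Dispose();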
I currently have a DataChanged event on my data access layer. It is called whenever a create, update or delete operation occurs. Each ViewModel that can be displayed simultaneously listens to this event, checks whether the changed model is the one it's the ViewModel for and then updates its own properties. Again I have the problem of the memory leak and this becomes slow, as too many ViewModels have to check whether the change is really for them.
As I said earlier, if you handle the subscribe/unsubscribe properly for all models, you need not worry about the performance of INotifyPropertyChanged. But what might be adding to the problem is the number of calls you make to the database when requesting data. Have you considered using async/await for the data access layer, so it does not block the UI for any update that's happening? Even if the data update is slow, a reactive UI that doesn't get blocked by the data calls is a better option.
So try adding a data access service which is abstracted over the DAL and provides an asynchronous approach to accessing the data, along the lines of the sketch below. Also have a look at the Mediator pattern; that might prove helpful.
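A sketch of what that abstraction could look like (IDataService, Customer and the method names are made up for illustration):

    using System.Collections.Generic;
    using System.Threading.Tasks;

    // Placeholder entity.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Async abstraction over the DAL; ViewModels depend on this interface
    // instead of on sqlite-net (or any other storage) directly.
    public interface IDataService
    {
        Task<IList<Customer>> GetCustomersAsync();
        Task SaveCustomerAsync(Customer customer);
    }

    // In a command handler, awaiting keeps the UI thread free:
    // var customers = await _dataService.GetCustomersAsync();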
I'm also not sure whether the place I access data is really well chosen. The PageViewModels have become extremely convoluted and basically do everything. And all ViewModels require knowledge of the data layer with my architecture.
Two main problems I see:
If you feel the PageViewModel is too big, break it down into sub-ViewModels of manageable size. It's very subjective, so you have to experiment to see which parts can be broken out into their own component/usercontrol with its own ViewModel.
When you say the ViewModels require knowledge of the data layer, I hope you mean they depend on an interface that manages the DAL services and don't have direct access to the class with the CRUD methods. If not, try to add an abstraction layer between the ViewModels and the DAL: the ViewModels talk to the interface, and the implementation handles the CRUD operations (see the sketch below).
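Continuing with the hypothetical IDataService from the previous sketch, the PageViewModel only sees the interface and delegates detail work to a sub-ViewModel (all type names here are placeholders):

    public class MasterPageViewModel
    {
        private readonly IDataService _dataService;

        public DetailViewModel Detail { get; private set; }

        public MasterPageViewModel(IDataService dataService)
        {
            _dataService = dataService;           // injected abstraction, not the concrete DAL
            Detail = new DetailViewModel(dataService);
        }
    }

    public class DetailViewModel
    {
        private readonly IDataService _dataService;

        public DetailViewModel(IDataService dataService)
        {
            _dataService = dataService;
        }
    }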
I've been thinking of scrapping data access with sqlite-net and using Entity Framework 7 instead.
Don't try to replace sqlite-net with EF without hard evidence. You need to measure performance in your app before jumping into such big changes. What if the problem lies in your code rather than the component you are using? First try to fix the issues mentioned above; then you can segregate the DAL behind interfaces and replace it if needed.
I've also been wondering whether having two way databinding is good idea at all in a MVVM application, as it requires the property setter to call the data access layer to persist the changes. Is it better to do only one-way binding and persist all changes through commands?
If you are making a call to the database directly every time you make a change to a field / for every keystroke, then that is a problem. You should instead keep a copy of the data model and persist the changes only when the save button is clicked.
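For example, a minimal sketch of that pattern, reusing the placeholder Customer and IDataService types from the earlier sketch:

    using System.Threading.Tasks;

    public class CustomerEditViewModel
    {
        private readonly Customer _original;
        private readonly IDataService _dataService;

        // Two-way bound to the TextBox; edits live only in the ViewModel until save.
        public string EditedName { get; set; }

        public CustomerEditViewModel(Customer original, IDataService dataService)
        {
            _original = original;
            _dataService = dataService;
            EditedName = original.Name;
        }

        // Called from the Save command; only now does the data layer get touched.
        public async Task SaveAsync()
        {
            _original.Name = EditedName;
            await _dataService.SaveCustomerAsync(_original);
        }
    }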
I'm building my first WPF application using the MVVM pattern over a WCF service. I'm new to these technologies. After a lot of work, and with the help of this community, I managed to create the foundations for my app, from the data and service layers to a full client using the MVVM pattern and WPF. Still, I have some conceptual concerns/doubts about these techniques that maybe someone can help clarify.
MY QUESTIONS
1) As far as I understand, each view-viewmodel pair has no knowledge of the existence of the rest of the views. That means that each view with its viewmodel is isolated. So what happens when in my app I need, for instance, to show a view that creates a new view, and I need to get the result from this child view in the caller view? In this case each view has its viewmodel, so how do I share this information between views/viewmodels?
2) My WCF service exposes POCO objects to the client. So this is essentially a disconnected environment. So what about reports? If I follow MVVM guidelines, I should contact my WCF service from my viewmodel, get the objects and then expose a property that I somehow have to bind to a report object in XAML, right? So the report should not know about my database. Which objects can I use to build my reports that allow me to use POCO objects as a data source?
3) This one I know is a bit controversial in the community. My data and service layers communicate using POCOs generated from the database, which is OK. Now my doubt is: when I communicate with the client, should I use these same objects or build my own custom objects?
4) When I need to save a header-detail object to the database (for instance a purchase order from clients), should I create a custom object which has an instance of the header object and a collection of detail items on the server side, or is this viewmodel work?
5) Can someone give me a practical example of when it is useful to have more than one view per viewmodel? From what I have been doing, I've come to the conclusion that every view depends heavily on its viewmodel.
Any comment will be appreciated. I'm trying to follow good programming practices here.
UPDATE
After the received comments, I'll try to clarify my questions:
About 1) I had suspected that this is one of the key issues with MVVM. Anyway, I'm trying to stay away from external tools, because in the past I had severe issues with them. When you encounter problems with external toolkits, finding an answer is very hard or sometimes impossible. Can't this be resolved with a not-so-complex approach using the basic MVVM support that comes with Visual Studio?
About 2) I'm not using anything yet; I'm thinking in advance. How do you recommend building my reports in an MVVM way? In the past, I did something similar using disconnected Crystal Reports objects: I ran the query on the server (with a recordset), sent the data to the client as XML or something, and on the client transformed the data back into a recordset and set the report's data source to this object. I'm thinking of a similar approach but using POCO classes and MVVM. Any ideas?
About 3) I think this is what I've been doing, but I'm not sure. For instance, when I need to fill a combobox with customers to filter customer orders, I expose my POCO classes directly. I know this is not the most efficient approach, because I transfer all the properties of my objects when I need only 2 or 3 of them, but for simplicity I send the entire object. When I need to show the resulting filtered customer orders in a grid, I use a custom class with only the properties that I want to show in the grid. When you say "I create a DTO", do you mean this? Aren't POCO classes also DTOs?
About 4) When I need to insert or update a master-detail object (a customer purchase order, for instance), it generally involves making changes to at least two or more database objects. So my question is: should I create and expose a complex object in the data layer that contains the individual database object classes, or is it better to expose the base objects and let the viewmodel handle the individual objects and send them one by one to the service layer for update? Hope that makes it clearer.
About 5) I suspect that. I'll keep it in mind.
Thanks!
1) This is an inherent problem with MVVM in WPF. There are two libraries that help to solve this problem. Take a look at Caliburn.Micro, which uses a ViewModel-first approach to solve this problem. The other library is Microsoft's own Prism library; this library takes a View-first approach to solve this issue.
2) How are you generating reports? If you are using something like SSRS, it has its own exposed WCF service for retrieving reports. You can wrap this in a service and consume it in your ViewModels.
3) It depends. How complex are your objects? If you are doing simple operations, the data model is probably fine. However, for more complex operations I tend to create a DTO (data transfer object) that encompasses a unit of work (see the sketch below).
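For instance, a hypothetical DTO that carries one whole unit of work (a purchase order header plus its lines) across the WCF boundary instead of the raw database POCOs:

    using System.Collections.Generic;
    using System.Runtime.Serialization;

    [DataContract]
    public class PurchaseOrderDto
    {
        [DataMember] public int OrderId { get; set; }
        [DataMember] public string CustomerName { get; set; }
        [DataMember] public List<PurchaseOrderLineDto> Lines { get; set; }
    }

    [DataContract]
    public class PurchaseOrderLineDto
    {
        [DataMember] public string Product { get; set; }
        [DataMember] public int Quantity { get; set; }
        [DataMember] public decimal UnitPrice { get; set; }
    }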
4) I'm not sure I understand the question.
5) You should strive to always have one view per viewmodel. If there is a reason to have a separate view, there is probably a good reason to have a separate viewmodel. The problem you are probably having is related to #1, and you want to somehow share data between these views.
Overall, I know your pain, and I have a love/hate relationship with WPF using MVVM for some of the reasons that you have stated. Out of the two frameworks that I list in #1, I have used Caliburn.Micro, and it makes WPF MVVM much more accessible and easy to use. A good blog post to get started is:
http://www.mindscapehq.com/blog/index.php/2012/01/12/caliburn-micro-part-1-getting-started/
If you want you can also take a look at prism:
http://compositewpf.codeplex.com/
There are some other ones out there; these are the two that I have had experience with. Prism is OK; however, I personally do not like its navigation service.
Hopefully this helps!
Is there any way to have a datagrid listen to the database and automatically update the data if the database data is changed? I use a SQL Server database.
I'd like to use LINQ to SQL if possible.
Because #Slaggg asked: there are fairly straightforward ways of doing this, but they're almost certainly going to involve a lot of coding, it'll impact performance pretty significantly, and my strong suspicion is that it'll be more trouble than it's worth.
That said, for a typical n-tier application, at a very high level you'll need:
(1) A way to notify the middle tier when the data changes. You could use custom-coded triggers inside each table that fire off some sort of notification (perhaps using WCF and CLR stored procedures), or you could use a SqlDependency object. The second would probably work better (see the sketch after this list).
(2) A way to notify each client connected to that middle tier. Assuming that you're using WCF, you'll need to use one of the duplex bindings that are available, such as Net.TCP or HttpPollingDuplex (for Silverlight). You'll need to make sure this is configured correctly on both the client and the server. You'll also need to manually keep track of which clients might be interested in the update, so that you can know which ones to update, and you'll need to be able to remove them from that list when they go away or timeout. Tomek, from the MS WCF team, has some pretty good examples on his blog that you might want to investigate.
(3) A mechanism to update the local client's model and/or viewmodel and/or UI once you get the notification from the middle tier that something has changed. This is more complicated than you'd think: it's difficult enough to keep your UI in sync with your data model under normal circumstances, but it gets dramatically more complicated when that data model can be changing under you from the other direction as well.
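To give an idea of item (1), here's a rough SqlDependency sketch. The connection string, table and column names are placeholders, it only covers getting the notification (not pushing it out to clients), and it assumes Service Broker is enabled on the database:

    using System.Data;
    using System.Data.SqlClient;

    public class OrderChangeListener
    {
        private const string ConnectionString =
            "Data Source=.;Initial Catalog=Shop;Integrated Security=True";

        public void Start()
        {
            SqlDependency.Start(ConnectionString);   // once per app domain
            Subscribe();
        }

        private void Subscribe()
        {
            using (var connection = new SqlConnection(ConnectionString))
            // Notification queries need explicit columns and two-part table names.
            using (var command = new SqlCommand("SELECT OrderId, Status FROM dbo.Orders", connection))
            {
                var dependency = new SqlDependency(command);
                dependency.OnChange += OnDependencyChange;

                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    // Executing the command registers the notification; results can be discarded here.
                }
            }
        }

        private void OnDependencyChange(object sender, SqlNotificationEventArgs e)
        {
            // Notifications are one-shot: re-subscribe, then notify the middle tier / clients.
            Subscribe();
        }

        public void Stop()
        {
            SqlDependency.Stop(ConnectionString);
        }
    }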
The idea behind these sorts of notifications is straightforward enough: but getting all the details right is likely to keep you debugging way into the night. And the guy who has to support all this two years from now will curse your name.
Hope this helps.
It depends from where you are updating the database:
- From the same context (in Silverlight: are you adding, deleting, editing on the same page?)
- From a ChildWindow in your Silverlight application
- From an external, non-related tool, outside of your Silverlight application
This seems like it would be a common issue, but I don't know the best way to solve it. I want to be able to send an entity to a view, have changes made to the entity in the view, but then cancel (remove) those changes if the user cancels out of the view. What is the proper way to do this?
Here are two options I have, but I think there should be others that are better:
1) Take an entity, create a clone, send the clone to the view...if changes are accepted, update the original entity with the clone's values
2) Send the entity to the view, if the user cancels, remove the entity from NHibernate's cache and reload it from the database
For (2), the issue for me would be that the old entity could still be referenced throughout my project after it has been removed from the cache.
Edit:
Ok, so the Evict method is the way to go if I am implementing method (2). Thanks, I could not remember the details of that one. However, the issue of view objects referencing my old, evicted entities makes this tough to deal with. I can't just have my views automatically update to a new entity without custom code in each one to rebind when my custom eviction event is raised. And rebinding may not be trivial in certain cases. I need to think on this some more, as I may be overcomplicating it, but at the moment this method seems trickier.
I suspect I am going to be stuck with method (1), which has its own set of problems, but I will wait a bit longer to see if anyone else has some ideas.
Edit 2: Just found this. I think it pretty much covers the answer in detail and comes with a great demo project - Building a Desktop To-Do Application with NHibernate - http://msdn.microsoft.com/en-us/magazine/ee819139.aspx
In addition to this, NHibernate has a Session.Refresh(object entity) method which seems to solve this exact problem. So, when an entity is changed but then cancelled before save, I can just call Session.Refresh to reload it from the database and discard the changes.
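A minimal sketch of that cancel path (Order is just a placeholder entity, and the session parameter is the open ISession that loaded it):

    using NHibernate;

    // Placeholder entity.
    public class Order
    {
        public virtual int Id { get; set; }
        public virtual string Description { get; set; }
    }

    public static class EditCancellation
    {
        // Discards in-memory edits by re-reading the entity's state from the database.
        public static void Cancel(ISession session, Order edited)
        {
            session.Refresh(edited);
        }
    }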
I'll go for option 1 and use what is called a ViewModel instead of your entity.
The ViewModel is a representation of your model for a specific view. In the ViewModel you can mix data from different entities and pre-format values to fit the view. It's an elegant way of passing data to a view, and you can accomplish what you want easily.
Using ViewModels is becoming the preferred way of working in ASP.NET MVC and Silverlight/WPF.
To read more about Viewmodels: http://blogs.msdn.com/dphill/archive/2009/01/31/the-viewmodel-pattern.aspx
The best way to do this is to call the Evict method on the ISession used to load the object. This will remove the object from the session cache, and you can then reload and redisplay it.
Evicting the object from the session makes it detached, so if there are still references to it in the project, they will not be persisted when the session is flushed. How you deal with that depends on your application, but I would suggest raising an event to notify subscribers that they need to reload the object.
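A rough sketch of that approach, reusing the placeholder Order entity from the previous sketch (the EntityReloaded event is hypothetical):

    using System;
    using NHibernate;

    public class EditSessionHelper
    {
        // Raised with the freshly loaded entity so any views still holding the
        // old instance know they should rebind.
        public event Action<Order> EntityReloaded;

        public Order CancelEdit(ISession session, Order edited)
        {
            session.Evict(edited);                        // detach: pending changes won't be flushed
            var fresh = session.Get<Order>(edited.Id);    // reload the current database state

            var handler = EntityReloaded;
            if (handler != null) handler(fresh);
            return fresh;
        }
    }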
I'm a total amateur writing a small app to track changes in folders. I imagine I'll be keeping information about the directories to watch in one DataTable bound to a grid view; when the user clicks a button, the program will create FileSystemWatchers to keep an eye on the directories, and they will send their event messages to another DataTable bound to another grid view. Where in the wide wide world of OOP should I be declaring, initializing, and manipulating the DataTables? The main form, inside Main, in a class, or should I "give up" and use Visual Studio to automagically create a DataSet and stick two tables in it?
Well, horses for courses. For a little utility app you would probably be better off using the VS "visual/RAD" style of programming, e.g. drag and drop tables etc. onto the form, like most of the tutorials show.
Strictly speaking, and for a larger app, a more correct way would be to make a separate assembly (.dll) that handles data access and call the classes within that assembly from the main form. This concept goes under a number of terms, but effectively you want to separate your concerns. In other words, let the UI handle UI interactions, have a separate assembly/project/whatever that handles database interactivity, and another separate assembly/project/whatever that handles business logic, etc.
That last couple of sentences can mean different things to different people, and there is no 100% correct way to do things.
I agree with KiwiBastard: you get quite a bit of benefit from using the VS tools to generate a typed DataSet.
That just generates classes, though. You still have to manage an instance of the DataSet. For a very simple app, where I haven't factored UI and business logic into different classes, I'd do that in the Form. For an app of any complexity, it's part of the business logic class.
Something that will probably save you a lot of trouble: data binding is good, ADO is good, but certain kinds of ADO code (in particular event handlers on the DataTable) do not play well with data binding. If you're using BindingSources (and, really, you should be), it's generally a good idea to suspend binding whenever you're manipulating the DataSet's objects programmatically (like, when you're adding and deleting rows). The cost of suspending and resuming binding is very small, and it eliminates an entire class of problems that are extremely hard to diagnose.
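As an illustration, here's a sketch of batching FileSystemWatcher events into a bound DataTable with binding suspended; the column names and the pendingEvents parameter are assumptions for the example:

    using System.Collections.Generic;
    using System.Data;
    using System.IO;
    using System.Windows.Forms;

    static class EventGridHelper
    {
        // Adds a batch of watcher events to the bound table while binding is
        // suspended, so the grid refreshes once instead of once per row.
        public static void AddEvents(BindingSource bindingSource, DataTable eventTable,
                                     IEnumerable<FileSystemEventArgs> pendingEvents)
        {
            bindingSource.SuspendBinding();
            try
            {
                foreach (FileSystemEventArgs e in pendingEvents)
                {
                    DataRow row = eventTable.NewRow();
                    row["Path"] = e.FullPath;                  // assumes a "Path" column exists
                    row["ChangeType"] = e.ChangeType.ToString();
                    eventTable.Rows.Add(row);
                }
            }
            finally
            {
                bindingSource.ResumeBinding();
            }
        }
    }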