General questions regarding the MVVM pattern with a WCF service - C#

I'm building my first WPF application using the MVVM pattern over a WCF service. I'm new to these technologies. After a lot of work, and with the help of this community, I managed to create the foundation for my app, from the data and service layers to a full client using the MVVM pattern and WPF. Still, I have some conceptual concerns/doubts about these techniques that maybe someone can help clarify.
MY QUESTIONS
1) As far as I understand, each view/viewmodel has no knowledge of the
existence of the rest of the views. That means that each view, with its
viewmodel, is isolated. So what happens when, for instance, my app needs
to show a view that creates a child view and has to get a result back
from that child view in the caller view? Each view has its own
viewmodel, so how do I share this information between
views/viewmodels?
2) My WCF service exposes POCO objects to the client, so this is
essentially a disconnected environment. So what about reports? If I
follow MVVM guidelines, I should contact my WCF service from my
viewmodel, get the objects, and then expose a property that I somehow
bind to a report object in XAML, right? So the report should know
nothing about my database. Which objects can I use to build my
reports that let me use POCO objects as the data source?
3) This one, I know, is a bit controversial in the community. My data and
service layers communicate using POCOs generated from the
database, which is fine. My doubt is: when I communicate with the client,
should I use these same objects or build my own custom objects?
4) When I need to save a header/detail object to the database (for instance
a purchase order from a client), should I create a custom object on the
server side that has an instance of the header object and a collection of
detail items, or is this the viewmodel's job?
5) Can someone give me a practical example of when it is useful to have more than one view per viewmodel? From what I have been doing, I have come to the conclusion that every view depends heavily on its viewmodel.
Any comment will be appreciated. I'm trying to follow good programming
practices here.
UPDATE
After the received comments, I'll try to clarify my questions:
About 1) I had suspected that this is one of the key issues with MVVM. Anyway, I'm trying to stay away from external toolkits, because in the past I had severe issues with them: when you run into problems with an external toolkit, finding an answer is very hard or sometimes impossible. Can't this be resolved with a not-so-complex approach using the basic MVVM support that comes with Visual Studio?
About 2) I'm not using anything yet; I'm thinking ahead. How do you recommend building my reports in an MVVM way? In the past, I did something similar using disconnected Crystal Reports objects: I ran the query on the server (into a recordset), sent the data to the client as XML or similar, and on the client transformed the data back into a recordset and set the report's data source to that object. I'm thinking of a similar approach but using POCO classes and MVVM. Any ideas?
About 3) I think this is what I've been doing, but I'm not sure. For instance, when I need to fill a combobox with customers to filter customer orders, I expose my POCO classes directly. I know this is not the most efficient approach, because I transfer all the properties of my objects when I only need two or three of them, but for simplicity I send the entire object. When I need to show the resulting filtered customer orders in a grid, I use a custom class with only the properties I want to show in the grid. When you say "I create a DTO", do you mean this? Aren't POCO classes also DTOs?
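To illustrate, the combobox case could be trimmed down to something like this (CustomerLookupDto is just an illustrative name for the kind of class I mean):

    // Carries only what the combobox needs (key + display text) instead of the full Customer POCO.
    public class CustomerLookupDto
    {
        public int CustomerId { get; set; }
        public string Name { get; set; }
    }

    // On the service side, project the entity down to that shape, e.g.:
    // return context.Customers
    //               .Select(c => new CustomerLookupDto { CustomerId = c.Id, Name = c.Name })
    //               .ToList();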
About 4) When I need to insert or update a master/detail object (a customer purchase order, for instance), it generally involves making changes to at least two database objects. So my question is: should I create and expose, in the data layer, a complex object that contains the individual database object classes? Or is it better to expose the base objects and let the viewmodel handle the individual objects and send them one by one to the service layer for update? I hope that is clearer.
About 5) I suspected as much. I'll keep it in mind.
Thanks!

On your first question (sharing data between views/viewmodels): this is an inherent problem with MVVM in WPF, and there are two libraries that help to solve it. Take a look at Caliburn.Micro, which uses a ViewModel-first approach, and at Microsoft's own Prism library, which takes a View-first approach to the same issue.
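If you would rather stay away from external toolkits (as mentioned in the update), a very small mediator/messenger can cover the "child view hands a result back to its caller" case with plain MVVM. Everything below (the Messenger class, CustomerSelectedMessage, the two viewmodels) is a made-up sketch, not part of any framework:

    using System;
    using System.Collections.Generic;

    // Minimal mediator: viewmodels subscribe to a message type, others publish it.
    public static class Messenger
    {
        private static readonly Dictionary<Type, List<Action<object>>> Handlers =
            new Dictionary<Type, List<Action<object>>>();

        public static void Subscribe<TMessage>(Action<TMessage> handler)
        {
            List<Action<object>> list;
            if (!Handlers.TryGetValue(typeof(TMessage), out list))
            {
                list = new List<Action<object>>();
                Handlers[typeof(TMessage)] = list;
            }
            list.Add(m => handler((TMessage)m));
        }

        public static void Publish<TMessage>(TMessage message)
        {
            List<Action<object>> list;
            if (Handlers.TryGetValue(typeof(TMessage), out list))
            {
                foreach (var handler in list)
                    handler(message);
            }
        }
    }

    // Hypothetical message carrying the child view's result back to the caller.
    public class CustomerSelectedMessage
    {
        public int CustomerId { get; set; }
    }

    // The caller viewmodel subscribes...
    public class OrderViewModel
    {
        public int SelectedCustomerId { get; private set; }

        public OrderViewModel()
        {
            Messenger.Subscribe<CustomerSelectedMessage>(m => SelectedCustomerId = m.CustomerId);
        }
    }

    // ...and the child viewmodel publishes when the user confirms the selection.
    public class CustomerPickerViewModel
    {
        public void Confirm(int customerId)
        {
            Messenger.Publish(new CustomerSelectedMessage { CustomerId = customerId });
        }
    }

This is essentially what the event aggregators in Caliburn.Micro and Prism do for you, with subscriber lifetime and threading concerns handled properly.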
On reports: how are you generating them? If you are using something like SSRS, it exposes its own WCF service for retrieving reports.
You can wrap that in a service of your own and consume it in your ViewModels.
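As a rough sketch of the "wrap it in a service" idea, with plain POCO rows as the report's data source (IReportService, CustomerOrderReportRow and ReportsViewModel are invented names, not an SSRS or Crystal Reports API):

    using System.Collections.Generic;

    // Flat report row the service returns; the report knows nothing about the database.
    public class CustomerOrderReportRow
    {
        public string CustomerName { get; set; }
        public string OrderNumber { get; set; }
        public decimal Total { get; set; }
    }

    // Thin wrapper around whatever actually produces the report data (WCF call, SSRS, ...).
    public interface IReportService
    {
        IList<CustomerOrderReportRow> GetCustomerOrders(int customerId);
    }

    public class ReportsViewModel
    {
        private readonly IReportService _reports;

        public ReportsViewModel(IReportService reports)
        {
            _reports = reports;
            Rows = new List<CustomerOrderReportRow>();
        }

        // Bound in XAML as the report control's (or grid's) data source.
        public IList<CustomerOrderReportRow> Rows { get; private set; }

        public void Load(int customerId)
        {
            Rows = _reports.GetCustomerOrders(customerId);
            // Raise PropertyChanged("Rows") here if the viewmodel implements INotifyPropertyChanged.
        }
    }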
On exposing the same objects to the client: it depends on how complex your objects are. If you are doing simple operations, the data model is probably fine. For more complex operations, however, I tend to create a DTO (data transfer object) that encompasses a unit of work.
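For instance, a DTO encompassing one unit of work (here, a purchase order and its lines; PurchaseOrderDto, PurchaseOrderLineDto and IOrderService are placeholder names) might look roughly like this:

    using System;
    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // One DTO that carries the whole unit of work: the header plus all detail lines.
    [DataContract]
    public class PurchaseOrderDto
    {
        [DataMember] public int OrderId { get; set; }
        [DataMember] public int CustomerId { get; set; }
        [DataMember] public DateTime OrderDate { get; set; }
        [DataMember] public List<PurchaseOrderLineDto> Lines { get; set; }
    }

    [DataContract]
    public class PurchaseOrderLineDto
    {
        [DataMember] public int ProductId { get; set; }
        [DataMember] public int Quantity { get; set; }
        [DataMember] public decimal UnitPrice { get; set; }
    }

    [ServiceContract]
    public interface IOrderService
    {
        // The service persists the header and its lines in a single transaction server-side.
        [OperationContract]
        int SavePurchaseOrder(PurchaseOrderDto order);
    }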
On the header/detail question, I'm not sure I understand what you are asking.
On views per viewmodel: you should strive to always have one view per viewmodel. If there is a reason to have a separate view, there is probably a good reason to have a separate viewmodel as well. The problem you are probably having is related to your first question: you want to somehow share data between these views.
Overall, I know your pain, and I have a love/hate relationship with WPF and MVVM for some of the reasons you have stated. Of the two frameworks mentioned above, I have used Caliburn.Micro, and it makes WPF MVVM much more accessible and easy to use. A good blog post to get started is:
http://www.mindscapehq.com/blog/index.php/2012/01/12/caliburn-micro-part-1-getting-started/
If you want, you can also take a look at Prism:
http://compositewpf.codeplex.com/
There are some other frameworks out there; these are the two I have experience with. Prism is OK, but I personally do not like its navigation service.
Hopefully this helps!

Related

NHibernate DTO with deep object graph

I am writing a smart-client WPF application, using MVVM, that communicates with a WCF service layer containing the business logic and domain objects; the service uses NHibernate to manage persistence. We are in control of both sides of the wire.
Currently, I am working on a screen to edit product details. It has a tab control, with each tab representing some aspect of the product, such as Main Details, Product Class, Container Type and so on. In the end there will probably be at least five of these tabs.
Up to now I have been transforming simple domain objects into DTOs using SetResultTransformer, and this has been working quite nicely.
Now that I am getting to a more complicated object, I am a bit stuck. I would like to return a DTO to be displayed that contains the main product details, categories and classes. As far as categories and classes are concerned, I would not want to return every single property of the domain object.
Questions:
1) How do people go about creating a DTO when there are several one-to-many collections to return, as in this example?
2) Are there any concerns about the DTO becoming too large?
3) When sending the DTO back to the back end, is it better to send the same type of DTO with the updated values, or some other, more command-oriented DTO?
Thanks for any help
Alex
We are currently using pretty big DTOs and it is working fine. NHibernate does a lot of lazy loading, which helps with big objects.
We use bags for one-to-many relations; they are lazy-loaded and work pretty well.
Depending on the type of application, lazy loading can be a bit of a problem. We had some issues in our rich-client application with big DTOs, but with some planning and a sound architecture it works pretty well.
I don't know whether large DTOs are really a problem with NHibernate, but so far we haven't had any.
We send the whole object back and forth and it works well. NHibernate updates just the changed fields, which is really nice.
I wouldn't serialize the NHibernate objects over web services or anything like that (I don't know your WCF service layer or how it communicates with your application). When I transfer data through web services, I generate new data objects, fill them accordingly, transfer them back and forth, and update the NHibernate objects from those.
Have you tried AutoMapper? I do all my DTO mappings with AutoMapper and it works like a charm.
Have a look at AutoMapper. I'm sure you'll like it.
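A minimal AutoMapper setup for this kind of entity-to-DTO flattening might look like the following (Product, Category and ProductDto are placeholder types; the static Mapper.CreateMap calls shown are the classic AutoMapper API):

    using AutoMapper;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public Category Category { get; set; }
    }

    public class Category
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Flattened DTO: only the fields the edit screen actually needs.
    public class ProductDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string CategoryName { get; set; }
    }

    public static class MappingConfig
    {
        public static void Configure()
        {
            // AutoMapper flattens Category.Name into CategoryName by convention;
            // the explicit ForMember is only here to make the mapping visible.
            Mapper.CreateMap<Product, ProductDto>()
                  .ForMember(d => d.CategoryName, opt => opt.MapFrom(s => s.Category.Name));
        }
    }

    // Usage:
    // MappingConfig.Configure();
    // ProductDto dto = Mapper.Map<Product, ProductDto>(product);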

Architecture help: WPF/MVVM data-entry frontend for a custom JSON webservice

I'm finding myself with a bit of an architectural problem: I'm working on a smallish project that, among other things, involves data entry and persistence, with a DAL using a webservice with a custom JSON protocol. So far, so good, and it would be a relatively simple matter to slap together some quick-and-dirty DataTable + DataGrid code and be done with it.
This is a learning project, however, and I'm trying to figure out a somewhat cleaner design, specifically MVVM with a WPF GUI, using the Caliburn.Micro framework. The server part is fixed, but I'm doing the entire client part, including the DAL.
With the DataGrid + DataTable combo, it's pretty easy to do a bunch of edits in the grid and, when the user commits, simply iterate the rows, check the RowState property, and fire create/update/delete DAL methods as necessary. DataTable doesn't seem very databinding-friendly for MVVM, though, and the ViewModels involved shouldn't care what kind of UI control they're being used with. Given that persistence is done through a webservice, requiring a batch commit of modifications seems reasonable enough, though.
So I'm pondering what my design options are.
As I understand it, the DAL should deal with model-layer objects (I don't think it's necessary to introduce DTOs for this project), and these will be wrapped in ViewModels before being databound in editor ViewModels.
The best idea I've been able to come up with so far is to clone the items-to-be-edited collection when firing up an editor ViewModel, then on commit check the databound collection against the copy: that lets me detect new/modified/deleted objects, but it seems somewhat tedious.
I've also toyed with the idea of keeping IsModified and IsNewlyCreated properties (I guess those would go in the ViewModel?). Keeping track of deleted items could probably be handled by keeping the editable items in an ObservableCollection, handling the CollectionChanged event, and adding deleted items to a separate list.
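A rough sketch of what I have in mind for the flag/deleted-list idea (ItemViewModel and EditorViewModel are made-up names; property change notification is omitted for brevity):

    using System.Collections.Generic;
    using System.Collections.ObjectModel;
    using System.Collections.Specialized;

    // Item wrapper exposing the dirty/new flags; real property setters would flip IsModified.
    public class ItemViewModel
    {
        public bool IsNewlyCreated { get; set; }
        public bool IsModified { get; set; }
    }

    public class EditorViewModel
    {
        private readonly List<ItemViewModel> _deleted = new List<ItemViewModel>();

        public ObservableCollection<ItemViewModel> Items { get; private set; }

        public EditorViewModel(IEnumerable<ItemViewModel> items)
        {
            Items = new ObservableCollection<ItemViewModel>(items);
            Items.CollectionChanged += OnItemsChanged;
        }

        private void OnItemsChanged(object sender, NotifyCollectionChangedEventArgs e)
        {
            if (e.NewItems != null)
            {
                foreach (ItemViewModel added in e.NewItems)
                    added.IsNewlyCreated = true;       // added in the editor -> create on commit
            }

            if (e.OldItems != null)
            {
                foreach (ItemViewModel removed in e.OldItems)
                    if (!removed.IsNewlyCreated)
                        _deleted.Add(removed);         // existing item removed -> delete on commit
            }
        }

        public void Commit()
        {
            // new items          -> DAL create
            // IsModified items   -> DAL update
            // items in _deleted  -> DAL delete (then clear the flags and the list)
        }
    }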
As you can see, I'm pretty unsure how to handle this, and any suggestions would be greatly appreciated :)
First of all:
1- Don't make any changes until you reach a point where you can't live without them.
2- As you already said, this is a learning project and you want a more modular application, so my thoughts would revolve first around how to make the application more modular, before going deep into implementation details.
3- Have you considered using Prism + an MVVM framework?
4- I would still suggest that in your ViewModel you can use a DataTable to bind the data to the grids; the DataTable.GetChanges() method will give you all the changes in the table, so you never need to maintain boolean flags like IsNew or IsModified (a sketch follows below).
5- If you are still not convinced about using a DataTable, then use an ObservableCollection to bind data to the grid. Note that ObservableCollection does not notify when an individual item changes; it only notifies when items are added or removed.
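A sketch of point 4, assuming the grid is bound to a DataTable and the DAL exposes create/update/delete calls (IOrderDal and its methods are hypothetical):

    using System.Data;

    public class CommitHandler
    {
        private readonly IOrderDal _dal; // hypothetical DAL interface

        public CommitHandler(IOrderDal dal) { _dal = dal; }

        public void Commit(DataTable table)
        {
            // GetChanges() returns a copy containing only rows changed since the last AcceptChanges().
            DataTable changes = table.GetChanges();
            if (changes == null)
                return; // nothing to persist

            foreach (DataRow row in changes.Rows)
            {
                switch (row.RowState)
                {
                    case DataRowState.Added:
                        _dal.Create(row);
                        break;
                    case DataRowState.Modified:
                        _dal.Update(row);
                        break;
                    case DataRowState.Deleted:
                        // Deleted rows only expose their original values (DataRowVersion.Original).
                        _dal.Delete(row);
                        break;
                }
            }

            table.AcceptChanges(); // mark everything as persisted
        }
    }

    public interface IOrderDal
    {
        void Create(DataRow row);
        void Update(DataRow row);
        void Delete(DataRow row);
    }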

Exposing EF Model to a variety of clients

Hey guys, I hope everyone is doing well.
I have a (more or less) broad question about exposing a model to various clients.
Here is my situation: I have a model (sitting on top of Oracle) created using EF 4.0 and a 3rd-party Oracle provider. The model resides in a library so it can easily be referenced by multiple projects.
My goal is to make the model consumable by as many types of clients as possible:
.NET client code (Silverlight, WPF, ASP.NET, services, etc.)
MS Office apps (Excel)
Now, I don't want to get into the business of creating custom methods over the model (e.g. GetCustomersWhoAreVeryUpsetOrderedByUpsetRank()). I'd like to expose the model in such a way that the client code can decide (at run time) how to construct the queries. Should I take in an IQueryable, execute it in a service and return the resulting data set? Or do I let the client do all the work via the model?
I did give OData a shot, but it appears that the client-side library used to write LINQ queries against the model is rather limiting. In addition, the protocol does not support updates.
So my question is: what is the best approach/technology/implementation for exposing the model, given the criteria above?
Many thanks in advance.
I'd advise you not to share your model 1:1 with your clients or reuse it 1:1 for different clients.
To share with stakeholders, use some simple DTOs. The mapping code can be generated automatically with a CASE tool, a T4 transformation or any other code generation. If you share your own model, you run into problems as soon as you have to (or want to) refactor something, or as soon as one client has some specific requirements.
Much the same applies to the query methods on EF (the DAL). Define data-mapper interfaces for the common requirements and implement a default behavior. If you ever do need your GetCustomersWhoAreVeryUpsetOrderedByUpsetRank(), you are still fine, since you can add this query to a data mapper deriving from the default mapper. With this approach the core system stays clean and reusable, and each client can still get his or her custom features.
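A sketch of that data-mapper idea (all names here are invented; the query source stands in for the EF model from the question):

    using System.Collections.Generic;
    using System.Linq;

    // Entity as generated by the EF model (trimmed down here).
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public int UpsetRank { get; set; }
    }

    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Common requirements every client gets.
    public interface ICustomerMapper
    {
        IList<CustomerDto> GetAll();
    }

    // Default behavior, shared by all clients; the query source would be
    // the ObjectSet<Customer> from the EF model library.
    public class DefaultCustomerMapper : ICustomerMapper
    {
        protected readonly IQueryable<Customer> Customers;

        public DefaultCustomerMapper(IQueryable<Customer> customers)
        {
            Customers = customers;
        }

        public virtual IList<CustomerDto> GetAll()
        {
            return Customers
                .Select(c => new CustomerDto { Id = c.Id, Name = c.Name })
                .ToList();
        }
    }

    // One client's special requirement lives in a derived mapper; the core stays clean.
    public class SupportCustomerMapper : DefaultCustomerMapper
    {
        public SupportCustomerMapper(IQueryable<Customer> customers) : base(customers) { }

        public IList<CustomerDto> GetCustomersWhoAreVeryUpsetOrderedByUpsetRank()
        {
            return Customers
                .Where(c => c.UpsetRank > 5)
                .OrderByDescending(c => c.UpsetRank)
                .Select(c => new CustomerDto { Id = c.Id, Name = c.Name })
                .ToList();
        }
    }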

Proxy object references in MVC code

I am just figuring out best practice with MVC, now that I have a project where we have chosen to use it in anger.
My question is:
If I create a list view bound to an IEnumerable, is this bad practice?
Would it be better to separate the code generated by the WCF service reference into a data structure which essentially holds the same data but abstracts further from the service, so that the UI is totally unaware of the service implementation beneath?
Or do people just bind to the proxy object types and have done with it?
My personal feeling is to create an abstraction by building a model, placing the collection in it, and referring to the collection in the UI code through the model.
But this seems to violate the DRY principle with respect to the proxies.
Well, the best practice is to use a view model which is populated from the model. In many cases they could be the same, because the view shows all the properties returned by the service, but another view might show only a subset of them; that's why having a view model is considered good practice. The view model can also contain calculated properties that are specific to the view. To further simplify the mapping between those objects you could use AutoMapper. There is also a nice article worth a look that explains the concept of view models.
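For example (hypothetical types), the proxy generated by the service reference versus the view model a list view actually binds to:

    // Type generated by the WCF service reference (shown here for context).
    public class CustomerProxy
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public decimal CreditLimit { get; set; }
        public decimal CreditUsed { get; set; }
    }

    // View model: only the subset the view needs, plus a calculated property.
    public class CustomerListItemViewModel
    {
        public string FullName { get; set; }
        public decimal CreditAvailable { get; set; }

        public static CustomerListItemViewModel FromProxy(CustomerProxy c)
        {
            return new CustomerListItemViewModel
            {
                FullName = c.FirstName + " " + c.LastName,
                CreditAvailable = c.CreditLimit - c.CreditUsed
            };
        }
    }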

DTOs vs Serializing Persisted Entities

I'm curious to know how the community feels on this subject. I've recently run into this question in an NHibernate/WCF scenario (entities persisted at the service layer) and realized I may be going in the wrong direction.
My question is, plainly: when using a persistent object graph (NHibernate, LINQ to SQL, etc.) behind a web service (WCF in this scenario), do you prefer to send those entities over the wire? Or would you create a set of lighter DTOs (sans cyclic references) to send across?
DTOs. Use AutoMapper for the object-to-object mapping.
I've been in this scenario multiple times and can speak from experience on both sides. Originally I just serialized my entities and sent them as-is. This worked fine from a functional standpoint, but the more I looked into it, the more I realized I was sending more data than I needed to and losing the ability to vary the implementation on either side. In subsequent service applications I've taken to creating DTOs whose only purpose is to get data to and from the web service.
Outside of any interop concerns, having to think about all the fields that are being sent over the wire is very helpful (to me) for making sure I'm not sending data that isn't needed or, worse, should not get down to the client.
As others have mentioned, AutoMapper is a great tool for entity-to-DTO mapping.
I've almost always created DTOs to transfer over the wire and used richer entities on my server and client. On the client they carry some common presentation logic, while on the server they carry business logic. Mapping between the DTOs and the entities can be dumb, but it needs to happen; tools like AutoMapper help you there.
If you're asking "do I send serialized entities from a web service to the outside world?", then the answer is definitely no; you'll get minimal interoperability if you do that. DTOs help solve this problem by defining a set of 'objects' that can be instantiated in any language, whether you're using C#, Java, JavaScript or anything else.
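To make the "sans cyclic references" point from the question concrete (Customer, Order and OrderDto are made-up types):

    using System.Collections.Generic;

    // Persistent entities: Customer and Order reference each other, which serializers
    // either reject or turn into a very deep (or lazily-loaded) graph.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public IList<Order> Orders { get; set; }
    }

    public class Order
    {
        public int Id { get; set; }
        public Customer Customer { get; set; }  // cycle: Order -> Customer -> Orders -> Order ...
    }

    // Wire DTO: the cycle becomes a plain key plus the fields the client needs,
    // so it can be instantiated in any language with no ORM baggage.
    public class OrderDto
    {
        public int Id { get; set; }
        public int CustomerId { get; set; }
        public string CustomerName { get; set; }
    }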
I've always had problems sending NHibernate objects over the wire, particularly if you're using an ActiveRecord model and/or if your object has ties to the session (yuck). Another nasty result is that NHibernate may try to load the object at the entry of the method (before you can get to it), which can also cause problems.
So... getting the message here? Problems, problems, problems. DTOs all the way.
