When should I update a database to reflect property changes? - c#

I am new to WPF and I am building a small app with Linq To Entities (and SQLite database).
I would just like to know where I should call my methods in order to update the database when a property has changed.
I would say in the property setter in the ViewModel, like this:
public string FirstName
{
    get
    {
        return this.person.FirstName;
    }
    set
    {
        this.person.FirstName = value;
        OnPropertyChanged("FirstName");
        this.person.updateFirstname(value);
    }
}
I am not sure if this is the best solution...

The problem of when to save to the database gives rise to the Unit of Work pattern. Linq-to-Entities has a reasonable implementation of this with the ObjectContext, where data is queued up in the context and then saved to the database when the logical unit of work is complete.
In your example, you are already setting the property on the L2E entity, Person, which is likely connected to the context. When you call ObjectContext.SaveChanges, this will be saved without the need for the updateFirstname method.
The thing you have to decide is when to call ObjectContext.SaveChanges (and thus end the unit of work). Doing this when the user explicitly saves, or when the form is closed (optionally prompting the user to commit or discard changes), is a reasonable approach here. To implement this, your viewmodels reference the ObjectContext and call the SaveChanges method when the user action (usually modeled with a WPF ICommand published by the viewmodel and bound to the view) is executed.
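As a minimal sketch of that idea (MyEntitiesContext stands in for your generated ObjectContext, and the RelayCommand helper is an illustrative implementation, not part of the original post), the viewmodel can publish a save command that ends the unit of work:

```csharp
using System;
using System.Windows.Input;

// Minimal ICommand helper, assumed for this sketch.
public class RelayCommand : ICommand
{
    private readonly Action _execute;
    public RelayCommand(Action execute) => _execute = execute;
    public bool CanExecute(object parameter) => true;
    public void Execute(object parameter) => _execute();
    public event EventHandler CanExecuteChanged;
}

public class PersonViewModel
{
    private readonly MyEntitiesContext _context; // the shared ObjectContext

    public PersonViewModel(MyEntitiesContext context)
    {
        _context = context;
        // Bound to a Save button in the view; executing it ends the unit of work.
        SaveCommand = new RelayCommand(() => _context.SaveChanges());
    }

    public ICommand SaveCommand { get; }
}
```

The view binds a button's Command property to SaveCommand, so the user decides when the queued changes are flushed to the database.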

You should concentrate your updates around units of work rather than around individual fields. If your database is properly normalized, each row represents an entity and should be treated as such: updates to an entity should keep it in a valid state. In your scenario, if you update a person's first name intending to also update the last name, and the app or server blows up in between, your person record will be invalid.
In terms of MVVM, I usually either piggyback on the grid's "update entire row at once" strategy and route that event into the viewmodel, or I just give the user a Save button :)

It is best to inject a service interface into your ViewModel constructor and use some type of service to update the database.
This way you end up with a loosely coupled system, and your ViewModel stays agnostic of your data access layer, as it should...
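A minimal sketch of what that might look like; IPersonService, PersonDto, and the per-property save call are illustrative names for this example, not part of the original post:

```csharp
using System.ComponentModel;

// Service contract the ViewModel depends on; the EF-backed
// implementation lives in the data access layer.
public interface IPersonService
{
    void UpdateFirstName(int personId, string firstName);
}

public class PersonDto
{
    public int Id { get; set; }
    public string FirstName { get; set; }
}

public class PersonViewModel : INotifyPropertyChanged
{
    private readonly IPersonService _personService;
    private readonly PersonDto _person;

    public PersonViewModel(IPersonService personService, PersonDto person)
    {
        _personService = personService;
        _person = person;
    }

    public string FirstName
    {
        get => _person.FirstName;
        set
        {
            _person.FirstName = value;
            OnPropertyChanged(nameof(FirstName));
            // The ViewModel only knows the service contract, never the DAL.
            _personService.UpdateFirstName(_person.Id, value);
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
    private void OnPropertyChanged(string name) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}
```

Because the ViewModel sees only IPersonService, you can swap the EF implementation for a fake in unit tests without touching the ViewModel.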

Related

Stuck at reloading related entities for new change from outside?

The scenario here is that for each screen (view) there is one ViewModel behind it. And as a best (or recommended) practice, we should use one long-lived DbContext per ViewModel.
So there is a requirement to reload the related entities if some change (newly added / deleted entities) is made in another ViewModel.
Here are some solutions to this issue:
Publish some event or send some message to notify about the change, the subscriber ViewModels can:
Add/remove the added/deleted entities accordingly without having to reload them; this amounts to syncing data between ViewModels. It has its own complexity, because the added/removed entities must not have their state tracked (the state should be Unchanged, not Added or Deleted, because these changes have already been saved to the database). Also, proxied entities cannot be attached to multiple DbContexts, ... too many issues here.
Reload all the related entities. This is not naturally supported by EF.
Just reload the whole ViewModel when switching screens (meaning the ViewModel won't be kept alive for the whole lifetime of the application). This may be applicable in some cases, but it's not flexible enough to cover every case (for example, a change may come from outside the application - from another application - and usually we just want a Refresh button on the current view to refresh the data; reloading the whole ViewModel would affect the current view unnecessarily and may cause some bad visual effects, ...)
So I'm really looking for a good solution to this by reloading the related entities. From Googling around, it looks like this is not easily done with Entity Framework; the quickest and safest way is just to create and use a new DbContext, which means creating and using a new ViewModel (note that I'm using dependency injection to inject the DbContext into the ViewModel, so the DbContext's lifetime is the same as the ViewModel's).
I can Google some hacky code to reload entities in Entity Framework, but I don't really like hacky stuff. So if possible, please share your approach and your solutions to this issue, or even persuade me that the hacky stuff is just fine.
we should use one long-lived DbContext for each ViewModel
I wouldn't say this is true.
You can, and probably should, create a new DbContext instance for every load/update operation.
Using different DbContext instances gives you the ability to execute queries asynchronously.
For Windows applications (WinForms, WPF), asynchronous database access brings a huge improvement in loading times while the application remains responsive.
With one DbContext this wouldn't be easy.
Instead of injecting a DbContext, create a DbContext factory and inject it into the viewmodel, then:
using (var context = _contextFactory.Create<MyDbContext>())
{
    var orders = await context.Orders.ToListAsync();
    return orders.Select(order => order.ToOrderDto());
}
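The _contextFactory used above is not defined in the answer; a minimal sketch of such a factory (the generic Create<TContext> shape is an assumption inferred from the snippet) could look like this:

```csharp
using Microsoft.EntityFrameworkCore;

// Hypothetical factory abstraction matching the usage in the snippet above.
public interface IDbContextFactory
{
    TContext Create<TContext>() where TContext : DbContext, new();
}

public class DbContextFactory : IDbContextFactory
{
    // Each call hands out a fresh, short-lived context that the caller disposes.
    public TContext Create<TContext>() where TContext : DbContext, new()
        => new TContext();
}
```

Registering IDbContextFactory as a singleton in the DI container lets every viewmodel create and dispose contexts per operation.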
But what I am afraid of is that your business and view logic rely entirely on the database structure.
Your viewmodel shouldn't depend on DbContext; instead, depend on an abstraction of the database layer (actually, your question is the first wall you hit when relying on DbContext directly).
public interface OrderDataAccess
{
    Task<Order> GetOrder(Guid id);
    Task<IEnumerable<OrderLine>> GetOrderLines(Guid orderId);
}
When you load the whole view, you can load the order and the order lines:
var orderTask = _dataAccess.GetOrder(id);
var orderLinesTask = _dataAccess.GetOrderLines(id);
await Task.WhenAll(orderTask, orderLinesTask);
this.OrderViewModel = orderTask.Result;
this.OrderLinesViewModels = orderLinesTask.Result;
Then when, for example, you need to reload the order lines:
this.OrderLinesViewModels = await _dataAccess.GetOrderLines(id);
Using transient DbContext instances just kicks the can down the road. Your ViewModel has some entity data that might be out-of-date. But it also might have unsaved changes. You simply have to decide how you want to handle that on a ViewModel-by-ViewModel basis.
In a Desktop App the ViewModel is the Unit-of-Work, and is still the proper scope for the DbContext.
If you decide you want to reload a tracked entity, or all the tracked entities for a DbContext instance, it shouldn't be a problem, e.g. something like:
void ReloadAllTrackedEntities()
{
    foreach (var entry in ChangeTracker.Entries())
    {
        entry.Reload();
    }
}
On a side-note, since you're building a desktop app, did you know EF Core supports using INotifyPropertyChanged for change tracking?
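As a sketch of that side-note: EF Core lets a model opt in to notification-based change tracking instead of snapshot comparison, provided the entities implement INotifyPropertyChanged and raise the event from their setters (AppDbContext is an illustrative name):

```csharp
using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Entities must implement INotifyPropertyChanged; EF Core then
        // listens for the events rather than diffing snapshots.
        modelBuilder.HasChangeTrackingStrategy(
            ChangeTrackingStrategy.ChangedNotifications);
    }
}
```

This fits desktop apps well, since WPF viewmodels typically raise PropertyChanged already.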

Can I tell linq 2 SQL not to update certain columns after attachment or use of UpdateModel?

Let's assume I have the following situation: the update method in my service accepts a model (the one that is going to be updated) as an input parameter. The model can be unattached (in which case the Attach method is called before submitting changes) or attached (in which case we just submit changes). Edit actions just call this update method in my service. Now let's assume I cannot change the code in those actions (the code that produces the model to be updated). Can I still somehow prevent certain columns from updating from within the update method? Note that I might want to set those columns using LINQ to SQL, but only during the insert method.
I'm quite sure I'm trying something unconventional here, but it might help me write some easily reusable code. If it cannot be done, then I'll solve it differently, but it never hurts to try something new.
The Attach method does provide an overload that accepts both a modified and an original entity.
Attach Modified Entity on Data Context
When using this, the internal change tracker will figure out which columns have been updated and will only update the ones in the data source that have changed, rather than updating all columns.
Alternatively if you want more explicit control over which properties are updated, you can reattach your entity as unmodified in its original state:
Attach Modified/Unmodified Entity on Data Context
This will hook the internal change tracker up to the PropertyChanging events on the entity so it can be tracked. You would then simply change the values of the properties on that entity in the Update method of your service.
void Update(MyModel model)
{
    using (MyContext ctx = new MyContext())
    {
        ctx.DeferredLoadingEnabled = false;
        ctx.MyEntities.Attach(model.OriginalEntity);
        model.OriginalEntity.Value = model.ModifiedEntity.Value;
        ctx.SubmitChanges();
    }
}
The pitfall of these approaches is that you must maintain both the original and the modified entity in your model, but they could be set when your entities are loaded - a simple shallow copy of the object should do the trick, implementing ICloneable in a partial class for each entity.
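As a sketch of that shallow-copy idea (the entity name MyEntity and its Value property are illustrative), a partial class can expose the clone via MemberwiseClone:

```csharp
using System;

// Illustrative partial class for a generated entity; MemberwiseClone
// performs exactly the shallow copy described above.
public partial class MyEntity : ICloneable
{
    public string Value { get; set; }

    public object Clone() => MemberwiseClone();
}
```

When an entity is loaded, you would store the clone as OriginalEntity and hand the live instance to the UI as ModifiedEntity.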

Domain Model with Nhibernate design issue

I'm trying to get started in the "DDD with C#" world.
I use NHibernate as my ORM tool, and am thus trying to develop a persistence-ignorant (PI) model.
However, in some of my entities (which are represented as POCOs) I have business rules in the setters of my properties.
For example, I have a "User" entity with a flag that indicates whether the user is blocked or not; when this flag is true, a second field called "BlockDate"
must automatically be filled with the current date.
Everything seems very clear and simple, but the problem arises the moment I recover users that have already been persisted in the database:
blocked users will have their "BlockDate" updated to the current date, according to this logic.
Initially I thought of a second flag, "isLoaded", that would indicate that the object is being hydrated by NHibernate, so this logic wouldn't be launched;
however, this didn't seem very PI. Any suggestions on how to improve this?
You can define a field access strategy in your mapping for the IsBlocked property. Basically, you would tell NHibernate to use the underlying private field (_isBlocked) instead of the property, and hence your setter logic in the IsBlocked property won't be executed.
This SO question has a good answer on access strategies.
Official NHibernate documentation.
If you are using Fluent NHibernate for mapping, this is how you could define it:
Map(x => x.IsBlocked).Access.CamelCaseField(Prefix.Underscore);
In addition to Miroslav's solution to the NHibernate problem, I'd really recommend moving away from putting logic in property setters, especially when other fields need to be changed.
public void Block()
{
    _isBlocked = true;
    _blockedDate = DateTime.Now;
}
See answers to this question for why.

How to revert the ef4 context, or at least some entities to their original values?

Scenario:
Retrieve some entities
Update some properties on those entities
You perform some sort of business logic which dictates that you should no longer update those properties; instead, you should insert some new entities documenting the results of your business logic.
Insert said new entities
SaveChanges();
Obviously, in the above example, calling SaveChanges() will not only insert the new entities but also update the properties on the original entities. Previously I managed to rearrange my code so that changes to the context (and its entities) would only be made when I knew for sure that I wanted all my changes saved; however, that's not always possible. So the question is: what is the best way to handle this scenario? I don't work with the context directly, but rather through repositories, if that matters. Is there a simple way to revert the entities to their original values? What is the best practice in this sort of scenario?
Update
Although I disagree with Ladislav that the business logic should be rearranged so that validation always comes before any modification to the entities, I agree that the solution should really be persisting the wanted changes on a different context. The reason I disagree is that my business transaction is fairly long, and validation or error checking that might happen at the end of the transaction is not always obvious upfront. Imagine a Christmas tree you're decorating with lights from the top down: you've already modified the tree by the time you're working on the lower branches. What happens if one of the lights breaks? You want to roll back all of your changes, but you also want to create some ERROR entities. As Ladislav suggested, the most straightforward way is to save the ERROR entities on a different context, allowing the original one (with the modified metaphorical tree) to expire without SaveChanges ever being called.
Now, in my situation I use Ninject for dependency injection, injecting one EF context into all of my repositories that are in the scope of the top-level service. This means my business layer classes don't really have control over creating new EF contexts. Not only do they not have access to the EF context (remember, they work through repositories), but the injection has already occurred higher in the object hierarchy. The only solution I found is to create another class that uses Ninject to create a new UOW within it.
//business logic executing against repositories with an already injected and shared (unit of work) context
Tree = treeRepository.Get();
Lights = lightsRepository.Get();
//update the tree as you're decorating it with lights
if (errors.Count == 0)
{
    //no errors; calling SaveChanges() on any one repository will commit the entire UOW, as they all share the same injected EF context
    repository1.SaveChanges();
}
else
{
    //oops, one of the lights broke, we need to insert some Error entities
    //however, if we just add them to the errorRepository and call SaveChanges(), the modifications that happened
    //to the tree will also be committed
    TreeDecoratorErrorHandler.Handle(errors);
}
internal class TreeDecoratorErrorHandler
{
    //declare repositories
    //constructor that takes repository instances
    public static void Handle(IList<Error> errors)
    {
        //create a new Ninject kernel
        using (Ninject... = new Ninject...)
        {
            //this will create an instance that will get injected with repositories sharing a new EF context,
            //completely separate from the one outside of this class
            TreeDecoratorErrorHandler errorHandler = ninjectKernel.Get<TreeDecoratorErrorHandler>();
            //this will insert the errors and call SaveChanges(); the only changes in this new context are the errors
            errorHandler.InsertErrors(errors);
        }
    }
    //other methods
}
You should definitely use a new context for this. The context is a unit of work, and once your business logic says "Hey, I don't want to update this entity", the entity is no longer part of the unit of work. You can either detach the entity or create a new context.
There is the possibility of using the Refresh method, but that method is meant for scenarios where you have to deal with optimistic concurrency. Because of that, it refreshes only scalar and complex properties and foreign keys that are part of the entity; if you made changes to navigation properties, these can still be present after you refresh the entity.
Take a look at ObjectContext.Refresh with RefreshMode.StoreWins; I think that will do what you want. Starting a new context would achieve the same thing, I guess, but would not be as neat.
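A sketch of that suggestion for the EF4 ObjectContext API (the extension-method wrapper is illustrative; Refresh and GetObjectStateEntries are the real API calls):

```csharp
using System.Data;
using System.Data.Objects;
using System.Linq;

public static class ContextExtensions
{
    // Revert every modified entity tracked by the context to its store values.
    public static void RevertModifiedEntities(this ObjectContext context)
    {
        var modified = context.ObjectStateManager
            .GetObjectStateEntries(EntityState.Modified)
            .Select(entry => entry.Entity)
            .ToList();
        // StoreWins overwrites the current values with the database values.
        context.Refresh(RefreshMode.StoreWins, modified);
    }
}
```

After calling this, you can add the new ERROR entities and call SaveChanges without committing the unwanted updates.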

Persistence with EntityFramework in ASP.NET MVC application

In my ASP.NET MVC application I need to implement persistence of data. I've chosen Entity Framework for its ability to create classes, database tables, and queries from an entity model, so that I don't have to write SQL table creation or LINQ to SQL queries by hand. So simplicity is my goal.
My approach was to create the model and then a custom HttpModule that gets called at the end of each request and just calls SaveChanges() on the context. That made my life very hard - Entity Framework kept throwing very strange exceptions. Sometimes it worked, but sometimes it did not. First I was trying to fix the problems one by one, but when I got another one I realized that my general approach was probably wrong.
So what is the general practice for implementing persistence in an ASP.NET MVC application? Do I just call SaveChanges after each change? Isn't that a little inefficient? And I don't know how to do that with the services pattern anyway (services work with entities, so I'd have to pass the context instance to them so that they could save changes if they make some).
Some links to study materials or tutorials are also appreciated.
Note: this question asks about programming practice. I ask those who would consider it vague to bear in mind that it is still solving my very particular problem, and that the right technique will save me a lot of technical problems, before voting to close.
You just need to make sure SaveChanges gets called before your request finishes. At the bottom of a controller action is an ideal place. My controller actions typically look like this:
public ActionResult SomeAction(...)
{
    _repository.DoSomething();
    ...
    _repository.DoSomethingElse();
    ...
    _repository.SaveChanges();
    return View(...);
}
This has the added benefit that if an exception gets thrown, then SaveChanges will not get called. And you can either handle the exception in the action or in Controller.OnException.
It's going to be no more or less efficient than calling a stored procedure that same number of times (with respect to the number of connections that need to be made).
Nominally, you would make all your changes to the object set, then call SaveChanges to commit all those changes.
So instead of doing this:
mySet.Objects.Add(someObject);
mySet.SaveChanges();
mySet.OtherObjects.Add(someOtherObject);
mySet.SaveChanges();
You just need to do:
mySet.Objects.Add(someObject);
mySet.OtherObjects.Add(someOtherObject);
mySet.SaveChanges();
// Commits Both Changes
Usually your data access is wrapped by an object implementing the repository pattern. You then invoke a Save() method on the repository.
Something like
var customer = customerRepository.Get(id);
customer.FirstName = firstName;
customer.LastName = lastName;
customerRepository.SaveChanges();
The repository can then be wrapped by a service layer to provide view model objects or DTOs.
Isn't that a little inefficient?
Don't prematurely optimise. When you have a performance issue, analyse the performance, identify a cause and then optimise. Repeat.
Update
A repository wraps data access, usually for a single entity. A service layer wraps business logic and can access multiple entities through multiple repositories. It usually deals with 'slim' models or DTOs.
An example could be something like getting a list of invoices for a customer
public CustomerInvoicesModel GetCustomerWithInvoices(int id)
{
    var customer = customerRepository.Get(id);
    var invoiceList = invoiceRepository.GetAllInvoicesFor(id);
    //return a slim model combining both entities
    return new CustomerInvoicesModel
    {
        Customer = customer,
        Invoices = invoiceList
    };
}
