I create an instance of the context per form, and multiple forms may use the same entity.
How do I handle this when saving?
The DbContext is part of your "view state", intended to give you access to the database, hold a copy of the data the user is working on in the Change Tracker, and flush changes back to the database.
In a smart client app the natural scope and lifetime for the DbContext is either global or scoped to a UI element that represents a task or use-case. But in either case you have to prevent the DbContext's Change Tracker from growing too large or from holding on to copies of data too long, as even a single form can stay open all day. A possible rule-of-thumb for these apps is to clear the change tracker after every successful SaveChanges(). Other than that, of course you must avoid long-running transactions so the DbContext doesn't hold a DbConnection open for a long time, but you have to do that anyway. The point is that a DbContext with an empty change tracker can be long-lived.
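A minimal sketch of that rule of thumb, assuming EF Core 5 or later (which added ChangeTracker.Clear()); AppDbContext and Customer are placeholder names, not anything from the question:

// Long-lived context kept lean by clearing the tracker after each successful save.
public class CustomerEditor
{
    private readonly AppDbContext _db = new AppDbContext();   // lives as long as the form

    public void Save(Customer customer)
    {
        _db.Update(customer);        // attach and mark modified if not already tracked
        _db.SaveChanges();           // flush changes to the database
        _db.ChangeTracker.Clear();   // drop tracked copies so the context stays small
    }
}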
Using a short-lived DbContext is possible, but you lose the services of the Change Tracker and the .Local ObservableCollection, which make data binding really easy. So that's not a silver bullet.
So, on to the question at hand: if you have DbContext-per-form and you have form-to-form communication of entities which end up getting saved by a different DbContext, you really have no choice but to disconnect the entities from their home DbContext before sending them over.
Or this might indicate that the two forms really should share a view state, in which case you should create an explicit view state object that includes the DbContext and the entities and have both forms share it.
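For the first of those options, a minimal sketch of disconnecting an entity before passing it to another form (listContext, detailContext and item are placeholder names; the Entry/State approach works in both EF6 and EF Core):

// In the form that owns the entity: detach it from its home context.
listContext.Entry(item).State = EntityState.Detached;

// ...pass 'item' to the other form; in the form that saves it:
detailContext.Entry(item).State = EntityState.Modified;   // attaches it and marks it dirty
detailContext.SaveChanges();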
Take a look at the Model-View-ViewModel (MVVM) pattern, which introduces a distinction between the View (the form) and the ViewModel, the data that models the user's interaction with the app for a given use case. It's a useful pattern and is very widely used in XAML-based smart clients. Also, XAML is (still) the future of smart client apps, so it's useful to learn.
I am new to development in C#, so I'm still figuring a few things out.
I am trying to stick to good design principles so as to keep my code maintainable. To that end, I am building a line-of-business application using the MVVM and Factory patterns. I am also using Dependency Injection (Unity).
My question is about refreshing the data cache created with Dependency Injection. When I create my View Model the data cache is loaded as follows:
_container.RegisterType<IRepo<Relation>, RelationRepo>(new TransientLifetimeManager());
My scenario is that I have data presented to the users in a GridView. I have a multi-user environment with dynamic information, so it is possible that from time to time the data in the cache becomes obsolete. This may result, as could be expected, in a DBConcurrency error from time to time. What I am grappling with is how to handle this in the correct manner. Should I abort the whole process and have the user reload the application, thereby recreating the DI container, or is there an elegant way to refresh the cache and re-present the data after giving the user the necessary information? Using the second option I could perhaps place a refresh button on the screen, or use a timed event to refresh the data so the user can see any variations in it.
So basically what I'm asking is: how do I keep the cache in sync with the underlying database in real time?
Thanks in advance.
This is a very broad question that you might prefer to take to the Software Engineering Stack Exchange site. There are many approaches for handling this kind of problem.
In any case you are going to have to handle concurrency exceptions. One approach you could take to reduce the chances of their occurrence is distributing updates to clients via SignalR: a single update is broadcast via a SignalR hub to all other clients. Still, the update may be broadcast moments after a concurrent update, so you will need some UI feature to explain that things have changed.
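A rough sketch of the hub side of that idea (EntityUpdatesHub, PublishUpdate and the "EntityChanged" message name are made up here, and the client-side wiring is omitted):

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class EntityUpdatesHub : Hub
{
    // A client calls this after it has successfully saved a change.
    public Task PublishUpdate(string entityType, int entityId)
    {
        // Notify everyone except the sender so they can refresh that entity.
        return Clients.Others.SendAsync("EntityChanged", entityType, entityId);
    }
}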
Although it's been a while, I thought I'd post the solution to my problem in case someone else is grappling with the concepts as I was.
The issue was that I was not properly disposing of my ViewModel when I navigated away from the View (or if there was a DB error). Therefore, when I returned to the View, the EF DbContext was still using the local store, i.e. it had not been disposed.
I subsequently implemented the Prism Framework and implemented IRegionMemberLifetime on a base class from which all my ViewModels inherit. This causes the instance of the ViewModel placed in the Region Manager (IRegion) to be removed when it transitions from an activated to a deactivated state.
My (pseudo) base class looks like this:
using Prism.Mvvm;
using Prism.Regions;
public class MyBindableBase : BindableBase, IRegionMemberLifetime
{
public virtual bool KeepAlive => false;
...
}
I'm creating my first application using Entity Framework. I'm using Entity Framework Core and MVVMLight.
I created a DbContext descendant class. I would like to know when to instantiate this DbContext.
My first thought was to create 1 instance for each View.
Imagine the following scenario :
I have an Item class
I create an ItemsViewModel to manage the list of items. In this view model, I add a property for the DbContext
When the user double-clicks an item, it's displayed in a detail view associated with an ItemViewModel. This view model also has an instance of my DbContext.
When the user quits the detail view:
If he saved, I update the DbContext of the list
If he canceled, the list doesn't have to be updated
Is this a correct way of doing things? I've read somewhere that one should have only one instance of the DbContext, but in that case every modification made in the detail view would be propagated to the list view, even if the detail view was canceled.
Many thanks for listening.
Since you're developing a WPF application, you can use a context instance per form.
Here it is from the EF team:
When working with Windows Presentation Foundation (WPF) or Windows Forms, use a context instance per form. This lets you use change-tracking functionality that context provides.
I would recommend using the repository pattern with dependency injection (DI). Then you don't need to worry about the instantiation and disposal of the DbContext; those are handled automatically.
Since you're using EF Core, you can use Autofac as the DI container.
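A rough sketch of what that registration could look like with Autofac (AppDbContext, Repo<>/IRepo<> and the chosen lifetimes are assumptions, not the only valid setup):

var builder = new ContainerBuilder();

builder.RegisterType<AppDbContext>()
       .InstancePerLifetimeScope();            // one context per scope, e.g. per form

builder.RegisterGeneric(typeof(Repo<>))
       .As(typeof(IRepo<>))
       .InstancePerLifetimeScope();

var container = builder.Build();

// Each form/use case opens its own scope; disposing the scope disposes the context.
using (var scope = container.BeginLifetimeScope())
{
    var items = scope.Resolve<IRepo<Item>>();
    // ...use the repository...
}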
A good article for you: How to work with DbContext
Another good article explains how to implement a decoupled, unit-testable, N-tier architecture based on the Generic Repository pattern with Entity Framework, an IoC container and dependency injection. Yes, that article is for MVC, but you can still get a good understanding of the pattern from it:
Generic Repository and Unit of Work Pattern, Entity Framework, Autofac
There are tons of articles and SO questions on that; google for "DbContext lifetime desktop application". Also, this MSDN Magazine article might be helpful. Although it discusses the case of NHibernate, the rules are exactly the same.
Data Access - Building a Desktop To-Do Application with NHibernate
The recommended practice for desktop applications is to use a session per form, so that each form in the application has its own session. Each form usually represents a distinct piece of work that the user would like to perform, so matching session lifetime to the form lifetime works quite well in practice. The added benefit is that you no longer have a problem with memory leaks, because when you close a form in the application, you also dispose of the session. This would make all the entities that were loaded by the session eligible for reclamation by the garbage collector (GC).
There are additional reasons for preferring a single session per form. You can take advantage of NHibernate’s change tracking, so it will flush all changes to the database when you commit the transaction. It also creates an isolation barrier between the different forms, so you can commit changes to a single entity without worrying about changes to other entities that are shown on other forms.
While this style of managing the session lifetime is described as a session per form, in practice you usually manage the session per presenter.
As for the "1 instance of DbContext", this also is commented there:
A common bad practice with [...] desktop applications is to have a single global session for the entire application.
and reasons are discussed below.
Working on a WPF application using MVVM and powered by Entity Framework. We were very keen to allow users to multi-window this app, for usability purposes. However, that has the potential to cause problems with EF. If we stick to the usual advice of creating one copy of the Repository per ViewModel and someone opens multiple windows of the same ViewModel, it could cause "multiple instances of IEntityChangeTracker" errors.
Rather than go with a Singleton, which has its own problems, we solved this by putting a Refresh method on the repository that gets a fresh data context. Then we do things like this all over the shop:
using (IRepository r = Rep.Refresh())
{
r.Update(CurrentCampaign);
r.SaveChanges();
}
Which is mostly fine. However, it causes problems with maintaining state. If the context is refreshed while a user is working with an object, their changes will be lost.
I can see two ways round this, both of which have their own drawbacks.
We call SaveChanges constantly. This has the potential to slow down the application with constant database calls. Plus there are occasions when we don't want to store incomplete objects.
We copy EF objects into memory when loaded, have the user work with those, and then add a "Save" button which copies the values back into the EF objects and saves (as sketched below). We could do this with AutoMapper, but it still seems like unnecessary faff.
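A rough sketch of that second option, under the assumptions that Campaign is the entity type and AppDbContext is the context type (neither is named in the question):

// Build a detached working copy for the UI to bind to.
var workingCopy = new Campaign
{
    Id = CurrentCampaign.Id,
    Name = CurrentCampaign.Name
    // ...copy the remaining fields here, or use AutoMapper...
};

// When the user clicks Save, push the copy into a fresh context.
using (var db = new AppDbContext())
{
    db.Entry(workingCopy).State = EntityState.Modified;   // attach and mark modified
    db.SaveChanges();
}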
Is there another way?
I believe having the repository for accessing Entity Framework as a singleton may not always be wrong.
If you have a scenario where you have a client-side repository, i.e. a repository which is part of the executable of the client application and is therefore used by one client, then a singleton might be OK. Of course I would not use a singleton on the server side.
I asked Brian Noyes (Microsoft MVP) a similar question on a "MVVM" course on the pluralsight website.
I asked: "What is the correct way to dispose of client services which are used in the view model?"
And in his response he wrote: "...most of my client services are Singletons anyway and live for the life of the app."
In addition, having a singleton does not prevent you from writing unit tests, as long as you have an interface for the singleton.
And if you use a dependency injection framework (see Mark's comment), which is a good idea in itself, changing to singleton instantiation is just a matter of a small change in the setup of the injection container for the respective class.
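As a small illustration of that last point, using Unity purely as an example container (IItemRepository and ItemRepository are placeholder names):

// Transient: a new repository per resolve.
container.RegisterType<IItemRepository, ItemRepository>(new TransientLifetimeManager());

// Switching to a singleton is only a change to this one registration:
container.RegisterType<IItemRepository, ItemRepository>(new ContainerControlledLifetimeManager());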
I am developing a multi-tier desktop application (Onion architecture) with a WinForms project as the UI. I use EF Code First to access my DB, and for my domain models I want to use POCOs, so I have two choices:
Connected POCOs
Disconnected POCOs
If I use disconnected POCOs, I have to do a lot of work outside of EF and give up EF features, because:
I have to save and manage each entity's state on the client side.
When I want to save changes to the DB, I have to sync the client-side POCO's state with the DbContext entity's state (as sketched below).
When adding POCOs to a newly created DbContext, I have to take care not to add two entities with the same key to the DbContext.
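A hedged illustration of that state-syncing chore (AppDbContext, Order, changedOrders and the ClientState flag are assumed names, not EF API; EntityState comes from System.Data.Entity or Microsoft.EntityFrameworkCore):

using (var db = new AppDbContext())
{
    foreach (var order in changedOrders)
    {
        // Translate the client-side bookkeeping into the context's entity state.
        if (order.ClientState == ClientState.Added)
            db.Entry(order).State = EntityState.Added;
        else if (order.ClientState == ClientState.Deleted)
            db.Entry(order).State = EntityState.Deleted;
        else
            db.Entry(order).State = EntityState.Modified;
    }
    db.SaveChanges();   // one transaction for the whole batch
}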
So it seems natural to use connected POCOs, but in that situation I think I have the following challenges:
If I want to manage the lifetime of my DbContext using an IoC container in a multi-user environment, and keep the DbContexts alive for all users from the time they get their POCOs to the time they save their changes back to the DB, that takes a large amount of server memory, I think, and isn't efficient.
I want to save the related graph of my entities below in one transaction.
I have these questions:
How can I use connected POCOs in my multi-tier application?
Which configuration should I use to manage the lifetime of my DbContexts with an IoC container in my case?
Is there any way to build (or change) the whole graph on the client side and then pass it to the DbContext to be saved?
Is there any sample that implements this situation?
I implemented the repository pattern in my software with the help of this article. One question boggles my mind: I think the Database class should implement the singleton pattern, because the user should not create more than one database context using a "var ourDatabase = new Database();" statement. Am I right, or is this not a critical issue for this implementation?
You should not have the database context as a singleton with Entity Framework.
For starters, each context instance tracks all changes made to it, and "save changes" saves all of those changes. So, if you have a web application and you made your context a singleton, then all the users would be updating the same context, and when one called "save changes" it would save changes for everyone. In a single-user Windows application this is less of an issue, unless you have different parts of the application working in parallel.
Also be mindful that the context caches data it has already loaded and tracks changes by default. In short, this can mean memory bloat and decreased performance as more and more objects are tracked - though in practice the impact of this varies.
From a performance point of view, Entity Framework implements connection pooling underneath the covers so don't worry about creating and disposing database context objects - it is very cheap.
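A quick sketch of the cheap create-and-dispose usage this implies (AppDbContext, Customers and customerId are placeholder names):

// Read without tracking; the context is gone as soon as the query completes.
List<Customer> customers;
using (var db = new AppDbContext())
{
    customers = db.Customers.AsNoTracking().ToList();
}

// A separate short-lived context performs an update later.
using (var db = new AppDbContext())
{
    var customer = db.Customers.Find(customerId);
    customer.Name = "New name";
    db.SaveChanges();
}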