Refreshing cache in Dependency Injection - C#

I am new to development in C# so I am still figuring a few things out.
I am trying to stick to good design principles so as to keep my code maintainable. To that end I am building a line-of-business application using the MVVM and Factory patterns. I am also using Dependency Injection (Unity).
My question is about refreshing the data cache created with Dependency Injection. When I create my View Model the data cache is loaded as follows:
_container.RegisterType<IRepo<Relation>, RelationRepo>(new TransientLifetimeManager());
My scenario is that data is presented to the users in a GridView. I have a multi-user environment with dynamic information, so it is possible that from time to time the data in the cache becomes obsolete. This may result, as could be expected, in an occasional DBConcurrency error. What I am grappling with is how to handle this in the correct manner. Should I abort the whole process and have the user reload the application, thereby recreating the DI container, or is there an elegant way to refresh the cache and re-present the data after giving the user the necessary information? With the second option I could perhaps place a refresh button on the screen, or use a timed event to refresh the data so the user can see any variations in it.
So basically what I'm asking is how do I keep the Cache in sync with the underlying Data Base in real-time?
Thanks in advance.

This is a very broad question that you might prefer to take to the Software Engineering Stack Exchange site. There are many approaches to handling this kind of problem.
In any case you are going to have to handle concurrency exceptions. One approach you could take to reduce the chance of their occurrence is to distribute updates to clients via SignalR: a single update is distributed via a SignalR hub to all other clients. Still, an update may be broadcast moments after a concurrent update, and this will require some UI feature to explain that things have changed.
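The SignalR wiring itself is more than a snippet, but the broadcast shape it gives you can be sketched with a plain in-process stand-in (the class and member names here are illustrative, not from the question):

```csharp
using System;
using System.Collections.Generic;

// Minimal in-process stand-in for a SignalR hub: clients subscribe,
// and an update from one client is pushed to all *other* clients,
// which can then refresh the affected row in their grid.
public class UpdateBroadcaster
{
    private readonly Dictionary<string, Action<int>> _clients = new();

    public void Subscribe(string clientId, Action<int> onRelationUpdated)
        => _clients[clientId] = onRelationUpdated;

    // Called by the client that just saved; everyone else gets notified.
    public void PublishRelationUpdated(string senderId, int relationId)
    {
        foreach (var (id, callback) in _clients)
            if (id != senderId)
                callback(relationId);
    }
}
```

With real SignalR the publish step would go through a hub and `Clients.Others`, but the flow is the same: save, broadcast the key, and let each client decide how to refresh.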

Although it's been a while, I thought I'd post the solution to my problem in case someone else is grappling with the concepts as I was.
The issue was that I was not properly disposing my ViewModel when I navigated away from the View (or when there was a DB error). Therefore, when I returned to the View, the EF DbContext was still using the local store, i.e. it had not been disposed.
I subsequently adopted the Prism framework and implemented IRegionMemberLifetime on a base class from which all my ViewModels inherit. This causes the instance of the ViewModel placed in the region manager (IRegion) to be removed when it transitions from an activated to a deactivated state.
My (pseudo) base class looks like this:
using Prism.Mvvm;
using Prism.Regions;
public class MyBindableBase : BindableBase, IRegionMemberLifetime
{
    // Returning false lets Prism discard this instance when its view is deactivated.
    public virtual bool KeepAlive => false;
    ...
}

Related

Use one entity from two context instance entity framework

I create an instance of the context per form, and multiple forms may use the same entity.
How do I handle this when saving?
The DbContext is part of your "view state", intended to give you access to the database, hold a copy of the data the user is working on in the Change Tracker, and flush changes back to the database.
In a smart client app the natural scope and lifetime for the DbContext is either global or scoped to a UI element that represents a task or use-case. But in either case you have to prevent the DbContext's Change Tracker from growing too large or from holding on to copies of data too long, as even a single form can stay open all day. A possible rule-of-thumb for these apps is to clear the change tracker after every successful SaveChanges(). Other than that, of course you must avoid long-running transactions so the DbContext doesn't hold a DbConnection open for a long time, but you have to do that anyway. The point is that a DbContext with an empty change tracker can be long-lived.
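That rule-of-thumb can be sketched as follows. A real EF Core 5+ context exposes ChangeTracker.Clear() for the reset step; a stand-in context is used here so the sketch runs standalone:

```csharp
using System;
using System.Collections.Generic;

// Stand-in for a DbContext: it tracks edited entities and flushes on save.
// With a real EF Core 5+ context the reset step is ChangeTracker.Clear().
public class TrackingContext
{
    public List<object> Tracked { get; } = new();
    public int SavedCount { get; private set; }

    public void Attach(object entity) => Tracked.Add(entity);

    public void SaveChangesAndReset()
    {
        SavedCount += Tracked.Count;   // pretend we flushed to the database
        Tracked.Clear();               // keep the long-lived context lean
    }
}
```

The point of clearing after each successful save is exactly the one above: the context can stay alive all day without its tracker accumulating stale copies of data.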
Using a short-lived DbContext is possible, but you lose the services of the Change Tracker and the .Local ObservableCollection, which make data binding really easy. So that's not a silver bullet.
So, on to the question at hand: if you have DbContext-per-form, and you have form-to-form communication of entities which end up getting saved by a different DbContext, you really have no choice but to disconnect the entities from their home DbContext before sending them over.
Or this might indicate that the two forms really should share a view state: you could create an explicit view-state object that includes the DbContext and the entities, and have both forms share it.
Take a look at the Model-View-ViewModel (MVVM) pattern, which introduces a distinction between the View (form) and the ViewModel, the data that models the user's interaction with the app. It's a useful pattern and is very widely used in XAML-based smart clients. Also, XAML is (still) the future of smart-client apps, so it's useful to learn.

Good usage of DbContext

I'm creating my first application using Entity Framework. I'm using Entity Framework Core and MVVMLight.
I created a DbContext descendant class. I would like to know when to instantiate this DbContext.
My first thought was to create 1 instance for each View.
Imagine the following scenario :
I have an Item class
I create an ItemsViewModel to manage the list of items. In this ViewModel, I add a property for the DbContext.
When the user double-clicks an item, it's displayed in a detail view, associated with an ItemViewModel. This view model also has an instance of my DbContext.
When the user quits the detail view:
If he saved, I update the DbContext of the list
If he cancelled, the list doesn't have to be updated
Is this a correct way of doing things? I've read somewhere that one should have only one instance of the DbContext, but in that case every modification made in the detail view would be propagated to the list view, even if the detail view was cancelled.
Many thanks for listening
Since you're developing a WPF application, you can use a context instance per form.
Here it is from the EF team:
When working with Windows Presentation Foundation (WPF) or Windows Forms, use a context instance per form. This lets you use the change-tracking functionality that the context provides.
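The quoted rule amounts to tying the context's lifetime to the form's: create it when the form opens, dispose it when the form closes. A sketch of that ownership, with a stand-in context type so it runs standalone:

```csharp
using System;

// Stand-in for an EF DbContext; only the lifetime matters here.
public class OrdersContext : IDisposable
{
    public bool IsDisposed { get; private set; }
    public void Dispose() => IsDisposed = true;
}

// The form (or its view model) owns the context for its whole lifetime,
// so change tracking works across everything the form does.
public class OrdersForm : IDisposable
{
    public OrdersContext Db { get; } = new();   // created with the form
    public void Dispose() => Db.Dispose();      // torn down with the form
}
```

Closing the form disposes the context, which also releases everything the context loaded, as the NHibernate discussion further down notes for sessions.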
I would also recommend using the repository pattern with Dependency Injection (DI). Then you don't need to worry about the instantiation and disposal of the DbContext; those are automatic.
Since you're using EF Core, you can use Autofac as a DI container.
Good Article for you : How to work with DbContext
Another good article explains how to implement a decoupled, unit-testable, N-tier architecture based on the Generic Repository pattern with Entity Framework, an IoC container and Dependency Injection. Yes, that article is for MVC, but you can still get a good understanding of the pattern from it.
Generic Repository and Unit of Work Pattern, Entity Framework,Autofac
There are tons of articles and SO questions on that; google for "DbContext lifetime desktop application". Also, this MSDN Magazine article might be helpful; although it discusses the case of NHibernate, the rules are exactly the same.
Data Access - Building a Desktop To-Do Application with NHibernate
The recommended practice for desktop applications is to use a session per form, so that each form in the application has its own session. Each form usually represents a distinct piece of work that the user would like to perform, so matching session lifetime to the form lifetime works quite well in practice. The added benefit is that you no longer have a problem with memory leaks, because when you close a form in the application, you also dispose of the session. This would make all the entities that were loaded by the session eligible for reclamation by the garbage collector (GC).
There are additional reasons for preferring a single session per form. You can take advantage of NHibernate’s change tracking, so it will flush all changes to the database when you commit the transaction. It also creates an isolation barrier between the different forms, so you can commit changes to a single entity without worrying about changes to other entities that are shown on other forms.
While this style of managing the session lifetime is described as a session per form, in practice you usually manage the session per presenter.
As for the "1 instance of DbContext", this also is commented there:
A common bad practice with [...] desktop applications is to have a single global session for the entire application.
and reasons are discussed below.

Persisting EF changes with constantly refreshed repository in WPF MVVM

Working on a WPF application using MVVM and powered by Entity Framework. We were very keen to allow users to multi-window this app for usability purposes. However, that has the potential to cause problems with EF. If we stick to the usual advice of creating one copy of the repository per ViewModel, and someone opens multiple windows of the same ViewModel, we can get "multiple instances of IEntityChangeTracker" errors.
Rather than go with a Singleton, which has its own problems, we solved this by putting a Refresh method on the repository that gets a fresh data context. Then we do things like this all over the shop:
using (IRepository r = Rep.Refresh())
{
    r.Update(CurrentCampaign);
    r.SaveChanges();
}
Which is mostly fine. However, it causes problems with maintaining state. If the context is refreshed while a user is working with an object, their changes will be lost.
I can see two ways round this, both of which have their own drawbacks.
We call SaveChanges constantly. This has the potential to slow down the application with constant database calls. Plus there are occasions when we don't want to store incomplete objects.
We copy EF objects into memory when loaded, have the user work with those, and then add a "Save" button which copies the objects back to the EF entities and saves. We could do this with AutoMapper, but it still seems like unnecessary faff.
Is there another way?
I believe having the repository for accessing entity framework as a singleton may not always be wrong.
If you have a scenario were you have a client side repository, i.e. a repository which is part of the executable of the client application, i.e. is used by one client, then a singleton might be ok. Of course I would not use a singleton on the server side.
I asked Brian Noyes (Microsoft MVP) a similar question on a "MVVM" course on the pluralsight website.
I asked: "What is the correct way to dispose of client services which are used in the view model?"
And in his response he wrote: "...most of my client services are Singletons anyway and live for the life of the app."
In addition, having a singleton does not prevent you from writing unit tests, as long as you have an interface for the singleton.
And if you use a dependency injection framework (see Mark's comment), which is a good idea in itself, changing to singleton instantiation is just a matter of making a small change in the setup of the injection container for the respective class.
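A minimal sketch of that point: expose the singleton behind an interface so that consumers depend on the abstraction and tests can substitute a fake (the service names here are invented for illustration):

```csharp
using System;

public interface ICampaignService
{
    string GetCampaignName(int id);
}

// Production implementation, exposed as a lazy singleton that lives
// for the life of the app.
public class CampaignService : ICampaignService
{
    private static readonly Lazy<CampaignService> _instance =
        new(() => new CampaignService());

    public static ICampaignService Instance => _instance.Value;

    private CampaignService() { }

    public string GetCampaignName(int id) => $"Campaign {id}";
}

// A test double implements the same interface, so code that depends
// on ICampaignService stays unit-testable despite the singleton.
public class FakeCampaignService : ICampaignService
{
    public string GetCampaignName(int id) => "fake";
}
```

In a DI container this becomes a one-line registration change (transient vs. singleton lifetime) for the same interface.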

Prevent repeated DB calls because of object oriented approach in MVC3 app

We have an MVC3 application in which we have created many small actions and views so we can place data wherever we need it. For instance, if it were a blog and we wanted to show comments, we would have a comment action and view that we could place wherever we want: a user profile view, a blog post view, etc.
The problem this has caused is that each small action or view needs to make a call, usually to the same service, multiple times per page load, because of all the other small views in the application. So on a very large page containing these small views we could have 80+ SQL calls, 40% of them duplicates, and the page slows down. Our current solution is to cache some data and pass some of it around in the ViewBag where we can; so if you want, say, a user's profile, you check the cache and the ViewBag before asking for it.
That feels really dirty for a design pattern, and the ViewBag approach seems awful since the data has to be passed from the top down. We've added some data to HttpContext.Current.Items to scope it per request (instead of caching it, since that data can change), but surely there is some clean solution that doesn't feel wrong?
EDIT
I've been asked to be more specific and while this is a internal business application I can't give away to much of the specifics.
So to put this into a software analogy, let's compare it to Facebook. Imagine this MVC app had an action for each Facebook post, then under that an action for the like button and number of comments, then another action for showing the top comments to the user. The way our app is designed, we would get the current user's profile in each action (thus at least 4 times in the above situation), and then each child action would get the parent wall post to verify that you have permission to see it. Now you could consider caching the calls to each security check, wall post, etc., but I feel caching is for things that will be needed over the lifetime of the app, not for little pieces here and there to correct a mistake in how your application is architected.
Are you able to replace any of your @Html.Action() calls with @Html.Partial() calls, passing in the model data instead of relying on an action method to get it from the db?
You could create a CompositeViewModel that contains your other ViewModels as properties. For example, it might contain a UserViewModel property, a BlogPostViewModel property, and a Collection<BlogComment> property.
In your action method that returns the container / master view, you can optimize the data access. It sounds like you already have a lot of the repeatable code abstracted through a service, but you didn't post any code so I'm not sure how DRY this approach would be.
But if you can do this without repeating a lot of code from your child actions, you can then use @Html.Partial("~/Path/to/view.cshtml", Model.UserViewModel) in your master view, and keep the child action method for other pages that don't have such a heavy load.
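The composite view-model idea can be sketched like this (the type names follow the suggestion above but are otherwise illustrative):

```csharp
using System.Collections.Generic;

public class UserViewModel { public string Name { get; set; } = ""; }
public class BlogPostViewModel { public string Title { get; set; } = ""; }
public class BlogComment { public string Text { get; set; } = ""; }

// One model for the master view: the container action loads everything
// in a single optimized pass, and each partial binds to one property
// instead of issuing its own service call.
public class CompositeViewModel
{
    public UserViewModel User { get; set; } = new();
    public BlogPostViewModel Post { get; set; } = new();
    public ICollection<BlogComment> Comments { get; set; } = new List<BlogComment>();
}
```

The master action builds one CompositeViewModel, so the user profile is fetched once per page instead of once per child action.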
I see two potential places your code might be helped based on my understanding of your problem.
You have too many calls per page. In other words, your division of work is too granular. You might be able to combine calls to your service by making objects that contain more information. If you have a comments object and an object that has aggregate data about comments, maybe combine them into one object and one call. Just a thought.
Caching more effectively. You said you're already trying to cache the data, but want a possibly better way to do this. On a recent project I worked on I used an AOP framework to do caching on WCF calls. It worked out really well for development, but was ultimately too slow in a heavy traffic production website.
The code would come out like this for a WCF call (roughly):
[Caching(300)]
Comment GetComment(int commentId);
You'd just put the attribute on the WCF operation with a time interval and the AOP framework would take care of the caching. Granted, we also used an external caching framework (AppFabric) to store the results of the WCF calls.
Aspect Oriented Framework (AOP): http://en.wikipedia.org/wiki/Aspect-oriented_programming
We used Unity for AOP: Enterprise Library Unity vs Other IoC Containers
I would strongly consider trying to cache the actual service calls though, so that you can call them to your hearts content.
The best thing to do is to create an ActionFilter that will set up and tear down your persistence mechanism. This ensures that the most expensive part of data access (i.e. creating the connection) happens only once per request.
public class SqlConnectionActionFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        if (filterContext.IsChildAction)
            return;
        // Create your SqlConnection here
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        if (filterContext.IsChildAction)
            return;
        // Commit transaction & tear down SqlConnection
    }
}
The thing is: if you are doing the query 80 times, then you are hitting the db 80 times. Putting a request-scoped cache in place is the best solution. The most elegant way of implementing it is through AOP, so your code doesn't have to care about the problem at all.
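A request-scoped cache in this setting is just a dictionary that lives for exactly one request (HttpContext.Items plays that role in ASP.NET); the lookup pattern is:

```csharp
using System;
using System.Collections.Generic;

// One instance per request: repeated lookups within the request hit the
// backing service once, and the whole cache dies with the request, so
// no invalidation logic is needed.
public class RequestCache
{
    private readonly Dictionary<string, object> _items = new();

    public T GetOrAdd<T>(string key, Func<T> load)
    {
        if (_items.TryGetValue(key, out var cached))
            return (T)cached;
        var value = load();   // first use in this request: the real call
        _items[key] = value!;
        return value;
    }
}
```

With AOP the GetOrAdd call is woven around the service method automatically; without it, this helper can simply wrap the duplicated calls.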

Datagrid doesn't refresh on changed data

Is there any way to have a DataGrid listen to the database and automatically update its data when the database data changes? I use a SQL Server database.
I'd like to use LINQ to SQL if possible.
Because @Slaggg asked: there are fairly straightforward ways of doing this, but they're almost certainly going to involve a lot of coding, it'll impact performance pretty significantly, and my strong suspicion is that it'll be more trouble than it's worth.
That said, for a typical n-tier application, at a very high level you'll need:
(1) A way to notify the middle tier when the data changes. You could use custom triggers inside each table that fire off some sort of notification (perhaps using WCF and CLR stored procedures), or you could use a SqlDependency object. The second would probably work better.
(2) A way to notify each client connected to that middle tier. Assuming that you're using WCF, you'll need to use one of the duplex bindings that are available, such as Net.TCP or HttpPollingDuplex (for Silverlight). You'll need to make sure this is configured correctly on both the client and the server. You'll also need to manually keep track of which clients might be interested in the update, so that you can know which ones to update, and you'll need to be able to remove them from that list when they go away or timeout. Tomek, from the MS WCF team, has some pretty good examples on his blog that you might want to investigate.
(3) A mechanism to update the local client's model and/or viewmodel and/or UI once you get the notification from the middle tier that something has changed. This is more complicated than you'd think: it's difficult enough to keep your UI in sync with your data model under normal circumstances, but it gets dramatically more complicated when that data model can be changing under you from the other direction as well.
The idea behind these sorts of notifications is straightforward enough: but getting all the details right is likely to keep you debugging way into the night. And the guy who has to support all this two years from now will curse your name.
Hope this helps.
It depends from where you are updating the database:

- From the same context (in Silverlight: are you adding, deleting, editing on the same page?)
- From a ChildWindow in your Silverlight application
- From an external, non-related tool, outside of your Silverlight application
