I've got an application which consists of two parts at the moment:
A Viewer that receives data from a database using EF
A Service that manipulates data from the database at runtime.
The logic behind the scenes includes several projects, such as repositories; data access is implemented with a unit of work. The Viewer itself is a WPF window with an underlying ViewModel.
The ViewModel contains an ObservableCollection which is the data source of my Viewer.
Now the question is: how can I retrieve the database data every few minutes? I'm aware of the following two problems:
My repository isn't "loading" the latest data - does EF do something "smart" and retrieve data from its local cache? If so, how can I force EF to load the data from the database?
Re-setting the whole ObservableCollection, or adding/removing entities from another thread/BackgroundWorker (even with invocation), is not possible. How am I supposed to solve this?
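For the second point, the direction I have in mind is to marshal the collection update back onto the UI thread, roughly like this (just a sketch using the WPF Dispatcher; requestRepository and Requests stand for my repository and the bound collection):

// sketch only - load on a background thread, then update the bound collection on the UI thread
var requests = await Task.Run(() => requestRepository.GetAllUnResolvedRequests().ToList());

Application.Current.Dispatcher.Invoke(() =>
{
    Requests.Clear();
    foreach (var request in requests)
        Requests.Add(request);
});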
I will add some of my code if needed but at the moment I don't think that this would help at all.
Edit:
public IEnumerable<Request> GetAllUnResolvedRequests()
{
    return AccessContext.Requests.Where(o => !o.IsResolved);
}
This piece of code doesn't get the latest data - I edit some rows manually (set IsResolved to true), but this method still returns them.
Edit3:
var requests = AccessContext.Requests
    .Where(o => o.Date >= fromDate && o.Date <= toDate)
    .ToList();

foreach (var request in requests)
{
    AccessContext.Entry(request).Reload();
}

return requests;
Final Question:
The code above "solves" the problem - but in my opinion it's not clean. Is there another way?
When you access an entity in the database, the entity is cached (and tracked, so that changes your application makes are recorded, unless you specify AsNoTracking).
This has some drawbacks (for example, performance issues because the cache grows, or you see an old version of the entities, which is your case).
For these reasons, when using EF you should work with the Unit of Work pattern (i.e. you should create a new context for every unit of work).
You can have a look at this Microsoft article to understand how to implement the Unit of Work pattern:
http://www.asp.net/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
In your case, using Reload is not a good choice because it doesn't scale: for every Reload you are issuing a query against the database. If you just need to return the desired entities, the best approach is to create a new context.
public IEnumerable<Request> GetAllUnResolvedRequests()
{
    return GetNewContext().Requests.Where(o => !o.IsResolved).ToList();
}
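If GetNewContext hands ownership of the context to the caller, remember to dispose it; a minimal sketch (AsNoTracking added on the assumption that this data is read-only display data):

public IEnumerable<Request> GetAllUnResolvedRequests()
{
    // a fresh, short-lived context per call; dispose it once the data is materialized
    using (var context = GetNewContext())
    {
        return context.Requests
            .AsNoTracking()
            .Where(o => !o.IsResolved)
            .ToList();
    }
}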
Here is what you can do.
You can define a Task (which keeps running on the thread pool) that periodically checks the database (keep in mind that periodically making EF reload data has its own cost).
Alternatively, you can define a SQL dependency on your query so that the main thread is notified whenever the data changes.
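A rough sketch of the SqlDependency approach (assuming SQL Server with Service Broker enabled; the table, columns and callback are illustrative, not taken from the question):

using System.Data.SqlClient;

public class RequestChangeListener
{
    private readonly string _connectionString;

    public RequestChangeListener(string connectionString)
    {
        _connectionString = connectionString;
        SqlDependency.Start(_connectionString); // once per application / connection string
    }

    public void Subscribe(Action onChanged)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, IsResolved FROM dbo.Requests WHERE IsResolved = 0", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += (s, e) =>
            {
                onChanged();          // e.g. reload via a fresh context and update the UI
                Subscribe(onChanged); // notifications fire only once, so re-subscribe
            };

            connection.Open();
            using (command.ExecuteReader()) { } // the command must run to register the notification
        }
    }
}

Note that notification queries have to follow the query-notification rules (explicit column list, two-part table names, no SELECT *), otherwise the dependency fires immediately with an error.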
The scenario here is that behind each screen (view) there is one ViewModel, and as a best (or recommended) practice we should use one long-lived DbContext for each ViewModel.
So there is a requirement to reload the related entities if some change (newly added/deleted entities) is made in another ViewModel.
Here are some solutions to this issue:
Publish an event or send a message to notify about the change; the subscribing ViewModels can then:
Add/remove the added/deleted entities accordingly without having to reload them. This amounts to syncing data between ViewModels and has its own complexity: the added/removed entities should not have their state tracked (i.e. the state should be Unchanged, not Added or Deleted, because these changes have already been persisted to the database), proxied entities cannot be attached to multiple DbContexts, ... too many issues here.
Reload all the related entities. This is not naturally supported by EF.
Just reload the whole ViewModel when switching screens (meaning the ViewModel isn't kept alive for the whole lifetime of the application). This may be applicable in some cases, but it's not flexible enough to be used everywhere (e.g. a change may be made from outside the application - by another application - and usually we just want a Refresh button on the current view to refresh the data, so reloading the whole ViewModel affects the current View unnecessarily and may cause an unpleasant visual effect, ...).
So I'm really looking for a good solution based on reloading the related entities. From Googling around, it looks like this is not easily done with Entity Framework; the quickest and safest way is just to create and use a new DbContext, which means creating and using a new ViewModel (please note that I'm using dependency injection to inject the DbContext into the ViewModel, so the DbContext's lifetime is effectively the same as the ViewModel's).
I can Google some hacky code to reload entities in Entity Framework, but I don't really like hacky stuff. So if possible, please share your approach or solutions to this issue, or even persuade me that the hacky stuff is just fine.
we should use one long-lived DbContext for each ViewModel
I wouldn't say this is true.
You can, and probably should, create a new DbContext instance for every load/update operation.
Using separate DbContext instances gives you the ability to execute queries asynchronously.
For Windows applications (WinForms, WPF), asynchronous database access greatly improves loading times while the application remains responsive.
With one DbContext this wouldn't be easy.
Instead of injecting the DbContext, create a DbContext factory and inject that into the ViewModel; then:
using (var context = _contextFactory.Create<MyDbContext>())
{
    var orders = await context.Orders.ToListAsync();
    return orders.Select(order => order.ToOrderDto());
}
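The factory itself can be trivial; a sketch (IDbContextFactory and Create<TContext> here are illustrative names, not an EF API):

public interface IDbContextFactory
{
    TContext Create<TContext>() where TContext : DbContext, new();
}

public class DbContextFactory : IDbContextFactory
{
    public TContext Create<TContext>() where TContext : DbContext, new()
    {
        // every call hands out a fresh, short-lived context that the caller disposes
        return new TContext();
    }
}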
But what I am afraid of is that your business and view logic rely entirely on the database structure.
Your ViewModel shouldn't depend on the DbContext; instead, depend on an abstraction of the database layer (actually, your question is the first wall you hit when you rely on the DbContext directly).
public interface OrderDataAccess
{
    Task<Order> GetOrder(Guid id);
    Task<IEnumerable<OrderLine>> GetOrderLines(Guid orderId);
}
When you load the whole view, you can load the order and the order lines together.
var orderTask = _dataAccess.GetOrder(id);
var orderLinesTask = _dataAccess.GetOrderLines(id);
await Task.WhenAll(orderTask, orderLinesTask);
this.OrderViewModel = orderTask.Result;
this.OrderLinesViewModels = orderLinesTask.Result;
Then when, for example, you need to reload the order lines:
this.OrderLinesViewModels = await _dataAccess.GetOrderLines(id);
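An EF-backed implementation of that abstraction could look roughly like this (a sketch; MyDbContext, the DbSet names and the context factory are assumptions, and AsNoTracking is optional):

public class EfOrderDataAccess : OrderDataAccess
{
    private readonly IDbContextFactory _contextFactory;

    public EfOrderDataAccess(IDbContextFactory contextFactory)
    {
        _contextFactory = contextFactory;
    }

    public async Task<Order> GetOrder(Guid id)
    {
        // a fresh context per call, so the data always comes from the database
        using (var context = _contextFactory.Create<MyDbContext>())
        {
            return await context.Orders
                .AsNoTracking()
                .FirstOrDefaultAsync(o => o.Id == id);
        }
    }

    public async Task<IEnumerable<OrderLine>> GetOrderLines(Guid orderId)
    {
        using (var context = _contextFactory.Create<MyDbContext>())
        {
            return await context.OrderLines
                .AsNoTracking()
                .Where(l => l.OrderId == orderId)
                .ToListAsync();
        }
    }
}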
Using transient DbContext instances just kicks the can down the road. Your ViewModel has some entity data that might be out-of-date. But it also might have unsaved changes. You simply have to decide how you want to handle that on a ViewModel-by-ViewModel basis.
In a Desktop App the ViewModel is the Unit-of-Work, and is still the proper scope for the DbContext.
If you decide you want to reload a tracked entity, or all the tracked entities for a DbContext instance, it shouldn't be a problem. E.g. something like:
void ReloadAllTrackedEntities()
{
    // re-read the current database values for every entity this context is tracking
    foreach (var entry in ChangeTracker.Entries())
    {
        entry.Reload();
    }
}
On a side-note, since you're building a desktop app, did you know EF Core supports using INotifyPropertyChanged for change tracking?
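If you want to try that, it's opt-in on the model; a minimal sketch (your entity classes would have to raise the corresponding notification events for it to work):

public class MyDbContext : DbContext
{
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // entities must implement INotifyPropertyChanging and INotifyPropertyChanged
        modelBuilder.HasChangeTrackingStrategy(
            ChangeTrackingStrategy.ChangingAndChangedNotifications);
    }
}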
What is the best way to refresh data in Entity Framework 5? I've got a WPF application showing statistics from a database where the data is changing all the time. Every 10 seconds the application updates the result, but the default behaviour of EF seems to be to cache the previous results. I would thus like a way to invalidate the previous results so a new set of data can be loaded.
The context of interest is defined in the following way:
public partial class MyEntities : DbContext
{
    ...
    public DbSet<Stat> Stats { get; set; }
    ...
}
After some reading I was able to find a few approaches, but I have no idea how efficient they are or whether they come with downsides.
Create a new instance of the entities object
using (var db = new MyEntities())
{
    var stats = from s in db.Stats ...
}
This works but feels inefficient because there are many other places where data is retrieved, and I don't want to reopen a new connection every time I need some data. Wouldn't it be more efficient to keep the connection open and do it another way?
Call Refresh on the ObjectContext
var stats = from s in db.Stats ...
ObjectContext.Refresh(RefreshMode.StoreWins, stats);
This also assumes I'm extracting ObjectContext from the dbContext in this way:
private MyEntities db = null;

private ObjectContext ObjectContext
{
    get
    {
        return ((IObjectContextAdapter)db).ObjectContext;
    }
}
This is the solution I'm using right now. It seems simple. But I've read that ObjectContext nowadays isn't directly accessible from DbContext because the EF team doesn't think anyone would need it, and that you can do everything you need directly in DbContext. This makes me think that maybe this is not the best way to do it. Or is it?
I know there is a Reload method on dbContext.Entry, but since I'm not reloading a single entity but rather retrieving a list of entities, I don't really know if that approach will work. If I get 5 Stat objects in the first query, save them in a list and reload them when it's time to update, I might miss others that have been added to the table in the database. Or have I completely misunderstood the Reload method? Can I do a reload on a DbSet specified in MyEntities?
There are a number of questions above, but what I mainly want to know is: what is the best practice in EF5 for issuing the same query against the database over and over again? It might very well be something that I haven't discovered yet...
Actually, even if it seems counterintuitive, the first option is the correct one; see this.
DbContext instances are designed to have short lifespans, so their instantiation cost is quite low compared to the cost of reloading everything; that's mostly due to things like caching and their data-loading design in general.
That's also why EF works so "naturally" well with ASP.NET MVC, since a DbContext is instantiated for each request.
That doesn't mean you have to create DbContext instances all over the place, of course. In your case, using one DbContext per update operation (the one happening every 10 seconds) seems good enough; if during that operation you needed to delete a particular row, for example, you would pass that DbContext around rather than create a new one.
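Concretely, the 10-second refresh could look something like this (just a sketch using DispatcherTimer; the Stats property and the AsNoTracking call are illustrative):

var timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(10) };
timer.Tick += (s, e) =>
{
    // a fresh context per tick, so EF reads current data instead of serving its cache
    using (var db = new MyEntities())
    {
        var latest = db.Stats.AsNoTracking().ToList();
        Stats = new ObservableCollection<Stat>(latest); // bound property on the ViewModel
    }
};
timer.Start();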
I'm using Entity Framework 4.0 and doing a lot of read operations on the database (data analysis); no data is saved back. Currently, despite lazy loading, the number of I/O operations against the database server slows the application down considerably. I decided to load most of the small tables into memory (.ToList()) and then run the calculations there. Is there a way to have the context automatically read the data from such a table only on the first reference and not query it again for the lifetime of the context? The idea is that subsequent references to the table would not hit the database, only the application's memory.
Now, I use this code:
public class cDBReader
{
    private List<RISK_T_MEMBERS> fMembers;

    public List<RISK_T_MEMBERS> Members
    {
        get
        {
            if (fMembers == null)
            {
                using (RiskEntities context = new RiskEntities(TConfiguration.connectionString))
                {
                    context.RISK_T_MEMBERS.MergeOption = System.Data.Objects.MergeOption.NoTracking;
                    fMembers = context.RISK_T_MEMBERS.ToList();
                }
            }
            return fMembers;
        }
        set { fMembers = value; }
    }
}
Could you please add some more information? What is wrong with the current implementation?
It seems like what you need is some kind of caching. Here is an example implementation of caching.
An even simpler (less overkill, in my opinion) way to implement caching would be this.
Last but not least is any implementation of your own (like the one you already made). Maybe the singleton pattern could help you here? That is, a singleton object that holds all the data you need globally and exposes getters that take the currently active context; if the cached data is null, it uses that context to load it. Dropping and recreating the context from time to time could also speed up the application after a while (and that's why I think you should pass the active context to the getData functions).
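A rough sketch of that singleton idea (names are illustrative, and thread safety is reduced to a simple lock):

public sealed class ReferenceDataCache
{
    private static readonly ReferenceDataCache instance = new ReferenceDataCache();
    public static ReferenceDataCache Instance { get { return instance; } }

    private readonly object sync = new object();
    private List<RISK_T_MEMBERS> members;

    private ReferenceDataCache() { }

    public List<RISK_T_MEMBERS> GetMembers(RiskEntities activeContext)
    {
        lock (sync)
        {
            if (members == null)
            {
                // load once through the caller's active context, without tracking
                activeContext.RISK_T_MEMBERS.MergeOption = System.Data.Objects.MergeOption.NoTracking;
                members = activeContext.RISK_T_MEMBERS.ToList();
            }
            return members;
        }
    }

    // call this after dropping/recreating the context if the data may have changed
    public void Invalidate()
    {
        lock (sync) { members = null; }
    }
}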
You could try eager loading instead of lazy loading first, to get full control over what EF loads, and just manually preload what you need once. The loaded data will stay alive within one context's lifetime; if you need more and are sure the database is read-only, you could store the data somewhere and then attach the preloaded entities when a new context is created.
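A sketch of the eager-loading idea (the navigation property name is illustrative):

using (var context = new RiskEntities(TConfiguration.connectionString))
{
    // load members together with the related rows you need, in a single query
    var members = context.RISK_T_MEMBERS
        .Include("RISK_T_CONTRACTS") // illustrative navigation property
        .ToList();
}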
Ran into a weird problem with RavenDB
public ActionResult Save(RandomModel model)
{
    // Do some stuff, validate model etc..
    RavenSession.Store(model);
    RavenSession.SaveChanges();

    var newListOfModels = RavenSession.Query<RandomModel>().ToList();
    return View("randomview", newListOfModels);
}
The newListOfModels does not contain the model I just added with the Store method.
However, if I add a Thread.Sleep(100) after SaveChanges, the stored model is included in the new list.
Am I storing and saving stuff to RavenDB the wrong way?
How should I be doing this?
Of course there is a workaround: just add the incoming model to newListOfModels and run SaveChanges afterwards, for example in a base controller's OnActionExecuted method.
My primary concern is why I need to delay the thread before I can query the document session and find my newly added model there.
RavenDB indexes are stale by their nature. From the documentation:
RavenDB performs data indexing in a background thread, which is executed whenever new data comes in or existing data is updated. Running this as a background thread allows the server to respond quickly even when large amounts of data have changed, however in that case you may query stale indexes.
So you need to tell RavenDB, when querying, to wait for the index to be refreshed.
You can do this with the various WaitFor... customizations; you will most probably want the WaitForNonStaleResultsAsOfLastWrite option:
var newListOfModels = RavenSession
    .Query<RandomModel>()
    .Customize(x => x.WaitForNonStaleResultsAsOfLastWrite())
    .ToList();
I recently ran into the surprising behaviour of EF4 where, after adding an entity to a context, it is not available for querying (well, you need to make your queries aware that you might be searching in memory) unless SaveChanges() is called.
Let me explain a bit our scenario:
We are using the UnitOfWork pattern with EF 4.0 and POCO objects. We recently decided to implement a message bus, with most of the application logic implemented in the message handlers.
The problem I ran into was when I was passing my UnitOfWork (a context wrapper in our case) around in the messages. For example, I have the logic for printing a barcode; when it runs, it should increase the printed counter in the DB. The printing can happen ad hoc for an existing package, or automatically when creating a special type of package. I pass the UnitOfWork over, and then I look for a barcode with:
public void Handle(IBarCodePrintMessage message)
{
    if (message.UnitOfWork == null)
    {
        using (var uow = factory.Create<IUnitOfWork>())
        {
            Handle(message, uow);
            uow.Commit();
        }
    }
    else
    {
        Handle(message, message.UnitOfWork);
    }
}

void Handle(IBarCodePrintMessage message, IUnitOfWork uow)
{
    // the barcode with the PackageID is in the context or in the db
    var barCode = uow.BarCodes.Where(b => b.PackageID == message.PackageID).SingleOrDefault();
    barCode.IncreasePrintCount(); // this might be actually quite complex, and sometimes fail
    printingServices.PrintBarCode(barCode);
}
My problem is that if the barcode was added within this same UoW and there was no Commit yet, then it is not found.
This is the simplest example; we have tons of code that was doing its own commit, and now all of this needs to go under one transaction.
I have a couple of questions actually:
1) I was thinking of somehow hacking my IUnitOfWork to return a set of objects that might be either in memory (uncommitted changes) or in the DB (if not retrieved yet). This is actually the behaviour I would expect from my UnitOfWork: tell me the latest state I am at, even though I haven't committed yet, rather than giving me the DB state. For the DB state I can create another context.
Anyway, this seems quite tricky, because I would need to implement my own kind of entity collection, all the extension methods on it (Where, Select, First, GroupBy, etc.), get it to work the IQueryable way (meaning it doesn't enumerate the tables straight away), and then find a way to match up the locally cached entities with the retrieved ones.
To me this seems to be a generic problem, and I would believe there is already an implementation out there; I'm just not sure where.
2) Another option is to introduce manual transactions. I tried it against SQL CE 4.0 and it might solve the issue (save the context very often, so the entities are queryable from the DB, and if any error occurs, roll back the transaction), but I have big doubts. There might be different threads running different transactions at the same time, and I'm not sure how they would interact if one rolls back.
Also, we are using SQL CE 4.0 and SQL Server Express 2008 (both, switchable dynamically). Handling transactions this way seems to bring in the DTC, which - I read everywhere - is quite heavyweight and which I would prefer to avoid. Is there an easy way to use transactions without the risk of escalating them to the DTC?
3) Does anyone have any other options or ideas on how to go about this problem?
You can query the context for not-yet-persisted entities.
var entity = context.ObjectStateManager
    .GetObjectStateEntries(EntityState.Added)
    .Where(e => !e.IsRelationship)
    .Select(e => e.Entity)
    .OfType<BarCode>()
    .FirstOrDefault(b => b.PackageID == message.PackageID);
You can make this more generic and incorporate it into your UoW logic, where you first check your unsaved entities and, if you don't find the entity there, query the database.
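A sketch of how such a lookup might sit inside the UnitOfWork (the method name and the context member are illustrative):

public T FindLocalOrStored<T>(Expression<Func<T, bool>> predicate) where T : class
{
    // first look at entities added to the context but not yet saved
    var added = context.ObjectStateManager
        .GetObjectStateEntries(EntityState.Added)
        .Where(e => !e.IsRelationship)
        .Select(e => e.Entity)
        .OfType<T>()
        .FirstOrDefault(predicate.Compile());

    if (added != null)
        return added;

    // otherwise fall back to the database (and the already-tracked, unchanged entities)
    return context.CreateObjectSet<T>().FirstOrDefault(predicate);
}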
Anyway, if you know that you are creating a new barcode that will be needed in subsequent processing, it should be passed in the "message" as a property exposed on your interface.