I have a WPF application that uses Entity Framework. I was reading a tutorial
and the author was talking about disconnected entities, which is the first time I had heard of them. I am a little confused as to when disconnected entities are actually needed, as I have been using EF just fine to do CRUD operations on my business objects. If I am using the same context for all CRUD operations on a business object, why would I ever need to manually track entity state changes? Thanks for any help.
If you are always keeping around the originating context instance, then you probably do not need to worry about disconnected entities. Disconnected entities often come up in the context of web services or web sites, where the context from which an entity was originally retrieved (and, for example, placed into a Session) is no longer available some time down the road when that entity has been modified and needs to be saved back to the database.
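To make that concrete, here is a minimal sketch of re-attaching a disconnected entity, assuming an EF6-style DbContext (MyDbContext, Persons, and modifiedPerson are illustrative names, not from the question):

// The context that originally loaded this entity is long gone (e.g. a
// previous web request). Attach it to a fresh context and mark it
// Modified so SaveChanges() issues an UPDATE for it.
using (var context = new MyDbContext())
{
    context.Persons.Attach(modifiedPerson);
    context.Entry(modifiedPerson).State = EntityState.Modified;
    context.SaveChanges();
}

Marking the whole entity Modified updates every column; that is the price of no longer having the original change-tracking information around.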
Related
This is a basic WinForms application, with no service layer or anything in between. I am fetching some records from the database using Entity Framework. The code below is in a class called PersonRepository.
var obj = Context.Persons.FirstOrDefault(u => u.Id == 20);
obj.RegisterDate = obj.RegisterDate.ToMountainStandardTime();
return obj;
ToMountainStandardTime is an extension method on DateTime.
After I pull this record and display it in the UI, the user performs some action on the screen, and based on requirements I insert a record into another table called "Activity". The user does not need to save anything back to the Person table.
When they are done, the code does this:
Context.Activities.Add(newActivityObject);
Context.SaveChanges();
Both methods are in the same class. Along with adding a new row to the Activity table, this also updates the register date of the selected Person.
I know the reason: the Context object is initialized in the constructor of the PersonRepository class and is used by all the methods in the class.
Most of my experience with EF is via RESTful services, where I didn't need to worry much about such things because we create a new context instance for every request.
I can simply handle this by detaching the object from the context before editing it, like this:
Context.Entry(obj).State = EntityState.Detached;
But I want to know: is there a better way to handle this?
You have a few choices to consider. Firstly, entities can only be relied upon to be valid within the scope of the DbContext they were read from; to cross DbContext boundaries they need to be detached and re-attached.
To keep entities scoped within their DbContext, your options are:
Long-lived (i.e. Singleton) DbContext.
Short-lived DbContexts: project entities into POCO containers and re-load entities on demand as needed.
The third option is to use short-lived DbContexts, but then manually manage detaching and re-attaching the entities.
I never recommend this third option as it is prone to errors and encourages issues like stale data overwrites. It's neat in concept, but more often than not becomes a repeated source of headaches in practice.
For smaller applications that themselves have relatively short runtimes, a long-lived DbContext can be a simple option to implement. The biggest negatives of a long-lived DbContext are:
Having a context alive for extended periods of time can mean performance degrades over time as more entities are cached. The assumption that cached entities are better for performance can be misplaced, since the time to perform operations against entities (updates/inserts) increases as the cache grows: EF looks through the cache for entity references that might be associated with new/changed entity values.
Data the context has loaded becomes stale if multiple instances are running, or if external processes modify the data state. By default EF returns cached copies, which must be manually reloaded if suspected of being stale.
For larger applications, or long-running applications, I would strongly lean towards short-lived DbContexts that rely on POCO ViewModels/DTOs for view-duration data state. This means leveraging projection via Select or AutoMapper's ProjectTo to load the relevant data from entities on demand and pass it to views, then reloading entities by ID during updates and transferring across the changed values, after verifying row version numbers / timestamps to detect possibly stale data. Reloading an entity and its related data by primary key is extremely quick.

Not only does this avoid the complexity/mess of trying to juggle detached entities (and reloading data state anyway to guard against stale overwrites), but it can lead to more optimal data-read operations and index utilization in many scenarios, especially things like search results that only need a few values from specific tables rather than entire entity graphs. A cardinal sin of passing entities to views is attempting to avoid extra data reads by avoiding eager loading and disabling lazy loading to leave "unused" relationships as null, or populating entity class objects with just a few fields via .Select to serve as a view model, which leads to errors or bad assumptions/overwrites in later code. An entity should always represent a complete (or completable) state of the data row. Dual-purposing entities to serve as both data domain state and view state is asking for trouble. Methods expecting an entity should never need to be concerned about whether they are getting a complete entity or a partially populated one.
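As a rough sketch of that projection approach, reusing the Person example from the question above (PersonSummary is a hypothetical DTO):

public class PersonSummary
{
    public int Id { get; set; }
    public DateTime RegisterDate { get; set; }
}

var summary = Context.Persons
    .Where(p => p.Id == 20)
    .Select(p => new PersonSummary
    {
        Id = p.Id,
        RegisterDate = p.RegisterDate
    })
    .FirstOrDefault();

// The timezone conversion is now applied to a plain POCO, so nothing
// in the change tracker is dirtied behind the scenes:
summary.RegisterDate = summary.RegisterDate.ToMountainStandardTime();

Since the view only ever sees the DTO, the accidental UPDATE to the Person row from the original question cannot happen.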
I am using Entity Framework 6 where, for performance reasons, I load my entities into my DbContext up front and then use them locally. Up to now, all changes to the database have gone through the DbContext, so my local entities and the database have stayed in sync. However, I now have to call a stored procedure on the database, which has the side effect of making changes to tables (outside of the DbContext) that need to be reflected in my entities. By changes, I mean it adds new records and deletes/updates existing records.
I do not want to dispose of my DbContext and create a new one, as some of the entity instances are wrapped within ViewModel classes. Discarding the DbContext in this way would lead to major problems in the UI.
It is my understanding that simply calling Load() on all the DbSets of the DbContext will just replace the existing instances, so any objects holding the old entity instances won't work.
So, I thought I could use the Reload method, like this:
context.Entry(entity).Reload();
which would update my local entities, but I can only do this for entities the DbContext already knows about. It doesn't cover any NEW entities, or DELETED entities, that were created/deleted as a result of the stored procedure executing.
So, I am looking for a way to:
Load, from the database, entities that are NEW to my DbContext
Reload existing entities in my DbContext
Remove any deleted entities from my DbContext
Here is a tutorial on choosing a development approach with Entity Framework.
Starting from an analysis of your database situation, it suggests smart and quick ways to achieve what you want, detailing data-read strategies (such as eager or lazy loading) where necessary, and providing walkthroughs for using code generation and the wizard GUI correctly.
http://www.entityframeworktutorial.net/choosing-development-approach-with-entity-framework.aspx
Here is some more detailed info and a tutorial on data-read strategies:
https://www.c-sharpcorner.com/article/eager-loading-lazy-loading-and-explicit-loading-in-entity-framework/
As I already told you in the comments, I would suggest a database-first approach with lazy loading, to avoid uncontrolled data behaviour (or reloading the whole DB when running a stored procedure).
As for the stored procedure, it can simply be mapped through the wizard that comes with Entity Framework and wrapped in a method.
Hope you will find these resources helpful!
In general, Entity Framework cannot be aware of changes made in the database and update the DbContext on its own; there is no optimized, built-in Entity Framework solution for this.
You could use CDC (Change Data Capture) in SQL Server to push changes to your application and update your DbContext there, but that is not an acceptable solution for every business scenario.
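Absent anything built in, a manual refresh along these lines is roughly what EF6 allows (a sketch only; Person and Id are illustrative names):

// Refresh tracked Person entities after the stored procedure has run.
var tracked = context.ChangeTracker.Entries<Person>().ToList();
var liveIds = new HashSet<int>(context.Persons.AsNoTracking().Select(p => p.Id));

foreach (var entry in tracked)
{
    if (!liveIds.Contains(entry.Entity.Id))
        entry.State = EntityState.Detached; // row was deleted outside the context
    else
        entry.Reload();                     // refresh values from the database
}

// Rows inserted outside the context become tracked on the next query;
// instances that are already tracked are preserved by identity resolution.
var all = context.Persons.ToList();

This keeps existing object references (and therefore the ViewModels wrapping them) intact, which was the asker's constraint.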
I am developing a multi-tier desktop application (onion architecture), with a WinForms project as the UI. I use EF Code First to access my DB, and for my domain models I want to use POCOs, so I have two choices:
Connected POCOs
Disconnected POCOs
If I use disconnected POCOs I have to do a lot of work outside of EF and forgo EF features, because (a sketch of this bookkeeping follows the list):
I have to save and manage each entity's state on the client side.
When I want to save changes to the DB, I have to sync the client-side POCO's state with the DbContext entity's state.
When adding POCOs to a newly created DbContext, I have to take care not to add two entities with the same key to the DbContext.
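As a rough illustration of that bookkeeping, the common pattern is to carry your own state flag on each POCO and translate it to an EntityState when re-attaching (ObjectState and ApplyState are hypothetical names):

public enum ObjectState { Unchanged, Added, Modified, Deleted }

void ApplyState(DbContext context, object entity, ObjectState state)
{
    var entry = context.Entry(entity);
    // Assigning entry.State below also attaches a detached entity.
    switch (state)
    {
        case ObjectState.Added:    entry.State = EntityState.Added;    break;
        case ObjectState.Modified: entry.State = EntityState.Modified; break;
        case ObjectState.Deleted:  entry.State = EntityState.Deleted;  break;
        default:                   entry.State = EntityState.Unchanged; break;
    }
}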
So it seems natural to use connected POCOs, but in that situation I think I face the following challenges:
If I want to manage the lifetime of my DbContexts using an IoC container in a multiuser environment, and keep the DbContexts alive for every user from the time they get their POCOs to the time they save their changes back to the DB, I think that takes a large amount of server memory and isn't efficient.
I want to save the related entity graph below in one transaction.
I have these questions:
How can I use connected POCOs in my multi-tier application?
Which configuration should I use to manage the lifetime of my DbContexts with an IoC container in this case?
Is there any way to build (or change) the whole graph on the client side and then pass it to the DbContext to save?
Is there any sample that implements these situations?
We have two web applications, and each one creates its own data context for EF. When I make changes to an entity in one app, I see the changes on the page and in the database when I view the data in SQL Server Management Studio. However, I don't immediately see the changes in the other application.
Both apps use dependency injection, and both use the same business layer and data layer. So the UI in both apps goes through a common controller class (not to be confused with MVC controllers), and the controller goes through the repository for the entity it is retrieving. Because they are different apps, they each have their own instance of the Entity Framework data context.
If there is some kind of caching going on, how can I turn it off?
Thanks in advance.
EDIT - Maybe the caching is occurring somewhere above EF? Clearing my browser cache doesn't seem to fix the issue. After some time goes by I will suddenly see the update to the record in the other app, but for a while no amount of refreshing will show me the updates.
If the caching is occurring in the context, the cached values should go away when you dispose of the context at the end of the current HTTP request. You do have a per-request context, right?
Here are some context lifetime best practices.
Yes, contexts cache their results.
You should be creating a new context with every query (or implement something like NHibernate's Session-per-request)
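For example, a minimal per-operation context might look like this (MyDbContext and Person are illustrative names):

public Person GetPerson(int id)
{
    using (var context = new MyDbContext())
    {
        // A fresh context has an empty first-level cache, so this always
        // reflects the current database state. AsNoTracking also skips
        // change tracking for read-only data.
        return context.Persons.AsNoTracking().FirstOrDefault(p => p.Id == id);
    }
}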
Question - What is a good best-practice approach to saving/keeping in sync an in-memory graph of objects with the database?
Background:
That is to say, I have the classes Node and Relationship, and the application builds up a graph of related objects using these classes. There might be 1000 nodes with various relationships between them. The application needs to query the structure, hence an in-memory approach is no doubt good for performance (e.g. traversing the graph from Node X to find the root parents).
The graph does, however, need to be persisted into a database with tables NODES and RELATIONSHIPS.
Therefore, what is a good best-practice approach to saving/keeping in sync an in-memory graph of objects with the database?
Ideal requirements would include:
build up changes in-memory and then 'save' afterwards (mandatory)
when saving, apply updates to database in correct order to avoid hitting any database constraints (mandatory)
keep persistence mechanism separate from model, for ease in changing persistence layer if needed, e.g. don't just wrap an ADO.net DataRow in the Node and Relationship classes (desirable)
mechanism for doing optimistic locking (desirable)
Or is the overhead of all this for a smallish application just not worth it, and should I simply hit the database each time for everything (assuming the response times were acceptable)? [I would still like to avoid that if it doesn't add too much overhead, to remain somewhat scalable performance-wise.]
I'm using the self-tracking entities in Entity Framework 4. After the entities are loaded into memory, StartTracking() MUST be called on every entity. Then you can modify your entity graph in memory without any DB operations. When you're done with the modifications, you call the context extension method ApplyChanges(rootOfEntityGraph) followed by SaveChanges(), and your modifications are persisted. Then you have to start the tracking again on every entity in the graph. Two hints/ideas I'm using at the moment:
1.) call StartTracking() at the beginning on every entity
I'm using an interface, IWorkspace, to abstract the ObjectContext (this simplifies testing -> see the open-source implementation bbv.DomainDrivenDesign at SourceForge). They also use a QueryableContext. So I created a further concrete Workspace and QueryableContext implementation and intercept the loading process with my own IEnumerable implementation. When the workspace's consumer executes the query obtained via CreateQuery(), my intercepting IEnumerable object registers an event handler on the context's ChangeTracker. In this event handler I call StartTracking() for every entity loaded and added into the context (this doesn't work if you load the objects with NoTracking, because in that case the objects aren't added to the context and the event handler will not be fired). After the enumeration in the self-made iterator, the event handler on the ObjectStateManager is deregistered.
2.) call StartTracking() after ApplyChanges()/SaveChanges()
In the workspace implementation, I ask the context's ObjectStateManager for the changed entities, e.g.:
var addedEntities = this.context.ObjectStateManager.GetObjectStateEntries(EntityState.Added);
--> analogous for modified entities
Then I cast them to IObjectWithChangeTracker and call the AcceptChanges() method on the entity itself. This starts the object's change tracker again.
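Putting that restart step together, here is a sketch based on the description above (the EF4 self-tracking entity T4 template generates AcceptChanges() for IObjectWithChangeTracker):

var entries = this.context.ObjectStateManager
    .GetObjectStateEntries(EntityState.Added | EntityState.Modified);

foreach (var entry in entries)
{
    // Relationship entries have no Entity, so the cast filters them out.
    var tracked = entry.Entity as IObjectWithChangeTracker;
    if (tracked != null)
    {
        tracked.AcceptChanges(); // resets and restarts the entity's change tracker
    }
}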
For my project I have the same mandatory points as you. I played around with EF 3.5 and didn't find a satisfactory solution. But the new ability of self-tracking entities in EF 4 seems to fit my requirements (as far as I have explored the functionality).
If you're interested, I'll send you my "spike"-project.
Does anyone have an alternative solution? My project is a server application which holds objects in memory for fast operations, while modifications should also be persisted (without a round trip to the DB each time). At some points in the code, object graphs are marked as deleted/terminated and are removed from the in-memory container. With the solution explained above, I can reuse the model generated by EF and don't have to code and wrap all the objects myself again. The generated code for the self-tracking entities comes from T4 templates, which can be adapted very easily.
Thanks a lot for other ideas/criticism.
The short answer is that you can still keep a graph (a collection of linked objects) in memory and write the changes to the database as they occur. If this is taking too long, you could put the changes onto a message queue (but that is probably overkill) or execute the updates and inserts on a separate thread.
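As a minimal sketch of the separate-thread option (GraphDbContext, Nodes, and newNode are illustrative names; note the context is created and disposed on the worker thread, since EF contexts are not thread-safe):

ThreadPool.QueueUserWorkItem(_ =>
{
    using (var context = new GraphDbContext())
    {
        context.Nodes.Add(newNode);
        context.SaveChanges();
    }
});

A message queue adds durability across process restarts on top of this, which is usually the only reason to prefer it.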