The way I use LINQ to SQL is with one global DataContext.
I am having problems, though. I have one page that grabs all Cases from the db and copies data from the resulting IEnumerable.
Then, when I navigate away to do some updates on those cases, it fails.
Is there anything I can do to fix concurrency issues, or these types of general issues, while still using only one data context per user session? Would it help if I used a new data context on every page load or something?
Thanks
You cannot have one global DataContext in ASP.NET; you need one context per HTTP request, or you'll run into issues like the ones you mention. LINQ to SQL tracks object changes in a graph, and a static context would contain instances of objects disposed in a previous HTTP request. Plus, over time it would get bloated and take up a lot of memory. The usual approach is to store an instance of it in HttpContext.Current.Items.
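A minimal sketch of that per-request pattern, assuming a generated context class named MyDataContext (the helper class and key name are hypothetical):

using System.Web;

public static class DataContextFactory
{
    private const string Key = "__DataContext";

    // One DataContext per HTTP request, created lazily and reused until the request ends.
    public static MyDataContext Current
    {
        get
        {
            var context = (MyDataContext)HttpContext.Current.Items[Key];
            if (context == null)
            {
                context = new MyDataContext();
                HttpContext.Current.Items[Key] = context;
            }
            return context;
        }
    }
}

// Dispose it when the request ends, e.g. in Global.asax:
// protected void Application_EndRequest(object sender, EventArgs e)
// {
//     var context = HttpContext.Current.Items["__DataContext"] as MyDataContext;
//     if (context != null) context.Dispose();
// }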
This is a basic WinForms application, no service or anything in between. I am fetching some records from the db using Entity Framework. The code below is in a class called PersonRepository.
var obj = Context.Persons.FirstOrDefault(u => u.Id == 20);    // fetch the single person record
obj.RegisterDate = obj.RegisterDate.ToMountainStandardTime(); // convert the date for display
return obj;
ToMountainStandardTime is an extension method on the DateTime type.
Now, after I pull this record, I display it in the UI. The user does some action on the screen, and based on the requirements I insert a record into another table called "Activity". The user doesn't need to save anything back to the Person table.
After they are done, I do this:
Context.Activities.Add(newActivityObject);
Context.SaveChanges();
Both methods are in the same class. Along with adding the new object to the Activity table, this also updates the register date of the selected Person.
I know the reason: the Context object is initialized in the constructor of the PersonRepository class and is used by all the methods in the class.
Most of my experience is with using this via RESTful services, where I don't need to worry much about such things because we create a new context instance for every request.
I can simply handle this by detaching the object from the context before editing it, like this:
Context.Entry(obj).State = EntityState.Detached;
But I want to know: is there a better way to handle this?
You have a few choices to consider. Firstly, entities can only be relied upon to be valid within the scope of the DbContext they were read by; otherwise they need to be detached and re-attached to cross DbContext boundaries.
To keep entities scoped within their DbContext, your options are:
Long-lived (i.e. Singleton) DbContext.
Short-lived DbContexts: project entities into POCO containers and re-load entities on demand as needed.
The third option is to use short-lived DbContexts, but then manually manage detaching and re-attaching the entities.
I never recommend this third option as it is prone to errors and encourages issues like stale data overwrites. It's neat in concept, but more often than not becomes a repeated source of headaches in practice.
For smaller applications that themselves have relatively short runtime lives, a long-lived DbContext can be a simple-to-implement option. The biggest negatives of a long-lived DbContext are:
Having a context alive for extended periods means performance degrades over time as more entities are cached. The assumption that cached entities are better for performance can be misplaced, because the time to perform operations against entities (updates/inserts) increases as the cache grows, since EF will look through the cache for entity references that might be associated with new/changed entity values.
Data that the context has loaded will become stale if multiple instances are running or external processes modify the data state. By default EF returns cached copies, which must be manually reloaded if suspected of being stale.
For larger or long-running applications, I would strongly lean towards using short-lived DbContexts that rely on POCO ViewModels/DTOs for view-duration data state. This means leveraging projection via Select, or Automapper's ProjectTo, to load the relevant data from entities on demand to pass to views, then reloading entities by ID and transferring across data or performing actions with the changed state values during updates, after verifying row version numbers / timestamps to detect possible stale data state. Reloading an entity and its related data by PK is extremely quick.
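A rough sketch of that read/update split, using the Person example from earlier; the PersonViewModel type, the RowVersion column, and the concurrency exception are assumptions to adapt to your model:

public class PersonViewModel
{
    public int Id { get; set; }
    public DateTime RegisterDate { get; set; }
    public byte[] RowVersion { get; set; }   // rowversion/timestamp column for stale-data checks
}

// Read side (e.g. in PersonRepository): project only what the view needs.
public PersonViewModel GetPerson(int id)
{
    return Context.Persons
        .Where(p => p.Id == id)
        .Select(p => new PersonViewModel
        {
            Id = p.Id,
            RegisterDate = p.RegisterDate,
            RowVersion = p.RowVersion
        })
        .Single();
}

// Write side: re-load the entity by PK, verify the row version, then apply the change.
public void UpdateRegisterDate(PersonViewModel model)
{
    var person = Context.Persons.Single(p => p.Id == model.Id);
    if (!person.RowVersion.SequenceEqual(model.RowVersion))
        throw new DbUpdateConcurrencyException("The record was changed by someone else.");

    person.RegisterDate = model.RegisterDate;
    Context.SaveChanges();
}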
Not only does this avoid the complexity/mess of trying to juggle detached entities (and reloading data state anyway to guard against stale overwrites), but it can lead to more optimal data read operations and index utilization for many scenarios, especially things like search results which only need a few values from specific tables rather than reading entire entity graphs. A cardinal sin of passing entities to views is attempting to avoid extra data reads by avoiding eager loading and disabling lazy loading to leave "unused" relationships as #null, or populating entity class objects with just a few fields via .Select to serve as a view model, which leads to errors or bad assumptions/overwrites in later code. An entity should always represent a complete (or completable) state of the data row. Dual-purposing entities to serve as both data domain state and view state is asking for trouble. Methods expecting an entity should never need to be concerned about whether they are getting a complete entity or a partially complete one.
I wonder if somebody could point me in the right direction. I've recently started playing with LinqToSQL and love the strongly typed data objects etc.
I'm just struggling to understand the impact on database performance etc. For example, say I was developing a simple user profile page. The page shows basic information about the user, some information on their recent activity, and a list of unread notifications.
If I was developing a stored procedure for this page, I could create a single SP which returns multiple datatables covering all of the required information - resulting in a single db call.
However, using LinqToSQL this could result in many calls: one for user info, at least one for activity, at least one for notifications, and if I then want further info on the notifications this may result in further calls still - multiple db calls.
Should I be worried about the number of db calls happening as a result of using this design pattern? I.e., are the multiple db handshakes etc. going to degrade my db?
I'd appreciate your thoughts on this!
Thanks
David
LINQ to SQL can consume multiple results from a stored proc if you need to go that route. Unfortunately the designer has problems mapping them correctly, so you will probably need to create your mapping manually. See http://www.thinqlinq.com/Default/Using-LINQ-to-SQL-to-return-Multiple-Results.aspx.
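The manual mapping ends up looking much like the code the designer would otherwise generate; here is a rough sketch, where the proc name, entity classes, and parameter are placeholders:

using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;
using System.Reflection;

public partial class MyDataContext   // partial class alongside the generated DataContext
{
    [Function(Name = "dbo.GetUserProfileData")]
    [ResultType(typeof(User))]
    [ResultType(typeof(Notification))]
    public IMultipleResults GetUserProfileData([Parameter(DbType = "Int")] int userId)
    {
        IExecuteResult result = this.ExecuteMethodCall(
            this, (MethodInfo)MethodInfo.GetCurrentMethod(), userId);
        return (IMultipleResults)result.ReturnValue;
    }
}

// Usage: result sets are read back in the order the proc returns them.
using (IMultipleResults results = db.GetUserProfileData(20))
{
    var user = results.GetResult<User>().SingleOrDefault();
    var notifications = results.GetResult<Notification>().ToList();
}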
You can configure LINQ to SQL to eagerly load the child records if you know that you're going to need them for every parent record. Use the DataLoadOptions and .LoadWith to configure it.
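For example (the context and association names here are illustrative):

using System.Data.Linq;

var db = new MyDataContext();

var options = new DataLoadOptions();
options.LoadWith<User>(u => u.Activities);      // eager-load each user's activities
options.LoadWith<User>(u => u.Notifications);   // and their notifications
db.LoadOptions = options;                       // must be set before the first query runs

var user = db.Users.Single(u => u.UserId == userId);
// user.Activities and user.Notifications are loaded along with the parent.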
You can also project an object graph with multiple child collections in the Select clause of a LINQ query to reduce the number of DB hits that you make.
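For instance, a single projection can shape the user info plus both child collections for the profile page (the member names are just an example):

var profile = (from u in db.Users
               where u.UserId == userId
               select new
               {
                   u.UserName,
                   u.Email,
                   RecentActivity = u.Activities
                       .OrderByDescending(a => a.CreatedOn)
                       .Take(10),
                   UnreadNotifications = u.Notifications
                       .Where(n => !n.IsRead)
               }).Single();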
Ultimately, you need to test a number of options to determine which route gives the best performance for your situation. It's not a one-size-fits-all scenario.
Is it worse from a performance standpoint? Yes, it should be. Multiple round trips are usually worse than a single one.
The real question is, do you mind? Is your application going to receive enough visits to warrant the added complexity of a stored procedure? Or do you value the simplicity of future modifications over raw performance?
In any case, if you need the performance, you can create a stored procedure and map it on your context. This will give you a single call, but still return the data as objects.
Here is an article explaining a bit about that option:
linq-to-sql-returning-multiple-result-sets
In the project my team is working on, there is a Windows service which iterates through all the entities in a certain table and updates some of their fields based on some rules we defined. We use NHibernate as our ORM tool. Currently, we open one session and one transaction for the entire process, which means the transaction is committed after all the entities have been processed. I think this approach isn't good, and I wanted to hear some more opinions:
Should we keep our current way of managing the session, or should we move to a different approach?
One option I thought about is opening a transaction per entity, and another suggestion was to open a new session for each entity.
Which approach do you think will work best?
There isn't a single way to do it; it all depends on the specific cases.
In the app I'm working on, I have examples of the three approaches, and there's a reason for choosing each one. For example:
The whole process must have transactional atomicity: use a single transaction
The process has a lot of common data, but each record in the "master" table can be considered a unit of work: use a single session, multiple transactions (see the sketch after this list)
Processing each record in the master table should be independent from the others (including error handling): use a session per record
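A rough sketch of that second option, with placeholder entity and rule names (CaseEntity, ApplyRules) and assuming NHibernate.Linq for Query<T>():

using (ISession session = sessionFactory.OpenSession())
{
    // One session for the whole run, one transaction per "master" record.
    var ids = session.Query<CaseEntity>().Select(c => c.Id).ToList();

    foreach (var id in ids)
    {
        using (ITransaction tx = session.BeginTransaction())
        {
            var entity = session.Get<CaseEntity>(id);
            ApplyRules(entity);   // apply the update rules to this entity
            tx.Commit();          // each record is committed independently
        }
    }
}
// If a record throws, the usual NHibernate guidance is to discard the session,
// which is why the third option (session per record) is the fit when error
// handling must be independent per record.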
I've written my own caching layer for my objects that come out of data access. My reasoning here is I'd like my data access layer to do just that -- data access. I don't really want it to worry about caching, and I'd only like to go in to that layer when I need to fetch data out of the database. Perhaps this is not the right way to think about things -- please let me know if I'm off track.
Anyway, there is at least one issue that I've run into so far. In one scenario, I load an object from NHibernate and stick it in the cache in one request. In the next request I get that object from the cache, modify it, and go back down to NHibernate to save it. Obviously NHibernate pukes, in this particular instance with an "Illegal attempt to associate a collection with two open sessions" exception.
So my question is, I guess, is there anything I should be aware of or do to make this work? Or should I just use a 2nd level cache that's built in to NHibernate?
NHibernate has caching for a reason... use it :)
You'll find there are quite a few options for a second-level cache provider that give you much more flexibility, for less effort, than you could build yourself. A perfect example is something like memcached if you decide you need to run the service on multiple systems.
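Wiring one up is mostly configuration; a rough sketch using the SysCache provider (any of the NHibernate.Caches providers would work, and the provider string here is an assumption to adapt to the package you pick):

var cfg = new NHibernate.Cfg.Configuration().Configure();   // existing mappings/connection settings

cfg.SetProperty(NHibernate.Cfg.Environment.UseSecondLevelCache, "true");
cfg.SetProperty(NHibernate.Cfg.Environment.UseQueryCache, "true");
cfg.SetProperty(NHibernate.Cfg.Environment.CacheProvider,
    "NHibernate.Caches.SysCache.SysCacheProvider, NHibernate.Caches.SysCache");

var sessionFactory = cfg.BuildSessionFactory();

// Each cached entity still needs a <cache usage="read-write" /> element in its mapping,
// and individual queries opt in with .SetCacheable(true).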
Sorry if this is a duplicate. Please point me to the appropriate question if it is, but I could not find exactly what I am looking for.
So I am using a LINQ to SQL DataContext for entity tracking and persistence in an ASP.NET web application. It is for an intranet application that does not have a ton of users at a time. Right now I am storing the DataContext in session state, which makes me feel dirty! It seems like I need the context to always be present, though, because I need to preserve the change tracking on the entities that are being modified. All of our screens have a Save button that then calls SubmitChanges() on the DataContext and persists all of the pending changes in memory.
Should I be storing the DataContext? Should I be disposing of it at the end of each request and then recreating it somehow to get the pending changes? If I should recreate it every time, I don't understand how the context could know what has changed without a ton of redundant database hits on each request.
First, I would say to stop putting things in Session altogether. Especially if you don't have a lot of users, just load the data when you need it.
Don't store the data context at all. Just create a new one on each page when you need it. When they hit the Save button, recreate a data context, load the object from the database, make the changes necessary based on the form input, and then save it back to the database. It should just be two database hits for each object, one to load, and then one to save it back.
I think the best practice with the data context is the Unit of Work pattern, where the scope of the unit of work is the single request that you are servicing. Instantiate a new data context each time you need to make changes. If you're concerned about overwriting changes made after the previous page was drawn, then consider persisting a version/timestamp in a hidden field and checking it against the value returned from the data context when you retrieve the entity to update.
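A rough sketch of that Save flow, with hypothetical table, column, and control names (Cases, RowVersion, the hidden fields, and the drop-down are all placeholders):

protected void SaveButton_Click(object sender, EventArgs e)
{
    using (var db = new MyDataContext())   // a new context just for this request
    {
        int id = int.Parse(CaseIdHiddenField.Value);
        var entity = db.Cases.Single(c => c.Id == id);

        // RowVersionHiddenField was populated with the base64-encoded timestamp when the page was rendered.
        var originalVersion = Convert.FromBase64String(RowVersionHiddenField.Value);
        if (!entity.RowVersion.ToArray().SequenceEqual(originalVersion))
        {
            // Someone else changed the row since the page was drawn; show an error instead of overwriting.
            return;
        }

        // Apply the form input and persist; change tracking only lives for this request.
        entity.Status = StatusDropDown.SelectedValue;
        db.SubmitChanges();
    }
}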