I'm using LINQ to SQL, and having a bit of an issue incrementing a view counter cross-connection.
The teeny bit of code I'm using is:
var t = this.AppManager.ForumManager.GetThread(id); // fetch the row
t.Views = t.Views + 1;                               // modify the field
this.AppManager.DB.SubmitChanges();                  // write it back
Now, in my tests I am running this multiple times, non-concurrently; there are a total of four copies of the object performing this test.
That is to say, there is no locking issue or anything like that, but there are four data contexts.
I would expect this to work as follows: fetch a row, modify a field, update the row. However, it is throwing a ChangeConflictException.
Why would the change be conflicted if none of the copies of this are running concurrently?
Is there a way to ignore change conflicts on a certain table?
EDIT: Found the answer:
You can set "UpdateCheck = Never" on all columns on a table to create a last-in-wins style of update. This is what the application was using before I ported it to LINQ, so that is what I will use for now.
EDIT2: While my fix above did indeed prevent the exception from being thrown, it did not fix the underlying issue:
Since I have more than one data context, there ends up being more than one cached copy of each object. Should I be recreating my data context with every page load?
I would rather instruct the data context to forget everything. Is this possible?
I believe DataContext is intended to be relatively lightweight and short-lived. IMO, you should not cache data loaded with a DataContext longer than necessary. Kept short-lived, it remains relatively small, because (as I understand it) the DataContext's memory usage is primarily tied to tracking the changes you make to the objects it manages (i.e., the objects it retrieved).
In the application I work on, we create the context, display data in the UI, wait for user updates, and then save the data. That is necessary mainly because we want the update to be based on what the user is looking at (otherwise we could retrieve the data and update it all at once when the user hits update). If your updates are relatively free-standing, I think it would be sensible to retrieve the row immediately before updating it.
You can also use System.Data.Linq.DataContext.Refresh() to re-sync already-retrieved data with data in the database to help with this problem.
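For example (a sketch reusing the question's own context and t; RefreshMode lives in System.Data.Linq):

using System.Data.Linq;

// Throw away locally cached values and re-read the row from the database.
// OverwriteCurrentValues discards any unsubmitted local changes.
this.AppManager.DB.Refresh(RefreshMode.OverwriteCurrentValues, t);

// Or keep your own pending edits and merge in everyone else's:
// this.AppManager.DB.Refresh(RefreshMode.KeepChanges, t);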
To respond to your last comment about making the context forget everything: I don't think there is a way to do that, but I suspect that's because a context is little more than its tracked changes (and its connection). Creating a new context (and remembering to dispose of the old one) is just as well, because what you really want to throw away is everything the context is.
I'm trying to make Entity Framework work properly with my application. The scenario I have is something like the following: say I have 8,000 items, and each item has 100 components. The way it currently works is that I eager-load the 8,000 items and lazy-load the components for each item, because eager loading the entire thing would make application startup too slow.
As far as I understand, for lazy loading to work I need to keep the context alive for the whole application lifetime, so I have a single instance of the context that is opened on startup and closed on exit. I also use it to track changes and save changes.
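For illustration, that setup might look like this (Item, Component, and CatalogContext are hypothetical names, not from the real application; EF code-first style):

using System.Collections.Generic;
using System.Data.Entity;

public class Item
{
    public int Id { get; set; }
    // virtual => EF generates a lazy-loading proxy, so Components are
    // only queried the first time the collection is touched.
    public virtual ICollection<Component> Components { get; set; }
}

public class Component
{
    public int Id { get; set; }
    public int ItemId { get; set; }
}

public class CatalogContext : DbContext
{
    public DbSet<Item> Items { get; set; }
    public DbSet<Component> Components { get; set; }
}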
However I've been reading about EF and many people advise against this approach, in favor of opening and closing contexts at each operation. My question is: how would you go about lazy loading properties, tracking changes, and saving changes if I cannot work with the same context?
Furthermore, I am already facing issues, since I use different threads to load data in the background or save in the background (say it's saving: if I edit a tracked property, it raises an exception). I fixed some of them by using a FIFO queue (on a dedicated thread) for operations on the same context, but property-change tracking won't respect the queue.
Some help would be greatly appreciated as to how to use EF properly.
When using Entity Framework, reading from some tables/views seems to give me back old data; by this I mean data that an external process has since changed.
When running my code, I can see (using the profiler) EF build and run a SQL query to retrieve the data, but the old values still end up in the object.
What is more confusing to me is that this does not happen for all tables/views; but for the tables/views it does affect, it is consistent.
If I restart IIS I get the correct result, so clearly the values are being held somewhere.
What is causing this selective caching of data, and how do I influence it?
This is normal when you use the same instance of ObjectContext for too long. Make its lifetime as short as possible; an instance per request should be fine.
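A minimal sketch of instance-per-request, assuming a context class named MyEntities with a Products set (both names are made up):

using System.Linq;

public Product GetProduct(int id)
{
    // The context lives only for this call: nothing stays cached between
    // requests, every read hits the database, and the returned entity is
    // detached once the context is disposed.
    using (var context = new MyEntities())
    {
        return context.Products.Single(p => p.Id == id);
    }
}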
I currently have a method which reads data to determine whether an update is needed, and then pushes the update to the database (dependency-injected). The method is hit very hard, and I found concurrency-related bugs: namely, multiple updates, since several threads read the data before the first update was written.
I solved this using a lock, and it works quite nicely. How may I instead use a TransactionScope to do the same thing? Can I? Will it block another thread as a lock would? Further, can I 'lock' on a specific 'id' as I am doing with a lock (I keep a Dictionary that stores an object to lock on for each id)?
I am using Entity Framework 5, though it's hidden behind a repository and unit-of-work pattern.
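For reference, the per-id lock described above usually looks something like this sketch (UpdateIfNeeded is a stand-in for the real method; GetOrAdd makes the dictionary lookup itself thread-safe):

using System.Collections.Concurrent;

private static readonly ConcurrentDictionary<int, object> _locks =
    new ConcurrentDictionary<int, object>();

public void UpdateIfNeeded(int id)
{
    // One lock object per id: threads working on different ids do not
    // block each other, while threads on the same id serialize.
    lock (_locks.GetOrAdd(id, _ => new object()))
    {
        // read the current state, decide, then push the update
    }
}

Note that a TransactionScope by itself will not serialize readers the way this lock does; any blocking comes from the locks the database takes inside the transaction, which depends on the isolation level and the statements executed.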
Application-level locking may not be a solution for this problem. First of all, you usually need to lock only a single record or a range of records. Later you may also need to lock other modifications, and you can quickly end up with quite complex code.
This situation is usually handled with either optimistic or pessimistic concurrency.
Optimistic concurrency - you add a database-generated column (databases usually have a special type for this, like timestamp or rowversion). The database automatically updates that column every time the record is updated. If you configure this column as a row version, EF includes it in the WHERE condition of the UPDATE, so the executed update searches for the record with the given key and row version. If the record is found, it is updated. If it is not found, then either no record with that key exists or someone else has updated the record since the current process loaded its data; you get an exception, and you can refresh your data and try to save again. This mode is useful for records which are not updated too often. In your case it may just cause more trouble.
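Sketched with EF 5's DbContext API (the Account entity and its RowVersion property are assumptions, not from the question):

using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Linq;

public class Account
{
    public int Id { get; set; }
    public decimal Balance { get; set; }

    // Maps to a SQL Server rowversion column; EF adds it to the WHERE
    // clause of every UPDATE for this entity.
    [Timestamp]
    public byte[] RowVersion { get; set; }
}

public void SaveHandlingConflicts(DbContext context)
{
    try
    {
        context.SaveChanges();
    }
    catch (DbUpdateConcurrencyException ex)
    {
        // Someone updated the row after we read it: reload the database
        // values, then reapply the change and call SaveChanges() again.
        ex.Entries.Single().Reload();
    }
}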
Pessimistic concurrency - this mode uses database locking instead. When you query the record, you lock it for update, so no one else can lock it for update or update it directly. Unfortunately, this mode currently has no direct support in EF, and you must implement it through raw SQL. I wrote an article explaining pessimistic concurrency and its usage with EF. Even pessimistic concurrency may not be a good solution for a database under heavy load.
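The raw-SQL variant usually looks roughly like this (a sketch: MyEntities, Accounts, and the column names are invented, and the UPDLOCK hint is SQL Server specific):

using System.Data.Entity;
using System.Linq;
using System.Transactions;

public void Withdraw(int id, decimal amount)
{
    using (var scope = new TransactionScope())
    using (var context = new MyEntities())
    {
        // UPDLOCK holds an update lock until the transaction ends, so no
        // other transaction can lock or modify this row in the meantime.
        // DbSet.SqlQuery (unlike Database.SqlQuery) returns tracked entities.
        var account = context.Accounts
            .SqlQuery("SELECT * FROM Accounts WITH (UPDLOCK) WHERE Id = @p0", id)
            .Single();

        account.Balance -= amount;
        context.SaveChanges();
        scope.Complete(); // the lock is released when the transaction commits
    }
}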
If you are really building a solution where a lot of concurrent processes try to update the same data all the time, you may end up redesigning it, because there is no reliable, high-performing solution based on locking or on rerunning failed updates.
I would like to have an optimized version of my WinForms C#-based application for slower connections. For this reason I want to introduce a timestamp column into all tables (that change), load most things the first time they are needed, and then read only the updates/inserts/deletes that other people using the application may have made since.
To give this question an example, I've added a timestamp column to a table called Konsultanci. Considering that this table might be large, I would like to load it once and then check for updates/inserts. The simple way I load it all now looks like this:
private void KonsultantsListFill(ObjectListView listView)
{
    using (var context = new EntityBazaCRM(Settings.sqlDataConnectionDetailsCRM)) {
        ObjectSet<Konsultanci> listaKonsultantow = context.Konsultancis;
        GlobalnaListaKonsultantow = listaKonsultantow.ToList(); // assign to a global variable used all around the WinForms code
    }
}
How would I go about checking whether anything in the table has changed? Also, how do I handle updates in WinForms C#? Should I be checking for changes on each tab-page select, on opening new GUIs, on saving, on loading of clients, consultants, and so on? Should I be refreshing all tables all the time (for example, firing a background thread on every single action the user takes), or should the refresh run only just before the data is actually needed?
What I'm looking for here is:
General advice on how to approach the timestamp problem and refresh data without having to load everything multiple times (slow-connection issues).
A code example with Entity Framework that uses the timestamp column, ideally code to run before executing something that requires the data.
Timestamps are not well suited to helping you detect when your cache needs to be updated. First off, they are not datetimes, so they give you no clue as to when a record was updated; timestamps are geared towards assisting in optimistic locking and concurrency control, not cache management. To keep a cache current you need a mechanism like a LastModified datetime field on your tables (make sure it's indexed!) and then a way to periodically check for rows that have been modified since the last time you checked.
Regarding keeping your data fresh, you could run a separate query (possibly on another thread) that finds all records with LastModified greater than the last time you checked, and then "upsert" (update or insert) them into your cache context. Another mechanism in Entity Framework is the Context.Refresh() method.
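A sketch of that polling query, assuming a LastModified datetime column and an Id key have been added to Konsultanci (neither is part of the model shown above):

private DateTime _lastCheck = DateTime.MinValue;

private void RefreshKonsultanci()
{
    using (var context = new EntityBazaCRM(Settings.sqlDataConnectionDetailsCRM)) {
        // Only rows touched since the previous poll come over the wire.
        var changed = context.Konsultancis
            .Where(k => k.LastModified > _lastCheck)
            .ToList();

        foreach (var row in changed) {
            // "Upsert" into the local cache: replace the stale copy or add the new row.
            var index = GlobalnaListaKonsultantow.FindIndex(k => k.Id == row.Id);
            if (index >= 0) GlobalnaListaKonsultantow[index] = row;
            else GlobalnaListaKonsultantow.Add(row);
        }

        _lastCheck = DateTime.Now; // safer: take MAX(LastModified) from the server instead
    }
}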
This may be a trivial question, but I will ask it anyway: can updates to three or four tables in one database trigger updates to one or two tables in another database (in MS SQL 2005)? I am happy to create any amount of T-SQL necessary, and I also have VS 2008 with C# ability, but I have never written a trigger like that before.
Essentially, I had the data repository's "GetProducts()" call my stored procedure any time data was loaded, with whatever scope was needed, and I physically changed the "cached" data. Everything in life was good.
Now, they don't want the data update as part of the repository at all. They feel that it is external to the project and should be handled without interaction.
Anyone have any suggestions on what to pursue? Links to ideas already out there would be fantastic.
A trigger only fires when a single table is updated, inserted into, or deleted from. If you have a specific order in which the tables must be inserted, you could put the trigger on the last one.
Alternatively, you could write the trigger to examine the other tables as well, to ensure all of them have records; or you could write one trigger for each table. If real-time updates are not required, you could instead have a job that runs periodically to handle the changes needed. Not knowing exactly what you want to do, it is hard to say which is the best way to handle your particular situation.
Whatever you do with the triggers, remember that triggers operate on sets of data, not one row at a time. They should always be written to handle multiple-row inserts, updates, or deletes, or sooner or later your trigger will cause data-integrity problems. Do not do this in a cursor unless you like having your production tables locked for hours at a time when someone needs to put in 3,000,000 new records for a new client.
If this is what you want:
check database A for updates to table1, table2, table3, and/or table4,
then update database B's table5 and/or table6,
then you need a stored procedure to encapsulate all of the necessary logic and transactions: the original updates in database A and the resulting updates in database B.
Are you asking whether, if you update all three tables, the trigger should fire, but if you only update two of the three, it should not?
Your triggers can update any number of tables; they can also cause other triggers to fire, and if you like to live on the dangerous side you can even make them recursive, causing the original trigger to fire again.
However, nothing exists out of the box that can coordinate what I think you described. Not to say it can't be done.