Entity Framework concurrency issue using stored procedures - c#

I am using ASP.NET to build an application for a retail company. I am using the Entity Framework (model-first) as my data access layer. I am using stored procedures for my CRUD operations, and all columns are mapped and appear to be correct, as all CRUD functionality works as expected.
But I am having concurrency issues with the DELETE operation.
I've added a TimeStamp column to the table I am performing the CRUD operations on. The UPDATE operation works fine, as it updates by primary key and the TimeStamp value. Thus, if no rows are affected by the UPDATE because the TimeStamp value has changed, the Entity Framework throws an OptimisticConcurrencyException.
The DELETE operation works on the same principle as it is deleting by primary key and the TimeStamp value. But no exception is thrown when the TimeStamp value does not match between the entity and the database.
In the C# delete method I retrieve the latest record first and then set the TimeStamp property to another TimeStamp value (which may differ from the retrieved value). After some investigation using SQL Profiler I can see that the DELETE stored procedure is executed, but the TimeStamp parameter passed to the stored procedure is the latest retrieved TimeStamp value, not the value I assigned to the property. Thus the record is deleted and the Entity Framework does not throw an exception.
Why would the Entity Framework still pass the retrieved TimeStamp value to the stored procedure and not the value that I have assigned to the property? Is this by design, or am I missing something?
Any help will be appreciated! (where is Julie Lerman when you need her! :-))
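For reference, here is a rough sketch of the kind of delete method described above; the entity set, key, and property names (RetailEntities, Customers, Id, TimeStamp) are placeholders of mine, not the OP's actual model:

using System.Linq;

public void DeleteCustomer(int id, byte[] clientTimeStamp)
{
    using (var context = new RetailEntities()) // hypothetical model-first ObjectContext
    {
        // Retrieve the latest record first...
        var customer = context.Customers.Single(c => c.Id == id);

        // ...then overwrite the TimeStamp property with the value held by the caller.
        customer.TimeStamp = clientTimeStamp;

        context.Customers.DeleteObject(customer);
        context.SaveChanges(); // the DELETE procedure still receives the retrieved TimeStamp
    }
}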

Optimistic concurrency in EF works fine. Even with stored procedures.
ObjectContext.DeleteObject passes the original values of the entity to the delete function. This makes sense: the original values are used to identify the row to delete. When you delete an object, you don't (usually) have meaningful edits on the entity. What would you expect EF to do with them? Write them? To which records?
One legitimate use for passing modified data to the delete function is when you want to track deletes in some other table and you need to include information that is not accessible at the database layer, only at the business layer. Examples include the application-level user name or the reason for the delete. In this situation you need to construct the entity with these values as its original values. One way to do it:
// Load the row, then stamp the audit information onto it.
var x = db.MyTable.Single(k => k.Id == id_to_delete);
x.UserName = logged_in_user;
x.ReasonForChange = some_reason;
// [...]
// Resetting the entity to Unchanged accepts the values above as its *original* values,
// which is what EF will pass to the delete stored procedure.
db.ObjectStateManager.ChangeObjectState(x, EntityState.Unchanged);
db.MyTable.DeleteObject(x);
db.SaveChanges();
Of course, a better strategy might be to do it openly in the business layer.
I don't understand your use case with rowversion/timestamp.
To avoid concurrency issues you pass the original timestamp to the modifying code. That way it can be compared with the current value in the database to detect whether the record changed since you last read it. Comparing it with a new value makes little sense.
You usually use change markers that are automatically updated by the database, like rowversion/timestamp in SQL Server, ORA_ROWSCN in Oracle, or xmin in PostgreSQL. You don't change their value in your code.
Still, if you maintain the row version manually, you need to provide:
a) the new version to INSERT and UPDATE, to be written, and
b) the old version (as read from the database) to UPDATE and DELETE, to check for concurrent changes.
You don't send the new value to the delete. You don't need to.
Also, when using stored procedures for modification, it's better to compute the new version in the procedure and return it to the application, not the other way around.
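To make the delete check concrete, here is a hedged plain ADO.NET sketch of the kind of comparison such a delete performs; the table, column, and parameter names (dbo.Orders, OrderId, RowVersion, connectionString) are placeholders of mine, not the OP's schema:

using System.Data;
using System.Data.SqlClient;

// Deletes one row only if the caller's original rowversion still matches.
public static void DeleteOrder(string connectionString, int orderId, byte[] originalRowVersion)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "DELETE FROM dbo.Orders WHERE OrderId = @OrderId AND RowVersion = @OriginalRowVersion;",
        conn))
    {
        cmd.Parameters.AddWithValue("@OrderId", orderId);
        cmd.Parameters.AddWithValue("@OriginalRowVersion", originalRowVersion); // byte[] read earlier
        conn.Open();

        if (cmd.ExecuteNonQuery() == 0)
        {
            // The row changed or disappeared since it was read - the same zero-rows-affected
            // case that a mapped stored procedure surfaces as an OptimisticConcurrencyException.
            throw new DBConcurrencyException("The row was modified or deleted by another user.");
        }
    }
}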

Hard to tell without seeing any code, but maybe when the postback occurs the page is being re-bound before your delete method fires? In whatever method databinds the form controls (I assume it's OnLoad or OnInit), have you wrapped the databinding calls in if ( !this.IsPostBack ) { ... }?
Also, I'm not sure if there's a reason why you're explicitly storing the concurrency flag in viewstate/session variables, but a better way to do it (imo) is to add the timestamp to the DataKeyNames property of the FormView/GridView (i.e. <asp:FormView ID='blah' runat='server' DataKeyNames='Id, Timestamp'>).
This way you don't have to worry about manually storing or updating the timestamp. ;)
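As a rough illustration, the key values can then be read back out in the code-behind; the handler name below is a placeholder, and the DataKeyNames are the ones suggested above:

protected void FormViewBlah_ItemDeleting(object sender, FormViewDeleteEventArgs e)
{
    var fv = (FormView)sender;
    int id = (int)fv.DataKey["Id"];
    byte[] timestamp = (byte[])fv.DataKey["Timestamp"];

    // Pass both values to the delete so the database sees the original rowversion.
}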

EF Core - data may have been modified or deleted since entities were loaded with SQL ON UPDATE CURRENT_TIMESTAMP

I'm getting a "data may have been modified or deleted since entities were loaded" exception while trying to update the same record a second time. I found that the issue is because I'm using the column definition below in MySQL, and it seems I'm not handling this correctly in my entities.
last_updated_at datetime(6) NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
dbItem.LastUpdatedAt = DateTime.Now;
If I update the value from my entity as above and save, then it works.
But if I don't update the value from my entity, it doesn't work for the second update, throwing the "data may have been modified or deleted since entities were loaded" exception.
What would be a good solution for this?
What you want is for EF to read the updated value back from the database at the end of an add/update operation. You can use the ValueGeneratedOnAddOrUpdate configuration to achieve this.
Contrary to what the name suggests, you are still allowed to provide your own value that will be included in the context - just make sure that you don't try to provide the C# default value if you want the value to be transferred to the database.
If you set a value for the property configured as ValueGeneratedOnAddOrUpdate while the entity is being tracked by the context, the property and the value that you set will be included in any INSERT and UPDATE statements. This value may be saved in the database, depending on how you have configured your value generation strategy. This is only applicable if the value that you provide is not the CLR default value for the data type of the property.
Quoted from https://www.learnentityframeworkcore.com/configuration/fluent-api/valuegeneratedonaddorupdate-method
Disclaimer: I never tried this practically. It's just what I assume to work after reading some docs.
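As a rough sketch of that configuration (assuming an entity class named Item that carries the LastUpdatedAt property from the question):

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Item>()
        .Property(i => i.LastUpdatedAt)
        .HasColumnName("last_updated_at")
        .ValueGeneratedOnAddOrUpdate(); // EF reads the database-generated value back after saving
}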

Linq To SQL Delete + Insert best practice

As stated in the title, I need to perform a delete + insert. I do:
context.MyTable.DeleteAllOnSubmit(deleteQuery);
foreach (var entry in entries)
    context.MyTable.InsertOnSubmit(entry);
context.SubmitChanges();
As written in that post:
Linq to SQL: execution order when calling SubmitChanges()
I read that the delete operation is the last one applied, but at the moment I see my logic working (and I am sure that delete + insert happens dozens of times per day).
What I need is to understand whether the post is wrong, or whether my logic is wrong and for some reason (the update-check flag in the LINQ to SQL data model?) I have only been lucky and avoided trouble.
After that, I would like to know the better pattern for performing an "update" when the record cardinality changes.
I mean that in my table there is a primary key that identifies an entity (an entity has many records) and a subkey that identifies each record within the same entity (sub-entity).
I need to regenerate the records (because some sub-entities may be inserted, edited or deleted), so I use delete + insert (the message from which I write to the DB contains only the entities and sub-entities that exist, not the deleted ones).
E.g.:
ID  SubID  Data
1   1_0    Father
2   2_0    Father
2   2_1    Child 1
3   3_0    Father
3   3_1    Child 1
3   3_2    Child 2
I have no control over either the tables (or the data format inside them) or the message (which I use to write to or delete from the table displayed above).
I read that the delete operation is the last one applied, but at the moment I see my logic working (and I am sure that delete + insert happens dozens of times per day). What I need is to understand whether the post is wrong, or whether my logic is wrong and for some reason (the update-check flag in the LINQ to SQL data model?) I have only been lucky and avoided trouble.
The post is correct; the deletes are indeed applied last.
Your code is working as designed; this is not by chance.
LINQ to SQL actually loads all the records to be deleted and then deletes them one by one, and this happens last.
This will never fail or delete the wrong records; however, it does have a performance cost. You can refer to the very good MSDN article on this:
Regardless of how many changes you make to your objects, changes are made only to in-memory replicas. You have made no changes to the actual data in the database. Your changes are not transmitted to the server until you explicitly call SubmitChanges on the DataContext.
When you make this call, the DataContext tries to translate your changes into equivalent SQL commands. You can use your own custom logic to override these actions, but the order of submission is orchestrated by a service of the DataContext known as the change processor.
The sequence of events is as follows (refer to MSDN):
When you call SubmitChanges, LINQ to SQL examines the set of known objects to determine whether new instances have been attached to them. If they have, these new instances are added to the set of tracked objects. (This is why the inserts come first.)
All objects that have pending changes are ordered into a sequence of objects based on the dependencies between them. Objects whose changes depend on other objects are sequenced after their dependencies. (Then the updates.)
After the updates, the deletes are done.
Immediately before any actual changes are transmitted, LINQ to SQL starts a transaction to encapsulate the series of individual commands.
The changes to the objects are translated one by one to SQL commands and sent to the server.
At this point, any errors detected by the database cause the submission process to stop, and an exception is raised.
All changes to the database are rolled back as if no submissions ever occurred. The DataContext still has a full recording of all changes.
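As an aside not covered by the quoted article: if the delete-last ordering ever does become a problem (for example when re-inserting rows that reuse the same keys), one way to force the deletes to reach the database first is to split the work into two SubmitChanges calls inside a single TransactionScope. A rough sketch, reusing the placeholder names from the question:

using (var scope = new System.Transactions.TransactionScope())
{
    context.MyTable.DeleteAllOnSubmit(deleteQuery);
    context.SubmitChanges();                 // deletes are sent to the database now

    foreach (var entry in entries)
        context.MyTable.InsertOnSubmit(entry);
    context.SubmitChanges();                 // inserts follow, inside the same transaction

    scope.Complete();                        // commit both batches together
}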

How to use T-SQL timestamp with Linq to Entities EF4

I currently have a project where we are trying to migrate some data from a client database and into our central DB store. We are then going to expose the data back to the client via web service methods.
One thing we would like to do is make use of a T-SQL timestamp (or rowversion) column on the data, so the client can check their local version of the data against ours, and for instance call a method saying "give me all the data with a version > 10"
This is proving a little problematic for us, because Entity Framework interprets the timestamp column as a byte array, so we can't figure out the best way to write LINQ that returns all rows of data where version > X when the type of Version is Byte[].
In pseudo code
getdata(int checkVersion)
{
    return Shoppers.Where(s => s.Version > checkVersion).ToList();
}
// ==> need to convert the Version column from Byte[] to int somehow in LINQ to Entities
One suggestion would be to create a new computed column in the table of type bigint which converts the version (timestamp) column.
I wonder if there is actually a way to do this in LINQ though, without introducing another column into the table?
I think you will like this MSDN article:
By default, the Entity Framework implements an optimistic concurrency model. This means that locks are not held on data in the data source between when the data is queried and the data is updated. The Entity Framework saves object changes to the database without checking for concurrency. For entities that might experience a high degree of concurrency, we recommend that the entity define a property in the conceptual layer with an attribute of ConcurrencyMode="fixed", as shown in the following example:
<Property Name="Status" Type="Byte" Nullable="false" ConcurrencyMode="Fixed" />
When this attribute is used, the Entity Framework checks for changes in the database before saving changes to the database. Any conflicting changes will cause an OptimisticConcurrencyException. For more information, see How to: Manage Data Concurrency in the Object Context. An OptimisticConcurrencyException can also occur when you define an Entity Data Model that uses stored procedures to make updates to the data source. In this case, the exception is raised when the stored procedure that is used to perform updates reports that zero rows were updated.
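Not from the quoted article, but for completeness, handling that exception with the EF4 ObjectContext API typically looks something like the sketch below (the shopper variable is a placeholder for the entity being saved):

try
{
    context.SaveChanges();
}
catch (OptimisticConcurrencyException)
{
    // Refresh the entity with the current store values and retry the save,
    // or surface the conflict to the caller instead.
    context.Refresh(RefreshMode.StoreWins, shopper);
    context.SaveChanges();
}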

Fine Grained CRUD with Subsonic's SimpleRepository

Let's say I have a TestClass in my C# app with property A and property B.
I change the value of B in my code and leave property A unchanged.
I update TestClass in the database with SimpleRepository's Update method.
As far as I can see, it also updates property A's value in the database.
It is easy to test: I change the value of A in my database outside my app ('by hand'), then I make the update from my app. The value of property A changes back to the value held by TestClass's state in my app.
So, my question: is it possible to have SimpleRepository update only some properties, not the whole class? Is there some 'IgnoreFields' possibility?
What you need is optimistic concurrency on your UPDATE statement, not to exclude certain fields. In short what that means is when updating a table, a WHERE clause is appended to your UPDATE statement that ensures the values of the fields in the row are in fact what they were when the last SELECT was run.
So, let's assume in your example I selected some data and the values for A and B were 1 and 2 respectively. Now let's assume I wanted to update B (below statement is just an example):
UPDATE TestClass SET B = '3' WHERE Id = 1;
However, instead of running that statement (because there's no concurrency there), let's run this one:
UPDATE TestClass SET B = '3' WHERE Id = 1 AND A = '1' AND B = '2';
That statement now ensures the record hasn't been changed by anybody.
However, at the moment it doesn't appear that Subsonic's SimpleRepository supports any type of concurrency, so that's going to be a major downfall. If you're looking for a very straightforward repository library where you can use POCOs, I would recommend Dapper. In fact, Dapper is used by Stack Overflow. It's extremely fast and will easily allow you to build concurrency into your update statements, because you send down parameterized SQL statements - simple.
This Stack Overflow article is an overall article on how to use Dapper for all CRUD ops.
This Stack Overflow article shows how to perform inserts and updates with Dapper.
NOTE: with Dapper you could actually do what you're wanting as well, because you send down basic SQL statements, but I just wouldn't recommend skipping the concurrency check.
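A rough sketch of the same optimistic-concurrency UPDATE issued through Dapper; the connection string and values are placeholders, and Dapper's Execute returns the number of affected rows:

using System.Data;
using System.Data.SqlClient;
using Dapper;

public static void UpdateB(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    {
        int rows = conn.Execute(
            "UPDATE TestClass SET B = @NewB WHERE Id = @Id AND A = @OldA AND B = @OldB",
            new { NewB = "3", Id = 1, OldA = "1", OldB = "2" });

        if (rows == 0)
            throw new DBConcurrencyException("The row was changed by someone else since it was read.");
    }
}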
Don't call the Update method on the data object in such cases; by doing so you are basically indicating that the whole object has changed and needs to be updated in the DB.
So SubSonic will generate a query like:
UPDATE TestClass SET A = '', B = '', ModifiedOn = 'DateHere' WHERE PrimaryKey = ID
To change only property B you need to construct the UPDATE query manually;
have a look at the Subsonic.Update class.
Ideally you shouldn't be constructing a new instance of the data object manually; if you do, make sure the values are copied from the object returned by the Subsonic.Select query.
That way, when you update the value of even a single property, all other properties will hold their own values from the DB rather than default values depending on the type of each property.

LinqToSQL and auditing changed fields

Here's another one of these LinqToSQL questions where I'm sure I must have missed the boat somewhere, because the behavior of the O/R Designer is very puzzling to me...
I have a base class for my LinqToSQL tables, which I called LinqedTable. I've successfully used reflection to get hold of all the properties of the descendant classes and do other standard stuff.
Now I want to have some automatic auditing of my tables, so that whenever a LinqedTable record is inserted or deleted, or a field value changes, I will insert a record into an audit table, detailing the change type, the field name, and its value pre- and post-save.
I thought I would be able to do it using the PropertyChanging event, keeping track of all the changed properties before a save, then clearing the collection of changes after each SubmitChanges() call. But - the generated code from the O/R designer, for some bizarre reason, doesn't give you the property name in the PropertyChanging event - it sends an empty string! (WHY?!) It does send the property name in the PropertyChanged event, but that's already too late for me to get the original value.
I thought to grab all the original values of all properties using the OnLoaded() partial method - but that is private by definition, and I need access to that method in the base class. Even if I used reflection to get hold of that method, that would mean I would have to implement the other half of the partial method for every one of my tables, which kinda defeats the purpose of having inheritance!
I also can't find any suitable method in the DataContext to use or override.
So what would you recommend to get this audit functionality working?
You can use GetChangeSet on the DataContext to retrieve a list of the updates, inserts and deletes that have occurred on all tables within a context. You can use ITable.GetOriginalEntityState to retrieve the original values of a changed entity. However, when you retrieve the original values of a deleted or updated record, the associations will not be available, so you will have to rely on foreign key values only in that area if you need to process related entities. You can use ITable.GetModifiedMembers to help retrieve only the values that have changed.
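A rough sketch of hooking the change set for auditing; "MyDataContext" stands in for the designer-generated context, and writing the actual audit record is left out:

using System.Data.Linq;

public partial class MyDataContext
{
    public override void SubmitChanges(ConflictMode failureMode)
    {
        ChangeSet changes = GetChangeSet();

        foreach (object updated in changes.Updates)
        {
            ITable table = GetTable(updated.GetType());

            // Only the changed members, with their pre- and post-save values.
            foreach (ModifiedMemberInfo member in table.GetModifiedMembers(updated))
            {
                // member.Member.Name, member.OriginalValue, member.CurrentValue
                // -> insert an audit row here.
            }
        }

        // changes.Inserts and changes.Deletes can be audited the same way;
        // for deletes, GetOriginalEntityState(entity) still returns the values as loaded.
        base.SubmitChanges(failureMode);
    }
}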
Forgive me for what is perhaps a stupid answer, but how about doing the audit directly in SQL Server using triggers (if you are on SQL Server 2005 or 2008 Standard) or using the change tracking facilities in SQL Server 2008 Enterprise?
