As stated in the title, I need to perform a delete + insert, so I do:
context.DeleteAllOnSubmit(deleteQuery);
foreach (var entry in entries)
    context.InsertOnSubmit(entry);
context.SubmitChanges();
As written in that post:
Linq to SQL: execution order when calling SubmitChanges()
I read that the delete operation is the last one applied, yet at the moment my logic works (and I am sure the delete + insert happens dozens of times per day).
What I need to understand is whether the post is wrong, or whether my logic is wrong and for some reason (the update-check flags in the LINQ to SQL data model?) I have simply been lucky and avoided the trouble.
After that, I would like to know the better pattern for performing an "update" when the record cardinality changes.
I mean that in my table there is a primary key that identifies an entity (an entity has many records) and a subkey that identifies each record within the same entity (a sub-entity).
I need to regenerate the records (because some sub-entities may be inserted, edited, or deleted), so I use delete + insert (the message from which I write to the DB contains only the entities and sub-entities that exist, not the deleted ones).
E.g.:
ID  SubID  Data
1   1_0    Father
2   2_0    Father
2   2_1    Child 1
3   3_0    Father
3   3_1    Child 1
3   3_2    Child 2
I have no control over either the tables (and the data format inside them) or the message (which I use to write to or delete from the table displayed above).
The post is correct: the delete is indeed applied last.
Your code works as designed; it is not working by chance.
LINQ to SQL loads all the records to be deleted and then deletes them one by one, and this happens last.
This will never fail and will not delete the wrong records; however, the row-by-row delete has a performance cost. You can refer to the very good MSDN article on this.
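If that per-row delete ever becomes a problem, one common alternative is to issue the set-based delete yourself through DataContext.ExecuteCommand and keep LINQ to SQL for the inserts. A minimal sketch; the table/column names ("Entity", "ID"), the Entities table property, and entityId are illustrative, not taken from the question:

// ExecuteCommand runs immediately, outside the SubmitChanges transaction,
// so wrap both steps in a System.Transactions.TransactionScope if they must be atomic.
context.ExecuteCommand("DELETE FROM Entity WHERE ID = {0}", entityId);   // one set-based DELETE
foreach (var entry in entries)
    context.Entities.InsertOnSubmit(entry);
context.SubmitChanges();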
Regardless of how many changes you make to your objects, changes are made only to in-memory replicas. You have made no changes to the actual data in the database. Your changes are not transmitted to the server until you explicitly call SubmitChanges on the DataContext.
When you make this call, the DataContext tries to translate your changes into equivalent SQL commands. You can use your own custom logic to override these actions, but the order of submission is orchestrated by a service of the DataContext known as the change processor.
The sequence of events is as follows (see MSDN):
When you call SubmitChanges, LINQ to SQL examines the set of known objects to determine whether new instances have been attached to them. If they have, these new instances are added to the set of tracked objects. This is why the inserts are handled first.
All objects that have pending changes are ordered into a sequence of objects based on the dependencies between them. Objects whose changes depend on other objects are sequenced after their dependencies. Then the updates are applied.
After the updates, the deletions are done.
Immediately before any actual changes are transmitted, LINQ to SQL starts a transaction to encapsulate the series of individual commands.
The changes to the objects are translated one by one to SQL commands and sent to the server.
At this point, any errors detected by the database cause the submission process to stop, and an exception is raised.
All changes to the database are rolled back as if no submissions ever occurred. The DataContext still has a full recording of all changes.
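Put together, the question's delete + insert is submitted as a single unit: inserts first, deletes last, everything inside one implicit transaction. A minimal sketch using the names from the question's code:

try
{
    context.DeleteAllOnSubmit(deleteQuery);   // queued now, but executed last
    foreach (var entry in entries)
        context.InsertOnSubmit(entry);        // queued now, executed first
    context.SubmitChanges();                  // one implicit transaction around all commands
}
catch (Exception)
{
    // The implicit transaction has been rolled back; the database is unchanged,
    // and the DataContext still tracks all the pending changes.
    throw;
}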
Related
I am using ASP.NET to build an application for a retail company. I am using Entity Framework (model-first) as my data access layer and stored procedures for my CRUD operations; all columns are mapped and seem to be correct, as all CRUD functionality works as expected.
But I am having concurrency issues with the DELETE operation.
I've added a TimeStamp column to the table I am doing the CRUD operations on. The UPDATE operation works fine, as it updates by primary key and the TimeStamp value. Thus, if no rows are affected by the UPDATE operation because of a change in the TimeStamp value, the Entity Framework throws an OptimisticConcurrencyException.
The DELETE operation works on the same principle as it is deleting by primary key and the TimeStamp value. But no exception is thrown when the TimeStamp value does not match between the entity and the database.
In the C# delete method I retrieve the latest record first and then update the TimeStamp property to another TimeStamp value (it might differ from the retrieved value). After some investigation using SQL Profiler, I can see that the DELETE stored procedure is executed, but the TimeStamp parameter passed to the stored procedure is the latest TimeStamp value and not the value that I set the TimeStamp property to. Thus the record is deleted and the Entity Framework does not throw an exception.
Why would the Entity Framework still pass the retrieved TimeStamp value to the stored procedure and not the value that I assigned to the property? Is this by design, or am I missing something?
Any help will be appreciated! (where is Julie Lerman when you need her! :-))
Optimistic concurrency in EF works fine. Even with stored procedures.
ObjectContext.DeleteObject passes the original values of the entity to the delete function. This makes sense: the original values are used to identify the row to delete. When you delete an object, you don't (usually) have meaningful edits to it, so what would you expect EF to do with them? Write them? To which records?
One legitimate use for passing modified data to the delete function is when you want to track deletes in some other table and need to throw in information that is not accessible at the database layer, only at the business layer. Examples include the application-level user name or the reason for the delete. In that situation you need to construct the entity with these values as its original values. One way to do it:
var x = db.MyTable.Single(k => k.Id == id_to_delete);
x.UserName = logged_in_user;
x.ReasonForChange = some_reason;
// [...]
// Mark the edited values as "original" so they are what gets passed to the delete procedure.
db.ObjectStateManager.ChangeObjectState(x, EntityState.Unchanged);
db.MyTable.DeleteObject(x);
db.SaveChanges();
Of course, a better strategy might be to do this openly in the business layer.
I don't understand your use case with rowversion/timestamp.
To avoid concurrency issues, you pass the original timestamp to the modifying code.
That way it can be compared to the current value in the database to detect whether the record has changed since you last read it.
Comparing it with a new value makes little sense.
You usually use change markers that are automatically updated by the database, like rowversion/timestamp in SQL Server, ORA_ROWSCN in Oracle, or xmin in PostgreSQL.
You don't change its value in your code.
Still, if you maintain the row version manually, you need to provide:
a) the new version to insert and update, to be written, and
b) the old version (read from the database) to update and delete, to check for concurrent changes.
You don't send a new value to delete; you don't need to.
Also, when using stored procedures for modification, it's better to compute the new version in the procedure and return it to the application, not the other way around.
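For the question's scenario, the usual pattern is to delete using the originally read rowversion so that EF puts that value into the WHERE clause (or passes it to the delete procedure) and can detect that no row was affected. A minimal sketch in EF 4 ObjectContext style; the Order entity, the Orders set, and the RowVersion property (with Concurrency Mode = Fixed) are illustrative names, not from the question:

// Build a stub carrying the key and the rowversion captured when the row was read.
var stub = new Order { Id = orderId, RowVersion = originalRowVersion };
context.Orders.Attach(stub);          // the attached values become the entity's "original" values
context.Orders.DeleteObject(stub);
context.SaveChanges();                // DELETE ... WHERE Id = @id AND RowVersion = @originalRowVersion
                                      // 0 rows affected => OptimisticConcurrencyException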
Hard to tell without seeing any code, but maybe when the postback occurs the page is being re-bound before your delete method is firing? On whatever method databinds the form controls (I assume it's OnLoad or OnInit), have you wrapped any databinding calls with if ( !this.IsPostBack ) { ... }?
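For example, a sketch of that guard (BindForm is a hypothetical method standing in for whatever binds your FormView/GridView):

protected void Page_Load(object sender, EventArgs e)
{
    if (!this.IsPostBack)
    {
        BindForm();   // bind only on the first request, not on the postback that fires the delete
    }
}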
Also, I'm not sure if there's a reason why you're explicitly storing the concurrency flag in viewstate/session variables, but a better way to do it (IMO) is to add the timestamp to the DataKeyNames property of the FormView/GridView (i.e. <asp:FormView ID='blah' runat='server' DataKeyNames='Id, Timestamp'>).
This way you don't have to worry about manually storing or updating the timestamp. ;)
I'm trying to delete a child property of a domain entity. In the UI, the user selects Delete to remove a CustomVariableGroup from an Application entity.
I thought that deleting the property from the LINQ-to-SQL entity and then submitting changes would cause LINQ-to-SQL to take care of the work on the database side. But the row never gets deleted from the table. When I refresh the page in my application, the property is still there because it's still there in the database.
public void Save(Application application)
{
    ApplicationRecord appRecord = application.Map(); // Maps domain entity to L2S entity

    // Before this line executes, appRecord has 0 CustomVariableGroups, which is correct.
    this.Database.ApplicationRecords.Attach(appRecord, true);

    // After the attach, appRecord now has 1 CustomVariableGroup again. This is wrong.
    appRecord = application.Map(); // Hack to remove the CustomVariableGroup again.

    // This doesn't delete the CustomVariableGroup from appRecord. Do I need
    // to delete it explicitly? Or should removing it from appRecord, and
    // calling SubmitChanges() do it?
    this.Database.SubmitChanges();
}
What is the right way for me to get rid of this child property on the entity? I guess I could loop through the list and delete each item individually, but I don't think LINQ-to-SQL is supposed to work that way.
Any ideas are appreciated.
Note: The property ApplicationCustomVariableGroupRecords represents a table that resolves a many-to-many association in the Database. An Application can have one or more CustomVariableGroups, and a CustomVariableGroup can belong to one or more Applications.
Normally you have to specifically delete the object - removing it from a parent collection just means you don't want it to be associated with that particular parent anymore. It can't tell that you don't want to then associate it with a different parent. If you want it to get deleted, you need to make the call to have it deleted (DeleteOnSubmit for L2S, IIRC)
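A sketch of what that might look like for the link table mentioned in the question; the collection and key names used here (ApplicationCustomVariableGroupRecords, CustomVariableGroupId, Id) are assumptions for illustration:

// Remove the association from the parent AND delete the link row itself.
var appRecord = this.Database.ApplicationRecords.Single(a => a.Id == application.Id);
var link = appRecord.ApplicationCustomVariableGroupRecords
                    .Single(r => r.CustomVariableGroupId == groupIdToRemove);

appRecord.ApplicationCustomVariableGroupRecords.Remove(link);              // detach from the parent collection
this.Database.ApplicationCustomVariableGroupRecords.DeleteOnSubmit(link);  // actually delete the row
this.Database.SubmitChanges();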
If I'm not wrong, tables that have n-to-n relations between them work like nested tables. So try to first delete from the third table (the one containing the IDs of the other two tables) and then remove from the main table.
[Sorry, I can't see the add comment button on the page, so I wrote this idea as an answer.]
I have a library which uses EF4 for accessing a SQL Server data store. For different reasons, I have to use SQL Server specific syntax to read data from the store (for free text search), so I have to create the SQL code by hand and send it through the ExecuteStoreQuery method.
This works fine, except that the query uses joins to pull in several tables besides the main one (the main one being the one I specify as the target entity set when calling ExecuteStoreQuery), and EF never fills the main entity's relationship properties with the other tables' data.
Is there anything special to do to fill up these relationships? Using other EF methods or using special table names in the query or something?
Thanks for your help.
Executing direct SQL follows a very simple rule: it uses the columns from the result set to fill the properties with the same names in the materialized entity. I think I read somewhere that this works only with the main entity you materialize (the entity type defined in ExecuteStoreQuery, so no relations), but I can't find it now. I did several tests, and it really doesn't populate any relation.
OK, so I'll write down here what I ended up doing. It does not look like a perfect solution, but there does not seem to be any perfect solution in this case.
As Ladislav pointed out, ExecuteStoreQuery (as well as the other "custom query" method, Translate) only maps the columns of the entity you specify, leaving all the other columns aside. Therefore I had to load the dependencies separately, like this:
// Execute
IEnumerable<MainEntity> result = context.ExecuteStoreQuery<MainEntity>(strQuery, "MainEntities", MergeOption.AppendOnly, someParams).ToArray();

// Load relations, first method
foreach (MainEntity e in result)
{
    if (!e.Relation1Reference.IsLoaded)
        e.Relation1Reference.Load();
    if (!e.Relation2Reference.IsLoaded)
        e.Relation2Reference.Load();
    // ...
}

// Load relations, second method
// The main entity contains a navigation property pointing
// to a record in the OtherEntity entity
foreach (OtherEntity e in context.OtherEntities)
    context.OtherEntities.Attach(e);
There. I think the choice between these two techniques depends on the number and size of the generated requests. The first technique generates a one-record request for every required side record, but no unnecessary records are loaded. The second technique uses fewer requests (one per table) but retrieves all the records, so it uses more memory.
I would like to discard all changes made to LINQ tables (meaning: I use LINQ, the data has been changed on the client side, and the data on the server is intact). How do I do this?
EDIT: problem partially solved
http://graemehill.ca/discard-changes-in-linq-to-sql-datacontext/
It works as long as you don't use a transaction. When you do, and you use a mixed "mode" for a record, there is a problem:
begin trans
insert a record
update inserted record
commit trans
When you update a record as above, LINQ counts it as an updated record, and in case of an exception two things happen: the transaction is rolled back and the data on the LINQ side is discarded. On discarding changes, LINQ tries to fetch the record from the database (a discard for an update means re-fetching the data for that record), but since all changes were rolled back, there is no record for the update.
The question
How can I improve the DiscardChanges method in a smart, general way so that it works with transactions? Or how should I change the workflow of transactions / discard / SubmitChanges to make all of those work together?
These are not smart solutions:
1. re-fetching all the data
2. recreating the connection to the DB (because it leads to (1))
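For reference, the linked approach boils down to an extension method along these lines (a sketch; the exact code in the blog post may differ, and as described above the Refresh step breaks for a record that was inserted and then updated inside a rolled-back transaction):

static class DataContextExtensions
{
    public static void DiscardPendingChanges(this System.Data.Linq.DataContext context)
    {
        var changeSet = context.GetChangeSet();

        // Cancel pending inserts: DeleteOnSubmit on a not-yet-submitted insert simply un-queues it.
        foreach (object inserted in changeSet.Inserts)
            context.GetTable(inserted.GetType()).DeleteOnSubmit(inserted);

        // Cancel pending deletes: re-inserting a pending delete un-queues it.
        foreach (object deleted in changeSet.Deletes)
            context.GetTable(deleted.GetType()).InsertOnSubmit(deleted);

        // Revert pending updates by re-reading them from the database;
        // this is the step that fails when the row no longer exists after a rollback.
        if (changeSet.Updates.Count > 0)
            context.Refresh(System.Data.Linq.RefreshMode.OverwriteCurrentValues, changeSet.Updates);
    }
}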
To add to what Johannes said, I think the confusion here stems from thinking of the DataContext as something similar to a DataSet. It isn't.
A "table" in a DataContext is like a hint on how to retrieve a specific type of data entity from the database. Unlike a DataSet, the DataContext does not actually "contain" data, it simply tracks the discrete entities you've pulled out of it. If the DataContext disappears (is disposed), the entities are still valid, they are simply detached. This is different from a DataSet where the individual DataTables and DataRows are essentially bound to their containers and cannot outlive them.
In order to use the Refresh method of a DataContext, you need to use it on an actual entity or collection of entities. You can't "Refresh" a Table<T> because it's not actually a physical table, it's just a kind of reference.
Changes to entities connected to a DataContext are only persisted when you call the SubmitChanges method. If you dispose of the DataContext, there is absolutely no way that the changes can persist unless you manually reattach the detached entities to a new DataContext.
Simply discard the current DataContext without calling SubmitChanges() and get a new one.
Example:
DataContext myOldDc = new DataContext();
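The rest amounts to disposing the old context and continuing with a fresh one (a sketch; the typed MyDataContext name and the re-query are illustrative):

myOldDc.Dispose();                                 // pending, unsubmitted changes are simply dropped
var dc = new MyDataContext(connectionString);
var record = dc.MyTable.Single(r => r.Id == id);   // re-read whatever you still need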
I have a MySql database, whose general structure looks like this:
Manufacturer <== ProbeDefinition <== ImagingSettings
ElementSettings ====^ ^==== ProbeInstance
I'm using InnoDB to allow foreign keys, and all foreign keys pointing to a ProbeDefinition have ON DELETE CASCADE set.
The issue I'm having is that when I delete a ProbeDefinition in my code, it gets immediately reinserted. The cascading delete happens properly, so the other tables are cleared, but it seems that LINQ to SQL may be sending an insert for no reason. Checking the ChangeSet property on the database context shows 1 delete and no inserts.
I'm using the following small bit of code to perform the delete:
database.ProbeDefinition.DeleteOnSubmit(probe);
database.SubmitChanges();
Logs in MySql show the following commands being executed when this is run:
BEGIN
use `wetscoprobes`; DELETE FROM wetscoprobes.probedefinition WHERE ID = 10
use `wetscoprobes`; INSERT INTO wetscoprobes.probedefinition (CenterFrequency, Elements, ID, IsPhased, ManufacturerID, Name, Pitch, Radius, ReverseElements) VALUES (9500000, 128, 10, 1, 6, 'Super Probe 2', 300, 0, 1)
COMMIT /* xid=2424 */
What could cause this unnecessary INSERT? Note that deleting a Manufacturer in the exact same way deletes correctly, with the following log:
BEGIN
use `wetscoprobes`; DELETE FROM wetscoprobes.manufacturer WHERE ID = 9
COMMIT /* xid=2668 */
Edit: Upon further testing, it seems that this only happens after I've populated a ListBox with a list of ProbeDefinitions.
I tried running the above delete code before and after the following snippet had run:
var manufacturer = (Manufacturer)cbxManufacturer.SelectedItem;
var probes = manufacturer.ProbeDefinition;
foreach (var probe in probes)
{
    cbxProbeModel.Items.Add(probe);
}
The object gets deleted properly before said code has run, but anytime after this point, it performs an insert after the delete. Does it not like the fact that the object is referenced somewhere?
Here's the code I'm running to test deleting a definition from the Immediate Window:
database.ProbeDefinition.DeleteOnSubmit(database.ProbeDefinition.Last())
database.SubmitChanges()
Turns out there are issues when there are multiple references to your object. Stepping through the DbLinq source, I learned that after a DELETE is completed, it steps through all other "watched" objects, looking for references.
In this case, I have multiple references: through the table database.ProbeDefinition as well as through the manufacturer reference, manufacturer.ProbeDefinition. This isn't an issue until I have accessed objects through both paths. Using Remove deletes the reference from manufacturer; using DeleteOnSubmit deletes the object from the table. If I do one or the other, the other reference still exists, and thus the object is marked to be reinserted. I'm not sure whether this is a bug in DbLinq (that it doesn't delete other references) or expected behavior.
Either way, in my case, the solution is either to access the table through only a single path and delete through that path, or to delete through both. For the sake of getting it working, I used the second approach:
// Delete probe
this.manufacturer.ProbeDefinition.Remove(probe);
database.ProbeDefinition.DeleteOnSubmit(probe);
database.SubmitChanges();
EDIT: After further work on the project and similar issues, I have found the true underlying issue with my implementation. I had a long-lived DataContext, and with how its caching works (to make SubmitChanges work), you can't do that. The real solution is to have a short-lived DataContext and reconnect to the database in each method.
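A sketch of that short-lived pattern, assuming a generated WetscoProbesDataContext and the ProbeDefinition table from the question:

public void DeleteProbeDefinition(int probeId)
{
    // Create the context, do one unit of work, then throw the context (and its tracking cache) away.
    using (var db = new WetscoProbesDataContext(connectionString))
    {
        var probe = db.ProbeDefinition.Single(p => p.ID == probeId);
        db.ProbeDefinition.DeleteOnSubmit(probe);
        db.SubmitChanges();
    }
}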