I have two tables. One contains binary data and the other contains the metadata. I am attempting to delete the entire row from both tables, but keep getting the error:
Invalid data encountered. A required relationship is missing.
Examine StateEntries to determine the source of the constraint violation.
The rest of the info is not very helpful. Here is my current code.
var attachment = _attachmentBinaryRepository.Single(w => w.Id == id);
_attachmentBinaryRepository.Delete(attachment);
_unitOfWork.Commit();
return true;
I was handed this project, but I understand the basics of table splitting. I am just lost when it comes to deleting from both tables. I assume this code only tries to delete from one of them, the one containing the binary data.
Anyone have suggestions?
I don't have the code with me, but I ended up fixing this by retrieving corresponding rows from all of the tables in the relationship. The rows then delete without any trouble.
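For anyone hitting the same error, a minimal sketch of what that fix can look like, assuming a second repository for the metadata side (the repository names here are guesses, not from the original project):

// Load both entities in the relationship first, then delete both
// before committing, so EF has the complete relationship tracked.
var binary = _attachmentBinaryRepository.Single(w => w.Id == id);
var metadata = _attachmentMetadataRepository.Single(w => w.Id == id);

_attachmentBinaryRepository.Delete(binary);
_attachmentMetadataRepository.Delete(metadata);

_unitOfWork.Commit();
return true;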
I have a folder with about 200 CSV files, each containing about 6,000 rows of mutual fund data. I have to copy that comma-separated data into the database via Entity Framework.
The two major objects are Mutual_Fund_Scheme_Details and Mutual_Fund_NAV_Details.
Mutual_Fund_Scheme_Details - this contains columns like Scheme_Name, Scheme_Code, Id, Last_Updated_On.
Mutual_Fund_NAV_Details - this contains Scheme_Id (foreign key), NAV, NAV_Date.
Each line in the CSV contains all of the above columns, so before inserting I have to:
Split each line.
First extract the scheme-related data, check whether the scheme exists, and get its id. If it does not exist, insert the scheme details and get the new id.
Using the id obtained in step 2, check whether a NAV entry exists for the same date. If not, insert it; otherwise skip it.
If an entry is inserted in step 3, the scheme's Last_Updated_On date might need to be updated to the NAV date (depending on whether it is newer than the existing value).
All the exists checks are done using the Any() LINQ extension method, and all the new entries are added to the DbContext, but SaveChanges is called only at the end of processing each file. I used to call it after each insert, but that took even longer than it does now.
Since each row involves at least two exists checks, at most two inserts, and one update, the insertion of each file is taking too long, close to 5-7 minutes per file. I am looking for suggestions to improve this. Any help would be useful.
Specifically, I am looking to:
Reduce the time it takes to process each file
Decrease the number of individual exists checks (if I can possibly combine them in some way)
Decrease individual inserts/updates (if I can possibly combine them in some way)
It's going to be hard to optimize this with EF. Here is a suggestion:
Once you have processed the whole file (~6,000 rows), do the exists check with .Where(x => listOfIdsFromFile.Contains(x.Id)). This should work for 6,000 ids, and it will allow you to separate inserts from updates.
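Not from the original answer, but a rough sketch of that batching idea, with the context, entity, and row names assumed from the question above:

// Parse the whole file first, then do one query per check instead of
// one Any() per row. 'context' and 'rows' are assumed to be the
// DbContext and the parsed CSV lines.
var codesInFile = rows.Select(r => r.SchemeCode).Distinct().ToList();

// Single round trip: which of this file's schemes already exist?
var existingIdsByCode = context.Mutual_Fund_Scheme_Details
    .Where(s => codesInFile.Contains(s.Scheme_Code))
    .ToDictionary(s => s.Scheme_Code, s => s.Id);

foreach (var row in rows)
{
    if (existingIdsByCode.ContainsKey(row.SchemeCode))
    {
        // scheme exists: row is a potential NAV insert and Last_Updated_On update
    }
    else
    {
        // scheme is new: add Mutual_Fund_Scheme_Details here
    }
}

// The same Contains() trick works for the NAV-date exists check,
// and SaveChanges() is still called once per file.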
I have an issue regarding merge replication. I have a table SETTINGS in which I store the settings of my software.
The schema of the table is ID (PK), Description, Value.
Suppose I have 15 rows in this table on my server.
Now I have applied a filter on this table saying that only the first 10 rows should replicate.
With these settings, when I sync for the first time I receive the 10 rows on my client (which has the subscription).
Then I add the remaining 5 on my client.
Now when I sync again, it gives me a conflict saying:
A row insert at 'ClientServer.ClientDatabaseName' could not be
propagated to 'MyServer.ServerDatabaseName'. This failure can be
caused by a constraint violation. Violation of PRIMARY KEY constraint
'PK_SETTINGS'. Cannot insert duplicate key in object 'dbo.SETTINGS'.
The duplicate key value is (11).
What I don't understand is why it is trying to replicate a row which is outside the subset filter applied on that table. Please help.
Is this scenario not possible with merge replication?
The link https://msdn.microsoft.com/en-us/library/ms151775.aspx suggests that it is possible, but I am confused.
Filters created for a merge article are evaluated only at the publisher. Changes made at the subscriber will always be propagated back to the publisher, even if they are outside the filter criteria. However, if the changes from one subscriber do not meet the filtering criteria, they will sit on the publisher but not be replicated to all of the other subscribers.
Is this a production scenario, or are you playing around with replication? If you do static filtering, which is what you have above, it is typically done on read-only type of tables. For example, a salesperson in the field may only need prices for products in their region. They are not expected to update this table. If you do dynamic filtering, for example, filtering based on HOSTNAME(), then you would only get data specific for that user. For example, a salesperson in the field would receive only their customer information. Thus, any updates to that information, unless it's shared across multiple salespersons, would propagate back up, and not flow to anyone else.
In your case, I would not recommend updating tables on the subscriber that have static filters, so I suggest re-evaluating your filtering design to ensure you have the right filtering model for your scenario.
In our project we need the ability to record who deleted an entity and when.
After some investigation I've found the following solutions:
Add IsDeleted and DeletedBy columns to every table and set them before deletion (using the NHibernate delete event); a sketch of this is shown after the list. The drawback of this solution: we have many SQL views which should work only with non-deleted data, so we would have to write a view over each table acting as a filter (WHERE IsDeleted = 0).
Serialize each entity to XML before deletion and store it in a single separate table with the following structure: Id | XML | Deleted By.
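To make option 1 concrete, here is a rough sketch of what the soft delete itself could look like (the entity name and properties are just an illustration, not our real model):

// Instead of deleting, mark the row and record who/when.
public class Document
{
    public virtual int Id { get; set; }
    public virtual bool IsDeleted { get; set; }
    public virtual string DeletedBy { get; set; }
    public virtual DateTime? DeletedOn { get; set; }
}

public void SoftDelete(ISession session, Document entity, string currentUser)
{
    entity.IsDeleted = true;
    entity.DeletedBy = currentUser;
    entity.DeletedOn = DateTime.UtcNow;
    session.Update(entity); // an UPDATE is issued instead of a DELETE
}

Every view and query would then still need the WHERE IsDeleted = 0 filter, which is the drawback mentioned above.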
From your point of view, which of these solutions is preferred? Or are there other solutions I didn't mention above?
P.S. The deleted rows should be excluded from queries (both NHibernate and SQL).
I see three options:
Hard delete. The rows do not exist.
Soft delete. As you describe. Yep, you'll have to tack on IsSoftDeleted checks EVERYWHERE. EVERYWHERE. EVERYWHERE. It's a total pain.
Archive table. Create a table that is an exact replica of the existing table...and do the move (to the archive table) and the delete (from the original table) in a transaction.
I've worked with #2 and #3. I prefer #3 because you avoid the additional clauses EVERYWHERE.
With #2, you may also have to figure out constraints that allow one non-soft-deleted row (based on the unique constraint) but also allow duplicate soft-deleted rows that would otherwise violate that unique constraint. Yep, good times.
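Not part of the answer above, but a rough sketch of what the #3 move-then-delete could look like with plain ADO.NET. The table names and the extra audit columns (DeletedBy, DeletedOn, to capture the who/when the question asks about) are assumptions:

// Copy the row into the archive table and delete the original
// inside one transaction, so either both happen or neither does.
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        var archive = new SqlCommand(
            @"INSERT INTO OrdersArchive (Id, CustomerId, Total, DeletedBy, DeletedOn)
              SELECT Id, CustomerId, Total, @deletedBy, GETUTCDATE()
              FROM Orders WHERE Id = @id", connection, transaction);
        archive.Parameters.AddWithValue("@id", orderId);
        archive.Parameters.AddWithValue("@deletedBy", currentUser);
        archive.ExecuteNonQuery();

        var delete = new SqlCommand(
            "DELETE FROM Orders WHERE Id = @id", connection, transaction);
        delete.Parameters.AddWithValue("@id", orderId);
        delete.ExecuteNonQuery();

        transaction.Commit();
    }
}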
I am trying to merge two DataTables - one representing current data, and one representing proposed insertions to that data.
These tables have the same schema, both with a simple, user-provided primary key string.
What I'd like is: if a proposed insertion row has a key that is already present in the current data, an error should be thrown. Instead, the proposed addition just gets merged as a proposed alteration to the existing row, which is not what I want.
My current code is something along the lines of
currentData.EnforceConstraints = false;
currentData.Merge(additions);
currentData.EnforceConstraints = true;
where I'm actually merging whole DataSets, not just DataTables. I was hoping to get an error on the EnforceConstraints = true line, but I don't.
I also tried using diffgrams, but had the same problem - duplicate insertions get treated as modifications.
Is there a way to merge a set of insertions into a DataSet and have duplicate PKs be treated as an error rather than an update?
Similarly, since modified DataRows remember their original values, I'd hope that merging a modified row whose original values don't match the target row's current values would throw an exception too.
Isn't the Unique flag used for this purpose? My understanding is that Merge combines rows based on the primary key.
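Not part of the answer above, but one way to get the error behaviour you want is to check the incoming keys yourself before calling Merge. A sketch, assuming a single-column string primary key and a hypothetical table name:

// Reject duplicate keys up front instead of letting Merge treat
// them as modifications. "MyTable" is a placeholder name.
var currentTable = currentData.Tables["MyTable"];
var additionsTable = additions.Tables["MyTable"];

foreach (DataRow addition in additionsTable.Rows)
{
    var key = addition[additionsTable.PrimaryKey[0]];
    if (currentTable.Rows.Contains(key)) // Contains() uses the primary key
        throw new ConstraintException(
            "Proposed insertion duplicates existing key '" + key + "'.");
}

currentData.Merge(additions); // safe: no key collisions left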
I am in the process of moving a database from one server to another, but now I am getting the error 'Invalid column name msrepl_tran_version'. This is a column I deleted from the new database, as it was related to replication, which I no longer need.
I have recreated the DataSets and searched the entire solution for anything containing msrepl_tran_version, and found nothing. I can't see where it is referencing this column from; it doesn't exist!
Any ideas would be much appreciated.
Thanks.
It is a transactional replication column and is not straightforward to remove; it seems you haven't totally removed it...
how to remove msrepltranversion column