I need to make several changes to my database in my controller.
foreach (var valueStream in model.ListValueStream)
{
    ValueStreamProduct vsp = new ValueStreamProduct(valueStream.Id, product.Id);
    db.ValueStreamProduct.Add(vsp);
}
db.SaveChanges();
Should I call SaveChanges at the end or each time I make changes?
It depends.
With each change - If you want each save to run in its own transaction, independent of the other changes, then save inside the loop or immediately after each change. Note that if a failure occurs later in the code, the changes that have already been saved are persisted and will not be rolled back. This also carries a higher performance cost, since you make more round trips to the data store. There are situations that warrant it, though; here are two quick examples:
You want to report the progress of a long-running action back to the data store, including the changes made up to that point.
You want to save in batches for large chunks of data the code is processing, and the code knows how to pick up after the last save point in the event of a failure.
After all changes - If you want better performance, and you want all saves to either succeed together or be rolled back on any failure, then save once at the end, outside the loop or at the end of the code block. Now if there is a failure anywhere in the code (before or during the SaveChanges call), no changes will be persisted to the store. In most situations this is the desirable behavior, since it guarantees a consistent state for your data store. More often than not, this is what you want your code to do.
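As a minimal sketch of the two options, reusing the types from the question above:

// Option 1: save inside the loop - each iteration commits independently.
foreach (var valueStream in model.ListValueStream)
{
    db.ValueStreamProduct.Add(new ValueStreamProduct(valueStream.Id, product.Id));
    db.SaveChanges(); // one transaction per entity; earlier saves survive a later failure
}

// Option 2: save once after the loop - all rows commit or roll back together.
foreach (var valueStream in model.ListValueStream)
{
    db.ValueStreamProduct.Add(new ValueStreamProduct(valueStream.Id, product.Id));
}
db.SaveChanges(); // a single transaction covering the whole batch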
It depends on your requirements.
Requirement 1) The three tables are independent.
public ActionResult Create()
{
    db.ValueStreamProduct.Add(vsp);
    db.tbl.Add(tbl2);
    db.tbl.Add(tbl3);

    // you only need to call SaveChanges() once, at the end
    db.SaveChanges();
}
Requirement 2) The ValueStreamProduct record's Id is needed for tbl2.
Here you need the Id of the last inserted record before you can populate the second table. The code looks like this:
public ActionResult Create()
{
    db.ValueStreamProduct.Add(vsp);
    db.SaveChanges();

    // after SaveChanges(), vsp.Id holds the database-generated key
    tbl2.Column = vsp.Id;
    db.tbl.Add(tbl2);
    db.tbl.Add(tbl3);
    db.SaveChanges();
}
If you save at the end, all the batched-up changes are persisted with fewer trips to the db and less overhead.
I am using the Unit of Work pattern and want to get a value back from the database to make sure it has been updated successfully. Is there a way to do that?
UnitOfWork.Save();

public void Save()
{
    try
    {
        Context.SubmitChanges();
    }
    catch (Exception dbEx)
    {
        Trace.WriteLine(string.Format(
            "Exception caught while saving unit of work: {0}", dbEx.StackTrace));
        throw;
    }
}
When you call SubmitChanges(), a transaction is created.
All objects that have pending changes are ordered into a sequence of objects based on the dependencies between them. Objects whose changes depend on other objects are sequenced after their dependencies.
Immediately before any actual changes are transmitted, LINQ to SQL starts a transaction to encapsulate the series of individual commands.
Using the default conflict resolution mode (FailOnFirstConflict), the entirety of your changes fails to save when an error occurs in any one of them, because the transaction encapsulating all your changes is rolled back.
That means that if your submission succeeds, you can be sure your unit of work was completed. You do not need to get a value back to determine whether or not your changes were saved.
You could also set the conflict resolution mode to ContinueOnConflict if you have changes that you know do not depend on each other and you want all changes to be attempted even if one or more fail.
Context.SubmitChanges(ConflictMode.ContinueOnConflict);
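For the "value back" part of the question: the absence of an exception is itself the confirmation. If you prefer a boolean, a minimal sketch of a wrapper could look like this (TrySave is a hypothetical method of your unit of work, not part of LINQ to SQL):

public bool TrySave()
{
    try
    {
        Context.SubmitChanges(); // throws ChangeConflictException on a conflict
        return true;             // no exception means the transaction committed
    }
    catch (ChangeConflictException)
    {
        return false;            // the transaction was rolled back; nothing persisted
    }
}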
Hello, I hit a problem using ASP.NET MVC with EF while calling a Web API on another website (which also uses Entity Framework).
The problem is this:
I want to make sure that the MVC SaveChanges() and the Web API SaveChanges() both succeed together.
Here's my dream pseudocode:
public ActionResult Operation()
{
    // code that inserts, updates, deletes...
    bool testMvcSaveSuccess = db.TempSaveChanges(); // no such method exists
    if (testMvcSaveSuccess == true)
    {
        bool isApiSuccess = CallApi(); // insert data into the other web app
        if (isApiSuccess == true)
        {
            db.SaveChanges(); // the real save
        }
    }
}
From the above code: without something like db.TempSaveChanges(), the Web API call might succeed while the MVC SaveChanges() fails.
There is no TempSaveChanges, because there is something even better: transactions.
A transaction is an IDisposable (it can be used in a using block) and has methods like Commit and Rollback.
Small example:
private void TestTransaction()
{
    using (var context = new MyContext(connectionString))
    using (var transaction = context.Database.BeginTransaction())
    {
        // do CRUD stuff here

        // here is your 'TempSaveChanges' execution
        int changesCount = context.SaveChanges();
        if (changesCount > 0)
        {
            // changes were made; this performs the real db changes
            transaction.Commit();
        }
        else
        {
            // no changes detected, so do nothing;
            // 'transaction.Rollback();' is not needed here, because disposing
            // the transaction at the end of the using block rolls back
            // anything that was not committed
        }
    }
}
I have extracted this example from my GitHub Exercise.EntityFramework repository. Feel free to Star/Clone/Fork...
Yes you can.
You need to override SaveChanges in the context class so that your check runs first, and only then call the regular save.
Or create your own TempSaveChanges() in the context class, call it, and if it succeeds call SaveChanges from it.
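A rough sketch of the override idea, assuming EF 6; PreSaveCheck is a hypothetical validation hook you would write yourself:

public class MyContext : DbContext
{
    public override int SaveChanges()
    {
        // run your own checks before anything is sent to the database
        if (!PreSaveCheck())       // hypothetical: inspect ChangeTracker.Entries() etc.
        {
            return 0;              // or throw, depending on how you want callers to react
        }
        return base.SaveChanges(); // the regular save runs only if the check passed
    }

    private bool PreSaveCheck()
    {
        return true; // placeholder for your validation logic
    }
}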
What you are referring to is known as atomicity: you want several operations to either all succeed, or none of them. In the context of a database you obtain this via transactions (if the database supports it). In your case however, you need a transaction which spans across two disjoint systems. A general-purpose (some special cases have simpler solutions) robust implementation of such a transaction would have certain requirements on the two systems, and also require additional persistence.
Basically, you need to be able to gracefully recover from a sudden stop at any point during the sequence. Each of the databases you are using is most likely ACID-compliant, so you can count on each DB transaction to fulfill the atomicity requirement (it either succeeds or fails). Therefore, all you need to worry about is the sequencing of the two DB transactions. Your requirement on the two systems is a way to determine a posteriori whether or not some operation was performed.
Example process flow:
Operation begins
Generate unique transaction ID and persist (with request data)
Make changes to local DB and commit
Call external Web API
Flag transaction as completed (or delete it)
Operation ends
Recovery:
Get all pending (not completed) transactions from store
Check if expected change to local DB was made
Ask Web API if expected change was made
If none of the changes were made or both of the changes were made then the transaction is done: delete/flag it.
If one of the changes was made but not the other, then either revert the change that was made (revert transaction), or perform the change that was not (resume transaction) => then delete/flag it.
Now, as you can see, it quickly gets complicated, especially if "determining whether the changes were made" is a non-trivial operation. A common solution is to use that unique transaction ID as a means of determining which data needs attention. At this point, though, it gets very application-specific and depends entirely on what the specific operations are. For certain applications you can simply re-run the entire operation in the recovery step (since you have the entire request data stored in the transaction). Some special cases do not need to persist the transaction at all, since there are other ways of achieving the same thing.
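To make the flow concrete, here is a bare-bones sketch of the persisted-transaction idea; the entity and helper names (PendingOperation, MakeLocalChanges, CallExternalApi) are all illustrative, not a prescribed API:

public class PendingOperation
{
    public Guid Id { get; set; }            // the unique transaction ID
    public string RequestData { get; set; } // enough data to re-run or verify the operation
    public bool Completed { get; set; }
}

public ActionResult Operation()
{
    // 1) persist the intent first, so recovery can find it after a crash
    var op = new PendingOperation { Id = Guid.NewGuid(), RequestData = "..." };
    db.PendingOperations.Add(op);
    db.SaveChanges();

    // 2) commit the local changes
    MakeLocalChanges(); // hypothetical: the insert/update/delete work
    db.SaveChanges();

    // 3) call the external system, passing the ID so the call is traceable
    CallExternalApi(op.Id);

    // 4) flag the transaction as completed
    op.Completed = true;
    db.SaveChanges();
    return new EmptyResult();
}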
OK, so let's clarify things a bit.
You have an MVC app, A1, with its own database, D1.
You then have an API, let's call it A2, with its own database, D2.
You want some code in A1 that does a temp save in D1, then fires a call to A2, and if the response is successful, saves the temp data from D1 in the right place this time.
Based on your pseudocode, I would suggest you create a second table in D1 where you save your "temporary" data. So your database has an extra table, and the flow is like this:
First you save your A1 data in that table, then you call A2; the data gets saved in D2, A1 receives the confirmation, and A1 calls a method that moves the data from the second table to where it should be.
Scenarios to consider:
Saving the temp data in D1 works, but the call to A2 fails. You then clear the orphaned data with a batch job, or simply delete it when the call to A2 fails.
The call to A2 succeeds but the final save in D1 fails, so now you have temp data in D1 that never moved to the right table. You could add a flag to each row of the second table indicating that the call to A2 succeeded, so the data needs to move to the right place when possible. A service can run periodically, and if it finds any rows with the flag set to true, it moves them to the right place.
There are other ways to deal with scenarios like this. You could use a queue system to manage it. Each row of data becomes a message, and you assign it a unique ID, a GUID, which is essentially a correlation ID and is the same in both systems. Even if one system goes down, when it comes back up the data will be saved and all is good in the world; thanks to the common ID you can always link the records up properly.
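A sketch of the staging table with the flag and the correlation ID described above; every name here (StagedRow, MoveToFinalTable, and so on) is invented for illustration:

public class StagedRow
{
    public int Id { get; set; }
    public Guid CorrelationId { get; set; } // the same GUID travels to A2 / D2
    public bool ApiSucceeded { get; set; }  // set once A2 confirms its save
    public string Payload { get; set; }     // the data waiting to be moved
}

// periodic job: move confirmed rows into their final table
foreach (var row in db.StagedRows.Where(r => r.ApiSucceeded).ToList())
{
    MoveToFinalTable(row);     // hypothetical: copies Payload to the real table
    db.StagedRows.Remove(row); // or flag it as moved instead of deleting
}
db.SaveChanges();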
I have a Web API 2 (C#) controller method running an asynchronous save (a RESTful POST method). Our QA tester mashed the save button (a client-side issue that has since been fixed, and irrelevant to the question). Without a duplicate check, this naturally saved duplicate entries to the db.
I implemented a duplicate check (duplicateCount just counts the items that exactly match the item passed to the POST; it should be 0. It works; the details are irrelevant):
var duplicateCount = (fooCollection.CountAsync(aggregateFilter)).Result;
if (duplicateCount > 0) { return BadRequest(); }
This check works...except on the first button mash - two duplicate entries get saved, each from an individual controller hit.
So, it seems to me that the second controller hit happens before the first controller hit manages to save the item to the db, so the duplicate check passes. Is this possible?
I am more interested in the theory than the particular answer. Also, I know I can check for duplicates in the db as well, this is more of a conceptual question. The MongoDb part is really there just for completeness, I imagine it would be similar if I was doing an async save to SQL.
Edit: someone asked in the comments how I am making the call. It's through Restangular, but IMO that's irrelevant, because I know the controller gets hit as many times as I hit the button. I also know for a fact that a single hit does not create duplicates.
The quick answer is "yes": because a controller can be instantiated on any number of threads simultaneously, it behaves as though it were multi-threaded. Your code is not thread-safe, in that its business operation requires an exclusive lock on some piece of shared state (in this case, the state of the database).
You could (though I would not recommend it) use a Mutex or a database transaction to force single-threaded behaviour, but your throughput would tank.
I personally don't face this very often, because of my (possibly bad) insistence on giving all my entities Guid primary keys and using the SQL MERGE command to either insert or update. This can be a helpful pattern: no matter how many times you send the same "message" to the controller, it will never save a duplicate.
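As a hedged illustration of that pattern (the table and column names are invented), an upsert keyed on a client-supplied Guid, issued as raw SQL from EF 6; note that a fully race-proof MERGE also needs a locking hint such as HOLDLOCK:

// The client generates the Guid, so a mashed button re-sends the same key
// and the MERGE updates the existing row instead of inserting a duplicate.
const string upsert = @"
    MERGE dbo.Foo WITH (HOLDLOCK) AS target
    USING (SELECT @id AS Id, @name AS Name) AS source
        ON target.Id = source.Id
    WHEN MATCHED THEN
        UPDATE SET Name = source.Name
    WHEN NOT MATCHED THEN
        INSERT (Id, Name) VALUES (source.Id, source.Name);";

db.Database.ExecuteSqlCommand(upsert,
    new SqlParameter("@id", item.Id),      // Guid primary key chosen client-side
    new SqlParameter("@name", item.Name));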
I have a process that imports an Excel spreadsheet and parses the data into my data objects. The source of this data is very questionable, as we're moving our customer from spreadsheet-based data management to a managed database system with checks for valid data.
During my import process, I do some basic sanity checks of the data to accommodate just how bad the data could be that we're importing, but I have my overall validation being done in the DbContext.
Part of what I'm trying to do is provide the row # in the spreadsheet where the data is bad, so they can easily determine what they need to fix to get the file to import.
Once I have the data from the spreadsheet (model), and the Opportunity they're working with from the database (opp), here's the pseudocode of my process:
foreach (var model in Spreadsheet.Rows) { // Again, pseudocode
    if (opp != null && ValidateModel(model, opp, row)) {
        // Copy properties to the database object.
        // This is in a Repository-layer method, not directly in my import process;
        // it is written here for clarity instead of as several nested method calls.
        context.SaveChanges();
    }
}
I can provide more of the code here if needed, but the problem arises in my DbContext's ValidateEntity() method (an override of the DbContext method).
Again, there's nothing wrong with the code I've written, as far as I'm aware, but if an Opportunity fails this level of validation, it remains among the unsaved objects in the context, which means it is re-validated every time ValidateEntity() is called. This leads to the same validation error message repeating for every row after the one where the problem first occurs.
Is there a way to get the context to stop trying to validate an object after it fails validation once? I know I could wait and call context.SaveChanges() once at the end to get around this, but I would like to be able to match each failure to the row it came from.
For reference, I am using Entity Framework 6.1 with a Code First approach.
EDIT Attempting to clarify further for Marc L. (including an update to the code block above)
Right now, my process iterates through as many rows as there are in the spreadsheet. The reason I call my Repository layer with each object to save, instead of calling context.SaveChanges() only once, is so that I can determine which row causes a validation error.
I'm glad that my DbContext's custom ValidateEntity() method catches the validation errors, but the problem is that it throws the DbEntityValidationException for the same entity multiple times.
I would like it so that once an object fails validation, the context no longer tries to save it, regardless of how many times context.SaveChanges() is called.
Your question is not a dupe (this is about saving, not loading entities), but you could follow Jimmy's advice above. That is, once an entity is added to the context it is tracked in the "Added" state, and the only way to stop it from being re-validated is to detach it. It's an SO-internal link, but I'll reproduce the code snippet:
dbContext.Entry(entity).State = EntityState.Detached;
However, I don't think that's the way you want to go, because you're using exceptions to manage state unnecessarily (exceptions are notoriously expensive).
Working from the information given, I'd use a more set-based solution:
modify your model class so that it contains a RowID that records the original spreadsheet row (there are probably other good reasons to have this, too)
turn off entity tracking for the context (this turns off change detection, allowing each Add() to be O(1))
add all the entities
call context.GetValidationErrors() and get all your errors at once, using the aforementioned RowID to identify the invalid rows
You didn't indicate whether your process should save the good rows or reject the file as a whole, but this accommodates either: if you need to save the good rows, detach all the invalid rows using the code above and then call SaveChanges(), as in the sketch below.
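A sketch of that set-based flow, assuming EF 6; ToEntity and the RowID property are the hypothetical pieces suggested above:

context.Configuration.AutoDetectChangesEnabled = false; // keeps each Add() at O(1)

foreach (var model in Spreadsheet.Rows)
{
    context.Opportunities.Add(ToEntity(model)); // ToEntity maps a row to an entity and stamps RowID
}

// validate everything in one pass and detach the failures
foreach (var error in context.GetValidationErrors().ToList())
{
    var entity = (Opportunity)error.Entry.Entity;
    Trace.WriteLine(string.Format("Row {0} failed validation.", entity.RowID));
    context.Entry(entity).State = EntityState.Detached; // it will not be validated or saved again
}

context.SaveChanges(); // persists only the rows that passed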
Finally, if you do want to save the good rows but are uncomfortable with the set-based method, it would be better to use a new DbContext for every single row, or at least to create a new DbContext after each error. The ADO.NET team insists that context creation is "relatively cheap" (sorry, I don't have a citation or stats at hand), so this shouldn't hurt your throughput too much, and either way it remains O(n). I wouldn't blame you: managing a large context can open you up to other issues as well.
This is concurrency-related. So SubmitChanges() fails, and a ChangeConflictException is thrown. For each ObjectChangeConflict in db.ChangeConflicts, Resolve is called with RefreshMode.OverwriteCurrentValues. What does this mean?
http://msdn.microsoft.com/en-us/library/bb399354.aspx
Northwnd db = new Northwnd("...");
try
{
    db.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException e)
{
    Console.WriteLine(e.Message);
    foreach (ObjectChangeConflict occ in db.ChangeConflicts)
    {
        // All database values overwrite current values.
        occ.Resolve(RefreshMode.OverwriteCurrentValues);
    }
}
I added some comments to the code, see if it helps:
Northwnd db = new Northwnd("...");
try
{
    // Here we attempt to submit the changes to the database.
    // ContinueOnConflict specifies that all updates to the
    // database should be tried, and that concurrency conflicts
    // should be accumulated and returned at the end of the process.
    db.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException e)
{
    // We got a change conflict, so we need to process it.
    Console.WriteLine(e.Message);
    // There may be many change conflicts (if multiple DB tables were
    // affected, for example), so we need to loop over each
    // conflict and resolve it.
    foreach (ObjectChangeConflict occ in db.ChangeConflicts)
    {
        // To resolve each conflict, we call
        // ObjectChangeConflict.Resolve, passing in OverwriteCurrentValues
        // so that the current values will be overwritten with the values
        // from the database.
        occ.Resolve(RefreshMode.OverwriteCurrentValues);
    }
}
First, you must understand that LinqToSql tracks two states for each database row. The original state and the current state. Original state is what the datacontext thinks is in the database. Current state has your in-memory modifications.
Second, LinqToSql uses optimistic concurrency to perform updates. When SubmitChanges is called, the datacontext sends the original state (as a filter) along with the current state into the database. If no records are modified (because the database's record no longer matches the original state), then a ChangeConflictException is raised.
Third, to resolve a change conflict, you must overwrite the original state so the optimistic concurrency filter can locate the record. Then you have to decide what to do with the current state... You can abandon your modifications (that's what the posted code does), which will result in no change to the record, but you are ready to move on with the current database values in your app.
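For reference, the other RefreshMode values on the same loop; which one you pick decides whose data wins (this reuses the db variable from the sample above):

foreach (ObjectChangeConflict occ in db.ChangeConflicts)
{
    // occ.Resolve(RefreshMode.OverwriteCurrentValues); // discard my edits, take the database values
    // occ.Resolve(RefreshMode.KeepCurrentValues);      // keep all my values, overwrite the database's
    occ.Resolve(RefreshMode.KeepChanges);               // keep my edited fields, merge in everyone else's
}
db.SubmitChanges(); // retry now that the original state matches the database again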
I think it means that if a conflict is detected (see this under computer science), execution goes into the catch block. There it loops through each of the conflicts (the foreach loop) and resets the values to what they were before the change was attempted.
Apparently, any changes you've made to the objects are thrown out, since somebody else has stolen a march on you and updated the database while you were busy. In optimistic concurrency, dropping the changes is the only possible automated solution. However, the user probably won't be too happy if they spent any time inputting the discarded data.
The conflict is probably caused by the fact that the object in your datacontext (the object that stores and tracks changes in .NET code) has different values than the ones in your db.
Let's say you load a person object from the db. One of the fields is firstname, and firstname is "Soo". Now you have a copy of the record in your datacontext. You change some things and want to write the changes to the db, but when the ORM (LINQ to SQL or another) tries to write them, it notices that the firstname in the DB has already been changed.
So someone or something has changed your record. You have a conflict (not a deadlock), and you have to decide which is more important: your changes, or the changes that someone or something else made.
TO THE POINT!!! -> RefreshMode.OverwriteCurrentValues just refreshes the object in your datacontext; it RELOADS the object from the db so that you are working with the updated object.
I hope this was a little clear :)
Greetings