What does StoreGeneratedPattern mean? - c#

I'm doing EF design. Could someone tell me what StoreGeneratedPattern means?
I can't find an easy, straight answer online.

If you look at the same-named enumeration, it tells you what happens when you insert or update rows:
None: No value is generated automatically
Identity: A new value is generated on insert, but not changed on update
Computed: A new value is generated on insert and on update

These answers are also not an easy straight answer and just point to or repeat the same arcane documentation that the OP is referring to.
This attribute is used when the column is computed by the database. So on inserts and updates, the value will not be written.
The value will be read back from the database after inserts and updates, though I would guess that if set to Identity, EF may not read the value after an update since it won't have changed. Whether it really makes that tiny optimisation I don't know.
An example might be an identity column or a last updated time-stamp.
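In code-first EF, the same three options surface as DatabaseGeneratedOption, which may make the meaning more concrete; a minimal sketch (the entity and property names are invented):

```csharp
using System.ComponentModel.DataAnnotations.Schema;

public class Order
{
    // Identity: the database generates the value on INSERT,
    // and EF never writes it on UPDATE
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public int Id { get; set; }

    // Computed: the database regenerates the value on every INSERT
    // and UPDATE (e.g. a rowversion or computed column), so EF
    // reads it back after both operations
    [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
    public byte[] RowVersion { get; set; }

    // None: EF writes exactly what you set; nothing is generated
    [DatabaseGenerated(DatabaseGeneratedOption.None)]
    public string Reference { get; set; }
}
```

In the EDMX designer, setting StoreGeneratedPattern on a column has the same effect as these annotations.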


What's causing a Concurrency Violation when using a MySQL linked Dataset via TableAdapters?

The more I read on this, the more confused I get, so hope someone can help. I have a complex database setup, which sometimes produces the error on update:
"Concurrency violation: the UpdateCommand affected 0 of the expected 1 records"
I say sometimes, because I cannot recreate conditions to trigger it consistently. I have a remote MySQL database connected to my app through the DataSource Wizard, which produces the dataset, tables and linked DataTableAdapters.
My reading suggests that this error is meant to occur when there is more than one open connection to the database trying to update the same record? This shouldn't happen in my instance, as the only updates are sequential from my app.
I am wondering whether it has something to do with running the update from a background worker? I have my table updates in one, for example, thusly:
Gi_gamethemeTableAdapter.Update(dbDS.gi_gametheme)
Gi_gameplaystyleTableAdapter.Update(dbDS.gi_gameplaystyle)
Gi_gameTableAdapter.Update(dbDS.gi_game)
These run serially in the backgroundworker, however, so unsure about this. The main thread also waits for it to finish, and there are no other db operations going on before or after this is started.
I did read about going into the dataset designer view, choosing "configure" in the datatableadapter > advanced options and setting "Use optimistic concurrency" to false. This might have worked (hard to say because of the seemingly random nature of the error), however, there are drawbacks to this that I want to avoid:
I have around 60 tables. I don't want to do this for each one.
I sometimes have to re-import the mysql schema into the dataset designer, or delete a table and re-add it. This would obviously lose this setting and I would have to remember to do it on all of them again, potentially. I also can't find a way to do this automatically in code.
I'm afraid I'm not at code level in terms of the database updates etc, relying on the Visual Studio wizards. It's a bit late to change the stack as well (e.g. can't change to Entity Framework etc).
So my question is:
what is/how can I find what's causing the error?
What can I do about it?
thanks
When you have tableadapters that download data into datatables, they can be configured for optimistic concurrency.
This means that for a table like:
Person
ID Name
1 John
They might generate an UPDATE query like:
UPDATE Person SET Name = @newName WHERE ID = @oldID AND Name = @oldName
(In reality they are more complex than this but this will suffice)
Datatables track original values and current values: you download 1/"John" and then change the name to "Jane"; you (or the tableadapter) can ask the DataTable what the original value was and it will say "John"
The datatable can also feed this value into the UPDATE query and that's how we detect "if something else changed the row in the time we had it" i.e. a concurrency violation
The row was "John" when we downloaded it, we edited it to "Jane", and went to save.. but someone else had been in and changed it to "Joe". Our update will fail because Name is no longer the "John" it was (and that we still think it is) when we downloaded it. By dint of the tableadapter having an update query that says AND Name = @oldName, and setting the @oldName parameter to the original value someDataRow["Name", DataRowVersion.Original] (i.e. "John"), we cause the update to fail. This is a useful thing; mostly updates will succeed, so we can opportunistically hope our users can update our db without us needing to lock rows while they have them open in some UI
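The original/current bookkeeping described above can be seen directly on a DataRow; a small sketch (table and values invented):

```csharp
using System.Data;

var table = new DataTable("Person");
table.Columns.Add("ID", typeof(int));
table.Columns.Add("Name", typeof(string));

DataRow row = table.Rows.Add(1, "John");
table.AcceptChanges();   // simulate "this row came from the database"

row["Name"] = "Jane";    // the user's edit

// the tableadapter feeds the Original value into "AND Name = @oldName"
var original = row["Name", DataRowVersion.Original]; // "John"
var current  = row["Name", DataRowVersion.Current];  // "Jane"
```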
Resolving the cases where it doesn't work is usually a case of coding up some strategy:
My changes win - don't use an optimistic query that features old values, just UPDATE and erase their changes
Their changes win - cancel your attempts
Re-download the latest DB state and choose what to do - auto merge it somehow (maybe the other person changed fields you didn't), or show the user so they can pick and choose what to keep etc (if both people edited the same fields)
Now you're probably sat there saying "but no one else changes my DB" - we can still get this, though, if the database has changed some values upon a save and you don't have the latest ones in your dataset..
There's another option in the tableadapter wizard - "refresh the dataset" - it's supposed to run a SELECT after a modification to import any latest database-calculated values (like auto-increment primary keys or triggers/defaults/etc). A query like INSERT INTO Person(Name) VALUES(@name) is supposed to silently have a SELECT * FROM Person WHERE ID = last_insert_id() tagged on the end of it to retrieve the latest values
Except "refresh the dataset" doesn't work :/
So, while I can't tell you exactly why you're getting your CV exception, I hope that explaining why they occur, and pointing out that there are sometimes bugs that cause them (insert new record, calculated ID is not retrieved, edit this recent record, update fails because the data wasn't fresh), will arm you with what you need to find the problem. When you get one, keep the app stopped on the breakpoint and inspect the datarow: take a look at the query being run and what original/current values are being put in as parameters, inspect the original and current values held by the row (using the overload of the Item indexer that lets you state the version you want), and look in the DB
Somewhere in all of that there will be the mismatch that explains why 0 records were updated - the db has "Joe" as the name or 174354325 as the ID, your datarow has "John" as the original name or -1 as the ID (it never refreshed), and the WHERE clause is finding 0 records as a result
Some of your tables will contain a field that is marked as a [ConcurrencyCheck] or [Timestamp] concurrency token.
When you update a record, the SQL generated will include a WHERE [ConcurrencyField]='Whatever the value was when the record was retrieved'.
If that record was updated by another thread or process or something other than the current thread, then your UPDATE will return 0 records updated, rather than the 1 (or more) that was expected.
What can you do about it? Firstly, put a try/catch (DBConcurrencyException) around your code. Then you can re-read the offending record and try the update again.
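Using the adapter names from the question, a hedged sketch of that catch-and-retry shape:

```csharp
try
{
    Gi_gameTableAdapter.Update(dbDS.gi_game);
}
catch (DBConcurrencyException ex)
{
    // ex.Row is the DataRow whose optimistic WHERE clause matched 0 records
    DataRow failed = ex.Row;

    // one simple strategy: throw away the local edit, re-download the
    // latest database state, and let the user try again ("their changes win")
    dbDS.gi_game.RejectChanges();
    Gi_gameTableAdapter.Fill(dbDS.gi_game);
}
```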

How to cheatedly update identity column?

In the table, there's an identity column incrementing by one each time we add a row (the lowest value is 1). Now, there's a new requirement - we need to implement soft deletion.
So, my idea was basically to multiply the ID of any softly deleted row by -1. That ensures uniqueness and clearly draws a line between active and softly deleted items.
update Things set Id = -101
where Id = 101
Would you know, the stupid computer doesn't let me do that. The suggested workaround is to:
alter the column (drop the identity property)
perform the update
alter the column back
and to me it seems like a Q&D hack. However, the only alternative I can see is to add a new column carrying the deletion status.
I'm using EF to perform the work with the extra quirk that when I changed the value of the identity and stored it, the software was kind enough to think for me and actually create a new row (with incrementally set identity that was neither the original one, nor the negative of it).
How should I tackle this issue?
I would strongly discourage you from overloading your identity column with any additional meaning. Someone who will look at your database table for the first time has no way of knowing that a negative ID means "deleted".
Introducing a new column Deleted BIT NOT NULL DEFAULT 0 does not have this disadvantage. It is self-explanatory. And it costs almost nothing: in the times of Big Data, an additional BIT column isn't going to fill your hard disk.
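A sketch of that alternative, assuming the Things table from the question:

```sql
ALTER TABLE Things ADD Deleted BIT NOT NULL DEFAULT 0;

-- soft delete: no identity gymnastics required
UPDATE Things SET Deleted = 1 WHERE Id = 101;

-- active rows only
SELECT * FROM Things WHERE Deleted = 0;
```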
All of that being said, if you still want to do that, you could try to SET IDENTITY_INSERT dbo.Things ON before you UPDATE. (I cannot currently verify whether that would work, though.)

How would I audit a MVC3 .NET application?

I need a way to audit an MVC3 EF application, capturing the following values:
Timestamp
Field Name
Old Value
New Value
I think I've wrongfully done the binding manually, and as a result the whole row is updated after an edit (so a trigger would assume everything is being updated)... therefore I'd rather avoid DB triggers, as that would need a re-write of all the binding.
I would imagine, if I can capture the old values (somehow), and then compare to the new values, I can populate an audit table with the above fields.
Any advice on this would be much appreciated.
Depending on the version of SQL Server you're using, you could look into Change Data Capture.
You can subscribe to the SavingChanges event of the context. Here is an example... Change History in MVC and EF
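For an ObjectContext-based EF model (typical for MVC3-era EF), the hook looks roughly like this; the context name and the audit call itself are assumptions:

```csharp
using System.Data;
using System.Data.Objects;

public partial class MyEntities // the generated ObjectContext
{
    partial void OnContextCreated()
    {
        SavingChanges += (sender, e) =>
        {
            foreach (ObjectStateEntry entry in
                ObjectStateManager.GetObjectStateEntries(EntityState.Modified))
            {
                foreach (string field in entry.GetModifiedProperties())
                {
                    object oldValue = entry.OriginalValues[field];
                    object newValue = entry.CurrentValues[field];
                    // write DateTime.Now, field, oldValue, newValue
                    // to your audit table here
                }
            }
        };
    }
}
```

GetModifiedProperties only returns fields whose value actually changed, which sidesteps the "everything looks updated" problem the question mentions with triggers.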

Is it considered bad form to delete a nonexistent record in Oracle?

I see that I can "Delete" a record that doesn't exist with impunity; but are there any hidden dangers in this?
If it would be better to first check to see if the record exists, is there some ultra-fast way to do that?
IOW, is there a way to quickly perform this pseudo-SQL:
if recordExists(table, rowval[s])
deleteRecord
?
In general, there's no real reason to check if something exists before you delete it. SQL is a set-based language, it's perfectly valid to delete the elements of an empty set
To check if something exists requires a look-up and, at worst, you'll have to do the same lookup again to delete. The only time an existence check is good form is when the statement could cause an error if the condition isn't met (statements that modify the schema come to mind)
No, there is no reason to not use a normal DELETE statement to delete a row that may or may not be there:
DELETE FROM Table WHERE Id = 234
This will either delete the specified row, or it won't. In the former case, the update count will be 1 and in the latter case it will be 0. You can use this to perform any additional logic in the case where the record did exist.
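The same update count is available from plain ADO.NET as the return value of ExecuteNonQuery; a provider-agnostic sketch (table and parameter names are invented):

```csharp
using (IDbCommand cmd = connection.CreateCommand())
{
    cmd.CommandText = "DELETE FROM Things WHERE Id = :id"; // Oracle-style bind

    IDbDataParameter p = cmd.CreateParameter();
    p.ParameterName = "id";
    p.Value = 234;
    cmd.Parameters.Add(p);

    int deleted = cmd.ExecuteNonQuery(); // 0 if the row wasn't there, 1 if it was
    if (deleted == 0)
    {
        // nothing existed; run your "record did not exist" logic here
    }
}
```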
So long as you are willing to accept the DELETE action if the record exists you are free to delete away - record or not.
Just to add, the other danger of performing an existence check prior to the "real" query is that the visible version of the data can change between queries unless you've taken the precaution of changing the read consistency to serializable.

Best Practice - Handling multiple fields, user roles, and one stored procedure

I have multiple fields, both asp:DropDownList's and asp:TextBox's. I also have a number of user roles that change the Visible property of certain controls so the user cannot edit them. All of this data is saved with a stored procedure call on PostBack. The problem is that when I send in the parameters and a control was not on the page, there obviously isn't a value for it, so in the stored procedure I have the parameters initialized to null. However, the previous value in the database, which I didn't want changed, is then overwritten with null.
This seems to be a pretty common problem, but I didn't have a good way of explaining it. So my question is, how should I go about keeping some fields from being on the page but also keeping the values in the database all with one stored procedure?
Apply the same logic when choosing what data to update as the logic you're actually using when choosing what data (and its associated UI) to render.
I think the problem is you want to do the update of all fields in a single SQL update, regardless of their value.
I think you should do some sanity check of your input before your update, even if that implies doing individual updates for certain parameters.
Without an example, it is a little difficult to know your exact circumstances, but here is a fictitious statement that will hopefully give you some ideas. It is using t-sql (MS SQL Server) since you did not mention a specific version of SQL:
UPDATE SomeImaginaryTable
SET FakeMoneyColumn = COALESCE(@FakeMoneyValue, FakeMoneyColumn)
WHERE FakeRowID = @FakeRowID
This basically updates the column to the parameter value, unless the parameter is null, in which case it keeps the column's existing value.
Generally, to overcome this in my update function:
I would load the current values for the user
Replace any loaded values with the newly changed values from the form
Update in the db.
This way I have all the current values, plus everything that has been changed gets changed.
This logic will also work for an add form because all the fields would be null then get replaced with a new value before being sent to the db. You would of course just have to check whether to do an insert or update.
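The three steps above can be sketched like this (entity, control, and method names are all invented):

```csharp
// 1) load the current values so hidden fields keep what's in the db
var user = LoadUser(userId);

// 2) overwrite only the fields this role was allowed to see and edit
if (NameTextBox.Visible)
    user.Name = NameTextBox.Text;
if (RoleDropDownList.Visible)
    user.Role = RoleDropDownList.SelectedValue;

// 3) one stored procedure call; untouched fields round-trip unchanged
SaveUser(user);
```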
