Let's say I have a TestClass in my C# app with properties A and B.
I change the value of B in my code and leave property A unchanged.
Then I update TestClass in the database via SimpleRepository's Update method.
As far as I can tell, it also updates property A's value in the database.
It is easy to test: I change the value of A in my database outside my app ('by hand'), then I run the update from my app. The value of property A changes back to whatever TestClass holds in my app.
So, my question: is it possible to update only some properties, rather than the whole class, with SimpleRepository? Is there some kind of 'IgnoreFields' option?
What you need is optimistic concurrency on your UPDATE statement, not a way to exclude certain fields. In short, that means a WHERE clause is appended to your UPDATE statement that ensures the values of the fields in the row are in fact what they were when the last SELECT was run.
So, let's assume in your example I selected some data and the values for A and B were 1 and 2 respectively. Now let's assume I want to update B (the statement below is just an example):
UPDATE TestClass SET B = '3' WHERE Id = 1;
However, instead of running that statement (which offers no concurrency protection), let's run this one:
UPDATE TestClass SET B = '3' WHERE Id = 1 AND A = '1' AND B = '2';
That statement now ensures the record hasn't been changed by anybody.
However, at the moment it doesn't appear that SubSonic's SimpleRepository supports any type of concurrency, and that's going to be a major downfall. If you're looking for a very straightforward repository library that works with POCOs, I'd recommend Dapper. In fact, Dapper is used by Stack Overflow itself. It's extremely fast, and because you send down parameterized SQL statements, it easily allows you to build concurrency into your update statements.
This Stack Overflow article is an overall guide on how to use Dapper for all CRUD operations.
This Stack Overflow article shows how to perform inserts and updates with Dapper.
NOTE: with Dapper you could actually do exactly what you're asking as well, because you send down plain SQL statements; I just wouldn't recommend skipping the concurrency check.
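To make that concrete, here is a minimal sketch of a concurrency-checked update with Dapper, using the question's TestClass table; the connection string, helper name, and column types are assumptions for illustration:

using System.Data.SqlClient;
using Dapper;

public static class TestClassRepository
{
    // Updates B only if the row still holds the values we read earlier.
    // Returns true when exactly one row was updated, false when a
    // concurrent change caused the WHERE clause to match nothing.
    public static bool TryUpdateB(string connectionString, int id,
                                  string originalA, string originalB, string newB)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            int rowsAffected = connection.Execute(
                "UPDATE TestClass SET B = @NewB " +
                "WHERE Id = @Id AND A = @OriginalA AND B = @OriginalB",
                new { NewB = newB, Id = id, OriginalA = originalA, OriginalB = originalB });
            return rowsAffected == 1;
        }
    }
}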
Don't call the Update method on the data object in such cases; doing so indicates that the whole object has changed and needs to be updated in the DB.
So SubSonic will generate a query like
UPDATE TestClass SET A ='', B='', ModifiedOn = 'DateHere' WHERE PrimaryKey = ID
To change only property B you need to construct the UPDATE query manually.
Have a look at the Subsonic.Update class.
Ideally you shouldn't create a new instance of the data object manually; if you do, make sure the values are copied from the object returned by the Subsonic.Select query.
That way, when you update even a single property, all the other properties will hold their values from the DB rather than a default value that depends on the property's type.
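If you'd rather not dig into Subsonic.Update, a plain ADO.NET sketch of a single-column update looks like this; the table and column names are taken from the generated query above, and the connection string is an assumption:

using System.Data.SqlClient;

public static class TestClassUpdater
{
    public static int UpdateOnlyB(string connectionString, int id, string newB)
    {
        // Update B (and the audit column) while leaving A untouched.
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "UPDATE TestClass SET B = @B, ModifiedOn = GETUTCDATE() WHERE PrimaryKey = @Id",
            connection))
        {
            command.Parameters.AddWithValue("@B", newB);
            command.Parameters.AddWithValue("@Id", id);
            connection.Open();
            return command.ExecuteNonQuery(); // rows affected
        }
    }
}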
I am creating a new contact record and I need to set the lastusedincampaign field during creation, but my entity is being created with an empty lastusedincampaign field.
I can only set it programmatically with the Update method after Create.
Where could the problem be?
P.S.: Creating and then updating my entity record is not a good idea, because I have about 4k entity records to create at once.
UPDATE 1 (test code):
Entity contact = new Entity("contact");
contact["fullname"] = "New contact";
contact["lastusedincampaign"] = DateTime.UtcNow;
CrmHelper.InitializeCrmService().Create(contact);
It looks like CRM ignores the lastusedincampaign attribute during the create operation, as it does a few others. If you do not want to perform create and update operations at the same time, why not create a temporary workflow that runs asynchronously and updates the field's value? That way the async server takes most of the load and the record creations are faster.
Side note: 4k records is not an awfully large number to create and update simultaneously; I have worked with records in the tens of thousands and CRM never bottlenecked on me.
The behavior you described is expected. If you check the Metadata for the attribute lastusedincampaign you will find that the field has IsValidForCreate set to false and IsValidForUpdate set to true.
You must update the record after you create it if you want to fill that field.
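You can confirm this from code by querying the attribute's metadata. Here is a minimal sketch using the question's own CrmHelper (assumed to return an IOrganizationService):

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

IOrganizationService service = CrmHelper.InitializeCrmService();

var request = new RetrieveAttributeRequest
{
    EntityLogicalName = "contact",
    LogicalName = "lastusedincampaign"
};

var response = (RetrieveAttributeResponse)service.Execute(request);
AttributeMetadata metadata = response.AttributeMetadata;

// Expected output: IsValidForCreate = False, IsValidForUpdate = True,
// which is why the value only sticks via a post-create Update.
Console.WriteLine("IsValidForCreate: {0}", metadata.IsValidForCreate);
Console.WriteLine("IsValidForUpdate: {0}", metadata.IsValidForUpdate);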
I am designing a DAL.dll for a web application. The scenario is that on the web the user gets an entity, modifies some fields, and clicks save. My problem is how to make sure that only the modified fields are saved.
For Example, an entity:
public class POCO
{
    public int POCOID { get; set; }
    public string StringField { get; set; }
    public int IntField { get; set; }
}
and my update interface
// rows affected
int Update(POCO entity);
When only IntField is modified, StringField is null, so I can ignore it. However, when only StringField is modified, IntField is 0, i.e. default(int), so I cannot determine whether it should be ignored or not.
Some limitations:
1. Stateless, no session, so I cannot use "get and update", a context, etc.
2. To stay consistent with the data model, I cannot use a nullable "int?".
Just a tip: if negative numbers are not allowed by your business requirements, you can use -1 to indicate that the value does not apply.
I don't really understand how you want to work stateless but update only changed properties. It will never work while stateless, since you need a before-after comparison or some other way to track changes (like events on property setters). Special "virgin" values are not a good solution, since I think your user wants to see the actual IntField value.
Also, make your database consistent with your application data: if you have a standard, non-nullable int, make the DB column int not null default 0! It is a real pain to have a database value that can't be represented by the program, so that the software "magically" turns DB NULL into 0. If you have a non-nullable int in your application, you can't distinguish between DB NULL and zero, or you have to add a property like bool IsIntFieldNull (no good!).
To reference a common object-relational mapper, NHibernate: it has an option called dynamic-update, where only changed properties/columns are updated. This requires a before-after check and stateful sessions, however, and there is debate on whether it helps performance, since sending the same DB query every time (with different parameter values) can be better than sending multiple different queries, weighed against unnecessary updates and network load. By default, NHibernate updates the whole row after checking whether any change has been made. If you only have ID, StringField and IntField, dynamic-update might in fact be a better choice than a full-row update.
Mapping nullable DB columns to non-nullable application data types, such as int, is a common mistake when implementing NHibernate, since it creates self-changing DAL objects.
Whichever way you go, working with an ORM or writing your own DAL, make sure you have proper database knowledge!
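For reference, dynamic-update is enabled per class mapping. A minimal sketch using Fluent NHibernate (an assumption on my part; the plain XML mapping equivalent is the dynamic-update="true" attribute on the class element):

using FluentNHibernate.Mapping;

public class POCOMap : ClassMap<POCO>
{
    public POCOMap()
    {
        Table("POCO");
        DynamicUpdate(); // emit SET clauses only for changed columns
        Id(x => x.POCOID);
        Map(x => x.StringField);
        Map(x => x.IntField);
    }
}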
Options
Many ORMs (object-relational mappers) provide this type of functionality. You define your objects to work with, say, Entity Framework or NHibernate; these ORMs take care of reading from and writing to the database, and internally they have their own mechanisms to keep track of what has been modified.
Look into Delta<T> (right now it's an OData thing, so it may not be directly usable, but you can learn from it).
Make your own: have some type of base class that all your other objects inherit from, which somehow records which fields were set (see the sketch after this list).
I highly recommend not relying on null or magic numbers (-1) to keep track of this. You will create a nightmare for yourself.
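Here is a minimal sketch of that last option, assuming the POCO from the question; the base class and member names are illustrative:

using System.Collections.Generic;

public abstract class TrackedEntity
{
    private readonly HashSet<string> _changed = new HashSet<string>();

    // The DAL can build an UPDATE statement from this list alone.
    public IReadOnlyCollection<string> ChangedProperties => _changed;

    protected void SetField<T>(ref T field, T value, string propertyName)
    {
        if (!EqualityComparer<T>.Default.Equals(field, value))
        {
            field = value;
            _changed.Add(propertyName);
        }
    }
}

public class POCO : TrackedEntity
{
    private int _pocoId;
    private string _stringField;
    private int _intField;

    public int POCOID
    {
        get { return _pocoId; }
        set { SetField(ref _pocoId, value, nameof(POCOID)); }
    }

    public string StringField
    {
        get { return _stringField; }
        set { SetField(ref _stringField, value, nameof(StringField)); }
    }

    public int IntField
    {
        get { return _intField; }
        set { SetField(ref _intField, value, nameof(IntField)); }
    }
}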
I am using ASP.NET to build an application for a retail company, with Entity Framework (model-first) as my data access layer. I am using stored procedures for my CRUD operations; all columns are mapped and everything seems correct, as all CRUD functionality works as expected.
But I am having concurrency issues with the DELETE operation.
I've added a TimeStamp column to the table I am performing the CRUD operations on. The UPDATE operation works fine, as it updates by primary key and by the TimeStamp value. Thus, if no rows are affected by the UPDATE operation because of a change in the TimeStamp value, Entity Framework throws an OptimisticConcurrencyException.
The DELETE operation works on the same principle, deleting by primary key and TimeStamp value. But no exception is thrown when the TimeStamp value does not match between the entity and the database.
In the C# delete method I retrieve the latest record first and then set the TimeStamp property to a different TimeStamp value. After some investigation with SQL Profiler, I can see that the DELETE stored procedure is executed, but the TimeStamp parameter passed to it is the latest retrieved TimeStamp value, not the value I assigned to the property. Thus the record is deleted and Entity Framework does not throw an exception.
Why would Entity Framework pass the retrieved TimeStamp value to the stored procedure rather than the value I assigned to the property? Is this by design, or am I missing something?
Any help will be appreciated! (where is Julie Lerman when you need her! :-))
Optimistic concurrency in EF works fine. Even with stored procedures.
ObjectContext.DeleteObject passes the original values of the entity to the delete function. This makes sense: original values are used to identify the row to delete. When you delete an object, you don't (usually) have meaningful edits to your entity. What would you expect EF to do with them? Write them? To what record?
One legitimate use for passing modified data to the delete function is when you want to track deletes in some other table and you need to throw in some information that is not accessible at the database layer, only at the business layer. Examples include the application-level user name or the reason for the delete. In this situation you need to construct the entity with those values as its original values. One way to do it:
var x = db.MyTable.Single(k => k.Id == id_to_delete);
x.UserName = logged_in_user;
x.ReasonForChange = some_reason;
// [...]
// Reset the state so the values set above become the entity's "original" values.
db.ObjectStateManager.ChangeObjectState(x, EntityState.Unchanged);
db.MyTable.DeleteObject(x);
db.SaveChanges();
Of course, better strategy might be to do it openly in business layer.
I don't understand your use case with rowversion/timestamp. To avoid concurrency issues you pass the original timestamp to the modifying code, so that it can be compared with the current value in the database to detect whether the record has changed since you last read it. Comparing it with a new value makes little sense. You usually use change markers that are automatically updated by the database, like rowversion/timestamp in SQL Server, ORA_ROWSCN in Oracle or xmin in PostgreSQL; you don't change their values in your code.
Still, if you maintain the row version manually, you need to provide:
a) the new version for insert and update to be written, and
b) the old version (read from the database) for update and delete to check for concurrent changes.
You don't send a new value to delete. You don't need to.
Also, when using stored procedures for modification, it's better to compute the new version in the procedure and return it to the application, not the other way around.
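For illustration, here is a minimal ADO.NET sketch of point (b) applied to a delete; the table and column names are assumptions:

using System;
using System.Data.SqlClient;

public static class VersionedDelete
{
    public static void DeleteWithConcurrencyCheck(string connectionString, int id,
                                                  byte[] originalRowVersion)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "DELETE FROM MyTable WHERE Id = @Id AND RowVersion = @OriginalVersion",
            connection))
        {
            command.Parameters.AddWithValue("@Id", id);
            command.Parameters.AddWithValue("@OriginalVersion", originalRowVersion);
            connection.Open();
            if (command.ExecuteNonQuery() == 0)
            {
                // Zero rows affected: the row was changed or removed since we read it.
                throw new InvalidOperationException(
                    "The record was modified or deleted by another user.");
            }
        }
    }
}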
Hard to tell without seeing any code, but maybe when the postback occurs the page is being re-bound before your delete method fires? In whatever method databinds the form controls (I assume it's OnLoad or OnInit), have you wrapped the databinding calls in if (!this.IsPostBack) { ... }?
Also, I'm not sure if there's a reason why you're explicitly storing the concurrency flag in viewstate/session variables, but a better way to do it (imo) is to add the timestamp to the DataKeyNames property of the FormView/GridView (i.e. <asp:FormView ID='blah' runat='server' DataKeyNames='Id, Timestamp'>).
This way you don't have to worry about manually storing or updating the timestamp. ;)
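A delete handler can then read both keys back out; this is a sketch assuming the FormView markup above, and DeleteEntry is a hypothetical data-layer call:

protected void blah_ItemDeleting(object sender, FormViewDeleteEventArgs e)
{
    // DataKeyNames='Id, Timestamp' makes both values round-trip automatically.
    var id = (int)blah.DataKey["Id"];
    var timestamp = (byte[])blah.DataKey["Timestamp"];

    // Pass both down so the DELETE's WHERE clause can check concurrency.
    DeleteEntry(id, timestamp);
}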
I am trying to use the Attach method to update an entity that was retrieved via a stored proc.
The stored proc is set up to return a specific entity type, which is present in my dbml. The retrieval works as expected and returns a fully populated object. The reason I need to use a stored proc is that I need to update a property on that entity at the same time that it is retrieved.
After I have retrieved this entity, I map it using AutoMapper to another model which is used in another tier of the app. This tier performs a few operations, makes a change to the entity, and passes it back to the repository for updating.
The repository converts this business model back into a database model and attempts to attach it to the DataContext in order to take advantage of the automagic updating.
No matter what combination I try (Attach(entity, true), Attach(entity), etc.), I get messages like "Row not found or changed" or "Unable to add an entity with the same primary key".
Does anyone have any experience with the Attach method and how it can be used to update entities that did not necessarily come from the DataContext via query syntax (i.e. in this case a stored proc)?
Thanks a lot
First, if you are creating a copy of the object, making changes and then trying to attach the copied object to the same DataContext as the one with the original object in it, then this would probably result in the "Unable to add an entity with the same primary key" message. One way to handle this is:
1. Get object from DataContext
2. Make changes and map object (or vice versa - whatever order)
3. Update the original object with the new values made in the other tier
4. SubmitChanges on the DataContext containing the original object
or:
1. Get the object from a DataContext and close the DataContext
2. Make your changes and do your mapping
3. Retrieve the object from the DataContext to which you want to save
4. Update that object with the values from your mapped object
5. SubmitChanges
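A minimal sketch of that second approach, reusing names from the question's pseudo-code below (MyDataContext, the Entities table, and the copied properties are assumptions):

using System.Linq;

public void Persist(Models.AppModel model)
{
    using (var db = new MyDataContext())
    {
        // Re-fetch the row in the context we intend to save with...
        var dbEntity = db.Entities.Single(x => x.Id == model.Id);

        // ...copy the changed values onto the tracked instance...
        dbEntity.Status = model.Status;
        dbEntity.Name = model.Name;

        // ...and let the context diff and update only what changed.
        db.SubmitChanges();
    }
}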
Alternately, when you say you are using the proc because you need to update a property at the same time that you retrieve it: I'd need to see the proc, but if you are somehow committing this update after retrieving the information, then the "Row not found or changed" message is indeed correct. This would be hard to do, but you could do it if you're loading the data into a temp table, doing the update, and then using a select from the temp table to populate the object. One thing you could try is setting that property's Update Check to Never in the L2S designer and seeing if that makes the problem go away. If so, this is your problem.
1: is it the same DataContext, and
2: is it the same entity instance (or one that looks like it)?
This would only happen with the same DataContext, I suspect. If it is the same entity, then it is already there; just call SubmitChanges. Otherwise, either use a second DataContext or detach the original entity.
So if I retrieved the entity via a stored proc, is it being tracked by the DataContext?
The thing is, I'm going from the data model to another model that is used by another component, and then back. It's not really the same instance, but it does have all the same properties.
For example:
public Models.AppModel GetEntity()
{
    var dbEntity = db.PROJ_GetEntity((int)EntityStatuses.Created, (int)EntityStatuses.CreatingInApi).SingleOrDefault();
    return FromDbEntity(dbEntity);
}
var appModel = GetEntity(); // gets an Entity from a stored proc (NOT GetEntity_RESULT)
appModel.MakeSomeChanges();
_Repo.Persist(appModel);
public void Persist(Models.AppModel model)
{
var dbEntity = Mapper.Map(model);
db.Attach(dbEntity);
db.SubmitChanges();
}
This is somewhat pseudo-code, but it demonstrates pretty much exactly what I am doing.
Thanks
I'm upvoting weenet's answer because he's right: you can't use Attach to apply the changes.
Unlike Entity Framework, you can only attach an L2S object to a DataContext if it has never been attached before, i.e. it's a newed entity that you want to insert into a table.
This does cause numerous problems in multi-layered environments; however, I've been able to get around many of the issues by creating a generic entity synchronisation system, which uses reflection and expression trees.
After an object has been modified, I run the dynamic delegate against a new object from the DC and the modified object, so that only the differences are tracked in the DC before the Update statement is generated. It does get a bit tricky with related entities, though.
I have a data object (let's say it's called 'Entry') that has a set of potential states that look something like this:
1 - Created
2 - File added
3 - Approved
4 - Invalid
This is represented in the database with a 'Status' table with an autonumber primary key, then a 'StatusId' field in the main table, with the appropriate relationships set up.
In my (custom) data layer I have the 'Entry' object, and currently I also declare an enum with the states listed above. Finally, I declare a private instance of this enum along with the appropriate public property.
In my 'Commit()' method I cast the instance of the Enum to an integer and pass it to an Update stored procedure.
In my static 'GetEntry()' method I will obviously have an integer passed back from the database. I then use the 'Enum.Parse()' method to extract an object which is an instance of my enum corresponding to the returned status integer, cast it to the type of my enum, and assign it to the private variable.
My question is pretty simple: is this approach appropriate, and if not, what alternative (other than just storing the raw integer value, which I'm not necessarily averse to) is better?
My reason for asking is that this all just seems incredibly messy to me, what with all the casting and maintaining two lists of the same set of values. I accept that the benefit is a better experience for the consumer of the data object, but even so...
Thanks!
We have something similar in one of our projects.
We have a table containing the types of items. These types have an id; in the code we have an enum with the same ids.
The thing is, in the database we don't use autonumber (identity), so we have full control over the ids.
When saving our object we just take the id of the enum value.
I also thought this approach was messy, but it's not that bad after all.
That method seems fine to me.
In the past I've done the same thing, but I also had a table containing a row for each member of the enum; that table was then the foreign key for any table that used the enum value, just so someone reading the database could understand what each status was without having to see the actual enum.
For example, if I had an enum like
enum Status
{
    Active,
    Deleted,
    Inactive
}
I would have a table called status that would have the following records
ID Name
0 Active
1 Deleted
2 Inactive
That table would then be the foreign key to any tables that used that enum.
Yup, this is fine!
PLEASE always explicitly set the values, like this. That way, if someone ever goes to add something, they'll realize the values are important and shouldn't be messed with.
enum Status
{
    Active = 1,
    Deleted = 2,
    Inactive = 3
}
If you're passing the value around via WCF I'd recommend adding
NULL = 0
Otherwise, if you try to serialize a 0 coming from the database, you'll get a horrible error that will take you forever to debug.
The database lookup table is necessary; the programmatic enum is a convenience to avoid having 'magic numbers' in the code.
If your code does not need to manipulate the status, however, then the enum is unnecessary.
I use this approach with enums all the time. If it is a simple item like a status that is not expected to change, I prefer the enum. The parsing and casting is a very low-impact operation.
I have been doing this successfully with LINQ to SQL for some time now with no issues; LINQ will actually convert from enum to int and back automatically.
Code is about more than just speed, it's also about readability, and enums make code readable.
To answer your question directly: this is a very valid approach.
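For reference, here is a minimal sketch of the round trip described in the question; the EntryStatus name and its members are taken from the states listed there, with explicit values as recommended above:

using System;

public enum EntryStatus
{
    Created = 1,
    FileAdded = 2,
    Approved = 3,
    Invalid = 4
}

public static class EntryStatusMapper
{
    // On the way in: the stored procedure parameter is just the integer value.
    public static int ToDb(EntryStatus status)
    {
        return (int)status;
    }

    // On the way out: validate the integer before casting, so an unexpected
    // database value fails loudly instead of producing an undefined enum.
    public static EntryStatus FromDb(int statusId)
    {
        if (!Enum.IsDefined(typeof(EntryStatus), statusId))
            throw new ArgumentOutOfRangeException("statusId",
                "No EntryStatus member matches the database value.");
        return (EntryStatus)statusId;
    }
}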
If your code requires setting known 'Status' values (which you've defined in your enum), then it's probably also a requirement that those 'Status' values exist in the database. Since they must exist, you should also have control over the StatusId assigned to each of those values.
Drop the identity and just explicitly set the lookup value IDs.