How to use a T-SQL timestamp with LINQ to Entities (EF4) - C#

I currently have a project where we are trying to migrate some data from a client database into our central DB store. We are then going to expose the data back to the client via web service methods.
One thing we would like to do is make use of a T-SQL timestamp (or rowversion) column on the data, so the client can check their local version of the data against ours, and for instance call a method saying "give me all the data with a version > 10"
This is proving a little problematic for us because Entity Framework interprets the timestamp column as a byte array, so we can't figure out the best way to write a LINQ query that returns all rows where version > X, given that the type of Version is Byte[].
In pseudocode:
getData(int checkVersion)
{
    return Shoppers.Where(s => s.Version > checkVersion).ToList();
}
// ==> need to convert the Version column from Byte[] to int somehow in LINQ to Entities
One suggestion would be to create a new computed column in the table of type bigint which converts the version (timestamp) column.
I wonder if there is actually a way to do this in LINQ though, without introducing another column into the table?
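One workaround, sketched below on the assumption that Shoppers exposes the rowversion as a Byte[] property named Version (as in the pseudocode above): SQL Server returns a rowversion as eight big-endian bytes, so it can be converted to a long on the client after materialization. LINQ to Entities cannot translate BitConverter, so the comparison has to run in memory:
public List<Shopper> GetData(long checkVersion)
{
    return Shoppers
        .AsEnumerable() // switch to LINQ to Objects; the filter below runs client-side
        .Where(s => BitConverter.ToInt64(s.Version.Reverse().ToArray(), 0) > checkVersion)
        .ToList();
}
Note that this pulls every row across the wire before filtering, so for large tables the computed bigint column suggested above is the better choice, since the comparison then runs on the server.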

I think you will like this MSDN article:
By default, the Entity Framework implements an optimistic concurrency model. This means that locks are not held on data in the data source between when the data is queried and the data is updated. The Entity Framework saves object changes to the database without checking for concurrency. For entities that might experience a high degree of concurrency, we recommend that the entity define a property in the conceptual layer with an attribute of ConcurrencyMode="fixed", as shown in the following example:
<Property Name="Status" Type="Byte" Nullable="false" ConcurrencyMode="Fixed" />
When this attribute is used, the Entity Framework checks for changes in the database before saving changes to the database. Any conflicting changes will cause an OptimisticConcurrencyException. For more information, see How to: Manage Data Concurrency in the Object Context. An OptimisticConcurrencyException can also occur when you define an Entity Data Model that uses stored procedures to make updates to the data source. In this case, the exception is raised when the stored procedure that is used to perform updates reports that zero rows were updated.
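For completeness, a minimal sketch of handling that exception with the EF4 ObjectContext API (context and shopper are illustrative names, not taken from the article):
try
{
    context.SaveChanges();
}
catch (OptimisticConcurrencyException)
{
    // Another writer changed the row since it was read; take the
    // store values and retry (client-wins would use RefreshMode.ClientWins).
    context.Refresh(RefreshMode.StoreWins, shopper);
    context.SaveChanges();
}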

Related

Passing model to Data Layer for two or three parameters in Angular

I am working on an Angular (v6) project with ASP.NET MVC for the backend and Entity Framework. Sometimes I have CRUD operations that update only 2-3 fields on an entity, and in this situation I am unsure which approach is best practice. As an example, let's say I have an Employee entity with the following properties:
Employee: Id, Status, Name, Surname, Job, Department, HireDate, BirthDate, Address, Updated...
Assuming an update of the Status, Department and Updated fields, I can do this with the following approaches:
Approach I:
I can create an instance of the Employee model from employee.ts, fill in only the fields to be updated in component.ts, and pass it via service.ts to Controller.cs. In the Controller I receive the model as an Employee entity, set the Updated field, pass the entity to Service.cs, and then save it using the related EF methods.
Approach II:
I just send the Id, Status and Department values from component.ts to service.ts and then pass them to the Controller as int values (Ids). In the Controller I create a new instance of the Employee.cs entity, fill these 3 fields plus the Updated field, then pass the entity to Service.cs and save it using the related EF methods.
Approach III:
Same as Approach II up to Controller.cs. Then I pass these 3 parameters to Service.cs, retrieve the Employee from the database by the Id parameter, set the other fields, and save the entity.
I think all 3 of them can be used, but I am not sure which one is better for this scenario in Angular projects with EF. Any help would be appreciated...
Approach 3, or
Approach 4:
Create an UpdateEmployeeViewModel with the PK & fields you want to update, populate it in your TS, and pass it to the controller, which validates the data, loads the entity, transfers the appropriate values, and saves. When it's one or two columns, Approach 3 is fine. If it grows to more than that, I typically opt for #4; a sketch follows below.
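A minimal sketch of Approach 4, assuming classic ASP.NET MVC with an EF DbContext; the names (UpdateEmployeeViewModel, _context) are illustrative, not prescribed:
public class UpdateEmployeeViewModel
{
    public int Id { get; set; }
    public string Status { get; set; }
    public string Department { get; set; }
}

[HttpPost]
public ActionResult UpdateEmployee(UpdateEmployeeViewModel model)
{
    if (!ModelState.IsValid)
        return new HttpStatusCodeResult(400);

    // Load the tracked entity, then transfer only the permitted fields.
    var employee = _context.Employees.Find(model.Id);
    if (employee == null)
        return HttpNotFound();

    employee.Status = model.Status;
    employee.Department = model.Department;
    employee.Updated = DateTime.UtcNow; // the server owns the audit field

    _context.SaveChanges();
    return new HttpStatusCodeResult(204);
}
Nothing the client sends is attached or updated directly, so tampering with extra fields in the payload has no effect.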
I would avoid Approach 1 at all costs. It makes it too easy for code to trust the entity passed from the client. The call to your service can be intercepted and adjusted, so if your server code accepts an entity you may easily end up with code that does a DbSet.Update or DbSet.Attach, which could result in tampered data being persisted to the database.
Approach 2 also leaves issues when performing updates, as an entity should always reflect its data row. Creating an entity, only partially filling it, and then attempting to update the data state could result in unintentional updates, such as clearing out values. Down the road you may have other common methods that accept an entity as a parameter, but now the passed entity may be complete (loaded from the DB) vs. incomplete (constructed by an update method).
Loading an entity by ID is quite fast, so there is rarely a need to over-optimize. It can also help to check the row version # to ensure that the version of the entity you are updating matches the version still in the DB. (Did someone else update this row since you initially sent it to the client?)

How and when does Entity Framework Code First generate the DB?

I came across this related question: How can I generate DDL scripts from Entity Framework 4.3 Code-First Model?
But this doesn't appear to answer the question of when a Code First application actually checks the existence/correctness of the DB and modifies it if necessary. Is it at run time or at build time? Assuming it's at run time, is it at start-up, when you create the DbContext, or at the last possible moment, e.g. checking that the DB table(s) exist on a case-by-case basis when you try to read/write them?
It is created at runtime, the first time you access an entity, i.e.:
using (var db = new MyDBContext())
{
    var items = db.MyObj.Count(); // <- Here it is created!
}
There are some variations on how, depending on which database initialization strategy you set (CreateDatabaseIfNotExists, DropCreateDatabaseAlways, etc.). Please give this a look:
http://www.entityframeworktutorial.net/code-first/database-initialization-strategy-in-code-first.aspx
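A minimal sketch of picking a strategy, reusing the MyDBContext name from the snippet above (call this before the context is first used, e.g. at application start-up):
// Default behaviour: create the database only if it does not exist yet.
Database.SetInitializer(new CreateDatabaseIfNotExists<MyDBContext>());

// Development-time alternative: drop and recreate whenever the model changes.
// Database.SetInitializer(new DropCreateDatabaseIfModelChanges<MyDBContext>());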
The Model column in the __MigrationHistory table is a serialized, gzipped version of your EDMX. In Code First, the Model value is generated by Add-Migration and stored (base64-encoded) in the second part of the migration partial class, and in the database as a binary stream (varbinary(max)) when the database is created.
When the database initializer (Database.SetInitializer) runs, EF generates the current Entity Data Model (EDMX) from the classes on the fly at runtime. The generated model is serialized, zipped and finally compared with the Model stored in the migration history table.
The comparison happens when the context is first used, and if the two models (binary streams) are not identical you will get a compatibility exception.
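A related sketch: if that compatibility check gets in the way (for example, when the schema is managed outside Code First), the initializer, and with it the model comparison, can be switched off for a context type:
// Skip initialization and the __MigrationHistory compatibility check entirely.
Database.SetInitializer<MyDBContext>(null);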

Entity Framework concurrency issue using stored procedures

I am using ASP.NET to build an application for a retail company, with the Entity Framework (model-first) as my data access layer. I am using stored procedures for my CRUD operations; all columns are mapped and everything seems correct, as all CRUD functionality works as expected.
But I am having concurrency issues with the DELETE operation.
I've added a TimeStamp column to the table I am doing the CRUD operations on. The UPDATE operation works fine, as it updates by primary key and the TimeStamp value. Thus, if no rows are affected by the UPDATE operation because of a change in the TimeStamp value, the Entity Framework throws an OptimisticConcurrencyException.
The DELETE operation works on the same principle as it is deleting by primary key and the TimeStamp value. But no exception is thrown when the TimeStamp value does not match between the entity and the database.
In the C# delete method I retrieve the latest record first and then set the TimeStamp property to a different TimeStamp value (it might differ from the retrieved value). After some investigation using SQL Profiler, I can see that the DELETE stored procedure is executed, but the TimeStamp parameter passed to the stored procedure is the latest retrieved TimeStamp value, not the value I assigned to the TimeStamp property. Thus the record is deleted and the Entity Framework does not throw an exception.
Why would the Entity Framework still pass the retrieved TimeStamp value to the stored procedure and not the value that I assigned to the property? Is this by design, or am I missing something?
Any help will be appreciated! (where is Julie Lerman when you need her! :-))
Optimistic concurrency in EF works fine, even with stored procedures.
ObjectContext.DeleteObject passes the original values of the entity to the delete function. This makes sense: the original values are used to identify the row to delete. When you delete an object, you don't (usually) have meaningful edits to your entity. What would you expect EF to do with them? Write them? To what records?
One legitimate use for passing modified data to a delete function is when you want to track deletes in some other table and you need to include information that is not accessible at the database layer, only at the business layer. Examples include the application-level user name or the reason for the delete. In this situation you need to construct the entity with these values as original values. One way to do it:
var x = db.MyTable.Single(k => k.Id == id_to_delete);
// Set the values that should be recorded at delete time...
x.UserName = logged_in_user;
x.ReasonForChange = some_reason;
// [...]
// ...then reset the state so these become "original" values,
// which is what EF passes to the delete procedure.
db.ObjectStateManager.ChangeObjectState(x, EntityState.Unchanged);
db.MyTable.DeleteObject(x);
db.SaveChanges();
Of course, a better strategy might be to do it openly in the business layer.
I don't understand your use case with rowversion/timestamp.
To avoid concurrency issues you pass original timestamp to modifying code.
That way it can be compared to current value in database to detect if record changed since you last read it.
Comparing it with new value makes little sense.
You usually use change markers that are automatically updated by the database, like rowversion/timestamp in SQL Server, ora_rowscn in Oracle, or xmin in PostgreSQL.
You don't change its value in your code.
Still, if you maintain the row version manually, you need to provide:
a) the new version to insert and update, to be written, and
b) the old version (read from the database) to update and delete, to check for concurrent changes.
You don't send a new value to delete. You don't need to.
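A minimal sketch of rule (b) with plain SQL via the EF4 ObjectContext (the table and column names are illustrative): the DELETE matches on both the key and the previously read version, so zero affected rows signals a concurrent change:
// originalVersion is the rowversion value read when the entity was loaded.
int affected = context.ExecuteStoreCommand(
    "DELETE FROM Shoppers WHERE Id = {0} AND Version = {1}",
    id, originalVersion);
if (affected == 0)
    throw new OptimisticConcurrencyException("Row was changed or deleted by another user.");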
Also, when using stored procedures for modification, it's better to compute the new version in the procedure and return it to the application, not the other way around.
Hard to tell without seeing any code, but maybe when the postback occurs the page is being re-bound before your delete method fires? In whatever method databinds the form controls (I assume it's OnLoad or OnInit), have you wrapped the databinding calls in if (!this.IsPostBack) { ... }?
Also, I'm not sure if there's a reason why you're explicitly storing the concurrency flag in viewstate/session variables, but a better way to do it (IMO) is to add the timestamp to the DataKeyNames property of the FormView/GridView (i.e. <asp:FormView ID='blah' runat='server' DataKeyNames='Id, Timestamp'>).
This way you don't have to worry about manually storing or updating the timestamp. ;)

Reading several tables from a single Entity Framework ExecuteStoreQuery request

I have a library which uses EF4 for accessing a SQL Server data store. For different reasons, I have to use SQL Server specific syntax to read data from the store (for free text search), so I have to create the SQL code by hand and send it through the ExecuteStoreQuery method.
This works fine, except that the query uses joins to request several tables besides the main one (the main one being the one I specify as the target entity set when calling ExecuteStoreQuery), and EF never fills in the main entity's relationship properties with the other tables' data.
Is there anything special to do to fill up these relationships? Using other EF methods or using special table names in the query or something?
Thanks for your help.
Executing direct SQL follows a very simple rule: it uses a column from the result set to fill the property with the same name on the materialized entity. I think I read somewhere that this works only with the main entity you materialize (the entity type specified in ExecuteStoreQuery = no relations), but I can't find it now. I did several tests and it really doesn't populate any relations.
OK, so I'll write here what I ended up doing. It does not look like a perfect solution, but it does not seem that there is any perfect solution in this case.
As Ladislav pointed out, ExecuteStoreQuery (as well as the other "custom query" method, Translate) only maps the columns of the entity you specify, leaving all the other columns aside. Therefore I had to load the dependencies separately, like this:
// Execute
IEnumerable<MainEntity> result = context.ExecuteStoreQuery<MainEntity>(
    strQuery, "MainEntities", MergeOption.AppendOnly, someParams).ToArray();

// Load relations, first method: one Load() round-trip per missing reference
foreach (MainEntity e in result)
{
    if (!e.Relation1Reference.IsLoaded)
        e.Relation1Reference.Load();
    if (!e.Relation2Reference.IsLoaded)
        e.Relation2Reference.Load();
    // ...
}

// Load relations, second method
// The main entity contains a navigation property pointing
// to a record in the OtherEntity entity. Materializing the whole
// set attaches every OtherEntity to the context, and relationship
// fixup then populates the references on the loaded MainEntity rows.
context.OtherEntities.ToList();
There. I think the choice between these two techniques depends on the number and size of the generated requests. The first technique generates a one-record request for every required side record, but no unnecessary records are loaded. The second technique uses fewer requests (one per table) but retrieves all the records, so it uses more memory.

Updating a database that has no PK column

Is it possible to update a database which doesn't have a primary key column with a DataGridView (in a WinForms program)?
I use SQL Server Express 2008 and want to do this with the DataSet approach.
Cheers
Without knowing a significant amount about what exactly you are doing and how you are going about your problem, the simple answer is: yes…
The DataGridView in the .NET Framework allows binding to objects that expose public properties, and supports implementing custom select and update methods. It therefore allows you to implement your own custom update method if required, and lets you perform the update based on any column in your underlying database.
You still need a unique column, or a combination of columns, to differentiate the rows you are about to update. At the end of the day, the data layer used to access the data will just do an ordinary SQL UPDATE/INSERT on your data. A short sketch follows.
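A minimal sketch with a DataAdapter, where the UpdateCommand's WHERE clause identifies the row by a unique column combination instead of a PK (all table and column names are illustrative):
// Fill a DataTable and bind it to the grid.
var adapter = new SqlDataAdapter("SELECT Name, BirthDate, Salary FROM Staff", connection);
adapter.UpdateCommand = new SqlCommand(
    "UPDATE Staff SET Salary = @Salary " +
    "WHERE Name = @Name AND BirthDate = @BirthDate", connection);

// The new value comes from the edited row...
adapter.UpdateCommand.Parameters.Add("@Salary", SqlDbType.Money).SourceColumn = "Salary";

// ...while the WHERE parameters must use the row's original values.
var name = adapter.UpdateCommand.Parameters.Add("@Name", SqlDbType.NVarChar, 100);
name.SourceColumn = "Name";
name.SourceVersion = DataRowVersion.Original;
var birth = adapter.UpdateCommand.Parameters.Add("@BirthDate", SqlDbType.DateTime);
birth.SourceColumn = "BirthDate";
birth.SourceVersion = DataRowVersion.Original;

var table = new DataTable();
adapter.Fill(table);
dataGridView1.DataSource = table; // let the user edit in the grid
// Later, e.g. in a Save button handler:
adapter.Update(table); // pushes the edits through the hand-written UpdateCommand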
Just to have asked you but your data model seems kind of broken. I mean that a primary key or at least a unique column would be preferable in any case.
It all depends on where your data is actually coming from: whether you're using DataSets with plain old SQL, some kind of ORM (NHibernate or Entity Framework or whatever), typed DataSets, LINQ to SQL...
Depending on your datasource you might have to introduce a primary key to your database.
The GridView actually doesn't care about that; in the end it's just displaying a list of data, and to the grid there is no such thing as a primary key. That only matters to the data access technique, in order to know which row to update.
