As I've mentioned in a couple other questions, I'm currently trying to replace a home-grown ORM with the Entity Framework, now that our database can support it.
Currently, we have certain objects set up such that they are mapped to a table in our internal database and a table in the database that runs our website (which is not even in the same state, let alone on the same server). So, for example:
Part p = new Part(12345);
p.Name = "Renamed part";
p.Update();
will update both the internal and the web databases simultaneously to reflect that the part with ID 12345 is now named "Renamed part". This logic only needs to go one direction (internal -> web) for the time being. We access the web database through a LINQ-to-SQL DBML and its objects.
I think my question has two parts, although it's possible I'm not asking the right question in the first place.
Is there any kind of "OnUpdate()" event/method that I can use to trigger validation of "Should this be pushed to the web?" and then do the pushing? If there isn't anything by default, is there any other way I can insert logic between .SaveChanges() and when it hits the database?
Is there any way that I can specify for each object which DBML object it maps to, and for each EF auto-generated property which property on the L2S object to map to? The names often match up, but not always so I can't rely on that. Alternatively, can I modify the L2S objects in a generic way so that they can populate themselves from the EF object?
Sounds like a job for SQL Server replication.
You don't need to inter-connect the two together as it seems you're saying with question 2.
Just have the two separate databases with their own EF or L2S models and abstract them away using repositories with domain objects.
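For illustration, here is a minimal sketch of that shape, assuming hypothetical IPartRepository and Part domain types (the repository and property names are invented for the example; EntityContext and WebDataContext are from the accepted answer below):

public interface IPartRepository
{
    Part Get(int id);
    void Update(Part part);
}

// EF-backed repository for the internal database; callers never see the context.
public class InternalPartRepository : IPartRepository
{
    public Part Get(int id)
    {
        using (var ctx = new EntityContext())
        {
            return ctx.Parts.Single(p => p.ID == id);
        }
    }

    public void Update(Part part)
    {
        using (var ctx = new EntityContext())
        {
            var existing = ctx.Parts.Single(p => p.ID == part.ID);
            existing.Name = part.Name;
            ctx.SaveChanges();
        }
    }
}

// A parallel WebPartRepository would wrap the L2S WebDataContext the same way,
// and the one-way sync logic would live above both repositories instead of inside the entities.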
This is the solution I ended up going with. Note that the implementation of IAdvantageWebTable is inherited from the existing base class, so nothing special needed to be done for EF-based classes, once the T4 template was modified to inherit correctly.
public partial class EntityContext
{
    public override int SaveChanges(System.Data.Objects.SaveOptions options)
    {
        // Get the list of things to update before the base call clears it.
        var modified = this.ObjectStateManager.GetObjectStateEntries(EntityState.Modified | EntityState.Added);
        var result = base.SaveChanges(options);
        using (var context = new WebDataContext()) // This is the second database context.
        {
            foreach (var obj in modified)
            {
                var table = obj.Entity as IAdvantageWebTable;
                if (table != null)
                {
                    // IAdvantageWebTable.UpdateWeb() calls all the existing logic I've had in place for years.
                    table.UpdateWeb(context);
                }
            }
            context.SubmitChanges();
        }
        return result;
    }
}
I want to make a universal method for working with tables. I've studied these links:
Dynamically Instantiate Model object in Entity Framework DB first by passing type as parameter
Dynamically access table in EF Core 2.0
As an example, the ASP.NET Core controller method for one of the SQL tables is shown below. There are many tables, and you have to implement such methods (delete, add, change) for each one:
[Authorize(Roles = "Administrator")]
[HttpPost]
public ActionResult DeleteToDB(string id)
{
    webtm_mng_16Context db = new webtm_mng_16Context();
    var Obj_item1 = (from o1 in db.IT_bar
                     where o1.id == int.Parse(id)
                     select o1).SingleOrDefault();
    if (Obj_item1 != null)
    {
        db.IT_bar.Remove(Obj_item1);
        db.SaveChanges();
    }
    var Result = "ok";
    return Json(Result);
}
I want a universal method for all such operations, with the ability to change the table name dynamically (ideally, passing the table name as a string). I know this can be done with raw SQL, but is there really no simple way to implement it in EF Core?
Sorry, but you need to rework your model.
It is possible to do something generic as long as you have one table per type - you can go into the configuration and change the database table. OpenIddict allows that. You can override the constructors of the DbContext and do whatever you like with the object model, and that includes changing table names.
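To make that configuration point concrete, here is a rough EF Core sketch (the context name is hypothetical; IT_bar is the entity from the question). Note that EF Core caches the model per context type by default, so actually varying the name per instance would also require a custom IModelCacheKeyFactory:

public class DynamicContext : DbContext
{
    private readonly string _tableName;

    public DynamicContext(DbContextOptions<DynamicContext> options, string tableName)
        : base(options)
    {
        _tableName = tableName;
    }

    public DbSet<IT_bar> Items { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // The mapping is fixed when the model is built, not per query.
        modelBuilder.Entity<IT_bar>().ToTable(_tableName);
    }
}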
What you can also do is a generic base class that takes the classes you deal with as type parameters. I have those - taking (a) the DB entity type and (b) the API-side DTO type, then using some generic functions and AutoMapper to map between them.
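As a rough sketch of that generic base class (the injection setup and DTO names are assumptions, not from the original post; IT_bar and webtm_mng_16Context are from the question):

public abstract class TableControllerBase<TEntity, TDto> : Controller
    where TEntity : class
{
    protected readonly DbContext Db;
    protected readonly IMapper Mapper; // AutoMapper

    protected TableControllerBase(DbContext db, IMapper mapper)
    {
        Db = db;
        Mapper = mapper;
    }

    [HttpGet]
    public ActionResult Get(int id)
    {
        var entity = Db.Set<TEntity>().Find(id);
        return entity == null ? (ActionResult)NotFound() : Json(Mapper.Map<TDto>(entity));
    }

    [HttpPost]
    public ActionResult Delete(int id)
    {
        // Set<TEntity>() resolves whatever table the entity type is mapped to,
        // so each concrete controller works against its own table.
        var entity = Db.Set<TEntity>().Find(id);
        if (entity != null)
        {
            Db.Set<TEntity>().Remove(entity);
            Db.SaveChanges();
        }
        return Json("ok");
    }
}

// One thin controller per table:
public class ItBarController : TableControllerBase<IT_bar, ItBarDto>
{
    public ItBarController(webtm_mng_16Context db, IMapper mapper) : base(db, mapper) { }
}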
But the moment you need to grab the table name dynamically you are in a world of pain. EF standard architecture assumes that an object type is mapped to a database entity. As such, an ID is unique within a table - the whole relational model depends on that. Id 44 has to be unique, for a specific object, not for an object and the table it was at this moment loaded from.
You also miss out significantly on actual logic, e.g. for delete. I hate to tell you, but while you can implement security on other layers for reading, every single one of my write/update methods is handwritten. Now, it may seem that "Authorize" works - but no, it does not. Or rather, it does if your application is "Hello World" complex. I sometimes run pages of testing code to decide whether an operation is allowed in a specific business context, and this IS specific: whether the user has set an override switch (which may or may not be valid depending on who he is) to bypass certain business rules. All of that is specific anyway.
Oh, what you can also do, because you seem to have a lot of tables: do NOT write the classes by hand - generate them. Scaffolding is not that complex. I hardly remember when I last generated EF Core database classes - nowadays they all come out of Entity Developer (a tool from Devart), while the database is handled with change scripts (I work db first - I actually want to USE the database, and that means filtered indices, triggers, some stored procedures, and views with specific SQL), so migrations do not really work at all.
But now, overwriting the table name dynamically - while keeping the same object in the background - will bite you quite fast. It likely only works for extremely simplistic things - you know, "hello world" example - and breaks apart the moment you actually have logic.
I am currently working towards implementing a charting library with a database that contains a large amount of data. For the table I am using, the raw data is spread out across 148 columns of data, with over 1000 rows. As I have only created models for tables that contain a few columns, I am unsure how to go about implementing a model for this particular table. My usual method of creating a model and using the Entity Framework to connect it to a database doesn't seem practical, as implementing 148 properties for each column does not seem like an efficient method.
My questions are:
What would be a good method to implement this table into an MVC project so that there are read actions that allow one to pull the data from the table?
How would one structure a model so that one could read 148 columns of data from it without having to declare 148 properties?
Is the Entity Framework an efficient way of achieving this goal?
Entity Framework Database First sounds like the perfect solution for your problem.
Database First models are just what they sound like: the data exists before the code does. Entity Framework will create the models as partial classes for you, based on the tables you point it at.
Additionally, exceptions won't be thrown if the table changes (as long as nothing is accessing a field that doesn't exist), which can be extremely beneficial in a lot of cases. Migrations are not necessary. Instead, all you have to do is right click on the generated model and click "Update Model from Database" and it works like magic. The whole process can be significantly faster than Code First.
Here is another tutorial to help you.
Yes, with Database First you can create the entities very quickly. Also remember that it is good practice to return only the fields you really need: your entity has 148 columns, but if your app needs only 10 fields, convert the original entity to a model or view model and use that instead.
One excellent tool that can help you with this is AutoMapper.
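For example, a minimal sketch of that conversion, assuming a hypothetical view model with just the handful of fields the app needs (ChartingData and the column names are invented for the example):

public class ChartPointViewModel
{
    public DateTime Timestamp { get; set; }
    public decimal Value { get; set; }
}

// Only the selected columns are pulled from the 148-column table.
var points = db.ChartingData
    .Select(c => new ChartPointViewModel { Timestamp = c.Timestamp, Value = c.Value1 })
    .ToList();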
Wow, that's a lot of columns!
Given your circumstances a few thoughts come to mind:
1: If your problem is the leg work of creating that many properties you could look at Entity Framework Power Tools. EF Tools is able to reverse engineer a database and create the necessary models/entity relation mappings for you, saving you a lot of the grunt work.
To save you pulling all of that data out in one go you can then use projections like so:
var result = DbContext.ChartingData.Select(x => new PartialDto
{
    Property1 = x.Column1,
    Property50 = x.Column50,
    Property109 = x.Column109
});
A tool like AutoMapper will allow you to do this with ease via simply configurable mapping profiles:
var result = DbContext.ChartingData.Project().To<PartialDto>().ToList();
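The mapping profile behind that call might look like this (a sketch; PartialDto and the column names are carried over from the projection above, and explicit ForMember calls are only needed because the DTO property names differ from the columns):

public class ChartingProfile : Profile
{
    public ChartingProfile()
    {
        CreateMap<ChartingData, PartialDto>()
            .ForMember(d => d.Property1, o => o.MapFrom(s => s.Column1))
            .ForMember(d => d.Property50, o => o.MapFrom(s => s.Column50))
            .ForMember(d => d.Property109, o => o.MapFrom(s => s.Column109));
    }
}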
2: If you have concerns with the performance of manipulating such large entities through Entity Framework then you could also look at using something like Dapper (which will happily work alongside Entity Framework).
This would save you the hassle of modelling the entities for the larger tables but allow you to easily query/update specific columns:
public class ModelledDataColumns
{
    public string Property1 { get; set; }
    public string Property50 { get; set; }
    public string Property109 { get; set; }
}

const string sqlCommand = "SELECT Property1, Property50, Property109 FROM YourTable WHERE Id = @Id";
IEnumerable<ModelledDataColumns> collection = connection.Query<ModelledDataColumns>(sqlCommand, new { Id = 5 }).ToList();
Ultimately if you're keen to go the Entity Framework route then as far as I'm aware there's no way to pull that data from the database without having to create all of the properties one way or another.
I'm very new to Entity Framework Object Services (see Object Services Overview (Entity Framework)), so forgive me if I use the wrong terminology here.
I'm using the EDMX file to connect to an SQLite database. What I'm trying to do is use the ObjectSet<T> normally, to access a collection of objects from a table in the database. However, I want to additionally store some run-time-only data in the objects in that set. In my case, I have a set of devices stored in the database, but upon startup, I want to mark them as "Connected" or "Disconnected", and keep track of this state throughout execution.
Since the (row) types generated by the EDMX are partial I've added another partial definition, and added my public bool Connected property there. This seems to work, I can set it, and future queries provide objects with the same value that I previously set. The problem is, I don't know a) how it is working, or b) whether I can trust it. These doubts come from the fact that these aren't really collections of objects we're dealing with, right?
Hopefully that made sense, else I can provide more detail.
What you're doing is completely safe.
ObjectSet is still a collection of objects, with a lot of magic added underneath.
I am not an expert on the internals but here is how I think it works:
The Entity Framework has a state tracker that keeps track of all the entities you're working with.
Every class in your EDMX model is required to have a key. EF is using that key internally so that it loads that specific object only once into memory.
var foo = db.Foos.Single(x => x.Id == 1);  // foo with Id 1 is unique (in memory)
var foo2 = db.Foos.Single(x => x.Id == 1); // same instance of foo, but with updated values
var foo3 = db.Foos.Single(x => x.Id == 2); // a new unique instance (Id = 2)
bool sameObject = Object.Equals(foo, foo2); // will return true
At every select the following happens:
Is an instance of class Foo already tracked/does it already exist?
Yes -> update the properties of the existing instance from the database.
No -> create new instance of class Foo (take values from database)
Of course it can only ever update mapped properties. So the ones you defined in the partial class won't be overwritten.
In case you're going to use Code First: there is also the [NotMapped] attribute, which makes sure the property won't be included in the table if you generate a new database from your Code First models.
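Applied to your scenario, a minimal sketch (the Device type and Connected property names are assumed from your description):

using System.ComponentModel.DataAnnotations.Schema;

// The generated half of the class comes from the EDMX; this half adds
// run-time-only state that EF never persists or overwrites.
public partial class Device
{
    [NotMapped] // only needed with Code First; the EDMX simply ignores unmapped members
    public bool Connected { get; set; }
}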
I hope I could clarify some things for you.
I have a Linq object, and I want to make changes to it and save it, like so:
public void DoSomething(MyClass obj) {
    obj.MyProperty = "Changed!";
    MyDataContext dc = new MyDataContext();
    dc.GetTable<MyClass>().Attach(obj, true); // throws exception
    dc.SubmitChanges();
}
The exception is:
System.InvalidOperationException: An entity can only be attached as modified without original state if it declares a version member or does not have an update check policy.
It looks like I have a few choices:
1. Put a version member on every one of my LINQ classes & tables (100+) that I need to use in this way.
2. Find the data context that originally created the object and use that to submit changes.
3. Implement OnLoaded in every class and save a copy of this object that I can pass to Attach() as the baseline object.
4. To hell with concurrency checking; load the DB version just before attaching and use that as the baseline object (NOT!!!)
Option (2) seems the most elegant method, particularly if I can find a way of storing a reference to the data context when the object is created. But - how?
Any other ideas?
EDIT
I tried to follow Jason Punyon's advice and created a concurrency field on one table as a test case. I set all the right properties (Time Stamp = true, etc.) on the field in the dbml file, and I now have a concurrency field... and a different error:
System.NotSupportedException: An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported.
So what the heck am I supposed to attach, then, if not an existing entity? If I wanted a new record, I would do an InsertOnSubmit()! So how are you supposed to use Attach()?
Edit - FULL DISCLOSURE
OK, I can see it's time for full disclosure of why all the standard patterns aren't working for me.
I have been trying to be clever and make my interfaces much cleaner by hiding the DataContext from the "consumer" developers. This I have done by creating a base class
public class LinqedTable<T> where T : LinqedTable<T> {
...
}
... and every single one of my tables has the "other half" of its generated version declared like so:
public partial class MyClass : LinqedTable<MyClass> {
}
Now LinqedTable has a bunch of utility methods, most particularly things like:
public static T Get(long ID) {
// code to load the record with the given ID
// so you can write things like:
// MyClass obj = MyClass.Get(myID);
// instead of:
// MyClass obj = myDataContext.GetTable<MyClass>().Where(o => o.ID == myID).SingleOrDefault();
}
public static Table<T> GetTable() {
// so you can write queries like:
// var q = MyClass.GetTable();
// instead of:
// var q = myDataContext.GetTable<MyClass>();
}
Of course, as you can imagine, this means that LinqedTable must somehow be able to have access to a DataContext. Up until recently I was achieving this by caching the DataContext in a static context. Yes, "up until recently", because that "recently" is when I discovered that you're not really supposed to hang on to a DataContext for longer than a unit of work, otherwise all sorts of gremlins start coming out of the woodwork. Lesson learned.
So now I know that I can't hang on to that data context for too long... which is why I started experimenting with creating a DataContext on demand, cached only on the current LinqedTable instance. This then led to the problem where the newly created DataContext wants nothing to do with my object, because it "knows" that it's being unfaithful to the DataContext that created it.
Is there any way of pushing the DataContext info onto the LinqedTable at the time of creation or loading?
This really is a poser. I definitely do not want to compromise on all these convenience functions I've put into the LinqedTable base class, and I need to be able to let go of the DataContext when necessary and hang on to it while it's still needed.
Any other ideas?
Updating with LINQ to SQL is, um, interesting.
If the data context is gone (which in most situations, it should be), then you will need to get a new data context, and run a query to retrieve the object you want to update. It's an absolute rule in LINQ to SQL that you must retrieve an object to delete it, and it's just about as iron-clad that you should retrieve an object to update it as well. There are workarounds, but they are ugly and generally have lots more ways to get you in trouble. So just go get the record again and be done with it.
Once you have the re-fetched object, then update it with the content of your existing object that has the changes. Then do a SubmitChanges() on the new data context. That's it! LINQ to SQL will generate a fairly heavy-handed version of optimistic concurrency by comparing every value in the record to the original (in the re-fetched) record. If any value changed while you had the data, LINQ to SQL will throw a concurrency exception. (So you don't need to go altering all your tables for versioning or timestamps.)
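In code, that pattern looks roughly like this (a sketch reusing the MyClass/MyDataContext names from the question; the ID property is assumed):

public void SaveChanges(MyClass edited)
{
    using (var dc = new MyDataContext())
    {
        // Re-fetch the current row on a fresh context...
        var current = dc.GetTable<MyClass>().Single(o => o.ID == edited.ID);

        // ...copy the edited values onto the tracked instance...
        current.MyProperty = edited.MyProperty;

        // ...and submit; conflicting edits made by others in the meantime
        // surface as a ChangeConflictException.
        dc.SubmitChanges();
    }
}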
If you have any questions about the generated update statements, you'll have to break out SQL Profiler and watch the updates go to the database. Which is actually a good idea, until you get confidence in the generated SQL.
One last note on transactions - the data context will generate a transaction for each SubmitChanges() call, if there is no ambient transaction. If you have several items to update and want to run them as one transaction, make sure you use the same data context for all of them, and wait to call SubmitChanges() until you've updated all the object contents.
If that approach to transactions isn't feasible, then look up the TransactionScope object. It will be your friend.
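A minimal sketch of the TransactionScope route (context name assumed from the question):

using System.Transactions;

using (var scope = new TransactionScope())
{
    using (var dc = new MyDataContext())
    {
        // ...update several objects on the same context...
        dc.SubmitChanges(); // enlists in the ambient transaction
    }
    // Other database work can join the same ambient transaction here.
    scope.Complete(); // commits; disposing without Complete() rolls back
}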
I think option (2) is not the best one. It sounds like you're going to create a single DataContext and keep it alive for the entire lifetime of your program, which is a bad idea. DataContexts are lightweight objects meant to be spun up when you need them. Trying to keep the references around is also probably going to tightly couple areas of your program you'd rather keep separate.
Running a hundred ALTER TABLE statements one time, regenerating the context and keeping the architecture simple and decoupled is the elegant answer...
find the data context that originally created the object and use that to submit changes
Where did your datacontext go? Why is it so hard to find? You're only using one at any given time right?
So what the heck am I supposed to attach, then, if not an existing entity? If I wanted a new record, I would do an InsertOnSubmit()! So how are you supposed to use Attach()?
You're supposed to attach an instance that represents an existing record, but which was not loaded by another DataContext - you can't have two contexts tracking record state on the same instance. If you produce a new instance (i.e. a clone), you'll be good to go.
You might want to check out this article and its concurrency patterns for update and delete section.
The "An entity can only be attached as modified without original state if it declares a version member" error when attaching an entitity that has a timestamp member will (should) only occur if the entity has not travelled 'over the wire' (read: been serialized and deserialized again). If you're testing with a local test app that is not using WCF or something else that will result in the entities being serialized and deserialized then they will still keep references to the original datacontext through entitysets/entityrefs (associations/nav. properties).
If this is the case, you can work around it by serializing and deserializing it locally before calling the datacontext's .Attach method. E.g.:
internal static T CloneEntity<T>(T originalEntity)
{
    Type entityType = typeof(T);
    DataContractSerializer ser = new DataContractSerializer(entityType);
    using (MemoryStream ms = new MemoryStream())
    {
        ser.WriteObject(ms, originalEntity);
        ms.Position = 0;
        return (T)ser.ReadObject(ms);
    }
}
Alternatively you can detach it by setting all entitysets/entityrefs to null, but that is more error prone so although a bit more expensive I just use the DataContractSerializer method above whenever I want to simulate n-tier behavior locally...
(related thread: http://social.msdn.microsoft.com/Forums/en-US/linqtosql/thread/eeeee9ae-fafb-4627-aa2e-e30570f637ba )
You can reattach to a new DataContext. The only thing that prevents you from doing so under normal circumstances is the property changed event registrations that occur within the EntitySet<T> and EntityRef<T> classes. To allow the entity to be transferred between contexts, you first have to detach the entity from the DataContext, by removing these event registrations, and then later on reattach to the new context by using the DataContext.Attach() method.
Here's a good example.
When you retrieve the data in the first place, turn off object tracking on the context that does the retrieval. This will prevent the object state from being tracked on the original context. Then, when it's time to save the values, attach to the new context, refresh to set the original values on the object from the database, and then submit changes. The following worked for me when I tested it.
MyClass obj = null;
using (DataContext context = new DataContext())
{
    context.ObjectTrackingEnabled = false;
    obj = (from p in context.MyClasses
           where p.ID == someId
           select p).FirstOrDefault();
}

obj.Name += "test";

using (DataContext context2 = new DataContext())
{
    context2.MyClasses.Attach(obj);
    context2.Refresh(System.Data.Linq.RefreshMode.KeepCurrentValues, obj);
    context2.SubmitChanges();
}
What's the preferred approach when using L2E to add behavior to the objects in the data model?
Having a wrapper class that implements the behavior you need with only the data you need
using (var dbh = new ffEntities())
{
    var query = from feed in dbh.feeds
                select new FFFeed(feed.name, new Uri(feed.uri), feed.refresh);
    return query.ToList();
}
//Later in a separate place, not even in the same class
foreach (FFFeed feed in feedList) { feed.doX(); }
Using directly the data model instances and have a method that operates over the IEnumerable of those instances
using (var dbh = new ffEntities())
{
    var query = from feed in dbh.feeds select feed;
    return query.ToList();
}
//Later in a separate place, not even in the same class
foreach (feeds feed in feedList) { doX(feed); }
Using extension methods on the data model class so it ends up having the extra methods the wrapper would have.
public static class dataModelExtensions
{
    public static void doX(this feeds source)
    {
        // do X
    }
}
//Later in a separate place, not even in the same class
foreach (feeds feed in feedList) { feed.doX(); }
Which one is best? I tend to favor the last approach as it's clean and doesn't interfere with the CRUD facilities (I can just use it to insert/update/delete directly, no need to wrap things back), but I wonder if there's a downside I haven't seen.
Is there a fourth approach? I fail at grasping LINQ's philosophy a bit, especially regarding LINQ to Entities.
The entity classes are partial classes as far as I know, so you can extend them directly in another file using the partial keyword.
Otherwise, I usually have a wrapper class, i.e. my ViewModel (I'm using WPF with MVVM). I also have some generic helper classes with fluent interfaces that I use to add specific query filters to my ViewModel.
I think it's a mistake to put behaviors on entity types at all.
The Entity Framework is based around the Entity Data Model, described by one of its architects as "very close to the object data model of .NET, modulo the behaviors." Put another way, your entity model is designed to map relational data into object space, but it should not be extended with methods. Save your methods for business types.
Unlike some other ORMs, you are not stuck with whatever object type comes out of the black box. You can project to nearly any type with LINQ, even if it is shaped differently than your entity types. So use entity types for mapping only, not for business code, data transfer, or presentation models.
Entity types are declared partial when code is generated. This leads some developers to attempt to extend them into business types. This is a mistake. Indeed, it is rarely a good idea to extend entity types. The properties created within your entity model can be queried in LINQ to Entities; properties or methods you add to the partial class cannot be included in a query.
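A quick illustration of that limitation (the entity and property names here are hypothetical):

public partial class Employee
{
    // Convenient in object space, but invisible to the mapper.
    public string FullName
    {
        get { return FirstName + " " + LastName; }
    }
}

// Throws NotSupportedException at runtime: LINQ to Entities cannot
// translate the unmapped FullName property into SQL.
var query = context.Employees.Where(e => e.FullName == "Jane Doe").ToList();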
Consider these examples of a business method:
public Decimal CalculateEarnings(Guid id)
{
    var timeRecord = (from tr in Context.TimeRecords
                          .Include("Employee.Person")
                          .Include("Job.Steps")
                          .Include("TheWorld.And.ItsDog")
                      where tr.Id == id
                      select tr).First();

    // Calculate has deep knowledge of the entity model
    return EarningsHelpers.Calculate(timeRecord);
}
What's wrong with this method? The generated SQL is going to be ferociously complex, because we have asked the Entity Framework to materialize instances of entire objects merely to get at the minority of properties required by the Calculate method. The code is also fragile. Changing the model will not only break the eager loading (via the Include calls), but will also break the Calculate method.
The Single Responsibility Principle states that a class should have only one reason to change. In the example shown above, the EarningsHelpers type has the responsibility both of actually calculating earnings and of keeping up-to-date with changes to the entity model. The first responsibility seems correct; the second doesn't sound right. Let's see if we can fix that.
public Decimal CalculateEarnings(Guid id)
{
    var timeData = (from tr in Context.TimeRecords
                    where tr.Id == id
                    select new EarningsCalculationContext
                    {
                        Salary = tr.Employee.Salary,
                        StepRates = from s in tr.Job.Steps
                                    select s.Rate,
                        TotalHours = tr.Stop - tr.Start
                    }).First();

    // Calculate has no knowledge of the entity model
    return EarningsHelpers.Calculate(timeData);
}
In the second example, I have rewritten the LINQ query to pick out only the bits of information required by the Calculate method, and to project that information onto a type which rolls up the arguments for the Calculate method. If writing a new type just to pass arguments to a method seems like too much work, I could have also projected onto an anonymous type and passed Salary, StepRates, and TotalHours as individual arguments. But either way, we have fixed the dependency of EarningsHelpers on the entity model, and as a free bonus we've gotten more efficient SQL as well.
You might look at this code and wonder what would happen if the Job property of TimeRecord were nullable. Wouldn't I get a null reference exception?
No, I would not. This code will not be compiled and executed as IL; it will be translated to SQL. LINQ to Entities coalesces null references. In the query shown above, StepRates would simply return null if Job was null. You can think of this as being identical to lazy loading, except without the extra database queries. The code says, "If there is a job, then load the rates from its steps."
An additional benefit of this kind of architecture is that it makes unit testing of the Web assembly very easy. Unit tests should not access a database, generally speaking (put another way, tests which do access a database are integration tests rather than unit tests). It's quite easy to write a mock repository which returns arrays of objects as Queryables rather than actually going to the Entity Framework.
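For instance, here is a sketch of such a mock, assuming a hypothetical ITimeRecordRepository abstraction over the context (not from the original post):

public interface ITimeRecordRepository
{
    IQueryable<TimeRecord> TimeRecords { get; }
}

// Test double: serves in-memory data as an IQueryable, so code under test
// runs its LINQ queries without ever touching the Entity Framework.
public class FakeTimeRecordRepository : ITimeRecordRepository
{
    private readonly List<TimeRecord> _data = new List<TimeRecord>();

    public FakeTimeRecordRepository(params TimeRecord[] seed)
    {
        _data.AddRange(seed);
    }

    public IQueryable<TimeRecord> TimeRecords
    {
        get { return _data.AsQueryable(); }
    }
}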