I want to make a universal method for working with tables. I've studied these links:
Dynamically Instantiate Model object in Entity Framework DB first by passing type as parameter
Dynamically access table in EF Core 2.0
As an example, an ASP.NET Core controller action for one of the SQL tables is shown below. There are many tables, and such methods (delete, add, change) have to be implemented for each one:
[Authorize(Roles = "Administrator")]
[HttpPost]
public ActionResult DeleteToDB(string id)
{
    webtm_mng_16Context db = new webtm_mng_16Context();
    var Obj_item1 = (from o1 in db.IT_bar
                     where o1.id == int.Parse(id)
                     select o1).SingleOrDefault();
    if (Obj_item1 != null)
    {
        db.IT_bar.Remove(Obj_item1);
        db.SaveChanges();
    }
    var Result = "ok";
    return Json(Result);
}
I want a universal method for all such operations, with the ability to change the table name dynamically; ideally, the table name would be passed as a string. I know this can be done with raw SQL statements, but is there really no simple way to implement it in EF Core?
Sorry, but you need to rework your model.
It is possible to do something generic as long as you have one table per type - you can go into the configuration and change the database table. OpenIddict allows that. You can overwrite the constructors of the DbContext and play whatever you want with the object model, and that includes changing table names.
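For illustration, here is a minimal sketch of that configuration-level rename in EF Core; the ItBar entity and the extra constructor parameter are assumptions, not code from the question:

using Microsoft.EntityFrameworkCore;

public class ItBar
{
    public int id { get; set; }
}

public class DynamicTableContext : DbContext
{
    private readonly string _tableName;

    public DynamicTableContext(DbContextOptions<DynamicTableContext> options, string tableName)
        : base(options)
    {
        _tableName = tableName;
    }

    public DbSet<ItBar> Items { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Same CLR type, but the table name is chosen when the context is built.
        // Caveat: EF Core caches the model per context type, so varying the name
        // per instance also requires a custom IModelCacheKeyFactory.
        modelBuilder.Entity<ItBar>().ToTable(_tableName);
    }
}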
What you can also do is a generic base class taking the classes you deal with as type parameters. I have those, taking (a) the DB entity type and (b) the API-side DTO type, and then using some generic functions and AutoMapper to map between them.
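A rough sketch of such a base class, using EF Core's Set<TEntity>() and AutoMapper; CrudControllerBase and the DTO wiring here are illustrative, not a finished implementation:

using AutoMapper;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

public abstract class CrudControllerBase<TEntity, TDto> : Controller
    where TEntity : class
{
    protected readonly DbContext Db;
    protected readonly IMapper Mapper;

    protected CrudControllerBase(DbContext db, IMapper mapper)
    {
        Db = db;
        Mapper = mapper;
    }

    [Authorize(Roles = "Administrator")]
    [HttpPost]
    public virtual ActionResult Delete(int id)
    {
        // Set<TEntity>() resolves the right table from the CLR type,
        // so no table-name string is needed
        var entity = Db.Set<TEntity>().Find(id);
        if (entity != null)
        {
            Db.Set<TEntity>().Remove(entity);
            Db.SaveChanges();
        }
        return Json("ok");
    }

    [HttpGet]
    public virtual ActionResult Get(int id)
    {
        var entity = Db.Set<TEntity>().Find(id);
        if (entity == null)
            return NotFound();
        return Json(Mapper.Map<TDto>(entity));
    }
}

A concrete controller then just closes the generics, e.g. public class BarController : CrudControllerBase<IT_bar, BarDto>.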
But the moment you need to grab the table name dynamically you are in a world of pain. EF's standard architecture assumes that an object type is mapped to a database entity. As such, an ID is unique within a table - the whole relational model depends on that. ID 44 has to identify a specific object, not an object plus whichever table it happened to be loaded from at that moment.
You also miss out significantly on actual logic, e.g. for delete. I hate to tell you, but while you can implement security on other layers for reading, every single one of my write/update methods is handwritten. Now, it may seem that "Authorize" works - but no, it does not. Or rather, it does if your application is "Hello World" complex. I sometimes run pages of testing code to decide whether an operation is allowed in a specific business context, and this IS specific: whether the user has set an override switch (which may or may not be valid depending on who he is) to bypass certain business rules. All of that is specific anyway.
Oh, what you can also do, because you seem to have a lot of tables: do NOT use one class - generate them. Scaffolding is not that complex. I hardly remember when I last generated the EF Core database classes - nowadays they all come out of Entity Developer (a tool from Devart), while the DB is handled with change scripts (I work DB first - I actually want to USE the database, and that means filtered indices, triggers, some stored procedures, and views with specific SQL), so migrations do not really work at all.
But again, overwriting the table name dynamically - while keeping the same object in the background - will bite you quite fast. It likely only works for extremely simplistic things - you know, "Hello World" examples - and falls apart the moment you actually have logic.
Related
We have a simple database, but one which can contain thousands of rows. We're trying to build a simple, lightweight ORM layer over it. Think Dapper. However, we're struggling to figure out how to ensure there is one and only one object per ID.
Consider the following data:
ID    Last       First
===== ========== =======
19    Donnely    Rick
20    Donovan    Sarah
21    Edwards    Sandra
Now consider the following SQL which is used to create Person objects in the ORM layer.
Select * From People Where ID = 20;
Select * From People Where LastName Like 'Don%';
In the first case, you'd get back 'Donovan' but in the second, you'd get back both 'Donovan' and 'Donnely'. Since Donovan had already been returned, we want that instance to come back.
Now of course you would need some lookup by ID. That's easy. What isn't easy is querying the database, getting rows back, and then determining, as you create the objects, whether you need to create a new one or update an existing one (in case the data changed).
The only thing I can think of is having the lookup have a method GetObjectById which either returns an existing object, or creates a new one, stores it, then returns that. I assume it would also have to be based on weak references so they don't just 'hang around' in memory all the time.
// Store the references weakly so cached objects don't just 'hang around'
private readonly Dictionary<int, WeakReference<Person>> _peopleById =
    new Dictionary<int, WeakReference<Person>>();

public Person GetPersonById(int id)
{
    // return the existing instance if it is still alive
    if (_peopleById.TryGetValue(id, out var weakRef) && weakRef.TryGetTarget(out var person))
        return person;

    // otherwise create, cache, and return a new one
    person = new Person(id);
    _peopleById[id] = new WeakReference<Person>(person);
    return person;
}
...or am I going about this all wrong?
One option for this is to use the CSLA framework. It isn't an ORM, it is a "smart object" framework. It has simple, efficient change tracking of entities and collections of descendant entities. It also has several optional features, such as:
The ability to configure the application as a 2-tier or 3-tier application through a configuration file.
Role based security.
Formalized Validation Rules (including support for validation attributes).
Formalized Business Rules (for synchronizing data between entities).
Several options for configuring the data tier, with support for any data persistence mechanism.
Support for virtually any UI framework in .NET.
Several object prototypes, including read-only or read-write entities and collections.
The downside is that there is quite a learning curve to learn the framework (although there is good documentation), and it is not very DI or test framework friendly.
I'm trying to model a database currently using Entity Framework's fluent configuration. I cannot edit or otherwise control the database schema. The entity I am trying to model has a lot of look-up tables - for example, one property (its name) has a whole table devoted to it, with a name associated with an ID (which is its language). In other words, it looks a bit like this in the database:
Entity
    string[] Names

Entity_Names
    string Name
    int LanguageId // 9 = English

However, I am trying to condense this into:

Entity
    string Name // I only want the English name
Using a SQL query, this would be pretty simple - but how can I do this via Entity Framework's fluent configurations? There are a lot more of these instances as well, but this is the simplest example I could come up with.
If you do manage to flatten the model this way, it's almost certainly going to be a read-only view of the data. There's no way for Entity Framework to know that a string property should be looked up in another table and replaced with an integer id.
So that leaves two options if you're okay with it being view-only. One is to write a database view that replaces the IDs with the strings and map an entity to that view.
The other is to build entities that match the schema and project the data into a DTO.
The second approach is the one I'd prefer as it means you'd still have a compatible entity model if you do need to CRUD.
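A sketch of that projection approach; the entity and DTO shapes below are assumptions based on the question's model:

using System.Collections.Generic;
using System.Linq;

public class EntityDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public List<EntityDto> GetEntities(MyDbContext context)
{
    return context.Entities
        .Select(e => new EntityDto
        {
            Id = e.Id,
            // pull the English name (LanguageId 9) out of the lookup table
            Name = e.Names
                .Where(n => n.LanguageId == 9)
                .Select(n => n.Name)
                .FirstOrDefault()
        })
        .ToList();
}

Because the lookup happens inside the Select, EF can translate the flattening into SQL rather than doing it in memory.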
As I've mentioned in a couple other questions, I'm currently trying to replace a home-grown ORM with the Entity Framework, now that our database can support it.
Currently, we have certain objects set up such that they are mapped to a table in our internal database and a table in the database that runs our website (which is not even in the same state, let alone on the same server). So, for example:
Part p = new Part(12345);
p.Name = "Renamed part";
p.Update();
will update both the internal and the web databases simultaneously to reflect that the part with ID 12345 is now named "Renamed part". This logic only needs to go one direction (internal -> web) for the time being. We access the web database through a LINQ-to-SQL DBML and its objects.
I think my question has two parts, although it's possible I'm not asking the right question in the first place.
Is there any kind of "OnUpdate()" event/method that I can use to trigger validation of "Should this be pushed to the web?" and then do the pushing? If there isn't anything by default, is there any other way I can insert logic between .SaveChanges() and when it hits the database?
Is there any way that I can specify for each object which DBML object it maps to, and for each EF auto-generated property which property on the L2S object to map to? The names often match up, but not always so I can't rely on that. Alternatively, can I modify the L2S objects in a generic way so that they can populate themselves from the EF object?
Sounds like a job for SQL Server replication.
You don't need to interconnect the two, as you seem to be suggesting in question 2.
Just have the two separate databases with their own EF or L2S models and abstract them away using repositories with domain objects.
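A minimal sketch of that shape: one domain object, one repository interface, and a composite implementation that pushes writes in the internal-to-web direction. All names here are illustrative:

public class Part
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IPartRepository
{
    Part Get(int id);
    void Update(Part part);
}

// Wraps an EF-backed and an L2S-backed repository behind one interface
public class SyncingPartRepository : IPartRepository
{
    private readonly IPartRepository _internalRepo; // EF model
    private readonly IPartRepository _webRepo;      // L2S model

    public SyncingPartRepository(IPartRepository internalRepo, IPartRepository webRepo)
    {
        _internalRepo = internalRepo;
        _webRepo = webRepo;
    }

    public Part Get(int id)
    {
        // reads come from the internal database only
        return _internalRepo.Get(id);
    }

    public void Update(Part part)
    {
        // writes flow one direction: internal first, then web
        _internalRepo.Update(part);
        _webRepo.Update(part);
    }
}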
This is the solution I ended up going with. Note that the implementation of IAdvantageWebTable is inherited from the existing base class, so nothing special needed to be done for EF-based classes, once the T4 template was modified to inherit correctly.
public partial class EntityContext
{
    public override int SaveChanges(System.Data.Objects.SaveOptions options)
    {
        // Get the list of things to update before the base call clears it
        var modified = this.ObjectStateManager.GetObjectStateEntries(EntityState.Modified | EntityState.Added);
        var result = base.SaveChanges(options); // Call the base SaveChanges, which clears that list
        using (var context = new WebDataContext()) // This is the second database context
        {
            foreach (var obj in modified)
            {
                var table = obj.Entity as IAdvantageWebTable;
                if (table != null)
                {
                    // IAdvantageWebTable.UpdateWeb() calls all the existing
                    // logic I've had in place for years
                    table.UpdateWeb(context);
                }
            }
            context.SubmitChanges();
        }
        return result;
    }
}
Currently our new database design is changing rapidly and I don't always have time to keep up to date with the latest changes being made. Therefore I would like to create some basic integration tests that are basically sanity checks on my mappings against the database.
Here are a few of the things I'd like to accomplish in these tests:
Detect columns I have not defined in my mapping but exist in the database
Detect columns I have mapped but do NOT exist in the database
Detect columns that I have mapped where the data types between the database and my business objects no longer agree with each other
Detect column name changes between database and my mapping
I found the following article by Ayende, but I just want to see what other people out there are doing to handle these sorts of things. Basically I'm looking for simplified tests that cover a lot of my mappings but do not require me to write separate queries for every business object in my mappings.
I'm happy with this test, which comes from the one Ayende proposed:
[Test]
public void PerformSanityCheck()
{
    foreach (var s in NHHelper.Instance.GetConfig().ClassMappings)
    {
        Console.WriteLine(" *************** " + s.MappedClass.Name);
        NHHelper.Instance.CurrentSession
            .CreateQuery(string.Format("from {0} e", s.MappedClass.Name))
            .SetFirstResult(0).SetMaxResults(50)
            .List();
    }
}
I'm using a plain old HQL query since this version comes from a very old project and I'm too lazy to update it to QueryOver or Linq2NH or something else...
It basically pings all the mapped entities configured, and grabs some data too, in order to see that all is OK. It does not catch the case where a field exists in the table but not in the mapping, which can generate problems on persistence if the column is not nullable.
I'm aware that Fabio Maulo has something possibly more accurate.
As a personal consideration, if you are thinking about improvements, I would try to implement the following strategy: since the mappings are browsable via the API, look for any explicit/implicit table declaration in the map and check it against the database using the standard schema helper classes you have inside NH (they eventually use the ADO.NET schema classes, but they insulate all the configuration stuff we already did in NH itself). By playing a little with the naming strategy, we can achieve a one-by-one table field checklist. Another improvement: in case of an unmatched field, look for a candidate by applying the Levenshtein distance to all the available names and choose one if some threshold requirements are satisfied. This is of course useless in class-first scenarios where the DB schema is generated by NH itself.
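A rough sketch of the first part of that strategy, walking the mappings and checking each mapped column against the ADO.NET schema; NHHelper, ConnectionString, and the schema column names are assumptions that may need adjusting for your provider:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using NUnit.Framework;

[Test]
public void MappedColumnsExistInDatabase()
{
    var cfg = NHHelper.Instance.GetConfig();
    using (var conn = new SqlConnection(ConnectionString))
    {
        conn.Open();
        // one row per database column: TABLE_NAME / COLUMN_NAME
        var schema = conn.GetSchema("Columns");
        var dbColumns = schema.Rows.Cast<DataRow>()
            .ToLookup(r => (string)r["TABLE_NAME"],
                      r => (string)r["COLUMN_NAME"],
                      StringComparer.OrdinalIgnoreCase);

        foreach (var mapping in cfg.ClassMappings)
        {
            var tableName = mapping.Table.Name;
            foreach (var column in mapping.Table.ColumnIterator)
            {
                Assert.IsTrue(
                    dbColumns[tableName].Contains(column.Name, StringComparer.OrdinalIgnoreCase),
                    string.Format("Column {0}.{1} is mapped but missing in the database",
                        tableName, column.Name));
            }
        }
    }
}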
I use this one too:
Verifying NHibernate Entities Contain Only Virtual Members
I'm currently facing a performance problem with creating POCO objects from my database. I'm using Entity Framework 4 as the OR mapper.
The whole application is a prototype for now.
Let's assume I want to have some business objects like the classes 'Printer' or 'Scanner'. Both classes inherit from a base class called Product.
The business classes exist.
I'm trying to use a more generic database approach. I don't want to create tables for 'Printer' or 'Scanner'. I want to have three tables: one called Product, and the others called Property and PropertyValue (which stores all the values assigned to a specific Product).
In my business layer I do create a specific object like this:
public Printer GetPrinter(int IDProduct)
{
    Printer item = new Printer();
    // get the product's property values with EF
    // (the entity and member names here are illustrative)
    var values = db.PropertyValue
        .Where(v => v.IDProduct == IDProduct)
        .ToList();
    // copy each stored value onto the matching CLR property via reflection
    foreach (var property in item.GetType().GetProperties())
    {
        var value = values.FirstOrDefault(v => v.Property.Name == property.Name);
        if (value != null)
            property.SetValue(item, value.Value, null);
    }
    return item;
}
This is what the EF model looks like:
Works fine so far. For now I'm doing performance tests for retrieving multiple sets.
I've created a prototype and improved it several times to increase the performance. It is still far away from being usable.
It takes 919 ms to create 300 objects that only contain 3 properties.
The reason for choosing such a DB design is to keep the database generic: adding new properties should only require changes in the business model.
Am I just failing to find a performant way of retrieving xx objects, or is my approach totally wrong? As far as I understand OR mappers, they are basically doing the same thing?
I think you missed the whole point of ORM. The reason people use an ORM is to be able to persist business objects and easily retrieve them. You are using the ORM just to get data for your business objects' factories, and the factories use reflection to build business objects from the materialized classes retrieved by the ORM. This will always be very slow because:
Query compilation is slow (you can precompile it; see the sketch after this list)
Object materialization is slow (you can't avoid it)
Reflection is slow (you can't avoid it)
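For the first point, EF4 supports precompiling a query with System.Data.Objects.CompiledQuery, so the LINQ-to-Entities compilation cost is paid only once; the context and entity names in this sketch are illustrative:

using System;
using System.Data.Objects;
using System.Linq;

// compiled once, reused for every call afterwards
static readonly Func<MyEntities, int, Product> GetProductById =
    CompiledQuery.Compile((MyEntities ctx, int id) =>
        ctx.Products.FirstOrDefault(p => p.IDProduct == id));

// usage: var product = GetProductById(context, 42);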
IMO, if you want to follow this DB design, with generic tables completely independent of your business objects, you don't need an ORM, or at least you don't need EF.
The reason for your performance problems is that the generic approach is not followed in your business model, so somewhere you must convert generic data to specific data, which is a slow operation.
If you want to improve performance, define a set of shared properties and place them directly on Product. Then either use your current PropertyValue and Property for the additional non-shared properties, or simply use an ExtendedProperties table storing key-value pairs. Your entities will be of type Product with an inner type property, the shared properties, and a collection of extended properties. That is a generic approach.
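A minimal sketch of that suggested shape; all names here are illustrative:

using System.Collections.Generic;

public class Product
{
    public int Id { get; set; }
    public string ProductType { get; set; }  // e.g. "Printer" or "Scanner"

    // shared properties live directly on Product, so no reflection is needed
    public string Name { get; set; }
    public decimal Price { get; set; }

    // only the truly dynamic leftovers go through key-value pairs
    public virtual ICollection<ExtendedProperty> ExtendedProperties { get; set; }
}

public class ExtendedProperty
{
    public int Id { get; set; }
    public int ProductId { get; set; }
    public string Key { get; set; }
    public string Value { get; set; }
}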
Firstly, it's not clear to me what you have in the way of POCOs. Did you hand-code these and your context, or were they T4-generated? There are some great articles here that benchmark performance with no POCOs, T4-generated POCOs/context, and hand-coded POCOs/context. As expected, there are HUGE performance savings going the POCO route over the classes generated by the Entity Framework (more than a 15-fold boost in performance in his benchmark). You don't say which DBMS you're using... if MSSQL, have you turned on the profiler to see what's being generated?