DAL update specific field - C#

I am designing a DAL.dll for a web application. The scenario is that the user fetches an entity on the web, modifies some fields, and clicks save. My problem is how to make sure only the modified fields are saved.
For Example, an entity:
public class POCO
{
    public int POCOID { get; set; }
    public string StringField { get; set; }
    public int IntField { get; set; }
}
and my update interface:
// rows affected
int Update(POCO entity);
When only IntField is modified, StringField is null, so I can ignore it. However, when only StringField is modified, IntField is 0 (default(int)), so I cannot determine whether it should be ignored or not.
Some limitations:
1. Stateless, no session, so "get and update", contexts, etc. cannot be used.
2. To stay consistent with the data model, nullable "int?" cannot be used.

Just a tip: if negative numbers are not allowed by your business requirements, you can use -1 to indicate that the value does not apply.

I don't really understand how you want to work stateless yet update only changed properties. It will never work stateless, since you need a before-after comparison or something else to track changes (like events on property setters). Special "virgin" values are not a good solution, since I think your user wants to see the actual IntField value.
Also, make your database consistent with your application data: if you have standard, not-nullable int values, make the DB column int not null default 0! It is a real pain to have a database value which can't be represented by the program, so that the software "magically" turns DB null into 0. If you have a not-nullable int in your application, you can't distinguish between DB null and zero, or you have to add a property like bool IsIntFieldNull (no good!).
To reference a common object-relational mapper, NHibernate: it has an option called "dynamic-update" where only changed properties/columns are updated. This requires, however, a before-after check and stateful sessions, and there's debate on whether it helps performance, since sending the same DB query every time (with different parameter values) can be cheaper than sending many different queries - weighed against unnecessary updates and network load. By default, NHibernate updates the whole row, after checking whether any change has been made. If you only have ID, StringField and IntField, dynamic-update instead of a full-row update might in fact be a good solution. A sketch of enabling it follows.
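For illustration, enabling it with NHibernate's mapping-by-code might look roughly like this (a sketch only; the XML equivalent is dynamic-update="true" on the class element, and the identity generator here is an assumption):

using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

public class POCOMap : ClassMapping<POCO>
{
    public POCOMap()
    {
        Table("POCO");
        DynamicUpdate(true); // UPDATE statements include only the columns that actually changed
        Id(x => x.POCOID, m => m.Generator(Generators.Identity));
        Property(x => x.StringField);
        Property(x => x.IntField);
    }
}

Note this still requires a session that has loaded the entity, so it does not remove the stateful-session limitation mentioned above.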
Mapping nullable DB columns to not-nullable application data types, such as int, is a common mistake when implementing NHibernate, since it creates self-changing DAL objects.
However, when working with ORM or writing your own DAL, make sure you have proper database knowledge!

Options
Many ORMs (object-relational mappers) provide this type of functionality. You define your objects to work with, say, Entity Framework or NHibernate, and the ORM takes care of reading from and writing to the database. Internally, they have their own mechanisms to keep track of what has been modified.
Look into Delta<T> (right now it's an OData thing, so it may not be directly usable, but you can learn from it).
Make your own. Have some type of base class that all your other objects inherit from, and record, when fields are set, which ones were changed (see the sketch below).
I highly recommend not relying on null or magic numbers (-1) to keep track of this. You will create a nightmare for yourself.
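To illustrate the "make your own" option, here is a minimal sketch (TrackedEntity, ChangedProperties and Set are made-up names, not a real API): a base class records which property setters were invoked, and the DAL updates only those columns.

using System.Collections.Generic;
using System.Runtime.CompilerServices;

public abstract class TrackedEntity
{
    private readonly HashSet<string> _changed = new HashSet<string>();

    // The DAL can build an UPDATE statement from just these names.
    public IEnumerable<string> ChangedProperties { get { return _changed; } }

    protected void Set<T>(ref T field, T value, [CallerMemberName] string name = null)
    {
        if (!EqualityComparer<T>.Default.Equals(field, value))
        {
            field = value;
            _changed.Add(name);
        }
    }
}

public class POCO : TrackedEntity
{
    private string _stringField;
    private int _intField;

    public int POCOID { get; set; }
    public string StringField { get { return _stringField; } set { Set(ref _stringField, value); } }
    public int IntField { get { return _intField; } set { Set(ref _intField, value); } }
}

In the stateless scenario from the question, the client would re-apply its edits to a fresh instance on postback, so only the fields the user actually touched end up in ChangedProperties.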

Related

EF Code First change data type bool? to bool and int? to int

We have an application where we use Automatic Data Migrations, and Data Loss is not allowed. By company's policy, I am not allowed to change those settings.
We also have a few nullable columns in a few tables of bool? type and int? type.
I just discovered today that there is a piece of code which goes over existing database records and sets the values of nullable columns to their default values (false and 0 respectively). The current behavior slows down the application. I want the database/EF to set default values and not allow nulls.
So far I tried the following with no luck:
Direct type change from bool? to bool and int? to int. It throws Data Loss Exception.
Adding [Required] and [DefaultValue(false)] attributes over a nullable property. Same thing, Data Loss Exception.
Changing .IsOptional() to .IsRequired() in the mapping class also caused same exception.
What I essentially want to replicate in Code First EF is a following MySQL statement:
ALTER TABLE orders
CHANGE COLUMN ScaleToFit ScaleToFit TINYINT(1) NOT NULL DEFAULT 0;
Are there any elegant solutions to the problem that respect the company's policy? Thank you!
This may be my first non-programming answer.
I most sincerely hope there's no elegant solution --and no solution at all-- to achieve this through migrations. It would mean there's a leak in data loss detection, which would be bad news. It's considered data loss because a null value may also bear information, i.e. a deliberate "don't know". In case of a boolean: three-state is turned into two-state. This isn't true in your case, hence this fixing process, but that doesn't change the rule.
By company's policy, I am not allowed to change those settings.
Policies serve a purpose, but nearly always turn into the purpose. Probably because policies are easier to define, enforce and check than their underlying purpose. But as in Goodhart's law...
When a measure becomes a target, it ceases to be a good measure
...each policy may one day defeat its purpose and cease to be a good policy. An ability to recognize this and to flex on policies when necessary is a commendable feature of organizations.
What I'm trying to say is: propose a temporary one-time suspension of this policy to allow for a migration that only applies these non-nullable fields with default values. If the null values are really meaningless and actually should have been not-null from the outset this is the easiest way to get where you want.
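If such a one-time suspension is granted, the migration itself is small. A hedged sketch for EF6 (the table and column names come from the question; the backfill value and migration name are assumptions):

using System.Data.Entity.Migrations;

public partial class MakeScaleToFitRequired : DbMigration
{
    public override void Up()
    {
        // Backfill existing NULLs first so the NOT NULL constraint can be applied.
        Sql("UPDATE orders SET ScaleToFit = 0 WHERE ScaleToFit IS NULL");
        AlterColumn("dbo.orders", "ScaleToFit", c => c.Boolean(nullable: false, defaultValue: false));
    }

    public override void Down()
    {
        AlterColumn("dbo.orders", "ScaleToFit", c => c.Boolean(nullable: true));
    }
}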

F# type providers vs C# interfaces + Entity Framework

The question is very technical, and it sits deep in the differences between F# and C#. It is quite likely that I have missed something. If you find a conceptual error, please comment and I will update the question.
Let’s start from C# world. Suppose that I have a simple business object, call it Person (but, please, keep in mind that there are 100+ objects far more complicated than that in the business domain that we work with):
public class Person : IPerson
{
    public int PersonId { get; set; }
    public string Name { get; set; }
    public string LastName { get; set; }
}
and I use DI / IoC, so I never actually pass a Person around. Rather, I would always use an interface (mentioned above), call it IPerson:
public interface IPerson
{
    int PersonId { get; set; }
    string Name { get; set; }
    string LastName { get; set; }
}
The business requirement is that the person can be serialized to / deserialized from the database. Let’s say that I choose to use Entity Framework for that, but the actual implementation seems irrelevant to the question. At this point I have an option to introduce “database” related class(es), e.g. EFPerson:
public class EFPerson : IPerson
{
    public int PersonId { get; set; }
    public string Name { get; set; }
    public string LastName { get; set; }
}
along with the relevant database-related attributes and code, which I will skip for brevity, and then either use reflection to copy the IPerson properties between Person and EFPerson, OR just use EFPerson (passed as IPerson) directly, OR do something else. This is fairly irrelevant, as the consumers will always see IPerson, so the implementation can be changed at any time without the consumers knowing anything about it.
If I need to add a property, I would update the interface IPerson first (let's say I add a property DateTime DateOfBirth { get; set; }) and then the compiler will tell me what to fix. However, if I remove a property from the interface (let's say I no longer need LastName), the compiler won't help me. I can, however, write a reflection-based test which ensures that the properties of IPerson, Person, EFPerson, etc. are identical. This is not strictly needed, but it can be done, and then it works like magic (and yes, we do have such tests and they do work like magic).
Now, let’s get to F# world. Here we have the type providers, which completely remove the need to create database objects in the code: they are created automatically by the type providers!
Cool! But is it?
First, somebody has to create / update the database objects, and if more than one developer is involved, then it is natural that the database may and will be upgraded / downgraded in different branches. So far, from my experience, this is an extreme pain in the neck when F# type providers are involved. Even if C# EF Code First is used to handle migrations, some "extensive shaman dancing" is required to keep F# type providers "happy".
Second, everything in the F# world is immutable by default (unless we make it mutable), so we clearly don't want to pass mutable database objects upstream. This means that once we load a mutable row from the database, we want to convert it into a "native" F# immutable structure as soon as possible, so that we work only with pure functions upstream. After all, using pure functions decreases the number of required tests by, I guess, 5-50 times, depending on the domain.
Let's get back to our Person. I will skip any possible re-mapping for now (e.g. a database integer into an F# DU case and similar stuff). So, our F# Person would look like this:
type Person =
    {
        personId : int
        name : string
        lastName : string
    }
So, if "tomorrow" I need to add dateOfBirth : DateTime to this type, the compiler will tell me about all the places where this needs to be fixed. This is great, because the C# compiler would not point me to every place where that date of birth needs to be handled... except that the F# compiler will not tell me that I need to go and add a column to the Person table, whereas in C#, since I would have to update the interface first, the compiler will tell me which objects must be fixed, including the database-related one(s).
Apparently, I want the best from both worlds in F#. And while this can be achieved using interfaces, it just does not feel the F# way. After all, the analog of DI / IOC is done very differently in F# and it is usually achieved by passing functions rather than interfaces.
So, here are two questions.
How can I easily manage database up / down migrations in F# world? And, to start from, what is the proper way to actually do the database migrations in F# world when many developers are involved?
What is the F# way to achieve “the best of C# world” as described above: when I update F# type Person and then fix all places where I need to add / remove properties to the record, what would be the most appropriate F# way to “fail” either at compile time or at least at test time when I have not updated the database to match the business object(s)?
How can I easily manage database up / down migrations in F# world? And, to start from, what is the proper way to actually do the database migrations in F# world when many developers are involved?
The most natural way to manage db migrations is to use tools native to the db, i.e. plain SQL. On our team we use the DbUp package: for every solution we create a small console project to roll up db migrations in dev and during deployment. Consumer apps are in both F# (type providers) and C# (EF), sometimes against the same database. Works like a charm.
You mentioned EF Code First. F# SQL type providers are all inherently "db first", because they generate types from an external data source (the database) and not the other way around. I don't think mixing the two approaches is a good idea. In fact, I wouldn't recommend EF Code First to anyone for managing migrations: plain SQL is simpler, doesn't require "extensive shaman dancing", is infinitely more flexible, and is understood by far more people.
If you are uncomfortable with manual SQL scripting and are considering EF Code First just for the automatic generation of migration scripts, note that even the MS SQL Server Management Studio designer can generate migration scripts for you.
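For what it's worth, such a console project is tiny. A sketch along the lines of the DbUp documentation (the connection string is a placeholder; the scripts are plain .sql files embedded in the project):

using System.Reflection;
using DbUp;

class Program
{
    static int Main()
    {
        var upgrader = DeployChanges.To
            .SqlDatabase("Server=.;Database=AppDb;Trusted_Connection=True;")
            .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
            .LogToConsole()
            .Build();

        // Runs, in order, every embedded script that has not been applied yet.
        var result = upgrader.PerformUpgrade();
        return result.Successful ? 0 : -1;
    }
}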
What is the F# way to achieve "the best of C# world" as described above: when I update F# type Person and then fix all places where I need to add / remove properties to the record, what would be the most appropriate F# way to "fail" either at compile time or at least at test time when I have not updated the database to match the business object(s)?
My recipe is as follows:
Don't use the interfaces. As you said :)
"interfaces, it just does not feel the F# way"
Don't let the types autogenerated by the type provider leak outside a thin db access layer. They are not business objects, and neither are EF entities, as a matter of fact.
Instead, declare F# records and/or discriminated unions as your domain objects. Model them as you please and don't feel constrained by the db schema.
In the db access layer, map from the autogenerated db types to your domain F# types. Every usage of the types autogenerated by the type provider begins and ends here. Yes, it means you have to write the mappings manually, which introduces a human factor: e.g. you can accidentally map FirstName to LastName. In practice it's a tiny overhead, and the benefits of decoupling outweigh it by an order of magnitude.
How to make sure you don't forget to map some property? You can't forget: the F# compiler will emit an error if a record is not fully initialized.
How to add a new property and not forget to initialize it? Start with the F# code: add the new property to the domain record(s); the F# compiler will guide you to all record instantiations (usually just one) and force you to initialize it with something (you will have to add a migration script / upgrade the database schema accordingly).
How to remove a property and not forget to clean everything up, down to the db schema? Start from the other end: delete the column from the database. All mappings between type provider types and domain F# records will break and highlight the properties that became redundant (more importantly, this forces you to double-check that they really are redundant and to reconsider your decision).
In fact, in some scenarios you may want to preserve the database column (e.g. for historical/audit purposes) and only remove the property from the F# code. That's just one (rather rare) example of the multitude of scenarios in which it's convenient to have the domain model decoupled from the db schema.
In Short
migrations via plain SQL
domain types are manually declared F# records
manual mapping from Type Providers to F# domain types
Even Shorter
Stick with Single Responsibility Principle and enjoy the benefits.

Dynamic table name in EF Core 2.2

I want to make a universal method for working with tables. I have studied these links:
Dynamically Instantiate Model object in Entity Framework DB first by passing type as parameter
Dynamically access table in EF Core 2.0
As an example, an ASP.NET Core controller method for one of the SQL tables is shown below. There are many tables, and you have to implement such (delete, add, change) methods for each one:
[Authorize(Roles = "Administrator")]
[HttpPost]
public ActionResult DeleteToDB(string id)
{
    webtm_mng_16Context db = new webtm_mng_16Context();
    var Obj_item1 = (from o1 in db.IT_bar
                     where o1.id == int.Parse(id)
                     select o1).SingleOrDefault();
    if (Obj_item1 != null)
    {
        db.IT_bar.Remove(Obj_item1);
        db.SaveChanges();
    }
    var Result = "ok";
    return Json(Result);
}
I want a universal method for all such operations, with the ability to change the name of the table dynamically - ideally, setting the table name as a string. I know this can be done with raw SQL, but is there really no simple way to implement this in EF Core?
Sorry, but you need to rework your model.
It is possible to do something generic as long as you have one table per type - you can go into the configuration and change the database table; OpenIddict allows that. You can override the constructors of the DbContext and do whatever you want with the object model, and that includes changing table names.
What you can also do is a generic base class taking the classes you deal with as type parameters. I have those - taking (a) the db entity type and (b) the API-side DTO type - and then using some generic functions and AutoMapper to map between them. A sketch of the idea follows.
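As a rough sketch of that idea in EF Core (IT_bar and webtm_mng_16Context are the types from the question; the remaining names are illustrative, and DbContext.Set<TEntity>() resolves the mapped table from the statically known entity type):

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

public abstract class CrudControllerBase<TEntity> : Controller where TEntity : class
{
    private readonly DbContext _db;

    protected CrudControllerBase(DbContext db) { _db = db; }

    [Authorize(Roles = "Administrator")]
    [HttpPost]
    public ActionResult DeleteToDB(int id)
    {
        var entity = _db.Set<TEntity>().Find(id); // Find uses the mapped primary key
        if (entity != null)
        {
            _db.Set<TEntity>().Remove(entity);
            _db.SaveChanges();
        }
        return Json("ok");
    }
}

// One trivial subclass per entity keeps the type statically known:
public class ITBarController : CrudControllerBase<IT_bar>
{
    public ITBarController(webtm_mng_16Context db) : base(db) { }
}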
But the moment you need to grab the table name dynamically, you are in a world of pain. EF's standard architecture assumes that an object type is mapped to one database entity. As such, an ID is unique within a table - the whole relational model depends on that. Id 44 has to be unique for a specific object type, not for the combination of an object and whichever table it happened to be loaded from.
You also miss out significantly on actual logic, e.g. for delete. I hate to tell you, but while you can implement security in other layers for reading, every single one of my write/update methods is handwritten. Now, it may seem that "Authorize" works - but no, it does not. Or rather, it does if your application is "Hello World" complex. I sometimes run pages of testing code to decide whether an operation is allowed in a specific business context, and this IS specific - e.g. whether the user has set an override switch (which may or may not be valid depending on who he is) to bypass certain business rules. All of that is specific anyway.
Oh, what you can also do, because you seem to have a lot of tables: do NOT write the classes by hand - generate them. Scaffolding is not that complex. I hardly remember when I last wrote EF Core database classes by hand - nowadays they all come out of Entity Developer (a tool from Devart), while the db is handled with change scripts (I work db first - I actually want to USE the database, and that means filtered indices, triggers, some SPs and views with specific SQL), so migrations do not really work at all.
But overriding the table name dynamically - while keeping the same object in the background - will bite you quite fast. It likely only works for extremely simplistic things - you know, "Hello World" examples - and breaks apart the moment you have actual logic.

What's the best way to persist a dynamic "info" object, C#/.Net MVC/SQL Server 2008?

This may sound like a pipe dream, but I'm wondering if it's possible. I want to be able to take a C# dynamic object, called info, and persist it to a database (I'm currently on SQL Server 2008).
The info object, being dynamic, could have any number of properties: Id, Title, Content, DateExpires, DateAdded, DateUpdated, TypeOf, etc...
Each instance could/will contain a differing number of properties, depending on what the instance is used for: a blog post, a classified ad, an event, etc... However, there would be a core set of properties every info object shares: Id, MemberId, TypeOf...
The idea is, to have a central table which stores all dynamic info objects, yet, allow me to query based on any property (which may not exist for some objects).
For example, blog posts. They'd have: Id, MemberId, DateAdded, Title, Content, TypeOf, etc... An event would have: Id, MemberId, Title, Content, TypeOf, DateOf, Recurrance, MinAge, MaxAge, etc...
I'd like to build queries based on any given info object property.
Why? Flexibility. If I can get this working, I can use the info object for future cases within my web app. If this is an extremely bad idea, please let me know (and why). Thanks!
This is possible, and I've seen many systems built like this... however, those systems are usually the hardest to maintain due to this "generic nature". There is nothing inherently wrong with the approach; it's just much harder to pull off, and in most instances it ends up being poorly implemented.
In recent years, non-relational databases (like the document databases @Marc Gravell mentioned) have caught up, and they are very good for some domains, but you need to make sure it's the right fit for your project.
When you take the path of building this "generic database" you sacrifice other well-known technologies that we take for granted. For example, database optimization in relational databases is well understood, and many tools work with them with little or no effort. If you go a different path, all of a sudden the tools you are used to might not work, and you will end up either building your own to make up for what doesn't work, or buying/choosing esoteric tools.
Depending on the size of your project it might be wise to build one or two of those systems that you think would be common and then try to see if they are as common as you think.
You could use a 'base' table for the common properties and a property name-value table for the others, meaning:
Table Info
    int Id (PK),
    int MemberId,
    Date DateAdded // etc...
Table Properties
    int InfoId (PK) (FK),
    varchar PropertyName (PK),
    varchar PropertyValue,
    varchar PropertyType // optionally store information about the type of the property
After querying, you can use reflection to translate the properties from (name, value) pairs into proper properties.
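For illustration, that reflection step could look like this sketch (MapProperties is a made-up helper; real code would need error handling and richer type conversion):

using System;
using System.Collections.Generic;

static class InfoMapper
{
    // Copies (name, value) pairs from the Properties table onto a fresh POCO.
    public static T MapProperties<T>(IDictionary<string, string> pairs) where T : new()
    {
        var result = new T();
        foreach (var pair in pairs)
        {
            var prop = typeof(T).GetProperty(pair.Key);
            if (prop != null && prop.CanWrite)
                prop.SetValue(result, Convert.ChangeType(pair.Value, prop.PropertyType), null);
        }
        return result;
    }
}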
That said, I think this is a very bad idea, for several reasons:
1. It creates further complexity in your CRUD logic.
2. You don't have well-defined entities in your domain model, which I don't like.
3. Validation is that much more difficult - you have to manually verify that Post, for example, doesn't have a property called Recurrance.
I would use this method only if you truly need the flexibility - for example, if users can save custom properties that you don't know about in advance.
Otherwise, if you know that your entities are limited to Post, Event, Employee, etc., I would just limit myself to those.

C# object equality for database persistence

I want to learn how others cope with the following scenario.
This is not homework or an assignment of any kind. The example classes have been created to better illustrate my question; however, it reflects a real-life scenario on which we would like feedback.
We retrieve all data from the database and place it into an object. An object represents a single record, and if multiple records exist in the database, we place the data into a List<> of the record object.
Lets say we have the following classes;
public class Employee
{
    public bool _Modified;
    public string _FirstName;
    public string _LastName;
    public List<Employee_Address> _Address;
}
public class Employee_Address
{
    public bool _Modified;
    public string _Address;
    public string _City;
    public string _State;
}
Please note that the getters and setters have been omitted from the classes for the sake of clarity. Before any code police accuse me of not using them, please note they have been left out for this example only.
The database has a table for Employees and another for Employee Addresses.
Conceptually, what we do is create a List object that represents the data in the database tables. We do a deep clone of this object, which we then bind to controls on the front end. We then have two objects (Orig and Final) representing the data from the database.
The user then makes changes to the "Final" object by creating, modifying, deleting records. We then want to persist these changes to the database.
Obviously we want to be as elegant as possible, only editing, creating, deleting those records that require it.
We ultimately want to compare the two List objects so that we can:
See what properties have changed so that the changes can be persisted to the database.
See what properties (records) no longer exist in the second List<> so that these records can be deleted from the database.
See what new properties exist in the new List<> so that we can create these in the database.
Who wants to get the ball rolling on how we can best achieve this? Keep in mind that we also need to drill down into the Employee_Address list to check for any changes, not just the top-level properties.
I hope I have made myself clear and look forward to any suggestions.
Add a nullable ObjectID field to your layer's base type. Pass it to the front end and back to see whether a particular instance already exists in the database.
It also has many other uses, even if you don't have any kind of Identity Map.
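A sketch of that suggestion (the type and property names here are illustrative):

public abstract class PersistableBase
{
    // Null means "not yet persisted"; a value means the row already exists,
    // so the DAL issues an UPDATE instead of an INSERT.
    public int? ObjectID { get; set; }
}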
I would do exactly the same thing .NET does in its data classes, that is, keep the record state (System.Data.DataRowState comes to mind) and all associated versions together in one object.
This way:
You can tell at a glance whether it has been modified, inserted, deleted, or is still the original record.
You can quickly find what has been changed by querying the new vs old versions, without having to dig in another collection to find the old version.
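A minimal sketch of that idea, loosely modeled on System.Data.DataRowState (the types here are made up for illustration):

public enum RecordState { Unchanged, Added, Modified, Deleted }

public class TrackedRecord<T>
{
    public RecordState State { get; set; }
    public T Original { get; set; } // snapshot as loaded from the database
    public T Current { get; set; }  // version the UI binds to and edits

    // Persistence logic switches on State: INSERT for Added, DELETE for Deleted,
    // and for Modified an UPDATE built by diffing Original against Current.
}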
You should investigate the use of the Identity Map pattern. Coupled with Unit of Work, this allows you to maintain an object "cache" of sorts from which you can check which objects need saving to the database, and when reading, to return objects from the identity map rather than creating new objects and returning those.
Why would you want to compare two list objects? You will potentially be using a lot of memory for what is essentially duplicate data.
I'd suggest having a status property for each object that can tell you whether that particular object is New, Deleted, or Changed. If you want to go further than making the property an enum, you can make it an object containing some sort of dictionary of the changes to apply, though that will most likely only matter for the Changed status.
After you've added such a property, it should be easy to go through your list, add the New objects, remove the Deleted objects etc.
You may want to check how the Entity Framework does this sort of thing as well.
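For reference, a sketch of the Entity Framework equivalent (shown with the EF Core namespace; EF6 exposes the same ChangeTracker.Entries() API on System.Data.Entity.DbContext):

using System;
using Microsoft.EntityFrameworkCore;

static class ChangeAudit
{
    // Prints every tracked entity together with the state EF assigned to it
    // (Added, Modified, Deleted, Unchanged) since it was loaded or attached.
    public static void ShowPendingChanges(DbContext db)
    {
        foreach (var entry in db.ChangeTracker.Entries())
            Console.WriteLine("{0}: {1}", entry.Entity.GetType().Name, entry.State);
    }
}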
