It's not a really common case, but in this particular one the task is to accomplish the following:
Serialize a list of existing entities loaded with NHibernate in database A
Export the serialized entities to a file
Import the serialized entities file to database B, using NHibernate as the persisting agent
The problem is that the entity maps are marked with an Id generator:
public class EntityMap : ClassMap<Entity>
{
    public EntityMap()
    {
        Id(x => x.Id)
            .GeneratedBy.Guid();
        // Other properties
    }
}
With that mapping, every time I call:
ISession session = NHibernateSession.GetSession();
IList<Entity> entities = // Load entities from the serialized object in a file
foreach (var entity in entities)
{
    session.Save(entity);
}
NHibernate keeps generating a new Id for those entities.
Is there a way to keep the Id generation strategy in the mapping, but still be able to persist an existing entity with its Id predefined?
The logic here is simple:
You want to have application-assigned identifiers. Therefore, you should configure NHibernate for application-assigned identifiers.
What you're asking is basically to tell NHibernate to generate the identifiers for you, but then insisting on doing it yourself anyway.
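For reference, a minimal sketch of what an assigned-identifier mapping looks like in Fluent NHibernate, assuming the same Entity class as in the question:

public class EntityMap : ClassMap<Entity>
{
    public EntityMap()
    {
        // Assigned: the application sets Id before calling Save,
        // and NHibernate persists it as-is instead of generating one.
        Id(x => x.Id).GeneratedBy.Assigned();
        // Other properties
    }
}

With that mapping, session.Save(entity) keeps whatever Id the deserialized entity already carries.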
One can also note that NHibernate isn't really intended as a replication tool. Have you considered serializing the entities to an SQL script instead, which you could simply execute on the target database? That would circumvent NHibernate's id assignment.
For doing distributed databases, my suggestions are:
To begin with, I would look to see if the database system itself has any sort of replication/data stream/data mirror feature that could be used for this. If that doesn't work, I would probably try to write something similar - that is, bypass NH on the read side as well, just do simple table reads and write out INSERT statements to a file that the recipient can apply (a rough sketch of this follows below). This works as long as there are no or few data transformations required. If the data needs complex transformations (which couldn't be avoided)... well, if the logic gets complex enough we are back to doing it through NH.
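As a minimal sketch of the export side of that idea, using plain ADO.NET (the table and column names are placeholders, and real code would need proper type formatting and escaping per column):

using System.Data.SqlClient;
using System.IO;

using (var conn = new SqlConnection(connectionString))
using (var writer = new StreamWriter("export.sql"))
{
    conn.Open();
    using (var cmd = new SqlCommand("SELECT Id, Name FROM Entity", conn))
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // Emit one INSERT per row, preserving the original Id.
            writer.WriteLine(
                "INSERT INTO Entity (Id, Name) VALUES ('{0}', '{1}');",
                reader.GetGuid(0),
                reader.GetString(1).Replace("'", "''")); // naive escaping
        }
    }
}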
Another way is to try to capture the input that modifies the original database (e.g. command pattern), and pass that same input to all the other databases also. Essentially letting each system react to the input - with a deterministic system and the same input, all databases will end up in the same state.
Related
When I don't have anything in my 'bookings' table, my GET endpoints for my Customer and Accommodation tables work perfectly. Once I create a booking, every GET request for each table returns every entry in every table.
This is my data model
This is my GET request for customers:
// GET: api/Customer
[ResponseType(typeof(Customer))]
public async Task<IHttpActionResult> GetCUSTOMERs()
{
    var customers = await db.Customers.ToListAsync();
    return Ok(customers);
}
When I call a GET request for the customer table, I only want customer data. How can I do this?
An Entity Framework model has lazy loading enabled by default.
When you return Ok(customers); the API will attempt to serialize the entities so that they can be sent as (probably) JSON or XML. As it serializes through each customer entity it will encounter the Bookings property. As the serializer is requesting that property, Entity Framework will "lazy load" in the bookings which are associated with the customer. Then the serializer will attempt to serialize each booking and hit the Accommodations property... and so on.
Your code above is returning all customers, so you will end up returning every accommodation which has been booked. I expect if you made a new Accommodation which had no bookings, it would not be returned in the output from this call.
There are several ways you can prevent all this from happening:
Disable Lazy Loading
You can disable lazy loading on an EF model by opening the model, right-clicking on the white background of the model diagram, choosing "Properties", and then setting "Lazy Loading Enabled" to "False".
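If you are working with a DbContext-based model, the same switch can also be flipped in code; a minimal sketch, with the context and entity names assumed from the question:

using System.Data.Entity;

public class BookingsContext : DbContext
{
    public BookingsContext()
    {
        // Stop EF from silently loading related entities on property access.
        Configuration.LazyLoadingEnabled = false;
        // Also stop EF creating proxy types, which serialize badly anyway.
        Configuration.ProxyCreationEnabled = false;
    }

    public DbSet<Customer> Customers { get; set; }
}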
If you have other functions where you want to access the related properties from an entity, then you can either load them in to the context with an "Include" or load them separately and let the EF fixup join the entities together.
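For example, with lazy loading off, an endpoint that genuinely needs the related data would ask for it explicitly (assuming the Customer.Bookings navigation property from the question):

// requires: using System.Data.Entity; (for Include and ToListAsync)
var customersWithBookings = await db.Customers
    .Include(c => c.Bookings) // eager-load only the relationship you need
    .ToListAsync();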
My personal opinion is that disabling lazy-loading is generally a good idea because it makes you think about the queries you are making to the database and you have to be much more explicit about asking for what data should be returned. However, it can be a lot more effort and is probably something to look at when you start trying to optimise your application rather than just getting it working.
This Microsoft page "Loading Related Entities" also explains various options (as well as describing exactly the issue with lazy loading your entire database!).
Map Your Entities and Return DTOs
You have more control over how the code traverses your model if you map the entities from EF into DTOs.
From an API perspective using DTOs is a great idea because it allows you to more or less define the output of an endpoint like an interface. This can often remain the same while the underlying data structure may change. Returning the output of an EF model means that if the model changes, things which use that data may also need to change.
Something like AutoMapper is often used to map an EF entity into DTOs.
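A hand-rolled version, to show the shape of the idea; CustomerDto and its properties here are illustrative, not taken from your model:

public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// GET: api/Customer
public async Task<IHttpActionResult> GetCustomers()
{
    // Project straight to the DTO, so only these columns are queried
    // and nothing else can leak into the serialized output.
    var customers = await db.Customers
        .Select(c => new CustomerDto { Id = c.Id, Name = c.Name })
        .ToListAsync();
    return Ok(customers);
}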
Serializer Settings
There may be some media-type formatter settings which allow you to limit the depth of entities which will be traversed for serialisation. See JSON and XML Serialization in ASP.NET Web API for a place to start.
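For instance, Json.NET's reference-loop setting (typically placed in WebApiConfig.Register) stops the serializer chasing circular Customer -> Booking -> Customer references, though it does not stop the lazy loads themselves:

config.Formatters.JsonFormatter.SerializerSettings.ReferenceLoopHandling =
    Newtonsoft.Json.ReferenceLoopHandling.Ignore;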
This is probably too broad a change, though, and when you actually do want to return related objects it would cause a problem there instead.
I want to make a universal method for working with tables. I have studied these links:
Dynamically Instantiate Model object in Entity Framework DB first by passing type as parameter
Dynamically access table in EF Core 2.0
As an example, the ASP.NET Core controller method for one of the SQL tables is shown below. There are many tables, and such (DELETE, ADD, CHANGE) methods have to be implemented for each one:
[Authorize(Roles = "Administrator")]
[HttpPost]
public ActionResult DeleteToDB(string id)
{
    webtm_mng_16Context db = new webtm_mng_16Context();
    var Obj_item1 = (from o1 in db.IT_bar
                     where o1.id == int.Parse(id)
                     select o1).SingleOrDefault();
    if (Obj_item1 != null)
    {
        db.IT_bar.Remove(Obj_item1);
        db.SaveChanges();
    }
    var Result = "ok";
    return Json(Result);
}
I want to get a universal method for all such operations, with the ability to change the name of the table dynamically - ideally, setting the table name as a string. I know that this can be done using SQL inserts, but is there really no simple way to implement this in EF Core?
Sorry, but you need to rework your model.
It is possible to do something generic as long as you have one table per type - you can go into the configuration and change the database table. OpenIddict allows that. You can overwrite the constructors of the DbContext and play whatever you want with the object model, and that includes changing table names.
What you can also do is a generic base class taking the classes you deal with as type parameters. I have those - taking (a) the db entity type and (b) the api-side dto type - and then using some generic functions and AutoMapper to map between them (a sketch of the idea follows below).
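A rough sketch of that base-class idea for the delete case; the names are illustrative, and DbContext.Set<TEntity>() is what lets one method serve any mapped entity type:

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

public abstract class CrudControllerBase<TEntity> : Controller
    where TEntity : class
{
    protected readonly DbContext Db;

    protected CrudControllerBase(DbContext db) => Db = db;

    [Authorize(Roles = "Administrator")]
    [HttpPost]
    public ActionResult DeleteFromDB(int id)
    {
        // Set<TEntity>() resolves the mapped table from the CLR type,
        // so each concrete controller only supplies its entity class.
        var entity = Db.Set<TEntity>().Find(id);
        if (entity != null)
        {
            Db.Set<TEntity>().Remove(entity);
            Db.SaveChanges();
        }
        return Json("ok");
    }
}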
But the moment you need to grab the table name dynamically, you are in a world of pain. EF's standard architecture assumes that an object type is mapped to one database entity. As such, an ID is unique within a table - the whole relational model depends on that. Id 44 has to be unique for a specific object type, not for an object plus whichever table it happened to be loaded from at the moment.
You also miss out significantly on actual logic, e.g. for delete. I hate to tell you, but while you can implement security on other layers for reading, every single one of my write/update methods is handwritten. Now, it may seem that "Authorize" works - but no, it does not. Or - it does if your application is "Hello world" complex. I sometimes run pages of testing code to decide whether an operation is allowed in a specific business context, and this IS specific: whether the user has set an override switch (which may or may not be valid depending on who he is) to bypass certain business rules. All of that is specific anyway.
Oh, what you can also do, because you seem to have a lot of tables: do NOT write the classes by hand, generate them. Scaffolding is not that complex. I hardly remember when I last generated EF Core database classes - they nowadays all come out of Entity Developer (a tool from Devart), while the db is handled with change scripts (I work db-first - I actually want to USE the database, and that means filtered indices, triggers, some stored procedures and views with specific SQL), so migrations do not really work at all.
But overwriting the table name dynamically - while keeping the same object in the background - will bite you quite fast. It likely only works for extremely simplistic things - you know, "hello world" examples - and breaks apart the moment you actually have logic.
I have to take data from an existing database and move it into a new database that has a new design. So the new database has different columns and tables than the old one.
So basically I need to read tables from the old database and put that data into the new structure; some data won't be used anymore, and other data will be placed in different columns or tables, etc.
My plan was to just read the data from the old database with basic queries like
SELECT * FROM mytable
and use Entity Framework to map the new database structure. Then I can basically do something similar to this:
while (result.Read())
{
    context.Customer.Add(new Customer
    {
        Description = (string) result["CustomerDescription"],
        Address = (string) result["CuAdress"],
        // and go on like this for all properties
    });
}
context.SaveChanges();
I think it is more convenient to do it like this to avoid writing massive INSERT statements and so on, but are there any problems with doing it this way? Is it considered bad for some reason that I don't understand - poor performance or any other pitfalls? If anyone has any input on this it would be appreciated, so I don't start with this and have it turn out to be a big no-no for some reason.
Something that you could perhaps also try is simply to write a new DbContext class for the new target database.
Then write a console application with a static method which copies entities and properties from the one context to the other (a sketch follows below).
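A minimal sketch of that copy, assuming hypothetical OldDbContext/NewDbContext classes and the column names from the question:

static void CopyCustomers()
{
    using (var oldDb = new OldDbContext())
    using (var newDb = new NewDbContext())
    {
        // AsNoTracking keeps the read side cheap; we only need the values.
        foreach (var old in oldDb.Customers.AsNoTracking())
        {
            newDb.Customers.Add(new Customer
            {
                Description = old.CustomerDescription,
                Address = old.CuAdress
                // ...map the remaining properties the same way
            });
        }
        newDb.SaveChanges(); // EF writes the inserts in FK-safe order
    }
}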
This will ensure that your referential integrity remains intact and saves you a lot of hassle in terms of having to write SQL code, since EF does all the heavy lifting for you in this regard.
If the DbContext contains a lot of entity DbSets, I recommend that you use some sort of auto-mapper.
But this depends on the amount of data that you are trying to move. If we are talking terabytes, I would rather suggest you do not take this approach.
Currently our new database design is changing rapidly and I don't always have time to keep up to date with the latest changes being made. Therefore I would like to create some basic integration tests that are basically sanity checks on my mappings against the database.
Here are a few of the things I'd like to accomplish in these tests:
Detect columns I have not defined in my mapping but exist in the database
Detect columns I have mapped but do NOT exist in the database
Detect columns that I have mapped where the data types between the database and my business objects no longer agree with each other
Detect column name changes between database and my mapping
I found the following article by Ayende, but I just want to see what other people out there are doing to handle this sort of thing. Basically I'm looking for simplified tests that cover a lot of my mappings but do not require me to write separate queries for every business object in my mappings.
I'm happy with this test, which comes from the one Ayende proposed:
[Test]
public void PerformSanityCheck()
{
    foreach (var s in NHHelper.Instance.GetConfig().ClassMappings)
    {
        Console.WriteLine(" *************** " + s.MappedClass.Name);
        NHHelper.Instance.CurrentSession
            .CreateQuery(string.Format("from {0} e", s.MappedClass.Name))
            .SetFirstResult(0).SetMaxResults(50).List();
    }
}
I'm using a plain old HQL query since this version comes from a very old project and I'm too lazy to update it to QueryOver or Linq2NH or something else...
It basically pings all the mapped entities in the configuration and grabs some data too, in order to see that all is OK. It does not catch the case where a field exists in the table but not in the mapping, which can cause persistence problems if the column is not nullable.
I'm aware that Fabio Maulo has something possibly more accurate.
As a personal consideration, if you are thinking about improvements, I would try to implement a strategy like this: since the mappings are browsable via the API, look for any explicit/implicit table declaration in the map, and ping it against the database using the standard schema helper classes you have inside NH (they ultimately use the ADO.NET schema classes, but they insulate all the configuration stuff we already did in NH itself). By playing a little with the naming strategy, we can achieve a one-by-one table field checklist. Another improvement could be, in the case of an unmatched field, looking for a candidate by applying the Levenshtein distance to all the available names and choosing one if some threshold requirements are satisfied. This is of course useless in class-first scenarios where the DB schema is generated by NH itself.
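Note that NHibernate also ships a schema validator that compares the mappings against the live database; a minimal sketch, assuming the same NHHelper configuration accessor as in the test above:

using NHibernate.Tool.hbm2ddl;

var validator = new SchemaValidator(NHHelper.Instance.GetConfig());
// Throws if mapped tables or columns are missing or have mismatched types.
validator.Validate();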
I use this one too:
Verifying NHibernate Entities Contain Only Virtual Members
Okay, so I've studied C# and ASP.NET long enough and would like to know how all these custom classes I created relate to the database. For example,
I have a class called Employee:
public class Employee
{
    public int ID { get; set; }
    public string Name { get; set; }
    public string EmailAddress { get; set; }
}
and I have a database table with the following 4 fields:
ID
Name
EmailAddress
PhoneNumber
It seems like the custom class is my database, and in ASP.NET I can simply run the LINQ to SQL tooling on my database and get the whole schema of my class without typing out a custom class with getters and setters.
So let's just say that now I am running a query to retrieve a list of employees. I would like to know: how does my application map my Employee class to my database?
By itself, it doesn't. But add any ORM or similar, and you start to get closer. For example, with LINQ-to-SQL (which I mention because it is easy to get working with Visual Studio), you typically get (given to you by the tooling) a custom "data context" class, which you use as:
using (var ctx = new MyDatabase())
{
    foreach (var emp in ctx.Employees)
    {
        ....
    }
}
This is generating TSQL and mapping the data to objects automatically. By default the tooling creates a separate Employee class, but you can tweak this via partial classes. This also supports inserts, data changes and deletion.
There are also tools that allow re-use of your existing domain objects; either approach can be successful - each has advantages and disadvantages.
If you only want to read data, then it is even easier; a micro-ORM such as dapper-dot-net allows you to use your own type with TSQL that you write, with it handling the tedious materialisation code.
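A small illustration of the dapper style, reusing the Employee class from the question (the connection string and table name are assumptions):

using Dapper;
using System.Data.SqlClient;
using System.Linq;

using (var conn = new SqlConnection(connectionString))
{
    // Dapper runs the SQL you wrote and materialises each row
    // into an Employee by matching column names to properties.
    var employees = conn.Query<Employee>(
        "SELECT ID, Name, EmailAddress FROM Employee").ToList();
}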
Your question is a little vague, imo. But what you are referring to is the Model of the MVC (Model-View-Controller) architecture.
The Model - your Employee class - manages the data of the application. So it can not only get and set (save/update) your data, but it can also be used to notify of a data change (usually to the view).
You mentioned you were using SQL, so more than likely you could create and save an entire employee record by sending an associative array of the table data to be saved to the database. Your setters for the class would handle the specific SQL syntax to INSERT the data. In larger MVC frameworks, the Model of your application inherits from several other classes that handle proper saving to backends other than MS SQL.
Models will also normally have functions to handle finding and updating records. This usually works by specifying a search field and having it return the record, which would include the ID; you would then pass this back into a save/update function to make changes to the record. You could also tie into this level of the Model to create revisions of the data you are saving.
So how the Model correlates to your SQL structure depends on how you write it, or on which framework you decide to use. I believe a common one for ASP.NET is Microsoft's ASP.NET MVC.
Your class cannot be directly mapped to the database without an ORM tool. The ORM tool will read your configuration and automatically map your class to a DB row as per your mappings. That means you don't need to read the row and set the class fields explicitly, but you do have to provide mapping files and go through the ORM framework to load the entities; the framework will take care of the rest.
You can check out NHibernate, and here is a getting-started guide on NHibernate.
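To give a feel for what such a mapping looks like, here is a Fluent NHibernate sketch for the Employee class above (the table and column names are assumptions):

using FluentNHibernate.Mapping;

public class EmployeeMap : ClassMap<Employee>
{
    public EmployeeMap()
    {
        Table("Employee");    // the database table to map to
        Id(x => x.ID);        // primary key
        Map(x => x.Name);     // columns default to the property names
        Map(x => x.EmailAddress);
    }
}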