When I don't have anything in my 'bookings' table, my GET endpoints for my Customer and Accommodation tables work perfectly. Once I create a booking, every GET request for each table returns every entry in every table.
This is my data model
This is my get request for customers
// GET: api/Customer
[ResponseType(typeof(Customer))]
public async Task<IHttpActionResult> GetCUSTOMERs()
{
    var customers = await db.Customers.ToListAsync();
    return Ok(customers);
}
When I call a GET request for the Customer table I only want customer data. How can I do this?
An Entity Framework model has lazy loading enabled by default.
When you return Ok(customers); the API will attempt to serialize the entities so that they can be sent as (probably) JSON or XML. As it serializes through each customer entity it will encounter the Bookings property. As the serializer is requesting that property, Entity Framework will "lazy load" in the bookings which are associated with the customer. Then the serializer will attempt to serialize each booking and hit the Accommodations property... and so on.
Your code above is returning all customers, so you will end up returning every accommodation which has been booked. I expect if you made a new Accommodation which had no bookings, it would not be returned in the output from this call.
There are several ways you can prevent all this from happening:
Disable Lazy Loading
You can disable lazy loading on an EF model by opening the model, right click on the white background of the model diagram and choose "Properties", then set "Lazy Loading Enabled" to "False".
If you have other functions where you want to access the related properties from an entity, then you can either load them in to the context with an "Include" or load them separately and let the EF fixup join the entities together.
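With lazy loading disabled, related data can still be loaded explicitly with Include. A sketch of a hypothetical endpoint that does want the bookings (EF6; the `Id` and `Bookings` property names are assumptions based on the question):

```csharp
using System.Data.Entity; // for Include() and SingleOrDefaultAsync()

// GET: api/Customer/5/bookings -- hypothetical endpoint for the cases
// where related data is actually wanted.
[ResponseType(typeof(Customer))]
public async Task<IHttpActionResult> GetCustomerWithBookings(int id)
{
    // Include() eagerly loads Bookings in the same query, so no
    // lazy load fires later during serialization.
    var customer = await db.Customers
        .Include(c => c.Bookings)
        .SingleOrDefaultAsync(c => c.Id == id);

    if (customer == null) return NotFound();
    return Ok(customer);
}
```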
My personal opinion is that disabling lazy-loading is generally a good idea because it makes you think about the queries you are making to the database and you have to be much more explicit about asking for what data should be returned. However, it can be a lot more effort and is probably something to look at when you start trying to optimise your application rather than just getting it working.
This Microsoft page "Loading Related Entities" also explains various options (as well as describing exactly the issue with lazy loading your entire database!).
Map Your Entities and Return DTOs
You have more control over how the code traverses your model if you map the entities from EF into DTOs.
From an API perspective using DTOs is a great idea because it allows you to more or less define the output of an endpoint like an interface. This can often remain the same while the underlying data structure may change. Returning the output of an EF model means that if the model changes, things which use that data may also need to change.
Something like AutoMapper is often used to map an EF entity into DTOs.
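A sketch of the DTO approach without any mapping library (the DTO shape and the `Name` property are assumptions; projecting with Select also keeps the SQL query narrow):

```csharp
// A DTO exposing only the scalar customer fields -- no navigation
// properties, so serialization can never wander into Bookings.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// GET: api/Customer
public async Task<IHttpActionResult> GetCustomers()
{
    // Projecting inside the query means EF only reads the columns
    // the DTO needs, and lazy loading is never triggered.
    var customers = await db.Customers
        .Select(c => new CustomerDto { Id = c.Id, Name = c.Name })
        .ToListAsync();
    return Ok(customers);
}
```

Something like AutoMapper would replace the hand-written property copying once the number of DTOs grows.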
Serializer Settings
There may be some media-type formatter settings which allow you to limit the depth of entities which will be traversed for serialisation. See JSON and XML Serialization in ASP.NET Web API for a place to start.
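For example, with the default JSON.NET formatter in Web API you can tell the serializer to ignore reference loops rather than follow them; a sketch for `WebApiConfig.Register` (this stops cycles, it does not limit plain traversal depth):

```csharp
using System.Web.Http;
using Newtonsoft.Json;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Ignore loops such as Customer -> Bookings -> Customer
        // instead of serializing the whole graph or failing.
        config.Formatters.JsonFormatter.SerializerSettings.ReferenceLoopHandling =
            ReferenceLoopHandling.Ignore;
    }
}
```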
This is probably too broad of a change, and when you want to actually return related objects would cause a problem there instead.
Related
I'm new to C# and .NET Core.
I'm wondering why, when editing a model using Bind, it creates a new model and binds the attributes from the POST. But if you do not place all your fields in the form post as hidden fields, the bind will null them out.
Shouldn't it load the model, update the bound parameters, and leave the rest alone?
For example if I'm updating a person and person has
Id, Name, Age, Updated, Created
Edit(int id, [Bind("Id,Name,Age")] Person p)
When I go to _context.Update(p), it's nulling out Updated and Created because they weren't bound.
WHY does it work like that?
How can I make it only update the bound parameters without nulling out the ones I don't need to load?
What you pass in is a deserialized block of data that MVC maps onto an entity definition. It doesn't auto-magically open a DbContext, load the entity, and overwrite values; it just creates a new instance of the entity and copies the posted values across. Everything else is left at its default.
As a general rule I advise against ever passing entities from the client to the server, to avoid confusion about what is being sent back. When performing an update, accept a view model with just the applicable properties to update, and ideally both the data model and the view model should include a row version (i.e. a Timestamp column in SQL Server, which can be converted to/from a Base64 string to send and compare in the view model).
From there, when performing an update, you fetch the entity by ID, compare the timestamp, then can leverage Automapper to handle copying data from the ViewModel into the Entity, or copy the values across manually.
That way, when your ViewModel comes back with the data to update:
using (var context = new AppDbContext())
{
    var entity = context.Persons.Single(x => x.Id == id);
    if (entity.TimeStampBase64 != viewModel.TimestampBase64)
    {
        // Handle fact that data has changed since the client last loaded it.
    }
    // Copy from the view model (source) onto the tracked entity (destination).
    Mapper.Map(viewModel, entity);
    context.SaveChanges();
}
You could use the entity definition as-is and still load the existing entity and use Automapper to copy values from the transit entity class to the DbContext tracked one, however it's better to avoid confusing instances of entities between tracked "real" entity instances, and potentially incomplete untracked transit instances. Code will often have methods that accept entities as parameters to do things like validation, calculations, etc. and it can get confusing if later those methods assume they will get "real" entities vs. getting called somewhere that has only a transient DTO flavour of an entity.
It might seem simpler to take an entity in and just call DbContext.Update(entity) with it, however this leaves you with a number of issues including:
You need to pass everything about the entity to the client so that the client can send it back to the server. This requires hidden inputs for every field, or serializing the entire entity into the page, which exposes more of your domain model to the browser and increases the payload size in both directions.
Because you need to serialize everything, "quick fixes" like serializing the entire entity in a <script> block for later use can lead to lazy-load "surprises" as the serializer will attempt to touch all navigation properties.
Passing an entity back to the server to perform an Update() means trusting the data coming back from the client. Browser debug tools can intercept a form submit or Ajax POST and tamper with the payload, opening the door to unexpected changes. DbContext.Update also results in an UPDATE statement that overwrites all columns whether anything changed or not, whereas change-tracked entities build UPDATE statements containing only the values that actually changed, and only if anything changed at all.
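A minimal sketch of the fetch-then-copy update without AutoMapper (`PersonEditViewModel`, `_context`, and the property names are assumptions based on the question):

```csharp
public class PersonEditViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
}

[HttpPost]
public IActionResult Edit(int id, PersonEditViewModel vm)
{
    // Load the tracked entity, then copy only the editable fields.
    var person = _context.Persons.Single(p => p.Id == id);

    person.Name = vm.Name;
    person.Age = vm.Age;
    person.Updated = DateTime.UtcNow;
    // Created is never touched, so it keeps its stored value, and the
    // change tracker emits an UPDATE for the modified columns only.

    _context.SaveChanges();
    return RedirectToAction(nameof(Index));
}
```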
Please see the code below:
public Enquiry GetByID(Guid personID)
{
    using (IUnitOfWork<ISession> unitOfWork = UnitOfWorkFactory.Create())
    {
        IRepository repository = RepositoryFactory.Create(unitOfWork);
        var person = repository.GetById(personID);
        return person;
    }
}
It is contained in an application service layer. Person is passed back to the Controller and loaded into a view. The view then errors because it cannot load Person.Collection (a list).
I believe this is because the collection is loaded using lazy loading and the unit of work and NHibernate session is closed once the view is reached. Must I use eager loading in this situation or am I misunderstanding something?
IMHO lazy loading is evil!
The idea behind a repository is to return an aggregate. That aggregate should contain all the relevant data that constitute the aggregate. It is never loaded in bits. An aggregate should, therefore, always be eagerly fetched.
If you remove UoW/ORM from the equation lazy loading just isn't an option.
You should try not to query your domain. If you have a situation where a single aggregate contains all the data you need and that data has been exposed then that would be OK.
However, I would recommend you use a read model. A simple query layer. Give that a try and you may just be surprised :)
The session is ending when the method finishes (it's wrapped in a using block), which is before your view code runs. So yes, you do need to eager load the items in the collection property of the Enquiry you get back from the NHibernate session.
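With NHibernate's LINQ provider, eager fetching inside the unit of work could look like this sketch (`unitOfWork.Session`, `Id`, and `Collection` are assumed names; `Fetch` comes from `NHibernate.Linq`):

```csharp
using NHibernate;
using NHibernate.Linq; // for Query<T>() and Fetch()

public Enquiry GetByID(Guid personID)
{
    using (IUnitOfWork<ISession> unitOfWork = UnitOfWorkFactory.Create())
    {
        // Fetch() issues a join so the collection is fully populated
        // before the session is disposed at the end of the using block.
        var enquiry = unitOfWork.Session.Query<Enquiry>()
            .Fetch(e => e.Collection)
            .Single(e => e.Id == personID);
        return enquiry;
    }
}
```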
A better way is to setup the unit of work pattern so it wraps around the entire request in the pipeline. For example, if you have a Global.asax file, it has two methods called Application_BeginRequest and Application_EndRequest.
The Application_BeginRequest method would create a new NHibernate session which can be retrieved by your controllers.
The Application_EndRequest method would simply flush your session, saving any data changes to the underlying database.
I refer you to the following StackOverflow question for incorporating NHibernate sessions with the Global.asax component: NHibernate Session in global.asax Application_BeginRequest
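A minimal session-per-request sketch in Global.asax (assumes a configured ISessionFactory and that `current_session_context_class` is set to a web session context in the NHibernate configuration):

```csharp
using System;
using System.Web;
using NHibernate;
using NHibernate.Context;

public class Global : HttpApplication
{
    // Built once at application startup (configuration omitted here).
    public static ISessionFactory SessionFactory;

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Open one session per request and bind it to the current context
        // so controllers can retrieve it via SessionFactory.GetCurrentSession().
        var session = SessionFactory.OpenSession();
        CurrentSessionContext.Bind(session);
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        // Unbind and flush after the view has rendered, so lazy loads
        // during rendering still have a live session.
        var session = CurrentSessionContext.Unbind(SessionFactory);
        if (session != null)
        {
            session.Flush();
            session.Dispose();
        }
    }
}
```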
Introducing a View Model layer instead of passing the raw entity over to the Controller will solve your problem since mapping to the Person View Model (inside the using clause) will access Person.Collection and trigger the loading.
Alternatively, you could have a whole Read side that doesn't go through the domain, as @EbenRoux suggests.
Must I use eager loading in this situation or am I misunderstanding something?
Well, you want to, don't you? You are in a use case where you know that you want Person.Collection to be available, so why wouldn't you load it right away.
The trick is not to use the same repository implementation that you use when you want the collection to be loaded lazily (or not at all).
Udi Dahan wrote about this a number of times
http://udidahan.com/2007/03/06/better-domain-driven-design-implementation/
http://udidahan.com/2007/04/23/fetching-strategy-design/
Greg Young would caution you that the use of a fetching strategy is an implementation detail, and not part of the contract
http://codebetter.com/gregyoung/2009/01/16/ddd-the-generic-repository/
I've got a couple of entities in a parent-child relationship: Family (parent) and Updates (child). I want to read a list of families without the corresponding updates. There are only 17 families but about 60,000 updates so I really don't want the updates.
I used EntitiesToDTOs to generate a DTO from the Family entity and to create an assembler for converting the Family entity to the FamilyDTO. The ToDTO method of the assembler looks like:
public static FamilyDTO ToDTO(this Family entity)
{
if (entity == null) return null;
var dto = new FamilyDTO();
dto.FamilyCode = entity.FamilyCode;
dto.FamilyName = entity.FamilyName;
dto.CreateDatetime = entity.CreateDatetime;
dto.Updates_ID = entity.Updates.Select(p => p.ID).ToList();
entity.OnDTO(dto);
return dto;
}
When I run the assembler I find each resulting FamilyDTO has the Updates_ID list populated, although lazy loading is set to true for the EF model (edmx file). Is it possible to configure EntitiesToDTOs to support lazy loading of child elements or will it always use eager loading? I can't see any option in EntitiesToDTOs that could be set to support lazy loading when generating the assembler.
By the way, I'm part of a larger team that uses EntitiesToDTOs to regenerate the assemblers on an almost daily basis, so I'd prefer not to modify the assembler by hand if possible.
I'm Fabian, creator of EntitiesToDTOs.
First of all, thanks a lot for using it.
What you have detected is in fact what I don't want the Assembler to do: I want the developer to map the navigation properties only when needed, using the partial methods OnDTO and OnEntity. Otherwise you run into problems like the one you're seeing.
It seems I never ran into that problem using the tool myself, so THANKS A LOT for catching it.
I'm fixing this right now; it is fixed as of version 3.1.
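With that fix, the generated ToDTO leaves navigation properties alone and a developer opts in through the partial methods. A sketch of what the opt-in could look like (the static class name and partial method signature are assumptions based on the generated assembler pattern):

```csharp
// Hand-written half of the generated static partial assembler class.
// This file survives regeneration, unlike edits to the generated code.
public static partial class FamilyAssembler
{
    // Invoked by the generated ToDTO via entity.OnDTO(dto); implement it
    // only in assemblers whose mapping actually needs the child IDs.
    static partial void OnDTO(this Family entity, FamilyDTO dto)
    {
        dto.Updates_ID = entity.Updates.Select(u => u.ID).ToList();
    }
}
```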
Based on the code that you've posted here, and based on how I think someone would implement such a solution (i.e. to convert records to a DTO format) I think that you would have no choice but to do eager loading.
Some key points:
1) Your Updates_ID field is clearly a List, which means the collection is hydrated right away (ToList() always executes immediately; only a plain IEnumerable gets deferred execution).
2) If you're sticking any sort of navigation property in a DTO it would automatically be loaded eagerly. That's because once you so much as touch a navigation property that was brought back by Entity Framework, the framework automatically loads it from the database, and doesn't care that all you wanted was to populate a DTO with it.
I've been reading about serialization of entities graph in an entity framework context using Linq to entities and the different possible serializers: Binary, XmlSerializer and DataContractSerializer.
As I understand it, the binary and XmlSerializer serializers serialize the entity without its relationships. If relationships were serialized, it would cause a problem because of the cyclic nature of the resulting XML structure (for XmlSerializer).
The DataContractSerializer serializes the graph to its full depth unless lazy loading is disabled.
My question is: I want to serialize part of the graph. For example, if I have an entity A and three related entities B, C and D, only B and D would be serialized with A. I want to use the DataContractSerializer. If I delete the [DataMemberAttribute] of the unwanted navigation properties, would that work?
You can actually disable lazy-loading, serialize/deserialize, and then re-enable lazy-loading.
context.ContextOptions.LazyLoadingEnabled = false;
StackOverflow Source
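Put together, the disable/serialize/re-enable approach could look like this sketch (ObjectContext-style EF, matching the setting above; `context` and `entityA` are assumed to exist):

```csharp
using System.IO;
using System.Runtime.Serialization;

// Turn lazy loading off just for the serialization step, so only the
// navigation properties already loaded (e.g. B and D) are written out.
bool previous = context.ContextOptions.LazyLoadingEnabled;
context.ContextOptions.LazyLoadingEnabled = false;
try
{
    var serializer = new DataContractSerializer(typeof(EntityA));
    using (var stream = new MemoryStream())
    {
        // Unloaded navigation properties (e.g. C) stay null in the output.
        serializer.WriteObject(stream, entityA);
    }
}
finally
{
    // Restore the original behaviour for the rest of the context's lifetime.
    context.ContextOptions.LazyLoadingEnabled = previous;
}
```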
Since attributes are static metadata you can't mess with them at run-time. And if you remove them from your entity they will be permanently removed.
Lazy loading probably isn't what you want, since when you load you bring back the entire graph; the partial-graph scenario usually only comes up on updates or partial inserts.
Your scenario, from what I gathered, is that when you want to update something, you want to update a partial graph only, not the entire graph that you have on the client. One way to achieve this is to manually set the unwanted DataMembers to null, serialize, update, and then repair the null references you previously set; finally, make sure the ChangeTrackers are all back in their previous state.
In our specific development scenario, we achieved this behaviour through a T4 template that generates all the messy code, producing part of a "DataManager" that handles all the Self-Tracking Entities once they exist on the client.
In my experience, the only reliable way to disable lazy loading is to go to the Entity Designer window, right-click the background, select "Properties", and set "Lazy Loading Enabled" to false in the Properties window.
Let's say I have a relational database with two tables, OrderableCategories and Orderables, in a one-to-many relation: one OrderableCategory is attached to multiple Orderables. An OrderableCategory instance in LINQ therefore has the members ID, Name and an EntitySet<Orderable> Orderables. When sent via WCF (wsHttpBinding, if it matters), the EntitySet is translated to a simple Orderable[]. An Orderable instance also contains a member called OrderableCategory, which is simply an instance of this orderable's category.

When sent via WCF, I guess something like this happens: an Orderable instance fills its OrderableCategory instance with the fields from that category, but the category's Orderable[] is also filled with the other orderables in that category. Those orderables have their OrderableCategory filled with the category again, and so on, so that I could theoretically call (for a received orderable o): o.OrderableCategory.Orderables[0].OrderableCategory.Orderables[0]. (...) and so on. I'm only guessing that the server gets into an infinite loop, and when the message size exceeds the quota it disconnects and I see an exception about the service shutting down.

How can I avoid this scenario and still have the benefits of relations in my database? I think my suspicions are correct, because when I disabled one of the properties (made it internal in the LINQ Class Designer), the data is filled "one-way" only, the Orderable no longer has its OrderableCategory member, and it works. But I would like to know if this can be achieved without compromising the property.
This must be handled by marking the entities with the DataContract attribute and setting its IsReference property to true. This instructs the DataContractSerializer to track references instead of serializing nested copies of objects as you described.
Linq-To-Sql designer / SqlMetal should do this for you by setting Serialization Mode to Unidirectional.
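A sketch of what the reference-tracking attributes look like on an entity (property names taken from the question; normally the designer/SqlMetal generates these for you):

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;

// IsReference = true makes the serializer emit each object once and
// refer back to it by id, so the Category -> Orderables -> Category
// cycle terminates instead of recursing forever.
[DataContract(IsReference = true)]
public partial class OrderableCategory
{
    [DataMember] public int ID { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public List<Orderable> Orderables { get; set; }
}

[DataContract(IsReference = true)]
public partial class Orderable
{
    [DataMember] public int ID { get; set; }
    // Serialized as a reference back to the containing category,
    // not as a fresh copy of it.
    [DataMember] public OrderableCategory OrderableCategory { get; set; }
}
```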
If you send entities over WCF, nice features like lazy loading go out the window, of course.
You basically need to decide which of the two options you'd like to use:
if you ask for the entity OrderableCategory, you can return just its basic "atomic" properties, e.g. ID, Name and so on. The benefit is smaller size - you're sending back less data
or alternatively: if you ask for the entity OrderableCategory, you can return its basic properties plus you could load a whole list of Orderables that this category contains, and return both at the same time; benefit: you have that data available right away, but on the downside, you'll have to send a lot more data.
Obviously, you cannot really do infinite eager pre-loading - at some point you have to stop and leave the retrieval of more data to a later WCF service call. Your client would have to ask specifically and explicitly for yet another OrderableCategory if it is interested in that.
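One way this trade-off can surface in the service contract itself is as two explicit operations (the interface and operation names here are hypothetical):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IOrderableService
{
    // Option 1: just the scalar fields (ID, Name, ...) -- small payload;
    // the Orderables list is left empty/null.
    [OperationContract]
    OrderableCategory GetCategory(int id);

    // Option 2: the category plus its Orderables, loaded eagerly on the
    // server before the data context is disposed -- bigger payload, but
    // the client has everything in one round trip.
    [OperationContract]
    OrderableCategory GetCategoryWithOrderables(int id);
}
```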