EF Core: allow attaching an entity multiple times - c#

This is, I guess, a bit more sophisticated. I have a model created successfully with EF Core 7 (though I guess the behavior is the same for all Core versions) using a code-first model.
The culprit is the entity SubJourney, which appears both as a child of TrustFrameworkPolicies and as a child of Candidates. I can create the model, and the database schema it produces looks fine.
I need to add the data in one step, because in real life it's a single XML import.
However, adding entities has limitations. Assume this code to add the data in one step:
var guid = Guid.NewGuid();
var sj1 = new SubJourney {
    DbKey = guid,
    Type = SubJourneyTYPE.Transfer
};
var sj2 = new SubJourney {
    DbKey = guid,
    Type = SubJourneyTYPE.Transfer
};
var trustFrameworkPolicy = new TrustFrameworkPolicy {
    UserJourneys = new List<UserJourney> {
        new UserJourney {
            Id = "Journey1",
            OrchestrationSteps = new List<OrchestrationStepUserJourney> {
                new OrchestrationStepUserJourney {
                    JourneyList = new List<Candidate> {
                        new Candidate {
                            SubJourneyReferenceId = "Test",
                            SubJourney = sj1
                        }
                    }
                }
            }
        }
    },
    SubJourneys = new List<SubJourney> {
        sj2
    }
};
context.Set<TrustFrameworkPolicy>().Add(trustFrameworkPolicy);
context.SaveChanges();
As you can see, the objects sj1 and sj2 are identical. That's how they appear in the XML import. From the perspective of the database, however, they are the same entity (and I actually want to treat them as the same).
To get it working I just need to use the same object, like so:
var sj = new SubJourney
{
Type = SubJourneyTYPE.Transfer
};
If I reference just this one sj in both positions, EF Core treats it as one. However, because the object graph is created by a serializer (and then contains hundreds of entities), this is not feasible.
The Errors
If I enforce the same primary key for both I get this:
System.InvalidOperationException: The instance of entity type 'SubJourney' cannot be tracked because another instance with the key value '{DbKey: d44948dc-d514-4928-abea-3450150c26c4}' is already being tracked. When attaching existing entities, ensure that only one entity instance with a given key value is attached.
I read this as: the primary key must not be the same if the entity appears twice.
If I do not enforce the same primary key I get this:
The INSERT statement conflicted with the FOREIGN KEY constraint "FK_Candidates_SubJourneys_SubJourneyDbKey"
I read this as: the primary key should be the same to fulfill the constraint.
In the debugger the graph shows up correctly, with the object appearing twice.
The Question
How does EF Core recognize an object as the "same"? The two errors are mutually exclusive. What I want is to add the entity twice (forced by the serializer) and still have it treated as one entity (required by my schema).
What I tried
I read https://learn.microsoft.com/en-us/ef/core/change-tracking/identity-resolution
The text suggests using ReferenceLoopHandling, but XmlSerializer doesn't have such an option. I tried serializing the graph to JSON and deserializing it back using the suggested options, but Newtonsoft.Json doesn't see this as a loop, because the objects reference each other only indirectly. In the end, same error.
Setting the primary key doesn't work, as shown. Overriding GetHashCode/Equals doesn't work either.
I have also tried to manipulate the ChangeTracker:
context.ChangeTracker.TrackGraph(trustFrameworkPolicy, node =>
{
    //Console.WriteLine($"***** Tracking {node.Entry.Entity.GetType().Name}");
    if (node.Entry.Entity is SubJourney subJourney)
    {
        Console.WriteLine("*** Recognized Subjourney ***");
        var keyValue = node.Entry.Property(nameof(SubJourney.Id)).CurrentValue;
        // Id is another property I know is set and unique (not the PK)
        var entityType = node.Entry.Metadata;
        var existingEntity = node.Entry.Context.ChangeTracker.Entries()
            .FirstOrDefault(
                e => Equals(e.Metadata, entityType)
                     && Equals(e.Property(nameof(SubJourney.Id)).CurrentValue, keyValue));
        if (existingEntity == null)
        {
            node.Entry.State = EntityState.Added;
        }
        else
        {
            // Just ignore (in debugger I see the state is in fact "Detached")
        }
    }
    else
    {
        node.Entry.State = EntityState.Added;
    }
});
Still the same error (foreign key constraint violation).
Now I'm running a bit out of options. Any pointer on how to deal with this would be appreciated.
As a playground I created a simple demo project (console app) with the code (and all attempts), referencing SqlLocalDb, for use with VS 2022:
https://github.com/joergkrause/StackoverflowEFCoreIssue
Thanks for reading through this post :-)

EF works with tracked references. When you have two untracked instances, whether their values are identical or different, they are treated as two distinct records. When you associate them with new parent records, EF will attempt to insert them as brand new rows, resulting either in duplicate data (if the PKs are overwritten by identity columns) or in exceptions like "An entity with the same Id is already tracked" or unique constraint violations when EF attempts to insert a duplicate row.
When performing operations with imported/transformed data, you need to account for records that might already exist, or at minimum for references the DbContext may already be tracking. This means that given a set of DTOs you cannot simply map them into a set of entities and then Add/Update them in the DbContext; even brand-new top-level entities will often reference existing records, especially in many-to-one relationships.
Take, for example, a list of Orders that each contain a Customer reference. I might have 3 orders: two associated with Customer ID 1, one with Customer ID 2. In the serialized data I might get something like:
orders [
  {
    Number: "10123",
    Customer: {
      Id: 1,
      Name: "Han Solo"
    }
  },
  {
    Number: "10124",
    Customer: {
      Id: 1,
      Name: "Han Solo"
    }
  },
  {
    Number: "10125",
    Customer: {
      Id: 2,
      Name: "Luke Skywalker"
    }
  }
]
The orders might be expected to be new, though anything unique like the order number should be verified before inserting; the Customer, however, might be new or might already exist.
If we use AutoMapper or the like to create Order and Customer entities, we would get 3 distinct Customer references, even though two of the records refer to the same customer. Instead we should be explicit about the entities we actually want to insert vs. any relations we should look up and reuse:
foreach (var orderDto in orderDtos)
{
    // Handle the situation where a duplicate record might already exist.
    if (_context.Orders.Any(x => x.OrderNumber == orderDto.OrderNumber))
        throw new InvalidOperationException("Order already exists");

    var order = Mapper.Map<Order>(orderDto);
    var customer = _context.Customers.SingleOrDefault(x => x.Id == orderDto.Customer.Id);
    if (customer != null)
        order.Customer = customer;
    _context.Orders.Add(order);
}
This assumes that AutoMapper would create a Customer when mapping an Order. If the Customer is expected to exist, I would use Single rather than SingleOrDefault so the call throws if given a Customer ID that doesn't exist. Beyond this, you should also consider how to scope when work is committed to the DB: whether each order insert is a unit of work or the whole batch is. Any existing references need to be resolved and must overwrite any freshly created entities. The DbContext will check its local tracking cache first and then query the DB if necessary; this is the best way to guarantee existing records are referenced, avoiding duplicate data or exceptions.
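Applied to the SubJourney graph from the question, that resolution step could look something like the following rough sketch. It assumes the property names shown in the question and that SubJourney.Id is the unique business key mentioned there; it only illustrates collapsing duplicate instances into a single reference before calling Add:
var canonicalSubJourneys = new Dictionary<string, SubJourney>();

SubJourney Resolve(SubJourney candidate)
{
    if (candidate == null) return null;
    // Reuse the first instance seen for each business key so EF tracks only one.
    if (canonicalSubJourneys.TryGetValue(candidate.Id, out var existing))
        return existing;
    canonicalSubJourneys[candidate.Id] = candidate;
    return candidate;
}

// Replace duplicate references throughout the graph with the canonical instance.
trustFrameworkPolicy.SubJourneys = trustFrameworkPolicy.SubJourneys.Select(Resolve).ToList();
foreach (var candidate in trustFrameworkPolicy.UserJourneys
             .SelectMany(j => j.OrchestrationSteps)
             .SelectMany(s => s.JourneyList))
{
    candidate.SubJourney = Resolve(candidate.SubJourney);
}

context.Set<TrustFrameworkPolicy>().Add(trustFrameworkPolicy);
context.SaveChanges();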

As stated in the post, I can't control the graph because of the serializer used. However, the JSON serializer is more powerful, and the links and answers were helpful for further research. I found that a ReferenceResolver does the trick. Applied to the code in the question, I ended up with this:
internal class SubJourneyResolver : IReferenceResolver
{
    private readonly IDictionary<string, SubJourney> _sjCache = new Dictionary<string, SubJourney>();

    public void AddReference(object context, string reference, object value)
    {
        if (value is SubJourney sj)
        {
            var id = reference;
            if (!_sjCache.ContainsKey(id))
            {
                _sjCache.Add(id, sj);
            }
        }
    }

    public string GetReference(object context, object value)
    {
        if (value is SubJourney sj)
        {
            _sjCache[sj.Id] = sj;
            return sj.Id;
        }
        return null;
    }

    public bool IsReferenced(object context, object value)
    {
        if (value is SubJourney sj)
        {
            return _sjCache.ContainsKey(sj.Id);
        }
        return false;
    }

    public object ResolveReference(object context, string reference)
    {
        var id = reference;
        _sjCache.TryGetValue(id, out var sj);
        return sj;
    }
}
In the JSON this adds $id properties to objects as they are read and replaces the duplicated object with a $ref property. The graph now looks like this:
{
  "$id": null,
  "UserJourneys": {
    "$id": null,
    "$values": [
      {
        "$id": null,
        "Policy": null,
        "OrchestrationSteps": {
          "$id": null,
          "$values": [
            {
              "$id": null,
              "Journey": null,
              "JourneyList": {
                "$id": null,
                "$values": [
                  {
                    "$id": null,
                    "SubJourney": {
                      "$id": "k1",
                      "Policy": null,
                      "Id": "k1",
                      "Type": 0,
                      "DbKey": "00000000-0000-0000-0000-000000000000"
                    },
                    "SubJourneyReferenceId": "Test",
                    "DbKey": "00000000-0000-0000-0000-000000000000"
                  }
                ]
              },
              "Type": 0,
              "DbKey": "00000000-0000-0000-0000-000000000000"
            }
          ]
        },
        "Id": "Journey1",
        "DbKey": "00000000-0000-0000-0000-000000000000"
      }
    ]
  },
  "SubJourneys": {
    "$id": null,
    "$values": [
      {
        "$ref": "k1"
      }
    ]
  },
  "DbKey": "00000000-0000-0000-0000-000000000000"
}
I used another unique property (Id) that is independent of the primary key. Now I deserialize this back into a .NET object graph. Full code here (Newtonsoft.Json needs to be referenced):
var serialized = JsonConvert.SerializeObject(
    trustFrameworkPolicy,
    new JsonSerializerSettings
    {
        ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
        PreserveReferencesHandling = PreserveReferencesHandling.All,
        TypeNameHandling = TypeNameHandling.Auto,
        ReferenceResolver = new SubJourneyResolver() // solution!
    });
var deserialized = JsonConvert.DeserializeObject<TrustFrameworkPolicy>(serialized, new JsonSerializerSettings
{
    ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
    PreserveReferencesHandling = PreserveReferencesHandling.All,
    TypeNameHandling = TypeNameHandling.Auto
});
The deserialized object is now added to the EF context and saved properly.
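For completeness, the final step is the same Add/SaveChanges pair from the question, now fed the deserialized graph that shares a single SubJourney instance:
context.Set<TrustFrameworkPolicy>().Add(deserialized);
context.SaveChanges();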
I still dislike the idea of an extra serialization round trip for the sole purpose of object handling; for big graphs it consumes a lot of memory. However, I do see the advantage of the fine-grained control the JSON serializer provides.
Maybe in the future EF Core will provide a similar way to plug in a "ReferenceResolver" natively to smooth out a complex graph.

Related

Getting error Newtonsoft.Json.Linq.JProperty cannot have multiple values when adding JToken

I have the following structure of additional information where I need to update the value of one of the tokens in the structure. The data is an array of JTokens with a parent called 'additionalFields' as follows:
{{"additionalFields":
[
{ "name": "NAME1", "value": "VALUE1" },
{ "name": "NAME2", "value": "VALUE2" },
{ "name": "NAME3", "value": "VALUE3" },
{ "name": "NAME4", "value": "VALUE4" }
]}
I'm trying to update the value of one of the tokens, e.g. to change VALUE1 to VALUE10.
Once I have located the token I need to update, my code removes it as follows.
additionalField.Remove();
I then create a new token to replace the one I have removed (containing the new value) using the following functions.
public static JToken CreateNewToken(string name, string value)
{
    var stringToken = CreateNewStringToken(name, value);
    var token = JToken.Parse(stringToken);
    return (JToken)token;
}

private static string CreateNewStringToken(string name, string value)
{
    return $"{{\"name\":\"{name}\",\"value\":\"{value}\"}}";
}
I then add the new token as follows.
additionalFields.AddAfterSelf(updatedToken);
Putting it all together we have the following
foreach (var additionalField in additionalFields)
{
    // is this the key we are looking for?
    var keyToken = additionalField.First;
    if (keyToken?.First == null) continue;
    if (string.Equals(keyToken.First.ToString(), "newname", StringComparison.CurrentCultureIgnoreCase))
    {
        // remove the current token
        additionalField.Remove();
        // add the updated token
        var updatedToken = CreateNewToken("newname", "newvalue");
        additionalFields.AddAfterSelf(updatedToken); // <-- error occurs here!!
    }
}
However, after adding the token I get the following error:
Newtonsoft.Json.Linq.JProperty cannot have multiple values
I can see in the debugger that the token has been removed (the token count is reduced by 1), so I cannot understand why I'm getting an error when adding the replacement token.
I was able to reproduce your problem here: https://dotnetfiddle.net/JIVCVB
What is going wrong
additionalFields refers to the JArray of JObjects containing name and value JProperties. You are looping through this JArray to try to find the first JObject having a name property with a certain value, and when you find it you attempt to replace the JObject with a whole new JObject. You successfully remove the old JObject from the JArray, but when you are doing AddAfterSelf to insert the new JObject, you are referencing additionalFields (plural) not additionalField (singular). Recall that additionalFields is the JArray. So you are saying that you want to add the new JObject after the array. The array's parent is a JProperty called additionalFields. A JProperty can only have one value, so AddAfterSelf fails with the error you see.
How to fix your code
I think what you intended to do was additionalField.AddAfterSelf(updatedToken). However, this, too, will fail, for a different reason: you already removed the additionalField from the JArray at that point, so it no longer has a parent context. You would need to AddAfterSelf before you remove the item you are trying to insert after. If you fix that, you still have another problem: your loop doesn't break out after you've done the replacement, so then you will get an error about modifying the collection while looping over it.
Here is the relevant section of code with the corrections:
if (string.Equals(keyToken.First.ToString(), "NAME1", StringComparison.CurrentCultureIgnoreCase))
{
    // add the updated token
    var updatedToken = CreateNewToken("newname", "newvalue");
    additionalField.AddAfterSelf(updatedToken);
    // remove the current token
    additionalField.Remove();
    // we found what we were looking for so no need to continue looping
    break;
}
Fiddle: https://dotnetfiddle.net/KcFsZc
A simpler approach
You seem to be jumping through a lot of hoops to accomplish this task. Instead of looping, you can use FirstOrDefault to find the object you are looking for in the array. Once you've found it, you don't need to replace the whole object; you can just update the property values directly.
Here's how:
var rootObject = JToken.Parse(json);

// Get a reference to the array of objects as before
var additionalFields = rootObject["additionalFields"];

// Find the object we need to change in the array
var additionalField = additionalFields.FirstOrDefault(f =>
    string.Equals((string)f["name"], "NAME1", StringComparison.CurrentCultureIgnoreCase));

// if the object is found, update its properties
if (additionalField != null)
{
    additionalField["name"] = "newname";
    additionalField["value"] = "newvalue";
}
Working demo: https://dotnetfiddle.net/ZAKRmi

Assigning entity instance instead of entity id creates new record

I have these two tables:
public class FiscalYear
{
... other fields
public int FiscalYears_Id { get; set; }
}
public class SkipHeader
{
... other fields
public int FiscalYears_Id { get; set; }
public virtual FiscalYear FiscalYear { get; set; }
}
Attempting to create a new SkipHeader like so:
var skipHeader = new SkipHeader()
{
... other fields get assigned to
FiscalYear = Session.FiscalYear,
}
Will cause the database to create a new FiscalYear record instead of using Session.FiscalYear, which is simply a static property that gets assigned at program start. However, if I assign the FiscalYears_Id instead:
var skipHeader = new SkipHeader()
{
... other fields get assigned to
FiscalYears_Id = Session.FiscalYear.FiscalYears_Id,
}
The program uses the existing record as expected.
This bug eluded me and my colleague for months! Now that I found a solution, I would like to know WHY this is the case?
This occurs because the DbContext doesn't know about your FiscalYear object instance, such as whether it represents a new record or an existing one.
Take the following example:
var fiscalYear = new FiscalYear { Id = 4, Name = "2019/20" };
var skipHeader = new SkipHeader { FiscalYear = fiscalYear };
context.SkipHeaders.Add(skipHeader);
context.SaveChanges();
fiscalYear in this instance is an object instance that has been given an ID and Name. When we associate it to a new SkipHeader and add the SkipHeader to the DbContext, EF will see this fiscalYear. Since it isn't an object tracked by the context, it treats it as a new entity like the SkipHeader.
How your entities are configured to handle the PK determines what happens.
If your PK (Id) is set up as an Identity column (DB will populate) then the FiscalYear will be inserted and assigned the next available Id value. After the SaveChanges() call, fiscalYear.Id would be "6" or "22" or whatever the next new ID assigned to it would be. (Not "4")
If your PK is not an Identity column (App will populate) and a FiscalYear row already exists in the DB for ID 4, then EF will throw a duplicate key Exception on SaveChanges().
Where people get confused is that they assume that because the FiscalYear was at one point loaded from a DbContext (say, during a web request), it is still somehow a tracked entity when passed into another method outside the scope of that DbContext (during another update request). It's not. When a web request accepts a FiscalYear as a parameter from the client, it is deserializing a FiscalYear. As far as EF is concerned, that is no different from the new FiscalYear { } example above. The DbContext is not aware of that entity.
Take the following example:
FiscalYear fiscalYear = null;
using (var context = new AppDbContext())
{
    fiscalYear = context.FiscalYears.Single(x => x.Id == 4);
}

using (var context = new AppDbContext())
{
    var skipHeader = new SkipHeader { FiscalYear = fiscalYear };
    context.SkipHeaders.Add(skipHeader);
    context.SaveChanges();
}
This provides a basic outline of a FiscalYear that was loaded by one instance of a DbContext but then referenced by another instance of a DbContext. When SaveChanges is called, you get the behaviour you are seeing now. This is essentially what happens in web requests: when an entity is returned, the entity definition is merely a contract and the entity is serialized to send to the client. When it comes back in another request, a new untracked object is deserialized.
As a general rule, entities should not be passed outside the scope of the DbContext they were read from. EF does support this via detaching and re-attaching entities, but it is honestly more trouble than it is typically worth: you cannot 100% rely on just attaching an entity with DbContext.Attach(), because there are cases where another instance with the same key is already being tracked and the Attach will fail, at which point you'd need to replace references with the already-tracked entity (messy conditional logic to catch the possible scenarios). References are everything when dealing with EF: two different object references with the same key and values are treated as separate, different objects. Rather than passing references around, it's usually a lot simpler, and better, to pass just the FK. This also has the benefit of a smaller payload for web requests.
One option you've found out is to update via the FK:
var skipHeader = new SkipHeader()
{
... other fields get assigned to
FiscalYears_Id = Session.FiscalYear.FiscalYears_Id,
}
This works, however when you have entities that are exposing both FK (FiscalYears_Id) and navigation property (FiscalYear) you can potentially find mismatch scenarios when updating records. This is something to be careful with as an application evolves.
For instance, take an example where you are editing an existing SkipHeader with a FiscalYears_Id of "4". This will have an associated FiscalYear reference available with a PK of "4".
Take the following code:
var skipHeader = context.SkipHeaders.Include(x => x.FiscalYear).Single(x => x.Id == skipHeaderId);
skipHeader.FiscalYears_Id = newFiscalYearId; // update the FK from "4" to "6"
var fiscalYearId = skipHeader.FiscalYear.Id; // still returns "4", the old reference
context.SaveChanges();
We set the FK value on the skip header, however that does not update the FiscalYear reference until after we call SaveChanges. This can be an important detail when dealing with FKs alongside navigation properties. Normally we wouldn't bother going back to the navigation property to get the ID again, but any code that expects the FiscalYear reference to have been updated will behave differently depending on whether it runs before or after SaveChanges. If it runs before, all FiscalYear details will still be those of the old fiscal year even though we changed the FK.
This can also lead to odd lazy-loading errors, such as:
var skipHeader = context.SkipHeaders.Single(x => x.Id == skipHeaderId);
skipHeader.FiscalYears_Id = newFiscalYearId; // update FK from "4" to "6"
var fiscalYearId = skipHeader.FiscalYear.Id; // NullReferenceException!
context.SaveChanges();
Normally, provided you have lazy loading enabled, loading a SkipHeader without eager loading the FiscalYear (.Include(x => x.FiscalYear)) and then querying a property of the FiscalYear would lazy-load that relation. However, if you change the SkipHeader's FiscalYears_Id FK and then try to access a property of the FiscalYear before calling SaveChanges(), you get a NullReferenceException on the FiscalYear: EF will NOT lazy load either the old or the new FiscalYear entity. Bugs like that commonly creep in as applications grow and code starts calling common functions that assume they are dealing with complete entities.
The alternative to setting updated values for known rows by FK is to load the entity to associate, and associate it by reference:
using (var context = new AppDbContext())
{
    var fiscalYear = context.FiscalYears.Single(x => x.Id == fiscalYearId);
    var skipHeader = new SkipHeader()
    {
        ... other fields get assigned to
        FiscalYear = fiscalYear
    };
    context.SkipHeaders.Add(skipHeader);
    context.SaveChanges();
}
This example just uses a locally scoped DbContext. If your method has an injected context then use that instead. The context will return any cached, known instance of the Fiscal Year or retrieve it from the DB. If the FiscalYear ID is invalid then that operation will throw an exception specific to the Fiscal Year not being found due to the Single() call rather than a more vague FK violation on SaveChanges(). (Not an issue when there is only one FK relationship, but in entities that have dozens of relationships...)
The advantage of this approach is that the FiscalYear will be in the scope of the DbContext, so any methods/code using it will have a valid reference. The entities can define the navigation properties without exposing the extra FK values, using .Map(x => x.MapKey()) [EF6] or shadow properties [EF Core] instead, to avoid two sources of truth for FK values.
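For the EF Core shadow-property variant, a minimal sketch could look like this (assuming the SkipHeader/FiscalYear model from the question, with the FiscalYears_Id CLR property removed from SkipHeader so the FK lives only in the model):
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<SkipHeader>()
        .HasOne(s => s.FiscalYear)
        .WithMany()                       // assumes no collection navigation on FiscalYear
        .HasForeignKey("FiscalYears_Id")  // shadow FK: configured by name, no CLR property
        .IsRequired();
}
The FK value can still be read or set through the change tracker when needed, e.g. context.Entry(skipHeader).Property("FiscalYears_Id").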
This hopefully will provide some insight into what EF is doing and why it resulted in the behaviour you've seen and/or any errors or buggy behaviour you might have also come across.
Assuming you have a pretty standard setup with the DbContext being a scoped (per request) dependency, the reason is that the new instance of your DbContext does not track the Session.FiscalYear instance, so it creates a new one. Another way to solve this is using DbContext.Attach:
context.Attach(Session.FiscalYear);
var skipHeader = new SkipHeader()
{
    ... other fields get assigned to
    FiscalYears_Id = Session.FiscalYear.FiscalYears_Id,
};
// save skipHeader
More about change tracker in EF.

How to determine if a node exists after dynamic parsing of JSON

I have the following piece of code which reads an incoming message from an event hub and stores it in blob storage.
dynamic msg = JObject.Parse(myEventHubMessage);
WriteToBlob(msg, enqueuedTimeUtc, myEventHubMessage, binder, log);
Here are some examples of the JSON I receive:
{
  "deviceId": "ATT",
  "product": "testprod",
  "data": {
    "001": 1,
    "002": 3.1,
    "003": {
      "lat": 0,
      "lng": 0
    },
    "000": -80
  },
  "ts": "2020-01-27T19:29:34Z"
}
{
  "deviceId": "ATT",
  "product": "testprod",
  "data_in": {
    "ts": "2020-01-27T19:29:34Z",
    "001": 1,
    "002": 3.1,
    "003": {
      "lat": 0,
      "lng": 0
    },
    "000": -80
  }
}
Now, instead of a 'data' node in the JSON, sometimes the device sends the node with the name 'data_in'. And the ts field can sometimes be inside or outside the data or data_in node, and may be named timestamp. How can I efficiently determine whether a node exists or not?
I was thinking of doing something like this:
if (msg.data_in.ts != null)
{
}
And I would do the same for all conditions. Is there a better way to achieve this? Also, the if condition fails if I check msg.data_in.ts when the data_in node doesn't exist.
Your problem is that you have upcast your JObject to dynamic. This makes things difficult for the following reasons:
You lose all compile-time checking for code correctness.
You lose convenient access to the methods and properties of JObject itself (as opposed to the dynamically provided JSON properties).
Since JObject implements interfaces such as IDictionary<string, JToken>, leaving it as a typed object makes the job of checking for, adding, and removing select JSON properties easier.
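For instance, a null-conditional lookup on the typed JObject replaces the dynamic member access that throws when a node is missing; a small sketch using the message shape from the question:
var msg = JObject.Parse(myEventHubMessage);

// Prefer "data", fall back to "data_in"; either (or both) may be absent.
var data = msg["data"] ?? msg["data_in"];

// The timestamp may sit on the root or inside the data node, as "ts" or "timestamp".
var ts = msg["ts"] ?? msg["timestamp"] ?? data?["ts"] ?? data?["timestamp"];
if (ts != null)
{
    // e.g. var when = (DateTime)ts;
}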
To see why working with a typed JObject can be easier, first introduce the following extension methods for convenience:
public static class JsonExtensions
{
    public static JProperty Rename(this JProperty old, string newName)
    {
        if (old == null)
            throw new ArgumentNullException();
        var value = old.Value;
        old.Value = null; // Prevent cloning of the value by nulling out the old property's value.
        var @new = new JProperty(newName, value);
        old.Replace(@new); // By using Replace we preserve the order of properties in the JObject.
        return @new;
    }

    public static JProperty MoveTo(this JToken token, JObject newParent)
    {
        if (newParent == null || token == null)
            throw new ArgumentNullException();
        var toMove = (token as JProperty ?? token.Parent as JProperty);
        if (toMove == null)
            throw new ArgumentException("Incoming token does not belong to an object.");
        if (toMove.Parent == newParent)
            return toMove;
        toMove.Remove();
        newParent.Add(toMove);
        return toMove;
    }
}
And now you can normalize your messages as follows:
var msg = JObject.Parse(myEventHubMessage);
msg.Property("data_in")?.Rename("data"); // Normalize the name "data_in" to be "data".
msg["data"]?["ts"]?.MoveTo(msg); // Normalize the position of the "ts" property, it should belong to the root object
msg["data"]?["timestamp"]?.MoveTo(msg); // Normalize the position of the "timestamp" property, it should belong to the root object
msg.Property("timestamp")?.Rename("ts"); // And normalize the name of the "timestamp" property, it should be "ts".
Demo fiddle here.

C# Entity Framework 6 with ASP.Net web API returning empty array in JSON response

This is the code I am using to handle a GET request in my "Works Orders" controller:
// GET: api/WorksOrders/5
/// <summary>
/// Fetches the Works Order with corresponding ID (pkOrderItemID)
/// </summary>
[Authorize]
public OrderItem Get(int id)
{
    using (var entities = new customappsEntities())
    {
        return entities.OrderItems.FirstOrDefault(e => e.pkOrderItemID == id);
    }
}
When this is run I get this error message in the JSON response:
"Error getting value from 'OrderItemDepartments' on 'System.Data.Entity.DynamicProxies.OrderItem_501562E50E13B847D4A87F7F2DEC7C8CEDAF127355CB4FC30E12653275CE6412'.",
I can fix this by adding
entities.Configuration.ProxyCreationEnabled = false;
This returns the full record, but with a set of empty arrays at the bottom; for example, the JSON-formatted response looks like:
{
  "$id": "1",
  "pkOrderItemID": 271,
  "StartedOn": "2015-01-01T00:00:00",
  "CompletedOn": "2014-10-15T00:00:00",
  "Costs": [],
  "Dispatches": []
}
The arrays appear to be foreign key relationships in the database, and I believe they come back empty because it gets stuck in a self-referencing loop (I was able to get that error message a few times but haven't been able to recreate it since). I've tried adding
GlobalConfiguration.Configuration.Formatters.JsonFormatter.SerializerSettings.ReferenceLoopHandling = ReferenceLoopHandling.Ignore;
From: Self referencing loop detected - Getting back data from WebApi to the browser
And many other solutions from the same or similar threads, but it didn't solve the issue. All of my data classes are auto-generated by Entity Framework so I can't modify them; maybe this is the wrong way to set up a web API?
Any help to figure out why the arrays return empty will be appreciated.
Thank you.
Update
Here's a screenshot of the DataModel Diagram to show the fk relationship
Do not return anything other than POCOs or anonymous objects via JSON serializing. Entity Framework objects have a huge overhead you do not want to see and often have circular relationships that the serializer cannot cope with (and you do not want, even if you configure it to allow them).
Create a POCO or return a select:
[Authorize]
public MyOrderItem Get(int id)
{
    using (var entities = new customappsEntities())
    {
        var item = entities.OrderItems.FirstOrDefault(e => e.pkOrderItemID == id);
        return new MyOrderItem()
        {
            StartedOn = item.StartedOn,
            ...
        };
    }
}
The key to correctly using JSON is to return only what you need across the wire.
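Another option (not shown above) is to project straight into the POCO inside the query, so EF only materializes the fields you actually return. A rough sketch, assuming MyOrderItem exposes these members:
using (var entities = new customappsEntities())
{
    return entities.OrderItems
        .Where(e => e.pkOrderItemID == id)
        .Select(e => new MyOrderItem()
        {
            pkOrderItemID = e.pkOrderItemID,   // hypothetical members mirroring the JSON shown above
            StartedOn = e.StartedOn,
            CompletedOn = e.CompletedOn
        })
        .FirstOrDefault();
}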
Update
As the item contains child items, you need to ensure you have POCOs for those too:
e.g.
var myOrderItem = new MyOrderItem()
{
    StartedOn = item.StartedOn,
    Costs = new List<MyCost>(), // <=== This can also go in the default constructor
    ...
};
foreach (var cost in item.Costs)
{
    myOrderItem.Costs.Add(new MyCost()
    {
        Value = cost.Value
        ...
    });
}
return myOrderItem;
Another update
That is very odd. It certainly should give you your records. You can also try an .Include() with .Select() and FirstOrDefault() instead of just FirstOrDefault() to guarantee the related records are pre-loaded.
e.g.:
entities.OrderItems.Include(x=>x.Costs).Select(x=>x).FirstOrDefault();
Add the Newtonsoft.Json library and return the result like this:
JsonSerializerSettings jss = new JsonSerializerSettings { ReferenceLoopHandling = ReferenceLoopHandling.Ignore };
return JsonConvert.SerializeObject(result, Formatting.Indented, jss);

How do I replace an entity in the middle of the graph?

I have a disconnected entity framework graph that is sent to the client and returned with updates. One of the updates may replace an entity in the middle of the graph. When I try to replace it, I get an InvalidOperationException with the message:
The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted.
For simplicity, the model is a Root → Branch → Twig → Leaf hierarchy with required (non-nullable) foreign keys between the levels.
The code that's causing the exception looks like this:
// Create a tree of objects
Root root = new Root() { Id = 1 };
Branch origBranch = new Branch() { Id = 3 };
Twig onlyTwig = new Twig() { Id = 5 };
Leaf onlyLeaf = new Leaf() { Id = 7 };
onlyTwig.Leaves.Add(onlyLeaf);
origBranch.Twigs.Add(onlyTwig);
root.Branches.Add(origBranch);

// Store the structure in the database using the container
using (Pot container1 = new Pot())
{
    container1.Roots.Add(root);
    container1.SaveChanges();
}

// Create a new Branch to replace the original
Branch newBranch = new Branch() { Id = 11 };
// Add the Twig from the original object
newBranch.Twigs.Add(onlyTwig);

using (Pot container2 = new Pot())
{
    container2.Roots.Attach(root);
    // Replace the branch
    root.Branches.Remove(origBranch);
    root.Branches.Add(newBranch);
    container2.SaveChanges(); // THROWS EXCEPTION !!
}
I thought that by removing the old entity and then adding a new one I would satisfy the "a new relationship must be defined" criterion, but it fails. In our case, making the column nullable in the database leads to other issues, the first of which is risking database integrity. So how is this normally handled?
I resolved part of the issue by essentially tricking Entity Framework into believing it was making an update in place, rather than replacing the item with a new one. The key to the solution is setting the same primary key value and then setting the entity state to Modified. The replacement code is posted below.
// Create a new Branch to replace the original
// Make sure to use the id from the original saved object
Branch newBranch = new Branch() { Id = origBranch.Id };
// Add the Twig from the original object
newBranch.Twigs.Add(onlyTwig);

// Replace the original branch with the new branch
root.Branches.Remove(origBranch);
root.Branches.Add(newBranch);

using (Pot container2 = new Pot())
{
    // Attach the graph to the container. The default state is unchanged.
    container2.Roots.Attach(root);
    // Trick entity framework into thinking the new branch with the
    // same primary key is the original item with modifications
    container2.Entry(newBranch).State = EntityState.Modified;
    container2.SaveChanges();
}
However, there are occasions to replace the entity with an entirely new entity, so I still need an answer for how to remove then add.
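One possible direction for the true remove-and-add case, as a sketch only and not verified against this model: instead of merely removing the old Branch from Root.Branches (which makes EF try to null the non-nullable FK), explicitly mark the orphaned Branch as deleted so EF issues a DELETE for it:
using (Pot container2 = new Pot())
{
    container2.Roots.Attach(root);

    // Move the twig to the replacement branch first; depending on cascade settings
    // the delete below could otherwise take the twig (and its leaves) with it.
    Branch newBranch = new Branch() { Id = 11 };
    newBranch.Twigs.Add(onlyTwig);
    root.Branches.Add(newBranch);

    // Remove the old branch from the collection AND mark it Deleted so EF
    // deletes the row instead of trying to set its FK to null.
    root.Branches.Remove(origBranch);
    container2.Entry(origBranch).State = EntityState.Deleted;

    container2.SaveChanges();
}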
