I'm trying my hand at LINQ for the first time and just wanted to post a small question to make sure this is the best way to go about it. I want a list of every value in a table. So far this is what I have, and it works, but is this the best way to collect everything in a LINQ-friendly way?
public static List<Table1> GetAllDatainTable()
{
List<Table1> Alldata = new List<Table1>();
using (var context = new EFContext())
{
Alldata = context.Table1s.ToList();
}
return Alldata;
}
For simple entities, that is, an entity that has no references to other entities (navigation properties), your approach is essentially fine. It can be condensed down to:
public static List<Table1> GetAllDatainTable()
{
using (var context = new EFContext())
{
return context.Table1s.ToList();
}
}
However, in most real-world scenarios you are going to want to leverage things like navigation properties for the relationships between entities. For example, an Order references a Customer with Address details, and contains OrderLines which each reference a Product, etc. Returning entities this way becomes problematic because any code that accepts the entities returned by a method like this should be getting either complete or completable entities.
For instance, if I have a method that returns an order, I might have various code that uses that order information: some of that code might try to get info about the order's customer, other code might be interested in the products. EF supports lazy loading so that related data can be pulled if, and when, needed; however, that only works within the lifespan of the DbContext. A method like this disposes the DbContext, so lazy loading is off the cards.
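As a minimal sketch of that pitfall (the entity and property names here are illustrative):

Order order;
using (var context = new EFContext())
{
    order = context.Orders.Single(o => o.OrderId == orderId);
}
// The context is disposed, so this lazy-load attempt cannot run a query:
// EF6 proxies throw an ObjectDisposedException, EF Core proxies an
// InvalidOperationException, and a plain (non-proxy) entity leaves
// Customer null, so .Name throws a NullReferenceException.
var customerName = order.Customer.Name;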
One option is to eager load everything:
using (var context = new EFContext())
{
var order = context.Orders
.Include(o => o.Customer)
.ThenInclude(c => c.Addresses)
.Include(o => o.OrderLines)
.ThenInclude(ol => ol.Product)
.Single(o => o.OrderId == orderId);
return order;
}
However, there are two drawbacks to this approach. Firstly, it means loading considerably more data every time we fetch an order. The consuming code may not care about the customer or order lines, but we've loaded it all anyway. Secondly, as systems evolve, new relationships may be introduced, and older code like this won't necessarily be updated to Include them, leading to potential NullReferenceExceptions, bugs, or performance issues as more and more related data gets included. The view or whatever initially consumes this entity may not expect to reference these new relationships, but once you start passing entities to views, from views, and to other methods, any code accepting an entity should be able to rely on the fact that the entity is complete or can be made complete. It can be a nightmare to have an Order loaded to various levels of "completeness" and code that has to handle whether data is loaded or not. As a general recommendation, I advise not passing entities around outside the scope of the DbContext that loaded them.
The better solution is to leverage projection to populate view models, suited to your code's consumption, directly from the entities. WPF often utilizes the MVVM pattern, so this means using EF's Select method or AutoMapper's ProjectTo method to populate view models based on each consumer's needs. When your code works with view models containing just the data the views need, loading and populating entities only as needed, you can produce far more efficient (fast) and resilient queries to get data out.
If I have a view that lists orders with a created date, customer name, and a list of products with quantities, we define a view model for the view:
[Serializable]
public class OrderSummary
{
public int OrderId { get; set; }
public string OrderNumber { get; set; }
public DateTime CreatedAt { get; set; }
public string CustomerName { get; set; }
public ICollection<OrderLineSummary> OrderLines { get; set; } = new List<OrderLineSummary>();
}
[Serializable]
public class OrderLineSummary
{
public int OrderLineId { get; set; }
public int ProductId { get; set; }
public string ProductName { get; set; }
public int Quantity { get; set; }
}
then project the view models in the Linq query:
using (var context = new EFContext())
{
var orders = context.Orders
// add filters & such /w Where() / OrderBy() etc.
.Select(o => new OrderSummary
{
OrderId = o.OrderId,
OrderNumber = o.OrderNumber,
CreatedAt = o.CreatedAt,
CustomerName = o.Customer.Name,
OrderLines = o.OrderLines.Select( ol => new OrderLineSummary
{
OrderLineId = ol.OrderLineId,
ProductId = ol.Product.ProductId,
ProductName = ol.Product.Name,
Quantity = ol.Quantity
}).ToList()
}).ToList();
return orders;
}
Note that we don't need to worry about eager loading related entities, and if, later down the road, an order or customer or such gains new relationships, the above query will continue to work, needing updates only if the new relationship information is useful for the view(s) it serves. It composes a faster, less memory-intensive query, fetching fewer fields to be passed over the wire from the database to the application, and indexes can be employed to tune high-use queries even further.
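If you use AutoMapper, the same projection can be defined once in the mapping configuration and reused via ProjectTo. A hedged sketch (the mapperConfig variable and the maps shown are assumptions, not code from the question):

// Requires the AutoMapper package; ProjectTo lives in AutoMapper.QueryableExtensions.
var mapperConfig = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Order, OrderSummary>();         // CustomerName flattens from Customer.Name
    cfg.CreateMap<OrderLine, OrderLineSummary>(); // ProductName flattens from Product.Name
});

using (var context = new EFContext())
{
    return context.Orders
        .ProjectTo<OrderSummary>(mapperConfig)
        .ToList();
}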
Update:
Additional performance tips: generally avoid methods like GetAll*() as a lowest-common-denominator data source. Far too many of the performance issues I come across with methods like this take the form:
var ordersToShip = GetAllOrders()
.Where(o => o.OrderStatus == OrderStatus.Pending)
.ToList();
foreach (var order in ordersToShip)
{
// do something that only needs order.OrderId.
}
Where GetAllOrders() returns List<Order> or IEnumerable<Order>. Sometimes there is code like GetAllOrders().Count() > 0 or such.
Code like this is extremely inefficient because GetAllOrders() fetches all records from the database, only to load them into memory in the application where they are later filtered down or counted, etc.
If you're following a path to abstract away the EF DbContext and entities into a service / repository through methods then you should ensure that the service exposes methods to produce efficient queries, or forgo the abstraction and leverage the DbContext directly where data is needed.
var orderIdsToShip = context.Orders
.Where(o => o.OrderStatus == OrderStatus.Pending)
.Select(o => o.OrderId)
.ToList();
var customerOrderCount = context.Customers
.Where(c => c.CustomerId == customerId)
.Select(c => c.Orders.Count())
.Single();
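And a check like GetAllOrders().Count() > 0 collapses to a server-side Any(), which translates to a simple EXISTS query:

bool hasPendingOrders = context.Orders
    .Any(o => o.OrderStatus == OrderStatus.Pending);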
EF is extremely powerful and, when selected to service your application, should be embraced as part of the application to give the maximum benefit. I recommend not coding to abstract it away purely for the sake of abstraction, unless you are looking to employ unit testing to isolate the dependency on data with mocks. In that case, I recommend leveraging a unit-of-work wrapper for the DbContext and the Repository pattern exposing IQueryable to make isolating business logic simple.
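As a minimal sketch of that last idea (the names are illustrative, and the DbContext lifetime is assumed to be managed by a unit of work):

public interface IOrderRepository
{
    // Exposing IQueryable keeps composition (Where/Select/Count) server-side,
    // while tests can hand back an in-memory queryable instead.
    IQueryable<Order> GetOrders();
}

public class OrderRepository : IOrderRepository
{
    private readonly EFContext _context;

    public OrderRepository(EFContext context)
    {
        _context = context;
    }

    public IQueryable<Order> GetOrders()
    {
        return _context.Orders;
    }
}

Consumers then write repository.GetOrders().Where(o => o.OrderStatus == OrderStatus.Pending).Select(o => o.OrderId).ToList() and the filtering and projection still execute in SQL.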
public class Student
{
public int StudentId;
public string StudentName;
public int CourseId;
public virtual Course Courses { get; set; }
}
public class Course
{
public int CourseId;
public string CourseName;
public string Description;
public ICollection<Student> Students {get;set;}
public ICollection<Lecture> Lectures { get; set; }
}
public class Lecture
{
public int LectureId;
public string LectureName;
public int CourseId;
public virtual Course Courses { get; set; }
}
What is the keyword virtual used for here?
I was told a virtual is for lazy loading but I don't understand why.
Because when we do
_context.Lecture.FirstOrDefault()
the result returns the first Lecture, and it does not include the related Course.
To get the Lecture with the Course, we have to use:
_context.Lecture.Include("Courses").FirstOrDefault()
So even without using the virtual keyword, it already looks like lazy loading: the related data is only loaded when we ask for it.
Then why do we need the keyword?
By declaring it virtual you allow EF to override the property in a dynamically generated proxy class to enable lazy loading. Using Include() tells the EF query to eager-load the related data.
In EF6 and prior, lazy loading was enabled by default. With EF Core it is disabled by default (or not supported at all in the earliest versions).
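For reference, EF Core re-enables it through proxies; a sketch, assuming the Microsoft.EntityFrameworkCore.Proxies package and an illustrative SchoolContext:

public class SchoolContext : DbContext
{
    public DbSet<Lecture> Lectures { get; set; }
    public DbSet<Course> Courses { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder
            .UseLazyLoadingProxies()              // enables lazy loading for virtual navigations
            .UseSqlServer("<connection string>"); // provider choice is illustrative
    }
}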
Take the following query:
var lecture = _context.Lecture.Single(x => x.LectureId == lectureId);
to load one lecture.
If you omit virtual, then accessing lecture.Course would do one of two things. If the DbContext (_context) was not already tracking an instance of the Course that lecture.CourseId points at, lecture.Course would return null. If the DbContext was already tracking that instance, then lecture.Course would return that instance. So without lazy loading you might or might not get a reference; don't count on it being there.
With virtual and lazy loading in the same scenario, the proxy checks if the Course has been provided by the DbContext and returns it if so. If it hasn't been loaded then it will automatically go to the DbContext if it is still in scope and attempt to query it. In this way if you access lecture.Course you can count on it being returned if there is a record in the DB.
Think of lazy loading as a safety net. It comes with a potentially significant performance cost if relied upon, but one could argue that a performance hit is the lesser of two evils compared to runtime bugs from inconsistent data. This can be very evident with collections of related entities. In your example above, the ICollection<Student> and such should be marked as virtual as well to ensure those can lazy load. Without that, you would get back whatever students happened to be tracked at the time, which can be a very inconsistent data state at runtime.
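For instance, the Course entity from the question would look like this (a sketch, using mappable properties instead of public fields):

public class Course
{
    public int CourseId { get; set; }
    public string CourseName { get; set; }
    public string Description { get; set; }
    // virtual lets the proxy intercept access and lazy-load the collections;
    // initializing them avoids null references (see the disclaimer below)
    public virtual ICollection<Student> Students { get; set; } = new List<Student>();
    public virtual ICollection<Lecture> Lectures { get; set; } = new List<Lecture>();
}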
Take for example: you have 2 courses, Course #1 and #2, and 4 students, A, B, C, and D. All 4 are registered to Course #1 and only A & B are registered to Course #2. If we disable lazy loading by removing virtual, the behavior will change depending on which course we load first, if we happen to eager-load in one case and forget in the other...
using (var context = new MyAppDbContext())
{
var course1 = context.Courses
.Include(x => x.Students)
.Single(x => x.CourseId == 1);
var course2 = context.Courses
.Single(x => x.CourseId == 2);
var studentCount = course2.Students.Count();
}
Disclaimer: With collections in entities you should ensure these are always initialized so they are ready to go. This can be done in the constructor or on an auto-property:
public ICollection<Student> Students { get; set; } = new List<Student>();
In the above example, studentCount would come back as "2" because in loading Course #1, both Student A & B were loaded via the Include(x => x.Students). This is a pretty obvious example, loading the two courses right after one another, but this situation can easily occur when loading multiple records that share data, such as search results, etc. It is also affected by how long the DbContext has been alive. This example uses a using block for a new DbContext instance scope, but a context scoped to the web request or such could be tracking related instances from earlier in the call.
Now reverse the scenario:
using (var context = new MyAppDbContext())
{
var course2 = context.Courses
.Include(x => x.Students)
.Single(x => x.CourseId == 2);
var course1 = context.Courses
.Single(x => x.CourseId == 1);
var studentCount = course1.Students.Count();
}
In this case, only Students A & B were eager-loaded. While Course #1 actually references 4 students, studentCount here would return "2", counting the two students associated with Course #1 that the DbContext was tracking when Course #1 was loaded. You might expect 4, or 0 knowing that you didn't eager-load the students. The resulting related data is unreliable, and what you might or might not get back is situational.
Where lazy loading gets expensive is when loading sets of data. Say we load a list of 100 students, and when working with those students we access student.Courses. Eager loading will generate 1 SQL statement to load the 100 students and their related courses. Lazy loading will end up executing 1 query for the students, then 1 query per student to fetch that student's course (i.e. SELECT * FROM Courses WHERE CourseId = 1; SELECT * FROM Courses WHERE CourseId = 2; ...). If the student had several lazy-loaded properties, then that's another 100 queries per lazy-loaded property.
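As a sketch of that difference, using the Student/Course classes from the question (where the navigation property is named Courses):

// Lazy loading: 1 query for the students, then 1 more per student
// the first time its Courses navigation property is touched.
var students = context.Students.ToList();
foreach (var student in students)
{
    Console.WriteLine(student.Courses.CourseName); // one query per student
}

// Eager loading: a single joined query fetches both.
var studentsWithCourses = context.Students
    .Include(s => s.Courses)
    .ToList();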
I noticed that when I store an object that has a property of another entity type, the object gets saved, and so does the linked property. It's kind of neat when both are new or both are being updated.
class Customer
{
public Guid Id { get; set; }
public List<Category> Categories { get; set; } ...
}
class Category
{
public Guid Id { get; set; }
public string Name { get; set; } ...
}
However, when I create a new instance of Customer linked to a pre-existing Category, it becomes a bit of a problem. Category in this case is like a tag, so several customers will share a pointer to it, and each new customer needs only to reuse the already-present category. (If the ID of the category is set as unique, I get an error for an object already created, and when it's not, I get duplicates.)
I understand that it has to do with the state of the objects in relation to the EF engine. It simply isn't aware that the category on the new customer is already present in the DB, and hence tries to create it.
I played with the entity states and got into some complicated algorithm, which hurt readability. Then I came up with the following approach. I'm not actually doing anything with the data; I simply pull it into the client side of EF to make it aware of the categories' existence.
List<Guid> ids = data.Categories
.Select(_ => _.Id).ToList();
List<Category> existing = Context.Categories
.Where(_ => ids.Contains(_.Id)).ToList();
To make it simpler, let's assume that the number of categories is limited, so the retrieval can be reformulated as the code below. (It's basically like saying "Hmmm... speaking of the categories... well, never mind.", which seems academically wrong.)
List<Category> _ = Context.Categories.ToList();
Is there a pattern or approach that is best-practice recommended in this scenario?
I realize that the samples above are just Q&D workarounds. The mentioned entity-state handling is complicated. I get the sense that there's a neater way of resolving it.
You do not need to load all existing Categories into memory. Rather, just associate the new Customer with the existing Categories he has by loading them from the Context:
Guid[] categoryIdsOfNewCustomer; // passed as part of the create command
var newCustomer = new Customer();
newCustomer.Categories = Context.Categories
.Where(c => categoryIdsOfNewCustomer.Contains(c.Id))
.ToList();
Context.Customers.Add(newCustomer);
Context.SaveChanges();
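As a side note: if you want to avoid even that round trip, a hedged alternative is to attach stub entities so the context treats the categories as existing rather than new. This assumes the IDs are known to be valid, since no validation query is issued:

var newCustomer = new Customer { Categories = new List<Category>() };
foreach (Guid id in categoryIdsOfNewCustomer)
{
    var category = new Category { Id = id };  // stub with only the key set
    Context.Categories.Attach(category);      // tracked as Unchanged, not Added
    newCustomer.Categories.Add(category);
}
Context.Customers.Add(newCustomer);
Context.SaveChanges();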
I recently started to work with EF and MVC in .NET web applications, and I've encountered an issue. I wanted to see what your thoughts were and whether you would be able to point me in the right direction.
So, I've got two classes: DataGroup and DataElements. A DataElement can be in one DataGroup or none. I've used fluent API in the DataContext to achieve this:
modelBuilder.Entity<DataGroup>()
.HasMany(dg => dg.DataElements)
.WithOptional()
.HasForeignKey(de => de.DataGroup_Id);
In my DataGroup class I've got the following:
public virtual ICollection<DataElement> DataElements { get; set; }
[NotMapped]
public ICollection<DataElement> RecentDataElements //returns the last 15 data elements
{
get
{
return DataElements.OrderByDescending(de => de.GeneratedDateTime).Take(15).Reverse().ToList();
}
}
[NotMapped]
public DataElement LatestDataElement //returns the latest data element (if any)
{
    get
    {
        return DataElements.OrderByDescending(de => de.GeneratedDateTime).FirstOrDefault();
    }
}
So the issue I have with the code above is that when I call LatestDataElement or RecentDataElements, I end up waiting quite a bit.
I believe this is because the DataElements property in the DataGroup class is an ICollection and RecentDataElements and LatestDataElement end up causing EF to load all the DataElements into memory before sorting and returning a sub-set (there could be thousands of DataElements in a DataGroup).
Is there a way to make this more efficient?
I've toyed with the idea of querying the datacontext directly rather than using the DataElements property, but I wanted to see if there were any other options I should consider. I've been told by colleagues that it would be bad practice to put the datacontext in a model (whether they're right or wrong is a different issue).
Thank you for your help and advice. It's much appreciated :)
In short: No.
As the name of the NotMapped attribute already suggests, this is a property LINQ to Entities has no knowledge of whatsoever, and it's impossible to populate it by filtering on the database side. What I would do in your place is remove the NotMapped properties from your model altogether and create a view model:
public class DataGroupViewModel
{
public DataGroup DataGroup {get; set;}
public ICollection<DataElement> RecentDataElements { get; set; }
}
And use projection in your query to populate this view model:
var result = ctx.DataGroups.Where(...).Select(d => new DataGroupViewModel
{
    DataGroup = d,
    RecentDataElements = d.DataElements
        .OrderByDescending(de => de.GeneratedDateTime)
        .Take(15)
        .ToList()
});
This of course forces you to create another model, but it's the cleanest and fastest way to do it. Also, your colleagues are right: it is bad practice to make db calls in your model. A model is a container for data, nothing more than that; it shouldn't have logic inside it to query the database.
To make it a bit cleaner you can write an extension method you can reuse to get the view models:
public static IQueryable<DataGroupViewModel> GetDataGroupDTO(this IQueryable<DataGroup> dataGroups)
{
    return dataGroups.Select(dt => new DataGroupViewModel
    {
        DataGroup = dt,
        RecentDataElements = dt.DataElements
            .OrderByDescending(de => de.GeneratedDateTime)
            .Take(15)
            .ToList()
    });
}
Then you can write your query more cleanly:
IEnumerable<DataGroupViewModel> result = ctx.DataGroups.Where(...).GetDataGroupDTO();
My domain model has a lot of complex financial data that is the result of fairly complex calculations on multiple properties of various entities. I generally include these as [NotMapped] properties on the appropriate domain model (I know, I know - there's plenty of debate around putting business logic in your entities - being pragmatic, it just works well with AutoMapper and lets me define reusable DataAnnotations - a discussion of whether this is good or not is not my question).
This works fine as long as I want to materialize the entire entity (and any other dependent entities, either via .Include() LINQ calls or via additional queries after materialization) and then map these properties to the view model after the query. The problem comes in when trying to optimize problematic queries by projecting to a view model instead of materializing the entire entity.
Consider the following domain models (obviously simplified):
public class Customer
{
public virtual ICollection<Holding> Holdings { get; private set; }
[NotMapped]
public decimal AccountValue
{
get { return Holdings.Sum(x => x.Value); }
}
}
public class Holding
{
public virtual Stock Stock { get; set; }
public int Quantity { get; set; }
[NotMapped]
public decimal Value
{
get { return Quantity * Stock.Price; }
}
}
public class Stock
{
public string Symbol { get; set; }
public decimal Price { get; set; }
}
And the following view model:
public class CustomerViewModel
{
public decimal AccountValue { get; set; }
}
If I attempt to project directly like this:
List<CustomerViewModel> customers = MyContext.Customers
.Select(x => new CustomerViewModel()
{
AccountValue = x.AccountValue
})
.ToList();
I end up with the following NotSupportedException: "The specified type member 'AccountValue' is not supported in LINQ to Entities. Only initializers, entity members, and entity navigation properties are supported."
Which is expected. I get it - Entity Framework can't convert the property getters into a valid LINQ expression. However, if I inline the exact same logic within the projection, it works fine:
List<CustomerViewModel> customers = MyContext.Customers
.Select(x => new CustomerViewModel()
{
AccountValue = x.Holdings.Sum(y => y.Quantity * y.Stock.Price)
})
.ToList();
So we can conclude that the actual logic is convertible to a SQL query (i.e., there's nothing exotic like reading from disk, accessing external variables, etc.).
So here's the question: is there any way at all to make logic that should be convertible to SQL reusable within LINQ to entity projections?
Consider that this calculation may be used within many different view models. Copying it to the projection in each action is cumbersome and error prone. What if the calculation changes to include a multiplier? We'd have to manually locate and change it everywhere it's used.
One thing I have tried is encapsulating the logic within an IQueryable extension:
public static IQueryable<CustomerViewModel> WithAccountValue(
this IQueryable<Customer> query)
{
return query.Select(x => new CustomerViewModel()
{
AccountValue = x.Holdings.Sum(y => y.Quantity * y.Stock.Price)
});
}
Which can be used like this:
List<CustomerViewModel> customers = MyContext.Customers
.WithAccountValue()
.ToList();
That works well enough in a simple contrived case like this, but it's not composable. Because the result of the extension is an IQueryable<CustomerViewModel> and not an IQueryable<Customer>, you can't chain such extensions together. If I had two such properties in one view model, one of them in a second view model, and the other in a third view model, I would have no way of using the same extension for all three view models, which would defeat the whole purpose. With this approach it's all or nothing: every view model has to have the exact same set of calculated properties (which is rarely the case).
Sorry for the long-winded question. I prefer to provide as much detail as possible to make sure folks understand the question and potentially help others down the road. I just feel like I'm missing something here that would make all of this snap into focus.
I did a lot of research on this over the last several days because it's been a bit of a pain point in constructing efficient Entity Framework queries. I've found several different approaches that all essentially boil down to the same underlying concept: take the calculated property (or method), convert it into an Expression that the query provider knows how to translate into SQL, and then feed that into the EF query provider.
I found the following libraries/code that attempted to solve this problem:
LINQ Expression Projection
http://www.codeproject.com/Articles/402594/Black-Art-LINQ-expressions-reuse and http://linqexprprojection.codeplex.com/
This library allows you to write your reusable logic directly as an Expression and then provides the conversion to get that Expression into your LINQ query (since the query can't directly use an Expression). The funny thing is that it'll be translated back to an Expression by the query provider. The declaration of your reusable logic looks like this:
private static Expression<Func<Project, double>> projectAverageEffectiveAreaSelector =
proj => proj.Subprojects.Where(sp => sp.Area < 1000).Average(sp => sp.Area);
And you use it like this:
var proj1AndAea =
ctx.Projects
.AsExpressionProjectable()
.Where(p => p.ID == 1)
.Select(p => new
{
AEA = Utilities.projectAverageEffectiveAreaSelector.Project<double>()
});
Notice the .AsExpressionProjectable() extension to set up projection support. Then you use the .Project<T>() extension on one of your Expression definitions to get the Expression into the query.
LINQ Translations
http://damieng.com/blog/2009/06/24/client-side-properties-and-any-remote-linq-provider and https://github.com/damieng/Linq.Translations
This approach is pretty similar to the LINQ Expression Projection concept, except it's a little more flexible and has several points for extension. The trade-off is that it's also a little more complex to use. Essentially, you still define your reusable logic as an Expression and then rely on the library to convert that into something the query can use. See the blog post for more details.
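For a flavor of the pattern, a hedged sketch based on the blog post (the CompiledExpression/DefaultTranslationOf names follow that post and may differ between library versions):

class Employee
{
    private static readonly CompiledExpression<Employee, string> fullNameExpression =
        DefaultTranslationOf<Employee>.Property(e => e.FullName)
            .Is(e => e.FirstName + " " + e.LastName);

    public string FirstName { get; set; }
    public string LastName { get; set; }

    // Evaluates in memory when called directly, translates to SQL in queries.
    public string FullName
    {
        get { return fullNameExpression.Evaluate(this); }
    }
}

Queries then opt in with .WithTranslations(), e.g. ctx.Employees.Where(e => e.FullName.StartsWith("A")).WithTranslations().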
DelegateDecompiler
http://lostechies.com/jimmybogard/2014/05/07/projecting-computed-properties-with-linq-and-automapper/ and https://github.com/hazzik/DelegateDecompiler
I found DelegateDecompiler via the blog post on Jimmy Bogard's blog. It has been a lifesaver. It works well, is well architected, and requires a lot less ceremony. It does not require you to define your reusable calculations as an Expression. Instead, it constructs the necessary Expression by using Mono.Reflection to decompile your code on the fly. It knows which properties, methods, etc. need to be decompiled by having you decorate them with ComputedAttribute or by using the .Computed() extension within the query:
class Employee
{
[Computed]
public string FullName
{
get { return FirstName + " " + LastName; }
}
public string LastName { get; set; }
public string FirstName { get; set; }
}
This can also be easily extended, which is a nice touch. For example, I set it up to look for the NotMapped data annotation instead of having to explicitly use the ComputedAttribute.
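That extension amounts to a small configuration class; a hedged sketch of the approach described in the blog post (API details may have shifted between versions):

public class CustomDecompilerConfiguration : DefaultConfiguration
{
    public override bool ShouldDecompile(MemberInfo memberInfo)
    {
        // Decompile [NotMapped] members in addition to [Computed] ones.
        return memberInfo.GetCustomAttributes(typeof(NotMappedAttribute), true).Any()
            || base.ShouldDecompile(memberInfo);
    }
}

// Registered once at startup:
// Configuration.Configure(new CustomDecompilerConfiguration());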
Once you've set up your entity, you just trigger decompilation by using the .Decompile() extension:
var employees = ctx.Employees
.Select(x => new
{
FullName = x.FullName
})
.Decompile()
.ToList();
You can encapsulate logic by creating a class that contains the original entity plus the additional calculated property. You then create helper methods that project to that class.
For example, if we were trying to calculate the tax for an Employee and a Contractor entity, we could do this:
//This is our container for our original entity and the calculated field
public class PersonAndTax<T>
{
public T Entity { get; set; }
public double Tax { get; set; }
}
public class PersonAndTaxHelper
{
// This is our middle translation class
// Each Entity will use a different way to calculate income
private class PersonAndIncome<T>
{
public T Entity { get; set; }
public int Income { get; set; }
}
// Income calculating methods
public static IQueryable<PersonAndTax<Employee>> GetEmployeeAndTax(IQueryable<Employee> employees)
{
var query = from x in employees
select new PersonAndIncome<Employee>
{
Entity = x,
Income = x.YearlySalary
};
return CalculateTax(query);
}
public static IQueryable<PersonAndTax<Contractor>> GetContractorAndTax(IQueryable<Contractor> contractors)
{
var query = from x in contractors
select new PersonAndIncome<Contractor>
{
Entity = x,
Income = x.Contracts.Sum(y => y.Total)
};
return CalculateTax(query);
}
// Tax calculation is defined in one place
private static IQueryable<PersonAndTax<T>> CalculateTax<T>(IQueryable<PersonAndIncome<T>> personAndIncomeQuery)
{
var query = from x in personAndIncomeQuery
select new PersonAndTax<T>
{
Entity = x.Entity,
Tax = x.Income * 0.3
};
return query;
}
}
Our view model projections using the Tax property:
var contractorViewModel = from x in PersonAndTaxHelper.GetContractorAndTax(context.Contractors)
select new
{
x.Entity.Name,
x.Entity.BusinessName,
x.Tax,
};
var employeeViewModel = from x in PersonAndTaxHelper.GetEmployeeAndTax(context.Employees)
select new
{
x.Entity.Name,
x.Entity.YearsOfService,
x.Tax,
};
I'm curious about best practice when developing an n-tier application with LINQ to SQL and a WCF service.
In particular, I'm interested in, for example, how to return data from two related tables to the presentation tier. Suppose the following situation (much simplified):
Database has tables:
Orders (id, OrderName)
OrderDetails (id, orderid, DetailName)
The middle tier has CRUD methods for OrderDetails. So I need a way to rebuild the entity for attaching to the context for update or insert when it comes back from the presentation layer.
In the presentation layer I need to display a list of OrderDetails with the corresponding OrderName from the parent table.
There are two approaches for the classes returned from the service:
Use a custom DTO class that encapsulates data from both tables via projection:
class OrderDetailDTO
{
public int Id { get; set; }
public string DetailName { get; set; }
public string OrderName { get; set; }
}
IEnumerable<OrderDetailDTO> GetOrderDetails()
{
var db = new LinqDataContext();
return (from od in db.OrderDetails
select new OrderDetailDTO
{
Id = od.id,
DetailName = od.DetailName,
OrderName = od.Order.OrderName
}).ToList();
}
Cons: I need to assign every field that is important for the presentation layer in both directions (when returning data, and when creating a new entity to attach to the context once data comes back from the presentation layer).
Use a customized LINQ to SQL entity partial class:
partial class OrderDetail
{
[DataMember]
public string OrderName
{
get
{
return this.Order.OrderName; // return value from related entity
}
set {}
}
}
IEnumerable<OrderDetail> GetOrderDetails()
{
var db = new LinqDataContext();
var loadOptions = new DataLoadOptions();
loadOptions.LoadWith<OrderDetail>(item => item.Order);
db.LoadOptions = loadOptions;
return (from od in db.OrderDetails
select od).ToList();
}
Cons: the database query will include all columns from the Orders table, and LINQ to SQL will materialize the whole Order entity, although I need only one field from it.
Sorry for such a long story. Maybe I missed something? I will appreciate any suggestions.
I would say use a DTO and AutoMapper; it's not a good idea to expose a DB entity as a data contract.
Is the usage of LINQ to SQL a requirement for you, or are you still at the design stage where you can choose technologies? If the latter, I would suggest using Entity Framework with Self-Tracking Entities (STEs). Then when you get an entity back from the client, all client changes will be handled for you automatically by the STEs; you will just have to call Save. Including related entities is also easy then: (...some query...).Orders.Include(c => c.OrderDetails)