I'm using EF 6.0 on .NET Framework with MS SQL Server, and I have the following situation: I have a dynamic data selection on a navigation property of a given entity. This works fine so far. But I'd like to add some sorting, and I cannot figure out how to make EF send the sort to the database instead of sorting on the client side afterwards.
The problem seems to be that the data is requested from the database as soon as I retrieve the navigation property's value, not when I complete the command chain with the sort.
My code looks like this (simplified):
var dynamicRelatedEntityType = typeof(RelatedEntity);
using (var dbContext = new DBContext())
{
    var orderByFunction = buildOrderByFunction(dynamicRelatedEntityType); // this just builds a function for the order by ...
    var masterEntity = dbContext.MasterEntity.First(x => x.Whatever == true);
    var navigationProperty = masterEntity.GetType().GetProperty(dynamicRelatedEntityType.Name);
    var result = ((IEnumerable<RelatedEntity>)navigationProperty.GetValue(masterEntity)).OrderBy(orderByFunction).ToList();
    // result is OK, but the sort wasn't sent to the database ... it was done by my program, which is quite expensive and silly too ...
}
So, how can I change this behaviour? Any ideas?
Thank you in advance!
EDIT
The solution provided for this question handles dynamic predicates, but you cannot apply them if you still use navigationProperty.GetValue(masterEntity). In that case EF fires the SQL immediately, without any order or where clause ...
Your database server can only process T-SQL statements. Entity Framework (specifically the SQL Server plugin for Entity Framework) can translate a small subset of C# expressions into valid T-SQL (in your case, an ORDER BY statement).
When your expression is too complex to be translated into T-SQL (for example, it invokes methods or alters state), Entity Framework falls back to an in-memory operation.
If you are using .NET Core, you can use the following snippet while registering the context to spot all the "unsupported" statements that are executed in memory.
var builder = new DbContextOptionsBuilder<MyContext>();
var connectionString = configuration.GetConnectionString("DefaultConnection");
builder.UseSqlServer(connectionString);
// the following line is the one that prevents client side evaluation
builder.ConfigureWarnings(x => x.Throw(RelationalEventId.QueryClientEvaluationWarning));
Given that, which is important to understand when a custom expression is involved, LINQ requires a static expression to infer the ordering. However, you can generate a dynamic expression as suggested by LINQ Dynamic Expression Generation. Although I have never tried the described approach, it seems to me a viable way to achieve what you ask.
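As an illustration of that dynamic-expression idea - a sketch only, and the property-name-based key selector is my assumption about what the dynamic ordering looks like - the OrderBy call can be assembled from expression trees, so a provider could still translate it into SQL instead of receiving a compiled delegate:

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

public static class DynamicOrderBy
{
    // Builds source.OrderBy(x => x.<propertyName>) as an expression tree the
    // query provider can inspect, instead of an opaque compiled Func.
    public static IQueryable<T> OrderByProperty<T>(IQueryable<T> source, string propertyName)
    {
        var parameter = Expression.Parameter(typeof(T), "x");
        var property = Expression.Property(parameter, propertyName);
        var keySelector = Expression.Lambda(property, parameter);

        var call = Expression.Call(
            typeof(Queryable), "OrderBy",
            new[] { typeof(T), property.Type },
            source.Expression, Expression.Quote(keySelector));

        return source.Provider.CreateQuery<T>(call);
    }
}
```

Applied to an EF-backed IQueryable before enumeration, the ORDER BY would reach the database; here it is only demonstrated against an in-memory queryable.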
Related
Using ExecuteSqlInterpolated with EF Core is a great way to write more complex queries and also keep boilerplate down by doing binding on the fly with string interpolation too. A common clause we use in our code base is the IN clause.
Unfortunately, when using ExecuteSqlInterpolated this is not possible, because the "translation"/mapping to SQL is not implemented (FYI, we are using the Pomelo library).
Here is an example:
var rowsChanged = await dbContext.Database.ExecuteSqlInterpolatedAsync($@"
    UPDATE
        ExampleTable
    SET
        ExampleColumn = true
    WHERE
        Id IN ({ids})
");
This results in the following error:
System.InvalidOperationException : The current provider doesn't have a store type mapping for properties of type 'int[]'
To get around this, we build a custom query string and use the non-interpolated alternative. While this may still look clean here, it gets very messy in even a mildly complex query.
var idPlaceholders = Enumerable.Range(0, ids.Length).Select(i => $"{{{i}}}").StringJoin(", ");
var query = $@"
    UPDATE
        ExampleTable
    SET
        ExampleColumn = true
    WHERE
        Id IN ({idPlaceholders})
";
var rowsChanged = await dbContext.Database.ExecuteSqlRawAsync(query, ids);
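The placeholder-building step above can be reproduced with just the BCL (StringJoin in the snippet is assumed to be a local extension over string.Join):

```csharp
using System;
using System.Linq;

int[] ids = { 10, 20, 30 };

// "{{" and "}}" escape literal braces in an interpolated string, so each
// element becomes "{0}", "{1}", ... - the positional parameters that
// ExecuteSqlRawAsync binds the ids array to.
var idPlaceholders = string.Join(", ", Enumerable.Range(0, ids.Length).Select(i => $"{{{i}}}"));
// idPlaceholders is now "{0}, {1}, {2}"
```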
I tried doing this in DbContext ConfigureConventions, but I get the same error; it must need to happen at a different level.
configurationBuilder
.DefaultTypeMapping<int[]>()
.HasConversion<IntArrayDbValueConverter>();
configurationBuilder
.Properties<int[]>()
.HaveConversion<IntArrayDbValueConverter>();
There must be some way to add a custom "translation" to the DbContext or provider. The scarce documentation I can find on this topic has been vague or irrelevant. Looking for help.
Usually the distinction between LINQ to SQL and LINQ to Objects isn't much of an issue, but how can I determine which is happening?
It would be useful to know when writing the code, but I fear one can sometimes only be sure at run time.
It's not micro-optimization to make the distinction between LINQ to SQL and LINQ to Objects. The latter requires all data to be loaded into memory before you start filtering it, which can of course be a major issue.
Most LINQ methods use deferred execution, meaning they just build up the query without executing it yet (like Select or Where). A few others execute the query and materialize the result into an in-memory collection (like ToList or ToArray). If you use AsEnumerable you are also using LINQ to Objects, and no SQL is generated for the parts after it, which means the data must be loaded into memory (while still using deferred execution).
So consider the following two queries. The first selects and filters in the database:
var queryLondonCustomers = from cust in db.customers
where cust.City == "London"
select cust;
whereas the second selects all and filters via Linq-To-Objects:
var queryLondonCustomers = from cust in db.customers.AsEnumerable()
where cust.City == "London"
select cust;
The latter has one advantage: you can use any .NET method since it doesn't need to be translated to SQL (e.g. !String.IsNullOrWhiteSpace(cust.City)).
If you just get something that is an IEnumerable<T>, you can't be sure whether it's actually a query or already an in-memory object. Even a try-cast to IQueryable<T> won't tell you for sure, because of the AsQueryable method. Maybe you could try-cast it to a collection type. If the cast succeeds you can be sure that it's already materialized, but otherwise it doesn't tell you whether it's using LINQ to SQL or LINQ to Objects:
bool isMaterialized = queryLondonCustomers is ICollection<Customer>;
Related: EF ICollection Vs List Vs IEnumerable Vs IQueryable
The first solution that comes to my mind is checking the query provider.
If the query is materialized, meaning the data is loaded into memory, EnumerableQuery<T> is used. Otherwise a special query provider is used; for example, System.Data.Entity.Internal.Linq.DbQueryProvider for Entity Framework.
var materialized = query
.AsQueryable()
.Provider
.GetType()
.GetGenericTypeDefinition() == typeof(EnumerableQuery<>);
However, the above covers only the ideal cases, because someone could implement a custom query provider that behaves like EnumerableQuery.
I had the same question, for different reasons.
Judging purely by your title and initial description (which is why a Google search brought me here):
Pre-compilation, given an instance that implements IQueryable, there's no way to know the implementation behind the interface.
At runtime, you need to check the instance's Provider property, as @Danny Chen mentioned.
public enum LinqProvider
{
    Linq2SQL,
    Linq2Objects
}

public static class LinqProviderExtensions
{
    public static LinqProvider LinqProvider(this IQueryable query)
    {
        if (query.Provider.GetType().IsGenericType && query.Provider.GetType().GetGenericTypeDefinition() == typeof(EnumerableQuery<>))
            return LinqProvider.Linq2Objects;

        if (typeof(ICollection<>).MakeGenericType(query.ElementType).IsAssignableFrom(query.GetType()))
            return LinqProvider.Linq2Objects;

        return LinqProvider.Linq2SQL;
    }
}
In our case, we are adding additional filters dynamically, but ran into issues with different case-sensitivity/null-reference handling on different providers.
Hence, at runtime we had to tweak the filters we add based on the type of provider, which is why we ended up adding the extension method above.
Using EF Core on .NET 6:
To see whether the provider is an EF provider, use the following code:
if (queryable.Provider is Microsoft.EntityFrameworkCore.Query.Internal.EntityQueryProvider)
{
// Queryable is backed by EF and is not an in-memory/client-side queryable.
}
One could test the opposite by checking the provider against System.Linq.EnumerableQuery (the non-generic base type of EnumerableQuery<T>, so you don't have to deal with generics).
This is useful if you have methods like EF.Functions.Like(...) which can only be executed in the database - and you want to branch to something else in case of client-side execution.
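That branch can be wrapped in a small helper; a sketch using the non-generic EnumerableQuery base type mentioned above:

```csharp
using System;
using System.Linq;

public static class QueryProviderChecks
{
    // True when the queryable is backed by LINQ to Objects and will
    // therefore evaluate client-side rather than in the database.
    // EnumerableQuery<T> derives from EnumerableQuery, so no generics needed.
    public static bool IsClientSide(IQueryable query)
    {
        return query.Provider is EnumerableQuery;
    }
}
```

With an EF-backed query the provider would be EF's EntityQueryProvider instead, so the method returns false and you can safely use database-only constructs like EF.Functions.Like(...).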
I am trying to send a math function to SQL Server using Entity Framework 6.
I have a simple query:
using(var db = databaseContext)
{
var query = db.Foo.Select(x => Math.Sin(x.bar));
}
However, this gives me an exception:
An exception of type 'System.NotSupportedException' occurred in
EntityFramework.dll but was not handled in user code
Additional information: LINQ to Entities does not recognize the method
'Double Sin(Double)' method, and this method cannot be translated into
a store expression.
The problem is that Entity Framework doesn't know how to translate Math.Sin into a SQL Server equivalent. Are there any other classes I can use that will work?
I would first load the data from the db and do such calculations in memory:
using (var db = databaseContext)
{
    var listOfFoo = db.Foo.ToList();
    var listOfFooSin = listOfFoo.Select(x => Math.Sin(x.bar));
}
A ton of SQL Server-specific functions are exposed as static methods on the SqlFunctions class (System.Data.Entity.SqlServer.SqlFunctions in EF 6; System.Data.Objects.SqlClient.SqlFunctions in earlier versions) - but not all.
For Sin you can use SqlFunctions.Sin, e.g. db.Foo.Select(x => SqlFunctions.Sin(x.bar)), and you don't need to fall back to LINQ to Objects. The default Math.Sin is not a supported function in EF.
Your object is not REALIZED yet, and as such you cannot run complex functions on it. The same goes for extension methods and other things. The magic of just adding a ToList() will, in many Entity Framework scenarios, make the object realized and thus able to do more. When in doubt with Entity Framework, use ToList(). The problem is that the objects are not fully realized when you do something like:
context.(object).Select(x => doComplexThing(x))
Entity Framework supports some base operations, like equality comparisons and simple lambda functions. It cannot run heavy-duty methods until the object is cast to a more realized object in memory or projected to something first; ToList accomplishes this. This applies not just to Math operations but also to encryption and custom-made extension methods. Sometimes they will work, but most of the time not. I always think of it like the ADO.NET layers, where you have a disconnected and a connected layer. The context, and grabbing things from it, is connected and as such does not expose a lot of options; once it is disconnected or projected, you have free rein to go nuts on things.
static void Main(string[] args)
{
    using (var context = new TesterEntities())
    {
        var items = context.tePersons.ToList().Select(x => Math.Sin(x.PersonId));
    }
    Console.ReadLine();
}
I have the following method in a data access class which uses entity framework:
public static IEnumerable<entityType> GetWhere(Func<entityType, bool> wherePredicate)
{
    using (DataEntities db = new DataEntities())
    {
        var query = (wherePredicate != null)
            ? db.Set<entityType>().Where(wherePredicate).ToList()
            : db.Set<entityType>().ToList();
        return query;
    }
}
This works fine when I use the entities across all layers... however, I am trying to move to using a DTO class, and I would like to do something like the following:
public static IEnumerable<EntityTypeDTO> GetWhere(Func<EntityTypeDTO, bool> wherePredicate)
{
    // call a method here which will convert Func<EntityTypeDTO, bool> to
    // Func<EntityType, bool>
    using (DataEntities db = new DataEntities())
    {
        var query = new List<EntityType>();
        if (wherePredicate == null)
        {
            query = db.Set<EntityType>().ToList();
        }
        else
        {
            query = db.Set<EntityType>().Where(wherePredicate).AsQueryable<EntityType>().ToList();
        }
        List<EntityTypeDTO> result = new List<EntityTypeDTO>();
        foreach (EntityType item in query)
        {
            result.Add(item.ToDTO());
        }
        return result;
    }
}
Essentially I want a method which will convert a Func<EntityTypeDTO, bool> to a Func<EntityType, bool>.
I think I have to break down the Func into an expression tree and then rebuild it somehow against the entity type?
I want to do this to allow the presentation layer to just pass the expression queries.
Am I missing something basic, or is there an easier design pattern that can pass a query from a DTO to a data access class without knowing the details of the query?
I have tried making the DTO inherit from the entity, which doesn't seem to work either.
If there is a better design pattern that I am missing, I would love a pointer, and I can investigate from there...
Firstly, I would suggest that you put a querying layer of your own in front of Entity Framework rather than allowing any arbitrary Func to be passed in, because it will be very easy in the future to pass a Func that Entity Framework cannot translate into a SQL statement (it can only translate some expressions - the basics are fine, but if your expression calls a C# method, for example, then Entity Framework will probably fail).
So your search layer could have classes that you build up as criteria (eg. a "ContainsName" search class or a "ProductHasId" class) that are then translated into expressions in your search layer. This separates your app entirely from the ORM, which means that ORM details (like the entities, or the limitations of which Funcs can and can't be translated) don't leak out. There's lots out there that's been written about this sort of arrangement.
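A minimal sketch of such a criteria layer (the entity shape and class names here are illustrative, not from the question; each criterion yields an Expression rather than a compiled Func, so EF can still translate it):

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

public class Product
{
    public int ProductId { get; set; }
    public string Name { get; set; }
}

// Each criterion knows how to produce an expression tree, so callers never
// hand the data layer an opaque delegate it cannot translate to SQL.
public interface ISearchCriteria<T>
{
    Expression<Func<T, bool>> ToExpression();
}

public class ContainsName : ISearchCriteria<Product>
{
    private readonly string _fragment;

    public ContainsName(string fragment)
    {
        _fragment = fragment;
    }

    // String.Contains is among the calls EF is known to translate (to LIKE).
    public Expression<Func<Product, bool>> ToExpression()
    {
        return p => p.Name.Contains(_fragment);
    }
}
```

Something like db.Set<Product>().Where(new ContainsName("chai").ToExpression()) then keeps the filter in the database; the same expression also runs against an in-memory queryable.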
One final note, though: if you are working close to the ORM layer, Entity Framework is very clever, and you could probably get a long way without translating your Func<dto, bool> to a Func<entity, bool>. For example, in the code below, accessing context.Products returns a DbSet, calling Select on it returns an IQueryable, and calling Where on that also returns an IQueryable. Entity Framework will translate all of that into a single SQL statement, so it won't pull all of the Products into memory and then filter on that in-memory set; it will actually perform the filtering in SQL, even though the filter operates on a projected type (the equivalent of the DTO in your case) and not on the Entity Framework entity:
var results = context.Products
.Select(p => new { ID = p.ProductID, Name = p.ProductName })
.Where(p => p.ID < 10)
.ToList();
The SQL executed is:
SELECT
[Extent1].[ProductID] AS [ProductID],
[Extent1].[ProductName] AS [ProductName]
FROM [dbo].[Products] AS [Extent1]
WHERE [Extent1].[ProductID] < 10
So, if you changed your code to do something like...
return context.Products
.Map<Product, ProductDTO>()
.Where(productDtoWherePredicate)
.ToList();
.. then you might be just fine with the Funcs that you already have. I presume that you already have some sort of mapping functions to get from EF entities to DTOs (but if not then you might want to look into AutoMapper to help you out - which has support for "projections", which are basically IQueryable maps).
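If no mapper is available, the Map step above can be hand-rolled as a Select projection kept in expression form, so that a following Where still composes into the same query (the entity and DTO shapes here are illustrative):

```csharp
using System;
using System.Linq;

public class Product
{
    public int ProductID { get; set; }
    public string ProductName { get; set; }
}

public class ProductDTO
{
    public int ID { get; set; }
    public string Name { get; set; }
}

public static class ProductMapping
{
    // The projection stays an IQueryable, so a relational provider can fold
    // a subsequent Where(dto => ...) into the generated SQL - the same
    // single-statement behaviour shown for the anonymous type above.
    public static IQueryable<ProductDTO> MapToDto(this IQueryable<Product> source)
    {
        return source.Select(p => new ProductDTO { ID = p.ProductID, Name = p.ProductName });
    }
}
```

Demonstrated here only against an in-memory queryable; with an EF-backed DbSet the filter on the DTO would still run in SQL.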
I am going to put this up as an answer. Thanks to Dan for the quick answer. Looking at what you are saying, I can write a query/filter set of classes. For example, take the following code:
GetProducts().GetProductsInCategory().GetProductsWithinPriceRange(minPrice, maxPrice);
This code would run like so: GetProducts would get all products in the table, and the remaining functions would filter the results. If all queries run like this, it may put a significant load on the data access layer / DB server connections... not sure.
or
An alternative I will also work on: if each function creates a LINQ expression, I could combine them like this: How do I combine multiple linq queries into one results set?
This may allow me to return the filtered result set from the database.
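That second approach can be sketched as extension methods over IQueryable, so the chain from the first code line stays one deferred query (the entity shape is assumed for illustration):

```csharp
using System;
using System.Linq;

public class Product
{
    public int ProductId { get; set; }
    public int CategoryId { get; set; }
    public decimal Price { get; set; }
}

public static class ProductFilters
{
    // Each method narrows the same IQueryable. Nothing executes until the
    // caller enumerates (e.g. ToList), so a relational provider would fold
    // the whole chain into a single SQL statement.
    public static IQueryable<Product> InCategory(this IQueryable<Product> products, int categoryId)
    {
        return products.Where(p => p.CategoryId == categoryId);
    }

    public static IQueryable<Product> WithinPriceRange(this IQueryable<Product> products, decimal minPrice, decimal maxPrice)
    {
        return products.Where(p => p.Price >= minPrice && p.Price <= maxPrice);
    }
}
```

Usage would look like products.InCategory(1).WithinPriceRange(minPrice, maxPrice).ToList(), mirroring the chained calls above without a round trip per filter.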
Either way I am marking this as answered. I will update when I have more details.
I have a model-first Entity Framework design like this (version 4.4):
When I load it using code like this:
PriceSnapshotSummary snapshot = db.PriceSnapshotSummaries.FirstOrDefault(pss => pss.Id == snapshotId);
the snapshot has loaded everything (that is, SnapshotPart, Quote, QuoteType) except the DataInfo. Looking into the SQL, this appears to be because Quote has no FK to DataInfo, due to the 0..1 relationship.
However, I would have expected that the navigation property 'DataInfo' on Quote would still go off to the database to fetch it.
My current work around is this:
foreach (var quote in snapshot.ComponentQuotes)
{
    var dataInfo = db.DataInfoes.FirstOrDefault(di => di.Quote.Id == quote.InstrumentQuote.Id);
    quote.InstrumentQuote.DataInfo = dataInfo;
}
Is there a better way to achieve this? I thought EF would automatically load the reference?
This problem has to do with how the basic LINQ building blocks interact with Entity Framework.
Take the following (pseudo)code:
IQueryable<Address> addresses;
using (var db = new ObjectContext()) {
    addresses = db.Users.Addresses.Where(addr => addr.Number > 1000);
}
addresses.Select(addr => Console.WriteLine(addr.City.Name));
This looks OK, but will throw a runtime error, because of an interface called IQueryable.
IQueryable implements IEnumerable and adds info about an expression and a provider. This basically allows it to build and execute SQL statements against a database, so it doesn't have to load whole tables when fetching data and iterating over them like you would over an IEnumerable.
Because LINQ defers execution of the expression until it's used, it compiles the IQueryable expression into SQL and executes the database query only right before it's needed. This speeds things up a lot and allows expression chaining without going to the database every time a Where() or Select() is executed. The side effect is that if the object is used outside the scope of db, the SQL statement is executed after db has been disposed of.
To force LINQ to execute, you can use ToList, like this:
IEnumerable<Address> addresses;
using (var db = new ObjectContext()) {
    addresses = db.Users.Addresses.Where(addr => addr.Number > 1000).ToList();
}
addresses.Select(addr => Console.WriteLine(addr.City.Name));
This will force LINQ to execute the expression against db and get all addresses with a number greater than a thousand. This is all good if you only need a field from the addresses table, but since we want the name of the city (a 1..1 relationship, similar to yours), we'll hit another bump before it can run: lazy loading.
Entity Framework lazy loads entities by default, so nothing is fetched from the database until needed. Again, this speeds things up considerably, since without it every call to the database could potentially bring the whole database into memory; but it has the problem of depending on the context being available.
You could set EF to eager load (in your model, go to properties and set 'Lazy Loading Enabled' to False), but that would bring in a lot of info you probably don't use.
The best fix for this problem is to execute everything inside db's scope:
IQueryable<Address> addresses;
using (var db = new ObjectContext()) {
    addresses = db.Users.Addresses.Where(addr => addr.Number > 1000);
    addresses.Select(addr => Console.WriteLine(addr.City.Name));
}
I know this is a really simple example, but in the real world you can use a DI container like Ninject to handle your dependencies and have your db available throughout the execution of the app.
This leaves us with Include. Include will make IQueryable include all specified relation paths when building the sql statement:
IEnumerable<Address> addresses;
using (var db = new ObjectContext()) {
    addresses = db.Users.Addresses.Include("City").Where(addr => addr.Number > 1000).ToList();
}
addresses.Select(addr => Console.WriteLine(addr.City.Name));
This will work, and it's a nice compromise between having to load the whole database and having to refactor an entire project to support DI.
Another thing you can do is map multiple tables to a single entity. In your case, since the relationship is 1-0..1, you shouldn't have a problem doing it.
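In code-first terms (with model-first you would do the equivalent in the designer), that mapping is EF's "entity splitting"; a rough sketch, with invented property names, of one Quote entity stored across the two tables sharing a primary key:

```csharp
// Entity splitting configuration inside OnModelCreating: the Quote entity's
// columns come partly from the Quote table and partly from the DataInfo
// table. Property names (Price, Source) are hypothetical.
modelBuilder.Entity<Quote>()
    .Map(m =>
    {
        m.Properties(q => new { q.Id, q.Price });
        m.ToTable("Quote");
    })
    .Map(m =>
    {
        m.Properties(q => new { q.Id, q.Source });
        m.ToTable("DataInfo");
    });
```

With this in place, loading a Quote joins both tables in one query, so there is no separate DataInfo navigation left to lazy load.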