I've been using OData for my APIs.
While I generally like what it has to offer, it only operates on the data after my query has run, which forces me to construct all the relationships ahead of time.
Does an OData endpoint with Entity Framework pass my OData parameters through to be executed as part of my SQL query?
Right now, if I plan to possibly use OData syntax like $expand, I have to use EF's Include ahead of time. Once again, the issue is that EF must build all of the potential relationships that I might use $expand with... even if I don't $expand anything.
Another example is if I use the $top=100 syntax. Say I had 10,000 results: EF will download all 10,000 from the DB, and then OData will select the top 100.
Would an OData endpoint inject itself between EF and the DB and only select 100 results from the DB in this case?
In general, OData and EF go hand in hand: OData translates the incoming HTTP request into a LINQ-to-Entities expression that EF then translates into a SQL expression.
tl;dr
All of your comments and observations point to an incorrect implementation within your controller; it sounds suspiciously as if you have followed a Repository Pattern based example instead of an EF based one.
Does an OData endpoint with Entity Framework pass my OData parameters through to be executed as part of my SQL query?
That is exactly what the OData framework was designed to enable, but there is one caveat: you must configure your controllers in a way that allows the parameters to be passed through.
There are two mechanisms that allow this to happen. The first is that your controller needs to return an IQueryable<T> result (or you must pass an IQueryable<T> to one of the negotiated response handlers). The other is that you must not apply your own filter expressions that might contradict the parameters, otherwise you may end up with no records being returned.
The following is an example of the two standard Get endpoints on an OData controller that return a Vehicle query and allow the $expand, $select and $filter expressions to be passed through:
[ODataRoute]
[HttpGet]
[OData.EnableQuery]
public IHttpActionResult Get(ODataQueryOptions<Vehicle> queryOptions)
{
    // Returning the deferred IQueryable lets EnableQuery compose the
    // query options into the SQL that EF generates.
    return Ok(db.Vehicles);
}

[ODataRoute]
[HttpGet]
[OData.EnableQuery]
public IHttpActionResult Get([FromODataUri] int key, ODataQueryOptions<Vehicle> queryOptions)
{
    // Still an IQueryable, scoped to a single item, so $expand/$select
    // can still be applied in the database.
    return SingleResultAction(db.Vehicles.Where(x => x.Id == key));
}
If I call this with the following query options:
$filter=Make eq 'Holden' and Model eq 'Commodore'&$orderby=Year desc
then that will translate into a SQL query similar to this:
-- (SELECT * would be expanded in full)
DECLARE @make varchar(20) = 'Holden';
DECLARE @model varchar(20) = 'Commodore';

SELECT *
FROM Vehicle
WHERE Make = @make
  AND Model = @model
ORDER BY Year DESC
Yes, the parameters will be properly parameterised as well!
This simple controller will also pass through the $expand request and automatically join the necessary tables; there is no need for us to know or think about the potential includes up front at all.
In fact, if I add $skip=5 and $top=2 those clauses will also be applied directly to the SQL statement, and at most only 2 rows will be returned from the database.
There are two common scenarios where all this auto-magical query translation and pass through mumbo jumbo gets broken.
The first: in the controller method (for either the collection or the item), you do not return an IQueryable result, or you have otherwise executed the query, resolved it to an IEnumerable, and then returned that, perhaps by casting it back to an IQueryable.
This tends to happen because when we start out, we tend to follow simple, non-EF-based OData examples; they usually follow a repository pattern because it is simple to explain the model, example data and implementation in a single page of code.
Public Service Announcement: Do not try to use a Repository Pattern to wrap an EF model and context; almost all of the benefits of EF will be lost. EF already implements a Unit of Work pattern, and OData is designed to work with it directly.
The following controller implementation will NOT pass the query options through to the database. $top, $skip, $filter, $select and $orderby will still be applied where they can be, but only after retrieving all of the records into memory first:
// ToList() executes the query immediately, loading the entire table into
// memory before any of the query options are applied.
return Ok(db.Vehicles.ToList());
$expand will not be applied; or rather, it will result in NULL related records.
The other common issue arises after Actions, where a data record (or records) has been processed. If we want to support the entire range of query options automatically, we again need to make sure we return an IQueryable expression that queries from the DbContext. If the expression is IQueryable but only queries what is already in memory, then $expand and $filter, for instance, can only be applied to the data that is already loaded.
SingleResultAction is a good helper method for returning an IQueryable that has been scoped to a single item, while still allowing the full range of query options to be applied.
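For completeness, here is a sketch of that pattern on a custom action, reusing this controller's conventions (MarkSold and the Sold flag are hypothetical; the point is returning a deferred query from the context rather than the tracked instance):

[HttpPost]
[OData.EnableQuery]
public IHttpActionResult MarkSold([FromODataUri] int key)
{
    var vehicle = db.Vehicles.Find(key);
    if (vehicle == null)
        return NotFound();

    vehicle.Sold = true; // hypothetical flag on the Vehicle entity
    db.SaveChanges();

    // Re-query the context so $expand/$select can still compose into SQL.
    return SingleResultAction(db.Vehicles.Where(x => x.Id == key));
}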
Welcome to a world of pain - OData is a great idea with a pretty bad implementation. But yes, it is amazing - I'm doing it myself.
it only operates on the data after my query has run, which forces me to construct all the relationships ahead of time.
If this asks what I think it does (your English is VERY unclear) then no, it does not - your fault is exposing EF objects directly. I have separate API objects and expose them using AutoMapper's ProjectTo. Yes, I need to define relationships ahead of time, but not at the EF level.
I have to use EF Include ahead of time.
That is because you decide to. As I said, I actually use AutoMapper's ProjectTo - and derive the necessary expands dynamically from the ODataQueryOptions SelectExpand information. There is no need to send the whole object graph to EF (which is what happens if you expand all possible expansions with includes); that results in really bad performance. It takes just a page of programming to get the relevant includes.
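For illustration, a minimal sketch of that page of programming, assuming the Microsoft.AspNet.OData and AutoMapper packages; the helper name and the naive comma-split of RawExpand are my own, and nested $expand clauses (e.g. Parent($expand=Child)) would need real parsing:

using System;
using System.Collections.Generic;
using System.Linq;
using AutoMapper;
using AutoMapper.QueryableExtensions;
using Microsoft.AspNet.OData.Query;

public static class ODataProjectionExtensions
{
    // Projects entities to DTOs, expanding only the members named in $expand.
    public static IQueryable<TDto> ProjectWithExpand<TEntity, TDto>(
        this IQueryable<TEntity> source,
        ODataQueryOptions<TDto> options,
        IConfigurationProvider mapperConfig)
    {
        var raw = options.SelectExpand?.RawExpand;
        var expands = string.IsNullOrEmpty(raw)
            ? Array.Empty<string>()
            : raw.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
                 .Select(e => e.Trim())
                 .ToArray();

        // ProjectTo builds the SELECT (and joins) in SQL, materializing only
        // the requested expansions instead of the whole object graph.
        IDictionary<string, object> parameters = null;
        return source.ProjectTo<TDto>(mapperConfig, parameters, expands);
    }
}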
Would an oData endpoint inject itself between EF and the DB and only select 100 results
from the DB in this case?
If it is programmed like that, yes. If someone challenged by LINQ programs it and wraps EF in one of those ridiculously inefficient repository patterns that return IEnumerable - then no, it will possibly pull everything. Been there, seen that. But normally TOP(1) results in the SQL selecting only the first item.
Related
In my application I have the following DTO object which retrieves data, via EF Core, from SQL and computes a certain field:
public class MyDTO
{
    public string MyDTOProperty { get; set; }

    public string MyDTOComputedField()
    {
        ...
    }
}
My query method looks like:
public class MyQueries
{
    ...

    [UseDbContext(typeof(ApiDbContext))]
    [UseFiltering(typeof(MyFilter))]
    [UseSorting]
    public IQueryable<MyDTO> GetObject([ScopedService] ApiDbContext context)
    {
        var query = context.MyDB;
        return query.Select(fea => new MyDTO()
        {
            MyDTOProperty = fea.property
        });
    }
}
Filtering and sorting only seem to work on properties with get and set accessors. My question is: how can I enable filtering and sorting on my computed fields, such that the following GraphQL query would be possible:
{
  Object(where: {MyDTOComputedField: {contains: "someSubString"}}, order: {MyDTOComputedField: ASC}) {
    MyDTOProperty
    MyDTOComputedField
  }
}
I have already tried defining my own filtering/sorting middleware, without any luck so far.
TL;DR: You are using a custom resolver (an HC feature), not a computed column (a T-SQL feature), and it cannot be translated to SQL by Entity Framework.
First things first: this is not a Hot Chocolate problem but an Entity Framework problem.
[UseFiltering]
UseFiltering is neither magic nor a silver bullet. It is only middleware which generates a where argument for your endpoint and then, at runtime, takes that argument (in your case {MyDTOComputedField: {contains: "someSubString"}}), builds a LINQ expression from it and returns input.Where(expression).
And that's pretty much it.
(Of course, if you have ever written a string -> LINQ expression piece of code, then you know it's not THAT simple, but the good folks from HC did exactly that for us :) )
Something like:
System.Linq.Expressions.Expression<Func<MyDTO, bool>> where =
    myDto => myDto.MyDTOComputedField().Contains("someSubString");

return input.Where(where);
(Remember, every middleware in HC is just a piece of pipe - it has an input, some processing, and an output. That's why the order of middlewares matters. Btw, the same goes for "order by", but it returns input.OrderBy(expression).)
Now, because input is a DbSet<MyDTO>, nothing is executed "right away"; it is lazy - the real work is done by Entity Framework, which takes the LINQ expression (.Where().OrderBy()), translates it to T-SQL and sends it as a query.
And there is your problem: your MyDTO.MyDTOComputedField is not translatable to SQL.
Why is it not translatable?
Because your MyDTOComputedField is not a "computed column" but a "custom resolver". It exists only in your app, and SQL has no idea what it should contain. Maybe it is something as trivial as a + b * 42 (then a computed column would be great!), but maybe it is a request to another server's REST API (why not :) ) - we don't know.
Then why not execute part of the query on the server and the rest locally?
Because this scales reeeeeeeeally badly. You did not show us the implementation of MyDTO.MyDTOComputedField, so let's assume it does something trivial, like cast((a + b * 42) as nvarchar(max)). Meaning it will always be some int, just cast as nvarchar. Meaning, if you ask for Contains("someSubString"), it will always yield 0 results.
Ok, now imagine your MyDTO table (btw, I expect MyDTO to be an EF model, even with DataTransferObject in the name...) has 10,000,000 rows (in an enterprise-scale app that's business as usual :) ).
Because you are a sane person (and because it makes this example much easier to understand :) ), you add pagination. Let's say 100 items per page.
In this example, you expect EF to do select top 100 * from MyDto where MyDTOComputedField like '%someSubString%'.
But that's not gonna happen - SQL has no idea what MyDTOComputedField is.
So EF has two options, both bad. It can execute select top 100, then filter locally - but that yields zero results. So it takes another 100, and another 100, and another, and another (10,000,000 / 100 = 100,000 select queries!), only to find that there are 0 results.
The other option: when EF finds that some part of the expression has to be executed locally, it executes the whole query locally. So it selects, fetches and materialises 10,000,000 entities in one go, and THEN filters them, just to find there are 0 results. Somewhat better, but still bad.
You just DDoSed yourself.
Btw, option 2 is what Entity Framework before Core ("Classic"?) did. And it was the source of soooo many bugs, where you accidentally fetched a whole table, that the good folks on the EF team dropped support for it, and now they throw:
"The LINQ expression 'DbSet()\n .Where(f => new MyDTO{ \r\n id = f.i, \r\n }\r\n.MyDTOProperty == __p_3' could not be translated. Either rewrite the query in a form that can be translated, or switch to client evaluation explicitly by inserting a call to 'AsEnumerable', 'AsAsyncEnumerable', 'ToList', or 'ToListAsync'. See go.microsoft.com/fwlink/?linkid=2101038 for more information."
Ok... But what to do?
I know there will never be many rows.
Maybe MyDTO is just some list that will never explode (say, for example, VAT rates - there are pretty much Standard, Zero and Reduced, plus some states have a few more. So we are hardly looking at a table greater than ~5 rows, more if your app is international, but still - a few).
Then you don't have to be afraid of local execution. Just add .ToArray() or .ToList() at the end of your DbSet call, as your exception told you.
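For instance, a minimal sketch (VatRate and its ComputedLabel() resolver are hypothetical stand-ins for such a tiny lookup table):

// Materialize the handful of rows first; the untranslatable resolver then
// runs in memory via LINQ-to-Objects. Only sane while the table stays tiny.
var matches = context.VatRates
    .ToList()
    .Where(r => r.ComputedLabel().Contains("someSubString"))
    .ToList();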
But be aware that this can really bite you later, if not done carefully.
Computed Column
If your implementation of MyDTOComputedField is trivial, you can move it to the database. Set up an EF computed column, run a migration, drop your resolver, and you are ready to go.
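A minimal sketch, assuming EF Core fluent configuration, the trivial cast((a + b * 42) as nvarchar(max)) example from above, and MyDTOComputedField turned into a plain get/set property:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<MyDTO>()
        .Property(e => e.MyDTOComputedField)
        // The database now owns the computation, so EF reads it like any
        // other column and [UseFiltering] translates straight to SQL.
        .HasComputedColumnSql("cast((a + b * 42) as nvarchar(max))");
}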
Database View
Another possible option is to create a database view.
This is a more robust solution than a computed column (at the least, you can optimise your view really well: custom index(es), better joins, no inner query, etc.), but it takes more work and you have to know what you are doing. AFAIK EF can't generate the view for you; you have to write it by hand.
Just create an empty migration, add your view, add the EF entity (make sure to use ToView() and not ToTable()), drop your resolver, and you are ready to go.
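Sketched out (the view and entity names are hypothetical; the CREATE VIEW body lives in your hand-written migration):

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<MyDtoView>(b =>
    {
        b.HasNoKey();          // typical for read-only query models
        b.ToView("vw_MyDto");  // ToView: EF queries it but never migrates it
    });
}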
In both cases, your query (DTO?) model will be different from your mutation (domain?) model, but that's OK - you really do not want to let consumers of your API even try to mutate MyDTOComputedField anyway.
It's not possible to translate it to SQL
Maybe your custom resolver does something not really under your control / not doable in SQL (= not doable in EF), for example an HTTP call to another server. Then it's up to you to do it right within your business logic. Maybe add a custom query argument. Maybe write your own implementation of [UseFiltering] (it's not THAT hard - Hot Chocolate is open source with great licencing, so you can basically go [Ctrl]+[C], [Ctrl]+[V] the current implementation and add what you need to add).
I can't advise you there; I don't know your business requirements for MyDTOComputedField.
I can't wrap my brain around this; I don't get why it behaves like this.
I have made an OData query that returns a collection of 175 items
from c in navClient.Item where c.No != "" && c.Last_Date_Modified > dtfrom select c
navClient.Item is a System.Data.Services.Client.DataServiceQuery
However, I want to take the first 100 items from the collection using .Take(100), but I get 0 items. It isn't until I do .Take(121) that I get my first item, which is the first item in the collection; .Take(122) returns the first two items, and so on.
Any idea why this behaves like this?
Edit:
Doing ToList first and then Take(100) returns the first 100 as expected.
My only theory right now is that the table I'm running my query against is just a temp table that is out of sync with the database.
You have described what looks like an issue in the server implementation that handles your request. This behaviour would occur if the .Take(100) were evaluated before the filter criteria. This issue can easily occur in either the client or the server implementation. The .ToList() works because it brings the entire collection back to the client and then applies the filter criteria with standard LINQ-to-Objects evaluation.
While not the cause today...
as a general rule, whenever you specify .Take() or .Skip() you should also specify an explicit .OrderBy(); when the order is ambiguous in a limiting or paging query, so too can be the results.
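For example, sketched against this question's query (ordering by No is an assumption; any stable key works):

var first100 = (from c in navClient.Item
                where c.No != "" && c.Last_Date_Modified > dtfrom
                orderby c.No // explicit, deterministic order
                select c)
               .Take(100);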
There are 3 common levels at play here:
Client Query Resolution
Your LINQ query is first resolved into the URL that will be used to make the request to the API; you should first check that the URL is correctly constructed.
You should be expecting a URL similar to:
/odata/Item?$top=100&$filter=No ne '' AND Last_Date_Modified gt '2020-01-20T17:24:21.3605918+11:00'
You can inspect the URL using the .RequestUri property on the query, try something like the following to capture it:
var query = from c in navClient.Item
where c.No != "" && c.Last_Date_Modified > dtfrom
select c;
query = query.Take(100);
System.Diagnostics.Debug.WriteLine(((DataServiceQuery<Item>)query).RequestUri);
The URL needs to pass both the $top and the $filter query options through to the server.
API Controller
The client is expecting the controller to handle or pass through both the $filter criteria and the $top expression to the underlying data store.
Many default EF or Unit-of-Work based OData implementations will simply return a proper IQueryable<T> expression using deferred execution. However, if the backend store uses a repository pattern, or the controller otherwise constructs the dataset, then that code may need to explicitly handle the $filter criteria before applying the $top.
There are many simple OData controller examples out there that have a mocked List<T> or otherwise IEnumerable<T> backend. The red flag is seeing a .ToList() or an .AsQueryable() on the main expression in the controller code, or the controller returning an IEnumerable<T> response; this indicates that the expression is not using deferred execution at all, which means the controller most likely needs to manage the query options manually.
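A sketch of that red-flag pattern in controller form (the repository call is hypothetical):

public IQueryable<Item> Get()
{
    // ToList() executes the query here: every row is fetched into memory.
    // AsQueryable() only changes the compile-time type back, so $filter and
    // $top are then applied by LINQ-to-Objects, not by the database.
    return _repository.GetItems().ToList().AsQueryable();
}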
OData Query Expression Resolver
Especially in the .NET implementation, the EnableQueryAttribute by default applies the query options to the final IQueryable output immediately before, or effectively as part of, the serialization process.
So the $top and $filter (and $orderby, $select, $expand) will be applied again even if the controller method has already evaluated these options. This isn't normally a problem; it's a validation fail-safe, and it means that you do not strictly have to return IQueryable<T> from your controller methods at all.
Because of this, if your backend supports deferred IQueryable<T> LINQ expressions (like DbSet<T> in an EF DbContext), then in the controller implementation you do not usually do anything with the query options; you let the EnableQueryAttribute process them for you.
However, if the controller were to process the paging expression .Take(100) first and not apply the $filter correctly, or at all, then the EnableQueryAttribute would apply the criteria to a list that had already been restricted - in your example, perhaps to the first 100 items.
I am making a call from AngularJS to a .NET function, and the time it takes for the response to get back to Angular from .NET is more than 5 seconds. How can I reduce the time spent mapping the results of the SQL query? I have the following code:
List<CarDTO> result = new List<CarDTO>();
var cars = await _carsUnitOfWork.CarRepository.GetCarDefault(carFilter,true,_options.Value.priorityLabel);
result = cars.Select(car => _mapper.Map<Car, CarDTO>(car)).ToList();
The code you have provided isn't expanded enough to identify a cause, but there are a number of clues:
Check / post the code for CarRepository.GetCarDefault(). The call implies that this is returning an IEnumerable, given that it is awaited. It isn't clear what all of the parameters are and how they affect the query. As your database grows, this appears to return all cars rather than supporting pagination. (What happens when you have 10,000 Car records, or 1,000,000?)
Next would be the use of AutoMapper's Map method. Combined with IEnumerable, this means that your repository is going through the hassle of loading all Car entities into memory, and then AutoMapper is allocating a duplicate set of DTOs in memory, copying across data from the entities.
Lazy loading is a distinct risk with an approach like this. If the CarDTO pulls any fields from entities referenced by a Car, this will trigger additional queries for each individual car.
For best performance, I highly recommend adopting an IQueryable return type on repository methods and leveraging AutoMapper's ProjectTo method rather than Map. This is equivalent to using Select in LINQ: ProjectTo bubbles down into the SQL generation to build efficient queries and return the collection of DTOs. This removes the risk of lazy loading calls as well as the double memory allocation for entities and then DTOs.
Implementing this with your Unit of Work pattern is a bit of an unknown without seeing the code. However it would look something more like:
var result = await _carsUnitOfWork.CarRepository
.GetCarDefault(carFilter,true,_options.Value.priorityLabel)
.ProjectTo<CarDto>(mapperConfig)
.ToListAsync(); // Add Skip() and Take() to support pagination.
The repository method would be changed from being something like:
public async Task<IEnumerable<Car>> GetCarDefault( ... )
to
public IQueryable<Car> GetCarDefault( ... )
Rather than the method returning something like .ToListAsync(), you return the built Linq expression.
I.e. change from something like:
var result = await _context.Cars.Include( ... ).Where(x => ...).ToListAsync();
return result;
to
var query = _context.Cars.Where(x => ....);
return query;
The key difference is that where the existing method likely calls ToListAsync(), we remove that and return the unmaterialized IQueryable that LINQ is building. Also, if the current implementation is eager loading any relations with .Include(), we exclude those; a caller performing projection doesn't need them. If the caller does need Car entity graphs (such as when updating data), the caller can append .Include() statements.
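For example, a caller that does need the graph can compose it onto the returned query (Owner is a hypothetical navigation property):

using Microsoft.EntityFrameworkCore; // for Include / FirstOrDefaultAsync

var carWithOwner = await _carsUnitOfWork.CarRepository
    .GetCarDefault(carFilter, true, _options.Value.priorityLabel)
    .Include(c => c.Owner)
    .FirstOrDefaultAsync();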
It is also worth running a SQL profiler to look at what queries are being run against the database server. This gives you the queries to inspect and test, and can highlight any unexpected queries being triggered (e.g. caused by lazy loading calls).
That should give you some ideas on where to start.
Usually the distinction between LINQ to SQL and LINQ to Objects isn't much of an issue, but how can I determine which is happening?
It would be useful to know when writing the code, but I fear one can only be sure at run time sometimes.
It's not micro-optimization to make the distinction between LINQ-to-SQL and LINQ-to-Objects. The latter requires all of the data to be loaded into memory before you start filtering it. Of course, that can be a major issue.
Most LINQ methods use deferred execution, which means they just build up the query without executing it yet (like Select or Where). A few others execute the query and materialize the result into an in-memory collection (like ToList or ToArray). If you use AsEnumerable you are also using LINQ-to-Objects, and no SQL is generated for the parts after it, which means the data must be loaded into memory (yet still using deferred execution).
So consider the following two queries. The first selects and filters in the database:
var queryLondonCustomers = from cust in db.customers
where cust.City == "London"
select cust;
whereas the second selects all and filters via Linq-To-Objects:
var queryLondonCustomers = from cust in db.customers.AsEnumerable()
where cust.City == "London"
select cust;
The latter has one advantage: you can use any .NET method since it doesn't need to be translated to SQL (e.g. !String.IsNullOrWhiteSpace(cust.City)).
If you just have an IEnumerable<T>, you can't be sure whether it's actually a query or already an in-memory object. Even a try-cast to IQueryable<T> will not tell you for sure what it actually is, because of the AsQueryable method. Maybe you could try-cast it to a collection type. If the cast succeeds, you can be sure that it's already materialized, but otherwise it doesn't tell you whether it's using LINQ-to-SQL or LINQ-to-Objects:
bool isMaterialized = queryLondonCustomers is ICollection<Customer>;
Related: EF ICollection Vs List Vs IEnumerable Vs IQueryable
The first solution that comes to mind is checking the query provider.
If the query is materialized, meaning the data has been loaded into memory, EnumerableQuery<T> is used. Otherwise, a special query provider is used; for example, Entity Framework uses System.Data.Entity.Internal.Linq.DbQueryProvider.
var materialized = query
.AsQueryable()
.Provider
.GetType()
.GetGenericTypeDefinition() == typeof(EnumerableQuery<>);
However, the above is the ideal case, because someone could implement a custom query provider that behaves like EnumerableQuery.
I had the same question, for different reasons.
Judging purely by your title and initial description (which is how a Google search brought me here):
Pre-compilation, given an instance that implements IQueryable, there's no way to know the implementation behind the interface.
At runtime, you need to check the instance's Provider property, as @Danny Chen mentioned.
public enum LinqProvider
{
    Linq2SQL,
    Linq2Objects
}

public static class LinqProviderExtensions
{
    public static LinqProvider LinqProvider(this IQueryable query)
    {
        // An EnumerableQuery<> provider means LINQ-to-Objects.
        if (query.Provider.GetType().IsGenericType &&
            query.Provider.GetType().GetGenericTypeDefinition() == typeof(EnumerableQuery<>))
            return LinqProvider.Linq2Objects;

        // An already-materialized collection is also handled in memory.
        if (typeof(ICollection<>).MakeGenericType(query.ElementType).IsAssignableFrom(query.GetType()))
            return LinqProvider.Linq2Objects;

        return LinqProvider.Linq2SQL;
    }
}
In our case, we were adding additional filters dynamically, but we ran into issues with the different handling of case-sensitivity/null-references on different providers. Hence, at runtime we had to tweak the filters we add based on the type of provider, and we ended up adding the extension method above.
Using EF Core in .NET 6:
To see if the provider is an EF provider, use the following code:
if (queryable.Provider is Microsoft.EntityFrameworkCore.Query.Internal.EntityQueryProvider)
{
// Queryable is backed by EF and is not an in-memory/client-side queryable.
}
One can get the opposite by testing the provider against System.Linq.EnumerableQuery (the non-generic base type of EnumerableQuery<T>, so you don't have to test for generics).
This is useful if you have methods like EF.Functions.Like(...), which can only be executed in the database, and you want to branch to something else in the case of client-side execution.
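A sketch of that branching, reusing the Customer example from earlier (the client-side Contains fallback is only a rough stand-in for LIKE):

using Microsoft.EntityFrameworkCore;

static IQueryable<Customer> FilterByCity(IQueryable<Customer> source, string fragment)
{
    var pattern = $"%{fragment}%";
    if (source.Provider is Microsoft.EntityFrameworkCore.Query.Internal.EntityQueryProvider)
    {
        // Translated to a SQL LIKE; only valid when EF executes the query.
        return source.Where(c => EF.Functions.Like(c.City, pattern));
    }

    // Client-side fallback for in-memory queryables.
    return source.Where(c => c.City.Contains(fragment));
}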
I have been puzzling over a problem this morning with LINQ to SQL. I'll try to summarise with the abbreviated example below to explain my point.
I have two DB tables:
table Parent
{
    ParentId
}

table Child
{
    ChildId
    ParentId [FK]
    Name
    Age
}
These have LINQ-to-SQL equivalent classes in my project; however, I have written two custom model classes that I want my UI to use instead of the LINQ-to-SQL classes.
My data access from the front end goes through a service class, which in turn calls a repository class, which queries the data via LINQ.
At the repository level I return an IQueryable by:
return from data in _data.Children
       select new CustomModel.Child
       {
           ChildId = data.ChildId,
           ParentId = data.ParentId
       };
My service layer then adds an additional query restriction by parent before returning the list of children for that parent.
return _repository.GetAllChildren().Where(c => c.Parent.ParentId == parentId).ToList();
So at this point I get the "method has no supported translation to SQL" error when I run everything, as the c.Parent property of my custom model cannot be converted. [The c.Parent property is an object reference to the linked parent model class.]
That all makes sense so my question is this:
Can you provide the querying process with some rules that will convert a predicate expression into the correct piece of SQL to run at the database, and therefore not trigger an error?
I haven't done much work with LINQ up to now, so forgive my lack of experience if I haven't explained this well enough.
Also, for those commenting on my choice of architecture, I have changed it to get around this problem and I am just playing around with ideas at this stage. I'd like to know if there is an answer for future reference.
Many thanks if anyone can help.
Firstly, it begs the question: why is the repository returning the UI types? If the repo returned the database types, this wouldn't be an issue. Consider refactoring so that the repo deals only with the data model, and the UI does the translation at the end (after any composition).
If you mean "and have it translate down to the database" - then basically, no. Composable queries can only use types defined in the LINQ-to-SQL model, and a handful of supported standard functions. Something similar came up recently on a related question, see here.
For some scenarios (unusual logic, but using the types defined in the LINQ-to-SQL model), you can use UDFs at the database and write the logic yourself (in T-SQL) - but only with LINQ-to-SQL (not EF).
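A minimal sketch of that UDF mapping (the function name is hypothetical and its T-SQL body lives in the database; IsComposable lets LINQ-to-SQL embed it in generated queries):

using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Reflection;

public partial class MyDataContext : DataContext
{
    public MyDataContext(string connection) : base(connection) { }

    [Function(Name = "dbo.fn_ParentIdOf", IsComposable = true)]
    public int ParentIdOf([Parameter(DbType = "Int")] int childId)
    {
        // Delegates to the database function when composed into a query.
        return (int)ExecuteMethodCall(this,
            (MethodInfo)MethodBase.GetCurrentMethod(), childId).ReturnValue;
    }
}

A method mapped this way can then be used inside a composable query, e.g. _data.Children.Where(c => ParentIdOf(c.ChildId) == parentId).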
If the volume isn't high, you can use LINQ-to-Objects for the last bit. Just add an .AsEnumerable() before the affected Where; this will run that bit of logic back in managed .NET code (but the predicate won't be used in the database query):
return _repository.GetAllChildren().AsEnumerable()
.Where(c => c.Parent.ParentId == parentId).ToList();