I have a simple count query using LINQ and EF:
var count = (from I in db.mytable
where xyz
select I).Count();
The query above shows up as locked in the database, while the equivalent raw SQL executes right away:
var count = db.SqlQuery<int>("select count(*) from mytable where xyz").FirstOrDefault();
the code above returns immediately.
A few have suggested removing the .ToList(), which I did, and it made no difference. One thing to note is that this only happens on the PROD server; the QA server executes it quickly, as expected, but the prod server shows the query getting suspended. I suspect this could be a data storage limitation or something server related, but I wanted to make sure I am not doing something stupid in the code.
UPDATE:
One thing I noticed is that the first execution takes longer; when I use Set Next Statement to run it again, it executes immediately. Is the query compiled the first time it runs?
You are calling ToList in the first query, which fetches all records from the database and does the counting in memory. Instead of ToList you can just call Count() to get the same behaviour:
var count = (from I in db.mytable
where xyz
select I).Count();
You must not call the .ToList() method, because it retrieves all objects from the database.
Just call .Count():
var count = (from I in db.mytable
where xyz
select I).Count();
Count can take a predicate. I'm not sure whether it will speed up your code any, but you can write the count as such:
var count = db.mytable.Count(x => predicate);
Where predicate is whatever you are testing for in your where clause.
Simple fiddling in LINQPad shows that this will generate similar, if not exactly the same, SQL as above. This is about the simplest way, in terseness of code, that I know how to do it.
If you need much higher speeds than what EF provides, yet want to stay within the confines of EF without using inline SQL, you could create a stored procedure and call it from EF.
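For example, a minimal sketch of calling such a procedure (CountMyTable is a hypothetical procedure name, and db is assumed to be an EF6-style DbContext):

    // Hedged sketch: dbo.CountMyTable is a placeholder stored procedure,
    // and db.Database.SqlQuery is the EF6 API for raw/proc queries.
    var count = db.Database
                  .SqlQuery<int>("EXEC dbo.CountMyTable @p0", someFilterValue)
                  .FirstOrDefault();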
Related
I have a LINQ query that reads from a SQL table, and one of the fields it returns comes from a custom function (in C#).
Something like:
var q = from my in MyTable
        select new
        {
            ID = my.ID,
            Amount = GetAmount(my.ID)
        };
If I do a q.Dump() in LINQPad, it shows the results, which tells me that it runs the custom function without trying to send it to SQL.
Now I want to union this to another query, with:
var q1 = (from p in AnotherQuery.Union(q)...
and then I get the error "Method has no supported translation to SQL".
So, my logic tells me that I need to dump q in memory and then try to union to that. I've tried doing that with ToList() and creating a secondary query that populates itself from the List, but that leads to a long list of different errors. Am I on the right track, by trying to get q in memory and union on that, or are there better ways of doing this?
You can't use any custom functions in a LINQ query that gets translated - only the functions supported by the given LINQ provider. If you want your query to happen on the server, you need to stick with the supported functions (even if it sometimes means having to inline code that would otherwise be reused).
The difference between your two queries boils down to when (and where) the projection happens. In your first case, the data from MyTable is returned from the DB - in your sample, just the ID. Then, the projection happens on top of this - the GetAmount method is called in your application for each ID.
On the other hand, there's no such way for this to happen in your second query, since you're not using GetAmount in the final projection.
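To make that concrete, here is a rough sketch of the first case only, using the names from the question (AsEnumerable is just one way to mark where the client-side part begins):

    // Only the ID column is fetched by SQL; GetAmount runs in the application.
    var q = MyTable
        .Select(my => new { my.ID })   // this part is translated to SQL
        .AsEnumerable()                // switch to LINQ to Objects from here on
        .Select(x => new
        {
            x.ID,
            Amount = GetAmount(x.ID)   // executed client-side, never sent to SQL
        });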
You either need to replace the custom function with an inline expression the provider understands, or refactor your queries so that the supported parts run on the server and the rest happens in memory. Beyond that, it's hard to give specific sample code, since it depends entirely on your actual query and what you're really trying to query for.
I have a difficult question. I have seen that the number of records in a query can change without re-running the same query.
The code below shows the scenario:
using (var db = new MyContext())
{
    var query = from e in db.Entities select e;
    // here query.Count() is equal to 100, for example
    Thread.Sleep(10000);
    // meanwhile the db has been populated
    // here query.Count() is equal to 200, for example, without running the query again
}
My question is: why this behaviour? Why does there seem to be an automatic binding between the query result and the data layer? Does Entity Framework work in the background to update the query result?
Thanks in advance.
Remember that with an IQueryable, thanks to deferred execution, the query will be evaluated and executed against the database every time that you enumerate it, that is, when you run .Count(), .ToList() etc.
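A quick sketch of what that means for the code above (Entities and the example counts are taken from the question):

    var query = from e in db.Entities select e;

    var first = query.Count();    // issues SELECT COUNT(*) now - e.g. 100
    Thread.Sleep(10000);          // meanwhile more rows are inserted
    var second = query.Count();   // issues SELECT COUNT(*) again - e.g. 200

    // To "freeze" a result, materialize it once and count in memory:
    var snapshot = query.ToList();
    var frozen = snapshot.Count;  // does not change when the table changes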
If in doubt, use a profiler, such as MiniProfiler or EF Profiler to understand exactly when you are hitting the database.
I have a situation where my application constructs a dynamic LINQ query using PredicateBuilder based on user-specified filter criteria (aside: check out this link for the best EF PredicateBuilder implementation). The problem is that this query usually takes a long time to run and I need the results of this query to perform other queries (i.e., joining the results with other tables). If I were writing T-SQL, I'd put the results of the first query into a temporary table or a table variable and then write my other queries around that. I thought of getting a list of IDs (e.g., List<Int32> query1IDs) from the first query and then doing something like this:
var query2 = DbContext.TableName.Where(x => query1IDs.Contains(x.ID))
This will work in theory; however, the number of IDs in query1IDs can be in the hundreds or thousands (and the LINQ expression x => query1IDs.Contains(x.ID) gets translated into a T-SQL "IN" statement, which is bad for obvious reasons) and the number of rows in TableName is in the millions. Does anyone have any suggestions as to the best way to deal with this kind of situation?
Edit 1: Additional clarification as to what I'm doing.
Okay, I'm constructing my first query (query1) which just contains the IDs that I'm interested in. Basically, I'm going to use query1 to "filter" other tables. Note: I am not using a ToList() at the end of the LINQ statement---the query is not executed at this time and no results are sent to the client:
var query1 = DbContext.TableName1.Where(ComplexFilterLogic).Select(x => x.ID)
Then I take query1 and use it to filter another table (TableName2). I now put ToList() at the end of this statement because I want to execute it and bring the results to the client:
var query2 = (from a in DbContext.TableName2 join b in query1 on a.ID equals b select new { a.Column1, a.Column2, a.Column3, ..., a.ColumnM }).ToList();
Then I take query1 and re-use it to filter yet another table (TableName3), execute it and bring the results to the client:
var query3 = (from a in DbContext.TableName3 join b in query1 on a.ID equals b select new { a.Column1, a.Column2, a.Column3, ..., a.ColumnM }).ToList();
I can keep doing this for as many queries as I like:
var queryN = (from a in DbContext.TableNameN join b in query1 on a.ID equals b select new { a.Column1, a.Column2, a.Column3, ..., a.ColumnM }).ToList();
The Problem: query1 takes a long time to execute. When I execute query2, query3, ..., queryN, query1 gets executed (N-1) times...this is not a very efficient way of doing things (especially since query1 isn't changing). As I said before, if I were writing T-SQL, I would put the result of query1 into a temporary table and then use that table in the subsequent queries.
Edit 2:
I'm going to give the credit for answering this question to Albin Sunnanbo for his comment:
When I had similar problems with a heavy query that I wanted to reuse in several other queries I always went back to the solution of creating a join in each query and put more effort in optimizing the query execution (mostly by tweaking my indexes).
I think that's really the best that one can do with Entity Framework. In the end, if the performance gets really bad, I'll probably go with John Wooley's suggestion:
This may be a situation where dropping to native ADO against a stored proc returning multiple results and using an internal temp table might be your best option for this operation. Use EF for the other 90% of your app.
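Roughly what I have in mind for that approach (the procedure, parameter, and connection names below are just placeholders, not an existing proc):

    // Rough ADO.NET sketch: one stored procedure builds the temp table once
    // and returns several result sets; dbo.GetFilteredResults is hypothetical.
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.GetFilteredResults", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@Filter", filterValue);

        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            do
            {
                while (reader.Read())
                {
                    // read the columns of the current result set
                }
            } while (reader.NextResult());   // move on to the next result set
        }
    }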
Thanks to everyone who commented on this post...I appreciate everyone's input!
If TableName is not too big to load the whole table, you can use
var tableNameById = DbContext.TableName.ToDictionary(x => x.ID);
to fetch the whole table and automatically put it in a local Dictionary with ID as key.
Another way is to just "force" the LINQ evaluation with .ToList(), in this case fetching the whole table and doing the Where part locally with LINQ to Objects.
var query1Lookup = new HashSet<int>(query1IDs);
var query2 = DbContext.TableName.ToList().Where(x => query1Lookup.Contains(x.ID));
Edit:
Storing a list of IDs from one query and using that list as a filter in another query can usually be rewritten as a join.
When I had similar problems with a heavy query that I wanted to reuse in several other queries I always went back to the solution of creating a join in each query and put more effort in optimizing the query execution (mostly by tweaking my indexes).
Since you are running subsequent queries off the results, take your first query and create it as a view on your SQL Server, add the view to your context, and build your LINQ queries against the view.
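For instance (FilteredIds here is a hypothetical entity assumed to be mapped to that view):

    // Hedged sketch: FilteredIds is assumed to be mapped to the SQL view
    // that encapsulates the expensive filter from query1.
    var query2 = (from a in DbContext.TableName2
                  join v in DbContext.FilteredIds on a.ID equals v.ID
                  select a).ToList();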
Have you considered composing your query as per this article (using the decorator design pattern):
Composed LINQ Queries using the Decorator Pattern
The premise is that, instead of enumerating your first (very costly) query, you use the decorator pattern to produce a chain of IQueryable built from query 1 through query N. This way you always execute the fully filtered form of the query.
Hope this helps.
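A minimal sketch of that composition idea, reusing the names from the question (TableName1 as the entity type and ComplexFilterLogic as an Expression predicate are assumptions; this is an illustration of chaining IQueryable, not the article's exact code):

    // Each helper only wraps the incoming IQueryable; nothing executes here.
    IQueryable<TableName1> ApplyHeavyFilter(IQueryable<TableName1> source)
    {
        return source.Where(ComplexFilterLogic);
    }

    IQueryable<int> SelectIds(IQueryable<TableName1> source)
    {
        return source.Select(x => x.ID);
    }

    // Compose the chain; a single SQL statement runs only at ToList().
    var ids = SelectIds(ApplyHeavyFilter(DbContext.TableName1));

    var query2 = (from a in DbContext.TableName2
                  join b in ids on a.ID equals b
                  select a).ToList();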
Another question regarding EF:
I was wondering what's going behind the scenes when iterating over a query result.
For example, check out the following code:
var activeSources = from e in entitiesContext.Sources
where e.IsActive
select e;
and then:
foreach (Source currSource in activeSources)
{
    // code based on the current source...
}
Important note: Each iteration takes a while to complete (from 1 to 25 seconds).
Now, I assume EF is based on DataReaders for maximum efficiency, so based on that assumption, I figure that in the above case, the Database connection will be kept open until I finish iterating over the results, which will be a very long time (when talking in terms of code), which is something I obviously don't want.
Is there a way to fetch the entire data up front, like I would have done with plain old ADO.NET DataAdapters, DataSets and the Fill() method, instead of using DataReaders?
Or maybe I'm way off with my assumptions?
In any case I would've loved to be pointed to a good source explaining this if available.
Thanks,
Mikey
If you want to get all of the data up front, similar to Fill(), you need to force the query to execute.
var activeSources = from e in entitiesContext.Sources
where e.IsActive
select e;
var results = activeSources.ToList();
After ToList() is called you will have the data and be disconnected from the database.
If you want to return all results at once, use .ToList(); then deferred execution won't happen.
var activeSources = (from e in entitiesContext.Sources
where e.IsActive
select e).ToList();
I've been searching here and Google, but I'm at a loss. I need to let users search a database for reports using a form. If a field on the form has a value, the app will get any reports with that field set to that value. If a field on a form is left blank, the app will ignore it. How can I do this? Ideally, I'd like to just write Where clauses as Strings and add together those that are not empty.
.Where("Id=1")
I've heard this is supposed to work, but I keep getting an error: "could not be resolved in the current scope of context Make sure all referenced variables are in scope...".
Another approach is to pull all the reports then filter it one where clause at a time. I'm hesitant to do this because 1. that's a huge chunk of data over the network and 2. that's a lot of processing on the user side. I'd like to take advantage of the server's processing capabilities. I've heard that it won't query until it's actually requested. So doing something like this
var qry = ctx.Reports
.Select(r => r);
does not actually run the query until I do:
qry.First()
But if I start doing:
qry = qry.Where(r => r.Id == 1).Select(r => r);
qry = qry.Where(r => r.reportDate == new DateTime(2010, 2, 2)).Select(r => r);
Would that run the query? Since I'm adding a where clause to it. I'd like a simple solution...in the worst case I'd use the Query Builder things...but I'd rather avoid that (seems complex).
Any advice? :)
LINQ delays record fetching until a record actually has to be fetched.
That means stacking Where clauses only adds AND/OR conditions to the query, but still does not execute it.
Execution of the generated query happens at the precise moment you try to get a record (First(), Any(), etc.), a list of records (ToList()), or enumerate them (foreach).
.Take(N) is not considered fetching records; it just adds a SELECT TOP N / LIMIT N to the query.
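So for your form scenario, a sketch like the following keeps everything on the server and runs only once (Report as the entity type and the nullable reportId/reportDate form fields are illustrative assumptions):

    // Build the query from only the fields the user filled in.
    IQueryable<Report> qry = ctx.Reports;

    if (reportId.HasValue)
        qry = qry.Where(r => r.Id == reportId.Value);

    if (reportDate.HasValue)
        qry = qry.Where(r => r.reportDate == reportDate.Value);

    // Nothing has hit the database yet; this line executes the combined query.
    var results = qry.ToList();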
No, this will not run the query, you can structure your query this way, and it is actually preferable if it helps readability. You are taking advantage of lazy evaluation in this case.
The query will only run when you enumerate its results, e.g. with foreach, or when you force eager evaluation, e.g. by calling .ToList(), or when you evaluate to a single result using First() or Single().
Try checking out this dynamic Linq dll that was released a few years back - it still works just fine and looks to be exactly what you are looking for.
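For reference, with that library the string-based predicates look roughly like this (assuming the System.Linq.Dynamic package; the exact namespace and package name may differ by version):

    using System.Linq.Dynamic;   // from the Dynamic LINQ NuGet package

    // String predicates are parsed and translated to SQL like normal LINQ.
    var qry = ctx.Reports.Where("Id == @0", 1);
    var byDate = ctx.Reports.Where("reportDate == @0", new DateTime(2010, 2, 2));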