Efficiency of LINQ to SQL vs stored procedure - C#

Hi, I'm writing an app which has a search page and does a search on the database.
I'm wondering whether I should do this in LINQ or a stored procedure.
Is the performance of a stored procedure much better than that of LINQ to SQL?
I'm thinking it would be, because in order to write the LINQ query you need to use the DataContext to access the table being queried. I imagine this means that if the table is big, the query might become inefficient.
That is if you were using:
context.GetTable<T>();
Can anyone advise me here?

There is unlikely to be much difference UNLESS you encounter a situation where the T-SQL produced by LINQ to SQL is not optimal.
If you want absolute control over the T-SQL, use a stored procedure.
If speed is critical, benchmark both and also examine the T-SQL produced by your LINQ to SQL solution.
Also, be wary of pulling entire tables across the wire in either solution (unless they are small, such as frequently accessed lookup data).
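On that point, note that a LINQ to SQL query is deferred until you enumerate it, so a Where composed on GetTable<T>() is translated into the SQL WHERE clause rather than pulling the table and filtering in memory. A minimal sketch, assuming a hypothetical MyDataContext with a Product table:
// MyDataContext and Product are hypothetical names.
// Requires: using System; using System.Linq; using System.Data.Linq;
using (var db = new MyDataContext())
{
    // Deferred execution: no SQL has been sent to the server yet.
    IQueryable<Product> query = db.GetTable<Product>()
                                  .Where(p => p.CategoryId == 5);

    // Enumerating triggers one round-trip; the filter is translated to
    // SQL (SELECT ... WHERE CategoryId = 5), so a big table does not
    // mean a big result set crossing the wire.
    foreach (var p in query)
        Console.WriteLine(p.Name);
}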

If speed is that critical to you, then you should go ahead and benchmark both options on a realistic set of data. Technically I would expect the SP to be faster, but it might not make much of a difference.

What does "efficient" mean to you?
I'm working on a website where sub-second (preferably sub-500 ms) response is the goal. We're using LINQ for search on most of our stuff. The only time we actually use an SP is when we're using hierarchyid and other SQL Server data types that don't exist in EF.
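For what it's worth, falling back to a proc from EF for those cases can be as simple as a raw call projected onto a plain DTO. A hedged sketch using EF6's Database.SqlQuery; the procedure and DTO names are made up:
// Hypothetical DTO; assume the proc converts the hierarchyid to a
// string path on the server side.
public class NodeDto
{
    public int Id { get; set; }
    public string Path { get; set; }
}

// Requires: using System.Linq; using System.Data.SqlClient; (EF6)
var nodes = db.Database
              .SqlQuery<NodeDto>(
                  "EXEC dbo.GetDescendants @parentId",   // hypothetical proc
                  new SqlParameter("@parentId", nodeId))
              .ToList();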

GetTable probably isn't going to be that different between the two, as fundamentally it's just SELECT * FROM T. You'll see more significant gains from stored procedures in cases where LINQ isn't writing the query very optimally, or in some very high-load situations where caching the execution plan makes a difference.
Benchmarking it is the best answer, but from what it looks like you're doing, I don't think the difference is going to amount to much.
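As a concrete way to examine the T-SQL produced before you benchmark, the LINQ to SQL DataContext exposes a Log property that echoes every command it generates. A minimal sketch (MyDataContext and Product are placeholder names):
// Requires: using System; using System.Linq;
using (var db = new MyDataContext())
{
    // Echo every generated T-SQL command to the console so it can be
    // inspected and compared against a hand-written procedure.
    db.Log = Console.Out;

    var startsWithA = db.GetTable<Product>()
                        .Where(p => p.Name.StartsWith("A"))
                        .ToList();
}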

Related

Performance comparison: SQL query and LINQ data query

I have two options for manipulating data in the C# programming environment:
Query with SQL (SELECT * FROM ... WHERE ...) and get the filtered data.
Get all data (SELECT * FROM ...) and use a LINQ query on the object list.
What is the performance difference between these options for big or average-sized data? Can I use both of them?
The generic answer to performance questions is to try it on your data and see which works better.
In your case, though, there is a right answer: Do the work in the database.
Filtering the data in the database (using the WHERE clause) has two advantages. First, it reduces the amount of data sent from the database to the application. This is almost always a win (unless almost all rows are returned).
Second, it allows the database to optimize the query, using (for instance) available indexes to speed the query.
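To make the two options in the question concrete, here is a hedged sketch of both (the Orders table and CreatedAt column are illustrative). The first sends the filter to the database; the second drags the whole table into memory first:
// Requires: using System.Linq;

// Option 1: filter in the database. The WHERE is translated to SQL,
// only matching rows cross the wire, and indexes can be used.
var recent = db.Orders
               .Where(o => o.CreatedAt >= cutoff)
               .ToList();

// Option 2: filter in memory. ToList() materializes every row first,
// then LINQ to Objects scans them; no index can help at this point.
var recentInMemory = db.Orders
                       .ToList()
                       .Where(o => o.CreatedAt >= cutoff)
                       .ToList();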
Personally, if you can reduce the amount of data you pull into memory from the database, do it. Why download 10M records when you need 100k? You can then refine it further with LINQ for simplicity, using local conditions and so on. For small data you can probably try both, although depending on what object source your LINQ is connected to, you could still be executing SQL anyway.
I assume you're talking about LINQ to SQL here and that the resulting queries are equivalent. If that is the case, the only difference in terms of performance is the LINQ to SQL overhead of translating the C# expression tree into a SQL query. And that cost is not trivial, as the process involves the DB provider, which uses reflection and complex tree-conversion logic.
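If that translation overhead ever shows up in profiling, LINQ to SQL lets you pay it once per query shape via CompiledQuery.Compile. A sketch with placeholder names (MyDataContext, Order):
// Requires: using System; using System.Linq; using System.Data.Linq;
// Declared as a field so the compiled delegate is reused.
static readonly Func<MyDataContext, int, IQueryable<Order>> OrdersByCustomer =
    CompiledQuery.Compile((MyDataContext db, int customerId) =>
        db.Orders.Where(o => o.CustomerId == customerId));

// Usage: the expression tree is translated once, when the delegate is
// built, not again on every call.
var orders = OrdersByCustomer(db, 42).ToList();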

What is the best way: SQL raw queries or LINQ?

I'm more familiar with raw SQL queries. Most of the time I use stored procedures for complex queries, while Insert, Delete, Update, and single-record Select are done using simple Entity Framework methods and LINQ queries. What are the advantages and disadvantages of using LINQ versus raw SQL queries, and what is the best practice?
SQL will almost always be (a lot) quicker, as it is highly optimised towards returning specific sets of data. It uses complex indexing to know where to look for the data. This does, however, often depend on your database maintenance: for example, you can speed up searches by adding indexes, so, as you can see, optimising performance in SQL requires more work than simply writing the stored procedure.
LINQ, on the other hand, is a lot quicker to implement than SQL stored procedures, which tend to take longer to write, and it doesn't require you to perform maintenance on the data. Personally I find SQL difficult to read, whereas LINQ and programmatic code come quite naturally to me.
Therefore I would say SQL is quicker but more tedious, whereas the programmatic approach is slower but easier to implement.
If you are working on a small dataset you can probably get away with LINQ, but if you're working with a large database, SQL is almost always the way to go.

Is SQL or C# faster at pairing?

I have a lot of data which needs to be paired based on a few simple criteria. There is a time window (both records have a DateTime column); if one record is very close in time (within 5 seconds) to another, it is a potential match, and the record closest in time is considered a complete match. Other fields help narrow this down as well.
I wrote a stored procedure which does this matching on the server before returning the full, matched dataset to a C# application. My question is: would it be better to pull in the 1 million (x2) rows and deal with them in C#, or is SQL Server better suited to perform this matching? If SQL Server is, then what is the fastest way of pairing data using DateTime fields?
Right now I select all records from Table 1/Table 2 into temporary tables, iterate through each record in Table 1, look for a match in Table 2, store the match (if one exists) in a temporary table, and then delete both records from their own temporary tables.
I had to rush this piece for a game I'm writing, so excuse the bad (very bad) procedure... It works, it's just horribly inefficient! The whole SP is available on pastebin: http://pastebin.com/qaieDsW7
I know the SP is written poorly, so saying "hey, dumbass... write it better" doesn't help! I'm looking for help in improving it, or advice on how I should do the whole thing differently! I have about 3-5 days to rewrite it; I can push that deadline back a bit, but I'd rather not if you guys can help me in time! :)
Thanks!
Ultimately, compiling your data on the database side is preferable 99% of the time, as it's designed for data crunching (through the use of indexes, relations, etc.). A lot of your code can be consolidated by using joins to compile the data in exactly the format you need. In fact, you can bypass almost all your temp tables entirely and just fill a master Event temp table.
The general pattern is this:
INSERT INTO #Events
SELECT <all interested columns>
FROM FireEvent
LEFT OUTER JOIN HitEvent ON <all join conditions for HitEvent>
This way you match every fire event to zero or more hit events. After our discussion in chat, you can even limit it to zero or one hit event by wrapping it in a subquery, using a window function ROW_NUMBER() OVER (PARTITION BY HitEvent.EventID ORDER BY ...) AS HitRank, and adding WHERE HitRank = 1 to the outer query. This is ultimately what you ended up doing, and you got the results you were expecting (with a bit of work and learning along the way).
If the data is already in the database, that is where you should do the work. You should absolutely learn to display and read query plans in SQL Server Management Studio, and become able to spot and optimize away expensive operations like nested loops.
Your task probably does not require any use of temporary tables. Temporary tables tend to be efficient when they are relatively small and/or heavily reused, which is not your case.
I would advise you to optimize the stored procedure if it is not running fast enough, rather than rewrite it in C#. Why would you want to transfer millions of rows out of SQL Server anyway?
Unfortunately I don't have a SQL Server installation, so I can't test your script, but I don't see any CREATE INDEX statements in there. If you didn't just skip them for brevity, then you should analyze your queries and see which indexes are needed.
So the answer depends on several factors, like the resources available on the client and server (RAM, CPU, concurrent users, concurrent processes, etc.).
Here are some basic rules that will improve your performance regardless of what you use:
Loading a million rows into a C# program is not good practice, unless it is a stand-alone process with plenty of RAM.
Uniqueidentifiers will never outperform integers in comparisons.
Common Table Expressions (CTEs) are a good alternative for fast matching.
Finally, you have to consider output. If there is constant reading and writing that affects the user interface, manage that in memory (C#); otherwise, keep all CRUD operations inside the database.

Best option for dynamic queries?

I'm working on porting an old application from WebForms to MVC, and part of that process is tearing out the existing data layer and moving the logic from stored procedures to code. As I had initially only worked with basic C# SQL functions (System.Data.SqlClient), I went with a lightweight pseudo-ORM (PetaPoco), which just takes a SQL statement as a string and executes it. Building dynamic queries works about the same as in raw SQL: lots of conditionals that add and remove additional code (the average query has ~30 filters).
So after looking around a bit, I found some choices:
A bunch of strings and conditionals that add bits of the query as they are needed. Really nasty, especially when queries get complex, and not something I want to pursue if a better solution exists.
A bunch of conditionals using L2E. Looks more elegant, but when I tested L2E it was an awful experience; it's too bloated in general. Could I do the same thing in L2S? If so, is L2S going to stick around for the next 5-10 years?
Use a PredicateBuilder. Still looking into this, same questions regarding L2S.
EDIT: I can also just stick to the existing stored procedure model, but I have to rewrite them anyway, so it can't hurt to look at other options as I'm still going to have to do the leg work.
Are there any other options out there? Can anyone weigh in with some experience on any of the mentioned methods - mainly, did the method you choose make you want to build a time machine and kill past you for implementing it?
I'd look at LLBLGen. The code it generates is quite good and customizable. They also provide a robust LINQ provider, which may help with your queries. I used it for a couple of large projects and was quite happy.
http://www.llblgen.com/
In my opinion, neither L2S nor L2E can generate efficient SQL code, especially when it comes to complex queries. Even in some relatively simple cases, generating queries via either of the two methods yields inefficient SQL; here's an example: Why does this additional join increase # of queries?
That being said, if you're using SQL Server, L2S is the better option, as L2E is meant to handle any database, because of which it generates less efficient SQL. Another point to keep in mind is that neither L2S nor L2E will leverage tempdb, i.e. generate temp tables, table variables, or CTEs.
I would rewrite the stored procedures, optimizing them as much as possible, and use L2S/L2E for simple queries that generate one round-trip to the server (this number should be as low as possible), while also ensuring that the execution plan SQL Server uses is the most efficient one (i.e. uses indexes, etc.).
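On the "rewrite the stored procedures" point: once a proc is mapped in the L2S designer, it surfaces as an ordinary method on the DataContext, so calling it looks like this (the method and result-type names below are hypothetical, designer-generated):
// Requires: using System; using System.Data.Linq;
using (var db = new MyDataContext())
{
    // The designer generates one method per mapped proc; the rows
    // stream back through ISingleResult like any other query result.
    ISingleResult<GetCustomersByRegionResult> rows =
        db.GetCustomersByRegion("EMEA");

    foreach (var row in rows)
        Console.WriteLine(row.Name);
}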
Not really an answer, but too long for a comment:
I have built a mid-sized web app using the 'concatenate pieces of SQL' method, and am currently in the process of doing a similar job but using L2E.
I found that with some self-control, the concatenate-pieces-of-SQL method is not that bad. Of course, use parameterized queries; don't try to stick user input into the SQL directly.
I have been slowly growing an appreciation for the L2E method though. It gives you type safety, though you do have to do some things "backwards" from how you might do it with SQL -- such as WHERE X IN (...) constructs. But so far I haven't hit anything that L2E can't handle.
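For example, the WHERE X IN (...) construct mentioned above is expressed "backwards" in L2E with Contains on a local collection, which the provider translates into an IN clause (names are illustrative):
// Requires: using System.Linq;
var ids = new[] { 1, 5, 9 };

// The provider translates Contains on a local collection into:
//   WHERE CustomerId IN (1, 5, 9)
var matches = db.Orders
                .Where(o => ids.Contains(o.CustomerId))
                .ToList();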
I feel like the L2E approach would be a little easier to maintain if other people were to be heavily involved.
Do you have actual use cases where the "bloat" of L2E is a problem? Or is it just a general sense of malaise where you feel the framework is doing too much behind the scenes?
I definitely had that feeling at first (ok, still do), and certainly don't like reading the generated SQL (esp. compared to my handwritten SQL from the previous project), but so far have found L2E pretty good with regard to only hitting the DB when it is actually necessary.
Another concern is which DB you're using and how up-to-date its L2E bindings are. If you're using SQL Server, then no problem; MySQL might be flakier, though. A chunk of L2E's slickness comes from its nice integration with VStudio, and VStudio's ability to build entity models from your DB automagically. I'm not sure how good the support is for non-MS DB backends.

Can we convert all SQL scripts to LINQ to SQL expressions, or are there limitations?

I want to convert all of my DB stored procedures to LINQ to SQL expressions. Are there any limitations to this work? Note that there are some complicated queries in my DB.
Several features of SQL Server are not supported by Linq to SQL:
Batch updates (unless you use non-standard extensions);
Table-Valued Parameters;
CLR types, including spatial types and hierarchyid;
DDL statements (I'm thinking specifically of table variables and temporary tables);
The OUTPUT INTO clause;
The MERGE statement;
Recursive Common Table Expressions, i.e. hierarchical queries on a nested set;
Optimized paging queries using SET ROWCOUNT (ROW_NUMBER is not the most efficient);
Certain windowing functions like DENSE_RANK and NTILE;
Cursors - although these should obviously be avoided, sometimes you really do need them;
Analytical queries using ROLLUP, CUBE, COMPUTE, etc.;
Statistical aggregates such as STDEV, VAR, etc.;
PIVOT and UNPIVOT queries;
XML columns and integrated XPath;
...and so on...
With some of these things you could technically write your own extension methods, parse the expression trees and actually generate the correct SQL, but that won't work for all of the above, and even when it is a viable option, it will often simply be easier to write the SQL and invoke the command or stored procedure. There's a reason that the DataContext gives you the ExecuteCommand, ExecuteQuery and ExecuteMethodCall methods.
As I've stated in the past, ORMs such as Linq to SQL are great tools, but they are not silver bullets. I've found that for larger, database-heavy projects, L2S can typically handle about 95% of the tasks, but for that other 5% you need to write UDFs or Stored Procedures, and sometimes even bypass the DataContext altogether (object tracking does not play nice with server triggers).
For smaller/simpler projects it is highly probable that you could do everything in Linq to SQL. Whether or not you should is a different question entirely, and one that I'm not going to try to answer here.
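To illustrate the escape hatches mentioned above: ExecuteQuery lets you hand the DataContext raw T-SQL for the cases L2S cannot express, while still materializing mapped objects. A hedged sketch; the table, entity, and column names are placeholders:
// Requires: using System.Linq;
using (var db = new MyDataContext())
{
    // {0} becomes a real SQL parameter; it is not spliced into the string.
    var expensive = db.ExecuteQuery<Product>(
        "SELECT * FROM Products WHERE Price > {0}", minPrice).ToList();
}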
I've found that in almost all cases where I've done a new project with L2S, I've completely removed the need for stored procedures. In fact, in many of the cases where I would previously have been forced to use a stored proc (multivariable filters, for instance), I've found that by building the query dynamically in LINQ I actually get better queries in the vast majority of cases, since I don't need to include the parts of the query that get translated to "don't care" in the stored proc. So, from my perspective, yes, you should be able to translate your stored procs to LINQ.
A better question, though, might be: should you translate your stored procs to LINQ? The answer, I think, depends on the state of the project, your relative expertise with C#/VB and LINQ versus SQL, the size of the conversion, etc. On an existing project I'd only make the effort if it improves the maintainability or extensibility of the code base, or if I was making significant changes and the new code would benefit. In the latter case you may choose to incrementally move your code to pure LINQ as you touch it to make changes. You can use stored procs with LINQ, so you may not need to change them to make use of LINQ. The multivariable-filter pattern is sketched below.
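A sketch of that pattern with hypothetical filter parameters: each condition is appended only when the caller supplied it, so the generated SQL never contains the "don't care" branches a catch-all proc would need:
// Hypothetical parameters: int? customerId, DateTime? fromDate,
// string status. Requires: using System.Linq;
IQueryable<Order> q = db.Orders;

if (customerId.HasValue)
    q = q.Where(o => o.CustomerId == customerId.Value);
if (fromDate.HasValue)
    q = q.Where(o => o.CreatedAt >= fromDate.Value);
if (!string.IsNullOrEmpty(status))
    q = q.Where(o => o.Status == status);

// Only now is the composed expression translated to SQL and executed;
// unsupplied filters simply never appear in the generated query.
var results = q.ToList();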
I'm not a fan of this approach. This is a major architectural change, because you are now removing a major interface layer you previously put in place to gain a decoupling advantage.
With stored procedures, you have already chosen the interface your database exposes. You will now need to grant users SELECT privileges on all the underlying tables/views instead of EXECUTE on just the application stored procedures and potentially you will need to restrict column read rights at the column level in the tables/views. Now you will need to re-implement at a lower level every explicit underlying table/view/column rights which your stored procedure was previously implementing with a single implicit EXECUTE right.
Whereas before the services expected from the database could be enumerated by an appropriate inventory of stored procedures, now the potential database operations are limited to the exposed tables/views/columns, vastly increasing the coupling and potential for difficulty in estimating scope changes for database refactorings and feature implementations.
Unless there are specific cases where the stored procedure interface is difficult to create/maintain, I see little benefit of changing a working SP-based architecture en masse. In cases where LINQ generates a better implementation because of application-level data coupling (for instance joining native collections to database), it can be appropriate. Even then, you might want to LINQ to the stored procedure on the database side.
If you chose LINQ from the start, you would obviously have done a certain amount of work up front in determining column/view/table permissions and limiting the scope of application code affecting database implementation details.
What does this mean? Do you want to use L2S to call your stored procedures, or do you want to convert all the T-SQL statements in your stored procs to L2S? If it's the latter, you should not have too many problems. Most T-SQL statements can be represented in LINQ without problem.
I might suggest you investigate a tool like Linqer to help with your T-SQL conversion. It will convert almost any T-SQL statement into LINQ. It has saved me quite a bit of time converting some of my queries.
There are many constructs in T-SQL which have no parallel in LINQ to SQL: flow control, the ability to return multiple row sets, and recursive queries, to name a few.
You will need to approach this on a case-by-case basis. Remember that when the SP does significant filtering work on the database, much of that filtering may end up on the client, requiring far more data to move from server to client.
If you already have tested and working stored procedures, why convert them at all? That's just making work for no reason.
If you were starting a new product from scratch and were wondering whether to use stored procedures or not, that would be an entirely different question.
