I have a products table with a lot of rows. When a user searches the website, I run a select against this table, and I also use the Include method. Checking with Profiler, I noticed that EF generates a query with a lot of inner joins, left joins, etc.
What I want to do is take this select query, insert its result into a cache table in the same database, and then create a service to refresh that table every x minutes.
The problem is: how do I make EF select from this cache table instead, so that each search becomes a plain SELECT * rather than hitting the products table with all the joins every time?
Thanks!
I'm pretty certain EF doesn't support temp tables - at least not out of the box - but it's constantly changing.
Your best bet is to do something like this...
dbcontext.Database.ExecuteSqlCommand("...")
...and I'm guessing you can run arbitrary SQL there (I know most things can be passed in, though I'm not sure about the limitations - you could run a stored procedure, create indexes, etc.) to set up a temp table.
Then the next step would be to do the opposite, something like this...
dbcontext.MyTable.SqlQuery("...").ToList()
...to map the SQL results back onto one of your mapped entities - or, for a non-mapped type, use dbcontext.Database.SqlQuery<T>("...").ToList().
The question is how to do it exactly - I'm not sure of your specifics. But you could create the table beforehand, have it mapped in the model, and use it for temp purposes.
Basically, that's DBA thinking - EF is not perfect for such things (see something similar here: Recommended usage of temp table or table variable in Entity Framework 4. Update Performance Entity Framework) - but you might be OK with custom-run queries like the above.
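As a rough sketch of how the two halves could fit together - every table, column, and class name below is an assumption for illustration, not something from your question:

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

// Assumed: a persistent dbo.ProductCache table holding the pre-joined rows.
public class ProductCacheRow
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public string CategoryName { get; set; }
}

public static class ProductCacheHelper
{
    // Refresh step - could live in the service that runs every x minutes.
    public static void RefreshCache(DbContext db)
    {
        db.Database.ExecuteSqlCommand(
            @"TRUNCATE TABLE dbo.ProductCache;
              INSERT INTO dbo.ProductCache (ProductId, Name, CategoryName)
              SELECT p.ProductId, p.Name, c.Name
              FROM dbo.Products p
              JOIN dbo.Categories c ON c.CategoryId = p.CategoryId;");
    }

    // Search step - a plain single-table select, no joins generated by EF.
    public static List<ProductCacheRow> GetCachedProducts(DbContext db)
    {
        return db.Database
            .SqlQuery<ProductCacheRow>(
                "SELECT ProductId, Name, CategoryName FROM dbo.ProductCache")
            .ToList();
    }
}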
Hope it helps.
EDIT: this thread from the EF forums might also help, but it's more involved.
Related
I am working with Entity Framework in a C# project to read/write data from/to a SQL Server database.
Until now I have read the data and done some calculations. Now I want to write the results back to the database. What I essentially have is a list of [ID, Date, Value] tuples.
The update logic should be an UPSERT: an INSERT should occur if no row is present in the target table for a given combination of ID and Date, and an UPDATE should be done if that combination is already present.
Let's say I have ~1000 records in my result list. If I loop through the list and execute the UPSERT logic for each item in the list, that would result in ~1000 INSERT or UPDATE statements. I think that there should be a more efficient way to do this.
Hence, my two questions are:
How can I implement the described UPSERT logic without generating a separate SQL statement for each result item?
More generally: what would such UPSERT logic look like in C# using Entity Framework?
I am unable to comment to ask whether you have access to stored procedures, or whether you have had other considerations, so apologies in advance.
Try a delete on [ID, Date] for all records you are 'upserting', and then a bulk insert. You wouldn't necessarily need Entity Framework POCO actions, but it would solve your issue.
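A rough sketch of that approach with plain ADO.NET - stage the incoming rows with one bulk copy, then do the delete and the insert as two set-based statements. The dbo.Results table, its columns, and connectionString are assumed names:

using System;
using System.Data;
using System.Data.SqlClient;

// `results` is the ~1000-item list of (Id, Date, Value) tuples.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    // 1) Stage the incoming rows in a temp table with a single bulk copy.
    using (var create = new SqlCommand(
        "CREATE TABLE #Staging (ID int, [Date] date, [Value] decimal(18,4));", conn))
        create.ExecuteNonQuery();

    var staging = new DataTable();
    staging.Columns.Add("ID", typeof(int));
    staging.Columns.Add("Date", typeof(DateTime));
    staging.Columns.Add("Value", typeof(decimal));
    foreach (var r in results)
        staging.Rows.Add(r.Id, r.Date, r.Value);

    using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "#Staging" })
        bulk.WriteToServer(staging);

    // 2) Delete existing matches, then insert everything - two statements total.
    using (var upsert = new SqlCommand(
        @"DELETE t FROM dbo.Results t
          JOIN #Staging s ON s.ID = t.ID AND s.[Date] = t.[Date];
          INSERT INTO dbo.Results (ID, [Date], [Value])
          SELECT ID, [Date], [Value] FROM #Staging;", conn))
        upsert.ExecuteNonQuery();
}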
Ideally, for performance, you want to generate a SQL Server MERGE statement, which does the INSERT and/or UPDATE as a single action. I don't believe this is built into Entity Framework, unfortunately, but there are fairly straightforward workarounds. For one, see here:
https://www.flexlabs.org/2018/02/adding-upsert-support-for-entity-framework-core
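If you'd rather roll the MERGE yourself, here's a hedged sketch: define a table type and a stored procedure once, then ship all ~1000 rows in a single call. Every object name below (dbo.ResultRow, dbo.UpsertResults, dbo.Results, the `results` list, `context`) is an assumption:

using System;
using System.Data;
using System.Data.SqlClient;

// One-time setup, run from SSMS (GO is a batch separator, so this cannot be
// sent through ExecuteSqlCommand as one string):
//
//   CREATE TYPE dbo.ResultRow AS TABLE
//       (ID int, [Date] date, [Value] decimal(18,4));
//   GO
//   CREATE PROCEDURE dbo.UpsertResults @rows dbo.ResultRow READONLY AS
//   MERGE dbo.Results AS t
//   USING @rows AS s ON t.ID = s.ID AND t.[Date] = s.[Date]
//   WHEN MATCHED THEN UPDATE SET t.[Value] = s.[Value]
//   WHEN NOT MATCHED THEN
//       INSERT (ID, [Date], [Value]) VALUES (s.ID, s.[Date], s.[Value]);

// Per call: pass all rows as one table-valued parameter.
var tvp = new DataTable();
tvp.Columns.Add("ID", typeof(int));
tvp.Columns.Add("Date", typeof(DateTime));
tvp.Columns.Add("Value", typeof(decimal));
foreach (var r in results)
    tvp.Rows.Add(r.Id, r.Date, r.Value);

var rowsParam = new SqlParameter("@rows", SqlDbType.Structured)
{
    TypeName = "dbo.ResultRow",
    Value = tvp
};
context.Database.ExecuteSqlCommand("EXEC dbo.UpsertResults @rows", rowsParam);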
I realize this question has been asked before, but nothing I've read really answers my question.
I have a table with millions of rows of data that is used in multiple queries a day. I want to move the majority of the data to another table with the same schema. The second table will be an "archive" table.
I would like a list of options for archiving the data, so I can present them to my boss. So far I'm considering an INSERT INTO ... SELECT statement, SqlBulkCopy in a C# console application, and I'm starting to dig into SSIS to see what it can do. I plan on doing this over a weekend or multiple weekends.
The table has an ID as the primary key
The table also has a few foreign key constraints
Thanks for any help.
I assume that this is for SQL Server. In that case, partitioned tables might be an additional option. Otherwise I'd always go for an INSERT ... SELECT run by a job in SQL Server, or - if you can't run it directly in SQL Server - create a stored procedure and run it through a little C# tool that you schedule.
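A minimal sketch of that little C# tool, assuming the archive logic lives in a stored procedure (the procedure name and connectionString are invented here):

using System.Data;
using System.Data.SqlClient;

// Schedule this with Windows Task Scheduler (or skip it and use a SQL Agent
// job). dbo.ArchiveOldProducts is an assumed name; it would contain the
// INSERT ... SELECT into the archive table plus the trailing DELETE.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.ArchiveOldProducts", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.CommandTimeout = 0; // moving millions of rows can run long
    conn.Open();
    cmd.ExecuteNonQuery();
}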
Try executing something like:
CREATE TABLE mynewtable AS SELECT * FROM myoldtable WHERE any_filter..;
This creates a new table with a copy of the data in a single statement, and it works on most database engines (SQL Server doesn't support CREATE TABLE ... AS SELECT; there you'd use SELECT ... INTO, as in the next answer).
Use this in the case of SQL Server 2008:
SELECT * INTO new_table FROM old_table
In the event that you have a set data archive interval, you may be able to leverage the partition-to-archive solution described in the following article.
http://blogs.msdn.com/b/felixmar/archive/2011/02/14/partitioning-amp-archiving-tables-in-sql-server-part-1-the-basics.aspx
Our team has leveraged a similar partition / archive solution in the past with good success.
I have a scenario where I need to synchronize a database table with a list (XML) from an external system.
I am using EF but am not sure which would be the best way to achieve this in terms of performance.
There are two ways to do this as I see it, but neither seems efficient to me.
Call the DB each time
- Read each entry from the XML
- Try to retrieve the matching entry from the database
- If no entry is found, add the entry
- If found, update its timestamp
- At the end of the loop, delete all entries with an older timestamp
Load all objects and work in memory
- Read all EF objects into a list
- Delete all EF objects
- Add an item for each item in the XML
- Save the changes to the DB
The lists are not that long - I estimate around 70k rows. I don't want to clear the DB table before inserting the new rows, as this table is a data source for a web service, and I don't want to lock the table while it's possible to query it.
If I were doing this in T-SQL, I would most likely insert the rows into a temp table and join to find the missing and deleted entries, but I have no idea how best to handle this in Entity Framework.
Any suggestions / ideas ?
The general problem with Entity Framework is that, when changing data, it fires a query for each changed record anyway, regardless of lazy or eager loading. So by nature it will be extremely slow (think a factor of 1000+).
My suggestion is to use a stored procedure with a table-valued parameter and ignore Entity Framework altogether. You could use a MERGE statement (see the sketch below).
70k rows is not much, but 70k insert/update/delete statements will always be very slow.
You could test it and see if the performance is manageable, but intuition says Entity Framework is not the way to go.
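A sketch of the MERGE inside such a stored procedure. Unlike a plain upsert, this sync scenario also needs a delete branch for rows that are no longer in the XML, which replaces the timestamp bookkeeping from your first option. All names here are invented, and @rows would be a table-valued parameter filled from the parsed XML (built and passed the same way as the TVP in the upsert sketch earlier on this page):

// dbo.SyncEntries would wrap this statement; dbo.Entries, dbo.EntryRow and
// the columns are assumptions. @rows is the TVP holding the ~70k parsed
// XML entries.
const string syncSql = @"
    MERGE dbo.Entries AS t
    USING @rows AS s ON t.ExternalId = s.ExternalId
    WHEN MATCHED THEN
        UPDATE SET t.Payload = s.Payload
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ExternalId, Payload) VALUES (s.ExternalId, s.Payload)
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;";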
I would iterate over the elements in the XML and update the corresponding row in the DB one at a time. I guess that's what you meant by your first option? As long as you have a good query plan for selecting each row, that should be fairly efficient. As you said, 70k rows isn't that much, so you are better off keeping the code straightforward rather than doing something less readable for a little more speed.
It depends. It's OK to use EF if there won't be many changes (say, fewer than a few hundred). Otherwise you need a bulk insert into the DB and a merge of the rows inside the database.
We use LINQ to SQL in our project. One of the tables, "Users", is used in every action in the project.
Recently we were asked to add an "IsDeleted" column to the table and to take that column into account in every data fetch in our LINQ to SQL queries.
We wouldn't want to add "WHERE IsDeleted = False" to all queries.
Is it possible to "interrupt" LINQ after the data has been fetched but before it is passed on to the rest of the code?
This could be solved in C#, but that would really be the wrong tool for the job.
Create a view in the database that includes this filter and work only with the view from now on. You can even enforce this by no longer granting privileges on the underlying table.
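A sketch of that setup - the view name and columns are invented for illustration:

using System.Data.Linq.Mapping;

// The view itself, created once in the database:
//   CREATE VIEW dbo.ActiveUsers AS
//   SELECT * FROM dbo.Users WHERE IsDeleted = 0;

// LINQ to SQL happily maps an entity to a view; point the existing Users
// mapping at dbo.ActiveUsers and every query filters itself.
[Table(Name = "dbo.ActiveUsers")]
public class User
{
    [Column(IsPrimaryKey = true)]
    public int Id { get; set; }

    [Column]
    public string Name { get; set; }

    // ... remaining columns; no "WHERE IsDeleted = False" needed anywhere.
}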
I am using Entity Framework, C# 4.0 and Visual Studio 2010. I need to execute a simple SQL query to delete the content of four tables. The query contains:
DELETE FROM dbo.tblMFile
DELETE FROM dbo.tblMContacts
DELETE FROM dbo.tblPersonDetails
DELETE FROM dbo.tblAddresses
There is a foreign key constraint between some of the tables.
There seems to be no simple way to do this.
With reference to my comments on the first respondent's answer:
Because of the highly confidential nature of the data, I NEED a way to quickly delete all content (a requirement and a security issue).
I am new to EF and keen to learn
I am using DELETE instead of TRUNCATE because of the foreign key constraints mentioned above (also, the SQL above is illustrative, not definitive).
I am a firm and long-time believer in strong typing and prefix all my objects with a type indicator. It has saved me hours (or even days) of debugging.
Humans are polite, sensitive and informative (among many other attributes). I aspire to be human.
No, and that is a good thing: ORMs are not meant for bulk deletions. If you really want to use LINQ to issue a DELETE, use BLToolkit, which lets you express pretty much arbitrary standard DML - non-bulk, so no TRUNCATE - in LINQ.
Delete ALL records?
Use TRUNCATE TABLE, not DELETE:
TRUNCATE TABLE tblMFile
Depending on how much the table contains, that can be thousands of times faster - mostly because a truncate is minimally logged: it records the page deallocations rather than every deleted row.
To add to the answer from @TomTom: you can't do that directly using Entity Framework, but you can:
create a stored procedure to delete the records (it could also take the table name as a parameter)
use ExecuteStoreCommand and write the SQL directly
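A sketch of the second option. The delete order below follows the SQL in the question, but it should match your actual FK dependencies (children before parents):

// ObjectContext.ExecuteStoreCommand sends raw DML straight to the database;
// order the deletes so the foreign keys don't block them.
context.ExecuteStoreCommand(
    @"DELETE FROM dbo.tblMFile;
      DELETE FROM dbo.tblMContacts;
      DELETE FROM dbo.tblPersonDetails;
      DELETE FROM dbo.tblAddresses;");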
Try this:
// Placeholders instead of string concatenation - avoids SQL injection.
_db.ExecuteStoreCommand(
    "DELETE FROM MachineMaster WHERE IdMachine = {0} OR IdMasterMachine = {0}",
    obj_Machine.IdMachine);