I am looking at Dapper as an ORM for our next project, but something is not clear to me.
In this question there are answers on how to do inserts, updates and deletes.
Since this question is already a bit older, maybe there are better ways nowadays.
But my biggest concern is how to do an ApplyUpdates on a list.
Suppose you have a List<Customer> that is built as shown here.
And suppose you show this list in a DataGridView.
Now, the user will
alter the data of a few rows,
insert a few new rows,
delete a few rows
And when he clicks the save button, you want to save all these changes in this List<Customer> to your database, using Dapper.
How can I go about that?
If I have to loop through the list and call an insert, update, or delete statement for each row, how can I determine which operation to use? The deleted rows will be gone from the list.
I also want to make sure that if one statement fails, everything is rolled back.
And I need the primary keys for all new rows returned and filled in on the DataGridView.
In other words, everything that the ADO.NET DataAdapter/DataTable does for you.
What is the best way to do this using Dapper ?
EDIT
The best way I can think of now is to keep three lists in memory: when the user alters some data, add the row to the update list, and do the same for an insert list and a delete list, so I can run through these three lists on the button click.
But I am hoping there is a better alternative built into Dapper for this kind of situation.
You need to handle this yourself, as Dapper doesn't manage it. There are several strategies for doing so.
Delete all items and then add them again.
Easy to implement.
Bad for DB performance, as it effectively means two DB writes per row.
Loop through the items and update without checking for changes
Not too difficult to implement.
DB performance better than option 1, but not ideal.
Adds and deletes are more complex to detect than updates.
Loop through the items and update only if there are differences
More difficult to implement.
Requires reading from the DB first to compare values (extra DB action)
Store changes in a separate list
Even more difficult to implement, as you need to "wrap" List updates in another class (a first-class collection?) and store the changes; see the sketch after this list.
Most efficient for DB, as you execute only the minimum on each DB item.
In the end, you might select different approaches for different entities depending on how you need to optimise. E.g., option 1 is fine if you know you will only have a few entities and not many updates.
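For illustration, here is a minimal sketch of the last approach (separate insert/update/delete lists) using Dapper, wrapped in a single transaction so that a failure rolls everything back, and returning the generated keys the question asks for. The Customer class, the table and column names, and the SQL Server SCOPE_IDENTITY() call are assumptions, not anything Dapper mandates:

using System.Collections.Generic;
using System.Data;
using Dapper;

public class Customer
{
    public int Id { get; set; }       // primary key (assumed identity column)
    public string Name { get; set; }
}

public static void ApplyUpdates(IDbConnection conn,
    List<Customer> inserted, List<Customer> updated, List<Customer> deleted)
{
    conn.Open();
    using (IDbTransaction tx = conn.BeginTransaction())
    {
        foreach (Customer c in inserted)
        {
            // Return the generated key so the grid can be refreshed.
            c.Id = conn.ExecuteScalar<int>(
                "INSERT INTO Customer (Name) VALUES (@Name); " +
                "SELECT CAST(SCOPE_IDENTITY() AS int);", c, tx);
        }
        foreach (Customer c in updated)
            conn.Execute("UPDATE Customer SET Name = @Name WHERE Id = @Id", c, tx);
        foreach (Customer c in deleted)
            conn.Execute("DELETE FROM Customer WHERE Id = @Id", c, tx);

        tx.Commit(); // an exception before this point leaves the DB untouched
    }
}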
I am using C# with .NET 4.5. I am making a scraper which collects specific data. Each time a value is scraped, I need to make sure it hasn't already been added to the SQLite db.
To do this, each time a value is scraped I make a call to query the db to check whether it already contains the value, and if not, I make another call to insert it.
Since I am scraping multiple values per second, this gets to be very IO-intensive, with constant calls to the db.
My question is, is there any better way to do this? Perhaps I could queue the values scraped and then run a batch query at once? Is that possible?
I see three approaches:
Use INSERT OR IGNORE, which will reject an entry if it is already present (based on primary key and unique fields). Or use a plain INSERT (or its equivalent, INSERT OR ABORT), which will return SQLITE_CONSTRAINT, an error you will have to catch and handle if you want to count failed insertions.
Accumulate, outside the database, the updates you want to make. When you have accumulated enough (or all of them), start a transaction (BEGIN;), do your insertions (you can use INSERT OR IGNORE here as well, as sketched below), and commit the transaction (COMMIT;).
You could pre-fetch a list of items you already have, depending on its size, and check against that list, if your data model allows it.
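A sketch combining options 1 and 2, using the System.Data.SQLite package (an assumption; Microsoft.Data.Sqlite would look much the same). The table name, column, and connection string are illustrative, and INSERT OR IGNORE only skips duplicates if the column has a UNIQUE constraint:

using System.Collections.Generic;
using System.Data;
using System.Data.SQLite;

public static void FlushBatch(IEnumerable<string> values)
{
    using (var conn = new SQLiteConnection("Data Source=scrape.db"))
    {
        conn.Open();
        using (SQLiteTransaction tx = conn.BeginTransaction())
        using (var cmd = new SQLiteCommand(
            "INSERT OR IGNORE INTO ScrapedValues (Value) VALUES (@v);", conn, tx))
        {
            SQLiteParameter p = cmd.Parameters.Add("@v", DbType.String);
            foreach (string v in values)
            {
                p.Value = v;
                cmd.ExecuteNonQuery(); // duplicates are silently skipped
            }
            tx.Commit(); // one journal flush for the whole batch, not one per row
        }
    }
}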
I have a CSV file and I have to insert it into a SQL Server database. Is there a way to speed up the LINQ inserts?
I've created a simple Repository method to save a record:
public void SaveOffer(Offer offer)
{
    Offer dbOffer = this.db.Offers.SingleOrDefault(
        o => o.offer_id == offer.offer_id);

    // add new offer
    if (dbOffer == null)
    {
        this.db.Offers.InsertOnSubmit(offer);
    }
    // update existing offer
    else
    {
        dbOffer = offer;
    }

    this.db.SubmitChanges();
}
But using this method, the program is much slower than inserting the data using ADO.NET SQL inserts (new SqlConnection, new SqlCommand to select whether the row exists, new SqlCommand to update/insert).
On 100k CSV rows it takes about an hour, versus 1 minute or so the ADO.NET way. For 2M CSV rows, ADO.NET took about 20 minutes; LINQ added about 30k of those 2M rows in 25 minutes. My database has 3 tables, linked in the dbml, but the other two tables are empty. The tests were run with all the tables empty.
P.S. I've tried to use SqlBulkCopy, but I need to do some transformations on Offer before inserting it into the db, and I think that defeats the purpose of SqlBulkCopy.
Updates/Edits:
After 18 hours, the LINQ version had added just ~200K rows.
I've tested the import with LINQ inserts only as well, and it is also really slow compared with ADO.NET. I haven't seen a big difference between inserts/SubmitChanges only and selects/updates/inserts/SubmitChanges.
I still have to try batch commit, manually connecting to the db and compiled queries.
SubmitChanges does not batch changes; it does a single insert statement per object. If you want to do fast inserts, I think you need to stop using LINQ.
While SubmitChanges is executing, fire up SQL Profiler and watch the SQL being executed.
See question "Can LINQ to SQL perform batch updates and deletes? Or does it always do one row update at a time?" here: http://www.hookedonlinq.com/LINQToSQLFAQ.ashx
It links to this article: http://www.aneyfamily.com/terryandann/post/2008/04/Batch-Updates-and-Deletes-with-LINQ-to-SQL.aspx, which uses extension methods to fix LINQ's inability to batch inserts, updates, etc.
Have you tried wrapping the inserts within a transaction and/or delaying db.SubmitChanges so that you can batch several inserts?
Transactions help throughput by reducing the need for fsync()s, and delaying db.SubmitChanges will reduce the number of .NET <-> db round trips.
Edit: see http://www.sidarok.com/web/blog/content/2008/05/02/10-tips-to-improve-your-linq-to-sql-application-performance.html for some more optimization principles.
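A minimal sketch of both suggestions combined, assuming the question's DataContext (db) and an offers collection parsed from the CSV; the batch size of 500 is arbitrary:

using System.Transactions; // add a reference to System.Transactions

using (var scope = new TransactionScope())
{
    int pending = 0;
    foreach (Offer offer in offers) // offers: rows parsed from the CSV (assumed)
    {
        db.Offers.InsertOnSubmit(offer);
        if (++pending == 500) // flush periodically instead of per row
        {
            db.SubmitChanges();
            pending = 0;
        }
    }
    db.SubmitChanges();  // flush the remainder
    scope.Complete();    // commit; disposing without Complete rolls back
}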
Have a look at the following page for a simple walk-through of how to change your code to use a Bulk Insert instead of using LINQ's InsertOnSubmit() function.
You just need to add the (provided) BulkInsert class to your code, make a few subtle changes to your code, and you'll see a huge improvement in performance.
Mikes Knowledge Base - BulkInserts with LINQ
Good luck!
I wonder if you're suffering from an overly large set of data accumulating in the data-context, making it slow to resolve rows against the internal identity cache (which is checked once during the SingleOrDefault, and for "misses" I would expect to see a second hit when the entity is materialized).
I can't recall 100% whether the short-circuit works for SingleOrDefault (although it will in .NET 4.0).
I would try ditching the data-context (submit-changes and replace with an empty one) every n operations for some n - maybe 250 or something.
Given that you're calling SubmitChanges per instance at the moment, you may also be wasting a lot of time checking the delta - pointless if you've only changed one row. Only call SubmitChanges in batches, not per record.
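A sketch of that recycling idea; MyDataContext stands in for the question's generated context, and 250 is just the suggested starting point:

const int BatchSize = 250; // tune for your workload
int count = 0;
var db = new MyDataContext();
foreach (Offer offer in offers) // offers: rows parsed from the CSV (assumed)
{
    db.Offers.InsertOnSubmit(offer);
    if (++count % BatchSize == 0)
    {
        db.SubmitChanges();
        db.Dispose();
        db = new MyDataContext(); // fresh identity cache and change tracker
    }
}
db.SubmitChanges(); // flush the final partial batch
db.Dispose();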
Alex gave the best answer, but I think a few things are being overlooked.
One of the major bottlenecks you have here is calling SubmitChanges for each item individually. A problem I don't think most people know about is that if you haven't manually opened your DataContext's connection yourself, then the DataContext will repeatedly open and close it itself. However, if you open it yourself, and then close it yourself when you're absolutely finished, things will run a lot faster since it won't have to reconnect to the database every time. I found this out when trying to find out why DataContext.ExecuteCommand() was so unbelievably slow when executing multiple commands at once.
A few other areas where you could speed things up:
While LINQ to SQL doesn't support straight-up batch processing, you should wait to call SubmitChanges() until you've analyzed everything first. You don't need to call SubmitChanges() after each InsertOnSubmit call.
If live data integrity isn't super crucial, you could retrieve the list of offer_id values from the server once, before you start checking whether an offer already exists. This could significantly reduce the number of times you call the server to look up an item that isn't even there.
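A sketch of that pre-fetch, assuming offer_id is an int as in the question's query:

using System.Collections.Generic;
using System.Linq;

// Pull every existing offer_id down once, then test membership in memory.
var existingIds = new HashSet<int>(db.Offers.Select(o => o.offer_id));

foreach (Offer offer in offers) // offers: rows parsed from the CSV (assumed)
{
    if (existingIds.Add(offer.offer_id)) // also guards against duplicates in the CSV
    {
        db.Offers.InsertOnSubmit(offer);
    }
}
db.SubmitChanges(); // one round trip for the whole batch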
Why not pass an Offer[] into that method and do all the changes in cache before submitting them to the database? Or you could submit in groups, so you don't run out of cache. The main thing is how long until you send the data over; the biggest time waster is the closing and opening of the connection.
Converting this to a compiled query is the easiest way I can think of to boost your performance here:
Change the following:
Offer dbOffer = this.db.Offers.SingleOrDefault(
    o => o.offer_id == offer.offer_id);
to:
Offer dbOffer = RetrieveOffer(this.db, offer.offer_id);

private static readonly Func<DataContext, int, Offer> RetrieveOffer =
    CompiledQuery.Compile((DataContext context, int offerId) =>
        context.Offers.SingleOrDefault(o => o.offer_id == offerId));
This change alone will not make it as fast as your ADO.NET version, but it will be a significant improvement, because without the compiled query you are dynamically building the expression tree every time you run this method.
As one poster already mentioned, you must refactor your code so that SubmitChanges is called only once if you want optimal performance.
Do you really need to check whether the record exists before inserting it into the DB? It looks strange to me, since the data comes from a CSV file.
P.S. I've tried to use SqlBulkCopy, but I need to do some transformations on Offer before inserting it into the db, and I think that defeats the purpose of SqlBulkCopy.
I don't think it defeats the purpose at all - why would it? Just fill a simple DataSet with all the data from the CSV and do a SqlBulkCopy. I did a similar thing with a collection of 30,000+ rows, and the import time went from minutes to seconds.
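A sketch of that approach; CsvRow, Transform, the column set, and the connection string are all illustrative assumptions:

using System.Data;
using System.Data.SqlClient;

// Build a DataTable in memory, applying the per-row transformation first,
// then push the whole thing to the server in one bulk operation.
var table = new DataTable();
table.Columns.Add("offer_id", typeof(int));
table.Columns.Add("name", typeof(string));

foreach (CsvRow row in csvRows) // csvRows: the parsed CSV (assumed)
{
    Offer offer = Transform(row); // your transformation step (assumed)
    table.Rows.Add(offer.offer_id, offer.name);
}

using (var bulk = new SqlBulkCopy(connectionString))
{
    bulk.DestinationTableName = "Offers";
    bulk.WriteToServer(table); // one bulk insert instead of 100k statements
}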
I suspect it isn't the inserting or updating operations that are taking a long time, but rather the code that determines whether your offer already exists:
Offer dbOffer = this.db.Offers.SingleOrDefault(
    o => o.offer_id == offer.offer_id);
If you look to optimise this, I think you'll be on the right track. Perhaps use the Stopwatch class to do some timing that will help prove me right or wrong.
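For example, a quick timing harness dropped into SaveOffer (a sketch, nothing more):

using System;
using System.Diagnostics;

var sw = Stopwatch.StartNew();
Offer dbOffer = this.db.Offers.SingleOrDefault(
    o => o.offer_id == offer.offer_id);
sw.Stop();
Console.WriteLine("Lookup took {0} ms", sw.ElapsedMilliseconds);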
Usually, when not using LINQ to SQL, you would have an insert/update procedure or SQL script that determines whether the record you pass already exists. You're doing this expensive operation in LINQ, which can never hope to match the speed of native SQL (which is what happens when you use a SqlCommand and select whether the record exists) looking up on a primary key.
Well, you must understand that LINQ creates code dynamically for all the ADO operations you do, instead of them being handwritten, so it will always take more time than your manual code. It's simply an easy way to write code, but if you want to talk about performance, ADO.NET code will always be faster, depending on how you write it.
I don't know whether LINQ tries to reuse its last statement; if it does, then separating the insert batch from the update batch may improve performance a little.
This code runs OK and prevents large amounts of data accumulating:
if (repository2.GeoItems.GetChangeSet().Inserts.Count > 1000)
{
    repository2.GeoItems.SubmitChanges();
}
Then, at the end of the bulk insertion, use this:
repository2.GeoItems.SubmitChanges();
I am new to threads and in need of help. I have a data entry app that takes an exorbitant amount of time to insert a new record (i.e. 50-75 seconds). So my solution was to send the insert statement out via the ThreadPool, which returns a new record ID, and allow the user to begin entering the data for the record while that insert is running. My problem is that a user can hit Save before the new ID is returned from that insert.
I tried putting in a Boolean variable which gets set to true, via an event from that thread, when it is safe to save. I then put in:
while (safeToSave == false)
{
    Thread.Sleep(200);
}
I think that is a bad idea. If I run the save method before that thread returns, it gets stuck.
So my questions are:
Is there a better way of doing this?
What am I doing wrong here?
Thanks for any help.
Doug
Edit for more information:
It is doing an insert into a very large (approaching max size) FoxPro database. The file has about 200 fields and almost as many indexes on it.
And before you ask: no, I cannot change its structure, as it was here before I was and there is a ton of legacy code hitting it. The first problem is that, in order to get a new ID, I must first find the max(id) in the table, then increment and checksum it. That takes about 45 seconds. Then the first insert is simply an insert of that new id and an enterdate field. This table is not (and cannot be) put into a DBC, so that rules out auto-generated IDs and the like.
#joshua.ewer
You have the process correct, and I think for the short term I will just disable the save button, but I will be looking into your idea of passing it into a queue. Do you have any references on MSMQ that I should take a look at?
1) Many :) - for example, you could disable the "save" button while the thread is inserting the object, or you could set up a worker thread which handles a queue of "save requests" (but I think the problem here is that the user wants to modify the newly created record, so disabling the button is maybe better).
2) I think we need some more code to be able to understand... (or maybe it is a synchronization issue; I am not a big fan of threads either).
btw, I just don't understand why an insert should take so long... I think you should check that code first! <- just as Charles stated before (sorry, didn't read the post) :)
Everyone else, including you, addressed the core problems (insert time, why you're doing an insert, then update), so I'll stick with just the technical concerns with your proposed solution. So, if I get the flow right:
Thread 1: Start data entry for record
Thread 2: Background call to the DB to retrieve the new Id
The save button is always enabled; if the user tries to save before Thread 2 completes, you put #1 to sleep for 200 ms?
The simplest, not best, answer is to just have the button disabled, and have that thread make a callback to a delegate that enables the button. They can't start the update operation until you're sure things are set up appropriately.
Though, I think a much better solution (though it might be overblown if you're just building a Q&D front end to FoxPro), would be to throw those save operations into a queue. The user can key as quickly as possible, then the requests are put into something like MSMQ and they can complete in their own time asynchronously.
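For the simpler disable-the-button route described above, a sketch assuming a WinForms form where saveButton, _currentRecordId, and InsertSkeletonRecord() are illustrative names:

using System;
using System.Threading;

saveButton.Enabled = false; // user can keep typing, but cannot save yet
ThreadPool.QueueUserWorkItem(_ =>
{
    int newId = InsertSkeletonRecord(); // the slow insert, off the UI thread
    saveButton.Invoke((Action)(() =>
    {
        _currentRecordId = newId;  // the ID the save will run against
        saveButton.Enabled = true; // re-enable instead of polling a flag
    }));
});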
Use a future rather than a raw ThreadPool action. Execute the future, allow the user to do whatever they want, when they hit Save on the 2nd record, request the value from the future. If the 1st insert finished already, you'll get the ID right away and the 2nd insert will be allowed to kick off. If you are still waiting on the 1st operation, the future will block until it is available, and then the 2nd operation can execute.
You're not saving any time unless the user is slower than the operation.
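In current .NET, Task<int> plays the role of a future. A sketch, with InsertSkeletonRecord and SaveRecord as assumed stand-ins for the real operations:

using System.Threading.Tasks;

class SaveFlow
{
    private Task<int> idTask;

    public void BeginNewRecord()
    {
        // Kick off the slow insert as a future; the UI stays responsive.
        idTask = Task.Run(() => InsertSkeletonRecord());
    }

    public void OnSaveClicked()
    {
        // Blocks only if the insert hasn't finished; otherwise returns at once.
        int newId = idTask.Result;
        SaveRecord(newId);
    }

    private int InsertSkeletonRecord() { /* slow max(id) + insert (assumed) */ return 0; }
    private void SaveRecord(int id) { /* full update with the user's data (assumed) */ }
}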
First, you should probably find out, and fix, the reason why an insert is taking so long... 50-75 seconds is unreasonable for any modern database for a single row insert, and indicates that something else needs to be addressed, like indices, or blocking...
Secondly, why are you inserting the record before you have the data? Normally, data entry apps are coded so that the insert is not attempted until all the necessary data for the insert has been gathered from the user. Are you doing this because you are trying to get the new Id back from the database first, and then "update" the new empty record with the user-entered data later? If so, almost every database vendor has a mechanism where you can do the insert only once, without knowing the new ID, and have the database return the new ID as well... What vendor database are you using?
Is a solution like this possible:
Pre-calculate the unique IDs before a user even starts to add. Keep a list of unique IDs that are already in the table but are effectively placeholders. When a user is trying to insert, reserve them one of the unique IDs; when the user presses save, they then replace the placeholder with their data.
PS: It's difficult to confirm this, but be aware of the following concurrency issue with what you are proposing (with or without threads): user A starts to add, user B starts to add, user A calculates ID 1234 as the max free ID, user B calculates ID 1234 as the max free ID. User A inserts ID 1234, user B inserts ID 1234 = boom!
Is there any way I can push a new record into a SqlDataReader after I pull a table down? I have this piece of trash code that I need to modify, and this seems like the easiest way to do what I need. I understand that this should not be done, and that if you have to do it there is something seriously wrong with your logic, but is there a way?
The easiest way from that point is to just manually create a command with an insert command string (parameterized if the data isn't sanitized/clean - best to do that anyway, though it can make the code bulkier). The code for that should be quite small, considering you already have everything else set up.
When you say "push a new record to"... do you mean you want to add a record to the results? Or do you mean you want to do an INSERT?
The INSERT cannot be done with a reader; however, you can do things with readers. Of course, it would be simpler to update the original query so that you UNION the data (a sketch of that follows below).
In particular: you can't create your own SqlDataReader, but you can create your own bespoke IDataReader implementation; this could wrap the SqlDataReader, simply proxying data from the inner SqlDataReader until the SqlDataReader.Read() method returns false - then you could swap to returning your own data, returning true until you have run out of data. Not trivial to implement (mainly because you need to implement a lot of methods to write your own IDataReader), but certainly not impossible.
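For the simpler UNION route mentioned above, a sketch; the table, columns, and synthetic values are illustrative:

using System;
using System.Data.SqlClient;

// Extend the original query so the reader yields one extra, hand-made row.
var cmd = new SqlCommand(
    "SELECT CustomerId, Name FROM Customers " +
    "UNION ALL " +
    "SELECT @newId, @newName;", connection);
cmd.Parameters.AddWithValue("@newId", -1);
cmd.Parameters.AddWithValue("@newName", "Synthetic row added at query time");

using (SqlDataReader reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        // The synthetic row comes back through the same reader as the real ones.
        Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
    }
}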
SqlDataReaders are forward-only and read-only, so I doubt you can add a record (regardless of whether you have pulled the whole table down). In fact, anything that inherits from DbDataReader is forward-only and read-only.
I'm guessing you need to do some manipulation of the records. Maybe what you can do instead is use the SqlDataReader to fill a DataTable and put a new record into the DataTable. But then you'd need to change your code to juggle a DataTable instead.
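A sketch of that DataTable route; the query and column names are illustrative:

using System.Data;
using System.Data.SqlClient;

var table = new DataTable();
using (var cmd = new SqlCommand("SELECT CustomerId, Name FROM Customers", connection))
using (SqlDataReader reader = cmd.ExecuteReader())
{
    table.Load(reader); // copies schema and rows out of the forward-only reader
}

// Now rows can be added freely, something the reader never allows.
DataRow extra = table.NewRow();
extra["CustomerId"] = -1;
extra["Name"] = "Manually added row";
table.Rows.Add(extra);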
You need to expand your question a bit.
Suppose, for example, you need to walk through a million records and update a field on the same table while walking through the data.
You can create a second SqlConnection to your db and execute update statements on the table (prone to locking issues), or better still, insert all your changes into a temp table and merge them back into the original table after you are done with the reader.
There is a little question I am tempted to ask: can this piece of logic be replaced with a single SQL UPDATE statement?