SqlDataReader get specific ResultSet - c#

When a query returns multiple result sets, we iterate over them using the NextResult() method of the SqlDataReader. This way we access the results sequentially.
Is there any way to access the result sets in a random / non-sequential way? For example, jump first to the third result set, then to the first, etc.
I am searching for something like rdr.GetResult(1), or a workaround.
Since I was asked why I want something like this:
First of all, I have no access to the query and so I cannot change it; in my client I will have the result sets in the sequence the server writes / produces them.
To process the first one (build collections, entities --> business logic) I need the information from both the second and the third one.
Again, since it is not an option to modify the code, I cannot somehow (without writing a lot of code) 'store' the connection info (e.g. ids) in order to connect the two result sets in a later step.
The most 'elegant' solution (for sure not the only one) is to process the result sets in a non-sequential way. That is why I am asking if there is such a way.
Update 13/6
While Jeroen Mostert's answer gives a thoughtful explanation of why, Think2ceCode1ce's answer shows the right direction for a workaround. The content of the link in the comments shows how an additional DataSet could be utilized to work in an async way. IMHO this would be the way to go if I were writing a general solution. However, in my case I based my solution on the nature of my data and the logic behind them. In short: (1) I read the data as they come, sequentially, using the SqlDataReader; (2) while reading the result set that is first in order but second in logic, I store some of the data I need in a dictionary and a collection; (3) while reading the result set that is third in order but first in logic, I iterate through the collection I built earlier and, based on the dictionary data, I build my final result.
The final code seems more efficient and is more maintainable than using the async DataAdapter. However, this is a very specific solution based on my data.

Provides a way of reading a forward-only stream of rows from a SQL
Server database.
https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqldatareader(v=vs.110).aspx
You need to use a DataAdapter for disconnected and non-sequential access. To use this you just have to change a bit of your ADO.NET code.
Instead of
SqlDataReader sqlReader = sqlCmd.ExecuteReader();
You need
DataTable dt = new DataTable();
SqlDataAdapter sqlAdapter = new SqlDataAdapter(sqlCmd);
sqlAdapter.Fill(dt);
If your SQL returns multiple result sets, you would use DataSet instead of DataTable, and then access result sets like ds.Tables[index_or_name].
https://msdn.microsoft.com/en-us/library/bh8kx08z(v=vs.110).aspx
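For illustration, a hedged sketch of the multi-result-set case (the command variable sqlCmd is assumed to come from your existing code):
DataSet ds = new DataSet();
using (SqlDataAdapter sqlAdapter = new SqlDataAdapter(sqlCmd))
{
    sqlAdapter.Fill(ds);
}
// every result set is now cached locally, so order no longer matters
DataTable third = ds.Tables[2];  // jump straight to the third result set
DataTable first = ds.Tables[0];  // then back to the first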

No, this is not possible. The reason why is quite elementary: if a batch returns multiple results, it must return them in order -- the statement that returns result set #2 does not run before the one that returns result set #1, nor does the client have any way of saying "please just skip that first statement entirely" (as that could have dire consequences for the batch as a whole). Indeed, there's not even any way in general to tell how many result sets a batch will produce -- all of this is done at runtime, the server doesn't know in advance what will happen.
Since there's no way, server-side, to skip or index result sets, there's no meaningful way to do it client-side either. You're free to ignore the result sets streamed back to you, but you must still process them in order before you can move on -- and once you've moved on, you can't go back.
There are two possible global workarounds:
If you process all data and cache it locally (with a DataAdapter, for example) you can go back and forth in the data as you please, but this requires keeping all data in memory.
If you enable MARS (Multiple Active Result Sets) you can execute another query even as the first one is still processing. This does require splitting up your existing single batch code into individual statements (which, if you really can't change anything about the SQL at all, is not an option), but you could go through result sets at will (without caching). It would still not be possible for you to "go back" within a single result set, though.
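If splitting the batch is possible, a minimal MARS sketch could look like this (the connection string and query texts are placeholders, not the asker's actual SQL):
var builder = new SqlConnectionStringBuilder(connectionString) { MultipleActiveResultSets = true };
using (var conn = new SqlConnection(builder.ConnectionString))
{
    conn.Open();
    // with MARS enabled, several readers can be open on the same connection at once
    using (var readerB = new SqlCommand("SELECT ... /* logically-second query */", conn).ExecuteReader())
    using (var readerA = new SqlCommand("SELECT ... /* logically-first query */", conn).ExecuteReader())
    {
        while (readerB.Read()) { /* consume the second result first */ }
        while (readerA.Read()) { /* then the first */ }
    }
}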

Related

SQL Query Performance - Which approach is better if any?

I am having to return several record sets from SQL Server to construct a C# object. Since EF doesn't currently support returning complex objects like this (possibly the beta version does), I am having to resort to returning a DataSet using ADO.NET to retrieve the data before transforming it into a pleasant C# representation, see below.
SELECT * FROM ...
exec dbo.usp_SP1 @ProductID, @CatalogName
exec dbo.usp_SP2 @ProductID, @CatalogName
It's always better to make fewer database calls; however, given how this query is being executed, would making separate requests for each DataSet be that much worse?
One database round trip is always better than three (or two).
Your code can stay clean even with this set of queries. You just have to call DataTable.Load three times, in succession, to load all three result sets from the DbDataReader.
So yes, I recommend one bigger query and a single DataSet for all result sets.
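A hedged sketch of the DataTable.Load approach described above (the command variable sqlCmd is illustrative):
var products = new DataTable();
var sp1Results = new DataTable();
var sp2Results = new DataTable();

using (var reader = sqlCmd.ExecuteReader())
{
    products.Load(reader);     // first result set: the SELECT (Load advances the reader to the next result set)
    sp1Results.Load(reader);   // second result set: dbo.usp_SP1
    sp2Results.Load(reader);   // third result set: dbo.usp_SP2
}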
You can reference each table result inside of a DataSet via ds.Tables[index].Rows.... So I would make one call, get all the data, and return it, whether that is via another sproc or just a simple SELECT.

"cursor like" reading inside a CLR procedure/function

I have to implement an algorithm on data which is (for good reasons) stored inside SQL server. The algorithm does not fit SQL very well, so I would like to implement it as a CLR function or procedure. Here's what I want to do:
Execute several queries (usually 20-50, but up to 100-200) which all have the form select a,b,... from some_table order by xyz. There's an index which fits that query, so the result should be available more or less without any calculation.
Consume the results step by step. The exact stepping depends on the results, so it's not exactly predictable.
Aggregate some result by stepping over the results. I will only consume the first parts of the results, but cannot predict how much I will need. The stop criteria depends on some threshold inside the algorithm.
My idea was to open several SqlDataReaders, but I have two problems with that solution:
You can have only one SqlDataReader per connection and inside a CLR method I have only one connection - as far as I understand.
I don't know how to tell SqlDataReader to read data in chunks. I could not find documentation on how SqlDataReader is supposed to behave. As far as I understand, it prepares the whole result set and would load the whole result into memory, even if I consumed only a small part of it.
Any hint how to solve that as a CLR method? Or is there a more low level interface to SQL server which is more suitable for my problem?
Update: I should have made two points more explicit:
I'm talking about big data sets, so a query might return 1 million records, but my algorithm would consume only the first 100-200 of them. As I said before, I don't know the exact number beforehand.
I'm aware that SQL might not be the best choice for that kind of algorithm. But due to other constraints it has to be a SQL server. So I'm looking for the best possible solution.
SqlDataReader does not read the whole data set; you are confusing it with the DataSet class. It reads row by row, as the .Read() method is called. If a client does not consume the result set, the server will suspend the query execution because it has no room to write the output into (the selected rows). Execution resumes as the client consumes more rows (as SqlDataReader.Read is called). There is even a special command behavior flag, SequentialAccess, that instructs ADO.NET not to pre-load the entire row in memory, which is useful for accessing large BLOB columns in a streaming fashion (see Download and Upload images from SQL Server via ASP.Net MVC for a practical example).
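For reference, a hedged sketch of the SequentialAccess pattern for streaming a large column (the command, column ordinal, and output stream are illustrative):
using (var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
{
    while (reader.Read())
    {
        var buffer = new byte[8192];
        long offset = 0;
        long bytesRead;
        // read the BLOB column (ordinal 1 here) in chunks instead of materializing it whole
        while ((bytesRead = reader.GetBytes(1, offset, buffer, 0, buffer.Length)) > 0)
        {
            outputStream.Write(buffer, 0, (int)bytesRead);
            offset += bytesRead;
        }
    }
}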
You can have multiple active result sets (multiple SqlDataReaders) on a single connection when MARS is enabled. However, MARS is incompatible with SQLCLR context connections.
So you can create a CLR streaming TVF to do some of what you need in CLR, but only if you have one single SQL query source. Multiple queries would require you to abandon the context connection and use a fully fledged connection instead, i.e. connect back to the same instance in a loopback, which would allow MARS and thus consuming multiple result sets. But loopback has its own issues, as it breaks the transaction boundaries you have from the context connection. Specifically, with a loopback connection your TVF won't be able to read the changes made by the same transaction that called the TVF, because it is a different transaction on a different connection.
SQL is designed to work against huge data sets, and is extremely powerful. With set based logic it's often unnecessary to iterate over the data to perform operations, and there are a number of built-in ways to do this within SQL itself.
1) write set based logic to update the data without cursors
2) use deterministic User Defined Functions with set-based logic (you can do this with the SqlFunction attribute in CLR code). A non-deterministic function has the effect of turning the query into a cursor internally; non-deterministic means the output value is not always the same given the same input.
[SqlFunction(IsDeterministic = true, IsPrecise = true)]
public static int algorithm(int value1, int value2)
{
    int value3 = ... ; // your algorithm's computation over value1 and value2
    return value3;
}
3) use cursors as a last resort. This is a powerful way to execute logic per row on the database, but it has a performance impact. It appears from this article that CLR can outperform SQL cursors (thanks Martin).
I saw your comment that the complexity of using set based logic was too much. Can you provide an example? There are many SQL ways to solve complex problems - CTE, Views, partitioning etc.
Of course you may well be right in your approach, and I don't know what you are trying to do, but my gut says leverage the tools of SQL. Spawning multiple readers isn't the right way to approach the database implementation. It may well be that you need multiple threads calling into a SP to run concurrent processing, but don't do this inside the CLR.
To answer your question: with CLR implementations (and IDataReader) you don't really need to page results in chunks, because you are not loading the data into memory or transporting it over the network. IDataReader gives you access to the data stream row by row. By the sounds of it, your algorithm determines the number of records that need updating, so when that point is reached simply stop calling Read() and end there.
SqlMetaData[] columns = new SqlMetaData[3];
columns[0] = new SqlMetaData("Value1", SqlDbType.Int);
columns[1] = new SqlMetaData("Value2", SqlDbType.Int);
columns[2] = new SqlMetaData("Value3", SqlDbType.Int);

SqlDataRecord record = new SqlDataRecord(columns);
SqlContext.Pipe.SendResultsStart(record);

SqlDataReader reader = comm.ExecuteReader();

bool flag = true;
while (reader.Read() && flag)
{
    int value1 = Convert.ToInt32(reader[0]);
    int value2 = Convert.ToInt32(reader[1]);

    // some algorithm
    int newValue = ...;

    // copy the inputs and the computed value into the output record
    record.SetInt32(0, value1);
    record.SetInt32(1, value2);
    record.SetInt32(2, newValue);
    SqlContext.Pipe.SendResultsRow(record);

    // keep going?
    flag = newValue < 100;
}
SqlContext.Pipe.SendResultsEnd();
Cursors are a SQL-only feature. If you wanted to read chunks of data at a time, some sort of paging would be required so that only a certain number of records is returned at once. If using LINQ,
.Skip(Skip)
.Take(PageSize)
Skip and Take could be used to limit the results returned.
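For example, a hedged paging sketch (the table, key, and paging variables are all illustrative):
var page = db.SomeTable
    .OrderBy(r => r.Id)            // a stable ordering is needed for paging to be meaningful
    .Skip(pageIndex * pageSize)
    .Take(pageSize)
    .ToList();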
You can simply iterate over the DataReader by doing something like this:
using (IDataReader reader = Command.ExecuteReader())
{
    while (reader.Read())
    {
        // Do something with this record
    }
}
This iterates over the results one at a time, similar to a cursor in SQL Server.
For multiple recordsets at once, try MARS
(if SQL Server)
http://msdn.microsoft.com/en-us/library/ms131686.aspx

Can someone explain map-reduce in C#?

Can anyone please explain the concept of map-reduce, particularly in Mongo?
I also use C# so any specifics in that area would also be useful.
One way to understand Map-Reduce coming from C# and LINQ is to think of it as a SelectMany() followed by a GroupBy() followed by an Aggregate() operation.
In a SelectMany() you are projecting a sequence, but each element can become multiple elements. This is equivalent to using multiple emit statements in your map operation. The map operation can also choose not to call emit, which is like having a Where() clause inside your SelectMany() operation.
In a GroupBy() you are collecting elements with the same key which is what Map-Reduce does with the key value that you emit from the map operation.
In the Aggregate() or reduce step you are taking the collections associated with each group key and combining them in some way to produce one result for each key. Often this combination is simply adding up a single '1' value output with each key from the map step but sometimes it's more complicated.
One important caveat with MongoDB's map-reduce is that the reduce operation must accept and output the same data type because it may be applied repeatedly to partial sets of the grouped data. If you are passed an array of values, don't simply take the length of it because it might be a partial result from an earlier reduce operation.
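To make the analogy concrete, here is a hedged LINQ sketch of a word count, the classic map-reduce example (the data and names are illustrative):
var documents = new[] { "the quick brown fox", "the lazy dog" };

var wordCounts = documents
    // "map": each document emits one (word, 1) pair per word
    .SelectMany(doc => doc.Split(' ').Select(word => new { Key = word, Value = 1 }))
    // group the emitted pairs by key
    .GroupBy(pair => pair.Key)
    // "reduce": combine the values for each key into a single result
    .Select(g => new { Word = g.Key, Count = g.Aggregate(0, (acc, p) => acc + p.Value) });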
Here's a spot to get started with Map Reduce in Mongo. The cookbook has a few examples, I would focus on these two.
I like to think of map-reduces in the context of "data warehousing jobs" or "rollups". You're basically taking detailed data and "rolling up" a smaller version of that data.
In SQL you would normally do this with sum() and avg() and group by. In MongoDB you would do this with a Map Reduce. The basic premise of a Map Reduce is that you have two functions.
The first function (map) is basically a giant for loop that runs over your data and "emits" certain keys and values. The second function (reduce) is a giant loop over all of the emitted data. The map says "hey, this is the data you want to summarize" and the reduce says "hey, this array of values reduces to this single value".
The output from a map-reduce can come in many forms (typically flat files). In MongoDB, the output is actually a new collection.
C# Specifics
In MongoDB all of the Map Reduces are performed inside the JavaScript engine, so both the map and reduce functions are written in JavaScript. The various drivers will allow you to build the JavaScript and issue the command; however, this is not how I normally do it.
The preferred method for running Map Reduce jobs is to compile the JS into a file and then run mongo map_reduce.js. Generally you'll do this on the server somewhere as a cron job or a scheduled task.
Why?
Well, map-reduce is not "real-time", especially with a big data set. It's really designed to be used in a batch fashion. Don't get me wrong, you can call it from your code, but generally you don't want users to initiate map-reduce jobs. Instead you want those jobs to be scheduled, and you want users to be querying the results :)
Map Reduce is a way to process data where you have a map stage/function that identifies all the data to be processed and processes it, row by row.
Then you have a reduce step/function that can be run multiple times, for example once per server in a cluster and then once in the client to return a final result.
Here is a Wiki article describing it in more detail:
http://en.wikipedia.org/wiki/MapReduce
And here is the documentation for MongoDB for Mapreduce
http://www.mongodb.org/display/DOCS/MapReduce
Simple example, find the longest string in a list.
The map step will loop over the list calculating the length of each string; the reduce step will loop over the result from map and for each line keep the longest one.
This can of course be much more complex, but that's the essence of it.
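Sketched in C# terms, purely for illustration (real map-reduce distributes these steps across servers):
var strings = new[] { "apple", "banana", "cherry" };

// "map": project each string to its length alongside the value
var mapped = strings.Select(s => new { Value = s, Length = s.Length });

// "reduce": keep the longest seen so far; this fold could be re-applied to partial results
var longest = mapped.Aggregate((best, next) => next.Length > best.Length ? next : best);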

Speed up LINQ inserts

I have a CSV file and I have to insert it into a SQL Server database. Is there a way to speed up the LINQ inserts?
I've created a simple Repository method to save a record:
public void SaveOffer(Offer offer)
{
    Offer dbOffer = this.db.Offers.SingleOrDefault(
        o => o.offer_id == offer.offer_id);

    // add new offer
    if (dbOffer == null)
    {
        this.db.Offers.InsertOnSubmit(offer);
    }
    // update existing offer
    else
    {
        dbOffer = offer;
    }

    this.db.SubmitChanges();
}
But using this method, the program is much slower than inserting the data using ADO.NET SQL inserts (new SqlConnection, new SqlCommand for the select-if-exists, new SqlCommand for the update/insert).
On 100k csv rows it takes about an hour vs 1 minute or so for the ADO.net way. For 2M csv rows it took ADO.net about 20 minutes. LINQ added about 30k of those 2M rows in 25 minutes. My database has 3 tables, linked in the dbml, but the other two tables are empty. The tests were made with all the tables empty.
P.S. I've tried to use SqlBulkCopy, but I need to do some transformations on Offer before inserting it into the db, and I think that defeats the purpose of SqlBulkCopy.
Updates/Edits:
After 18 hours, the LINQ version had added just ~200K rows.
I've tested the import with plain LINQ inserts too, and it is also really slow compared with ADO.NET. I haven't seen a big difference between just inserts/SubmitChanges and selects/updates/inserts/SubmitChanges.
I still have to try batch commit, manually connecting to the db and compiled queries.
SubmitChanges does not batch changes, it does a single insert statement per object. If you want to do fast inserts, I think you need to stop using LINQ.
While SubmitChanges is executing, fire up SQL Profiler and watch the SQL being executed.
See question "Can LINQ to SQL perform batch updates and deletes? Or does it always do one row update at a time?" here: http://www.hookedonlinq.com/LINQToSQLFAQ.ashx
It links to this article: http://www.aneyfamily.com/terryandann/post/2008/04/Batch-Updates-and-Deletes-with-LINQ-to-SQL.aspx that uses extension methods to fix linq's inability to batch inserts and updates etc.
Have you tried wrapping the inserts within a transaction and/or delaying db.SubmitChanges so that you can batch several inserts?
Transactions help throughput by reducing the need for fsync()s, and delaying db.SubmitChanges will reduce the number of .NET <-> DB round trips.
Edit: see http://www.sidarok.com/web/blog/content/2008/05/02/10-tips-to-improve-your-linq-to-sql-application-performance.html for some more optimization principles.
Have a look at the following page for a simple walk-through of how to change your code to use a Bulk Insert instead of using LINQ's InsertOnSubmit() function.
You just need to add the (provided) BulkInsert class to your code, make a few subtle changes to your code, and you'll see a huge improvement in performance.
Mikes Knowledge Base - BulkInserts with LINQ
Good luck !
I wonder if you're suffering from an overly large set of data accumulating in the data-context, making it slow to resolve rows against the internal identity cache (which is checked once during the SingleOrDefault, and for "misses" I would expect to see a second hit when the entity is materialized).
I can't recall 100% whether the short-circuit works for SingleOrDefault (although it will in .NET 4.0).
I would try ditching the data-context (submit-changes and replace with an empty one) every n operations for some n - maybe 250 or something.
Given that you're calling SubmitChanges per instance at the moment, you may also be wasting a lot of time checking the delta - pointless if you've only changed one row. Only call SubmitChanges in batches, not per record.
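A hedged sketch of that pattern (MyDataContext and the offers list are placeholders for the asker's generated context and data; 250 is just the batch size suggested above):
const int batchSize = 250;
var db = new MyDataContext();

for (int i = 0; i < offers.Count; i++)
{
    db.Offers.InsertOnSubmit(offers[i]);

    if ((i + 1) % batchSize == 0)
    {
        db.SubmitChanges();
        db.Dispose();
        db = new MyDataContext();   // a fresh context keeps the identity cache and change set small
    }
}

db.SubmitChanges();   // flush the final partial batch
db.Dispose();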
Alex gave the best answer, but I think a few things are being overlooked.
One of the major bottlenecks you have here is calling SubmitChanges for each item individually. A problem I don't think most people know about is that if you haven't manually opened your DataContext's connection yourself, then the DataContext will repeatedly open and close it itself. However, if you open it yourself, and then close it yourself when you're absolutely finished, things will run a lot faster since it won't have to reconnect to the database every time. I found this out when trying to find out why DataContext.ExecuteCommand() was so unbelievably slow when executing multiple commands at once.
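A minimal sketch of opening the connection up front (assuming a LINQ to SQL context; MyDataContext is a placeholder name):
using (var db = new MyDataContext())
{
    db.Connection.Open();   // open once, instead of letting every SubmitChanges open/close it
    try
    {
        // ... InsertOnSubmit / SubmitChanges calls go here ...
    }
    finally
    {
        db.Connection.Close();
    }
}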
A few other areas where you could speed things up:
While LINQ to SQL doesn't support batch processing outright, you should wait to call SubmitChanges() until you've analyzed everything first. You don't need to call SubmitChanges() after each InsertOnSubmit call.
If live data integrity isn't super crucial, you could retrieve a list of offer_id back from the server before you start checking to see if an offer already exists. This could significantly reduce the amount of times you're calling the server to get an existing item when it's not even there.
Why not pass an Offer[] into that method, and do all the changes in cache before submitting them to the database? Or you could submit in groups, so you don't run out of cache. The main thing is how long until you send over the data; the biggest time waster is in the closing and opening of the connection.
Converting this to a compiled query is the easiest way I can think of to boost your performance here:
Change the following:
Offer dbOffer = this.db.Offers.SingleOrDefault (
o => o.offer_id == offer.offer_id);
to:
Offer dbOffer = RetrieveOffer(this.db, offer.offer_id);

// "DataContext" below stands for your generated data context type (the one that exposes Offers)
private static readonly Func<DataContext, int, Offer> RetrieveOffer =
    CompiledQuery.Compile((DataContext context, int offerId) =>
        context.Offers.SingleOrDefault(o => o.offer_id == offerId));
This change alone will not make it as fast as your ado.net version, but it will be a significant improvement because without the compiled query you are dynamically building the expression tree every time you run this method.
As one poster already mentioned, you must refactor your code so that submit changes is called only once if you want optimal performance.
Do you really need to check if the record exists before inserting it into the DB? It looks strange to me, as the data comes from a CSV file.
P.S. I've tried to use SqlBulkCopy, but I need to do some transformations on Offer before inserting it into the db, and I think that defeats the purpose of SqlBulkCopy.
I don't think it defeats the purpose at all; why would it? Just fill a simple DataTable with all the data from the CSV and do a SqlBulkCopy. I did a similar thing with a collection of 30000+ rows and the import time went from minutes to seconds.
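A hedged sketch of that approach (the column names, the transformed collection, and the destination table are illustrative):
var table = new DataTable();
table.Columns.Add("offer_id", typeof(int));
table.Columns.Add("name", typeof(string));

// apply the transformations in memory first, then stage the rows
foreach (var offer in transformedOffers)
    table.Rows.Add(offer.offer_id, offer.name);

using (var bulk = new SqlBulkCopy(connectionString))
{
    bulk.DestinationTableName = "dbo.Offers";
    bulk.WriteToServer(table);
}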
I suspect it isn't the inserting or updating operations that are taking a long time, rather the code that determines if your offer already exists:
Offer dbOffer = this.db.Offers.SingleOrDefault (
o => o.offer_id == offer.offer_id);
If you look to optimise this, I think you'll be on the right track. Perhaps use the Stopwatch class to do some timing that will help to prove me right or wrong.
Usually, when not using LINQ to SQL, you would have an insert/update procedure or SQL script that determines whether the record you pass already exists. You're doing this expensive operation in LINQ, which can never hope to match the speed of native SQL looking up on a primary key (which is what happens when you use a SqlCommand and select whether the record exists).
Well, you must understand that LINQ creates code dynamically for all the ADO operations you do, instead of them being handwritten, so it will always take more time than your manual code. It's simply an easy way to write code, but if you want to talk about performance, ADO.NET code will always be faster, depending upon how you write it.
I don't know if LINQ will try to reuse its last statement or not; if it does, then separating the insert batch from the update batch may improve performance a little.
This code runs OK, and prevents large amounts of data from piling up in the change set:
if (repository2.GeoItems.GetChangeSet().Inserts.Count > 1000)
{
    repository2.GeoItems.SubmitChanges();
}
Then, at the end of the bulk insertion, use this:
repository2.GeoItems.SubmitChanges();

Add record to SqlDataReader

Is there any way I can push a new record into a SqlDataReader after I pull a table down? I have this piece of trash code that I need to modify, and this seems like the easiest way to do what I need. I understand that this should not be done, and that if you have to do it there is something seriously wrong with your logic, but is there a way?
The easiest way from that point is to just manually create a command with an INSERT command string (parameterized if the data isn't sanitized/clean; best to do that anyway, though it could make the code bulkier). The code for that should be quite small, considering you already have everything else set up.
When you say "push a new record to"... do you mean you want to add a record to the results? Or do you mean you want to do an INSERT?
The INSERT cannot be done with a reader; however, you can do things with readers. Of course, it would be simpler to update the original query so that you UNION the data.
In particular: you can't create your own SqlDataReader, but you can create your own bespoke IDataReader implementation; this could wrap the SqlDataReader, simply proxying data from the inner SqlDataReader until the SqlDataReader.Read() method returns false - then you could switch to returning your own data, returning true until you have run out of data. Not trivial to implement (mainly because you need to implement a lot of methods to write your own IDataReader), but certainly not impossible.
SqlDataReaders are forward-only and read-only, so I doubt you can add a record (regardless of whether you have pulled the whole table down). In fact, anything that inherits DbDataReader is forward-only and read-only.
I'm guessing you need to do some manipulation with the records. Maybe what you can do instead is use the SqlDataReader to fill a DataTable and put a new record into the DataTable. But then you'd need to change your code to juggle a DataTable.
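A minimal sketch of that DataTable route (the column name is illustrative; the reader comes from the existing command):
var table = new DataTable();
table.Load(reader);                 // pull the existing rows out of the SqlDataReader

var extra = table.NewRow();
extra["SomeColumn"] = "new value";  // populate whatever columns the table has
table.Rows.Add(extra);

// downstream code now iterates table.Rows instead of the reader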
You need to expand your question a bit,
If, for example, you need to walk through a million records and update a field on the same table while walking through the data.
You can create a second SqlConnection to your db and execute update statements on the table (prone to locking issues), or better still insert all your changes into a temp table and merge the changes back into the original table after you are done with the reader.
There is one question I am tempted to ask: can this piece of logic be replaced with a single SQL UPDATE statement?
