I got an OutOfMemoryException while downloading bulk reports, so I altered the code to use a DataReader instead of a DataSet.
How can I achieve the code below with a DataReader, given that I cannot use any DataTables or DataSets in my code?
if (dataSet.Tables[0].Rows[i + 1]["Sr No"].ToString() == dataSet.Tables[0].Rows[i]["Sr No"].ToString())
I don't think you can access a row that has not been read yet. The way I understand it, the DataReader returns rows one at a time as it reads them from the database.
You can try the following:
This will loop through each row that the DataReader returns in the result set. Here you can check certain values/conditions, as follows:
while (dataReader.Read())
{
var value = dataReader["Sr No"].ToString();
//Your custom code here
}
Alternatively, you can specify the column index instead of the name, if you wish to do so, as follows:
while (dataReader.Read())
{
var value = dataReader.GetString(0); //The 0 stands for "the 0'th column", so the first column of the result.
//Your custom code here
}
UPDATE
Why don't you place all the values read from the DataReader into a list? You can then use that list afterwards to compare values if you need to.
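For example, a minimal sketch of that idea (the "Sr No" column name and the duplicate comparison come from the question; everything else is illustrative):
// Minimal sketch: collect the "Sr No" values into a list while streaming with
// the DataReader, then compare consecutive entries afterwards.
var srNumbers = new List<string>();

while (dataReader.Read())
{
    srNumbers.Add(dataReader["Sr No"].ToString());
}

for (int i = 0; i < srNumbers.Count - 1; i++)
{
    if (srNumbers[i + 1] == srNumbers[i])
    {
        // Same "Sr No" as the next row - your custom code here
    }
}
This keeps only the values you actually need to compare in memory, instead of the whole result set.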
Don't filter rows at the client level.
It's better to search for the right rows on the server, before they even reach the client, instead of fetching all the rows into the client and then filtering there. Simply incorporate your search criteria in the SQL query itself, and then fetch the rows that the server has already found for you.
Doing that:
Allows the server to use indexes and other database structures to identify the "interesting" rows potentially much faster than linearly searching through all rows.
Your network connection between client and server will not be saturated by gazillions of rows, most of which would be discarded at the end anyway.
May allow your client to deal with one row at a time in a simple way (e.g. using DbDataReader), as opposed to additional processing and/or storing multiple rows in memory (or even all rows, as you did).
In your particular case, it looks like you'll need a self-join, or perhaps an analytic (aka "window") function. Without knowing more about your database structure or what you are actually trying to accomplish, I can't know what your exact query will look like, but it will probably be something along these lines:
-- Sample data...
CREATE TABLE T (
ID int PRIMARY KEY,
SR_NO int
);
INSERT INTO T VALUES
(1, 100),
(2, 101),
(3, 101),
(4, 100);
-- The actual query...
SELECT
*
FROM (
SELECT
*,
LEAD(SR_NO) OVER (ORDER BY ID) NEXT_SR_NO
FROM
T T1
) Q
WHERE
SR_NO = NEXT_SR_NO;
Result:
ID SR_NO NEXT_SR_NO
2 101 101
You can do this by using a do-while loop: the next row is read when the loop condition is checked, at the end of each iteration, so in the following iteration you can work with that row. Try this code:
if (myreader.Read())
{
    do
    {
        // your code - the current row is available here
    }
    while (myreader.Read());
}
I am filling two tables in a DataSet. While retrieving a column from the second table in the DataSet I am getting an error. Please help!
"There is no row at position 0."
Here is my code.
Stored Procedure
CREATE proc [dbo].[spDispatchDetails]
(
    @JobNo int,
    @Programme nvarchar(100)
)
as
begin
    select ReceivedFrom, ChallanNo, ChallanDate, JobNo, ReceivingDate, LotNo from tblOrders where JobNo = @JobNo and OrderStatus = 'In Process'
    select Quantity from tblProgramme where JobNo = @JobNo and Programme = @Programme
end
I am sharing an image of my code.
Probably it does not return any rows.
You need to check the row count in ds.Tables[1] before you access rows in it.
Make sure that ds.Tables[1].Rows.Count > 0.
First check whether SQL is returning rows by executing the query manually. While debugging, make sure the parameters are being sent correctly (the dropdown selected values).
The error is telling you that there are no rows in the second table. Ideally, always check that the row count is greater than zero.
To begin with, you should check whether your procedure returns any rows to ds.Tables[1], because this error is very typical of that situation.
Please check this part of your procedure:
select Quantity from tblProgramme where JobNo=@JobNo and Programme=@Programme
and let us know whether you get any rows.
There is no row at position 0
This issue means that no row was returned by the SQL query for the second table in the dataset. For example, the Quantity value does not exist if there is no row in the tblProgramme table where the JobNo and Programme match the values passed into the call.
One of the issues with ADO.NET is the amount of effort involved with checking types and row counts, etc.
To solve the issue, you should check that the table exists, that the row count is at least 1, and that the column actually has a value.
if (ds.Tables.Count == 2 && // Ensure two tables in the dataset
ds.Tables[1].Rows.Count > 0 && // Ensure second table has a row
ds.Tables[1].Rows[0]["Quantity"] != DBNull.Value) // Ensure value for Qty
{
// Then do something with it...
}
As a side note, I would suggest using an ORM as it removes nearly all of the issues that raw ADO.NET boilerplate code introduces.
I am dealing with a huge database with millions of rows. I would like to run an SQL statement through C#, which selects 1.2 million rows from one database, and inserts them into another after parsing and modifying some data.
I originally wanted to do so by first running the select statement and parsing the data by iterating through the MySqlDataReader object containing the data. This would be a memory overhead, so I have decided to select one row, parse it, insert it into the other database, and then move on to the next row.
How can this be done? I have tried the SELECT ... INTO syntax for a MySQL query, but this still seems to select all the data first and then insert it.
Use the SqlBulkCopy class to move data from one source to the other:
http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy%28v=vs.110%29.aspx
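For example, a rough sketch of streaming the rows straight through, assuming the destination is a SQL Server database (SqlBulkCopy can only write to SQL Server); the connection strings, table and column names are placeholders:
// Rough sketch: stream rows from the MySQL reader straight into the destination
// with SqlBulkCopy, so only one row at a time is held in memory.
// (Requires the System.Data.SqlClient and MySql.Data.MySqlClient namespaces.)
using (var sourceConn = new MySqlConnection(sourceConnectionString))
using (var destConn = new SqlConnection(destConnectionString))
{
    sourceConn.Open();
    destConn.Open();

    using (var cmd = new MySqlCommand("SELECT Col1, Col2 FROM SourceTable", sourceConn))
    using (var reader = cmd.ExecuteReader())
    using (var bulkCopy = new SqlBulkCopy(destConn))
    {
        bulkCopy.DestinationTableName = "dbo.TargetTable";
        bulkCopy.BatchSize = 5000;        // commit in batches instead of all at once
        bulkCopy.WriteToServer(reader);   // WriteToServer accepts any IDataReader
    }
}
If you also need to parse and modify each row on the way through, that transformation would have to happen either in the SELECT itself or in a wrapping IDataReader, since SqlBulkCopy only copies what the reader exposes.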
I am not sure whether you are able to add a new column to the existing table. If you can, you can use the new column as a flag; it could be TRANSFERRED (boolean).
You would select one row at a time with the condition TRANSFERRED = FALSE and process it. After that row is processed, you update it to TRANSFERRED = TRUE.
Alternatively, you must have a unique id column in your existing table. Create a temp table which stores the ids of processed rows; that way you will know which rows have been processed and which have not.
I am not quite sure what your error is. For your case, I suggest you use 'select top 1000' to fetch the data in batches, because inserting rows one by one is really slow. After that, you can build an 'insert into' query for each batch. Note that SqlBulkCopy only works with SQL Server. I also suggest you use a StringBuilder to build the SQL query, because concatenating plain strings has a big overhead.
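For illustration, a rough sketch of that batching idea, using parameters rather than concatenated values (target_table, its columns, destConn and batch are placeholders):
// Rough sketch: build one multi-row INSERT per batch with a StringBuilder.
var sql = new StringBuilder("INSERT INTO target_table (col1, col2) VALUES ");
using (var insert = new MySqlCommand())
{
    insert.Connection = destConn;
    for (int i = 0; i < batch.Count; i++)
    {
        if (i > 0) sql.Append(", ");
        sql.AppendFormat("(@a{0}, @b{0})", i);
        insert.Parameters.AddWithValue("@a" + i, batch[i].Col1);
        insert.Parameters.AddWithValue("@b" + i, batch[i].Col2);
    }
    insert.CommandText = sql.ToString();
    insert.ExecuteNonQuery();
}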
I get a list of IDs and amounts from an Excel file (thousands of IDs and corresponding amounts). I then need to check the database to see whether each ID exists and, if it does, check that the amount in the DB is greater than or equal to the amount from the Excel file.
The problem is that running this select statement upwards of 6000 times and returning the values I need takes a long time. Even at half a second apiece it will take about an hour to do all the selects. (I normally don't get more than 5 results back.)
Is there a faster way to do this?
Is it possible to somehow pass all the IDs at once, make just one call, and get the whole collection back?
I have tried using SqlDataReaders and SqlDataAdapters, but they seem to be about the same (too long either way).
General idea of how this works below
for (int i = 0; i < ID.Count; i++)
{
    SqlCommand cmd = new SqlCommand("select Amount, Client, Pallet from table where ID = @ID and Amount > 0;", sqlCon);
    cmd.Parameters.Add("@ID", SqlDbType.VarChar).Value = ID[i];
    SqlDataAdapter da = new SqlDataAdapter(cmd);
    da.Fill(dataTable);
    da.Dispose();
}
Instead of a long IN list (which is difficult to parameterise and has a number of other inefficiencies regarding execution plans: compilation time, plan reuse, and the quality of the plans themselves), you can pass all the values in at once via a table-valued parameter.
See arrays and lists in SQL Server for more details.
Generally I make sure to give the table type a primary key and use option (recompile) to get the most appropriate execution plans.
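For example, a rough sketch of the C# side (it assumes a user-defined table type such as CREATE TYPE dbo.IdList AS TABLE (Id varchar(50) PRIMARY KEY) already exists on the server; the table and column names come from the question's code):
// Rough sketch: pass all the IDs in one round trip via a table-valued parameter.
var idTable = new DataTable();
idTable.Columns.Add("Id", typeof(string));
foreach (var id in ID)
    idTable.Rows.Add(id);

using (var cmd = new SqlCommand(
    "select t.Amount, t.Client, t.Pallet " +
    "from [table] t join @Ids i on i.Id = t.ID " +
    "where t.Amount > 0 option (recompile);", sqlCon))
{
    var p = cmd.Parameters.Add("@Ids", SqlDbType.Structured);
    p.TypeName = "dbo.IdList";   // the user-defined table type assumed above
    p.Value = idTable;

    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // process each matching row
        }
    }
}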
Combine all the IDs together into a single large IN clause, so it reads like:
select Amount, Client, Pallet from table where ID in (1,3,5,7,9,11) and Amount > 0;
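A rough sketch of building that IN list safely in C#, with one parameter per ID (splitting the 6000 IDs into smaller chunks is left out here):
// Rough sketch: build "ID in (@p0, @p1, ...)" with one parameter per ID
// instead of concatenating the values into the SQL text.
var names = ID.Select((id, i) => "@p" + i).ToList();
var sql = "select Amount, Client, Pallet from [table] " +
          "where ID in (" + string.Join(",", names) + ") and Amount > 0;";

using (var cmd = new SqlCommand(sql, sqlCon))
{
    for (int i = 0; i < ID.Count; i++)
        cmd.Parameters.Add("@p" + i, SqlDbType.VarChar).Value = ID[i];

    using (var da = new SqlDataAdapter(cmd))
        da.Fill(dataTable);
}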
"I have tried using SqlDataReaders and SqlDataAdapters"
It sounds like you might be open to other APIs. Using Linq2SQL or Linq2Entities:
var someListIds = new List<int> { 1,5,6,7 }; //imagine you load this from where ever
db.MyTable.Where( mt => someListIds.Contains(mt.ID) );
This is safe in terms of avoiding potential SQL injection vulnerabilities and will generate an IN clause. Note however that someListIds can be so large that the generated SQL query exceeds the length limits, but the same is true of any other technique involving an IN clause. You can easily work around that by partitioning the list into large chunks, and it will still be tremendously better than one query per ID.
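For instance, a rough sketch of the chunking idea (the chunk size of 1000 is arbitrary, and MyTable stands for whatever entity type db.MyTable exposes):
// Rough sketch: split the ID list into chunks so each generated IN clause stays small.
const int chunkSize = 1000;
var results = new List<MyTable>();   // MyTable: the hypothetical entity type behind db.MyTable

for (int i = 0; i < someListIds.Count; i += chunkSize)
{
    var chunk = someListIds.Skip(i).Take(chunkSize).ToList();
    results.AddRange(db.MyTable.Where(mt => chunk.Contains(mt.ID)));
}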
Use Table-Valued Parameters
With them you can pass a C# DataTable with your values into a stored procedure as a result set/table, which you can join to and do a simple:
SELECT *
FROM YourTable
WHERE NOT EXISTS (SELECT * FROM InputResultSet WHERE YourConditions)
Use the IN operator. Your problem is very common and it has a name: the N+1 performance problem.
Where are you getting the IDs from? If it is from another query, then consider grouping them into one.
Rather than performing a separate query for every single ID that you have, execute one query to get the amount for every ID that you want to check (or, if you have too many IDs to put in one query, batch them into groups of a few thousand).
Import the data directly into SQL Server. Use a stored procedure to output the data you need.
If you must consume it in the app tier, use the xml data type to pass it into a stored procedure.
You can import the data from the Excel file into SQL Server as a table (using the import data wizard). Then you can perform a single query in SQL Server where you join this table to your lookup table, joining on the ID field. There are a few more steps to this process, but it's a lot neater than trying to concatenate all the IDs into a much longer query.
I'm assuming a certain amount of access privileges to the server here, but this is what I'd do given the access I normally have. I'm also assuming this is a one-off task. If not, the import of the data to SQL Server can be done programmatically as well.
The IN clause has limits, so if you go with that approach, make sure you use a batch size and process a limited number of IDs at a time; otherwise you will hit another issue.
As @Robertharvey has noted, if there are not a lot of IDs and there are no transactions occurring, then just pull all the IDs at once into memory into a dictionary-like object and process them there (a sketch of that is below). Six thousand values is not a lot, and a single select could return all of them within a few seconds.
Just remember that if another process is updating the data, your local cached version may be stale.
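A rough sketch of that caching approach (it assumes ID is a varchar and Amount a decimal, and excelRows stands for the ID/amount pairs read from the spreadsheet):
// Rough sketch: pull every relevant row into a dictionary once, then answer
// all the spreadsheet lookups from memory.
var amounts = new Dictionary<string, decimal>();

using (var cmd = new SqlCommand("select ID, Amount from [table] where Amount > 0;", sqlCon))
using (var reader = cmd.ExecuteReader())
{
    while (reader.Read())
        amounts[reader.GetString(0)] = reader.GetDecimal(1);
}

// excelRows: hypothetical ID -> amount pairs read from the Excel file
foreach (var pair in excelRows)
{
    decimal dbAmount;
    if (amounts.TryGetValue(pair.Key, out dbAmount) && dbAmount >= pair.Value)
    {
        // ID exists and the database amount covers the spreadsheet amount
    }
}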
There is another way to handle this: build an XML document of the IDs and pass it to a stored procedure. Here is the code for the procedure.
IF OBJECT_ID('GetDataFromDatabase') IS NOT NULL
BEGIN
    DROP PROCEDURE GetDataFromDatabase
END
GO
--Definition
CREATE PROCEDURE GetDataFromDatabase
    @xmlData XML
AS
BEGIN
    DECLARE @DocHandle INT
    DECLARE @idList TABLE (id INT)
    EXEC sp_xml_preparedocument @DocHandle OUTPUT, @xmlData;
    INSERT INTO @idList (id) SELECT x.id FROM OPENXML(@DocHandle, '//data', 2) WITH ([id] INT) x
    EXEC sp_xml_removedocument @DocHandle;
    --SELECT * FROM @idList
    SELECT t.Amount, t.Client, t.Pallet FROM yourTable t INNER JOIN @idList x ON t.id = x.id AND t.Amount > 0;
END
GO
--Uses
EXEC GetDataFromDatabase @xmlData = '<root><data><id>1</id></data><data><id>2</id></data></root>'
You can put any logic in the procedure. You can also pass the amount along with each id via XML, and you can pass a huge list of ids this way.
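On the C# side, the call could look roughly like this (it assumes the ids are plain integers, matching the XML shape used in the example above):
// Rough sketch: build the id XML and pass it to the procedure as an xml parameter.
var xml = new StringBuilder("<root>");
foreach (var id in ID)
    xml.AppendFormat("<data><id>{0}</id></data>", id);
xml.Append("</root>");

using (var cmd = new SqlCommand("GetDataFromDatabase", sqlCon))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@xmlData", SqlDbType.Xml).Value = xml.ToString();

    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // process Amount, Client, Pallet for each matching id
        }
    }
}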
SqlDataAdapter objects are too heavy for that.
Firstly, using stored procedures will be faster.
Secondly, group the operation: pass the list of identifiers to the database side as a parameter, run a query against those parameters, and return the processed result.
It will be quick and efficient, as all the data processing logic stays on the database server.
You can select the whole result set (or join multiple 'limited' result sets) and save it all to a DataTable. Then you can do selects and updates (if needed) directly on the DataTable, and plug the new data back afterwards. It is not super efficient memory-wise, but it is often a very good (and sometimes the only) solution when working in bulk and needing it to be very fast.
So if you have thousands of records, it might take a couple of minutes to populate all the records into the DataTable.
Then you can search your table like this:
string findMatch = "id = value";
DataRow[] rowsFound = dataTable.Select(findMatch);
Then just loop: foreach (DataRow dr in rowsFound)
I am doing a conversion with SqlBulkCopy. I currently have an IList collection of classes which, basically, I can convert to a DataTable for use with SqlBulkCopy.
The problem is that I can have 3 records with the same ID.
Let me explain... here are 3 records:
ID Name Address
1 Scott London
1 Mark London
1 Manchester
Basically I need to insert them sequentially. I insert record 1 if it doesn't exist; for the next record, if the ID already exists I need to update the existing record rather than insert a new one (notice the ID is still 1), so in the case of the second record I replace both the Name and Address columns on ID 1.
Finally, on the 3rd record, you notice that Name doesn't exist but it's ID 1 and has an address of Manchester, so I need to update the record WITHOUT CHANGING Name, only updating the Address to Manchester. Hence the 3rd record would make ID 1:
ID Name Address
1 Mark Manchester
Any ideas how I can do this? I am at a loss.
Thanks.
EDIT
OK, a little update. I will manage and merge my records before using SqlBulkCopy. Is it possible to get a list of what succeeded and what failed, or is it a case of all or nothing? I presume there is no alternative to SqlBulkCopy for doing updates?
It would be ideal to be able to insert everything and have the ones that failed inserted into a temp table, so I would only need to worry about correcting the ones in my failed table, as I know the others are all OK.
Since you need to process that data into a DataTable anyway (unless you are writing a custom IDataReader), you should merge the records before giving them to SqlBulkCopy; for example (in pseudo code):
/* create empty data-table */
foreach (item in list) {
    var row = /* try to get existing row from data-table based on item's id */
    if (row == null) { /* create and append a new row to data-table */ }
    else { /* merge non-trivial properties into the existing row */ }
}
then pass the DataTable to SqlBulkCopy once you have the desired data.
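In concrete terms, the merge step could look roughly like this (the column names come from the example in the question; records, item.ID, item.Name and item.Address are assumed names for the IList and its properties):
// Rough sketch: collapse the list to one DataRow per ID before bulk copying.
var table = new DataTable();
table.Columns.Add("ID", typeof(int));
table.Columns.Add("Name", typeof(string));
table.Columns.Add("Address", typeof(string));
table.PrimaryKey = new[] { table.Columns["ID"] };

foreach (var item in records)                  // records: the IList from the question
{
    var row = table.Rows.Find(item.ID);        // existing row for this ID, if any
    if (row == null)
    {
        row = table.NewRow();
        row["ID"] = item.ID;
        table.Rows.Add(row);
    }

    // Only overwrite a column when the incoming record actually has a value for it.
    if (!string.IsNullOrEmpty(item.Name)) row["Name"] = item.Name;
    if (!string.IsNullOrEmpty(item.Address)) row["Address"] = item.Address;
}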
Re the edit; in that scenario, I would upload to a staging table (just a regular table that has a schema like the uploaded data, but typically no foreign keys etc), then use regular TSQL to move the data into the transactional tables. In addition to full TSQL support this also allows better logging of operations. In particular, perhaps look at the OUTPUT clause of INSERT which can help complex bulk operations.
You can't do updates with bulk copy (bulk insert), only insert. Hence the name.
You need to fix the data before you insert it. If this means you have updates to pre-existing rows, you can't insert those, as that will generate a key conflict.
You can either bulk insert into a temporary table and then run the appropriate insert or update statements, only insert the new rows and issue update statements for the rest, or delete the pre-existing rows after fetching them and fixing the data before reinserting.
But there's no way to persuade bulk copy to update an existing row.
Well, I am trying to do something nice (nice for me, simple for you guys). I was told I can do it, but I have no idea where to start.
I have two DDLs on a page, and on Page_Load I need to populate both. Each one gets its data from a different table, with no relation between them (suppliers/categories). I know how to do it with two DB connections, that is easy, but I was told that I can do it with one connection.
I was not told whether it is only the connection that is shared, or whether one SP should deal with both tables (it doesn't seem logical to me that I can do it with only one SP... but what do I know.. lol).
thanks,
Erez
You can run both queries in the SP:
your_sp
select * from table1;
select * from table2;
Then on the C# side, you open the data reader and use the reader.NextResult() method to move to the next result set.
while (reader.Read())
{
// process results from first query
}
reader.NextResult();
while (reader.Read())
{
// process results from second query
}
I think you could separate your SQL statements by a semicolon.
e.g. SELECT myColumns FROM Suppliers; SELECT otherColumns FROM Categories
You could open the DataReader in the regular way.
Once you are done reading all the data for the first result set, you can make a call to NextResult, which will execute the next statement and give you the reader for the second result set.
Note: I have not done this. But this is what I can make out of the documentation.
Not exactly the way I'd do it (I'd use 2 object data sources), but if you really only want to use one connection, do this:
Create one SQL statement that contains 2 SELECT statements, and load that into a C# DataSet.
Bind your first ddl to DataSet.Tables[0].
Bind your second ddl to DataSet.Tables[1].
There ya go. 1 connection.
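A rough sketch of that approach (the connection string, table/column names and the ddlSuppliers/ddlCategories control IDs are placeholders):
// Rough sketch: one connection, one command with two SELECTs, one DataSet with two tables.
var ds = new DataSet();

using (var conn = new SqlConnection(connectionString))
using (var da = new SqlDataAdapter(
    "SELECT SupplierID, SupplierName FROM Suppliers; " +
    "SELECT CategoryID, CategoryName FROM Categories;", conn))
{
    da.Fill(ds);    // first SELECT -> ds.Tables[0], second SELECT -> ds.Tables[1]
}

ddlSuppliers.DataSource = ds.Tables[0];
ddlSuppliers.DataTextField = "SupplierName";
ddlSuppliers.DataValueField = "SupplierID";
ddlSuppliers.DataBind();

ddlCategories.DataSource = ds.Tables[1];
ddlCategories.DataTextField = "CategoryName";
ddlCategories.DataValueField = "CategoryID";
ddlCategories.DataBind();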
EDIT: If you really want to use a DataReader...
You'd probably need 2 SELECT statements, with an additional field to distinguish which DDL you're inserting into. So, something like this:
SELECT 'Table1' AS TableName, Name, Value
FROM dbo.Table1
UNION
SELECT 'Table2' AS TableName, Name, Value
FROM dbo.Table2
and then in whatever method you're using to load items into your DDLs, check the table name to see which one to add it to.
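For example, a rough sketch of that loading loop (unionQuery stands for the UNION statement above, and ddlFirst/ddlSecond are placeholder control names):
// Rough sketch: one reader over the UNION query, routing each row to the right DropDownList.
using (var cmd = new SqlCommand(unionQuery, conn))
using (var reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        var item = new ListItem(reader["Name"].ToString(), reader["Value"].ToString());

        if (reader["TableName"].ToString() == "Table1")
            ddlFirst.Items.Add(item);
        else
            ddlSecond.Items.Add(item);
    }
}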