Querying data from SQL - C#

I need to query a table from the database which has 400 rows and 24 columns, so that for each row, and then for each column of that row, I can run some C# code (the column values drive which code executes).
At the moment I query each row from the table again and again using a select statement, store it in a custom list, and perform custom operations on it.
Is this the best and fastest way of doing it? Or should I query the whole table once and store it somewhere, perhaps in a DataSet, and then run my custom code against the information in each row?

You can fetch the table from the database once, store it in a DataTable, and then use LINQ to select a column, something like this:
var data = dt.AsEnumerable().Select(s => s.Field<string>("myColumnName")).ToArray();
If you don't need the other columns anywhere in your code, you should select only the useful columns from the database in the first place.
You can also select multiple columns using LINQ; the values will be stored in an anonymous type:
var multipleData = from row in dt.AsEnumerable()
                   select new
                   {
                       Value1 = row["Column1"].ToString(),
                       Value2 = row["Column2"].ToString()
                   };

Assuming that each field is 1000 bytes, the total memory to hold your 400 rows of 24 columns would be about 9.6 MB. Peanuts! Just read the whole table into a DataTable and process it as you wish.
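A minimal sketch of that approach, assuming SQL Server; the table name and connection string are placeholders:
using System.Data;
using System.Data.SqlClient;

// Pull the whole table across once and keep it in memory.
var dt = new DataTable();
using (var connection = new SqlConnection("...your connection string..."))
using (var adapter = new SqlDataAdapter("SELECT * FROM MyTable", connection))
{
    adapter.Fill(dt); // Fill opens and closes the connection itself
}

// Now iterate rows and columns without touching the database again.
foreach (DataRow row in dt.Rows)
{
    foreach (DataColumn col in dt.Columns)
    {
        object value = row[col]; // run your per-column C# logic here
    }
}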

Select all the records from the DB table that are of concern to you, and copy the selected records into a DataTable.
Pseudo code:
// dt is the DataTable
foreach (DataRow dr in dt.Rows)
{
    // perform your operation
    string str = dr["columnName"].ToString();
}

400 rows isn't a massive amount; however, it depends on the data in each column and how often you are likely to run the query. If all you are going to do is run the query once and work through the output as you read it, use a DataReader instead.
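A minimal forward-only sketch with SqlDataReader; the table, column names, and connection string are placeholders, and the columns are assumed to be strings:
using (var connection = new SqlConnection("...your connection string..."))
using (var command = new SqlCommand("SELECT Col1, Col2 FROM MyTable", connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // Each row is streamed once; no table is materialized in memory.
            string col1 = reader.GetString(0);
            string col2 = reader.GetString(1);
            // run your per-row logic here
        }
    }
}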

If it's only 400 records, I'd fetch them all at once, store them in instances of a class, and iterate over each instance.
Something like:
Class:
public class MyTableClass
{
    public string Property1 { get; set; }
    public string Property2 { get; set; }
    public int Property3 { get; set; }
    // etc.
}
Logic:
ICollection<MyTableClass> lstMyTableClass = (
    from m in db.mytableclass
    select m
).ToList();
return lstMyTableClass;
And then a loop:
foreach (var myTableInstance in lstMyTableClass)
{
    myTableInstance.DoMyStuff();
}

If the number of records will always be below a thousand, just query all the records once and keep them in a List.
Once the data is in the List, you can query it any number of times using LINQ without hitting the database for each request.
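A small sketch of that pattern; the Product class and the loading step are stand-ins for whatever your schema and data access actually look like:
public class Product
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}

// Query the database once and cache the result.
List<Product> products = LoadAllProducts(); // hypothetical one-time loader

// From here on, query the in-memory list as often as you like with LINQ.
var cheap = products.Where(p => p.Price < 10m).ToList();
var byName = products.OrderBy(p => p.Name).ToList();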

Related

Avoid duplication in DataTable query and build

What would be the right way to avoid duplication when querying a DataTable and then saving the result to another DataTable? I'm using the pattern below, which gets very error-prone once tables grow. I looked at the hints below: with the first one, CopyToDataTable() doesn't look really applicable, and the second is much too complex for the task. I would like to split the code below into two separate methods (the first to build the query and the second to retrieve the DataTable). Perhaps if I avoided the anonymous type in the query it would be easier to avoid hardcoding all the column names, but I'm somewhat lost with this.
Filling a DataSet or DataTable from a LINQ query result set
or
https://msdn.microsoft.com/en-us/library/bb669096%28v=vs.110%29.aspx
public DataTable retrieveReadyReadingDataTable()
{
    DataTable dtblReadyToSaveToDb = RetrieveDataTableExConstraints();
    var query = from scr in scrTable.AsEnumerable()
                from products in productsTable.AsEnumerable()
                where scr.Field<string>("EAN") == products.Field<string>("EAN")
                select new
                {
                    Date = DateTime.Today.Date,
                    ProductId = products.Field<string>("SkuCode"),
                    Distributor = scr.Field<string>("Distributor"),
                    Price = float.Parse(scr.Field<string>("Price")),
                    Url = scr.Field<string>("Url")
                };
    foreach (var q in query)
    {
        DataRow newRow = dtblReadyToSaveToDb.Rows.Add();
        newRow.SetField("Date", q.Date);
        newRow.SetField("ProductId", q.ProductId);
        newRow.SetField("Distributor", q.Distributor);
        newRow.SetField("Price", q.Price);
        newRow.SetField("Url", q.Url);
    }
    return dtblReadyToSaveToDb;
}
Firstly, you have to decide what "duplicate" means in your case. Judging by your code, I would say a duplicate is a row with the same values in the columns Date, ProductId and Distributor, so add a multi-column primary key on those columns first.
Secondly, add code that first queries the existing rows and then compares them to the rows you want to create. If a match is found, simply don't insert a new row.
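A hedged sketch of that check against the target DataTable, using the column names from the question's code:
// Declare a multi-column primary key on the target table.
dtblReadyToSaveToDb.PrimaryKey = new[]
{
    dtblReadyToSaveToDb.Columns["Date"],
    dtblReadyToSaveToDb.Columns["ProductId"],
    dtblReadyToSaveToDb.Columns["Distributor"]
};

// Before adding, check whether a row with this key already exists.
foreach (var q in query)
{
    object[] key = { q.Date, q.ProductId, q.Distributor };
    if (dtblReadyToSaveToDb.Rows.Contains(key))
        continue; // duplicate, skip it

    DataRow newRow = dtblReadyToSaveToDb.Rows.Add();
    newRow.SetField("Date", q.Date);
    newRow.SetField("ProductId", q.ProductId);
    newRow.SetField("Distributor", q.Distributor);
    newRow.SetField("Price", q.Price);
    newRow.SetField("Url", q.Url);
}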

check if values are in datatable

I have an array of strings:
private static string[] dataNames = new string[] { "value1", "value2", ... };
I have a table in my SQL database with a column of type varchar. I want to check which values from the string array exist in that column.
I tried this:
public static void testProducts()
{
    string query = "select * from myTable";
    // dt is the DataTable filled from that query
    var dataTable = from row in dt.AsEnumerable()
                    where String.Equals(row.Field<string>("columnName"), dataNames[0], StringComparison.OrdinalIgnoreCase)
                    select new
                    {
                        Name = row.Field<string>("columnName")
                    };
    foreach (var oneName in dataTable)
    {
        Console.WriteLine(oneName.Name);
    }
}
That is not the actual code; I'm just trying to show you the important part.
As you can see, the code checks against dataNames[index].
It works fine, but I have to run it 56 times because the array has 56 elements, changing the index each time.
Is there a faster way, please?
The comparison must be case-insensitive.
First, you should not filter records in memory but in the database.
But if you already have a DataTable and you need to find the rows where one of its fields is in your string[], you can use LINQ to DataTable.
For example, Enumerable.Contains:
var matchingRows = dt.AsEnumerable()
    .Where(row => dataNames.Contains(row.Field<string>("columnName"), StringComparer.OrdinalIgnoreCase));
foreach (DataRow row in matchingRows)
    Console.WriteLine(row.Field<string>("columnName"));
Here is a more efficient (but less readable) approach using Enumerable.Join:
var matchingRows = dt.AsEnumerable().Join(dataNames,
    row => row.Field<string>("columnName"),
    name => name,
    (row, name) => row,
    StringComparer.OrdinalIgnoreCase);
Try to use Contains; it should return all the values that you need (note that without a comparer this check is case-sensitive):
var data = from row in dt.AsEnumerable()
           where dataNames.Contains(row.Field<string>("columnName"))
           select new
           {
               Name = row.Field<string>("columnName")
           };
Passing a list of values is surprisingly difficult. Passing a table-valued parameter requires creating a T-SQL data type on the server. You can pass an XML document containing the parameters and decode that using SQL Server's convoluted XML syntax.
Below is a relatively simple alternative that works for up to a thousand values. The goal is to build an IN query:
select col1 from YourTable where col1 in ('val1', 'val2', ...)
In C#, you should probably use parameters:
select col1 from YourTable where col1 in (@par1, @par2, ...)
Which you can pass like:
var com = yourConnection.CreateCommand();
com.CommandText = "select col1 from YourTable where col1 in (";
for (var i = 0; i < dataNames.Length; i++)
{
    var parName = string.Format("@par{0}", i + 1);
    com.Parameters.AddWithValue(parName, dataNames[i]);
    com.CommandText += parName;
    if (i + 1 != dataNames.Length)
        com.CommandText += ", ";
}
com.CommandText += ");";
var existingValues = new List<string>();
using (var reader = com.ExecuteReader())
{
    while (reader.Read())
        existingValues.Add((string)reader["col1"]);
}
Given the complexity of this solution, I'd go for Max's or Tim's answer. You could consider this answer if the table is very large and you can't copy it into memory.
Sorry, I don't have a lot of relevant code here, but I did a similar thing quite some time ago, so I will try to explain.
Essentially, I had a long list of item IDs that I needed to return to the client, which then told the server which ones it wanted loaded at any particular time. The original query passed the values as a comma-separated set of strings (they were actually GUIDs). The problem was that once the number of entries hit 100 there was a noticeable lag for the user; once it got to 1000 possible entries the query took a minute and a half; and when we went to 10,000, let's just say you could boil the kettle and drink your tea/coffee before it came back.
The answer was to stick the values to check directly into a temporary table, where one row of the table represented one value to check against. The temporary table was keyed against the user who performed the search, so other users' searches wouldn't corrupt each other, and when the user logged out we knew which values in the search table could be removed.
How best to load the reference table will depend on where this data comes from. But once it is there, your new query will look something like:
SELECT COUNT(t.columnName), rt.dataName
FROM table t
RIGHT JOIN referenceTable rt ON rt.dataName = t.columnName
WHERE rt.userRef = @UserIdValue
GROUP BY rt.dataName
The RIGHT JOIN here gives you a row for each of your reference-table values, including a count of 0 if the value did not appear in your table. If you don't care which ones don't appear, changing it to an INNER JOIN will eliminate the zeros.
The WHERE clause ensures that your search only returns the items you are looking for at the moment. The design should assume that concurrent access will someday occur here (even if it doesn't at the moment), so writing something in to protect against it is advisable.
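One hedged way to load such a reference table from C#; the table shape, column names, and user key are illustrative, not from the original answer:
// Build an in-memory table shaped like the server-side reference table.
var refTable = new DataTable();
refTable.Columns.Add("userRef", typeof(Guid));
refTable.Columns.Add("dataName", typeof(string));

var currentUserId = Guid.NewGuid(); // stand-in for the searching user's key
foreach (var name in dataNames)
    refTable.Rows.Add(currentUserId, name);

// Push all values to the server in one bulk operation.
using (var connection = new SqlConnection("...your connection string..."))
{
    connection.Open();
    using (var bulk = new SqlBulkCopy(connection))
    {
        bulk.DestinationTableName = "dbo.referenceTable";
        bulk.WriteToServer(refTable);
    }
}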

Querying inside a Dataset C#

I have an ADO.NET DataSet which is populated by a certain query,
say
SELECT ID, USER, PRODUCT, COUNT FROM PRODUCTION
Without using a WHERE clause, I need to derive some results from the dataset. Say I want to get the USER and product COUNT of the user who has the maximum product count, and I want to do it using the existing dataset rather than going back to the database.
Any idea of a way to query inside the DataSet? Since it contains DataTables, my thought was that there should be some way to query them.
Traditional SQL queries cannot be applied to the DataSet. The following is possible, however:
Filter rows using DataTable.Select. See here for detailed information about expressions in DataTables.
Calculate totals etc. using DataTable.Compute.
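For instance, a minimal Compute sketch against the question's PRODUCTION table; the COUNT column is bracketed because its name collides with the aggregate, and I'm assuming it is numeric:
DataTable production = ds.Tables[0]; // the table behind SELECT ID, USER, PRODUCT, COUNT ...

// Highest product count across all rows.
object maxCount = production.Compute("MAX([COUNT])", string.Empty);

// The row(s) holding that maximum, via DataTable.Select.
DataRow[] topRows = production.Select("[COUNT] = " + maxCount);
foreach (DataRow row in topRows)
    Console.WriteLine("{0}: {1}", row["USER"], row["COUNT"]);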
If these two don't do the trick, there's always LINQ.
A quick-and-dirty LINQ example (which doesn't return a DataTable, but a sequence of an anonymous type):
var joinedResult = dataTable1
    // filtering:
    .Select("MyColumn = 'value'")
    // joining tables (both keys cast to a common type so the join compiles):
    .Join(
        dataTable2.AsEnumerable(),
        row => (long?)row.Field<long>("PrimaryKeyField"),
        row => row.Field<long?>("ForeignKeyField"),
        // selecting a custom result:
        (row1, row2) => new { AnotherColumn = row1.Field<string>("AnotherColumn") });
AsEnumerable converts a DataTable into an IEnumerable on which LINQ queries can be performed. If you are new to LINQ, check out this introduction.
Yes, you can use the DataTable.Select method:
DataTable table = DataSet1.Tables["Orders"];

// Presuming the DataTable has a column named Date.
string expression = "Date > #1/1/00#";

// Use the Select method to find all rows matching the filter.
DataRow[] foundRows = table.Select(expression);

// Print column 0 of each returned row.
for (int i = 0; i < foundRows.Length; i++)
{
    Console.WriteLine(foundRows[i][0]);
}
Also see this link.
You can do cross-table queries of a Dataset object using LINQ to DataSet:
msdn.microsoft.com/en-us/library/bb386969.aspx

What is the best way to fast-insert SQL data and dependent rows?

I need to write some code to insert around 3 million rows of data.
At the same time I need to insert the same number of companion rows.
I.e. the schema looks like this:
Item
- Id
- Title
Property
- Id
- FK_Item
- Value
My first attempt was something vaguely like this:
BaseDataContext db = new BaseDataContext();
foreach (var value in values)
{
    Item i = new Item() { Title = value["title"] };
    ItemProperty ip = new ItemProperty() { Item = i, Value = value["value"] };
    db.Items.InsertOnSubmit(i);
    db.ItemProperties.InsertOnSubmit(ip);
}
db.SubmitChanges();
Obviously this was terribly slow so I'm now using something like this:
BaseDataContext db = new BaseDataContext();
DataTable dt = new DataTable("Item");
dt.Columns.Add("Title", typeof(string));
foreach (var value in values)
{
    DataRow item = dt.NewRow();
    item["Title"] = value["title"];
    dt.Rows.Add(item);
}
using (System.Data.SqlClient.SqlBulkCopy sb = new System.Data.SqlClient.SqlBulkCopy(db.Connection.ConnectionString))
{
    sb.DestinationTableName = "dbo.Item";
    sb.ColumnMappings.Add(new SqlBulkCopyColumnMapping("Title", "Title"));
    sb.WriteToServer(dt);
}
But this doesn't allow me to add the corresponding 'Property' rows.
I'm thinking the best solution might be to add a Stored Procedure like this one that generically lets me do a bulk insert (or at least multiple inserts, but I can probably disable logging in the stored procedure somehow for performance) and then returns the corresponding ids.
Can anyone think of a better (i.e. more succinct, near equal performance) solution?
To combine the two best previous answers and add in the missing piece for the IDs:
1) Use BCP to load the data into a temporary "staging" table defined like this:
CREATE TABLE stage (Title VARCHAR(??), Value {whatever});
and you'll need the appropriate index for performance later:
CREATE INDEX ix_stage ON stage (Title);
2) Use SQL INSERT to load the Item table:
INSERT INTO Item (Title) SELECT Title FROM stage;
3) Finally, load the Property table by joining stage with Item:
INSERT INTO Property (FK_Item, Value)
SELECT Item.Id, stage.Value
FROM stage
JOIN Item ON Item.Title = stage.Title;
The best way to move that much data into SQL Server is bcp. Assuming that the data starts in some sort of file, you'll need to write a small script to funnel the data into the two tables. Alternately you could use bcp to funnel the data into a single table and then use an SP to INSERT the data into the two tables.
Bulk copy the data into a temporary table, and then call a stored proc that splits the data into the two tables you need to populate.
You can bulk copy in code as well, using the .NET SqlBulkCopy class.
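A hedged sketch of that combination; the staging-table and stored-procedure names are made up for illustration, and dt is the DataTable built as in the question:
using (var connection = new SqlConnection("...your connection string..."))
{
    connection.Open();

    // 1) Bulk copy the raw rows into a staging table.
    using (var bulk = new SqlBulkCopy(connection))
    {
        bulk.DestinationTableName = "dbo.Stage"; // hypothetical staging table
        bulk.WriteToServer(dt);
    }

    // 2) Let a stored procedure split the staged rows into Item and Property.
    using (var command = new SqlCommand("dbo.SplitStageIntoItemAndProperty", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        command.ExecuteNonQuery();
    }
}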

How to reorder items in a DataView object

I have a DataView object with productID and productName.
I get all items from the Products table and store them in the cache.
I want to get the items that a user bought, in the order he bought them, without using another query or joining tables.
I.e.
Products DataView
1. Foo
2. Foo2
3. Foo3
Filtering to ProductsBoughtByUser using RowFilter (productID in (1,3)):
UserProducts DataView
1. Foo
3. Foo3
Now the user bought Foo3 first; how do I reorder the items into the correct (chronological) order, i.e. 3, 1?
Thanks
Here is the syntax for DataView.Sort:
private void SortByTwoColumns()
{
    // Get the DefaultView of a DataTable.
    DataView view = DataTable1.DefaultView;
    // State sorts ascending (the default); ZipCode sorts descending.
    view.Sort = "State, ZipCode DESC";
}
From your original post, it sounds like you only have productID and productName. You can't sort these in memory unless you're also retrieving a purchase/order date.
Probably the easiest thing to do would be to add the date column (PurchaseDate) to the SQL statement you're already using, then do view.Sort = "PurchaseDate DESC";
That way, you don't have to write another statement or hit the database a second time for this sort.
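A small sketch of that, assuming the query now also returns a PurchaseDate column and productsTable is the cached Products DataTable:
DataView userProducts = productsTable.DefaultView;
userProducts.RowFilter = "productID IN (1, 3)"; // this user's products
userProducts.Sort = "PurchaseDate DESC";        // most recent purchase first

foreach (DataRowView row in userProducts)
    Console.WriteLine("{0}: {1}", row["productID"], row["productName"]);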
Edit:
The DataRowView has integer and string indexers, e.g.:
foreach (DataRowView row in dataView)
{
    var value = row["COLUMN_NAME"];
}
The value is returned as object, so you'll have to check for DBNull and convert it to your data type.
