Passing List<> to SQL Stored Procedure - c#

I've often had to load multiple items to a particular record in the database. For example: a web page displays items to include for a single report, all of which are records in the database (Report is a record in the Report table, Items are records in Item table). A user is selecting items to include in a single report via a web app, and let's say they select 3 items and submit. The process will add these 3 items to this report by adding records to a table called ReportItems (ReportId,ItemId).
Currently, I would do something like this in the code:
public void AddItemsToReport(string connStr, int Id, List<int> itemList)
{
    Database db = DatabaseFactory.CreateDatabase(connStr);
    string sqlCommand = "AddItemsToReport";
    DbCommand dbCommand = db.GetStoredProcCommand(sqlCommand);

    // Build a '~'-delimited string of item ids, e.g. "1~2~3"
    string items = "";
    foreach (int i in itemList)
        items += string.Format("{0}~", i);
    if (items.Length > 0)
        items = items.Substring(0, items.Length - 1); // trim trailing '~'

    // Add parameters
    db.AddInParameter(dbCommand, "ReportId", DbType.Int32, Id);
    db.AddInParameter(dbCommand, "Items", DbType.String, items);
    db.ExecuteNonQuery(dbCommand);
}
and this in the Stored procedure:
INSERT INTO ReportItem (ReportId, ItemId)
SELECT @ReportId, Id
FROM fn_GetIntTableFromList(@Items, '~')
Where the function returns a one column table of integers.
My question is this: is there a better way to handle something like this? Note, I'm not asking about database normalizing or anything like that, my question relates specifically with the code.

If going to SQL Server 2008 is an option for you, there's a new feature called "Table-valued parameters" to solve this exact problem.
Check out more details on TVP here and here or just ask Google for "SQL Server 2008 table-valued parameters" - you'll find plenty of info and samples.
Highly recommended - if you can move to SQL Server 2008...
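For a concrete picture of what that looks like, here is a minimal sketch of a TVP version of the question's AddItemsToReport using plain ADO.NET. The dbo.IntList table type, the procedure body, and the parameter names are assumptions for illustration, not from the original post:
// requires: using System.Collections.Generic; using System.Data; using System.Data.SqlClient;
// Assumed SQL setup (illustrative):
//   CREATE TYPE dbo.IntList AS TABLE (Id INT NOT NULL);
//   CREATE PROCEDURE dbo.AddItemsToReport @ReportId INT, @Items dbo.IntList READONLY
//   AS INSERT INTO ReportItem (ReportId, ItemId) SELECT @ReportId, Id FROM @Items;
public void AddItemsToReport(string connStr, int reportId, List<int> itemList)
{
    // Shape the ids as a single-column DataTable matching the table type.
    var table = new DataTable();
    table.Columns.Add("Id", typeof(int));
    foreach (int id in itemList)
        table.Rows.Add(id);

    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand("dbo.AddItemsToReport", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@ReportId", reportId);

        // Structured is the SqlDbType used for table-valued parameters.
        SqlParameter items = cmd.Parameters.AddWithValue("@Items", table);
        items.SqlDbType = SqlDbType.Structured;
        items.TypeName = "dbo.IntList"; // the hypothetical table type above

        conn.Open();
        cmd.ExecuteNonQuery();
    }
}
The whole list arrives in one round trip, and the stored procedure can join against @Items like any other table.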

Your string join logic can probably be simplified:
string items = string.Join("~", itemList.Select(item => item.ToString()).ToArray());
That will save you some string concatenation, which is expensive in .Net.
I don't think anything is wrong with the way you are saving the items. You are limiting trips to the db, which is a good thing. If your data structure was more complex than a list of ints, I would suggest XML.
Note: I was asked in the comments whether this would save any string concatenation (it does indeed). I think it is an excellent question and would like to follow up on that.
If you peel open string.Join with Reflector you will see that Microsoft is using a couple of unsafe (in the .Net sense of the word) techniques, including using a char pointer and a structure called UnSafeCharBuffer. What they are doing, when you really boil it down, is using pointers to walk across an empty string and build up the join. Remember that the main reason string concatenation is so expensive in .Net is that a new string object is placed on the heap for every concatenation, because string is immutable. Those memory operations are expensive. String.Join(..) is essentially allocating the memory once, then operating upon it with a pointer. Very fast.
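If you want to see the cost difference yourself, a quick (unscientific) timing sketch along these lines shows the naive loop falling behind as the list grows; the counts and output here are illustrative only:
// requires: using System; using System.Diagnostics; using System.Linq;
int[] items = Enumerable.Range(0, 50000).ToArray();

var sw = Stopwatch.StartNew();
string slow = "";
foreach (int i in items)
    slow += i + "~"; // each += allocates a brand-new string on the heap
Console.WriteLine("concat: {0} ms", sw.ElapsedMilliseconds);

sw = Stopwatch.StartNew();
string fast = string.Join("~", items.Select(i => i.ToString()).ToArray());
Console.WriteLine("join: {0} ms", sw.ElapsedMilliseconds);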

One potential issue with your technique is that it doesn't handle very large lists - you may exceed the maximum string length for your database. I use a helper method that concatenates the integer values into an enumeration of strings, each of which is less than a specified maximum (the following implementation also optionally checks for and removes duplicate ids):
public static IEnumerable<string> ConcatenateValues(IEnumerable<int> values, string separator, int maxLength, bool skipDuplicates)
{
    IDictionary<int, string> valueDictionary = null;
    StringBuilder sb = new StringBuilder();
    if (skipDuplicates)
    {
        valueDictionary = new Dictionary<int, string>();
    }
    foreach (int value in values)
    {
        if (skipDuplicates)
        {
            if (valueDictionary.ContainsKey(value)) continue;
            valueDictionary.Add(value, "");
        }
        string s = value.ToString(CultureInfo.InvariantCulture);
        if ((sb.Length + separator.Length + s.Length) > maxLength)
        {
            // Max length reached, yield the result and start again
            if (sb.Length > 0) yield return sb.ToString();
            sb.Length = 0;
        }
        if (sb.Length > 0) sb.Append(separator);
        sb.Append(s);
    }
    // Yield whatever's left over
    if (sb.Length > 0) yield return sb.ToString();
}
Then you use it something like:
using (SqlCommand command = ...)
{
    command.Connection = ...;
    command.Transaction = ...; // if in a transaction
    SqlParameter parameter = command.Parameters.Add("@Items", ...);
    foreach (string itemList in ConcatenateValues(values, "~", 8000, false))
    {
        parameter.Value = itemList;
        command.ExecuteNonQuery();
    }
}

You either do what you've already got (pass in a delimited string and then parse it out to a table value), or you pass in a wodge of XML and do much the same:
http://weblogs.asp.net/jgalloway/archive/2007/02/16/passing-lists-to-sql-server-2005-with-xml-parameters.aspx
I haven't had a chance to look at SQL 2008 yet to see if they've added any new functionality to handle this type of thing.
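For completeness, a hedged sketch of the XML flavour, modelled on the Enterprise Library code in the question; the procedure name AddItemsToReportXml and the /ids/id element names are made up for illustration:
// requires: using System.Collections.Generic; using System.Data; using System.Data.Common; using System.Linq;
// Assumed stored procedure (illustrative):
//   CREATE PROCEDURE AddItemsToReportXml @ReportId INT, @Items XML AS
//   INSERT INTO ReportItem (ReportId, ItemId)
//   SELECT @ReportId, T.Item.value('.', 'int')
//   FROM @Items.nodes('/ids/id') AS T(Item);
public void AddItemsToReportXml(string connStr, int reportId, List<int> itemList)
{
    // Builds e.g. "<ids><id>1</id><id>2</id><id>3</id></ids>"
    string itemsXml = "<ids>"
        + string.Concat(itemList.Select(i => "<id>" + i + "</id>").ToArray())
        + "</ids>";

    Database db = DatabaseFactory.CreateDatabase(connStr);
    DbCommand cmd = db.GetStoredProcCommand("AddItemsToReportXml");
    db.AddInParameter(cmd, "ReportId", DbType.Int32, reportId);
    db.AddInParameter(cmd, "Items", DbType.Xml, itemsXml);
    db.ExecuteNonQuery(cmd);
}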

Why not use a table-valued parameter?
https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql/table-valued-parameters

See http://www.sommarskog.se/arrays-in-sql-2005.html for a detailed discussion of this issue and the different approaches that you could use.

Here's a very clear-cut explanation of Table-Valued Parameters from sqlteam.com: Table Valued Parameters

Query a Single Field for Multiple Values in a Stored Procedure
http://www.norimek.com/blog/post/2008/04/Query-a-Single-Field-for-Multiple-Values-in-a-Stored-Procedure.aspx

Related

Remove foreach loop with the help of linq

I have some problem with my code. I want to replace the foreach loop with the help of LINQ here; is there any way or solution to solve my problem? My code is given below.
static public string table2Json(DataSet ds, int table_no)
{
    try
    {
        object[][] tb = new object[ds.Tables[table_no].Rows.Count][];
        int r = 0;
        foreach (DataRow dr in ds.Tables[table_no].Rows)
        {
            tb[r] = new object[ds.Tables[table_no].Columns.Count];
            int col = 0;
            foreach (DataColumn column in ds.Tables[table_no].Columns)
            {
                tb[r][col] = dr[col];
                if ((tb[r][col]).Equals(System.DBNull.Value))
                {
                    tb[r][col] = "";
                }
                col++;
            }
            r++;
        }
        string table = JsonConvert.SerializeObject(tb, Formatting.Indented);
        return table;
    }
    catch (Exception ex)
    {
        tools.log(ex.Message);
        throw ex;
    }
}
This question really asks 3 different things:
how to serialize a DataTable
how to change the DataTable serialization format and finally
how to replace nulls with empty strings, even though an empty string isn't a NULL.
JSON.NET already handles DataSet and DataTable instance serialization with a DataTableConverter whose source can be found here. You could just write:
var str = JsonConvert.SerializeObject(data);
Given this DataTable:
var dataTable=new DataTable();
dataTable.Columns.Add("Name",typeof(string));
dataTable.Columns.Add("SurName",typeof(string));
dataTable.Rows.Add("Moo",null);
dataTable.Rows.Add("AAA","BBB");
You get:
[{"Name":"Moo","SurName":null},{"Name":"AAA","SurName":"BBB"}]
DataTables aren't 2D arrays, and the column names and types matter. Generating a separate row object with named fields is far better than generating an object[] array. It also makes it far easier for clients to handle the JSON string without knowing its schema in advance. With an object[] for each row, the clients will have to know what's stored in each position in advance.
If you want to use a different serialization format, you could customize the DataTableConverter. Another option, though, is to use DataRow.ItemArray to get the values as an object[] and LINQ to get the rows, e.g.:
object[][] values = dataTable.Rows.Cast<DataRow>()
    .Select(row => row.ItemArray)
    .ToArray();
Serializing this produces:
[["Moo",null],["AAA","BBB"]]
And there's no way to tell which item is the name and which is the surname any more.
Replacing DBNulls with strings in this last form needs an extra Select() to replace DBNull.Value with "":
object[][] values = dataTable.Rows.Cast<DataRow>()
    .Select(row => row.ItemArray
        .Select(x => x == DBNull.Value ? "" : x)
        .ToArray())
    .ToArray();
Serializing this produces:
[["Moo",""],["AAA","BBB"]]
That's what was asked, but now we have no way to tell whether the Surname is an empty string, or just doesn't exist.
This may sound strange, but Arabic names may be one long name without surname. Makes things interesting for airlines or travel agents that try to issue tickets (ask me how I know).
We can get rid of ToArray() if we use var:
var values = dataTable.Rows.Cast<DataRow>()
    .Select(row => row.ItemArray
        .Select(x => x == DBNull.Value ? "" : x));
JSON serialization will work the same.
LINQ is not a nice fit for this sort of thing because you are using explicit indexes r and col into multiple "array structures" (and there is no easy/tidy way to achieve multiple, parallel enumeration).
Other issues
ds.Tables[table_no] is looked up again on every single access, in both loops - cache the table (and its row and column counts) in local variables once; besides the wasted work, the repetition obscures the intent of the code.
The inner foreach loop declares but does not use the iteration variable column - that's not going to break anything but it is redundant.
You will get more mileage out of using JSON.Net properly (or coding the foreach loops as for loops instead if you want to navigate the structures yourself).
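Putting those pieces together, a sketch of how table2Json could look with the ItemArray approach, keeping the question's DBNull-to-empty-string behaviour (the names come from the question; the LINQ shape from the answer above):
// requires: using System; using System.Data; using System.Linq; using Newtonsoft.Json;
static public string table2Json(DataSet ds, int table_no)
{
    DataTable table = ds.Tables[table_no]; // look the table up once
    object[][] tb = table.Rows.Cast<DataRow>()
        .Select(row => row.ItemArray
            .Select(x => x == DBNull.Value ? "" : x) // flatten DBNull to ""
            .ToArray())
        .ToArray();
    return JsonConvert.SerializeObject(tb, Formatting.Indented);
}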

Seeking help with a SQL Server query to search multiple columns of a table using a combination of keywords

I am having a nightmare building an efficient search query. I am using LINQ. The requirement is as follows:
In the application there is one text box field which is used as a Quick Search.
The table I will be searching holds three fields: Make, Model and Extension.
A typical phrase a user might enter is Honda Accord XL.
Based on the keywords, the database should return the matching rows, and here the problem starts. There is no restriction on the order in which keywords can be entered to form the phrase; one can enter Accord XL Honda, or XL Accord, or just Honda. In the example, Honda is the Make, Accord is the Model and XL is the Extension.
Ideally the search result should only pull up perfect matches: if Honda Accord is entered it should not bring up other models from Honda. But the major problem is that I don't know what they will enter, and I have to look into three different columns of the table using the Contains operator.
Here is what I tried:
I split the phrase into words and place them in an array:
var arr = keyWord.Split(new[] { ' ' });
Next, I build the query inside a loop over those array elements:
foreach (var k in arr)
{
    var item = new Vehicle();
    var arrayItem = k;
    var query = DataContext.Vehicles.Where(v => v.RealName.Contains(arrayItem)
        || v.Model.Contains(arrayItem)
        || v.Style.Contains(arrayItem)).ToList();
    foreach (var v in query)
    {
        if (!result.Contains(v))
            result.Add(v);
    }
}
return result;
Now when the loop executes and matches records for Make, it already fills the list with, say, 250 items. But how can I remove unwanted items, like when a record has CT as Model or TYL as Extension? If I knew the order in which the words were entered, I would have the option to remove unmatched Make, Model or Extension entries from the list with one line of code each and return the final result. But in this case, I would have to loop again to remove unmatched items, and even that will probably not give me the correct data. And this is definitely not an efficient way to do it.
Try the approach below: first retrieve the rows whose make matches a keyword from the database, then filter that result with the remaining keywords.
List<Vehicle> lsvehicle = new List<Vehicle>();
foreach (var k in arr)
{
    lsvehicle = DataContext.Vehicles.Where(v => v.RealName.Contains(k)).ToList();
}
foreach (var k in arr)
{
    lsvehicle = lsvehicle.Where(v => v.Model.Contains(k) || v.Style.Contains(k)).ToList();
}
return lsvehicle;
This can be achieved by concatenating Make + Model + Extension into a single string and then checking that every keyword in the array is contained in that string:
var query = DataContext.Vehicles.AsQueryable();
foreach (var k in arr)
{
    var arrayItem = k; // loop-local copy for the closure
    query = query.Where(v => (v.RealName + v.Model + v.Style).Contains(arrayItem));
}
return query.ToList();
NOTE: This is the logical shape of the answer; it may need minor syntax corrections.
I would suggest the following approach, assuming you have the option to create a view in the database and you are searching through three columns:
1) Create a view with all the column-order combinations
2) Use LINQ to get the records from the view
sql
create view [Vw_VehicleSearch]
as
select
    M+V+E [MVE],
    M+E+V [MEV],
    V+M+E [VME],
    V+E+M [VEM],
    E+M+V [EMV],
    E+V+M [EVM]
from
    vehicletable
c#
public List<string> Search(string quickSearchText)
{
    using (var ctx = new model())
    {
        var result = ctx.Vw_VehicleSearch
            .Where(v => v.MVE.Contains(quickSearchText)
                || v.MEV.Contains(quickSearchText)
                || v.VME.Contains(quickSearchText)
                || v.VEM.Contains(quickSearchText)
                || v.EMV.Contains(quickSearchText)
                || v.EVM.Contains(quickSearchText))
            .ToList();
        return result.Select(r => r.MVE).ToList();
    }
}
What you want is: all vehicles of which either RealName, or Model, or Style contains all keywords. This can be achieved by:
var query = DataContext.Vehicles.Where(v =>
arr.All(s => v.RealName.Contains(s))
|| arr.All(s => v.Model.Contains(s))
|| arr.All(s => v.Style.Contains(s)))
.ToList();
Entity Framework is able to translate this query into SQL because it has a trick to convert the array arr into a table (of sorts) that can be used in the SQL statement (you should take a look at the generated SQL).
This is not the most efficient way to run a query. It will become considerably slower when the number of keywords becomes "large". I don't think that will be an issue here though.

Looping on IEnumerator<T>, Any Suggestions

I am having a situation where looping through the result of LINQ is getting on my nerves. Well here is my scenario:
I have a DataTable, that comes from database, from which I am taking data as:
var results = from d in dtAllData.AsEnumerable()
              select new MyType
              {
                  ID = d.Field<Decimal>("ID"),
                  Name = d.Field<string>("Name")
              };
After doing the order by depending on the sort order as:
if (orderBy != "")
{
    string[] ord = orderBy.Split(' ');
    if (ord != null && ord.Length == 2 && ord[0] != "")
    {
        if (ord[1].ToLower() != "desc")
        {
            results = from sorted in results
                      orderby GetPropertyValue(sorted, ord[0])
                      select sorted;
        }
        else
        {
            results = from sorted in results
                      orderby GetPropertyValue(sorted, ord[0]) descending
                      select sorted;
        }
    }
}
The GetPropertyValue method is as:
private object GetPropertyValue(object obj, string property)
{
    System.Reflection.PropertyInfo propertyInfo = obj.GetType().GetProperty(property);
    return propertyInfo.GetValue(obj, null);
}
After this I am taking out 25 records for the first page, like:
results = from sorted in results
.Skip(0)
.Take(25)
select sorted;
So far things are going well. Now I have to pass these results to a method which does some manipulation on the data and returns the desired data; in this method, looping over these 25 records takes a surprising amount of time. My method definition is:
public MyTypeCollection GetMyTypes(IEnumerable<MyType> myData, String dateFormat, String offset)
I have tried foreach and it takes like 8-10 seconds on my machine; the time is spent at this line:
foreach(var _data in myData)
I tried a while loop and it does the same thing. I used it like this:
var enumerator = myData.GetEnumerator();
while (enumerator.MoveNext())
{
    var n = enumerator.Current; // Current is a MyType here, not an int
    Console.WriteLine(n);
}
This piece of code spends its time in MoveNext.
Then I went for a for loop, like:
int length = myData.Count();
for (int i = 0; i < 25; i++)
{
    var temp = myData.ElementAt(i);
}
This code spends its time in ElementAt.
Can anyone please tell me what I am doing wrong? I am using Framework 3.5 in VS 2008.
Thanks in advance
EDIT: I suspect the problem is in how you're ordering. You're using reflection to first fetch and then invoke a property for every record. Even though you only want the first 25 records, it has to call GetPropertyValue on all the records first, in order to order them.
It would be much better if you could do this without reflection at all... but if you do need to use reflection, at least call Type.GetProperty() once instead of for every record.
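As a sketch, the ordering block could cache the lookup once, along these lines (results and ord come from the question; the rest is an assumption):
// Fetch the PropertyInfo a single time instead of once per record.
System.Reflection.PropertyInfo propertyInfo = typeof(MyType).GetProperty(ord[0]);
if (ord[1].ToLower() != "desc")
{
    // Note: the default comparer requires the property values to be IComparable.
    results = results.OrderBy(x => propertyInfo.GetValue(x, null));
}
else
{
    results = results.OrderByDescending(x => propertyInfo.GetValue(x, null));
}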
(In some ways this is more to do with helping you diagnose the problem more easily than a full answer as such...)
As Henk said, this is very odd:
results = from sorted in results
.Skip(0)
.Take(25)
select sorted;
You almost certainly really just want:
results = results.Take(25);
(Skip(0) is pointless.)
It may not actually help, but it will make the code simpler to debug.
The next problem is that we can't actually see all your code. You've written:
After doing the order by depending on the sort order
... but you haven't shown how you're performing the ordering.
You should show us a complete example going from DataTable to its use.
Changing how you iterate over the sequence will not help - it's going to do the same thing either way, really - although it's surprising that in your last attempt, Count() apparently works quickly. Stick to the foreach - but work out exactly what that's going to be doing. LINQ uses a lot of lazy evaluation, and if you've done something which makes that very heavy going, that could be the problem. It's hard to know without seeing the whole pipeline.
The problem is that your "results" IEnumerable isn't actually being evaluated until it is passed into your method and enumerated. That means that the whole operation, getting all the data from dtAllData, selecting out the new type (which is happening on the whole enumerable, not just the first 25), and then finally the take 25 operation, are all happening on the first enumeration of the IEnumerable (foreach, while, whatever).
That's why your method is taking so long. It's actually doing some of the work defined elsewhere inside the method. If you want that to happen before your method, you could do a "ToList()" prior to the method.
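In code, that is a one-line change at the call site, something like this (names taken from the question):
// ToList() runs the whole deferred pipeline (projection, ordering, Take(25)) right here,
// so GetMyTypes receives 25 already-materialized items.
List<MyType> materialized = results.ToList();
MyTypeCollection collection = GetMyTypes(materialized, dateFormat, offset);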
You might find it easier to adopt a hybrid approach;
In order:
1) Sort your datatable in-situ. It's probably best to do this at the database level, but, if you can't, then DataTable.DefaultView.Sort is pretty efficient:
dtAllData.DefaultView.Sort = ord[0] + " " + ord[1];
This assumes that ord[0] is the column name, and ord[1] is either ASC or DESC
2) Page through the DefaultView by index:
int pageStart = 0;
List<DataRowView> pageRows = new List<DataRowView>();
for (int i = pageStart; i < dtAllData.DefaultView.Count; i++)
{
    // Stop once we have a full page of 25 rows (the loop condition already handles the end of the rows)
    if (i >= pageStart + 25) break;
    pageRows.Add(dtAllData.DefaultView[i]);
}
...and create your objects from this much smaller list... (I've assumed the columns are called Id and Name, as well as the types)
List<MyType> myObjects = new List<MyType>();
foreach (DataRowView pageRow in pageRows)
{
    myObjects.Add(new MyType() { Id = Convert.ToInt32(pageRow["Id"]), Name = Convert.ToString(pageRow["Name"]) });
}
You can then proceed with the rest of what you were doing.

Would there be any performance difference between looping over every row of a dataset and the same data in list form?

I need to loop every row of a dataset 100k times.
This dataset contains 1 Primary key and another string column. Dataset has 600k rows.
So at the moment I am looping like this:
for (int i = 0; i < dsProductNameInfo.Tables[0].Rows.Count; i++)
{
    for (int k = 0; k < dsFull.Tables[0].Rows.Count; k++)
    {
    }
}
Now dsProductNameInfo has 100k rows and dsFull has 600k rows. Should I convert dsFull to a key/value-paired string list and loop over that, or would there be no speed difference?
Which solution would work fastest?
Thank you.
C# 4.0 WPF application
In the exact scenario you mentioned, the performance would be the same except converting to the list would take some time and cause the list approach to be slower. You can easily find out by writing a unit test and timing it.
I would think it'd be best to do this:
// create a class for each type of object you're going to be dealing with
public class ProductNameInformation { ... }
public class Product { ... }
// load a list from a SqlDataReader (much faster than loading a DataSet)
List<Product> products = GetProductsUsingSqlDataReader(); // don't actually call it that :)
// The only thing I can think of where DataSets are better is indexing certain columns.
// So if you have indices, just emulate them with a hashtable:
Dictionary<string, Product> index1 = products.ToDictionary( ... );
Here are references to the SqlDataReader and ToDictionary concepts that you may or may not be familiar with.
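As a rough sketch, such a loader might look like this, assuming a Products table with Id and Name columns and a simple Product class; every name here is illustrative, not from the original answer:
// requires: using System.Collections.Generic; using System.Data.SqlClient;
public class Product { public int Id; public string Name; }

public static List<Product> GetProductsUsingSqlDataReader(string connStr)
{
    var products = new List<Product>();
    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand("SELECT Id, Name FROM Products", conn))
    {
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read()) // streams rows instead of buffering a whole DataSet
            {
                products.Add(new Product { Id = reader.GetInt32(0), Name = reader.GetString(1) });
            }
        }
    }
    return products;
}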
The real question is, why isn't this kind of heavy processing done at the database layer? SQL servers are much more optimized for this type of work. Also, you may not have to actually do this at all; why don't you post the original problem, and maybe we can help you optimize deeper?
HTH
There might be quite a few things that could be optimized that are not related to the looping. E.g. reducing the number of iterations would yield a lot: at present the body of the inner loop is executed 100k * 600k times, so eliminating one iteration of the outer loop eliminates 600k iterations of the inner one (or you might be able to switch the inner and outer loops if it's easier to remove iterations from the inner loop).
One thing that you could do in any case is only index once for each table:
var productNameInfoRows = dsProductNameInfo.Tables[0].Rows;
var productInfoCount = productNameInfoRows.Count;
var fullRows = dsFull.Tables[0].Rows;
var fullCount = fullRows.Count;
for (int i = 0; i < productInfoCount; i++)
{
    for (int k = 0; k < fullCount; k++)
    {
    }
}
Inside the loops you'd get to the rows with productNameInfoRows[i] and fullRows[k], which is faster than using the long-hand version. I'm guessing there might be more to gain from optimizing the body than from the way you are looping over the collection, unless of course you have already profiled the code and found the actual looping to be the bottleneck.
EDIT: After reading your comment to Marc about what you are trying to accomplish, here's a go at how you could do this. It's worth noting that the algorithm below is probabilistic: there's roughly a 1 in 2^32 chance of two words being seen as equal without actually being equal. It is, however, a lot faster than comparing strings.
The code assumes that the first column is the one you are comparing.
// Store all the values that will not change through the execution, for faster access
var productNameInfoRows = dsProductNameInfo.Tables[0].Rows;
var fullRows = dsFull.Tables[0].Rows;
var productInfoCount = productNameInfoRows.Count;
var fullCount = fullRows.Count;
var full = new List<int[]>(fullCount);
for (int i = 0; i < productInfoCount; i++)
{
    // We're going to compare hash codes, not strings
    var prd = productNameInfoRows[i][0].ToString().Split(';')
        .Select(s => s.GetHashCode()).OrderBy(t => t).ToArray();
    for (int k = 0; k < fullCount; k++)
    {
        // Cache the calculation for all subsequent iterations of the outer loop
        if (i == 0)
        {
            full.Add(fullRows[k][0].ToString().Split(';')
                .Select(s => s.GetHashCode()).OrderBy(t => t).ToArray());
        }
        var fl = full[k];
        var count = 0;
        for (var j = 0; j < fl.Length; j++)
        {
            var f = fl[j];
            // The values are sorted, so we can exit early
            for (var m = 0; m < prd.Length && prd[m] <= f; m++)
            {
                count += prd[m] == f ? 1 : 0;
            }
        }
        if ((double)(fl.Length + prd.Length) / count >= 0.6)
        {
            // There's a match
        }
    }
}
EDIT: Your comment motivated me to give it another try. The code below could have fewer iterations. Could have, because it depends on the number of matches and the number of unique words: a lot of unique words with a lot of matches each (which would require a LOT of words per column) would potentially yield more iterations. However, under the assumption that each row has few words, this should yield substantially fewer iterations. Your code has N*M complexity; this has N + M + (matches * productInfoMatches * fullMatches). In other words, the last term would have to be almost 99999 * 600k for this to have more iterations than yours.
// Store all the values that will not change through the execution, for faster access
var productNameInfoRows = dsProductNameInfo.Tables[0].Rows;
var fullRows = dsFull.Tables[0].Rows;
var productInfoCount = productNameInfoRows.Count;
var fullCount = fullRows.Count;
// Create a list of the words from the product info
var lists = new Dictionary<int, Tuple<List<int>, List<int>>>(productInfoCount * 3);
for (var i = 0; i < productInfoCount; i++)
{
    foreach (var token in productNameInfoRows[i][0].ToString().Split(';')
        .Select(p => p.GetHashCode()))
    {
        if (!lists.ContainsKey(token))
        {
            lists.Add(token, Tuple.Create(new List<int>(), new List<int>()));
        }
        lists[token].Item1.Add(i);
    }
}
// Pair words from full with those from product info
for (var i = 0; i < fullCount; i++)
{
    foreach (var token in fullRows[i][0].ToString().Split(';')
        .Select(p => p.GetHashCode()))
    {
        if (lists.ContainsKey(token))
        {
            lists[token].Item2.Add(i);
        }
    }
}
// Count all matches for each pair of rows
var counts = new Dictionary<int, Dictionary<int, int>>();
foreach (var key in lists.Keys)
{
    foreach (var p in lists[key].Item1)
    {
        if (!counts.ContainsKey(p))
        {
            counts.Add(p, new Dictionary<int, int>());
        }
        foreach (var f in lists[key].Item2)
        {
            var dic = counts[p];
            if (!dic.ContainsKey(f))
            {
                dic.Add(f, 0);
            }
            dic[f]++;
        }
    }
}
If performance is the critical factor, then I would suggest trying an array-of-structs; this has minimal indirection (DataSet/DataTable has quite a lot of indirection). You mention KeyValuePair, and that would work, although it might not necessarily be my first choice. Milimetric is right to say that there is an overhead if you create a DataSet first and then build an array/list from that - however, even then the time savings when looping may exceed the build time. If you can restructure the load to remove the DataSet completely, great.
I would also look carefully at the loops, to see if anything could reduce the actual work needed; for example, would building a dictionary/grouping allow faster lookups? Would sorting allow binary search? Can any operations be pre-aggregated and applied at a higher level (with fewer rows)?
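For instance, a hedged sketch of the dictionary idea, assuming the first column of each table is the lookup key (purely illustrative):
// Index dsFull once by key so the inner 600k-row scan becomes an O(1) lookup.
var fullByKey = new Dictionary<string, DataRow>();
foreach (DataRow row in dsFull.Tables[0].Rows)
    fullByKey[row[0].ToString()] = row;

foreach (DataRow info in dsProductNameInfo.Tables[0].Rows)
{
    DataRow match;
    if (fullByKey.TryGetValue(info[0].ToString(), out match))
    {
        // work with the matched pair here
    }
}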
What are you doing with the data inside the nested loop?
Is the source of your datasets a SQL database? If so, the best possible performance you could get would be to perform your calculation in SQL using an inner join and return the result to .net.
Another alternative would be to use the dataset's built in querying methods that act like SQL, but in-memory.
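As a tiny illustrative sketch of that option (the Tags column name here is made up):
// Select evaluates a filter expression in memory, like a WHERE clause;
// DataColumn expressions support LIKE with % or * wildcards.
DataRow[] matches = dsFull.Tables[0].Select("Tags LIKE '%honda%'");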
If neither of those options are appropriate, you would get a performance improvement by retrieving the 'full' dataset as a DataReader and looping over it as the outer loop. A dataset loads all of the data from SQL into memory in one hit. With 600k rows, this will take up a lot of memory! Whereas a DataReader will keep the connection to the DB open and stream rows as they are read. Once you have read a row the memory will be reused/reclaimed by the garbage collector.
In your comment reply to my earlier answer you said that both datasets are essentially lists of strings, and each string is effectively a delimited list of tags. I would first look to normalise the CSV strings in the database, i.e. split the CSVs, add them to a tag table, and link from the product to the tags via a link table.
You can then quite easily create a SQL statement that will do your matching according to the link records rather than by string (which will be more performant in its own right).
The issue you would then have is that if your sub-set product list needs to be passed into SQL from .net, you would need to call the SP 100k times. Thankfully SQL Server 2008 introduced table types. You could define a table type in your database with one column to hold your product ID, have your SP accept that as an input parameter, and then perform an inner join between your actual tables and your table parameter. I've used this in my own project with very large datasets and the performance gain was massive.
On the .net side you can create a DataTable matching the structure of the SQL table type and then pass that as a command parameter when calling your SP (once!).
This article shows you how to do both the SQL and .net sides. http://www.mssqltips.com/sqlservertip/2112/table-value-parameters-in-sql-server-2008-and-net-c/

Nested dictionary objects?

Just messing around, trying to expand my bag o' tricks: I was experimenting and want to do something like a Dictionary object with another, inner Dictionary as the outer Dictionary's .Value:
var dictionary = new Dictionary<ObjectType, Dictionary<string, string>>();
ObjectType is an enum
So...what then? Either you're not supposed to do this or I just don't know how, 'cause I started running into a wall when I was trying to figure out how to populate and retrieve data from it.
Purpose might help: I'm being passed an ObjectGUID and need to flip through a bunch of database tables to determine which table the object exists in. The method I've already written just queries each table and returns a count (here are a couple of examples):
// Domain Check
sql = string.Format(@"select count(domainguid) from domains where domainguid = ?ObjectGUID");
count = (int)MySQLHelper.ExecuteScalar(ConnectionStrings.ConnectionStrings.V4DB_READ, sql, pObjectGUID).ToString().Parse<int>();
if (count > 0)
    return ObjectType.Domain;
// Group Check
sql = string.Format(@"select count(domaingroupguid) from domaingroups where domaingroupguid = ?ObjectGUID");
count = (int)MySQLHelper.ExecuteScalar(ConnectionStrings.ConnectionStrings.V4DB_READ, sql, pObjectGUID).ToString().Parse<int>();
if (count > 0)
    return ObjectType.Group;
So, that's all done and works fine...but because the field name and table name are the only things that change for each check, I started thinking about how I could reuse the repetitive code. I created a dictionary and a foreach loop that flips through it and changes the sql line (shown below)...but, as you can see below, I need that ObjectType as kind of the key for each table/fieldname pair so I can return it without any further calculations:
Dictionary<string, string> objects = new Dictionary<string, string>();
objects.Add("domains", "domainguid");
objects.Add("domaingroups", "domaingroupguid");
objects.Add("users", "userguid");
objects.Add("channels", "channelguid");
objects.Add("categorylists", "categorylistguid");
objects.Add("inboundschemas", "inboundschemaguid");
objects.Add("catalogs", "catalogguid");
foreach (var item in objects)
{
    sql = string.Format(@"select count({0}) from {1} where {0} = ?ObjectGUID", item.Value, item.Key);
    count = (int)MySQLHelper.ExecuteScalar(ConnectionStrings.ConnectionStrings.V4DB_READ, sql, pObjectGUID).ToString().Parse<int>();
    if (count > 0)
        return ?????
}
This isn't all that important since my original method works just fine but I thought you StackOverflow geeks might turn me on to some new clever ideas to research...I'm guessing someone is going to smack me in the head and tell me to use arrays... :)
EDIT @ Jon Skeet ------------------------------------------
Heh, sweet, I think I might have come upon the right way to do it...haven't run it yet, but here's an example I wrote for you:
var objectTypes = new Dictionary<string, string>();
objectTypes.Add("domainguid", "domains");
var dictionary = new Dictionary<ObjectType, Dictionary<string, string>>();
dictionary.Add(ObjectType.Domain, objectTypes);
foreach (var objectType in dictionary)
{
    foreach (var item in objectType.Value)
    {
        sql = string.Format(@"select count({0}) from {1} where {0} = ?ObjectGUID", item.Key, item.Value);
        count = (int)MySQLHelper.ExecuteScalar(ConnectionStrings.ConnectionStrings.V4DB_READ, sql, pObjectGUID).ToString().Parse<int>();
        if (count > 0)
            return objectType.Key;
    }
}
This chunk should hit the domains table looking for domainguid and, if count > 0, return ObjectType.Domain...look right? Only problem is, while it might seem somewhat clever, it's like 2 dictionary objects, a couple of strings, and some nested loops, harder to read and debug than my first version, and about 10 more lines per check hehe...fun to experiment though, and if this looks right to you then I guess it's one more thing I can add to my brain :)
Also found this: how to fetch data from nested Dictionary in c#
You can definitely do it, although you're currently missing a closing angle bracket and parentheses. It should be:
var dictionary = new Dictionary<ObjectType, Dictionary<string, string>>();
To add a given value you probably want something like:
private void AddEntry(ObjectType type, string key, string value)
{
    Dictionary<string, string> tmp;
    // Assume "dictionary" is the field
    if (!dictionary.TryGetValue(type, out tmp))
    {
        tmp = new Dictionary<string, string>();
        dictionary[type] = tmp;
    }
    tmp.Add(key, value);
}
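Usage might then look something like this (table and field names from the question; the query loop mirrors the edit above):
// Populate the nested dictionary via the helper...
AddEntry(ObjectType.Domain, "domainguid", "domains");
AddEntry(ObjectType.Group, "domaingroupguid", "domaingroups");

// ...then walk both levels to find which table owns the GUID.
foreach (var objectType in dictionary) // Key: ObjectType
{
    foreach (var item in objectType.Value) // Key: field name, Value: table name
    {
        string sql = string.Format(@"select count({0}) from {1} where {0} = ?ObjectGUID", item.Key, item.Value);
        // execute the count query as in the question; if the count > 0, return objectType.Key
    }
}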
If that doesn't help, please show the code that you've tried and failed with - the database code in your question isn't really relevant as far as I can tell, as it doesn't try to use a nested dictionary.
