I am using an array to hold the results of an SQLite query; at the moment I am using a two-dimensional array to do this.
The problem is that I have to manually specify the size of one of the dimensions before the array can be used, which feels wasteful as I have no real knowledge of how many rows will be returned. I do use a property that retrieves the number of columns present, which is used in the array's initialisation.
Example:
using (SQLiteDataReader dr = cmd.ExecuteReader())
{
    results = new object[10, dr.FieldCount];
    while (dr.Read())
    {
        jIterator++;
        for (int i = 0; i < dr.FieldCount; i++)
        {
            results[jIterator, i] = dr.GetValue(i);
        }
    }
}
// I know there are a few count bugs
Example Storage:
I simply add the data to the array for as long as the while loop returns true. In this instance I set the first dimension to 10, as I already know how many elements are in the database and 10 will be more than enough.
How would I go about changing the array so its size can be dynamic? Is this even possible, given the way I am getting the database results?
You should not be using an array; you should be using a dynamically-sized container like List&lt;T&gt;. Or, better yet, you could use something like the ADO.NET DataTable, since this is exactly what it is designed to store, and using a DbDataAdapter will avoid having to repeat the same IDataReader code all over the place.
Don't use an array. Use a generic list, System.Collections.Generic.List&lt;T&gt;; you can even have a List of a List. These are resizable and do all of that plumbing for you.
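For instance, a minimal sketch of that idea applied to the reader loop from the question (the cmd and results names are taken from the original code):
using System.Collections.Generic;

var results = new List<object[]>();   // grows as rows arrive; no row count needed up front

using (SQLiteDataReader dr = cmd.ExecuteReader())
{
    while (dr.Read())
    {
        var row = new object[dr.FieldCount];
        dr.GetValues(row);   // copy the current row's values into the array
        results.Add(row);
    }
}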
I would suggest defining a structure to hold your row information, then just creating a generic list of that type. Generic lists can expand without limit.
You have several options:
Use a List<T[]> instead of a 2 dimensional array.
Load your datareader into a dataset instead of an array
Skip the datareader entirely and use a dataadapter to fill a dataset
Use an iterator block to transform the datareader into an IEnumerable instead of an array
Of these, in most cases by far your best option is the last; it means you don't need to have the entire result set of your query in memory at one time, which is the main point for using an SqlDataReader in the first place.
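A minimal sketch of that iterator-block approach, written against the generic IDataReader interface:
using System.Collections.Generic;
using System.Data;

static IEnumerable<object[]> ReadRows(IDataReader reader)
{
    while (reader.Read())
    {
        var row = new object[reader.FieldCount];
        reader.GetValues(row);   // copy the current row into the buffer
        yield return row;        // hand back one row at a time; nothing is buffered
    }
}
The caller can then foreach over ReadRows(dr) and decide for itself whether to materialise the rows into a list or process them one at a time.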
You can size arrays at run time in C#:
int[] intarray = new int[rowCount];   // length chosen at run time, but fixed once created
Although, as other responses suggest, List&lt;T&gt; may be more appropriate.
I have a method in which a SQL stored procedure is executed, taking values from DataGridView cells. This part of the code is in a try/catch block.
What I need is to get all the SP output values and other information from the DataGridView cells into an array or list and, after all rows have been processed, to create a DataTable and use it as the DataSource for another DataGridView.
Can you please suggest how I can take values from the try block? And which is recommended in this situation: a list or an array?
You might simply return the result object from inside the try block. It's completely valid to have a return statement inside a try/catch.
List or array depends on whether you know the number of elements before creating the list or array. Since you don't need the result list outside the method (so you can assume that nobody else needs to add elements to it), using an array might save you a fraction of a second of processing time. But the .NET list implementation uses an array behind the scenes, so if you call the constructor with an int parameter for the expected number of elements, the overhead of a list is far less than the cost of calling the SP; you can simply use whichever type you like.
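As a rough sketch of returning the collected results from inside the try block (the RowResult type and the cell layout are hypothetical placeholders):
using System;
using System.Collections.Generic;
using System.Windows.Forms;

// Hypothetical holder for one row's SP output values.
class RowResult
{
    public string Name { get; set; }
    public int OutputValue { get; set; }
}

static List<RowResult> ProcessRows(DataGridView grid)
{
    var results = new List<RowResult>();
    try
    {
        foreach (DataGridViewRow row in grid.Rows)
        {
            // ... execute the stored procedure with this row's cell values
            // and capture its output parameters here ...
            results.Add(new RowResult { Name = Convert.ToString(row.Cells[0].Value) });
        }
        return results;   // returning from inside a try block is perfectly valid
    }
    catch (Exception)
    {
        throw;   // or log and rethrow; the point is that the return above is legal
    }
}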
I have a number of string arrays: for example, an array of 13 usernames and then a separate array of 13 passwords. Could someone please tell me the most efficient way of getting these into a WPF DataGrid?
The simple option I can think of is to loop through the arrays, pick out the values and add them as rows to the DataGrid, but I was wondering if I can pass the arrays in as columns or something?
Please let me know if you need any more information.
DataGrid works on an attributes (columns) and items (rows) concept, so data structures like a collection of objects, a DataTable or XML work best for loading data into a DataGrid intuitively.
With arrays of plain values, you have to convert them into such a structure first. Use LINQ to your advantage...
var consolidatedList =
    arrayUserName
        .Select((usrNm, index) => new
        {
            UserName = usrNm,
            Password = arrayPasswords[index]   // pair by position using the indexed Select overload
        })
        .ToList();
dataGrid.ItemsSource = consolidatedList;
Of course the list generation would be slow for a large number of items in the arrays. In such a case, run a loop or use PLINQ (in the case of .NET 4.0).
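If .NET 4.0 is available, Enumerable.Zip is an alternative that avoids the index bookkeeping entirely (a sketch, assuming the same two arrays as above):
var consolidatedList = arrayUserName
    .Zip(arrayPasswords, (u, p) => new { UserName = u, Password = p })
    .ToList();
dataGrid.ItemsSource = consolidatedList;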
I am working with SqlXml and a stored procedure that returns XML rather than raw data. How does one actually read the data when XML is returned and the column names are not known? I used the versions below, and have heard that getting data from a SqlDataReader by ordinal is faster than by column name. Please advise on which is best, with a valid reason or proof.
sqlDataReaderInstance.GetString(0);
sqlDataReaderInstance[0];
and have heard getting data from SqlDataReader through ordinal is faster than through column name
Both your examples are getting data through the index (ordinal), not the column name:
Getting data through the column name:
while (reader.Read())
{
    ...
    var value = reader["MyColumnName"];
    ...
}
is potentially slower than getting data through the index:
int myColumnIndex = reader.GetOrdinal("MyColumnName");
while (reader.Read())
{
    ...
    var value = reader[myColumnIndex];
    ...
}
because the first example must repeatedly find the index corresponding to "MyColumnName". If you have a very large number of rows, the difference might even be noticeable.
In most situations the difference won't be noticeable, so favour readability.
UPDATE
If you are really concerned about performance, an alternative to using ordinals is to use the DbEnumerator class as follows:
foreach (IDataRecord record in new DbEnumerator(reader))
{
    ...
    var value = record["MyColumnName"];
    ...
}
The DbEnumerator class reads the schema once and maintains an internal Hashtable that maps column names to ordinals, which can improve performance.
Compared to the speed of getting data from disk, both will be effectively as fast as each other.
The two calls aren't equivalent: the version with an indexer returns an object, whereas GetString() converts the object to a string, throwing an exception if this isn't possible (e.g. the column is DBNull).
So although GetString() might be slightly slower, you'll be casting to a string anyway when you use it.
Given all the above I'd use GetString().
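To make that difference concrete (a small sketch):
object raw = reader[0];           // boxed value in its native type; DBNull.Value when the column is null
string s = reader.GetString(0);   // typed read; throws if the value is not a string or is DBNull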
The indexer method is faster because it returns the data in its native format and uses the ordinal.
Have a look at these threads:
Maximize Performance with SqlDataReader
.NET SqlDataReader Item[] vs. GetString(GetOrdinal())?
I have an ArrayList called backuplist.
This ArrayList has structures in it.
What I need to do is transfer this ArrayList into a table and then store this table in my SQL database.
Anybody with ideas as to what I should do?
Even if it is a different way of doing this, please let me know.
Thanks
If you are using VS2008 (per the tags), you should ideally use List&lt;T&gt;, not ArrayList. You can convert from a List&lt;T&gt; to a DataTable (a sketch follows); then just use a SqlDataAdapter or SqlBulkCopy to get the data into the database.
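A minimal reflection-based sketch of that List&lt;T&gt;-to-DataTable conversion, roughly what the "Converting Custom Collections" article below describes (simple, non-nullable property types assumed):
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;

static DataTable ToDataTable<T>(IList<T> items)
{
    var table = new DataTable(typeof(T).Name);
    var props = TypeDescriptor.GetProperties(typeof(T));

    // One DataColumn per public property.
    foreach (PropertyDescriptor prop in props)
        table.Columns.Add(prop.Name, prop.PropertyType);

    // One DataRow per list element.
    foreach (T item in items)
    {
        var row = table.NewRow();
        foreach (PropertyDescriptor prop in props)
            row[prop.Name] = prop.GetValue(item) ?? (object)DBNull.Value;
        table.Rows.Add(row);
    }
    return table;
}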
This isn't a complete solution, but I thought I'd point you in the right direction. The problem is I don't really know your application, and your experience is limited, so it's going to be a stab in the dark.
Anyway, here are a couple of resources to get you started:
Converting Custom Collections To and From DataTable
http://blog.lozanotek.com/archive/2007/05/09/Converting_Custom_Collections_To_and_From_DataTable.aspx
Inserting New Records into a Database
http://msdn.microsoft.com/en-us/library/ms233812(VS.80).aspx
I'd agree on using a strongly typed list as Marc suggested. Another option for getting those into the database would be to plow through with a foreach (on either your list or array) and use the structure properties as parameters to an insert stored procedure.
We do this all the time in our app, where we might have a List coming from a business component and tossed to the data layer; we'll loop through, do any necessary manipulation, then run our update SP on each row.
Let me know if you need a snippet.
-Bob
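For what it's worth, a rough sketch of that loop-and-insert pattern (the Contact type, parameter names and stored procedure name are all hypothetical):
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

class Contact
{
    public string Name { get; set; }
    public string Email { get; set; }
}

static void SaveAll(List<Contact> items, string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        foreach (var item in items)
        {
            using (var cmd = new SqlCommand("dbo.InsertContact", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@Name", item.Name);
                cmd.Parameters.AddWithValue("@Email", item.Email);
                cmd.ExecuteNonQuery();   // one insert per row
            }
        }
    }
}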
I've been loading a table straight from the data reader MySQL returns, so it's populated with the column names and types etc. Note that DataTable.Load takes an IDataReader, not an ArrayList:
DataTable table = new DataTable();
table.Load(reader);   // reader is the IDataReader from your MySQL command
Recently I had to do some very processing-heavy work with data stored in a DataSet. It was heavy enough that I ended up using a tool to help identify some bottlenecks in my code. When I was analysing the bottlenecks, I noticed that although DataSet lookups were not terribly slow (they weren't the bottleneck), they were slower than I expected. I always assumed that DataSets used some sort of Hashtable-style implementation, which would make lookups O(1) (or at least that's what I think Hashtables give you). The speed of my lookups seemed to be significantly slower than this.
I was wondering if anyone who knows anything about the implementation of .NET's DataSet class would care to share what they know.
If I do something like this:
DataTable dt = new DataTable();
if (dt.Columns.Contains("SomeColumn"))
{
    object o = dt.Rows[0]["SomeColumn"];
}
How fast would the lookup time be for the Contains(...) method, and for retrieving the value to store in object o? I would have thought it would be very fast, like a Hashtable (assuming what I understand about Hashtables is correct), but it doesn't seem like it...
I wrote that code from memory so some things may not be "syntactically correct".
Actually it's advisable to use an integer when referencing a column, which can improve performance a lot. To keep things manageable, you can declare a constant integer. So instead of what you did, you could do:
const int SomeTable_SomeColumn = 0;
DataTable dt = new DataTable();
if (dt.Columns.Count > SomeTable_SomeColumn)   // Contains() only takes a column name, so range-check the index instead
{
    object o = dt.Rows[0][SomeTable_SomeColumn];
}
Via Reflector the steps for DataRow["ColumnName"] are:
Get the DataColumn from ColumnName. Uses the row's DataColumnCollection["ColumnName"]. Internally, DataColumnCollection stores its DataColumns in a Hashtable. O(1)
Get the DataRow's row index. The index is stored in an internal member. O(1)
Get the DataColumn's value at the index using DataColumn[index]. DataColumn stores its data in a System.Data.Common.DataStorage (internal, abstract) member:
return dataColumnInstance._storage.Get(recordIndex);
A sample concrete implementation is System.Data.Common.StringStorage (internal, sealed). StringStorage (and the other concrete DataStorages I checked) store their values in an array. Get(recordIndex) simply grabs the object in the value array at the recordIndex. O(1)
So overall you're O(1) but that doesn't mean the hashing and function calling during the operation is without cost. It just means it doesn't cost more as the number of DataRows or DataColumns increases.
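One practical consequence (a hedged sketch): caching the DataColumn reference up front skips the per-row name-to-column hash lookup, since DataRow also has an indexer that takes a DataColumn directly:
DataColumn col = dt.Columns["SomeColumn"];   // name-to-column hash lookup happens once
foreach (DataRow row in dt.Rows)
{
    object o = row[col];   // per-row access skips the name lookup entirely
}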
Interesting that DataStorage uses an array for values. Can't imagine that's easy to rebuild when you add or remove rows.
I imagine that any lookups would be O(n), as I don't think they would use any type of hashtable, but would actually use more of an array for finding rows and columns.
Actually, I believe the column names are stored in a Hashtable. That should be O(1), i.e. constant-time, for case-sensitive lookups. If it had to look through each column, then of course it would be O(n).