OleDb data not reading from the correct row - C#

I have the following method, and previously stock1Label to stock3Label were able to output the correct values from the database. However, after I added more rows to my ProductsTable, source.Rows[0][0], source.Rows[1][0], etc. seem to be taking values from row 8 of my table onwards instead of from row 1. Does anyone know why this is happening?
private void UpdateStocks()
{
    // Pull every pQty value and show the first six in the stock labels
    string query = "SELECT pQty FROM ProductsTable";
    OleDbDataAdapter dAdapter = new OleDbDataAdapter(query, DBconn);
    DataTable source = new DataTable();
    dAdapter.Fill(source);

    stock1Label.Text = source.Rows[0][0].ToString();
    stock2Label.Text = source.Rows[1][0].ToString();
    stock3Label.Text = source.Rows[2][0].ToString();
    stock4Label.Text = source.Rows[3][0].ToString();
    stock5Label.Text = source.Rows[4][0].ToString();
    stock6Label.Text = source.Rows[5][0].ToString();
}

SQL tables have no defined row order. Without an ORDER BY clause, the database returns rows in whatever non-deterministic order it happens to read them (often storage order, which can change as the table grows), not the order you inserted them.
To receive a meaningful, consistent ordering, you need to add an ORDER BY clause.
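For example, a minimal sketch assuming the table has a key column to sort on (pID here is a hypothetical name; substitute your actual primary key):
// Order by the key so source.Rows[0] is always the same product.
// "pID" is an assumed column name -- use your table's real key column.
string query = "SELECT pQty FROM ProductsTable ORDER BY pID";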


Updating MySQL returns rows affected, but doesn't actually update the Database

I'm currently using Mono on Ubuntu with MonoDevelop, working with a DataTable that matches a table in the database, and trying to update that table from it.
The following code uses a DataSet loaded from an XML file, which was created with DataSet.WriteXml on another machine.
try
{
    if (ds.Tables.Contains(s))
    {
        ds.Tables[s].AcceptChanges();
        foreach (DataRow dr in ds.Tables[s].Rows)
            dr.SetModified(); // Mark as modified so the adapter updates, rather than inserts, into the database
        hc.Data.Database.Update(hc.Data.DataDictionary.GetTableInfo(s), ds.Tables[s]);
    }
}
catch (Exception ex)
{
    Log.WriteError(ex);
}
This is the code for inserting/updating into the database.
public override int SQLUpdate(DataTable dt, string tableName)
{
    MySqlDataAdapter da = new MySqlDataAdapter();
    try
    {
        int rowsChanged = 0;
        int tStart = Environment.TickCount;
        da.SelectCommand = new MySqlCommand("SELECT * FROM " + tableName);
        da.SelectCommand.Connection = connection;
        MySqlCommandBuilder cb = new MySqlCommandBuilder(da);
        da.UpdateCommand = cb.GetUpdateCommand();
        da.DeleteCommand = cb.GetDeleteCommand();
        da.InsertCommand = cb.GetInsertCommand();
        da.ContinueUpdateOnError = true;
        da.AcceptChangesDuringUpdate = true;
        rowsChanged = da.Update(dt);
        Log.WriteVerbose("Tbl={0},Rows={1},tics={2},", dt.TableName, rowsChanged, Misc.Elapsed(tStart));
        return rowsChanged;
    }
    catch (Exception ex)
    {
        Log.WriteError("{0}", ex.Message);
        return -1;
    }
}
I'm trying the above code, and rowsChanged becomes 4183, the number of rows I'm editing. However, when I use HeidiSQL to check the database itself, it doesn't change anything at all.
Is there a step I'm missing?
Edit: Alternatively, being able to overwrite all rows in the database would work as well. This is a setup for updating remote computers using USB sticks, forcing it to match a source data table.
Edit 2: Added a larger code sample to show the source of the DataTable. The DataTable is prefilled in the calling function, and all rows have DataRow.SetModified() applied.
Edit 3: Additional information. The Table is being filled with data from an XML file. Attempting fix suggested in comments.
Edit 4: Adding calling code, just in case.
Thank you for your help.
The simplest way, which you may want to look into, might be to TRUNCATE the destination table, then simply save the XML import to it (with auto-increment off so it uses the imported IDs if necessary). The only problem may be with the rights to do that. Otherwise...
What you are trying to do can almost be handled using the Merge method. However, it can't/won't know about deleted rows. Since the method acts on DataTables, if a row was deleted in the master database, it will simply not exist in the XML extract (as opposed to having a RowState of Deleted). These can be weeded out with a loop.
Likewise, any new rows may get a different PK for an auto-increment int. To prevent that, just use a simple non-auto-increment PK in the destination db so it can accept any number.
The XML loading:
private DataTable LoadXMLToDT(string filename)
{
    DataTable dt = new DataTable();
    dt.ReadXml(filename);
    return dt;
}
The merge code:
DataTable dtMaster = LoadXMLToDT(@"C:\Temp\dtsample.xml");
// just a debug monitor
var changes = dtMaster.GetChanges();

string SQL = "SELECT * FROM Destination";
using (MySqlConnection dbCon = new MySqlConnection(MySQLOtherDB))
{
    DataTable dtSample = new DataTable();
    MySqlDataAdapter daSample = new MySqlDataAdapter(SQL, dbCon);
    MySqlCommandBuilder cb = new MySqlCommandBuilder(daSample);
    daSample.UpdateCommand = cb.GetUpdateCommand();
    daSample.DeleteCommand = cb.GetDeleteCommand();
    daSample.InsertCommand = cb.GetInsertCommand();
    daSample.FillSchema(dtSample, SchemaType.Source);
    dbCon.Open();

    // fill the destination table
    daSample.Fill(dtSample);

    // handle deleted rows: an ID in the destination that is missing
    // from the XML master means the row was deleted upstream
    var drExisting = dtMaster.AsEnumerable()
                             .Select(x => x.Field<int>("Id"));
    var drMasterDeleted = dtSample.AsEnumerable()
                                  .Where(q => !drExisting.Contains(q.Field<int>("Id")));

    // delete based on missing ID
    foreach (DataRow dr in drMasterDeleted)
        dr.Delete();

    // merge the XML into the table just read
    dtSample.Merge(dtMaster, false, MissingSchemaAction.Add);

    int rowsChanged = daSample.Update(dtSample);
}
For whatever reason, rowsChanged always reports as many changes as there are total rows. But changes from the master/XML DataTable do flow through to the destination table.
The delete code gets a list of existing IDs, then determines which rows need to be deleted from the destination DataTable based on whether the new XML table has a row with that ID or not. All the missing rows are deleted, then the tables are merged.
The key is dtSample.Merge(dtMaster, false, MissingSchemaAction.Add);, which merges the data from dtMaster into dtSample. The false parameter is what allows the incoming XML changes to overwrite values in the other table (and eventually be saved to the db).
I have no idea whether issues like non-matching auto-increment PKs are a big deal or not, but this seems to handle all that I could find. In reality, what you are trying to do is database synchronization; with one table and just a few rows, though, the above should work.

Table schema as DataTable?

I did a search and found some seemingly related answers, but they don't really do what I'm after.
Given a valid connection string and a table name, I want to get a DataTable of the table. I.e. if the table has a column called "Name", I want the DataTable set up so I can do dt["Name"] = "blah";
The trick is, I don't want to hard code any of that stuff, I want to do it dynamically.
People tell you to use SqlConnection.GetSchema, but that gives you back a table with a bunch of metadata in it.
Everybody has random tricks like SELECT TOP 0 * FROM the table and getting the schema from there, etc.
But is there a way to get the table with the primary keys, unique indexes, etc.? I.e. in the final format needed to do a bulk insert.
You can use SqlDataAdapter.FillSchema:
var connection = @"Your connection string";
var command = "SELECT * FROM Table1";
var dataAdapter = new System.Data.SqlClient.SqlDataAdapter(command, connection);
var dataTable = new DataTable();
dataAdapter.FillSchema(dataTable, SchemaType.Mapped);
This way you will have an empty DataTable with columns and keys defined and ready to use in code like dataTable["Name"] = "blah";.
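Strictly speaking, the indexer lives on DataRow rather than DataTable, so the usage looks something like:
// New rows come from NewRow() so they carry the table's schema;
// columns are then addressed by name on the row.
var row = dataTable.NewRow();
row["Name"] = "blah";
dataTable.Rows.Add(row);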

How do I override the insert command of a MySqlDataAdapter to use ON DUPLICATE KEY UPDATE with C#

My application is extracting data from one DB and inserting the results into a different DB using MySqlDataAdapters and DataSets. I created a unique key in the target DB and want to update values in cases where I recalculated them.
I am hoping to use the data adapter Update method to handle this since there is a variable number of rows to INSERT/UPSERT each time. I also want the update to be as fast as possible since this is a repeating process.
If I exclude the possibility of updates and just do inserts, my code is functional, as follows:
... // dsDSErrors is the DataSet returned from my initial query
MySqlCommand cmdLocal = new MySqlCommand(queryLocal, myLocal);
MySqlDataAdapter daLocal = new MySqlDataAdapter(cmdLocal);
daLocal.MissingSchemaAction = MissingSchemaAction.AddWithKey;
DataSet dsLocal = new DataSet();
daLocal.Fill(dsLocal);

// Copy each error row into the local table as a new (Added) row
foreach (DataRow drError in dsDSErrors.Tables[0].Rows)
{
    DataRow curRow = dsLocal.Tables[0].NewRow();
    foreach (DataColumn hdr in dsDSErrors.Tables[0].Columns)
        curRow[hdr.ColumnName] = drError[hdr.ColumnName];
    dsLocal.Tables[0].Rows.Add(curRow);
}

MySqlCommandBuilder bldDSErrors = new MySqlCommandBuilder(daLocal);
daLocal.Update(dsLocal);
I understand that I can set daLocal.InsertCommand = new MySqlCommand(insert command), but I was wondering whether it is then necessary to loop through the results and append them to the VALUES section of the insert statement, or whether there is a method I'm not aware of that will handle this.
Barring built-in functionality, I assume it's best to use a StringBuilder to build up the insert query with all my values rows?
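No manual VALUES loop should be needed: Update() executes InsertCommand once per Added row, pulling each parameter's value from the row via its SourceColumn binding. A rough, untested sketch of the idea (the table name target and columns id/val are hypothetical; substitute your own):
// Bind each parameter to a SourceColumn so daLocal.Update() fills the
// values from every Added row itself -- this replaces the command
// builder's generated insert with an upsert.
var upsert = new MySqlCommand(
    "INSERT INTO target (id, val) VALUES (@id, @val) " +
    "ON DUPLICATE KEY UPDATE val = VALUES(val)", myLocal);
upsert.Parameters.Add("@id", MySqlDbType.Int32).SourceColumn = "id";    // hypothetical column
upsert.Parameters.Add("@val", MySqlDbType.Double).SourceColumn = "val"; // hypothetical column
upsert.UpdatedRowSource = UpdateRowSource.None; // skip the per-row refresh round trip
daLocal.InsertCommand = upsert;
daLocal.Update(dsLocal);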

Best approach for bulk insert to Access database in C#

I'm working on a Windows application which uses Microsoft Access as its database with the OleDb data provider.
In this project I import an XML file and write the data to the database.
I want to do a bulk insert instead of inserting one record at a time.
So I tried the DAO approach, but sometimes ended up with an exception like
"Currently locked unable to update"
Here is the code I used.
using TEST = Microsoft.Office.Interop.Access.Dao;

public void Insert()
{
    string sBaseDirectory = (AppDomain.CurrentDomain.BaseDirectory).ToString();
    string sODBPath = sBaseDirectory + @"\TEST.accdb";
    TEST.DBEngine dbEngine = new TEST.DBEngine();
    TEST.Database db = dbEngine.OpenDatabase(sODBPath);
    TEST.Recordset rsTest = db.OpenRecordset("dtTest");
    for (int i = 0; i < 1000; i++)
    {
        rsTest.AddNew();
        rsTest.Fields["ID"].Value = i;
        rsTest.Fields["Name"].Value = "Test";
        rsTest.Update();
    }
    rsTest.Close();
    db.Close();
}
With OleDb:
DataTable dt = new DataTable();
dt.Columns.Add("ID", typeof(int));
dt.Columns.Add("Name", typeof(string));
string TableSQl = "Select * from dtTest where ID=0";
OleDbDataAdapter dataAdapter = new OleDbDataAdapter(TableSQl, ConnectionString);
dataAdapter.InsertCommand = new OleDbCommand(INSERT);
OleDbConnection OleConn = new OleDbConnection(ConnectionString);
for (int i = 0; i < 1000; i++)
{
    dataAdapter.InsertCommand.Parameters.Add("ID", OleDbType.BigInt, 8, i.ToString());
    dataAdapter.InsertCommand.Parameters.Add("Name", OleDbType.BigInt, 8, "test");
}
dataAdapter.InsertCommand.Connection = OleConn;
dataAdapter.InsertCommand.Connection.Open();
dataAdapter.Update(dt);
dataAdapter.InsertCommand.Connection.Close();
Here it's not inserting the records into the table.
Please advise what is wrong with this code, and suggest a good approach as well.
See http://msdn.microsoft.com/en-us/library/bbw6zyha(v=vs.80).aspx - 'Using Parameters with a DataAdapter'
The Add method of the Parameters collection takes the name of the parameter, the DataAdapter-specific type, the size (if applicable to the type), and the name of the SourceColumn from the DataTable.
So you need to create the parameters correctly, populate the DataTable, and then call Update.
Is there any documentation to suggest that this approach will batch the inserts rather than doing them one at a time anyway?
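To make that concrete, a rough sketch under the assumption of an Access table dtTest(ID, Name) (untested; adjust names and types): the fourth argument to Add is the SourceColumn name, not a value, and Update() then executes the insert once per Added row.
// Parameterized insert; values come from each DataRow at Update() time.
var insert = new OleDbCommand("INSERT INTO dtTest (ID, [Name]) VALUES (?, ?)", OleConn);
insert.Parameters.Add("ID", OleDbType.Integer, 0, "ID");        // bound to column "ID"
insert.Parameters.Add("Name", OleDbType.VarWChar, 255, "Name"); // bound to column "Name"
dataAdapter.InsertCommand = insert;

for (int i = 0; i < 1000; i++)   // populate the DataTable once...
    dt.Rows.Add(i, "test");

dataAdapter.Update(dt);          // ...then let Update() loop over the rows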

C# and NpgsqlDataAdapter returning a single string instead of a data table

I have a PostgreSQL db and a C# application to access it. I'm having a strange problem with the values an NpgsqlDataAdapter.Fill call returns into a DataSet.
I've got this code:
NpgsqlCommand n = new NpgsqlCommand();
n.Connection = connector; // a class-member NpgsqlConnection
DataSet ds = new DataSet();
DataTable dt = new DataTable();

// DBTablesRef just holds constants for the db table names and columns
ArrayList cols = new ArrayList();
cols.Add(DBTablesRef.all); // all is just *
ArrayList idCol = new ArrayList();
idCol.Add(DBTablesRef.revIssID);
ArrayList idVal = new ArrayList();
idVal.Add(idNum); // a function parameter

// SelectBuilder and WhereBuilder are just small functions that return an
// SQL statement based on the parameters. n is passed to WhereBuilder
// because the builder uses named parameters and sets them on the
// NpgsqlCommand passed in.
String select = SelectBuilder(DBTablesRef.revTableName, cols) +
                WhereBuilder(n, idCol, idVal);
n.CommandText = select;

try
{
    NpgsqlDataAdapter da = new NpgsqlDataAdapter(n);
    ds.Reset();
    // fill the DataSet with the result from the NpgsqlDataAdapter
    da.Fill(ds);
    // a DataSet can hold multiple tables, but only the first is used here
    dt = ds.Tables[0];
}
catch (Exception e)
{
    Console.WriteLine(e.ToString());
}
So my problem is this: the above code works perfectly, just like I want it to. However, if instead of selecting all (*) I name individual columns in the query, I get the information I asked for, but rather than being split into separate entries in the DataTable, I get a single string in the first index of the DataTable that looks something like:
"(0,5,false,Bob Smith,7)"
And the data is correct; I would be expecting 0, then 5, then a boolean, then some text, etc. But I would (obviously) prefer it not be returned as one big string.
Does anyone know why a select on * gives a DataTable as expected, but a select on specific columns gives a DataTable with one entry that is the string of the values I asked for?
OK, I figured it out: it was in the SelectBuilder function. When more than one column was listed in the select statement, it wrapped the column list in parentheses. In PostgreSQL, a parenthesized column list such as SELECT (col1, col2, col3) is a row constructor, i.e. a single composite value, so the whole row comes back as one string-valued column instead of separate columns.
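For illustration, a hedged sketch with hypothetical table and column names:
// Wrapping the list in parentheses builds one composite (row) value:
n.CommandText = "SELECT (id, qty, is_active, name) FROM rev"; // one composite column
// Without parentheses, each column comes back separately:
n.CommandText = "SELECT id, qty, is_active, name FROM rev";   // four separate columns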
