I am guessing that not all SQL is created equal. I am diving into the world of DSNs and ODBC drivers in C# and having a bit of a go at it. I am trying to get all the tables in a database defined by a DSN; all I know about it is that it uses a Transoft ODBC driver. I can connect to it and get back tables using this code:
public void ConnectToData(String dsn)
{
    System.Data.Odbc.OdbcConnection conn =
        new System.Data.Odbc.OdbcConnection();
    //conn.ConnectionString = "FIL=MS Access;DSN=" + dsn;
    conn.ConnectionString = "DSN=" + dsn; // dsn equals "Company_Shared"
    try
    {
        conn.Open();
        MessageBox.Show("Connected!");
        lstBoxLogs.Items.Add("Connected");
        DataTable tableschema = conn.GetSchema("TABLES");
        DataSet set = tableschema.DataSet;
        // first column name
        for (int i = 0; i < tableschema.Columns.Count; i++)
        {
            lstBoxLogs.Items.Add(tableschema.Columns[i].ColumnName);
        }
        lstBoxLogs.Refresh();
        MessageBox.Show(tableschema.Columns.Count + " tables found");
    }
    catch (Exception ex)
    {
        MessageBox.Show("Failed to connect to data source: " + ex.GetBaseException().Message);
    }
    finally
    {
        conn.Close();
    }
}
It connects fine and reports tables, but not the tables I know I am looking for in the database. What comes back is the following:
TABLE_QUALIFIER
TABLE_OWNER
TABLE_NAME
TABLE_TYPE
REMARKS
I am not sure how to get the actual table names from this information so I can just dump all the data in every table (this is all I want to do). Is this because I have to read up on what kind of SQL a Transoft database uses, and does that render the conn.GetSchema("TABLES") call useless?
Would this work?
using (DataTable tableschema = conn.GetSchema("TABLES"))
{
    // first column name
    foreach (DataRow row in tableschema.Rows)
    {
        lstBoxLogs.Items.Add(row["TABLE_NAME"].ToString());
    }
}
EDIT: Fixed the code to not use the DataSet.
EDIT: Updated the code for future readers to implement the best practice of disposing of the DataTable.
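For future readers: once the table names are in hand, a minimal sketch of the "dump all the data in every table" goal could look like the following, run while the connection is still open. This assumes the Transoft driver accepts plain SELECT statements; the listbox name is carried over from the question.

using (DataTable tableschema = conn.GetSchema("TABLES"))
{
    foreach (DataRow row in tableschema.Rows)
    {
        string tableName = row["TABLE_NAME"].ToString();
        // Assumption: the driver supports this basic SELECT syntax.
        using (var adapter = new System.Data.Odbc.OdbcDataAdapter("SELECT * FROM " + tableName, conn))
        {
            var data = new DataTable(tableName);
            adapter.Fill(data);
            lstBoxLogs.Items.Add(tableName + ": " + data.Rows.Count + " rows");
        }
    }
}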
Before you comment, please note that I understand my code is vulnerable to SQL injection; for the sake of simplicity, please disregard any comments about that vulnerability.
I've checked around the website for answers, but none seem to fit my situation; many are PHP.
I am trying to update information in a MySQL database from a C# Forms application in Visual Studio 2012. I've allowed the user to input data, but I also want them to be able to update their data.
I've tried all sorts of different methods and many give me errors. I feel like I'm very close with this method.
string Connection = "server = xxxx; " + "database = xxxxx; " + "uid = xxxx;" + "pwd = xxxxx;";
MySqlConnection Conn = new MySqlConnection(Connection);
try
{
    MySqlDataAdapter dAdapter = new MySqlDataAdapter("SELECT * FROM example", Conn);
    DataTable dTable = new DataTable();
    dAdapter.Fill(dTable);
    DataRow dr = dTable.NewRow();
    dr["TestData1"] = Convert.ToInt32(cboTestData1.Text);
    dr["TestData2"] = txtTestData2.Text;
    dr["TestData3"] = Convert.ToInt32(txtTestData3.Text);
    dTable.Rows.Add(dr);
    string Query = "Update example(field 1, field 2, field 3) VALUES ("TestData1", "TestData2", "TestData3")";
    dTable.Rows.Add(Query);
    MySqlCommandBuilder commandBuilder = new MySqlCommandBuilder(dAdapter);
    int iRowsAffected = dAdapter.Update(dTable);
    if (iRowsAffected == 1)
    {
        MessageBox.Show("Record Added", MessageBoxButtons.OK, MessageBoxIcon.Information);
    }
    else
    {
        MessageBox.Show("Error adding record", "Record Added", MessageBoxButtons.OK, MessageBoxIcon.Information);
    }
}
catch (MySqlException ex)
{
    MessageBox.Show(ex.Message);
}
The issue is that it doesn't like the Query code because it is malformed. It gives me this error message:
Additional information: Input string was not in a correct format. Couldn't store in ID Column. Expected type is Int32.
I've looked around the internet for solutions but all either do not offer the same situation as mine or are related to PHP code.
The update query should use syntax like:
update SomeTable
set SomeField = NewValue,
AnotherField = AnotherValue
where
SomeKey = KeyIDTheUserWasWorkingWith
Also, for the future: I know this is mocked-up sample data and columns, but you should really use real table and column names. We understand the sample data could be made up to preserve confidentiality, but real structures make it more practical to give accurate answers.
The INSERT statement is closer to what you have and looks like:
insert into SomeTable
( fld1, fld2, fld3 )
values
( someFld1, anotherFld2, lastField )
Finally, regarding your column names: if you DO have columns with embedded spaces (I never do), be sure to wrap them in backticks (e.g. `my column`) so the engine recognizes the whole string as the column name.
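As a concrete sketch of that UPDATE shape in C# (hedged: the table, column, and control names below are the made-up ones from the question, an integer key column named ID is assumed to exist, and currentRecordId is a hypothetical variable holding the key the user was working with):

string sql = "UPDATE example SET field1 = @f1, field2 = @f2, field3 = @f3 WHERE ID = @id";
using (var conn = new MySqlConnection(Connection))
using (var cmd = new MySqlCommand(sql, conn))
{
    cmd.Parameters.AddWithValue("@f1", Convert.ToInt32(cboTestData1.Text));
    cmd.Parameters.AddWithValue("@f2", txtTestData2.Text);
    cmd.Parameters.AddWithValue("@f3", Convert.ToInt32(txtTestData3.Text));
    cmd.Parameters.AddWithValue("@id", currentRecordId); // hypothetical: the key the user was editing
    conn.Open();
    int rows = cmd.ExecuteNonQuery();
}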
I think there is some confusion in your code.
The SELECT statement may be bringing back 4 fields such as: ID, TestData1, TestData2, TestData3.
You then fill a DataTable with the records retrieved from the database.
Next, you create a new DataRow in the DataTable (that will have the four columns that match the SELECT statement). You place values into the editable fields (not the ID field).
Then you add the DataRow to the DataTable.
Here is where it gets mixed up...
You create a SQL Update Query String - then add that string as a DataRow to the DataTable.
When updating the DataTable via the MySqlDataAdapter, the last DataRow is not a valid record to be parsed by the Adapter.
Try removing the two lines:
string Query = "Update example(field 1, field 2, field 3) VALUES ("TestData1", "TestData2", "TestData3")";
dTable.Rows.Add(Query);
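With those two lines gone, the command builder generates the INSERT for the new DataRow on its own. A minimal sketch of the corrected flow (assuming the same table and controls as the question):

MySqlDataAdapter dAdapter = new MySqlDataAdapter("SELECT * FROM example", Conn);
// Attach the command builder before Update is called.
MySqlCommandBuilder commandBuilder = new MySqlCommandBuilder(dAdapter);
DataTable dTable = new DataTable();
dAdapter.Fill(dTable);

DataRow dr = dTable.NewRow();
dr["TestData1"] = Convert.ToInt32(cboTestData1.Text);
dr["TestData2"] = txtTestData2.Text;
dr["TestData3"] = Convert.ToInt32(txtTestData3.Text);
dTable.Rows.Add(dr);          // the row is now in the Added state

int iRowsAffected = dAdapter.Update(dTable); // generates and runs the INSERT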
I have a SQLite database consisting of 50 columns and more than 1.2 million rows. I'm using System.Data.SQLite with Visual Studio 2013 and C#.
I used a very simple piece of code to retrieve my data from the database, but it is taking too much time.
private SQLiteConnection sqlite;

public MySqlite(string path)
{
    sqlite = new SQLiteConnection("Data Source=" + path + "\\DBName.sqlite");
}

public DataTable selectQuery(string query)
{
    SQLiteDataAdapter ad;
    DataTable dt = new DataTable();
    try
    {
        SQLiteCommand cmd;
        sqlite.Open();
        cmd = sqlite.CreateCommand();
        cmd.CommandText = query; // set the passed query
        ad = new SQLiteDataAdapter(cmd);
        ad.Fill(dt); // fill the datasource
    }
    catch (SQLiteException ex)
    {
        // exception code here.
    }
    sqlite.Close();
    return dt;
}
And, the select statement is:
select * from table
As I said, it is very simple code.
I want to know how to boost the SELECT operation's performance to get an appropriate result. With this code, the process takes up to 1 minute, and I want to get it under 1 second.
Another thing: there seem to be some settings for configuring a SQLite database, but I don't know where to apply them. Could someone tell me how to configure a SQLite database with System.Data.SQLite?
Consider narrowing your result set by selecting only the columns you need, or by paging.
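A minimal sketch of both ideas, reusing the selectQuery method from the question (the column and table names here are placeholders, and the page size is arbitrary):

// Select only the needed columns instead of all 50.
DataTable dt = selectQuery("SELECT id, name, price FROM myTable");

// Or page through the data instead of loading 1.2 million rows at once.
int pageSize = 1000;
int pageIndex = 0; // increment as the user pages forward
DataTable page = selectQuery(
    "SELECT id, name, price FROM myTable LIMIT " + pageSize +
    " OFFSET " + (pageIndex * pageSize));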
I have a handheld device that connects to a SQL Server database, reads the SQL Server data, and copies it into the SQL Compact database located on the device. This is my code:
public void InsertData() // inserts data into the SQL Compact database
{
    dt = new DataTable();
    dt = SqlServer_GetData_dt("Select id_price, price, id_item from prices", SqlCeConnection); // get data from SQL Server
    if (dt.Rows.Count > 0)
    {
        for (int i = 0; i < dt.Rows.Count; i++)
        {
            string sql = "";
            sql = "insert into prices" +
                  " ( id_prices, price, id_item) values('"
                  + dt.Rows[i]["id_price"].ToString().Trim() + "', '"
                  + dt.Rows[i]["price"].ToString().Trim() + "', '"
                  + dt.Rows[i]["id_item"].ToString().Trim() + "')";
            obj.SqlCE_WriteData_bit(sql, connection.ConnectionString); // insert into SQL Compact
        }
    }
}
public DataTable SqlServer_GetData_dt(string query, string conn)
{
    try
    {
        DataTable dt = new DataTable();
        string SqlCeConnection = conn;
        SqlConnection sqlConnection = new SqlConnection(SqlCeConnection);
        sqlConnection.Open();
        {
            SqlDataReader darSQLServer;
            SqlCommand cmdCESQLServer = new SqlCommand();
            cmdCESQLServer.Connection = sqlConnection;
            cmdCESQLServer.CommandType = CommandType.Text;
            cmdCESQLServer.CommandText = query;
            darSQLServer = cmdCESQLServer.ExecuteReader();
            dt.Load(darSQLServer);
            sqlConnection.Close();
        }
        return dt;
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.ToString());
        DataTable dt = new DataTable();
        return dt;
    }
}
public object SqlCE_WriteData_bit(string query, string conn)
{
    try
    {
        string SqlCeConnection = conn;
        SqlCeConnection sqlConnection = new SqlCeConnection(SqlCeConnection);
        if (sqlConnection.State == ConnectionState.Closed)
        {
            sqlConnection.Open();
        }
        SqlCeCommand cmdCESQLServer = new SqlCeCommand();
        cmdCESQLServer.Connection = sqlConnection;
        cmdCESQLServer.CommandType = CommandType.Text;
        cmdCESQLServer.CommandText = query;
        object i = cmdCESQLServer.ExecuteScalar();
        sqlConnection.Close();
        return i;
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.ToString());
        return 0;
    }
}
This all works fine, but the problem is that it is very slow. I have 20,000 rows that need to be inserted into the SQL Compact database.
Is there any way to insert faster?
Thanks.
Aside from the obvious poor usage of a new connection for every call, you can greatly improve things by eliminating the query processor altogether. That means: don't use SQL. Instead, open the destination table with TableDirect and a SqlCeResultSet, then iterate through the source data (a DataTable is a bad idea, but that's a completely different discussion) and use a series of CreateRecord, SetValues, and Insert calls.
A pretty good example can be found here (though again, I'd use SetValues to set the entire row, not each individual field).
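A minimal sketch of that TableDirect approach, assuming the prices table and source DataTable (dt) from the question, and assuming the DataTable's column order matches the table's:

using (var conn = new SqlCeConnection(connectionString))
using (var cmd = new SqlCeCommand("prices", conn))
{
    conn.Open();
    cmd.CommandType = CommandType.TableDirect; // bypass the query processor
    using (SqlCeResultSet rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable))
    {
        foreach (DataRow row in dt.Rows)
        {
            SqlCeUpdatableRecord rec = rs.CreateRecord();
            rec.SetValues(row.ItemArray); // set the entire row at once
            rs.Insert(rec);
        }
    }
}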
Reuse your connection; don't create a new connection for every INSERT statement.
Instead of passing a connection string to your SqlCE_WriteData_bit method, create the connection once in the InsertData method and pass the connection object to SqlCE_WriteData_bit.
Put all the data into a DataTable and then use a SqlCeDataAdapter to save it all with one call to Update. You may have to fiddle with the UpdateBatchSize to get the best performance.
Looking more closely, I see that you already have a DataTable, so looping through it yourself is unnecessary. Just note that, as you have it, the RowState of every DataRow will be Unchanged, so the rows will not be inserted. I think you can call DataTable.Load in such a way that all RowState values are left as Added; if not, use a SqlDataAdapter instead, set AcceptChangesDuringFill to false, and call Fill.
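A sketch of that adapter-based route (hedged: the table and column names are taken from the question, and the destination column names are assumed to match the source's, which the question's INSERT suggests may need adjusting):

// Fill from SQL Server with rows left in the Added state instead of Unchanged.
var srcAdapter = new SqlDataAdapter("SELECT id_price, price, id_item FROM prices", sqlServerConnection);
srcAdapter.AcceptChangesDuringFill = false;
var dt = new DataTable();
srcAdapter.Fill(dt);

// Let a command builder generate the INSERT, then push all Added rows in one Update call.
var ceAdapter = new SqlCeDataAdapter("SELECT id_price, price, id_item FROM prices", sqlCeConnection);
var builder = new SqlCeCommandBuilder(ceAdapter);
ceAdapter.Update(dt); // inserts every row whose RowState is Added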
Is there any way for faster insert?
Yes, but it probably won't be acceptably fast when we're talking about inserting 20k rows.
The problem I can see is that you are opening a connection for every single row you retrieved from SqlServer_GetData_dt; that is, you open a connection to insert data 20k times, and opening a connection is an expensive operation. You should build the whole query using a StringBuilder object and then execute all the insert statements in one batch.
This will bring some performance gains, but don't expect it to solve your problem; inserting 20k rows will still take some time, especially if indexes need to be rebuilt. My suggestion is to thoroughly analyse your requirements and be a bit smarter about how you approach them. Options are:
- bundle a pre-populated database if possible, so your app doesn't have to suffer the performance penalties of the population process
- if not, run the insert process in the background and access the data only when the pre-population is finished
My goal is to copy generic tables from one database to another. I would like it to copy the data as is; it would be fine either to delete whatever is in the destination table or to add to it, with new columns if there are new columns. The only thing I may want to change is to add something for versioning, which can be done in a separate part of the query.
Opening the data is no problem, but when I try a bulk copy it fails. I have gone through several posts, and the closest thing is this one:
SqlBulkCopy Insert with Identity Column
I removed SqlBulkCopyOptions.KeepIdentity from my code, but it still throws the error
"The given ColumnMapping does not match up with any column in the source or destination"
I have tried playing with the SqlBulkCopyOptions, but so far no luck.
Ideas?
public void BatchBulkCopy(string connectionString, DataTable dataTable, string DestinationTbl, int batchSize)
{
    // Get the DataTable
    DataTable dtInsertRows = dataTable;
    using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString))
    {
        sbc.DestinationTableName = DestinationTbl;
        // Number of records to be processed in one go
        sbc.BatchSize = batchSize;
        // Finally write to server
        sbc.WriteToServer(dtInsertRows);
    }
}
If I may suggest another approach, have a look at the SMO (SQL Server Management Objects) library for tasks like this.
You can find an interesting article here.
Using SMO, you can perform tasks in SQL Server, such as bulk copy, treating tables, columns and databases as objects.
Some time ago, I used SMO in a small open source application I developed, named SQLServerDatabaseCopy.
To copy the data from table to table, I wrote this code (the complete code is here):
foreach (Table table in Tables)
{
    string columnsTable = GetListOfColumnsOfTable(table);
    string bulkCopyStatement = "SELECT {3} FROM [{0}].[{1}].[{2}]";
    bulkCopyStatement = String.Format(bulkCopyStatement, SourceDatabase.Name, table.Schema, table.Name, columnsTable);
    using (SqlCommand selectCommand = new SqlCommand(bulkCopyStatement, connection))
    {
        LogFileManager.WriteToLogFile(bulkCopyStatement);
        SqlDataReader dataReader = selectCommand.ExecuteReader();
        using (SqlConnection destinationDatabaseConnection = new SqlConnection(destDatabaseConnString))
        {
            if (destinationDatabaseConnection.State == System.Data.ConnectionState.Closed)
            {
                destinationDatabaseConnection.Open();
            }
            using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destinationDatabaseConnection))
            {
                bulkCopy.DestinationTableName = String.Format("[{0}].[{1}]", table.Schema, table.Name);
                foreach (Column column in table.Columns)
                {
                    // no mapping is needed for computed columns!
                    if (!column.Computed)
                    {
                        bulkCopy.ColumnMappings.Add(column.Name, column.Name);
                    }
                }
                try
                {
                    bulkCopy.WriteToServer(dataReader);
                    LogFileManager.WriteToLogFile(String.Format("Bulk copy successful for table [{0}].[{1}]", table.Schema, table.Name));
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.Message);
                    Console.WriteLine(ex.StackTrace);
                }
                finally
                {
                    // closing reader
                    dataReader.Close();
                }
            }
        }
    }
}
As you can see, you have to add a ColumnMapping to the BulkCopy object for each column, because you have to define which column of the source table maps to which column of the destination table. This is the reason for your error: The given ColumnMapping does not match up with any column in the source or destination.
I would add some validation to check which columns your source and destination tables have in common.
This essentially queries the system views (I have assumed SQL Server, but this would be easily adaptable to another DBMS) to get the column names in the destination table (excluding identity columns), iterates over them, and, if there is a match in the source table, adds the column mapping.
public void BatchBulkCopy(string connectionString, DataTable dataTable, string DestinationTbl, int batchSize)
{
    using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString))
    {
        sbc.DestinationTableName = DestinationTbl;
        string sql = "SELECT name FROM sys.columns WHERE is_identity = 0 AND object_id = OBJECT_ID(@table)";
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@table", DestinationTbl);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    var column = reader.GetString(0);
                    if (dataTable.Columns.Contains(column))
                    {
                        sbc.ColumnMappings.Add(column, column);
                    }
                }
            }
        }
        // Number of records to be processed in one go
        sbc.BatchSize = batchSize;
        // Finally write to server
        sbc.WriteToServer(dataTable);
    }
}
This could still produce invalid cast errors, as there is no data-type check, but it should get you started on a generic method.
You can add
sbc.ColumnMappings.Add(0, 0);
sbc.ColumnMappings.Add(1, 1);
sbc.ColumnMappings.Add(2, 2);
sbc.ColumnMappings.Add(3, 3);
sbc.ColumnMappings.Add(4, 4);
before executing
sbc.WriteToServer(dataTable);
I have a question regarding performance. This is my scenario.
I have a MySQL database and an application that, from time to time, moves records matching the criteria of a query from one table to another. The way this is done is:
foreach (object obj in list)
{
    string id = obj.ToString().Split(',')[0].Trim();
    string query = " insert into old_records select * from testes where id='" +
        id + "';" + " delete from testes where id='" + id + "'";
    DB _db = new DB();
    _db.DBConnect(query);
}
This is the way I connect to the database:
DataTable _dt = new DataTable();
MySqlConnection _conn = new MySqlConnection(connectionString);
MySqlCommand _cmd = new MySqlCommand
{
    Connection = _conn,
    CommandText = query
};
MySqlDataAdapter _da = new MySqlDataAdapter(_cmd);
MySqlCommandBuilder _cb = new MySqlCommandBuilder(_da);
_dt.Clear();
try
{
    _conn.Open();
    _cmd.ExecuteNonQuery();
    _da.Fill(_dt);
}
catch (MySqlException ex)
{
    Console.WriteLine(ex.Message);
}
finally
{
    if (_conn != null) _conn.Close();
}
return _dt;
So my question is: I have about 4,000 rows in the table, and it takes a long time to move all the records from one table to another, especially across a network. Is there a way to make this faster?
I have been doing some reading, and there are several options for handling data from the DB, like data adapters, readers, sets, and tables. Which one is fastest for this case? Should I be using a different method?
Two things I see: first, you're opening and closing your connection for each insert, which is usually your most expensive operation, so you won't want to do that. You can also try batching the statements rather than executing them one at a time. When you do that, you have to be careful, because things could break in the middle of a large update, so you will want to work inside a transaction. Without knowing too much about what your data structure looks like, I refactored your method to batch 100 at a time. First, create a little helper method called MoveItems that takes a connection and a list of IDs. Don't put a try/catch in this method; you'll see why later.
Note: this method doesn't use parameters. I highly recommend you change it so it does.
private static void MoveItems(MySqlConnection conn, List<string> moveList)
{
    string query = string.Format(
        "insert into old_records select * from testes where id IN({0}); " +
        "delete from testes where id IN({0})",
        string.Join(",", moveList.ToArray()));
    var cmd = new MySqlCommand
    {
        Connection = conn,
        CommandText = query
    };
    cmd.ExecuteNonQuery();
}
Next, change your main method to open the database connection once and then call this method 100 IDs at a time. This main method has the try/catch, so if the call to MoveItems throws an exception, it will be caught here.
// the using statement will call the dispose method for you
using (var conn = new MySqlConnection(connectionString))
{
    // open the connection and start the transaction
    conn.Open();
    var transaction = conn.BeginTransaction();
    // create a list to temporarily store the ids
    List<string> moves = new List<string>();
    try
    {
        // clean the list: do the trim and keep everything that's not null or empty
        var cleanList = list.Select(obj => obj.ToString().Split(',')[0].Trim()).Where(s => !string.IsNullOrEmpty(s));
        // loop over the clean list
        foreach (string id in cleanList)
        {
            // add the id to the move list
            moves.Add("'" + id + "'");
            // batch 100 at a time
            if (moves.Count % 100 == 0)
            {
                // when we reach 100, execute them and clear the list out
                MoveItems(conn, moves);
                moves.Clear();
            }
        }
        // the list count might not be divisible by 100, so see if there's anything left
        if (moves.Count > 0)
        {
            MoveItems(conn, moves);
            moves.Clear();
        }
        // woohoo! commit the transaction
        transaction.Commit();
    }
    catch (MySqlException ex)
    {
        // oops! something happened; roll back everything
        transaction.Rollback();
        Console.WriteLine(ex.Message);
    }
    finally
    {
        conn.Close();
    }
}
You may have to play with that 100 number. I remember from when I worked with MySQL a lot that I saw some performance differences between using IN and giving it a list of OR clauses (id = 'ID1' OR id = 'ID2' ...). But executing 40 or 80 statements will certainly perform better than executing 4,000, and opening the database connection once instead of 4,000 times should also give you much better performance.
I might be wrong, but there is not much you can do to make it faster. After all, you want to take an entire table's data and insert it into another table, and that process will take some time if the table isn't small. However, you can try the code below; it should do the trick and save some time.
INSERT INTO TABLE2 (FIELDNAME_IN_TABLE2, FIELDNAME2_IN_TABLE2)
SELECT FIELDNAME_IN_TABLE1, FIELDNAME2_IN_TABLE1
FROM TABLE1
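Since the goal in the question is to move (not just copy) the rows, here is a hedged sketch of the full server-side version, assuming old_records and testes have the same column layout and that the table engine supports transactions (e.g. InnoDB); the id values are hypothetical placeholders for the real criteria:

START TRANSACTION;

-- copy everything matching the criteria in one statement
INSERT INTO old_records
SELECT * FROM testes
WHERE id IN ('id1', 'id2');

-- then remove the copied rows
DELETE FROM testes
WHERE id IN ('id1', 'id2');

COMMIT;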