Hi, I'm writing a program that does IO with databases, and I have the following function that reads rows from the database and adds them to a combo box:
private void loadFromTuzel()
{
string constring = "Server=localhost;Database=ozturk;Uid=____;pwd=_____";
MySqlConnection newCon = new MySqlConnection(constring);
string selectCommand = "SELECT * FROM ozturk.tuzelkisi";
MySqlCommand cmd = new MySqlCommand(selectCommand, newCon);
MySqlDataReader myReader;
newCon.Open();
myReader = cmd.ExecuteReader();
while (myReader.Read())
{
cbselected.Items.Add(myReader["name"].ToString() + " " + myReader["Surname"].ToString());
}
}
As can be seen from the code, the program loads the data from the database into the combo box...
I need to use this function in a different form, but I need to load the data into a different combo box. I'm wondering whether it's possible to pass a toolbox item (the combo box) as a parameter to my function, so that it would look something like this:
private void myfunction(thecombobox parameter comes here)
{
// The execution code and then
// theComboBoxParameter.Items.Add(...)
}
so I can use this function over and over on different forms just by passing a different combo box. Is something like this possible?
Thanks
Yes, it is possible, but you'd be much better served by moving that logic out of your UI code altogether and just returning the items from the database. Don't pass the ComboBox as a parameter; instead, get the results from that method and tie everything together in your UI.
This is the basis of Separation of Concerns. Your ComboBoxes shouldn't care where the data comes from. They only care that they have data to display.
For the simple answer to your question, just adjust your method like this:
public static class Utilities
{
public static void loadFromTuzel(ComboBox cbo)
{
// All of your other logic
while (myReader.Read())
{
cbo.Items.Add(myReader["name"].ToString() + " " +
myReader["Surname"].ToString());
}
}
}
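Each form can then call the helper with its own combo box (a minimal sketch; `cbselected` and `cbOtherForm` stand in for whatever your combo boxes are actually named):

```csharp
// On the first form:
Utilities.loadFromTuzel(cbselected);

// On any other form, with a different ComboBox:
Utilities.loadFromTuzel(cbOtherForm);
```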
If you follow SoC, then you'd have something like this:
public class Repository {
public IEnumerable<string> GetNamesFromTuzel()
{
// All of the same logic
while (myReader.Read())
{
yield return myReader["name"].ToString() + " " +
myReader["Surname"].ToString();
}
}
}
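The form code is then reduced to pure binding; something along these lines (a sketch, assuming the form can construct or receive a `Repository` instance):

```csharp
private void Form1_Load(object sender, EventArgs e)
{
    var repository = new Repository();

    // The form neither knows nor cares where the names came from;
    // it only binds whatever strings the repository hands back.
    foreach (string name in repository.GetNamesFromTuzel())
    {
        cbselected.Items.Add(name);
    }
}
```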
I have created a simplified SQL Data class and a class method for returning a ready-to-use resultset:
public SQL_Data(string database) {
string ConnectionString = GetConnectionString(database);
cn = new SqlConnection(ConnectionString);
try {
cn.Open();
} catch (Exception e) {
Log.Write(e);
throw;
}
}
public SqlDataReader DBReader(string query) {
try {
using (SqlCommand cmd = new SqlCommand(query, this.cn)) {
return cmd.ExecuteReader(CommandBehavior.CloseConnection);
}
} catch {
Log.Write("SQL Error with either Connection String:\n" + cn + " \nor Query:\n" + query);
throw;
}
}
(I catch any errors, log them, and then re-throw so the error can be handled higher up the chain. Also, I did not include the GetConnectionString() code for brevity. It just returns the requested connection string. That's all.)
This all works just fine, and with a single line of code, I'm ready to .Read() rows.
SqlDataReader rs = new SQL_Data("MyDatabase").DBReader(@"SELECT * FROM Employees");
while (rs.Read()) {
// code
}
rs.Close();
I want to expand this and add a .ColumnReader() method that I want to chain to .DBReader() like this:
string empID = new SQL_Data("MyDatabase").DBReader(@"SELECT * FROM Employees").ColumnReader("EmpID");
I attempted this by adding a .ColumnReader() method, but it ended up being a method of the SQL_Data class directly, not a member or extension of .DBReader(). I also tried adding .ColumnReader() inside .DBReader() (like a "closure"), but that didn't work either.
Can this be done?
This ended up working for me:
public static class SQLExtensions {
public static dynamic ColumnReader(this SqlDataReader rs, string colName) {
return rs[colName];
}
}
I will have to expand on it a bit to add some error checking, and perhaps return more than just the dynamic value - for example, an object with the value and its SQL data type. But Paul and Bagus' comments got me on the right track.
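For reference, a version with basic error checking might look something like this (a sketch; `GetOrdinal` is the standard way to validate a column name on a `SqlDataReader`, and it throws `IndexOutOfRangeException` for unknown columns):

```csharp
public static class SQLExtensions
{
    public static dynamic ColumnReader(this SqlDataReader rs, string colName)
    {
        // GetOrdinal throws IndexOutOfRangeException when the column is missing,
        // so we can turn that into a clearer error message for the caller.
        int ordinal;
        try
        {
            ordinal = rs.GetOrdinal(colName);
        }
        catch (IndexOutOfRangeException)
        {
            throw new ArgumentException(
                $"Column '{colName}' was not found in the result set.", nameof(colName));
        }

        // Map SQL NULL to null rather than returning DBNull.Value.
        return rs.IsDBNull(ordinal) ? null : rs.GetValue(ordinal);
    }
}
```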
I am coming from Laravel and am new to ASP.NET MVC. In Laravel I used to do this to assert whether a record was created in the database:
public function test_a_user_is_created_in_database()
{
// Arrange
// Act
$this->assertDatabaseHas('users', [
'email' => 'sally@example.com'
]);
}
Is there a way to accomplish the same thing in Xunit?
There is probably a more elegant way to accomplish the goal, but this works fine for my purposes:
public static void AssertDatabaseHas(string table, Dictionary<string, object> filters,
bool assertMissing = false) {
using (MySqlCommand cmd = new MySqlCommand()) {
cmd.Connection = GetDbConnection();
// Assemble the WHERE part of the query
// and add parameters to the command.
var filterStr = "1 = 1";
foreach (KeyValuePair<string, object> item in filters) {
if (string.IsNullOrEmpty(item.Value.ToString())) {
filterStr += " AND " + item.Key + " IS NULL";
} else {
filterStr += " AND " + item.Key + " = @" + item.Key;
cmd.Parameters.AddWithValue(item.Key, item.Value);
}
}
// Put the query together.
cmd.CommandText = string.Format("SELECT 1 FROM {0} WHERE {1}", table, filterStr);
// Execute the query and check the result.
using (MySqlDataReader rdr = cmd.ExecuteReader()) {
if (assertMissing) {
Assert.False(rdr.HasRows, "Undesired record exists.");
} else {
Assert.True(rdr.HasRows, "Desired record does not exist.");
}
}
}
}
A reverse of the function is also easily added:
public static void AssertDatabaseMissing(string table, Dictionary<string, object> filters) {
AssertDatabaseHas(table, filters, assertMissing: true);
}
When both are added to a MyCustomAssertions class, they can be called like this:
public void test_a_user_is_created_in_database()
{
MyCustomAssertions.AssertDatabaseHas("users", new Dictionary<string, object> {
{ "email", "sally@example.com" }
});
MyCustomAssertions.AssertDatabaseMissing("users", new Dictionary<string, object> {
{ "email", "sally@example.com" }, { "id", "10" }
});
}
Note:
The code can be easily adapted for MSTest if you happen to be using that; all you need to change is Assert.False to Assert.IsFalse and the same for True.
This example uses MySQL but can probably be modified for any engine. For Npgsql (PostgreSQL) for example, change MySqlCommand to NpgsqlCommand and MySqlDataReader to NpgsqlDataReader.
That is an integration test.
You probably don't want an integration test. Typically in .NET unit testing, you would use something like FakeItEasy or Moq to supply correctly typed data, so that the code under test receives data matching the scenario you want to test. If you are testing what happens when a user is absent, you would set up the mocked data-access call to return the appropriate "not found" response; if you are testing what happens when the user is present, you would set it up to return data appropriate to that user.
An integration test might be appropriate when integrating a web service, where you don't know for sure what it will return; but if you are using something like DbContext and Entity Framework (and you probably should be), there is no question about what successfully loading a user returns.
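A sketch of that unit-testing style with Moq and xUnit (the `User`, `IUserRepository`, and `UserService` types here are hypothetical stand-ins for your own data-access abstraction):

```csharp
public class User { public string Email { get; set; } }

public interface IUserRepository
{
    User FindByEmail(string email);
}

public class UserService
{
    private readonly IUserRepository _repo;
    public UserService(IUserRepository repo) { _repo = repo; }
    public bool UserExists(string email) => _repo.FindByEmail(email) != null;
}

public class UserServiceTests
{
    [Fact]
    public void UserExists_reflects_what_the_repository_returns()
    {
        // Arrange: fake the data access instead of hitting a real database.
        var repo = new Mock<IUserRepository>();
        repo.Setup(r => r.FindByEmail("sally@example.com"))
            .Returns(new User { Email = "sally@example.com" });

        var service = new UserService(repo.Object);

        // Act + Assert: the code under test sees exactly the data we set up.
        Assert.True(service.UserExists("sally@example.com"));
        Assert.False(service.UserExists("bob@example.com"));
    }
}
```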
I have a WPF application written in C# with Visual Studio, and I am using an Access database.
I have a loop that must complete in at most 500MS, but it takes about 570MS. Of that, ~340MS is deliberate wait time, which leaves ~160MS that I can try to optimize.
After checking with a Stopwatch, I found that each write to my Access database takes about ~50MS (and I have three writes).
I have no idea how to optimize the database writes.
The class that connects to and uses the database lives in an external DLL
and looks like this (I also include an example of one method that takes ~50MS of runtime, named "AddDataToLocalHeaderResult"):
namespace DataBaseManager
{
public class LocalPulserDBManager
{
private string localConnectionString;
private string databaseName = $@"C:\Pulser\LocalPulserDB.mdb";
private readonly int _30DaysBack = -30;
private static readonly Lazy<LocalPulserDBManager> lazy =new Lazy<LocalPulserDBManager>(() => new LocalPulserDBManager());
public static LocalPulserDBManager LocalPulserDBManagerInstance { get { return lazy.Value; } }
private void CreateConnectionString()
{
localConnectionString = $@"Provider=Microsoft.Jet.OLEDB.4.0;Data Source={databaseName};Persist Security Info=True";
}
private LocalPulserDBManager()
{
CreateConnectionString();
}
public void AddDataToLocalHeaderResult(string reportNumber,string reportDescription,
string catalog,string workerName,int machineNumber, Calibration c,string age)
{
if (IsHeaderLocalDataExist(reportNumber, catalog, machineNumber, c) == false)
{
using (OleDbConnection openCon = new OleDbConnection(localConnectionString))
{
string query = "INSERT into [HeaderResult] ([ReportNumber],[ReportDescription],[CatalogNumber], " +
"[WorkerName], [LastCalibrationDate], [NextCalibrationDate], [MachineNumber], [Age]) " +
"VALUES (@report, @reportDescription, @catalog, @workerName," +
" @LastCalibrationDate, @NextCalibrationDate, @machineNumber, @age)";
using (OleDbCommand command = new OleDbCommand(query))
{
command.Parameters.AddWithValue("@report", reportNumber);
command.Parameters.AddWithValue("@reportDescription", reportDescription);
command.Parameters.AddWithValue("@catalog", catalog);
command.Parameters.AddWithValue("@workerName", workerName);
command.Parameters.AddWithValue("@LastCalibrationDate", c.LastCalibrationDate);
command.Parameters.AddWithValue("@NextCalibrationDate", c.NextCalibrationDate);
command.Parameters.AddWithValue("@machineNumber", machineNumber);
command.Parameters.AddWithValue("@age", age);
command.Connection = openCon;
openCon.Open();
int recordsAffected = command.ExecuteNonQuery();
openCon.Close();
}
}
}
}
....
....
METHODS
....
}
}
In my executable program I use it like this:
I have a using directive: using static DataBaseManager.LocalPulserDBManager;
and in my code I execute the method like this: LocalPulserDBManagerInstance.AddDataToLocalHeaderResult(ReportNumber, Date_Description, CatalogNumber, WorkerName, (int)MachineNumber, calibrationForSave, AgeCells);
One of my Access database tables looks like this:
One row in that table looks like this:
Is ~50MS a normal runtime in this situation?
If any information is missing here, please tell me...
********************* EDIT **************************
I changed my AddDataToLocalHeaderResult method as the first comment suggested,
but I got the same result:
public void AddDataToLocalHeaderResult(string reportNumber,string reportDescription,
string catalog,string workerName,int machineNumber, Calibration c,string age)
{
if (IsHeaderLocalDataExist(reportNumber, catalog, machineNumber, c) == false)
{
using (OleDbConnection openCon = new OleDbConnection(localConnectionString))
{
string query = "INSERT into [HeaderResult] ([ReportNumber],[ReportDescription],[CatalogNumber], " +
"[WorkerName], [LastCalibrationDate], [NextCalibrationDate], [MachineNumber], [EditTime], [Age]) " +
"VALUES (@report, @reportDescription, @catalog, @workerName," +
" @LastCalibrationDate, @NextCalibrationDate, @machineNumber, @edittime, @age)";
DateTime dt = DateTime.Now;
DateTime edittime = new DateTime(dt.Year, dt.Month, dt.Day, dt.Hour, dt.Minute, dt.Second);
using (OleDbCommand command = new OleDbCommand(query))
{
command.Parameters.AddWithValue("@report", reportNumber);
command.Parameters.AddWithValue("@reportDescription", reportDescription);
command.Parameters.AddWithValue("@catalog", catalog);
command.Parameters.AddWithValue("@workerName", workerName);
command.Parameters.AddWithValue("@LastCalibrationDate", c.LastCalibrationDate);
command.Parameters.AddWithValue("@NextCalibrationDate", c.NextCalibrationDate);
command.Parameters.AddWithValue("@machineNumber", machineNumber);
command.Parameters.AddWithValue("@edittime", edittime);
command.Parameters.AddWithValue("@age", age);
command.Connection = openCon;
openCon.Open();
int recordsAffected = command.ExecuteNonQuery();
openCon.Close();
}
}
}
}
Using the method you're showing here, you're adding one row at a time. The server opens a connection to the db, the data is written to memory, then to the physical file (mdb), and then the indexes are updated. So that's a full four steps per row you execute. Worse, the write to the physical file is time consuming.
If you use a different approach, you can do these four steps (connection, memory, data write, re-index) once for the entire set of data you're inserting. So if you're adding 1000 records, rather than 4000 steps (4x1000), you could reduce the processing to roughly 1003 steps (1 connection, 1000 fast in-memory writes, 1 data file write, 1 index revision).
The following code gives the rough idea of what I'm talking about:
class Program
{
static void Main(string[] args)
{
//memory-only list for data loading
List<HeaderResult> mylist = new List<HeaderResult>(){ new HeaderResult("report1","desc of report","ete"), new HeaderResult("report2", "desc of report2", "ete2")};
var tableForInsert = new DataTable();
using (SqlDataAdapter dataAdapter = new SqlDataAdapter("SELECT * FROM HeaderResult", "my connection string")) {
//the command builder generates the INSERT command the adapter will use on Update()
SqlCommandBuilder commandBuilder = new SqlCommandBuilder(dataAdapter);
dataAdapter.Fill(tableForInsert);
//now I have a live copy of the table into which I want to insert data
//blast in the data (values in the same column order as the table)
foreach (HeaderResult hr in mylist) {
tableForInsert.Rows.Add(hr.Report, hr.ReportDescription, hr.Etc);
}
//now all the data is written at once and sql will take care of the indexes after the data's written
dataAdapter.Update(tableForInsert);
}
}
//class should have the same fields (in the same order) as your table
class HeaderResult
{
public string Report { get; }
public string ReportDescription { get; }
public string Etc { get; }
public HeaderResult(string rpt, string desc, string e)
{
Report = rpt;
ReportDescription = desc;
Etc = e;
}
}
}
I have two standard projects in a solution: the UI and the Logic.
As usual, you need to take the inputs from the UI and do whatever you want with them in the back-end part.
So in the UI class, I have this
private void btnAddItems_Click(object sender, RoutedEventArgs e)
{
item_name = lbl_item_name.Text;
item_quantity = lbl_item_quantity.Text;
store_ime = store_Name.Text;
logika.storeInDb(store_ime, item_name, item_quantity);
}
It just stores the inputs in variables and then sends them to this:
public void storeInDb(string store_name, string item_name, string item_quantity)
{
using (MySqlConnection mySqlConn = new MySqlConnection(Logic.connStr))
{
dbInsert($"INSERT INTO soping(store_name, item_name, item_quantity, payment_type, date) VALUES('{store_name}', '{item_name}', '{item_quantity}', 'visa', 'danas')");
}
}
And this is the dbInsert method
public void dbInsert(string query)
{
using (MySqlConnection mySqlConn = new MySqlConnection(Logic.connStr))
{
try
{
mySqlConn.Open();
MySqlCommand cmd = new MySqlCommand(query, mySqlConn);
cmd.ExecuteNonQuery();
mySqlConn.Close();
}
catch (MySqlException e)
{
System.Diagnostics.Debug.WriteLine(e);
}
}
}
It doesn't store anything. And when I use breakpoints, it seems like the button method runs after storeInDb, even though the variables in the query are perfectly fine. I can't find anything wrong with the code that would make it behave weirdly like this.
This code has some issues:
1- You should use parameters instead of concatenating values directly into your SQL query;
2- You don't need a connection outside your dbInsert method (the one opened in storeInDb is never used)
However, this code should work. I guess the problem you are having is located elsewhere, not in the code you posted here. It may be something simpler, maybe a connection string problem (saving to a different place than you expect) or bad use of threads... maybe deadlocks, long processing, or something like that (the only way I can think of for the button click to apparently happen after the code it calls).
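On the first point, a parameterized version of storeInDb might look like this (a sketch based on the code above, keeping the table and column names as posted):

```csharp
public void storeInDb(string store_name, string item_name, string item_quantity)
{
    using (MySqlConnection mySqlConn = new MySqlConnection(Logic.connStr))
    using (MySqlCommand cmd = new MySqlCommand(
        "INSERT INTO soping(store_name, item_name, item_quantity, payment_type, date) " +
        "VALUES(@store_name, @item_name, @item_quantity, 'visa', 'danas')", mySqlConn))
    {
        // Parameters avoid SQL injection and quoting/escaping problems.
        cmd.Parameters.AddWithValue("@store_name", store_name);
        cmd.Parameters.AddWithValue("@item_name", item_name);
        cmd.Parameters.AddWithValue("@item_quantity", item_quantity);

        mySqlConn.Open();
        cmd.ExecuteNonQuery();
    }
}
```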
I am confused about where to split my code to make it n-tier.
While learning n-tier, I have learned how to insert, delete, and update.
But now I am confused about how to deal with SqlDataReader to bind data to a list box and combo box.
This code works in my presentation layer, but I don't know how to split it into DataAccess, BusinessObject, and BusinessLogic layers.
FormLoad
{
getlistview();
cboStatus();
}
#region "fill listview"
public void GetlistView()
{
int i = 0;
SqlConnection sqlcon = new SqlConnection(connStr);
lstBF.Items.Clear();
SqlCommand sqlcom = new SqlCommand("sp_LoadNew", sqlcon);
SqlDataReader dr;
lstBF.Items.Clear();
sqlcon.Open();
dr = sqlcom.ExecuteReader();
while (dr.Read())
{
lstBF.Items.Add(dr["SerialNumber"].ToString());
lstBF.Items[i].SubItems.Add(dr["PartNumber"].ToString());
lstBF.Items[i].SubItems.Add(dr["StatusDescription"].ToString());
lstBF.Items[i].SubItems.Add(dr["CustomerName"].ToString());
lstBF.Items[i].SubItems.Add(dr["DateCreated"].ToString());
lstBF.Items[i].SubItems.Add(dr["CreatedBy"].ToString());
lstBF.Items[i].SubItems.Add(dr["ModifiedBy"].ToString());
i = i + 1;
}
if (sqlcon.State == ConnectionState.Open) sqlcon.Close();
}
#endregion
#region "ListviewChange"
private void lstBF_SelectedIndexChanged(object sender, EventArgs e)
{
if (lstBF.SelectedItems.Count == 1)
{
txtSerialNumber.Text = lstBF.SelectedItems[0].Text;
txtPartNumber.Text = lstBF.SelectedItems[0].SubItems[1].Text;
lblStatus.Text = lstBF.SelectedItems[0].SubItems[2].Text;
lblcustomer.Text = lstBF.SelectedItems[0].SubItems[3].Text;
lblModifiedBy.Text = lstBF.SelectedItems[0].SubItems[6].Text;
}
}
#endregion
#region "FILL combo"
public void cboStatus()
{
try
{
SqlConnection conn = new SqlConnection(connStr);
SqlCommand sqlcom = new SqlCommand("sp_loadStatus",conn);
SqlDataReader dr = null;
conn.Open();
dr = sqlcom.ExecuteReader();
cmbStatus.Items.Clear();
while (dr.Read())
{
cmbStatus.Items.Add((dr["StatusDescription"]));
}
if (conn.State == ConnectionState.Open) conn.Close();
}
catch (Exception ex)
{
MessageBox.Show("Error Occurred:" + ex);
}
finally
{
}
}
#endregion
If you want to have a nice, clean separation, here's what you should do:
never ever pass something like a SqlDataReader or any other database-dependent object up from your data layer - encapsulate everything in your data layer, and from there on up, use your own domain model (classes)
the data layer should turn your database requests into objects of your domain model. You can definitely do that by hand - but it's a lot of boring and error-prone code to do all the DataReader, read-each-row, convert-to-object kind of stuff - here, a tool called an OR mapper (object-relational mapper) can help tremendously, since it does all of this for you - more or less for free. Check out SubSonic, Linq-to-SQL and quite a few others out there.
for things like combobox lookup lists, you would typically design a "view model", e.g. a class for that "view" (or webform, or winform) that will hold the data that this view is supposed to a) show, and b) needs for its job. Typically, such a "view model" is just another class - no magic about it. It will contain one or several of your domain model classes (the actual data you want to show), and one or several lookup lists that contain the possible values for all the dropdowns etc.
With this approach, you should be fine and well on track to a good solid design, and by using an ORM, you can save yourself a ton of boring code and concentrate on the more interesting parts of your app.
Update:
Sample for binding your combo box:
create a class for your lookup values, typically something like:
public class StatusCode
{
public int ID { get; set; }
public string Description { get; set; }
}
have a method in your data layer to retrieve all values from your StatusCode table into a List<StatusCode>
public List<StatusCode> GetAllStatusCodes();
have your combo box in the UI bound to that list:
cbxStatusCode.DataSource = statusCodeList;
cbxStatusCode.DisplayMember = "Description";
cbxStatusCode.ValueMember = "ID";
Note: this is slightly different depending on whether you use Winforms or ASP.NET webforms.
There you have it!
One place you could start is by using Entity Framework or a class generator like SubSonic.
Watch this podcast, follow it through, and have a look at the code it creates for you:
http://www.techscreencast.com/language/dotnet/subsonic-getting-started-webcast/227