Multiple stored procedure calls or loop through an array? - c#

I am currently setting the tooltips on a report grid based on values stored in a table. I do this because I have a LOT of grids and a lot of tooltips and this makes it easy to manage them all from one place without updating source code.
My question: is it faster to load the tooltips in this fashion, or to load all of the tooltips at once and loop through an array?
It seems that one SP call for all of the tooltips would be faster than 10-20 calls. Is this assumption correct? If so, can I see an example of how you'd do this with an array or list?
sqlconn.Open();
SqlCommand com = new SqlCommand("sp_ToolTipLookup", sqlconn) { CommandType = System.Data.CommandType.StoredProcedure };
SqlParameter pFieldName = new SqlParameter("@FieldName", "");
for (int i = 0; i < rptgrid.Columns.Count; i++)
{
pFieldName.Value = rptgrid.Columns[i].ToString();
com.Parameters.Add(pFieldName); //adding the field name to the SP
SqlDataReader data = com.ExecuteReader(); //Open the SP
if (data.Read()) rptgrid.Columns[i].ToolTip = data["ToolTip"].ToString(); //If there is a resulting Tooltip, apply it to the grid
data.Close();
com.Parameters.Remove(pFieldName);
}
sqlconn.Close();
An example using a list would be more like this (and if this is faster, I could potentially load the list once per session and just store it in memory).
sqlconn.Open();
SqlCommand com = new SqlCommand("Select * from ToolTips", sqlconn) { CommandType = System.Data.CommandType.Text };
SqlDataReader data = com.ExecuteReader();
List<ToolTip> tips = new List<ToolTip>();
while (data.Read())
{
tips.Add(new ToolTip { fieldname = data["FieldName"].ToString(), tooltip = data["ToolTip"].ToString() } );
}
for (int i = 0; i < rptgrid.Columns.Count; i++) //Changed to visible column to speed it up a bit.
{
for (int x = 0; x < tips.Count; x++)
{
if (rptgrid.Columns[i].Name == tips[x].fieldname)
{
rptgrid.Columns[i].ToolTip = tips[x].tooltip;
}
}
}
data.Close();
sqlconn.Close();

The stored procedure sp_ToolTipLookup must return at least the ToolTip and FieldName columns, and you have to remove the filter on the field name from its WHERE clause.
string connectionString = ... //web|app.config
using (SqlConnection sqlconn = new SqlConnection(connectionString))
{
    using (SqlCommand com = new SqlCommand("sp_ToolTipLookup", sqlconn))
    {
        com.CommandType = System.Data.CommandType.StoredProcedure;
        sqlconn.Open();
        using (SqlDataReader data = com.ExecuteReader()) //Call the SP
        {
            while (data.Read())
            {
                foreach (var col in rptgrid.VisibleColumns)
                {
                    if (col.Name == data["FieldName"].ToString())
                    {
                        rptgrid.VisibleColumns[col.Index].ToolTip = data["ToolTip"].ToString();
                    }
                }
            }
        }
    }
}
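As a variant of the same idea, you could also load the result into a dictionary keyed by field name so that applying the tooltips is a single pass over the columns rather than a nested loop. A rough sketch, assuming the same ToolTips table (FieldName, ToolTip) and the grid members (Columns, Name, ToolTip) used in the question:
// Sketch: load every tooltip once into a dictionary, then apply them in one pass.
var tooltips = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
using (SqlConnection sqlconn = new SqlConnection(connectionString))
using (SqlCommand com = new SqlCommand("SELECT FieldName, ToolTip FROM ToolTips", sqlconn))
{
    sqlconn.Open();
    using (SqlDataReader data = com.ExecuteReader())
    {
        while (data.Read())
            tooltips[data["FieldName"].ToString()] = data["ToolTip"].ToString();
    }
}

// Apply in one pass: O(columns) lookups instead of O(columns * tooltips).
for (int i = 0; i < rptgrid.Columns.Count; i++)
{
    string tip;
    if (tooltips.TryGetValue(rptgrid.Columns[i].Name, out tip))
        rptgrid.Columns[i].ToolTip = tip;
}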

Correct, one SP call loading all of the tooltips would be faster, provided that the SP is designed efficiently.

Since the tooltips are likely not going to change while the application is running, I would recommend loading them into your application as a public static property of your Main or Program class, or whatever the root class of your app is. This makes the tooltips available to the entire application and avoids different parts of the app having to make separate database calls to get their tooltips. I'd also put a time check in the property's get accessor so that the data is refreshed every few hours.
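A minimal sketch of that idea, assuming the single load-everything query is wrapped in a hypothetical LoadToolTips() helper that returns a Dictionary<string, string> keyed by field name:
// Sketch of an application-wide tooltip cache with a periodic refresh.
// LoadToolTips() is a hypothetical helper that runs the single "load everything" query.
public static class ToolTipCache
{
    private static Dictionary<string, string> _tips;
    private static DateTime _loadedAt;
    private static readonly TimeSpan RefreshInterval = TimeSpan.FromHours(2);
    private static readonly object _lock = new object();

    public static Dictionary<string, string> Tips
    {
        get
        {
            lock (_lock)
            {
                if (_tips == null || DateTime.UtcNow - _loadedAt > RefreshInterval)
                {
                    _tips = LoadToolTips();   // one query for all tooltips
                    _loadedAt = DateTime.UtcNow;
                }
                return _tips;
            }
        }
    }
}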

Ok so to summarize:
The answer is a combination of the three answers received so far.
One SP to load all ToolTips
Do this once and make them available to the application.
Thanks guys. I wish I could select more than one correct answer.

How can I retrieve data from my PostgreSQL database? WPF/C#

For reference, I am new to C#/WPF/PostgreSQL and I am trying to create a project for practice, but I've hit a bit of a roadblock. I found the question "Retrieving data from database in WPF Desktop application" earlier and tried following along with the answers using my own code (I understand it isn't 1 to 1), but it didn't work in my case.
I am creating a simple recipe app where a user can create a recipe (e.g., put in the title, steps, things they need, etc.), and on the home screen they can see a link to each saved recipe which, when clicked, takes them to the Recipe Screen where it is displayed. I am using PostgreSQL for my database, and I do see the correct information there after the user submits all of the necessary info; I just need to retrieve it and put it in a data grid, possibly? Unless there is a better way than a data grid.
Regardless, I plan to show it as a list of just the recipe titles, where a user can click one to load that recipe's page, but that's something I can tackle another time if it is outside the scope of my question.
Here is my code for the submit button on the Create Screen, in case it helps; however, I have no idea what to do in terms of actually retrieving that data and then displaying it on my Home Screen.
private static NpgsqlConnection GetConnection()
{
return new NpgsqlConnection(@"Server=localhost;Port=5432;User Id=postgres;Password=123;Database=RecipeProj;");
}
private void SubmitButton_Click(object sender, RoutedEventArgs e)
{
Recipe recipe = new Recipe();
recipe.Title = TitleBox.Text;
recipe.Step1 = StepBox1.Text;
recipe.Step2 = StepBox2.Text;
recipe.Step3 = StepBox3.Text;
recipe.Step4 = StepBox4.Text;
recipe.Step5 = StepBox5.Text;
recipe.Step6 = StepBox6.Text;
recipe.Ingredients = IngredientBox.Text;
recipe.Tools = ToolBox.Text;
recipe.Notes = NoteBox.Text;
void InsertRecord()
{
using (NpgsqlConnection con = GetConnection())
{
string query = @"insert into public.Recipes(Title, Ingredients, Tools, Notes, StepOne, StepTwo, StepThree, StepFour, StepFive, StepSix)
values(@Title, @Ingredients, @Tools, @Notes, @StepOne, @StepTwo, @StepThree, @StepFour, @StepFive, @StepSix)";
NpgsqlCommand cmd = new NpgsqlCommand(query, con);
cmd.Parameters.AddWithValue("@Title", recipe.Title);
cmd.Parameters.AddWithValue("@Ingredients", recipe.Ingredients);
cmd.Parameters.AddWithValue("@Tools", recipe.Tools);
cmd.Parameters.AddWithValue("@Notes", recipe.Notes);
cmd.Parameters.AddWithValue("@StepOne", recipe.Step1);
cmd.Parameters.AddWithValue("@StepTwo", recipe.Step2);
cmd.Parameters.AddWithValue("@StepThree", recipe.Step3);
cmd.Parameters.AddWithValue("@StepFour", recipe.Step4);
cmd.Parameters.AddWithValue("@StepFive", recipe.Step5);
cmd.Parameters.AddWithValue("@StepSix", recipe.Step6);
con.Open();
int n = cmd.ExecuteNonQuery();
if (n == 1)
{
MessageBox.Show("Record Inserted");
TitleBox.Text = IngredientBox.Text = ToolBox.Text = NoteBox.Text = StepBox1.Text = StepBox2.Text = StepBox3.Text = StepBox4.Text = StepBox5.Text = StepBox6.Text = null;
}
con.Close();
}
}
InsertRecord();
}
NpgsqlConnection con = GetConnection(); // reuse the GetConnection() helper from your code
string query = @"select * from Recipes";
NpgsqlCommand cmd = new NpgsqlCommand(query, con);
con.Open();
var reader = cmd.ExecuteReader();
var recipes = new List<Recipe>();
while(reader.Read()){
//Recipe is just a POCO that represents an entire
//row inside your Recipes table.
var recipe = new Recipe(){
Title = reader.GetString(reader.GetOrdinal("Title")),
//So on and so forth.
//...
};
recipes.Add(recipe);
}
con.Close();
You can use this same exact query to fill in a List of titles and a DataGrid that shows all the contents of a recipe.
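If it helps, here is a rough idea of the display side (a sketch only; RecipeGrid and TitleList are hypothetical control names from your Home Screen XAML, and the titles line needs using System.Linq):
// Bind the full list to a DataGrid (it auto-generates a column per Recipe property),
// or bind just the titles to a ListBox. RecipeGrid / TitleList are hypothetical names.
RecipeGrid.ItemsSource = recipes;
TitleList.ItemsSource = recipes.Select(r => r.Title).ToList();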

How to refactor a large class that stores numerous SqlCommands in a Dictionary

I have recently started refactoring an old system designed by someone with little experience in OOP. Thankfully, (nearly) all access to the database is within a single file about 3000 lines long. That file contains a Dictionary<string, SqlCommand>, the SqlConnection, and a very long function adding every single SQL query to the dictionary like this:
cmd = new SqlCommand(null, _sqlConnection);
cmd.CommandText = "SELECT * FROM User WHERE User.UserID = @id;"; // Most queries are far from being this simple
cmd.Parameters.Add(new SqlParameter("@id", SqlDbType.Int, 0));
cmd.Prepare();
_cmds.Add("getUser", cmd);
Those queries are used by functions within that same file that would look like this:
public void deleteUser(int userId)
{
if (_cmds.TryGetValue("deleteUser", out SqlCommand cmd))
{
lock(cmd)
{
cmd.Parameters[0].Value = userId;
cmd.ExecuteNonQuery();
}
}
}
public int isConnected(int userId, out int amount)
{
bool result = false;
amount = 0;
if (_cmds.TryGetValue("userInfo", out SqlCommand cmd))
{
lock (cmd)
{
cmd.Parameters[0].Value = userId;
using (SqlDataReader reader = cmd.ExecuteReader())
{
if (reader.HasRows)
while (reader.Read())
{
amount = (int)Math.Round(reader.GetDecimal(0));
result = reader.GetInt32(1);
}
}
}
}
return result;
}
Now this is horrible to work with and maintain. I finally have the time to refactor this. I wanted to turn this into a proper DAL with repositories which would be used by services and be dependency injectable.
I don't really care to change the functions or the queries (by using an ORM, for example). What I'm more interested in is splitting the file into many files in a way that would allow me to mock, test and modify it more easily. I'm looking for a way to better structure the existing code, though I know a lot of copy/pasting and recoding will be required.
I would recommend replacing the manually written object-mapping code with an Object-Relational Mapper like NHibernate, which will save you the time and effort of creating and maintaining a data access layer.
Check out Dapper. It is a "micro-ORM" and offers high-performance object-oriented data access. You can continue to use all the existing queries, but replace all the boiler-plate ADO.NET code with Dapper.
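For example, one of the existing lookups might end up looking roughly like this with Dapper (a sketch; it requires the Dapper NuGet package and "using Dapper;", and User here is a hypothetical POCO whose properties match the User table's columns):
// Sketch: the same "getUser" query via Dapper's Query extension methods.
public User GetUser(int userId)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        return connection.QueryFirstOrDefault<User>(
            "SELECT * FROM User WHERE User.UserID = @id;",
            new { id = userId });
    }
}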
This is going to take some repetitive work, but here are a few ideas on how to get a handle on it. This won't put the code in some ideal state, but might make it a little bit more manageable. One challenge is that every method has parts in two places - one in the method and one where the command is stored in the dictionary.
Don't add any more SQL to this class, ever. Begin defining and using the new repositories you want.
Being able to mock it is easy, too. You can use the extract interface refactoring to create an interface so that you can mock this class, even in its current form. That's going to be a big, ugly interface, but at least you can mock methods if you need to.
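For example, even a partial interface covering just the methods a given test needs is enough to start with (a sketch; the interface name is made up, the signatures come from the snippets above):
// Sketch of an extracted interface so the existing class can be mocked as-is.
public interface IUserData
{
    void deleteUser(int userId);
    int isConnected(int userId, out int amount);
}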
That's the easy part. How can the entire class be refactored without breaking any one part of it? These steps are just some ideas:
A first step is just to inject the connection string the class needs:
public class YourDataAccessClass
{
private readonly string _connectionString;
public YourDataAccessClass(string connectionString)
{
_connectionString = connectionString;
}
}
You'll use it one method at a time. Initially you can leave most of the class, including the dictionary, as-is. That way the methods you haven't modified will continue to work.
Next, you could open up the class in two separate windows so that you can see the dictionary function that contains the SQL and the functions that use it side-by-side. This will be a lot harder if you have to scroll back up and down.
You'll likely want to move the SQL for each function into that function. You could do this as you refactor each function, but it might be less painful to do it all at once so that you gain efficiency from repetition.
You could define a new variable in each function and copy and paste:
var sql = "SELECT * FROM User WHERE User.UserID = #id;";
(Again, not the way I'd normally write this.)
Now you've got a function or 100 functions that look like this:
public void deleteUser(int userId)
{
var sql = "DELETE User WHERE User.UserID = #id;";
if (_cmds.TryGetValue("deleteUser", out SqlCommand cmd))
{
lock(cmd)
{
cmd.Parameters[0].Value = userId;
cmd.ExecuteNonQuery();
}
}
}
For the non-query commands you could write a function like this in your class which will eliminate the repetitive code to open a connection, create a command, etc:
private void ExecuteNonQuery(string sql, Action<SqlCommand> addParameters = null)
{
using (var connection = new SqlConnection(_connectionString))
using (var command = new SqlCommand(sql, connection))
{
addParameters?.Invoke(command);
connection.Open();
command.ExecuteNonQuery();
}
}
Save the following snippet of code. You might even just be able to keep it in the clipboard most of the time. Paste it into each one of your non-query methods right beneath the SQL:
ExecuteNonQuery(sql, cmd =>
{
});
After you paste it, move the line or lines that add parameters into the body of the cmd argument (which is named cmd so that you can move the lines without changing the variable name) and then delete the existing code that executed the query previously.
ExecuteNonQuery(sql, cmd =>
{
cmd.Parameters[0].Value = userId;
});
Now your function looks like this:
public void deleteUser(int userId)
{
var sql = "DELETE User WHERE User.UserID = #id;";
ExecuteNonQuery(sql, cmd =>
{
cmd.Parameters[0].Value = userId;
});
}
I'm not saying that's fun, but it will make the process of editing those functions more efficient since you're typing less and just moving things around in exactly the same way over and over.
The ones that actually return data are less fun, but still manageable.
First, take pretty much the same boilerplate code. This could likely be improved because it's still a little repetitive, but at least it's more self-contained:
using (var connection = new SqlConnection(_connectionString))
using (var cmd = new SqlCommand(sql, connection)) // again, named "cmd" on purpose
{
connection.Open();
}
Starting with this:
public int isConnected(int userId, out int name)
{
var sql = "SELECT * FROM User WHERE User.UserID = #id;";'
bool result = false;
amount = 0;
if (_cmds.TryGetValue("userInfo", out SqlCommand cmd))
{
lock (cmd)
{
cmd.Parameters[0].Value = userId;
using (SqlDataReader reader = cmd.ExecuteReader())
{
if (reader.HasRows)
while (reader.Read())
{
amount = (int)Math.Round(reader.GetDecimal(0));
result = reader.GetInt32(1);
}
}
}
}
}
Paste your boilerplate into the method:
public int isConnected(int userId, out int name)
{
var sql = "SELECT * FROM User WHERE User.UserID = #id;";'
bool result = false;
amount = 0;
using (var connection = new SqlConnection(_connectionString))
using (var cmd = new SqlCommand(sql, connection)) // again, named "cmd" on purpose
{
connection.Open();
}
if (_cmds.TryGetValue("userInfo", out SqlCommand cmd))
{
lock (cmd)
{
cmd.Parameters[0].Value = userId;
using (SqlDataReader reader = cmd.ExecuteReader())
{
if (reader.HasRows)
while (reader.Read())
{
amount = (int)Math.Round(reader.GetDecimal(0));
result = reader.GetInt32(1);
// was this a typo? The code in the question doesn't
// return anything or set the "out" variable. But
// if that's in the method then that will be part of
// what gets copied.
}
}
}
}
}
Then, just like before, move the part where you add your parameters above connection.Open(); and move the part where you use the command just beneath connection.Open(); and delete what's left. The result is this:
public int isConnected(int userId, out int name)
{
var sql = "SELECT * FROM User WHERE User.UserID = #id;";'
bool result = false;
amount = 0;
using (var connection = new SqlConnection(_connectionString))
using (var cmd = new SqlCommand(sql, connection)) // again, named "cmd" on purpose
{
cmd.Parameters[0].Value = userId;
connection.Open();
using (SqlDataReader reader = cmd.ExecuteReader())
{
if (reader.HasRows)
while (reader.Read())
{
amount = (int)Math.Round(reader.GetDecimal(0));
result = reader.GetInt32(1);
}
}
}
}
You can probably get into a groove and do these in a minute or two each, which means that it will only take a few hours.
Once all of this is done you can delete your massive dictionary function. Now the class depends on an injected connection string and opens and closes connections normally instead of storing a connection and using it over and over.
You can also break it up. One way is to move the connection string and the helper function into a base class (or just duplicate the helper function - it's really small) and you can move any of the query functions into a smaller class because each function is self-contained.
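As a rough sketch of that split (class names are made up; the helper is the same one shown earlier):
// Sketch: shared plumbing in a small base class, query methods grouped into focused repositories.
public abstract class SqlRepositoryBase
{
    protected readonly string ConnectionString;

    protected SqlRepositoryBase(string connectionString)
    {
        ConnectionString = connectionString;
    }

    protected void ExecuteNonQuery(string sql, Action<SqlCommand> addParameters = null)
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            addParameters?.Invoke(command);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}

public class UserRepository : SqlRepositoryBase
{
    public UserRepository(string connectionString) : base(connectionString) { }

    public void DeleteUser(int userId)
    {
        ExecuteNonQuery("DELETE User WHERE User.UserID = @id;",
            cmd => cmd.Parameters.AddWithValue("@id", userId));
    }
}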

Autocomplete textbox in layered architecture

I want to create an autocomplete textbox with my database.
I'm programming my application in a layered architecture (models, DAL, BLL, Presentation).
I've already made a method that reads the results of my SELECT command from the database into an ArrayList and returns it, and it is being filled correctly (I've tested it with a ComboBox).
But when I type into the textbox, nothing happens... it doesn't show the suggestion.
I looked for something in the forum, but I only found examples with one layer, and since I'm developing in layers I cannot fill the AutoCompleteStringCollection property in my DAL directly from my select command.
If anyone has any idea how to solve this problem, please explain it to me!
Additional information: I'm using winForm with C# and SQL Server.
I think you meant to say: "But when I type into the textbox, nothing happens... it doesn't show the suggestion."
Well, I cannot code all the layers here, but my suggestion is: in your DAL, create a method that returns a List, and then on your form use code like this:
txtName.AutoCompleteMode = AutoCompleteMode.Suggest;
txtName.AutoCompleteSource = AutoCompleteSource.CustomSource;
var autoCompleteCollection = new AutoCompleteStringCollection();
autoCompleteCollection.AddRange(DAL.GetMethod().ToArray());
txtName.AutoCompleteCustomSource = autoCompleteCollection;
Thanks for the help!!
I used your suggestion, made some small changes, and it works just fine for me.
It turns out that the only problem was my method's return type; once I changed it to a List<string>, things got better.
For anyone who is wondering, here is how I did it:
DAL LAYER:
public List<string> LoadList()
{
    List<string> tagsList = new List<string>();
    using (SqlConnection connection = new SqlConnection(ADados.StringDeConexao))
    {
        connection.Open();
        using (SqlCommand command = connection.CreateCommand())
        {
            command.CommandText = "SELECT column FROM table";
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    if (!reader.IsDBNull(0))
                        tagsList.Add(reader.GetString(0));
                }
                reader.Close();
            }
            connection.Close();
            return tagsList;
        }
    }
}
PRESENTATION LAYER (Event TextChanged):
PedidoBLL pedido = new PedidoBLL();
txtName.AutoCompleteMode = AutoCompleteMode.Suggest;
txtName.AutoCompleteSource = AutoCompleteSource.CustomSource;
AutoCompleteStringCollection popula = new AutoCompleteStringCollection();
popula.AddRange(pedido.LoadList().ToArray());
txtName.AutoCompleteCustomSource = popula;
In the BLL layer I just call and return the DAL method LoadList...
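For completeness, that pass-through can be as small as this (a sketch; PedidoDAL is whatever your DAL class is actually called):
// BLL layer: simply delegates to the DAL. PedidoDAL is a hypothetical DAL class name.
public class PedidoBLL
{
    public List<string> LoadList()
    {
        return new PedidoDAL().LoadList();
    }
}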

How to show data using datagrid with C# + SQL Server

I want to ask for more help showing data from SQL Server in WinForms using a datagrid.
I've created a datagrid, and the stored procedure to show the data is:
ALTER PROC [dbo].[SP_GetData]
AS
SELECT nama , nim
FROM tabledata
and I've created this function in C# to access the database and call the stored procedure:
string Sp_Name = "dbo.SP_GetData";
SqlConnection SqlCon = new SqlConnection("Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=DBMahasiswa;Data Source=.");
SqlCon.Open();
SqlCommand SqlCom = new SqlCommand(Sp_Name , SqlCon);
SqlCom.CommandType = CommandType.StoredProcedure;
List<mahasiswaData> listMahasiswa = new List<mahasiswaData>();
using (SqlDataReader sqlDataReader = SqlCom.ExecuteReader())
{
if (sqlDataReader.HasRows)
{
while (sqlDataReader.Read())
{
mahasiswaData DataMhs = new mahasiswaData();
DataMhs.Nama = sqlDataReader["Name"].ToString();
DataMhs.Umur = Convert.ToInt32(sqlDataReader["Age"]);
listMahasiswa.Add(DataMhs);
}
}
}
SqlCon.Close();
return listMahasiswa;
and finally, in the show button I add this code
dgvmahasiswa.DataSource = new MahasiswaDB().LoadMahasiswa();
Could somebody tell me where the fault is, or suggest an alternative?
Thank You So Much! :D
Some things to think about:
At the moment, if your code runs into exceptions, you'll leave a SqlConnection hanging around; you've used the using pattern for your SqlDataReader, and you should extend it to all of your disposable objects.
You are swallowing exceptions; if your query fails, the connection cannot be made, or something else happens, you'll never really know - your function will just return null.
Is it possible for name or age to be null? Age to be non-numeric? There's no test for any unexpected values, which you'll also never know about.
If you don't have any records, you'll return an empty list. Is this desired? Or would you prefer to know there were no records?
You might prefer to look at something like this:
public List<mahasiswaData> GetData(){
List<mahasiswaData> gridData = new List<mahasiswaData>();
try{
using(SqlConnection conn = new SqlConnection("Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=DBMahasiswa;Data Source=."))
{
using(SqlCommand command = new SqlCommand())
{
command.Connection = conn;
command.CommandType = CommandType.StoredProcedure;
command.CommandText = "dbo.SP_GetData";
conn.Open(); // the connection must be opened before ExecuteReader is called
using(SqlDataReader reader = command.ExecuteReader())
{
if(reader.HasRows){
while(reader.Read())
{
object rawName = reader.GetValue(reader.GetOrdinal("Name"));
object rawAge = reader.GetValue(reader.GetOrdinal("Age"));
if(rawName == DBNull.Value || rawAge == DBNull.Value)
{
//Use logging to indicate name or age is null and continue onto the next record
continue;
}
//Use the object initializer syntax to create a mahasiswaData object inline for simplicity
gridData.Add(new mahasiswaData()
{
Nama = Convert.ToString(rawName),
Umur = Convert.ToInt32(rawAge)
});
}
}
else{
//Use logging or similar to record that there are no rows. You may also want to raise an exception if this is important.
}
}
}
}
}
catch(Exception e)
{
//Use your favourite logging implementation here to record the error. Many projects use log4Net
throw; //Throw the error - display and explain to the end user or caller that something has gone wrong!
}
return gridData;
}
Note that if you are sure that age or name will never be null then you can simplify the middle section:
while (reader.Read())
{
//Use the object initializer syntax to create a mahasiswaData object inline for simplicity
gridData.Add(new mahasiswaData()
{
Nama = reader.GetString(reader.GetOrdinal("Name")),
Umur = reader.GetInt32(reader.GetOrdinal("Age"))
});
}
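The show button then stays essentially the same as before; assuming GetData lives in your MahasiswaDB class, it would be something like:
// Assumes GetData() is a member of your MahasiswaDB class.
dgvmahasiswa.DataSource = new MahasiswaDB().GetData();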

using winforms to insert data in sql database, drawback of opening connection frequently

I am developing a front-end sales application.
Is this an efficient way of inserting data multiple times into a sql table, from a single button:
private void button1_Click(object sender, EventArgs e)
{
c.Open();
string w = "insert into checkmultiuser(username) values (#username)";
SqlCommand cmd = new SqlCommand(w, c);
cmd.Parameters.Add("#username", SqlDbType.VarChar);
cmd.Parameters["#username"].Value = textBox1.Text;
//cmd.ExecuteNonQuery();
cmd.ExecuteReader();
c.Close();
}
What are its drawbacks? One would be that the connection is opened and closed again and again each time the button is clicked, which would greatly affect the speed.
You are doing it the right way; see this question too: to close connection to database after i use or not?
Perhaps don't do the database insert for each entry, but store each entry in a DataSet, then insert them all at once, a la a save button.
For each entry do this:
String s = textBox1.Text;
if ( /* entry validation logic */ )
{
    //Insert data into the DataSet
}
else
{
    //Throw an error for the user.
}
Then once you're ready to commit to DB, insert each item from the DataSet, similar to the examples in the other answers here.
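A rough sketch of that idea, using a DataTable (which is what a DataSet holds) with a single username column; connectionString is assumed to come from your config:
// Sketch: collect entries in memory, then insert them all in one pass when the user saves.
DataTable pending = new DataTable();
pending.Columns.Add("username", typeof(string));

// ... for each entry that passes validation:
pending.Rows.Add(textBox1.Text);

// ... when the save button is clicked:
using (SqlConnection c = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("insert into checkmultiuser(username) values (@username)", c))
{
    cmd.Parameters.Add("@username", SqlDbType.VarChar);
    c.Open();
    foreach (DataRow row in pending.Rows)
    {
        cmd.Parameters["@username"].Value = row["username"];
        cmd.ExecuteNonQuery();
    }
}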
I would open the connection once when the form opens and re-use that connection until the form is closed.
As for inserting records, the code you have is right.
From a resource management point of view it would be better if you can work out how many times you need to insert the data and then perform the operation in the one button click, perhaps iterating through a loop until the correct amount of insert operations has been completed. This means you are not constantly opening and closing the connection with each button press but instead opening it, performing the insert queries and closing the connection.
Also I recommend that you implement your code with the "using" statement, this way it will automatically handle the disposal and release of resources.
private void button1_Click(object sender, EventArgs e, string[] value)
{
    try
    {
        string w = "insert into checkmultiuser(username) values (@username)";
        using (SqlConnection c = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(w, c))
        {
            c.Open();
            cmd.Parameters.Add("@username", SqlDbType.VarChar);
            for (int i = 0; i < value.Length; i++)
            {
                cmd.Parameters["@username"].Value = value[i];
                // ExecuteNonQuery is the usual call for an INSERT; ExecuteReader would
                // leave an open reader on the connection and block the next iteration.
                cmd.ExecuteNonQuery();
            }
        }
    }
    catch (Exception ex) // renamed from "e" so it doesn't clash with the EventArgs parameter
    {
        Console.WriteLine(ex.Message);
    }
}
If you can create the SQLConnection in the method then it will also allow you create it in a using statement, again taking care of managing and releasing resources.
In terms of the statement you are using I can't see any problems with it, you're using parameterized queries which is a good step to take when interacting with SQL databases.
References:
try-catch - MSDN
I don't think you should have to worry about the time lag due to opening and closing a connection, particularly if it is happening on a manually triggered button click event. Human perceivable response time is about 200 milliseconds. At best, I'd guess someone could click that button once every 100 milliseconds or so. Plenty of time to open and close a connection.
If, however, you are dealing with a routine that will be connecting to your database, you could pass in the connection, include a using statement as Mr. Keeling mentioned already, and just verify that it is ready.
Here is yet another approach, which returns a DataTable (since your original post displayed executing a Data Reader):
public static DataTable UpdateRoutine(SqlConnection c, string value) {
    const string w = "insert into checkmultiuser(username) values (@username)";
    DataTable table = new DataTable();
    using (SqlCommand cmd = new SqlCommand(w, c)) {
        cmd.Parameters.Add("@username", SqlDbType.VarChar);
        cmd.Parameters["@username"].Value = value;
        try {
            if ((cmd.Connection.State & ConnectionState.Open) != ConnectionState.Open) {
                cmd.Connection.Open();
            }
            using (SqlDataReader r = cmd.ExecuteReader()) {
                table.Load(r);
            }
            return table;
        } catch (SqlException err) { // I try to avoid catching a general exception.
            MessageBox.Show(err.Message, "SQL Error");
        }
        return null;
    }
}
