How can I improve this method so that it works with multiple tables?
public void ExecuteStoredProcedure(string StoredProcedureName)
{
    using (var connection = new SqlConnection(provider.ConnectionString))
    {
        using (var command = new SqlCommand(StoredProcedureName, connection))
        {
            command.CommandType = System.Data.CommandType.StoredProcedure;
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read()) // problem is here
                {
                    Console.WriteLine(reader[0].ToString());
                }
            }
        }
    }
}
I could return the reader (but I think that means I'd have to drop my using statements). Or, I could create a factory that processes each table depending on a parameter that I add to the ExecuteStoredProcedure(). Or whatever.
How can I get the reader functionality outta here?
Use SqlDataReader.NextResult to advance to the next result set. Have a look at the article "How To Handle Multiple Results by Using the DataReader in Visual C# .NET".
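A minimal sketch of how that could look inside your existing method, keeping your code and just looping over the result sets (the per-row handling is only illustrative):
using (var connection = new SqlConnection(provider.ConnectionString))
using (var command = new SqlCommand(StoredProcedureName, connection))
{
    command.CommandType = System.Data.CommandType.StoredProcedure;
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        do
        {
            while (reader.Read())
            {
                // first column of the current result set
                Console.WriteLine(reader[0].ToString());
            }
        } while (reader.NextResult()); // advance to the next table/result set
    }
}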
Related
I have recently started refactoring an old system designed by someone with little experience in OOP. Thankfully, (nearly) all access to the database is within a single 3,000-line file. That file contains a Dictionary<string, SqlCommand>, the SqlConnection, and a very long function that adds every single SQL query to the dictionary like this:
cmd = new SqlCommand(null, _sqlConnection);
cmd.CommandText = "SELECT * FROM User WHERE User.UserID = @id;"; // Most queries are far from being this simple
cmd.Parameters.Add(new SqlParameter("@id", SqlDbType.Int, 0));
cmd.Prepare();
_cmds.Add("getUser", cmd);
Those queries are used by functions within that same file that would look like this:
public void deleteUser(int userId)
{
    if (_cmds.TryGetValue("deleteUser", out SqlCommand cmd))
    {
        lock (cmd)
        {
            cmd.Parameters[0].Value = userId;
            cmd.ExecuteNonQuery();
        }
    }
}
public bool isConnected(int userId, out int amount)
{
    bool result = false;
    amount = 0;
    if (_cmds.TryGetValue("userInfo", out SqlCommand cmd))
    {
        lock (cmd)
        {
            cmd.Parameters[0].Value = userId;
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                if (reader.HasRows)
                    while (reader.Read())
                    {
                        amount = (int)Math.Round(reader.GetDecimal(0));
                        result = reader.GetInt32(1) == 1;
                    }
            }
        }
    }
    return result;
}
Now this is horrible to work with and maintain, and I finally have the time to refactor it. I want to turn this into a proper DAL with repositories which would be used by services and could be dependency-injected.
I don't really care to change the functions or the queries (by using an ORM, for example). What I'm more interested in is splitting the file into many files in a way that would allow me to mock, test and modify it more easily. I'm looking for a way to better structure the existing code, though I know a lot of copy/pasting and recoding will be required.
I would recommend replacing the manually written object-mapping code with an Object-Relational Mapper like NHibernate, which will save you the time and effort of creating and maintaining a hand-rolled data access layer.
Check out Dapper. It is a "micro-ORM" and offers high-performance object-oriented data access. You can continue to use all the existing queries, but replace all the boilerplate ADO.NET code with Dapper.
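As a rough sketch of what that could look like with Dapper (the method name, the _connectionString field, and a User class whose properties match the columns are all assumptions here):
using Dapper;

public User GetUser(int userId)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        // Dapper opens the connection if needed, builds the command,
        // and maps the result columns onto the User properties.
        return connection.QueryFirstOrDefault<User>(
            "SELECT * FROM [User] WHERE UserID = @id;",
            new { id = userId });
    }
}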
This is going to take some repetitive work, but here are a few ideas on how to get a handle on it. This won't put the code in some ideal state, but might make it a little bit more manageable. One challenge is that every method has parts in two places - one in the method and one where the command is stored in the dictionary.
Don't add any more SQL to this class, ever. Begin defining and using the new repositories you want.
Being able to mock it is easy, too. You can use the extract interface refactoring to create an interface so that you can mock this class, even in its current form. That's going to be a big, ugly interface, but at least you can mock methods if you need to.
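For example, the extracted interface might initially be nothing more than the existing method signatures (the member names here are just illustrative):
public interface IYourDataAccessClass
{
    void deleteUser(int userId);
    bool isConnected(int userId, out int amount);
    // ...one member per existing public method, however ugly that list gets
}

public class YourDataAccessClass : IYourDataAccessClass
{
    // existing implementation, unchanged
}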
That's the easy part. How can the entire class be refactored without breaking any one part of it? These steps are just some ideas:
A first step is just to inject the connection string the class needs:
public class YourDataAccessClass
{
    private readonly string _connectionString;

    public YourDataAccessClass(string connectionString)
    {
        _connectionString = connectionString;
    }
}
You'll use it one method at a time. Initially you can leave most of the class, including the dictionary, as-is. That way the methods you haven't modified will continue to work.
Next, you could open up the class in two separate windows so that you can see the dictionary function that contains the SQL and the functions that use it side-by-side. This will be a lot harder if you have to scroll back up and down.
You'll likely want to move the SQL for each function into that function. You could do this as you refactor each function, but it might be less painful to do it all at once so that you gain efficiency from repetition.
You could define a new variable in each function and copy and paste:
var sql = "SELECT * FROM User WHERE User.UserID = #id;";
(Again, not the way I'd normally write this.)
Now you've got a function or 100 functions that look like this:
public void deleteUser(int userId)
{
    var sql = "DELETE User WHERE User.UserID = @id;";
    if (_cmds.TryGetValue("deleteUser", out SqlCommand cmd))
    {
        lock (cmd)
        {
            cmd.Parameters[0].Value = userId;
            cmd.ExecuteNonQuery();
        }
    }
}
For the non-query commands, you could write a function like this in your class, which will eliminate the repetitive code that opens a connection, creates a command, and so on:
private void ExecuteNonQuery(string sql, Action<SqlCommand> addParameters = null)
{
    using (var connection = new SqlConnection(_connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        addParameters?.Invoke(command);
        connection.Open();
        command.ExecuteNonQuery();
    }
}
Save the following snippet of code. You might even just be able to keep it in the clipboard most of the time. Paste it into each one of your non-query methods right beneath the SQL:
ExecuteNonQuery(sql, cmd =>
{
});
After you paste it, move the line or lines that set the parameters into the body of the lambda (the argument is named cmd so that you can move those lines without changing the variable name), switch them to cmd.Parameters.AddWithValue since the fresh command starts with no pre-built parameters, and then delete the existing code that executed the query previously.
ExecuteNonQuery(sql, cmd =>
{
    cmd.Parameters.AddWithValue("@id", userId);
});
Now your function looks like this:
public void deleteUser(int userId)
{
    var sql = "DELETE User WHERE User.UserID = @id;";
    ExecuteNonQuery(sql, cmd =>
    {
        cmd.Parameters.AddWithValue("@id", userId);
    });
}
I'm not saying that's fun, but it will make the process of editing those functions more efficient since you're typing less and just moving things around in exactly the same way over and over.
The ones that actually return data are less fun, but still manageable.
First, take pretty much the same boilerplate code. This could likely be improved because it's still a little repetitive, but at least it's more self-contained:
using (var connection = new SqlConnection(_connectionString))
using (var cmd = new SqlCommand(sql, connection)) // again, named "cmd" on purpose
{
    connection.Open();
}
Starting with this:
public bool isConnected(int userId, out int amount)
{
    var sql = "SELECT * FROM User WHERE User.UserID = @id;";
    bool result = false;
    amount = 0;
    if (_cmds.TryGetValue("userInfo", out SqlCommand cmd))
    {
        lock (cmd)
        {
            cmd.Parameters[0].Value = userId;
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                if (reader.HasRows)
                    while (reader.Read())
                    {
                        amount = (int)Math.Round(reader.GetDecimal(0));
                        result = reader.GetInt32(1) == 1;
                    }
            }
        }
    }
}
Paste your boilerplate into the method:
public bool isConnected(int userId, out int amount)
{
    var sql = "SELECT * FROM User WHERE User.UserID = @id;";
    bool result = false;
    amount = 0;
    using (var connection = new SqlConnection(_connectionString))
    using (var cmd = new SqlCommand(sql, connection)) // again, named "cmd" on purpose
    {
        connection.Open();
    }
    if (_cmds.TryGetValue("userInfo", out SqlCommand cmd))
    {
        lock (cmd)
        {
            cmd.Parameters[0].Value = userId;
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                if (reader.HasRows)
                    while (reader.Read())
                    {
                        amount = (int)Math.Round(reader.GetDecimal(0));
                        result = reader.GetInt32(1) == 1;
                        // was this a typo? The code in the question doesn't
                        // return anything or set the "out" variable. But
                        // if that's in the method then that will be part of
                        // what gets copied.
                    }
            }
        }
    }
}
Then, just like before, move the part where you set your parameters above connection.Open(); (again switching to cmd.Parameters.AddWithValue, since the new command has no pre-built parameters), move the part where you use the command just beneath connection.Open();, and delete what's left. The result is this:
public bool isConnected(int userId, out int amount)
{
    var sql = "SELECT * FROM User WHERE User.UserID = @id;";
    bool result = false;
    amount = 0;
    using (var connection = new SqlConnection(_connectionString))
    using (var cmd = new SqlCommand(sql, connection)) // again, named "cmd" on purpose
    {
        cmd.Parameters.AddWithValue("@id", userId);
        connection.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            if (reader.HasRows)
                while (reader.Read())
                {
                    amount = (int)Math.Round(reader.GetDecimal(0));
                    result = reader.GetInt32(1) == 1;
                }
        }
    }
    return result;
}
You can probably get into a groove and do these in a minute or two each, which means that it will only take a few hours.
Once all of this is done you can delete your massive dictionary function. Now the class depends on an injected connection string and opens and closes connections normally instead of storing a connection and using it over and over.
You can also break it up. One way is to move the connection string and the helper function into a base class (or just duplicate the helper function - it's really small) and you can move any of the query functions into a smaller class because each function is self-contained.
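A rough sketch of that split, reusing the ExecuteNonQuery helper from above (the base class and repository names are just illustrative):
public abstract class SqlRepositoryBase
{
    private readonly string _connectionString;

    protected SqlRepositoryBase(string connectionString)
    {
        _connectionString = connectionString;
    }

    protected void ExecuteNonQuery(string sql, Action<SqlCommand> addParameters = null)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            addParameters?.Invoke(command);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}

public class UserRepository : SqlRepositoryBase
{
    public UserRepository(string connectionString) : base(connectionString) { }

    public void deleteUser(int userId)
    {
        ExecuteNonQuery("DELETE User WHERE User.UserID = @id;", cmd =>
        {
            cmd.Parameters.AddWithValue("@id", userId);
        });
    }
}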
Perhaps I am too lazy, but nested usings in a method make the method look complicated and difficult to understand. Is there any way to make all IDisposable objects that were created in a method be automatically disposed when exiting the method due to an exception?
PS. Thank you for the feedback. I now know there is no such feature right now. But, theoretically, is the following hypothetical syntax plausible?
void SomeMethod()
{
    autodispose var com = new SQLCommand();
    ...
    autodispose var file = new FileStream();
    ....
    autodispose var something = CreateAnObjectInAMethod();
    ....
}
All three objects are automatically disposed when exiting the method for any reason.
No, but you can tidy it up a bit, like so...
This...
using (var conn = new SqlConnection("blah"))
{
    using (var cmd = new SqlCommand("blah"))
    {
        using (var dr = cmd.ExecuteReader())
        {
        }
    }
}
Becomes...
using (var conn = new SqlConnection("blah"))
using (var cmd = new SqlCommand("blah"))
using (var dr = cmd.ExecuteReader())
{
}
Is it possible for me to create a snippet that analyzes the current class, gets the properties of said class, and then generates a SQL data-access function that writes out, line by line, a command parameter for each property?
What I am looking for is doing something like this:
public static int Add(MyObject Message) {
    MySqlConnection connection = new MySqlConnection(MySqlConnection);
    MySqlCommand command = new MySqlCommand("Add_Message", connection);
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.AddWithValue("@IMFromUserID", Message.IMFromUserID);
    command.Parameters.AddWithValue("@IMToUserID", Message.IMToUserID);
    command.Parameters.AddWithValue("@IMMessage", Message.IMMessage);
    command.Parameters.AddWithValue("@IMTimestamp", Message.IMTimestamp);
    connection.Open();
    MySqlDataReader reader = command.ExecuteReader();
    while (reader.Read()) {
        Message.IMID = (int)reader["IMID"];
    }
    command.Dispose();
    connection.Close();
    connection.Dispose();
    return Message.IMID;
}
Basically I want the snippet to populate the entire Add function and fill in the @PropertyName and the Message.PropertyName in the command.Parameters.AddWithValue calls.
I don't think code snippets are powerful enough. Maybe ReSharper's code templates are, but I doubt it too. You could look into using T4 templates if you really need or want code generation.
Personally, I would suggest avoiding compile-time code generation altogether. You could use reflection (easy but slow) or runtime code generation (complex but fast). If performance is not a primary concern, I suggest using reflection.
public static Int32 Add<TMessage>(TMessage message)
    where TMessage : IMessageWithIMID
{
    using (var connection = new MySqlConnection(connectionString))
    using (var command = new MySqlCommand("Add_Message", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        // We look only at public instance properties but you can easily
        // change this and even use a custom attribute to control which
        // properties to include.
        var properties = typeof(TMessage).GetProperties(BindingFlags.Public |
                                                         BindingFlags.Instance);
        foreach (var property in properties)
        {
            var parameterName = "@" + property.Name;
            var value = property.GetValue(message, null);
            command.Parameters.AddWithValue(parameterName, value);
        }
        connection.Open();
        message.IMID = (Int32)command.ExecuteScalar();
        return message.IMID;
    }
}
Note that you have to introduce and implement the interface IMessageWithIMID in order to access the property IMID.
internal interface IMessageWithIMID
{
    Int32 IMID { get; set; }
}
Note that you also don't need a data reader - you can just use ExecuteScalar(). This turns
using (var reader = command.ExecuteReader())
{
    while (reader.Read())
    {
        message.IMID = (Int32)reader["IMID"];
    }
}
into
message.IMID = (Int32)command.ExecuteScalar();
and you are done.
I'm a big fan of keeping my code simple and trim so it can be reusable. One thing I'm struggling with is using the data reader for different types of objects: I had it in a method and found there were problems with connections being closed or left open. So, for the time being, I am forced to copy and paste the code, which is something I hate!!!
Is there any way I can scale this down so I can put it in a method and make it re-usable and nice?
ENT_AuctionBid ret = new ENT_AuctionBid();
try
{
    SqlParameter[] Params = new SqlParameter[] {
        new SqlParameter("@ID", ID)
    };
    using (SqlConnection conn = new SqlConnection(this.ConnectionString))
    {
        using (SqlCommand command = new SqlCommand("GetItem", conn))
        {
            SqlDataReader reader;
            command.CommandType = CommandType.StoredProcedure;
            conn.Open();
            command.Parameters.AddRange(Params);
            reader = command.ExecuteReader(CommandBehavior.SingleRow);
            while (reader.HasRows)
            {
                while (reader.Read())
                {
                    //
                    ret = this.Convert(reader);
                }
                reader.NextResult();
            }
            reader.Close();
        }
    }
}
catch (Exception ex)
{
}
return ret;
You should use SqlDataAdapter.
Here's a nice example on how to use it:
http://www.dotnetperls.com/sqldataadapter
Also, you might want to consider switching to Entity Framework; it will make your data access much, much easier, but it might be complicated to introduce in an existing project.
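A hedged sketch of the SqlDataAdapter approach applied to the GetItem call from the question (MapRow is a hypothetical helper that builds the entity from a DataRow; the question's Convert method takes a reader, so you'd need an overload or inline mapping):
var table = new DataTable();
using (var conn = new SqlConnection(this.ConnectionString))
using (var command = new SqlCommand("GetItem", conn))
using (var adapter = new SqlDataAdapter(command))
{
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.AddWithValue("@ID", ID);
    // Fill opens and closes the connection itself
    adapter.Fill(table);
}
// Map the single row (if any) onto the entity
ENT_AuctionBid ret = table.Rows.Count > 0 ? MapRow(table.Rows[0]) : new ENT_AuctionBid();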
You can write it using a lot fewer lines:
// Skipped creating temp variable
try {
    using (SqlConnection conn = new SqlConnection(this.ConnectionString))
    using (SqlCommand command = new SqlCommand("GetItem", conn) { CommandType = CommandType.StoredProcedure }) {
        command.Parameters.AddWithValue("@ID", ID);
        conn.Open();
        // reader is IDisposable, so you can use using
        using (var reader = command.ExecuteReader(CommandBehavior.SingleRow)) {
            // Skipped parsing multiple result sets; you return after the first,
            // otherwise there's no point using SingleRow.
            // If nothing is read, return the default value.
            return reader.Read() ? this.Convert(reader) : new ENT_AuctionBid();
        }
    }
}
catch (Exception ex) {
    // Handle your exception here
}
// Return default value for error
return new ENT_AuctionBid();
All connections are closed with this code (because using is used). No unneeded loops are created, because you only expect a single row. And the temporary variable is not needed, so no abandoned object is created up front; it is only created when it is actually used.
This is a bit smaller:
try
{
    using (SqlConnection conn = new SqlConnection(this.ConnectionString))
    {
        using (SqlCommand command = new SqlCommand("GetItem", conn))
        {
            command.Parameters.AddWithValue("@ID", ID);
            command.CommandType = CommandType.StoredProcedure;
            conn.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    //
                    ret = this.Convert(reader);
                }
            }
        }
    }
}
catch (Exception ex)
{
}
Create helper methods for creating and returning an object of type SqlCommand. Pass a connection object to this helper method as well as stored procedure name and parameters list (if any). If you have different objects that are created from the data reader, pass the data reader to a constructor and let it generate an object based on that data.
As for closing the connection you should always have try...catch...finally. In the finally section close the connection.
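A rough sketch of that shape - the helper name, the SqlParameter list, and a reader-accepting constructor on ENT_AuctionBid are all assumptions:
private static SqlCommand CreateCommand(SqlConnection connection, string storedProcedure,
    params SqlParameter[] parameters)
{
    var command = new SqlCommand(storedProcedure, connection)
    {
        CommandType = CommandType.StoredProcedure
    };
    if (parameters != null)
        command.Parameters.AddRange(parameters);
    return command;
}

public ENT_AuctionBid GetItem(int id)
{
    SqlConnection conn = null;
    try
    {
        conn = new SqlConnection(this.ConnectionString);
        conn.Open();
        using (var command = CreateCommand(conn, "GetItem", new SqlParameter("@ID", id)))
        using (var reader = command.ExecuteReader(CommandBehavior.SingleRow))
        {
            // let the entity build itself from the reader
            return reader.Read() ? new ENT_AuctionBid(reader) : new ENT_AuctionBid();
        }
    }
    // catch (Exception ex) { log or handle the error here }
    finally
    {
        if (conn != null) conn.Dispose(); // always close the connection
    }
}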
In my projects I usually solve this problem by creating a utility class that contains all the methods to access the DB and manages internally all the stuff related to the DB connection and the adapter.
For example, a class called DBSql which contains a connection (SqlConnection connection;) as a private member and the following methods:
// execute the query passed to the function
public System.Data.DataSet ExecuteQuery(string query)

// returns whether a query returns rows or not
public bool HasRows(string query)

// execute commands like update/insert/etc...
public int ExecuteNonQuery(string sql)
In my class, you just pass a string and the class initializes the various DataAdapter and Command objects to execute it and returns a DataSet. Obviously you can extend it to manage parameters, transactions and everything else.
In this way you are sure that the connection and the object are always handled the same way, and, hopefully, in a correct way.
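A bare-bones sketch of such a class (shown here taking a connection string rather than holding a long-lived SqlConnection field, and without the parameter or transaction handling mentioned above):
using System.Data;
using System.Data.SqlClient;

public class DBSql
{
    private readonly string _connectionString;

    public DBSql(string connectionString)
    {
        _connectionString = connectionString;
    }

    // execute the query passed to the function
    public DataSet ExecuteQuery(string query)
    {
        var dataSet = new DataSet();
        using (var connection = new SqlConnection(_connectionString))
        using (var adapter = new SqlDataAdapter(query, connection))
        {
            adapter.Fill(dataSet); // Fill opens and closes the connection itself
        }
        return dataSet;
    }

    // returns whether a query returns rows or not
    public bool HasRows(string query)
    {
        return ExecuteQuery(query).Tables[0].Rows.Count > 0;
    }

    // execute commands like update/insert/etc...
    public int ExecuteNonQuery(string sql)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            return command.ExecuteNonQuery();
        }
    }
}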
You can use a utility file, such as SqlHelper.cs from Microsoft Data Access Application Block. Then all the code you need is this:
using (SqlDataReader sdr = SqlHelper.ExecuteReader(this.ConnectionString, "GetItem", ID))
{
    while (sdr.Read())
    {
        ret = this.Convert(sdr);
    }
}
You could start using LINQ to SQL, which has its own DataClass system in which you just drag and drop your database tables and stored procedures. Then you just have to create an instance at the top of your classes -- private MyCustomDataClass _db = new MyCustomDataClass(); -- and then you can just type _db.<here all data tables and SPROCs will appear for you to choose>.
Example (from when all the SPROCs have been added to the DataClass):
private MyCustomDataClass _db = new MyCustomDataClass();

public void MethodToRunSPROC(string email, Guid userId)
{
    _db.MySPROC_AddEmailToUser(email, userId);
}
I have two questions.
1) Should you always use a using statement on a connection? So, I would use it on the connection and then another one on a reader within the connection? So I would be using two using statements.
2) Let's say you use the using statement on the connection and also on a reader returned from the connection. So you have two using statements. Does it create two try{}finally{} blocks or just one?
Thanks!
Be careful here. You should always have a using statement on any local object that implements IDisposable. That includes not only connections and readers, but also the command. But it can be tricky sometimes exactly where that using statement goes. If you're not careful it can cause problems. For example, in the code that follows the using statement will close your reader before you ever get to use it:
SqlDataReader MyQuery()
{
    string sql = "some query";
    using (var cn = new SqlConnection("connection string"))
    using (var cmd = new SqlCommand(sql, cn))
    {
        cn.Open();
        using (var rdr = cmd.ExecuteReader())
        {
            return rdr;
        }
    }
}
Instead, you have four options. One is to wait to create the using block until you call the function:
SqlDataReader MyQuery()
{
    string sql = "some query";
    var cn = new SqlConnection("connection string");
    using (var cmd = new SqlCommand(sql, cn))
    {
        cn.Open();
        // CloseConnection ties the connection's lifetime to the reader,
        // so disposing the reader in the caller also closes the connection
        return cmd.ExecuteReader(CommandBehavior.CloseConnection);
    }
}
using (var rdr = MyQuery())
{
    while (rdr.Read())
    {
        //...
    }
}
Of course, you still have to be careful with your connection there, and it means remembering to write a using block everywhere you use the function.
Option two is to just process the query results in the method itself, but that breaks the separation of your data layer from the rest of the program. A third option is for your MyQuery() function to accept an argument of type Action that you call inside the while (rdr.Read()) loop, but that's just awkward.
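For completeness, that third option might look roughly like this:
void MyQuery(Action<IDataRecord> processRow)
{
    string sql = "some query";
    using (var cn = new SqlConnection("connection string"))
    using (var cmd = new SqlCommand(sql, cn))
    {
        cn.Open();
        using (var rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
                processRow(rdr); // the caller decides what to do with each row
        }
    }
}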
I generally prefer option four: turn the data reader into an IEnumerable, like this:
IEnumerable<IDataRecord> MyQuery()
{
    string sql = "some query";
    using (var cn = new SqlConnection("connection string"))
    using (var cmd = new SqlCommand(sql, cn))
    {
        cn.Open();
        using (var rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
                yield return rdr;
        }
    }
}
Now everything will be closed correctly, and the code that handles it is all in one place. You also get a nice bonus: your query results will work well with any of the LINQ operators.
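For example (with System.Linq in scope, and assuming the query returns a Name column), copying each value out while the row is current:
var names = MyQuery()
    .Select(r => r["Name"].ToString()) // materialize the value before the reader advances
    .ToList();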
Finally, here's something new I'm playing with for the next time I get to build a completely new project; it combines the IEnumerable approach with passing in a delegate argument:
// part of the data layer
private static IEnumerable<IDataRecord> Retrieve(string sql, Action<SqlParameterCollection> addParameters)
{
    // DL.ConnectionString is a private static property in the data layer.
    // Depending on the project's needs, it can be implemented to read from a config file or elsewhere.
    using (var cn = new SqlConnection(DL.ConnectionString))
    using (var cmd = new SqlCommand(sql, cn))
    {
        addParameters(cmd.Parameters);
        cn.Open();
        using (var rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
                yield return rdr;
        }
    }
}
And then I'll use it within the data layer like this:
public IEnumerable<IDataRecord> GetFooChildrenByParentID(int ParentID)
{
    // I could easily use a stored procedure name instead, and provide overloads for command types.
    return Retrieve(
        @"SELECT c.*
          FROM [ParentTable] p
          INNER JOIN [ChildTable] c ON c.ParentID = p.ID
          WHERE p.ID = @ParentID", p =>
        {
            p.Add("@ParentID", SqlDbType.Int).Value = ParentID;
        }
    );
}
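Calling code can then just enumerate the result, for example:
foreach (var record in GetFooChildrenByParentID(42))
{
    Console.WriteLine(record["Name"]); // "Name" is just an assumed column here
}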
1) Should you always use a using statement on a connection? So, I would use it on the connection and then another one on a reader within the connection? So I would be using two using statements.
Yes, because they implement IDisposable. And don't forget a using statement on the command too:
using (DbConnection connection = GetConnection())
using (DbCommand command = connection.CreateCommand())
{
    command.CommandText = "SELECT FOO, BAR FROM BAZ";
    connection.Open();
    using (DbDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            ....
        }
    }
}
2) Let's say you use the using statement on the connection and also on a reader returned from the connection. So you have two using statements. Does it create two try{}finally{} blocks or just one?
Each using statement will create its own try/finally block
You should always use a using statement when an object implements IDisposable. This includes connections.
It will create two nested try{}finally{} blocks.
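Conceptually, the compiler turns two stacked using statements like these (GetConnection and GetReader are just placeholder helpers):
using (var connection = GetConnection())
using (var reader = GetReader(connection))
{
    // work
}
into roughly this:
var connection = GetConnection();
try
{
    var reader = GetReader(connection);
    try
    {
        // work
    }
    finally
    {
        if (reader != null) reader.Dispose();
    }
}
finally
{
    if (connection != null) connection.Dispose();
}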
A special point on 1): you need to avoid that technique when the connection is used in asynchronous ADO.NET methods like BeginExecuteReader, because more than likely you will fall out of scope and try to dispose the connection while the async operation is still in progress. This is similar to the case where you are using class variables and not local variables. Often the connection reference is stored in a class used as the "control block" for the asynchronous operation.
To answer each one:
1) Yes, it is best practice to dispose of both as soon as possible.
2) using() will create two blocks, wrapped in each other in the same order. It will dispose the inner object (the reader) first, then dispose the object from the outer using (the connection).
This article will probably be interesting for you: How to Implement IDisposable and Finalizers: 3 Easy Rules.