I have a database from which I want to read a particular value.
When I read it, the ID always shows up as 0.
public void ClustersStatus(Clusters cl)
{
using (SqlConnection con = new SqlConnection("Data Source=DESKTOP-JMGPJ6J;Initial Catalog=Infinity;Integrated Security=True"))
{
using (SqlCommand cmd = new SqlCommand("Select Approved from Master_Cluster where ClusterID = @id", con))
{
con.Open();
cmd.Parameters.AddWithValue("#id", cl.ClusterID);
cmd.Connection = con;
SqlDataReader rdr = cmd.ExecuteReader();
if (rdr.HasRows)
{
rdr.Read();
if (Convert.ToInt32(rdr["Approved"]) == 1)
{
}
else
{
}
}
}
// return cl;
}
}
It is really hard to answer your question without more information about your specific problem, but I think the real issue is in your
rdr.Read();
line. The SqlDataReader.Read method advances the reader to the next record, which I think means that your
if (Convert.ToInt32(rdr["Approved"]) == 1)
code may be reading the "Approved" column from a different row than you expect, which seems odd and is probably not what you want. You might want to consider removing it.
Also, I have a few suggestions:
Do not put your connection string directly in your code. These settings can change over time. Put it in a configuration or settings file and read it from there.
Open your connection object just before you use it. Not a big deal, but it is good practice.
Do not use the AddWithValue method; it can produce unexpected results. Use the Add method instead (preferably the Add(String, SqlDbType, Int32) overload), for example:
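A minimal sketch against the code above (the sized @name parameter is only there to illustrate the Add(String, SqlDbType, Int32) overload; it is not a column from the question):
cmd.Parameters.Add("@id", SqlDbType.Int).Value = cl.ClusterID;
// for variable-length types you would also pass the size, e.g. (hypothetical column):
// cmd.Parameters.Add("@name", SqlDbType.NVarChar, 50).Value = clusterName;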
Assuming your query returns what you want, to work with the data read by the SqlDataReader you can go through the IDataRecord interface instead of using the rdr variable directly.
For ease of use, I put the reading logic into a function. Here is the idea, for you to adapt to your needs:
private static int ReadSingleRow(IDataRecord record)
{
//below is an example of processing using `record`. You adapt it to your needs
int.TryParse(record[0].ToString(), out int temp);
MessageBox.Show (temp.ToString());
//...do the processing you need
return temp;
}
Once the function is written, the main code becomes:
if (rdr.HasRows)
{
while (rdr.Read())
{
int x = ReadSingleRow((IDataRecord)rdr);
// or use the above function in whatever way you need
// other processing
}
}
I have recently started refactoring an old system designed by someone with little experience in OOP. Thankfully, (nearly) all access to the database is within a single 3000-line file. That file contains a Dictionary<string, SqlCommand>, the SqlConnection, and a very long function adding every single SQL query to the dictionary like this:
cmd = new SqlCommand(null, _sqlConnection);
cmd.CommandText = "SELECT * FROM User WHERE User.UserID = @id;"; // Most queries are far from being this simple
cmd.Parameters.Add(new SqlParameter("@id", SqlDbType.Int, 0));
cmd.Prepare();
_cmds.Add("getUser", cmd);
Those queries are used by functions within that same file that would look like this:
public void deleteUser(int userId)
{
if (_cmds.TryGetValue("deleteUser", out SqlCommand cmd))
{
lock(cmd)
{
cmd.Parameters[0].Value = userId;
cmd.ExecuteNonQuery();
}
}
}
public bool isConnected(int userId, out int amount)
{
bool result = false;
amount = 0;
if (_cmds.TryGetValue("userInfo", out SqlCommand cmd))
{
lock (cmd)
{
cmd.Parameters[0].Value = userId;
using (SqlDataReader reader = cmd.ExecuteReader())
{
if (reader.HasRows)
while (reader.Read())
{
amount = (int)Math.Round(reader.GetDecimal(0));
result = reader.GetInt32(1) != 0;
}
}
}
}
return result;
}
Now this is horrible to work with and maintain. I finally have the time to refactor this. I wanted to turn this into a proper DAL with repositories which would be used by services and be dependency injectable.
I don't really care about changing the functions or the queries (by using an ORM, for example). What I'm more interested in is splitting the file into many files in a way that would allow me to mock, test and modify it more easily. I'm looking for a way to better structure the existing code, though I know a lot of copy/pasting and recoding will be required.
I would recommend replacing the manually written object-mapping code with an Object-Relational Mapper like NHibernate, which will save the time and effort of creating and maintaining a hand-rolled data access layer.
Check out Dapper. It is a "micro-ORM" and offers high-performance object-oriented data access. You can continue to use all the existing queries, but replace all the boiler-plate ADO.NET code with Dapper.
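For illustration, a rough sketch of what one of the existing methods might look like on Dapper (the User class and _connectionString field are assumptions, not code from the question):
using Dapper; // NuGet package "Dapper"

public void deleteUser(int userId)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        // Dapper creates the command, binds @id and disposes everything for you
        connection.Execute("DELETE FROM User WHERE User.UserID = @id;", new { id = userId });
    }
}

public User getUser(int userId)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        // maps the selected columns onto a User object by name
        return connection.QueryFirstOrDefault<User>(
            "SELECT * FROM User WHERE User.UserID = @id;", new { id = userId });
    }
}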
This is going to take some repetitive work, but here are a few ideas on how to get a handle on it. This won't put the code in some ideal state, but might make it a little bit more manageable. One challenge is that every method has parts in two places - one in the method and one where the command is stored in the dictionary.
Don't add any more SQL to this class, ever. Begin defining and using the new repositories you want.
Being able to mock it is easy, too. You can use the extract interface refactoring to create an interface so that you can mock this class, even in its current form. That's going to be a big, ugly interface, but at least you can mock methods if you need to.
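As a rough sketch (the interface name and member list are illustrative, not taken from the original file):
public interface IDataAccess
{
    void deleteUser(int userId);
    bool isConnected(int userId, out int amount);
    // ... every other public method the 3000-line class exposes
}

// the existing class then simply declares the interface; its body does not change yet
public class YourDataAccessClass : IDataAccess
{
    // existing methods unchanged
}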
That's the easy part. How can the entire class be refactored without breaking any one part of it? These steps are just some ideas:
A first step is just to inject the connection string the class needs:
public class YourDataAccessClass
{
private readonly string _connectionString;
public YourDataAccessClass(string connectionString)
{
_connectionString = connectionString;
}
}
You'll switch over to it one method at a time. Initially you can leave most of the class, including the dictionary, as-is. That way the methods you haven't modified will continue to work.
Next, you could open up the class in two separate windows so that you can see the dictionary function that contains the SQL and the functions that use it side-by-side. This will be a lot harder if you have to scroll back up and down.
You'll likely want to move the SQL for each function into that function. You could do this as you refactor each function, but it might be less painful to do it all at once so that you gain efficiency from repetition.
You could define a new variable in each function and copy and paste:
var sql = "SELECT * FROM User WHERE User.UserID = #id;";
(Again, not the way I'd normally write this.)
Now you've got a function or 100 functions that look like this:
public void deleteUser(int userId)
{
var sql = "DELETE User WHERE User.UserID = #id;";
if (_cmds.TryGetValue("deleteUser", out SqlCommand cmd))
{
lock(cmd)
{
cmd.Parameters[0].Value = userId;
cmd.ExecuteNonQuery();
}
}
}
For the non-query commands you could write a function like this in your class which will eliminate the repetitive code to open a connection, create a command, etc:
private void ExecuteNonQuery(string sql, Action<SqlCommand> addParameters = null)
{
using (var connection = new SqlConnection(_connectionString))
using (var command = new SqlCommand(sql, connection))
{
addParameters?.Invoke(command);
connection.Open();
command.ExecuteNonQuery();
}
}
Save the following snippet of code. You might even just be able to keep it in the clipboard most of the time. Paste it into each one of your non-query methods right beneath the SQL:
ExecuteNonQuery(sql, cmd =>
{
});
After you paste it, move the line or lines that set the parameters into the body of the lambda (whose argument is named cmd so the moved lines keep their variable name) and then delete the existing code that executed the query. Note that because the command is now built fresh from the SQL string instead of being pulled from the dictionary, the parameter has to be added here rather than assigned through Parameters[0]:
ExecuteNonQuery(sql, cmd =>
{
cmd.Parameters.Add("@id", SqlDbType.Int).Value = userId;
});
Now your function looks like this:
public void deleteUser(int userId)
{
var sql = "DELETE User WHERE User.UserID = #id;";
ExecuteNonQuery(sql, cmd =>
{
cmd.Parameters.Add("@id", SqlDbType.Int).Value = userId;
});
}
I'm not saying that's fun, but it will make the process of editing those functions more efficient since you're typing less and just moving things around in exactly the same way over and over.
The ones that actually return data are less fun, but still manageable.
First, take pretty much the same boilerplate code. This could likely be improved because it's still a little repetitive, but at least it's more self-contained:
using (var connection = new SqlConnection(_connectionString))
using (var cmd = new SqlCommand(sql, connection)) // again, named "cmd" on purpose
{
connection.Open();
}
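If you want to squeeze that remaining repetition out too, one option is a generic counterpart to ExecuteNonQuery. This is a sketch of mine, not part of the original steps; the walkthrough below sticks with the plain boilerplate:
private T ExecuteReader<T>(string sql, Func<SqlDataReader, T> map, Action<SqlCommand> addParameters = null)
{
    using (var connection = new SqlConnection(_connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        addParameters?.Invoke(command);
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            return map(reader); // the caller decides how to turn rows into a result
        }
    }
}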
Starting with this:
public int isConnected(int userId, out int name)
{
var sql = "SELECT * FROM User WHERE User.UserID = #id;";'
bool result = false;
amount = 0;
if (_cmds.TryGetValue("userInfo", out SqlCommand cmd))
{
lock (cmd)
{
cmd.Parameters[0].Value = userId;
using (SqlDataReader reader = cmd.ExecuteReader())
{
if (reader.HasRows)
while (reader.Read())
{
amount = (int)Math.Round(reader.GetDecimal(0));
result = reader.GetInt32(1);
}
}
}
}
}
Paste your boilerplate into the method:
public int isConnected(int userId, out int name)
{
var sql = "SELECT * FROM User WHERE User.UserID = #id;";'
bool result = false;
amount = 0;
using (var connection = new SqlConnection(_connectionString))
using (var cmd = new SqlCommand(sql, connection)) // again, named "cmd" on purpose
{
connection.Open();
}
if (_cmds.TryGetValue("userInfo", out SqlCommand cmd))
{
lock (cmd)
{
cmd.Parameters[0].Value = userId;
using (SqlDataReader reader = cmd.ExecuteReader())
{
if (reader.HasRows)
while (reader.Read())
{
amount = (int)Math.Round(reader.GetDecimal(0));
result = reader.GetInt32(1);
// was this a typo? The code in the question doesn't
// return anything or set the "out" variable. But
// if that's in the method then that will be part of
// what gets copied.
}
}
}
}
}
Then, just like before, move the part where you add your parameters above connection.Open(); and move the part where you use the command just beneath connection.Open(); and delete what's left. The result is this:
public int isConnected(int userId, out int name)
{
var sql = "SELECT * FROM User WHERE User.UserID = #id;";'
bool result = false;
amount = 0;
using (var connection = new SqlConnection(_connectionString))
using (var cmd = new SqlCommand(sql, connection)) // again, named "cmd" on purpose
{
cmd.Parameters.Add("@id", SqlDbType.Int).Value = userId;
connection.Open();
using (SqlDataReader reader = cmd.ExecuteReader())
{
if (reader.HasRows)
while (reader.Read())
{
amount = (int)Math.Round(reader.GetDecimal(0));
result = reader.GetInt32(1);
}
}
}
}
You can probably get into a groove and do these in a minute or two each, which means that it will only take a few hours.
Once all of this is done you can delete your massive dictionary function. Now the class depends on an injected connection string and opens and closes connections normally instead of storing a connection and using it over and over.
You can also break it up. One way is to move the connection string and the helper function into a base class (or just duplicate the helper function - it's really small) and you can move any of the query functions into a smaller class because each function is self-contained.
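A minimal sketch of that split (the class names here are placeholders I made up, not from the original code):
using System;
using System.Data;
using System.Data.SqlClient;

public abstract class SqlRepositoryBase
{
    private readonly string _connectionString;

    protected SqlRepositoryBase(string connectionString)
    {
        _connectionString = connectionString;
    }

    // the same helper as above, now shared by every repository
    protected void ExecuteNonQuery(string sql, Action<SqlCommand> addParameters = null)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            addParameters?.Invoke(command);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}

public class UserRepository : SqlRepositoryBase
{
    public UserRepository(string connectionString) : base(connectionString) { }

    public void deleteUser(int userId)
    {
        ExecuteNonQuery("DELETE User WHERE User.UserID = @id;", cmd =>
        {
            cmd.Parameters.Add("@id", SqlDbType.Int).Value = userId;
        });
    }
}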
Folks, I ran into a bit of code and I'm a tad confused about what's going on.
I'm working to refactor the code to instead process a handful of SqlCommands rather than the single SqlCommand that it currently works with. It's my hope all SqlCommands can be processed under ONE transaction.
Each SqlCommand is a Stored Procedure, so in effect my transaction would call one (or many) Stored Procedures - first off, is that even possible?
Regardless, here's the code block:
public virtual void Execute()
{
using (SqlConnection conn = new SqlConnection(ConnectionString))
{
SqlCommand storedProcedure = new SqlCommand(BlahBah, conn);
storedProcedure.CommandType = CommandType.StoredProcedure;
conn.Open();
conn.BeginTransaction(IsolationLevel.ReadUncommitted).Commit(); // <-- the statement in question
storedProcedure.ExecuteNonQuery();
conn.Close();
}
}
In particular, note the marked statement that begins a transaction on the connection object and immediately calls .Commit() on it.
The actual source code has no ROLLBACK, nor is there a COMMIT anywhere else. Am I essentially seeing some sort of auto-commit? Does it even make sense to have a transaction here if, for example, the DB doesn't require transactional processing?
Perhaps more important to my refactoring efforts, would something like this make sense? That is to ask: if I processed 10 stored procedures and the last one threw an error, would there be an automatic ROLLBACK of all 10?
Here's where I want to land:
public virtual void ExecuteTest()
{
using (SqlConnection conn = new SqlConnection(ApplicationConfig.DbConnectInfo.ConnectionString))
{
var errorText = string.Empty;
conn.Open();
conn.BeginTransaction(IsolationLevel.ReadUncommitted).Commit();
foreach (var storedProcedure in StoredProcedures)
{
storedProcedure.Connection = conn;
storedProcedure.ExecuteNonQuery();
}
conn.Close();
}
}
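For reference, here is my understanding of the documented pattern (a sketch, not my current code; ExecuteAll is just a placeholder name): the SqlTransaction is kept, every command is enlisted in it, and Commit only runs after all of them succeed.
public virtual void ExecuteAll()
{
    using (SqlConnection conn = new SqlConnection(ApplicationConfig.DbConnectInfo.ConnectionString))
    {
        conn.Open();
        using (SqlTransaction transaction = conn.BeginTransaction(IsolationLevel.ReadUncommitted))
        {
            try
            {
                foreach (var storedProcedure in StoredProcedures)
                {
                    storedProcedure.Connection = conn;
                    storedProcedure.Transaction = transaction; // each command must be enlisted explicitly
                    storedProcedure.ExecuteNonQuery();
                }
                transaction.Commit(); // commit only after every procedure has succeeded
            }
            catch
            {
                transaction.Rollback(); // any failure rolls back all of them
                throw;
            }
        }
    }
}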
EDIT: this MS link suggests the approach will work:
SqlConnection.BeginTransaction Method (IsolationLevel)
Thank you for your interest.
I am doing some research to get a better understanding of SQL and of working with DataTables.
I am trying to get good performance reading data from an MS SQL database and loading it into a DataGridView.
I've created a SQL function, which I call from my tool, and I load the results into a DataTable.
If I execute this function within SSMS it takes 11-12 seconds to load the results (almost 1.5 million rows), but if I execute it through the tool I coded, it takes more than 30 seconds (just for DataTable.Load(SqlDataReader)).
What I've done so far is:
private DataTable GetDataFromDB(string userId, string docId, DateTimeOffset date)
{
string cmd = "select * from dbo.GetData(@userId, @docId, @dueDate);";
using (SqlConnection conn = new SqlConnection(connectionString))
{
if (conn.State != ConnectionState.Open)
conn.Open();
SqlCommand command = new SqlCommand(cmd, conn);
if (String.IsNullOrEmpty(userId))
command.Parameters.AddWithValue("@userId", DBNull.Value);
else
command.Parameters.AddWithValue("@userId", userId);
if (String.IsNullOrEmpty(docId))
command.Parameters.AddWithValue("@docId", DBNull.Value);
else
command.Parameters.AddWithValue("@docId", docId);
command.Parameters.AddWithValue("@dueDate", date);
SqlDataReader reader = command.ExecuteReader();
stopWatch.Reset();
stopWatch.Start();
table.BeginLoadData();
table.Load(reader, LoadOption.Upsert);
table.EndLoadData();
stopWatch.Stop();
reader.Close();
reader.Dispose();
conn.Close();
}
return table;
}
I've already done some Google research and this is the best I could come up with. So far it is working well, but I am curious whether it is possible to get the results faster. Any ideas?
I also have a problem with memory. As soon as I start the call, the tool allocates up to 900 MB of RAM until it has all the entries. I already free the memory every time I call the function above, but I think 900 MB is quite a lot, and another problem is that not all of the RAM that was needed gets released. For example: when I start the tool it needs around 7 MB of RAM; the first time I call the method above, it needs 900 MB. If I call it again, it releases most of the memory but still needs 930 MB. The third time 960 MB, and so on. So with every call it allocates more memory, which will eventually cause an "out of memory" exception if this method gets called often.
Thank you a lot!
Performance will vary based on what and how much data you are loading.
The link below gives a solution together with a performance report. Try something like this with your approach and compare the performance.
Hope this helps.
Fastest way to populate datatable
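For instance, a rough sketch of the SqlDataAdapter.Fill approach to benchmark against DataTable.Load (the connection string, query and parameter values are placeholders taken from the question, not from the linked article):
DataTable table = new DataTable();
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("select * from dbo.GetData(@userId, @docId, @dueDate);", connection))
using (var adapter = new SqlDataAdapter(command))
{
    command.Parameters.AddWithValue("@userId", userId);
    command.Parameters.AddWithValue("@docId", docId);
    command.Parameters.AddWithValue("@dueDate", dueDate);
    adapter.Fill(table); // Fill opens and closes the connection on its own if it is closed
}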
For me, using DataTable.Load(SqlDataReader) is the fastest way:
DataTable dt = new DataTable();
using (var con = new SqlConnection { ConnectionString = "ConnectionString" })
{
using (var command = new SqlCommand { Connection = con })
{
con.Open();
command.CommandText = @"SELECT statement.....";
command.Parameters.AddWithValue("@param", "Param");
// load the results into the DataTable
dt.Load(command.ExecuteReader(), LoadOption.Upsert);
}// this will dispose command
}// this will dispose and close connection
1) I have the following code:
private static SqlDataReader gCandidateList = null;
public SqlDataReader myCandidateList
{
set
{
gCandidateList = value;
}
get
{
return gCandidateList;
}
}
2) In FormA I have:
sqlConn.ConnectionString = mySettings.myConnString;
sqlConn.Open();
SqlCommand cmdAvailableCandidate = new SqlCommand(tempString, sqlConn);
SqlDataReader drAvailableCandidate = cmdAvailableCandidate.ExecuteReader();
mySettings.myCandidateList = drAvailableCandidate;
sqlConn.Close();
3) In FormB I want to reuse the data saved in myCandidateList, so I use:
SqlDataReader drCandidate = mySettings.myCandidateList;
drCandidate.Read();
4) I then got the error "Invalid attempt to call Read when reader is closed."
5) I tried mySettings.myCandidateList.Read() in (3) above and again received the same error message.
6) How can I re-open SqlDataReader drCandidate to read the data?
7) I would very much appreciate any advice and help.
You can't read from the reader once the connection is closed or disposed. If you want to use those rows (the fetched results) later in your code, you need to load them into a List or a DataTable.
For instance,
System.Data.DataTable dt = new System.Data.DataTable();
dt.Load(drAvailableCandidate);
If you want to use the data reader at a later stage, you have to pass the appropriate CommandBehavior to the ExecuteReader method. Your code in FormA should be changed as below.
sqlConn.ConnectionString = mySettings.myConnString;
sqlConn.Open();
SqlCommand cmdAvailableCandidate = new SqlCommand(tempString, sqlConn);
SqlDataReader drAvailableCandidate = cmdAvailableCandidate.ExecuteReader(CommandBehavior.CloseConnection);
mySettings.myCandidateList = drAvailableCandidate;
// do not call sqlConn.Close() here; with CommandBehavior.CloseConnection the connection is closed when the reader is disposed
Make sure to dispose of the data reader once it is used, as the connection to the database will be held open until the data reader is closed. It is better to change your code in FormB as below.
using (mySettings.myCandidateList)
{
mySettings.myCandidateList.Read();
}
You're closing the connection before you attempt to read from the reader. That won't work.
When you call Close on the SqlConnection object (sqlConn.Close();), it closes the connection and your data reader with it. That is why you get the error when you try to read from your SqlDataReader in FormB.
What you need to do is change the definition of your myCandidateList property so that it returns a representation of the data that you have extracted from your drAvailableCandidate reader.
Essentially what you need to do is iterate through the rows in the drAvailableCandidate object, extract the values and cache them in your property for later retrieval.
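A small sketch of that idea, assuming the property is changed to hold a DataTable instead of the live reader:
private static DataTable gCandidateList = null;

public DataTable myCandidateList
{
    set { gCandidateList = value; }
    get { return gCandidateList; }
}

// In FormA, copy the rows out before the connection is closed:
using (SqlDataReader drAvailableCandidate = cmdAvailableCandidate.ExecuteReader())
{
    DataTable candidates = new DataTable();
    candidates.Load(drAvailableCandidate);
    mySettings.myCandidateList = candidates;
}
sqlConn.Close();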
Just to add to the answers already given: if you're using async/await, it's easy to get caught out by not awaiting an operation inside the using block of a SqlConnection. For example, doing the following can give the reported error:
public Task GetData()
{
using(new SqlConnection(connString))
{
return SomeAsyncOperation();
}
}
The problem here is that we're not awaiting the operation inside the using, so the connection is disposed before we actually execute the underlying async operation. Fairly obvious, but it has caught me out before.
The correct thing to do is to await inside the using:
public async Task GetData()
{
using(new SqlConnection(connString))
{
await SomeAsyncOperation();
}
}
I'm a big fan of keeping my code simple and trim so it can be reusable. One thing I'm struggling with is using the data reader for different types of objects. I had it in a method and found there were problems with connections being closed or left open, so for the time being I am forced to copy and paste the code, which is something I hate!
Is there any way I can scale this down so I can put it in a method and make it re-usable and nice?
ENT_AuctionBid ret = new ENT_AuctionBid();
try
{
SqlParameter[] Params = new SqlParameter[]{
new SqlParameter("#ID", ID )
};
using (SqlConnection conn = new SqlConnection(this.ConnectionString))
{
using (SqlCommand command = new SqlCommand("GetItem", conn))
{
SqlDataReader reader;
command.CommandType = CommandType.StoredProcedure;
conn.Open();
command.Parameters.AddRange(Params);
reader = command.ExecuteReader(CommandBehavior.SingleRow);
while (reader.HasRows)
{
while (reader.Read())
{
//
ret = this.Convert(reader);
}
reader.NextResult();
}
reader.Close();
}
}
}
catch (Exception ex)
{
}
return ret;
You should use SqlDataAdapter.
Here's a nice example on how to use it:
http://www.dotnetperls.com/sqldataadapter
Also, you might want to consider switching to Entity Framework; it will make your data access much, much easier, but it might be complicated to introduce in an existing project.
You can write this with a lot fewer lines:
// Skipped creating temp variable
try {
using (SqlConnection conn = new SqlConnection(this.ConnectionString))
using (SqlCommand command = new SqlCommand("GetItem", conn) { CommandType = CommandType.StoredProcedure} ) {
command.Parameters.AddWithValue("@ID", ID);
conn.Open();
// reader is IDisposable, you can use using
using (var reader = command.ExecuteReader(CommandBehavior.SingleRow)) {
// Skipped parsing multiple result sets, you return after the first
// otherwise there's no point using SingleRow
// If nothing is read, return default value
return reader.Read() ? this.Convert(reader) : new ENT_AuctionBid();
}
}
}
catch (Exception ex) {
// Handle your exception here
}
// Return default value for error
return new ENT_AuctionBid();
All connections are closed by this code (because using is used). No unneeded loops are created, because you only expect a single row. And the temporary variable is not needed, so no abandoned object is created; it is only created when it is actually used.
This is a bit smaller:-
try
{
using (SqlConnection conn = new SqlConnection(this.ConnectionString))
{
using (SqlCommand command = new SqlCommand("GetItem", conn))
{
command.Parameters.AddWithValue("@ID", ID);
command.CommandType = CommandType.StoredProcedure;
conn.Open();
var reader = command.ExecuteReader();
while (reader.Read())
{
//
ret = this.Convert(reader);
}
}
}
}
catch (Exception ex)
{
}
Create helper methods for creating and returning an object of type SqlCommand. Pass a connection object to this helper method, as well as the stored procedure name and a parameter list (if any). If you have different objects that are created from the data reader, pass the data reader to a constructor and let it build the object from that data.
As for closing the connection, you should always have try...catch...finally and close the connection in the finally section.
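A rough sketch of such a helper (the method name is mine, not from the answer):
private SqlCommand CreateStoredProcCommand(SqlConnection conn, string procName, params SqlParameter[] parameters)
{
    var command = new SqlCommand(procName, conn) { CommandType = CommandType.StoredProcedure };
    if (parameters != null)
    {
        command.Parameters.AddRange(parameters);
    }
    return command;
}

// usage, roughly:
// using (var conn = new SqlConnection(this.ConnectionString))
// using (var command = CreateStoredProcCommand(conn, "GetItem", new SqlParameter("@ID", ID)))
// {
//     conn.Open();
//     // execute and read here
// }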
In my projects I usually solve this problem by creating a utility class that contains all the methods to access the DB and that handles internally all the stuff related to the DB connection and the adapter.
For example, a class called DBSql which contains a connection (SqlConnection connection;) as a private member and the following methods:
//execute the query passed to the function
public System.Data.DataSet ExecuteQuery(string query)
//returns if a query returns rows or not
public bool HasRows(string query)
//execute commands like update/insert/etc...
public int ExecuteNonQuery(string sql)
In my class you just pass a string, and the class initializes the various DataAdapter and Command objects to execute it and returns a DataSet. Obviously you can extend it to manage parameters, transactions and everything else.
In this way you are sure that the connection and the related objects are always handled the same way and, hopefully, in a correct way.
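As an illustration of the shape of such a class, here is a sketch with only ExecuteQuery filled in (the constructor and the details are assumptions, not the original code):
using System.Data;
using System.Data.SqlClient;

public class DBSql
{
    private readonly SqlConnection connection;

    public DBSql(string connectionString)
    {
        connection = new SqlConnection(connectionString);
    }

    // execute the query passed to the function and return the results as a DataSet
    public DataSet ExecuteQuery(string query)
    {
        var dataSet = new DataSet();
        using (var adapter = new SqlDataAdapter(query, connection))
        {
            adapter.Fill(dataSet); // Fill opens and closes the connection as needed
        }
        return dataSet;
    }
}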
You can use a utility file, such as SqlHelper.cs from Microsoft Data Access Application Block. Then all the code you need is this:
using (SqlDataReader sdr = SqlHelper.ExecuteReader(this.ConnectionString, "GetItem", ID))
{
while (sdr.Read())
{
ret = this.Convert(sdr);
}
}
You could start using LINQ to SQL, which has its own DataClass system in which you just drag and drop your database tables and stored procedures. Then you just have to create an instance at the top of your classes -- private MyCustomDataClass _db = new MyCustomDataClass(); -- and you can type _db.<here all the data tables and SPROCs will appear for you to choose>.
Example (once all SPROCs are added to the DataClass):
private MyCustomDataClass _db = new MyCustomDataClass();
public void MethodToRunSPROC(string email, Guid userId)
{
_db.MySPROC_AddEmailToUser(email, userId);
}