Hi all, I just had a quick question. For whatever reason, a piece of code periodically does not return, and I'm not 100% sure why yet. To work around this for now, I want to know: is there a way to put a timeout on the Close() method below, so that if it does not finish within a minute or so, execution just moves on?
Any advice would be appreciated. Thank you.
If it makes any difference, the original author noted that he believed it hangs on Close() and wrote "Maybe too fast?" (The connection is an OLE DB connection to Netezza, and the whole application is heavily multi-threaded.)
Anyway, for now I just want the application to at least finish instead of hanging in that exception handler.
Below is the Close() method, which I believe is not returning.
catch (Exception) {
    Close(); // if we have an error, close everything down and then rethrow it
    throw;
}
public void Close() {
    if (null != Command) {
        Command.Cancel();
        Command.Dispose();
        Command = null;
    }
    if (null != Connection) {
        if (Connection.State != System.Data.ConnectionState.Closed)
            Connection.Close();
        Connection.Dispose();
        Connection = null;
    }
}
Rather than a timeout on a method, do you really mean a timeout on a command?
Based on that Close(), you are sharing Command and Connection.
That is not a good design for a heavily multi-threaded application.
It is not a good design for even a lightly multi-threaded application.
DbCommand has a CommandTimeout property.
A using statement will perform cleanup (including closing the connection).
string connectionString = "";
// Wait for a 5 second delay in the command
string queryString = "waitfor delay '00:00:05'";
using (OleDbConnection connection = new OleDbConnection(connectionString)) {
    connection.Open();
    OleDbCommand command = connection.CreateCommand();
    command.CommandText = queryString;
    // Set the command timeout to 1 second
    command.CommandTimeout = 1;
    try {
        command.ExecuteNonQuery();
    }
    catch (DbException e) {
        Console.WriteLine("Got expected DbException due to command timeout");
        Console.WriteLine(e);
    }
}
Assuming you're using .NET 4.0 or above, you can use the TPL to do this via the System.Threading.Tasks.Task object. You create a Task to run the method asynchronously, then Wait on that task for your timeout duration; if the wait expires, you let the main thread continue.
Task timeoutTask = new Task(Close); // create a Task around the Close method.
timeoutTask.Start(); // run asynchronously.
bool completedSuccessfully = timeoutTask.Wait(TimeSpan.FromMinutes(1));
if (completedSuccessfully)
{
    // Yay!
}
else
{
    logger.Write("Close command did not return in time. Continuing");
}
In this example, the Close method will keep on running in the background, but your main thread can continue.
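If this comes up in more than one place, the same pattern can be wrapped in a small helper. This is a minimal sketch rather than code from the original post; the TryRunWithTimeout name and the Task.Run variant are my own assumptions (Task.Run requires .NET 4.5+; on 4.0, create and Start the Task as above):
using System;
using System.Threading.Tasks;
static class TimeoutHelper
{
    // Runs an Action on a background task and waits up to the given timeout.
    // Returns false if the action did not finish in time; note that the
    // abandoned work keeps running in the background, it is not cancelled.
    public static bool TryRunWithTimeout(Action action, TimeSpan timeout)
    {
        Task task = Task.Run(action);
        try
        {
            return task.Wait(timeout);
        }
        catch (AggregateException)
        {
            // The action itself threw; treat that as "finished" for timeout purposes.
            return true;
        }
    }
}
// Usage (illustrative):
// if (!TimeoutHelper.TryRunWithTimeout(Close, TimeSpan.FromMinutes(1)))
//     logger.Write("Close did not return in time. Continuing");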
I'm just starting out with async and Tasks, and my code has stopped processing. It happens when I have an incoming network packet and I try to communicate with the database inside the packet handler.
public class ClientConnectedPacket : IClientPacket
{
private readonly EntityFactory _entityFactory;
public ClientConnectedPacket(EntityFactory entityFactory)
{
_entityFactory = entityFactory;
}
public async Task Handle(NetworkClient client, ClientPacketReader reader)
{
client.Entity = await _entityFactory.CreateInstanceAsync( reader.GetValueByKey("unique_device_id"));
// this Console.WriteLine never gets reached
Console.WriteLine($"Client [{reader.GetValueByKey("unique_device_id")}] has connected");
}
}
The Handle method gets called from an async method:
if (_packetRepository.TryGetPacketByName(packetName, out var packet))
{
await packet.Handle(this, new ClientPacketReader(packetName, packetData));
}
else
{
Console.WriteLine("Unknown packet: " + packetName);
}
Here is the method which I think is causing the issue:
public async Task<Entity> CreateInstanceAsync(string uniqueId)
{
await using (var dbConnection = _databaseProvider.GetConnection())
{
dbConnection.SetQuery("SELECT COUNT(NULL) FROM `entities` WHERE `unique_id` = @uniqueId");
dbConnection.AddParameter("uniqueId", uniqueId);
var row = await dbConnection.ExecuteRowAsync();
if (row != null)
{
return new Entity(uniqueId, false);
}
}
return new Entity(uniqueId,true);
}
DatabaseProvider's GetConnection method:
public DatabaseConnection GetConnection()
{
var connection = new MySqlConnection(_connectionString);
var command = connection.CreateCommand();
return new DatabaseConnection(_logFactory.GetLogger(), connection, command);
}
DatabaseConnection's constructor:
public DatabaseConnection(ILogger logger, MySqlConnection connection, MySqlCommand command)
{
_logger = logger;
_connection = connection;
_command = command;
_connection.Open();
}
When I comment out this line, it reaches the Console.WriteLine
_connection.Open();
I ran a POC project spinning up 100 parallel tasks, with both MySql.Data 8.0.19 and MySqlConnector 0.63.2, in a .NET Core 3.1 console application. I create, open and dispose the connection within the context of every single task. Both providers run to completion without errors.
The difference is that MySql.Data queries run synchronously even though the library provides async method signatures, e.g. ExecuteReaderAsync() or ExecuteScalarAsync(), while MySqlConnector runs truly asynchronously.
You may be running into:
- a deadlock situation not specifically related to the MySQL provider
- exceptions inside your tasks that are not being handled properly (inspect the task's associated AggregateException and also monitor the MySQL db logs)
- execution that is still blocked (not yet returning a result) at the point where you assume it is not working, if you are running a high number of parallel tasks with MySql.Data, since it executes synchronously
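For reference, here is a minimal sketch of the kind of per-task connection test described above. It is an illustration rather than the original POC; the connection string and query are placeholder assumptions, and the MySqlConnector namespace shown is the one used from version 1.0 onward (the 0.x versions used MySql.Data.MySqlClient):
using System;
using System.Linq;
using System.Threading.Tasks;
using MySqlConnector;
class PerTaskConnectionPoc
{
    static async Task Main()
    {
        const string connectionString = "Server=localhost;Database=test;Uid=test;Pwd=test"; // placeholder
        // Spin up 100 tasks; each one creates, opens and disposes its own connection.
        var tasks = Enumerable.Range(0, 100).Select(async i =>
        {
            using var connection = new MySqlConnection(connectionString);
            await connection.OpenAsync();
            using var command = connection.CreateCommand();
            command.CommandText = "SELECT 1"; // placeholder query
            await command.ExecuteScalarAsync();
        });
        await Task.WhenAll(tasks);
        Console.WriteLine("All 100 tasks completed.");
    }
}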
Multi-threading with MySQL must use independent connections. Given that, multithreading is not a MySQL question but an issue for the client language, C# in your question.
That is, build your threads without regard to MySQL, then create a connection in each thread that needs to do queries. It will be on your shoulders if you need to pass data between the threads.
I usually find that optimizing queries eliminates the temptation to multi-thread my applications.
I have an odd situation with TransactionScope and async/synchronous SQL calls that I'm having difficulty understanding. I hope that someone with a deeper understanding of the ins and outs of these kinds of operations can shed some light on the issue.
The situation:
I have an NUnit test fixture which creates a TransactionScope during [SetUp] and disposes it at [TearDown] to let each test run on the same data. I have a series of tests which kick off an asynchronous operation on the database and then execute a synchronous operation on the database. The first such test completes successfully. The second such test fails with "There is already an open DataReader associated with this Command which must be closed first.".
If I comment out the TransactionScope entirely, all the tests pass.
I tried various different TransactionScope options, and Complete / Dispose, but the same issue occurs.
I am using the Resharper test runner on an NUnit test, .NET 4.5.1.
I realize the "correct" answer may be "make everything async await". That's not an option for me, unfortunately.
I don't want to enable MARS, as this issue only occurs in tests.
I don't want to use GetAwaiter().GetResult() due to the potential deadlocks.
What it looks like to me is that once a TransactionScope.Dispose/Complete is called, the automatic SQLConnection pooling loses track of which connections have open DataReaders. It hands out the same SqlConnection to two simultaneously running operations, and the second dies.
My primary question is "what is causing this behavior (specifically)?"
My secondary question is "is there anything that can be done to safely resolve the issue?"
The replicating code below prints out the client connection IDs. On my machine, the ClientConnectionIds for the ASYNC and SYNC calls in the Second test case are always the same.
Replicating Code:
[TestFixture]
public class DataReaderTests
{
private TransactionScope _scope;
private string _connString = @"my connection string";
[SetUp]
public void Setup()
{
var options = new TransactionOptions()
{
IsolationLevel = IsolationLevel.ReadCommitted,
Timeout = TimeSpan.FromMinutes(1)
};
_scope = new TransactionScope(TransactionScopeOption.RequiresNew, options, TransactionScopeAsyncFlowOption.Enabled);
}
[Test]
[TestCase("First")]
[TestCase("Second")]
public void Test(string name)
{
DoAsyncThing().ConfigureAwait(false);
using (var conn = new SqlConnection(_connString))
{
try
{
conn.Open();
Console.WriteLine("SYNC: " + conn.ClientConnectionId);
using (var cmd = conn.CreateCommand())
{
cmd.CommandText = "SELECT 1";
using (var reader = cmd.ExecuteReader())
{
while (reader.Read())
{
int id = reader.GetInt32(0);
}
}
}
}
catch (TransactionAbortedException tax)
{
Console.WriteLine("ERROR: " + ((SqlException)tax.InnerException.InnerException).ClientConnectionId);
throw;
}
}
}
private async Task DoAsyncThing()
{
using (var connection = new SqlConnection(_connString))
{
await connection.OpenAsync();
Console.WriteLine("ASYNC: " + connection.ClientConnectionId);
using (var cmd = connection.CreateCommand())
{
cmd.CommandText = "WAITFOR DELAY '00:02';";
await cmd.ExecuteNonQueryAsync();
Console.WriteLine("ASYNC COMPLETE");
}
}
}
[TearDown]
public void Teardown()
{
_scope.Dispose();
}
}
Check out this answer
I think the gist is that you cannot have two active sql commands executing over the same connection at the same time without a special connection string property. When you are operating under the transaction scope, you should find that both SqlConnection objects have the same client ID. However, if you remove the transaction scope they are different, which I believe implies that they are operating on separate connections.
Adding "MultipleActiveResultSets=true" to the connection string fixed the issue for me. Another alternative is to replace
DoAsyncThing().ConfigureAwait(false);
with
DoAsyncThing().ConfigureAwait(false).GetAwaiter().GetResult();
which will terminate the first command before starting the second command.
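For the first option, the connection string change would look roughly like this (a sketch; the server and database values are placeholders, not taken from the question):
private string _connString =
    @"Server=.;Database=MyTestDb;Integrated Security=true;MultipleActiveResultSets=true";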
I wrote some code that creates a database in .sqlite. Everything works, but I want to be sure that when the user starts the application for the first time, the database population runs to completion. If the user aborts the database population, the database must be deleted (because the application doesn't work with an incomplete resource). For now I use a thread to execute the method that creates the database, and I've declared the thread variable as a field of the class, like:
Thread t = new Thread(() => Database.createDB());
The Database.createDB() method creates the DB. Everything works and the DB is created correctly. Now I handle the closing of the window that creates the DB like this:
protected override void OnClosing(System.ComponentModel.CancelEventArgs e)
{
MessageBoxResult result = MessageBox.Show(
#"Sure?",
"Attention", MessageBoxButton.YesNo, MessageBoxImage.Question);
try
{
if (result == MessageBoxResult.Yes)
{
t.Abort();
if (File.Exists("Database.sqlite"))
{
File.Delete("SoccerForecast.sqlite");
Process.GetCurrentProcess().Kill();
} ....
The event fires correctly and the thread stops, but when the condition if (File.Exists("Database.sqlite")) is reached, I get:
Can't delete file - in use by another process.
But I've stopped the thread, so why does this exception appear? What am I doing wrong?
UPDATE:
In the CreateDb() method I also call methods of other classes; one of them has a structure like this:
public void setSoccer()
{
Database.m_dbConnection.Open();
string requestUrl = "...";
string responseText = Parser.Request(requestUrl);
List<SoccerSeason.RootObject> obj = JsonConvert.DeserializeObject<List<SoccerSeason.RootObject>>(responseText);
foreach (var championships in obj)
{
string sql = "string content";
SQLiteCommand command = new SQLiteCommand(sql, Database.m_dbConnection);
try
{
command.ExecuteNonQuery();
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
}
}
string query = "select * from SoccerSeason";
SQLiteCommand input = new SQLiteCommand(query, Database.m_dbConnection);
SQLiteDataReader reader = input.ExecuteReader();
int i = 0;
while (reader.Read())
{
//reading data previously inserted in the database
}
Database.m_dbConnection.Close();
}
I was wondering where I should put the flag variable, because this code has a different loop inside.
It could be that when you're aborting the thread it's not cleanly closing the database connections, hence the error you're seeing.
Might I suggest a slight redesign because using Thread.Abort is not ideal.
Instead use a variable as a cancel flag to notify the thread to shut down.
Then when the thread detects that this cancel flag is set it can properly close connections and handle the database delete itself.
Update:
A brief example to illustrate what I mean; it ain't pretty and it won't compile but it gives the general idea.
public class Database
{
public volatile bool Stop = false;
public void CreateDb()
{
if(!Stop)
{
// Create database
}
if(!Stop)
{
// Open database
// Do stuff with database
}
// blah blah ...
if(Stop)
{
// Close your connections
// Delete your database
}
}
}
...
protected override void OnClosing(CancelEventArgs e)
{
Database.Stop = true;
}
And now that you know roughly what you're looking for, I heartily recommend searching for posts on thread cancellation by people who know what they're talking about and can tell you how to do it right.
These might be reasonable starting points:
How to: Create and Terminate Threads
.NET 4.0+ actually has a CancellationToken type with this very purpose in mind: Cancellation in Managed Threads
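For example, here is a minimal sketch of the CancellationToken approach; the class, method and field names are illustrative assumptions, not taken from the original code:
using System;
using System.Threading;
using System.Threading.Tasks;
public class DatabaseBuilder
{
    private readonly CancellationTokenSource _cts = new CancellationTokenSource();
    private Task _createTask;
    public void StartCreateDb()
    {
        _createTask = Task.Run(() => CreateDb(_cts.Token));
    }
    // Called from OnClosing instead of Thread.Abort.
    public void CancelAndCleanUp()
    {
        _cts.Cancel();
        _createTask?.Wait(); // let CreateDb close its connection and delete the file
    }
    private void CreateDb(CancellationToken token)
    {
        try
        {
            // Open the connection, then populate in steps, calling
            // token.ThrowIfCancellationRequested() between steps.
        }
        catch (OperationCanceledException)
        {
            // Close the connection here, then delete the incomplete database file.
        }
    }
}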
I have a problem at work with a simple insert method occasionally timing out due to a scheduled clean-up task on a database table. This task runs every ten minutes and during its execution my code often records an error in the event log due to 'the wait operation timed out'.
One of the solutions I'm considering is to make the code calling the stored procedure asynchronous, and in order to do this I first started looking at the BeginExecuteNonQuery method.
I've tried using the BeginExecuteNonQuery method but have found that it quite often does not insert the row at all. The code I've used is as follows:
SqlConnection conn = daService.CreateSqlConnection(dataSupport.DBConnString);
SqlCommand command = daService.CreateSqlCommand("StoredProc");
try {
command.Connection = conn;
command.Parameters.AddWithValue("page", page);
command.Parameters.AddWithValue("Customer", customerId);
conn.Open();
command.BeginExecuteNonQuery(delegate(IAsyncResult ar) {
SqlCommand c = (SqlCommand)ar.AsyncState;
c.EndExecuteNonQuery(ar);
c.Connection.Close();
}, command);
} catch (Exception ex) {
LogService.WriteExceptionEntry(ex, EventLogEntryType.Error);
} finally {
command.Connection.Close();
command.Dispose();
conn.Dispose();
}
Obviously, I'm not expecting an instant insert but I am expecting it to be inserted after five minutes on a low usage development database.
I've now switched to the following code, which does do the insert:
System.Threading.ThreadPool.QueueUserWorkItem(delegate {
using (SqlConnection conn = daService.CreateSqlConnection( dataSupport.DBConnString)) {
using (SqlCommand command = daService.CreateSqlCommand("StoredProcedure")) {
command.Connection = conn;
command.Parameters.AddWithValue("page", page);
command.Parameters.AddWithValue("customer", customerId);
conn.Open();
command.ExecuteNonQuery();
}
}
});
I've got a few questions, some of them are assumptions:
As my insert method's signature is void, I'm presuming code that calls it doesn't wait for a response. Is this correct?
Is there a reason why BeginExecuteNonQuery doesn't run the stored procedure? Is my code wrong?
Most importantly, if I use the QueueUserWorkItem (or a well-behaved BeginExecuteNonQuery) am I right in thinking this will have the desired result? Which is, that an attempt to run the stored procedure whilst the scheduled task is running will see the code executing after the task completes, rather than its current timing out?
Edit
This is the version I'm using now in response to the comments and answers I've received.
SqlConnection conn = daService.CreateSqlConnection(
string.Concat("Asynchronous Processing=True;",
dataSupport.DBConnString));
SqlCommand command = daService.CreateSqlCommand("StoredProc");
command.Connection = conn;
command.Parameters.AddWithValue("page", page);
command.Parameters.AddWithValue("customer", customerId);
conn.Open();
command.BeginExecuteNonQuery(delegate(IAsyncResult ar) {
SqlCommand c = (SqlCommand)ar.AsyncState;
try {
c.EndExecuteNonQuery(ar);
} catch (Exception ex) {
LogService.WriteExceptionEntry(ex, EventLogEntryType.Error);
} finally {
c.Connection.Close();
c.Dispose();
conn.Dispose();
}
}, command);
Is there a reason why BeginExecuteNonQuery doesn't run the stored
procedure? Is my code wrong?
Probably you didn't add Asynchronous Processing=True to the connection string.
Also, there can be a situation where, by the time the response from SQL is ready, the ASP.NET response has already been sent. That's why you need to use Page.RegisterAsyncTask (plus AsyncTimeout); if you use Web Forms asynchronous pages, you should also add Async="True" to the page directive.
P.S. The call to System.Threading.ThreadPool.QueueUserWorkItem is dangerous in ASP.NET apps; you should take care that the response has not already been sent.
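As a side note, on .NET 4.5+ the same insert can also be written with the Task-based async API (ExecuteNonQueryAsync), which as of .NET 4.5 no longer requires the Asynchronous Processing keyword. This is only a hedged sketch reusing the question's daService/dataSupport helpers; the method name and parameter types are assumptions:
private async Task InsertAsync(int page, int customerId)
{
    using (SqlConnection conn = daService.CreateSqlConnection(dataSupport.DBConnString))
    using (SqlCommand command = daService.CreateSqlCommand("StoredProc"))
    {
        command.Connection = conn;
        command.Parameters.AddWithValue("page", page);
        command.Parameters.AddWithValue("customer", customerId);
        await conn.OpenAsync();
        await command.ExecuteNonQueryAsync();
    }
}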
I have a very basic C# console app that connects to a db, executes a query, closes the connection, and exits out of the app.
The problem is, the app takes almost 3 seconds to exit.
I have displayed the time at each step to see why it is running slowly and it isn't during any of the processing, just when it exits out of the app.
Does anyone know how to speed this up?
Here is the output:
Opening Connection:94ms
26:OK
Closing Connection:356ms
Closed Connection:357ms
Exiting:358ms
[Delay of about 3 seconds before it exits]
And here is the code:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;
namespace CheckSQL
{
class Program
{
static Stopwatch watch = new Stopwatch();
static void Main(string[] args)
{
if (args.Length == 0) return;
watch.Start();
string connstring = args[0];
string sqlquery = args[1];
ExecuteScalar(connstring, sqlquery);
watch.Stop();
Console.WriteLine(string.Format("Exiting:{0}ms", watch.ElapsedMilliseconds));
}
private static void ExecuteScalar(string connstring, string sqlquery)
{
SqlConnection sqlconn = new SqlConnection(connstring);
SqlCommand sqlcmd = new SqlCommand(sqlquery, sqlconn);
try
{
Console.WriteLine(string.Format("Opening Connection:{0}ms", watch.ElapsedMilliseconds));
sqlconn.Open();
Console.WriteLine(string.Format("{0}:OK", sqlcmd.ExecuteScalar()));
}
catch (Exception ex)
{
Console.WriteLine(string.Format("0:{0}", ex.Message));
}
finally
{
if (sqlconn.State == ConnectionState.Open)
{
Console.WriteLine(string.Format("Closing Connection:{0}ms", watch.ElapsedMilliseconds));
sqlconn.Close();
Console.WriteLine(string.Format("Closed Connection:{0}ms", watch.ElapsedMilliseconds));
}
}
}
}
}
I had a similar problem with a C# console app, and found that the issue had something to do with the cleanup of the connection pool when the app exits. With connections in the pool, I measured a 1.6 second delay in exiting (timed by an external script calling my EXE). Although I wasn't entirely happy with the solution, I found that issuing the following before exiting removed the delay:
System.Data.SqlClient.SqlConnection.ClearAllPools();
I would guess that using "Pooling=False" in your connection string would also do the trick, but you would only do that if you didn't need the benefits of pooling.
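In the Main method from the question, that would look roughly like this (a sketch, with the rest of the method unchanged):
static void Main(string[] args)
{
    if (args.Length == 0) return;
    watch.Start();
    string connstring = args[0];
    string sqlquery = args[1];
    ExecuteScalar(connstring, sqlquery);
    // Clear the connection pool so process exit is not delayed by pool cleanup.
    System.Data.SqlClient.SqlConnection.ClearAllPools();
    watch.Stop();
    Console.WriteLine(string.Format("Exiting:{0}ms", watch.ElapsedMilliseconds));
}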
Closing a connection (calling sqlconn.Close() ) only means returning it to the ConnectionPool.
So there still is some housekeeping to be done on exit.
3 seconds seems a bit long, but there are several components (CLR, Database) in play here.
I don't think it's possible to do this the "correct" way. How can you speed up a job that simply takes some time? The only real option in this case would be to optimize the algorithm, but you can't do that here. As I understand it, you want to return control immediately after checking some database information. You can work around this by creating a two-process system: the first process starts the second, and the second checks the information in the database and sends the results to the first process, which communicates with the user. That way you always return control as soon as the results are retrieved. The second process will take some time to terminate, but that shouldn't worry you, because you will already have control back by then.
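A minimal sketch of that two-process idea, assuming the checker is built as a separate console executable; the executable name and output format are made up for illustration, and argument quoting is glossed over:
using System;
using System.Diagnostics;
class Launcher
{
    static void Main(string[] args)
    {
        // Start the checker process (hypothetical name) and read just the result line.
        var psi = new ProcessStartInfo("CheckSQL.exe", string.Join(" ", args))
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using (var checker = Process.Start(psi))
        {
            // Continue as soon as the result arrives; the checker finishes
            // its pool cleanup in the background after this process moves on.
            string result = checker.StandardOutput.ReadLine();
            Console.WriteLine("Result: " + result);
        }
    }
}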