I have a BlockingCollection of lists that I process in a task: the data from the blocking collection is added to a database.
Here is some part of the code:
private static Task _databaseTask;
private static readonly BlockingCollection<List<SomeItem>> DataToBeInsertedToDatabase = new BlockingCollection<List<SomeItem>>();
public static void ProcessInsertsCollection()
{
_databaseTask = Task.Factory.StartNew(() =>
{
foreach (List<SomeItem> data in DataToBeInsertedToDatabase.GetConsumingEnumerable())
{
try
{
DateTime[] dateTimes = data.Select(d => d.ContributionDateTime).ToArray();
string[] values = data.Select(d => d.Value).ToArray();
string[] someOtherValues = data.Select(d => d.SomeOtherValues).ToArray();
Program.IncrementDatabaseRecordsRegistered(data.Count);
DatabaseClass.InsertValues(dateTimes, values, someOtherValues);
}
catch (Exception ex)
{
//log the error
}
}
});
}
Function from DatabaseClass:
public static bool InsertValues(DateTime[] dateTimes, string[] values, string[] someOtherValues)
{
if (!IsConnected())
{
Connect();
}
var rowsInserted = 0;
try
{
using (OracleCommand command = _connection.CreateCommand())
{
command.CommandText =
string.Format(
"Insert into {0} (*****) VALUES (:1, :2, :3)",
_tableName);
command.Parameters.Add(new OracleParameter("1",
OracleDbType.Date,
dateTimes,
ParameterDirection.Input));
command.Parameters.Add(new OracleParameter("2",
OracleDbType.Varchar2,
values,
ParameterDirection.Input));
command.Parameters.Add(new OracleParameter("3",
OracleDbType.Varchar2,
someOtherValues,
ParameterDirection.Input));
command.ArrayBindCount = dateTimes.Length;
rowsInserted = command.ExecuteNonQuery();
}
}
catch (Exception ex)
{
//log the error
}
return rowsInserted != 0;
}
The problem is that after a few hours of the application running, data is still being added to the blocking collection but no longer processed. When I debug it, it does not stop at any breakpoint inside the task. When I check the _databaseTask variable, it shows that the task is running. The _connection variable shows that the database is connected. I added try/catch to the foreach and also to the InsertValues function, but it did not help. I made everything static because I first thought this task was being collected by the GC. But it was not.
The problem is probably connected with the database calls, because my application has another blocking collection that is processed in a task without any problems; that task does not call any database functions.
Could anyone please help me find out why the collection is no longer consumed after a few hours?
EDIT:
Please do not vote down when you do not understand the question. You lower the possibility that someone who knows the solution sees my question.
I did a lot of research on the problem.
Today I noticed that the thread probably hangs on the line rowsInserted = command.ExecuteNonQuery();. I will try to add a timeout there and also to add a transaction.
After a difficult investigation I found the issue. I am adding the answer; maybe it will help someone.
The problem was with the line rowsInserted = command.ExecuteNonQuery();
The default timeout for OracleCommand is 0, which enforces no time limit. As the command was blocked by some other session, it hung forever. The solution was to add a timeout to the command via the CommandTimeout property of OracleCommand, and then to implement a retry mechanism for the insertion.
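The fix described above can be sketched as follows. This is a minimal sketch, not the exact production code: it assumes the Oracle.ManagedDataAccess client and the `_connection` field from DatabaseClass, and the timeout value and retry policy are illustrative.

```csharp
// Sketch of the fix: bound ExecuteNonQuery with a timeout and retry a few
// times instead of hanging forever on a blocked session.
// Assumes: Oracle.ManagedDataAccess.Client, System.Threading, and the
// _connection field from DatabaseClass. Constants are illustrative.
private const int CommandTimeoutSeconds = 30; // 0 (the default) means "wait forever"
private const int MaxAttempts = 3;

public static bool InsertValuesWithRetry(DateTime[] dateTimes, string[] values, string[] someOtherValues)
{
    for (int attempt = 1; attempt <= MaxAttempts; attempt++)
    {
        try
        {
            using (OracleCommand command = _connection.CreateCommand())
            {
                command.CommandTimeout = CommandTimeoutSeconds; // abort instead of blocking indefinitely
                // ... set CommandText and the array-bound parameters exactly as in InsertValues ...
                command.ArrayBindCount = dateTimes.Length;
                return command.ExecuteNonQuery() != 0;
            }
        }
        catch (OracleException) when (attempt < MaxAttempts)
        {
            // Likely a timeout caused by a blocking session: back off briefly, then retry.
            Thread.Sleep(TimeSpan.FromSeconds(5 * attempt));
        }
        // On the final attempt the exception propagates to the caller's
        // existing try/catch in the consuming loop, where it gets logged.
    }
    return false;
}
```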
Related
I'm having connection pool issues on my service (max pool size reached). Everywhere I open a connection I wrap it in a using statement to dispose of it correctly, but I think something is preventing that from working. I suspect it is because I'm using a method that expects a SqlCommand as a parameter. This is an example:
private void QueryDB(string sConnString, SqlCommand oComm)
{
using (SqlConnection connection = new SqlConnection(sConnString))
{
try
{
connection.Open();
oComm.Connection = connection;
oComm.CommandTimeout = 2;
oComm.ExecuteNonQuery();
}
catch (SqlException e)
{
//log exception
}
catch (Exception e)
{
//log exception
}
}
}
The reason why I do this is because I need to assemble the parameters outside that method, like this:
public void Example1()
{
using (SqlCommand command = new SqlCommand())
{
command.CommandText = "SELECT TOP 1 * FROM Table ORDER BY column1 DESC";
QueryDB(_connString, command);
}
}
public void Example2()
{
SqlCommand command = new SqlCommand();
command.CommandText = "UPDATE Table SET column1 = @value WHERE column2 = @number";
command.Parameters.Add(new SqlParameter { ParameterName = "@value", Value = "someValue", SqlDbType = SqlDbType.VarChar });
command.Parameters.Add(new SqlParameter { ParameterName = "@number", Value = 3, SqlDbType = SqlDbType.Int });
QueryDB(_connString, command);
}
In Example1 I try disposing the SqlCommand, but I don't know if it works like that. Another thing to consider is that I'm running a timer every second that executes Example1 or Example2, and I don't know if that has something to do with the problem. The max pool size error happens sometimes, not every day, and it delays other queries. Is there something I can do to improve this behavior? Thanks
I don't really know if this will solve your connection pool issues, but, expanding on @Jack A's comment on your question, a better way to structure your code might be to change your QueryDB method to take a delegate that updates the SqlCommand with the necessary information. Then you can make sure both your SqlConnection and SqlCommand are taken care of correctly within that method.
private void QueryDB(string sConnString, Action<SqlCommand> commandDelegate)
{
using (SqlConnection oCon = new SqlConnection(sConnString))
using(SqlCommand oComm = new SqlCommand())
{
try
{
oCon.Open();
oComm.Connection = oCon;
oComm.CommandTimeout = 2;
commandDelegate(oComm);
oComm.ExecuteNonQuery();
}
catch (SqlException e)
{
//log exception
}
catch (Exception e)
{
//log exception
}
}
}
You could then use it in either of the following ways in your code:
public void Uses()
{
QueryDB(_connString, (oComm) => oComm.CommandText = "SELECT TOP 1 * FROM Table ORDER BY column1 DESC");
QueryDB(_connString, longerDelegate);
}
private void longerDelegate(SqlCommand oComm)
{
oComm.CommandText = "UPDATE Table SET column1 = @value WHERE column2 = @number";
oComm.Parameters.Add(new SqlParameter { ParameterName = "@value", Value = "someValue", SqlDbType = SqlDbType.VarChar });
oComm.Parameters.Add(new SqlParameter { ParameterName = "@number", Value = 3, SqlDbType = SqlDbType.Int });
}
Again, I'm not sure this will solve your pooling problem, but it ensures everything is, at least, neatly wrapped in your QueryDB method.
I want to thank you all for your responses! After doing a lot of research and modifications, I implemented @Jack A's and @Jhon Busto's recommendations. But you were right, Jhon: it didn't solve the connection pool problem. It turns out that the real problem was the timer. I didn't notice that it was executing Example1 and Example2 not every second, but every 50 ms or less, so I presume it created a lot of connections in the pool. I was changing the Timer.Interval property of the timer, but I didn't know this:
If Enabled and AutoReset are both set to false, and the timer has previously been enabled, setting the Interval property causes the Elapsed event to be raised once, as if the Enabled property had been set to true. To set the interval without raising the event, you can temporarily set the Enabled property to true, set the Interval property to the desired time interval, and then immediately set the Enabled property back to false.
Source: https://learn.microsoft.com/en-us/dotnet/api/system.timers.timer.interval?view=netframework-4.7.2
So, whenever I needed to change the Timer.Interval, I followed Microsoft's documentation. I tested everything again and it worked. Be careful using timers! hehe :)
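The workaround quoted from the documentation can be sketched as a small helper. This is illustrative only: it generalizes the documented steps by restoring whatever Enabled state the timer had before the change.

```csharp
using System.Timers;

// Sketch of the documented Interval workaround: toggle Enabled around the
// change so a stopped, non-AutoReset timer does not raise a stray Elapsed
// event when Interval is set.
static void SetIntervalQuietly(Timer timer, double newIntervalMs)
{
    bool wasEnabled = timer.Enabled;
    timer.Enabled = true;            // temporarily enable, per the docs
    timer.Interval = newIntervalMs;  // set the new interval
    timer.Enabled = wasEnabled;      // restore the previous state
}
```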
I am trying to understand what happens in the background when a simple select query is executed by a client.
I am using C# ASP.NET WebForms, and I checked the traffic with Wireshark.
public DBC(string procedureName, params object[] procParams)
{
strError = null;
using (MySqlConnection connection = new MySqlConnection(GetConnectionString()))
{
try
{
connection.Open();
MySqlCommand cmd = new MySqlCommand(procedureName, connection);
cmd.CommandType = CommandType.StoredProcedure;
//if we use params for stored procedure
if (procParams != null)
{
int i = 1;
foreach (object paramValue in procParams)
{
cmd.Parameters.Add(new MySqlParameter("@param_" + i, paramValue.ToString()));
i++;
}
}
if (procedureName.Contains("get"))
{
dtLoaded = new DataTable();
dtLoaded.Load(cmd.ExecuteReader());
}
else
{
cmd.ExecuteNonQuery();
}
}
catch (Exception ex)
{
strError = ErrorHandler.ErrorToMessage(ex);
}
finally
{
connection.Close();
connection.Dispose();
}
}
}
This is a simple SELECT * FROM TABLE query, wrapped in a try-catch statement. In the finally block, the connection is closed and disposed.
Why does it cause 43 packets of traffic? I don't understand why there are so many. Could somebody explain this to me?
Many thanks!
I assume you're using Oracle's Connector/NET. It performs a lot of not-strictly-necessary queries after opening a connection, e.g., SHOW VARIABLES to retrieve some server settings. (In 8.0.17 and later, this has been optimised slightly.)
Executing a stored procedure requires retrieving information about the stored procedure (to align parameters); it's more "expensive" than just executing a SQL statement directly. (You can disable this with CheckParameters=false, but I wouldn't recommend it.)
You can switch to MySqlConnector if you want a more efficient .NET client library. It's been tuned for performance (in both client CPU time and network I/O) and won't perform as much unnecessary work when opening a connection and executing a query. (MySqlConnector is the client library used for the .NET/MySQL benchmarks in the TechEmpower Framework Benchmarks.)
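The switch is mostly a package and namespace change, since MySqlConnector mirrors the classic ADO.NET surface. A hedged sketch, reusing GetConnectionString and procedureName from the question's code:

```csharp
// Same call pattern as the question's DBC method, but via the MySqlConnector
// NuGet package (namespace MySqlConnector instead of MySql.Data.MySqlClient).
// GetConnectionString and procedureName come from the question's code.
using MySqlConnector;

using (var connection = new MySqlConnection(GetConnectionString()))
using (var cmd = new MySqlCommand(procedureName, connection))
{
    connection.Open(); // noticeably fewer setup round-trips on the wire
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@param_1", paramValue.ToString());
    cmd.ExecuteNonQuery();
}
```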
I have an application leveraging Entity Framework 6. For queries that are relatively fast (e.g. taking less than a minute to execute) it is working fine.
But I have a stored procedure that queries a table without appropriate indices, so its execution time has been clocked at anywhere between 55 and 63 seconds. Obviously, indexing the table would bring that time down, but unfortunately I don't have the luxury of controlling the situation and have to play the hand I was dealt.
What I am seeing is that when EF6 is used to call the stored procedure, it continues through the code in less than 3 seconds total and returns a result of 0 records, when I know the stored procedure returns 6 records when executed directly in the database.
There are no errors whatsoever, so the code appears to execute fine.
As a test, I constructed some code using the SqlClient library and made the same call, and it returned 6 records. I also noted that, unlike the EF6 execution, it took a few seconds longer, as if it were actually waiting for a response.
Setting the CommandTimeout on the context doesn't appear to make any difference either, and I suspect that is because it isn't timing out but rather not waiting for the result before continuing through the code.
I don't recall seeing this behavior in prior versions, but then again maybe the time required by my prior queries was within the range EF expected.
Is there a way to set how long EF will wait for a response before continuing through the code? Or is there a way to enforce an asynchronous operation, since it seems to be a synchronous task by default? Or is there a potential flaw in the code?
Sample of Code exhibiting (synchronous) execution: No errors but no records returned
public static List<Orphan> GetOrphanItems()
{
try
{
using (var ctx = new DBEntities(_defaultConnection))
{
var orphanage = from orp in ctx.GetQueueOrphans(null)
select orp;
var orphans = orphanage.Select(o => new Orphan
{
ServiceQueueId = o.ServiceQueueID,
QueueStatus = o.QueueStatus,
OrphanCode = o.OrphanCode,
Status = o.Status,
EmailAddress = o.EmailAddress,
TemplateId = o.TemplateId
}).ToList();
return orphans;
}
}
catch (Exception exc)
{
// Handle the error, then rethrow (or return an empty list)
// so that all code paths return a value
throw;
}
}
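On the timeout part of the question: EF6 does expose a per-context command timeout through Database.CommandTimeout, which can be set before the stored-procedure call. A sketch against the DBEntities context above; the value is illustrative:

```csharp
// Sketch: per-context command timeout in EF6, set before calling the
// imported stored procedure. The 120-second value is illustrative.
using (var ctx = new DBEntities(_defaultConnection))
{
    // Seconds to wait for any command this context issues;
    // null falls back to the provider default.
    ctx.Database.CommandTimeout = 120;

    var orphans = ctx.GetQueueOrphans(null).ToList();
}
```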
Sample code using the SqlClient library, which takes slightly longer to execute but returns 6 records:
public static List<Orphan> GetOrphanItems()
{
long ServiceQueueId = 0;
bool QueueStatus;
var OrphanCode = String.Empty;
DateTime Status;
var EmailAddress = String.Empty;
int TemplateId = 0;
var orphans = new List<Orphan> ();
SqlConnection conn = new SqlConnection(_defaultConnection);
try
{
var cmdText = "EXEC dbo.GetQueueOrphans";
SqlCommand cmd = new SqlCommand(cmdText, conn);
conn.Open();
SqlDataReader reader;
reader = cmd.ExecuteReader();
while(reader.Read())
{
long.TryParse(reader["ServiceQueueId"].ToString(), out ServiceQueueId);
bool.TryParse(reader["QueueStatus"].ToString(), out QueueStatus);
OrphanCode = reader["OrphanCode"].ToString();
DateTime.TryParse(reader["Status"].ToString(), out Status);
EmailAddress = reader["EmailAddress"].ToString();
int.TryParse(reader["TemplateId"].ToString(), out TemplateId);
orphans.Add(new Orphan { ServiceQueueId = ServiceQueueId, QueueStatus=QueueStatus, OrphanCode=OrphanCode,
EmailAddress=EmailAddress, TemplateId=TemplateId});
}
conn.Close();
}
catch (Exception exc)
{
// Handle the error
}
finally
{
conn.Close();
}
return orphans;
}
Check the signature of the executing method.
private async void MyMethod()
{
db.ExecuteProcedureAsync();
}
Forgetting to await a task in an async void method can cause the described behavior without any IntelliSense warning.
Fix:
private async Task MyMethod()
{
await db.ExecuteProcedureAsync();
}
Or just use db.ExecuteProcedureAsync().Wait() if you want to run it synchronously.
I have a problem at work with a simple insert method occasionally timing out due to a scheduled clean-up task on a database table. This task runs every ten minutes and during its execution my code often records an error in the event log due to 'the wait operation timed out'.
One of the solutions I'm considering is to make the code calling the stored procedure asynchronous, and in order to do this I first started looking at the BeginExecuteNonQuery method.
I've tried using the BeginExecuteNonQuery method but have found that it quite often does not insert the row at all. The code I've used is as follows:
SqlConnection conn = daService.CreateSqlConnection(dataSupport.DBConnString);
SqlCommand command = daService.CreateSqlCommand("StoredProc");
try {
command.Connection = conn;
command.Parameters.AddWithValue("page", page);
command.Parameters.AddWithValue("Customer", customerId);
conn.Open();
command.BeginExecuteNonQuery(delegate(IAsyncResult ar) {
SqlCommand c = (SqlCommand)ar.AsyncState;
c.EndExecuteNonQuery(ar);
c.Connection.Close();
}, command);
} catch (Exception ex) {
LogService.WriteExceptionEntry(ex, EventLogEntryType.Error);
} finally {
command.Connection.Close();
command.Dispose();
conn.Dispose();
}
Obviously, I'm not expecting an instant insert but I am expecting it to be inserted after five minutes on a low usage development database.
I've now switched to the following code, which does do the insert:
System.Threading.ThreadPool.QueueUserWorkItem(delegate {
using (SqlConnection conn = daService.CreateSqlConnection( dataSupport.DBConnString)) {
using (SqlCommand command = daService.CreateSqlCommand("StoredProcedure")) {
command.Connection = conn;
command.Parameters.AddWithValue("page", page);
command.Parameters.AddWithValue("customer", customerId);
conn.Open();
command.ExecuteNonQuery();
}
}
});
I've got a few questions, some of them are assumptions:
As my insert method's signature is void, I'm presuming code that calls it doesn't wait for a response. Is this correct?
Is there a reason why BeginExecuteNonQuery doesn't run the stored procedure? Is my code wrong?
Most importantly, if I use the QueueUserWorkItem (or a well-behaved BeginExecuteNonQuery) am I right in thinking this will have the desired result? Which is, that an attempt to run the stored procedure whilst the scheduled task is running will see the code executing after the task completes, rather than its current timing out?
Edit
This is the version I'm using now in response to the comments and answers I've received.
SqlConnection conn = daService.CreateSqlConnection(
string.Concat("Asynchronous Processing=True;",
dataSupport.DBConnString));
SqlCommand command = daService.CreateSqlCommand("StoredProc");
command.Connection = conn;
command.Parameters.AddWithValue("page", page);
command.Parameters.AddWithValue("customer", customerId);
conn.Open();
command.BeginExecuteNonQuery(delegate(IAsyncResult ar) {
SqlCommand c = (SqlCommand)ar.AsyncState;
try {
c.EndExecuteNonQuery(ar);
} catch (Exception ex) {
LogService.WriteExceptionEntry(ex, EventLogEntryType.Error);
} finally {
c.Connection.Close();
c.Dispose();
conn.Dispose();
}
}, command);
Is there a reason why BeginExecuteNonQuery doesn't run the stored
procedure? Is my code wrong?
You probably didn't add Asynchronous Processing=True to the connection string.
Also, there could be a situation where, by the time the response from SQL Server is ready, the ASP.NET response has already been sent.
That's why you need to use Page.RegisterAsyncTask (plus AsyncTimeout).
(If you use WebForms asynchronous pages, you should add Async="true" to the page directive.)
P.S. The call to System.Threading.ThreadPool.QueueUserWorkItem is dangerous in ASP.NET apps; you must take care that the response has not already been sent.
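The RegisterAsyncTask suggestion might look like this in a WebForms code-behind. A sketch assuming .NET 4.5+ and Async="true" in the @ Page directive; _connString and the procedure name are placeholders taken from the question's style, not real names:

```csharp
// Sketch: registering the database call as a page async task so the
// response is not sent before the work completes.
// Assumes: System.Data, System.Data.SqlClient, System.Web.UI, .NET 4.5+,
// and Async="true" in the @ Page directive.
protected void Page_Load(object sender, EventArgs e)
{
    RegisterAsyncTask(new PageAsyncTask(async () =>
    {
        using (var conn = new SqlConnection(_connString))          // placeholder
        using (var cmd = new SqlCommand("StoredProc", conn))       // placeholder
        {
            cmd.CommandType = CommandType.StoredProcedure;
            await conn.OpenAsync();
            await cmd.ExecuteNonQueryAsync();
        }
    }));
}
```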
*Edit: Please see my answer below for the solution.
Is there any danger in the following? I'm trying to track down what I think might be a race condition. I figured I'd start with this and go from there.
private BlockingCollection<MyTaskType> _MainQ = new BlockingCollection<MyTaskType>();
private System.Threading.Timer _CheckTask;
private void Start()
{
_CheckTask = new Timer(new TimerCallback(CheckTasks), null, 10, 5000);
}
private void CheckTasks(object state)
{
_CheckTask.Change(Timeout.Infinite, Timeout.Infinite);
GetTask();
_CheckTask.Change(5000,5000);
}
private void GetTask()
{
//get task from database to object
Task.Factory.StartNew( delegate {
AddToWorkQueue(); //this adds to _MainQ which is a BlockingCollection
});
}
private void AddToWorkQueue()
{
//do some stuff to get stuff to move
_MainQ.Add(dataobject);
}
edit: I am also using a static class to handle writing to the database. Each call passes its own unique data and is made from many threads, so no data is shared. Do you think this could be a source of contention?
Code below:
public static void ExecuteNonQuery(string connectionString, string sql, CommandType commandType, List<FastSqlParam> paramCollection = null, int timeout = 60)
{
//Console.WriteLine("{0} [Thread {1}] called ExecuteNonQuery", DateTime.Now.ToString("HH:mm:ss:ffffff"), System.Threading.Thread.CurrentThread.ManagedThreadId);
using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand(sql, connection))
{
try
{
if (paramCollection != null)
{
foreach (FastSqlParam fsqlParam in paramCollection)
{
try
{
SqlParameter param = new SqlParameter();
param.Direction = fsqlParam.ParamDirection;
param.Value = fsqlParam.ParamValue;
param.ParameterName = fsqlParam.ParamName;
param.SqlDbType = fsqlParam.ParamType;
command.Parameters.Add(param);
}
catch (ArgumentNullException anx)
{
throw new Exception("Parameter value was null", anx);
}
catch (InvalidCastException icx)
{
throw new Exception("Could not cast parameter value", icx);
}
}
}
connection.Open();
command.CommandType = commandType;
command.CommandTimeout = timeout;
command.ExecuteNonQuery();
if (paramCollection != null)
{
foreach (FastSqlParam fsqlParam in paramCollection)
{
if (fsqlParam.ParamDirection == ParameterDirection.InputOutput || fsqlParam.ParamDirection == ParameterDirection.Output)
try
{
fsqlParam.ParamValue = command.Parameters[fsqlParam.ParamName].Value;
}
catch (ArgumentNullException anx)
{
throw new Exception("Output parameter value was null", anx);
}
catch (InvalidCastException icx)
{
throw new Exception("Could not cast parameter value", icx);
}
}
}
}
catch (SqlException ex)
{
throw ex;
}
catch (ArgumentException ex)
{
throw ex;
}
}
}
per request:
FastSql.ExecuteNonQuery(connectionString, "someProc", System.Data.CommandType.StoredProcedure, new List<FastSqlParam>() { new FastSqlParam(SqlDbType.Int, "@SomeParam", variable) });
Also, I wanted to note that this code seems to fail at random when run from VS2010 [Debug or Release]. When I do a release build and run the installer on the dev server that will host it, the application has not crashed and has been running smoothly.
per request:
Current architecture of threads:
Thread A reads 1 record from a database scheduling table.
If a row is returned, Thread A launches a Task to log in to the resource and see if there are files to transfer. The Task references an object that contains data from the DataTable that was created using a static call. Basically as below.
If files are found, the Task adds them to _MainQ so they can be moved.
//Called from Thread A
void ProcessTask()
{
var parameters = new List<FastSqlParam>() { new FastSqlParam(SqlDbType.Int, "@SomeParam", variable) };
using (DataTable someTable = FastSql.ExecuteDataTable(connectionString, "someProc", CommandType.StoredProcedure, parameters))
{
SomeTask task = new SomeTask();
//assign task some data from dt.Rows[0]
if (task != null)
{
Task.Factory.StartNew(delegate { AddFilesToQueue(task); });
}
}
}
void AddFilesToQueue(SomeTask task)
{
//connect to remote system and build collection of files to WorkItem
//e.g, WorkItem will have a collection of collections to transfer. We control this throttling mechanism to allow more threads to split up the work
_MainQ.Add(WorkItem);
}
Do you think there could be a problem with returning a value from FastSql.ExecuteDataTable, since it is a static class, and then using that value with a using block?
I'd personally be wary of introducing extra threads in quite so many places - "Here be Dragons" is a useful rule when it comes to working with threads! I can't see any problems with what you have, but if it were simpler it'd be easier to be more certain. I'm assuming you want the call to "AddToWorkQueue" to be done in a different thread (to test the race condition) so I've left that in.
Does this do what you need it to? (eye compiled so may be wrong)
while(true) {
Task.Factory.StartNew( delegate { AddToWorkQueue(); });
Thread.Sleep(5000);
}
Random aside - prefer "throw;" to "throw ex;" - the former preserves the original call stack, while the latter will only give you the line number of the "throw ex;" call. Even better, omit the try/catch in this case, since all you do is re-throw the exceptions; you may as well save yourself the overhead of the try.
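The throw-versus-throw-ex point can be illustrated briefly (the Log helper is hypothetical):

```csharp
// "throw;" rethrows the original exception with its stack trace intact,
// while "throw ex;" resets the trace to this line.
try
{
    command.ExecuteNonQuery();
}
catch (SqlException ex)
{
    Log(ex);     // hypothetical logging helper
    throw;       // rethrows with the original stack trace preserved
    // throw ex; // would report this catch block as the failure point
}
```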
It turns out the problem was a very, very strange one.
I converted the original solution from a .NET 3.5 solution to a .NET 4.0 solution. The locking up problem went away when I re-created the entire solution in a brand new .NET 4.0 solution. No other changes were introduced, so I am very confident the problem was the upgrade to 4.0.