C# multithreaded application and SQL connections help

I need some advice regarding an application I wrote. The symptoms suggest that connections from my DAL to my SQL Server 2008 database are not being closed, yet I have looked at my code and every connection is closed after use.
The application is multithreaded: it retrieves a set of records and, while processing each record, updates information about it.
Here is the flow:
The administrator has the ability to set the number of threads to run and how many records per thread to pull.
Here is the code that runs after they click start:
Adapters are abstractions over my DAL. Here is a sample of what they look like:
public class UserDetailsAdapter : IDataAdapter<UserDetails>
{
    private readonly IUserDetailFactory _factory;

    public UserDetailsAdapter()
    {
        _factory = new CampaignFactory();
    }

    public UserDetails FindById(int id)
    {
        return _factory.FindById(id);
    }
}
As soon as the _factory is called, it executes the SQL and immediately closes the connection.
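To illustrate, the factory's data access boils down to the pattern below (a simplified sketch; the query and the MapUserDetails helper are stand-ins for the real code):
// Simplified sketch of the factory's data access; the query and
// MapUserDetails are stand-ins for the real code.
public UserDetails FindById(int id)
{
    using (var conn = new SqlConnection(_connectionString))
    using (var cmd = new SqlCommand(
        "SELECT * FROM dbo.UserDetails WHERE Id = @id", conn))
    {
        cmd.Parameters.AddWithValue("@id", id);
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            return reader.Read() ? MapUserDetails(reader) : null;
        }
    } // the connection is closed (returned to the pool) here
}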
Code For Threaded App:
private int _recordsPerThread;
private int _threadCount;

public void RunDetails()
{
    // create an adapter instance that is an abstraction
    // of the data factory layer
    var adapter = new UserDetailsAdapter();
    for (var i = 1; i <= _threadCount; i++)
    {
        // This adapter makes a call to the database to pull X records and
        // sets a lock field so the next set of records pulled is different.
        var details = adapter.FindTopDetailsInQueue(_recordsPerThread);
        if (details != null)
        {
            var parameters = new ArrayList { i, details };
            ThreadPool.QueueUserWorkItem(ThreadWorker, parameters);
        }
        else
        {
            break;
        }
    }
}
private void ThreadWorker(object parametersList)
{
    var parms = (ArrayList) parametersList;
    var threadCount = (int) parms[0];
    var details = (List<UserDetails>) parms[1];
    var adapter = new DetailsAdapter();
    // we keep running until there are no records left in the database
    while (!_noRecordsInPool)
    {
        foreach (var detail in details)
        {
            var userAdapter = new UserAdapter();
            var domainAdapter = new DomainAdapter();
            var user = userAdapter.FindById(detail.UserId);
            var domain = domainAdapter.FindById(detail.DomainId);
            // ...do some work here......
            adapter.Update(detail);
        }
        if (!_noRecordsInPool)
        {
            details = adapter.FindTopDetailsInQueue(_recordsPerThread);
            if (details == null || details.Count <= 0)
            {
                _noRecordsInPool = true;
                break;
            }
        }
    }
}
The app crashes because there seem to be connection issues with the database. Looking in my log files for the DAL, I see this:
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
When I run this in one thread it works fine. I am guessing that when I run it in multiple threads I am obviously making too many connections to the DB. Any thoughts on how I can keep this running in multiple threads while making sure the database doesn't give me any errors?
Update:
I am thinking my issue may be deadlocks in my database. Here is the SQL that is running when I get a deadlock error:
WITH cte AS (
    SELECT TOP (@topCount) *
    FROM dbo.UserDetails WITH (READPAST)
    WHERE IsLocked = 0
)
UPDATE cte
SET IsLocked = 1
OUTPUT INSERTED.*;
I have never had issues with this code before (in other applications). I reorganized my indexes, as they were 99% fragmented. That didn't help. I am at a loss here.

I'm confused as to where in your code connections get opened, but you probably want your data adapters to implement IDisposable (making sure the pooled connection is released as you leave the using scope) and wrap your code in using blocks:
using (var adapter = new UserDetailsAdapter())
{
    for (var i = 1; i <= _threadCount; i++)
    {
        [..]
    }
} // adapter leaves scope here; the connection is implicitly marked as no longer necessary
ADO.NET uses connection pooling, so there's no need to (and it can be counter-productive to) explicitly open and close connections.
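To make that concrete, a disposable adapter might look something like this (a sketch that assumes the adapter owns one pooled connection for its lifetime; the constructor, query, and columns are assumptions, and the other IDataAdapter<UserDetails> members are omitted):
// Sketch of a disposable adapter that owns one pooled connection for its
// lifetime; connection string, query and columns are assumptions.
public class UserDetailsAdapter : IDisposable
{
    private readonly SqlConnection _connection;

    public UserDetailsAdapter(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
    }

    public UserDetails FindById(int id)
    {
        using (var cmd = new SqlCommand(
            "SELECT Id, UserId, DomainId FROM dbo.UserDetails WHERE Id = @id",
            _connection))
        {
            cmd.Parameters.AddWithValue("@id", id);
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read()) return null;
                return new UserDetails
                {
                    Id = reader.GetInt32(0),
                    UserId = reader.GetInt32(1),
                    DomainId = reader.GetInt32(2)
                };
            }
        }
    }

    public void Dispose()
    {
        _connection.Dispose(); // closes the connection and returns it to the pool
    }
}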

It is not clear to me how you actually connect to the database. The adapter must reference a connection somewhere.
How do you actually initialize that connection?
If you use a new adapter for each thread, each adapter must also use its own connection.
I am not too familiar with your environment, but I am fairly certain it takes a lot of open connections before your DB starts complaining about it!

Well, after doing some research, I found that there might be a bug in SQL Server 2008 when running parallel queries. I'll have to dig up the link to the discussion, but I ended up running this on my server:
sp_configure 'max degree of parallelism', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO
This can decrease your overall server performance, so it may not be an option for some people, but it worked great for me.
For some queries I added the OPTION (MAXDOP n) query hint (n being the number of processors to utilize) so they run more efficiently. It did help a bit.
Secondly, I found out that my DAL's Dispose method was calling GC.SuppressFinalize without running the cleanup first, so my finally sections in the DAL were not firing properly and my connections were not being closed.
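For anyone hitting the same thing: GC.SuppressFinalize by itself is fine in the standard dispose pattern, as long as the managed cleanup actually runs in Dispose before the finalizer is suppressed. A sketch of the correct pattern (not my actual DAL code; the connection field is a stand-in):
// Standard dispose pattern for reference; not my actual DAL code.
public class DalBase : IDisposable
{
    private SqlConnection _connection; // stand-in for whatever the DAL holds
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        // Safe here: the managed cleanup above has already run,
        // so skipping the finalizer loses nothing.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing && _connection != null)
        {
            _connection.Dispose(); // close the connection in Dispose, not the finalizer
            _connection = null;
        }
        _disposed = true;
    }

    ~DalBase()
    {
        Dispose(false);
    }
}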
Thanks to all who gave their input!

Related

How to avoid .NET Connection Pool timeouts when inserting 37k rows

I'm trying to figure out the best way to batch-insert about 37K rows into my SQL Server using Dapper.
My problem is that when I use Parallel.ForEach, the number of connections to the database increases over a short period of time, finally hitting nearly 100, which gives connection pool errors. If I force the max degree of parallelism, it hits that max number and stays there.
Setting the max degree feels wrong.
It currently does about 10-20 inserts a second. This is also in a simple console app, so there's no other database activity besides what's happening in my Parallel.ForEach loop.
Is using Parallel.ForEach the incorrect thing in this case, because this is not CPU-bound?
Should I be using async/await? If so, what's stopping this from doing hundreds of db calls in one go? (See the sketch after the notes below.)
Sample code, which is basically what I'm doing:
var items = GetItemsFromSomewhere(); // Returns 37K items.
Parallel.ForEach(items, item =>
{
    using (var sqlConnection = new SqlConnection(_connectionString))
    {
        var result = sqlConnection.Execute(myQuery, new { ... });
    }
});
My (incorrect) understanding of this was that there should only be about 8 or so connections to the db at any time. The connection pool will release the connection (which remains instantiated in the pool, waiting to be used). And if the Execute takes .. i dunno .. let's say even 1 second (the longest running time for an insert was about 500ms, and that's 1 in every 100 or so) ... that's ok .. that thread is blocked and chills until the Execute completes. Then the scope completes (and Dispose is auto-called) and the connection is closed. With the connection closed, Parallel.ForEach grabs the next item in the collection, goes to the connection pool and grabs a spare connection (remember - we just closed one a split second ago) ... rinse, repeat.
Is this wrong?
Notes:
.NET 4.5
Sql 2012
Console app.
Using Dapper.NET for sql code.
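For reference: switching the loop to async/await would not by itself stop hundreds of simultaneous calls; the concurrency has to be bounded explicitly. A sketch of one common way to do that, using SemaphoreSlim with Dapper's ExecuteAsync (items, myQuery and _connectionString are the placeholders from the sample above):
// Bound async insert concurrency to 8 in-flight calls with a semaphore.
var throttle = new SemaphoreSlim(8);
var tasks = items.Select(async item =>
{
    await throttle.WaitAsync();
    try
    {
        using (var conn = new SqlConnection(_connectionString))
        {
            await conn.ExecuteAsync(myQuery, new { /* parameters */ });
        }
    }
    finally
    {
        throttle.Release();
    }
});
await Task.WhenAll(tasks);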
First of all: if this is about performance, use SqlBulkCopy. This works with SQL Server; other database servers may have their own bulk-copy solutions (Oracle has one).
SqlBulkCopy works like a bulk select, only in reverse: a bulk select opens one connection and streams all the data from the server to the client, while a bulk insert streams all the new records from the client to the server.
See: https://msdn.microsoft.com/en-us/library/ex21zs8x(v=vs.110).aspx
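A minimal sketch of such a bulk insert (the Item type, table and column names are placeholders):
// Minimal SqlBulkCopy sketch; the Item type, table and column names
// are placeholders.
void BulkInsert(IEnumerable<Item> items, string connectionString)
{
    var table = new DataTable();
    table.Columns.Add("Id", typeof(int));
    table.Columns.Add("Name", typeof(string));
    foreach (var item in items)
        table.Rows.Add(item.Id, item.Name);

    using (var bulkCopy = new SqlBulkCopy(connectionString))
    {
        bulkCopy.DestinationTableName = "dbo.MyTable";
        bulkCopy.BatchSize = 5000;     // send rows to the server in batches
        bulkCopy.WriteToServer(table); // one connection, one streamed insert
    }
}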
If you insist on using parallelism, you might want to consider the following code:
void BulkInsert<T>(object p)
{
    var e = (IEnumerator<T>) p;
    using (var sqlConnection = new SqlConnection(_connectionString))
    {
        while (true)
        {
            T item;
            lock (e) // serialize access to the shared enumerator
            {
                if (!e.MoveNext())
                    return;
                item = e.Current;
            }
            var result = sqlConnection.Execute(myQuery, new { ... });
        }
    }
}
Now create your own threads and invoke this method on each of them, passing one and the same argument: the enumerator that runs through your collection. Each thread opens its own connection once, starts inserting, and after all items are inserted the connection is closed. This solution uses as many connections as you create threads.
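A possible driver for that method might look like this (the Item type and thread count are illustrative):
// Hypothetical driver: one shared enumerator, N threads, N connections.
var items = GetItemsFromSomewhere(); // returns the 37K items
IEnumerator<Item> enumerator = items.GetEnumerator();

var threads = new List<Thread>();
for (int i = 0; i < 8; i++) // 8 workers => at most 8 connections
{
    var thread = new Thread(BulkInsert<Item>);
    thread.Start(enumerator); // every thread shares the same enumerator
    threads.Add(thread);
}
threads.ForEach(t => t.Join()); // wait for all inserts to finish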
PS: Multiple variants of the above code are possible. You could call it from background threads, from Tasks, etc. I hope you get the point.
You should use SqlBulkCopy instead of inserting rows one by one; it is faster and more efficient.
https://msdn.microsoft.com/en-us/library/ex21zs8x(v=vs.110).aspx
Credit to the owner of this answer: Sql Bulk Copy/Insert in C#

Which is better, decreasing or increasing ADO.NET SQL Connections?

I have a multi-threaded Windows application using more than one BackgroundWorker. Every background worker runs some code that updates the same SQL Server database, and when it finishes it runs again. I noticed that every background worker uses its own connection. Since the database gets very slow when many connections are used, I created a ConcurrentQueue of a custom class, add all the stored procedure calls to it, and execute them from a single BackgroundWorker so that just one connection is used.
Here is my code. This is the stored procedure class:
public class PSCProc
{
    string _procName;
    Dictionary<string, object> _parameters;

    public string ProcName
    {
        get { return _procName; }
        set { _procName = value; }
    }

    public Dictionary<string, object> Parameters
    {
        get { return _parameters; }
        set { _parameters = value; }
    }

    public PSCProc(string procName, Dictionary<string, object> parameters)
    {
        _procName = procName;
        _parameters = parameters;
    }
}
and here is the method used to run the stored procedure
public static void execProc(string procName, Dictionary<string, object> parameters)
{
    using (var conn = new SqlConnection(Test.Properties.Settings.Default.testConnection))
    using (var command = new SqlCommand(procName, conn)
    {
        CommandType = CommandType.StoredProcedure
    })
    {
        foreach (var item in parameters)
        {
            command.Parameters.AddWithValue(item.Key, item.Value);
        }
        conn.Open();
        command.ExecuteNonQuery();
        conn.Close();
        Form1.updated++;
    }
}
and this is how I add an item to the queue:
Dictionary<string, object> parameters = new Dictionary<string, object>();
int x = 1;
string address = "cairo";
parameters.Add("@id", x);
parameters.Add("@address", address);
PSCProc proc1 = new PSCProc("updateAddress", parameters);
pscQueue.Enqueue(proc1);
and this is how the background worker runs the procedures:
PSCProc proc;
if (pscQueue.TryDequeue(out proc))
{
    helper.execProc(proc.ProcName, proc.Parameters);
}
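Put together, the consuming worker boils down to a loop like this (a sketch; the names match the snippets above, and the real worker restarts itself when it finishes):
// Sketch of the consuming worker: drain the queue on a single code path,
// so only one connection is in use at a time.
private void worker_DoWork(object sender, DoWorkEventArgs e)
{
    PSCProc proc;
    while (pscQueue.TryDequeue(out proc))
    {
        helper.execProc(proc.ProcName, proc.Parameters);
    }
}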
Note that:
- The background worker that executes the procedures runs again when it finishes.
- The database has too many locks, as there are hundreds of users on it.
- It is very important that the database stays responsive at all times, without any locks.
- Connection pooling keeps the connections sleeping or suspended all the time.
- The rate of adding procedures to the queue won't be faster than the rate of executing them.
My question is:
Is it better to do it this way, or will using many connections not affect the database?
I would make a SQL Agent job that runs your stored procedure. Then your connection can login, start the job, and exit, and SQL Agent will run the job in the background. That way your connections aren't held open while the procedure runs.
That being said, I'll bet your database isn't slow because there are lots of connections, I'll bet it's slow because it's running lots of queries on behalf of those connections. But without knowing the code of your stored procedure nor your schema it's really impossible to know.
The number of connections can slow down SQL Server, depending on the actual amount.
One way of slimming down is to check whether the application is using connection pooling correctly. See this MSDN article for getting it right. A lot depends on the connection string and the state a connection is left in (if there are open transactions or different credentials, it can't be pooled).
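For example, pooling behavior is controlled from the connection string itself; the keywords below are standard SqlClient settings (the server name and values are only illustrative):
// Illustrative connection string; Pooling and Min/Max Pool Size are
// standard SqlClient keywords, the values here are examples only.
const string connectionString =
    "Data Source=myServer;Initial Catalog=myDb;Integrated Security=true;" +
    "Pooling=true;Min Pool Size=1;Max Pool Size=50;Connect Timeout=30";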
Another way is moving the execution of the procedure(s) to a central service and have that service cache the database requests/responses.
Finally, I'd have a look at the procedures/queries themselves; you mention that there is a lot of locking going on. Try to find out why. Did you create an insert hotspot at the end of a table? An index might help remove the hotspot. An (insert) trigger might be in the way. See this post for more details.

Thread Monitor class in c#

In my C# application, multiple clients access the same server; to process one client at a time, the code below was written. In the code I use the Monitor class and also the Queue class. Will this code affect performance? If I use the Monitor class, should I remove the Queue class from the code?
Sometimes the remote server machine where my application runs as a service goes down completely. Is the code below the reason, because all the clients queue up? When I check with the netstat -an command at the command prompt, for 8 clients it shows 50 connections held in TIME_WAIT...
Below is my code where the client accesses the server:
if (Id == "")
{
System.Threading.Monitor.Enter(this);
try
{
if (Request.AcceptTypes == null)
{
queue.Enqueue(Request.QueryString["sessionid"].Value);
string que = "";
que = queue.Dequeue();
TypeController.session_id = que;
langStr = SessionDatabase.Language;
filter = new AllThingzFilter(SessionDatabase, parameters, langStr);
TypeController.session_id = "";
filter.Execute();
Request.Clear();
return filter.XML;
}
else
{
TypeController.session_id = "";
filter = new AllThingzFilter(SessionDatabase, parameters, langStr);
filter.Execute();
}
}
finally
{
System.Threading.Monitor.Exit(this);
}
}
Locking this is pretty wrong; it won't work at all if every thread uses a different instance of whatever class this code lives in. It isn't clear from the snippet whether that's the case, but fix that first. Create a separate object just to store the lock, and make it static or give it the same scope as the shared object you are trying to protect (also not clear from the snippet).
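For example (a sketch; scope the lock object to whatever shared state it actually protects):
// One dedicated lock object shared by every request, instead of locking 'this'.
private static readonly object _syncRoot = new object();

private void ProcessRequest()
{
    lock (_syncRoot) // compiles to Monitor.Enter/Exit in a try/finally
    {
        // ... dequeue the session id and run the filter here ...
    }
}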
You might still have trouble since this sounds like a deadlock rather than a race. Deadlocks are pretty easy to troubleshoot with the debugger since the code got stuck and is not executing at all. Debug + Break All, then Debug + Windows + Threads. Locate the worker threads in the thread list. Double click one to select it and use Debug + Call Stack to see where it got stuck. Repeat for other threads. Look back through the stack trace to see where one of them acquired a lock and compare to other threads to see what lock they are blocking on.
That could still be tricky if the deadlock is intricate and involves multiple interleaved locks, in which case logging might help. Really hard-to-diagnose mandelbugs might require a rewrite that cuts back on the amount of threading.

Increasing the Lifetime element for EWS Streaming Subscription Connection

Using Microsoft's EWS, we're able to listen to a mailbox and take actions when a new email comes in. However, I can't figure out how to avoid the connection timing out.
Per Microsoft, here is the constructor for a StreamingSubscriptionConnection:
public StreamingSubscriptionConnection(
    ExchangeService service,
    int lifetime
)
In my app, I've coded it as follows:
service = new ExchangeService(ExchangeVersion.Exchange2010_SP1);
StreamingSubscriptionConnection conn = new StreamingSubscriptionConnection(service, 30);
In other words, I've got the timeout (lifetime) set to 30 minutes, because that's the highest I've been able to set it. How can I increase this? Or, how can I trick this subscription into staying alive, even if ~45 minutes transpire between incoming emails?
30 minutes is a hard limit; you cannot change it to a higher value.
To work around it, wire up a handler for the OnDisconnect event of the connection instance and restart the subscription from there (just call connection.Open() from that handler).
If anyone else is interested, this is how I am accomplishing this.
I want to keep the connection open, so I reset it in the OnDisconnect handler.
However, before resetting it, I check the private "subscriptions" dictionary on the connection object using reflection.
This allows me to unsubscribe from my subscriptions elsewhere in my code (in OnNotificationEvent), and once everything has been unsubscribed, I can close the connection.
Here is my Code:
void connection_OnDisconnect(object sender, SubscriptionErrorEventArgs args)
{
    var subscriptions = (Dictionary<string, StreamingSubscription>) typeof(StreamingSubscriptionConnection)
        .GetField("subscriptions", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance)
        .GetValue(sender);
    if (subscriptions.Count > 0)
    {
        // reopen the connection
        ((StreamingSubscriptionConnection) sender).Open();
        using (var db = new Metrics_DatabaseEntities())
        {
            PushNotificationTest pt = new PushNotificationTest();
            pt.RetObj = "Connection reset";
            db.PushNotificationTests.Add(pt);
            db.SaveChanges();
        }
    }
    else
    {
        using (var db = new Metrics_DatabaseEntities())
        {
            PushNotificationTest pt = new PushNotificationTest();
            pt.RetObj = "Connection closed!";
            db.PushNotificationTests.Add(pt);
            db.SaveChanges();
        }
    }
}
Please disregard the poor way this is written; it's just my first version, and I plan to clean it up soon. I just thought I would share my methodology with folks who might be interested.
If people are interested, here's the little bit of logic that got added.
I added this to my Start method:
conn.OnDisconnect +=
new StreamingSubscriptionConnection.SubscriptionErrorDelegate(OnDisconnect);
I then added the OnDisconnect method:
private void OnDisconnect(object sender, SubscriptionErrorEventArgs args)
{
    Start();
}
Ultimately, this still needs improvement, because it simply times out and reconnects every half hour regardless of incoming email activity. I'd rather have something in place that resets the counter every time a new message comes in; then it would only time out a couple of times per day instead of 48. Still, it serves its purpose of keeping my email-listening program online.

Unable to enlist in a distributed transaction with NHibernate

I'm seeing a problem in a unit test where Oracle throws an exception with the message "Unable to enlist in a distributed transaction". We're using ODP.NET and NHibernate. The issue comes up after making a certain number of commits to the database inside nested transactions. Annoyingly, it fails on the continuous integration server (Windows Server 2003 R2 SP1) but not on my dev machine (XP SP2).
This is a small(ish) repro of the issue:
using (new TransactionScope())
{
    for (int j = 0; j < 15; j++)
    {
        using (var transactionScope = new TransactionScope(TransactionScopeOption.Required))
        using (var session = sessionFactory.OpenSession())
        {
            for (int i = 0; i < 200; i++)
            {
                var obj = [create new NHibernate mapped obj]
                session.Save(obj);
            }
            session.Flush();
            transactionScope.Complete();
        }
    }
}
The connection string we're using is:
Data Source=server;User Id=user;Password=password;Enlist=true;
Obviously this looks like a heavy-handed thing to be doing, but the real product code is more complex (the outer transaction loop and inner transaction loop are far apart).
On the build server, it reliably bombs out on the fifth iteration of the outer loop (j). Seeing as it passes on my local machine, I'm wondering if it's hitting some kind of configured limit on transactions or connections?
Anyone got any hunches I can try out? The obvious way to fix it is to change the code to better handle this situation, but I'd just like to understand why it works on one machine and not on another. Thanks!
It seems to me this has to do with your Oracle database configuration.
Do you use the same database server in both environments (I assume not)?
Which version of the database do you use (I'll assume 10g)?
Here is what I could find based on these assumptions:
- Check Tuning Microsoft Transaction Server Performance. The default value of the ORAMTS_NET_CACHE_MAXFREE parameter is 5, which may be related to your problem. Read the whole page before taking any action, though (you could try increasing the SESSIONS and PROCESSES parameters too).
- You could enable tracing on Oracle MTS to see what is really happening there.
- If still stuck, I guess you could enable tracing on MSDTC to try to get more insight.
