What happens when Npgsql connection pool reaches Max - c#

Both the name of the connection string parameter and this blog post - http://fxjr.blogspot.co.uk/2010/04/npgsql-connection-pool-explained.html - led me to believe that Npgsql won't exceed the MaxPoolSize value set in the connection string. However, the docs (http://npgsql.projects.postgresql.org/docs/manual/UserManual.html) say: "Max size of connection pool. Pooled connections will be disposed of when returned to the pool if the pool contains more than this number of connections. Default: 20"
This suggests the pool can actually grow larger than MaxPoolSize, and that it is really just the level at which Npgsql starts aggressively removing connections from the pool as soon as they are returned.
I've been searching for an answer, but I can't find out exactly what happens when you reach MaxPoolSize. Does anyone know?
Edit: I should add that we are using Npgsql 2.0.6.0, because another dependency is supported only up to that version.

I think this may be a copy-paste issue in the docs, carried over from the MinPoolSize description. Npgsql doesn't create more than MaxPoolSize connections. When this value is reached, new connection requests are queued until a connection is freed.
Out of curiosity, which dependency only works with 2.0.6?
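As an illustration, a connection string along these lines caps the pool at 20 connections (the server, database, and credential values are placeholders; Timeout is the Npgsql parameter that bounds how long a queued open request waits before the pool-timeout error is thrown):

```
Server=127.0.0.1;Port=5432;Database=mydb;User Id=myuser;Password=secret;Pooling=true;MinPoolSize=1;MaxPoolSize=20;Timeout=15;
```

With these settings, the 21st concurrent Open() does not create a new connection; it waits up to Timeout seconds for one of the 20 pooled connections to be returned.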

Related

What is limiting the request count? Why `Timeout while getting a connection from pool`?

Under high load, NHibernate sometimes throws an exception when BeginTransaction is called. The message contains Timeout while getting a connection from pool, from the RequestConnector method of Npgsql.
In the pg_log: could not receive data from client: No connection could be made because the target machine actively refused it.
Postgres stats don't show any expensive queries.
The machine has enough free CPU and RAM.
Versions: Postgres 9.4.0 64-bit, NHibernate 3.3.1.4000, Npgsql 2.2.3.
Postgres settings:
shared_buffers = 128MB
max_connections = 300
checkpoint_segments = 6
Connection string settings:
Pooling = true;
MINPOOLSIZE=20;
MAXPOOLSIZE=1000;
Postgres and the application are located on the same machine.
All NHibernate transactions and sessions are disposed with using.
This problem was caused by a disk bottleneck. With an SSD it works much better.
One problem I have seen in the past is the maximum number of sockets that can be open at the same time, plus the linger time between when a socket is closed and when it is actually freed. Under huge volumes this becomes problematic. Here are a couple of links that discuss the problem: Link 1, Link 2
We have noticed a similar problem. I found on the Npgsql GitHub that they changed DNS resolution from sync to async in version 2.1, and that this leads to this error.
As of today (ver. 2.2.4.3) it is not fixed.
Here is a fix (revert):
Npgsql fork - commit

Database connections not being closed on oracle server

I am experiencing a problem whereby, when connecting to an Oracle 11g database using NHibernate, old connections in the pool are not being closed.
I am fairly sure that all the NHibernate sessions are disposed properly, yet the connections remain in an INACTIVE status. I know this is because of connection pooling; however, surely they should be removed after a certain amount of time? If not, how can I configure this to happen?
I have tried adding the following settings into my connection string:
Max Pool Size=10;
Connection Lifetime=120;
Incr Pool Size=1;
Decr Pool Size=10;
This seems to stop as many connections being created, I guess because the increment size is 1; however, once the connections have been put back into the pool they are never closed.
I have looked at the v$session table and some of the LAST_CALL_ET values were as much as 786465 s, or 9 days!
I am fairly sure all the sessions are being disposed, here is an example of the code:
public void DoSomethingToDb(ISessionFactory sessionFactory)
{
    using (ISession session = sessionFactory.OpenSession())
    {
        session.Transaction.Begin();
        // Do stuff
        session.Transaction.Commit();
    }
}
How can I set up my program/NHibernate/ADO.NET/Oracle to close connections that are no longer in use?
The server we were testing on crashed yesterday: there were over 800 INACTIVE connections and no more could be issued.
The reason you are having problems is that your Decr Pool Size value is too large. The pool cannot close any connections unless all of them are available to close, because your Decr Pool Size is the same as your Max Pool Size.
When I set this value to 1, it takes forever to release unused connections. I currently set mine to 5; it still takes just as long between decrements, but it releases more connections at once.
Pooling=true;
Min Pool Size=0;
Max Pool Size=10;
Incr Pool Size=1;
Decr Pool Size=3;
Also, with Connection Lifetime set to 120, no connection will be kept open for more than 2 minutes.
It would surprise me if you could do this in NHibernate, since I think these are leaked connections: for some reason they got out of control and will never be reused.
What you can do is configure a session idle timeout in Resource Manager in the Oracle database. See Managing Resources with Oracle Database Resource Manager.
Make sure that a resource consumer group is defined for your pooled sessions and that the idle timeout is big enough to not unexpectedly interrupt a working healthy session.
Oracle Database Resource Manager is a very flexible and powerful tool that can help your system in many ways.
It seems the problem was being caused by the use of transactions. Removing the transactions from the above code produced the following:
public void DoSomethingToDb(ISessionFactory sessionFactory)
{
    using (ISession session = sessionFactory.OpenSession())
    {
        // Do stuff
        session.Flush();
    }
}
This seems to cause no issues.
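If the transaction is actually needed, a commonly recommended alternative (a sketch assuming NHibernate's ITransaction API, not code from the question) is to put the transaction itself in a using block, so it is disposed, and rolled back if necessary, even when Commit is never reached:

```csharp
public void DoSomethingToDb(ISessionFactory sessionFactory)
{
    using (ISession session = sessionFactory.OpenSession())
    using (ITransaction tx = session.BeginTransaction())
    {
        // Do stuff
        tx.Commit();
    } // Dispose() rolls the transaction back if Commit was not reached,
      // so the underlying connection is returned to the pool cleanly.
}
```

The key difference from the original code is that an exception thrown between Begin and Commit no longer leaves a dangling transaction holding the connection.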

When we can use ClearAllPools method?

I face the following problem:
Connection Pool has reached the maximum number of connections
I followed all the recommendations; the problem isn't as bad as before, but it still happens occasionally.
I use the using statement with all my connections and readers.
Lately I faced the following error, and I had to reset IIS to fix the problem:
Connection Pool has reached the maximum number of connections. at IBM.Data.Informix.IfxConnectionPool.ReportOpenTimeOut()
at IBM.Data.Informix.IfxConnectionPool.Open(IfxConnection connection)
at IBM.Data.Informix.IfxConnPoolManager.Open(IfxConnection connection)
at IBM.Data.Informix.IfxConnection.Open()
at DB_Connection_s.DB_Connection.GetUserSystems(String emp_num)
Now I have read about the ClearAllPools() method, but I don't know when to use it, and whether it is considered a good solution that avoids having to reset IIS to fix the timeout problem.
You can call ClearAllPools() when you don't have any active connections.
Also check out http://www.codeproject.com/Articles/46267/Connection-Pooling-in-ASP-NET
Ensure that your application closes all database connections correctly and consistently.
Ensure that the database is online.
Increase the connection timeout.
The error pattern indicates that connections are "leaked" over a long period. To fix this problem, ensure that your application closes all database connections correctly and consistently.
The exception does not indicate that the database is offline. The exception indicates a connection pool problem.

Connection Pooling clear time... I don't really know what to call it :)

I have a client application that connects to the DB via the using clause:
using (SqlConnection sqlConn = new SqlConnection(ConnectionString))
{
sqlConn.Open();
//SQLcommand codes
}
I know this ensures that sqlConn.Close() and sqlConn.Dispose() are called. However, for each client, after running this code I still see some SPIDs on SQL Server in sleeping mode, such as:
60 0 sleeping sa AVENGER 0 Xmark AWAITING COMMAND 0
61 0 sleeping sa AVENGER 0 Xmark AWAITING COMMAND 0
62 0 sleeping sa AVENGER 0 Xmark AWAITING COMMAND 0
I know this is because I use connection pooling; that is why they are in sleep mode, ready to be reused by further commands. I see these processes get flushed out over time (after 10 minutes or so) if I don't do anything with them.
My question is: is there a setting, either in C# or in SQL Server 2008, that will reduce this time to around 2 minutes?
The problem I face is that my max pool size is reached quickly if many clients connect to the database within a short period of time. I realize I could fix it by increasing the pool size, but that feels like swapping a smaller bowl for a bigger one to catch water from a leaking roof.
I searched MSDN and came across ClearPool(), which would let me explicitly remove a SPID from the pool, but I think that defeats the purpose of connection pooling and is not really clean.
Any help is greatly appreciated.
As you know, the pool may not close the connection even when you call sqlConn.Close(), and will reuse the same connection for any other client connecting with the same connection string.
Ideally, instead of looking to decrease this time, you should use a data access layer class that acts as a helper to create and manage connections.
Using such an approach avoids pooling issues; I have seen it used in good application architectures.
You should modify your code to this approach instead.
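A minimal sketch of such a helper (the names and structure are my own, not from the answer): it centralizes connection creation and forces disposal, so connections always go back to the pool promptly:

```csharp
using System;
using System.Data.SqlClient;

public static class Db
{
    // Hypothetical helper: callers never open or close connections themselves.
    public static void Execute(string connectionString, Action<SqlConnection> work)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            work(conn);
        } // Dispose() returns the connection to the pool immediately.
    }
}
```

With this in place, leaking a connection requires deliberately working around the helper rather than simply forgetting a Close().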
You are looking for the 'Idle Time-out' setting. It is configured for the application pool in IIS. Refer to this link

What are some good ways to debug timeouts? (C#)

I'm building a site that runs fine for a few hours, but then *.asmx and *.ashx calls start timing out.
The exception is: "Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached."
I'm using SubSonic as the ORM.
I suspect that the problem is based on a scheduled task that runs every few minutes and hits the database. When I look in SQL Server 2000's "Current Activity", I see there are:
100 processes with the status "sleeping"
100 locks
The 100 processes are from the Application ".Net SqlClient Data Provider" and the command is "AWAITING COMMAND".
So I'm guessing that's the issue... but how do I troubleshoot it? Does this sound like a deadlock condition in the db? As soon as I run
c:\> iisreset
everything's fine (for a while).
Thanks - I've just never encountered something like this and am not sure the best way to proceed.
Michael
It could be a duplicate of this problem - Is connection pooling working correctly in Subsonic?
If you're loading objects with Load() instead of LoadAndCloseReader(), each connection is left open and eventually you'll exhaust the connection pool.
When you call Load() on a collection it leaves the reader open - make sure you call LoadAndCloseReader() if you want the reader closed off - or use a using block.
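The same idea in plain ADO.NET terms (illustrative only, not SubSonic's API; the query and table name are made up): wrap the reader, command, and connection in using blocks so everything is closed even if an exception is thrown mid-read:

```csharp
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT Id FROM Widgets", conn))
{
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // consume rows here
        }
    } // reader closed here
}     // connection returned to the pool here
```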
It helps to have some source code as well.
I don't know anything about Subsonic, but maybe you are leaking database 'contexts'? I'd check that any database resource is being disposed after you're finished with it...
