I have a Windows service which uses System.Threading.Timer to call an endpoint at a configured interval. Say I have 10 timer instances configured to call the same endpoint at the same interval (say 10 seconds). If the HTTP call takes a long time to finish, my other timer events are not fired; they only fire after the HTTP call returns. At any point in time only two timers are triggered concurrently and run the handler code. While the handler code is executing, none of the other timers trigger.
To be precise, only 2 timers ever run concurrently. I am using .NET Framework 4.8.
I will not be able to post the actual code here since it is legacy proprietary code.
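The pattern is roughly the following sketch (hypothetical and simplified, not the real implementation): several timers on the same interval, each making a synchronous HTTP call to the same endpoint.
// Hypothetical repro of the pattern described above, not the proprietary code.
using System;
using System.Net;
using System.Threading;

class TimerRepro
{
    static void Main()
    {
        var timers = new Timer[10];
        for (int i = 0; i < timers.Length; i++)
        {
            int id = i;
            // Each timer fires every 10 seconds and calls the same endpoint.
            timers[id] = new Timer(_ => CallEndpoint(id), null,
                TimeSpan.Zero, TimeSpan.FromSeconds(10));
        }
        Console.ReadLine();
    }

    static void CallEndpoint(int id)
    {
        Console.WriteLine("{0:HH:mm:ss.fff} timer {1} starting request", DateTime.Now, id);
        var request = (HttpWebRequest)WebRequest.Create("http://endpoint"); // placeholder URL
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("{0:HH:mm:ss.fff} timer {1} got {2}", DateTime.Now, id, response.StatusCode);
        }
    }
}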
I have found the reason for this behavior in the .NET Framework: there is a default limit on the number of connections you can open to an endpoint. ServicePointManager.DefaultConnectionLimit sets this limit to 2 for non-web applications. Since my code runs as a Windows service, it is limited to 2 connections. You can override this behaviour by setting
Uri atmos = new Uri("http://endpoint");
ServicePoint sp = ServicePointManager.FindServicePoint(atmos);
sp.ConnectionLimit = 64;
or you can specify the setting in the config file:
<system.net>
  <connectionManagement>
    <add address="http://endpoint" maxconnection="64"/>
  </connectionManagement>
</system.net>
-- excerpt from MSDN --
The maximum number of concurrent connections allowed by a ServicePoint object. The default connection limit is 10 for ASP.NET hosted applications and 2 for all others. When an app is running as an ASP.NET host, it is not possible to alter the value of this property through the config file if the autoConfig property is set to true. However, you can change the value programmatically when the autoConfig property is true. Set your preferred value once, when the AppDomain loads.
https://learn.microsoft.com/en-us/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit?view=netframework-4.8
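Alternatively, the process-wide default can be raised once at startup. A minimal sketch (assuming you can run code when the service starts, before the first request is made; 64 is just an example value):
// Raise the process-wide default before any request is issued.
ServicePointManager.DefaultConnectionLimit = 64;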
Thanks to everyone who replied to this question.
Environment: Windows 11 / .NET Framework 4.7.2
For testing purposes, I set up an environment where all traffic is routed to a local mitmproxy instance via the Windows proxy settings.
I am running a console app that makes concurrent HTTP GET calls for benchmarking and other study. The code uses a pool of HttpClient instances. However, when I tried to limit the number of outbound connections, I was not able to do it unless I explicitly set ServicePointManager.DefaultConnectionLimit to a value. Even the default config value did not work:
<configuration>
  <system.net>
    <connectionManagement>
      <add address="*" maxconnection="2" />
    </connectionManagement>
    <settings>
      <servicePointManager expect100Continue="false" useNagleAlgorithm="false" dnsRefreshTimeout="30000" />
    </settings>
  </system.net>
</configuration>
The weirdest part is that if I printed ServicePointManager.DefaultConnectionLimit to the console, it would show 2, which also happens to be the default when nothing is assigned; but it gets even weirder, because if I just added:
ServicePointManager.DefaultConnectionLimit = ServicePointManager.DefaultConnectionLimit;
Then it would limit connection count to 2.
Anyway, I checked and checked and stepped into the .NET source, and found that if I add the proxy's address to the list of service points, the limit is respected. The issue is that when a proxy is in use, the actual server URI is ignored in the service point table and the proxy's address is used instead. This causes a problem because I can no longer control the connection limit per server while using a proxy. I searched the internet for anything related but was not able to find anything useful. Has anybody else faced this? Is this a .NET bug? Any answer would be greatly appreciated.
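To illustrate what I mean, here is a minimal sketch (the proxy address and server URLs are placeholders; 127.0.0.1:8080 stands in for the local mitmproxy):
using System;
using System.Net;

class ProxyServicePointDemo
{
    static void Main()
    {
        // Local proxy (example address).
        var proxy = new WebProxy("http://127.0.0.1:8080");

        // With a proxy configured, different target URIs map to the SAME service point,
        // because the service point table is keyed by the proxy address.
        ServicePoint spA = ServicePointManager.FindServicePoint(new Uri("http://server-a.example"), proxy);
        ServicePoint spB = ServicePointManager.FindServicePoint(new Uri("http://server-b.example"), proxy);
        Console.WriteLine(ReferenceEquals(spA, spB)); // True

        // So the connection limit can only be controlled for the proxy as a whole,
        // not per destination server.
        spA.ConnectionLimit = 2;
    }
}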
I have a WCF service and client using reliable session.
The inactivity timeout is set to 1 minute when the service is created, but I need to change this timeout dynamically at runtime to, say, 10 minutes for a while and then set it back to 1 minute.
I have tried changing the timeout interval before creating the service.
This works, of course, but I am not able to change it at runtime.
I also tried increasing other timeouts (Send, Receive and Operation) at the same time, but it didn't help.
So the question is: is it even possible to change the inactivity timeout at runtime? If so, do I need to change any other timeout to make it work?
I changed the timeout in one of my contract implementation methods like this.
Although the timeout is correctly set to 10 minutes, it still behaves like it did with the default timeout.
// Grab the binding from the first endpoint of the running host...
var binding = OperationContext.Current.Host.Description.Endpoints.First().Binding as CustomBinding;
// ...update the reliable-session inactivity timeout...
binding.Elements.Find<ReliableSessionBindingElement>().InactivityTimeout = TimeSpan.FromMinutes(10);
// ...and assign the binding back to the endpoint.
OperationContext.Current.Host.Description.Endpoints.First().Binding = binding;
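For comparison, setting the timeout before the host is opened works; roughly like this sketch (assuming a NetTcpBinding with reliable sessions; MyService, IMyService and the address are placeholders):
// using System.ServiceModel;
var binding = new NetTcpBinding(SecurityMode.None, reliableSessionEnabled: true);
binding.ReliableSession.InactivityTimeout = TimeSpan.FromMinutes(1); // fixed before Open()
binding.ReliableSession.Ordered = true;

var host = new ServiceHost(typeof(MyService));
host.AddServiceEndpoint(typeof(IMyService), binding, "net.tcp://localhost:9000/MyService");
host.Open();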
I have an ASP.NET 3.5 server application written in C#. It makes outbound requests to a REST API using HttpWebRequest and HttpWebResponse.
I have set up a test application to send these requests on separate threads (to vaguely mimic concurrency against the server).
Please note this is more of a Mono/environment question than a code question, so please keep in mind that the code below is not verbatim, just a cut/paste of the functional bits.
Here is some pseudo-code:
// threaded client piece
int numThreads = 1;         // number of simulated "users"
int pending;                // work items still in flight
ManualResetEvent doneEvent;

void RunTest(string random_url_to_same_host)
{
    pending = numThreads;
    using (doneEvent = new ManualResetEvent(false))
    {
        for (int i = 0; i < numThreads; i++)
        {
            ThreadPool.QueueUserWorkItem(new WaitCallback(Test), random_url_to_same_host);
        }
        doneEvent.WaitOne();    // block until every work item has signalled completion
    }
}

void Test(object some_url)
{
    // set up the service point here just to show what config settings I'm using
    ServicePoint lgsp = ServicePointManager.FindServicePoint(new Uri(some_url.ToString()));

    // set these to optimal for Mono and .NET
    lgsp.Expect100Continue = false;
    lgsp.ConnectionLimit = 100;
    lgsp.UseNagleAlgorithm = true;
    lgsp.MaxIdleTime = 100000;

    var request = (HttpWebRequest)WebRequest.Create(some_url.ToString());
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        // do stuff
    } // releases the response object

    // close out threading stuff: the last work item to finish signals the main thread
    if (Interlocked.Decrement(ref pending) == 0)
    {
        doneEvent.Set();
    }
}
If I run the application on my local development machine (Windows 7) in the Visual Studio web server, I can increase numThreads and receive the same average response time, with minimal variation, whether it's 1 "user" or 100.
Publishing and deploying the application to Apache2 on a Mono 2.10.2 environment, the response times scale almost linearly (i.e. 1 thread = 300 ms, 5 threads = 1500 ms, 10 threads = 3000 ms). This happens regardless of server endpoint (different hostname, different network, etc.).
Using IPTRAF (and other network tools), it appears as though the application only opens 1 or 2 ports to route all connections through, and the remaining responses have to wait.
We have built a similar PHP application, deployed it with the same requests, and the responses scale appropriately.
I have run through every single configuration setting I can think of for Mono and Apache, and the ONLY setting that differs between the two environments (at least in code) is that sometimes ServicePoint.SupportsPipelining is false in Mono, while it is true from my machine.
It seems as though the ConnectionLimit (default of 2) is not being changed in Mono for some reason, even though I am setting it to a higher value both in code and in the web.config for the specified host(s).
Either my team and I are overlooking something significant, or this is some sort of bug in Mono.
I believe you're hitting a bottleneck in HttpWebRequest. The web requests each use a common service point infrastructure within the .NET Framework. This appears to be intended to allow connections to the same host to be reused, but in my experience it results in two bottlenecks.
First, the service points allow only two concurrent connections to a given host by default, in order to be compliant with the HTTP specification. This can be overridden by setting the static property ServicePointManager.DefaultConnectionLimit to a higher value. See this MSDN page for more details. It looks as if you're already addressing this for the individual service point itself, but due to the concurrency locking scheme at the service point level, doing so may be contributing to the bottleneck.
Second, there appears to be an issue with lock granularity in the ServicePoint class itself. If you decompile it and look for uses of the lock keyword, you'll find that it synchronizes on the instance itself, and does so in many places. With the service point instance being shared among web requests for a given host, in my experience this tends to bottleneck as more HttpWebRequests are opened and causes it to scale poorly. This second point is mostly personal observation and poking around the source, so take it with a grain of salt; I wouldn't consider it an authoritative source.
Unfortunately, I did not find a reasonable substitute at the time that I was working with it. Now that the ASP.NET Web API has been released, you may wish to give the HttpClient a look. Hope that helps.
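If you do take HttpClient for a spin, a minimal sketch of the shared-client pattern might look like this (assuming a runtime where System.Net.Http and async/await are available; on .NET Framework it still sits on top of the same service point machinery, so the connection-limit settings above still apply):
// using System.Collections.Generic; using System.Linq;
// using System.Net.Http; using System.Threading.Tasks;

// One shared client, reused across all requests.
static readonly HttpClient client = new HttpClient();

static async Task<string[]> FetchAllAsync(IEnumerable<string> urls)
{
    // Issue the requests concurrently and wait for all of them.
    var tasks = urls.Select(async url =>
    {
        using (var response = await client.GetAsync(url))
        {
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    });
    return await Task.WhenAll(tasks);
}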
I know this is pretty old but I'm putting this here in case it might help somebody else who runs into this issue. We ran into the same problem with parallel outbound HTTPS requests. There are a few issues at play.
The first issue is that ServicePointManager.DefaultConnectionLimit did not change the connection limit, as far as I can tell. Setting it to 50, creating a new connection, and then checking the connection limit on the service point for that new connection still shows 2. Setting the limit to 50 on that service point once does appear to work and persist for all connections that end up going through that service point.
The second issue we ran into was with threading. The current implementation of the Mono thread pool appears to create at most 2 new threads per second. This is an eternity if you are doing many parallel requests that start at exactly the same time. To counteract this, we tried setting ThreadPool.SetMinThreads to a higher number. It appears that Mono only creates up to 1 new thread when you make this call, regardless of the delta between the current number of threads and the desired number. We were able to work around this by calling SetMinThreads in a loop until the thread pool had the desired number of idle threads, roughly as sketched below.
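Rough sketch of the workaround (the values are examples; on Mono 2.x each SetMinThreads call only seemed to grow the pool by one thread):
const int desiredThreads = 50;

int workerMin, ioMin;
ThreadPool.GetMinThreads(out workerMin, out ioMin);

for (int i = workerMin; i < desiredThreads; i++)
{
    // Each call nudges the pool to create (at most) one more thread.
    ThreadPool.SetMinThreads(desiredThreads, ioMin);
    Thread.Sleep(10); // small pause to give the pool time to spin the thread up
}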
I opened a bug about the latter issue because that's the one I'm most confident is not working as intended: https://bugzilla.xamarin.com/show_bug.cgi?id=7055
If #jake-moshenko is right about ServicePointManager.DefaultConnectionLimit not having any effect if changed in Mono, please file this as a bug in http://bugzilla.xamarin.com/.
However I would try some things before discarding this completely as a Mono issue:
Try using the SGen garbage collector instead of the old Boehm one, by passing --gc=sgen as a flag to mono.
If the above doesn't help, upgrade to Mono 3.2 (which, BTW, defaults to the SGen GC too), because there have been a lot of fixes since you asked the question.
If the above doesn't help, build your own Mono (master branch), as this important pull request about threading has been merged recently.
If the above doesn't help, build your own Mono with this pull request added. If it fixes your problem, please add a "+1" to the pull request. It might be a fix for bug 7055.
I'm using a 3rd-party library that makes a number of HTTP calls. By decompiling the code, I've determined that it is creating and using raw HttpWebRequests, all going to a single URL. The issue is that some of the requests don't get closed properly. After some time, all new HttpWebRequests block forever when the library calls GetRequestStream() on them. I've determined this blocking is due to the ConnectionLimit on the ServicePoint for that particular host, which has the default value of 2. In other words, the library has opened 2 requests and then tries to open a 3rd, which blocks.
I want to protect against this blocking. The library is fairly resilient and will reconnect itself, so it's okay if I kill the existing connections it has made. The problem is that I don't have access to any of the HttpWebRequest or HttpWebResponse objects this library creates. However, I do know the URL it accesses, and therefore I can access the ServicePoint for it.
var sp = ServicePointManager.FindServicePoint(new Uri("http://UrlThatIKnowAbout.com"));
(Note: KeepAlive is enabled on these HttpWebRequests)
This worked, though I'm not sure it's the best way to solve the problem.
Get the service point object for the URL:
var sp = ServicePointManager.FindServicePoint(new Uri("http://UrlThatIKnowAbout.com"));
Increase the ConnectionLimit to int.MaxValue.
Create a background thread that periodically checks the CurrentConnections count on the service point. If it goes above 5, call CloseConnectionGroup().
Set MaxIdleTime to 1 hour (instead of the default).
Setting the ConnectionLimit should prevent the blocking. The monitor thread will ensure that too many connections are never active at the same time. Setting MaxIdleTime should serve as a fallback.
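A rough sketch of the whole thing (the threshold and polling interval are examples; the connection-group name passed to CloseConnectionGroup has to match whatever ConnectionGroupName the library's requests use, with an empty string being the default group, so verify that against your own traffic):
var sp = ServicePointManager.FindServicePoint(new Uri("http://UrlThatIKnowAbout.com"));
sp.ConnectionLimit = int.MaxValue;
sp.MaxIdleTime = (int)TimeSpan.FromHours(1).TotalMilliseconds;

var monitor = new Thread(() =>
{
    while (true)
    {
        if (sp.CurrentConnections > 5)
        {
            sp.CloseConnectionGroup(string.Empty); // drop the default connection group
        }
        Thread.Sleep(TimeSpan.FromSeconds(30));
    }
});
monitor.IsBackground = true;
monitor.Start();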
After reading http://msdn.microsoft.com/en-us/library/system.servicemodel.description.servicethrottlingbehavior.maxconcurrentsessions.aspx
and
http://msdn.microsoft.com/en-us/library/system.servicemodel.description.servicethrottlingbehavior.maxconcurrentcalls.aspx
I have concluded that:
MaxConcurrentSessions is the number of queued sessions per client (default of 10)
MaxConcurrentCalls is the number of active connections on the service (default of 16), i.e. all clients accessing the service at any one time, meaning that if 2 clients did 10 calls each, 4 would have to wait in the queue for processing.
Questions:
Is my conclusion correct?
How does MaxConnections interact with these?
Does MaxConnections take precedence over the MaxConcurrentX settings?
(Note: I am using .NET 3.5)
MaxConcurrentCalls has to do with the number of calls on the service that are currently executing.
MaxConnections has to do with the total number of open connections on the service, regardless if the service is executing anything for the connection.
For example, if a client opens a connection to the service, calls a method, and is waiting for the method to return, it will count against the MaxConcurrentCalls. As soon as the service returns a response to the client’s method call, it will not count against the MaxConcurrentCalls… even if you didn’t close the client-side proxy. Assuming you didn’t close the client-side proxy, the connection would count towards the MaxConnections on the service since you still have the connection open, but it’s not currently executing anything on the service so it would not count against the MaxConcurrentCalls.
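For reference, here is roughly where the two kinds of settings live in configuration (a sketch with example values; the service, contract and binding names are placeholders). The throttle is a service behavior, while MaxConnections is a property of the TCP transport binding:
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior name="ThrottledBehavior">
        <!-- Calls currently executing vs. sessions the service will accept. -->
        <serviceThrottling maxConcurrentCalls="16" maxConcurrentSessions="10" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <bindings>
    <netTcpBinding>
      <!-- maxConnections caps the pooled/open connections on the transport itself. -->
      <binding name="TcpWithConnectionCap" maxConnections="20" />
    </netTcpBinding>
  </bindings>
  <services>
    <service name="MyNamespace.MyService" behaviorConfiguration="ThrottledBehavior">
      <endpoint address="net.tcp://localhost:9000/MyService"
                binding="netTcpBinding"
                bindingConfiguration="TcpWithConnectionCap"
                contract="MyNamespace.IMyService" />
    </service>
  </services>
</system.serviceModel>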