Why am I limited on concurrent HTTP connections using threads? [duplicate] - c#

This question already has answers here:
Trying to run multiple HTTP requests in parallel, but being limited by Windows (registry) (4 answers)
HttpWebRequest timing out on third try, only two connections allowed HTTP 1.1 [duplicate]
Remove 2 connections HttpWebRequest limitation in C# (2 answers)
Closed 3 years ago.
I'm trying to open about 25 connections to a host at the same time.
My OS is Windows 10.
For testing, I set up a simple website on my local IIS that responds with some simple data to the user after a 2-second delay (Thread.Sleep(2000)).
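The server side is essentially just this (a minimal sketch as a hypothetical ASHX handler; the class name and response text are illustrative, since the real site isn't included here):
// Minimal sketch of the delayed test endpoint (hypothetical ASHX handler;
// the class name and response text are illustrative).
public class DelayHandler : System.Web.IHttpHandler
{
    public void ProcessRequest(System.Web.HttpContext context)
    {
        System.Threading.Thread.Sleep(2000); // simulate slow work
        context.Response.ContentType = "text/plain";
        context.Response.Write("hello");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}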
Now using this code on the client:
const int len = 25;
for (int i = 0; i < len; i++)
{
    new Thread(new ParameterizedThreadStart(idx =>
    {
        // Start downloading data; dispose the WebClient when done.
        using (var client = new WebClient())
        {
            var res = client.DownloadString("http://192.168.1.101:8090/");
        }
        // Log the thread index and completion time.
        Console.WriteLine($"{Convert.ToInt32(idx):00} done at: {DateTime.Now:HH:mm:ss:ffff}");
    })).Start(i);
}
I got the following result:
Thread 01 done at 40:8476 ms
Thread 00 done at 40:8476 ms
Thread 03 done at 40:8496 ms
Thread 04 done at 40:8496 ms
Thread 02 done at 40:8506 ms
Thread 05 done at 40:8506 ms
Thread 07 done at 40:8516 ms
Thread 06 done at 40:8516 ms
Thread 08 done at 40:8536 ms
Thread 09 done at 40:8545 ms
Thread 11 done at 42:8510 ms
Thread 10 done at 42:8510 ms
Thread 12 done at 42:8560 ms
Thread 14 done at 42:8560 ms
Thread 15 done at 42:8570 ms
Thread 13 done at 42:8580 ms
Thread 16 done at 42:8590 ms
Thread 17 done at 42:8590 ms
Thread 18 done at 42:8610 ms
Thread 19 done at 42:8610 ms
Thread 21 done at 44:8565 ms
Thread 20 done at 44:8565 ms
Thread 23 done at 44:8634 ms
Thread 24 done at 44:8654 ms
Thread 22 done at 44:8654 ms
The above result tells us that:
1. Threads 0 to 9 got the data at the same time (second 40).
2. Threads 10 to 19 got the data at the same time, 2 seconds after the previous batch (second 42).
3. Threads 20 to 24 got the data at the same time, 2 seconds after that (second 44).
Now my question is: WHO is limiting me, why does it open only 10 HTTP connections at the same time, and how can I set it to unlimited?
Answers for any other platform or programming language are also welcome.

Who? Your OS and web server manufacturer: Microsoft.
Why? Because Windows 10 is a client OS, and Microsoft doesn't want you to host any serious web applications on that.
See for example:
Why is IIS allowing only 3 connections at the time?
Are there any connection limits on Windows 7 IIS v7.5?
Maximum number of http-requests on IIS with Windows 7
And from "While using signalr, will there be any connection limits on IIS", which links to the SignalR documentation:
When SignalR is hosted in IIS, the following versions are supported. Note that if a client operating system is used, such as for development (Windows 8 or Windows 7), full versions of IIS or Cassini should not be used, since there will be a limit of 10 simultaneous connections imposed, which will be reached very quickly since connections are transient, frequently re-established, and are not disposed immediately upon no longer being used. IIS Express should be used on client operating systems.
There are other posts, like Does windows 10 connection limit apply to self-hosted applications?, mentioning 20 connections, but that's 20 devices (probably recognized by remote IP address) connecting to certain Windows services (SMB, IIS, ...).
The IIS limit is 10, and has been for many years, I think since Windows 7.
So the first half of the answer to this question is:
If you need more than 10 simultaneous incoming HTTP connections while developing on Windows 10, use IIS Express or any other web server than IIS or Cassini.
But there's a second half to this answer. If you need more than 10 (or 2, or ..., depending on the environment) simultaneous outgoing HTTP connections, it's ServicePointManager that manages this. See:
Trying to run multiple HTTP requests in parallel, but being limited by Windows (registry)
How can I programmatically remove the 2 connection limit in WebClient
Microsoft Docs: Network Programming in the .NET Framework - Managing Connections
So change the limit before executing the requests:
System.Net.ServicePointManager.DefaultConnectionLimit = 25;
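Note that this has to run before the first request to the host is made; a ServicePoint that already exists keeps its previous limit. A short sketch showing both the global default and a per-host override (the URL is the one from the question):
// Raise the limit for all hosts (must run before the first request)...
System.Net.ServicePointManager.DefaultConnectionLimit = 25;

// ...or raise it for the one host under test only.
var servicePoint = System.Net.ServicePointManager.FindServicePoint(
    new Uri("http://192.168.1.101:8090/"));
servicePoint.ConnectionLimit = 25;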

Related

Kestrel request per second issue

I'm a newbie to ASP.NET Core.
I'm writing a web API service which stores posted data to a database. In theory there will be about 300-400 requests per second to the server in the future, and the response time must be less than 10 seconds.
But first of all I tried to run some load tests with Locust.
I wrote a simple app with one controller and only one POST method, which simply returns Ok() without any processing.
I tried to create load on this service with 1000 users. My service runs under Ubuntu 16.04 with .NET Core 2.1 (2 Xeon 8175M CPUs with 8 GB of RAM). Locust runs from a dedicated computer.
But I see only ~400 RPS and a response time of about 1400 ms. For an empty action that is a very big value.
I turned off all logging and ran in production mode, but no luck - still ~400 RPS.
In the system monitor (I use nmon) I see that both CPUs are loaded only to 12-15% (24-30% total). I have about 3 GB of free RAM, no network usage (about 200-300 KB/s), and no disk usage, so the system has the hardware resources to handle the requests.
So I think there is a problem with some configuration, or maybe with system resources like sockets, handles, etc.
I also tried to use libuv instead of managed sockets, but the result is the same.
In the Kestrel configuration I explicitly set Limits.MaxConcurrentConnections and Limits.MaxConcurrentUpgradedConnections to null (but that is the default value); see the sketch after the questions below.
So, I have two questions:
- In theory, can Kestrel provide high RPS?
- If so, can you give me some advice on where to start (links, articles, and so on)?
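A sketch of the setup described above (ASP.NET Core 2.1; the class and route names are illustrative, the Limits lines mirror what I set, and Startup is the usual MVC startup class):
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc;

public class Program
{
    public static void Main(string[] args) => BuildWebHost(args).Run();

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseKestrel(options =>
            {
                // Explicitly null = unlimited (also the default).
                options.Limits.MaxConcurrentConnections = null;
                options.Limits.MaxConcurrentUpgradedConnections = null;
            })
            .UseStartup<Startup>()
            .Build();
}

// One controller with a single POST action that returns Ok() without processing.
[Route("api/[controller]")]
public class LoadTestController : Controller
{
    [HttpPost]
    public IActionResult Post() => Ok();
}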

App to app authentication with Windows Authentication taking 10 seconds

We are doing HTTP calls with Windows Authentication between ASP.NET apps (specifically a .NET Core app and a .NET Framework 4.5.1 app) using System.Net.Http.HttpClient like this:
var client = new HttpClient(new HttpClientHandler { Credentials = CredentialCache.DefaultCredentials });
var response = await client.GetAsync(url);
...
This works fine, except that the first request takes 10 seconds. The requests then go fast for about 40 seconds, and then one request takes 10 seconds again. This cycle goes on forever.
Looking at the IIS logs on the receiving end, we can see that every request is denied (401) and then a follow-up request goes through, and every so often the delay between these is about 10 seconds. This is all invisible to the client code - it is handled by the underlying framework.
Example:
2017-03-17 14:19:40 10.241.108.23 GET /person/search/john - 80 - 10.211.37.246 - 401 2 5 31
2017-03-17 14:19:40 10.241.108.23 GET /person/search/john - 80 utv\frank 10.211.37.246 - 200 0 0 93
2017-03-17 14:19:41 10.241.108.23 GET /person/search/johnn - 80 - 10.211.37.246 - 401 2 5 46
2017-03-17 14:19:51 10.241.108.23 GET /person/search/johnn - 80 utv\frank 10.211.37.246 - 200 0 0 281
It seems as if the credentials are somehow cached and have to be refreshed every 40-ish seconds.
It is worth noting that this problem doesn't occur when both applications are run locally, only when they are run in the actual hosting environment.
What's going on?
Is it expected behaviour that the consumer has to make two calls for every request? And why do some of the requests take 10 seconds to authenticate?
Any help would be appreciated.
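For completeness, one mitigation we have looked at is HttpClientHandler.PreAuthenticate (a hedged sketch; whether it helps with connection-based schemes like NTLM/Negotiate, which is what we use, is exactly what we're unsure about):
// Hedged sketch: ask the handler to send the Authorization header proactively
// instead of waiting for a 401 challenge. NTLM/Negotiate authenticate per
// connection, so this may not remove the challenge round trip.
var handler = new HttpClientHandler
{
    Credentials = CredentialCache.DefaultCredentials,
    PreAuthenticate = true
};
var client = new HttpClient(handler);
var response = await client.GetAsync(url);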

How to avoid DirectoryOperationException: The Server Is Busy when USNChange Poll-Synchronizing an AD LDS directory

We are running a .NET 4.5 console application that performs USNChanged polling on a remote LDAP server and then synchronizes the records into a local AD LDS on Windows Server 2008 R2. The DirSync control was not an option on the remote server, but getting the records isn't the problem.
The directory is quite large, containing millions of user records. The console app successfully pulls down the records and builds a local cache. It then streams through the cache and does lookup/update/insert as required for each record on the local directory. The various network constraints in the environment had performance running between 8 and 80 records per second. As a result, we used the Task Parallel Library to improve performance:
var totalThreads = Environment.ProcessorCount * 2;
var options = new ParallelOptions { MaxDegreeOfParallelism = totalThreads };

// Each parallel iteration processes one 250-record batch synchronously.
Parallel.ForEach(Data.ActiveUsersForSync.Batch(250), options, (batch, loopstate) =>
{
    if (!loopstate.IsExceptional
        && !loopstate.IsStopped
        && !loopstate.ShouldExitCurrentIteration)
    {
        ProcessBatchSync(batch);
    }
});
After introducing this block, performance increased to between 1,000 and 1,500 records per second. Some important notes (a sketch of a single lookup follows this list):
- This is running on an eight-core machine, so Environment.ProcessorCount * 2 allows up to 16 operations simultaneously.
- The MoreLinq library's batching mechanism is used, so each task in the parallel set processes 250 records on a given connection (from the pool) before returning.
- Each batch is processed synchronously (no additional parallelism).
- The implementation relies on System.DirectoryServices.Protocols (Win32), NOT System.DirectoryServices (ADSI).
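As mentioned, here is a sketch of what a single synchronous lookup looks like with System.DirectoryServices.Protocols (host, port, base DN, and filter are placeholders; the real batch code also handles updates and inserts):
using System.DirectoryServices.Protocols;

// Sketch of one synchronous lookup against the local AD LDS instance
// (host, port, base DN, and filter are placeholders).
var connection = new LdapConnection("localhost:50000");
connection.SessionOptions.ProtocolVersion = 3;

var request = new SearchRequest(
    "CN=Users,CN=Sync,DC=local",   // search base (placeholder)
    "(cn=jdoe)",                   // filter (placeholder)
    SearchScope.Subtree,
    null);                         // null = return all attributes

var response = (SearchResponse)connection.SendRequest(request);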
Whenever a periodic full synchronization is executed, the system gets through about 1.1 million records and then AD LDS returns "The Server Is Busy" and the system throws a DirectoryOperationException. The number it completes before erroring is not constant, but it is always near 1.1 million.
According to Microsoft (http://support.microsoft.com/kb/315071), the MaxActiveQueries value in AD LDS is no longer enforced in Windows Server 2008+. I can't change the value anyway; it doesn't show up. They also show the "Server Is Busy" error coming back only from a violation of that value or from having too many open notification requests per connection. This code only sends simple lookup/update/insert LDAP commands and requests no notifications from the server when something is changed.
As I understand it, I've got at most 16 threads working in tandem to query the LDS. While they are doing it very quickly, that's the maximum number of queries coming in at any given moment, since each of these is processed single-threaded.
Is the Microsoft document incorrect? Am I misunderstanding another component here? Any assistance is appreciated.

Windows Service 100% CPU with C# IMAP Chilkat

A Windows service that uses Chilkat IMAP executes normally for several days and then gets stuck at 100% CPU. This reproduces every several days (3-7 days).
I'm using Chilkat IMAP for .NET 4.5, version 9.5.0, 64-bit.
The way I shut down the connection in C# code (this runs once at the end of every iteration):
if (imapCon != null)
{
    // Log out and disconnect cleanly before disposing the connection.
    if (imapCon.IsLoggedIn())
    {
        imapCon.Logout();
    }
    if (imapCon.IsConnected())
    {
        imapCon.Disconnect();
    }
    imapCon.Dispose();
    imapCon = null;
}
From the logs, I get: WSAECONNABORTED An established connection was aborted by the software in your host machine.
The service that runs it resides in a virtual cloud environment.
Is this an issue with how the Chilkat IMAP connection is implemented, with the cloud environment, or with something in my service (the application that uses the Chilkat module)?
The following are Chilkat logs:
DllDate: May 6 2014
ChilkatVersion: 9.5.0.38
UnlockPrefix: SNILIKIMAPMAIL
Username: WIN-OCJD4A0985E:SYSTEM
Architecture: Little Endian; 64-bit
Language: .NET 4.5 / x64
VerboseLogging: 0
listMailboxes:
bSubscribedOnly: 0 reference:
mailbox: *
Escaping quotes and backslashes in mailbox name...
utf7EncodedMailboxPath: *
getCompleteResponse:
WindowsError: An established connection was aborted by the software in your host machine.
WindowsErrorCode: 0x2745
numBytesRequested: 5
Failed to receive data on the TCP socket
Failed to read beginning of SSL/TLS record.
Failed to read incoming handshake messages. (3)
(leaveContext)
Client handshake failed. (3)
(leaveContext)
ConnectFailReason: 0
(leaveContext) failReason: 0
connect failed.
(leaveContext) Login:
DllDate: May 6 2014
ChilkatVersion: 9.5.0.38
UnlockPrefix: SNILIKIMAPMAIL
Username: WIN-OCJD4A0985E:SYSTEM
Architecture: Little Endian; 64-bit
Language: .NET 4.5 / x64
VerboseLogging: 0
login: **
ConnectionType: SSL/TLS
Error sending on socket (1)
SocketError: WSAECONNABORTED An established connection was aborted by
the software in your host machine.
For more information see this Chilkat Blog post:
http://www.cknotes.com/?p=91
send_size: 90
Failed to send TLS message.
Failed to send LOGIN command
Failed.
If a Chilkat method call never returns and utilizes 100% of the CPU, then you would not be able to get the contents of the LastErrorText (which is the Chilkat log you have provided). The fact that you have a LastErrorText indicates that the Chilkat method call has returned and your app then proceeded to display the LastErrorText.
My guess is that your app has a loop where normally a Chilkat method call involving communications with an IMAP mail server succeeds (with a normal amount of time spent communicating with the server), but then for some reason the method call begins returning immediately with a failed status. At that point, I suspect your application is probably in a tight loop calling the Chilkat method over and over. The 100% CPU utilization is likely caused by the loop in your app, NOT by code within a call to a Chilkat method.
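If that's the case, adding a backoff when a call fails keeps the loop from spinning. A hedged sketch (the loop shape and the Reconnect() helper are hypothetical, not taken from your app):
// Hedged sketch: back off instead of retrying a failed call in a tight loop.
// The loop shape and the Reconnect() helper are hypothetical.
while (running)
{
    bool ok = imapCon.SelectMailbox("Inbox");
    if (!ok)
    {
        Console.WriteLine(imapCon.LastErrorText);                // capture the failure
        System.Threading.Thread.Sleep(TimeSpan.FromSeconds(30)); // don't spin at 100% CPU
        Reconnect();                                             // hypothetical: rebuild imapCon
        continue;
    }
    // ... normal processing of the selected mailbox ...
}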

MS Enterprise Library data access - Understanding SQL 'user connections' management

I'm trying to understand how MS Enterprise Library's data access block manages its connections to SQL. The issue I have is that under a steady load (from a load test), at 10-minute intervals the number of connections to SQL increases quickly, which causes a noticeable jump in page response times from the website.
This is the scenario I'm running:
Visual Studio load test tools, running against 3 web servers behind a load balancer
The tools give full visibility over the performance counters to all web + DB boxes
The tests take ~10 seconds each and perform 4 inserts (form data) and some trivial selects
There are 60 tests running concurrently. There is no increase or decrease in load during the entire test.
The test is run for between 20 minutes and 3 hours, with consistent results.
And this is the issue we see:
Exactly every 10 minutes, the performance counter from SQL for SQL General: User Connections increases - by ~20 connections total
The pages performing the HTTP post / DB insert are the ones most significantly affected. The other pages show moderate, but noticeable rises.
The CPU/memory load on the web servers is unaffected
This increase corresponds with a notable bump in page response times - E.g. from .3 seconds average to up to 5 seconds
After ~5 minutes it releases many of the connections, with no effect on web performance
The following 5 minutes of testing gives the same (normal) web performance
Ultimately, the graph looks like a square wave
Happens all over again, 10 minutes after the first rise
What I've looked at:
Database calls:
All calls in the database start with:
SqlDatabase database = new SqlDatabase([...]);
And execute either proc with no required output:
return database.ExecuteScalar([...], [...]);
Or read wrapped in a using statement:
using (SqlDataReader reader = (SqlDataReader)database.ExecuteReader([...], [...]))
{
    [...]
}
There are no direct uses of SqlConnection, no .Open() or .Close() methods, and no exceptions being thrown
Database verification:
We've run SQL Profiler over the login/logout events and taken snapshots with the sp_who2 command, showing who owns the connections. The latter shows that the web site (identified by machine + credential) is indeed holding the connections.
There are no scheduled jobs (DB or web server), and the user connection load is stable when there is no load from the web servers.
Connection pool config
I know the min & max size of the connection pool can be altered with the connection string.
E.g.:
"Data Source=[server];Initial Catalog=[x];Integrated Security=SSPI;Max
Pool Size=75;Min Pool Size=5;"
A fall-back measure may be to set the minimum size to ~10.
I understand the default max is 100, and the default min is 0 (from here)
I'm a little bit loath to conflate connection pooling (specific to this setting) with the User Connections performance counter from SQL. This article introduces these connection pools as being used to manage connection strings, which seems different from what I assume the pool does (hold a set of connections generally available, to avoid the cost of re-opening them on SQL).
I still haven't seen any configuration parameter that handily defaults to 5 or 10 minutes, to zero in on...
So, any help is appreciated.
I know that 10-minute spikes sound like a change in load, or new activity happening - but we've worked quite hard to isolate those and any other factors - and for this question, I am hoping to understand EL scaling its connections up and down.
Thanks.
So, it turns out that SQL user connections are created and added to the pool whenever all other connections are busy. So when long-running queries occur, or the DB is otherwise unresponsive, it will choose to expand the pool to manage the load.
The cause in our case happened to be a SQL replication job (unfortunate, but found...), and the change in the number of User Connections was just a symptom, not a possible cause.
Although the cause turned out to be elsewhere, I now feel I understand the connection pool management in this (and presumably other) SQL libraries.
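For anyone tuning the same thing: pinning the pool bounds is just a connection-string change, which Enterprise Library picks up like any ADO.NET client (a sketch; the server, catalog, and procedure names are placeholders):
using System.Data;
using Microsoft.Practices.EnterpriseLibrary.Data.Sql;

// Sketch: pin the pool bounds via the connection string (placeholder names).
const string connectionString =
    "Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;" +
    "Min Pool Size=10;Max Pool Size=75;";

var database = new SqlDatabase(connectionString);

// Calls then draw from a pool that never shrinks below 10 connections.
object result = database.ExecuteScalar(CommandType.StoredProcedure, "dbo.InsertFormData");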
