I have a cache instance running on Windows Azure. I'm connecting to it from my web application and getting intermittent exceptions with the following message:
ErrorCode:SubStatus:There is a temporary failure.
Please retry later. (One or more specified cache servers are
unavailable, which could be caused by busy network or servers. For
on-premises cache clusters, also verify the following conditions.
Ensure that security permission has been granted for this client
account, and check that the AppFabric Caching Service is allowed
through the firewall on all cache hosts. Also the MaxBufferSize on the
server must be greater than or equal to the serialized object size
sent from the client.). Additional Information : The client was trying
to communicate with the server:
net.tcp://myserver.cache.windows.net:22234.
I've been able to duplicate the problem with this snippet in LINQPad:
var config = new DataCacheFactoryConfiguration
{
    AutoDiscoverProperty = new DataCacheAutoDiscoverProperty(true, "myserver.cache.windows.net"),
    SecurityProperties = new DataCacheSecurity("key", false)
};
var factory = new DataCacheFactory(config);
var client = factory.GetDefaultCache();

//client.Put("foo", "bar");

for (int i = 0; i < 100; i++)
{
    System.Threading.Tasks.Task.Factory.StartNew(o =>
    {
        var i1 = (int)o;
        try
        {
            client.Get("foo").Dump();
        }
        catch (Exception e)
        {
            e.Message.Dump();
        }
    }, i);
}
If I run this snippet as-is, spawning more than about 50 threads, I get the error. If I uncomment the initial Put(), I can run it with 10,000 threads. Either way, I make sure the entry is in the cache before I run this. I've tried using pessimistic locking, and it does not seem to have any effect. I'm using the latest client DLLs from NuGet. I've tried scaling the cache up to 1 GB with no other usage besides this snippet.
Since my requests in my web app are coming in on different threads, I believe this reasonably simulates what's happening in my app. And I'm definitely getting the same exception in both cases. Can anyone suggest a way to avoid this exception? Does it have to do with the initial Put() happening on the same thread as the constructor? That seems unlikely but it's the only thing I can do in this test scenario to eliminate the exception.
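One mitigation for this class of transient error is simply to retry the call. Below is a minimal sketch, assuming the Microsoft.ApplicationServer.Caching client and its DataCacheErrorCode.RetryLater error code; GetWithRetry is a hypothetical helper, and the attempt count and backoff should be tuned to what you actually observe:

// Hedged sketch: retry transient "temporary failure" errors with a short backoff.
static object GetWithRetry(DataCache cache, string key, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return cache.Get(key);
        }
        catch (DataCacheException ex)
        {
            // RetryLater is the code carried by the "temporary failure" message;
            // rethrow anything else, or once we are out of attempts.
            if (ex.ErrorCode != DataCacheErrorCode.RetryLater || attempt >= maxAttempts)
                throw;
            System.Threading.Thread.Sleep(100 * attempt); // brief linear backoff
        }
    }
}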
We require that data be highly correct, more so than we require 100% uptime (I recognize this may mean that OpenTelemetry is not the best choice, but I would still like to know if it's possible).
We are exporting to Elastic using APM.
I have noticed two significant issues.
Issue 1: I provide no/incorrect bearer token. No errors (or traces) are recorded. Silent failure.
Issue 2: I try to write a huge number of traces (100k) as fast as possible. About 2k make it and the rest are discarded.
var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .SetSampler(new AlwaysOnSampler())
    .AddSource("MyCompany.MyProduct.MyLibrary")
    .AddOtlpExporter(o =>
    {
        o.ExportProcessorType = ExportProcessorType.Batch;
        o.TimeoutMilliseconds = 100 * 1000;
        o.Protocol = OtlpExportProtocol.Grpc;
        o.Endpoint = new Uri("https://somepath:443");
        o.Headers = "Authorization=Bearer token1";
    })
    //.AddConsoleExporter()
    .Build();
Task.Run(() =>
{
    for (int i = 0; i < 100000; i++)
    {
        using (var activity = MyActivitySource.StartActivity("SayHello"))
        {
            activity?.SetTag("foo", 1);
        }
    }
});
Console.WriteLine("Done");
Console.ReadLine();
I would start an OTel Collector locally, point your app code at the local Collector, and have the local Collector export the traces to Elastic APM.
Configure the local OTel Collector as follows:
1.) Enable the Collector's own telemetry metrics and monitor the failure metrics of the OTLP exporter you use, e.g. https://grafana.com/grafana/dashboards/15983
2.) Use aggressive batching (a batch processor) before exporting to Elastic APM, e.g.
processors:
  batch:
    send_batch_size: 10000
    timeout: 5s
    send_batch_max_size: 0
The Collector will decrease the ingestion rate to Elastic APM thanks to batching. Of course you need to test it first, because some trace backend implementations rely on defaults - e.g. gRPC for Go has a default 4 MB gRPC message size, and 10k traces in one message will very likely exceed this limit. Very often these infrastructure/app limits are not documented, so stress testing before production is highly recommended. Keep in mind that batching requires additional memory, so monitor memory usage as well.
I would also customize the retry and queue behaviour.
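On the SDK side, the silent drop in Issue 2 is consistent with the .NET batch processor's default MaxQueueSize of 2048 (roughly the ~2k spans that survive): once the queue is full, additional spans are discarded. A minimal sketch of enlarging the queue and flushing before exit, assuming the OtlpExporterOptions.BatchExportProcessorOptions property and TracerProvider.ForceFlush from the OpenTelemetry .NET SDK:

using System.Diagnostics;
using OpenTelemetry;
using OpenTelemetry.Trace;

var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("MyCompany.MyProduct.MyLibrary")
    .AddOtlpExporter(o =>
    {
        o.Endpoint = new Uri("https://somepath:443");
        o.BatchExportProcessorOptions = new BatchExportProcessorOptions<Activity>
        {
            MaxQueueSize = 100000,             // default is 2048; size for your burst
            MaxExportBatchSize = 512,
            ScheduledDelayMilliseconds = 1000,
        };
    })
    .Build();

// ... generate activities ...

// Flush before the process exits, otherwise anything still queued is lost.
tracerProvider.ForceFlush();
tracerProvider.Dispose();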
I'm using Redis as a distributed cache in an ASP.NET app.
It works until the Redis server becomes unavailable, and the question is:
How do I properly handle disconnection issues?
Redis is configured this way (Startup.cs):
services.AddDistributedRedisCache(...)
Option AbortOnConnectFail is set to false
It is injected into a service via the constructor:
...
private IDistributedCache _cache;

public MyService(IDistributedCache cache)
{
    _cache = cache;
}
When Redis is down the following code throws an exception (StackExchange.Redis.RedisConnectionException: SocketFailure on 127.0.0.1:6379/Subscription ...):
var val = await _cache.GetAsync(key, cancellationToken);
I don't think that using reflection to inspect the connection state inside the _cache object is a good approach. So are there any 'right' options for handling this?
Maybe you can check out the Polly project. It has Retry/WaitAndRetry/RetryForever and Circuit Breaker policies that can be handy, so you can catch that RedisConnectionException and then retry or fall back to another method.
There is also a plugin for the Microsoft DistributedCache provider.
Check it out.
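A minimal sketch of the retry part, assuming the Polly NuGet package and the StackExchange.Redis exception type (the database fallback is left to the caller):

// using Polly; using StackExchange.Redis;
// Retry the cache read on connection failures, then let the caller
// fall back to the database if Redis is still down.
var retryPolicy = Policy
    .Handle<RedisConnectionException>()
    .WaitAndRetryAsync(2, attempt => TimeSpan.FromSeconds(5));

byte[] val;
try
{
    val = await retryPolicy.ExecuteAsync(() => _cache.GetAsync(key, cancellationToken));
}
catch (RedisConnectionException)
{
    val = null; // still unavailable; query the database instead
}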
First of all, why is your Redis server becoming unavailable? And for how long? You should minimize these kinds of situations. Do you use Redis as a service from AWS, i.e. ElastiCache? If so, you can configure it to promote a Redis slave/read-replica server to become the master if the first master fails.
To improve fault tolerance and reduce write downtime, enable Multi-AZ with Automatic Failover for your Redis (cluster mode disabled) cluster with replicas. For more information, see Minimizing downtime in ElastiCache for Redis with Multi-AZ.
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
Apart from that, a fallback solution for an unresponsive Redis server would be to retrieve the objects/entities that you are caching in Redis from the database when the Redis server is down. You can retry the Redis call two times with a few seconds between each retry, and if the server is still down, just query the database. This results in a performance hit, but it is a better solution than throwing an error.
byte[] val = null;
int retryCount = 0;
do
{
    try
    {
        val = await _cache.GetAsync(key, cancellationToken);
    }
    catch (Exception ex)
    {
        retryCount++;
        await Task.Delay(retryCount * 2000, cancellationToken); // back off before retrying
    }
}
while (retryCount < 3 && val == null);

if (val == null)
{
    // still down: fall back and load the value from the database
    val = await LoadFromDatabaseAsync(key); // placeholder for your database call
}
I currently have a single application that needs to be started from a Windows service that I am coding in .NET 3.5. This application currently runs as the user who ran the service, in my case the SYSTEM user. When it runs as the SYSTEM user, the application is not shown on the user's desktop. Thoughts? Advice?
//constructor
private Process ETCHNotify = new Process();

//StartService()
ETCHNotify.StartInfo.FileName = baseDir + "\\EtchNotify.exe";
ETCHNotify.StartInfo.UseShellExecute = false;

//BackgroundWorkerThread_DoWork()
if (!systemData.GetUserName().Equals(""))
{
    // start ETCHNotify
    try
    {
        ETCHNotify.Start();
    }
    catch (Exception ex)
    {
        systemData.Run("ERR: Notify can't start: " + ex.Message);
    }
}
I only execute the try/catch if GetUserName(), a function I have written that determines the username of the user running explorer.exe, returns a non-empty string.
Again, to reiterate: the desired functionality is that this starts ETCHNotify in a state that allows it to interact with the currently logged-in user as determined by GetUserName().
A collage of some posts found around (this and this):
Note that as of Windows Vista, services are strictly forbidden from interacting directly with a user:
Important: Services cannot directly interact with a user as of Windows
Vista. Therefore, the techniques mentioned in the section titled Using
an Interactive Service should not be used in new code.
This "feature" is broken, and conventional wisdom dictates that you shouldn't have been relying on it anyway. Services are not meant to provide a UI or allow any type of direct user interaction. Microsoft has been cautioning that this feature be avoided since the early days of Windows NT because of the possible security risks.
There are some possible workarounds, however, if you absolutely must have this functionality. But I strongly urge you to consider its necessity carefully and explore alternative designs for your service.
Using WTSEnumerateSessions to find the right session, then CreateProcessAsUser to start the application on that desktop (you pass the desktop name as part of the STARTUPINFO structure), is correct.
However, I would strongly recommend against doing this. In some environments, such as Terminal Server hosts with many active users, determining which desktop is the 'active' one isn't easy, and may not even be possible.
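If you do go down that road anyway, the shape of the calls is roughly as follows. This is a hedged, minimal sketch (console session only, no error handling, token/handle cleanup omitted), using the standard P/Invoke signatures for wtsapi32/advapi32:

// requires: using System; using System.Runtime.InteropServices;
[DllImport("kernel32.dll")]
static extern uint WTSGetActiveConsoleSessionId();

[DllImport("wtsapi32.dll", SetLastError = true)]
static extern bool WTSQueryUserToken(uint sessionId, out IntPtr token);

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
struct STARTUPINFO
{
    public int cb;
    public string lpReserved, lpDesktop, lpTitle;
    public int dwX, dwY, dwXSize, dwYSize, dwXCountChars, dwYCountChars, dwFillAttribute, dwFlags;
    public short wShowWindow, cbReserved2;
    public IntPtr lpReserved2, hStdInput, hStdOutput, hStdError;
}

[StructLayout(LayoutKind.Sequential)]
struct PROCESS_INFORMATION
{
    public IntPtr hProcess, hThread;
    public int dwProcessId, dwThreadId;
}

[DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
static extern bool CreateProcessAsUser(
    IntPtr hToken, string lpApplicationName, string lpCommandLine,
    IntPtr lpProcessAttributes, IntPtr lpThreadAttributes, bool bInheritHandles,
    uint dwCreationFlags, IntPtr lpEnvironment, string lpCurrentDirectory,
    ref STARTUPINFO lpStartupInfo, out PROCESS_INFORMATION lpProcessInformation);

static void LaunchOnUserDesktop(string exePath)
{
    uint sessionId = WTSGetActiveConsoleSessionId();
    IntPtr userToken;
    if (WTSQueryUserToken(sessionId, out userToken)) // the service must run as SYSTEM
    {
        var si = new STARTUPINFO();
        si.cb = Marshal.SizeOf(typeof(STARTUPINFO));
        si.lpDesktop = @"winsta0\default"; // the interactive desktop
        PROCESS_INFORMATION pi;
        CreateProcessAsUser(userToken, exePath, null, IntPtr.Zero, IntPtr.Zero,
            false, 0, IntPtr.Zero, null, ref si, out pi);
    }
}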
A more conventional approach would be to put a shortcut to a small client app for your service in the global startup group. This app will then launch along with every user session, and can be used start other apps (if so desired) without any juggling of user credentials, sessions and/or desktops.
Ultimately, in order to solve this, I took the advice of @marco and the posts he mentioned. I have created the service to be entirely independent of the tray application that interacts with the user. I did, however, install the tray application via registry 'start up' methods along with the service. The service installer will now also install the application which interacts with the user... This was the safest and most complete method.
Thanks for your help, everyone.
I wasn't going to answer this since you already answered it (and it's, oh, what? going on 2.5 years OLD now!?), but there are ALWAYS those people who are searching for this same topic and reading the answers...
In order to get my service to interact with the desktop, no matter WHAT desktop, nor how MANY desktops, nor whether the service was even running on the SAME COMPUTER as the desktop app!! None of that matters with what I've got here... I won't bore you with the details; I'll just give you the meat and potatoes, and you can let me know if you want to see more...
Ok. First thing I did was create an Advertisement Service. This is a thread that the service runs, opening a UDP socket to listen for broadcasts on the network. Then, using the same piece of code, I shared it with the client app, but it calls up Advertise.CLIENT rather than Advertise.SERVER... The CLIENT opens the port it expects the service to be on and broadcasts a message: "Hello... Is there anybody out there?? Are there ANY servers listening? If so, reply back to THIS IP address with your computer name, IP address, and the port # where I can find the .NET Remoting services..." Then it waits a small timeout, gathers up the responses it gets, and if there is more than one, it presents the user with a dialog box and a list of the services that responded... The client then selects one, or, if only ONE responded, it calls Connect((TServerResponse) res); on that to get connected up. At this point, the server is using Remoting Services with WellKnownClientType and WellKnownServerType to put itself out there...
I don't think you are too interested in my "auto-service locator", because a lot of people frown on UDP, even more so when your app starts broadcasting on large networks. So I'm assuming you'd be more interested in my RemotingHelper, which gets the client connected up to the server. It looks like this:
public static Object GetObject(Type type)
{
    try {
        if(_wellKnownTypes == null) {
            InitTypeCache();
        }
        WellKnownClientTypeEntry entr = (WellKnownClientTypeEntry)_wellKnownTypes[type];
        if(entr == null) {
            throw new RemotingException("Type not found!");
        }
        return System.Activator.GetObject(entr.ObjectType, entr.ObjectUrl);
    } catch(System.Net.Sockets.SocketException sex) {
        DebugHelper.Debug.OutputDebugString("SocketException occured in RemotingHelper::GetObject(). Error: {0}.", sex.Message);
        Disconnect();
        if(Connect()) {
            return GetObject(type);
        }
    }
    return null;
}
private static void InitTypeCache()
{
    if(m_AdvertiseServer == null) {
        throw new RemotingException("AdvertisementServer cannot be null when connecting to a server.");
    }
    _wellKnownTypes = new Dictionary<Type, WellKnownClientTypeEntry>();

    Dictionary<string, object> channelProperties = new Dictionary<string, object>();
    channelProperties["port"] = 0;
    channelProperties["name"] = m_AdvertiseServer.ChannelName;

    Dictionary<string, object> binFormatterProperties = new Dictionary<string, object>();
    binFormatterProperties["typeFilterLevel"] = "Full";

    if(Environment.UserInteractive) {
        BinaryServerFormatterSinkProvider binFormatterProvider = new BinaryServerFormatterSinkProvider(binFormatterProperties, null);
        _serverChannel = new TcpServerChannel(channelProperties, binFormatterProvider);
        // LEF: Only if we are coming from OUTSIDE the SERVICE do we want to register the channel,
        // since the SERVICE already has this channel registered in this AppDomain.
        ChannelServices.RegisterChannel(_serverChannel, false);
    }

    System.Diagnostics.Debug.Write(string.Format("Registering: {0}...\n", typeof(IPawnStatServiceStatus)));
    RegisterType(typeof(IPawnStatServiceStatus), m_AdvertiseServer.RunningStatusURL.ToString());
    System.Diagnostics.Debug.Write(string.Format("Registering: {0}...\n", typeof(IPawnStatService)));
    RegisterType(typeof(IPawnStatService), m_AdvertiseServer.RunningServerURL.ToString());
    System.Diagnostics.Debug.Write(string.Format("Registering: {0}...\n", typeof(IServiceConfiguration)));
    RegisterType(typeof(IServiceConfiguration), m_AdvertiseServer.RunningConfigURL.ToString());
}
[SecurityPermission(SecurityAction.Demand, Flags=SecurityPermissionFlag.RemotingConfiguration, RemotingConfiguration=true)]
public static void RegisterType(Type type, string serviceUrl)
{
    WellKnownClientTypeEntry clientType = new WellKnownClientTypeEntry(type, serviceUrl);
    if(clientType != RemotingConfiguration.IsWellKnownClientType(type)) {
        RemotingConfiguration.RegisterWellKnownClientType(clientType);
    }
    _wellKnownTypes[type] = clientType;
}
public static bool Connect()
{
    // Init the Advertisement Service, and locate any listening services out there...
    m_AdvertiseServer.InitClient();
    if(m_AdvertiseServer.LocateServices(iTimeout)) {
        if(!Connected) {
            bConnected = true;
        }
    } else {
        bConnected = false;
    }
    return Connected;
}
public static void Disconnect()
{
    if(_wellKnownTypes != null) {
        _wellKnownTypes.Clear();
    }
    _wellKnownTypes = null;

    if(_serverChannel != null) {
        if(Environment.UserInteractive) {
            // LEF: If we are coming from the SERVICE, we do *NOT* want to unregister the channel,
            // since it is already registered; only the interactive client unregisters it here.
            ChannelServices.UnregisterChannel(_serverChannel);
            _serverChannel = null;
        }
    }
    bConnected = false;
}
}
So, THAT is the meat of my remoting code, and it allowed me to write a client that didn't have to be aware of where the service was installed, or how many services were running on the network. It allowed me to communicate with it over the network, or on the local machine. And it wasn't a problem to have two or more people running the app; yours, however, might be. Now, I have some complicated callback code in mine, where I register events to go across the remoting channel, so I have to have code that checks whether the client is even still connected before I send the notification to the client that something happened. Plus, if you are running for more than one user, you might not want to use singleton objects. It was fine for me, because the server OWNS the objects, and they are whatever the server SAYS they are. So my STATS object, for example, is a singleton. No reason to create an instance of it for EVERY connection when everyone is going to see the same data, right?
I can provide more chunks of code if necessary. This is, of course, one TINY bit of the overall picture of what makes this work... Not to mention the subscription providers, and all that.
For the sake of completeness, I'm including the code chunk to keep your service connected for the life of the process.
public override object InitializeLifetimeService()
{
    ILease lease = (ILease)base.InitializeLifetimeService();
    if(lease.CurrentState == LeaseState.Initial) {
        lease.InitialLeaseTime = TimeSpan.FromHours(24);
        lease.SponsorshipTimeout = TimeSpan.FromSeconds(30);
        lease.RenewOnCallTime = TimeSpan.FromHours(1);
    }
    return lease;
}
#region ISponsor Members
[SecurityPermissionAttribute(SecurityAction.LinkDemand, Flags=SecurityPermissionFlag.Infrastructure)]
public TimeSpan Renewal(ILease lease)
{
    return TimeSpan.FromHours(12);
}
#endregion
If you include the ISponsor interface as part of your server object, you can implement the above code.
Hope SOME of this is useful.
When you register your service, you can tell it to allow interaction with the desktop. You can read this oldie: http://www.codeproject.com/KB/install/cswindowsservicedesktop.aspx
Also, don't forget that you can have multiple users logged in at the same time.
Apparently on Windows Vista and newer interacting with the desktop has been made more difficult. Read this for a potential solution: http://www.codeproject.com/KB/cs/ServiceDesktopInteraction.aspx
I am developing an app where I need to download a bunch of web pages, preferably as fast as possible. The way I do it right now is that I have multiple threads (hundreds), each with its own System.Net.HttpWebRequest. This sort of works, but I am not getting the performance I would like. Currently I have a beefy 600+ Mb/s connection to work with, and it is utilized at 10% at most (at peaks). I guess my strategy is flawed, but I am unable to find any other good way of doing this.
Also: If the use of HttpWebRequest is not a good way to download web pages, please say so :)
The code has been semi-auto-converted from java.
Thanks :)
Update:
public String getPage(String link)
{
    myURL = new System.Uri(link);
    myHttpConn = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(myURL);
    myStreamReader = new System.IO.StreamReader(
        new System.IO.StreamReader(myHttpConn.GetResponse().GetResponseStream(),
            System.Text.Encoding.Default).BaseStream,
        new System.IO.StreamReader(myHttpConn.GetResponse().GetResponseStream(),
            System.Text.Encoding.Default).CurrentEncoding);

    System.Text.StringBuilder buffer = new System.Text.StringBuilder();
    //myLineBuff is a String
    while ((myLineBuff = myStreamReader.ReadLine()) != null)
    {
        buffer.Append(myLineBuff);
    }
    return buffer.ToString();
}
One problem is that it appears you're issuing each request twice:
myStreamReader = new System.IO.StreamReader(
    new System.IO.StreamReader(
        myHttpConn.GetResponse().GetResponseStream(),
        System.Text.Encoding.Default).BaseStream,
    new System.IO.StreamReader(myHttpConn.GetResponse().GetResponseStream(),
        System.Text.Encoding.Default).CurrentEncoding);
It makes two calls to GetResponse. For reasons I fail to understand, you're also creating two stream readers. You can split that up and simplify it, and also do a better job of error handling...
var response = (HttpWebResponse)myHttpConn.GetResponse();
myStreamReader = new StreamReader(response.GetResponseStream(), Encoding.Default);
That should double your effective throughput.
Also, you probably want to make sure to dispose of the objects you're using. When you're downloading a lot of pages, you can quickly run out of resources if you don't clean up after yourself. In this case, you should call response.Close(). See http://msdn.microsoft.com/en-us/library/system.net.httpwebresponse.close.aspx
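For example, here is a hedged rewrite of the fetch with deterministic cleanup; the using blocks close the response and reader even when an exception is thrown:

// requires: using System; using System.IO; using System.Net; using System.Text;
public string GetPage(string link)
{
    var request = (HttpWebRequest)WebRequest.Create(new Uri(link));
    using (var response = (HttpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream(), Encoding.Default))
    {
        return reader.ReadToEnd(); // response and reader are disposed on exit
    }
}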
I am adding this answer as another possibility which people may encounter when
- downloading from multiple servers using multi-threaded apps
- using Windows XP or Vista as the operating system
The tcpip.sys driver for these operating systems has a limit of 10 outbound connections per second. This is a rate limit, not a connection limit, so you can have hundreds of connections, but you cannot initiate more than 10/s. The limit was imposed by Microsoft to curtail the spread of certain types of virus/worm. Whether such methods are effective is outside the scope of this answer.
In a multi-threaded application that downloads from multitudes of servers, this limitation can manifest as a series of timeouts. Windows puts into a queue all of the "half-open" (newly open but not yet established) connections once the 10/s limit is reached. In my application, for example, I had 20 threads ready to process connections, but I found that sometimes I would get timeouts from servers I knew were operating and reachable.
To verify that this is happening, check the operating system's event log, under System. The error is:
EventID 4226: TCP/IP has reached the security limit imposed on the number of concurrent TCP connect attempts.
There are many references to this error and plenty of patches and fixes to apply to remove the limit. However because this problem is frequently encountered by P2P (Torrent) users, there's quite a prolific amount of malware disguised as this patch.
I have a requirement to collect data from over 1200 servers (that are actually data sensors) on 5-minute intervals. I initially developed the application (on WinXP) to reuse 20 threads repeatedly to crawl the list of servers and aggregate the data into a SQL database. Because the connections were initiated based on a timer tick event, this error happened often because at their invocation, none of the connections are established, thus 10 are immediately queued.
Note that this isn't a problem necessarily, because as connections are established, those queued are then processed. However if non-queued connections are slow to establish, that time can negatively impact the timeout limits of the queued connections (in my experience). The result, looking at my application log file, was that I would see a batch of connections that timed out, followed by a majority of connections that were successful. Opening a web browser to test "timed out" connections was confusing, because the servers were available and quick to respond.
I decided to try HEX editing the tcpip.sys file, which was suggested on a guide at speedguide.net. The checksum of my file differed from the guide (I had SP3 not SP2) and comments in the guide weren't necessarily helpful. However, I did find a patch that worked for SP3 and noticed an immediate difference after applying it.
From what I can find, Windows 7 does not have this limitation, and since moving the application to a Windows 7-based machine, the timeout problem has remained absent.
I do this very same thing, but with thousands of sensors that provide XML and text content. The factors that affect performance are not limited to the speed and power of your bandwidth and computer; they also include the bandwidth and response time of each server you are contacting, the timeout delays, the size of each download, and the reliability of the remote internet connections.
As comments indicate, hundreds of threads is not necessarily a good idea. Currently I've found that running between 20 and 50 threads at a time seems optimal. In my technique, as each thread completes a download, it is given the next item from a queue.
I run a custom ThreaderEngine Class on a separate thread that is responsible for maintaining the queue of work items and assigning threads as needed. Essentially it is a while loop that iterates through an array of threads. As the threads finish, it grabs the next item from the queue and starts the thread again.
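A minimal sketch of that pattern on .NET 4.0 (the names here are hypothetical; BlockingCollection does the queue bookkeeping a hand-rolled ThreaderEngine would otherwise do):

using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

var urlsToFetch = new[] { "http://example.com/a", "http://example.com/b" }; // your work list
var workItems = new BlockingCollection<string>();
const int workerCount = 20; // 20-50 seems optimal in my experience

// Fixed pool of workers; each one pulls the next URL as soon as it finishes.
var workers = Enumerable.Range(0, workerCount)
    .Select(_ => Task.Factory.StartNew(() =>
    {
        foreach (string url in workItems.GetConsumingEnumerable())
        {
            // download and process url here
        }
    }, TaskCreationOptions.LongRunning))
    .ToArray();

foreach (string url in urlsToFetch)
    workItems.Add(url);
workItems.CompleteAdding(); // lets GetConsumingEnumerable complete
Task.WaitAll(workers);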
Each of my threads is actually downloading several separate items, but the method call is the same (.NET 4.0):
public static string FileDownload(string _ip, int _port, string _file, int Timeout, int ReadWriteTimeout, NetworkCredential _cred = null)
{
    string uri = String.Format("http://{0}:{1}/{2}", _ip, _port, _file);
    string Data = String.Empty;
    try
    {
        HttpWebRequest Request = (HttpWebRequest)WebRequest.Create(uri);
        if (_cred != null) Request.Credentials = _cred;
        Request.Timeout = Timeout;                   // applies to .GetResponse()
        Request.ReadWriteTimeout = ReadWriteTimeout; // applies to .GetResponseStream()
        Request.Proxy = null;
        Request.CachePolicy = new System.Net.Cache.RequestCachePolicy(System.Net.Cache.RequestCacheLevel.NoCacheNoStore);
        using (HttpWebResponse Response = (HttpWebResponse)Request.GetResponse())
        using (Stream dataStream = Response.GetResponseStream())
        {
            if (dataStream != null)
            {
                using (BufferedStream buffer = new BufferedStream(dataStream))
                using (StreamReader reader = new StreamReader(buffer))
                {
                    Data = reader.ReadToEnd();
                }
            }
            return Data;
        }
    }
    catch (AccessViolationException ave)
    {
        // ...
    }
    catch (Exception exc)
    {
        // ...
    }
    return Data; // reached only when an exception was swallowed above
}
Using this I am able to download about 60KB each from 1200+ remote machines (72MB) in less than 5 minutes. The machine is a Core 2 Quad with 2GB RAM and utilizes four bonded T1 connections (~6Mbps).
I'm developing an application (winforms C# .NET 4.0) where I access a lookup functionality from a 3rd party through a simple HTTP request. I call an url with a parameter, and in return I get a small string with the result of the lookup. Simple enough.
The challenge is however, that I have to do lots of these lookups (a couple of thousands), and I would like to limit the time needed. Therefore I would like to run requests in parallel (say 10-20). I use a ThreadPool to do this, and the short version of my code looks like this:
public void startAsyncLookup(Action<LookupResult> returnLookupResult)
{
    this.returnLookupResult = returnLookupResult;
    foreach (string number in numbersToLookup)
    {
        ThreadPool.QueueUserWorkItem(lookupNumber, number);
    }
}
public void lookupNumber(Object threadContext)
{
    string numberToLookup = (string)threadContext;
    string url = @"http://some.url.com/?number=" + numberToLookup;
    WebClient webClient = new WebClient();
    Stream responseData = webClient.OpenRead(url);
    LookupResult lookupResult = parseLookupResult(responseData);
    returnLookupResult(lookupResult);
}
I fill up numbersToLookup (a List<String>) from another place, call startAsyncLookup and provide it with a call-back function returnLookupResult to return each result. This works, but I found that I'm not getting the throughput I want.
Initially I thought it might be the 3rd party having a poor system on their end, but I excluded this by trying to run the same code from two different machines at the same time. Each of the two took as long as one did alone, so I could rule out that one.
A colleague then tipped me off that this might be a limitation in Windows. I googled a bit, and found amongst others this post saying that by default Windows limits the number of simultaneous requests to the same web server to 4 for HTTP 1.0 and to 2 for HTTP 1.1 (for HTTP 1.1 this is actually according to the specification (RFC 2068)).
The same post referred to above also provided a way to increase these limits. By adding two registry values to [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings] (MaxConnectionsPerServer and MaxConnectionsPer1_0Server), I could control this myself.
So, I tried this (set both to 20), restarted my computer, and ran my program again. Sadly though, it didn't seem to help. I also kept an eye on the Resource Monitor while running my batch lookup, and I noticed that my application (the one with the title blacked out) still only used two TCP connections.
So, the question is, why isn't this working? Is the post I linked to using the wrong registry values? Is this perhaps not possible to "hack" in Windows any longer (I'm on Windows 7)?
And just in case anyone should wonder, I have also tried with different settings for MaxThreads on ThreadPool (everything from 10 to 100), and this didn't seem to affect my throughput at all, so the problem shouldn't be there either.
It is a matter of ServicePoint, which provides connection management for HTTP connections.
The default maximum number of concurrent connections allowed by a ServicePoint object is 2.
So if you need to increase the limit, you can use the ServicePointManager.DefaultConnectionLimit property. Just check the link on MSDN, where you can see a sample, and set the value you need.
For quicker reference: to increase the connection limit per host, you can do this in your Main() or any time before you begin making the HTTP requests.
System.Net.ServicePointManager.DefaultConnectionLimit = 1000; //or some other number > 4
Fire and forget this method from your main method. Icognito is correct: by default, only 2 connections are allowed to play at the same time.
private static void openServicePoint()
{
    ServicePointManager.UseNagleAlgorithm = true;
    ServicePointManager.Expect100Continue = true;
    ServicePointManager.CheckCertificateRevocationList = true;
    ServicePointManager.DefaultConnectionLimit = 10000;
    Uri MS = new Uri("http://My awesome web site");
    ServicePoint servicePoint = ServicePointManager.FindServicePoint(MS);
}
For Internet Explorer 8:
Run Registry Editor and navigate to the following keys:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_MAXCONNECTIONSPERSERVER
and
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_MAXCONNECTIONSPER1_0SERVER
If FEATURE_MAXCONNECTIONSPERSERVER and FEATURE_MAXCONNECTIONSPER1_0SERVER are missing, create them. Now create a DWORD value called iexplore.exe under both subkeys (listed above) and set its value to 10 or whatever number you desire.
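If you'd rather script those values than edit them by hand, here is a hedged sketch using Microsoft.Win32.Registry (it must run elevated; the key paths are the same as above):

using Microsoft.Win32;

string[] subKeys =
{
    @"SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_MAXCONNECTIONSPERSERVER",
    @"SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_MAXCONNECTIONSPER1_0SERVER",
};
foreach (string subKey in subKeys)
{
    // CreateSubKey opens the key, creating it first if it is missing.
    using (RegistryKey key = Registry.LocalMachine.CreateSubKey(subKey))
    {
        key.SetValue("iexplore.exe", 10, RegistryValueKind.DWord);
    }
}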