I'm using C# to connect to a web service to grab data. However, I'm currently having problems getting the code to run on a remote server; when I say problems, I mean it's running, but the connection between client and server is ridiculously slow (through no fault of mine: the client is providing a slow result set via a web service, and they have all timeouts turned off on their side in order to do so).
if ((endpointConfiguration == EndpointConfiguration.SFFService))
{
    System.ServiceModel.BasicHttpBinding result = new System.ServiceModel.BasicHttpBinding();
    result.MaxBufferSize = int.MaxValue;
    result.ReaderQuotas = System.Xml.XmlDictionaryReaderQuotas.Max;
    result.MaxReceivedMessageSize = int.MaxValue;
    result.AllowCookies = true;
    result.OpenTimeout = TimeSpan.MaxValue;
    result.CloseTimeout = TimeSpan.MaxValue;
    result.SendTimeout = TimeSpan.MaxValue;
    return result;
}
So, not a great start: Open, Close and Send timeouts all set to maximum.
Anyway, I've matched their long timeouts on my side, and a few of the smaller web service requests finish and succeed OK on the server. The biggest, slowest one, however, just hangs indefinitely, probably because I've told it never to time out.
However, I'm pretty sure there's some other problem happening, as I left it overnight and it just sat there. Locally, on my development machine, although slow, it works.
My question is: does anyone have any ideas on additional things to check about the environment that could be in play here? I thought perhaps a firewall, but given that the small requests succeed (and connect), it is very difficult to debug the slow requests, as I have no idea how long to wait before accepting that the program isn't going to do anything.
FWIW I've tried connecting via a browser, and again, the browser just sits there waiting for the request to finish, which it never does (most likely due to the timeout being turned off on the server). If there were any way to see even how much of the request was left to finish (like a percentage download), that might help give me some guidance as to whether the code is doing anything other than waiting.
There's no way to get the progress of the remote call, even when you are attached to the remote process. Try using a local Visual Studio on the server machine (preferably on a non-production VM) and attach to the local process rather than using the Remote Debugger.
I am not sure exactly what the question is, but the first step I'd take when debugging a slow application would be to test a local connection (local client and local server) to eliminate the network from the equation. If that works well, try hosting the server somewhere else (a public cloud, maybe?) and try again there; if that also works well, there's definitely something en route or on that server.
If you're interested in tracking how long web service calls take, you could record the start time in HttpContext.Current.Items or OperationContext.Current.Items on BeginRequest/EndRequest in Global.asax, or in a MessageInspector if you use WCF (you can pass the DateTime between the two methods by returning it from the Before method and reading it from the correlationState parameter in the After method).
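As a rough illustration of the message inspector approach (a sketch only; the class name and the Debug logging are placeholders, and the inspector still has to be attached to the endpoint via an IEndpointBehavior, which is omitted here):

using System.Diagnostics;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class TimingInspector : IClientMessageInspector
{
    // Runs just before the request goes out; the return value becomes correlationState.
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        return Stopwatch.StartNew();
    }

    // Runs when the reply arrives; correlationState is whatever BeforeSendRequest returned.
    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
        var watch = (Stopwatch)correlationState;
        Debug.WriteLine("Call took " + watch.ElapsedMilliseconds + " ms");
    }
}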
I have a REST service in a self hosted ASP.Net WebApi application (Console).
Some clients poll the server in specific intervals to fetch new data. In general all is working fine.
The problem is, that the server stops responding to requests after some random duration (~30mins - 2.5 hours). All client requests start to time out.
The weird thing is, the server doesn't seem to receive the requests anymore (no controller method is invoked anymore). The server didn't throw any exceptions and the console app is still responsive. So I can only suppose there is a problem before the request reaches the API controller.
In the debugger everything seems fine.
How can I diagnose such an issue?
What else can I try to fix the described behavior?
Notes:
Tested on multiple systems
.Net 4.5.1
Asp.Net WebApi 5.1.2
I have found the issue; the reason this is happening is connection leaks. If you are sending requests and aren't closing the connections correctly, either after the request is finished or within an exception handler, the number of open connections will eventually reach its maximum value. Either change the maximum number of open connections in the connection string or (the preferred way) make sure your code handles the closing part:
SqlConnection myConnection = new SqlConnection(ConnectionString);
try
{
    myConnection.Open();
    someCall(myConnection);
}
finally
{
    // Always return the connection to the pool, even if someCall throws.
    myConnection.Close();
}
Credit goes to How can I solve a connection pool problem between ASP.NET and SQL Server?, where you can read more about this.
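Equivalently (a minimal sketch, with someCall standing in for whatever work is done with the connection), a using block disposes, and therefore closes, the connection even when an exception is thrown:

using (SqlConnection myConnection = new SqlConnection(ConnectionString))
{
    myConnection.Open();
    someCall(myConnection);
}   // Dispose() runs here and returns the connection to the pool.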
In my case, the issue was caused by never-ending tasks. Due to misuse of the Reactive Extensions API, I was randomly creating never-ending tasks. It seems that at some point the task scheduler simply couldn't handle them anymore, although I'm not completely sure about that.
Lesson learned: it seems that by doing bad things in your app code (too many tasks, leaked SQL connections, ...) you can kill the WebApi infrastructure, so that it doesn't handle requests - at any level - anymore.
In the project I'm working on, we have several services implemented using WCF. The situation I'm facing is that some of the services need to know when a session ends, so that they can appropriately update the status of that client. Notifying the service when a client terminates gracefully (e.g. the user closes the application) is easy; however, there are cases where the application might crash or the client machine might restart, in which case the client won't be able to notify the service about its status.
Initially, I was thinking about having a timer on the server side, which is triggered once a client connects, and changes the status of that client to "terminated" after, let's say, 1 minute. Now the client sends its status every 30 seconds to the service, and the service basically restarts its timer on every request from the client, which means it (hopefully) never changes the status of the client as long as the client is alive.
Even though this method is pretty reliable (not fully reliable; what if it takes the client more than 1 minute to send its status?), it's still not the best approach to this problem. Note that due to the original design of the system, I cannot implement a duplex service, which would probably make things a lot simpler. So my question is: is there a way for the service to know when the session ends (i.e. the connection times out or the client closes the proxy)? I came across this question: WCF: How to find out when a session is ending, but the link in the answer seems to be broken.
Another thing that I'm worried about is the way I'm currently creating my channel proxies, which is implemented like this:
internal static TResult ExecuteAndReturn<TProxy, TResult>(Func<TProxy, TResult> delegateToExecute)
{
    string endpointUri = ServiceEndpoints.GetServiceEndpoint(typeof(TProxy));
    var binding = new WSHttpBinding();
    binding.Security.Mode = SecurityMode.Message;
    binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

    TResult valueToReturn;
    using (ChannelFactory<TProxy> factory = new ChannelFactory<TProxy>(
        binding,
        new EndpointAddress(new Uri(endpointUri),
            EndpointIdentity.CreateDnsIdentity(ServiceEndpoints.CertificateName))))
    {
        TProxy proxy = factory.CreateChannel();
        valueToReturn = delegateToExecute(proxy);
    }
    return valueToReturn;
}
So the channel is closed immediately after the service call is made (since it's in a using block); is that, from a service standpoint, an indication that the session is terminated? If so, should I keep only one instance of each service for the application's lifetime, by using a singleton maybe? I apologize if the questions seem a little vague; I figured there would be plenty of questions like these, but I wasn't able to find anything similar.
Yes, closing the channel terminates the session, but if there is an error of some kind then you are subject to the timeout settings of the service, like this:
<binding name="tcpBinding" receiveTimeout="00:00:10" />
This introduces a ten second timeout if an error occurs.
Check out Managing WCF Session Lifetime with IsInitiating and IsTerminating
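For reference, a rough sketch of the kind of contract that article describes (the service and operation names here are made up): IsInitiating marks the operation that starts the session and IsTerminating marks the one that ends it, so the service knows explicitly when a client is done.

[ServiceContract(SessionMode = SessionMode.Required)]
public interface IClientStatusService
{
    // Starts the session (IsInitiating defaults to true; shown here for clarity).
    [OperationContract(IsInitiating = true)]
    void Connect(string clientId);

    [OperationContract]
    void ReportStatus(string clientId, string status);

    // Completes the session; the service can mark the client as terminated here.
    [OperationContract(IsInitiating = false, IsTerminating = true)]
    void Disconnect(string clientId);
}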
I have an ASP.NET 3.5 server application written in C#. It makes outbound requests to a REST API using HttpWebRequest and HttpWebResponse.
I have setup a test application to send these requests on separate threads (to vaguely mimic concurrency against the server).
Please note this is more of a Mono/environment question than a code question, so please keep in mind that the code below is not verbatim; it's just a cut/paste of the functional bits.
Here is some pseudo-code:
// threaded client piece
int numThreads = 1;
ManualResetEvent doneEvent;

using (doneEvent = new ManualResetEvent(false))
{
    for (int i = 0; i < numThreads; i++)
    {
        ThreadPool.QueueUserWorkItem(new WaitCallback(Test), random_url_to_same_host);
    }
    doneEvent.WaitOne();
}

void Test(object some_url)
{
    // setup service point here just to show what config settings I'm using
    ServicePoint lgsp = ServicePointManager.FindServicePoint(new Uri(some_url.ToString()));

    // set these to optimal for Mono and .NET
    lgsp.Expect100Continue = false;
    lgsp.ConnectionLimit = 100;
    lgsp.UseNagleAlgorithm = true;
    lgsp.MaxIdleTime = 100000;

    _request = (HttpWebRequest)WebRequest.Create(some_url);
    using (HttpWebResponse _response = (HttpWebResponse)_request.GetResponse())
    {
        // do stuff
    } // releases the response object

    // close out threading stuff
    if (Interlocked.Decrement(ref numThreads) == 0)
    {
        doneEvent.Set();
    }
}
If I run the application on my local development machine (Windows 7) in the Visual Studio web server, I can raise numThreads and get the same average response time, with minimal variation, whether there is 1 "user" or 100.
Publishing and deploying the application to Apache2 in a Mono 2.10.2 environment, the response times scale almost linearly (i.e. 1 thread = 300 ms, 5 threads = 1500 ms, 10 threads = 3000 ms). This happens regardless of the server endpoint (different hostname, different network, etc.).
Using IPTRAF (and other network tools), it appears as though the application only opens one or two ports to route all connections through, and the remaining responses have to wait.
We have built a similar PHP application and deployed it in Mono with the same requests, and the responses scale appropriately.
I have run through every single configuration setting I can think of for Mono and Apache, and the ONLY setting that differs between the two environments (at least in code) is that sometimes ServicePoint.SupportsPipelining is false in Mono, while it is true from my machine.
It seems as though the ConnectionLimit (default of 2) is not being changed in Mono for some reason, even though I am setting it to a higher value both in code and in web.config for the specified host(s).
Either my team and I are overlooking something significant, or this is some sort of bug in Mono.
I believe you're hitting a bottleneck in HttpWebRequest. The web requests all share a common service point infrastructure within the .NET Framework. This appears to be intended to allow connections to the same host to be reused, but in my experience it results in two bottlenecks.
First, the service points allow only two concurrent connections to a given host by default, in order to be compliant with the HTTP specification. This can be overridden by setting the static property ServicePointManager.DefaultConnectionLimit to a higher value. See this MSDN page for more details. It looks as if you're already addressing this for the individual service point itself, but due to the concurrency locking scheme at the service point level, doing so may be contributing to the bottleneck.
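For what it's worth, a minimal sketch of raising that limit globally (the value 100 is just an example), along with the equivalent per-host web.config setting:

// Raise the default per-host connection limit before any requests/ServicePoints are created.
ServicePointManager.DefaultConnectionLimit = 100;   // example value

// Or declaratively in web.config / app.config:
// <system.net>
//   <connectionManagement>
//     <add address="*" maxconnection="100" />
//   </connectionManagement>
// </system.net>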
Second, there appears to be an issue with lock granularity in the ServicePoint class itself. If you decompile it and look at its use of the lock keyword, you'll find that it synchronizes on the instance itself, and does so in many places. With the service point instance being shared among web requests for a given host, in my experience this tends to become a bottleneck as more HttpWebRequests are opened, and it causes the requests to scale poorly. This second point is mostly personal observation and poking around the source, so take it with a grain of salt; I wouldn't consider it authoritative.
Unfortunately, I did not find a reasonable substitute at the time that I was working with it. Now that the ASP.NET Web API has been released, you may wish to give the HttpClient a look. Hope that helps.
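As a rough illustration of that suggestion (a sketch only; it assumes .NET 4.5 or the System.Net.Http package, and the URL parameter is a placeholder), a single shared HttpClient can issue many concurrent requests asynchronously:

using System.Net.Http;
using System.Threading.Tasks;

class FetchExample
{
    // One shared instance; HttpClient is designed to be reused across requests.
    private static readonly HttpClient client = new HttpClient();

    static async Task<string> FetchAsync(string url)
    {
        HttpResponseMessage response = await client.GetAsync(url);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}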
I know this is pretty old but I'm putting this here in case it might help somebody else who runs into this issue. We ran into the same problem with parallel outbound HTTPS requests. There are a few issues at play.
The first issue is that ServicePointManager.DefaultConnectionLimit did not change the connection limit, as far as I can tell. Setting it to 50, creating a new connection, and then checking the connection limit on the service point for that new connection still shows 2. Setting it on that service point to 50 once appears to work and persist for all connections that end up going through that service point.
The second issue we ran into was with threading. The current implementation of the Mono thread pool appears to create at most 2 new threads per second. This is an eternity if you are making many parallel requests that start at exactly the same time. To counteract this, we tried setting ThreadPool.SetMinThreads to a higher number. It appears that Mono only creates up to 1 new thread when you make this call, regardless of the delta between the current number of threads and the desired number. We were able to work around this by calling SetMinThreads in a loop until the thread pool had the desired number of idle threads.
I opened a bug about the latter issue because that's the one I'm most confident is not working as intended: https://bugzilla.xamarin.com/show_bug.cgi?id=7055
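Roughly, the workaround looked like the following sketch (simplified: the target of 50 is arbitrary, and instead of polling for idle threads as described above, it just repeats the call once per thread it wants added):

int target = 50;   // arbitrary example value
int minWorker, minIo;
ThreadPool.GetMinThreads(out minWorker, out minIo);

// Since each SetMinThreads call appeared to add at most one thread on Mono,
// bump the minimum one step at a time until the target is reached.
for (int i = minWorker; i < target; i++)
{
    ThreadPool.SetMinThreads(i + 1, minIo);
}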
If #jake-moshenko is right about ServicePointManager.DefaultConnectionLimit not having any effect when changed in Mono, please file this as a bug at http://bugzilla.xamarin.com/.
However I would try some things before discarding this completely as a Mono issue:
Try using the SGen garbage collector instead of the old boehm one, by passing --gc=sgen as a flag to mono.
If the above doesn't help, upgrade to Mono 3.2 (which BTW defaults to the SGen GC too), because there have been a lot of fixes since you asked the question.
If the above doesn't help, build your own Mono (master branch), as this important pull request about threading has been merged recently.
If the above doesn't help, build your own Mono with this pull request added. If it fixes your problem, please add a "+1" to the pull request. It might be a fix for bug 7055.
Okay I know there is lots of info out there on this and I promise you I have read it all and tried umpteen different methods to get this working!!
I have a socket server program which runs on a laptop. I then have up to 50 laptops connected wirelessly via the same LAN to the server. The client laptops all connect to the server (using Socket.ConnectAsync) and the server uses async methods as well to send and receive data. The server shows the user a list of connected client laptops, and this list seems to be accurate and picks up whenever a client disconnects or connects. However, the client laptops never seem to detect that the connection to the server has been lost under certain circumstances (i.e. if the server program crashes, if the server laptop goes into standby mode, etc.). I have a timer on the client laptops which polls the connection every 5 seconds as follows:
bool SocketConnected(Socket s)
{
    bool part1 = s.Poll(0, SelectMode.SelectWrite);
    bool part2 = (s.Available == 0);
    if (!part1 && part2)
    {
        return false;
    }
    else
    {
        return true;
    }
}
I have tried using all SelectModes (SelectWrite, SelectRead, SelectError) and have tried using different timeout values. I have tried checking s.Connected after these operations and have tried all manner of other methods to determine the connection state, and nothing seems to produce reliable results!! I think I can achieve the result I desire by sending dummy information every 5 seconds and checking s.Connected after doing so, but I don't really want to do this as each laptop is already sending lots of data to the server as it is. Any help at all is massively appreciated! Thanks
The only reliable way to check if a connection is alive is to send something to the other end and see if it arrives. You can do this either manually by sending and receiving a "ping" value from time to time, or automatically by enabling the KeepAlive socket option.
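For example, a minimal sketch of the keep-alive option (clientSocket is a placeholder for the connected socket; note that the default keep-alive idle time is controlled by the OS and is typically hours unless tuned):

// Enable TCP keep-alive so a dead peer is eventually detected without sending app data.
clientSocket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

// On Windows, the probe timing can also be tuned via IOControl
// (struct layout: on/off, idle time in ms, interval between probes in ms):
byte[] keepAlive = new byte[12];
BitConverter.GetBytes((uint)1).CopyTo(keepAlive, 0);      // enable
BitConverter.GetBytes((uint)5000).CopyTo(keepAlive, 4);   // 5 s idle before the first probe
BitConverter.GetBytes((uint)1000).CopyTo(keepAlive, 8);   // 1 s between probes
clientSocket.IOControl(IOControlCode.KeepAliveValues, keepAlive, null);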
The MSDN documentation for Socket.Poll is very explicit about the exact situations (server crashes, standby) you mentioned:
This method cannot detect certain kinds of connection problems, such as a broken network cable, or that the remote host was shut down ungracefully. You must attempt to send or receive data to detect these kinds of errors.
In developing a relatively simple web service that takes the data provided by a POST and records it in a database table, we're getting this error:
Exception caught: The remote server returned an error: (500) Internal Server Error.
Stack trace: at System.Net.HttpWebRequest.GetResponse()
on some servers, but not others. The ones that are getting this are the physical machines; the others are virtual, and obviously the physical servers are far more powerful.
As far as we can tell, the problem is that the DB connections aren't being released back to the pools after each query. I'm using the using pattern below:
using (VoteDaoDataContext dao = new VoteDaoDataContext())
{
    dao.insert_response_and_update_count(answerVal, swid, agent, geo, DateTime.Now, ip);
    dao.SubmitChanges();
    msg += "Thank you for your vote.";
    dao.Dispose();
}
I added the dao.Dispose() call to ensure that connections are released when the method finishes, but I don't know whether or not it's necessary.
Am I using this pattern correctly? Is there something else I need to do to ensure that connections get returned to the pools correctly?
Thanks!
Your diagnostic information is not good enough. An HTTP 500 isn't enough detail to really tell whether your theory is correct. You're going to need to capture a full stack trace in your logging if you want to get to the root of the problem. I think you've jumped to a conclusion here. And no, you do not need that Dispose() before the end of your using{} block; that's what using{} does.
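For instance, one way to get more detail than the bare 500 on the calling side is to catch the WebException and log the body the server sent back (a rough sketch; request is assumed to be the HttpWebRequest already being used, and Console.WriteLine stands in for real logging):

try
{
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        // ... normal handling ...
    }
}
catch (WebException ex)
{
    if (ex.Response != null)   // null for pure network failures (DNS, timeouts, ...)
    {
        using (var reader = new StreamReader(ex.Response.GetResponseStream()))
        {
            string body = reader.ReadToEnd();
            Console.WriteLine("Server returned: " + body);
        }
    }
}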
I thought that Dispose() call was redundant, but I wanted to be sure.
We're seeing the connection pools saturating in the SQL logs (I can't look at them directly, I'm just a developer, and this stuff is running in a prod environment), and my ops guy said he's seeing connections timing out... and once they time out, the server starts running again, until the next time it saturates the connection pool.
We're going through the process of tweaking the connection pool settings at the moment... I wanted to be certain that I wasn't doing anything wrong, since this is my first time using LINQ.
Thanks!