Self-hosted ASP.NET Web API stops receiving requests - C#

I have a REST service in a self-hosted ASP.NET Web API application (console).
Some clients poll the server at specific intervals to fetch new data. In general everything works fine.
The problem is that the server stops responding to requests after some random duration (~30 minutes to 2.5 hours). All client requests start to time out.
The weird thing is that the server doesn't seem to receive the requests anymore (no controller method is invoked). The server doesn't throw any exceptions and the console app is still responsive, so I can only suppose there is a problem before the request reaches the API controller.
In the debugger everything seems fine.
How can I diagnose such an issue?
What else can I try to fix the described behavior?
Notes:
Tested on multiple systems
.Net 4.5.1
Asp.Net WebApi 5.1.2

I have found the issue: it is caused by connection leaks. If you are sending requests and aren't closing the connections correctly, either after the request is finished or when an exception occurs, the number of open connections will eventually reach its maximum. Either change the maximum number of open connections in the connection string or (the preferred way) make sure your code handles the closing part:
SqlConnection myConnection = new SqlConnection(ConnectionString);
try
{
    myConnection.Open();
    someCall(myConnection);
}
finally
{
    // Always close, even if someCall throws, so the connection returns to the pool.
    myConnection.Close();
}
Credit goes to "How can I solve a connection pool problem between ASP.NET and SQL Server?", where you can read more about this.
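For what it's worth, the same thing can be written more compactly with a using block (same names as in the snippet above); this is just an equivalent sketch, not a different fix:
using (SqlConnection myConnection = new SqlConnection(ConnectionString))
{
    myConnection.Open();
    someCall(myConnection);
} // Dispose() closes the connection (returning it to the pool) even if someCall throws.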

In my case, the issue was caused by never-ending tasks. Due to misuse of the Reactive Extensions API, I randomly created tasks that never completed. It seems that at some point the task scheduler simply couldn't handle them anymore, although I'm not completely sure about that.
Lesson learned: it seems that by doing bad things in your app code (too many tasks, leaked SQL connections, ...) you can kill the Web API infrastructure so that it no longer handles requests at any level.
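For illustration only (I'm not certain this is the exact pattern that bit me): one way an Rx misuse can yield a task that never completes is converting an observable that never completes into a task.
using System;
using System.Threading.Tasks;
using System.Reactive.Linq;
using System.Reactive.Threading.Tasks;

class NeverEndingTaskExample
{
    static void Main()
    {
        // Observable.Interval never calls OnCompleted...
        IObservable<long> ticks = Observable.Interval(TimeSpan.FromSeconds(1));

        // ...so this task never finishes and keeps resources tied up forever.
        Task<long> neverEnding = ticks.ToTask();

        // Bounding the sequence (or disposing the subscription) lets the task complete.
        Task<long> bounded = ticks.Take(1).ToTask();
        Console.WriteLine(bounded.Result);
    }
}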

Related

Maximum number of active connections reached on Mongo Atlas

I have a .NET Core application. My DB is Mongo, hosted on Mongo Atlas. I reach the maximum number of connections very easily. I make a lot of requests to the DB, but my connection to the DB is a singleton.
I connect to Mongo through MongoClient:
MongoClient client = new MongoClient(config.GetConnectionString("Dbconnectionstring"));
IMongoDatabase database = client.GetDatabase("database");
All of this is in a class that is registered as a singleton in Startup.cs:
services.AddSingleton<MongoUnitOfWork>();
I did not have this problem before. In this application I have to make a lot of requests to the DB, but since I registered the connection as a singleton, I expected it to reuse the same connection and not open a new one for every request. At least that is what I think happens.
UPDATE
I used a recurring-job service named Hangfire (intended for recurring jobs) that needed a connection string to the DB. Even though I didn't register any function to be called automatically every minute or anything similar, I think this was the problem. For now I commented out everything about this service and everything went back to normal.
I will get back as soon as I'm 100% sure that this was the problem.
UPDATE 2
Today I had the problem again so I figured that the Recurrent Service wasn't the problem.
UPDATE 3
Last time, when I got rid of that service, I also dropped all the open connections. Until now I didn't have another problem, and when I checked the connections on the day I deleted the service everything seemed fine. Today I checked the connections again and I had around 70. Apparently the connections don't close.
Also, yes, I'm sure I'm using a singleton to instantiate the MongoClient. I also set breakpoints to see if they are hit on each request. The requests are not the problem.
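For reference, a minimal sketch of a registration that shares one client per process (the MongoDB driver keeps its own connection pool inside MongoClient, so all requests share it); this assumes the same names as the snippets above, and the IMongoClient/IMongoDatabase registrations are illustrative:
// In Startup.ConfigureServices:
services.AddSingleton<IMongoClient>(sp =>
    new MongoClient(config.GetConnectionString("Dbconnectionstring")));

services.AddSingleton<IMongoDatabase>(sp =>
    sp.GetRequiredService<IMongoClient>().GetDatabase("database"));

services.AddSingleton<MongoUnitOfWork>();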

WCF request returns wrong response

I have a C# application in which the client uses WCF to talk to the server. In the background, every X seconds, the client calls a Ping method on the server (through WCF). The following error has been reproduced a couple of times (for different method calls):
System.ServiceModel.ProtocolException: A reply message was received for operation 'MyMethodToServer' with action 'http://tempuri.org/IMyInterface/PingServerResponse'. However, your client code requires action 'http://tempuri.org/IMyInterface/MyMethodToServerResponse'.
MyMethodToServer is not consistent; the error occurs for different methods.
How can it happen that a request receives a different response?
I think you have a messy problem with async communication. My main suggestion (as your question isn't very clear) is to try to identify every request, capture the calls and wait for them, use asynchronous communication, and do some careful work with threading.
As you present it, this is a typical architecture problem.
If you post more code, I can suggest some code fixes and I'll gladly update my answer.
If this occurs randomly and not consistently, you might be running in a load-balanced setup and have deployed an update to only one of the servers.
Wild guess: your client uses the same connection to make two requests in parallel. So what happens is:
Thread 1 sends request ARequest
Thread 2 sends request BRequest
Server sends reply BReply
Thread 1 receives reply BReply while expecting AReply
If you have request logs on the server, it will be easy to confirm - you'll likely see two requests arriving within a short interval from the client host experiencing the issue.
I think MaxConcurrentCalls and ConcurrencyMode may be relevant here (although I have not touched WCF for a long while).
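As a rough sketch of where those settings live in a self-hosted WCF service (the contract and method names come from the error message above; the address, binding, and throttling values are illustrative, not from the question):
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IMyInterface
{
    [OperationContract]
    void PingServer();
}

// ConcurrencyMode controls whether one service instance may process calls in parallel.
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
public class MyService : IMyInterface
{
    public void PingServer() { }
}

public static class HostProgram
{
    public static void Main()
    {
        var host = new ServiceHost(typeof(MyService),
            new Uri("http://localhost:8080/myservice")); // address is illustrative

        host.AddServiceEndpoint(typeof(IMyInterface), new BasicHttpBinding(), "");

        // MaxConcurrentCalls caps how many calls WCF dispatches at the same time.
        host.Description.Behaviors.Add(new ServiceThrottlingBehavior
        {
            MaxConcurrentCalls = 16
        });

        host.Open();
        Console.ReadLine();
        host.Close();
    }
}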

Apparent delay in Azure KeyVault access

We have an Azure-based ASP.NET Web Service that accesses an Azure KeyVault. We are seeing two instances in which a method "hangs" on a first try, and then works a minute or so later.
In both instances, a KeyVault access occurs. In both instances the problem started when we started using the KeyVault in these methods.
We have done very careful logging in the first instance, and cannot see anything else in our code that could cause the hang. The KeyVault access is the primary suspect.
In addition, if we run the app from our local servers (from Visual Studio), the KeyVault access works fine on the "first try". It only produces the "hang" error when it runs in production on Azure, and only on that "first try".
By "hang" I mean that in one instance, which is triggered by an external API, it takes at least 60 seconds (we can tell that because the external API times out.) In the other instance, which is triggered by a page request, several minutes can pass and the page just spins, at which point we assume the DB request or something else has timed out.
When I say "a minute or so later", that's as fast as we have timed the retry.
Is there some kind of issue or function where the KeyVault needs to be "warmed up" before it works on the first try?
Update: I'm looking at the code more carefully, and I see at least a couple of places where we can insert still more logging to get a more exact picture of where the failure occurs. I'm going to do that, and then I'll report back here.
Update: See answer below - major newbie error, has been corrected.
Found the problem, and the solution.
Key Vault access needs to be called from an async task, because there is a multi-second delay.
private async Task<string> GetKeyVaultSecretValue(varSecretParms) {
I don't fully understand the underlying technology; however, apparently, if the call is made from within synchronous code, the server doesn't like to wait, and the thread is abandoned/halts.
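The body of a method like the one above might look roughly like this. This is a hypothetical sketch assuming the Microsoft.Azure.KeyVault and Microsoft.Azure.Services.AppAuthentication packages; the parameter names are illustrative. The point is that the whole call chain stays async, so the request thread is not blocked while the token is acquired and the secret is fetched:
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

private async Task<string> GetKeyVaultSecretValue(string vaultUrl, string secretName)
{
    // Uses the app's managed identity to authenticate against Key Vault.
    var tokenProvider = new AzureServiceTokenProvider();
    var keyVaultClient = new KeyVaultClient(
        new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

    // Await the secret instead of blocking the request thread.
    var secret = await keyVaultClient.GetSecretAsync(vaultUrl, secretName);
    return secret.Value;
}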
According to your description, it seems to be due to the Web App not having Always On enabled.
By default, web apps are unloaded if they are idle for some period of time. This lets the system conserve resources. In Basic or Standard mode, you can enable Always On to keep the app loaded all the time.
If possible, please try enabling Always On and try it again.

client-server question

If I have a client that is connected to a server and the server crashes, how can I determine, from my client, that the connection is down? The idea is that in my client's loop I wait to read a line from the server (String a = sr.ReadLine();), and if the server crashes while the client is waiting to receive that line, how do I close the thread that contains my while loop?
Many have told me that in that while(alive) { .. } I should just change the alive value to false, but if my program is currently waiting for a line to read, it won't get to exit the while loop because it will be trapped at sr.ReadLine().
I was thinking that if I can't send a line to the server I should just close the client thread with .Abort(). Any ideas?
Have a timeout parameter in the ReadLine method which takes a TimeSpan value and times out after that interval if no response is received:
public string ReadLine(TimeSpan timeout)
{
    // ...your logic
}
For examples, check these SO posts:
Implementing a timeout on a function returning a value
Implement C# Generic Timeout
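A minimal sketch of one way to implement that timeout, assuming sr is the StreamReader from the question (the method name is illustrative): race the read against a delay and treat the delay winning as a timeout.
using System;
using System.IO;
using System.Threading.Tasks;

public static async Task<string> ReadLineWithTimeoutAsync(StreamReader sr, TimeSpan timeout)
{
    Task<string> readTask = sr.ReadLineAsync();
    Task finished = await Task.WhenAny(readTask, Task.Delay(timeout));

    if (finished != readTask)
        throw new TimeoutException("No line received from the server within the timeout.");

    // ReadLineAsync returns null when the stream is closed (e.g. the server went down).
    return await readTask;
}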
Is the server app your own, or something off the shelf?
If it's yours, send a "heart beat" every couple of seconds to let the clients know that the connection and service are still alive. (This is a bit more reliable than just seeing if the connection is closed since it may be possible for the connection to remain open while the server app is locked.)
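A rough sketch of such a heartbeat on the server side (writer here is a hypothetical StreamWriter for the connected client; the "PING" message and 2-second interval are illustrative choices):
var heartbeatTimer = new System.Timers.Timer(2000);
heartbeatTimer.Elapsed += (sender, args) =>
{
    try
    {
        writer.WriteLine("PING");
        writer.Flush();
    }
    catch (System.IO.IOException)
    {
        // The client is unreachable; stop pinging this connection.
        heartbeatTimer.Stop();
    }
};
heartbeatTimer.Start();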
The server crashing has nothing to do with your client. There are several external factors that can make the connection go down: the client is one of them, internet/LAN problems are another.
It doesn't matter why something fails; the server should handle it anyway. Servers going down will make your users scream ;)
Regarding multi threading, I suggest that you look at the BeginXXX/EndXXX asynchronous methods. They give you much more power and a more robust solution.
Try to avoid any strategy that relies on Thread.Abort(). If you cannot avoid it, make sure you understand the idiom for that mechanism, which involves having a separate AppDomain and catching ThreadAbortException.
If the server crashes I imagine you will have more problems than just fixing a while loop. Your program may enter an unstable state for other reasons. State should not be overlooked. That being said, a nice "server timed out" message may suffice. You could take it a step further and ping, then give a slightly more advanced message "server appears to be down".

ASP.NET SqlConnection Timeout issue

I have run into a frustrating issue which I originally thought was a connection leak, but that does not seem to be the case. The scenario is this: the data access for this application uses the Enterprise Library (v4) from Microsoft. All data access calls are wrapped in using statements such as:
using (DbCommand dbCommand = db.GetStoredProcCommand("sproc"))
{
    db.AddInParameter(dbCommand, "MaxReturn", DbType.Int32, MaxReturn);
    // ...more code
}
Now, the index page of this application makes 8 calls to the database to load everything, and I can bring the application to its knees by refreshing the index about 15 times. It seems that when the database reaches 113 connections is when I receive this error. Here is what makes this weird:
I have run similar code with the entlib on high traffic sites and have NEVER had this problem ever.
If I kill all the connections to the database and get the production application back up and running, then every time I refresh the application I can run this SQL:
SELECT DB_NAME(dbid) as 'Database Name',
COUNT(dbid) as 'Total Connections'
FROM sys.sysprocesses WITH (nolock)
WHERE dbid > 0
GROUP BY dbid
and see the number of connections actively increasing with each page refresh. Running the same code on my local box with the same connection string does not cause this problem. Further, if the production website is down, I can fire up the site via Visual Studio and run it fine; the only difference between the two is that the production site has Windows authentication turned on and my local copy doesn't. Turning Windows authentication off seems to have no effect on the server.
I have absolutely no clue what is causing this or why the connections are not being disposed of in SQL Server. The EntLib objects do not expose .Close() methods for anything, so I can't explicitly close the objects.
Any thoughts?
Thanks!
Edit
Wow I just noticed that I never actually posted the error message. Oy. The actual connection error is: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Check that the stored procedure you are executing is not running into a row or table lock. Also, if possible, try to deploy to another server and check whether the application crawls again.
Also try to increase the maximum allowed connections for your SQL Server.
I think the "Timeout expired" error is a general issue and may have several causes. Increasing the timeout can solve some of them, but not all.
You may also refer to the following link to troubleshoot and fix the error:
http://techielion.blogspot.com/2007/01/error-timeout-expired-timeout-period.html
Could it be a configuration issue on the server?
How do you make a connection to the database on the production server?
That might be an area worth looking into.
While I don't know the answer, I can suggest that for some reason connections are not being closed by your application when it runs in production (stating the obvious).
You might want to examine the network configuration between the web server and the SQL Server. High-latency networks can cause connections not to be closed in time.
It might also help to look at the performance counters listed at the end of the following MSDN article:
http://msdn.microsoft.com/en-us/library/8xx3tyca%28VS.71%29.aspx
Finally, if nothing else helps, I'd get a debugger and the Enterprise Library source code onto the production box and debug your code inside the Enterprise Library to find out why connections are not being closed.
Silly question: are you properly closing your DataReader? If not, this could be the problem, and the difference in behaviour between dev and prod could be caused by different garbage collection patterns.
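If it helps, this is roughly what "properly closing the reader" looks like with the Enterprise Library block from the question (a sketch, not the poster's actual code; db, dbCommand, "sproc" and MaxReturn are taken from the snippet above, and the reader loop is illustrative):
using (DbCommand dbCommand = db.GetStoredProcCommand("sproc"))
{
    db.AddInParameter(dbCommand, "MaxReturn", DbType.Int32, MaxReturn);

    using (IDataReader reader = db.ExecuteReader(dbCommand))
    {
        while (reader.Read())
        {
            // ...consume the row
        }
    } // disposing the reader also releases the underlying connection back to the pool
}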
I would disable connection pooling and try to suppress it (heh). Just add ";Pooling=false" to your connection string.
Or, perhaps you could add something like the following 'cleanup' code to your page (which closes any connection left open when the page unloads) - right in the 'using' clause:
System.Web.UI.Page page = HttpContext.Current.Handler as System.Web.UI.Page;
if (page != null) {
    page.Unload += (EventHandler)delegate(object s, EventArgs e) {
        try {
            dbCommand.Connection.Close();
        } catch (Exception) {
        } finally {
            result = null;
        }
    };
}
Also, make sure you've enabled the 'shared memory' protocol if your SQL Server and IIS are on the same machine (a real performance booster)!
