Slow initial connection to API - C#

I've got an API written in C# (WebForms) that accepts JSON POST data, backed by a SQL Server 2008 database, running on an AWS EC2 VM. My problem is that the "first" use of this API is rather slow to respond.
What I mean by "first" is that if I were to wait for an hour or so, then post some data, that would be the first. Subsequent posts would process rather quickly in comparison, and I would need to wait another hour or so before experiencing the slow "first" transaction again.
Since only the initial post is slow, it makes me wonder if something is "spinning down" after being idle for some time, and then spinning up again upon first use, adding the extra time.
Things I have tried -
Ran the program through a performance profiler. This didn't really help: as far as I can see, the program itself doesn't have any obvious parts that run very slowly or inefficiently.
Changed the configuration to persist at least one connection to the database at all times, by adding "Min Pool Size=1;Max Pool Size=100" to my connection string. Again, no real change.
Changed the configuration to use named pipes instead of TCP, by adding "np:" before the server name in my connection string, e.g. server=np:MyServer;database=MyDatabase;. Once again, no real change.
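For reference, the two connection-string variants I tried looked roughly like this (server and database names are placeholders, and the authentication part will differ for your setup):

// Attempt 1: keep at least one pooled connection alive at all times
var pooled = "server=MyServer;database=MyDatabase;Integrated Security=SSPI;Min Pool Size=1;Max Pool Size=100";

// Attempt 2: force the named-pipes protocol instead of TCP
var namedPipes = "server=np:MyServer;database=MyDatabase;Integrated Security=SSPI";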
Is there anything else I can do to diagnose the problem? What else should I be looking for in this scenario?

Chances are your app pool is shutting down after a designated period of non-use. The first call after the shutdown forces everything to be loaded back into memory, which explains the lag.
You could play with these settings: http://technet.microsoft.com/en-us/library/cc771956%28v=ws.10%29.aspx to see if you get the desired effect, or set up a Task Scheduler job that makes at least one call every 10 minutes or so via a simulated post - a simple PowerShell script could handle that for you and will keep everything 'primed' for the next use.
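If you'd rather stay in C#, a minimal console app scheduled every few minutes would do the same job as the PowerShell script. A sketch, assuming a hypothetical /api/ping endpoint and an empty JSON payload - adjust both for your API:

// Minimal keep-alive poster; schedule this with Task Scheduler.
using System;
using System.IO;
using System.Net;

class KeepAlive
{
    static void Main()
    {
        // The URL is a placeholder; point it at any cheap endpoint of your API.
        var request = (HttpWebRequest)WebRequest.Create("http://myserver/api/ping");
        request.Method = "POST";
        request.ContentType = "application/json";

        using (var writer = new StreamWriter(request.GetRequestStream()))
        {
            writer.Write("{}"); // empty JSON body, just enough to warm the app pool
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Keep-alive returned {0}", response.StatusCode);
        }
    }
}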

Related

Checking for internet connection slows the load speed when disconnected

I want to create an easy autoupdate system in my program. It works fine, but I want it to proceed only when the user is connected to the internet.
I tried many approaches, and every one worked, but when I'm disconnected from the internet the application takes around 10 seconds to load, which is really slow. My program checks for an update on load, and the connection test runs at the same time, which I think is the problem: if I run the test from a button click instead, it completes quickly even when disconnected from the internet.
If you are curious, I tried to use every connection test I found, including System.Net.NetworkInformation.NetworkInterface.GetIsNetworkAvailable();.
Your problem is that checking for a connection has a timeout. When there's a connection, it finds that out really fast (usually) and you don't notice the delay. When you don't have a connection, it has to do more checks and wait for responses. I don't see any way to adjust the timeout, and even if you could, you'd risk it not detecting connections even when they were available.
You should run the check on a separate thread so that your GUI loading isn't disrupted.
Rather than checking at startup, check on a background thread while the application is running and update then. Any connection check can be slow even when the internet is up, if there are DNS issues or just general slowness.
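As a sketch of that idea, assuming a WinForms app and a hypothetical CheckForUpdate() method standing in for your existing update logic:

// Inside your Form class: run the check off the UI thread so startup isn't blocked.
private void Form1_Load(object sender, EventArgs e)
{
    var worker = new System.Threading.Thread(() =>
    {
        if (System.Net.NetworkInformation.NetworkInterface.GetIsNetworkAvailable())
        {
            CheckForUpdate(); // placeholder: your existing update routine
        }
        // If the update result needs to touch any controls, marshal back
        // to the UI thread with Invoke/BeginInvoke before doing so.
    });
    worker.IsBackground = true; // don't keep the process alive on exit
    worker.Start();
}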

How do I handle WCF Call lifecycles under load when timeouts are expected?

I have a nice, fast task-scheduling component (a Windows service as it happens, but this is irrelevant), which subscribes to an in-memory queue of things to do.
The queue is populated really fast ... and when I say fast I mean fast ... so fast that I'm experiencing problems with one particular part.
Each item in the queue gets a "category" attached to it and is then passed to a WCF endpoint to be processed, then saved in a remote db.
This is presenting a bit of a problem.
The "queue" can be processed in the millions of items per minute whereas the WCF endpoint will only realistically handle about 1000 to 1200 items per second and many of those are "stacked" in order to wait for a slot to dump them to the db.
My WCF client is configured so that the call is fire and forget (deliberately). My problem is that occasionally the call times out, and that's when the headaches begin.
The thread just seems to stop after the timeout - no dropping into my catch block, nothing ... it just sits there. What's even more confusing is that this is intermittent: it only happens when the queue is dealing with extreme loads and the WCF endpoint is overtaxed, and even in that scenario it's only about once a fortnight.
This code is constantly running on the server, round the clock 24/7.
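For clarity, "fire and forget" here means the operations are marked one-way. The contract shape below is illustrative, not my actual service:

// Illustrative one-way contract (names are made up).
using System.ServiceModel;

[ServiceContract]
public interface IQueueSink
{
    // IsOneWay = true: the client doesn't wait for a return value, but it
    // still blocks until the transport accepts the message - which is why
    // an overloaded endpoint can stall the caller.
    [OperationContract(IsOneWay = true)]
    void Process(string category, string payload);
}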
So ... my question ...
How can I identify the edge case that is causing my problem so that I can resolve it?
Some extra info:
The client calling the WCF endpoint seems to throttle itself automatically, because I'm limiting the number of threads making calls and the code blocks until a call is considered complete (I'm thinking this is an HTTP-level thing, since I'm not asking the service for the result of my method call).
The db is talked to with EF, which never seems to open more than a fixed number of connections (quite a low number too, which is cool), and the WCF endpoint, from call reception back, seems super reliable.
The problem seems to lie between the queue processor and the WCF endpoint.
The queue processor has a single instance of my WCF endpoint client which it reuses for all calls ... (is it good practice to rebuild this client per call? - bear in mind the number of calls here).
Final note:
It's a peculiar "module" of functionality: under heavy load for hours at a time it's stable, but then for some reason this odd thing happens and the whole lot just stops and doesn't recover. The call is wrapped in a try/catch, but seemingly even if the catch is hit (which isn't guaranteed) the code doesn't recover / drop out as expected ... it just hangs.
Any ideas?
Please let me know if there's anything else I can add to help resolve this.
Edit 1:
binding - basicHttpBinding
error handling - no code written other than wrapping the WCF call in a try catch.
My solution appears to be to increase the timeout settings in the client config to allow the server more time to respond.
The net result is that while the database is busy saving data (effectively the slowest part of this process), the calling client sits and waits (on all threads, though seemingly not as long as I would have liked).
This issue seems to be the net result of a lot of multithreaded calls to the WCF service without giving it enough time to respond.
The high load is not continuous; the service usage spikes and then tails off, and adding to the expected response time lets the spikes filter through as they happen.
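For anyone setting these in code rather than in the client config, the equivalent looks something like this. The values are illustrative, and IQueueSink is the made-up contract from above:

// Illustrative client-side timeout bump for basicHttpBinding.
using System;
using System.ServiceModel;

var binding = new BasicHttpBinding
{
    SendTimeout = TimeSpan.FromMinutes(5),     // end-to-end time allowed per call
    ReceiveTimeout = TimeSpan.FromMinutes(10),
    OpenTimeout = TimeSpan.FromMinutes(1),
    CloseTimeout = TimeSpan.FromMinutes(1)
};
var factory = new ChannelFactory<IQueueSink>(
    binding, new EndpointAddress("http://myserver/queue")); // placeholder address
var client = factory.CreateChannel();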
A key note:
Way too many calls will result in the server / service treating them as a DoS-style attack, and as such it may simply terminate the connection.
This isn't what I'm getting, but some fine tuning and time may result in this ...
Time for some bigger servers !!!

Advice on a time-consuming procedure in C#

I've developed a program using Delphi that, among some features, does a lot of database reading of float values and many calculations on these values. At the end of these calculations, it shows a screen with some results. These calculations take some time to finish: today, something like 5 to 10 minutes before finally showing up the results screen.
Now my customers are requesting a .NET version of this program, as almost all of my other programs have already moved to .NET. But I'm afraid this time-consuming calculation procedure won't fit the web scenario, and that the program would run the user into some kind of timeout error.
So I'd like some tips or advice on how to handle this kind of procedure. Initially I thought about calling a local executable (which could even be my original Delphi program, run as a console app) and showing the results screen in a web page some time later. But, again, I'm afraid this wouldn't be the best approach.
An external process is a reasonable way to go about it. You could fire off a thread inside the ASP.NET process (i.e. just with new Thread()), which could also work, but there are issues around process recycling and pooling that might make that a little harder. Simply firing off an external process and then using some Ajax polling to check on its status in the browser seems like a good solution to me.
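A minimal sketch of the kick-off side, assuming a hypothetical CalcRunner.exe and a job-id scheme for correlating results - both are placeholders:

// Launch the long-running calculation out of process and return immediately.
using System;
using System.Diagnostics;

string jobId = Guid.NewGuid().ToString("N");
var startInfo = new ProcessStartInfo
{
    FileName = @"C:\tools\CalcRunner.exe", // could even be the original Delphi build as a console app
    Arguments = jobId,
    UseShellExecute = false,
    CreateNoWindow = true
};
Process.Start(startInfo);
// The page then polls (e.g. via Ajax) for a result file or db row keyed by jobId.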
FWIW, another pattern that some existing online services use (for instance, ones that do file conversion that may take a few minutes) is having the person put in an email address and just sending the results via email once it's done - that way if they accidentally kill their browser, or it takes a little longer than expected, it's no big deal.
Another approach I've taken in the past is basically what Dean suggested - kick it off and have a status page that auto-refreshes, and once it's complete, the status includes a link to results.
How about:
Create a Web Service that does the fetching/calculation.
Set the timeout so it won't expire.
YourService.HeavyDutyCalculator svc = new YourService.HeavyDutyCalculator();
svc.Timeout = 10 * 60 * 1000; // 10 minutes: 10 min x 60 s x 1000 ms
Service.CalculateResult result = svc.Calculate();
Note that you can set it to -1 (Timeout.Infinite) if you want it to run indefinitely.
MSDN:
Setting the Timeout property to Timeout.Infinite indicates that the request does not time out. Even though an XML Web service client can set the Timeout property to not time out, the Web server can still cause the request to time out on the server side.
Call that web method from your web page.
Show a waiting/in-progress image.
Register for the web method's completed event, and show the results upon completion.
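With the event-based pattern that a generated proxy exposes, that might look roughly like this (the proxy and member names follow the hypothetical HeavyDutyCalculator service above):

YourService.HeavyDutyCalculator svc = new YourService.HeavyDutyCalculator();
svc.Timeout = 10 * 60 * 1000; // 10 minutes in milliseconds
svc.CalculateCompleted += (sender, e) =>
{
    // hide the in-progress image and render e.Result here
};
svc.CalculateAsync(); // returns immediately; the event fires when done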
You can also update the timeout in your web.config:
<httpRuntime useFullyQualifiedRedirectUrl="true|false"
maxRequestLength="size in kbytes"
executionTimeout="seconds"
minFreeThreads="number of threads"
minFreeLocalRequestFreeThreads="number of threads"
appRequestQueueLimit="number of requests"
versionHeader="version string"/>
Regardless of what else you do, you need a progress bar or some other status indication for the user. Users are used to web pages that load in seconds; they simply won't realise (even if you tell them in advance) that they have to wait a full 10 minutes for their results.

Why does my website constantly freeze?

This is a pretty vague question and getting it answered seems like a long shot, but I don't know what else to do.
Ever since I made my website live, every now and then it will just freeze. You click on a link and the browser will just sit there looking like it's trying to connect. The freezing can last up to 2 minutes or so, then everything is fine again. Then a little while later, it does the same thing.
I track all the exceptions that occur on my website in a log file.
I get these quite a bit ..
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding
And the stack trace shows it leading to some method that's connecting to the database.
I'm assuming the freezing has to do with this timeout problem. My website is hosted on a shared server, and my database is on another server along with about a billion other databases.
But even on a shared server, this freezing happens all the time, and it's extremely annoying. I can see it being a pretty catastrophic problem, considering my site is ecommerce based and people are doing transactions on it. The last thing I want is the site freezing when a user hits the 'Submit payment' button, leading them to hit it over and over because the site froze, so that their credit card gets charged about 10 extra times.
Does anyone have any suggestions on the best way to handle this?
I am guessing that it has to do with the database connections. Check that they are being released properly; if not, the pool will eventually be exhausted.
Also check to see if your database has connection pooling configured.
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding
That's a SQL command timeout exception - it can be fairly common if your database is under load. Make sure you're disposing of your SqlConnections and SqlCommands - though failing to do that would usually result in a pool timeout exception (can't retrieve a connection from the connection pool).
Odds are, someone is running queries that are badly tuned or otherwise sucking resources. It may be your site, but since you're on a shared db server, it could just as easily be someone else's. It could also be blocking, or open transactions - since those would be on your database, that'd be a coding issue. You'll probably need to get your hosting provider involved to track it down or move to a dedicated db server.
You can decrease the CommandTimeout of your SqlCommands - I know that sounds somewhat counter-intuitive, but I often find that it's better to fail early than to keep trying for 60 seconds while throwing additional load on the server. If your 0.5 second query isn't done in 5 seconds, odds are it won't be done in 60 either.
Alternatively, if you're the patient type, you can increase the CommandTimeout - but there's also an IIS timeout of 90 seconds that you'll need to modify if you bump it up too much.
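Both points in one minimal sketch - the connection string and query here are placeholders:

// Dispose connections/commands deterministically, and fail fast under load.
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT Id, Total FROM Orders", conn)) // placeholder query
{
    cmd.CommandTimeout = 5; // fail early instead of piling load on a struggling server
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // process rows...
        }
    }
} // Dispose() returns the connection to the pool even if an exception is thrown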
The timeout errors are definitely the source of the freezing pages. When one happens, the page waits something like a minute for the database connection before returning the error message. As the web server handles only one request at a time per user session, the entire site will seem frozen to that user until the timeout error arrives. Even if it only happens to a few users once in a while, it will seem quite severe to them, as they can't access the site at all for a minute or so.
How severe the problem really is depends on how many errors you get. From your description it sounds like you get a bit too many to be normal.
Make sure that all your data readers, command objects and connection objects get disposed properly, so that you don't leave connections open.
Look for deadlock errors in the log too, as they can cause timeouts. If you have queries that lock each other, you may be able to improve them by changing the order in which they use the tables.
Check the SQL Server logs, especially for deadlocks.
If you have multiple connections open, one might be waiting on a row that is locked by the other.

.net remoting stops every 100 seconds

We have a very strange problem: one of our applications continually queries a server using .NET Remoting, and every 100 seconds the application stops querying for a short duration and then resumes. The problem is on the client and not on the server, because the application actually queries several servers at the same time and stops receiving data from all of them at the same time.
100 seconds is a giveaway number, as it's the default timeout for a WebRequest in .NET.
I've seen in the past that the PSI (Project Server Interface within Microsoft Project) didn't override the timeout, so the default of 100 seconds applied and would terminate anything talking to it for longer than that.
Do you have access to all of the code, and are you sure you have set timeouts where applicable so that no defaults are being applied unbeknownst to you?
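For what it's worth, if raw web requests are involved anywhere in the client, that default is easy to override - the endpoint URL and value here are illustrative:

// HttpWebRequest.Timeout defaults to 100,000 ms (100 seconds).
using System.Net;

var request = (HttpWebRequest)WebRequest.Create("http://myserver/endpoint"); // placeholder URL
request.Timeout = 300000; // five minutes, in milliseconds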
I've never seen that behavior before and unfortunately it's a vague enough scenario I think you're going to have a hard time finding someone on this board who's encountered the problem. It's likely specific to your application.
I think there are a few investigations you can do to help you narrow down the problem.
Determine whether it's the client or the server that is actually stalling. If you have trouble determining this, try installing a packet capture tool and monitor the traffic to see who sent the last data. You likely won't be able to read the binary payload, but at least you will get a sense of who is lagging behind.
Once you figure out whether it's the client or server causing the lag, attempt to debug into the application and get a breakpoint where the hang occurs. This should give you enough details to help track down the problem. Or at least ask a more defined question on SO.
How is the application coded to implement the continuous querying? Is it a tight loop, a loop with a Thread.Sleep, or a timer?
It would first be useful to determine whether your system is executing this "trigger" in your code when you expect it to, or whether it is and the remoting server is simply not responding... so:
If you cannot reproduce this issue in a development environment where you can debug it, then I suggest you add code to this loop to write out to a log file (or some other persistence mechanism) each time it examines whatever conditions it uses to decide whether to query the remoting server, and then review those logs when the problem recurs (a sketch follows below).
If you can do the same in your remoting server, recording when the server receives a remoting request, that would help as well...
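A minimal sketch of that breadcrumb logging - the log path and Query() call are placeholders for whatever your loop actually does:

// Timestamped breadcrumbs around the querying loop, so the logs show
// whether the trigger fired and whether the remoting call returned.
using System;
using System.IO;

static void Log(string message)
{
    File.AppendAllText(@"C:\logs\remoting-client.log",
        string.Format("{0:O} {1}{2}", DateTime.UtcNow, message, Environment.NewLine));
}

// in the loop:
Log("about to query server");
var result = remoteServer.Query(); // placeholder for the actual remoting call
Log("query returned");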
... and oh yes, just a thought (I don't know how you have coded this...), but if you are using a separate thread in the client to issue the remoting request, and the channel is being registered and unregistered on that separate thread, make sure you are deconflicting the requests, because you can't register the same port twice on the same machine at the same time...
(although this should probably have raised an exception in your client if this was the issue)
