I'm running a .NET 4.5 C# Web Forms application. On our production server, the application gets slower and slower for everyone over time until, after about an hour, it becomes pretty much unusable and we just restart the server.
It works perfectly fine on my local machine. Never gets slower at all. So I'm trying to think of what would be different.
My local machine connects to the same database. There's also some Active Directory lookup used for logins, but that also connects to the same place in both my local and production environments.
I have debug turned off in the web.config. I've also looked up pretty much every other suggested solution and haven't had any luck.
I did see some posts about view state building up with Ajax requests. I do have a page with UpdatePanels that refresh every few seconds, but I'm confused as to why this would make the production server slower over time and not my local machine during testing.
There are also only about 10 people using the application at the moment. Any ideas?
Refreshing a bunch of UpdatePanels every few seconds is pretty much never a good idea. I was able to optimize things here and there, but in the end it was the auto-refreshing that was slowing things down. I completely reprogrammed it using SignalR to push updates instead, and now it's very fast.
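For anyone looking at the same change, here is a rough sketch of the push model that replaced the polling (a minimal SignalR 2.x example; the hub name, method name, and payload are made up for illustration, not the poster's actual code):

using Microsoft.AspNet.SignalR;
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(Startup))]
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.MapSignalR(); // exposes the /signalr endpoint
    }
}

// Clients subscribe to this hub instead of polling UpdatePanels on a timer.
public class StatusHub : Hub
{
    // Call this from server code when the data actually changes
    // (e.g. after a save), rather than broadcasting on a schedule.
    public static void BroadcastStatus(string statusJson)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<StatusHub>();
        context.Clients.All.updateStatus(statusJson); // clients handle "updateStatus"
    }
}

The key difference from the UpdatePanel approach is that nothing runs, and no view state travels over the wire, unless there is actually something new to show.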
We have a website built on MVC3 and Telerik. Since the latest release, we've had huge performance issues (every page takes about 40-50 seconds to load). As far as we can see, in our dev environments both the old and new releases work absolutely fine. On prod, loading any page remotely is extremely slow; however, from the prod box itself, using localhost or the hostname, it works fine.
What we have already checked:
database works absolutely fine
old/new releases on all the QA and DEV environments
application pool settings were compared with other websites, which are working fine
Application pool recycling counter - no unexpected recycles
Different browsers - also checked
Chrome dev tools show that all the time is spent getting data from the server (I believe rendering the page on the server). All the Ajax requests work fast.
To be fair, I've run out of ideas about what it might be, so can you please suggest what else is worth checking in this case (network settings, IIS settings, perf counters, etc.)?
Is there a proxy or other intermediary server in play?
If performance is acceptable when you browse locally but poor remotely, I would first check the path to the website when you visit it remotely, via traceroute or something similar. If the hops are as expected, I would check the boxes along the way to your website to make sure they are not doing something weird. If you use a CDN, I would check whether it is still configured correctly. Failing that, I would look at adding some client-side instrumentation so you can see what's actually taking so long.
If you have action filters enabled, try disabling them and test. It might be that some action filters are doing extra work that delays the response.
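If you want to measure rather than disable, a global timing filter narrows down whether the time goes into the actions themselves or into the rest of the pipeline. A minimal sketch (the filter name and logging target are my own choices, not from the question):

using System.Diagnostics;
using System.Web.Mvc;

// Logs how long each action + result takes, so you can tell whether the
// 40-50 seconds is spent in your code or elsewhere (filters, rendering, network).
public class TimingFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        filterContext.HttpContext.Items["timer"] = Stopwatch.StartNew();
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        var timer = filterContext.HttpContext.Items["timer"] as Stopwatch;
        if (timer == null) return;
        timer.Stop();
        Trace.WriteLine(string.Format("{0} took {1} ms",
            filterContext.HttpContext.Request.RawUrl, timer.ElapsedMilliseconds));
    }
}

// Registered once in Global.asax.cs (global filters are available since MVC3):
// GlobalFilters.Filters.Add(new TimingFilterAttribute());

If the filter reports small numbers while the browser still waits 40-50 seconds, the time is being lost outside MVC, which would point back at the network path suggested above.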
Our team has an Android application with a .NET C# backend, hosted in IIS.
Recently, we have observed sudden and unexplained latency for our customers, with the following scenario:
Without any warning, users are unable to change the channel (zapping), since the product has to do with live media streaming, and they cannot even log out of the application.
The mobile application, which connects to another backend (still a C# backend), works properly, without any problem.
After some time (which has varied from 6 hours for the first incident to 5 minutes for the last one), everything returns to normal.
I have enabled Failed Request Tracing logs, to see if I can get anything from there, and I have results as follows:
<failedRequest url="https://ourDNS.com:443/servertime.aspx"
siteId="1"
appPoolId="DefaultAppPool"
processId="22232"
verb="POST"
remoteUserName=""
userName=""
tokenUserName="NT AUTHORITY\IUSR"
authenticationType="anonymous"
activityId="{80013C53-0802-B500-B63F-84710C7967BB}"
failureReason="TIME_TAKEN"
statusCode="200"
triggerStatusCode="0"
timeTaken="45141"
xmlns:freb="http://schemas.microsoft.com/win/2006/06/iis/freb"
>
The page described above is a simple page that first gets the server's time zone and then, after getting the customer's time zone (which can be set manually from the client), returns the exact date and time of the device where the application is hosted, for further calculations of the stream program, what is playing now, etc. However, this page, which returns simple JSON with a single string in it, sometimes takes more than 45 seconds (to me this is insane).
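For context, the whole page boils down to something like this (a reconstruction from the description above, not the actual code; the parameter name and JSON shape are assumptions):

using System;
using System.Web.UI;

public partial class ServerTime : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // The client may post its own time zone id; otherwise use the server's.
        string zoneId = Request.Form["tz"] ?? TimeZoneInfo.Local.Id;
        TimeZoneInfo zone = TimeZoneInfo.FindSystemTimeZoneById(zoneId);
        DateTime clientNow = TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, zone);

        Response.ContentType = "application/json";
        Response.Write("{\"servertime\":\"" + clientNow.ToString("yyyy-MM-dd HH:mm:ss") + "\"}");
    }
}

Nothing in a handler this small should take 45 seconds by itself, which suggests the time is being lost before or around the request rather than inside the page.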
Another log, from the client side at the same moment, is this exception:
java.net.SocketTimeoutException
at java.net.PlainSocketImpl.read(PlainSocketImpl.java:491)
at java.net.PlainSocketImpl.access$000(PlainSocketImpl.java:46)
at java.net.PlainSocketImpl$PlainSocketInputStream.read(PlainSocketImpl.java:240)
at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:103)
at org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:191)
at org.apache.http.impl.conn.DefaultResponseParser.parseHead(DefaultResponseParser.java:82)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:174)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:180)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:235)
at org.apache.http.impl.conn.AbstractClientConnAdapter.receiveResponseHeader(AbstractClientConnAdapter.java:259)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:279)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:121)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:428)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:555)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:487)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:465)
at com.framework.utilityframe.webhelper.HttpRequest.getHttpResponse(HttpRequest.java:316)
at com.framework.utilityframe.webhelper.HttpRequest.httpRequest(HttpRequest.java:393)
at com.tibo.webtv.web.TiboLog.logBufferingError(TiboLog.java:319)
at com.tibo.webtv.CustomVideoView$Buffering_Problem.doInBackground(CustomVideoView.java:324)
at com.tibo.webtv.CustomVideoView$Buffering_Problem.doInBackground(CustomVideoView.java:307)
at android.os.AsyncTask$2.call(AsyncTask.java:287)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305)
at java.util.concurrent.FutureTask.run(FutureTask.java:137)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
at java.lang.Thread.run(Thread.java:856)
Reading through different forums, I have seen various causes of performance problems, ranging from the database to IIS and even misconfiguration of the application. I have ruled out the database as a cause because:
At the moment of the problem, the database parameters were absolutely fine: no changes in query execution times, no waiting tasks, no locking
Secondly, the mobile and Decoder applications connect to the same database, and the mobile application runs just fine with the same queries
As for IIS, every application hosted in that app pool was running fine and without delays, but there may still be something I am missing there.
And lastly, something that makes me suspicious is the fact that the mobile application differs from the Decoder application in two ways:
First, the mobile application receives responses from the backend in XML format, while the Decoder uses JSON.
Second, the mobile application uses HTTP requests, while the Decoder uses HTTPS (SSL).
If anyone has experienced similar issues, their help would be greatly appreciated. And for any other detail you need, just ask and I will provide.
So, today our team ran another test, which included:
The application hosted on one server and the database on another
The application and database hosted on a completely different server (an Azure environment)
In both cases the result was the same: latency and problems with the service.
The problem was neither in the backend nor on the server. First, the Java application was, by mistake, executing synchronous tasks when saving logs to another server (a dedicated one, with plenty of capacity for as much data as you can give it). Second, that log server had a full HDD, with more than 1 TB of DB logs alone, so when the application executed those synchronous tasks (which came as the very first call, before any interaction with the channels), they hit the socket timeouts shown above. So, for anyone else who comes across this post: PLEASE, ALWAYS CHECK THE TASKS IN YOUR APPLICATION, AND ALWAYS CHECK ANY SERVER RELATED TO YOUR APPLICATION!!! Thank you very much :D
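The general lesson carries over to any client or backend: never let a best-effort logging call block the user-facing path, and always give it a hard timeout. A sketch of that idea in C# (the class name and endpoint URL are made up):

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class SafeLogger
{
    // Fail fast if the log server is sick instead of hanging the caller.
    private static readonly HttpClient client = new HttpClient
    {
        Timeout = TimeSpan.FromSeconds(2)
    };

    public static void Log(string message)
    {
        // Deliberately not awaited: a lost log entry is better than a frozen app.
        Task.Run(async () =>
        {
            try
            {
                await client.PostAsync("http://logserver.example/log", // hypothetical
                    new StringContent(message));
            }
            catch
            {
                // Swallow everything: logging must never take the application down.
            }
        });
    }
}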
I have a web app written in ASP.NET Web Forms which takes about 7 seconds to perform any action.
The browser tools say that the loading time is spent actually sending the request, while retrieving the response is fine.
Here are a couple of screenshots
How can I test what's wrong, other than that?
I have other applications built from the same source code, and those work fine.
The web app is running on a local IIS server (8.5) using a classic ASP.NET 4 app pool and was made in Visual Studio 2010 (although I could test it with 2013 as well).
I scanned the whole thing using ANTS Performance Profiler but found nothing. Anyway, I eventually found the cause: a really simple query which should have returned a single row. The DB wasn't properly indexed, so it took forever to run. Removing that query solved the problem for me. Thanks guys.
Basically, I have a Windows service which performs a batch job.
I have two collections that are related, customerAccounts and events. The events collection logs actions that customers performed on a site, containing the timestamp, the name of the event, the page it occurred on and the username.
The service runs through each account and works out their journey phase and risk of account closure based on what events they have in the Events collection and a set of user-defined rules.
There are about 3,500 accounts and around 100,000 events in my database at present. The service takes just over 1 minute to run on my development PC, but seemingly forever on the server (I've estimated roughly 2.5 hours, based on modifying the service so it only performs the job on a single customer account).
My machine is a Core i7 with 16 GB of RAM; the server is an Intel Xeon E5-2609 (64-bit, Windows 2008 R2) with 24 GB of RAM. I put the database on a much older server (32-bit, Windows 2003) and the service took about 2 minutes to run. So, I know that on my dev machine it takes just over a minute, and on older server hardware it takes just over 2 minutes, yet on a modern server it takes a matter of hours.
Originally, the Mongo shell warned that NUMA was enabled on the server and should be switched off to avoid performance problems. This has since been turned off, but it doesn't seem to have had an effect on performance.
When I run db.currentOp() on the server, I've noticed that there is always a "createIndexes" operation of some kind (the indexes were created ages ago), yet when I mongodump/restore the database to my dev machine and run the service and currentOp there, the "createIndexes" operation isn't present. Apart from that, nothing else jumps out at me.
Does anyone have any ideas / help on this mysterious performance issue? I'll post currentOp/mongostats if/when required.
Quick answer: I re-installed Mongo. No fancy configuration, just ran the setup and it fixed the issue.
I never worked out why Mongo was constantly creating indexes. The log file for a single day is 0.25 GB, full of entries for "creating index".
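For what it's worth, one classic cause of a Mongo log full of "creating index" entries is application code that re-declares its indexes on every run, for example EnsureIndex (legacy C# driver) or index creation called inside the processing loop instead of once at startup. A hedged sketch of checking for that with the 2.x C# driver (the database and field names are placeholders; this is a guess, not something the question confirms):

using System;
using MongoDB.Bson;
using MongoDB.Driver;

class IndexCheck
{
    static void Main()
    {
        var client = new MongoClient("mongodb://localhost:27017");
        var db = client.GetDatabase("mydb");                     // placeholder name
        var events = db.GetCollection<BsonDocument>("events");

        // See which indexes actually exist on the server:
        foreach (var index in events.Indexes.List().ToList())
            Console.WriteLine(index.ToJson());

        // If indexes must be ensured in code, do it once at service startup,
        // never inside the per-account loop:
        events.Indexes.CreateOne(
            Builders<BsonDocument>.IndexKeys.Ascending("username").Ascending("timestamp"));
    }
}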
Now that usage of one of my ASP.NET apps has gone up significantly, two odd problems are occurring that are very infrequent and that I cannot reproduce.
I am at a loss as to how to debug and troubleshoot these problems.
Here are two examples:
One of my aspx pages resets a session state value to 0 when !IsPostBack is true. However, one of my users at a specific location frequently comes to that page when it is not a postback, and the session state value does not get reset on his laptop. (I am basing this statement on how the app subsequently behaves, not on running in debug mode.) But the code works and the session state is reset on my laptop when I am sitting next to him, running the app on my laptop using the same browser on the same internet connection at the same time. And when this user runs the app on his laptop from home, where he has a better internet connection, he does not have the problem as frequently.
One of the aspx pages in my app does a Server.Transfer to itself after running code that saves data to a DB. Almost all the time after the Server.Transfer, the textboxes contain their default values (as they should, since !IsPostBack == true), but about 1% of the time the textboxes contain the previous values. I know that there has been a round trip to the server because the data has been saved. This problem occurs on the same PCs, using the same browsers, by the same users, doing the same actions. So 99% of the time it works correctly, and 1% of the time they do the exact same thing and it does not.
How do I even start trying to figure out what is causing these problems if they seem to be occurring randomly?
I suspect that the quality of the internet connection is the issue because it is the one variable that is changing, but how does that info help me?
It's not like I can debug either of these problems by running my app in debug mode.
I am using ASP.NET 3.5 and C# 3.5, and the app is run in IE 6-8 (IE 8 in compatibility mode).
I would add logging to the code where the problem is occurring, then ask the users who are having the problem to note the time when they run into the issue. Once you have the logs and an approximate time, you can go in and pore over the logs to see if anything points you in the right direction. I would also look at your IIS and event logs on the server.
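As a concrete starting point for the first problem, something this small in the suspect page would confirm whether the reset actually runs when the user says it doesn't (the session key and class name are stand-ins for whatever is being reset):

using System;
using System.Diagnostics;
using System.Web.UI;

public partial class SuspectPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Enough context to correlate user reports with server behavior later.
        Trace.WriteLine(string.Format(
            "{0:o} session={1} IsPostBack={2} value={3} agent={4}",
            DateTime.UtcNow, Session.SessionID, IsPostBack,
            Session["counter"], Request.UserAgent)); // "counter" is hypothetical

        if (!IsPostBack)
        {
            Session["counter"] = 0; // the reset the question describes
        }
    }
}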
You can install the Firefox Throttle plugin to simulate slow connections. Lots of things can happen in ASP.NET with slow page loads. If the page isn't fully loaded but items are clickable, ASP.NET can get really upset with event validation, etc.
Also, I encourage you to start logging and tracing the problem areas in your application. You can then correlate that with the IIS request logs and get a fairly accurate picture of what's happening when.
It seems like you're having problems with session state. By default, ASP.NET uses the InProc session state mode, which stores values in server memory. On many occasions this can be lost or reset (e.g. when the app pool recycles). Switching to SQLServer session state might help you solve the issue.
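Assuming nothing non-serializable is kept in session, the switch itself is a config change (the connection string is a placeholder):

<!-- web.config: move session state out of worker-process memory so an
     app pool recycle no longer wipes it. -->
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=YourSqlServer;Integrated Security=SSPI"
                timeout="20" />
</system.web>

The backing database is created with the aspnet_regsql.exe tool that ships with the framework (e.g. aspnet_regsql.exe -S YourSqlServer -E -ssadd). Note that everything stored in session must be serializable for this mode to work.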