So, I've been working on a project that makes use of the Katana/Owin pipeline, and I'm trying to use Owin to serve static files securely. Now, when I run the project locally through VS, everything works perfectly fine.
However, when I deploy it to a server (Windows Server 2008 R2, IIS 7), IIS has decided that it'll be damned before it lets any dirty managed handler manage its static files. Even when I've removed the static file handler. Even when I've added my own handler for a specific path. Even when I've told it to runAllManagedModulesForAllRequests. And since the path being passed in is a virtual path, IIS craps out and dies, spitting out either a 500 or a 404 error. I'm not entirely sure what causes it to return a 500 rather than a 404, but any time the request is more than one directory deep it gives a 500 instead of a 404 (/Client/Content/Folder/static.html → 404 vs. /Client/Content/Folder/SubFolder/static.html → 500).
I'm just not sure where to go from here, at all. Worse still, when I deploy to my local IIS (not running it out of VS, that is), it works perfectly. It respects the web.config, the Owin middleware handles the static requests perfectly, the security is in place. Everything is peachy. Granted, I'm running IIS 8 Express on my box, but I'm not convinced that that is it.
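For reference, the setup looks roughly like this (a simplified sketch; the /Client/Content path, the ProtectedContent folder, and the auth check stand in for my real code):

    using System.Threading.Tasks;
    using Microsoft.Owin;
    using Microsoft.Owin.FileSystems;
    using Microsoft.Owin.StaticFiles;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Security gate: reject anonymous requests before the static file middleware runs.
            app.Use(async (context, next) =>
            {
                var user = context.Authentication.User;
                if (user == null || !user.Identity.IsAuthenticated)
                {
                    context.Response.StatusCode = 401;
                    return;
                }
                await next();
            });

            // Serve files under a virtual path from a folder outside the normal webroot.
            app.UseStaticFiles(new StaticFileOptions
            {
                RequestPath = new PathString("/Client/Content"),
                FileSystem = new PhysicalFileSystem(@".\ProtectedContent")
            });
        }
    }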
I'm going to post my comment as an answer since the more I think about it, the more sure I am that it could be the issue.
Check that all the necessary ASP.NET and IIS features have been installed correctly via Programs and Features - do a comparison between your dev and server environments. I've experienced essentially the same issues, and this was usually the reason.
If that isn't the reason, check the IIS settings/application pool settings and make sure they are the same on the dev and production environments.
We have a website built on MVC3 and Telerik. After the latest release, we've got huge performance issues (all pages take about 40-50 seconds to load). As far as we can see in our dev environments, the old and new releases work absolutely fine. On prod, loading any page remotely is extremely slow; however, from the prod box itself, using localhost or the hostname, it works fine too.
What we have already checked:
database works absolutely fine
old/new releases on all the QA and DEV environments
application pool settings were compared with other websites, which are working fine
Application pool recycling counter - no unexpected recycles
Different browsers - also checked
Chrome dev tools show that all the time is spent getting data from the server (I believe rendering the page on the server). All the Ajax requests work fast.
To be fair, I've run out of ideas about what it might be, so can you please suggest what else is worth checking in this case (network settings, IIS settings, perf counters, etc.)?
Is there a proxy or other intermediary server in play?
If performance is acceptable when you browse locally but poor remotely, I would first check the path to the website when you visit it remotely via traceroute or something similar. If the hops are as expected, I would check the boxes along the way to your website to make sure they are not doing something weird. If you use a CDN, I would check whether it is still configured correctly. Failing that, I would look at adding some client-side instrumentation so you can see what's actually taking so long, something like this perhaps.
If you have action filters enabled, try disabling them and test. It might be that some action filters are doing extra work that delays the response.
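If you want to measure rather than guess, a quick global action filter can show whether the time goes into the actions or into the view rendering (a rough sketch; TimingFilter is just an illustrative name):

    using System.Diagnostics;
    using System.Web.Mvc;

    // Logs how long each action and its result (the view rendering) take.
    public class TimingFilter : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            filterContext.HttpContext.Items["actionWatch"] = Stopwatch.StartNew();
        }

        public override void OnActionExecuted(ActionExecutedContext filterContext)
        {
            var watch = (Stopwatch)filterContext.HttpContext.Items["actionWatch"];
            Trace.WriteLine(string.Format("Action: {0} ms", watch.ElapsedMilliseconds));
        }

        public override void OnResultExecuting(ResultExecutingContext filterContext)
        {
            filterContext.HttpContext.Items["resultWatch"] = Stopwatch.StartNew();
        }

        public override void OnResultExecuted(ResultExecutedContext filterContext)
        {
            var watch = (Stopwatch)filterContext.HttpContext.Items["resultWatch"];
            Trace.WriteLine(string.Format("Result (rendering): {0} ms", watch.ElapsedMilliseconds));
        }
    }

    // Registered once in Global.asax.cs:
    // GlobalFilters.Filters.Add(new TimingFilter());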
I have my web application running, and every time I change some little piece of logic code I need to stop the app and wait for IIS to restart entirely.
Somewhere on the web I saw someone saying that one of the cool features of MVC5 (or maybe MVC6 on ASP.NET Core) is that you can make changes "on the fly".
So can I avoid stopping and restarting IIS every time, or did I just misunderstand something?
It depends on how the ASP.NET Core app is deployed. Essentially, its ability to make changes on the fly comes from being deployable as plain code rather than as a precompiled application, with the code compiled on the fly at the web server. IIS by itself cannot do that. However, IIS can act as a reverse proxy for Kestrel, and a Kestrel-hosted app can pick up changes on the fly (for example, Razor views with runtime compilation enabled, or the whole app under dotnet watch). If you deploy the app in the traditional "compile and publish directly to the IIS application directory" approach, then you will not benefit from this.
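For what it's worth, on current ASP.NET Core versions, on-the-fly view changes are opt-in (a minimal sketch, assuming ASP.NET Core 3.x or later with the Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation package referenced; compiled code changes still need a rebuild, e.g. via dotnet watch run):

    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddControllersWithViews()
                    .AddRazorRuntimeCompilation(); // Razor view edits are picked up without a restart
        }
    }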
Actually, you don't need to restart IIS after each deploy. Whenever a change is detected in the DLLs, the app (not IIS) will recycle and reload the new DLLs. It just impacts that particular application and reloads its app domain.
Editing the web.config in a web application will also recycle the app.
You can read more in this article.
I have made some changes inside a user control, to both the code-behind and the aspx. When I run my local development site or the dev site (I posted the changes to the dev site), I don't see my changes. I recycled the app pools and restarted the dev site as well.
I have placed breakpoints in the code. The code never hits those. When I mouse over the breakpoints after the page has executed, I get an unreachable-code message (yellow popup, as attached).
I am only able to see my changes (local dev and dev site) after deleting the Temporary ASP.NET Files on my local machine and the dev box.
I have just posted the code to the staging site and it is doing the same thing. Here I can't delete the Temporary ASP.NET Files in the middle of the day or restart IIS.
The project is:
VS 2012
ASP.NET 4.5
IIS 7
Kentico CMS - classic ASP.NET
This is the first time I am seeing this behavior. Has someone else seen this, and how did you fix it?
Thanks.
I think recycling the app pool should solve the issue, and if IIS is configured for overlapping recycling, the users shouldn't notice that it is happening. Info from the Microsoft IIS documentation:
...
Overlapping recycling, the default, lets an unhealthy worker process become marked for recycling, but continues handling requests that this unhealthy process has already received. It does not accept new requests from HTTP.sys. When all existing requests are handled, the unhealthy worker process shuts down.
...
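If you'd rather trigger the recycle from code than from the IIS manager, something like this should work (a sketch using Microsoft.Web.Administration; the pool name is a placeholder):

    using Microsoft.Web.Administration;

    class RecyclePool
    {
        static void Main()
        {
            using (var serverManager = new ServerManager())
            {
                // "MyAppPool" is a placeholder; use the pool your site actually runs in.
                ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"];
                pool.Recycle(); // with overlapping recycling, in-flight requests finish on the old worker
            }
        }
    }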
I hope this helps.
Kentico had a caching bug that got fixed in hotfix 7.0.86. I have applied the most recent hotfix, 7.0.92, and on the dev and staging sites it looks fixed now.
We have a C# web application, and the latest deploy doesn't work on our Windows Small Business Server 2008 (IIS7). An exact copy of that site runs fine on my Windows 7 machine (IIS7.5). The previous version and other builds still work on the Server 2008 R2 machine, but this iteration doesn't.
I've checked the W3SVC logs, but no requests are logged. I've checked the event log for errors, but no errors are logged. I also checked in Fiddler, but the request just doesn't get a response as far as I can tell (the Result column remains -).
When you open the URL, the browser just keeps loading (no timeout).
Is there anything else I can check or enable to debug this IIS7 behaviour?
Thanks in advance,
Nick.
UPDATE
I published the application again and created a new site in IIS, and this new version works. While my immediate problem is solved for now, I would still like to know how to debug IIS7, see how it works, and why it would keep loading infinitely.
First, I would drop a regular .html file into the site's directory. Then I would have a browser request that specific static file. This would bypass the .NET engine, and the request should be logged.
If for some reason it doesn't work and/or isn't logged, then there are other things to check - let us know.
Assuming that it does serve the file and you are pointing at the correct machine, inspect your global.asax file and remove any error handling you might have, and turn off the customErrors section of your web.config. Both of these could result in the server essentially spinning off into nothingness if improperly coded. If you spin up any additional threads on access, see if you can turn those off or add additional logging.
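If you want to script that static-file check (handy when the browser just spins forever), a throwaway probe like this tells you whether anything answers at all (the host and file name are placeholders; test.html is the static file you dropped into the site root):

    using System;
    using System.Net;

    class StaticFileProbe
    {
        static void Main()
        {
            var request = (HttpWebRequest)WebRequest.Create("http://yourserver/test.html");
            request.Timeout = 10000; // fail fast instead of hanging forever

            try
            {
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine("Status: " + response.StatusCode);
                }
            }
            catch (WebException ex)
            {
                Console.WriteLine("Failed: " + ex.Status + " - " + ex.Message);
            }
        }
    }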
Next, look in the HTTPERR logs to see if you can identify what's going on. These are located at
%SystemRoot%\system32\LogFiles\HTTPERR\httperr*.log
Info about this log file is at: http://support.microsoft.com/default.aspx?scid=kb;en-us;820729
If your app uses ADO, then there is a chance that, depending on whether the build occurred on Windows 7 and whether SP1 was installed at the time of the build, your build is broken by a Microsoft ADO update contained in SP1 (see http://www.codeproject.com/Articles/225491/Your-ADO-is-broken.aspx).
If no requests are logged in the W3SVC logs, then it probably means that IIS is not receiving the request at all - likely due to firewall configuration or similar.
You should diagnose why IIS is unavailable (for example by attempting to serve some static content) and then try again.
Try these:
re-register the ASP.NET runtime with your IIS7 (e.g. by running aspnet_regiis.exe -i from the Framework folder of the .NET version your app targets)
make sure the asp.net extension for the correct version is set to Allowed in 'ISAPI and CGI restrictions' in your IIS
We've currently got a process that redeploys our ASP.NET websites. The deployment code is itself an ASP.NET application. The current method, which has worked for quite a while, is simply to loop over all the files in one folder and copy them over the top of the files in the webroot.
The problem that's arisen is that occasionally files end up being in use and hence can't be copied over. In the past this was intermittent to the point that it didn't matter, but on some of our higher-traffic sites it now happens the majority of the time.
I'm wondering if anyone has a workaround or alternative approach to this that I haven't thought of. Currently my ideas are:
Simply retry each file until it works. That's going to cause errors for a short time, though, which isn't really that good.
Deploy to a new folder and update IIS's webroot to the new folder. I'm not sure how to do this short of running the application as an administrator and running batch files, which is very untidy.
Does anyone know what the best way to do this is, or if it's possible to do #2 without running the publishing application as a user who has admin access (Willing to grant it special privileges, but I'd prefer to stop short of administrator)?
Edit
Clarification of infrastructure... We have 2 IIS 7 webservers in an NLB running their webroots off a shared NAS (To be more clear, they're using the exact same webroot on the NAS). We do a lot of deploys, to the point where any approach we can't automate really won't be viable.
What you need to do is temporarily stop IIS from processing any incoming requests for that app, so you can copy the new files and then start it again. This will lead to a small downtime for your clients, but unless your website is mission critical, that shouldn't be that big of a problem.
ASP.NET has a feature that targets exactly this scenario. Basically, it boils down to temporarily creating a file named App_Offline.htm in the root of your webapp. Once the file is there, IIS will take down the worker process for your app and unload any files in use. Once you've copied over your files, you can delete the App_Offline.htm file and IIS will happily start churning again.
Note that while that file is there, IIS will serve its content as a response to any requests to your webapp. So be careful what you put in the file. :-)
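In code, the copy loop then becomes roughly this (a sketch; the paths are placeholders and error handling is omitted):

    using System.IO;
    using System.Threading;

    static class Deployer
    {
        public static void Deploy(string sourceDir, string webRoot)
        {
            string appOffline = Path.Combine(webRoot, "App_Offline.htm");

            // Creating this file makes IIS unload the app domain and release its file locks.
            File.WriteAllText(appOffline, "<html><body>Back in a minute...</body></html>");
            Thread.Sleep(2000); // give IIS a moment to notice the file and shut the app down

            try
            {
                foreach (string source in Directory.GetFiles(sourceDir, "*", SearchOption.AllDirectories))
                {
                    string relative = source.Substring(sourceDir.Length).TrimStart('\\');
                    string target = Path.Combine(webRoot, relative);
                    Directory.CreateDirectory(Path.GetDirectoryName(target));
                    File.Copy(source, target, true);
                }
            }
            finally
            {
                // Deleting the file brings the app back up on the next request.
                File.Delete(appOffline);
            }
        }
    }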
Another solution is IIS Programmatic Administration.
You can copy your new/updated web to an alternative directory and then switch the IIS root of your webapp to that directory. Then it doesn't matter if files are locked in the original root. This is a good solution for website availability.
However it requires some permission tuning...
You can do it via ADSI or WMI for IIS 6 or Microsoft.Web.Administration for IIS 7.
About your #2: note that WMI doesn't require administrator privileges the way ADSI does. You can configure rights per object. Check your WMI console (mmc).
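On IIS 7 the switch itself is only a few lines with Microsoft.Web.Administration (a sketch; the site name and path are placeholders, and the calling account still needs rights to write the IIS configuration):

    using Microsoft.Web.Administration;

    static class SiteSwitcher
    {
        // Points the root virtual directory of a site at a freshly deployed folder,
        // e.g. SwitchWebRoot("Default Web Site", @"D:\deploys\release-42").
        public static void SwitchWebRoot(string siteName, string newPath)
        {
            using (var serverManager = new ServerManager())
            {
                Site site = serverManager.Sites[siteName];
                site.Applications["/"].VirtualDirectories["/"].PhysicalPath = newPath;
                serverManager.CommitChanges();
            }
        }
    }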
Since you're already load balancing between 2 web servers, you can:
In the load balancer, take web server A offline, so only web server B is in use.
Deploy the updated site to web server A.
(As a bonus, you can do an extra test pass on web server A before it goes into production.)
In the load balancer, take B offline and put A online, so only web server A is in use.
Deploy the updated site to web server B.
(As a bonus, you can do an extra test pass on web server B before it goes into production.)
In the load balancer, put B back online. Now both web servers are upgraded and back in use in production.
You could also try modifying the timestamp of the web.config in the root folder before attempting to copy the files, as sketched below. This will unload the application and release the files in use.
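Something like this, run just before the copy (a sketch; the path is a placeholder):

    using System;
    using System.IO;

    class TouchConfig
    {
        static void Main()
        {
            // Updating web.config's timestamp makes ASP.NET recycle the app domain,
            // which releases the locks held on the assemblies in bin.
            File.SetLastWriteTimeUtc(@"C:\inetpub\wwwroot\MyApp\web.config", DateTime.UtcNow);
        }
    }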
Unless you're manually opening a handle to a file on your web server, IIS won't keep locks on your files.
Try shutting down other services that might be locking your files. Some examples of common services that do just that:
Windows Search
Google Desktop Search
Windows Backup
any other anti-virus or indexing software
We had the same problem on our server (2003). Certain DLLs were being locked, and putting App_Offline.htm in the website root did jack diddly for us.
Solution:
File permissions!
We were using a web service running under the Network Service account or the IIS_WPG account to deploy updates to the web site, so it needed write access to all the files. I already knew this and had set the permissions on the directory a while ago. But for some strange reason, the necessary permissions were not set on this one problem DLL. You should check the permissions not only on the directory, but on the problem file as well.
We gave Network Service and IIS_WPG users read/write access to the entire web root directory and that solved our file in use, file locked, timeout, and access denied issues.
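For what it's worth, you can grant those rights from code too (a sketch using System.Security.AccessControl; the path and account name are placeholders - use IIS_WPG or IIS_IUSRS as appropriate for your IIS version):

    using System.IO;
    using System.Security.AccessControl;

    class GrantWebRootAccess
    {
        static void Main()
        {
            var webRoot = new DirectoryInfo(@"C:\inetpub\wwwroot\MySite");
            DirectorySecurity security = webRoot.GetAccessControl();

            // Grant Modify recursively to the worker process account. Note that a file
            // with inheritance disabled (like our one problem DLL) still needs its own rule.
            security.AddAccessRule(new FileSystemAccessRule(
                "NETWORK SERVICE",
                FileSystemRights.Modify,
                InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
                PropagationFlags.None,
                AccessControlType.Allow));

            webRoot.SetAccessControl(security);
        }
    }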