In development, whenever I needed to make a change I would modify the web.config file and restart the app in IIS/Visual Studio. This approach used to work in production too. Today, when I changed the web.config file, I got the error message
Server Error in '/' Application.
Configuration Error
Description: An error occurred during the processing of a
configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.
Parser Error Message: The configuration file has been changed by another program.
I think this was caused by the application still running. I know the purpose of an app_offline.htm file: placed in the root of the application, it stops IIS from serving new requests. The reason I cannot use it is that even when I put an app_offline.htm file in place, the change does not take effect instantly, and this creates chaos. Sometimes when I try from a different network the changes are reflected instantly. Could this be a cache problem?
It usually takes .5 to 1.5 hours for the changes to be reflected for clients, though sometimes they are applied instantly. My hosting provider is not that great and I am on shared hosting. They said they have 11 load-balancing servers and the changes need to get updated on all of them, which can take some time.
To fix the above-mentioned error, I had to personally call the hosting provider and ask them to restart the application pool.
1) Does anyone know what can be done so that I don't have to call up the hosting provider every time?
2) Is the load-balancer explanation true? I am not sure if it is, or if they are making it up.
PS I do not have Remote IIS access.
Cheers,
Varun
Related
I made an ASP.NET Web API project in VS2013 with one controller. The setup uses Entity Framework Code First.
When I run it locally, it works fine. But when I publish it to an Azure Web App, calls to the controller either hang for a long time or return the error "An error has occurred".
I removed the code that works with the database and it works again.
The plan has 1GB of storage and I want to use it rather than paying for a separate database service.
How can I debug it to see what happens on the server with the database?
Thanks.
You can set your connection string in the Azure control panel; it will be used instead of the one in your web.config.
You can also turn on error reporting in your web.config file to see if you are getting error 26 - Database not found (connection timeout).
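For reference, the portal override only kicks in when the name matches the connection string your code asks for. A sketch of the web.config side, with an assumed name ("DefaultConnection" here is illustrative; use whatever your DbContext is configured with):

```xml
<!-- web.config: the name must match the connection string name set in the Azure control panel -->
<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=MyApi;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```

At runtime on Azure, a connection string with the same name configured in the portal takes precedence over this local one.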
We have a C# web application, and the latest deploy doesn't work on our Windows Small Business Server 2008 (IIS7). The exact copy of that site runs fine on my Windows 7 machine (IIS7.5). The previous version and other builds still work on the Server 2008 R2 machine, but this iteration doesn't.
I've checked the W3SVC logs, but no requests are logged. I've checked the event log for errors, but no errors are logged. I also checked in Fiddler, but as far as I can tell the request just doesn't get a response (the Result column remains -).
When you open the URL, the browser just keeps loading (no timeout).
Is there anything else I can check or enable to debug this IIS7 behaviour?
Thanks in advance,
Nick.
UPDATE
I published the application again and created a new site in IIS, and this new version works. While the immediate problem is solved for now, I would still like to know how to debug IIS7, see how it works, and why it would keep loading indefinitely.
First, I would drop a regular .html file into the site's directory, then have a browser request that specific static file. This bypasses the .NET engine, so the request should be logged.
If for some reason it doesn't work and/or isn't logged, then there are other things to check; let us know.
Assuming that it does serve the file and you are pointing at the correct machine, inspect your global.asax file and remove any error handling you might have. Also turn off the custom errors section of your web.config. Both of these, if improperly coded, can result in the server essentially spinning off into nothingness. If you spin up any additional threads on access, see if you can turn those off or add additional logging.
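Turning off custom errors, as suggested above, is a one-line change in web.config (temporary only; detailed errors should not stay enabled in production):

```xml
<!-- web.config: show detailed errors while debugging; re-enable custom errors afterwards -->
<system.web>
  <customErrors mode="Off" />
</system.web>
```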
Next, look in the HTTPERR logs to see if you can identify what's going on. These are located at
%SystemRoot%\system32\LogFiles\HTTPERR\httperr*.log
Info about this log file is at: http://support.microsoft.com/default.aspx?scid=kb;en-us;820729
If your app uses ADO, then there is a chance that, depending on whether the build occurred on Windows 7 and whether SP1 was installed at build time, your build is broken by a Microsoft ADO update contained in SP1 (see http://www.codeproject.com/Articles/225491/Your-ADO-is-broken.aspx).
If no requests are logged in the W3SVC logs, then it probably means that IIS is not receiving the request at all, likely due to firewall configuration or something similar.
You should diagnose why IIS is unavailable (for example by attempting to serve some static content) and then try again.
Try these:
re-register the ASP.NET runtime with your IIS7
make sure the ASP.NET extension for the correct version is set to Allowed in 'ISAPI and CGI Restrictions' in IIS
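Re-registering the runtime is done with aspnet_regiis.exe from an elevated command prompt. A sketch, assuming 64-bit .NET 4.0 (adjust the framework folder to the version your app targets):

```shell
REM Re-register ASP.NET with IIS (run as administrator)
%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i

REM List the ASP.NET versions currently registered with IIS
%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -lv
```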
I have a working WCF service and worker role that I have been debugging locally on the Azure Development Fabric. All is well, but now that I'm trying to deploy it to the cloud in a staging environment, I'm seeing some weird issues.
My worker role, which is infinitely more complex than the service, works fine. It goes from Initializing -> Busy -> Ready.
My service role, however, goes from Initializing -> Busy and then the status never changes again.
I have read a few articles about Initialize -> Busy -> Stopping loops, but this is not the behavior I'm seeing. In fact, when I try to use IntelliTrace, I can't access any logs for the service because it never enters the Unresponsive status. I am able to access logs for the successfully-loaded worker role.
How am I supposed to resolve this issue if I can't see any logs or attach a debugger to figure out what's going on? Again, this service works absolutely fine on my local environment.
And before anyone suggests it, I have already done the following:
Check the DiagnosticsConnectionString and make sure it is connected to my storage account
Enable IntelliTrace on the deployment.
Check all referenced assemblies to make sure non-.NET assemblies are "copy to local = true"
It sure would be great if Azure exposed some kind of console so that I could see what's going on.
Later this year, you'll be able to use Remote Desktop to connect in and see what's going on.
For now, you can contact support, and they should be able to help.
Typically, "Busy" is the state you're in while you're still executing code in OnStart(). Is there any chance your OnStart() implementation isn't returning? (Or perhaps some constructor?)
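To illustrate that failure mode: an OnStart() that never returns keeps the role stuck in "Busy". Long-running work belongs in Run() instead. A minimal sketch (the class name is made up; this assumes the Azure SDK's RoleEntryPoint lifecycle):

```csharp
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class ServiceRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Do only quick, bounded setup here and return.
        // Anything that loops or blocks indefinitely in OnStart()
        // leaves the role showing "Busy" forever.
        return base.OnStart();
    }

    public override void Run()
    {
        // Long-running work goes here; this method is expected never to return.
        while (true)
        {
            Thread.Sleep(10000);
        }
    }
}
```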
In my experience, when Azure starts playing black box in production, it is caused by configuration problems. One possible reason: a config section is recognized on your local machine but is not available on the Windows Azure Guest OS.
In this case your role will not even have a chance to say something to IntelliTrace or any logger (it's not loaded).
For example, if you have a uri config section in your web.config, it might work locally but cause Azure to freeze the web deployment in production. The fix (in this case):
add the following line to configuration/configSections:
<section name="uri" type="System.Configuration.UriSection, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
Your case might be different. Just look for any uncommon config sections, or cases where the schema might be unknown to the Azure Guest OS you are running against.
I had the same problem and ended up starting with a blank project, getting it to run in the cloud, then adding a few lines and redeploying.
Eventually (after 10 deployments... do you know how long that takes???) I got it working, and the cause was in the config file. I didn't trace it down to one line, though.
Contacting support would be the right thing to do, as smarx suggests.
We've got a process currently which causes ASP.NET websites to be redeployed. The code is itself an ASP.NET application. The current method, which has worked for quite a while, is simply to loop over all the files in one folder and copy them over the top of the files in the webroot.
The problem that's arisen is that occasionally files end up being in use and hence can't be copied over. This has in the past been intermittent to the point it didn't matter but on some of our higher traffic sites it happens the majority of the time now.
I'm wondering if anyone has a workaround or alternative approach to this that I haven't thought of. Currently my ideas are:
Simply retry each file until it works. That's going to cause errors for a short time though which isn't really that good.
Deploy to a new folder and update IIS's webroot to the new folder. I'm not sure how to do this short of running the application as an administrator and running batch files, which is very untidy.
Does anyone know what the best way to do this is, or if it's possible to do #2 without running the publishing application as a user who has admin access (Willing to grant it special privileges, but I'd prefer to stop short of administrator)?
Edit
Clarification of infrastructure: we have 2 IIS 7 web servers in an NLB, running their webroots off a shared NAS (to be clearer, they use the exact same webroot on the NAS). We do a lot of deploys, to the point where any approach we can't automate really won't be viable.
What you need to do is temporarily stop IIS from processing any incoming requests for that app, so you can copy the new files and then start it again. This will lead to a short downtime for your clients, but unless your website is mission critical, that shouldn't be too big of a problem.
ASP.NET has a feature that targets exactly this scenario. Basically, it boils down to temporarily creating a file named App_Offline.htm in the root of your webapp. Once the file is there, IIS will take down the worker process for your app and unload any files in use. Once you have copied over your files, you can delete the App_Offline.htm file and IIS will happily start churning again.
Note that while that file is there, IIS will serve its content as a response to any requests to your webapp. So be careful what you put in the file. :-)
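The sequence above could be scripted roughly like this (a sketch; paths are placeholders, the copy handles top-level files only for brevity, and the delay is a precaution because IIS may take a moment to notice the file and release its locks):

```csharp
using System.IO;
using System.Threading;

class Deployer
{
    static void Deploy(string sourceDir, string webRoot)
    {
        string offline = Path.Combine(webRoot, "App_Offline.htm");

        // 1. Take the app offline; IIS serves this file's content for every request.
        File.WriteAllText(offline, "<html><body>Down for maintenance.</body></html>");

        // 2. Give IIS a moment to unload the worker process and release file locks.
        Thread.Sleep(5000);

        // 3. Copy the new files over the old ones (overwrite = true).
        foreach (string file in Directory.GetFiles(sourceDir))
            File.Copy(file, Path.Combine(webRoot, Path.GetFileName(file)), true);

        // 4. Bring the app back online.
        File.Delete(offline);
    }
}
```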
Another solution is IIS programmatic administration.
You can copy your new/updated web content to an alternative directory, then switch the IIS root of your webapp to that alternative directory. Then it doesn't matter if files are locked in the original root. This is a good solution for website availability.
However, it requires some permission tuning...
You can do it via ADSI or WMI for IIS 6, or Microsoft.Web.Administration for IIS 7.
About your #2: note that WMI doesn't require administrator privileges the way ADSI does. You can configure rights per object. Check your WMI console (mmc).
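On IIS 7, the root switch with Microsoft.Web.Administration might look roughly like this (a sketch: the site name and path are placeholders, and the account running it needs rights to the IIS configuration):

```csharp
using Microsoft.Web.Administration;

class RootSwitcher
{
    static void SwitchWebRoot(string siteName, string newPhysicalPath)
    {
        using (ServerManager manager = new ServerManager())
        {
            // Point the root application's virtual directory at the new folder.
            Site site = manager.Sites[siteName];
            site.Applications["/"].VirtualDirectories["/"].PhysicalPath = newPhysicalPath;

            // Persist the change to applicationHost.config.
            manager.CommitChanges();
        }
    }
}
```

Deploying to a fresh folder each time and flipping the path this way means the old folder's locked files never block the copy.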
Since you're already load balancing between 2 web servers, you can:
In the load balancer, take web server A offline, so only web server B is in use.
Deploy the updated site to web server A.
(As a bonus, you can do an extra test pass on web server A before it goes into production.)
In the load balancer, take B offline and put A online, so only web server A is in use.
Deploy the updated site to web server B.
(As a bonus, you can do an extra test pass on web server B before it goes into production.)
In the load balancer, put B back online. Now both web servers are upgraded and back in use in production.
You could also try modifying the timestamp of web.config in the root folder before attempting to copy the files. This will unload the application and free the files in use.
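Touching the timestamp from code is a one-liner with System.IO (the webRoot path is a placeholder):

```csharp
using System;
using System.IO;

class AppUnloader
{
    static void TouchWebConfig(string webRoot)
    {
        // Updating web.config's timestamp makes ASP.NET recycle the app domain,
        // which releases the assemblies it had loaded from bin.
        string config = Path.Combine(webRoot, "web.config");
        File.SetLastWriteTimeUtc(config, DateTime.UtcNow);
    }
}
```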
Unless you're manually opening a handle to a file on your web server, IIS won't keep locks on your files.
Try shutting down other services that might be locking your files. Some examples of common services that do just that:
Windows Search
Google Desktop Search
Windows Backup
any other anti-virus or indexing software
We had the same server (2003) and the same problem. Certain DLLs were being locked, and putting App_Offline.htm in the website root did jack diddly for us.
Solution:
File permissions!
We were using a web service running under the Network Service account (or the IIS_WPG account) to deploy updates to the web site, so it needed write access to all the files. I already knew this and had set the permissions on the directory a while ago. But for some strange reason, the necessary permissions were not set on this one problem DLL. You should check the permissions not only on the directory, but on the problem file as well.
We gave Network Service and IIS_WPG users read/write access to the entire web root directory and that solved our file in use, file locked, timeout, and access denied issues.
One of the projects I am working on includes a website that is hosted on a cheap shared hosting server. Whenever I upload updated files to the server, they don't necessarily become available immediately. It can take 15 to 30 minutes before the server actually starts using the new files instead of the old ones, and in some cases I even need to re-re-upload the updated files.
Some more info:
- C# webforms files (.aspx and .aspx.cs)
- If there was no previous file with that name on the server, then the file always becomes available immediately
- But if I first delete the older file and refresh the page, I immediately get a "file not found" error; if I then upload the newer file, the "file not found" error stops immediately, but I get the older file back again
I understand that the server isn't actually serving the .aspx page but rather the compiled DLL version it has made (right?), so maybe this is a compilation problem on the server somehow?
I'm not sure if this would be better on serverfault.com, but as a programmer, SO is where I usually come.
Any idea why this is happening, and preferably a solution for how to fix this behavior so that when I upload an updated page we can start using it immediately?
Thank you.
Usually, touching your web.config file will recycle the application, which should flush any caches. Just upload a new web.config with a trivial change and see if that helps.
If you are using .NET 2.0 web sites, you can run into problems with the .dlls in the bin folder. Converting to a Web Application project should solve your problem permanently.
http://webproject.scottgu.com/
I have seen this behavior on one of my sites as well.
In my case the issues began just after the hosting provider had moved my site to their new SAN-solution.
It turned out that this new storage solution did not support "file system watchers". Without them, IIS never receives a notification that a file has been updated.
The workaround they introduced was to move the applications into new application pools at regular intervals. (This produces exactly the symptoms you are describing, with updates only being applied at intervals.)
The only solution I found was to move my sites to a different hosting provider.