AppCache files not updated - C#

Circumstances
I'm building a webapp that will be used offline but will also be updated regularly when it's used online. I'm invalidating the manifest server side by adding a comment containing a timestamp, and then reloading the page automatically via JS as soon as that change is detected. That worked perfectly fine until now.
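For reference, the detect-and-reload step described above usually looks roughly like the sketch below. `attachUpdateHandler` is a hypothetical helper name (not from the question); in a browser the first argument would be `window.applicationCache`:

```javascript
// Minimal sketch of appcache update handling, assuming the standard
// (now-deprecated) Application Cache API. When the browser has
// downloaded a new cache in the background, it fires 'updateready';
// swapping the cache and reloading picks up the new resources.
function attachUpdateHandler(appCache, reload) {
  appCache.addEventListener('updateready', function () {
    if (appCache.status === appCache.UPDATEREADY) {
      appCache.swapCache(); // switch to the freshly downloaded cache
      reload();             // reload so the new resources are actually used
    }
  });
}

// In a real page this would be:
//   attachUpdateHandler(window.applicationCache, () => location.reload());
```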
Problem
The above process still executes completely, but for some reason, every time the browser tries to fetch the new files, only old resources are loaded. So the update process is definitely firing and working (I can tell from Chrome's console), but it seems that the files requested during the process are retrieved from the browser cache (not the appcache).
This occurs even if I delete the browser cache beforehand. Also, I'm already using several anti-cache meta tags and have changed IIS's expiration headers for immediate invalidation.
Additional Info
When I delete the application cache manually, the problem is solved. But it reoccurs after some time (unfortunately, I have no idea what triggers this).

It seems you want the cache to be used when offline, but not when you're online? I don't think it does that magically...

Related

Rollback of a C# MVC application causes caching issues

When I roll back to a previous build, my clients seem to have issues where some files remain stuck in their browser caches. The sequence of events is:
Deploy with build "B" that has the same .html file last modified at 1/2/2016
Make a browser request for the .html file
Deploy an older build "A" that has the same .html file last modified at 1/1/2016
Make a browser request for the same .html file
At the end of this sequence, the client's browser will make a request with the header If-Modified-Since: 1/2/2016, which will get a 304 Not Modified response and will keep the wrong file version!
Is this correct or are we looking at a red herring to another issue?
This is correct behavior, and it makes sense that this is what you are seeing. You can modify IIS to expire everything right now; see the following link.
https://technet.microsoft.com/en-us/library/cc770661(v=ws.10).aspx
If this is hosted in Azure, you can go to Tools -> Console in the Azure portal and open a command prompt, then run touch <filename>. This will update the file's timestamp and invalidate the cache.
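The linked article describes the IIS UI; the equivalent web.config fragment would look roughly like this (a sketch, assuming IIS 7+ static content serving; `DisableCache` is the most aggressive setting and you may prefer a short `maxAge` instead):

```xml
<!-- Sketch: tell clients not to reuse cached static files, so a
     rollback is picked up immediately instead of answered with 304. -->
<configuration>
  <system.webServer>
    <staticContent>
      <clientCache cacheControlMode="DisableCache" />
    </staticContent>
  </system.webServer>
</configuration>
```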

Caching of "multiple sites" with Windows Phone 8.1

I have this weird problem: my website won't cache the main page!
Here is a little overview of what I am trying to do.
The first page that is loaded is
[DidTheUserLoggedInBefore?.html]
which checks whether the user has already logged in. Depending on that result, the user is redirected to
either [LOGIN.html] or [MAINPAGE.HTML].
Pretty simple!
But here comes the problem: when the user restarts the app in offline mode, the app should redirect immediately to the main page (assuming the previous login was a success).
But that doesn't happen at all.
Instead, [DidTheUserLoggedInBefore?.html] is loaded from the cache (which is correct) and starts loading the main page, which isn't in the cache, resulting in a white screen, a.k.a. my error.
So how do I get my App to cache the Mainpage?
I've tried setting CacheSize to 100, but that didn't change a thing :(
You can't check if the user has logged in with a .html file... You need some sort of server-side language to set a cookie... Anyway, this isn't very clear; is your "app" just a WebView?
I couldn't get the WebView to cache more than 2 (simple) webpages...
The WebView ignores the offline.manifest.php file too...
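For reference, an explicit cache manifest that lists MAINPAGE.HTML would look roughly like this (file names taken from the question; whether the WP 8.1 WebView honors the manifest at all is exactly what the comment above doubts):

```
CACHE MANIFEST
# v1 - bump this comment to force clients to re-download everything

CACHE:
DidTheUserLoggedInBefore.html
LOGIN.html
MAINPAGE.HTML

NETWORK:
*
```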

When are the parameters used in WebResource.axd reset?

When using WebResource.axd you will see two parameters being passed in the query string. Usually looks something like this:
WebResource.axd?d=9H3mkymBtDwEocEoKm-S4A2&t=634093400273197793
I have run into an issue where I need a permanent link to the resource in question. Recently, the link I was using stopped working. What would cause these IDs to change? Rebooting the server? Recompiling the code? Is there any way to make these IDs permanent?
Background -
As part of a site monitoring service we are subscribed to, we have "recorded" several sets of user actions for our website. For example, we recorded the process of logging into the site. The monitoring is now reporting that the user login process fails (it's actually working fine) because it cannot find the WebResource.axd with the IDs it recorded.
This page provides all the information on the makeup of the URL
http://support.microsoft.com/kb/910442
The "d" stands for the requested web resource.
Something worth noting is that you don't need the timestamp (t) parameter to call the resource. Try it on your own site: view the source, grab a WebResource.axd URL, and navigate to it with the t parameter removed.
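The tip above can be sketched as a small helper. `stripTimestamp` is a hypothetical name (not part of ASP.NET); it simply drops the `t` parameter from a WebResource.axd URL while leaving `d` intact:

```javascript
// Sketch: remove the timestamp (t) query parameter from a URL,
// keeping all other parameters (such as d) in their original order.
function stripTimestamp(url) {
  const [path, query = ''] = url.split('?');
  const kept = query
    .split('&')
    .filter(pair => pair !== '' && pair.split('=')[0] !== 't');
  return kept.length ? path + '?' + kept.join('&') : path;
}
```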

How to prevent form.target from causing all HTTP GET in history from doing another roundtrip to the server?

Alright, I'm using form.target to open content in a new window. However, when I do this and then hit the back button in my browser, I find that any history entry resulting from a GET does another round trip to the server. This is a problem because session variables may have changed in the interim, so the new GET no longer matches the old one.
I'm using C# and javascript for this web application, if it helps any.
This behavior occurs on IE8, but not on Firefox 10. Is there any way to prevent it in IE?
I solved this problem by adding a semi-random ID as a query string parameter on the URL.
I also added this to my OnInit event. For some reason explicitly setting the cache settings helps.
Response.AppendHeader("Cache-Control", "private, max-age=600");
Combined, this gave the page uniqueness and enforced browser caching, which prevented the browser from always retrieving the most recent version of the page when encountering a GET operation in the browsing history.
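The "semi-random ID" trick can be sketched like this. The parameter name `nocache` and the helper name are assumptions for illustration, not from the original answer:

```javascript
// Sketch: append a uniqueness parameter so each GET recorded in the
// browser history has a distinct URL and can be cached independently.
function addCacheBuster(url) {
  const sep = url.includes('?') ? '&' : '?';
  return url + sep + 'nocache=' + Date.now() + '-' + Math.floor(Math.random() * 1e6);
}
```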

How can I debug problems related to (lack of) postback

I have created a custom wizard control that dynamically loads user controls as you progress through it. The wizard is behaving as expected in all environments (PC/Mac) and browsers I have tested; however, a client is reporting that she is unable to complete the wizard. What I know about the issue:
It always fails on the same wizard step for this user (not the first step)
When the user clicks the 'next' button in the step, the controller reports that the request was not a postback (i.e. IsPostBack == false) and displays the first page of the wizard
The client is using a Mac and is accessing the site using the latest version of Safari
If the client switches to Firefox, or even just switches the user agent in Safari to something other than Safari the problem goes away.
So the problem is that when the client reaches a certain step in the wizard and clicks 'next', instead of re-loading that step to initiate the save event, the controller is merely displaying the first step of the wizard.
The step that fails contains many different form controls, including textboxes, dropdowns, checkboxes and a file upload control. We thought that it might have something to do with invalid characters getting pasted in from Word or something similar, but that seems strange seeing as the problem only appears in Safari.
No exceptions are thrown, and the Windows event log is not displaying any related errors/warnings.
What I am looking for is ways to diagnose this error. At the moment I've been unable to reproduce the behavior that the client is experiencing but after going on site and seeing it for myself I can verify that it is definitely a valid issue.
Update 26/10/2010:
We installed a proxy on the client's NIC in order to capture the requests and responses. The problem is that when the proxy is running, the client appears not to have the problem any more. Does this behavior make sense to anyone?
Update 27/10/2010:
After investigating the traffic on the client's machine, we noticed that the response headers included some entries related to a client-side proxy, and we confirmed that they are in fact running the Squid proxy in their office. To rule out that it had anything to do with the problem, we got them to turn it off and then try the wizard again. This time no problems were encountered! So the proxy seems to be interfering with the requests, causing .NET to somehow treat the POST request as a non-postback. The following lines were found in the response headers of a failed request. Can anyone comment on how Squid could cause the behavior we are experiencing and what we can do about it?
Via:1.0 squid-12 (squid/3.1.0.13), 1.0 ClientSiteProxy:3128 (squid/2.7.STABLE4)
X-Cache:MISS from squid-12, MISS from ClientSiteProxy
X-Cache-Lookup:MISS from ClientSiteProxy:3128
If I had to troubleshoot this, I would first take a Fiddler trace (www.fiddlertool.com) on the client and see what the requests are up to. I am not sure whether Fiddler works on a Mac, but any HTTP watcher or network monitor tool should be good. The reason I am not doubting the code is that it works very well in all the other browsers, so the code shouldn't be bad.
Maybe there is something in the code (like adding cookies, etc.) that is upsetting this specific client's browser.
HTH,
Rahul
For Mac there's HTTPScoop, which lets you debug HTTP POST data; it is similar to Fiddler.
The problem is not solved as such, but we ended up just adding an exception to the client's Squid proxy to bypass our website. The problem seems proxy/IIS/Safari related, but we haven't been able to track it down any further, and the client is happy with this solution as long as the problem doesn't resurface somewhere else. I'll re-post if more information surfaces.
