Rollback of a C# MVC application causes caching issues

When I perform a rollback to a previous build, my clients seem to have issues where some files remain stuck in their browser caches. The sequence of events is:
1. Deploy build "B", which contains an .html file last modified 1/2/2016
2. A browser requests the .html file
3. Deploy an older build "A", which contains the same .html file last modified 1/1/2016
4. The browser requests the same .html file
At the end of this sequence, the client's browser sends a request with the header If-Modified-Since: 1/2/2016, gets a 304 Not Modified response, and keeps the wrong version of the file!
Is this correct, or is this a red herring pointing to another issue?
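To make the mechanics concrete, here is a small sketch that reproduces the conditional request from C# (the URL is a placeholder; this only demonstrates the protocol behavior described above):
// Sends a conditional GET with If-Modified-Since set to build "B"'s timestamp.
// If the server's copy is not newer (as after the rollback to build "A"),
// it replies 304 Not Modified and the stale cached copy keeps being used.
var request = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(
    "http://www.example.com/page.html");
request.IfModifiedSince = new DateTime(2016, 1, 2);
try
{
    using (var response = (System.Net.HttpWebResponse)request.GetResponse())
        Console.WriteLine(response.StatusCode); // 200 OK: fresh content returned
}
catch (System.Net.WebException ex)
    when ((ex.Response as System.Net.HttpWebResponse)?.StatusCode
          == System.Net.HttpStatusCode.NotModified)
{
    Console.WriteLine("304 Not Modified: browser would keep the stale file");
}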

This is correct behavior; it makes sense that that's what you are seeing. You can configure IIS to expire content immediately. See the following link.
https://technet.microsoft.com/en-us/library/cc770661(v=ws.10).aspx
If this is hosted in Azure, you can go to Tools -> Console in the Azure portal to open a command prompt, then run touch <filename>. This updates the file's timestamp and invalidates the cached copy.
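If you'd rather not touch files after every deploy, another option (not from the original answer) is to version the static file URLs themselves. A minimal sketch, assuming an ASP.NET helper you would write yourself (the StaticUrl name is illustrative):
// Hypothetical cache-busting helper: append the file's last-write ticks as a
// query string. Any deploy, forward or rollback, changes the ticks and hence
// the URL, so the browser cannot reuse a stale cached copy keyed on the old URL.
public static class StaticUrl
{
    public static string Versioned(string virtualPath)
    {
        string physicalPath = System.Web.Hosting.HostingEnvironment.MapPath(virtualPath);
        long ticks = System.IO.File.GetLastWriteTimeUtc(physicalPath).Ticks;
        return System.Web.VirtualPathUtility.ToAbsolute(virtualPath) + "?v=" + ticks;
    }
}
// Usage in a view: <script src="<%= StaticUrl.Versioned("~/scripts/app.js") %>"></script>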

Related

Dynamic HTML in WPF WebBrowser - content set by BrowserBehavior.Html, linking to file:/// URL - does not behave

I am working on a system that has a WPF WebBrowser that is displaying dynamically generated HTML.
This contains links to files, using file:///servername/filename.ext addresses.
This worked in times gone by when it was first developed, but does not seem to behave now.
What I can see:
Right-clicking the generated page in the browser confirms it is an HTML file: it is served from about:blank and sits in the Internet Zone. Clicking a link does nothing.
What I have done:
I have added about:blank to the Trusted Zone, and have set the security for the Trusted Zone to Low. Clicking a link still does nothing.
Created an HTML file and hosted it on my local IIS, then browsed to it in IE. The file contains a link to a file:/// address. Nothing happens on click.
Added http://127.0.0.1 to the Trusted Zone. The above test still fails.
Changed the generated HTML to be a link to http://www.google.com. This works.
What I think is happening:
The WPF WebBrowser is IE underneath. Did IE have a security update that stopped access to file:/// paths?
What I cannot do due to technical restrictions with deployment:
Have the generated HTML and the files linked to served by a web server so everything is within an http(s) environment.
What I can do:
Update browser settings
Update our code
Update - additional information:
The HTML is being displayed in the WPF WebBrowser by binding to a string that contains the HTML (effectively <html><body>Look! Stuff!<br />Whatever</body></html>)
file:///foo/whatever.txt exists and I have access to it
That file is generated by a process on a server and the client is generating the link to the file. This is a historic design, I didn't come up with it, I'm just maintaining it. I can't do massive code overhauls.
I cannot install any additional services anywhere
All browsers have been updated to prevent interesting things happening from local HTML files; because you could do interesting things in the past, interesting exploits could be utilized too.
I had a recent issue where I created HTML in code and wanted to display it in CefSharp (much better than WebBrowser, by the way) with links to CSS and JavaScript files.
I fixed it by running a localhost server, using this code, which works really well: An HTTP file server (130 lines of code) in VB.Net
For testing your HTML outside of code, you could run this batch file to start your localhost server:
@ECHO OFF
ECHO "Launching localhost:8000"
REM http.server listens on port 8000 by default; START runs it in its own window so the script can continue
START py -3 -m http.server
ECHO "Loading HTML.."
START chrome http://localhost:8000
This batch file assumes you have Python 3+ installed. You can verify this at a Command Prompt with:
py -3 --version
I've solved this by cheating a little.
I got the ViewModel (VM) to write the HTML out to a file, and then pass the file name to the browser in the view. This means that I am displaying the created content from file:///foo.htm, and that is fine for links to file:///server/bar
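A minimal sketch of that workaround, assuming a WebBrowser control named webBrowser and an HTML string generatedHtml (both names are illustrative):
// Write the generated HTML to a temp file, then navigate to the file:/// URI
// instead of binding the HTML string directly. A page loaded from file:///
// can follow links to other file:/// addresses, unlike one served from
// about:blank in the Internet Zone.
string htmlPath = System.IO.Path.Combine(System.IO.Path.GetTempPath(), "generated.htm");
System.IO.File.WriteAllText(htmlPath, generatedHtml);
webBrowser.Navigate(new Uri(htmlPath)); // resolves to file:///C:/.../generated.htm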

Chrome cookie not up-to-date

In my WinForms project, I can get a cookie of a site opened in IE by the following method:
InternetGetCookie("mysite.com", "mycookie", "something", "something")
As a new requirement came in, the site must now be opened in Chrome. That means the method above doesn't work anymore.
After some research, I found a solution using SQLite to read the cookies file stored in Users\xx\AppData\Local\Google\Chrome\User Data\Default\Cookies, and it works as expected: I can fetch the cookie by giving the name and URL.
BUT PROBLEM: the cookies file is not up to date; it is only updated 1-2 minutes later. That means the cookies of the request shown in Chrome DevTools are not the same as in the cookies file.
Is there any way to fetch the cookie in Chrome from a C# WinForms project, similar to InternetGetCookie?
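For context, a hedged sketch of the SQLite approach described above (it still suffers from the flush delay; Chrome's schema and encryption vary by version, so this is illustrative only and assumes the System.Data.SQLite NuGet package):
using System;
using System.Data.SQLite;
using System.IO;
using System.Security.Cryptography;
using System.Text;

string cookiesDb = Environment.ExpandEnvironmentVariables(
    @"%LocalAppData%\Google\Chrome\User Data\Default\Cookies");
// Chrome keeps the file locked while running, so work on a copy
string tempDb = Path.GetTempFileName();
File.Copy(cookiesDb, tempDb, overwrite: true);

using (var conn = new SQLiteConnection("Data Source=" + tempDb + ";Read Only=True"))
using (var cmd = new SQLiteCommand(
    "SELECT encrypted_value FROM cookies WHERE host_key LIKE @host AND name = @name",
    conn))
{
    conn.Open();
    cmd.Parameters.AddWithValue("@host", "%mysite.com");
    cmd.Parameters.AddWithValue("@name", "mycookie");
    var encrypted = cmd.ExecuteScalar() as byte[];
    if (encrypted != null)
    {
        // older Chrome builds encrypt cookie values with DPAPI per user;
        // newer builds use an AES key from the Local State file instead
        byte[] plain = ProtectedData.Unprotect(
            encrypted, null, DataProtectionScope.CurrentUser);
        Console.WriteLine(Encoding.UTF8.GetString(plain));
    }
}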
Hmm, there doesn't seem to be a Chrome flag to flush the cookies file sooner, so there's probably no easy option... You could maybe:
Grab it from memory (may be possible if you can search for the value somehow)
Write a Chrome extension which dumps it immediately
Use a headless browser instance to visit the site and send the cookie back instead (see the sketch below)
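If a headless instance is acceptable, here is a hedged sketch of that last option using Selenium's ChromeDriver (assumes the Selenium.WebDriver NuGet package and a matching chromedriver on the PATH; the site and cookie names are the placeholders from the question):
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class CookieFetcher
{
    static void Main()
    {
        var options = new ChromeOptions();
        options.AddArgument("--headless"); // no visible browser window
        using (IWebDriver driver = new ChromeDriver(options))
        {
            driver.Navigate().GoToUrl("https://mysite.com");
            // read the live cookie from the running browser session,
            // bypassing the 1-2 minute delay of the on-disk cookies file
            Cookie cookie = driver.Manage().Cookies.GetCookieNamed("mycookie");
            Console.WriteLine(cookie != null ? cookie.Value : "cookie not found");
        }
    }
}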

Appcache files not updated

Circumstances
I'm building a webapp that will be used offline but will also be updated regularly when it's used online. I'm invalidating the manifest server-side by adding a comment containing a timestamp, and then reloading the page automatically via JS as soon as that change is detected. That worked perfectly fine until now.
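(For illustration, a minimal sketch of that server-side invalidation step, assuming an ASP.NET generic handler is mapped to the manifest URL; the handler and file names are placeholders, not the asker's actual code.)
// Hypothetical handler serving the appcache manifest with a changing trailing
// comment. Any byte-level change makes the browser treat the manifest as new
// and re-download the listed resources.
public class ManifestHandler : System.Web.IHttpHandler
{
    public void ProcessRequest(System.Web.HttpContext context)
    {
        context.Response.ContentType = "text/cache-manifest";
        // never let the manifest itself be held by the HTTP cache
        context.Response.Cache.SetCacheability(System.Web.HttpCacheability.NoCache);

        string path = context.Server.MapPath("~/app.manifest");
        context.Response.Write(System.IO.File.ReadAllText(path));
        // timestamp comment: changes on every deploy, invalidating the appcache
        context.Response.Write("\n# deployed: " +
            System.IO.File.GetLastWriteTimeUtc(path).Ticks);
    }

    public bool IsReusable { get { return true; } }
}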
Problem
The above process is still executed completely, but for some reason, every time the browser tries to fetch the new files, only old resources are loaded. So the update process is definitely firing and working (I can tell from Chrome's console), but it seems that the files requested during the process are retrieved from the browser cache (which is not the same as the appcache).
This occurs even if I delete the browser cache beforehand. Also, I'm already using several anti-cache meta tags and have changed IIS's expiration headers for immediate invalidation.
Additional Info
When I delete the application cache manually, the problem is solved. But it reoccurs after some time (unfortunately I have no idea what triggers it).
Seems you want the cache to be used when offline, but not when you're online? I don't think it does that magically...

FTP upload in .net - not getting correct file path in some browsers

I'm building an application which involves writing some fields to a database, along with uploading some files from the end user to an FTP site. The file upload works fine... in IE. In Firefox and Chrome, I get an error that it can't find the file (I'm running it on localhost at this point; I haven't moved it to a dev or production environment yet).
I have tried getting the file via:
Server.MapPath(FileUpload1.PostedFile.FileName)
... which points to the folder the application is residing in.
And also:
Path.GetFullPath(FileUpload1.PostedFile.FileName)
... which points to c://Programs (x86)/... ...
I can get a file to upload properly if I get it from either folder, but nothing from anywhere else.
Any ideas on how to make this point to the right place? Or, will it actually work properly once it resides in a server environment?
Thanks in advance!
FileUpload.PostedFile.FileName works differently in each browser: in Firefox and Chrome it won't include the full path, just the file name. What you get depends on your customer's browser.
FileUpload.PostedFile.FileName
This actually gives you the path of the uploaded file on the client's machine.
But in all newer browsers (the FF 3.6 series, Chrome, IE7+) this feature has been disabled for security reasons: a website should not need the path of a file stored on the client's system, because it reveals the directory structure and may expose other important things to the website owner.
So in your case, the above code returned only the file name.
You can check this link; it may help you: Fileupload control - fullpath issue
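Since newer browsers never supply the client path, the server-side pattern is to take only the file name and stream the posted file onward. A hedged sketch, assuming this runs in the page's code-behind where FileUpload1 exists (the FTP host and credentials are placeholders):
using System.IO;
using System.Net;

// Take only the file name from the client, then stream the posted file from
// the request straight to the FTP site.
string fileName = Path.GetFileName(FileUpload1.PostedFile.FileName);
var request = (FtpWebRequest)WebRequest.Create(
    "ftp://ftp.example.com/uploads/" + fileName);
request.Method = WebRequestMethods.Ftp.UploadFile;
request.Credentials = new NetworkCredential("ftpUser", "ftpPassword");

using (Stream source = FileUpload1.PostedFile.InputStream)
using (Stream target = request.GetRequestStream())
{
    source.CopyTo(target); // no client path needed, no temp file on the web server
}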

POSTing to a re-written URL on IIS 6 doesn't work

I am working on a site which is programmed in C# .net. It uses a CMS called ADX Studio (a decision which predates my time there) which provides a shonky form of URL Rewriting (as far as I can tell it works by assigning an aspx page as the default 404 handler in IIS).
I have a web form which lives at a rewritten URL. I edited it so that the HTML form's action points back to the rewritten URL:
// Request.RawUrl on these rewritten URLs looks like "/Default.aspx?404;http://host/pretty/path"
var u = new Uri(Request.RawUrl.Split(new char[1] { ';' }).Last()); // .Last() requires System.Linq
userAdminForm.Action = u.PathAndQuery;
(kind of ugly but works based on what Request.RawUrl is on these rewritten URLs).
The "pretty" URL is something like this:
http://www.site.com/admin/user/edit/
On my development box (Windows XP / IIS 5), when I initially tried POSTing back to URLs like this I got an HTTP 405 error. I worked around it by adding a script mapping so that Aspnet_isapi.dll handles all (*) requests, and everything works fine on my development machine.
I just pushed my changes to the live server (Windows Server 2003 R2 and IIS 6) and the post fails silently. The page refreshes but all of my logic (from within an IsPostBack path in the code) doesn't get hit. No errors are displayed, it just doesn't work.
If I remove my code setting the .Action of the form then the postback works but it is posting to the ugly URL corresponding to the physical location of the aspx file rather than my page.
Am I missing a simple way to make this work? I don't want to be switching URL rewriting method or anything as this is a large legacy site and is unfortunately pretty dependent on ADX Studio so I don't want to do anything that will break that.
The issue is that the page's <form> tag is referencing the "ugly" URL as the action. You can resolve that by completely removing the action attribute from the form. Browsers will, by default, post back to the same page, i.e. the "pretty" URL.
This article explains how to accomplish an "actionless" form (about two-thirds of the way down): http://msdn.microsoft.com/en-us/library/ms972974.aspx
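For reference, a minimal sketch of that idea, adapted from memory rather than copied from the article: subclass HtmlForm and omit the action attribute when rendering.
// Hypothetical "actionless" form control: renders the <form> tag without an
// action attribute, so the browser posts back to the URL in the address bar
// (the rewritten "pretty" URL) instead of the physical .aspx path.
public class ActionlessForm : System.Web.UI.HtmlControls.HtmlForm
{
    protected override void RenderAttributes(System.Web.UI.HtmlTextWriter writer)
    {
        writer.WriteAttribute("name", Name);
        writer.WriteAttribute("method", Method);
        writer.WriteAttribute("id", ClientID);
        Attributes.Render(writer); // everything except "action"
    }
}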
It seems like the problem is the same as it was on IIS 5. I can get it to work by doing the following in the IIS Manager:
Right click on the relevant website and select "Properties"
Choose the "Home Directory" tab
Click "Configuration" down in the "Application settings"
Click "Insert" next to the "Wildcard application maps"
Browse to the location of aspnet_isapi.dll (in my case: C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll )
Untick "Check that file exists"
Click "OK" back through the Russian doll of dialogs.
This is basically the same as the approach I linked to in the question for IIS 5. However, it's not optimal, because IIS runs every request through ASP.NET (even static files), which can only slow things down. I'd like to be able to specify that ASP.NET only needs invoking for HTTP POST requests, at least.
The weird thing is that IIS 5 gave an HTTP 405 error when POSTing to an extension without a registered ISAPI extension, but IIS 6 just fails silently. And the page is being run through ASP.NET (I can debug with a breakpoint in Page_Load), but IsPostBack (and IsCrossPagePostBack) don't get set correctly. Could it be related to the view state? Is there any alternative to the solution described above?
I've come to what I think is an optimal solution for this problem. It turns out that ADX Studio CMS does use the default 404 rule to do its URL rewriting, and this has a problem with HTTP POST: "when IIS initially executes a custom URL on a 404 error, it changes POST to GET, even if the client does a POST request" (thanks to Elite Brains' blog post about setting up IIS 6 and ASP.NET MVC).
Rather than creating my own HttpModule, I decided instead to use Ionic's ISAPI Rewrite Filter (IIRF) to rewrite my URLs. I then set the 404 error handler in IIS back to the default, and created this IIRF.ini file to rewrite all requests into the same format the 404 handler produced:
# don't rewrite requests for real directories or files
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
# rewrite everything else into the "404 handler" format the CMS expects
RewriteRule ^(.*)$ /Default.aspx?404;http://%{HTTP_HOST}$1 [U,L]
And everything seems to work great. The advantage over my previous answer is that the rewrite code is low-level and runs fast, and the !-f and !-d conditions mean that a request for a file that actually exists isn't rewritten, so static files don't have the overhead of running through .NET.

Categories

Resources