I am using an HttpHandler to modify some CSS (only simple colours) on the fly, based on a technique I read about on SO.
Everything works just fine except on the page where I give the user the option to specify the colours they want. Ideally, as soon as the user saves their new colours and the page refreshes, I want the new colours to be displayed. However, they only come through when I explicitly press the browser's reload button or F5.
I appreciate that something somewhere (IIS or the browser) is doing some helpful caching of my stylesheet, which 999 times in 1000 is exactly what I want; however, on this particular page I want to be able to force a reload and cause the HttpHandler to fire.
Anyone understand how this works and what I can do?
Things I have tried:
Response.Clear();
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.Expires = -1;
Response.Cache.SetExpires(DateTime.Now.AddDays(-1));
Because I am also using ASP.NET themes, adding a querystring to the stylesheet link isn't really a simple option.
Thoughts anyone?
This can be solved with a technique I use on my sites to force reloads of assets once they have changed, such as after a deploy.
Append ?value to the end of your CSS URL, where value is the version or some other unique value the browser hasn't seen yet. In my case I use the file modification time; in your case, since the CSS is dynamic on almost every page load, I suggest generating a fresh unique value.
Since the URL is always different, the browser will always re-request the stylesheet rather than serve it from its cache.
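If you can emit the stylesheet link yourself (bypassing the themes mechanism), a minimal sketch of the versioned URL could look like this; the helper name and virtual path are illustrative, not from the original post:

using System;
using System.IO;
using System.Web;

public static class CssVersioning
{
    // Appends the file's last-write time as a query value so a changed
    // stylesheet always gets a URL the browser has never cached.
    public static string GetVersionedCssUrl(HttpContext context, string virtualPath)
    {
        string physicalPath = context.Server.MapPath(virtualPath);
        string version = File.GetLastWriteTimeUtc(physicalPath).Ticks.ToString();
        return VirtualPathUtility.ToAbsolute(virtualPath) + "?v=" + version;
    }
}

For CSS that is rebuilt on nearly every request, you could substitute a timestamp or a GUID for the file time.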
I'm working with the WebBrowser control, trying to build my own browser.
Something I'm having trouble with is the history part.
When the document completes navigating, I search my database: if its URL doesn't exist I add it to the history, otherwise I just increase the "counter" for this page in the database.
The problem is that some pages give me a different URL each time even though it's the same page, such as google.com. The first time I navigate to it, it gives me (for example): https://www.google.co.il/?gws_rd=cr&ei=eBP-UtPCOMi84ASukoCAAw
and the second time I navigate:
https://www.google.co.il/?gws_rd=cr&ei=rhP-UpW6CYG54ATAqIHIDg
Is there a way to identify that both of these URLs lead to the same page?
I'm trying to do this because when I load the history into my application, many URLs that lead to the same page are loaded.
Any help is appreciated, thanks in advance.
You can use the Uri class and ask for its AbsolutePath property, which gives you the path without the query string.
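For example, a rough sketch of that comparison (the class and method names are just illustrative):

using System;

static class UrlComparer
{
    // Treats two URLs as "the same page" when host and path match,
    // ignoring the query string that Google varies between visits.
    public static bool SamePage(string first, string second)
    {
        var a = new Uri(first);
        var b = new Uri(second);
        return Uri.Compare(a, b,
            UriComponents.Host | UriComponents.Path,
            UriFormat.SafeUnescaped,
            StringComparison.OrdinalIgnoreCase) == 0;
    }
}

With the two Google URLs from the question, SamePage returns true because only the query string differs.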
I personally would expect my browser to keep its history by URL and not by content (which is what you are actually trying to do, as far as I understand). But if you want to avoid these multiple entries, you could calculate a hash of the content received from each page and increase your counter when the hash already exists, as sketched below.
The problem is that you cannot know what the server will do with that URL. It might serve the same page today and a different one tomorrow. I also wouldn't simply drop the parameters from the URL, because on other pages a parameter might make a really important difference.
Another note: if you hash the content, you might want to exclude things like 404 pages, which can occur under different URLs and shouldn't be grouped under the same hash.
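If you do go the hashing route, a minimal sketch might look like this (purely illustrative; it hashes whatever HTML your WebBrowser control handed you):

using System;
using System.Security.Cryptography;
using System.Text;

static class PageHasher
{
    // Identical content produces the same key regardless of which URL served it,
    // so you can increment the counter when the hash already exists in the database.
    public static string HashContent(string html)
    {
        using (var sha = SHA256.Create())
        {
            byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(html));
            return BitConverter.ToString(digest).Replace("-", string.Empty);
        }
    }
}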
I am working on an ASP.NET MVC 4 app that fetches data continuously, and my problem is related to caching.
The problem is that when I click on a particular link in my application it usually works fine, but sometimes it automatically redirects to the INDEX page, which is the default page.
I searched around and found that Firefox caches every link. Sometimes something weird happens: a particular link automatically redirects to the INDEX page (301 Moved Permanently) and that redirect is stored in the cache, so from then on every time I click that link it sends me to the cached INDEX page.
So now I have to clear my browser's cache every time I face this problem.
How can I stop it automatically redirecting to the cached INDEX page?
You should really expand on what exactly is happening at that particular link you mention, because it should not 301 redirect unless you're telling it to.
Also, you say "I fetch data continuously." What does this mean to us? Why is it important to know? Does it change the link or the data? Are you 404ing the older data or something? That could possibly explain why you 301 back to your index.
Now, with the limited information you have given us: if you want to prevent Firefox from caching your URLs/redirects, simply give your URL a querystring that updates with each request, such as a timestamp.
For example: http://example.com/return-data.asp?timestamp=1350668920
Then, each time you continuously fetch data, update the page's link accordingly
For example: http://example.com/return-data.asp?timestamp=1350669084
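If you build that URL server side in C#, a rough sketch could be the following (the Unix-epoch arithmetic just reproduces the kind of timestamps shown above):

using System;

static class CacheBuster
{
    // Appends a per-request timestamp so Firefox never reuses a cached redirect.
    public static string WithTimestamp(string url)
    {
        long unixSeconds = (long)(DateTime.UtcNow
            - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalSeconds;
        return url + (url.Contains("?") ? "&" : "?") + "timestamp=" + unixSeconds;
    }
}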
I'm trying to scrape a page. Everything is OK, but when values are updated, the source code of the page stays the same for a minute. Even when I refresh the page on a slow internet connection, I first see the old data, and only after the page has fully loaded are the values current.
I guess JavaScript updates them, but it still has to download them somehow.
How can I get the current values?
I'm writing my program in C#, but if you have ideas/advice/examples, the language doesn't really matter.
Thank you.
You're right - JavaScript is probably updating the data after the page loads.
I could think of three ways to handle this:
Use a WebBrowser control - I guess you're using the HttpWebRequest object to retrieve values from the site. That won't work if you need to let the JavaScript run. You can use the WebBrowser control, let the JavaScript run, and then retrieve values from the DOM. The only thing I don't like about this approach is that it feels like a hack and is probably too clunky for production applications. You also need to know when to read the contents of the DOM (an update might be in progress in the background). Google "C# WebBrowser Control Read DOM Programmatically" or you can read more about that here.
Call the background service directly - I personally prefer this over the previous approach, but it doesn't work all the time. First you need to inspect the website with Firebug or something similar and see which URLs are called in the background. Say, for example, the site is updating stock quotes using JavaScript. Most likely it's using an asynchronous request to retrieve the updated information from a web service. Using Firebug, you can view this under Net > XHR. Now for the hard part: take a look at the request and the values returned. The idea is that you retrieve the values yourself and parse the contents, which can be a lot easier than scraping a page (see the sketch after this list). The problem is that you would need to do a bit of reverse engineering to get it right, and you might also run into problems with authentication and/or encryption.
Lastly, and my most preferred solution, is asking the owner of the site you are scraping directly.
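To make the second option concrete, here is a rough sketch of calling such a background endpoint yourself; the URL and headers are invented for illustration and would come from whatever Firebug shows for the real site:

using System.IO;
using System.Net;

static class BackgroundFetcher
{
    // Requests the JSON (or XML) the page's JavaScript would normally fetch,
    // which is usually far easier to parse than the rendered HTML.
    public static string FetchLatest()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/quotes/latest");
        request.Method = "GET";
        request.Accept = "application/json";
        // Some services use this header to distinguish AJAX calls.
        request.Headers["X-Requested-With"] = "XMLHttpRequest";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}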
I think the WebBrowser control approach is probably OK and doesn't depend on third-party libraries. Here is what I intend to use; it solves the problem of waiting for the page to finish loading:
private string ReadPage(string link)
{
    // Navigate and pump messages until the WebBrowser control reports the page is complete.
    this.wbrwPages.Navigate(link);
    while (this.wbrwPages.ReadyState != WebBrowserReadyState.Complete)
    {
        Application.DoEvents();
    }

    // Return the HTML of the loaded document.
    return this.wbrwPages.DocumentText;
}
I will get information out of the HTML through some form of DOM or XPath treatment. I am curious if others will have comments about entering the 'while' loop and depending upon the 'complete' state to get me out of it. I may put a timer of some sort in there as well - just to be safe.
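For what it's worth, one way to add that timer is a simple deadline around the same loop; the 30-second limit here is arbitrary:

private string ReadPage(string link)
{
    this.wbrwPages.Navigate(link);
    DateTime deadline = DateTime.UtcNow.AddSeconds(30);
    while (this.wbrwPages.ReadyState != WebBrowserReadyState.Complete)
    {
        if (DateTime.UtcNow > deadline)
        {
            // Give up rather than spin forever if the page never completes.
            throw new TimeoutException("Page did not finish loading in time.");
        }
        Application.DoEvents();
    }
    return this.wbrwPages.DocumentText;
}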
I sometimes find that I need to press Ctrl+Refresh (or sometimes just Refresh) in order for pages to be updated.
I thought this might have been a problem with the AJAX UpdatePanel and related controls, but it also happens on pages where there is no AJAX partial rendering.
I have also removed if (!IsPostBack), and yet I still need to refresh the page for the contents to be updated.
Is it to do with the cache?
Does anyone know of a fix for this?
I believe it only happens with IE 7 (which I am using). I tried the same feature with Chrome, and it worked as it is supposed to.
EDIT: Unfortunately, it is not as easy as setting the cache header to 0, or telling IE to always retrieve the latest version of the page on load. I have done both and the same problem happens.
For instance, on one part of my site you can change your profile picture. If I choose to remove the profile picture (which should then revert to the default picture), it only deletes the picture but doesn't display the default one. The page loads again but still references the picture I deleted (so I get an X for the picture). I have to go to a different page and then back to the profile page to see the default picture. Ctrl+Refresh also works.
Note that this particular problem happens under all browsers (Chrome included).
If it helps, I am using Content pages which are in a master page.
Changing your browser cache settings will fix the problem locally, but to fix it for a general case, add the header "Expires: 0" to your outbound page, which will prevent browsers from caching it at all.
To do this in C#, add this code to the page load event:
Response.AddHeader("Expires", "0");
Ctrl+Refresh forces IE to reload the page from the server instead of using the locally cached version. First, check your browser's settings (Settings > General > Browsing history): "Check for newer versions of stored pages" should be set to "Automatically". Then check whether you're adding any "Expires" header to your pages.
You could also consider setting the caching policy on the response object, or setting the entity tag to something different every time:
http://msdn.microsoft.com/en-us/library/system.web.httpcachepolicy.aspx
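For example, a minimal sketch combining both suggestions in Page_Load (the ETag value here is just a fresh GUID on each request):

Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.Cache.SetNoStore();
Response.Cache.SetETag(Guid.NewGuid().ToString("N"));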
This may be some "best practices" thing I've overlooked or don't know about, so go easy on me please.
I have an ASP.NET website that populates a GridView with columns from my database table. One of those columns gets processed into a link to a Word document on another server. The issue is that if a user clicks the link to view the Word document, and that document is later updated on the remote server, the user cannot see the changed document until their browser cache is cleared and the browser is forced to go out to the network for a fresh copy when the link is clicked.
Basically, I want to force the browser never to use the cached copy of the document and always go out to the network for the newest copy.
Bonus question: Would this be better handled somehow by storing the documents in SharePoint?
UPDATE: using Response.Cache.SetCacheability(HttpCacheability.NoCache); in my code-behind, I have now resolved the issue in Firefox, but IE8 is weird. If I update the document and then left-click the link, it brings up the Word doc in the IE window without the changes. However, if I make changes, save them, and then middle-click the link so it opens in a new tab, the document reflects the changes. I'm mostly there...
Try adding a little extra data to the link. Here's an example using JS; if you're building the URL server side, it should be essentially the same (a C# sketch follows below):
var url = "http://www.mydomain.com/mywordfile.doc?ts=" + (new Date()).getTime();
That'll give the URL a different query string each time, which (in theory) should force the browser to re-request and re-download the document.
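The server-side C# equivalent would be something along these lines (same placeholder file name as in the JS example above):

// Ticks change on every request, so each generated link looks new to the browser.
string url = "http://www.mydomain.com/mywordfile.doc?ts=" + DateTime.UtcNow.Ticks;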
By chance are you seeing this with IE8 specifically? We've seen it show this behavior where caching was previously not an issue.
Typically it can be cleared up with a couple of steps: explicitly telling the browser not to cache via HTTP headers, and also expiring the page immediately. Google the "pragma no-cache" header; there are typically a couple of different lines you need to add to cover all browsers.
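For example, one common combination, set from the code-behind of the page that renders the link (whether it fully covers IE8's quirks is worth testing):

Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.Cache.SetNoStore();
Response.AppendHeader("Pragma", "no-cache");
Response.AppendHeader("Expires", "0");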