Disable browser caching of the page and JavaScript running in Azure - C#

Whenever we deploy an application and the client reviews it, sometimes the JavaScript doesn't work correctly (though not completely broken). But when the browser is refreshed, the page works as intended.
I suspect it has something to do with the cache. Is there a way to disable caching of pages? I'm using Azure with .NET 4.0.
Thank you in advance!

The only way I know of to reliably stop caching of files and links in most browsers is to append a random number or timestamp to the file URL, e.g.
http://www.domain.com/js/script.js?date=20120409120003
This means it is a new link each time the page is loaded, so the next time the browser goes to get the file it won't find it in its cache.
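In a Razor view (assuming ASP.NET MVC; the script path is illustrative), emitting such a tag could look roughly like this:

@* Cache-busting query string: the URL changes on every page load, so the browser never reuses a cached copy *@
<script src="/js/script.js?date=@(DateTime.UtcNow.ToString("yyyyMMddHHmmss"))"></script>

A timestamp defeats caching completely; tying the token to the assembly version or a file hash instead means the URL only changes when you deploy, so browsers can still cache the file between releases.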

Related

C# WPF and data exchange with the browser

I have a C# WPF application developed in VS 2015, and I want the browser to read some data from it, just a short string. I can save it in a text file or in a variable, but it needs to be visible to the browser (using JS, I suppose). For instance, using file:/// doesn't work if the original page is hosted online, as in my case (cross-origin conflict). This should work in Opera and Firefox, but looking at their extensions, it seems you can only develop with front-end technologies, which are not enough in my case since I use WPF to look into the Windows OS and then need to share the result with the browser.
I suspect it's possible, and no, it's not to write a malicious piece of code. For instance, I can read the details of the graphics card for diagnostic purposes.
Please help, many thanks.
Browsers run in a security sandbox which is intended to stop them reading or writing files on the file system.
You could write to the user's appdata. There are various JavaScript frameworks which persist data there so they can provide offline or static data.
I don't think that is a good plan though.
I suggest your first candidate should be a cookie.
A quick google on how to do that turns up:
How to create cookie in c#.net windows application?
From a web page you can use the content of a cookie dynamically, so you could change what you see in the web page after it's up and running from some process in your WPF app, and do a counter or whatever.
I've not used this with a Windows app and a browser, but I have with a web app and Silverlight. I'm afraid I don't have that code to hand, though.
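If it helps, a minimal sketch of the WPF side might look like this (the domain and cookie name are illustrative; Application.SetCookie writes to the WinINet cookie store shared by Internet Explorer and the WPF WebBrowser control, so Firefox and Opera, which keep their own stores, will not see it):

using System;
using System.Windows;

public static class BrowserBridge
{
    // Stores a short string where a web page served from the given site can read it as a cookie.
    public static void ShareWithBrowser(string value)
    {
        Application.SetCookie(
            new Uri("https://www.example.com/"),        // site whose pages will read the cookie
            "wpfData=" + Uri.EscapeDataString(value));
    }
}

The page would then read it from document.cookie once it is loaded in a browser that uses that cookie store.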

Kentico custom table data editing issue

A custom table's data hangs on the loading screen after saving any changes. This happens on some of the tables, and it seems that the majority of records are saved; however, I have noticed a couple that didn't save within one custom table until the change was reapplied!
I was wondering what could cause this issue.
I have found the issue using browser's developer tools.
Issue
Clicking the save button was producing this JavaScript error in the browser, and the browser was blocking the content:
Mixed Content: The page at 'https://address' was loaded over HTTPS, but requested an insecure form action 'http://address'. This request has been blocked; the content must be served over HTTPS.
However, the form action was not pointing to an absolute URL address.
Solution
As the server SSL configuration was fine, there was no other way than changing the core CustomTableForm.ascx.cs Kentico file, although that is not recommended. The problem was solved by adjusting the RedirectUrlAfterSave property of the customTableForm object to make sure it redirects with the correct protocol instead of the absolute URL.
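The change boils down to something like this (a simplified sketch with an illustrative URL; customTableForm is the control instance in CustomTableForm.ascx.cs, and the exact code in the original fix may differ):

// Replace the absolute redirect URL (which carries the http:// scheme from behind the
// SSL offloader) with a relative path, so the browser keeps the protocol the page was loaded with.
string absoluteUrl = customTableForm.RedirectUrlAfterSave;                 // e.g. "http://address/page?saved=1"
customTableForm.RedirectUrlAfterSave = new Uri(absoluteUrl).PathAndQuery;  // "/page?saved=1"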
Hope it will help you guys.
This was just brought to my attention, not sure how I missed it before. So, I will post my answer just for future reference :-)
I guess there is some SSL offloading going on before the actual IIS where Kentico is running. In this case, the SSL Accelerator must be implemented. The link goes to the Xperience 13 version, but the same idea applies to older versions; just use the version selector in the top bar, although there could be some API differences.
And the same applies e.g. when uploading media files: the browser console will show a mixed content warning. This is for security reasons. The browser sees HTTPS, but behind the offloader there is HTTP communication, and the GetAbsoluteURL method takes the protocol from the request. Thus, mixed content. Using the SSL accelerator tells Kentico to use HTTPS internally.
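If I remember correctly, in older versions the accelerator is switched on through a web.config application setting roughly like the one below (the key name is from memory, so please verify it against the documentation for your Kentico version):

<appSettings>
  <!-- Tells Kentico to generate HTTPS links even though requests behind the offloader arrive over HTTP. -->
  <add key="CMSUseSSLAccelerator" value="true" />
</appSettings>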

Abot Web Crawler Performance

I have built a robots.txt crawler which extracts the URLs out of robots.txt and then loads each page, with some post-processing once the page is done. This all happens quite fast, and I can extract information from 5 pages per second.
In the event a website doesn't have a robots.txt, I use the Abot web crawler instead. The problem is that Abot is far slower than the direct robots.txt crawler. It seems that when Abot hits a page with lots of links, it schedules each link very slowly, with some pages taking 20+ seconds to queue everything and run the post-processing mentioned above.
I use PoliteWebCrawler, which is configured not to crawl external pages. Should I instead be crawling multiple websites at once, or is there another, faster solution for Abot?
Thanks!
Added a patch to Abot to fix issues like this one. It should be available in NuGet version 1.5.1.42. See issue #134 for more details. Can you verify this fixed your issue?
Is it possible that the site you are crawling cannot handle lots of concurrent requests? A quick test would be to open a browser and start clicking around the site while Abot is crawling it. If the browser is noticeably slower, then the server is showing signs of load.
If that is the issue, you need to slow the crawl down through the configuration settings (a rough example is sketched below).
If not, can you give a URL of a site or page that is being crawled slowly? Abot's full configuration would also be helpful.
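If throttling does turn out to be the fix, the configuration would look roughly like this (a sketch against the Abot 1.x API as I remember it; property names and constructor overloads may differ slightly between versions, and the URL is illustrative):

using System;
using Abot.Crawler;
using Abot.Poco;

class CrawlThrottlingExample
{
    static void Main()
    {
        // Slow the crawl down so a weak server is not overwhelmed.
        var config = new CrawlConfiguration
        {
            MaxConcurrentThreads = 1,                       // one request at a time
            MinCrawlDelayPerDomainMilliSeconds = 1000,      // wait a second between requests to the same domain
            IsExternalPageCrawlingEnabled = false           // stay on the start site, as in the question
        };

        // Older 1.x builds may need the longer overload with nulls for the optional dependencies.
        var crawler = new PoliteWebCrawler(config);
        crawler.Crawl(new Uri("http://www.example.com/"));
    }
}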

C# .NET offline page?

So I have here a .NET C# web app that needs one page to be viewable offline, as a user could be off in the middle of 'whoop whoop' with no internet.
The order of events is:
User visits a form online
Store the webpage using HTML5 so they can visit it later offline
When online - the user then can submit the form to the database
I've been looking over the HTML5 appcache; however, it seems to only reference physical .html or .php pages rather than storing pages which have been generated from Razor .cshtml views,
e.g. domain.com/path/view.
I haven't been able to find any relevant documentation for my problem either.
So is it possible to cache a .NET web app offline?
Although I have not tried it, and assuming your app uses ASP.NET MVC, this might help you:
Build an HTML5 Offline Application with Application Cache, Web Storage and ASP.NET MVC
It uses HTML5 Offline Web Application API (or HTML Application Cache). Note the comment on browser support.
The linked article shows a sample application, but I could not see a link to downloadable source code. However, one commenter appears to have recreated the project.
The appcache is what you need. Note that you specify the pages to be cached, but the browser never sees whether the page is a static .html file or generated via Razor. As long as the path you specify opens the right page, it will be cached (see the sketch below).
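As an illustration of that point, the manifest itself can be served from a controller action, and the Razor-generated URL from the question goes in it just like a static file would (the controller and file names here are hypothetical):

using System.Web.Mvc;

public class OfflineController : Controller
{
    // Serves the appcache manifest. The browser only sees URLs, so "/path/view"
    // (a Razor-generated page) is cached the same way as a static script file.
    public ActionResult Manifest()
    {
        const string manifest =
            "CACHE MANIFEST\n" +
            "# v1 - change this comment to force clients to re-download the cached pages\n" +
            "/path/view\n" +
            "/Scripts/form.js\n" +
            "NETWORK:\n" +
            "*\n";

        return Content(manifest, "text/cache-manifest");
    }
}

The page that needs to work offline then references it with <html manifest="/Offline/Manifest">.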

Replicate steps in downloading file

I'm trying to automate the download of a file from a website. Normally, to download the file, I log in with a username and password, navigate to a particular screen, and then click a button.
I've been trying to watch the sequence of POSTs using Chrome's developer tools and then replicate all the steps using the .NET WebClient class, but without success. I've derived from the WebClient class and added cookie handling, which seems to be working. I go to the login page and post using WebClient.UploadValues. About half the time it seems to work. The next step appears to make another POST to a reporting URL. Once again I use WebClient.UploadValues, but the response from the server is a page showing an internal error.
I have a couple of questions.
1) Are there better tools than hand-coding C# to replicate a bunch of web browser interactions? I really only care about being able to download the file at a particular time each day onto a Windows box.
2) WebClient does not seem to be the best class to use for this; perhaps it's a bit too simplistic. I tried using HttpWebRequest, but it has no facilities for encoding POST requests. Any other recommendations?
3) Although Chrome's developer tools appear to show all interaction, I find them a bit cumbersome to use. I'd be interested in seeing all of the raw communication (unencrypted, though; the site is only accessed via HTTPS), so I can see if I'm really replicating all of the steps.
I can even post the exact code I'm using. The site I'm pulling data from is the Standard and Poor's website. They have the ability to create custom reports for downloading historical data, which I need for reporting, not republishing.
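(For reference, a cookie-aware WebClient along the lines described above usually looks something like the following; this is the common pattern, not necessarily the asker's exact code. The key point is that one CookieContainer is shared by every request made through the client, so the session cookie set by the login POST is sent automatically on the later report POST.)

using System;
using System.Net;

public class CookieAwareWebClient : WebClient
{
    private readonly CookieContainer cookies = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = base.GetWebRequest(address);
        var httpRequest = request as HttpWebRequest;
        if (httpRequest != null)
        {
            // Attach the shared container so cookies set by earlier responses are re-sent.
            httpRequest.CookieContainer = cookies;
        }
        return request;
    }
}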
Using IE to download the file would be much easier than writing C# / Perl / Java code to replicate the HTTP requests.
The reason is that even a slight change in the JavaScript code can break the flow.
With IE, you can automate it using COM. The following VBA example opens IE and performs a Google search:
Sub Search_Google()
    Dim IE As Object
    Set IE = CreateObject("InternetExplorer.Application")
    IE.Navigate "http://www.google.com" 'load web page google.com

    While IE.Busy
        DoEvents 'wait until IE is done loading page
    Wend

    IE.Document.all("q").Value = "what you want to put in text box"
    IE.Document.all("btnG").Click
    'clicks the button named "btnG", which is Google's "Google Search" button

    While IE.Busy
        DoEvents 'wait until IE is done loading page
    Wend
End Sub
Regarding (3), seeing all of the raw communication: you can use Fiddler to view all the interaction going on and the raw data going back and forth. To make it work with HTTPS, you will need to install Fiddler's certificate to enable decryption of the traffic.
