My scenario is this: the user selects the list of reports they wish to print and, once they click a button, I open another page with the selected reports ready for printing. I am using a session variable to pass the reports from one page to the other.
The first time you try it, it works fine; the second time, it opens the report window with the previously selected reports. I have to refresh the page to make sure it loads the latest selections.
Is there a way to get the latest value from the session every time you use it? Or is there a better way to solve this problem? I'm open to suggestions.
Thanks
C#, ASP.NET, IE7 / IE8
After doing some more checking, it might help if you look at COMET.
The idea is that you can have code in your second page which keeps checking the server for updated values every few seconds and, if it finds updated values, refreshes itself.
There are two very good links explaining the implementation:
Scalable COMET Combined with ASP.NET
Scalable COMET Combined with ASP.NET - Part 2
The first link explains what COMET is and how it ties in with ASP.NET; the second has an example using a chat room. However, I'm sure the code querying for updates will be pretty generic and can be applied to your scenario.
I have never implemented COMET myself, so I'm not sure how complex it is or how easily it would fit into your solution.
Maybe someone developing the SO application can resolve this issue for you. SO uses a similar real-time feature for notifications on a page, e.g. you are in the middle of writing an answer and a message pops up in your client letting you know someone else has added an answer, with a link to click to refresh.
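If full COMET turns out to be overkill, even a simple polling endpoint would cover the "check the server every few seconds" idea. A minimal server-side sketch (the method name and session key are only assumptions; the second page would call it with AJAX on a timer):

// Page method the report page can poll; it always reads the current session value.
[System.Web.Services.WebMethod(EnableSession = true)]
public static string[] GetSelectedReports()
{
    var reports = System.Web.HttpContext.Current.Session["SelectedReports"] as string[];
    return reports ?? new string[0];
}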
The proper fix is to set the caching directives on the HTTP response correctly, so that the cached response is not reused without validation from the server.
When you fail to specify the cache lifetime, the client has to "guess" how long the response is good for, and the browser's guess probably isn't what you want. See http://blogs.msdn.com/b/ie/archive/2010/07/14/caching-improvements-in-internet-explorer-9.aspx
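In ASP.NET Web Forms, a minimal sketch of those directives for the report page (set wherever the response is produced, e.g. Page_Load) might be:

protected void Page_Load(object sender, EventArgs e)
{
    // Tell the browser and any proxies not to reuse a cached copy of this
    // response without revalidating, so the latest selections are always shown.
    Response.Cache.SetCacheability(HttpCacheability.NoCache);
    Response.Cache.SetNoStore();
    Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));
}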
It's better to use URL parameters; that way the selected values are visible in the URL itself rather than hidden in session state.
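For the report scenario above, a minimal sketch of passing the selection through the URL instead of the session (the parameter name, page name and selectedReportIds variable are just illustrative):

// Page one: build the print-page URL from the user's current selection.
string ids = string.Join(",", selectedReportIds);
Response.Redirect("PrintReports.aspx?reportIds=" + HttpUtility.UrlEncode(ids));

// Page two (PrintReports.aspx): read the IDs straight from the query string,
// so each request always reflects the latest selection.
string[] reportIds = (Request.QueryString["reportIds"] ?? "").Split(',');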
Related
Good afternoon,
Currently I'm developing a web app using ASP.NET Core MVC.
I would like to know if there is a way to load and show the query results one by one instead of waiting until the query completes and then showing everything.
For example, in my application I have a query that usually returns more than 500-1,000 products, but the page keeps loading until everything is complete and only then displays the content.
I used to develop in Clojure, and there the products loaded one by one instead of waiting for everything to finish.
I am wondering if there is a way to do this in ASP.NET, or maybe with AJAX.
Thank you
Sounds like you need some pagination. That way you can take only 25, or 100, or however many rows you want at a time. DataTables works well for this in server-side mode, but it will take some work to set up.
I think you should use server-side pagination; these articles explain it:
https://datatables.net/examples/data_sources/server_side.html
http://www.codeproject.com/Articles/155422/jQuery-DataTables-and-ASP-NET-MVC-Integration-Part
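As a rough sketch of what server-side paging could look like in an ASP.NET Core MVC controller (AppDbContext, Products and the default page size are all just assumptions for illustration):

using System.Linq;
using Microsoft.AspNetCore.Mvc;

public class ProductsController : Controller
{
    private readonly AppDbContext _db;   // hypothetical EF Core context

    public ProductsController(AppDbContext db) => _db = db;

    // Return one page of products at a time instead of all 500-1,000 rows.
    public IActionResult Index(int page = 1, int pageSize = 25)
    {
        var items = _db.Products
            .OrderBy(p => p.Name)           // stable ordering before paging
            .Skip((page - 1) * pageSize)    // skip the earlier pages
            .Take(pageSize)                 // take just this page
            .ToList();

        return View(items);
    }
}

DataTables' server-side mode sends its own paging parameters (start/length), which you would map onto Skip/Take in much the same way.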
I am working on an ASP.NET MVC 4 app where I fetch data continuously, and my problem is related to caching.
The problem is that when I click on a particular link in my application it normally works fine, but sometimes it automatically redirects to the INDEX page, which is the default page.
I looked into this and found that it is Firefox behaviour: it caches every link. Sometimes something odd happens and a particular link gets redirected to the INDEX page (301 Moved Permanently), and Firefox also stores that redirect in its cache, so from then on every click on that link sends me to the cached INDEX page.
So now I have to clear the cache in my browser every time I face this problem.
How can I make it not automatically redirect to the cached INDEX page?
You should really expand on what exactly is happening at that particular link you mention, because it should not 301 redirect unless you're telling it to.
You also say "I fetch data continuously." What does that mean for us? Why is it important to know? Does it change the link or the data? Are you 404ing the older data or something? That could explain why you 301 back to your index.
Now, with the limited information you have given us: if you want to prevent Firefox from caching your URLs/redirects, simply give your URL a query string that changes with each request, such as a timestamp.
For example: http://example.com/return-data.asp?timestamp=1350668920
Then, each time you fetch data, update the page's link accordingly:
For example: http://example.com/return-data.asp?timestamp=1350669084
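If you are building that URL in C#, a minimal way to get an ever-changing value (Ticks is just one easy choice) would be:

// Append a value that changes on every request so the browser cannot reuse
// a cached response or redirect for this URL.
string url = "http://example.com/return-data.asp?timestamp=" + DateTime.UtcNow.Ticks;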
I'm trying to scrape a page. Everything is OK, but when the values are updated, the source code of the page stays the same for a minute. Even when I refresh the page over a slow internet connection, I first see the old data, and only once the page has fully loaded are the values current.
I guess JavaScript updates them, but it still has to download them from somewhere.
How can I get the current values?
I'm writing my program in C#, but if you have ideas/advice/examples, the language doesn't really matter.
Thank you.
You're right - JavaScript is probably updating the data after the page loads.
I can think of three ways to handle this:
1. Use a WebBrowser control. I guess you're using the HttpWebRequest object to retrieve values from the site, and that won't work if you need to let the JavaScript run. You can use the WebBrowser control, let the JavaScript run, and retrieve the values from the DOM. The only thing I don't like about this approach is that it feels like a hack and is probably too clunky for production applications. You also need to know when to read the contents of the DOM (an update might be in progress in the background). Google "C# WebBrowser Control Read DOM Programmatically" for more on this.
2. Call the background service directly. I personally prefer this over the previous approach, but it doesn't work all the time. First, inspect the website with Firebug or a similar tool and see which URLs are called in the background. Say, for example, the site updates stock quotes using JavaScript; most likely it uses an asynchronous request to retrieve the updated information from a web service. In Firebug you can see this under NET > XHR. Now for the hard part: look at the request and the values returned. The idea is that you can retrieve the values yourself and parse the contents, which can be a lot easier than scraping a page. The catch is that you may need to do a bit of reverse engineering to get it right, and you might also run into problems with authentication and/or encryption. A rough sketch of this approach follows the list.
3. Lastly, and my preferred solution: ask the owner of the site you are scraping directly.
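For the second option, a rough sketch of calling such a background URL directly (the URL and response format here are made up; substitute whatever Firebug shows the page actually requesting):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class QuoteFetcher
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Call the same URL the page's JavaScript calls (found under NET > XHR).
            string json = await client.GetStringAsync(
                "http://example.com/quotes/latest?symbol=MSFT");

            // Parse the returned JSON/XML instead of scraping rendered HTML.
            Console.WriteLine(json);
        }
    }
}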
I think the WebBrowser control approach is probably OK and doesn't depend on third-party libraries. Here is what I intend to use; it solves the problem of waiting for the page to finish loading:
// Navigate the WebBrowser control (wbrwPages) to the link and wait until the
// page, including any script-driven updates that run on load, reports Complete.
private string ReadPage(string Link)
{
    this.wbrwPages.Navigate(Link);

    // Pump Windows messages so the control can keep loading while we wait.
    while (this.wbrwPages.ReadyState != WebBrowserReadyState.Complete)
    {
        Application.DoEvents();
    }

    // Return the rendered HTML for DOM/XPath processing.
    return this.wbrwPages.DocumentText;
}
I will get the information out of the HTML through some form of DOM or XPath processing. I am curious whether others have comments about entering the 'while' loop and depending on the 'Complete' state to get out of it. I may put a timer of some sort in there as well, just to be safe.
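If I do add that safeguard, one simple variation might be (the 30-second limit is arbitrary):

// Same wait loop, but bail out if the page never reaches Complete.
var timeout = System.Diagnostics.Stopwatch.StartNew();
while (this.wbrwPages.ReadyState != WebBrowserReadyState.Complete
       && timeout.Elapsed < TimeSpan.FromSeconds(30))
{
    Application.DoEvents();
}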
The solution is for a project in which changing all instances of Session[string] is not an option. My thinking has been to implement SessionStateStoreProviderBase. I understand that creating a Session wrapper class with properties like Session.UserName would be a good idea.
Edit: The goal here is to turn off sessions per user request, not application-wide, without changing code in each aspx page.
First you need a way to tell a bot apart from a human.
Once you can do that, consider what you want to achieve.
If you wish to disable the session for bots, be sure it won't break your site. If a search engine bot gets a crashed page, it will index and rank it as such.
Set up your robots.txt file to direct (most) bots to a page of your choice, where you have control over session state and other information. If you want them to have free access to all pages, you will have to add code that distinguishes bots by their HTTP header information - that's a research project in itself. One possible per-request hook is sketched below.
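If the goal from the question is to switch sessions off per request without touching each aspx page, one possible hook (assuming .NET 4 or later; the bot check here is only a placeholder) is an HttpModule that calls SetSessionStateBehavior before session state is acquired:

using System;
using System.Web;
using System.Web.SessionState;

public class BotSessionModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // Must run before AcquireRequestState for the behaviour change to apply.
        app.PostMapRequestHandler += (sender, e) =>
        {
            var context = ((HttpApplication)sender).Context;
            if (IsBot(context.Request.UserAgent))
            {
                context.SetSessionStateBehavior(SessionStateBehavior.Disabled);
            }
        };
    }

    // Placeholder check; a real implementation would use a maintained UA list.
    private static bool IsBot(string userAgent)
    {
        return userAgent != null &&
               userAgent.IndexOf("bot", StringComparison.OrdinalIgnoreCase) >= 0;
    }

    public void Dispose() { }
}

The module still has to be registered in web.config like any other IHttpModule.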
I am working on a website that I inherited (ASP.NET and C#), and I noticed that in almost EVERY method in the code-behind of the project's pages (except some helper methods), the original author uses Response.Redirect() to redirect to a page (typically home.aspx, but not always).
What is the purpose of doing this? It seems unnecessary to me - at least it doesn't appear to change anything the website does whether I keep it or remove it.
Thanks.
Response.Redirect() issues a 302 HTTP Redirect header to the browser, which causes the browser to request a new page from your web site.
If the author was using the POST-Redirect-GET pattern to avoid the problem of users hitting the "refresh" button and reposting forms, that might explain why it's used everywhere.
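A minimal sketch of that pattern in a code-behind (the button handler and SaveForm() here are just placeholders, not the original author's code):

// POST-Redirect-GET: handle the posted form, then redirect so that hitting
// refresh re-issues a harmless GET instead of re-posting the form data.
protected void btnSave_Click(object sender, EventArgs e)
{
    SaveForm();                       // hypothetical save logic
    Response.Redirect("home.aspx");   // 302 to a page that is safe to reload
}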
Redirects should really only be used when the destination is determined by something in the code-behind. Redirects tend to cause ThreadAbortExceptions, which just add further load to a system when a simple href might be what the doctor ordered. Unless you can identify a true architectural need for the redirects, you might want to begin phasing them out.
It sends a response to the user agent/browser telling it to navigate to the specified page. It can be called from any point in the code, but note that by default Response.Redirect(url) ends the current response immediately (which is where the ThreadAbortException mentioned above comes from); the Response.Redirect(url, false) overload instead lets the rest of the page run to completion before the redirect is sent to the client.
Either way, it should generally be the last thing your code does on that path.
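A quick illustration of the two overloads (both are standard System.Web APIs):

// Default overload: ends the response right away by calling Response.End(),
// which raises a ThreadAbortException so no further page code runs.
Response.Redirect("home.aspx");

// Two-argument overload: queues the redirect but lets the rest of the
// page lifecycle finish before the response is sent.
Response.Redirect("home.aspx", false);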
ASP.NET pages post back to themselves, so some developers use the redirect method to move to a new page. Use it when you need it. If you don't see a difference when you remove it, the site probably navigates from one page to another via links instead of doing it on the server.
Without more information it's hard to be definitive.
However, if home.aspx is an empty page, the original author may have been trying to terminate processing of the page early in an effort to prevent subsequent code from running.
Normally, Response.Redirect() is used to end the response and tell the browser to navigate to a new page. However, if the browser has that page cached, it may not actually make a trip to the server. I've seen cases where developers do this as a way of short-circuiting subsequent processing.
It's also possible that the code is doing something crazy, like making home.aspx the main display page for all data and using session state or cache to communicate changes across pages. Sadly, I've seen that done too... sigh. Often it's done to deal with users being able to submit forms multiple times.