(I believe that this applies to both normal ASP.NET web parts and SharePoint hosted web parts)
The web part has an 'export' button that renders the output as CSV and sets the appropriate headers so it's opened in Excel.
Hooking into the button's click event, clearing the response, and adding the appropriate headers and content types is trivial - for example:
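Something like this (a rough sketch - BuildCsv() and the control names are just placeholders for the real code):

protected void ExportButton_Click(object sender, EventArgs e)
{
    HttpResponse response = HttpContext.Current.Response;
    response.Clear();
    response.ClearHeaders();
    response.ContentType = "text/csv";
    response.AddHeader("Content-Disposition", "attachment; filename=export.csv");
    response.Write(BuildCsv()); // BuildCsv() stands in for whatever produces the CSV data
    response.End(); // or: HttpContext.Current.ApplicationInstance.CompleteRequest();
}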
However, I've noticed that if this code is added to a web part and a debugger is attached, then when there are multiple instances of this (or any other) web part on the page, neither HttpApplication.CompleteRequest nor Response.End stops the processing/page lifecycle, and all the events for all the page controls still fire.
This is wasteful in this example as the other web parts don't need to run - nothing they do will get to the response.
Any way of stopping other web parts being rendered?
What about creating another page that does the actual export function for you, then have the export link merely link to this other page?
This should solve the problem of having to end the request manually, since you can control exactly what is output in the response.
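For example, the export page's code-behind could be as simple as this (a rough sketch - Export.aspx with an empty markup file, and BuildCsv() standing in for the real export logic):

public partial class Export : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Response.Clear();
        Response.ContentType = "text/csv";
        Response.AddHeader("Content-Disposition", "attachment; filename=export.csv");
        Response.Write(BuildCsv());
        Response.End();
    }

    private string BuildCsv()
    {
        // placeholder for whatever builds the CSV from your data source
        return "Column1,Column2\r\nvalue1,value2";
    }
}

The web part's export button then only needs to link (or Response.Redirect) to Export.aspx with whatever query-string parameters the export needs, and the rest of the page lifecycle never gets involved.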
Short answer: no.
Since you're doing full postbacks, the HTTP response stream must contain the entire HTML page delivered by the server.
You can use ASP.NET output caching and refresh the cache based on some parameter, allowing your page to re-render the web parts whose data has changed.
You can learn more about ASP.NET caching in this documentation:
http://msdn.microsoft.com/en-us/library/xsbfdd8c(v=VS.100).aspx
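For instance, a rough sketch of caching the rendered output and varying the cached copy by a query-string parameter (the programmatic equivalent of an OutputCache directive with Duration="60" and VaryByParam="id" - the parameter name is just an example):

protected void Page_Load(object sender, EventArgs e)
{
    // Cache the rendered output on the server for 60 seconds,
    // keeping a separate cached copy per value of the "id" parameter.
    Response.Cache.SetCacheability(HttpCacheability.Server);
    Response.Cache.SetExpires(DateTime.Now.AddSeconds(60));
    Response.Cache.SetValidUntilExpires(true);
    Response.Cache.VaryByParams["id"] = true;
}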
HttpContext.Current.Response.End();
is only there for compatibility with classic ASP.
ApplicationInstance.CompleteRequest();
is supposed to do what you want.
I have seen 100 examples of posting from an HTML form to a .aspx page and using Request.Form to see the values; this makes sense.
I'm trying to build up a small example to mimic a project where data is posted to a blank .aspx page and the values just read server side. I know it seems odd to leverage a .aspx page for this purpose, but that's the objective.
I want to know which event will be raised on the server in my Default.aspx page when data is POSTed to it? I doubt Page_Load() fires because the page is not being physically opened, rather just hosted in IIS on the server.
Which event will I use in Default.aspx to read or siphon out the POSTed data?
EDIT: 'mimic' is the keyword here. This is not a new project, but I don't have the source for the original - it's a prototype to mimic an implemented example. If I was starting from scratch exposing something to POST data I'd most likely choose WebAPI nowadays.
Page_Load()
If the page has no content and its sole purpose is to receive these values, then Page_Load() would be the sensible place to capture the values and pass them along to wherever they need to go in the business logic.
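A minimal sketch of what that could look like (the field names are made up):

protected void Page_Load(object sender, EventArgs e)
{
    if (Request.HttpMethod == "POST")
    {
        string name = Request.Form["name"];
        string email = Request.Form["email"];
        // hand the values off to the business logic here
    }
}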
I doubt Page_Load() fires because the page is not being physically opened
Sure it is, at least as far as the page itself is concerned. How the client requests the page and what the client does with the response from the page is immaterial in this regard. If the page is requested, it's "loaded" server-side and returned as the response.
I know it seems odd to leverage a .aspx page for this purpose, but that's the objective.
Very odd indeed. Though not uncommon. An ASHX handler may suit your needs more effectively, as might a WCF service endpoint. But without more information about what you're building and how it's going to be maintained, this is all guesswork.
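For comparison, a bare-bones ASHX handler for the same job might look like this (a sketch - the class and field names are made up):

public class Receive : System.Web.IHttpHandler
{
    public void ProcessRequest(System.Web.HttpContext context)
    {
        string name = context.Request.Form["name"];
        // process the posted values, then acknowledge
        context.Response.ContentType = "text/plain";
        context.Response.Write("OK");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}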
I would like to know how the HTML source of AJAX-based sites can be read using HttpWebRequest / HttpWebResponse (that is, reading the contents of a website server-side). The problem I'm facing is that I'm unable to read the parts of the page that are generated via AJAX or things like UpdatePanel.
My application is in ASP.NET / C#, so I can't consider things like the Browser control or mshtml.dll, since I would not be able to serve multiple requests.
Thanks in advance.
This is going to be difficult.
I know you said you don't want to use the Browser control, but I'm going to say it anyway: you will most probably be better off using a Browser control. The reasons are as follows:
AJAX sites make multiple calls from the browser to the server to obtain the required view.
The multiple calls are being performed via JavaScript
The data returned from the server may be reformatted by JavaScript before being updated onto the view.
If you are going to do this using HttpWebXyz functions, you will have to do the following:
Make the relevant calls to get the initial page source (a sketch of this step follows the list).
Parse the page for JavaScript.
Evaluate/execute the JavaScript. This may include providing the relevant implementation for functions such as alert and making subsequent calls to the server.
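For the first step, something like this is enough to get the server-rendered markup (a sketch; error handling omitted):

using System.IO;
using System.Net;

static string GetInitialPageSource(string url)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}

Keep in mind this only gives you what the server sends initially - anything the page builds via JavaScript or an UpdatePanel afterwards will not be in that string, which is exactly why the remaining steps are the hard part.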
Depending on the complexity of the AJAX site, you may want to reconsider using the browser control. Complex sites are easier to process with the control. If the site is simple enough, you may get by with parsing and executing the required JavaScript yourself.
This example uses a deprecated class to parse JavaScript.
You may want to explore ICodeCompiler and its relevant classes for the new approach.
Good luck.
I have a page with a menu that uses jQuery AJAX calls to populate the page. To reflect any changes, I update the URL with a #... instead of ?... or /..., so a URL that originally reads http://localhost/pages/index/id=1 would look like http://localhost/#pages/index/id=1. If a user bookmarks this and later comes back to the page, I wonder if it's possible to use the second URL in my route decoding, or if I have to load the page blank and then use the same JS/AJAX to populate it?
In my mind it is problematic to use Ajax in these cases if a user copies the link and mails it to a friend with JavaScript disabled.
edit#1: Fixed some spelling.
edit#2: To clarify the question a bit: I want a site where I can do the following:
(a): with JavaScript turned on, use AJAX calls to replace the content of a div (without reloading the page)
(b): with JavaScript turned on, bookmark the page as it is after the AJAX call in (a)
(c): take the URL, send it to a person with JavaScript disabled, and have them see the same page as after the AJAX call was made.
(a) and (b) work just fine on my page, but (c) is seemingly impossible.
Currently, the only portion of a URL you can update without causing the browser to redirect is the hash. This portion of the URL is not sent to the server in a request and is only available for client-side processing, so it cannot be used to provide a JavaScript-free way of providing a link.
The issue you are facing is a common one amongst those using AJAX. The best solution I've encountered is to provide a way to view any AJAX-loaded state of every page through a "true" URL, one that will be passed to the server.
This means you have one URL which provides a "snapshot" of a page's state:
http://localhost/pages/index/1/someaction
And an AJAX-specific URL which provides the local state of the page in the client's browser:
http://localhost/pages/index/1#someaction
What you then have to do is provide some means of generating the "snapshot" link to the page from the AJAX version. A "Link to this Page" or "Permanent Link" button is a reasonable option.
This is not possible, simply because everything that comes after the # sign (the fragment identifier) is never sent to the server; there's no way for the server to ever capture this value, so no routing with it.
You could try replacing the '#' with a '?'. This will send the rest of it as a GET variable, so you may need to do some tweaks, such as changing the format to http://localhost/?pages=index&id=1
There are some fancy things you can set up with the web server so that localhost/article/fancystuff is rewritten to localhost/article.php?title=fancystuff
There are a lot of ways of allowing an AJAX site to work with bookmarks and the back button. But you should ask yourself whether you want people to do certain things. Generally, AJAX is used for more advanced web applications that do not map well to the traditional back-and-forth model.
EDIT
Given your additions to the question, I will say that, seeing as you want to fully support users who are wary of JavaScript, you will need to make your site work perfectly without any AJAX at all. But you should design it in such a way that the content of each page is included from a separate file. This means that when you add in the additional JavaScript, it can load that file and place it more or less directly into the content holder on your page.
You do need to remember that you can't force someone to accept a bookmark or force a change to a bookmark. What you are after may be best served using cookies. Luckily, even fewer people are scared of cookies; hardly anyone disables them unless they are either paranoid or up to something.
I am working on a website that I inherited (ASP.NET and C#), and I noticed that in almost EVERY method in the code behind of the project pages (except some helper methods), the original author uses Response.Redirect() to redirect to a page (typically home.aspx, but not always).
What is the purpose of doing this? It seems unneeded to me - at least it doesn't appear to change anything the website is doing if I keep it in or remove it.
Thanks.
Response.Redirect() issues a 302 HTTP Redirect header to the browser, which causes the browser to request a new page from your web site.
If the author was using the POST-Redirect-GET pattern to stop the problem with users being able to hit the "refresh" button and repost forms, this might explain why it's used everywhere.
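If that's what's going on, the handlers probably follow this shape (a sketch - SaveFormData() stands in for whatever the page actually does with the POST):

protected void SaveButton_Click(object sender, EventArgs e)
{
    SaveFormData();                 // handle the posted data
    Response.Redirect("home.aspx"); // redirect so hitting refresh re-issues a GET, not the POST
}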
Redirects should really only be used when the destination is determined by something in the code-behind. Redirects tend to cause ThreadAbortExceptions, which are just further demand on a system when a simple href might be what the doctor ordered. Unless you can identify a true architectural need for redirects, you might want to begin phasing these things out.
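If you do keep them, one way to avoid the ThreadAbortException is the overload that doesn't end the response (a sketch):

Response.Redirect("home.aspx", false);         // don't abort the current thread
Context.ApplicationInstance.CompleteRequest(); // let the pipeline finish the request cleanly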
It sends a response to the user agent/browser and tells it to redirect to the specified page. It can be put into any part of the code, but by default the page will still execute to completion, and then the redirect response will be sent to the client...
It should only be needed at the last point in the code that you are running (generally)
ASP.NET pages post back to themselves, so some developers use the redirect method to open a new page. Use it when you need it. If you don't see a difference when you remove it, it might be that the site uses links to navigate from one page to another instead of doing it via the server.
Without more information it's hard to be definitive.
However, if home.aspx is an empty page, it may be that the original author may have been trying to terminate the processing of the page early in an effort to prevent subsequent processing.
Normally, Response.Redirect() is used to end the response and inform the browser to navigate to a new page. However, if the browser has that page cached, it may not actually perform a trip to the server. I've seen some cases where developers do this as a way of short-circuiting subsequent processing.
It's also possible that the code is doing something crazy, like making home.aspx the main display page for all data - and using session state or cache to communicate changes across pages. Sadly, I've seen this done too.... sigh. Often this is done to deal with the user being able to submit forms multiple times.
I've been struggling to find an example of some C# code (I'm using C# Visual Studio 2008 Express) that can programmatically save an entire web page (given a URL) including the images and formatting (e.g. CSS). The intention is that in a subsequent phase I'd ship this off (not sure how yet) so it could be viewed later via a browser.
Is there an example of the most simple approach (leveraging the .NET Framework methods) to save an entire web page? Saving as one page with a subdirectory for images, or otherwise. Basically the same as what you get with browsers when you say "save entire web page".
The simplest way is probably to add a WebBrowser Control to your application and point it at the page you want to save using the Navigate() method.
Then, when the document has loaded, call the ShowSaveAsDialog method. The user can then save the page as a single file, or a file with images in a subdirectory.
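In code, the idea is roughly this (WinForms; the URL is just an example):

WebBrowser browser = new WebBrowser();
browser.DocumentCompleted += (s, e) => browser.ShowSaveAsDialog();
browser.Navigate("http://example.com/");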
[Update]
Having now noticed "programmatically" in your question, the above approach is not ideal as it requires either user involvement or delving into the Windows API to send input using SendKeys or similar.
There is nothing built-in to the .NET Framework that does all of what you ask.
So my revised approach would be:
Use System.Net.HttpWebRequest to get the main HTML document as a string or stream (easy).
Load this into an HtmlAgilityPack document, where you can now easily query the document to get lists of all image elements, stylesheet links, etc.
Then make a separate web request for each of these files and save them to a subdirectory.
Finally, update all relevant links in the main page to point to the items in the subdirectory.
In effect you would be implementing a very simple web browser. You may run into issues with pages that use JavaScript to dynamically alter or request page content, but for most pages this should give acceptable results.
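A rough sketch of what that could look like (this assumes the HtmlAgilityPack library; it only handles img elements and omits stylesheets, scripts and error handling):

using System;
using System.IO;
using System.Net;
using HtmlAgilityPack;

static void SavePage(string url, string folder)
{
    Directory.CreateDirectory(folder);

    using (WebClient client = new WebClient())
    {
        // 1. Get the main HTML document.
        string html = client.DownloadString(url);

        // 2. Load it into an HtmlAgilityPack document and find the image elements.
        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml(html);
        HtmlNodeCollection images = doc.DocumentNode.SelectNodes("//img[@src]");

        if (images != null)
        {
            foreach (HtmlNode img in images)
            {
                // 3. Download each referenced image next to the saved page.
                Uri imageUri = new Uri(new Uri(url), img.GetAttributeValue("src", ""));
                string localName = Path.GetFileName(imageUri.LocalPath);
                client.DownloadFile(imageUri, Path.Combine(folder, localName));

                // 4. Update the link to point at the local copy.
                img.SetAttributeValue("src", localName);
            }
        }

        doc.Save(Path.Combine(folder, "page.html"));
    }
}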
From CodeProject: ZetaWebSpider
It's definitely not elegant, but you could navigate a System.Windows.Forms.WebBrowser to the URL and then call its ShowSaveAsDialog() method to save the page.