HTML rendered under asp:UpdatePanel does not appear in page source - C#

I am working with .NET and C#.
Is there a way to see the rendered html code under the updatepanel?
Thanks
more info:
I dynamically generate UI controls and place them in an asp:Panel control that sits inside an UpdatePanel. My page is initially almost empty, and I add about 50 new controls on a button click. However, I cannot see the generated HTML in the page source; that is, I can see my text field on the screen, but I cannot find the corresponding markup in the HTML source in my browser.
Thanks again.
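For context, here is a minimal sketch (not from the original question) of the kind of setup being described, assuming a Panel called pnlDynamic and a Button called btnAdd inside the UpdatePanel; note that dynamically added controls like these also have to be recreated on every postback if they are to survive later postbacks:

protected void btnAdd_Click(object sender, EventArgs e)
{
    // Add ~50 controls to a Panel that sits inside an UpdatePanel.
    for (int i = 0; i < 50; i++)
    {
        var txt = new TextBox { ID = "txtDynamic" + i };
        pnlDynamic.Controls.Add(txt);
    }
}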

What are you using to view the source? If you are using the View Source functionality in some browsers, this may only be showing you the initial server response, and anything dynamically inserted into the page in an AJAX call might not appear.
If you use a tool like Firebug you can watch the current state of the DOM, which will show you any dynamically inserted elements.

With Internet Explorer you can use the Developer Tools (IE8) to view the actual source, not just the initial source. As Tom said, Firebug will do the same thing in Firefox, and Safari has a similar option whose name escapes me at the moment.

Basically, you need to inspect the DOM instead of the HTML source. Add-ins like Firebug for Firefox and the Developer Tools for IE8 allow you to inspect the DOM and even update it dynamically.
If you need to view the HTML instead of the DOM representation, you can use Fiddler or Firebug's Net panel, which let you debug HTTP traffic and see the responses returned for the AJAX calls.

It does appear over here, just like normal ASP.NET controls; there is just a little bit of AJAX code that does the updating. Can you be more specific about what you are looking for?

Related

How to get URLs on page with HTMLAgilityPack, when the Source does not contain the URLs?

I am trying to scrape the KB Urls from this page:
https://support.microsoft.com/en-us/kb/894199
On the page, there are URLs such as:
https://support.microsoft.com/kb/2976978
If you open up the developer tools in Chrome, it shows that the data is contained in markup like this:
<div class="indent">
<a id="kb-link-142" href="https://support.microsoft.com/kb/2976978" target="_self">https://support.microsoft.com/kb/2976978</a>
</div>
Now based on the above HTML, I believe I should be able to scrape the URLs from the href element like this:
foreach (HtmlNode link in doc.DocumentNode.SelectNodes("//a[@href]"))
{
    list.Add(link.GetAttributeValue("href", string.Empty));
}
The problem I am running into, though, is that when I download the HTML source, the content is different. What I mean is that even though the developer tools show the above HTML on the page, if you right-click the page and choose View source, the HTML shown at that point is totally different and does not contain any of the URLs that the rendered page displays.
My theory is that there's some kind of file reference where the HTML loads a file somewhere and the file contains the details of the page that is rendered.
So how can I use HTMLAgilityPack to get the URLs that are on the rendered page, since the source doesn't seem to contain them?
Also - I realize my question Title may be really confusing. If there is a technical term for what this page is doing/how it works, let me know and I can update the title so it is more logical and others can search it out in the future.
Okay, I see the problem now. This page is using AngularJS directives and bindings, and the hrefs are loaded after the page itself loads. The page we get back is the one before any parsing/execution has happened, as the web browser agent would initially see it. This means any changes made to the page by DOM manipulation, JavaScript, or AJAX will not be included in the HtmlDocument response. I think the way to go about this would be to pretend to be a web browser, let the JavaScript and AJAX execute completely, and then fetch the content, as advised here. Hope this helps!
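As a rough illustration of that approach (my own sketch, not code from the answer), you could let a WinForms WebBrowser control execute the page's scripts and then hand the rendered markup to Html Agility Pack. It assumes an STA thread and, as a simplification, that the page has finished rendering by the time DocumentCompleted fires:

using System;
using System.Collections.Generic;
using System.Windows.Forms;
using HtmlAgilityPack;

static class RenderedPageScraper
{
    [STAThread]
    static void Main()
    {
        var links = new List<string>();
        var browser = new WebBrowser { ScriptErrorsSuppressed = true };

        browser.DocumentCompleted += (s, e) =>
        {
            // Read the live DOM (after scripts have run), not the original source.
            string renderedHtml = browser.Document.GetElementsByTagName("html")[0].OuterHtml;

            // Fully qualified to avoid the clash with System.Windows.Forms.HtmlDocument.
            var doc = new HtmlAgilityPack.HtmlDocument();
            doc.LoadHtml(renderedHtml);

            var anchors = doc.DocumentNode.SelectNodes("//a[@href]");
            if (anchors != null)
                foreach (HtmlNode link in anchors)
                    links.Add(link.GetAttributeValue("href", string.Empty));

            Application.ExitThread();
        };

        browser.Navigate("https://support.microsoft.com/en-us/kb/894199");
        Application.Run(); // pump messages so the WebBrowser control can work

        foreach (var url in links)
            Console.WriteLine(url);
    }
}

In practice the AJAX content may still be loading when DocumentCompleted fires, so a timer or a check for the expected elements is usually needed before reading the DOM.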

Parse links from WebBrowser if source code is not updated

Here is the problem: I need to parse links from a site. Everything would be fine, but the links are rendered by a script, and they are not in the source code. More precisely, they are there, but they are the old ones.
Here is the site: http://54.join.ru/resume?q=
I need to parse the links to the resumes. That works fine. But when you go to some other page, for example page 5, the resumes change, yet the source code still contains the old links, i.e. the ones that were on the first page.
Can anybody suggest how I can parse the new links? I am writing in C# using WebBrowser.
Use Selenium WebDriver.
Selenium-WebDriver was developed to better support dynamic web pages
where elements of a page may change without the page itself being
reloaded.
Thus you will be able to access elements on a web page that have been changed dynamically by JavaScript.
The following code, for example, finds an element by a given class name:
IWebElement we = driver.FindElement(By.ClassName("ra-elements-list__new-window-link"));
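To put that in context, here is a hedged sketch of how the whole flow might look with the Selenium .NET bindings; the choice of ChromeDriver, the wait, and the idea of collecting every matching link are my assumptions for illustration, not part of the original answer:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;

class ResumeLinkScraper
{
    static void Main()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("http://54.join.ru/resume?q=");

            // Wait until the dynamically rendered links appear in the DOM.
            var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
            wait.Until(d => d.FindElements(By.ClassName("ra-elements-list__new-window-link")).Count > 0);

            // Collect the href of every resume link currently on the page.
            foreach (IWebElement link in driver.FindElements(By.ClassName("ra-elements-list__new-window-link")))
            {
                Console.WriteLine(link.GetAttribute("href"));
            }
        }
    }
}

The same wait-then-read pattern applies after navigating to page 5 or any other page, since WebDriver always reads the current state of the DOM rather than the original source.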

Translating website to Arabic dynamically in C#

I am facing a pesky problem at the moment on a large website with multiple languages. On arrival at the website, it detects what country you are from and prompts you to confirm this. On confirmation, it swaps out the page's language text from the DB and displays the relevant language. This is done using jQuery. Now the problem is that Arabic reads right to left, so I need to either:
-- swap out the stylesheets for an "rtl" version
or
-- change the HTML tag and include a "dir='rtl'" attribute
Now, I have tried both of these, with failures on both. When I view the page source, it still shows the old CSS file or the HTML tag without the "dir" attribute. Correct me if I'm wrong, but I believe this to be due to the DOM not registering the new changes, as they have happened asynchronously via jQuery after the DOM has been instantiated.
After all that blah blah and tldr;
Is there not an easier way to swap out the text direction dynamically? If this is a DOM issue, how can I reload the DOM after the asynchronous callback?
I have been at this issue for hours now and have had very little luck on the interwebz.
Any and all help is welcome and greatly appreciated.
Kind Regards,
William Francis
EDIT:
After much investigation I found that the only way to truly get the Arabic right-to-left behaviour working is with a post-back. Once the language has been selected you do a post-back, and then it's just a simple process of changing the stylesheet HREF attribute from code-behind. There doesn't seem to be any form of JavaScript or jQuery that can change it without a post-back and still reflect the new stylesheet. NOTE: you need to set the stylesheet HREF on each post-back, e.g. through a master page. The stylesheet changes do not persist across pages.
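For what it's worth, a minimal sketch of that code-behind approach (my own illustration, with assumed control names), assuming the master page declares <link id="cssMain" runat="server" rel="stylesheet" type="text/css" /> in the head and <body id="pageBody" runat="server">, and that the selected language is kept in Session:

// Master page code-behind: re-applied on every post-back, since the choice
// does not persist across pages on its own.
protected void Page_Load(object sender, EventArgs e)
{
    bool isArabic = string.Equals(Session["Language"] as string, "ar",
                                  StringComparison.OrdinalIgnoreCase);

    // Swap the stylesheet for the RTL version when Arabic is selected.
    cssMain.Href = ResolveUrl(isArabic ? "~/Styles/site-rtl.css" : "~/Styles/site.css");

    // Flip the document direction for right-to-left languages.
    pageBody.Attributes["dir"] = isArabic ? "rtl" : "ltr";
}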
Here's a website that helped greatly and explains a whole lot about stylesheet changes using JavaScript. Sadly, it didn't work for me.
http://www.alistapart.com/articles/alternate/
There could be several things going on. I found this page to be very helpful when I was dealing with a similar thing, so I highly recommend it:
http://www.w3.org/International/tutorials/bidi-xhtml/
Also, if you aren't already doing so, use a tool like Firebug to examine the generated DOM after your AJAX has run, to be sure you are seeing the altered state of the DOM and not the initial source of the page. It is possible to change the dir dynamically: you can use Firebug to add a new attribute to the HTML tag of this very page (set dir="rtl") and see it change on the fly. It could be that some other element is overriding the direction, that the AJAX changes aren't loading correctly, or something else. If you can post more of your code it would be easier to give a better answer, but I hope this helps.

How do browsers determine which controls are "successful" for a multipart/form-data postback?

I am trying to create a multipart/form-data postback using System.Net.HttpWebRequest.
Normally the browser creates the multipart/form-data postback. However, since I am using HttpWebRequest, I will have to parse the HTML and then create a POST body based on it.
The controls on the page are updated frequently so I can't rely on hard coding the data for each control that should be posted. Instead I'll have to make a list of all the controls which should be posted and then do something like this.
But to do that I need to know how browsers determine which controls to include in the Postback body. So how do they?
I found my answer on W3C's website.
The browser's submit behavior is to send the contents of the "successful" controls (such as <input> elements) inside the <form> element that is being submitted: roughly, controls that have a name, are not disabled, and, in the case of checkboxes and radio buttons, are checked, plus the submit button that was clicked.
Reliance on this behavior is fairly low on modern pages due to the heavy use of AJAX post-backs; AJAX requests are constructed by script code and are essentially not restricted by anything the form defines.
The complete list of "form controls" can be found in the Forms in HTML documents specification.
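As an illustration of building such a body by hand (a sketch of my own, not from the answer), here is one way it might look with HttpWebRequest; the URL and field names are placeholders, and in practice each name/value pair would come from the list of successful controls parsed out of the page:

using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Text;

class MultipartPostSketch
{
    static void Main()
    {
        // Placeholder fields; in a real postback these are parsed from the page's form.
        var fields = new Dictionary<string, string>
        {
            { "txtName", "example value" },
            { "ddlChoice", "2" }
        };

        string boundary = "----boundary" + DateTime.Now.Ticks.ToString("x");
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/page.aspx");
        request.Method = "POST";
        request.ContentType = "multipart/form-data; boundary=" + boundary;

        // Each successful control becomes one part of the multipart body.
        var body = new StringBuilder();
        foreach (var field in fields)
        {
            body.Append("--").Append(boundary).Append("\r\n");
            body.Append("Content-Disposition: form-data; name=\"").Append(field.Key).Append("\"\r\n\r\n");
            body.Append(field.Value).Append("\r\n");
        }
        body.Append("--").Append(boundary).Append("--\r\n");

        byte[] bytes = Encoding.UTF8.GetBytes(body.ToString());
        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(bytes, 0, bytes.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}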

How to get page source of IFrame in a subdomain in IE addon

I am making an IE add-on using BandObjects in C#. I make my web browser navigate to a page, say example.com. On that page there is an iframe whose src is sub.example.com, so the iframe points to a subdomain. I am able to fetch the URL of the iframe, but unable to get the page source; when I view it in the browser it's there, but through code I can only see the script, no data.
I am pasting the IFrame:
<iframe height="40" src="http://sub.example.com/....php?style=web&ext=1305964161&hash=Ng1gwLG821-f" frameBorder="0" width="300" scrolling="no"></iframe>
When I view this element through Visual Studio, the HTML view shows me the data (it's an email), while the Text view shows the above. How do I get the HTML view, or rather the page source, of this iframe?
So, overall, I want the data contained in this iframe; the browser renders it somehow, but how can I get it through code?
I have visited a lot of sites and forums, but couldn't get it to work.
Well, in the active browser window the code is what the code is; there's no way of viewing the source of a frame within the active window, unless of course the frame is large enough to right-click on and view its source directly. Otherwise you're stuck with what's rendered in the browser as your source, and the iframe per se is irrelevant, in a manner of speaking. However, seeing as it's an add-on you're making, is it possible to load the URL of the iframe in a hidden window of sorts and then obtain the code that way, as you would for the active page? Can you use JavaScript anywhere in your add-on? I know that's a silly question, but I've never built an add-on for a browser.
If you can use JavaScript, however, then something like identifying the iframe by its name/id and reading innerHTML on the element might let you catch the source.
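As a rough sketch of the hidden-window idea above (an assumption on my part, not code from the answer), you could navigate a separate, off-screen WebBrowser control to the iframe's URL and read the loaded document once it is ready:

using System;
using System.Windows.Forms;

static class HiddenFrameFetcher
{
    // Navigates an off-screen WebBrowser to the iframe's URL and hands the loaded
    // HTML to the callback. Assumes it is called on a UI (STA) thread with a
    // running message loop, as inside a BandObject.
    public static void FetchFrameHtml(string frameUrl, Action<string> onLoaded)
    {
        var hidden = new WebBrowser { ScriptErrorsSuppressed = true };

        hidden.DocumentCompleted += (s, e) =>
        {
            // DocumentText holds the HTML of the page the control just loaded.
            onLoaded(hidden.DocumentText);
            hidden.Dispose();
        };

        hidden.Navigate(frameUrl);
    }
}

Note that this issues a second request for the frame's URL rather than reading the live frame inside the parent page, so it only shows the shape of the approach.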
