I have a bunch of webpages where I need to grab some product information but the webpages are all written using javascript. So there will be something like
<a href="javascript:__getInfo('content_abc')">Product Name</a>
And once that's clicked, all the page's content changes (but not the URL). How can I programmatically execute that script and get all the loaded content from a C# program?
While this may not be the right approach to your problem, there is a solution that might work.
By using automated testing tools such as Selenium, you can use the WebDriver interface to take control of a web browser, interact with elements on the page, click buttons, and check the results.
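As a rough sketch of that idea, assuming Chrome and the Selenium.WebDriver NuGet package (the URL, the link text, and the `__getInfo` call are taken from the question; everything else is a placeholder):

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class Scraper
{
    static void Main()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("http://example.com/products");

            // Click the JavaScript link...
            driver.FindElement(By.LinkText("Product Name")).Click();

            // ...or invoke the page's script directly:
            // ((IJavaScriptExecutor)driver)
            //     .ExecuteScript("__getInfo('content_abc')");

            // The content changed in place; read the updated DOM.
            string html = driver.PageSource;
            System.Console.WriteLine(html);
        }
    }
}
```

Since the script rewrites the page in place, reading `PageSource` after the click gives you the loaded content even though the URL never changes.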
Related
I am forced to use RPA software and C# scripts at my current job to automate a website. Basically everything worked fine until I had to handle a webpage dialog box like the one found in this post!
The dialog box pops up when clicking a button and it seems to have some javascript functionality behind.
I have to say that using the Selenium WebDriver SwitchTo() method allows me to handle the popup, get the HtmlDocument and complete my task. However, I am forced to use this RPA software, which can only use its own browser or Internet Explorer, so I can't use the WebDriver from the Selenium library.
Is there any other way of handling these kinds of popups? Maybe another library that gives me the Selenium functionality? The most important thing is being able to read the HtmlDocument in order to have access to all the web dialog's elements (e.g. radio buttons, checkboxes, search boxes, etc.).
I'm making a WinForms application containing a WebBrowser control. I'm trying to connect to a web page that contains ads, which is affecting the loading speed of the page. The AdBlock plugin for Chrome blocks ads. Is there any way to add that plugin, or any other way to achieve the same result?
The adblock plugin is just some JS with some browser-specific metadata on top. You can check the source code here: https://adblockplus.org/source
You can probably sort something out with this (like running the plugin's code after the page loads).
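A minimal sketch of that approach in a WinForms WebBrowser control: inject a script once the document has loaded. The "hide all iframes" rule here is a deliberately crude stand-in; a real ad blocker like the Adblock Plus source above works from large filter lists.

```csharp
using System;
using System.Windows.Forms;

public class AdHidingForm : Form
{
    private readonly WebBrowser browser =
        new WebBrowser { Dock = DockStyle.Fill };

    public AdHidingForm()
    {
        Controls.Add(browser);
        browser.DocumentCompleted += OnDocumentCompleted;
        browser.Navigate("http://example.com");
    }

    private void OnDocumentCompleted(object sender,
        WebBrowserDocumentCompletedEventArgs e)
    {
        // 'eval' runs arbitrary JS inside the IE-based control.
        browser.Document?.InvokeScript("eval", new object[]
        {
            "var ads = document.getElementsByTagName('iframe');" +
            "for (var i = 0; i < ads.length; i++)" +
            "    ads[i].style.display = 'none';"
        });
    }
}
```

Note this only hides ads after they have loaded, so it won't fully recover the page-load speed the way a real network-level blocker does.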
I haven't looked into it very much but am struggling to find relevant information on the topic. I basically want to create a browser that applies a filter to a webpage by changing its colors. My guess is that I will have to change the HTML once it has loaded, or something like that. Would this work? Do I have other options?
PS. I don't just want to make every color darker; I'd rather invert the colors.
Edit:
In case you were wondering, I am talking about the XAML browser component that can be used in a Windows Phone application.
I think the simplest way to do that is to inject some JavaScript into your page once it has loaded.
To do that, you need to set IsScriptEnabled to true on your WebBrowser control and then subscribe to the Navigated event.
When that event fires, you can inject some JS code using the WebBrowser.InvokeScript method.
Here is an example of JS code that darkens the page: JavaScript: Invert color on all elements of a page
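A sketch of the wiring on the C# side, assuming a WebBrowser named `Browser` declared in XAML with `IsScriptEnabled="True"` and `Navigated="Browser_Navigated"`. The CSS invert filter used here is an assumption; support varies by IE version on the phone, so you may need the element-by-element color swap from the linked answer instead.

```csharp
using System.Windows.Navigation;

public partial class MainPage
{
    private void Browser_Navigated(object sender, NavigationEventArgs e)
    {
        // Inject JS via 'eval' to invert the whole page.
        Browser.InvokeScript("eval",
            "document.documentElement.style.filter = 'invert(100%)';");
    }
}
```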
If you are talking about a PC internet browser, you can find an add-on that executes JavaScript automatically, such as Greasemonkey for Firefox. If you are talking about Windows Phone's Internet Explorer, I don't really know what you could do there, as I don't think it allows add-ons.
I need to browse a web page from a C# Windows application (with a browser control on it) and collect information for my data mining project. I need a tool that helps me invoke events like clicks and refer to objects with a jQuery/CSS-selector-like syntax, so I can read them and save them to a database.
I tried WatiN, but that seems to be just for testing my own web application.
You might want to try Selenium...
We use it for automated Regression Testing
http://seleniumhq.org/
We sometimes use Selenium for web scraping.
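Selenium covers both requirements from the question: clicking elements and querying with CSS selectors, much like jQuery. A sketch assuming Chrome and the Selenium.WebDriver NuGet package; the URL and selector strings are placeholders for your actual page:

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class Miner
{
    static void Main()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("http://example.com/catalog");

            // Invoke a click event, jQuery-style selector syntax.
            driver.FindElement(By.CssSelector("a.next-page")).Click();

            // Read elements by CSS selector.
            foreach (IWebElement row in
                driver.FindElements(By.CssSelector("div.product")))
            {
                string name  = row.FindElement(By.CssSelector(".name")).Text;
                string price = row.FindElement(By.CssSelector(".price")).Text;
                // Save name/price to your database here.
            }
        }
    }
}
```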
There is a website that was created using ColdFusion (not sure if this matters or not). I need to interact with this web site. The main things I need to do are navigate to different pages and click buttons.
I have come up with two ideas on how to do this. The first is to use the WebBrowser control. With this, I could certainly navigate pages and click buttons (according to this).
The other way is to interact with the HTML directly. I'm not sure exactly how to do this, but I am assuming I could click buttons or use HTTP requests to interact with the page.
Does anyone have a recommendation on which way is better? Is there a better way that I haven't thought of?
I'd use Html Agility Pack to parse the HTML and then do POSTs and GETs appropriately with HttpWebRequest.
While it may be possible to use the WebBrowser control to simulate clicks and navigation, you get more control over what gets sent with Html Agility Pack and HttpWebRequest.
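A sketch of that combination: fetch the page, locate the form that the button submits, then POST the equivalent fields yourself. It assumes the HtmlAgilityPack NuGet package; the URL, form name, and field names are placeholders for whatever the ColdFusion site actually uses.

```csharp
using System.IO;
using System.Net;
using System.Text;
using HtmlAgilityPack;

class FormDriver
{
    static void Main()
    {
        // GET the page.
        var request = (HttpWebRequest)WebRequest.Create(
            "http://example.com/page.cfm");
        string html;
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
            html = reader.ReadToEnd();

        // Parse it and find the form the button belongs to.
        var doc = new HtmlDocument();
        doc.LoadHtml(html);
        HtmlNode form = doc.DocumentNode
            .SelectSingleNode("//form[@name='searchForm']");
        string action = form.GetAttributeValue("action", "");

        // POST the same fields the button click would submit.
        var post = (HttpWebRequest)WebRequest.Create(
            "http://example.com/" + action);
        post.Method = "POST";
        post.ContentType = "application/x-www-form-urlencoded";
        byte[] body = Encoding.UTF8.GetBytes("query=widgets&submit=Search");
        using (Stream s = post.GetRequestStream())
            s.Write(body, 0, body.Length);
        post.GetResponse().Close();
    }
}
```

The point of inspecting the form first is that hidden fields (session tokens, ColdFusion state) often need to be echoed back in the POST body for the request to be accepted.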
Did you consider Selenium? The WebDriver API is quite good, and permits a lot of things in terms of Website automation.
Why not submit directly to the URL? That's what the button click will do.
Using WebRequest.Create, you can submit directly to the URL; there's no need to load, parse and "click" the button.
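For the case where the button just navigates to a URL (possibly with query parameters), that is a few lines; the URL below is a placeholder:

```csharp
using System.IO;
using System.Net;

class DirectSubmit
{
    static void Main()
    {
        // Request the URL the button would navigate to.
        WebRequest request = WebRequest.Create(
            "http://example.com/page.cfm?action=next&id=42");

        using (WebResponse response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string html = reader.ReadToEnd();
            System.Console.WriteLine(html);
        }
    }
}
```

This only works when the button's effect really is a plain navigation; if it submits form state or runs JavaScript first, you're back to the parse-and-POST approach.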
HtmlAgilityPack is useful for pulling web elements and finding tags easily. If you need to remotely "steer" a web session, though, I prefer to use WatiN. It bills itself as a web unit-testing framework, but it's very useful any time you need to fake a browser session. Further, it can remote-control different browsers well enough for most tasks you'll need (like finding a button and pushing it, or finding a text field and filling in text if you need to log in).
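A small sketch of that WatiN workflow, driving a real Internet Explorer window to log in and click a button. It assumes the WatiN NuGet package; the URL and field names are placeholders. WatiN requires an STA thread.

```csharp
using WatiN.Core;

class WatinDemo
{
    [System.STAThread]
    static void Main()
    {
        using (var browser = new IE("http://example.com/login"))
        {
            // Find a text field and fill in text...
            browser.TextField(Find.ByName("username")).TypeText("me");
            browser.TextField(Find.ByName("password")).TypeText("secret");

            // ...then find a button and push it.
            browser.Button(Find.ByValue("Log in")).Click();

            // After the click, the DOM is available for scraping.
            System.Console.WriteLine(browser.Html);
        }
    }
}
```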