Hello, I've been assigned a QA ticket to validate a JavaScript code snippet on a page. The best way of doing this, of course, is to view the page source and look for the code.
However, I have to do this on 20 different locales, and I've been using Selenium RC with the .NET client quite handily.
Is there a functionality in Selenium to check for page source code?
You could use the getHtmlSource command, but what you're asking for isn't really as easy as it sounds. Unless you just want to look and say "Yup, the script is there!".
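If a "yes, the script is there" check is enough, a minimal sketch with the Selenium RC .NET client might look like this (the host, port, browser string, URL, and snippet text are all placeholders):

// Assumes the Selenium RC .NET client (Selenium namespace) and a running RC server.
ISelenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://example.com/");
selenium.Start();
selenium.Open("/");
// GetHtmlSource returns the full HTML source of the current page.
string source = selenium.GetHtmlSource();
bool scriptPresent = source.Contains("expectedSnippet");  // hypothetical snippet text
selenium.Stop();

You can then loop that over your 20 locale URLs.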
I'm trying to scrape a particular webpage which works as follows.
First the page loads, then it runs some sort of javascript to fetch the data it needs to populate the page. I'm interested in that data.
If I GET the page with Html Agility Pack, the script doesn't run, so I get what is essentially a mostly-blank page.
Is there a way to force it to run a script, so I can get the data?
You are getting what the server returns - the same as a web browser would. A web browser, of course, then runs the scripts. Html Agility Pack is an HTML parser only - it has no way to interpret the JavaScript or bind it to its internal representation of the document. If you wanted to run the script you would need a web browser.
The perfect answer to your problem would be a complete "headless" web browser: something that incorporates an HTML parser, a JavaScript interpreter, and a model that simulates the browser DOM, all working together. Basically, that's a web browser, minus the rendering part. At this time there isn't such a thing that works entirely within the .NET environment.
Your best bet is to use a WebBrowser control and actually load and run the page in Internet Explorer under programmatic control. This won't be fast or pretty, but it will do what you need to do.
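A rough sketch of that approach, assuming a Windows Forms context (the DoEvents polling loop is crude but common, and note that ReadyState being Complete does not guarantee asynchronous scripts have finished):

// Must run on an STA thread; the WebBrowser control hosts Internet Explorer.
var browser = new System.Windows.Forms.WebBrowser { ScriptErrorsSuppressed = true };
browser.Navigate("http://example.com/");  // placeholder URL
while (browser.ReadyState != System.Windows.Forms.WebBrowserReadyState.Complete)
    System.Windows.Forms.Application.DoEvents();  // pump messages so the page and its scripts can run
// By now the DOM should reflect the script-populated page.
string renderedHtml = browser.Document.Body.InnerHtml;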
Also see my answer to a similar question: Load a DOM and Execute javascript, server side, with .Net which discusses the available technology in .NET to do this. Most of the pieces exist right now but just aren't quite there yet or haven't been integrated in the right way, unfortunately.
You can use Awesomium for this: http://www.awesomium.com/. It works fairly well, but it has no x64 support and is not thread safe. I'm using it to scan some web sites 24x7; it runs fine for a couple of days at a time, but then it usually crashes.
I have been using Selenium alongside C# in Visual Studio 2013. I will make a call to:
driver.Navigate().GoToUrl("http://<insert webpage>");
...which opens the page in whichever WebDriver I choose to use.
From here, I will make calls to links/text boxes/menus as I need to.
However, I was wondering if there is a way to get the information from webpages without actually opening a browser, and if so, could someone explain or point me in the right direction? It would save time and speed up a lot of my programs. I know applications can get information remotely without opening a browser; I just don't know how the process works or whether Selenium alone provides that ability.
I apologize if this is the wrong place to ask this question.
It is not clear whether or not you need to interact with the web page (like clicking links or editing text), but here are two options:
You can use PhantomJS. It is a headless browser, and since there is no UI, execution may be faster. There is a Selenium driver for it.
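A minimal sketch with the PhantomJS driver from the Selenium .NET bindings (phantomjs.exe must be on the PATH or in the working directory; the URL is a placeholder):

using OpenQA.Selenium;
using OpenQA.Selenium.PhantomJS;

// No browser window is opened; PhantomJS runs headless.
IWebDriver driver = new PhantomJSDriver();
driver.Navigate().GoToUrl("http://example.com/");
string source = driver.PageSource;  // HTML after scripts have run
driver.Quit();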
You can use Html Agility Pack to parse the page and WebClient to download it. No Selenium is required in that case. Html Agility Pack will let you run XPath queries and find elements by class name or ID. But: you won't be able to manipulate the DOM structure the way you can with a real browser. It is just for parsing and navigating a static HTML page.
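A minimal sketch of that combination (the URL and XPath are placeholders):

using System;
using System.Net;
using HtmlAgilityPack;

// Download the raw HTML - no scripts are executed.
string html = new WebClient().DownloadString("http://example.com/");
var doc = new HtmlDocument();
doc.LoadHtml(html);
// XPath query: every anchor that has an href attribute.
var links = doc.DocumentNode.SelectNodes("//a[@href]");
if (links != null)
    foreach (HtmlNode link in links)
        Console.WriteLine(link.GetAttributeValue("href", ""));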
I have a good understanding of the DOM, HTML, etc., but I'm new to C#. What's currently the best way of downloading, then rendering (executing all JavaScript, DOM changes, etc.), and simulating user interaction with a webpage in C#?
I've seen HTML Agility Pack mentioned quite a few times, but it doesn't look like it's been updated since August 2012? Has anyone used it recently and encountered any problems? Does C# have anything built in for this?
Thanks!
First of all, HtmlAgilityPack is not for simulating user interaction with a web page. HtmlAgilityPack is an agile HTML parser that builds a read/write DOM and supports plain XPath or XSLT (you don't actually have to understand XPath or XSLT to use it, don't worry...).
HtmlAgilityPack does not support JavaScript. This is a very important point, because many developers are tripped up by the difference between the fully loaded page in the browser and the response returned by HtmlAgilityPack (or any other library used to make the raw request).
For user interaction, fully loading the web page, and web testing, I strongly recommend Selenium; Selenium automates browsers. Selenium has support for several programming languages (Java, C#, Ruby, Python, etc.); you can read more at the link above, which has very good documentation.
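For example, typing into a text box and clicking a link with the C# bindings (the locators here are hypothetical):

using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

IWebDriver driver = new FirefoxDriver();
driver.Navigate().GoToUrl("http://example.com/");      // placeholder URL
driver.FindElement(By.Id("search")).SendKeys("books"); // hypothetical text box id
driver.FindElement(By.LinkText("Next")).Click();       // hypothetical link text
driver.Quit();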
The only drawback of Selenium is that it opens a browser to do the work, but in some environments it can be run as a headless browser; you can read more about this at the following links:
Selenium Headless Automated Testing in Ubuntu
Headless Browser and scraping - solutions
I hope this helps you.
I'm writing a test for a webapp. At one point in the application the webpage is completed using Javascript.
For testing I'm using Visual Studio 2012 with NUnit and Selenium.
I want to check if the box with id=j_idt13:JNumber has the text value of sometext.
IJavaScriptExecutor js = driver as IJavaScriptExecutor;
string valoare = (string)js.ExecuteScript("return $('#j_idt13\\:JNumber').val();");
Assert.IsTrue(valoare.Equals("sometext"));
I keep getting this error:
"Syntax error, unrecognized expression: unsupported pseudo:JNumber".
What am I missing here?
I know you have something that works, but I'd like to caution you against using JavaScript to fetch the value of the element; in fact, in general it should be avoided in your tests except when there is no other way to do what you want. The reason is that Selenium is supposed to behave as a typical user would, and typical users don't type JavaScript into a page or interact with it directly. This goes double for jQuery, as your tests should not assume that jQuery exists on the page and is functioning. Selenium itself provides the ability to fetch the values of fields, so I'd recommend you rewrite your code to something like:
driver.FindElement(By.Id("j_idt13:JNumber")).GetAttribute("value");
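Your assertion from the question then becomes:

string valoare = driver.FindElement(By.Id("j_idt13:JNumber")).GetAttribute("value");
Assert.IsTrue(valoare.Equals("sometext"));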
After some trial and error I found something that works:
string valoare = (string)js.ExecuteScript("return document.getElementById('j_idt13\\:JNumber').value;");
Why it works: document.getElementById takes a plain ID string and does no selector parsing, so the colon in the ID is harmless. In the jQuery version, the C# \\ reaches JavaScript as a single backslash, which the JavaScript string literal silently drops, so jQuery sees the bare #j_idt13:JNumber and parses :JNumber as a pseudo-class; it would need \\\\: in the C# string to reach jQuery as an escaped colon. Otherwise it is basically the same command.
And jQuery does work with other commands, just not the selector I tried first.
I have been given a task to crawl/parse and index available books on many library web pages. I usually use HTML Agility Pack and C# to parse web site content. One of them is the following:
http://bibliotek.kristianstad.se/pls/bookit/pkg_www_misc.print_index?in_language_id=en_GB
If you search for a * (all books) it will return many lists of books, paginated by 10 books per page.
Typical web crawlers that I have found fail on this website. I have also tried to write my own crawler, which would go through all the links on the page and generate POST/GET variables to dynamically generate results. I haven't been able to do this either, mostly due to some 404 errors that I get (although I am certain that the generated links are correct).
The site relies on javascript to generate content, and uses a mixed mode of GET and POST variable submission.
I'm going out on a limb, but try observing the JavaScript GETs and POSTs with Fiddler; then you can base your crawling on those requests. Fiddler has FiddlerCore, which you can put in your own C# project. Using it, you could monitor requests made in the WebBrowser control and then save them for crawling, or whatever, later.
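A minimal FiddlerCore sketch for logging requests as they happen (exact member names can vary between FiddlerCore versions, so treat this as an outline):

using System;
using Fiddler;

// Log every completed session so the URLs/methods can be replayed later.
FiddlerApplication.AfterSessionComplete += delegate(Session oSession)
{
    Console.WriteLine(oSession.oRequest.headers.HTTPMethod + " " + oSession.fullUrl);
};
FiddlerApplication.Startup(8877, FiddlerCoreStartupFlags.Default);  // runs as a local proxy
Console.ReadLine();  // drive the site in the WebBrowser control meanwhile
FiddlerApplication.Shutdown();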
Going down the C# JavaScript interpreter route sounds like the 'more correct' way of doing this, but I wager it will be much harder and fraught with errors and bugs unless you have the simplest of cases.
Good luck.
FWIW, the C# WebBrowser control is very, very slow. It also doesn't support more than two simultaneous requests.
Using SHDocVw is faster, but is also semaphore limited.
Faster still is using MSHTML. Working code here: https://svn.arachnode.net/svn/arachnodenet/trunk/Renderer/HtmlRenderer.cs Username/Password: Public (doesn't have the request/rendering limitations that the other two have when run out of process...)
This is headless, so none of the controls are rendered. (Faster).
Thanks,
Mike
If you use the WebBrowser control in a Windows Forms application to open the page then you should be able to access the DOM through the HtmlDocument. That would work for the HTML links.
As for the links that are generated through Javascript, you might look at the ObjectForScripting property which should allow you to interface with the HTML page through Javascript. The rest then becomes a Javascript problem, but it should (in theory) be solvable. I haven't tried this so I can't say.
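A rough sketch of that interface (the bridge class, its method, and the injected script are hypothetical):

using System;
using System.Runtime.InteropServices;

[ComVisible(true)]  // required so the page's script can see the object
public class ScriptBridge
{
    public void ReportLink(string href) { Console.WriteLine(href); }  // hypothetical callback
}

// In the form hosting the WebBrowser control:
webBrowser1.ObjectForScripting = new ScriptBridge();
// Script on the page can now call back through window.external:
webBrowser1.Document.InvokeScript("eval", new object[] {
    "for (var i = 0; i < document.links.length; i++) window.external.ReportLink(document.links[i].href);" });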
If the site generates content with JavaScript, then you are out of luck. You need a full JavaScript engine usable in C# so that you can actually execute the scripts and capture the output they generate.
Take a look at this question: Embedding JavaScript engine into .NET -- but know that it will take "serious" effort to do what you need.
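For illustration only (the answer above doesn't name an engine), Jint is one embeddable JavaScript engine for .NET; a minimal sketch follows, keeping in mind it gives you an interpreter but no browser DOM, which you would still have to simulate:

using Jint;

// Run a script and read back a value it computed.
var engine = new Engine();
engine.Execute("var total = 2 + 3;");
var total = engine.GetValue("total");  // 5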
AbotX does JavaScript rendering for you. It's not free though.