I'm thinking of creating a web automation testing tool using Selenium WebDriver with Visual Studio (C# ASP.NET).
When I create test cases, I have to supply the correct ID of each 'a' link so that the tool can click the defined links.
However, I'd like to make this an automatic process: for example, clicking random 'a' links on the rendered page for 5 minutes, so that the tool keeps navigating pages until it finds a broken link.
Is this possible?
This would be possible using the page object framework, as long as your links have something in common that lets you identify them.
You could initialise the page when you first land on it and use an XPath selector to collect all the links into a list, e.g.
[FindsBy(How = How.XPath, Using = "xpathToIdentifyAllLinks")]
public IList<IWebElement> Links { get; set; }
Since you have a common way to find links, all you need to do is randomly select something from the Links list and click it, then re-initialise the page and repeat until an exception gets thrown.
The massive downside is that when an exception is thrown because a link is broken, it will be hard to reproduce without custom logging in place, since you won't know what your test was doing.
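To make that concrete, here is a minimal sketch of the random-click loop with the logging the answer recommends. The `LinkCrawler` class name and the `//a[@href]` XPath are illustrative placeholders, not part of any framework:

```csharp
using System;
using System.Collections.Generic;
using OpenQA.Selenium;

public class LinkCrawler
{
    private readonly IWebDriver _driver;
    private readonly Random _random = new Random();

    public LinkCrawler(IWebDriver driver) => _driver = driver;

    public void ClickRandomLinksFor(TimeSpan duration)
    {
        var deadline = DateTime.UtcNow + duration;
        while (DateTime.UtcNow < deadline)
        {
            // Re-query the links on every iteration, since each click
            // loads a new page and invalidates the old elements.
            IList<IWebElement> links = _driver.FindElements(By.XPath("//a[@href]"));
            if (links.Count == 0) break;

            IWebElement link = links[_random.Next(links.Count)];
            // Log before clicking, so a broken link is reproducible.
            Console.WriteLine($"Clicking: {link.GetAttribute("href")}");
            link.Click();
        }
    }
}
```

A broken-link check (e.g. inspecting the page title or an HTTP status via a separate request) would go inside the loop, right after the click.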
I have been trying to open different subreddits using selenium, but I can't seem to figure it out. I want to be able to open a specified number of subreddits, from a specified search term.
Usually, I would just put in the url, but I can't do that, because when the user inputs a different keyword, it would come up with different results. Here's the example link.
When I inspect the first 3 subreddits (the ones I want to click), I can't see a reliable way to differentiate them other than the subreddit name (which I can't use, since users will enter different search terms).
Any help would be much appreciated!
Using Visual Studio, C#, and selenium
You can locate all such links using a common locator and then click them one by one:
IList<IWebElement> reditLinks = driver.FindElements(By.XPath("//span[text()='Communities and users']//following-sibling::div//a//div//div[contains(@class,'_2torGbn')]"));
// For the current page, this returns a list of 3 elements.
foreach (IWebElement reditLink in reditLinks)
{
    reditLink.Click();
}
Note: I am no expert in C# (I am more of a Java/Python person), but the idea here will work.
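One caveat worth sketching: clicking the first link navigates away, which leaves the remaining elements stale. A common workaround is to collect the href values first and then visit each URL. The XPath below is the one from the answer above, shortened to the anchor itself, and may need adjusting for the live page:

```csharp
using System.Collections.Generic;
using System.Linq;
using OpenQA.Selenium;

IList<string> urls = driver
    .FindElements(By.XPath("//span[text()='Communities and users']//following-sibling::div//a"))
    .Select(a => a.GetAttribute("href"))
    .ToList();

foreach (string url in urls)
{
    // Navigating directly avoids stale-element exceptions that clicking
    // elements from an already-abandoned page would cause.
    driver.Navigate().GoToUrl(url);
    // ... assert on the subreddit page here ...
}
```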
I'm aware that I can navigate backwards through my history using the IWebDriver.Navigate().Back() method, but what if I just need the URL of the last page visited? Is there a way to grab that from the WebDriver, without actually navigating there?
To be clear, this is a question about Selenium WebDriver, and has nothing to do with JavaScript.
As suggested by several, what I ended up doing is wrapping the Selenium WebDriver in my own class that monitors all navigation and keeps its own history. It seems redundant, given that somewhere deep in the bowels of WebDriver another history already exists, but since the tester doesn't have access to it, I see no other way of achieving this goal.
Thanks to all who contributed their thoughts and suggestions!
You can simply keep the previous page URL in a variable and update/pass it to the code that needs it, or even keep a collection of all visited URLs and take the last item. That gives you a history without any need for JS hacks. Storing and sharing state is a valid pattern, already implemented in some frameworks, such as SpecFlow's ScenarioContext; the previous page URL is then available to all your steps/code for each test.
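The wrapper approach the asker settled on can be sketched as a small decorator around the driver that records every URL before navigating away. The class and member names here are illustrative, not from any framework:

```csharp
using System.Collections.Generic;
using System.Linq;
using OpenQA.Selenium;

public class TrackingNavigator
{
    private readonly IWebDriver _driver;
    private readonly List<string> _history = new List<string>();

    public TrackingNavigator(IWebDriver driver) => _driver = driver;

    public void GoToUrl(string url)
    {
        _history.Add(_driver.Url);   // remember where we came from
        _driver.Navigate().GoToUrl(url);
    }

    // URL of the last page visited, or null if there is no history yet.
    public string PreviousUrl => _history.LastOrDefault();

    public IReadOnlyList<string> History => _history;
}
```

As long as all navigation in your tests goes through `GoToUrl`, `PreviousUrl` answers the original question without touching WebDriver's internal history.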
I need to create a "speed bump" that issues a warning whenever a user clicks a link that would direct them to a different website (not on the domain). Is there any way to create a custom Orchard workflow activity that activates whenever a link on the website is clicked? I'm having trouble getting C# to fire an event whenever a link (anchor tag) on the page is clicked. I can't just add an onServerClick event to every anchor tag, or add an event handler to anchor tags with specific IDs, because it needs to fire on all anchor tags, many of which are dynamically assigned an ID when created.
Another option I was toying with would be to create a custom workflow task that will search any content item for links and then add a speedbump to any link that is determined to lead to an external url. Is it possible to use C# to search the contents of any content item upon creation/publish for anchor tags and then alter the tag somehow to include a speedbump?
As a side note I also need to be able to whitelist urls so a third party can't use the speedbump to direct the user to a malicious website.
I've been stumped on this for quite some time; any help would be greatly appreciated.
One way to do this is to add a bit of client-side script to intercept the A tags' click events and handle them according to the logic you want to implement. The advantages are performance and ease of implementation. Very, very few people disable JavaScript, and those users who do can presumably read a domain name in the address bar, so there is no real downside.
Another way, if you don't want to use JavaScript, is to write a server-side filter that parses the response being output, finds all A tags, and replaces their URLs on the fly with the URL of a special controller, passing the actual URL as a querystring parameter. The drawbacks of this method are that it will be a significant drag on server performance and it will be hard to write.
But the best way to solve the issue, by far, for you and your users, is to convince your legal department that this is an extremely bad idea and that there is, in reality, no legal issue here (but I may be wrong about this: not a lawyer (this is not legal advice)).
There is a website, "www.localbanya.com", from which I want to grab HTML information. They list products, and the display structure is:
First, around 8-10 products are displayed on page load, and
more products are generated as the user scrolls down.
Since this happens via JavaScript, I am not able to get the whole page source using WebClient.
I want to know whether there is any way to update the page source while using the WebClient class in .NET, or any other alternative I can use to retrieve the whole page's HTML at once.
You can refer to the localbanya product page for reference.
Any help will be appreciated.
WebClient obviously doesn't run the JavaScript, so you are going to need some sort of headless browser to do it.
There are many options, though I don't know of a C# or .NET implementation offhand.
You may look into PhantomJS and other headless browsers, which replicate what a normal browser does and can be driven by scripts.
Also refer to this question:
Headless browser for C# (.NET)?
You can also run something like Fiddler to see what requests are made from the page when scrolling down, reverse-engineer how the data is retrieved, and replicate that with a WebClient if possible.
Hope this helps.
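If Fiddler shows the scroll-triggered products coming from a paged endpoint, you can call it directly with the WebClient class the asker is already using. Everything about the URL and its parameters below is hypothetical; substitute whatever the captured request actually looks like:

```csharp
using System;
using System.Net;

var client = new WebClient();
// Some sites reject requests without a browser-like User-Agent.
client.Headers[HttpRequestHeader.UserAgent] = "Mozilla/5.0";

// Hypothetical paged endpoint discovered by watching the scroll requests:
for (int page = 1; page <= 5; page++)
{
    string body = client.DownloadString(
        $"http://www.localbanya.com/products?page={page}");
    Console.WriteLine(body.Length);
}
```

This only works if the infinite scroll is backed by plain HTTP requests; if the responses are signed or session-bound, a headless browser remains the safer route.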
So I've been learning how to use Selenium in C# to do some automated testing for a project. However, I've hit a roadblock on this one. I have been trying to figure out a way to click the following link on this webpage.
Here is what I'm trying to target:
<A class='PortalLink' HREF="https://mywebsite.com/myprograms/launchprogram.jsp?" onClick="setUser('login','password');"><span>MyProgram</span></A>
Searching by ClassName hasn't turned up anything. Although there are multiples, I just wanted to see if I could detect the presence of them.
By.ClassName("PortalLink")
I tried a href based search using CssSelector, but this failed as well.
By.CssSelector("[href*='https://mywebsite.com/myprograms/launchprogram.jsp?']")
Lastly, I tried to use XPath and search by class and span content, but this failed to find the link as well.
By.XPath("//A[contains(@class,'PortalLink') and span[text()='MyProgram']]")
The webpage in question contains 2 frames, and I've tried both.
I'm waiting 200 seconds before timing out. What am I doing incorrectly? Thanks in advance for any help!
Assuming that this element is not appended to the DOM during ajax, your statement should be
By.CssSelector("a.PortalLink[href*='launchprogram.jsp']")
If there are multiple of these links, then we'll need to go further up the parent-child hierarchy, since this link has no other attributes that make it unique.
If you can post the parent HTML of this link, we can suggest more options.
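Since the page has two frames, it is also worth making sure the search happens inside the right frame, combined with an explicit wait. A minimal sketch, assuming the link lives in the first frame (the frame index and timeout are placeholders, and `ExpectedConditions` here comes from the SeleniumExtras.WaitHelpers package):

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;
using SeleniumExtras.WaitHelpers;

// Always start from the top-level document before switching frames.
driver.SwitchTo().DefaultContent();
driver.SwitchTo().Frame(0);   // or the frame's name/id

var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(20));
IWebElement link = wait.Until(
    ExpectedConditions.ElementToBeClickable(
        By.CssSelector("a.PortalLink[href*='launchprogram.jsp']")));
link.Click();
```

If the element is still not found, repeating the same wait after switching to the other frame quickly tells you which frame actually contains the link.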
Can you try these?
//span[contains(text(),'MyProgram')]
//span[contains(text(),'MyProgram')]/..