How to open multiple links in different windows using Selenium and C#

I have been trying to open different subreddits using Selenium, but I can't seem to figure it out. I want to be able to open a specified number of subreddits from a specified search term.
Usually I would just put in the URL, but I can't do that here, because when the user inputs a different keyword the results will be different. Here's the example link.
When I use inspect on the first 3 subreddits (the ones I want to click), I can't see any real way to differentiate them other than the subreddit name (which I can't use, as people will be using different search terms).
Any help would be much appreciated!
Using Visual Studio, C#, and Selenium.

You can locate all such links on the page using a common locator and then click them one by one:
// Find all matching subreddit links with a shared XPath (note @class, not #class).
IList<IWebElement> redditLinks = driver.FindElements(By.XPath("//span[text()='Communities and users']//following-sibling::div//a//div//div[contains(@class,'_2torGbn')]"));
// For the current page, this returns a list of 3 elements.
foreach (IWebElement redditLink in redditLinks)
{
    // Clicking navigates the current tab; see the sketch below for opening each result in its own window.
    redditLink.Click();
}
Note: I am no expert in C#; I am more of a Java/Python person, but the idea here will work.
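Since the question is specifically about opening each result in a different window, note that clicking the links in place navigates the current tab and leaves the remaining elements stale. A hedged alternative (a sketch only, assuming Selenium 4 for .NET, that driver is your existing IWebDriver, and a variant of the XPath above that targets the <a> element itself; the _2torGbn class name comes from Reddit's current markup and may change) is to collect the link URLs first and then open each one in its own tab or window:
using System.Collections.Generic;
using System.Linq;
using OpenQA.Selenium;

// Collect the hrefs up front so later navigation cannot invalidate the elements.
IReadOnlyCollection<IWebElement> links = driver.FindElements(
    By.XPath("//span[text()='Communities and users']//following-sibling::div//a"));
List<string> urls = links.Select(l => l.GetAttribute("href")).ToList();

string originalWindow = driver.CurrentWindowHandle;
foreach (string url in urls)
{
    // Selenium 4: open a fresh tab (use WindowType.Window for a separate window instead).
    driver.SwitchTo().NewWindow(WindowType.Tab);
    driver.Navigate().GoToUrl(url);
    driver.SwitchTo().Window(originalWindow);   // switch back before opening the next one
}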

Related

C# ASP.Net Selenium Random Clicks

Now, I'm thinking of creating a web test automation tool using Selenium WebDriver with Visual Studio (C# ASP.NET).
When I create test cases, I have to supply the correct ID of each 'a' link so that the tool can click the defined links.
However, I'd like to make this an automatic process, for example clicking any 'a' link on the rendered page randomly for 5 minutes. That means the tool will keep rendering pages until it finds a broken link.
Is this possible?
This would be possible using the page object framework, as long as your links have something in common that can be used to identify them.
You could initialise the page when you first land on it and possibly use an XPath selector to identify all links and put them into a list, e.g.
[FindsBy(How = How.XPath, Using = "xpathToIdentifyAllLinks")]
public IList<IWebElement> Links { get; set; }
Since you have a common way to find links, all you need to do is randomly select something from the Links list and click it, then reinitialise the page and repeat until an exception gets thrown (a rough sketch of that loop follows below).
The massive downside to this is that if an exception does get thrown because a link is broken, it will be hard to reproduce without custom logging in place, since you won't know what your test has been doing.
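For illustration, here is a minimal sketch of that random-click loop without the page-object attributes, assuming every candidate link can be found with a plain //a[@href] locator, that driver is your existing IWebDriver, and that the run is time-boxed to 5 minutes:
using System;
using System.Collections.Generic;
using System.Linq;
using OpenQA.Selenium;

var random = new Random();
var visited = new List<string>();                      // simple log so a failure can be reproduced later
DateTime stopAt = DateTime.UtcNow.AddMinutes(5);

while (DateTime.UtcNow < stopAt)
{
    // Re-find the links on every iteration; the previous click navigated to a new page.
    IReadOnlyCollection<IWebElement> links = driver.FindElements(By.XPath("//a[@href]"));
    if (links.Count == 0) break;

    IWebElement link = links.ElementAt(random.Next(links.Count));
    visited.Add(link.GetAttribute("href"));            // record where we are about to go
    link.Click();
}
Logging each visited href, as above, at least partially addresses the reproducibility problem mentioned earlier.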

Connecting To A Website To Look Up A Word(Compiling Mass Data/Webcrawler)

I am currently developing a word-completion application in C#, and after getting the UI up and running, keyboard hooks set, and other things of that nature, I came to the realization that I need a word list. The only issue is, I can't seem to find one with the appropriate information. I also don't want to spend an entire week formatting and gathering a word list by hand.
The information I want is something like "TheWord, the definition, verb/etc."
So, it hit me. Why not download a basic word list with nothing but words (already did this; there are about 109,523 words), write a program that iterates through every word, connects to the internet, retrieves the data (definition etc.) from some arbitrary site, and creates XML data from that information? It could be 100% automated, and I would only have to wait for maybe an hour, depending on my internet connection speed.
This, however, brought me to a few questions.
How should I connect to a site to look up these words? << This is my actual question.
How would I read this information from the website?
Would I piss off my ISP or the website for that matter?
Is this a really bad idea? Lol.
How do you guys think I should go about this?
EDIT
Someone noticed that Dictionary.com uses the word as a suffix in the URL. This will make it easy to iterate through the word file. I also see that the webpage is served as XHTML (or maybe just HTML). Here is the source for the word "Cat": http://pastebin.com/hjZj6AC1
For what you marked as your actual question: you just need to download the page and pull out the data you need.
A great tool for this is CsQuery, which lets you use jQuery selectors.
You could do something like this:
// CreateFromUrl downloads and parses the page; the selector below is a placeholder for whatever element holds the definition on the site you scrape.
var dom = CQ.CreateFromUrl("http://www.jquery.com");
string definition = dom.Select(".definitionDiv").Text();
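Putting the pieces together, a hedged sketch of the whole loop might read the word list, download each definition page with CsQuery, and write the results out as XML. The Dictionary.com URL pattern (taken from the edit above) and the .definitionDiv selector are assumptions/placeholders, not verified against the live site:
using System;
using System.IO;
using System.Threading;
using System.Xml.Linq;
using CsQuery;

// Assumed input: a plain-text word list, one word per line.
string[] words = File.ReadAllLines("wordlist.txt");
var root = new XElement("Words");

foreach (string word in words)
{
    CQ dom = CQ.CreateFromUrl("http://www.dictionary.com/browse/" + word);   // URL pattern is an assumption
    string definition = dom.Select(".definitionDiv").Text();                 // selector is a placeholder

    root.Add(new XElement("Word",
        new XAttribute("Text", word),
        new XElement("Definition", definition)));

    Thread.Sleep(500);   // throttle requests so you don't hammer the site (or annoy your ISP)
}

new XDocument(root).Save("words.xml");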

How do I select individual words inside a Google document using selenium webdriver (C# or Java)

I'm currently testing content within Google Documents using Selenium WebDriver. Some of my tests involve selecting individual words within a Google document and then performing some action against them, such as bolding the word or changing the font type for that specific word.
I would simply like to be able to select a word like this:
http://s10.postimg.org/9x3d4f1q1/image.png
And here is the code returned from the Google document:
http://s24.postimg.org/e4zfocy9x/image.png
I have tried using send keys to send a Ctrl+A command, and this works for me, but the problem is that I need to do a little housekeeping prior to running my test by creating a document with one word inside it. That kind of defeats the purpose of automating this.
I have tried using substring to get specific words but then I can't perform any action on the String as it will not be a web element.
Would someone be so kind and point me in the right direction? Thanks very much for any help. It is much appreciated.
Selenium can only manipulate WebElements.
In your example you won't be able to manipulate only "is", which is text, not an HTML node.
The best you can do is select the <span>:
driver.findElement(By.xpath("//span[contains(text(),'This is a paragraph')]"));
and do whatever you want with it.
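As a hedged follow-up (a sketch only, not verified against Google Docs' current editor, which may not expose plain <span> nodes for its text), here is the C# equivalent of the line above plus a double-click via Actions, since double-clicking selects the word under the cursor in most editors:
using OpenQA.Selenium;
using OpenQA.Selenium.Interactions;

// C# equivalent of the Java line above.
IWebElement span = driver.FindElement(
    By.XPath("//span[contains(text(),'This is a paragraph')]"));

// Double-clicking selects the word under the cursor; here that is whichever word
// sits at the centre of the span, so treat this as a rough starting point only.
new Actions(driver).DoubleClick(span).Perform();

// With a selection in place, a keyboard shortcut such as Ctrl+B could bold it.
new Actions(driver).KeyDown(Keys.Control).SendKeys("b").KeyUp(Keys.Control).Perform();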

Finding element using Selenium Web Driver in C#

So I've been learning how to use Selenium in C# to do some automated testing for a project. However, I've hit a roadblock on this one. I have been trying to figure out a way to click the following link on this webpage.
Here is what I'm trying to target:
<A class='PortalLink' HREF="https://mywebsite.com/myprograms/launchprogram.jsp?" onClick="setUser('login','password');"><span>MyProgram</span></A>
Searching by class name hasn't turned up anything. Although there are multiple such links, I just wanted to see if I could detect their presence.
By.ClassName("PortalLink")
I tried an href-based search using CssSelector, but this failed as well.
By.CssSelector("[href*='https://mywebsite.com/myprograms/launchprogram.jsp?']")
Lastly, I tried to use XPath and search by class and span content, but this failed to find the link as well.
By.XPath("//A[contains(@class,'PortalLink') and span[text()='MyProgram']]")
The webpage in question contains 2 frames; I've tried both.
I'm waiting 200 seconds before timing out. What am I doing incorrectly? Thanks in advance for any help!
Assuming that this element is not appended to the DOM during an AJAX call, your statement should be:
By.CssSelector("a.PortalLink[href*='launchprogram.jsp']")
If there are multiple of these links, then we'll need to go further up the parent-child hierarchy, since this link has no other attributes that make it unique.
If you can post the parent HTML of this link, then we can suggest more options.
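Because the page has two frames and the script already waits a long time before timing out, one hedged sketch (frame indices 0 and 1 and the 200-second timeout come from the question; adjust to the real frame names, and driver is your existing IWebDriver) is to switch into each frame in turn and poll for the selector above with an explicit wait:
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

IWebElement FindPortalLink(IWebDriver driver)
{
    var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(200));
    By locator = By.CssSelector("a.PortalLink[href*='launchprogram.jsp']");

    foreach (int frameIndex in new[] { 0, 1 })         // the question mentions two frames
    {
        driver.SwitchTo().DefaultContent();            // always start from the top-level document
        driver.SwitchTo().Frame(frameIndex);
        try
        {
            // Poll until at least one match appears in this frame, then return it.
            wait.Until(d => d.FindElements(locator).Count > 0);
            return driver.FindElement(locator);
        }
        catch (WebDriverTimeoutException)
        {
            // Not found in this frame within the timeout; try the next one.
        }
    }
    throw new NoSuchElementException("PortalLink not found in either frame.");
}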
Can you try these?
//span[contains(text(),'MyProgram')]
//span[contains(text(),'MyProgram')]/..

Is it possible to return the URL of a website in an external browser(i.e firefox) using C#?

I'm trying to make a program that restricts certain websites by returning the URL of the current page and then searching that string for certain keywords. However, as I'm not accustomed to working with any C# functionality that monitors system activity outside of the C# environment, I don't know if this is even remotely possible. Can anyone shed some light on this?
Following the comment discussion, the HtmlAgilityPack (http://html-agility-pack.net/) can be used to process (even broken and malformed) HTML documents with ease. You can walk the DOM tree and inspect or modify href attributes, for example, which sounds like it gets you halfway towards solving your problem.
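As a hedged illustration of that half of the problem (a sketch only; it assumes you can already obtain the page URL from the external browser, which HtmlAgilityPack itself does not do, and the keyword list is hypothetical), loading a page and scanning its links for restricted keywords might look like this:
using System;
using System.Linq;
using HtmlAgilityPack;

string[] blockedKeywords = { "gambling", "example-blocked-term" };   // hypothetical keyword list

// Load and parse the page; HtmlAgilityPack tolerates broken/malformed HTML.
var doc = new HtmlWeb().Load("http://example.com/some-page");

// Walk every <a href="..."> in the DOM tree and check it against the keyword list.
var suspiciousLinks = doc.DocumentNode
    .SelectNodes("//a[@href]")
    ?.Select(a => a.GetAttributeValue("href", ""))
    .Where(href => blockedKeywords.Any(k => href.IndexOf(k, StringComparison.OrdinalIgnoreCase) >= 0))
    .ToList();

if (suspiciousLinks != null && suspiciousLinks.Count > 0)
{
    Console.WriteLine("Blocked keywords found in links:");
    suspiciousLinks.ForEach(Console.WriteLine);
}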
