Short version:
I am looking to make sure that a URL (partial match) is requested (client-side).
Long Version:
I am looking to automate part of my testing. Currently I use Fiddler2 to manually verify.
Here's the scenario:
User navigates to Site A
My app redirects using a tracking URL (seen in Fiddler's HTTP traffic)
User ends up on Site A, parameters now applied.
I would like to verify, in C#, that step 2 happened by doing a partial match (contains {string} for example).
Question:
How should I go about this? I have started looking into the HttpWebRequest class and FiddlerCore, but my preference for the simplest code possible (so other team members can update it if needed) leads me to ask what the users of Stack Overflow would recommend.
Take a look at SharpPcap. It's based on pcap (WinPcap on Windows), which is the packet capture library that is used by the popular Wireshark.
There is a really great tutorial on CodeProject with lots of example code to get you started: http://www.codeproject.com/Articles/12458/SharpPcap-A-Packet-Capture-Framework-for-NET
Once you have a hold of the packets (SharpPcap does capture, not parsing), you can use Packet.Net to parse the packets into something usable (HTTP communications, in your case).
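For illustration, here is a rough sketch of what that capture-and-check could look like. Treat it as an assumption-laden example rather than working code from the tutorial: the exact method and event names vary between SharpPcap/Packet.Net versions (for instance, older releases use TcpPacket.GetEncapsulated instead of Extract<T>), and the device index, the port filter and the "tracking" substring are all placeholders to swap for your own values.

using System;
using System.Text;
using PacketDotNet;
using SharpPcap;

class HttpSniffSketch
{
    static void Main()
    {
        // Placeholder: pick the capture device that matches your network adapter
        var device = CaptureDeviceList.Instance[0];
        device.OnPacketArrival += OnPacketArrival;
        device.Open(DeviceMode.Promiscuous, 1000);
        device.Filter = "tcp port 80"; // BPF filter: plain HTTP only
        device.StartCapture();

        Console.ReadLine();            // capture until Enter is pressed
        device.StopCapture();
        device.Close();
    }

    static void OnPacketArrival(object sender, CaptureEventArgs e)
    {
        // Parse the raw bytes, then pull out the TCP payload (the HTTP request text)
        var packet = Packet.ParsePacket(e.Packet.LinkLayerType, e.Packet.Data);
        var tcp = packet.Extract<TcpPacket>();
        if (tcp == null || tcp.PayloadData == null || tcp.PayloadData.Length == 0)
            return;

        // Crude partial match against the request line/headers
        var text = Encoding.ASCII.GetString(tcp.PayloadData);
        if (text.StartsWith("GET ") && text.Contains("tracking"))
            Console.WriteLine("Tracking request observed");
    }
}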
Edit: I didn't see #2 as an intermediate URL when I read the question; it looked like it was the (only) redirect action. Depending on your browser of choice and the type of redirect performed, you can use Selenium to read the page referrer and get the redirect.
IWebDriver driver; // Assigned elsewhere
IJavaScriptExecutor js = (IJavaScriptExecutor)driver;
// Execute JavaScript to read the referrer of the current page
var referrer = js.ExecuteScript("return document.referrer;");
I would recommend Selenium WebDriver for all your web site/app testing needs in C#. It integrates very nicely with NUnit, MSTest and other test frameworks - it's very easy to use.
With Selenium WebDriver, you will start an automated browser instance (Firefox, Chrome, Internet Explorer, PhantomJS and others) from your C# testing code. You will then control the browser with simple commands, like "go to url" or "enter text in input box" or "click button". See more in the API.
It doesn't require much from other developers either - they just run the test suite, and assuming they have the browser installed, it will work. I've used it successfully with hundreds of tests across a team of developers who each had different browser preferences (even for the testing, which we each tweaked) and on the team build server.
For this test, I would go to the url in step 1, then wait for a second, and read the url in step 3.
Here is some sample code, adapted from Introducing the Selenium-WebDriver API by Example. Since I don't know the URL or the {string} ("cheese" in this example) you are looking for, the sample hasn't changed much.
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
// Requires reference to WebDriver.Support.dll
using OpenQA.Selenium.Support.UI;

class RedirectThenReadUrl
{
    static void Main(string[] args)
    {
        // Create a new instance of the Firefox driver.
        // Notice that the remainder of the code relies on the interface,
        // not the implementation.
        // Further note that other drivers (InternetExplorerDriver,
        // ChromeDriver, etc.) will require further configuration
        // before this example will work. See the wiki pages for the
        // individual drivers at http://code.google.com/p/selenium/wiki
        // for further information.
        IWebDriver driver = new FirefoxDriver();

        // Notice navigation is slightly different than the Java version.
        // This is because 'get' is a keyword in C#.
        driver.Navigate().GoToUrl("http://www.google.com/");

        // Print the original URL
        System.Console.WriteLine("Page url is: " + driver.Url);

        // @kirbycope: In your case, the redirect happens here - you just have
        // to wait for the new page to load before reading the new values.

        // Wait for the page to load, timeout after 10 seconds
        WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
        wait.Until((d) => { return d.Url.ToLower().Contains("cheese"); });

        // Print the redirected URL
        System.Console.WriteLine("Page url is: " + driver.Url);

        // Close the browser
        driver.Quit();
    }
}
Sounds like you want to sniff HTTP traffic. You could use a packet capture driver like WinPcap, import that DLL and test, or use SharpPcap as @SimpleCoder mentioned.
The path of minimum effort would be to write a FiddlerScript add-on to check the request and redirect if necessary.
Follow Up:
I ended up using Telerik's proxy to send HTTP requests and parse the responses via C#. Here's the article that was used as a springboard:
https://docs.telerik.com/teststudio/advanced-topics/coded-samples/general/using-the-http-proxy
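For reference, a rough sketch of what that kind of proxy-based check can look like with FiddlerCore (the "tracking" fragment and the port are placeholders; the exact startup flags depend on the FiddlerCore version you reference):

using System;
using Fiddler;

class ProxyAssertSketch
{
    static bool sawTrackingRequest;

    static void Main()
    {
        // Flag every request whose URL contains the expected fragment (placeholder string)
        FiddlerApplication.BeforeRequest += session =>
        {
            if (session.fullUrl.Contains("tracking"))
                sawTrackingRequest = true;
        };

        // Start the proxy; the Default flags normally register it as the system proxy
        FiddlerApplication.Startup(8877, FiddlerCoreStartupFlags.Default);

        // ... drive the browser through steps 1-3 here (e.g. with Selenium) ...

        FiddlerApplication.Shutdown();
        Console.WriteLine(sawTrackingRequest
            ? "Tracking redirect observed"
            : "Tracking redirect NOT observed");
    }
}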
Related
I need to load a page in Firefox with Selenium and then rewrite the content of the page. It's essential that I use Firefox, so Chrome is not an option.
I tried the code below:
FirefoxDriver firefoxDriver = new FirefoxDriver(new FirefoxOptions() { AcceptInsecureCertificates = true });
IJavaScriptExecutor javaScriptExecutor = (IJavaScriptExecutor)firefoxDriver;
firefoxDriver.Navigate().GoToUrl("https://www.google.com");
javaScriptExecutor.ExecuteScript("document.write('a');");
But it gives me the error:
OpenQA.Selenium.WebDriverException: 'SecurityError: The operation is insecure.'
I need to know if there is any option in about:config or any way to make Firefox run insecure operations.
Document.write()
The Document.write() method writes a string of text to a document stream. Calling document.write() on a closed (loaded) document automatically calls document.open(), which clears the document.
For a long time, Firefox and Internet Explorer erased all JavaScript variables in addition to removing all nodes, but this is no longer the case. However, Google Chrome still does so.
Equivalent Python code:
from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_argument("start-maximized")
driver = webdriver.Chrome(options=options, executable_path=r'C:\WebDrivers\chromedriver.exe')
driver.get("https://google.com")
driver.execute_script("document.write('a');")
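For completeness, a rough C# equivalent of the same demo (assuming chromedriver is on the PATH; the question's FirefoxDriver can be substituted, but it is the one raising the SecurityError):

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class DocumentWriteDemo
{
    static void Main()
    {
        var options = new ChromeOptions();
        options.AddArgument("start-maximized");

        using (IWebDriver driver = new ChromeDriver(options))
        {
            driver.Navigate().GoToUrl("https://google.com");
            // Overwrites the freshly loaded document, same as the Python snippet above
            ((IJavaScriptExecutor)driver).ExecuteScript("document.write('a');");
        }
    }
}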
Gecko-specific notes
Starting with Gecko 1.9, this method is subject to the same same-origin policy as other properties, and does not work if doing so would change the document's origin.
Starting with Gecko 1.9.2, document.open() uses the principal of the document whose URI it uses, instead of fetching the principal off the stack. As a result, you can no longer call document.write() into an untrusted document from chrome, even using wrappedJSObject.
tl;dr
See Security check basics for more about principals.
References
You can find a couple of relevant detailed discussions in:
Uncaught DOMException: Blocked a frame with origin “http://localhost:8080” from accessing a cross-origin frame while listing the iframes in page
Is there way to disable CORS check using RemoteWebDriver for SauceLabs
I am trying to get strings from HTML using regular expression and it works on local html file. Only thing I need is to login on website using my program to get html from there.
The problem is that I tried logging in using 3 different code samples without luck (I found all 3 of them here). The website is HTTPS and also has no support for Internet Explorer. I don't want to use Fiddler or any debugging tool. I don't care about speed; I just want a simple browser opening, signing in, and getting the HTML of the displayed content.
Is there any way to open Chrome/Mozilla/Opera and transfer the displayed HTML to my program? Or, if that's impossible, is there some kind of universal way of signing in?
Is there any way to open chrome/mozilla/opera and transfer displayed HTML to my program?
You could use, for example, Selenium WebDriver for this. It will allow you to automate button clicks, text input, etc. on the target web page. I wouldn't call it a "debugging tool"; it's more like a testing framework. NuGet has all the packages you need:
Selenium WebDriver
Selenium WebDriver Support Classes
There is a really neat usage sample here:
// Needs: using System.IO; using OpenQA.Selenium.Chrome;
// Initialize the Chrome Driver
using (var driver = new ChromeDriver())
{
    // Go to the home page
    driver.Navigate().GoToUrl("https://yourdomainhere.net");

    // Get the page elements
    var userNameField = driver.FindElementById("username");
    var userPasswordField = driver.FindElementById("password");
    var loginButton = driver.FindElementByXPath("//input[@value='Login']");

    // Type user name and password
    userNameField.SendKeys("admin");
    userPasswordField.SendKeys("12345");

    // and click the login button
    loginButton.Click();

    // Extract the text and save it into result.txt
    var result = driver.FindElementByXPath("//div[@id='case_login']/h3").Text;
    File.WriteAllText("result.txt", result);
}
I am trying to get strings from HTML using regular expression
I have a small hunch that you... shouldn't. You can use the driver to extract the data you want from the page.
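For instance, if the rest of your pipeline really does want the raw HTML, the driver already exposes the rendered page (a small sketch reusing the driver variable from the sample above):

// Grab the rendered HTML of the current page and hand it to your existing parsing code
string html = driver.PageSource;
File.WriteAllText("page.html", html);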
I am using Selenium with C# and I was wondering if there is any way in the test to handle the response status code. I need to check for status code 500 Internal Server Error.
I CAN match the displayed text but I do not want to do that as it can break in the future.
Selenium does not have native support for getting the HTTP status code; that feature request has been open for a long time. You need to find a third-party library or use something else.
And, since you are using C#, you can use the Fiddler application along with a Selenium proxy, as suggested by JimEvans here. Note that he is one of the core contributors to the Selenium C# bindings. He also has a public GitHub repository showing an example here.
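As a sketch of that approach, the automated browser can be pointed at the Fiddler proxy when the driver is created (localhost:8888 is Fiddler's default listening port; adjust it to match your setup):

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

// Route the automated browser's traffic through Fiddler so it can record the status codes
var proxy = new Proxy
{
    HttpProxy = "localhost:8888",
    SslProxy = "localhost:8888"
};

var options = new ChromeOptions { Proxy = proxy };
IWebDriver driver = new ChromeDriver(options);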
I would suggest that you drop Selenium. Just use the HttpStatusCode enumeration to check (or get) the status. You will find more info at https://msdn.microsoft.com/en-us/library/system.net.httpstatuscode.aspx
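A minimal sketch of that idea, assuming the test already knows the URL it needs to probe (the URL below is a placeholder):

using System;
using System.Net;

class StatusCodeCheck
{
    static void Main()
    {
        // Issue a plain HTTP request to the page under test, outside the browser
        var request = (HttpWebRequest)WebRequest.Create("https://example.com/page-under-test");
        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("Status: " + (int)response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            // 4xx/5xx responses surface as a WebException; the code is on the response object
            var error = ex.Response as HttpWebResponse;
            if (error != null && error.StatusCode == HttpStatusCode.InternalServerError)
                Console.WriteLine("Got 500 Internal Server Error");
        }
    }
}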
I am using Selenium (2.24) to generate unit tests (for the Visual Studio unit test framework). While using the C# WebDriver for Firefox, it appears that the browser fired up by the driver is not finding my website's cookies via JavaScript (I have a JavaScript file included in the site that looks for cookies and lets me know if they are found). Also, it is not using the browser's image cache, and is always requesting new images from the server. This behavior does not happen when I run my site from a "normal" (not launched by Selenium) Firefox.
The strange thing is that calling the below code in my unit test DOES return my cookie (it just can't be found by my JavaScript)
driver.Manage().Cookies.GetCookieNamed("MyCookie");
How can I configure the driver to respect my cookies and use the browser's image cache? This functionality is key to testing my website.
By default the FirefoxDriver will create a new anonymous profile each time it starts Firefox. If you want it to use an existing profile, you need to tell it to.
In Java you do it like so:
ProfilesIni allProfiles = new ProfilesIni();
FirefoxProfile profile = allProfiles.getProfile("MyProfile");
WebDriver driver = new FirefoxDriver(profile);
I'm assuming there's something similar in C#
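The C# bindings do have an equivalent (a profile manager); a sketch along these lines should work with the 2.x-era bindings the question mentions ("MyProfile" is whatever you named your existing Firefox profile; newer bindings attach the profile via FirefoxOptions instead):

using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

// Look up an existing Firefox profile by name and start the driver with it
var profileManager = new FirefoxProfileManager();
FirefoxProfile profile = profileManager.GetProfile("MyProfile");
IWebDriver driver = new FirefoxDriver(profile);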
For cookies: if a cookie is marked as "HTTP only", JavaScript on the page will not be able to see it. As a result, any code that executes JavaScript on the page will not see that particular cookie.
You can confirm this by using an HTTP debugger (e.g. Fiddler) to see whether the cookie is set with the HttpOnly property. You can also check whether a script running on the page, via the dev tools or by typing javascript:alert(...) in the address bar, can see the cookie (document.cookie).
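One quick way to check this from the test itself is to compare what Selenium's cookie jar sees with what page JavaScript sees (a sketch reusing the driver and cookie name from the question):

// Selenium's cookie API typically sees HttpOnly cookies, while page JavaScript does not,
// so a cookie visible here but missing from document.cookie is likely HttpOnly.
var seleniumCookie = driver.Manage().Cookies.GetCookieNamed("MyCookie");
var jsCookies = (string)((IJavaScriptExecutor)driver).ExecuteScript("return document.cookie;");
bool visibleToJs = jsCookies != null && jsCookies.Contains("MyCookie=");

if (seleniumCookie != null && !visibleToJs)
    Console.WriteLine("MyCookie appears to be HttpOnly - page scripts cannot read it");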
I have to automate a file download activity from a website (similar to, let's say, yahoomail.com). To reach the page that has this file download link, I have to log in, jump from page to page to provide some parameters like dates etc., and finally click on the download link.
I am thinking of three approaches:
Using WatiN and developing a Windows service that periodically executes some WatiN code to traverse the pages and download the file.
Using AutoIt (not much of an idea about this one).
Using a simple HTML parsing technique (there are several questions here, e.g., how to maintain a session after logging in, and how to log out afterwards).
I use Scrapy (scrapy.org), a Python library. It's actually quite good: it's easy to write spiders and it's very extensive in its functionality. Scraping sites after login is available in the package.
Here is an example of a spider that would crawl a site after authentication.
from scrapy.spider import BaseSpider
from scrapy.http import FormRequest
from scrapy import log

class LoginSpider(BaseSpider):
    domain_name = 'example.com'
    start_urls = ['http://www.example.com/users/login.php']

    def parse(self, response):
        return [FormRequest.from_response(response,
                    formdata={'username': 'john', 'password': 'secret'},
                    callback=self.after_login)]

    def after_login(self, response):
        # check that the login succeeded before going on
        if "authentication failed" in response.body:
            self.log("Login failed", level=log.ERROR)
            return
        # continue scraping with the authenticated session...
I used mechanize for Python with success for a few things. It's easy to use and supports HTTP authentication, form handling, cookies, automatic HTTP redirection (30X), ... Basically the only thing missing is JavaScript, but if you need to rely on JS you're pretty much screwed anyway.
Try a Selenium script, automated with Selenium Remote Control.
Free Download Manager is great for crawling, and you could use wget.