I am using Selenium with C# and I was wondering if there is any way in a test to handle the response status code. I need to check for status code 500, Internal Server Error.
I CAN match the displayed text, but I do not want to do that as it could break in the future.
Selenium does not have native support for getting the HTTP status code; that feature request has been open for a long time. You will need a third-party library or another approach.
And since you are using C#, you can use the Fiddler application along with a Selenium proxy, as suggested by JimEvans here. Note that he is one of the core contributors to the Selenium C# bindings. He also has a public GitHub repository showing an example here.
I would suggest that you drop Selenium for this check. Just use the HttpStatusCode enumeration to check (or get) the status. You will find more info at https://msdn.microsoft.com/en-us/library/system.net.httpstatuscode.aspx
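As a minimal sketch: the HttpStatusCode values map directly to the numeric HTTP codes, and a 500 from HttpWebRequest surfaces as a WebException whose Response carries the status (the request itself is left as a comment, since the URL is yours):

```csharp
using System;
using System.Net;

class StatusCodeCheck
{
    static void Main()
    {
        // HttpStatusCode values map directly to the numeric HTTP codes.
        Console.WriteLine((int)HttpStatusCode.InternalServerError); // prints 500

        // With HttpWebRequest, a 500 surfaces as a WebException whose
        // Response carries the status code (sketch only, no real request here):
        //
        // var request = (HttpWebRequest)WebRequest.Create(url);
        // try
        // {
        //     var response = (HttpWebResponse)request.GetResponse();
        // }
        // catch (WebException ex)
        // {
        //     var status = ((HttpWebResponse)ex.Response).StatusCode;
        //     if (status == HttpStatusCode.InternalServerError)
        //     {
        //         // handle the 500
        //     }
        // }
    }
}
```

The commented pattern assumes a `request` built from your own URL; `ex.Response` can be null for network-level failures, so check it before casting in real code.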
Related
Short version:
I am looking to make sure that a URL (partial match) is requested (client-side).
Long Version:
I am looking to automate part of my testing. Currently I use Fiddler2 to manually verify.
Here's the scenario:
User navigates to Site A
My app redirects using a tracking URL (seen in Fiddler's HTTP traffic)
User ends up on Site A, parameters now applied.
I would like to verify, in C#, that step 2 happened by doing a partial match (contains {string} for example).
Question:
How should I go about this? I have started looking into the HttpWebRequest class and FiddlerCore, but my preference for the simplest code possible (so other team members can update it if needed) led me to ask what the users of StackOverflow would recommend.
Take a look at SharpPcap. It's based on pcap (WinPcap on Windows), which is the packet capture library that is used by the popular Wireshark.
There is a really great tutorial on CodeProject with lots of example code to get you started: http://www.codeproject.com/Articles/12458/SharpPcap-A-Packet-Capture-Framework-for-NET
Once you have a hold of the packets (SharpPcap does capture, not parsing), you can use Packet.Net to parse the packets into something usable (HTTP communications, in your case).
Edit: I didn't see #2 as an intermediate URL when I read the question; it looked like it was the (only) redirect action. Depending on your browser of choice and the type of redirect performed, you can use Selenium to read the page referrer and get the redirect:
IWebDriver driver; // Assigned elsewhere
IJavaScriptExecutor js = (IJavaScriptExecutor)driver;
// Execute JavaScript in the page and capture its return value
var referrer = js.ExecuteScript("return document.referrer;");
I would recommend Selenium Webdriver for all your web site/app testing needs in C#. It integrates very nicely with NUnit, MSTest and other test frameworks - it's very easy to use.
With Selenium Webdriver, you will start an automated browser instance (Firefox, Chrome, Internet Explorer, PhantomJS and others) from your C# testing code. You will then control the browser with simple commands, like "go to url" or "enter text in input box" or "click button". See more in the API.
It doesn't require much from other developers either - they just run the test suite, and assuming they have the browser installed, it will work. I've used it successfully with hundreds of tests across a team of developers who each had different browser preferences (even for the testing, which we each tweaked) and on the team build server.
For this test, I would go to the url in step 1, then wait for a second, and read the url in step 3.
Here is some sample code, adapted from Introducing the Selenium-WebDriver API by Example. Since I don't know the URL or the {string} ("cheese" in this example) you are looking for, the sample hasn't changed much.
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
// Requires a reference to WebDriver.Support.dll
using OpenQA.Selenium.Support.UI;

class RedirectThenReadUrl
{
    static void Main(string[] args)
    {
        // Create a new instance of the Firefox driver.
        // Notice that the remainder of the code relies on the interface,
        // not the implementation.
        // Further note that other drivers (InternetExplorerDriver,
        // ChromeDriver, etc.) will require further configuration
        // before this example will work. See the wiki pages for the
        // individual drivers at http://code.google.com/p/selenium/wiki
        // for further information.
        IWebDriver driver = new FirefoxDriver();

        // Notice navigation is slightly different from the Java version.
        // This is because 'get' is a keyword in C#.
        driver.Navigate().GoToUrl("http://www.google.com/");

        // Print the original URL
        System.Console.WriteLine("Page url is: " + driver.Url);

        // @kirbycope: In your case, the redirect happens here - you just
        // have to wait for the new page to load before reading the new values.

        // Wait for the page to load, timing out after 10 seconds
        WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
        wait.Until((d) => { return d.Url.ToLower().Contains("cheese"); });

        // Print the redirected URL
        System.Console.WriteLine("Page url is: " + driver.Url);

        // Close the browser
        driver.Quit();
    }
}
Sounds like you want to sniff HTTP traffic. You could use a packet capture driver like WinPcap, import that DLL and test against it, or use SharpPcap as @SimpleCoder mentioned.
The path of minimum effort would be to write a FiddlerScript add-on to check the request and redirect if necessary.
Follow Up:
I ended up using Telerik's proxy to send HTTP requests and parse the responses via C#. Here's the article that was used as a springboard:
https://docs.telerik.com/teststudio/advanced-topics/coded-samples/general/using-the-http-proxy
I have a problem here. Assume there's a basic calculator implemented in JavaScript hosted on a website (I googled for an example and found this one: http://www.unitsconverter.net/calculator/ ). What I want to do is make a program that opens this website, enters some values and gets the result back. So, for our website calculator, the program should:
- open the website
- enters an operand
- enters an operation
- enters an operand
- retrieve the result
Note: everything should be done without showing anything to the user (the browser, for example).
I did some searching and found out about HttpWebRequest and HttpWebResponse. But I think those can only be used to post data to the server, which means the file I'm sending data to must be PHP, ASPX or JSP. JavaScript is client-side, so I think they are kind of useless to me in this case.
Any help?
Update:
I have managed to develop the web bot using the WebBrowser control (found in System.Windows.Forms).
Here's a sample of the code:
// Load the specified page. You can add
// webBrowser1.ScriptErrorsSuppressed = true; to suppress script error dialogs.
webBrowser1.Navigate("LinkOfTheSiteYouWant");

// Set an HTML attribute on the element with the given id.
webBrowser1.Document.GetElementById("ElementId").SetAttribute("HtmlAttribute", "valueToBeSet");
Those are the main methods I have used to do what I wanted to.
I have found this video useful: http://www.youtube.com/watch?v=5P2KvFN_aLY
I guess you could use something like WatiN to pipe the user's input/output from your app to the website and return the results, but as another commenter pointed out, the value of this sort of thing when you could just write your own calculator fairly escapes me.
You'll need a JavaScript interpreter (engine) to parse all the JavaScript code on the page.
https://www.google.com/search?q=c%23+javascript+engine
What you're looking for is something more akin to a web service. The page you provided doesn't seem to accept any data in an HTTP POST, and doesn't have any meaningful information in the source that you could scrape. If, for example, you wanted to programmatically search eBay auctions, you could figure out how to correctly post data to it, e.g.:
http://www.ebay.com/sch/i.html?_nkw=http+for+dummies&_sacat=267&_odkw=http+for+dummies&_osacat=0
and then look through the HTTP response for the information you're looking for. You'd probably need to create a regular expression to match the markup; for example, if you wanted to know how many results there were, you'd search the HTTP response for this bit of markup:
<div class="alt w"><div class="cnt">Your search returned <b>0 items.</b></div></div>
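A sketch of that kind of scrape, using the example markup above as a stand-in for a real response body:

```csharp
using System;
using System.Text.RegularExpressions;

class ScrapeResultCount
{
    static void Main()
    {
        // Hypothetical fragment of an HTTP response body, as in the
        // markup above.
        string html = "<div class=\"alt w\"><div class=\"cnt\">" +
                      "Your search returned <b>0 items.</b></div></div>";

        // Capture the number between "returned <b>" and " items."
        var match = Regex.Match(html, @"Your search returned <b>(\d+) items?\.</b>");
        if (match.Success)
        {
            Console.WriteLine(match.Groups[1].Value); // prints 0
        }
    }
}
```

In a real program you'd feed in the downloaded response body instead of a literal string. Scraping HTML with regexes is brittle, so keep the pattern as narrow as the markup allows.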
As far as the client-side/JavaScript stuff goes, you just plain aren't going to be able to do anything like what you're going for.
It is a matter of API: "Does the remote website expose any API for the required functionality?".
Web resources that expose an interactive API are called web services. There are tons of examples (Google Maps, for instance).
You can access the API -depending on the Terms & Conditions of the service- through a client. The nature of the client depends on the kind of web service you are accessing.
A SOAP based service is based on SOAP protocol.
A REST based service is based on REST principles.
So, if there is an accessible web service called "Calculator", then you can access the service and, for instance, invoke its sum method.
In your example, the calculator is a JavaScript implementation, so it is not a web service and cannot be accessed via HTTP requests. However, its implementation is still accessible: it is the JavaScript file where the calculator is implemented. You can always include that file in your own website and access its functions via JavaScript (always mind the terms and conditions!).
A very common example is the jQuery library stored in Google Libraries.
I've been trying to figure out how to handle 401 responses on WebKit.NET and show an authentication box so that user can enter his credentials and then send them back to the server.
This guy figured out a way to add the proper headers to a new request and send them to the server, but it seems the code sends them to every page the browser navigates to, which is not what I want. I dug a bit into the code and there is an interface called IWebResourceLoadDelegate which, among others, contains two event handlers called didReceiveResponse and didReceiveContentLength that are called for every response, but I can't figure out how in the world to read the headers from the parameters being passed. I think the headers are just not being passed at all.
Also, it seems the guys at webkit-sharp haven't solved this issue either, but somehow Chrome does handle it properly. I'm not sure which build of WebKit Chrome uses. I just hope it's not a custom build, such that I won't have any choice other than spending the rest of my life trying to build WebKit (and the other rest trying to add the missing functionality).
Any one has any idea how could I begin to figure out how to handle this?
I haven't worked on this project in some time, but it looks to me like you should be able to get the response headers from the WebURLResponse object, perhaps from the allHeaderFields or statusCode methods...
It would be really great if you could finish my work to get full HTTP Auth support in WebKit.NET. I just haven't had the time... Chrome and Safari have their own proprietary implementations that do the trick.
I'm using CouchDB as a data source for a C# web service.
Being RESTful, CouchDB passes back a status code of 404 when asked for a document that does not exist. The standard .NET web request wants to throw an exception at this, but (to me, at least) communicating that a data source has returned "no results" via an exception is utterly horrible, and it's a stink I really don't want wafting around in my code...
Is there any replacement for WebRequest I can use that will allow me to deal with status codes as I see fit?
EDIT: Just to clarify, due to the responses I've had so far: I do not want to hide the exception that WebRequest throws. I am looking for an alternative to the standard WebRequest that does not throw exceptions based on status codes, as .NET's interpretation of what constitutes an error doesn't seem in line with REST principles.
EDIT #2 I really need a 3.5 compatible way of doing this; sorry for not being specific about that at the start.
The HttpClient library does not throw exceptions for non-success status codes; the response is returned to you regardless. See this for usage examples.
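A minimal sketch of treating the 404 as "no results" rather than an error (the 404 here is simulated with a hand-built response instead of a live CouchDB call):

```csharp
using System;
using System.Net;
using System.Net.Http;

class CouchNotFound
{
    static void Main()
    {
        // HttpClient hands you the response whatever the status code,
        // so a 404 can be treated as plain "no results".
        // A real call would look like:
        //   var response = client.GetAsync(docUrl).Result;
        var response = new HttpResponseMessage(HttpStatusCode.NotFound);

        if (response.StatusCode == HttpStatusCode.NotFound)
        {
            Console.WriteLine("no results");
        }
    }
}
```

Note that System.Net.Http.HttpClient targets .NET 4 and later; on 3.5 the equivalent pattern is catching the WebException from WebRequest and inspecting its Response property for the status code.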
I have not used them but there are several dedicated C# CouchDB client libraries.
There is CouchOne's list of CouchDB drivers.
Also there is the CouchDB wiki list of C# clients.
My personal preference is to stick as closely to the HTTP layer as possible. HTTP is very simple and the CouchDB API is very simple. There is no need for middleware to access it. (It is unfortunate that your WebRequest class apparently has this bug.)
I'm not sure what your problem is. If you don't want to get a 404 when requesting documents that don't exist, I think you just need to add a wildcard application mapping in your IIS settings and uncheck the "Verify that file exists" box.
Where can I find the raw/object data of a SOAP request in C# when using web services?
I can't find it anywhere. Shouldn't it be available in the HttpContext.Current.Request object?
Shouldn't it be available in the HttpContext.Current.Request object?
No, it shouldn't.
What are you trying to accomplish? If you just want to see that data so you can log it, or as an aid to debugging, then see the example in the SoapExtension class. It's a working sample of an extension that can log input and output as XML. I've used a modified version of it myself.
If you're just looking to debug your web service, then you can install Fiddler, and that allows you to inspect the data sent to and from your web service.
It sounds like you're going to have to go lower-level in your implementation if you want to see the raw XML. Check out the generic handler (.ashx extension). This will allow you to deal with the request/response streams directly. It's very low level, but gives you full control over the service lifecycle.
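The core of that approach is just reading the request's input stream from the start. A sketch of the reading pattern, with a MemoryStream standing in for the handler's HttpContext.Request.InputStream (the envelope string is a placeholder):

```csharp
using System;
using System.IO;
using System.Text;

class RawBodySketch
{
    // In a generic handler (.ashx), context.Request.InputStream exposes
    // the raw bytes of the POST body; the reading pattern is the same
    // as shown here.
    static string ReadRawBody(Stream input)
    {
        input.Position = 0; // rewind in case the framework already read it
        using (var reader = new StreamReader(input))
        {
            return reader.ReadToEnd();
        }
    }

    static void Main()
    {
        // Simulated request body standing in for an incoming SOAP envelope.
        var body = new MemoryStream(Encoding.UTF8.GetBytes("<soap:Envelope/>"));
        Console.WriteLine(ReadRawBody(body));
    }
}
```

Inside a real handler you would call `ReadRawBody(context.Request.InputStream)` from `ProcessRequest` and then parse or log the XML as needed.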
I found that
Request.Params[null]
refers to the raw data POSTed to the page in C# ASP.NET.