We have tried using Selenium for testing, but it has numerous setbacks: delays and sudden crashes. jQuery sounds like a good alternative, but the challenge is how to jQuerify every page load in the browser. Brandon Martinez here has an example of how to add jQuery from Chrome's console to jQuerify a page:
// Create a script element pointing at the Google-hosted jQuery build
var element1 = document.createElement("script");
element1.src = "http://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js";
element1.type = "text/javascript";
// Append it to <head> so the browser fetches and runs it
document.getElementsByTagName("head")[0].appendChild(element1);
We want that code to be automatically available on every browser page, without having to manually click a bookmarklet on each one.
If we get around that, we can use C# code to:
Process.Start("chrome", @"target site");
and, since jQuery is already available on every page, it will do the form population and submission we want.
How can I automatically include jQuery for every page that gets loaded in the browser? Is it possible to do that via a Chrome extension, jQuery, or C# code? Is it possible at all?
I've decided to use Fiddler to modify the response body before it is displayed in the browser. Now I can jQuerify every page that reaches the browser. Look at this link for a detailed example.
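For reference, a minimal sketch of the same idea using the FiddlerCore library from C#. The injected script URL matches the snippet above; the port and the exact wiring are assumptions that may vary between FiddlerCore versions:

using System;
using Fiddler;

class JQueryInjector
{
    static void Main()
    {
        // Rewrite every HTML response before the browser renders it
        FiddlerApplication.BeforeResponse += session =>
        {
            if (session.oResponse.headers.ExistsAndContains("Content-Type", "text/html"))
            {
                session.utilDecodeResponse(); // unchunk/decompress the body first
                // Inject a jQuery <script> tag just before </head>
                session.utilReplaceInResponse("</head>",
                    "<script src=\"http://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js\"></script></head>");
            }
        };

        FiddlerApplication.Startup(8888, FiddlerCoreStartupFlags.Default);
        Console.WriteLine("Proxy listening on port 8888; press Enter to quit.");
        Console.ReadLine();
        FiddlerApplication.Shutdown();
    }
}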
I'm trying to get Selenium to click on one of the most-visited web links from the default Chrome new-tab page.
The problem is that Selenium cannot find the element on the page, and I think it's because a web page technically didn't get loaded: when you open Chrome, the HTML elements are there, but the address bar is completely empty. Could that be why Selenium can't find the link? The code is simple, and finding the XPath wasn't an issue; I just don't know whether this is something Selenium can do. I'm resorting to the click because the navigate() function will not work once I put in the proxy information, since Selenium has no built-in way to handle a proxy with a username and password.
At the end of the day I'm trying to get the username/password box to pop up by clicking on the link. When I open the browser with Selenium programmatically and then manually click on the link, the username/password box pops up, but I can't get Selenium to find and click the element programmatically.
var did = driver.FindElement(By.XPath("//*[@id='mv-tiles']/a[1]"));
did.Click();
UPDATE 1:
I was able to find the element once I took the iframe into consideration, but clicking is still an issue.
var frm = driver.SwitchTo().Frame("mv-single");
var did = frm.FindElement(By.XPath("//*[@id='mv-tiles']/a[1]"));
//did.Click(); <-- I can see it go to the element but nothing occurs
IJavaScriptExecutor js2 = (IJavaScriptExecutor) driver;
js2.ExecuteScript("arguments[0].click();", did);
The IJavaScriptExecutor is able to click the element, but Chrome blocks the redirect with the following message:
[21040:24704:1204/150143.743:ERROR:CONSOLE(1)] "Unsafe JavaScript attempt to initiate navigation for frame with URL 'chrome-search://local-ntp/local-ntp.html' from frame with URL 'chrome-search://most-visited/single.html?title=Most%20visited&removeTooltip=Don%27t%20show%20on%20this%20page&enableCustomLinks=1&addLink=Add%20shortcut&addLinkTooltip=Add%20shortcut&editLinkTooltip=Edit%20shortcut'. The frame attempting navigation is targeting its top-level window, but is neither same-origin with its target nor has it received a user gesture. See https://www.chromestatus.com/features/5851021045661696.
", source: (1)
FINAL UPDATE:
I gave up and decided to use the browser-extension solution for proxies with passwords: https://stackoverflow.com/a/35293222/5415162
That list of "Most Recent Pages" is actually in an iframe, which is probably why Selenium can't find it. Try updating the selector to account for the iframe, or add a wait clause to allow the iframe to finish loading (see the sketch below).
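A minimal sketch of that wait-and-switch approach, assuming the frame name and XPath from the question and a driver that is already open (WebDriverWait comes from the Selenium.Support package):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

// Wait up to 10 seconds for the iframe to exist before switching into it
var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
wait.Until(d => d.FindElements(By.Name("mv-single")).Count > 0);
driver.SwitchTo().Frame("mv-single");

// The tiles should now be locatable inside the frame
var link = driver.FindElement(By.XPath("//*[@id='mv-tiles']/a[1]"));
link.Click();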
Regardless of that solution, I don't think it will act any differently from simply navigating to the target URL. So, to fix your root problem, have you tried setting the proxy details when creating the ChromeOptions?
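Something along these lines, for example (a sketch; the proxy address is a placeholder, and note that the standard Proxy object only carries host:port, not a username and password):

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

var options = new ChromeOptions();
options.Proxy = new Proxy
{
    Kind = ProxyKind.Manual,
    HttpProxy = "proxy.example.com:8080", // placeholder address
    SslProxy = "proxy.example.com:8080"
};

using (var driver = new ChromeDriver(options))
{
    driver.Navigate().GoToUrl("https://example.com/");
}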
Is there a way to redirect my application to a webpage after first checking the browser version?
I'm using C# to run my Angular app, and index.html is loaded by default, but is there a way to control that?
E.g.: if my browser is IE, load wrongBrowser.html; otherwise load index.html (the default one).
Note that I don't want to redirect my page, because I want to keep the original URL, e.g. localhost/api/search=text. If I do a redirect, it will override my URL. So I just want to load the HTML content.
I'm using C# with Visual Studio for the server side.
The first page of your app will have to load, since it needs to be able to determine the browser specs. Only then can you redirect to another page based on that knowledge.
I have never used AngularJS or Angular with C#, but from personal knowledge I know you can "redirect" without changing the URL by using an XML request in vanilla JavaScript (maybe you can place this somewhere):
var request = new XMLHttpRequest();
// Once the target page loads, overwrite the current document with its content
request.addEventListener("load", function(evt){
    document.write(evt.target.response);
}, false);
request.open('GET', 'a.html', true);
request.send();
What this does is simple: we create request, an XMLHttpRequest object, and attach a load listener; when the response arrives, we replace the page's code with the target's code. We then set the URL and send the request.
I have only used this for testing, so there might be issues I don't know of, but it does pull the HTML code in.
In C# you can do the following using Request.Browser:
if (Request.Browser.Browser.IndexOf("InternetExplorer") >= 0)
{
    return View("wrongBrowser");
}
You have to use IndexOf because most, if not all, browsers return their version in their name too, so an exact match would fail.
Here's a list of possible browser strings:
IE <= 11: InternetExplorer <numeric version>
Edge: Edge <version>
Safari: Safari <version>
Chrome: Chrome <version>
Opera: Chrome <version>
There are more, but most of them will come under Chrome. I won't go into much detail, since you specified IE11, which is listed above. The above method is not really reliable for other, less popular browsers, so keep that in mind.
I want to download the content of a website programmatically, and it looks like that content is loaded by AJAX calls. When I simply disable JavaScript in my browser, only one request is made by the page and all the content is loaded without AJAX.
What I need is to make a web request that tells the web page I have JavaScript disabled, so that it returns all the content instead of an empty body tag with no content at all.
Any suggestions on how to do that?
You need to mimic the browser.
Steps:
Use Fiddler and see what is sent by the browser.
Set the same headers/cookies/user agent via C# code (see the sketch below).
If it does not work, compare the request your code makes with the browser's by using Fiddler as a proxy for your C# code (set the proxy to http://localhost:8888).
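A minimal sketch of step 2, with placeholder header values standing in for whatever Fiddler shows your browser sending; the proxy line covers step 3:

using System.IO;
using System.Net;

var request = (HttpWebRequest)WebRequest.Create("https://example.com/page"); // placeholder URL

// Copy what the real browser sends (these values are placeholders)
request.UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...";
request.Accept = "text/html,application/xhtml+xml";
request.Headers["Accept-Language"] = "en-US,en;q=0.9";
request.CookieContainer = new CookieContainer(); // carries any session cookies

// Step 3: route through Fiddler to compare with the browser's request
request.Proxy = new WebProxy("http://localhost:8888");

using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string html = reader.ReadToEnd();
}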
I have to make a console application in C# that retrieves some data from webpages.
I have downloaded the HTML code from the main page of a website:
WebClient client = new WebClient();
String htmlCode = client.DownloadString(linkToWebpage);
I have verified the string, and it is good.
After that, I searched for a specific line in the HTML code, one that contains a button and a link:
<a rel="nofollow" class="link" onclick="loadornot()" href="http://aaaaa.com/D?WUrtlC1" target="_blank">Click to read more</a>
Now I am trying to download the HTML code from the anchored link (the one in the href), but I am redirected to the main page and I am not sure why. Even if I copy the link from the href and paste it into a web browser, I am redirected to the main page.
I believe this happens because the button calls a function, onclick="loadornot()". Is that why it doesn't work the way I have tried? And if so, how could I call that function from my C# application to continue?
Thank you.
Edit:
I have found out that I need some cookies, more exactly a session code, to make that link work. How can I do that?
You can't run JavaScript code from a web page without a browser. So, if you really need to execute that function in the downloaded page, use some kind of headless browser, such as webkitdotnet or awesomium.
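As for the session-cookie part, a common pattern (a sketch, not specific to this site) is to subclass WebClient so that every request shares one CookieContainer; downloading the main page first should then capture the session cookie before you follow the href:

using System;
using System.Net;

// WebClient normally discards cookies; this subclass keeps them across requests
class CookieAwareWebClient : WebClient
{
    public CookieContainer Cookies { get; } = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = base.GetWebRequest(address);
        if (request is HttpWebRequest httpRequest)
            httpRequest.CookieContainer = Cookies;
        return request;
    }
}

// Usage, with linkToWebpage taken from the question:
// var client = new CookieAwareWebClient();
// string mainPage = client.DownloadString(linkToWebpage);             // sets the session cookie
// string linked = client.DownloadString("http://aaaaa.com/D?WUrtlC1"); // then follow the href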
I have an application that uses another website's data. How can I get that data, given that the site uses some JavaScript functions to load it, so it does not show up in the page's view-source?
Check the Net tab in Firebug, look under XHR, note which resource is requested, and request the same resource yourself.
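A sketch of that approach in C#: once the Net tab shows which resource the page's JavaScript requests, fetch that resource directly (the endpoint URL here is a placeholder):

using System.Net;

using (var client = new WebClient())
{
    // Mimic a browser so the endpoint responds normally (placeholder value)
    client.Headers[HttpRequestHeader.UserAgent] = "Mozilla/5.0";
    // The same URL Firebug shows under XHR (placeholder)
    string data = client.DownloadString("http://example.com/api/data.json");
    // parse the returned JSON or HTML as needed
}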
Basically you have to render the webpage and ensure the JavaScript functions are run (evaluated). You could do this by "borrowing" their JavaScript files (by linking to them from your own page), but this may not work, since you don't know what's in those files: they could be accessing DOM elements that you don't have in your page, or calling other domains, which may prevent them from working correctly.
The easiest way to show the same data is to just host the page inside an iframe on your own page. If you are looking to do this from a normal client application (i.e. not a web app), you will need a browser control that you navigate to the target page. If the browser control is invisible, you could then scrape values from it and show them in your app, although this is a very clumsy way to do it, and it's debatable how ethical it is.
If you want another website's view-source, use HttpWebRequest to get the response stream in C#.