I'm running the latest versions of Firefox and Selenium in C#. I'm automating a crawler to find data in a web app. The server is very slow, so I've had to add many waits, plus an initial wait using Thread.Sleep.
My crawler gets a list of items and then iterates: it loads the details of each item by clicking on its id, then goes back (these navigation controls are all built into the web app; no browser controls are used), rinse and repeat. The server shows a progress spinner while loading details and while going back. The crawler reaches the same item each time and locks up: the progress spinner freezes and Firefox crashes.
I've tried a couple of different things, including using background threads, deleting all cookies, and checking whether memory is overloaded. I haven't noticed any memory peaks in Task Manager. I also tried restarting the web driver, but that doesn't work out well because this is a web app.
Is there something I'm overlooking? (I tried to be as clear and elaborate as possible)
Forgot to mention: when I stop the program, the page 'unfreezes' and I can continue using it from that point.
This was a known problem with the C# Selenium web bindings 2.39 and earlier, caused by a deadlock in the bindings' code when redirecting console logging from Firefox. It is fixed in 2.40, so upgrading should solve it. See here under the heading 'Update 25th Feb 2014' for more information.
Related
I'm currently working with CefSharp in a web application. I would like to pause execution while the webpage is loading and then, once all the elements have loaded, resume execution (for example, click a button or fill in a given field).
I'm a former Java developer and have used Selenium on a few projects. What I'm looking for is a CefSharp equivalent of Selenium's "driver.manage().timeouts().implicitlyWait(...);" method. I've been looking for a solution for a while and have found a few threads discussing this problem, but none of the solutions seems to work.
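CefSharp has no built-in implicit wait, but you can build an explicit one on top of its LoadingStateChanged event. A minimal sketch (the helper name is made up for illustration; only the event and ExecuteScriptAsync are actual CefSharp API):

    using System;
    using System.Threading.Tasks;
    using CefSharp;
    using CefSharp.WinForms;

    public static class BrowserExtensions
    {
        // Returns a Task that completes once the browser reports it has
        // finished loading (IsLoading flips back to false).
        public static Task WaitForPageLoadAsync(this ChromiumWebBrowser browser)
        {
            var tcs = new TaskCompletionSource<bool>();
            EventHandler<LoadingStateChangedEventArgs> handler = null;
            handler = (sender, args) =>
            {
                if (!args.IsLoading)
                {
                    browser.LoadingStateChanged -= handler;
                    tcs.TrySetResult(true);
                }
            };
            browser.LoadingStateChanged += handler;
            return tcs.Task;
        }
    }

Then, from an async method: await browser.WaitForPageLoadAsync(); followed by browser.ExecuteScriptAsync("document.getElementById('myButton').click();"), where myButton is a placeholder id. Note the task only completes on the next IsLoading=false transition, so call it around the navigation itself; content added by AJAX afterwards may still need its own polling.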
I'm finishing QA testing on a web parser built in C# that parses specific data from a web site loaded into a WebBrowser control in a Windows Forms application.
The weird behavior appears when I kill the internet connection. The program is designed to navigate the site recursively, and at each step it waits for a WebBrowserDocumentCompletedEventHandler to be triggered. Besides that, a Form timer is set, and if the handler is not triggered within a specific interval, the whole procedure is reloaded.
Everything works fine even if I manually prevent the handler from triggering: as I said, the timer kicks in, restarts the operation, and successfully retries another value.
When I shut down the internet connection manually while the procedure is running, I can see the page getting the Internet Explorer message "This page can't be displayed" (for some reason, DocumentCompleted is not triggered).
Then, immediately after reconnecting the internet and waiting for the timer to kick in, it fires the reload function as expected, but this time everything goes wild: the functions fire in the wrong order, and it seems like a hundred threads are running at the same time. Total chaos.
I know it's not easy to answer this question without experiencing it and seeing the code, but if I copied the entire code it would be far too long (five different classes), and I really can't see where the problem is.
I'll try to simplify the question:
Why doesn't the DocumentCompleted handler fire when the connection is lost?
Has anyone experienced an application going wild only after the WebBrowser control loses its connection?
Thanks
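Without seeing the code this is only a guess, but a classic cause of the "hundred threads, wrong order" symptom is the retry path re-subscribing the DocumentCompleted handler (and restarting the timer) without detaching the previous subscription, so every failed cycle multiplies the handlers. A minimal sketch of the navigate/watchdog pattern described above, with that guard in place (all names hypothetical, not the asker's code):

    using System;
    using System.Windows.Forms;

    public class StepRunner
    {
        private readonly WebBrowser _browser;
        private readonly Timer _watchdog = new Timer { Interval = 30000 };
        private string _currentUrl;

        public StepRunner(WebBrowser browser)
        {
            _browser = browser;
            _watchdog.Tick += delegate { Retry(); };
        }

        public void BeginStep(string url)
        {
            _currentUrl = url;
            // Subscribe exactly once per navigation attempt.
            _browser.DocumentCompleted += OnDocumentCompleted;
            _watchdog.Start();
            _browser.Navigate(url);
        }

        private void OnDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
        {
            _watchdog.Stop();
            _browser.DocumentCompleted -= OnDocumentCompleted;
            // ...parse the document and kick off the next step here...
        }

        private void Retry()
        {
            // Tear everything down before reloading, so a second timer
            // tick or a late DocumentCompleted cannot double-fire.
            _watchdog.Stop();
            _browser.DocumentCompleted -= OnDocumentCompleted;
            _browser.Stop();
            BeginStep(_currentUrl);
        }
    }

The key point is that the retry path is idempotent: it can run any number of times without accumulating subscriptions or timer ticks.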
I am currently building an automation tool for testing webpages. I'm already aware of Selenium but won't be using it, because our framework has already been built and only needs minor changes to make it reliable. While testing this framework with test pages (HTML and JavaScript only), I run into issues such as the webpage taking a long time to load (happens about 1 time in 20). And when I try to find a button's coordinates and click it, or find an element in the page and click it, it sometimes fails because the button doesn't even exist at that point in time.
Currently I'm using Thread.Sleep or retrying n times. Are there better solutions to remove this flaky behavior?
Look into the WebDriverWait class. There is a corresponding binding for C# as well. I have also discussed WebDriverWait here.
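A minimal C# example (the element id is a placeholder):

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Support.UI;

    // Poll for up to 10 seconds until the element exists, then click it.
    // WebDriverWait ignores NotFoundException while polling by default.
    var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
    IWebElement button = wait.Until(drv => drv.FindElement(By.Id("submit-button")));
    button.Click();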
You can try using implicit waits.
Read about them here: http://www.seleniumhq.org/docs/04_webdriver_advanced.jsp
Basically, you set the timeout once per session; if Selenium can't find an element, it waits for the amount of time you set before throwing the exception.
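In the C# bindings that looks like this (a sketch, shown with Firefox as in the first question above):

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Firefox;

    IWebDriver driver = new FirefoxDriver();
    // From now on, every FindElement call in this session polls for up
    // to 10 seconds before throwing NoSuchElementException.
    driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(10));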
I have two questions related to the same problem...
Q1) I am using WatiN (3.5) to automate a website.
The situation is that I want to obtain a div tag once the result page has fully loaded, but WatiN doesn't wait for the page to load completely and tries to obtain the div too early, which returns null. The div is populated by AJAX. This is the code I'm using to avoid that error, but it does not work:
    while (resultDiv == null)
    {
        browser.Div("ui-tabs-1").WaitUntilExists();
        resultDiv = browser.Div("ui-tabs-1");
    }
So how can I make WatiN wait for the page to be completely loaded?
Q2) I found a solution to the above problem here, but I'm stuck at one point: I could not find the library reference for the interfaces used in the extension methods, i.e. IElement and IBrowser.
I have also asked the author of that article and am waiting for his reply.
I am building this application with WatiN 2.5 and .NET Framework 3.5 in VS 2010.
I have run into a similar problem with WatiN on a site using AJAX. This is the workaround:
WatiN has a browser.WaitForComplete() function, which works after clicking a link/tab/button on non-AJAX websites, but only while the page is in a loading state. With AJAX, only part of the browser window is updated, so the browser never enters a loading state.
So one solution for this problem is to use Thread.Sleep(10000); the time can be varied depending on how long the website normally takes to load the required div.
Thread.Sleep can be used, but for anything beyond a proof of concept (confirming that waiting for something to load really is the issue), it should be avoided. Sleeps add unnecessary idle time when you sleep for the maximum time the action may take, and give false-positive failures when you wait any less.
See Jeroen's link in his response here if you are testing an ASP.NET Ajax site: In WatiN how to wait until postback is complete - WaitForAsyncPostbackToComplete. I used this idea for some methods and properties to rid my code of a lot of long Sleep calls. Tests ran faster and results were much more reliable.
If that specific JS call won't work because you're using a different client-side framework, the basic polling concept (shorter sleeps in a loop) will still serve you better than long sleeps.
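For instance, a polling helper for the ui-tabs-1 div from the question above might look like this (a sketch; the timeout and poll interval are arbitrary):

    using System;
    using System.Diagnostics;
    using System.Threading;
    using WatiN.Core;

    // Poll for the AJAX-populated div instead of one long sleep.
    static Div WaitForDiv(Browser browser, string id, int timeoutMs)
    {
        var stopwatch = Stopwatch.StartNew();
        while (stopwatch.ElapsedMilliseconds < timeoutMs)
        {
            Div div = browser.Div(Find.ById(id));
            if (div.Exists)
                return div;
            Thread.Sleep(250); // short poll instead of one 10s sleep
        }
        throw new TimeoutException("Div '" + id + "' did not appear within " + timeoutMs + " ms.");
    }

Called as WaitForDiv(browser, "ui-tabs-1", 30000). WatiN's own element.WaitUntilExists(timeoutInSeconds) does essentially this polling internally, so the manual loop mainly buys you control over the interval and a place to add logging.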
I have a C# .NET 3.5 application with an embedded web browser. The browser is designed to point to remote sites (rather than anything local). Everything works fine, but when a page is slow to respond, my entire application becomes unresponsive until the page has loaded.
I don't mind the browser being unresponsive while it does its thing, but the whole application going unresponsive too is far from ideal.
Is there a good way to prevent this? Would it be beneficial to run the WebBrowser on a separate thread? That's a bit beyond my skill set right now, and I don't think the WebBrowser control really likes multithreading, but I can learn if need be.
See answer #2 on this question for how to run it on a separate thread: BackgroundWorker and WebBrowser Control.
You might as well read answer #1 too; it explains the behavior you are seeing (the WebBrowser control blocking the UI thread).
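The gist of that approach, sketched (hypothetical structure; the linked answer has the full details): give the WebBrowser its own STA thread with its own message pump, so a slow page only blocks that thread.

    using System;
    using System.Threading;
    using System.Windows.Forms;

    var thread = new Thread(() =>
    {
        var browser = new WebBrowser();
        // Force handle creation so the underlying ActiveX control is
        // actually instantiated on this thread.
        var handle = browser.Handle;
        browser.DocumentCompleted += (s, e) =>
        {
            // Read browser.Document here; marshal any results back to
            // the UI thread (e.g. via Control.Invoke) before using them.
        };
        browser.Navigate("http://example.com/");
        Application.Run(); // pump messages so navigation events fire
    });
    thread.SetApartmentState(ApartmentState.STA); // WebBrowser requires STA
    thread.IsBackground = true;
    thread.Start();

The usual caveat applies: the WebBrowser and its Document may only be touched from the thread that created them.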
As it happens I found that the root cause of this was my application running as administrator. Exactly the same issue was seen when using Internet Explorer - as such, I've simply rewritten the bits that required admin privileges so I'm now no longer seeing the original issue.
This happened only on Windows 7. I used Fiddler2 to monitor HTTP/HTTPS traffic and found that the embedded web browser was visiting http://ctldl.windowsupdate.com/msdownload/update/v3/static/trustedr/en/disallowedcertstl.cab?50ff94e72ac1a75c. The solution is here: http://support.microsoft.com/kb/2730040/en (Method 2 or Method 3). You can try it. Alternatively, if you use .NET Framework 4.0, you won't have this problem.