I'm running this from the VS 2008 debugger on Windows 7, with .NET 3.5.
The idea was to make all AJAX requests with jQuery only, rather than with .NET, following some tutorials online.
Default.aspx -> HTML page; jQuery triggers a method in Default.aspx.cs:
http://pastebin.com/pxBvKA2H
Default.aspx.cs -> C# Web Form code-behind; it just defines a GetDate method, which only returns a string for now (trying to eliminate any possible issues):
(can only post one hyperlink...) pastebin.com/pnHn50hu
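In essence the page method looks like this (a simplified sketch, not the exact code from the pastebin; the class name here is an assumption):
// Default.aspx.cs -- simplified sketch; the real code is in the pastebin above
using System;
using System.Web.Services;

public partial class _Default : System.Web.UI.Page
{
    [WebMethod]
    public static string GetDate()
    {
        // Only returns a fixed string for now, to rule out any server-side work
        return "Hello from GetDate";
    }
}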
The AJAX query takes longer than it should. Profiling with Firebug revealed that it took about 1.03 s in total:
1 s    DNS Lookup
26 ms  Waiting
1 ms   Receiving
EDIT: It continues to take the same set of times if you continue to click and resubmit the request.
Is there anything I can do to cut down on the DNS Lookup time / what did I do wrong?
Is it only slow in Firefox? This sounds like the old IPv6 lookup problem that Firefox tends to suffer from.
If it's fast in IE, then follow these directions to turn off IPv6 lookups in Firefox.
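For reference, the about:config preference involved (as far as I remember) is:
network.dns.disableIPv6 = true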
After you get the site running in debug mode, change the URL in the browser from localhost:#### to 127.0.0.1:#### and see if that makes a difference. I found recently that it did.
I am trying to use Selenium and ChromeDriver to get the complete DOM of a page as quickly as possible. When my important Ajax requests are finished I inject a class into the DOM and use a WebDriverWait to wait for that class before continuing.
When I test the responses from the API (the Ajax calls) they are very consistent, and I have removed any requests to ad servers or anything outside the website and API. I have inspected the requests in server logs and Wireshark and, again, they are very consistent.
The time it takes for ChromeDriver to get the full DOM varies wildly. Below are the arguments and switches I am using. I am getting anything from 700 ms to 4 seconds to get the full DOM for the same page. Are there switches here that are impeding ChromeDriver? What should I use if I simply want the DOM, nothing else? How can I optimise for speed?
Using Selenium.WebDriver.3.5.0 and Chrome 60.
// Headless mode, no GPU
chromeOptions.AddArguments("headless", "disable-gpu", "renderer");
chromeOptions.AddArgument("disable-translate");            // no translate bar
chromeOptions.AddArgument("no-default-browser-check");
chromeOptions.AddArgument("site-per-process");             // one renderer process per site
chromeOptions.AddArgument("disable-3d-apis");
chromeOptions.AddArgument("disable-background-mode");
chromeOptions.AddUserProfilePreference("profile.managed_default_content_settings.images", 2);  // 2 = block image loading
chromeOptions.AddUserProfilePreference("credentials_enable_service", false);                   // no password manager prompt
Thanks
My use case was to use the Chrome WebDriver to generate a full DOM to be sent to crawlers. It was to be a replacement for prerender.io.
I have seen some other developers want to try this too.
My advice is to not do this. It isn't designed for it and there is nothing you can do to make the responses more reliable, in my experience.
If you are using Angular, use Universal (https://universal.angular.io/); we tested this and it works great.
For other JS frameworks, https://prerender.io is still an option.
Thanks.
I have a .NET web application. There is one page where we enter data and submit the form. We upload the attachment before submitting the form. The submit action is taking a long time, almost a minute, for files with an attachment of 650 KB. The code-behind is C#. We use a third-party API (Ektron); it's a CMS tool.
Please let me know in what ways I can analyse the bottleneck for this issue. Please suggest open-source tools and browser add-ons other than Page Speed and YSlow.
Please check whether the time is taken for the request to initiate or for the response to come back to your browser.
Only then can you look for a solution.
To answer the second half of your question: at the very least, most modern browsers (Firefox, Chrome and Safari) have a developer console that will give you a breakdown of the time taken in each request state, on a per-request basis. My personal preference is Firefox with Firebug, as I find the Network pane easy to interpret.
Redgate ANTS Performance Profiler is pretty much the bee's knees for troubleshooting performance problems in ASP.NET.
I have a page with a few big tables. Loading this page or triggering an event is fast enough in Chrome, but when I run it in IE7 the page is slow.
Sometimes if I click a button it takes a few seconds to load, instead of the instant response I get in Chrome or FF.
I Googled around a bit to find a solution to this problem and I tried an HTML validator. If I save the page as HTML and run it through the validator I get over 1,000 errors, most of them tags that are not closed.
If I check the ASP code, which is very limited because all the code is written dynamically with objects (I didn't write my own HTML code), all my tags are closed and I don't get a single warning or error in Visual Studio.
On this page I use jQuery and some custom JavaScript (nothing too complex).
All my data comes from SQL Server; if I run all the queries at once it still takes less than a second, and I'm pretty sure these queries are written as well as they can be.
Any idea how I can make the website faster in IE?
(Unfortunately 90% of the users have IE7)
I would recommend that you install the YSlow plugin in Firefox and check what score it gives your site and what recommendations it makes for optimising it.
Also, you should know that IE 6-8 is extremely slow at compiling JavaScript and at DOM manipulation. The crudest way of identifying JavaScript slowdowns that I know of is to simply comment out JavaScript functions on your page, one by one, until the site starts loading fast. Then you work on optimising whatever function you think loads slowly.
Without seeing any code, it's hard to say why these performance issues arise. One thing I can think of is how jQuery works in IE7.
Simply put, when you use a selector in jQuery (like $(".some-class")), jQuery will use the native document.querySelectorAll function, which queries the DOM using CSS selectors (unless you're using jQuery-specific selectors like :animated). However, IE7 does not implement querySelectorAll, which causes jQuery to search the DOM in a more iterative way. I'm not entirely sure how this works, but I'm sure one can find out at sizzlejs.org
Now if you have a very large HTML document in IE7, and you are, for instance, attaching events to each row in your table like so: $(".some-class-that-marks-as-clickable").click(...), jQuery will have to look for all these rows and apply the handler. If this is the case, it can easily be remedied by using the onclick attribute on each clickable element instead.
Of course, since you have not posted any code I cannot guarantee that this is your problem. I only know I had that exact problem a few years back, which caused IE7 to render the page in ~45 seconds, while Firefox did in less than one second.
My scenario is this: the user selects the list of reports they wish to print; once they have made their selection and clicked a button, I open another page with the selected reports ready for printing. I am using a session variable to pass the reports from one page to the other.
The first time you try it, it works fine; the second time, it opens the report window with the previously selected reports. I have to refresh the page to make sure it loads the latest selections.
Is there a way to get the latest value from the session every time you use it? Or is there a better way to solve this problem? Open to suggestions...
Thanks
C# Asp.net, IE&7 /IE 8
After doing some more checking, maybe checking out COMET will help.
The idea is that you can have code in your second page which will keep checking the server for updated values every few seconds and if it finds updated values it will refresh itself.
There are two very good links explaining the implementation:
Scalable COMET Combined with ASP.NET
Scalable COMET Combined with ASP.NET - Part 2
The first link explains what COMET is and how it ties in with ASP.NET, the second link has an example using a chat room. However, I'm sure the code querying for updates will be pretty generic and can be applied to your scenario.
I have never implemented COMET yet so I'm not sure how complex it is or if it is easy to implement into your solution.
Maybe someone developing the SO application is able to resolve this issue for you. SO uses some real-time feature for the notifications on a page, i.e: You are in the middle of writing an answer and a message pops up in your client letting you know someone else has added an answer and to click "here" to refresh.
The proper fix is to set the caching directives on the HTTP response correctly, so that the cached response is not reused without validation from the server.
When you fail to specify the cache lifetime, the client has to "guess" how long the response is good for, and the browser's guess probably isn't what you want. See http://blogs.msdn.com/b/ie/archive/2010/07/14/caching-improvements-in-internet-explorer-9.aspx
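In ASP.NET those directives can be set from the code-behind of the page being cached; a minimal sketch:
// Tell the browser (and any proxies) not to reuse this response without revalidating
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.Cache.SetNoStore();
Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));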
It's better to use URL parameters, so you can see the values of the parameters.
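A rough sketch of that idea (PrintReports.aspx, the ids parameter and selectedReportIds are made-up names; selectedReportIds is assumed to be a string[]):
// Selection page: put the chosen report IDs on the query string instead of in session
string url = "PrintReports.aspx?ids=" + HttpUtility.UrlEncode(string.Join(",", selectedReportIds));
Response.Redirect(url);

// Print page: read them back on every request, so stale session state is never an issue
string[] ids = (Request.QueryString["ids"] ?? string.Empty).Split(',');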
I have been given the task of crawling/parsing and indexing the available books on many library web pages. I usually use HTML Agility Pack and C# to parse website content. One of the pages is the following:
http://bibliotek.kristianstad.se/pls/bookit/pkg_www_misc.print_index?in_language_id=en_GB
If you search for * (all books) it returns many lists of books, paginated at 10 books per page.
Typical web crawlers that I have found fail on this website. I have also tried to write my own crawler, which would go through all the links on the page and generate POST/GET variables to dynamically generate results. I haven't been able to make this work either, mostly due to some 404 errors I get (although I am certain the links generated are correct).
The site relies on JavaScript to generate content, and uses a mixed mode of GET and POST variable submission.
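For reference, this is the kind of HTML Agility Pack code I would normally use; it works on static pages but not on the JavaScript-generated links here (sketch only):
// Load a page and pull out its links with HTML Agility Pack
var web = new HtmlAgilityPack.HtmlWeb();
var doc = web.Load("http://bibliotek.kristianstad.se/pls/bookit/pkg_www_misc.print_index?in_language_id=en_GB");

var links = doc.DocumentNode.SelectNodes("//a[@href]");
if (links != null)
{
    foreach (var link in links)
    {
        Console.WriteLine(link.GetAttributeValue("href", string.Empty));
    }
}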
I'm going out on a limb, but try observing the JavaScript GETs and POSTs with Fiddler and then you can base your crawling off of those requests. Fiddler has FiddlerCore, which you can put in your own C# project. Using this, you could monitor requests made in the WebBrowser control and then save them for crawling or whatever, later.
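A very rough sketch of capturing traffic with FiddlerCore (member names from memory, so treat them as approximate and check against the version you install):
// Log every request made while the page is being driven, so the same
// GET/POST pattern can be replayed by your crawler later
Fiddler.FiddlerApplication.BeforeRequest += session =>
{
    Console.WriteLine(session.RequestMethod + " " + session.fullUrl);
};

Fiddler.FiddlerApplication.Startup(8888, Fiddler.FiddlerCoreStartupFlags.Default);
// ... drive the page (e.g. in a WebBrowser control) here ...
Fiddler.FiddlerApplication.Shutdown();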
Going down the C# JavaScript-interpreter route sounds like the 'more correct' way of doing this, but I wager it will be much harder and fraught with errors and bugs unless you have the simplest of cases.
Good luck.
FWIW, the C# WebBrowser control is very, very slow. It also doesn't support more than two simultaneous requests.
Using SHDocVw is faster, but is also semaphore limited.
Faster still is using MSHTML. Working code here: https://svn.arachnode.net/svn/arachnodenet/trunk/Renderer/HtmlRenderer.cs Username/Password: Public (doesn't have the request/rendering limitations that the other two have when run out of process...)
This is headless, so none of the controls are rendered. (Faster).
Thanks,
Mike
If you use the WebBrowser control in a Windows Forms application to open the page then you should be able to access the DOM through the HtmlDocument. That would work for the HTML links.
As for the links that are generated through Javascript, you might look at the ObjectForScripting property which should allow you to interface with the HTML page through Javascript. The rest then becomes a Javascript problem, but it should (in theory) be solvable. I haven't tried this so I can't say.
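A minimal sketch of that WebBrowser/HtmlDocument approach (WinForms; link extraction only):
// Needs a Windows Forms project; the WebBrowser control wants an STA thread with a message pump
var browser = new System.Windows.Forms.WebBrowser();
browser.ScriptErrorsSuppressed = true;
browser.DocumentCompleted += (s, e) =>
{
    // The page (and, with luck, its scripts) has finished; walk the DOM
    foreach (System.Windows.Forms.HtmlElement link in browser.Document.Links)
    {
        Console.WriteLine(link.GetAttribute("href"));
    }
};
browser.Navigate("http://bibliotek.kristianstad.se/pls/bookit/pkg_www_misc.print_index?in_language_id=en_GB");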
If the site generates content with JavaScript, then you are out of luck. You need a full JavaScript engine usable in C# so that you can actually execute the scripts and capture the output they generate.
Take a look at this question: Embedding JavaScript engine into .NET -- but know that it will take "serious" effort to do what you need.
AbotX does JavaScript rendering for you. It's not free though.