I am facing a pesky problem at the moment on a large website with multiple languages. On arrival at the website, it detects what country you are from and prompts you to confirm it. On confirmation, it swaps out the page's language from the DB and displays the relevant language. This is done using jQuery. Now the problem is that Arabic reads right-to-left (RTL), so I need to either:
-- swap out the stylesheets for "rtl" version
or
-- change the HTML tag to include a dir="rtl" attribute
Now, I have tried both of these, and both failed. When I view the page source, it still shows the old CSS file or the HTML tag without the "dir" attribute. Correct me if I'm wrong, but I believe this is because the DOM is not registering the new changes, as they have happened asynchronously via jQuery after the DOM was instantiated.
After all that blah blah and tldr;
Is there not an easier way to swap out the text direction dynamically? If this is a DOM issue, how can I reload the DOM after the asynchronous callback?
I have been at this issue for hours now and have had very little luck on the interwebz.
Any and all help is welcome and greatly appreciated.
Kind Regards,
William Francis
EDIT:
After much investigation I found that the only way to truly work the Arabic way is with a post-back. Once the language has been selected you do a post-back, and then it's just a simple process of changing the stylesheet HREF attribute from code-behind. There doesn't seem to be any form of JavaScript or jQuery that can change it without a post-back and still reflect the new stylesheet. NOTE: you need to set the stylesheet HREF on each post-back, i.e. through a master page. The stylesheet changes do not persist across pages.
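For reference, a minimal sketch of what that code-behind swap might look like in the master page, assuming a link element with runat="server" for the stylesheet and a session value holding the chosen language (all IDs and keys here are hypothetical):

// Hedged sketch: runs in the master page code-behind on every request/post-back,
// since the change does not persist across pages. cssLink, htmlTag and the
// "SelectedLanguage" session key are assumed names, not from the question.
protected void Page_Load(object sender, EventArgs e)
{
    bool isArabic = (string)Session["SelectedLanguage"] == "ar";

    // cssLink is a <link rel="stylesheet" runat="server" id="cssLink" /> in the master page head.
    cssLink.Href = isArabic ? "~/css/site-rtl.css" : "~/css/site.css";

    // htmlTag is the <html runat="server" id="htmlTag"> element, if you also want dir="rtl".
    htmlTag.Attributes["dir"] = isArabic ? "rtl" : "ltr";
}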
Here's a website that helped greatly and explains a whole lot about stylesheet changes using JavaScript. Sadly, it didn't work for me.
http://www.alistapart.com/articles/alternate/
There could be several things going on. I found this page to be very helpful when I was dealing with a similar thing, so I highly recommend it:
http://www.w3.org/International/tutorials/bidi-xhtml/
Also, if you aren't already doing so, use a tool like Firebug to examine the generated DOM after your AJAX has run, to be sure you are seeing the altered state of the DOM and not the initial source of the page. It is possible to change the dir dynamically; you can use Firebug to add a new attribute to the HTML tag of this very page (set dir="rtl") and see it change on the fly. It could be that some other element is overriding the direction, that the AJAX changes aren't loading correctly, or something else. If you can post more of your code it would be helpful for giving a better answer, but I hope this will help.
I have googled this with a couple of differing terms and I could not find my solution. What I want to do is manipulate a webpage by editing its source, for example removing a part of the code, maybe a div or so. I know how to get the source of a webpage and how to change the code, but I have no idea how to manipulate the page instantly, for example by removing an element.
Your help would be appreciated!
If you want to manipulate client code (HTML), what you need is Ajax.
You can use the jQuery JavaScript library to manipulate the HTML of a page, adding, editing and removing HTML tags, scripts, etc.
Here you can find a decent tutorial as a starting point.
If you want to manipulate server code (C# code-behind), what you need is to create a web project in Visual Studio (ASP.NET Web Application).
EDIT: As commented by @CSharpened, the two solutions are not mutually exclusive. You can have an ASP.NET Web Application that uses Ajax to manipulate the UI. In fact, a lot of people do that.
I would consider using AJAX. You can use either JavaScript or jQuery coupled with HTML, ASP.NET and C# to achieve the results you are after.
For simple editing like removing divs or collapsing menus, plain JavaScript or jQuery will suffice. However, changing the coding of the page requires you to use AJAX or similar.
I'm trying to scrape a page. Everything is OK, but when values are updated, the source code of the page stays the same for a minute. Even when I refresh the page with a slow internet connection, I first see old data, and only after the page has fully loaded are the values current.
I guess JavaScript updates them. But it still has to download them somehow.
How can I get the current values?
I write my program in C#, but if you have some ideas/advice/examples, the language doesn't really matter.
Thank you.
You're right - JavaScript is probably updating the data after load.
I can think of three ways to handle this:
Use a WebBrowser control - I guess you're using the HttpWebRequest object to retrieve values from the site. That won't work if you need to let the JavaScript run. You can use the WebBrowser control instead, let the JavaScript run, and retrieve values from the DOM. The only thing I don't like about this approach is that it feels like a hack and is probably too clunky for production applications. You also need to know when to read the contents of the DOM (an update might be in progress in the background). Google "C# WebBrowser Control Read DOM Programmatically" or you can read more about that here.
Call the background URL yourself - I personally prefer this over the previous option, but it doesn't work all the time. First you need to inspect the website with Firebug or something similar and see which URLs are called in the background. Say for example the site is updating stock quotes using JavaScript; most likely it's using an asynchronous request to retrieve the updated information from a web service. Using Firebug, you can view this under NET > XHR. Now for the hard part: take a look at the request and the values returned. The idea is that you can retrieve the values yourself and parse the contents, which can be a lot easier than scraping a page (see the sketch after this list). The problem is that you would need to do a bit of reverse engineering to get it right. You might also encounter problems with authentication and/or encryption.
Lastly, and my most preferred solution: ask the owner of the site you are scraping directly.
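As a rough sketch of the second approach: once Firebug has shown you the background URL, you can fetch it directly and parse the payload yourself. The URL, header and response shape below are made up for illustration; the real ones come from the NET > XHR tab.

// Hedged sketch: call the background endpoint directly instead of scraping the rendered page.
// The URL and the idea that it returns JSON are assumptions for this example.
using System;
using System.Net;

class QuoteFetcher
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Some services expect the same headers the browser sends.
            client.Headers[HttpRequestHeader.UserAgent] = "Mozilla/5.0";

            string payload = client.DownloadString("http://example.com/quotes/update?symbol=MSFT");

            // Parse the returned payload here; for the sketch we just print it.
            Console.WriteLine(payload);
        }
    }
}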
I think the WebBrowser control approach is probably OK and doesn't depend on third party libraries. Here is what I intend to use and it solves the problem of waiting for the page to complete loading:
private string ReadPage(string link)
{
    // Navigate the WebBrowser control and pump messages until the document finishes loading.
    this.wbrwPages.Navigate(link);
    while (this.wbrwPages.ReadyState != WebBrowserReadyState.Complete)
    {
        Application.DoEvents();
    }

    // Return the raw HTML after any JavaScript on the page has had a chance to run.
    return this.wbrwPages.DocumentText;
}
I will get information out of the HTML through some form of DOM or XPath treatment. I am curious if others will have comments about entering the 'while' loop and depending upon the 'complete' state to get me out of it. I may put a timer of some sort in there as well - just to be safe.
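If you do add a timer, a minimal sketch of the same loop with a bail-out might look like this (the 30-second limit and the method name are my own assumptions):

// Hedged sketch: same wait loop, but with a timeout so a page that never reaches
// the Complete state cannot hang the method. 30 seconds is an arbitrary choice.
private string ReadPageWithTimeout(string link)
{
    this.wbrwPages.Navigate(link);

    var watch = System.Diagnostics.Stopwatch.StartNew();
    while (this.wbrwPages.ReadyState != WebBrowserReadyState.Complete
           && watch.Elapsed < TimeSpan.FromSeconds(30))
    {
        Application.DoEvents();
    }

    return this.wbrwPages.DocumentText;
}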
Does anybody have any idea on crawling websites that have dynamic pages/queries? I mean, if I click a certain link, it has different values every time I try to reload it in a web browser. Now my web crawler cannot download the contents of these pages. Please advise.
It works the same way whether the page is dynamic or not. Actually, a crawler is only a matter of 3 things (as sketched below):
The URL
The data sent to the server, if it is a POST method
The cookie, if authentication is required
That's all.
The common problems when writing a crawler:
Mis-guessing the default page [index.html, index.php, default.aspx etc.]; actually it will work without it for either method [POST/GET]
A field name that is not written exactly as the server expects
The ASP.NET form viewstate field (I forgot the name), but it can be handled easily
Pages generated dynamically by JavaScript; this is the hardest part, and in many cases even Google still has problems with it.
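For illustration, here is a rough sketch of those 3 things in C#. The URL, field names and the viewstate value are assumptions; the real values come from inspecting the target form.

// Hedged sketch: one POST request carrying form data and a cookie container.
// __VIEWSTATE is the standard ASP.NET hidden field; its value must be copied from the rendered form.
using System;
using System.IO;
using System.Net;
using System.Text;

class SimpleCrawler
{
    static void Main()
    {
        // 3. The cookie, if authentication is required.
        var cookies = new CookieContainer();

        // 1. The URL (made up for this example).
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/page.aspx");
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        request.CookieContainer = cookies;

        // 2. The data sent to the server (field names here are assumptions).
        string postData = "field1=value1&__VIEWSTATE=copied-from-the-rendered-form";
        byte[] body = Encoding.UTF8.GetBytes(postData);
        request.ContentLength = body.Length;
        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(body, 0, body.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}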
Hope that helps.
You might want to look at this question, which details how to write a crawler, or look at the source code for http://searcharoo.net/, which contains a good crawler (see here).
I have the requirement to create a page which contains a graph at the top, and for each item in the graph there's a fact sheet below. I already produce the fact sheets as stand-alone pages. Now, rather than recreating the fact sheet to include in the page I have to create, I'd like to use the work that already exists.
Is it realistic that I dynamically generate each fact sheet as needed, strip out the body and insert that into the new page? If so, does anyone have any pointers or suggestions? Thanks
I would suggest moving the content of the existing page into an ASCX user control - it should be a fairly quick job, and then you can incorporate it into other pages as required.
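As an illustration, here is a rough sketch of dropping the extracted control into the new page from code-behind; the control path, type name, property and placeholder are hypothetical:

// Hedged sketch: load the extracted ASCX dynamically for each item in the graph.
// "~/Controls/FactSheet.ascx", FactSheetControl, ItemId, GetGraphItems and
// factSheetPlaceholder are assumed names, not from the question.
protected void Page_Load(object sender, EventArgs e)
{
    foreach (var item in GetGraphItems())   // assumed data source for the graph
    {
        var factSheet = (FactSheetControl)LoadControl("~/Controls/FactSheet.ascx");
        factSheet.ItemId = item.Id;                      // hand the control what it needs to render
        factSheetPlaceholder.Controls.Add(factSheet);    // a PlaceHolder below the graph
    }
}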
A quick way to implement this feature would be to use an iframe to load the other pages into. This would allow you to keep using the work that already exists. Not the most elegant solution, but it would work.
Hope this helps.
No, generating stuff you don't need and then removing it is inelegant at best, hackish at worst, and will likely lower the quality (including reliability, security, etc.) of your code.
Refactor your fact sheet page code to generate a header, but use a self-contained class/widget to generate the actual fact sheet content. Then just instantiate and use that same class/widget on the other page as well.
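For example, a bare-bones sketch of such a self-contained class (all names and the rendering approach here are hypothetical):

// Hedged sketch: a fact sheet renderer that both the stand-alone page and the
// combined graph page can instantiate. FactSheetWidget and its members are assumed names.
public class FactSheetWidget
{
    private readonly int itemId;

    public FactSheetWidget(int itemId)
    {
        this.itemId = itemId;
    }

    // Renders only the fact sheet body, so the stand-alone page adds its own header
    // and the combined page calls this once per item in the graph.
    public void RenderTo(System.Web.UI.HtmlTextWriter writer)
    {
        writer.RenderBeginTag(System.Web.UI.HtmlTextWriterTag.Div);
        writer.Write(LoadFactSheetHtml(itemId));   // assumed data-access helper
        writer.RenderEndTag();
    }

    private string LoadFactSheetHtml(int id)
    {
        // Placeholder: fetch the fact sheet content for this item from the database.
        return "<p>Fact sheet for item " + id + "</p>";
    }
}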
A similar question was asked here about storing information in a given HTML element.
I'm still green to jQuery, but I'm looking for the best way to store information on the page. I have a Repeater that holds one image per item. These images are clickable and can fire a given jQuery event. The issue I'm having is that the objects the Repeater is bound to hold some specific information (such as "Subtext", "LargerImage", etc.) which I would like to be accessible from the page.
Core/Data in jQuery accomplishes this just fine; however, we would still need to build the jQuery statement from C#, as all the data is stored on the server. To clarify a bit, this is storing information on the page from a database, which is a bit different from arbitrary information being made available through jQuery.
I'm not restricting this question to "how to bind a custom attribute to an element", because I did come across the idea of generating a JS struct from the C# code-behind to store information, but I'm avoiding any code-generating-code scenarios (or trying to).
Custom attributes from HTML5 (i.e. "data-subtext") are also a possibility, as I can easily add those from the ItemDataBound event:
sampleImageElement.Attributes.Add("data-subtext", "And this what the image is about");
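In context, the ItemDataBound handler might look roughly like this; the repeater name, control ID and the bound item's type and properties are assumptions:

// Hedged sketch of the ItemDataBound handler; rptImages, imgThumb, ImageInfo and its
// Subtext/LargerImage properties are assumed names. Uses System.Web.UI.WebControls types.
protected void rptImages_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem)
    {
        var data = (ImageInfo)e.Item.DataItem;
        var sampleImageElement = (System.Web.UI.HtmlControls.HtmlImage)e.Item.FindControl("imgThumb");

        // Attach the server-side data as HTML5 data- attributes for jQuery to read.
        sampleImageElement.Attributes.Add("data-subtext", data.Subtext);
        sampleImageElement.Attributes.Add("data-largerimage", data.LargerImage);
    }
}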
I'm a bit confused on browser support for this specific attribute though, or if it is even best practice so early in the game. If custom attributes are the way to go, that's an easy change to make happen. If jQuery can accomplish the same, I'd love to be pointed that way at least for my own understanding.
Any thoughts are greatly appreciated.
I'm answering this question only for the record-keeping purposes of Stack Overflow, as this is the solution I've moved forward with for this scenario. An AJAX call is completely warranted for any larger dataset and is a direction I would definitely go otherwise.
I went ahead with the "data-" attributes from the HTML5 spec, read on the client via the jQuery metadata plugin.
I wrote a short extension method on the Web.UI.AttributeCollection class called "AddMetaData", which accepts an IList as well as a string "Key" to ease attaching the data to a given page element.
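The extension method itself isn't shown here, but a rough sketch under those assumptions might look like the following; flattening the list into one comma-separated data- attribute is my own choice, not necessarily the original implementation:

// Hedged sketch of an "AddMetaData" extension method as described above.
using System.Collections.Generic;
using System.Web.UI;

public static class AttributeCollectionExtensions
{
    public static void AddMetaData(this AttributeCollection attributes, string key, IList<string> values)
    {
        // Store the list as a comma-separated value in an HTML5 data- attribute,
        // e.g. key "subtext" becomes data-subtext="value1,value2".
        attributes.Add("data-" + key, string.Join(",", values));
    }
}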
I'm not marking this as the answer just yet, as there might be some community feedback on my own direction.
To clarify what happens in ASP.NET, once the page is served to the client, the objects that the Repeater is bound to on the server are destroyed and are then recreated upon each page postback.
It sounds like you want to achieve some kind of tooltip effect where the contents are retrieved from the server through AJAX? There are numerous different tooltip options available that can be used to do this:
jQuery Tooltip plugin
Random.Next()'s jQuery AJAX tooltip
dhtml goodies AJAX tooltip
clueTip
You could then set up a web service or page method to retrieve the relevant data from your datasource.
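For example, a minimal sketch of a page method the tooltip plugin could call via AJAX; the method name, parameter and data-access helper are assumptions:

// Hedged sketch of an ASP.NET page method; GetSubtext and ImageRepository are assumed names.
[System.Web.Services.WebMethod]
public static string GetSubtext(int imageId)
{
    // Look the item up again on the server, since the objects the Repeater was
    // bound to no longer exist after the original request completes.
    return ImageRepository.GetSubtext(imageId);   // hypothetical data-access helper
}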
Of course, you could have the content rendered in the HTML sent to the client when the request is processed and simply hide this markup. Then write your own plugin to display the markup in the form you require.