I've tried to check other answers on this site, but none of them worked for me. I have the following HTML code:
<h3 class="x-large lheight20 margintop5">
<a href="#">
<strong>some textstring</strong>
</a>
</h3>
I am trying to get the "#" (the href value) from this document with the following code:
string adUrl = Doc.DocumentNode.SelectSingleNode("//*[@id=\"offers_table\"]/tbody/tr[" + i + "]/td/table/tbody/tr[1]/td[2]/div/h3/a/@href").InnerText;
I've also tried to do that without @href, and with a[contains(@href, 'searchString')]. But all of these lines gave me just the name of the link - some textstring.
Attributes don't have InnerText. You have to use the Attributes collection instead:
string adUrl = Doc.DocumentNode.SelectSingleNode("//*[@id=\"offers_table\"]/tbody/tr[" + i + "]/td/table/tbody/tr[1]/td[2]/div/h3/a")
    .Attributes["href"].Value;
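If there is any chance the node or the attribute is missing, a null check plus GetAttributeValue is safer. A minimal sketch, using the same XPath as above:

// Sketch only: guards against a missing node or attribute instead of throwing.
var linkNode = Doc.DocumentNode.SelectSingleNode(
    "//*[@id=\"offers_table\"]/tbody/tr[" + i + "]/td/table/tbody/tr[1]/td[2]/div/h3/a");
string adUrl = linkNode != null
    ? linkNode.GetAttributeValue("href", string.Empty) // "" when href is absent
    : null;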
Why not just use the XDocument class?
using System.Linq;
using System.Xml.Linq;

private string GetUrl(string filename)
{
    var doc = XDocument.Load(filename);
    // Look at every h3 element that carries a class attribute.
    foreach (var h3Element in doc.Descendants("h3").Where(e => e.Attribute("class") != null))
    {
        var classAtt = h3Element.Attribute("class");
        if (classAtt.Value == "x-large lheight20 margintop5")
        {
            return h3Element.Element("a").Attribute("href").Value;
        }
    }
    return null;
}
The code is not tested so use with caution.
So I've been trying to get a program working where I get info from Google Finance regarding different stock stats. So far I have not been able to get information out of spans. As of now I have hardcoded direct access to the Apple stock.
Link to Apple stock: https://www.google.com/finance?q=NASDAQ%3AAAPL&ei=NgItWIG1GIftsAHCn4zIAg
What I can't understand is that I receive the correct output when I try it in the Chrome console with the following command:
$x("//*[@id=\"appbar\"]//div//div//div//span");
This is my current code in Visual Studio 2015 with Html Agility Pack installed (I suspect a fault in currDocNodeCompanyName):
class StockDataAccess
{
    HtmlWeb web = new HtmlWeb();
    private List<string> testList;

    public void FindStock()
    {
        var histDoc = web.Load("https://www.google.com/finance/historical?q=NASDAQ%3AAAPL&ei=q9IsWNm4KZXjsAG-4I7oCA.html");
        var histDocNode = histDoc.DocumentNode.SelectNodes("//*[@id=\"prices\"]//table//tr//td");
        var currDoc = web.Load("https://www.google.com/finance?q=NASDAQ%3AAAPL&ei=CdcsWMjNCIe0swGd3oaYBA.html");
        var currDocNodeCurrency = currDoc.DocumentNode.SelectNodes("//*[@id=\"ref_22144_elt\"]//div//div");
        var currDocNodeCompanyName = currDoc.DocumentNode.SelectNodes("//*[@id=\"appbar\"]//div//div//div//span");

        var histDocText = histDocNode.Select(node => node.InnerText);
        var currDocCurrencyText = currDocNodeCurrency.Select(node => node.InnerText);
        var currDocCompanyName = currDocNodeCompanyName.Select(node => node.InnerText);

        List<String> result = new List<string>(histDocText.Take(6));
        result.Add(currDocCurrencyText.First());
        result.Add(currDocCompanyName.Take(2).ToString());
        testList = result;
    }

    public List<String> ReturnStock()
    {
        return testList;
    }
}
I have been trying the XPath expression [text] and received output that I can work with in the Chrome console, but not in VS. I have also been experimenting with a foreach loop, which a few people have suggested to others:
class StockDataAccess
{
    HtmlWeb web = new HtmlWeb();
    private List<string> testList;

    public void FindStock()
    {
        // same as before
        var currDoc = web.Load("https://www.google.com/finance?q=NASDAQ%3AAAPL&ei=CdcsWMjNCIe0swGd3oaYBA.html");
        HtmlNodeCollection currDocNodeCompanyName = currDoc.DocumentNode.SelectNodes("//*[@id=\"appbar\"]//div//div//div//span");
        // same as before

        List<string> blaList = new List<string>();
        foreach (HtmlNode x in currDocNodeCompanyName)
        {
            blaList.Add(x.InnerText);
        }

        List<String> result = new List<string>(histDocText.Take(6));
        result.Add(currDocCurrencyText.First());
        result.Add(blaList[1]);
        result.Add(blaList[2]);
        testList = result;
    }

    public List<String> ReturnStock()
    {
        return testList;
    }
}
I would really appreciate it if anyone could point me in the right direction.
If you check the contents of currDoc.DocumentNode.InnerHtml, you will notice that there is no element with the id "appbar", so the result is correct: the XPath doesn't return anything.
I suspect that the HTML element you're trying to find is generated by a script (JavaScript, for example), which explains why you can see it in the browser but not in the HtmlDocument object: HtmlAgilityPack does not run scripts, it only downloads and parses the raw source code.
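A quick way to confirm this is to dump what HtmlAgilityPack actually downloaded and query it for the id. A minimal sketch (the output file name is just an example):

var web = new HtmlWeb();
var currDoc = web.Load("https://www.google.com/finance?q=NASDAQ%3AAAPL");

// Save the raw, script-free HTML so you can inspect it by hand.
System.IO.File.WriteAllText("currDoc.html", currDoc.DocumentNode.InnerHtml);

// If this is null, the element is injected later by JavaScript.
var appbar = currDoc.DocumentNode.SelectSingleNode("//*[@id=\"appbar\"]");
Console.WriteLine(appbar == null ? "appbar not in the static HTML" : "appbar found");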
I've got a few web pages that have static data in HTML markup tables. By this I mean manually maintained text:
<table border="1" >
<tr><th>Number</th><th>Date</th><th>BW</th><th>WW</th><th>%</th><th>Type</th><th>CED</th><th>BW</th><th>WW</th><th>YW</th><th>Mlk</th><th>Me</th></tr>
<tr><td>313</td><td>9/16/2013</td><td>74</td><td>512</td><td>100</td><td>861U</td><td>3</td><td>-1.1</td><td>54</td><td>85</td><td>16</td><td></td></tr>
<tr><td>315</td><td>10/6/2013</td><td>-</td><td>-</td><td>-</td><td>W179</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>316</td><td>10/102013</td><td>72</td><td>595</td><td>94.2</td><td>W179</td><td>7</td><td>-2.3</td><td>53</td><td>80</td><td>21</td><td>-3</td></tr>
<tr><td>350</td><td>10/11/2013</td><td>71</td><td>703</td><td>100</td><td>W179</td><td>7</td><td>-2.3</td><td>46</td><td>72</td><td>20</td><td>-5</td></tr>
<tr><td>392</td><td>3/8/2013</td><td>61</td><td>651</td><td>100</td><td>RANGER</td><td>7</td><td>-2.3</td><td>52</td><td>82</td><td>20</td><td>-2</td></tr>
<tr><td>303</td><td>7/3/2013</td><td>63</td><td>-</td><td>97.1</td><td>W179</td><td>8</td><td>-3.2</td><td>N/A</td><td>82</td><td>21</td><td>-8</td></tr>
<tr><td>304</td><td>7/8/2013</td><td>62</td><td>-</td><td>97.1</td><td>W179</td><td>7</td><td>-3.9</td><td>N/A</td><td>69</td><td>20</td><td>-4</td></tr>
<tr><td>397</td><td>3/18/2013</td><td>78</td><td>621</td><td>100</td><td>STATEMENT</td><td>6</td><td>-2.7</td><td>55</td><td>84</td><td>19</td><td>5</td></tr>
<tr><td>395</td><td>3/17/2013</td><td>63</td><td>716</td><td>94.2</td><td>STATEMENT</td><td>5</td><td>-2.7</td><td>54</td><td>85</td><td>19</td><td>5</td></tr>
<tr><td>390</td><td>3/6/2013</td><td>66</td><td>583</td><td>94.2</td><td>ENVY</td><td>2</td><td>-0.6</td><td>55</td><td>80</td><td>23</td><td>2</td></tr>
<tr><td>388</td><td>3/4/2013</td><td>53</td><td>621</td><td>100</td><td>STATEMENT</td><td>10</td><td>-5.1</td><td>49</td><td>82</td><td>20</td><td>2</td></tr>
<tr><td>300</td><td>3/22/2013</td><td>61</td><td>633</td><td>100</td><td>RANGER</td><td>8</td><td>-2.8</td><td>49</td><td>81</td><td>19</td><td>-2</td></tr>
<tr><td>379</td><td>2/1/2013</td><td>55</td><td>518</td><td>100</td><td>STATEMENT</td><td>8</td><td>-4.1</td><td>61</td><td>98</td><td>18</td><td>1</td></tr>
<tr><td>398</td><td>3/20/2013</td><td>62</td><td>664</td><td>100</td><td>RANGER</td><td>6</td><td>-2.3</td><td>53</td><td>83</td><td>20</td><td>0</td></tr>
<tr><td>384</td><td>2/10/2013</td><td>61</td><td>650</td><td>100</td><td>ENVY</td><td>3</td><td>-1</td><td>50</td><td>70</td><td>19</td><td>4</td></tr>
<tr><td>369</td><td>1/30/2013</td><td>76</td><td>651</td><td>100</td><td>STATEMENT</td><td>5</td><td>-2.4</td><td>60</td><td>99</td><td>20</td><td>8</td></tr>
<tr><td>373</td><td>1/21/2013</td><td>71</td><td>433</td><td>100</td><td>STATEMENT</td><td>4</td><td>-1.6</td><td>55</td><td>89</td><td>17</td><td>3</td></tr>
<tr><td>393</td><td>3/10/2013</td><td>63</td><td>717</td><td>100</td><td>STATEMENT</td><td>3</td><td>-4.6</td><td>51</td><td>91</td><td>20</td><td>5</td></tr>
<tr><td>389</td><td>3/8/2013</td><td>72</td><td>723</td><td>88.3</td><td>ENVY</td><td>4</td><td>-0.6</td><td>54</td><td>76</td><td>24</td><td>2</td></tr>
<tr><td>364</td><td>10/1/2012</td><td>60</td><td>574</td><td>100</td><td>RANGER</td><td>1</td><td>0.4</td><td>56</td><td>84</td><td>21</td><td>2</td></tr>
</table>
Currently, I am contemplating using WebClient.DownloadString to pull all of the text in, and trying to create an XML file out of it by parsing each row <tr>.
That sounds tedious, and I would rather not reinvent the wheel. Besides, a few good solutions would give me something to look at for ideas on how to best approach writing my version.
Has anyone come across some code that can do this?
I've started, to give you an idea of what I'm working on:
private const string XML_DATA = "App_Data/page_data.xml";
private const string TABLE_START = "<table>";
private const string TABLE_STOP = "</table>";
private string[] TABLE_ROW = { "<tr>", "</tr>" };
private string[] TABLE_HEAD = { "<th>", "</th>" };
private string[] TABLE_DET = { "<td>", "</td>" };

private void load_data() {
    if (!File.Exists(XML_DATA)) {
        string HtmlText;
        using (var client = new WebClient()) {
            HtmlText = client.DownloadString(Server.MapPath("/Sales.aspx"));
        }
        if (!String.IsNullOrEmpty(HtmlText)) {
            var lcTxt = HtmlText.ToLower();
            int len0 = TABLE_START.Length;
            int tStart = lcTxt.IndexOf(TABLE_START) + len0;
            int tStop = lcTxt.IndexOf(TABLE_STOP);
            if ((len0 < tStart) && (tStart < tStop)) {
                var tableString = HtmlText.Substring(tStart, tStop - tStart);
                var tableRows = tableString.Split(TABLE_ROW, StringSplitOptions.RemoveEmptyEntries);
                foreach (var row in tableRows) {
                    if (-1 < row.IndexOf(TABLE_HEAD[0])) {
                        //
                    } else {
                        //
                    }
                }
            }
        }
    }
}
Of course, you can see that this is already going to fail, because the markup uses <table border="1">.
Yes, easy to fix, but I'd rather have a working guide that has already been through a lot of debugging steps.
UPDATE: I tried using XmlDocument's LoadXml method, but it can't seem to read basic HTML.
You definitely shouldn't be trying to parse that manually. Other people have already solved that problem.
If your markup is valid XML (and from what you've shown us, it looks like it is), then you can just parse it as XML:
XmlDocument doc = new XmlDocument();
doc.LoadXml(HtmlString);
doc.Save("myfile.xml");
But for that matter, if it's already valid XML markup, and all you need to do is save it as a file, then you don't need to parse it. Just save it:
File.WriteAllText("myfile.xml", HtmlString);
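And if you do want the row data rather than just a copy of the file, a LINQ to XML pass over the same string is short. A sketch (using System.Xml.Linq and System.Linq; it assumes the <table> element shown above is the root of HtmlString):

var table = XDocument.Parse(HtmlString).Root;

// The first <tr> holds the <th> headers, the rest are data rows.
var headers = table.Elements("tr").First()
                   .Elements("th").Select(th => (string)th).ToList();
var rows = table.Elements("tr").Skip(1)
                .Select(tr => tr.Elements("td").Select(td => (string)td).ToList())
                .ToList();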
I work with Umbraco from a console application.
When I try to get NiceUrl for a node, it is impossible because UmbracoContext.Current is null.
I can get the node path with IDs, like this: "-1,1067,1080", but I don't know how to convert that into URL format.
How can I get NiceUrl for a node in a console application?
Here is what I did.
In my console application I get a node by ID, simply like this:
Node someNode = new Node(nodeId);
When I try to get NiceUrl:
string url = someNode.NiceUrl;
I get an ArgumentNullException.
I checked why: I found an answer explaining that NiceUrl uses UmbracoContext, so it is not possible because the context is null.
I also can't use this: UmbracoContext.Current.ContentCache.GetById(someidhere).Url
Thanks.
Without the UmbracoContext, I don't think it's possible in V6 to get the URL of an IContent node.
I looked through the Umbraco source code and decided to recreate the way it's done there. I came up with this, which worked for my needs.
https://gist.github.com/petergledhill/ca2a3a0ea81b06abcb08
public static class ContentExtensions
{
    public static string RelativeUrl(this IContent content)
    {
        var pathParts = new List<string>();
        var n = content;
        while (n != null)
        {
            pathParts.Add(n.UrlName());
            n = n.Parent();
        }
        pathParts.RemoveAt(pathParts.Count() - 1); // remove root node
        pathParts.Reverse();
        var path = "/" + string.Join("/", pathParts);
        return path;
    }

    public static string UrlName(this IContent content)
    {
        return new DefaultUrlSegmentProvider().GetUrlSegment(content).ToLower();
    }
}
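Hypothetical usage in the console app (it assumes the Umbraco application has already been bootstrapped there, and 1080 is just an example id taken from the path in the question):

// Resolve the node through the ContentService rather than UmbracoContext.
IContent content = ApplicationContext.Current.Services.ContentService.GetById(1080);
string url = content.RelativeUrl(); // e.g. "/parent-page/my-page"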
Yes, you can't use UmbracoContext.Current.ContentCache, because that is accessing the same (null) context.
It looks like you are using v6+, so instead you will need to use the API services that Umbraco provides, specifically the ContentService.
There is a thread here that looks into the same thing you are asking: http://our.umbraco.org/forum/developers/api-questions/37981-Using-v6-API-ContentService-in-external-application
And an example of a solution here: https://github.com/sitereactor/umbraco-console-example
I'm using .NET C#. I am trying to parse JSON from a web service. I have done it with text, but I'm having a problem with parsing the image. Here is the URL from where I'm getting the JSON:
http://collectionking.com/rest/view/items_in_collection.json?args=122
And this is my code to parse it:
using (var wc = new WebClient()) {
    JavaScriptSerializer js = new JavaScriptSerializer();
    var result = js.Deserialize<ck[]>(wc.DownloadString("http://collectionking.com/rest/view/items_in_collection.json args=122"));
    foreach (var i in result) {
        lblTitle.Text = i.node_title;
        imgCk.ImageUrl = i.["main image"];
        lblNid.Text = i.nid;
    }
}
Any help would be great.
Thanks in advance.
PS: It returns the Title and Nid but not the Image.
My class is as follows:
public class ck
{
    public string node_title;
    public string main_image;
    public string nid;
}
Your problem is that you are setting ImageUrl to something like <img typeof="foaf:Image" src="http://... and not an actual URL. You will need to further parse main image and extract the URL to show it correctly.
Edit
This was a tough nut to crack because of the whitespace in the property name. The only solution I could find was to rewrite the property name (replacing "main image" with "main_image") before deserializing the string. It's not a very nice solution, but I couldn't find any other way using the built-in classes. You might be able to solve it properly using JSON.Net or some other library, though.
I also added a regular expression to extract the URL for you, though there is no error checking whatsoever here, so you'll need to add that yourself.
using (var wc = new WebClient()) {
    JavaScriptSerializer js = new JavaScriptSerializer();
    var result = js.Deserialize<ck[]>(wc.DownloadString("http://collectionking.com/rest/view/items_in_collection.json?args=122").Replace("\"main image\":", "\"main_image\":")); // Replace the name "main image" with "main_image" to deserialize it properly, also fixed missing ? in url
    foreach (var i in result) {
        lblTitle.Text = i.node_title;
        string realImageUrl = Regex.Match(i.main_image, @"src=""(.*?)""").Groups[1].Value; // Extract the value of the src-attribute to get the actual url, will throw an exception if there isn't a src-attribute
        imgCk.ImageUrl = realImageUrl;
        lblNid.Text = i.nid;
    }
}
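If pulling in a library is an option, the JSON.Net route mentioned above avoids the string replacement entirely. A sketch (it assumes the Newtonsoft.Json package and a renamed DTO class, CkItem, so it doesn't clash with ck above):

using Newtonsoft.Json;

public class CkItem
{
    [JsonProperty("node_title")] public string NodeTitle { get; set; }
    [JsonProperty("main image")] public string MainImage { get; set; } // maps the key that contains a space
    [JsonProperty("nid")] public string Nid { get; set; }
}

// Then deserialize the raw response without any pre-processing:
var items = JsonConvert.DeserializeObject<CkItem[]>(wc.DownloadString(url));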
Try this:
private static string ExtractImageFromTag(string tag)
{
    int start = tag.IndexOf("src=\""),
        end = tag.IndexOf("\"", start + 6);
    return tag.Substring(start + 5, end - start - 5);
}

private static string ExtractTitleFromTag(string tag)
{
    int start = tag.IndexOf(">"),
        end = tag.IndexOf("<", start + 1);
    return tag.Substring(start + 1, end - start - 1);
}
It may help.
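For example (hypothetical usage, applied to the main_image value from the question's loop):

imgCk.ImageUrl = ExtractImageFromTag(i.main_image); // pulls the src="..." value out of the <img ...> markup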
I am trying to figure out how to read header links using C#.NET. I want to get the edit link from browser 1 and put it in browser 2. My problem is that I can't figure out how to get at the attributes, or even the link tags for that matter. Below is what I have now:
using System.Xml.Linq;
...
string source = webKitBrowser1.DocumentText.ToString();
XDocument doc = new XDocument(XDocument.Parse(source));
webKitBrowser2.Navigate(doc.Element("link").Attribute("href").Value.ToString());
This would work except that XML is different from HTML, and right off the bat it says that it was expecting "doctype" to be uppercase.
I finally figured it out, so I will post it for anyone who has the same question.
string site = webKitBrowser1.Url.Scheme + "://" + webKitBrowser1.Url.Authority;
WebKit.DOM.Document doc = webKitBrowser1.Document;
WebKit.DOM.NodeList links = doc.GetElementsByTagName("link");
WebKit.DOM.Element link;
string editlink = "none";
foreach (var item in links)
{
    link = (WebKit.DOM.Element)item;
    if (link.Attributes["rel"].NodeValue == "edit") { editlink = link.Attributes["href"].NodeValue; }
}
if (editlink != "none") { webKitBrowser2.Navigate(site + editlink); }