Windows Forms app: find a link on the web - C#

I need to create a method that finds the newest version of an application on a website (a Hudson server) and lets the user download it.
Until now I have used a regex to scan the whole HTML, find the href tags, and search for the string I want.
I want to know if there is a simpler way to do this.
Here is the code I use today:
using System.Collections.Generic;
using System.Text.RegularExpressions;

namespace SDKGui
{
public struct LinkItem
{
public string Href;
public string Text;
public override string ToString()
{
return Href;
}
}
static class LinkFinder
{
public static string Find(string file)
{
string t=null;
List<LinkItem> list = new List<LinkItem>();
// 1.
// Find all matches in file.
MatchCollection m1 = Regex.Matches(file, @"(<a.*?>.*?</a>)",
RegexOptions.Singleline);
// 2.
// Loop over each match.
foreach (Match m in m1)
{
string value = m.Groups[1].Value;
LinkItem i = new LinkItem();
// 3.
// Get href attribute.
Match m2 = Regex.Match(value, @"href=\""(.*?)\""",
RegexOptions.Singleline);
if (m2.Success)
{
i.Href = m2.Groups[1].Value;
}
// 4.
// Remove inner tags from text.
t = Regex.Replace(value, @"\s*<.*?>\s*", "",
RegexOptions.Singleline);
if (t.Contains("hms_sdk_tool_"))
{
i.Text = t;
list.Add(i);
break;
}
}
return t;
}
}
}

It is easy to collect all href values and filter them against any condition using HtmlAgilityPack. The following method shows how to load a page, get all <a> tags, and return a list of all href values containing hms_sdk_tool_:
private List<string> HtmlAgilityCollectHrefs(string url)
{
var webGet = new HtmlAgilityPack.HtmlWeb();
var doc = webGet.Load(url);
var a_nodes = doc.DocumentNode.SelectNodes("//a");
return a_nodes.Select(p => p.GetAttributeValue("href", "")).Where(n => n.Contains("hms_sdk_tool_")).ToList();
}
Or, if you are interested in a single return string, use:
private string GetLink(string url)
{
var webGet = new HtmlAgilityPack.HtmlWeb();
var doc = webGet.Load(url);
var a_nodes = doc.DocumentNode.SelectNodes("//a");
return a_nodes.Select(p => p.GetAttributeValue("href", "")).Where(n => n.Contains("hms_sdk_tool_")).FirstOrDefault();
}
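Note that HtmlAgilityPack's SelectNodes returns null when nothing matches, so a defensive variant might look like this (a sketch reusing the hms_sdk_tool_ filter from the question):
private string GetLinkSafe(string url)
{
var webGet = new HtmlAgilityPack.HtmlWeb();
var doc = webGet.Load(url);
var a_nodes = doc.DocumentNode.SelectNodes("//a");
if (a_nodes == null)
return null; // the page contained no <a> tags
return a_nodes
.Select(p => p.GetAttributeValue("href", ""))
.FirstOrDefault(n => n.Contains("hms_sdk_tool_"));
}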

Related

How to highlight only results of PrefixQuery in Lucene and not whole words?

I'm fairly new to Lucene and perhaps doing something really wrong, so please correct me if that is the case. I have been searching for the answer for a few days now and am not sure where to go from here.
The goal is to use Lucene.NET to search for user names with partial search (like StartsWith) and highlight only the found parts. For instance, if I search for abc in a list of ['a', 'ab', 'abc', 'abcd', 'abcde'], it should return just the last three, in the form ['<b>abc</b>', '<b>abc</b>d', '<b>abc</b>de'].
Here is how I approached this.
First the index creation:
using var indexDir = FSDirectory.Open(Path.Combine(IndexDirectory, IndexName));
using var standardAnalyzer = new StandardAnalyzer(CurrentVersion);
var indexConfig = new IndexWriterConfig(CurrentVersion, standardAnalyzer);
indexConfig.OpenMode = OpenMode.CREATE_OR_APPEND;
using var indexWriter = new IndexWriter(indexDir, indexConfig);
if (indexWriter.NumDocs == 0)
{
//fill the index with Documents
}
The documents are created like this:
static Document BuildClientDocument(int id, string surname, string name)
{
var document = new Document()
{
new StringField("Id", id.ToString(), Field.Store.YES),
new TextField("Surname", surname, Field.Store.YES),
new TextField("Surname_sort", surname.ToLower(), Field.Store.NO),
new TextField("Name", name, Field.Store.YES),
new TextField("Name_sort", name.ToLower(), Field.Store.NO),
};
return document;
}
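For completeness, the "//fill the index with Documents" placeholder above might look like this (a sketch with hypothetical sample data):
indexWriter.AddDocument(BuildClientDocument(1, "Abc", "Anna"));
indexWriter.AddDocument(BuildClientDocument(2, "Abcde", "Bob"));
indexWriter.Commit();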
The search is done like this:
using var multiReader = new MultiReader(indexWriter.GetReader(true)); //the plan was to use multiple indexes per entity types
var indexSearcher = new IndexSearcher(multiReader);
var queryString = "abc"; //just as a sample
var queryWords = queryString.SplitWords();
var query = new BooleanQuery();
queryWords
.Process((word, index) =>
{
var boolean = new BooleanQuery()
{
{ new PrefixQuery(new Term("Surname", word)) { Boost = 100 }, Occur.SHOULD }, //surnames are most important to match
{ new PrefixQuery(new Term("Name", word)) { Boost = 50 }, Occur.SHOULD }, //names are less important
};
boolean.Boost = (queryWords.Count() - index); //first words in a search query are more important than others
query.Add(boolean, Occur.MUST);
})
;
var topDocs = indexSearcher.Search(query, 50, new Sort( //sort by relevance and then in lexicographical order
SortField.FIELD_SCORE,
new SortField("Surname_sort", SortFieldType.STRING),
new SortField("Name_sort", SortFieldType.STRING)
));
And highlighting:
var htmlFormatter = new SimpleHTMLFormatter();
var queryScorer = new QueryScorer(query);
var highlighter = new Highlighter(htmlFormatter, queryScorer);
foreach (var found in topDocs.ScoreDocs)
{
var document = indexSearcher.Doc(found.Doc);
var surname = document.Get("Surname"); //just for simplicity
var surnameFragment = highlighter.GetBestFragment(standardAnalyzer, "Surname", surname);
Console.WriteLine(surnameFragment);
}
The problem is that the highlighter returns results like this:
<b>abc</b>
<b>abcd</b>
<b>abcde</b>
<b>abcdef</b>
So it "highlights" entire words even though I was searching for partials.
Explain (IndexSearcher.Explain) returned NON-MATCH all the way, so I'm not sure it's helpful here.
Is it possible to highlight only the parts which were searched for? Like in my example.
While searching a bit more on this, I came to the conclusion that to make such highlighting work one needs to tweak the index-generation methods and split the indexed terms into parts so that offsets are calculated properly. Otherwise the highlighter will only highlight entire surrounding words (fragments).
So based on this I've managed to build a simple highlighter of my own.
public class Highlighter
{
private const string TempStartToken = "\x02";
private const string TempEndToken = "\x03";
private const string SearchPatternTemplate = $"[{TempStartToken}{TempEndToken}]*{{0}}";
private const string ReplacePattern = $"{TempStartToken}$&{TempEndToken}";
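// \x02 and \x03 are control characters that are unlikely to occur in user text;
// they mark match boundaries until the final swap for the real start/end tokens below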
private readonly ConcurrentDictionary<HighlightKey, Regex> _regexPatternsCache = new();
private static string GetHighlightTypeTemplate(HighlightType highlightType) =>
highlightType switch
{
HighlightType.Starts => "^{0}",
HighlightType.Contains => "{0}",
HighlightType.Ends => "{0}$",
HighlightType.Equals => "^{0}$",
_ => throw new ArgumentException($"Unsupported {nameof(HighlightType)}: '{highlightType}'", nameof(highlightType)),
};
public string Highlight(string text, IReadOnlySet<string> words, string startToken, string endToken, HighlightType highlightType)
{
foreach (var word in words)
{
var key = new HighlightKey
{
Word = word,
HighlightType = highlightType,
};
var regex = _regexPatternsCache.GetOrAdd(key, _ =>
{
var parts = word.Select(w => string.Format(SearchPatternTemplate, Regex.Escape(w.ToString())));
var pattern = string.Concat(parts);
var highlightPattern = string.Format(GetHighlightTypeTemplate(highlightType), pattern);
return new Regex(highlightPattern, RegexOptions.IgnoreCase | RegexOptions.CultureInvariant | RegexOptions.Compiled);
});
text = regex.Replace(text, ReplacePattern);
}
return text
.Replace(TempStartToken, startToken)
.Replace(TempEndToken, endToken)
;
}
private record HighlightKey
{
public string Word { get; init; }
public HighlightType HighlightType { get; init; }
}
}
public enum HighlightType
{
Starts,
Contains,
Ends,
Equals,
}
Use it like this:
var queries = new[] { "abc" }.ToHashSet();
var search = "a ab abc abcd abcde";
var highlighter = new Highlighter();
var outputs = search
.Split((string[])null, StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries)
.Select(w => highlighter.Highlight(w, queries, "<b>", "</b>", HighlightType.Starts))
;
var result = string.Join(" ", outputs).Dump();
Util.RawHtml(result).Dump();
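(Dump() and Util.RawHtml are LINQPad helpers; in a plain console app you could replace them with Console.WriteLine.)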
Output looks like this:
a ab <b>abc</b> <b>abc</b>d <b>abc</b>de
a ab abc abcd abcde
I'm open to any other better solutions.

How to retrieve specific HTML information from a given website

I'm trying to program an API for Discord and I need to retrieve two pieces of information from the HTML of the web page https://myanimelist.net/character/214 (and other similar pages with URLs of the form https://myanimelist.net/character/N for integers N): the URL of the character picture (in this case https://cdn.myanimelist.net/images/characters/14/54554.jpg) and the name of the character (in this case Youji Kudou). Afterwards I need to save those two pieces of information to JSON.
I am using HtmlAgilityPack for this, yet I can't quite see through it. The following is my first attempt:
public static void Main()
{
var html = "https://myanimelist.net/character/214";
HtmlWeb web = new HtmlWeb();
var htmlDoc = web.Load(html);
var htmlNodes = htmlDoc.DocumentNode.SelectNodes("//body");
foreach (var node in htmlNodes.Descendants("tr/td/div/a/img"))
{
Console.WriteLine(node.InnerHtml);
}
}
Unfortunately, this produces no output. If I followed the path correctly (which is probably my first mistake), it should be "tr/td/div/a/img". I get no errors; it runs, yet I get no output.
My second attempt is:
public static void Main()
{
var html = "https://myanimelist.net/character/214";
HtmlWeb web = new HtmlWeb();
var htmlDoc = web.Load(html);
var htmlNodes = htmlDoc.DocumentNode.SelectNodes("//body");
HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();
var script = htmlDoc.DocumentNode.Descendants()
.Where(n => n.Name == "tr/td/a/img")
.First().InnerText;
// Return the data of spect and stringify it into a proper JSON object
var engine = new Jurassic.ScriptEngine();
var result = engine.Evaluate("(function() { " + script + " return src; })()");
var json = JSONObject.Stringify(engine, result);
Console.WriteLine(json);
Console.ReadKey();
}
But this also doesn't work.
How can I extract the required information?
EDIT:
So, I've gotten quite a bit further now, and I've found a solution for finding the link. It was rather simple. But now I'm stuck on finding the name of the character. The website is structured the same for every other link there is (changing the last number), so I want to query many different ones via a for loop. Here's how I tried to do it:
for (int i = 1; i <= 1000; i++)
{
HtmlWeb web = new HtmlWeb();
var html = "https://myanimelist.net/character/" + i;
var htmlDoc = web.Load(html);
foreach (var item in htmlDoc.DocumentNode.SelectNodes("//*[@src]"))
{
string n;
n = item.GetAttributeValue("src", "");
foreach (var item2 in htmlDoc.DocumentNode.SelectNodes("//*[@src and @alt='" + n + "']"))
{
Console.WriteLine(item2.GetAttributeValue("src", ""));
}
}
}
In the first foreach I try to search for the name, which is always at the same position (e.g. http://prntscr.com/o1uo3c and http://prntscr.com/o1uo91, and to be specific: http://prntscr.com/o1xzbk), but I haven't found out how yet, since the structure in the HTML doesn't have any body type I can follow. The second foreach loop searches for the URL, which works by now, and n should give me the name so I can figure it out for each different character.
I was able to extract the character name and image from https://myanimelist.net/character/214 using the following method:
public static CharacterData ExtractCharacterNameAndImage(string url)
{
//Use the following if you are OK with hardcoding the structure of <div> elements.
//var tableXpath = "/html/body/div[1]/div[3]/div[3]/div[2]/table";
//Use the following if you are OK with hardcoding the fact that the relevant table comes first.
var tableXpath = "/html/body//table";
var nameXpath = "tr/td[2]/div[4]";
var imageXpath = "tr/td[1]/div[1]/a/img";
var htmlDoc = new HtmlWeb().Load(url);
var table = htmlDoc.DocumentNode.SelectNodes(tableXpath).First();
var name = table.SelectNodes(nameXpath).Select(n => n.GetDirectInnerText().Trim()).SingleOrDefault();
var imageUrl = table.SelectNodes(imageXpath).Select(n => n.GetAttributeValue("src", "")).SingleOrDefault();
return new CharacterData { Name = name, ImageUrl = imageUrl, Url = url };
}
Where CharacterData is defined as follows:
public class CharacterData
{
public string Name { get; set; }
public string ImageUrl { get; set; }
public string Url { get; set; }
}
Afterwards, the character data can be serialized to JSON using any of the tools from How to write a JSON file in C#?, e.g. json.net:
var url = "https://myanimelist.net/character/214";
var data = ExtractCharacterNameAndImage(url);
var json = JsonConvert.SerializeObject(data, Formatting.Indented);
Console.WriteLine(json);
Which outputs
{
"Name": "Youji Kudou",
"ImageUrl": "https://cdn.myanimelist.net/images/characters/14/54554.jpg",
"Url": "https://myanimelist.net/character/214"
}
If you would prefer the Name to include the Japanese in parenthesis, replace GetDirectInnerText() with just InnerText, which results in:
{
"Name": "Youji Kudou (工藤耀爾)",
"ImageUrl": "https://cdn.myanimelist.net/images/characters/14/54554.jpg",
"Url": "https://myanimelist.net/character/214"
}
Alternatively, if you prefer you could pull the character name from the document title:
var title = string.Concat(htmlDoc.DocumentNode.SelectNodes("/html/head/title").Select(n => n.InnerText.Trim()));
var index = title.IndexOf("- MyAnimeList.net");
if (index >= 0)
title = title.Substring(0, index).Trim();
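For this page the <title> is roughly "Youji Kudou (工藤耀爾) - MyAnimeList.net", so the snippet above yields "Youji Kudou (工藤耀爾)".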
How did I determine the correct XPath strings?
Firstly, using Firefox 66, I opened the debugger and loaded https://myanimelist.net/character/214 in the window with the debugging tools visible.
Next, following the instructions from How to find xpath of an element in firefox inspector, I selected the Youji Kudou (工藤耀爾) node and copied its XPath, which turned out to be:
/html/body/div[1]/div[3]/div[3]/div[2]/table/tbody/tr/td[2]/div[4]
I then tried to select this node using SelectNodes()... and got a null result. But why? To determine this I created a debugging routine that would break the path into successively longer portions and determine where the failure occurs:
static void TestSelect(HtmlDocument htmlDoc, string xpath)
{
Console.WriteLine("\nInput path: " + xpath);
var splitPath = xpath.Split('/');
for (int i = 2; i <= splitPath.Length; i++)
{
if (splitPath[i-1] == "")
continue;
var thisPath = string.Join("/", splitPath, 0, i);
Console.Write("Testing \"{0}\": ", thisPath);
var result = htmlDoc.DocumentNode.SelectNodes(thisPath);
Console.WriteLine("result count = {0}", result == null ? "null" : result.Count.ToString());
}
}
This output the following:
Input path: /html/body/div[1]/div[3]/div[3]/div[2]/table/tbody/tr/td[2]/div[4]
Testing "/html": result count = 1
Testing "/html/body": result count = 1
Testing "/html/body/div[1]": result count = 1
Testing "/html/body/div[1]/div[3]": result count = 1
Testing "/html/body/div[1]/div[3]/div[3]": result count = 1
Testing "/html/body/div[1]/div[3]/div[3]/div[2]": result count = 1
Testing "/html/body/div[1]/div[3]/div[3]/div[2]/table": result count = 1
Testing "/html/body/div[1]/div[3]/div[3]/div[2]/table/tbody": result count = null
Testing "/html/body/div[1]/div[3]/div[3]/div[2]/table/tbody/tr": result count = null
Testing "/html/body/div[1]/div[3]/div[3]/div[2]/table/tbody/tr/td[2]": result count = null
Testing "/html/body/div[1]/div[3]/div[3]/div[2]/table/tbody/tr/td[2]/div[4]": result count = null
As you can see, something goes wrong selecting the <tbody> path element. Manual inspection of the InnerHtml returned by selecting /html/body/div[1]/div[3]/div[3]/div[2]/table revealed that, for some reason, the server is not including the <tbody> tag when returning HTML to the HtmlWeb object -- possibly due to some difference in request header(s) provided by Firefox vs HtmlWeb. Once I omitted the tbody path element I was able to query for the character name successfully using:
/html/body/div[1]/div[3]/div[3]/div[2]/table/tr/td[2]/div[4]
A similar process provided the following working path for the image:
/html/body/div[1]/div[3]/div[3]/div[2]/table/tr/td[1]/div[1]/a/img
Since the two queries are finding contents in the same <table>, in my final code I selected the table only once in a separate step, and removed some of the hardcoding as to the specific nesting of <div> elements.
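As an aside, an XPath union would make the queries tolerant of both markup variants, working whether or not a <tbody> is present in the returned HTML (a sketch):
var nameXpath = "tr/td[2]/div[4] | tbody/tr/td[2]/div[4]";
var imageXpath = "tr/td[1]/div[1]/a/img | tbody/tr/td[1]/div[1]/a/img";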
Demo fiddle here.
Alright, to finish it up: I've rounded out the code, gratefully assisted by dbc, and implemented it almost completely into the project. In case someone has an identical question in later days, here it is. This outputs the character names, links, and images for a defined range of IDs, writes them to a JSON file, and could be adapted for other websites.
using System;
using System.Linq;
using Newtonsoft.Json;
using HtmlAgilityPack;
using System.IO;
namespace SearchingHTML
{
public class CharacterData
{
public string Name { get; set; }
public string ImageUrl { get; set; }
public string Url { get; set; }
}
public class Program
{
public static CharacterData ExtractCharacterNameAndImage(string url)
{
var tableXpath = "/html/body//table";
var nameXpath = "tr/td[2]/div[4]";
var imageXpath = "tr/td[1]/div[1]/a/img";
var htmlDoc = new HtmlWeb().Load(url);
var table = htmlDoc.DocumentNode.SelectNodes(tableXpath).First();
var name = table.SelectNodes(nameXpath).Select(n => n.GetDirectInnerText().Trim()).SingleOrDefault();
var imageUrl = table.SelectNodes(imageXpath).Select(n => n.GetAttributeValue("src", "")).SingleOrDefault();
return new CharacterData { Name = name, ImageUrl = imageUrl, Url = url };
}
public static void Main()
{
int max = 10000;
string fileName = @"C:\Users\path of your file.json";
Console.WriteLine("Environment version: " + Environment.Version);
Console.WriteLine("Json.NET version: " + typeof(JsonSerializer).Assembly.FullName);
Console.WriteLine("HtmlAgilityPack version: " + typeof(HtmlDocument).Assembly.FullName);
Console.WriteLine();
for (int i = 6; i <= max; i++)
{
try
{
var url = "https://myanimelist.net/character/" + i;
var htmlDoc = new HtmlWeb().Load(url);
var data = ExtractCharacterNameAndImage(url);
var json = JsonConvert.SerializeObject(data, Formatting.Indented);
Console.WriteLine(json);
using (TextWriter tsw = new StreamWriter(fileName, true))
{
tsw.WriteLine(json);
}
}
catch (Exception) { } // skip character IDs whose pages fail to load or parse
}
}
}
}
/*******************************************************************************************************************************
****************************************************IF TESTING IS REQUIRED*****************************************************
*******************************************************************************************************************************
*
* static void TestSelect(HtmlDocument htmlDoc, string xpath)
{
Console.WriteLine("\nInput path: " + xpath);
var splitPath = xpath.Split('/');
for (int i = 2; i <= splitPath.Length; i++)
{
if (splitPath[i - 1] == "")
continue;
var thisPath = string.Join("/", splitPath, 0, i);
Console.Write("Testing \"{0}\": ", thisPath);
var result = htmlDoc.DocumentNode.SelectNodes(thisPath);
Console.WriteLine("result count = {0}", result == null ? "null" : result.Count.ToString());
}
}
*******************************************************************************************************************************
*********************************************FOR TESTING ENTER THIS INTO MAIN CLASS********************************************
*******************************************************************************************************************************
*
* var url2 = "https://myanimelist.net/character/256";
var data2 = ExtractCharacterNameAndImage(url2);
var json2 = JsonConvert.SerializeObject(data2, Formatting.Indented);
Console.WriteLine(json2);
var nameXpathFromFirefox = "/html/body/div[1]/div[3]/div[3]/div[2]/table/tbody/tr/td[2]/div[4]";
var imageXpathFromFirefox = "/html/body/div[1]/div[3]/div[3]/div[2]/table/tbody/tr/td[1]/div[1]/a/img";
TestSelect(htmlDoc, nameXpathFromFirefox);
TestSelect(htmlDoc, imageXpathFromFirefox);
var nameXpathFromFirefoxFixed = "/html/body/div[1]/div[3]/div[3]/div[2]/table/tr/td[2]/div[4]";
var imageXpathFromFirefoxFixed = "/html/body/div[1]/div[3]/div[3]/div[2]/table/tr/td[1]/div[1]/a/img";
TestSelect(htmlDoc, nameXpathFromFirefoxFixed);
TestSelect(htmlDoc, imageXpathFromFirefoxFixed);
*******************************************************************************************************************************
*******************************************************************************************************************************
*******************************************************************************************************************************
*/

Web-scrape project writing too much information

I'm trying to modify the code below to scrape jobs from www.itoworld.com/careers. The jobs are in a table format, and it returns all the <td> values.
I believe it comes from the line:
var parentnode = node.ParentNode.ParentNode.ParentNode.FirstChild.NextSibling;
However, I want it to write:
<a class="std-btn" href="http://www.itoworld.com/office-manager/">Office Manager</a>
Currently it is writing
<a href='http://www.itoworld.com/office-manager/' target='_blank'>Office ManagerOffice & AdminCambridgeFind out more</a>
I plan on 'brute force' modifying the output to remove the unnecessary extras, but I was hoping there is a smarter way to do this. Is there a way, for example, to remove the second and third ParentNode after they have been called (so they do not get written)?
public string ExtractIto()
{
string sUrl = "http://www.itoworld.com/careers/";
GlobusHttpHelper ghh = new GlobusHttpHelper();
List<Links> link = new List<Links>();
bool Next = true;
int count = 1;
string html = ghh.getHtmlfromUrl(new Uri(string.Format(sUrl)));
HtmlAgilityPack.HtmlDocument hd = new HtmlAgilityPack.HtmlDocument();
hd.LoadHtml(html);
var hn = hd.DocumentNode.SelectSingleNode("//*[@class='btn-wrapper']");
var hnc = hn.SelectNodes(".//a");
foreach (var node in hnc)
{
try
{
var parentnode = node.ParentNode.ParentNode.ParentNode.FirstChild.NextSibling;
Links l = new Links();
l.Name = ParseHtmlContainingText(parentnode.InnerText);
l.Link = node.GetAttributeValue("href", "");
link.Add(l);
}
catch { } // catch block omitted in the original snippet; added so the code compiles
}
string Xml = getXml(link);
return WriteXml(Xml);
}
For completeness below is the definition of ParseHtmlContainingText
public string ParseHtmlContainingText(string htmlString)
{
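// decode HTML entities first, then strip tags and &nbsp; sequences, then collapse whitespace runs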
return Regex.Replace(Regex.Replace(WebUtility.HtmlDecode(htmlString), @"<[^>]+>|&nbsp;", ""), @"\s{2,}", " ").Trim();
}
You just need to create a "name node" and use that for your parse method.
I tested with this code and it worked for me.
var parentnode = node.ParentNode.ParentNode.ParentNode.FirstChild.NextSibling;
var nameNode = parentnode.FirstChild;
Links l = new Links();
l.Name = ParseHtmlContainingText(nameNode.InnerText);
l.Link = node.GetAttributeValue("href", "");

Reading Specific text from a website

I am trying to make a database, but I need to get info from a website: mainly the Title, Date, Length, and Genre from the IMDB website. I have tried like 50 different things and it is just not working.
Here is my code.
public string GetName(string URL)
{
HtmlWeb web = new HtmlWeb();
HtmlDocument doc = web.Load(URL);
var Attr = doc.DocumentNode.SelectNodes("//*[@id=\"overview - top\"]/h1/span[1]@itemprop")[0];
return Name;
}
When I run this it just gives me an XPathException. I just want it to return the title of a movie. I am using this movie just as an example for testing, but I want it to work with all movies: http://www.imdb.com/title/tt0405422
I am using the HtmlAgilityPack.
The last bit of your XPath is not valid. Also, to get only a single element from an HtmlDocument you can use SelectSingleNode() instead of SelectNodes():
HtmlWeb web = new HtmlWeb();
HtmlDocument doc = web.Load("http://www.imdb.com/title/tt0405422/");
var xpath = "//*[#id='overview-top']/h1/span[#class='itemprop']";
var span = doc.DocumentNode.SelectSingleNode(xpath);
var title = span.InnerText;
Console.WriteLine(title);
output :
The 40-Year-Old Virgin
Demo link: https://dotnetfiddle.net/P7U5A7
(The demo shows that the correct title is printed, along with an error specific to .NET Fiddle, which you can safely ignore.)
I'm making something similar, and this is my code, which gets info from the imdb.com website:
string html = getUrlData(imdbUrl + "combined");
Id = match(@"<link rel=""canonical"" href=""http://www.imdb.com/title/(tt\d{7})/combined"" />", html);
if (!string.IsNullOrEmpty(Id))
{
status = true;
Title = match(@"<title>(IMDb \- )*(.*?) \(.*?</title>", html, 2);
OriginalTitle = match(@"title-extra"">(.*?)<", html);
Year = match(@"<title>.*?\(.*?(\d{4}).*?\).*?</title>", html);
Rating = match(@"<b>(\d.\d)/10</b>", html);
Genres = matchAll(@"<a.*?>(.*?)</a>", match(@"Genre.?:(.*?)(</div>|See more)", html));
Directors = matchAll(@"<td valign=""top""><a.*?href=""/name/.*?/"">(.*?)</a>", match(@"Directed by</a></h5>(.*?)</table>", html));
Cast = matchAll(@"<td class=""nm""><a.*?href=""/name/.*?/"".*?>(.*?)</a>", match(@"<h3>Cast</h3>(.*?)</table>", html));
Plot = match(@"Plot:</h5>.*?<div class=""info-content"">(.*?)(<a|</div)", html);
Runtime = match(@"Runtime:</h5><div class=""info-content"">(\d{1,4}) min[\s]*.*?</div>", html);
Languages = matchAll(@"<a.*?>(.*?)</a>", match(@"Language.?:(.*?)(</div>|>.?and )", html));
Countries = matchAll(@"<a.*?>(.*?)</a>", match(@"Country:(.*?)(</div>|>.?and )", html));
Poster = match(@"<div class=""photo"">.*?<a name=""poster"".*?><img.*?src=""(.*?)"".*?</div>", html);
if (!string.IsNullOrEmpty(Poster) && Poster.IndexOf("media-imdb.com") > 0)
{
Poster = Regex.Replace(Poster, @"_V1.*?.jpg", "_V1._SY200.jpg");
PosterLarge = Regex.Replace(Poster, @"_V1.*?.jpg", "_V1._SY500.jpg");
PosterFull = Regex.Replace(Poster, @"_V1.*?.jpg", "_V1._SY0.jpg");
}
else
{
Poster = string.Empty;
PosterLarge = string.Empty;
PosterFull = string.Empty;
}
ImdbURL = "http://www.imdb.com/title/" + Id + "/";
if (GetExtraInfo)
{
string plotHtml = getUrlData(imdbUrl + "plotsummary");
}
}
//Match single instance
private string match(string regex, string html, int i = 1)
{
return new Regex(regex, RegexOptions.Multiline).Match(html).Groups[i].Value.Trim();
}
//Match all instances and return as ArrayList
private ArrayList matchAll(string regex, string html, int i = 1)
{
ArrayList list = new ArrayList();
foreach (Match m in new Regex(regex, RegexOptions.Multiline).Matches(html))
list.Add(m.Groups[i].Value.Trim());
return list;
}
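For example, match(@"<b>(\d.\d)/10</b>", html) returns the first capture group (say "7.2"), or an empty string when nothing matches, since a failed Match has empty groups.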
Maybe you will find something useful

Method that extracts a list of URLs that match the pattern

Well, I'm trying to create a method, using Regex, that will extract a list of URLs matching the pattern @"http://(www\.)?([^\.]+)\.com", and so far I've done this:
public static List<string> Test(string url)
{
const string pattern = @"http://(www\.)?([^\.]+)\.com";
List<string> res = new List<string>();
MatchCollection myMatches = Regex.Matches(url, pattern);
foreach (Match currentMatch in myMatches)
{
}
return res;
}
The main issue is which code I should use in the foreach loop:
res.Add(currentMatch.Groups.ToString());
or
res.Add(currentMatch.Value);
Thanks!
You just need to get all the Match.Value strings. In your code, you should use
res.Add(currentMatch.Value);
Or, just use LINQ:
res = Regex.Matches(url, pattern).Cast<Match>()
.Select(p => p.Value)
.ToList();
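On newer runtimes (.NET Core and later), MatchCollection implements IEnumerable<Match>, so the Cast<Match>() step can be dropped:
res = Regex.Matches(url, pattern)
.Select(p => p.Value)
.ToList();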
res.Add(currentMatch.Groups.ToString()); would add "System.Text.RegularExpressions.GroupCollection" to the list, so clearly you didn't test it.
How many matches do you expect from the parameter url?
I would use this:
static readonly Regex _domainMatcher = new Regex(@"http://(www\.)?([^\.]+)\.com", RegexOptions.Compiled);
public static bool IsValidDomain(string url)
{
return _domainMatcher.Match(url).Success;
}
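(A small note: _domainMatcher.IsMatch(url) expresses the same check more directly.)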
or
public static string ExtractDomain(string url)
{
var match = _domainMatcher.Match(url);
if(match.Success)
return match.Value;
else
return string.Empty;
}
Because the parameter is called url, it should be a single URL.
If there are more possibilities and you want to extract all domain names that match the pattern:
public static IEnumerable<string> ExtractDomains(string data)
{
var result = new List<string>();
var match = _domainMatcher.Match(data);
while (match.Success)
{
result.Add(match.Value);
match = match.NextMatch();
}
return result;
}
Notice the IEnumerable<> instead of List<>, because there is no need for the caller to modify the result.
Or this lazy variant:
public static IEnumerable<string> ExtractDomains(string data)
{
var match = _domainMatcher.Match(data);
while (match.Success)
{
yield return match.Value;
match = match.NextMatch();
}
}
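Usage of the lazy variant could look like this (with a hypothetical input string):
foreach (var domain in ExtractDomains("see http://www.example.com and http://test.com"))
Console.WriteLine(domain);
// prints:
// http://www.example.com
// http://test.com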
