Are there any alternatives to ASP Panels for inserting HTML from code? - c#

I'm using an ASP panel as a placeholder to create things like menus that come from the database. For this I created a class with a function that returns a Panel.
Is there an alternative to this? I would like my code to be completely independent of the project. Maybe some classic ASP function?
Code that creates the menu:
public static Panel createMenu(Panel panel)
{
    List<menu> menuItems = menu.selectMenuitems();
    panel.Controls.Add(new LiteralControl("<ul>"));
    for (int i = 0; i < menuItems.Count; i++)
    {
        string menuPath = menuItems[i].virtualpath;
        string menuName = menuItems[i].name;
        panel.Controls.Add(new LiteralControl("<li>"));

        // Get the full URL
        string url = HttpContext.Current.Request.Url.AbsoluteUri;
        // Get the last part of the URL
        string path = url.Split('/').Last().ToLower();

        // If the URL matches the menu item, add the "active" class.
        if (path == menuPath || (path == "default.aspx" && i == 0))
            panel.Controls.Add(new LiteralControl("<a class='active' href='/" + menuPath + "'>"));
        else
            panel.Controls.Add(new LiteralControl("<a href='/" + menuPath + "'>"));

        panel.Controls.Add(new LiteralControl(menuName));
        panel.Controls.Add(new LiteralControl("</a>"));
        panel.Controls.Add(new LiteralControl("</li>"));
    }
    panel.Controls.Add(new LiteralControl("</ul>"));
    return panel;
}

Your menu structure appears to be something like this:
<ul>
    <li><a class href>name</a></li>
</ul>
Why not just create a routine that outputs a string? You can pass the string to a Literal control to load the menu. Something like:
myLiteral.Text = createMenu();
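where myLiteral is assumed to be a Literal control declared in the page markup, for example:
<asp:Literal ID="myLiteral" runat="server" />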
This code is not tested, but hopefully gives you a start:
public static string createMenu()
{
    List<menu> menuItems = menu.selectMenuitems();
    var m = new StringBuilder();
    m.AppendLine("<ul>");
    for (int i = 0; i < menuItems.Count; i++)
    {
        string menuPath = menuItems[i].virtualpath;
        string menuName = menuItems[i].name;
        m.AppendLine("<li>");

        // Get the full URL
        string url = HttpContext.Current.Request.Url.AbsoluteUri;
        // Get the last part of the URL
        string path = url.Split('/').Last().ToLower();

        // If the URL matches the menu item, setActiveClass adds class="active".
        m.AppendLine("<a " + setActiveClass(path, menuPath, i) + " href=\"/" + menuPath + "\">");
        m.AppendLine(menuName);
        m.AppendLine("</a>");
        m.AppendLine("</li>");
    }
    m.AppendLine("</ul>");
    return m.ToString();
}
private static string setActiveClass(string path, string menuPath, int i)
{
    if (path.Equals(menuPath, StringComparison.OrdinalIgnoreCase) || (path.Equals("default.aspx", StringComparison.OrdinalIgnoreCase) && i == 0))
    {
        return "class=\"active\"";
    }
    else
    {
        return "";
    }
}
You can also expand upon this to include sub-menus.

To be "independent" you could create the menu via Javascript/Jquery.
You can create a .ashx page or something like this that returns a JSON than you create the menu asynchronously. Your JSON should return a two keys structure like this
[{"menuName": "menu1","menuLink": "http://www.google.com"}, { "menuName": "menu2", "menuLink": "http://www.yahoo.com"}, { "menuName": "menu3", "menuLink": "http://www.pudim.com"}]';
Your JS/jQuery function would look like this:
function createMenuAsync()
{
    var menuContainer = $("#menuContainer");
    var listRoot = $("<ul></ul>");
    // callString is the URL of the handler that returns the JSON above, e.g. "Menu.ashx".
    $.getJSON(callString, function (response) {
        // $.getJSON already parses the JSON, so response is an array of { menuName, menuLink } objects.
        $.each(response, function (key, value) {
            var listItem = $("<li></li>");
            var itemLink = $("<a>" + value.menuName + "</a>").attr("href", value.menuLink);
            itemLink.appendTo(listItem);
            listItem.appendTo(listRoot);
        });
    });
    listRoot.appendTo(menuContainer);
}
The code will be cleaner and lighter on your web server, since the HTML elements are created on the client side instead of the server side.
I created a fiddle with this code working:
https://jsfiddle.net/h4ywwxm8/2/

Related

C# Webscraper to grab amount of Google Results given a specific search term

I've been working on a web scraper as a Windows Forms application in C#. The user enters a search term, and the program then splits the search string into individual words and looks up the number of search results for each through Yahoo and Google.
My issue lies with navigating the huge HTML document. I've tried multiple approaches, such as
iterating recursively and comparing ids, as well as lambdas with Where statements. Both result in null. I also manually looked through the HTML document to make sure the id of the div I want exists in the document.
The id I'm looking for is "resultStats", but it is super nested. My code looks like this:
using HtmlAgilityPack;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace WebScraper2._0
{
    public class Webscraper
    {
        private string Google = "http://google.com/#q=";
        private string Yahoo = "http://search.yahoo.com/search?p=";
        private HtmlWeb web = new HtmlWeb();
        private HtmlDocument GoogleDoc = new HtmlDocument();
        private HtmlDocument YahooDoc = new HtmlDocument();

        public Webscraper()
        {
            Console.WriteLine("Init");
        }

        public int WebScrape(string searchterms)
        {
            //Console.WriteLine(searchterms);
            string[] ssize = searchterms.Split(new char[0]);
            int YahooMatches = 0;
            int GoogleMatches = 0;
            foreach (var term in ssize)
            {
                //Console.WriteLine(term);
                var y = web.Load(Yahoo + term);
                var g = web.Load(Google + term + "&cad=h");
                YahooMatches += YahooFilter(y);
                GoogleMatches += GoogleFilter(g);
            }
            Console.WriteLine("Yahoo found " + YahooMatches.ToString() + " matches");
            Console.WriteLine("Google found " + GoogleMatches.ToString() + " matches");
            return YahooMatches + GoogleMatches;
        }

        //Parse to get correct info
        public int YahooFilter(HtmlDocument doc)
        {
            //Look for node with correct ID
            IEnumerable<HtmlNode> nodes = doc.DocumentNode.Descendants().Where(n => n.HasClass("mw-jump-link"));
            foreach (var item in nodes)
            {
                // displaying final output
                Console.WriteLine(item.InnerText);
            }
            //TODO: Return search result amount.
            return 0;
        }

        int testCounter = 0;
        string toReturn = "";
        bool foundMatch = false;

        //Parse to get correct info
        public int GoogleFilter(HtmlDocument doc)
        {
            if (doc == null)
            {
                Console.WriteLine("Null");
            }
            foreach (var node in doc.DocumentNode.ChildNodes)
            {
                toReturn += Looper(node, testCounter, toReturn, foundMatch);
            }
            Console.WriteLine(toReturn);
            /*
            var stuff = doc.DocumentNode.Descendants("div")
                .Where(node => node.GetAttributeValue("id", "")
                .Equals("extabar")).ToList();
            IEnumerable<HtmlNode> nodes = doc.DocumentNode.Descendants().Where(n => n.HasClass("appbar"));
            */
            return 0;
        }

        public string Looper(HtmlNode node, int counter, string returnstring, bool foundMatch)
        {
            Console.WriteLine("Loop started" + counter.ToString());
            counter++;
            Console.WriteLine(node.Id);
            if (node.Id == "resultStats")
            {
                returnstring += node.InnerText;
            }
            foreach (HtmlNode n in node.Descendants())
            {
                Looper(n, counter, returnstring, foundMatch);
            }
            return returnstring;
        }
    }
}
I made a Google HTML scraper a few weeks ago; a few things to consider.
First: Google doesn't like it when you try to scrape their search HTML. While I was running a list of companies trying to get their addresses and phone numbers, Google blocked my IP from accessing their website for a little while (which caused a hilarious panic in the office).
Second: Google will change the HTML (id names etc.) of the page, so relying on IDs won't work. In my case I used a combination of HTML tags and specific text to parse the response and extract the information I wanted.
Third: It's better to just use their API to grab the information you need; just make sure you respect their free-tier query limit and you should be golden.
Here is the code I used.
// Requires: using System; using System.Net; using System.Collections.Specialized;
public static string getBetween(string strSource, string strStart, string strEnd)
{
    int Start, End;
    if (strSource.Contains(strStart) && strSource.Contains(strEnd))
    {
        Start = strSource.IndexOf(strStart, 0) + strStart.Length;
        End = strSource.IndexOf(strEnd, Start);
        return strSource.Substring(Start, End - Start);
    }
    else
    {
        return "";
    }
}

public void SearchResult()
{
    //Run a Google Search
    string uriString = "http://www.google.com/search";
    string keywordString = "Search String";
    WebClient webClient = new WebClient();
    NameValueCollection nameValueCollection = new NameValueCollection();
    nameValueCollection.Add("q", keywordString);
    webClient.QueryString.Add(nameValueCollection);
    string result = webClient.DownloadString(uriString);
    string search = getBetween(result, "Address", "Hours");
    rtbHtml.Text = getBetween(search, "\">", "<");
}
In my case I used the strings Address and Hours to limit what information I wanted to extract.
Edit: Fixed the logic and added the code I used.
Edit 2: Forgot to add the getBetween method. (Sorry, it's my first answer.)
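If you go with the API suggestion above, a rough, untested sketch using the Custom Search JSON API could look like the following; the apiKey and searchEngineId are placeholders you would obtain from Google, and the naive regex extraction is just to keep the example dependency-free (a JSON library would be the more robust choice):
using System;
using System.Net;
using System.Text.RegularExpressions;

public static long GetResultCount(string term, string apiKey, string searchEngineId)
{
    // Query the Custom Search JSON API; the total result count is reported as "totalResults".
    string url = "https://www.googleapis.com/customsearch/v1"
               + "?key=" + Uri.EscapeDataString(apiKey)
               + "&cx=" + Uri.EscapeDataString(searchEngineId)
               + "&q=" + Uri.EscapeDataString(term);

    using (var client = new WebClient())
    {
        string json = client.DownloadString(url);
        // Pull the first "totalResults": "12345" value out of the response.
        var match = Regex.Match(json, "\"totalResults\":\\s*\"(\\d+)\"");
        return match.Success ? long.Parse(match.Groups[1].Value) : 0;
    }
}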

How to run @Url.Action written by string

I am trying to show an image from the database with the @Url.Action method. The HTML document is also in the database and I render it with the @Html.Raw function.
The problem is that when I render the HTML including the @Url.Action method, it just shows the whole expression as literal text in the src parameter. The code I tried is below.
private string ConvertImageSource(int articleID, string content /* the html string */)
{
    var imageCount = // counting number of images;
    for (int i = 0; i < imageCount; i++)
    {
        content = content.Replace($"<!{i + 1}>", $"@Url.Action('ShowImage','Articles',new{{articleID={ articleID },imageID={ imageID }}})");
    }
    return content;
}
public ActionResult ShowImage(int? articleID, int? imageID)
{
    var imageData = Encoding.ASCII.GetBytes(// get image string from database);
    return new FileStreamResult(new MemoryStream(imageData), "image/jpeg");
}
I would like to know how to make it work. Any ideas?
You could use that for loop in a partial view. It would look something like:
@model MyContentModel
@for (int i = 0; i < Model.ImageCount; i++)
{
    @Url.Action("ShowImage", "Articles", new { articleID = Model.ArticleID, imageID = Model.ImageID })
}
Thanks for the answer. I found the solution myself.
Instead of the @Url.Action function, I put the URL directly and it works.
private string ConvertImageSource(int articleID, string content)
{
    var imageCount = content.Split(new string[] { "img " }, StringSplitOptions.None).ToList().Count - 1;
    for (int i = 0; i < imageCount; i++)
    {
        content = content.Replace($"<!{i + 1}>", $"/Articles/ShowImage?articleID={ articleID }&imageID={i + 1}");
    }
    return content;
}
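As a quick illustration of this workaround (the stored markup below is a made-up example following the <!1>, <!2> placeholder convention used in the code above):
// Hypothetical HTML stored in the database, with numbered image placeholders.
string stored = "<p>Intro</p><img src=\"<!1>\" /><img src=\"<!2>\" />";

// After ConvertImageSource(5, stored), the placeholders become direct action URLs:
// <p>Intro</p><img src="/Articles/ShowImage?articleID=5&imageID=1" /><img src="/Articles/ShowImage?articleID=5&imageID=2" />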

C# web scraper navigate to aspx link

I have a C# Windows Phone 8.1 app which I am building. Part of the app needs to go and look for information on a specific web page. One of the fields I need is a URL found on certain items on the page; however, I am finding that the URL is in a relative-style format
FullArticle.aspx?a=323495
I am wondering if there is a way in C# (using HtmlAgilityPack, HttpWebRequest, etc.) to find the link to the actual page. A code snippet is below.
private static TileUpdate processSingleNewsItem(HtmlNode newsItemNode)
{
    Debug.WriteLine("");
    var articleImage = getArticleImage(getNode(newsItemNode, "div", "nw-container-panel-articleimage"));
    var articleDate = getArticleDate(getNode(newsItemNode, "div", "nw-container-panel-articledate"));
    var articleSummary = getArticleSummary(getNode(newsItemNode, "div", "nw-container-panel-textarea"));
    var articleUrl = getArticleUrl(getNode(newsItemNode, "div", "nw-container-panel-articleimage"));
    return new TileUpdate
    {
        Date = articleDate,
        Headline = articleSummary,
        ImagePath = articleImage,
        Url = articleUrl
    };
}

private static string getArticleUrl(HtmlNode parentNode)
{
    var imageNode = parentNode.Descendants("a").FirstOrDefault();
    Debug.WriteLine(imageNode.GetAttributeValue("href", null));
    return imageNode.GetAttributeValue("href", null);
}

private static HtmlNode getNode(HtmlNode parentNode, string nodeType, string className)
{
    var children = parentNode.Elements(nodeType).Where(o => o.Attributes["class"].Value == className);
    return children.First();
}
Would appreciate any ideas or solutions. Cheers!
In my web crawler, here's what I do:
foreach (HtmlNode link in doc.DocumentNode.SelectNodes(@"//a[@href]"))
{
    HtmlAttribute att = link.Attributes["href"];
    if (att == null) continue;
    string href = att.Value;
    if (href.StartsWith("javascript", StringComparison.InvariantCultureIgnoreCase)) continue; // ignore javascript on buttons using a tags
    Uri urlNext = new Uri(href, UriKind.RelativeOrAbsolute);
    // Make it absolute if it's relative (urlRoot is the Uri of the page being crawled)
    if (!urlNext.IsAbsoluteUri)
    {
        urlNext = new Uri(urlRoot, urlNext);
    }
    ...
}
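Applied to the question's getArticleUrl helper, a possible (untested) variant that resolves the relative link against the page address could look like this, assuming the Uri of the scraped page is passed in:
private static string getArticleUrl(HtmlNode parentNode, Uri pageUri)
{
    var linkNode = parentNode.Descendants("a").FirstOrDefault();
    if (linkNode == null)
        return null;

    // e.g. "FullArticle.aspx?a=323495"
    string href = linkNode.GetAttributeValue("href", null);
    if (string.IsNullOrEmpty(href))
        return null;

    var uri = new Uri(href, UriKind.RelativeOrAbsolute);
    // Resolve relative links against the page that was scraped.
    return uri.IsAbsoluteUri ? uri.ToString() : new Uri(pageUri, uri).ToString();
}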

Change hyperlink own text on its click event through jquery

I have a dynamically generated list of hyperlinks and I'm using jQuery to bind the click events. Everything is working fine; the one thing I am unable to do is change the link's own text:
this.value = s;
This is what I was trying to do, without any success.
My full code:
$(document).ready(function () {
    $('[id*="lnkStatus_"]').bind('click', SaveRequirmentStatus);
});

function SaveRequirmentStatus(event) {
    var itemID = $(event.currentTarget).attr('id');
    var intProjectId = $('[id$="hdnProjectId"]').val();
    var idRequirment = itemID.split('_')[1];
    var idRequirementPhase = itemID.split('_')[2];
    var idPhaseStatus = $(event.currentTarget).val();
    if (intProjectId != '0' && idRequirment != '0' && idRequirementPhase != '0') {
        $.getJSON('handler/RequirementLifecycleHandler.ashx?FuncName=SaveRequirment&idRequirment=' + idRequirment + "&idRequirementPhase=" + idRequirementPhase + "&idProject=" + intProjectId + "&idPhaseStatus=" + idPhaseStatus, function (ValueStatus) {
            var s = ValueStatus;
            alert(this);
            this.value = s;
        });
    }
}
this, in the context you are using it, does not refer to the link, so save a reference to it outside of the inner function and use that. Also, a link does not have a value; you can set the text using the jQuery text function.
Changing your code to this should do what you want:
function SaveRequirmentStatus(event) {
    var $this = $(this); // save a jQuery-wrapped reference to the clicked link
    var itemID = $(event.currentTarget).attr('id');
    var intProjectId = $('[id$="hdnProjectId"]').val();
    var idRequirment = itemID.split('_')[1];
    var idRequirementPhase = itemID.split('_')[2];
    var idPhaseStatus = $(event.currentTarget).val();
    if (intProjectId != '0' && idRequirment != '0' && idRequirementPhase != '0') {
        $.getJSON('handler/RequirementLifecycleHandler.ashx?FuncName=SaveRequirment&idRequirment=' + idRequirment + "&idRequirementPhase=" + idRequirementPhase + "&idProject=" + intProjectId + "&idPhaseStatus=" + idPhaseStatus, function (ValueStatus) {
            $this.text(ValueStatus); // set the text of the link to ValueStatus
        });
    }
}
This should also work:
$(function () {
    $('[id*="lnkStatus_"]').bind('click', SaveRequirmentStatus);
});

function SaveRequirmentStatus(event) {
    // ValueStatus would still need to come from the ajax response, as in the answer above.
    $(this).text(ValueStatus);
}

C# .Contains() to check if it is a URL

I don't like to post such a general question, but I am not seeing a lot on the topic, so I was wondering if anyone has done something like this, and whether or not this is a good implementation to go with.
EDIT: Added the whole method.
Here is the code
protected void gridViewAttachments_HtmlDataCellPrepared(object sender, DevExpress.Web.ASPxGridView.ASPxGridViewTableDataCellEventArgs e)
{
    //if (e.DataColumn.FieldName == "AttachmentName" && e.CellValue.ToString().ToLower().Contains("://"))
    //    attachmentUrl = e.CellValue.ToString();
    //if (e.DataColumn.FieldName == "AttachmentName" && !e.CellValue.ToString().ToLower().Contains("://"))
    //    attachmentUrl = "http://" + e.CellValue;

    Uri targetUri;
    if (Uri.TryCreate("http://" + e.CellValue, UriKind.RelativeOrAbsolute, out targetUri))
    {
        attachmentUrl = new Uri("http://" + e.CellValue);
    }

    if (e.DataColumn is DevExpress.Web.ASPxGridView.GridViewDataHyperLinkColumn)
    {
        if (attachmentUrl.ToString() == "")
        {
            DevExpress.Web.ASPxEditors.Internal.HyperLinkDisplayControl hyperlink =
                (e.Cell.Controls[0] as DevExpress.Web.ASPxEditors.Internal.HyperLinkDisplayControl);
            hyperlink.Target = "_blank";
            hyperlink.NavigateUrl = ApplicationUrl + "/Attachment.ashx?key=" + hyperlink.Text;
            hyperlink.Text = GetWords("GENERAL.VIEW_ATTACHMENT");
        }
        else
        {
            DevExpress.Web.ASPxEditors.Internal.HyperLinkDisplayControl hyperlink =
                (e.Cell.Controls[0] as DevExpress.Web.ASPxEditors.Internal.HyperLinkDisplayControl);
            hyperlink.Target = "_blank";
            hyperlink.NavigateUrl = attachmentUrl.ToString();
            hyperlink.Text = "Go to URL";
        }
    }
}
Pretty basic, and it works. My question is this: is checking whether the string contains :// enough to determine whether or not it is a URL? The reason I have to check is that the data comes from a table, and some of the fields in the table are file names (mydoc.docx), in which case I do something else with them. Is there another, more robust check I can do in C#?
You could use Uri.TryCreate instead to see if creation of the URL is successful:
Uri targetUri;
if (Uri.TryCreate("http://" + e.CellValue, UriKind.RelativeOrAbsolute, out targetUri))
{
    //success
    attachmentUrl = "http://" + e.CellValue;
}
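A somewhat stricter check (a sketch, not part of the original answer) is to require an absolute URI with an http or https scheme, so plain file names such as mydoc.docx are rejected:
private static bool LooksLikeUrl(string value)
{
    Uri uri;
    // Accept only absolute URIs with an http(s) scheme; "mydoc.docx" fails this test.
    return Uri.TryCreate(value, UriKind.Absolute, out uri)
        && (uri.Scheme == Uri.UriSchemeHttp || uri.Scheme == Uri.UriSchemeHttps);
}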
