I need help pulling RSS feeds from a Facebook page. I'm using the following code, but it keeps giving me an error:
string url =
    "https://www.facebook.com/feeds/page.php?id=40796308305&format=rss20";
XmlReaderSettings settings = new XmlReaderSettings
{
    XmlResolver = null,
    DtdProcessing = DtdProcessing.Parse,
};
XmlReader reader = XmlReader.Create(url, settings);
SyndicationFeed feed = SyndicationFeed.Load(reader);
foreach (var item in feed.Items)
{
    Console.WriteLine(item.Id);
    Console.WriteLine(item.Title.Text);
    Console.WriteLine(item.Summary.Text);
}
if (reader != null) reader.Close();
This code works perfectly with any blog or page RSS, but with the Facebook RSS it gives an exception with the following message:
The element with name 'html' and namespace 'http://www.w3.org/1999/xhtml' is not an allowed feed format.
Thanks
Facebook will return HTML in this instance because it doesn't like the User Agent supplied by XmlReader. Since you can't customize it, you will need a different solution to grab the feed. This should solve your problem:
var req = (HttpWebRequest)WebRequest.Create(url);
req.Method = "GET";
req.UserAgent = "Fiddler";
var rep = req.GetResponse();
var reader = XmlReader.Create(rep.GetResponseStream());
SyndicationFeed feed = SyndicationFeed.Load(reader);
This is strictly a behavior of Facebook, but the proposed change should work equally well for other sites that are okay with your current implementation.
It works with Gregory's code above if you change the feed format to atom10 instead of rss20.
Change the url:
string url =
    "https://www.facebook.com/feeds/page.php?id=40796308305&format=atom10";
In my case the Facebook feed was also difficult to consume, so I tried FeedBurner to burn the feed for my Facebook page. FeedBurner generated the feed for me in Atom 1.0 format, and then I successfully :) consumed it with the System.ServiceModel.Syndication classes. My code was:
string Main()
{
    var url = "http://feeds.feedburner.com/Per.........all";
    Atom10FeedFormatter formatter = new Atom10FeedFormatter();
    using (XmlReader reader = XmlReader.Create(url))
    {
        formatter.ReadFrom(reader);
    }
    var s = "";
    foreach (SyndicationItem item in formatter.Feed.Items)
    {
        s += String.Format("[{0}][{1}] {2}", item.PublishDate, item.Title.Text, ((TextSyndicationContent)item.Content).Text);
    }
    return s;
}
I'm having trouble getting any type of response from the Listing Recommendation API. I keep getting a generic 500 message in the response. I've set up my headers the way they recommend here: https://developer.ebay.com/devzone/listing-recommendation/Concepts/MakingACall.html
I've tried using the information from the documentation on the call here: https://developer.ebay.com/devzone/listing-recommendation/CallRef/itemRecommendations.html#Samples
But every variation of my code fails the same way. Below is a sample of the code. I've tried it with and without the commented-out line, with the same results. It always fails on the line response.GetResponseStream. Thanks for your help.
public static void test(string AuthToken, string listing, log4net.ILog log)
{
    string url = "https://svcs.ebay.com/services/selling/listingrecommendation/v1/item/" + listing + "/itemRecommendations/?recommendationType=ItemSpecifics";
    var listingRecommendationRequest = (HttpWebRequest)WebRequest.Create(url);
    listingRecommendationRequest.Headers.Add("Authorization", "TOKEN " + AuthToken);
    listingRecommendationRequest.ContentType = "application/json";
    listingRecommendationRequest.Accept = "application/json";
    listingRecommendationRequest.Headers.Add("X-EBAY-GLOBAL-ID", "EBAY-US");
    listingRecommendationRequest.Method = "GET";
    //listingRecommendationRequest.Headers.Add("recommendationType", "ItemSpecifics");
    var response = (HttpWebResponse)listingRecommendationRequest.GetResponse();
    string result;
    using (var streamReader = new StreamReader(response.GetResponseStream()))
    {
        result = streamReader.ReadToEnd();
    }
    var reader = new JsonTextReader(new StringReader(result));
    while (reader.Read())
    {
        if (reader.Value != null)
        {
            // Read Json values here
            var pt = reader.Path;
            var val = reader.Value.ToString();
        }
    }
}
Edit: To illustrate what I'm trying to accomplish: I'm trying to get the item specifics recommendations that eBay suggests when manually editing a listing. These suggestions change based on what is in your title.
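For what it's worth, a 500 from that endpoint usually surfaces as a WebException thrown by GetResponse() before GetResponseStream is ever reached. The sketch below is only a diagnostic aid, not a confirmed fix; it reuses the variables from the code above to capture whatever error payload eBay sends back:
try
{
    var response = (HttpWebResponse)listingRecommendationRequest.GetResponse();
    using (var streamReader = new StreamReader(response.GetResponseStream()))
    {
        result = streamReader.ReadToEnd();
    }
}
catch (WebException ex)
{
    // eBay typically returns a JSON/XML error body explaining why the call failed.
    if (ex.Response != null)
    {
        using (var errorReader = new StreamReader(ex.Response.GetResponseStream()))
        {
            log.Error("Listing Recommendation error body: " + errorReader.ReadToEnd());
        }
    }
    throw;
}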
I'm trying to complete a PUT request to the IIS media services API - to try and set a publishing point to "stopped" state.
I've read the following link, which hasn't helped me very much!
https://msdn.microsoft.com/en-us/library/hh206014%28VS.90%29.aspx
My current code throws an exception on the httpWebRequest1.GetResponse() call; it indicates the web server is returning a 401 Unauthorized error code:
string url = "http://localhost/LiveStream.isml/State";
var httpWebRequest1 = (HttpWebRequest)WebRequest.Create(url);
httpWebRequest1.ContentType = "application/atom+xml";
httpWebRequest1.Method = "PUT";
httpWebRequest1.Headers.Add("Authorization", "USERNAME:PASSWORD");
using (var streamWriter = new StreamWriter(httpWebRequest1.GetRequestStream()))
{
XmlDocument document = new XmlDocument();
document.Load("Resources/XMLFile1.xml");
string test = GetXMLAsString(document);
streamWriter.Write(test);
}
var httpResponse = (HttpWebResponse)httpWebRequest1.GetResponse();
using (var streamReader = new StreamReader(httpResponse.GetResponseStream()))
{
var responseText = streamReader.ReadToEnd();
}
My username/password are redacted above, but they work fine when visiting the page in a browser and entering them in the username/password prompt that opens.
My script essentially PUTs an XML document that is a copy of the XML document returned when visiting the state page in a browser.
Any help would be appreciated.
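As an aside, a raw "USERNAME:PASSWORD" value is not an Authorization scheme IIS recognizes, which by itself could explain a 401. Below is only a rough sketch of the two usual ways to attach credentials, under the assumption that Basic or Windows authentication is enabled for the publishing point (USERNAME/PASSWORD are placeholders):
// Rough sketch, not a confirmed fix. Requires System.Net and System.Text.
var request = (HttpWebRequest)WebRequest.Create("http://localhost/LiveStream.isml/State");
request.ContentType = "application/atom+xml";
request.Method = "PUT";

// Option 1: let the framework negotiate the scheme (Basic, Digest, NTLM, Negotiate).
request.Credentials = new NetworkCredential("USERNAME", "PASSWORD");

// Option 2: send an explicit Basic Authorization header (Base64 of "user:password").
string token = Convert.ToBase64String(System.Text.Encoding.ASCII.GetBytes("USERNAME:PASSWORD"));
request.Headers["Authorization"] = "Basic " + token;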
I'm using Argotic Syndication Framework for processing feeds.
But the problem is, if I pass Argotic a URL that is not a valid feed (for example, http://stackoverflow.com, which is an HTML page, not a feed), the program hangs (I mean, Argotic gets stuck in an infinite loop).
So, How to check if a URL is pointing to a valid feed?
From .NET 3.5 onwards you can do the following. It will throw an exception if it's not a valid feed.
using System.Diagnostics;
using System.ServiceModel.Syndication;
using System.Xml;

public bool TryParseFeed(string url)
{
    try
    {
        SyndicationFeed feed = SyndicationFeed.Load(XmlReader.Create(url));
        foreach (SyndicationItem item in feed.Items)
        {
            Debug.Print(item.Title.Text);
        }
        return true;
    }
    catch (Exception)
    {
        return false;
    }
}
Or you can try parsing the document on your own:
string xml = "<?xml version=\"1.0\" encoding=\"utf-8\" ?>\n<event>This is a Test</event>";
XmlDocument xmlDoc = new XmlDocument();
xmlDoc.LoadXml(xml);
Then check the root element. For an Atom feed it should be the feed element in the "http://www.w3.org/2005/Atom" namespace:
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:creativeCommons="http://backend.userland.com/creativeCommonsRssModule" xmlns:re="http://purl.org/atompub/rank/1.0">
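For example, a small check along those lines might look like the sketch below (LooksLikeFeed is just an illustrative name, and it also accepts an RSS 2.0 <rss> root, which the text above doesn't cover):
using System.Xml;

public static bool LooksLikeFeed(string xml)
{
    XmlDocument doc = new XmlDocument();
    doc.LoadXml(xml);
    XmlElement root = doc.DocumentElement;

    // Atom feeds use <feed> in the Atom namespace; RSS 2.0 uses a plain <rss> root.
    bool isAtom = root.LocalName == "feed" &&
                  root.NamespaceURI == "http://www.w3.org/2005/Atom";
    bool isRss = root.LocalName == "rss";

    return isAtom || isRss;
}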
References:
http://msdn.microsoft.com/en-us/library/system.servicemodel.syndication.syndicationfeed.aspx
http://dotnet.dzone.com/articles/systemservicemodelsyndication
You can use the W3C Feed Validation Service. It has a SOAP API.
You can check the content type. For a feed it should be an XML type such as text/xml (feeds are also commonly served as application/rss+xml or application/atom+xml). See this question to find the content type.
You can use this code:
var request = HttpWebRequest.Create("http://www.google.com") as HttpWebRequest;
if (request != null)
{
    var response = request.GetResponse() as HttpWebResponse;
    string contentType = "";
    if (response != null)
        contentType = response.ContentType;
}
Thanks to the answer to that question.
Update
To check whether it is a feed address, you can use the W3C Feed Validation service.
Update 2
As BurundukXP said, it has a SOAP API. To work with it, you can read the answer to this question.
If you want to just have it transformed into valid RSS/ATOM, you can use http://feedcleaner.nick.pro/ to have it sanitized. Alternatively, you can fork the project.
I need access to the HTML of a Facebook page to extract some data from it, so I need to create a WebRequest.
Example:
My code worked well for other sites, but for Facebook, I must be logged in to access the HTML.
How can I use Firefox data for creating a WebRequest for Facebook page?
I tried this:
List<string> HTML_code = new List<string>();
WebRequest request = WebRequest.Create(URL);
using (WebResponse response = request.GetResponse())
using (StreamReader stream = new StreamReader(response.GetResponseStream()))
{
    string line;
    while ((line = stream.ReadLine()) != null)
    {
        HTML_code.Add(line);
    }
}
...but the resulting HTML is the HTML of the Facebook home page when I am not logged in.
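One way to get the logged-in HTML is to attach your browser's Facebook session cookies to the request. This is only a sketch under the assumption that you copy valid cookie names and values out of Firefox's cookie store for facebook.com; the names below are placeholders:
// Sketch only: reuse a logged-in browser session by attaching its cookies.
// Without valid session cookies, Facebook serves the logged-out page.
var cookies = new CookieContainer();
cookies.Add(new Cookie("SESSION_COOKIE_NAME", "SESSION_COOKIE_VALUE", "/", ".facebook.com"));

var request = (HttpWebRequest)WebRequest.Create(URL);
request.CookieContainer = cookies;
request.UserAgent = "Mozilla/5.0";

using (var response = request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string html = reader.ReadToEnd();
}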
If what you are trying to do is retrieve the number of likes from a Facebook page, you can use Facebook's Graph API service. Just to keep it simple, this is what I basically did in the code:
Retrieve the Facebook page's data. In this case I used the Coke page's data since it was an example FB had listed.
Parse the returned Json using Json.Net. There are other ways to do this, but this keeps it simple, and you can get Json.Net over at CodePlex. The documentation that I looked at for my code was this page in the docs. Their documentation will also help you with parsing and serializing even more Json if you need to.
Then that basically translates into this code. Just note that I left out all the fancy exception handling to keep it simple, as networking is not always reliable! Also don't forget to include the Json.Net library in your project!
Usings:
using System.IO;
using System.Net;
using Newtonsoft.Json.Linq;
Code:
string url = "https://graph.facebook.com/cocacola";
WebClient client = new WebClient();
string jsonData = string.Empty;
// Load the Facebook page info
Console.WriteLine("Connecting to Facebook...");
using (Stream data = client.OpenRead(url))
{
using (StreamReader reader = new StreamReader(data))
{
jsonData = reader.ReadToEnd();
}
}
// Get number of likes from Json data
JObject jsonParsed = JObject.Parse(jsonData);
int likes = (int)jsonParsed.SelectToken("likes");
// Write out the result
Console.WriteLine("Number of Likes: " + likes);
I have a problem parsing an RSS feed using C#.
I used to use this method to load the feed.
XDocument rssFeed = XDocument.Load(#url);
But I noticed that when the feed has an xml-stylesheet, this method crashes, saying the XML is not well formed...
Here's a rss feed that contains this tag
http://www.channelnews.fr/accueil.feed?type=rss
What would be the best way to parse any RSS feed using C#?
Thanks for your help
This code works for me
static XDocument DownloadPage()
{
    var req = (HttpWebRequest)WebRequest.Create("http://www.channelnews.fr/accueil.feed?type=rss");
    req.UserAgent = "Mozilla";
    using (var response = req.GetResponse())
    using (var stream = response.GetResponseStream())
    using (var reader = new StreamReader(stream))
        return XDocument.Load(reader);
}
Note that if you omit setting the UserAgent, the response will contain the string 'DOS', which is definitely not XML :)
This one works more nicely:
XDocument xdoc = XDocument.Load("http://pedroliska.wordpress.com/feed/");
var items = (from i in xdoc.Descendants("item")
             select new
             {
                 Title = i.Element("title").Value
             }).ToList();
So now you can access the RSS titles in a loop or with something like:
items[0].Title
And just as the code pulls the title from the RSS feed, you can pull the description, link, pubDate, etc.
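For instance, the same query can pull those other <item> children too; a short sketch (element names are the plain RSS 2.0 ones, with no namespaces assumed):
// Sketch: reading the other common RSS 2.0 <item> fields with the same approach.
var items = (from i in xdoc.Descendants("item")
             select new
             {
                 Title = (string)i.Element("title"),
                 Description = (string)i.Element("description"),
                 Link = (string)i.Element("link"),
                 PubDate = (string)i.Element("pubDate")
             }).ToList();

Console.WriteLine(items[0].Title + " - " + items[0].PubDate);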