I decided to pull information from Google's Weather API. The code I'm using below works fine:
XmlDocument widge = new XmlDocument();
widge.Load("https://www.google.com/ig/api?weather=Brisbane/dET7zIp38kGFSFJeOpWUZS3-");
var weathlist = widge.GetElementsByTagName("current_conditions");
foreach (XmlNode node in weathlist)
{
    City.Text = "Brisbane";
    CurCond.Text = node.SelectSingleNode("condition").Attributes["data"].Value;
    Wimage.ImageUrl = "http://www.google.com/" + node.SelectSingleNode("icon").Attributes["data"].Value;
    Temp.Text = node.SelectSingleNode("temp_c").Attributes["data"].Value + "°C";
}
As I said, I am able to pull the required data from the XML file and display it. However, if the page is refreshed or a current session is still active, I receive the following error:
WebException was unhandled by user code - The remote server returned
an error: 403 Forbidden Exception.
I'm wondering whether this could be due to some kind of access limitation placed on that particular XML file?
Further research and adaptation of suggestions
As stated below, this is by no means best practice, but I've included the catch I now use for the exception. I run this code on Page_Load, so I just do a post-back to the page. I haven't noticed any problems since. Performance-wise I'm not overly concerned: I haven't noticed any increase in load time, and this solution is temporary since all of this is for testing purposes. I'm still in the process of moving to Yahoo's Weather API.
try
{
    XmlDocument widge = new XmlDocument();
    widge.Load("https://www.google.com/ig/api?weather=Brisbane/dET7zIp38kGFSFJeOpWUZS3-");
    var list2 = widge.GetElementsByTagName("current_conditions");
    foreach (XmlNode node in list2)
    {
        City.Text = "Brisbane";
        CurCond.Text = node.SelectSingleNode("condition").Attributes["data"].Value;
        Wimage.ImageUrl = "http://www.google.com/" + node.SelectSingleNode("icon").Attributes["data"].Value;
        Temp.Text = node.SelectSingleNode("temp_c").Attributes["data"].Value + "°C";
    }
}
catch (WebException exp)
{
    if (exp.Status == WebExceptionStatus.ProtocolError && exp.Response != null)
    {
        var webres = (HttpWebResponse)exp.Response;
        if (webres.StatusCode == HttpStatusCode.Forbidden)
        {
            // Post back to this page so the request is retried.
            Response.Redirect("ithwidgedev.aspx");
        }
    }
}
Google article illustrating API error handling
Google API Handle Errors
Thanks to:
https://stackoverflow.com/a/12011819/1302173 (Catch 403 and recall)
https://stackoverflow.com/a/11883388/1302173 (Error Handling and General Google API info)
https://stackoverflow.com/a/12000806/1302173 (Response Handling/json caching - Future plans)
Alternative
I recently found this great open-source alternative:
OpenWeatherMap - Free weather data and forecast API
This is related to a change / outage of the service. See: http://status-dashboard.com/32226/47728
I have been using Google's Weather API for over a year to feed a phone server so that the Polycom phones receive a weather page. It ran error-free for over a year. As of August 7th, 2012, there have been frequent intermittent 403 errors.
I hit the service once per hour (as has always been the case), so I don't think request frequency is the issue. More likely, the intermittent nature of the 403 is related to the partial roll-out of a configuration change or a CDN change at Google.
The Google Weather API isn't really a published API. It was an internal service apparently designed for use on iGoogle, so the level of support is uncertain. I tweeted @googleapis yesterday and received no response.
It may be better to switch to a promoted weather API such as:
WUnderground Weather or
Yahoo Weather.
Yesterday I added the following 'unless defined' error-handling Perl code to cope with this, but if the problem persists I will switch to a more fully supported service:
my $url  = "http://www.google.com/ig/api?weather=" . $ZipCode;
my $tpp  = XML::TreePP->new();
my $tree = $tpp->parsehttp( GET => $url );
my $city = $tree->{xml_api_reply}->{weather}->{forecast_information}->{city}->{"-data"};
unless (defined($city)) {
    print "The weather service is currently unavailable. \n";
    open (MYFILE, '>/home/swarmp/public_html/status/polyweather.xhtml');
    print MYFILE qq(<?xml version="1.0" encoding="utf-8"?>\n);
    print MYFILE qq(<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "xhtml11.dtd">\n);
    print MYFILE qq(<html xmlns="http://www.w3.org/1999/xhtml">\n);
    print MYFILE qq(<head><title>Weather is Unavailable!</title></head>\n);
    print MYFILE qq(<body>\n);
    print MYFILE qq(<p>\n);
    print MYFILE qq(The weather service is currently unavailable from the data vendor.\n);
    print MYFILE qq(</p>\n);
    print MYFILE qq(</body>\n);
    print MYFILE qq(</html>\n);
    close MYFILE;
    exit(0);
}...
This is by no means a best practice, but I use this API heavily in some WP7 and Metro apps. I handle this by catching the exception (most of the time a 403) and simply re-calling the service inside the catch. If there is an error on the Google end, it's usually brief and only results in one or two additional calls.
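A minimal sketch of that catch-and-retry idea (my own rendering, not the answerer's actual code; the helper name and retry count are assumptions):

using System.Net;
using System.Xml;

static class WeatherLoader
{
    // Retries the request a few times before giving up, since Google-side
    // 403s tend to be brief and a retry or two usually succeeds.
    public static XmlDocument LoadWeatherXml(string url, int maxAttempts)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                var doc = new XmlDocument();
                doc.Load(url);
                return doc;
            }
            catch (WebException)
            {
                if (attempt >= maxAttempts) throw; // give up after the last attempt
            }
        }
    }
}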
That's the same thing we found out.
Compare the request headers of a bad request and a working request. The working request includes cookies. But where do they come from?
Delete all your Google cookies from the browser. The weather API call will no longer work in your browser. Browse to google.com and then to the weather API, and it will work again.
Google checks the cookies to block repeated API calls. Getting the cookies once, before handling any weather API requests, will fix the problem. The cookies expire in one year; I assume you will restart your application more often than once a year, so you will get a new one. Getting cookies for each request will end in the same problem: too many different requests.
One tip: weather does not change often, so cache the response (for maybe an hour). That will cut down on time-consuming operations such as requests!
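A rough sketch of the cookie idea, assuming an HttpWebRequest-based client (the class name and warm-up call are mine, not from the answer):

using System.IO;
using System.Net;

class GoogleWeatherClient
{
    // Shared cookie jar: filled once, then reused for every weather request
    // so Google sees one consistent "session" instead of many cookie-less calls.
    private static readonly CookieContainer Cookies = new CookieContainer();

    public static string Fetch(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.CookieContainer = Cookies; // same cookies on each call
        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}

// Usage: warm up the cookie jar once at startup, then make weather calls.
// GoogleWeatherClient.Fetch("http://www.google.com/");
// var xml = GoogleWeatherClient.Fetch("http://www.google.com/ig/api?weather=Brisbane");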
I found that if you try the request in a clean browser (like a new window in Chrome's incognito mode), the Google weather service works. Possibly a cookie problem?
Related
I am currently writing a Discord bot in C#. I have most of the bot done, but for this next update I want to add the ability to check whether a streamer has gone live. Currently I poll the Twitch API, pull the JSON it returns, and check whether the JSON stream object is null or not. But it takes 3-5 minutes after the streamer goes live before the bot finally sees that stream is not null, even though I poll the JSON every 5 seconds. Is there any way to do this more efficiently? My code is below:
private const string Url = "https://api.twitch.tv/kraken/streams/streamer";

var request = (HttpWebRequest)WebRequest.Create(Url);
request.Method = "GET";
request.Timeout = 12000;
// The Twitch v5 API expects this media type in the Accept header.
request.Accept = "application/vnd.twitchtv.v5+json";
request.Headers.Add("Client-ID", "ID");
using (var s = request.GetResponse().GetResponseStream())
using (var sr = new System.IO.StreamReader(s))
{
    var jsonObject = JObject.Parse(sr.ReadToEnd());
    var jsonStream = jsonObject["stream"];
    // The Twitch channel is online if "stream" is present and not null.
    LastTwitchStatus = jsonStream != null && jsonStream.Type != JTokenType.Null;
}
Looks like it's the intended behavior of the Twitch API.
They are definitely more focused on pushing their horsepower into streaming, not immediate data provision through the API.
While there might be a limitation like this, you can try scraping the page if timing is crucial and you don't want to wait 3-5 minutes for something that already happened.
One idea is to poll the page every 5 seconds or so and then query the HTML document for something characteristic that distinguishes an offline channel from an online one.
Idea for scraping in JavaScript (just replicate it in .NET):
For example, I have tried to query user pages (https://www.twitch.tv/username) in JavaScript with:
$(".recent-past-broadcast").length > 0
and for a user that is not broadcasting it yields true, while for a broadcasting user it yields false. This might fail for a user with no recent broadcast history, though.
You can also try checking the videos page (https://www.twitch.tv/username/videos/all) for the live indicator, like:
$(".cn-livestatus__circle").length > 0
It will yield true for a streaming user and false for one who is not streaming (even if he/she is online).
Of course, that's the least efficient way of doing this and requires a lot more download than just polling, but... it still seems more up to date than asking the API every 5 seconds and getting the actual state delayed by 3-5 minutes.
Just replicate the querying above in .NET and you're there.
You could also mix the two approaches: once you see that someone has started streaming, disable page scraping and switch to API calls only, to check whether you're still up to date.
Useful tooling for scraping:
For parsing HTML documents in .NET, use a parser like AngleSharp:
https://github.com/AngleSharp/AngleSharp
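A rough .NET equivalent of the JavaScript checks above, using AngleSharp. This is a sketch under assumptions: the selector is the one from this answer, the live indicator must appear in the static HTML, and Twitch's markup can change at any time.

using System.Linq;
using System.Threading.Tasks;
using AngleSharp;

class TwitchLiveCheck
{
    // Downloads the channel's videos page and looks for the live-status element.
    public static async Task<bool> IsLiveAsync(string username)
    {
        var context = BrowsingContext.New(Configuration.Default.WithDefaultLoader());
        var document = await context.OpenAsync("https://www.twitch.tv/" + username + "/videos/all");
        // Same characteristic element the jQuery one-liner checks for.
        return document.QuerySelectorAll(".cn-livestatus__circle").Any();
    }
}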
Update
Thanks to a comment by @IvanL, it turns out that the problem is Google-specific. I have since tried other providers, and for those everything works as expected. Google just doesn't seem to send claims information. I haven't yet been able to figure out why, or what I need to do differently to get Google to send it.
A wild stab in the dark says it may be related to the realm being defaulted to http://:/ , since I have seen an answer by Andrew Arnott saying that Google changes the claimed identifier for the same account based on the realm passed with the authentication request.
Another possibly important tidbit of information: unlike many of the examples that can be found around the web for using DotNetOpenAuth, I am not using a "simple" textbox and composing the openIdIdentifier myself; I am using the openid-selector, and that provides the openIdIdentifier passed to ValidateAtOpenIdProvider. (As per the Adding OpenID authentication to your ASP.NET MVC 4 application article.)
The question is: why does IAuthenticationResponse.GetExtension() always return null when using Google as the OpenID provider, when all the relevant Google gotchas (Email requested as required, AXFetchAsSregTransform, etc.) have otherwise been addressed?
Original
I am struggling to get DotNetOpenAuth to parse the response returned from the provider. I followed the instructions of Adding OpenID authentication to your ASP.NET MVC 4 application up to the point where the login should be working, and a login should result in a return to the home page with the user's name (nickname) displayed at the top right. (That is, up to "The user should at this point see the following:", just over halfway down the article.)
I am using Visual Studio Web Developer 2010 Express with C#. DotNetOpenAuth version is 4.0.3.12153 (according to the packages.config, 4.0.3.12163 according to Windows Explorer).
My web.config was modified following the instructions in Activating AXFetchAsSregTransform, which was the solution for DotNetOpenId - Open Id get some data.
Unfortunately, that wasn't enough to get it working for me.
The openid-selector is working fine and results in a correct selection of the OpenID provider. The authentication request is created as follows:
public IAuthenticationRequest ValidateAtOpenIdProvider(string openIdIdentifier)
{
    IAuthenticationRequest openIdRequest = openId.CreateRequest(Identifier.Parse(openIdIdentifier));
    var fields = new ClaimsRequest()
    {
        Email = DemandLevel.Require,
        FullName = DemandLevel.Require,
        Nickname = DemandLevel.Require
    };
    openIdRequest.AddExtension(fields);
    return openIdRequest;
}
This all works. I can log in and authorize the page to receive my information, which then results in a call to GetUser:
public OpenIdUser GetUser()
{
    OpenIdUser user = null;
    IAuthenticationResponse openIdResponse = openId.GetResponse();
    if (openIdResponse.IsSuccessful())
    {
        user = ResponseIntoUser(openIdResponse);
    }
    return user;
}
openIdResponse.IsSuccessful is implemented as an extension method (see the linked article):
return response != null && response.Status == AuthenticationStatus.Authenticated;
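For completeness, the full extension method presumably looks like this (only its body is quoted above):

public static bool IsSuccessful(this IAuthenticationResponse response)
{
    return response != null && response.Status == AuthenticationStatus.Authenticated;
}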
and it always reports success, as the ResponseIntoUser method is entered:
private OpenIdUser ResponseIntoUser(IAuthenticationResponse response)
{
    OpenIdUser user = null;
    var claimResponseUntrusted = response.GetUntrustedExtension<ClaimsResponse>();
    var claimResponse = response.GetExtension<ClaimsResponse>();
    // For this to work with the newer/est version of DotNetOpenAuth, make sure the web.config
    // file contains the required settings. See link for more details.
    // http://www.dotnetopenauth.net/developers/help/the-axfetchassregtransform-behavior/
    if (claimResponse != null)
    {
        user = new OpenIdUser(claimResponse, response.ClaimedIdentifier);
    }
    else if (claimResponseUntrusted != null)
    {
        user = new OpenIdUser(claimResponseUntrusted, response.ClaimedIdentifier);
    }
    else
    {
        user = new OpenIdUser("ikke@gmail.com;ikke van ikkenstein;ikke nick;ikkeclaimedid");
    }
    return user;
}
My version above differs from the code in the linked article only by the addition of the final else block, which ensures that I always get the home page with a user name and a logoff link displayed (which helps when trying this several times in succession).
I have tried both Google and Yahoo. Both authenticate fine, and both return an identity assertion, as logged by the WebDev server. However, GetUntrustedExtension and GetExtension always return null. I always get to see "ikke nick" from the last else, never the name I actually used to authenticate.
I am at a loss on how to continue trying to get this to work. It is probably some oversight on my part (I am an experienced developer, but I've only just started dipping my toes into C# and web front-end development), and I can't see it.
Any and all suggestions on how to proceed / debug this are very much welcome.
Are you using Google as the OpenID provider to test your solution against? Google has (or had) the habit of including the claims only the first time you authenticate the application. So perhaps try a fresh Google account and see if that works?
Sorry for the slow response, doing a big migration at a client this week :-) Glad that this little comment resolved your issue.
My web service calls a URL which returns a value that I must capture and use in a different function.
I've only recently started working with web services and am very new to the concept of calling a URL within a web service. (Previously asked and answered on this forum, for those requiring more information:
Webservice method to call a url
My web service is: Insurance Service.
My client sends me data through the Insurance service, which calls a URL that returns an insurance number.
How do I capture this insurance number? I thought I could use session state to capture it, but I was so wrong: the insurance number comes back null, with an object reference error.
int insuranceNo;
insuranceNo = Convert.ToInt16(HttpContext.Current.Session["insuranceNo"]);
It must have something to do with the response, right?
I thought I could google for what I'm looking for, but I honestly don't know what to call this in order to search for it. Thought I'd give it another shot on this forum, since I found the answer to the first part of this function here.
Code to call the URL:
string url = string.Format("http://www.insuranceini.com/insurance.asp?fileno1={0}&txtfileno2={1}&username={2}&userid={3}&dteinsured={4}&dteDob={5}&InsurerName={6}",
    txtfileno1, txtfileno2, username, userid, dteinsured, dteDob, InsurerName);
WebRequest request = WebRequest.Create(url);
using (WebResponse response = request.GetResponse())
{
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        string urlText = reader.ReadToEnd();
        // Do whatever you need to do
    }
}
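If the endpoint writes the insurance number directly into the response body, then urlText already holds it and you can parse it where the comment sits. A minimal sketch, assuming the body is just the bare number (the question doesn't confirm the exact format, and ProcessInsuranceNumber is a hypothetical next step):

// Assumption: the response body contains only the number, e.g. "12345".
int insuranceNo;
if (int.TryParse(urlText.Trim(), out insuranceNo))
{
    // Hand the value to the next function directly instead of via Session.
    ProcessInsuranceNumber(insuranceNo); // hypothetical
}
else
{
    // The body wasn't a bare number; inspect urlText to see how the value is wrapped.
}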
I would be grateful for any pointers, places to start looking, or other advice.
The code began giving different errors. Closing this and referring to: Datetime Conversion and Parsing
Thank you everyone for the helpful comments.
Here is my simple code, which works fine when called from PHP or any client other than Adobe AIR. The same code also works when called from a SWF; there is FluorineFx code for another part of the project as well, but that does nothing to break this.
I did find one thing: all POST calls were somehow being changed to GET, which really amazes me. I would be glad to get an answer for that. Thanks in advance, everyone. Below is almost the same code from my web service, with the AIR code just under it.
[WebMethod(EnableSession = true)]
public bool Authenticate(string UserName, string Password)
{
    try
    {
        if (Membership.ValidateUser(UserName, Password))
        {
            FormsAuthentication.SetAuthCookie(UserName, true);
            return true;
        }
        return false;
    }
    catch (Exception ex)
    {
        return false;
    }
}
And my call from the Adobe AIR code is below:
var ws:WebService = new WebService();
ws.wsdl = "http://mysite.com/myservice.asmx?WSDL";
ws.useProxy = false;
ws.addEventListener(LoadEvent.LOAD, onWSDLLoad);
ws.loadWSDL();
ws.Authenticate.addEventListener(ResultEvent.RESULT, onLoginResultHandler);
ws.Authenticate.addEventListener(FaultEvent.FAULT, onLoginFaultHandler);
ws.Authenticate("usrname", "password");

protected function onLoginFaultHandler(event:FaultEvent):void
{
    Alert.show('Login Failed with messsage\r\n[ ' + event.fault.faultString + ' ]');
    /* Error #1085: The element type "br" must be terminated
       by the matching end-tag "</br>". */
    /* Checking the content value of the fault event shows the
       same output as http://mysite.com/myservice.asmx */
}

protected function onLoginResultHandler(event:ResultEvent):void
{
    /* on success code */
}
This guy tells us the following at http://verveguy.blogspot.com/2008/07/truth-about-flex-httpservice.html:
1. All HTTP GET requests are stripped of headers. It's not in the Flex stack, so it's probably the underlying Flash Player runtime.
2. All HTTP GET requests that have a content type other than "application/x-www-form-urlencoded" are turned into POST requests.
3. All HTTP POST requests that have no actual posted data are turned into GET requests. See 1 and 2.
4. All HTTP PUT and HTTP DELETE requests are turned into POST requests. This appears to be a browser limitation that the Flash Player is stuck with.
I do see my request above turning into a GET, but then I DO have POST values in it. Or are those somehow not being sent, or not recorded, by the WebService object?
This is pretty simple... The Flex XML parser uses strict XML checking, so all tags must be closed. If you can change the web service, then change all <br> tags to <br />.
I finally found the answer myself. It turns out I had cookies set to AutoDetect, which meant that when AIR called the URL, the server would redirect in order to keep the cookie/session value inside the URI itself.
I have now switched that to UseCookies and everything is back to normal. I was able to test this from a sample web service and realized it was the server side that was doing something wrong; between AIR and a browser, the only difference is the cookies.
Somehow NuSOAP for PHP is smart enough to cope with AutoDetect, or with the new URI of the web service, but AIR couldn't locate that. Anyway, thanks everyone for helping me solve this.
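For reference, that switch is a web.config change. Depending on which cookie was set to AutoDetect, it lives either on sessionState or on the forms authentication element; a minimal sketch assuming session state:

<!-- Before: AutoDetect may fall back to embedding the session ID in the URI,
     which the AIR WebService client doesn't follow. -->
<!-- <sessionState cookieless="AutoDetect" /> -->
<sessionState cookieless="UseCookies" />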
I have a user control that is featured on several pages of a heavily hit site. Some of these pages include our blog sidebar, our forum sidebar, and smack in the middle of our home page, so this control is rendered a lot. The control is meant to read in a Twitter RSS feed for a specific account and write out the last 2 tweets. Below is the majority of my Page_Load for the control. I have it in a try {} catch because in production the site often fails to load the XML, so I need to flip on a friendly error message.
try
{
    var feed = XmlReader.Create(ResourceManager.GetString("TwitterRSS"));
    var latestItems = SyndicationFeed
        .Load(feed)
        .GetRss20Formatter()
        .Feed
        .Items
        .Take(2);
    rptTwitter.ItemDataBound += new RepeaterItemEventHandler(rptTwitter_ItemDataBound);
    rptTwitter.DataSource = latestItems;
    rptTwitter.DataBind();
}
catch (Exception ex)
{
    phError.Visible = true;
}
As you can see, I just fetch the 2 most recent items and repeat over them in a Repeater for the front end. If anything fails, I flip on the error PlaceHolder, which says something like "Twitter is unavailable".
I often see this error message on the production site, so I'm wondering if it's making too many requests to the RSS feed. I was thinking about output-caching the control for 10 minutes, but I thought, "What if it gets cached in the error state?" Then it's guaranteed to display the error message for 10 minutes. My question is: is it true that if the error is rendered while the newly cached version is being created, that version will truly be cached for 10 minutes (assuming I set Duration="600")? Does anyone have tips on how to make this work better, or how to cache only when real Twitter data is rendered, not the error message?
Thanks in advance
Instead of caching the entire page, I would cache the application data returned by your
var latestItems = ....
statement, as well as whether you received an error. You can give each a different cache duration, so if you successfully get the data you cache it longer than if you got an error. One implementation would look like this:
object Twitter = Cache["MyTwitter"];
if (Twitter == null)
{
    // Cache is empty: load the feed and cache the result.
    try
    {
        var latestItems = (load items)
        Cache.Insert("MyTwitter", latestItems, null, DateTime.Now.AddSeconds(600),
            Cache.NoSlidingExpiration);
        Twitter = latestItems;
    }
    catch (Exception ex)
    {
        // Cache the error for a shorter time so the next request retries sooner.
        Cache.Insert("MyTwitter", ex.ToString(), null, DateTime.Now.AddSeconds(60),
            Cache.NoSlidingExpiration);
        Twitter = ex.ToString();
    }
}
if (Twitter is string)
{
    phError.Visible = true;
}
else
{
    rptTwitter.DataSource = Twitter;
    // rest of data binding code here
}
There are two parts here. The first part checks the cache; if the object is not in the cache, it does the loading, and if there's an error it just stores a string in the cache.
Then, with the object in hand: if it's a string, you know you've got an error. Otherwise it's the result of retrieving the Twitter feed.