I am working on AWS S3 uploads and downloads from a bucket. After generating a pre-signed URL, everything works fine. But after the URL expires, the following XML is displayed:
<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<Expires>2017-03-31T14:49:56Z</Expires>
<ServerTime>2017-05-04T11:32:40Z</ServerTime>
<RequestId>...</RequestId>
<HostId>...</HostId>
</Error>
How can I customise the above output, or otherwise display it as a table or simply show "Expired"?
I assume you mean customizing the above output for your application. You're not going to be able to change the error message returned by AWS itself.
There are a few ways to go about this, but essentially you'll need to determine whether you received an error response. From there, you can simply parse it, since it is an XML response.
You'll of course want to catch any exceptions thrown while parsing the XML.
string data = @"<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<Expires>2017-03-31T14:49:56Z</Expires>
<ServerTime>2017-05-04T11:32:40Z</ServerTime>
<RequestId>...</RequestId>
<HostId>...</HostId>
</Error>";

XDocument xdoc = XDocument.Parse(data);
if (xdoc.Descendants("Error").Any())
{
    var errorMessage = from lv1 in xdoc.Descendants("Error")
                       select lv1.Element("Message").Value;
    Console.WriteLine(errorMessage.FirstOrDefault());
}
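As a first check before parsing, you can also look at the HTTP status code of the response, since S3 returns 403 Forbidden for expired pre-signed URLs. A minimal sketch using HttpClient (the URL below is a placeholder, not a real pre-signed link):

```csharp
using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Xml.Linq;

class ExpiredLinkCheck
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // Placeholder pre-signed URL; substitute your own generated link.
        var response = await client.GetAsync(
            "https://example-bucket.s3.amazonaws.com/file?X-Amz-Expires=3600");

        if (response.StatusCode == HttpStatusCode.Forbidden)
        {
            // Only parse the XML error body when S3 signals failure.
            var body = await response.Content.ReadAsStringAsync();
            var message = XDocument.Parse(body)
                .Descendants("Message").FirstOrDefault()?.Value;
            Console.WriteLine(message == "Request has expired" ? "Expired" : message);
        }
    }
}
```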
The Revit add-in works perfectly, and I have also converted it correctly for Design Automation. I debugged it with the local debugger, and it worked perfectly.
So I can say the app bundle is doing well.
Now, coming to the web application code: it works fine until the last line, "workItemStatus".
I need an .rfa file and a big JSON file as input files to run the code. Together they are about 1 MB in size. But the code gets stuck (waits endlessly) when uploading the files, and the workitem does not start.
I read in another Stack Overflow post that Forge does not allow more than a 16 KB upload to an OSS bucket by.....
Url = string.Format("https://developer.api.autodesk.com/oss/v2/buckets/{0}/objects/{1}", bucketKey, inputFileNameOSS)
That post says I will need to upload bigger files to another cloud service and use a signed URL instead of the Forge OSS bucket.
The code looks correct while debugging, but it gets stuck when it reaches the line
WorkItemStatus workItemStatus = await _designAutomation.CreateWorkItemAsync(workItemSpec);
I have debugged the code, and it looks like it works perfectly until the "workItemStatus" value in DesignAutomationController.cs, "StartWorkItem".
Every key and value looks like it is passed correctly.
Is it because of the file size? As the JSON file is big, I am uploading it like the other input (.rfa/.rvt) files.
string callbackUrl = string.Format("{0}/api/forge/callback/designautomation?id={1}&outputFileName={2}", OAuthController.GetAppSetting("FORGE_WEBHOOK_URL"), browerConnectionId, outputFileNameOSS);
WorkItem workItemSpec = new WorkItem()
{
    ActivityId = activityName,
    Arguments = new Dictionary<string, IArgument>()
    {
        { "inputFile", inputFileArgument },
        { "inputJsonFile", inputFileArgument1 },
        { "outputFile", outputFileArgument },
        { "onComplete", new XrefTreeArgument { Verb = Verb.Post, Url = callbackUrl } }
    }
};
// Execution hangs on the following line:
WorkItemStatus workItemStatus = await _designAutomation.CreateWorkItemAsync(workItemSpec);
return Ok(new { WorkItemId = workItemStatus.Id });
I read in another stackoverflow post, that Forge does not allow more than 16kb upload to oss bucket by..
The 16 KB limit applies to the payloads of Design Automation endpoints, including the workitem. The limits are defined here. If the workitem payload exceeds 16 KB you will see the error HTTP 413 Payload Too Large.
To send large JSON inputs to Design Automation, you can first upload the JSON to OSS (or even another storage service such as Amazon S3), then submit the workitem with a signed URL to the JSON file (similar to the signed URL for the .rfa file).
Edit:
1. Large JSON files can be uploaded to OSS using the Data Management endpoints.
2. A signed URL with read access can then be obtained for that object using the signed URL endpoint.
3. That URL can then be passed to the Design Automation workitem payload as an input argument, instead of embedding the JSON contents in the payload.
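The three steps above can be sketched with plain HttpClient calls. Note this is only a sketch: the endpoint paths follow the Forge OSS v2 REST API, and the bucket key, object name and access token are placeholders you must supply.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class OssSignedUrlSketch
{
    // Sketch only: accessToken, bucketKey and objectName are placeholders.
    static async Task<string> UploadJsonAndGetSignedUrl(
        HttpClient client, string accessToken, string bucketKey, string objectName, string json)
    {
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // 1. Upload the large JSON file to OSS.
        var uploadUrl = $"https://developer.api.autodesk.com/oss/v2/buckets/{bucketKey}/objects/{objectName}";
        var upload = await client.PutAsync(uploadUrl,
            new StringContent(json, Encoding.UTF8, "application/json"));
        upload.EnsureSuccessStatusCode();

        // 2. Request a read-only signed URL for that object.
        var signed = await client.PostAsync($"{uploadUrl}/signed?access=read",
            new StringContent("{}", Encoding.UTF8, "application/json"));
        signed.EnsureSuccessStatusCode();

        // 3. The response body contains the signed URL; pass it to the
        //    workitem's input argument instead of embedding the JSON.
        return await signed.Content.ReadAsStringAsync();
    }
}
```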
How can I extract the X-Pagination header from a response and use the next link to chain requests?
I've tried in both Postman and a C# console application with RestSharp. No success.
The easiest approach would be a small console application for testing. I just need to iterate through the pages.
This is what I get back in the X-Pagination header:
{
"Page":1,
"PageSize":20,
"TotalRecords":1700,
"TotalPages":85,
"PreviousPageLink":"",
"NextPageLink":"www......./api/products/configurations?Filters=productid=318&IncludeApplicationPerformance=true&page=1",
"GotoPageLinkTemplate":"www..../api/products/configurations?Filters=productid=318&IncludeApplicationPerformance=true&page=0"
}
In Postman you simply retrieve the header, parse it into a JSON object, then use the value to set the link for your next request.
Make your initial request then in the Test tab do something like:
var nextPageLinkJson = JSON.parse(pm.response.headers.get("X-Pagination"));
var nextPageLink = nextPageLinkJson.NextPageLink;
pm.environment.set("nextPageLink", nextPageLink);
If you don't know how many pages you're going to have, you'll have to experiment with the conditions for when to set the nextPageLink variable, but that's the general idea.
You can also set the request to run with the new link using postman.setNextRequest("request_name").
Note that this approach only works in the Collection Runner.
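For the C# console application, here is a minimal sketch using HttpClient and System.Text.Json instead of RestSharp. The base URL is a placeholder, and I'm assuming (based on the empty PreviousPageLink above) that NextPageLink is empty on the last page; verify that against your API before relying on it.

```csharp
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class PaginationWalker
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // Placeholder first-page URL; substitute the real endpoint.
        string next = "https://example.com/api/products/configurations?page=1";

        while (!string.IsNullOrEmpty(next))
        {
            var response = await client.GetAsync(next);
            response.EnsureSuccessStatusCode();
            // ... process response.Content for this page here ...

            next = null;
            if (response.Headers.TryGetValues("X-Pagination", out var values))
            {
                // The header value is the JSON object shown above.
                using var doc = JsonDocument.Parse(string.Join("", values));
                var link = doc.RootElement.GetProperty("NextPageLink").GetString();
                if (!string.IsNullOrEmpty(link))
                    next = link; // assumed empty when there are no more pages
            }
        }
    }
}
```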
I'm trying to implement punchout catalogs on our eComm site. Honestly, the documentation for cXML is a mess, and all the code examples are in JavaScript and/or VB.Net (I use C# and would rather not have to translate them). Does anyone out there have examples or samples of how to receive the PunchOutSetupRequest XML and then send out the PunchOutSetupResponse XML using C#? I've been unable to find anything on the interwebs (I've been looking for two days now)...
I'm hoping I can just do this inside an ActionResult (vs. a 'launch page' as suggested).
I'm a complete noob at punchouts and could really use some help here. The bosses are being pretty pushy, so any assistance would be greatly appreciated. Suggestions as to how to make this work would also be much appreciated.
I apologize to all for the vagueness of the question (request).
This isn't trivial, but this should get you started.
You'll need 3 generic handlers (.ashx): Setup, Start, and Order.
Setup and Order will receive HTTP Post with content-type of "text/xml". Look at HttpRequest.InputStream if needed to get the XML into a string. From there, look at LINQ-to-XML to dig out the data you want. Your HTTP Response to both of these will also be content-type "text/xml" and UTF8 encoded, returning the CXML as documented...use LINQ-to-XML to produce that.
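A minimal sketch of such a handler follows. The element name "SharedSecret" comes from the cXML header schema, but treat the navigation here as illustrative and verify it against the cXML documentation for your trading partner:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Text;
using System.Web;
using System.Xml.Linq;

// Sketch of a Setup-style handler: reads the raw cXML POST body,
// digs out values with LINQ-to-XML, and replies with text/xml.
public class SetupHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Dump the raw cXML request body into a string.
        string cxml;
        using (var reader = new StreamReader(context.Request.InputStream, Encoding.UTF8))
            cxml = reader.ReadToEnd();

        var doc = XDocument.Parse(cxml);
        // Example: pull the shared secret for credential validation.
        var sharedSecret = doc.Descendants("SharedSecret").FirstOrDefault()?.Value;

        // ... validate credentials, create the token and application object ...

        context.Response.ContentType = "text/xml";
        context.Response.ContentEncoding = Encoding.UTF8;
        // Build the PunchOutSetupResponse with LINQ-to-XML and write it here.
    }

    public bool IsReusable => false;
}
```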
The Setup handler will need to validate credentials and return a URL with a unique QueryString token pointing to the Start handler. Do not expect session persistence between Setup and Start, because they're not from the same caller. This handler will need to create an application object for the token and associated data you extracted from the cXML.
The Start handler will be called as a simple GET, and will need to match the token in the QueryString to the appropriate application object, copy that data to the session, and then do a response.redirect to whatever page in your site you want the buyer to land on.
Once they populate their cart with some things, and are ready to check out, you'll take them to a page that has an embedded form (not to be confused with an ASP.Net form that posts back to your server) and a submit button (again, not an ASP.Net button). From your Setup handler, you captured a URL to point this form's Post, and within the form you'll have a hidden input tag with the UTF8 encoded CXML Punchout Order injected as the value produced with LINQ-to-XML. Highly recommend Base64 encoding that value to avoid ASP.Net messing with the tags it contains during rendering, and naming the hidden input "cxml-base64" per the documentation. The result is the form is client-side POSTed to your customer's server instead of yours, and their server will extract the CXML Punchout Order and that ends your visitor's session.
The Order handler will receive a CXML OrderRequest and just like Setup, you'll dump that to a string and then use LINQ-to-XML to parse it and act upon it. Again you'll get credentials to verify, possibly a credit card to process, and the order items, ship-to, etc. Note that the OrderRequest may not contain all the items that were in the Punchout Order, because the system on your customer's side may remove items or even change item quantities before submitting the final OrderRequest to you. The OrderRequest could come back to you after the Punchout Order is posted to them in a matter of minutes, days, weeks, or never...don't bother storing the cart data in hopes of matching it to the order later.
Last note...the buyer may be experiencing your site in an iframe embedded in their web-based procurement UI, so design accordingly.
If you need more info, reply to this and I'll get back.
Update...Additional considerations:
Discuss with the buyer how they want fault handling to flow, particularly with orders, because you have a choice. 1) exhaustively evaluate everything in the CXML you receive and return response codes other than 200 if anything is wrong, or 2) always return a 200 Success and deal with any issues out of band or by generating a ConfirmationRequest that rejects the order. My experience is that a mix of the two works best. Certainly you should throw a non-200 if the credentials fail, but you may not want (or be able) to run a credit card or validate stock availability inline. Your buyer's system may not be able to cope with dozens of possible faults, and/or may not show your fault messages to the user for them to make corrections. I've seen systems that will flat-out discard any non-200 response code and just blindly retry the submission repeatedly on an interval for hours or days until it gives up on a sanity check, while others will handle response codes within certain ranges differently than others, for example a 4xx invokes a retry, while a 5xx is treated as fatal. Remember that Setup and Order are not coming directly from the user...their procurement system is generating those internally.
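For reference, when you do reject inline, the status travels in the cXML Response element rather than only in the HTTP status line. A minimal body looks roughly like this (payloadID and timestamp are placeholders):

```xml
<cXML payloadID="..." timestamp="...">
  <Response>
    <Status code="200" text="OK"/>
  </Response>
</cXML>
```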
Update...answering the comment about how to test things...
You'd use the same method as you will for generating outbound ConfirmationRequest, ShipNoticeRequest, and InvoiceDetailRequest, all of which generally are produced on your side after receiving an OrderRequest from your customer's procurement system.
Start with Linq-To-XML for an example of crafting your outgoing cXML (Creating XML Trees section). Combine that example with this bit of code:
StringBuilder output = new StringBuilder();
XmlWriterSettings objXmlWriterSettings = new XmlWriterSettings();
objXmlWriterSettings.Indent = true;
objXmlWriterSettings.NewLineChars = Environment.NewLine;
objXmlWriterSettings.NewLineHandling = NewLineHandling.Replace;
objXmlWriterSettings.NewLineOnAttributes = false;
objXmlWriterSettings.Encoding = new UTF8Encoding();
using (XmlWriter objXmlWriter = XmlWriter.Create(output, objXmlWriterSettings)) {
XElement root = new XElement("Root",
new XElement("Child", "child content")
);
root.Save(objXmlWriter);
}
Console.WriteLine(output.ToString());
So at this point the StringBuilder (output) has your whole cXML, and you need to POST it someplace. Your Web Application project, started with F5 and a default.aspx page will be listening on localhost and some port (you'll see that in the URL it opens). Separately, perhaps using VS Express for Desktop, you have the above code in a console app that you can run to do the Post using something like this:
var objRequest = (HttpWebRequest)WebRequest.Create("http://localhost:12345/handler.ashx");
objRequest.Method = "POST";
objRequest.UserAgent = "Some User Agent";
objRequest.ContentLength = output.Length;
objRequest.ContentType = "text/xml";
using (var objStreamWriter = new StreamWriter(objRequest.GetRequestStream(), Encoding.ASCII))
{
    objStreamWriter.Write(output);
    objStreamWriter.Flush();
}
WebResponse objWebResponse = objRequest.GetResponse();
XmlReaderSettings objXmlReaderSettings = new XmlReaderSettings();
objXmlReaderSettings.DtdProcessing = DtdProcessing.Ignore;
XmlReader objXmlReader = XmlReader.Create(objWebResponse.GetResponseStream(), objXmlReaderSettings);
// Pipes the stream to a higher level stream reader with the required encoding format.
MemoryStream objMemoryStream2 = new MemoryStream();
XmlWriter objXmlWriter2 = XmlWriter.Create(objMemoryStream2, objXmlWriterSettings);
objXmlWriter2.WriteNode(objXmlReader, true);
objXmlWriter2.Flush();
objXmlWriter2.Close();
objWebResponse.Close();
// Reset current position to the beginning so we can read all below.
objMemoryStream2.Position = 0;
StreamReader objStreamReader = new StreamReader(objMemoryStream2, Encoding.UTF8);
Console.WriteLine(objStreamReader.ReadToEnd());
objStreamReader.Close();
Since your handler should be producing cXML you'll see that spat out in the console. If it pukes, you'll get a big blob of debug mess in the console, which of course will help you fix whatever is broken.
Pardon the verbosity of the variable names; it was done to try to make things clear.
I have a problematic bug in a production system, which I simply can’t find. Sometimes the system produces an invalid link. When the end-user clicks it I get an error report from the system, and the end-user gets an error message. The URL’s that fail are like this:
http://www.mysite.com/somepath/undefined/
The “undefined” part is the problem, which I think is produced by JavaScript, but I’d like to make sure it’s not coming from the back-end.
Is there a way to save every response to a file if it contains the string “/undefined/” using global.asax?
I’ve tried this:
protected void Application_EndRequest(object sender, EventArgs e)
{
TextReader t = new StreamReader(Response.OutputStream);
string content = t.ReadToEnd();
// look for "/undefined/" and save to a temp file is the easy part after this
}
But it says that OutputStream is not readable.
I don’t know for certain which page/ajax request that produces the faulty link, so I need to inspect every response.
You cannot read the response stream directly, but you can add a response filter to the output stream and capture a copy of it.
There are several related articles on this:
- Logging raw HTTP request/response in ASP.NET MVC & IIS7, here on SO
- Capturing and Transforming ASP.NET Output with Response.Filter, by Rick Strahl
Hope this helps.
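A minimal sketch of such a filter follows: a pass-through Stream that keeps a copy of everything written to the response and dumps it to disk when it contains "/undefined/". The class name and the log path are illustrative, and note that Flush can fire more than once per request, so treat this as a starting point rather than production code.

```csharp
using System;
using System.IO;
using System.Text;

// Pass-through stream that records a copy of everything written to the response.
public class CapturingFilter : Stream
{
    private readonly Stream _inner;
    private readonly MemoryStream _copy = new MemoryStream();

    public CapturingFilter(Stream inner) { _inner = inner; }

    public override void Write(byte[] buffer, int offset, int count)
    {
        _copy.Write(buffer, offset, count);   // keep a copy
        _inner.Write(buffer, offset, count);  // pass through to the client
    }

    public override void Flush()
    {
        _inner.Flush();
        var body = Encoding.UTF8.GetString(_copy.ToArray());
        if (body.Contains("/undefined/"))
        {
            // Illustrative path: save offending responses for inspection.
            File.WriteAllText(@"C:\temp\undefined-" + Guid.NewGuid() + ".html", body);
        }
    }

    // Minimal Stream plumbing for a write-only filter.
    public override bool CanRead => false;
    public override bool CanSeek => false;
    public override bool CanWrite => true;
    public override long Length => _inner.Length;
    public override long Position
    {
        get => _inner.Position;
        set => _inner.Position = value;
    }
    public override int Read(byte[] buffer, int offset, int count) => throw new NotSupportedException();
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
}
```

It would be wired up in global.asax, e.g. in Application_BeginRequest: `Response.Filter = new CapturingFilter(Response.Filter);`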
Just check the IIS log files. IIS tracks all requests with their URLs.
This question already has answers here:
Google Weather API gone?
(5 answers)
Closed 6 years ago.
I decided to pull information from Google's Weather API - The code I'm using below works fine.
XmlDocument widge = new XmlDocument();
widge.Load("https://www.google.com/ig/api?weather=Brisbane/dET7zIp38kGFSFJeOpWUZS3-");
var weathlist = widge.GetElementsByTagName("current_conditions");
foreach (XmlNode node in weathlist)
{
    City.Text = ("Brisbane");
    CurCond.Text = (node.SelectSingleNode("condition").Attributes["data"].Value);
    Wimage.ImageUrl = ("http://www.google.com/" + node.SelectSingleNode("icon").Attributes["data"].Value);
    Temp.Text = (node.SelectSingleNode("temp_c").Attributes["data"].Value + "°C");
}
As I said, I am able to pull the required data from the XML file and display it, however if the page is refreshed or a current session is still active, I receive the following error:
WebException was unhandled by user code - The remote server returned
an error: 403 Forbidden Exception.
I'm wondering whether this could be due to some kind of access limitation placed on that particular XML file?
Further research and adaptation of suggestions
As stated below, this is by no means best practice, but I've included the catch I now use for the exception. I run this code on Page_Load, so I just do a post-back to the page. I haven't noticed any problems since. Performance-wise I'm not overly concerned: I haven't noticed any increase in load time, and this solution is temporary, since this is all for testing purposes. I'm still in the process of moving to Yahoo's Weather API.
try
{
    XmlDocument widge = new XmlDocument();
    widge.Load("https://www.google.com/ig/api?weather=Brisbane/dET7zIp38kGFSFJeOpWUZS3-");
    var list2 = widge.GetElementsByTagName("current_conditions");
    foreach (XmlNode node in list2)
    {
        City.Text = ("Brisbane");
        CurCond.Text = (node.SelectSingleNode("condition").Attributes["data"].Value);
        Wimage.ImageUrl = ("http://www.google.com/" + node.SelectSingleNode("icon").Attributes["data"].Value);
        Temp.Text = (node.SelectSingleNode("temp_c").Attributes["data"].Value + "°C");
    }
}
catch (WebException exp)
{
    if (exp.Status == WebExceptionStatus.ProtocolError && exp.Response != null)
    {
        var webres = (HttpWebResponse)exp.Response;
        if (webres.StatusCode == HttpStatusCode.Forbidden)
        {
            Response.Redirect("ithwidgedev.aspx");
        }
    }
}
Google article illustrating API error handling
Google API Handle Errors
Thanks to:
https://stackoverflow.com/a/12011819/1302173 (Catch 403 and recall)
https://stackoverflow.com/a/11883388/1302173 (Error Handling and General Google API info)
https://stackoverflow.com/a/12000806/1302173 (Response Handling/json caching - Future plans)
Alternative
I found this great open source alternative recently
OpenWeatherMap - Free weather data and forecast API
This is related to a change / outage of the service. See: http://status-dashboard.com/32226/47728
I have been using Google's Weather API for over a year to feed a phone server so that the PolyCom phones receive a weather page. It ran error-free for over a year. As of August 7th, 2012, there have been frequent intermittent 403 errors.
I hit the service once per hour (as has always been the case), so I don't think request frequency is the issue. More likely the intermittent nature of the 403 is related to the partial roll-out of a configuration change or a CDN change at Google.
The Google Weather API isn't really a published API. It was an internal service apparently designed for use on iGoogle, so the level of support is uncertain. I tweeted @googleapis yesterday and received no response.
It may be better to switch to a promoted weather API such as:
WUnderground Weather or
Yahoo Weather.
I added the following 'unless defined' error-handling Perl code yesterday to cope with this, but if the problem persists I will switch to a more fully supported service:
my $url = "http://www.google.com/ig/api?weather=" . $ZipCode ;
my $tpp = XML::TreePP->new();
my $tree = $tpp->parsehttp( GET => $url );
my $city = $tree->{xml_api_reply}->{weather}->{forecast_information}->{city}->{"-data"};
unless (defined($city)) {
print "The weather service is currently unavailable. \n";
open (MYFILE, '>/home/swarmp/public_html/status/polyweather.xhtml');
print MYFILE qq(<?xml version="1.0" encoding="utf-8"?>\n);
print MYFILE qq(<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "xhtml11.dtd">\n);
print MYFILE qq(<html xmlns="http://www.w3.org/1999/xhtml">\n);
print MYFILE qq(<head><title>Weather is Unavailable!</title></head>\n);
print MYFILE qq(<body>\n);
print MYFILE qq(<p>\n);
print MYFILE qq(The weather service is currently unavailable from the data vendor.\n);
print MYFILE qq(</p>\n);
print MYFILE qq(</body>\n);
print MYFILE qq(</html>\n);
close MYFILE;
exit(0);
}...
This is by no means a best practice, but I use this API heavily in some WP7 and Metro apps. I handle this by catching the exception (most of the time a 403) and simply re-calling the service inside the catch; if there is an error on Google's end it's usually brief and only results in 1 or 2 additional calls.
That's the same thing we found out.
Compare the request headers in a bad request and a working request. The working request includes cookies. But where do they come from?
Delete all your Google cookies from the browser. The weather API call will no longer work in your browser. Browse to google.com and then to the weather API, and it will work again.
Google checks the cookies to block repeated API calls. Getting the cookies once before handling all weather API requests will fix the problem. The cookies expire in one year; I assume you will restart your application more often than once a year, so you will get a new one. Getting cookies for each request leads to the same problem: too many different requests.
One tip: weather does not change often, so cache the weather information (for maybe an hour). That will reduce time-consuming operations such as requests!
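A sketch of the cookie approach in C#, assuming a shared CookieContainer primed once against google.com and reused for every weather call (the URLs are the ones from the question):

```csharp
using System;
using System.IO;
using System.Net;

class WeatherClient
{
    // Shared cookie container: primed once, reused for all weather calls.
    private static readonly CookieContainer Cookies = new CookieContainer();

    static void Prime()
    {
        // One-time request to google.com to collect the cookies.
        var req = (HttpWebRequest)WebRequest.Create("https://www.google.com/");
        req.CookieContainer = Cookies;
        using (req.GetResponse()) { }
    }

    static string GetWeatherXml(string zipCode)
    {
        var req = (HttpWebRequest)WebRequest.Create(
            "http://www.google.com/ig/api?weather=" + zipCode);
        req.CookieContainer = Cookies; // reuse the primed cookies
        using (var resp = req.GetResponse())
        using (var reader = new StreamReader(resp.GetResponseStream()))
            return reader.ReadToEnd();
    }
}
```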
I found that if you try the request in a clean browser (like a new incognito window in Chrome), the Google weather service works. Possibly a cookie problem?