I'm trying to implement punchout catalogs on our e-commerce site. Honestly, the cXML documentation is a mess, and all the code examples are in JavaScript and/or VB.NET (I use C# and would rather not have to translate). Does anyone out there have examples or samples of how to receive the PunchOutSetupRequest XML and then send out the PunchOutSetupResponse XML using C#? I've been unable to find anything on the interwebs (I've been looking for two days now)...
I'm hoping I can just do this inside an ActionResult (vs. a 'launch page' as suggested).
I'm a complete noob at punchouts and could really use some help here. The bosses are being pretty pushy, so any assistance, or suggestions as to how to make this work, would be greatly appreciated.
I apologize to all for the vagueness of the question (request).
This isn't trivial, but this should get you started.
You'll need three generic handlers (.ashx): Setup, Start, and Order.
Setup and Order will receive an HTTP POST with content-type "text/xml". Look at HttpRequest.InputStream to get the XML into a string, then use LINQ-to-XML to dig out the data you want. Your HTTP response to both of these will also be content-type "text/xml" and UTF-8 encoded, returning the cXML as documented...use LINQ-to-XML to produce that as well.
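To make the receive side concrete, here's a minimal sketch of that step inside a generic handler. This is a sketch under assumptions, not the official way: element names such as SharedSecret and BuyerCookie are standard cXML, but verify the exact paths against the spec.

// Sketch: receiving the posted cXML inside ProcessRequest
// (requires System.IO, System.Linq, System.Text, System.Xml, System.Xml.Linq)
public void ProcessRequest(HttpContext context)
{
    string cxml;
    using (var reader = new StreamReader(context.Request.InputStream, Encoding.UTF8))
    {
        cxml = reader.ReadToEnd();
    }

    // cXML carries a DOCTYPE, so parse with DTD processing switched off
    var settings = new XmlReaderSettings { DtdProcessing = DtdProcessing.Ignore };
    XDocument doc;
    using (var xmlReader = XmlReader.Create(new StringReader(cxml), settings))
    {
        doc = XDocument.Load(xmlReader);
    }

    string sharedSecret = (string)doc.Descendants("SharedSecret").FirstOrDefault();
    string buyerCookie = (string)doc.Descendants("BuyerCookie").FirstOrDefault();
    // ...validate credentials, then build and write the cXML reply (see the sketch after the next paragraph)...
}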
The Setup handler will need to validate credentials and return a URL with a unique QueryString token pointing to the Start handler. Do not expect session persistence between Setup and Start, because they're not from the same caller. This handler will need to create an application object for the token and associated data you extracted from the cXML.
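Continuing inside that same ProcessRequest, here's a hedged sketch of building the PunchOutSetupResponse with LINQ-to-XML. The DTD version, payloadID format, host name, and token scheme are all placeholders to adjust for whatever your buyer's system expects.

// Sketch of the Setup handler's reply
string token = Guid.NewGuid().ToString("N");
context.Application[token] = buyerCookie; // or a richer object holding everything pulled from the PunchOutSetupRequest
string startUrl = "https://shop.example.com/punchout/start.ashx?token=" + token;

var reply = new XDocument(
    new XDocumentType("cXML", null, "http://xml.cxml.org/schemas/cXML/1.2.014/cXML.dtd", null),
    new XElement("cXML",
        new XAttribute("payloadID", Guid.NewGuid() + "@shop.example.com"),
        new XAttribute("timestamp", DateTimeOffset.Now.ToString("yyyy-MM-ddTHH:mm:sszzz")),
        new XElement("Response",
            new XElement("Status", new XAttribute("code", "200"), new XAttribute("text", "OK")),
            new XElement("PunchOutSetupResponse",
                new XElement("StartPage",
                    new XElement("URL", startUrl))))));

context.Response.ContentType = "text/xml";
context.Response.ContentEncoding = Encoding.UTF8;
context.Response.Write("<?xml version=\"1.0\" encoding=\"UTF-8\"?>" + reply.ToString());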
The Start handler will be called as a simple GET, and will need to match the token in the QueryString to the appropriate application object, copy that data to the session, and then do a response.redirect to whatever page in your site you want the buyer to land on.
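A rough sketch of that Start handler under the same assumptions (the query string key and landing page are placeholders; note a generic handler must implement IRequiresSessionState before it can touch Session):

// Sketch: match the token, copy the punchout data to session, redirect the buyer into the site
public class StartHandler : IHttpHandler, System.Web.SessionState.IRequiresSessionState
{
    public void ProcessRequest(HttpContext context)
    {
        string token = context.Request.QueryString["token"]; // key name is a placeholder
        object punchoutData = string.IsNullOrEmpty(token) ? null : context.Application[token];
        if (punchoutData == null)
        {
            context.Response.StatusCode = 403; // unknown or expired token
            return;
        }
        context.Application.Remove(token);              // tokens are one-shot
        context.Session["PunchoutData"] = punchoutData;
        context.Response.Redirect("~/Catalog", false);  // wherever the buyer should land
    }

    public bool IsReusable { get { return false; } }
}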
Once they populate their cart with some things and are ready to check out, you'll take them to a page that has an embedded form (not to be confused with an ASP.NET form that posts back to your server) and a submit button (again, not an ASP.NET button). From your Setup handler you captured a URL to point this form's POST at, and within the form you'll have a hidden input tag with the UTF-8 encoded cXML punchout order injected as the value, produced with LINQ-to-XML. I highly recommend Base64 encoding that value to avoid ASP.NET messing with the tags it contains during rendering, and naming the hidden input "cxml-base64" per the documentation (a sketch follows). The result is that the form is client-side POSTed to your customer's server instead of yours, their server extracts the cXML punchout order, and that ends your visitor's session.
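The encoding side of that is small. Here's a sketch, with the rendered form shown as a comment; the variable names are made up, while the cxml-base64 field name and posting to the URL captured during Setup come from the cXML docs.

// Sketch: UTF-8 encode the cXML punchout order built with LINQ-to-XML, then Base64 it for the hidden field
string punchOutOrderCxml = punchOutOrderDoc.ToString(SaveOptions.DisableFormatting); // punchOutOrderDoc: your XDocument
string cxmlBase64 = Convert.ToBase64String(Encoding.UTF8.GetBytes(punchOutOrderCxml));
// Rendered on the checkout page, roughly:
//   <form method="post" action="<the BrowserFormPost URL captured during Setup>">
//     <input type="hidden" name="cxml-base64" value="...the Base64 string above..." />
//     <input type="submit" value="Return cart to procurement system" />
//   </form>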
The Order handler will receive a cXML OrderRequest, and just like Setup, you'll dump it to a string and then use LINQ-to-XML to parse it and act upon it. Again you'll get credentials to verify, possibly a credit card to process, and the order items, ship-to, etc. Note that the OrderRequest may not contain all the items that were in the punchout order, because the system on your customer's side may remove items or even change item quantities before submitting the final OrderRequest to you. The OrderRequest could come back to you minutes, days, or weeks after the punchout order is posted to them, or never...don't bother storing the cart data in hopes of matching it to the order later.
Last note...the buyer may be experiencing your site in an iframe embedded in their web-based procurement UI, so design accordingly.
If you need more info, reply to this and I'll get back.
Update...Additional considerations:
Discuss with the buyer how they want fault handling to flow, particularly with orders, because you have a choice: 1) exhaustively evaluate everything in the cXML you receive and return response codes other than 200 if anything is wrong, or 2) always return a 200 Success and deal with any issues out of band or by generating a ConfirmationRequest that rejects the order. My experience is that a mix of the two works best. Certainly you should return a non-200 if the credentials fail, but you may not want (or be able) to run a credit card or validate stock availability inline. Your buyer's system may not be able to cope with dozens of possible faults, and/or may not show your fault messages to the user so they can make corrections. I've seen systems that flat-out discard any non-200 response code and just blindly retry the submission on an interval for hours or days until a sanity check gives up, while others handle response codes within certain ranges differently; for example, a 4xx invokes a retry, while a 5xx is treated as fatal. Remember that Setup and Order are not coming directly from the user...their procurement system is generating those internally.
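If you do go the non-200 route for some failures, the reply is the same generic cXML Response shape as the setup reply, just with a different Status. A sketch only; the codes and text are examples, and which ones your buyer's system actually respects is exactly what you need to agree on with them:

// Sketch: a generic cXML failure acknowledgement (e.g. for bad credentials)
XElement failure =
    new XElement("Response",
        new XElement("Status",
            new XAttribute("code", "401"),
            new XAttribute("text", "Unauthorized"),
            "SharedSecret does not match."));
// Wrap it in the same <cXML payloadID="..." timestamp="..."> envelope as the setup response before writing it out.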
Update...answering the comment about how to test things...
You'd use the same method as you will for generating outbound ConfirmationRequest, ShipNoticeRequest, and InvoiceDetailRequest, all of which generally are produced on your side after receiving an OrderRequest from your customer's procurement system.
Start with the LINQ-to-XML documentation for an example of crafting your outgoing cXML (the "Creating XML Trees" section). Combine that example with this bit of code:
// Requires: using System; using System.Text; using System.Xml; using System.Xml.Linq;
StringBuilder output = new StringBuilder();
XmlWriterSettings objXmlWriterSettings = new XmlWriterSettings();
objXmlWriterSettings.Indent = true;
objXmlWriterSettings.NewLineChars = Environment.NewLine;
objXmlWriterSettings.NewLineHandling = NewLineHandling.Replace;
objXmlWriterSettings.NewLineOnAttributes = false;
objXmlWriterSettings.Encoding = new UTF8Encoding();
using (XmlWriter objXmlWriter = XmlWriter.Create(output, objXmlWriterSettings))
{
    // Build the XML tree with LINQ-to-XML, then write it out through the XmlWriter
    XElement root = new XElement("Root",
        new XElement("Child", "child content")
    );
    root.Save(objXmlWriter);
}
Console.WriteLine(output.ToString());
So at this point the StringBuilder (output) has your whole cXML, and you need to POST it someplace. Your web application project, started with F5 and a default.aspx page, will be listening on localhost and some port (you'll see it in the URL it opens). Separately, perhaps using VS Express for Desktop, you have the above code in a console app that you can run to do the POST using something like this:
// Requires: using System; using System.IO; using System.Net; using System.Text; using System.Xml;
HttpWebRequest objRequest = (HttpWebRequest)WebRequest.Create("http://localhost:12345/handler.ashx");
objRequest.Method = "POST";
objRequest.UserAgent = "Some User Agent";
objRequest.ContentType = "text/xml";
byte[] objPayload = Encoding.UTF8.GetBytes(output.ToString());
objRequest.ContentLength = objPayload.Length; // byte count, not character count
using (Stream objRequestStream = objRequest.GetRequestStream())
{
    objRequestStream.Write(objPayload, 0, objPayload.Length);
}
WebResponse objWebResponse = objRequest.GetResponse();
XmlReaderSettings objXmlReaderSettings = new XmlReaderSettings();
objXmlReaderSettings.DtdProcessing = DtdProcessing.Ignore; // cXML carries a DOCTYPE; don't try to resolve the DTD
XmlReader objXmlReader = XmlReader.Create(objWebResponse.GetResponseStream(), objXmlReaderSettings);
// Pipe the stream to a higher-level stream reader with the required encoding format.
MemoryStream objMemoryStream2 = new MemoryStream();
XmlWriter objXmlWriter2 = XmlWriter.Create(objMemoryStream2, objXmlWriterSettings);
objXmlWriter2.WriteNode(objXmlReader, true);
objXmlWriter2.Flush();
objXmlWriter2.Close();
objWebResponse.Close();
// Reset the current position to the beginning so we can read it all below.
objMemoryStream2.Position = 0;
StreamReader objStreamReader = new StreamReader(objMemoryStream2, Encoding.UTF8);
Console.WriteLine(objStreamReader.ReadToEnd());
objStreamReader.Close();
Since your handler should be producing cXML you'll see that spat out in the console. If it pukes, you'll get a big blob of debug mess in the console, which of course will help you fix whatever is broken.
Pardon the verbosity in the variable names; it's done to try to make things clear.
My boss asked how long it would take to build a client to access a web service that will send and receive some basic data and embedded documents. I've just started playing with it to see what's involved. I have been doing web and desktop development for about 20 years but have literally never touched a web service, so with that said, I'm at the extreme newb level.
So far I've used the WSDL to create ServiceReference1, and I can see the methods in IntelliSense, but I don't have the first clue where to start with calling the methods, passing parameters, and consuming the response. I feel stupid because I'm sure it's pretty simple, but just flailing at the code and looking for on-point examples has gotten me nowhere. Usually I can find something through Google in minutes that is exactly on point, but I'm not having any luck here. I would appreciate a push in the right direction.
So, basic questions: what's the proper way to make the calls? How and where do I land the returned data? How do I add parameters?
Here is my first attempt. This gets a simple list and has no parameters. The result in Fiddler returns data, but there is a runtime type mismatch error which I think is caused by some stray characters leading the response, which appear to be caused by chunking, whatever that is. The response starts with 1ffs every time and then contains the remainder of the XML. Secondarily, I need to get the list into a DataSet or some other container, but I was hoping to just be able to step into the code and see a result.
ServiceReference1.FilingInfoClient webservice = new FilingInfoClient();
ServiceReference1.courtListRequest cr = new ServiceReference1.courtListRequest();
ServiceReference1.courtListResponse lr = new ServiceReference1.courtListResponse();
lr = webservice.getCourtList(cr);
This is essentially the same but takes a date param. When I run it, Fiddler shows the parameter is not being sent. There are no other errors, but I'm sure that's only because it exploded immediately.
ServiceReference1.FilingInfoClient webservice = new FilingInfoClient();
ServiceReference1.messageListRequest mr = new ServiceReference1.messageListRequest();
ServiceReference1.MessageListResponse mlr = new ServiceReference1.MessageListResponse();
mr.latestMessagePullTimestamp = DateTime.Now.AddDays(-5);
mr.endTimestamp = DateTime.Now;
mlr.latestMessagePullTimestamp = DateTime.Now;
mlr = webservice.getMessageList(mr);
This is the info provided by the web service host
<x:Envelope xmlns:x="http://schemas.xmlsoap.org/soap/envelope/"
            xmlns:urn1="urn:green:partner:ws:schema:FilingInfo">
  <x:Header/>
  <x:Body>
    <urn1:getcourtList>
      <urn1:courtListRequest/>
    </urn1:getcourtList>
  </x:Body>
</x:Envelope>

<x:Envelope xmlns:x="http://schemas.xmlsoap.org/soap/envelope/"
            xmlns:urn1="urn:green:partner:ws:schema:FilingInfo">
  <x:Header/>
  <x:Body>
    <urn1:getMessageList>
      <urn1:messageListRequest>
        <urn1:latestMessagePullTimestamp>?</urn1:latestMessagePullTimestamp>
      </urn1:messageListRequest>
    </urn1:getMessageList>
  </x:Body>
</x:Envelope>
We've got request and response pairs for each operation in the web service. Think of it like this: request => input, response => output, operation => method.
The web service is an API. Things that consume APIs are clients. The WSDL describes the API's operations and their requests and responses. Tools like Visual Studio know how to read WSDLs and build C# code to perform those (SOAP) operations under the hood; this is the client (here, FilingInfoClient). Visual Studio will also generate classes representing each request and response.
This allows for a familiar programming experience: you call a method, give it some input, and it returns some output.
using (var client = new FilingInfoClient())
{
    var request = new courtListRequest
    {
        //TODO fill in relevant properties
    };
    var response = client.getCourtList(request);
}
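On the second snippet in the question (the date parameter not being sent): with XmlSerializer-generated proxies, optional value-type elements often come with a companion bool ...Specified property that must be set to true, or the element is silently omitted from the SOAP body. Whether your generated messageListRequest class has one is worth checking in the generated code; here's a sketch assuming it does (the Specified lines are commented out because their existence is an assumption):

using (var client = new FilingInfoClient())
{
    var request = new messageListRequest();
    request.latestMessagePullTimestamp = DateTime.Now.AddDays(-5);
    // request.latestMessagePullTimestampSpecified = true;  // only if the generated class exposes this flag
    request.endTimestamp = DateTime.Now;
    // request.endTimestampSpecified = true;                // ditto
    var response = client.getMessageList(request);
    // response is the deserialized messageListResponse; there's no need to new one up beforehand
}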
I am currently writing a Discord bot in C#. I have most of the bot done, but for this next update I want to add the capability of checking whether a streamer has gone live. Currently I am polling the Twitch API, pulling the JSON it returns, and checking whether the JSON "stream" object is null or not. But it takes 3-5 minutes after the streamer goes live before it finally sees that "stream" is not null, even though I poll the JSON every 5 seconds. Is there any way to do this more efficiently? My code is below:
private const string Url = "https://api.twitch.tv/kraken/streams/streamer";

var request = (HttpWebRequest)WebRequest.Create(Url);
request.Method = "GET";
request.Timeout = 12000;
// v5 API versioning is requested via the Accept header (ContentType has no effect on a body-less GET)
request.Accept = "application/vnd.twitchtv.v5+json";
request.Headers.Add("Client-ID", "ID");
using (var s = request.GetResponse().GetResponseStream())
{
    using (var sr = new System.IO.StreamReader(s))
    {
        var jsonObject = JObject.Parse(sr.ReadToEnd());
        var jsonStream = jsonObject["stream"];
        // twitch channel is online if "stream" is present and not null.
        LastTwitchStatus = jsonStream != null && jsonStream.Type != JTokenType.Null;
    }
}
Looks like it's intended behavior of the Twitch API.
They are definitely more focused on pushing their horsepower to streaming, not on immediate data provision through the API.
While there may be a limitation like this, you can try scraping the page if timing is crucial and you don't want to wait 3-5 minutes for something that has already happened.
One idea is to poll the page every 5 seconds or so and then query the HTML document for something characteristic that distinguishes an offline channel from an online one.
Idea for scraping in JavaScript (just replicate it in .NET):
For example, I have tried to query user pages (https://www.twitch.tv/username) in JavaScript with:
$(".recent-past-broadcast").length > 0
and for a user that is not broadcasting it yields true, while for a broadcasting user it yields false. It might be a problem for a user with no recent broadcast history, though.
You can also try checking the videos page (https://www.twitch.tv/username/videos/all) for their live indicator, like:
$(".cn-livestatus__circle").length > 0
It will yield true for streaming user and false for the one that does not stream (even if he/she is online).
Of course that's least efficient way on doing this and requires lots of download as compared to just polling but... still it seems more up to date than asking API every 5s and still getting actual state delayed by 3-5min.
Just replicate querying like above in .NET and you're there.
You could also mix two approaches and if you see that someone started streaming, just disable page scrapping and swap to only API calls for checking if you're up-to-date still.
Useful tooling for scrapping:
For parsing HTML documents use parsers like AngleSharp to do this in .NET:
https://github.com/AngleSharp/AngleSharp
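A rough .NET equivalent of that querying with AngleSharp might look like the sketch below. The selector and URL are the same guesses as above, and Twitch's markup changes often, so treat the class names as something you'd re-verify:

// Sketch: load the videos page and look for the live indicator (AngleSharp NuGet package)
using System.Threading.Tasks;
using AngleSharp;

class TwitchScrapeCheck
{
    public static async Task<bool> LooksLiveAsync(string userName)
    {
        var config = Configuration.Default.WithDefaultLoader();
        var context = BrowsingContext.New(config);
        var document = await context.OpenAsync("https://www.twitch.tv/" + userName + "/videos/all");
        return document.QuerySelectorAll(".cn-livestatus__circle").Length > 0;
    }
}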
I have an application which is intended to stream videos back from our local DB. I spent a lot of time yesterday attempting to return the data as either a RangeFileContentResult or a RangeFileStreamResult, without success.
In short, when I return the file as either of these two results, I cannot seem to get a video to stream correctly (or play at all).
The request from the browser gets sent with the following headers:
Range: bytes=0-
And the response comes back with, for example, these headers:
Accept-Ranges: bytes
Content-Range: bytes 0-5103295/5103296
In terms of network traffic, I get a series of 206s for partial results, then a 200 at the end (according to Fiddler), which seems correct.
Chrome's network tab disagrees with this and sees an initial request (always 13 bytes, which I assume is a handshake), then a couple more requests which have a status of either cancelled or pending.
As far as I understand, this is more or less correct: 206 - cancel, 206 - cancel, etc. But the video never plays.
If I switch the result from my controller to a FileResult, the video plays in Chrome, IE10 and Firefox, and appears to begin playing before the download is completed (which feels a little like it's streaming! although I suspect it's not).
But with the range result I get nothing in Chrome or IE, and the entire video downloads in one drop in Firefox.
As far as I understood, the RangeFileContentResult should handle responding to the client with a range of bytes to download (which mine doesn't seem to do; it just tells it to get the whole file, as illustrated by the response above), and the client should respond to that, which it doesn't seem to do either.
Does anyone have any thoughts in this area? Specifically:
a) Should RangeFileContentResult be sending a range of bytes back to the client?
b) Is there any way I can explicitly control the range of bytes requested from the client side?
c) Is there any reason or anything I'm doing wrong here which would cause browsers not to load the video at all, when requesting a RangeFileContentResult?
EDIT: Added a diagram to help describe what I'm seeing:
EDIT2: OK, so the plot thickens. Whilst playing around with the RangedFile gubbins, we needed to push another system test version out, and I left the 'RangeFileContentResult' on my controller action as below:
private ActionResult RetrieveVideo(MediaItem media)
{
    return new RangeFileContentResult(
        media.Content,
        media.MimeType,
        media.Id.ToString(),
        DateTime.Now);
}
Rather oddly, this now seems to work as expected on our Azure system test environment, but still not on my local machine. I wonder if there's something IIS-based which works happily on Azure's IIS 8 but not on my local IIS 7.5 instance?
The cause of the issue described here is the value passed to the modificationDate parameter of the RangeFileContentResult constructor:
return new RangeFileContentResult(media.Content, media.MimeType, media.Id.ToString(), DateTime.Now);
This date is used by the RangeFileResult in order to create two headers:
ETag - This header is an identifier used by browser and server to make sure that they are speaking about the same entity.
Last-Modified - This header informs the browser about the last modification date of the entity.
The fact that DateTime.Now is passed every time the browser makes a partial request means the ETag and Last-Modified header values can change before the client gets the whole entity (usually if the entire process takes longer than one second).
In the case described above, the browser is sending an If-Range header with the request. This header tells the server that the entire entity should be re-sent if the entity tag (or modification date; If-Range can carry either of those two values) doesn't match. This is what happens in this case.
The fact that the modification date is "dynamic" may also cause further issues if the client decides to use one of the following headers for validation: If-Modified-Since, If-Unmodified-Since, If-Match, If-None-Match.
The solution in this situation is to keep a modification date in the database with the file, so it stays consistent.
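Applied to the RetrieveVideo action from the question, that just means passing the stored date through. In this sketch, media.LastModified is an assumed column saved alongside the content:

private ActionResult RetrieveVideo(MediaItem media)
{
    return new RangeFileContentResult(
        media.Content,
        media.MimeType,
        media.Id.ToString(),
        media.LastModified); // stable per file, unlike DateTime.Now, so ETag/Last-Modified don't shift mid-download
}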
There is also room for optimization here. Instead of grabbing the whole video from the DB every time a partial request is made, one can either cache it or grab only the relevant part (if the database engine the application is using allows such an operation). Such a mechanism can be implemented as a specialized action result by deriving from RangeFileResult and overriding the WriteEntireEntity and WriteEntityRange methods.
OK, so I didn't have enough time to look at RangeFileResult in detail, but I have just downloaded the RangeFileContentResult source from
RangeFileContentResult
and modified my code so it looks like this:
public ActionResult Movie()
{
    byte[] file = System.IO.File.ReadAllBytes(@"C:\HOME\asp\Java\Java EE. Programming Spring 3.0\01.avi");
    return new RangeFileContentResult(file, "video/x-msvideo", "01.avi", DateTime.Now);
}
and again it works. However, I noticed that when I stop the video I get an exception, and it happens in RangeFileResult:
if (context.HttpContext.Response.IsClientConnected)
{
    WriteEntityRange(context.HttpContext.Response, RangesStartIndexes[i], RangesEndIndexes[i]);
    if (MultipartRequest)
        context.HttpContext.Response.Write("\r\n");
    context.HttpContext.Response.Flush();
}
So you'd better modify the code to handle the case where the user has already disconnected but you are still trying to send them a response, for example:
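One way to guard that block is sketched below. This is just an illustration of the idea, not the library author's fix; catching HttpException around the write is a common way to cope with a client that drops between the IsClientConnected check and the Flush, and how you bail out (break, return, etc.) depends on the surrounding loop over ranges:

try
{
    if (context.HttpContext.Response.IsClientConnected)
    {
        WriteEntityRange(context.HttpContext.Response, RangesStartIndexes[i], RangesEndIndexes[i]);
        if (MultipartRequest)
            context.HttpContext.Response.Write("\r\n");
        context.HttpContext.Response.Flush();
    }
}
catch (HttpException)
{
    // The client went away mid-write; stop sending (e.g. break out of the ranges loop).
}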
Again, technically there's not a big difference whether you pass a byte[] or a Stream, because even when you pass a Stream, the code working with it
// FileStream here is the Stream the RangeFileStreamResult was constructed with
using (FileStream)
{
    FileStream.Seek(rangeStartIndex, SeekOrigin.Begin);
    int bytesRemaining = Convert.ToInt32(rangeEndIndex - rangeStartIndex) + 1;
    byte[] buffer = new byte[_bufferSize];
    while (bytesRemaining > 0)
    {
        int bytesRead = FileStream.Read(buffer, 0, _bufferSize < bytesRemaining ? _bufferSize : bytesRemaining);
        response.OutputStream.Write(buffer, 0, bytesRead);
        bytesRemaining -= bytesRead;
    }
}
again reads the data and puts it into a byte[] array!... So it's up to you!
BUT... I suggest that you pay attention to the content type that you provide!!!
The point is that your browser must be able to handle it, so if you provide something unknown you will definitely have problems. To find your content-type string, please check
mime-types-by-content-type
Again, I just gave it a quick look; if you have problems I will help you later when I get home.
@mofi Please just copy these two files into your MVC project:
RangeFileResult
RangeFileStreamResult
public ActionResult Movie()
{
    var path = new FileStream(@"C:\temp\01.avi", FileMode.Open);
    return new RangeFileStreamResult(path, "video/x-msvideo", "01.avi", DateTime.Now);
}
Now run your project and open it in Chrome (for example: http://youraddress.com:45454/Main/Movie); you should see your file playing in the standard Chrome video player. It's streaming, and you can see that if you put a breakpoint at
return new RangeFileStreamResult(path, "video/x-msvideo", "01.avi", DateTime.Now);
Again, the source is easy to modify if you want to change the buffer size used for streaming!
So let's say I created a feedback form in C#.
It sends the feedback to my PHP page, and my PHP page adds it to my MySQL database.
Code:
private void PostFeed(string Params)
{
    using (WebClient wc = new WebClient())
    {
        wc.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
        wc.Headers["Accept"] = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
        wc.Headers["Accept-Language"] = "en-US,en;q=0.5";
        string HtmlResult = wc.UploadString("http://website/feedtest.php", "POST", Params);
        Console.WriteLine(HtmlResult);
    }
}
On the PHP side I have code that looks similar to:
$name = $_REQUEST['name'];
$email = $_REQUEST['email'];
$desc = $_REQUEST['description'];
// connect
// post result...
// close connection
The question I have is: is there a way to protect against flooding? I understand anyone can just spam/flood it by sending feedback continuously, or even create a third-party app that sends 1000 POST requests per second. I was thinking of implementing some sort of check on the PHP side, for example: if the connection password from the C# app matches, then continue; if not, exit.
Basically, I don't want people to take advantage of the feedback method and spam me.
Can anyone suggest a method? Or should I not even worry about this?
Any help is appreciated.
A typical technique is to have some kind of submissions-per-unit-of-time limit, where you have a last_submitted_at column in a table associated with some kind of identifier. For example, you might associate it with a user if you have a fairly robust user registration system, or with an IP address if you don't.
This is the system Stack Overflow uses if you try to vote, post, or ask questions too often. Each of these has a separate timer, which probably translates to a separate last_X_at column in the database somewhere.
If the last submission time is less than some threshold ago, present an error instead of accepting the submission; a sketch of the check is below.
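A minimal sketch of that check, written in C# for illustration (the real version would live in the PHP endpoint against the last_submitted_at column; the one-minute window and the in-memory store are assumptions):

using System;
using System.Collections.Generic;

static class FeedbackThrottle
{
    static readonly Dictionary<string, DateTime> LastSubmittedAt = new Dictionary<string, DateTime>();
    static readonly TimeSpan MinInterval = TimeSpan.FromMinutes(1);

    // Returns true if this client (user id or IP) submitted too recently and should be rejected.
    public static bool IsThrottled(string clientKey)
    {
        DateTime last;
        if (LastSubmittedAt.TryGetValue(clientKey, out last) && DateTime.UtcNow - last < MinInterval)
        {
            return true;
        }
        LastSubmittedAt[clientKey] = DateTime.UtcNow; // record the accepted submission time
        return false;
    }
}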
I'm trying to put together a small app that will allow me to create events in Facebook. I've already got my Facebook app set up and have successfully tested a post to my feed through the application using the code below.
wc.UploadString("https://graph.facebook.com/me/feed", "access_token=" + AccessToken + "&message=" + Message);
When I try to take things to the next step, I've just hit a brick wall.
The code that I've written is here:
JavaScriptSerializer ser = new JavaScriptSerializer();
wc.UploadString("https://graph.facebook.com/me/events?" + "access_token=" + AccessToken, ser.Serialize(rawevent));
rawevent is a small object I wrote that puts together the elements of an event so I can pass it around my application.
I'm using a similar method with ser.Deserialize to parse the user data coming back from Facebook, so I believe this should work the other way too.
Setting the above code aside for a moment, I also have tried simply putting plain text in there in various formats and with differing levels of parameters, and nothing seems to work.
Is there something wrong with the way I'm approaching this? I've read over everything I could get my hands on, and very few of the samples out there that I could find deal with creating events, and when they do, they're not in C#.
I would appreciate any help on this. If you even just have a clean copy of JSON code that I can look at and see where mine should be tweaked I would appreciate it.
I have included a copy of what the ser.Serialize(rawevent) call produces below:
{"name":"Dev party!","start_time":"1308360696.86778","end_time":"1310952696.86778","location":"my house!"}
EDIT:
Thanks to bronsoja below, I used the code below to successfully post an event to Facebook!
System.Collections.Specialized.NameValueCollection nvctest = new System.Collections.Specialized.NameValueCollection();
nvctest.Add("name", "test");
nvctest.Add("start_time", "1272718027");
nvctest.Add("end_time", "1272718027");
nvctest.Add("location", "myhouse");
wc.UploadValues("https://graph.facebook.com/me/events?" + "access_token=" + AccessToken, nvctest);
All the posting examples in the Graph API docs show using curl -F, which indicates values should be POSTed as normal form data: just key/value pairs, like in your first example.
The error is likely due to sending JSON. If you are using WebClient, you may be able to simply create a NameValueCollection with your data and use WebClient.UploadValues to send the request.
I've recently found that Facebook returns (#100) Invalid parameter when we try to post data and there is already a record on file with the same name. So, for example, if you are creating a FriendList via the API and the name is "foo", submitting another identical request with the same name will immediately return that error.
In testing events you probably deleted the "Dev party!" event after each test, or maybe changed the time since you don't want two events to collide. But I'm wondering: if you duplicated your wc.UploadValues(...) statement just as a test, would you see that error again? You're either deleting your 'test' event or changing names, and may not have noticed that two events with the same name can return this error.
What's really bad here is that the error comes back as an OAuthException, which seems very wrong. This isn't a matter of authentication or authorization; it's purely a data issue.
Facebook Engineers, if I'm right about how this works, it's a bug to return this error under these conditions, and this forum has many examples of related confusion. Please return more appropriate errors.