JSON vs XML - Does XML require a 'local proxy'? What is that? - c#

I'm reading this article that compares XML to JSON, and in the comments section, a user mentions the need to use a "local proxy" to access XML.
Can someone explain what a local proxy means in this context? I'm assuming he means JavaScript, but I'm open to understanding what parsers are available in other languages (C#, etc.).

JavaScript has a Same-Origin Policy, which keeps you from accessing content from other domains. This prevents the XMLHttpRequest object from retrieving the contents of an XML file hosted on another domain.
A local proxy is just a simple file on your own domain that re-routes the request to the other domain and fetches the content for you. Because the browser only ever talks to your domain, the same-origin policy is satisfied.
The reason JSON does not run into this restriction is that JavaScript, image, and CSS files can be referenced from other domains. Because script files can be loaded cross-domain, we can use JSONP (JSON with Padding) to get the content.
Most people agree that JSONP is not secure, since arbitrary content can be injected into the JavaScript file. You just have to trust your source not to inject any bad content (ads, popups, tracking scripts, etc.) into the web page.
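A minimal sketch of such a local proxy in C#, written as an ASP.NET generic handler (the handler name and the remote URL are placeholders): the page's XMLHttpRequest calls this endpoint on your own domain, and the server fetches the cross-domain XML on its behalf.

using System.Net;
using System.Web;

// A "local proxy": an endpoint on your own domain that fetches the remote XML
// server-side, where the browser's same-origin policy does not apply.
public class XmlProxy : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        using (var client = new WebClient())
        {
            // Placeholder URL; the real target is whatever cross-domain feed you need.
            string xml = client.DownloadString("http://otherdomain.example/data.xml");

            context.Response.ContentType = "text/xml";
            context.Response.Write(xml);
        }
    }

    public bool IsReusable { get { return false; } }
}

The page then requests /XmlProxy.ashx with XMLHttpRequest as usual, since that URL lives on its own domain.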

This is related to JSONP (as the user states in his comment), which relies on the fact that a <script src="http://url.com/file"></script> tag will execute whatever the remote source returns, giving the browser a way to retrieve data from remote domains.
I don't like the JSONP term myself, because the same trick works for XML as well, so the user's comment is not quite right. Your server could return something like run('<some xml></some xml>'), and the callback could then use the browser's built-in XML parser to pull out the data it needs; it doesn't have to be JSON.
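To make the "padding" idea concrete, here is a rough server-side sketch in C# (the handler name and callback parameter are invented for illustration) that wraps a payload in a function call so a <script> tag can execute it; the payload could be JSON or, as noted above, XML.

using System.Web;

public class PaddedResponse : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // The caller tells us which client-side function to invoke.
        string callback = context.Request.QueryString["callback"] ?? "run";

        // The payload happens to be XML here; it could just as well be JSON.
        string payload = "<some>xml</some>";

        context.Response.ContentType = "application/javascript";
        context.Response.Write(callback + "('" + payload + "');");
    }

    public bool IsReusable { get { return false; } }
}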

Related

How to redirect to authenticated aspx page through webapi

One of our applications is behind authentication and is used by another team.
Now they want us to expose this aspx page to them through Web API so that they can use it as before, without any authentication.
I have searched everywhere on the internet and found
HttpResponseMessage res = Request.CreateResponse(HttpStatusCode.Redirect);
that can be used.
But with that I am not able to find a way to send an authenticated request, and the CSS files used in the HTML are not returned either.
Web API is not meant to serve HTML and CSS. It's intended to provide data in JSON, XML, or whatever format is required.
Strictly speaking you could return HTML markup as data (XHTML, at least, is XML), but in your case that's not the way to go.
If they want access to certain pages without authentication, they should provide those pages with anonymous access.
I.e. solve it with authorization attributes on the controllers and/or areas.
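A sketch of that attribute approach in ASP.NET MVC (the controller and action names here are invented); if the page is a classic .aspx rather than an MVC view, the equivalent is a <location>/<authorization> override in web.config.

using System.Web.Mvc;

[Authorize] // the rest of the controller still requires authentication
public class ReportsController : Controller
{
    [AllowAnonymous] // this single page is exposed without authentication
    public ActionResult SharedPage()
    {
        return View();
    }
}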

How do I download documents from AtTask?

I'm working on a continuing API project. The current issue at hand is to be able to download my data from the AtTask server in precisely the folder structure they exist in on the AtTask servers. I've got the folder creation working nicely; the data types between Document, Document Folder and Document Version seem to be pretty clear. I am a little disillusioned about the fact that extension isn't in the document object (that I have to refer to the document VERSION for that)... but I can see some of the reason for that from a design perspective.
The issue I'm running into now is that I need to get the file content. I originally thought from the API documentation that I'd be able to get to the file contents the same way the documentation recommends uploading them -- through the handle. Unfortunately, neither document nor docv seems to let me access the handle except to write a new file.
So that leaves the "download URL" as the remaining option. If I build the URL strings from the API calls using my browser, I get a URL like https://attaskURL/document/download?ID=xxxx (and can also get the version ID and such). If I paste the URL into a browser where I'm logged in to the AtTask user interface, it works fine and I can download the file. If, instead, I use my C# code to do so, I get the login page returned as a stream to download instead of my actual file, because I'm not authenticated. I've tried creating a network credential and attaching it to the request with the username and password, but to no avail.
I imagine there's a couple ways to solve this problem -- the easy one being finding a way to "log in" to the download site through code (which doesn't seem to be the usual network credential object in C#) OR find a way to access the file contents through the API.
Appreciate your thoughts!
It looks like you can use the download URL if you put a session id in the URL. The details on getting a session id are here (basically just call login and a session id is returned in JSON):
http://developers.attask.com/api-docs/#Authentication
Then cram it on the end of your document download URL:
https://yourcompany.attask-ondemand.com/document/download?ID=xxxx&sessionID=abc1234
I've given this a quick test and I'm able to access a document.
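Roughly, the C# side could look like the sketch below. The login endpoint path and the shape of the JSON response are assumptions based on the linked documentation, so treat them as placeholders rather than a verified recipe.

using System.Collections.Generic;
using System.Net;
using System.Web.Script.Serialization;  // reference System.Web.Extensions

class AtTaskDownload
{
    static void Main()
    {
        string baseUrl = "https://yourcompany.attask-ondemand.com";

        using (var client = new WebClient())
        {
            // 1. Log in; the session id comes back in the JSON response.
            //    (Endpoint path and field names are assumptions.)
            string loginJson = client.DownloadString(
                baseUrl + "/attask/api/login?username=me@example.com&password=secret");

            var serializer = new JavaScriptSerializer();
            var result = (Dictionary<string, object>)serializer.DeserializeObject(loginJson);
            var data = (Dictionary<string, object>)result["data"];
            string sessionId = (string)data["sessionID"];

            // 2. Cram the session id onto the end of the download URL.
            client.DownloadFile(
                baseUrl + "/document/download?ID=xxxx&sessionID=" + sessionId,
                "downloaded-document.bin");
        }
    }
}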
You can use the downloadURL and a sessionID IF you are not using SAML authentication.
I have tried it both ways and using SAML will redirect you to the login page.

Handling Authentication for a File Display using a Web Service

This is my first time developing this kind of system, so many of these concepts are very new to me. Any and all help would be appreciated. I'll try to sum up what I'm doing as efficiently as possible.
Background: I have a web application running AngularJS with Bootstrap. The app communicates with the server and DB through a web service programmed using C#. On the site, users can upload files and reference them later using direct links. There's no restriction to file type (yet), so just about anything is allowed.
My Goal: Having direct links creates a big security problem for me, since the documents/images are supposed to be private data. What I would prefer to do is validate a user's credentials when the link is clicked, then load the file in the browser using a more generic url path.
--Example--
"mysite.com/attachments/1" ---> (Image)
--instead of--
"mysite.com/data/files/importantImg.jpg"
Where I'm At: Not very far. My first thought was to add a page that sends the server request and receives a file byte stream along with mime type that I can reassemble and present to the user. However, I have no idea if this is possible using a web service that sends JSON requests, nor do I have a clue about how the reassembling process would work client-side.
Like I said, I'll take any and all advice. I'd love to learn more about this subject for future projects as well, but for now I just need to be pointed in the right direction.
Your first thought is correct. For it, you need to use the Response object, and more specifically its AddHeader and Write methods. Of course this will be a separate page that only handles file downloads, so it will sit perfectly fine alongside your JSON web service.
I don't think you want to do this with a web service. Just use a regular IHttpHandler to perform the validation and return the data. So you would have the URL "attachments/1" get rewritten to "attachments/download.ashx?id=1". When you've verified access, write the data to the response stream. You can use the Content-Disposition header to set the file name.
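A rough sketch of that handler (the authorization check and the id-to-path lookup are placeholders for your own logic):

using System.IO;
using System.Web;

public class DownloadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string id = context.Request.QueryString["id"];

        // Validate the caller before handing anything back.
        if (!context.User.Identity.IsAuthenticated /* || !UserMayRead(id) */)
        {
            context.Response.StatusCode = 403;
            return;
        }

        // Map the opaque id to the real file server-side, so the physical
        // location ("~/data/files/importantImg.jpg") is never exposed in a URL.
        string path = context.Server.MapPath("~/data/files/importantImg.jpg");

        context.Response.ContentType = "image/jpeg";
        context.Response.AddHeader("Content-Disposition",
            "inline; filename=" + Path.GetFileName(path));
        context.Response.WriteFile(path);
    }

    public bool IsReusable { get { return false; } }
}

With URL rewriting in place, "mysite.com/attachments/1" ends up at "attachments/download.ashx?id=1" and the browser just sees the image.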

How should I handle users browsing to pages in my site meant only for AJAX? Should I ever use GET?

Similar questions have been asked about the nature of when to use POST and when to use GET in an AJAX request
Here:
What are the advantages of using a GET request over a POST request?
and here: GET vs. POST ajax requests: When and how to use either?
However, I want to make it clear that that is not exactly what I am asking. I get idempotence, sensitive data, the ability for browsers to be able to try again in the event of an error, and the ability for the browser to be able to cache query string data.
My real scenario is such that I want to prevent my users from being able to simply enter in the URL to my "Compute.cshtml" file (i.e. the file on the server that my jQuery $.ajax function posts to).
I am in a WebMatrix C#.net Web Pages environment and I have tried preceding the file name with an underscore (_), but apparently an AJAX request falls under the same restriction the underscore is designed to enforce (blocking direct requests), so it, of course, breaks the request.
So if I use POST I can simply use this logic:
if (!IsPost) // if this is not a post...
{
    Response.Redirect("~/"); // ...redirect back to the home page.
}
If I use GET, I suppose I can send additional data like a string containing the value "AccessGranted" and check it on the other side to see if it equals this value and redirect if not, but this could be easily duplicated through typing in the address bar (not that the data is sensitive on the other side, but...).
Anyway, I suppose I am asking if it is okay to always use POST to handle this logic or what the appropriate way to handle my situation is in regards to using GET or POST with AJAX in a WebMatrix C#.net web-pages environment.
My advice is, don't try to stop them. It's harmless.
You won't have direct links to it, so it won't really come up. (You might want your robots.txt to exclude the whole /api directory, for Google's sake).
It is data they have access to anyway (otherwise you need server-side trimming), so you can't be exposing anything dangerous or sensitive.
The advantages in using GETs for GET-like requests are many, as you linked to (caching, semantics, etc)
So what's the harm in having that url be accessible via direct browser entry? They can POST directly too, if they're crafty enough, using Fiddler "compose" for example. And having the GETs be accessible via url is useful for debugging.
EDIT: See sites like http://www.robotstxt.org/orig.html for lots of details, but a robots.txt that excluded search engines from your web services directory called /api would look like this:
User-agent: *
Disallow: /api/
Similar to IsPost, you can use IsAjax to determine whether the request was initiated by the XMLHttpRequest object in most browsers.
if (!IsAjax)
{
    Response.Redirect("~/WhatDoYouThinkYoureDoing.cshtml");
}
It checks the request for an X-Requested-With header with the value XMLHttpRequest, or for an item in the Request collection with the key X-Requested-With and the value XMLHttpRequest.
One way to detect whether a request came from within a page at all (rather than from a URL typed directly into the address bar) is to check for the presence of the Referer header. Directly typed URLs won't send a referrer, but you still won't be able to differentiate an AJAX call from a simple anchor link.
(Just keep in mind that some browsers don't send the header for XHR requests.)
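In WebMatrix terms that check would look something like this (a heuristic only, for the reasons above):

if (Request.UrlReferrer == null) // no referrer: most likely typed into the address bar
{
    Response.Redirect("~/");
}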

Raw SOAP data with WebServices in C#

Where can I find the raw/object data of a SOAP request in C# when using Web Services?
I can't find it anywhere. Shouldn't it be available in the HttpContext.Current.Request object?
Shouldn't it be available in the HttpContext.Current.Request object?
No, it shouldn't.
What are you trying to accomplish? If you just want to see that data so you can log it, or as an aid to debugging, then see the example in the SoapExtension class. It's a working sample of an extension that can log input and output as XML. I've used a modified version of it myself.
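A heavily trimmed sketch of that idea (the full MSDN sample does more bookkeeping): the extension chains in its own stream so it can copy the raw SOAP XML on the way in and on the way out.

using System;
using System.IO;
using System.Web.Services.Protocols;

public class TraceExtension : SoapExtension
{
    private Stream oldStream;
    private Stream newStream;

    // Hand the framework our own stream so we can see what flows through it.
    public override Stream ChainStream(Stream stream)
    {
        oldStream = stream;
        newStream = new MemoryStream();
        return newStream;
    }

    public override object GetInitializer(LogicalMethodInfo methodInfo,
                                          SoapExtensionAttribute attribute) { return null; }
    public override object GetInitializer(Type serviceType) { return null; }
    public override void Initialize(object initializer) { }

    public override void ProcessMessage(SoapMessage message)
    {
        switch (message.Stage)
        {
            case SoapMessageStage.BeforeDeserialize:
                // Incoming message: copy the raw XML off the wire, log it,
                // and leave it rewound for the framework to deserialize.
                oldStream.CopyTo(newStream);
                Log(newStream);
                break;
            case SoapMessageStage.AfterSerialize:
                // Outgoing message: log the raw XML, then push it to the wire.
                Log(newStream);
                newStream.CopyTo(oldStream);
                break;
        }
    }

    private static void Log(Stream stream)
    {
        stream.Position = 0;
        string rawXml = new StreamReader(stream).ReadToEnd();
        System.Diagnostics.Debug.WriteLine(rawXml);
        stream.Position = 0;
    }
}

You still need to register the extension (via a SoapExtensionAttribute or in web.config), as the documented sample shows.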
If you're just looking to debug your web service, then you can install Fiddler, and that allows you to inspect the data sent to and from your web service.
It sounds like you're going to have to go lower level on your implementation if you want to see the raw XML. Check out the generic handler (ASHX extension). This will allow you to deal with the request/response streams directly. It's very low level, but gives you full control over the service lifecycle.
I found
Request.Params[null]
refers to the RAW data posted to the page in C# ASP.NET.
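If you'd rather not rely on that trick, here is a more explicit sketch of reading the raw request body yourself (not specific to any particular service implementation):

string rawBody;
using (var reader = new System.IO.StreamReader(HttpContext.Current.Request.InputStream))
{
    rawBody = reader.ReadToEnd(); // the raw XML posted to the page
}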
