Authenticating requests to an external REST service - C#

I'm really new to web development, and I don't really have a good grip on the core concepts of the web. However, I've been tasked with writing an ASP.NET application where users can search documents by querying an external RESTful web service. Requests to this REST service must be authenticated by HTTP Basic Authentication.
So far so good: I've been able to query the service using HttpWebRequest and HttpWebResponse, adding the encoded user:pass to the request's Authorization header, deserialize the JSON response, and produce a list of strings with URLs to the PDF documents resulting from the search.
So now I'm programmatically adding HyperLink elements to the page with these URLs:
foreach (string url in urls) {
    HyperLink link = new HyperLink();
    link.Text = url;
    link.NavigateUrl = url;
    Page.Controls.Add(link);
}
The problem is that requests to these documents have to be authenticated with the same HTTP Basic Authentication and the same user:pass as when querying the REST service, and since I'm just creating links for the user to click, and not creating any HttpWebRequest objects, I don't know how to authenticate the request resulting from a user clicking a link.
Any pointers to how I can accomplish this are very much appreciated. Thanks in advance!

You probably want to do the request server-side, as I think you're already doing, and then show the results embedded in your own pages, or just stream the result directly back to the users.
It's a bit unclear what exactly you need (what the links are, what you show the users, etc.), so this is the best suggestion I can make based on the info you give.
Update:
I would create an HttpHandler (an .ashx file in an ASP.NET project), and link to that, with arguments so you can make the request to the REST service and get the correct file, then stream the data directly back to the visitor. Here's a simple example:
public class DocumentHandler : IHttpHandler {
    public Boolean IsReusable {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context) {
        // Get the URL of the document for the REST request,
        // e.g. passed to the handler as a query-string parameter.
        String url = context.Request.QueryString["url"];

        // Make the request to the REST service with the same
        // Basic auth header you already use for the search.
        // Some pseudo-code for you:
        context.Response.ContentType = "application/pdf";
        using (WebClient client = new WebClient()) {
            client.Headers[HttpRequestHeader.Authorization] =
                "Basic " + Convert.ToBase64String(Encoding.UTF8.GetBytes("user:pass"));
            Byte[] buffer = client.DownloadData(url);
            context.Response.OutputStream.Write(buffer, 0, buffer.Length);
        }
        context.Response.End();
    }
}
I hope you can fill in the blanks yourself.
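For the question's loop, the links would then point at this handler instead of at the protected documents directly. A minimal sketch, assuming the handler is registered as DocumentHandler.ashx and reads a url query-string parameter (both names are illustrative):

foreach (string url in urls) {
    HyperLink link = new HyperLink();
    link.Text = url;
    // Route the click through the handler, which adds the Basic auth header.
    link.NavigateUrl = "DocumentHandler.ashx?url=" + HttpUtility.UrlEncode(url);
    Page.Controls.Add(link);
}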

Related

Scraping multiple lists from a website.

I'm currently working on a web scraper for a website that displays a table of data. The problem I'm running into is that the website doesn't sort my searches by state on the first search. I have to do it through the drop-down menu on the second page when it loads. The way I load the first page is with what I believe to be a WebClient POST request. I get the proper html response and can parse through it, but when I request the more filtered search, the html I get back is incorrect compared to the html I see in the Chrome developer tools.
Here's my code
// The website I'm looking at.
public string url = "https://www.missingmoney.com/Main/Search.cfm";
// The POST body for the working search, which doesn't filter by state.
public string myPara1 = "hJava=Y&SearchFirstName=Jacob&SearchLastName=Smith&HomeState=MN&frontpage=1&GO.x=19&GO.y=18&GO=Go";
// The POST body that also filters by state, but doesn't return the correct html that I would need to parse.
public string myPara2 = "hJava=Y&SearchLocation=1&SearchFirstName=Jacob&SearchMiddleName=&SearchLastName=Smith&SearchCity=&SearchStateID=MN&GO.x=17&GO.y=14&GO=Go";
// I save the two html responses in these.
public string htmlResult1;
public string htmlResult2;

public void LoadHtml(string firstName, string lastName)
{
    using (WebClient client = new WebClient())
    {
        client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
        htmlResult1 = client.UploadString(url, myPara1);
        htmlResult2 = client.UploadString(url, myPara2);
    }
}
Just trying to figure out why the first set of parameters works but the second one doesn't.
Thank you for the time you spent looking at this!!!
I simply forgot to add the cookie to the new search. Using Google Chrome or Fiddler you can see the web traffic. All I needed to do was add
client.Headers.Add(HttpRequestHeader.Cookie, "cookie");
to my code right before it uploaded. Doing so gave me the right html response and I can now parse through my data.
@derloopkat pointed it out, credits to that individual!!!
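For completeness, a minimal sketch of how that can look in the LoadHtml method above. It assumes the site issues the session cookie on the first response; note that WebClient does not track cookies automatically, clears its request headers after each call, and that Set-Cookie may carry attributes (Path, HttpOnly, ...) you would need to trim down to plain name=value pairs:

using (WebClient client = new WebClient())
{
    client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
    htmlResult1 = client.UploadString(url, myPara1);

    // Grab the cookie issued by the first response and replay it on the second request.
    string cookie = client.ResponseHeaders[HttpResponseHeader.SetCookie];
    client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
    client.Headers[HttpRequestHeader.Cookie] = cookie;
    htmlResult2 = client.UploadString(url, myPara2);
}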

How to read data from WebClient.UploadData

First time posting! I've been breaking my head on this particular case. I've got a web application that needs to upload a file to a Web API and receive an SVG file (in a string) back.
The web-app uploads the file as follows:
using (var client = new WebClient())
{
    var response = client.UploadFile(apiUrl, FileIGotEarlierInMyCode);
    ViewBag.MessageTest = response.ToString();
}
The above works, but then we get to the API part:
How do I access the uploaded file? Pseudocode:
public string Post([FromBody]File f)
{
    File uploadedFile = f;
    string svgString = ConvertDataToSVG(uploadedFile);
    return svgString;
}
In other words: how do I upload/send an XML file to my Web API, use/manipulate it there, and send other data back?
Thanks in advance!
Nick
PS: I tried this answer:
Accessing the exact data sent using WebClient.UploadData on the server
But my code did not compile on Request.InputStream.
The reason Request.InputStream didn't work for you is that the Request property can refer to different types of Request objects, depending on what kind of ASP.NET solution you are developing. There is:
HttpRequest, as available in Web Forms,
HttpRequestBase, as available in MVC Controllers
HttpRequestMessage, as available in Web API Controllers.
You are using Web API, so HttpRequestMessage it is. Here is how you read the raw request bytes using this class:
var data = Request.Content.ReadAsByteArrayAsync().Result;
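A minimal sketch of a Web API action built around that call. ConvertDataToSVG is the asker's own helper (adapted here to take the raw string), and note that WebClient.UploadFile sends multipart/form-data, so the raw bytes still include the multipart envelope around the file contents:

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using System.Web.Http;

public class DocumentController : ApiController
{
    public async Task<string> Post()
    {
        // Raw request body, multipart envelope included.
        byte[] data = await Request.Content.ReadAsByteArrayAsync();
        string xml = Encoding.UTF8.GetString(data);
        return ConvertDataToSVG(xml); // the asker's own conversion helper
    }
}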

How to ensure a URL is called from my application and not manually from a browser

I have an application that contains a button; on click of this button, it will open a browser window using a URL with query-string parameters (the URL of a page that I am coding).
Is there a way to ensure that the URL is coming from my application, and only from my application - and not from just anyone typing the URL manually in a web browser?
If not, what is the best way to ensure that a specific URL is coming from a specific application, and not just manually entered in the address bar of a web browser?
I'm using ASP.NET.
You can check if the request was made from one of the pages of your application using:
Request.UrlReferrer != null && Request.UrlReferrer.ToString().Contains("mywebsite.com")
That's the simple way.
The secure way is to put a cookie on the client containing a value encrypted using a secure key or hashed using a secure salt. If the cookie is set to expire when the browser is closed, it should be very hard for someone to forge.
Here's an example:
On the pages that would redirect to the page you are trying to protect:
HttpCookie cookie = new HttpCookie("SecureCheck");
//don't set the cookie's expiration so it's deleted when the browser is closed
cookie.Value = System.Web.Security.FormsAuthentication.HashPasswordForStoringInConfigFile(Session.SessionID, "SHA1");
Response.Cookies.Add(cookie);
On the page you are trying to protect:
//check to see if the cookie is there and has the correct value
HttpCookie check = Request.Cookies["SecureCheck"];
if (check == null || System.Web.Security.FormsAuthentication.HashPasswordForStoringInConfigFile(Session.SessionID, "SHA1") != check.Value)
    throw new Exception("Invalid request. Please access this page only from the application.");
//if we got this far the exception was not thrown and we are safe to continue
//insert whatever code here
There's no reliable way to do this for a GET request, nor is there any reason to try for a legitimate user. What you should do instead is ensure that, regardless of where the request comes from, the user has the proper permissions and access rights and that the session is protected appropriately (HTTP-only cookies, SSL, etc.). If the request is changing data, then it should be a POST, not a GET, and it should be accompanied by suitable cross-site request forgery prevention techniques (such as a cookie containing a nonce that is verified against a matching nonce on the form itself).
There is no way, other than rejecting the request if it doesn't contain a previously generated random one-time token in the parameters (that would be stored in the session, for example).
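A minimal sketch of that one-time-token idea in Web Forms terms (names like LinkToken and Target.aspx are illustrative):

// Issuing page: generate a one-time token and stash it in the session.
string token = Guid.NewGuid().ToString("N");
Session["LinkToken"] = token;
string link = "Target.aspx?token=" + token;

// Target page: accept the request only if the token matches, then burn it.
string expected = (string)Session["LinkToken"];
Session.Remove("LinkToken");
if (expected == null || expected != Request.QueryString["token"])
    throw new HttpException(403, "Invalid request token.");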
While there is no 100% secure way to do this, what I am suggesting might at least take care of your basic needs.
This is what you can do.
Client: Add an HTTP header with an encoded string that is a hash (e.g. SHA-256) of some secret word.
Then make your client always do a POST request instead of a GET.
Server: Check the HTTP header for the encoded string. Also make sure it is a POST request.
This is not 100% secure, as of course someone smart enough could figure it out and still generate a request, but depending on your needs you might find this enough.
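A minimal sketch of that idea, assuming both sides share a secret word; the header name X-App-Token and the secret are illustrative:

using System;
using System.Net;
using System.Security.Cryptography;
using System.Text;

static class AppTokenClient
{
    // Both client and server compute the same hash of the shared secret.
    static string HashSecret(string secret)
    {
        using (SHA256 sha = SHA256.Create())
            return Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(secret)));
    }

    static void Main()
    {
        using (var client = new WebClient())
        {
            // Custom header carries the token; the request is a POST, not a GET.
            client.Headers["X-App-Token"] = HashSecret("my-shared-secret");
            client.UploadString("https://example.com/target", "field=value");
        }
    }
}

// Server side, e.g. in Page_Load:
// if (Request.HttpMethod != "POST" ||
//     Request.Headers["X-App-Token"] != HashSecret("my-shared-secret"))
//     throw new HttpException(403, "Forbidden");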
You can check the referrer and the user agent, add an additional header to the request, and always do POST requests to that URL. However, since HTTP is transmitted in plain text, somebody can always run Wireshark or Fiddler, capture the HTTP packets, and recreate the requests with your measures in place.
Pass parameters from your application so that you can verify them on the server side.
I suggest you use an encryption algorithm to generate random text using a password (key), then decrypt the param on the server side and check if it matches your expectation. If I had to do something like this, I would do something similar to the above.
You can check the header on the MVC controller, e.g. Request.Headers["Accept"], to see if the request is coming from your AngularJS or jQuery code.
A sample AngularJS call looks like this:
var url = ServiceServerPath + urlSearchService + '/SearchCustomer?input=' + $scope.strInput;
$http({
    method: 'GET',
    url: url,
    headers: {
        'Content-Type': 'application/json'
    },
    ...
And on the MVC [HttpGet] action method:
[HttpGet]
[PreventDirectAccess] // my custom filter
// ---> /Index/SearchCustomer?input={input}/
public string SearchCustomer(string input)
{
    try
    {
        // Check whether the request asks for JSON (sent by the script) rather than a plain browser navigation.
        var acceptHeader = Request.Headers["Accept"];
        if (!acceptHeader.Contains("application/json")) return "Error Request on server!";
        var serialize = new JavaScriptSerializer();
        ISearch customer = new SearchCustomer();
        IEnumerable<ContactInfoResult> returnSearch = customer.GetCustomerDynamic(input);
        return serialize.Serialize(returnSearch);
    }
    catch (Exception)
    {
        throw;
    }
}

Logging into website programmatically with C# and WebRequest class

I'm trying to login to a website using C# and the WebRequest class. This is the code I wrote up last night to send POST data to a web page:
public string login(string URL, string postData)
{
    Stream webpageStream;
    WebResponse webpageResponse;
    StreamReader webpageReader;
    byte[] byteArray = Encoding.UTF8.GetBytes(postData);

    WebRequest webRequest = WebRequest.Create(URL);
    webRequest.Method = "POST";
    webRequest.ContentType = "application/x-www-form-urlencoded";
    webRequest.ContentLength = byteArray.Length;

    webpageStream = webRequest.GetRequestStream();
    webpageStream.Write(byteArray, 0, byteArray.Length);

    webpageResponse = webRequest.GetResponse();
    webpageStream = webpageResponse.GetResponseStream();
    webpageReader = new StreamReader(webpageStream);
    string responseFromServer = webpageReader.ReadToEnd();

    webpageReader.Close();
    webpageStream.Close();
    webpageResponse.Close();

    return responseFromServer;
}
and it works fine, but I have no idea how I can modify it to send POST data to a login script and then save a cookie(?) and log in.
I have looked at my network transfers using Firebug on the websites login page and it is sending POST data to a URL that looks like this:
accountName=myemail%40gmail.com&password=mypassword&persistLogin=on&app=com-sc2
As far as I'm aware, to be able to use my account with this website in my C# app I need to save the cookie that the web server sends, and then use it on every request? Is this right? Or can I get away with no cookie at all?
Any help is greatly appreciated, thanks! :)
The login process depends on the concrete web site. If it uses cookies, you need to use them.
I recommend using Firefox with an HTTP-headers-watching plugin to look at how headers are sent to your particular web site, and then implementing it the same way in C#. I answered a very similar question the day before yesterday, including an example with cookies. Look here.
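For the cookie part, a minimal sketch of the usual HttpWebRequest pattern; loginUrl and protectedUrl are placeholders:

// One shared container carries cookies across all requests.
CookieContainer cookies = new CookieContainer();

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(loginUrl);
request.Method = "POST";
request.ContentType = "application/x-www-form-urlencoded";
request.CookieContainer = cookies; // the server's Set-Cookie lands here
// ...write the POST body and read the response as in the login method above...

// Later requests reuse the same container, so the session cookie is sent automatically.
HttpWebRequest next = (HttpWebRequest)WebRequest.Create(protectedUrl);
next.CookieContainer = cookies;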
I've had more luck using the HtmlElement class to manipulate websites.
Here is a cross post to an example of how logging in through code would work (provided you're using a WebBrowser control).

Writing a stream to the response in ASP.Net

OK, just want to clarify something with my solution.
I have a requirement to grab a file from a repository somewhere; this repository requires a session token to be passed in the form of a cookie along with the request for the file.
I am authenticating the user against this repository and storing the session token in the user's cookie collection for my application when the user first logs onto my application.
The problem is that the cookie will not get sent to the repository when a user tries to access a file, because the repository is on a different URL and domain. Therefore I am creating a new HTTP request, appending the cookie, and getting the response stream back.
I now need to send this response stream back to the user, headers and all (as this response stream will contain the headers for the file the user is trying to access)
Can I use this:
string session = cookie.Value;
StreamReader reader = new StreamReader(Utility.GetLinkStream(url, session));
Context.Response.ClearHeaders();
Context.Response.Clear();
Context.Response.Write(reader.ReadToEnd());
Essentially the call to Utility.GetLinkStream goes off, creates an HTTP request, and returns the stream of the response. Will the call to Write write out the whole response, headers and all, or is there a better way to achieve this?
Response.Write() will only write content; you have to set the headers before calling it. You could enumerate the headers from the WebResponse and add them to Response.Headers manually.
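A minimal sketch of that approach, assuming a hypothetical Utility.GetLinkResponse variant that returns the HttpWebResponse itself so its headers are available:

HttpWebResponse upstream = Utility.GetLinkResponse(url, session); // hypothetical; returns the response, not just its stream
Context.Response.ClearHeaders();
Context.Response.Clear();
Context.Response.ContentType = upstream.ContentType;
foreach (string name in upstream.Headers.AllKeys)
{
    // Skip headers that ASP.NET manages itself.
    if (name == "Content-Length" || name == "Transfer-Encoding") continue;
    Context.Response.AppendHeader(name, upstream.Headers[name]);
}
// Copy the body as raw bytes; a StreamReader would corrupt binary content such as PDFs.
upstream.GetResponseStream().CopyTo(Context.Response.OutputStream);
Context.Response.End();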
