HTTPS Request to HTTP Drops Headers? - c#

This is more for curiosity as I'm failing to find any answers or documentation for this phenomenon, but here's the scenario:
There are two services/applications, both hosted on IIS 7. Service 1 receives an HTTPS request from an external source (browser, Fiddler, etc.). To validate the request it needs to call service 2, so service 1 makes its own, new, separate call over HTTP to service 2, with an Authorization header added to the request object. When service 2 receives this call, the Authorization header is gone, as if stripped out. The authentication therefore fails, the failure propagates back to service 1, and service 1 rejects the external call.
Does anyone have an explanation for why this header (and some others, from what I've seen in testing) doesn't make it through on the HTTP call? Is this a behavior of IIS, or ASP.NET, or something else? If the call to service 2 is made over HTTPS, the headers come through fine. I'm generating the request like so:
string uriendpoint = "http://service.test.com/testService.svc/authtest";
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uriendpoint);
request.Credentials = CredentialCache.DefaultCredentials;
var authField = MD5Hash("test:test!!2013"); // MD5Hash is a helper defined elsewhere
request.Headers.Add(HttpRequestHeader.Authorization, authField.ToString());
request.Method = WebRequestMethods.Http.Get;
HttpWebResponse response = (HttpWebResponse)request.GetResponse();

Most likely "service 2" has code along the lines of "if the incoming request is HTTP, ignore authorization headers". That is very reasonable behavior: HTTP traffic can easily be sniffed and replayed, so careful servers refuse to accept credentials over a potentially insecure channel.

A co-worker of mine tracked the root cause of this behavior down to IIS's "URL Rewrite" module. We had it set up to do a permanent redirect of HTTP requests to HTTPS, and this redirect is where the headers get dropped: the client follows the 301 and re-issues the request to the new location, and .NET does not re-send the Authorization header on the redirected request. It's a bit odd at first sight, but I'll try another approach to get around this problem.
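If you hit the same thing, one way to confirm (and work around) it is to stop the client from silently following the redirect. A minimal sketch, reusing the question's MD5Hash helper and URL; AllowAutoRedirect is the standard HttpWebRequest switch for this:
// Surface the URL Rewrite redirect instead of silently following it.
// .NET does not re-send the Authorization header on a redirected request,
// so we re-issue the call to the https location ourselves.
var authField = MD5Hash("test:test!!2013");
var request = (HttpWebRequest)WebRequest.Create("http://service.test.com/testService.svc/authtest");
request.AllowAutoRedirect = false; // makes the 301 from URL Rewrite visible
request.Headers.Add(HttpRequestHeader.Authorization, authField);

using (var response = (HttpWebResponse)request.GetResponse())
{
    if (response.StatusCode == HttpStatusCode.MovedPermanently)
    {
        // Follow the redirect manually, keeping the header intact.
        var redirected = (HttpWebRequest)WebRequest.Create(response.Headers[HttpResponseHeader.Location]);
        redirected.Headers.Add(HttpRequestHeader.Authorization, authField);
        using (var finalResponse = (HttpWebResponse)redirected.GetResponse())
        {
            // ... consume finalResponse ...
        }
    }
}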

Related

What is NTLM/Authenticate/Negotiate web authentication

I understand Basic and Digest authentication, but I've searched a lot and I'm still struggling with NTLM, Authenticate, and Negotiate.
I think (correct me if I'm wrong) that NTLM and Authenticate are two terms for the same protocol,
and that Negotiate means trying NTLM first, then falling back to Digest, then falling back to Basic.
Is that correct? And if so, where is a good example of how to connect in C#, both for NTLM only and for Negotiate?
I have two use cases. The first is pulling down a single file: make a request, get an XML file as the response, read it, done.
The second is querying OData: hundreds to thousands of web requests, each of which returns JSON (or XML) as the response.
Microsoft Negotiate is a security support provider (SSP) that acts as
an application layer between Security Support Provider Interface
(SSPI) and the other SSPs. When an application calls into SSPI to log
on to a network, it can specify an SSP to process the request. If the
application specifies Negotiate, Negotiate analyzes the request and
picks the best SSP to handle the request based on customer-configured
security policy.
https://learn.microsoft.com/en-us/windows/desktop/secauthn/microsoft-negotiate
As the article says, Negotiate does not fall back to Digest. In a way, Negotiate is like Kerberos with a default fallback to NTLM:
Currently, the Negotiate security package selects between Kerberos and
NTLM. Negotiate selects Kerberos unless it cannot be used by one of
the systems involved in the authentication or the calling application
did not provide sufficient information to use Kerberos.
Windows Challenge/Response (NTLM) is the authentication protocol used
on networks that include systems running the Windows operating system
and on stand-alone systems.
"Authenticate" is just an internal method; I'm not sure why you are conflating it with the protocols. A good look at the internals is here: https://blogs.msdn.microsoft.com/dsnotes/2015/12/30/negotiate-vs-ntlm/
The way to look at this is:
Microsoft initially came up with a way to authenticate against Windows servers/machines, which they called NTLM; it uses a challenge/response (request/response) exchange.
Subsequently they came up with a new protocol called Kerberos, which was widely adopted.
To make sure that existing applications all keep working with both old and new, there is a scheme called Negotiate, which tries Kerberos and, if that is not available, falls back to NTLM.
Edit 1: Applying these authentication mechanisms to the Web was formalized in RFC 4559.
Edit 2: NTLM authenticates a connection, not a request, while other authentication mechanisms usually authenticate each request. For the first use case this should not matter much, but for the second use case it makes sense to try NTLM while keeping a single connection open (using HTTP keep-alive and sending the credentials only once, on the first request), as sketched below. There may be a performance difference. Keep us updated with your results.
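A minimal sketch of that idea with HttpWebRequest (the URL is a placeholder; UnsafeAuthenticatedConnectionSharing is the standard opt-in for sharing an NTLM-authenticated connection, and it should be used with care because every request on the shared connection runs as the first authenticated user):
// Reuse one NTLM-authenticated connection for many requests (OData use case).
var request = (HttpWebRequest)WebRequest.Create("http://www.contoso.com/odata/Products");
request.Credentials = CredentialCache.DefaultCredentials;
request.KeepAlive = true; // on by default; shown for clarity
request.UnsafeAuthenticatedConnectionSharing = true; // share the authenticated connection across requests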
Here is sample WebRequest code taken from the Microsoft docs; you can replace WebRequest with HttpWebRequest.
// Requires: using System; using System.IO; using System.Net;

// Create a request for the URL.
WebRequest request = WebRequest.Create("http://www.contoso.com/default.html");
// If required by the server, set the credentials.
request.Credentials = CredentialCache.DefaultCredentials;
// Get the response.
WebResponse response = request.GetResponse();
// Display the status.
Console.WriteLine(((HttpWebResponse)response).StatusDescription);
// Get the stream containing content returned by the server.
Stream dataStream = response.GetResponseStream();
// Open the stream using a StreamReader for easy access.
StreamReader reader = new StreamReader(dataStream);
// Read the content.
string responseFromServer = reader.ReadToEnd();
// Display the content.
Console.WriteLine(responseFromServer);
// Clean up the streams and the response.
reader.Close();
response.Close();
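To address the "NTLM only vs. Negotiate" part of the question directly: a CredentialCache lets you pin the authentication scheme instead of relying on DefaultCredentials. A minimal sketch, with placeholder URL, user name, password, and domain:
// Bind explicit credentials to a specific authentication scheme.
var cache = new CredentialCache
{
    // "NTLM" forces NTLM only; use "Negotiate" to allow Kerberos with NTLM fallback.
    { new Uri("http://www.contoso.com/"), "NTLM",
      new NetworkCredential("user", "password", "DOMAIN") }
};
var request = (HttpWebRequest)WebRequest.Create("http://www.contoso.com/default.html");
request.Credentials = cache;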

CORS preflight and Non Preflight issue [duplicate]

Apparently, I have completely misunderstood its semantics. I thought of something like this:
A client downloads JavaScript code MyCode.js from http://siteA - the origin.
The response header of MyCode.js contains Access-Control-Allow-Origin: http://siteB, which I thought meant that MyCode.js was allowed to make cross-origin references to the site B.
The client triggers some functionality of MyCode.js, which in turn makes requests to http://siteB; that should be fine, despite being cross-origin requests.
Well, I am wrong. It does not work like this at all. So, I have read Cross-origin resource sharing and attempted to read Cross-Origin Resource Sharing in w3c recommendation.
One thing is sure - I still do not understand how I am supposed to use this header.
I have full control of both site A and site B. How do I enable the JavaScript code downloaded from the site A to access resources on the site B using this header?
P.S.: I do not want to utilize JSONP.
Access-Control-Allow-Origin is a CORS (cross-origin resource sharing) header.
When Site A tries to fetch content from Site B, Site B can send an Access-Control-Allow-Origin response header to tell the browser that the content of this page is accessible to certain origins. (An origin is a domain, plus a scheme and port number.) By default, Site B's pages are not accessible to any other origin; using the Access-Control-Allow-Origin header opens a door for cross-origin access by specific requesting origins.
For each resource/page that Site B wants to make accessible to Site A, Site B should serve its pages with the response header:
Access-Control-Allow-Origin: http://siteA.com
Modern browsers will not block cross-domain requests outright. If Site A requests a page from Site B, the browser will actually fetch the requested page at the network level and check whether the response headers list Site A as a permitted requester origin. If Site B has not indicated that Site A is allowed to access this page, the browser will trigger the XMLHttpRequest's error event and deny the response data to the requesting JavaScript code.
Non-simple requests
What happens on the network level can be slightly more complex than explained above. If the request is a "non-simple" request, the browser first sends a data-less "preflight" OPTIONS request, to verify that the server will accept the request. A request is non-simple when either (or both):
using an HTTP verb other than GET or POST (e.g. PUT, DELETE)
using non-simple request headers; the only simple request headers are:
Accept
Accept-Language
Content-Language
Content-Type (this is only simple when its value is application/x-www-form-urlencoded, multipart/form-data, or text/plain)
If the server responds to the OPTIONS preflight with appropriate response headers (Access-Control-Allow-Headers for non-simple headers, Access-Control-Allow-Methods for non-simple verbs) that match the non-simple verb and/or non-simple headers, then the browser sends the actual request.
Supposing that Site A wants to send a PUT request for /somePage, with a non-simple Content-Type value of application/json, the browser would first send a preflight request:
OPTIONS /somePage HTTP/1.1
Origin: http://siteA.com
Access-Control-Request-Method: PUT
Access-Control-Request-Headers: Content-Type
Note that Access-Control-Request-Method and Access-Control-Request-Headers are added by the browser automatically; you do not need to add them. This OPTIONS preflight gets the successful response headers:
Access-Control-Allow-Origin: http://siteA.com
Access-Control-Allow-Methods: GET, POST, PUT
Access-Control-Allow-Headers: Content-Type
When sending the actual request (after preflight is done), the behavior is identical to how a simple request is handled. In other words, a non-simple request whose preflight is successful is treated the same as a simple request (i.e., the server must still send Access-Control-Allow-Origin again for the actual response).
The browser sends the actual request:
PUT /somePage HTTP/1.1
Origin: http://siteA.com
Content-Type: application/json
{ "myRequestContent": "JSON is so great" }
And the server sends back an Access-Control-Allow-Origin, just as it would for a simple request:
Access-Control-Allow-Origin: http://siteA.com
See Understanding XMLHttpRequest over CORS for a little more information about non-simple requests.
Cross-Origin Resource Sharing (CORS, a.k.a. cross-domain AJAX requests) is an issue most web developers will encounter at some point. According to the same-origin policy, browsers restrict client-side JavaScript in a security sandbox; usually JS cannot directly communicate with a remote server on a different domain. In the past, developers created many tricky ways to achieve cross-domain resource requests, most commonly:
Use Flash/Silverlight or the server side as a "proxy" to communicate with the remote server.
JSON With Padding (JSONP).
Embed the remote server in an iframe and communicate through the fragment or window.name; refer here.
Those tricky ways all have issues. For example, JSONP can result in a security hole if developers simply "eval" the response; and #3 above, although it works, requires both domains to build a strict contract with each other, which is neither flexible nor elegant, IMHO. :)
The W3C introduced Cross-Origin Resource Sharing (CORS) as a safe, flexible, and recommended standard way to solve this issue.
The Mechanism
From a high level we can regard CORS as a contract between a client AJAX call from domain A and a page hosted on domain B. A typical cross-origin request/response would be:
DomainA AJAX request headers
Host: DomainB.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0) Gecko/20100101 Firefox/4.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8,application/json
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Keep-Alive: 115
Origin: http://DomainA.com
DomainB response headers
Cache-Control: private
Content-Type: application/json; charset=utf-8
Access-Control-Allow-Origin: DomainA.com
Content-Length: 87
Proxy-Connection: Keep-Alive
Connection: Keep-Alive
The key facts here are the Origin request header, which "indicates where the cross-origin request or preflight request originates from", and the Access-Control-Allow-Origin response header, which indicates that this page allows remote requests from DomainA (a value of * allows remote requests from any domain).
As I mentioned above, the W3C recommends that browsers implement a "preflight request" before submitting the actual cross-origin HTTP request. In a nutshell, it is an HTTP OPTIONS request:
OPTIONS DomainB.com/foo.aspx HTTP/1.1
If foo.aspx supports the OPTIONS HTTP verb, it might return a response like the one below:
HTTP/1.1 200 OK
Date: Wed, 01 Mar 2011 15:38:19 GMT
Access-Control-Allow-Origin: http://DomainA.com
Access-Control-Allow-Methods: POST, GET, OPTIONS, HEAD
Access-Control-Allow-Headers: X-Requested-With
Access-Control-Max-Age: 1728000
Connection: Keep-Alive
Content-Type: application/json
Only if the response contains "Access-Control-Allow-Origin" and its value is "*" or contains the domain that submitted the CORS request will the browser submit the actual cross-domain request; the result is cached in the "Preflight-Result-Cache".
I blogged about CORS three years ago: AJAX Cross-Origin HTTP request
According to this Mozilla Developer Network article,
A resource makes a cross-origin HTTP request when it requests a resource from a different domain or port than the one from which the first resource itself is served.
An HTML page served from http://domain-a.com makes an <img> src request for http://domain-b.com/image.jpg.
Many pages on the web today load resources like CSS stylesheets, images, and scripts from separate domains (so cross-origin loading is commonplace).
Same-Origin Policy
For security reasons, browsers restrict cross-origin HTTP requests initiated from within scripts.
For example, XMLHttpRequest and Fetch follow the same-origin policy.
So, a web application using XMLHttpRequest or Fetch can only make HTTP requests to its own domain.
Cross-Origin Resource Sharing (CORS)
To improve web applications, developers asked browser vendors to allow cross-domain requests.
The Cross-origin resource sharing (CORS) mechanism gives web servers cross-domain access controls, which enable secure cross-domain data transfers.
Modern browsers use CORS in an API container - such as XMLHttpRequest or fetch - to mitigate risks of cross-origin HTTP requests.
How CORS works (Access-Control-Allow-Origin header)
Wikipedia:
The CORS standard describes new HTTP headers which provide browsers and servers a way to request remote URLs only when they have permission.
Although some validation and authorization can be performed by the server, it is generally the browser's responsibility to support these headers and honor the restrictions they impose.
Example
The browser sends the OPTIONS request with an Origin HTTP header.
The value of this header is the domain that served the parent page. When a page from http://www.example.com attempts to access a user's data in service.example.com, the following request header would be sent to service.example.com:
Origin: http://www.example.com
The server at service.example.com may respond with:
An Access-Control-Allow-Origin (ACAO) header in its response indicating which origin sites are allowed.
For example:
Access-Control-Allow-Origin: http://www.example.com
An error page if the server does not allow the cross-origin request
An Access-Control-Allow-Origin (ACAO) header with a wildcard that allows all domains:
Access-Control-Allow-Origin: *
Whenever I start thinking about CORS, my intuition about which site hosts the headers is incorrect, just as you described in your question. For me, it helps to think about the purpose of the same-origin policy.
The purpose of the same-origin policy is to protect you from malicious JavaScript on siteA.com accessing private information you've chosen to share only with siteB.com. Without the same-origin policy, JavaScript written by the authors of siteA.com could have your browser make requests to siteB.com, using your authentication cookies for siteB.com. In this way, siteA.com could steal the secret information you share with siteB.com.
Sometimes you need to work cross domain, which is where CORS comes in. CORS relaxes the same-origin policy for siteB.com, using the Access-Control-Allow-Origin header to list other domains (siteA.com) that are trusted to run JavaScript that can interact with siteB.com.
To understand which domain should serve the CORS headers, consider this. You visit malicious.com, which contains some JavaScript that tries to make a cross-domain request to mybank.com. It should be up to mybank.com, not malicious.com, to decide whether or not it sets CORS headers that relax the same-origin policy and allow the JavaScript from malicious.com to interact with it. If malicious.com could set its own CORS headers allowing its own JavaScript access to mybank.com, this would completely nullify the same-origin policy.
I think the reason for my bad intuition is the point of view I have when developing a site. It's my site, with all my JavaScript; it isn't doing anything malicious, so it should be up to me to specify which other sites my JavaScript can interact with. When in fact I should be thinking: which other sites' JavaScript is trying to interact with my site, and should I use CORS to allow it?
From my own experience, it's hard to find a simple explanation why CORS is even a concern.
Once you understand why it's there, the headers and discussion becomes a lot clearer. I'll give it a shot in a few lines.
It's all about cookies. Cookies are stored on a client by their domain.
An example story: On your computer, there's a cookie for yourbank.com. Maybe your session is in there.
Key point: When a client makes a request to the server, it will send the cookies stored under the domain for that request.
You're logged in on your browser to yourbank.com. You request to see all your accounts, and cookies are sent for yourbank.com. yourbank.com receives the pile of cookies and sends back its response (your accounts).
If another client makes a cross origin request to a server, those cookies are sent along, just as before. Ruh roh.
You browse to malicious.com, which makes a bunch of requests to different banks, including yourbank.com.
The browser gathers up the yourbank.com cookies and sends them along; since the cookies validate as expected, the server authorizes the response.
Now malicious.com has a response from yourbank.
Yikes.
So now, a few questions and answers become apparent:
"Why don't we just block the browser from doing that?" Yep. That's CORS.
"How do we get around it?" Have the server tell the request that CORS is OK.
1. A client downloads javascript code MyCode.js from http://siteA - the origin.
The code that does the downloading - your HTML script tag, or XHR from JavaScript, or whatever - came from, let's say, http://siteZ. When the browser requests MyCode.js, it sends an Origin: http://siteZ header, because it can see that the request goes to siteA and siteZ != siteA. (You cannot stop or interfere with this.)
2. The response header of MyCode.js contains Access-Control-Allow-Origin: http://siteB, which I thought meant that MyCode.js was allowed to make cross-origin references to the site B.
No. It means that only siteB is allowed to make this request. So your request for MyCode.js from siteZ gets an error instead, and the browser typically gives you nothing. But if you make your server return A-C-A-O: siteZ instead, you'll get MyCode.js. Or it could send '*', which lets everybody in. Or the server could always echo back the string from the Origin: header... but... for security, if you're afraid of hackers, your server should only allow origins on a short list of permitted requesters.
Then MyCode.js comes from siteA. When it makes requests to siteB, they are all cross-origin: the browser sends Origin: siteA, and siteB has to take that siteA, recognize it's on the short list of allowed requesters, and send back A-C-A-O: siteA. Only then will the browser let your script get the result of those requests.
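To make that short-list idea concrete, here is a minimal ASP.NET Core sketch (my own illustration, not from the answer above; the origins are placeholders) of a server that echoes Access-Control-Allow-Origin only for allow-listed origins:
// Echo the Origin header back only when it is on the allow-list.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
var allowedOrigins = new HashSet<string> { "http://siteA.com", "http://siteZ.com" }; // placeholder short list

app.Use(async (context, next) =>
{
    var origin = context.Request.Headers["Origin"].ToString();
    if (allowedOrigins.Contains(origin))
        context.Response.Headers["Access-Control-Allow-Origin"] = origin;
    await next();
});

app.MapGet("/MyCode.js", () => Results.Content("/* script */", "text/javascript"));
app.Run();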
Using React and Axios, join a proxy link to the URL and add a header as shown below:
https://cors-anywhere.herokuapp.com/ + Your API URL
Just adding the proxy link will work, but it can also throw an error for No Access again. Hence it is better to add a header as shown below.
axios.get(`https://cors-anywhere.herokuapp.com/[YOUR_API_URL]`, { headers: { 'Access-Control-Allow-Origin': '*' } })
  .then(response => console.log(response.data));
Warning: Not to be used in production
This is just a quick fix. If you're struggling with why you're not able to get a response, you can use this.
But again it's not the best answer for production.
If you are using PHP, try adding the following code at the beginning of the PHP file.
If you are testing on localhost, try this:
header("Access-Control-Allow-Origin: *");
If you are calling from a specific external domain, try this:
header("Access-Control-Allow-Origin: http://www.website.com");
I worked with Express.js 4, Node.js 7.4 and Angular, and I had the same problem. This helped me:
a) Server side: in app.js, add headers to all responses, like this:
app.use(function(req, res, next) {
res.header('Access-Control-Allow-Origin', req.headers.origin);
res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
next();
});
This must be before all routes.
I have seen many people also add these headers:
res.header("Access-Control-Allow-Headers", "*");
res.header('Access-Control-Allow-Credentials', true);
res.header('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE');
but I didn't need them.
b) Client side: when sending the Ajax request, you need to add "withCredentials: true", like:
$http({
method: 'POST',
url: 'url',
withCredentials: true,
data : {}
}).then(function(response){
// Code
}, function (response) {
// Code
});
If you just want to test a cross-domain application in which the browser blocks your request, you can open your browser in unsafe mode and test your application without changing your code and without making your code unsafe.
From macOS, you can do this from the terminal line:
open -a Google\ Chrome --args --disable-web-security --user-data-dir
In Python, I have been using the Flask-CORS library with great success. It makes dealing with CORS super easy and painless. I added some code from the library's documentation below.
Installing:
pip install -U flask-cors
Simple example that allows CORS for all domains on all routes:
from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
CORS(app)

@app.route("/")
def helloWorld():
    return "Hello, cross-origin-world!"
For more specific examples, see the documentation. I have used the simple example above to get around the CORS issue in an Ionic application I am building that has to access a separate Flask server.
Simply paste the following code into your web.config file.
Note that you have to place it under the <system.webServer> tag:
<httpProtocol>
<customHeaders>
<add name="Access-Control-Allow-Origin" value="*" />
<add name="Access-Control-Allow-Headers" value="Content-Type" />
<add name="Access-Control-Allow-Methods" value="GET, POST, PUT, DELETE, OPTIONS" />
</customHeaders>
</httpProtocol>
I can't configure the back-end server, but with these extensions in the browsers, it works for me:
For Firefox:
CORS Everywhere
For Google Chrome:
Allow CORS: Access-Control-Allow-Origin
Note: CORS works for me with this configuration:
For cross-origin sharing, set the header Access-Control-Allow-Origin: *
PHP: header('Access-Control-Allow-Origin: *');
Node (Express, inside a middleware): res.header('Access-Control-Allow-Origin', '*');
This will allow content to be shared across domains.
Nginx and Apache
As an addition to apsiller's answer, I would like to add a wiki graph which shows when a request is simple or not (i.e., whether an OPTIONS preflight request is sent).
For a simple request (e.g., hotlinking images), you don't need to change your server configuration files; you can instead add the headers in the application (hosted on the server, e.g., in PHP), as Melvin Guerrero mentions in his answer. But remember: if you add full CORS headers in your server configuration and at the same time allow simple CORS in the application (e.g., in PHP), it will not work at all.
And here are configurations for two popular servers:
turn on CORS on Nginx (nginx.conf file)
location ~ ^/index\.php(/|$) {
    ...
    add_header 'Access-Control-Allow-Origin' "$http_origin" always; # changing "$http_origin" to "*" should give the same result - allow all domains to CORS (but better to restrict it to your particular domain)
    add_header 'Access-Control-Allow-Credentials' 'true' always;
    if ($request_method = OPTIONS) {
        add_header 'Access-Control-Allow-Origin' "$http_origin"; # DO NOT remove THESE LINES (duplicated from outside the 'if' above)
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header 'Access-Control-Max-Age' 1728000; # cache preflight value for 20 days
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'; # arbitrary methods
        add_header 'Access-Control-Allow-Headers' 'My-First-Header,My-Second-Header,Authorization,Content-Type,Accept,Origin'; # arbitrary headers
        add_header 'Content-Length' 0;
        add_header 'Content-Type' 'text/plain charset=UTF-8';
        return 204;
    }
}
turn on CORS on Apache (.htaccess file)
# ------------------------------------------------------------------------------
# | Cross-domain Ajax requests |
# ------------------------------------------------------------------------------
# Enable cross-origin Ajax requests.
# http://code.google.com/p/html5security/wiki/CrossOriginRequestSecurity
# http://enable-cors.org/
# change * (allow any domain) below to your domain
Header set Access-Control-Allow-Origin "*"
Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS, DELETE, PUT"
Header always set Access-Control-Allow-Headers "My-First-Header,My-Second-Header,Authorization, content-type, csrf-token"
Header always set Access-Control-Allow-Credentials "true"
The Access-Control-Allow-Origin response header indicates whether the
response can be shared with requesting code from the given origin.
Header type: Response header
Forbidden header name: no
A response that tells the browser to allow code from any origin to
access a resource will include the following:
Access-Control-Allow-Origin: *
For more information, visit Access-Control-Allow-Origin...
For .NET Core 3.1 API With Angular
Startup.cs : Add CORS
//SERVICES
public void ConfigureServices(IServiceCollection services)
{
    //CORS (Cross-Origin Resource Sharing)
    services.AddCors();
}

//MIDDLEWARE
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseRouting();

    //ORDER: CORS -> Authentication -> Authorization
    app.UseCors(x => x.AllowAnyHeader().AllowAnyMethod().WithOrigins("http://localhost:4200"));

    app.UseHttpsRedirection();
}
Controller: Enable CORS for an authorized controller
//Authorize all methods inside this controller
[Authorize]
[EnableCors()]
public class UsersController : ControllerBase
{
    //Action methods
}
Note: this is only a temporary solution for testing.
For those who can't control the backend to fix an Options 405 Method Not Allowed, here is a workaround for the Chrome browser.
Execute in the command line:
"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --disable-web-security --user-data-dir="path_to_profile"
Example:
"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --disable-web-security --user-data-dir="C:\Users\vital\AppData\Local\Google\Chrome\User Data\Profile 2"
Most CORS issues arise because you are trying to make the request with client-side Ajax from a frontend library such as React, Angular, or jQuery.
You must make the request from a backend application.
You are trying to make the request from the frontend, but the API you are trying to consume expects the request to be made from a backend application and will never accept client-side requests.
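A minimal ASP.NET Core sketch of that backend approach (my own illustration; the remote URL is a placeholder). The browser calls your own origin, and the server-to-server request below is not subject to the browser's same-origin policy, so the remote API needs no CORS headers:
// Backend proxy: the browser requests /proxy/data from our origin only.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient();
var app = builder.Build();

app.MapGet("/proxy/data", async (IHttpClientFactory factory) =>
{
    var client = factory.CreateClient();
    var json = await client.GetStringAsync("https://api.example.com/data"); // placeholder remote API
    return Results.Content(json, "application/json");
});

app.Run();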

Why would my REST service .NET clients send every request without authentication headers and then retry it with authentication header?

We run a REST web service whose API requires clients to use Basic authentication. We crafted a set of neat samples in various languages showing how to interface with our service. Now I'm reviewing the IIS logs of the service and see that the following pattern happens quite often:
a request comes, gets rejected with HTTP code 401
the same request is resent and succeeds
which looks like the first request is sent without an Authorization header and then the second one is re-sent with the right header and succeeds. Most of the time the log record contains a user-agent that is the same string we planted in our .NET sample.
So I assume the problem is specific to .NET programs. The problem is not reproduced with our sample code, so I assume the users somehow modified the code or wrote their own from scratch.
We tried contacting the users, but apparently they don't want to invest time in research. So it would be nice to figure out the most likely scenario that leads to this behavior in .NET programs.
Why would they do this? Why would they not attach the headers on the first attempt?
This is the default behavior of the HttpClient and HttpWebRequest classes, and it comes about in the following way.
Note: the text below explains the suboptimal behavior that causes the problem described in the question. Most likely you should not write your code like this; instead, scroll down to the corrected code.
In both cases, instantiate a NetworkCredential object and set the username and password on it:
var credentials = new NetworkCredential( username, password );
If you use HttpWebRequest - set .Credentials property:
webRequest.Credentials = credentials;
If you use HttpClient - pass the credentials object into HttpClientHandler (altered code from here):
var client = new HttpClient(new HttpClientHandler() { Credentials = credentials })
Then run Fiddler and start the request. You will see the following:
the request is sent without Authorization header
the service replies with HTTP 401 and WWW-Authenticate: Basic realm="UrRealmHere"
the request is resent with proper Authorization header (and succeeds)
This behavior is explained here: the client doesn't know in advance that the service requires Basic, so it first tries to negotiate the authentication protocol (and if the service actually required Digest, sending Basic headers in the open would be useless and could compromise the client).
Note: here the explanation of the suboptimal behavior ends and the better approach begins. Most likely you should use the code below instead of the code above.
For cases when it's known that the service requires Basic, that extra request can be eliminated the following way:
Don't set .Credentials; instead, add the header manually using code from here. Encode the username and password:
var encoded = Convert.ToBase64String( Encoding.ASCII.GetBytes(
String.Format( "{0}:{1}", username, password ) ) );
When using HttpWebRequest add it to the headers:
request.Headers.Add( "Authorization", "Basic " + encoded );
and when using HttpClient add it to default headers:
client.DefaultRequestHeaders.Authorization =
new AuthenticationHeaderValue( "Basic", encoded );
When you do that, the request is sent with the right Authorization header every time. Note that you should not also set .Credentials: if you do and the username or password is wrong, the same request will be sent twice, both times with the wrong credentials and both times, of course, yielding HTTP 401.
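If you would rather keep .Credentials, another option is PreAuthenticate (a sketch with placeholder URL and credentials): the first request still pays the 401 round-trip, but subsequent requests attach the Authorization header up front. HttpClientHandler has a matching PreAuthenticate property.
// Pay the 401 challenge once; later requests carry the header proactively.
var request = (HttpWebRequest)WebRequest.Create("https://service.example.com/api/resource");
request.Credentials = new NetworkCredential(username, password);
request.PreAuthenticate = true;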

Credentials on HttpClient is not validated after first successful REST call

I'm creating an application where the user logs in with a username, password and domain. I want to make as much of it as possible reusable across Windows platforms, so I'm using the NuGet package Microsoft HTTP Client Libraries in a Portable Class Library.
Here is how I create the HttpClient with an HttpClientHandler and then call GetAsync:
HttpClientHandler handler = new HttpClientHandler();
ICredentials myCredentials = new NetworkCredential("Username", "Password", "Domain");
handler.Credentials = myCredentials;
HttpClient client = new HttpClient(handler);
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
client.BaseAddress = new Uri("https://....");
HttpResponseMessage response = await client.GetAsync("...");
This seems to work fine. The credentials are sent in the request and only registered users are allowed to get the data.
In my application the users also have the option to sign out and then sign in again with possibly another username, password or domain. And here is where the problem is.
If I have called client.GetAsync with some valid credentials once, the HttpClient seems to remember the old user's credentials, even though I'm creating a new instance of HttpClient each time and setting the correct credentials for the new user.
So my question is: is the HttpClient keeping a network channel open, or is there some session problem that I'm not aware of?
--- Update #1 ---
If I make the URLs unique in GetAsync(...), e.g. by passing some random parameter with the request, the server will validate the credentials and only authorized users get access to the resource. That is not really a good solution, so I did some more research.
It looks like the server is sending a response header called Persistent-Auth: true. This tells the client that the Authorization header is not required for the next request, which I guess is why the credentials are not sent the next time I call GetAsync for the same resource. Surprisingly, I also noticed in Fiddler that for the second request to this resource, no HTTP request is sent at all from the client.
One interesting thing is that if I try the same approach in a browser, the Authorization behaves the same way, so it's only included in the first request. But for the second request to the same resource, I can see in Fiddler that an HTTP request is sent, as you would expect.
So to sum it all up, I guess I'm stuck with two issues. First, is it possible to change this Persistent-Auth behavior so that it is set to false in the server response? Second, why is my application not sending any request at all the second time I request the same resource?
According to the answer to this question:
How to stop credential caching on Windows.Web.Http.HttpClient?
this should work from Windows build 10586 onwards.
To manually clear all cached credentials, we can also call the method HttpBaseProtocolFilter.ClearAuthenticationCache(), which clears all cached credential information. Documentation for this method can be found here: https://learn.microsoft.com/en-us/uwp/api/Windows.Web.Http.Filters.HttpBaseProtocolFilter
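For reference, a minimal sketch of the sign-out path using that API (this is UWP's Windows.Web.Http, not System.Net.Http; the resource and credentials are placeholders):
// On sign-out: drop cached credentials so the next sign-in renegotiates.
var filter = new Windows.Web.Http.Filters.HttpBaseProtocolFilter();
filter.ClearAuthenticationCache(); // requires Windows 10 build 10586 or later

// On the next sign-in: attach the new user's credentials to a fresh client.
filter.ServerCredential = new Windows.Security.Credentials.PasswordCredential(
    "https://....", "Username", "Password");
var client = new Windows.Web.Http.HttpClient(filter);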

WCF HTTP Headers Using HttpRequestMessageProperty and OperationContextScope

I realize this is a question that's been asked time and again, but I can't find a list of "gotchas" that I can take a look at.
I'm writing a WCF client that will consume an SAP web service, using a customBinding in my web.config with allowCookies set to false and support for reliable sessions enabled. I'm setting my HTTP headers as follows:
var authCookie = new System.Net.Cookie(); // name, value, and domain are populated elsewhere
var wcfClient = new SomeWcfClient();
using (var context = new OperationContextScope(wcfClient.InnerChannel))
{
    var cookies = new CookieContainer();
    cookies.Add(authCookie);
    var endPoint = new EndpointAddress("http://someDomain.test/");
    var httpRequest = new System.ServiceModel.Channels.HttpRequestMessageProperty();
    OperationContext.Current.OutgoingMessageProperties.Add(System.ServiceModel.Channels.HttpRequestMessageProperty.Name, httpRequest);
    httpRequest.Headers.Add(HttpRequestHeader.Cookie, cookies.GetCookieHeader(endPoint.Uri));
    wcfClient.PerformOperation();
}
When I use Fiddler, my HTTP header does not come across. I've tried creating dummy Referer and User-Agent headers, too, thinking that maybe something specific was happening with my cookie header, but even those other headers did not come across. Any thoughts? Where should I look next?
For this kind of thing you should implement IClientMessageInspector; for some sample code see http://msmvps.com/blogs/paulomorgado/archive/2007/04/27/wcf-building-an-http-user-agent-message-inspector.aspx (a minimal sketch follows the link list below).
See also (more current):
http://blog.khedan.com/2009/02/inspecting-messages-with.html
http://social.technet.microsoft.com/wiki/contents/articles/how-to-inspect-wcf-message-headers-using-iclientmessageinspector.aspx
http://yuzhangqi.itpub.net/post/37475/500654
http://wcfpro.wordpress.com/2011/03/29/iclientmessageinspector/
http://wcfpro.wordpress.com/2010/12/19/extended-wcf-preview/
http://wcfpro.wordpress.com/2011/01/31/realproxy/
http://wcfpro.wordpress.com/category/wcf-extensions/
http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/19500d14-78b7-4356-b817-fcc9abc2afcf/
http://msdn.microsoft.com/en-us/library/aa395196.aspx
WCF Content-Length HTTP header on outbound message
Adding Custom WCF header to Endpoint Programatically for Reliable Sessions
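Pulling the pattern from those links together, here is a minimal sketch of the inspector approach (the class names are mine and the cookie value is a placeholder). The inspector stamps the Cookie header on every outgoing message, so it survives regardless of what the reliable-session binding does with OperationContext:
using System.Net;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class CookieMessageInspector : IClientMessageInspector
{
    private readonly string cookieHeader;
    public CookieMessageInspector(string cookieHeader) { this.cookieHeader = cookieHeader; }

    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // Reuse the HttpRequestMessageProperty if one exists; otherwise create it.
        object property;
        HttpRequestMessageProperty httpRequest;
        if (request.Properties.TryGetValue(HttpRequestMessageProperty.Name, out property))
        {
            httpRequest = (HttpRequestMessageProperty)property;
        }
        else
        {
            httpRequest = new HttpRequestMessageProperty();
            request.Properties.Add(HttpRequestMessageProperty.Name, httpRequest);
        }
        httpRequest.Headers[HttpRequestHeader.Cookie] = cookieHeader;
        return null;
    }

    public void AfterReceiveReply(ref Message reply, object correlationState) { }
}

public class CookieBehavior : IEndpointBehavior
{
    private readonly string cookieHeader;
    public CookieBehavior(string cookieHeader) { this.cookieHeader = cookieHeader; }

    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        clientRuntime.ClientMessageInspectors.Add(new CookieMessageInspector(cookieHeader));
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}

// Usage: wcfClient.Endpoint.EndpointBehaviors.Add(new CookieBehavior("MYSAPSSO2=...;"));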
So, this issue was a lot different than we were expecting. I am still trying to find a fix, but at least I know the root cause:
I am unable to send HTTP cookies to authenticate my requests; our SAP services use a MYSAPSSO2 token (an HTTP cookie) for authentication. When trying to use WCF to connect to a reliable-session-enabled SAP web service, our cookies don't get sent up front.
We are looking for a way to build a custom authentication provider that can use HTTP cookies.
