I use HttpClient in my app to send my user/password to a service that returns cookies I can later use for all my other requests. The service is located at https://accounts.dev.example.com/login and returns two cookies with Domain=.dev.example.com. The issue I'm finding is that, on some machines (Windows Domain Controllers), these cookies are not used when I request resources on subdomains like https://accounts.dev.example.com/health-check, even though, according to the MDN docs, a cookie set for a domain can be sent with requests to its subdomains:
Domain= Optional
Specifies those hosts to which the cookie will be sent. If not specified, defaults to the host portion of the current document location (but not including subdomains). Contrary to earlier specifications, leading dots in domain names are ignored. If a domain is specified, subdomains are always included.
Do you know how to properly configure HttpClient to pass the domain cookies to subdomain requests?
A bit more detail:
The cookies returned by my authentication service at https://accounts.dev.example.com/login look like this in the HTTP headers:
Set-Cookie: AK=112233;Version=1;Domain=.dev.example.com;Path=/;Max-Age=5400;Secure;HttpOnly,
Set-Cookie: AS=445566;Version=1;Domain=.dev.example.com;Path=/;Max-Age=5400;Secure;HttpOnly,
Then, on normal workstations, I can query C#'s CookieContainer with either of these calls (GetCookies takes a Uri):
cookies.GetCookies(new Uri("https://accounts.dev.example.com"))
cookies.GetCookies(new Uri("https://dev.example.com"))
Both of which return the two cookies:
$Version=1; AK=112233; $Path=/; $Domain=.dev.example.com
$Version=1; AS=445566; $Path=/; $Domain=.dev.example.com
But on the other machines (the Domain Controllers) the first call returns an empty list, while the second returns the two cookies.
Why does the behaviour of CookieContainer.GetCookies differ depending on which machine runs the code?
My workstations run Microsoft Windows 10 Home Single Language (.NET 4.0.30319.42000) and the DCs run Microsoft Windows Server 2012 R2 Datacenter (.NET 4.0.30319.36399).
The code
This is a modified version of my code:
public static async Task<string> DoAuth(CookieContainer cookies,
    Dictionary<string, string> postHeaders,
    StringContent postBody)
{
    try
    {
        using (var handler = new HttpClientHandler())
        {
            handler.CookieContainer = cookies;
            using (var client = new HttpClient(handler, true))
            {
                foreach (var key in postHeaders.Keys)
                    client.DefaultRequestHeaders.Add(key, postHeaders[key]);
                var response = await client.PostAsync("https://accounts.dev.example.com/login", postBody);
                response.EnsureSuccessStatusCode();
                // This line returns 0 on Domain Controllers, and 2 on all other machines
                Console.Write(cookies.GetCookies(new Uri("https://accounts.dev.example.com")).Count);
                return await response.Content.ReadAsStringAsync();
            }
        }
    }
    catch (HttpRequestException e)
    {
        ...
        throw;
    }
}
As I couldn't find an answer to this (not on TechNet either), I went with the following solution, which works, but I'm not sure it's the proper way to solve the issue:
foreach (Cookie cookie in cookies.GetCookies(new Uri("https://dev.example.com")))
{
    cookies.Add(new Uri("https://accounts.dev.example.com"),
                new Cookie(cookie.Name, cookie.Value, cookie.Path, ".accounts.dev.example.com"));
}
So, I'm duplicating the cookie for each one of the subdomains that my app should send these cookies to.
The underlying issue seems to be a bug in CookieContainer's handling of the Set-Cookie header. The cause appears to be the Version= attribute in the Set-Cookie header: it makes the CookieContainer fall on its face, and the strange $Version and $Domain cookies then get sent in subsequent client requests. As far as I can tell there is no way to remove these broken cookies either; iterating GetCookies() with the originating domain does not reveal the erroneous cookies.
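One possible workaround (a sketch under my own assumptions, untested on a DC): take cookie handling away from the handler, strip the Version attribute, and let the CookieContainer parse the sanitized header:

// Sketch: bypass the handler's automatic cookie handling (UseCookies = false),
// strip the RFC 2109 "Version" attribute, then hand the cleaned header to the
// CookieContainer. Requires System.Text.RegularExpressions; "cookies" and
// "postBody" are the same objects as in the question's code.
var loginUri = new Uri("https://accounts.dev.example.com/login");
var handler = new HttpClientHandler { UseCookies = false };
using (var client = new HttpClient(handler, true))
{
    var response = await client.PostAsync(loginUri, postBody);
    response.EnsureSuccessStatusCode();
    if (response.Headers.TryGetValues("Set-Cookie", out var rawCookies))
    {
        foreach (var raw in rawCookies)
        {
            var cleaned = Regex.Replace(raw, @"\bVersion=\d+;?\s*", "", RegexOptions.IgnoreCase);
            cookies.SetCookies(loginUri, cleaned);
        }
    }
}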
Related
I have an Azure function that sends a request to a URL and sends back the response. This function kept failing with a timeout error for URLs from a particular (confidential) domain.
To debug this, I created a very minimal Azure function:
var content = string.Empty;
using (var response = await _httpClient.GetAsync(url))
{
    response.EnsureSuccessStatusCode();
    content = await response.Content.ReadAsStringAsync();
}
return new OkObjectResult(content);
This code works fine locally. When I try the deployed Azure function, it works for all the other domains I tried (e.g. https://google.com) but hits a request timeout error for that one particular domain after trying for about 90 seconds. The error happens at this line: _httpClient.GetAsync(url). Again, it works fine for this (confidential) domain locally.
I have tried deploying the Azure function to two completely different Azure service plans and regions. Same result: it doesn't work for URLs from the required domain, but works for URLs of other domains.
Error:
System.IO.IOException: Unable to read data from the transport connection: The I/O operation has been aborted because of either a thread exit or an application request..
Update (solution):
I tried sending the request from Postman, copied the C# code it generated, deployed that to the Azure function, and it now works for the problematic domain. Something like this:
var client = new RestClient(url);
client.Timeout = -1;
var request = new RestRequest(Method.GET);
IRestResponse response = client.Execute(request);
The key here is client.Timeout = -1, which seems to have fixed the problem.
Now, in my original code, I tried setting HttpClient's timeout to Timeout.InfiniteTimeSpan, both in the Startup configuration and at the individual request level, but it did not work:
services.AddHttpClient("AzureTestClient", options =>
{
    options.Timeout = Timeout.InfiniteTimeSpan;
});
Am I setting the timeout wrong in the HttpClient solution?
If you are using a Consumption plan, then maybe the confidential URL's owner needs to whitelist the whole Azure data center's IP range. You can follow the guide here, or consider upgrading from the Consumption plan to a Premium one and having a dedicated linked VNET.
Maybe your local machine is already whitelisted for that domain, while the Azure function operates from a different IP range.
Another possibility is that the URL returns an HttpStatusCode that isn't in the successful range (200-299), so it fails at EnsureSuccessStatusCode in the old code.
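If you want to rule that out, here is a minimal sketch (assuming an ILogger named log is in scope, as in a typical function signature) that surfaces the real status instead of throwing:

// Minimal sketch (assumes an ILogger named "log" is available): report the
// actual status code instead of letting EnsureSuccessStatusCode throw.
using (var response = await _httpClient.GetAsync(url))
{
    if (!response.IsSuccessStatusCode)
    {
        log.LogWarning("GET {Url} returned {StatusCode}", url, response.StatusCode);
        return new StatusCodeResult((int)response.StatusCode);
    }
    var content = await response.Content.ReadAsStringAsync();
    return new OkObjectResult(content);
}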
Normally, for the HttpClient initialization, I do something like this:
public void Configure(IWebJobsBuilder builder)
{
    builder.Services.AddHttpClient("AzureTestClient",
        options => { options.Timeout = Timeout.InfiniteTimeSpan; });
}
Then, when I want to use it in any other function, I do this, and it works:
var client = clientFactory.CreateClient("AzureTestClient");
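For reference, here is a minimal sketch of consuming that named client in a function via constructor injection; the function name, trigger, and query parameter are illustrative assumptions, not from the original code:

// Sketch (names are illustrative): inject IHttpClientFactory and resolve the
// named client, which picks up the Timeout configured at registration.
public class FetchFunction
{
    private readonly IHttpClientFactory _clientFactory;

    public FetchFunction(IHttpClientFactory clientFactory)
    {
        _clientFactory = clientFactory;
    }

    [FunctionName("Fetch")]
    public async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
    {
        var client = _clientFactory.CreateClient("AzureTestClient");
        var content = await client.GetStringAsync(req.Query["url"]);
        return new OkObjectResult(content);
    }
}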
I have this code:
var handler = new HttpClientHandler
{
    AllowAutoRedirect = false,
    CookieContainer = new CookieContainer(),
    UseCookies = true
};
return new HttpClient(handler)
{
    Timeout = Timeout.InfiniteTimeSpan
};
I'm using OpenID Connect with ASP.Net Core which returns a correlation cookie like this:
cookieHeader = "correlation=ABCDEFG; path=/signin; secure; HttpOnly";
I make my call to the site:
using (HttpResponseMessage msg = client.SendAsync(request).Result)
And get redirected to the OpenID server. However, HttpClient's cookie container doesn't contain any cookies. If I manually add an identical cookie to the response, modifying only the path to equal /, e.g.
cookieHeader = "correlation=ABCDEFG; path=/; secure; HttpOnly";
Then the cookie appears in HttpClient's CookieContainer. I don't want to have to modify ASP.NET Core's base OpenID functionality to change the cookie paths just so HttpClient will pick up the cookies and I can authenticate. Is there a way to make HttpClient save cookies that have paths specified?
It looks like this is a by-design silent failure. Trying to add these cookies manually yields a CookieException. ASP.NET Core's OpenID Connect implementation relies on an RFC that supersedes the one the .NET Framework currently implements: under RFC 2109 (.NET Framework), you can't set a cookie with a path that is not part of the current request, while .NET Core (per RFC 6265) permits and even relies on this behavior. I guess it'll be manual cookie storage for now.
Source
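For anyone going the manual route, here is a rough sketch of what I mean (my own assumption of one way to do it; request and redirectUri stand in for the real OpenID round trip):

// Rough sketch: no CookieContainer at all; keep a simple name -> value store
// and replay it by hand. Requires System.Linq for the Select/Join below.
var handler = new HttpClientHandler { UseCookies = false, AllowAutoRedirect = false };
var client = new HttpClient(handler);
var jar = new Dictionary<string, string>();

using (var response = await client.SendAsync(request))
{
    if (response.Headers.TryGetValues("Set-Cookie", out var setCookies))
    {
        foreach (var raw in setCookies)
        {
            var pair = raw.Split(';')[0].Split(new[] { '=' }, 2); // "name=value"
            if (pair.Length == 2) jar[pair[0].Trim()] = pair[1];
        }
    }
}

// Replay everything on the follow-up request, ignoring path scoping entirely.
var next = new HttpRequestMessage(HttpMethod.Get, redirectUri);
next.Headers.Add("Cookie", string.Join("; ", jar.Select(kv => $"{kv.Key}={kv.Value}")));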
I am working on my first Windows Phone 8.1 app (Universal, if that matters, but only the Windows Phone part is implemented at the moment). At first everything worked very smoothly, but as soon as my app has been running for about 25-30 minutes I can no longer use my HttpClient. I use the Windows.Web.Http.HttpClient.
In my first tries I used a single `HttpClient` and reused it all the time. When I became aware that this was not working, I started using a new `HttpClient` for each request. But still no luck.
This is my method to get a new HttpClient:
private HttpClient GetClient()
{
    var filter = new HttpBaseProtocolFilter
    {
        AllowUI = false,
        CacheControl = { WriteBehavior = HttpCacheWriteBehavior.NoCache },
        ServerCredential =
            new PasswordCredential(
                BaseApiUri.ToString(),
                credentials.UserName,
                credentials.Password),
    };
    var httpClient = new HttpClient(filter);
    var headers = httpClient.DefaultRequestHeaders;
    var httpConnectionOptionHeaderValueCollection = headers.Connection;
    httpConnectionOptionHeaderValueCollection.Clear();
    headers.Accept.TryParseAdd("application/json");
    headers.CacheControl.TryParseAdd("no-cache");
    headers.Add("Pragma", "no-cache");
    headers.Add("Keep-Alive", "false");
    headers.Cookie.Clear();
    return httpClient;
}
The extra code setting the headers and clearing the cookies is my attempt to stop some kind of connection caching that might be happening under the surface. But still no luck.
This is my method for making requests to my API:
private async Task<bool> PostNoResponseRequestTo(string relativeUri, object requestContent, CancellationToken cancellationToken)
{
    var targetUri = new Uri(BaseApiUri, relativeUri);
    var requestJson = JsonConvert.SerializeObject(requestContent);
    var content = new HttpStringContent(requestJson, UnicodeEncoding.Utf8, "application/json");
    try
    {
        using (var httpClient = this.GetClient())
        {
            var post =
                await httpClient.PostAsync(targetUri, content).AsTask(cancellationToken).ContinueWith(
                    async request =>
                    {
                        using (var response = await request)
                        {
                            return response.IsSuccessStatusCode;
                        }
                    },
                    cancellationToken);
            return await post;
        }
    }
    catch (Exception)
    {
        return false;
    }
}
This works fine for about 25-30 minutes, after which the calls to the API suddenly start to fail. I start getting a 401, but as you can see I have specified credentials, and because those are working and do not change (I hardcoded them to test this), I'm starting to believe that the problem is on the API side.
This is the response I get:
StatusCode: 401, ReasonPhrase: 'Unauthorized', Version: 2, Content: Windows.Web.Http.HttpStreamContent, Headers:
{
    Server: Microsoft-IIS/8.5
    Date: Fri, 20 Mar 2015 14:25:06 GMT
    WWW-Authenticate: Digest qop="auth",algorithm=MD5-sess,nonce="+Upgraded+NounceRemoved",charset=utf-8,realm="Digest", Negotiate, NTLM
    X-Powered-By: ASP.NET
}
{
    Content-Length: 1344
    Content-Type: text/html
}
My API is an ASP.NET project that uses ServiceStack for its API functionality.
It runs on IIS with Digest authentication enabled (all other schemes are disabled).
By inspecting the logs I became aware of a failing API call before each successful call. But if I'm right, this is by design with Digest auth, because I have not found a way to tell the client up front that the other side is using Digest auth. I was able to specify this kind of information in my other .NET projects, but for some reason Microsoft changed the code (and namespace) for the HttpClient. I am also aware of the HttpClient in the original namespace, available through NuGet, but that is not working for me: I get an error in my output window as soon as I make any call, and the app closes without any kind of information.
Back in the logs, I was able to get some information with the help of extended logging and the log-analysis tool. The error is something like (I can't access it right now, will edit it later): 'Invalid token passed to function/method'.
I really hope that someone can help me solve this problem, as it makes the app nearly unusable. My users have to restart the app every 15 minutes to be on the safe side.
Thanks for any advice that helps.
Try checking the Machine Key setting in IIS. If "Automatically generate at runtime" is ticked, a new key is generated every time the app pool restarts, which might be causing your issue. The Machine Key can be set at the server, website, or application level. Since Digest authentication involves encryption, this might be the issue.
Managing Websites with IIS Manager (part 6) - The Machine Key and Windows Authentication
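One way to rule this out is to pin a fixed key in web.config instead of generating one at runtime; a sketch (the key values are placeholders you would generate yourself):

<!-- Sketch: fix the machine key so it survives app-pool recycles.
     Replace the placeholder keys with real generated values. -->
<system.web>
  <machineKey validationKey="[generated 128-hex-char key]"
              decryptionKey="[generated 64-hex-char key]"
              validation="HMACSHA256"
              decryption="AES" />
</system.web>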
My app communicates with an internal web API that requires authentication.
When I send the request I get the 401 challenge as expected, the handshake occurs, the authenticated request is re-sent and everything continues fine.
However, I know that the auth is required. Why do I have to wait for the challenge? Can I force the request to send the credentials in the first request?
My request generation is like this:
private static HttpWebRequest BuildRequest(string url, string methodType)
{
    var request = HttpWebRequest.CreateHttp(url);
    request.PreAuthenticate = true;
    request.AuthenticationLevel = AuthenticationLevel.MutualAuthRequested;
    request.Credentials = CredentialCache.DefaultNetworkCredentials;
    request.Proxy.Credentials = CredentialCache.DefaultNetworkCredentials;
    request.ContentType = CONTENT_TYPE;
    request.Method = methodType;
    request.UserAgent = BuildUserAgent();
    return request;
}
Even with this code, the auth header isn't included in the initial request.
I know how to include the auth info with Basic... what I want is to use the Windows auth of the user executing the app (so I can't store the password in a config file).
UPDATE: I also tried using an HttpClient and its own .Credentials property, with the same result: no auth header is added to the initial request.
The only way I could get the auth header into the initial request was to hack it in manually using Basic authentication (which won't fly for this use case).
NTLM is a challenge/response-based authentication protocol. You need to make the first request so that the server can issue the challenge; then, in a subsequent request, the client sends the response to the challenge. The server then verifies this response with the domain controller by giving it both the challenge and the response that the client sent.
Without knowing the challenge, you can't send the response, which is why two requests are needed.
Basic authentication is password-based, so you can short-circuit this by sending the credentials with the first request, but in my experience this can be a problem for some servers to handle.
More details available here:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa378749(v=vs.85).aspx
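For comparison, the Basic short-circuit mentioned above is just a header you attach up front (a sketch; userName and password are placeholders, and this does not apply to the Windows-auth case in the question):

// Pre-emptive Basic auth (requires System.Text): no challenge round trip,
// because the Authorization header rides along on the very first request.
var raw = Encoding.ASCII.GetBytes($"{userName}:{password}");
request.Headers["Authorization"] = "Basic " + Convert.ToBase64String(raw);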
I'm not 100% sure, but I suspect that there is no way around this; it's simply the way HttpWebRequest works.
In the online .NET source, in the function DoSubmitRequestProcessing (which is here), you can see this comment just after the start of the function, at line 1731:
// We have a response of some sort, see if we need to resubmit
// it do to authentication, redirection or something
// else, then handle clearing out state and draining out old response.
A little further down, at line 1795 (some lines removed for brevity):
if (resubmit)
{
    if (CacheProtocol != null && _HttpResponse != null) CacheProtocol.Reset();
    ClearRequestForResubmit(ntlmFollowupRequest);
    ...
}
And in ClearRequestForResubmit, at line 5891:
// We're uploading and need to resubmit for Authentication or Redirect.
and then (Line 5923):
// The second NTLM request is required to use the same connection, don't close it
if (ntlmFollowupRequest) {....}
To my (admittedly n00bish) eyes these comments seem to indicate that the developers decided to follow the "standard" challenge-response protocol for NTLM/Kerberos and not include any way of sending authentication headers up front.
Setting PreAuthenticate is what you want, and you are already doing it. The very first request will still do the handshake, but for subsequent requests the credentials will be sent automatically (based on the URL being used). You can read up on it here: http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.preauthenticate(v=vs.110).aspx
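To illustrate (a sketch built on the question's own BuildRequest; the URLs are made up): the first call pays for the handshake, and later calls to the same URI prefix go out with the header already attached:

// First request: triggers the 401 challenge and completes the handshake.
var first = BuildRequest("https://intranet.example.com/api/items", "GET");
using (first.GetResponse()) { }

// Subsequent request to the same URI prefix: with PreAuthenticate = true,
// the credentials are sent pre-emptively, avoiding another challenge.
var second = BuildRequest("https://intranet.example.com/api/other", "GET");
using (var resp = second.GetResponse()) { }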
I am writing a utility that would allow me to monitor the health of our websites. This consists of a series of validation tasks that I can run against a web application. One of the tests is to anticipate the expiration of a particular SSL certificate.
I am looking for a way to pre-fetch the SSL certificate installed on a web site using .NET or a WINAPI so that I can validate the expiration date of the certificate associated with a particular website.
One way I could do this is to cache the certificates when they are validated in the ServicePointManager.ServerCertificateValidationCallback handler and then match them up with configured web sites, but this seems a bit hackish. Another would be to configure the application with the certificate for the website, but I'd rather avoid this if I can in order to minimize configuration.
What would be the easiest way for me to download an SSL certificate associated with a website using .NET so that I can inspect the information the certificate contains to validate it?
EDIT:
To expand on the answer below: there is no need to manually create the ServicePoint before making the request; it is created on the request object as part of executing the request.
private static string GetSSLExpiration(string url)
{
    HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
    using (WebResponse response = request.GetResponse()) { }
    if (request.ServicePoint.Certificate != null)
    {
        return request.ServicePoint.Certificate.GetExpirationDateString();
    }
    else
    {
        return string.Empty;
    }
}
I've just tried this, which "works on my machine (tm)".
It returns the text string representing the expiry date on the server's certificate, and requires an actual hit to be made on the web site.
private string GetSSLExpiryDate(string url)
{
    Uri u = new Uri(url);
    ServicePoint sp = ServicePointManager.FindServicePoint(u);
    string groupName = Guid.NewGuid().ToString();
    HttpWebRequest req = HttpWebRequest.Create(u) as HttpWebRequest;
    req.ConnectionGroupName = groupName;
    using (WebResponse resp = req.GetResponse())
    {
        // Ignore the response, and close it.
    }
    sp.CloseConnectionGroup(groupName);
    // Implement favourite null check pattern here on sp.Certificate
    string expiryDate = sp.Certificate.GetExpirationDateString();
    return expiryDate;
}
I'm afraid I don't know all the rights and wrongs of using ServicePoint, and any other hoops that you might need to jump through, but you do get an SSL expiry date for the actual web site you want to know about.
EDIT:
Ensure the url parameter uses the https:// protocol, e.g.:
string contosoExpiry = GetSSLExpiryDate("https://www.contoso.com/");
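As a side note, and purely my own alternative rather than anything from the answers above: if you'd rather not issue an HTTP request at all, a raw TLS handshake via SslStream also hands you the server certificate:

// Alternative sketch: fetch the certificate with a plain TLS handshake.
// Requires System.Net.Security, System.Net.Sockets and
// System.Security.Cryptography.X509Certificates.
private static DateTime GetSslExpiry(string host, int port = 443)
{
    using (var tcp = new TcpClient(host, port))
    using (var ssl = new SslStream(tcp.GetStream(), false,
        (sender, cert, chain, errors) => true)) // accept any cert; we only inspect it
    {
        ssl.AuthenticateAsClient(host);
        return new X509Certificate2(ssl.RemoteCertificate).NotAfter;
    }
}

Calling GetSslExpiry("www.contoso.com") returns the leaf certificate's NotAfter date without going through ServicePoint or a full web request.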