ASP.Net MVC application Defaults to TLS 1.0 - c#

We have an ASP.Net MVC application that uses server-to-server communication for retrieving some info.
When we run an installation in the AWS cloud, the request fails because, by default, WebRequest uses TLS 1.0, which we have disabled in our environment. Using the same code in another project defaults to TLS 1.2. Also, hardcoding the protocol via ServicePointManager fixes the issue.
Does anyone have experience with a similar problem and its underlying cause? I would like to fix this without hardcoding the protocol, because that is not future-proof.

I had a similar problem, and ended up simply making it a configuration setting:
//read setting as comma-separated string from wherever you want to store settings
//e.g. "SSL3, TLS, TLS11, TLS12"
string tlsSetting = GetSetting("tlsSettings");
//by default, support whatever mix of protocols you want..
var tlsProtocols = SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12;
if (!string.IsNullOrEmpty(tlsSetting))
{
    //we have an explicit setting, so initially set no protocols whatsoever.
    SecurityProtocolType selOpts = (SecurityProtocolType)0;
    //split the comma-separated list of protocols in the setting.
    var settings = tlsSetting.Split(new[] { ',' });
    //iterate over the list, and see if any parse directly into the available
    //SecurityProtocolType enum values.
    foreach (var s in settings)
    {
        if (Enum.TryParse<SecurityProtocolType>(s.Trim(), true, out var tmpEnum))
        {
            //It seems we want this protocol. Add it to the flags enum setting
            //(bitwise or).
            selOpts = selOpts | tmpEnum;
        }
    }
    //if we've allowed any protocols, override our default set earlier.
    if ((int)selOpts != 0)
    {
        tlsProtocols = selOpts;
    }
}
//now set ServicePointManager directly to use our protocols:
ServicePointManager.SecurityProtocol = tlsProtocols;
This way, you can enable/disable specific protocols, and if values are ever added to or removed from the enum definition, you won't even need to revisit the code.
Obviously, a comma-separated list of values that map to an enum is a little unfriendly as a setting, but you could set up some sort of friendlier mapping if you like; it suited our needs fine.
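As for the underlying cause: on .NET Framework, the set of protocols offered by default depends on which framework version the application targets (for ASP.NET, the <httpRuntime targetFramework="..."> setting), which is why the same code can default to TLS 1.0 in one project and TLS 1.2 in another. If you can target .NET Framework 4.7 or later, a more future-proof default than a hardcoded pair is to defer to the operating system. A minimal sketch, assuming 4.7+ (SecurityProtocolType.SystemDefault does not exist in earlier versions):
//let the OS decide which TLS versions to offer; new versions then arrive
//via Windows updates rather than code changes.
var tlsProtocols = SecurityProtocolType.SystemDefault;
//...then apply the explicit configuration override exactly as above, if present.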

Related

C# SSL routines:tls_post_process_client_hello:no shared cipher

When I use this code block with TlsCipherSuite, I get the error "SSL routines:tls_post_process_client_hello:no shared cipher". Can you give some advice?
public static KestrelServerOptions ListenSera(this KestrelServerOptions options, SeraSettings seraSettings)
{
    options.Listen(IPAddress.Parse(seraSettings.ListenIP), seraSettings.Port, listenOptions =>
    {
        listenOptions.UseConnectionLimits(seraSettings.ConnectionLimit);
        listenOptions.UseHttps(adapterOptions =>
        {
            adapterOptions.OnAuthenticate = (context, authenticationOptions) =>
            {
                authenticationOptions.CipherSuitesPolicy = new CipherSuitesPolicy(new[]
                {
                    TlsCipherSuite.TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
                    TlsCipherSuite.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
                });
            };
            adapterOptions.SslProtocols = SslProtocols.Tls12;
            adapterOptions.CheckCertificateRevocation = false;
            adapterOptions.HandshakeTimeout = TimeSpan.FromSeconds(seraSettings.TlsHandshakeTimeout);
            adapterOptions.ClientCertificateMode = ClientCertificateMode.AllowCertificate;
            adapterOptions.ServerCertificate =
                new X509Certificate2(Path.Combine("certs", seraSettings.ServerCertificateFilename),
                    seraSettings.ServerCertificatePassword);
            adapterOptions.AllowAnyClientCertificate();
        });
        listenOptions.UseConnectionLogging();
        listenOptions.UseConnectionHandler<VeraKecManager>();
    });
    return options;
}
This means that the ciphers you offered to the server are not available on the server. For some unknown reason you offered only these two ciphers:
TlsCipherSuite.TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
TlsCipherSuite.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
The first one uses plain DHE as the key exchange, which is slow and therefore often disabled on servers (it puts too much load on the server). In the second cipher you offer ECDHE as the key exchange, which is much faster and usually available, but you offer it only in combination with ECDSA, which means the server needs an ECC certificate rather than the more common RSA certificate.
In general, it is not a good idea to change the offered ciphers from the defaults. It is even worse to restrict them to only a few for a reason you cannot explain. You should never change security settings without understanding what they actually do and what implications a change has: it might not only break your code, it might also keep working but in an insecure way. So leave security settings at their defaults, and change only those where the default is not sufficient.
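If you nevertheless must pin cipher suites (for example to satisfy a compliance requirement), a minimal sketch is to include at least one ECDHE+RSA suite so that a server with an ordinary RSA certificate can still negotiate. The list below is illustrative, not a recommendation, and note that CipherSuitesPolicy is only honored on platforms where SslStream is backed by OpenSSL (e.g. Linux); on Windows it is not supported:
//Sketch only: ECDHE_RSA suites work with an RSA server certificate,
//while ECDHE_ECDSA suites require an ECC certificate.
//Validate any such list against your own requirements before adopting it.
authenticationOptions.CipherSuitesPolicy = new CipherSuitesPolicy(new[]
{
    TlsCipherSuite.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
    TlsCipherSuite.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
    TlsCipherSuite.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
});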
I guess there are a few more things you can do to diagnose the problem.
Try running Wireshark and capture the TLS handshake packets. If you take a close look, you should see which cipher suites are offered by the client and by the server.
If you are using Windows, check the registry (Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\00010002\Functions). This key should list all cipher suites available on your machine.
If you are using a certificate, check which cipher suite it mentions and whether any elliptic curves are used. In my case, the certificate mentioned the NistP521 curve (public key parameters ECDSA_P521), which is not enabled by default in Windows. I had to modify the registry to enable it, changing the value Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\00010002\Functions\EccCurves from:
curve25519
NistP256
NistP384
to
curve25519
NistP256
NistP384
NistP521
Hope any of the above will put you on the right track.

When should I use SecureSocketOptions or useSsl when connecting using MailKit

I'm confused on how to use the third parameter when setting up smtp with MailKit.
Here is what I have so far:
// *************** SEND EMAIL *******************
using (var client = new MailKit.Net.Smtp.SmtpClient(new ProtocolLogger("smtp.log")))
{
    client.SslProtocols = System.Security.Authentication.SslProtocols.Tls12;
    //accept all SSL certificates
    client.ServerCertificateValidationCallback = (s, c, h, e) => true;
    // Note: since we don't have an OAuth2 token, disable
    // the XOAUTH2 authentication mechanism.
    client.AuthenticationMechanisms.Remove("XOAUTH2");
    // client.Connect(emailSettings.SmtpServer, emailSettings.SmtpPort, emailSettings.IsSslEnabled);
    client.Connect(emailSettings.SmtpServer, emailSettings.SmtpPort, emailSettings.AuthType);
    if (emailSettings.IsAuthenticationRequired)
    {
        // Note: only needed if the SMTP server requires authentication
        client.Authenticate(emailSettings.SmtpUsername, emailSettings.SmtpPassword);
    }
    if (emailSettings.TimeOut == 0) emailSettings.TimeOut = 10;
    client.Timeout = emailSettings.TimeOut * 1000;
    client.Send(message);
    client.Disconnect(true);
}
My confusion is on this line:
client.Connect(emailSettings.SmtpServer, emailSettings.SmtpPort , true);
I have the option to pass in either true/false or SecureSocketOptions.
I'm not sure I understand how the two different settings affect the sending of emails. I assume I use either the true/false for useSsl or the SecureSocketOptions? I'm not sure how these work together.
The options for SecureSocketOptions are: None, Auto, SslOnConnect, StartTls, StartTlsWhenAvailable.
Do these options negate the need for useSsl?
useSsl is a dumbed-down version of SecureSocketOptions.
When you pass true for useSsl, it maps to SecureSocketOptions.SslOnConnect.
When you pass false for useSsl, it maps to SecureSocketOptions.StartTlsWhenAvailable.
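A minimal sketch of the equivalence (host and port are placeholders):
//these two calls behave identically:
client.Connect(host, port, true);
client.Connect(host, port, SecureSocketOptions.SslOnConnect);
//as do these two:
client.Connect(host, port, false);
client.Connect(host, port, SecureSocketOptions.StartTlsWhenAvailable);
//only the enum overload can disable SSL/TLS entirely:
client.Connect(host, port, SecureSocketOptions.None);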
Looking at the MailKit documentation, the Connect method has 5 different signatures (different parameters).
When you pass a boolean to Connect, it means "use SSL" if true and "don't use SSL" if false. There is no overload that accepts both the boolean and SecureSocketOptions.
http://www.mimekit.net/docs/html/Overload_MailKit_Net_Smtp_SmtpClient_Connect.htm
You should read the documentation at the link above.
Also this might be useful from their documentation:
The useSsl argument only controls whether or not the client makes an
SSL-wrapped connection. In other words, even if the useSsl parameter
is false, SSL/TLS may still be used if the mail server supports the
STARTTLS extension.
To disable all use of SSL/TLS, use the Connect(String, Int32,
SecureSocketOptions, CancellationToken) overload with a value of
SecureSocketOptions.None instead.
You should use it when you are on a trusted internal network (a domain-controller-type or otherwise trusted box) and when you want to secure transmissions against eavesdropping. Most mail servers use TLS by default anyway, so even if your code passes false you may get a "system wrap", where the system itself overrides the setting at the lower OSI layers. Personally, I would recommend using it whenever you can: if I remember correctly, it avoids a couple of the older transmission drops on SYN/ACK handshake requests and allows a higher requested timeout value.

UriHelper how to get production url path

var displayUrl = UriHelper.GetDisplayUrl(Request);
var urlBuilder = new UriBuilder(displayUrl) { Query = null, Fragment = null };
string _activation_url = urlBuilder.ToString().Substring(0, urlBuilder.ToString().LastIndexOf("/")) + "/this_is_my_link.html";
I expect to get the correct production URI path, but I still get
http://localhost:5000/api/mdc/this_is_my_link.html
I deployed this on CentOS 7.
Please help me.
Thanks,
Don
If you are using a reverse proxy, you should read this guide from Microsoft.
Essentially, your reverse proxy should provide these headers to your ASP.NET Core application:
X-Forwarded-For - the client IP
X-Forwarded-Host - the Host header from the client (e.g. www.example.com:80)
X-Forwarded-Proto - the protocol (e.g. HTTPS)
Then you need to configure your ASP.NET Core application to accept them. You can do so by calling the app.UseForwardedHeaders() method in your Startup's Configure method.
By default (if I'm reading the docs correctly), UseForwardedHeaders called as above will accept X-Forwarded-For and X-Forwarded-Proto from a localhost reverse proxy.
If your situation is more complicated than that, you must configure the headers you want and the trusted reverse proxies:
var forwardedOptions = new ForwardedHeadersOptions()
{
    // allow for, host, and proto (ForwardedHeaders.All also works here)
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedHost | ForwardedHeaders.XForwardedProto
};
// if it's a single IP or a set of IPs, but not a whole subnet
forwardedOptions.KnownProxies.Add(IPAddress.Parse("192.168.0.5"));
// if it's a whole subnet
forwardedOptions.KnownNetworks.Add(new IPNetwork(IPAddress.Parse("192.168.0.1"), 24)); // 192.168.0.1 - 192.168.0.254
app.UseForwardedHeaders(forwardedOptions);
Also note that, depending on the reverse proxy you use, you might need to configure this on the reverse proxy itself.
On ASP.NET Core, use:
var absoluteUri = string.Concat(
    request.Scheme,
    "://",
    request.Host.ToUriComponent(),
    request.PathBase.ToUriComponent(),
    request.Path.ToUriComponent(),
    request.QueryString.ToUriComponent());
Or see: Getting absolute URLs using ASP.NET Core

Easy and reasonable secure way to identify a specific network

We would like to enable some hidden features of our software only if it is run inside the company network. The key requirements are:
no need for a third-party library outside of .NET 4.5.1
easy to implement (should not be more than some dozens of lines; I don't want to reimplement a crypto library)
It should be reasonably safe:
at least: hard to reverse engineer
at best: "impossible" to break even with read access to the source code
low maintenance overhead
a Win2012 server is available for installation of additional software (open source or own implementation preferred; the server can be assumed to be safe)
What I have thought about:
1. Check if a specific PC is available with a known MAC or IP (current implementation; not really secure and has some other flaws)
2. Test if a service on a specific port gives a known response (i.e. I send 'Hello' to MyServer:12345 and the server responds with 'World')
3. Similar to the 2nd, but with a more complex challenge (i.e. send a seed for an RNG to the server and verify the response; see the sketch after this list)
4. Set up an Apache with HTTPS and verify the certificate
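A minimal sketch of option 3 as an HMAC-based challenge-response, fitting the "plain .NET 4.5.1" constraint (the port, framing, and secret provisioning are illustrative, not from the question). Note that it cannot satisfy the "safe even with read access to the source" requirement, since the shared secret ships with the client; that is one argument for the certificate-based option below:
//Client side of a hypothetical challenge-response check:
//send 32 random bytes, expect HMAC-SHA256(secret, challenge) back.
private static bool VerifyChallengeResponse(string serverHost, byte[] secretKey)
{
    var challenge = new byte[32];
    using (var rng = System.Security.Cryptography.RandomNumberGenerator.Create())
        rng.GetBytes(challenge);
    using (var tcp = new System.Net.Sockets.TcpClient(serverHost, 12345))
    using (var stream = tcp.GetStream())
    {
        stream.Write(challenge, 0, challenge.Length);
        var response = new byte[32];
        int read = 0;
        while (read < response.Length)
        {
            int n = stream.Read(response, read, response.Length - read);
            if (n == 0) return false; //connection closed early
            read += n;
        }
        using (var hmac = new System.Security.Cryptography.HMACSHA256(secretKey))
        {
            byte[] expected = hmac.ComputeHash(challenge);
            return System.Linq.Enumerable.SequenceEqual(expected, response);
        }
    }
}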
If you use Active Directory, you could add a reference to System.DirectoryServices (the ActiveDirectorySite class lives in the System.DirectoryServices.ActiveDirectory namespace) and check
ActiveDirectorySite currentSite = ActiveDirectorySite.GetComputerSite();
then you can get a bit of information from the currentSite object and check against that. That's how I enable/disable features of an application I'm currently developing.
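For example, a minimal sketch (the site name "Default-First-Site-Name" is a hypothetical placeholder; GetComputerSite() throws if the machine is not domain-joined, so real code should catch that):
//compare the machine's AD site name against the site your company network uses
bool insideCompanyNetwork = currentSite.Name == "Default-First-Site-Name";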
I also grab:
var client = Dns.GetHostEntry(Dns.GetHostName());
foreach (var ip in client.AddressList)
{
    if (ip.AddressFamily == System.Net.Sockets.AddressFamily.InterNetwork)
    {
        ipAddress = ip;
    }
}
which you can check to make sure the client is connected to the proper network.
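For instance, a one-line sketch testing the address against a company range (10.0.0.0/8 here is purely illustrative; substitute your own):
//hypothetical check: is the machine's IPv4 address in 10.0.0.0/8?
bool looksInternal = ipAddress != null && ipAddress.GetAddressBytes()[0] == 10;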
I've chosen the last option: set up a webserver on the intranet and verify its certificate.
It was easier than expected. There are enough tutorials on setting up an Apache with HTTPS for every supported OS. The self-signed certificate has a lifetime of 9999 days, which should be okay until 2042. The C# part is also reasonably small:
private static bool m_isHomeLocation = false;
public static bool IsHomeLocation
{
    get
    {
        if (m_isHomeLocation)
            return true;
        try
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create("https://yourLicenseServer:yourConfiguredPort");
            request.ServerCertificateValidationCallback += ((s, certificate, chain, sslPolicyErrors) => true);
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            response.Close();
            var thumbprint = new X509Certificate2(request.ServicePoint.Certificate).Thumbprint;
            m_isHomeLocation = (thumbprint == "WhateverThumbprintYourCertificateHave");
        }
        catch
        {
            // pass - maybe next time
        }
        return m_isHomeLocation;
    }
}

X509Certificate2.Verify() method always returns false for a valid certificate

I am using a smart card for authentication.
The SecurityTokenService (authentication service) is hosted on my machine only. The smart card has a valid certificate, and its root certificate is also installed in the Local Computer store on my machine.
When I use the X509Certificate2.Verify method to validate the certificate in my service, it always returns false.
Can someone help me understand why X509Certificate2.Verify() always returns false?
Note:
I used X509Chain and checked for all the flags (X509VerificationFlags.AllFlags). When I build the chain, it returns true with ChainStatus as RevocationStatusUnknown.
EDIT 1:
I observed that X509Certificate2.Verify() returns true if I write this code in a Windows Forms application. It returns false only in the service-side code. Why so? Strange but true!
The X509VerificationFlags values are suppressions, so specifying X509VerificationFlags.AllFlags actually prevents Build from returning false in most situations.
The RevocationStatusUnknown response seems particularly relevant. Whichever certificate it is reporting that for cannot be verified to be not revoked. The Verify method can be modeled as
public bool Verify()
{
    using (X509Chain chain = new X509Chain())
    {
        // The defaults, but expressing it here for clarity
        chain.ChainPolicy.RevocationMode = X509RevocationMode.Online;
        chain.ChainPolicy.RevocationFlag = X509RevocationFlag.ExcludeRoot;
        chain.ChainPolicy.VerificationTime = DateTime.Now;
        chain.ChainPolicy.VerificationFlags = X509VerificationFlags.NoFlag;
        return chain.Build(this);
    }
}
Which, since it is not asserting X509VerificationFlags.IgnoreCertificateAuthorityRevocationUnknown or X509VerificationFlags.IgnoreEndRevocationUnknown while requesting an X509RevocationMode other than None, fails.
First, you should identify which certificate(s) in the chain is(/are) failing:
using (X509Chain chain = new X509Chain())
{
    // The defaults, but expressing it here for clarity
    chain.ChainPolicy.RevocationMode = X509RevocationMode.Online;
    chain.ChainPolicy.RevocationFlag = X509RevocationFlag.ExcludeRoot;
    chain.ChainPolicy.VerificationTime = DateTime.Now;
    chain.Build(cert);
    for (int i = 0; i < chain.ChainElements.Count; i++)
    {
        X509ChainElement element = chain.ChainElements[i];
        if (element.ChainElementStatus.Length != 0)
        {
            Console.WriteLine($"Error at depth {i}: {element.Certificate.Subject}");
            foreach (var status in element.ChainElementStatus)
            {
                Console.WriteLine($"  {status.Status}: {status.StatusInformation}");
            }
        }
    }
}
If you look at any failing certificate in the Windows certificate UI (double-click the .cer in Explorer or in the Certificates MMC snap-in), look for a field named "CRL Distribution Points". These are the URLs that will be retrieved at runtime. Perhaps your system has a data egress restriction that doesn't allow those particular URLs to be queried. You can always try issuing a web request from your web service to see if it can fetch the URLs outside the context of the certificate subsystem.
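For example, a quick probe (the URL is a hypothetical placeholder; copy the real one from the certificate's "CRL Distribution Points" field):
//try downloading a CRL URL from inside the service to rule out egress blocks
using (var wc = new System.Net.WebClient())
{
    byte[] crl = wc.DownloadData("http://crl.example.com/ca.crl");
    Console.WriteLine($"Fetched {crl.Length} bytes");
}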
I think the problem is due to the proxy server and some security settings in my organization. I cannot give a valid reason why it works from a WinForms client and not from code hosted under IIS.
But the fact I want readers to know is that the Verify() method worked in server-side code too when I hosted the service in IIS on a machine outside my usual domain! So you may want to check whether the firewall settings of your domain/organization are getting in your way.
