I have a .NET 3.5 desktop application that had been showing periodic slowdowns whenever the test machine it was on was out of the office.
I managed to replicate the error on a machine in the office without an internet connection, but it was only when I used ANTS Performance Profiler that I got a clearer picture of what was going on.
In ANTS I saw a "Waiting for synchronization" taking up to 16 seconds that corresponded to the delay I could see in the application when NHibernate tried to load the System.Data.SqlServerCE.dll assembly.
If I tried the action again immediately, it would work with no delay, but if I left it for 5 minutes it would be slow to load again the next time I tried it.
From my research so far, it appears to be because the SqlServerCE DLL is signed, so the system is trying to connect to fetch the certificate revocation lists and timing out.
Disabling the "Automatically detect settings" setting in the Internet Options LAN settings makes the problem go away, as does disabling "Check for publisher's certificate revocation".
But the admins where this application will be deployed are not going to be happy with the idea of disabling certificate checking on a per-machine or per-user basis, so I really need to get application-level disabling of the CRL check working.
There is a well-documented bug in .NET 2.0 which describes this behaviour and offers a possible fix with a config file element:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <generatePublisherEvidence enabled="false"/>
  </runtime>
</configuration>
This is NOT working for me, however, even though I am using .NET 3.5.
The SqlServerCE DLL is being loaded dynamically by NHibernate, and I wonder if the fact that it's dynamic could somehow be why the setting isn't working, but I don't know how I could check that.
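One way I could check when the load (and the associated signature check) actually happens is to hook AppDomain.AssemblyLoad as early as possible in startup and log timestamps - a minimal diagnostic sketch only, not a fix:
// Diagnostic sketch: log each assembly load with a timestamp so a slow
// load of System.Data.SqlServerCe can be correlated with the UI delay.
// Wire this up as the first line of Main.
AppDomain.CurrentDomain.AssemblyLoad += (sender, args) =>
    Console.WriteLine(string.Format("{0:HH:mm:ss.fff} loaded {1}",
        DateTime.Now, args.LoadedAssembly.FullName));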
Can anyone offer suggestions as to why the config setting might not work?
Or is there another way I could disable the check at the application level, perhaps a CAS policy setting that I can use to set an exception for the application when it's installed?
Or is there something I can change in the application to up the trust level or something like that?
You can specify in code if you want to check the revocation list per application:
ServicePointManager.CheckCertificateRevocationList = false;
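A minimal sketch of where to set it - the flag has to be set before the first connection is made, e.g. at the top of Main:
using System.Net;

class Program
{
    static void Main(string[] args)
    {
        // Disable CRL checking for SSL connections made by this process.
        // Set this before any ServicePoint is created.
        ServicePointManager.CheckCertificateRevocationList = false;

        // ... rest of application startup ...
    }
}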
In this blog posting (which cites another source) you have two options: disable CRL checking system-wide or per application.
Disable CRL checking machine-wide:
Control Panel -> Internet Options -> Advanced -> under Security, uncheck the "Check for publisher's certificate revocation" option.
Disable CRL checking for a specific .NET application:
See this Microsoft KB article: http://support.microsoft.com/kb/936707
What solved the problem for me:
I (think I) had a problem with online revocation before, so I explicitly switched to offline. Due to the warning, I now had to change...
_ = builder.Services.AddAuthentication(CertificateAuthenticationDefaults.AuthenticationScheme)
    .AddCertificate(options =>
    {
        options.AllowedCertificateTypes = CertificateTypes.All;
        options.RevocationMode = X509RevocationMode.Offline;
    });
... to ...
_ = builder.Services.AddAuthentication(CertificateAuthenticationDefaults.AuthenticationScheme)
    .AddCertificate(options =>
    {
        options.AllowedCertificateTypes = CertificateTypes.All;
        options.RevocationMode = X509RevocationMode.NoCheck;
    });
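For reference, the snippets above assume the certificate authentication package and these using directives:
using Microsoft.AspNetCore.Authentication.Certificate;
using System.Security.Cryptography.X509Certificates;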
Is anyone else having a difficult time getting Twitter's OAuth callback URL to hit their localhost development environment?
Apparently it has been disabled recently. http://code.google.com/p/twitter-api/issues/detail?id=534#c1
Does anyone have a workaround? I don't really want to stop my development.
Alternative 1.
Set up your hosts file (Windows: C:\Windows\System32\drivers\etc\hosts; Unix: /etc/hosts) to point a live domain to your localhost IP, such as:
127.0.0.1 xyz.example
where xyz.example is your real domain.
Alternative 2.
Also, the linked article gives the tip to alternatively use a URL-shortener service: shorten your local URL and provide the result as the callback.
Alternative 3.
Furthermore, it seems that it works to provide for example http://127.0.0.1:8080 as callback to Twitter, instead of http://localhost:8080.
I just had to do this last week. Apparently localhost doesn't work, but 127.0.0.1 does. Go figure.
This of course assumes that you are registering two apps with Twitter, one for your live www.mysite.example and another for 127.0.0.1.
Just put http://127.0.0.1:xxxx/ as the callback URL, where xxxx is the port for your framework
Yes, it was disabled because of the recent security issue that was found in OAuth. The only solution for now is to create two OAuth applications - one for production and one for development. In the development application you set your localhost callback URL instead of the live one.
Edit the callback URL:
http://localhost:8585/logintwitter.aspx
becomes:
http://127.0.0.1:8585/logintwitter.aspx
This is how I did it:
Registered Callback URL:
http://127.0.0.1/Callback.aspx
OAuthTokenResponse authorizationTokens = OAuthUtility.GetRequestToken(
    ConfigSettings.getConsumerKey(),
    ConfigSettings.getConsumerSecret(),
    "http://127.0.0.1:1066/Twitter/Callback.aspx");
ConfigSettings:
public static class ConfigSettings
{
    public static String getConsumerKey()
    {
        return System.Configuration.ConfigurationManager.AppSettings["ConsumerKey"].ToString();
    }

    public static String getConsumerSecret()
    {
        return System.Configuration.ConfigurationManager.AppSettings["ConsumerSecret"].ToString();
    }
}
Web.config:
<appSettings>
  <add key="ConsumerKey" value="xxxxxxxxxxxxxxxxxxxx"/>
  <add key="ConsumerSecret" value="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"/>
</appSettings>
Make sure you set the 'use dynamic ports' property of your project to 'false' and enter a static port number instead. (I used 1066.)
I hope this helps!
Use http://smackaho.st
What it does is a simple DNS association to 127.0.0.1, which allows you to bypass the filters on localhost or 127.0.0.1:
smackaho.st. 28800 IN A 127.0.0.1
So if you click the link, it will display what you have on your local web server (and if you don't have one, you'll get a 404). You can of course set it to any page/port you want:
http://smackaho.st:54878/twitter/callback
I was working with a Twitter callback URL on my localhost. If you are not sure how to create a virtual host (this is important), use Ampps. It is really easy: in a few steps you have your own virtual host, and then every URL will work on it. For example:
Download and install Ampps.
Add a new domain (here you can set, for example, twitter.local). That means your virtual host will be http://twitter.local and it will work after step 3.
I am working on Windows, so go to your hosts file at C:\Windows\System32\Drivers\etc\hosts and add the line: 127.0.0.1 twitter.local
Restart Ampps and you can use your callback. You can specify any URL, even if you are using an MVC framework or htaccess URL rewriting.
Hope this helps!
Cheers.
It seems that nowadays http://127.0.0.1 has also stopped working.
A simple solution is to use http://localtest.me instead of http://localhost; it always points to 127.0.0.1, and you can even add any arbitrary subdomain to it and it will still point to 127.0.0.1.
When I develop locally, I always set up a locally hosted dev name that reflects the project I'm working on. I set this up in xampp through xampp\apache\conf\extra\httpd-vhosts.conf and then also in \Windows\System32\drivers\etc\hosts.
So if I am setting up a local dev site for example.com, I would set it up as example.dev in those two files.
Short Answer: Once this is set up properly, you can simply treat this url (http://example.dev) as if it were live (rather than local) as you set up your Twitter Application.
A similar answer was given here: https://dev.twitter.com/discussions/5749
Direct quote (emphasis added):
"You can provide any valid URL with a domain name we recognize on the application details page. OAuth 1.0a requires you to send an oauth_callback value on the request token step of the flow, and we'll accept a dynamic localhost-based callback on that step."
This worked like a charm for me. Hope this helps.
It can be done very conveniently with Fiddler:
Open menu Tools > HOSTS...
Insert a line like 127.0.0.1 your-production-domain.com, and make sure that "Enable remapping of requests..." is checked. Don't forget to press Save.
If access to your real production server is needed, simply exit Fiddler or disable remapping.
Starting Fiddler again will turn on remapping (if it is checked).
A pleasant bonus is that you can specify a custom port, like this: 127.0.0.1:3000 your-production-domain.com (it would be impossible to achieve this via the hosts file). Also, instead of an IP you can use any domain name (e.g., localhost).
This way, it is possible (but not necessary) to register your Twitter app only once (provided that you don't mind using the same keys for local development and production).
Edit this function in TwitterAPIExchange.php at line #180:
public function performRequest($return = true)
{
    if (!is_bool($return))
    {
        throw new Exception('performRequest parameter must be true or false');
    }

    $header = array($this->buildAuthorizationHeader($this->oauth), 'Expect:');

    $getfield = $this->getGetfield();
    $postfields = $this->getPostfields();

    $options = array(
        CURLOPT_HTTPHEADER => $header,
        CURLOPT_HEADER => false,
        CURLOPT_URL => $this->url,
        CURLOPT_RETURNTRANSFER => true,
        // Note: disabling peer/host verification turns off SSL certificate
        // checking entirely -- acceptable for local development only.
        CURLOPT_SSL_VERIFYPEER => false,
        CURLOPT_SSL_VERIFYHOST => false
    );

    if (!is_null($postfields))
    {
        $options[CURLOPT_POSTFIELDS] = $postfields;
    }
    else
    {
        if ($getfield !== '')
        {
            $options[CURLOPT_URL] .= $getfield;
        }
    }

    $feed = curl_init();
    curl_setopt_array($feed, $options);
    $json = curl_exec($feed);
    curl_close($feed);

    if ($return) { return $json; }
}
I had the same challenge and I was not able to give localhost as a valid callback URL. So I created a simple domain to help us developers out:
https://tolocalhost.com
It will redirect any path to your localhost domain and port you need. Hope it can be of use to other developers.
Set the callback URL in the Twitter app to 127.0.0.1:3000, and set WEBrick to bind to 127.0.0.1 instead of 0.0.0.0 with the command:
rails s -b 127.0.0.1
Looks like Twitter now allows localhost alongside whatever you have in the Callback URL settings, so long as there is a value there.
I struggled with this and followed a dozen solutions; in the end, all I had to do to work with any SSL APIs on localhost was:
Download the cacert.pem file.
In php.ini, un-comment and change:
curl.cainfo = "c:/wamp/bin/php/php5.5.12/cacert.pem"
You can find where your php.ini file is on your machine by running php --ini in your CLI
I placed my cacert.pem in the same directory as php.ini for ease.
These are the steps that worked for me to get Twitter working with a local application on my laptop:
go to apps.twitter.com
enter the name, app description and your site URL
Note: for localhost:8000, use 127.0.0.1:8000 since the former will not work
enter the callback URL matching the callback URL defined in TWITTER_REDIRECT_URI in your application
Note: eg: http://127.0.0.1/login/twitter/callback (localhost will not work).
Important enter both the "privacy policy" and "terms of use" URLs if you wish to request the user's email address
check the agree to terms checkbox
click [Create Your Twitter Application]
switch to the [Keys and Access Tokens] tab at the top
copy the "Consumer Key (API Key)" and "Consumer Secret (API Secret)" to TWITTER_KEY and TWITTER_SECRET in your application
click the "Permissions" tab and set appropriately to "read only", "read and write" or "read, write and direct message" (use the least intrusive option needed for your application; for just an OAuth login, "read only" is sufficient)
Under "Additional Permissions", check the "request email addresses from users" checkbox if you wish for the user's email address to be returned in the OAuth login data (in most cases, check yes)
First question!
Environment
MVC, C#, AppHarbor.
Problem
I am calling an OpenID provider and generating an absolute callback URL based on the domain.
On my local machine, this works fine if I hit http://localhost:12345/login
Request.Url; //gives me `http://localhost:12345/callback`
However, on AppHarbor where I'm deploying, because they are using non-standard ports, even if I'm hitting it at "http://sub.example.com/login"
Request.Url; //gives me http://sub.example.com:15232/callback
And this screws up my callback, because the port number wasn't in the original source URL!
I've tried
Request.Url
Request.Url.OriginalString
Request.RawUrl
All give me "http://sub.example.com:15232/callback".
Also to clear up that this isn't a Realm issue, the error message I am getting from DotNetOpenAuth is
'http://sub.example.com:14107/accounts/openidcallback' not under realm 'http://*.example.com/'.
I don't think I've stuffed that up?
Now, I'm about to consider some hacky stuff like
preprocessor commands (#IF DEBUG THEN PUT PORT)
string replace (Request.URL.Contains("localhost"))
None of these are 100% solutions, and I'm sick of mulling over what could be a simple property that I am missing. I have also read this, but it doesn't seem to have an accepted answer (and it is more about the path rather than the authority). So I'm putting it to you guys.
Summary
So if I had http://localhost:12345/login, I need to get http://localhost:12345/callback from the Request context.
And if I had "http://sub.example.com/login", I should get "http://sub.example.com/callback", regardless of what port it is on.
Thanks! (Sleep time, will answer any questions in the morning)
This is a common problem in load-balanced setups like AppHarbor's - we've provided an example workaround.
Update: A more desirable solution for many ASP.NET applications may be to set the aspnet:UseHostHeaderForRequestUrl appSetting to true. We (AppHarbor) have seen several customers experience issues using it with their WCF apps, which is why we haven't enabled it by default and still recommend the above solution for those situations. You can configure it using AppHarbor's "Configuration Variables" to inject the appSettings when deployed. More information can be found in this article.
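For reference, the setting looks like this in web.config (it can also be injected as an AppHarbor configuration variable):
<appSettings>
  <add key="aspnet:UseHostHeaderForRequestUrl" value="true" />
</appSettings>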
I recently ran into an issue where I compared a URL to the current URL, and then highlighted navigation based on that. It worked locally, but not in production.
I had http://example.com/path/to/file.aspx as my file, but when viewing that file and running Request.Url.ToString() it produced https://example.com:81/path/to/file.aspx in a load balanced production environment.
Now I am using Request.Url.AbsolutePath to just give me /path/to/file.aspx, thus ignoring the schema, hostname, and port numbers.
When I needed to compare it to the URL of each navigation item, I used:
New Uri(theLink.Href).AbsolutePath
My initial thought is to get the referrer variable and check whether it includes a port; if so, use it, otherwise don't.
If that’s not an option because a proxy might remove the referrer header variable then you might need to use some client side script to get the location and pass it back to the server.
I'm guessing that AppHarbor uses port forwarding to the IIS server, so even though the site is publicly on port 80, IIS hosts it on another port and can't know which port the client connected on.
Something like
String port = Request.ServerVariables["SERVER_PORT"] == "80"
    ? ""
    : ":" + Request.ServerVariables["SERVER_PORT"];
String virtualRoot = Url.Content("~/");
// virtualRoot already ends in a slash, so append "callback" without one
// to avoid producing a double slash in the URL.
destinationUrl = String.Format("http://{0}{1}{2}callback",
    Request.ServerVariables["SERVER_NAME"], port, virtualRoot);
If you use the UriBuilder class in the framework, you can easily get around this. On the builder, if you set the port to -1 the port number will be removed:
new UriBuilder("http://sub.example.com:15232/callback"){ Port = -1}
returns: http://sub.example.com/callback
To keep the port number on a local machine, just check Request.IsLocal and don't apply -1 to the port.
I would wrap this in an extension method to keep it clean, as sketched below.
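A sketch of such an extension method (assuming ASP.NET MVC's HttpRequestBase; the helper name is illustrative):
using System;
using System.Web;

public static class UriExtensions
{
    // Strips the explicit port from a URI, except on the local machine
    // where the development port must be preserved.
    public static Uri WithoutPort(this Uri uri, HttpRequestBase request)
    {
        if (request.IsLocal)
        {
            return uri; // e.g. keep http://localhost:12345/callback intact
        }
        return new UriBuilder(uri) { Port = -1 }.Uri;
    }
}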
I see that this is an old thread. I had this issue running MVC5 on IIS 7.5, with an Apache proxy in front. Outside of the server, I would get "Empty Response", since the ASP.NET app gets the URL from Apache with the custom port.
In order to have the app redirect to a subpath without including the "custom" port, forget the Response/Request objects and use the TransferRequest method. For instance, if I want users to be automatically redirected to the login page when they are not already logged in:
if (!User.Identity.IsAuthenticated)
    Server.TransferRequest("Account/Login");
I am using WatiN (2.0.10.928) with C# and Visual Studio 2008 to test an SSL-secured website that requires a certificate. When you navigate to the homepage, a "Choose a digital certificate" dialog is displayed and requires that you select a valid certificate and click the 'OK' button.
I'm looking for a way to automate the certificate selection so that every time a new test or fixture is executed (and my browser restarts) I don't have to manually interfere with the automated test and select the certificate. I've tried using various WatiN Dialog Handler classes and even looked into using the Win32 API to automate this but haven't had much luck.
I finally found a solution, but it adds another dependency to the solution (a third-party library called AutoIt). Since this solution isn't ideal but does work and is the best I could find, I will post it and mark it as the answer, but I am still looking for an 'out of the box' WatiN solution that is more consistent with the rest of my code and test fixtures.
Thanks for your responses!
In my situation I have exactly one certificate attached, so I have to pick the one and only certificate on the list. I have a really simple DialogHandler for this - it only clicks the button if it can handle the dialog:
public class CertificateChoosingHandler : BaseDialogHandler
{
    public override bool HandleDialog(Window window)
    {
        // Click the OK button (dialog control ID 1).
        new WinButton(1, window.Hwnd).Click();
        return true;
    }

    public override bool CanHandleDialog(Window window)
    {
        // Identify the "Choose a digital certificate" dialog by its window style.
        return window.StyleInHex == "94C808CC";
    }
}
AFAIR this solution won't work in Windows 7.
EDIT: I forgot about something useful. When I found that this solution did not work in Windows 7, I discovered a very interesting option in IE's Internet Options, under "Custom Level": Don't prompt for client certificate selection when no certificates or only one certificate exists. So I added my site to trusted sites and edited the settings, and now there is no need for me to use this DialogHandler, though it can still be used even if no dialog appears. If what I wrote is not clear, here is how to enable the prompt for a certificate in Internet Explorer to show the certificate dialog.
The best solution I could find so far was posted here:
http://andrey-zhukov.blogspot.com/2009/10/recently-i-wanted-to-choose-digital.html
As stated in the post, it requires a reference to the AutoIt library: http://www.autoitscript.com/autoit3/index.shtml
I've taken #prostynick's hint and automated it. Basically, if you ENABLE the setting "Don’t prompt for client certificate selection when no certificates or only one certificate exists" in the IE security settings, then the whole dialog doesn't appear (if you only have one or no certificate, that is).
So, we just have to make sure that the user has that setting enabled before we initialize your WebBrowser object. And since these settings are conveniently stored in the registry, we can do it ourselves, without bothering the user. Here's some code that does just that:
// What this does is change this setting in Internet Explorer:
// Tools -> Internet Options -> Security -> Custom Level ->
// Don't prompt for client certificate selection when no certificates
// or only one certificate exists -> ENABLE
//
// If you're not convinced that we need this, please reset all the security
// levels in IE to the default settings, comment out this code, and try to fetch
// <your url>.
//
// If it finishes, great! Then leave it commented out. Otherwise, curse and accept
// that we need this ugly hack OR that we need to instruct people to find & change
// some unholy IE setting...
using (RegistryKey stupidBrokenDefaultSetting = Registry.CurrentUser.OpenSubKey(
    "Software\\Microsoft\\Windows\\CurrentVersion\\Internet Settings\\Zones\\3", true))
{
    // Value 1A04 is the client-certificate prompt setting for zone 3
    // (Internet); "0" enables "don't prompt".
    stupidBrokenDefaultSetting.SetValue("1A04", "0", RegistryValueKind.DWord);
}
I'm not sure if this works for everyone, or that you need Administrator rights or something, but it works for me.
So I've got a ServiceReference added to a C# Console Application which calls a Web Service that is exposed from Oracle.
I've got everything setup and it works like peaches when it's not using SSL (http). I'm trying to set it up using SSL now, and I'm running into issues with adding it to the Service References (or even Web References). For example, the URL (https) that the service is being exposed on, isn't returning the appropriate web methods when I try to add it into Visual Studio.
The underlying connection was closed: An unexpected error occurred on a send.
Received an unexpected EOF or 0 bytes from the transport stream.
Metadata contains a reference that cannot be resolved: 'https://srs204.mywebsite.ca:7776/SomeDirectory/MyWebService?WSDL'
Another quandary I've got is in regards to certificate management and deployment. I've got about 1000 external client sites that will need to use this little utility and they'll need the certificate installed in the appropriate cert store in order to connect to the Web Service. Not sure on the best approach to handling this. Do they need to be in the root store?
I've spent quite a few hours on the web looking over various options but can't get a good clean answer anywhere.
To summarize, I've got a couple of questions here:
1) Anybody have some good links on setting up Web Services in Visual Studio that use SSL?
2) How should I register the certificate? Which store should it exist in? Can I just use something like CertMgr to register it?
There's gotta be a good book/tutorial/whatever that will show me common good practices on setting something like this up. I just can't seem to find it!
Well, I've figured this out. It took me far longer than I care to talk about, but I wanted to share my solution, since it's a HUGE pet peeve of mine to see the standard "Oh I fixed it! Thanks!" posts that leave everyone hanging on what actually happened.
So.
The root problem was that, by default, Visual Studio 2008 uses TLS for the SSL handshake, and the Oracle/Java-based web service that I was trying to connect to was using SSL3.
When you use the "Add Service Reference..." in Visual Studio 2008, you have no way to specify that the security protocol for the service point manager should be SSL3.
Unless.
You take a static WSDL document and use wsdl.exe to generate a proxy class:
wsdl /l:CS /protocol:SOAP /namespace:MyNamespace MyWebService.wsdl
Then you can use the C# compiler to turn that proxy class into a library (.dll) and add it to your .NET project's references:
csc /t:library /r:System.Web.Services.dll /r:System.Xml.dll MyWebService.cs
At this point you also need to make sure that you've included System.Web.Services in your "References" as well.
Now you should be able to call your web service without an issue in the code. To make it work you're going to need one magic line of code added before you instantiate the service.
// We're using SSL here and not TLS. Without this line, nothing workie.
ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3;
Okay, so I was feeling pretty impressed with myself, as testing was great on my dev box. Then I deployed to another client box and it wouldn't connect again, due to a permissions/authority issue. This smelled like certificates to me (whatever they smell like). To resolve it, I used certmgr.exe to register the site's certificate in the Trusted Root store on the local machine:
certmgr -add -c "c:\someDir\yourCert.cer" -s -r localMachine root
This allows me to distribute the certificate to our client sites and install it automatically for the users. I'm still not sure how "security friendly" the different versions of Windows will be in regards to automated certificate registrations like this one, but it's worked great so far.
Hope this answer helps some folks. Thanks to blowdart too for all of your help on this one and providing some insight.
It sounds like the web service is using a self-signed certificate. Frankly, this isn't the best approach.
Assuming you're a large organisation and the service is internal, you can set up your own trusted certificate authority; this is especially easy with Active Directory. From that CA, the server hosting the Oracle service can request a certificate, and you can use AD policy to trust your internal CA's root certificate by placing it in the trusted root of the machine store. This removes the need to manually trust or accept the certificate on the web service.
If the client machines are external, then you're going to have to get the folks exposing the service to either purchase a "real" certificate from one of the well-known CAs like Verisign, Thawte, GeoTrust etc., or, as part of your install, bundle the public certificate and install it into Trusted Root Certification Authorities at the machine level on every machine. This has problems - for example, there is no way to revoke the certificate - but it will remove the prompt.
Thanks for this great tip - I took a quick look around at your stuff and you have a lot of good ideas going on. Here's my little bit to add: I'm figuring out webMethods, and (surprise!) it has the same problem as the Oracle app server you connected to (SSL3 instead of TLS). Your approach worked great; here's my addendum.
Given static class "Factory," provide these two handy-dandy items:
/// <summary>
/// Used when dispatching code from the Factory (for example, SSL3 calls)
/// </summary>
/// <param name="flag">Make this guy have values for debugging support</param>
public delegate void CodeDispatcher(ref string flag);

/// <summary>
/// Run code in SSL3 -- this is not thread safe. All connections executed while this
/// context is active are set with this flag. Need to research how to avoid this...
/// </summary>
/// <param name="flag">Debugging context on exception</param>
/// <param name="dispatcher">Dispatching code</param>
public static void DispatchInSsl3(ref string flag, CodeDispatcher dispatcher)
{
    var resetServicePoint = false;
    var origSecurityProtocol = System.Net.ServicePointManager.SecurityProtocol;
    try
    {
        System.Net.ServicePointManager.SecurityProtocol = System.Net.SecurityProtocolType.Ssl3;
        resetServicePoint = true;
        dispatcher(ref flag);
    }
    finally
    {
        if (resetServicePoint)
        {
            try { System.Net.ServicePointManager.SecurityProtocol = origSecurityProtocol; }
            catch { }
        }
    }
}
And then to consume this stuff (as you have no doubt already guessed, but put a drum roll in here anyway):
var readings = new ArchG2.Portal.wmArchG201_Svc_fireWmdReading.wmdReading[] {
    new ArchG2.Portal.wmArchG201_Svc_fireWmdReading.wmdReading() {
        attrID = 1, created = DateTime.Now.AddDays(-1), reading = 17.34, userID = 2
    },
    new ArchG2.Portal.wmArchG201_Svc_fireWmdReading.wmdReading() {
        attrID = 2, created = DateTime.Now.AddDays(-2), reading = 99.76, userID = 3
    },
    new ArchG2.Portal.wmArchG201_Svc_fireWmdReading.wmdReading() {
        attrID = 3, created = DateTime.Now.AddDays(-5), reading = 82.17, userID = 4
    }
};

ArchG2.Portal.Utils.wmArchG201.Factory.DispatchInSsl3(ref flag, (ref string flag_inner) =>
{
    // creates the binding, endpoint, etc. programmatically to avoid mucking with
    // SharePoint web.config.
    var wsFireWmdReading = ArchG2.Portal.Utils.wmArchG201.Factory.Get_fireWmdReading(ref flag_inner, LH, Context);
    wsFireWmdReading.fireWmdReading(readings);
});
That does the trick -- when I get some more time I'll solve the threading issue (or not).
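As an aside, a minimal sketch of one way to tame the thread-safety issue: since ServicePointManager.SecurityProtocol is process-global, the bluntest option is to serialize all SSL3 dispatches behind a single lock. This is a workaround under that assumption, not a full fix - threads opening connections outside the lock will still observe the temporary SSL3 setting.
private static readonly object Ssl3Lock = new object();

public static void DispatchInSsl3Serialized(ref string flag, CodeDispatcher dispatcher)
{
    lock (Ssl3Lock)
    {
        // Only one dispatch at a time may swap the global security protocol.
        DispatchInSsl3(ref flag, dispatcher);
    }
}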
Since I have no reputation to comment, I'd like to mention that Mat Nadrofsky's answer and code sample for forcing SSL3 is also the solution for an error similar to:
An error occurred while making the HTTP request to https://xxxx/whatever. This could be due to the fact that the server certificate is not configured properly with HTTP.SYS in the HTTPS case. This could also be caused by a mismatch of the security binding between the client and the server.
Just use
// We're using SSL here and not TLS. Without this line, nothing workie.
ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3;
as mentioned by Mat. Tested with an SAP NetWeaver PI server in HTTPS. Thanks!
Mat, I had such issues too, and I have a way to avoid using certmgr.exe to add certificates to the trusted root on a remote machine:
X509Store store = new X509Store("ROOT", StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadWrite);
store.Add(certificate);
store.Close();
The 'certificate object' can be created like this:
X509Certificate2 certificate = new X509Certificate2("Give certificate location path here");