First question!
Environment
MVC, C#, AppHarbor.
Problem
I am calling an OpenID provider and generating an absolute callback URL based on the domain.
On my local machine, this works fine if I hit http://localhost:12345/login
Request.Url; //gives me `http://localhost:12345/callback`
However, on AppHarbor, where I'm deploying, non-standard ports are used, so even if I'm hitting the site at "http://sub.example.com/login"
Request.Url; //gives me http://sub.example.com:15232/callback
And this screws up my callback, because the port number wasn't in the original source url!
I've tried
Request.Url
Request.Url.OriginalString
Request.RawUrl
All give me "http://sub.example.com:15232/callback".
Also, to clear up that this isn't a Realm issue, the error message I am getting from DotNetOpenAuth is:
'http://sub.example.com:14107/accounts/openidcallback' not under realm 'http://*.example.com/'.
I don't think I've stuffed that up?
Now, I'm about to consider some hacky stuff like
preprocessor commands (#IF DEBUG THEN PUT PORT)
string replace (Request.URL.Contains("localhost"))
All of these are not 100% solutions, but I'm sick of mulling over what could be a simple property that I am missing. I have also read this, but that doesn't seem to have an accepted answer (and is more about the path than the authority). So I'm putting it towards you guys.
Summary
So if I had http://localhost:12345/login, I need to get http://localhost:12345/callback from the Request context.
And if I had "http://sub.example.com/login", I should get "http://sub.example.com/callback", regardless of what port it is on.
Thanks! (Sleep time, will answer any questions in the morning)
This is a common problem in load balanced setups like AppHarbor's - we've provided an example workaround.
Update: A more desirable solution for many ASP.NET applications may be to set the aspnet:UseHostHeaderForRequestUrl appSetting to true. We (AppHarbor) have seen several customers experience issues using it with their WCF apps, which is why we haven't enabled it by default and still recommend the above solution for those situations. You can configure it using AppHarbor's "Configuration Variables" to inject the appSettings when deployed. More information can be found in this article.
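For reference, the setting itself is an ordinary appSetting, so in web.config it would look like this (AppHarbor's configuration variables inject the same key at deploy time):

<appSettings>
    <add key="aspnet:UseHostHeaderForRequestUrl" value="true" />
</appSettings>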
I recently ran into an issue where I compared a URL to the current URL, and then highlighted navigation based on that. It worked locally, but not in production.
I had http://example.com/path/to/file.aspx as my file, but when viewing that file and running Request.Url.ToString() it produced https://example.com:81/path/to/file.aspx in a load balanced production environment.
Now I am using Request.Url.AbsolutePath to just give me /path/to/file.aspx, thus ignoring the scheme, hostname, and port number.
When I need to compare it to the URL on each navigation item I used:
new Uri(theLink.Href).AbsolutePath
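Putting it together, the comparison looks something like this (a sketch; theLink is hypothetical and its Href is assumed to hold an absolute URL):

// Compare paths only, ignoring scheme, host name and port.
bool isCurrentPage = string.Equals(
    Request.Url.AbsolutePath,
    new Uri(theLink.Href).AbsolutePath,
    StringComparison.OrdinalIgnoreCase);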
My initial thought is to get the referrer variable and check whether it includes a port; if so, use it, otherwise don't.
If that's not an option because a proxy might strip the referrer header, then you might need some client-side script to get the location and pass it back to the server.
I'm guessing that AppHarbor uses port forwarding to the IIS server, so even though the site is publicly on port 80, IIS hosts it on another port and can't know which port the client connected on.
Something like
String port = Request.ServerVariables["SERVER_PORT"] == "80" ? "" : ":" + Request.ServerVariables["SERVER_PORT"];
String virtualRoot = Url.Content("~/"); // already ends with "/"
destinationUrl = String.Format("http://{0}{1}{2}{3}", Request.ServerVariables["SERVER_NAME"], port, virtualRoot, "callback");
If you use the UriBuilder class in the framework you can easily get around this. On the builder class, if you set the port to -1, the port number will be removed:
new UriBuilder("http://sub.example.com:15232/callback") { Port = -1 }
returns: http://sub.example.com/callback
To keep the port number on a local machine just check Request.IsLocal and don't apply -1 to the port.
I would wrap this in an extension method to keep it clean.
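Something like this, for example (a sketch of that extension method; the method name is mine):

using System;
using System.Web;

public static class UriExtensions
{
    // Strip the port unless the request is local, so localhost:12345 keeps working in development.
    public static Uri WithoutPort(this Uri url, HttpRequestBase request)
    {
        if (request.IsLocal)
            return url;

        return new UriBuilder(url) { Port = -1 }.Uri;
    }
}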
I see that this is an old thread. I had this issue running MVC5 on IIS 7.5, with an Apache proxy in front. From outside the server, I would get an "Empty Response", since the ASP.NET app builds the URL it receives from Apache with the custom port.
In order to have the app redirect to a subpath without including the "custom" port, forget the Response/Request objects and use the TransferRequest method. For instance, if I want users to be automatically redirected to the login page when they are not logged in already:
if (!User.Identity.IsAuthenticated)
    Server.TransferRequest("Account/Login");
Related
I am creating a Nancy module that will eventually be hosted inside a Windows service. To start the Nancy hosting, I am using Nancy.Hosting.Self. Below is the code to start the Nancy host.
string strHostProtocol = Convert.ToString(ConfigurationManager.AppSettings["HostProtocol"]);
string strHostIP = Convert.ToString(ConfigurationManager.AppSettings["HostIP"]);
string strHostPort = Convert.ToString(ConfigurationManager.AppSettings["HostPort"]);
//Here strHostProtocol="https", strHostIP = "192.168.100.88" i.e. System IPv4, strHostPort = "9003"
var url = strHostProtocol + "://" + strHostIP + ":" + strHostPort;
//url ="https://192.168.100.88:9003"
this.host = new NancyHost(new Uri(url));
this.host.Start();
Now, once the Windows service starts, it starts the above host, and I can see it in the output of the netstat -a command. When I browse to https://192.168.100.88:9003 I get a proper response.
The problem arises when the same host is browsed using its external IP. Say this system has been assigned the external IP 208.91.158.66; when I try browsing https://208.91.158.66:9003, the browser just shows its default loading progress continuously, never stopping and without any error thrown. I have also run the command below and reserved the URL successfully.
netsh http add urlacl url=https://192.168.100.88:9003/ user=everyone
But even after this, the host cannot be browsed using the external IP assigned to that system. Is there some restriction Nancy is putting up? Firewalls are turned off, defenders are turned off. Does anyone have any idea?
UPDATE
The linked duplicate question talks about LAN access, but here I am trying an external IP. I have tried the answer mentioned over there and have also noted as much in the question.
Alright. This issue was also posted to the GitHub Nancy repo, and below is what @Khellang had to say:
When you bind to https://192.168.100.88:9003, the
TcpListener/HttpListener won't listen on other interfaces. You either
have to bind to https://208.91.158.66:9003 or https://localhost:9003
and set RewriteLocalhost = true (default).
Further, he also said:
If you also want to listen to requests coming to the external IP, yes.
Or you could use a wildcard, like https://+:9003/, https://*:9003/ or
https://localhost:9003/ (with RewriteLocalhost = true, this will
result in https://+:9003/). You can read more about them in the link I
posted.
Thanks also to @TimBourguignon, who suggested the same in his comments. Hope this helps someone in the future.
He also suggested reading this link to learn more about strong and weak wildcards.
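In code, the suggested fix amounts to something like this (a sketch against Nancy.Hosting.Self's HostConfiguration):

// Bind to localhost and let Nancy rewrite it to the wildcard https://+:9003/,
// so requests arriving on the external IP are also picked up.
var config = new HostConfiguration { RewriteLocalhost = true };
this.host = new NancyHost(config, new Uri("https://localhost:9003"));
this.host.Start();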
I am running an ASP.NET MVC website, and I want to block every user that reaches my site through TOR. So far I have two solutions:
Download a list of TOR exit nodes once every hour, store that list in memory, and check every request's IP address against that list.
Try to block TOR exit nodes with the Windows firewall - I think this would be better, but I don't know how to do it.
Is there any other possible solution? Have any of you maybe had a similar problem to mine? How did you solve it?
The answer is absolutely the second option you listed. You will have to download a list of known exit-node IPs every so often regardless of which solution you use, but using the firewall that already exists is much simpler than rolling your own primitive replica.
How the IPs can be added to the firewall depends on your version of Windows. A previous Stack Overflow question whose answer links to explanations of how to programmatically block IP addresses via the Windows Server 2008 firewall can be found here.
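As a rough illustration, each downloaded exit-node IP could be blocked by shelling out to netsh (a sketch; the rule name is arbitrary and error handling is omitted):

using System.Diagnostics;

static void BlockTorExitNode(string ip)
{
    var psi = new ProcessStartInfo
    {
        FileName = "netsh",
        Arguments = string.Format(
            "advfirewall firewall add rule name=\"TorExit_{0}\" dir=in action=block remoteip={0}", ip),
        UseShellExecute = false,
        CreateNoWindow = true
    };
    using (var process = Process.Start(psi))
    {
        process.WaitForExit();
    }
}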
Here (https://github.com/RD17/DeTor) is a simple REST API that uses TorDNSEL to determine whether a request was made from the TOR network or not. It should be pretty simple to call it from C#, with RESTSharp for example.
The request is:
curl -X GET http://detor.ambar.cloud
The response is:
{
    "sourceIp": "104.200.20.46",
    "destIp": "89.207.89.82",
    "destPort": "8080",
    "found": true
}
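For example, a minimal C# call (a sketch using HttpClient rather than RESTSharp; the field name comes from the sample response above, and a real application would use a proper JSON parser):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class TorCheck
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            string json = await http.GetStringAsync("http://detor.ambar.cloud/");
            // Crude check for the "found" flag seen in the sample response.
            bool fromTor = json.Contains("\"found\": true");
            Console.WriteLine(fromTor ? "Request came via TOR" : "Not TOR");
        }
    }
}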
As a bonus, you can add a badge to your site that shows whether a visitor comes from TOR or not:
<img src='http://detor.ambar.cloud/badge' />
Is anyone else having a difficult time getting Twitter's OAuth callback URL to hit their localhost development environment?
Apparently it has been disabled recently: http://code.google.com/p/twitter-api/issues/detail?id=534#c1
Does anyone have a workaround? I don't really want to stop my development.
Alternative 1.
Set up your hosts file (Windows) or /etc/hosts to point a live domain to your localhost IP, such as:
127.0.0.1 xyz.example
where xyz.example is your real domain.
Alternative 2.
Alternatively, use a URL shortener service: shorten your local URL and provide the result as the callback.
Alternative 3.
Furthermore, it seems to work to provide, for example, http://127.0.0.1:8080 as the callback to Twitter instead of http://localhost:8080.
I just had to do this last week. Apparently localhost doesn't work, but 127.0.0.1 does. Go figure.
This of course assumes that you are registering two apps with Twitter, one for your live www.mysite.example and another for 127.0.0.1.
Just put http://127.0.0.1:xxxx/ as the callback URL, where xxxx is the port for your framework.
Yes, it was disabled because of the recent security issue that was found in OAuth. The only solution for now is to create two OAuth applications - one for production and one for development. In the development application you set your localhost callback URL instead of the live one.
Callback URL edited
http://localhost:8585/logintwitter.aspx
Convert to
http://127.0.0.1:8585/logintwitter.aspx
This is how I did it:
Registered Callback URL:
http://127.0.0.1/Callback.aspx
OAuthTokenResponse authorizationTokens = OAuthUtility.GetRequestToken(
    ConfigSettings.getConsumerKey(),
    ConfigSettings.getConsumerSecret(),
    "http://127.0.0.1:1066/Twitter/Callback.aspx");
ConfigSettings:
public static class ConfigSettings
{
    public static String getConsumerKey()
    {
        return System.Configuration.ConfigurationManager.AppSettings["ConsumerKey"].ToString();
    }

    public static String getConsumerSecret()
    {
        return System.Configuration.ConfigurationManager.AppSettings["ConsumerSecret"].ToString();
    }
}
Web.config:
<appSettings>
    <add key="ConsumerKey" value="xxxxxxxxxxxxxxxxxxxx"/>
    <add key="ConsumerSecret" value="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"/>
</appSettings>
Make sure you set the 'use dynamic ports' property of your project to 'false' and enter a static port number instead. (I used 1066.)
I hope this helps!
Use http://smackaho.st
What it does is a simple DNS association to 127.0.0.1, which allows you to bypass the filters on localhost or 127.0.0.1:
smackaho.st. 28800 IN A 127.0.0.1
So if you click on the link, it will show you what you have on your local webserver (and if you don't have one, you'll get a 404). You can of course point it to any page/port you want:
http://smackaho.st:54878/twitter/callback
I was working with a Twitter callback URL on my localhost. If you are not sure how to create a virtual host (this is important), use Ampps. It is really simple and easy: in a few steps you have your own virtual host, and then every URL will work on it. For example:
Download and install Ampps.
Add a new domain (here you can set, for example, twitter.local); that means your virtual host will be http://twitter.local, and it will work after step 3.
I am working on Windows, so go to your hosts file -> C:\Windows\System32\Drivers\etc\hosts and add the line: 127.0.0.1 twitter.local
Restart your Ampps and you can use your callback. You can specify any URL, even if you are using an MVC framework or htaccess URL rewriting.
Hope this helps! Cheers.
It seems that nowadays http://127.0.0.1 has also stopped working.
A simple solution is to use http://localtest.me instead of http://localhost. It always points to 127.0.0.1, and you can even add any arbitrary subdomain to it and it will still point to 127.0.0.1.
See Website
When I develop locally, I always set up a locally hosted dev name that reflects the project I'm working on. I set this up in xampp through xampp\apache\conf\extra\httpd-vhosts.conf and then also in \Windows\System32\drivers\etc\hosts.
So if I am setting up a local dev site for example.com, I would set it up as example.dev in those two files.
Short Answer: Once this is set up properly, you can simply treat this url (http://example.dev) as if it were live (rather than local) as you set up your Twitter Application.
A similar answer was given here: https://dev.twitter.com/discussions/5749
Direct Quote (emphasis added):
You can provide any valid URL with a domain name we recognize on the
application details page. OAuth 1.0a requires you to send an
oauth_callback value on the request token step of the flow and we'll
accept a dynamic localhost-based callback on that step.
This worked like a charm for me. Hope this helps.
It can be done very conveniently with Fiddler:
Open menu Tools > HOSTS...
Insert a line like 127.0.0.1 your-production-domain.com, make sure that "Enable remapping of requests..." is checked. Don't forget to press Save.
If access to your real production server is needed, simply exit Fiddler or disable remapping.
Starting Fiddler again will turn on remapping (if it is checked).
A pleasant bonus is that you can specify a custom port, like this:
127.0.0.1:3000 your-production-domain.com (it would be impossible to achieve this via the hosts file). Also, instead of IP you can use any domain name (e.g., localhost).
This way, it is possible (but not necessary) to register your Twitter app only once (provided that you don't mind using the same keys for local development and production).
Edit this function in TwitterAPIExchange.php at line #180:
public function performRequest($return = true)
{
    if (!is_bool($return))
    {
        throw new Exception('performRequest parameter must be true or false');
    }

    $header = array($this->buildAuthorizationHeader($this->oauth), 'Expect:');

    $getfield = $this->getGetfield();
    $postfields = $this->getPostfields();

    $options = array(
        CURLOPT_HTTPHEADER => $header,
        CURLOPT_HEADER => false,
        CURLOPT_URL => $this->url,
        CURLOPT_RETURNTRANSFER => true,
        // Disable SSL verification so the call works on a localhost setup
        // without a configured CA bundle
        CURLOPT_SSL_VERIFYPEER => false,
        CURLOPT_SSL_VERIFYHOST => false
    );

    if (!is_null($postfields))
    {
        $options[CURLOPT_POSTFIELDS] = $postfields;
    }
    else
    {
        if ($getfield !== '')
        {
            $options[CURLOPT_URL] .= $getfield;
        }
    }

    $feed = curl_init();
    curl_setopt_array($feed, $options);
    $json = curl_exec($feed);
    curl_close($feed);

    if ($return) { return $json; }
}
I had the same challenge and I was not able to give localhost as a valid callback URL. So I created a simple domain to help us developers out:
https://tolocalhost.com
It will redirect any path to your localhost domain and port you need. Hope it can be of use to other developers.
Set the callback URL in the Twitter app to 127.0.0.1:3000,
and set WEBrick to bind to 127.0.0.1 instead of 0.0.0.0.
Command: rails s -b 127.0.0.1
Looks like Twitter now allows localhost alongside whatever you have in the Callback URL settings, so long as there is a value there.
I struggled with this and followed a dozen solutions; in the end, all I had to do to work with any SSL APIs on localhost was:
Go download the cacert.pem file.
In php.ini, un-comment and change:
curl.cainfo = "c:/wamp/bin/php/php5.5.12/cacert.pem"
You can find where your php.ini file is on your machine by running php --ini in your CLI.
I placed my cacert.pem in the same directory as php.ini for ease.
These are the steps that worked for me to get Twitter working with a local application on my laptop:
go to apps.twitter.com
enter the name, app description and your site URL
Note: for localhost:8000, use 127.0.0.1:8000 since the former will not work
enter the callback URL matching your callback URL defined in TWITTER_REDIRECT_URI your application
Note: eg: http://127.0.0.1/login/twitter/callback (localhost will not work).
Important: enter both the "privacy policy" and "terms of use" URLs if you wish to request the user's email address
check the agree to terms checkbox
click [Create Your Twitter Application]
switch to the [Keys and Access Tokens] tab at the top
copy the "Consumer Key (API Key)" and "Consumer Secret (API Secret)" to TWITTER_KEY and TWITTER_SECRET in your application
click the "Permissions" tab and set appropriately to "read only", "read and write" or "read, write and direct message" (use the least intrusive option needed for your application, for just and OAuth login "read only" is sufficient
Under "Additional Permissions" check the "request email addresses from users" checkbox if you wish for the user's email address to be returned to the OAuth login data (in most cases check yes)
There are several questions like this, but my situation seems a bit different. I have extremely simple code:
WebClient client = new WebClient();
client.DownloadFile("http://www.xkcd.com", "xkcd.html");
However, I get the error "No connection could be made because the target machine actively refused the connection." I see this problem with connections to any website, and it only appears in .NET applications; all of a sudden, none of them can access the web.
Any ideas?
For the purpose of a sanity check, I like using PowerShell to call APIs, so I'd suggest that if you can.
Also, make sure to try that URL in IE on the system, just to make sure there's nothing weird going on (forced proxy, site really down, DNS or hosts file resolving it to something else, etc.).
C:\Users\james » $wc = new-object system.net.webclient
C:\Users\james » $wc.DownloadFile("http://www.xkcd.com", "xkcd.html")
C:\Users\james » dir .\xkcd.html
Directory: Microsoft.PowerShell.Core\FileSystem::C:\Users\james
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 8/26/2010 1:08 AM 7454 xkcd.html
The user that your code is running as is relevant. For example the code might work in a console/WinForms app but not in an ASP.NET app.
Try to go to the site using the Internet Explorer browser. The problem is that WebClient uses the proxy settings from IE, and if there are any wrong proxy settings, you'll get the message you got.
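If a bad IE proxy configuration turns out to be the culprit, a quick test is to bypass the proxy entirely (a sketch):

using System.Net;

WebClient client = new WebClient();
client.Proxy = null; // null disables the default (IE) proxy lookup entirely
client.DownloadFile("http://www.xkcd.com", "xkcd.html");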
I have a web application that you can use to import information from another site by giving it a url. It's been pointed out that you could use this feature to access a private site that is hosted on the same web server.
So...
How can I check that a given url is publicly accessible (whether on the same web server or somewhere different)?
FIX:
I ended up doing this:
// Requires: using System.Net.NetworkInformation; and using System.Text;
protected static bool IsHostWithinSegment(string Host)
{
    Ping pinger = new Ping();
    string data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
    byte[] buffer = Encoding.ASCII.GetBytes(data);
    PingOptions options = new PingOptions();
    options.Ttl = 1; // only hosts one hop away can answer before the TTL expires
    PingReply reply = pinger.Send(Host, 1000, buffer, options);
    return reply.Status == IPStatus.Success;
}
private static Uri BindStringToURI(string value)
{
    Uri uri;
    if (Uri.TryCreate(value, UriKind.Absolute, out uri))
        return uri;

    // Try prepending the default scheme
    value = string.Format("{0}://{1}", "http", value);
    if (Uri.TryCreate(value, UriKind.Absolute, out uri))
        return uri;

    return null;
}
The only requirement of mine that it doesn't fulfil is that some installations of our product will exist alongside each other, and you won't be able to import information across them - I suspect this will require using a proxy server to get an external view of things, but as it's not a requirement for my project I'll leave it for someone else.
-- I've just realised that this does entirely solve my problem, since all the publicly accessible URLs resolve to virtual or routable IPs, meaning they hop.
Run a traceroute (a series of pings with short TTLs) to the address; if the firewall(s) is (are) one of the hops, then it's visible from outside the organisation and so should be acceptable.
System.Net.NetworkInformation has a Ping class that should give you enough information for a tracert-like routine.
This does sound like a big hole, though; another approach should probably be considered. Preventing the machine that runs this program from accessing any other machine on the internal network may be better - a kind of internal firewall.
I've added a simple traceroute, since you like the concept:
class Program
{
    static void Main(string[] args)
    {
        PingReply reply = null;
        PingOptions options = new PingOptions();
        options.DontFragment = true;
        Ping p = new Ping();

        // Raise the TTL one hop at a time until the destination answers
        for (int n = 1; n < 255 && (reply == null || reply.Status != IPStatus.Success); n++)
        {
            options.Ttl = n;
            reply = p.Send("www.yahoo.com", 1000, new byte[1], options);
            if (reply.Address != null)
                Console.WriteLine(n.ToString() + " : " + reply.Address.ToString());
            else
                Console.WriteLine(n.ToString() + " : <null>");
        }

        Console.WriteLine("Done.");
        System.Console.ReadKey();
    }
}
Should be good enough for a reliable local network.
Only two things spring to mind.
Have a trusted external server verify the visibility of the address (like an HTTP Proxy)
Check the DNS record on the site - if it resolves to something internal (127.0.0.1, 10.*, 192.168.*, etc.) then reject it - of course, this might not work depending on how your internal network is set up
Not knowing if this is on a 3rd-party hosting solution or inside your/your company's internal network makes it hard to say which solution would be best; good luck.
EDIT: On second thought, I've canceled the second suggestion as it would still leave you open to DNS rebinding. I'll leave this here for that purpose, but I don't think it's a good idea.
That said, if you have some ability to control the network makeup for this server, then it should probably live in its own world, dedicated, with nothing else on its private network.
Check the URL address, and see if it matches your server address?
edit: or check against a range of addresses...
But all this does not answer the question: could the client access it?
Maybe some script in the browser could check that the URL is accessible and inform the server of the result.
But the user could edit the page, or simulate the result...
Have the client read the url contents and send it back to the server, instead of having the server fetch it?
Don't worry about the public accessibility of anyone else's web assets, that question does not have a definite answer in all cases. Just try not to compromise the access policy to your own (or your customer's etc.) web assets.
Use the existing access control mechanisms to control the web application's access. Don't just consult the access control mechanisms in order to duplicate them in the web application. That would be relying on the web application to refrain from using its full access - a false reliance if the web application ever gets compromised or if it simply has a bug in the access control duplication functionality. See http://en.wikipedia.org/wiki/Confused_deputy_problem.
Since the web application acts as a deputy of external visitors, treat it if you can as if it resided outside the internal network. Put it in the DMZ perhaps. Note that I'm not claiming that the solution is one of network configuration, I'm just saying that the solution should be at the same level at which it is solved if the visitor would try to access the page directly.
Make the web application jump through the same hoops the external visitor would have to jump. Let it fail to access resources the external visitors would have failed to access, too. Provide an error page that does not let the external visitor distinguish between "page not found" and "access denied".
The wininet DLL has a function InternetCheckConnection.
Also look at InternetGetConnectedState.
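A P/Invoke sketch for calling it from C# (the signature follows the wininet documentation; WinINet is a legacy API, so treat this as illustrative):

using System.Runtime.InteropServices;

static class WinInet
{
    private const int FLAG_ICC_FORCE_CONNECTION = 0x01;

    [DllImport("wininet.dll", CharSet = CharSet.Auto, SetLastError = true)]
    private static extern bool InternetCheckConnection(string lpszUrl, int dwFlags, int dwReserved);

    // Returns true if a connection to the given URL can be established.
    public static bool CanReach(string url)
    {
        return InternetCheckConnection(url, FLAG_ICC_FORCE_CONNECTION, 0);
    }
}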
You are asking the wrong question. You should be asking, how can I limit access to a given URL so that only people on a certain network can access it?
The fact is that you cannot test in the way you wanted, because you likely do not have access to other sites on the same web server in order to run a script that attempts to retrieve a URL. It is better to deny all access except the access that you wish to allow.
Perhaps a firewall could do this for you, but if you want more fine-grained control, so that some URLs are wide open and others are restricted, then you probably need either help from the web server software or to code this into the application that serves the restricted URLs.
If you are worried that your web application might be used to transfer data that comes from other servers protected by the same firewall which protects you, then you should change the application to disallow any URLs where the domain name portion of the URL resolves to an IP address in the range which is protected by the firewall. You can get that address range information from the firewall administrator.
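A sketch of that check (the private IPv4 ranges are hard-coded here; substitute the actual protected range from your firewall administrator):

using System.Linq;
using System.Net;
using System.Net.Sockets;

static bool ResolvesToProtectedAddress(string host)
{
    return Dns.GetHostAddresses(host).Any(ip =>
    {
        if (IPAddress.IsLoopback(ip))
            return true;
        if (ip.AddressFamily != AddressFamily.InterNetwork)
            return false; // this sketch only covers IPv4

        byte[] b = ip.GetAddressBytes();
        return b[0] == 10                                // 10.0.0.0/8
            || (b[0] == 172 && b[1] >= 16 && b[1] <= 31) // 172.16.0.0/12
            || (b[0] == 192 && b[1] == 168);             // 192.168.0.0/16
    });
}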
This is only really a concern on in-house systems, because in third-party data centers there should not be any private servers that lack their own protection. In other words, if it is at your company, they may expect their firewall to protect the whole data center, and that is reasonable, if a bit risky. But when you rent hosting from a third party with a data center on the Internet, you have to assume that everything inside that data center is just as potentially hostile as the stuff outside.