I have some wrapper code that runs a set of NUnit tests that scan live websites for certain response codes.
I'd like to run these tests against a different server. When testing manually, I can do this by editing the hosts file at Windows\System32\drivers\etc\hosts and temporarily pointing www.mysite.com at 10.0.0.whatever.
Is there any way I can do the same within a .NET console application - temporarily override a DNS record or somehow intercept the resolution and return a different IP address?
EDIT: This is for testing multiple servers in a web farm. I have three live servers, all of which THINK they are www.example.com. Because the servers use HTTP host headers, I can't just run a test against server1, then server2, then server3, because an HTTP request to http://server1/ will NOT return the same thing as a request to http://www.example.com/ that's resolved to server1...
In the past, with C++, I was able to hook WSOCK32.DLL's gethostbyname function and reroute DNS requests. I used the Microsoft Detours library to do that.
As for C#, I found this: http://easyhook.codeplex.com/ and maybe it will help you. Basically, you can hook the gethostbyname Windows function and execute your own code or return a different result (a different IP).
The other possible solution is to temporarily (and programmatically) edit the hosts file from your own code when the application starts and ends; a sketch of that follows.
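If you go the hosts-file route, here is a minimal sketch of what I mean (the class and method names are mine; writing to the hosts file needs admin rights, and Windows caches DNS lookups, so you may also have to flush the cache, e.g. with ipconfig /flushdns, between runs):

using System;
using System.IO;

public static class HostsFileOverride
{
    // Path to the Windows hosts file: C:\Windows\System32\drivers\etc\hosts
    private static readonly string HostsPath = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.System),
        @"drivers\etc\hosts");

    public static void RunWithOverride(string ip, string host, Action testRun)
    {
        string original = File.ReadAllText(HostsPath);
        try
        {
            // Append the temporary mapping, e.g. "10.0.0.1 www.example.com"
            File.AppendAllText(HostsPath, Environment.NewLine + ip + " " + host);
            testRun();
        }
        finally
        {
            // Put the untouched hosts file back, even if the tests throw
            File.WriteAllText(HostsPath, original);
        }
    }
}

You would call it once per server, e.g. HostsFileOverride.RunWithOverride("10.0.0.1", "www.example.com", RunTests);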
EDIT: I found my old C++ code, maybe it will give you a hint what to do.
// host and hostIP are globals set elsewhere: the hostname to reroute and
// the replacement address bytes for an IPv4 host entry.
struct hostent FAR * WSAAPI MyGetHostByName(IN const char FAR * name)
{
    // Call the original gethostbyname through the saved function pointer
    struct hostent* ret = GetHostByNameFunction(name);

    // Check if it's the hostname you want to reroute
    if (ret != NULL && strcmp(host, name) == 0)
    {
        // Swap the first address in the list for the replacement IP
        ret->h_addr_list[0] = hostIP;
        ret->h_length = 4;  // byte length of an IPv4 address
    }

    // Return the (possibly patched) result
    return ret;
}
EDIT2: Found another link with a newer release of EasyHook.
I wrote an application that uses gRPC with an ASP.NET client/server.
The application is built to watch certain operations and report on them.
Because I also want the application to be compatible with Linux, I turned it into a DLL; then, for Windows, I made a little exe that uses it, with an icon in the systray.
It works on Windows, and I still have to test it on Linux, except for one part: I want to get the addresses used by the server after launching. It works if I turn the DLL back into a console app, but with the short "launcher" it's impossible (added in an edit) to get the addresses with the code I wrote (shown below), BUT you can still connect to the server (for example, I changed the port in the config file to verify it was not a default address, and it works). My problem is only retrieving the addresses after launch; I need them to add the option of receiving an email with the addresses to connect to, showing them in the systray, and so on.
Yesterday I found this way to get the addresses in use:
public void Configure(IApplicationBuilder app, ...)
{
    // IServerAddressesFeature lives in Microsoft.AspNetCore.Hosting.Server.Features
    string serverAddress = app.ServerFeatures.Get<IServerAddressesFeature>()
        .Addresses.FirstOrDefault();
    // ---
}
With the short launcher the string is null, while it works when I turn the DLL back into a program and launch it directly.
I would like an easy way to do this, without big modifications, whether I build the app for one system or the other. I chose the most compatible approach and would like to stay on that path if possible. I know I could add an identifier to detect which OS I am launching on, but I hoped to find another way in case I want to do something else with this app.
Edit:
It seems it could be a problem with having a systray app launch Kestrel. I'm a beginner with ASP.NET. I copied the contract and constructor into the systray project and I get the same problem (I used a different port): I can connect, but IApplicationBuilder doesn't have the list of addresses in use.
I'm trying to write my own controller for a USB device instead of using the SDK that comes with the product (I feel the SDK is sub-par).
The USB Device is plugged into the SAME SERVER that this application is running on.
So I decided to head over to Nuget and grab the HidLibrary
PM> Install-Package hidlibrary
and I proceeded to follow the example found on GitHub.
First I went into Control Panel to verify the VendorID and the ProductID
And I dropped it into my code.
Then I set a breakpoint on the line that grabs the device, but unfortunately it always comes back null.
using System;
using System.Linq;
using System.Web.Http;
using HidLibrary;

public class MyController : ApiController
{
    private const int VendorId = 0x0BC7;
    private const int ProductId = 0x0001;

    private static HidDevice _device;

    // POST api/<controller>
    public string Post(CommandModel command)
    {
        _device = HidDevices.Enumerate(VendorId, ProductId).FirstOrDefault();

        if (_device != null)
        {
            // getting here means the device exists
        }
        else
        {
            // ending up here means the device doesn't exist
            throw new Exception("device not connected");
        }

        return null;
    }
}
I'm hoping it's something silly, and not some deal-breaking permissions issue regarding connecting to a USB device directly from an IIS worker.
Despite your hopes, it is not something silly. You have some deal-breaking permission issues. If you browse Mike O'Brien's HidLibrary code on GitHub, you will see that it calls Win32 API functions located in kernel32.dll, setupapi.dll, user32.dll, and hid.dll (Native.cs).
The enumeration itself is done through setupapi.dll functions. It walks all the installed devices and filters out what it needs.
So... I think it's a security issue to execute kernel32.dll code directly from a web app in IIS with anonymous authentication, don't you?
If you really need to communicate with that HID device (who knows, maybe it's a temperature sensor, or something else), I would build a separate Windows service, and the IIS-hosted web app would communicate with this service through WCF. The service would act like a proxy.
Put the same code in a console application and run it. That will help you verify if it's your code or environment.
If it's the environment, try using Process Monitor to see if there are any hidden access errors. Also try enumerating all devices, not just looking for the one device you're after, just to see if you can do it in ASP.NET.
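To combine both suggestions, a console sketch along these lines (same HidLibrary calls as above; I'm assuming the parameterless Enumerate() overload the library exposes next to the (vendorId, productId) one) would list every HID device the process can see:

using System;
using HidLibrary;

class Program
{
    static void Main()
    {
        // Print every HID device visible to this process. If the target
        // device shows up here but not from the IIS worker, the problem
        // is the hosting environment, not the code.
        foreach (HidDevice device in HidDevices.Enumerate())
        {
            Console.WriteLine("{0} (VID 0x{1:X4}, PID 0x{2:X4})",
                device.DevicePath,
                device.Attributes.VendorId,
                device.Attributes.ProductId);
        }
    }
}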
@Chase, unless this is an experiment, it is best not to attempt connecting to a device from the IIS process. (It's a Pandora's box if you start down this path.)
The best way to do this is to have another (WCF) service act as a proxy to the device and expose just what you need out of the service, then hook it up with your app. Feel free to ask for an example if you think that would help.
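Since you asked, a minimal sketch of what the contract for such a proxy service could look like (the interface name and the raw byte[] report shape are my assumptions, not a finished design):

using System.ServiceModel;

// The Windows service owns the HID device; the IIS-hosted app only ever
// talks to this contract, never to the USB stack directly.
[ServiceContract]
public interface IHidProxyService
{
    [OperationContract]
    bool IsDeviceConnected();

    [OperationContract]
    byte[] ReadReport();

    [OperationContract]
    void WriteReport(byte[] report);
}

The service would host an implementation of this with a ServiceHost, and the web app would consume it through a ChannelFactory<IHidProxyService> (or a generated client).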
I +1 @garzanti.
Is anyone else having a difficult time getting Twitter's OAuth callback URL to hit their localhost development environment?
Apparently it has been disabled recently. http://code.google.com/p/twitter-api/issues/detail?id=534#c1
Does anyone have a workaround? I don't really want to stop my development.
Alternative 1.
Set up your hosts file (Windows\System32\drivers\etc\hosts on Windows, /etc/hosts elsewhere) to point a live domain to your localhost IP, such as:
127.0.0.1 xyz.example
where xyz.example is your real domain.
Alternative 2.
Also, the linked article gives the tip to alternatively use a URL-shortener service: shorten your local URL and provide the result as the callback.
Alternative 3.
Furthermore, it seems that it works to provide for example http://127.0.0.1:8080 as callback to Twitter, instead of http://localhost:8080.
I just had to do this last week. Apparently localhost doesn't work, but 127.0.0.1 does. Go figure.
This of course assumes that you are registering two apps with Twitter: one for your live www.mysite.example and another for 127.0.0.1.
Just put http://127.0.0.1:xxxx/ as the callback URL, where xxxx is the port for your framework
Yes, it was disabled because of the recent security issue that was found in OAuth. The only solution for now is to create two OAuth applications - one for production and one for development. In the development application you set your localhost callback URL instead of the live one.
Edit the callback URL from:
http://localhost:8585/logintwitter.aspx
to:
http://127.0.0.1:8585/logintwitter.aspx
This is how I did it:
Registered Callback URL:
http://127.0.0.1/Callback.aspx
OAuthTokenResponse authorizationTokens = OAuthUtility.GetRequestToken(
    ConfigSettings.getConsumerKey(),
    ConfigSettings.getConsumerSecret(),
    "http://127.0.0.1:1066/Twitter/Callback.aspx");
ConfigSettings:
public static class ConfigSettings
{
    public static String getConsumerKey()
    {
        return System.Configuration.ConfigurationManager.AppSettings["ConsumerKey"];
    }

    public static String getConsumerSecret()
    {
        return System.Configuration.ConfigurationManager.AppSettings["ConsumerSecret"];
    }
}
Web.config:
<appSettings>
  <add key="ConsumerKey" value="xxxxxxxxxxxxxxxxxxxx"/>
  <add key="ConsumerSecret" value="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"/>
</appSettings>
Make sure you set the 'use dynamic ports' property of your project to 'false' and enter a static port number instead. (I used 1066.)
I hope this helps!
Use http://smackaho.st
What it does is a simple DNS association to 127.0.0.1, which allows you to bypass the filters on localhost or 127.0.0.1:
smackaho.st. 28800 IN A 127.0.0.1
So if you click on the link, it will display what you have on your local webserver (and if you don't have one, you'll get a 404). You can of course set it to any page/port you want:
http://smackaho.st:54878/twitter/callback
I was working with a Twitter callback URL on my localhost. If you are not sure how to create a virtual host (this is important), use Ampps. It is really simple and easy. In a few steps you have your own virtual host, and then every URL will work on it. For example:
download and install Ampps
Add a new domain. (Here you can set, for example, twitter.local.) That means your virtual host will be http://twitter.local, and it will work after step 3.
I am working on Windows, so go to your hosts file (C:\Windows\System32\Drivers\etc\hosts) and add the line: 127.0.0.1 twitter.local
Restart Ampps and you can use your callback. You can specify any URL, even if you are using an MVC framework or htaccess URL rewriting.
Hope this helps!
Cheers.
It seems that nowadays http://127.0.0.1 has also stopped working.
A simple solution is to use http://localtest.me instead of http://localhost: it always points to 127.0.0.1, and you can even add any arbitrary subdomain to it and it will still point to 127.0.0.1.
When I develop locally, I always set up a locally hosted dev name that reflects the project I'm working on. I set this up in xampp through xampp\apache\conf\extra\httpd-vhosts.conf and then also in \Windows\System32\drivers\etc\hosts.
So if I am setting up a local dev site for example.com, I would set it up as example.dev in those two files.
Short Answer: Once this is set up properly, you can simply treat this url (http://example.dev) as if it were live (rather than local) as you set up your Twitter Application.
A similar answer was given here: https://dev.twitter.com/discussions/5749
Direct Quote (emphasis added):
You can provide any valid URL with a domain name we recognize on the application details page. OAuth 1.0a requires you to send an oauth_callback value on the request token step of the flow, and we'll accept a dynamic localhost-based callback on that step.
This worked like a charm for me. Hope this helps.
It can be done very conveniently with Fiddler:
Open menu Tools > HOSTS...
Insert a line like 127.0.0.1 your-production-domain.com, make sure that "Enable remapping of requests..." is checked. Don't forget to press Save.
If access to your real production server is needed, simply exit Fiddler or disable remapping.
Starting Fiddler again will turn on remapping (if it is checked).
A pleasant bonus is that you can specify a custom port, like this:
127.0.0.1:3000 your-production-domain.com (something that is impossible to achieve via the hosts file). Also, instead of an IP you can use any domain name (e.g., localhost).
This way, it is possible (but not necessary) to register your Twitter app only once (provided that you don't mind using the same keys for local development and production).
Edit this function in TwitterAPIExchange.php at line 180:
public function performRequest($return = true)
{
    if (!is_bool($return))
    {
        throw new Exception('performRequest parameter must be true or false');
    }

    $header = array($this->buildAuthorizationHeader($this->oauth), 'Expect:');

    $getfield = $this->getGetfield();
    $postfields = $this->getPostfields();

    $options = array(
        CURLOPT_HTTPHEADER => $header,
        CURLOPT_HEADER => false,
        CURLOPT_URL => $this->url,
        CURLOPT_RETURNTRANSFER => true,
        // Disabling SSL verification is what makes this work locally,
        // but it is unsafe for production.
        CURLOPT_SSL_VERIFYPEER => false,
        CURLOPT_SSL_VERIFYHOST => false
    );

    if (!is_null($postfields))
    {
        $options[CURLOPT_POSTFIELDS] = $postfields;
    }
    else
    {
        if ($getfield !== '')
        {
            $options[CURLOPT_URL] .= $getfield;
        }
    }

    $feed = curl_init();
    curl_setopt_array($feed, $options);
    $json = curl_exec($feed);
    curl_close($feed);

    if ($return) { return $json; }
}
I had the same challenge and I was not able to give localhost as a valid callback URL. So I created a simple domain to help us developers out:
https://tolocalhost.com
It will redirect any path to your localhost domain and port you need. Hope it can be of use to other developers.
Set the callback URL in the Twitter app to 127.0.0.1:3000,
and set WEBrick to bind to 127.0.0.1 instead of 0.0.0.0.
Command: rails s -b 127.0.0.1
Looks like Twitter now allows localhost alongside whatever you have in the Callback URL settings, so long as there is a value there.
I struggled with this and followed a dozen solutions; in the end, all I had to do to work with any SSL APIs on localhost was:
Download the cacert.pem file.
In php.ini, un-comment and change:
curl.cainfo = "c:/wamp/bin/php/php5.5.12/cacert.pem"
You can find where your php.ini file is on your machine by running php --ini in your CLI
I placed my cacert.pem in the same directory as php.ini for ease.
These are the steps that worked for me to get Twitter working with a local application on my laptop:
go to apps.twitter.com
enter the name, app description, and your site URL
Note: for localhost:8000, use 127.0.0.1:8000, since the former will not work
enter the callback URL matching the TWITTER_REDIRECT_URI defined in your application
Note: e.g. http://127.0.0.1/login/twitter/callback (localhost will not work)
Important: enter both the "privacy policy" and "terms of use" URLs if you wish to request the user's email address
check the agree to terms checkbox
click [Create Your Twitter Application]
switch to the [Keys and Access Tokens] tab at the top
copy the "Consumer Key (API Key)" and "Consumer Secret (API Secret)" to TWITTER_KEY and TWITTER_SECRET in your application
click the "Permissions" tab and set appropriately to "read only", "read and write" or "read, write and direct message" (use the least intrusive option needed for your application, for just and OAuth login "read only" is sufficient
Under "Additional Permissions" check the "request email addresses from users" checkbox if you wish for the user's email address to be returned to the OAuth login data (in most cases check yes)
First question!
Environment
MVC, C#, AppHarbor.
Problem
I am calling an openid provider, and generating an absolute callback url based on the domain.
On my local machine, this works fine if I hit http://localhost:12345/login
Request.Url; //gives me `http://localhost:12345/callback`
However, on AppHarbor, where I'm deploying, because they use non-standard ports, even if I hit it at "http://sub.example.com/login"
Request.Url; //gives me http://sub.example.com:15232/callback
And this screws up my callback, because the port number wasn't in the original source url!
I've tried
Request.Url
Request.Url.OriginalString
Request.RawUrl
All give me "http://sub.example.com:15232/callback".
Also to clear up that this isn't a Realm issue, the error message I am getting from DotNetOpenAuth is
'http://sub.example.com:14107/accounts/openidcallback' not under realm 'http://*.example.com/'.
I don't think I've stuffed that up?
Now, I'm about to consider some hacky stuff like
preprocessor commands (#IF DEBUG THEN PUT PORT)
string replace (Request.URL.Contains("localhost"))
All of these are not 100% solutions, but I'm sick of mulling over what could be a simple property that I am missing. I have also read this but that doesn't seem to have an accepted answer (and is more about the path rather than the authority). So I'm putting it towards you guys.
Summary
So if I had http://localhost:12345/login, I need to get http://localhost:12345/callback from the Request context.
And if I had "http://sub.example.com/login", I should get "http://sub.example.com/callback", regardless of what port it is on.
Thanks! (Sleep time, will answer any questions in the morning)
This is a common problem in load balanced setups like AppHarbor's - we've provided an example workaround.
Update: A more desirable solution for many ASP.NET applications may be to set the aspnet:UseHostHeaderForRequestUrl appSetting to true. We (AppHarbor) have seen several customers experience issues using it with their WCF apps, which is why we haven't enabled it by default and still recommend the above solution for those situations. You can configure it using AppHarbor's "Configuration Variables" to inject the appsetting when deployed. More information can be found in this article.
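For reference, the setting itself is just an appSetting; in web.config it would look like this (the value being what you inject through the configuration variable):

<appSettings>
  <add key="aspnet:UseHostHeaderForRequestUrl" value="true" />
</appSettings>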
I recently ran into an issue where I compared a URL to the current URL, and then highlighted navigation based on that. It worked locally, but not in production.
I had http://example.com/path/to/file.aspx as my file, but when viewing that file and running Request.Url.ToString() it produced https://example.com:81/path/to/file.aspx in a load balanced production environment.
Now I am using Request.Url.AbsolutePath to just give me /path/to/file.aspx, thus ignoring the schema, hostname, and port numbers.
When I need to compare it to the URL on each navigation item I used:
new Uri(theLink.Href).AbsolutePath
My initial thought is to get the referrer variable and check whether it includes a port; if so, use it, otherwise don't.
If that's not an option, because a proxy might remove the referrer header, then you might need some client-side script to get the location and pass it back to the server.
I'm guessing that AppHarbor uses port forwarding to the IIS server, so even though the site is publicly on port 80, IIS hosts it on another port and can't know which port the client connected on.
Something like:
string port = Request.ServerVariables["SERVER_PORT"] == "80"
    ? ""
    : ":" + Request.ServerVariables["SERVER_PORT"];
string virtualRoot = Url.Content("~/");

// virtualRoot already ends with "/", so append "callback" without a leading slash
destinationUrl = String.Format("http://{0}{1}{2}{3}",
    Request.ServerVariables["SERVER_NAME"], port, virtualRoot, "callback");
If you use the UriBuilder class in the framework, you can easily get around this. On the builder, if you set the port to -1, the port number is removed:
new UriBuilder("http://sub.example.com:15232/callback") { Port = -1 }
returns: http://sub.example.com/callback
To keep the port number on a local machine, just check Request.IsLocal and don't apply -1 to the port.
I would wrap this in an extension method to keep it clean; a sketch follows.
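For example (a sketch; the method name is mine):

using System;
using System.Web;

public static class UriExtensions
{
    // Strips the port unless the request is local, so
    // http://sub.example.com:15232/callback becomes
    // http://sub.example.com/callback on the load-balanced host.
    public static string ToPublicUrl(this Uri uri, HttpRequestBase request)
    {
        var builder = new UriBuilder(uri);
        if (!request.IsLocal)
        {
            builder.Port = -1; // -1 drops the port from the rendered URL
        }
        return builder.Uri.ToString();
    }
}

Then the callback is just Request.Url.ToPublicUrl(Request).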
I see that this is an old thread. I had this issue running MVC 5 on IIS 7.5, with an Apache proxy in front. Outside of the server I would get "Empty Response", since the ASP.NET app builds its URLs from what Apache forwards, custom port included.
To have the app redirect to a subpath without including the "custom" port, forget the Response/Request objects and use the TransferRequest method. For instance, to redirect users automatically to the login page when they are not logged in already:
if (!User.Identity.IsAuthenticated)
Server.TransferRequest("Account/Login");
I have a web application that you can use to import information from another site by giving it a url. It's been pointed out that you could use this feature to access a private site that is hosted on the same web server.
So...
How can I check that a given url is publicly accessible (whether on the same web server or somewhere different)?
FIX:
I ended up doing this:
protected static bool IsHostWithinSegment(string host)
{
    Ping pinger = new Ping();
    string data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
    byte[] buffer = Encoding.ASCII.GetBytes(data);

    // TTL of 1: the reply only succeeds if the host is a single hop away
    PingOptions options = new PingOptions();
    options.Ttl = 1;

    PingReply reply = pinger.Send(host, 1000, buffer, options);
    return reply.Status == IPStatus.Success;
}

private static Uri BindStringToURI(string value)
{
    Uri uri;
    if (Uri.TryCreate(value, UriKind.Absolute, out uri))
        return uri;

    // Try prepending the default scheme
    value = string.Format("{0}://{1}", "http", value);
    if (Uri.TryCreate(value, UriKind.Absolute, out uri))
        return uri;

    return null;
}
The only requirement of mine that it doesn't fulfil is that some installations of our product will exist alongside each other, and you won't be able to import information across them. I suspect this would require a proxy server to get an external view of things, but as it's not a requirement for my project I'll leave it for someone else.
-- I've just realised that this does entirely solve my problem, since all the publicly accessible URLs resolve to virtual or routable IPs, meaning they hop.
Run a traceroute (a series of pings with short TTLs) to the address; if the firewall(s) is (are) one of the hops, then the address is visible from outside the organisation, so it should be acceptable.
System.Net.NetworkInformation has a Ping class that should give you enough information for a tracert-like routine.
This does sound like a big hole, though; another approach should probably be considered. Preventing the machine that runs this program from accessing any other machine on the internal network may be better: a kind of internal firewall.
I've added a simple traceroute, since you like the concept:-
class Program
{
    static void Main(string[] args)
    {
        PingReply reply = null;
        PingOptions options = new PingOptions();
        options.DontFragment = true;
        Ping p = new Ping();

        // Increase the TTL one hop at a time until the destination answers
        for (int n = 1; n < 255 && (reply == null || reply.Status != IPStatus.Success); n++)
        {
            options.Ttl = n;
            reply = p.Send("www.yahoo.com", 1000, new byte[1], options);

            if (reply.Address != null)
                Console.WriteLine(n.ToString() + " : " + reply.Address.ToString());
            else
                Console.WriteLine(n.ToString() + " : <null>");
        }

        Console.WriteLine("Done.");
        System.Console.ReadKey();
    }
}
Should be good enough for a reliable local network.
Only two things spring to mind.
Have a trusted external server verify the visibility of the address (like an HTTP Proxy)
Check the DNS record on the site; if it resolves to something internal (127.0.0.1, 10.*, 192.168.*, etc.) then reject it. Of course, this might not work depending on how your internal network is set up.
Not knowing if this is on a 3rd-party hosting solution or inside your/your company's internal network makes it hard to say which solution would be best; good luck.
EDIT: On second thought, I've canceled the second suggestion as it would still leave you open to DNS rebinding. I'll leave this here for that purpose, but I don't think it's a good idea.
That said, if you have some ability to control the network makeup for this server, then it should probably live in its own world, dedicated, with nothing else on its private network.
Check the URL address, and see if it matches your server address?
edit: or check against a range of addresses...
But all this does not answer the question: could the client access it?
Maybe some script in the browser could check that the URL is accessible and inform the server of the result.
But the user could edit the page, or simulate the result...
Have the client read the URL contents and send them back to the server, instead of having the server fetch them?
Don't worry about the public accessibility of anyone else's web assets, that question does not have a definite answer in all cases. Just try not to compromise the access policy to your own (or your customer's etc.) web assets.
Use the existing access control mechanisms to control the web application's access. Don't just consult the access control mechanisms in order to duplicate them in the web application. That would be relying on the web application to refrain from using its full access - a false reliance if the web application ever gets compromised or if it simply has a bug in the access control duplication functionality. See http://en.wikipedia.org/wiki/Confused_deputy_problem.
Since the web application acts as a deputy of external visitors, treat it, if you can, as if it resided outside the internal network. Put it in the DMZ, perhaps. Note that I'm not claiming the solution is one of network configuration; I'm just saying that the solution should sit at the same level at which it would be enforced if the visitor tried to access the page directly.
Make the web application jump through the same hoops the external visitor would have to jump through. Let it fail to access resources the external visitors would have failed to access, too. Provide an error page that does not let the external visitor distinguish between "page not found" and "access denied".
The wininet DLL has a function, InternetCheckConnection.
Also look at InternetGetConnectedState.
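A quick P/Invoke sketch of both calls (declarations from the wininet documentation as I remember it; verify against MSDN before relying on them):

using System.Runtime.InteropServices;

static class WinInet
{
    // Forces a real connection attempt rather than consulting cached state
    private const int FLAG_ICC_FORCE_CONNECTION = 0x1;

    [DllImport("wininet.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern bool InternetCheckConnection(
        string lpszUrl, int dwFlags, int dwReserved);

    [DllImport("wininet.dll", SetLastError = true)]
    private static extern bool InternetGetConnectedState(
        out int lpdwFlags, int dwReserved);

    // True if wininet can actually open a connection to the URL
    public static bool CanReach(string url)
    {
        return InternetCheckConnection(url, FLAG_ICC_FORCE_CONNECTION, 0);
    }

    // True if the machine reports any active connection at all
    public static bool IsConnected()
    {
        int flags;
        return InternetGetConnectedState(out flags, 0);
    }
}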
You are asking the wrong question. You should be asking, how can I limit access to a given URL so that only people on a certain network can access it?
The fact is that you cannot test in the way that you wanted, because you likely do not have access to other sites on the same web server in order to run a script that attempts to retrieve a URL. It is better to deny all access except the access that you wish to allow.
Perhaps a firewall could do this for you, but if you want more fine-grained control, so that some URLs are wide open and others are restricted, then you probably need help either from the web server software or from code in the application that serves the restricted URLs.
If you are worried that your web application might be used to transfer data that comes from other servers protected by the same firewall that protects you, then you should change the application to disallow any URL whose domain name resolves to an IP address in the range the firewall protects. You can get that address-range information from the firewall administrator; a sketch of such a check follows.
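A sketch of that check (the ranges below are the standard RFC 1918 private blocks plus loopback; substitute the range your firewall actually protects):

using System;
using System.Linq;
using System.Net;
using System.Net.Sockets;

static class UrlGuard
{
    // True if the URL's host resolves to a loopback or private IPv4 address
    public static bool ResolvesToPrivateAddress(string url)
    {
        string host = new Uri(url).Host;
        return Dns.GetHostAddresses(host)
            .Where(a => a.AddressFamily == AddressFamily.InterNetwork)
            .Any(IsPrivate);
    }

    private static bool IsPrivate(IPAddress address)
    {
        byte[] b = address.GetAddressBytes();
        return IPAddress.IsLoopback(address)
            || b[0] == 10                                 // 10.0.0.0/8
            || (b[0] == 172 && b[1] >= 16 && b[1] <= 31)  // 172.16.0.0/12
            || (b[0] == 192 && b[1] == 168);              // 192.168.0.0/16
    }
}

Note that, as the earlier answer points out, a resolve-then-fetch check is still exposed to DNS rebinding; resolve once and connect to the address you validated.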
This is only really a concern on in-house systems, because in third-party data centers there should not be any private servers that lack their own protection. In other words, if it is at your company, they may expect their firewall to protect the whole data center, and that is reasonable, if a bit risky. But when you rent hosting from a third party with a data center on the Internet, you have to assume that everything inside that data center is just as potentially hostile as the stuff outside.