Scope:
I am trying to get all my HTTP requests issued via C# routed through the Tor network.
After some quick research I found some Stack Overflow questions like This One and This One, so I followed their examples and tried it myself.
Code Sample:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
    "http://whatismyipaddress.com/"
);
request.Proxy = new WebProxy("127.0.0.1:8118");
using (var response = request.GetResponse())
{
    using (var reader = new StreamReader(
        response.GetResponseStream(),
        Encoding.GetEncoding("utf-8")
    ))
    {
        string resp = reader.ReadToEnd();
    }
}
Results:
I have Privoxy installed and running (netstat -b -a shows it running/listening on port 8118).
The request is not logged by the Privoxy client, although the request itself appears to succeed.
The Problem:
As user @Junior Mayhé pointed out, I had to uncomment this line in the Privoxy config file:
forward-socks5 / 127.0.0.1:9050
After doing so, my web requests started failing with Error 503 - Server Unavailable.
I have tried starting the Tor Browser, but it still raises this error.
What am I doing wrong?
Edit One:
After playing a bit with netstat -b -a, it seems the Tor bundled with the Tor Browser is actually listening on port 9151 instead of port 9050 as stated in those older questions.
After changing the port number in the Privoxy config file to 9151, I no longer get the Server Unavailable error; instead I get an Operation Timed Out. I have already increased the request timeout (both the connection timeout and the read/write timeout) to two minutes, and I still get this error.
Maybe you are missing a trailing period? That's what I have in my config file:
forward-socks5 / 127.0.0.1:9050 .
Use port 9150 in the Privoxy settings. That worked for me.
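Putting the pieces together, a minimal sketch of the working setup (assuming Privoxy listens on 8118 and its config forwards to the Tor Browser's SOCKS listener on 9150, trailing period included). The check.torproject.org page conveniently reports whether the request really arrived via Tor:

// Privoxy config line (note the trailing period):
//   forward-socks5 / 127.0.0.1:9150 .
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("https://check.torproject.org/");
request.Proxy = new WebProxy("127.0.0.1:8118"); // Privoxy's listening port
using (var response = request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
{
    // The returned page states whether the request came through the Tor network.
    Console.WriteLine(reader.ReadToEnd());
}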
Related
I am stuck with this. I am calling a simple report server URL which returns a report's PDF, but strangely the WebRequest.GetResponse method doesn't return anything. By that I mean the code just stops executing at that point: no exception, no error, no status code, no event viewer log on the server, nothing! So I am not able to debug it.
This is my code
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
req.PreAuthenticate = true;
req.UseDefaultCredentials = true;
req.Credentials = CredentialCache.DefaultNetworkCredentials;
req.ImpersonationLevel = TokenImpersonationLevel.Impersonation;
req.Timeout = int.MaxValue;
Log.Write("Log before");
var response = req.GetResponse();
Log.Write("Log after");
It just prints the "Log before" entry and then nothing is printed after that.
This code works perfectly fine when I run it through Visual Studio, but stops working when it is deployed to the dev and test servers!
I would expect it to at least throw an exception or return an Unauthorized or some other status code; then I would be able to debug the issue.
Any suggestions what I can try to debug it?
Have you tried leaving it for 24 days 20 hours 31 minutes and 24 seconds? In other words, have you left it for as long as you have set the timeout to?
The server is not returning a response and the code is waiting int.MaxValue milliseconds to tell you that. The most likely cause is that there is a piece of networking infrastructure between your client and server that is stopping the request, such as a firewall or proxy. It may also be caused by the server not liking the request and refusing to respond.
Things I would try:
Try accessing the URL though a web browser on the machine that the code is failing on.
Set the timeout to something sensible, like one minute, and run the request to timeout (see the sketch after this list).
Try pinging the remote server from the client machine.
Use a product like Fiddler to check what is actually being sent and received.
Have a chat with your network provider to see if they can help.
Check the server logs to see if the server has erred.
Change the first log to Log.Write("Log before: " + url); to check what is actually being requested.
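For the timeout suggestion, a minimal sketch might look like this (url and Log are the names from the question; the one-minute value is an assumption):

HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
req.UseDefaultCredentials = true;
req.Timeout = 60 * 1000;          // one minute instead of int.MaxValue
req.ReadWriteTimeout = 60 * 1000;
try
{
    using (var response = (HttpWebResponse)req.GetResponse())
    {
        Log.Write("Log after, status: " + response.StatusCode);
    }
}
catch (WebException ex)
{
    // A timeout surfaces here with ex.Status == WebExceptionStatus.Timeout;
    // a server error response, if any, is available via ex.Response.
    Log.Write("Request failed: " + ex.Status + " - " + ex.Message);
}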
I have some code that calls HttpWebRequest's GetResponse() method to retrieve HTML from a URL and return it to the calling method.
This has been working perfectly fine within my Development and QA environments but now that I have uploaded it to my UAT server, I keep getting the following error:
The remote server returned an error: (404) Not Found.
The main difference between Dev/QA and UAT is that UAT uses SSL/HTTPS-based URLs whereas Dev/QA uses HTTP. I introduced the following line of code to help progress me a little further:
ServicePointManager.ServerCertificateValidationCallback = new System.Net.Security.RemoteCertificateValidationCallback(AcceptAllCertifications);
where AcceptAllCertifications always returns true but I still get my 404 error.
I gather that people who previously had this error were able to resolve it by merely ensuring the URI used for the HttpWebRequest doesn't have a slash at the end (see: Simple HttpWebRequest over SSL (https) gives 404 Not Found under C#), but this makes no difference for me.
I have now tried what was suggested in this post (see: HttpWebResponse returns 404 error), where I render the exception on the page. This bypassed the yellow warning screen and gave me a bit more information, including the URL it is trying to get a response from. However, when I copy and paste the URL into my browser, it works perfectly fine and renders the HTML on the page. I'm quite satisfied, therefore, that the correct URL is being used in the GetResponse call.
Has anyone got any ideas as to what may be causing me this grief? As said, it only seems to be a problem on my UAT server where I am using SSL.
Here is my code to assist:
public static string GetHtmlValues()
{
    var webConfigParentUrlValue = new Uri(ConfigurationManager.AppSettings["ParentUrl"]);
    var destinationUrl = HttpContext.Current.Request.Url.AbsoluteUri;
    var path = "DestinationController" + "/" + "DestinationAction" + "?destinationUrl=" + destinationUrl;
    var redirect = new Uri(webConfigParentUrlValue, path).AbsoluteUri;

    ServicePointManager.ServerCertificateValidationCallback =
        new System.Net.Security.RemoteCertificateValidationCallback(AcceptAllCertifications);

    var request = (HttpWebRequest)WebRequest.Create(redirect);

    //Ensures that if the user has already signed in to the application,
    // their authorisation is carried on through to this new request
    AttachAuthorisedCookieIfExists(request);

    HttpWebResponse result;
    try
    {
        result = (HttpWebResponse)request.GetResponse();
    }
    catch (WebException ex)
    {
        // Note: ex.Response can be null (e.g. on a timeout), in which case
        // the code below will throw a NullReferenceException.
        result = ex.Response as HttpWebResponse;
    }

    string responseString;
    using (Stream stream = result.GetResponseStream())
    {
        StreamReader reader = new StreamReader(stream, Encoding.UTF8);
        responseString = reader.ReadToEnd();
    }
    return responseString;
}
More details of the error as it is rendered on the page: [error screenshot not reproduced in this text]
I ran into a similar situation, but with a different error message. My problem turned out to be that my UAT environment was Windows 2008 with .NET 4.5. In this environment the SSL handshake/negotiation is performed differently than in most web browsers. So I was seeing the URL render without error in a web browser, but my application would generate an error. My error message included "The underlying connection was closed: An unexpected error occurred on a send". This might be your issue.
My solution was to force the protocol change. I detect the specific error, then I force a change in the security protocol of my application and try again.
This is the code I use:
catch (Exception ex)
{
    if (ex.Message.Contains("The underlying connection was closed: An unexpected error occurred on a send."))
    {
        ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3;
        // retry the retrieval
    }
}
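For completeness, a hedged sketch of the detect-and-retry flow described above (Fetch is a hypothetical helper that wraps GetResponse and reads the body; on current frameworks SecurityProtocolType.Tls12 would be a safer choice than the Ssl3 value this answer used):

static string FetchWithProtocolFallback(string url)
{
    try
    {
        return Fetch(url); // hypothetical helper: GetResponse + ReadToEnd
    }
    catch (WebException ex)
    {
        if (!ex.Message.Contains("The underlying connection was closed: An unexpected error occurred on a send."))
            throw;
        // Force a different security protocol and retry once.
        ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3;
        return Fetch(url);
    }
}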
I finally found the solution to my problem...
The first clue to get me on the right track was the wrong physical path being displayed in the 404 error from IIS. It turned out that this incorrect physical path was mapped to another site in my IIS setup. That particular site naturally had a binding as well: port 443. As you may know, port 443 is the default port for HTTPS.
Now looking at my URL that I was trying to pass into the HTTPWebRequest.GetResponse() method, it looked something like this:
https://www.my-web-site.com
Taking this into account, when this application was hosted in IIS within the bounds of SSL, the error occurred as follows:
Code enters the aforementioned method GetHtmlValues()
The code gets https://www.my-web-site.com from the web.config file
A response is requested from https://www.my-web-site.com
At this point, as no port has been specified and the application is now out on the open internet, it tries to get a response from https://www.my-web-site.com:443
The problem is, my application isn't hosted via IIS on port 443. A different application lives there. Subsequently, as the page can't be found on port 443, a 404 error is produced.
Now for the solution...
Looking in IIS, I found the port that my application sits on. Let's say port 16523.
Whereas previously in my web.config I had my ParentUrl key declared with a value of https://www.my-web-site.com, this had to be changed to http://www.my-web-site.com:16523
Note how the https has become http and the port number is specified at the end. Now when the application tries to get the response, it no longer uses the default SSL port, as the correct one was specified.
We have written a C# application that communicates with any one of a group of IPs in the cloud.
Any one of these may be down at a given time. We address the server by URL because the IIS server expects a Host Header Name in order to route to the correct application interface.
So we set the Hosts file to point the URL at an IP.
We then send a command at the URL to get back the server time.
This tells us the connection is working.
If we don't get a response we assume the connection is dead. We then write a new IP from a list into the Hosts file and we try again.
This is where we hit a bug. The application doesn't seem to see that the Hosts file has changed and keeps using the old (bad) IP.
There is no caching built into the application so we are assuming that Windows is caching for us.
We've tried to flush caches with:
ipconfig /flushdns
arp -d *
nbtstat -R
We still get the same problem.
Any thoughts on how to clear the cache?
If you can't address this at the server end (e.g. a load balancer, etc), then just use the IP address list in your code:
var req = (HttpWebRequest)WebRequest.Create("http://" + IPAdd.ToString() + "/path_to_query_time");
req.Host = "yourhostheaderhere";
var resp = req.GetResponse();
//If things have gone wrong here, change IPAdd to the next IP address and start over.
Don't go messing with the user's settings to try to solve a problem in your application that is of your own making.
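A hedged sketch of that approach, walking a list of candidate IPs and setting the Host header on each attempt instead of rewriting the hosts file (the addresses, host name, and five-second timeout are placeholders; the Host setter requires .NET 4.0 or later):

string[] addresses = { "10.0.0.1", "10.0.0.2", "10.0.0.3" }; // your IP list
string serverTime = null;
foreach (string ip in addresses)
{
    try
    {
        var req = (HttpWebRequest)WebRequest.Create("http://" + ip + "/path_to_query_time");
        req.Host = "yourhostheaderhere"; // IIS routes on this header
        req.Timeout = 5000;              // fail over quickly
        using (var resp = req.GetResponse())
        using (var reader = new StreamReader(resp.GetResponseStream()))
        {
            serverTime = reader.ReadToEnd(); // this IP is alive, stop probing
            break;
        }
    }
    catch (WebException)
    {
        // Dead or unresponsive endpoint; try the next address.
    }
}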
Is there a way to get a System.Net.WebRequest or System.Net.WebClient to respect the hosts or lmhosts file?
For example: in my hosts file I have:
10.0.0.1 www.bing.com
When I try to load Bing in a browser (both IE and FF) it fails to load as expected.
Dns.GetHostAddresses("www.bing.com")[0]; // 10.0.0.1
WebRequest.Create("http://10.0.0.1").GetResponse(); // throws exception (expected)
WebRequest.Create("http://www.bing.com/").GetResponse(); // unexpectedly succeeds
Similarly:
WebClient wc = new WebClient();
wc.DownloadString("http://www.bing.com"); //succeeds
Why would System.Net.Dns respect the hosts file but System.Net.WebRequest ignore it? What do I need to change to make the WebRequest respect the hosts file?
Additional Info:
If I disable IPv6 and set my IPv4 DNS Server to 127.0.0.1, the above code works (fails) as expected. However if I add my normal DNS servers back as alternates, the unexpected behavior resumes.
I've reproduced this on 3 Win7 and 2 Vista boxes. The only constant is my company's network.
I'm using .NET 3.5 SP1 and VS2008
Edit
Per @Richard Beier's suggestion, I tried out System.Net tracing. With tracing ON, the WebRequest fails as it should. However, as soon as I turn tracing OFF, the behavior reverts to the unexpected success. I have reproduced this on the same machines as before, in both debug and release mode.
Edit 2
This turned out to be the company proxy giving us issues. Our solution was a custom proxy config script for our test machines that had "bing.com" point to DIRECT instead of the default proxy.
I think that @Hans Passant has spotted the issue here. It looks like you have a proxy set up in IE.
Dns.GetHostAddresses("www.bing.com")[0]; // 10.0.0.1
This works because you are asking the OS to get the IP addresses for www.bing.com
WebRequest.Create("http://www.bing.com/").GetResponse(); // unexpectedly succeeds
This works because you are asking the framework to fetch a path from a server name. The framework uses the same engine and settings that the IE frontend uses, and hence if your company has specified via a GPO that you use a company proxy server, it is that proxy server that resolves the IP address for www.bing.com rather than your machine.
WebRequest.Create("http://10.0.0.1").GetResponse(); // throws exception (expected)
This works/fails because you have asked the framework to fetch you a webpage from a specific server (by IP). Even if you do have a proxy set, this proxy will still not be able to connect to this IP address.
I hope that this helps.
Jonathan
I'm using VS 2010 on Windows 7, and I can't reproduce this. I made the same hosts-file change and ran the following code:
Console.WriteLine(Dns.GetHostAddresses("www.bing.com")[0]); // 10.0.0.1
var response = WebRequest.Create("http://www.bing.com/").GetResponse(); // * * *
Console.WriteLine(new StreamReader(response.GetResponseStream()).ReadToEnd());
I got an exception on the line marked "* * *". Here's the exception detail:
System.Net.WebException was unhandled
Message=Unable to connect to the remote server
Source=System
StackTrace:
at System.Net.HttpWebRequest.GetResponse()
at ConsoleApplication2.Program.Main(String[] args) in c:\Data\Projects\ConsoleApplication2\ConsoleApplication2\Program.cs:line 17
InnerException: System.Net.Sockets.SocketException
Message=A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 10.0.0.1:80
Source=System
ErrorCode=10060
Maybe it's an issue with an earlier .NET version, that's now fixed in .NET 4 / VS 2010? Which version of .NET are you using?
I also found this thread from 2007, where someone else ran into the same problem. There are some good suggestions there, including the following:
Turn on system.net tracing
Work around the problem by using Dns.GetHostAddresses() to resolve it to an IP, then put the IP in the URL - e.g. "http://10.0.0.1/" (sketched below). That may not be an option for you though.
In the above thread, mariyaatanasova_msft also says: "HttpWebRequest uses Dns.GetHostEntry to resolve the host, so you may get a different result from Dns.GetHostAddresses".
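A sketch of that workaround under the same assumptions: resolve through System.Net.Dns, which honors the hosts file, then put the raw IP in the URL.

IPAddress address = Dns.GetHostAddresses("www.bing.com")[0]; // 10.0.0.1 per the hosts file
var request = (HttpWebRequest)WebRequest.Create("http://" + address + "/");
// Caveat: with a raw IP the server no longer receives the original host
// name, which breaks name-based virtual hosting unless you also set the
// Host header (available on HttpWebRequest from .NET 4.0).
var response = request.GetResponse();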
You should override the default proxy.
HttpWebRequest and WebRequest will pick up the default proxy if one is configured in Internet Explorer, and your hosts file will be bypassed.
request.Proxy = new WebProxy();
The following is just an example of code:
try
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.bing.com");
    request.Proxy = new WebProxy();
    request.Method = "POST";
    request.AllowAutoRedirect = false;
    HttpWebResponse response = (HttpWebResponse)request.GetResponse();
    if (response.StatusCode == HttpStatusCode.OK)
    {
        //some code here
    }
}
catch (Exception e)
{
    //Some other code here
}
I have an application that needs to download several files in a row in succession (sometimes a few thousand). However, when several files need to be downloaded I get an exception with an inner exception of type SocketException and the error code 10048 (WSAEADDRINUSE). I did some digging and basically it's because the server has run out of sockets (and they all wait 240s or so before becoming available again); not coincidentally, it starts happening around the 1024-file range. I would expect the HttpWebRequest/ServicePointManager to be reusing my connection, but apparently it is not (and the files are served over HTTPS, so that may be part of it). I never saw this problem in the C++ code that this was ported from (but that doesn't mean it never happened; I'd be surprised if it did, though).
I am properly closing the WebRequest object and the HttpWebRequest object has KeepAlive set to true by default. Next my intent is to fiddle around with ServicePointManager.SetTcpKeepAlive(). However, I can't see how more people haven't run into this problem.
Has anyone else run into the problem, and if so, what did you do to get around it? Currently I have a retry scheme that detects this error and waits it out, but that doesn't seem like the right thing to do.
Here's some basic code to verify what I'm doing (just in case I'm missing closing something):
WebRequest webRequest = WebRequest.Create(uri);
webRequest.Method = "GET";
webRequest.Credentials = new NetworkCredential(username, password);
WebResponse webResponse = webRequest.GetResponse();
try
{
    using (Stream stream = webResponse.GetResponseStream())
    {
        // read the stream
    }
}
finally
{
    webResponse.Close();
}
What kind of application is this? You mentioned that the server is running out of ports, but then you mentioned HttpWebRequest. Are you running this code in a webservice or ASP.NET page, which is trying to then download multiple files for the same incoming request from the client?
What kind of authentication is the page using? If it is using NTLM authentication, then the connections cannot be shared if the credentials being used are different for each request.
What I would suggest is to group your requests per credential. So, for example, all requests using username "John" would be grouped. You can specify the "ConnectionGroupName" property on the request, so the system will try to reuse connections for the same credential and server; see the sketch below.
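A minimal sketch of that grouping (uri, password are placeholders carried over from the question's snippet):

var request = (HttpWebRequest)WebRequest.Create(uri);
request.Credentials = new NetworkCredential("John", password);
// Requests sharing the same ConnectionGroupName and credentials can reuse
// the same underlying connection instead of opening a new socket each time.
request.ConnectionGroupName = "John";
using (var response = request.GetResponse())
using (var stream = response.GetResponseStream())
{
    // read the stream as before
}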
If that also doesn't work, you will need to do one or more of the following:
1) Throttle your requests.
2) Increase the wildcard port range.
3) Use the BindIPEndPointDelegate on ServicePoint to make it bind to a non-wildcard port (i.e. a port in the range 1024-16384); a sketch follows this list.
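A hedged sketch of option 3, assuming you walk a fixed private port range rather than the wildcard ports (the URL is a placeholder):

var uri = new Uri("https://example.com/file");
ServicePoint sp = ServicePointManager.FindServicePoint(uri);
int nextPort = 1024;
sp.BindIPEndPointDelegate = delegate(ServicePoint servicePoint, IPEndPoint remoteEndPoint, int retryCount)
{
    // Pick the next local port in the 1024-16384 range; if it is already
    // in use, the framework calls back again with retryCount incremented.
    int port = nextPort++;
    if (nextPort > 16384) nextPort = 1024;
    return new IPEndPoint(IPAddress.Any, port);
};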
More digging suggests this may be due to authentication, and that the UnsafeAuthenticatedConnectionSharing property might alleviate it. However, I'm not sure that's the best thing either.