We're currently implementing a SQL Server 2008 UDF to expand shortened URLs. It works well against most of the major URL-shortening services. However, at seemingly random times it will "hang" and refuse to work against a certain domain (for example bit.ly), while subsequent calls to other services (for example tinyurl.com) continue to succeed.
We initially thought this was due to some sort of blocking by the URL-shortening provider, but stopping and restarting the database server service causes subsequent requests to succeed. Could it be that SQL Server is somehow pooling outgoing HTTP connections?
Here's the code...
using System;
using System.Data;
using System.Net;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class UserDefinedFunctions
{
    [Microsoft.SqlServer.Server.SqlFunction]
    public static SqlString UrlExpander(string url)
    {
        // Set up the WebRequest
        HttpWebRequest wr = (HttpWebRequest)HttpWebRequest.Create(url);
        try
        {
            // Set AllowAutoRedirect off so the redirected URL will not be loaded
            wr.AllowAutoRedirect = false;
            // Get the response
            HttpWebResponse wresp = (HttpWebResponse)wr.GetResponse();
            return new SqlString(wresp.Headers["Location"].ToString());
        }
        catch (Exception ex)
        {
            wr.Abort();
            throw ex;
        }
    }
};
You're missing wresp.Close(). Each unclosed HttpWebResponse pins one of the pooled connections to that host, and by default .NET allows only two concurrent connections per host (ServicePointManager.DefaultConnectionLimit), so after a couple of leaked responses every further request to that same domain (e.g. bit.ly) blocks while other domains keep working. Restarting the SQL Server service resets the pool, which is why that appeared to fix it.
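For reference, a minimal sketch of the fix, keeping the original logic but disposing the response with using so its connection is always released:

[Microsoft.SqlServer.Server.SqlFunction]
public static SqlString UrlExpander(string url)
{
    HttpWebRequest wr = (HttpWebRequest)WebRequest.Create(url);
    // Report the redirect target instead of following it
    wr.AllowAutoRedirect = false;
    // using guarantees the response (and its pooled connection) is released
    using (HttpWebResponse wresp = (HttpWebResponse)wr.GetResponse())
    {
        return new SqlString(wresp.Headers["Location"]);
    }
}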
Given the feedback from Jesse and our desire to have a function that returns either a properly expanded URL or NULL, we have come up with the following, which has now processed thousands of shortened URLs with no further issues:
[Microsoft.SqlServer.Server.SqlFunction]
public static SqlString UrlExpander(string url)
{
    try
    {
        // Set up the WebRequest
        HttpWebRequest wr = (HttpWebRequest)WebRequest.Create(url);
        try
        {
            // Set AllowAutoRedirect off so the redirected URL will not be loaded
            wr.AllowAutoRedirect = false;
            // Get the response; read the Location header before it is closed
            using (HttpWebResponse wresp = (HttpWebResponse)wr.GetResponse())
            {
                return new SqlString(wresp.Headers["Location"]);
            }
        }
        finally
        {
            wr.Abort();
        }
    }
    catch
    {
        // Any failure (bad URL, timeout, no redirect) falls through to NULL
    }
    return SqlString.Null;
}
Hello, I am trying to launch an intent with a WebView from a user-entered URL. I have been looking everywhere online and I can't find a concrete answer on how to make sure the website will actually connect before allowing the user to proceed to the next activity. I have found many tools to make sure the URL follows the correct format, but none that let me verify it can actually connect.
You can use WebClient and check if any exception is thrown:
using (var client = new HeadOnlyClient())
{
    try
    {
        client.DownloadString("http://google.com");
    }
    catch (Exception ex)
    {
        // URL is not accessible.
    }
}
You can catch more specific exceptions to make it more elegant.
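For instance, catching WebException and inspecting its Status distinguishes DNS failures from timeouts and protocol errors (a sketch along the same lines as above):

using (var client = new HeadOnlyClient())
{
    try
    {
        client.DownloadString("http://google.com");
    }
    catch (WebException ex)
    {
        // e.g. NameResolutionFailure, Timeout, ProtocolError
        Console.WriteLine("URL is not accessible: " + ex.Status);
    }
}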
You can also use a custom modification of WebClient that sends HEAD requests only, to decrease the amount of data downloaded:
class HeadOnlyClient : WebClient
{
    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest req = base.GetWebRequest(address);
        req.Method = "HEAD";
        return req;
    }
}
I would suggest using HttpHead for a simple request with AndroidHttpClient, but it is deprecated now. You can instead try implementing a HEAD request over raw sockets, as sketched below.
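A rough sketch of the socket approach (plain HTTP only; HTTPS would additionally need an SslStream, and the port and path here are assumptions):

using System.IO;
using System.Net.Sockets;

static bool CanConnect(string host, int port = 80)
{
    try
    {
        using (var client = new TcpClient(host, port))
        using (var stream = client.GetStream())
        using (var writer = new StreamWriter(stream))
        using (var reader = new StreamReader(stream))
        {
            // Minimal HTTP/1.1 HEAD request; Connection: close ends the exchange
            writer.Write("HEAD / HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n");
            writer.Flush();
            string statusLine = reader.ReadLine(); // e.g. "HTTP/1.1 200 OK"
            return statusLine != null && statusLine.StartsWith("HTTP/");
        }
    }
    catch (SocketException)
    {
        return false; // could not connect at all
    }
}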
You can try to ping the address first.
See this SO question: How to Ping External IP from Java Android
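In C#, a quick probe with the Ping class might look like this (note that many hosts block ICMP, so a failed ping doesn't prove the URL is unreachable):

using System.Net.NetworkInformation;

static bool CanPing(string host)
{
    try
    {
        using (var ping = new Ping())
            return ping.Send(host, 3000).Status == IPStatus.Success; // 3-second timeout
    }
    catch (PingException)
    {
        return false; // e.g. DNS resolution failed
    }
}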
Another option:
Connectivity Plugin for Xamarin and Windows
Task<bool> IsReachable(string host, int msTimeout = 5000);
But any pre-check that succeeds is no guarantee: the very next request might still fail, so you should handle that failure regardless.
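A sketch of that pattern, assuming the plugin's CrossConnectivity entry point (the host and URL are placeholders):

using System.Threading.Tasks;
using Plugin.Connectivity;

public static async Task<string> TryDownloadAsync(string host, string url)
{
    // Pre-check: bail out early if the host isn't reachable right now
    if (!await CrossConnectivity.Current.IsReachable(host, 5000))
        return null;

    try
    {
        using (var client = new System.Net.WebClient())
            return client.DownloadString(url);
    }
    catch (System.Net.WebException)
    {
        return null; // the pre-check passed but the request still failed
    }
}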
Here's what I ended up doing to check whether a host name is reachable. I was connecting to a site with a self-signed certificate, which is why I have the delegate in the ServerCertificateValidationCallback.
private async Task<bool> CheckHostConnectionAsync(string serverName)
{
    string Message = string.Empty;
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(serverName);

    // Accept the self-signed certificate (note: this adds a handler process-wide)
    ServicePointManager.ServerCertificateValidationCallback += delegate
    {
        return true;
    };

    // Set the credentials to the current user account
    request.Credentials = System.Net.CredentialCache.DefaultCredentials;
    request.Method = "GET";
    request.Timeout = 1000 * 40;

    try
    {
        using (HttpWebResponse response = (HttpWebResponse)await request.GetResponseAsync())
        {
            // Do nothing; we're only testing whether we can get the response
        }
    }
    catch (WebException ex)
    {
        Message += ((Message.Length > 0) ? "\n" : "") + ex.Message;
        return false;
    }

    if (Message.Length == 0)
    {
        goToMainActivity(serverName);
    }
    return true;
}
I need to inspect some HTTP headers and so I'm using C#'s HttpWebRequest and HttpWebResponse classes.
I'm developing with Visual Studio 2012 and would like to test the request and response on my localhost so I can see what kind of headers I'm dealing with (I use DevTools in Chrome so I can see them there, but I want to make sure my code returns the proper values). When I simply put in http://localhost:[Port]/ it doesn't connect.
I can see that it repeatedly makes a request to the server, and eventually I get the exception WebException: The operation has timed out. If I add request.KeepAlive = false, then I get the exception WebException: Unable to connect to the remote server.
So I'm wondering:
Is something wrong with my code? (see below)
How can I test HttpWebRequest and HttpWebResponse on localhost?
I've tried using the IP address in place of "localhost", but that didn't work either (e.g. http://127.0.0.1:[port]/)
Code:
public class AuthorizationFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        base.OnActionExecuting(filterContext);
        string url = "http://127.0.0.1:7792/";
        string responseString;
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        //request.KeepAlive = false;
        //request.AllowAutoRedirect = false;
        System.Diagnostics.Debug.Write(request);
        try
        {
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                {
                    responseString = reader.ReadToEnd();
                    System.Diagnostics.Debug.Write(responseString);
                }
            }
        }
        catch (Exception e)
        {
            System.Diagnostics.Debug.Write(e.Message);
            System.Diagnostics.Debug.Write(e.Source);
        }
    }
}
Have you tried Fiddler? Fiddler shows you all the HTTP requests with all headers and response statuses, with different data views (raw, hex, image, etc.), a timeline view, HTTPS CONNECTs, pretty much everything.
There are also Firefox extensions like HttpFox, but I strongly recommend you try out Fiddler.
When I run the program below, the first HTTPS request succeeds, but the second request fails. Both URLs are valid and both can be accessed successfully in a browser. Any suggestions as to what needs to be done to access the second URL successfully?
using System;
using System.IO;
using System.Net;

public class Program
{
    private static void Main(string[] args)
    {
        var content = "";
        bool status;
        var url1 = "https://mail.google.com";
        var url2 = "https://my.ooma.com";
        status = DoHttpRequest(url1, out content);
        OutputStatus(url1, status, content);
        status = DoHttpRequest(url2, out content);
        OutputStatus(url2, status, content);
        Console.ReadLine();
    }

    private static void OutputStatus(string url, bool status, string content)
    {
        if (status) Console.WriteLine("Url={0}, Status=Success, content length = {1}", url, content.Length);
        else Console.WriteLine("Url={0}, Status=Fail, ErrorMessage={1}", url, content);
    }

    private static bool DoHttpRequest(string url, out string content)
    {
        content = "";
        var request = (HttpWebRequest)WebRequest.Create(url);
        try
        {
            request.Method = "GET";
            request.CookieContainer = null;
            request.Timeout = 25000; // 25 seconds
            var response = (HttpWebResponse)request.GetResponse();
            var streamReader = new StreamReader(response.GetResponseStream());
            content = streamReader.ReadToEnd();
            return true;
        }
        catch (WebException ex)
        {
            content = ex.Message;
            return false;
        }
    }
}
Historically, most problems of this description that I've seen occur when you forget to call .Close() on the object returned from GetResponseStream(). The problem exists because when you forget to close the first request, the second request deadlocks waiting for a free connection.
Typically this hang happens on the 3rd request, not the second.
Update: Looking at your repro, this has nothing to do with the order of the requests. You're hitting a problem because this site is sending a TLS Warning at the beginning of the HTTPS handshake, and .NET will timeout when that occurs. See http://blogs.msdn.com/b/fiddler/archive/2012/03/29/https-request-hangs-.net-application-connection-on-tls-server-name-indicator-warning.aspx. The problem only repros on Windows Vista and later, because the warning is related to a TLS extension that doesn't exist in the HTTPS stack on WinXP.
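Either way, it's good hygiene to dispose the response and reader so the connection goes back to the pool; a sketch of the request helper from the question with using blocks:

private static bool DoHttpRequest(string url, out string content)
{
    content = "";
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "GET";
    request.Timeout = 25000; // 25 seconds
    try
    {
        // Disposing the response releases the underlying pooled connection
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var streamReader = new StreamReader(response.GetResponseStream()))
        {
            content = streamReader.ReadToEnd();
            return true;
        }
    }
    catch (WebException ex)
    {
        content = ex.Message;
        return false;
    }
}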
Increase your request timeout:
request.Timeout = 60000; // 60 seconds
Maybe your network connection is a bit slow. It runs fine for me with 25 seconds. (The second URL does take a bit longer to respond than the first one.)
I am making an asynchronous HttpWebRequest, and if that fails, I want to call a backup web service, like so:
public void CallService1()
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://MyFirstWebService");
    request.BeginGetResponse(this.CallService1Completed, request);
}

public void CallService1Completed(IAsyncResult result)
{
    HttpWebRequest request = (HttpWebRequest)result.AsyncState;
    try
    {
        using (HttpWebResponse response = (HttpWebResponse)request.EndGetResponse(result))
        {
            using (Stream responseStream = response.GetResponseStream())
            {
                // Use Data
            }
        }
    }
    catch (WebException webException)
    {
        if (?????)
        {
            CallBackupService2();
        }
    }
}
Bear in mind that this is a mobile application, where you may not always have an internet connection. I do not want to call the backup service if there is no internet connection; I only want to call it if the first service is down for some reason. What would I put in the 'if' statement above?
It can be implemented like this (NetworkInterface lives in System.Net.NetworkInformation):
if (NetworkInterface.GetIsNetworkAvailable())
{
    CallBackupService2();
}
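A slightly more defensive sketch, using a hypothetical helper that also inspects the WebException status so purely local errors don't trigger the backup (which statuses count as service faults is a judgment call):

using System.Net;
using System.Net.NetworkInformation;

static bool ShouldCallBackup(WebException webException)
{
    // No network at all? The backup call would fail the same way.
    if (!NetworkInterface.GetIsNetworkAvailable())
        return false;

    // These statuses suggest the service itself is down or misbehaving.
    return webException.Status == WebExceptionStatus.ConnectFailure
        || webException.Status == WebExceptionStatus.Timeout
        || webException.Status == WebExceptionStatus.ProtocolError;
}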
I am trying to use the System.Net.HttpWebRequest class to perform an HTTP GET request against a specific web server for web applications we have load-balanced over numerous servers. To achieve this, I need to be able to set the Host header value for the request, and I have managed to do so by using the System.Net.WebProxy class.
However, this all breaks down when I try to perform a GET using SSL. When I attempt it, the call to HttpWebRequest.GetResponse throws a System.Net.WebException with an HTTP status code of 400 (Bad Request).
Is what I'm trying to achieve possible with HttpWebRequest, or should I be looking for an alternative way to perform what I want?
Here is the code I've been using to try to get this all to work:
using System;
using System.Web;
using System.Net;
using System.IO;

namespace UrlPollTest
{
    class Program
    {
        private static int suffix = 1;

        static void Main(string[] args)
        {
            PerformRequest("http://www.microsoft.com/en/us/default.aspx", "www.microsoft.com");
            PerformRequest("https://www.microsoft.com/en/us/default.aspx", "");
            PerformRequest("https://www.microsoft.com/en/us/default.aspx", "www.microsoft.com");
            Console.WriteLine("Press any key to continue");
            Console.ReadKey();
        }

        static void PerformRequest(string AUrl, string AProxy)
        {
            Console.WriteLine("Fetching from {0}", AUrl);
            try
            {
                HttpWebRequest request = WebRequest.Create(AUrl) as HttpWebRequest;
                if (AProxy != "")
                {
                    Console.WriteLine("Proxy = {0}", AProxy);
                    request.Proxy = new WebProxy(AProxy);
                }
                WebResponse response = request.GetResponse();
                using (Stream httpStream = response.GetResponseStream())
                {
                    using (StreamReader reader = new StreamReader(httpStream))
                    {
                        string s = reader.ReadToEnd();
                        File.WriteAllText(string.Format("D:\\Temp\\Response{0}.html", suffix++), s);
                    }
                }
                Console.WriteLine("  Success");
            }
            catch (Exception e)
            {
                Console.WriteLine("  " + e.Message);
            }
        }
    }
}
I've used a hacky approach before: a test exe that modifies my "hosts" file, injecting an entry mapping the farm name to the specific server's address (and issuing an ipconfig /flushdns). After that, requests get routed to the right server as though it were the only one there.
Obviously this requires admin access... I use it as part of an automated smoke test to hit the farm as though it were individual machines.
The other thing you might try is something on the NLB, perhaps an iRule in TCL (in the case of F5): maybe add a custom header that the NLB understands? (Assuming it is doing SSL re-signing.)
In .NET 4.0 the Host header can be set independently of the URL that the request goes to, so I would recommend using HttpWebRequest.Host to solve your problem. This (I think) will ship as part of .NET 4.0 Beta 2. There is NO way to rewrite the Host header to be different under previous versions of .NET without implementing your own custom subclass of WebRequest (believe me, it's been tried).
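A minimal sketch of that approach, with a placeholder IP standing in for one specific server in the farm:

using System;
using System.Net;

class HostHeaderDemo
{
    static void Main()
    {
        // Address the request to one specific server behind the load balancer...
        var request = (HttpWebRequest)WebRequest.Create("https://10.0.0.5/en/us/default.aspx");
        // ...but present the farm's host name; the Host setter is new in .NET 4.0
        request.Host = "www.microsoft.com";
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine(response.StatusCode);
        }
    }
}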