How do you check if a website is online in C#?

I want my program in C# to check whether a website is online before executing. How would I make my program ping the website and check for a response?

A ping only tells you the host is reachable; it does not tell you whether a web service is actually running there.
My suggestion is to perform an HTTP HEAD request against the URL:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("your url");
request.AllowAutoRedirect = false; // find out if this site is up without following redirects
request.Method = "HEAD";
try
{
    WebResponse response = request.GetResponse();
    // inspect response.Headers for information about the site
}
catch (WebException wex)
{
    // set a flag if there was a timeout or some other issue
}
This will not actually fetch the HTML page, but it will help you find out the minimum of what you need to know.
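For newer code, the same HEAD check can be sketched with HttpClient (a minimal sketch, assuming .NET Framework 4.5+ or .NET Core; the method name is illustrative):
using System;
using System.Net.Http;
using System.Threading.Tasks;

static async Task<bool> IsSiteUpAsync(string url)
{
    using (var client = new HttpClient { Timeout = TimeSpan.FromSeconds(5) })
    using (var request = new HttpRequestMessage(HttpMethod.Head, url))
    {
        try
        {
            using (var response = await client.SendAsync(request))
            {
                return response.IsSuccessStatusCode;
            }
        }
        catch (HttpRequestException)
        {
            return false; // DNS failure, connection refused, etc.
        }
        catch (TaskCanceledException)
        {
            return false; // timed out
        }
    }
}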

You can use System.Net.NetworkInformation.Ping; see below.
var ping = new System.Net.NetworkInformation.Ping();
var result = ping.Send("www.google.com");
if (result.Status != System.Net.NetworkInformation.IPStatus.Success)
    return;
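Note that Ping.Send throws a PingException if the host name cannot be resolved, so it is worth wrapping; a minimal sketch (the method name is illustrative):
using System.Net.NetworkInformation;

static bool CanPing(string host)
{
    try
    {
        using (var ping = new Ping())
        {
            PingReply reply = ping.Send(host, 3000); // 3-second timeout
            return reply.Status == IPStatus.Success;
        }
    }
    catch (PingException)
    {
        return false; // e.g. the host name could not be resolved
    }
}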

A small remark on Digicoder's code, plus a complete example of a Ping method:
private bool Ping(string url)
{
    try
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Timeout = 3000;
        request.AllowAutoRedirect = false; // find out if this site is up without following redirects
        request.Method = "HEAD";

        using (var response = request.GetResponse())
        {
            return true;
        }
    }
    catch
    {
        return false;
    }
}
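Usage is then a simple boolean check, e.g.:
if (Ping("http://stackoverflow.com"))
{
    // The site answered a HEAD request within 3 seconds.
}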

if (!NetworkInterface.GetIsNetworkAvailable())
{
    // The network is not available.
    return;
}

Uri uri = new Uri("http://stackoverflow.com/any-uri");
Ping ping = new Ping();
PingReply pingReply = ping.Send(uri.Host);
if (pingReply.Status != IPStatus.Success)
{
    // The website is not available.
    return;
}

The simplest way I can think of is something like:
WebClient webClient = new WebClient();
byte[] result = webClient.DownloadData("http://site.com/x.html");
DownloadData will throw an exception if the website is not online.
There is probably a similar way to just ping the site, but it's unlikely that the difference will be noticeable unless you are checking many times a second.
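A minimal sketch that turns that exception into a boolean (the method name is illustrative):
using System.Net;

static bool IsOnline(string url)
{
    using (var webClient = new WebClient())
    {
        try
        {
            webClient.DownloadData(url); // throws WebException if the site is down
            return true;
        }
        catch (WebException)
        {
            return false;
        }
    }
}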

Related

How to know if a website/domain is available before loading a webview with that URL

Hello, I am trying to launch an intent with a WebView from a user-entered URL. I have been looking everywhere online and I can't find a concrete answer on how to make sure the website will actually connect before allowing the user to proceed to the next activity. I have found many tools to make sure the URL follows the correct format, but none that actually let me make sure it can connect.
You can use WebClient and check if any exception is thrown:
using (var client = new HeadOnlyClient())
{
    try
    {
        client.DownloadString("http://google.com");
    }
    catch (Exception ex)
    {
        // URL is not accessible.
    }
}
You can catch more specific exceptions to make it more elegant.
You can also use a custom WebClient subclass to send HEAD requests only and decrease the amount of data downloaded:
class HeadOnlyClient : WebClient
{
    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest req = base.GetWebRequest(address);
        req.Method = "HEAD";
        return req;
    }
}
I would suggest using HttpHead for a simple request with AndroidHttpClient, but it is deprecated now. You can try to implement a HEAD request with sockets instead, as sketched below.
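A rough sketch of a HEAD request over a raw socket, assuming plain HTTP on port 80 (HTTPS would additionally need an SslStream; the method name is illustrative):
using System.IO;
using System.Net.Sockets;
using System.Text;

static string HeadRequest(string host)
{
    using (var client = new TcpClient(host, 80))
    {
        NetworkStream stream = client.GetStream();
        byte[] request = Encoding.ASCII.GetBytes(
            "HEAD / HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n");
        stream.Write(request, 0, request.Length);

        using (var reader = new StreamReader(stream))
        {
            return reader.ReadLine(); // status line, e.g. "HTTP/1.1 200 OK"
        }
    }
}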
You can try to ping the address first.
See this SO question: How to Ping External IP from Java Android
Another option:
Connectivity Plugin for Xamarin and Windows
Task<bool> IsReachable(string host, int msTimeout = 5000);
But any pre-check that succeeds is no guarantee: the very next request might fail, so you should still handle failures there.
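A sketch of how that might be called inside an async method (hypothetical usage; CrossConnectivity.Current is assumed to be the plugin's entry point):
// Assumption: CrossConnectivity.Current exposes the IsReachable method above.
bool reachable = await CrossConnectivity.Current.IsReachable("stackoverflow.com", 3000);
if (!reachable)
    return;
// Still wrap the actual request in a try/catch; the host can go down in between.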
Here's what I ended up doing to check if a host name is reachable. I was connecting to a site with a self-signed certificate, which is why I set the delegate in ServerCertificateValidationCallback.
private async Task<bool> CheckHostConnectionAsync(string serverName)
{
    string message = string.Empty;
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(serverName);

    // Accept the self-signed certificate.
    ServicePointManager.ServerCertificateValidationCallback += delegate
    {
        return true;
    };

    // Set the credentials to the current user account.
    request.Credentials = System.Net.CredentialCache.DefaultCredentials;
    request.Method = "GET";
    request.Timeout = 1000 * 40;

    try
    {
        using (HttpWebResponse response = (HttpWebResponse)await request.GetResponseAsync())
        {
            // Do nothing; we're only testing whether we can get the response.
        }
    }
    catch (WebException ex)
    {
        message += ((message.Length > 0) ? "\n" : "") + ex.Message;
        return false;
    }

    if (message.Length == 0)
    {
        goToMainActivity(serverName);
    }
    return true;
}

Validate SharePoint link in C# without using microsoft.sharepoint dll

I am validating a regular URL as follows:
private static bool IsUrlAvailable(string url)
{
    if (string.IsNullOrEmpty(url.Trim()) ||
        url.Trim().ToLower().Equals("http://") ||
        url.Trim().ToLower().Equals("https://"))
    {
        return false;
    }

    if (!url.ToLower().StartsWith("http://") && !url.ToLower().StartsWith("https://"))
    {
        url = "http://" + url;
    }

    try
    {
        var req = (HttpWebRequest)WebRequest.Create(url);
        req.Timeout = 15000;
        req.Method = "HEAD";
        using (var rsp = (HttpWebResponse)req.GetResponse())
        {
            if (rsp.StatusCode == HttpStatusCode.OK)
            {
                return true;
            }
        }
    }
    catch (Exception)
    {
        // Eat it because all we want to do is return false.
    }

    // Otherwise
    return false;
}
But since I am using WebRequest.Create, authenticated SharePoint URLs on the intranet fail validation because of a permission-denied (404) error. Now I know we can validate them using SPSite.Exists or OpenWeb, but those are only available in microsoft.sharepoint.dll, and I was wondering if there is a way of doing this without using that DLL?
404 is not permission denied but file-not-found, so you may indeed be hitting a non-existent URL.
You need to pass credentials with your request:
WebRequest request = WebRequest.Create("http://www.contoso.com/default.html");
// If required by the server, set the credentials.
request.Credentials = CredentialCache.DefaultCredentials;
// Get the response.
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
Side note: please do not "eat all exceptions" - there is a very small set of exceptions that are interesting when making web requests, which you can find in the documentation for HttpWebRequest.GetResponse (you likely only need to handle WebException).
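For instance, a sketch of handling only WebException and telling auth failures apart from missing pages (request here is an HttpWebRequest as above):
try
{
    using (var response = (HttpWebResponse)request.GetResponse())
    {
        // Reachable; successful responses end up here.
    }
}
catch (WebException wex)
{
    var errorResponse = wex.Response as HttpWebResponse;
    if (errorResponse != null)
    {
        // The server answered, just not with success:
        // 401/403 indicate an auth problem, 404 means the URL does not exist.
        Console.WriteLine((int)errorResponse.StatusCode);
    }
    else
    {
        // No HTTP response at all: DNS failure, timeout, connection refused...
        Console.WriteLine(wex.Status);
    }
}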

HTTPS web request failing

When I run the program below, the first HTTPS request succeeds, but the second fails. Both URLs are valid and both can be accessed successfully in a browser. Any suggestions as to what needs to be done to access the second URL successfully?
using System;
using System.IO;
using System.Net;

public class Program
{
    private static void Main(string[] args)
    {
        var content = "";
        bool status;
        var url1 = "https://mail.google.com";
        var url2 = "https://my.ooma.com";

        status = DoHttpRequest(url1, out content);
        OutputStatus(url1, status, content);
        status = DoHttpRequest(url2, out content);
        OutputStatus(url2, status, content);
        Console.ReadLine();
    }

    private static void OutputStatus(string url, bool status, string content)
    {
        if (status) Console.WriteLine("Url={0}, Status=Success, content length = {1}", url, content.Length);
        else Console.WriteLine("Url={0}, Status=Fail, ErrorMessage={1}", url, content);
    }

    private static bool DoHttpRequest(string url, out string content)
    {
        content = "";
        var request = (HttpWebRequest)WebRequest.Create(url);
        try
        {
            request.Method = "GET";
            request.CookieContainer = null;
            request.Timeout = 25000; // 25 seconds
            var response = (HttpWebResponse)request.GetResponse();
            var streamReader = new StreamReader(response.GetResponseStream());
            content = streamReader.ReadToEnd();
            return true;
        }
        catch (WebException ex)
        {
            content = ex.Message;
            return false;
        }
    }
}
Historically, most problems of this description that I've seen occur when you forget to call .Close() on the object returned from GetResponseStream(). The problem exists because when you forget to close the first request, the second request deadlocks waiting for a free connection.
Typically this hang happens on the 3rd request, not the second.
Update: Looking at your repro, this has nothing to do with the order of the requests. You're hitting a problem because this site is sending a TLS Warning at the beginning of the HTTPS handshake, and .NET will timeout when that occurs. See http://blogs.msdn.com/b/fiddler/archive/2012/03/29/https-request-hangs-.net-application-connection-on-tls-server-name-indicator-warning.aspx. The problem only repros on Windows Vista and later, because the warning is related to a TLS extension that doesn't exist in the HTTPS stack on WinXP.
Increase your request timeout:
request.Timeout = 60000; // 60 seconds
Maybe your network connection is a bit slow. It ran fine for me with 25 seconds. (The second URL does take a bit longer to respond than the first one.)

Test if a URI is up

I'm trying to make a simple app that will "ping" a URI and tell me if it's responding or not.
I have the following code, but it only seems to check domains at the root level,
i.e. www.google.com and not www.google.com/voice.
private bool WebsiteUp(string path)
{
    bool status = false;
    try
    {
        Uri uri = new Uri(path);
        WebRequest request = WebRequest.Create(uri);
        request.Timeout = 3000;
        WebResponse response = request.GetResponse();
        if (response.Headers != null)
        {
            status = true;
        }
    }
    catch (Exception)
    {
        return false;
    }
    return status;
}
Is there any existing code out there that would better fit this need?
Edit: Actually, I tell a lie - by default a 404 should cause a WebException anyway, and I've just confirmed this in case I was misremembering. While the code given in the example is leaky, it should still work. Puzzling, but I'll leave this answer here since it handles the response object more safely.
The problem with the code you have is that while it is indeed checking the precise URI given, it considers 404, 500, 200 etc. as equally "successful". It is also a bit wasteful in using GET to do a job HEAD suffices for. It should really clean up that WebResponse too. And the term path is a silly parameter name for something that isn't just a path, while we're at it.
private bool WebsiteUp(string uri)
{
    try
    {
        WebRequest request = WebRequest.Create(uri);
        request.Timeout = 3000;
        request.Method = "HEAD";
        using (WebResponse response = request.GetResponse())
        {
            HttpWebResponse hRes = response as HttpWebResponse;
            if (hRes == null)
                throw new ArgumentException("Not an HTTP or HTTPS request"); // you may want to handle e.g. FTP specifically, but I'm just throwing an exception for now
            return (int)hRes.StatusCode / 100 == 2; // any 2xx status counts as up
        }
    }
    catch (WebException)
    {
        return false;
    }
}
Of course there are poor websites out there that return a 200 all the time and so on, but this is the best one can do. It assumes that in the case of a redirect you care about the ultimate target of the redirect (do you finally end up on a successful page or an error page), but if you care about the specific URI you could turn off automatic redirect following, and consider 3xx codes successful too.
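A sketch of that variant, treating any 2xx or 3xx as up (the method name is illustrative):
private bool WebsiteUpNoRedirect(string uri)
{
    try
    {
        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Timeout = 3000;
        request.Method = "HEAD";
        request.AllowAutoRedirect = false; // judge this URI, not its redirect target

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            int code = (int)response.StatusCode;
            return code / 100 == 2 || code / 100 == 3; // 2xx and 3xx both count as "up"
        }
    }
    catch (WebException)
    {
        return false;
    }
}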
There is a Ping class you can utilize for that; more details can be found here:
http://msdn.microsoft.com/en-us/library/system.net.networkinformation.ping.aspx
I did something similar when I wrote a torrent client to check valid tracker URLs. I'm pretty sure I found the answer on SO but can't seem to find it anymore; here's the code sample I have from that post. Note that HeadOnly is not a property of the stock WebClient; it comes from a custom subclass like the one sketched below.
using (var client = new MyClient())
{
    client.HeadOnly = true;
    // exists
    string address1 = client.DownloadString("http://google.com");
    // doesn't exist - throws a WebException with a 404
    string address2 = client.DownloadString("http://google.com/sdfsddsf");
}
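A sketch of the custom subclass that snippet assumes (the HeadOnly flag turns GETs into HEADs):
class MyClient : WebClient
{
    public bool HeadOnly { get; set; }

    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest req = base.GetWebRequest(address);
        if (HeadOnly && req.Method == "GET")
            req.Method = "HEAD";
        return req;
    }
}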

Why is this WebRequest code slow?

I requested 100 pages that all 404. I wrote
{
    var s = DateTime.Now;
    for (int i = 0; i < 100; i++)
        DL.CheckExist("http://google.com/lol" + i.ToString() + ".jpg");
    var e = DateTime.Now;
    var d = e - s;
    Console.WriteLine(d);
}
static public bool CheckExist(string url)
{
    HttpWebRequest wreq = null;
    HttpWebResponse wresp = null;
    bool ret = false;
    try
    {
        wreq = (HttpWebRequest)WebRequest.Create(url);
        wreq.KeepAlive = true;
        wreq.Method = "HEAD";
        wresp = (HttpWebResponse)wreq.GetResponse();
        ret = true;
    }
    catch (System.Net.WebException)
    {
    }
    finally
    {
        if (wresp != null)
            wresp.Close();
    }
    return ret;
}
Two runs show it takes 00:00:30.7968750 and 00:00:26.8750000. Then I tried Firefox, using the following code:
<html>
<body>
<script type="text/javascript">
for(var i=0; i<100; i++)
document.write("<img src=http://google.com/lol" + i + ".jpg><br>");
</script>
</body>
</html>
Using my computer's clock and counting, it was roughly 4 seconds. That's 6.5-7.5x faster than my app. I plan to scan through thousands of files, so taking 3.75 hours instead of 30 minutes would be a big problem. How can I make this code faster? I know someone will say Firefox caches the images, but I want to say: 1) it still needs to check the headers from the remote server to see if they have been updated (which is what I want my app to do); 2) I am not receiving the body; my code should only be requesting the header. So, how do I solve this?
I noticed that an HttpWebRequest hangs on the first request. I did some research and what seems to be happening is that the request is configuring or auto-detecting proxies. If you set
request.Proxy = null;
on the web request object, you might be able to avoid an initial delay.
With proxy auto-detect:
using (var response = (HttpWebResponse)request.GetResponse()) //6,956 ms
{
}
Without proxy auto-detect:
request.Proxy = null;
using (var response = (HttpWebResponse)request.GetResponse()) //154 ms
{
}
Change your code to use GetResponse asynchronously:
public override WebResponse GetResponse()
{
    // ...
    IAsyncResult asyncResult = BeginGetResponse(null, null);
    // ...
    return EndGetResponse(asyncResult);
}
Async Get
Probably Firefox issues multiple requests at once whereas your code does them one by one. Perhaps adding threads will speed up your program.
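A sketch of that idea using PLINQ; CheckExist is the method from the question, and the connection-limit line matters because .NET allows only two concurrent connections per host by default:
using System.Linq;
using System.Net;

ServicePointManager.DefaultConnectionLimit = 10;

var urls = Enumerable.Range(0, 100)
                     .Select(i => "http://google.com/lol" + i + ".jpg");

var results = urls.AsParallel()
                  .WithDegreeOfParallelism(10)
                  .Select(url => new { Url = url, Exists = CheckExist(url) })
                  .ToList();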
The answer is changing HttpWebRequest/HttpWebResponse to WebRequest/WebResponse only. That fixed the problem.
Have you tried opening the same URL in IE on the machine your code is deployed to? If it is a Windows Server machine, it's sometimes because the URL you're requesting is not in IE's list of trusted sites (which HttpWebRequest respects). You'll just need to add it.
Do you have more info you could post? I've done something similar and have run into tons of problems with HttpWebRequest before, all unique, so more info would help.
BTW, calling it using the async methods won't really help in this case. It doesn't shorten the download time; it just doesn't block your calling thread, that's all.
Close the response stream when you are done: in your CheckExist(), add wresp.Close() after wresp = (HttpWebResponse)wreq.GetResponse();
OK, if you are getting status code 404 for all web pages, then it is due to not specifying credentials. So you need to add:
wreq.Credentials = CredentialCache.DefaultCredentials;
Then you may also come across status code 500; for that you need to specify a user agent, which looks something like the line below:
wreq.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0) Gecko/20100101 Firefox/4.0";
"A WebClient instance does not send optional HTTP headers by default. If your request requires an optional header, you must add the header to the Headers collection. For example, to retain queries in the response, you must add a user-agent header. Also, servers may return 500 (Internal Server Error) if the user agent header is missing."
reference: https://msdn.microsoft.com/en-us/library/system.net.webclient(v=vs.110).aspx
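For example, with WebClient that header can be added like this (a small sketch; the UA string and URL are placeholders):
using (var client = new WebClient())
{
    // WebClient sends no User-Agent by default; some servers answer 500 without one.
    client.Headers[HttpRequestHeader.UserAgent] =
        "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0) Gecko/20100101 Firefox/4.0";
    string html = client.DownloadString("http://example.com/");
}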
To improve the performance of the HttpWebRequest, add wreq.Proxy = null. Now the code will look like:
static public bool CheckExist(string url)
{
    HttpWebRequest wreq = null;
    HttpWebResponse wresp = null;
    bool ret = false;
    try
    {
        wreq = (HttpWebRequest)WebRequest.Create(url);
        wreq.Credentials = CredentialCache.DefaultCredentials;
        wreq.Proxy = null;
        wreq.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0) Gecko/20100101 Firefox/4.0";
        wreq.KeepAlive = true;
        wreq.Method = "HEAD";
        wresp = (HttpWebResponse)wreq.GetResponse();
        ret = true;
    }
    catch (System.Net.WebException)
    {
    }
    finally
    {
        if (wresp != null)
            wresp.Close();
    }
    return ret;
}
Setting the cookie matters; you may need to add AspxAutoDetectCookieSupport=1, like this (where target is the Uri being requested):
req.CookieContainer = new CookieContainer();
req.CookieContainer.Add(new Cookie("AspxAutoDetectCookieSupport", "1") { Domain = target.Host });
