How can I check whether a website is online or offline using C#?
Try to hit the URL with an HttpWebRequest using an HTTP GET, call GetResponse() on the request you just created, and check the HTTP status code of the response.
Here you will find the list of all HTTP status codes. If the status code starts with 5 (5xx), the site is offline or failing. Other codes can also tell you that the site is offline or unavailable; compare the codes you get against the ones you care about from the full list.
// Code example
// Note: GetResponse() throws a WebException for 4xx/5xx responses, so in practice
// you may need to catch it and inspect WebException.Response for the status code.
HttpWebRequest httpReq = (HttpWebRequest)WebRequest.Create("http://www.stackoverflow.com");
httpReq.AllowAutoRedirect = false;

HttpWebResponse httpRes = (HttpWebResponse)httpReq.GetResponse();

if (httpRes.StatusCode == HttpStatusCode.NotFound)
{
    // Code for NotFound resources goes here.
}

// Close the response.
httpRes.Close();
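On newer frameworks the same check is usually written with HttpClient; here is a minimal sketch, where the ten-second timeout and the "anything below 5xx counts as online" rule are my own assumptions:

using System;
using System.Net.Http;
using System.Threading.Tasks;

static async Task<bool> IsSiteOnlineAsync(string url)
{
    using (var client = new HttpClient { Timeout = TimeSpan.FromSeconds(10) })
    {
        try
        {
            using (var response = await client.GetAsync(url))
            {
                // 5xx means the server answered but is failing; treat it as offline.
                return (int)response.StatusCode < 500;
            }
        }
        catch (Exception ex) when (ex is HttpRequestException || ex is TaskCanceledException)
        {
            // DNS failure, refused connection, timeout, etc.
            return false;
        }
    }
}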
First off, define "online" and "offline". If your code-behind is running at all, your site is by definition online.
For my web apps, I use a setting called Offline that an admin can turn on or off. I can then check that setting programmatically and show a friendly maintenance message to my users while the site is offline.
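A minimal sketch of that idea, assuming the flag is stored in web.config as <add key="Offline" value="true" /> under appSettings and that Maintenance.aspx is the friendly page (both names are placeholders):

using System;
using System.Configuration;
using System.Web;

protected void Page_Load(object sender, EventArgs e)
{
    bool offline;
    bool.TryParse(ConfigurationManager.AppSettings["Offline"], out offline);

    if (offline)
    {
        // Send visitors to the maintenance page while the admin has the flag on.
        Response.Redirect("~/Maintenance.aspx");
    }
}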
Additionally, you can use App_Offline.htm.
References:
http://www.15seconds.com/issue/061207.htm
http://weblogs.asp.net/scottgu/archive/2005/10/06/426755.aspx
If you mean the online/offline state controlled through IIS, then you can handle this with custom Web Events (application lifetime events):
http://support.microsoft.com/kb/893664
http://msdn.microsoft.com/en-us/library/aa479026.aspx
You can use Pingdom.com and its APIs. Check the source code of the 'Alerter for Pingdom API' at the bottom of this page.
How can I extract the X-Pagination header from a response and use the next link to chain requests?
I've tried in both Postman and a C# console application with RestSharp, with no success.
The easiest thing would be a small console application to test; I just need to iterate through the pages.
This is what I get back in the X-Pagination header:
{
"Page":1,
"PageSize":20,
"TotalRecords":1700,
"TotalPages":85,
"PreviousPageLink":"",
"NextPageLink":"www......./api/products/configurations?Filters=productid=318&IncludeApplicationPerformance=true&page=1",
"GotoPageLinkTemplate":"www..../api/products/configurations?Filters=productid=318&IncludeApplicationPerformance=true&page=0"
}
In Postman you simply retrieve the header, parse it into a JSON object, then use the value to set the link for your next request.
Make your initial request, then in the Tests tab do something like:
var nextPageLinkJson = JSON.parse(pm.response.headers.get("X-Pagination"));
var nextPageLink = nextPageLinkJson.NextPageLink;
pm.environment.set("nextPageLink", nextPageLink);
If you don't know how many pages you're going to get, you'll have to play with the conditions for when to set the nextPageLink variable, but that's the general idea.
You can also queue which request runs next with postman.setNextRequest("request_name").
Note that this approach only works in the Collection Runner.
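For the C# console side, here is a rough sketch of the same loop using HttpClient and System.Text.Json rather than RestSharp; the start URL is a placeholder and the loop assumes NextPageLink is empty on the last page:

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Placeholder start URL; the real one comes from the API you are paging through.
        string url = "https://example.com/api/products/configurations?Filters=productid=318&page=1";

        while (!string.IsNullOrEmpty(url))
        {
            using var response = await client.GetAsync(url);
            response.EnsureSuccessStatusCode();

            string body = await response.Content.ReadAsStringAsync();
            Console.WriteLine($"Fetched {url} ({body.Length} chars)");

            url = null;
            if (response.Headers.TryGetValues("X-Pagination", out var values))
            {
                using var pagination = JsonDocument.Parse(string.Join("", values));
                string next = pagination.RootElement.GetProperty("NextPageLink").GetString();
                if (!string.IsNullOrEmpty(next))
                {
                    url = next; // an empty NextPageLink (assumed) ends the loop
                }
            }
        }
    }
}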
I'm trying to web-crawl a site that uses PHP sessions via cookies. It is a good ol' SquirrelMail webmail server.
I saw a couple of posts like this one, but it's not working for me.
When I reach the part where the cookies are sent by the host, I try to retrieve them using:
HttpWebResponse rs = (HttpWebResponse)rq.GetResponse();
CookieCollection cc = new CookieCollection();
cc.Add(rs.Cookies);
But rs.Cookies comes back empty. However, there are Set-Cookie headers in the response, which I try to use as a guide to build the actual cookies, like this:
for (int i = 0; i < rs.Headers.Count; i++)
{
if (rs.Headers.Keys[i].ToLower().Contains("cookie"))
{
string val = rs.Headers[i];
string[] vv = val.Split(";=,".ToCharArray());
Cookie co = new Cookie(vv[0], vv[1]);
// I know this is not the cleanest way to do it
// I've tried to manually set different values for
// co.Domain, co.Path and co.HttpOnly, just to get a working
// example. I tried different alternatives, but it doesn't
// seem to change anything
cc.Add(co);
}
}
Next, I send the cookies with the request for the next page, which is nothing but a frameset. The fact that I reach the frameset means I've been successfully authenticated and the session cookie is working. However, when I request one of the frames, I get an authentication-error page. I've done my research, and the cookies do not change in the meantime. What may be going wrong?
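One thing worth checking: rs.Cookies is only populated when the request was given a CookieContainer, and reusing that same container for the frameset and for each frame request keeps the PHP session cookie attached automatically. A minimal sketch of that approach (the URLs and the form handling are placeholders):

using System.Net;

CookieContainer jar = new CookieContainer();

HttpWebRequest rq = (HttpWebRequest)WebRequest.Create("http://example.com/src/login.php");
rq.CookieContainer = jar;
// ... set Method = "POST" and write the login form data here ...

using (HttpWebResponse rs = (HttpWebResponse)rq.GetResponse())
{
    // The Set-Cookie headers have now been parsed into the container for you.
}

HttpWebRequest frameRq = (HttpWebRequest)WebRequest.Create("http://example.com/src/frame_page.php");
frameRq.CookieContainer = jar;   // same container, so the session cookie is sent again
using (HttpWebResponse frameRs = (HttpWebResponse)frameRq.GetResponse())
{
    // read the frame content here
}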
Some may wonder why I'm trying to access webmail when POP/SMTP would do a cleaner job. The answer is that this is just a first example to learn the basics; I don't really care what the site is as long as I can successfully manage sessions.
I don't think posting all the code is a good idea yet, since it is a bit messy and long: I planned to clean it up once it worked (I'll post it if you think it's worth the confusion). Moreover, I think I may have a conceptual error related to the frames that may be the key to solving the problem.
I am working on a project in which I have to make a Windows application that takes a URL from the user in a textbox. When the user presses the Proceed button, the application should open that URL in a WebBrowser control, fill in the form on that page containing the user ID and password textboxes, and submit it via the login button on that web page. My application should then show the next page in the WebBrowser control.
I can open the URL in the application's WebBrowser control from my C# code, but I can't figure out how to find the user ID and password textboxes on the web page currently open in the WebBrowser control, how to fill them in, how to find the login button, and how to click it from my C# code.
For this you will have to look at the page source of the third-party site and find the IDs of the username textbox, the password textbox and the submit button. (If you provide a link I can check it for you.) Then use this code:
// Add a reference to Microsoft.mshtml in Solution Explorer,
// plus a reference to the SHDocVw (Microsoft Internet Controls) COM library.
using mshtml;

private SHDocVw.WebBrowser_V1 Web_V1;

private void Form1_Load(object sender, EventArgs e)
{
    Web_V1 = (SHDocVw.WebBrowser_V1)webBrowser1.ActiveXInstance;
}

private void webBrowser1_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    if (webBrowser1.ReadyState == WebBrowserReadyState.Complete)
    {
        if (webBrowser1.Url.ToString() == "YourLoginSite.Com")
        {
            try
            {
                HTMLDocument doc = (HTMLDocument)Web_V1.Document;

                HTMLInputElement passBox = (HTMLInputElement)doc.all.item("PassIDThatyoufoundinsource", 0);
                passBox.value = "YourPassword";

                HTMLInputElement logBox = (HTMLInputElement)doc.all.item("loginidfrompagesource", 0);
                logBox.value = "yourlogin";

                HTMLInputElement submit = (HTMLInputElement)doc.all.item("SubmitButtonIDFromPageSource", 0);
                submit.click();
            }
            catch { }
        }
    }
}
I would use Selenium as opposed to the WebBrowser control.
It has an excellent C# library, and this kind of thing is the main reason it was developed.
You don't have to simulate filling in the username/password fields or clicking the login button; you need to simulate the browser rather than the user.
Read the login page HTML and parse it to find the IDs of the username and password fields. The username field can often be found by looking for <input> tags whose name is something like "username", "user" or "login"; the password field will usually be an <input> tag with type="password". JavaScript-based login popups would involve parsing the JS as well.
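One way to do that parsing is with the HtmlAgilityPack NuGet package; the sketch below assumes a conventional login form and that loginPageHtml already holds the downloaded page. The XPath heuristics are mine, not a rule.

using HtmlAgilityPack;

// loginPageHtml: the HTML of the login page, downloaded beforehand.
var doc = new HtmlDocument();
doc.LoadHtml(loginPageHtml);

// Find the form that contains a password box; its action is where you POST to.
var form = doc.DocumentNode.SelectSingleNode("//form[.//input[@type='password']]");
string action = form.GetAttributeValue("action", "");

var passwordInput = form.SelectSingleNode(".//input[@type='password']");
string passwordField = passwordInput.GetAttributeValue("name", "");

// Heuristic: the first text/email input in the same form is usually the username.
var userInput = form.SelectSingleNode(".//input[@type='text' or @type='email']");
string userField = userInput.GetAttributeValue("name", "");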
Then follow the example code shown here: How do you programmatically fill in a form and 'POST' a web page?
The important thing here is that you're simulating a browser POST event. Don't worry about text boxes and other visual form elements; your goal is to generate an HTTP POST request with the appropriate key-value pairs.
Your first step is to look through the HTML of the page you're pretending to be and figure out the names of the user ID and password form elements. Let's say, for example, that they're called "txtUsername" and "txtPassword" respectively; then the POST arguments that the browser (or user agent) will be sending up in its POST request will be something like:
txtUsername=fflintstone&txtPassword=ilikerocks
As background to this, you might like to do a little research on how HTTP works, but I'll leave that to you.
The other important thing is to figure out which URL it posts this login request to. Normally this is whatever appears in the address bar of the browser when you log in, but it may be something else. You'll need to check the action attribute of the form element to see where it goes.
It may be useful to download a copy of Fiddler2. Yes, weird name, but it's a great web debugging tool that basically acts as a proxy and captures everything going between the browser and the remote host. Once you figure out how to use it, you can then pull apart each request-response to see what's happening. It'll give you the URL being called, the type of the request (usually GET or POST), the request arguments, and the full text of the response.
Now you want to build your app. You need logic that makes the correct HTTP requests, passes in the form arguments, and gets back the results. Luckily, the System.Net.HttpWebRequest class will help you do just that.
Let's say the login page is at www.hello.org/login.aspx and it expects you to POST the login arguments. Your code might look something like this (obviously, this is very simplified):
Imports System.IO
Imports System.Net
Imports System.Web
Dim uri As String = "http://www.hello.org/login.aspx"
Dim request As HttpWebRequest = DirectCast(WebRequest.Create(uri), HttpWebRequest)
request.Timeout = 10000 ' 10 seconds
request.UserAgent = "FlintstoneFetcher/1.0" ' or whatever
request.Accept = "text/*"
request.Headers.Add("Accept-Language", "en")
request.Method = "POST"
Dim data As Byte() = New ASCIIEncoding().GetBytes("txtUsername=fflintstone&txtPassword=ilikerocks")
request.ContentType = "application/x-www-form-urlencoded"
request.ContentLength = data.Length
Dim postStream As Stream = request.GetRequestStream()
postStream.Write(data, 0, data.Length)
postStream.Close()
Dim webResponse As HttpWebResponse
webResponse = DirectCast(request.GetResponse(), HttpWebResponse)
Dim streamReader As StreamReader = New StreamReader(webResponse.GetResponseStream(), Encoding.GetEncoding(1252))
Dim response As String = streamReader.ReadToEnd()
streamReader.Close()
webResponse.Close()
The response string now contains the full response text from the remote host, and that host should consider you logged in. You may need to do a little extra work if the remote host tries to set cookies (you'll need to return those cookies on subsequent requests). Alternatively, if it expects you to pass integrated authentication on successive pages, you'll need to add credentials to those requests, something like:
request.Credentials = New NetworkCredential(theUsername, thePassword)
That should be enough information to get cracking. I would recommend that you modularise your logic for working with HTTP into a class of its own. I've implemented a complex solution that logs into a certain website, navigates to a predetermined page, parses the HTML, and looks for a daily file to be downloaded in the "invox"; if it exists, it downloads it. I set this up as a batch process which runs each morning, saving someone from having to do it manually. Hopefully my experience will benefit you!
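For readers following along in C#, here is roughly the same flow as the VB sample above, with the same illustrative URL and field names; the CookieContainer line is only needed if the host sets cookies that you have to return later:

using System;
using System.IO;
using System.Net;
using System.Text;

string uri = "http://www.hello.org/login.aspx";

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
request.Timeout = 10000;                          // 10 seconds
request.UserAgent = "FlintstoneFetcher/1.0";      // or whatever
request.Accept = "text/*";
request.Headers.Add("Accept-Language", "en");
request.Method = "POST";
request.ContentType = "application/x-www-form-urlencoded";
request.CookieContainer = new CookieContainer();  // reuse this container on later requests

byte[] data = Encoding.ASCII.GetBytes("txtUsername=fflintstone&txtPassword=ilikerocks");
request.ContentLength = data.Length;

using (Stream postStream = request.GetRequestStream())
{
    postStream.Write(data, 0, data.Length);
}

string response;
using (HttpWebResponse webResponse = (HttpWebResponse)request.GetResponse())
using (StreamReader reader = new StreamReader(webResponse.GetResponseStream(), Encoding.GetEncoding(1252)))
{
    response = reader.ReadToEnd();
}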
I decided to pull information from Google's Weather API - The code I'm using below works fine.
XmlDocument widge = new XmlDocument();
widge.Load("https://www.google.com/ig/api?weather=Brisbane/dET7zIp38kGFSFJeOpWUZS3-");
var weathlist = widge.GetElementsByTagName("current_conditions");
foreach (XmlNode node in weathlist)
{
City.Text = ("Brisbane");
CurCond.Text = (node.SelectSingleNode("condition").Attributes["data"].Value);
Wimage.ImageUrl = ("http://www.google.com/" + node.SelectSingleNode("icon").Attributes["data"].Value);
Temp.Text = (node.SelectSingleNode("temp_c").Attributes["data"].Value + "°C");
}
As I said, I am able to pull the required data from the XML file and display it. However, if the page is refreshed or a current session is still active, I receive the following error:
WebException was unhandled by user code - The remote server returned an error: (403) Forbidden.
I'm wondering whether this could be due to some kind of access limitation placed on that particular XML file?
Further research and adaptation of suggestions
As stated below, this is by no means best practice, but I've included the catch I now use for the exception. I run this code in Page_Load, so I just do a postback to the page, and I haven't noticed any problems since. Performance-wise I'm not overly concerned: I haven't noticed any increase in load time, and this solution is temporary because this is all for testing purposes. I'm still in the process of moving to Yahoo's Weather API.
try
{
XmlDocument widge = new XmlDocument();
widge.Load("https://www.google.com/ig/api?weather=Brisbane/dET7zIp38kGFSFJeOpWUZS3-");
var list2 = widge.GetElementsByTagName("current_conditions");
foreach (XmlNode node in list2)
{
City.Text = ("Brisbane");
CurCond.Text = (node.SelectSingleNode("condition").Attributes["data"].Value);
Wimage.ImageUrl = ("http://www.google.com/" + node.SelectSingleNode("icon").Attributes["data"].Value);
Temp.Text = (node.SelectSingleNode("temp_c").Attributes["data"].Value + "°C");
}
}
catch (WebException exp)
{
if (exp.Status == WebExceptionStatus.ProtocolError &&
exp.Response != null)
{
var webres = (HttpWebResponse)exp.Response;
if (webres.StatusCode == HttpStatusCode.Forbidden)
{
Response.Redirect("ithwidgedev.aspx");
}
}
}
Google article illustrating API error handling
Google API Handle Errors
Thanks to:
https://stackoverflow.com/a/12011819/1302173 (Catch 403 and recall)
https://stackoverflow.com/a/11883388/1302173 (Error Handling and General Google API info)
https://stackoverflow.com/a/12000806/1302173 (Response Handling/json caching - Future plans)
Alternative
I recently found this great open-source alternative:
OpenWeatherMap - Free weather data and forecast API
This is related to a change / outage of the service. See: http://status-dashboard.com/32226/47728
I have been using Google's Weather API for over a year to feed a phone server so that the Polycom phones receive a weather page. It ran error-free for over a year, but as of August 7th, 2012 there have been frequent intermittent 403 errors.
I hit the service once per hour (as has always been the case), so I don't think request frequency is the issue. More likely, the intermittent nature of the 403s is related to the partial roll-out of a configuration change or a CDN change at Google.
The Google Weather API isn't really a published API. It was an internal service apparently designed for use on iGoogle, so the level of support is uncertain. I tweeted @googleapis yesterday and received no response.
It may be better to switch to a promoted weather API such as:
WUnderground Weather or
Yahoo Weather.
I added the following 'unless defined' error-handling Perl code yesterday to cope with this, but if the problem persists I will switch to a more fully supported service:
my $url = "http://www.google.com/ig/api?weather=" . $ZipCode ;
my $tpp = XML::TreePP->new();
my $tree = $tpp->parsehttp( GET => $url );
my $city = $tree->{xml_api_reply}->{weather}->{forecast_information}->{city}->{"-data"};
unless (defined($city)) {
print "The weather service is currently unavailable. \n";
open (MYFILE, '>/home/swarmp/public_html/status/polyweather.xhtml');
print MYFILE qq(<?xml version="1.0" encoding="utf-8"?>\n);
print MYFILE qq(<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "xhtml11.dtd">\n);
print MYFILE qq(<html xmlns="http://www.w3.org/1999/xhtml">\n);
print MYFILE qq(<head><title>Weather is Unavailable!</title></head>\n);
print MYFILE qq(<body>\n);
print MYFILE qq(<p>\n);
print MYFILE qq(The weather service is currently unavailable from the data vendor.\n);
print MYFILE qq(</p>\n);
print MYFILE qq(</body>\n);
print MYFILE qq(</html>\n);
close MYFILE;
exit(0);
}...
This is by no means a best practice, but I use this API heavily in some WP7 and Metro apps. I handle this by catching the exception (most of the time a 403) and simply re-calling the service inside the catch; if there is an error on Google's end it's usually brief and only results in one or two additional calls.
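A minimal sketch of that catch-and-retry idea (the method name and the single-retry cap are mine; the poster simply re-calls the service inside the catch):

using System.Net;
using System.Xml;

// Retry once on a 403, on the assumption that the error is transient.
static XmlDocument LoadWeather(string url, int retries = 1)
{
    try
    {
        var doc = new XmlDocument();
        doc.Load(url);
        return doc;
    }
    catch (WebException ex) when (retries > 0 &&
        (ex.Response as HttpWebResponse)?.StatusCode == HttpStatusCode.Forbidden)
    {
        // Transient 403 from the service: try again.
        return LoadWeather(url, retries - 1);
    }
}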
That's the same thing we found out.
Compare the request headers in a bad request and a working request: the working request includes cookies. But where do they come from?
Delete all your Google cookies from your browser. The weather API call will no longer work in your browser. Browse to google.com and then to the weather API, and it will work again.
Google checks the cookies to block excessive API calls. Getting the cookies once, before handling all weather API requests, will fix the problem. The cookies expire in one year; I assume you will restart your application more often than once a year, so you will get a new one then. Getting new cookies for each request runs into the same problem: too many different requests.
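If that is the cause, one workaround would be to fetch the cookies once with a plain request to google.com and reuse the same CookieContainer for every weather call; a sketch under that assumption:

using System.IO;
using System.Net;

CookieContainer cookies = new CookieContainer();

// Prime the cookie jar once (assumption: a plain GET to google.com yields the
// cookies that the weather endpoint checks).
HttpWebRequest prime = (HttpWebRequest)WebRequest.Create("http://www.google.com/");
prime.CookieContainer = cookies;
using (prime.GetResponse()) { }

// Reuse the same container for every weather call.
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://www.google.com/ig/api?weather=Brisbane");
req.CookieContainer = cookies;
using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
using (StreamReader reader = new StreamReader(resp.GetResponseStream()))
{
    string xml = reader.ReadToEnd();
}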
One tip: the weather does not change that often, so cache the response (for maybe an hour). That will cut down on time-consuming operations such as requests.
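A minimal sketch of that caching tip, using System.Runtime.Caching.MemoryCache with a one-hour expiry; FetchWeatherXml is a hypothetical helper standing in for the actual request:

using System;
using System.Runtime.Caching;

static string GetWeatherCached(string city)
{
    ObjectCache cache = MemoryCache.Default;
    string key = "weather:" + city;

    string cached = cache.Get(key) as string;
    if (cached != null)
        return cached;                                     // served from cache, no request made

    string fresh = FetchWeatherXml(city);                  // hypothetical helper that does the real call
    cache.Set(key, fresh, DateTimeOffset.Now.AddHours(1)); // keep it for an hour
    return fresh;
}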
I found that if you try the request in a clean browser (for example a new incognito window in Chrome), the Google weather service works. Possibly a problem with cookies?
Here is my simple code, which works fine if called from PHP or any client other than Adobe AIR. The same code also works when called from a SWF; there is FluorineFx code for another part of the project as well, but that doesn't do anything to break this.
I did find one thing: all POST calls were somehow changing to GET, which really amazes me. I would be glad to get an answer for this. Thanks in advance, everyone. Below is almost the same code from my web service, with the AIR code just under it.
[WebMethod(EnableSession = true)]
public bool Authenticate(string UserName,string Password)
{
try
{
if (Membership.ValidateUser(UserName, Password)){
FormsAuthentication.SetAuthCookie(UserName, true);
return true;
}
return false;
}
catch (Exception ex)
{
return false;
}
}
and my call from the Adobe AIR code is below:
var ws:WebService = new WebService();
ws.wsdl="http://mysite.com/myservice.asmx?WSDL";
ws.useProxy=false;
ws.addEventListener(LoadEvent.LOAD,onWSDLLoad);
ws.loadWSDL();
ws.Authenticate.addEventListener(ResultEvent.RESULT,onLoginResultHandler);
ws.Authenticate.addEventListener(FaultEvent.FAULT,onLoginFaultHandler);
ws.Authenticate("usrname","password");
protected function onLoginFaultHandler(event:FaultEvent):void
{
Alert.show('Login Failed with message\r\n[ '+event.fault.faultString+' ]');
/* Error #1085: The element type "br" must be terminated
by the matching end-tag "</br>". */
/* checking the content value of fault event shows
same out put as http://mysite.com/myservice.asmx */
}
protected function onLoginResultHandler(event:ResultEvent):void
{
/* on success code */
}
This guy explains the following at http://verveguy.blogspot.com/2008/07/truth-about-flex-httpservice.html:
1. All HTTP GET requests are stripped of headers. It's not in the Flex stack, so it's probably the underlying Flash Player runtime.
2. All HTTP GET requests that have a content type other than "application/x-www-form-urlencoded" are turned into POST requests.
3. All HTTP POST requests that have no actual posted data are turned into GET requests. See 1 and 2.
4. All HTTP PUT and HTTP DELETE requests are turned into POST requests. This appears to be a browser limitation that the Flash Player is stuck with.
I do see my request above turning into a GET, but I DO have POST values in it. Or are those somehow not being sent or recorded by the WebService object?
This is pretty simple... The Flex XML parser uses strict XML checking, so all tags must be closed. If you can change the web service, change all <br> tags to <br />.
I finally found the answer myself. It turns out I had cookieless sessions set to AutoDetect, which meant that when AIR called a URL the server would redirect in order to keep the cookie/session value inside the URI itself.
Once I switched that to UseCookies, everything went back to normal. I could test this from a sample web service and realized it was the server side that was doing something wrong; between AIR and the browser, cookies are the only difference.
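For reference, the setting in question is the cookieless attribute in web.config; a sketch of the change described above (apply it to whichever of sessionState or forms authentication your site actually uses):

<system.web>
  <!-- Was cookieless="AutoDetect"; AIR could not follow the URI-based fallback. -->
  <sessionState cookieless="UseCookies" />
  <authentication mode="Forms">
    <forms cookieless="UseCookies" />
  </authentication>
</system.web>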
Somehow NuSOAP for PHP is smart enough to know that there is AutoDetect or a new URI for the web service available, but AIR couldn't work that out. Anyway, thanks everyone for helping me solve this.