I need to write a simple WinForms app that can be fired off to test whether a website is still alive and whether that website is able to read from its database.
I am using the usual "(HttpWebResponse)myHttpWebRequest.GetResponse()" approach in C# to test whether the site is alive, but I am at a loss for how to get a test page in my website to write something to the Response to indicate that it was able to test its own connectivity to the database.
Here is the sample code for my Winforms side (ripped from the MSDN):
private void CheckUrl()
{
    try
    {
        HttpWebRequest myHttpWebRequest = (HttpWebRequest)WebRequest.Create("http://www.google.com");
        HttpWebResponse myHttpWebResponse = (HttpWebResponse)myHttpWebRequest.GetResponse();
        myHttpWebResponse.Close();
        label1.Text = myHttpWebRequest.Address.AbsoluteUri;
    }
    catch (WebException e)
    {
        label1.Text = "WebException caught.\n\nException Message: " + e.Message;
        if (e.Status == WebExceptionStatus.ProtocolError)
        {
            label1.Text = String.Format("Status Code : {0}", ((HttpWebResponse)e.Response).StatusCode);
            label2.Text = String.Format("Status Description : {0}", ((HttpWebResponse)e.Response).StatusDescription);
        }
    }
    catch (Exception e)
    {
        label1.Text = e.Message;
    }
}
I was hoping for some help on the webform side of things to return to the above code.
Thanks for any help that you folks can provide.
Richard
You can create a webservice inside of the project called IAMALIVE and have it return a single char.
On your WinForms area, consume said WebService and if it works, your site is alive.
In the essence of Papuccino's answer: you can actually create web service methods in the C# code-behind of your WebForms project by marking them with the [WebMethod] attribute. Those methods live inside the web application itself rather than in a separate service.
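For illustration, a minimal sketch of such a service (the class name, namespace, and return value are placeholders; it assumes an IAmAlive.asmx file pointing at this class):
using System.Web.Services;

// Hypothetical code-behind for IAmAlive.asmx
[WebService(Namespace = "http://tempuri.org/")]
public class IAmAlive : WebService
{
    [WebMethod]
    public string Ping()
    {
        // Returning anything at all proves the site is up and serving requests.
        return "A";
    }
}
The WinForms app can then add a web reference to IAmAlive.asmx and call Ping(); if the call succeeds, the site is alive.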
What happens when your site fails? Does it return a 500 status code or timeout?
Another way to look at it: does it always do something expected if it succeeds?
You might call a URL in your web app that you know will either return a 200 response code or will have some expected HTML markup in the response if things are working fine.
Have your WinForms app call this URL and examine the response status code or the response text for your expected markup. You should also set a timeout on your HttpWebRequest; if the page does not load within the timeout, you will get a WebException and you will know the site is failing.
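A rough sketch of that WinForms-side check (the status-page URL, the 10-second timeout, and the "SUCCESS" marker are all illustrative assumptions; it uses System.Net and System.IO):
private bool IsSiteHealthy()
{
    try
    {
        // Hypothetical status page URL -- replace with your own.
        var request = (HttpWebRequest)WebRequest.Create("http://www.example.com/status.aspx");
        request.Timeout = 10000; // milliseconds; a slow page counts as a failure

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string body = reader.ReadToEnd();
            return response.StatusCode == HttpStatusCode.OK && body.Contains("SUCCESS");
        }
    }
    catch (WebException)
    {
        // Timeout, connection failure, or a non-success status code
        return false;
    }
}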
Also, if you have the budget, there are external monitoring services like gomez.com that can automate this and provide reporting on site availability.
Have your webform page open a database connection and perform something simple/low-impact, e.g.
select SystemTableId from dbo.[SystemTable] where SystemTableId = 1
where SystemTable is a single-row table.
If the page throws an exception for any reason, Response.Write the exception message, otherwise Response.Write("SUCCESS") or similar.
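A minimal sketch of such a page's code-behind (the page class and the "Main" connection string name are assumptions; the query is the one suggested above):
using System;
using System.Configuration;
using System.Data.SqlClient;

// Hypothetical code-behind for a DbCheck.aspx status page.
public partial class DbCheck : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Response.Clear();
        try
        {
            string connStr = ConfigurationManager.ConnectionStrings["Main"].ConnectionString;
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(
                "select SystemTableId from dbo.[SystemTable] where SystemTableId = 1", conn))
            {
                conn.Open();
                cmd.ExecuteScalar();
            }
            Response.Write("SUCCESS");
        }
        catch (Exception ex)
        {
            Response.Write(ex.Message);
        }
    }
}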
I wrote an HttpModule to intercept, evaluate and authorize requests, checking whether the logged-in user has appropriate access to the URL being requested, in a fairly old legacy system written in ASP.NET 2.0 (Web Site, not Web Application), whose customer does not want it ported to a newer framework. Restrictions are loaded and cached at login time.
Everything works fine, except when a page contains an <asp:MultiView> component or has a button that launches an AJAX method. When one of these situations occurs and the user doesn't have rights to access that URL, an alert box pops up with an "Unknown error" message, which comes from the ThreadAbortException thrown by the Response.End() method.
The question is: why is my "Unauthorized" message being overwritten by the "Unknown error" from the exception, only in these two situations?
Is there a way to do URL authorization using a database and caching, without cluttering Web.config with roles like those older ASP.NET samples do?
// My module Init method.
public void Init(HttpApplication context)
{
    context.PreRequestHandlerExecute += new EventHandler(context_PreRequestHandlerExecute);
    // PreRequestHandlerExecute is the first stage in the ASP.NET pipeline
    // where we can get a populated Session variable.
}

private void context_PreRequestHandlerExecute(object sender, EventArgs e)
{
    HttpApplication application = (HttpApplication)sender;
    HttpContext context = application.Context;

    // additional request filtering/validation/etc.
    LoggedUser user = (LoggedUser)application.Session["user"];
    string path = context.Request.Path;

    // more checks and rules...
    if (!checkUserAuthorization(path, user))
    {
        context.Response.Write("<script>alert('Unauthorized. Contact your manager.');</script>");
        context.Response.Write("<script>window.history.back();</script>");
        context.Response.StatusCode = 403;
        context.Response.End();
    }
}
EDIT: What I've already tried (with no success):
Response.OutputStream.Close();
Response.Flush();
HttpApplication.CompleteRequest();
It's by design. You can ignore it by wrapping the call in a catch for that exception:
try
{
    context.Response.End();
}
catch { }
Foreword
After a lot of research, I finally got it. This is ASP.NET 2.0; for AJAX operations, the project I'm working on uses a Microsoft component called "Atlas", which was later renamed ASP.NET AJAX. At the time this system was written, the developers used the beta ASP.NET AJAX (codename "Atlas") to address all AJAX and partial-rendering needs.
I needed to dig into the source code (thanks to Reflector) to understand and inspect where that "Unknown error" comes from.
Inside the Microsoft.Web.Atlas assembly there is a file named Microsoft.Web.Resources.ScriptLibrary.*.Atlas.js (where * can be Debug or Release), which is served at runtime through a WebResource.axd "proxy".
This JavaScript file has a bug: it expects the ASP.NET request to always return an HTTP 200 (OK) response code, which in my case does not happen (my module returns a 403 Forbidden).
Code
From Microsoft.Web.Resources.ScriptLibrary.*.Atlas.js taken from WebResource.axd:
this._onFormSubmitCompleted = function(sender, eventArgs) {
    var isErrorMode = true;
    var errorNode;
    var delta;
    if (sender.get_statusCode() == 200) {
        delta = sender.get_xml();
        if (delta) {
            errorNode = delta.selectSingleNode("/delta/pageError");
            if (!errorNode) {
                isErrorMode = false;
            }
        }
    }
    if (isErrorMode) {
        if (errorNode) {
            pageErrorMessage = errorNode.attributes.getNamedItem('message').nodeValue;
        }
        else {
            pageErrorMessage = 'Unknown error';
        }
        this._enterErrorMode(pageErrorMessage);
        return;
    }
    // Code continues.
}
From this code we can see that, since the response code is not 200 OK, the errorNode variable is never set, so the if (errorNode) test is always false and the message falls through to 'Unknown error'.
That left me with two options: always return HTTP 200 and modify every page that has an <atlas:ScriptManager>, adding an ErrorTemplate tag to each; or supersede that script with one that handles non-200 responses, loading it below the </form> tag in the master page.
There are plenty of tutorials on how to do proper error handling when using ScriptManager and UpdatePanels (an official one here) by subscribing to the AsyncPostBackError event, but this beta version (Atlas) simply doesn't have that event.
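For comparison, this is roughly what the released ScriptManager offers; it does not apply to the Atlas beta discussed here, and the control ID and message are illustrative:
// Markup: <asp:ScriptManager ID="ScriptManager1" runat="server"
//             OnAsyncPostBackError="ScriptManager1_AsyncPostBackError" />
protected void ScriptManager1_AsyncPostBackError(object sender, AsyncPostBackErrorEventArgs e)
{
    // Surface a friendly message instead of "Unknown error".
    ScriptManager1.AsyncPostBackErrorMessage = "Unauthorized. Contact your manager.";
}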
I decided to pull information from Google's Weather API - The code I'm using below works fine.
XmlDocument widge = new XmlDocument();
widge.Load("https://www.google.com/ig/api?weather=Brisbane/dET7zIp38kGFSFJeOpWUZS3-");
var weathlist = widge.GetElementsByTagName("current_conditions");
foreach (XmlNode node in weathlist)
{
    City.Text = ("Brisbane");
    CurCond.Text = (node.SelectSingleNode("condition").Attributes["data"].Value);
    Wimage.ImageUrl = ("http://www.google.com/" + node.SelectSingleNode("icon").Attributes["data"].Value);
    Temp.Text = (node.SelectSingleNode("temp_c").Attributes["data"].Value + "°C");
}
As I said, I am able to pull the required data from the XML file and display it, however if the page is refreshed or a current session is still active, I receive the following error:
WebException was unhandled by user code - The remote server returned
an error: 403 Forbidden Exception.
I'm wondering whether this could have something to do with some kind of access limitation placed on that particular XML file?
Further research and adaptation of suggestions
As stated below, this is by no means best practice, but I've included the catch I now use for the exception. I run this code on Page_Load, so I just do a post-back to the page. I haven't noticed any problems since. Performance-wise I'm not overly concerned: I haven't noticed any increase in load time, and this solution is temporary since this is all for testing purposes. I'm still in the process of switching to Yahoo's Weather API.
try
{
    XmlDocument widge = new XmlDocument();
    widge.Load("https://www.google.com/ig/api?weather=Brisbane/dET7zIp38kGFSFJeOpWUZS3-");
    var list2 = widge.GetElementsByTagName("current_conditions");
    foreach (XmlNode node in list2)
    {
        City.Text = ("Brisbane");
        CurCond.Text = (node.SelectSingleNode("condition").Attributes["data"].Value);
        Wimage.ImageUrl = ("http://www.google.com/" + node.SelectSingleNode("icon").Attributes["data"].Value);
        Temp.Text = (node.SelectSingleNode("temp_c").Attributes["data"].Value + "°C");
    }
}
catch (WebException exp)
{
    if (exp.Status == WebExceptionStatus.ProtocolError &&
        exp.Response != null)
    {
        var webres = (HttpWebResponse)exp.Response;
        if (webres.StatusCode == HttpStatusCode.Forbidden)
        {
            Response.Redirect("ithwidgedev.aspx");
        }
    }
}
Google article illustrating API error handling
Google API Handle Errors
Thanks to:
https://stackoverflow.com/a/12011819/1302173 (Catch 403 and recall)
https://stackoverflow.com/a/11883388/1302173 (Error Handling and General Google API info)
https://stackoverflow.com/a/12000806/1302173 (Response Handling/json caching - Future plans)
Alternative
I found this great open source alternative recently
OpenWeatherMap - Free weather data and forecast API
This is related to a change / outage of the service. See: http://status-dashboard.com/32226/47728
I have been using Google's Weather API for over a year to feed a phone server so that the PolyCom phones receive a weather page. It has run error free for over a year. As of August 7th 2012 there have been frequent intermittent 403 errors.
I hit the service once per hour (as has always been the case), so I don't think request frequency is the issue. More likely the intermittent nature of the 403 is related to the partial roll-out of a configuration change or a CDN change at Google.
The Google Weather API isn't really a published API. It was an internal service apparently designed for use on iGoogle so the level of support is uncertain. I tweeted googleapis yesterday and received no response.
It may be better to switch to a promoted weather API such as:
WUnderground Weather or
Yahoo Weather.
I added the following 'unless defined' error-handling Perl code yesterday to cope with this, but if the problem persists I will switch to a more fully supported service:
my $url = "http://www.google.com/ig/api?weather=" . $ZipCode ;
my $tpp = XML::TreePP->new();
my $tree = $tpp->parsehttp( GET => $url );
my $city = $tree->{xml_api_reply}->{weather}->{forecast_information}->{city}->{"-data"};
unless (defined($city)) {
print "The weather service is currently unavailable. \n";
open (MYFILE, '>/home/swarmp/public_html/status/polyweather.xhtml');
print MYFILE qq(<?xml version="1.0" encoding="utf-8"?>\n);
print MYFILE qq(<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "xhtml11.dtd">\n);
print MYFILE qq(<html xmlns="http://www.w3.org/1999/xhtml">\n);
print MYFILE qq(<head><title>Weather is Unavailable!</title></head>\n);
print MYFILE qq(<body>\n);
print MYFILE qq(<p>\n);
print MYFILE qq(The weather service is currently unavailable from the data vendor.\n);
print MYFILE qq(</p>\n);
print MYFILE qq(</body>\n);
print MYFILE qq(</html>\n);
close MYFILE;
exit(0);
}...
This is by no means a best practice, but I use this API heavily in some WP7 and Metro apps. I handle this by catching the exception (most of the time a 403) and simply re-calling the service inside the catch; when there is an error on the Google end it's usually brief and only results in 1 or 2 additional calls.
That's the same thing we found out.
Compare the request headers in a bad request and in a working request. The working request includes cookies. But where are they from?
Delete all your Google cookies from your browser. The weather API call will no longer work in your browser. Browse to google.com and then to the weather API, and it will work again.
Google checks the cookies to block repeated API calls. Getting the cookies once, before handling all weather API requests, fixes the problem. The cookies expire in one year; I assume you will restart your application more often than once a year, so you will get a new one. Getting cookies for each request ends in the same problem: too many different requests.
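A sketch of that approach with HttpWebRequest and a shared CookieContainer (the warm-up URL and the simplified weather URL are assumptions; uses System.Net and System.IO):
var cookies = new CookieContainer();

// One-time warm-up request to google.com to pick up the cookies.
var warmup = (HttpWebRequest)WebRequest.Create("http://www.google.com/");
warmup.CookieContainer = cookies;
using (warmup.GetResponse()) { }

// Subsequent weather API calls reuse the same cookie container.
var request = (HttpWebRequest)WebRequest.Create("https://www.google.com/ig/api?weather=Brisbane");
request.CookieContainer = cookies;
using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string weatherXml = reader.ReadToEnd();
}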
One tip: the weather does not change often, so cache the response (for maybe an hour). That will cut down on time-consuming requests.
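A minimal caching sketch using HttpRuntime.Cache from System.Web.Caching (the cache key and the one-hour window are arbitrary choices):
string weatherXml = (string)HttpRuntime.Cache["weather-brisbane"];
if (weatherXml == null)
{
    using (var client = new WebClient())
    {
        weatherXml = client.DownloadString("https://www.google.com/ig/api?weather=Brisbane");
    }
    // Keep the response for one hour, then refetch on the next request.
    HttpRuntime.Cache.Insert("weather-brisbane", weatherXml, null,
        DateTime.UtcNow.AddHours(1), Cache.NoSlidingExpiration);
}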
I found that if you try the request in a clean browser (like a new incognito window in Chrome), the Google weather service works. Possibly a cookie problem?
Currently I have a very simple piece of code that downloads a file from a server, however I keep running into the following exceptions:
The remote server returned an error: (500)
Unable to connect to the remote server
There is nothing wrong with the web server; it has to do with my service, and I guess it times out. How can I handle these more robustly? My code is shown below, it's really simple.
try
{
    string[] splitCrawlerid = StaticStringClass.crawlerID.Split('t');
    WebClient webClient = new WebClient();
    if (!Directory.Exists("C:\\ImageDepot\\" + splitCrawlerid[2]))
    {
        Directory.CreateDirectory("C:\\ImageDepot\\" + splitCrawlerid[2]);
    }
    webClient.DownloadFile(privateHTML, @"C:\ImageDepot\" + splitCrawlerid[2] + "\\" + "AT" + carID + ".jpeg");
}
catch (Exception ex)
{
    // not sure how to really handle these two exceptions reliably
}
The ideal situation for me would be to attempt to download the file again.
Try setting a user-agent header. WebClient doesn't send one by default, and MSDN warns that some web servers will return a 500 error if the user-agent isn't set.
A WebClient instance does not send optional HTTP headers by default.
If your request requires an optional header, you must add the header
to the Headers collection. For example, to retain queries in the
response, you must add a user-agent header. Also, servers may return
500 (Internal Server Error) if the user agent header is missing.
See the example on the MSDN page for how to add the header.
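For reference, a minimal sketch; the user-agent string is an arbitrary choice and localPath stands in for the destination path built in the question:
var webClient = new WebClient();
webClient.Headers.Add(HttpRequestHeader.UserAgent, "MyDownloader/1.0");
// equivalent: webClient.Headers["user-agent"] = "MyDownloader/1.0";
webClient.DownloadFile(privateHTML, localPath);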
You could wrap the whole thing in a for loop that goes 0..3, and the line after webClient.DownloadFile(...) could be a break;. That way if there's an exception, the break gets skipped and the app tries again. But that seems to be more of a band-aid to me; I'd spend more time figuring out exactly why things are going wrong.
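Roughly, that retry loop could look like this (three attempts is an arbitrary choice; privateHTML and localPath stand in for the values used in the question):
for (int attempt = 0; attempt < 3; attempt++)
{
    try
    {
        webClient.DownloadFile(privateHTML, localPath);
        break; // success -- skip the remaining attempts
    }
    catch (WebException)
    {
        if (attempt == 2) throw; // give up after the last attempt
    }
}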
If you want to remove all the "try while blah else until rethrow whatever" code from the business logic of your app, you could define an extension method like
public static T TryNTimes<T>(this Func<T> func, int n)
{
    int attempts = 0;
    while (true)
    {
        try
        {
            return func();
        }
        catch
        {
            // Rethrow once the final attempt has failed.
            if (++attempts == n) throw;
        }
    }
}
and use it like this (note that WebClient.DownloadFile returns void, so wrap a call that returns a value, such as DownloadData):
Func<byte[]> downloader = () => client.DownloadData(...);
var data = downloader.TryNTimes(5);
I am limiting the file size users can upload to the site from Web.config. As explained here, it should throw a ConfigurationErrorsException if the size is not accepted. I tried to catch it from the action method or controller for upload requests, but no luck. The connection is reset and I can't get it to show an error page.
I tried catching it in BeginRequest event but no matter what I do the exception is unhandled.
Here's the code:
protected void Application_BeginRequest(Object sender, EventArgs e)
{
    HttpContext context = ((HttpApplication)sender).Context;
    try
    {
        if (context.Request.ContentLength > maxRequestLength)
        {
            IServiceProvider provider = (IServiceProvider)context;
            HttpWorkerRequest workerRequest = (HttpWorkerRequest)provider.GetService(typeof(HttpWorkerRequest));

            // Check if body contains data
            if (workerRequest.HasEntityBody())
            {
                // get the total body length
                int requestLength = workerRequest.GetTotalEntityBodyLength();
                // Get the initial bytes loaded
                int initialBytes = 0;
                if (workerRequest.GetPreloadedEntityBody() != null)
                    initialBytes = workerRequest.GetPreloadedEntityBody().Length;
                if (!workerRequest.IsEntireEntityBodyIsPreloaded())
                {
                    byte[] buffer = new byte[512];
                    // Set the received bytes to initial bytes before start reading
                    int receivedBytes = initialBytes;
                    while (requestLength - receivedBytes >= initialBytes)
                    {
                        // Read another set of bytes
                        initialBytes = workerRequest.ReadEntityBody(buffer, buffer.Length);
                        // Update the received bytes
                        receivedBytes += initialBytes;
                    }
                    initialBytes = workerRequest.ReadEntityBody(buffer, requestLength - receivedBytes);
                }
            }
        }
    }
    catch (HttpException)
    {
        context.Response.Redirect(this.Request.Url.LocalPath + "?action=exception");
    }
}
But I still get this:
Maximum request length exceeded.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Web.HttpException: Maximum request length exceeded.
Source Error:
An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
Update:
What method raises the exception anyway? If I read the request, it raises the exception. If I don't read it at all, I get "101 Connection Reset" in the browser. What can be done here?
You can't catch the error in the action method because the exception is thrown earlier, but you can catch it here:
protected void Application_Error()
{
    var lastError = Server.GetLastError();
    if (lastError != null && lastError is HttpException && lastError.Message.Contains("exceed"))
    {
        Response.Redirect("~/errors/RequestLengthExceeded");
    }
}
Actually, when the file size exceeds the limit, an HttpException is raised.
There is also an IIS limit on request content, which can't be caught in the application. IIS 7 throws:
HTTP Error 404.13 - Not Found. The request filtering module is configured to deny a request that exceeds the request content length.
You can Google it; there is a lot of information about this IIS error.
There is no way to do it right without client-side help. You cannot determine whether the request is too long unless you read all of it, and if you read every request to the end, anyone can come along and keep your server busy. If you just look at the content length and drop the request, the other side is going to think there is a connection problem. There's nothing you can do with error handling; it's a shortcoming of HTTP.
You can use Flash or JavaScript upload components to handle this properly, because on the server alone it can't fail nicely.
I am not 100% on this, but I think it might help if you tried changing:
context.Response.Redirect(this.Request.Url.LocalPath + "?action=exception");
to
Server.Transfer(this.Request.Url.LocalPath + "?action=exception", false);
My thinking is that the over-max-request-length Request is still being processed in the Redirect call, but if you tell Server.Transfer to ditch the form data, it will fall under the max request length and then it might behave differently.
No guarantees, but its easy to check.
catch (Exception ex)
{
    // WebEventCode 3004 corresponds to WebEventCodes.RuntimeErrorPostTooLarge
    if (ex is HttpException && (ex as HttpException).WebEventCode == 3004)
    {
        //-- you can now inform the client that the file uploaded was too large.
    }
    else
        throw;
}
I have a similar issue in that I want to catch the 'Maximum request length exceeded' exception within the Application_Error handler and then do a Redirect.
(The difference is that I am writing a REST service with ASP.Net Web API and instead of redirecting to an error page, I wanted to redirect to an Error controller which would then return the appropriate response).
However, what I found was that when running the application through the ASP.Net Development Server, the Response.Redirect didn't seem to be working. Fiddler would state "ReadResponse() failed: The server did not return a response for this request."
My client (Advanced REST Client for Chrome) would simply show "0 NO RESPONSE".
If I then ran the application via a local copy of IIS on my development machine then the redirect would work correctly!
I'm not sure I can definitively say that Response.Redirect does not work on the ASP.Net Development Server, but it certainly wasn't working in my situation.
So, I recommend trying to run your application through IIS instead of IIS Express or the Development Server and see if you get a different result.
See this link on how to Specify the Web Server for Web Projects in Visual Studio:
http://msdn.microsoft.com/en-us/library/ms178108(v=vs.100).aspx
I want to use Response.Redirect to redirect the browser when an exception occurs.
I also want to pass the exception message to my error page.
For example:
string URL = "Page2.aspx?Exception=" + ex.ToString()
Response.Redirect(URL)
Can it be done? Is this the right syntax?
Instead of Response.Redirect, which sends a response to the client asking it to request a different page, you should call Server.Transfer, which runs a different page immediately and sends that page directly to the client.
You can then put the exception in HttpContext.Items and read it from HttpContext.Items in your error page.
For example:
catch (Exception ex)
{
    HttpContext.Current.Items.Add("Exception", ex);
    Server.Transfer("Error.aspx");
}
In Error.aspx, you can then get the exception like this:
<%
    Exception error;
    if (!HttpContext.Current.Items.Contains("Exception"))
        Response.Redirect("/"); // There was no error; the user typed Error.aspx into the browser
    error = (Exception)HttpContext.Current.Items["Exception"];
%>
Yes that would work (with some semicolons added of course and you probably just want to send the exception message):
String URL = "Page2.aspx?Exception=" + ex.Message;
Response.Redirect(URL);
As Andrew said, it should work.
However, if you're looking for Error Management, you're better off using Server.GetLastError() so you get the full Exception object including stack trace.
Here's an MSDN article that deals with Application Errors in general and uses Server.GetLastError().
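For example, a Global.asax sketch (the error page name is a placeholder):
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    // Log ex here -- it carries the full stack trace, unlike a query-string message.
    Server.ClearError();
    Response.Redirect("~/ErrorPage.aspx"); // hypothetical error page
}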
Typically I would have panels in my page and toggle visibility in the catch block to display a friendly message to the user. I would also include an emailed report to myself detailing the error message.
try
{
}
catch (Exception ex)
{
    formPanel.Visible = false;
    errorPanel.Visible = true;
    // Log error
    LogError(ex);
}
As for reporting/forwarding the error to another page:
string errorURL = "ErrorPage.aspx?message=" + ex.Message;
Response.Redirect(errorURL, true);
And don't forget ELMAH!
http://bit.ly/HsnFh
We would always advise against redirecting to a .aspx page on an error condition.
In the past we've seen scenarios where a fundamental issue with the application caused an error, which in turn redirected to an error.aspx page, which itself errored, resulting in an endless redirection loop.
We strongly advise people to use a .htm page or something which is not handled by the ASP.NET framework for error pages.
There is built in support within ASP.NET using the customErrors section of the Web.config to automatically handle error redirection for you.
customErrors tag
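A minimal sketch of that Web.config section, pointing at a static .htm page as advised above (the file name is a placeholder):
<system.web>
  <customErrors mode="On" defaultRedirect="error.htm">
    <error statusCode="500" redirect="error.htm" />
  </customErrors>
</system.web>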
You can look into global exception handling too; this can be managed via the Application_Error event handler, which you can find within the Global.asax.
Thanks,
Phil
http://exceptioneer.com