How can I debug problems related to (lack of) postback - c#

I have created a custom wizard control that dynamically loads user controls as you progress through it. The wizard is behaving as expected in all environments (PC/Mac) and browsers I have tested; however, a client is reporting that she is unable to complete the wizard. What I know about the issue:
It always fails on the same wizard step for this user (not the first step).
When the user clicks the 'next' button on that step, the controller reports that the request was not a postback (i.e. IsPostBack == false) and displays the first page of the wizard.
The client is using a Mac and is accessing the site using the latest version of Safari.
If the client switches to Firefox, or even just switches the user agent in Safari to something other than Safari, the problem goes away.
So the problem is that when the client reaches a certain step in the wizard and clicks 'next', instead of reloading that step to initiate the save event, the controller merely displays the first step of the wizard.
The step that fails contains many different form controls, including textboxes, dropdowns, checkboxes and a file-upload control. We thought it might have something to do with invalid characters being pasted in from Word or something similar, but that seems unlikely given that the problem only appears to be happening in Safari.
No exceptions are thrown and the Windows event log is not showing any related errors or warnings.
What I am looking for is ways to diagnose this error. At the moment I've been unable to reproduce the behavior that the client is experiencing but after going on site and seeing it for myself I can verify that it is definitely a valid issue.
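One low-impact way to gather evidence is to log the raw request details server-side so a failing Safari request can be compared against a working one. Below is a minimal sketch of that idea (the log path is an arbitrary example, and this diagnostic is my own suggestion rather than something from the original question):

// Hypothetical diagnostic logging in the wizard page's code-behind.
// Dumps enough of each request to show why ASP.NET treated it as a GET.
protected void Page_Load(object sender, EventArgs e)
{
    var sb = new System.Text.StringBuilder();
    sb.AppendLine(DateTime.Now + " " + Request.HttpMethod + " " + Request.RawUrl);
    sb.AppendLine("IsPostBack: " + IsPostBack);
    sb.AppendLine("Content-Type: " + Request.ContentType);
    sb.AppendLine("User-Agent: " + Request.UserAgent);

    // Fields such as __VIEWSTATE must arrive in the form collection
    // for ASP.NET to classify the request as a postback.
    foreach (string key in Request.Form.AllKeys)
        sb.AppendLine("form field: " + key);

    System.IO.File.AppendAllText(@"C:\logs\wizard-trace.log", sb.ToString());
}

If the failing request arrives as a GET, or as a POST with an empty form collection, something between the browser and IIS is transforming or stripping the request body, which is consistent with the proxy findings described in the updates below.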
Update 26/10/2010:
We installed a proxy on the client's NIC in order to capture the requests and responses. The problem is that when the proxy is running, the client no longer appears to have the problem. Does this behavior make sense to anyone?
Update 27/10/2010:
After investigating the traffic on the client's machine we noticed that the response headers included some entries related to a client-side proxy, and we confirmed that they are in fact running the Squid proxy in their office. To rule out that it had anything to do with the problem, we got them to turn it off and then try the wizard again. This time no problems were encountered! So the proxy seems to be interfering with the requests, causing .NET to somehow record the POST request as a non-postback. The following lines were found in the response headers of a failed request. Can anyone comment on how Squid could cause the behavior we are experiencing and what we can do about it?
Via:1.0 squid-12 (squid/3.1.0.13), 1.0 ClientSiteProxy:3128 (squid/2.7.STABLE4)
X-Cache:MISS from squid-12, MISS from ClientSiteProxy
X-Cache-Lookup:MISS from ClientSiteProxy:3128
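One thing worth noting: a proxy that serves a cached response, or rewrites the request on the way through, can hand the browser a page that was never generated for its POST. A defensive measure, sketched below rather than offered as a confirmed fix for this case, is to mark the wizard pages as uncacheable so no intermediary is tempted to cache them:

// Ask intermediaries and the browser not to cache the wizard pages.
// Emits "Cache-Control: no-cache, no-store" plus an expired Expires header.
protected void Page_Load(object sender, EventArgs e)
{
    Response.Cache.SetCacheability(HttpCacheability.NoCache);
    Response.Cache.SetNoStore();
    Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));
}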

If I had to troubleshoot this, I would first take a Fiddler trace (www.fiddlertool.com) on the client and see what the requests are up to. I am not sure whether Fiddler works on a Mac, but any HTTP-watching or network-monitoring tool should be good. The reason I am not doubting the code is that it works well in all the other browsers, so the code shouldn't be bad.
Maybe there is something in the code (like adding cookies, etc.) that is interfering with that specific client's browser.
HTH,
Rahul

For Mac there's HTTPScoop, which lets you debug HTTP POST data; it is similar to Fiddler.

The problem is not solved as such, but we ended up just adding an exception to the client's Squid proxy to bypass our website. The problem seems to be proxy/IIS/Safari related, but we haven't been able to track it down any further, and the client is happy with this solution as long as the problem doesn't resurface somewhere else. I'll re-post if more information surfaces.

Related

MVC5 application form results in multiple post requests on submit

We have an MVC5/Razor application that last week suddenly started acting really weird. It's hosted on a Windows Server 2008 R2 machine (with IIS 7.5) and the problem started after installing Windows updates last week. Up until then the application was working just fine.
Problem is that when a user submits a simple form consisting of 10 text fields, 4 text areas and a drop-down list, the server doesn't respond properly resulting in an "Error_Connection_Reset" (in Chrome) / "Page Unavailable" (in IE11).
We use POST-Redirect-GET pattern with RedirectToAction in the receiving action in the controller which would normally result in a 302 response and redirect.
The form is rendered like this:
@using (Html.BeginForm("Create", "Controller"))
{
    @Html.AntiForgeryToken()
    <div class="editor-fields">
        @Html.EditorFor(m => m.Model)
    </div>
    <div class="clear-fix"></div>
    <div class="submit-area">
        <input type="submit" value="Submit" />
    </div>
}
The action has these attributes:
[HttpPost]
[ValidateAntiForgeryToken]
[AllowAnonymous]
[ValidateInput(false)]
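For reference, a receiving action with those attributes, following the POST-Redirect-GET shape described above, would look roughly like this (the view model type and redirect target here are hypothetical, inferred from the form markup):

[HttpPost]
[ValidateAntiForgeryToken]
[AllowAnonymous]
[ValidateInput(false)]
public ActionResult Create(FormViewModel model) // hypothetical view model
{
    if (!ModelState.IsValid)
        return View(model);

    // ... persist the submission ...

    // PRG: answer the POST with a 302 so that a refresh re-issues a
    // GET rather than a second POST.
    return RedirectToAction("Index");
}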
We also use Google Analytics, jQuery, jQuery Validate, and unobtrusive AJAX, with optimization (minification); most JS scripts are included with Scripts.Render.
The application works fine when we access it from inside our own domain, but since all our users need access from outside, we need to fix this error. This could suggest a DNS issue, but our IT support says DNS looks just fine and hasn't been changed recently.
Here's what we've done and found out so far:
The log file in inetpub\logs\LogFiles shows multiple (between 3 and 10) POST requests, all with status code 302, but no following GET request. There really should be only one POST request followed by one GET request!
The log file in %windir%\System32\LogFiles\HTTPERR shows nothing interesting, just a bunch of Timer_ConnectionIdle "errors" whenever the web site reaches its idle timeout value (which is the default 20 minutes).
Inspected requests with Fiddler and the dev tools in Chrome and IE11; all show the same request headers. With Fiddler we get "[Fiddler] ReadResponse() failed: The server did not return a complete response for this request. Server returned 0 bytes." In Chrome Dev Tools the Status column just says "(failed)".
Disabled caching and compression in IIS.
Turned CustomErrors off in web.config file
Added <modules runAllManagedModulesForAllRequests="true">in web.config
Searched Google and SO for answers but so far to no avail
Checked the recently installed updates from Windows Update regarding .NET 4.5.2 and related Knowledge Base articles but nothing that really seemed related to this problem was mentioned
Edit: Also, we enabled Failed Request Tracing but we only get failed request logs for a missing favicon.ico in inetpub\logs\FailedReqLogFiles folder
The funny thing is that if I put a check mark in "Disable cache" in Chrome Dev Tools, the application also works just fine. This could suggest that it's a caching issue, which is also why we tried turning off output caching in IIS.
Our next step would be to either fire up a new server (Windows Server 2012 and a more recent version of IIS) and install the application there, or install Wireshark on our current server to investigate the requests further. But if anyone has experienced this behaviour and knows a fix for it, we would rather just fix it for now. So please, if anyone can help, please advise.
Since the problem only seemed to occur for users outside our domain, we started thinking that the issue might be related to our firewall, so we contacted our firewall provider and they confirmed that there was an issue with the latest version of the firewall software. As it happened, the firewall software was updated the same day the Windows updates were installed on our server, which caused a bit of confusion. After reverting to the previous version of the firewall software, the issue has disappeared and everything is working as expected again. Phew!

MVC5 Application not Debugging Correctly

I have a very weird problem.
I've been creating my application, building it, and running it; Chrome pops up with its tab and the page loads.
The last thing I can remember installing before the app went haywire is Unity.
So now I build, click Run, and VS doesn't show any pages. IIS Express is running to the point where I can request pages like Home/Index, but when I request my JsonResult action Blog/Blogs I get a 500 error. I've used Fiddler and I can hit the standard URLs but not my JsonResult.
Usually if you're on a page editing it and then hit F5, Chrome will load that page in the browser. VS isn't doing that anymore.
The only thing I can pin it down to is Unity.....
If you cannot pinpoint from the Fiddler response the exact reason why your server returned a 500 error, try debugging your code. Start by enabling breaking on all errors: in VS use Ctrl+D, E and make sure that Common Language Runtime Exceptions is selected.
Now F5 into your application, click Continue on any exceptions you don't recognize, and then in the browser navigate to the controller action that triggered the 500 error. Chances are the debugger will point you to the precise cause of the unhandled error.
NOTE: Don't forget to turn off breaking on all Common Language Runtime exceptions once you have identified the problem, or you might get flooded with lots of verbose errors.
NOTE 2: Usually you don't need to resort to this heavy debugging artillery; just inspecting the error response in Fiddler or your browser's web developer toolbar can often lead you to the conclusion.
It was due to the last thing I added: Unity.
Just as an FYI, the reason no errors were coming back is that none of the controllers serving up my pages had an interface injected into them.
The only controller that had an interface injected into it was the toolbox controller, and that was being accessed via this JSON call. The response that was coming back was an HTML page, but you could only view it in the Network tab of the Chrome console.
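For anyone hitting the same wall: the usual cause is resolving a controller whose constructor dependency was never registered with the container. A minimal sketch of the missing registration (the service and implementation types are hypothetical, and the namespace/bootstrap package names vary by Unity version):

using Microsoft.Practices.Unity;   // Unity 3.x namespace; varies by version
using Unity.Mvc5;                  // common bootstrap package; an assumption here

public static class UnityConfig
{
    public static void RegisterComponents()
    {
        var container = new UnityContainer();

        // If a controller takes IToolboxService in its constructor and
        // this registration is missing, Unity cannot build the controller
        // and MVC answers the request with a 500 whose details only show
        // up in the response body.
        container.RegisterType<IToolboxService, ToolboxService>();

        System.Web.Mvc.DependencyResolver.SetResolver(
            new UnityDependencyResolver(container));
    }
}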

ASP.NET 4.0 MAC validation failure

To begin, I should mention that I'm quite new to C# and ASP.NET 4.0. The solution to this problem may be elementary so don't hesitate to ask fundamental questions.
I've inherited an ASP.NET 4.0 application that failed our automated security test because of <pages enableViewStateMac="false"> (not my fault). Of course, I turned it back on. At that point a very specific pattern of behavior emerged:
1) I can navigate to the application landing page
2) attempting to click on any link leaving the landing page results in a "Validation of viewstate MAC failed..." error.
2a) the exception to this is that clicking on the link that takes me to the landing page (the page I'm already on) works just fine
I should mention that navigation to other ASPX pages occurs by way of Response.Redirect(...). I can successfully navigate to a page if I enter the URL directly into the address bar (http://dummyhost.com:12345/Enroll.aspx, as opposed to going to http://dummyhost.com:12345/LandingPage.aspx and then clicking on Enroll).
In the Page_Init() method of the master page, I'm setting:
Page.ViewStateUserKey = Session.SessionID;
If I comment out this line, I can turn on MAC and the application is perfectly happy. Can anyone illuminate what's going on?
The most likely cause is that some landing-page-specific data is being submitted to the server and persisting through the call to Response.Redirect, so the enrollment page tries to read the landing-page-specific data and fails the request since the data cannot be interpreted correctly.
Instead of using Response.Redirect, consider using a plain <a href="..."> hyperlink directly in your markup when you want to generate a simple link. This will cause the browser to make a vanilla HTTP GET request to the specified resource, free of any current-page-specific data.
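For what it's worth, one frequently cited culprit with ViewStateUserKey = Session.SessionID (an assumption on my part, not something confirmed in this thread) is that ASP.NET issues a fresh SessionID on every request until something is actually stored in the session, so the key used when rendering a page can differ from the key used when validating its postback. A minimal guard looks like this (the session key name is hypothetical):

protected void Page_Init(object sender, EventArgs e)
{
    // ASP.NET hands out a new SessionID on every request until the
    // session stores a value, so pin the session first; otherwise the
    // key used to MAC the rendered page may differ from the key used
    // to validate the postback.
    if (Session["__pinned"] == null)
    {
        Session["__pinned"] = true;
    }

    Page.ViewStateUserKey = Session.SessionID;
}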

ASHX renders as broken image

I've got a really vexing problem with an ASHX handler that renders a captcha image. The thing that makes it really vexing is that it was working fine two months ago, and when I went back to it again today it had stopped working.
What I've got is a page that throws in a captcha every so often. This is the markup from an example of a challenge:
<img class="challengedtl" src="Challenge.ashx?tkn=0057ea27-4d35-4850-9c6f-7a6fdc9818e2"/>
The GUID references a record in a SQL table that contains the actual content of the captcha as well as the status of the captcha challenge, i.e. has it been processed and if so did the user get it right etc.
On the page where this markup is found, the image displays as a broken jpeg. When I drop a breakpoint in the ASHX ProcessRequest() method I can see that the ASHX is never being called.
When I take the URL out of the src attribute and run it directly from the address bar in my browser, I hit my breakpoint in ProcessRequest and the captcha image is rendered just fine.
I don't believe that my ASHX code is the problem, since it works when I call it directly. The problem seems to be with why the ASHX isn't being called by the main page. Given that this was working in February I am at a loss to explain what is going on.
I know that something has happened to my machine since then; I suspect a Windows update or a service pack for something. The reason I say this is that my captcha processing includes tracking the caller's IP address. Back when this was working, my localhost was being registered as 127.0.0.1 (IPv4), but now it is being registered as ::1 (IPv6). Probably a red herring.
Does anyone know what might be causing this or do you have any suggestions for how to troubleshoot this problem?
Is the handler in the same folder as the page containing the HTML you posted above?
Here are the two key parts:
When I drop a breakpoint in the ASHX ProcessRequest() method I can see that the ASHX is never being called.
and
src="Challenge.ashx?tkn=0057ea27-4d35-4850-9c6f-7a6fdc9818e2"
Put those together, and what we can surmise is that the path in your src attribute is wrong.
It's just an image tag. If the HTML loads, the browser will send a request for that resource. Since your breakpoint is not hit, it can only mean that either you aren't testing somewhere that allows breakpoints, or the request is being sent to the wrong place.
It could be as simple as sending the request to the production version of the site, using the wrong scheme (i.e. https vs. http), or missing a folder or port number somewhere. The browser should be able to give you the entire path of the resource; make sure it matches what you expect.
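If the path does turn out to be the issue, one way to take relative-path guesswork out of the equation is to build the src server-side from an application-relative URL. A sketch, assuming the image is declared as <img id="challengeImage" runat="server" class="challengedtl" /> and that challengeToken holds the GUID for the current challenge (both names hypothetical):

// Code-behind for the page that renders the challenge. ResolveUrl
// turns the app-relative path into the correct path regardless of
// which folder the current page lives in.
protected void Page_Load(object sender, EventArgs e)
{
    challengeImage.Src = ResolveUrl("~/Challenge.ashx?tkn=" + challengeToken);
}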

POSTing to a re-written URL on IIS 6 doesn't work

I am working on a site which is programmed in C# .net. It uses a CMS called ADX Studio (a decision which predates my time there) which provides a shonky form of URL Rewriting (as far as I can tell it works by assigning an aspx page as the default 404 handler in IIS).
I have a web form which lives at a rewritten URL. I edited it so that the HTML form's action points back to the rewritten URL:
// Request.RawUrl on these rewritten URLs has the form
// "/Default.aspx?404;http://www.site.com/admin/user/edit/", so take
// the part after the semicolon to recover the original ("pretty") URL.
var u = new Uri(Request.RawUrl.Split(new char[1] { ';' }).Last());
userAdminForm.Action = u.PathAndQuery;
(kind of ugly but works based on what Request.RawUrl is on these rewritten URLs).
The "pretty" URL is something like this:
http://www.site.com/admin/user/edit/
On my development box (Windows XP/IIS 5), when I initially tried POSTing back to URLs like this I got an HTTP 405 error. I worked around this by adding a script mapping so that Aspnet_isapi.dll handles all (*) requests, and everything works fine on my development machine.
I just pushed my changes to the live server (Windows Server 2003 R2 and IIS 6) and the POST fails silently. The page refreshes but none of my logic (inside an IsPostBack path in the code) gets hit. No errors are displayed; it just doesn't work.
If I remove my code setting the .Action of the form then the postback works but it is posting to the ugly URL corresponding to the physical location of the aspx file rather than my page.
Am I missing a simple way to make this work? I don't want to switch URL-rewriting methods or anything, as this is a large legacy site that is unfortunately pretty dependent on ADX Studio, so I don't want to do anything that will break that.
The issue is that the page's <form> tag is referencing the "ugly" URL as its action. You can resolve that by completely removing the action attribute from the form. Browsers will, by default, post back to the same page, i.e. the "pretty" URL.
This article explains how to accomplish an "actionless" form (about two thirds of the way down): http://msdn.microsoft.com/en-us/library/ms972974.aspx
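The core of the trick in that article is a form control that skips the action attribute when rendering, so the browser posts back to whatever URL it is currently displaying. A minimal sketch along those lines (the class name is mine; see the article for the full version):

using System.Web.UI;

// A form control that renders its attributes without "action", so the
// browser posts back to the current (rewritten) URL.
public class ActionlessForm : System.Web.UI.HtmlControls.HtmlForm
{
    protected override void RenderAttributes(HtmlTextWriter writer)
    {
        writer.WriteAttribute("name", Name);
        writer.WriteAttribute("method", Method);
        writer.WriteAttribute("id", ClientID);
        // Deliberately no "action" attribute here.
    }
}

You can then swap it in for the standard HtmlForm, for example via a tagMapping entry in web.config, so existing pages don't need editing.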
It seems like the problem is the same as it was on IIS 5. I can get it to work by doing the following in the IIS Manager:
Right click on the relevant website and select "Properties"
Choose the "Home Directory" tab
Click "Configuration" down in the "Application settings"
Click "Insert" next to the "Wildcard application maps"
Browse to the location of aspnet_isapi.dll (in my case: C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll )
Untick "Check that file exists"
Click "OK" back through the Russian doll of dialogs.
This is basically the same as the approach that I linked to in the question for IIS 5. However, it's not optimal, because IIS is running every request through ASP.NET (even static files), which can only slow things down. I'd like to be able to specify that ASP.NET only needs invoking for HTTP POST requests, at least.
The weird thing is that IIS 5 gave an HTTP 405 error when POSTing to an extension without a registered ISAPI extension, but IIS 6 just fails silently. And the page is being run through IIS (I can debug with a breakpoint in the Page_Load function) but IsPostBack (and IsCrossPagePostBack) don't get set correctly. Could it be related to the view state? Is there any alternative to my solution described above?
I've come to what I think is an optimal solution for this problem. It turns out that ADX Studio does use the default 404 rule to do some form of URL rewriting, and this has a problem with HTTP POST:
when IIS initially executes a custom URL on a 404 error, it changes POST to GET, even if the client does a POST request.
(thanks to elite brains' blog post about setting up IIS6 and ASP.NET MVC).
Rather than creating my own HttpModule, I decided instead to use Ionic's ISAPI Rewrite Filter (IIRF) to rewrite my URLs. I then set the 404 error handler in IIS back to the default, and created this IIRF.ini file to rewrite all requests into the same format the 404 handler produced:
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ /Default.aspx?404;http://%{HTTP_HOST}$1 [U,L]
And everything seems to work great. The advantage over my previous answer is that the rewrite code is low-level and runs fast, and the -f and -d checks mean that if a file or directory actually exists the URL isn't rewritten, so static files don't incur the overhead of running through .NET.
