I have a custom SharePoint 2010 web part that runs the user through a series of steps in a registration process. At each step, when the required input is complete, the user clicks the Continue button, which is a standard server-side button control. The code-behind does some validation and DB updates before calling Response.Redirect, which refreshes the same page with updated session data.
(Note: the session data is kept in the URL as an encrypted query string parameter, not by the conventional Session object)
This solution works fine in my single-server test environment, but as soon as I deploy it to a load-balanced stage or production environment, some requests simply time out (ERR_TIMED_OUT) without ever receiving a response after the user clicks Continue.
The web part log shows that the web part is in fact calling Response.Redirect with a valid URL.
This is not a server resource issue: even with the timeout raised to a minute or more, no response is received.
The problem only occurs when deployed to load-balanced servers.
Everything works fine when I complete a registration on one of the load-balanced servers themselves, which leads me to believe there is a problem with load balancing and server sessions. I know that when interacting with a load-balanced web application from one of the server nodes in the NLB, all requests go to that particular server.
I know I have faced a similar issue before, but it was several years ago and I cannot remember what the solution was.
try
{
    // Get a clean URL without the query string.
    string url;
    if (string.IsNullOrEmpty(Request.Url.Query))
        url = Request.Url.AbsoluteUri;
    else
        url = Request.Url.AbsoluteUri.Replace(Request.Url.Query, "");

    // Append the encrypted, serialized session object.
    url += "?" + Constants.QueryStringParameterData + "=" + SessionData.Serialize(true);

    _log.Info("Redirecting to url '" + url + "'..");
    Response.Redirect(url);
}
catch (Exception)
{
    // Swallows everything, including the ThreadAbortException that
    // Response.Redirect(url) throws by design to end the request.
}
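A side note on that snippet: because Response.Redirect(url) ends the response by throwing a ThreadAbortException, the empty catch hides real errors along with the expected abort. A minimal variant that avoids relying on the exception (a sketch, unrelated to the timeout itself):

Response.Redirect(url, false); // false = do not abort the thread
HttpContext.Current.ApplicationInstance.CompleteRequest();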
OK, the problem has been resolved.
It turned out that UAG (Forefront Unified Access Gateway) was doing something in the background, and the way I discovered it was that the links that triggered the postbacks were changed from
http://some_url.com/sites/work/al2343/page.aspx
to
http://some_other_url.domain.com/uniquesigfed6a45cdc95e5fa9aa451d1a37451068d36e625ec2be5d4bc00f965ebc6a721/uniquesig1/sites/work/al2343/page.aspx
(Take note of the "uniquesig" in there)
This was the URL the browser actually tried to redirect to, but because of whatever the issue was with UAG the navigation froze.
I don't know how they fixed it, but at least the problem was not in my component.
One possibility is that Request.Url reflects how that particular server sees the URL (something like http://internalServer44/myUrl) instead of the externally visible, load-balanced URL (like http://NlbFarmUrl/myUrl).
In the case of SharePoint it is better to use the SPContext.Current.Site/Web properties to get the base portion of the URL, since those URLs should already be in the externally visible form.
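A minimal sketch of that idea, assuming SPContext does return the externally visible (AAM-mapped) URL; Constants.QueryStringParameterData and SessionData are taken from the question's code:

// Keep the request's path, but take the scheme/host/port from the
// externally visible web URL that SharePoint reports.
Uri webUri = new Uri(SPContext.Current.Web.Url);
UriBuilder builder = new UriBuilder(Request.Url)
{
    Scheme = webUri.Scheme,
    Host = webUri.Host,
    Port = webUri.Port,
    Query = Constants.QueryStringParameterData + "=" + SessionData.Serialize(true)
};
Response.Redirect(builder.Uri.AbsoluteUri, false);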
Related
Problem Background
In my ASP.NET MVC4 web application, we allow users to download data as an Excel workbook in which one of the cells contains a hyperlink to a report page. We prepare the link so that when the user clicks it in Excel, the ReportController is called with parameters, processes the request, and returns a report summary view, i.e. a .cshtml page. All works well...
I generate the Excel file using SpreadsheetGear; this is the code snippet that generates the link:
// Parse the report id from the source data; default to 0 on failure.
rrid = (int.TryParse((string) values[row][column], out outInt) ? outInt : 0);

// Add the hyperlink: cell, address, sub-address, screen tip, display text.
worksheet.Hyperlinks.Add(worksheet.Cells[row + 1, column],
    PrepareProspectProfileLink((int) rrid, downloadCode),
    string.Empty,
    "CTRL + click to follow link",
    rrid.ToString(CultureInfo.InvariantCulture));
Problem
I just noticed that when I click the link in Excel, the same request is sent to the web server twice.
Analysis
I checked using Fiddler and placed a breakpoint in the application code, and it is confirmed that the request is indeed sent twice.
In Fiddler, under the Process column, I found that the first request comes from "excel:24408" and the second request comes from "chrome:4028".
Also, if I copy-paste the link into Outlook, it invokes the request just once.
I understand this to indicate that the first request is invoked by Excel; when Excel is served the HTML, it knows nothing about how to render it, so it hands the request over to the default web browser, which is Chrome on my system. Chrome then fires the same request and opens the HTML page it receives.
Question
How can I stop this behavior? It puts unnecessary load on the web server, and secondly, when I audit user actions, I get two entries :(
I'm not sure about Excel, but you can handle this weird behavior on the web server instead. You can create an HTML page (without auditing) that uses JavaScript to redirect the user to the page with the real report (and the auditing).
If you're concerned just about auditing, you can track report requests in a cache (or the DB) and write an audit entry only if the same report wasn't requested within, let's say, the last 5 seconds.
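A minimal sketch of that second idea using ASP.NET's built-in HttpRuntime.Cache (the AuditReportRequest method and its parameters are illustrative, not from the question's code):

// Requires System.Web and System.Web.Caching.
private static readonly object AuditLock = new object();

private void AuditReportRequest(string downloadCode, int rrid)
{
    string cacheKey = "audit:" + downloadCode + ":" + rrid;
    lock (AuditLock)
    {
        // Same link followed within the last 5 seconds (e.g. by Excel
        // and then by the browser): record it only once.
        if (HttpRuntime.Cache[cacheKey] != null)
            return;

        HttpRuntime.Cache.Insert(cacheKey, true, null,
            DateTime.UtcNow.AddSeconds(5),
            System.Web.Caching.Cache.NoSlidingExpiration);

        // ... write the real audit entry here ...
    }
}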
When using WebResource.axd you will see two parameters being passed in the query string. It usually looks something like this:
WebResource.axd?d=9H3mkymBtDwEocEoKm-S4A2&t=634093400273197793
I have run into an issue where I need a permanent link to the resource in question; recently the link I was using stopped working. What would cause these IDs to change? Rebooting the server? Recompiling the code? Is there any way to make these IDs permanent?
Background -
As part of a site monitoring service we are subscribed to, we have "recorded" several sets of user actions for our website. For example, we recorded the process of logging into the site. The monitoring is now saying that the user login process fails (it is actually working fine) because it cannot find the WebResource.axd with the IDs it recorded.
This page provides all the information on the makeup of the URL
http://support.microsoft.com/kb/910442
The "d" stands for the requested Web Resource
Something worth noting is that you don't need to have the timestamp (t) parameter there to call the resource. Try it on your own site, view the source and grab a webresource.axd url and navigate to the it, remove the t
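For what it's worth (an addition, hedged): the d value is encrypted with the ASP.NET machineKey, so auto-generated keys that differ between farm servers, or that change after a reconfiguration, will invalidate recorded links; recompiling can also change the value if the assembly version changes. Pinning the key in web.config keeps d stable for a given assembly. A sketch with placeholder values:

<system.web>
  <!-- Placeholder keys: generate your own. A fixed machineKey keeps the
       encrypted "d" parameter stable across recycles and farm nodes. -->
  <machineKey validationKey="[your validation key]"
              decryptionKey="[your decryption key]"
              validation="SHA1" decryption="AES" />
</system.web>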
I want to run my personal web sites via an HttpHandler (I have a web server and a static IP at home).
Eventually, I will incorporate a data access layer and domain router into the handler, but for now, I am just trying to use it to return static web content.
I have the handler mapped to all verbs and paths with no access restrictions in IIS 7 on Windows 7.
I have added a little file logging at the beginning of ProcessRequest. As it is the first thing in the handler, the logging tells me whenever the handler is hit.
At the moment, the handler just returns a single web page that I have already written.
The handler itself is mostly just this:
// Serve the pre-written page for every request ("context" is the
// HttpContext passed to ProcessRequest).
using (FileStream fs = new FileStream(
    Path.Combine(context.Request.PhysicalApplicationPath, "index.htm"),
    FileMode.Open, FileAccess.Read))
{
    fs.CopyTo(context.Response.OutputStream);
}
I understand that this won't work for anything but the one file; a more general version might look something like the sketch below.
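For illustration, a hedged sketch of a more general handler that serves whatever file the request points at (MimeMapping.GetMimeMapping is only public from .NET 4.5; none of this is the original handler):

public void ProcessRequest(HttpContext context)
{
    // Map the request path to a physical file under the application root.
    // (A real handler should also reject ".." and other path tricks.)
    string relative = context.Request.AppRelativeCurrentExecutionFilePath
        .TrimStart('~', '/').Replace('/', '\\');
    if (relative.Length == 0)
        relative = "index.htm";
    string physical = Path.Combine(context.Request.PhysicalApplicationPath, relative);

    if (!File.Exists(physical))
    {
        context.Response.StatusCode = 404;
        return;
    }

    // Tell the browser what it is getting (text/html, image/png, ...).
    context.Response.ContentType = MimeMapping.GetMimeMapping(physical);
    context.Response.TransmitFile(physical);
}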
So my issue is this: the HTML file has links to some images, and I would expect the browser to come back to the server for those images as new requests. I would expect those requests to fail (because they'd be mapped to index.htm), but I would expect the logging to be hit at least twice (and potentially recursively). However, I only see a single request. The web page comes up and the images are 'X's.
When I refresh the browser, I see another request come through, but only for the root page again. The page is basic HTML; I do not have an ASP.NET application (nor do I want one, I like HTML/CSS/JS).
What do I have to do to get more than just the first request sent from the browser? I assume I'm just totally off the mark, because I wrote an HTTP module first and, strangely, got the same exact behavior. I'm thinking I need to specify some response headers, but I don't see that in any example.
I am trying to create an HttpModule in C# which will redirect arbitrary URLs and missing files, and which will perform canonicalization on all URLs that come in. Part of my canonicalization process is to redirect from default documents (such as http://www.contoso.com/default.aspx) to a bare directory (like http://www.contoso.com/).
I have discovered that when an IIS server receives a request for a bare directory, it processes that request normally and then creates a child request for the selected default document. This produces a redirect loop in my module: the first request goes through just fine, but when the module sees the child request it removes the default document from the URL and redirects back to the bare directory, starting the process over again.
Obviously, all I need to solve this problem is for my module to know when it is seeing a child request, so that it can ignore it. But I cannot find anything online describing how to tell the two requests apart. I found that request headers persist between the two requests, so I tried adding a value to the request headers and then looking for that value. This worked in IIS 7, but apparently IIS 6 won't let you alter request headers, and my code needs to run on both.
These child requests can also be triggered by any Server.Transfer or Server.Execute calls in the code. One trick that works to detect a child request is to add a custom request header during the first request and check for it later (in the child request). Example:
private bool IsChildRequest(HttpRequest request)
{
    // If our breadcrumb header is already present, this is a child request.
    var childRequestHeader = request.Headers["x-parent-breadcrumb"];
    if (childRequestHeader != null)
    {
        return true;
    }

    // First time through: mark the request so any child request is recognizable.
    // (Writing to Request.Headers requires the IIS 7 integrated pipeline.)
    request.Headers["x-parent-breadcrumb"] = "1"; // arbitrary value
    return false;
}
This works because the request headers are passed to the child request. I initially tried this with HttpContext.Current.Items, but that seemed to get reset for the child request.
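A hypothetical wiring of that check inside the module (the event choice and names are assumptions, not the asker's code):

public void Init(HttpApplication application)
{
    application.BeginRequest += (sender, e) =>
    {
        var app = (HttpApplication)sender;

        // Ignore the default-document child request to avoid the loop.
        if (IsChildRequest(app.Request))
            return;

        // ... canonicalization / redirect logic for the original request ...
    };
}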
What's happening with your module is exactly how it should work. If your default page is Default.aspx, then IIS is bound to re-execute the request as Default.aspx, which causes your module to redo its work. One thing I don't understand, however, is why you would want http://www.contoso.com/default.aspx redirected to http://www.contoso.com; perhaps you need to redefine your requirement. Alternatively, if possible, you could use another default page (like http://www.contoso.com/Home.aspx) and have IIS forward bare requests to that URL.
I am working on a site which is programmed in C#.NET. It uses a CMS called ADX Studio (a decision which predates my time there) that provides a shonky form of URL rewriting (as far as I can tell, it works by assigning an .aspx page as the default 404 handler in IIS).
I have a web form which lives at a rewritten URL. I edited it so that the HTML form's action points back to the rewritten URL:
// Request.RawUrl on these rewritten URLs looks like
// "/Default.aspx?404;http://www.site.com/admin/user/edit/",
// so take the part after the semicolon as the real URL.
var u = new Uri(Request.RawUrl.Split(new char[1] { ';' }).Last());
userAdminForm.Action = u.PathAndQuery;
(Kind of ugly, but it works given what Request.RawUrl is on these rewritten URLs.)
The "pretty" URL is something like this:
http://www.site.com/admin/user/edit/
On my development box (Windows XP / IIS 5), when I initially tried POSTing back to URLs like this I got an HTTP 405 error. I worked around this by adding a script mapping so that aspnet_isapi.dll handles all (*) requests, and everything works fine on my development machine.
I just pushed my changes to the live server (Windows Server 2003 R2 and IIS 6) and the POST fails silently. The page refreshes, but none of my logic (inside an IsPostBack branch in the code) gets hit. No errors are displayed; it just doesn't work.
If I remove my code setting the form's .Action, the postback works, but it posts to the ugly URL corresponding to the physical location of the .aspx file rather than to my page.
Am I missing a simple way to make this work? I don't want to switch URL rewriting methods or anything, as this is a large legacy site that is unfortunately pretty dependent on ADX Studio, so I don't want to do anything that would break it.
The issue is that the page's <form> tag references the "ugly" URL as its action. You can resolve that by completely removing the action attribute from the form; browsers will, by default, post back to the same page, i.e. the "pretty" URL.
This article explains how to accomplish an "actionless" form (about two-thirds of the way down): http://msdn.microsoft.com/en-us/library/ms972974.aspx
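The core of the technique from that article is a form control that renders every attribute except action, so the browser posts back to whatever URL the page was served from. A sketch adapted from the article (treat it as illustrative):

public class ActionlessForm : System.Web.UI.HtmlControls.HtmlForm
{
    protected override void RenderAttributes(HtmlTextWriter writer)
    {
        // Write the usual form attributes ourselves...
        writer.WriteAttribute("name", Name);
        Attributes.Remove("name");
        writer.WriteAttribute("method", Method);
        Attributes.Remove("method");

        // ...but deliberately skip "action".
        Attributes.Remove("action");
        Attributes.Render(writer);

        if (ID != null)
            writer.WriteAttribute("id", ClientID);
    }
}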
It seems the problem is the same as it was on IIS 5. I can get it to work by doing the following in IIS Manager:
1. Right-click the relevant website and select "Properties".
2. Choose the "Home Directory" tab.
3. Click "Configuration" down in "Application settings".
4. Click "Insert" next to "Wildcard application maps".
5. Browse to the location of aspnet_isapi.dll (in my case: C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll).
6. Untick "Check that file exists".
7. Click "OK" back through the Russian doll of dialogs.
This is basically the same approach I linked to in the question for IIS 5. However, it's not optimal, because IIS runs every request through ASP.NET, even for static files, which can only slow things down. I'd like to be able to specify that ASP.NET only needs invoking for HTTP POST requests, at least.
The weird thing is that IIS 5 gave an HTTP 405 error when POSTing to an extension without a registered ISAPI extension, whereas IIS 6 just fails silently. The page is being run through IIS (I can debug with a breakpoint in Page_Load), but IsPostBack (and IsCrossPagePostBack) don't get set correctly. Could it be related to the view state? Is there any alternative to the solution described above?
I've come to what I think is an optimal solution for this problem. It turns out that ADX Studio CMS does use the default 404 rule to do its form of URL rewriting, and this has a problem with HTTP POST:
when IIS initially executes a custom URL on a 404 error, it changes POST to GET, even if the client does a POST request.
(Thanks to Elite Brains' blog post about setting up IIS 6 and ASP.NET MVC.)
Rather than writing my own HttpModule, I decided instead to use Ionic's Isapi Rewrite Filter (IIRF) to rewrite my URLs. I then set the 404 error handler in IIS back to the default and created this IIRF.ini file to rewrite all requests into the same format the 404 handler produced:
# Skip paths that match a real directory or file...
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
# ...and rewrite everything else into the format the 404 handler produced.
RewriteRule ^(.*)$ /Default.aspx?404;http://%{HTTP_HOST}$1 [U,L]
And everything seems to work great. The advantage over my previous answer is that the rewrite code is low-level and runs fast, and the -f and -d switches mean that a file which actually exists is not rewritten, so static files don't incur the overhead of running through .NET.