I have the following code:
if (Request.UrlReferrer != null)
{
    if (Request.UrlReferrer.PathAndQuery.ToLowerInvariant() == "/test/content.htm")
    {
        postbacklink = Request.UrlReferrer.AbsoluteUri.Replace("/TEST/Content.htm", "/Testing.aspx?") + Request.QueryString;
    }
    else
    {
        postbacklink = Request.UrlReferrer.AbsoluteUri;
    }
}
ExtendedLoanView.PostbackLink = postbacklink;
Now this page can be accessed from two different locations, which means this code:
postbacklink = Request.UrlReferrer.AbsoluteUri.Replace("/TEST/Content.htm", "/Test.aspx?") + Request.QueryString;
can only work with one page (Test.aspx) and is hard-coded. In IE7, Request.UrlReferrer shows me this:
Request.UrlReferrer = {http://Testing:12345/PPP/Content.htm}
Whereas in IE8+ I am getting this value:
Request.UrlReferrer = {http://Testing:12345/PPP/TestingPage.aspx?Name=Xyz&Address=123 YYY}
How should I solve this issue? It's been bugging me for the past month.
I would definitely advise not basing your logic on request information (any more than on user-entered values). The thing is that it will differ across browsers, and it is easily tampered with.
If you still need to pass information from the client to the server, make sure it is validated. If you need it to stay in sync and remain valid, do not rely on what the browser gives you; set it yourself and then read it from the place in the request where you set it (for example, a hidden input, a control, a variable in the viewstate, or whatever the technology you're using allows).
Most sites handle the situation you're trying to solve by passing the destination URL in the URL itself, in a query parameter. For example:
http://www.example.com/Login.aspx?returnUrl=/TEST/content.htm
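A minimal sketch of reading and validating such a parameter back on the server; the parameter name is the one from the example above, and the fallback page is a hypothetical placeholder:
string returnUrl = Request.QueryString["returnUrl"];
string postbackLink;

// Only accept a relative, same-site path; fall back to a known page otherwise.
if (!string.IsNullOrEmpty(returnUrl)
    && returnUrl.StartsWith("/")
    && !returnUrl.StartsWith("//")
    && Uri.IsWellFormedUriString(returnUrl, UriKind.Relative))
{
    postbackLink = returnUrl;
}
else
{
    postbackLink = "/Default.aspx"; // hypothetical fallback page
}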
EDIT: I do realize that everything you send to the client is very hackable anyway, but if you set it yourself, it's easier for you to validate that it hasn't been tampered with. An example is the ViewState validation methods.
Related
I know that on the client side (JavaScript) you can use window.location.hash, but I could not find any way to access it from the server side. I'm using ASP.NET.
We had a situation where we needed to persist the URL hash across ASP.Net post backs. As the browser does not send the hash to the server by default, the only way to do it is to use some Javascript:
- When the form submits, grab the hash (window.location.hash) and store it in a server-side hidden input field. Put this in a DIV with an id of "urlhash" so we can find it easily later.
- On the server you can use this value if you need to do something with it. You can even change it if you need to. (A server-side sketch follows the script snippets below.)
- On page load on the client, check the value of this hidden field. You will want to find it by the DIV it is contained in, as the auto-generated ID won't be known. Yes, you could do some trickery here with .ClientID, but we found it simpler to just use the wrapper DIV, as it allows all this JavaScript to live in an external file and be used in a generic fashion.
- If the hidden input field has a valid value, set that as the URL hash (window.location.hash again) and/or perform other actions.
We used jQuery to simplify the selecting of the field, etc ... all in all it ends up being a few jQuery calls, one to save the value, and another to restore it.
Before submit:
$("form").submit(function() {
    $("input", "#urlhash").val(window.location.hash);
});
On page load:
var hashVal = $("input", "#urlhash").val();
if (IsHashValid(hashVal)) {
    window.location.hash = hashVal;
}
IsHashValid() can check for "undefined" or other things you don't want to handle.
Also, make sure you use $(document).ready() appropriately, of course.
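For the server-side step mentioned above, a rough sketch of reading the saved value on postback; it assumes the hidden input inside the "urlhash" DIV is an ASP.NET HiddenField with a hypothetical ID of hfUrlHash:
protected void Page_Load(object sender, EventArgs e)
{
    if (IsPostBack)
    {
        // hfUrlHash is a hypothetical <asp:HiddenField> placed inside the "urlhash" DIV.
        string hash = hfUrlHash.Value;   // value saved by the submit handler

        if (!string.IsNullOrEmpty(hash))
        {
            // Use the hash, or change it; the client script restores
            // whatever is in the field on the next page load.
            hfUrlHash.Value = hash;
        }
    }
}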
[RFC 2396][1] section 4.1:
When a URI reference is used to perform a retrieval action on the identified resource, the optional fragment identifier, separated from the URI by a crosshatch ("#") character, consists of additional reference information to be interpreted by the user agent after the retrieval action has been successfully completed. As such, **it is not part of a URI**, but is often used in conjunction with a URI.
(emphasis added)
[1]: https://www.rfc-editor.org/rfc/rfc2396#section-4
That's because the browser doesn't transmit that part to the server, sorry.
Probably the only choice is to read it on the client side and transfer it manually to the server (GET/POST/AJAX).
Regards
Artur
You may also want to look at how to work with the back button and browser history at Malcan.
Just to rule out the possibility that you aren't actually trying to see the fragment on a GET/POST and really want to know how to access that part of a Uri object you have in your server-side code: it is available as Uri.Fragment (MSDN docs).
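For completeness, a tiny sketch of that property (the URL is just an example):
Uri uri = new Uri("http://example.com/page.aspx?id=1#section2");
string fragment = uri.Fragment;   // "#section2"; an empty string if there is no fragment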
Possible solution for GET requests:
New Link format: http://example.com/yourDirectory?hash=video01
Call this function near the top of your controller or http://example.com/yourDirectory/index.php:
function redirect()
{
    if (!empty($_GET['hash'])) {
        /* Sanitize & validate $_GET['hash'].
           If valid: return the string.
           If invalid: return empty or false. */
        $validHash = sanitizeAndValidateHashFunction($_GET['hash']);
        if (!empty($validHash)) {
            $url = './#' . $validHash;
        } else {
            $url = '/your404page.php';
        }
        header("Location: $url");
        exit; // stop further processing once the redirect header is sent
    }
}
I'm trying to web-crawl a site that uses php sessions via cookies. It is a good-ol' Squirrelmail webmail server.
I saw a couple of posts like this one, but it's not working for me.
When I reach the part where the cookies are sent by the host, I try to retrieve them using:
HttpWebResponse rs = (HttpWebResponse)rq.GetResponse();
CookieCollection cc = new CookieCollection();
cc.Add(rs.Cookies);
But rs.Cookies comes back empty. However, there are Set-Cookie headers on the response, which I try to use as a guide to build actual cookies, like this:
for (int i = 0; i < rs.Headers.Count; i++)
{
    if (rs.Headers.Keys[i].ToLower().Contains("cookie"))
    {
        string val = rs.Headers[i];
        string[] vv = val.Split(";=,".ToCharArray());
        Cookie co = new Cookie(vv[0], vv[1]);
        // I know this is not the cleanest way to do it.
        // I've tried to manually set different values for
        // co.Domain, co.Path and co.HttpOnly, just to get a working
        // example. I tried different alternatives, but it doesn't
        // seem to change anything.
        cc.Add(co);
    }
}
Next, I send the cookies to request the next page, which is nothing but a frameset. The fact that I reach the frameset means I've been successfully authenticated and the session cookie is working. However, when I request one of the frames, I get an authentication-error web page. I've done my research, and the cookies do not change in the meantime. What may be going wrong?
Some may wonder why I'm trying to access webmail when there is POP/SMTP to do a cleaner job. The answer is that this is just a first example to learn the basics; I don't really care what the site is as long as I can successfully manage sessions.
I don't think posting all the code is a good idea yet, since it is a bit messy and long; I planned to clean it up once it worked (I'll post it if you think it's worth the confusion). Moreover, I think I may have a conceptual error related to the frames that may be the key to solving the problem.
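One thing worth checking, as a sketch rather than a definitive answer: with HttpWebRequest, rs.Cookies is only populated when the request was given a CookieContainer before GetResponse(), and reusing one container across all requests is the usual way to keep a PHP session cookie alive. The URLs below are placeholders, not actual Squirrelmail paths:
// Share one CookieContainer so the session cookie returned by the login page
// is replayed automatically on later requests.
CookieContainer jar = new CookieContainer();

HttpWebRequest login = (HttpWebRequest)WebRequest.Create("http://example.com/webmail/login.php");
login.CookieContainer = jar;   // without this, rs.Cookies stays empty
using (HttpWebResponse rs = (HttpWebResponse)login.GetResponse()) { }

HttpWebRequest frame = (HttpWebRequest)WebRequest.Create("http://example.com/webmail/frame.php");
frame.CookieContainer = jar;   // same container, so the session cookie is sent
using (HttpWebResponse rs2 = (HttpWebResponse)frame.GetResponse()) { }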
We ran a Fortify scan on our ASP.NET application and found many header manipulation issues. All of the issues point to Response.Redirect(). Please have a look at the code below, where I encoded the parameters. Even then, the code is flagged as a header manipulation issue.
string url = string.Empty;
int iCount = 0;
foreach (string Name in Request.QueryString.Keys)
{
    iCount++;
    if (iCount > 1)
    {
        url += "&";
    }
    url += Name;
    if (Request.Params[Name] != null)
    {
        url += "=" + AntiXss.UrlEncode(Request.Params[Name]);
    }
}
Response.Redirect(Server.UrlPathEncode(page.root) + "/Test.aspx?" + url);
Can somebody let me know what else needs to change here to resolve the issue?
Take off the Server.UrlPathEncode(page.root) portion and use Server.Transfer() instead of Response.Redirect().
Server.Transfer() transfers the user to another page on the same site and poses little to no danger of accidentally directing someone to another site.
Response.Redirect() is good for when you want to redirect someone to another site.
Also, Fortify doesn't tend to like Request.Params[] due to its possible ambiguity. A careful attacker may be able, on some servers, to send a UTF-7 or non-printing version of a name as one of the request variables and let the name of the variable contain the actual XSS injection, or overwrite the GET-request value with a cookie of the same name. Make sure both the name and value are htmlencoded, and consider using Request.QueryString[parametername] instead of Request.Params[parametername] to avoid more issues with Fortify.
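Putting those suggestions together, one possible reworking of the loop (a sketch, keeping the original AntiXss helper and reading from Request.QueryString) would encode both the name and the value:
StringBuilder sb = new StringBuilder();

foreach (string name in Request.QueryString.Keys)
{
    if (sb.Length > 0)
    {
        sb.Append("&");
    }

    // Encode the name as well as the value so neither can carry an injection.
    sb.Append(AntiXss.UrlEncode(name));

    string value = Request.QueryString[name];
    if (value != null)
    {
        sb.Append("=").Append(AntiXss.UrlEncode(value));
    }
}

string url = sb.ToString();
This may not silence Fortify entirely on its own, since it still sees request data flowing into a redirect, but it addresses the encoding concerns described above.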
Hopefully this gets you past your Fortify issues!
It appears that Fortify perceives Name as user-defined, and that will trigger the "Manipulation" error. If that's the case, try to use a predefined list if possible.
I have a C# property, CategoryID, and I want to set its value in JavaScript.
I am trying to set the value CategoryID like below:
var sPath = window.location.pathname;
var catId = null;
var sPage = sPath.substring(sPath.lastIndexOf('/') + 1);
if (sPage == 'xyz.aspx')
{
    <%=CommonUtility.CategoryID=4%>;
}
else if (sPage == 'zxy.aspx')
{
    <%=CommonUtility.CategoryID=5%>;
}
But with this method I always get the value CategoryID = 5 (the one in the else block). Please suggest how I can set the property value based on the condition.
You can't set a C# property from client-side code (JS). You may use AJAX to do some work, but you simply can't manipulate server-side code like that.
edit:
If you still wonder how it's possible that you get a value, see Mike's explanation of that fact. But the truth remains: you can't; it's impossible. If you want the longer explanation, look at how ASP.NET actually works, its lifecycle, etc. A simple way of putting it would be like this:
A user sends a request to the server using his browser. The server receives it, creates the requested page, instantiates the needed classes, etc. Then it gets parsed and sent to the client as HTML (and other resources of course, like images, CSS...). The instantiated page class CAN'T be accessed and modified afterwards by the client, because it has already been flushed by the server. Every request creates a new instance. There's no way for JS to interact with C# directly anyway. Can you imagine what it would be like if you could use some JS to modify C# on a remote server? It doesn't make sense at all.
You cannot set properties in your code-behind using client-side script this way. The only way to do something like that would be to use AJAX to send data to your server, although I'm pretty sure that's not appropriate for your case.
When you call <%=CommonUtility.CategoryID = 4%>, the server actually executes that statement while it is parsing the page, before it sends it to the client. The reason the property value is 5 is that both of those statements get executed, regardless of the logic in your JavaScript if block. Your client-side code will not actually be executed by the browser until the server has already parsed both of those tags, at which point it would be too late to accomplish what you want anyway.
Is there any reason that you simply can't do all of this in the code-behind on page load? Is there some reason you feel like this has to be handled in JS?
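For example, a rough code-behind sketch of that idea; it assumes CommonUtility.CategoryID can be set from the page, as in the question:
protected void Page_Load(object sender, EventArgs e)
{
    // Decide the category on the server, based on the requested page,
    // instead of trying to set it from client-side script.
    string path = Request.Path.ToLower();

    if (path.EndsWith("xyz.aspx"))
    {
        CommonUtility.CategoryID = 4;
    }
    else if (path.EndsWith("zxy.aspx"))
    {
        CommonUtility.CategoryID = 5;
    }
}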
Edit:
If you are unable to access the code-behind file (.aspx.vb or .aspx.cs), then simply use a server script block at the top of your .aspx page:
<%
    If (Request.Path.ToLower().Contains("xyz.aspx")) Then
        CommonUtility.CategoryId = 4
    ElseIf (Request.Path.ToLower().Contains("zxy.aspx")) Then
        CommonUtility.CategoryId = 5
    End If
%>
You can't set the C# variable from client script, because all the server code runs first, then the page is sent to the browser.
The client code will end up looking like this:
var sPath = window.location.pathname;
var catId = null;
var sPage = sPath.substring(sPath.lastIndexOf('/') + 1);
if (sPage == 'xyz.aspx')
{
    4;
}
else if (sPage == 'zxy.aspx')
{
    5;
}
I would like to take the original URL, truncate the query string parameters, and return a cleaned-up version of the URL. I would like this to happen across the whole application, so doing it in global.asax would be ideal. Also, I think a 301 redirect would be in order as well.
For example:
in: www.website.com/default.aspx?utm_source=twitter&utm_medium=social-media
out: www.website.com/default.aspx
What would be the best way to achieve this?
System.Uri is your friend here. This has many helpful utilities on it, but the one you want is GetLeftPart:
string url = "http://www.website.com/default.aspx?utm_source=twitter&utm_medium=social-media";
Uri uri = new Uri(url);
Console.WriteLine(uri.GetLeftPart(UriPartial.Path));
This gives the output: http://www.website.com/default.aspx
[The Uri class does require the protocol, http://, to be specified]
GetLeftPart basically says "get the left part of the URI up to and including the part I specify". This can be Scheme (just the http:// bit), Authority (the www.website.com part), Path (the /default.aspx) or Query (the query string).
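To illustrate, for the example URL above the different UriPartial values give roughly the following (a quick sketch):
Uri uri = new Uri("http://www.website.com/default.aspx?utm_source=twitter&utm_medium=social-media");

Console.WriteLine(uri.GetLeftPart(UriPartial.Scheme));    // http://
Console.WriteLine(uri.GetLeftPart(UriPartial.Authority)); // http://www.website.com
Console.WriteLine(uri.GetLeftPart(UriPartial.Path));      // http://www.website.com/default.aspx
Console.WriteLine(uri.GetLeftPart(UriPartial.Query));     // the full URL including the query string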
Assuming you are on an aspx web page, you can then use Response.Redirect(newUrl) to redirect the caller.
Here is a simple trick
Dim uri = New Uri(Request.Url.AbsoluteUri)
Dim reqURL = uri.GetLeftPart(UriPartial.Path)
Here is a quick way of getting just the root (scheme and host), without the path and query.
string path = Request.Url.AbsoluteUri.Replace(Request.Url.PathAndQuery,"");
This may look a little better.
string rawUrl = String.Concat(this.GetApplicationUrl(), Request.RawUrl);
if (rawUrl.Contains("/post/"))
{
    bool hasQueryStrings = Request.QueryString.Keys.Count > 1;
    if (hasQueryStrings)
    {
        Uri uri = new Uri(rawUrl);
        rawUrl = uri.GetLeftPart(UriPartial.Path);
        HtmlLink canonical = new HtmlLink();
        canonical.Href = rawUrl;
        canonical.Attributes["rel"] = "canonical";
        Page.Header.Controls.Add(canonical);
    }
}
Followed by a function to properly fetch the application URL.
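That helper isn't shown here; one plausible sketch of it, purely an assumption, would build the URL from the current request's scheme, host, and application path:
private string GetApplicationUrl()
{
    // Hypothetical helper: scheme + host (+ port if non-default) + application root, no trailing slash.
    string root = Request.Url.GetLeftPart(UriPartial.Authority);
    return root + Request.ApplicationPath.TrimEnd('/');
}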
Works perfectly.
I'm guessing that you want to do this because you want your users to see pretty looking URLs. The only way to get the client to "change" the URL in its address bar is to send it to a new location - i.e. you need to redirect them.
Are the query string parameters going to affect the output of your page? If so, you'll have to look at how to maintain state between requests (session variables, cookies, etc.) because your query string parameters will be lost as soon as you redirect to a page without them.
There are a few ways you can do this globally (in order of preference):

- If you have direct control over your server environment, then a configurable server module like ISAPI_ReWrite or the IIS 7.0 URL Rewrite Module is a great approach.
- A custom IHttpModule is a nice, reusable roll-your-own approach.
- You can also do this in the global.asax, as you suggest.
You should only use the 301 response code if the resource has indeed moved permanently. Again, this depends on whether your application needs to use the query string parameters. If you use a permanent redirect a browser (that respects the 301 response code) will skip loading a URL like .../default.aspx?utm_source=twitter&utm_medium=social-media and load .../default.aspx - you'll never even know about the query string parameters.
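If you do go the global.asax route with a permanent redirect, a rough sketch of the idea follows; it assumes, for illustration only, that every GET request with a query string should be redirected, which you would want to narrow down to the parameters you actually care about:
// In Global.asax
protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpRequest request = HttpContext.Current.Request;

    // Assumption: strip the query string from every GET request that has one.
    if (request.HttpMethod == "GET" && request.Url.Query.Length > 1)
    {
        string cleanUrl = request.Url.GetLeftPart(UriPartial.Path);

        HttpContext.Current.Response.StatusCode = 301;
        HttpContext.Current.Response.AddHeader("Location", cleanUrl);
        HttpContext.Current.Response.End();
    }
}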
Finally, you can use POST method requests. This gives you clean URLs and lets you pass parameters in, but will only work with <form> elements or requests you create using JavaScript.
Take a look at the UriBuilder class. You can create one with a url string, and the object will then parse this url and let you access just the elements you desire.
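For instance, a small sketch of reading the pieces and clearing just the query (the URL is the example from the question):
UriBuilder builder = new UriBuilder("http://www.website.com/default.aspx?utm_source=twitter&utm_medium=social-media");

Console.WriteLine(builder.Host);     // www.website.com
Console.WriteLine(builder.Path);     // /default.aspx
Console.WriteLine(builder.Query);    // ?utm_source=twitter&utm_medium=social-media

builder.Query = string.Empty;        // clear the query string, keep everything else
Console.WriteLine(builder.Uri);      // http://www.website.com/default.aspx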
After completing whatever processing you need to do on the query string, just split the url on the question mark:
Dim _CleanUrl As String = Request.Url.AbsoluteUri.Split("?"c)(0)
Response.Redirect(_CleanUrl)
Granted, my solution is in VB.NET, but I'd imagine that it could be ported over pretty easily. And since we are only looking for the first element of the split, it even "fails" gracefully when there is no querystring.