I've encountered an issue with HttpWebRequest: if the URI is over 2048 characters long, the request fails with a 404 error, even though the server is perfectly capable of servicing a request with a URI that long. I know this because the same URI that causes an error when submitted via HttpWebRequest works fine when pasted directly into a browser's address bar.
My current workaround is to allow users to set a compatibility flag indicating that it's safe to send the parameters as a POST request instead when the URI would be too long. This isn't ideal, though, since the protocol I'm using is RESTful and GET should be used for queries, and there's no guarantee that other implementors of the protocol will accept POSTed queries.
Is there another class in .NET with functionality equivalent to HttpWebRequest that I could use, one that doesn't suffer from this URI length limit?
I'm aware of WebClient, but I don't really want to use it because I need full control over the HTTP headers, which WebClient restricts.
Edit
Because Shoban asked for it:
http://localhost/BBCDemo/sparql/?query=PREFIX+rdf%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F1999%2F02%2F22-rdf-syntax-ns%23%3E%0D%0APREFIX+rdfs%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2000%2F01%2Frdf-schema%23%3E%0D%0APREFIX+xsd%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2001%2FXMLSchema%23%3E%0D%0APREFIX+skos%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2004%2F02%2Fskos%2Fcore%23%3E%0D%0APREFIX+dc%3A+%3Chttp%3A%2F%2Fpurl.org%2Fdc%2Felements%2F1.1%2F%3E%0D%0APREFIX+po%3A+%3Chttp%3A%2F%2Fpurl.org%2Fontology%2Fpo%2F%3E%0D%0APREFIX+timeline%3A+%3Chttp%3A%2F%2Fpurl.org%2FNET%2Fc4dm%2Ftimeline.owl%23%3E%0D%0ASELECT+*+WHERE+{%0D%0A++++%3Chttp%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fb00n4d6y%23programme%3E+dc%3Atitle+%3Ftitle+.%0D%0A++++%3Chttp%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fb00n4d6y%23programme%3E+po%3Ashort_synopsis+%3Fsynopsis-short+.%0D%0A++++%3Chttp%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fb00n4d6y%23programme%3E+po%3Amedium_synopsis+%3Fsynopsis-med+.%0D%0A++++%3Chttp%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fb00n4d6y%23programme%3E+po%3Along_synopsis+%3Fsynopsis-long+.%0D%0A++++%3Chttp%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fb00n4d6y%23programme%3E+po%3Amasterbrand+%3Fchannel+.%0D%0A++++%3Chttp%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fb00n4d6y%23programme%3E+po%3Agenre+%3Fgenre+.%0D%0A++++%3Fchannel+dc%3Atitle+%3Fchanneltitle+.%0D%0A++++OPTIONAL+{%0D%0A++++++++%3Chttp%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fb00n4d6y%23programme%3E+po%3Abrand+%3Fbrand+.%0D%0A++++++++%3Fbrand+dc%3Atitle+%3Fbrandtitle+.%0D%0A++++}%0D%0A++++OPTIONAL+{%0D%0A++++++++%3Chttp%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fb00n4d6y%23programme%3E+po%3Aversion+%3Fver+.%0D%0A++++++++%3Fver+po%3Atime+%3Finterval+.%0D%0A++++++++%3Finterval+timeline%3Astart+%3Fstart+.%0D%0A++++++++%3Finterval+timeline%3Aend+%3Fend+.%0D%0A++++}%0D%0A}&default-graph-uri=&timeout=30000
That is the following query, encoded onto the query string:
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX dc: <http://purl.org/dc/elements/1.1/>
PREFIX po: <http://purl.org/ontology/po/>
PREFIX timeline: <http://purl.org/NET/c4dm/timeline.owl#>
SELECT * WHERE {
<http://www.bbc.co.uk/programmes/b00n4d6y#programme> dc:title ?title .
<http://www.bbc.co.uk/programmes/b00n4d6y#programme> po:short_synopsis ?synopsis-short .
<http://www.bbc.co.uk/programmes/b00n4d6y#programme> po:medium_synopsis ?synopsis-med .
<http://www.bbc.co.uk/programmes/b00n4d6y#programme> po:long_synopsis ?synopsis-long .
<http://www.bbc.co.uk/programmes/b00n4d6y#programme> po:masterbrand ?channel .
<http://www.bbc.co.uk/programmes/b00n4d6y#programme> po:genre ?genre .
?channel dc:title ?channeltitle .
OPTIONAL {
<http://www.bbc.co.uk/programmes/b00n4d6y#programme> po:brand ?brand .
?brand dc:title ?brandtitle .
}
OPTIONAL {
<http://www.bbc.co.uk/programmes/b00n4d6y#programme> po:version ?ver .
?ver po:time ?interval .
?interval timeline:start ?start .
?interval timeline:end ?end .
}
}
the protocol I'm using is RESTful and GET should be used for queries.
There's no reason POST can't also be used for queries; for really long request data you have to use it, since very long URIs aren't universally supported and never have been. This is one area where HTTP does not live up to the REST ideal.
The reason POST generally isn't used on a plain-HTML level is to stop the browser prompting for reloads, and to promote e.g. bookmarking. But with HttpWebRequest you don't have either of those concerns, so go ahead and POST it. Web applications should use a parameter or a URI path part to distinguish write requests from queries, not merely the request method. (Of course, a write request arriving via GET should still be denied.)
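For what it's worth, here's a minimal sketch of POSTing the query instead, assuming the endpoint accepts a form-encoded body (the SPARQL Protocol does define this form); the endpoint URL and query below are placeholders:
using System;
using System.IO;
using System.Net;
using System.Text;

class SparqlPostExample
{
    static void Main()
    {
        // Placeholder endpoint and query; substitute your own.
        string endpoint = "http://localhost/BBCDemo/sparql/";
        string sparql = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";

        // The parameters go in the request body rather than the URI,
        // so URI length is no longer a concern.
        string body = "query=" + Uri.EscapeDataString(sparql) +
                      "&default-graph-uri=&timeout=30000";
        byte[] bytes = Encoding.UTF8.GetBytes(body);

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(endpoint);
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        request.ContentLength = bytes.Length;
        request.Accept = "application/sparql-results+xml"; // full header control is retained

        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(bytes, 0, bytes.Length);
        }
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}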
I don't think HttpWebRequest is actually incompatible with GET URLs of the size you are talking about. I say this based on two things:
In my own work I use HttpWebRequest to send HTTP GET requests longer than 2048 characters without trouble. I'm not sure what my longest ones are, but we're talking 10,000+ characters. (This is primarily between a web application and an instance of Solr running under Tomcat.)
.NET does have some limits on GET URL lengths, but the ones I'm aware of are much higher than 2048 characters. For example, I learned today from my profiler that WebRequest.Create(string url) calls the Uri class constructor, and that is documented to throw a UriFormatException if "the length of uriString exceeds 65534 characters."
I'm not sure where your problem might be, if it's not HttpWebRequest itself. Do you know under what conditions your web service returns HTTP 404 (i.e. "not found")? (I assume the 404 is coming from your web service, rather than being faked inside the depths of .NET.) I'd also want to double-check that the address you're pasting into the browser is actually the same one being sent by .NET; as feroze suggested, you should use a network sniffing tool for this. If the two addresses are the same, then next compare how the HTTP headers vary between the .NET case and the browser case. (Incidentally, I personally find Fiddler a bit handier than Wireshark for HTTP debugging tasks along these lines.)
See also this somewhat related question: How does HttpWebRequest differ (functional) from pasteing a URL into an address bar?
Here's a snippet that constructs HttpWebRequest instances with bigger and bigger URL values until an exception is thrown:
using System;
using System.Net;
using System.Text;
...
// Grow the URL one character at a time until the runtime complains.
StringBuilder url = new StringBuilder("http://example.com?p=");
try
{
    for (int i = 1; i < Int32.MaxValue; i++)
    {
        url.Append("0");
        // Creating the request parses the string with the Uri class,
        // which is what enforces the overall length limit.
        HttpWebRequest request = HttpWebRequest.CreateHttp(url.ToString());
    }
}
catch (Exception ex)
{
    Console.Out.WriteLine("Error occurred at url length: " + url.Length);
    Console.Out.WriteLine(ex.GetType().ToString() + ": " + ex.Message);
    return;
}
Console.Out.WriteLine("Completed without error!");
On my machine (in LINQPad running .NET 4.5), this snippet outputs:
Error occurred at url length: 65520
System.UriFormatException: Invalid URI: The Uri string is too long.
Your query string is invalid according to RFC 3986: the '{' and '}' characters are not allowed in a URI and must be percent-encoded.
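As a hedged illustration, Uri.EscapeDataString will percent-encode the braces (and anything else outside the unreserved set) before the text goes onto the query string:
using System;

class EscapeExample
{
    static void Main()
    {
        // '{' and '}' are not unreserved characters, so they get percent-encoded.
        string pattern = "{ ?s ?p ?o }";
        Console.WriteLine(Uri.EscapeDataString(pattern));
        // Prints: %7B%20%3Fs%20%3Fp%20%3Fo%20%7D
    }
}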
Related
I'm getting the error "HTTP Error 414. The request URL is too long." From the following article, I understand that this is due to a very long query string:
http://www.mytecbits.com/microsoft/iis/query-string-too-long
In web.config, I have maxQueryStringLength="2097151". Is this the maximum value?
In order to solve this problem, should I set maxUrl in web.config? If so, what's the maximum value supported?
What should I do to fix this error?
A GET request should never be this long. You need to change it to the POST method instead, since POST was designed to transmit blocks of data such as forms.
An excerpt from RFC 2616, Hypertext Transfer Protocol -- HTTP/1.1:
The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line. POST is designed to allow a uniform method to cover the following functions:
- Annotation of existing resources;
- Posting a message to a bulletin board, newsgroup, mailing list, or similar group of articles;
- Providing a block of data, such as the result of submitting a form, to a data-handling process;
- Extending a database through an append operation.
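That said, if the long GET genuinely can't be avoided, both the ASP.NET and the IIS request-filtering limits can be raised in web.config. A sketch, with illustrative values rather than recommendations:
<configuration>
  <system.web>
    <!-- ASP.NET-level limits, in characters (defaults: 2048 and 260) -->
    <httpRuntime maxQueryStringLength="32768" maxUrlLength="65536" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- IIS-level limits, in bytes (defaults: 2048 and 4096) -->
        <requestLimits maxQueryString="32768" maxUrl="65536" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>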
If I do this in .NET Core 3.1:
await new HttpClient().GetAsync("http://test.com/page?parameter=%2D%2E%5F%2E%2D");
then this happens:
GET http://test.com/page?parameter=-._.- HTTP/1.1
but this is what I want:
GET http://test.com/page?parameter=%2D%2E%5F%2E%2D HTTP/1.1
The background is that I get a signed URL from a third party and I need to use the URL as-is, not unescaped. I can find the resource with the unescaped URL, but the signature check fails on the other end because the URL they see in the request is not the URL that was signed.
I can paste the URL into any browser and get the resource, but the signature check fails when I do it programmatically in .NET Core 3.1.
The unescaping is documented behavior for the Uri class:
Escaped characters (also known as percent-encoded octets) that don't have a reserved purpose are decoded (also known as being unescaped). These unreserved characters include uppercase and lowercase letters (%41-%5A and %61-%7A), decimal digits (%30-%39), hyphen (%2D), period (%2E), underscore (%5F), and tilde (%7E).
I have tried solutions listed in these questions:
GETting a URL with an url-encoded slash. But the schemeSetting seems not to work in .NET Core 3.1, and neither does the workaround ForceCanonicalPathAndQuery.
How to make System.Uri not to unescape %2f (slash) in path?. Again, schemeSetting seems not to work in .NET Core 3.1, and neither does the workaround LeaveDotsAndSlashesEscaped.
So, does anyone know how I can use the signed URL as-is, without unescaping, on .NET Core 3.1?
So after fiddling around a bit I came up with this:
private static Uri CreateNonUnescapedUri(string url)
{
    // Construct the Uri from just the scheme and host (e.g. "http://test.com"),
    // so its internal flags say the string contains no characters that need unescaping
    int offset = url.IndexOf("://");
    offset = url.IndexOf('/', offset + 4);
    var uri = new Uri(url.Substring(0, offset));
    // Then overwrite the private field holding the URI string with the complete URL,
    // which would otherwise have been unescaped (requires using System.Reflection)
    typeof(Uri).GetField("_string", BindingFlags.Instance | BindingFlags.NonPublic).SetValue(uri, url);
    return uri;
}
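Hypothetical usage of the helper above (same example URL as earlier); assuming the hack behaves as described, the escaped query should go out on the wire untouched:
// Requires using System.Net.Http
var uri = CreateNonUnescapedUri("http://test.com/page?parameter=%2D%2E%5F%2E%2D");
await new HttpClient().GetAsync(uri); // GET http://test.com/page?parameter=%2D%2E%5F%2E%2D HTTP/1.1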
I tested it on 300 signed URLs.
Of course, changing the internal state of Uri, which is 5,600 lines of pure madness, is bound to fail in the future, but I need this working by Monday and this is what I've got. Let me know if anyone has a real solution.
Edit April 2022:
In .NET 6 there is a new constructor overload that will keep the original URL as-is, using UriCreationOptions:
var uri = new Uri("http://test.com/page?parameter=%2D%2E%5F%2E%2D",
new UriCreationOptions { DangerousDisablePathAndQueryCanonicalization = true });
I have no idea what's supposedly dangerous about it, though.
For .NET Core 3.1 I'm still using the hack above; I never did find a better solution.
I've been tasked with building a service that pulls information from a third-party API into our internal data warehouse. They have a GET request to pull the data I want, where you specify the parameters via query strings. E.g.:
http://www.api.com?parameter=firstname&parameter=surname
In my code the URL is over 3,600 characters long, as the requirement calls for 116 parameters.
My web request is generated using this code:
private HttpWebRequest GetWebRequest(string url, string type, int timeout)
{
    // Build the request against the configured base URL
    var httpWebRequest = (HttpWebRequest) WebRequest.Create(_baseUrl + url);
    httpWebRequest.Method = type;
    httpWebRequest.Timeout = timeout;
    httpWebRequest.ContentType = "application/json";
    httpWebRequest.Headers.Add("Authorization", "Bearer " + _token.access_token);
    httpWebRequest.ContentLength = 0;
    return httpWebRequest;
}
When I run the code I get back a WebException with the message "Unable to connect to the remote server" and an inner exception message of "No connection could be made because the target machine actively refused it <IP address>".
I have not included the entire URL in this post. I have found that if I copy and paste the URL into Postman and run the request, I get the response I expect, so I know the URL is formatted correctly. I have also discovered that if I cut the URL down to around 2,100 characters, the request then works.
I have been searching but have not found any definitive documentation suggesting there is a limit on URL length, and I cannot explain why the full URL works in Postman but not in a C# web request, or why shortening the URL makes the web request succeed!
If anyone has any ideas about this I'd be grateful.
An old post suggests that, depending on the server and client, the maximum request URL length is somewhere between 2 KB and 8 KB, which is consistent with your observations.
Postman is 'another' client, so it is quite possible that the request works there while it doesn't in your program. The bottom line is: you should not send such long requests via GET; you should POST them instead. If you have control over the service, you could change it so it supports POST too, if it doesn't already (documented or not).
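As a sketch of what POSTing could look like, assuming the service accepted a form-encoded body (the endpoint and parameter names below are stand-ins; duplicate keys are allowed):
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class PostInsteadOfGet
{
    static async Task Main()
    {
        // Stand-in parameters; a real call would carry all 116 of them.
        var parameters = new List<KeyValuePair<string, string>>
        {
            new KeyValuePair<string, string>("parameter", "firstname"),
            new KeyValuePair<string, string>("parameter", "surname"),
        };

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Authorization", "Bearer <token>");
            // FormUrlEncodedContent carries the parameters in the body,
            // so URL length is no longer an issue.
            HttpResponseMessage response = await client.PostAsync(
                "http://www.api.com/", new FormUrlEncodedContent(parameters));
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}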
I have an application that contains a button; clicking it opens a browser window using a URL with query string parameters (the URL of a page that I am coding).
Is there a way to ensure that the URL is coming from my application, and only from my application, and not from just anyone typing the URL manually into a web browser?
If not, what is the best way to ensure that a specific URL is coming from a specific application, and not manually entered into the address bar of a web browser?
I'm using ASP.NET.
You can check if the request was made from one of the pages of your application using:
Request.UrlReferrer != null && Request.UrlReferrer.ToString().Contains("mywebsite.com")
That's the simple way.
The secure way is to put a cookie on the client containing a value encrypted using a secure key or hashed using a secure salt. If the cookie is set to expire when the browser is closed, it should be very difficult for someone to forge.
Here's an example:
On the pages that would redirect to the page you are trying to protect:
HttpCookie cookie = new HttpCookie("SecureCheck");
//don't set the cookie's expiration so it's deleted when the browser is closed
cookie.Value = System.Web.Security.FormsAuthentication.HashPasswordForStoringInConfigFile(Session.SessionID, "SHA1");
Response.Cookies.Add(cookie);
On the page you are trying to protect:
//check that the cookie is present and has the correct value
HttpCookie check = Request.Cookies["SecureCheck"];
if (check == null || System.Web.Security.FormsAuthentication.HashPasswordForStoringInConfigFile(Session.SessionID, "SHA1") != check.Value)
    throw new Exception("Invalid request. Please access this page only from the application.");
//if we got this far the exception was not thrown and we are safe to continue
//insert whatever code here
There's no reliable way to do this for a GET request, nor is there any reason to try for a legitimate user. What you should do instead is ensure that, regardless of where the request comes from, the user has the proper permissions and access rights and that the session is protected appropriately (HTTP-only cookies, SSL, etc.). If the request is changing data, then it should be a POST, not a GET, and it should be accompanied by suitable cross-site request forgery prevention techniques (such as a cookie containing a nonce that is verified against a matching nonce on the form itself).
There is no way, other than rejecting the request if it doesn't contain a previously generated, random, one-time token in the parameters (stored in the session, for example).
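A minimal sketch of that token approach, assuming ASP.NET Web Forms with session state; the page name, parameter name, and session key are illustrative:
// Where the URL is generated: create a one-time token and remember it in the session.
string token = Guid.NewGuid().ToString("N");
Session["OneTimeToken"] = token;
string url = "/Protected.aspx?token=" + token;

// On the protected page: accept the request only if the token matches, then discard it.
string expected = (string)Session["OneTimeToken"];
Session.Remove("OneTimeToken"); // one-time use
if (expected == null || expected != Request.QueryString["token"])
{
    Response.StatusCode = 403;
    Response.End();
}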
While there is no 100% secure way to do this, what I am suggesting might at least take care of your basic needs.
Here is what you can do.
Client: add an HTTP header with an encoded string, such as a SHA-256 hash of some secret word. Then make your client always send a POST request instead of a GET.
Server: check the HTTP header for the encoded string, and also make sure the request is a POST.
This is not 100% secure, as of course someone smart enough could figure it out and still generate a request, but depending on your needs you might find it sufficient.
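A client-side sketch of that suggestion; the header name, secret, and endpoint are placeholders:
using System;
using System.Net;
using System.Security.Cryptography;
using System.Text;

class SignedHeaderClient
{
    static void Main()
    {
        // Placeholder secret shared between client and server.
        string secret = "some-shared-word";
        string hash;
        using (SHA256 sha256 = SHA256.Create())
        {
            hash = Convert.ToBase64String(
                sha256.ComputeHash(Encoding.UTF8.GetBytes(secret)));
        }

        var request = (HttpWebRequest)WebRequest.Create("http://example.com/endpoint");
        request.Method = "POST"; // always POST instead of GET, per the suggestion above
        request.Headers.Add("X-App-Check", hash); // placeholder header name
        request.ContentLength = 0;
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine((int)response.StatusCode);
        }
    }
}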
You can check the referrer and the user agent, add an additional header to the request, and always use POST requests to that URL. However, since HTTP is transmitted in plain text, somebody can always run Wireshark or Fiddler, capture the HTTP packets, and recreate the requests with your measures in place.
Pass parameters from your application so that you can verify them on the server side.
I suggest you use an encryption algorithm and generate random-looking text using a password (key). Then decrypt the parameter on the server side and check that it matches your expectation.
I may not be explaining this very clearly, sorry about that, but if I had to do something like this, I would do something similar to the above.
You can check the header in the MVC controller, e.g. Request.Headers["Accept"], to tell whether the request is coming from your AngularJS or jQuery code.
A sample AngularJS call looks like this:
var url = ServiceServerPath + urlSearchService + '/SearchCustomer?input=' + $scope.strInput;
$http({
method: 'GET',
url: url,
headers: {
'Content-Type': 'application/json'
},.....
And on the MVC [HttpGet] action method:
[HttpGet]
[PreventDirectAccess] //It is my custom filter
// ---> /Index/SearchCustomer?input={input}/
public string SearchCustomer(string input)
{
    try
    {
        // Check whether the request asks for JSON (the AJAX call) rather than HTML (a browser address bar)
        var acceptHeader = Request.Headers["Accept"];
        if (!acceptHeader.Contains("application/json")) return "Error Request on server!";
        var serialize = new JavaScriptSerializer();
        ISearch customer = new SearchCustomer();
        IEnumerable<ContactInfoResult> returnSearch = customer.GetCustomerDynamic(input);
        return serialize.Serialize(returnSearch);
    }
    catch (Exception)
    {
        throw;
    }
}
I need to check that our visitors are using HTTPS. In BasePage I check whether the request came in via HTTPS; if not, I redirect back with HTTPS. However, when someone comes to the site and this function is used, I get the error:
System.Web.HttpException: Server cannot append header after HTTP headers have been sent.
   at System.Web.HttpResponse.AppendHeader(String name, String value)
   at System.Web.HttpResponse.AddHeader(String name, String value)
   at Premier.Payment.Website.Generic.BasePage..ctor()
Here is the code I started with:
// If page not currently SSL
if (HttpContext.Current.Request.ServerVariables["HTTPS"].Equals("off"))
{
    // If SSL is required
    if (GetConfigSetting("SSLRequired").ToUpper().Equals("TRUE"))
    {
        string redi = "https://" +
            HttpContext.Current.Request.ServerVariables["SERVER_NAME"] +
            HttpContext.Current.Request.ServerVariables["SCRIPT_NAME"] +
            "?" + HttpContext.Current.Request.ServerVariables["QUERY_STRING"];
        HttpContext.Current.Response.Redirect(redi);
    }
}
I also tried adding this above it (a bit I used in another site for a similar problem):
// Wait until page is completely loaded before sending anything, since we re-build
HttpContext.Current.Response.BufferOutput = true;
I am using C# in .NET 3.5 on IIS 6.
Chad,
Did you try ending the response when you redirect? Response.Redirect has a second parameter that you set to true to tell the output to stop when the redirect header is issued. Or, if you are buffering the output, maybe you need to clear the buffer before doing the redirect so the headers are not sent out along with the redirect header.
Brian
This error usually means that something has been written to the response stream before the redirect is initiated, so you should make sure the test for HTTPS is done fairly early in the page lifecycle.
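For instance, here's a sketch of doing the check early in the page lifecycle and ending the response immediately (reusing the GetConfigSetting helper from the question):
protected override void OnPreInit(EventArgs e)
{
    base.OnPreInit(e);
    // Run the check before anything can be written to the response stream.
    if (HttpContext.Current.Request.ServerVariables["HTTPS"].Equals("off")
        && GetConfigSetting("SSLRequired").ToUpper().Equals("TRUE"))
    {
        string redi = "https://" +
            HttpContext.Current.Request.ServerVariables["SERVER_NAME"] +
            HttpContext.Current.Request.ServerVariables["SCRIPT_NAME"] +
            "?" + HttpContext.Current.Request.ServerVariables["QUERY_STRING"];
        // The second argument ends the response at once,
        // so no further headers or body can be written.
        HttpContext.Current.Response.Redirect(redi, true);
    }
}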