What is the difference between maxAllowedContentLength and maxRequestLength - c#

I looked online and couldn't find a proper explanation.
Link I checked: Difference
This link says:
If you are trying to upload large files (like images or documents) you
need to be aware that you may need to adjust your maxRequestLength.
Then if files are really big you may need to adjust the
maxAllowedContentLength.
But both sentences mean the same thing and I am confused.
Another link: Difference
This says
The maxRequestLength indicates the maximum file upload size supported
by ASP.NET, the maxAllowedContentLength specifies the maximum length
of content in a request supported by IIS. Hence, we need to set both
maxRequestLength and maxAllowedContentLength values to upload large
files.
My Question is: If I have a file upload of 10 GB, is my content 10 GB or is my file size 10 GB? I don't understand the difference between the size of the file being uploaded and the content size.
Bottom Line: Please tell me in layman's terms how these two parameters come into the picture if I have a 10 GB file upload.

A request consists of headers and a body (the body carries the encoded content of the file in your case). So the request length is the total size of the request, and the content length is the size in bytes of the body (which is likely more than the size of the data you are sending).
Fake sample:
User-Agent: Bob the builder the 4th
Authorization: hereIcome
Content-Length: 4
Content-Encoding: Base64

BEEF
So the request length here is about 100, the content length is just 4 (the length of "BEEF"), but the actual data is only 3 bytes (FromBase64String("BEEF") gives 0x04 0x44 0x45).
For huge files the size of the headers can be ignored, and both maxRequestLength and maxAllowedContentLength can be set to the same very high value. Depending on the encoding used to send the files, the values may need to be some multiple of the maximum file size.
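To illustrate the multiplier point: Base64, a common encoding for file payloads, inflates the data by roughly 4/3, so the limits have to allow for the encoded size, not the raw file size. A minimal C# sketch (the 3 MB array is just a stand-in for a file):

using System;

class Program
{
    static void Main()
    {
        byte[] raw = new byte[3_000_000];              // a 3 MB "file"
        string encoded = Convert.ToBase64String(raw);
        Console.WriteLine(raw.Length);                 // 3000000
        Console.WriteLine(encoded.Length);             // 4000000, i.e. 4/3 of the raw size
    }
}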

These settings differ in both semantics and usage.
maxAllowedContentLength
This is an IIS-specific setting. Any request you send is going to be handled by IIS first, irrespective of whether it is going to be handled by your application or any other. So, if you imagine the web server as a building, this is the entry gate to the building. And as mentioned by @Alexei, this considers only the content or payload size and is measured in bytes. If you send a request whose payload size exceeds this limit, you are going to get an HTTP 404.13 error response (HTTP response 404 with a subcode of 13; you can check the different IIS status codes in this link).
maxRequestLength
In comparison, maxRequestLength is an ASP.NET-specific setting, which defines the buffering threshold of the input stream. In the building example, this is the door of an apartment, and hence is apartment-specific. So your request has to fit through both the building door and the apartment door. This setting considers the entire request length, not just the payload, and is measured in kilobytes (KB). If your request passes the IIS setting but fails here due to size, you will get an HTTP 500 error.
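For reference, here is a minimal web.config sketch showing where each setting lives; the values (roughly 1 GB) are placeholders, and note the unit difference: maxRequestLength is in kilobytes while maxAllowedContentLength is in bytes.

<configuration>
  <system.web>
    <!-- ASP.NET limit, in kilobytes: 1048576 KB = 1 GB -->
    <httpRuntime maxRequestLength="1048576" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- IIS limit, in bytes: 1073741824 bytes = 1 GB -->
        <requestLimits maxAllowedContentLength="1073741824" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>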

Related

aspnet:MaxJsonDeserializerMembers vs maxRequestLength

I am running into errors like "The JSON request was too large to be deserialized."
A quick search on Stack Overflow tells you that you should set the appSetting aspnet:MaxJsonDeserializerMembers higher to fix the issue. However, the MSDN documentation on the appSettings says:
Caution
Setting this attribute to too large a number can pose a security risk.
I would expect that you are cautioned against setting this value too high in order to prevent anyone from submitting large payloads that could impact your system. However, given that I am already setting maxRequestLength to a large number, will changing the aspnet:MaxJsonDeserializerMembers value have any other security implications?
I do not see how 1001 small JSON members could pose any more of a security threat than a single large JSON object.
ASP.NET applications reject requests that have more than 1000 of these elements.
https://support.microsoft.com/en-us/kb/2661403
The Microsoft security update that security bulletin MS11-100 addresses changes the default maximum number of form keys, files, and JSON members that ASP.NET will accept in a request to 1,000. This change was made to address the Denial of Service vulnerability that the Microsoft security bulletin MS11-100 documents.
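If you do decide to raise the limit, the KB article above documents the setting; a minimal web.config sketch (the value 2000 is just an example):

<configuration>
  <appSettings>
    <!-- Raises the default cap of 1,000 form keys / files / JSON members -->
    <add key="aspnet:MaxJsonDeserializerMembers" value="2000" />
  </appSettings>
</configuration>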

HTTP Error 414. The request URL is too long. asp.net

I'm getting the error "HTTP Error 414. The request URL is too long." From the following article, I understand that this is due to a very long query string:
http://www.mytecbits.com/microsoft/iis/query-string-too-long
In web.config, I have maxQueryStringLength="2097151". Is this the maximum value?
In order to solve this problem, should I set maxUrl in web.config? If so, what's the maximum value supported?
What should I do to fix this error?
This error is actually thrown from http.sys, not from IIS. The error gets thrown before the request is passed along to IIS in the request-handling pipeline.
To verify this, you can check the Server header value in the HTTP response headers, as per https://stackoverflow.com/a/32022511/12484.
To get http.sys to accept longer request URLs without throwing the HTTP 414 error, in the Windows Registry on the server PC, under Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters, create a DWORD value named MaxFieldLength with a sufficiently large value, e.g. 65535.
Reference: Http.sys registry settings for Windows
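If you prefer to script the change rather than edit the registry by hand, here is a minimal C# sketch using Microsoft.Win32.Registry; it must run elevated, and http.sys only picks up the new value after the HTTP service is restarted (or the machine rebooted):

using Microsoft.Win32;

class Program
{
    static void Main()
    {
        // Requires administrator rights.
        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(
            @"SYSTEM\CurrentControlSet\Services\HTTP\Parameters"))
        {
            // Maximum size, in bytes, of each individual HTTP header line (and the URL).
            key.SetValue("MaxFieldLength", 65535, RegistryValueKind.DWord);
        }
    }
}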
If you decide to make this change, then obviously it’ll need to be made in all environments (including all production server(s)) -- not just on your local dev PC.
Also, whatever script and/or documentation your team uses to set up new server instances will need to be updated to include this registry setting, so that your team doesn’t forget to apply this setting 18 months from now when setting up a new production server.
Finally, be aware that making this change could have adverse security consequences for all applications running on your server, as a large HTTP request submitted by an attacker won't be rejected early in the pipeline as it normally would be.
As an alternative to making this change to bypass the http.sys security, consider changing the request to accept HTTP POST instead of HTTP GET, and put the parameters into the POST request body instead of into a long URL. For more discussion on this, see question Design RESTful GET API with a long list of query parameters.
As described in this answer to What is the maximum length of a URL in different browsers?, the allowed length of a URL depends on a combination of browser and server, so it's hard to say exactly how long a URL can be. That answer recommends staying below 2,000 characters. I do not know why your query string is so long. Can you shorten it? It's hard to give you any recommendations without knowing more about the solution and your query string.
Generally, URLs have their own length limits, and raising this value may solve the problem for a while, but bear in mind that for long-URL situations the best practice is to work with forms. To be specific, it is better to use POST actions instead of GET.
Just to complement: if you submit massive parameters using an AJAX request and receive the 414 error, change the dataType property to JSON and submit it as a POST request.
This resolved my problem.

Communicating with a HTTP server

I'm currently trying to program my own HttpWebRequest class. I've already written the code that sends the header and the body of the request to the server and awaits a response. However, I am unsure which charset I should use for the header.
I've also been wondering what would be a good way of processing the response (header + body). Should I try to decode all received data into a string, or should I do it differently? I was thinking of splitting the header from the body at the blank line (two consecutive carriage-return/line-feed pairs) that separates those two parts. Then I could decode the header and leave the body for later, when I know its charset.
So my questions in short:
What charset does HTTP use for its headers?
What's a good way of processing the response?
First, I would recommend that you become intimately familiar with RFC 2616, which is the RFC for the HTTP/1.1 protocol.
In that RFC you will find the following statement:
The TEXT rule is only used for descriptive field contents and values that are not intended to be interpreted by the message parser. Words of *TEXT MAY contain characters from character sets other than ISO-8859-1 [22] only when encoded according to the rules of RFC 2047 [14].
The headers should use ISO-8859-1 encoding unless encoded using the MIME encoding outlined in RFC-2047.
As for the parsing of the response, that really depends on the message. Personally, I would process the response based on the BNF defined for HTTP: as I identify tokens that I recognize, I would update the state of the parser to process the rest of the response accordingly. For example, as the response data is processed you might find that the response is a JPG image and that the content length is X, so you can set up the appropriate memory stream to read the content into and then create an Image, etc.
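As a concrete starting point, here is a minimal C# sketch of the header/body split discussed above; it reads from the response stream one byte at a time until it sees the blank line (CRLF CRLF) that ends the headers, decodes the header bytes as ISO-8859-1, and leaves the stream positioned at the first body byte:

using System;
using System.IO;
using System.Text;

static class HttpReader
{
    // Returns the raw header block; the stream is left at the first body byte.
    public static string ReadHeaders(Stream stream)
    {
        var buffer = new MemoryStream();
        int b;
        while ((b = stream.ReadByte()) != -1)
        {
            buffer.WriteByte((byte)b);
            byte[] data = buffer.GetBuffer();
            long n = buffer.Length;
            // The header section ends with an empty line: CRLF CRLF.
            if (n >= 4 && data[n - 4] == (byte)'\r' && data[n - 3] == (byte)'\n'
                       && data[n - 2] == (byte)'\r' && data[n - 1] == (byte)'\n')
            {
                return Encoding.GetEncoding("ISO-8859-1")
                               .GetString(data, 0, (int)n - 4);
            }
        }
        throw new IOException("Connection closed before the headers were complete.");
    }
}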

Compressing string

I tried to send some data (varying between 1 and 4 MB) in HTTP headers, but it returned the following error in ajax.response:
HTTP Error 400. The size of the request headers is too long.
Is there anything I can do, or is the only method to compress the data? If yes, how do I do that?
Any help is appreciated. Thanks in advance!
If you're sending around that much data, put it in the body of the request (e.g., in an HTTP POST), not in the headers. Increasing the header size limit (as cwallenpoole suggests) will still cause problems for users who are behind web proxies.
Most HTTP servers accept about 8-16 KB of headers. Therefore, if your data is too large, just use the POST method to send it.
Personally, I would just up the size of the acceptable header. MS suggests the same and gives instructions on how to raise it to 16 MB if necessary (see MaxRequestBytes).
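For completeness, a minimal C# sketch of moving the data into a POST body with HttpClient; the URL and payload are placeholders:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            string payload = new string('x', 4 * 1024 * 1024);   // ~4 MB of data
            var content = new StringContent(payload, Encoding.UTF8, "application/json");
            HttpResponseMessage response =
                await client.PostAsync("https://example.com/api/data", content);
            Console.WriteLine(response.StatusCode);
        }
    }
}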

Unbuffered output from IHTTPHandler

I want to stream data from an IHttpHandler class. I'm loading a large number of rows from the DB, serializing and compressing them, then sending them down the wire. On the other end, I want my client to be able to decompress and deserialize the data before the server is even done serializing all the objects.
I'm using context.Response.OutputStream.Write to write my data, but it still seems like the output data is being put into a buffer before being sent to the client. Is there a way to avoid this buffering?
The Response.Flush method should send it down the wire; however, there are some exceptions. If IIS is using dynamic compression, that is, if it is configured to compress dynamic content, then IIS will not flush the stream. Then there is the whole 'chunked' transfer encoding. If you have not specified Content-Length, then the receiving end does not know how large the response body will be, which is accomplished with the chunked transfer encoding. Some HTTP servers require the client to send a TE request header containing the chunked keyword. Others just default to chunked when you begin writing bytes before the full length is specified; however, they do not do this if you have specified your own Transfer-Encoding response header.
With IIS 7 and compression disabled, Response.Flush should then always do the trick, right? Not really. IIS 7 can have many modules that intercept and interact with the request and response. I don't know if any are installed/enabled by default, but you should still be aware that they can affect your desired result.
... I'm loading a large number of rows from the DB, serializing, and compressing them, then sending them down the wire...
Curious that you are compressing this content. If you are using GZIP, then you will not be in control of when and how much data is sent by calling Flush. Additionally, using GZIP content means that the receiving end may also be unable to start reading data right away.
You may want to break the records into smaller, digestible chunks of 10, 50, or 100 rows. Compress each set and send it, then work on the next set of rows. Of course, you will then need to write something to the client so they know how big each compressed set of rows is and when they have reached the end; see http://en.wikipedia.org/wiki/Chunked_transfer_encoding for an example of how chunked transfer works.
You can use context.Response.Flush() or context.Response.OutputStream.Flush() to force buffered content to be written immediately.
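Putting the pieces together, here is a minimal sketch of an IHttpHandler that disables ASP.NET output buffering and sends each batch of rows as a length-prefixed GZIP block, flushing after every batch; LoadBatches is a hypothetical stand-in for your database/serialization code:

using System;
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using System.Text;
using System.Web;

public class StreamingHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.BufferOutput = false;    // turn off ASP.NET's output buffering
        context.Response.ContentType = "application/octet-stream";

        foreach (string batch in LoadBatches())   // hypothetical row source
        {
            byte[] compressed = Compress(Encoding.UTF8.GetBytes(batch));
            byte[] prefix = BitConverter.GetBytes(compressed.Length);
            // Length prefix first, so the client knows how many bytes to read next.
            context.Response.OutputStream.Write(prefix, 0, prefix.Length);
            context.Response.OutputStream.Write(compressed, 0, compressed.Length);
            context.Response.Flush();             // push this block down the wire
        }
    }

    private static byte[] Compress(byte[] data)
    {
        using (var ms = new MemoryStream())
        {
            using (var gzip = new GZipStream(ms, CompressionMode.Compress))
                gzip.Write(data, 0, data.Length);
            return ms.ToArray();
        }
    }

    private static IEnumerable<string> LoadBatches()
    {
        // Placeholder: yield serialized batches of, say, 100 rows each.
        yield return "...serialized rows 1-100...";
        yield return "...serialized rows 101-200...";
    }
}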
