I am using an ASP.NET server on .NET 4.5, and the client is a C# HttpClient on the WinRT platform. I want to upload files using HttpClient, and I used the System.Net.Http.MultipartFormDataContent class to construct a valid HTTP request. Everything worked fine until I had a filename with DBCS characters.
MultipartFormDataContent correctly encodes the characters in the uploaded filename and sends both the filename and filename* keys in the Content-Disposition header, as per RFC 6266.
However, the ASP.NET server ignores filename* and reads only filename, so the file gets saved on the server with garbled characters.
Has someone else faced the same problem? How can I get filename* at the server end and ignore the filename key from the HttpRequest? [This would be my preferred solution.]
Alternatively, how can I force MultipartFormDataContent to send only the filename key and force-set a UTF-8 encoded string?
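One hedged sketch of that second alternative is to overwrite the generated header with TryAddWithoutValidation so that only a plain filename parameter is sent; fileBytes and fileName below are placeholders, and whether the handler actually lets a raw non-ASCII header value through is platform-dependent:

var form = new System.Net.Http.MultipartFormDataContent();
var fileContent = new System.Net.Http.ByteArrayContent(fileBytes); // placeholder byte[]
// Replace the generated filename/filename* pair with a single raw UTF-8 filename key.
fileContent.Headers.TryAddWithoutValidation(
    "Content-Disposition",
    "form-data; name=\"file\"; filename=\"" + fileName + "\"");
form.Add(fileContent); // Add() keeps a Content-Disposition that is already set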
Add a reference to System.Net.Http and do something like the following:
string suggestedFileName;
string dispositionString = response.GetResponseHeader("Content-Disposition");
if (dispositionString.StartsWith("attachment"))
{
    // The framework parser understands both the plain filename parameter and
    // the RFC 6266 filename* parameter.
    System.Net.Http.Headers.ContentDispositionHeaderValue contentDisposition =
        System.Net.Http.Headers.ContentDispositionHeaderValue.Parse(dispositionString);
    if (!string.IsNullOrEmpty(contentDisposition.FileNameStar))
    {
        // Prefer filename* when present; it carries the properly encoded name.
        suggestedFileName = contentDisposition.FileNameStar;
    }
    else
    {
        suggestedFileName = contentDisposition.FileName.Trim('"');
    }
}
Reference: ContentDispositionHeaderValue from Microsoft
Late to the party...
With control over both the client and server, my (dirty) workaround was simply to always Base64-encode the filename in HttpClient when creating the content, and decode it again on the server side.
This way you avoid having to deal with the aptly named FileNameStar.
You could also try manually detecting the FileName encoding and decode it on the server.
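A minimal sketch of the Base64 round trip described above, assuming you control both ends (form, fileContent, and postedFile are placeholders):

// Client (HttpClient): ship an ASCII-safe Base64 string as the filename.
string encodedName = Convert.ToBase64String(Encoding.UTF8.GetBytes(realFileName));
form.Add(fileContent, "file", encodedName);

// Server (ASP.NET): decode it back before using the name.
string decodedName = Encoding.UTF8.GetString(Convert.FromBase64String(postedFile.FileName));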
Related thread: System.Net.Mail and =?utf-8?B?XXXXX.... Headers
I have a simple GET request that returns a .txt file's content. You can try it out using a browser or Fiddler: http://134.17.24.10:8054/Documents/13f37b0c-4b04-42e1-a86b-3658a67770da/1.txt
The encoding of this file is cp-1251. When I try to get it, I see something like this:
[screenshot: the response text rendered with garbled characters]
How can I modify the request in C# to get the text in the encoding that I want? Or just return the raw bytes so I can convert them manually? Is that possible with HttpClient?
If you can't change the charset sent by the server, yes, you can still just request the bytes and then choose which encoding to apply:
// windows-1251 is not available by default on .NET Core; register the
// code-pages provider first (from the System.Text.Encoding.CodePages package).
Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

using (var httpClient = new HttpClient())
{
    var response = await httpClient.GetByteArrayAsync(RequestUrl);
    // Decode the full byte array with the encoding the server actually used.
    var responseString = Encoding.GetEncoding("windows-1251").GetString(response);
    Console.WriteLine(responseString);
}
References:
How to change the encoding of the HttpClient response
Encoding.GetEncoding can't work in UWP app
The .NET Framework Class Library provides one static property, CodePagesEncodingProvider.Instance, that returns an EncodingProvider object which makes the full set of encodings available in the desktop .NET Framework Class Library available to .NET Core applications.
https://learn.microsoft.com/en-us/dotnet/api/system.text.encodingprovider?redirectedfrom=MSDN&view=netcore-3.1
For file downloads from an ASP.NET Core 2 application I've been using PhysicalFileResult (an implementation of IActionResult) to return files from a controller GET action. For filenames with 'typical' characters everything works fine; that includes the accented and Kanji character sets I've tested so far.
However, if the filename contains an emoji (something the Windows file system allows), the resulting filename is malformed or mis-encoded.
For example, I'd expect the following code to deliver a file named 😀.png; instead it delivers a file named ��.png.
public async Task<IActionResult> GetByIdAsync([FromRoute] long id)
{
    return PhysicalFile("c:\\file.png", "image/png", "😀.png");
}
Is there a particular way that filenames containing Emojis need to be encoded in order for the filename to be preserved, or is this an issue with PhysicalFileResult?
The reality is that your issue is rooted in the mess that is the Content-Disposition header, and more generally character encoding in HTTP headers. There is a very old Stack Overflow question, How to encode the filename parameter of Content-Disposition header in HTTP, whose answers, while dated, cover a lot of what your issue is; but let's break it down.
PhysicalFileResult inherits from FileResult, and when it is prepared for return to the client it is processed by FileResultExecutorBase. The code you are interested in from that class is contained in the SetContentDispositionHeader method:
private static void SetContentDispositionHeader(ActionContext context, FileResult result)
{
    if (!string.IsNullOrEmpty(result.FileDownloadName))
    {
        // From RFC 2183, Sec. 2.3:
        // The sender may want to suggest a filename to be used if the entity is
        // detached and stored in a separate file. If the receiving MUA writes
        // the entity to a file, the suggested filename should be used as a
        // basis for the actual filename, where possible.
        var contentDisposition = new ContentDispositionHeaderValue("attachment");
        contentDisposition.SetHttpFileName(result.FileDownloadName);
        context.HttpContext.Response.Headers[HeaderNames.ContentDisposition] = contentDisposition.ToString();
    }
}
Which ultimately lands you in the class ContentDispositionHeaderValue from the Microsoft.Net.Http.Headers namespace. There are currently 725 lines of code in this class most of which are there to ensure that the values returned are valid given the complicated nature of HTTP response headers.
RFC 7230 specifies that there is no useful encoding in headers besides ASCII:
Historically, HTTP has allowed field content with text in the
ISO-8859-1 charset [ISO-8859-1], supporting other charsets only
through use of [RFC2047] encoding. In practice, most HTTP header field
values use only a subset of the US-ASCII charset [USASCII]. Newly
defined header fields SHOULD limit their field values to US-ASCII
octets. A recipient SHOULD treat other octets in field content
(obs-text) as opaque data.
Ultimately, some relief may be in sight, as RFC 8187 from September 2017 provides clarification that should alleviate some of this mess. Issue 2688 filed against HTTP Abstractions covers the need to incorporate this RFC into ASP.NET.
In the meantime you can try putting an appropriately URL-encoded filename into the URI itself (http://example.com/filedownload/33/%F0%9F%98%80.png) rather than relying on the flawed Content-Disposition header for the filename suggestion. To be clear, this is an incomplete fix and may not work on all clients.
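A hedged sketch of that idea: the filename rides in the URL path and no download name is set, so no Content-Disposition filename parameter is emitted at all (the route and the ResolvePathFor lookup below are hypothetical):

// GET /filedownload/33/%F0%9F%98%80.png
// The client derives the saved filename from the URL path instead of the
// Content-Disposition header. ResolvePathFor is a hypothetical lookup.
[HttpGet("filedownload/{id}/{filename}")]
public IActionResult GetFile(long id, string filename)
{
    var path = ResolvePathFor(id);
    return PhysicalFile(path, "image/png"); // no FileDownloadName supplied
}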
I'm trying to handle a redirect authentication piece for a site we have. I configured AD FS on Windows Server 2012 R2 to handle this. I set up the relying party trust with a URL in our domain that I'm sending requests from, and I added an endpoint back to the specific page the requests come from.
Basically, I'm taking this stuff here: How do I correctly prepare an 'HTTP Redirect Binding' SAML Request using C#
to try and send over a simple SAML request token.
public static string SAMLRequest = @"<samlp:AuthnRequest
    xmlns:samlp=""urn:oasis:names:tc:SAML:2.0:protocol""
    xmlns:saml=""urn:oasis:names:tc:SAML:2.0:assertion""
    ID=""{0}""
    Version=""2.0""
    AssertionConsumerServiceIndex=""0""
    AttributeConsumingServiceIndex=""0"">
    <saml:Issuer>URN:xx-xx-xx</saml:Issuer>
    <samlp:NameIDPolicy
        AllowCreate=""true""
        Format=""urn:oasis:names:tc:SAML:2.0:nameid-format:transient"" />
</samlp:AuthnRequest>";
This is the request template I'm sending over, kept as a C# verbatim string (hence the doubled quote characters and the string.Format replacement on the ID value).
And here is the code I'm using to generate the request parameter that's going into my redirect URL:
public static string GetSAMLHttpRedirectUri(string idpUri)
{
    var saml = string.Format(SAMLRequest, Guid.NewGuid());
    var bytes = Encoding.UTF8.GetBytes(saml);
    using (var output = new MemoryStream())
    {
        using (var zip = new DeflaterOutputStream(output))
        {
            zip.Write(bytes, 0, bytes.Length);
        }
        var base64 = Convert.ToBase64String(output.ToArray());
        var urlEncode = HttpUtility.UrlEncode(base64);
        return string.Concat(idpUri, "?SAMLRequest=", urlEncode);
    }
}
When all is said and done, the page redirects me to the appropriate endpoint with the token base64 encoded properly. Well, sort of properly.
On the AD FS side of things, I get an error on the page and then it just stops authenticating. Looking in the event viewer of AD FS, it gives me this cryptic error:
System.IO.InvalidDataException: Block length does not match with its complement.
I've tried fiddling with the compression and some of the properties on the request object itself, to no avail. Anyone have any ideas I could try on this bad boy?
Assuming DeflaterOutputStream is from SharpZipLib, new DeflaterOutputStream(output) will in fact give you a ZLIB stream (RFC 1950), not raw DEFLATE (RFC 1951). The only difference is that ZLIB adds a header and a footer around the DEFLATE data, and you can suppress them in SharpZipLib with new DeflaterOutputStream(output, new Deflater(level: Deflater.DEFAULT_COMPRESSION, noZlibHeaderOrFooter: true)).
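Applied to the method from the question, the compression step might look like this (a sketch, assuming SharpZipLib's ICSharpCode.SharpZipLib.Zip.Compression namespaces are imported):

using (var output = new MemoryStream())
{
    // noZlibHeaderOrFooter: true => raw DEFLATE (RFC 1951), which is what the
    // SAML HTTP Redirect binding (and hence AD FS) expects to inflate.
    using (var zip = new DeflaterOutputStream(
        output, new Deflater(Deflater.DEFAULT_COMPRESSION, noZlibHeaderOrFooter: true)))
    {
        zip.Write(bytes, 0, bytes.Length);
    }
    var base64 = Convert.ToBase64String(output.ToArray());
    var urlEncode = HttpUtility.UrlEncode(base64);
    return string.Concat(idpUri, "?SAMLRequest=", urlEncode);
}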
I'm using the Amazon .NET SDK to generate a pre-signed URL like this:
public System.Web.Mvc.ActionResult AsActionResult(string contentType, string contentDisposition)
{
    ResponseHeaderOverrides headerOverrides = new ResponseHeaderOverrides();
    headerOverrides.ContentType = contentType;
    if (!string.IsNullOrWhiteSpace(contentDisposition))
    {
        headerOverrides.ContentDisposition = contentDisposition;
    }

    GetPreSignedUrlRequest request = new GetPreSignedUrlRequest()
        .WithBucketName(bucketName)
        .WithKey(objectKey)
        .WithProtocol(Protocol.HTTPS)
        .WithExpires(DateTime.Now.AddMinutes(6))
        .WithResponseHeaderOverrides(headerOverrides);

    string url = S3Client.GetPreSignedURL(request);
    return new RedirectResult(url, permanent: false);
}
This works perfectly, except when my contentType contains a + in it. This happens when I try to get an SVG file, for example, which gets a content type of image/svg+xml. In that case, S3 throws a SignatureDoesNotMatch error.
The error message shows the StringToSign like this:
GET 1234567890 /blah/blabh/blah.svg?response-content-disposition=filename="blah.svg"&response-content-type=image/svg xml
Notice there's a space in the response-content-type: it now says image/svg xml instead of image/svg+xml. It seems to me that's what is causing the problem, but what's the right way to fix it?
Should I be encoding my content type? Enclose it within quotes or something? The documentation doesn't say anything about this.
Update
This bug has been fixed as of Version 1.4.1.0 of the SDK.
Workaround
This is a confirmed bug in the AWS SDK, so until they issue a fix I'm going with this hack to make things work:
Specify the content type exactly how you want it to look in the response header. So, if you want S3 to return a content type of image/svg+xml, set it exactly like this:
ResponseHeaderOverrides headerOverrides = new ResponseHeaderOverrides();
headerOverrides.ContentType = "image/svg+xml";
Now, go ahead and generate the pre-signed request as usual:
GetPreSignedUrlRequest request = new GetPreSignedUrlRequest()
    .WithBucketName(bucketName)
    .WithKey(objectKey)
    .WithProtocol(Protocol.HTTPS)
    .WithExpires(DateTime.Now.AddMinutes(6))
    .WithResponseHeaderOverrides(headerOverrides);

string url = S3Client.GetPreSignedURL(request);
Finally, "fix" the resulting URL with the properly URL encoded value for your content type:
url = url.Replace(contentType, HttpUtility.UrlEncode(contentType));
Yes, it's a dirty workaround but, hey, it works for me! :)
Strange indeed - I've been able to reproduce this easily, with the following observed behavior:
replacing + in the URL generated by GetPreSignedURL() with its encoded form %2B yields a working URL/signature (see the sketch after this list)
this holds true no matter whether / is replaced with its encoded form %2F or not
encoding the contentType upfront, before calling GetPreSignedURL() (e.g. via the HttpUtility.UrlEncode method), yields invalid signatures regardless of any variation of the generated URL
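Until a fix ships, the first observation translates into a blunt post-processing step on the generated URL; this is a sketch, and it assumes the only literal + left in the URL is the one in the response-content-type override:

string url = S3Client.GetPreSignedURL(request);
// Percent-encode the literal '+' the SDK leaves in response-content-type.
url = url.Replace("+", "%2B");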
Given how long this functionality has been available already, this is somewhat surprising, but I'd still consider it to be a bug - accordingly, it might be best to inquire about this in the Amazon Simple Storage Service forum.
Update: I just realized you asked the same question there already and the bug has indeed been confirmed, so the correct answer can be figured out over time by monitoring the AWS team's response ;)
Update: This bug has been fixed as of Version 1.4.1.0 of the SDK.
I'm using HttpListener to provide a web server to an application written in another technology on localhost. The application uses a simple form submission (application/x-www-form-urlencoded) to make its requests to my software. I want to know if there is already a parser written to convert the body of the request into a hash table or equivalent.
I find it hard to believe I need to write this myself, given how much .NET already seems to provide.
Thanks in advance,
You mean something like HttpUtility.ParseQueryString, which gives you a NameValueCollection? Here's some sample code. You'll need more error checking, and maybe use the request content type to figure out the encoding:
string input = null;
// Use the encoding the client declared in the request's Content-Type header.
using (StreamReader reader = new StreamReader(listenerRequest.InputStream, listenerRequest.ContentEncoding))
{
    input = reader.ReadToEnd();
}
NameValueCollection coll = HttpUtility.ParseQueryString(input);
If you're using HTTP GET instead of POST:
string input = listenerRequest.Url.Query.TrimStart('?'); // Uri exposes Query, not QueryString
NameValueCollection coll = HttpUtility.ParseQueryString(input);
The magic bits that fill out HttpRequest.Form are in System.Web.HttpRequest, but they're not public (use Reflector on the method "FillInFormCollection" in that class to see). You have to integrate your pipeline with HttpRuntime (basically write a simple ASP.NET host) to take full advantage.
If you want to avoid the dependency on System.Web that is required to use HttpUtility.ParseQueryString, you could use the Uri extension method ParseQueryString found in System.Net.Http.
Make sure to add a reference (if you haven't already) to System.Net.Http in your project.
Note that you have to convert the response body to a valid Uri so that ParseQueryString (in System.Net.Http) works.
string body = "value1=randomvalue1&value2=randomValue2";
// "http://localhost/query?" is added to the string "body" in order to create a valid Uri.
string urlBody = "http://localhost/query?" + body;
NameValueCollection coll = new Uri(urlBody).ParseQueryString();