In my ASP.NET MVC application I have this code:
response.ContentType = "application/octet-stream";
response.AddHeader("Content-Disposition", "attachment;filename=" +
HttpUtility.UrlEncode(attachment.FileName));
With this, all the Chinese characters are URL-encoded to something like %5C%2D. When users download the file in IE or Chrome, they get the Chinese file name (that is, IE/Chrome automatically URL-decodes the file name). But in Firefox they get something like %5C%2D%0A.docx. I'm now planning to remove HttpUtility.UrlEncode from the code, but before doing so I want to confirm that there are no security issues in this case. Would you please give me some ideas?
EDIT: Corbin's answer is correct. However, after removing the URL encoding of the filename, some users on older versions of IE got strange, garbled file names. In the end I URL-encode only for those users.
The name is allowed to be in quotes if it's ASCII.
If it's non-ASCII, then you have to use the encoding defined in RFC 2231 or the one in RFC 5987 or the one in RFC 2047... which browsers support which of these is a fun game, of course. :(
If you just stick the raw non-ASCII bytes into the header, it will almost certainly look like garbage for a large fraction of users.
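For what it's worth, here is a minimal sketch of sending both forms from classic ASP.NET - an ASCII fallback in filename plus an RFC 5987 filename* - assuming response and attachment.FileName are as in the question above; the underscore fallback is just one possible choice:
// Sketch only; requires "using System;", "using System.Linq;" and "using System.Web;".
string rawName = attachment.FileName;
// Crude ASCII fallback for older browsers (assumption: replacing non-ASCII with '_' is acceptable).
string asciiFallback = new string(rawName.Select(c => c < 128 ? c : '_').ToArray());
// Percent-encodes the UTF-8 bytes, which is close enough to RFC 5987 for typical file names.
string encodedName = Uri.EscapeDataString(rawName);
response.ContentType = "application/octet-stream";
response.AddHeader("Content-Disposition",
    "attachment; filename=\"" + asciiFallback + "\"; filename*=UTF-8''" + encodedName);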
Please change your code as follows:
if (Request.Browser.Browser == "IE" || Request.Browser.Browser == "Chrome")
{
filename = HttpUtility.UrlPathEncode(filename);
}
Response.AddHeader("Content-Disposition", "attachment;filename=\"" + filename + "\"");
Note: your code is missing the "\"" needed to wrap the file name in quotes.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html
Unless I'm misunderstanding it, it looks like the name should just be in quotes, not url encoded.
My application has a blazor page that calls an API function to generate a report and serve that report to the browser for downloading.
For this, I am using the ControllerBase.File method, specifying the suggested filename.
The browser successfully begins the download but ignores the suggested filename, instead giving "file.nothing".
I have seen this behaviour in both Chrome and Firefox.
Minimal reproducible example
[HttpPost("report/level2/deliverable")]
public async Task<IActionResult> GenerateLevel2DeliverableReport()
{
byte[] bytes = Encoding.ASCII.GetBytes("This is some text");
string filename = "test.txt";
return File(bytes, "text/plain", filename);
}
In the real-world situation, I have experienced the same problem returning Word Documents and Zip files, with their respective content types.
HTTP Response as captured by Postman
The content-disposition (which is hard to read in the image) is
attachment; filename=test.txt; filename*=UTF-8''test.txt
Does anybody have any thoughts on why the browser is ignoring the filename?
Well, this is embarrassing.
I thought the download was being spawned by the line
return File(bytes, "text/plain", filename);
But it turns out the API call was wrapped in some JavaScript code that pulled the filename out of the HTTP header using a regex and started the download.
If the regex match failed, it hard-coded the filename to "file.nothing".
No wonder my google searches for "file.nothing" were all in vain.
There was an issue with the regex which has been fixed by a colleague and all is now well in the universe.
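As an aside, if a filename ever has to be pulled out of that header on the .NET side (for example in a test), the framework can parse it without a hand-rolled regex. A small sketch, using the header value captured above; the class and property names are real, but the fallback chain is just one option:
using System;
using System.Net.Http.Headers;

class ContentDispositionDemo
{
    static void Main()
    {
        // The header value captured in Postman above.
        var header = "attachment; filename=test.txt; filename*=UTF-8''test.txt";
        var parsed = ContentDispositionHeaderValue.Parse(header);
        // Prefer the RFC 5987 filename*, fall back to filename, then to a default.
        string name = parsed.FileNameStar ?? parsed.FileName?.Trim('"') ?? "download.bin";
        Console.WriteLine(name); // prints: test.txt
    }
}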
I have the following line of ASPX code with a URL that I would like to encode:
Response.Redirect("countriesAttractions.aspx?=");
I have tried the following method:
Response.Redirect(Encoder.UrlPathEncode("countriesAttractions.aspx?="));
This is another method that I tried:
var encoded = Uri.EscapeUriString("countriesAttractions.aspx?=");
Response.Redirect(encoded);
Both redirect to the page without the URL being encoded:
http://localhost:52595/countriesAttractions?=
I tried this third method:
Response.Redirect(Server.UrlEncode("countriesAttractions.aspx?="));
This time the url itself gets encoded:
http://localhost:52595/countriesAttractions.aspx%3F%3D
However I get an error from the UI saying:
HTTP Error 404.0 Not Found
The resource you are looking for has been removed, had its name changed, or
is temporarily unavailable.
Most likely causes:
-The directory or file specified does not exist on the Web server.
-The URL contains a typographical error.
-A custom filter or module, such as URLScan, restricts access to the file.
Also, I would like to encode another kind of URL that involves parsing of session strings:
Response.Redirect("specificServices.aspx?service=" +
Session["service"].ToString().Trim() + "&price=" +
Session["price"].ToString().Trim());
This is how I tried to include the encoding method in the code above:
Response.Redirect(Server.UrlEncode("specificServices.aspx?service=" +
Session["service"].ToString().Trim() + "&price=" +
Session["price"].ToString().Trim()));
This encoding method produced the same kind of result as my earlier Server.UrlEncode attempts. I am not sure how to encode the URL correctly without getting errors.
I would also like to encode a URL built from a CommandArgument:
Response.Redirect("specificAttractions.aspx?attraction=" +
e.CommandArgument);
I have tried the following encoding:
Response.Redirect("specificAttractions.aspx?attraction=" +
HttpUtility.HtmlEncode(Convert.ToString(e.CommandArgument)));
But it did not work.
Is there any way that I can encode the url without receiving this kind of error?
I would like the output to be something like my second result but I want to see the page itself and not the error page.
I have tried other methods I found on Stack Overflow, including self-coded ones, but those did not work either.
I am using the AntiXSS class library for the methods I tried, so it would be great if I could get a solution that uses the AntiXSS library.
I need to encode the URL as part of my school project, so it would be great if I could get a solution. Thank you.
You can use the UrlEncode or UrlPathEncode methods from the HttpUtility class to achieve what you need. See documentation at https://msdn.microsoft.com/en-us/library/system.web.httputility.urlencode(v=vs.110).aspx
It's important to understand, however, that you should not need to encode the whole URL string. It's only the parameter values - which may contain arbitrary data and characters that aren't valid in a URL - that you need to encode.
To explain this concept, run the following in a simple .NET console application:
// Requires a reference to System.Web and "using System.Web;"
string url = "https://www.google.co.uk/search?q=";
//string url = "http://localhost:52595/specificAttractions.aspx?country=";
string parm = "Bora Bora, French Polynesia";
Console.WriteLine(url + parm);
Console.WriteLine(url + HttpUtility.UrlEncode(parm, System.Text.Encoding.UTF8));
Console.WriteLine(url + HttpUtility.UrlPathEncode(parm));
Console.WriteLine(HttpUtility.UrlEncode(url + parm, System.Text.Encoding.UTF8));
You'll get the following output:
https://www.google.co.uk/search?q=Bora Bora, French Polynesia
https://www.google.co.uk/search?q=Bora+Bora%2c+French+Polynesia
https://www.google.co.uk/search?q=Bora%20Bora,%20French%20Polynesia
https%3a%2f%2fwww.google.co.uk%2fsearch%3fq%3dBora+Bora%2c+French+Polynesia
By pasting these into a browser and trying to use them, you'll soon see what is a valid URL and what is not.
(N.B. when pasting into modern browsers, many of them will URL-encode automatically for you, if your parameter is not valid - so you'll find the first output works too, but if you tried to call it via some C# code for instance, it would fail.)
Working demo: https://dotnetfiddle.net/gqFsdK
You can of course alter the values you input to anything you like. They can be hard-coded strings, or the result of some other code which returns a string (e.g. fetching from the session, or a database, or a UI element, or anywhere else).
N.B. It's also useful to clarify that a valid URL is simply a string in the correct format of a URL. It is not the same as a URL which actually exists. A URL may be valid but not exist if you try to use it, or may be valid and really exist.
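Applied to the redirect from the question, that means encoding only the session values, not the page name or the separators. A minimal sketch, assuming the session values are plain strings and System.Web is referenced:
// Encode only the query-string values; the page name and the ?, = and & separators stay as-is.
string service = HttpUtility.UrlEncode(Session["service"].ToString().Trim());
string price = HttpUtility.UrlEncode(Session["price"].ToString().Trim());
Response.Redirect("specificServices.aspx?service=" + service + "&price=" + price);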
I am using ABCpdf Version 5 in order to render some html-pages into PDFs.
I basically use the HttpServerUtility.Execute() method to retrieve the HTML for the PDF:
System.IO.StringWriter writer = new System.IO.StringWriter();
server.Execute(requestUrl, writer);
string pageResult = writer.ToString();
WebSupergoo.ABCpdf5.Doc pdfDoc = new WebSupergoo.ABCpdf5.Doc();
pdfDoc.AddImageHtml(pageResult);
response.Buffer = false;
response.ContentType = "application/pdf";
response.AddHeader("Content-Disposition", "attachment;filename=MyPdf_" +
FormatDate(DateTime.Now, "yyyy-MM-dd") + ".pdf");
response.BinaryWrite(pdfDoc.GetData());
Now some special characters, like umlauts (äöü), are replaced with an empty space. Interestingly, not all of them. What I did figure out:
Within the HTML page I have
`<meta http-equiv="content-type" content="text/xhtml; charset=utf-8" />`
If I parse this away, all special chars are rendered correctly. But this seems to me like an ugly hack.
In earlier days I did not use HttpServerUtility.Execute(); instead I let ABCpdf call the URL itself with pdfDoc.AddImageUrl("someUrl"), and there I had no such encoding problems.
What else could I try?
Just came across this problem with ABCpdf 8.
In your code you retrieve HTML contents and pass the pageResult to AddImageHtml(). As the documentation states,
ABCpdf saves this HTML into a temporary file and renders the file
using a 'file://' protocol specifier.
What is not mentioned is that the temp file is UTF-8 encoded, but the encoding is not stated in the HTML file.
The <meta> tag actually sets the required encoding, and solved my problem.
One way to avoid having to declare the encoding is to use the AddImageUrl() method, which I expect detects the HTML encoding from the HTTP/HTML response.
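Put differently, a hedged workaround is to make sure the captured HTML declares its encoding before it is handed to AddImageHtml. A naive sketch based on the code from the question; the charset check is deliberately simplistic:
string pageResult = writer.ToString();
// If the captured markup does not declare a charset, add one so the temporary
// file written by ABCpdf is read as UTF-8. (Naive string check; sketch only.)
if (pageResult.IndexOf("charset=", StringComparison.OrdinalIgnoreCase) < 0)
{
    pageResult = pageResult.Replace("<head>",
        "<head><meta http-equiv=\"content-type\" content=\"text/html; charset=utf-8\" />");
}
pdfDoc.AddImageHtml(pageResult);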
The encoding meta tag and the AddImageUrl method perhaps help with a simple document, but not in a chained situation, where the encoding somehow gets lost despite the tag. I encountered this problem (exactly as described in the original question - some foreign characters such as umlauts would disappear) and see no solution. I am considering getting rid of ABCpdf altogether and replacing it with SSRS, which can render PDF formats.
I am using the HttpContext object in an HttpHandler to download a file. When the file name contains non-ASCII characters it looks weird in IE, whereas it looks fine in Firefox.
Below is the code:
context.Response.ContentType = ".cs";
context.Response.AppendHeader("Content-Length", data.Length.ToString());
context.Response.AppendHeader("Content-Disposition", String.Format("attachment; filename={0}",filename));
context.Response.OutputStream.Write(data, 0, data.Length);
context.Response.Flush();
When I supply 'ß' 'ä' 'ö' 'ü' 'ó' 'ß' 'ä' 'ö' 'ü' 'ó' in the file name field, the downloaded name looks different from what I supplied in IE, but in Firefox it looks fine. Adding EncodingType and charset has been of no use.
In IE it is 'ß''ä''ö''ü''ó''ß''ä''ö''ü'_'ó' and in Firefox it is 'ß' 'ä' 'ö' 'ü' 'ó' 'ß' 'ä' 'ö' 'ü' 'ó'.
Any idea how this can be fixed?
I had a similar problem. You have to use HttpUtility.UrlEncode or Server.UrlEncode to encode the filename. But as I remember, Firefox didn't need it; moreover, it ruined the filename when it was URL-encoded. My code:
// IE needs url encoding, FF doesn't support it, Google Chrome doesn't care
if (Request.Browser.IsBrowser ("IE"))
{
fileName = Server.UrlEncode(fileName);
}
Response.Clear ();
Response.AddHeader ("content-disposition", String.Format ("attachment;filename=\"{0}\"", fileName));
Response.AddHeader ("Content-Length", data.Length.ToString (CultureInfo.InvariantCulture));
Response.ContentType = mimeType;
Response.BinaryWrite(data);
Edit
I have read the specifications more carefully. First of all, RFC 2183 states that:
Current [RFC 2045] grammar restricts parameter values (and hence Content-Disposition filenames) to US-ASCII.
But then I found references saying that [RFC 2045] is obsolete and one must reference RFC 2231, which states:
Asterisks ("*") are reused to provide the indicator that language and character set information is present and encoding is being used. A single quote ("'") is used to delimit the character set and language information at the beginning of the parameter value. Percent signs ("%") are used as the encoding flag, which agrees with RFC 2047.
Which means that you can use UrlEncode for non-ascii symbols, as long as you include the encoding as stated in the rfc. Here is an example:
string.Format("attachment; filename=\"{0}\"; filename*=UTF-8''{0}", HttpUtility.UrlEncode(fileName, Encoding.UTF8));
Note that filename is included in addition to filename* for backwards compatibility. You can also choose another encoding and modify the parameter accordingly, but UTF-8 covers everything.
HttpUtility.UrlPathEncode might be a better option, as UrlEncode will replace spaces with '+' signs.
For me this solution is working on all major browsers:
Response.AppendHeader("Content-Disposition", string.Format("attachment; filename*=UTF-8''{0}", HttpUtility.UrlPathEncode(fileName).Replace(",", "%2C")));
var mime = MimeMapping.GetMimeMapping(fileName);
return File(fileName, mime);
Using ASP.NET MVC 3.
The Replace is necessary, because Chrome doesn't like Comma (,) in parameter values: http://www.gangarasa.com/lets-Do-GoodCode/tag/err_response_headers_multiple_content_disposition/
You may want to read RFC 6266 and look at the tests at http://greenbytes.de/tech/tc2231/.
For me this solved the problem:
var result = new HttpResponseMessage(HttpStatusCode.OK)
{
Content = new ByteArrayContent(data)
};
result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
{
FileNameStar = "foo-ä-€.html"
};
When I look at the response in Fiddler I can see the filename has automatically been encoded using UTF-8:
(Image: Fiddler response showing the Content-Disposition filename encoded with UTF-8.)
If we look at the value of the Content-Disposition header, we can see it is the same as in Johannes Geyer's answer. The only difference is that we didn't have to do the encoding ourselves; the ContentDispositionHeaderValue class takes care of that.
I used the Testcases for the Content-Disposition header on: http://greenbytes.de/tech/tc2231/ as mentioned by Julian Reschke.
Information about the ContentDispositionHeaderValue class can be found on MSDN.
For ASP.NET Core (version 2 as of this post), UrlPathEncode is deprecated; here's how to achieve the desired result:
System.Net.Mime.ContentDisposition cd = new System.Net.Mime.ContentDisposition
{
FileName = Uri.EscapeUriString(fileName),
Inline = true // false = prompt the user for downloading; true = browser to try to show the file inline
};
Response.Headers.Add("Content-Disposition", cd.ToString());
I'm using Uri.EscapeUriString to convert characters to their percent-encoded (hexadecimal) representation, and String.Normalize for Unicode normalization form C.
(tested in ASP.NET MVC5 framework 4.5)
var contentDispositionHeader = new System.Net.Mime.ContentDisposition
{
Inline = false,
FileName = Uri.EscapeUriString(Path.GetFileName(pathFile)).Normalize()
};
Response.Headers.Add("Content-Disposition", contentDispositionHeader.ToString());
string mimeType = MimeMapping.GetMimeMapping(Server.MapPath(pathFile));
return File(pathFile, mimeType);
I'm looking for a good method of generating an iCalendar file (*.ics) in C# (ASP.NET). I've found a couple of resources, but one thing that has been lacking is their support for quoted-printable fields - fields that have carriage returns and line feeds.
For example, if the description field isn't encoded properly, only the first line will display, possibly corrupting the rest of the information in the *.ics file.
I'm looking for existing classes that can generate *.ics files and/or a class that can generate quoted-printable fields.
I use DDay.iCal; it's good stuff.
It has the ability to open an iCal file and get its data in a nice object model. It says beta, but it works great for us.
Edit Nov 2016
This library has been deprecated, but was picked up and re-released as iCal.NET by another dev.
Notes about the release: rianjs.net/2016/07/dday-ical-is-now-ical-net
Source on GitHub: github.com/rianjs/ical.net
The easiest way I've found of doing this is to markup your HTML using microformats.
If you're looking to generate iCalendar files, you could use the hCalendar microformat and then include a link such as 'Add to Calendar' that points to:
http://feeds.technorati.com/events/[ your page's full URL including the http:// ]
The Technorati page then parses your page, extracts the hCalendar info and sends the iCalendar file to the client.
I wrote a shim function to handle this. It's mostly compliant--the only hangup is that the first line is 74 characters instead of 75 (the 74 is to handle the space on subsequent lines)...
Private Function RFC2445TextField(ByVal LongText As String) As String
LongText = LongText.Replace("\", "\\")
LongText = LongText.Replace(";", "\;")
LongText = LongText.Replace(",", "\,")
Dim sBuilder As New StringBuilder
Dim charArray() As Char = LongText.ToCharArray
For i = 1 To charArray.Length
sBuilder.Append(charArray(i - 1))
If i Mod 74 = 0 Then sBuilder.Append(vbCrLf & " ")
Next
Return sBuilder.ToString
End Function
I use this for the summary and description on our ICS feed. Just feed the line with the field already prepended (e.g. LongText = "SUMMARY:Event Title"). As long as you set caching decently long, it's not too expensive of an operation.
iCal (ical 2.0) and quoted-printable don't go together.
Quoted-printable is used a lot in vCal (vCal 1.0) to represent non-printable characters, e.g. line-breaks (=0D=0A). The default vCal encoding is 7-bit, so sometimes you need to use quoted-printable to represent non-ASCII characters (you can override the default encoding, but the other vCal-compliant communicating party is not required to understand it.)
In iCal, special characters are represented using escapes, e.g. '\n'. The default encoding is UTF-8, all iCal-compliant parties must support it and that makes quoted-printable completely unnecessary in iCal 2.0 (and vCard 3.0, for that matter).
You may need to go back to your customer/stakeholder to clarify the requirements. There seems to be confusion between vCal and iCal.
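If it helps, here is a minimal C# sketch of iCal 2.0 TEXT escaping; the EscapeIcsText name is just for illustration:
// Escape a value for an iCal 2.0 TEXT property (RFC 5545): backslash, semicolon,
// comma and line breaks become \\, \;, \, and \n respectively. Sketch only.
static string EscapeIcsText(string value)
{
    return value
        .Replace("\\", "\\\\")
        .Replace(";", "\\;")
        .Replace(",", "\\,")
        .Replace("\r\n", "\\n")
        .Replace("\n", "\\n");
}
// Usage: "DESCRIPTION:" + EscapeIcsText("Status meeting; bring notes.\r\nRoom 4, 2nd floor")
// yields: DESCRIPTION:Status meeting\; bring notes.\nRoom 4\, 2nd floor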
I'm missing an example with custom time zones, so here is a snippet that shows how you can set a time zone in the .ics (and send it to the browser in ASP.NET).
//set a couple of variables for demo purposes
DateTime IcsDateStart = DateTime.Now.AddDays(2);
DateTime IcsDateEnd = IcsDateStart.AddMinutes(90);
string IcsSummary = "ASP.Net demo snippet";
string IcsLocation = "Amsterdam (Netherlands)";
string IcsDescription = @"This snippet shows you how to create a calendar item file (.ics) in ASP.NET.\nMay it be useful for you.";
string IcsFileName = "MyCalendarFile";
//create a new stringbuilder instance
StringBuilder sb = new StringBuilder();
//begin the calendar item
sb.AppendLine("BEGIN:VCALENDAR");
sb.AppendLine("VERSION:2.0");
sb.AppendLine("PRODID:stackoverflow.com");
sb.AppendLine("CALSCALE:GREGORIAN");
sb.AppendLine("METHOD:PUBLISH");
//create a custom time zone if needed, TZID to be used in the event itself
sb.AppendLine("BEGIN:VTIMEZONE");
sb.AppendLine("TZID:Europe/Amsterdam");
sb.AppendLine("BEGIN:STANDARD");
sb.AppendLine("TZOFFSETTO:+0100");
sb.AppendLine("TZOFFSETFROM:+0100");
sb.AppendLine("END:STANDARD");
sb.AppendLine("END:VTIMEZONE");
//add the event
sb.AppendLine("BEGIN:VEVENT");
//with a time zone specified
sb.AppendLine("DTSTART;TZID=Europe/Amsterdam:" + IcsDateStart.ToString("yyyyMMddTHHmm00"));
sb.AppendLine("DTEND;TZID=Europe/Amsterdam:" + IcsDateEnd.ToString("yyyyMMddTHHmm00"));
//or without a time zone
//sb.AppendLine("DTSTART:" + IcsDateStart.ToString("yyyyMMddTHHmm00"));
//sb.AppendLine("DTEND:" + IcsDateEnd.ToString("yyyyMMddTHHmm00"));
//contents of the calendar item
sb.AppendLine("SUMMARY:" + IcsSummary + "");
sb.AppendLine("LOCATION:" + IcsLocation + "");
sb.AppendLine("DESCRIPTION:" + IcsDescription + "");
sb.AppendLine("PRIORITY:3");
sb.AppendLine("END:VEVENT");
//close calendar item
sb.AppendLine("END:VCALENDAR");
//create a string from the stringbuilder
string CalendarItemAsString = sb.ToString();
//send the ics file to the browser
Response.ClearHeaders();
Response.Clear();
Response.Buffer = true;
Response.ContentType = "text/calendar";
Response.AddHeader("content-length", CalendarItemAsString.Length.ToString());
Response.AddHeader("content-disposition", "attachment; filename=\"" + IcsFileName + ".ics\"");
Response.Write(CalendarItemAsString);
Response.Flush();
HttpContext.Current.ApplicationInstance.CompleteRequest();
Check out http://www.codeproject.com/KB/vb/vcalendar.aspx
It doesn't handle the quoted-printable fields like you asked, but the rest of the code is there and can be modified.
According to RFC 2445, the comment and description fields are TEXT. The rules for a TEXT field are:
[1] A single line in a TEXT field is not to exceed 75 octets.
[2] Wrapping is achieved by inserting a CRLF followed by whitespace.
[3] There are several characters that must be escaped, including \ (backslash), ; (semicolon), , (comma) and newline. Using \ (backslash) as the escape character gives \\, \;, \, and \n.
Example: The following is an example of the property with formatted
line breaks in the property value:
DESCRIPTION:Meeting to provide technical review for "Phoenix"
design.\n Happy Face Conference Room. Phoenix design team
MUST attend this meeting.\n RSVP to team leader.
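For completeness, here is a small C# sketch of rules [1] and [2] - folding a finished content line at 75 octets by inserting CRLF plus a single space. The FoldIcsLine name is illustrative and surrogate pairs are not handled:
// RFC 2445/5545 line folding: no line longer than 75 octets (excluding CRLF);
// a continuation line starts with a single space, which counts toward its 75.
static string FoldIcsLine(string line)
{
    var sb = new System.Text.StringBuilder();
    int octets = 0;
    foreach (char c in line)
    {
        int size = System.Text.Encoding.UTF8.GetByteCount(c.ToString());
        if (octets + size > 75)
        {
            sb.Append("\r\n ");
            octets = 1; // the leading space of the continuation line
        }
        sb.Append(c);
        octets += size;
    }
    return sb.ToString();
}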
iCal can be complicated, so I recommend using a library. DDay is a good free solution. Last I checked it didn't have full support for recurring events, but other than that it looks really nice. Definitely test the calendars with several clients.
I know it is too late, but it may help others. In my case I wrote the following text file with a .ics extension:
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Calendly//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VEVENT
DTSTAMP:20170509T164109Z
UID:your id-11273661
DTSTART:20170509T190000Z
DTEND:20170509T191500Z
CLASS:PRIVATE
DESCRIPTION:Event Name: 15 Minute Meeting\nDate & Time: 03:00pm - 03:15pm (
Eastern Time - US & Canada) on Tuesday\, May 9\, 2017\n\nBest Phone Number
To Reach You :: xxxxxxxxx\n\nany "link": https://wwww.yahoo.com\n\n
SUMMARY:15 Minute Meeting
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR
It worked for me.