So I tried this in several different formats and produced different results. I will include all relevant information below.
My company uses a web-based application to schedule the generation of reports. The service emails a URL that, when clicked, immediately brings up the "Open / Save As / Cancel" dialog box. I am trying to automate the process of downloading these reports with a C# script as part of a Visual Studio project (the end goal is to import these reports into SQL Server).
I am having terrible difficulty initiating the download of this file using WebClient. Here is the closest I have gotten with any of the methods I have tried:
*NOTE: I removed all identifying information from the URL, but left all special characters and the basic architecture intact. Hopefully this is a happy medium between protecting confidential info and giving you enough to understand my dilemma. The URL does work when manually copied and pasted into the address bar of Internet Explorer.
Error Message:
"Invalid URI: The hostname could not be parsed."
public void Main()
{
    using (var wc = new System.Net.WebClient())
    {
        wc.DownloadFile(
            new Uri(@"http:\\webapp.locality.company.com\scripts\rds\cgigetf.exe?job_id=3058352&file_id=1&format=TAB\report.tab"),
            @"\\server\directory\folder1\folder2\folder3\...\...\...\rawfile.tab");
    }
}
Note also that I have tried to set:
string sourceUri = @"http:\\webapp.locality.company.com\scripts\rds\cgigetf.exe?job_id=3058352&file_id=1&format=TAB\report.tab\abc123_3058352.tab";
Uri uriPath;
Uri.TryCreate(sourceUri, UriKind.Absolute, out uriPath);
But uriPath remains null - TryCreate fails.
I have also attempted a WebRequest / WebResponse / response-stream approach, but it still cannot find the host.
I have tried including the download URL (as in my first code example) and the download URL + the file name (as in my second code example). I do not need the file name in the URL to initiate the download if I do it manually. I have also tried replacing the "report.tab" portion of the URL with the file name, but to no avail.
Help is greatly appreciated as I have simply run out of thoughts on this one. The only idea I have left is that perhaps one of the special characters in my URL is getting in the way, but I don't know which one that would be or how to handle it properly.
Thanks in advance!
My first thought would be that your URI backslashes are being interpreted as escape characters, leading to a nonsense result after evaluation. I would try a quick test where each backslash is escaped as itself (i.e. "\\" instead of "\" in each instance). I'm also a little puzzled as to why your URI is not using forward slashes...?
// Create an absolute Uri from a string.
Uri absoluteUri = new Uri("http://www.contoso.com/");
Ref: Uri Constructor on MSDN
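For what it's worth, here is a quick sketch of what the download might look like with the backslashes in the remote URL replaced by forward slashes. The host, query string and destination path are just the redacted values from your question, so treat this as an untested guess rather than a confirmed fix:
using (var wc = new System.Net.WebClient())
{
    // Forward slashes in the remote URL; the UNC destination path keeps its backslashes.
    wc.DownloadFile(
        new Uri("http://webapp.locality.company.com/scripts/rds/cgigetf.exe?job_id=3058352&file_id=1&format=TAB/report.tab"),
        @"\\server\directory\folder1\folder2\folder3\...\...\...\rawfile.tab");
}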
My application has a blazor page that calls an API function to generate a report and serve that report to the browser for downloading.
For this, I am using the ControllerBase.File method, specifying the suggested filename.
The browser successfully begins the download but ignores the suggested filename, giving "file.nothing" instead.
I have seen this behaviour in both Chrome and Firefox.
Minimal reproducible example
[HttpPost("report/level2/deliverable")]
public async Task<IActionResult> GenerateLevel2DeliverableReport()
{
byte[] bytes = Encoding.ASCII.GetBytes("This is some text");
string filename = "test.txt";
return File(bytes, "text/plain", filename);
}
In the real-world situation, I have experienced the same problem returning Word Documents and Zip files, with their respective content types.
HTTP Response as captured by Postman
The content-disposition (which is hard to read in the image) is
attachment; filename=test.txt; filename*=UTF-8''test.txt
Does anybody have any thoughts on why the browser is ignoring the filename?
Well, this is embarrassing.
I thought the download was being spawned by the line
return File(bytes, "text/plain", filename);
But it turns out the API call was wrapped in some JavaScript code that pulled the filename out of the HTTP header using a regex and started the download.
If the regex match failed, it hard-coded the filename to "file.nothing".
No wonder my Google searches for "file.nothing" were all in vain.
There was an issue with the regex which has been fixed by a colleague and all is now well in the universe.
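As an aside, if the caller had been reading the response in C# (say, a Blazor client using HttpClient) rather than in JavaScript, the framework could have parsed the header instead of a hand-rolled regex. This is only a sketch under that assumption; the helper name and fallback value are made up here:
static string GetSuggestedFileName(System.Net.Http.HttpResponseMessage response, string fallback = "file.nothing")
{
    // Parsed from e.g. "attachment; filename=test.txt; filename*=UTF-8''test.txt"
    var cd = response.Content.Headers.ContentDisposition;
    if (cd == null)
        return fallback;

    // FileNameStar carries the RFC 5987 (filename*=) value, FileName the plain one.
    string name = cd.FileNameStar ?? cd.FileName;
    return string.IsNullOrWhiteSpace(name) ? fallback : name.Trim('"');
}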
I am trying to import data for custom dimension in Google Analytics through the .NET client library. In Google Analytics, when I view the uploads for a data set from Admin > Data Import > Manage Uploads, it says my uploads are successful, but the data for the custom dimension doesn't seem to show up in my report. Right now, I am just using my custom dimension to set the category for an article.
Here is how I am uploading through the .Net client library.
string accountId = "***";
string webPropertyId = "***";
string customDataSourceId = "***";
string contentType = "application/octet-stream";
IUploadProgress progress;
using (var dataStream = CreateArticleCsvStream(articles))
{
var fs = File.Create("test.csv");
dataStream.CopyTo(fs);
fs.Close();
progress = service.Management.Uploads.UploadData(accountId, webPropertyId, customDataSourceId, dataStream, contentType).Upload();
}
if (progress.Status == UploadStatus.Failed)
{
throw progress.Exception;
}
Here is the output for test.csv
ga:pagePath,ga:dimension1
/path/to/page/,"MyCategory"
When I download the file from the data set, I get the same file as test.csv; it just has a random filename assigned to it.
I found this other question similar to mine, but there was no solution posted. Any help would be appreciated.
I have also waited over 24 hours, but still nothing.
It took a few days of trial and error but I finally found the solution.
First thing to check is that your Website's URL is correct under Admin > View Settings. We had ours set up like my.domain.com/path/to/site when it should have just been my.domain.com. (We are using SharePoint, which is why path/to/site was appended to the site URL)
Second thing to check is that your key/pagePath entries are all correct. In our case, we had an extra forward slash at the end of the URL. For some reason, Google Analytics displays the trailing forward slash in reports, but does not actually store it for the pagePath.
Another error may be capitalization. It seems like GA applies filters after the data has been processed. If you add the lowercase/uppercase filter, notice that it only affects how the URLs display in your reports. Behind the scenes, it seems that GA still stores the URL with whatever capitalization the hit initially came in with. For example if the URL on your site is my.domain.com/path/to/PAGE.aspx and you apply the lowercase filter, the pagePath will display in your reports as /path/to/page.aspx. But, if you use the lowercase value in your csv import, the data will not join. You must use the pagePath that appears on your site (/path/to/PAGE.aspx in this case).
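To make those last two checks concrete, here is a rough sketch of building the CSV rows so that the ga:pagePath key matches what GA actually stores. It is not our actual code; the article tuple with its Path and Category values is just illustrative:
static string BuildArticleCsv(System.Collections.Generic.IEnumerable<(string Path, string Category)> articles)
{
    var sb = new System.Text.StringBuilder();
    sb.AppendLine("ga:pagePath,ga:dimension1");
    foreach (var a in articles)
    {
        // Drop the trailing slash that reports display but GA does not store,
        // and keep the page's original capitalization (filters only change the display).
        string path = a.Path.TrimEnd('/');
        sb.AppendLine(path + ",\"" + a.Category + "\"");
    }
    return sb.ToString();
}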
It would be nice if Google gave some log files when it tries to process and join the uploaded data with the existing data, rather than just saying the upload was successful even though the processing/joining stage may fail.
I have the following line of aspx link that I would like to encode:
Response.Redirect("countriesAttractions.aspx?=");
I have tried the following method:
Response.Redirect(Encoder.UrlPathEncode("countriesAttractions.aspx?="));
This is another method that I tried:
var encoded = Uri.EscapeUriString("countriesAttractions.aspx?=");
Response.Redirect(encoded);
Both redirect to the page without the URL being encoded:
http://localhost:52595/countriesAttractions?=
I tried this third method:
Response.Redirect(Server.UrlEncode("countriesAttractions.aspx?="));
This time the URL itself gets encoded:
http://localhost:52595/countriesAttractions.aspx%3F%3D
However, I get an error from the UI saying:
HTTP Error 404.0 Not Found
The resource you are looking for has been removed, had its name changed, or
is temporarily unavailable.
Most likely causes:
-The directory or file specified does not exist on the Web server.
-The URL contains a typographical error.
-A custom filter or module, such as URLScan, restricts access to the file.
Also, I would like to encode another kind of URL that involves parsing of session strings:
Response.Redirect("specificServices.aspx?service=" +
Session["service"].ToString().Trim() + "&price=" +
Session["price"].ToString().Trim()));
The method I tried to include the encoding method into the code above:
Response.Redirect(Server.UrlEncode("specificServices.aspx?service=" +
Session["service"].ToString().Trim() + "&price=" +
Session["price"].ToString().Trim()));
The above encoding method displayed the same kind of result as my previous Server.UrlEncode attempt. I am not sure how to encode the URL correctly without getting errors.
As well as encoding URL with CommandArgument:
Response.Redirect("specificAttractions.aspx?attraction=" +
e.CommandArgument);
I have tried the following encoding:
Response.Redirect("specificAttractions.aspx?attraction=" +
HttpUtility.HtmlEncode(Convert.ToString(e.CommandArgument)));
But it did not work.
Is there any way that I can encode the url without receiving this kind of error?
I would like the output to be something like my second result but I want to see the page itself and not the error page.
I have tried other methods I found on Stack Overflow, such as self-coded methods, but those did not work either.
I am using the AntiXSS class library in this case for the methods I tried, so it would be great if I can get solutions using the AntiXSS library.
I need to encode URL as part of my school project so it would be great if I can get solutions. Thank you.
You can use the UrlEncode or UrlPathEncode methods from the HttpUtility class to achieve what you need. See documentation at https://msdn.microsoft.com/en-us/library/system.web.httputility.urlencode(v=vs.110).aspx
It's important to understand however, that you should not need to encode the whole URL string. It's only the parameter values - which may contain arbitrary data and characters which aren't valid in a URL - that you need to encode.
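Applied to the redirect in your question, that means encoding just the session values, not the page name or the query-string separators (a sketch only, since I don't know what those session values contain):
Response.Redirect("specificServices.aspx?service=" +
    HttpUtility.UrlEncode(Session["service"].ToString().Trim()) + "&price=" +
    HttpUtility.UrlEncode(Session["price"].ToString().Trim()));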
To explain this concept, run the following in a simple .NET console application:
string url = "https://www.google.co.uk/search?q=";
//string url = "http://localhost:52595/specificAttractions.aspx?country=";
string parm = "Bora Bora, French Polynesia";
Console.WriteLine(url + parm);
Console.WriteLine(url + HttpUtility.UrlEncode(parm));
Console.WriteLine(url + HttpUtility.UrlPathEncode(parm));
Console.WriteLine(HttpUtility.UrlEncode(url + parm));
You'll get the following output:
https://www.google.co.uk/search?q=Bora Bora, French Polynesia
https://www.google.co.uk/search?q=Bora+Bora%2c+French+Polynesia
https://www.google.co.uk/search?q=Bora%20Bora,%20French%20Polynesia
https%3a%2f%2fwww.google.co.uk%2fsearch%3fq%3dBora+Bora%2c+French+Polynesia
By pasting these into a browser and trying to use them, you'll soon see what is a valid URL and what is not.
(N.B. when pasting into modern browsers, many of them will URL-encode automatically for you, if your parameter is not valid - so you'll find the first output works too, but if you tried to call it via some C# code for instance, it would fail.)
Working demo: https://dotnetfiddle.net/gqFsdK
You can of course alter the values you input to anything you like. They can be hard-coded strings, or the result of some other code which returns a string (e.g. fetching from the session, or a database, or a UI element, or anywhere else).
N.B. It's also useful to clarify that a valid URL is simply a string in the correct format of a URL. It is not the same as a URL which actually exists. A URL may be valid but not exist if you try to use it, or may be valid and really exist.
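One more point that may help with your specific pages: on the receiving side (e.g. specificServices.aspx), ASP.NET decodes the query string for you, so the values come back out exactly as they went in. A minimal sketch, assuming the parameter names from your redirect:
string service = Request.QueryString["service"]; // already URL-decoded by ASP.NET
string price = Request.QueryString["price"];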
I'm trying to do some work with fakemailgenerator. The URL works fine with HttpWebRequest and is printed correctly by MessageBox.Show. Here is the piece of code with the problem; by the way, there are no errors or exceptions.
//FOR EXAMPLE mail@fakemail.com
string[] mailSplit = mail.Split(new string[] { "@" }, StringSplitOptions.None); // MAKING AN ARRAY TO SPLIT USER AND DOMAIN
string url = @"http://www.fakemailgenerator.com/#/" + mailSplit[1] + "/" + mailSplit[0] + "/"; //GENERATING AND SAVING THE FAKE MAIL URL.
MessageBox.Show(url); //THIS PRINTS http://www.fakemailgenerator.com/#/fakemail.com/mail
Process.Start("chrome", url); //THIS GOES TO http://www.fakemailgenerator.com/#/fakemail.com
EDIT
This has nothing to do with fakemailgenerator.com, because as mentioned above I tried it with HttpWebRequest; also, while the page is loading the address is just http://www.fakemailgenerator.com/#/fakemail.com and not the full URL.
EDIT
I tried just now putting the URL in manually and it opened in Chrome successfully. I have also observed one problem with the URL when printed with MessageBox.Show (while using the variables, not setting the URL manually): it shows a URL like http://www.fakemailgenerator.com/#/domain.com /user with a whitespace between .com and /user. I've tried replacing the whitespace with \0 (null) using url.Replace(' ', '\0'), but this failed, so I think maybe there is a way to remove the whitespace?
Ran the code and it worked fine. A new Chrome window opened with the correct (full) url. It's an error page for me though, but if the site really exists when you try to reach it, perhaps there is some kind of a redirect that redirects you to a site with the shorter url.
I've worked around the problem. I don't really know where it comes from, but all I know is that a whitespace was being added to the URL in a way that made Process.Start("chrome", url); receive only the part before the whitespace (http://www.fakemailgenerator.com/#/domain.com/), so I've just removed the whitespace with url = url.Replace(" ", string.Empty); and now the code works just fine.
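For completeness, a small sketch of the same workaround applied while the URL is being built. The Trim calls on the split pieces are my own guess at where the stray space might come from, so treat this as a sketch rather than a confirmed cause:
string[] mailSplit = mail.Split(new string[] { "@" }, StringSplitOptions.None);
// Trim each piece in case the original mail string carries stray spaces, and strip any
// remaining whitespace from the final URL as a safety net; otherwise the text after a
// space is passed to Chrome as a separate argument and only the first part is opened.
string url = @"http://www.fakemailgenerator.com/#/" + mailSplit[1].Trim() + "/" + mailSplit[0].Trim() + "/";
url = url.Replace(" ", string.Empty);
Process.Start("chrome", url);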
I am unable to use the drag-and-drop functionality within DotNetNuke version 7.1.
The drag-and-drop functionality of the Telerik RadEditor takes the browser's Base64 input and encases it in an img tag where the source is the data. E.g., src="data:image/jpeg;base64,[base64data]".
When using drag/drop to a RadEditor within the HTML Module and then saving the HTML content, that src definition is changed to a URI request by prepending the relative path for the DNN portal. E.g., src="/mysite/portals/0/data:image/jpeg;base64,[base64data]".
This converts what started out as a perfectly valid embedded image tag into a request and thereby causes the browser to request this "image" from the server. The server then returns a 414 error (URI too long).
Example without prepended relative path: http://jsfiddle.net/GGGH/27Tbb/2/
<img src="data:image/jpeg;base64,[stuff]>
Example with prepended relative path (won't display): http://jsfiddle.net/GGGH/NL85G/2/
<img src="mysite/portals/0/data:image/jpeg;base64,[stuff]>
Is there some configuration that I've missed? Prepending relative paths is OK for src="/somephysicalpath" but not for src="data:image...".
I ended up solving the problem prior to posting the question but wanted to add this knowledge to SO in case someone else encountered the same problem (has no one noticed this yet?). Also, perhaps, DNN or the community can improve upon my solution and that fix can make it into a new DNN build.
I've looked at the source code for RadEditor, RadEditorProvider and then finally the Html module itself. It seems the problem is in the EditHtml.ascx.cs, FormatContent() method which calls the HtmlTextController's ManageRelativePaths() method. It's that method that runs for all "src" tags (and "background") in the Html content string. It post-processes the Html string that comes out of the RadEditor to add in that relative path. This is not appropriate when editing an embedded Base64 image that was dragged to the editor.
In order to fix this, and still allow for the standard functionality originally intended by the manufacturer, the ManageRelativePaths() method in HtmlTextController.cs (called from DotNetNuke.Modules.Html's EditHtml.ascx.cs) needs to be modified to allow for an exception if the URI begins with a "data:image" string. Line 488 (as of version 7.1.0) is potentially appropriate. I added the following code (incrementing P as appropriate and positioned after the URI length was determined -- I'm sure there's a better way but this works fine):
// line 483, HtmlTextController.cs, DNN code included for positioning
while (P != -1)
{
sbBuff.Append(strHTML.Substring(S, P - S + tLen));
// added code
bool skipThisToken = false;
// check for a base64 image; the length guard avoids reading past the end of the string
if (P + tLen + 10 <= strHTML.Length && strHTML.Substring(P + tLen, 10) == "data:image")
skipThisToken = true;
// end added code - back to standard DNN
//keep characters left of URL
S = P + tLen;
//save startpos of URL
R = strHTML.IndexOf("\"", S);
//end of URL
if (R >= 0)
{
strURL = strHTML.Substring(S, R - S).ToLower();
}
else
{
strURL = strHTML.Substring(S).ToLower();
}
// added code to continue while loop after the integers were updated
if (skipThisToken)
{
P = strHTML.IndexOf(strToken + "=\"", S + strURL.Length + 2, StringComparison.InvariantCultureIgnoreCase);
continue;
}
// end added code -- the method continues from here (not reproduced)
This is probably not the best solution, as it's searching for a hard-coded value. Better would be functionality that allows developers to add tags later. (But, then again, EditHtml.ascx.cs and HtmlTextController both hard-code the two tags that they intend to post-process.)
So, after making this small change, recompiling the DotNetNuke.Modules.Html.dll and deploying, drag-and-drop should be functional. Obviously this increases the complexity of an upgrade -- it would be better if this were fixed by DNN themselves. I verified that as of v7.2.2 this issue still exists.
UPDATE: Fixed in DNN Community Version 7.4.0